
Amazon Redshift announces history mode for zero-ETL integrations to simplify historical data tracking and analysis


In the ever-evolving landscape of cloud computing and data management, AWS has consistently been at the forefront of innovation. One of the groundbreaking developments in recent years is zero-ETL integration, a set of fully managed integrations by AWS that minimizes the need to build extract, transform, and load (ETL) data pipelines. This post explores a brief history of zero-ETL, its significance for customers, and introduces an exciting new feature: history mode for Amazon Aurora PostgreSQL-Compatible Edition, Amazon Aurora MySQL-Compatible Edition, Amazon Relational Database Service (Amazon RDS) for MySQL, and Amazon DynamoDB zero-ETL integration with Amazon Redshift.

A brief history of zero-ETL integrations

The concept of zero-ETL integrations emerged as a response to the growing complexities and inefficiencies in traditional ETL processes. Traditional ETL processes are time-consuming and complex to develop, maintain, and scale. Although not all use cases can be replaced with zero-ETL, it simplifies replication and lets you apply transformations post-replication. This eliminates the need for additional ETL technology between the source database and Amazon Redshift. We at AWS recognized the need for a more streamlined approach to data integration, particularly between operational databases and cloud data warehouses. The journey of zero-ETL began in late 2022 when we launched the feature for Aurora MySQL with Amazon Redshift. This feature marked a pivotal moment in streamlining complex data workflows, enabling near real-time data replication and analysis while eliminating the need for ETL processes.

Building on the success of our first zero-ETL integration, we have made continuous strides in this space by working backward from our customers' needs and launching features like data filtering, auto and incremental refresh of materialized views, refresh interval, and more. Additionally, we increased the breadth of sources to include Aurora PostgreSQL, DynamoDB, and Amazon RDS for MySQL to Amazon Redshift integrations, solidifying our commitment to making it seamless for you to run analytics on your data. The introduction of zero-ETL was not just a technological advancement; it represented a paradigm shift in how organizations could approach their data strategies. By removing the need for intermediate data processing steps, we opened up new possibilities for near real-time analytics and decision-making.

Introducing history mode: A new frontier in data analysis

Zero-ETL has already simplified data integration, and we're excited to further enhance its capabilities by announcing a new feature that takes it a step further: history mode with Amazon Redshift. Using history mode with zero-ETL integrations, you can streamline your historical data analysis by maintaining full change data capture (CDC) from the source in Amazon Redshift. History mode lets you unlock the full potential of your data by seamlessly capturing and retaining historical versions of records across your zero-ETL data sources. You can perform advanced historical analysis, build look-back reports, perform trend analysis, and create slowly changing dimension (SCD) Type 2 tables on Amazon Redshift. This allows you to consolidate your core analytical assets and derive insights across multiple applications, gaining cost savings and operational efficiencies. History mode also enables organizations to comply with regulatory requirements for maintaining historical records, facilitating comprehensive data governance and informed decision-making.

Zero-ETL integrations provide a current view of data in near real time, meaning only the latest changes from source databases are retained on Amazon Redshift. With history mode, Amazon Redshift introduces a revolutionary approach to historical data analysis. You can now configure your zero-ETL integrations to track every version of your data in source tables directly in Amazon Redshift, along with a source timestamp on every record version indicating when each record was inserted, modified, or deleted. Because data changes are tracked and retained by Amazon Redshift, this can help you meet compliance requirements without having to maintain duplicate copies in data sources. In addition, you don't need to maintain and manage partitioned tables to keep older data intact as separate partitions to version data, or keep historical data in source databases.

In a data warehouse, the most common dimensional modeling technique is a star schema, where a fact table sits at the center surrounded by a number of associated dimension tables. A dimension is a structure that categorizes facts and measures in order to enable users to answer business questions. To illustrate, in a typical sales domain, customer, time, and product are dimensions, and sales transactions is a fact. An SCD is a data warehousing concept that contains relatively static data that can change slowly over a period of time. There are three major types of SCDs maintained in data warehousing: Type 1 (no history), Type 2 (full history), and Type 3 (limited history). CDC is a characteristic of a database that provides the ability to identify the data that changed between two database loads, so that an action can be performed on the changed data.
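To make the Type 2 pattern concrete, the following sketch (hypothetical table and values, not part of the demo dataset used later in this post) shows how a single customer's relocation yields two dimension rows: the old version is closed out with an end timestamp, and a new active version is inserted:

customer_id | city     | start_time          | end_time            | is_active
------------+----------+---------------------+---------------------+----------
C001        | Chicago  | 2024-01-10 09:15:00 | 2025-02-03 14:20:00 | false
C001        | New York | 2025-02-03 14:20:00 | (null)              | true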

In this post, we demonstrate how to enable history mode for tables in a zero-ETL integration and capture the full historical data changes as an SCD2 table.

Solution overview

In this use case, we explore how a fictional nationwide retail chain, AnyCompany, uses AWS services to gain valuable insights into its customer base. With multiple locations across the country, AnyCompany aims to enhance its understanding of customer behavior and improve its marketing strategies through two key initiatives:

  • Customer migration analysis – AnyCompany seeks to track and analyze customer relocation patterns, focusing on how geographical moves impact purchasing behavior. By monitoring these changes, the company can adapt its inventory, services, and local marketing efforts to better serve customers in their new locations.
  • Marketing campaign effectiveness – The retailer wants to evaluate the impact of targeted marketing campaigns based on customer demographics at the time of campaign execution. This analysis can help AnyCompany refine its marketing strategies, optimize resource allocation, and improve overall campaign performance.

By closely monitoring changes in customer profiles for both geographic movement and marketing responsiveness, AnyCompany is positioning itself to make more informed, data-driven decisions.

In this demonstration, we begin by loading a sample dataset into the source table, customer, in Aurora PostgreSQL-Compatible. To maintain historical records, we enable history mode on the customer table, which automatically tracks changes in Amazon Redshift.

When history mode is turned on, the following columns are automatically added to the target table, customer, in Amazon Redshift to keep track of changes in the source.

Column name          Data type   Description
_record_is_active    Boolean     Indicates whether a record in the target is currently active in the source. True means the record is active.
_record_create_time  Timestamp   Starting time (UTC) when the source record is active.
_record_delete_time  Timestamp   Ending time (UTC) when the source record is updated or deleted.
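With these columns in place, record state becomes a simple filter predicate. For example, the following query (a sketch; it assumes the replicated postgres.public.customer table created later in this post) returns only the currently active version of each record:

SELECT c_customer_id, ca_city, ca_state
FROM postgres.public.customer
WHERE _record_is_active = true;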

Next, we create a dimension table, customer_dim, in Amazon Redshift with an additional surrogate key column to show an example of creating an SCD table. To optimize query performance for different queries, some of which might analyze only active or inactive records while others might analyze records as of a certain date, we defined the sort key consisting of the _record_is_active, _record_create_time, and _record_delete_time attributes in the customer_dim table.
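Queries against the dimension table typically filter on these history mode columns, which is why they lead the sort key. For example, a point-in-time lookup (a sketch; the date is illustrative) restricts results to the record versions in effect at a given moment, letting Amazon Redshift skip blocks outside the predicate range:

SELECT c_customer_id, ca_city, ca_state
FROM customer_dim
WHERE _record_create_time <= '2025-01-01 00:00:00'
  AND (_record_is_active OR _record_delete_time > '2025-01-01 00:00:00');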

The following figure shows the schema of the source table in Aurora PostgreSQL-Compatible, and the target table and target customer dimension table in Amazon Redshift.

To streamline the data population process, we developed a stored procedure named SP_Customer_Type2_SCD(). This procedure is designed to populate incremental data into the customer_dim table from the replicated customer table. It handles various data changes, including updates, inserts, and deletes in the source table, implementing an SCD2 approach.

Prerequisites

Before you get started, complete the following steps:

  1. Configure your Aurora DB cluster and your Redshift data warehouse with the required parameters and permissions. For instructions, refer to Getting started with Aurora zero-ETL integrations with Amazon Redshift.
  2. Create an Aurora zero-ETL integration with Amazon Redshift.
  3. From an Amazon Elastic Compute Cloud (Amazon EC2) terminal or using AWS CloudShell, run the following commands to install psql, which you use to connect to the Aurora PostgreSQL cluster:
sudo dnf install postgresql15
psql --version

  4. Load the sample source data:
    • Download the TPC-DS sample dataset for the customer table onto the machine running psql.
    • From the EC2 terminal, run the following command to connect to the Aurora PostgreSQL DB using the default superuser postgres:
      psql -h <cluster_endpoint> -p 5432 -U postgres

    • Run the following SQL command to create the database zetl:
      create database zetl template template1;

    • Switch the connection to the newly created database:
      \c zetl

    • Create the customer table (the following example creates it in the public schema):
      CREATE TABLE customer(
          c_customer_id char(16) NOT NULL PRIMARY KEY,
          c_salutation char(10),
          c_first_name char(20),
          c_last_name char(30),
          c_preferred_cust_flag char(1),
          c_birth_day int4,
          c_birth_month int4,
          c_birth_year int4,
          c_birth_country varchar(20),
          c_login char(13),
          c_email_address char(50),
          ca_street_number char(10),
          ca_street_name varchar(60),
          ca_street_type char(15),
          ca_suite_number char(10),
          ca_city varchar(60),
          ca_county varchar(30),
          ca_state char(2),
          ca_zip char(10),
          ca_country varchar(20),
          ca_gmt_offset numeric(5, 2),
          ca_location_type char(20)
      );

    • Run the following command to load the customer data from the downloaded dataset after changing the highlighted location of the dataset to your directory path:
      copy customer from '/home/ec2-user/customer_sample_data.dat' WITH DELIMITER '|' CSV;

    • Run the following query to validate the successful creation of the table and loading of the sample data:
      SELECT table_catalog, table_schema, table_name, n_live_tup AS row_count
      FROM information_schema.tables JOIN pg_stat_user_tables ON table_name = relname
      WHERE table_type = 'BASE TABLE'
      ORDER BY row_count DESC;

The SQL output should be as follows:

table_catalog | table_schema | table_name | row_count
---------------+--------------+------------+-----------
zetl          | public       | customer   |   1200585
(1 row)

Create a target database in Amazon Redshift

To replicate data from your source into Amazon Redshift, you must create a target database from your integration in Amazon Redshift. For this post, we have already created a source database called zetl in Aurora PostgreSQL-Compatible as part of the prerequisites. Complete the following steps to create the target database:

  1. On the Amazon Redshift console, choose Query editor v2 in the navigation pane.
  2. Run the following commands to create a database called postgres in Amazon Redshift using the zero-ETL integration_id, with history mode turned on.
-- Amazon Redshift SQL commands to create the database
SELECT integration_id FROM svv_integration; -- copy this result for use in the next SQL statement
CREATE DATABASE "postgres" FROM INTEGRATION '<integration_id>' DATABASE "zetl" SET HISTORY_MODE = TRUE;

Turning on history mode at the time of target database creation in Amazon Redshift enables history mode for existing tables and for tables created in the future.

  3. Run the following query to validate the successful replication of the initial data from the source into Amazon Redshift:
select is_history_mode, table_name, table_state, * from svv_integration_table_state;

The customer table should show table_state as Synced, with is_history_mode as true.

Enable history mode for existing zero-ETL integrations

History mode can be enabled for your existing zero-ETL integrations using either the Amazon Redshift console or SQL commands. Based on your use case, you can turn on history mode at the database, schema, or table level. To use the Amazon Redshift console, complete the following steps:

  1. On the Amazon Redshift console, choose Zero-ETL integrations in the navigation pane.
  2. Choose your desired integration.
  3. Choose Manage history mode.

On this page, you can either enable or disable history mode for all tables or a subset of tables.

  4. Select Manage history mode for individual tables and select Turn on for history mode for the customer table.
  5. Choose Save changes.
  6. To confirm the changes, choose Table statistics and verify that History mode is On for the customer table.
  4. Optionally, you’ll be able to run the next SQL command in Amazon Redshift to allow historical past mode for the buyer desk:
ALTER DATABASE "postgres" INTEGRATION SET HISTORY_MODE = TRUE FOR TABLE public.buyer;

  1. Optionally, you’ll be able to allow historical past mode for all present and tables created sooner or later within the database:
ALTER DATABASE "postgres" INTEGRATION SET HISTORY_MODE = TRUE FOR ALL TABLES;

  1. Optionally, you’ll be able to allow historical past mode for all present and tables created sooner or later in a number of schemas. The next question allows historical past mode for all present and tables created sooner or later for the public schema:
ALTER DATABASE "postgres" INTEGRATION SET HISTORY_MODE = TRUE FOR ALL TABLES IN SCHEMA public;

  10. Run the following query to validate that the customer table has been successfully changed to history mode, with the is_history_mode column showing true, so that it can begin tracking every version (including updates and deletes) of all records changed in the source:
select is_history_mode, table_name, table_state, * from svv_integration_table_state;

Initially, the table will be in the ResyncInitiated state before changing to Synced.

  11. Run the following query in the zetl database of Aurora PostgreSQL-Compatible to modify a source record and observe the behavior of history mode in the Amazon Redshift target:
UPDATE customer
SET
    ca_suite_number = 'Suite 100',
    ca_street_number = '500',
    ca_street_name = 'Main',
    ca_street_type = 'St.',
    ca_city = 'New York',
    ca_county = 'Manhattan',
    ca_state = 'NY',
    ca_zip = '10001'
WHERE c_customer_id = 'AAAAAAAAAAAKNAAA';

  12. Now run the following query in the postgres database of Amazon Redshift to see all versions of the same record:
SELECT   
    c_customer_id,
    ca_street_number,
    ca_street_name,
    ca_suite_number,
    ca_city,
    ca_county,
    ca_state,
    ca_zip,
    _record_is_active,
    _record_create_time,
    _record_delete_time
FROM postgres.public.customer
WHERE c_customer_id = 'AAAAAAAAAAAKNAAA';

The zero-ETL integration with history mode has inactivated the old record by setting its _record_is_active column value to false, and has created a new record with _record_is_active as true. You can also see how it maintains the _record_create_time and _record_delete_time column values for both records. The inactive record has a delete timestamp that matches the active record's create timestamp.
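To observe this behavior at the table level, you can count record versions by activity flag; the inactive count grows as source records are updated or deleted. The following is a quick sanity-check query (not part of the original walkthrough):

SELECT _record_is_active, COUNT(*) AS record_versions
FROM postgres.public.customer
GROUP BY _record_is_active;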

Load incremental data in an SCD2 table

Complete the following steps to create an SCD2 table and implement an incremental data load process in a regular database of Amazon Redshift, in this case dev:

  1. Create an empty customer SCD2 table called customer_dim with SCD fields. The table also has DISTSTYLE AUTO and the sort key columns _record_is_active, _record_create_time, and _record_delete_time. When you define a sort key on a table, Amazon Redshift can skip reading entire blocks of data for that column. It can do so because it tracks the minimum and maximum column values stored on each block and can skip blocks that don't apply to the predicate range.
CREATE TABLE dev.public.customer_dim (
    c_customer_sk bigint NOT NULL DEFAULT 0 ENCODE raw distkey,
    c_customer_id character varying(19) DEFAULT ''::character varying ENCODE lzo,
    c_salutation character varying(12) ENCODE bytedict,
    c_first_name character varying(24) ENCODE lzo,
    c_last_name character varying(36) ENCODE lzo,
    c_preferred_cust_flag character varying(1) ENCODE lzo,
    c_birth_day integer ENCODE az64,
    c_birth_month integer ENCODE az64,
    c_birth_year integer ENCODE az64,
    c_birth_country character varying(24) ENCODE bytedict,
    c_login character varying(15) ENCODE lzo,
    c_email_address character varying(60) ENCODE lzo,
    ca_street_number character varying(12) ENCODE lzo,
    ca_street_name character varying(72) ENCODE lzo,
    ca_street_type character varying(18) ENCODE bytedict,
    ca_suite_number character varying(12) ENCODE bytedict,
    ca_city character varying(72) ENCODE lzo,
    ca_county character varying(36) ENCODE lzo,
    ca_state character varying(2) ENCODE lzo,
    ca_zip character varying(12) ENCODE lzo,
    ca_country character varying(24) ENCODE lzo,
    ca_gmt_offset numeric(5, 2) ENCODE az64,
    ca_location_type character varying(24) ENCODE bytedict,
    _record_is_active boolean ENCODE raw,
    _record_create_time timestamp without time zone ENCODE az64,
    _record_delete_time timestamp without time zone ENCODE az64,
    PRIMARY KEY (c_customer_sk)
) SORTKEY (
    _record_is_active,
    _record_create_time,
    _record_delete_time
);

Next, you create a stored procedure called SP_Customer_Type2_SCD() to populate incremental data in the customer_dim SCD2 table created in the preceding step. The stored procedure contains the following components:

    • First, it fetches the max _record_create_time and max _record_delete_time for each customer_id.
    • Then, it compares the output of the preceding step with the ongoing zero-ETL integration replicated table, looking for records created after the max creation time in the dimension table, or records in the replicated table with a _record_delete_time after the max _record_delete_time in the dimension table, for each customer_id.
    • The output of the preceding step captures the changed records between the replicated customer table and the target customer_dim dimension table. The interim data is staged to a customer_stg table, which will be merged with the target table.
    • During the merge process, records that need to be deleted are marked with _record_delete_time and _record_is_active set to false, while newly created records are inserted into the target table customer_dim with _record_is_active as true.
  2. Create the stored procedure with the following code:
CREATE OR REPLACE PROCEDURE public.sp_customer_type2_scd()
LANGUAGE plpgsql
AS $$
    BEGIN

    DROP TABLE IF EXISTS cust_latest;

    -- Create temp table with latest record timestamps
    CREATE TEMP TABLE cust_latest DISTKEY (c_customer_id)
    AS
        SELECT
            c_customer_id,
            max(_record_create_time) AS _record_create_time,
            max(_record_delete_time) AS _record_delete_time
        FROM customer_dim
        GROUP BY c_customer_id;

    DROP TABLE IF EXISTS customer_stg;

    -- Identify and stage changed records
    CREATE TEMP TABLE customer_stg
    AS
        SELECT
            ABS(fnv_hash(cust.c_customer_id)) AS customer_sk,
            cust.*
        FROM
            postgres.public.customer cust
        LEFT OUTER JOIN cust_latest ON cust.c_customer_id = cust_latest.c_customer_id
        WHERE (cust._record_create_time > NVL(cust_latest._record_create_time, '1099-01-01 01:01:01') AND cust._record_is_active IS TRUE)
           OR (cust._record_delete_time > NVL(cust_latest._record_delete_time, '1099-01-01 01:01:01') AND cust._record_is_active IS FALSE);

    -- Merge changes to customer dimension table
    MERGE INTO public.customer_dim
    USING customer_stg stg
    ON customer_dim.c_customer_id = stg.c_customer_id
        AND customer_dim._record_is_active = TRUE
        AND stg._record_is_active = FALSE
    WHEN MATCHED THEN
        UPDATE
        SET
            _record_is_active = stg._record_is_active,
            _record_create_time = stg._record_create_time,
            _record_delete_time = stg._record_delete_time
    WHEN NOT MATCHED THEN
        INSERT
        VALUES
            (
                stg.customer_sk,
                stg.c_customer_id,
                stg.c_salutation,
                stg.c_first_name,
                stg.c_last_name,
                stg.c_preferred_cust_flag,
                stg.c_birth_day,
                stg.c_birth_month,
                stg.c_birth_year,
                stg.c_birth_country,
                stg.c_login,
                stg.c_email_address,
                stg.ca_street_number,
                stg.ca_street_name,
                stg.ca_street_type,
                stg.ca_suite_number,
                stg.ca_city,
                stg.ca_county,
                stg.ca_state,
                stg.ca_zip,
                stg.ca_country,
                stg.ca_gmt_offset,
                stg.ca_location_type,
                stg._record_is_active,
                stg._record_create_time,
                stg._record_delete_time
            );

    END;
    $$;

  3. Run and schedule the stored procedure to load the initial and ongoing incremental data into the customer_dim SCD2 table:
CALL SP_Customer_Type2_SCD();

  4. Validate the data in the customer_dim table for the same customer with a changed address:
SELECT
    c_customer_id,
    ca_street_number,
    ca_street_name,
    ca_suite_number,
    ca_city,
    ca_county,
    ca_state,
    ca_zip,
    _record_is_active,
    _record_create_time,
    _record_delete_time
FROM customer_dim
WHERE c_customer_id = 'AAAAAAAAAAAKNAAA';


You have successfully implemented an incremental load strategy for the customer SCD2 table. Going forward, all changes to customer records will be tracked and maintained in this customer dimension table by running the stored procedure. This enables you to analyze customer data at a desired point in time for different use cases, for example, performing customer migration analysis to see how geographical moves impact purchasing behavior, or marketing campaign effectiveness analysis to assess the impact of targeted marketing campaigns based on customer demographics at the time of campaign execution, as sketched in the query that follows.
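As a sketch of the customer migration analysis (the dates are illustrative), you can compare each customer's state as of two points in time by querying customer_dim twice and joining the results:

WITH as_of_2024 AS (
    SELECT c_customer_id, ca_state
    FROM customer_dim
    WHERE _record_create_time <= '2024-01-01'
      AND (_record_is_active OR _record_delete_time > '2024-01-01')
),
as_of_2025 AS (
    SELECT c_customer_id, ca_state
    FROM customer_dim
    WHERE _record_create_time <= '2025-01-01'
      AND (_record_is_active OR _record_delete_time > '2025-01-01')
)
SELECT o.c_customer_id, o.ca_state AS old_state, n.ca_state AS new_state
FROM as_of_2024 o
JOIN as_of_2025 n ON o.c_customer_id = n.c_customer_id
WHERE o.ca_state <> n.ca_state;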

Industry use cases for history mode

The following are other industry use cases enabled by history mode between operational data stores and Amazon Redshift:

  • Financial auditing or regulatory compliance – Track changes in financial records over time to support compliance and audit requirements. History mode allows auditors to reconstruct the state of financial data at any point in time, which is crucial for investigations and regulatory reporting.
  • Customer journey analysis – Understand how customer data evolves to gain insights into behavior patterns and preferences. Marketers can analyze how customer profiles change over time, informing personalization strategies and lifetime value calculations.
  • Supply chain optimization – Analyze historical inventory and order data to identify trends and optimize stock levels. Supply chain managers can review how demand patterns have shifted over time, improving forecasting accuracy.
  • HR analytics – Track employee data changes over time for better workforce planning and performance analysis. HR professionals can analyze career progression, salary changes, and skill development trends across the organization.
  • Machine learning model auditing – Data scientists can use historical data to train models, compare predictions against actuals to improve accuracy, and help explain model behavior and identify potential biases over time.
  • Hospitality and airline industry use cases – For example:
    • Customer service – Access historical reservation data to swiftly address customer queries, improving service quality and customer satisfaction.
    • Crew scheduling – Track crew schedule changes to help comply with union contracts, maintaining positive labor relations and optimizing workforce management.
    • Data science applications – Use historical data to train models on multiple scenarios from different time periods. Compare predictions against actuals to improve model accuracy for key operations such as airport gate management, flight prioritization, and crew scheduling optimization.

Best practices

If your requirement is to separate active and inactive records, you can use _record_is_active as the first sort key. For other patterns where you want to analyze records as of a specific date in the past, regardless of whether they are active or inactive, _record_create_time and _record_delete_time can be added as sort keys instead.
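If an existing table's sort key doesn't match your dominant access pattern, you can change it in place with ALTER TABLE. For example, a sketch (assuming the customer_dim table from this post and a workload dominated by as-of-date analysis):

ALTER TABLE customer_dim ALTER SORTKEY (_record_create_time, _record_delete_time);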

History mode retains record versions, which can increase the table size in Amazon Redshift and might impact query performance. Therefore, periodically perform DML deletes for old record versions (delete records beyond a certain timeframe if they are no longer needed for analysis). When executing these deletions, maintain data integrity by deleting across all related tables. Vacuuming also becomes necessary when you perform DML deletes on records whose versioning is no longer required; Amazon Redshift auto vacuum delete is more efficient when operating on bulk deletes. You can monitor vacuum progress using the SYS_VACUUM_HISTORY table.
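The following is a sketch of that pruning pattern (the 7-year retention window is illustrative; confirm it against your own compliance requirements before deleting history):

-- Delete inactive record versions older than the retention window
DELETE FROM postgres.public.customer
WHERE _record_is_active = false
  AND _record_delete_time < DATEADD(year, -7, GETDATE());

-- Check recent vacuum activity after the bulk delete
SELECT * FROM sys_vacuum_history LIMIT 10;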

Clean up

Complete the following steps to clean up your resources:

  1. Delete the Aurora PostgreSQL cluster.
  2. Delete the Redshift cluster.
  3. Delete the EC2 instance.

Conclusion

Zero-ETL integrations have already made significant strides in simplifying data integration and enabling near real-time analytics. With the addition of history mode, AWS continues to innovate, providing you with even more powerful tools to derive value from your data.

As businesses increasingly rely on data-driven decision-making, zero-ETL with history mode will be crucial in maintaining a competitive edge in the digital economy. These advancements not only streamline data processes but also open up new avenues for analysis and insight generation.

To learn more about zero-ETL integration with history mode, refer to Zero-ETL integrations and Limitations. Get started with zero-ETL on AWS by creating a free account today!


About the Authors

Raks Khare is a Senior Analytics Specialist Solutions Architect at AWS based out of Pennsylvania. He helps customers across varied industries and regions architect data analytics solutions at scale on the AWS platform. Outside of work, he likes exploring new travel and food destinations and spending quality time with his family.

Jyoti Aggarwal is a Product Management Lead for AWS zero-ETL. She leads the product and business strategy, including driving initiatives around performance, customer experience, and security. She brings expertise in cloud compute, data pipelines, analytics, artificial intelligence (AI), and data services including databases, data warehouses, and data lakes.

Gopal Paliwal is a Principal Engineer for Amazon Redshift, leading the software development of ZeroETL initiatives for Amazon Redshift.

Harman Nagra is a Principal Solutions Architect at AWS, based in San Francisco. He works with global financial services organizations to design, develop, and optimize their workloads on AWS.

Sumanth Punyamurthula is a Senior Data and Analytics Architect at Amazon Web Services with more than 20 years of experience in leading large analytical initiatives, including analytics, data warehousing, data lakes, data governance, security, and cloud infrastructure across the travel, hospitality, financial, and healthcare industries.
