Slowly Changing Dimension (SCD) Type 3 in Informatica PowerCenter

What is a Slowly Changing Dimension?

A Slowly Changing Dimension (SCD) is a dimension that stores and manages both current and historical data over time in a data warehouse. Maintaining SCDs is one of the most critical ETL tasks, since it is how the history of dimension records is tracked. There are three common types: Type 1, Type 2, and Type 3. In this blog, we will look at SCD Type 3.

What is SCD Type 3?

A Type 3 SCD stores two versions of the value for each selected attribute. Each record holds the attribute’s previous value alongside its current value. When the value of a tracked attribute changes, the current value is moved into the “previous” column and the new value becomes the current value.

Now let us start with how we can implement SCD Type 3 in Informatica PowerCenter.

In the sample mapping below, we track changes to the Emp_Branch column of employee data. If the value of Emp_Branch changes, the new value becomes the current value in the EMP_BRANCH1 column, while the old value is preserved in the EMP_BRANCH2 column.
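For reference, here is a minimal SQL sketch of what the source table and the Type 3 target could look like. The table names EMP_SRC and EMP_DIM are hypothetical; only the column names come from this example:

```sql
-- Hypothetical source table: carries only the latest branch value.
CREATE TABLE EMP_SRC (
    EMP_ID     INTEGER PRIMARY KEY,
    EMP_BRANCH VARCHAR(50)
);

-- Hypothetical Type 3 target: one column pair for the tracked attribute,
-- EMP_BRANCH1 (current value) and EMP_BRANCH2 (previous value).
CREATE TABLE EMP_DIM (
    EMP_ID      INTEGER PRIMARY KEY,
    EMP_BRANCH1 VARCHAR(50),          -- current branch
    EMP_BRANCH2 VARCHAR(50)           -- previous branch, NULL until a change
);
```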

Source and Target

1. Drag and drop the required source and target instances onto the mapping workspace.

[Screenshot: source and target instances in the mapping workspace]

(Note: we need two target instances here, one for inserting data and another for updating data.)

[Screenshot: the two target instances, one for insert and one for update]

2. Add a lookup to the mapping to check whether the incoming row already exists in the target. Select the Lookup icon and click on the mapping workspace; a window will open. Choose Target as the location and select the target table you want to look up against.

[Screenshot: selecting the target table for the lookup]

LOOKUP TRANSFORMATION

3. A Lookup transformation will be created with the same structure as the target instance. Drag and drop the required ports/columns from the Source Qualifier into the Lookup transformation.

[Screenshot: ports dragged from the Source Qualifier into the Lookup transformation]

4. Double-click the Lookup transformation and go to the Condition tab. Select the condition columns (here, the key used to match incoming rows against the target).

[Screenshot: the Condition tab of the Lookup transformation]
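Conceptually, the lookup behaves like a left outer join from the source to the target. A minimal SQL sketch, using the hypothetical table names from earlier:

```sql
-- Each incoming source row is matched against the dimension on EMP_ID.
-- A NULL DIM_EMP_ID after the join means the row does not exist yet.
SELECT s.EMP_ID,
       s.EMP_BRANCH,
       d.EMP_ID      AS DIM_EMP_ID,   -- NULL when no match is found
       d.EMP_BRANCH1 AS DIM_BRANCH1   -- existing current value, if any
FROM   EMP_SRC s
       LEFT OUTER JOIN EMP_DIM d
              ON d.EMP_ID = s.EMP_ID;
```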

ROUTER TRANSFORMATION

5. Add a Router transformation and create two groups (Insert Group and Update Group). Drag and drop all the columns coming from the source, plus the unique key column returned by the lookup. The group filter conditions typically test whether the lookup returned a match: the Insert group takes rows where the lookup key is NULL, and the Update group takes rows where it is not (see the sketch below).

[Screenshot: Router transformation with Insert and Update groups]
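In SQL terms, the two router groups split the joined rows roughly as follows. This is only a sketch: src_with_lookup stands for the joined result from the previous snippet, and in the actual Router the filters are typically ISNULL / NOT ISNULL tests on the lookup's returned key:

```sql
-- Insert group: the lookup found no match, so this is a brand-new record.
SELECT EMP_ID, EMP_BRANCH
FROM   src_with_lookup
WHERE  DIM_EMP_ID IS NULL;

-- Update group: the lookup found a match, so route the row onward to the
-- Update Strategy to shift the branch history.
SELECT EMP_ID, EMP_BRANCH, DIM_BRANCH1
FROM   src_with_lookup
WHERE  DIM_EMP_ID IS NOT NULL;
```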

6. From the Insert group of the Router, map the columns to the target definition used for inserts, as shown below. Since these are new records, they go in as inserts.

[Screenshot: Insert group columns mapped to the insert target instance]

7. Drag and drop the columns from the Update group of the Router to an Update Strategy transformation, mapping the incoming ports.

[Screenshot: Update group columns mapped to the Update Strategy transformation]

8. Double-click the Update Strategy transformation and go to the Properties tab. Set the Update Strategy Expression to DD_UPDATE, since these rows update existing records and shift the history.

[Screenshot: Update Strategy Expression set to DD_UPDATE]
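The update flow flagged with DD_UPDATE then has roughly the effect of this SQL sketch: the old current value shifts into the history column and the incoming value becomes the new current value (table and bind-variable names are hypothetical):

```sql
-- Type 3 update: preserve the old current value in EMP_BRANCH2, then
-- overwrite EMP_BRANCH1 with the new value arriving from the source.
UPDATE EMP_DIM
SET    EMP_BRANCH2 = EMP_BRANCH1,     -- old current value becomes history
       EMP_BRANCH1 = :new_branch      -- incoming EMP_BRANCH from the source
WHERE  EMP_ID = :emp_id;
```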

Mapping

9. Save and validate the mapping.

[Screenshot: the completed, validated mapping]

WORKFLOW

Now create a workflow and a session for the mapping built above.

10. Connect to the Workflow Manager. From the menu, click Tools –> Workflows –> Create.

[Screenshot: creating a workflow in the Workflow Manager]

11. To create a session, click the Session icon (highlighted by the red box in the screenshot below). A window will pop up listing the mappings available in that folder. Select the mapping for which you want to create the session and click OK.

[Screenshot: the Session icon and the mapping selection window]

12. Now link your session to the Start task in the workspace.

[Screenshot: session linked to the Start task]

13. Double-click the session and go to the Mapping tab.

[Screenshot: the Mapping tab of the session]

14. Go to the Sources folder and select the Source Qualifier instance to define its connection. Click the down-arrow button highlighted below and select the required connection for the instance.

[Screenshot: assigning a connection to the Source Qualifier instance]

15. Go to the Targets folder and select the target instance for the update flow to define its connection. Click the down-arrow button highlighted below and select the required connection. Under Properties, check “Update as Update” only, since this flow updates existing records.

[Screenshot: connection and “Update as Update” property for the update target instance]

16. Go to the Targets folder and select the target instance for the insert flow to define its connection. Click the down-arrow button highlighted below and select the required connection. Under Properties, check “Insert” only, since this flow inserts new records.

[Screenshot: connection and “Insert” property for the insert target instance]

17. Go to the Transformations folder and select the Lookup instance to define its connection. Click the down-arrow button highlighted below and select the required connection for the instance.

[Screenshot: assigning a connection to the Lookup instance]

18. Click Apply and then OK.

19. Save and validate the workflow. Now you can run the job.

OUTPUT

Below is a screenshot of the source table; it contains three records.

[Screenshot: source table with three records]

Below is a screenshot of the target table after running the job successfully.

EMP_BRANCH1 holds the current (updated) value and EMP_BRANCH2 holds the historical value.

[Screenshot: target table after the job run]

Here we can verify that a new record was inserted for EMP_ID ‘104’, and that EMP_BRANCH of EMP_ID ‘101’ was updated to ‘BHARAT’, with its previous branch moved into EMP_BRANCH2.
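If you prefer to verify the result with a query rather than a screenshot, a sketch like the following (against the hypothetical EMP_DIM table) should show EMP_ID 104 as a fresh insert with an empty history column, and EMP_ID 101 with ‘BHARAT’ as the current branch and its earlier branch pushed into EMP_BRANCH2:

```sql
SELECT EMP_ID,
       EMP_BRANCH1 AS CURRENT_BRANCH,
       EMP_BRANCH2 AS PREVIOUS_BRANCH
FROM   EMP_DIM
ORDER  BY EMP_ID;
```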

This completes the implementation of a Slowly Changing Dimension (SCD) Type 3 mapping.
Hope you enjoyed reading this blog and found it helpful.

Conclusion

I hope this 4-minute read has helped anyone curious about SCD Type 3 in Informatica get a broader view of how to build an SCD Type 3 mapping. Following these steps, you can build and execute the flow end to end.

Happy Reading!

Combining the Data in Denodo Platform

Denodo: A data virtualization platform

Data virtualization is a core technology that enables modern data integration and data management solutions.

Core capabilities of data virtualization:

  • Connect, introspect, and govern any data source with zero data replication: Quickly connect disparate structured and unstructured sources and catalog your entire data ecosystem. Data stays in the sources and is accessed on demand, with no need to create another copy.
  • Combine and integrate into business data views: Build data models that suit the needs of the consumer, even across multiple sources, and hide the complexity of your back-end technologies from the end users.
  • Consume and secure data views in multiple formats: The virtual model can be secured and consumed using standard SQL and other formats like REST, SOAP, and OData (see the example query below).

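As a minimal sketch of the consumption side: once a view is published, a client can query it over JDBC/ODBC with ordinary SQL. The view and column names below are hypothetical:

```sql
-- Ordinary SQL against a published Denodo view; the client needs no
-- knowledge of the underlying sources (databases, APIs, files).
SELECT customer_name, order_total
FROM   customer_orders_view
WHERE  order_total > 1000;
```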

Combining the data

With the data sources and base views in place, we can start combining the data. The first step is to build a denormalized, standardized view of each data source so that we end up with a set of base business entities.

Step 1:

  • Right-click the “bv_wo_customer” folder and select New > Join.

[Screenshot: the New > Join option]

Step 2:

The customer view (“bv_wo_customer”) is placed onto the workspace.

Step 3:

Drag the “bv_wo_orders” view onto the workspace.

[Screenshot: both views on the workspace]

 

Step 4:

Connect the “customer_code_internal” column of the customer view to the “customer_code” column of the orders view.

[Screenshot: the join condition between the two views]

Step 5:

In the Output tab, remove the “customer_code_internal” and “customer_code” columns.

[Screenshot: the Output tab with the join columns removed]

Step 6:

Click Save.

[Screenshot: the saved join view]

Step 7:

With these steps we have created a new view that contains the standardized code for each customer, instead of the inconsistent ones we originally had. To check that everything is working properly, click Execute and run the view again:

[Screenshot: the execution results of the combined view]

Here we have successfully combined the customer and order data in Denodo.
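Under the hood, the join view built through the wizard corresponds roughly to the SQL below. Only the join columns are named in the steps above, so the selected output columns here are illustrative:

```sql
-- Rough equivalent of the wizard-built join: combine customers and
-- orders on the standardized code, dropping the raw join columns from
-- the output as done in Step 5.
SELECT c.customer_name,   -- illustrative output column
       o.order_id,        -- illustrative output column
       o.order_total      -- illustrative output column
FROM   bv_wo_customer c
       INNER JOIN bv_wo_orders o
              ON c.customer_code_internal = o.customer_code;
```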

Thanks for reading! I hope you enjoyed this blog.
