
IDA or Erwin Data Modeller: Which Is the Tool for You?

Across the globe, people find things, ideas, and concepts easier to understand when they are presented pictorially. A pictorial diagram revolves around the key areas of a concept without deviating from the core, which makes it easier to grasp than the same concept explained in an article or orally.

Businesses own colossal amounts of data and make key decisions based on it. It is the responsibility of IT service providers to create a data design solution that is easy to understand and that addresses the following:

  • Data flow in the database
  • Relationship between the data in the database
  • Data Consistency in the database
  • Data quality

All of the above help IT keep the data design aligned with the business requirements.

Data modeling is the process of creating a blueprint, a pictorial diagram that depicts how the tables in a database are related and organized. It makes the data design easily understandable and helps service providers keep it in sync with business requirements and expectations.

A data model usually represents the nature of the data, the business rules governing it, and how it will be organized in the database. A data model comprises two parts: a logical data model and a physical data model. Data models are created using either a top-down or a bottom-up approach. In the top-down approach, the model is created by understanding and analyzing the business requirements. In the bottom-up approach, the model is created from existing databases that have no data model.

The IT market offers several data modeling tools that help you design a database by quickly capturing business data and its relationships and enforcing data integrity with business rules. In this blog, we compare Erwin Data Modeller and IBM InfoSphere Data Architect, and then pick out a prominent feature of IBM InfoSphere Data Architect over Erwin Data Modeller. The features of the two tools are listed in tabular format for easier comparison; similarities are highlighted in green and differences in yellow.

Table1

Table2

One of the most prominent features of IBM InfoSphere Data Architect is its multidimensional modeling functionality. In the next section, we will learn how to perform forward engineering for a multidimensional model using IBM InfoSphere Data Architect.

What is forward engineering?

In forward engineering, data models and databases are built from scratch, without a preexisting model to work from. This type of modeling is best suited to new systems, or to modeling new business processes that are not yet addressed by your current systems.

In this blog, I will walk through the step-by-step procedure for performing forward engineering on a simple multidimensional model built from the following tables:

Tables

  • Sales – Fact
  • Time – Dimension
  • Branch – Dimension
  • Item – Dimension
  • Location – Dimension
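To make the fact/dimension structure concrete, here is a minimal sketch of the star schema these tables imply. The column names are illustrative assumptions, not taken from the actual model, and SQLite stands in for a real warehouse database:

```python
import sqlite3

# Illustrative star schema for the model above. Column names are assumed;
# SQLite stands in for Oracle purely for demonstration.
DDL = """
CREATE TABLE Time     (time_id     INTEGER PRIMARY KEY, day TEXT, month TEXT, year INTEGER);
CREATE TABLE Branch   (branch_id   INTEGER PRIMARY KEY, branch_name TEXT);
CREATE TABLE Item     (item_id     INTEGER PRIMARY KEY, item_name TEXT, brand TEXT);
CREATE TABLE Location (location_id INTEGER PRIMARY KEY, city TEXT, country TEXT);

-- Fact table: one foreign key per dimension, plus numeric measures.
CREATE TABLE Sales (
    time_id      INTEGER REFERENCES Time(time_id),
    branch_id    INTEGER REFERENCES Branch(branch_id),
    item_id      INTEGER REFERENCES Item(item_id),
    location_id  INTEGER REFERENCES Location(location_id),
    units_sold   INTEGER,
    sales_amount REAL
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(DDL)
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]
print(tables)  # the fact table plus the four dimension tables
```

The fact table holds the measurable events (sales), while each dimension table describes one axis along which those events can be analyzed.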

Step 1: A simple logical data model is created for tracking the sales transactions of a retail company, where the dimensions and facts are represented as entities.

P1

Step 2: To convert the entities into dimensions and facts automatically, use the Discover Facts and Dimensions option. Right-click the data model and, in the context menu that appears, click Discover Facts and Dimensions.

P2

A dialog box will ask whether you want hierarchies to be generated for entities of type Dimension; choose No. After discovery completes, the appropriate dimensional properties will have been applied to the entities. The result is a normalized dimensional logical model.

P3

Step 3: Having created the normalized dimensional logical data model, you need to de-normalize it to suit your business needs. Click the logical data model node, then choose Data > Transform > New Configuration.

P4

In the New Transformation Configuration dialog, enter LDM2DLDM as the configuration name, choose the Logical Data Model to Dimensional-Logical Data Model option, and click Next.

P5

Choose the input logical model and the output folder as shown in the screenshot, then click Next.

P6

Choose the following options on the next screen:

  • Create a star schema.
  • Create the date and time dimension if applicable.

Also enable the Generate traceability option.

P7

Click Finish. In the transformation configuration window that opens, click Run. A new file, Package1_D.ldm, is created. This is a de-normalized version of your logical model.

Step 4: In the de-normalized dimensional logical model, all numeric columns in the fact entity are classified as Measures.

P8

Step 5: After creating the de-normalized data model, it can be viewed in a dimension-specific diagram. Right-click the Diagrams node under the package Package1_D.ldm and click New Dimensional Blank Diagram.

P9

A new diagram (Diagram1) will be created, and an empty diagram editor opens on the right. Select all the entities in your de-normalized dimensional model and drag them into the diagram editor. The dimensional data model can now be viewed as a star schema.

P10
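The point of the star layout is that the fact table can be joined to each dimension and its measures aggregated along any dimension attribute. A minimal, self-contained sketch of such a star join (SQLite stands in for the target database, and all table and column names here are illustrative, not from the actual model):

```python
import sqlite3

# Tiny stand-in star schema: one fact table (Sales) joined to two
# dimensions (Item, Time). All names and values are made up for illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Item  (item_id INTEGER PRIMARY KEY, item_name TEXT);
CREATE TABLE Time  (time_id INTEGER PRIMARY KEY, year INTEGER);
CREATE TABLE Sales (item_id INTEGER, time_id INTEGER, sales_amount REAL);
INSERT INTO Item  VALUES (1, 'Laptop'), (2, 'Phone');
INSERT INTO Time  VALUES (10, 2020), (11, 2021);
INSERT INTO Sales VALUES (1, 10, 500.0), (1, 11, 700.0), (2, 11, 300.0);
""")

# Star join: aggregate the measure (sales_amount) by dimension attributes.
rows = conn.execute("""
    SELECT i.item_name, t.year, SUM(s.sales_amount)
    FROM Sales s
    JOIN Item i ON i.item_id = s.item_id
    JOIN Time t ON t.time_id = s.time_id
    GROUP BY i.item_name, t.year
    ORDER BY i.item_name, t.year
""").fetchall()
print(rows)  # [('Laptop', 2020, 500.0), ('Laptop', 2021, 700.0), ('Phone', 2021, 300.0)]
```

Every analytical query against the star schema follows this shape: filter and group through the dimensions, aggregate the measures in the fact table.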

Step 6: In this section, we transform the de-normalized dimensional logical data model into a dimensional physical data model. Right-click the de-normalized dimensional logical data model node and click Transform to Physical Data Model in the context menu.

P11

In the Transform To Physical Data Model wizard, select Create new model and then click Next.

P12

Keeping the destination folder and file name at their defaults, we will transform the model to Oracle. Select Oracle as the database and 11 as the version, then click Next.

P13

Keeping the other options at their defaults as shown in the screenshot below, select Generate traceability (which can be used to trace objects later), update the schema name, and click Next.

P14

On the Output page, the transformation status is displayed. Click Finish to generate the dimensional physical data model (Package1_D.dbm).

P15

Step 7: Next, analyze the dimensional physical data model. Right-click the schema in the transformed dimensional physical data model in the Data Project Explorer, then click Analyze Model.

P16

In the Analyze Model wizard, all analysis rules under the Physical Data Model category are selected by default. Seven rules have been added in InfoSphere Data Architect for dimensional physical data model validation. Click Finish to run the model analysis.

P17

Step 8: DDL can be generated from the dimensional physical data model. Right-click the schema node in the Data Project Explorer, then click Generate DDL in the context menu.

P18

Customize the options to generate DDL as per your requirement and then click Next.

P19

Customize the objects to generate DDL as per your requirement and then click Next.

P20

The next page shows a preview of the DDL along with the folder and destination information.

P21

Click Next once the above information is filled in. The summary page of the Generate DDL wizard is then displayed. Click Finish.

P22

A DDL file will now be generated under the Sample_test folder. Once the DDL script has been verified, it can be run in the database. A screenshot of the tables created is given below.

P23
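Instead of running the generated script manually, it can also be executed programmatically through a database driver. A minimal sketch, with SQLite standing in for Oracle and an assumed script file name and content (the real file IDA generates will differ):

```python
import os
import sqlite3
import tempfile

# Assumed stand-in for the DDL script that IDA generates; the real file
# under Sample_test will contain Oracle DDL for the full star schema.
ddl_script = "CREATE TABLE Sales (sales_id INTEGER PRIMARY KEY, amount REAL);"

path = os.path.join(tempfile.mkdtemp(), "Script.sql")
with open(path, "w") as f:
    f.write(ddl_script)

# Read the script back and run every statement it contains.
conn = sqlite3.connect(":memory:")
with open(path) as f:
    conn.executescript(f.read())

count = conn.execute(
    "SELECT COUNT(*) FROM sqlite_master WHERE type='table'").fetchone()[0]
print(count)  # 1
```

For an actual Oracle 11 target you would use an Oracle client (for example, a DB-API driver or SQL*Plus) rather than SQLite, but the flow is the same: read the generated script, execute it, then verify the created tables in the data dictionary.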

In the above implementation, we saw a detailed step-by-step procedure for performing forward engineering on a simple multidimensional model. In subsequent blogs, we will delve into the other advanced features of IDA. Until then, happy data modelling!


Niveditha Muralikrishnan, Technical Consultant
