In this blog, I will describe how to configure a Tableau 10 Desktop connection to Apache Drill and explore Hive or HBase data instantly on Hadoop. By combining these tools, we get direct access to semi-structured data such as key-value formats and even document stores, without having to rely […]
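Tableau itself connects through the Drill ODBC driver, but a quick way to sanity-check that Drill can actually reach your Hive or HBase data is to issue a query against Drill's REST endpoint (default port 8047). A minimal sketch, assuming a local Drill instance; the `hbase.customers` table name is a placeholder:

```python
import json
import urllib.request

# Build a SQL query request for Drill's REST API.
# The table name below is hypothetical -- substitute one of your own.
payload = json.dumps({
    "queryType": "SQL",
    "query": "SELECT * FROM hbase.customers LIMIT 10",
}).encode()

req = urllib.request.Request(
    "http://localhost:8047/query.json",
    data=payload,
    headers={"Content-Type": "application/json"},
)
# rows = json.load(urllib.request.urlopen(req))["rows"]  # needs a running Drill
```

If the query succeeds here, the same storage plugin and table should be visible to Tableau through the ODBC connection.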
Silvia Cai
Blogs from this Author
Perficient Customized IBM Unified Data Model for Healthcare
UDMH Model Components The Perficient Healthcare Analytics Gateway asset is a data integration engine that allows healthcare data from multiple sources to be integrated and presented in atomic-level detail (Inmon model) and in a conformed dimensional layer (Kimball model). The atomic and dimensional data models are part of IBM’s Unified Data Model for Healthcare (UDMH). IBM […]
Resolving a Data Architect Physical Model Saving Failure Issue
Symptom When we transform a Dimensional Warehouse Logical Model to a Physical Data Model, IBM InfoSphere Data Architect reports the error "Save Failed Java heap space" even though the Physical Data Model size is 0 KB. The full error message is as follows: Cause Dimensional Warehouse Logical Models are quite large, which can […]
How to create a new entity in the IBM InfoSphere Data Architect
Typically, the customization of the Atomic Warehouse Model is driven by the customization performed on the Business Data Model in the context of a project. If a preexisting element of the Business Data Model is customized, the customization needs to be reflected accordingly into the Atomic Warehouse Model, or the Dimensional Warehouse Model may have the […]
IBM InfoSphere Data Architect Model Change Report Generation
The IDA Model Change Report is helpful when analyzing the differences between the industry models and customized content. It can show attribute-, entity-, and relationship-level differences between two models. You can find the change reports in the folder “{install folder}\IBM\Industry Models\IBM Unified Data Model for Healthcare\v9.1.0.0\Data_Models\Change Reports” after installing the Industry Models. You will see six BIRT reports in […]
How to Import a Source File into Netezza Database
Aginity Workbench is a GUI-based tool that enhances your productivity when working with your Netezza data warehouse. It provides a fairly easy way to get data into a Netezza system, with several options to import data from Excel, CSV, fixed-width files, and external databases. Best Option: […]
How to Connect Hortonworks Hive from Qlikview with ODBC driver
As with most BI tools, QlikView can use Apache Hive (via an ODBC connection) as the SQL access layer to data in Hadoop. Here we are going to talk about how to connect QlikView to Hortonworks Hive via ODBC. Prerequisites 1. These are the versions of each component we installed in Hortonworks: Hue, HDP Hadoop, Hive-HCatalog, Ambari, HBase, Hortonworks ODBC […]
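For reference, a DSN for the Hortonworks Hive ODBC driver is typically defined in `odbc.ini` on Linux (or via the ODBC Administrator on Windows). The fragment below is a minimal sketch only: the host, port, driver path, and user are placeholders, and the exact key names can vary by driver version, so check them against your driver's install guide:

```ini
[ODBC Data Sources]
HortonworksHive=Hortonworks Hive ODBC Driver

[HortonworksHive]
; Path to the installed driver library -- adjust for your system
Driver=/usr/lib/hive/lib/native/Linux-amd64-64/libhortonworkshiveodbc64.so
; HiveServer2 host and default port (placeholders)
Host=sandbox.hortonworks.com
Port=10000
; 2 = HiveServer2
HiveServerType=2
; 3 = User Name authentication
AuthMech=3
UID=hive
Schema=default
```

Once the DSN resolves, QlikView can pick it up as a standard ODBC data source.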
Exchange Data between TM1 Instances by TM1RunTI
Here is an example of transferring data between TM1 instances with a TM1RunTI configuration file, along with some troubleshooting tips. TM1RunTI is a command-line interface tool that can initiate a TM1 TurboIntegrator (TI) process from within any application capable of issuing operating system commands. That means it can be called across TM1 instances, especially when […]
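To illustrate how a TM1RunTI call can be composed (for example from a scheduler or from another application), the sketch below assembles the command line in Python. The admin host, instance name, credentials, process name, and parameter are all placeholders; the flags shown (`-process`, `-adminhost`, `-server`, `-user`, `-pwd`) follow the documented TM1RunTI syntax, but verify them against your TM1 version:

```python
import subprocess

# Illustrative only: every value below is a placeholder.
cmd = [
    "tm1runti",
    "-process", "Export.To.Target",   # TI process to run
    "-adminhost", "localhost",        # TM1 admin host
    "-server", "SourceInstance",      # TM1 instance hosting the process
    "-user", "admin",
    "-pwd", "apple",
    "pMonth=Jan",                     # TI parameter passed as name=value
]

# Uncomment on a machine where TM1 is installed and on the PATH:
# subprocess.run(cmd, check=True)
```

The same values can instead be placed in a TM1RunTI configuration file, which keeps credentials out of the command line.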
Handle Slowly Changing Dimensions with Pentaho Kettle – Part2
We examine SCD Type I and Type II here. Overview of the Kettle transformation Here is an overview of the transformation: data is read from an Excel source file and loaded into the database.
Handle Slowly Changing Dimensions with Pentaho Kettle – Part1
In this blog we will talk about how to implement various types of slowly changing dimensions (SCDs) with Kettle in detail; I will examine SCD Type I and Type II in Part 2. Types of Slowly Changing Dimensions Following Kimball, we distinguish two main types of slowly changing dimensions: Type I, Type […]
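To make the distinction concrete before the Kettle walkthrough, here is a small language-agnostic sketch of the two behaviors in Python (the `current`, `valid_from`, and `valid_to` column names are illustrative, not Kettle's): Type I overwrites the attribute in place and loses history, while Type II expires the current row and inserts a new version.

```python
from datetime import date

def scd_type1(dim_rows, key, new_row):
    """Type I: overwrite attribute values in place; history is lost."""
    for row in dim_rows:
        if row[key] == new_row[key]:
            row.update(new_row)
            return dim_rows
    dim_rows.append(dict(new_row))
    return dim_rows

def scd_type2(dim_rows, key, new_row, load_date):
    """Type II: expire the current row and insert a new version; history kept."""
    for row in dim_rows:
        if row[key] == new_row[key] and row["current"]:
            if all(row.get(k) == v for k, v in new_row.items()):
                return dim_rows  # nothing changed; no new version needed
            row["current"] = False
            row["valid_to"] = load_date
    dim_rows.append({**new_row, "current": True,
                     "valid_from": load_date, "valid_to": None})
    return dim_rows

dim = []
scd_type2(dim, "cust_id", {"cust_id": 1, "city": "Boston"}, date(2020, 1, 1))
scd_type2(dim, "cust_id", {"cust_id": 1, "city": "Denver"}, date(2021, 1, 1))
# dim now holds two versions: the Boston row expired, the Denver row current
```

In Kettle these behaviors map onto built-in steps rather than hand-written code, but the update-versus-version logic is the same.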