In my blog post about ‘Qlik Leadership’ – back in April – I pointed out how Qlik was going to reinvent itself and the BI market once again. A few months later, Qlik Sense was released. Qlik Sense Desktop is a Windows-based desktop application, and I view it as Qlik’s first installment of the .Next wave of innovation.
“Just as Qlik disrupted the business intelligence industry to pioneer the data discovery category, the company is now helping transform the category as it matures to governed, user-driven creation” – per TDWI (see full TDWI Article).
Here are some features of Qlik Sense:
These are just some of the new features in Qlik Sense, moving it toward ever more “consumerized” analytics software.
One of the key points I make in our Executive Big Data Workshops is that effective use of Big Data analytics will require transforming both business and IT organizations. Big Data, with access to cross-functional data, will transform the strategic processes within a company that guide long-term and year-to-year investments. With the ability to apply machine learning, data mining, and advanced analytics to view how different business processes interact with each other, companies now have empirical information for use in their strategic processes.
We are now seeing evidence of this transformation happening with the emergence of the Chief Analytics Officer position. As detailed in this InfoWorld article, Chief analytics officer: The ultimate big data job, it’s not about data but what you do with the data. And it is important enough to create a new position, the CAO. I recommend reading this article.
A few years back I worked for a client that was implementing cell-level security on every data structure within their data warehouse. They had nearly 1,000 tables and 200,000 columns — yikes! Talk about administrative overhead. The logic was that data access should only be given on a need-to-know basis; users would have to request access to specific tables and columns.
Need-to-know is a term frequently used in military and government institutions, referring to granting cleared individuals access to sensitive information. This is a good concept, but note the part about “granting access to SENSITIVE data.” The information has to be classified first; only then is need-to-know (for cleared individuals) applied.
Most government documents are not sensitive, which allows administrative resources to focus on the sensitive, classified information. The system for classifying information as Top Secret, Secret, and Confidential has relatively stringent rules for classification, but it also discourages over-classification, because once a document is classified, its use becomes limited.
This same phenomenon is true in the corporate world. The more a set of data is locked down, the less it will be used. Unnecessarily limiting information workers’ access to data obviously does not help the overall objectives of the organization. Big Data just magnifies this dynamic: unreasonably lock down Big Data, and its value will be severely limited.
In the Hadoop space we have a number of terms for the Hadoop file system used for data management. Data Lake is probably the most popular. I have heard it called a Data Refinery, as well as some other not-so-mentionable names. The one that has stuck with me is the Data Reservoir, mainly because it is the most accurate water analogy for what actually happens in a Hadoop implementation used for data storage and integration.
Consider that data is first landed in the Hadoop file system. This is the un-processed data, just like water running into a reservoir from different sources. Data in this form is only fit for limited use, like analytics by trained power users. The data is then processed, just as water is processed. Process water and you end up with water that is consumable. Go one step further and distill it, and you have water that is suitable for medical applications. Data is the same way in a Big Data environment. Process it enough and you end up with conformed dimensions and fact tables. Process it even more, and you have data that is suitable for calculating bonuses or even publishing to government regulators.
The data warehouse has been a part of the EIM vernacular for nearly 20 years. The vision of the single source of the truth and a single repository for reporting and analysis are two objectives that have resulted in a never-ending journey. The data warehouse never has had enough data, and the quality required for a single version of the truth demands significant investment that only rare business cases could support. Further, the role of the analytical database has generally been difficult to achieve. Ad-hoc analysis on large sets of complex data has generally been a significant challenge for the traditional data warehouse. Historically, to address this, companies have implemented appliances, analytical data marts, or a varying set of database features and compromises (think bit-mapped indexing, a variety of hardware and software caching techniques, and indexed stored data, to name a few) – all with significant investment and usually significant added overhead.
It is amazing to see the terms we come up with to explain a new technology or trend. Consulting thought leaders coin these words to group a set of technologies or trends, making it easier for people to have context. However, the success and adoption of the technology or trend defines the term’s reputation. For example, Data Warehouse was the in-thing, only to be shunned when it did not deliver on its promises. The industry quickly realized the mistake, called it Business Intelligence, and hid Data Warehouse behind BI until things settled. Now no one questions the value of a DW or EDW, or perceives it as a risky project.
Some terms are really great, and they are here to stay for a long time. Some wither away; some change and take on a different meaning. One such term that got my attention is IoT – the Internet of Things. What is this? It sounds like ‘those things’, but what is this trend or technology, really?
Wikipedia gives you this definition:
“The Internet of Things (IoT) is the interconnection of uniquely identifiable embedded computing devices within the existing Internet infrastructure. Typically, IoT is expected to offer advanced connectivity of devices, systems, and services that goes beyond machine-to-machine communications (M2M) and covers a variety of protocols, domains, and applications. The interconnection of these embedded devices (including smart objects), is expected to usher in automation in nearly all fields, while also enabling advanced applications like a Smart Grid.”
That is a lot of stuff – it looks like pretty much everything we do with the Internet. I am sure this term will change and take shape. But let’s look at how this relates to Enterprise Data Management. From an enterprise data perspective, let us consider a subset of IoT: machine-generated internet data and the consolidation of data from systems operating in the cloud. What we end up with is a whole lot of data that is new and that does not fit in the traditional Enterprise Data framework. The impact and exposure are real, and much of the IoT data may live outside the firewall.
In essence, Enterprise Data Management needs to deal with the added dimensions of Architecture, Technology, and Governance for IoT. Considering IoT data as out of scope for Enterprise Data Management will create more issues than it solves, especially if you generate or depend on IoT data.
It is almost always advantageous to be able to “make sure” your Cognos TM1 environment is “ready for use” after a server restart. For example, you may want to:
Hopefully, you know what a TM1 chore is (a chore is a set of tasks, typically TurboIntegrator processes, that can be executed in sequence) and understand that, as an administrator, you could log in to TM1 and manually execute a chore or process. But there is a better way.
Let TM1 Server do it!
To have the TM1 server execute a chore immediately after (every time) it starts up, you can leverage a TM1 configuration file parameter to designate a chore as a “startup chore”. This is similar to an MS Windows service that is set to “automatic” (most likely how your machine’s TM1 servers are configured):
To indicate that your chore should be run when the server starts up, you go into the (TM1s.cfg) configuration file and add the parameter: StartupChores.
You simply list your chore (or chores, separated by colons), for example:
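As a sketch, using the “Backup TM1” chore mentioned later in this post (“SecondChore” is a hypothetical additional chore name, included only to show the colon separator):

```ini
# TM1s.cfg (sketch)
# Run one or more chores at server startup; names are colon-separated
StartupChores=Backup TM1:SecondChore
```

After editing TM1s.cfg, the parameter takes effect the next time the TM1 server is restarted.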
Don’t worry too much about adding this to the configuration; if this parameter is not specified, then no chores will be run, and if a chore name specified does not match an existing chore, an error is written to the server log and TM1 tries to execute the next chore indicated (if no valid chores are found, the server will simply start and become available as normal).
These chores will run before the server starts up (technically, the server is “up”, just not “available” to any user yet):
Startup chores run before user logins are accepted and before any other chores begin processing.
Here is my example:
Once I restarted my server, I checked my server log and verified that the chore (Backup TM1) did in fact execute:
Since startup chores run before any logins are allowed, you’ll have trouble trying to monitor them with tools like TM1Top or even Operations Console; therefore, there is no way to cancel a startup chore short of killing the server process.
While there are several BI technologies, with more entering the fray every day, SSRS has remained a key player in this area for quite some time now. One of the biggest advantages of SSRS reporting is that it involves the end user directly and is very intuitive to use.
Let’s go back a few years, when Excel was the go-to tool for dashboarding. Every time a director or VP wanted a report, he would go to his developers to extract information from the database and build dashboards for his meetings. The end user had to rely on the developers to extract the information and had to spend several minutes, if not hours, building a dashboard. This all works OK when the meeting is scheduled for a specific day of the week or month. We all know that is a myth, and most meetings happen impromptu. In such cases, there is not enough time to extract data and extrapolate that information into graphs.
This is where SSRS came in as a key player. With the strong foundation of Microsoft behind it, SSRS brought in some of the best and most needed features:
While these features may not look groundbreaking at first glance, they actually bring in a lot of value. They save a lot of time, and in business that time directly translates into revenue. Developers can design dashboards once and deploy them to a server; the VP or director can press a button to get these reports on his machine. Furthermore, the reports can be exported in several formats. What I really like about the reports, though, is the look and feel. Microsoft retained the aesthetics of MS Excel reports; by that I mean a pie chart in Excel and in SSRS look exactly the same. This is a great feature, especially for the audience, since most people do not like to see the look of their reports change over time. Another great feature is that SSRS has fantastic security options, and one can implement role-based reporting.
In summary, SSRS is a power-packed tool, and you should reap the benefits of the great features that come with it.
Years of work went into building the elusive single version of the truth. Despite all the attempts from IT and the business, Excel reporting and Access databases were impossible to eliminate. Excel is the number one BI tool in the industry, for the following good reasons: accessibility, speed, and familiarity. Almost all BI tools export data to Excel for those same reasons. The business will produce the insight it needs as soon as the data is available, manually or otherwise. It is time to come to terms with the fact that change is imminent and that there is no such thing as Perfect Data, only data that is good enough for the business. As the saying goes:
‘Perfect is the enemy of Good!’
So waiting for all the business rules and perfect data before producing the report or analytics is too late for the business. Speed is of the essence: when the data is available, the business wants it; stale data is as good as no data at all.
In the changing paradigm of Data Management, agile ideas and tools are in play. Waiting for months, weeks, or even a day to analyze the data in a Data Warehouse is a problem. Data Discovery through Agile BI tools, which double as ETL, offers a significant reduction in time to data availability. Data Virtualization provides access to data in real time, along with metadata, for quicker insights. In-Memory data appliances produce analytics in a fraction of the time compared to a traditional Data Warehouse/BI stack.
Currently there are four options for rounding numbers in Cognos TM1: formatting in reporting (Excel/TM1Web and tools such as Cognos BI), rounding data as it is loaded, Cube Viewer display formatting, and cube rules.
The most popular place to apply rounding in TM1 is in reporting. Cognos TM1 leverages MS Excel for reporting and supports all of the formatting and calculations available within Excel. Typically, “templates” are created that apply the organization’s (or individual’s) desired formatting and/or rounding in a consistent way. In addition, Excel workbooks can be published to TM1Web for viewing by wider audiences (other reporting options, such as Cognos BI, also support formatting/rounding in report presentation).
The following is a simple illustration of using Excel formatting on TM1 data:
Another popular method for rounding numbers in TM1 is to round as data is being loaded (into TM1). This allows information to always be presented in the expected format (or precision) throughout the TM1 application. Based upon specific requirements, it is also common to model a TM1 application with reporting cubes to isolate the calculation and transactional processing from the reporting and presentation (of specific information). In this case, data may be transferred from a source cube to various reporting cubes and during that transfer process logic can be applied to round to the desired precision. Simply put, you may have a summary cube specifically for reporting that holds dollars rounded up to the nearest thousand.
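As a sketch of that source-to-reporting transfer, the rounding step in a TurboIntegrator process might look like the following (the ‘Sales Reporting’ cube, variable, and element names here are hypothetical; ROUND rounds to the nearest integer, so dividing and multiplying by 1,000 yields the nearest thousand):

```
# TurboIntegrator Data tab (sketch)
# Round the incoming value to the nearest thousand dollars
nRounded = ROUND( nValue / 1000 ) * 1000;

# Write the rounded value into the hypothetical reporting cube
CellPutN( nRounded, 'Sales Reporting', vYear, vLocation, 'Amount' );
```

Because the rounding happens once, at load time, every report built on the reporting cube sees the same precision.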
The Cognos TM1 Cube Viewer does support some formatting options for viewing data in TM1 cubes. Although this method is somewhat elementary, a display precision can be set, which invokes some level of rounding for display.
The format dialog in Cognos TM1:
Finally, Cognos TM1 supports the ability to create cube rules that apply business logic to data in TM1. This business logic can include rounding. Generally speaking, as a user navigates through a cube, TM1 executes the rule (in real time) applying the logic to certain data intersection points within the cube. The result of the rule can be based upon just about any algorithm. Below is an example of very simple rounding logic.
The user-entered value is “Valueof” and there are 2 TM1 cube rules applied:
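The original rules are not reproduced here, but a pair of rules along these lines (the measure names follow the text; the N: and C: scoping is an assumption) illustrates the idea:

```
# Cube rule sketch: round the user-entered 'Valueof' measure
# at both leaf (N:) and consolidated (C:) levels
['Rounded Value'] = N: ROUND( ['Valueof'] );
['Rounded Value'] = C: ROUND( ['Valueof'] );
```

The C: rule overrides the natural consolidation of ‘Rounded Value’, which is exactly the kind of choice that determines whether rounded children still add up to their rounded parent.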
So what do I recommend? A best practice recommendation would be to evaluate your requirements and take an approach that best serves the model’s needs. The options that may best serve from an enterprise perspective would be to maximize flexibility by storing the raw numbers in TM1 and then either:
To be clear, it is important to understand that programmatically introducing rounding (either during a load/transfer of data or via a TM1 cube rule) can introduce material differences in some consolidation situations (shown in the cube view image above as “All Locations”).
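To make that consolidation effect concrete, here is a small standalone Python sketch (the dollar values are made up) showing how rounding leaf values to the nearest thousand before summing can differ materially from rounding the consolidated total:

```python
# Hypothetical per-location dollar amounts
values = [1400, 1400, 1400]

def round_to_thousand(x):
    """Round a number to the nearest thousand."""
    return round(x / 1000) * 1000

# Round the consolidated total: 4200 -> 4000
sum_then_round = round_to_thousand(sum(values))

# Round each leaf, then consolidate: 1000 + 1000 + 1000 -> 3000
round_then_sum = sum(round_to_thousand(v) for v in values)

print(sum_then_round, round_then_sum)  # prints: 4000 3000
```

The two approaches disagree by a full thousand dollars, which is why the choice of where to round should be made deliberately against the model’s requirements.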