Perficient Business Intelligence Solutions Blog

SSRS – Easy Hacks

In the past few weeks, we created a variety of reports, from simple tabular reports to colorful pie charts to user-friendly drill down reports.  SSRS is a friendly tool, built to deliver world-class reports to the user community.  While browsing several business intelligence forums, I found a couple of questions that come up a lot: many users want to know how to create reports with one graph per page, and how to name the tabs in the exported report.  This article is dedicated to answering those questions.

This article is named “Easy Hacks” because both problems can be addressed with a few clicks.  You will see how.

Question 1: How can I have 1 graph per tab?

For illustration purposes, I have created a dummy report with two graphs.  You can create your own graphs to test these steps.

[Screenshot eh1: sample report containing two charts]

Select the first chart by clicking on it and select Chart Properties.  Under the General tab, check “Add a page break after”.

[Screenshot eh2: Chart Properties dialog, General tab, with “Add a page break after” selected]

By adding a page break after the first chart, we make sure that the second graph appears on the next page.  Adding a page break after every graph ensures that each subsequent graph starts on a new page, which the Excel renderer turns into a separate tab.
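
Under the hood, that checkbox simply writes a page-break element into the report’s RDL (for 2008 R2-era reports, a PageBreak element with BreakLocation set to End on the chart).  As a rough sanity check, here is a minimal Python sketch – the file name is a placeholder and the element names assume that schema – which lists each chart in a saved .rdl and whether a page break follows it:

import xml.etree.ElementTree as ET

def local_name(tag):
    # RDL elements carry a namespace, e.g. "{http://...}Chart"; keep only the local part.
    return tag.rsplit("}", 1)[-1]

def chart_page_breaks(rdl_path):
    # Walk the report definition and report, per chart, any BreakLocation found under PageBreak.
    tree = ET.parse(rdl_path)
    for element in tree.iter():
        if local_name(element.tag) != "Chart":
            continue
        name = element.get("Name", "<unnamed chart>")
        break_locations = [
            grandchild.text
            for child in element
            if local_name(child.tag) == "PageBreak"
            for grandchild in child
            if local_name(grandchild.tag) == "BreakLocation"
        ]
        yield name, break_locations

if __name__ == "__main__":
    # "EasyHacks.rdl" is a made-up path; point this at your own saved report.
    for chart_name, breaks in chart_page_breaks("EasyHacks.rdl"):
        print(f"{chart_name}: page break = {breaks or 'none'}")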

Easy, eh?

Question 2: How can I have named tabs in my SSRS Excel report?

First, you will need Report Builder 3.0 (or later) to get named tabs in an Excel report.

Click on your chart and, in the properties pane on the right, type in a name for your tab (the PageName property).

[Screenshot eh3: chart properties with the tab name entered]

Export your report in Excel format to see the named tabs. :)
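
If you want to verify the result without hunting through Excel manually, a small Python sketch using the openpyxl library can list the worksheet tab names (the exported file name below is just a placeholder):

from openpyxl import load_workbook

# Path to the workbook exported from SSRS; replace with your own file.
workbook = load_workbook("EasyHacks.xlsx", read_only=True)

# Each tab name you typed into the chart properties should appear here as a sheet name.
for sheet_name in workbook.sheetnames:
    print(sheet_name)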

Stay tuned for more.

Creating a drill down report – 3 easy steps

In my previous posts, we saw how to create a table report and a pie chart. Today, I am adding another dimension to our tabular matrix by giving users the option to expand or collapse the report. Such reports are called drill down reports in SSRS.

If you haven’t yet checked out my post on creating a simple table report, you can do so by visiting – http://blogs.perficient.com/businessintelligence/2013/10/25/creating-first-ssrs-report-part-2/

I am going to create a quick and simple matrix report with Product_type as the parent group and Product_detail as the child group.

[Screenshot dd1: matrix report with Product_type as the parent group and Product_detail as the child group]

Step 1: For illustration purposes, I am assuming that we want to show or hide the Product_detail field. Right-click the Product_detail field and choose Group Properties.

 

Step 2: Choose the Visibility tab and select Hide.

(In this step, we chose “Hide” because we want the report to open collapsed)

[Screenshot dd2: Group Properties, Visibility tab, with Hide selected]

Step 3: Select “Display can be toggled by this report item:” and choose “Product_Type1” from the dropdown.

This means the “Product_detail” group will be shown or hidden based on whether we expand or collapse the “Product_Type1” field.

[Screenshot dd3: toggle item set to Product_Type1]

Run the report and click the + sign to expand and the – sign to collapse the groups.  I hope you enjoy this post.  Stay tuned for more :)
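
For the curious, those three clicks boil down to a small Visibility block in the report’s RDL: the child group is marked Hidden and given Product_Type1 as its ToggleItem. Here is a minimal Python sketch (the file name is a placeholder and the element names assume the 2008/2010 RDL schema) that prints every Visibility setting in a saved report so you can confirm the toggle is wired up:

import xml.etree.ElementTree as ET

def local_name(tag):
    # Strip the RDL namespace, e.g. "{http://...}Visibility" -> "Visibility".
    return tag.rsplit("}", 1)[-1]

# "DrillDown.rdl" is a made-up path; point this at your own saved report.
tree = ET.parse("DrillDown.rdl")

for element in tree.iter():
    if local_name(element.tag) != "Visibility":
        continue
    hidden = toggle_item = None
    for child in element:
        if local_name(child.tag) == "Hidden":
            hidden = child.text
        elif local_name(child.tag) == "ToggleItem":
            toggle_item = child.text
    # For the child group we expect Hidden=true and ToggleItem=Product_Type1.
    print(f"Hidden={hidden}, ToggleItem={toggle_item}")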

Thoughts on Oracle Database In-Memory Option

Last month Oracle announced the Oracle Database In-Memory option. The overall message is that once installed, you can turn this “option” on and Oracle will become an in-memory database.  I do not think it will be that simple. However, I believe Oracle is on the correct track with this capability.

There are two main messages in Oracle In-Memory’s vision which I view as critical capabilities in a modern data architecture. First is the ability to store and process data based on the temperature of the data.
That is, hot, highly accessed data should be kept in DRAM or as close to DRAM as possible.  As the temperature decreases, data can be stored on flash, and cold, rarely accessed data on disk (either in the Oracle DB or in Hadoop).  Of course, we can store data of different temperatures today; it is the second feature, making this storage transparent to the application, that makes the capability so valuable. An application programmer, data scientist, or report developer should not have to know where the data is stored.  It should be transparent. The ability of the Oracle DB or a DBA to optimize data storage based on the cost/performance of the storage tier, without having to consider compatibility with the application, the cluster (RAC), or recoverability, is quite powerful and useful.  Yes, Oracle has been moving this way for years, but now it has most, if not all, of the pieces.
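
To make the “turn the option on” idea concrete, here is a rough sketch of how a DBA might mark hot tables for the in-memory column store while leaving cold history on disk. This is only an illustration: the table names and connection details are made up, and the ALTER TABLE … INMEMORY syntax reflects Oracle’s published 12c documentation rather than a tested deployment.

import cx_Oracle  # assumes the Oracle client libraries are installed

# Placeholder credentials and DSN -- replace with your own environment.
connection = cx_Oracle.connect("app_user", "app_password", "dbhost/orclpdb")
cursor = connection.cursor()

# Hot, frequently queried data: populate it into the in-memory column store.
cursor.execute("ALTER TABLE sales INMEMORY PRIORITY HIGH")

# Cold, rarely accessed history: keep it on disk (or offload it to Hadoop).
cursor.execute("ALTER TABLE sales_archive NO INMEMORY")

cursor.close()
connection.close()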

Despite the fact that the In-Memory option leverages a lot of existing core code, Oracle IT shops will need to remember that this is a version 1 product.  Plan accordingly. Understand the costs and architectural impacts. Implement the Oracle In-Memory option on a few targeted applications and then develop standards for its use. A well-planned, standards-based approach will ensure that your company maximizes the return on its Oracle In-Memory investment.

Evaluating In-Memory DBs

This month Oracle is releasing its new in-memory database.  Essentially, it is an option that leverages and extends the existing RDBMS code base.  Now, with Microsoft’s recent entry, all four of the mega-vendors (IBM, SAP, Microsoft, and Oracle) have in-memory database products.

Which one is the best fit for a company will depend on a number of factors. If a company is happy with its present RDBMS vendor, then that standard should be evaluated first. However, if a company has more than one RDBMS vendor, or if it is looking to make a switch, a more comparative evaluation is needed. In this case companies should evaluate:

  1. Maturity of the Offering. The vendors’ products differ in their support for “traditional” RDBMS functionality such as referential integrity, stored procedures, and online backups – to name a few.  Make sure you understand each vendor’s current and near-term support for the features you require.
  2. Performance. All IMDB vendors promise, and by all accounts deliver, significantly increased performance.  However, the vendor’s ability to provide the desired level of performance on the company’s proposed query profile, and the ability of the vendor’s technology to scale out, should be evaluated. Compression and columnar storage will also affect performance, so understanding how these features support the company’s requirements is necessary.
  3. Sourcing of On-Disk Data. Probably the biggest difference in architecture and maturity between the vendors is their ability to source data from on-disk storage systems, whether files, traditional RDBMSs, or Hadoop systems.
  4. Licensing & Cost Model. The costs associated with licensing and implementing a technology need to be closely evaluated. How much training is required to develop competency with a new technology? Is the licensing model favorable to how the enterprise uses and purchases licenses?

There are other evaluation areas as well. For instance, SAP’s HANA offering has a robust BI metadata layer (think Business Objects Universe) that may be of value for a number of companies.

In-memory databases are changing and evolving quickly, so make sure the appropriate due diligence is completed before investing in a selected technology.

Perficient takes Cognos TM1 to the Cloud

IBM Cognos TM1 is well known as planning, analysis, and forecasting software that delivers flexible solutions to address requirements across an enterprise, as well as real-time analytics, reporting, and what-if scenario modeling. Perficient, in turn, is well known for delivering expertly designed TM1-based solutions.

Analytic Projects

Perhaps phase zero of a typical analytics project would involve our topology experts determining the exact server environment required to support the implementation of a number of TM1 servers (based not only upon industry-proven practices, but also on our own breadth of practical “in the field” experience). Next would come the procurement and configuration of said environment (and prerequisite software), and finally the installation of Cognos TM1.

It doesn’t stop there

As TM1 development begins, our engineers work closely with internal staff not only to outline processes for the (application and performance) testing and deployment of developed TM1 models, but also to establish a maintainable support structure for after the “go live” date. “Support” includes not only the administration of the developed TM1 application, but also the “road map” for assigning responsibilities such as:

  • Hardware monitoring and administration
  • Software upgrades
  • Expansion or reconfiguration based upon additional requirements (e.g., data or user base changes, or additional functionality or enhancements to deployed models)
  • And so on…

Teaming Up

Earlier this year the Perficient analytics team teamed up with the IBM Cloud team to offer an interesting alternative to the “typical”: Cognos TM1 as a service in the cloud.

Using our internal TM1 models and colleagues literally all over the country, we evaluated and tested the viability of a fully cloud based TM1 solution.

What we found was that it works, and works well, offering unique advantages to our customers:

  • Lowers the “cost of entry” (getting TM1 deployed)
  • Lowers the total cost of ownership (ongoing “care and feeding”)
  • Reduces the level of capital expenditures (doesn’t require the procurement of internal hardware)
  • Reduces IT involvement (and therefore expense)
  • Removes the need to plan for, manage and execute upgrades when newer releases are available (new features are available sooner)
  • (Licensed) users anywhere in the world have access from day 1 (regardless of internal constraints)
  • Provides for the availability of auxiliary environments for development and testing (without additional procurement and support)

In the field

Once we were familiar with all of the “ins and outs” of TM1 10.2 on a cloud platform, we were able to work directly with IBM to demonstrate how a cloud-based solution would address the specific needs of one of our larger customers. After that, the Perficient team “on the ground” developed and deployed a proof of concept using real customer data and partnered with the customer for the hands-on evaluation and testing. Once the results were in, it was unanimous: “full speed ahead!”

A Versatile platform

During the project life cycle, the cloud environment was seamless, allowing Perficient developers to work (at the client site or remotely) and complete all necessary tasks without issue. The IBM cloud team was available 24/7 to analyze any perceived bottlenecks and, when required, to “tweak” things per the Perficient team’s suggestions, ensuring an accurately configured cloud and a successful, on-time solution delivery.

Bottom Line

Built upon our internal team’s experience and IBM’s support, the cloud-based solution we delivered is robust, cutting edge, and highly scalable.

Major takeaways

Even given everyone’s extremely high expectations, the project team was delighted and reported back the following major takeaways from the experience:

  • There is no “hardware administration” to worry about
  • No software installation headaches to hold things up!
  • The cloud provided an accurately configured VM – including dedicated RAM and CPU sized exactly to the needs of the solution.
  • The application was easily accessible, yet also very secure.
  • Everything was “powerfully fast” – the team did not experience any “WAN effects”.
  • 24/7 support provided by the IBM cloud team was “stellar”
  • The managed RAM and “no limits” CPUs set things up to take full advantage of features like TM1’s MTQ (multi-threaded queries).
  • The users could choose a completely web-based experience or install CAFE (IBM Cognos Analysis for Microsoft Excel) on their machines.

In addition, IBM Concert (provided as part of the cloud experience) is, in the team’s words, a “wonderful tool for our user community to combine both TM1 & BI to create intuitive workflows and custom dashboards”.

More to Come

To be sure, you’ll be hearing much more about Concert & Cognos in the cloud and when you do, you can count on the Perficient team for expert delivery.

A little stuffed animal called Hadoop

Doug Cutting – Hadoop creator – is reported to have explained how the name for his Big Data technology came about:

“The name my kid gave a stuffed yellow elephant. Short, relatively easy to spell and pronounce, meaningless, and not used elsewhere: those are my naming criteria.”

The term, of course, evolved over time and almost took on a life of its own… this little elephant kept on growing and growing… to the point that, nowadays, the term Hadoop is often used to refer to a whole ecosystem of projects, such as:

  1. Common – components and interfaces for distributed filesystems and general I/O
  2. Avro – serialization system for RPC and persistent data storage
  3. MapReduce – distributed data processing model and execution environment running on large clusters of commodity machines (see the word-count sketch after this list)
  4. HDFS – distributed filesystem running on large clusters of commodity machines
  5. Pig – data flow language / execution environment to explore huge datasets (running on HDFS and MapReduce clusters)
  6. Hive – distributed data warehouse, manages data stored in HDFS providing a query language based on SQL for querying the data
  7. HBase – distributed, column-oriented database that uses HDFS for its underlying storage, supporting both batch-style computations and random reads
  8. ZooKeeper – distributed, highly available coordination service, providing primitives to build distributed applications
  9. Sqoop – transfer bulk data between structured data stores and HDFS
  10. Oozie – service to run and schedule workflows for Hadoop jobs
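
To make the MapReduce entry above a little more tangible, here is a minimal word-count sketch written for Hadoop Streaming. It is a hypothetical example: Hadoop pipes input splits through the mapper, sorts by key, then feeds the sorted stream to the reducer, and in a real job you would pass the two phases to the streaming jar with its -mapper and -reducer options.

# wordcount.py -- run as "python wordcount.py map" for the mapper phase
# and "python wordcount.py reduce" for the reducer phase.
import sys

def mapper():
    # Emit "word<TAB>1" for every word on every input line.
    for line in sys.stdin:
        for word in line.strip().split():
            print(f"{word}\t1")

def reducer():
    # Input arrives sorted by word, so counts can be summed per run of identical keys.
    current_word, count = None, 0
    for line in sys.stdin:
        word, value = line.rstrip("\n").split("\t", 1)
        if word != current_word:
            if current_word is not None:
                print(f"{current_word}\t{count}")
            current_word, count = word, 0
        count += int(value)
    if current_word is not None:
        print(f"{current_word}\t{count}")

if __name__ == "__main__":
    mapper() if sys.argv[1:] == ["map"] else reducer()

The same pair of functions can be tested locally before touching a cluster: cat input.txt | python wordcount.py map | sort | python wordcount.py reduce.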

This is a sizable portion of the Big Data ecosystem… an ecosystem that keeps on growing almost by the day. In fact, we could spend a considerable amount of time describing additional technologies out there that play an important part in the Big Data symphony – DataStax, Sqrrl, Hortonworks, Cloudera, Accumulo, Apache, Ambari, Cassandra, Chukwa, Mahout, Spark, Tez, Flume, Fuse, YARN, Whirr, Grunt, HiveQL, Nutch, Java, Ruby, Python, Perl, R, NoSQL, PigLatin, Scala, etc.

Interestingly enough, most of the aforementioned technologies are used in the realm of Data Science as well, mostly due to the fact that the main goal of Data Science is to make sense out of and generate value from all data, in all of its many forms, shapes, structures and sizes.

In my next blog post, we’ll see how Big Data and Data Science are actually two sides of the same coin, and how whoever does Big Data, is actually doing Data Science as well to some extent – wittingly, or unwittingly.

KScope14 Session: The Reverse Star Schema

This week, Perficient is exhibiting and presenting at Kscope14 in Seattle, WA.  On Monday, June 23, my colleague Patrick Abram gave a great presentation on empowering restaurant operations through analytics.  An overview of Patrick’s presentation and Perficient’s retail-focused solutions can be found in Patrick’s blog post.

Today, Wednesday, June 25, I gave my presentation on Reverse Star Schemas, a logical implementation technique that addresses increasingly complex business questions.  Here is the abstract for my presentation:

It has long been accepted that classically designed dimensional models provide the foundations for effective Business Intelligence applications.  But what about those cases in which the facts and their related dimensions are not, in fact, the answers?  Introducing the Reverse Star Schema, a critical pillar of business driven Business Intelligence applications.  This session will run through the what’s, why’s, and when’s of Reverse Star Schemas, highlight real-world case studies at one of the nation’s top-tier health systems, demonstrate OBIEE implementation techniques, and prepare you for architecting the complex and sophisticated Business Intelligence applications of the future.

When implemented logically in OBIEE, the Reverse Star Schema empowers BI Architects and Developers to quickly deploy analytic environments and applications that address the complex questions of the mature business user.

Web analytics and Enterprise data…

I was looking at the market share of Google Analytics (GA), and it is definitely on the rise. So I was curious to see its capabilities and what this tool can do. Of course, it is a great campaign management tool. It’s been a while since I worked on campaign management.

Wanting to know even more about this tool, I went off to YouTube and got myself up to speed on its capabilities. Right off the bat I noticed campaign management has changed drastically compared to the days when we were sending email blasts, snail mail, junk mail, etc. I remember the days when we generated email lists, ran them through third-party campaign management tools, blasted them out to the world, and waited. Once we had gathered enough data (mostly when customers purchased the product) to run the results through SAS, we could see the effectiveness. It took more than a month to see any valuable insights.

Fast-forward to the social media era: GA provides instant results and intelligent click-stream data for tracking campaigns in real time. Check out the YouTube webinars to see what GA can do in 45 minutes.

On a very basic level, GA can track whether a visitor is new or returning, micro conversions (downloading a newsletter, or adding something to a shopping cart), and macro conversions (buying a product). GA can track AdWords traffic (how visitors got to the website and what triggered the visit). It also has a link tagging feature, which is very useful for identifying the channel (email, referral website, etc.) and linking the traffic to a specific campaign based on its origination. It has many other features besides cool reports and analytical abilities as well.
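
The link tagging mentioned above is, at its core, just a set of utm query parameters appended to the destination URL so GA can attribute the visit to a source, medium, and campaign. A small, hypothetical Python sketch makes the idea concrete (the URL and campaign names are made up):

import urllib.parse

def tag_campaign_link(base_url, source, medium, campaign):
    # Append GA campaign (utm) parameters to a landing-page URL.
    params = {
        "utm_source": source,      # where the traffic comes from, e.g. "newsletter"
        "utm_medium": medium,      # the channel, e.g. "email" or "referral"
        "utm_campaign": campaign,  # the campaign name that shows up in GA reports
    }
    return f"{base_url}?{urllib.parse.urlencode(params)}"

print(tag_campaign_link("https://www.example.com/offer", "newsletter", "email", "summer_sale"))
# -> https://www.example.com/offer?utm_source=newsletter&utm_medium=email&utm_campaign=summer_sale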

So much information is collected whether or not the customer buys a product. How much of this web analytics data is part of the enterprise data? Does historical analysis include this data? Is this data used for predictive and prescriptive analytics? It is important to ask the following questions to assess what percentage of the gathered information is actually used at the enterprise level:

  • How well do organizations integrate this campaign data into their enterprise data?
  • Do they collect and manage new prospect information at the enterprise level?
  • Does the organization use this tool to enhance its master data?

This may become a Big Data question, depending on the number of campaigns/hits and the number of micro activities the site can offer. Chances are that the data resides in a silo or at a third-party location, and the results are not stored in the enterprise data.

KScope14 Session: Empower Mobile Restaurant Operations Analytics

Perficient is exhibiting and presenting this week at KScope14 in Seattle, WA. On Monday, June 23, I presented my retail-focused solution offering, built upon the success of Perficient’s Retail Pathways but using the Oracle suite of products. To fit the discussion within a one-hour window, I chose restaurant operations to represent the solution.

Here is the abstract for my presentation.

Multi-unit, multi-concept restaurant companies face challenging reporting requirements. How should they compare promotion, holiday, and labor performance data across concepts? How should they maximize fraud detection capabilities? How should they arm restaurant operators with the data they need to react to changes affecting day-to-day operations as well as over-time goals? An industry-leading data model, integrated metadata, and prebuilt reports and dashboards deliver the answers to these questions and more. Deliver relevant, actionable mobile analytics for the restaurant industry with an integrated solution of Oracle Business Intelligence and Oracle Endeca Information Discovery.

We have tentatively chosen to brand this offering as Crave – Designed by Perficient. Powered by Oracle. This way we can differentiate this new Oracle-based offering from the current Retail Pathways offering.

SAP HANA and Hadoop – complementary or competitive?

In my last blog post, we learned about SAP HANA… or, as I called it, “a database on steroids”. Here is what former SAP CTO and Executive Board Member Vishal Sikka told InformationWeek:

“Hana is a full, ACID-compliant database, and not just a cache or accelerator. All the operations happen in memory, but every transaction is committed, stored, and persisted.”

In the same InformationWeek article you can read about how SAP is committed to becoming the #2 database vendor by 2015.

So, even though HANA is a new technology, it looks like SAP has pretty much bet its future on it. Soon, SAP customers may have SAP ERP, SAP NetWeaver BW, and their entire SAP system landscape sitting on a HANA database.

But if HANA is such a great database, you may wonder, why would SAP HANA need a partnership with Hadoop, or be integrated with Hadoop at all? Can HANA really integrate with Hadoop seamlessly? And, most importantly, are HANA and Hadoop complementary or competitive?

Well, in October 2012, SAP announced the integration of Hadoop into its data warehousing family – why?

The composite answer, in brief, is:

  1. tighter integration – SAP, Hadoop, Cloudera, Hitachi Data Systems, HP, and IBM are all brought together in order to address the ever-growing demands in the Big Data space
  2. analytics scenarios – in order to build more complex and mature analytics scenarios, HANA can be integrated with Hadoop via SAP Sybase IQ, SAP Data Services, or R queries, and include structured AND unstructured Big Data with prior integration and consolidation by Hadoop
  3. in-memory capabilities – some organizations already have existing Hadoop strategies or solutions but cannot do in-memory Big Data without HANA
  4. tailored solutions – by bringing together speed, scale and flexibility, SAP enables customers to integrate Hadoop into their existing BI and Data Warehousing environments in multiple ways, so as to tailor the integration to their very specific needs
  5. transparency for end-users – SAP BusinessObjects Data Integrator allows organizations to read data from Hadoop Distributed File Systems (HDFS) or Hive, and load the desired data very rapidly into SAP HANA or SAP Sybase IQ, helping ensure that SAP BusinessObjects BI users can continue to use their existing reporting and analytics tools
  6. query federation – customers can federate queries across an SAP Sybase IQ MPP environment using built-in functionality
  7. direct exploration – SAP BusinessObjects BI users can query Hive environments, giving business analysts the ability to directly explore Hadoop environments (see the sketch after this list)
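
As a rough illustration of what “directly exploring” a Hadoop environment can look like from outside the SAP stack, here is a hypothetical Python sketch using the PyHive library (host, credentials, and table names are made up); querying HiveServer2 is not very different from querying any other SQL source:

from pyhive import hive  # pip install "pyhive[hive]"

# Placeholder connection details -- replace with your own HiveServer2 host.
connection = hive.Connection(host="hadoop-edge-node", port=10000,
                             username="analyst", database="default")
cursor = connection.cursor()

# HiveQL looks a lot like SQL, so BI-style exploration feels familiar.
cursor.execute(
    "SELECT product_type, COUNT(*) AS orders "
    "FROM web_orders GROUP BY product_type"
)

for product_type, orders in cursor.fetchall():
    print(product_type, orders)

cursor.close()
connection.close()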

In short, SAP is looking at a coexistence strategy with Hadoop… NOT a competitive one.

In the next blog post, we’ll look at Hadoop and its position in the Big Data landscape… stay tuned.