In the past few weeks, we created a variety of reports, from simple tabular reports to colorful pie charts to user-friendly drill-down reports. SSRS is a very friendly tool, developed to deliver world-class reports to the user community. While browsing through several business intelligence forums, I found a couple of questions that come up a lot: users want to know how to create reports with one graph per page, and how to have named tabs in that report. This article is dedicated to answering those questions.
This article is named “Easy Hacks” because both problems can be addressed with a few clicks. You will see how.
Question 1: How can I have 1 graph per tab?
For our illustration purposes, I have created a dummy report with two graphs. You can create your own graphs to test these steps.
Select the first chart by clicking on it and open Chart Properties. Under the General tab, select “Add a page break after.”
By adding a page break, we make sure the second graph appears on the second tab. Adding a page break after every graph ensures that the next graph appears on a new page.
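Under the hood, that dialog setting is stored as a PageBreak element on the chart in the report's RDL file. A minimal sketch of what the saved definition looks like (element names follow the RDL schema; the chart name and surrounding markup here are just placeholders):

```xml
<Chart Name="SalesByRegionChart">
  <!-- ... chart data, series, and layout ... -->
  <PageBreak>
    <!-- "End" corresponds to "Add a page break after" in the dialog -->
    <BreakLocation>End</BreakLocation>
  </PageBreak>
</Chart>
```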
Question 2: How can I have named tabs on my SSRS Excel report?
First, you will need Report Builder 3.0 to get named tabs on an Excel report.
Click on your chart and, on the right under Chart Properties, type in a name for your tab.
Export your report in Excel format to see the named tabs.
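For reference, the tab name you type maps to the PageName property in the report's RDL (part of the schema that Report Builder 3.0 produces); the Excel renderer uses it as the worksheet name. A hypothetical fragment with made-up names:

```xml
<Chart Name="RevenueChart">
  <!-- ... chart definition ... -->
  <PageBreak>
    <BreakLocation>End</BreakLocation>
  </PageBreak>
  <!-- Becomes the worksheet (tab) name when the report is exported to Excel -->
  <PageName>Revenue</PageName>
</Chart>
```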
Stay tuned for more.
In my previous posts, we saw how to create a table report and a pie chart. Today, I am adding another dimension to our tabular matrix by giving users the option to expand or collapse a report. Such reports are called drill-down reports in SSRS.
If you haven’t yet checked out my post on creating a simple table report, you can do so by visiting – http://blogs.perficient.com/businessintelligence/2013/10/25/creating-first-ssrs-report-part-2/
I am going to create a quick and simple matrix report with Product_type as the parent group and Product_detail as the child group.
Step 1: For illustration purposes, I am assuming that we want to show or hide the Product_detail field. Right-click the Product_detail field and choose Group Properties.
Step 2: Choose Visibility tab and select Hide.
(In this step, we chose “Hide” because we want the report to open collapsed)
Step 3: Select “Display can be toggled by this report item:” and choose “Product_Type1” from the dropdown.
This means the “Product_detail” field will be shown or hidden as we expand or collapse the “Product_Type1” field.
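In RDL terms, steps 2 and 3 together set the group's Visibility element: Hidden controls the initial collapsed state, and ToggleItem names the report item that carries the +/- control. A sketch, using the group and field names from this example (the surrounding tablix markup is omitted):

```xml
<TablixMember>
  <Group Name="Product_detail" />
  <Visibility>
    <!-- The report opens collapsed -->
    <Hidden>true</Hidden>
    <!-- Clicking the +/- on Product_Type1 toggles this group -->
    <ToggleItem>Product_Type1</ToggleItem>
  </Visibility>
</TablixMember>
```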
Run the report and click the + sign to expand and the – sign to collapse the report. I hope you enjoy this post. Stay tuned for more.
Last month Oracle announced the Oracle In-Memory database option. The overall message is that once it is installed, you can turn this “option” on and Oracle becomes an in-memory database. I do not think it will be that simple. However, I believe Oracle is on the right track with this capability.
There are two main messages in Oracle In-Memory’s vision that I view as critical capabilities in a modern data architecture. First is the ability to store and process data based on the temperature of the data.
That is, hot, highly accessed data should be kept in DRAM or as close to DRAM as possible. As the temperature decreases, data can be stored on flash, and cold, rarely accessed data on disk (either in the Oracle DB or in Hadoop). Of course, we can store data of different temperatures today; it is the second feature, making this storage transparent to the application, that makes the capability so valuable. An application programmer, data scientist, or report developer should not have to know where the data is stored. It should be transparent. That the Oracle DB or a DBA can optimize the storage of data based on the cost and performance of the storage media, without having to consider compatibility with the application, the cluster (RAC), or recoverability, is quite powerful and useful. Yes, Oracle has been moving this way for years, but now it has most, if not all, of the pieces.
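As a toy illustration (not Oracle code), temperature-based placement can be sketched as a store that tracks access counts and makes the tier decision itself, so callers never name a storage device. The tier names and thresholds below are made up for the sketch:

```python
# Toy illustration: place data on a storage tier by access "temperature",
# keeping the placement decision out of the application's hands.
class TemperatureTieredStore:
    def __init__(self, hot_threshold=100, warm_threshold=10):
        self.hot_threshold = hot_threshold    # accesses needed to qualify for DRAM
        self.warm_threshold = warm_threshold  # accesses needed to qualify for flash
        self.access_counts = {}

    def record_access(self, key):
        """Callers just read data; the store tracks how hot each item is."""
        self.access_counts[key] = self.access_counts.get(key, 0) + 1

    def tier_for(self, key):
        """The placement decision the application never has to make itself."""
        count = self.access_counts.get(key, 0)
        if count >= self.hot_threshold:
            return "dram"
        if count >= self.warm_threshold:
            return "flash"
        return "disk"  # cold data: Oracle DB files or Hadoop, in the article's terms

store = TemperatureTieredStore()
for _ in range(150):
    store.record_access("orders_2024")  # hot: queried constantly
store.record_access("orders_2003")      # cold: touched once
print(store.tier_for("orders_2024"))    # dram
print(store.tier_for("orders_2003"))    # disk
```

The point of the sketch is the transparency: the caller only ever does `record_access`, and the store (standing in for the database) decides where the data lives.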
Despite the fact that the In-Memory option leverages a lot of existing core code, most Oracle IT shops will need to remember that this is a version 1 product. Plan accordingly. Understand the costs and architectural impacts. Implement the Oracle In-Memory option on a few targeted applications and then develop standards for its use. A well-planned, standards-based approach will ensure that your company maximizes the return on its Oracle In-Memory investment.
This month Oracle is releasing its new in-memory database. Essentially, it is an option that leverages and extends the existing RDBMS code base. Now, with Microsoft’s recent entry, all four of the mega-vendors (IBM, SAP, Microsoft, and Oracle) have in-memory database products.
Which one is the best fit for a company will depend on a number of factors. If a company is happy with its present RDBMS vendor, then that standard should be evaluated first. However, if a company has more than one RDBMS vendor, or if it is looking to make a switch, a more comparative evaluation is needed. In this case, companies should evaluate:
There are other evaluation areas as well. For instance, SAP’s HANA offering has a robust BI metadata layer (think Business Objects Universe) that may be of value to a number of companies.
In-memory databases are changing and evolving quickly, so make sure the appropriate due diligence is completed before investing in a selected technology.
IBM Cognos TM1 is well known as planning, analysis, and forecasting software that delivers flexible solutions to address requirements across an enterprise, as well as real-time analytics, reporting, and what-if scenario modeling. Perficient, in turn, is well known for delivering expertly designed TM1-based solutions.
Perhaps phase zero of a typical analytics project would involve our topology experts determining the exact server environment required to support the implementation of a number of TM1 servers (based not only upon industry-proven practices, but on our own breadth of practical “in the field” experience). Next would come the procurement and configuration of that environment (and prerequisite software), and finally the installation of Cognos TM1.
It doesn’t stop there
As TM1 development begins, our engineers work closely with internal staff not only to outline processes for (application and performance) testing and deployment of the developed TM1 models, but also to establish a maintainable support structure for after the “go live” date. “Support” includes not only the administration of the developed TM1 application but also a “road map” for assigning responsibilities such as:
Earlier this year the Perficient analytics team teamed up with the IBM Cloud team to offer an interesting alternative to the “typical”: Cognos TM1 as a service in the cloud.
Using our internal TM1 models and colleagues literally all over the country, we evaluated and tested the viability of a fully cloud-based TM1 solution.
What we found was that it works, and works well, offering unique advantages to our customers:
In the field
Once we were familiar with all of the “ins and outs” of TM1 10.2 on a cloud platform, we were able to work directly with IBM to demonstrate how a cloud-based solution would address the specific needs of one of our larger customers. After that, the Perficient team “on the ground” developed and deployed a proof of concept using real customer data, and partnered with the customer for the hands-on evaluation and testing. Once the results were in, it was unanimous: “full speed ahead!”
A versatile platform
During the project life cycle, the cloud environment was seamless, allowing Perficient developers to work (at the client site or remotely) and complete all necessary tasks without issue. The IBM cloud team was available 24/7 to analyze any perceived bottlenecks and, when required, to “tweak” things per the Perficient team’s suggestions, ensuring an accurately configured cloud and a successful, on-time solution delivery.
Built upon our internal team’s experience and IBM’s support, our delivered cloud-based solution is robust, cutting edge, and highly scalable.
Even given everyone’s extremely high expectations, the project team was delighted and reported back the following major takeaways from the experience:
In addition, IBM Concert (provided as part of the cloud experience) is, to quote the team, a “wonderful tool for our user community to combine both TM1 & BI to create intuitive workflows and custom dashboards”.
More to Come
To be sure, you’ll be hearing much more about Concert and Cognos in the cloud, and when you do, you can count on the Perficient team for expert delivery.
Doug Cutting, the creator of Hadoop, is reported to have explained how the name for his Big Data technology came about:
“The name my kid gave a stuffed yellow elephant. Short, relatively easy to spell and pronounce, meaningless, and not used elsewhere: those are my naming criteria.”
The term, of course, evolved over time and almost took on a life of its own… this little elephant kept on growing, and growing… to the point that, nowadays, the term Hadoop is often used to refer to a whole ecosystem of projects, such as:
This is a sizable portion of the Big Data ecosystem… an ecosystem that keeps on growing almost by the day. In fact, we could spend a considerable amount of time describing additional technologies out there that play an important part in the Big Data symphony – DataStax, Sqrrl, Hortonworks, Cloudera, Accumulo, Apache, Ambari, Cassandra, Chukwa, Mahout, Spark, Tez, Flume, Fuse, YARN, Whirr, Grunt, HiveQL, Nutch, Java, Ruby, Python, Perl, R, NoSQL, PigLatin, Scala, etc.
Interestingly enough, most of the aforementioned technologies are used in the realm of Data Science as well, mostly due to the fact that the main goal of Data Science is to make sense out of and generate value from all data, in all of its many forms, shapes, structures and sizes.
In my next blog post, we’ll see how Big Data and Data Science are actually two sides of the same coin, and how whoever does Big Data is actually doing Data Science as well to some extent, wittingly or unwittingly.
This week, Perficient is exhibiting and presenting at Kscope14 in Seattle, WA. On Monday, June 23, my colleague Patrick Abram gave a great presentation on empowering restaurant operations through analytics. An overview of Patrick’s presentation and Perficient’s retail-focused solutions can be found in Patrick’s blog post.
Today, Wednesday, June 25, I gave my presentation on Reverse Star Schemas, a logical implementation technique that addresses increasingly complex business questions. Here is the abstract for my presentation:
It has long been accepted that classically designed dimensional models provide the foundations for effective Business Intelligence applications. But what about those cases in which the facts and their related dimensions are not, in fact, the answers? Introducing the Reverse Star Schema, a critical pillar of business driven Business Intelligence applications. This session will run through the what’s, why’s, and when’s of Reverse Star Schemas, highlight real-world case studies at one of the nation’s top-tier health systems, demonstrate OBIEE implementation techniques, and prepare you for architecting the complex and sophisticated Business Intelligence applications of the future.
When implemented logically in OBIEE, the Reverse Star Schema empowers BI Architects and Developers to quickly deploy analytic environments and applications that address the complex questions of the mature business user.
I was looking at the market share of Google Analytics (GA), and it is definitely on the rise. So I was curious to see what this tool can do. Of course, it is a great campaign management tool, and it had been a while since I worked on campaign management.
Wanting to know all the more about this tool, I went off to YouTube and got myself up to speed on its capabilities. Right off the bat, I noticed campaign management has changed drastically compared to the days when we were sending email blasts, snail mail, junk mail, and the like. I remember the days when we generated email lists, ran them through third-party campaign management tools, blasted them out to the world, and waited. Once we had enough data (mostly when customers purchased the product) to run the results through SAS, we could see the effectiveness. It took more than a month to see any valuable insights.
Fast-forward to the social media era: GA provides instant results and intelligent click-stream data for tracking campaigns in real time. Check out the YouTube webinars to see what GA can do in 45 minutes.
On a very basic level, GA can track whether a visitor is new or returning, micro conversions (downloading a newsletter or adding something to a shopping cart), and macro conversions (buying a product). GA can also track AdWords traffic (how visitors got to the website and what triggered the visit). It also has a link-tagging feature, which is very useful for identifying the channel (email, referral website, etc.) and linking traffic to a specific campaign based on its origination. It has many other features, along with cool reports and analytical abilities.
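The link-tagging feature works through GA's standard utm_* query parameters appended to campaign URLs. A small sketch of building a tagged link in Python; the URL and campaign values are made up for illustration:

```python
from urllib.parse import urlencode, urlsplit

def tag_campaign_url(base_url, source, medium, campaign):
    """Build a GA-tagged landing URL so traffic is attributed to a campaign."""
    params = urlencode({
        "utm_source": source,      # where the traffic originated, e.g. a newsletter
        "utm_medium": medium,      # the channel, e.g. "email", "cpc", "referral"
        "utm_campaign": campaign,  # the campaign name reported in GA
    })
    # Append with "?" or "&" depending on whether the URL already has a query string
    sep = "&" if urlsplit(base_url).query else "?"
    return f"{base_url}{sep}{params}"

url = tag_campaign_url("https://example.com/fall-sale", "newsletter", "email", "fall_2014")
print(url)
# https://example.com/fall-sale?utm_source=newsletter&utm_medium=email&utm_campaign=fall_2014
```

When a visitor clicks a link built this way, GA reads the parameters from the landing page URL and attributes the visit, and any conversions, to that source, medium, and campaign.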
There is so much information collected, whether or not the customer buys a product. How much of this web analytics data is part of the enterprise data? Does historical analysis include this data? Is this data used for predictive and prescriptive analytics? It is important to ask the following questions to assess what percentage of the gathered information is actually used at the enterprise level:
This may become a Big Data question, depending on the number of campaigns and hits and the amount of micro activities the site can offer. Chances are that the data resides in a silo or at a third-party location, and the results are not stored in the enterprise data.
Perficient is exhibiting and presenting this week at Kscope14 in Seattle, WA. On Monday, June 23, I presented my retail-focused solution offering, built upon the success of Perficient’s Retail Pathways but using the Oracle suite of products. To focus the discussion to fit within a one-hour window, I chose restaurant operations to represent the solution.
Here is the abstract for my presentation.
Multi-unit, multi-concept restaurant companies face challenging reporting requirements. How should they compare promotion, holiday, and labor performance data across concepts? How should they maximize fraud detection capabilities? How should they arm restaurant operators with the data they need to react to changes affecting day-to-day operations as well as over-time goals? An industry-leading data model, integrated metadata, and prebuilt reports and dashboards deliver the answers to these questions and more. Deliver relevant, actionable mobile analytics for the restaurant industry with an integrated solution of Oracle Business Intelligence and Oracle Endeca Information Discovery.
We have tentatively chosen to brand this offering as Crave – Designed by Perficient. Powered by Oracle. This way we can differentiate this new Oracle-based offering from the current Retail Pathways offering.
In my last blog post, we learned about SAP HANA… or, as I called it, “a database on steroids”. Here is what SAP’s former CTO and Executive Board Member, Vishal Sikka, told InformationWeek:
In the same InformationWeek article, you can read about how SAP is committed to becoming the #2 database vendor by 2015.
So, even though HANA is a new technology, it looks like SAP has pretty much bet its future on it. Soon, SAP customers may have SAP ERP, SAP NetWeaver BW, and their entire SAP system landscape sitting on a HANA database.
But if HANA is such a great database, you may wonder, why would SAP HANA need a partnership with Hadoop, or be integrated with Hadoop at all? Can HANA really integrate with Hadoop seamlessly? And, most importantly, are HANA and Hadoop complementary or competitive?
Well, in October 2012, SAP announced the integration of Hadoop into its data warehousing family – why?
The composite answer, in brief, is:
In short, SAP is looking at a coexistence strategy with Hadoop… NOT a competitive one.
In the next blog post, we’ll look at Hadoop and its position in the Big Data landscape… stay tuned.