SSRS is a powerful tool not just because it renders query results as good-looking charts, but because it enhances the user experience. The reports are intuitive enough that users can navigate and export data without much training. As business analysts, data analysts, and report designers, however, it is our responsibility to extend these usability features to our users at every step.
We know that if a SQL query returns no rows, SSRS will display an empty table to our customers. As a report designer and a user myself, an empty table would worry me and make me think the report was not pulling data correctly. Instead, I would like to display a clear message indicating why there is no data, for example: "No records found."
Let's see how to add custom messages in SSRS.
For illustration, I have created a dummy table that contains three columns:
In order to add a custom message:
1.) Select your table
2.) Go to Properties
3.) Scroll down to No Rows Message and type your message in the box.
You can also change the color and font in the No Rows section of the Table Properties dialog.
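Behind the scenes, this setting is saved in the report's RDL (XML) definition. A rough sketch of what the resulting markup looks like (the table name is illustrative; element names follow the RDL schema):

```xml
<Tablix Name="ProductTable">
  <!-- Message displayed when the dataset returns no rows -->
  <NoRowsMessage>No records found</NoRowsMessage>
  <!-- The tablix-level Style controls the message's font and color -->
  <Style>
    <Color>Red</Color>
    <FontStyle>Italic</FontStyle>
  </Style>
</Tablix>
```

Knowing where the property lives in the RDL is handy if you ever need to set the message across many reports by editing the definitions directly.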
This is how my report looks after adding the No Rows message.
Stay tuned for more!
Data science is a discipline that combines elements from various fields, such as mathematics, machine learning, statistics, computer programming, data warehousing, pattern recognition, uncertainty modeling, computer science, high-performance computing, visualization, and others.
According to Cathy O’Neil and Rachel Schutt, two luminaries in the field of Data Science, there are about seven disciplines that even data scientists in training can easily identify as part of their tool set:
Most data scientists, however, are experts in only a couple of these disciplines and proficient in another two or three – that’s why Data Science is a team sport.
I’ve definitely learned the importance of teamwork in this field over the last few months, while working with the Perficient Data Science team on a Big Data Lab.
Ultimately, the goal of Data Science is to extract meaning from data and create products from the data itself. Data is the raw material used for the study of “the generalizable extraction of knowledge”.
With data scaling up by the day, it should not come as a surprise that Big Data would play an important role in a data scientist’s work – herein lies the importance of our Big Data Lab and our teamwork.
Our Big Data Lab is the place where Data Science’s many underlying disciplines come together to create something greater than the summation of our individual knowledge and expertise – synergistic teamwork.
The personal computer, the internet, digital music players (think iPods), smartphones, and tablets are just a few of the disruptive technologies that have become commonplace in our lifetime. What is consistent about these technology disruptions is that they have all changed the way we work, live, and play. Whole industries have grown up around these technologies. Can you imagine a major corporation being competitive in today’s world without personal computers?
Big Data is another disruptive technology. It is spawning its own industry, with hundreds of startups, and every major technology vendor seems to have a “Big Data offering.” Soon, companies will need to leverage Big Data to stay competitive. The Big Data technology disruption in an enterprise’s data architecture is significant. How we source, integrate, process, analyze, manage, and deliver data will evolve and change. Big Data truly is changing everything! Over the next few weeks I will be focusing my blogging on how Big Data is changing our enterprise information architecture, covering its effect on MDM, data integration, analytics, and overall data architecture. Stay tuned!
Tables of contents have always helped readers navigate through thick volumes. In SSRS, this feature can be extended to our users so they can navigate through reports that span many pages. The table of contents in SSRS is called the document map: a clickable table of contents that takes the user directly to the part of the report that he or she wants to see. For example, consider a library with hundreds of thousands of books. Books are categorized into paperback and hardcover, and further categorized into genres such as fiction, murder mystery, biography, etc. A document map would be particularly helpful for a librarian who wants to see a list of all hardcover fiction books.
Let’s see how a document map is created and how this usability feature can be extended to our users.
For illustration, I have created a tabular report using a wizard. For those interested, this is how my table looks.
Product types here are Candles, Hand Sanitizers, and Soaps.
Product detail here is the type of fragrance, and In Store is a date field indicating when the product arrived in the store.
When I run my report, I see that there are 20 pages of data. Let’s say I want to find the in-store date for fragrance type = “Mint”. I would first have to find which product the fragrance belongs to, and in doing so I might have to go through the entire result set. Let’s create a document map and see how it can help us.
Before creating the document map, we know that “Mint” is a product detail; therefore, we need the document map on this field.
Go to your canvas and under Row Groups, click on Product_Detail. Go to the Advanced tab and under Document map, select Product_Detail in the drop down.
Click OK and run the report.
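In the RDL behind the report, this step adds a DocumentMapLabel to the group definition. A rough sketch, using the group and field names from our example:

```xml
<Group Name="Product_Detail">
  <GroupExpressions>
    <GroupExpression>=Fields!Product_Detail.Value</GroupExpression>
  </GroupExpressions>
  <!-- Each distinct value becomes a clickable entry in the document map -->
  <DocumentMapLabel>=Fields!Product_Detail.Value</DocumentMapLabel>
</Group>
```

Because the label is an expression, it does not have to match the group expression; you could, for example, concatenate the product type and detail into a single map entry.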
Your report should look like the screenshot given below. Clicking on any of the entries in the document map will take you to that data point.
Stay tuned for more!
In the past few weeks, we created a variety of reports, from simple tabular reports to colorful pie charts to user-friendly drill-down reports. SSRS is a very friendly tool, developed to provide world-class reports to the user community. While browsing through several business intelligence forums, I found a couple of questions that come up a lot: many users wanted to know how to create reports with one graph per page, and how to name the tabs in an exported report. This article is dedicated to answering those questions.
This article is named “Easy Hacks” because both problems can be addressed with a few clicks. You will see how.
Question 1: How can I have one graph per tab?
For our illustration purposes, I have created a dummy report with two graphs. You can create your own graphs to test these steps.
Select the first chart by clicking on it and open Chart Properties. Under the General tab, select “Add a page break after.”
By adding a page break, we make sure that the second graph appears on the second tab; adding a page break after every graph ensures that the next graph appears on a new page.
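In RDL terms, that checkbox sets a PageBreak element on the chart. A sketch of the generated markup (the chart name is illustrative):

```xml
<Chart Name="Chart1">
  <!-- Start a new page after this chart: a new worksheet in Excel export -->
  <PageBreak>
    <BreakLocation>End</BreakLocation>
  </PageBreak>
</Chart>
```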
Question 2: How can I have named tabs in my SSRS Excel report?
First, you will need Report Builder 3.0 to get named tabs in an Excel export.
Click on your chart and, in the Properties pane on the right, type in a name for your tab.
Export your report in Excel format to see the named tabs.
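Under the hood, the tab name comes from the chart's PageName property in the RDL; when the report is exported to Excel, each page's name becomes the worksheet name. A sketch (chart and page names are illustrative):

```xml
<Chart Name="Chart1">
  <PageBreak>
    <BreakLocation>End</BreakLocation>
  </PageBreak>
  <!-- Becomes the worksheet (tab) name when exported to Excel -->
  <PageName>Sales by Product</PageName>
</Chart>
```

PageName accepts an expression, so the tab name can also be driven by the data or by report parameters.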
Stay tuned for more.
In my previous posts, we saw how to create a table report and a pie chart. Today, I am adding another dimension to our tabular matrix by giving users the option to expand or collapse a report. Such reports are called drill-down reports in SSRS.
If you haven’t yet checked out my post on creating a simple table report, you can do so by visiting – http://blogs.perficient.com/enterpriseinformation/2013/10/25/creating-first-ssrs-report-part-2/
I am going to create a quick and simple matrix report with Product_type as the parent group and Product_detail as the child group.
Step 1: For illustration purposes, I am assuming that we want to show or hide the Product_detail field. Right-click on the Product_detail field and choose Group Properties.
Step 2: Choose Visibility tab and select Hide.
(In this step, we chose “Hide” because we want the report to open collapsed)
Step 3: Select “Display can be toggled by this report item:” and choose “Product_Type1” from the dropdown.
This means the visibility of the “Product_detail” field will be toggled when we expand or collapse the “Product_Type1” item.
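In the report's RDL, these choices translate into a Visibility element on the group's tablix member. A sketch using our example's names:

```xml
<!-- Tablix member for the Product_detail group -->
<TablixMember>
  <Visibility>
    <!-- Hidden=true makes the report open collapsed -->
    <Hidden>true</Hidden>
    <!-- Clicking the Product_Type1 text box toggles this group's visibility -->
    <ToggleItem>Product_Type1</ToggleItem>
  </Visibility>
</TablixMember>
```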
Run the report and click on the + sign to expand and the – sign to collapse the groups. I hope you enjoy this post. Stay tuned for more!
Last month Oracle announced the Oracle In-Memory database option. The overall message is that once it is installed, you can turn this “option” on and Oracle becomes an in-memory database. I do not think it will be quite that simple. However, I believe Oracle is on the right track with this capability.
There are two main messages in Oracle In-Memory’s vision which I view as critical capabilities in a modern data architecture. The first is the ability to store and process data based on the temperature of the data.
That is, hot, highly accessed data should be kept in DRAM, or as close to DRAM as possible. As the temperature decreases, data can be stored on flash and, for cold, rarely accessed data, on disk (either in the Oracle DB or in Hadoop). Of course, we can store data of different temperatures today; it is the second feature, making this storage transparent to the application, that makes the capability so valuable. An application programmer, data scientist, or report developer should not have to know where the data is stored; it should be transparent. That the Oracle DB or a DBA can optimize the storage of data based on the cost/performance of the storage tier, without having to consider compatibility with the application, the cluster (RAC), or recoverability, is quite powerful and useful. Yes, Oracle has been moving this way for years, but now it has most, if not all, of the pieces.
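To make the temperature idea concrete, here is a sketch of what tiering might look like using Oracle's announced INMEMORY SQL syntax (table names are illustrative, and exact clauses may differ in the shipped release):

```sql
-- Hot data: populate into the in-memory column store at high priority
ALTER TABLE orders INMEMORY PRIORITY HIGH;

-- Warm data: eligible for the column store, populated at lower priority
ALTER TABLE order_history INMEMORY PRIORITY LOW;

-- Cold data: left on flash/disk, excluded from the column store
ALTER TABLE order_archive NO INMEMORY;
```

The key point is that applications query these tables identically; the placement decision lives entirely in the DDL, which is what makes the storage transparent.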
Despite the fact that the In-Memory option leverages a lot of existing core code, Oracle IT shops will need to remember that this is a version 1 product. Plan accordingly. Understand the costs and architectural impacts. Implement the Oracle In-Memory option on a few targeted applications first, and then develop standards for its use. A well-planned, standards-based approach will ensure that your company maximizes the return on its Oracle In-Memory investment.
This month Oracle is releasing its new in-memory database. Essentially, it is an option that leverages and extends the existing RDBMS code base. Now, with Microsoft’s recent entry, all four of the mega-vendors (IBM, SAP, Microsoft, and Oracle) have in-memory database products.
Which one is the best fit for a company will depend on a number of factors. If a company is happy with its present RDBMS vendor, then that standard should be evaluated first. However, if a company has more than one RDBMS vendor, or if it is looking to make a switch, a more comparative evaluation is needed. In this case, companies should evaluate:
There are other evaluation areas as well. For instance, SAP’s HANA offering has a robust BI metadata layer (think Business Objects Universe) that may be of value to a number of companies.
In-memory databases are changing and evolving quickly, so make sure the appropriate due diligence is completed before investing in a selected technology.
IBM Cognos TM1 is well known as the planning, analysis, and forecasting software that delivers flexible solutions to address requirements across an enterprise, as well as real-time analytics, reporting, and what-if scenario modeling. Perficient, in turn, is well known for delivering expertly designed TM1-based solutions.
Perhaps phase zero of a typical analytics project would involve our topology experts determining the exact server environment required to support the implementation of a number of TM1 servers (based upon not only industry proven practices, but our own breadth of practical “in the field” experiences). Next would be the procurement and configuration of said environment (and prerequisite software) and finally the installation of Cognos TM1.
It doesn’t stop there
As TM1 development begins, our engineers work closely with internal staff not only to outline processes for the (application and performance) testing and deployment of developed TM1 models, but also to establish a maintainable support structure for after the “go live” date. “Support” includes not only the administration of the developed TM1 application but also the “road map” for assigning responsibilities such as:
Earlier this year the Perficient analytics team teamed up with the IBM Cloud team to offer an interesting alternative to the “typical”: Cognos TM1 as a service in the cloud.
Using our internal TM1 models and colleagues literally all over the country, we evaluated and tested the viability of a fully cloud based TM1 solution.
What we found was that it works, and works well, offering unique advantages to our customers:
In the field
Once we were intimate with all of the “ins and outs” of TM1 10.2 on a cloud platform, we were able to work directly with IBM to demonstrate how a cloud-based solution would address the specific needs of one of our larger customers. After that, the Perficient team “on the ground” developed and deployed a proof of concept using real customer data, and partnered with the customer for hands-on evaluation and testing. Once the results were in, it was unanimous: “full speed ahead!”
A Versatile platform
During the project life cycle, the cloud environment was seamless, allowing Perficient developers to work (at the client site or remotely) and complete all necessary tasks without issue. The IBM cloud team was available (24/7) to analyze any perceived bottlenecks and, when required, to “tweak” things per the Perficient team’s suggestions, ensuring an accurately configured cloud and a successful, on-time solution delivery.
Built upon our internal team’s experience and IBM’s support, our delivered cloud-based solution is robust, cutting edge, and highly scalable.
Even given everyone’s extremely high expectations, the project team was delighted and reported back the following major takeaways from the experience:
In addition, IBM Concert (provided as part of the cloud experience) is a (quote) “wonderful tool for our user community to combine both TM1 & BI to create intuitive workflows and custom dashboards”.
More to Come
To be sure, you’ll be hearing much more about Concert & Cognos in the cloud and when you do, you can count on the Perficient team for expert delivery.
Doug Cutting – Hadoop creator – is reported to have explained how the name for his Big Data technology came about:
“The name my kid gave a stuffed yellow elephant. Short, relatively easy to spell and pronounce, meaningless, and not used elsewhere: those are my naming criteria.”
The term, of course, evolved over time and almost took on a life of its own… this little elephant kept on growing, and growing… to the point that, nowadays, the term Hadoop is often used to refer to a whole ecosystem of projects, such as:
This is a sizable portion of the Big Data ecosystem… an ecosystem that keeps on growing almost by the day. In fact, we could spend a considerable amount of time describing additional technologies out there that play an important part in the Big Data symphony – DataStax, Sqrrl, Hortonworks, Cloudera, Accumulo, Apache, Ambari, Cassandra, Chukwa, Mahout, Spark, Tez, Flume, Fuse, YARN, Whirr, Grunt, HiveQL, Nutch, Java, Ruby, Python, Perl, R, NoSQL, PigLatin, Scala, etc.
Interestingly enough, most of the aforementioned technologies are used in the realm of Data Science as well, mostly because the main goal of Data Science is to make sense of, and generate value from, all data in its many forms, shapes, structures, and sizes.
In my next blog post, we’ll see how Big Data and Data Science are actually two sides of the same coin, and how whoever does Big Data, is actually doing Data Science as well to some extent – wittingly, or unwittingly.