Perficient Enterprise Information Solutions Blog


Displaying Custom Messages in SSRS

SSRS is a powerful tool not just because it presents query results in good-looking charts, but because it enhances the user experience.  The reports are so intuitive that users can navigate and export data without much training.  However, as business analysts, data analysts, or report designers, it is our responsibility to extend these usability features to our users at every step.

We know that if the SQL query returns no rows, SSRS will display an empty table to our customers. As a report designer and a user myself, an empty table would worry me and make me think the report is not pulling data correctly. Instead, I would like to show a clear message indicating why there is no data, for example: "No records found."

Let's see how to add custom messages in SSRS.

For illustration, I have created a dummy table that contains 3 Columns:

1.)    Product

2.)    Product_Detail

3.)    Count

[Screenshot cm1: dummy table with Product, Product_Detail, and Count columns]

In order to add a custom message:

1.)    Select your table

2.)    Go to Properties

3.)    Scroll down to the NoRowsMessage property and type your message in the box.

[Screenshot cm2: the NoRowsMessage property in the Properties pane]

You can also change the color and font of the message in the No Rows section of the table's properties.
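If you prefer to work with the report definition directly, the same setting is stored in the RDL behind the report. Here is a minimal sketch, assuming a tablix named Tablix1 (the tablix name and message text are just examples):

    <Tablix Name="Tablix1">
      <!-- Message displayed when the dataset returns no rows -->
      <NoRowsMessage>No records found</NoRowsMessage>
      <!-- rest of the tablix definition elided -->
    </Tablix>

The color and font you pick in the No Rows section are saved in the same tablix definition.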

This is how my report looks after I add my No Rows message :)

[Screenshot cm3: the report displaying the custom No Rows message]

Stay Tuned for more :)

Data Science = Synergistic Teamwork

Data science is a discipline that combines elements from various fields, such as mathematics, machine learning, statistics, computer programming, data warehousing, pattern recognition, uncertainty modeling, computer science, high-performance computing, visualization, and others.

According to Cathy O'Neil and Rachel Schutt, two luminaries in the field of Data Science, there are about seven disciplines that even data scientists in training can easily identify as part of their toolset:

  • Statistics
  • Mathematics
  • Machine Learning
  • Computer Science
  • Data Visualization
  • Domain Expertise
  • Communication and Presentation Skills

Most data scientists, however, are experts in only a couple of these disciplines and proficient in another two or three – that’s why Data Science is a team sport.

I've definitely learned the importance of teamwork in this field over the last few months while working with the Perficient Data Science team on a Big Data Lab.

Ultimately, the goal of Data Science is to extract meaning from data and create products from the data itself. Data is the raw material used for the study of “the generalizable extraction of knowledge”.

With data scaling up by the day, it should not come as a surprise that Big Data would play an important role in a data scientist’s work – herein lies the importance of our Big Data Lab and our teamwork.

Our Big Data Lab is the place where Data Science’s many underlying disciplines come together to create something greater than the summation of our individual knowledge and expertise – synergistic teamwork.

Disruptive Scalability

The personal computer, the internet, digital music players (think iPods), smartphones, and tablets are just a few of the disruptive technologies that have become commonplace in our lifetime.  What is consistent about these technology disruptions is that they have all changed the way we work, live, and play.  Whole industries have grown up around these technologies.  Can you imagine a major corporation being competitive in today's world without personal computers?

Big Data is another disruptive technology.  Big Data is spawning its own industry, with hundreds of startups, and every major technology vendor seems to have a "Big Data offering."  Soon, companies will need to leverage Big Data to stay competitive.  The Big Data technology disruption in an enterprise's data architecture is significant: how we source, integrate, process, analyze, manage, and deliver data will evolve and change.  Big Data truly is changing everything!  Over the next few weeks I will be focusing my blogging on how Big Data is changing our enterprise information architecture.  Big Data's effect on MDM, data integration, analytics, and overall data architecture will be covered.  Stay tuned!

Creating Table of Contents for SSRS reports

A table of contents has always helped readers navigate thick volumes. In SSRS, the same feature can be extended to our users to help them navigate reports that span many pages. The table of contents in SSRS is called the document map: a clickable table of contents that takes the user directly to the part of the report he or she wants to see. For example, consider a library with hundreds of thousands of books. Books are categorized into paperback and hardcover, and further into genres such as fiction, murder mystery, biographies, etc. A document map would be particularly helpful for a librarian who wants to see a list of all hardcover fiction books.

Let's see how a document map is created and how this usability feature can be extended to our users.

For illustration, I have created a tabular report using a wizard. For those interested, this is how my table looks.

[Screenshot dm1: the tabular report design]

The product types here are Candles, Hand Sanitizers, and Soaps.

Product detail is the type of fragrance, and In Store is a date field indicating when the product arrived in the store.

When I run my report, I see that there are 20 pages of data. Let's say I want to find the in-store data for fragrance type = "Mint". I would first have to figure out which product the fragrance belongs to, and in doing so I might have to go through the entire result set. Let's create a document map and see how it can help us.

One thing we know before creating the document map is that "Mint" is a product detail; therefore, we need the document map on this field.

Go to your canvas and, under Row Groups, open the properties of the Product_Detail group. Go to the Advanced tab and, under Document map, select Product_Detail in the drop-down.
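Behind the scenes, this drop-down sets a document-map label on the row group in the report's RDL. A rough sketch of what the group definition might look like after this step (group and field names taken from this example):

    <TablixMember>
      <Group Name="Product_Detail">
        <GroupExpressions>
          <GroupExpression>=Fields!Product_Detail.Value</GroupExpression>
        </GroupExpressions>
        <!-- Each distinct value becomes an entry in the document map -->
        <DocumentMapLabel>=Fields!Product_Detail.Value</DocumentMapLabel>
      </Group>
      <!-- sort, header, and child members elided -->
    </TablixMember>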

[Screenshot dm2: the group's Advanced settings with Document map set to Product_Detail]

[Screenshot dm3]

Click OK and run the report.

Your report should look like the screenshot below. Clicking on any entry in the document map will take you directly to that data point.

[Screenshot dm4: the rendered report with the document map pane]

Stay tuned for more :)

 

SSRS – Easy Hacks

In the past few weeks, we have created a variety of reports, from simple tabular reports to colorful pie charts to user-friendly drill-down reports.  SSRS is a very friendly tool, built to deliver world-class reports to the user community.  While browsing several business intelligence forums, I found a couple of questions that come up a lot: many users want to know how to create reports with one graph per page, and how to name the tabs in the exported report.  This article is dedicated to answering those questions.

This article is named "Easy Hacks" because both problems can be addressed with a few clicks.  You will see how.

Question 1: How can I have 1 graph per tab?

For illustration purposes, I have created a dummy report with two graphs.  You can create your own graphs to test these steps.

[Screenshot eh1: dummy report with two charts]

Select the first chart by clicking on it and open Chart Properties.  Under the General tab, select "Add a page break after."

[Screenshot eh2: Chart Properties with "Add a page break after" selected]

By adding a page break, we make sure that the second graph appears on the second tab; adding a page break after every graph ensures that each subsequent graph starts on a new page.
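For reference, that checkbox maps to a page-break setting on the chart in the report's underlying RDL. A minimal sketch, assuming a chart named Chart1:

    <Chart Name="Chart1">
      <!-- Force a page break after this chart renders -->
      <PageBreak>
        <BreakLocation>End</BreakLocation>
      </PageBreak>
      <!-- chart data, series, and axes elided -->
    </Chart>

When the report is exported to Excel, each page becomes its own worksheet, which is why one page break per chart yields one chart per tab.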

Easy, huh?

Question 2: How can I have named tabs in my SSRS Excel report?

First, you will need Report Builder 3.0 to get named tabs in an Excel export.

Click on your chart and, in the properties pane on your right, type a name for your tab into the chart's PageName property.

[Screenshot eh3: naming the tab in the chart's properties]

Export your report in Excel format to see the named tabs :)
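Behind the scenes, the tab name you typed corresponds to the chart's PageName property (introduced with Report Builder 3.0 / RDL 2010). A rough sketch of the RDL, with an example chart name and tab name:

    <Chart Name="Chart1">
      <!-- Used as the worksheet (tab) name when exporting to Excel -->
      <PageName>Sales by Product</PageName>
      <!-- rest of the chart definition elided -->
    </Chart>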

Stay tuned for more.

Creating a drill down report – 3 easy steps

In my previous posts, we saw how to create a table report and a pie chart. Today, I am adding another dimension to our tabular matrix by giving users the option to expand or collapse the report. Such reports are called drill-down reports in SSRS.

If you haven’t yet checked out my post on creating a simple table report, you can do so by visiting – http://blogs.perficient.com/enterpriseinformation/2013/10/25/creating-first-ssrs-report-part-2/

I am going to create a quick and simple matrix report with Product_type as the parent group and Product_detail as the child group.

[Screenshot dd1: matrix report with Product_type and Product_detail groups]

Step 1: For illustration purposes, I am assuming that we want to show or hide the Product_detail field. Right-click on the Product_detail field and choose Group Properties.

 

Step 2: Choose the Visibility tab and select Hide.

(In this step, we chose “Hide” because we want the report to open collapsed)

[Screenshot dd2: Group Properties, Visibility tab, with Hide selected]

Step 3: Select "Display can be toggled by this report item" and choose "Product_Type1" from the drop-down.

This step means the "Product_detail" field will be shown or hidden based on whether we expand or collapse the "Product_Type1" item.
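In the RDL, steps 2 and 3 together add a visibility block to the Product_detail group member, roughly like the sketch below (the toggle item is the name of the Product_Type text box in this report):

    <TablixMember>
      <Group Name="Product_detail">
        <!-- group expressions elided -->
      </Group>
      <Visibility>
        <!-- Hidden by default, so the report opens collapsed -->
        <Hidden>true</Hidden>
        <!-- The +/- toggle appears on this text box -->
        <ToggleItem>Product_Type1</ToggleItem>
      </Visibility>
      <!-- header and child members elided -->
    </TablixMember>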

[Screenshot dd3: the toggle item set to Product_Type1]

Run the report, and click the + sign to expand and the – sign to collapse the groups.  I hope you enjoy this post.  Stay tuned for more :)


Thoughts on Oracle Database In-Memory Option

Last month Oracle announced the Oracle Database In-Memory option. The overall message is that, once installed, you can turn this "option" on and Oracle becomes an in-memory database.  I do not think it will be that simple. However, I believe Oracle is on the right track with this capability.

There are two main messages in Oracle In-Memory's vision which I view as critical capabilities in a modern data architecture. First is the ability to store and process data based on the temperature of the data. That is, hot, highly accessed data should be kept in DRAM or as close to DRAM as possible. As the temperature decreases, data can be stored on flash, and cold, rarely accessed data on disk (either in the Oracle DB or in Hadoop). Of course, we can store data at different temperatures today; it is the second feature, making this storage transparent to the application, that makes the capability truly valuable. An application programmer, data scientist, or report developer should not have to know where the data is stored. It should be transparent. The ability of the Oracle DB or a DBA to optimize the storage of data based on the cost and performance of the storage media, without having to consider compatibility with the application, the cluster (RAC), or recoverability, is quite powerful and useful. Yes, Oracle has been moving this way for years, but now it has most, if not all, of the pieces.

Despite the fact that the In-Memory option leverages a lot of existing core code, Oracle IT shops will need to remember that this is a version 1 product.  Plan accordingly. Understand the costs and architectural impacts. Implement the Oracle In-Memory option on a few targeted applications and then develop standards for its use. A well-planned, standards-based approach will ensure that your company maximizes its return on its Oracle In-Memory investment.

Evaluating In-Memory DBs

This month Oracle is releasing its new in-memory database.  Essentially, it is an option that leverages and extends the existing RDBMS code base.  Now, with Microsoft's recent entry, all four of the mega-vendors (IBM, SAP, Microsoft, and Oracle) have in-memory database products.

Which one is the best fit for a company will depend on a number of factors. If a company is happy with its present RDBMS vendor, then that standard should be evaluated first. However, if a company has more than one RDBMS vendor, or if it is looking to make a switch, a more comparative evaluation is needed. In this case, companies should evaluate:

  1. Maturity of the Offering. The vendors' products differ in their support for "traditional" RDBMS functionality such as referential integrity, stored procedures, and online backups, to name a few.  Make sure you understand the vendor's current and near-term support for the features you require.
  2. Performance. All in-memory database vendors promise, and by all accounts deliver, significantly increased performance.  However, the vendor's ability to provide the desired level of performance on the company's proposed query profile, and the ability of the vendor's technology to scale out, should be evaluated. Compression and columnar storage also affect performance, so understanding how these features support a company's requirements is necessary.
  3. Sourcing of On-Disk Data. Probably the biggest difference in architecture and maturity between the vendors is their ability to source data from on-disk storage systems, whether files, traditional RDBMSs, or Hadoop systems.
  4. Licensing & Cost Model. The costs associated with licensing and implementing a technology need to be closely evaluated. How much training is required to develop competency with a new technology? Is the licensing model favorable to how an enterprise uses and purchases licenses?

There are other evaluation areas as well. For instance, SAP's HANA offering has a robust BI metadata layer (think Business Objects Universe) that may be of value to a number of companies.

In-memory databases are changing and evolving quickly, so make sure the appropriate due diligence is completed before investing in a selected technology.

Perficient takes Cognos TM1 to the Cloud

IBM Cognos TM1 is well known as planning, analysis, and forecasting software that delivers flexible solutions to address requirements across an enterprise, as well as real-time analytics, reporting, and what-if scenario modeling. Perficient, in turn, is well known for delivering expertly designed TM1-based solutions.

Analytic Projects

Perhaps phase zero of a typical analytics project would involve our topology experts determining the exact server environment required to support the implementation of a number of TM1 servers (based not only upon industry-proven practices, but upon our own breadth of practical "in the field" experience). Next would come the procurement and configuration of that environment (and prerequisite software), and finally the installation of Cognos TM1.

It doesn’t stop there

As TM1 development begins, our engineers work closely with internal staff not only to outline processes for (application and performance) testing and deployment of the developed TM1 models, but also to establish a maintainable support structure for after the "go live" date. "Support" includes not only the administration of the developed TM1 application but also a "road map" assigning responsibilities such as:

  • Hardware monitoring and administration
  • Software upgrades
  • Expansion or reconfiguration based upon additional requirements (i.e. data or user base changes or additional functionality or enhancements to deployed models)
  • And so on…

Teaming Up

Earlier this year the Perficient analytics team teamed up with the IBM Cloud team to offer an interesting alternative to the “typical”: Cognos TM1 as a service in the cloud.

Using our internal TM1 models and colleagues all over the country, we evaluated and tested the viability of a fully cloud-based TM1 solution.

What we found was that it works, and works well, offering unique advantages to our customers:

  • Lowers the “cost of entry” (getting TM1 deployed)
  • Lowers the total cost of ownership (ongoing “care and feeding”)
  • Reduces the level of capital expenditures (doesn’t require the procurement of internal hardware)
  • Reduces IT involvement (and therefore expense)
  • Removes the need to plan for, manage and execute upgrades when newer releases are available (new features are available sooner)
  • (Licensed) users anywhere in the world have access from day 1 (regardless of internal constraints)
  • Provides for the availability of auxiliary environments for development and testing (without additional procurement and support)

In the field

Once we were intimate with all of the "ins and outs" of TM1 10.2 on a cloud platform, we were able to work directly with IBM to demonstrate how a cloud-based solution would address the specific needs of one of our larger customers. After that, the Perficient team "on the ground" developed and deployed a proof of concept using real customer data, and partnered with the customer for the hands-on evaluation and testing. Once the results were in, it was unanimous: "full speed ahead!"

A Versatile platform

During the project life cycle, the cloud environment was seamless, allowing Perficient developers to work (at the client site or remotely) and complete all necessary tasks without issue. The IBM cloud team was available 24/7 to analyze any perceived bottlenecks and, when required, to "tweak" things per the Perficient team's suggestions, ensuring an accurately configured cloud and a successful, on-time solution delivery.

Bottom Line

Built upon our internal team's experience and IBM's support, the delivered cloud-based solution is robust, cutting edge, and highly scalable.

Major takeaways

Even given everyone’s extremely high expectations, the project team was delighted and reported back the following major takeaways from the experience:

  • There is no "hardware administration" to worry about.
  • No software installation headaches to hold things up!
  • The cloud provided an accurately configured VM, including dedicated RAM and CPU based exactly upon the needs of the solution.
  • The application was easily accessible, yet also very secure.
  • Everything was "powerfully fast"; we did not experience any "WAN effects".
  • The 24/7 support provided by the IBM cloud team was "stellar".
  • The managed RAM and "no limits" CPUs set things up to take full advantage of features like TM1's MTQ.
  • Users could choose a completely web-based experience or install CAFÉ on their machines.

In addition, IBM Concert (provided as part of the cloud experience) was described by the team as a "wonderful tool for our user community to combine both TM1 & BI to create intuitive workflows and custom dashboards".

More to Come

To be sure, you'll be hearing much more about Concert & Cognos in the cloud, and when you do, you can count on the Perficient team for expert delivery.

A little stuffed animal called Hadoop

Doug Cutting – Hadoop creator – is reported to have explained how the name for his Big Data technology came about:

“The name my kid gave a stuffed yellow elephant. Short, relatively easy to spell and pronounce, meaningless, and not used elsewhere: those are my naming criteria.”

The term, of course, evolved over time and almost took on a life of its own… this little elephant kept on growing, and growing… to the point that, nowadays, the term Hadoop is often used to refer to a whole ecosystem of projects, such as:

  1. Common – components and interfaces for distributed filesystems and general I/O
  2. Avro – serialization system for RPC and persistent data storage
  3. MapReduce – distributed data processing model and execution environment running on large clusters of commodity machines
  4. HDFS – distributed filesystem running on large clusters of commodity machines
  5. Pig – data flow language / execution environment to explore huge datasets (running on HDFS and MapReduce clusters)
  6. Hive – distributed data warehouse, manages data stored in HDFS providing a query language based on SQL for querying the data
  7. HBase – distributed, column-oriented database that uses HDFS for its underlying storage, supporting both batch-style computations and random reads
  8. ZooKeeper – distributed, highly available coordination service, providing primitives to build distributed applications
  9. Sqoop – transfer bulk data between structured data stores and HDFS
  10. Oozie – service to run and schedule workflows for Hadoop jobs

This is a sizable portion of the Big Data ecosystem… an ecosystem that keeps on growing almost by the day. In fact, we could spend a considerable amount of time describing additional technologies out there that play an important part in the Big Data symphony – DataStax, Sqrrl, Hortonworks, Cloudera, Accumulo, Apache, Ambari, Cassandra, Chukwa, Mahout, Spark, Tez, Flume, Fuse, YARN, Whirr, Grunt, HiveQL, Nutch, Java, Ruby, Python, Perl, R, NoSQL, PigLatin, Scala, etc.

Interestingly enough, most of the aforementioned technologies are used in the realm of Data Science as well, mostly due to the fact that the main goal of Data Science is to make sense out of and generate value from all data, in all of its many forms, shapes, structures and sizes.

In my next blog post, we’ll see how Big Data and Data Science are actually two sides of the same coin, and how whoever does Big Data, is actually doing Data Science as well to some extent – wittingly, or unwittingly.