by November 20th, 2015
“Godfrey, I think history is going to judge you as one of the truly iconic Silicon Valley CEOs.” –Greg McDowell, JMP Securities Analyst (11/19/2015)
Alongside Splunk’s Q3 earnings release came the announcement that Godfrey Sullivan would be handing the CEO reins to Doug Merritt.
I don’t know Silicon Valley history well enough to confirm or deny the statement above, but I would offer my own twist on Mr. McDowell’s statement:
Godfrey, I think history is going to judge you as one of the truly iconic Analytics CEOs.
- Godfrey built Hyperion Solutions and sold it to Oracle in 2007
- He was on the board at Informatica for 5 years
- He has been on the board of Citrix for 10 years
- He joined Splunk in 2008, took the company public and grew it from a $40 Million revenue company to one with a $600 Million run rate and an $8 Billion market capitalization.
Godfrey has created value for shareholders, customers, employees and partners through a revolutionary approach to how customers adopt, use and value Splunk’s software.
When people ask me why I am excited about Splunk, I mention the fundamentally different technology built on the schema-on-read paradigm, and I talk about the value customers can get. I also talk about Godfrey. Proven, Fun, Visionary… he is certainly a reason I have been so excited about Splunk, its culture and what it can be.
History offers a variety of ways to recognize people’s contributions. If Godfrey were a baseball player, he would be a shoo-in for the Hall of Fame. If there were a Mount Rushmore for Analytics, he would be on it.
The good news about this inevitable transition, as confirmed on the Q3 earnings call, is that it was a calculated plan: Godfrey essentially hand-picked his successor and trained him in “the Godfrey way.” True to the rest of his track record, Godfrey is going out the right way too. The company couldn’t be better positioned for the future. We look forward to the next phase of the journey.
by March 4th, 2015
As companies embark on Digital Transformation leveraging Big Data, key concerns and challenges are amplified, especially in the near term, before the supply of technology and talent adjusts to demand. Looking at the earlier post, Big Data Challenges, the top 3 concerns were:
- Identifying the Business value/Monetizing the Big Data
- Setting up the Governance to manage Big Data
- Availability of skills
Big Data Skills can be broadly classified into 4 categories:
- Business / Industry Knowledge
- Analytical Expertise
- Big Data Architecture
- Big Data Tools (Infrastructure management, Development)
Value creation, or monetizing Big Data (see Architecture needed to monetize APIs), depends on business and analytical talent, and the talent gap is especially acute in the analytical area. Closing that shortage by educating staff and augmenting teams through partner companies is critical for niche, must-have technology. As tools evolve, keeping up with the architecture becomes very important, since past tool and platform shortcomings are often addressed at the cost of new complexities.
While businesses continue to search for Big Data gold, system integrators and product vendors are refining methods, best practices and modern architectures to shrink time to market. How much of the gap can be closed depends on many factors across companies and their partners.
See also our webinar on: Creating a Next-Generation Big Data Architecture
by November 4th, 2014
Tomorrow I will be giving a webinar on creating business cases for Big Data. One of the reasons for the webinar is that there is very little information available on creating a Big Data business case; most of what exists boils down to “trust me, Big Data will be of value.” The information available on the internet basically states:
More information, loaded into a central Hadoop repository, will enable better analytics, thus making our company more profitable.
Although this statement seems logically true, and most analytical companies have accepted it, it illustrates the 3 most common mistakes we see in creating a business case for Big Data.
The first mistake is not directly linking the business case to the corporate strategy. The corporate strategy is the overall approach the company is taking to create shareholder value. By linking the business case to the objectives in the corporate strategy, one can illustrate the strategic nature of Big Data and how the initiative will support overall company goals.
by October 3rd, 2014
In the Hadoop space we have a number of terms for a Hadoop file system used for data management. Data Lake is probably the most popular. I have heard it called a Data Refinery, as well as some other less mentionable names. The one that has stuck with me is the Data Reservoir, mainly because it is the water analogy that most accurately describes what actually happens in a Hadoop implementation used for data storage and integration.
Consider that data is first landed in the Hadoop file system. This is the un-processed data, just like water running into a reservoir from different sources. Data in this form is only fit for limited use, such as analytics by trained power users. The data is then processed, just as water is processed: treat water and you end up with water that is consumable; go one step further and distill it, and you have water suitable for medical applications. Data works the same way in a Big Data environment. Process it enough and you end up with conformed dimensions and fact tables. Process it even more, and you have data suitable for calculating bonuses or even publishing to government regulators.
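To make the reservoir stages concrete, here is a minimal PySpark sketch. The HDFS paths, zone names, feed and column names are all hypothetical illustrations of the landing/processed/conformed idea, not a prescribed layout.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("data_reservoir_demo").getOrCreate()

# Landing zone: raw, un-processed data -- only fit for use by trained power users.
raw = spark.read.json("hdfs:///reservoir/landing/transactions/")  # hypothetical path and feed

# Processed zone: cleansed, de-duplicated and typed -- the "consumable" data.
consumable = (
    raw.dropDuplicates(["transaction_id"])
       .filter(F.col("amount").isNotNull())
       .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
)
consumable.write.mode("overwrite").parquet("hdfs:///reservoir/processed/transactions/")

# Distilled zone: conformed facts -- fit for calculating bonuses or regulatory reporting.
fact_sales = (
    consumable.groupBy("customer_id", "transaction_date")
              .agg(F.sum("amount").alias("total_amount"))
)
fact_sales.write.mode("overwrite").parquet("hdfs:///reservoir/conformed/fact_sales/")
```

Each write step leaves the earlier zone intact, which is the point of the reservoir analogy: the raw water is still there for those who can use it, while progressively purer forms are produced for broader consumption.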
by October 1st, 2014
The data warehouse has been part of the EIM vernacular for nearly 20 years. The vision of a single source of truth and a single repository for reporting and analysis are two objectives that have resulted in a never-ending journey. The data warehouse has never had enough data, and the quality required for a single version of the truth demands significant investment that only rare business cases can support. Further, the role of the analytical database has generally been difficult to fulfill: ad-hoc analysis on large sets of complex data has been a significant challenge for the traditional data warehouse. Historically, to address this, companies have implemented appliances, analytical data marts, or a varying set of database features and compromises (think bitmap indexing, a variety of hardware and software caching techniques, and indexed stored data, to name a few), all with significant investment and usually significant added overhead.
by September 25th, 2014
While there are several BI technologies, with more entering the fray every day, SSRS has remained a key player in this area for quite some time. One of the biggest advantages of SSRS reporting is that it involves the end user directly and is very intuitive to use.
Let’s go back a few years, when Excel was the go-to tool for dashboarding. Every time a director or VP wanted a report, they would go to their developers to extract information from the database and help build dashboards for their meetings. The end user had to rely on the developers to extract information and then spend several minutes, if not hours, to make a dashboard. This works fine when the meeting is scheduled for a specific day of the week or month; we all know that is a myth, and most meetings happen impromptu. In such cases, there is not enough time to extract data and turn that information into graphs.
This is where SSRS came in as a key player. Built on a strong Microsoft foundation, SSRS brought some of the best and most-needed features:
- Easy connection to databases
- User-friendly interface allowing users to design reports and make changes on the fly
- Report generation at the click of a button
- Subscription-based delivery to send reports on a specific day and time of the month
While these features may not look groundbreaking at first glance, they bring a lot of value. They save a lot of time, and in business that time translates directly into revenue. Developers can design dashboards once and deploy them to a server, and the VP or director can press a button to get those reports on their machine. Furthermore, the reports can be exported in several formats. What I really like about the reports, though, is the look and feel: Microsoft retained the aesthetics of Excel reports, meaning a pie chart in Excel and in SSRS can look exactly the same. This matters to the audience, since most people do not like to see the look of their reports change over time. Another great feature is that SSRS has strong security options, so one can implement role-based reporting.
In summary, SSRS is a power-packed tool, and you should reap the benefits of the great features that come with it.
For information on Microsoft’s future BI roadmap and self-service BI options, check out this post over on our Microsoft blog.
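As one illustration of button-click export, here is a small Python sketch that pulls a deployed report in Excel format through SSRS URL access. The report server name, report path and credentials are hypothetical, and the script assumes the requests and requests_ntlm packages plus Windows authentication on a native-mode report server.

```python
import requests
from requests_ntlm import HttpNtlmAuth  # assumes the requests_ntlm package is installed

# Hypothetical report server and deployed report path.
REPORT_SERVER = "http://reportserver/ReportServer"
REPORT_PATH = "/Sales/MonthlyDashboard"

# SSRS URL access: rs:Format controls the export format
# (EXCELOPENXML for .xlsx on SSRS 2012+, or PDF, CSV, WORDOPENXML, ...).
url = f"{REPORT_SERVER}?{REPORT_PATH}&rs:Command=Render&rs:Format=EXCELOPENXML"

# Credentials are hard-coded here only for illustration; use a secure store in practice.
response = requests.get(url, auth=HttpNtlmAuth("DOMAIN\\user", "password"))
response.raise_for_status()

with open("MonthlyDashboard.xlsx", "wb") as f:
    f.write(response.content)
```

The same URL pattern is what a subscription or a one-click export effectively invokes, which is why the "press a button, get the report" experience is so cheap to deliver once the report is deployed.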
by September 22nd, 2014
Years of work went into building the elusive single version of truth. Despite all the attempts from IT and business, Excel reporting and Access databases were impossible to eliminate. Excel is the number one BI tool in the industry for good reasons: accessibility, speed and familiarity. Almost all BI tools export data to Excel for those reasons. The business will produce the insight it needs as soon as the data is available, manually or otherwise. It is time to come to terms with the fact that change is imminent and there is no such thing as perfect data, only what is good enough for the business. As the saying goes:
‘Perfect is the enemy of Good!’
Waiting for all the business rules and perfect data before producing a report or analytics is too late for the business. Speed is of the essence: when the data is available, the business wants it, and stale data is as good as no data.
In the changing paradigm of Data Management, agile ideas and tools are in play. Waiting months, weeks or even a day to analyze data from the data warehouse is a problem. Data discovery through agile BI tools, which double as ETL, significantly reduces the time to data availability. Data Virtualization provides real-time access to data, along with its metadata, for quicker insights. In-memory data appliances produce analytics in a fraction of the time compared to a traditional data warehouse/BI stack.
- Tools in play:
  - Data Virtualization
  - In-Memory Database (appliances)
  - Data Life Cycle Management
  - Data Visualization
  - Cloud BI
  - Big Data (Data Lake & Data Discovery)
  - Cloud Integration (on-prem and off-prem)
  - Information Governance (Data Quality, Metadata, Master Data)
- Architectural changes: traditional vs. Agile
- Data Management impacts:
  - Data Governance
  - Data Security & Compliance
  - Cloud Application Management
by September 16th, 2014
Big Data is on everyone’s mind these days. Creating an analytical environment involving Big Data technologies is exciting and complex: new technology, and new ways of looking at data that would otherwise remain dark or unavailable. The exciting part of implementing a Big Data solution is making it production-ready.
Once the enterprise comes to rely on the solution, dealing with typical production issues is a must. Expanding data lakes and adding applications that access data, change it and deploy new statistical learning solutions can hurt overall platform performance. In the end, user experience and trust will become an issue if the environment is not managed properly. Models that used to run in minutes may take hours or days as data and algorithm changes are deployed. Having the right DevOps process framework is important to the success of Big Data solutions.
In many organizations the Data Scientist reports to the business, not to IT. Understanding both the business and the technological requirements, and setting up the DevOps process accordingly, is key to making the solutions production-ready.
Key DevOps measures for a Big Data environment (a minimal instrumentation sketch follows the list):
- Data acquisition performance (ingestion to creating a useful data set)
- Model execution performance (Analytics creation)
- Modeling platform / Tool performance
- Software change impacts (upgrades and patches)
- Development to Production – Deployment Performance (Application changes)
- Service SLA Performance (incidents, outages)
- Security robustness / compliance
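To show how the first few measures might be captured in practice, here is a minimal, self-contained Python sketch that times pipeline steps and emits them as metric events. The step names and the two placeholder functions are hypothetical stand-ins for real ingestion and model jobs; in a real environment the metric would go to a monitoring store rather than the log.

```python
import json
import logging
import time
from contextlib import contextmanager
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("bigdata_devops")

@contextmanager
def tracked_step(step_name):
    """Record wall-clock duration and outcome of a pipeline step as a metric event."""
    start = time.time()
    status = "success"
    try:
        yield
    except Exception:
        status = "failure"
        raise
    finally:
        metric = {
            "step": step_name,
            "status": status,
            "duration_sec": round(time.time() - start, 2),
            "ts": datetime.now(timezone.utc).isoformat(),
        }
        # In practice, ship this to a metrics store; here we simply log it.
        log.info(json.dumps(metric))

# Hypothetical pipeline steps -- stand-ins for real ingestion and model jobs.
def ingest_raw_feed():
    time.sleep(0.1)

def run_churn_model():
    time.sleep(0.2)

with tracked_step("data_acquisition"):
    ingest_raw_feed()

with tracked_step("model_execution"):
    run_churn_model()
```

Trending these simple duration and status events over time is often enough to spot the "minutes turning into hours" degradation described above before end users lose trust in the platform.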
One of the top issues is Big Data security: how secure is the data, and who has access to and oversight of it? Putting together a governance framework to manage the data is vital for the overall health and compliance of Big Data solutions. Big Data is just gaining traction, and many of the best practices for Big Data DevOps scenarios have yet to mature.
by September 10th, 2014
The speed at which we receive information from multiple devices, and the ever-changing customer interactions that create new kinds of customer experience, generate DATA! Any company that knows how to harness that data and produce actionable information is going to make a big difference to its bottom line. So why Virtualization? The simple answer is business agility.
As we build the new information infrastructure and tools for modern Enterprise Information Management, we have to adapt and change. Over the last 15 years, the Enterprise Data Warehouse has matured, with proper ETL frameworks and dimensional models.
With the “Internet of Things” (IoT), a lot more data is created and consumed from external sources. Cloud applications create data that may not be readily available for analysis, and not having that data for analysis materially changes the insights we can draw.
Benefits and Caveats of Virtualization
- Plan for the performance impact of Virtualization on the underlying applications, and handle overall refresh delays appropriately
- It is not a replacement for Data Integration (ETL), but it is a quicker way to get access to data in a controlled way
- It may not include all the business rules, which means Data Quality issues may remain
In conclusion, having a Virtualization tool in the Enterprise Data Management portfolio adds agility to Data Management. However, use Virtualization appropriately, to solve the right kind of problem, and not as a replacement for traditional ETL.
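To make the contrast with traditional ETL concrete, here is a minimal Python sketch of the query-time federation idea behind Virtualization: nothing is copied into a warehouse ahead of time, and the two sources (simulated here with in-memory SQLite databases) are queried and joined only when a consumer asks for the virtual view. The source names, tables and columns are purely illustrative.

```python
import sqlite3
import pandas as pd

# Two independent sources, simulated with in-memory SQLite databases.
crm = sqlite3.connect(":memory:")
crm.execute("CREATE TABLE customers (customer_id INTEGER, name TEXT)")
crm.executemany("INSERT INTO customers VALUES (?, ?)", [(1, "Acme"), (2, "Globex")])

billing = sqlite3.connect(":memory:")
billing.execute("CREATE TABLE invoices (customer_id INTEGER, amount REAL)")
billing.executemany("INSERT INTO invoices VALUES (?, ?)", [(1, 120.0), (1, 80.0), (2, 45.0)])

def virtual_customer_revenue():
    """A 'virtual view': query both sources at request time and join in memory."""
    customers = pd.read_sql_query("SELECT customer_id, name FROM customers", crm)
    invoices = pd.read_sql_query(
        "SELECT customer_id, SUM(amount) AS revenue FROM invoices GROUP BY customer_id",
        billing,
    )
    return customers.merge(invoices, on="customer_id", how="left")

# Consumers see current data on every call -- no batch load, no persisted copy.
print(virtual_customer_revenue())
```

The trade-off in the list above is visible even in this toy: every call pushes work onto the source systems, and any business rules not encoded in the view simply are not applied.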
by September 9th, 2014
Cloud BI comes in different shapes and forms, ranging from visualization alone to a full-blown EDW combined with visualization and Predictive Analytics. The truth of the matter is that every niche product vendor offers some unique feature the other suites do not. In most cases you need more than one BI suite to meet all the needs of the enterprise.
Decentralization definitely helps the business achieve agility and respond to market challenges quickly. By the same token, it is how companies end up with silos of information across the enterprise.
Let us look at some scenarios where a cloud BI solution is very attractive for departmental use.
Time to Market
Getting a business case built and approved for a big CapEx project is a time-consuming proposition. Wait times for hardware, software and IT involvement mean even longer delays in scheduling the project, not to mention the push-back to use existing reports or wait for the next release, which is perpetually “just around the corner.”
Business users have an immediate need for analysis and decision-making. The typical turnaround for IT to onboard a new source of data is anywhere from 90 to 180 days. That is a killer for a business that wants the data for analysis now; spreadsheets are still the top BI tool for exactly this reason. With Cloud BI (not just the tool), business users get not only the visualization and other product features but also data that is not otherwise available. Customer analytics with social media analysis, for example, is available as a third-party BI solution, and in the case of such value-added analytics there is a real business reason to go for these solutions.
Power users need ways to slice and dice the data, and need to integrate other non-traditional sources (Excel, departmental cloud applications) to produce a combined analysis. Many BI tools come with lightweight integration (mostly push integration) to make this a reality without much of an IT bottleneck.
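As a small illustration of the kind of lightweight blending power users reach for, here is a hedged Python sketch that joins a local Excel extract with a CSV export from a hypothetical departmental cloud application. The file name, URL and column names are assumptions, and pandas needs the openpyxl package installed to read the spreadsheet.

```python
import pandas as pd

# Local, non-traditional source: a spreadsheet maintained by the department
# (hypothetical file with 'region' and 'budget' columns).
budget = pd.read_excel("regional_budget.xlsx")

# Departmental cloud application: a CSV export pulled over HTTP
# (hypothetical URL with 'region' and 'actual_spend' columns).
actuals = pd.read_csv("https://dept-app.example.com/exports/actuals.csv")

# Combined analysis: budget vs. actuals by region, without a formal IT integration project.
combined = budget.merge(actuals, on="region", how="outer")
combined["variance"] = combined["actual_spend"] - combined["budget"]
print(combined.sort_values("variance", ascending=False))
```

This is exactly the sort of quick, departmental blend that delivers insight in minutes rather than months, which is why the governance questions below still matter.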
So if we can add new capability without much delay and within the departmental budget, where is the rub?
The issue is that this approach does not look at enterprise information holistically. Though speed is critical, it is equally important to engage Governance and IT to secure the information, share it appropriately and integrate it into the enterprise data asset.
As we move into a future of cloud-based solutions, we will be able to remove many of these bottlenecks, but we will also have to deal with the security, compliance and risk-mitigation implications of leaving data in the cloud. Forging a strategy to meet the enterprise’s various BI demands, with proper Governance, will yield the optimum use of resources and solution mix.