Perficient Enterprise Information Solutions Blog


Time well spent

Hi there,

The end of 2015 is fast approaching, with December looming just a week away. For most people, December is packed with the hustle and bustle of last-minute gift shopping, or end-of-year projections and budgets for 2016. Caught up in all this activity, many become so focused on the approaching New Year that they abandon the current year without even a backwards glance, flipping the page in their agenda or tossing the current calendar before the days of December have fully passed. So regardless of what December holds for you, why not take a moment to reflect on some of your accomplishments from 2015 before it is time to usher in any new resolutions for 2016?

In keeping with my own suggestions, here is what I assessed when I looked back over 2015 – time well spent.

The first thing I have to acknowledge is that 2015 has been a very busy and very productive year for me and my team.

Secondly, I am pleased to see that 2015 presented many opportunities to work with an assortment of technologies, like:
• Alteryx – including the Alteryx Visual Analytics Kit for Qlik and for Tableau
• Hadoop ecosystem projects, such as HBase, Hive, Pig, Spark, etc.
• NoSQL technologies, such as MongoDB and Cassandra, along with HBase (of course)
• JethroData
• Tableau
• SAP Business Objects, Crystal Reports, etc.
• Qlik Sense and QlikView
• Oracle Endeca Information Discovery
• Microsoft SSIS, SSAS and SSRS… along with Azure HDInsight
• Caché ObjectScript and MUMPS
• SAS and R – especially in the implementation of Data Science algorithms
• Google BigQuery Analytics, along with other Google technologies, such as GSA (Google Search Appliance)
• TIBCO Spotfire
• IBM Netezza
• Embarcadero

While this list is not exhaustive, it makes me realize that I enjoy working with multiple technologies and want to keep doing so in the future. It also helps me see that even though there are a ton of technologies I would love to work with, I want to focus most specifically on Big Data, Data Science and Advanced Analytics.

Thirdly, as I reflect on 2015 I am pleased to see that another year of experience has brought increased exposure to clients in the healthcare, insurance, risk adjustment and even multi-level marketing fields. Most importantly, as I looked back on the projects of 2015, it occurred to me that this year provided a ton of opportunities to participate in every aspect of full-cycle implementations, from project management to pure development and environment setup. I also worked on strategy and governance engagements, pre-sales and thought leadership.

After taking some time to assess 2015 up to this point, I appreciate that through all the highs and lows that come with any professional career, 2015 was really a professionally fulfilling year. By taking the time to dwell on the year as a whole, I was able to gather a better understanding of my current strengths, and determine opportunities that remain for 2015 during the month of December. Best of all, having a fuller view of 2015 makes me excited to think about what is in store for 2016 – I can honestly say I am looking forward to it! More on that in a future post…

I shared my 2015 with you – not to impress you – but to impress upon you that preparing for the future sometimes means a visit to the past. So take a moment to reflect on your 2015. How has it been so far? With that in mind, what does December have in store for you? How about 2016? I look forward to your comments about your accomplishments and reflections on 2015 so far, and on what comes next for you.

Although this was one of my more reflective posts, be ready to talk tech again in my next blog post when I discuss how TIBCO Spotfire is integrating with big data technologies efficiently and effectively – which I think is the right move for TIBCO and their customers.

Thanks Godfrey Sullivan, First Ballot Hall of Famer

“Godfrey, I think history is going to judge you as one of the truly iconic Silicon Valley CEOs.” –Greg McDowell, JMP Securities Analyst (11/19/2015)

Alongside Splunk’s Q3 earnings release came the announcement that Godfrey Sullivan would be handing over the CEO reins to Doug Merritt.

I don’t know Silicon Valley history enough to confirm or deny the statement above, but if I could offer my own twist and re-write Mr. McDowell’s statement:

Godfrey, I think history is going to judge you as one of the truly iconic Analytics CEOs.

  • Godfrey built Hyperion Solutions and sold it to Oracle in 2007
  • He was on the board at Informatica for 5 years
  • He has been on the board of Citrix for 10 years
  • He joined Splunk in 2008, took the company public and grew it from a $40 million revenue company to one with a $600 million run rate and an $8 billion market capitalization.

Godfrey has created value for shareholders, customers, employees and partners using a revolutionary way to get customers to use and value software from Splunk.

When people ask me why I am excited about Splunk, I mention the fundamentally different technology built on the schema-on-read paradigm, and I talk about the value customers can get.  I also talk about Godfrey.  Proven, Fun, Visionary… he is certainly a reason I have been so excited about Splunk, its culture and what it can be.

There are a variety of ways we have devised throughout history to recognize people’s contributions.  If Godfrey were a baseball player, he would be a shoo-in for the Hall of Fame.  If there were a Mount Rushmore for Analytics, he would be on it.

The good news about this inevitable transition, as confirmed on the Q3 earnings call, is that it was a calculated plan: Godfrey essentially hand-picked his successor and trained him in “the Godfrey way”.  So, like the rest of his track record, Godfrey goes out the right way too.  The company couldn’t be better positioned for the future.  We look forward to the next phase of the journey.

IBM’s Advanced Analytics Portfolio

IBM’s Advanced Analytics story is now powerful yet simple: it focuses on making analytics available to all.

Hybrid (heterogeneous architectures), trust (ensuring end-to-end accuracy and confidence in the entire system), and agility (speed-of-thought analysis) are the core principles behind this transformation of IBM’s Analytics portfolio. The product stack comprises Cognos Analytics, Watson Analytics, and a Spark-powered SPSS for predictive analytics.

Cognos BI just got better with its rebranding as Cognos Analytics: an impressive built-in search facility that understands the context you are working in enables smarter self-service analytics along with self-initiated discovery and visualization.

Watson Analytics takes the bias/subjectivity out of the analytical journey by enabling the Citizen Data Scientist and other Business Users to statistically interrogate the data. Detecting trends, patterns and anomalies just got a lot simpler.

IBM Predictive Analytics now empowers the new multi-dimensional Citizen Data Scientist with the open source-driven (Spark) SPSS framework. Their Predictive story just became more personalized, intuitive, managed and above all, powerful.

Posted in Analytics

Data Management the IBM Way

In a bid to win the race to insight, IBM itself has undergone a major transformation of its Data and Analytics product portfolio. At #IBMInsight, IBM executives held several Keynotes and Super Sessions each day to unveil their ever-evolving approach to modern-day data architecture, in which Analytics and Cloud are integral components of a transformational data infrastructure.

IBM’s Data Platform now reflects the convergence of “smart” Analytics, Machine Learning, Big Data paradigms, Cognitive and Natural Language Processing, and Cloud technologies. The IBM Data and Analytics Platform is now built on Spark (bringing Open Source flexibility and feasibility to the Enterprise) and supports hybrid architectures (on premises and in the cloud), with agility built into the simplicity of this transformed data infrastructure.

One of the coolest features of this platform is self-initiated Discovery using natural language to interrogate and interact with the underlying data structures. Additionally, instant fusion of temporal and geospatial awareness and single-click integration with an existing Data Warehouse or Repository make this Platform very alluring from a personalization and customer-centricity standpoint.


Posted in Analytics, Big Data, News

Leading with Insight at #IBMInsight

The message at IBM Insight 2015 was loud and clear: “Insight is the new currency.” Therefore, organizations must strive to lead, and not just compete, in this “Insight Economy.”

The companies that will leverage data in its totality (“big” and “small”) combined with cognitive computing and analytics will be the intelligent brokers of game-changing information paradigms.

Analytics and Big Data technologies are fueling this insight economy, and hybrid cloud architectures, in turn, are facilitating the unlocking of newer business models. These three mega-trends continue to converge in an IoT world, thereby bringing about a major business-model disruption for companies of all sizes. Organizations that ride this wave of massive transformation and become intelligently cognitive, data-driven companies will be the leaders in this economy.

Over the course of the week, several exciting examples were brought forth of companies that are on their way to embracing this 4-dimensional transformation. Coca Cola (taking precision marketing and audience segmentation to the next level), GoMoment (building Cognitive Hotels), VineSleuth (helping consumers find the perfect bottle of wine) and StatSocial (creating holistic, personalized consumer profiles for Retailers), presented some amazing ways of how they continue to strive to disrupt their respective industries and markets.


Posted in Analytics, Big Data, News

The End of IT?

A few years ago Adrian Cockcroft, cloud architect at Netflix at the time, posted a blog post that caused quite a stir in the IT community. It described how Netflix had almost done away with DevOps (or even plain Ops) by using the cloud (AWS in this case), giving rise to yet another IT buzzword: NoOps.

Many in the DevOps community took strong issue with this, arguing that Ops by any name, whether NoOps or DevOps, is still Ops. Many Platform-as-a-Service (PaaS) vendors jumped on the NoOps bandwagon, even declaring the following year to be the definitive year of NoOps.

Vendors like Heroku, AWS Elastic Beanstalk and AppFog tout their PaaS platforms as purely development-focused, with no need for operations support. I witnessed this in person during a Heroku workshop (by the way, Heroku itself is hosted on AWS). It is frighteningly simple to create a website or web service using any of the supported language platforms and connect it to a set of standard database backends and tools; it scales efficiently, and the setup is a breeze if you have ever worked on any kind of multi-stack project.

I think a key drawback of PaaS currently is that unless the project is self-contained, or all of your company’s data and services are located in the cloud or are accessible externally, it is difficult to punch enough holes through your company’s firewall to justify the move to PaaS, especially if the data is sensitive. I think organizations are still uncomfortable with the idea of their highly sensitive data being hosted on systems outside of their control. Also, being locked into a limited toolset or a particular database might not appeal to every project owner given the proliferation of specialized software resources, especially in the Big Data landscape.

Cyber Security Awareness Month brings plenty of awareness

October 1st marked the beginning of the United States’ National Cyber Security Awareness Month.  Three days into the month, awareness is exactly what we have.

Last week, Experian and T-Mobile announced a significant data breach.  What’s worse is that the hackers were able to maintain the breach for over two years.  T-Mobile CEO John Legere, never a person known to mince words, came out with a statement saying he is “incredibly angry” about the breach of 15 million customer records and will complete a thorough review of their relationship with Experian.

While it is difficult to learn that the breach occurred at all, what’s worse is that it went on for over two years.  What breaches are still ongoing in the world that we don’t know about?

The very next day, Scottrade disclosed that it was the victim of a data breach that exposed data for over 4 million customers during late 2013 and early 2014.  Interestingly enough, in this case Scottrade didn’t even know the breach had occurred; the FBI came in to tell them.

Kudu meets the Elephant

So, the already convoluted Open Source Hadoop ecosystem just got a little more complicated with a Kudu joining the Elephant at #StrataHadoop. Advocates of Fast Analytics on Fast Data at Scale also just got more excited about the potential of fast writes, fast updates, fast reads, fast everything – all with Kudu! Cloudera’s Kudu is designed to fill major gaps in Hadoop’s storage layer, especially with regard to Fast Analytics, but it is not meant to replace or disrupt (just yet!) HBase or HDFS. Instead, Kudu is meant to complement those storage engines and run alongside them, because some applications may still get more benefit out of HDFS or HBase.

Before the official release of this news, VentureBeat speculated about Kudu’s possible implications for the Big Data industry. It “could present a new threat to data warehouses from Teradata and IBM’s PureData … It may also be used as a highly scalable in-memory database that can handle massively parallel processing (MPP) workloads, not unlike HP’s Vertica and VoltDB.”

Whatever the long-term implications of Kudu, the above scenarios are not going to play out any time soon. Maturity is still what most enterprises crave in this rather diverse Open Source ecosystem, and Kudu, despite all the excitement around it, has a long way to go on that front.

Posted in News

Surgical Essbase Recovery: How To Back Up Your Essbase Data

Recovery of data in Essbase is sometimes confusing and time-consuming, depending on the type and frequency of backups previously taken. In this article, I’ll give you an idea of how you can back up your Essbase data for the purpose of restoring a slice of the database.

Scenario: I am running a daily export of level 0 data from my Essbase database. One of my Planners called and said he submitted this month’s data to last month’s point of view, thus overwriting a Forecast version that was still valid.
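As a point of reference, a nightly level-0 export like this can be scripted in MaxL. The sketch below is only illustrative; the server, credentials, application, database and file names are placeholders, not the actual environment from this scenario.

    /* Hypothetical nightly backup script - all names are placeholders */
    login 'admin' 'password' on 'essbase-server';

    /* Export only level-0 blocks; upper levels can be rebuilt by aggregation */
    export database 'Plan'.'Plan1' level0 data to data_file 'lev0_daily.txt';

    logout;
    exit;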

Recovery Solution 1: My first option is to import my level 0 data export and notify the Planners that they need to re-submit any data entered since that last backup. The problem with this scenario is that not everyone who entered data may be available to re-enter it.
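A rough MaxL sketch of that full restore (again with placeholder names) might look like the following. Note that it wipes the database and reloads everything from the last export, which is exactly why any data entered since the backup would need to be re-submitted.

    /* Hypothetical full restore from the nightly level-0 export - names are placeholders */
    login 'admin' 'password' on 'essbase-server';

    /* Clear all data, reload the last export, then re-aggregate upper levels */
    alter database 'Plan'.'Plan1' reset data;
    import database 'Plan'.'Plan1' data from data_file 'lev0_daily.txt' on error write to 'lev0_load.err';
    execute calculation default on database 'Plan'.'Plan1';

    logout;
    exit;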

Recovery Solution 2: Create a copy of the database, load the last level 0 backup, export the point of view affected by the accidental submit, and then load that export into my production database. This solution takes a bit longer to build because it involves creating a new Essbase application and database as well as developing a calculation script; however, it lets us surgically specify the data to restore based on the very point of view that broke it to begin with.
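The calculation script on the copy database could use the DATAEXPORT command to pull out just the affected slice. This is only a sketch: the members in the FIX statement (the Forecast version, year and month) are placeholders for whatever point of view was actually overwritten.

    /* Hypothetical calc script run against the copy database loaded from the backup */
    SET DATAEXPORTOPTIONS
    {
        DataExportLevel "LEVEL0";
        DataExportColFormat ON;
        DataExportOverwriteFile ON;
    };

    /* Export only the slice that the accidental submit overwrote */
    FIX ("Forecast", "FY15", "Sep")
        DATAEXPORT "File" "," "/backups/forecast_sep_fy15.txt" "#MI";
    ENDFIX

The resulting file can then be loaded into the production database with a normal data load, restoring only that slice and leaving everything entered since the backup untouched.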

A Solution to Essbase Security File Backups

Essbase does its best to be friendly to us system admin types, but sometimes I just want to scratch my head in bewilderment.

In recent releases, Essbase performs a backup of the Essbase.sec file every 300 seconds and retains a default of two backups. This means that at any given time, you have two backups of the Essbase security file that are at most 10 minutes old.

In my experience, corruption of the Essbase security file is not identified within a 10-minute window, so as a matter of habit I set the following in the Essbase configuration file (essbase.cfg):
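The original snippet did not survive in this version of the post; assuming the documented SECFILEBACKUPINTERVAL and SECFILEBACKUPCOUNT settings and the 10-backup, 24-hour window described next, it would look something like this:

    ; Take a security file backup every 8640 seconds and keep 10 rolling copies
    SECFILEBACKUPINTERVAL 8640
    SECFILEBACKUPCOUNT 10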


The interval is in seconds. This ensures that at any given time, I have 10 backups spanning a 24-hour period. I can determine that the Essbase security file is corrupted within a few hours of symptoms occurring and can roll it back several hours if necessary.


Posted in essbase, Hyperion Planning