Perficient Enterprise Information Solutions Blog

What is Your Big Data Strategy…?

Big Data is a big deal. Every vendor has a strategy and a suite of products. Navigating the maze and picking the right Big Data platform and tools takes planning and looking beyond a techie’s dream product suite. Compounding the issue is the choice between pure open source and a vendor’s version of the open source. As with every other new technology, product shakedowns will happen sooner or later, so picking a suite now is like betting on the stock market: exercising caution and keeping a conservative, long-term outlook will pay off.

Organizations tend to follow the safe route of sticking with a big-vendor strategy, but the downside is securing the funding and putting up with a procurement phase that seems to wait forever for approval. The hard part is knowing the product landscape, assessing the strengths of each type of solution, and prioritizing the short-term and long-term strategy.

I have seen smaller companies build their entire solution on the open source stack and not pay a penny for the software. Obviously, the risks and rewards play out. Training resources and hiring trained resources from the marketplace is a huge factor as well. Open source still has the same issues of versions, bugs, and compatibility, so having a knowledgeable team makes a big difference in managing the environment and the overall quality of the delivery.

But despite the confusion, there is good news. If you are in the process of figuring out how you want to play the Big Data game, big and small vendors alike are providing sandbox or development environments almost free or for a limited duration. Leveraging this option as part of the Big Data strategy will not only save money but also shorten the learning curve. IBM Bluemix is one example; so are Cloudera and DataStax, and the list is growing.

To maximize the benefit, follow a basic portfolio management strategy (a simple scoring sketch follows the list):

  • Take an inventory of tools already available within the organization
  • Identify the products that will play well with the existing tools
  • Figure out the business case and the types of tools needed for a successful POC
  • Match the product selection with the resource knowledge base
  • Get as much help from external sources as you can (a lot of it can be free, if you have the time), from training to the POC
  • Start small and use it to get the buy-in for the larger project
  • Invest in developing the strategy with a POC to uncover the benefits and to build a strong business case
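
The sketch below shows one way to turn this checklist into a weighted scoring matrix. The criteria, weights, candidate products, and scores are purely hypothetical placeholders; substitute your own inventory and priorities.

```python
# A minimal sketch of a weighted scoring matrix for candidate Big Data products.
# Criteria, weights, candidates, and scores are hypothetical placeholders.

CRITERIA_WEIGHTS = {
    "fits_existing_tools": 0.30,     # plays well with what you already own
    "supports_business_case": 0.30,  # covers the POC's business scenario
    "team_knowledge": 0.25,          # in-house or easily hired skills
    "free_sandbox_available": 0.15,  # a try-before-you-buy option exists
}

# Scores on a 1-5 scale (hypothetical).
candidates = {
    "Vendor Hadoop distribution": {"fits_existing_tools": 4, "supports_business_case": 4,
                                   "team_knowledge": 3, "free_sandbox_available": 5},
    "Pure open source stack":     {"fits_existing_tools": 3, "supports_business_case": 4,
                                   "team_knowledge": 2, "free_sandbox_available": 5},
}

def weighted_score(scores: dict) -> float:
    return sum(CRITERIA_WEIGHTS[c] * scores[c] for c in CRITERIA_WEIGHTS)

for name, scores in sorted(candidates.items(), key=lambda kv: -weighted_score(kv[1])):
    print(f"{name}: {weighted_score(scores):.2f}")
```

Even a rough matrix like this makes the trade-offs visible when you ask for buy-in on the larger project.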

Combining this strategy with a little external help to narrow down the selection, and avoiding the pitfalls known from industry experience, will add tremendous value in navigating the complex selection process. Time to market can be drastically cut down, especially when you make use of a DevOps platform on the cloud.

The direct benefits of leveraging the try-before-you-buy options are:

  • No hardware, wait time, or IT involvement for setting up the environment
  • All the tools are available and ready to test
  • Pricing and the product stack can be validated, rather than finding out later that you need to buy one more product that is not in the budget
  • Time to market is drastically cut down
  • The initial POC and business case can be built with solid proof
  • Throwaway work can be minimized

Looking at all the benefits, this approach is worth taking, especially if you are in the initial stages and want proof before asking for the millions that are hard to justify.

Defining Big Data Prototypes – Part 2

In part 1 of this series, we discussed some of the most common assumptions associated with Big Data Proof of Concept (POC) projects. Today, we’re going to begin exploring the next stage in Big Data POC definition – “The What.”

The ‘What’ for Big Data has gotten much more complicated in recent years and now involves several key considerations (captured as a short sketch after the list):

  1. What business goals are involved – this is perhaps the most important part of defining any POC, yet strangely it is often ignored in many POC efforts.
  2. What scope is involved – for our purposes this means how much of the potential solution architecture will be evaluated. This can be highly targeted (database layer only) or comprehensive (an entire multi-tiered stack).
  3. What technology is involved – this one is tricky because oftentimes people view a POC only in the context of proving a specific technology (or technologies). However, our recommended approach involves aligning technologies and business expectations up front – thus the technology isn’t necessarily the main driver. Once the goals are better understood, selecting the right mix of technologies becomes supremely important. There are different types of Big Data databases and a growing list of BI platforms to choose from – these choices are not interchangeable – some are much better tailored for specific tasks than others.
  4. What platform is needed – this is one of the first big technical decisions associated with both Big Data and Data Warehouse projects these days. While Big Data evolved sitting atop commodity hardware, there is now a huge range of device options and even Cloud platform opportunities.
  5. What technical goals or metrics are required – this consideration is of course what allows us to determine whether we’ve achieved success or not. Oftentimes, organizations think they’re evaluating technical goals but don’t develop sufficiently detailed metrics in advance. And of course this needs to be tied to specific business goals as well.
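
A minimal sketch of how a team might record its answers to these five questions as a single structure appears below; every goal, technology, platform, and metric shown is a hypothetical example rather than a recommendation.

```python
# A minimal sketch of recording the five "What" decisions for a Big Data POC.
# All values are hypothetical examples, not recommendations.

from dataclasses import dataclass, field

@dataclass
class BigDataPOCDefinition:
    business_goals: list                 # 1. What business goals are involved
    scope: str                           # 2. How much of the solution architecture is evaluated
    technologies: list                   # 3. The candidate technology mix
    platform: str                        # 4. Commodity hardware, appliance, or Cloud
    success_metrics: dict = field(default_factory=dict)  # 5. Metrics tied to the goals

poc = BigDataPOCDefinition(
    business_goals=["Reduce customer-churn reporting lag from weekly to daily"],
    scope="database layer plus one BI dashboard",
    technologies=["HDFS", "a SQL-on-Hadoop engine", "a BI platform"],
    platform="vendor Cloud sandbox",
    success_metrics={"daily_load_window_hours": 4, "user_defined_reports": 10},
)
print(poc)
```

Writing the answers down this explicitly keeps the POC honest when scope pressure arrives later.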

 

[Image: Big Data POC Architecture views]

 

Once we get through those first five items, we’re very close to having a POC Solution Architecture. But how is this Architecture represented and maintained? Typically, for this type of Agile project, there will be three visualizations:

  • A conceptual view that allows business stakeholders to understand the core business goals as well as the technical choices (derived from the exploration above).
  • A logical view which provides more detail on some of the data structure/design as well as specific interoperability considerations (such as login between the DB and the analytics platform if both are present). This could be done using UML or freeform. As most of these solutions will not include Third Normal Form (3NF) relational approaches, the data structure will not be presented using ERD diagram notation. We will discuss how to model Big Data in a future post.
  • There is also often a need to represent the core technical architecture – server information, network information and specific interface descriptions. This isn’t quite the same as a strict data model analogy (Conceptual, Logical, Physical). Rather, this latter representation is simply the last level of detail for the overall solution design (not merely the DBMS structure).

It is also not uncommon to represent one or more solution options in the conceptual or logical views – which helps stakeholders decide which approach to select.  Usually, the last view or POC technical architecture is completed after the selection is made.

There is another dimension to “The What” that we need to consider as well – the project framework. This project framework will likely include the following considerations:

  • Who will be involved – both from a technical and business perspective
  • Access to the capability – the interface (in some cases there won’t be open access to this and then it becomes a demo and / or presentation)
  • The processes involved – what this means essentially is that the POC is occurring in a larger context; one that likely mirrors existing processes that are either manual or handled in other systems

The POC project framework also includes identification of individual requirements, the overall timeline, and specific milestones. In other words, the POC ought to be managed as a real project. The project framework also serves as part of the “How” of the POC, but at first it represents the overall parameters of what will occur and when.

So, let’s step back a moment and take a closer look at some of the top level questions from the beginning. For example, how do you determine a Big Data POC scope? That will be my next topic in this series.

 

copyright 2014, Perficient Inc.

Ensuring a Successful Data Quality Initiative

Recently I listened in on a webinar on “Best Practices in Ensuring Data Quality” and kept thinking about all the data quality projects I have been on. One thing that came out as obvious was that my previous and current clients have all had different standards for their data quality needs. I have had clients who needed their data so clean that you would not be able to find duplicate vendors, fat-finger mistakes, misspellings, and so on. But I have also had clients who were more relaxed about their data quality and only wanted to ensure that a few data attributes were cleaned and up to par for their business needs. Each scenario has its own challenges and commitments, which need to be laid out up front. In my blog post today, I want to discuss the following steps that will help ensure you can meet your client’s expectations for their data quality needs.
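
For the stricter end of that spectrum, the sketch below shows a few automated checks – exact duplicate rows, missing required attributes, and near-duplicate vendor names that often come from fat-finger entry or misspellings. The column names, sample records, and similarity threshold are all hypothetical.

```python
# A minimal sketch of basic data quality checks: exact duplicates, missing required
# fields, and near-duplicate vendor names (likely fat-finger or misspelling cases).
# Column names, sample data, and the similarity threshold are hypothetical.

import difflib
import pandas as pd

vendors = pd.DataFrame({
    "vendor_id":   [101, 102, 103, 104, 103],
    "vendor_name": ["Acme Corp", "ACME Corporation", "Globex", None, "Globex"],
})

# 1. Exact duplicate rows (often caused by double data entry).
exact_dupes = vendors[vendors.duplicated(keep=False)]

# 2. Missing required attributes.
missing_name = vendors[vendors["vendor_name"].isna()]

# 3. Near-duplicate names -- possible misspellings or formatting differences.
names = vendors["vendor_name"].dropna().str.lower().unique().tolist()
near_dupes = [
    (a, b)
    for i, a in enumerate(names)
    for b in names[i + 1:]
    if difflib.SequenceMatcher(None, a, b).ratio() > 0.7
]

print("Exact duplicates:\n", exact_dupes)
print("Missing vendor names:\n", missing_name)
print("Possible near-duplicates:", near_dupes)
```

Checks like these are easy to automate; agreeing with the client on which ones matter, and how clean is clean enough, is the harder conversation.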

Now, before you can get started, you first need to understand, and explain to your client, the main types of activities/phases that are part of a data quality initiative. These are the following:

Read the rest of this post »

How to Apply OBIEE Data Level Security against Essbase?

Over the last few years, the integration between Oracle Business Intelligence Enterprise Edition (OBIEE) and Essbase has been undergoing continuous evolution. The progression is aimed at making both analytical products more compatible, as they are complementary. Security may be a challenge, however, when it comes to implementing data-level security on OBIEE reports that source data from Essbase cubes. The challenge is mainly due to the fact that there are different means to achieve data-level security on OBIEE reports sourced from Essbase, depending on the type of installation and the security requirements. In this blog post, I will navigate through the different methods of doing this and reference instructions for achieving each method.

[Image: High-level decision diagram]

The above high-level decision diagram is a guide for determining which method is most suitable. I present three different methods. The following key questions are crucial in selecting the most suitable one: Read the rest of this post »

Defining Big Data Prototypes – Part 1

It seems as though every large organization these days is either conducting a Big Data Proof of Concept (POC) or considering doing one. Now, there are serious questions as to whether this is even the correct path towards adoption of Big Data technologies, but of course for some potential adopters it may very well be the best way to determine the real value associated with a Big Data solution.

This week, Bill Busch provided an excellent webinar on how organizations might go through the process of making that decision or business case.  For this exploration, we will assume for the sake of argument that we’ve gotten past the ‘should we do it’ stage and are now contemplating what to do and how to do it.

 

[Image: Capability Evolution tends to follow a familiar path…]

 

Big Data POC Assumptions:

Everything starts with assumptions – and there are a number of good ones that could be considered universal for Big Data POCs (applicable in most places). These include the following:

  • When we say ‘Big Data’ what we really mean is multiple potential technologies and maybe even an entire technology stack. The days of Big Data just being entirely focused on Hadoop are long gone. The same premise still underlies the growing set of technologies but the diversity and complexity of options have increased almost exponentially.
  • Big Data is now much more focused on Analytics. This is a key and very practical consideration – re-hosting your data is one thing – re-envisioning it is a much more pragmatic or perhaps more tangible goal.
  • A Big Data POC is not just about the data or programming some application or even just the Analytics – it’s about a “Solution.” As such it ought to be viewed and managed the way your typical IT portfolio is managed – and it should be architected.
  • The point of any POC should not be to prove that the technology works – the fact is that a lot of other people have already done that. The point is determining precisely how that new technology will help your enterprise. This means that the POC ought to be more specific and more tailored to what the eventual solution may look like. The value of having the POC is to identify any initial misconceptions so that when the transition to the operational solution occurs it will have a higher likelihood of success. This is of course the definition of an Agile approach and avoids having to re-define from scratch after ‘proof’ that the technology works has been obtained. If done properly, the POC architecture will largely mirror what the eventual solution architecture will evolve into.
  • Last but not least, keep in mind that the Big Data solution will not (in 95% of cases now, anyway) replace your existing data solution ecosystem. The POC needs to take that into account up front – doing so will likely improve the value of the solution and radically reduce the possibility of running into unforeseen integration issues downstream.

Perhaps the most important consideration before launching into your Big Data POC is determining the success criteria up front. What does this mean? Essentially, it requires you to determine the key problems the solution is targeted to solve and to come up with metrics that can be objectively obtained from the solution. Those metrics can be focused on both technical and business considerations (a minimal sketch follows the list):

  • A Technical metric might be the ability to update a very large data set based on rules within a specified timeframe (consistently).
  • A Business metric might be the number of user-defined reports or dashboard visualizations supported.
  • And of course both of these aspects (technical and business capability) would be governed as part of the solution.
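
Here is one way such criteria could be written down and checked objectively; the metric names, targets, and measured values below are hypothetical.

```python
# A minimal sketch of POC success criteria defined up front and checked objectively.
# Metric names, targets, and measured values are hypothetical.

success_criteria = {
    # Technical: consistently apply rule-based updates to a large data set within a window.
    "bulk_update_minutes_p95": {"target": 30, "higher_is_better": False},
    # Business: number of user-defined reports / dashboard visualizations supported.
    "user_defined_reports":    {"target": 10, "higher_is_better": True},
}

measured = {"bulk_update_minutes_p95": 24, "user_defined_reports": 12}

def met(name: str) -> bool:
    crit, value = success_criteria[name], measured[name]
    return value >= crit["target"] if crit["higher_is_better"] else value <= crit["target"]

for name in success_criteria:
    print(f"{name}: measured={measured[name]}, target={success_criteria[name]['target']} "
          f"-> {'PASS' if met(name) else 'FAIL'}")
```

The exact thresholds matter less than agreeing on them before the POC starts, so the “proof” cannot be reinterpreted after the fact.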

Without the POC success criteria it would be very difficult to determine just what value adopting Big Data technology might add to your organization. This represents the ‘proof’ that either backs up or repudiates the initial business case ROI expectation.

In my next post, we will examine the process of choosing “What to select” for a Big Data POC…

 

copyright 2014, Perficient Inc.

Introducing Agile Enterprise Transformation

IT Transformation has been a buzzword for more than a decade now, but what does it really mean? The first time I heard it used regularly was in relation to specific Department of Defense (DoD) technology initiatives from the early 2000s. I had the opportunity to work on several of those projects and as the years progressed the concept of IT Transformation evolved quite a bit – becoming much more flexible and yes – even somewhat Agile in nature.

At first IT Transformation was viewed from a more comprehensive perspective, sort of an organizational make-over if you will. This initial view involved Transformation at multiple levels and from multiple perspectives. This often included consolidation of organizational functions as well as IT systems and hosting capabilities – all at once. In some cases, IT Transformation was becoming almost synonymous with Enterprise Resource Planning (ERP) initiatives; in fact some people still view it that way.

[Image: The holistic perspective of IT Transformation]

 

The problem with the comprehensive view of IT Transformation, though, is its scope. As hard as it is to even get a project like that off the ground (funding, stakeholder buy-in, etc.), successfully executing something that large is even more challenging – sort of the ultimate “Big Bang” approach. Despite that, Transformations are still hard to avoid – organizations that don’t adjust to changing realities and emerging technologies can rapidly become ineffective or redundant.

So, how does one approach Transformation in a way that can actually succeed? First we need to redefine it:

IT Transformation

“The set of activities required for an organization or group of related organizations to successfully adopt emerging capabilities and practice. This emerging technology or practice could be focused on one major capability or may involve multiple technologies and processes associated with a specific initiative.”

This definition allows us to view Transformation differently. Rather than the entire organization changing all at once we’re focusing now on areas of specific significant change.  This Transformation based on significant enterprise change can be further decomposed into segments similar to the previous view of holistic Transformation (the business portion, the data portion, the solution portion etc.).

You might be asking yourself how this new view of Transformation differs from any other type of major IT initiative or project. The primary difference is that while today’s more Agile Transformation can be highly targeted it still exhibits these differentiating characteristics from typical IT projects:

  1.  It is designed to fit into a larger set of Transformation goals (e.g. it comes pre-integrated, enterprise-aligned from day 1)
  2.  It typically involves the combination of several distinct technologies and processes – more so than other IT projects (because it is already enterprise- or strategic-facing in nature)
  3. It typically is more mission-focused than many other IT projects. In other words, it has been selected to tackle a critical business issue, not just a technical concern.

Solution providers that support this new type of Transformation are somewhat more flexible in their perspectives on how to tackle complex Transformations than some of the more well-established consulting firms may be. While an ERP transformation may easily cost several hundred million dollars and still not succeed, Agile Transformation approaches look for smaller chunks of capability with higher ROI and success rates.  We will highlight the primary Use Cases and several case studies for Agile Transformation in the coming months.

 

 

copyright 2014, Perficient Inc.

Three Big Data Business Case Mistakes

Tomorrow I will be giving a webinar on creating business cases for Big Data. One of the reasons for the webinar was that there is very little information available on creating a Big Data business case. Most of what is available boils down to “trust me, Big Data will be of value.” Most information available on the internet basically states:

More information, loaded into a central Hadoop repository, will enable better analytics, thus making our company more profitable.  

Although this statement seems logically true, and most analytical companies have accepted it, it illustrates the three most common mistakes we see in creating a business case for Big Data.

The first mistake is not directly linking the business case to the corporate strategy. The corporate strategy is the overall approach the company is taking to create shareholder value. By linking the business case to the objectives in the corporate strategy, one will be able to illustrate the strategic nature of Big Data and how the initiative will support the overall company goals. Read the rest of this post »

It’s all about the data, the data…

When Apple jumped into payment processing with Apple Pay, I thought this would be a great leg up for Apple. But who will be the winner and who will be the loser? Granted, the payment switches from the credit card to Apple Pay, which indirectly pays for the purchase – but who cares, as long as we can charge on the card we want, right? Also, what is Apple Pay’s market share going to be? Before we answer all those questions, let’s take a look at how we pay for services and goods today.

Cash may still be king, and it may very well be the last to die, but what everyone is after is the middle-class market, which is fast adopting credit cards and now smartphone-based services – dwindling check usage tells you so. With so many ways to shop using credit cards, store cards, pre-paid cards, PayPal, and the Internet (bill pay, bitcoin?), the convenience I see is carrying fewer cards, or none at all. I seldom carry my store cards, especially when the store can look them up.

Apple Pay will be convenient, and may help get rid of cards altogether, if it is accepted by a majority of merchants. Discover had to go through hurdles before it was accepted, so I don’t see myself getting rid of cards in the near future, although cards may disappear before cash does.


I read the news that many major merchants have signed up with Apple, and I thought: what happens to the data? Who will own the granular consumer spend information? Before I could finish this blog post, I heard the news that two major retailers had pulled out of Apple Pay. Ha – they realized it: the data is more valuable than the technology or the convenience to customers. Imagine the data movement and explosion even if Apple shares the detailed information with each of the parties involved.

Apple is expected to have around 34 million customers, and with an average of 200 transactions per customer the volume is going to explode. You can do the math (sketched below) if this information has to be shared with two to five parties. No wonder some retailers are wary of signing up. I won’t be surprised if each of the financial institutions and retailers comes up with its own payment app.
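
The rough math looks like this; the 34 million customers and 200 transactions per customer are the figures quoted above, and everything derived from them is only an order-of-magnitude estimate.

```python
# Back-of-the-envelope math for the transaction data volume described above.
# 34 million customers and 200 transactions per customer are the figures from the post;
# the derived totals are rough order-of-magnitude estimates.

customers = 34_000_000
transactions_per_customer = 200

transactions = customers * transactions_per_customer   # 6.8 billion transactions
print(f"Transactions: {transactions:,}")

for parties in (2, 5):
    copies = transactions * parties                     # records if shared with N parties
    print(f"Shared with {parties} parties: {copies:,} transaction records")
```

That is 6.8 billion transactions, and 13.6 to 34 billion records once the detail is copied to every party in the chain.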

In the end, having the customer spend data is more valuable for business operations, customer excellence, and so on. Having the right Information Governance to manage this information asset is not only strategic but also a matter of survival for the enterprise.

Information Governance & The Cloud

The practice of Information Governance (IG) is evolving rapidly; it has become much more than just Data Governance. One of the most interesting and challenging additions to IG recently has been the management of Cloud-related issues. The Cloud of 2014 is much different from how it was conceived just a few years ago (with strict and somewhat arbitrary definitions for IaaS, PaaS and SaaS). The Cloud has evolved into the ‘Multi-Cloud’ – commonly referred to as “Hybrid Cloud;” the typical enterprise is now harnessing several different types of Cloud solutions from a diverse set of vendors. And of course, these Clouds represent capability and data provided from a growing set of geographically dispersed data centers.

With this emerging landscape, many assumptions regarding IG ownership, security and lifecycle control have been effectively “rebooted.” In this post, we’ll take a look at some of the new assumptions, common issues and information Governance tasks associated with complex Cloud-driven enterprises.

Cloud IG Assumptions:

  • Most enterprises going forward will have one or more Cloud-related solutions they need to manage (these include both customer-facing and back-office capabilities).
  • Management of Cloud solutions will often involve partnerships (between Cloud providers and client enterprises).
  • Critical enterprise data will likely be spread beyond the traditional enterprise firewalls and datacenters; some of it will be solution-specific (owned) and some of it will be entirely client-owned and managed.
  • Operational governance will include various metrics and oversight similar to traditional managed services approaches – however, this will be spread across more providers, requiring “SLA integration” in order to ensure holistic enterprise service level management (see the sketch after this list).
  • Expansion of enterprise data across Cloud realms will force a higher level of security diligence in organizations that previously hadn’t attempted adoption of comprehensive security controls.
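
To make the “SLA integration” point concrete, here is a minimal sketch of how per-provider availability commitments compound into an end-to-end figure when a business capability depends on a chain of Cloud services. The provider names and SLA percentages are hypothetical.

```python
# A minimal sketch of "SLA integration": when a capability depends on a chain of Cloud
# providers, end-to-end availability is roughly the product of the individual commitments.
# Provider names and SLA percentages are hypothetical.

from functools import reduce

provider_slas = {
    "SaaS CRM":             0.999,    # 99.9% monthly availability
    "PaaS integration hub": 0.995,
    "IaaS data platform":   0.9995,
}

end_to_end = reduce(lambda acc, sla: acc * sla, provider_slas.values(), 1.0)
print(f"Composite availability: {end_to_end:.4%}")  # lower than any single provider's SLA
```

The composite figure is what the business actually experiences, which is why governance has to look across providers rather than at each SLA in isolation.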

 


 

Common Cloud IG Issues / Challenges:

  • Metadata Reconciliation – Metadata coordination is vital to providing effective enterprise search capabilities as well as for supporting advanced analytics. In complex Hybrid Cloud enterprises, metadata is fractured even further and there may be multiple levels of analytics in play.
  • Master Data Management – MDM in the Cloud was a recent topic of a Perficient Webinar; the reason this has become important is that core back-office capabilities are now moving to the Cloud, but perhaps not all of them. This means that vital business entities are now spread across multiple domains making MDM centralization more difficult (but not impossible).
  • Data Security – This is actually a huge topic and often one of the first ones that come up when discussing any external Cloud adoption. And, there isn’t one answer on how to handle it.

Cloud IG – Getting Started:

So, how does an enterprise adapt to this rapidly changing landscape? There are a few excellent starting points for tackling IG in the Cloud; they include:

  • Creating an “Information Asset Map” (we will present an example of one of these in a future post; a minimal sketch follows this list) that illustrates the distribution of assets across environments, with descriptions of data classification, integration points, etc.
  • Creating an enterprise-wide Information Governance plan, or updating it if it already exists, to document the strategy and processes for coordinated IG across environments.
  • Defining cross-domain security controls and integrating those within a unified managed services framework.
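
As a placeholder until that future post, here is a minimal sketch of what an Information Asset Map entry might record; the asset names, environments, classifications, and owners are hypothetical.

```python
# A minimal sketch of an "Information Asset Map": a catalog of where data assets live,
# how they are classified, who owns them, and what they integrate with.
# All entries are hypothetical.

information_asset_map = [
    {
        "asset": "Customer master",
        "environment": "SaaS CRM (vendor Cloud)",
        "classification": "Confidential / PII",
        "owner": "Sales operations",
        "integration_points": ["on-premises MDM hub", "analytics Cloud"],
    },
    {
        "asset": "Clickstream events",
        "environment": "IaaS data lake",
        "classification": "Internal",
        "owner": "Digital marketing",
        "integration_points": ["BI platform"],
    },
]

# Example use: flag sensitive assets that sit outside the traditional firewall.
for entry in information_asset_map:
    if "PII" in entry["classification"] and "Cloud" in entry["environment"]:
        print(f"Review security controls for: {entry['asset']} ({entry['environment']})")
```

Even a simple catalog like this gives the IG plan and the security controls a shared, concrete starting point.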

This is a complex topic, and we will be dedicating a number of upcoming posts to explaining some of what we’ve introduced here today in greater depth.

 

 

copyright 2014 – Perficient Inc.

Splicing Open Source Projects Together

Last night I had an opportunity to see a demo of Splice Machine, which was pretty cool from a technology perspective. Splice Machine took Apache Derby, a lightweight ANSI SQL standard database, and “spliced” it into HBase. This essentially created an SQL interface into HBase. The product illustrates the power of combining different open source projects to meet business needs. The result is an ANSI SQL compliant transaction processing database on top of HBase/Hadoop. Pretty neat!

[Image: Apache Derby + HBase = Splice Machine]

Splice Machine, as expected, presented some compelling TCO numbers. With no upfront licensing costs, we expect this from an open source company. However, with open source solutions, companies need to realistically understand the true development and support costs. These costs tend to be 70–80% of a 3-year TCO and should be balanced against the risks associated with a startup company (a rough back-of-the-envelope sketch follows).
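
Here is a rough sketch of that balance. The only figure taken from the post is the 70–80% development-and-support share for open source; every dollar amount is a hypothetical placeholder.

```python
# A back-of-the-envelope 3-year TCO comparison. The 70-80% development/support share
# for open source comes from the post; all dollar figures are hypothetical placeholders.

def three_year_tco(license_cost: float, infrastructure: float, dev_and_support: float) -> float:
    return license_cost + infrastructure + dev_and_support

# Hypothetical open source option: no license fees, but dev/support dominates the TCO.
oss = {"license_cost": 0, "infrastructure": 400_000, "dev_and_support": 1_400_000}
oss_tco = three_year_tco(**oss)
print(f"Open source 3-yr TCO: ${oss_tco:,.0f} "
      f"(dev & support = {oss['dev_and_support'] / oss_tco:.0%})")

# Hypothetical commercial option: license fees plus a smaller services share.
commercial = {"license_cost": 1_200_000, "infrastructure": 400_000, "dev_and_support": 600_000}
print(f"Commercial 3-yr TCO: ${three_year_tco(**commercial):,.0f}")
```

The point is not the specific numbers but that “free” licensing still leaves most of the 3-year cost on the table, and that cost should be weighed against startup risk.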

So, if you are looking to cost-effectively scale a transactional database, and leveraging the Hadoop infrastructure is a viable option, consider Splice Machine. Just as Senator Palpatine watched young Skywalker’s career with great interest, we too shall watch the development of Splice Machine with great interest.

One last item. Please join me for a webinar on November 5 titled “Creating a Business Case for Big Data.”   During this webinar we will be investigating different approaches to formulating solid Big Data business cases. See you there!

Read the rest of this post »

Posted in News