Perficient Enterprise Information Solutions Blog


Posts Tagged ‘Big Data’

What is Your Big Data Strategy…?

Big Data is a big deal. Every vendor has a strategy and a suite of products. Navigating the maze and picking the right Big Data platform and tools takes planning and looking beyond a techie’s dream product suite. Compounding the issue is the choice between pure open source and a vendor distribution of the open source. As with every new technology, product shakedowns will happen sooner or later, so picking a suite now is like betting on the stock market: exercising caution and taking a conservative, long-term outlook will pay off.

Organizations tend to follow the safe route of sticking with a big vendor, but the downside is securing the funding and enduring a procurement phase that can take forever to win approval. The hard part is knowing the product landscape, assessing the strengths of each type of solution, and prioritizing the short-term and long-term strategy.

I have seen smaller companies build their entire solution on an open source stack and not pay a penny for software. Obviously, the risks and rewards play out. Training resources, and hiring trained resources from the marketplace, is a huge factor as well. Open source still has the same issues of versions, bugs, and compatibility, so having a knowledgeable team makes a big difference in managing the environment and the overall quality of the delivery.

But despite the confusion, there is good news. If you are in the process of figuring out how you want to play the Big Data game, big and small vendors alike are providing sandbox or dev environments free or nearly free for a limited duration. Leveraging this option as part of your Big Data strategy will save not only money but also learning-curve time. IBM Bluemix is one example; so are Cloudera and DataStax, and the list is growing.

To maximize the benefit, follow the basic portfolio management strategy.

  • Take an inventory of tools already available within the organization
  • Identify the products which will play better with the existing tools
  • Figure out the business case and the types of tools needed to get a successful POC
  • Match the product selection with resource knowledge base
  • Get as much help from external sources (a lot of them can be free, if you have the time) from training to POC
  • Start small and use it to get the buy in for the larger project
  • Invest in developing the strategy with POC to uncover the benefits and to build strong business case
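The checklist above amounts to scoring candidate products against your existing stack, team skills, business case, and available free help. A minimal sketch of that exercise, in which the tool names, criteria, weights, and scores are all hypothetical, not recommendations:

```python
# Hypothetical weighted scoring of candidate tools against the portfolio
# criteria above: fit with existing tools, team knowledge, POC/business-case
# fit, and availability of free external help.
WEIGHTS = {"stack_fit": 0.35, "team_skills": 0.30, "poc_fit": 0.25, "free_help": 0.10}

candidates = {
    # Scores 0-5 per criterion (assumed values, for illustration only).
    "Tool A": {"stack_fit": 4, "team_skills": 2, "poc_fit": 5, "free_help": 3},
    "Tool B": {"stack_fit": 3, "team_skills": 5, "poc_fit": 3, "free_help": 4},
}

def score(tool_scores):
    """Weighted sum of the criterion scores for one tool."""
    return sum(WEIGHTS[c] * s for c, s in tool_scores.items())

ranked = sorted(candidates, key=lambda t: score(candidates[t]), reverse=True)
for tool in ranked:
    print(f"{tool}: {score(candidates[tool]):.2f}")
```

Even a rough scoring like this makes the trade-offs explicit and gives stakeholders something concrete to argue about before any money is spent.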

Combining this strategy with a little external help to narrow down the selection, and avoiding the pitfalls known from industry experience, adds tremendous value in navigating the complex selection process. Time to market can be cut drastically, especially when you make use of a DevOps platform in the cloud.

The direct benefits in leveraging the try-before-buy options are:

  • No hardware, wait time, or IT involvement in setting up the environment
  • All the tools are available and ready to test
  • Pricing and the product stack can be validated up front, rather than finding out later that you need one more product that is not in the budget
  • Time to market is drastically cut
  • The initial POC and business case can be built on solid proof
  • Throwaway work can be minimized

Looking at all the benefits, this approach is worth taking, especially if you are in the initial stages and want proof in hand before asking for millions that are otherwise hard to justify.

Defining Big Data Prototypes – part 2

In part 1 of this series, we discussed some of the most common assumptions associated with Big Data Proof of Concept (POC) projects. Today, we’re going to begin exploring the next stage in Big Data POC definition – “The What.”

The ‘What’ for Big Data has gotten much more complicated in recent years, and now involves several key considerations:

  1. What business goals are involved – this is perhaps the most important part of defining any POC, yet it is strangely ignored in many POC efforts.
  2. What scope is involved – for our purposes this means how much of the potential solution architecture will be evaluated. This can be highly targeted (database layer only) or comprehensive (an entire multi-tiered stack).
  3. What technology is involved – this one is tricky because people often view a POC only in the context of proving a specific technology (or technologies). However, our recommended approach involves aligning technologies and business expectations up front – thus the technology isn’t necessarily the main driver. Once the goals are better understood, selecting the right mix of technologies becomes supremely important. There are different types of Big Data databases and a growing list of BI platforms to choose from – these choices are not interchangeable, and some are much better tailored to specific tasks than others.
  4. What platform is needed – this is one of the first big technical decisions associated with both Big Data and Data Warehouse projects these days. While Big Data evolved atop commodity hardware, there is now a huge number of device options and even Cloud platform opportunities.
  5. What technical goals or metrics are required – this consideration is what allows us to determine whether we’ve achieved success. Oftentimes, organizations think they’re evaluating technical goals but don’t develop sufficiently detailed metrics in advance. And of course this needs to be tied to specific business goals as well.
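The five considerations above can be captured as a simple record so that none of them is skipped when scoping a POC. This is an illustrative sketch only; the field names and sample values are my own assumptions, not a prescribed template:

```python
from dataclasses import dataclass

@dataclass
class BigDataPOCDefinition:
    """One record per POC, mirroring the five 'What' considerations."""
    business_goals: list        # 1. why the POC exists
    scope: str                  # 2. targeted vs. comprehensive
    technologies: list          # 3. candidate stack components
    platform: str               # 4. commodity hardware, appliance, or cloud
    success_metrics: dict       # 5. metric name -> required threshold

    def is_well_defined(self):
        # A POC with no business goals or no measurable metrics is a red flag.
        return bool(self.business_goals) and bool(self.success_metrics)

poc = BigDataPOCDefinition(
    business_goals=["Reduce churn-report latency"],
    scope="database layer only",
    technologies=["Hadoop", "Hive"],
    platform="cloud sandbox",
    success_metrics={"query_seconds_p95": 30.0},
)
print(poc.is_well_defined())  # True
```

Forcing every POC proposal through a checklist like this makes the "often ignored" business-goal and metric gaps visible before work starts.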

 

Big Data POC Architecture views

 

Once we get through those first five items, we’re very close to having a POC Solution Architecture. But how is this Architecture represented and maintained? Typically, for this type of Agile project, there will be three visualizations:

  • A conceptual view that allows business stakeholders to understand the core business goals as well as the technical choices (derived from the exploration above).
  • A logical view which provides more detail on some of the data structure/design as well as specific interoperability considerations (such as login between the DB and the analytics platform if both are present). This could be done using UML or freeform. As most of these solutions will not include Third Normal Form (3NF) Relational approaches, the data structure will not be presented using ERD diagram notation. We will discuss how to model Big Data in a future post.
  • There is also often a need to represent the core technical architecture – server information, network information, and specific interface descriptions. This isn’t quite the same as a strict data model analogy (Conceptual, Logical, Physical); rather, this latter representation is simply the last level of detail for the overall solution design (not merely the DBMS structure).

It is also not uncommon to represent one or more solution options in the conceptual or logical views, which helps stakeholders decide which approach to select. Usually, the last view, the POC technical architecture, is completed after the selection is made.

There is another dimension to “The What” that we need to consider as well – the project framework. This project framework will likely include the following considerations:

  • Who will be involved – both from a technical and business perspective
  • Access to the capability – the interface (in some cases there won’t be open access to this and then it becomes a demo and / or presentation)
  • The processes involved – what this means essentially is that the POC is occurring in a larger context; one that likely mirrors existing processes that are either manual or handled in other systems

The POC project framework also includes identification of individual requirements, the overall timeline, and specific milestones. In other words, the POC ought to be managed as a real project. The project framework also serves as part of the “How” of the POC, but at first it represents the overall parameters of what will occur and when.

So, let’s step back a moment and take a closer look at some of the top level questions from the beginning. For example, how do you determine a Big Data POC scope? That will be my next topic in this series.

 

copyright 2014, Perficient Inc.

Three Big Data Business Case Mistakes

Tomorrow I will be giving a webinar on creating business cases for Big Data. One of the reasons for the webinar is that there is very little information available on creating a Big Data business case. Most of what is available boils down to “trust me, Big Data will be of value.” Most information available on the internet basically states:

More information, loaded into a central Hadoop repository, will enable better analytics, thus making our company more profitable.  

Although this statement seems logically true, and most analytical companies have accepted it, it illustrates the three most common mistakes we see in creating a business case for Big Data.

The first mistake is not directly linking the business case to the corporate strategy. The corporate strategy is the overall approach the company is taking to create shareholder value. By linking the business case to the objectives in the corporate strategy, one will be able to illustrate the strategic nature of Big Data and how the initiative will support the overall company goals.

It’s all about the data, the data…

When Apple jumped into payment processing with Apple Pay, I thought this would be a great leg up for Apple. But who will be the winner and who the loser? Granted, the payment switches from the credit card to Apple Pay, which indirectly pays for the purchase; who cares, as long as we can charge the card we want, right? And what will Apple Pay’s market share be? Before we answer those questions, let’s take a look at how we pay for services and goods today.

Cash may still be king, and may very well be the last to die, but what everyone is after is the middle-class market, which is fast adapting to credit cards and now to smartphone-based services; dwindling check usage tells you so. With the many ways of shopping using credit cards, store cards, pre-paid cards, PayPal, and the Internet (bill pay, bitcoin?), the convenience I see is carrying fewer cards, or none at all. I seldom carry my store cards, especially when the store can look them up.

Apple Pay will be convenient, and may help get rid of cards altogether, if it is accepted by the majority of merchants. Discover had to go through hurdles before it was accepted, so I don’t see myself getting rid of cards in the near future, although cards may disappear before cash does.


I read the news that many major merchants have signed up with Apple, and I thought: what happens to the data? Who will own the granular consumer spend information? Before I could finish this blog post, I heard the news that two major retailers had pulled out of Apple Pay. Ha, they realized it: the data is more valuable than the technology or the convenience to customers. Imagine the data movement and explosion even if Apple shares the detailed information with each of the parties involved.

Apple is expected to have around 34 million customers; with an average of 200 transactions per customer, the data volume is going to explode. You can do the math if this information has to be shared with two to five parties. No wonder some retailers are wary of signing up. I won’t be surprised if each of the financial institutions and retailers comes up with its own payment app.
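Doing that math with the figures from this post (34 million customers, 200 transactions each, shared with 2–5 parties) gives a rough sense of the volumes; the per-record size in the sketch is my own assumption:

```python
customers = 34_000_000
transactions_per_customer = 200   # per the figures quoted above
parties = (2, 5)                  # each transaction shared with 2-5 parties

base_records = customers * transactions_per_customer
low, high = (base_records * p for p in parties)
print(f"base transactions: {base_records:,}")     # 6,800,000,000
print(f"shared copies: {low:,} to {high:,}")      # 13.6 to 34 billion records

# Assuming roughly 1 KB per transaction record (illustrative only).
tb = high * 1024 / 1024**4
print(f"upper bound at ~1 KB/record: {tb:.1f} TB")
```

Even with a conservative record size, the sharing multiplier, not the raw transaction count, is what drives the explosion.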

In the end, having customer spend data is valuable for business operations, customer excellence, and more. Having the right Information Governance to manage this information asset is not only strategic but also a matter of survival for the enterprise.

The Chief Analytics Officer

One of the key points I make in our Executive Big Data Workshops is that effective use of Big Data analytics will require transforming both business and IT organizations. Big Data, with access to cross-functional data, will transform the strategic processes within a company that guide long-term and year-to-year investments. With the ability to apply machine learning, data mining, and advanced analytics to view how different business processes interact with each other, companies now have empirical information for use in their strategic processes.

We are now seeing evidence of this transformation in the emergence of the Chief Analytics Officer position. As detailed in the InfoWorld article, Chief analytics officer: The ultimate big data job, it’s not about the data but what you do with it. And it is important enough to create a new position, the CAO. I recommend reading the article.

The Best Way to Limit the Value of Big Data

A few years back I worked for a client that was implementing cell-level security on every data structure within their data warehouse. They had nearly 1,000 tables and 200,000 columns — yikes! Talk about administrative overhead. The logic was that data access should only be given on a need-to-know basis: users would have to request access to specific tables and columns.

Need-to-know is a term frequently used in military and government institutions that refers to granting access to sensitive information only to cleared individuals. This is a good concept, but the key here is the part about granting access to SENSITIVE data: the information has to be classified first, and then need-to-know (for cleared individuals) is applied.

Most government documents are not sensitive. This allows administrative resources to focus on the sensitive, classified information. The system for classifying information as Top Secret, Secret, or Confidential has relatively stringent rules, and it also discourages the over-classification of information, because once a document is classified, its use becomes limited.
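The model described above boils down to a two-part check: sufficient clearance for the classification level, plus an explicit need-to-know for the topic. A minimal sketch, with levels and user records of my own devising:

```python
# Classification levels ordered from least to most restricted.
LEVELS = {"Unclassified": 0, "Confidential": 1, "Secret": 2, "Top Secret": 3}

def can_access(user, doc_level, doc_topic):
    """Grant access only if clearance covers the document's level AND the
    topic falls within the user's declared need-to-know."""
    cleared = LEVELS[user["clearance"]] >= LEVELS[doc_level]
    need = doc_topic in user["need_to_know"]
    return cleared and need

analyst = {"clearance": "Secret", "need_to_know": {"supply-chain"}}
print(can_access(analyst, "Confidential", "supply-chain"))  # True
print(can_access(analyst, "Confidential", "payroll"))       # False: no need-to-know
print(can_access(analyst, "Top Secret", "supply-chain"))    # False: not cleared
```

Note that most documents in this scheme would sit at the unrestricted level, so the expensive checks only apply where classification actually adds value, which is exactly the point the post makes.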

This same phenomenon is true in the corporate world. The more a set of data is locked down, the less it will be used. Unnecessarily limiting information workers’ access to data obviously does not help the overall objectives of the organization. Big Data just magnifies this dynamic: unnecessarily restricting access to Big Data is the best way to limit its value. Unreasonably lock down Big Data, and its value will be severely limited.

One Cluster To Rule Them All!

In the Hadoop space we have a number of terms for the Hadoop File System used for data management. Data Lake is probably the most popular. I have heard it called a Data Refinery, as well as some other less mentionable names. The one that has stuck with me is the Data Reservoir, mainly because it is the water analogy closest to what actually happens in a Hadoop implementation used for data storage and integration.

Consider that data is first landed in the Hadoop file system. This is the un-processed data, just like water running into a reservoir from different sources. Data in this form is only fit for limited use, like analytics by trained power users. The data is then processed, just as water is processed: process water and you end up with water that is consumable; go one step further and distill it, and you have water suitable for medical applications. Data is the same way in a Big Data environment. Process it enough and one ends up with conformed dimensions and fact tables. Process it even more, and you have data suitable for basing bonuses on, or even for publishing to government regulators.
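The refinement stages described above can be pictured as successive passes over the same landed data. This is a toy sketch; the stage names, cleansing rules, and sample records are all invented for illustration:

```python
# Toy "reservoir" refinement: raw landed records -> cleansed -> conformed.
raw_landed = [  # as-ingested, untrusted records (invented sample data)
    {"cust": " Alice ", "amount": "42.50"},
    {"cust": "BOB", "amount": "bad-value"},
    {"cust": "alice", "amount": "10.00"},
]

def cleanse(records):
    """First pass: standardize fields and drop unparseable rows."""
    out = []
    for r in records:
        try:
            out.append({"cust": r["cust"].strip().title(),
                        "amount": float(r["amount"])})
        except ValueError:
            continue  # a real pipeline would quarantine these, not drop them
    return out

def conform(records):
    """Second pass: aggregate into a fact-table-like summary per customer."""
    facts = {}
    for r in records:
        facts[r["cust"]] = facts.get(r["cust"], 0.0) + r["amount"]
    return facts

print(conform(cleanse(raw_landed)))  # {'Alice': 52.5}
```

The raw records stay in the reservoir untouched; each downstream use re-processes them to the purity level it actually needs, just like the water analogy suggests.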

Internet of Things and Enterprise Data Management…

 

It is amazing to see the terms we come up with to explain new technology or trends. Consulting thought leadership coins words to group a set of technologies or trends, to make it easier for people to have context. However, the success and adoption of the technology or trend defines the term’s reputation. For example, Data Warehouse was the in-thing, only to be shunned when it did not deliver on its promises. The industry quickly realized the mistake, called it Business Intelligence, and hid Data Warehouse behind BI until things settled. Now no one questions the value of a DW or EDW, or perceives it as a risky project.

Some terms are really great and are here to stay for a long time. Some wither away; some change and take on a different meaning. One term that got my attention is IoT – the Internet of Things. What is this? It sounds like ‘those things,’ but really, what is this trend or technology?

Wikipedia gives you this definition:

“The Internet of Things (IoT) is the interconnection of uniquely identifiable embedded computing devices within the existing Internet infrastructure. Typically, IoT is expected to offer advanced connectivity of devices, systems, and services that goes beyond machine-to-machine communications (M2M) and covers a variety of protocols, domains, and applications.[1] The interconnection of these embedded devices (including smart objects), is expected to usher in automation in nearly all fields, while also enabling advanced applications like a Smart Grid.[2]”


That is a lot of stuff; it looks like pretty much everything we do with the Internet. I am sure this term will change and take shape, but let’s look at how it relates to Enterprise Data Management. From an enterprise data perspective, let us consider a subset of IoT: machine-generated internet data and the consolidation of data from systems operating in the cloud. What we end up with is a whole lot of data which is new, and also not in the traditional Enterprise Data framework. The impact and exposure are real, and much of the IoT data may live outside the firewalls.

In essence, Enterprise Data Management needs to deal with the added dimensions of architecture, technology, and governance for IoT. Considering IoT data as out of scope for Enterprise Data Management will create more issues than it solves, especially if you generate or depend on IoT data.

Realizing Agile Data Management …

Years of work went into building the elusive single version of the truth. Despite all attempts from IT and the business, Excel reporting and Access databases proved impossible to eliminate. Excel is the number one BI tool in the industry for good reasons: accessibility, speed, and familiarity. Almost all BI tools export data to Excel for those same reasons. The business will produce the insight it needs as soon as the data is available, manually or otherwise. It is time to come to terms with the fact that change is imminent and that there is no such thing as Perfect Data, only what is good enough for the business. As the saying goes:

‘Perfect is the enemy of Good!’

So waiting for all the business rules and perfect data before producing the report or analytics is too late for the business. Speed is of the essence: when the data is available, the business wants it; stale data is as good as not having it.


In the changing paradigm of Data Management, agile ideas and tools are in play. Waiting months, weeks, or even a day to analyze the data from a data warehouse is a problem. Data Discovery through Agile BI tools that double as ETL offers a significant reduction in time to data availability. Data Virtualization provides access to data in real time, along with metadata, for quicker insights. In-memory data appliances produce analytics in a fraction of the time of a traditional data warehouse/BI stack.

We are moving from gourmet sit-down dining to a fast-food concept for data access and analytical insights. Both have their place, usage benefits, and shortcomings, and they complement each other in the use and value they bring to the business. In the following series, let’s look at this new set of tools and how they help Agile Data Management throughout the life cycle.

  1. Tools in play:
    1. Data Virtualization
    2. In-Memory Database (appliances)
    3. Data Life Cycle Management
    4. Data Visualization
    5. Cloud BI
    6. Big Data (Data Lake & Data Discovery)
    7. Cloud Integration (on-prem and off-prem)
    8. Information Governance (Data Quality, Metadata, Master Data)
  2. Architectural changes: traditional vs. agile
  3. Data Management Impacts
    1. Data Governance
    2. Data Security & Compliance
    3. Cloud Application Management

DevOps Considerations for Big Data

Big Data is on everyone’s mind these days. Creating an analytical environment involving Big Data technologies is exciting and complex: new technology, and new ways of looking at data that otherwise remained dark or unavailable. The real test of implementing a Big Data solution is making it production ready.

Once the enterprise comes to rely on the solution, dealing with typical production issues is a must. Expanding the data lakes and having multiple applications accessing data, changing it, and deploying new statistical learning solutions can hit overall platform performance. In the end, user experience and trust will become an issue if the environment is not managed properly. Models which used to run in minutes may take hours or days as data and algorithm changes are deployed. Having the right DevOps process framework is important to the success of Big Data solutions.

In many organizations the Data Scientists report to the business and not to IT. Knowing the business and technological requirements, and setting up the DevOps process accordingly, is key to making the solutions production ready.

Key DevOps Measures for Big Data environment:

  • Data acquisition performance (from ingestion to creating a useful data set)
  • Model execution performance (analytics creation)
  • Modeling platform / tool performance
  • Software change impacts (upgrades and patches)
  • Development-to-production deployment performance (application changes)
  • Service SLA performance (incidents, outages)
  • Security robustness / compliance
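A minimal sketch of tracking a couple of the measures above against thresholds; the metric names, SLA values, and stand-in ingestion step are all hypothetical:

```python
import time

# Hypothetical SLA thresholds (seconds) for two of the measures above.
SLA = {"data_acquisition": 600.0, "model_execution": 1800.0}

def timed(metric, fn, *args):
    """Run one pipeline step, record its wall-clock duration, and flag
    any SLA breach so degradation is visible before users complain."""
    start = time.perf_counter()
    result = fn(*args)
    elapsed = time.perf_counter() - start
    breached = elapsed > SLA[metric]
    print(f"{metric}: {elapsed:.2f}s (SLA {SLA[metric]}s) breached={breached}")
    return result, elapsed, breached

# Stand-in for a real ingestion step (illustration only).
def ingest(records):
    return [r.upper() for r in records]

result, elapsed, breached = timed("data_acquisition", ingest, ["a", "b"])
```

Trending these timings per run is what catches the "minutes turning into hours" drift the previous paragraph warns about, rather than discovering it from an outage.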

 

One of the top issues is Big Data security: how secure is the data, and who has access to and oversight of it? Putting together a governance framework to manage the data is vital for the overall health and compliance of Big Data solutions. Big Data is just getting traction, and many of the best practices for Big Data DevOps scenarios are yet to mature.