Perficient Enterprise Information Solutions Blog

Posts Tagged ‘cloud’

Perficient takes Cognos TM1 to the Cloud

IBM Cognos TM1 is well known as planning, analysis, and forecasting software that delivers flexible solutions to address requirements across the enterprise, along with real-time analytics, reporting, and what-if scenario modeling. Perficient, in turn, is well known for delivering expertly designed TM1-based solutions.

Analytic Projects

Phase zero of a typical analytics project would involve our topology experts determining the exact server environment required to support the implementation of a number of TM1 servers, based not only on industry-proven practices but on our own breadth of practical "in the field" experience. Next would come the procurement and configuration of that environment (and prerequisite software), and finally the installation of Cognos TM1.

It doesn’t stop there

As TM1 development begins, our engineers work closely with internal staff not only to outline processes for (application and performance) testing and deployment of the developed TM1 models, but also to establish a maintainable support structure for after the "go live" date. "Support" includes not only the administration of the developed TM1 application but also a "road map" assigning responsibilities such as:

  • Hardware monitoring and administration
  • Software upgrades
  • Expansion or reconfiguration based upon additional requirements (e.g., data or user-base changes, additional functionality, or enhancements to deployed models)
  • And so on…

Teaming Up

Earlier this year the Perficient analytics team teamed up with the IBM Cloud team to offer an interesting alternative to the “typical”: Cognos TM1 as a service in the cloud.

Using our internal TM1 models and colleagues literally all over the country, we evaluated and tested the viability of a fully cloud based TM1 solution.

What we found is that it works, and works well, offering unique advantages to our customers:

  • Lowers the “cost of entry” (getting TM1 deployed)
  • Lowers the total cost of ownership (ongoing “care and feeding”)
  • Reduces the level of capital expenditures (doesn’t require the procurement of internal hardware)
  • Reduces IT involvement (and therefore expense)
  • Removes the need to plan for, manage and execute upgrades when newer releases are available (new features are available sooner)
  • (Licensed) users anywhere in the world have access from day 1 (regardless of internal constraints)
  • Provides for the availability of auxiliary environments for development and testing (without additional procurement and support)

In the field

Once we were intimate with all of the "ins and outs" of TM1 10.2 on a cloud platform, we were able to work directly with IBM to demonstrate how a cloud-based solution would address the specific needs of one of our larger customers. After that, the Perficient team "on the ground" developed and deployed a proof of concept using real customer data and partnered with the customer for hands-on evaluation and testing. Once the results were in, the verdict was unanimous: "full speed ahead!"

A Versatile platform

During the project life cycle, the cloud environment was seamless, allowing Perficient developers to work (at the client site or remotely) and complete all necessary tasks without issue. The IBM cloud team was available 24/7 to analyze any perceived bottlenecks and, when required, "tweak" things per the Perficient team's suggestions, ensuring an accurately configured cloud and a successful, on-time solution delivery.

Bottom Line

Built upon our internal team's experience and IBM's support, the delivered cloud-based solution is robust, cutting edge, and highly scalable.

Major takeaways

Even given everyone’s extremely high expectations, the project team was delighted and reported back the following major takeaways from the experience:

  • There is no “hardware administration” to worry about
  • No software installation headaches to hold things up!
  • The cloud provided an accurately configured VM, including dedicated RAM and CPU sized exactly to the needs of the solution.
  • The application was easily accessible, yet also very secure.
  • Everything was "powerfully fast"; we did not experience any "WAN effects".
  • 24/7 support provided by the IBM cloud team was "stellar".
  • The managed RAM and "no limits" CPUs set things up to take full advantage of features like TM1's MTQ (multi-threaded querying).
  • The users could choose a completely web-based experience or install CAFÉ (Cognos Analysis for Microsoft Excel) on their machines.

In addition, IBM Concert (provided as part of the cloud experience) was described as a "wonderful tool for our user community to combine both TM1 & BI to create intuitive workflows and custom dashboards".

More to Come

To be sure, you’ll be hearing much more about Concert & Cognos in the cloud and when you do, you can count on the Perficient team for expert delivery.

“Accelerate your Insights” – Indeed!

I have to say, I was very excited today as I listened to Satya Nadella describe the capabilities of the new SQL 2014 Data Platform during the Accelerate your Insights event. My excitement wasn't piqued by the mechanical wizardry of working with a new DB platform, nor was it driven by a need to be the first to add another version label to my resume. Considering that I manage a national Business Intelligence practice, my excitement was fueled by seeing Microsoft's dedication to providing a truly ubiquitous analytic platform that addresses the rapidly changing needs of the clients I interact with on a daily basis.

If you’ve followed the BI/DW space for any length of time you’re surely familiar with the explosion of data, the need for self-service analytics and perhaps even the power of in-memory computing models. You probably also know that the Microsoft BI platform has several new tools (e.g. PowerPivot, Power View, etc.) that run inside of Excel while leveraging the latest in in-memory technology.

But… to be able to expand your analysis into the Internet of Things (IoT) with a new Azure Intelligent Systems Service and apply new advanced algorithms, all while empowering your 'data culture' through new hybrid architectures… that was news to me!

OK, to be fair, part of that last paragraph wasn't announced during the keynote; it came from meetings I attended earlier this week that I'm not at liberty to discuss. But suffice it to say, I see the vision!

What is the vision? The vision is that every company should consider what their Data Dividend is.


Diagram: Microsoft Data Dividend Formula

Why am I so happy to see this vision stated the way it is? Because for years I’ve evangelized to my clients to think of their data as a ‘strategic asset’. And like any asset, if given the proper care and feeding, you should expect a return on it! Holy cow and hallelujah, someone is singing my song!! :-)

What does this vision mean for our clients? From a technical standpoint it means the traditional DW, although still useful, is an antiquated model. It means hybrid architectures are our future. It means the modern DW may not be recognizable to those slow to adopt.

From a business standpoint it means that we are one step closer to being constrained only by our imaginations on what we can analyze and how we’ll do it. It means we are one step closer to incorporating ambient intelligence into our analytical platforms.

So, in future posts and an upcoming webinar on the modern DW, let’s imagine…

Cloud: It's NOT all where it's at…

As my 1st grade teacher would tell me when I ended a sentence with that preposition: "It's between the 'A' and the 'T'." Well, in this situation, it's between the "cloud" and the "on premise".

More and more companies are starting to explore and use Infrastructure as a Service (IaaS) as a viable option for developing and maintaining their data warehouse. There are many companies on the market that provide IaaS, such as Amazon, AT&T, and Bluelock, to name only a few. We see this market taking off almost exponentially because providers are offering companies environments that are safe, secure, fast, redundant, and cheap. Also, without a doubt, many companies are already using Software as a Service (SaaS), where much of their data is also stored in the cloud (Salesforce, Workday, Facebook, Twitter, etc.).

Although much of a company's data is being relocated to and used in the cloud, a lot of it is still on premise (On-Prem) and for all practical purposes will remain there. According to Chris Howard, managing vice president at Gartner, "Hybrid IT is the new IT and it is here to stay. While the cloud market matures, IT organizations must adopt a hybrid IT strategy that not only builds internal clouds to house critical IT services and compete with public CSPs, but also utilizes the external cloud to house noncritical IT services and data, augment internal capacity, and increase IT agility."

The issue now becomes: how do I manage a data environment that is both in the Cloud and On-Prem? And how do I keep the information in sync and current so that I can use the data where appropriate to make better business decisions?

Several software vendors realized that this is something that needs to be addressed quickly (short of manual coding), and they provide solutions in this area. Right now, Informatica is the market leader in data integration, and it also has solutions that easily manage the issues of a hybrid data environment (Cloud and On-Prem). Informatica has been recognized by ChannelWeb as the pioneer of Cloud data integration and by salesforce.com customers as the #1 integration application on AppExchange for the past 5 years.

So why is managing data in the Cloud and On-Prem that easy? From what I have seen of this product, since Informatica already offers connectivity to just about everything (well, maybe everything), it applies the same logic and thought process to extend the concept of data integration to everything in the Cloud. This concept includes data synchronization, data quality, Master Data Management, etc. They have created connectors to many of the SaaS applications in the cloud, so a user of this solution does not need to hand code anything to quickly connect and start using the service. Plus, if a person already knows how to use any of Informatica's On-Prem solutions (such as PowerCenter, DQ, MDM, etc.), there is little to no learning curve in applying that knowledge to the Cloud solution.

With Informatica's concept of VIBE (the virtual data machine), a person can map once and deploy anywhere. What this means is that a developer can create data mappings in PowerCenter with the On-Prem solution and then run those mappings in the Cloud solution. Mappings can also be created directly in the Cloud product and then run On-Prem if needed.
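To make the "map once, deploy anywhere" idea concrete, here is a minimal toy sketch in plain Python. It is an illustration of the concept only, not Informatica's VIBE or PowerCenter API; all names in it are hypothetical.

```python
# Toy illustration of "map once, deploy anywhere" (NOT Informatica's API):
# the mapping is defined once, and any compliant runtime can execute it.

from dataclasses import dataclass
from typing import Callable, Iterable


@dataclass
class Mapping:
    """A source-to-target mapping defined once, independent of where it runs."""
    source: str
    target: str
    transform: Callable[[dict], dict]


class OnPremRuntime:
    def run(self, mapping: Mapping, rows: Iterable[dict]) -> list[dict]:
        # A real engine would read from mapping.source behind the firewall.
        return [mapping.transform(row) for row in rows]


class CloudRuntime:
    def run(self, mapping: Mapping, rows: Iterable[dict]) -> list[dict]:
        # Same mapping, different execution environment.
        return [mapping.transform(row) for row in rows]


# Define the mapping once...
orders = Mapping(
    source="crm.orders",
    target="dw.fact_orders",
    transform=lambda r: {**r, "amount_usd": round(r["amount"] * r["fx_rate"], 2)},
)

sample = [{"order_id": 1, "amount": 100.0, "fx_rate": 1.1}]

# ...and deploy it to either runtime without changing it.
print(OnPremRuntime().run(orders, sample))
print(CloudRuntime().run(orders, sample))
```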

So let's take a look at the architecture of the Informatica Cloud solution. The key point about how this works is that the company's data does not pass through Informatica's cloud environment to reach any destination, whether that destination is in the Cloud or On-Prem. When installing the Informatica Cloud product, a runtime agent is placed in the customer's environment (yep, behind the firewall if needed), and this is where all the work is done. Metadata about your environments is stored in the Informatica Cloud (data about the sources, targets, jobs, transformations, etc.), and managing and monitoring of your integration processes are performed through a web application. All the work and data movement is done in the customer's environment. The only actual data that goes to the Cloud is data that you choose to store in the cloud (e.g., Salesforce, your data warehouse in Amazon Redshift, etc.).
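To make that split concrete, here is a rough sketch of the hybrid pattern described above. The class and method names are hypothetical; this is not Informatica's actual agent or API, just an outline of "metadata in the cloud, data movement behind the firewall."

```python
# Illustrative sketch of the hybrid pattern (hypothetical names, not a real API):
# metadata and job definitions live in the cloud control plane; the runtime
# agent inside the firewall does the actual data movement.

class CloudControlPlane:
    """Stores only metadata: connections, job definitions, run history."""
    def __init__(self):
        self.jobs = {}
        self.run_log = []

    def register_job(self, name, source, target):
        self.jobs[name] = {"source": source, "target": target}

    def record_run(self, name, rows_moved):
        self.run_log.append({"job": name, "rows": rows_moved})


class OnPremAgent:
    """Runs behind the customer's firewall; actual data never passes through
    the control plane, only job metadata and run status do."""
    def __init__(self, control_plane, local_data):
        self.control = control_plane
        self.local_data = local_data

    def run_job(self, name):
        job = self.control.jobs[name]             # fetch only metadata
        rows = self.local_data[job["source"]]     # read the data locally
        # ... write rows to job["target"] (an on-prem DB or a cloud app) ...
        self.control.record_run(name, len(rows))  # report only counts/status


control = CloudControlPlane()
control.register_job("sync_accounts", source="erp.accounts", target="salesforce.Account")

agent = OnPremAgent(control, local_data={"erp.accounts": [{"id": 1}, {"id": 2}]})
agent.run_job("sync_accounts")
print(control.run_log)   # [{'job': 'sync_accounts', 'rows': 2}]
```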

The product has prebuilt connectors to many cloud-based solutions, so it's only a matter of selecting the application you need to connect with in the Cloud, and the Informatica Cloud solution automatically understands its structure and how to access the data stored there. I was very surprised at how quickly and easily a job could be set up to keep On-Prem and Cloud data synchronized.

Here is a diagram of the architecture that I mentioned earlier. The dotted line represents the management of the metadata in the Informatica Cloud. The company's actual data travels only between the On-Prem location and the Cloud applications that the company subscribes to… Well, there you go; I ended my blog with a preposition. Forgive me, Mrs. Rita Hart…

Image courtesy of Informatica

Google makes BI push with Cloud Platform Partner Program

One of the more notable announcements in the last month was that of Google's Cloud Platform Partner Program. The platform currently offers Google Compute Engine, Google App Engine, Google BigQuery, and Google Cloud Storage. The partner program is designed to extend the use of these applications by allowing integration of other Technology Partner applications, as well as expanding implementation options with Service Partners. In an earlier blog post, IBM CEO Study: Leading Through Connections, we looked at the idea of amplifying innovation with partnerships. Google has certainly positioned itself to embrace this mentality across several cloud-based fields with this move. However, we will focus on the moves within the business intelligence space, particularly around Google's BigQuery application. Here are the highlights of the eight partnerships announced, with some details on the objective of each:

Read the rest of this post »

Birst and TriCore Partner

TriCore Solutions, LLC and Birst Inc have combined forces to offer a more complete cloud-based BI solution.

Birst, founded in 2004, has been making great strides as a disruptor in the Business Intelligence space. The San Francisco-based startup also just completed Series D financing to start the month, raising a cool $26 million. That brings Birst's total funding to $46 million, according to a recent TechCrunch article.

Birst offers a set of products through the cloud, which in turn offers rapid deployment, ease of use, and greater affordability. The intent of this approach is to open big data and analytics to a wider audience. Birst also offers flexibility of data sources, as it is able to connect to a wide array of CRM, ERP, financial, and operational systems.

TriCore is a remote-services company specializing in applications. In the partnership, "they provide sales, deployment and maintenance of Birst solutions." While Birst has been product focused, TriCore will be able to extend services that enhance customer adoption and retention.

The partnership seems very complementary when comparing the respective services offered. Between the new financing and the partnership, Birst has had an eventful month. Very exciting for this emerging subset of the BI industry.

“New Analytics” in Spotlight

A great article highlighting changes in the business analytics industry: "Enterprise BI models undergo radical transformation," from ComputerWorld.com.

It includes some mention of ROI, the reporting tools at the forefront of "new analytics," and the "service-heavy implementation model" seen in these BI projects today.

Making sense of the Cloud hype, Part 2: Infrastructure as a Service

In part one of this series I discussed Software as a Service (SaaS) and some findings in that area. This post is about Infrastructure as a Service (IaaS).

In essence, it's just like the SaaS model: instead of buying a server, putting it into your data center, and then hiring people to administer it, you rent a server in a cloud provider's data center and their employees handle the infrastructure administration while your team uses the server as they wish. Further, you pay per hour of use instead of paying for a server that could sit idle for large parts of the day.

Many will say, "Hey! That's just the ASP model! That didn't work out so well." Well, there are two major differences now. Advances in broadband and bandwidth have seriously changed the game, making this a much more viable option. And the second difference is the biggie: virtualization.

To test stuff out I decided to get myself an account with a Cloud provider and see just how it really drove.  I got my account with the server I had selected and decided I wanted to put up an additional server. I went through the forms and I had a new server up and ready for use in 8 minutes. 8 minutes! Now imagine how long it would have taken to requisition a new server, get it shipped, install it, put it on the network and get it ready for use.  8 minutes is mind blowing. It’s even more so when you create an image with all your software. That means you can spin up a new server and have your app running on it in just minutes. The time to “spin up” is unparalleled.
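For readers who want to see what that "up in minutes" experience looks like programmatically, here is a minimal sketch using the AWS EC2 API via boto3 (Amazon being one of the IaaS providers mentioned earlier). The AMI ID, key pair, and instance type are placeholders, not values from this post.

```python
# Minimal sketch: launching a server on an IaaS provider programmatically.
# boto3/EC2 is used as one example; the AMI ID, key pair, and instance type
# below are placeholders. Valid AWS credentials are assumed.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # an image pre-baked with your software
    InstanceType="m5.large",
    KeyName="my-keypair",
    MinCount=1,
    MaxCount=1,
)

instance_id = response["Instances"][0]["InstanceId"]
print(f"Launched {instance_id}; it is typically reachable within minutes.")

# When the workload is finished, stop paying for it:
ec2.terminate_instances(InstanceIds=[instance_id])
```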

Further, the pay-per-hour model can be used to save some serious money. For example, I once worked with a client that had a massive beast of a machine to run ETLs that ran for only 2 hours every night. Now imagine, instead of paying for an immense server that sat idle for 11/12ths of the day, only spinning up the server when it's needed. The savings would be huge.
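A quick back-of-the-envelope comparison makes the point. The hourly rate below is an assumption for illustration, not a quoted price:

```python
# Back-of-the-envelope comparison: an always-on ETL server vs. paying only
# for the 2 hours per night it actually runs. The rate is an assumed figure.

HOURLY_RATE = 2.00          # assumed cost of a large instance, in $/hour
HOURS_PER_MONTH = 24 * 30

always_on = HOURLY_RATE * HOURS_PER_MONTH   # idle 22 of every 24 hours
on_demand = HOURLY_RATE * 2 * 30            # spun up only for the nightly ETL

print(f"Always-on: ${always_on:,.2f}/month")
print(f"On-demand: ${on_demand:,.2f}/month")
print(f"Savings:   {1 - on_demand / always_on:.0%}")   # about 92%, i.e. 22/24 of the bill
```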

And the uses in BI go beyond just saving money on an ETL server. Have a complex data migration that takes days to run? Spin up a few servers, assign them to the task, and get rid of them when you're done. It could be far more economical than having it just run for days. Is your report-rendering server maxed out due to month-end close? Spin up a new server, add it to the cluster, and spin it down when you're done.

That being said, the tech isn't entirely there yet to do such fanciful stuff so easily. You might have to script or build much of it on your own. It doesn't come for free. But with more and more people moving to the cloud, combined with the great work going on with OpenStack, it's only a matter of time until that level of sophistication is available.

Of course there are always cons, and I'd be remiss not to mention them. Cloud providers make the argument that since they bill you per hour, you have more control over what you spend. And they're partially right. You can control the availability of your system to save some cash. However, you cannot control the rate at which they charge you. You might lock in a good price for 2 years or so, but they will be free to renegotiate after the contract is over, and by then they'll have all your servers. So be aware of that. There's also the issue that most companies are unwilling to put all their apps in the cloud (e.g., a lot of companies will never move their financial data to the cloud), so if you put your business intelligence and ETL servers out in the cloud, I/O and bandwidth become issues, as your source will most likely be outside the cloud. And obviously there are the security concerns I brought up in part one of this series.

Nevertheless, I think this area of the cloud is the most promising and exciting. We really are on the verge of a paradigm shift, not just in BI but in IT in general. And it will be bigger than the shift we've seen so far.

 

Read Making Sense of the Cloud Hype, Part 1: Software as a Service


Posted in Emerging BI Trends

Making BI sense of the Cloud hype, Part 1: Software as a Service

Cloud cloud cloud? CLOUD!!

If the hype is to be believed, the Cloud is the most amazing thing ever. Something you'll want to wad up into your IT life and roll up to be a singular star in your IT sky. I've seen taciturn CIOs turn into giggling fanboys at the very mention of the Cloud. I've even seen respected publications say that ANY risk you could possibly face in moving to the Cloud is far outweighed by the benefits. If IT had a savior, its name would be Cloud.

And why not? The promise of the Cloud is expansive, the possible savings are huge, and the ease of certain aspects is shocking. But let's be honest: this is the real world, and despite the giddiness we tech geeks feel, there is no mythical panacea for IT woes. Pipers must be paid. Certain risks get traded for others. So, in trying to help readers make sense of it all, I'll be writing a series of posts to explain the major areas of the cloud and what they mean for BI. This post covers the Software as a Service (SaaS) model.

Read the rest of this post »

Emerging Trends in BI 2012 and the Microsoft stack

I just finished watching a webinar from The Data Warehousing Institute (TDWI) on the top BI trends for 2012 and thought I’d match those trends to the Microsoft stack for comparison.  According to TDWI, the top 7 trends are as follows:

 

1. Self-service BI – Enabling business users to perform their own modeling and analysis with little to no input from IT. This need has existed for a while and continues to grow. The following technologies cover this well, and if you want to see them live, just ask for a demo!
2. Mobile BI – During SQL PASS, Microsoft announced their 2012 roadmap for embedded mobile support in SQL Server 2012: http://tinyurl.com/87jz66g. Of course, third-party apps are always available for immediate support.
3. Cloud computing – Or more specifically, BI in the cloud, which exists in SQL Azure now: http://tinyurl.com/7sjsk82
4. Advanced analytics – Although this term can imply a broad topic, we're essentially talking about predictive analytics (i.e., data mining). SQL Server Analysis Services (SSAS) has supported data mining for a long time. However, the roadmap continues to expand with:
  • Embedded data mining – Out-of-the-box support: http://tinyurl.com/82ulxlu
  • Data mining in the cloud – Giving your users access to unlimited scalability for large number-crunching analysis: http://tinyurl.com/7jrjq4a (video, no demo)
  • 3rd-party, self-service, cloud-based data mining with Excel integration is also available.
5. Big Data – We're not talking DW-appliance big (although Microsoft is well entrenched in the appliance space), we're talking so big the data can't fit in a database! Apache Hadoop is designed for just such a problem, and Microsoft has planned support for this technology, which will be embedded in their core Windows operating systems (including Azure) and integrated throughout their BI delivery tools. http://tinyurl.com/6s9c45x
6. Data Visualization – Improving the user's ability to absorb and understand data is a critical success factor that should be branded on every BI developer's paycheck. Fortunately, with the numerous choices already present in the Microsoft stack, along with the introduction of Power View (see above), this trend is well covered.
7. Social BI – Now this is an interesting one, and I'll be watching its trend closely. The TDWI study looked at Facebook specifically, but we should consider Twitter and some other social spaces as well. The premise here is that social BI becomes relevant for specific industries, primarily retail and marketing, where getting instant feedback on ads or shopper movements/sentiment (your cell phone is watching you, you know…) can be desirable. With relatively little coding, this example shows how Twitter feeds can be analyzed in PowerPivot: http://tinyurl.com/6lslh8b. I would expect to see native connectors for these soon if they don't already exist.

In short, the Microsoft BI stack has our future needs covered pretty well!

Gartner and IDC Predict Big Data Trends for 2012

In addition to increases in cloud computing and consumerization of IT, Gartner’s analysts predict that in 2012, we’ll see noticeable increases in big data.

Among other specific predictions, according to FierceCIO:

  • Gartner’s analysts fear that the ever-expanding volume of information available to organizations, coupled with IT’s diminishing control, will make it harder to ensure the consistency and effectiveness of data.

IDC predicts:

  • Worldwide IT spending will grow 6.9% year over year to $1.8 trillion in 2012. As much as 20% of this total spending will be driven by the technologies that are reshaping the IT industry – smartphones, media tablets, mobile networks, social networking, and big data analytics.
  • Big Data will earn its place as the next “must have” competency in 2012 as the volume of digital content grows to 2.7 zettabytes (ZB), up 48% from 2011.
  • Over 90% of this information will be unstructured (e.g., images, videos, MP3 files, and files based on social media and Web-enabled workloads) – full of rich information, but challenging to understand and analyze.
  • As businesses seek to squeeze high-value insights from this data, IDC expects to see offerings that more closely integrate data and analytics technologies, such as in-memory databases and BI tools, move into the mainstream.
  • And, like the cloud services market, 2012 is likely to be a busy year for Big Data-driven mergers and acquisitions as large IT vendors seek to acquire additional functionality.

Where is your big data focus going to be in 2012?