As IT consultants, we're pretty sure that we're smart enough to recognize phishing attacks against us. We all get emails asking us to open invoices, confirm our bank account information, or perform other actions designed to separate us from our credentials and our money. But according to a consultant specializing in penetration testing, 40% of us will still click through to a malware payload. That's a statistic I learned at a recent Society for Information Management (SIM) presentation of "FBI – State of Cybersecurity."
Acting Supervisory Special Agent Brian D. Jackson of the St. Louis FBI field office was the featured speaker for the night, and he delivered a super interesting talk on trends in cybercrime that was followed by open discussion with local security professionals. Besides the 40% click-through statistic, here are four other security takeaways that all of us can use:
It’s been a busy year in the Enterprise Information Systems space. With over 75 posts this year, our in-house experts found themselves face to face with big changes and an abundance of great information to share. We sifted through that content and present to you the Top 10 EIS posts of 2015.
Internally, Perficient was dealing with a growing volume of revenue and payments and chose Cognos Incentive Compensation Management (ICM) to manage it. This post explains why we chose it and how we went about the implementation.
In 2015, business spending continued its shift toward a customer focus with operational spending in decline. This post talked about the importance of investing in the customer experience.
There was much discussion over whether Apache Spark would replace Hadoop. This post discusses the question you should really be asking about data.
Splunk CEO Godfrey Sullivan announced that he would be handing over the CEO reins to Doug Merritt.
NoSQL databases are highly scalable, provide great performance, and store and process large amounts of data at high speeds. This post discusses one big concern: security.
This post investigated how an enterprise should proceed when Hadoop is no longer the only option.
When you find yourself with a fully functional system on the Test instance, you may want to migrate everything to production. This post shows you how to do that.
The stars seem to be aligning for the infrastructure-as-code paradigm to be widely adopted. If self-healing and self-correcting systems are possible, does that mean the end of IT?
The default error messages are tailored for IT rather than business users. This post explains how to customize them for easier comprehension.
In April, Oracle released Data Sync, a utility that facilitates the process of loading data from various on-premise sources into BI Cloud Services. In this post, we shared the three main steps to follow when designing data loads in Data Sync.
The end of 2015 is fast approaching, with December looming just a week away. For most people, December is packed with the hustle and bustle of last-minute gift shopping, or end-of-year projections and budgets for 2016. Caught up in all this activity, many are so focused on the approaching New Year that they abandon the current year without even a backwards glance, flipping the page in their agenda or tossing the current calendar before the days of December have completely passed. So, regardless of what December holds for you, why not take a moment to reflect on some of your accomplishments from 2015 before it is time to usher in any new resolutions for 2016?
In keeping with my own suggestions, here is what I assessed when I looked back over 2015: time well spent.
Alongside Splunk's Q3 earnings release came the additional announcement that Godfrey Sullivan would be handing over the CEO reins to Doug Merritt.
I don’t know Silicon Valley history enough to confirm or deny the statement above, but if I could offer my own twist and re-write Mr. McDowell’s statement:
Godfrey, I think history is going to judge you as one of the truly iconic Analytics CEOs.
Godfrey has created value for shareholders, customers, employees and partners using a revolutionary way to get customers to use and value software from Splunk.
When people ask me why I am excited about Splunk, I mention the fundamentally different technology built on the schema-on-read paradigm, and I talk about the value customers can get. I also talk about Godfrey. Proven, Fun, Visionary… he is certainly a reason I have been so excited about Splunk, its culture and what it can be.
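For readers unfamiliar with the term, schema-on-read means structure is imposed on the data when you query it, not when you ingest it, so raw events can be stored as-is and new questions never require re-loading. A minimal sketch of the idea in Python (the log format and field names here are made up for illustration, not taken from Splunk):

```python
import re

# Raw events are ingested exactly as they arrive; no schema at write time.
raw_events = [
    "2015-12-01 10:02:11 status=200 user=alice bytes=512",
    "2015-12-01 10:02:13 status=404 user=bob bytes=0",
    "2015-12-01 10:02:15 status=200 user=alice bytes=2048",
]

def extract_fields(event):
    """Apply a schema at read time: pull key=value pairs out of raw text."""
    return dict(re.findall(r"(\w+)=(\w+)", event))

# The "schema" lives in the query, so a new question needs no re-ingestion.
errors = [e for e in raw_events if extract_fields(e).get("status") == "404"]
total_bytes = sum(int(extract_fields(e)["bytes"]) for e in raw_events)
```

With a schema-on-write system, adding the `bytes` question after the fact would mean altering tables and backfilling; here it is just another read-time extraction.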
Throughout history we have found a variety of ways to recognize people's contributions. If Godfrey were a baseball player, he would be a shoo-in for the Hall of Fame. If there were a Mount Rushmore for Analytics, he would be on it.
The good news about this inevitable transition, as confirmed on the Q3 earnings call, is that it was a calculated plan: Godfrey essentially hand-picked his successor and trained him in "the Godfrey way." True to the rest of his track record, Godfrey goes out the right way too. The company couldn't be better positioned for the future. We look forward to the next phase of the journey.
IBM’s Advanced Analytics story is now powerful yet simple. It focuses on ALL!
Hybrid (heterogeneous architectures), trust (ensuring end-to-end accuracy and trust in the entire system), and agility (speed-of-thought analysis) are the core principles of this transformation of IBM's Analytics portfolio. The product stack comprises Cognos Analytics, Watson Analytics, and Spark-powered SPSS for Predictive.
With an incredible built-in Search facility that understands the context you are in, Cognos BI just got better: rebranded as Cognos Analytics, it enables smarter self-service analytics along with self-initiated discovery and visualization.
Watson Analytics takes the bias/subjectivity out of the analytical journey by enabling the Citizen Data Scientist and other Business Users to statistically interrogate the data. Detecting trends, patterns and anomalies just got a lot simpler.
IBM Predictive Analytics now empowers the new multi-dimensional Citizen Data Scientist with the open source-driven (Spark) SPSS framework. Their Predictive story just became more personalized, intuitive, managed and above all, powerful.
In a bid to win the race to insight, IBM itself has undergone a major transformation in its Data and Analytics product portfolio. At #IBMInsight, IBM executives organized several keynotes and Super Sessions each day to unveil their ever-evolving approach to modern-day data architecture, in which Analytics and Cloud are integral components of a transformational data infrastructure.
IBM's Data Platform now reflects the convergence of "smart" Analytics, Machine Learning, Big Data paradigms, Cognitive and Natural Language Processing, and Cloud technologies. The IBM Data and Analytics Platform is now built on Spark (bringing Open Source flexibility and feasibility to the Enterprise) and supports hybrid architectures (on-premises and in the cloud), with agility built into the simplicity of this transformed data infrastructure.
One of the coolest features of this platform is self-initiated Discovery using natural language to interrogate and interact with the underlying data structures. Additionally, instant fusion of temporal and geospatial awareness and single-click integration with an existing Data Warehouse or Repository make this Platform very alluring from a personalization and customer-centricity standpoint.
The companies that will leverage data in its totality (“big” and “small”) combined with cognitive computing and analytics will be the intelligent brokers of game-changing information paradigms.
Analytics and Big Data technologies are fueling this insight economy, and hybrid cloud architectures, in turn, are facilitating the unlocking of newer business models. These three mega-trends continue to converge in an IoT world, thereby bringing about a major business-model disruption for companies of all sizes. Organizations that ride this wave of massive transformation and become intelligently cognitive, data-driven companies will be the leaders of the insight economy.
Over the course of the week, several exciting examples were presented of companies that are well on their way to embracing this four-dimensional transformation. Coca-Cola (taking precision marketing and audience segmentation to the next level), GoMoment (building cognitive hotels), VineSleuth (helping consumers find the perfect bottle of wine), and StatSocial (creating holistic, personalized consumer profiles for retailers) showed some amazing ways in which they continue to disrupt their respective industries and markets.
A few years ago, Adrian Cockcroft, cloud architect at Netflix at the time, posted this blog post, which caused quite a stir in the IT community. It described how Netflix had almost done away with DevOps (or even plain Ops) by using the cloud (AWS, in this case), coining yet another IT buzzword: NoOps.
Many in the DevOps community took strong issue with this, arguing that Ops by any name, whether NoOps or DevOps, is still Ops. A lot of the Platform-as-a-Service (PaaS) vendors jumped on the NoOps bandwagon, even declaring the following year to be the definitive year of NoOps.
Vendors like Heroku, AWS Elastic Beanstalk, and AppFog tout their PaaS platforms as purely development-focused, with no need for operations support. I witnessed this in person during a Heroku workshop (Heroku itself, by the way, is hosted on AWS). It is frighteningly simple to create a website or web service using any of the supported language platforms and connect it to a set of standard database backends and tools; it scales efficiently, and if you have ever worked on any kind of multi-stack project, the setup is a breeze.
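To give a sense of just how little "an app" can be on these platforms: the platform supplies the server, routing, and scaling, so the developer ships little more than a request handler. A minimal sketch in Python using the standard WSGI interface (the endpoint and response text are illustrative, not from any particular vendor's tutorial):

```python
from wsgiref.simple_server import make_server

def app(environ, start_response):
    """A complete deployable web service; the PaaS provides everything else."""
    path = environ.get("PATH_INFO", "/")
    ok = path == "/"
    body = b"hello from the cloud" if ok else b"not found"
    status = "200 OK" if ok else "404 Not Found"
    start_response(status, [("Content-Type", "text/plain")])
    return [body]

# Locally you would run: make_server("", 8000, app).serve_forever()
# On a PaaS, a one-line process declaration points the platform at `app`.
```

That asymmetry is the whole NoOps pitch: the code above is the entire artifact the developer owns, while provisioning, patching, and scaling happen on the vendor's side.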
I think a key drawback of PaaS currently is that unless the project is self-contained, or all of your company's data and services are located in the cloud or accessible externally, it is difficult to punch enough holes through the corporate firewall to justify the move to PaaS, especially if the data is sensitive. I think organizations are still uncomfortable with the idea of highly sensitive data they own being hosted on systems outside their control. Also, being locked into a limited toolset or a particular database might not appeal to every project owner, given the proliferation of specialized software resources, especially in the Big Data landscape.
Last week, Experian and T-Mobile announced a significant data breach. What's worse is that the hackers were able to maintain the breach for over two years. T-Mobile CEO John Legere, never one to mince words, came out with a statement saying he is "incredibly angry" about the breach of 15 million customer records and will conduct a thorough review of the company's relationship with Experian.
While it is difficult to learn that the breach occurred, what's worse is that it went on for over two years. What breaches are still ongoing in the world that we don't know about?
The very next day, Scottrade disclosed that it had been the victim of a data breach exposing data for over 4 million customers during late 2013 and early 2014. Interestingly enough, in this case Scottrade didn't even know the breach was occurring; the FBI came in to tell them.
So, the already convoluted Open Source Hadoop ecosystem just got a little more complicated with a Kudu joining the Elephant at #StrataHadoop. Advocates of Fast Analytics on Fast Data at Scale also just got more excited about the potential of fast writes, fast updates, fast reads, fast everything, all with Kudu! Cloudera's Kudu is designed to fill major gaps in Hadoop's storage layer, especially with regard to fast analytics, but is not meant to replace or disrupt (just yet!) HBase or HDFS. Instead, Kudu is meant to complement those storage engines and run in close proximity to them, because some applications may still get more benefit out of HDFS or HBase.
Before the official release of this news, VentureBeat speculated about Kudu’s possible implications for the Big Data industry. It “could present a new threat to data warehouses from Teradata and IBM’s PureData … It may also be used as a highly scalable in-memory database that can handle massively parallel processing (MPP) workloads, not unlike HP’s Vertica and VoltDB.”
Whatever the long-term implications of Kudu, the above scenarios are not going to play out any time soon. Maturity is still what most enterprises crave in this rather diverse Open Source ecosystem, and Kudu, for all the excitement around it, has a long way to go on that front.