We all know or have heard that the Cloud brings many benefits for both software development companies and consumers of Cloud software. Nowhere is this more evident than in the pace of enhancements and upgrades. In the old days of on-premises computing, software companies had to make sure that their software and all upgrades worked with many operating systems, databases, middleware and security platforms. Managing this chaos was often done via a spreadsheet jokingly referred to as the ‘matrix of death’. How times have changed and we, the consumers, reap the benefits!
Oracle Analytics Cloud (OAC) is automatically upgraded every quarter. These upgrades are not just for fixing known issues; they often include significant enhancements, and the 5.9 update to OAC is no different.
With the 5.9 update Oracle continues the tradition of improving and simplifying the end user experience through easy-to-use but powerful features that let users do more while doing less.
Top 10 New Features of Oracle Analytics Cloud 5.9
1. Text Tokenization – with this new feature you can easily analyze text fields. For instance, let’s say you have a database with complaints about different car models and the complaint descriptions are in a free form text field. In OAC, you can create a data flow and use the text token capability to analyze how many times different words occur for different car models. There is no coding required for this feature.
https://bit.ly/39y1Z9S
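OAC handles tokenization through its data-flow UI with no code, but the underlying idea (split free-form text into tokens and count them per group) can be sketched in a few lines of Python. This is a conceptual illustration only; the sample complaints and model names are made up:

```python
import re
from collections import Counter

# Hypothetical complaint records: (car model, free-form complaint text)
complaints = [
    ("Model A", "Engine stalls at low speed; engine noise on cold start"),
    ("Model A", "Transmission slips, engine warning light on"),
    ("Model B", "Paint peeling on hood, wind noise at highway speed"),
]

# Tokenize each description and count word occurrences per car model
counts = {}
for model, text in complaints:
    tokens = re.findall(r"[a-z]+", text.lower())
    counts.setdefault(model, Counter()).update(tokens)

print(counts["Model A"]["engine"])  # "engine" appears 3 times for Model A
```

In OAC the equivalent counting happens inside the data flow, and the resulting token counts can be visualized like any other data set.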
2. Improved Sorting Capability – when analyzing data there are few utilities that are more frequently used than sorting. The correct sorting makes data much easier to understand. OAC 5.9 has added a number of sorting enhancements:
3. Data Preparation Enhancements – you might be aware that Oracle Analytics Cloud comes with a powerful set of capabilities related to data preparation/wrangling. These days business analysts are doing a lot of the data preparation work that previously might have been done by IT and tools/shortcuts are greatly appreciated. OAC 5.9 includes the following data prep enhancements:
4. Filter Improvements – OAC allows users to establish filters at the dashboard level, which will apply to all the visualizations on the dashboard, as well as at the individual visualization level. With the new 5.9 enhancements, users can now easily drag filters established at the visualization level to the dashboard level and vice versa. This allows users to move more quickly as they analyze data sets.
https://bit.ly/3cqxZ1l
5. Frequent Itemset Analytics (Market Basket) with Data Flows – this enhancement is a great example of increased integration between Oracle Analytics Cloud and Autonomous Data Warehouse. For many years the Oracle database has had advanced analytics like market basket analysis built into it – however previously you had to be a database programmer to access these capabilities. With the integration of OAC and ADW, business analysts can easily take advantage of the advanced analytics capabilities built into the Oracle database through a simple point and click interface in OAC called ‘Data Flows’. With this enhancement a business analyst will launch the market basket analysis from within OAC but the actual processing will be done leveraging the power of the Oracle database (this is called ‘function shipping’). The results of the market basket analysis are then available in OAC for analysis.
https://bit.ly/2NWLaND
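Conceptually, frequent itemset mining counts how often items appear together in the same transaction and expresses that as "support". A minimal Python sketch of the idea, using made-up baskets (in OAC the real computation is function-shipped to the Oracle database, not done client-side like this):

```python
from itertools import combinations
from collections import Counter

# Hypothetical point-of-sale transactions (baskets of items)
baskets = [
    {"bread", "butter", "milk"},
    {"bread", "butter"},
    {"milk", "eggs"},
    {"bread", "butter", "eggs"},
]

# Count every item pair that appears together in a basket
pair_counts = Counter()
for basket in baskets:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

# Support = fraction of baskets containing the pair
support = {p: c / len(baskets) for p, c in pair_counts.items()}
print(support[("bread", "butter")])  # 0.75: together in 3 of 4 baskets
```

High-support pairs are the "frequently bought together" results a business analyst would then explore in OAC.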
6. Ability to Visualize Oracle Machine Learning Model Metadata in OAC – this enhancement is another great example of leveraging the increased integration between ADW and OAC. First a little background. With Oracle Machine Learning, Oracle has built into the database over 30 machine learning algorithms that can be used by professional data scientists to create machine learning (ML) models (https://www.oracle.com/data-science/machine-learning/). With this enhancement available in OAC 5.9, machine learning models created in the Oracle database can now be registered for use within OAC. Additionally, all the metadata related to those ML models can now be visualized in OAC through the standard OAC interface.
https://bit.ly/3tfuMrC
7. Explain Predictions on Your Data Using OAC Machine Learning Model Output Options – being able to explain the predictions made by machine learning models has emerged as a critical requirement. When machine learning models developed in the Oracle database are processed, they create output that explains how the prediction was arrived at for each and every record (i.e., what attributes contributed to the prediction with the weighting factor identified). This enhancement allows that output to be automatically available for analysis in OAC. This puts business analysts in the position of being able to explain how a model arrived at its predictions (rather than just saying “the model said so”).
https://bit.ly/3r5hNa3
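For a linear model, per-record attribution is easy to picture: each attribute's contribution is its value times its coefficient, and the contributions sum to the prediction. The sketch below uses invented coefficients purely to illustrate the shape of such explanation output; it is not output from an actual Oracle ML model:

```python
# Hypothetical linear model: prediction = intercept + sum(coef * value)
intercept = 20.0
coefficients = {"rooms": 5.0, "crime_rate": -2.0, "pupil_teacher_ratio": -1.0}

# One record to be explained
record = {"rooms": 6.0, "crime_rate": 0.5, "pupil_teacher_ratio": 15.0}

# Per-attribute contribution to this record's prediction
contributions = {k: coefficients[k] * record[k] for k in coefficients}
prediction = intercept + sum(contributions.values())

# List drivers by absolute impact, largest first
for attr, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"{attr}: {c:+.1f}")
print("prediction:", prediction)  # 20 + 30 - 1 - 15 = 34.0
```

This is the kind of per-record breakdown the enhancement surfaces in OAC, letting an analyst say which attributes pushed a prediction up or down and by how much.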
8. Add New Map Backgrounds Using Web Map Service (WMS) or XYZ Tile Maps – with this enhancement it is now possible to add new map backgrounds from the Console of Oracle Analytics Cloud (you need to have access to the Console). Spatial analytics can add very important context to visualizations and the map background can bring visualizations to life.
https://bit.ly/3t99CLP
9. Increased Limits on the Number of Rows Returned for Queries – great to see the power of OAC increasing. The maximum number of rows returned when you query data for visualizations, analyses, and dashboards has increased to:
10. Improved Progress Bar Behavior – the OAC interface and user experience continues to be enhanced. Previously when data was refreshed a horizontal blue bar would appear across the screen. Now a small circle in the upper right corner indicates data refresh progress (like when you download an app on a mobile phone). It might seem like a small thing but it is more subtle and improves the overall visual experience.
We hope you enjoy these new features of OAC 5.9!
In late 2019 Oracle released Fusion Analytics Warehouse (FAW). FAW is built on Oracle Analytics Cloud (OAC) and powered by Oracle Autonomous Data Warehouse (ADW). It provides personalized application analytics, benchmarks, and machine learning-powered predictive insights across all line-of-business job functions and business processes for Oracle Cloud Applications. It consists of prebuilt analytic applications that include a data pipeline, data warehouse and prebuilt KPI’s, metrics, reports, and dashboards. FAW is purpose-built for Oracle’s Cloud SaaS applications: ERP Cloud, HCM Cloud, SCM Cloud, and CX Cloud. The first two modules of FAW have been released: Fusion ERP Analytics and Fusion HCM Analytics. Additional modules for SCM Cloud and CX Cloud will follow.
In today’s data-driven world, no process can be considered complete or fully optimized without addressing the analytics related to the process. Although FAW can be deployed at any time – either during the ERP Cloud implementation or after – there are significant benefits to be realized from deploying FAW concurrent to the ERP Cloud deployment. Some of these benefits are:
The modules of FAW complement Oracle Transactional Business Intelligence (OTBI, which is part of ERP Cloud) and address the need for analytics beyond transactional reporting. Whereas OTBI is focused on transactional reporting with limited historical reporting and works with small volumes of data sourced from Oracle Fusion apps only, Fusion ERP Analytics is geared towards strategic and advanced analytics, supporting deep historical analysis, non-Oracle Cloud data sources and large volumes of data. Fusion ERP Analytics helps executives and decision-makers improve business performance and gives analysts the tools to uncover hidden insights.
Fusion ERP Analytics includes 15 subject areas and over 100 data warehouse tables covering General Ledger, Accounts Payable and Accounts Receivable. It comes with over 50 Financial KPI’s that are prebuilt to leverage the data in your ERP Cloud system. With Fusion ERP Analytics you can do the following:
Oracle ERP Cloud and Fusion ERP Analytics represent a powerful combination delivering an integrated platform for flawless business process execution and deep, machine-learning powered analytics to drive improved business performance and continuous improvement. Perficient’s credentialed Oracle expertise and team of highly skilled implementation specialists can guide you on your journey to world-class business process execution and analytics.
The world of modern data and analytics continues to evolve and is very exciting. The change really began in earnest about 10 years ago with the introduction of Hadoop and big data processing. Suddenly corporations could analyze much larger data sets than before and could extract insights from data that could transform companies and industries. While this explosion of data use cases started on premises, it is most certainly migrating to the Cloud as the primary platform.
Oracle already had a world-class database. Over the last several years Oracle has upgraded its Oracle Cloud infrastructure. They started from scratch and rebuilt their Cloud, improving on the lessons learned from other public cloud providers. They also quietly built out a robust set of services to support any and all use cases related to data and analytics.
Outlined below are the Top 10 Things You Didn’t Know about Data and Analytics in the Oracle Cloud:
1. Full Data Lake Capability – Either Hadoop-based or Object Storage-based – it is easy to quickly provision a data lake in the Oracle Cloud using either Hadoop/HDFS or Object Storage as the primary storage mechanism.
Oracle Big Data Service – click here for more information about setting up a Hadoop-based data lake on Oracle Cloud Infrastructure (OCI)
Object Storage-based Data Lake – this is a recent YouTube video from Oracle demonstrating how to set up a data lake on OCI using object storage
2. Data Catalog – Oracle provides a data catalog service to allow easy access to all your data – regardless of location. Whether it is in a data lake or a data warehouse, structured or unstructured, in a relational database, in object storage or in Hadoop, the data catalog can help you keep track of your data assets.
OCI Data Catalog – this is a recent YouTube video from Oracle describing use cases for the OCI Data Catalog and how to set up OCI Data Catalog
3. Support for Streaming Data – the Oracle Cloud supports streaming data use cases via the OCI Streaming Service and Kafka Connect. Perhaps you want to stream data from social media to perform sentiment analysis, or you want to take in machine sensor data in real time to perform diagnostics and run machine learning models for predictive maintenance, or you are a financial services company that wants to analyze high-volume transactions in real time for fraud detection – OCI Streaming and Kafka Connect support these use cases and many more. The OCI Streaming Service is fully managed so companies don’t have to worry about the complexity and operational burden of running all their data streams.
OCI Streaming Service and Kafka Connect – excellent Oracle blog on use cases, set up and benefits of OCI Streaming Service and Kafka Connect
Demo of Setting up OCI Streaming – short, recent YouTube video explaining OCI Streaming with a demo on how to set it up (non-Oracle video)
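The fraud-detection pattern mentioned above (flag transactions that deviate sharply from a running baseline) can be sketched in plain Python. The stream here is simulated and the threshold rule is deliberately simple; the OCI Streaming/Kafka plumbing and any real scoring model are omitted:

```python
from collections import deque

def flag_anomalies(amounts, window=5, factor=3.0):
    """Flag a transaction if it exceeds `factor` times the rolling mean
    of the previous `window` transactions."""
    recent = deque(maxlen=window)
    flagged = []
    for amount in amounts:
        if len(recent) == window and amount > factor * (sum(recent) / window):
            flagged.append(amount)
        recent.append(amount)
    return flagged

# Simulated transaction stream: mostly small amounts, one large outlier
stream = [20, 25, 22, 30, 28, 500, 24, 26]
print(flag_anomalies(stream))  # [500]
```

In a real deployment this logic would run in a consumer reading from an OCI Streaming topic via the Kafka-compatible endpoint, with a trained model in place of the fixed threshold.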
4. Serverless Spark Service – OCI Data Flow is a fully managed, serverless Spark service that lets you run Apache Spark applications with no infrastructure to deploy or manage. You can run Spark jobs against your data in Hadoop or Object Storage without worrying about provisioning a server, and you only pay for what you use.
OCI Data Flow Service – recent YouTube video explaining OCI Data Flow Service
5. Big Data SQL Cloud Service Allows SQL Query Access Regardless of Underlying Storage – perhaps you have data in an object storage-based Data Lake, some data in Hadoop/HDFS, some data in a NoSQL database and some data in a relational data warehouse, and you want to use SQL to query across all those data sets – Oracle Big Data SQL Cloud Service will support that.
Oracle Big Data Cloud SQL – this is Oracle documentation on using the Big Data Cloud SQL service
6. World Class Cloud Data Warehouse – you have almost certainly heard of Snowflake (if only for its recent IPO) and you may have heard that Cloud data warehouses are a hot technology category. You may not be aware that Oracle has a world-class cloud data warehouse called ‘Autonomous Data Warehouse’ (ADW). It is a full-blown Oracle autonomous database that has been optimized for analytic workloads. For instance, the data is stored in a columnar manner on disk to support high performance analytic processing. ADW can be provisioned easily, you pay for what you use, it runs on Exadata machines and supports autoscaling.
Autonomous Data Warehouse Technical Deep Dive – recent Oracle YouTube video discussing the technical differentiators of ADW
7. Machine Learning Capability built into the Database – Oracle’s autonomous database includes 30+ machine learning algorithms that can be modified using Python or R. Oracle’s mantra in this area is “move the algorithms, not the data”. Previously, it was necessary to separately purchase the ‘Advanced Analytics’ option to access the machine learning capabilities of the Oracle database, but now that is not necessary – all the machine learning, data mining and advanced analytics capabilities come with the base license/subscription for the Oracle database.
Machine Learning in the Oracle Database – recent YouTube video from Oracle explaining how machine learning works in the database – including how to use the built-in notebook feature
Machine Learning in the Oracle Database – Short Summary – this is a 3-minute YouTube video that quickly summarizes the basics of Oracle Machine Learning in the database
8. Data Science Platform for Professional Data Scientists – Does your company have an in-house team of professional data scientists whose job it is to extract value from the vast amount of data in the data lake and data warehouses? The Oracle Cloud includes a data science platform with the tools and platforms most used by professional data scientists. This platform also focuses on deploying and operationalizing ML models including ongoing tuning of the models.
OCI Data Science Platform – this is a playlist of 5 short videos explaining how to set up and use OCI Data Science platform
9. Oracle Data Integrator is Free in the Oracle Cloud Marketplace – Oracle Data Integrator (ODI) is a top-rated data integration and ETL platform. It is used by some of the largest companies for their most complex ETL tasks. ODI is on Oracle’s strategic roadmap and continues to be enhanced and supported. ODI is currently free on the Oracle Cloud Marketplace. There is no license or subscription cost. You will pay only for the Oracle Cloud compute that ODI consumes (and compute is very inexpensive in the Oracle Cloud – e.g., running a standard VM with 2 OCPU’s for 10 hrs/day will cost about $40/month or about $480 per year).
Oracle Data Integrator on the Oracle Cloud Marketplace
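The compute estimate above is simple arithmetic and easy to adapt to your own usage. The quick check below assumes a pay-as-you-go rate of roughly $0.0667 per OCPU-hour, which is an illustrative figure; check current OCI pricing for your shape and region:

```python
# Assumed inputs -- adjust to your own shape, region and usage
ocpus = 2
hours_per_day = 10
days_per_month = 30
rate_per_ocpu_hour = 0.0667  # USD, illustrative pay-as-you-go rate

monthly = ocpus * hours_per_day * days_per_month * rate_per_ocpu_hour
yearly = monthly * 12
print(f"~${monthly:.0f}/month, ~${yearly:.0f}/year")  # prints ~$40/month, ~$480/year
```

The point stands regardless of the exact rate: with ODI itself free on the Marketplace, the cost of running it is just the underlying VM compute.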
10. Prebuilt Analytics Leveraging Oracle Cloud SaaS Applications – this is a differentiator between Oracle and the other public cloud providers. Unlike the other public cloud providers, Oracle has top-rated Cloud applications for ERP, Supply Chain Management (SCM), Human Capital Management (HCM) and Customer Experience (CX). Oracle has developed “Fusion Analytics Warehouse” (FAW) which is a set of prebuilt analytic applications that run in the Oracle Cloud and work with Oracle’s Cloud SaaS applications. Oracle has prebuilt a data pipeline to extract data from the Cloud SaaS applications into a Cloud-based data warehouse and has prebuilt KPI’s, reports and dashboards. Fusion ERP Analytics was one of the first modules of FAW that was released and it works with Oracle’s Cloud ERP SaaS application. For more information on Fusion ERP Analytics please see my blog titled “Best Practices for Implementing Fusion ERP Analytics”.
What is Fusion Analytics Warehouse – this is a YouTube video from Oracle that introduces and explains Fusion Analytics Warehouse
Perficient’s Oracle Analytics practice is a team of seasoned, dedicated and passionate data and analytics professionals. They have worked with numerous clients to successfully extract value from their data and transform them into data-driven organizations.
Oracle Fusion Enterprise Resource Planning (ERP) Analytics is a module of Oracle Fusion Analytics Warehouse (FAW). FAW was formerly known as ‘Oracle Analytics for Applications’ (OAX). This name change is permanent and going forward we will use the following terms:
The modules of Fusion Analytics Warehouse have been developed by Oracle to work with Oracle Cloud SaaS applications – aka Fusion Applications (ERP, Supply Chain Planning/SCM, HCM, Customer Experience/CX). Each module of FAW developed by Oracle contains the following components:
Here is the URL for Fusion Analytics Warehouse for further information:
https://www.oracle.com/business-analytics/fusion-analytics.html
Oracle has a long history of success with packaged analytics applications. They are continuing that tradition with FAW but with some important differences enabled by the Oracle Cloud platform.
Outlined below are the KPI’s prebuilt into Fusion ERP Analytics:
Oracle Fusion ERP Analytics can be implemented into production in about 10 weeks. The KPI’s and metrics in FAW are based on configurations and decisions made during the implementation of Oracle Cloud ERP. It is therefore important to make those Cloud ERP configuration decisions with an understanding of the KPI’s and metrics that are required as analytic output. Perficient has developed a methodology that provides a roadmap for simultaneous deployment of Cloud ERP and Fusion ERP Analytics – highlighting and emphasizing the important areas for coordination.
When implementing Oracle Cloud ERP it is important to pay attention to the following values and how they are defined as they will impact the deployment of Fusion ERP Analytics.
Perficient’s Oracle Analytics professionals have deep expertise in leveraging the full portfolio of Oracle Analytics solutions to extract value from data and transform companies into data driven organizations.
In today’s data-driven world, speed and agility can be the deciding factor for survival in the marketplace and we know that data and analytics are the enablers of agility. No longer can companies afford to maintain data and analytics platforms that can be outdated shortly after they are deployed. Time cannot be spent on time-consuming and expensive upgrades. The modern data and analytics platform needs to be a living, breathing organism that is continuously improved, does not require traditional upgrades, and can rapidly respond to changes in analytical needs (the COVID-19 situation of our present-day has made that point in a big way). Oracle has built such a platform in the Cloud-based Fusion Analytics Warehouse (FAW) suite.
For those that are not aware, FAW is a Cloud-based data and analytics platform developed by Oracle, leveraging Oracle Autonomous Data Warehouse (ADW) and Oracle Analytics Cloud (OAC), that is built to work ‘hand-in-glove’ with Oracle’s Cloud SaaS applications like Enterprise Resource Planning (ERP) Cloud, Human Capital Management (HCM) Cloud, Supply Chain Management (SCM) Cloud and Customer Experience (CX) Cloud.
FAW consists of a suite of analytics modules for ERP, HCM, SCM, and CX Cloud. Each module includes the following:
Please see the link below for information from Oracle’s website:
https://www.oracle.com/business-analytics/fusion-analytics.html
Important Benefits of Fusion Analytics Warehouse (FAW):
As a Cloud-based data and analytics platform that is purpose-built for integration with Oracle Cloud SaaS apps, FAW allows companies to maximize their Cloud investment while also deploying a data and analytics platform that supports continuous improvement and that can quickly adapt to changes in analytical needs – without an expensive and time-consuming upgrade.
Perficient’s Oracle Analytics team has deep expertise in Oracle Analytics, including FAW, and has helped numerous companies define and deploy Oracle Cloud-based data warehouses and analytic systems.
In late 2019 Oracle released Oracle Analytics for Applications (OAX). OAX is built on Oracle Analytics Cloud (OAC) and powered by Oracle Autonomous Data Warehouse (ADW). It provides personalized application analytics, benchmarks, and machine learning-powered predictive insights across all line-of-business job functions and business processes for Oracle Cloud Applications. It consists of prebuilt analytic applications that include a data pipeline, data warehouse and prebuilt KPI’s, metrics, reports, and dashboards. OAX is purpose-built for Oracle’s Cloud SaaS applications: ERP Cloud, HCM Cloud, SCM Cloud, and CX Cloud. The first module released for OAX was Oracle Analytics for Fusion ERP (OAF) which focuses on financial analytics. Additional modules for HCM Cloud, SCM Cloud, and CX Cloud will follow.
In today’s data-driven world, no process can be considered complete or fully optimized without addressing the analytics related to the process. Although OAF can be deployed at any time – either during the ERP Cloud implementation or after – there are significant benefits to be realized from deploying OAF concurrent to the ERP Cloud deployment. Some of these benefits are:
OAF complements Oracle Transactional Business Intelligence (OTBI, which is part of ERP Cloud) and addresses the need for analytics beyond transactional reporting. Whereas OTBI is focused on transactional reporting with limited historical reporting and works with small volumes of data sourced from Oracle Fusion apps only, OAF is geared towards strategic and advanced analytics, supporting deep historical analysis, non-Oracle Cloud data sources and large volumes of data. OAF helps executives and decision-makers improve business performance and gives analysts the tools to uncover hidden insights.
OAF includes 15 subject areas and over 100 data warehouse tables covering General Ledger, Accounts Payable and Accounts Receivable. It comes with over 50 Financial KPI’s that are prebuilt to leverage the data in your ERP Cloud system. With OAF you can do the following:
Oracle ERP Cloud and Oracle Analytics for Applications represent a powerful combination delivering an integrated platform for flawless business process execution and deep, machine-learning powered analytics to drive improved business performance and continuous improvement. Perficient’s credentialed Oracle expertise and team of highly skilled implementation specialists can guide you on your journey to world-class business process execution and analytics.
It was recently announced that Oracle Analytics Cloud (OAC) has been named Visionary in the 2020 Gartner Magic Quadrant for Analytics and Business Intelligence Platforms. This is great news as it validates what we have been seeing in the marketplace for quite some time. It comes as no surprise to firms like Perficient who’ve been helping clients design and deploy Oracle Analytics Cloud (and its Cloud-based precursor) for the last 4-5 years. We have seen many clients, with very different analytical needs, from departmental to enterprise-level, get great benefits from OAC.
For a while now, OAC has been the premier analytics platform for combining enterprise governance and control with world-class self-service capabilities and advanced visualizations. The analytics world has realized it is not “either/or” when it comes to enterprise governance and self-service – it is both. With OAC, it is possible to connect to any supported data source and immediately begin to query the data without any modeling. Advanced visualizations are available with one click and can be swapped in and out quickly by selecting from a palette of visualization choices. On the other hand, OAC’s enterprise semantic modeling capabilities make it possible to provide a curated and governed user experience for critical metrics where ‘single source of truth’ issues cannot be tolerated. The security model for OAC is very mature, allowing integration with on-premises or cloud-based single sign-on providers and definition of permissions by role and by type of data.
With OAC, Oracle has embraced the concept of augmented analytics whereby machine learning is built into the platform to enhance and improve the capabilities of business analysts. With a single click, the OAC platform will explain key drivers of attributes, identify outliers, provide recommendations for improving a data set or generate a forecast. OAC also provides prebuilt machine learning algorithms that can be used for predictive analytics and other data science use cases without any coding. In order to leverage data to drive competitive advantage, we are seeing many companies take advantage of OAC’s no code machine learning capabilities to improve their analytics and gain new insights without having to hire a full team of data scientists.
According to many experts, the predominant user interface of the future for analytics will be our voice and, increasingly, we will be speaking to some form of a mobile device. With OAC, the future is now as it natively supports mobile devices without any additional setup. Natural language is also supported in OAC as it is possible to speak naturally into your mobile device to request the analytics that you want. Additionally, OAC will generate a natural language description of any visualization with a single click (which can be very helpful with complex visualizations).
If you would like further information about some of the great features of OAC please see my recently published blog.
Having seen the value of OAC, Perficient decided to create a number of prebuilt solution templates that leverage both OAC and Oracle’s Autonomous Data Warehouse (ADW) as their foundation architecture. OAC is tightly integrated with ADW and together they provide a comprehensive analytic solution at any scale – from departmental level to enterprise, petabyte-scale data warehouses.
Perficient’s prebuilt solution templates allow companies to get up and running quickly on OAC and provide speed to value to companies looking to improve their analytics without long custom development project cycles. Prebuilt solution templates based on OAC and ADW have been created for the following functional areas:
Whatever your data and analytics environment looks like today, you can be sure Perficient has seen it before. We can help you identify your critical KPI’s and metrics, untangle your data integration issues and develop your future state data and analytics roadmap.
First a quick summary of machine learning (ML). At a high level and simplifying a bit, there are basically two types of ML:
In today’s blog post I will demonstrate how to use the machine learning capability in Oracle Analytics Cloud to predict housing prices. This is an example of supervised learning and regression because the ML model will be trained using a data set with housing prices (i.e., a labeled data set). I will do so step by step so you can follow along and try it yourself.
For this exercise, we will use a publicly available data set of Boston housing prices. This data set is available for download from Kaggle:
https://www.kaggle.com/puxama/bostoncsv/data
The Kaggle site does not include a description of what the columns mean. You can use the URL below to get a description of what each column in the Boston housing data set means:
http://math.furman.edu/~dcs/courses/math47/R/library/mlbench/html/BostonHousing.html
Before uploading to OAC, we will make the following slight changes in Excel:
The column titled ‘medv’ (i.e., median value) has been rounded to the 000’s in the downloadable file. We will multiply that column by 1000 to remove the rounding.
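If you prefer scripting the prep over Excel, the same adjustment can be done with Python's standard csv module. The snippet below operates on an in-memory sample so it is self-contained; with the real Kaggle file you would read from and write to actual files, and the 'medv' column name matches the data set's documentation:

```python
import csv
import io

# Tiny in-memory stand-in for the downloaded Boston.csv
sample = "medv,rm\n24,6.5\n21.6,6.4\n"

src = io.StringIO(sample)
dst = io.StringIO()
reader = csv.DictReader(src)
writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
writer.writeheader()
for row in reader:
    # Undo the 000's rounding; round() guards against float artifacts
    row["medv"] = str(round(float(row["medv"]) * 1000))
    writer.writerow(row)

print(dst.getvalue().splitlines()[1])  # 24000,6.5
```

Either way, the goal is the same: the 'medv' values land in OAC as full dollar amounts rather than thousands.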
To upload the file to OAC, click on the Create button in the upper right-hand corner, then click on ‘Data Set’. Find the file on your hard drive and upload it to OAC.
Next, we will change House ID to be treated as an attribute and not a measure so we can report against it. After initially uploading your file, simply click on the House ID column and then, on the left-hand side where it says ‘Treat As’, change the value to attribute. Then click ‘Add’ to add the file as a data set.
At this point, the Boston housing data set is available to be used as a training data set for our machine learning predictive model.
To train the ML model in OAC we need to create a data flow. So click on the ‘Create button’ in the upper right-hand corner and then click ‘Data Flow’. You will be presented with a screen that asks you to pick a data set to be used by the data flow. Please pick the Boston housing data set that you just uploaded. After selecting your uploaded Boston housing data set, you will be presented with a screen like the one below:
Now it is time to select the model that we will train. Use the scroll bar on the left to scroll down to see the options available for machine learning models. Click on ‘Train Numeric Prediction’ and drag it next to the blue ‘Boston Housing’ data set symbol. You will see a green plus sign and then you will get the screen below:
Let’s select ‘Linear Regression’ for model training. Click OK.
The next step is important. This is where we pick the target column which is the column we want to predict. Click on ‘Select a column’ next to ‘Target’. Select ‘medv’ from the columns displayed. ‘medv’ stands for ‘Median Value’ and this is the value we want to predict. For the other parameters, you can leave the default values. As you scroll down you will see the value called ‘Train Partition Percent’ has defaulted to 80. This is very common and means that 80% of the data in the Boston housing data set will be used to train the model. The remaining 20% will be used to test the model (i.e., in terms of predicting housing prices).
Next, we will click on the ‘Save Model’ symbol in the data flow. We will be prompted to give the model a name. Please call the model whatever you like. Next, we need to save the data flow before we can run it to train the model. Click on ‘Save’ in the upper right-hand corner and give the data flow whatever name you like. Then click on ‘Run Data Flow’ (which is right next to ‘Save’) to train the model and to create a model that can then be applied to other data sets. To see the model that you just created, click on the ‘hamburger’ in the upper left corner and select Machine Learning from the drop-down. I called my model “Numeric prediction model based on the Boston housing data set”.
On the screen above, select your model and go to the right-hand side. You will see an ‘Actions menu’ appear. Click on the ‘Actions menu’ and select ‘Inspect’ to evaluate the model. You will see the following screen:
Select ‘Quality’ to analyze the accuracy of the model. On the screen below, we see that the Coefficient of Determination or R Squared is 70% which is generally considered good.
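What the data flow does behind the scenes can be sketched in plain Python: fit on 80% of the rows, score on the held-out 20%, and report R squared. The sketch below uses synthetic one-feature 'rooms vs price' data and a closed-form least-squares fit; OAC's actual training runs in the service against the full feature set, so this only illustrates the partition and the quality metric:

```python
import random

random.seed(1)

# Synthetic 'rooms vs price' data with noise (true slope 9.0, intercept -34.0)
rooms = [random.uniform(4, 8) for _ in range(100)]
data = [(rm, 9.0 * rm - 34.0 + random.gauss(0, 2)) for rm in rooms]

# 80/20 train/test partition, as in OAC's 'Train Partition Percent'
random.shuffle(data)
split = int(0.8 * len(data))
train, holdout = data[:split], data[split:]

# Closed-form simple linear regression on the training partition
n = len(train)
mx = sum(x for x, _ in train) / n
my = sum(y for _, y in train) / n
slope = sum((x - mx) * (y - my) for x, y in train) / \
        sum((x - mx) ** 2 for x, _ in train)
intercept = my - slope * mx

# R squared on the held-out 20%
ys = [y for _, y in holdout]
mean_y = sum(ys) / len(ys)
ss_res = sum((y - (intercept + slope * x)) ** 2 for x, y in holdout)
ss_tot = sum((y - mean_y) ** 2 for y in ys)
r2 = 1 - ss_res / ss_tot
print(f"slope={slope:.2f}, R2 on holdout={r2:.2f}")
```

The R squared printed here is the same statistic OAC shows on the Quality tab: the share of variance in the target that the model explains on data it was not trained on.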
Now let’s take a look at what the main drivers are for the prediction of housing prices according to this model. Click on ‘Related’ to get the screen below:
On the screen above, we will click on the first set of generated data called the “Numeric Prediction model based on the Boston housing dataset. Drivers”. That will bring up the data set generated by the model creation process that outlines the main drivers. Click on ‘Visualize’ in the top right-hand corner to analyze the data. Hold down CTRL and select ‘Driver Name’, ‘Coefficient’ and ‘Correlation’ and drag them onto the visualization canvas. If you select a visualization type of vertical bar chart, you will see the following main drivers (sorted by Coefficient high to low):
The number of rooms (rm) and whether the house is on the Charles River or not (chas) are strongly positively correlated with the housing price (as these go up so does the price of the house). On the other side, we can see that the pupil-teacher ratio (ptratio) and % of the population of lower status (lstat) are negatively correlated with housing prices.
Now we are ready to apply our trained ML model to predict housing prices. One thing to keep in mind is that the data set to which the ML model will be applied needs to have the same inputs (i.e., columns) as the trained ML model. To achieve this, we will apply our trained ML model to the original Boston housing data set.
To apply an ML model we need to create a data flow in OAC. Click on ‘Create’ in the upper right-hand corner and then click on ‘Data Flow’. Click on your Boston housing data set to add it to the data flow:
Use the scroll bar on the left-hand side to scroll down until you see ‘Apply Model’. Drag ‘Apply Model’ onto the plus sign to the right of the Boston Housing data set. Select the machine learning model we just created and trained (in my case, the model called “Numeric Prediction model based on the Boston Housing data set”). Select OK.
Now we need to give the column that will contain the predicted value a name and also save the data set that will be created when we run the model. The column name for the predicted value defaults to “PredictedValue” and we will leave it that way. Use the scroll bar on the left to scroll back up until you see ‘Save Data Set’. Drag ‘Save Data Set’ next to ‘Apply Model’. Give the new data set a name, save it to data set storage and change House ID to be treated as an attribute.
Lastly, we need to save the data flow and then run it to create the new data set with the predicted value. As we did before, please click on ‘Save’ in the upper right-hand corner and give the data flow whatever name you like. Then click on ‘Run Data Flow’ to apply the model to the Boston housing data set to predict housing prices. This will create a data set that we will analyze in the next step.
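No code is needed in OAC, but conceptually the ‘Apply Model’ step works like the sketch below: a trained model (here with invented, hypothetical coefficients) is evaluated row by row to add a ‘PredictedValue’ column. The column names mirror the walkthrough; the numbers do not come from the real model.

```python
# Conceptual sketch of 'Apply Model': evaluate a trained model on each row.
# Weights are invented for illustration only.
model = {"intercept": 20.0, "rm": 4.5, "lstat": -0.6}

rows = [
    {"House ID": 1, "rm": 6.5, "lstat": 5.0},
    {"House ID": 2, "rm": 5.9, "lstat": 15.0},
]

for row in rows:
    # linear model: intercept + sum of (weight * feature value)
    row["PredictedValue"] = (model["intercept"]
                             + model["rm"] * row["rm"]
                             + model["lstat"] * row["lstat"])

print(rows[0]["PredictedValue"])  # prediction for House ID 1 (about 46.25 here)
```

The data flow then persists these enriched rows as a new data set, which is what the ‘Save Data Set’ step configured above.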
Step 5: Analyze the predicted values
Click on the ‘hamburger’ icon in the upper left-hand corner and then click on ‘Data’ to find your data set with the predicted value. Once you find your data set, click on the ‘Actions Menu’ for your data set and select ‘Create Project’ (my data set is called ‘Housing Prices with Predicted Value’):
Select ‘PredictedValue’, ‘medv’ and ‘House ID’ and select ‘Scatter’ as the visualization type. You will see the predicted value for each House ID compared to the original median value. Although there are some outliers the majority of the predictions are close to the original house value:
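If you want to quantify “close” rather than eyeball the scatter plot, a simple residual check does the job. This sketch uses invented values and an arbitrary outlier threshold; it is not an OAC feature, just a way to think about the comparison.

```python
# Sketch of the step-5 comparison: residuals between actual (medv) and
# predicted values, flagging rows beyond an arbitrary threshold. Toy data.
medv      = [24.0, 21.6, 34.7, 18.9, 36.2]
predicted = [25.1, 22.0, 28.0, 19.3, 35.5]

threshold = 5.0  # arbitrary cutoff for calling a prediction an outlier
outliers = [i for i, (a, p) in enumerate(zip(medv, predicted))
            if abs(a - p) > threshold]
print(outliers)
```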
I encourage you to explore the machine learning capabilities of Oracle Analytics Cloud. Have fun!
Oracle Analytics Cloud (OAC) is Oracle’s cloud-based platform for reporting, analytics, data exploration, and data visualization. It encompasses many capabilities and multiple products. As a result, it was hard to limit the post to just 5 features given that there is so much to choose from. That being said, the following are my top 5 favorite features of Oracle Analytics Cloud:
I appreciate the need for enterprise data governance and how OAC enables such governance through its enterprise semantic model. But in today’s fast-paced world, where analytic needs change quickly, it is great to know that OAC can connect to virtually any data source and start querying the data immediately – no upfront modeling required. This is great for data exploration and for departmental data marts where the underlying data structure is not too complex. It is also great for data science teams who need to review data quickly to determine its usefulness.
Please see the following link for a list of supported data sources for OAC:
https://docs.oracle.com/en/cloud/paas/analytics-cloud/acubi/supported-data-sources.html
With OAC, one click of the mouse is all it takes to select an advanced visualization from among many visualization choices. And if you are not happy with your initial selection, a new visualization type is one click away. I love this feature because using it makes you a better data analyst. Since it is so easy to visualize your data using multiple different visualization types, through trial and error you quickly get very good at determining what is the best visualization type for your data and the message you want to send.
This feature really sets OAC apart from other business intelligence tools and analytics platforms. With OAC’s self-service data prep, it is now possible for business analysts to prepare, load, transform and blend data in ways that would have required IT assistance and weeks or months of work just a few years ago. The data prep module is augmented with machine learning, so it will catch things in the data and suggest improvements. I worked in Corporate IT for many years, but before that I worked in the business and did a good bit of self-service data wrangling. I love this feature and wish that it had existed 10-15 years ago! I just know that it will add efficiency to many departments.
This is another feature that didn’t exist just a few years ago on any vendor’s platform. Now it is possible, with just a few clicks, to establish a connection to ADW directly from OAC – without needing the help of IT or a DBA. Once the connection to ADW is established, you can load data into ADW directly from OAC – again with no help required from a DBA. This is hugely empowering for users of all kinds. In the past, when there was a need for data integration and analysis but no desire or funds for a project with IT, a departmental data mart was often created using MS Access or Excel, and this data would not get integrated with the rest of the corporate data. Now, with OAC and ADW integration, data and analytics needs can be quickly satisfied without necessarily creating additional data silos.
Machine Learning takes two forms in OAC. First, it is built into the platform and augments the capabilities of the platform behind the scenes without the user knowing that machine learning (ML) is being applied. For instance, OAC will provide recommendations to enrich a data set that has been uploaded. ML generates statistics about data sets. With a single click, OAC will tell you what values are key drivers of critical attributes in your data sets and will also point out anomalies.
These examples both use ML behind the scenes. The second form of ML usage in OAC is more explicit and driven by the end-user. OAC provides approximately 15 ML algorithms that can be used for advanced analytics. You do not have to be a data scientist to use the algorithms. A ‘regular business analyst’ can use the algorithms to do things like predictive analytics and customer segmentation. I love this feature because it makes ‘AI’ accessible to everyone.
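To give a flavor of what “customer segmentation without being a data scientist” boils down to, here is a deliberately tiny k-means clustering sketch in plain Python. OAC’s built-in algorithms are far more capable and require no coding at all; the spend figures and cluster count below are invented for illustration.

```python
# Minimal k-means on one dimension: split customers into spend segments.
# Toy data and starting centers are invented, not from any real data set.

def kmeans(points, centers, iters=10):
    for _ in range(iters):
        # assign each point to its nearest center
        groups = {i: [] for i in range(len(centers))}
        for p in points:
            i = min(range(len(centers)), key=lambda c: abs(p - centers[c]))
            groups[i].append(p)
        # move each center to the mean of its assigned points
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in groups.items()]
    return centers

# annual spend per customer: two obvious segments (low vs high spenders)
spend = [120, 150, 130, 900, 950, 880]
print(sorted(round(c) for c in kmeans(spend, centers=[100, 1000])))
```

In OAC the equivalent segmentation is a few clicks in a data flow, with the algorithm choice and cluster count exposed as simple options.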
I know I said just 5, but there is one other feature that I have to mention. With OAC and ML, visualizations can be explained in plain English. It does a remarkably good job of breaking down even the most complex visualizations into easy-to-understand bullet points. This provides another vehicle for increasing understanding.
What are your favorite features of OAC?
You can read more insights about Oracle here.
Perficient Presents at Oracle OpenWorld 2019. Oracle Analytics Cloud (OAC) and Oracle Autonomous Data Warehouse (ADW) are setting the standard for cloud-based data warehouse and analytics deployments with respect to speed to value, flexibility, performance, self-service and advanced capabilities like AI and natural language queries. If you are thinking about moving all or some of your data and analytics environment to the cloud, you should watch this short video (and, by the way, that should include almost everyone).
Oracle has built tight integration between OAC and ADW. In this video you will learn about solution patterns for rapid deployment of OAC and ADW including how to load data into Autonomous Data Warehouse using only Oracle Analytics Cloud with no coding or DBA required! To learn more, watch the video now!
Whatever your data and analytics environment looks like today, you can be sure our seasoned team of analytics professionals has seen it before. We can help you identify your critical KPIs and metrics, untangle your data integration issues and develop your future-state data and analytics roadmap.
Ah, yes, the 90s. I can hear the sweet, gentle pinging of my external, dial-up modem as I write this. For those of you who were not in the workforce in the 90s, I will describe them a little. Information was a highly guarded commodity. Decisions were made at the top of an organization and then slowly communicated downward through hierarchical, command-and-control structures, oftentimes via physical memos that were shuffled from physical inbox to physical inbox (yes, really).

Although most companies adopted email on their own networks at some point during the 90s, I knew executives who had their administrative assistants print out their emails and type in their responses. It was a sign of your power and position to be able to proudly proclaim that you ‘refused to use email’. So email was a step in the right direction, but it did not change the prevailing mindset. (For the record, I was relatively new in the workforce at the time and I was definitely reading and writing my own emails!)

Yes, data and information meant power. The more data and information you had, the more powerful you were. If the data was difficult to interpret and you were required to be physically present to explain what it all meant, then that was even better. More power and control! It was in the 90s that we began to see the rise of the one-off, ‘server under the desk’, departmental MS Access database holding all the important information for the executive running the department. These data troves were jealously guarded and rarely spoken of publicly. No one in their right mind was freely sharing data and information. Why give away such a valuable commodity for free? Where was the payoff? There was none.
What I have just described is the genome for today’s data culture. Sharing of data and information is simply not in the DNA of today’s data culture. If we ran today’s data culture through Ancestry.com, the one thing we know we won’t find is a picture of a long-lost relative getting an award for sharing data with colleagues and enforcing data standards. 23andMe, the DNA testing company, would probably find a ‘data hoarding’ gene rather than a ‘data sharing’ gene. The fact of the matter is that in too many organizations today, sharing of data is not valued and information is still power (and no one freely gives away power). The culture of data and data sharing in companies has simply not kept pace with advancements in data management and analytics technologies.
So why is this important and what does culture have to do with it?
Culture is what silently influences the behavior of people without their conscious awareness. Culture is ‘how we do things around here’. Culture is ‘how we have always done it’. Culture reflects the values of an organization. Culture lets you know how you can behave and be sure that no one will call you out over it. Culture is a belief and behavior system that we all agree to follow either explicitly or implicitly. Not following the culture can lead to being ostracized and for most of human history that meant death (and we as humans are not very big on death).
“Culture eats strategy for breakfast, lunch and dinner” – Peter Drucker
This quote is attributed to Peter Drucker and speaks to the fact that no amount of strategic pronouncements, data governance initiatives or cross-department big data teams can overcome the baseline culture of an organization.
Is your data culture stuck in the 90s? Here are some questions to help assess your data culture:
The business world has now realized the massive, transformative potential that lies in leveraging all an organization’s data as well as public and competitive data. There are many success stories about brand new business models emerging, customer experiences being transformed, new, more targeted products being developed and quality being improved through the innovative use of data. However, for each success story there are many organizations that are not getting the results they expected from their data initiatives or things are just not moving fast enough. In such cases, data culture could be a big part of the issue.
In 2007, Oracle Business Intelligence (BI) Applications became generally available. They were an immediate success and have enjoyed great popularity ever since. Although Oracle BI Applications are now considered ‘content complete’ and Oracle no longer sells new licenses, Oracle is committed to supporting this large and important customer base. That said, many existing customers are unclear about their options for moving forward and about key dates for support. They don’t know whether they should stay on premises, move to the cloud, or pursue a hybrid approach; what is involved in an upgrade or how to estimate the effort; or what options are available should they choose to replace the BI Apps. To answer these and other questions, we have created an interactive ebook for existing customers of BI Applications.
In this interactive ebook we discuss the following topics:
1. The Background of Oracle BI Applications – when they were developed, what the major changes have been over the years and through the different versions
2. Go-Forward Strategy Considerations – we list a number of important items to take into account when developing a go-forward strategy
3. Go-Forward Options – we list out and describe in detail all the options available to existing customers of BI Applications covering on-premises, cloud and hybrid approaches
4. Perficient’s Prebuilt Solution Templates – we introduce and describe the prebuilt solutions developed by Perficient for key functional areas such as Finance, HR, Sales, Marketing, Service, Supply Chain and Procurement. These prebuilt solutions can be deployed on prem or in the cloud and support both on prem and cloud data sources. We discuss how some customers have used these as a replacement for BI Apps.
5. How Perficient Can Help – we outline the services we offer to help customers analyze their situation, pick the best go-forward strategy and implement their desired choice.
Please register below to download the guide and get a complimentary consultation.