Following up on my previous post, which highlighted different approaches to accessing Oracle Fusion Cloud Apps data from Databricks, this post presents the details of Approach D, which leverages the Perficient accelerator solution. The accelerator applies to all Oracle Fusion Cloud applications: ERP, SCM, HCM, and CX.
As demonstrated in the previous post, the Perficient accelerator differs from the other approaches in that it has minimal requirements for additional cloud platform services. The other approaches to extracting data efficiently and at scale require deploying additional cloud services, such as data integration/replication services and an intermediary data warehouse. With the Perficient accelerator, however, replication relies solely on native Oracle Fusion and Databricks capabilities. The accelerator consists of a Databricks workflow with configurable tasks that handle the end-to-end process of replicating data from Oracle Fusion into the silver layer of Databricks tables. When deploying the solution, you get access to all the underlying Python/SQL notebooks, which can be further customized to your needs.
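The accelerator's notebooks themselves are delivered with the solution, but to make the idea of configuration-driven replication tasks concrete, here is a minimal sketch in Python. It assumes a simple JSON configuration that maps BICC public view objects (PVOs) to target silver-layer tables; the PVO names, table names, and config schema are illustrative, not the accelerator's actual format.

```python
import json

# Hypothetical task configuration: each entry maps a BICC public view
# object (PVO) to a target silver-layer table. Names and schema are
# illustrative only.
CONFIG_JSON = """
[
  {"pvo": "FscmTopModelAM.FinGlJournalLinesPVO",
   "target": "silver.gl_journal_lines",
   "load_type": "incremental"},
  {"pvo": "FscmTopModelAM.FinGlLedgersPVO",
   "target": "silver.gl_ledgers",
   "load_type": "full"}
]
"""

def plan_tasks(config_json: str) -> list:
    """Validate the config and return an ordered list of replication tasks."""
    tasks = json.loads(config_json)
    for task in tasks:
        if task["load_type"] not in ("full", "incremental"):
            raise ValueError(f"Unknown load_type in task: {task}")
    return tasks

tasks = plan_tasks(CONFIG_JSON)
for t in tasks:
    print(f"{t['load_type']:>11}  {t['pvo']}  ->  {t['target']}")
```

In a real deployment, each planned task would drive a workflow step that lands the corresponding BICC extract into its Delta table; scaling to more tables then becomes a configuration change rather than new code.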
Why consider deploying the Perficient Accelerator?
There are several benefits to deploying this accelerator rather than building data replication from Oracle Fusion from the ground up. Built with automation at its core, the solution scales to accommodate evolving data requirements with ease. The diagram below highlights key considerations.
A Closer Look at How It's Done
In the Oracle Cloud: The Perficient solution leverages Oracle BI Cloud Connector (BICC), the preferred method for extracting data in bulk from Oracle Fusion while minimizing the impact on the Fusion application itself. Extracted data and metadata are temporarily staged in OCI Object Storage buckets for downstream processing. If required, archival of exported data on the OCI (Oracle Cloud Infrastructure) side is also handled automatically through purging rules.
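To illustrate what an automated purge rule can look like, here is a small, self-contained Python sketch that flags staged extract files older than a retention window for deletion. The file names, dates, and seven-day window are assumptions for illustration; an actual implementation would enumerate objects in the OCI bucket rather than a hardcoded list.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical retention window for staged BICC extract files.
RETENTION_DAYS = 7

def select_for_purge(objects, now, retention_days=RETENTION_DAYS):
    """Return names of staged extract files past the retention window.

    `objects` is a list of (object_name, last_modified) pairs, standing in
    for an object listing from the OCI bucket.
    """
    cutoff = now - timedelta(days=retention_days)
    return [name for name, modified in objects if modified < cutoff]

now = datetime(2024, 6, 15, tzinfo=timezone.utc)
staged = [
    ("file_finglledgers-batch001.zip",
     datetime(2024, 6, 14, tzinfo=timezone.utc)),   # 1 day old: keep
    ("file_fingljournallines-batch001.zip",
     datetime(2024, 6, 1, tzinfo=timezone.utc)),    # 14 days old: purge
]
print(select_for_purge(staged, now))
```

Keeping the staging bucket lean this way matters because BICC extracts are only a transient handoff point; the silver-layer Delta tables remain the system of record on the Databricks side.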
In the Databricks hosting cloud:
Whether you are starting small with a few tables or looking to scale easily to hundreds or thousands, the Perficient Databricks accelerator for Oracle Fusion data handles the end-to-end workflow orchestration. As a result, you spend less time on data integration and can focus your efforts on business-facing analytical data models.
For assistance with enabling data integration between Oracle Fusion Applications and Databricks, reach out to mazen.manasseh@perficient.com.
Connecting to Oracle Fusion Cloud Applications data from external, non-Oracle systems like Databricks is not feasible for bulk data operations via a direct connection. However, there are several approaches to making Oracle apps data available for consumption from Databricks. What makes this task less straightforward is that Oracle Fusion Cloud Applications and Databricks live in separate clouds: Oracle Fusion apps (ERP, SCM, HCM, CX) are hosted on Oracle Cloud, while Databricks runs on one of AWS, Azure, or Google Cloud. In this post, I present several approaches to accessing Oracle apps data from Databricks.
While there are other means of performing this integration beyond what I present in this post, I will be focusing on:
The following diagrams summarize four different approaches to replicating Oracle Fusion Apps data in Databricks. Each diagram highlights the data flow and the technologies applied.
Choosing the right approach for your use case depends on the objective of the integration and the ecosystem of cloud platforms applicable to your organization. For guidance, you can reach Perficient by leaving a comment in the form below. Our Oracle and Databricks specialists will connect with you and provide recommendations.