Unit Testing in Android Apps: A Deep Dive into MVVM

Understanding Unit Testing

Unit testing is a crucial aspect of software development, especially in complex applications like Android apps. It involves testing individual units of code, such as methods or classes, in isolation. This ensures the correctness of each component, leading to a more robust and reliable application.

Why Unit Testing in MVVM?

The Model-View-ViewModel (MVVM) architectural pattern is widely adopted in Android app development. It separates the application into three distinct layers:

  • Model: Handles data logic and interacts with data sources.
  • View: Responsible for the UI and user interactions.
  • ViewModel: Acts as a bridge between the View and Model, providing data and handling UI logic.

Unit testing each layer in an MVVM architecture offers numerous benefits:

  • Early Bug Detection: Identify and fix issues before they propagate to other parts of the app.
  • Improved Code Quality: Write cleaner, more concise, and maintainable code.
  • Accelerated Development: Refactor code and add new features with confidence.
  • Enhanced Collaboration: Maintain consistent code quality across the team.

Setting Up the Environment

  1. Android Studio: Ensure you have the latest version installed.
  2. Testing Framework: Add the necessary testing framework to your app/build.gradle file:

    testImplementation 'junit:junit:4.13.2'
    androidTestImplementation 'androidx.test.ext:junit:1.1.5'
    androidTestImplementation 'androidx.test.espresso:espresso-core:3.5.1'
  3. Testing Library: Consider using a testing library like Mockito or MockK to create mock objects for testing dependencies.
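If you go that route, the test dependencies look roughly like the following (version numbers are illustrative; check each library's release notes for current ones). The androidx.arch.core artifact supplies the InstantTaskExecutorRule used in the ViewModel example later in this post:

    testImplementation 'org.mockito:mockito-core:5.3.1'         // Mockito for Java/Kotlin mocks
    testImplementation 'io.mockk:mockk:1.13.5'                   // or MockK for Kotlin-first mocking
    testImplementation 'androidx.arch.core:core-testing:2.2.0'   // InstantTaskExecutorRule for LiveData tests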

Unit Testing ViewModels

  1. Create a Test Class: Create a separate test class for each ViewModel you want to test.
  2. Set Up Test Dependencies: Use dependency injection frameworks like Dagger Hilt or Koin to inject dependencies into your ViewModel. For testing, use mock objects to simulate the behavior of these dependencies.
  3. Write Test Cases: Write comprehensive test cases covering various scenarios:
  • Input Validation: Test how the ViewModel handles invalid input.
  • Data Transformation: Test how the ViewModel transforms data from the Model.
  • UI Updates: Test how the ViewModel updates the UI through LiveData or StateFlow.
  • Error Handling: Test how the ViewModel handles errors and exceptions.

Example:

// A minimal local unit-test sketch: MyViewModel, MyRepository, and expectedData are
// placeholders for your own classes and test fixtures.
class MyViewModelTest {

    // Executes LiveData updates synchronously on the test thread.
    @get:Rule
    val instantTaskExecutorRule = InstantTaskExecutorRule()

    @Test
    fun `should update the UI when data is fetched successfully`() {
        // Arrange: stub the repository dependency with a mock.
        val mockRepository = mock(MyRepository::class.java)
        `when`(mockRepository.fetchData()).thenReturn(expectedData)
        val viewModel = MyViewModel(mockRepository)

        // Act
        viewModel.fetchData()

        // Assert on the latest state exposed through LiveData.
        val uiState = viewModel.uiState.value
        assertThat(uiState?.isLoading).isFalse()
        assertThat(uiState?.error).isNull()
        assertThat(uiState?.data).isEqualTo(expectedData)
    }
}

Unit Testing Repositories

  1. Create Test Classes: Create separate test classes for each Repository class.
  2. Set Up Test Dependencies: Use dependency injection to inject dependencies into your Repository. For testing, use mock objects to simulate the behavior of data sources like databases or network APIs.
  3. Write Test Cases: Write test cases to cover:
  • Data Fetching: Test how the Repository fetches data from remote or local sources.
  • Data Storage: Test how the Repository stores and retrieves data.
  • Data Manipulation: Test how the Repository processes and transforms data.
  • Error Handling: Test how the Repository handles errors and exceptions.

Example:

// Again a sketch: MyApi and MyRepository are placeholders for your own types.
class MyRepositoryTest {

    @Test
    fun `should fetch data from remote source successfully`() {
        // Arrange: mock the remote API the repository depends on.
        val mockApi = mock(MyApi::class.java)
        val repository = MyRepository(mockApi)

        // Act
        repository.fetchData()

        // Assert: the repository delegated the call to the remote source.
        verify(mockApi).fetchData()
    }
}

Implementing SonarQube

SonarQube is a powerful tool for code quality and security analysis. Here’s a detailed guide on how to integrate SonarQube with your Android project:

  1. Set Up SonarQube Server:
  • Install SonarQube Server: Download and install the SonarQube server on your machine or a server.
  • Configure SonarQube: Configure the server with database settings, user authentication, and other necessary parameters.
  • Start SonarQube Server: Start the SonarQube server.
  2. Configure SonarQube Scanner:
  • Install SonarQube Scanner: Download and install the SonarQube Scanner.
  • Configure Scanner Properties: Create a sonar-scanner.properties file in your project’s root directory and configure the following properties:

    sonar.host.url=http://localhost:9000
    sonar.login=your_sonar_login
    sonar.password=your_sonar_password
    sonar.projectKey=my-android-project
    sonar.projectName=My Android Project
    sonar.sources=src/main/java
    sonar.java.binaries=build/intermediates/javac/release/classes
  3. Integrate SonarQube with Your Build Process:
  • Gradle: Add the SonarQube Gradle plugin to your build.gradle file:

    plugins {
        id 'org.sonarqube' version '3.3'
    }

    Configure the plugin with your SonarQube server URL and authentication token; a configuration sketch follows this list.

  • Maven: Add the SonarQube Maven plugin to your pom.xml file. Configure the plugin with your SonarQube server URL and authentication token.
  4. Run SonarQube Analysis:
  • Execute the SonarQube analysis using the SonarQube Scanner. This can be done manually or integrated into your CI/CD pipeline.
  5. Analyze the Results:
  • Once the analysis is complete, you can view the results on the SonarQube dashboard. The dashboard provides insights into code quality, security vulnerabilities, and potential improvements.
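For the Gradle option in step 3, a minimal configuration sketch might look like this (the URL, project key, and token handling are placeholders to adapt to your own server):

    sonarqube {
        properties {
            property 'sonar.host.url', 'http://localhost:9000'
            property 'sonar.projectKey', 'my-android-project'
            property 'sonar.projectName', 'My Android Project'
            // Prefer passing the token at runtime instead of committing it, e.g.:
            // ./gradlew sonarqube -Dsonar.login=<your_token>
        }
    }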

Implementing Test Coverage with Bitrise

Test coverage measures the percentage of your code that is covered by tests. It’s a crucial metric to assess the quality of your test suite. Here’s how to measure test coverage with Bitrise:

  1. Configure Code Coverage Tool: Add a code coverage tool like JaCoCo to your project. Configure it to generate coverage reports in a suitable format (e.g., XML); see the sketch after this list.
  2. Add Code Coverage Step to Bitrise Workflow: Add a step to your Bitrise Workflow to generate the code coverage report. This step should execute your tests and generate the report.
  3. Upload Coverage Report to SonarQube: Add a step to upload the generated code coverage report to SonarQube. This will allow SonarQube to analyze the report and display the coverage metrics.
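As one way to approach step 1, the sketch below applies JaCoCo in a module's build.gradle and registers a report task that emits XML. The exact wiring of class directories and execution data varies by Android Gradle Plugin version, so treat this as a starting point rather than a drop-in configuration:

    apply plugin: 'jacoco'

    tasks.register('jacocoTestReport', JacocoReport) {
        dependsOn 'testDebugUnitTest'    // run the local unit tests first
        reports {
            xml.required = true          // XML output for SonarQube or other tooling
            html.required = true
        }
        // A complete setup also points classDirectories, sourceDirectories,
        // and executionData at this module's build output.
    }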

Best Practices for Unit Testing

  • Write Clear and Concise Tests: Use descriptive names for test methods and variables.
  • Test Edge Cases: Consider testing scenarios with invalid input, empty data, or network errors.
  • Use a Testing Framework: A testing framework like JUnit provides a structured way to write and run tests.
  • Leverage Mocking: Use mocking frameworks like Mockito or MockK to isolate units of code and control their behavior.
  • Automate Testing: Integrate unit tests into your CI/CD pipeline to ensure code quality (see the example command after this list).
  • Review and Refactor Tests: Regularly review and refactor your tests to keep them up-to-date and maintainable.
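For example, a CI step could run the tests and the coverage task sketched earlier with a single Gradle invocation (testDebugUnitTest is the standard debug unit-test task; jacocoTestReport is the custom task registered above):

    ./gradlew testDebugUnitTest jacocoTestReport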

By following these guidelines and incorporating unit testing into your development process, you can significantly improve the quality and reliability of your Android apps.

3 Key Insurance Takeaways From InsureTech Connect 2024

The 2024 InsureTech Connect (ITC) conference was truly exhilarating, with key takeaways impacting the insurance industry. Each year, it continues to improve, offering more relevant content, valuable industry connections, and opportunities to delve into emerging technologies.

This year’s event was no exception, showcasing the importance of personalization to the customer, tech-driven relationship management, and AI-driven underwriting processes. The industry is constantly evolving, and ITC demonstrates how everyone within the insurance industry is aligned around the same purpose.

The Road Ahead: Transformative Trends

As I reflect on ITC and my experience there, it is evident that the industry’s progression is remarkable. Here are a few key takeaways from my perspective that will shape our industry roadmap:

1. Personalization at Scale

We’ve spoken for many years about the need to drive greater personalization across our interactions in our industry. We know that customers engage with companies that demonstrate authentic knowledge of their relationship. This year, we saw great examples of how companies are treating personalization, not as an incremental initiative, but rather embedding it at key moments in the insurance experience, particularly underwriting and claims.

For example, New York Life highlighted how personalization is driving generational loyalty. We’ve been working with industry-leading insurers to help drive personalization across the distribution network, from carriers to agents to the final policyholder.

Success In Action: Our client wanted to integrate better contact center technology to improve internal processes and allow for personalized, proactive messaging to clients. We implemented Twilio Flex and leveraged its outbound notification capabilities to support customized messaging while also integrating their cloud-based outbound dialer and workforce management suite. The insurer now has optimized agent productivity and agent-customer communication, as well as newfound access to real-time application data across the entire contact center.

2. Holistic, Well-Connected Distribution Network

Insurance has always had a complex distribution network across platforms, partnerships, carriers, agents, producers, and more. Leveraging technology to manage these relationships opens opportunities to gain real-time insights and implement effective strategies, fostering holistic solutions and moving away from point solutions. Managing this complexity and maximizing the value of this network requires a good business and digital transformation strategy.

Our proprietary Envision process has been leading the way to help carriers navigate this complex system with proprietary strategy tools, historical industry data, and best practices.

3. Artificial Intelligence (AI) for Process Automation

Not surprisingly, AI permeated many of the presentations and demos across the sessions. AI offers insurers unique decisioning capabilities throughout the value chain to create differentiation. It was evident that while we often talk about AI as an overarching technology, the use cases were more point solutions across the insurance value chain. Moreover, AI is not here to replace the human, but rather to assist the human. By automating mundane process activities, mindshare and human capital can be invested in more value-added activities and critical problems to improve customer experience. Because these point solutions are available across many disparate groups, organizational mandates demand safe and ethical use of AI models.

Our PACE framework provides a holistic approach to responsibly operationalize AI across an organization. It empowers organizations to unlock the benefits of AI while proactively addressing risks.

Our industry continues to evolve in delivering its noble purpose – to protect individuals’ and businesses’ property, liability, and financial obligations. Technology is certainly an enabler of this purpose, but transformation must be managed to be effective.

Perficient Is Driving Success and Innovation in Insurance

Want to know the now, new, and next of digital transformation in insurance? Contact us and let us help you meet the challenges of today and seize the opportunities of tomorrow in the insurance industry.

5 Takeaways: Enhancing Trust in Healthcare [Webinar]

In our recent webinar, “Enhancing Trust in Healthcare,” experts David Allen and Michael Porter, along with Appian’s Matt Collins, addressed the concerning decline in consumer trust within the healthcare sector.

Historically, healthcare has maintained higher levels of trust compared to other industries, but a recent Gallup survey shows that this trust is now at a near-record low.

Related: 9 Healthcare Trends For 2024

The discussion explored actionable strategies to enhance trust among both patients and members, emphasizing the importance of transparency, effective communication, and improving outcomes. Our experts shared insights on how healthcare organizations can rebuild confidence and ease experiences.

5 Ways to Enhance Trust in Healthcare

1. Understand the key factors contributing to patient/member mistrust

Nearly one third of Gallup respondents cited ‘very little’ confidence in the medical system, well above the 20-year average. This highlights a significant gap in public confidence that healthcare organizations must address.

Factors contributing to this mistrust include inconsistent communication, perceived lack of transparency, and negative past experiences.

For instance, consider the following statistics:

  • 30% of consumers have delayed or skipped care after finding inaccurate provider information within their health plan’s transparency tools.
  • 49% of providers identify that patient information errors are a primary cause of denied claims (e.g., authorizations, eligibility, etc.)

Related: Build Empathy and Understanding. Ease Patient and Member Journeys.

2. Optimize your approach by keeping the consumer at the heart of progress

Traditional approaches to technology often lead to friction points that can erode trust with your patients and members. We recommend instead that healthcare organizations embrace an outcomes-based mindset and approach.

This starts by aligning the enterprise around a strategic vision and actionable KPIs. It’s a holistic, iterative process rooted in value creation and supported by change management.

Hallmarks of a business transformation approach include:

  • Alignment with organizational strategy
  • Assessment of overall readiness
  • Orchestration around the user
  • An iterative MVP approach
  • A flexible technical foundation
  • Intentional focus on data and KPIs

Discover More: Business Transformation in Healthcare

3. Tactically and strategically ease the healthcare journey

Consumers are navigating an increasing number of digital touchpoints throughout their healthcare journey. These digital interactions are crucial for engagement and proactive health monitoring.

By leveraging technology to provide timely updates and personalized care, healthcare organizations can strengthen relationships with patients and members.

Focused use cases could include:

  • Referrals + Scheduling: Simpler, faster, more-memorable referral journeys for patients
  • Health Monitoring: Patients feel known and well cared for in their health journey
  • Eligibility: Faster verification and clarity of choice for the consumer
  • Prior Authorizations: Reduce guesswork; patient already feels worried and unprepared
  • Revenue Cycle Management: Provider has insight into revenue and financial status in near-real time
  • Claims Management: Member feels confident the insurer can tell them where they stand at any time

4. Break down silos to improve outcomes

Technologies deployed in narrow silos can ultimately contribute to a challenge as much as they seek to solve it. While different technology systems are good at their specific role in the organization, effective data transfer between systems often proves challenging, hindering health and business outcomes.

Breaking down these silos through integrated systems and collaborative approaches can enhance communication and coordination across the healthcare ecosystem. Ideally, modernization efforts will maximize technology to drive health innovation, efficiency, and interoperability.

  • Orchestrate resources and decision-making processes into a culture that promotes growth
  • Set strategic parameters for operational excellence and champion iterative delivery models
  • Innovate beyond mandated goals to add business value, meet consumers’ evolving expectations, and deliver equitable care and services
  • Accelerate value with secure, compliant, and modern platforms

5. Determine if intelligent automation and advanced analytics can address challenges

Trust gaps are commonly voiced by patients and members alike. These breakdowns in trust often manifest as the result of weakly orchestrated processes and data assets.

Intelligent automation can address a number of these trust-influencing challenges, including:

  • Self-Service + Transparency: Control and visibility over actions and impacts in the digital journey
  • Accuracy + Completeness: Comprehensive, up-to-date information across the digital journey
  • Speed of Response: Close to real-time updates about critical information in the digital journey
  • Privacy + Security: Compliance aligned with appropriate flexibility in using my data to best serve and enhance the digital journey

Success Story: Improving Experiences and Offsetting Call Center Volume

Elevate Trust With Expert Healthcare Guidance

We blend healthcare and automation expertise to help leaders optimize processes and elevate experiences.

Discover why we have been trusted by the 10 largest health systems and the 10 largest health insurers in the U.S., and Modern Healthcare consistently recognizes us as one of the largest healthcare consulting firms.

Our experts will help you identify how work is performed today and how you can optimize for tomorrow. Contact us to get started.

Watch the Full Webinar Now:

Unlock Efficiency: How Salesforce CPQ’s Renewal and Amend Features Simplify Your Business

Imagine running a business where you offer subscription-based products. As your customer base grows, you begin to notice something slipping—renewal deadlines, contract complexities, and your sales team being bogged down with manual updates. Enter Salesforce CPQ (Configure, Price, Quote), a powerful tool designed to help businesses streamline the often-complex process of managing quotes, pricing, and contracts. But that’s not all—Salesforce CPQ’s renewal and amend functionalities are here to make your contract management process seamless and automatic.

Let’s dive into how CPQ works, how it simplifies renewals and amendments, and why it’s a game-changer for any business using subscription models.


What is Salesforce CPQ?

At its core, Salesforce CPQ helps businesses configure their products, set pricing, and generate quotes quickly and accurately. Whether your product comes in different sizes, packages, or configurations, CPQ automates the process of calculating pricing based on your business rules, ensuring everything stays consistent. It also handles complex contracts, helping your sales team focus on selling rather than getting lost in the weeds of paperwork.

Now, imagine adding automation to this process, especially when it comes to renewing contracts or amending existing ones. This is where CPQ truly shines, offering standard functionality that reduces the workload while improving accuracy and customer satisfaction.

The Challenge of Renewals

Picture this: It’s the start of the week, and your inbox is overflowing with reminders—expiring contracts, upcoming renewals, and customer requests for service changes. Each contract has unique pricing, terms, and configurations. Manually tracking them is time-consuming and prone to human error. Missing a renewal date could lead to a loss of revenue or, worse, a dissatisfied customer.

Managing renewals manually can be overwhelming. But with Salesforce CPQ’s renewal functionality, this process is automated. Contracts are renewed at the right time, with minimal intervention from your team. No more worrying about missed deadlines or scrambling to send out renewal quotes. The system handles it for you, transforming what was once a cumbersome task into a smooth, efficient process.

 

How Renewal Functionality Works

Let’s say you have a loyal customer, Sara, whose subscription is nearing its end. In the past, you might have had to manually track her contract, reconfigure the terms, and send her a quote. But now, thanks to Salesforce CPQ’s renewal feature, the system automatically generates a renewal quote in advance, accounting for any updated pricing or discounts.

Your sales team receives a notification and can review the quote before sending it out. Sara, impressed with the efficiency, signs off on the renewal without delay. The entire process is handled smoothly, saving your team hours of manual work and ensuring customer satisfaction. Renewals become a way to strengthen your customer relationships, all while keeping your operations running efficiently.

Tackling Contract Amendments with Ease

But what happens when a customer wants to make changes mid-contract? Perhaps Sara reaches out midway through the year, wanting to upgrade her service package. In the past, you’d have to manually adjust the contract, update pricing, and notify the billing team. The whole process was time-consuming and left room for mistakes.

That’s where Salesforce CPQ’s amend functionality comes into play. Instead of starting from scratch, the system pulls up the existing contract, applies the requested changes, and automatically updates the quote. Whether Sara wants to add more users to her service or change the scope of her subscription, the amend functionality ensures everything is handled efficiently.

The amend feature also updates billing automatically, preventing errors that could arise from manual adjustments. Your team saves time, reduces the risk of miscommunication, and ensures that your customer is getting exactly what they need—without the hassle.

Automation Transforms Business Operations

Let’s face it—managing contracts manually is inefficient. Every contract expiration requires revisiting the original terms, configuring renewal details, and generating quotes. The more complex the contract, the higher the chances of errors. Handling amendments mid-term also introduces challenges, often leading to confusion or customer dissatisfaction.

But with Salesforce CPQ’s automated renewal and amend functionalities, the pressure is off. These features allow you to focus on what matters most: growing your business and building relationships with your customers. Automation increases accuracy, reduces manual effort, and ensures no details slip through the cracks.

Conclusion: A New Era of Contract Management

If your business is still managing renewals and amendments manually, now is the time to embrace the future with Salesforce CPQ. By automating these critical processes, you not only save time but also improve customer experience and protect your revenue streams.

Think about Sara—her smooth, seamless contract renewal and service upgrade are just one example of how CPQ’s renewal and amend features make a real difference. Your team can now focus on closing new deals, knowing that contract management is handled automatically.

Say goodbye to manual management and welcome the efficiency of Salesforce CPQ. It’s time to streamline your operations and let automation pave the way to a more successful, customer-focused future.

Energy Organizations Seek Cross-Industry Solutions to Stay Competitive

Broad changes are underway in energy and utilities organizations, many influenced by trends from other sectors. These shifts are pushing companies in utilities and oil and gas to rethink their approaches, creating new cross-industry dependencies and consumer interactions.

Working to Accommodate Changing Consumer Behaviors

Energy companies are responding to evolving consumer behavior. In utilities, the shift from viewing customers as ratepayers to treating them as consumers with retail-like expectations is evident. Consumers now expect flexible payment options, including online payments, recurring billing, and credit card acceptance. Utilities are also offering smart devices, like mobile-controlled thermostats, while oil and gas companies are rolling out loyalty programs at fuel stations.

Additionally, both industries are taking on advisory roles, helping consumers manage energy use and conservation, and entering the renewable generation market. In this, they’re increasingly relying on capital markets to address grid demand and stability challenges.

How Energy Companies Flipped the Switch on Electrification

The interaction between the energy and automotive industries is accelerating due to electrification. Consumers want fast EV charging and solutions to ease range anxiety. Oil and gas companies are adding EV charging stations alongside gas pumps, while utilities balance offering charging services with maintaining grid reliability. This deepening interplay between industries has driven companies to seek talent with cross-industry expertise.

Engineering Experience No Longer Needed for Executive Roles

Traditional pathways to leadership in energy companies, which often required engineering experience, are changing. We’re seeing executives from the automotive and retail sectors take on leadership roles in utilities. The focus is now on bringing in fresh perspectives and solutions from outside industries to meet evolving consumer needs.

Cross-Industry Expertise Is Critical to Partnership

As the energy landscape transforms, partnerships with companies that bring expertise from industries like automotive, manufacturing, and retail are becoming critical. These cross-industry collaborations are key to navigating the complex challenges and opportunities ahead.

If you’d like to learn more about tapping into expertise from other industries to accelerate your transformation, explore our industry expertise.

To dive into how we’re using cross-industry solutions to optimize the utilities industry, explore our energy and utilities expertise.

White Label Your Mobile Apps with Azure

Enterprises and organizations that manage products with overlapping feature sets often confront a unique challenge. Their core dilemma involves creating multiple branded mobile applications that share a common codebase while enabling each app to provide a distinct user experience with minimal development overhead. As a leader in custom mobile solutions, Perficient excels in white labeling mobile applications using the power and flexibility of Azure DevOps.

Tackling the White Label Challenge

Consider a scenario where your application has gained popularity, and multiple clients desire a version that reflects their own brand identity. They want their logos, color schemes, and occasionally distinct features, yet they expect the underlying functionality to be consistent. How do you meet these demands without spawning a myriad of codebases that are a nightmare to maintain? This post outlines a strategy and best practices for white labeling applications with Azure DevOps to meet this challenge head-on.

Developing a Strategy for White Label Success

White labeling transcends merely changing logos and color palettes; it requires strategic planning and an architectural approach that incorporates flexibility.

1. Application Theming

White labeling starts with theming. Brands are recognizable through their colors, icons, and fonts, making these elements pivotal in your design. Begin by conducting a thorough audit of your current style elements. Organize these elements into variables and store them centrally, setting the stage for smooth thematic transitions.

2. Establishing Your Default Configuration

Choosing a ‘default’ configuration is crucial. It sets the baseline for development and validation. The default can reflect one of your existing branded applications and acts as a unified starting point for addressing issues, whether related to implementation or theming.

3. Embracing Remote/Cloud Configurations

Tools like the Azure App Configuration SDK or Firebase Remote Configuration allow you to modify app settings without altering the code directly. Azure’s Pipeline Library also helps manage build-time settings, supporting flexible brand-specific configurations.

Using remote configurations decouples operational aspects from app logic. This approach not only supports white labeling but also streamlines the development and customization cycle.

Note: You can include the brand produced in step 2, “Adding Your “Brand” Configuration to Your Build,” in your build artifacts and reference the correct values in your remote configurations for that brand.
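As a small illustration of the Pipeline Library idea, a brand-specific variable group (the group name and variable below are hypothetical) can be linked into the pipeline so its values are available to build steps:

variables:
    - group: 'brand-settings-BrandA' # hypothetical variable group holding BrandA theme values

steps:
    - script: echo "Building with primary color $(PrimaryColor)"
      displayName: 'Show a brand-specific setting'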

Coordinating White Labeled Mobile Apps with Azure Pipelines

With your application ready for theming and remote configuration, use Azure Pipelines to automate the build and release of your branded app artifacts. The structure of your build stages and jobs will depend on your particular needs. Here’s a pattern you can follow to organize jobs and stages for clarity and parallelization:

1. Setting Up Your Build Stage by Platforms

Organize your pipeline by platform, not brand, to reduce duplication and simplify the build process. Start with stages for iOS, Android, and other target platforms, ensuring these build successfully with your default configuration before moving to parallel build jobs.

Run unit tests side by side with this stage to catch issues sooner.

2. Adding Your “Brand” Configuration to Your Build

Keep a master list of your brands to spawn related build jobs. This could be part of a YAML template or a file in your repository. Pass the brand value to child jobs with an input variable in your YAML template to make sure the right brand configuration is used across the pipeline.

Here’s an example of triggering Android build jobs for different brands using a matrix strategy:

stages:
    - stage: Build
      jobs:
          - job: BuildAndroid
            strategy:
                matrix:
                    BrandA:
                        BrandName: 'BrandA'
                    BrandB:
                        BrandName: 'BrandB'
            steps:
                - template: templates/build-android.yml
                  parameters:
                      brandName: $(BrandName)
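For completeness, the templates/build-android.yml template referenced above might look roughly like the following; the Gradle property used to hand the brand to the build is an assumption about your project setup:

# templates/build-android.yml (sketch)
parameters:
    - name: brandName
      type: string

steps:
    - script: ./gradlew assembleRelease -PbrandName=${{ parameters.brandName }}
      displayName: 'Build the ${{ parameters.brandName }} variant'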

3. Creating a YAML Job to “Re-Brand” the Default Configuration

Replace static files specific to each brand using path-based scripts. Swap out the default logo at src/img/logo.png with the brand-specific logo at src/Configurations/Foo/img/logo.png during the build process for every brand apart from the default.

An example YAML snippet for this step would be:

jobs:
    - job: RebrandAssets
      displayName: 'Rebrand Assets'
      pool:
          vmImage: 'ubuntu-latest'
      steps:
          - script: |
                cp -R src/Configurations/$(BrandName)/img/logo.png src/img/logo.png
            displayName: 'Replacing the logo with a brand-specific one'

4. Publishing Your Branded Artifacts for Distribution

Once the pipeline jobs for each brand are complete, publish the artifacts to Azure Artifacts, app stores, or other channels. Ensure this process is repeatable for any configured brand to lessen the complexity of managing multiple releases.

In Azure, decide whether to categorize your published artifacts by platform or brand based on what suits your team better. Regardless of choice, stay consistent. Here’s how you might use YAML to publish artifacts:

- stage: Publish
  jobs:
      - job: PublishArtifacts
        pool:
            vmImage: 'ubuntu-latest'
        steps:
            - task: PublishBuildArtifacts@1
              inputs:
                  PathtoPublish: '$(Build.ArtifactStagingDirectory)'
                  ArtifactName: 'drop-$(BrandName)'
                  publishLocation: 'Container'

By implementing these steps and harnessing Azure Pipelines, you can skillfully manage and disseminate white-labeled mobile applications from a single codebase, making sure each brand maintains its identity while upholding a high standard of quality and consistency.

For more information about Perficient’s Mobile Solutions expertise, subscribe to our blog or contact our Mobile Solutions team today!

Mastering Data Integration: Unveiling EPM Pipeline’s Cutting-Edge Features

EPM Pipelines is quickly becoming a very useful addition to the arsenal of many of our Oracle customers. It is especially important for those users who do not have dedicated personnel or server-related resources to automate their daily business processes. In my recent blog, I detailed how to create a Pipeline to perform data load related activities in a workflow. In this blog, I will discuss a few additional features in Pipelines that will help you enhance your data integration experience.

Clear Cube Job Type

Users can now use a Clear Cube job type to clear all or specific data from a cube. With this job type, you can perform the following:

  • Clear data using member selection
  • Clear data using MDX query
  • Clear supporting details and comments
  • Clear attachments
  • Choose to clear physical or logical data
  • Use run-time prompts to define clear regions

 

Note that this job type is available only in Financial Consolidation and Close, FreeForm, Planning, Planning Modules, Tax Reporting.

 

To create a Clear job type

  • Click the Create Job icon to create a new job in the stage of the Pipeline to which you want to add a Clear job.
  • Select ‘Clear Cube‘ from the Job Type drop-down.
  • Select the cube to clear from the ‘Name’ drop-down.
  • Provide a ‘Title’ for the job.
  • Optionally, add run-time labels and values.
  • The Pipeline is auto-saved.
  • Run the updated Pipeline by clicking the play button.

 

File Operations Job Type

Users can use the File Operations job type to run the following operations at runtime:

  • Copy a file – copies the file from a source directory to a target directory and retains the original file in the source directory after the copy operation to a target directory
  • Move a file – moves the file from a source directory to a target directory, but does not retain the moved file in the source directory after the move operation to a target directory
  • Unzip a file – Unzips a file in the same folder

 

Note that this job type is available only in Enterprise Profitability and Cost Management, Financial Consolidation and Close, FreeForm, Planning, Planning Modules, Tax Reporting.

 

To create a Move File Operations job type

  • Click the Create Job icon to create a new job in the stage of the Pipeline to which you want to add a Move job.
  • Select a ‘Connection‘ and provide a ‘Title’.

  • Make the File operation parameter selections.
    • File Operation: Copy, Move or Unzip.
    • Source Directory: directory from which to copy, move, or unzip the file.
    • Source File Name: name of the file to copy, move, or unzip.
    • Target Directory: directory to which files are copied. The target directory can be: inbox, openbatch, openbatchml, and epminbox.
    • Target File Name: name of the file that has been copied, moved, and unzipped.
  • A target file name is not required for an “Unzip” file operation


  • The Pipeline is auto-saved.
  • Run the updated Pipeline by clicking the play button. The file is moved to the target directory.

The Clear Cube and File Operations can be great additions to your EPM Pipelines for maintaining data in the cubes or file-based data integrations.

Effortless Data Integration: Step-by-step Editing and Running of Oracle EPM Cloud Pipelines

In my recent blog, Efficiency Redefined: How Oracle Cloud EPM Pipelines Can Streamline Your Business Process, I discussed how Perficient implemented a single streamlined process to organize the daily data load and processing jobs of one of our clients. Now, let’s review editing an existing Pipeline to add a new job, how to execute the Pipeline, and review the log file.

Editing a Pipeline to Add a New Variable

  1. From the Data Integration home page under Data Exchange, click the menu to the right of the Pipeline you want to edit, and then select ‘Pipeline Details‘.
  2. Click the edit icon and the ‘Details’ tab to edit Pipeline details such as the name and the number of parallel jobs.
  3. Click the ‘Variables‘ tab to add, update, or delete variables.
  4. Click the add icon to add a new variable. A new row is added to the bottom of the page. Enter the following details:
    • Variable name: CopyScenario
    • Display Name: Copy Scenario
    • Display Sequence: 8
    • Required: Yes
    • Validation Type: Custom List
  5. To add the custom values that will be available for selection at run time for this variable, click the edit icon and enter the list values.
  6. Review the values you want available for the CopyScenario variable at run time, then click ‘OK‘ and ‘Save‘.

 

Editing a Pipeline to Add a New Job

  1. On the Pipeline page, expand the stage to which you want to add a new job.
  2. Click the add job icon.
  3. To add a new job to run the data copy rule, select the below:
    • Type: Business Rule
    • Connection: Local
    • Name (Select the business rule to launch in this job): Copy Working to Final
    • Title (Name of the job): Copy Working to Final
    • Sequence: 2
    • Label (runtime prompt as it is defined in the selected business rule): Not Applicable
    • Value (custom value type for a runtime prompt, specify the actual value): Not Applicable


  1. The new job is auto-saved to the Pipeline.

Running a Pipeline

  1. You can open the Pipeline and click the play button to run it.
  2. Or you can run it from the Data Integration page.
  3. On the Run Pipeline page, complete any runtime prompts and then click ‘Run‘.
  4. While the Pipeline is running, the system shows its status. You can click the status icon to download the log.
  5. When a job has been successfully executed, a checkmark appears on the job card.
  6. You can see the status of the Pipeline in Process Details, where you can also download the log files for each job.

With the above detailed steps, Perficient was able to convert a manual and time-consuming data load process into a streamlined Pipeline that can be executed end to end with a single click. Mastering the process of editing and running EPM Pipelines can be crucial to your organization achieving efficiency and streamlined data management, thus harnessing the full potential of Oracle EPM Cloud for your business endeavors.

 

 

Efficiency Redefined: How Oracle Cloud EPM Pipelines Can Streamline Your Business Process

Oracle’s new Pipelines feature in the Enterprise Performance Management Cloud (EPM) applications was made available with the June 2023 update. Pipelines allow for organizing individual jobs into one overall procedure directly within the EPM application, even across multiple EPM applications. You can learn more about this new feature in a recent Perficient blog Oracle EPM Cloud Feature Spotlight – Pipelines.

Below I’ll detail how Perficient used Pipelines to organize and automate the complete end-to-end data load process for one of our clients.

The daily load process of our client’s EPBCS application includes three steps:

  • Actuals data load from the ERP Cloud into the Reporting and Financials cubes
  • Copy of Actuals to Forecast scenario
  • Balance Sheet and Cash Flow Calculation

Each of these steps in the load process needs to be run in a specific order and is contingent upon the success of the previous step. But since the client did not own a remote server to host EPM Automate scripts that would facilitate the sequential execution of these jobs, the Administrator was forced to perform each step manually.

With the new Pipelines feature, we could organize the three steps into one workflow or “Pipeline” within the application, thus simplifying the load process. Here’s how we did it!


Creating a Pipeline

 Here are the steps that were used to create a Pipeline for the daily load process:

  1. From the Data Exchange page, Data Integration tab, click the add icon, and then select Pipeline.
    Note: Only Service Administrators can create and run a Pipeline.
  2. From the Create Pipeline page, click ‘Details’, and in the ‘Pipeline Name’ enter the process name: “Daily Actuals Load”.
  3. In ‘Pipeline Code’, specify a Pipeline code: “DailyActuals” (Alphanumeric characters with minimum size 3 to maximum 30. No special characters or space is allowed).
  4. All jobs in the process run in a sequence so enter Maximum Parallel Jobs for each stage as 1.
  5. Click ‘Save and Continue‘.
  6. On the Variables page, a set of out-of-box variables (global values) for the Pipeline is available from which you can set parameters at runtime. (You can delete variables you do not need or create new variables. Variables can be pre-defined types like “Period,” “Import Mode,” and “Export Mode,” or they can be custom values used as job parameters.)
  7. Keep the default runtime variables included with a Pipeline. Click ‘Save‘.
  8. On the Pipeline page, click the add stage icon to create a new stage card. A stage card contains the jobs that you want to run in the Pipeline at that stage and can include jobs of any type and for multiple target applications.
  9. In the Stage Editor, enter the following:
    • Stage Name: Loads
    • Title: ERP Actuals Load
    • Sequence (a number to define the chronological order in which a stage is executed): 1
    • Parallel: Off
  10. On the stage card, click ‘>‘ to add a new job to the stage.
  11. On the stage card, click the Create Job icon.
  12. A new job card is displayed on the stage card.
  13. In the Job Editor, from the ‘Type‘ drop-down, select the ‘Integration‘ job type to add to the stage card.
  14. From the ‘Connection‘ drop-down, select ‘Local’ if the data integration is in the current environment.
  15. Enter the following details for the job:
    • Name (select the Data Integration that loads from Cloud ERP to this EPBCS instance): Fusion US to EPBCS
    • Title (the job name to appear on the job card): RPT Actuals Load
    • Sequence: 1
    • Job Parameters: select any job parameters associated with the job, like Import Mode, Export Mode, Start & End Period, and Data Load POV.
  16. Similarly, create a second job in this stage card to load data to the Financials cube.
  17. Create the second stage, to copy Actuals to the Forecast scenario, by clicking the add stage icon. Provide a stage sequence of 2.
  18. Add a Business Rule type job and enter the following details:
    • Connection: Local
    • Name (select the business rule to launch in this job): Copy Actuals to OEP_FS
    • Title (name of the job): Copy Actuals to Forecast
    • Sequence: 1
    • Label (runtime prompt as it is defined in the selected business rule): CurMonth
    • Value (custom value for a runtime prompt, specifying the actual value): Not Applicable
  19. Create the third and final stage of the Pipeline to calculate Balance Sheet and Cash Flow at the Forecast level and add a Business Rule type job that uses the desired rule. Close the Pipeline.

          Note: The Pipeline is auto-saved while you are creating stages and jobs.

  20. The new Pipeline is added to the Data Integration homepage. Each Pipeline is identified by a Pipeline icon under the ‘Type’ header.

With a few simple steps, we organized the different types of jobs that needed to run for the data to load and process into one Pipeline. I will discuss how you can edit an existing pipeline, run it, and review the log files in my next blog!

Industries Document Generation

Every company needs documents for its processes, information, contracts, proposals, quotes, reports, non-disclosure agreements, service agreements, and for various other purposes. Document creation and management is a crucial part of their operations. To make it easy, Omnistudio provides document generation capabilities, tailored to meet the unique requirements of different Industries. In this blog post, we will explore how OmniStudio empowers document generation across a range of industries, enhancing efficiency, accuracy, and compliance.

What is Industries Document Generation?

OmniStudio offers a powerful and robust solution for document generation across various industries, empowering organizations to automate and streamline their document creation processes.

To get the document generation capabilities, you need to install ‘OmniStudio’ and ‘Salesforce Industry’ packages in the same org. Within the package, you get the tools necessary to optimize the document generation process. A sample functionality comes with the package. You can leverage that or can build your own to fulfill your purposes. Let’s dive into how we can configure the document generation functionality.

How to generate documents?

The Industries Document Generation functionality uses Document Template along with Omnistudio components to generate documents. The Omnistudio components generally involve Omniscripts, Integration Procedures and Dataraptors. As part of the package, it provides two sample Omniscripts:

  • Client-Side Omniscript: To generate documents with user inputs
  • Server-Side Omniscript: To Generate documents without user interaction

You can leverage these Omniscripts or can create your own. The Dataraptors inside these Omniscripts extract the required data from Salesforce and merge it with the document template to dynamically generate several documents from a single template file. The document template can be a Microsoft Word (.docx), Microsoft PowerPoint (.pptx), or Web template. The templates can be designed and configured inside the Document Template Designer.

Document Template Designer:

The Document Template Designer tool is one of the main components for the document generation functionality. It enables businesses to easily create and customize their document templates. It offers a user-friendly interface, allowing users to design visually appealing and professional templates tailored to their specific needs.

Below Image shows what it looks like.

Document Template Designer – Configuration

 

Here are some key features of Document Template Designer:

  • Multiple Template Types:

The Document template designer allows us to create 3 types of templates:

    1. Vlocity Web Templates: Allows us to create the template in the Template Designer itself.
    2. Microsoft Word .DOCX Template: Create a Microsoft Word document and upload it to the Template Designer.
    3. Microsoft PowerPoint .PPTX Template: Create a Microsoft PowerPoint document and upload it to the Template Designer.
  • Versioning and Template Management:

It allows users to manage multiple versions of their templates and track changes over time. For Word/PPT documents, you can simply replace the old file with an updated file.

  • Dynamic Data Binding:

It enables users to insert Salesforce data into Tokens. Tokens are nothing but the Placeholder for the data fields in Salesforce. Learn more about the tokens in the Documentation – Tokens in Microsoft Word or Microsoft PowerPoint Documents

  • Localization and Multi-Language Support:

Users can create templates in different languages, ensuring that documents can be generated to meet the diverse needs of their global customer base.

  • Preview and Testing:

Prior to finalizing a document template, users can preview and test how the template will appear when populated with data.

Client-Side Document Generation:

The client-side document generation mechanism allows us to collect information from users and generate documents based on those inputs. Use the client-side Omniscript that comes with the package or modify it depending on your requirements. This Omniscript lets the user pick the document template and provides options for what type of document to generate. It then passes this information as input parameters to an LWC component that generates the actual document. The generated document can then be attached to a Salesforce object or stored in external systems.

Client-Side Document Generation Flow (image from https://help.salesforce.com).

Here are the features of Client-side Document Generation:

  • Flexibility of document generation:

With Omniscript UI, user inputs can be taken to decide what type of document to generate. Users have flexibility to choose templates and a number of documents to generate.

  • Runs on the client machine:

Client-side document generation is a browser-based process. This results in fast processing and saving round-trip to the server.

  • Document Preview:

You can display a generated document to the user and can send the document via email or to other integrated systems.

Server-Side Document Generation:

The server-side document generation service generates documents without user interaction; all processing happens in the back end. It’s an asynchronous process built for generating large, heavy-rendering documents. It uses Apex classes, an Integration Procedure, or the sample Omniscript to generate documents.

Server-Side Document Generation Flow

This image is taken from: https://help.salesforce.com.

The process for server-side document generation uses remote API services hosted on the Salesforce Hyperforce environment. Using Integration Procedures or Apex code, it sends requests to this environment to generate documents. The generated documents are then stored in Salesforce as content documents.

Here are the features of Server-side document Generation:

  • Attachment and Storage Management:

The generated documents are stored in Salesforce Content. Additionally, upload them to external file storage systems or integrate with document management solutions for efficient storage and retrieval.

  • Batch Processing:

It uses batch processing, which suits Salesforce’s multi-tenant architecture, since it involves processing large documents.

  • Automation:

The document generation process can be triggered based on specific events or actions, ensuring the right documents are generated at the right time.

  • Scalability & Performance:

Server-side document generation in OmniStudio is designed to manage large volumes of data and generate documents efficiently. It ensures optimal performance and scalability, allowing businesses to generate documents at scale.

Conclusion:

Salesforce, with its industry-specific solutions and robust document generation capabilities, empowers businesses to automate and streamline their document generation and management processes. Providing personalized, compliant, and efficient document generation, helps organizations enhance customer experience, drive productivity, and achieve tangible business outcomes. Embracing the capabilities of Industries Document Generation can revolutionize your industry, enabling you to stay ahead in today’s competitive landscape.

ETL Vs ELT Differences

What is ETL?


ETL stands for Extract, Transform, Load. This process is used to integrate data from multiple sources into a single destination, such as a data warehouse. The process involves extracting data from the source systems, transforming it into a format that can be used by the destination system, and then loading it into the destination system. ETL is commonly used in business intelligence and data warehousing projects to consolidate data from various sources and make it available for analysis and reporting.

What is ELT?


ELT stands for Extract, Load, Transform. It is a process similar to ETL but with a different order of operations. In ELT, data is first extracted from source systems and loaded into the destination system, and then transformed into a format that can be used for analysis and reporting. This approach is often used when the destination system has the capability to perform complex transformations and data manipulation. ELT is becoming more popular with the rise of cloud-based data warehouses and big data platforms that can handle large-scale data processing and transformation.

Here’s What Makes these Two Different:


ETL (Extract, Transform, Load) and ELT (Extract, Load, Transform) are two methods of data integration used in data warehousing.

ETL involves extracting data from various sources, transforming it into a format that can be used by the target system, and then loading it into the target system. The transformation process involves cleaning, validating, and enriching the data before it is loaded. ETL is a batch-oriented process that requires a significant amount of computing power and storage space.

Conversely, ELT involves extracting data from various sources and loading it directly into the target system without any transformation. The transformation process is performed after the data has been loaded into the target system. ELT is a more modern approach that takes advantage of the processing power of modern data warehouses and allows for real-time analysis of data.

The main difference between ETL and ELT is the order in which the transformation process is performed. In ETL, transformation is performed before loading, while in ELT, transformation is performed after loading. The choice between ETL and ELT depends on the specific needs of the organization and the characteristics of the data being integrated.

How is ELT Different from ETL, and What are its Advantages and Disadvantages?

Advantages of ELT over ETL:

  • Faster processing: ELT can process data faster than ETL because it eliminates the need for a separate transformation tool.
  • Lower latency: ELT can provide lower latency in data processing because it can load data directly into the data warehouse without the need for intermediate storage.
  • More efficient use of resources: ELT can make more efficient use of computing resources because it can leverage the processing power of the data warehouse.
  • Better support for big data: ELT is better suited for big data environments because it can handle large volumes of data without the need for additional infrastructure.

Disadvantages of ELT over ETL:

  • Dependency on data warehouse: ELT processes are dependent on the availability and compatibility of the data warehouse, which can cause delays or failures in data integration.
  • Complexity: ELT requires a high level of technical expertise and may be more difficult to implement than ETL.
  • Data quality issues: ELT can result in data quality issues if not properly designed or executed, leading to inaccuracies or incomplete data in the data warehouse.
  • Security risks: ELT processes can introduce security risks if sensitive data is not properly protected during extraction, loading, and transformation.

So which approach should you choose: ETL or ELT?


ETL (Extract, Transform, Load) and ELT (Extract, Load, Transform) are two approaches to data integration that are widely used in the industry. Both ETL and ELT are used to extract data from multiple sources, transform it into a format that can be used by the target system, and load it into the target system. However, there are some key differences between the two approaches.

ETL (Extract, Transform, Load):

ETL is a traditional approach to data integration that has been used for many years. In this approach, data is first extracted from various sources and then transformed into a format that can be used by the target system. The transformed data is then loaded into the target system. ETL is a batch process that is usually done on a scheduled basis.

The main advantage of ETL is that it allows for complex transformations to be performed on the data before it is loaded into the target system. This means that data can be cleaned, filtered, and enriched before it is used. ETL also allows for data to be consolidated from multiple sources, which can be useful when data is spread across different systems.

However, ETL can be slow and resource-intensive. Because the transformations are performed before the data is loaded into the target system, large amounts of data can take a long time to process. ETL also requires a dedicated server or cluster to perform the transformations.

Example of ETL:

A company wants to integrate data from multiple sources, including sales data from its CRM system and financial data from its accounting software. They use an ETL tool to extract the data, transform it into a common format, and load it into a data warehouse. The ETL process includes cleaning and filtering the data and performing calculations to create new metrics. The transformed data is then used for reporting and analysis.
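To make the flow concrete, below is a minimal ETL sketch in Python. It is illustrative only: the source file sales.csv, its columns, the tax rate, and the SQLite database standing in for a data warehouse are all assumptions rather than part of any specific tool.

import csv
import sqlite3

# Extract: read raw rows from a hypothetical CRM export (file name and columns are assumed)
with open("sales.csv", newline="") as source:
    raw_rows = list(csv.DictReader(source))

# Transform: clean, filter, and derive a new metric before anything is loaded
transformed = []
for row in raw_rows:
    amount = float(row["amount"] or 0)
    if amount <= 0:  # drop invalid records
        continue
    transformed.append({
        "order_id": row["order_id"].strip(),
        "region": row["region"].strip().upper(),      # standardize values
        "amount": amount,
        "amount_with_tax": round(amount * 1.18, 2),   # derived metric, assumed tax rate
    })

# Load: write only the cleaned, enriched data into the target system
conn = sqlite3.connect("warehouse.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS sales "
    "(order_id TEXT, region TEXT, amount REAL, amount_with_tax REAL)"
)
conn.executemany(
    "INSERT INTO sales VALUES (:order_id, :region, :amount, :amount_with_tax)",
    transformed,
)
conn.commit()
conn.close()

Notice that all of the cleaning and enrichment happens in the pipeline itself, before the target database ever sees the data, which is exactly why ETL needs its own compute resources.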

ELT (Extract, Load, Transform):

ELT is a newer approach to data integration that has become popular in recent years. In this approach, data is first extracted from various sources and then loaded into the target system. Once the data is in the target system, it is transformed into a format that can be used by the system.

The main advantage of ELT is that it is faster and more scalable than ETL. Because the transformations are performed after the data is loaded into the target system, large amounts of data can be processed quickly. ELT also requires less hardware than ETL, as the transformations can be performed on the target system itself.

However, ELT is not suitable for complex transformations. Because the transformations are performed after the data is loaded into the target system, there are limitations on what can be done with the data. ELT is also not suitable for consolidating data from multiple sources, as the data must be loaded into the target system before it can be combined.

Example of ELT:

A company wants to migrate its on-premises database to the cloud. They use an ELT tool to extract the data from the on-premises database and load it into the cloud database. Once the data is in the cloud database, they use SQL queries and other tools to transform the data into the desired format. The ELT process is faster and more scalable than ETL, as it does not require a dedicated server or cluster for transformations.
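For comparison, here is the same scenario sketched as ELT in Python, again with SQLite standing in for a cloud warehouse; the table and column names are assumptions. The raw data is landed first, untouched, and the transformation runs as SQL inside the target system.

import csv
import sqlite3

conn = sqlite3.connect("warehouse.db")

# Extract + Load: land the raw source data in the target system without transforming it
conn.execute(
    "CREATE TABLE IF NOT EXISTS raw_sales (order_id TEXT, region TEXT, amount TEXT)"
)
with open("sales.csv", newline="") as source:
    rows = [(r["order_id"], r["region"], r["amount"]) for r in csv.DictReader(source)]
conn.executemany("INSERT INTO raw_sales VALUES (?, ?, ?)", rows)

# Transform: use the target system's own SQL engine after the data has been loaded
conn.executescript("""
    DROP TABLE IF EXISTS sales_clean;
    CREATE TABLE sales_clean AS
    SELECT TRIM(order_id)       AS order_id,
           UPPER(TRIM(region))  AS region,
           CAST(amount AS REAL) AS amount
    FROM raw_sales
    WHERE CAST(amount AS REAL) > 0;
""")
conn.commit()
conn.close()

Here the pipeline itself does almost no work; the heavy lifting is pushed into the warehouse, which is why ELT scales with the target platform rather than with a separate transformation server.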

Conclusion:

In conclusion, both ETL and ELT have their advantages and disadvantages. ETL is best suited for situations where complex transformations are required and where data needs to be consolidated from multiple sources. ELT is best suited for situations where speed and scalability are important and where simple transformations are sufficient. Ultimately, the choice between ETL and ELT will depend on the specific needs of the organization and the nature of the data being integrated.

Please share your thoughts and suggestions in the space below, and I’ll do my best to respond to all of them as time allows.

For more such blogs, click here.

Happy Reading!

]]>
https://blogs.perficient.com/2023/07/04/etl-vs-elt-differences/feed/ 13 338959
3 Reasons to Use Sitecore Workflow https://blogs.perficient.com/2023/06/02/3-reasons-to-use-sitecore-workflow/ https://blogs.perficient.com/2023/06/02/3-reasons-to-use-sitecore-workflow/#comments Fri, 02 Jun 2023 12:01:42 +0000 https://blogs.perficient.com/?p=336860

Many consider workflow to be a necessary evil. But it is necessary, and I will make my case in this post. I highly recommend working through the requirements for workflow and including it in your initial site build or rebuild. However, not all of us have that luxury – and that’s okay. The great thing about workflow in Sitecore is that, at the end of the day, it is just another field that can be modified like any other. This means that you can develop and apply workflow to Sitecore items at any point in your lifecycle via PowerShell.

Here are three reasons that I am a big advocate of implementing workflow. I would love to hear yours in the comments or on LinkedIn.

1. Scalability

Here is a common scenario: when building a website, workflow is deemed unnecessary and pushed to Phase 2 because there are only a few content authors. Other features and bugs continue to take priority over implementing workflow. Now you are a few years down the line and your organization has grown, or you have implemented features that require a new set of content authors to be trained. Suddenly this lack of workflow is a problem. 

An organization may not have any issue allowing all content authors to be administrators early in a Sitecore implementation. In fact, it is often viewed as the more efficient option if their authors are Sitecore savvy. It will save money on the workflow implementation, and the authors will not be limited to performing only certain actions. 

However, this model is not scalable. Businesses experience turnover and growth. It is unlikely that you will have the same people using Sitecore in a year that you do today. Making everyone an administrator is risky, and implementing even the most basic user roles and workflow will enable your organization to adopt Sitecore on a larger scale. 

2. Audit Trail

When you make a change to a page as an administrator, you just make the change and save. When you are using workflow and user roles, you use the “lock and edit” action that checks out and adds a new version of the page. This creates an audit trail of the editing history of that page:

An example of a Sitecore page that has 9 different versions created through workflow.

This history serves several purposes – the first being that it eliminates confusion. I have aided in many investigations when something unexpectedly changed on a page. No one knew who made the change or when it happened. Sitecore will show you which account last updated each item and when, but beyond that you must rely on the logs. Combing through the logs can be extremely time-consuming if you cannot pinpoint the day and time that the activity took place. If you have this editing history as shown above, you have a clear audit trail of what changed and when. 

Secondly, having this history can almost serve as an “undo” function. It allows you to revert the page to a previous version if something went wrong with the most recent version.

3. Ability to run automated publishing jobs

Let’s consider the case of an organization that has a lot of content but a small approval team. To make things easier on the approvers, they may elect to run an automated publishing job once or twice a day to publish all finalized content to the front end. The team would approve content throughout the day, publish whatever needs to go live on the site immediately, and the automated job would publish the rest of the approved content on schedule.

Automated publishing jobs are not possible unless you have workflow in place. These jobs work because they only publish approved content, not content that is in a Draft or Waiting for Approval state. Without any workflow, these jobs would publish all saved changes, potentially pushing changes live on the front end before they were ready to be user-facing.

Applying workflow after implementation

The good news: if you have already implemented a site without workflow, it is simple enough to run a project to implement it and apply it to the items in your solution. You can apply basic workflow, or you can develop a custom workflow tailored to your organization and its needs.  

When the development is done, you can deploy the code to production and plan to run a PowerShell script to apply the workflow at a time that is most convenient for your team. This means that you can time the deployment and the “enabling” separately to ensure that your entire team is prepared. When you apply workflow via script, you will likely set each item to the Approved state initially. The content team will need advance notice to ensure all recent changes are ready for production before the workflow changes are applied, since setting each item to Approved and publishing will push all changes to the front end.

Summary

Workflow is a worthwhile investment. If it is not already in your solution, I hope it is an upcoming stop on your roadmap. If you are also passionate about workflow or have any other compelling reasons to add to this list, reach out to me and start a dialog on LinkedIn!

]]>
https://blogs.perficient.com/2023/06/02/3-reasons-to-use-sitecore-workflow/feed/ 6 336860