How Innovative Healthcare Organizations Integrate Clinical Intelligence https://blogs.perficient.com/2025/04/28/how-innovative-healthcare-organizations-integrate-clinical-intelligence/ Mon, 28 Apr 2025

Healthcare organizations (HCOs) face mounting pressure to boost operational efficiency, improve health and wellness, and enhance experiences. To drive these outcomes, leaders are aligning enterprise and business goals with digital investments that intelligently automate processes and optimize the health journey. 

Clinical intelligence plays a pivotal role in this transformation. It unlocks advanced data-driven insights that enable intelligent healthcare organizations to drive health innovation and elevate impactful health experiences. This approach aligns with the healthcare industry’s quintuple aim to enhance health outcomes, reduce costs, improve patient/member experiences, advance health equity, and improve the work life of healthcare teams. 

Intelligent Healthcare Organizations: Driven By Clinical Intelligence  

Our industry experts were recently interviewed by Forrester for their April 2025 report, Clinical Intelligence Will Power The Intelligent Healthcare Organization, which explores ways healthcare and business leaders can transform workflows to propel the enterprise toward next-gen operations and experiences. 

We believe that being interviewed for this report highlights our commitment to optimizing technology, interoperability, and digital experiences in ways that build consumer trust, drive innovation, and support more-personalized care.  

We combine strategy, industry best practices, and technology expertise to deliver award-winning results for leading health plans and providers: 

  • Business Transformation: Activate strategy for transformative outcomes and health experiences. 
  • Modernization: Maximize technology to drive health innovation, efficiency, and interoperability. 
  • Data Analytics: Power enterprise agility and accelerate healthcare insights. 
  • Consumer Experience: Connect, ease, and elevate impactful health journeys. 

Understand and Deliver On Consumer Needs and Expectations 

Every individual brings an ever-changing set of needs, preferences, and health conditions. Now more than ever, consumers demand a tailored approach to their health care, which makes it imperative to know your audience. If you do not approach people as individuals with unique, personal needs, you risk losing them to another organization that does.  

Becoming an intelligent healthcare organization (IHO) takes more than just a technology investment; it is a complete restructuring of the enterprise to infuse and securely utilize clinical intelligence in every area and interaction.

In its report, Forrester defines an IHO as, “A healthcare organization that perpetually captures, transforms, and delivers data at scale and creates and seamlessly disseminates clinical intelligence, maximizing clinical workflows and operations and the experience of employees and customers. IHOs operate in one connected system that empowers engagement among all stakeholders.”

Ultimately, consumers – whether patients receiving care, members engaging with their plan’s coverage, or caregivers supporting the process – want to make and support informed health care decisions that cost-effectively drive better health outcomes. IHOs focus on delivering high-quality, personalized insights and support to the business, care teams, and consumers when it matters most, in ways that are accessible and actionable.

Orchestrate Better Health Access 

Digital-first care stands at the forefront of transformation, providing more options than ever before as individuals search for and choose care. When digital experiences are orchestrated with consumers’ expectations and options in mind, care solutions like telehealth services, find-care experiences, and mobile health apps can help HCOs deliver the right care at the right time, through the right channel, and with guidance that eases complex decisions, supports proactive health, and activates conversions. 

The shift toward digital-first care solutions means it is even more crucial for HCOs to understand real-time consumer expectations to help shape business priorities and form empathetic, personalized experiences that build trust and loyalty. 

In its report, Forrester states, “And as consumer trust has taken a hit over the past three years, it is encouraging that 72% of healthcare business and technology professionals expect their organization to increase its investment in customer management technologies.”  

Clinical intelligence, leveraged well, can transform the ways that consumers interact and engage across the healthcare ecosystem. IHOs see clinical intelligence as a way to innovate beyond mandated goals to add business value, meet consumers’ evolving expectations, and deliver equitable care and services.  

Interoperability plays a crucial role in this process, enabling more seamless, integrated experiences across digital platforms and systems. This interconnectedness ensures that consumers receive consistent, coordinated care regardless of where they seek treatment, and that they are supported by informed business and clinical teams. 

Standards and mandates such as Health Level Seven (HL7), Fast Healthcare Interoperability Resources (FHIR), and the Centers for Medicare & Medicaid Services (CMS) Interoperability and Patient Access Final Rule are creating a more connected and data-driven healthcare ecosystem. Additionally, CMS price transparency regulations are empowering consumers to become more informed, active, and engaged patients. Price transparency and cost estimator tools have the potential to give organizations a competitive edge and drive brand loyalty by providing a transparent, proactive, personalized, and timely experience. 
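To make the standards above concrete, here is a minimal sketch of what a FHIR R4 Patient resource looks like on the wire – the kind of payload the CMS Interoperability and Patient Access rule requires plans to expose through standards-based APIs. This is an illustration, not anything from the Forrester report: the member-ID system URL and the patient details are hypothetical, and a real exchange would travel over a secured REST endpoint.

```python
import json

# A minimal FHIR R4 Patient resource. Field names follow the FHIR Patient
# specification; the identifier system URL and all values are hypothetical.
patient = {
    "resourceType": "Patient",
    "id": "example-member-001",
    "identifier": [{
        "system": "https://example-health-plan.com/member-id",  # hypothetical
        "value": "M123456",
    }],
    "name": [{"family": "Doe", "given": ["Jane"]}],
    "birthDate": "1980-04-12",
}

def resource_summary(resource: dict) -> str:
    """Render a short human-readable summary of a Patient resource."""
    name = resource["name"][0]
    full_name = " ".join(name["given"]) + " " + name["family"]
    return f'{resource["resourceType"]}/{resource["id"]}: {full_name}, born {resource["birthDate"]}'

# FHIR resources travel as JSON over REST, so round-tripping through
# json.dumps/json.loads mirrors what a server and client actually exchange.
wire_format = json.dumps(patient)
received = json.loads(wire_format)
print(resource_summary(received))  # Patient/example-member-001: Jane Doe, born 1980-04-12
```

Because every conforming system reads and writes this same resource shape, a payer’s Patient Access API, a provider’s EHR, and a consumer-facing app can all interpret the record without custom point-to-point translation – which is the practical payoff of the mandates discussed above.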

The most successful organizations will build a proper foundation that scales and supports successive mandates. Composable architecture offers a powerful, flexible approach that balances “best in breed,” fit-for-purpose solutions while bypassing unneeded, costly features or services. It’s vital to build trust in data and with consumers, paving the way for ubiquitous, fact-based decision making that supports health and enables relationships across the care continuum. 

Success in Action: Empowering Healthcare Consumers and Their Care Ecosystems With Interoperable Data 

Enable Caregivers and Care Teams 

As the population ages, caregivers play an increasingly important role in the healthcare journey, and their experience is distinct. They may continually move in and out of the caregiver role. It’s essential to understand and engage these vital partners, providing them with important tools and resources to support quality care.  

Clinical intelligence can provide HCOs with advanced insights into the needs of caregivers and care teams, helping clinical, operational, IT, digital, and marketing leaders design systems that support the health and efficacy of these important care providers.  

Integrated telehealth and remote monitoring have become essential to managing chronic conditions and an aging population. Intuitive, integrated digital tools and personalized messaging can help mitigate potential health barriers by proactively addressing concerns around transportation, costs, medication adherence, appointment scheduling, and more.  

A well-planned, well-executed strategy ideally supports access to care for all, creating a healthier and more-welcoming environment for team members to build trust, elevate consumer satisfaction, and drive higher-quality care.  

Success in Action: A Digital Approach to Addressing Health Equity 

Improve Operational Efficiencies for Care Teams 

HCO leaders are investing in advanced technologies and automations to modernize operations, streamline experiences, and unlock reliable insights.  

Clinical intelligence paired with intelligent automations can accelerate patient and member care for clinical and customer care teams, helping to alleviate stress on a workforce burdened with high rates of burnout.  

In its report, Forrester shares, “In Forrester’s Priorities Survey, 2024, 65% or more of healthcare business and technology professionals said that they expect their organization to significantly increase its investments in business insights and analytics, data and information management, AI, and business automation and robotics in the next 12 months.”  

It’s clear the U.S. healthcare industry stands on the cusp of a transformative era powered by advanced analytics and holistic business transformation. AI-driven automations can reduce administrative costs, while AI-enabled treatment plans offer hyper-personalized precision medicine. As technology continues to shape healthcare experiences, Felix Bradbury, Perficient senior solutions architect, shares his thoughts on the topic: 

“Trust is crucial in healthcare. Understanding how to make AI algorithms interpretable and ensuring they can provide transparent explanations of their decisions will be key to fostering trust among clinicians and patients.” 

AI can be a powerful enabler of business priorities. To power and scale effective use cases, HCOs are investing in core building blocks: a modern and secure infrastructure, well-governed data, and team training and enablement. A well-formed strategy that aligns key business needs with people, technology, and processes can turn data into a powerful tool that accelerates operational efficiency and business success, positioning you as an intelligent healthcare organization.  

Success in Action: Engaging Diverse Audiences As They Navigate Cancer Care 

Healthcare Leaders Turn To Us

Discover why we have been trusted by the 10 largest health systems and the 10 largest health insurers in the U.S. Explore our healthcare expertise and contact us to learn more. 

Certifications | A rocket fuel for growth https://blogs.perficient.com/2025/04/28/certifications-a-rocket-fuel-for-growth/ Mon, 28 Apr 2025

There’s no doubt that certifications can accelerate business growth in many ways. Of course, they are not the only factor, but they certainly play a big part for a partner-certified consultancy like Perficient. Certifications provide reliable technical validation that demonstrates expertise. In a competitive market, when a client is looking for a high degree of skill or competence, certifications serve as a differentiator in the “big tech ecosystem,” showing not only a commitment to quality standards but also a concern for keeping skills up to date.    

Surviving in the “big tech ecosystem”

In nature, the key to success in ecosystems is the interaction between the parts, linking them in balanced cycles where energy flows and nature flourishes.[1] Technology ecosystems are like natural ones: interconnected networks of platforms, applications, developers, partners, and users that collaborate to create greater value than any single part could achieve independently. As in nature, in today’s hyperconnected world the most successful organizations cannot afford to be isolated; interaction is the key to success, leveraged through collaboration, integration, and continuous evolution. At Perficient, we know that real impact is driven through connections among the parts, carefully cultivated to make a difference in the “big tech ecosystem.” Like living ecosystems, technology ecosystems stand out when their parts work together, thriving on diversity, interdependence, and continuous adaptation.

Creating value through a virtuous cycle

A fundamental part of tech ecosystems is how expertise is demonstrated. Certifications, when aligned with business projections, signal proficiency, elevating the business across multiple dimensions and fueling growth. At Perficient, we champion progress through certification investments aligned with our market positioning and target client needs, not only because they represent growth for everyone, but also because they vouch for our expertise. In this virtuous cycle, every time a colleague earns a certification, value is added for every part of the ecosystem.

Where true impact lies

As mentioned, certifications have a direct impact across multiple dimensions. They serve as market differentiators, improving positioning when clients are evaluating options and highlighting unique advantages that help an organization stand out from competitors. They can also expand service offerings, as the knowledge gained through certification can be leveraged into new service lines or used to enhance existing ones. They open doors to new co-selling opportunities and accelerate sales cycles, since capabilities are easy to verify – and when it comes to sales, they can definitely improve profit margins. They are also a powerful upskilling and reskilling tool for colleagues, with a direct impact on professional development and adaptability to the market, encouraging a willingness to accredit skills and to keep learning and evolving with the industry. They boost professional growth by promoting adherence to industry best practices through standardized methodologies, contributing to career advancement. All of this leads to operational excellence, with a direct effect on project outcomes: consistent quality frameworks, fewer implementation variations, streamlined solution development, and reduced rework costs.

Final Thoughts

In short, there’s no doubt that certifications are rocket fuel for growth. When aligned with business projections, they play a significant role in tech ecosystems, demonstrating expertise and acting as differentiators in the “big tech ecosystem.” Certifications not only accelerate business growth – delivering substantial, measurable value to clients and exceptional business results – but also have a direct impact on colleagues’ career development. In the end, they’re a win-win for everyone.

[1] Ecosystem – Wikipedia

Perficient Included in IDC Market Glance: Healthcare Provider Operational IT Solutions, 1Q25 https://blogs.perficient.com/2025/04/25/perficient-included-in-idc-market-glance-healthcare-provider-operational-it-solutions-1q25/ Fri, 25 Apr 2025

As technology continues to advance, patients and care teams expect to seamlessly engage with tools that support better health and accelerate progress. These developments demand the rapid, secure, scalable, and compliant sharing of data. 

By aligning enterprise and business goals with digital technology, healthcare organizations (HCOs) can activate strategies for transformative outcomes and improve experiences and efficiencies across the health journey. 

IDC Market Glance: Healthcare Provider Operational IT Solutions, 1Q25 

Perficient is proud to be included in the categories of IT Services and SI Services in the IDC Market Glance: Healthcare Provider Operational IT Solutions, 1Q25 report (doc #US52221325, March 2025). We believe our inclusion in this report’s newly introduced “Services” segmentation underscores our expertise in leveraging AI-driven automation and advanced analytics, optimizing technology investments, and navigating evolving industry challenges. 

IDC states, “This expansion reflects the industry’s shift toward outsourced expertise, scalable service models, and strategic partnerships to manage complex operational IT and infrastructure efficiently.” 

IDC defines IT Services as, “managed IT services, ensuring system reliability, cybersecurity, and infrastructure optimization. These solutions support healthcare provider transformation initiatives, helpdesk management, network monitoring, and compliance with healthcare IT regulations.” The SI Services category is defined by IDC as, “system integration services that help deploy technologies and connect disparate systems, including EHRs, RCM platforms, ERP solutions, and third-party applications to enhance interoperability, efficiency, automation, and compliance with industry standards.”  

Advanced Solutions for Data-Driven Success 

We imagine, engineer, and optimize scalable, reliable technologies and data, partnering with healthcare leaders to better understand consumer expectations and strategically align digital investments with business priorities.  

Our end-to-end professional services include: 

  • Digital transformation strategy: The healthcare industry’s rapid evolution requires attention in several areas – adopting new care models, capitalizing on disruptive technologies, and effecting regulatory, operational, financial, and organizational change. We equip HCOs to recognize and speed past potential hurdles in order to maximize ROI by making the most of technology, operational, and financial resources. 
  • Cloud-native environments: Cloud technology is the primary enabler of business transformation and outcomes-focused value. Investing in cloud allows HCOs to overcome limitations of legacy systems, improve stability, and reduce costs. It also leads to better solution quality, faster feature delivery, and encourages a culture of innovation. Our expert consultants tailor cloud solutions to unique business needs, empowering teams and fueling growth, intelligence, and long-term profitability. 
  • Hyper-scalable data infrastructures: We equip HCOs to maximize the value of information across the care ecosystem by uncovering the most meaningful, trustworthy data and enriching it with critical context so you can use it to answer difficult questions, power meaningful experiences, and automate smart decisions. Trusting data begins with having trust in the people, processes, and systems that source, move, transform, and manage that data. We partner to build data into a powerful, differentiating asset that can accelerate clinical, marketing, and operational excellence as information is exchanged across organizations, systems, devices, and applications. 
  • AI ecosystems: HCOs face mounting competition, financial pressures, and macro uncertainties. Enhance operations with innovative and intelligent AI and automation solutions that help you overcome complex challenges, streamline processes, and unlock new levels of productivity. Holistic business transformation and advanced analytics are front and center in this industry evolution, and generative AI (GenAI) and agentic AI have fundamentally shifted how organizations approach intelligence within digital systems. According to IDC, “GenAI will continue to redefine workflows, while agentic AI shows promise to drive real-time, responsive, and interpretive orchestration across operations.” Position yourself for success now and in the future with enhanced customer interactions, reduced operational costs, and data-driven decision-making powered by our AI expertise. 
  • Digital experiences: Digital-first care options are changing the face of healthcare experiences, bringing commerce-like solutions to consumers who search for and choose care that best fits their personal priorities and needs. We build high-impact experience strategies and put them in motion, so your marketing investments drive results that grow lasting relationships and support healthy communities. As the healthcare landscape continues to evolve – with organizational consolidations and new disruptors reshaping the marketplace – we help you proactively and efficiently attract and nurture prospective patients and caregivers as they make health decisions. 

We don’t just implement solutions; we create intelligent strategies that align technology with your key business priorities and organizational capabilities. Our approach goes beyond traditional data services. We create AI-ready intelligent ecosystems that breathe life into your data strategy and accelerate transformation. By combining technical excellence, global reach, and a client-centric approach, we’re able to drive business transformation, boost operational resilience, and enhance health outcomes. 

Success in Action: Illuminating a Clear Path to Care With AI-Enabled Search 

Empower Healthcare Experiences Through Innovative Technology 

Whether you want to redefine workflows, personalize care pathways, or revolutionize proactive health management, Perficient can help you boost efficiency and gain a competitive edge.  

We combine strategy, industry best practices, and technology expertise to deliver award-winning results for leading health systems: 

  • Business Transformation: Transform strategy into action: improve operations, lower costs, build operational resilience, and optimize care. 
  • Modernization: Provide quality, cost-effective tools and platforms that enable exceptional care. 
  • Data Analytics: Enable trusted data access and insight to clinical, operational, and financial teams across the healthcare ecosystem. 
  • Consumer Experience: Harness data and technology to drive optimal healthcare outcomes and experiences. 

Discover why we have been trusted by the 10 largest health systems and the 10 largest health insurers in the U.S. Explore our healthcare expertise and contact us to learn more.

Redwood is coming… https://blogs.perficient.com/2025/04/24/redwood-is-coming-or-is-it-already-here/ Thu, 24 Apr 2025

If you are a Game of Thrones fan, you are probably familiar with the “winter is coming” phrase.  When it comes to Oracle Fusion, the Redwood experience has been coming for years, but now it’s almost here.

Oracle is in the process of overhauling the whole Fusion suite with what it calls the “Redwood Experience.” The newly designed Redwood pages are not only responsive and more powerful than their predecessors, but also bring great capability to the table.

  • Redwood pages are built for the future. They are all AI-ready and some come with pre-built AI capabilities.
  • They are geared toward a “Journey Guide” concept, so enterprise-level software implementations are no longer full of “technical jargon.”
  • The new AI Studio and the Visual Studio give Oracle Fusion clients the ability to modify the application for their business needs.

How to Move Forward with the Redwood Experience

Adopting Redwood is not a straightforward task. With every quarterly release, Oracle adds more pages built on the Redwood design – but how do you adopt the Redwood experience and explore AI opportunities?

  1. First, deploy the setup screens where the Redwood experience is available.
  2. Second, review quarterly updates and decide which screens are mature enough to be deployed.
  3. Third, review whether the new design brings new functionality or lacks existing functionality. For instance, the Oracle Work Definition Redwood pages bring new functionality, whereas the newly designed Order Management pages do not yet support certain flows. That said, the Order Management screens bring a great deal when it comes to AI capabilities; if the “not yet available” features are not a business requirement, moving to the Redwood experience will bring efficiency in customer service and a much better user experience.
  4. Fourth, have a game plan to roll out at your own pace. With the cloud, you are in total control of how and when you roll out the SCM pages. According to Oracle, there is not yet a definitive timeframe for making the Redwood pages mandatory (as of 04/2025). Please note that some pages are already in play and some may already be mandatory.

 

 

User acceptance and adoption come with time, so the sooner the transition begins, the more successful the implementation will be. Perficient can help you with your transition from traditional Fusion or legacy on-prem applications to the SCM Redwood experience. When you are ready to take the first step and are looking for advice, contact us. Our strategy is to craft a path for our clients that makes the transition as seamless as possible for the user community and their support staff.

 

Redwood - Manage Manufacturer

The newly designed, modern-looking Manage Manufacturers Redwood Experience with built-in AI Assist

 

 

Below are the Supply Chain features Oracle has released from release 24D to 25B (2024 Q3 to 2025 Q2) for Inventory Management alone – and it is already an overwhelming list. Please stay tuned for our Redwood series, which will cover select features.

Inventory Management
24D
Create Guided Journeys for Redwood Pages in the Setup and Maintenance Work Area
Integrate Manufacturing and Maintenance Direct Work Order Transactions with Your Warehouse Management System
Redwood: Audit Receipt Accrual Clearing Balances Using a New User Experience
Redwood: Correct Receipts Using a Redwood Page
Redwood: Create an Interorganization Transfer Using a Mobile Device
Redwood: Create and Edit Accrual Cutoff Rules Using a New User Experience
Redwood: Create Cycle Counts Using a Redwood Page
Redwood: Create Receipt Returns Using a Redwood Page
Redwood: Create Unordered Receipts Using a Redwood Page
Redwood: Inspect Receipts Using a Redwood Page
Redwood: Inspect Received Goods Using a Mobile Device
Redwood: Manage Inbound Shipments and Create ASN or ASBN Using a Redwood Page
Redwood: Review and Clear Open Receipt Accrual Balance Using a New User Experience
Redwood: Review Receipt Accounting Distributions Using a New User Experience
Redwood: Review Receipt Accounting Exceptions using a New User Experience
Redwood: View Item Quantities Using a Redwood Page
Redwood: View Lot Attributes in Mobile Inventory Transactions
Redwood: View Receipts and Receipt Returns in Supplier Portal Using a Redwood Page
Redwood: View the Inventory Management (New) Tile as Inventory Management (Mobile)
Replenish Locations Using Radio Frequency Identification
25A
Capture Recall Notices from the U.S. Food and Drug Administration Curated and Communicated by Oracle
Collaborate with Notes When Reviewing Open Accrual Balances
Complete Recall Containment Tasks Bypassing the Recall Count And Disposition
Create a Flow Manufacturing Work Definition Associated with a Production Line
Manage Shipping Profile Options
Redwood: Approve Physical Inventory Adjustments Using a Redwood Page
Redwood: Compare Standard Costs Using a New User Experience
Redwood: Create and Update Cost Scenarios Using a New User Experience
Redwood: Create and Update Standard Costs Using a New User Experience
Redwood: Create Manual Count Schedules Using a Redwood Page
Redwood: Create Nudges to Notify Users of Item Shortage and Item Stockout
Redwood: Define Pull Sequences and Generate Supplier and Intraorganization Kanban Cards
Redwood: Enhanced Costed BOM Report with Indented View of Lower-Level Subassembly Details
Redwood: Enter Receipt Quantity by Distribution in the Responsive Self-Service Receiving Application
Redwood: Manage ABC Classes, Classification Sets, and Assignment Groups Using a Redwood Page
Redwood: Manage Account Aliases Using a Redwood Page
Redwood: Manage and Create Physical Inventories Using a Redwood Page
Redwood: Manage Consigned Inventory Using a Redwood Page
Redwood: Manage Consumption Rules Using a Redwood Page
Redwood: Manage Interorganization Parameters Using a Redwood Page
Redwood: Manage Intersubinventory Parameters Using a Redwood Page
Redwood: Manage Inventory Transaction Reasons Using a Redwood Page
Redwood: Manage Lot and Serial Attribute Mappings Using a Redwood Page
Redwood: Manage Lot Expiration Actions Using a Redwood Page
Redwood: Manage Lot Grades Using a Redwood Page
Redwood: Manage Movement Requests Using a Redwood Page
Redwood: Manage Pick Slip Grouping Rules Using a Redwood Page
Redwood: Manage Picking Rules and Picking Rule Assignments Using a Redwood Page
Redwood: Manage Receiving Parameters Using a Redwood Page
Redwood: Manage Shipment Lines Using a Redwood Page
Redwood: Manage Shipments Using a Redwood Page
Redwood: Manage Transfer Orders Using a Redwood Page
Redwood: Perform Inventory Transactions Directly from Item Quantities
Redwood: Put Away Receipts Using a Redwood Page
Redwood: Receive Expected Shipments Using a Redwood Page
Redwood: Receive Multiple Lines Together in Responsive Self-Service Receiving as a Casual Receiver
Redwood: Receive Work Order Destination Purchases Using the Responsive Self-Service Receiving Application
Redwood: Record Physical Inventory Tags Using a Mobile Device
Redwood: Record Physical Inventory Tags Using a Spreadsheet
Redwood: Review Completed Transactions Using a Redwood Page
Redwood: Review Consumption Advices Using a Redwood Page
Redwood: Review Standard Costs Import Exceptions Using a New User Experience
Redwood: SCM AI Agents
Redwood: Search and View Supplier ASN in Receiving
Redwood: Signal and Track Supplier and Intraorganization Kanban Replenishment
Redwood: Use Descriptive Flexfields and Attachments in Mobile Inventory
Redwood: Use Redwood Style in Movement Request Approvals Notification
Redwood: View Item Supply and Demand Using a Redwood Page
Redwood: View Rollup Costs Using a New User Experience
Redwood: View Scenario Exceptions Using a New User Experience
Summarize and Categorize the Manual Accrual Clearing Transactions for a Period Using Generative AI
25B
Analyze Kanban Activity Using Oracle Transactional Business Intelligence and Business Intelligence Cloud Connector
Define Pull Sequences and Generate Production and Interorganization Kanban Cards
Define Time Fence to Locate Recalled Parts and Withdraw Irrelevant Recalls
Implement a Temporary Kanban Card for Short-Term Demand Surge
Manage and Track Supplier Kanban Cards Through the Supplier Portal
Receive FYI Notifications when a Recall Notice is Ingested
Redwood: Accounting Overhead Rules
Redwood: Analyze Gross Margin
Redwood: Capture Lot and Serial Numbers with a Streamlined Flow for Mobile Cycle Counting
Redwood: Confirm Picks Using a Mobile Device with an Improved User Experience
Redwood: Confirm Picks Using a Redwood Page
Redwood: Cost Accounting Landing Page
Redwood: Cost Accounting Periods
Redwood: Create and Edit Cost Adjustments
Redwood: Create and Edit Cost Analysis Groups Using a New User Experience
Redwood: Create and Edit Cost Books Using a New User Experience
Redwood: Create and Edit Cost Component Mappings Using a New User Experience
Redwood: Create and Edit Cost Elements Using a New User Experience
Redwood: Create and Edit Cost Organization Relationships Using a New User Experience
Redwood: Create and Edit Cost Organizations Using a New User Experience
Redwood: Create and Edit Cost Profiles Using a New User Experience
Redwood: Create and Edit Default Cost Profiles Using a New User Experience
Redwood: Create and Edit Item Cost Profiles Using a New User Experience
Redwood: Create and Edit Overhead Cost Element Groups Using a New User Experience
Redwood: Create and Edit Overhead Expense Pools Using a New User Experience
Redwood: Create and Edit Valuation Structures Using a New User Experience
Redwood: Create and Edit Valuation Units Using a New User Experience
Redwood: Create Cost Accounting Distributions
Redwood: Enter Miscellaneous Transactions on a Mobile Device Using a Streamlined Flow
Redwood: Implement Cost Accounting Using Quick Setup
Redwood: Manage Cycle Count Sequences Using a Redwood Page
Redwood: Manage Default Packing Configurations Using a Redwood Page
Redwood: Manage Inventory Business Event Configurations Using a Redwood Page
Redwood: Manage Material Statuses Using a Redwood Page
Redwood: Manage Pending Transactions Using a Redwood Page
Redwood: Manage Pick Wave Release Rules Using a Redwood Page
Redwood: Manage Release Sequence Rules Using a Redwood Page
Redwood: Manage Reservation Interface Records Using a Spreadsheet
Redwood: Manage Reservations Using a Redwood Page
Redwood: Manage Ship Confirm Rules Using a Redwood Page
Redwood: Manage Shipment Interface Records Using a Spreadsheet
Redwood: Manage Shipping Cost Types Using a Redwood Page
Redwood: Manage Shipping Document Job Set Rules Using a Redwood Page
Redwood: Manage Shipping Document Output Preferences Using a Redwood Page
Redwood: Manage Shipping Exceptions Using a Redwood Page
Redwood: Manage Shipping Parameters Using a Redwood Page
Redwood: Manage Shipping Transaction Correction Records Using a Spreadsheet
Redwood: Manage Transaction Sources and Types Using a Redwood Page
Redwood: Manage Transportation Schedules Using a Redwood Page
Redwood: Manage Units of Measure Usages Using a Redwood Page
Redwood: Receive Multiple Distribution Purchase Orders on the Expected Shipment Lines and Received Lines Pages
Redwood: Record PAR Counts on a Mobile Device Using a Streamlined Flow
Redwood: Review and Approve Item Cost Profiles
Redwood: Review Consigned Inventory in Supplier Portal Using a Redwood Page
Redwood: Review Consumption Advice in Supplier Portal Using a Redwood Page
Redwood: Review Cost Accounting Distributions
Redwood: Review Cost Accounting Processes
Redwood: Review Inventory Valuation
Redwood: Review Item Costs
Redwood: Review Maintenance Work Order Costs
Redwood: Review Standard Purchase Cost Variances
Redwood: Review Work Order Costs
Redwood: Standard Cost Overhead Absorption Rules
Redwood: Use a Redwood Template for Automatic Debit Memo Failure Notifications
Redwood: Use a Redwood Template for Confirm Receipt Notifications
Redwood: Use a Redwood Template for Create ASN Notifications
Redwood: Use Additional Pick Slip Grouping Rules Criteria
Redwood: Use an Improved Experience for Mobile Inventory Transactions
Redwood: Use Improved Capabilities in the Responsive Self-Service Receiving Application
Redwood: Use Improved Search Capabilities on Expected Shipment Lines Page
Redwood: Use Improved Sorting of Source Picking Locations During Pick Confirm
Redwood: Use Locators on Transfer Orders
Redwood: Use Saved Searches on Redwood Pages
Redwood: Use the Improved Inventory Management Landing Page
Redwood: View Additional Information When Creating a Receipt Using a Mobile Device
Redwood: View Additional Information When Performing a Subinventory Transfer Using a Mobile Device
Redwood: View Electronic Records Using a Redwood Page

 

]]>
https://blogs.perficient.com/2025/04/24/redwood-is-coming-or-is-it-already-here/feed/ 0 380523
Meet Perficient at Data Summit 2025 https://blogs.perficient.com/2025/04/22/meet-perficient-at-data-summit-2025/ https://blogs.perficient.com/2025/04/22/meet-perficient-at-data-summit-2025/#respond Tue, 22 Apr 2025 18:39:18 +0000 https://blogs.perficient.com/?p=380394

Data Summit 2025 is just around the corner, and we’re excited to connect, learn, and share ideas with fellow leaders in the data and AI space. As the pace of innovation accelerates, events like this offer a unique opportunity to engage with peers, discover groundbreaking solutions, and discuss the future of data-driven transformation. 

We caught up with Jerry Locke, a data solutions expert at Perficient, who’s not only attending the event but also taking the stage as a speaker. Here’s what he had to say about this year’s conference and why it matters: 

Why is this event important for the data industry? 

“Anytime you can meet outside of the screen is always a good thing. For me, it’s all about learning, networking, and inspiration. The world of data is expanding at an unprecedented pace. Global data volume is projected to reach over 180 zettabytes (or 180 trillion gigabytes) by 2025—tripling from just 64 zettabytes in 2020. That’s a massive jump. The question we need to ask is: What are modern organizations doing to not only secure all this data but also use it to unlock new business opportunities? That’s what I’m looking to explore at this summit.” 

What topics do you think will be top-of-mind for attendees this year? 

“I’m especially interested in the intersection of data engineering and AI. I’ve been lucky to work on modern data teams where we’ve adopted CI/CD pipelines and scalable architectures. AI has completely transformed how we manage data pipelines—mostly for the better. The conversation this year will likely revolve around how to continue that momentum while solving real-world challenges.” 

Are there any sessions you’re particularly excited to attend? 

“My plan is to soak in as many sessions on data and AI as possible. I’m especially curious about the use cases being shared, how organizations are applying these technologies today, and more importantly, how they plan to evolve them over the next few years.” 

What makes this event special for you, personally? 

“I’ve never been to this event before, but several of my peers have, and they spoke highly of the experience. Beyond the networking, I’m really looking forward to being inspired by the incredible work others are doing. As a speaker, I’m honored to be presenting on serverless engineering in today’s cloud-first world. I’m hoping to not only share insights but also get thoughtful feedback from the audience and my peers. Ultimately, I want to learn just as much from the people in the room as they might learn from me.” 

What’s one thing you hope listeners take away from your presentation? 

“My main takeaway is simple: start. If your data isn’t on the cloud yet, start that journey. If your engineering isn’t modernized, begin that process. Serverless is a key part of modern data engineering, but the real goal is enabling fast, informed decision-making through your data. It won’t always be easy—but it will be worth it.

I also hope that listeners understand the importance of composable data systems. If you’re building or working with data systems, composability gives you agility, scalability, and future-proofing. So instead of a big, all-in-one data platform (monolith), you get a flexible architecture where you can plug in best-in-class tools for each part of your data stack. Composable data systems let you choose the best tool for each job, swap out or upgrade parts without rewriting everything, and scale or customize workflows as your needs evolve.” 

Don’t miss Perficient at Data Summit 2025. A global digital consultancy, Perficient is committed to partnering with clients to tackle complex business challenges and accelerate transformative growth. 

]]>
https://blogs.perficient.com/2025/04/22/meet-perficient-at-data-summit-2025/feed/ 0 380394
Part 1 – Marketing Cloud Personalization and Mobile Apps: Functionality 101 https://blogs.perficient.com/2025/04/21/part-1-marketing-cloud-personalization-and-mobile-apps-functionality-101/ https://blogs.perficient.com/2025/04/21/part-1-marketing-cloud-personalization-and-mobile-apps-functionality-101/#comments Mon, 21 Apr 2025 21:45:01 +0000 https://blogs.perficient.com/?p=379201

Over the past three years working with Marketing Cloud Personalization (formerly Interaction Studio), I’ve always been intrigued by the Mobile icon and its capabilities. A few months ago, I decided to take a hands-on approach by developing my own application to explore this functionality firsthand, testing its implementation and understanding its real-world impact. And that is what this blog is about.

The Overall Process

The overall steps of the Marketing Cloud Personalization mobile integration go as follows:

  1. Have an Application (Understatement).
  2. Have access to the app project and code.
  3. Integrate the Evergage SDK library into the app.
  4. Create a Mobile App inside the Personalization UI.
  5. Create a connection between the app and the Personalization dataset.
  6. Track views and actions of the user in the app (code implementation).
  7. Publish and track campaign actions and push notifications.

That’s all… easy, right? In this blog we will review how to set up the connection between MCP and the mobile app and how to create a first interaction (step 1 and part of step 6).

For this demo, I developed an iOS application using the Swift programming language. While I’m not yet an expert, I’ve been steadily learning how to navigate Xcode and implement functionality using Swift. This project has been a great opportunity to expand my skills in iOS development and better understand the tools and frameworks available within Apple’s ecosystem.

Integrate the Evergage SDK in the App

The iOS app I created is very simple (for now): just a label, a button, and an input field. The user types something in the input field, then clicks the button, and the data is sent to the label to be shown.

Iphone 16 App Simulator View

So, we need to add the Evergage SDK to the app project. Download the Evergage iOS SDK (v1.4.1), unzip it, and open the static folder. There, the Evergage.xcframework is the one we will use. When the folder is ready, copy it into your app. You should have something like this:

Evergage Framework FolderMobileapp Folder Structure

After you add the folder, build the app again with Command + B.

Now we need to validate that the framework is there, so go to Target -> General -> Frameworks, Libraries and Embedded Content. You should see something like this; since I’m using the static folder, the Do Not Embed setting is OK.

General Information In Xcode

Validate that the Framework Search Paths setting contains the path where the framework was copied. This step may need to be done manually, since sometimes the path doesn’t appear. Build the app again to confirm that no errors appear.

Framework Search Paths

To validate that this works, go to AppDelegate.swift and type import Evergage. If no errors appear, you are good to go 🙂

Import Evergage View

 

Create a Mobile App Inside Personalization

Next, we have to create the Native App inside the Personalization dataset of your choice.

Hover over Mobile and click Add Native App

Mpc Mobile View

Fill in the App Name and Bundle ID. For the Bundle ID, go to Target > General > Identity.

Add Native App

You will end up with something like this:

Demoapp Mpc View

Create the Connection to the Dataset

In AppDelegate.swift, we will do the equivalent of adding the JavaScript beacon to a web page.

  1. First, we need to import the Evergage class reference. This allows starting the Marketing Cloud Personalization iOS SDK. Our tracking interactions should now be done inside UIViewController-inherited classes.
  2. Change didFinishLaunchingWithOptions to willFinishLaunchingWithOptions.
  3. Inside the application function we do the following:
    1. Create a singleton instance of Evergage. A singleton is a creational design pattern that ensures a class has only one instance while providing a global access point to it, which can be used to coordinate actions across our app.
    2. Set the user ID. For this, we set evergage.userId using evergage.anonymousId, but if we already have the email or an ID for the user, we should pass it right away.
    3. Start the Evergage configuration. Here we pass the Personalization account ID and dataset ID. Other values set are usePushNotifications and useDesignMode. The latter helps us connect to the Personalization web console for the action mapping screen.

 

// Other imports
import Evergage

@main
class AppDelegate: UIResponder, UIApplicationDelegate {

    func application(_ application: UIApplication, willFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {

        // Create a singleton instance of Evergage
        let evergage = Evergage.sharedInstance()

        // Set the user ID as anonymous
        evergage.userId = evergage.anonymousId

        // Start the Evergage configuration with our dataset information
        evergage.start { (clientConfigurationBuilder) in
            clientConfigurationBuilder.account = "ACCOUNT_ID"
            clientConfigurationBuilder.dataset = "DATASET_ID"
            // If we want to use push notification campaigns
            clientConfigurationBuilder.usePushNotifications = true
            // Allow a user-initiated gesture to connect to the Personalization web console for action mapping screens
            clientConfigurationBuilder.useDesignMode = true
        }

        // Override point for customization after application launch.
        return true
    }
}

 

 

If we launch the app at this very moment, we will see the following inside Marketing Cloud Personalization:

Eventstream Report Interaction Action Description

This is very good; with that we are certain it’s working and sending the information to Marketing Cloud Personalization.

Track Actions

To track a screen we can use the evergageScreen property, which is part of the EVGScreen and EVGContext classes for tracking and personalization. This is possible when the app uses a UIViewController for each of the screens or pages we have.

class ViewController: UIViewController {

    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view.
        trackScreen()
    }

    func trackScreen() {
        evergageScreen?.trackAction("Main Screen")
    }
}

 

Interaction Action Forbutton

If we want to track a button click, we can do something similar, for example:

@IBAction func handleClick(_ sender: UIButton) {
    labelText.text = inputField.text
    evergageScreen?.trackAction("Button clicked")
}

In this code, each time the user clicks the button, the handleClick function runs: inputField.text is assigned to labelText.text, then trackAction is called and the action is sent to our dataset.

Wrapping Up Part 1: What’s next?

That wraps up the first part of this tutorial! We’ve covered the basics of adding the Personalization SDK to a mobile iOS application, creating a Mobile App within Personalization, and doing very basic action tracking in a view. In Part 2, we’ll dive into tracking more complex actions like view item and view item detail, which are part of the catalog object actions for tracking items.

]]>
https://blogs.perficient.com/2025/04/21/part-1-marketing-cloud-personalization-and-mobile-apps-functionality-101/feed/ 5 379201
What does SFO have to do with Oracle? https://blogs.perficient.com/2025/04/21/what-does-sfo-have-to-do-with-oracle/ https://blogs.perficient.com/2025/04/21/what-does-sfo-have-to-do-with-oracle/#respond Mon, 21 Apr 2025 10:33:06 +0000 https://blogs.perficient.com/?p=380320

Isn’t SFO an airport? It’s the airport one would travel through if the destination were Oracle’s Redwood Shores campus. SFO is widely known as the initialism for San Francisco International Airport, so the answer would be correct if this question were posed in that context. In Oracle Fusion, however, SFO stands for Supply Chain Financial Orchestration. Based on what it does, we cannot call it an airport, but it sure is a control tower for financial transactions.

As companies expand their presence across countries and continents through mergers and acquisitions or natural growth, it becomes inevitable that they transact across borders and produce intercompany financial transactions.

Supply Chain Financial Orchestration (SFO) is where Oracle Fusion handles those transactions. The material may move one way, but for legal or financial reasons the financial flow can follow a different path.

A Typical Scenario

A Germany-based company sells to its EU customers from its Berlin office, but ships from its warehouses in New Delhi and Beijing.

Global

Oracle Fusion SFO takes care of all those transactions and as transactions are processed in Cost Management, financial trade transactions are created, and corporations can see their internal margins, intercompany accounting, and intercompany invoices.

Oh wait, the financial orchestration doesn’t have to be across countries only.  What if a corporation wants to measure its manufacturing and sales operations profitability?  Supply Chain Financial Orchestration is there for you.

In short, SFO is a tool within the Supply Chain Management offering that helps create intercompany trade transactions for various business cases.

Contact Mehmet Erisen at Perficient for more insight into this functionality, and into how Perficient and Oracle Fusion Cloud can digitalize and modernize your ERP platform.

www.oracle.com

www.perficient.com

]]>
https://blogs.perficient.com/2025/04/21/what-does-sfo-have-to-do-with-oracle/feed/ 0 380320
Roeslein and Associates goes live with Oracle Project Driven Supply Chain https://blogs.perficient.com/2025/04/21/roeslein-and-associates-goes-live-with-oracle-project-driven-supply-chain/ https://blogs.perficient.com/2025/04/21/roeslein-and-associates-goes-live-with-oracle-project-driven-supply-chain/#respond Mon, 21 Apr 2025 10:20:05 +0000 https://blogs.perficient.com/?p=368833

Roeslein & Associates 

Business Challenge + Opportunity 

Replaced disparate and outdated legacy systems with Oracle Fusion Cloud Manufacturing at a well-established manufacturing company. We implemented a scalable Fusion solution, including Project Driven Supply Chain (PDSC) and the full Financial and Supply Chain Management suites, to enable Roeslein to execute and extend their business processes globally.

The challenge in manufacturing was to set standard manufacturing processes to fulfill highly customized demand originating from their customers. In addition, Perficient designed a Supply Chain Data Architecture to support the functionality of the solution. 

Achievements

  • Created a Global Solution Template to be used worldwide
  • Redesigned the Enterprise Structure to enable Roeslein to track profits in different business units
  • Defined processes to execute standard manufacturing processes for custom and highly flexible manufacturing demand
  • Implemented Project Driven Supply Chain, including Inventory, Manufacturing, Order Management, Procurement, and Cost Management
  • Implemented solutions to support aftermarket part orders in addition to manufacturing orders
  • Designed two integrations between Fusion and UKG to support labor capture in Manufacturing and Projects
  • Built an integration between Roeslein’s eCommerce platform and Fusion to support their Aftermarket business

 

Contact Mehmet Erisen at Perficient for more insight into this phenomenal achievement. Congratulations to Roeslein & Associates and their entire staff! 

]]>
https://blogs.perficient.com/2025/04/21/roeslein-and-associates-goes-live-with-oracle-project-driven-supply-chain/feed/ 0 368833
How the Change to TLS Certificate Lifetimes Will Affect Sitecore Projects (and How to Prepare) https://blogs.perficient.com/2025/04/18/how-the-change-to-tls-certificate-lifetimes-will-affect-sitecore-projects-and-how-to-prepare/ https://blogs.perficient.com/2025/04/18/how-the-change-to-tls-certificate-lifetimes-will-affect-sitecore-projects-and-how-to-prepare/#respond Fri, 18 Apr 2025 13:54:17 +0000 https://blogs.perficient.com/?p=380286

TLS certificate lifetimes are being significantly reduced over the next few years as part of an industry-wide push toward greater security and automation. Here’s the phased timeline currently in place:

  • Now through March 15, 2026: Maximum lifetime is 398 days

  • Starting March 15, 2026: Reduced to 200 days

  • Starting March 15, 2027: Further reduced to 100 days

  • Starting March 15, 2029: Reduced again to just 47 days

For teams managing Sitecore implementations, this is more than a policy shift—it introduces operational urgency. As certificates begin expiring more frequently, any reliance on manual tracking or last-minute renewals could result in costly downtime or broken integrations.

If your Sitecore environment includes secure endpoints, custom domains, or external integrations, now is the time to assess your certificate strategy and move toward automation.

Why This Matters for Sitecore

Sitecore projects often involve:

  • Multiple environments (development, staging, production) with different certificates

  • Custom domains or subdomains used for CDNs, APIs, headless apps, or marketing campaigns

  • Third-party integrations that require secure connections

  • Marketing and personalization features that rely on seamless uptime

A single expired certificate can lead to downtime, loss of customer trust, or failed integrations—any of which could severely impact your digital experience delivery.

Key Risks of Shorter TLS Lifetimes

  • Increased risk of missed renewals if teams rely on manual tracking

  • Broken environments due to expired certs in Azure, IIS, or Kubernetes configurations

  • Delayed deployments when certificates must be re-issued last minute

  • SEO and trust damage if browsers start flagging your site as insecure

How to Prepare Your Sitecore Project Teams

To stay ahead of the TLS certificate lifecycle changes, here are concrete steps you should take:

1. Inventory All TLS Certificates

  • Audit all environments and domains using certificates

  • Include internal services, custom endpoints, and non-production domains

  • Use a centralized tracking tool (e.g., Azure Key Vault, HashiCorp Vault, or a certificate management platform)

2. Automate Certificate Renewals

  • Wherever possible, switch to automated certificate issuance and renewal

  • Use services like:

    • Azure App Service Managed Certificates

    • Let’s Encrypt with automation scripts

    • ACME protocol integrations for Kubernetes

  • For Azure-hosted Sitecore instances, leverage Key Vault and App Gateway integrations

3. Establish Certificate Ownership

  • Assign clear ownership of certificate management per environment or domain

  • Document who is responsible for renewals and updates

  • Add certificate health checks to your DevOps dashboards

4. Integrate Certificate Checks into CI/CD Pipelines

  • Validate certificate validity before deployments

  • Fail builds if certificates are nearing expiration

  • Include certificate management tasks as part of environment provisioning
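The pipeline checks above can be sketched as a short script. This is an illustrative example using only Python's standard library, not an official Sitecore or Azure tool; the host name and the 30-day threshold are placeholder assumptions you would tune to your own renewal cadence.

```python
import socket
import ssl
from datetime import datetime, timezone

def days_until_expiry(not_after: str) -> float:
    """Days remaining before a cert's notAfter timestamp, e.g. 'May  9 00:00:00 2027 GMT'."""
    expires = ssl.cert_time_to_seconds(not_after)  # parses the GMT date format used in certs
    return (expires - datetime.now(timezone.utc).timestamp()) / 86400

def check_host(host: str, threshold_days: int = 30) -> bool:
    """Fetch the certificate served on host:443 and report whether it outlives the threshold."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, 443), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    return days_until_expiry(cert["notAfter"]) > threshold_days
```

In a CI step you might call `sys.exit(0 if check_host("www.example.com") else 1)` for each domain in your certificate inventory, so the build fails before an expiring certificate reaches production. As lifetimes shrink toward 47 days, the threshold will need to tighten accordingly.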

5. Educate Your Team

  • Hold knowledge-sharing sessions with developers, infrastructure engineers, and marketers

  • Make sure everyone understands the impact of expired certificates on the Sitecore experience

6. Test Expiry Scenarios

  • Simulate certificate expiry in non-production environments

  • Monitor behavior in Sitecore XP and XM environments, including CD and CM roles

  • Validate external systems (e.g., CDNs, integrations, identity providers) against cert failures

Final Thoughts

TLS certificate management is no longer a “set it and forget it” task. With shorter lifetimes becoming the norm, proactive planning is essential to avoid downtime and ensure secure, uninterrupted experiences for your users.

Start by auditing your current certificates and work toward automating renewals. Make certificate monitoring part of your DevOps practice, and ensure your Sitecore teams are aware of the upcoming changes.

Action Items for This Week:

  • Identify all TLS certificates in your Sitecore environments

  • Document renewal dates and responsible owners

  • Begin automating renewals for at least one domain

  • Review Azure and Sitecore documentation for certificate integration options

]]>
https://blogs.perficient.com/2025/04/18/how-the-change-to-tls-certificate-lifetimes-will-affect-sitecore-projects-and-how-to-prepare/feed/ 0 380286
Security Best Practices in Sitecore XM Cloud https://blogs.perficient.com/2025/04/16/security-best-practices-in-sitecore-xm-cloud/ https://blogs.perficient.com/2025/04/16/security-best-practices-in-sitecore-xm-cloud/#respond Wed, 16 Apr 2025 23:45:38 +0000 https://blogs.perficient.com/?p=380233

Securing your Sitecore XM Cloud environment is critical to protecting your content, your users, and your brand. This post walks through key areas of XM Cloud security, including user management, authentication, secure coding, and best practices you can implement today to reduce your security risks.

We’ll also take a step back to look at the Sitecore Cloud Portal—the central control panel for managing user access across your Sitecore organization. Understanding both the Cloud Portal and XM Cloud’s internal security tools is essential for building a strong foundation of security.


Sitecore Cloud Portal User Management: Centralized Access Control

The Sitecore Cloud Portal is the gateway to managing user access across all Sitecore DXP tools, including XM Cloud. Proper setup here ensures that only the right people can view or change your environments and content.

Organization Roles

Each user you invite to your Sitecore organization is assigned an Organization Role, which defines their overall access level:

  • Organization Owner – Full control over the organization, including user and app management.

  • Organization Admin – Can manage users and assign app access, but cannot assign/remove Owners.

  • Organization User – Limited access; can only use specific apps they’ve been assigned to.

Tip: Assign the “Owner” role sparingly—only to those who absolutely need full administrative control.

App Roles

Beyond organization roles, users are granted App Roles for specific products like XM Cloud. These roles determine what actions they can take inside each product:

  • Admin – Full access to all features of the application.

  • User – More limited, often focused on content authoring or reviewing.

Managing Access

From the Admin section of the Cloud Portal, Organization Owners or Admins can:

  • Invite new team members and assign roles.

  • Grant access to apps like XM Cloud and assign appropriate app-level roles.

  • Review and update roles as team responsibilities shift.

  • Remove access when team members leave or change roles.

Security Tips:

  • Review user access regularly.

  • Use the least privilege principle—only grant what’s necessary.

  • Enable Multi-Factor Authentication (MFA) and integrate Single Sign-On (SSO) for extra protection.


XM Cloud User Management and Access Rights

Within XM Cloud itself, there’s another layer of user and role management that governs access to content and features.

Key Concepts

  • Users: Individual accounts representing people who work in the XM Cloud instance.

  • Roles: Collections of users with shared permissions.

  • Domains: Logical groupings of users and roles, useful for managing access in larger organizations.

Recommendation: Don’t assign permissions directly to users—assign them to roles instead for easier management.

Access Rights

Permissions can be set at the item level for things like reading, writing, deleting, or publishing. Access rights include:

  • Read

  • Write

  • Create

  • Delete

  • Administer

Each right can be set to:

  • Allow

  • Deny

  • Inherit
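To make the Allow/Deny/Inherit idea concrete, here is a simplified model (an illustrative sketch, not Sitecore's actual security engine): walking up from the item, the nearest explicit setting wins, and if nothing is set anywhere in the chain, access is denied by default.

```python
from typing import Optional

def resolve_right(chain: list[Optional[str]]) -> bool:
    """Resolve one access right for one item.

    `chain` lists the setting on the item and then each ancestor up to the
    root: "allow", "deny", or None for Inherit. The first explicit setting
    encountered wins; an entirely inherited chain resolves to denied.
    """
    for setting in chain:
        if setting == "deny":
            return False
        if setting == "allow":
            return True
    return False  # nothing explicit anywhere: deny by default
```

For example, an item set to Inherit under a parent set to Allow resolves to granted, while an explicit Deny on the item overrides an Allow higher up.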

Best Practices

  • Follow the Role-Based Access Control (RBAC) model.

  • Create custom roles to reflect your team’s structure and responsibilities.

  • Audit roles and access regularly to prevent privilege creep.

  • Avoid modifying default system users—create new accounts instead.


Authentication and Client Credentials

XM Cloud supports robust authentication mechanisms to control access between services, deployments, and repositories.

Managing Client Credentials

When integrating external services or deploying via CI/CD, you’ll often need to authenticate through client credentials.

  • Use the Sitecore Cloud Portal to create and manage client credentials.

  • Grant only the necessary scopes (permissions) to each credential.

  • Rotate credentials periodically and revoke unused ones.

  • Use secure secrets management tools to store client IDs and secrets outside of source code.

For Git and deployment pipelines, connect XM Cloud environments to your repository using secure tokens and limit access to specific environments or branches when possible.
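One common way to keep secrets out of source code is to inject them through environment variables populated from your CI system's secret store. A minimal sketch; the variable names below are placeholders, not an official Sitecore convention.

```python
import os

def load_client_credentials() -> tuple[str, str]:
    """Read client credentials from the environment instead of source code.

    The XMCLOUD_* names are illustrative; use whatever names your pipeline's
    secret store injects.
    """
    client_id = os.environ.get("XMCLOUD_CLIENT_ID", "")
    client_secret = os.environ.get("XMCLOUD_CLIENT_SECRET", "")
    if not client_id or not client_secret:
        raise RuntimeError(
            "Set XMCLOUD_CLIENT_ID and XMCLOUD_CLIENT_SECRET in the pipeline's "
            "secret store; never commit them to source control."
        )
    return client_id, client_secret
```

Failing fast with a clear error when a secret is missing is deliberate: it surfaces misconfiguration at deploy time rather than as a confusing authentication failure later.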


Secure Coding and Data Handling

Security isn’t just about who has access—it’s also about how your code and data behave in production.

Secure Coding Practices

  • Sanitize all inputs to prevent injection attacks.

  • Avoid exposing sensitive information in logs or error messages.

  • Use HTTPS for all external communications.

  • Validate data both on the client and server sides.

  • Keep dependencies up to date and monitor for vulnerabilities.
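The first two points above can be illustrated with a small sketch: escape untrusted text before it is rendered or logged, and validate identifiers against an allow-list rather than trying to enumerate bad characters. The helper names here are hypothetical, shown only to demonstrate the principle.

```python
import html
import re

def sanitize_comment(raw: str, max_len: int = 500) -> str:
    """Trim, length-limit, and HTML-escape untrusted text before output."""
    return html.escape(raw.strip()[:max_len], quote=True)

def is_valid_slug(value: str) -> bool:
    """Allow-list validation: lowercase letters, digits, and hyphens only."""
    return re.fullmatch(r"[a-z0-9-]{1,64}", value) is not None
```

The same validation should run on the server even if the client already checks it, since client-side checks can be bypassed.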

Data Privacy and Visitor Personalization

When using visitor data for personalization, be transparent and follow data privacy best practices:

  • Explicitly define what data is collected and how it’s used.

  • Give visitors control over their data preferences.

  • Avoid storing personally identifiable information (PII) unless absolutely necessary.


Where to Go from Here

Securing your XM Cloud environment is an ongoing process that involves team coordination, regular reviews, and constant vigilance. Here’s how to get started:

  • Audit your Cloud Portal roles and remove unnecessary access.

  • Establish a role-based structure in XM Cloud and limit direct user permissions.

  • Implement secure credential management for deployments and integrations.

  • Train your developers on secure coding and privacy best practices.

The stronger your security practices, the more confidence you—and your clients—can have in your digital experience platform.

]]>
https://blogs.perficient.com/2025/04/16/security-best-practices-in-sitecore-xm-cloud/feed/ 0 380233
Countdown to the Kibo Connect Client Summit 2025 https://blogs.perficient.com/2025/04/16/countdown-to-the-kibo-connect-client-summit-2025/ https://blogs.perficient.com/2025/04/16/countdown-to-the-kibo-connect-client-summit-2025/#respond Wed, 16 Apr 2025 17:54:16 +0000 https://blogs.perficient.com/?p=380181

Our trusted Unified Commerce Platform partner, Kibo, is gearing up to host the Kibo Connect Client Summit from May 7th to 9th at the Loews Downtown Chicago Hotel. Since the start of our partnership in 2021, Kibo has consistently delivered success through innovative commerce and delivery models, a dynamic omnichannel pricing and promotions engine, and robust delivery options seamlessly integrated into its user-friendly interface. This upcoming summit promises to bring together industry leaders, innovators, and experts to exchange valuable insights, strategies, and success stories from the world of commerce.

What to Expect at the Summit

The event offers plenty of networking opportunities, with more than 200 executives and industry experts in attendance. Key figures like CTOs and SVPs from renowned businesses such as Total Wine & More, Forrester, and Ace Hardware will be among those contributing thought leadership on stage. Adding to the lineup, our very own Zach Zalowitz, Principal of Order Management and Product Information Management, and Kim Glasscock, Director of Order Management, will represent us at the summit.

Expect a wealth of strategic discussions on the latest practices in commerce, order management, and customer experience. One must-attend session is the panel ‘The Future of Commerce: Navigating Disruption and Driving Innovation,’ featuring Zach Zalowitz alongside prominent leaders from ODP and Proactiv. The panel discussion takes place on Thursday, May 8th, at 2:00 PM.

Additionally, attendees will have the chance to gain actionable insights by participating in various workshops and sessions led by global brands and technology providers.

Join Us at Kibo Connect 2025!

We’re thrilled to be part of the Kibo Connect Client Summit and look forward to seeing you there. Attendees can expect some exciting surprises from us at the event, whether it’s inspiration from the main stage or insightful conversations at our partner table. As we count down to May, stay tuned for more updates and information.

Explore our commerce expertise in your industry as we prepare to connect, and contact us if you’re ready to schedule time for a discussion at the event.

https://blogs.perficient.com/2025/04/16/countdown-to-the-kibo-connect-client-summit-2025/feed/ 0 380181
Android Development Codelab: Mastering Advanced Concepts https://blogs.perficient.com/2025/04/10/android-development-codelab-mastering-advanced-concepts/ https://blogs.perficient.com/2025/04/10/android-development-codelab-mastering-advanced-concepts/#respond Thu, 10 Apr 2025 22:28:06 +0000 https://blogs.perficient.com/?p=379698

 

This guide will walk you through building a small application step-by-step, focusing on integrating several powerful tools and concepts essential for modern Android development.

What We’ll Cover:

  • Jetpack Compose: Building the UI declaratively.
  • NoSQL Database (Firestore): Storing and retrieving data in the cloud.
  • WorkManager: Running reliable background tasks.
  • Build Flavors: Creating different versions of the app (e.g., dev vs. prod).
  • Proguard/R8: Shrinking and obfuscating your code for release.
  • Firebase App Distribution: Distributing test builds easily.
  • CI/CD (GitHub Actions): Automating the build and distribution process.

The Goal: Build a “Task Reporter” app. Users can add simple task descriptions. These tasks are saved to Firestore. A background worker will periodically “report” (log a message or update a counter in Firestore) that the app is active. We’ll have dev and prod flavors pointing to different Firestore collections/data and distribute the dev build for testing.

Prerequisites:

  • Android Studio (latest stable version recommended).
  • Basic understanding of Kotlin and Android development fundamentals.
  • Familiarity with Jetpack Compose basics (Composable functions, State).
  • A Google account to use Firebase.
  • A GitHub account (for CI/CD).

Let’s get started!


Step 0: Project Setup

  1. Create New Project: Open Android Studio -> New Project -> Empty Activity (choose Compose).
  2. Name: AdvancedConceptsApp (or your choice).
  3. Package Name: Your preferred package name (e.g., com.yourcompany.advancedconceptsapp).
  4. Language: Kotlin.
  5. Minimum SDK: API 24 or higher.
  6. Build Configuration Language: Kotlin DSL (build.gradle.kts).
  7. Click Finish.

Step 1: Firebase Integration (Firestore & App Distribution)

  1. Connect to Firebase: In Android Studio: Tools -> Firebase.
    • In the Assistant panel, find Firestore. Click “Get Started with Cloud Firestore”. Click “Connect to Firebase”. Follow the prompts to create a new Firebase project or connect to an existing one.
    • Click “Add Cloud Firestore to your app”. Accept changes to your build.gradle.kts (or build.gradle) files. This adds the necessary dependencies.
    • Go back to the Firebase Assistant, find App Distribution. Click “Get Started”. Add the App Distribution Gradle plugin by clicking the button. Accept changes.
  2. Enable Services in Firebase Console:
    • Go to the Firebase Console and select your project.
    • In the left menu, go to Build -> Firestore Database. Click “Create database”.
      • Start in Test mode for easier initial development (we’ll secure it later if needed). Choose a location close to your users. Click “Enable”.
    • Ensure App Distribution is accessible (no setup needed here yet).
  3. Download Initial google-services.json:
    • In Firebase Console -> Project Settings (gear icon) -> Your apps.
    • Ensure your Android app (using the base package name like com.yourcompany.advancedconceptsapp) is registered. If not, add it.
    • Download the google-services.json file.
    • Switch Android Studio to the Project view and place the file inside the app/ directory.
    • Note: We will likely replace this file in Step 4 after configuring build flavors.

Step 2: Building the Basic UI with Compose

Let’s create a simple UI to add and display tasks.

  1. Dependencies: Ensure necessary dependencies for Compose, ViewModel, Firestore, and WorkManager are in app/build.gradle.kts.
    app/build.gradle.kts

    
    dependencies {
        // Core & Lifecycle & Activity
        implementation("androidx.core:core-ktx:1.13.1") // Use latest versions
        implementation("androidx.lifecycle:lifecycle-runtime-ktx:2.8.1")
        implementation("androidx.activity:activity-compose:1.9.0")
        // Compose
        implementation(platform("androidx.compose:compose-bom:2024.04.01")) // Check latest BOM
        implementation("androidx.compose.ui:ui")
        implementation("androidx.compose.ui:ui-graphics")
        implementation("androidx.compose.ui:ui-tooling-preview")
        implementation("androidx.compose.material3:material3")
        implementation("androidx.lifecycle:lifecycle-viewmodel-compose:2.8.1")
        // Firebase
        implementation(platform("com.google.firebase:firebase-bom:33.0.0")) // Check latest BOM
        implementation("com.google.firebase:firebase-firestore-ktx")
        // WorkManager
        implementation("androidx.work:work-runtime-ktx:2.9.0") // Check latest version
    }
                    

    Sync Gradle files.

  2. Task Data Class: Create data/Task.kt.
    data/Task.kt

    
    package com.yourcompany.advancedconceptsapp.data
    
    import com.google.firebase.firestore.DocumentId
    
    data class Task(
        @DocumentId
        val id: String = "",
        val description: String = "",
        val timestamp: Long = System.currentTimeMillis()
    ) {
        constructor() : this("", "", 0L) // Explicit no-arg constructor for Firestore deserialization (redundant here because every property has a default, but harmless)
    }
                    
  3. ViewModel: Create ui/TaskViewModel.kt. (We’ll update the collection name later).
    ui/TaskViewModel.kt

    
    package com.yourcompany.advancedconceptsapp.ui
    
    import androidx.lifecycle.ViewModel
    import androidx.lifecycle.viewModelScope
    import com.google.firebase.firestore.ktx.firestore
    import com.google.firebase.firestore.ktx.toObjects
    import com.google.firebase.ktx.Firebase
    import com.yourcompany.advancedconceptsapp.data.Task
    // Import BuildConfig later when needed
    import kotlinx.coroutines.flow.MutableStateFlow
    import kotlinx.coroutines.flow.StateFlow
    import kotlinx.coroutines.launch
    import kotlinx.coroutines.tasks.await
    
    // Temporary placeholder - will be replaced by BuildConfig field
    const val TEMPORARY_TASKS_COLLECTION = "tasks"
    
    class TaskViewModel : ViewModel() {
        private val db = Firebase.firestore
        // Use temporary constant for now
        private val tasksCollection = db.collection(TEMPORARY_TASKS_COLLECTION)
    
        private val _tasks = MutableStateFlow<List<Task>>(emptyList())
        val tasks: StateFlow<List<Task>> = _tasks
    
        private val _error = MutableStateFlow<String?>(null)
        val error: StateFlow<String?> = _error
    
        init {
            loadTasks()
        }
    
        fun loadTasks() {
            viewModelScope.launch {
                try {
                     tasksCollection.orderBy("timestamp", com.google.firebase.firestore.Query.Direction.DESCENDING)
                        .addSnapshotListener { snapshots, e ->
                            if (e != null) {
                                _error.value = "Error listening: ${e.localizedMessage}"
                                return@addSnapshotListener
                            }
                            _tasks.value = snapshots?.toObjects<Task>() ?: emptyList()
                            _error.value = null
                        }
                } catch (e: Exception) {
                    _error.value = "Error loading: ${e.localizedMessage}"
                }
            }
        }
    
         fun addTask(description: String) {
            if (description.isBlank()) {
                _error.value = "Task description cannot be empty."
                return
            }
            viewModelScope.launch {
                 try {
                     val task = Task(description = description, timestamp = System.currentTimeMillis())
                     tasksCollection.add(task).await()
                     _error.value = null
                 } catch (e: Exception) {
                    _error.value = "Error adding: ${e.localizedMessage}"
                }
            }
        }
    }
                    
  4. Main Screen Composable: Create ui/TaskScreen.kt.
    ui/TaskScreen.kt

    
    package com.yourcompany.advancedconceptsapp.ui
    
    // Imports: androidx.compose.*, androidx.lifecycle.viewmodel.compose.viewModel, java.text.SimpleDateFormat, etc.
    import androidx.compose.foundation.layout.*
    import androidx.compose.foundation.lazy.LazyColumn
    import androidx.compose.foundation.lazy.items
    import androidx.compose.material3.*
    import androidx.compose.runtime.*
    import androidx.compose.ui.Alignment
    import androidx.compose.ui.Modifier
    import androidx.compose.ui.unit.dp
    import androidx.lifecycle.viewmodel.compose.viewModel
    import com.yourcompany.advancedconceptsapp.data.Task
    import java.text.SimpleDateFormat
    import java.util.Date
    import java.util.Locale
    import androidx.compose.ui.res.stringResource
    import com.yourcompany.advancedconceptsapp.R // Import R class
    
    @OptIn(ExperimentalMaterial3Api::class) // For TopAppBar
    @Composable
    fun TaskScreen(taskViewModel: TaskViewModel = viewModel()) {
        val tasks by taskViewModel.tasks.collectAsState()
        val errorMessage by taskViewModel.error.collectAsState()
        var taskDescription by remember { mutableStateOf("") }
    
        Scaffold(
            topBar = {
                TopAppBar(title = { Text(stringResource(id = R.string.app_name)) }) // Use resource for flavor changes
            }
        ) { paddingValues ->
            Column(modifier = Modifier.padding(paddingValues).padding(16.dp).fillMaxSize()) {
                // Input Row
                Row(verticalAlignment = Alignment.CenterVertically, modifier = Modifier.fillMaxWidth()) {
                    OutlinedTextField(
                        value = taskDescription,
                        onValueChange = { taskDescription = it },
                        label = { Text("New Task Description") },
                        modifier = Modifier.weight(1f),
                        singleLine = true
                    )
                    Spacer(modifier = Modifier.width(8.dp))
                    Button(onClick = {
                        taskViewModel.addTask(taskDescription)
                        taskDescription = ""
                    }) { Text("Add") }
                }
                Spacer(modifier = Modifier.height(16.dp))
                // Error Message
                errorMessage?.let { Text(it, color = MaterialTheme.colorScheme.error, modifier = Modifier.padding(bottom = 8.dp)) }
                // Task List
                if (tasks.isEmpty() && errorMessage == null) {
                    Text("No tasks yet. Add one!")
                } else {
                    LazyColumn(modifier = Modifier.weight(1f)) {
                        items(tasks, key = { it.id }) { task ->
                            TaskItem(task)
                            Divider()
                        }
                    }
                }
            }
        }
    }
    
    @Composable
    fun TaskItem(task: Task) {
        val dateFormat = remember { SimpleDateFormat("yyyy-MM-dd HH:mm", Locale.getDefault()) }
        Row(modifier = Modifier.fillMaxWidth().padding(vertical = 8.dp), verticalAlignment = Alignment.CenterVertically) {
            Column(modifier = Modifier.weight(1f)) {
                Text(task.description, style = MaterialTheme.typography.bodyLarge)
                Text("Added: ${dateFormat.format(Date(task.timestamp))}", style = MaterialTheme.typography.bodySmall)
            }
        }
    }
                    
  5. Update MainActivity.kt: Set the content to TaskScreen.
    MainActivity.kt

    
    package com.yourcompany.advancedconceptsapp
    
    import android.os.Bundle
    import androidx.activity.ComponentActivity
    import androidx.activity.compose.setContent
    import androidx.compose.foundation.layout.fillMaxSize
    import androidx.compose.material3.MaterialTheme
    import androidx.compose.material3.Surface
    import androidx.compose.ui.Modifier
    import com.yourcompany.advancedconceptsapp.ui.TaskScreen
    import com.yourcompany.advancedconceptsapp.ui.theme.AdvancedConceptsAppTheme
    // Imports for WorkManager scheduling will be added in Step 3
    
    class MainActivity : ComponentActivity() {
        override fun onCreate(savedInstanceState: Bundle?) {
            super.onCreate(savedInstanceState)
            setContent {
                AdvancedConceptsAppTheme {
                    Surface(modifier = Modifier.fillMaxSize(), color = MaterialTheme.colorScheme.background) {
                        TaskScreen()
                    }
                }
            }
            // TODO: Schedule WorkManager job in Step 3
        }
    }
                    
  6. Run the App: Test basic functionality. Tasks should appear and persist in Firestore’s `tasks` collection (initially).

Step 3: WorkManager Implementation

Create a background worker for periodic reporting.

  1. Create the Worker: Create worker/ReportingWorker.kt. (Collection name will be updated later).
    worker/ReportingWorker.kt

    
    package com.yourcompany.advancedconceptsapp.worker
    
    import android.content.Context
    import android.util.Log
    import androidx.work.CoroutineWorker
    import androidx.work.WorkerParameters
    import com.google.firebase.firestore.ktx.firestore
    import com.google.firebase.ktx.Firebase
    // Import BuildConfig later when needed
    import kotlinx.coroutines.tasks.await
    
    // Temporary placeholder - will be replaced by BuildConfig field
    const val TEMPORARY_USAGE_LOG_COLLECTION = "usage_logs"
    
    class ReportingWorker(appContext: Context, workerParams: WorkerParameters) :
        CoroutineWorker(appContext, workerParams) {
    
        companion object { const val TAG = "ReportingWorker" }
        private val db = Firebase.firestore
    
        override suspend fun doWork(): Result {
            Log.d(TAG, "Worker started: Reporting usage.")
            return try {
                val logEntry = hashMapOf(
                    "timestamp" to System.currentTimeMillis(),
                    "message" to "App usage report.",
                    "worker_run_id" to id.toString()
                )
                // Use temporary constant for now
                db.collection(TEMPORARY_USAGE_LOG_COLLECTION).add(logEntry).await()
                Log.d(TAG, "Worker finished successfully.")
                Result.success()
            } catch (e: Exception) {
                Log.e(TAG, "Worker failed", e)
                Result.failure()
            }
        }
    }
                    
  2. Schedule the Worker: Update MainActivity.kt‘s onCreate method.
    MainActivity.kt additions

    
    // Add these imports to MainActivity.kt
    import android.content.Context
    import android.util.Log
    import androidx.work.*
    import com.yourcompany.advancedconceptsapp.worker.ReportingWorker
    import java.util.concurrent.TimeUnit
    
    // Inside MainActivity class, after setContent { ... } block in onCreate
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContent {
            // ... existing code ...
        }
        // Schedule the worker
        schedulePeriodicUsageReport(this)
    }
    
    // Add this function to MainActivity class
    private fun schedulePeriodicUsageReport(context: Context) {
        val constraints = Constraints.Builder()
            .setRequiredNetworkType(NetworkType.CONNECTED)
            .build()
    
        val reportingWorkRequest = PeriodicWorkRequestBuilder<ReportingWorker>(
                1, TimeUnit.HOURS // ~ every hour
             )
            .setConstraints(constraints)
            .addTag(ReportingWorker.TAG)
            .build()
    
        WorkManager.getInstance(context).enqueueUniquePeriodicWork(
            ReportingWorker.TAG,
            ExistingPeriodicWorkPolicy.KEEP,
            reportingWorkRequest
        )
        Log.d("MainActivity", "Periodic reporting work scheduled.")
    }
                    
  3. Test WorkManager:
    • Run the app. Check Logcat for messages from ReportingWorker and MainActivity about scheduling.
    • WorkManager tasks don’t run immediately, especially periodic ones. You can use ADB commands to force execution for testing:
      • Find your package name: com.yourcompany.advancedconceptsapp
      • Force run jobs: adb shell cmd jobscheduler run -f com.yourcompany.advancedconceptsapp 999 (999 is a JobScheduler job ID; WorkManager assigns these internally, so if nothing happens, the Background Task Inspector below is more reliable).
      • Or use Android Studio’s App Inspection tab -> Background Task Inspector to view and trigger workers.
    • Check your Firestore Console for the usage_logs collection.
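Another option during development is to bypass the periodic schedule entirely and enqueue a one-off run of the same worker. This is a sketch reusing ReportingWorker from this step; the helper function name is arbitrary, and you could call it temporarily from onCreate, then check Logcat and Firestore without waiting for the interval:

```kotlin
import android.content.Context
import androidx.work.OneTimeWorkRequestBuilder
import androidx.work.WorkManager
import com.yourcompany.advancedconceptsapp.worker.ReportingWorker

// Illustrative: execute ReportingWorker.doWork() right away (still subject
// to any constraints on the request) instead of waiting for the hourly run.
fun runReportOnce(context: Context) {
    val request = OneTimeWorkRequestBuilder<ReportingWorker>().build()
    WorkManager.getInstance(context).enqueue(request)
}
```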

Step 4: Build Flavors (dev vs. prod)

Create dev and prod flavors for different environments.

  1. Configure app/build.gradle.kts:
    app/build.gradle.kts

    
    android {
        // ... namespace, compileSdk, defaultConfig ...
    
        // ****** Enable BuildConfig generation ******
        buildFeatures {
            buildConfig = true
        }
        // *******************************************
    
        flavorDimensions += "environment"
    
        productFlavors {
            create("dev") {
                dimension = "environment"
                applicationIdSuffix = ".dev" // CRITICAL: Changes package name for dev builds
                versionNameSuffix = "-dev"
                resValue("string", "app_name", "Task Reporter (Dev)")
                buildConfigField("String", "TASKS_COLLECTION", "\"tasks_dev\"")
                buildConfigField("String", "USAGE_LOG_COLLECTION", "\"usage_logs_dev\"")
            }
            create("prod") {
                dimension = "environment"
                resValue("string", "app_name", "Task Reporter")
                buildConfigField("String", "TASKS_COLLECTION", "\"tasks\"")
                buildConfigField("String", "USAGE_LOG_COLLECTION", "\"usage_logs\"")
            }
        }
    
        // ... buildTypes, compileOptions, etc ...
    }
                    

    Sync Gradle files.

    Important: We added applicationIdSuffix = ".dev". This means the actual package name for your development builds will become something like com.yourcompany.advancedconceptsapp.dev. This requires an update to your Firebase project setup, explained next. Also note the buildFeatures { buildConfig = true } block, which is required to use buildConfigField.
  2. Handling Firebase for Suffixed Application IDs

    Because the `dev` flavor now has a different application ID (`…advancedconceptsapp.dev`), the original `google-services.json` file (downloaded in Step 1) will not work for `dev` builds, causing a “No matching client found” error during build.

    You must add this new Application ID to your Firebase project:

    1. Go to Firebase Console: Open your project settings (gear icon).
    2. Your apps: Scroll down to the “Your apps” card.
    3. Add app: Click “Add app” and select the Android icon.
    4. Register dev app:
      • Package name: Enter the exact suffixed ID: com.yourcompany.advancedconceptsapp.dev (replace `com.yourcompany.advancedconceptsapp` with your actual base package name).
      • Nickname (Optional): “Task Reporter Dev”.
      • SHA-1 (Optional but Recommended): Add the debug SHA-1 key from `./gradlew signingReport`.
    5. Register and Download: Click “Register app”. Crucially, download the new google-services.json file offered. This file now contains configurations for BOTH your base ID and the `.dev` suffixed ID.
    6. Replace File: In Android Studio (Project view), delete the old google-services.json from the app/ directory and replace it with the **newly downloaded** one.
    7. Skip SDK steps: You can skip the remaining steps in the Firebase console for adding the SDK.
    8. Clean & Rebuild: Back in Android Studio, perform a Build -> Clean Project and then Build -> Rebuild Project.
    Now your project is correctly configured in Firebase for both `dev` (with the `.dev` suffix) and `prod` (base package name) variants using a single `google-services.json`.
  3. Create Flavor-Specific Source Sets:
    • Switch to Project view in Android Studio.
    • Right-click on app/src -> New -> Directory. Name it dev.
    • Inside dev, create res/values/ directories.
    • Right-click on app/src -> New -> Directory. Name it prod.
    • Inside prod, create res/values/ directories.
    • (Optional but good practice): You can now move the default app_name string definition from app/src/main/res/values/strings.xml into both app/src/dev/res/values/strings.xml and app/src/prod/res/values/strings.xml. Or, you can rely solely on the resValue definitions in Gradle (as done above). Using resValue is often simpler for single strings like app_name. If you had many different resources (layouts, drawables), you’d put them in the respective dev/res or prod/res folders.
  4. Use Build Config Fields in Code:
      • Update TaskViewModel.kt and ReportingWorker.kt to use BuildConfig instead of temporary constants.

    TaskViewModel.kt change

    
    // Add this import
    import com.yourcompany.advancedconceptsapp.BuildConfig
    
    // Replace the temporary constant usage
    // const val TEMPORARY_TASKS_COLLECTION = "tasks" // Remove this line
    private val tasksCollection = db.collection(BuildConfig.TASKS_COLLECTION) // Use build config field
                        

    ReportingWorker.kt change

    
    // Add this import
    import com.yourcompany.advancedconceptsapp.BuildConfig
    
    // Replace the temporary constant usage
    // const val TEMPORARY_USAGE_LOG_COLLECTION = "usage_logs" // Remove this line
    
    // ... inside doWork() ...
    db.collection(BuildConfig.USAGE_LOG_COLLECTION).add(logEntry).await() // Use build config field
                        

    Modify TaskScreen.kt to potentially use the flavor-specific app name (though resValue handles this automatically if you referenced @string/app_name correctly, which TopAppBar usually does). If you set the title directly, you would load it from resources:

     // In TaskScreen.kt (if needed)
    import androidx.compose.ui.res.stringResource
    import com.yourcompany.advancedconceptsapp.R // Import R class
    // Inside Scaffold -> topBar

    TopAppBar(title = { Text(stringResource(id = R.string.app_name)) }) // Use string resource

  5. Select Build Variant & Test:
    • In Android Studio, go to Build -> Select Build Variant… (or use the “Build Variants” panel usually docked on the left).
    • You can now choose between devDebug, devRelease, prodDebug, and prodRelease.
    • Select devDebug. Run the app. The title should say “Task Reporter (Dev)”. Data should go to tasks_dev and usage_logs_dev in Firestore.
    • Select prodDebug. Run the app. The title should be “Task Reporter”. Data should go to tasks and usage_logs.

Step 5: Proguard/R8 Configuration (for Release Builds)

R8 is the default code shrinker and obfuscator in Android Studio (successor to Proguard). It’s enabled by default for release build types. We need to ensure it doesn’t break our app, especially Firestore data mapping.

    1. Review app/build.gradle.kts Release Build Type:
      app/build.gradle.kts

      
      android {
          // ...
          buildTypes {
              release {
                  isMinifyEnabled = true // Should be true by default for release
                  isShrinkResources = true // R8 handles both
                  proguardFiles(
                      getDefaultProguardFile("proguard-android-optimize.txt"),
                      "proguard-rules.pro" // Our custom rules file
                  )
              }
              debug {
                  isMinifyEnabled = false // Usually false for debug
                  proguardFiles(
                      getDefaultProguardFile("proguard-android-optimize.txt"),
                      "proguard-rules.pro"
                  )
              }
          }
          // ...
      }
                 

      isMinifyEnabled = true enables R8 for the release build type.

    2. Configure app/proguard-rules.pro:
      • Firestore uses reflection to serialize/deserialize data classes. R8 might remove or rename classes/fields needed for this process. We need to add “keep” rules.
      • Open (or create) the app/proguard-rules.pro file. Add the following:
      
      # Keep Task data class and its members for Firestore serialization
      -keep class com.yourcompany.advancedconceptsapp.data.Task { <init>(...); *; }
      # Keep any other data classes used with Firestore similarly
      # -keep class com.yourcompany.advancedconceptsapp.data.AnotherFirestoreModel { <init>(...); *; }
      
      # Keep Coroutine builders and intrinsics (often needed, though AGP/R8 handle some automatically)
      -keepnames class kotlinx.coroutines.intrinsics.** { *; }
      
      # Keep companion objects for Workers if needed (sometimes R8 removes them)
      -keepclassmembers class * extends androidx.work.Worker {
          public static ** Companion;
      }
      
      # Keep specific fields/methods if using reflection elsewhere
      # -keepclassmembers class com.example.SomeClass {
      #    private java.lang.String someField;
      #    public void someMethod();
      # }
      
      # Add rules for any other libraries that require them (e.g., Retrofit, Gson, etc.)
      # Consult library documentation for necessary Proguard/R8 rules.
    • Explanation:
      • -keep class ... { <init>(...); *; }: Keeps the Task class, its constructors (<init>), and all its fields/methods (*) from being removed or renamed. This is crucial for Firestore.
      • -keepnames: Prevents renaming but allows removal if unused.
      • -keepclassmembers: Keeps specific members within a class.
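If the data package accumulates several Firestore models, a package-wide rule is a common (if less precise) alternative to listing each class; the trade-off is that nothing under that package gets shrunk or obfuscated:

```text
# Illustrative alternative: keep every class in the data package wholesale
-keep class com.yourcompany.advancedconceptsapp.data.** { *; }
```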

3. Test the Release Build:

    • Select the prodRelease build variant.
    • Go to Build -> Generate Signed Bundle / APK…. Choose APK.
    • Create a new keystore or use an existing one (follow the prompts). Remember the passwords!
    • Select prodRelease as the variant. Click Finish.
    • Android Studio will build the release APK. Find it (usually in app/build/outputs/apk/prod/release/).
    • Install this APK manually on a device: adb install app-prod-release.apk.
    • Test thoroughly. Can you add tasks? Do they appear? Does the background worker still log to Firestore (check usage_logs)? If it crashes or data doesn’t save/load correctly, R8 likely removed something important. Check Logcat for errors (often ClassNotFoundException or NoSuchMethodError) and adjust your proguard-rules.pro file accordingly. The R8 mapping file (app/build/outputs/mapping/prodRelease/mapping.txt) helps decode obfuscated stack traces.

 


 

Step 6: Firebase App Distribution (for Dev Builds)

Configure Gradle to upload development builds to testers via Firebase App Distribution.

  1. Download a private key: In the Firebase console, go to Project Overview (top-left corner) -> Service accounts -> Firebase Admin SDK -> click the “Generate new private key” button. Move the downloaded api-project-xxx-yyy.json file to the project root, at the same level as the app folder. Keep this file local and add it to .gitignore; do not push it to the remote repository, because it contains sensitive credentials.
  2. Configure App Distribution Plugin in app/build.gradle.kts:
    app/build.gradle.kts

    
    // Apply the plugin at the top
    plugins {
        // ... other plugins id("com.android.application"), id("kotlin-android"), etc.
        alias(libs.plugins.google.firebase.appdistribution)
    }
    
    android {
        // ... buildFeatures, flavorDimensions, productFlavors ...
    
        buildTypes {
            getByName("release") {
                isMinifyEnabled = true
                isShrinkResources = true
                proguardFiles(
                    getDefaultProguardFile("proguard-android-optimize.txt"),
                    "proguard-rules.pro"
                )
                firebaseAppDistribution {
                    artifactType = "APK"
                    releaseNotes = "Latest build with fixes/features"
                    testers = "briew@example.com, bri@example.com, cal@example.com"
                    // Keep the key file out of version control; it contains sensitive data.
                    serviceCredentialsFile = "$rootDir/api-project-xxx-yyy.json"
                }
            }
            getByName("debug") {
                isMinifyEnabled = false
            }
        }
    }

    Add the plugin version to libs.versions.toml:

    
    [versions]
    googleFirebaseAppdistribution = "5.1.1"
    [plugins]
    google-firebase-appdistribution = { id = "com.google.firebase.appdistribution", version.ref = "googleFirebaseAppdistribution" }
    
    Ensure the plugin alias is also declared (with apply false) in the project-level build.gradle.kts:

    project build.gradle.kts

    
    plugins {
        // ...
        alias(libs.plugins.google.firebase.appdistribution) apply false
    }
                    

    Sync Gradle files.

  3. Upload a Build Manually:
    • Select the desired variant (e.g., devDebug, devRelease, prodDebug, prodRelease).
    • In the Android Studio terminal, run the command for each variant you want to distribute:
      • ./gradlew assembleRelease appDistributionUploadProdRelease
      • ./gradlew assembleRelease appDistributionUploadDevRelease
      • ./gradlew assembleDebug appDistributionUploadProdDebug
      • ./gradlew assembleDebug appDistributionUploadDevDebug
    • Check Firebase Console -> App Distribution (select the .dev app for dev builds). Add testers or set up a tester group.

Step 7: CI/CD with GitHub Actions

Automate building and distributing the `dev` build on push to a specific branch.

  1. Create GitHub Repository. Create a new repository on GitHub and push your project code to it.
    1. Generate FIREBASE_APP_ID:
      • In Firebase, go to Project Overview -> Project settings -> General and copy the App ID of the com.yourcompany.advancedconceptsapp.dev app (it looks like 1:xxxxxxxxx:android:yyyyyyyyyy).
      • In the GitHub repository, go to Settings -> Secrets and variables -> Actions -> New repository secret.
      • Set the name to FIREBASE_APP_ID and paste the App ID as the value.
    2. Add FIREBASE_SERVICE_ACCOUNT_KEY_JSON:
      • Open api-project-xxx-yyy.json at the project root and copy its content.
      • In the GitHub repository, go to Settings -> Secrets and variables -> Actions -> New repository secret.
      • Set the name to FIREBASE_SERVICE_ACCOUNT_KEY_JSON and paste the JSON content as the value.
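If you use the GitHub CLI, the same two secrets can be set from the terminal instead of the web UI (run from the project root, where the key file lives):

```shell
# Set the App ID secret (paste your real App ID in place of the placeholder).
gh secret set FIREBASE_APP_ID --body "1:xxxxxxxxx:android:yyyyyyyyyy"

# Read the service account JSON straight from the file at the project root.
gh secret set FIREBASE_SERVICE_ACCOUNT_KEY_JSON < api-project-xxx-yyy.json
```

Either route ends with the two secrets visible under Settings -> Secrets and variables -> Actions.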
    3. Create GitHub Actions Workflow File:
      • In your project root, create the directories .github/workflows/.
      • Inside .github/workflows/, create a new file named android_build_distribute.yml.
      • Paste the following content:
      name: Android CI 
      
      on: 
        push: 
          branches: [ "main" ] 
        pull_request: 
          branches: [ "main" ] 
      jobs: 
        build: 
          runs-on: ubuntu-latest 
          steps: 
          - uses: actions/checkout@v3
          - name: set up JDK 17 
            uses: actions/setup-java@v3 
            with: 
              java-version: '17' 
              distribution: 'temurin' 
              cache: gradle 
          - name: Grant execute permission for gradlew 
            run: chmod +x ./gradlew 
          - name: Build devRelease APK
            run: ./gradlew assembleDevRelease
          - name: upload artifact to Firebase App Distribution
            uses: wzieba/Firebase-Distribution-Github-Action@v1
            with:
              appId: ${{ secrets.FIREBASE_APP_ID }}
              serviceCredentialsFileContent: ${{ secrets.FIREBASE_SERVICE_ACCOUNT_KEY_JSON }}
              groups: testers
              file: app/build/outputs/apk/dev/release/app-dev-release-unsigned.apk
      
    4. Commit and Push: Commit the .github/workflows/android_build_distribute.yml file and push it to your main branch on GitHub.
    5. Verify: Go to the “Actions” tab in your GitHub repository. You should see the workflow running. If it succeeds, check Firebase App Distribution for the new build. Your testers should get notified.
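A natural next refinement is to fail fast on unit tests before building and uploading. A sketch of the extra step, inserted before the build step (`testDevDebugUnitTest` is the flavored unit-test task name Gradle generates for the devDebug variant; adjust to the variant you build):

```yaml
          # Hypothetical extra step: run unit tests first so a red test
          # stops the workflow before anything is uploaded to testers.
          - name: Run unit tests
            run: ./gradlew testDevDebugUnitTest
```

With this in place, a failing test blocks the App Distribution upload instead of shipping a broken build to the tester group.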


Step 8: Testing and Verification Summary

    • Flavors: Switch between devDebug and prodDebug in Android Studio. Verify the app name changes and data goes to the correct Firestore collections (tasks_dev/tasks, usage_logs_dev/usage_logs).
    • WorkManager: Use the App Inspection -> Background Task Inspector or ADB commands to verify the ReportingWorker runs periodically and logs data to the correct Firestore collection based on the selected flavor.
    • R8/Proguard: Install and test the prodRelease APK manually. Ensure all features work, especially adding/viewing tasks (Firestore interaction). Check Logcat for crashes related to missing classes/methods.
    • App Distribution: Make sure testers receive invites for the devDebug (or devRelease) builds uploaded manually or via CI/CD. Ensure they can install and run the app.
    • CI/CD: Check the GitHub Actions logs for successful builds and uploads after pushing to the main branch. Verify the build appears in Firebase App Distribution.
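For the WorkManager check, a couple of adb commands can confirm the periodic job is actually scheduled (the package name matches this project's dev flavor; the dumpsys output format varies by Android version, and the `ReportingWorker` log tag is an assumption about how the worker logs):

```shell
# WorkManager schedules its work through JobScheduler on API 23+,
# so the pending job shows up under the app's package:
adb shell dumpsys jobscheduler | grep -A 5 com.yourcompany.advancedconceptsapp.dev

# Watch logcat filtered to the worker's (assumed) log tag:
adb logcat -s ReportingWorker
```

If the grep shows nothing, check that the periodic work request is enqueued on app start and that battery optimizations on the test device are not deferring it.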


Conclusion

Congratulations! You’ve navigated complex Android topics including Firestore, WorkManager, Compose, Flavors (with correct Firebase setup), R8, App Distribution, and CI/CD.

This project provides a solid foundation. From here, you can explore:

    • More complex WorkManager chains or constraints.
    • Deeper R8/Proguard rule optimization.
    • More sophisticated CI/CD pipelines (deploying signed APKs/bundles, running tests, publishing to Google Play).
    • Using different NoSQL databases or local caching with Room.
    • Advanced Compose UI patterns and state management.
    • Firebase Authentication, Cloud Functions, etc.

If you’d like access to the full code in my GitHub repository, let me know in the comments.



Project Folder Structure (Conceptual)


AdvancedConceptsApp/
├── .git/
├── .github/workflows/android_build_distribute.yml
├── .gradle/
├── app/
│   ├── build/
│   ├── libs/
│   ├── src/
│   │   ├── main/           # Common code, res, AndroidManifest.xml
│   │   │   └── java/com/yourcompany/advancedconceptsapp/
│   │   │       ├── data/Task.kt
│   │   │       ├── ui/TaskScreen.kt, TaskViewModel.kt, theme/
│   │   │       ├── worker/ReportingWorker.kt
│   │   │       └── MainActivity.kt
│   │   ├── dev/            # Dev flavor source set (optional overrides)
│   │   ├── prod/           # Prod flavor source set (optional overrides)
│   │   ├── test/           # Unit tests
│   │   └── androidTest/    # Instrumentation tests
│   ├── google-services.json # *** IMPORTANT: Contains configs for BOTH package names ***
│   ├── build.gradle.kts    # App-level build script
│   └── proguard-rules.pro # R8/Proguard rules
├── api-project-xxx-yyy.json # Firebase service account key json
├── gradle/wrapper/
├── build.gradle.kts      # Project-level build script
├── gradle.properties
├── gradlew
├── gradlew.bat
└── settings.gradle.kts
        

 
