Strategy and Transformation Articles / Blogs / Perficient
Expert Digital Insights
https://blogs.perficient.com/category/services/strategy-and-consulting/
Wed, 23 Apr 2025

Meet Perficient at Data Summit 2025
https://blogs.perficient.com/2025/04/22/meet-perficient-at-data-summit-2025/
Tue, 22 Apr 2025

Data Summit 2025 is just around the corner, and we’re excited to connect, learn, and share ideas with fellow leaders in the data and AI space. As the pace of innovation accelerates, events like this offer a unique opportunity to engage with peers, discover groundbreaking solutions, and discuss the future of data-driven transformation. 

We caught up with Jerry Locke, a data solutions expert at Perficient, who’s not only attending the event but also taking the stage as a speaker. Here’s what he had to say about this year’s conference and why it matters: 

Why is this event important for the data industry? 

“Anytime you can meet outside of the screen is always a good thing. For me, it’s all about learning, networking, and inspiration. The world of data is expanding at an unprecedented pace. Global data volume is projected to reach over 180 zettabytes (or 180 trillion gigabytes) by 2025—nearly tripling from just 64 zettabytes in 2020. That’s a massive jump. The question we need to ask is: What are modern organizations doing to not only secure all this data but also use it to unlock new business opportunities? That’s what I’m looking to explore at this summit.” 

What topics do you think will be top-of-mind for attendees this year? 

“I’m especially interested in the intersection of data engineering and AI. I’ve been lucky to work on modern data teams where we’ve adopted CI/CD pipelines and scalable architectures. AI has completely transformed how we manage data pipelines—mostly for the better. The conversation this year will likely revolve around how to continue that momentum while solving real-world challenges.” 

Are there any sessions you’re particularly excited to attend? 

“My plan is to soak in as many sessions on data and AI as possible. I’m especially curious about the use cases being shared, how organizations are applying these technologies today, and more importantly, how they plan to evolve them over the next few years.” 

What makes this event special for you, personally? 

“I’ve never been to this event before, but several of my peers have, and they spoke highly of the experience. Beyond the networking, I’m really looking forward to being inspired by the incredible work others are doing. As a speaker, I’m honored to be presenting on serverless engineering in today’s cloud-first world. I’m hoping to not only share insights but also get thoughtful feedback from the audience and my peers. Ultimately, I want to learn just as much from the people in the room as they might learn from me.” 

What’s one thing you hope listeners take away from your presentation? 

“My main takeaway is simple: start. If your data isn’t on the cloud yet, start that journey. If your engineering isn’t modernized, begin that process. Serverless is a key part of modern data engineering, but the real goal is enabling fast, informed decision-making through your data. It won’t always be easy—but it will be worth it.

I also hope that listeners understand the importance of composable data systems. If you’re building or working with data systems, composability gives you agility, scalability, and future-proofing. So instead of a big, all-in-one data platform (monolith), you get a flexible architecture where you can plug in best-in-class tools for each part of your data stack. Composable data systems let you choose the best tool for each job, swap out or upgrade parts without rewriting everything, and scale or customize workflows as your needs evolve.” 
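As a rough sketch of that composability (stage names and data below are invented; this shows the shape of the idea, not any specific product): each stage exposes the same narrow interface, so any stage can be swapped or upgraded without rewriting the rest of the pipeline.

```python
from typing import Callable, Iterable

Record = dict

def pipeline(*stages: Callable[[Iterable[Record]], Iterable[Record]]):
    """Compose independently swappable stages into one data flow."""
    def run(records: Iterable[Record]) -> list:
        for stage in stages:
            records = stage(records)
        return list(records)
    return run

# Two interchangeable stages; either can be replaced on its own.
def drop_incomplete(records):
    return (r for r in records if all(v is not None for v in r.values()))

def normalize_names(records):
    return ({**r, "name": r["name"].upper()} for r in records)

run = pipeline(drop_incomplete, normalize_names)
print(run([{"name": "ada", "id": 1}, {"name": None, "id": 2}]))
# -> [{'name': 'ADA', 'id': 1}]
```

Swapping `normalize_names` for a better implementation, or adding a new stage, touches only that one function rather than a monolithic platform.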

Don’t miss Perficient at Data Summit 2025. A global digital consultancy, Perficient is committed to partnering with clients to tackle complex business challenges and accelerate transformative growth. 

What does SFO have to do with Oracle?
https://blogs.perficient.com/2025/04/21/what-does-sfo-have-to-do-with-oracle/
Mon, 21 Apr 2025

Isn’t SFO an airport?  It is the airport one would fly into if the destination were Oracle’s Redwood Shores campus.  Widely known as the initialism for San Francisco International Airport, SFO would indeed be the right answer in that context.  In Oracle Fusion, however, SFO stands for Supply Chain Financial Orchestration. Given what it does, we cannot call it an airport, but it sure is a control tower for financial transactions.

As companies expand their presence across countries and continents through mergers, acquisitions, or organic growth, transacting across borders and producing intercompany financial transactions becomes inevitable.

Supply Chain Financial Orchestration (SFO) is where Oracle Fusion handles those transactions. The material may move one way, but for legal or financial reasons the financial flow can follow a different path.

A Typical Scenario

A Germany-based company sells to its EU customers from its Berlin office, but ships from its warehouses in New Delhi and Beijing.


Oracle Fusion SFO takes care of all those transactions. As they are processed in Cost Management, financial trade transactions are created, and corporations can see their internal margins, intercompany accounting, and intercompany invoices.
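As a hypothetical illustration of the idea, consider the Berlin scenario above: the goods ship directly from New Delhi, while the financial flow routes through the Berlin entity. In the sketch below, the entity names, the price, and the 10% internal margin are all invented; it only shows how intercompany invoices fall out of a financial route that differs from the physical one.

```python
# Physical and financial routes of the same sale, modeled separately.
PHYSICAL_ROUTE  = ["New Delhi warehouse", "EU customer"]
FINANCIAL_ROUTE = ["New Delhi entity", "Berlin entity", "EU customer"]

def intercompany_invoices(route, customer_price, internal_margin=0.10):
    """One invoice per leg of the financial route; each internal leg is
    priced below the downstream leg so the selling entity books a margin."""
    invoices, price = [], customer_price
    legs = list(zip(route[:-1], route[1:]))
    for seller, buyer in reversed(legs):  # walk backwards from the customer
        invoices.append({"from": seller, "to": buyer, "amount": round(price, 2)})
        price *= 1 - internal_margin
    return list(reversed(invoices))

for inv in intercompany_invoices(FINANCIAL_ROUTE, 100.0):
    print(inv["from"], "->", inv["to"], inv["amount"])
# The internal New Delhi -> Berlin leg is invoiced at 90.0,
# and the Berlin -> customer leg at 100.0, exposing the internal margin.
```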

Financial orchestration is not limited to cross-border trade, either.  What if a corporation wants to measure the profitability of its manufacturing and sales operations separately?  Supply Chain Financial Orchestration is there for you.

In short, SFO is a tool within the Supply Chain Management offering that helps create intercompany trade transactions for a variety of business cases.

Contact Mehmet Erisen at Perficient for a closer look at this functionality and at how Perficient and Oracle Fusion Cloud can digitalize and modernize your ERP platform.

www.oracle.com

www.perficient.com

Navigating the Digital Transformation Landscape in 2025
https://blogs.perficient.com/2025/04/18/forrester-q2-2025-digital-transformation-landscape/
Fri, 18 Apr 2025

Keeping up with today’s fast-paced technological environment can be challenging, as businesses undergo significant transformation in operations, customer interactions, and innovation. Partnering with the right digital transformation service provider is essential for success. A provider with a proven track record in guiding businesses through digital complexity is crucial for unlocking their full potential, driving efficiency, and ensuring the exceptional customer experiences that lead to long-term success.

The Digital Transformation Services Landscape, Q2 2025 Report

The recent Forrester report defines digital transformation services as follows: “Service providers that offer multidisciplinary capabilities to support enterprises in articulating, orchestrating, and governing strategy-aligned business transformation journeys, driving change across technology, ways of working, operating models, data, and corporate culture to continuously improve business outcomes.” This report provides an in-depth overview of 35 digital transformation service providers, offering valuable insights into the current market landscape.

Understanding the Providers

Forrester meticulously researched each service provider through a comprehensive set of questions. According to Forrester, “organizations leverage digital transformation services to:

  • Articulate and orchestrate strategy-aligned transformation journeys.
  • Align tech modernization with people, organization, and culture changes.
  • Navigate transformation risks.”

Leaders can compare digital transformation service providers listed in the report based on size, offerings, geography, and business scenario differentiation to make informed decisions.

Core Business Scenarios

The report identifies the core business scenarios that are “most frequently sought after by buyers and addressed by digital transformation services solutions.” These scenarios include enterprise transformation, customer experience (CX) transformation, data and analytics transformation, and infrastructure and operational transformation.

Our Inclusion

We are proud to be listed in the Forrester Digital Transformation Services Landscape report as a digital transformation consultancy with an industry focus in the sectors of financial services, healthcare, and industrial products, and a geographic focus in three regions: North America (NA), Asia Pacific (APAC), and Latin America (LATAM).

As a dynamic global organization, we believe that with our cohesive, integrated strategy, we can deliver from any of our geographic locations and bring together the best team and the best value for the customer.

Access the Forrester report, The Digital Transformation Services Landscape, Q2 2025 to find out more.

Your Digital Transformation Journey

Seeing the world through your customers’ eyes is the best way to meet their needs. Our Digital Business Transformation practice enables leaders to meet the demands of today’s fast-changing, customer-centric world. We help you articulate a vision, formulate strategy, and align your team around the capabilities you need to stay ahead of disruption. Together, we resolve uncertainty, embrace change, and establish a North Star to guide your transformation journeys.

We implement the Envision Strategy Framework, a continuous and adaptive process that feeds real-world insights back into strategic decisions. This framework is informed by customer empathy and grounded in executional know-how. We put customers at the center of our digital strategy formulation process.

Supporting this is Envision Online, a comprehensive digital transformation platform that amplifies strategic decision-making based on the Envision Framework. With proprietary tools and a wealth of industry data, we deliver swift, actionable insights to help understand your organization’s competitive positioning.

Learn more about the report.

Forrester does not endorse any company, product, brand, or service included in its research publications and does not advise any person to select the products or services of any company or brand based on the ratings included in such publications. Information is based on the best available resources. Opinions reflect judgment at the time and are subject to change. For more information, read about Forrester’s objectivity here.

Generative AI in Data and Quality Assurance (QA): Transforming Processes
https://blogs.perficient.com/2025/04/16/generative-ai-in-data-and-quality-assurance-qa-transforming-processes/
Wed, 16 Apr 2025

Generative AI (GenAI) is transforming how organizations interact with data and develop high-quality software. A game changer across multiple industries, GenAI automates processes, increases accuracy, and provides predictive insights. Here, we concentrate on its uses in data management and QA, and on its effects on efficiency, innovation, and cost savings.

GenAI in Data Management

GenAI revolutionizes the data lifecycle by improving data quality and automating processes, thus accelerating and improving decision-making. Key applications include:

  • Data Augmentation: GenAI generates synthetic data to augment existing datasets. This is especially advantageous when training machine learning models that require diverse, large-scale data inputs.
  • Data Cleansing: GenAI finds and corrects duplicates, errors, missing values, and inconsistent formats, producing high-quality datasets ready for analysis.
  • Data Enrichment: GenAI derives new features from existing data (e.g., inferring customer demographics from purchase history or activity logs).
  • Real-time Data Processing: GenAI applies advanced algorithms for real-time ingestion, cleansing, and transformation, enabling seamless integration across systems.
  • Predictive Analytics: GenAI observes patterns and anomalies in data to forecast trends or spot critical problems before they escalate.
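As a toy illustration of the cleansing bullet above, the sketch below deduplicates records, normalizes an inconsistent date format, and fills a missing value. The dataset, field names, and rules are all invented for this example; real pipelines would lean on dedicated tooling, with GenAI automating the discovery of such rules.

```python
raw = [
    {"id": 1, "country": "us", "signup": "2025/01/03"},
    {"id": 1, "country": "us", "signup": "2025/01/03"},   # exact duplicate
    {"id": 2, "country": None, "signup": "03-01-2025"},   # missing value, odd format
]

def cleanse(rows, default_country="unknown"):
    seen, out = set(), []
    for r in rows:
        if r["id"] in seen:          # drop duplicate ids
            continue
        seen.add(r["id"])
        country = (r["country"] or default_country).upper()
        # Normalize both "YYYY/MM/DD" and "DD-MM-YYYY" to ISO "YYYY-MM-DD".
        s = r["signup"]
        if "/" in s:
            y, m, d = s.split("/")
        else:
            d, m, y = s.split("-")
        out.append({"id": r["id"], "country": country, "signup": f"{y}-{m}-{d}"})
    return out

print(cleanse(raw))
# -> [{'id': 1, 'country': 'US', 'signup': '2025-01-03'},
#     {'id': 2, 'country': 'UNKNOWN', 'signup': '2025-01-03'}]
```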

Benefits in Data Management

  • Improved accuracy and consistency of datasets.
  • Reduced operational costs and manual intervention.
  • Greater innovation, with high-quality data supporting better product development.

GenAI in Quality Assurance (QA)

GenAI is also transforming QA processes by automating test case creation, generating test data, detecting bugs early, and performing predictive analysis. Its dynamic capabilities enhance the efficiency of software testing and reduce costs.

Applications in QA

Synthetic Test Data Generation: GenAI synthesizes realistic datasets critical for unbiased testing, helping organizations sidestep the ethical concerns of using real-world data. This is especially relevant in healthcare.

Automated Test Case Generation: GenAI examines user stories and requirements using retrieval-augmented generation (RAG) and advanced algorithms to automatically create comprehensive test cases.

Scenario Exploration: QA teams can validate rare edge-case scenarios that are difficult to construct manually; GenAI generates complex scenarios that genuinely reflect realistic usage.

Continuous Monitoring: Unlike traditional AI approaches, GenAI monitors software performance in real time, even as development cycles run.

Test Automation: Generative AI powers tools like GitHub Copilot and Amazon CodeWhisperer, which generate reusable code snippets for deploying automated tests, reducing manual work.
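The synthetic test data idea can be sketched with nothing more than the standard library. The field names and value ranges below are invented, and a GenAI tool would produce far richer, more realistic records; the point is that seeded generation gives reproducible fixtures without ever touching real patient data.

```python
import random

def synthetic_patients(n, seed=42):
    """Deterministic fake records: the same seed yields the same fixtures
    in every CI run, so tests are repeatable and privacy-safe."""
    rng = random.Random(seed)
    conditions = ["hypertension", "diabetes", "asthma", "none"]
    return [
        {
            "patient_id": f"P{rng.randrange(10_000):04d}",
            "age": rng.randint(0, 99),
            "condition": rng.choice(conditions),
        }
        for _ in range(n)
    ]

batch = synthetic_patients(3)
print(len(batch))  # -> 3
```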

Benefits in QA

  • Better, wider coverage across test scenarios and devices.
  • Predictive insights that identify defects faster.
  • Cost savings from reduced manual testing effort.

Generative AI Implementation Challenges

While the advantages are considerable, GenAI implementation also brings challenges:

Integration Challenges: Ensuring compatibility with existing systems can be difficult.

Data Sovereignty: Regulations on handling sensitive or synthetic data, such as GDPR, must be followed.

Resistance to Change: Teams might be unwilling to adopt new tools, either because they lack knowledge of how to use them or because they fear being displaced, not just by the tools themselves but, more broadly, by automation.

Firm plans, stakeholder engagement, and clear guidance on AI tool use will help mitigate these challenges.

Conclusion

Generative AI is revolutionizing data management and QA processes. By automating tasks, improving accuracy, reducing errors, and enabling predictive analytics through synthetic data creation, it is becoming a foundation of emerging digital transformation strategies. The more businesses integrate GenAI throughout their workflows, the more efficiency and innovation its capabilities will unlock.

Ethics in AI Implementation: Balancing Innovation and Responsibility
https://blogs.perficient.com/2025/04/15/ethics-in-ai-implementation/
Tue, 15 Apr 2025

AI is reshaping how we work, communicate, and make decisions. From healthcare to finance, it is revolutionizing industries through process automation, increased efficacy, and innovation. These advancements, however, carry significant ethical risks that must be addressed urgently as AI becomes increasingly embedded in our society. Deploying AI without considering its ethical impact isn’t just a technical oversight; it’s a human one. At the end of the day, it’s people who experience the fallout of flawed decisions.

Imagine an AI system denying a qualified candidate a job interview because of hidden biases in its training data. As AI becomes integral to decision-making processes, ensuring ethical implementation is no longer optional, it’s imperative.

What is AI Ethics?

AI ethics refers to the principles and practices that guide the responsible development and use of AI technology so that it benefits society and minimizes potential harm. Ethical AI focuses on:

Fairness: AI must not reinforce existing societal biases. This means actively reviewing data for gender, racial, or socioeconomic bias before it’s used in training, and making adjustments where needed. A fair AI system should perform consistently across different population groups, not just for the majority or most-represented users.

Transparency: Ensuring AI decision-making processes are understandable.

Accountability: Encouraging developers and organizations to be accountable for the impacts of AI.

Privacy: Protecting users’ data from misuse.

Environmental Impact: Reducing AI’s carbon footprint through greener technologies.

Why Ethics in AI Matter

AI systems are being built into critical areas such as healthcare, the criminal justice system, hiring, and finance. If ethics are left out of the equation, these systems can quietly reinforce real-world inequalities, without anyone noticing until it’s too late. For instance:

  • Amazon eliminated an AI recruiting tool because the data used to train its models was biased, favoring men over women.
  • The Netherlands childcare benefits scandal showed how biased algorithms can ruin lives with false allegations of fraud.
  • In 2024, a major financial institution faced backlash when its AI-driven loan approval system was found to disproportionately reject applications from minority communities, highlighting the urgent need for bias mitigation in AI algorithms.

These examples illustrate the potential for harm when ethical frameworks are neglected.

Key Ethical Challenges in AI

Bias: When Machines Reflect Our Flaws

AI systems often inherit biases from the data used to train them. Left unchecked, these biases can lead to unjust outcomes for specific groups.

Why Transparency Isn’t Optional Anymore

Many AI models are “black boxes”: it is hard to tell how or why they reach a decision. This lack of transparency undermines trust, especially when decisions are based on unclear or unreliable data.

Who’s Accountable When AI Gets It Wrong?

Determining responsibility for an AI system’s actions, especially in high-stakes scenarios like healthcare, remains a complex issue.

Privacy Concerns

AI systems collect and use personal data at great speed and scale, which raises serious privacy concerns, especially given the limited accountability and transparency around data usage. Users have little to no understanding of how their data is being handled.

Environmental Impact

Training large-scale machine learning models carries a substantial energy cost and a correspondingly large environmental footprint.

Strategies for Implementing Ethical and Efficient AI

Implementation

Organizations should proactively implement ethical practices at all levels of their AI framework:

1. Create Ethical Guidelines for Internal Use

  • Develop a comprehensive ethics policy that outlines acceptable AI use cases, decision-making protocols, and review processes.
  • Create an AI Ethics Committee to monitor compliance with these guidelines.

2. Diversity in Data and Teams

  • Ensure datasets are representative and inclusive. Assemble diverse teams to bring varied perspectives to AI development.
  • Having teams that are diverse in background will help to see ethical blind spots.

3. Embed Ethics into Development

  • Consider ethical implications at each stage of AI development.
  • Use tools like IBM’s AI Fairness 360 or Google’s What-If Tool to regularly test for bias in models.

4. Continuous Monitoring

  • Use checklists to review and apply ethical guidelines.
  • Engage third-party audits to provide unbiased perspectives on system functioning.

5. Educate Stakeholders

  • Run training programs to educate employees on ethical principles and good practices in using AI.
  • Include cross-functional conversations among developers, legal teams, and user advocates.

6. Partner with Ethical Partners

  • Build partnerships with suppliers committed to ethical AI solutions.
  • Seek transparency, clarity, fairness, and accountability in external tools, and evaluate them before implementation.
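The bias testing mentioned in step 3 can be illustrated without any special tooling. One common check, demographic parity, simply compares favorable-outcome rates across groups. The loan decisions below are hypothetical, and this hand-rolled metric is only a stand-in for what libraries like AI Fairness 360 provide.

```python
def demographic_parity_diff(outcomes, groups, favorable=1):
    """Difference in favorable-outcome rates between two groups
    (sorted by group label, so the sign is deterministic)."""
    rates = {}
    for g in set(groups):
        picked = [o for o, gg in zip(outcomes, groups) if gg == g]
        rates[g] = sum(o == favorable for o in picked) / len(picked)
    a, b = sorted(rates)
    return rates[a] - rates[b]

# Hypothetical loan decisions: 1 = approved, 0 = rejected.
outcomes = [1, 1, 0, 1, 0, 0, 0, 1]
groups   = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_diff(outcomes, groups)
print(gap)  # -> 0.5  (group A approved at 0.75, group B at 0.25)
```

A gap near zero suggests parity on this metric; a gap like 0.5 is the kind of signal that should trigger a deeper review of the training data and model.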

Forging the Future

Indeed, an ethically responsible approach to AI is both a technical challenge and a societal imperative. By emphasizing fairness, transparency, accountability, and privacy protection, organizations can develop systems that are both trustworthy and aligned with human values. As the forces shaping the future continue to evolve, our responsibility to ensure inclusive and ethical innovation must grow alongside them.

AI ethics is a shared responsibility involving developers, businesses, policymakers, and society at large.

Ethical AI isn’t just about doing the right thing; it’s becoming a regulatory necessity. Frameworks like the EU AI Act and the IEEE’s Ethically Aligned Design give developers and businesses important guardrails for navigating this space.

By taking deliberate steps toward responsible implementation today, we can shape a future where AI enhances lives without compromising fundamental rights or values. As AI continues to evolve, it’s our collective responsibility to steer its development ethically.

⚡ PERFATHON 2025 – Hackathon at Perficient 👩‍💻
https://blogs.perficient.com/2025/04/15/perfathon-2025-the-hackathon-at-perficient/
Tue, 15 Apr 2025

April 10–11, 2025, marked an exciting milestone for Perficient India as we hosted our hackathon – Perfathon 2025. Held at our Bangalore office, this thrilling, high-energy event ran non-stop from 12 PM on April 10 to 4 PM on April 11, bringing together 6 enthusiastic teams, creative minds, and some truly impactful ideas.


Setting the Stage

The excitement wasn’t limited to the two days — the buzz began a week in advance, with teasers and prep that got everyone curious and pumped. The organizing team went all out to set the vibe right from the moment we stepped in — from vibrant decorations and music to cool Perfathon hoodies and high spirits all around.


Our General Manager, Sumantra Nandi, kicked off the event with inspiring words and warm introductions to the teams, setting the tone for what would be a fierce, friendly, and collaborative code fest.

Meet the Gladiators

Six teams, each with 3–5 members, jumped into this coding battleground:

  • Bro Code

  • Code Red

  • Ctrl Alt Defeat

  • Code Wizards

  • The Tech Titans

  • Black Pearl

Each team was given the freedom to either pick from a curated list of internal problem statements or come up with their own. Some of the challenge themes included: Internal Idea & Innovation Hub, Skills & Project Matchmaker, and Ready-to-Integrate AI Package. The open-ended format allowed teams to think outside the box, pick what resonated with them, and own the solution-building process.


Let the Hacking Begin!

Using a chit system, teams were randomly assigned dedicated spaces to work from, and the presentation order was decided — adding an element of surprise and fun!

Day 1 saw intense brainstorming, constant collaboration, design sprints, and non-stop coding. Teams powered through challenges, pivoted when needed, and showcased problem-solving spirit.

Evaluation with Impact

Every team presented its solution to our esteemed judges, who evaluated them across several crucial dimensions: tech stack used, task distribution among team members, solution complexity, optimization and relevance, future scope and real-world impact, scalability and deployment plans, UI design, and AI components.

The judging wasn’t just about scoring — it was about constructive insights. Judges offered thought-provoking feedback and suggestions, pushing teams to reflect more deeply on their solutions and discover new layers of improvement. A heartfelt thank you to each judge for their valuable time and perspectives.

This marked the official beginning of the code battle — from here on, it was about execution, collaboration, and pushing through to build something meaningful.


Time to Shine (Day 2)

As Day 2 commenced, the teams picked up right where they left off — crushing it with creativity and clean code. The GitHub repository was set up by the organizing team, allowing all code commits and pushes to be tracked live right from the start of the event. The Final Showdown kicked off around 4 PM on April 11, with the spotlight on each team to demo their working prototypes.

A team representative collected chits to decide the final presentation order. In the audience this time were not just internal leaders but also a special client guest, Sravan Vashista (IT CX Director and IT Country GM, Keysight Technologies), and our GM Sumantra Nandi, adding more weight to the final judgment.

Each team presented with full energy, integrated judge and audience feedback, and answered queries with clarity and confidence. The tension was real, and the performances were exceptional.

And the Winners Are…

Before the grand prize distribution, our guest speaker, Sravan Vashista, delivered an insightful and encouraging address. He applauded the energy in the room, appreciated the quality of solutions, and emphasized the importance of owning challenges and solving from within. The prize distribution was a celebration in itself — beaming faces, loud cheers, proud smiles, and a sense of fulfillment that only comes from doing something truly impactful.

After two action-packed days of code, creativity, and collaboration, it was finally time to crown our champions.

🥇 Code Red emerged victorious as the Perfathon 2025 Champions, thanks to their standout performance, technical depth, clear problem-solving approach, and powerful teamwork.

🥈 Code Wizards claimed the First Runners-Up spot with their solution and thoughtful execution.

🥉 Black Pearl took home the Second Runners-Up title, impressing everyone with their strong team synergy.

Each team received trophies and appreciation, but more importantly, they took home the experience of being real solution creators.


🙌 Thank You, Team Perfathon!

A massive shoutout to our organizers, volunteers, and judges who made Perfathon a reality. Huge thanks to our leadership and HR team for their continuous support and encouragement, and to every participant who made the event what it was — memorable, meaningful, and magical.


We’re already looking forward to Perfathon 2026. Until then, let’s keep the hacker spirit alive and continue being the solution-makers our organization needs.

Android Development Codelab: Mastering Advanced Concepts
https://blogs.perficient.com/2025/04/10/android-development-codelab-mastering-advanced-concepts/
Thu, 10 Apr 2025

 

This guide will walk you through building a small application step-by-step, focusing on integrating several powerful tools and concepts essential for modern Android development.

What We’ll Cover:

  • Jetpack Compose: Building the UI declaratively.
  • NoSQL Database (Firestore): Storing and retrieving data in the cloud.
  • WorkManager: Running reliable background tasks.
  • Build Flavors: Creating different versions of the app (e.g., dev vs. prod).
  • Proguard/R8: Shrinking and obfuscating your code for release.
  • Firebase App Distribution: Distributing test builds easily.
  • CI/CD (GitHub Actions): Automating the build and distribution process.

The Goal: Build a “Task Reporter” app. Users can add simple task descriptions. These tasks are saved to Firestore. A background worker will periodically “report” (log a message or update a counter in Firestore) that the app is active. We’ll have dev and prod flavors pointing to different Firestore collections/data and distribute the dev build for testing.

Prerequisites:

  • Android Studio (latest stable version recommended).
  • Basic understanding of Kotlin and Android development fundamentals.
  • Familiarity with Jetpack Compose basics (Composable functions, State).
  • A Google account to use Firebase.
  • A GitHub account (for CI/CD).

Let’s get started!


Step 0: Project Setup

  1. Create New Project: Open Android Studio -> New Project -> Empty Activity (choose Compose).
  2. Name: AdvancedConceptsApp (or your choice).
  3. Package Name: Your preferred package name (e.g., com.yourcompany.advancedconceptsapp).
  4. Language: Kotlin.
  5. Minimum SDK: API 24 or higher.
  6. Build Configuration Language: Kotlin DSL (build.gradle.kts).
  7. Click Finish.

Step 1: Firebase Integration (Firestore & App Distribution)

  1. Connect to Firebase: In Android Studio: Tools -> Firebase.
    • In the Assistant panel, find Firestore. Click “Get Started with Cloud Firestore”. Click “Connect to Firebase”. Follow the prompts to create a new Firebase project or connect to an existing one.
    • Click “Add Cloud Firestore to your app”. Accept changes to your build.gradle.kts (or build.gradle) files. This adds the necessary dependencies.
    • Go back to the Firebase Assistant, find App Distribution. Click “Get Started”. Add the App Distribution Gradle plugin by clicking the button. Accept changes.
  2. Enable Services in Firebase Console:
    • Go to the Firebase Console and select your project.
    • Enable Firestore Database (start in Test mode).
    • In the left menu, go to Build -> Firestore Database. Click “Create database”.
      • Start in Test mode for easier initial development (we’ll secure it later if needed). Choose a location close to your users. Click “Enable”.
    • Ensure App Distribution is accessible (no setup needed here yet).
  3. Download Initial google-services.json:
    • In Firebase Console -> Project Settings (gear icon) -> Your apps.
    • Ensure your Android app (using the base package name like com.yourcompany.advancedconceptsapp) is registered. If not, add it.
    • Download the google-services.json file.
    • Switch Android Studio to the Project view and place the file inside the app/ directory.
    • Note: We will likely replace this file in Step 4 after configuring build flavors.

Step 2: Building the Basic UI with Compose

Let’s create a simple UI to add and display tasks.

  1. Dependencies: Ensure necessary dependencies for Compose, ViewModel, Firestore, and WorkManager are in app/build.gradle.kts.
    app/build.gradle.kts

    
    dependencies {
        // Core & Lifecycle & Activity
        implementation("androidx.core:core-ktx:1.13.1") // Use latest versions
        implementation("androidx.lifecycle:lifecycle-runtime-ktx:2.8.1")
        implementation("androidx.activity:activity-compose:1.9.0")
        // Compose
        implementation(platform("androidx.compose:compose-bom:2024.04.01")) // Check latest BOM
        implementation("androidx.compose.ui:ui")
        implementation("androidx.compose.ui:ui-graphics")
        implementation("androidx.compose.ui:ui-tooling-preview")
        implementation("androidx.compose.material3:material3")
        implementation("androidx.lifecycle:lifecycle-viewmodel-compose:2.8.1")
        // Firebase
        implementation(platform("com.google.firebase:firebase-bom:33.0.0")) // Check latest BOM
        implementation("com.google.firebase:firebase-firestore-ktx")
        // WorkManager
        implementation("androidx.work:work-runtime-ktx:2.9.0") // Check latest version
    }
                    

    Sync Gradle files.

  2. Task Data Class: Create data/Task.kt.
    data/Task.kt

    
    package com.yourcompany.advancedconceptsapp.data
    
    import com.google.firebase.firestore.DocumentId
    
    data class Task(
        @DocumentId
        val id: String = "",
        val description: String = "",
        val timestamp: Long = System.currentTimeMillis()
    ) // Firestore needs a no-argument constructor for deserialization; because every property has a default value, Kotlin generates one automatically, so no explicit secondary constructor is required.
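Since every constructor parameter above has a default value, Kotlin emits the parameterless constructor that reflection-based mappers like Firestore rely on. A plain-Kotlin sketch (no Firebase involved; TaskLike is a hypothetical stand-in for Task) demonstrates this:

```kotlin
// TaskLike is a hypothetical stand-in for Task, with no Firebase annotations.
data class TaskLike(
    val id: String = "",
    val description: String = "",
    val timestamp: Long = 0L
)

fun main() {
    // Because all constructor parameters have defaults, the Kotlin compiler
    // also generates a synthetic no-argument constructor, which is what
    // reflection-based mappers call.
    val instance = TaskLike::class.java.getDeclaredConstructor().newInstance()
    println(instance) // every field holds its default value
}
```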
                    
  3. ViewModel: Create ui/TaskViewModel.kt. (We’ll update the collection name later).
    ui/TaskViewModel.kt

    
    package com.yourcompany.advancedconceptsapp.ui
    
    import androidx.lifecycle.ViewModel
    import androidx.lifecycle.viewModelScope
    import com.google.firebase.firestore.ktx.firestore
    import com.google.firebase.firestore.ktx.toObjects
    import com.google.firebase.ktx.Firebase
    import com.yourcompany.advancedconceptsapp.data.Task
    // Import BuildConfig later when needed
    import kotlinx.coroutines.flow.MutableStateFlow
    import kotlinx.coroutines.flow.StateFlow
    import kotlinx.coroutines.launch
    import kotlinx.coroutines.tasks.await
    
    // Temporary placeholder - will be replaced by BuildConfig field
    const val TEMPORARY_TASKS_COLLECTION = "tasks"
    
    class TaskViewModel : ViewModel() {
        private val db = Firebase.firestore
        // Use temporary constant for now
        private val tasksCollection = db.collection(TEMPORARY_TASKS_COLLECTION)
    
        private val _tasks = MutableStateFlow<List<Task>>(emptyList())
        val tasks: StateFlow<List<Task>> = _tasks
    
        private val _error = MutableStateFlow<String?>(null)
        val error: StateFlow<String?> = _error
    
        init {
            loadTasks()
        }
    
        fun loadTasks() {
            viewModelScope.launch {
                try {
                     tasksCollection.orderBy("timestamp", com.google.firebase.firestore.Query.Direction.DESCENDING)
                        .addSnapshotListener { snapshots, e ->
                            if (e != null) {
                                _error.value = "Error listening: ${e.localizedMessage}"
                                return@addSnapshotListener
                            }
                            _tasks.value = snapshots?.toObjects<Task>() ?: emptyList()
                            _error.value = null
                        }
                } catch (e: Exception) {
                    _error.value = "Error loading: ${e.localizedMessage}"
                }
            }
        }
    
         fun addTask(description: String) {
            if (description.isBlank()) {
                _error.value = "Task description cannot be empty."
                return
            }
            viewModelScope.launch {
                 try {
                     val task = Task(description = description, timestamp = System.currentTimeMillis())
                     tasksCollection.add(task).await()
                     _error.value = null
                 } catch (e: Exception) {
                    _error.value = "Error adding: ${e.localizedMessage}"
                }
            }
        }
    }
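One refinement worth noting (a sketch, not part of the main flow): addSnapshotListener returns a ListenerRegistration, and detaching it in onCleared() prevents the listener from outliving the ViewModel:

```kotlin
// Sketch only. Assumes the same imports as above plus
// com.google.firebase.firestore.ListenerRegistration.
private var registration: ListenerRegistration? = null

fun loadTasks() {
    registration?.remove() // drop any previous listener before attaching a new one
    registration = tasksCollection
        .orderBy("timestamp", com.google.firebase.firestore.Query.Direction.DESCENDING)
        .addSnapshotListener { snapshots, e ->
            if (e != null) {
                _error.value = "Error listening: ${e.localizedMessage}"
                return@addSnapshotListener
            }
            _tasks.value = snapshots?.toObjects<Task>() ?: emptyList()
            _error.value = null
        }
}

override fun onCleared() {
    registration?.remove() // stop listening when the ViewModel is destroyed
    super.onCleared()
}
```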
                    
  4. Main Screen Composable: Create ui/TaskScreen.kt.
    ui/TaskScreen.kt

    
    package com.yourcompany.advancedconceptsapp.ui
    
    // Imports: androidx.compose.*, androidx.lifecycle.viewmodel.compose.viewModel, java.text.SimpleDateFormat, etc.
    import androidx.compose.foundation.layout.*
    import androidx.compose.foundation.lazy.LazyColumn
    import androidx.compose.foundation.lazy.items
    import androidx.compose.material3.*
    import androidx.compose.runtime.*
    import androidx.compose.ui.Alignment
    import androidx.compose.ui.Modifier
    import androidx.compose.ui.unit.dp
    import androidx.lifecycle.viewmodel.compose.viewModel
    import com.yourcompany.advancedconceptsapp.data.Task
    import java.text.SimpleDateFormat
    import java.util.Date
    import java.util.Locale
    import androidx.compose.ui.res.stringResource
    import com.yourcompany.advancedconceptsapp.R // Import R class
    
    @OptIn(ExperimentalMaterial3Api::class) // For TopAppBar
    @Composable
    fun TaskScreen(taskViewModel: TaskViewModel = viewModel()) {
        val tasks by taskViewModel.tasks.collectAsState()
        val errorMessage by taskViewModel.error.collectAsState()
        var taskDescription by remember { mutableStateOf("") }
    
        Scaffold(
            topBar = {
                TopAppBar(title = { Text(stringResource(id = R.string.app_name)) }) // Use resource for flavor changes
            }
        ) { paddingValues ->
            Column(modifier = Modifier.padding(paddingValues).padding(16.dp).fillMaxSize()) {
                // Input Row
                Row(verticalAlignment = Alignment.CenterVertically, modifier = Modifier.fillMaxWidth()) {
                    OutlinedTextField(
                        value = taskDescription,
                        onValueChange = { taskDescription = it },
                        label = { Text("New Task Description") },
                        modifier = Modifier.weight(1f),
                        singleLine = true
                    )
                    Spacer(modifier = Modifier.width(8.dp))
                    Button(onClick = {
                        taskViewModel.addTask(taskDescription)
                        taskDescription = ""
                    }) { Text("Add") }
                }
                Spacer(modifier = Modifier.height(16.dp))
                // Error Message
                errorMessage?.let { Text(it, color = MaterialTheme.colorScheme.error, modifier = Modifier.padding(bottom = 8.dp)) }
                // Task List
                if (tasks.isEmpty() && errorMessage == null) {
                    Text("No tasks yet. Add one!")
                } else {
                    LazyColumn(modifier = Modifier.weight(1f)) {
                        items(tasks, key = { it.id }) { task ->
                            TaskItem(task)
                            Divider()
                        }
                    }
                }
            }
        }
    }
    
    @Composable
    fun TaskItem(task: Task) {
        val dateFormat = remember { SimpleDateFormat("yyyy-MM-dd HH:mm", Locale.getDefault()) }
        Row(modifier = Modifier.fillMaxWidth().padding(vertical = 8.dp), verticalAlignment = Alignment.CenterVertically) {
            Column(modifier = Modifier.weight(1f)) {
                Text(task.description, style = MaterialTheme.typography.bodyLarge)
                Text("Added: ${dateFormat.format(Date(task.timestamp))}", style = MaterialTheme.typography.bodySmall)
            }
        }
    }
                    
  5. Update MainActivity.kt: Set the content to TaskScreen.
    MainActivity.kt

    
    package com.yourcompany.advancedconceptsapp
    
    import android.os.Bundle
    import androidx.activity.ComponentActivity
    import androidx.activity.compose.setContent
    import androidx.compose.foundation.layout.fillMaxSize
    import androidx.compose.material3.MaterialTheme
    import androidx.compose.material3.Surface
    import androidx.compose.ui.Modifier
    import com.yourcompany.advancedconceptsapp.ui.TaskScreen
    import com.yourcompany.advancedconceptsapp.ui.theme.AdvancedConceptsAppTheme
    // Imports for WorkManager scheduling will be added in Step 3
    
    class MainActivity : ComponentActivity() {
        override fun onCreate(savedInstanceState: Bundle?) {
            super.onCreate(savedInstanceState)
            setContent {
                AdvancedConceptsAppTheme {
                    Surface(modifier = Modifier.fillMaxSize(), color = MaterialTheme.colorScheme.background) {
                        TaskScreen()
                    }
                }
            }
            // TODO: Schedule WorkManager job in Step 3
        }
    }
                    
  6. Run the App: Test basic functionality. Tasks should appear and persist in Firestore’s `tasks` collection (initially).

Step 3: WorkManager Implementation

Create a background worker for periodic reporting.

  1. Create the Worker: Create worker/ReportingWorker.kt. (Collection name will be updated later).
    worker/ReportingWorker.kt

    
    package com.yourcompany.advancedconceptsapp.worker
    
    import android.content.Context
    import android.util.Log
    import androidx.work.CoroutineWorker
    import androidx.work.WorkerParameters
    import com.google.firebase.firestore.ktx.firestore
    import com.google.firebase.ktx.Firebase
    // Import BuildConfig later when needed
    import kotlinx.coroutines.tasks.await
    
    // Temporary placeholder - will be replaced by BuildConfig field
    const val TEMPORARY_USAGE_LOG_COLLECTION = "usage_logs"
    
    class ReportingWorker(appContext: Context, workerParams: WorkerParameters) :
        CoroutineWorker(appContext, workerParams) {
    
        companion object { const val TAG = "ReportingWorker" }
        private val db = Firebase.firestore
    
        override suspend fun doWork(): Result {
            Log.d(TAG, "Worker started: Reporting usage.")
            return try {
                val logEntry = hashMapOf(
                    "timestamp" to System.currentTimeMillis(),
                    "message" to "App usage report.",
                    "worker_run_id" to id.toString()
                )
                // Use temporary constant for now
                db.collection(TEMPORARY_USAGE_LOG_COLLECTION).add(logEntry).await()
                Log.d(TAG, "Worker finished successfully.")
                Result.success()
            } catch (e: Exception) {
                Log.e(TAG, "Worker failed", e)
                Result.failure()
            }
        }
    }
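A possible refinement (sketch only, not part of the main flow): returning Result.retry() for transient failures lets WorkManager re-run the job with backoff instead of giving up:

```kotlin
// Sketch: inside doWork(), treat failures that may succeed later
// (e.g., network issues) as retryable.
return try {
    db.collection(TEMPORARY_USAGE_LOG_COLLECTION).add(logEntry).await()
    Result.success()
} catch (e: Exception) {
    Log.e(TAG, "Worker failed; scheduling retry", e)
    Result.retry()
}
```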
                    
  2. Schedule the Worker: Update MainActivity.kt‘s onCreate method.
    MainActivity.kt additions

    
    // Add these imports to MainActivity.kt
    import android.content.Context
    import android.util.Log
    import androidx.work.*
    import com.yourcompany.advancedconceptsapp.worker.ReportingWorker
    import java.util.concurrent.TimeUnit
    
    // Inside MainActivity class, after setContent { ... } block in onCreate
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContent {
            // ... existing code ...
        }
        // Schedule the worker
        schedulePeriodicUsageReport(this)
    }
    
    // Add this function to MainActivity class
    private fun schedulePeriodicUsageReport(context: Context) {
        val constraints = Constraints.Builder()
            .setRequiredNetworkType(NetworkType.CONNECTED)
            .build()
    
        val reportingWorkRequest = PeriodicWorkRequestBuilder<ReportingWorker>(
                1, TimeUnit.HOURS // ~ every hour
             )
            .setConstraints(constraints)
            .addTag(ReportingWorker.TAG)
            .build()
    
        WorkManager.getInstance(context).enqueueUniquePeriodicWork(
            ReportingWorker.TAG,
            ExistingPeriodicWorkPolicy.KEEP,
            reportingWorkRequest
        )
        Log.d("MainActivity", "Periodic reporting work scheduled.")
    }
                    
  3. Test WorkManager:
    • Run the app. Check Logcat for messages from ReportingWorker and MainActivity about scheduling.
    • WorkManager tasks don’t run immediately, especially periodic ones. You can use ADB commands to force execution for testing:
      • Find your package name: com.yourcompany.advancedconceptsapp
      • Force run jobs: adb shell cmd jobscheduler run -f com.yourcompany.advancedconceptsapp 999 (the trailing 999 is a job ID; it is usually sufficient for WorkManager’s scheduled jobs).
      • Or use Android Studio’s App Inspection tab -> Background Task Inspector to view and trigger workers.
    • Check your Firestore Console for the usage_logs collection.
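For quick manual checks, you can also enqueue a one-off run of the same worker instead of waiting for the periodic schedule (a testing convenience, not part of the main flow; assumes a Context is in scope, e.g. this in MainActivity):

```kotlin
// Sketch: trigger ReportingWorker once, immediately (still subject to its constraints).
val testRequest = OneTimeWorkRequestBuilder<ReportingWorker>().build()
WorkManager.getInstance(context).enqueue(testRequest)
```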

Step 4: Build Flavors (dev vs. prod)

Create dev and prod flavors for different environments.

  1. Configure app/build.gradle.kts:
    app/build.gradle.kts

    
    android {
        // ... namespace, compileSdk, defaultConfig ...
    
        // ****** Enable BuildConfig generation ******
        buildFeatures {
            buildConfig = true
        }
        // *******************************************
    
        flavorDimensions += "environment"
    
        productFlavors {
            create("dev") {
                dimension = "environment"
                applicationIdSuffix = ".dev" // CRITICAL: Changes package name for dev builds
                versionNameSuffix = "-dev"
                resValue("string", "app_name", "Task Reporter (Dev)")
                buildConfigField("String", "TASKS_COLLECTION", "\"tasks_dev\"")
                buildConfigField("String", "USAGE_LOG_COLLECTION", "\"usage_logs_dev\"")
            }
            create("prod") {
                dimension = "environment"
                resValue("string", "app_name", "Task Reporter")
                buildConfigField("String", "TASKS_COLLECTION", "\"tasks\"")
                buildConfigField("String", "USAGE_LOG_COLLECTION", "\"usage_logs\"")
            }
        }
    
        // ... buildTypes, compileOptions, etc ...
    }
                    

    Sync Gradle files.

    Important: We added applicationIdSuffix = ".dev". This means the actual package name for your development builds will become something like com.yourcompany.advancedconceptsapp.dev. This requires an update to your Firebase project setup, explained next. Also note the buildFeatures { buildConfig = true } block which is required to use buildConfigField.
  2. Handling Firebase for Suffixed Application IDs

    Because the `dev` flavor now has a different application ID (`…advancedconceptsapp.dev`), the original `google-services.json` file (downloaded in Step 1) will not work for `dev` builds, causing a “No matching client found” error during build.

    You must add this new Application ID to your Firebase project:

    1. Go to Firebase Console: Open your project settings (gear icon).
    2. Your apps: Scroll down to the “Your apps” card.
    3. Add app: Click “Add app” and select the Android icon.
    4. Register dev app:
      • Package name: Enter the exact suffixed ID: com.yourcompany.advancedconceptsapp.dev (replace `com.yourcompany.advancedconceptsapp` with your actual base package name).
      • Nickname (Optional): “Task Reporter Dev”.
      • SHA-1 (Optional but Recommended): Add the debug SHA-1 key from `./gradlew signingReport`.
    5. Register and Download: Click “Register app”. Crucially, download the new google-services.json file offered. This file now contains configurations for BOTH your base ID and the `.dev` suffixed ID.
    6. Replace File: In Android Studio (Project view), delete the old google-services.json from the app/ directory and replace it with the **newly downloaded** one.
    7. Skip SDK steps: You can skip the remaining steps in the Firebase console for adding the SDK.
    8. Clean & Rebuild: Back in Android Studio, perform a Build -> Clean Project and then Build -> Rebuild Project.
    Now your project is correctly configured in Firebase for both `dev` (with the `.dev` suffix) and `prod` (base package name) variants using a single `google-services.json`.
  3. Create Flavor-Specific Source Sets:
    • Switch to Project view in Android Studio.
    • Right-click on app/src -> New -> Directory. Name it dev.
    • Inside dev, create res/values/ directories.
    • Right-click on app/src -> New -> Directory. Name it prod.
    • Inside prod, create res/values/ directories.
    • (Optional but good practice): You can move the default app_name string from app/src/main/res/values/strings.xml into both app/src/dev/res/values/strings.xml and app/src/prod/res/values/strings.xml, or rely solely on the resValue definitions in Gradle (as done above). resValue is often simpler for a single string like app_name; if you had many flavor-specific resources (layouts, drawables), you would place them in the respective dev/res or prod/res folders.
  4. Use Build Config Fields in Code:
      • Update TaskViewModel.kt and ReportingWorker.kt to use BuildConfig instead of temporary constants.

    TaskViewModel.kt change

    
    // Add this import
    import com.yourcompany.advancedconceptsapp.BuildConfig
    
    // Replace the temporary constant usage
    // const val TEMPORARY_TASKS_COLLECTION = "tasks" // Remove this line
    private val tasksCollection = db.collection(BuildConfig.TASKS_COLLECTION) // Use build config field
                        

    ReportingWorker.kt change

    
    // Add this import
    import com.yourcompany.advancedconceptsapp.BuildConfig
    
    // Replace the temporary constant usage
    // const val TEMPORARY_USAGE_LOG_COLLECTION = "usage_logs" // Remove this line
    
    // ... inside doWork() ...
    db.collection(BuildConfig.USAGE_LOG_COLLECTION).add(logEntry).await() // Use build config field
                        

    Modify TaskScreen.kt to use the flavor-specific app name. resValue handles this automatically if the title already references @string/app_name (as the TopAppBar above does); if you had hard-coded the title, load it from resources instead:

     // In TaskScreen.kt (if needed)
    import androidx.compose.ui.res.stringResource
    import com.yourcompany.advancedconceptsapp.R // Import R class
    // Inside Scaffold -> topBar

    TopAppBar(title = { Text(stringResource(id = R.string.app_name)) }) // Use string resource

  5. Select Build Variant & Test:
    • In Android Studio, go to Build -> Select Build Variant… (or use the “Build Variants” panel usually docked on the left).
    • You can now choose between devDebug, devRelease, prodDebug, and prodRelease.
    • Select devDebug. Run the app. The title should say “Task Reporter (Dev)”. Data should go to tasks_dev and usage_logs_dev in Firestore.
    • Select prodDebug. Run the app. The title should be “Task Reporter”. Data should go to tasks and usage_logs.
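As a quick sanity check while testing variants, you can log which collections the active build will use (a debugging aid, not required for the app to work):

```kotlin
import android.util.Log
import com.yourcompany.advancedconceptsapp.BuildConfig

// Logs the Firestore collections this build variant targets,
// e.g. "tasks_dev"/"usage_logs_dev" for dev builds.
fun logActiveEnvironment() {
    Log.d(
        "Environment",
        "tasks -> ${BuildConfig.TASKS_COLLECTION}, " +
        "logs -> ${BuildConfig.USAGE_LOG_COLLECTION}"
    )
}
```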

Step 5: Proguard/R8 Configuration (for Release Builds)

R8 is the default code shrinker and obfuscator in Android Studio (the successor to ProGuard). It is applied to release builds once minification is enabled. We need to ensure it doesn’t break the app, especially Firestore’s reflection-based data mapping.

    1. Review app/build.gradle.kts Release Build Type:
      app/build.gradle.kts

      
      android {
          // ...
          buildTypes {
              release {
                isMinifyEnabled = true // Enables R8 for release builds (new-project templates default to false, so set it explicitly)
                isShrinkResources = true // Also strip unused resources from the release build
                  proguardFiles(
                      getDefaultProguardFile("proguard-android-optimize.txt"),
                      "proguard-rules.pro" // Our custom rules file
                  )
              }
              debug {
                  isMinifyEnabled = false // Usually false for debug
                  proguardFiles(
                      getDefaultProguardFile("proguard-android-optimize.txt"),
                      "proguard-rules.pro"
                  )
              }
          }
          // ...
      }
                 

      isMinifyEnabled = true enables R8 for the release build type.

    2. Configure app/proguard-rules.pro:
      • Firestore uses reflection to serialize/deserialize data classes. R8 might remove or rename classes/fields needed for this process. We need to add “keep” rules.
      • Open (or create) the app/proguard-rules.pro file. Add the following:
      
      # Keep Task data class and its members for Firestore serialization
      -keep class com.yourcompany.advancedconceptsapp.data.Task { <init>(...); *; }
      # Keep any other data classes used with Firestore similarly
      # -keep class com.yourcompany.advancedconceptsapp.data.AnotherFirestoreModel { <init>(...); *; }
      
      # Keep Coroutine builders and intrinsics (often needed, though AGP/R8 handle some automatically)
      -keepnames class kotlinx.coroutines.intrinsics.** { *; }
      
      # Keep companion objects for Workers if needed (sometimes R8 removes them)
      -keepclassmembers class * extends androidx.work.ListenableWorker {
          public static ** Companion;
      }
      
      # Keep specific fields/methods if using reflection elsewhere
      # -keepclassmembers class com.example.SomeClass {
      #    private java.lang.String someField;
      #    public void someMethod();
      # }
      
      # Add rules for any other libraries that require them (e.g., Retrofit, Gson, etc.)
      # Consult library documentation for necessary Proguard/R8 rules.
    • Explanation:
      • -keep class ... { <init>(...); *; }: Keeps the Task class, its constructors (<init>), and all its fields/methods (*) from being removed or renamed. This is crucial for Firestore.
      • -keepnames: Prevents renaming but allows removal if unused.
      • -keepclassmembers: Keeps specific members within a class.
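One way to verify the keep rules took effect (a sketch; paths assume the default AGP output locations) is to inspect the R8 mapping file after building a release variant:

```shell
# After building prodRelease, the mapping file lists every class R8 processed.
# Task should appear unrenamed if the keep rule applied.
grep "advancedconceptsapp.data.Task" \
  app/build/outputs/mapping/prodRelease/mapping.txt
```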

3. Test the Release Build:

    • Select the prodRelease build variant.
    • Go to Build -> Generate Signed Bundle / APK…. Choose APK.
    • Create a new keystore or use an existing one (follow the prompts). Remember the passwords!
    • Select prodRelease as the variant. Click Finish.
    • Android Studio will build the release APK. Find it (usually in app/prod/release/).
    • Install this APK manually on a device: adb install app-prod-release.apk.
    • Test thoroughly. Can you add tasks? Do they appear? Does the background worker still log to Firestore (check usage_logs)? If it crashes or data doesn’t save/load correctly, R8 likely removed something important. Check Logcat for errors (often ClassNotFoundException or NoSuchMethodError) and adjust your proguard-rules.pro file accordingly.

 


 

Step 6: Firebase App Distribution (for Dev Builds)

Configure Gradle to upload development builds to testers via Firebase App Distribution.

  1. Download a private key: In the Firebase console, go to Project settings (gear icon) -> Service accounts -> Firebase Admin SDK and click the “Generate new private key” button. Move the downloaded file (e.g., api-project-xxx-yyy.json) to the project root, at the same level as the app folder. Keep this file local only; do not push it to the remote repository, because it contains sensitive credentials.
  2. Configure App Distribution Plugin in app/build.gradle.kts:
    app/build.gradle.kts

    
    // Apply the plugin at the top
    plugins {
        // ... other plugins id("com.android.application"), id("kotlin-android"), etc.
        alias(libs.plugins.google.firebase.appdistribution)
    }
    
    android {
        // ... buildFeatures, flavorDimensions, productFlavors ...
    
        buildTypes {
            getByName("release") {
                isMinifyEnabled = true
                isShrinkResources = true
                proguardFiles(
                    getDefaultProguardFile("proguard-android-optimize.txt"),
                    "proguard-rules.pro"
                )
                // App Distribution is configured per build type (it can also be
                // overridden per flavor). Repeat this block inside debug if you
                // upload debug builds as well.
                firebaseAppDistribution {
                    artifactType = "APK"
                    releaseNotes = "Latest build with fixes/features"
                    testers = "briew@example.com, bri@example.com, cal@example.com"
                    // Do not commit the key file or push this path to the remote
                    // repository; the service account JSON contains sensitive data.
                    serviceCredentialsFile = "$rootDir/api-project-xxx-yyy.json"
                }
            }
            getByName("debug") {
                isMinifyEnabled = false
                proguardFiles(
                    getDefaultProguardFile("proguard-android-optimize.txt"),
                    "proguard-rules.pro"
                )
            }
        }
    }

    Add the plugin version to gradle/libs.versions.toml:

    
    [versions]
    googleFirebaseAppdistribution = "5.1.1"
    [plugins]
    google-firebase-appdistribution = { id = "com.google.firebase.appdistribution", version.ref = "googleFirebaseAppdistribution" }
    
    Ensure the plugin is also declared, with apply false, in the project-level build.gradle.kts:

    project build.gradle.kts

    
    plugins {
        // ...
        alias(libs.plugins.google.firebase.appdistribution) apply false
    }
                    

    Sync Gradle files.

  3. Upload a Build Manually:
    • Select the desired variant (e.g., devDebug, devRelease, prodDebug, prodRelease).
    • In the Android Studio terminal, run the command for each variant you want to upload:
      • ./gradlew assembleProdRelease appDistributionUploadProdRelease
      • ./gradlew assembleDevRelease appDistributionUploadDevRelease
      • ./gradlew assembleProdDebug appDistributionUploadProdDebug
      • ./gradlew assembleDevDebug appDistributionUploadDevDebug
    • Check Firebase Console -> App Distribution (select the .dev app for dev builds). Add testers or use a configured tester group.

Step 7: CI/CD with GitHub Actions

Automate building and distributing the `dev` build on push to a specific branch.

  1. Create a GitHub Repository: Create a new repository on GitHub and push your project code to it.
  2. Add the FIREBASE_APP_ID secret:
    • In the Firebase console, go to Project Settings -> General -> Your apps and copy the App ID of the com.yourcompany.advancedconceptsapp.dev app (format 1:xxxxxxxxx:android:yyyyyyyyyy).
    • In the GitHub repository, go to Settings -> Secrets and variables -> Actions -> New repository secret.
    • Name: FIREBASE_APP_ID; value: the App ID you copied.
  3. Add the FIREBASE_SERVICE_ACCOUNT_KEY_JSON secret:
    • Open api-project-xxx-yyy.json in the project root and copy its content.
    • In the GitHub repository, go to Settings -> Secrets and variables -> Actions -> New repository secret.
    • Name: FIREBASE_SERVICE_ACCOUNT_KEY_JSON; value: the JSON content.
  4. Create the GitHub Actions Workflow File:
    • In your project root, create the directories .github/workflows/.
    • Inside .github/workflows/, create a file named android_build_distribute.yml with the following content:
      name: Android CI 
      
      on: 
        push: 
          branches: [ "main" ] 
        pull_request: 
          branches: [ "main" ] 
      jobs: 
        build: 
          runs-on: ubuntu-latest 
          steps: 
          - uses: actions/checkout@v3
          - name: set up JDK 17 
            uses: actions/setup-java@v3 
            with: 
              java-version: '17' 
              distribution: 'temurin' 
              cache: gradle 
          - name: Grant execute permission for gradlew 
            run: chmod +x ./gradlew 
          - name: Build devRelease APK
            run: ./gradlew assembleDevRelease
          - name: upload artifact to Firebase App Distribution
            uses: wzieba/Firebase-Distribution-Github-Action@v1
            with:
              appId: ${{ secrets.FIREBASE_APP_ID }}
              serviceCredentialsFileContent: ${{ secrets.FIREBASE_SERVICE_ACCOUNT_KEY_JSON }}
              groups: testers
              file: app/build/outputs/apk/dev/release/app-dev-release-unsigned.apk
      
  5. Commit and Push: Commit the .github/workflows/android_build_distribute.yml file and push it to your main branch on GitHub.
  6. Verify: Go to the “Actions” tab in your GitHub repository. You should see the workflow running. If it succeeds, check Firebase App Distribution for the new build. Your testers should be notified.

 


 

Step 8: Testing and Verification Summary

    • Flavors: Switch between devDebug and prodDebug in Android Studio. Verify the app name changes and data goes to the correct Firestore collections (tasks_dev/tasks, usage_logs_dev/usage_logs).
    • WorkManager: Use the App Inspection -> Background Task Inspector or ADB commands to verify the ReportingWorker runs periodically and logs data to the correct Firestore collection based on the selected flavor.
    • R8/Proguard: Install and test the prodRelease APK manually. Ensure all features work, especially adding/viewing tasks (Firestore interaction). Check Logcat for crashes related to missing classes/methods.
    • App Distribution: Make sure testers receive invites for the devDebug (or devRelease) builds uploaded manually or via CI/CD. Ensure they can install and run the app.
    • CI/CD: Check the GitHub Actions logs for successful builds and uploads after pushing to the main branch. Verify the build appears in Firebase App Distribution.

 

Conclusion

Congratulations! You’ve navigated complex Android topics including Firestore, WorkManager, Compose, Flavors (with correct Firebase setup), R8, App Distribution, and CI/CD.

This project provides a solid foundation. From here, you can explore:

    • More complex WorkManager chains or constraints.
    • Deeper R8/Proguard rule optimization.
    • More sophisticated CI/CD pipelines (deploying signed APKs/bundles, running tests, publishing to Google Play).
    • Using different NoSQL databases or local caching with Room.
    • Advanced Compose UI patterns and state management.
    • Firebase Authentication, Cloud Functions, etc.

If you want to have access to the full code in my GitHub repository, contact me in the comments.


 

Project Folder Structure (Conceptual)


AdvancedConceptsApp/
├── .git/
├── .github/workflows/android_build_distribute.yml
├── .gradle/
├── app/
│   ├── build/
│   ├── libs/
│   ├── src/
│   │   ├── main/           # Common code, res, AndroidManifest.xml
│   │   │   └── java/com/yourcompany/advancedconceptsapp/
│   │   │       ├── data/Task.kt
│   │   │       ├── ui/TaskScreen.kt, TaskViewModel.kt, theme/
│   │   │       ├── worker/ReportingWorker.kt
│   │   │       └── MainActivity.kt
│   │   ├── dev/            # Dev flavor source set (optional overrides)
│   │   ├── prod/           # Prod flavor source set (optional overrides)
│   │   ├── test/           # Unit tests
│   │   └── androidTest/    # Instrumentation tests
│   ├── google-services.json # *** IMPORTANT: Contains configs for BOTH package names ***
│   ├── build.gradle.kts    # App-level build script
│   └── proguard-rules.pro # R8/Proguard rules
├── api-project-xxx-yyy.json # Firebase service account key json
├── gradle/wrapper/
├── build.gradle.kts      # Project-level build script
├── gradle.properties
├── gradlew
├── gradlew.bat
└── settings.gradle.kts
        

 

Managed Service Offering (MSO) Support Ticketing System https://blogs.perficient.com/2025/04/10/managed-service-offering-mso-support-ticketing-system/ https://blogs.perficient.com/2025/04/10/managed-service-offering-mso-support-ticketing-system/#respond Thu, 10 Apr 2025 06:26:07 +0000 https://blogs.perficient.com/?p=379087

A ticketing system, such as a Dynamic Tracking Tool, gives MSO support teams a centralized and efficient way to manage incidents and service requests. Here are some of the key benefits.

  1. Organize and triage cases: With a ticketing system, MSO support teams can easily prioritize cases based on their priority, status, and other relevant information. This allows them to quickly identify and resolve critical issues before they become major problems.
  2. Automate distribution and assignment: A ticketing system can automate the distribution and assignment of incidents to the right department staff member. This ensures that incidents are quickly and efficiently handled by the most qualified support team members.
  3. Increase collaboration: A ticketing system can increase collaboration between customer service teams and other stakeholders. It allows for easy and quick ticket assignment, collaboration in resolving issues, and real-time changes.
  4. Consolidate support needs: Using a ticketing system consolidates all support needs in one place, providing a record of customer interactions stored in the system. This allows support teams to quickly and easily access customer history, track communication, and resolve issues more effectively.
  5. Dynamic Tracking Tool reporting: Reports such as the Real-Time Tracking Report and Historical Data Report make it possible to monitor and analyze tracking data efficiently.

Overall, a ticketing system can help MSO support teams to be more organized, efficient, and effective in managing incidents and service requests.


Benefits of a Dynamic Ticketing Management System


 

  1. Prioritization: A ticketing system efficiently prioritizes incidents based on their impact on the business and their urgency. This ensures critical issues are resolved quickly, minimizing downtime and maximizing productivity.
  2. Efficiency: A ticketing system streamlines the incident management process, reducing the time and effort required to handle incidents. It allows support teams to focus on resolving issues rather than spending time on administrative tasks such as logging incidents and updating users.
  3. Collaboration: A ticketing system enables collaboration between support teams, allowing them to share information and expertise to resolve incidents more efficiently. It also enables users to collaborate with support teams, providing real-time updates and feedback on the status of their incidents.
  4. Tracking & Reporting: A ticketing system provides detailed monitoring and reporting capabilities, allowing businesses to analyze incident data and identify trends and patterns. This information can be used to identify recurring issues, develop strategies to prevent incidents from occurring, and improve the overall quality of support services.
  5. Professionalism: A ticketing system provides a professional and consistent approach to incident management, ensuring that all incidents are handled promptly and efficiently. This helps to enhance the reputation of the support team and the business as a whole.
  6. Transparency: A ticketing system provides transparency in the incident management process, allowing users to track the status of their incidents in real time. It also provides visibility into the actions taken by support teams, enabling users to understand how incidents are being resolved.
  7. Continuity: A ticketing system provides continuity in the incident management process, ensuring that incidents are handled consistently and effectively across the organization. It also ensures that incident data is captured and stored in a centralized location, providing a comprehensive view of the incident management process.

A Support System Orbits Around 3-Tiered Support


Tier 1

Tier 1 tech support is typically the first level of technical support in a multi-tiered technical support model. It is responsible for handling basic customer issues and providing initial diagnosis and resolution of technical problems.

A Tier 1 specialist’s primary responsibility is to gather customer information and analyze the symptoms to determine the underlying problem. They may use pre-determined scripts or workflows to troubleshoot common technical issues and provide basic solutions.

If the issue is beyond their expertise, they may escalate it to the appropriate Tier 2 or Tier 3 support team for further investigation and resolution.

Overall, Tier 1 tech support is critical for providing initial assistance to customers and ensuring that technical issues are addressed promptly and efficiently.

Tier 2

Tier 2 support is the second level of technical support in a multi-tiered technical support model, and it typically involves more specialized technical knowledge and skills than Tier 1 support.

Tier 2 support is staffed by technicians with in-depth technical knowledge and experience troubleshooting complex technical issues. These technicians are responsible for providing more advanced technical assistance to customers, and they may use more specialized tools or equipment to diagnose and resolve technical problems.

Tier 2 support is critical for resolving complex technical issues and ensuring that customers receive high-quality technical assistance.

Tier 3

Tier 3 support typically involves highly specialized technical knowledge and skills, and technicians at this level are often subject matter experts in their respective areas. They may be responsible for developing new solutions or workarounds for complex technical issues and providing training and guidance to Tier 1 and Tier 2 support teams.

In some cases, Tier 3 support may be provided by the product or service vendor, while in other cases, it may be provided by a third-party provider. The goal of Tier 3 support is to ensure that the most complex technical issues are resolved as quickly and efficiently as possible, minimizing downtime and ensuring customer satisfaction.

Overall, Tier 3 support is critical in providing advanced technical assistance and ensuring that the most complex technical problems are resolved effectively.

Determine The Importance of Tickets/Incidents/Issues/Cases

The first step in a support ticketing system is to determine the incident’s importance. This involves assessing the incident’s impact on the user and the business and assigning a priority level based on the severity of the issue.


  1. Receiving: The first step is to receive the incident report from the user. This can be done through various channels, such as email, phone, or a web-based form.
  2. Validating: This step involves validating the incident and verifying that it is a valid issue that needs to be addressed by the Support team.
  3. Logging: Once the incident has been validated, it is logged into an incident application, which is used to track and manage it throughout the process.
  4. Screening: The next step is to screen the incident and determine the user’s symptoms. This involves asking questions to gather more information about the issue and to identify any patterns or trends that may help resolve the incident.
  5. Prioritizing: Once the symptoms have been identified, the next step is to prioritize the incident based on its impact on the user and the business.
  6. Assigning: After the incident has been prioritized, it is assigned to a support team that will handle it. If the support team cannot handle the incident, it is escalated to a higher-level tier.
  7. Escalating: If the incident requires more advanced expertise or resources, it is escalated to a higher-level tier where it can be resolved more effectively.
  8. Resolving: The support team or higher-level tier works on resolving the incident and provides updates to the user until the issue is resolved.
  9. Closing: Once the incident has been resolved, the ticket is closed by logging the resolution and changing the ticket status to indicate that the incident has been successfully resolved.
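The nine steps above can be sketched as a minimal state machine. This is an illustrative Python sketch, not the API of any particular ticketing product; the priority rule (impact × urgency) and the tier escalation mirror the process described in this article:

```python
from dataclasses import dataclass, field
from enum import Enum


class Status(Enum):
    RECEIVED = "received"
    LOGGED = "logged"
    ASSIGNED = "assigned"
    ESCALATED = "escalated"
    RESOLVED = "resolved"
    CLOSED = "closed"


@dataclass
class Ticket:
    summary: str
    impact: int       # 1 (low) .. 3 (business-critical)
    urgency: int      # 1 (can wait) .. 3 (needs action now)
    tier: int = 1     # support tier currently handling the ticket
    status: Status = Status.RECEIVED
    history: list = field(default_factory=list)

    @property
    def priority(self) -> int:
        # Prioritizing: combine business impact with urgency.
        return self.impact * self.urgency

    def log(self) -> None:
        # Validating + logging into the incident application.
        self.status = Status.LOGGED
        self.history.append("logged")

    def assign(self) -> None:
        self.status = Status.ASSIGNED
        self.history.append(f"assigned to tier {self.tier}")

    def escalate(self) -> None:
        # Escalating: hand off to the next tier (Tier 3 is the ceiling).
        self.tier = min(self.tier + 1, 3)
        self.status = Status.ESCALATED
        self.history.append(f"escalated to tier {self.tier}")

    def resolve(self, resolution: str) -> None:
        self.status = Status.RESOLVED
        self.history.append(f"resolved: {resolution}")

    def close(self) -> None:
        # Closing: record the resolution and mark the ticket done.
        self.status = Status.CLOSED
        self.history.append("closed")


# Example: a high-impact incident that Tier 1 escalates to Tier 2.
ticket = Ticket("Login outage", impact=3, urgency=3)
ticket.log()
ticket.assign()
ticket.escalate()
ticket.resolve("restarted auth service")
ticket.close()
print(ticket.priority, ticket.tier, ticket.status.value)  # → 9 2 closed
```

A real system would add timestamps, SLA timers, and user notifications on each transition, but the shape of the workflow is the same.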

Summary

Ticketing systems are essential for businesses that want to manage customer service requests efficiently. These systems allow customers to submit service requests, track the progress of their requests, and receive updates when their requests are resolved. The ticketing system also enables businesses to assign service requests to the appropriate employees or teams and prioritize them based on urgency or severity. This helps streamline workflow and ensure service requests are addressed promptly and efficiently. Additionally, ticketing systems can provide valuable insights into customer behavior, allowing businesses to identify areas where they can improve their products or services.

]]>
https://blogs.perficient.com/2025/04/10/managed-service-offering-mso-support-ticketing-system/feed/ 0 379087
The 1960s Self-Help Book that Astonished me in 2025!! https://blogs.perficient.com/2025/04/04/the-1960s-self-help-book-that-astonished-me-in-2025/ https://blogs.perficient.com/2025/04/04/the-1960s-self-help-book-that-astonished-me-in-2025/#respond Fri, 04 Apr 2025 16:14:13 +0000 https://blogs.perficient.com/?p=379659

My dad generally does not have a very strong opinion about anything. His best reaction was when we went to see the Taj Mahal in Agra, India, and he said … “it’s good”. Not someone who will applaud anything vociferously. When he heard about the whole manifestation spiel from my sister, he recommended we read Psycho-Cybernetics by Dr. Maxwell Maltz, a 1960s book he read as a young man that he says “was amazing”.  Coming from someone whose emotional range is “okay” to “could be worse,” this was basically his version of fireworks… Naturally, I decided to check it out…

I expected old and outdated self-help type advice… the kind of “grind harder” energy of the war era that feels like it belongs in black-and-white movies. But instead? It did hit different. It felt modern, relevant, and annoyingly… effective. Hence this blog.

A Plastic Surgeon Turned Epistemologist (I know … big word for brain transformation)

Dr. Maxwell Maltz, the author of Psycho-Cybernetics, was a cosmetic surgeon in the 1960s who noticed something unusual: fixing someone’s nose or scar didn’t always fix how they felt about themselves. Turns out, their self-image… the mental picture they had of themselves… didn’t update with the surgery.

That’s when Maltz cracked the code: Your self-image is basically your brain’s blueprint, its operating system. It’s like the app running in the background that controls how you act, react, and even hold yourself back. If your self-image is outdated, no amount of external changes will make a difference. But if you can rewire it? It could be a game-changer.

Gen Z Did NOT Invent Manifestation?

Okay, let’s talk manifestation. You’ve seen it… people whispering affirmations into their oat milk lattes, crafting vision boards with magazine clippings and Pinterest boards, journaling their dream lives like they’re already living them. The vibe? If you focus your thoughts and energy enough, good things will find you…

But guess what? Maltz was onto this way back in 1960… before TikTok made it a trend. His version wasn’t about crystals or cosmic timing; it was about mental rehearsal. Picture your goals so clearly and consistently that your brain starts treating them like real experiences. No props!!

The Theater of Your Mind

Maltz called this technique “The Theater of the Mind.” Imagine yourself achieving your goals… like actually see it happening in your head. Whether it’s acing a presentation or finally asking out your crush without turning into a bundle of nerves, you rehearse it mentally until your brain starts to believe it…

It’s not magic; it’s mechanics. Your brain doesn’t know the difference between real and vividly imagined. So instead of overthinking or getting lost in distractions, you train your inner autopilot to aim higher…

Failure and Feedback

Here’s the part that stayed with me: failure isn’t a sign you’re not capable… it’s just feedback for your brain to adjust course. Maltz compared it to your GPS… when you make a wrong turn, it doesn’t panic …  it just calmly recalculates and finds another way…

For someone raised on perfection and performance, this was freeing. Mistakes aren’t the end… they’re just part of the route…

So… Is This Just Another Self-Help Book?

Maybe, the tropes are similar, but the styles and the tools aren’t. Psycho-Cybernetics isn’t about wishing for miracles… it’s about understanding and reshaping the self-image that quietly directs your everyday life. When you change how you see yourself, everything… your habits, your confidence, even your presence… begins to shift.

I started believing I could handle challenges that once made me retreat. And more than anything, I realized that a lot of my so-called “personality quirks” were just old thought loops on repeat.

Vintage Science Meets New-Age Glow-Up

If you’re into manifestation… scripting dream lives at 11:11 or creating mood boards full of palm trees and future homes… ask yourself this: what’s your self-image doing while all this is happening? Because no matter how often you visualize success, the author emphasizes that if your inner dialogue still sometimes doubts your worth, that vision may never fully land.

Maltz figured this out decades before hashtags and highlight reels. And maybe that’s why my dad felt a shift when he read this book… and why I’m feeling something similar now, just at a different age, in a different world… Turns out, rewiring your brain is never going out of style. Try it, I highly recommend it, or as my dad said, it’s an “AMAZING” read.

]]>
https://blogs.perficient.com/2025/04/04/the-1960s-self-help-book-that-astonished-me-in-2025/feed/ 0 379659
Perficient Included in IDC Market Glance: Payer, 1Q25 https://blogs.perficient.com/2025/04/02/perficient-included-in-idc-market-glance-payer-1q25/ https://blogs.perficient.com/2025/04/02/perficient-included-in-idc-market-glance-payer-1q25/#respond Wed, 02 Apr 2025 18:55:18 +0000 https://blogs.perficient.com/?p=379587

Health insurers today are navigating intense technological and regulatory requirements, along with rising consumer demand for seamless digital experiences. Leading organizations are investing in advanced technologies and automations to modernize operations, streamline experiences, and unlock reliable insights. By leveraging scalable infrastructures, you can turn data into a powerful tool that accelerates business success.

IDC Market Glance: Payer, 1Q25

Perficient is proud to be included in the IDC Market Glance: Payer, 1Q25 (doc#US53200825, March 2025) report for the second year in a row. According to IDC, this report “provides a glance at the current makeup of the payer IT landscape, illustrates who some of the major players are, and depicts the segments and structure of the market.”

Perficient is included in the categories of IT Services and Data Platforms/Interoperability. IDC defines the IT Services segment as, “Systems integration organizations providing advisory, consulting, development, and implementation services. Some IT Services firms also have products/solutions.” The Data Platforms/Interoperability segment is defined by IDC as, “Firms that provide data, data aggregation, data translation, data as a service and/or analytics solutions; either as off-premise, cloud, or tools on premise used for every aspect of operations.”

Discover Strategic Investments for Innovation and Success

Our strategists are committed to driving innovative solutions and guiding insurers on their digital transformation journey. We feel that our inclusion in this report reinforces our expertise in leveraging digital capabilities to unlock personalized experiences and drive greater operational efficiencies with our clients’ highly regulated, complex healthcare data.

The ten largest health insurers in the United States have counted on us to help drive the outcomes that matter most to businesses and consumers. Our experts can help you pragmatically and confidently navigate the intense regulatory requirements and consumer trends influencing digital investments. Learn more and contact us to discover how we partner to boost efficiencies, elevate health outcomes, and create differentiated experiences that enhance consumer trust.

]]>
https://blogs.perficient.com/2025/04/02/perficient-included-in-idc-market-glance-payer-1q25/feed/ 0 379587
Deena Piquion from Xerox on Data, Disruption, and Digital Natives https://blogs.perficient.com/2025/04/02/deena-piquion-xerox-data-disruption-digital-natives/ https://blogs.perficient.com/2025/04/02/deena-piquion-xerox-data-disruption-digital-natives/#respond Wed, 02 Apr 2025 11:00:54 +0000 https://blogs.perficient.com/?p=379538

In the new episode of the “What If? So What?” podcast, Jim Hertzfeld and Deena Piquion, chief growth and disruption officer at Xerox, discuss how disruption and digital transformation can position companies to succeed in a rapidly changing technology landscape.

Deena is leading Xerox on a unique and pivotal reinvention journey as the company undergoes a significant transformation, expanding beyond its traditional print and copy services. Deena explains how the company is now focusing on enabling the modern workforce with AI-powered platforms, workflow automation, and IT solutions.

Data plays a crucial role in Xerox’s digital transformation strategy, and Deena highlights the importance of integrating data from various sources to create a unified view that enables better decision-making and more effective marketing.

Listen to the podcast to hear more about internal disruption and digital innovation!

Listen now on your favorite podcast platform or visit our website.

 

Subscribe Where You Listen

Apple | Spotify | Amazon | Overcast

Meet our Guest

Deena Piquion headshot

Deena Piquion, Chief Growth and Disruption Officer, Xerox

Deena Piquion is chief growth and disruption officer at Xerox. She previously served as chief marketing officer, and senior vice president and general manager of Xerox Latin America operations. Prior to joining Xerox in 2019, she was with Tech Data Corporation, where she last served as vice president and general manager of Latin America & Caribbean.

She is a member of the Advisory Board of Teach for America Miami Dade County, a nonprofit organization dedicated to educational equity and excellence. Deena was awarded the Florida Diversity Council Glass Ceiling Award in 2016, was selected as a CRN Women of the Channel Honoree in 2017, and was named to Diversity First’s Top 50 Women in Tech 2021 and Top 100 CMOs in 2022.

Deena is actively engaged in her community and passionate about supporting children’s cancer research, and diversity and inclusion in technology. She is a dynamic blogger who created her own branded platform to share tips on personal and professional growth with an engaged following in the industry.

Connect with Deena

 

Meet the Host

Jim Hertzfeld

Jim Hertzfeld is Area Vice President, Strategy for Perficient.

For over two decades, he has worked with clients to convert market insights into real-world digital products and customer experiences that actually grow their business. More than just a strategist, Jim is a pragmatic rebel known for challenging the conventional and turning grand visions into actionable steps. His candid demeanor, sprinkled with a dose of cynical optimism, shapes a narrative that challenges and inspires listeners.

Connect with Jim:

LinkedIn | Perficient

]]>
https://blogs.perficient.com/2025/04/02/deena-piquion-xerox-data-disruption-digital-natives/feed/ 0 379538
Tips for building top performer teams https://blogs.perficient.com/2025/04/01/tips-for-building-top-performer-teams-ev/ https://blogs.perficient.com/2025/04/01/tips-for-building-top-performer-teams-ev/#comments Tue, 01 Apr 2025 19:19:11 +0000 https://blogs.perficient.com/?p=379528

There’s no doubt that every Director or Manager wants a high-performance team that delivers the best results and allows them to focus on building new business opportunities.

Come on, let’s face it! If we were comparing a work team with a sports team, who wouldn’t want to have a Barcelona Soccer Club, the Dodgers baseball team, or the Philadelphia Eagles in American football?

It’s easy to think and say, right? But where does the secret to building high-performance teams lie?

Martin Zwilling, founder and CEO of Startup Professionals, Inc., recommends the following list of actions for both entrepreneurs and senior executives to achieve the highest performance from team members (Zwilling, 2020):

Clearly and iteratively communicate team goals and objectives: 

Don’t rely on those who understand the message quickly; at least repeat it five times in different forums to ensure it was heard and understood.

Define and document role content and standards for performance: 

Don’t assume that team members already know what the expected standards of excellence are.

Give team members the right to make decisions in their role. 

Remember that micromanagement is not an effective way to achieve top performance. Instead, you can practice process coaching and let the team make their own decisions and improve step by step.

Relay regular informal observations on progress and results. 

Take the time to provide informal feedback weekly or even daily. This will help address gaps gradually and increase the team members’ psychological safety.

Give team members the training, tools, and data to do the job. 

As a Scrum Master working in an agile framework, you are a servant leader. Team members cannot be top performers without necessary resources. Leaders should anticipate these requirements, listen carefully to feedback from team members, and provide resources on a timely basis.

Diligently provide follow-up and support on assistance requests. 

As a leader, you should recognize and support your team in situations that go beyond their domain.

Reward positive results. 

Recognition is important for building team members’ confidence and sustaining the team’s health.

Related to this topic, the Center for Human Capital Innovation also provides some examples and key factors for high-performance teams:

The 1992 US men’s Olympic basketball team, known as the “Dream Team,” tells us that “the essence of a high-performance team isn’t found in the individual capabilities of its members but in their ability to adapt, learn, and evolve into a synergistic unit. This transformation was marked by a shift in the team’s approach to playing together, emphasizing mutual understanding, trust, and a unified strategy” (Center for Human Capital Innovation, 2024).

Taking the last paragraph into consideration, high-performance teams rely on:

  • Shared vision and direction: align team members around a common objective.
  • Quality of interaction: ensure trust, open communication, and a willingness to embrace conflict.
  • Sense of renewal: high-performing teams should feel empowered to take risks and innovate.

On the other hand, Expert Panel, a former Forbes Councils Member, provides these tips for optimizing the team’s performance while avoiding burnout:

  • Set boundaries and priorities between work and personal life.
  • Encourage your team to succeed by discussing goals so everyone is on the same page regarding priorities, timelines, and deadlines.
  • Identify tasks that can be automated so everyone has more time to learn and improve their performance.
  • Be transparent by sharing the business case, listening to the team’s feedback, and ensuring everyone feels that their role is valuable to the business.

I hope these tips help you build the top-performing team you want. Be patient, but most importantly, work at it!

Bibliography:

]]>
https://blogs.perficient.com/2025/04/01/tips-for-building-top-performer-teams-ev/feed/ 1 379528