Mobile Articles / Blogs / Perficient https://blogs.perficient.com/category/services/platforms-and-technology/mobile/

Deep Thinking with AI Clusters: The Future of Distributed Intelligence
https://blogs.perficient.com/2026/01/29/deep-thinking-with-ai-clusters-the-future-of-distributed-intelligence/ | Thu, 29 Jan 2026 20:49:26 +0000

In an era where artificial intelligence shapes every facet of our digital lives, a quiet revolution is unfolding in home labs and enterprise data centers alike. The AI Cluster paradigm represents a fundamental shift in how we approach machine intelligence—moving from centralized cloud dependency to distributed, on-premises deep thinking systems that respect privacy, reduce costs, and unlock unprecedented flexibility.

This exploration dives into the philosophy behind distributed AI inference, the tangible benefits of AI clusters, and the emerging frontier of mobile Neural Processing Units (NPUs) that promise to extend intelligent computing to the edge of our networks.

The AI Cluster dashboard provides an intuitive interface for submitting inference jobs and monitoring worker status

The Philosophy of Deep Thinking in Distributed Systems

Traditional AI deployment follows a client-server model: send your data to the cloud, receive processed results. This approach, while convenient, creates fundamental tensions with privacy, latency, and control. AI clusters invert this paradigm.

“Deep thinking isn’t just about model size—it’s about creating the conditions where complex reasoning can occur without artificial constraints imposed by network latency, privacy concerns, or API rate limits.”

An AI cluster operates on three core principles:

1. Locality of Computation

Data never leaves your network. Whether processing proprietary code, sensitive documents, or experimental research, the inference happens within your controlled environment. This isn’t just about security—it’s about creating a space for uninhibited exploration where the AI can engage with your full context.

2. Heterogeneous Resource Pooling

A cluster doesn’t discriminate between hardware. NVIDIA CUDA GPUs, Apple Silicon with Metal acceleration, and even CPU-only nodes work together. This democratizes AI access—you don’t need a $40,000 H100; your gaming PC, MacBook, and old server can contribute meaningfully.

3. Emergent Capabilities Through Distribution

When workers specialize based on their capabilities, the cluster develops emergent behaviors. Large models run on powerful nodes for complex reasoning, while smaller models handle quick queries on lighter hardware. The system self-organizes around its constraints.

Architecture of Thought: How AI Clusters Enable Deep Reasoning

The AI Cluster architecture is deceptively simple yet profoundly effective. At its heart lies a coordinator—a Flask-based API server managing job distribution via Redis queues. Workers, running on diverse hardware, poll for jobs, download cached models, execute inference, and return results.

┌─────────────────────────────────────────────────────────────┐
│                    User Request Flow                        │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│   Browser/API Client                                        │
│         │                                                   │
│         ▼                                                   │
│   ┌─────────────┐    ┌─────────────┐    ┌─────────────┐    │
│   │ Coordinator │───▶│ Redis Queue │───▶│   Workers   │    │
│   │  (Flask)    │    │  (Job Pool) │    │ (GPU/CPU)   │    │
│   └─────────────┘    └─────────────┘    └─────────────┘    │
│         │                                      │            │
│         │◀────────────────────────────────────┘            │
│         │         Results + Metrics                        │
│         ▼                                                   │
│   ┌─────────────┐                                          │
│   │  WebSocket  │ ───▶ Real-time Progress Updates          │
│   └─────────────┘                                          │
│                                                             │
└─────────────────────────────────────────────────────────────┘
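To make that flow concrete, here is a minimal sketch of a worker's polling loop in Python. The endpoint paths, payload fields, and API-key header are illustrative assumptions rather than the project's documented API; a real worker would also handle model caching, timeouts, and error reporting.

import time
import requests

COORDINATOR = "http://10.10.10.1:5000"   # assumed coordinator address
HEADERS = {"X-API-Key": "changeme"}      # assumed auth header

def run_inference(model: str, prompt: str) -> str:
    # Placeholder for the actual CUDA / Metal / CPU inference call.
    return f"[{model}] response to: {prompt[:40]}"

while True:
    # Ask the coordinator for the next queued job this worker can handle.
    resp = requests.get(f"{COORDINATOR}/jobs/next",
                        params={"capabilities": "cuda,13b"},
                        headers=HEADERS, timeout=30)
    if resp.status_code == 204:          # queue is empty
        time.sleep(2)
        continue
    job = resp.json()
    result = run_inference(job["model"], job["prompt"])
    # Return the result (and simple metrics) to the coordinator.
    requests.post(f"{COORDINATOR}/jobs/{job['id']}/result",
                  json={"output": result}, headers=HEADERS)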

What makes this architecture conducive to deep thinking?

Asynchronous Processing: Jobs enter a queue, freeing users from synchronous waiting. This enables batch processing of complex, multi-step reasoning tasks that might take minutes rather than seconds.

Context Preservation: The system supports project uploads—entire codebases can be zipped and provided as context. When the AI generates code, it does so with full awareness of existing patterns, dependencies, and architectural decisions.

Model Selection Flexibility: From 6.7 billion parameter models for quick responses to 70 billion parameter behemoths for nuanced reasoning, the cluster dynamically routes jobs to appropriate workers based on model requirements and hardware capabilities.
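From the client side, the same asynchronous pattern looks roughly like this: submit a job, receive a job ID immediately, and poll (or listen on the WebSocket) for the result. Again, the endpoints and field names are assumptions used for illustration.

import time
import requests

COORDINATOR = "http://10.10.10.1:5000"   # assumed coordinator address
HEADERS = {"X-API-Key": "changeme"}      # assumed auth header

# Submit a long-running reasoning job without blocking on the answer.
job = requests.post(f"{COORDINATOR}/jobs",
                    json={"model": "llama3-70b-q4",
                          "prompt": "Review this module for race conditions..."},
                    headers=HEADERS).json()

# Poll until the job completes; a real client could use the WebSocket instead.
while True:
    status = requests.get(f"{COORDINATOR}/jobs/{job['id']}", headers=HEADERS).json()
    if status["state"] == "done":
        print(status["output"])
        break
    time.sleep(5)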

The Model Management interface lets you download and manage models of various sizes—from efficient 7B models to powerful 32B variants

The Tangible Benefits of Local AI Clusters

Beyond philosophical advantages, AI clusters deliver concrete benefits that compound over time:

Benefit | Cloud API Approach | AI Cluster Approach
Cost | Per-token billing, unpredictable at scale | One-time model download, electricity only
Privacy | Data sent to third-party servers | Data never leaves your network
Availability | Dependent on internet, subject to outages | Works offline after initial setup
Rate Limits | Throttled during high demand | Limited only by your hardware
Customization | Fixed model versions, limited tuning | Choose any GGUF model, quantization level
Latency | Network round-trip overhead | Local network speeds (sub-millisecond)

Real-World Scenario: Code Generation at Scale

Consider a development team generating AI-assisted code reviews for 1,000 pull requests monthly. With cloud APIs charging $0.01-0.03 per 1K tokens, costs quickly escalate to hundreds or thousands of dollars. An AI cluster running on existing hardware reduces this to electricity costs—often pennies per day.
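As a rough back-of-the-envelope comparison (the token counts, prices, and power figures below are assumptions, not measurements):

# Assumed workload: 1,000 PRs/month, ~20K tokens of diff + context per review.
prs_per_month = 1_000
tokens_per_review = 20_000
price_per_1k_tokens = 0.02            # midpoint of the $0.01-0.03 range

cloud_cost = prs_per_month * tokens_per_review / 1_000 * price_per_1k_tokens
print(f"Cloud API: ~${cloud_cost:,.0f}/month")       # ~$400/month

# Local cluster: a 350 W GPU busy ~4 h/day at $0.15/kWh.
electricity = 0.350 * 4 * 30 * 0.15
print(f"Local cluster: ~${electricity:,.2f}/month")  # ~$6/month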

The Job History view tracks all completed inference tasks, showing model used, worker assignment, and execution timestamps

The Mobile NPU Frontier: Extending Intelligence to the Edge

Perhaps the most exciting development in distributed AI isn’t happening in data centers—it’s happening in your pocket. Modern smartphones contain dedicated Neural Processing Units capable of running billions of operations per second with remarkable energy efficiency.

Understanding Mobile NPUs

Mobile NPUs are specialized accelerators designed for machine learning workloads:

  • Apple Neural Engine: 16 cores delivering up to 35 TOPS (trillion operations per second) on iPhone and iPad
  • Qualcomm Hexagon NPU: Integrated into Snapdragon processors, offering up to 45 TOPS on flagship Android devices
  • Samsung Exynos NPU: Dedicated AI blocks for on-device inference
  • Google Tensor TPU: Custom silicon optimized for Pixel devices
The Workers dashboard displays connected compute nodes—here showing a Mac-mini leveraging Apple’s Neural Engine for Metal-accelerated inference

Why Mobile NPUs Matter for AI Clusters

The integration of mobile NPUs into AI cluster architectures represents a paradigm shift:

Ubiquitous Compute Availability

Every smartphone becomes a potential worker node. A team of 10 people effectively adds 10 NPU accelerators to the cluster during work hours—and these aren’t trivial resources. Modern mobile NPUs can run 3-7 billion parameter models in quantized formats.

Energy Efficiency Advantage

Mobile NPUs are engineered for battery-constrained environments. They deliver impressive performance-per-watt, often 10-100x more efficient than desktop GPUs for inference workloads. For always-on edge inference, this efficiency is transformative.

Latency at the Edge

For applications requiring immediate response—voice interfaces, real-time code suggestions, on-device translation—mobile NPUs eliminate network round-trips entirely. The AI thinks where you are, not where the server is.

Integration Pathways for Mobile NPU Workers

Integrating mobile devices into an AI cluster requires careful consideration of their unique constraints:

Mobile NPU Integration Architecture:

┌─────────────────────────────────────────────────────────────┐
│                    Coordinator Server                       │
│   ┌─────────────────────────────────────────────────────┐   │
│   │     Job Queue with Device Capability Matching       │   │
│   │                                                     │   │
│   │  [Complex Job: 70B Model] ───▶ Desktop GPU Worker  │   │
│   │  [Medium Job: 7B Model]  ───▶ MacBook Metal        │   │
│   │  [Light Job: 3B Model]   ───▶ Mobile NPU Worker    │   │
│   │  [Edge Job: 1B Model]    ───▶ Any Available NPU    │   │
│   └─────────────────────────────────────────────────────┘   │
└─────────────────────────────────────────────────────────────┘

Mobile Workers:
┌──────────────┐  ┌──────────────┐  ┌──────────────┐
│   iPhone 15   │  │  Pixel 8     │  │  Galaxy S24  │
│   Neural Eng  │  │  Tensor TPU  │  │  Exynos NPU  │
│   (15 TOPS)   │  │  (27 TOPS)   │  │  (20 TOPS)   │
└──────────────┘  └──────────────┘  └──────────────┘

The coordinator must understand device capabilities—battery level, thermal state, NPU availability, and supported model formats. Jobs are then intelligently routed:

  • Background inference: When devices are charging and idle, they can process larger batches
  • On-demand edge inference: Immediate local processing for time-sensitive requests
  • Federated processing: Distribute large jobs across multiple mobile devices for parallel execution
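A simplified version of that routing decision might look like the sketch below; the capability fields and thresholds are illustrative assumptions.

def pick_worker(job, workers):
    """Route a job to the first worker that can actually run it right now."""
    candidates = []
    for w in workers:
        if job["model_size_gb"] > w["free_memory_gb"]:
            continue                              # model won't fit
        if w["type"] == "mobile_npu":
            # Mobile devices only take work when healthy: charging or
            # well-charged, and not thermally throttled.
            if w["battery_pct"] < 50 and not w["charging"]:
                continue
            if w["thermal_state"] != "nominal":
                continue
        candidates.append(w)
    # Prefer the most capable remaining node (highest TOPS / VRAM score).
    return max(candidates, key=lambda w: w["compute_score"], default=None)

workers = [
    {"type": "desktop_gpu", "free_memory_gb": 24, "compute_score": 90},
    {"type": "mobile_npu", "free_memory_gb": 6, "compute_score": 30,
     "battery_pct": 80, "charging": True, "thermal_state": "nominal"},
]
print(pick_worker({"model_size_gb": 4}, workers)["type"])  # desktop_gpu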

Deep Thinking: The Cognitive Benefits of Distributed AI

Beyond technical metrics, AI clusters enable qualitative improvements in how we interact with artificial intelligence:

Unhurried Reasoning

Cloud APIs optimize for throughput and revenue. Local clusters optimize for quality. When you’re not paying per-token, you can allow the model to “think” longer, generate multiple candidates, and self-critique. This creates space for emergent reasoning patterns that rushed inference precludes.
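One simple way to spend that free compute is best-of-N sampling with a self-critique pass: generate several candidates, then ask the model to judge them. The /generate endpoint and index-parsing below are assumptions used for illustration, not a documented API.

import requests

COORDINATOR = "http://10.10.10.1:5000"          # assumed coordinator address

def generate(prompt: str) -> str:
    """Synchronous helper around the cluster API (endpoint is an assumption)."""
    resp = requests.post(f"{COORDINATOR}/generate",
                         json={"model": "llama3-70b-q4", "prompt": prompt},
                         headers={"X-API-Key": "changeme"}, timeout=600)
    return resp.json()["output"]

def best_of_n(task: str, n: int = 5) -> str:
    # Draw several independent candidates -- cheap when inference is local.
    candidates = [generate(task) for _ in range(n)]
    numbered = "\n\n".join(f"[{i}] {c}" for i, c in enumerate(candidates))
    # Ask the model to critique its own drafts and pick one.
    verdict = generate(
        "Here are candidate answers to the same task:\n"
        f"{numbered}\n\n"
        "Critique each briefly, then reply with only the index of the best one."
    )
    # Naive parsing: assumes the reply ends with a bare index.
    best_index = int(verdict.strip().split()[-1].strip("[].")) % n
    return candidates[best_index]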

Contextual Continuity

With project uploads and persistent context, the AI develops a coherent understanding of your work over time. It’s not starting from zero with each request—it’s building on accumulated knowledge of your codebase, your patterns, your preferences.

Experimental Freedom

Without cost concerns, developers explore more freely. Ask the AI to generate ten different implementations. Request detailed explanations of every design decision. Iterate on prompts until they’re perfect. This experimental abundance is where breakthrough insights emerge.

“The best tool is the one you use without hesitation. When AI assistance is free and private, you integrate it into your workflow at the speed of thought.”

Building Your Own AI Cluster: Key Considerations

For those inspired to build their own distributed AI infrastructure, consider these foundational elements:

Hardware Requirements

Model Size | Minimum VRAM/RAM | Recommended Hardware
3-7B (Q4) | 4-8 GB | Entry GPU, Apple M1, Mobile NPU
13-14B (Q4) | 10-16 GB | RTX 3060+, Apple M1 Pro+
33-34B (Q4) | 20-24 GB | RTX 3090/4090, Apple M2 Max+
70B (Q4) | 40-48 GB | Multi-GPU, Apple M2 Ultra

Network Architecture

Isolate your cluster on a dedicated subnet for security. The AI Cluster architecture uses 10.10.10.0/24 by default, with API key authentication and Redis password protection. All traffic stays internal—the coordinator never exposes endpoints to the internet.

Model Selection Strategy

Choose models that match your primary use cases:

  • Code generation: DeepSeek Coder V2 (16B), Qwen 2.5 Coder (32B)
  • General reasoning: Mixtral, Llama 3
  • Quick responses: Smaller 7B models with aggressive quantization

The Future: Convergence of Cloud, Edge, and Mobile

The trajectory is clear: AI inference is becoming increasingly distributed. The future cluster won’t distinguish between a rack-mounted server and a smartphone—it will see a heterogeneous pool of capabilities, dynamically allocating workloads based on real-time conditions.

Key developments to watch:

  • Improved mobile inference frameworks: Core ML, NNAPI, and TensorFlow Lite are rapidly closing the gap with desktop frameworks
  • Federated learning integration: Clusters that not only infer but continuously improve through distributed training
  • Hybrid cloud-edge architectures: Local clusters handling sensitive/frequent workloads while burst capacity comes from cloud providers
  • Specialized edge accelerators: Dedicated NPU devices (like Coral TPU) at $50-100 price points

Conclusion: Thinking Without Boundaries

AI clusters represent more than a technical architecture—they embody a philosophy of democratized intelligence. By distributing computation across diverse hardware, keeping data private, and eliminating usage costs, we create conditions for genuine deep thinking.

The addition of mobile NPUs extends this philosophy to its logical conclusion: intelligence that follows you, processes where you are, and thinks at the speed your context demands.

Whether you’re a solo developer in a home lab or an enterprise team building internal AI infrastructure, the principles remain constant: maximize locality, embrace heterogeneity, and design for the deep thinking that emerges when artificial intelligence is liberated from artificial constraints.

Start Your Journey

The AI Cluster project is open source under AGPL-3.0, with commercial licensing available. Explore the architecture, deploy your first worker, and experience what it means to have an AI that truly works for you.

Components included: Flask coordinator, universal Python worker, React dashboard, and comprehensive documentation for Proxmox deployment.

Hybrid AI: Empowering On-Device Models with Cloud-Synced Skills
https://blogs.perficient.com/2026/01/28/hybrid-ai-empowering-on-device-models-with-cloud-synced-skills/ | Wed, 28 Jan 2026 22:41:31 +0000

Learn how to combine Firebase’s hybrid inference with dynamic “AI Skills” to build smarter, private, and faster applications.

The landscape of Artificial Intelligence is shifting rapidly from purely cloud-based monoliths to hybrid architectures. Developers today face a critical choice: run models in the cloud for maximum power, or on-device for privacy and speed? With the recent updates to Firebase AI Logic, you no longer have to choose. You can have both.

In this post, we will explore how to implement hybrid on-device inference and take it a step further by introducing the concept of “AI Skills.” We will discuss how to architect a system where your local on-device models can dynamically learn new capabilities by syncing “skills” from the cloud.

1. The Foundation: Hybrid On-Device Inference

According to Firebase’s latest documentation, hybrid inference enables apps to attempt processing locally first and fall back to the cloud only when necessary. This approach offers significant benefits:

  • Privacy: Sensitive user data stays on the device.
  • Latency: Zero network round-trips for common tasks.
  • Cost: Offloading processing to the user’s hardware reduces cloud API bills.
  • Offline Capability: AI features work even without an internet connection.

How to Implement It

Using the Firebase AI Logic SDK, you can initialize a model with a preference for on-device execution. The SDK handles the complexity of checking if a local model (like Gemini Nano in Chrome) is available.

// Initialize the model with hybrid logic
const model = getGenerativeModel(firebase, {
  model: "gemini-1.5-flash",
  // Tells the SDK to try local execution first
  inferenceMode: "PREFER_ON_DEVICE", 
});

// Run the inference
const result = await model.generateContent("Draft a polite email declining an invitation.");
console.log(result.text());

When the app first loads, you may need to ensure the on-device model is downloaded. The SDK provides hooks to monitor this download progress, ensuring a smooth user experience rather than a silent stall.

2. What Are “AI Skills”?

While the model provides the “brain,” it needs knowledge and tools to be effective. In the evolving world of Agentic AI, we differentiate between the Agent, Tools, and Skills.

Drawing from insights at Cirrius Solutions and Data Science Collective, here is the breakdown:

Component | Definition | Analogy
Agent | The reasoning engine (e.g., Gemini Nano or Flash). | The Chef
Tools | Mechanisms to perform actions (API calls, calculators). | The Knife & Pan
Skills | Modular, reusable knowledge packages or “playbooks” that teach the agent how to use tools or solve specific problems. | The Recipe

Skills vs. Tools: A Tool might be a function to `send_email()`. A Skill is the procedural knowledge (often defined in a `SKILL.md` or structured JSON) that tells the agent: “When the user asks for a refund, check the policy date first, calculate the amount, and then use the email tool to send a confirmation.”

3. Adding Skills to On-Device Models via Cloud Sync

The limitation of on-device models is often their size; they cannot “know” everything. However, by combining Hybrid Inference with AI Skills, we can create a powerful architecture where the device is the engine, but the cloud provides the fuel.

Here is a strategy to dynamically add skills to your on-device model without updating the entire app binary:

The Architecture

  1. Cloud “Skill Registry”: Host your skills (instruction sets, prompts, and lightweight tool definitions) in a real-time cloud database (like Firestore) or configuration service (Firebase Remote Config); a seeding sketch follows this list.
  2. Synchronization: When the app launches, it syncs the latest “Skills” relevant to the user’s context.
  3. Local Injection: These skills are injected into the on-device model’s system instructions or context window at runtime.
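As a concrete illustration of step 1, the registry could be a plain Firestore collection seeded by a small admin script. The sketch below uses the Firebase Admin SDK for Python; the collection name and document shape are assumptions chosen to line up with the client code shown later in this post.

import firebase_admin
from firebase_admin import credentials, firestore

# Initialize with application-default credentials (e.g., on a CI runner).
firebase_admin.initialize_app(credentials.ApplicationDefault())
db = firestore.client()

# Each document is one "skill": an instruction block plus light tool hints.
db.collection("skills").document("refund_policy_v2").set({
    "name": "Customer Support - Refunds",
    "version": 2,
    "content": (
        "Authorized to refund if purchase < 30 days. "
        "Use tool: processRefund(id)."
    ),
    "updatedAt": firestore.SERVER_TIMESTAMP,
})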

Implementation Strategy

Imagine a “Customer Support” skill. Instead of hardcoding the support rules into the app, we fetch them dynamically.

// 1. Fetch the latest 'Skill' from the Cloud (e.g., Firestore or Remote Config)
const supportSkill = await fetchSkillFromCloud("refund_policy_v2");
// supportSkill.content = "Authorized to refund if purchase < 30 days. Use tool: processRefund(id)."

// 2. Initialize the On-Device Model with this new Skill
const localModel = getGenerativeModel(firebase, {
  model: "gemini-nano",
  inferenceMode: "PREFER_ON_DEVICE",
  systemInstruction: `You are a helpful assistant. 
                      Current Skill Module: ${supportSkill.content}` 
});

// 3. Execute locally
// The on-device model now "knows" the new refund policy without an app update.
const response = await localModel.generateContent("Can I get a refund for my order from last week?");

Why This Matters

This “Cloud-Sync Skill” architecture solves the biggest problem of local AI: stale knowledge.

  • Dynamic Updates: Did your business logic change? Update the Skill in the cloud, and every on-device model updates instantly.
  • Personalization: Sync different skills for different users (e.g., “Admin Skills” vs. “User Skills”) while still keeping the heavy processing on their own device.

Conclusion

By leveraging Firebase’s Hybrid Inference, developers can finally bridge the gap between cloud capability and local privacy. But the true game-changer lies in treating your AI not just as a static model, but as an agent that can learn new Skills dynamically from the cloud.

This architecture—Local Brain, Cloud Skills—is the blueprint for the next generation of intelligent, responsive, and efficient applications.

The Desktop LLM Revolution Left Mobile Behind
https://blogs.perficient.com/2026/01/26/the-desktop-llm-revolution-left-mobile-behind/ | Mon, 26 Jan 2026 19:44:56 +0000

Large Language Models have fundamentally transformed how we work on desktop computers. From simple ChatGPT conversations to sophisticated coding assistants like Claude and Cursor, from image generation to CLI-based workflows—LLMs have become indispensable productivity tools.

Desktop with multiple windows versus iPhone single-app limitation
On desktop, LLMs integrate seamlessly into multi-window workflows. On iPhone? Not so much.

On my Mac, invoking Claude is a keyboard shortcut away. I can keep my code editor, browser, and AI assistant all visible simultaneously. The friction between thought and action approaches zero.

But on iPhone, that seamless experience crumbles.

The App-Switching Problem

iOS enforces a fundamental constraint: one app in the foreground at a time. This creates a cascade of friction every time you want to use an LLM:

  1. You’re browsing Twitter and encounter text you want translated
  2. You must leave Twitter (losing your scroll position)
  3. Find and open your LLM app
  4. Wait for it to load
  5. Type or paste your query
  6. Get your answer
  7. Switch back to Twitter
  8. Try to find where you were

This workflow is so cumbersome that many users simply don’t bother. The activation energy required to use an LLM on iPhone often exceeds the perceived benefit.

“Opening an app is the biggest barrier to using LLMs on iPhone.”

Building a System-Level LLM Experience

Rather than waiting for Apple Intelligence to mature, I built my own solution using iOS Shortcuts. The goal: make LLM access feel native to iOS, not bolted-on.

iOS Shortcuts workflow diagram for LLM integration
The complete workflow: Action Button → Shortcut → API → Notification → Notes

The Architecture

My system combines three key components:

  • Trigger: iPhone’s Action Button for instant, one-press access
  • Backend: Multiple LLM providers via API calls (Siliconflow’s Qwen, Nvidia’s models, Google’s Gemini Flash)
  • Output: System notifications for quick answers, with automatic saving to Bear for detailed responses
iPhone Action Button triggering AI assistant
One press of the Action Button brings AI assistance without leaving your current app.
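Under the hood, the Shortcut is simply issuing an HTTP request with the “Get Contents of URL” action. The Python sketch below mirrors that request shape for clarity; the endpoint, model name, and key are placeholders for whichever OpenAI-compatible provider you configure.

import requests

API_URL = "https://api.example-provider.com/v1/chat/completions"  # placeholder
API_KEY = "sk-..."                                                 # placeholder

def quick_answer(question: str) -> str:
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "qwen2.5-7b-instruct",   # illustrative model name
            "messages": [
                {"role": "system", "content": "Answer in one short paragraph."},
                {"role": "user", "content": question},
            ],
        },
        timeout=30,
    )
    return resp.json()["choices"][0]["message"]["content"]

print(quick_answer("Translate to Chinese: see you tomorrow"))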

Three Core Functions

I configured three preset modes accessible through the shortcut:

Function | Use Case | Output
Quick Q&A | General questions, fact-checking | Notification popup
Translation | English ↔ Chinese conversion | Notification + clipboard
Voice Todo | Capture tasks via speech | Formatted list in Bear app

Why This Works

The magic isn’t in the LLM itself—it’s in the integration points:

  • No app switching required: Shortcuts run as an overlay, preserving your current context
  • Sub-second invocation: Action Button is always accessible, even from the lock screen
  • Persistent results: Answers are automatically saved, so you never lose important responses
  • Model flexibility: Using APIs means I can switch providers based on speed, cost, or capability

The Bigger Picture

Apple Intelligence promises to bring system-level AI to iOS, but its rollout has been slow and its capabilities limited. By building with Shortcuts and APIs, I’ve created a more capable system that:

  • Works today, not “sometime next year”
  • Uses state-of-the-art models (not Apple’s limited on-device options)
  • Costs pennies per query (far less than subscription apps)
  • Respects my workflow instead of demanding I adapt to it

Try It Yourself

The iOS Shortcuts app is more powerful than most users realize. Combined with free or low-cost API access from providers like Siliconflow, Groq, or Google AI Studio, you can build your own system-level AI assistant in an afternoon.

The best interface is no interface at all. When AI assistance is a single button press away—without leaving what you’re doing—you’ll actually use it.

Part 2: Building Mobile AI: A Developer’s Guide to On-Device Intelligence
https://blogs.perficient.com/2026/01/19/part-2-building-mobile-ai-a-developers-guide-to-on-device-intelligence/ | Mon, 19 Jan 2026 22:27:11 +0000

Subtitle: Side-by-side implementation of Secure AI on Android (Kotlin) and iOS (Swift).

In Part 1, we discussed why we need to move away from slow, cloud-dependent chatbots. Now, let’s look at how to build instant, on-device intelligence. While native code is powerful, managing two separate AI stacks can be overwhelming.

Before we jump into platform-specific code, we need to talk about the “Bridge” that connects them: Google ML Kit.

The Cross-Platform Solution: Google ML Kit

If you don’t want to maintain separate Core ML (iOS) and custom Android models, Google ML Kit is your best friend. It acts as a unified wrapper for on-device machine learning, supporting both Android and iOS.

It offers two massive advantages:

  1. Turnkey Solutions: Instant APIs for Face Detection, Barcode Scanning, and Text Recognition that work identically on both platforms.
  2. Custom Model Support: You can train a single TensorFlow Lite (.tflite) model and deploy it to both your Android and iOS apps using ML Kit’s custom model APIs.

For a deep dive on setting this up, bookmark the official ML Kit guide.


The Code: Side-by-Side Implementation

Below, we compare the implementation of two core features: Visual Intelligence (Generative AI) and Real-Time Inference (Computer Vision). You will see that despite the language differences, the architecture for the “One AI” future is remarkably similar.

Feature 1: The “Brain” (Generative AI & Inference)

On Android, we leverage Gemini Nano (via ML Kit’s Generative AI features). On iOS, we use a similar asynchronous pattern to feed data to the Neural Engine.

Android (Kotlin)

We check the model status and then run inference. The system manages the NPU access for us.

// GenAIImageDescriptionScreen.kt
val featureStatus = imageDescriber.checkFeatureStatus().await()

when (featureStatus) {
    FeatureStatus.AVAILABLE -> {
        // The model is ready on-device
        val request = ImageDescriptionRequest.builder(bitmap).build()
        val result = imageDescriber.runInference(request).await()
        onResult(result.description)
    }
    FeatureStatus.DOWNLOADABLE -> {
        // Silently download the model in the background
        imageDescriber.downloadFeature(callback).await()
    }
}

iOS (Swift)

We use an asynchronous loop to continuously pull frames and feed them to the Core ML model.

// DataModel.swift
func runModel() async {
    try! loadModel()
    
    while !Task.isCancelled {
        // Thread-safe access to the latest camera frame
        let image = lastImage.withLock({ $0 })
        
        if let pixelBuffer = image?.pixelBuffer {
            // Run inference on the Neural Engine
            try? await performInference(pixelBuffer)
        }
        // Yield to prevent UI freeze
        try? await Task.sleep(for: .milliseconds(50))
    }
}

Feature 2: The “Eyes” (Real-Time Vision)

For tasks like Face Detection or Object Tracking, speed is everything. We need 30+ frames per second to ensure the app feels responsive.

Android (Kotlin)

We use FaceDetection from ML Kit. The FaceAnalyzer runs on every frame, calculating probabilities for “liveness” (smiling, eyes open) instantly.

// FacialRecognitionScreen.kt
FaceInfo(
    confidence = 1.0f,
    // Detect micro-expressions for liveness check
    isSmiling = face.smilingProbability?.let { it > 0.5f } ?: false,
    eyesOpen = face.leftEyeOpenProbability?.let { left -> 
        face.rightEyeOpenProbability?.let { right ->
            left > 0.5f && right > 0.5f 
        }
    } ?: true
)

iOS (Swift)

We process the prediction result and update the UI immediately. Here, we even visualize the confidence level using color, providing instant feedback to the user.

// ViewfinderView.swift
private func updatePredictionLabel() {
    for result in prediction {
        // Dynamic feedback based on confidence
        let probability = result.probability
        let color = getColorForProbability(probability) // Red to Green transition
        
        let text = "\(result.label): \(String(format: "%.2f", probability))"
        // Update UI layer...
    }
}

Feature 3: Secure Document Scanning

Sometimes you just need a perfect scan without the cloud risk. Android provides a system-level intent that handles edge detection and perspective correction automatically.

Android (Kotlin)

// DocumentScanningScreen.kt
val options = GmsDocumentScannerOptions.Builder()
    .setGalleryImportAllowed(false) // Force live camera for security
    .setPageLimit(5)
    .setResultFormats(RESULT_FORMAT_PDF)
    .build()

scanner.getStartScanIntent(activity).addOnSuccessListener { intentSender ->
    scannerLauncher.launch(IntentSenderRequest.Builder(intentSender).build())
}

Conclusion: One Logic, Two Platforms

Whether you are writing Swift for an iPhone 17 Pro or Kotlin for a medical Android tablet, the paradigm has shifted.

  1. Capture locally.
  2. Infer on the NPU.
  3. React instantly.

By building this architecture now, you are preparing your codebase for Spring 2026, where on-device intelligence will likely become the standard across both ecosystems.

Reference: Google ML Kit Documentation

Part 1: Mobile AI 2026: Why On-Device Intelligence is the New Standard
https://blogs.perficient.com/2026/01/19/part-1-mobile-ai-2026-why-on-device-intelligence-is-the-new-standard/ | Mon, 19 Jan 2026 20:15:36 +0000

Subtitle: From Critical Medical Hardware to the Apple Ecosystem, the future of mobile intelligence is local, instant, and unified.

We are standing at a hardware tipping point. For the last decade, “AI” on mobile effectively meant one thing: sending data to the cloud and waiting for an answer. For chatbots especially, adding AI to an app meant a slow, spinning loading indicator while data traveled to a server, waited in a queue, and eventually returned text. Users are tired of waiting. They are overwhelmed by generic bots that feel disconnected from the app they are actually using.

But as we move toward 2026, the script is flipping. Phone manufacturers are shipping devices with neural engines (NPUs) so powerful they rival the desktop GPUs of just a few years ago. This shift isn’t just about faster chatbots or smoother animations; it is reshaping critical industries like healthcare and unifying the mobile ecosystem under a single dominant model family: Google Gemini.

The Hardware Revolution: The “Brain” in Your Pocket

The defining trend of the 2025-2026 cycle is the explosion of Hardware Acceleration. Modern mobile processors—whether it’s the latest Snapdragons powering Android flagships or the A-series chips in iPhones—are no longer just Central Processing Units (CPUs). They are dedicated AI powerhouses capable of “always-on” generative tasks.

This hardware leap means we can now run massive models (like Gemini Nano) directly on the device. The benefits are immediate and transformative:

  • Zero Latency: No network round-trips. The intelligence feels instantaneous.
  • Total Privacy: Sensitive data never leaves the phone’s secure enclave.
  • Offline Reliability: Intelligence works in elevators, basements, and airplanes.

The Critical Use Case: Android in Healthcare

Nowhere is this shift more vital than in the rapidly expanding world of Medical Devices. Android has quietly become the operating system of choice for specialized medical hardware, from handheld ultrasound scanners to patient vitals monitors.

Why is the edge critical here? Because medical environments are unforgiving. A doctor in a rural clinic or a paramedic in a speeding ambulance cannot rely on spotty 5G connections to process a patient’s vitals or analyze an X-ray.

  • Privacy Compliance: Processing sensitive patient data (like facial analysis for pain detection) strictly on-device removes complex regulatory cloud compliance hurdles. The data stays with the patient.
  • Reliability: An Android-based diagnostic tool must work instantly, 100% of the time, regardless of Wi-Fi status.
  • Adoption: We are seeing a massive surge in smart, connected medical tools that rely on commodity Android hardware to deliver hospital-grade diagnostics at a fraction of the cost.

The “One AI” Future: Gemini on iOS & Android

Perhaps the most compelling reason to bet on Gemini is the upcoming unification of the mobile AI landscape. Reports indicate that Apple is partnering with Google to integrate Gemini models into iOS 18 and macOS Sequoia for complex reasoning tasks and summaries, a rollout expected to mature by Spring 2026.

While Apple will handle basic tasks with its own on-device models, it is leaning on Gemini’s superior reasoning for the “heavy lifting.” This creates a unique opportunity for developers:

  • Unified Intelligence: Learning to engineer prompts and integrations for Gemini means you are effectively targeting the entire mobile market—both the Android medical devices and the premium iPhone user base.
  • Cross-Platform Consistency: A feature built on Gemini’s logic will behave consistently whether it’s running on a Samsung Galaxy Tab in a hospital or an iPhone 17 in a consumer’s hand.
  • Future-Proofing: With these updates expected shortly, building expertise in Gemini now puts us ahead of the curve when the feature goes mainstream across billions of Apple devices.

In Part 2, we will leave the strategy behind and dive into the code to see how we are already building this future today on iOS and Android.

Building a Reliable Client-Side Token Management System in Flutter
https://blogs.perficient.com/2026/01/08/building-a-reliable-client-side-token-management-system-in-flutter/ | Fri, 09 Jan 2026 05:15:35 +0000

In one of my recent Flutter projects, I had to implement a session token mechanism that behaved very differently from standard JWT-based authentication systems.

The backend issued a 15-minute session token, but with strict constraints:

  • No expiry timestamp was provided
  • The server extended the session only when the app made an API call
  • Long-running user workflows depended entirely on session continuity

If the session expired unexpectedly, users could lose progress mid-flow, leading to inconsistent states and broken experiences. This meant the entire token lifecycle had to be controlled on the client, in a predictable and self-healing way.

This is the architecture I designed.


  1. The Core Challenge

The server provided the token but not its expiry. The only rule:

“Token is valid for 15 minutes, and any API call extends the session.”

To protect long-running user interactions, the application needed to:

  • Track token lifespan locally
  • Refresh or extend sessions automatically
  • Work uniformly across REST and GraphQL
  • Survive app backgrounding and resuming
  • Preserve in-progress workflows without UI disruption

This required a fully client-driven token lifecycle engine.


  2. Client-Side Countdown Timer

Since expiry data was not available from the server, I implemented a local countdown timer to represent session validity.

How it works:

  • When token is obtained → start a 15-minute timer
  • When any API call happens → reset the timer (because backend extends session)
  • If the timer is about to expire:
    • Active user flow → show a visible countdown
    • Passive or static screens → attempt silent refresh
  • If refresh fails → gracefully log out logged-in users

This timer became the foundation of the entire system.

 



  3. Handling App Lifecycle Transitions

Users frequently minimize or switch apps. To maintain session correctness:

  • On background: pause the timer and store timestamp
  • On resume: calculate elapsed background time
    • If still valid → refresh & restart timer
    • If expired → re-authenticate or log out

This prevented accidental session expiry just because the app was minimized.


 


  4. REST Auto-Refresh with Dio Interceptors

For REST APIs, Dio interceptors provided a clean, centralized way to manage token refresh.

Interceptor Responsibilities:

  • If timer is null → start timer
  • If timer exists but is inactive,
    • token expired → refresh token
    • perform silent re-login if needed
  • If timer is active → reset the timer
  • Inject updated token into headers

Conceptual Implementation:

import 'package:dio/dio.dart';

// sessionTimer, startSessionTimer(), refreshSession(), etc. are app-level
// session helpers (not shown here); this interceptor only orchestrates them.
class SessionInterceptor extends Interceptor {
  @override
  Future<void> onRequest(
    RequestOptions options,
    RequestInterceptorHandler handler,
  ) async {
    if (sessionTimer == null) {
      // First call of the session: start the 15-minute countdown.
      startSessionTimer();
    } else if (!sessionTimer.isActive) {
      // Countdown expired: refresh the session before letting the call through.
      await refreshSession();
      if (isAuthenticatedUser) {
        await silentReauthentication();
      }
    }

    // Inject the (possibly refreshed) token and extend the local countdown,
    // mirroring the server-side session extension.
    options.headers['Authorization'] = 'Bearer $currentToken';
    resetSessionTimer();
    handler.next(options);
  }
}

This made REST calls self-healing, with no manual checks in individual services.


  5. GraphQL Auto-Refresh with Custom AuthLink

GraphQL required custom handling because it doesn’t support interceptors.
I implemented a custom AuthLink where token management happened inside getToken().

AuthLink Responsibilities:

  • Timer null → start
  • Timer inactive,
    • refresh token
    • update storage
    • silently re-login if necessary
  • Timer active → reset timer and continue

GraphQL operations then behaved consistently with REST, including auto-refresh and retry.

Conceptual implementation:

import 'package:graphql_flutter/graphql_flutter.dart';

// Uses the same app-level session helpers as the REST interceptor above.
class CustomAuthLink extends AuthLink {
  CustomAuthLink()
      : super(
          getToken: () async {
            if (sessionTimer == null) {
              // First GraphQL call of the session: start the countdown.
              startSessionTimer();
              return currentToken;
            }
            if (!sessionTimer.isActive) {
              // Countdown expired: refresh before attaching the token.
              await refreshSession();
              if (isAuthenticatedUser) {
                await silentReauthentication();
              }
              return currentToken;
            }
            // Countdown still active: extend it, mirroring the server behavior.
            resetSessionTimer();
            return currentToken;
          },
        );
}


  6. Silent Session Extension for Authenticated Users

When authenticated users’ sessions extended:

  • token refresh happened in background
  • user data was re-synced silently
  • no screens were reset
  • no interruptions were shown

This was essential for long-running user workflows.


Engineering Lessons Learned

  • When token expiry information is not provided by the backend, session management must be treated as a first-class client responsibility rather than an auxiliary concern. Deferring this logic to individual API calls or UI layers leads to fragmentation and unpredictable behavior.
  • A client-side timer, when treated as the authoritative representation of session validity, significantly simplifies the overall design. By anchoring all refresh, retry, and termination decisions to a single timing mechanism, the system becomes easier to reason about, test, and maintain.
  • Application lifecycle events have a direct and often underestimated impact on session correctness. Explicitly handling backgrounding and resumption prevents sessions from expiring due to inactivity that does not reflect actual user intent or engagement.
  • Centralizing session logic for REST interactions through a global interceptor reduces duplication and eliminates inconsistent implementations across services. This approach ensures that every network call adheres to the same session rules without requiring feature-level awareness.
  • GraphQL requires a different integration point, but achieving behavioral parity with REST is essential. Embedding session handling within a custom authorization link proved to be the most reliable way to enforce consistent session behavior across both communication models.
  • Silent session extension for authenticated users is critical for preserving continuity during long-running interactions. Refreshing sessions transparently avoids unnecessary interruptions and prevents loss of in-progress work.
  • In systems where backend constraints limit visibility into session expiry, a client-driven lifecycle model is not merely a workaround. It is a necessary architectural decision that improves reliability, protects user progress, and provides predictable behavior under real-world usage conditions.
Apple’s Big Move: The Future of Mobile
https://blogs.perficient.com/2025/09/09/apple-future-of-mobile/ | Tue, 09 Sep 2025 16:45:07 +0000

Well, that was a lot to unpack. The Apple event today, announcing iOS 26 and the iPhone 17, truly lived up to the “Awe Dropping” invitation, and not just because of the new iPhone 17 Air’s ridiculously thin design. While the new 24MP selfie camera, the upgraded 48MP Telephoto lens on the Pro models, and the “Liquid Glass” UI are certainly head-turners, the real narrative is about something far more fundamental.

This event wasn’t just about a new phone; it was about the official start of a new mobile era. Apple is openly acknowledging that the old paradigm—the one where we’re constantly chasing down apps and fighting off a barrage of distracting push notifications—is finally being put to rest. And honestly, it’s about time. Our digital lives have become a frantic game of Whack-A-Mole, tapping icons and dismissing alerts just to get a single task done.

What we saw today with iOS 26 isn’t a simple evolution; it’s a profound re-architecture of how the mobile operating system works, and it’s a brilliant example of creative technology in action. It’s built on an anticipatory design model, where the system itself becomes a proactive companion, anticipating what we need before we even ask. The new on-device “Visual Intelligence” that lets you interact with anything on your screen is a perfect metaphor for this shift—it’s less about launching a specific app and more about the device just knowing what to do.

This all hinges on the unique convergence of four foundational pillars that Apple has been building for years:

  1. On-device AI: The new A19 Pro chip with its beefed-up Neural Engine isn’t just for faster processing; it’s the engine for this new, context-aware intelligence. All that real-time analysis of your data happens on the device, keeping your personal information private and secure, which is a key differentiator in today’s privacy-conscious landscape.
  2. Conversational UIs: The deeper integration of natural language requests means we’re moving toward a system where we can simply ask our phone to perform complex, multi-step tasks across different apps, all while the system seamlessly orchestrates the workflow.
  3. App Abstraction: This is the big one. We’re seeing the dissolution of the app as a silo. Instead, apps are becoming a collection of “components” or “intelligent actions” that can be called upon system-wide. The ability for the new “Live Translation” feature to pull data from Messages and the Phone app is a clear sign that this new framework is here to stay.
  4. Security and Privacy: The announcement last year of the new “Private Cloud Compute” isn’t just a feature; it’s a strategic pillar. It shows that Apple is doubling down on its privacy-first ethos, demonstrating that powerful AI can be delivered without sacrificing the trust of its users.

Graphic of "The Dissolution of the App as a Silo"

The competitive landscape is shifting in a fundamental way. It’s no longer about which company can create a single, all-encompassing super-app. The new battleground is how well a business can integrate its functionality and value into this ambient, proactive mobile experience. The winners will be those who can transition from a transactional “app” model to a service-based “companion” model—providing continuous, frictionless value that makes our lives easier, not more cluttered. The companies that will win are proactively establishing/defining the capabilities and data required to deliver these seamless mobile experiences. When the game changes on the front-end, the back end needs to pivot.

The iPhone 17 and iOS 26 aren’t just incremental updates. They represent a significant turning point in the industry. It’s a move from a mobile world we have to consciously navigate to one that feels more like a seamless extension of ourselves. And for those of us in the business of creative digital engagement, that’s the most exciting and awe-dropping announcement of all.

Exploring the Future of React Native: Upcoming Features, and AI Integrations
https://blogs.perficient.com/2025/08/25/exploring-the-future-of-react-native-upcoming-features-and-ai-integrations/ | Tue, 26 Aug 2025 04:39:19 +0000

Introduction

With more than nine years of experience in mobile development and a strong focus on React Native, I’ve always been eager to stay ahead of the curve. Recently, I’ve been exploring the future of React Native, diving into upcoming features, AI integrations, and Meta’s long-term vision for cross-platform innovation. React Native has been a game-changing framework in the mobile space, empowering teams to build seamless cross-platform applications using JavaScript and React. Backed by Meta (formerly Facebook), it continues to evolve rapidly, introducing powerful new capabilities, optimizing performance, and increasingly integrating AI-driven solutions.

In this article, we’ll explore upcoming React Native features, how AI is integrating into the ecosystem, and Meta’s long-term vision for cross-platform innovation.

Upcoming Features in React Native

React Native’s core team, alongside open-source contributors, is actively working on several exciting updates. Here’s what’s on the horizon:

Fabric: The New Rendering Engine

Fabric modernizes React Native’s rendering infrastructure to make it faster, more predictable, and easier to debug.
Key benefits:

  • Concurrent React support
  • Synchronous layout and rendering
  • Enhanced native interoperability

As of 2025, Fabric is being gradually enabled by default in newer React Native versions (0.75+).

TurboModules

A redesigned native module system aimed at improving startup time and memory usage. TurboModules allow React Native to lazily load native modules only when needed, reducing app initialization overhead.

Hermes 2.x and Beyond

Meta’s lightweight JavaScript engine for React Native apps continues to get faster, with better memory management and debugging tools like Chrome DevTools integration.

New improvements:

  • Smaller bundle sizes
  • Better GC performance
  • Faster cold starts

React Native Codegen

A system that automates native bridge generation, making native module creation safer and faster, while reducing runtime errors. This is essential for scaling large apps with native modules.

AI Integrations in React Native

Artificial Intelligence is not just for backend systems or web apps anymore. AI is actively being integrated into React Native workflows, both at runtime and during development.

Where AI is showing up in React Native:

  • AI-Powered Code Suggestions & Debugging
    Tools like GitHub Copilot, ChatGPT, and AI-enhanced IDE extensions are streamlining development, providing real-time code fixes, explanations, and best practices.

  • ML Models in React Native Apps
    With frameworks like TensorFlow.js, ML Kit, and custom CoreML/MLModel integration via native modules, developers can embed models for:

    • Image recognition
    • Voice processing
    • Predictive text
    • Sentiment analysis

  • AI-Based Performance Monitoring & Crash Prediction
    Meta and third-party analytics tools are embedding AI to predict crashes and performance bottlenecks, offering insights before problems escalate in production apps.

  • AI-Driven Accessibility Improvements
    Automatically generating image descriptions or accessibility labels using computer vision models is becoming a practical AI use case in mobile apps.

Meta’s Vision for Cross-Platform Innovation

Meta’s vision for React Native is clear: to make cross-platform development seamless, high-performing, and future-proof.

What Meta is focusing on:

  • Unified Rendering Pipeline (Fabric)
  • Tight integration with Concurrent React
  • Deep AI integrations for personalization, recommendations, and moderation
  • Optimized developer tooling (Flipper, Hermes, Codegen)
  • Expanding React Native’s use across Meta’s product family (Facebook, Instagram, Oculus apps)

Long-Term:

Expect more AI-powered tooling, better integration between React (Web) and React Native, and Meta investing in AI-assisted developer workflows.

Conclusion

React Native’s future is bright, with Fabric, TurboModules, Hermes, and AI integrations reshaping how mobile apps are built and optimized. Meta’s continuous investment ensures that React Native remains not only relevant but also innovative in the ever-changing app development landscape.

As AI becomes a core part of both our development tools and end-user experiences, React Native developers are uniquely positioned to lead the next generation of intelligent, performant, cross-platform apps.

 

Flutter Web Hot Reload Has Landed – No More Refreshes!
https://blogs.perficient.com/2025/07/02/flutter-web-hot-reload-flutter-3-32/ | Wed, 02 Jul 2025 07:06:59 +0000

Flutter’s famous hot reload just levelled up — now it works on the web! No more full refreshes every time you tweak a UI. If you’re building apps with Flutter Build Web, this update is a game-changer.

Let’s dive into what it is, how to use it, why it matters, and see it in action.

What Is Hot Reload for Flutter Web?

Previously, Flutter Web only supported hot restart, meaning the entire app state would reset every time you changed code. That slows you down, especially in apps with complex UI or setup steps.

With Flutter Web hot reload, you can now:

  • Inject code changes into the running app
  • Preserve state
  • Update UI instantly
  • Skip full browser reloads

Now the Fun Part: How to Use It

Make sure you’ve upgraded to Flutter 3.32 or later to use this feature.

Command Line

Run your app on the web with hot reload using:

flutter run -d chrome --web-experimental-hot-reload

While it runs:

  • Press r → hot reload
  • Press R → hot restart

 

VS Code Setup

To use hot reload from VS Code, update your launch.json like this:

"configurations": [
  {
    "name": "Flutter web (hot reload)",
    "type": "dart",
    "request": "launch",
    "program": "lib/main.dart",
    "args": [
      "-d",
      "chrome",
      "--web-experimental-hot-reload"
    ]
  }
]

 

Then:

  • Enable “Dart: Flutter Hot Reload On Save” in VS Code settings
  • Use the lightning bolt to hot reload
  • Use the ⟳ button to hot restart

DartPad Support

Hot reload is also now available in DartPad!

  • Visit dartpad.dev
  • Load any Flutter app
  • If hot reload is supported, you’ll see a Reload button appear
  • You can test UI tweaks right in the browser — no install required

Measuring Reload Time

No extra setup is needed to measure performance. Flutter now logs reload time directly in your terminal:

[Screenshot: terminal log showing the hot reload time]

You’ll see this every time you trigger a hot reload (r in terminal or ⚡ in editor). It’s a great way to verify the reload speed and confirm it’s working as expected.

Demo: Hot Reload & State Retention

Here’s how to clearly showcase hot reload in action with screen recordings:

Hot Reload with UI Change

Shows instant UI updates without refresh

State Retention Demo

 Proves the state is preserved after hot reload

This visual comparison makes it obvious why hot reload is better for iteration.

Summary

Flutter Web hot reload represents a significant advancement for rapid, stateful development in the browser.

  • Instant UI feedback
  • State preservation
  • DartPad support
  • Built-in performance logging
  • Works on Flutter 3.32+

If you’ve been avoiding Flutter Web because of slow reload cycles, now’s the time to dive in again.

This feature is still experimental — expect even more improvements soon.

Part 2 – Marketing Cloud Personalization and Mobile Apps: Tracking Items
https://blogs.perficient.com/2025/06/25/part-2-marketing-cloud-personalization-and-mobile-apps-tracking-items/ | Wed, 25 Jun 2025 19:22:08 +0000

In the first part of this series, we covered how to connect a mobile app to Marketing Cloud Personalization using Salesforce’s Mobile SDK. In this post, we’ll explore how to send catalog items from the mobile app to your dataset.

What’s new on the DemoApp?

Since the last post, I made some changes in the app. The app is connected to a free NASA API and pulls information from the Mars Rover Photos endpoint. This endpoint returns an array of images taken on a specific Earth date; for demo purposes I’m only using the first record in that array. The API collects image data gathered by NASA’s Curiosity, Opportunity, and Spirit rovers on Mars and makes it more easily available to other developers, educators, and citizen scientists.

The app has two different views: the main view and the display-image view. In the main view, the user picks an Earth date and the app sends it to the API to retrieve a picture. The second view displays the picture along with some information (see image below). The goal here is to send the item (the picture and its information) to Personalization.

Simulator screenshot (iPhone 16 Pro)

 

The role of Personalization’s Event API

Marketing Cloud Personalization provides an Event API that sources use to send event data to the platform, where the event pipeline processes it. The platform can then return campaign data to be served to the end user. However, developers cannot use this API to handle mobile application events.

“The Personalization Mobile SDK has separate functionality used for mobile app event processing, with built-in features that are currently unavailable through the Event API.”

So, we can rule out using the Event API for this use case.

Tracking Items

Tracking items is an important part of any Personalization implementation. Configure the Catalog Object so that it logs an event when a user has viewed a product. For example, imagine you have an app that sells the sweaters you knit. You want to know how many users visit the “Red and Blue Sweater” product. With that information you can promote other products those users might like, so they will be more likely to buy from you.

There are two ways to track items and actions. You can track catalog objects like Products, Articles, Blogs, and Categories (the main catalog objects in Personalization), and you can also add Tags as related catalog objects for the ones named above.

You can also track actions like AddToCart, RemoveFromCart, and Purchase.

 

Process Catalog Objects/Item Data

In order to process the catalog and item data we are going to send from our mobile application, we need to activate Process Item Data from Native Mobile Apps. This option makes it possible for Personalization to process the data; by default, Personalization ignores all mobile catalog data it receives.

To activate this functionality, go to SETTINGS > GENERAL SETUP > ADVANCED OPTIONS > Activate Process Item Data from Native Mobile Apps.

Process Item Data From Native Mobile Apps configuration inside Salesforce Marketing Cloud Personalization

The SDK currently works with Products, Articles, and Blogs, which are called Items. These can track purchases, comments, or views, and they can be related to other catalog objects like Brand, Category, and Keyword.

Methods to process item data

The following methods track the action of a user viewing an item or the detail of an item. The web equivalents of these methods are SalesforceInteractions.CatalogObjectInteractionName.ViewCatalogObject and SalesforceInteractions.CatalogObjectInteractionName.ViewCatalogObjectDetail.

viewItem: and viewItem:actionName:

These methods track when a user views an item. Personalization will automatically track the time spent viewing the item while the context, app, and user are active. The item will remain the one viewed until this method or viewItemDetail is called again. See the documentation here.

The second method has an actionName parameter, which lets you supply a different action name to distinguish this View Item.

evergageScreen?.viewItem(_ item: EVGItem?)
evergageScreen?.viewItem(_ item: EVGItem?, actionName: String?)

View Item Interaction in the Event Stream:

Event Detail Interaction

View Item interaction using the actionName parameter

Event Detail Interaction with Action Field parameter

EVGItem is an abstract base class. An item is something in the app that users can view or otherwise engage with. Classes like EVGProduct and EVGArticle inherit from this class.

The question mark at the end of String and EVGItem means the value is optional and can be nil. The latter can happen with EVGItem if it has an invalid value.

viewItemDetail: and viewItemDetail:actionName:

These methods track details when a user views an item, such as looking at other product images or opening the specifications tab. Personalization will automatically track the time spent viewing the item while the context, app, and user are active. The item will remain the one viewed until this method or viewItem: is called again.

The second method has an actionName parameter, which lets you supply a different action name to distinguish this View Item Detail.

evergageScreen?.viewItemDetail(_ item: EVGItem?)
evergageScreen?.viewItemDetail(_ item: EVGItem?, actionName: String?)

View Item Detail interaction in the Event Stream:

Event Item Detail interaction

View Item Detail interaction but using the actionName parameter:

Event Item Detail With Action interaction

Now we have to define those EVGItem objects using the actual catalog object we want to track: Blog, Category, Article, or Product.

 

The EVGProduct Class

By definition, a Product is an item that a business can sell to users.  Products can be added to EVGLineItem objects when they have been ordered by the user.

We have a group of initializers we can use to create an Evergage product and send it to Personalization. The EVGProduct class has a variety of methods; for this post I will show the most relevant ones.

Something important to remember: in order to use classes like EVGProduct or EVGArticle, we need to import the Evergage library.

The productWithId: method

The most basic of them all: we just need to pass the ID of the product. This can be useful if we don’t want to provide too much information.

evergageScreen?.viewItem(EVGProduct.init(id: "p123"))

The productWithId:name:price:url:imageUrl:evgDescription: method

Builds an EVGProduct that includes many of the commonly used fields. This constructor uses the id, name, price, url, imageUrl, and description fields of the Product catalog object.

As a reminder, I’m building my Product catalog object using the images from the Mars Rover Photos API along with some other attributes from the API response.

For this constructor, the values I’m sending in the parameters are:

  • The ID of the image returned in the JSON
  • The full name of the camera that took the photo
  • A price of 10 (just because this needs a value here)
  • The image URL returned in the JSON
  • A description I built using the Earth, landing, and launch dates.

All I have to do is pass the new item using any of the methods we use to track item data.

let item : EVGItem = EVGProduct.init(id:String(id),
                                      name: name,
                                      price: 10,
                                      url: url,
                                      imageUrl: imageUrl,
                                      evgDescription: "This is a photo taken form \(roverName). Earth Date: \(earthDate). Landing Date: \(landingDate). Launch Date: \(launchDate)")
        
 evergageScreen?.viewItemDetail(item)

 

The item declaration is correct since EVGProduct inherits from EVGItem.

After populating the information, the catalog object will look like this inside Marketing Cloud Personalization:

Product Catalog Object Item inside SFMC Personalization

The productFromJSONDictionary: method

As the name says, it creates an EVGProduct from the provided JSON dictionary. A JSON dictionary is a collection of key-value pairs in the form [String : Any] to which you add attributes of the Product catalog object.

let productDict : [String : Any] = [
           "_id": String(id),
           "url": url,
           "name": name,
           "imageUrl": imageUrl,
           "description": "This is a photo taken form \(roverName). Earth Date: \(earthDate). Landing Date: \(landingDate). Launch Date: \(launchDate)",
           "price": 10,
           "currency": "USD",
           "inventoryCount": 2
]

let itemJson: EVGItem? = EVGProduct.init(fromJSONDictionary: productDict)
evergageScreen?.viewItemDetail(itemJson, actionName: "User did specific action")

Then you can initialize the EVGProduct object with the constructor that uses the fromJSONDictionary parameter.

The last step is to send the action with the viewItemDetail method.

This is how the record should look after it is created in the dataset.

Product created using JSON method

 

Final Class

This is how our class will look with the methods that send the item interactions.

Swift class code with the methods that send interactions to Personalization

Bonus: How to set attributes values?

Imagine you also want to set attributes to send to Personalization, like first name, last name, email address, or zip code. To do that, all you need is the setUserAttribute method, called inside the AppDelegate class or after the user logs in. We used this class earlier to pass the user ID and to set the dataset ID.

After the user logs in, you can pass the information you need to Personalization. The setUserAttribute:forName: method sets an attribute (a name/value pair) on the user, and the next event will send the new value to the Personalization dataset.

evergage.setUserAttribute("attributeValue", forName: "attributeName")

//Following the example
evergage.userId = evergage.anonymousId
evergage.setUserAttribute("Raul", forName: "firstName")
evergage.setUserAttribute("Juliao", forName: "lastName")
evergage.setUserAttribute("raul@gmail.com", forName: "emailAddress")
evergage.setUserAttribute("123456", forName: "zipCode")

The set attributes event:

Event interaction setting user information.

The Customer’s Profile view

Customer Profile View pointing the newly set attributes

 

Conclusion: Syncing Your Mobile App’s Catalog with Personalization

To wrap things up, setting up Articles, Blogs, and Categories works pretty much the same way as setting up Products. The structure stays consistent—you just have to keep in mind that each one belongs to a different class, so you’ll need to tweak things slightly depending on what you’re working with.

That said, one big limitation to note is that you can’t send custom attributes in catalog objects, even if you try using the JSON dictionary method. I tested a few different approaches, and unfortunately, it only supports the default attributes.

Also, the documentation doesn’t really go into detail about using other types of catalog objects outside of Articles, Blogs, Products, and Categories. It’s unclear if custom catalog objects are supported at all through the mobile SDK, which makes things a bit tricky if you’re looking to do something more advanced.

In part 3, we will take a look at how to set up push notifications and mobile campaigns.

Over The Air Updates for React Native Apps https://blogs.perficient.com/2025/06/02/over-the-air-ota-deployment-process-for-mobile-app/ https://blogs.perficient.com/2025/06/02/over-the-air-ota-deployment-process-for-mobile-app/#respond Mon, 02 Jun 2025 14:07:24 +0000 https://blogs.perficient.com/?p=349211

Mobile app development is growing rapidly, and so is the expectation of robust support. “Mobile first” is the established paradigm for many application development teams. Unlike web deployments, an app release has to go through the review process in App Store Connect and Google Play. Minor and major releases follow the same review process, which can take 1-4 days, and hot fixes or critical security patches are bound by the same review cycle. This can lead to service disruptions and negative app and customer reviews.

Let’s say the latest version of an app is 1.2, but a critical bug was identified in version 1.1. The developers may release version 1.3, but it could take a while for the new version to reach users (unless the app implements a forced update mechanism), and there is no guarantee that users have auto-updates turned on.

Luckily, “Over The Air” updates come to the rescue in such situations.

The Over The Air (OTA) deployment process for mobile apps allows developers to push updates without going through the traditional review process, enabling faster delivery of hot fixes and patches.

While this is very exciting, it does come with a few limitations:

  • This feature is not intended for major updates or large feature launches.
  • OTA updates primarily replace the JavaScript bundle, so native code changes cannot be deployed via OTA.

Mobile OTA Deployment

React Native apps consist of JavaScript and native code. When the app is compiled, it creates the JS bundles for the Android and iOS apps along with the native builds. Because OTA relies on the JavaScript bundle, React Native apps are great candidates for OTA update technology.

One of our client’s apps had an OTA deployment process implemented using App Center. However, Microsoft decided to retire App Center as of March 31, 2025, so we started exploring alternatives. One of the alternative solutions on the table was provided by App Center, and the other was to find a similar PaaS solution from another provider. Since the back-end stack was on AWS, we chose to go with EAS Update.

EAS Update

EAS Update is a hosted service that serves updates for projects using the expo-updates library. Once EAS Update is configured correctly, the app listens for updates targeting its app version on the EAS cloud server. Expo provides great documentation on setup and configuration.

How Does It Work?

In a nutshell:

  1. Integrate “EAS Update” into the app project.
  2. The user has the app installed on their device.
  3. The development team makes a bug fix or patch, generates the JS bundle for the targeted app version, and uploads it to the Expo.dev cloud server.
  4. The next time the user opens the app (the check frequency is configurable, e.g., on app resume or start), the app checks whether a new bundle is available. If an update is available, the newer version from Expo is installed on the user’s device.
Over The Air Update process flow

OTA deployment process

Additional details can be found at https://docs.expo.dev/eas-update/how-it-works/.

Implementation Details:

If you are new to React Native app development, this article may help Ramp Up On React/React Native In Less Than a Month. And if you are transitioning from React to React Native, you may find this React Native – A Web Developer’s Perspective on Pivoting to Mobile useful.

I am using my existing React Native 0.73.7 app, but you can start with a fresh React Native app for your test.

Project configuration requires us to set up expo-modules. The Expo installation guide provides an installer that handles configuration; our project needed the SDK 50 version of the installer.

  • Using npx install-expo-modules@0.8.1, I installed Expo SDK 50, in alignment with our current React Native version 0.73.7, which added the following dependencies:
"@expo/vector-icons": "^14.0.0",
"expo-asset": "~9.0.2",
"expo-file-system": "~16.0.9",
"expo-font": "~11.10.3",
"expo-keep-awake": "~12.8.2",
"expo-modules-autolinking": "1.10.3",
"expo-modules-core": "1.11.14",
"fbemitter": "^3.0.0",
"whatwg-url-without-unicode": "8.0.0-3"
  • Installed the expo-updates v0.24.14 package, which added the following dependencies:
"@expo/code-signing-certificates": "0.0.5",
"@expo/config": "~8.5.0",
"@expo/config-plugins": "~7.9.0",
"arg": "4.1.0",
"chalk": "^4.1.2",
"expo-eas-client": "~0.11.0",
"expo-manifests": "~0.13.0",
"expo-structured-headers": "~3.7.0",
"expo-updates-interface": "~0.15.1",
"fbemitter": "^3.0.0",
"resolve-from": "^5.0.0"
  • Created an Expo account at https://expo.dev/signup.
  • To set up the account, execute eas configure.
  • This generated the project ID and other account details.
  • The following channels were created: staging, uat, and production.
  • Added the relevant project values to app.json, added Expo.plist, and made the corresponding updates in AndroidManifest.xml.
  • The scripts block of package.json was updated to use npx expo to launch the app.
  • AppDelegate.swift was refactored as part of the change.
  • App Center and CodePush assets and references were removed.
  • Created a custom component to display a modal prompt when a new update is found.

OTA Deployment:

  • Execute the command via terminal:
EAS_CHANNEL=staging RUNTIME_VERSION="7.13" eas update --message "build:[QA] - 7.13.841 - 25.5.9.4 - OTA Test2 commit"
  • Once the package is published, I can see my update available in expo.dev as shown in the image below.
EAS update OTA deployment

EAS update screen once OTA deployment is successful.

Test:

  1. Unlike App Center, Expo provides the same package for both iOS and Android targets.
  2. The targeted version package is available on the expo server.
  3. An app restart or resume will display the popup (custom implementation) informing the user that “A new update is available.”
  4. When the user taps the “OK” button in the popup, the update is installed and the content within the app restarts.
  5. If the app successfully restarts, the update is successfully installed.

Considerations:

  • In metro.config.js, @rnx-kit/metro-serializer had to be commented out due to a compatibility issue with the EAS Update bundle process.
  • The @expo/vector-icons package causes the Android release build to crash on app startup. The package can be removed, but if package-lock.json is deleted, it reinstalls as an Expo dependency and causes the crash again. The issue is described in the comments here: https://github.com/expo/expo/issues/26521, and there is no solution available at the moment. The package isn’t handled correctly during the build process, and the root cause is the react-native-elements package: when that is removed, the font files are no longer added to app.manifest and the app builds and runs as expected.
  • The font require statements in node_modules/react-native-elements/dist/helpers/getIconType.js are picked up during the expo-updates generation of app.manifest even though the files are not used in our app. The current workaround is to include the fonts in the package, but this is not optimal; a better solution would be to filter those fonts out of the expo-updates process.

Deployment Troubleshooting:

  • Error fetching latest Expo update: Error: “channel-name” is not allowed to be empty.

The headers “expo-runtime-version”, “expo-channel-name”, and “expo-platform” are required. They can also be set with the query parameters “runtime-version”, “channel-name”, and “platform”. Learn more: https://github.com/expo/fyi/blob/main/eas-update-missing-headers.md

The configuration values for the iOS app are maintained in Supporting/Expo.plist. The above error indicates that the EXUpdatesRequestHeaders block in the plist might be missing.

OTA deployment is very useful when a large number of customers are using the app and an urgent hot fix or patch needs to be released. You can set it up for your lower environments as well as production.

In my experience, it is very reliable, and the Expo team is doing a great job of maintaining it.

So take advantage of this amazing service and Happy coding!

 

For more information about Perficient’s Mobile Solutions expertise, subscribe to our blog or contact our Mobile Solutions team today!

Android Development Codelab: Mastering Advanced Concepts https://blogs.perficient.com/2025/04/10/android-development-codelab-mastering-advanced-concepts/ https://blogs.perficient.com/2025/04/10/android-development-codelab-mastering-advanced-concepts/#respond Thu, 10 Apr 2025 22:28:06 +0000 https://blogs.perficient.com/?p=379698

 

This guide will walk you through building a small application step-by-step, focusing on integrating several powerful tools and concepts essential for modern Android development.

What We’ll Cover:

  • Jetpack Compose: Building the UI declaratively.
  • NoSQL Database (Firestore): Storing and retrieving data in the cloud.
  • WorkManager: Running reliable background tasks.
  • Build Flavors: Creating different versions of the app (e.g., dev vs. prod).
  • Proguard/R8: Shrinking and obfuscating your code for release.
  • Firebase App Distribution: Distributing test builds easily.
  • CI/CD (GitHub Actions): Automating the build and distribution process.

The Goal: Build a “Task Reporter” app. Users can add simple task descriptions. These tasks are saved to Firestore. A background worker will periodically “report” (log a message or update a counter in Firestore) that the app is active. We’ll have dev and prod flavors pointing to different Firestore collections/data and distribute the dev build for testing.

Prerequisites:

  • Android Studio (latest stable version recommended).
  • Basic understanding of Kotlin and Android development fundamentals.
  • Familiarity with Jetpack Compose basics (Composable functions, State).
  • A Google account to use Firebase.
  • A GitHub account (for CI/CD).

Let’s get started!


Step 0: Project Setup

  1. Create New Project: Open Android Studio -> New Project -> Empty Activity (choose Compose).
  2. Name: AdvancedConceptsApp (or your choice).
  3. Package Name: Your preferred package name (e.g., com.yourcompany.advancedconceptsapp).
  4. Language: Kotlin.
  5. Minimum SDK: API 24 or higher.
  6. Build Configuration Language: Kotlin DSL (build.gradle.kts).
  7. Click Finish.

Step 1: Firebase Integration (Firestore & App Distribution)

  1. Connect to Firebase: In Android Studio: Tools -> Firebase.
    • In the Assistant panel, find Firestore. Click “Get Started with Cloud Firestore”. Click “Connect to Firebase”. Follow the prompts to create a new Firebase project or connect to an existing one.
    • Click “Add Cloud Firestore to your app”. Accept changes to your build.gradle.kts (or build.gradle) files. This adds the necessary dependencies.
    • Go back to the Firebase Assistant, find App Distribution. Click “Get Started”. Add the App Distribution Gradle plugin by clicking the button. Accept changes.
  2. Enable Services in Firebase Console:
    • Go to the Firebase Console and select your project.
    • Enable Firestore Database (start in Test mode).
    • In the left menu, go to Build -> Firestore Database. Click “Create database”.
      • Start in Test mode for easier initial development (we’ll secure it later if needed). Choose a location close to your users. Click “Enable”.
    • Ensure App Distribution is accessible (no setup needed here yet).
  3. Download Initial google-services.json:
    • In Firebase Console -> Project Settings (gear icon) -> Your apps.
    • Ensure your Android app (using the base package name like com.yourcompany.advancedconceptsapp) is registered. If not, add it.
    • Download the google-services.json file.
    • Switch Android Studio to the Project view and place the file inside the app/ directory.
    • Note: We will likely replace this file in Step 4 after configuring build flavors.

Step 2: Building the Basic UI with Compose

Let’s create a simple UI to add and display tasks.

  1. Dependencies: Ensure necessary dependencies for Compose, ViewModel, Firestore, and WorkManager are in app/build.gradle.kts.
    app/build.gradle.kts

    
    dependencies {
        // Core & Lifecycle & Activity
        implementation("androidx.core:core-ktx:1.13.1") // Use latest versions
        implementation("androidx.lifecycle:lifecycle-runtime-ktx:2.8.1")
        implementation("androidx.activity:activity-compose:1.9.0")
        // Compose
        implementation(platform("androidx.compose:compose-bom:2024.04.01")) // Check latest BOM
        implementation("androidx.compose.ui:ui")
        implementation("androidx.compose.ui:ui-graphics")
        implementation("androidx.compose.ui:ui-tooling-preview")
        implementation("androidx.compose.material3:material3")
        implementation("androidx.lifecycle:lifecycle-viewmodel-compose:2.8.1")
        // Firebase
        implementation(platform("com.google.firebase:firebase-bom:33.0.0")) // Check latest BOM
        implementation("com.google.firebase:firebase-firestore-ktx")
        // WorkManager
        implementation("androidx.work:work-runtime-ktx:2.9.0") // Check latest version
    }
                    

    Sync Gradle files.

  2. Task Data Class: Create data/Task.kt.
    data/Task.kt

    
    package com.yourcompany.advancedconceptsapp.data
    
    import com.google.firebase.firestore.DocumentId
    
    // Every property has a default value, so Kotlin generates the no-argument
    // constructor that Firestore needs for deserialization; no explicit
    // secondary constructor is required.
    data class Task(
        @DocumentId
        val id: String = "",
        val description: String = "",
        val timestamp: Long = System.currentTimeMillis()
    )
                    
  3. ViewModel: Create ui/TaskViewModel.kt. (We’ll update the collection name later).
    ui/TaskViewModel.kt

    
    package com.yourcompany.advancedconceptsapp.ui
    
    import androidx.lifecycle.ViewModel
    import androidx.lifecycle.viewModelScope
    import com.google.firebase.firestore.ktx.firestore
    import com.google.firebase.firestore.ktx.toObjects
    import com.google.firebase.ktx.Firebase
    import com.yourcompany.advancedconceptsapp.data.Task
    // Import BuildConfig later when needed
    import kotlinx.coroutines.flow.MutableStateFlow
    import kotlinx.coroutines.flow.StateFlow
    import kotlinx.coroutines.launch
    import kotlinx.coroutines.tasks.await
    
    // Temporary placeholder - will be replaced by BuildConfig field
    const val TEMPORARY_TASKS_COLLECTION = "tasks"
    
    class TaskViewModel : ViewModel() {
        private val db = Firebase.firestore
        // Use temporary constant for now
        private val tasksCollection = db.collection(TEMPORARY_TASKS_COLLECTION)
    
        private val _tasks = MutableStateFlow<List<Task>>(emptyList())
        val tasks: StateFlow<List<Task>> = _tasks
    
        private val _error = MutableStateFlow<String?>(null)
        val error: StateFlow<String?> = _error
    
        init {
            loadTasks()
        }
    
        fun loadTasks() {
            viewModelScope.launch {
                try {
                     tasksCollection.orderBy("timestamp", com.google.firebase.firestore.Query.Direction.DESCENDING)
                        .addSnapshotListener { snapshots, e ->
                            if (e != null) {
                                _error.value = "Error listening: ${e.localizedMessage}"
                                return@addSnapshotListener
                            }
                            _tasks.value = snapshots?.toObjects<Task>() ?: emptyList()
                            _error.value = null
                        }
                } catch (e: Exception) {
                    _error.value = "Error loading: ${e.localizedMessage}"
                }
            }
        }
    
         fun addTask(description: String) {
            if (description.isBlank()) {
                _error.value = "Task description cannot be empty."
                return
            }
            viewModelScope.launch {
                 try {
                     val task = Task(description = description, timestamp = System.currentTimeMillis())
                     tasksCollection.add(task).await()
                     _error.value = null
                 } catch (e: Exception) {
                    _error.value = "Error adding: ${e.localizedMessage}"
                }
            }
        }
    }
                    
  4. Main Screen Composable: Create ui/TaskScreen.kt.
    ui/TaskScreen.kt

    
    package com.yourcompany.advancedconceptsapp.ui
    
    // Imports: androidx.compose.*, androidx.lifecycle.viewmodel.compose.viewModel, java.text.SimpleDateFormat, etc.
    import androidx.compose.foundation.layout.*
    import androidx.compose.foundation.lazy.LazyColumn
    import androidx.compose.foundation.lazy.items
    import androidx.compose.material3.*
    import androidx.compose.runtime.*
    import androidx.compose.ui.Alignment
    import androidx.compose.ui.Modifier
    import androidx.compose.ui.unit.dp
    import androidx.lifecycle.viewmodel.compose.viewModel
    import com.yourcompany.advancedconceptsapp.data.Task
    import java.text.SimpleDateFormat
    import java.util.Date
    import java.util.Locale
    import androidx.compose.ui.res.stringResource
    import com.yourcompany.advancedconceptsapp.R // Import R class
    
    @OptIn(ExperimentalMaterial3Api::class) // For TopAppBar
    @Composable
    fun TaskScreen(taskViewModel: TaskViewModel = viewModel()) {
        val tasks by taskViewModel.tasks.collectAsState()
        val errorMessage by taskViewModel.error.collectAsState()
        var taskDescription by remember { mutableStateOf("") }
    
        Scaffold(
            topBar = {
                TopAppBar(title = { Text(stringResource(id = R.string.app_name)) }) // Use resource for flavor changes
            }
        ) { paddingValues ->
            Column(modifier = Modifier.padding(paddingValues).padding(16.dp).fillMaxSize()) {
                // Input Row
                Row(verticalAlignment = Alignment.CenterVertically, modifier = Modifier.fillMaxWidth()) {
                    OutlinedTextField(
                        value = taskDescription,
                        onValueChange = { taskDescription = it },
                        label = { Text("New Task Description") },
                        modifier = Modifier.weight(1f),
                        singleLine = true
                    )
                    Spacer(modifier = Modifier.width(8.dp))
                    Button(onClick = {
                        taskViewModel.addTask(taskDescription)
                        taskDescription = ""
                    }) { Text("Add") }
                }
                Spacer(modifier = Modifier.height(16.dp))
                // Error Message
                errorMessage?.let { Text(it, color = MaterialTheme.colorScheme.error, modifier = Modifier.padding(bottom = 8.dp)) }
                // Task List
                if (tasks.isEmpty() && errorMessage == null) {
                    Text("No tasks yet. Add one!")
                } else {
                    LazyColumn(modifier = Modifier.weight(1f)) {
                        items(tasks, key = { it.id }) { task ->
                            TaskItem(task)
                            Divider()
                        }
                    }
                }
            }
        }
    }
    
    @Composable
    fun TaskItem(task: Task) {
        val dateFormat = remember { SimpleDateFormat("yyyy-MM-dd HH:mm", Locale.getDefault()) }
        Row(modifier = Modifier.fillMaxWidth().padding(vertical = 8.dp), verticalAlignment = Alignment.CenterVertically) {
            Column(modifier = Modifier.weight(1f)) {
                Text(task.description, style = MaterialTheme.typography.bodyLarge)
                Text("Added: ${dateFormat.format(Date(task.timestamp))}", style = MaterialTheme.typography.bodySmall)
            }
        }
    }
                    
  5. Update MainActivity.kt: Set the content to TaskScreen.
    MainActivity.kt

    
    package com.yourcompany.advancedconceptsapp
    
    import android.os.Bundle
    import androidx.activity.ComponentActivity
    import androidx.activity.compose.setContent
    import androidx.compose.foundation.layout.fillMaxSize
    import androidx.compose.material3.MaterialTheme
    import androidx.compose.material3.Surface
    import androidx.compose.ui.Modifier
    import com.yourcompany.advancedconceptsapp.ui.TaskScreen
    import com.yourcompany.advancedconceptsapp.ui.theme.AdvancedConceptsAppTheme
    // Imports for WorkManager scheduling will be added in Step 3
    
    class MainActivity : ComponentActivity() {
        override fun onCreate(savedInstanceState: Bundle?) {
            super.onCreate(savedInstanceState)
            setContent {
                AdvancedConceptsAppTheme {
                    Surface(modifier = Modifier.fillMaxSize(), color = MaterialTheme.colorScheme.background) {
                        TaskScreen()
                    }
                }
            }
            // TODO: Schedule WorkManager job in Step 3
        }
    }
                    
  6. Run the App: Test basic functionality. Tasks should appear and persist in Firestore’s `tasks` collection (initially).

Step 3: WorkManager Implementation

Create a background worker for periodic reporting.

  1. Create the Worker: Create worker/ReportingWorker.kt. (Collection name will be updated later).
    worker/ReportingWorker.kt

    
    package com.yourcompany.advancedconceptsapp.worker
    
    import android.content.Context
    import android.util.Log
    import androidx.work.CoroutineWorker
    import androidx.work.WorkerParameters
    import com.google.firebase.firestore.ktx.firestore
    import com.google.firebase.ktx.Firebase
    // Import BuildConfig later when needed
    import kotlinx.coroutines.tasks.await
    
    // Temporary placeholder - will be replaced by BuildConfig field
    const val TEMPORARY_USAGE_LOG_COLLECTION = "usage_logs"
    
    class ReportingWorker(appContext: Context, workerParams: WorkerParameters) :
        CoroutineWorker(appContext, workerParams) {
    
        companion object { const val TAG = "ReportingWorker" }
        private val db = Firebase.firestore
    
        override suspend fun doWork(): Result {
            Log.d(TAG, "Worker started: Reporting usage.")
            return try {
                val logEntry = hashMapOf(
                    "timestamp" to System.currentTimeMillis(),
                    "message" to "App usage report.",
                    "worker_run_id" to id.toString()
                )
                // Use temporary constant for now
                db.collection(TEMPORARY_USAGE_LOG_COLLECTION).add(logEntry).await()
                Log.d(TAG, "Worker finished successfully.")
                Result.success()
            } catch (e: Exception) {
                Log.e(TAG, "Worker failed", e)
                Result.failure()
            }
        }
    }
                    
  2. Schedule the Worker: Update MainActivity.kt‘s onCreate method.
    MainActivity.kt additions

    
    // Add these imports to MainActivity.kt
    import android.content.Context
    import android.util.Log
    import androidx.work.*
    import com.yourcompany.advancedconceptsapp.worker.ReportingWorker
    import java.util.concurrent.TimeUnit
    
    // Inside MainActivity class, after setContent { ... } block in onCreate
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContent {
            // ... existing code ...
        }
        // Schedule the worker
        schedulePeriodicUsageReport(this)
    }
    
    // Add this function to MainActivity class
    private fun schedulePeriodicUsageReport(context: Context) {
        val constraints = Constraints.Builder()
            .setRequiredNetworkType(NetworkType.CONNECTED)
            .build()
    
        val reportingWorkRequest = PeriodicWorkRequestBuilder<ReportingWorker>(
                1, TimeUnit.HOURS // ~ every hour
             )
            .setConstraints(constraints)
            .addTag(ReportingWorker.TAG)
            .build()
    
        WorkManager.getInstance(context).enqueueUniquePeriodicWork(
            ReportingWorker.TAG,
            ExistingPeriodicWorkPolicy.KEEP,
            reportingWorkRequest
        )
        Log.d("MainActivity", "Periodic reporting work scheduled.")
    }
                    
  3. Test WorkManager:
    • Run the app. Check Logcat for messages from ReportingWorker and MainActivity about scheduling.
    • WorkManager tasks don’t run immediately, especially periodic ones. You can use ADB commands to force execution for testing:
      • Find your package name: com.yourcompany.advancedconceptsapp
      • Force run jobs: adb shell cmd jobscheduler run -f com.yourcompany.advancedconceptsapp 999 (The 999 is usually sufficient, it’s a job ID).
      • Or use Android Studio’s App Inspection tab -> Background Task Inspector to view and trigger workers. Alternatively, enqueue a one-time test run, as sketched after this list.
    • Check your Firestore Console for the usage_logs collection.
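
If you don’t want to wait for the periodic schedule while testing, another option is to enqueue the same worker as a one-time request. This is a minimal sketch, not part of the original codelab (the helper function name is mine):

    import android.content.Context
    import androidx.work.OneTimeWorkRequestBuilder
    import androidx.work.WorkManager
    import com.yourcompany.advancedconceptsapp.worker.ReportingWorker

    // Enqueues ReportingWorker once, immediately, so you don't have to wait
    // for the 1-hour periodic schedule while testing.
    fun enqueueOneOffReportForTesting(context: Context) {
        val testRequest = OneTimeWorkRequestBuilder<ReportingWorker>()
            .addTag(ReportingWorker.TAG)
            .build()
        // Runs as soon as constraints allow; afterwards, check Logcat and the usage_logs collection.
        WorkManager.getInstance(context).enqueue(testRequest)
    }

You can call this helper temporarily from MainActivity.onCreate while testing and remove it afterwards.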

Step 4: Build Flavors (dev vs. prod)

Create dev and prod flavors for different environments.

  1. Configure app/build.gradle.kts:
    app/build.gradle.kts

    
    android {
        // ... namespace, compileSdk, defaultConfig ...
    
        // ****** Enable BuildConfig generation ******
        buildFeatures {
            buildConfig = true
        }
        // *******************************************
    
        flavorDimensions += "environment"
    
        productFlavors {
            create("dev") {
                dimension = "environment"
                applicationIdSuffix = ".dev" // CRITICAL: Changes package name for dev builds
                versionNameSuffix = "-dev"
                resValue("string", "app_name", "Task Reporter (Dev)")
                buildConfigField("String", "TASKS_COLLECTION", "\"tasks_dev\"")
                buildConfigField("String", "USAGE_LOG_COLLECTION", "\"usage_logs_dev\"")
            }
            create("prod") {
                dimension = "environment"
                resValue("string", "app_name", "Task Reporter")
                buildConfigField("String", "TASKS_COLLECTION", "\"tasks\"")
                buildConfigField("String", "USAGE_LOG_COLLECTION", "\"usage_logs\"")
            }
        }
    
        // ... buildTypes, compileOptions, etc ...
    }
                    

    Sync Gradle files.

    Important: We added applicationIdSuffix = ".dev". This means the actual package name for your development builds will become something like com.yourcompany.advancedconceptsapp.dev. This requires an update to your Firebase project setup, explained next. Also note the buildFeatures { buildConfig = true } block which is required to use buildConfigField.
  2. Handling Firebase for Suffixed Application IDs

    Because the `dev` flavor now has a different application ID (`…advancedconceptsapp.dev`), the original `google-services.json` file (downloaded in Step 1) will not work for `dev` builds, causing a “No matching client found” error during build.

    You must add this new Application ID to your Firebase project:

    1. Go to Firebase Console: Open your project settings (gear icon).
    2. Your apps: Scroll down to the “Your apps” card.
    3. Add app: Click “Add app” and select the Android icon.
    4. Register dev app:
      • Package name: Enter the exact suffixed ID: com.yourcompany.advancedconceptsapp.dev (replace `com.yourcompany.advancedconceptsapp` with your actual base package name).
      • Nickname (Optional): “Task Reporter Dev”.
      • SHA-1 (Optional but Recommended): Add the debug SHA-1 key from `./gradlew signingReport`.
    5. Register and Download: Click “Register app”. Crucially, download the new google-services.json file offered. This file now contains configurations for BOTH your base ID and the `.dev` suffixed ID.
    6. Replace File: In Android Studio (Project view), delete the old google-services.json from the app/ directory and replace it with the **newly downloaded** one.
    7. Skip SDK steps: You can skip the remaining steps in the Firebase console for adding the SDK.
    8. Clean & Rebuild: Back in Android Studio, perform a Build -> Clean Project and then Build -> Rebuild Project.
    Now your project is correctly configured in Firebase for both `dev` (with the `.dev` suffix) and `prod` (base package name) variants using a single `google-services.json`.
  3. Create Flavor-Specific Source Sets:
    • Switch to Project view in Android Studio.
    • Right-click on app/src -> New -> Directory. Name it dev.
    • Inside dev, create res/values/ directories.
    • Right-click on app/src -> New -> Directory. Name it prod.
    • Inside prod, create res/values/ directories.
    • (Optional but good practice): You can now move the default app_name string definition from app/src/main/res/values/strings.xml into both app/src/dev/res/values/strings.xml and app/src/prod/res/values/strings.xml. Or, you can rely solely on the resValue definitions in Gradle (as done above). Using resValue is often simpler for single strings like app_name. If you had many different resources (layouts, drawables), you’d put them in the respective dev/res or prod/res folders.
  4. Use Build Config Fields in Code:
      • Update TaskViewModel.kt and ReportingWorker.kt to use BuildConfig instead of temporary constants.

    TaskViewModel.kt change

    
    // Add this import
    import com.yourcompany.advancedconceptsapp.BuildConfig
    
    // Replace the temporary constant usage
    // const val TEMPORARY_TASKS_COLLECTION = "tasks" // Remove this line
    private val tasksCollection = db.collection(BuildConfig.TASKS_COLLECTION) // Use build config field
                        

    ReportingWorker.kt change

    
    // Add this import
    import com.yourcompany.advancedconceptsapp.BuildConfig
    
    // Replace the temporary constant usage
    // const val TEMPORARY_USAGE_LOG_COLLECTION = "usage_logs" // Remove this line
    
    // ... inside doWork() ...
    db.collection(BuildConfig.USAGE_LOG_COLLECTION).add(logEntry).await() // Use build config field
                        

    Modify TaskScreen.kt to potentially use the flavor-specific app name (though resValue handles this automatically if you referenced @string/app_name correctly, which TopAppBar usually does). If you set the title directly, you would load it from resources:

     // In TaskScreen.kt (if needed)
    import androidx.compose.ui.res.stringResource
    import com.yourcompany.advancedconceptsapp.R // Import R class
    // Inside Scaffold -> topBar

    TopAppBar(title = { Text(stringResource(id = R.string.app_name)) }) // Use string resource

  5. Select Build Variant & Test:
    • In Android Studio, go to Build -> Select Build Variant… (or use the “Build Variants” panel usually docked on the left).
    • You can now choose between devDebug, devRelease, prodDebug, and prodRelease.
    • Select devDebug. Run the app. The title should say “Task Reporter (Dev)”. Data should go to tasks_dev and usage_logs_dev in Firestore.
    • Select prodDebug. Run the app. The title should be “Task Reporter”. Data should go to tasks and usage_logs. A quick way to confirm the active environment at runtime is sketched below.
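
To double-check which environment a running build points at, you can log the generated BuildConfig fields. This is a minimal sketch (the helper function and log tag are mine; APPLICATION_ID is a field the Android Gradle Plugin generates by default):

    import android.util.Log
    import com.yourcompany.advancedconceptsapp.BuildConfig

    // Logs the application ID and the Firestore collection names baked into this build.
    fun logActiveEnvironment() {
        Log.d(
            "FlavorCheck",
            "applicationId=${BuildConfig.APPLICATION_ID}, " +
                "tasks=${BuildConfig.TASKS_COLLECTION}, " +
                "usageLogs=${BuildConfig.USAGE_LOG_COLLECTION}"
        )
    }

Calling it once from MainActivity.onCreate during development is enough to confirm the selected variant.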

Step 5: Proguard/R8 Configuration (for Release Builds)

R8 is the default code shrinker and obfuscator in Android Studio (successor to Proguard). It’s enabled by default for release build types. We need to ensure it doesn’t break our app, especially Firestore data mapping.

    1. Review app/build.gradle.kts Release Build Type:
      app/build.gradle.kts

      
      android {
          // ...
          buildTypes {
              release {
                  isMinifyEnabled = true // Should be true by default for release
                  isShrinkResources = true // R8 handles both
                  proguardFiles(
                      getDefaultProguardFile("proguard-android-optimize.txt"),
                      "proguard-rules.pro" // Our custom rules file
                  )
              }
              debug {
                  isMinifyEnabled = false // Usually false for debug
                  proguardFiles(
                      getDefaultProguardFile("proguard-android-optimize.txt"),
                      "proguard-rules.pro"
                  )
              }
          }
          // ...
      }
                 

      isMinifyEnabled = true enables R8 for the release build type.

    2. Configure app/proguard-rules.pro:
      • Firestore uses reflection to serialize/deserialize data classes. R8 might remove or rename classes/fields needed for this process. We need to add “keep” rules.
      • Open (or create) the app/proguard-rules.pro file. Add the following:
      
      # Keep Task data class and its members for Firestore serialization
      -keep class com.yourcompany.advancedconceptsapp.data.Task { <init>(...); *; }
      # Keep any other data classes used with Firestore similarly
      # -keep class com.yourcompany.advancedconceptsapp.data.AnotherFirestoreModel { <init>(...); *; }
      
      # Keep Coroutine builders and intrinsics (often needed, though AGP/R8 handle some automatically)
      -keepnames class kotlinx.coroutines.intrinsics.** { *; }
      
      # Keep companion objects for Workers if needed (sometimes R8 removes them)
      -keepclassmembers class * extends androidx.work.Worker {
          public static ** Companion;
      }
      
      # Keep specific fields/methods if using reflection elsewhere
      # -keepclassmembers class com.example.SomeClass {
      #    private java.lang.String someField;
      #    public void someMethod();
      # }
      
      # Add rules for any other libraries that require them (e.g., Retrofit, Gson, etc.)
      # Consult library documentation for necessary Proguard/R8 rules.
    • Explanation:
      • -keep class ... { <init>(...); *; }: Keeps the Task class, its constructors (<init>), and all its fields/methods (*) from being removed or renamed. This is crucial for Firestore. An alternative using the @Keep annotation is sketched after this list.
      • -keepnames: Prevents renaming but allows removal if unused.
      • -keepclassmembers: Keeps specific members within a class.
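
As a complement to hand-written keep rules, you can also annotate Firestore model classes with androidx.annotation.Keep, which tells R8 to keep the annotated class and its members. A minimal sketch, assuming the androidx.annotation dependency is available (it usually is, transitively, in AndroidX projects):

    import androidx.annotation.Keep
    import com.google.firebase.firestore.DocumentId

    // @Keep prevents R8 from removing or renaming this class and its members,
    // protecting Firestore's reflection-based mapping without a manual rule.
    @Keep
    data class Task(
        @DocumentId
        val id: String = "",
        val description: String = "",
        val timestamp: Long = System.currentTimeMillis()
    )

Explicit rules in proguard-rules.pro are still useful for third-party classes you cannot annotate.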

3. Test the Release Build:

    • Select the prodRelease build variant.
    • Go to Build -> Generate Signed Bundle / APK…. Choose APK.
    • Create a new keystore or use an existing one (follow the prompts). Remember the passwords!
    • Select prodRelease as the variant. Click Finish.
    • Android Studio will build the release APK. Find it (usually in app/prod/release/).
    • Install this APK manually on a device: adb install app-prod-release.apk.
    • Test thoroughly. Can you add tasks? Do they appear? Does the background worker still log to Firestore (check usage_logs)? If it crashes or data doesn’t save/load correctly, R8 likely removed something important. Check Logcat for errors (often ClassNotFoundException or NoSuchMethodError) and adjust your proguard-rules.pro file accordingly.

 


 

Step 6: Firebase App Distribution (for Dev Builds)

Configure Gradle to upload development builds to testers via Firebase App Distribution.

  1. Download the private key: in the Firebase console, go to Project Overview (top-left corner) -> Service accounts -> Firebase Admin SDK and click the “Generate new private key” button. Move the downloaded file (api-project-xxx-yyy.json) to the project root, at the same level as the app folder. Keep this file local only; do not push it to the remote repository, because it contains sensitive data and the push may be rejected.
  2. Configure App Distribution Plugin in app/build.gradle.kts:
    app/build.gradle.kts

    
    // Apply the plugin at the top
    plugins {
        // ... other plugins id("com.android.application"), id("kotlin-android"), etc.
        alias(libs.plugins.google.firebase.appdistribution)
    }
    
    android {
        // ... buildFeatures, flavorDimensions, productFlavors ...

        buildTypes {
            getByName("release") {
                isMinifyEnabled = true // Should be true by default for release
                isShrinkResources = true // R8 handles both
                proguardFiles(
                    getDefaultProguardFile("proguard-android-optimize.txt"),
                    "proguard-rules.pro" // Our custom rules file
                )
                // App Distribution is configured per build type; add a similar block
                // under debug if you also upload debug builds.
                // (Kotlin DSL may also need: import com.google.firebase.appdistribution.gradle.firebaseAppDistribution)
                firebaseAppDistribution {
                    artifactType = "APK"
                    releaseNotes = "Latest build with fixes/features"
                    testers = "briew@example.com, bri@example.com, cal@example.com"
                    // Keep this credentials file local (or supply it via CI secrets); it contains sensitive data.
                    serviceCredentialsFile = "$rootDir/api-project-xxx-yyy.json"
                }
            }
            getByName("debug") {
                isMinifyEnabled = false // Usually false for debug
                proguardFiles(
                    getDefaultProguardFile("proguard-android-optimize.txt"),
                    "proguard-rules.pro"
                )
            }
        }
    }

    Add the plugin version to libs.versions.toml:

    
    [versions]
    googleFirebaseAppdistribution = "5.1.1"
    [plugins]
    google-firebase-appdistribution = { id = "com.google.firebase.appdistribution", version.ref = "googleFirebaseAppdistribution" }
    
    Ensure the plugin is also declared in the project-level build.gradle.kts:

    project build.gradle.kts

    
    plugins {
        // ...
        alias(libs.plugins.google.firebase.appdistribution) apply false
    }
                    

    Sync Gradle files.

  3. Upload a Build Manually:
    • Select the desired variant (e.g., devDebug, devRelease, prodDebug, prodRelease).
    • In the Android Studio terminal, run the corresponding command to generate and upload an APK for each environment:
      • ./gradlew assembleRelease appDistributionUploadProdRelease
      • ./gradlew assembleRelease appDistributionUploadDevRelease
      • ./gradlew assembleDebug appDistributionUploadProdDebug
      • ./gradlew assembleDebug appDistributionUploadDevDebug
    • Check Firebase Console -> App Distribution and select the .dev app. Add testers or use the configured group (`android-testers`).

Step 7: CI/CD with GitHub Actions

Automate building and distributing the `dev` build on push to a specific branch.

  1. Create GitHub Repository: Create a new repository on GitHub and push your project code to it.
  2. Generate FIREBASE_APP_ID:
      • In the Firebase console, go to Project Overview -> Project settings -> General and copy the App ID for the com.yourcompany.advancedconceptsapp.dev app (format 1:xxxxxxxxx:android:yyyyyyyyyy).
      • In GitHub repository go to Settings -> Secrets and variables -> Actions -> New repository secret
      • Set the name: FIREBASE_APP_ID and value: paste the App ID generated
  3. Add FIREBASE_SERVICE_ACCOUNT_KEY_JSON:
      • Open api-project-xxx-yyy.json located at the project root and copy its content.
      • In GitHub repository go to Settings -> Secrets and variables -> Actions -> New repository secret
      • Set the name: FIREBASE_SERVICE_ACCOUNT_KEY_JSON and value: paste the json content
  4. Create GitHub Actions Workflow File:
      • In your project root, create the directories .github/workflows/.
      • Inside .github/workflows/, create a new file named android_build_distribute.yml.
      • Paste the following content:
      name: Android CI 
      
      on: 
        push: 
          branches: [ "main" ] 
        pull_request: 
          branches: [ "main" ] 
      jobs: 
        build: 
          runs-on: ubuntu-latest 
          steps: 
          - uses: actions/checkout@v3
          - name: set up JDK 17 
            uses: actions/setup-java@v3 
            with: 
              java-version: '17' 
              distribution: 'temurin' 
              cache: gradle 
          - name: Grant execute permission for gradlew 
            run: chmod +x ./gradlew 
          - name: Build devRelease APK
            run: ./gradlew assembleDevRelease
          - name: upload artifact to Firebase App Distribution
            uses: wzieba/Firebase-Distribution-Github-Action@v1
            with:
              appId: ${{ secrets.FIREBASE_APP_ID }}
              serviceCredentialsFileContent: ${{ secrets.FIREBASE_SERVICE_ACCOUNT_KEY_JSON }}
              groups: testers
              file: app/build/outputs/apk/dev/release/app-dev-release-unsigned.apk
      
  5. Commit and Push: Commit the .github/workflows/android_build_distribute.yml file and push it to your main branch on GitHub.
  6. Verify: Go to the “Actions” tab in your GitHub repository. You should see the workflow running. If it succeeds, check Firebase App Distribution for the new build. Your testers should get notified.

 


 

Step 8: Testing and Verification Summary

    • Flavors: Switch between devDebug and prodDebug in Android Studio. Verify the app name changes and data goes to the correct Firestore collections (tasks_dev/tasks, usage_logs_dev/usage_logs). A small unit-test sketch for this check follows this list.
    • WorkManager: Use the App Inspection -> Background Task Inspector or ADB commands to verify the ReportingWorker runs periodically and logs data to the correct Firestore collection based on the selected flavor.
    • R8/Proguard: Install and test the prodRelease APK manually. Ensure all features work, especially adding/viewing tasks (Firestore interaction). Check Logcat for crashes related to missing classes/methods.
    • App Distribution: Make sure testers receive invites for the devDebug (or devRelease) builds uploaded manually or via CI/CD. Ensure they can install and run the app.
    • CI/CD: Check the GitHub Actions logs for successful builds and uploads after pushing to the main branch. Verify the build appears in Firebase App Distribution.
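
To make the flavor check repeatable, a small local unit test can assert that each variant points at the expected collections. This is a minimal sketch (the test class name is mine; it assumes the standard JUnit4 testImplementation dependency from the project template):

    import com.yourcompany.advancedconceptsapp.BuildConfig
    import org.junit.Assert.assertEquals
    import org.junit.Test

    class FlavorConfigTest {
        @Test
        fun collectionsMatchFlavor() {
            // Dev builds (applicationId ends with ".dev") must use the *_dev collections;
            // prod builds must use the plain ones.
            val isDev = BuildConfig.APPLICATION_ID.endsWith(".dev")
            assertEquals(isDev, BuildConfig.TASKS_COLLECTION.endsWith("_dev"))
            assertEquals(isDev, BuildConfig.USAGE_LOG_COLLECTION.endsWith("_dev"))
        }
    }

Run it per variant with ./gradlew testDevDebugUnitTest and ./gradlew testProdDebugUnitTest.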

 

Conclusion

Congratulations! You’ve navigated complex Android topics including Firestore, WorkManager, Compose, Flavors (with correct Firebase setup), R8, App Distribution, and CI/CD.

This project provides a solid foundation. From here, you can explore:

    • More complex WorkManager chains or constraints.
    • Deeper R8/Proguard rule optimization.
    • More sophisticated CI/CD pipelines (deploy signed apks/bundles, running tests, deploying to Google Play).
    • Using different NoSQL databases or local caching with Room.
    • Advanced Compose UI patterns and state management.
    • Firebase Authentication, Cloud Functions, etc.

If you want to have access to the full code in my GitHub repository, contact me in the comments.


 

Project Folder Structure (Conceptual)


AdvancedConceptsApp/
├── .git/
├── .github/workflows/android_build_distribute.yml
├── .gradle/
├── app/
│   ├── build/
│   ├── libs/
│   ├── src/
│   │   ├── main/           # Common code, res, AndroidManifest.xml
│   │   │   └── java/com/yourcompany/advancedconceptsapp/
│   │   │       ├── data/Task.kt
│   │   │       ├── ui/TaskScreen.kt, TaskViewModel.kt, theme/
│   │   │       ├── worker/ReportingWorker.kt
│   │   │       └── MainActivity.kt
│   │   ├── dev/            # Dev flavor source set (optional overrides)
│   │   ├── prod/           # Prod flavor source set (optional overrides)
│   │   ├── test/           # Unit tests
│   │   └── androidTest/    # Instrumentation tests
│   ├── google-services.json # *** IMPORTANT: Contains configs for BOTH package names ***
│   ├── build.gradle.kts    # App-level build script
│   └── proguard-rules.pro # R8/Proguard rules
├── api-project-xxx-yyy.json # Firebase service account key json
├── gradle/wrapper/
├── build.gradle.kts      # Project-level build script
├── gradle.properties
├── gradlew
├── gradlew.bat
└── settings.gradle.kts
        

 
