PIM for Azure Resources
https://blogs.perficient.com/2025/05/14/pim-for-azure-resources/

Privileged Identity Management

Privileged Identity Management (PIM) is a service in Microsoft Entra ID that enables you to manage, control, and monitor access to important resources in your organization. These resources include those in Microsoft Entra ID, Azure, and other Microsoft Online Services such as Microsoft 365 or Microsoft Intune. This blog is written to help those who want to set up just-in-time access for Azure resources, scoped to the subscription level.

Why do we need PIM for Azure Resources?

Better Security for Important Access

PIM ensures that only the right people can access essential systems when needed and only for a short time. This reduces the chances of misuse by someone with powerful access.

Giving Only the Minimum Access

PIM ensures that people only have the access they need to do their jobs. This means they can’t access anything unnecessary, keeping things secure.

Time-Limited Access

With PIM, users can get special access for a set period. Once the time is up, the access is automatically removed, preventing anyone from holding on to unnecessary permissions.

Access When Needed

PIM gives Just-in-Time (JIT) Access, meaning users can only request higher-level access when needed, and it is automatically taken away after a set time. This reduces the chances of having access for too long.

Approval Process for Access

PIM lets you set up a process where access needs to be approved by someone (like a manager or security) before it’s given. This adds another layer of control.

Tracking and Monitoring

PIM keeps detailed records of who asked for and received special access, when they accessed something, and what they did. This makes it easier to catch any suspicious activities.

Temporary Admin Access

Instead of giving someone admin access all the time, PIM allows it to be granted for specific tasks. Admins only get special access when needed, and for as long as necessary, so there is less risk.

Meeting Legal and Security Standards

Some industries require companies to follow strict rules (like protecting personal information). PIM helps meet these rules by controlling who has access and keeping track of it for audits.

 How to set up PIM in Azure

Create Security Group & Map to Subscriptions

  • Step 1: Create security groups for each Azure subscription to manage access control.
    • Security groups are created and managed in Microsoft Entra ID. As illustrated in the snapshot below, use the global search box in the Azure portal to find the appropriate service.

Pim 1

 

  • Step 2: Select the service you need, then click New Group to create a new security group. Fill in all necessary details, including group name, description, and any other required attributes.

Pim 2

 

    • Create a separate group for each subscription.
    • If your account includes two subscriptions, such as Prod and Non-Prod, create distinct security groups for each subscription. This allows users to request access to a specific subscription.
    • Make the user a member of both groups, enabling them to choose which subscription resources they wish to activate.
    • The screenshot below shows that the Demo-Group security group will be created and assigned to its corresponding subscription.

Pim 3

 

Navigate to PIM (Privileged Identity Management)

  • Step 3: In the Azure portal, navigate to Identity Governance and select Privileged Identity Management (PIM) to manage privileged access.

Pim 4

 

Enable PIM for Azure Resources

  • Step 4: Within PIM, select the area you want to enable it for. For this setup, we are focusing on subscription-level access to control who can activate privileged access to Azure subscriptions.
  • Step 5: Choose Azure Resources from the list of available options in PIM, as shown in the screenshot below.

Pim 5

 

    • An assignment needs to be created for the groups we created so that members of those groups will see an option to activate access for their respective subscriptions.
  • Step 6: As per the screenshots below, once you select Azure resources, select the subscription and group for which you want to create assignments.

Pim 6

 

Pim 7

 

    • As shown in the image below, under the Resource section, the subscription we want to grant access to is selected, and the Resource Type is Subscription. Choose the role you want to grant, and select the Demo-Group security group as the member.

Pim 8

 

  • Step 7: Once the assignment is complete, users who are part of a group need to log out and log back in to see the changes applied. To view and activate your assignments in PIM, follow the steps below:

1. Navigate to the Assignments Section

  • Go to PIM (Privileged Identity Management) by selecting:
  • Microsoft Entra ID -> Identity Governance -> PIM -> Azure Resources -> Activate Role.

2. Select Your Assignment

  • In this section, you will see a list of the assignments for which you are eligible.

3. Activate the Role

  • To activate a role, click on Activate. By default, the assignment lasts 8 hours. If necessary, you can request a different duration by providing a justification when activating the assignment.

4. Validation and Finalization

  • The system will take some time to validate your request. Once completed, the assignment will appear under the Active Assignments.

Pim 12 1

 

  • Step 8: As shown in the screenshot below, the activation duration can be set to 24 hours by editing the assignment settings.

Pim 10

 

    • You can modify the assignment settings and adjust the values according to your specific requirements. Please refer to the screenshot below for more details.

Pim 11

 

Conclusion

Azure PIM helps make your system safer by ensuring that only the right people can access essential resources for a short time. It lets you give access when needed (just-in-time), require approval for special access, automatically manage who can access what, and keep track of everything. PIM is essential for organizations that want to limit who can access sensitive information, ensure only the necessary people have the correct permissions at the right time, and prevent unauthorized access.

How the Change to TLS Certificate Lifetimes Will Affect Sitecore Projects (and How to Prepare)
https://blogs.perficient.com/2025/04/18/how-the-change-to-tls-certificate-lifetimes-will-affect-sitecore-projects-and-how-to-prepare/

TLS certificate lifetimes are being significantly reduced over the next few years as part of an industry-wide push toward greater security and automation. Here’s the phased timeline currently in place:

  • Now through March 15, 2026: Maximum lifetime is 398 days

  • Starting March 15, 2026: Reduced to 200 days

  • Starting March 15, 2027: Further reduced to 100 days

  • Starting March 15, 2029: Reduced again to just 47 days

For teams managing Sitecore implementations, this is more than a policy shift—it introduces operational urgency. As certificates begin expiring more frequently, any reliance on manual tracking or last-minute renewals could result in costly downtime or broken integrations.

If your Sitecore environment includes secure endpoints, custom domains, or external integrations, now is the time to assess your certificate strategy and move toward automation.

Why This Matters for Sitecore

Sitecore projects often involve:

  • Multiple environments (development, staging, production) with different certificates

  • Custom domains or subdomains used for CDNs, APIs, headless apps, or marketing campaigns

  • Third-party integrations that require secure connections

  • Marketing and personalization features that rely on seamless uptime

A single expired certificate can lead to downtime, loss of customer trust, or failed integrations—any of which could severely impact your digital experience delivery.

Key Risks of Shorter TLS Lifetimes

  • Increased risk of missed renewals if teams rely on manual tracking

  • Broken environments due to expired certs in Azure, IIS, or Kubernetes configurations

  • Delayed deployments when certificates must be re-issued last minute

  • SEO and trust damage if browsers start flagging your site as insecure

How to Prepare Your Sitecore Project Teams

To stay ahead of the TLS certificate lifecycle changes, here are concrete steps you should take:

1. Inventory All TLS Certificates

  • Audit all environments and domains using certificates

  • Include internal services, custom endpoints, and non-production domains

  • Use a centralized tracking tool (e.g., Azure Key Vault, HashiCorp Vault, or a certificate management platform)

2. Automate Certificate Renewals

  • Wherever possible, switch to automated certificate issuance and renewal

  • Use services like:

    • Azure App Service Managed Certificates

    • Let’s Encrypt with automation scripts

    • ACME protocol integrations for Kubernetes

  • For Azure-hosted Sitecore instances, leverage Key Vault and App Gateway integrations

3. Establish Certificate Ownership

  • Assign clear ownership of certificate management per environment or domain

  • Document who is responsible for renewals and updates

  • Add certificate health checks to your DevOps dashboards

4. Integrate Certificate Checks into CI/CD Pipelines

  • Validate certificate validity before deployments

  • Fail builds if certificates are nearing expiration

  • Include certificate management tasks as part of environment provisioning
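A minimal sketch of such a pre-deployment check, assuming Python is available on the build agent; the hostnames and the 30-day threshold below are placeholder assumptions, not values from any specific project:

    # check_cert_expiry.py -- fail the pipeline when a certificate is close to expiring
    import socket
    import ssl
    import sys
    from datetime import datetime, timezone

    HOSTS = ["www.example.com", "cm.example.com"]  # replace with your Sitecore endpoints
    WARN_DAYS = 30  # minimum acceptable days of remaining validity

    def days_until_expiry(hostname: str, port: int = 443) -> int:
        """Open a TLS connection and return the number of days before the cert expires."""
        context = ssl.create_default_context()
        with socket.create_connection((hostname, port), timeout=10) as sock:
            with context.wrap_socket(sock, server_hostname=hostname) as tls:
                cert = tls.getpeercert()
        # 'notAfter' is formatted like 'Jun  1 12:00:00 2026 GMT'
        expires = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
        return (expires.replace(tzinfo=timezone.utc) - datetime.now(timezone.utc)).days

    if __name__ == "__main__":
        expiring = []
        for host in HOSTS:
            remaining = days_until_expiry(host)
            print(f"{host}: {remaining} days remaining")
            if remaining < WARN_DAYS:
                expiring.append(host)
        sys.exit(1 if expiring else 0)  # a non-zero exit code fails the CI step

A CI job can run this script before deployment and rely on its exit code to block the release when any certificate is within the warning window.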

5. Educate Your Team

  • Hold knowledge-sharing sessions with developers, infrastructure engineers, and marketers

  • Make sure everyone understands the impact of expired certificates on the Sitecore experience

6. Test Expiry Scenarios

  • Simulate certificate expiry in non-production environments

  • Monitor behavior in Sitecore XP and XM environments, including CD and CM roles

  • Validate external systems (e.g., CDNs, integrations, identity providers) against cert failures

Final Thoughts

TLS certificate management is no longer a “set it and forget it” task. With shorter lifetimes becoming the norm, proactive planning is essential to avoid downtime and ensure secure, uninterrupted experiences for your users.

Start by auditing your current certificates and work toward automating renewals. Make certificate monitoring part of your DevOps practice, and ensure your Sitecore teams are aware of the upcoming changes.

Action Items for This Week:

  • Identify all TLS certificates in your Sitecore environments

  • Document renewal dates and responsible owners

  • Begin automating renewals for at least one domain

  • Review Azure and Sitecore documentation for certificate integration options

Security Best Practices in Sitecore XM Cloud
https://blogs.perficient.com/2025/04/16/security-best-practices-in-sitecore-xm-cloud/

Securing your Sitecore XM Cloud environment is critical to protecting your content, your users, and your brand. This post walks through key areas of XM Cloud security, including user management, authentication, secure coding, and best practices you can implement today to reduce your security risks.

We’ll also take a step back to look at the Sitecore Cloud Portal—the central control panel for managing user access across your Sitecore organization. Understanding both the Cloud Portal and XM Cloud’s internal security tools is essential for building a strong foundation of security.


Sitecore Cloud Portal User Management: Centralized Access Control

The Sitecore Cloud Portal is the gateway to managing user access across all Sitecore DXP tools, including XM Cloud. Proper setup here ensures that only the right people can view or change your environments and content.

Organization Roles

Each user you invite to your Sitecore organization is assigned an Organization Role, which defines their overall access level:

  • Organization Owner – Full control over the organization, including user and app management.

  • Organization Admin – Can manage users and assign app access, but cannot assign/remove Owners.

  • Organization User – Limited access; can only use specific apps they’ve been assigned to.

Tip: Assign the “Owner” role sparingly—only to those who absolutely need full administrative control.

App Roles

Beyond organization roles, users are granted App Roles for specific products like XM Cloud. These roles determine what actions they can take inside each product:

  • Admin – Full access to all features of the application.

  • User – More limited, often focused on content authoring or reviewing.

Managing Access

From the Admin section of the Cloud Portal, Organization Owners or Admins can:

  • Invite new team members and assign roles.

  • Grant access to apps like XM Cloud and assign appropriate app-level roles.

  • Review and update roles as team responsibilities shift.

  • Remove access when team members leave or change roles.

Security Tips:

  • Review user access regularly.

  • Use the least privilege principle—only grant what’s necessary.

  • Enable Multi-Factor Authentication (MFA) and integrate Single Sign-On (SSO) for extra protection.


XM Cloud User Management and Access Rights

Within XM Cloud itself, there’s another layer of user and role management that governs access to content and features.

Key Concepts

  • Users: Individual accounts representing people who work in the XM Cloud instance.

  • Roles: Collections of users with shared permissions.

  • Domains: Logical groupings of users and roles, useful for managing access in larger organizations.

Recommendation: Don’t assign permissions directly to users—assign them to roles instead for easier management.

Access Rights

Permissions can be set at the item level for things like reading, writing, deleting, or publishing. Access rights include:

  • Read

  • Write

  • Create

  • Delete

  • Administer

Each right can be set to:

  • Allow

  • Deny

  • Inherit

Best Practices

  • Follow the Role-Based Access Control (RBAC) model.

  • Create custom roles to reflect your team’s structure and responsibilities.

  • Audit roles and access regularly to prevent privilege creep.

  • Avoid modifying default system users—create new accounts instead.


Authentication and Client Credentials

XM Cloud supports robust authentication mechanisms to control access between services, deployments, and repositories.

Managing Client Credentials

When integrating external services or deploying via CI/CD, you’ll often need to authenticate through client credentials.

  • Use the Sitecore Cloud Portal to create and manage client credentials.

  • Grant only the necessary scopes (permissions) to each credential.

  • Rotate credentials periodically and revoke unused ones.

  • Use secure secrets management tools to store client IDs and secrets outside of source code.

For Git and deployment pipelines, connect XM Cloud environments to your repository using secure tokens and limit access to specific environments or branches when possible.
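To illustrate the "outside of source code" point above, here is a small, hedged sketch of reading credentials from environment variables at run time; it is written in Python purely for illustration, and the variable names are assumptions rather than anything defined by Sitecore:

    # Credentials are injected by the CI/CD system or a secrets manager,
    # never hard-coded or committed to the repository.
    import os
    import sys

    client_id = os.environ.get("XMCLOUD_CLIENT_ID")
    client_secret = os.environ.get("XMCLOUD_CLIENT_SECRET")

    if not client_id or not client_secret:
        sys.exit("Missing client credentials in the environment; aborting deployment step.")

    # Hand the values to your deployment tooling (CLI arguments, request parameters, etc.)
    # instead of embedding them in scripts or configuration files.
    print("Client credentials loaded from the environment.")

The same pattern applies regardless of language: the secret values live in the pipeline's secret store, and the script only reads them when it runs.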


Secure Coding and Data Handling

Security isn’t just about who has access—it’s also about how your code and data behave in production.

Secure Coding Practices

  • Sanitize all inputs to prevent injection attacks.

  • Avoid exposing sensitive information in logs or error messages.

  • Use HTTPS for all external communications.

  • Validate data both on the client and server sides.

  • Keep dependencies up to date and monitor for vulnerabilities.
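These principles apply in whatever language your head application or integration code uses; as a language-neutral illustration, the short Python sketch below contrasts validated, parameterized data handling with naive string concatenation (the table and field names are hypothetical):

    # Sketch: server-side validation plus a parameterized statement instead of string concatenation.
    import re
    import sqlite3

    def save_contact(conn: sqlite3.Connection, email: str) -> None:
        # Validate on the server even if the client already validated the input.
        if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", email):
            raise ValueError("Invalid email address")
        # The driver binds the value safely, preventing SQL injection.
        conn.execute("INSERT INTO contacts (email) VALUES (?)", (email,))
        conn.commit()

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE contacts (email TEXT)")
    save_contact(conn, "visitor@example.com")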

Data Privacy and Visitor Personalization

When using visitor data for personalization, be transparent and follow data privacy best practices:

  • Explicitly define what data is collected and how it’s used.

  • Give visitors control over their data preferences.

  • Avoid storing personally identifiable information (PII) unless absolutely necessary.


Where to Go from Here

Securing your XM Cloud environment is an ongoing process that involves team coordination, regular reviews, and constant vigilance. Here’s how to get started:

  • Audit your Cloud Portal roles and remove unnecessary access.

  • Establish a role-based structure in XM Cloud and limit direct user permissions.

  • Implement secure credential management for deployments and integrations.

  • Train your developers on secure coding and privacy best practices.

The stronger your security practices, the more confidence you—and your clients—can have in your digital experience platform.

Android Development Codelab: Mastering Advanced Concepts
https://blogs.perficient.com/2025/04/10/android-development-codelab-mastering-advanced-concepts/

 

This guide will walk you through building a small application step-by-step, focusing on integrating several powerful tools and concepts essential for modern Android development.

What We’ll Cover:

  • Jetpack Compose: Building the UI declaratively.
  • NoSQL Database (Firestore): Storing and retrieving data in the cloud.
  • WorkManager: Running reliable background tasks.
  • Build Flavors: Creating different versions of the app (e.g., dev vs. prod).
  • Proguard/R8: Shrinking and obfuscating your code for release.
  • Firebase App Distribution: Distributing test builds easily.
  • CI/CD (GitHub Actions): Automating the build and distribution process.

The Goal: Build a “Task Reporter” app. Users can add simple task descriptions. These tasks are saved to Firestore. A background worker will periodically “report” (log a message or update a counter in Firestore) that the app is active. We’ll have dev and prod flavors pointing to different Firestore collections/data and distribute the dev build for testing.

Prerequisites:

  • Android Studio (latest stable version recommended).
  • Basic understanding of Kotlin and Android development fundamentals.
  • Familiarity with Jetpack Compose basics (Composable functions, State).
  • A Google account to use Firebase.
  • A GitHub account (for CI/CD).

Let’s get started!


Step 0: Project Setup

  1. Create New Project: Open Android Studio -> New Project -> Empty Activity (choose Compose).
  2. Name: AdvancedConceptsApp (or your choice).
  3. Package Name: Your preferred package name (e.g., com.yourcompany.advancedconceptsapp).
  4. Language: Kotlin.
  5. Minimum SDK: API 24 or higher.
  6. Build Configuration Language: Kotlin DSL (build.gradle.kts).
  7. Click Finish.

Step 1: Firebase Integration (Firestore & App Distribution)

  1. Connect to Firebase: In Android Studio: Tools -> Firebase.
    • In the Assistant panel, find Firestore. Click “Get Started with Cloud Firestore”. Click “Connect to Firebase”. Follow the prompts to create a new Firebase project or connect to an existing one.
    • Click “Add Cloud Firestore to your app”. Accept changes to your build.gradle.kts (or build.gradle) files. This adds the necessary dependencies.
    • Go back to the Firebase Assistant, find App Distribution. Click “Get Started”. Add the App Distribution Gradle plugin by clicking the button. Accept changes.
  2. Enable Services in Firebase Console:
    • Go to the Firebase Console and select your project.
    • Enable Firestore Database (start in Test mode).
    • In the left menu, go to Build -> Firestore Database. Click “Create database”.
      • Start in Test mode for easier initial development (we’ll secure it later if needed). Choose a location close to your users. Click “Enable”.
    • Ensure App Distribution is accessible (no setup needed here yet).
  3. Download Initial google-services.json:
    • In Firebase Console -> Project Settings (gear icon) -> Your apps.
    • Ensure your Android app (using the base package name like com.yourcompany.advancedconceptsapp) is registered. If not, add it.
    • Download the google-services.json file.
    • Switch Android Studio to the Project view and place the file inside the app/ directory.
    • Note: We will likely replace this file in Step 4 after configuring build flavors.

Step 2: Building the Basic UI with Compose

Let’s create a simple UI to add and display tasks.

  1. Dependencies: Ensure necessary dependencies for Compose, ViewModel, Firestore, and WorkManager are in app/build.gradle.kts.
    app/build.gradle.kts

    
    dependencies {
        // Core & Lifecycle & Activity
        implementation("androidx.core:core-ktx:1.13.1") // Use latest versions
        implementation("androidx.lifecycle:lifecycle-runtime-ktx:2.8.1")
        implementation("androidx.activity:activity-compose:1.9.0")
        // Compose
        implementation(platform("androidx.compose:compose-bom:2024.04.01")) // Check latest BOM
        implementation("androidx.compose.ui:ui")
        implementation("androidx.compose.ui:ui-graphics")
        implementation("androidx.compose.ui:ui-tooling-preview")
        implementation("androidx.compose.material3:material3")
        implementation("androidx.lifecycle:lifecycle-viewmodel-compose:2.8.1")
        // Firebase
        implementation(platform("com.google.firebase:firebase-bom:33.0.0")) // Check latest BOM
        implementation("com.google.firebase:firebase-firestore-ktx")
        // WorkManager
        implementation("androidx.work:work-runtime-ktx:2.9.0") // Check latest version
    }
                    

    Sync Gradle files.

  2. Task Data Class: Create data/Task.kt.
    data/Task.kt

    
    package com.yourcompany.advancedconceptsapp.data
    
    import com.google.firebase.firestore.DocumentId
    
    data class Task(
        @DocumentId
        val id: String = "",
        val description: String = "",
        val timestamp: Long = System.currentTimeMillis()
    ) {
        constructor() : this("", "", 0L) // Firestore requires a no-arg constructor
    }
                    
  3. ViewModel: Create ui/TaskViewModel.kt. (We’ll update the collection name later).
    ui/TaskViewModel.kt

    
    package com.yourcompany.advancedconceptsapp.ui
    
    import androidx.lifecycle.ViewModel
    import androidx.lifecycle.viewModelScope
    import com.google.firebase.firestore.ktx.firestore
    import com.google.firebase.firestore.ktx.toObjects
    import com.google.firebase.ktx.Firebase
    import com.yourcompany.advancedconceptsapp.data.Task
    // Import BuildConfig later when needed
    import kotlinx.coroutines.flow.MutableStateFlow
    import kotlinx.coroutines.flow.StateFlow
    import kotlinx.coroutines.launch
    import kotlinx.coroutines.tasks.await
    
    // Temporary placeholder - will be replaced by BuildConfig field
    const val TEMPORARY_TASKS_COLLECTION = "tasks"
    
    class TaskViewModel : ViewModel() {
        private val db = Firebase.firestore
        // Use temporary constant for now
        private val tasksCollection = db.collection(TEMPORARY_TASKS_COLLECTION)
    
        private val _tasks = MutableStateFlow<List<Task>>(emptyList())
        val tasks: StateFlow<List<Task>> = _tasks
    
        private val _error = MutableStateFlow<String?>(null)
        val error: StateFlow<String?> = _error
    
        init {
            loadTasks()
        }
    
        fun loadTasks() {
            viewModelScope.launch {
                try {
                     tasksCollection.orderBy("timestamp", com.google.firebase.firestore.Query.Direction.DESCENDING)
                        .addSnapshotListener { snapshots, e ->
                            if (e != null) {
                                _error.value = "Error listening: ${e.localizedMessage}"
                                return@addSnapshotListener
                            }
                            _tasks.value = snapshots?.toObjects<Task>() ?: emptyList()
                            _error.value = null
                        }
                } catch (e: Exception) {
                    _error.value = "Error loading: ${e.localizedMessage}"
                }
            }
        }
    
         fun addTask(description: String) {
            if (description.isBlank()) {
                _error.value = "Task description cannot be empty."
                return
            }
            viewModelScope.launch {
                 try {
                     val task = Task(description = description, timestamp = System.currentTimeMillis())
                     tasksCollection.add(task).await()
                     _error.value = null
                 } catch (e: Exception) {
                    _error.value = "Error adding: ${e.localizedMessage}"
                }
            }
        }
    }
                    
  4. Main Screen Composable: Create ui/TaskScreen.kt.
    ui/TaskScreen.kt

    
    package com.yourcompany.advancedconceptsapp.ui
    
    // Imports: androidx.compose.*, androidx.lifecycle.viewmodel.compose.viewModel, java.text.SimpleDateFormat, etc.
    import androidx.compose.foundation.layout.*
    import androidx.compose.foundation.lazy.LazyColumn
    import androidx.compose.foundation.lazy.items
    import androidx.compose.material3.*
    import androidx.compose.runtime.*
    import androidx.compose.ui.Alignment
    import androidx.compose.ui.Modifier
    import androidx.compose.ui.unit.dp
    import androidx.lifecycle.viewmodel.compose.viewModel
    import com.yourcompany.advancedconceptsapp.data.Task
    import java.text.SimpleDateFormat
    import java.util.Date
    import java.util.Locale
    import androidx.compose.ui.res.stringResource
    import com.yourcompany.advancedconceptsapp.R // Import R class
    
    @OptIn(ExperimentalMaterial3Api::class) // For TopAppBar
    @Composable
    fun TaskScreen(taskViewModel: TaskViewModel = viewModel()) {
        val tasks by taskViewModel.tasks.collectAsState()
        val errorMessage by taskViewModel.error.collectAsState()
        var taskDescription by remember { mutableStateOf("") }
    
        Scaffold(
            topBar = {
                TopAppBar(title = { Text(stringResource(id = R.string.app_name)) }) // Use resource for flavor changes
            }
        ) { paddingValues ->
            Column(modifier = Modifier.padding(paddingValues).padding(16.dp).fillMaxSize()) {
                // Input Row
                Row(verticalAlignment = Alignment.CenterVertically, modifier = Modifier.fillMaxWidth()) {
                    OutlinedTextField(
                        value = taskDescription,
                        onValueChange = { taskDescription = it },
                        label = { Text("New Task Description") },
                        modifier = Modifier.weight(1f),
                        singleLine = true
                    )
                    Spacer(modifier = Modifier.width(8.dp))
                    Button(onClick = {
                        taskViewModel.addTask(taskDescription)
                        taskDescription = ""
                    }) { Text("Add") }
                }
                Spacer(modifier = Modifier.height(16.dp))
                // Error Message
                errorMessage?.let { Text(it, color = MaterialTheme.colorScheme.error, modifier = Modifier.padding(bottom = 8.dp)) }
                // Task List
                if (tasks.isEmpty() && errorMessage == null) {
                    Text("No tasks yet. Add one!")
                } else {
                    LazyColumn(modifier = Modifier.weight(1f)) {
                        items(tasks, key = { it.id }) { task ->
                            TaskItem(task)
                            Divider()
                        }
                    }
                }
            }
        }
    }
    
    @Composable
    fun TaskItem(task: Task) {
        val dateFormat = remember { SimpleDateFormat("yyyy-MM-dd HH:mm", Locale.getDefault()) }
        Row(modifier = Modifier.fillMaxWidth().padding(vertical = 8.dp), verticalAlignment = Alignment.CenterVertically) {
            Column(modifier = Modifier.weight(1f)) {
                Text(task.description, style = MaterialTheme.typography.bodyLarge)
                Text("Added: ${dateFormat.format(Date(task.timestamp))}", style = MaterialTheme.typography.bodySmall)
            }
        }
    }
                    
  5. Update MainActivity.kt: Set the content to TaskScreen.
    MainActivity.kt

    
    package com.yourcompany.advancedconceptsapp
    
    import android.os.Bundle
    import androidx.activity.ComponentActivity
    import androidx.activity.compose.setContent
    import androidx.compose.foundation.layout.fillMaxSize
    import androidx.compose.material3.MaterialTheme
    import androidx.compose.material3.Surface
    import androidx.compose.ui.Modifier
    import com.yourcompany.advancedconceptsapp.ui.TaskScreen
    import com.yourcompany.advancedconceptsapp.ui.theme.AdvancedConceptsAppTheme
    // Imports for WorkManager scheduling will be added in Step 3
    
    class MainActivity : ComponentActivity() {
        override fun onCreate(savedInstanceState: Bundle?) {
            super.onCreate(savedInstanceState)
            setContent {
                AdvancedConceptsAppTheme {
                    Surface(modifier = Modifier.fillMaxSize(), color = MaterialTheme.colorScheme.background) {
                        TaskScreen()
                    }
                }
            }
            // TODO: Schedule WorkManager job in Step 3
        }
    }
                    
  6. Run the App: Test basic functionality. Tasks should appear and persist in Firestore’s `tasks` collection (initially).

Step 3: WorkManager Implementation

Create a background worker for periodic reporting.

  1. Create the Worker: Create worker/ReportingWorker.kt. (Collection name will be updated later).
    worker/ReportingWorker.kt

    
    package com.yourcompany.advancedconceptsapp.worker
    
    import android.content.Context
    import android.util.Log
    import androidx.work.CoroutineWorker
    import androidx.work.WorkerParameters
    import com.google.firebase.firestore.ktx.firestore
    import com.google.firebase.ktx.Firebase
    // Import BuildConfig later when needed
    import kotlinx.coroutines.tasks.await
    
    // Temporary placeholder - will be replaced by BuildConfig field
    const val TEMPORARY_USAGE_LOG_COLLECTION = "usage_logs"
    
    class ReportingWorker(appContext: Context, workerParams: WorkerParameters) :
        CoroutineWorker(appContext, workerParams) {
    
        companion object { const val TAG = "ReportingWorker" }
        private val db = Firebase.firestore
    
        override suspend fun doWork(): Result {
            Log.d(TAG, "Worker started: Reporting usage.")
            return try {
                val logEntry = hashMapOf(
                    "timestamp" to System.currentTimeMillis(),
                    "message" to "App usage report.",
                    "worker_run_id" to id.toString()
                )
                // Use temporary constant for now
                db.collection(TEMPORARY_USAGE_LOG_COLLECTION).add(logEntry).await()
                Log.d(TAG, "Worker finished successfully.")
                Result.success()
            } catch (e: Exception) {
                Log.e(TAG, "Worker failed", e)
                Result.failure()
            }
        }
    }
                    
  2. Schedule the Worker: Update MainActivity.kt‘s onCreate method.
    MainActivity.kt additions

    
    // Add these imports to MainActivity.kt
    import android.content.Context
    import android.util.Log
    import androidx.work.*
    import com.yourcompany.advancedconceptsapp.worker.ReportingWorker
    import java.util.concurrent.TimeUnit
    
    // Inside MainActivity class, after setContent { ... } block in onCreate
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContent {
            // ... existing code ...
        }
        // Schedule the worker
        schedulePeriodicUsageReport(this)
    }
    
    // Add this function to MainActivity class
    private fun schedulePeriodicUsageReport(context: Context) {
        val constraints = Constraints.Builder()
            .setRequiredNetworkType(NetworkType.CONNECTED)
            .build()
    
        val reportingWorkRequest = PeriodicWorkRequestBuilder<ReportingWorker>(
                1, TimeUnit.HOURS // ~ every hour
             )
            .setConstraints(constraints)
            .addTag(ReportingWorker.TAG)
            .build()
    
        WorkManager.getInstance(context).enqueueUniquePeriodicWork(
            ReportingWorker.TAG,
            ExistingPeriodicWorkPolicy.KEEP,
            reportingWorkRequest
        )
        Log.d("MainActivity", "Periodic reporting work scheduled.")
    }
                    
  3. Test WorkManager:
    • Run the app. Check Logcat for messages from ReportingWorker and MainActivity about scheduling.
    • WorkManager tasks don’t run immediately, especially periodic ones. You can use ADB commands to force execution for testing:
      • Find your package name: com.yourcompany.advancedconceptsapp
      • Force run jobs: adb shell cmd jobscheduler run -f com.yourcompany.advancedconceptsapp 999 (999 is an arbitrary job ID; any value usually works).
      • Or use Android Studio’s App Inspection tab -> Background Task Inspector to view and trigger workers.
    • Check your Firestore Console for the usage_logs collection.

Step 4: Build Flavors (dev vs. prod)

Create dev and prod flavors for different environments.

  1. Configure app/build.gradle.kts:
    app/build.gradle.kts

    
    android {
        // ... namespace, compileSdk, defaultConfig ...
    
        // ****** Enable BuildConfig generation ******
        buildFeatures {
            buildConfig = true
        }
        // *******************************************
    
        flavorDimensions += "environment"
    
        productFlavors {
            create("dev") {
                dimension = "environment"
                applicationIdSuffix = ".dev" // CRITICAL: Changes package name for dev builds
                versionNameSuffix = "-dev"
                resValue("string", "app_name", "Task Reporter (Dev)")
                buildConfigField("String", "TASKS_COLLECTION", "\"tasks_dev\"")
                buildConfigField("String", "USAGE_LOG_COLLECTION", "\"usage_logs_dev\"")
            }
            create("prod") {
                dimension = "environment"
                resValue("string", "app_name", "Task Reporter")
                buildConfigField("String", "TASKS_COLLECTION", "\"tasks\"")
                buildConfigField("String", "USAGE_LOG_COLLECTION", "\"usage_logs\"")
            }
        }
    
        // ... buildTypes, compileOptions, etc ...
    }
                    

    Sync Gradle files.

    Important: We added applicationIdSuffix = ".dev". This means the actual package name for your development builds will become something like com.yourcompany.advancedconceptsapp.dev. This requires an update to your Firebase project setup, explained next. Also note the buildFeatures { buildConfig = true } block which is required to use buildConfigField.
  2. Handling Firebase for Suffixed Application IDs

    Because the `dev` flavor now has a different application ID (`…advancedconceptsapp.dev`), the original `google-services.json` file (downloaded in Step 1) will not work for `dev` builds, causing a “No matching client found” error during build.

    You must add this new Application ID to your Firebase project:

    1. Go to Firebase Console: Open your project settings (gear icon).
    2. Your apps: Scroll down to the “Your apps” card.
    3. Add app: Click “Add app” and select the Android icon (</>).
    4. Register dev app:
      • Package name: Enter the exact suffixed ID: com.yourcompany.advancedconceptsapp.dev (replace `com.yourcompany.advancedconceptsapp` with your actual base package name).
      • Nickname (Optional): “Task Reporter Dev”.
      • SHA-1 (Optional but Recommended): Add the debug SHA-1 key from `./gradlew signingReport`.
    5. Register and Download: Click “Register app”. Crucially, download the new google-services.json file offered. This file now contains configurations for BOTH your base ID and the `.dev` suffixed ID.
    6. Replace File: In Android Studio (Project view), delete the old google-services.json from the app/ directory and replace it with the **newly downloaded** one.
    7. Skip SDK steps: You can skip the remaining steps in the Firebase console for adding the SDK.
    8. Clean & Rebuild: Back in Android Studio, perform a Build -> Clean Project and then Build -> Rebuild Project.
    Now your project is correctly configured in Firebase for both `dev` (with the `.dev` suffix) and `prod` (base package name) variants using a single `google-services.json`.
  3. Create Flavor-Specific Source Sets:
    • Switch to Project view in Android Studio.
    • Right-click on app/src -> New -> Directory. Name it dev.
    • Inside dev, create res/values/ directories.
    • Right-click on app/src -> New -> Directory. Name it prod.
    • Inside prod, create res/values/ directories.
    • (Optional but good practice): You can now move the default app_name string definition from app/src/main/res/values/strings.xml into both app/src/dev/res/values/strings.xml and app/src/prod/res/values/strings.xml. Or, you can rely solely on the resValue definitions in Gradle (as done above). Using resValue is often simpler for single strings like app_name. If you had many different resources (layouts, drawables), you’d put them in the respective dev/res or prod/res folders.
  4. Use Build Config Fields in Code:
      • Update TaskViewModel.kt and ReportingWorker.kt to use BuildConfig instead of temporary constants.

    TaskViewModel.kt change

    
    // Add this import
    import com.yourcompany.advancedconceptsapp.BuildConfig
    
    // Replace the temporary constant usage
    // const val TEMPORARY_TASKS_COLLECTION = "tasks" // Remove this line
    private val tasksCollection = db.collection(BuildConfig.TASKS_COLLECTION) // Use build config field
                        

    ReportingWorker.kt change

    
    // Add this import
    import com.yourcompany.advancedconceptsapp.BuildConfig
    
    // Replace the temporary constant usage
    // const val TEMPORARY_USAGE_LOG_COLLECTION = "usage_logs" // Remove this line
    
    // ... inside doWork() ...
    db.collection(BuildConfig.USAGE_LOG_COLLECTION).add(logEntry).await() // Use build config field
                        

    Modify TaskScreen.kt to potentially use the flavor-specific app name (though resValue handles this automatically if you referenced @string/app_name correctly, which TopAppBar usually does). If you set the title directly, you would load it from resources:

     // In TaskScreen.kt (if needed)
    import androidx.compose.ui.res.stringResource
    import com.yourcompany.advancedconceptsapp.R // Import R class
    // Inside Scaffold -> topBar

    TopAppBar(title = { Text(stringResource(id = R.string.app_name)) }) // Use string resource

  5. Select Build Variant & Test:
    • In Android Studio, go to Build -> Select Build Variant… (or use the “Build Variants” panel usually docked on the left).
    • You can now choose between devDebug, devRelease, prodDebug, and prodRelease.
    • Select devDebug. Run the app. The title should say “Task Reporter (Dev)”. Data should go to tasks_dev and usage_logs_dev in Firestore.
    • Select prodDebug. Run the app. The title should be “Task Reporter”. Data should go to tasks and usage_logs.

Step 5: Proguard/R8 Configuration (for Release Builds)

R8 is the default code shrinker and obfuscator in Android Studio (successor to Proguard). It’s enabled by default for release build types. We need to ensure it doesn’t break our app, especially Firestore data mapping.

    1. Review app/build.gradle.kts Release Build Type:
      app/build.gradle.kts

      
      android {
          // ...
          buildTypes {
              release {
                  isMinifyEnabled = true // Should be true by default for release
                  isShrinkResources = true // R8 handles both
                  proguardFiles(
                      getDefaultProguardFile("proguard-android-optimize.txt"),
                      "proguard-rules.pro" // Our custom rules file
                  )
              }
              debug {
                  isMinifyEnabled = false // Usually false for debug
                  proguardFiles(
                      getDefaultProguardFile("proguard-android-optimize.txt"),
                      "proguard-rules.pro"
                  )
              }
              // ... debug build type ...
          }
          // ...
      }
                 

      isMinifyEnabled = true enables R8 for the release build type.

    2. Configure app/proguard-rules.pro:
      • Firestore uses reflection to serialize/deserialize data classes. R8 might remove or rename classes/fields needed for this process. We need to add “keep” rules.
      • Open (or create) the app/proguard-rules.pro file. Add the following:
      
      # Keep Task data class and its members for Firestore serialization
      -keep class com.yourcompany.advancedconceptsapp.data.Task { <init>(...); *; }
      # Keep any other data classes used with Firestore similarly
      # -keep class com.yourcompany.advancedconceptsapp.data.AnotherFirestoreModel { <init>(...); *; }
      
      # Keep Coroutine builders and intrinsics (often needed, though AGP/R8 handle some automatically)
      -keepnames class kotlinx.coroutines.intrinsics.** { *; }
      
      # Keep companion objects for Workers if needed (sometimes R8 removes them)
      -keepclassmembers class * extends androidx.work.Worker {
          public static ** Companion;
      }
      
      # Keep specific fields/methods if using reflection elsewhere
      # -keepclassmembers class com.example.SomeClass {
      #    private java.lang.String someField;
      #    public void someMethod();
      # }
      
      # Add rules for any other libraries that require them (e.g., Retrofit, Gson, etc.)
      # Consult library documentation for necessary Proguard/R8 rules.
    • Explanation:
      • -keep class ... { <init>(...); *; }: Keeps the Task class, its constructors (<init>), and all its fields/methods (*) from being removed or renamed. This is crucial for Firestore.
      • -keepnames: Prevents renaming but allows removal if unused.
      • -keepclassmembers: Keeps specific members within a class.

3. Test the Release Build:

    • Select the prodRelease build variant.
    • Go to Build -> Generate Signed Bundle / APK…. Choose APK.
    • Create a new keystore or use an existing one (follow the prompts). Remember the passwords!
    • Select prodRelease as the variant. Click Finish.
    • Android Studio will build the release APK. Find it (usually in app/prod/release/).
    • Install this APK manually on a device: adb install app-prod-release.apk.
    • Test thoroughly. Can you add tasks? Do they appear? Does the background worker still log to Firestore (check usage_logs)? If it crashes or data doesn’t save/load correctly, R8 likely removed something important. Check Logcat for errors (often ClassNotFoundException or NoSuchMethodError) and adjust your proguard-rules.pro file accordingly.

 


 

Step 6: Firebase App Distribution (for Dev Builds)

Configure Gradle to upload development builds to testers via Firebase App Distribution.

  1. Download the private key: in the Firebase console, go to Project Overview (top-left corner) -> Service accounts -> Firebase Admin SDK -> click the "Generate new private key" button.
    Move the downloaded api-project-xxx-yyy.json file to the project root, at the same level as the app folder. Keep this file local only; do not push it to the remote repository, because it contains sensitive data and the push may be rejected later.
  2. Configure App Distribution Plugin in app/build.gradle.kts:
    app/build.gradle.kts

    
    // Apply the plugin at the top
    plugins {
        // ... other plugins id("com.android.application"), id("kotlin-android"), etc.
        alias(libs.plugins.google.firebase.appdistribution)
    }
    
    android {
        // ... buildFeatures, flavorDimensions, productFlavors ...
    
        buildTypes {
            getByName("release") {
                isMinifyEnabled = true // Should be true by default for release
                isShrinkResources = true // R8 handles both
                proguardFiles(
                    getDefaultProguardFile("proguard-android-optimize.txt"),
                    "proguard-rules.pro" // Our custom rules file
                )
            }
            getByName("debug") {
                isMinifyEnabled = false // Usually false for debug
                proguardFiles(
                    getDefaultProguardFile("proguard-android-optimize.txt"),
                    "proguard-rules.pro"
                )
            }
            firebaseAppDistribution {
                artifactType = "APK"
                releaseNotes = "Latest build with fixes/features"
                testers = "briew@example.com, bri@example.com, cal@example.com"
                // Do not push this line (or the key file) to the remote repository; keep it local or supply it via a local variable
                serviceCredentialsFile = "$rootDir/api-project-xxx-yyy.json"
            }
        }
    }

    Add the library version to libs.versions.toml:

    
    [versions]
    googleFirebaseAppdistribution = "5.1.1"
    [plugins]
    google-firebase-appdistribution = { id = "com.google.firebase.appdistribution", version.ref = "googleFirebaseAppdistribution" }
    
    Ensure the plugin is also declared (with apply false) in the project-level build.gradle.kts:

    project build.gradle.kts

    
    plugins {
        // ...
        alias(libs.plugins.google.firebase.appdistribution) apply false
    }
                    

    Sync Gradle files.

  3. Upload a Build Manually:
    • Select the desired variant (e.g., devDebug, devRelease, prodDebug, prodRelease).
    • In the Android Studio terminal, run the corresponding command to generate and upload the APK for each environment:
      • ./gradlew assembleRelease appDistributionUploadProdRelease
      • ./gradlew assembleRelease appDistributionUploadDevRelease
      • ./gradlew assembleDebug appDistributionUploadProdDebug
      • ./gradlew assembleDebug appDistributionUploadDevDebug
    • Check the Firebase Console -> App Distribution -> select the .dev app, then add testers or use a configured tester group (e.g., `android-testers`).

Step 7: CI/CD with GitHub Actions

Automate building and distributing the `dev` build on push to a specific branch.

  1. Create GitHub Repository. Create a new repository on GitHub and push your project code to it.
    1. Generate FIREBASE_APP_ID:
      • on Firebase App Distribution go to Project Overview -> General -> App ID for com.yourcompany.advancedconceptsapp.dev environment (1:xxxxxxxxx:android:yyyyyyyyyy)
      • In GitHub repository go to Settings -> Secrets and variables -> Actions -> New repository secret
      • Set the name: FIREBASE_APP_ID and value: paste the App ID generated
    2. Add FIREBASE_SERVICE_ACCOUNT_KEY_JSON:
      • open api-project-xxx-yyy.json located at root project and copy the content
      • In GitHub repository go to Settings -> Secrets and variables -> Actions -> New repository secret
      • Set the name: FIREBASE_SERVICE_ACCOUNT_KEY_JSON and value: paste the json content
    3. Create GitHub Actions Workflow File:
      • In your project root, create the directories .github/workflows/.
      • Inside .github/workflows/, create a new file named android_build_distribute.yml.
      • Paste the following content:
      name: Android CI 
      
      on: 
        push: 
          branches: [ "main" ] 
        pull_request: 
          branches: [ "main" ] 
      jobs: 
        build: 
          runs-on: ubuntu-latest 
          steps: 
          - uses: actions/checkout@v3
          - name: set up JDK 17 
            uses: actions/setup-java@v3 
            with: 
              java-version: '17' 
              distribution: 'temurin' 
              cache: gradle 
          - name: Grant execute permission for gradlew 
            run: chmod +x ./gradlew 
          - name: Build devRelease APK 
            run: ./gradlew assembleRelease 
          - name: upload artifact to Firebase App Distribution
            uses: wzieba/Firebase-Distribution-Github-Action@v1
            with:
              appId: ${{ secrets.FIREBASE_APP_ID }}
              serviceCredentialsFileContent: ${{ secrets.FIREBASE_SERVICE_ACCOUNT_KEY_JSON }}
              groups: testers
              file: app/build/outputs/apk/dev/release/app-dev-release-unsigned.apk
      
    4. Commit and Push: Commit the .github/workflows/android_build_distribute.yml file and push it to your main branch on GitHub.
    5. Verify: Go to the "Actions" tab in your GitHub repository. You should see the workflow running. If it succeeds, check Firebase App Distribution for the new build. Your testers should get notified.

 


 

Step 8: Testing and Verification Summary

    • Flavors: Switch between devDebug and prodDebug in Android Studio. Verify the app name changes and data goes to the correct Firestore collections (tasks_dev/tasks, usage_logs_dev/usage_logs).
    • WorkManager: Use the App Inspection -> Background Task Inspector or ADB commands to verify the ReportingWorker runs periodically and logs data to the correct Firestore collection based on the selected flavor.
    • R8/Proguard: Install and test the prodRelease APK manually. Ensure all features work, especially adding/viewing tasks (Firestore interaction). Check Logcat for crashes related to missing classes/methods.
    • App Distribution: Make sure testers receive invites for the devDebug (or devRelease) builds uploaded manually or via CI/CD. Ensure they can install and run the app.
    • CI/CD: Check the GitHub Actions logs for successful builds and uploads after pushing to the branch configured in the workflow (main in this example). Verify the build appears in Firebase App Distribution.

 

Conclusion

Congratulations! You’ve navigated complex Android topics including Firestore, WorkManager, Compose, Flavors (with correct Firebase setup), R8, App Distribution, and CI/CD.

This project provides a solid foundation. From here, you can explore:

    • More complex WorkManager chains or constraints.
    • Deeper R8/Proguard rule optimization.
    • More sophisticated CI/CD pipelines (deploy signed apks/bundles, running tests, deploying to Google Play).
    • Using different NoSQL databases or local caching with Room.
    • Advanced Compose UI patterns and state management.
    • Firebase Authentication, Cloud Functions, etc.

If you want to have access to the full code in my GitHub repository, contact me in the comments.


 

Project Folder Structure (Conceptual)


AdvancedConceptsApp/
├── .git/
├── .github/workflows/android_build_distribute.yml
├── .gradle/
├── app/
│   ├── build/
│   ├── libs/
│   ├── src/
│   │   ├── main/           # Common code, res, AndroidManifest.xml
│   │   │   └── java/com/yourcompany/advancedconceptsapp/
│   │   │       ├── data/Task.kt
│   │   │       ├── ui/TaskScreen.kt, TaskViewModel.kt, theme/
│   │   │       ├── worker/ReportingWorker.kt
│   │   │       └── MainActivity.kt
│   │   ├── dev/            # Dev flavor source set (optional overrides)
│   │   ├── prod/           # Prod flavor source set (optional overrides)
│   │   ├── test/           # Unit tests
│   │   └── androidTest/    # Instrumentation tests
│   ├── google-services.json # *** IMPORTANT: Contains configs for BOTH package names ***
│   ├── build.gradle.kts    # App-level build script
│   └── proguard-rules.pro # R8/Proguard rules
├── api-project-xxx-yyy.json # Firebase service account key json
├── gradle/wrapper/
├── build.gradle.kts      # Project-level build script
├── gradle.properties
├── gradlew
├── gradlew.bat
└── settings.gradle.kts
        

 

Mastering AWS IaC with Pulumi and Python – Part 2
https://blogs.perficient.com/2025/04/04/mastering-aws-iac-with-pulumi-and-python-part-2/

In Part 1 of this series, we learned about the importance of AWS and Pulumi. Now, let’s explore the demo part in this practical session, which will create a service on AWS VPC by using Pulumi.

Before We Start, Ensure You Have the Following

AWS Account with IAM permissions for resource creation

  • Install Pulumi CLI:
    • # curl -fsSL https://get.pulumi.com | sh
  • Install Python & Virtual Environment:
    • # python3 -m venv venv
    • # source venv/bin/activate # On Windows: venv\Scripts\activate
    • # pip install pulumi boto3

Configure AWS Credentials

  • Check if AWS CLI is Installed
    • Run the command:
    • # aws --version
  • If AWS CLI is not installed, download and install it from AWS CLI installation guide.

Create an IAM User and Assign Permissions

  • Go to the AWS Management Console → IAM → Users
  • Click Create User, provide a username, and check Access Key – Programmatic Access
  • Assign necessary policies/permissions (e.g., AdministratorAccess or a custom policy).

Generate Security Credentials

  • After creating the user, download or copy the Access Key ID and Secret Access Key.

Configure AWS CLI with IAM User Credentials

  • Run:
    • # aws configure
  • Enter the credentials when prompted:
    • Access Key ID
    • Secret Access Key
    • Default region (e.g., us-east-1)
    • Output format (e.g., json)

Verify Configuration

  • Run a test command, such as:
    • # aws sts get-caller-identity
  • If everything is set up correctly, this will return the IAM user details.

Pulumi Version

Part2 1

AWS Configuration

Picture2 2

Pulumi Dashboard

Picture3

The Pulumi dashboard includes the sections listed below:

  • Overview
  • Readme
  • Updates
  • Deployments
  • Resources
  • Settings

Deployment Steps with Commands and Screenshots

Step 1: Initialize a Pulumi Project

  • # pulumi new aws-python

Step 2: Define AWS Resources

  • Modify __main__.py to create a VPC:

Picture4
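The screenshot shows the program itself; as a reference, a minimal __main__.py along these lines might look like the sketch below. The resource name, CIDR block, and tags are illustrative assumptions, and the pulumi-aws package must be installed in the virtual environment:

    """__main__.py -- minimal Pulumi program that provisions a VPC."""
    import pulumi
    import pulumi_aws as aws

    # Create a VPC with a /16 CIDR block; DNS support makes it usable for most workloads.
    vpc = aws.ec2.Vpc(
        "demo-vpc",
        cidr_block="10.0.0.0/16",
        enable_dns_support=True,
        enable_dns_hostnames=True,
        tags={"Name": "demo-vpc", "ManagedBy": "Pulumi"},
    )

    # Export the VPC ID so it appears as a stack output after `pulumi up`.
    pulumi.export("vpc_id", vpc.id)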

Step 3: Pulumi Preview

  • # pulumi preview

Pulumi Preview shows a dry-run of changes before applying them. It helps you see what resources will be created (+), updated (~), or deleted (-) without actually making any changes.

Picture5

Step 4: Deploy Infrastructure

  • # pulumi up

Pulumi up deploys or updates infrastructure by applying changes from your Pulumi code.

Picture6

Picture7

Step 5: Verify Deployment

AWS Console Page

Creating VPC Peering with Pulumi

Picture8
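As with the VPC above, the peering configuration in the screenshot is not reproduced verbatim; a hedged sketch of what a peering connection between two VPCs in the same account and region could look like with pulumi-aws (names and CIDR blocks are placeholders):

    import pulumi
    import pulumi_aws as aws

    # Two VPCs with non-overlapping CIDR blocks (a requirement for peering).
    vpc_a = aws.ec2.Vpc("vpc-a", cidr_block="10.0.0.0/16", tags={"Name": "vpc-a"})
    vpc_b = aws.ec2.Vpc("vpc-b", cidr_block="10.1.0.0/16", tags={"Name": "vpc-b"})

    # auto_accept works here because both VPCs share the same account and region.
    peering = aws.ec2.VpcPeeringConnection(
        "vpc-a-to-vpc-b",
        vpc_id=vpc_a.id,
        peer_vpc_id=vpc_b.id,
        auto_accept=True,
        tags={"Name": "vpc-a-to-vpc-b"},
    )

    pulumi.export("peering_connection_id", peering.id)

In practice you would also add route table entries on both sides so traffic actually flows across the peering connection.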

Pulumi destroy

  • # pulumi destroy

Removes all resources managed by Pulumi, restoring the environment to its original state.

Picture9

Picture10

Step 6: Pulumi Stack Remove

  • # pulumi stack rm <stack name>

pulumi stack rm removes a Pulumi stack and its state, but it does not delete the underlying cloud resources; if the stack still contains resources, the command refuses to run unless --force is used, which leaves those resources unmanaged.

Picture11

Picture12

After removed Stack

Picture13

AWS Console Page after deleting VPC

Picture14

Conclusion

Pulumi offers a powerful, flexible, and developer-friendly approach to managing AWS infrastructure. By leveraging Pulumi, you can:

  • Simplify Infrastructure Management – Define cloud resources as code for consistency and repeatability.
  • Enhance Productivity—Create a dynamic infrastructure by using Python’s full capabilities, including loops, functions, and modules.
  • Improve Collaboration – Version control your infrastructure with Git and integrate seamlessly with CI/CD pipelines.
  • Achieve Multi-Cloud Flexibility – Deploy AWS, Azure, and Google Cloud workloads without changing tools.
  • Maintain Security & Compliance – Use IAM policies, automated policies, and state management to enforce best practices.

With Pulumi’s modern IaC approach, you can move beyond traditional Terraform and CloudFormation and embrace a more scalable, flexible, and efficient way to manage AWS resources.

Key Takeaways

  • Code-Driven Infrastructure – Use loops, conditionals, and functions for dynamic configurations.
  • Multi-Cloud & Hybrid Support – Pulumi works across AWS, Azure, Google Cloud, and Kubernetes.
  • State Management & Versioning – Store state remotely with Pulumi Cloud or AWS S3 + DynamoDB.
  • Developer-Friendly – No need to learn a new domain-specific language (DSL); use Python!
  • Experiment with More AWS Services – Deploy API Gateway, Lambda, or DynamoDB.
  • Implement CI/CD with Pulumi – Automate deployments using GitHub Actions, Jenkins, or AWS CodePipeline.
  • Explore Pulumi Stacks – Manage multiple environments efficiently.
  • Read the Official Pulumi Docs – Pulumi AWS Documentation

]]>
https://blogs.perficient.com/2025/04/04/mastering-aws-iac-with-pulumi-and-python-part-2/feed/ 0 379632
Mastering AWS Infrastructure as Code with Pulumi and Python – Part 1 https://blogs.perficient.com/2025/03/27/mastering-aws-infrastructure-as-code-with-pulumi-and-python/ https://blogs.perficient.com/2025/03/27/mastering-aws-infrastructure-as-code-with-pulumi-and-python/#respond Thu, 27 Mar 2025 12:50:55 +0000 https://blogs.perficient.com/?p=379134

Pulumi is a modern Infrastructure as Code (IaC) tool that allows you to define, deploy, and manage cloud infrastructure using general-purpose programming languages. Unlike Terraform, which uses HCL, Pulumi enables you to define infrastructure using Python, making it easier for developers to integrate infrastructure with application code.
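For example, a complete Pulumi program that provisions an S3 bucket is only a few lines of Python (a minimal illustrative sketch; the resource name is an assumption):

import pulumi
import pulumi_aws as aws

# A single S3 bucket; Pulumi appends a random suffix to keep the physical name unique
bucket = aws.s3.Bucket("my-demo-bucket")

# Expose the bucket name as a stack output
pulumi.export("bucket_name", bucket.id)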

What You’ll Learn

  • How Pulumi works with AWS
  • Setting up Pulumi with Python
  • Deploying various AWS services with real-world examples
  • Best practices and advanced tips

Why Pulumi for AWS?

Pulumi provides several advantages over traditional IaC tools like Terraform and CloudFormation:
  • Code Reusability and Modularity – Use loops, conditionals, and functions for dynamic configurations.
  • Multi-Cloud and Multi-Language Support – Deploy across AWS, Azure, and Google Cloud with Python, TypeScript, Go, or .NET.
  • State Management Options – Store state locally, in S3, or Pulumi Cloud.
  • CI/CD Integration – Easily integrate Pulumi with Jenkins, GitHub Actions, or AWS CodePipeline.

How Pulumi Works

Pulumi Consists of 3 Main Components

  • Pulumi CLI – Executes commands like pulumi new, pulumi up, and pulumi destroy.
  • Pulumi SDK – Provides Python libraries to define and manage infrastructure.
  • Backend State Management – Stores infrastructure state in Pulumi Cloud, AWS S3, or locally.

Workflow Overview

  • Write Infrastructure Code (Python)
  • Pulumi Translates Code to AWS Resources
  • Apply Changes (pulumi up)
  • Pulumi Tracks State for Future Updates

Pulumi Dashboard

The Pulumi Dashboard (if using Pulumi Cloud) helps track:

  • The current state of infrastructure.
  • A history of deployments and updates.
  • Who made changes and what changed.
  • The ability to roll back if needed.

In other words, as Pulumi creates, updates, or destroys resources, the Pulumi Dashboard is updated accordingly.

Pulumi Workflow and Behavior

  1. Create Resources (pulumi up)
    • When you run pulumi up, Pulumi provisions the defined AWS resources and stores the resulting state.
    • The Pulumi Dashboard (Pulumi Cloud) shows the deployed resources, updates, and history.
  2. Modify/Update Resources (pulumi up)
    • If you change the Pulumi code and run pulumi up, Pulumi calculates the difference (diff) and updates only the necessary resources.
    • The changes are reflected in the Pulumi Dashboard.
  3. Destroy Resources (pulumi destroy)
    • Running pulumi destroy removes all the resources created by Pulumi.
    • The deletion status is updated in the Pulumi Dashboard.

Real-World Use Case: Automating AWS Infrastructure for a Web Application

Scenario

A company running a high-traffic web application on AWS wants to automate its cloud infrastructure using Pulumi. The goal is to deploy a highly available, scalable, and secure architecture with:

  • Compute: EC2 instances with Auto Scaling and an Elastic Load Balancer.
  • Networking: A secure VPC with private and public subnets.
  • Storage: S3 for static content and RDS for a managed database.
  • Security: IAM roles, Security Groups, and encryption best practices.
  • Monitoring: CloudWatch for logging and alerts.
  • CI/CD Integration: GitHub Actions for automated deployments.

Best Practices for Using Pulumi in Production

  • Use Stacks for Environment Separation: Define separate stacks for development, staging, and production.
  • Leverage Pulumi Config & Secrets: Store sensitive values securely in Pulumi’s secret management system.
  • Adopt Remote State Management: Store Pulumi state in AWS S3 + DynamoDB for collaboration and recovery.
  • Automate Deployments with CI/CD: Integrate Pulumi with GitHub Actions, Jenkins, or AWS CodePipeline.
  • Implement Role-Based Access Control (RBAC): Use IAM roles and policies to restrict access.
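To make the first two of these practices concrete, here is a hedged sketch of how per-stack configuration and secrets are typically consumed in a Pulumi Python program (the configuration keys and values are illustrative):

import pulumi

# Values set per stack, for example with:
#   pulumi stack init dev
#   pulumi config set instanceType t3.micro
#   pulumi config set --secret dbPassword <value>
config = pulumi.Config()

# Plain configuration value with a sensible default
instance_type = config.get("instanceType") or "t3.micro"

# Secret value: encrypted in the stack configuration file and kept
# encrypted in state and in CLI/dashboard output
db_password = config.require_secret("dbPassword")

pulumi.export("instance_type", instance_type)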

Architecture

Pulumiwithpython Architecture

Architecture Overview

Pulumi is an Infrastructure as Code (IaC) tool that allows you to define cloud infrastructure using programming languages like Python, TypeScript, and Go. In this architecture, Pulumi interacts with AWS to deploy multiple services.

Components in the Architecture

  1. Pulumi (IaC Tool)
    • Pulumi is at the top, managing the provisioning of AWS resources through code.
    • It interacts with AWS to define, deploy, and manage infrastructure.
  2. AWS Services Provisioned by Pulumi
    • Amazon VPC: The foundational network setup that provides isolated networking for AWS resources.
    • Amazon EC2: Virtual machines (compute instances) running applications or services.
    • Amazon S3: Object storage for data, logs, and backups.
    • Amazon RDS: Managed relational databases (e.g., MySQL, PostgreSQL).
    • AWS Lambda: Serverless computing service for event-driven applications.
    • Amazon CloudWatch: Monitoring and logging service for AWS infrastructure and applications.
    • Amazon EKS: Managed Kubernetes cluster for containerized applications.
    • AWS IAM (Identity and Access Management): Provides security and access control.
    • AWS CloudTrail: Logs API calls and activities for security and auditing.

How Pulumi Works in This Architecture

1. Define Infrastructure in Code

Using Pulumi (e.g., Python, TypeScript), you write a script to define resources like VPC, EC2, S3, etc.

2. Deploy Infrastructure

Run pulumi up, which translates the code into AWS API calls that create and configure the services.

3. Manage and Update

Modify infrastructure using Pulumi’s code and redeploy using pulumi up.

4. Destroy Infrastructure (if needed)

Run pulumi destroy to remove the entire setup.

Conclusion

Pulumi is a powerful Infrastructure-as-Code (IaC) tool that enables developers to provision and manage AWS resources using familiar programming languages like Python. Unlike traditional declarative tools like Terraform, Pulumi allows for greater flexibility through loops, conditionals, and reusable components.

In this blog, we explored how Pulumi can deploy AWS services like EC2, S3, RDS, and Lambda, along with an architecture diagram to illustrate the deployment. With Pulumi, you can streamline cloud infrastructure management while leveraging best practices in software development.

After covering AWS-Pulumi in Part 1, stay tuned for Part 2, where we’ll set up a VPC on AWS using Pulumi.

]]>
https://blogs.perficient.com/2025/03/27/mastering-aws-infrastructure-as-code-with-pulumi-and-python/feed/ 0 379134
HCL Commerce Containers Explained https://blogs.perficient.com/2025/03/19/hcl-commerce-containers-explained/ https://blogs.perficient.com/2025/03/19/hcl-commerce-containers-explained/#comments Wed, 19 Mar 2025 05:31:20 +0000 https://blogs.perficient.com/?p=378730

HCL Commerce Containers provide a modular and scalable approach to managing e-commerce applications. In this blog, we will explore the various Containers, their functionalities, and how they interact to create a seamless customer shopping experience.

Benefits of HCL Commerce Containers

  • Improved Performance: The system becomes faster and more responsive by caching frequent requests and optimizing search queries.
  • Scalability: Each Container can be scaled independently based on demand, ensuring the system can handle high traffic.
  • Manageability: Containers are designed to perform specific tasks, making the system easier to monitor, debug, and maintain.

 HCL Commerce Containers are individual components that work together to deliver a complete e-commerce solution.

Different Commerce Containers

  1. Cache app: This app implements caching mechanisms to store frequently accessed data in memory, reducing latency and improving response times for repeated requests.
  2. Nextjs-app: This app utilizes the Next.js framework to build server-side rendered (SSR) and statically generated (SSG) React applications. It dynamically interfaces with backend services like store-web or ts-web to fetch and display product data.
  3. Query-app: Acts as a middleware for handling search queries. It leverages Elasticsearch for full-text search capabilities and integrates with the cache app to enhance search performance by caching query results.
  4. Store-web: It handles the user interface and shopping experience, including browsing products, adding items to the cart, and checking out.
  5. Ts-app, Ts-web, Ts-utils:
    • Ts-app: Manages background processes such as order processing, user authentication, and other backend services.
    • Ts-web: This container is for the administrative tools. It supports tasks like cataloging, marketing, promotions, and order management, providing administrators and business users the necessary tools.
    • Ts-utils: Contains utility scripts and tools for automating routine tasks and maintenance operations.
  6. Ingest-app, Nifi-app:
    • Ingest-app: Handles the ingestion of product and catalog data into Elasticsearch, ensuring that the search index is current.
    • Nifi-app: This app utilizes Apache NiFi for orchestrating data flow pipelines. It automates the extraction, transformation, and loading (ETL) processes, ensuring data consistency and integrity across systems.
  7. Registry app: This app implements a service registry to maintain a directory of all microservices and their instances (Containers). It facilitates service discovery and load balancing within the microservices architecture.
  8. Tooling-web: Provides a suite of monitoring and debugging tools for developers and administrators. It includes dashboards for tracking system performance, logs, and metrics to aid in troubleshooting and maintaining system health.
Hcl commerce containers

HCL Commerce containers

Conclusion

This blog explored the various HCL Commerce Containers, their functionalities, and how they work together to create a robust e-commerce solution. By understanding and implementing these Containers, you can enhance the performance and scalability of your e-commerce platform.

To learn more, see "Deploying HCL Commerce Elasticsearch and Solr-based solutions": https://blogs.perficient.com/2024/12/11/deploying-hcl-commerce-elasticsearch-and-solr-based-solutions/

]]>
https://blogs.perficient.com/2025/03/19/hcl-commerce-containers-explained/feed/ 1 378730
Deployment of Infra using Terraform(IaC) and Automate CICD using Jenkins on AWS ECS https://blogs.perficient.com/2025/03/11/deployment-of-infra-using-terraformiac-and-automate-cicd-using-jenkins-on-aws-ecs/ https://blogs.perficient.com/2025/03/11/deployment-of-infra-using-terraformiac-and-automate-cicd-using-jenkins-on-aws-ecs/#respond Tue, 11 Mar 2025 18:43:02 +0000 https://blogs.perficient.com/?p=378120

Terraform

Terraform is HashiCorp's Infrastructure as Code (IaC) tool that allows you to develop, deploy, alter, and manage infrastructure using code. It lets you define resources and infrastructure in human-readable, declarative configuration files and manages your infrastructure's lifecycle.

The code is simply a set of instructions written in HCL (HashiCorp Configuration Language), a human-readable format, in files with the .tf or .tf.json extension.

What is IaC?

Infrastructure as code (IaC) refers to using configuration files to control your IT infrastructure.

What is the Purpose of IaC?

Managing IT infrastructure has traditionally been a laborious task. People would physically install and configure servers, which is time-consuming and costly.

Nowadays, businesses are growing rapidly, so manually managed infrastructure can no longer meet their demands.

To meet customer demands and save costs, IT organizations are quickly adopting the public cloud, which is largely API-driven. They architect their applications to support a much higher level of elasticity and deploy them on supporting technologies such as Docker containers in the public cloud. To build, manage, and deploy code on those technologies, a tool like Terraform is invaluable for delivering the product quickly.

Terraform Workflow

Tf Workflow

Terraform Init

  • The Terraform Init command initializes a working directory containing Terraform configuration files.

Terraform Plan

  • The Terraform Plan command is used to create an execution plan.

Terraform Apply

  • The Terraform Apply command is used to apply the changes required to reach the desired state of the configuration.

Terraform Refresh

  • The Terraform Refresh command reconciles the state Terraform knows about (via its state file) with the real-world infrastructure. This does not modify infrastructure but does modify the state file.

Terraform Destroy

  • The Terraform Destroy command is used to destroy the Terraform-managed infrastructure.

Jenkins Pipeline

A Jenkins Pipeline is a suite of plugins that supports building, deploying, and automating continuous integration and delivery (CI/CD) workflows. It provides a way to define the entire build process in a scripted or declarative format called a Jenkinsfile. This allows developers to manage and version their CI/CD processes alongside their application code.

Why Jenkins Pipeline?

Infrastructure as Code (IaC)

  • The build process is defined in a Jenkinsfile written in Groovy-based DSL (Domain-Specific Language).
  • The Jenkinsfile can be stored and versioned in the same repository as the application source code, ensuring synchronization between code and build processes.

Reusability and Maintainability

  • A single Jenkins pipeline can be reused across multiple environments (development, testing, production).
  • Update the Jenkinsfile to change the build process, reducing the need to manually modify multiple jobs in Jenkins.

Improved Version Control

  • Both the application code and build process are versioned together.
  • Older releases can be built using the corresponding Jenkinsfile, ensuring compatibility.

Automation and Scalability

  • The pipeline automates the entire CI/CD workflow, including code fetching, building, testing, and deployment.
  • It supports parallel stages, enabling multiple tasks (e.g., unit and integration tests) to run concurrently.

Simplified Configuration Management

  • Job configurations are no longer stored as XML files in Jenkins. Instead, they are defined as code in the Jenkinsfile, making backup and restoration easier.

Types of Jenkins Pipelines

Jenkins provides two types of pipelines:

Declarative Pipeline

  • Easier to use, structured, and designed for most users.
  • Uses a defined syntax and provides built-in error handling.

Scripted Pipeline

  • More flexible but requires advanced Groovy scripting knowledge.

AWS ECS 

AWS ECS (Elastic Container Service) is a managed container service from AWS that allows you to run and manage Docker containers on a cluster of virtual servers.

Container Deployment Era

Containers share the host operating system while keeping applications isolated from one another, and container-based deployment has become the dominant approach today.

  • Lightweight: Containers have less overhead than virtual machines. They share the host OS instead of shipping a full guest OS and contain only the libraries and modules required to run the application.
  • Portable: Containers can be moved from one host to another and run across OS distributions and clouds.
  • Efficient: Containers utilize resources better than virtual machines; they do not reserve the full hardware up front and can grow their resource usage gradually as requirements increase.
  • Fast Deployment: Containers can be built quickly from container images, and they are easy to roll back.
  • Microservices: Containers suit loosely coupled, distributed architectures, making them a natural fit for microservices.

Architecture

Arch

In this architecture, we launch an EC2 instance in AWS using Terraform, with user data that configures the Jenkins server. The Jenkins CI/CD pipeline then fetches the source code from GitHub, builds a Docker image, and pushes it to the ECR registry. Finally, the application is deployed to the ECS cluster using that Docker image.

Step 1: Create an IAM user and an Access Key/Secret Key for the IAM user, and provide the appropriate permissions, such as ECR and Docker Container Policy.

Img 1

Step 2: Create an ECR Repository to store the Docker Images.

Img 2

Step 3: Create an ECS Cluster

Img 4

Step 3.1: Create a task Definition. The Task Definition contains all the information to run the Container, such as the container Image URL and Compute Power.

Img 5

Step 3.2- Execution Role: It is attached to the Task Definition with permission from CloudWatch Logs to collect the real-time logs of the Container and ECS Task Execution Role Policy.

Iam Role

Step 3.3: Create a Service in the Cluster: A Task Definition alone does not keep the application running, so we create a Service, which acts as the intermediary between the application and the container instances and maintains the desired number of running tasks.

Img 6

Step 4: Jenkins Server Configuration

Img 3.0

Let’s deploy the code using Jenkins on ECS Cluster: Jenkinsfile for CICD Pipeline: https://github.com/prafulitankar/GitOps/blob/main/Jenkinsfile
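The linked Jenkinsfile drives the build and deployment stages. Conceptually, the deploy stage of such a pipeline boils down to registering a new task-definition revision that points at the freshly pushed ECR image and updating the ECS service to use it. As a hedged illustration only (not taken from the linked Jenkinsfile; the cluster, service, family, image, and role names are placeholders), the equivalent boto3 calls look roughly like this:

import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

# Register a new task-definition revision pointing at the new image tag
# (a Fargate launch type is assumed here purely for illustration)
task_def = ecs.register_task_definition(
    family="demo-app",
    networkMode="awsvpc",
    requiresCompatibilities=["FARGATE"],
    cpu="256",
    memory="512",
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
    containerDefinitions=[
        {
            "name": "demo-app",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/demo-app:latest",
            "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
            "essential": True,
        }
    ],
)

# Point the service at the new revision; ECS then rolls out the new tasks
ecs.update_service(
    cluster="demo-cluster",
    service="demo-service",
    taskDefinition=task_def["taskDefinition"]["taskDefinitionArn"],
)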

Create a Jenkins Pipeline, which should be a Declarative Pipeline.

Img 7.0

We have done with the Infra setup and Jenkins Pipeline. Let’s Run the Jenkins Pipeline:

Img 7

Once Jenkins Pipeline is successfully executed, the ECS service will try to deploy a new revision of Docker Image.

Img 8

Output

Once the pipeline executed successfully, the application was deployed to the ECS cluster. The screenshot below shows the application's output.

Img 9

We launched the Jenkins Server on an EC2 Instance with Terraform. Then, we created an ECR repository to store the Docker image, ECS Cluster, task definition, and Service to deploy the application. Using the Jenkins pipeline, we pulled the source code from GitHub, built the code, created a Docker image, and uploaded it to the ECR repository. This is our CI part, and then we deployed our application on ECS, which is CD.

]]>
https://blogs.perficient.com/2025/03/11/deployment-of-infra-using-terraformiac-and-automate-cicd-using-jenkins-on-aws-ecs/feed/ 0 378120
Best Practices for IaC using AWS CloudFormation https://blogs.perficient.com/2025/03/11/best-practices-for-iac-using-aws-cloudformation/ https://blogs.perficient.com/2025/03/11/best-practices-for-iac-using-aws-cloudformation/#comments Tue, 11 Mar 2025 15:41:28 +0000 https://blogs.perficient.com/?p=378210

In the ever-evolving landscape of cloud computing, Infrastructure as Code (IaC) has emerged as a cornerstone practice for managing and provisioning infrastructure. IaC enables developers to define infrastructure configurations using code, ensuring consistency, automation, and scalability. AWS CloudFormation, a key service in the AWS ecosystem, simplifies IaC by allowing users to easily model and set up AWS resources. This blog explores the best practices for utilizing AWS CloudFormation to achieve reliable, secure, and efficient infrastructure management.

Why Use AWS CloudFormation?

AWS CloudFormation provides a comprehensive solution for automating the deployment and management of AWS resources. The primary advantages of using CloudFormation include:

  • Consistency: Templates define the infrastructure in a standardized manner, eliminating configuration drift.
  • Automation: Automatic provisioning and updating of infrastructure, reducing manual intervention.
  • Scalability: Easily replicate infrastructure across multiple environments and regions.
  • Dependency Management: Automatically handles resource creation in the correct sequence based on dependencies.
  • Rollback Capability: Automatic rollback to the previous state in case of deployment failures.

Comparison with Other IaC Tools

AWS CloudFormation stands out among other IaC tools, such as Terraform and Ansible, due to its deep integration with AWS services. Unlike Terraform, which supports multiple cloud providers, CloudFormation is tailored specifically for AWS, offering native support and advanced features like Drift Detection and Stack Policies. Additionally, CloudFormation provides out-of-the-box rollback functionality, making it more reliable for AWS-centric workloads.

Best Practices for CloudFormation

1. Organize Templates Efficiently

Modularization

Breaking down large CloudFormation templates into smaller, reusable components enhances maintainability and scalability. Modularization allows you to create separate templates for different infrastructure components such as networking, compute instances, and databases.

Example:

Mod

compute.yml

Com

The network.yml template creates the VPC and subnets in this example, while the compute.yml template provisions the EC2 instance. You can use Export and ImportValue functions to share resource outputs between templates.

Nested Stacks

Nested stacks allow you to create a parent stack that references child stacks, improving reusability and modularization.

Example:

Nes

Using nested stacks ensures a clean separation of concerns and simplifies stack management.

2. Parameterization and Reusability

Enhance template reusability and flexibility through parameterization:

  • Parameters Section: Define configurable values such as instance types, environment names, and AMI IDs.
  • Mappings Section: Use mappings to create static mappings between parameter values and resource properties.
  • Default Values: Set default values for optional parameters to simplify deployments.
  • AWS CloudFormation Macros: Use macros to extend template functionality and perform custom transformations.

Example:

Par

3. Security Considerations

Securing infrastructure configurations is paramount. Best practices include:

  • IAM Roles and Policies: Assign least privilege permissions to CloudFormation stacks and resources.
  • Secrets Management: Store sensitive data such as passwords and API keys in AWS Secrets Manager or Systems Manager Parameter Store.
  • Encryption: Enable encryption for data at rest using AWS KMS.
  • Stack Policies: Apply stack policies to protect critical resources from unintended updates.

Example:

Sec

4. Version Control and Automation

Integrating CloudFormation with version control systems and CI/CD pipelines improves collaboration and automation:

  • Version Control: Store templates in Git repositories to track changes and facilitate code reviews.
  • CI/CD Pipelines: Automate template validation, deployment, and rollback using AWS CodePipeline or Jenkins.
  • Infrastructure as Code Testing: Incorporate automated testing frameworks to validate templates before deployment.

Example Pipeline:

Ver

5. Template Validation and Testing

Validation and testing are critical for ensuring the reliability of CloudFormation templates:

  • Linting: Use the cfn-lint tool to validate templates against AWS best practices and syntax rules.
  • Change Sets: Preview changes before applying them using CloudFormation Change Sets.
  • Unit Testing: Write unit tests to verify custom macros and transformations.
  • Integration Testing: Deploy templates in isolated environments to validate functionality and performance.

Example:

cfn-lint template.yml

aws cloudformation create-change-set --stack-name MyStack --change-set-name my-change-set --template-body file://template.yml

6. Stack Policies and Drift Detection

Protecting infrastructure from unauthorized changes and maintaining consistency is essential:

  • Stack Policies: Define stack policies to prevent accidental updates to critical resources.
  • Drift Detection: Regularly perform drift detection to identify and remediate unauthorized changes.
  • Audit Trails: Enable AWS CloudTrail to log API activity and monitor changes.

Example Stack Policy:

  1. Define the Stack Policy in a separate JSON file:

Picture7

  2. Apply the policy while creating or updating the stack:

Picture8
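The drift-detection practice described above can also be scripted. A hedged boto3 sketch (the stack name is illustrative):

import time
import boto3

cfn = boto3.client("cloudformation")

# Kick off drift detection for the whole stack
detection = cfn.detect_stack_drift(StackName="MyStack")
detection_id = detection["StackDriftDetectionId"]

# Poll until the detection run finishes
while True:
    status = cfn.describe_stack_drift_detection_status(
        StackDriftDetectionId=detection_id
    )
    if status["DetectionStatus"] != "DETECTION_IN_PROGRESS":
        break
    time.sleep(5)

print("Stack drift status:", status.get("StackDriftStatus"))

# List the individual resources that have drifted
drifts = cfn.describe_stack_resource_drifts(
    StackName="MyStack",
    StackResourceDriftStatusFilters=["MODIFIED", "DELETED"],
)
for drift in drifts["StackResourceDrifts"]:
    print(drift["LogicalResourceId"], drift["StackResourceDriftStatus"])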

AWS CloudFormation Architecture

Below is a high-level architecture diagram illustrating how AWS CloudFormation provisions and manages resources.

Picture9

Step-by-Step Configuration

  1. Create a CloudFormation Template: Write the YAML or JSON template defining AWS resources.
  2. Upload to S3: Store the template in an S3 bucket for easy access.
  3. Deploy Stack: Create the stack using the AWS Management Console, CLI, or SDK.
  4. Monitor Stack Events: Track resource creation and update progress in the AWS Console.
  5. Update Stack: Modify the template and update the stack with the new configuration.
  6. Perform Drift Detection: Identify and resolve configuration drift.
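Step 3 in the list above notes that a stack can be created with the SDK as well as the console or CLI. A hedged boto3 sketch covering steps 2 to 4 (the bucket, template, stack, and parameter names are illustrative):

import boto3

cfn = boto3.client("cloudformation")

# Template previously uploaded to S3 (step 2); the URL is a placeholder
template_url = "https://my-templates-bucket.s3.amazonaws.com/network.yml"

# Create the stack (step 3)
cfn.create_stack(
    StackName="MyNetworkStack",
    TemplateURL=template_url,
    Parameters=[
        {"ParameterKey": "EnvironmentName", "ParameterValue": "dev"},
    ],
    Capabilities=["CAPABILITY_NAMED_IAM"],  # only needed if the template creates IAM resources
)

# Wait for the stack to finish creating (step 4)
waiter = cfn.get_waiter("stack_create_complete")
waiter.wait(StackName="MyNetworkStack")
print("Stack created successfully")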

Conclusion

AWS CloudFormation is a powerful tool for implementing infrastructure as code, offering automation, consistency, and scalability. By following best practices such as template modularization, security considerations, and automation, organizations can enhance the reliability and efficiency of their cloud infrastructure. Adopting AWS CloudFormation simplifies infrastructure management and strengthens overall security and compliance.

Embracing these best practices will enable businesses to leverage the full potential of AWS CloudFormation, fostering a more agile and resilient cloud environment.

 

]]>
https://blogs.perficient.com/2025/03/11/best-practices-for-iac-using-aws-cloudformation/feed/ 1 378210
Automate the Deployment of a Static Website to an S3 Bucket Using GitHub Actions https://blogs.perficient.com/2025/03/05/automate-the-deployment-of-a-static-website-to-an-s3-bucket-using-github-actions/ https://blogs.perficient.com/2025/03/05/automate-the-deployment-of-a-static-website-to-an-s3-bucket-using-github-actions/#comments Wed, 05 Mar 2025 06:43:31 +0000 https://blogs.perficient.com/?p=377956

Automating deployments is crucial for efficiency and reliability in today’s fast-paced development environment. GitHub Actions provides a seamless way to implement CI/CD pipelines, allowing developers to automate the deployment of static websites without manual intervention.

In this blog, we will explore how to deploy a static website to an AWS S3 bucket using GitHub Actions. We’ll cover setting up an S3 bucket, configuring IAM roles for secure authentication, and leveraging GitHub Actions workflows to streamline deployment. By the end, you’ll have a fully automated pipeline that ensures quick and secure deployments with minimal effort.

Prerequisites

  1. Amazon S3 Bucket: Create an S3 bucket and enable static website hosting.
  2. IAM User & Permissions: Create an IAM user with access to S3 and store credentials securely.
  3. GitHub Repository: Your static website code should be in a GitHub repository.
  4. GitHub Secrets: Store AWS credentials in GitHub Actions Secrets.
  5. Amazon EC2 – to create a self-hosted runner.

Deploy a Static Website to an S3 Bucket

Step 1

First, create a GitHub repository. I have already created one with this name, which is why it already shows up in my account.

Static 1

 

 

Step 2

You can clone the repository from the URL below and put it into your local system. I have added the website-related code to my GitHub repository, so you just need to clone it: https://github.com/Kunal2795/Static-Website.git.

 

Step 3

Push the code to host this static website with your changes, such as updating the bucket name and AWS region. I already have it locally, so you just need to push it using the Git commands below:

Static 2

Step 4

Once the changes are pushed to your GitHub repository, ensure the main.yaml file is in the .github/workflows directory.

Staticc 3

If the main.yaml file is not present in the .github/workflows/ directory, create it and add a job to run the static website pipeline in GitHub Actions. The main.yaml file is the primary configuration file in GitHub Actions that runs the entire pipeline.

Add the following job code to the main.yaml file in the .github/workflows/ directory:

name: Portfolio Deployment2

on:
  push:
    branches:
      - main

jobs:
  build-and-deploy:
    runs-on: [self-hosted, silver]
    steps:
      - name: Checkout
        uses: actions/checkout@v1

      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-2

      - name: Deploy static site to S3 bucket
        run: aws s3 sync . s3://kc-devops --delete

You need to make a few modifications to the job above:

  • runs-on – Use either a self-hosted runner or a GitHub-hosted runner (I use a self-hosted runner here).
  • aws-access-key-id – Reference the name of the secret that holds your Access Key ID (storing the value in GitHub Actions secrets is shown below).
  • aws-secret-access-key – Reference the name of the secret that holds your Secret Access Key (also stored in GitHub Actions secrets, as shown below).
  • aws-region – Set the region of your S3 bucket.
  • run – Update the bucket path to the S3 bucket where you want your static website code to be synced.

How to Create a Self-hosted Runner

Launch an EC2 instance with Ubuntu OS using a simple configuration.

Static 4

After that, create a self-hosted runner using specific commands. To get these commands, go to Settings in GitHub, navigate to Actions, click on Runners, and then select Create New Self-Hosted Runner.

Select Linux as the runner image.

Static 5

Static 6

Run the above commands step by step on your EC2 server to download and configure the self-hosted runner.

Static 7

 

Static 8

Once the runner is downloaded and configured, check its status in GitHub: it should show as Idle. If it shows as Offline, start the GitHub runner service on your EC2 server.

Also, ensure that AWS CLI is installed on your server.

Static 9

IAM User

Create an IAM user and grant it full access to EC2 and S3 services.

Static 10

Then, go to Security Credentials, create an Access Key and Secret Access Key, and securely copy and store both the Access Key and Secret Access Key in a safe place.

Static 11

 

Next, navigate to GitHub Actions → Secrets & Variables → Actions, then add your AWS Access Key ID and Secret Access Key securely.

Static 12

After adding the Access Key ID and Secret Access Key, proceed to the next section: S3.

Create an S3 bucket—I have created one with the name kc-devops.

Static 13

Add the policy below to your S3 bucket and update the bucket name with your own bucket name.

Static 14
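The screenshot above shows the policy used in this setup. For reference, a typical public-read policy for static website hosting looks like the following, applied here with boto3 (assuming the kc-devops bucket name used above; note that S3 Block Public Access must allow bucket policies for this to take effect):

import json
import boto3

bucket_name = "kc-devops"  # replace with your own bucket name

# Standard public-read policy for static website hosting
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{bucket_name}/*",
        }
    ],
}

s3 = boto3.client("s3")
s3.put_bucket_policy(Bucket=bucket_name, Policy=json.dumps(policy))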

After setting up everything, open the main.yaml file in your repository, update the bucket name, and commit the changes.

Then, click the Actions tab to see all your triggered workflows and their status.

Static 15

We can see that all the steps for the build and deploy jobs have been successfully completed.

Static 16

Lastly, sign in to the AWS Management Console and open the Amazon S3 console. Verify that all the website files are stored in your bucket.

Static 17

Then, go to the Properties tab. Under Static website hosting, find and click on the Endpoint URL. (Bucket Website endpoint)

This Endpoint URL is the Amazon S3 website endpoint for your bucket.

Static 18

Output

Finally, we have successfully deployed and hosted a static website using automation to the Amazon S3 bucket.

Static 19

Conclusion

With this setup, whenever you push changes to your GitHub repository, GitHub Actions automatically triggers the deployment process. This ensures that your static website is seamlessly updated and deployed to your AWS S3 bucket without any manual intervention. This automation streamlines the deployment workflow, making it more efficient and less error-prone.

 

]]>
https://blogs.perficient.com/2025/03/05/automate-the-deployment-of-a-static-website-to-an-s3-bucket-using-github-actions/feed/ 1 377956
RDS Migration: AWS-Managed to CMK Encryption https://blogs.perficient.com/2025/03/04/rds-migration-aws-managed-to-cmk-encryption/ https://blogs.perficient.com/2025/03/04/rds-migration-aws-managed-to-cmk-encryption/#respond Tue, 04 Mar 2025 06:00:17 +0000 https://blogs.perficient.com/?p=377717

As part of security and compliance best practices, it is essential to enhance data protection by transitioning from AWS-managed encryption keys to Customer Managed Keys (CMK).

Business Requirement

During database migration or restoration, it is not possible to directly change encryption from AWS-managed keys to Customer-Managed Keys (CMK).

During migration, the database snapshot must be created and re-encrypted with CMK to ensure a secure and efficient transition while minimizing downtime. This document provides a streamlined approach to saving time and ensuring compliance with best practices.

P1

                        Fig: RDS Snapshot Encrypted with AWS-Managed KMS Key

 

Objective

This document aims to provide a structured process for creating a database snapshot, encrypting it with a new CMK, and restoring it while maintaining the original database configurations. This ensures minimal disruption to operations while strengthening data security.

  • Recovery Process
  • Prerequisites
  • Configuration Overview
  • Best Practices

 

Prerequisites

Before proceeding with the snapshot and restoration process, ensure the following   prerequisites are met:

  1. AWS Access: You must have the IAM permissions to create, copy, and restore RDS snapshots.
  2. AWS KMS Key: Ensure you have a Customer-Managed Key (CMK) available in the AWS Key Management Service (KMS) for encryption.
  3. Database Availability: Verify that the existing database is healthy enough to take an accurate snapshot.
  4. Storage Considerations: Ensure sufficient storage is available to accommodate the snapshot and the restored instance.
  5. Networking Configurations: Ensure appropriate security groups, subnet groups, and VPC settings are in place.
  6. Backup Strategy: Have a backup plan in case of any failure during the process.

Configuration Overview

Step 1: Take a Snapshot of the Existing Database

  1. Log in to the AWS console with your credentials.
  2. Navigate to the RDS section where you manage database instances.
  3. Select the existing database for which you want to create the snapshot.
  4. Click on the Create Snapshot button.
  5. Provide a name and description for the snapshot, if necessary.
  6. Click Create Snapshot to initiate the snapshot creation process.
  7. Wait for the snapshot creation to complete before proceeding to the next step.

P2

Step 2: Copy Snapshot with New Encryption Keys

  1. Navigate to the section where your snapshots are stored.
  2. Locate the newly created snapshot in the list of available snapshots.
  3. Select the snapshot and click the Copy Snapshot option.
  4. In the encryption settings, choose New Encryption Key (this will require selecting a new Customer Managed Key (CMK)).
  5. Follow the prompts to copy the snapshot with the new encryption key. Click Next to continue.

P3

 

P4
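The copy-and-re-encrypt step above, and the restore that follows, can also be scripted. A hedged boto3 sketch with illustrative identifiers, key ARN, and instance settings:

import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Copy the manual snapshot, re-encrypting it with the CMK
rds.copy_db_snapshot(
    SourceDBSnapshotIdentifier="mydb-manual-snapshot",
    TargetDBSnapshotIdentifier="mydb-manual-snapshot-cmk",
    KmsKeyId="arn:aws:kms:us-east-1:123456789012:key/1111aaaa-22bb-33cc-44dd-5555eeee6666",
    CopyTags=True,
)

# Wait until the re-encrypted copy is available
rds.get_waiter("db_snapshot_available").wait(
    DBSnapshotIdentifier="mydb-manual-snapshot-cmk"
)

# Later steps: restore a new instance from the CMK-encrypted snapshot,
# reusing the old instance's class, subnet group, and security groups
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="mydb-restored",
    DBSnapshotIdentifier="mydb-manual-snapshot-cmk",
    DBInstanceClass="db.t3.medium",
    DBSubnetGroupName="my-db-subnet-group",
    VpcSecurityGroupIds=["sg-0123456789abcdef0"],
    MultiAZ=False,
)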

Step 3: Navigate to the Newly Created Snapshot and Choose Restore

  1. Once the new snapshot is successfully created, navigate to the list of available snapshots.
  2. Locate the newly created snapshot.
  3. Select the snapshot and choose the Restore or Action → Restore option.

P5

 

Step 4: Fill in the Details to Match the Old Instance

  1. When prompted to restore the snapshot, fill in the details using the same configuration as the old database. This includes the instance size, database configurations, networking details, and storage options.
  2. Ensure all configurations match the old setup to maintain continuity.

Step 5: Create the Restored Database

  1. After filling in the necessary details, click Create to restore the snapshot to a new instance.
  2. Wait for the process to complete.
  3. Verify that the new database has been restored successfully.

P6

 

Best Practices for RDS Encryption

  • Enable automated backups and validate snapshots.
  • Secure encryption keys and monitor storage costs.
  • Test restored databases before switching traffic.
  • Ensure security groups and CloudWatch monitoring are set up.

Following these practices ensures a secure and efficient RDS snapshot process.

 

Conclusion

Following these steps ensures a secure, efficient, and smooth process for taking, encrypting, and restoring RDS snapshots in AWS. Implementing best practices such as automated backups, encryption key management, and proactive monitoring can enhance data security and operational resilience. Proper planning and validation at each step will minimize risks and help maintain business continuity.

]]>
https://blogs.perficient.com/2025/03/04/rds-migration-aws-managed-to-cmk-encryption/feed/ 0 377717
Windows Password Recovery with AWS SSM https://blogs.perficient.com/2025/02/25/windows-password-recovery-with-aws-ssm/ https://blogs.perficient.com/2025/02/25/windows-password-recovery-with-aws-ssm/#respond Wed, 26 Feb 2025 05:27:12 +0000 https://blogs.perficient.com/?p=377706

AWS Systems Manager (SSM) streamlines managing Windows instances in AWS. If you've ever forgotten the password for your Windows EC2 instance, SSM offers a secure and efficient way to reset it without additional tools or manual intervention.

Objective & Business Requirement

In a production environment, losing access to a Windows EC2 instance due to an unknown or non-working password can cause significant downtime. Instead of taking a backup, creating a new instance, and reconfiguring the environment—which is time-consuming and impacts business operations—we leverage AWS Systems Manager (SSM) to efficiently recover access without disruption.

  • Recovery Process
  • Prerequisites
  • Configuration Overview
  • Best Practices
  • Conclusion

Prerequisites

Before you start, ensure the following prerequisites are met:

  1. SSM Agent Installed: The SSM agent must be installed and running on the Windows instance. AWS provides pre-configured AMIs with the agent installed.
  2. IAM Role Attached: Attach an IAM role to your instance with the necessary permissions. The policy should include:
    • AmazonSSMManagedInstanceCore
    • AmazonSSMFullAccess (or custom permissions to allow session management and run commands).
  3. Instance Managed by SSM: The instance must be registered as a managed instance in Systems Manager.

Configuration Overview

Follow this procedure if all you need is a PowerShell prompt on the target instance.

1. Log in to the AWS Management Console

  • Navigate to the EC2 service in the AWS Management Console.
  • Open the instance in the AWS console & click Connect.

S1

  • This opens a PowerShell session with “ssm-user”.

Picture2

2. Verify the Active Users

Run Commands to Reset the Password

With the session active, follow these steps to reset the password:

  • Run the following PowerShell command to list the local users: Get-LocalUser

Picture3

  • Identify the username for which you need to reset the password.
  • Reset the password using the following command:

Replace <username> with the actual username and <password> with your new password.

net user <username> <password>
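If you prefer to automate the reset instead of typing into the session, the same PowerShell command can be pushed through SSM Run Command. A hedged boto3 sketch (the instance ID, username, and password are placeholders; in practice, pull the new password from a secure source such as Secrets Manager rather than hard-coding it):

import boto3

ssm = boto3.client("ssm", region_name="us-east-1")

# Run the password reset on the target instance via the managed
# AWS-RunPowerShellScript document (all values below are placeholders)
response = ssm.send_command(
    InstanceIds=["i-0123456789abcdef0"],
    DocumentName="AWS-RunPowerShellScript",
    Parameters={"commands": ["net user Administrator 'N3w-Str0ng-P@ssw0rd!'"]},
)

command_id = response["Command"]["CommandId"]

# Wait for the command to finish (the waiter raises if the command fails)
ssm.get_waiter("command_executed").wait(
    CommandId=command_id,
    InstanceId="i-0123456789abcdef0",
)

# Inspect the result of the command invocation
result = ssm.get_command_invocation(
    CommandId=command_id,
    InstanceId="i-0123456789abcdef0",
)
print(result["Status"], result["StandardOutputContent"])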

3. Validate the New Password

  • Use Remote Desktop Protocol (RDP) to log into the Windows instance using the updated credentials.
  • To open an RDP connection to the instance in your browser, follow this procedure.
  • Open the instance in the AWS console & click Connect:
  • Switch to the “RDP client” tab & use Fleet Manager:

Picture4

  • You should now be able to access the server using the RDP client. Please refer to the screenshot below.

Picture5

 

Best Practices

  1. Strong Password Policy: Ensure the new password adheres to your organization’s password policy for security.
  2. Audit Logs: Use AWS CloudTrail to monitor who initiated the SSM session and track changes made.
  3. Restrict Access: Limit who can access SSM and manage your instances by defining strict IAM policies.

Troubleshooting Tips for Password Recovery

  • SSM Agent Issues: If the instance isn’t listed in SSM, verify that the SSM agent is installed and running.
  • IAM Role Misconfigurations: Ensure the IAM role attached to the instance has the correct permissions.
  • Session Manager Setup: If using the CLI, confirm that the Session Manager plugin is installed and correctly configured on your local machine.

 

Conclusion

AWS Systems Manager is a powerful tool that simplifies Windows password recovery and enhances the overall management and security of your instances. By leveraging SSM, you can avoid downtime, maintain access to critical instances, and adhere to AWS best practices for operational efficiency.

 

]]>
https://blogs.perficient.com/2025/02/25/windows-password-recovery-with-aws-ssm/feed/ 0 377706