Privileged Identity Management (PIM) is a service in Microsoft Entra ID that enables you to manage, control, and monitor access to important resources in your organization. These resources include those in Microsoft Entra ID, Azure, and other Microsoft online services such as Microsoft 365 and Microsoft Intune. This blog is written for readers who want to set up just-in-time access to Azure resources, scoped to the subscription level only.
PIM ensures that only the right people can access essential systems when needed and only for a short time. This reduces the chances of misuse by someone with powerful access.
PIM ensures that people only have the access they need to do their jobs. This means they can’t access anything unnecessary, keeping things secure.
With PIM, users can get special access for a set period. Once the time is up, the access is automatically removed, preventing anyone from holding on to unnecessary permissions.
PIM gives Just-in-Time (JIT) Access, meaning users can only request higher-level access when needed, and it is automatically taken away after a set time. This reduces the chances of having access for too long.
PIM lets you set up a process where access needs to be approved by someone (like a manager or security) before it’s given. This adds another layer of control.
PIM keeps detailed records of who asked for and received special access, when they accessed something, and what they did. This makes it easier to catch any suspicious activities.
Instead of giving someone admin access all the time, PIM allows it to be granted for specific tasks. Admins only get special access when needed, and for as long as necessary, so there is less risk.
Some industries require companies to follow strict rules (like protecting personal information). PIM helps meet these rules by controlling who has access and keeping track of it for audits.
Azure PIM helps make your system safer by ensuring that only the right people can access essential resources for a short time. It lets you give access when needed (just-in-time), require approval for special access, automatically manage who can access what, and keep track of everything. PIM is essential for organizations that want to limit who can access sensitive information, ensure only the necessary people have the correct permissions at the right time, and prevent unauthorized access.
TLS certificate lifetimes are being significantly reduced over the next few years as part of an industry-wide push toward greater security and automation. Here's the phased timeline currently in place:
Now through March 15, 2026: Maximum lifetime is 398 days
Starting March 15, 2026: Reduced to 200 days
Starting March 15, 2027: Further reduced to 100 days
Starting March 15, 2029: Reduced again to just 47 days
For teams managing Sitecore implementations, this is more than a policy shift—it introduces operational urgency. As certificates begin expiring more frequently, any reliance on manual tracking or last-minute renewals could result in costly downtime or broken integrations.
If your Sitecore environment includes secure endpoints, custom domains, or external integrations, now is the time to assess your certificate strategy and move toward automation.
Sitecore projects often involve:
Multiple environments (development, staging, production) with different certificates
Custom domains or subdomains used for CDNs, APIs, headless apps, or marketing campaigns
Third-party integrations that require secure connections
Marketing and personalization features that rely on seamless uptime
A single expired certificate can lead to downtime, loss of customer trust, or failed integrations—any of which could severely impact your digital experience delivery.
Increased risk of missed renewals if teams rely on manual tracking
Broken environments due to expired certs in Azure, IIS, or Kubernetes configurations
Delayed deployments when certificates must be re-issued last minute
SEO and trust damage if browsers start flagging your site as insecure
To stay ahead of the TLS certificate lifecycle changes, here are concrete steps you should take:
Audit all environments and domains using certificates
Include internal services, custom endpoints, and non-production domains
Use a centralized tracking tool (e.g., Azure Key Vault, HashiCorp Vault, or a certificate management platform)
Wherever possible, switch to automated certificate issuance and renewal
Use services like:
Azure App Service Managed Certificates
Let’s Encrypt with automation scripts
ACME protocol integrations for Kubernetes
For Azure-hosted Sitecore instances, leverage Key Vault and App Gateway integrations
Assign clear ownership of certificate management per environment or domain
Document who is responsible for renewals and updates
Add certificate health checks to your DevOps dashboards
Validate certificate validity before deployments
Fail builds if certificates are nearing expiration (see the example check script after this list)
Include certificate management tasks as part of environment provisioning
Hold knowledge-sharing sessions with developers, infrastructure engineers, and marketers
Make sure everyone understands the impact of expired certificates on the Sitecore experience
Simulate certificate expiry in non-production environments
Monitor behavior in Sitecore XP and XM environments, including CD and CM roles
Validate external systems (e.g., CDNs, integrations, identity providers) against cert failures
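As a concrete follow-up to the "fail builds if certificates are nearing expiration" recommendation above, here is a minimal Python sketch of such a check. The hostnames and the 30-day threshold are placeholders, not values from any specific Sitecore setup; wire the script into your CI so a non-zero exit code fails the build.
import socket
import ssl
import sys
from datetime import datetime, timezone
HOSTS = ["www.example.com", "cd.example.com"]  # placeholder Sitecore endpoints
MIN_DAYS = 30  # fail the build if a certificate expires sooner than this
def days_until_expiry(host: str, port: int = 443) -> int:
    # Open a TLS connection and read the peer certificate's notAfter date.
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    not_after = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
    return (not_after.replace(tzinfo=timezone.utc) - datetime.now(timezone.utc)).days
failed = False
for host in HOSTS:
    remaining = days_until_expiry(host)
    print(f"{host}: {remaining} days remaining")
    if remaining < MIN_DAYS:
        failed = True
sys.exit(1 if failed else 0)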
TLS certificate management is no longer a “set it and forget it” task. With shorter lifetimes becoming the norm, proactive planning is essential to avoid downtime and ensure secure, uninterrupted experiences for your users.
Start by auditing your current certificates and work toward automating renewals. Make certificate monitoring part of your DevOps practice, and ensure your Sitecore teams are aware of the upcoming changes.
Action Items for This Week:
Identify all TLS certificates in your Sitecore environments
Document renewal dates and responsible owners
Begin automating renewals for at least one domain
Review Azure and Sitecore documentation for certificate integration options
Securing your Sitecore XM Cloud environment is critical to protecting your content, your users, and your brand. This post walks through key areas of XM Cloud security, including user management, authentication, secure coding, and best practices you can implement today to reduce your security risks.
We’ll also take a step back to look at the Sitecore Cloud Portal—the central control panel for managing user access across your Sitecore organization. Understanding both the Cloud Portal and XM Cloud’s internal security tools is essential for building a strong foundation of security.
The Sitecore Cloud Portal is the gateway to managing user access across all Sitecore DXP tools, including XM Cloud. Proper setup here ensures that only the right people can view or change your environments and content.
Each user you invite to your Sitecore organization is assigned an Organization Role, which defines their overall access level:
Organization Owner – Full control over the organization, including user and app management.
Organization Admin – Can manage users and assign app access, but cannot assign/remove Owners.
Organization User – Limited access; can only use specific apps they’ve been assigned to.
Tip: Assign the “Owner” role sparingly—only to those who absolutely need full administrative control.
Beyond organization roles, users are granted App Roles for specific products like XM Cloud. These roles determine what actions they can take inside each product:
Admin – Full access to all features of the application.
User – More limited, often focused on content authoring or reviewing.
From the Admin section of the Cloud Portal, Organization Owners or Admins can:
Invite new team members and assign roles.
Grant access to apps like XM Cloud and assign appropriate app-level roles.
Review and update roles as team responsibilities shift.
Remove access when team members leave or change roles.
Security Tips:
Review user access regularly.
Use the least privilege principle—only grant what’s necessary.
Enable Multi-Factor Authentication (MFA) and integrate Single Sign-On (SSO) for extra protection.
Within XM Cloud itself, there’s another layer of user and role management that governs access to content and features.
Users: Individual accounts representing people who work in the XM Cloud instance.
Roles: Collections of users with shared permissions.
Domains: Logical groupings of users and roles, useful for managing access in larger organizations.
Recommendation: Don’t assign permissions directly to users—assign them to roles instead for easier management.
Permissions can be set at the item level for things like reading, writing, deleting, or publishing. Access rights include:
Read
Write
Create
Delete
Administer
Each right can be set to:
Allow
Deny
Inherit
Follow the Role-Based Access Control (RBAC) model.
Create custom roles to reflect your team’s structure and responsibilities.
Audit roles and access regularly to prevent privilege creep.
Avoid modifying default system users—create new accounts instead.
XM Cloud supports robust authentication mechanisms to control access between services, deployments, and repositories.
When integrating external services or deploying via CI/CD, you’ll often need to authenticate through client credentials.
Use the Sitecore Cloud Portal to create and manage client credentials.
Grant only the necessary scopes (permissions) to each credential.
Rotate credentials periodically and revoke unused ones.
Use secure secrets management tools to store client IDs and secrets outside of source code.
For Git and deployment pipelines, connect XM Cloud environments to your repository using secure tokens and limit access to specific environments or branches when possible.
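To make the client-credentials flow concrete, here is a minimal Python sketch of requesting a token. The token endpoint, audience, and environment-variable names are placeholders (take the real values from the credential details in the Sitecore Cloud Portal and its documentation); the point is that the client ID and secret are read from the environment rather than living in source code.
import os
import requests
TOKEN_URL = os.environ["SITECORE_TOKEN_URL"]          # placeholder: your organization's OAuth token endpoint
CLIENT_ID = os.environ["SITECORE_CLIENT_ID"]          # stored in a secrets manager or CI secret, not in code
CLIENT_SECRET = os.environ["SITECORE_CLIENT_SECRET"]
response = requests.post(
    TOKEN_URL,
    data={
        "grant_type": "client_credentials",
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
        "audience": os.environ.get("SITECORE_AUDIENCE", ""),  # only if your credential requires one
    },
    timeout=30,
)
response.raise_for_status()
access_token = response.json()["access_token"]
# Send the token as a Bearer header on subsequent API calls; never log it or commit it.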
Security isn’t just about who has access—it’s also about how your code and data behave in production.
Sanitize all inputs to prevent injection attacks.
Avoid exposing sensitive information in logs or error messages.
Use HTTPS for all external communications.
Validate data both on the client and server sides.
Keep dependencies up to date and monitor for vulnerabilities.
When using visitor data for personalization, be transparent and follow data privacy best practices:
Explicitly define what data is collected and how it’s used.
Give visitors control over their data preferences.
Avoid storing personally identifiable information (PII) unless absolutely necessary.
Securing your XM Cloud environment is an ongoing process that involves team coordination, regular reviews, and constant vigilance. Here’s how to get started:
Audit your Cloud Portal roles and remove unnecessary access.
Establish a role-based structure in XM Cloud and limit direct user permissions.
Implement secure credential management for deployments and integrations.
Train your developers on secure coding and privacy best practices.
The stronger your security practices, the more confidence you—and your clients—can have in your digital experience platform.
This guide will walk you through building a small application step-by-step, focusing on integrating several powerful tools and concepts essential for modern Android development.
The Goal: Build a "Task Reporter" app. Users can add simple task descriptions. These tasks are saved to Firestore. A background worker will periodically "report" (log a message or update a counter in Firestore) that the app is active. We'll have dev and prod flavors pointing to different Firestore collections/data and distribute the dev build for testing.
Let’s get started!
Create a new Android Studio project named AdvancedConceptsApp (or your choice) with the package name com.yourcompany.advancedconceptsapp (adjust it to your own organization).
Connect the app to Firebase and add the Google services plugin to the project-level and app-level build.gradle.kts (or build.gradle) files. This adds the necessary dependencies.
Download google-services.json: in the Firebase console, confirm that an Android app with the package name com.yourcompany.advancedconceptsapp is registered. If not, add it. Then download the google-services.json file and place it in the app/ directory.
Let's create a simple UI to add and display tasks.
Add the following dependencies to app/build.gradle.kts:
dependencies {
// Core & Lifecycle & Activity
implementation("androidx.core:core-ktx:1.13.1") // Use latest versions
implementation("androidx.lifecycle:lifecycle-runtime-ktx:2.8.1")
implementation("androidx.activity:activity-compose:1.9.0")
// Compose
implementation(platform("androidx.compose:compose-bom:2024.04.01")) // Check latest BOM
implementation("androidx.compose.ui:ui")
implementation("androidx.compose.ui:ui-graphics")
implementation("androidx.compose.ui:ui-tooling-preview")
implementation("androidx.compose.material3:material3")
implementation("androidx.lifecycle:lifecycle-viewmodel-compose:2.8.1")
// Firebase
implementation(platform("com.google.firebase:firebase-bom:33.0.0")) // Check latest BOM
implementation("com.google.firebase:firebase-firestore-ktx")
// WorkManager
implementation("androidx.work:work-runtime-ktx:2.9.0") // Check latest version
}
Sync Gradle files.
Create data/Task.kt:
package com.yourcompany.advancedconceptsapp.data
import com.google.firebase.firestore.DocumentId
data class Task(
@DocumentId
val id: String = "",
val description: String = "",
val timestamp: Long = System.currentTimeMillis()
) {
constructor() : this("", "", 0L) // Firestore requires a no-arg constructor
}
Create ui/TaskViewModel.kt. (We'll update the collection name later.)
package com.yourcompany.advancedconceptsapp.ui
import androidx.lifecycle.ViewModel
import androidx.lifecycle.viewModelScope
import com.google.firebase.firestore.ktx.firestore
import com.google.firebase.firestore.ktx.toObjects
import com.google.firebase.ktx.Firebase
import com.yourcompany.advancedconceptsapp.data.Task
// Import BuildConfig later when needed
import kotlinx.coroutines.flow.MutableStateFlow
import kotlinx.coroutines.flow.StateFlow
import kotlinx.coroutines.launch
import kotlinx.coroutines.tasks.await
// Temporary placeholder - will be replaced by BuildConfig field
const val TEMPORARY_TASKS_COLLECTION = "tasks"
class TaskViewModel : ViewModel() {
private val db = Firebase.firestore
// Use temporary constant for now
private val tasksCollection = db.collection(TEMPORARY_TASKS_COLLECTION)
private val _tasks = MutableStateFlow<List<Task>>(emptyList())
val tasks: StateFlow<List<Task>> = _tasks
private val _error = MutableStateFlow<String?>(null)
val error: StateFlow<String?> = _error
init {
loadTasks()
}
fun loadTasks() {
viewModelScope.launch {
try {
tasksCollection.orderBy("timestamp", com.google.firebase.firestore.Query.Direction.DESCENDING)
.addSnapshotListener { snapshots, e ->
if (e != null) {
_error.value = "Error listening: ${e.localizedMessage}"
return@addSnapshotListener
}
_tasks.value = snapshots?.toObjects<Task>() ?: emptyList()
_error.value = null
}
} catch (e: Exception) {
_error.value = "Error loading: ${e.localizedMessage}"
}
}
}
fun addTask(description: String) {
if (description.isBlank()) {
_error.value = "Task description cannot be empty."
return
}
viewModelScope.launch {
try {
val task = Task(description = description, timestamp = System.currentTimeMillis())
tasksCollection.add(task).await()
_error.value = null
} catch (e: Exception) {
_error.value = "Error adding: ${e.localizedMessage}"
}
}
}
}
Create ui/TaskScreen.kt:
package com.yourcompany.advancedconceptsapp.ui
// Imports: androidx.compose.*, androidx.lifecycle.viewmodel.compose.viewModel, java.text.SimpleDateFormat, etc.
import androidx.compose.foundation.layout.*
import androidx.compose.foundation.lazy.LazyColumn
import androidx.compose.foundation.lazy.items
import androidx.compose.material3.*
import androidx.compose.runtime.*
import androidx.compose.ui.Alignment
import androidx.compose.ui.Modifier
import androidx.compose.ui.unit.dp
import androidx.lifecycle.viewmodel.compose.viewModel
import com.yourcompany.advancedconceptsapp.data.Task
import java.text.SimpleDateFormat
import java.util.Date
import java.util.Locale
import androidx.compose.ui.res.stringResource
import com.yourcompany.advancedconceptsapp.R // Import R class
@OptIn(ExperimentalMaterial3Api::class) // For TopAppBar
@Composable
fun TaskScreen(taskViewModel: TaskViewModel = viewModel()) {
val tasks by taskViewModel.tasks.collectAsState()
val errorMessage by taskViewModel.error.collectAsState()
var taskDescription by remember { mutableStateOf("") }
Scaffold(
topBar = {
TopAppBar(title = { Text(stringResource(id = R.string.app_name)) }) // Use resource for flavor changes
}
) { paddingValues ->
Column(modifier = Modifier.padding(paddingValues).padding(16.dp).fillMaxSize()) {
// Input Row
Row(verticalAlignment = Alignment.CenterVertically, modifier = Modifier.fillMaxWidth()) {
OutlinedTextField(
value = taskDescription,
onValueChange = { taskDescription = it },
label = { Text("New Task Description") },
modifier = Modifier.weight(1f),
singleLine = true
)
Spacer(modifier = Modifier.width(8.dp))
Button(onClick = {
taskViewModel.addTask(taskDescription)
taskDescription = ""
}) { Text("Add") }
}
Spacer(modifier = Modifier.height(16.dp))
// Error Message
errorMessage?.let { Text(it, color = MaterialTheme.colorScheme.error, modifier = Modifier.padding(bottom = 8.dp)) }
// Task List
if (tasks.isEmpty() && errorMessage == null) {
Text("No tasks yet. Add one!")
} else {
LazyColumn(modifier = Modifier.weight(1f)) {
items(tasks, key = { it.id }) { task ->
TaskItem(task)
Divider()
}
}
}
}
}
}
@Composable
fun TaskItem(task: Task) {
val dateFormat = remember { SimpleDateFormat("yyyy-MM-dd HH:mm", Locale.getDefault()) }
Row(modifier = Modifier.fillMaxWidth().padding(vertical = 8.dp), verticalAlignment = Alignment.CenterVertically) {
Column(modifier = Modifier.weight(1f)) {
Text(task.description, style = MaterialTheme.typography.bodyLarge)
Text("Added: ${dateFormat.format(Date(task.timestamp))}", style = MaterialTheme.typography.bodySmall)
}
}
}
Update MainActivity.kt: set the content to TaskScreen.
package com.yourcompany.advancedconceptsapp
import android.os.Bundle
import androidx.activity.ComponentActivity
import androidx.activity.compose.setContent
import androidx.compose.foundation.layout.fillMaxSize
import androidx.compose.material3.MaterialTheme
import androidx.compose.material3.Surface
import androidx.compose.ui.Modifier
import com.yourcompany.advancedconceptsapp.ui.TaskScreen
import com.yourcompany.advancedconceptsapp.ui.theme.AdvancedConceptsAppTheme
// Imports for WorkManager scheduling will be added in Step 3
class MainActivity : ComponentActivity() {
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContent {
AdvancedConceptsAppTheme {
Surface(modifier = Modifier.fillMaxSize(), color = MaterialTheme.colorScheme.background) {
TaskScreen()
}
}
}
// TODO: Schedule WorkManager job in Step 3
}
}
Create a background worker for periodic reporting.
Create worker/ReportingWorker.kt. (The collection name will be updated later.)
package com.yourcompany.advancedconceptsapp.worker
import android.content.Context
import android.util.Log
import androidx.work.CoroutineWorker
import androidx.work.WorkerParameters
import com.google.firebase.firestore.ktx.firestore
import com.google.firebase.ktx.Firebase
// Import BuildConfig later when needed
import kotlinx.coroutines.tasks.await
// Temporary placeholder - will be replaced by BuildConfig field
const val TEMPORARY_USAGE_LOG_COLLECTION = "usage_logs"
class ReportingWorker(appContext: Context, workerParams: WorkerParameters) :
CoroutineWorker(appContext, workerParams) {
companion object { const val TAG = "ReportingWorker" }
private val db = Firebase.firestore
override suspend fun doWork(): Result {
Log.d(TAG, "Worker started: Reporting usage.")
return try {
val logEntry = hashMapOf(
"timestamp" to System.currentTimeMillis(),
"message" to "App usage report.",
"worker_run_id" to id.toString()
)
// Use temporary constant for now
db.collection(TEMPORARY_USAGE_LOG_COLLECTION).add(logEntry).await()
Log.d(TAG, "Worker finished successfully.")
Result.success()
} catch (e: Exception) {
Log.e(TAG, "Worker failed", e)
Result.failure()
}
}
}
Schedule the worker from MainActivity.kt's onCreate method:
// Add these imports to MainActivity.kt
import android.content.Context
import android.util.Log
import androidx.work.*
import com.yourcompany.advancedconceptsapp.worker.ReportingWorker
import java.util.concurrent.TimeUnit
// Inside MainActivity class, after setContent { ... } block in onCreate
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContent {
// ... existing code ...
}
// Schedule the worker
schedulePeriodicUsageReport(this)
}
// Add this function to MainActivity class
private fun schedulePeriodicUsageReport(context: Context) {
val constraints = Constraints.Builder()
.setRequiredNetworkType(NetworkType.CONNECTED)
.build()
val reportingWorkRequest = PeriodicWorkRequestBuilder<ReportingWorker>(
1, TimeUnit.HOURS // ~ every hour
)
.setConstraints(constraints)
.addTag(ReportingWorker.TAG)
.build()
WorkManager.getInstance(context).enqueueUniquePeriodicWork(
ReportingWorker.TAG,
ExistingPeriodicWorkPolicy.KEEP,
reportingWorkRequest
)
Log.d("MainActivity", "Periodic reporting work scheduled.")
}
Review ReportingWorker and MainActivity to see how the scheduling fits together. To test the worker without waiting for the periodic interval, run the app once so the work is scheduled for the package com.yourcompany.advancedconceptsapp, then force-run the job from adb:
adb shell cmd jobscheduler run -f com.yourcompany.advancedconceptsapp 999
(The 999 is usually sufficient; it's a job ID.) Afterwards, check the usage_logs collection in Firestore for a new entry.
Next, create dev and prod flavors for different environments.
Update app/build.gradle.kts:
android {
// ... namespace, compileSdk, defaultConfig ...
// ****** Enable BuildConfig generation ******
buildFeatures {
buildConfig = true
}
// *******************************************
flavorDimensions += "environment"
productFlavors {
create("dev") {
dimension = "environment"
applicationIdSuffix = ".dev" // CRITICAL: Changes package name for dev builds
versionNameSuffix = "-dev"
resValue("string", "app_name", "Task Reporter (Dev)")
buildConfigField("String", "TASKS_COLLECTION", "\"tasks_dev\"")
buildConfigField("String", "USAGE_LOG_COLLECTION", "\"usage_logs_dev\"")
}
create("prod") {
dimension = "environment"
resValue("string", "app_name", "Task Reporter")
buildConfigField("String", "TASKS_COLLECTION", "\"tasks\"")
buildConfigField("String", "USAGE_LOG_COLLECTION", "\"usage_logs\"")
}
}
// ... buildTypes, compileOptions, etc ...
}
Sync Gradle files.
Important: note the applicationIdSuffix = ".dev". This means the actual package name for your development builds will become something like com.yourcompany.advancedconceptsapp.dev. This requires an update to your Firebase project setup, explained next. Also note the buildFeatures { buildConfig = true } block, which is required to use buildConfigField.
Because the dev flavor now has a different application ID (…advancedconceptsapp.dev), the original google-services.json file (downloaded in Step 1) will not work for dev builds, causing a "No matching client found" error during build.
You must add this new Application ID to your Firebase project:
In the Firebase console, add another Android app to the same project with the package name com.yourcompany.advancedconceptsapp.dev (replace com.yourcompany.advancedconceptsapp with your actual base package name).
Download the updated google-services.json file offered. This file now contains configurations for BOTH your base ID and the .dev suffixed ID.
Delete the old google-services.json from the app/ directory and replace it with the newly downloaded one.
Optionally, create flavor-specific source sets: right-click app/src -> New -> Directory and name it dev; inside dev, create res/values/ directories. Repeat for prod: right-click app/src -> New -> Directory and name it prod; inside prod, create res/values/ directories.
You could copy the app_name string definition from app/src/main/res/values/strings.xml into both app/src/dev/res/values/strings.xml and app/src/prod/res/values/strings.xml. Or, you can rely solely on the resValue definitions in Gradle (as done above). Using resValue is often simpler for single strings like app_name. If you had many different resources (layouts, drawables), you'd put them in the respective dev/res or prod/res folders.
Now update TaskViewModel.kt and ReportingWorker.kt to use BuildConfig instead of temporary constants.
TaskViewModel.kt change:
// Add this import
import com.yourcompany.advancedconceptsapp.BuildConfig
// Replace the temporary constant usage
// const val TEMPORARY_TASKS_COLLECTION = "tasks" // Remove this line
private val tasksCollection = db.collection(BuildConfig.TASKS_COLLECTION) // Use build config field
ReportingWorker.kt change
// Add this import
import com.yourcompany.advancedconceptsapp.BuildConfig
// Replace the temporary constant usage
// const val TEMPORARY_USAGE_LOG_COLLECTION = "usage_logs" // Remove this line
// ... inside doWork() ...
db.collection(BuildConfig.USAGE_LOG_COLLECTION).add(logEntry).await() // Use build config field
Modify TaskScreen.kt to potentially use the flavor-specific app name (though resValue handles this automatically if you referenced @string/app_name correctly, which TopAppBar usually does). If you set the title directly, you would load it from resources:
// In TaskScreen.kt (if needed)
import androidx.compose.ui.res.stringResource
import com.yourcompany.advancedconceptsapp.R // Import R class
// Inside Scaffold -> topBar
TopAppBar(title = { Text(stringResource(id = R.string.app_name)) }) // Use string resource
After syncing, you can switch between the four build variants (devDebug, devRelease, prodDebug, and prodRelease) in Android Studio's Build Variants panel.
Select devDebug. Run the app. The title should say "Task Reporter (Dev)". Data should go to tasks_dev and usage_logs_dev in Firestore.
Select prodDebug. Run the app. The title should be "Task Reporter". Data should go to tasks and usage_logs.
R8 is the default code shrinker and obfuscator in Android Studio (the successor to ProGuard). It's enabled by default for release build types. We need to ensure it doesn't break our app, especially Firestore data mapping.
1. Review the release build type in app/build.gradle.kts:
android {
// ...
buildTypes {
release {
isMinifyEnabled = true // Should be true by default for release
isShrinkResources = true // R8 handles both
proguardFiles(
getDefaultProguardFile("proguard-android-optimize.txt"),
"proguard-rules.pro" // Our custom rules file
)
}
debug {
isMinifyEnabled = false // Usually false for debug
proguardFiles(
getDefaultProguardFile("proguard-android-optimize.txt"),
"proguard-rules.pro"
)
}
}
// ...
}
isMinifyEnabled = true enables R8 for the release build type.
2. Configure app/proguard-rules.pro: open the app/proguard-rules.pro file and add the following:
# Keep Task data class and its members for Firestore serialization
-keep class com.yourcompany.advancedconceptsapp.data.Task { <init>(...); *; }
# Keep any other data classes used with Firestore similarly
# -keep class com.yourcompany.advancedconceptsapp.data.AnotherFirestoreModel { <init>(...); *; }
# Keep Coroutine builders and intrinsics (often needed, though AGP/R8 handle some automatically)
-keepnames class kotlinx.coroutines.intrinsics.** { *; }
# Keep companion objects for Workers if needed (sometimes R8 removes them)
-keepclassmembers class * extends androidx.work.Worker {
public static ** Companion;
}
# Keep specific fields/methods if using reflection elsewhere
# -keepclassmembers class com.example.SomeClass {
# private java.lang.String someField;
# public void someMethod();
# }
# Add rules for any other libraries that require them (e.g., Retrofit, Gson, etc.)
# Consult library documentation for necessary Proguard/R8 rules.
-keep class ... { <init>(...); *; }
: Keeps the Task
class, its constructors (<init>
), and all its fields/methods (*
) from being removed or renamed. This is crucial for Firestore.-keepnames
: Prevents renaming but allows removal if unused.-keepclassmembers
: Keeps specific members within a class.3. Test the Release Build:
prodRelease
build variant.prodRelease
as the variant. Click Finish.app/prod/release/
).adb install app-prod-release.apk
.usage_logs
)? If it crashes or data doesn’t save/load correctly, R8 likely removed something important. Check Logcat for errors (often ClassNotFoundException
or NoSuchMethodError
) and adjust your proguard-rules.pro
file accordingly.
Configure Gradle to upload development builds to testers via Firebase App Distribution.
Download the Firebase service account key JSON file (e.g., api-project-xxx-yyy.json) and move it to the root of the project, at the same level as the app folder. Keep this file local only; do not push it to the remote repository, because it contains sensitive data.
app/build.gradle.kts
:
// Apply the plugin at the top
plugins {
// ... other plugins id("com.android.application"), id("kotlin-android"), etc.
alias(libs.plugins.google.firebase.appdistribution)
}
android {
// ... buildFeatures, flavorDimensions, productFlavors ...
buildTypes {
getByName("release") {
isMinifyEnabled = true // Should be true by default for release
isShrinkResources = true // R8 handles both
proguardFiles(
getDefaultProguardFile("proguard-android-optimize.txt"),
"proguard-rules.pro" // Our custom rules file
)
}
getByName("debug") {
isMinifyEnabled = false // Usually false for debug
proguardFiles(
getDefaultProguardFile("proguard-android-optimize.txt"),
"proguard-rules.pro"
)
}
firebaseAppDistribution {
artifactType = "APK"
releaseNotes = "Latest build with fixes/features"
testers = "briew@example.com, bri@example.com, cal@example.com"
// Do not commit this credentials file or path to the remote repository; keep it local or supply it via a local environment variable.
serviceCredentialsFile = "$rootDir/api-project-xxx-yyy.json"
}
}
}
Add the plugin version to gradle/libs.versions.toml:
[versions]
googleFirebaseAppdistribution = "5.1.1"
[plugins]
google-firebase-appdistribution = { id = "com.google.firebase.appdistribution", version.ref = "googleFirebaseAppdistribution" }
Ensure the plugin classpath is in the project-level build.gradle.kts:
project build.gradle.kts
plugins {
// ...
alias(libs.plugins.google.firebase.appdistribution) apply false
}
Sync Gradle files.
Once everything is synced, you can build and upload any of the four variants (devDebug, devRelease, prodDebug, and prodRelease) to Firebase App Distribution from the command line:
./gradlew assembleRelease appDistributionUploadProdRelease
./gradlew assembleRelease appDistributionUploadDevRelease
./gradlew assembleDebug appDistributionUploadProdDebug
./gradlew assembleDebug appDistributionUploadDevDebug
Automate building and distributing the `dev` build on push to a specific branch.
Open the api-project-xxx-yyy.json file located at the project root and copy its content; store it as a GitHub repository secret (the workflow below reads it as FIREBASE_SERVICE_ACCOUNT_KEY_JSON, alongside FIREBASE_APP_ID). Then, inside .github/workflows/, create a new file named android_build_distribute.yml:
name: Android CI
on:
push:
branches: [ "main" ]
pull_request:
branches: [ "main" ]
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: set up JDK 17
uses: actions/setup-java@v3
with:
java-version: '17'
distribution: 'temurin'
cache: gradle
- name: Grant execute permission for gradlew
run: chmod +x ./gradlew
- name: Build devRelease APK
run: ./gradlew assembleRelease
- name: upload artifact to Firebase App Distribution
uses: wzieba/Firebase-Distribution-Github-Action@v1
with:
appId: ${{ secrets.FIREBASE_APP_ID }}
serviceCredentialsFileContent: ${{ secrets.FIREBASE_SERVICE_ACCOUNT_KEY_JSON }}
groups: testers
file: app/build/outputs/apk/dev/release/app-dev-release-unsigned.apk
Commit the .github/workflows/android_build_distribute.yml file and push it to your main branch on GitHub.
Before wrapping up, run a final test pass:
Run devDebug and prodDebug in Android Studio. Verify the app name changes and data goes to the correct Firestore collections (tasks_dev/tasks, usage_logs_dev/usage_logs).
Confirm the ReportingWorker runs periodically and logs data to the correct Firestore collection based on the selected flavor.
Test the prodRelease APK manually. Ensure all features work, especially adding/viewing tasks (Firestore interaction). Check Logcat for crashes related to missing classes/methods.
Have testers install the devDebug (or devRelease) builds uploaded manually or via CI/CD, and confirm they can install and run the app.
Push a commit to the develop branch (or whichever branch your workflow triggers on). Verify the build appears in Firebase App Distribution.
Congratulations! You’ve navigated complex Android topics including Firestore, WorkManager, Compose, Flavors (with correct Firebase setup), R8, App Distribution, and CI/CD.
This project provides a solid foundation. From here, you can explore:
If you want to have access to the full code in my GitHub repository, contact me in the comments.
AdvancedConceptsApp/
├── .git/
├── .github/workflows/android_build_distribute.yml
├── .gradle/
├── app/
│ ├── build/
│ ├── libs/
│ ├── src/
│ │ ├── main/ # Common code, res, AndroidManifest.xml
│ │ │ └── java/com/yourcompany/advancedconceptsapp/
│ │ │ ├── data/Task.kt
│ │ │ ├── ui/TaskScreen.kt, TaskViewModel.kt, theme/
│ │ │ ├── worker/ReportingWorker.kt
│ │ │ └── MainActivity.kt
│ │ ├── dev/ # Dev flavor source set (optional overrides)
│ │ ├── prod/ # Prod flavor source set (optional overrides)
│ │ ├── test/ # Unit tests
│ │ └── androidTest/ # Instrumentation tests
│ ├── google-services.json # *** IMPORTANT: Contains configs for BOTH package names ***
│ ├── build.gradle.kts # App-level build script
│ └── proguard-rules.pro # R8/Proguard rules
├── api-project-xxx-yyy.json # Firebase service account key json
├── gradle/wrapper/
├── build.gradle.kts # Project-level build script
├── gradle.properties
├── gradlew
├── gradlew.bat
└── settings.gradle.kts
In Part 1 of this series, we learned about the importance of AWS and Pulumi. Now, let's get hands-on with the demo and create a VPC on AWS using Pulumi.
Pulumi Preview shows a dry-run of changes before applying them. It helps you see what resources will be created (+), updated (~), or deleted (-) without actually making any changes.
Step 4: Deploy Infrastructure
Pulumi up deploys or updates infrastructure by applying changes from your Pulumi code.
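To make this concrete, here is a minimal sketch of a Pulumi program in Python that creates a VPC and a public subnet. The resource names and CIDR ranges are illustrative placeholders, not the exact values used in this demo.
import pulumi
import pulumi_aws as aws
# A VPC with DNS hostnames enabled (placeholder CIDR range)
vpc = aws.ec2.Vpc(
    "demo-vpc",
    cidr_block="10.0.0.0/16",
    enable_dns_hostnames=True,
    tags={"Name": "pulumi-demo-vpc"},
)
# A public subnet inside that VPC
public_subnet = aws.ec2.Subnet(
    "demo-public-subnet",
    vpc_id=vpc.id,
    cidr_block="10.0.1.0/24",
    map_public_ip_on_launch=True,
    tags={"Name": "pulumi-demo-public"},
)
pulumi.export("vpc_id", vpc.id)
pulumi.export("public_subnet_id", public_subnet.id)
Running pulumi preview against this program shows the two resources that would be created, and pulumi up applies them.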
Creating VPC Peering with Pulumi
pulumi destroy removes all resources managed by Pulumi, restoring the environment to its original state.
pulumi stack rm removes a Pulumi stack and its state, but it does not delete cloud resources; run pulumi destroy first (or pass --force to remove a stack whose state still lists resources).
After the stack is removed
AWS Console Page after deleting VPC
Pulumi offers a powerful, flexible, and developer-friendly approach to managing AWS infrastructure. By leveraging Pulumi, you can:
With Pulumi’s modern IaC approach, you can move beyond traditional Terraform and CloudFormation and embrace a more scalable, flexible, and efficient way to manage AWS resources.
Pulumi is a modern Infrastructure as Code (IaC) tool that allows you to define, deploy, and manage cloud infrastructure using general-purpose programming languages. Unlike Terraform, which uses HCL, Pulumi enables you to define infrastructure using Python, making it easier for developers to integrate infrastructure with application code.
The Pulumi Dashboard (if using Pulumi Cloud) helps track stack updates, deployment history, and the resources in each stack.
So, yes, Pulumi destroys resources and updates the Pulumi Dashboard accordingly.
A company running a high-traffic web application on AWS wants to automate its cloud infrastructure using Pulumi. The goal is to deploy a highly available, scalable, and secure architecture with:
Pulumi is an Infrastructure as Code (IaC) tool that allows you to define cloud infrastructure using programming languages like Python, TypeScript, and Go. In this architecture, Pulumi interacts with AWS to deploy multiple services.
Components in the architecture.
Using Pulumi (e.g., Python, TypeScript), you write a script to define resources like VPC, EC2, S3, etc.
Run pulumi up, translating the code into AWS API calls to create and configure services.
Modify infrastructure using Pulumi’s code and redeploy using pulumi up.
Run Pulumi destroy to remove the entire setup.
Pulumi is a powerful Infrastructure-as-Code (IaC) tool that enables developers to provision and manage AWS resources using familiar programming languages like Python. Unlike traditional declarative tools like Terraform, Pulumi allows for greater flexibility through loops, conditionals, and reusable components.
In this blog, we explored how Pulumi can deploy AWS services like EC2, S3, RDS, and Lambda, along with an architecture diagram to illustrate the deployment. With Pulumi, you can streamline cloud infrastructure management while leveraging best practices in software development.
After covering AWS-Pulumi in Part 1, stay tuned for Part 2, where we’ll set up a VPC on AWS using Pulumi.
In this blog, we will explore the various Containers, their functionalities, and how they interact to create a seamless customer shopping experience.
HCL Commerce Containers provide a modular and scalable approach to managing ecommerce applications.
HCL Commerce Containers are individual components that work together to deliver a complete e-commerce solution.
HCL Commerce containers
This blog explored the various HCL Commerce Containers, their functionalities, and how they work together to create a robust e-commerce solution. By understanding and implementing these Containers, you can enhance the performance and scalability of your e-commerce platform.
Please go through the following link to learn about deploying HCL Commerce Elasticsearch- and Solr-based solutions: https://blogs.perficient.com/2024/12/11/deploying-hcl-commerce-elasticsearch-and-solr-based-solutions/
Terraform is a HashiCorp-owned Infrastructure as Code (IaC) technology that allows you to develop, deploy, alter, and manage infrastructure using code. It lets you define resources and infrastructure in human-readable, declarative configuration files and manages your infrastructure's lifecycle.
Code here simply means instructions written in HCL (HashiCorp Configuration Language) in a human-readable format, stored in files with the .tf extension (or .tf.json for the JSON variant).
Infrastructure as code (IaC) refers to using configuration files to control your IT infrastructure.
Managing IT infrastructure has traditionally been a laborious task. People would physically install and configure servers, which is time-consuming and costly.
Nowadays, businesses are growing rapidly, so manual-managed infrastructure can no longer meet the demands of today’s businesses.
To meet customer demands and save costs, IT organizations have quickly adopted the public cloud, which is mostly API-driven. They architect their applications to support a much higher level of elasticity and deploy them on supporting technologies such as Docker containers and the public cloud. To build, manage, and deploy code on those technologies, a tool like Terraform is invaluable for delivering the product quickly.
A Jenkins Pipeline is a suite of plugins that supports building, deploying, and automating continuous integration and delivery (CI/CD) workflows. It provides a way to define the entire build process in a scripted or declarative format called a Jenkinsfile. This allows developers to manage and version their CI/CD processes alongside their application code.
Jenkins provides two types of pipelines: Declarative and Scripted.
AWS ECS (Elastic Container Service) is a managed container service from AWS that allows you to run and manage Docker containers on a cluster of virtual servers.
Containers keep applications isolated from one another while sharing the host operating system, and container services are widely used today.
In this architecture, we launch an EC2 instance in AWS using Terraform, with user data handling the Jenkins server configuration. The Jenkins CI/CD pipeline then fetches the source code from GitHub, builds a Docker image, and uploads it to the ECR Docker registry. Finally, we deploy the application to the ECS cluster using that Docker image. A sketch of the Terraform piece is shown below.
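As a sketch of the Terraform side, the EC2 instance with Jenkins bootstrap user data might look like the following. The AMI ID, instance type, and bootstrap commands are placeholders; adapt them to your region and OS.
provider "aws" {
  region = "us-east-1"
}
resource "aws_instance" "jenkins" {
  ami           = "ami-0123456789abcdef0" # placeholder: use a current Amazon Linux or Ubuntu AMI
  instance_type = "t3.medium"
  # User data bootstraps the instance on first boot (illustrative commands only)
  user_data = <<-EOF
              #!/bin/bash
              yum update -y
              yum install -y docker git
              systemctl enable --now docker
              # ... install Java and Jenkins, then start the Jenkins service ...
              EOF
  tags = {
    Name = "jenkins-server"
  }
}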
Step 1: Create an IAM user and an Access Key/Secret Key for the IAM user, and provide the appropriate permissions, such as ECR and Docker Container Policy.
Step 2: Create an ECR Repository to store the Docker Images.
Step 3: Create an ECS Cluster
Step 3.1: Create a task Definition. The Task Definition contains all the information to run the Container, such as the container Image URL and Compute Power.
Step 3.2: Execution Role: this role is attached to the task definition and carries the ECS Task Execution Role policy plus CloudWatch Logs permissions so the container's real-time logs can be collected.
Step 3.3: Create a Service in the cluster: a task definition only describes how to run the container; to keep tasks running and manage deployments, create a service, which acts as an intermediary between the application and the container instances.
Step 4: Jenkins Server Configuration
Let’s deploy the code using Jenkins on ECS Cluster: Jenkinsfile for CICD Pipeline: https://github.com/prafulitankar/GitOps/blob/main/Jenkinsfile
Create a Jenkins Pipeline, which should be a Declarative Pipeline.
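The full Jenkinsfile lives in the repository linked above; the simplified sketch below only illustrates the shape of such a declarative pipeline. The registry URL, repository, cluster, and service names are placeholders.
pipeline {
    agent any
    environment {
        AWS_REGION   = 'us-east-1'
        ECR_REGISTRY = '123456789012.dkr.ecr.us-east-1.amazonaws.com' // placeholder account/region
        IMAGE        = "${ECR_REGISTRY}/demo-app"                     // placeholder repository name
    }
    stages {
        stage('Checkout') {
            steps { checkout scm }
        }
        stage('Build Docker Image') {
            steps { sh 'docker build -t $IMAGE:$BUILD_NUMBER -t $IMAGE:latest .' }
        }
        stage('Push to ECR') {
            steps {
                sh 'aws ecr get-login-password --region $AWS_REGION | docker login --username AWS --password-stdin $ECR_REGISTRY'
                sh 'docker push $IMAGE:$BUILD_NUMBER'
                sh 'docker push $IMAGE:latest'
            }
        }
        stage('Deploy to ECS') {
            // Assumes the task definition references the :latest tag; forcing a new deployment pulls it again.
            steps { sh 'aws ecs update-service --cluster demo-cluster --service demo-service --force-new-deployment' }
        }
    }
}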
We are done with the infra setup and the Jenkins pipeline. Let's run the Jenkins pipeline:
Once the Jenkins pipeline executes successfully, the ECS service deploys a new revision with the updated Docker image.
Output: after the pipeline ran successfully, our application was deployed on the ECS cluster.
We launched the Jenkins Server on an EC2 Instance with Terraform. Then, we created an ECR repository to store the Docker image, ECS Cluster, task definition, and Service to deploy the application. Using the Jenkins pipeline, we pulled the source code from GitHub, built the code, created a Docker image, and uploaded it to the ECR repository. This is our CI part, and then we deployed our application on ECS, which is CD.
In the ever-evolving landscape of cloud computing, Infrastructure as Code (IaC) has emerged as a cornerstone practice for managing and provisioning infrastructure. IaC enables developers to define infrastructure configurations using code, ensuring consistency, automation, and scalability. AWS CloudFormation, a key service in the AWS ecosystem, simplifies IaC by allowing users to easily model and set up AWS resources. This blog explores the best practices for utilizing AWS CloudFormation to achieve reliable, secure, and efficient infrastructure management.
AWS CloudFormation provides a comprehensive solution for automating the deployment and management of AWS resources. The primary advantages of using CloudFormation include:
Breaking down large CloudFormation templates into smaller, reusable components enhances maintainability and scalability. Modularization allows you to create separate templates for different infrastructure components such as networking, compute instances, and databases.
Example:
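The original example is not reproduced here; a minimal sketch of the idea, with illustrative resource names, could look like this. The network template exports an identifier that the compute template imports.
# network.yml (sketch)
Resources:
  AppVpc:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16
  PublicSubnet:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref AppVpc
      CidrBlock: 10.0.1.0/24
Outputs:
  PublicSubnetId:
    Value: !Ref PublicSubnet
    Export:
      Name: network-PublicSubnetId
# compute.yml (sketch)
Resources:
  AppInstance:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: t3.micro
      ImageId: ami-0123456789abcdef0   # placeholder AMI
      SubnetId: !ImportValue network-PublicSubnetId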
The network.yml template creates the VPC and subnets in this example, while the compute.yml template provisions the EC2 instance. You can use Export and ImportValue functions to share resource outputs between templates.
Nested stacks allow you to create a parent stack that references child stacks, improving reusability and modularization.
Example:
Using nested stacks ensures a clean separation of concerns and simplifies stack management.
Enhance template reusability and flexibility through parameterization:
Example:
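The original example is not shown here; a sketch of a parameterized template might look like the following, with the environment name and instance type supplied at deploy time.
Parameters:
  EnvironmentName:
    Type: String
    AllowedValues: [dev, staging, prod]
    Default: dev
  InstanceType:
    Type: String
    Default: t3.micro
Resources:
  AppInstance:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: !Ref InstanceType
      ImageId: ami-0123456789abcdef0   # placeholder AMI
      Tags:
        - Key: Environment
          Value: !Ref EnvironmentName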
Securing infrastructure configurations is paramount. Best practices include:
Example:
Integrating CloudFormation with version control systems and CI/CD pipelines improves collaboration and automation:
Example Pipeline:
Validation and testing are critical for ensuring the reliability of CloudFormation templates:
Example:
cfn-lint template.yml
aws cloudformation create-change-set --stack-name MyStack --change-set-name my-changes --template-body file://template.yml
Protecting infrastructure from unauthorized changes and maintaining consistency is essential:
Example Stack Policy:
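The original policy is not reproduced here; a typical stack policy of this kind allows routine updates while denying replacement or deletion of a critical resource (the logical ID below is illustrative):
{
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "Update:*",
      "Principal": "*",
      "Resource": "*"
    },
    {
      "Effect": "Deny",
      "Action": ["Update:Replace", "Update:Delete"],
      "Principal": "*",
      "Resource": "LogicalResourceId/ProductionDatabase"
    }
  ]
}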
Below is a high-level architecture diagram illustrating how AWS CloudFormation fits into this workflow.
AWS CloudFormation is a powerful tool for implementing infrastructure as code, offering automation, consistency, and scalability. By following best practices such as template modularization, security considerations, and automation, organizations can enhance the reliability and efficiency of their cloud infrastructure. Adopting AWS CloudFormation simplifies infrastructure management and strengthens overall security and compliance.
Embracing these best practices will enable businesses to leverage the full potential of AWS CloudFormation, fostering a more agile and resilient cloud environment.
Automating deployments is crucial for efficiency and reliability in today’s fast-paced development environment. GitHub Actions provides a seamless way to implement CI/CD pipelines, allowing developers to automate the deployment of static websites without manual intervention.
In this blog, we will explore how to deploy a static website to an AWS S3 bucket using GitHub Actions. We’ll cover setting up an S3 bucket, configuring IAM roles for secure authentication, and leveraging GitHub Actions workflows to streamline deployment. By the end, you’ll have a fully automated pipeline that ensures quick and secure deployments with minimal effort.
First, create a GitHub repository. I already made one with the same name, which is why it exists.
You can clone the repository from the URL below and put it into your local system. I have added the website-related code to my GitHub repository, so you just need to clone it: https://github.com/Kunal2795/Static-Website.git.
Push the code to host this static website with your changes, such as updating the bucket name and AWS region. I already have it locally, so you just need to push it using the Git commands below:
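The commands themselves are not shown in this version of the post; the usual sequence is the following (the commit message is illustrative, and the branch is assumed to be main):
git add .
git commit -m "Update bucket name and AWS region"
git push origin main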
Once the changes are pushed to your GitHub repository, ensure the main.yaml file is in the .github/workflows directory.
If the main.yaml file is not present in the .github/workflows/ directory, create it and add a job to run the static website pipeline in GitHub Actions. The main.yaml file is the primary configuration file that GitHub Actions uses to run the entire pipeline.
Add the following job code to the main.yaml file in the .github/workflows/ directory:
name: Portfolio Deployment2
on:
  push:
    branches:
      - main
jobs:
  build-and-deploy:
    runs-on: [self-hosted, silver]
    steps:
      - name: Checkout
        uses: actions/checkout@v1
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-2
      - name: Deploy static site to S3 bucket
        run: aws s3 sync . s3://kc-devops --delete
You need to make some modifications in the above jobs, such as:
Launch an EC2 instance with Ubuntu OS using a simple configuration.
After that, create a self-hosted runner using specific commands. To get these commands, go to Settings in GitHub, navigate to Actions, click on Runners, and then select Create New Self-Hosted Runner.
Select Linux as the runner image.
Once the runner is downloaded and configured, check its status to ensure it is idle or offline. If it is offline, start the GitHub Runner service on your EC2 server.
Also, ensure that AWS CLI is installed on your server.
Create an IAM user and grant it full access to EC2 and S3 services.
Then, go to Security Credentials, create an Access Key and Secret Access Key, and securely copy and store both the Access Key and Secret Access Key in a safe place.
Next, navigate to GitHub Actions → Secrets & Variables → Actions, then add your AWS Access Key ID and Secret Access Key securely.
After adding the Access Key ID and Secret Access Key, proceed to the next section: S3.
Create an S3 bucket—I have created one with the name kc-devops.
Add the policy below to your S3 bucket and update the bucket name with your own bucket name.
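The policy itself is not reproduced in this version of the post. A standard public-read policy for static website hosting looks like the sketch below, using the kc-devops bucket name from this example (swap in your own bucket):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::kc-devops/*"
    }
  ]
}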
After setting up everything, open the main.yaml file in your repository, update the bucket name, and commit the changes.
Then, click the Actions tab to see all your triggered workflows and their status.
We can see that all the steps for the build and deploy jobs have been successfully completed.
Lastly, sign in to the AWS Management Console and open the Amazon S3 console. Check all the codes are stored in your bucket.
Then, go to the Properties tab. Under Static website hosting, find and click on the Endpoint URL. (Bucket Website endpoint)
This Endpoint URL is the Amazon S3 website endpoint for your bucket.
Finally, we have successfully deployed and hosted a static website using automation to the Amazon S3 bucket.
With this setup, whenever you push changes to your GitHub repository, GitHub Actions automatically trigger the deployment process. This ensures that your static website is seamlessly updated and deployed to your AWS S3 bucket without any manual intervention. This automation streamlines the deployment workflow, making it more efficient and error-free.
As part of security and compliance best practices, it is essential to enhance data protection by transitioning from AWS-managed encryption keys to Customer Managed Keys (CMK).
During database migration or restoration, it is not possible to directly change encryption from AWS-managed keys to Customer-Managed Keys (CMK).
During migration, the database snapshot must be created and re-encrypted with CMK to ensure a secure and efficient transition while minimizing downtime. This document provides a streamlined approach to saving time and ensuring compliance with best practices.
Fig: RDS Snapshot Encrypted with AWS-Managed KMS Key
This document aims to provide a structured process for creating a database snapshot, encrypting it with a new CMK, and restoring it while maintaining the original database configurations. This ensures minimal disruption to operations while strengthening data security.
Before proceeding with the snapshot and restoration process, ensure the following prerequisites are met:
Note the source instance's configuration so it can be matched on restore: instance size, database configuration, networking details, and storage options.
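The step-by-step console screenshots are not included here; as a rough sketch, the same flow with the AWS CLI looks like this (instance identifiers, region, KMS key ARN, and instance class are placeholders):
# 1. Take a manual snapshot of the source instance
aws rds create-db-snapshot \
  --db-instance-identifier prod-db \
  --db-snapshot-identifier prod-db-snap
# 2. Copy the snapshot, re-encrypting it with the Customer Managed Key
aws rds copy-db-snapshot \
  --source-db-snapshot-identifier prod-db-snap \
  --target-db-snapshot-identifier prod-db-snap-cmk \
  --kms-key-id arn:aws:kms:us-east-1:123456789012:key/1234abcd-12ab-34cd-56ef-1234567890ab
# 3. Restore a new instance from the CMK-encrypted snapshot, matching the original configuration
aws rds restore-db-instance-from-db-snapshot \
  --db-instance-identifier prod-db-cmk \
  --db-snapshot-identifier prod-db-snap-cmk \
  --db-instance-class db.t3.medium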
Following these steps ensures a secure, efficient, and smooth process for taking, encrypting, and restoring RDS snapshots in AWS. Implementing best practices such as automated backups, encryption key management, and proactive monitoring can enhance data security and operational resilience. Proper planning and validation at each step will minimize risks and help maintain business continuity.
The Systems Manager (SSM) streamlines managing Windows instances in AWS. If you've ever forgotten the password for your Windows EC2 instance, SSM offers a secure and efficient way to reset it without additional tools or manual intervention.
In a production environment, losing access to a Windows EC2 instance due to an unknown or non-working password can cause significant downtime. Instead of taking a backup, creating a new instance, and reconfiguring the environment—which is time-consuming and impacts business operations—we leverage AWS Systems Manager (SSM) to efficiently recover access without disruption.
Before you start, ensure the following prerequisites are met:
Follow this procedure if all you need is a PowerShell prompt on the target instance.
With the session active, follow these steps to reset the password:
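The exact command from the original walkthrough is not reproduced here; from an active PowerShell session on the instance, the built-in net user command is the usual way to set a local account password (the values below are placeholders):
# Run inside the SSM PowerShell session on the target instance
net user <username> <password>
# Example with hypothetical values
net user Administrator "N3w-Str0ng-P@ss!"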
Replace <username> with the actual username and <password> with your new password.
AWS Systems Manager is a powerful tool that simplifies Windows password recovery and enhances the overall management and security of your instances. By leveraging SSM, you can avoid downtime, maintain access to critical instances, and adhere to AWS best practices for operational efficiency.