Helpful Git Aliases To Maximize Developer Productivity (Wed, 14 May 2025)
https://blogs.perficient.com/2025/05/14/helpful-git-aliases-to-maximize-developer-productivity/
Git is a powerful tool, but the sheer number of commands required to perform common tasks can sometimes feel overwhelming. If you’ve ever found yourself typing out long, complex Git commands and wondered if there’s a faster way to get things done, you’re not alone. One way to streamline your workflow and reduce repetitive typing is by using Git aliases. These are shorthand commands that allow you to perform lengthy Git operations with just a few characters.
 
In this post, we’ll explore some useful Git aliases that can help you maximize your productivity, speed up common workflows, and maintain a clean Git history.

How To Add Aliases To Your Git Config File

To start using Git aliases, you need to add them to your .gitconfig file. This file is typically located in your home directory, and it contains various configurations for your Git setup, including user details and aliases.
 
Here’s how to add aliases:
    1. Open the .gitconfig file:
      • On Linux/MacOS, the .gitconfig file is typically located in your home directory (~/.gitconfig).
      • On Windows, it is located at C:\Users\<YourUsername>\.gitconfig.
    2. Edit the .gitconfig file: You can manually add aliases to the [alias] section. If this section doesn’t already exist, simply add it at the top or bottom of the file. Below is an example of how your .gitconfig file should look once you add the aliases that we will cover in this post:
      [alias]
        # --- Branching ---
        co = checkout
        cob = checkout -b
        br = branch
      
        # --- Working Directory Status ---
        st = status
        df = diff
      
        # --- Commit & Push ---
        amod = "!f() { git add -u && git commit -m \"$1\" && git push; }; f"
        acp = "!f() { git add -A && git commit -m \"$1\" && git push; }; f"
      
        # --- Stash ---
        ss = stash
        ssd = stash drop
      
        # --- Reset / Cleanup ---
        nuke = reset --hard
        resetremote = !git reset --hard origin/main
      
        # --- Rebase Helpers ---
        rbc = rebase --continue
        rba = rebase --abort
        rbi = rebase -i
      
        # --- Log / History ---
        hist = log --oneline --graph --decorate --all
        ln = log --name-status
      
        # --- Fetch & Sync ---
        fa = fetch --all --prune
        pullr = pull --rebase
        up = !git fetch --prune && git rebase origin/$(git rev-parse --abbrev-ref HEAD)
        cp = cherry-pick
    3. Save and close the file: Once you’ve added your aliases, save the file, and your new aliases will be available the next time you run Git commands in your terminal.
    4. Test the aliases: After saving your .gitconfig file, you can use your new aliases immediately. For example, try using git co to switch branches or git amod "your commit message" to commit your changes.
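
If you prefer not to edit the file by hand, the same aliases can also be added from the command line with git config --global, which writes to the same [alias] section of your .gitconfig (shown here for a few of the aliases above):

git config --global alias.co checkout
git config --global alias.cob "checkout -b"
git config --global alias.st status

# Shell-style aliases need the leading "!" and single quotes
git config --global alias.acp '!f() { git add -A && git commit -m "$1" && git push; }; f'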

Explanation of the Aliases

I find these to be very helpful in my day-to-day work as a web developer. Here are some explanations of the aliases that I have added:

Branching

co = checkout

When switching between branches, this alias saves you from typing git checkout <branch_name>. With co, switching is as simple as:
git co <branch_name>
 

cob = checkout -b

Creating and switching to a new branch is easier with this alias. Instead of git checkout -b <new_branch_name>, simply use: 
git cob <new_branch_name>
 

br = branch

If you need to quickly list all branches, whether local or remote, this alias is a fast way to do so:
git br
 

Working Directory Status

st = status

One of the most frequently used commands in Git, git status shows the current state of your working directory. By aliasing it as st, you save time while checking what’s been staged or modified:
git st
 

df = diff

If you want to view the changes you’ve made compared to the last commit, use df for a quick comparison:
git df
 

Commit and Push

amod = "!f() { git add -u && git commit -m \"$1\" && git push; }; f"

For quick commits, this alias allows you to add modified and deleted files (but not new untracked files), commit, and push all in one command! It’s perfect for when you want to keep things simple and focus on committing changes:
git amod "Your commit message"
 

acp = "!f() { git add -A && git commit -m \"$1\" && git push; }; f"

Similar to amod, but this version adds all changes, including untracked files, commits them, and pushes to the remote. It’s ideal when you’re working with a full set of changes:
git acp "Your commit message"
 

Stash

ss = stash

When you’re in the middle of something but need to quickly save your uncommitted changes to come back to later, git stash comes to the rescue. With this alias, you can stash your changes with ease:
git ss
 

ssd = stash drop

Sometimes, after stashing, you may want to drop the stashed changes. With ssd, you can easily discard a stash:
git ssd
 

Reset / Cleanup

nuke = reset --hard

This alias will discard all local changes and reset your working directory to the last commit. It’s especially helpful when you want to start fresh or undo your recent changes:
git nuke
 

resetremote = !git reset --hard origin/main

When your local branch has diverged from the remote and you want to match it exactly, this alias will discard local changes and reset to the remote branch. It’s a lifesaver when you need to restore your local branch to match the remote:
git resetremote
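
The alias above is hard-coded to origin/main. If you often work on other branches, one possible variant (not part of the original post; the name resetcur is just an example) resets to the remote counterpart of whichever branch you currently have checked out, assuming it exists on origin:

resetcur = !git reset --hard origin/$(git rev-parse --abbrev-ref HEAD)

git resetcur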
 

Rebase Helpers

rbc = rebase --continue

If you’re in the middle of a rebase and have resolved any conflicts, git rebase --continue lets you proceed. The rbc alias lets you continue the rebase without typing the full command: 
git rbc
 

rba = rebase --abort

If something goes wrong during a rebase and you want to abandon the process, git rebase --abort will undo all changes from the rebase. This alias makes it quick and easy to abort a rebase:
git rba
 

rbi = rebase -i

For an interactive rebase, where you can squash or reorder commits, git rebase -i is an essential command. The rbi alias will save you from typing the whole command:
git rbi
 

Log / History

hist = log --oneline --graph --decorate --all

For a good-looking, concise view of your commit history, this alias combines the best of git log. It shows commits in a graph format, with decoration to show branch names and tags, all while keeping the output short:
git hist
 

ln = log --name-status

When you need to see what files were changed in each commit (with their status: added, modified, deleted), git log --name-status is invaluable. The ln alias helps you inspect commit changes more easily:
git ln
 

Fetch and Sync

fa = fetch --all --prune

Fetching updates from all remotes and cleaning up deleted remote-tracking branches with git fetch --all --prune is essential for keeping your remotes organized. Git ignores aliases that share a name with a built-in command, so this alias is named fa rather than fetch. It turns the task into a single short command:
git fa
 

pullr = pull --rebase

When pulling changes from the remote, a rebase is often better than a merge. This keeps your history linear and avoids unnecessary merge commits. The pullr alias performs a pull with a rebase:
git pullr

up = !git fetch --prune && git rebase origin/$(git rev-parse --abbrev-ref HEAD)

This alias is a great shortcut if you want to quickly rebase your current branch onto its remote counterpart. It first fetches the latest updates from the remote and prunes any deleted remote-tracking branches, ensuring your local references are clean and up to date. Then it rebases your branch onto the corresponding remote, keeping your history in sync:
git up
 

cp = cherry-pick

Cherry-picking allows you to apply a specific commit from another branch to your current branch. This alias makes it easier to run:
git cp <commit-hash>

Final Thoughts

By setting up these Git aliases, you can reduce repetitive typing, speed up your development process, and make your Git usage more efficient. Once you’ve incorporated a few into your routine, they become second nature. Don’t hesitate to experiment and add your own based on the commands you use most. Put these in your .gitconfig file today and start enjoying the benefits of a more productive workflow!
Good Vibes Only: A Vibe Coding Primer (Mon, 12 May 2025)
https://blogs.perficient.com/2025/05/12/good-vibes-only-a-vibe-coding-primer/

In the ever-evolving landscape of software development, new terms and methodologies constantly emerge, reshaping how we think about and create technology. Recently, a phrase has been buzzing through the tech world, sparking both excitement and debate: “vibe coding.” While the idea of coding based on intuition or a “feel” isn’t entirely new, the term has gained significant traction and a more specific meaning in early 2025, largely thanks to influential figures in the AI space.

This article will delve into what “vibe coding” means today, explore its origins and core tenets, describe a typical workflow in this new paradigm, and discuss its potential benefits and inherent challenges. Prepare to look beyond the strictures of traditional development and into a more fluid, intuitive, and AI-augmented future.

What Exactly Is Vibe Coding? The Modern Definition

The recent popularization of “vibe coding” is strongly associated with Andrej Karpathy, a co-founder of OpenAI and former AI leader at Tesla. In early 2025, Karpathy described “vibe coding” as an approach that heavily leverages Large Language Models (LLMs). In this model, the developer’s role shifts from meticulously writing every line of code to guiding an AI with natural language prompts, descriptions, and desired outcomes—essentially, conveying the “vibe” of what they want to achieve. The AI then generates the corresponding code.

As Karpathy put it (paraphrasing common interpretations from early 2025 discussions), it’s less about traditional coding and more about a conversational dance with the AI:

“You see things, say things, run things, and copy-paste things, and it mostly works.”

This points to a future where the barrier between idea and functional code becomes increasingly permeable, with the developer acting more as a conductor or a curator of AI-generated software components.

So, is this entirely new? Yes and no.

  • The “New”: The specific definition tying “vibe coding” to the direct, extensive use of advanced LLMs like GitHub Copilot’s agent mode or similar tools is a recent development (as of early 2025). It’s about a human-AI symbiosis where the AI handles much of the syntactical heavy lifting.
  • The “Not So New”: The underlying desire for a more intuitive, less rigidly structured coding experience—coding by “feel” or “flow”—has always been a part of developer culture. Programmers have long talked about being “in the zone,” rapidly prototyping, or using their deep-seated intuition to solve problems, especially in creative coding, game development, or initial exploratory phases. This older, more informal notion of “vibe coding” can be seen as a spiritual precursor. Today’s “vibe coding” takes that innate human approach and supercharges it with powerful AI tools.

Therefore, when we talk about “vibe coding” today (in mid-2025), we’re primarily referring to this AI-assisted paradigm. It’s about effectively communicating your intent—the “vibe”—to an AI, which then translates that intent into code. The focus shifts from syntax to semantics, from meticulous construction to intuitive direction.

The Core Tenets of (AI-Augmented) Vibe Coding

Given this AI-centric understanding, the principles of vibe coding look something like this:

  1. Intuition and Intent as the Primary Driver

    The developer’s main input is their understanding of the problem and the desired “feel” or functionality of the solution. They translate this into natural language prompts or high-level descriptions for the AI. The “how” of the code generation is largely delegated.

  2. Prompt Engineering is Key

    Your ability to “vibe” effectively with the AI depends heavily on how well you can articulate your needs. Crafting clear, concise, and effective prompts becomes a critical skill, replacing some traditional coding skills.

  3. Rapid Iteration and AI-Feedback Loop

    The cycle is: prompt -> AI generates code -> test/review -> refine prompt -> repeat. This loop is incredibly fast. You can see your ideas (or the AI’s interpretation of them) come to life almost instantly, allowing for quick validation or correction of the “vibe.”

  4. Focus on the “What” and “Why,” Less on the “How”

    Developers concentrate on defining the problem, the user experience, and the desired outcome. The AI handles much of the underlying implementation details. The “vibe” is about the end result and its characteristics, not necessarily the elegance of every single line of generated code (though that can also be a goal).

  5. Embracing the “Black Box” (to a degree)

    While reviewing AI-generated code is crucial, there’s an implicit trust in the AI’s capability to handle complex boilerplate or even entire functions. The developer might not always delve into the deepest intricacies of every generated snippet, especially if it “just works” and fits the vibe. This is also a point of contention and risk.

  6. Minimal Upfront Specification, Maximum Exploration

    Detailed, exhaustive spec documents become less critical for the initial generation. You can start with a fuzzy idea, prompt the AI, see what it produces, and iteratively refine the “vibe” and the specifics as you go. It’s inherently exploratory.

  7. Orchestration Over Manual Construction

    The developer acts more like an orchestrator, piecing together AI-generated components, guiding the overall architecture through prompts, and ensuring the different parts harmonize to achieve the intended “vibe.”

A Typical AI-Driven Vibe Coding Workflow

Let’s walk through what a vibe coding session in this AI-augmented era might look like:

  1. The Conceptual Spark

    An idea for an application, feature, or fix emerges. The developer has a general “vibe” of what’s needed – “I need a simple web app to track my reading list, and it should feel clean and modern.”

  2. Choosing the Right AI Tool

    The developer selects their preferred LLM-based coding assistant (e.g., an advanced mode of GitHub Copilot, Cursor Composer, or other emerging tools).

  3. The Initial Prompt & Generation

    The developer crafts an initial prompt.

    Developer:

    Generate a Python Flask backend for a reading list app. It needs a PostgreSQL database with a 'books' table (title, author, status, rating). Create API endpoints for adding a book, listing all books, and updating a book's status.

    The AI generates a significant chunk of code.

  4. Review, Test, and “Vibe Check”

    The developer reviews the generated code. Does it look reasonable? Do the core structures align with the intended vibe? They might run it, test the endpoints (perhaps by asking the AI to generate test scripts too).

    Developer (to self): “Okay, this is a good start, but the ‘status’ should be an enum: ‘to-read’, ‘reading’, ‘read’. And I want a ‘date_added’ field.”

  5. Refinement through Iterative Prompting

    The developer provides feedback and further instructions to the AI.

    Developer:

    Refactor the 'books' model. Change 'status' to an enum with values 'to-read', 'reading', 'read'. Add a 'date_added' field that defaults to the current timestamp. Also, generate a simple HTML frontend using Bootstrap for listing and adding books that calls these APIs.

    The AI revises the code and generates the new parts.

  6. Integration and Manual Tweaks (if necessary)

    The developer might still need to do some light manual coding to connect pieces, adjust styles, or fix minor issues the AI missed. The goal is for the AI to do the bulk of the work.

  7. Achieving the “Vibe” or Reaching a Milestone

    This iterative process continues until the application meets the desired “vibe” and functionality, or a significant milestone is reached. The developer has guided the AI to create something that aligns with their initial, perhaps fuzzy, vision.

This workflow is highly dynamic. The developer is in a constant dialogue with the AI, shaping the output by refining their “vibe” into increasingly specific prompts.

Where AI-Driven Vibe Coding Shines (The Pros)

This new approach to coding offers several compelling advantages:

  • Accelerated Development & Prototyping: Generating boilerplate, standard functions, and even complex algorithms can be drastically faster, allowing for rapid prototyping and quicker MVP releases.
  • Reduced Cognitive Load for Routine Tasks: Developers can offload tedious and repetitive coding tasks to the AI, freeing up mental energy for higher-level architectural thinking, creative problem-solving, and refining the core “vibe.”
  • Lowering Barriers (Potentially): For some, it might lower the barrier to creating software, as deep expertise in a specific syntax might become less critical than the ability to clearly articulate intent.
  • Enhanced Learning and Exploration: Developers can quickly see how different approaches or technologies could be implemented by asking the AI, making it a powerful learning tool.
  • Focus on Creativity and Product Vision: By automating much of the rote coding, developers can spend more time focusing on the user experience, the product’s unique value, and its overall “vibe.”

The Other Side of the Vibe: Challenges and Caveats in the AI Era

Despite its promise, AI-driven vibe coding is not without its significant challenges and concerns:

  • Quality and Reliability of AI-Generated Code: LLMs can still produce code that is subtly flawed, inefficient, insecure, or simply incorrect. Thorough review and testing are paramount.
  • The “Black Box” Problem: Relying heavily on AI-generated code without fully understanding it can lead to maintenance nightmares and difficulty in debugging when things go wrong.
  • Security Vulnerabilities: AI models are trained on vast datasets, which may include insecure code patterns. Generated code could inadvertently introduce vulnerabilities. The “Bad Vibes Only” concern noted in some discussions highlights this risk.
  • Skill Atrophy and the Future of Developer Skills: Over-reliance on AI for core coding tasks could lead to an atrophy of fundamental programming skills. The skill set may shift towards prompt engineering and systems integration.
  • Bias and Homogenization: AI models can perpetuate biases present in their training data, potentially leading to less diverse or innovative solutions if not carefully guided.
  • Intellectual Property and Originality: Questions around the ownership and originality of AI-generated code are still being navigated legally and ethically.
  • Debugging “Vibes”: When the AI consistently misunderstands a complex “vibe” or prompt, debugging the interaction itself can become a new kind of challenge.
  • Not a Silver Bullet: For highly novel, complex, or performance-critical systems, the nuanced understanding and control offered by traditional, human-driven coding remain indispensable. Vibe coding may not be suitable for all types of software development.

Finding the Balance: Integrating Vibes into a Robust Workflow

The rise of AI-driven “vibe coding” doesn’t necessarily mean the end of traditional software development. Instead, it’s more likely to become another powerful tool in the developer’s arsenal. The most effective approaches will likely integrate the strengths of vibe coding—its speed, intuitiveness, and focus on intent—with the rigor, discipline, and deep understanding of established software engineering practices.

Perhaps “vibe coding” will be most potent in the initial phases of development: for brainstorming, rapid prototyping, generating initial structures, and handling common patterns. This AI-generated foundation can then be taken over by developers for refinement, security hardening, performance optimization, and integration into larger, more complex systems, applying critical thinking and deep expertise.

The future isn’t about replacing human developers with AI, but about augmenting them. The “vibe” is the creative human intent, and AI is becoming an increasingly powerful means of translating that vibe into reality. Learning to “vibe” effectively with AI—to communicate intent clearly, critically evaluate AI output, and seamlessly integrate it into robust engineering practices—will likely become a defining skill for the next generation of software creators.

So, as you navigate your coding journey, consider how you can harness this evolving concept. Whether you’re guiding an LLM or simply tapping into your own deep intuition, embracing the “vibe” might just unlock new levels of creativity and productivity. But always remember to pair that vibe with critical thinking and sound engineering judgment.

Promises Made Simple: Understanding Async/Await in JavaScript (Tue, 22 Apr 2025)
https://blogs.perficient.com/2025/04/22/promises-made-simple-understanding-async-await-in-javascript/

JavaScript is single-threaded. That means it runs one task at a time, on one core. But then how does it handle things like API calls, file reads, or user interactions without freezing up?

That’s where Promises and async/await come into play. They help us handle asynchronous operations without blocking the main thread.

Let’s break down these concepts in the simplest way possible so whether you’re a beginner or a seasoned dev, it just clicks.

JavaScript has something called an event loop. It’s always running, checking if there’s work to do—like handling user clicks, network responses, or timers. In the browser, the browser runs it. In Node.js, Node takes care of it.

When an async function runs and hits an await, it pauses that function. It doesn’t block everything—other code keeps running. When the awaited Promise settles, that async function picks up where it left off.

 

What is a Promise?

A Promise is an object that represents the eventual result of an asynchronous operation. A promise can be in one of three states:

  • ✅ Fulfilled – The operation completed successfully.
  • ❌ Rejected – Something went wrong.
  • ⏳ Pending – Still waiting for the result.

Instead of using nested callbacks (aka “callback hell”), Promises allow cleaner, more manageable code using chaining.

 Example:

fetchData()
  .then(data => process(data))
  .then(result => console.log(result))
  .catch(error => console.error(error));

 

Common Promise Methods

Let’s look at the essential Promise utility methods:

  1. Promise.all()

Waits for all promises to resolve. If any promise fails, the whole thing fails.

Promise.all([p1, p2, p3])
  .then(results => console.log(results))
  .catch(error => console.error(error));
  • ✅ Resolves when all succeed.
  • ❌ Rejects fast if any fail.
  2. Promise.allSettled()

Waits for all promises, regardless of success or failure.

Promise.allSettled([p1, p2, p3])
  .then(results => console.log(results));
  • Each result shows { status: "fulfilled", value } or { status: "rejected", reason }.
  • Great when you want all results, even the failed ones.
  3. Promise.race()

Returns as soon as one promise settles (either resolves or rejects).

Promise.race([p1, p2, p3])
  .then(result => console.log('Fastest:', result))
  .catch(error => console.error('First to fail:', error));
  4. Promise.any()

Returns the first fulfilled promise. Ignores rejections unless all fail.

Promise.any([p1, p2, p3])
  .then(result => console.log('First success:', result))
  .catch(error => console.error('All failed:', error));

5. Promise.resolve() / Promise.reject()

  • Promise.resolve(value) creates an already-resolved promise.
  • Promise.reject(reason) creates an already-rejected promise.

Used for quick returns or mocking async behavior.
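
For example, both are handy when you need to return a promise for a value you already have, or when mocking an async call in a quick test:

// Wrap an already-known value in a promise
const cached = Promise.resolve({ id: 1, name: 'Alice' });
cached.then(user => console.log(user.name)); // Alice

// Create a promise that is already rejected
const failing = Promise.reject(new Error('Network down'));
failing.catch(err => console.error(err.message)); // Network down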

 

Why Not Just Use Callbacks?

Before Promises, developers relied on callbacks:

getData(function(response) {
  process(response, function(result) {
    finalize(result);
  });
});

This worked, but it quickly became messy, a situation known as “callback hell.”

 

 What is async/await Really Doing?

Under the hood, async/await is just syntactic sugar over Promises. It makes asynchronous code look synchronous, improving readability and debuggability.

How it works:

  • When you declare a function with async, it always returns a Promise.
  • When you use await inside an async function, the execution of that function pauses at that point.
  • It waits until the Promise is either resolved or rejected.
  • Once resolved, it returns the value.
  • If rejected, it throws the error, which you can catch using try…catch.
async function greet() {
  return 'Hello';
}
greet().then(msg => console.log(msg)); // Hello

Even though you didn’t explicitly return a Promise, greet() returns one.

 

Execution Flow: Synchronous vs Async/Await

Let’s understand how await interacts with the JavaScript event loop.

console.log("1");

setTimeout(() => console.log("2"), 0);

(async function() {
  console.log("3");
  await Promise.resolve();
  console.log("4");
})();

console.log("5");

Output:

1
3
5
4
2

Explanation:

  • The await doesn’t block the main thread.
  • It puts the rest of the async function in the microtask queue, which runs after the current stack and before setTimeout (macrotask).
  • That’s why “4” comes after “5”.

 

 Best Practices with async/await

  1. Use try/catch for Error Handling

Avoid unhandled promise rejections by always wrapping await logic inside a try/catch.

async function getUser() {
  try {
    const res = await fetch('/api/user');
    if (!res.ok) throw new Error('User not found');
    const data = await res.json();
    return data;
  } catch (error) {
    console.error('Error fetching user:', error.message);
    throw error; // rethrow if needed
  }
}
  2. Run Parallel Requests with Promise.all

Don’t await sequentially unless there’s a dependency between the calls.

❌ Bad:

const user = await getUser();
const posts = await getPosts(); // waits for user even if not needed

✅ Better:

const [user, posts] = await Promise.all([getUser(), getPosts()]);
  3. Avoid await in Loops (when possible)

❌ Bad:

//Each iteration waits for the previous one to complete
for (let user of users) {
  await sendEmail(user);
}

✅ Better:

//Run in parallel
await Promise.all(users.map(user => sendEmail(user)));

Common Mistakes

  1. Using await outside async
const data = await fetch(url); // ❌ SyntaxError
  2. Forgetting to handle rejections
    If your async function throws and you don’t .catch() it (or use try/catch), your app may crash in Node or log warnings in the browser.
  3. Blocking unnecessary operations
    Don’t await things that don’t need to be awaited. Only await when the next step depends on the result.
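
As a small illustration of that last point, you can start independent operations right away and await them only when their results are actually needed, so they run concurrently:

// Inside an async function: start both requests immediately
const userPromise = getUser();
const postsPromise = getPosts();

// ...other work can happen here...

// Await only when the values are actually needed
const user = await userPromise;
const posts = await postsPromise;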

 

Real-World Example: Chained Async Workflow

Imagine a system where:

  • You authenticate a user,
  • Then fetch their profile,
  • Then load related dashboard data.

Using async/await:

async function initDashboard() {
  try {
    const token = await login(username, password);
    const profile = await fetchProfile(token);
    const dashboard = await fetchDashboard(profile.id);
    renderDashboard(dashboard);
  } catch (err) {
    console.error('Error loading dashboard:', err);
    showErrorScreen();
  }
}

Much easier to follow than chained .then() calls, right?

 

Converting Promise Chains to Async/Await

Old way:

login()
  .then(token => fetchUser(token))
  .then(user => showProfile(user))
  .catch(error => showError(error));

With async/await:

async function start() {
  try {
    const token = await login();
    const user = await fetchUser(token);
    showProfile(user);
  } catch (error) {
    showError(error);
  }
}

Cleaner. Clearer. Less nested. Easier to debug.

 

Bonus utility wrapper for Error Handling

If you hate repeating try/catch, use a helper:

const to = promise => promise.then(res => [null, res]).catch(err => [err]);

async function loadData() {
  const [err, data] = await to(fetchData());
  if (err) return console.error(err);
  console.log(data);
}

 

Final Thoughts

Both Promises and async/await are powerful tools for handling asynchronous code. Promises came first and are still widely used, especially in libraries. async/await is now the preferred style in most modern JavaScript apps because it makes the code cleaner and easier to understand.

 

Tip: You don’t have to choose one forever — they work together! In fact, async/await is built on top of Promises.

 

Scoping, Hoisting and Temporal Dead Zone in JavaScript (Thu, 17 Apr 2025)
https://blogs.perficient.com/2025/04/17/scoping-hoisting-and-temporal-dead-zone-in-javascript/

Before mastering JavaScript, it’s crucial to understand how it thinks behind the scenes. Concepts like scope, hoisting, and the temporal dead zone might sound tricky at first, but they form the backbone of how your code behaves.
In this blog, we’ll break down these key ideas in the simplest way possible—so you can write cleaner code, avoid unexpected bugs, and truly understand what’s happening when your script runs.

What is Scope in JavaScript?

Think of scope like a boundary or container that controls where you can use a variable in your code.

In JavaScript, a variable is only available in the part of the code where it was created. If you try to use it outside that area, it won’t work—that’s because of scope.

This helps in two big ways:

  • Keeps your code safe – Only the right parts of the code can access the variable.
  • Avoids name clashes – You can use the same variable name in different places without them interfering with each other.

JavaScript mainly uses two types of scope:

1. Global Scope – Available everywhere in your code.

2. Local Scope – Available only inside a specific function or block.

 

Global Scope

When you start writing JavaScript code, you’re already in the global scope—this is like the outermost area of your code where variables can live.

If you create a variable outside of any function or block, it’s considered global, which means it can be used anywhere in your code.

var a = 5; // Global variable
function add() {
  return a + 10; // Using the global variable inside a function
}
console.log(window.a); // 5

In this example, a is declared outside of any function, so it’s globally available—even inside add().

A quick note:

  • If you declare a variable with var, it becomes a property of the window object in browsers.
  • But if you use let or const, the variable is still global, but not attached to window.
let name = "xyz";
function changeName() {
  name = "abc";  // Changing the value of the global variable
}
changeName();
console.log(name); // abc

In this example, we didn’t create a new variable—we just changed the value of the existing one.

👉 Important:
If you redeclare a global variable inside a function (using let, const, or var again), JavaScript treats it as a new variable in a new scope—not the same one. We’ll cover that in more detail later.

 

 Local Scope

In JavaScript, local scope means a variable is only accessible in a certain part of the code—usually inside a function or a block.

There are two types of local scope:

1. Functional Scope

Whenever you create a function, it creates its own private area for variables. If you declare a variable inside a function, it only exists inside that function.

let firstName = "Shilpa"; // Global
function changeName() {
  let lastName = "Syal"; // Local to this function
console.log (`${firstName} ${lastName}`);
}
changeName();
console.log (lastName); // ❌ Error! Not available outside the function

You can even use the same variable name in different functions without any issue:

function mathMarks() {
  let marks = 80;
  console.log (marks);
}
function englishMarks() {
  let marks = 85;
  console.log (marks);
}

Here, both marks variables are separate because they live in different function scopes.

 

2. Block Scope

Thanks to let and const, you can now create variables that only exist inside a block (like an if, for, or {}).

 

function getMarks() {
  let marks = 60;
  if (marks > 50) {
    const points = 10;
    console.log (marks + points); // ✅ Works here
  }
  console.log (points); // ❌ Uncaught Reference Error: points is not defined
}

Because the points variable is declared inside the if block with the const keyword, it is not accessible outside that block, as shown above. Now try the same example using var: declare the points variable with var and spot the difference.

LEXICAL SCOPING & NESTED SCOPE:

When you create a function (outer function) that contains another function (inner function), then the inner function has access to the outer function’s variables and methods. This is known as Lexical Scoping.

function outerFunction() {
  let outerVar = "I’m outside";
  function innerFunction() {
      console.log (outerVar); // ✅ Can access outerVar
  }
  innerFunction();
}

In other terms, variables & methods defined in parent function are automatically available to its child functions. But it doesn’t work the other way around—the outer function can’t access the inner function’s variables.
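
To see that one-way direction in a quick sketch, note that the outer function cannot read a variable declared inside the inner function:

function outerFunction() {
  function innerFunction() {
    let innerVar = "I'm inside";
  }
  innerFunction();
  console.log(innerVar); // ❌ ReferenceError: innerVar is not defined
}
outerFunction();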

 

VARIABLE SCOPE OR VARIABLE SHADOWING:

You can declare variables with the same name in different scopes. If there’s a variable in the global scope and you create a variable with the same name inside a function, you will not get an error. In this case, the local variable takes priority over the global one. This is known as variable shadowing, because the inner-scope variable temporarily shadows the outer-scope variable with the same name.

If the local variable and the global variable have the same name, changing the value of one does not affect the value of the other.

let name = "xyz";
function getName() {
  let name = "abc"; // Redeclaring the name variable
  console.log(name); // abc
}
getName();
console.log(name); // xyz

To access a variable, the JS engine first looks in the scope that is currently executing. If the variable isn’t found there, it looks in the closest parent scope, and that lookup continues up the chain until the engine reaches the global scope. If the global scope doesn’t have the variable either, a ReferenceError is thrown, because the variable doesn’t exist anywhere in the scope chain.

let bonus = 500;
function getSalary() {
  if (true) {
    return 10000 + bonus; // Looks up and finds bonus in the outer scope
  }
}
console.log(getSalary()); // 10500

 

Key Takeaways: Scoping Made Simple

Global Scope: Variables declared outside any function are global and can be used anywhere in your code.

Local Scope: Variables declared inside a function exist only inside that function and disappear once the function finishes.

Global Variables Last Longer: They stay alive as long as your program is running.

Local Variables Are Temporary: They’re created when the function runs and removed once it ends.

Lexical Scope: Inner functions can access variables from outer functions, but not the other way around.

Block Scope with let and const: You can create variables that exist only inside {} blocks like if, for, etc.

Same Name, No Clash: Variables with the same name in different scopes won’t affect each other—they live in separate “worlds.” 

Hoisting

To understand Hoisting in JS, it’s essential to know how execution context works. Execution context is an environment where JavaScript code is executed.

It has two main phases:

1. Creation Phase: During this phase, the JS engine allocates memory for (hoists) variables, functions and objects. Basically, hoisting happens here.

2. Execution Phase: During this phase, code is executed line by line.

When JavaScript code runs, the engine first hoists variables and functions: function declarations are hoisted with their full definitions, while variables declared with var are allocated memory and initialized with the special value undefined.

 

Key takeaways from hoisting, with some examples to illustrate how it works in different scenarios:

  1. Functions – Function declarations are fully hoisted, so they can be invoked before their declaration in the code.
foo (); // Output: "Hello, world!"
 function foo () {
     console.log ("Hello, world!");
 }
  2. var – Variables declared with var are hoisted to the top of their scope but initialized with undefined, so they are accessible before their declaration (with the value undefined).
console.log (x); // Output: undefined
 var x = 5;

This code seems straightforward, but it’s interpreted as:

var x;
console.log (x); // Output: undefined
 x = 5;

3. let, const – Variables declared with let and const are also hoisted (to block or script scope), but they are not initialized and stay in the Temporal Dead Zone (TDZ) until their declaration is encountered. Accessing them while in the TDZ results in a ReferenceError.

console.log (x); // Throws Reference Error: Cannot access 'x' before initialization
 let x = 5;


What is Temporal Dead Zone (TDZ)?

In JavaScript, all variable declarations—whether made using var, let, or const—are hoisted, meaning the memory for them is set aside during the compilation phase, before the code actually runs. However, the behaviour of hoisting differs based on how the variable is declared.

For variables declared with let and const, although they are hoisted, they are not initialized immediately like var variables. Instead, they remain in an uninitialized state and are placed in a separate memory space. During this phase, any attempt to access them will result in a Reference Error.

This period—from the start of the block until the variable is initialized—is known as the Temporal Dead Zone (TDZ). It’s called a “dead zone” because the variable exists in memory but cannot be accessed until it has been explicitly declared and assigned a value in the code.

console.log(a); // ReferenceError: Cannot access 'a' before initialization (a is in the TDZ)
console.log(b); // undefined (var is hoisted and initialized with undefined)
let a = 10;
var b = 100;

👉 Important: The Temporal Dead Zone helps prevent the use of variables before they are properly declared and initialized, making code more predictable and reducing bugs.

 

🧾 Conclusion

JavaScript hoisting and scoping are foundational concepts that can feel tricky at first, but once you understand them, they make your code more structured and predictable. Hoisting helps explain why some variables and functions work even before they’re declared, while scoping defines where your variables live and how accessible they are. By keeping these concepts in mind and practicing regularly, you’ll be able to write cleaner, more reliable JavaScript. The more you experiment with them, the more confident you’ll become as a developer. Keep learning, keep building, and everything will start to click. Happy coding! 🙌

 

 

⚡ PERFATHON 2025 – Hackathon at Perficient 👩‍💻 (Tue, 15 Apr 2025)
https://blogs.perficient.com/2025/04/15/perfathon-2025-the-hackathon-at-perficient/

April 10–11, 2025, marked an exciting milestone for Perficient India as we hosted our hackathon – Perfathon 2025. Held at our Bangalore office, this thrilling, high-energy event ran non-stop from 12 PM on April 10 to 4 PM on April 11, bringing together 6 enthusiastic teams, creative minds, and some truly impactful ideas.


Setting the Stage

The excitement wasn’t just limited to the two days — the buzz began a week in advance, with teasers and prep that got everyone curious and pumped. The organizing team went all out to set the vibe right from the moment we stepped in — from vibrant decoration and  music to cool Perfathon hoodies and high spirits all around.


Our General Manager, Sumantra Nandi, kicked off the event with inspiring words and warm introductions to the teams, setting the tone for what would be a fierce, friendly, and collaborative code fest.

Meet the Gladiators

Six teams, each with 3–5 members, jumped into this coding battleground:

  • Bro Code

  • Code Red

  • Ctrl Alt Defeat

  • Code Wizards

  • The Tech Titans

  • Black Pearl

Each team was given the freedom to either pick from a curated list of internal problem statements or come up with their own. Some of the challenge themes included Internal Idea & Innovation Hub, Skills & Project Matchmaker, and Ready to Integrate AI Package. The open-ended format allowed teams to think outside the box, pick what resonated with them, and own the solution-building process.


 Let the Hacking Begin!

Using a chit system, teams were randomly assigned dedicated spaces to work from, and the presentation order was decided — adding an element of surprise and fun!

Day 1 saw intense brainstorming, constant collaboration, design sprints, and non-stop coding. Teams powered through challenges, pivoted when needed, and showcased problem-solving spirit.

Evaluation with Impact

Everyone presented their solutions to our esteemed judges, who evaluated them across several crucial dimensions: tech stack used, task distribution among team members, solution complexity, optimization and relevance, future scope and real-world impact, scalability and deployment plans, UI design, AI components, and more.

The judging wasn’t just about scoring — it was about constructive insights. Judges offered thought-provoking feedback and suggestions, pushing teams to reflect more deeply on their solutions and discover new layers of improvement. A heartfelt thank you to each judge for their valuable time and perspectives.

This marked the official beginning of the code battle — from here on, it was about execution, collaboration, and pushing through to build something meaningful.


Time to Shine (Day 2)

As Day 2 commenced, the teams picked up right where they left off — crushing it with creativity and clean code. The GitHub repository was set up by the organizing team, allowing all code commits and pushes to be tracked live right from the start of the event. The Final Showdown kicked off around 4 PM on April 11, with the spotlight on each team to demo their working prototypes.

A team representative collected chits to decide the final presentation order. In the audience this time were not just internal leaders, but also a special client guest, Sravan Vashista (IT CX Director and IT Country GM, Keysight Technologies), and our GM Sumantra Nandi, adding more weight to the final judgment.

Each team presented with full energy, integrated judge and audience feedback, and answered queries with clarity and confidence. The tension was real, and the performances were exceptional.

 And the Winners Are…

Before the grand prize distribution, our guest speaker, Sravan Vashista delivered an insightful and encouraging address. He applauded the energy in the room, appreciated the quality of solutions, and emphasized the importance of owning challenges and solving from within. The prize distribution was a celebration in itself — beaming faces, loud cheers, proud smiles, and a sense of fulfillment that only comes from doing something truly impactful.

After two action-packed days of code, creativity, and collaboration , it was finally time to crown our champions.

🥇 Code Red emerged victorious as the Perfathon 2025 Champions, thanks to their standout performance, technical depth, clear problem-solving approach, and powerful teamwork.

🥈 Code Wizards claimed the First Runners-Up spot with their solution and thoughtful execution.

🥉 Black Pearl took home the Second Runners-Up title, impressing everyone with their strong team synergy.

Each team received trophies and appreciation, but more importantly, they took home the experience of being real solution creators.


🙌 Thank You, Team Perfathon!

A massive shoutout to our organizers, volunteers, and judges who made Perfathon a reality. Huge thanks to our leadership and HR team for their continuous support and encouragement, and to every participant who made the event what it was — memorable, meaningful, and magical.


We’re already looking forward to Perfathon 2026. Until then, let’s keep the hacker spirit alive and continue being the solution-makers our organization needs.

Convert a Text File from UTF-8 Encoding to ANSI using Python in AWS Glue (Mon, 14 Apr 2025)
https://blogs.perficient.com/2025/04/14/convert-a-text-file-from-utf-8-encoding-to-ansi-using-python-in-aws-glue/

To convert a text file from UTF-8 encoded data to ANSI using AWS Glue, you will typically work with Python or PySpark. However, it’s important to understand that ANSI is not a specific encoding but often refers to Windows-1252 (or similar 8-bit encodings) in a Windows context.

AWS Glue, running on Apache Spark, uses UTF-8 as the default encoding. Converting to ANSI requires handling the character encoding during the writing phase, because Spark itself doesn’t support writing files in encodings other than UTF-8 natively. But there are a few workarounds.

Here’s a step-by-step guide to converting a text file from UTF-8 to ANSI using Python in AWS Glue. Assume you’re working with a plain text file and want to output a similarly formatted file in ANSI encoding.

General Process Flow

Read the UTF-8 encoded file from S3, re-encode its contents as Windows-1252 (ANSI), and upload the converted file back to S3.
Technical Approach Step-By-Step Guide

Step 1: Add the import statements to the code

import boto3
import codecs

Step 2: Specify the source/target file paths & S3 bucket details

# Initialize S3 client
s3_client = boto3.client('s3')
s3_key_utf8 = 'utf8_file_path/filename.txt'
s3_key_ansi = 'ansi_file_path/filename.txt'

# Specify S3 bucket and file paths
bucket_name = 'your-s3-bucket-name'  # Replace with your S3 bucket name
input_key = s3_key_utf8   # S3 path/name of the input UTF-8 encoded file
output_key = s3_key_ansi  # S3 path/name to save the ANSI encoded file

Step 3: Write a function to convert the text file from UTF-8 to ANSI, based on the parameters supplied (S3 bucket name, source-file, target-file)

# Function to convert UTF-8 file to ANSI (Windows-1252) and upload back to S3
def convert_utf8_to_ansi(bucket_name, input_key, output_key):
    # Download the UTF-8 encoded file from S3
    response = s3_client.get_object(Bucket=bucket_name, Key=input_key)
    # Read the file content from the response body (UTF-8 encoded)
    utf8_content = response['Body'].read().decode('utf-8')
    # Convert the content to ANSI encoding (Windows-1252)
    ansi_content = utf8_content.encode('windows-1252', 'ignore')  # 'ignore' to handle invalid characters
    # Upload the converted file to S3 (in ANSI encoding)
    s3_client.put_object(Bucket=bucket_name, Key=output_key, Body=ansi_content) 

Step 4: Call the function that converts the text file from UTF-8 to ANSI

# Call the function to convert the file 
convert_utf8_to_ansi(bucket_name, input_key, output_key) 
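
If you prefer not to hard-code the bucket and keys, one possible variation (an assumption, not part of the original post) is to pass them in as Glue job parameters and read them with getResolvedOptions; the parameter names below are made up for illustration:

import sys
from awsglue.utils import getResolvedOptions

# Hypothetical job parameters: --source_bucket, --input_key, --output_key
args = getResolvedOptions(sys.argv, ['source_bucket', 'input_key', 'output_key'])

convert_utf8_to_ansi(args['source_bucket'], args['input_key'], args['output_key'])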

 

Android Development Codelab: Mastering Advanced Concepts (Thu, 10 Apr 2025)
https://blogs.perficient.com/2025/04/10/android-development-codelab-mastering-advanced-concepts/

 

This guide will walk you through building a small application step-by-step, focusing on integrating several powerful tools and concepts essential for modern Android development.

What We’ll Cover:

  • Jetpack Compose: Building the UI declaratively.
  • NoSQL Database (Firestore): Storing and retrieving data in the cloud.
  • WorkManager: Running reliable background tasks.
  • Build Flavors: Creating different versions of the app (e.g., dev vs. prod).
  • Proguard/R8: Shrinking and obfuscating your code for release.
  • Firebase App Distribution: Distributing test builds easily.
  • CI/CD (GitHub Actions): Automating the build and distribution process.

The Goal: Build a “Task Reporter” app. Users can add simple task descriptions. These tasks are saved to Firestore. A background worker will periodically “report” (log a message or update a counter in Firestore) that the app is active. We’ll have dev and prod flavors pointing to different Firestore collections/data and distribute the dev build for testing.

Prerequisites:

  • Android Studio (latest stable version recommended).
  • Basic understanding of Kotlin and Android development fundamentals.
  • Familiarity with Jetpack Compose basics (Composable functions, State).
  • A Google account to use Firebase.
  • A GitHub account (for CI/CD).

Let’s get started!


Step 0: Project Setup

  1. Create New Project: Open Android Studio -> New Project -> Empty Activity (choose Compose).
  2. Name: AdvancedConceptsApp (or your choice).
  3. Package Name: Your preferred package name (e.g., com.yourcompany.advancedconceptsapp).
  4. Language: Kotlin.
  5. Minimum SDK: API 24 or higher.
  6. Build Configuration Language: Kotlin DSL (build.gradle.kts).
  7. Click Finish.

Step 1: Firebase Integration (Firestore & App Distribution)

  1. Connect to Firebase: In Android Studio: Tools -> Firebase.
    • In the Assistant panel, find Firestore. Click “Get Started with Cloud Firestore”. Click “Connect to Firebase”. Follow the prompts to create a new Firebase project or connect to an existing one.
    • Click “Add Cloud Firestore to your app”. Accept changes to your build.gradle.kts (or build.gradle) files. This adds the necessary dependencies.
    • Go back to the Firebase Assistant, find App Distribution. Click “Get Started”. Add the App Distribution Gradle plugin by clicking the button. Accept changes.
  2. Enable Services in Firebase Console:
    • Go to the Firebase Console and select your project.
    • Enable Firestore Database (start in Test mode).
    • In the left menu, go to Build -> Firestore Database. Click “Create database”.
      • Start in Test mode for easier initial development (we’ll secure it later if needed). Choose a location close to your users. Click “Enable”.
    • Ensure App Distribution is accessible (no setup needed here yet).
  3. Download Initial google-services.json:
    • In Firebase Console -> Project Settings (gear icon) -> Your apps.
    • Ensure your Android app (using the base package name like com.yourcompany.advancedconceptsapp) is registered. If not, add it.
    • Download the google-services.json file.
    • Switch Android Studio to the Project view and place the file inside the app/ directory.
    • Note: We will likely replace this file in Step 4 after configuring build flavors.

Step 2: Building the Basic UI with Compose

Let’s create a simple UI to add and display tasks.

  1. Dependencies: Ensure necessary dependencies for Compose, ViewModel, Firestore, and WorkManager are in app/build.gradle.kts.
    app/build.gradle.kts

    
    dependencies {
        // Core & Lifecycle & Activity
        implementation("androidx.core:core-ktx:1.13.1") // Use latest versions
        implementation("androidx.lifecycle:lifecycle-runtime-ktx:2.8.1")
        implementation("androidx.activity:activity-compose:1.9.0")
        // Compose
        implementation(platform("androidx.compose:compose-bom:2024.04.01")) // Check latest BOM
        implementation("androidx.compose.ui:ui")
        implementation("androidx.compose.ui:ui-graphics")
        implementation("androidx.compose.ui:ui-tooling-preview")
        implementation("androidx.compose.material3:material3")
        implementation("androidx.lifecycle:lifecycle-viewmodel-compose:2.8.1")
        // Firebase
        implementation(platform("com.google.firebase:firebase-bom:33.0.0")) // Check latest BOM
        implementation("com.google.firebase:firebase-firestore-ktx")
        // WorkManager
        implementation("androidx.work:work-runtime-ktx:2.9.0") // Check latest version
    }
                    

    Sync Gradle files.

  2. Task Data Class: Create data/Task.kt.
    data/Task.kt

    
    package com.yourcompany.advancedconceptsapp.data
    
    import com.google.firebase.firestore.DocumentId
    
    data class Task(
        @DocumentId
        val id: String = "",
        val description: String = "",
        val timestamp: Long = System.currentTimeMillis()
    ) {
        constructor() : this("", "", 0L) // Firestore requires a no-arg constructor
    }
                    
  3. ViewModel: Create ui/TaskViewModel.kt. (We’ll update the collection name later).
    ui/TaskViewModel.kt

    
    package com.yourcompany.advancedconceptsapp.ui
    
    import androidx.lifecycle.ViewModel
    import androidx.lifecycle.viewModelScope
    import com.google.firebase.firestore.ktx.firestore
    import com.google.firebase.firestore.ktx.toObjects
    import com.google.firebase.ktx.Firebase
    import com.yourcompany.advancedconceptsapp.data.Task
    // Import BuildConfig later when needed
    import kotlinx.coroutines.flow.MutableStateFlow
    import kotlinx.coroutines.flow.StateFlow
    import kotlinx.coroutines.launch
    import kotlinx.coroutines.tasks.await
    
    // Temporary placeholder - will be replaced by BuildConfig field
    const val TEMPORARY_TASKS_COLLECTION = "tasks"
    
    class TaskViewModel : ViewModel() {
        private val db = Firebase.firestore
        // Use temporary constant for now
        private val tasksCollection = db.collection(TEMPORARY_TASKS_COLLECTION)
    
        private val _tasks = MutableStateFlow<List<Task>>(emptyList())
        val tasks: StateFlow<List<Task>> = _tasks
    
        private val _error = MutableStateFlow<String?>(null)
        val error: StateFlow<String?> = _error
    
        init {
            loadTasks()
        }
    
        fun loadTasks() {
            viewModelScope.launch {
                try {
                     tasksCollection.orderBy("timestamp", com.google.firebase.firestore.Query.Direction.DESCENDING)
                        .addSnapshotListener { snapshots, e ->
                            if (e != null) {
                                _error.value = "Error listening: ${e.localizedMessage}"
                                return@addSnapshotListener
                            }
                            _tasks.value = snapshots?.toObjects<Task>() ?: emptyList()
                            _error.value = null
                        }
                } catch (e: Exception) {
                    _error.value = "Error loading: ${e.localizedMessage}"
                }
            }
        }
    
         fun addTask(description: String) {
            if (description.isBlank()) {
                _error.value = "Task description cannot be empty."
                return
            }
            viewModelScope.launch {
                 try {
                     val task = Task(description = description, timestamp = System.currentTimeMillis())
                     tasksCollection.add(task).await()
                     _error.value = null
                 } catch (e: Exception) {
                    _error.value = "Error adding: ${e.localizedMessage}"
                }
            }
        }
    }
                    
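    One refinement worth considering: addSnapshotListener returns a ListenerRegistration that the code above never removes, so the Firestore listener outlives the ViewModel. A possible sketch of holding the registration and clearing it in onCleared() (names mirror the class above):

    // Inside TaskViewModel (sketch)
    private var tasksListener: com.google.firebase.firestore.ListenerRegistration? = null

    fun loadTasks() {
        tasksListener?.remove() // avoid stacking listeners if loadTasks() is called again
        tasksListener = tasksCollection
            .orderBy("timestamp", com.google.firebase.firestore.Query.Direction.DESCENDING)
            .addSnapshotListener { snapshots, e ->
                if (e != null) {
                    _error.value = "Error listening: ${e.localizedMessage}"
                    return@addSnapshotListener
                }
                _tasks.value = snapshots?.toObjects<Task>() ?: emptyList()
                _error.value = null
            }
    }

    override fun onCleared() {
        tasksListener?.remove() // stop receiving Firestore updates when the ViewModel is destroyed
        super.onCleared()
    }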
  4. Main Screen Composable: Create ui/TaskScreen.kt.
    ui/TaskScreen.kt

    
    package com.yourcompany.advancedconceptsapp.ui
    
    // Imports: androidx.compose.*, androidx.lifecycle.viewmodel.compose.viewModel, java.text.SimpleDateFormat, etc.
    import androidx.compose.foundation.layout.*
    import androidx.compose.foundation.lazy.LazyColumn
    import androidx.compose.foundation.lazy.items
    import androidx.compose.material3.*
    import androidx.compose.runtime.*
    import androidx.compose.ui.Alignment
    import androidx.compose.ui.Modifier
    import androidx.compose.ui.unit.dp
    import androidx.lifecycle.viewmodel.compose.viewModel
    import com.yourcompany.advancedconceptsapp.data.Task
    import java.text.SimpleDateFormat
    import java.util.Date
    import java.util.Locale
    import androidx.compose.ui.res.stringResource
    import com.yourcompany.advancedconceptsapp.R // Import R class
    
    @OptIn(ExperimentalMaterial3Api::class) // For TopAppBar
    @Composable
    fun TaskScreen(taskViewModel: TaskViewModel = viewModel()) {
        val tasks by taskViewModel.tasks.collectAsState()
        val errorMessage by taskViewModel.error.collectAsState()
        var taskDescription by remember { mutableStateOf("") }
    
        Scaffold(
            topBar = {
                TopAppBar(title = { Text(stringResource(id = R.string.app_name)) }) // Use resource for flavor changes
            }
        ) { paddingValues ->
            Column(modifier = Modifier.padding(paddingValues).padding(16.dp).fillMaxSize()) {
                // Input Row
                Row(verticalAlignment = Alignment.CenterVertically, modifier = Modifier.fillMaxWidth()) {
                    OutlinedTextField(
                        value = taskDescription,
                        onValueChange = { taskDescription = it },
                        label = { Text("New Task Description") },
                        modifier = Modifier.weight(1f),
                        singleLine = true
                    )
                    Spacer(modifier = Modifier.width(8.dp))
                    Button(onClick = {
                        taskViewModel.addTask(taskDescription)
                        taskDescription = ""
                    }) { Text("Add") }
                }
                Spacer(modifier = Modifier.height(16.dp))
                // Error Message
                errorMessage?.let { Text(it, color = MaterialTheme.colorScheme.error, modifier = Modifier.padding(bottom = 8.dp)) }
                // Task List
                if (tasks.isEmpty() && errorMessage == null) {
                    Text("No tasks yet. Add one!")
                } else {
                    LazyColumn(modifier = Modifier.weight(1f)) {
                        items(tasks, key = { it.id }) { task ->
                            TaskItem(task)
                            Divider()
                        }
                    }
                }
            }
        }
    }
    
    @Composable
    fun TaskItem(task: Task) {
        val dateFormat = remember { SimpleDateFormat("yyyy-MM-dd HH:mm", Locale.getDefault()) }
        Row(modifier = Modifier.fillMaxWidth().padding(vertical = 8.dp), verticalAlignment = Alignment.CenterVertically) {
            Column(modifier = Modifier.weight(1f)) {
                Text(task.description, style = MaterialTheme.typography.bodyLarge)
                Text("Added: ${dateFormat.format(Date(task.timestamp))}", style = MaterialTheme.typography.bodySmall)
            }
        }
    }
                    
  5. Update MainActivity.kt: Set the content to TaskScreen.
    MainActivity.kt

    
    package com.yourcompany.advancedconceptsapp
    
    import android.os.Bundle
    import androidx.activity.ComponentActivity
    import androidx.activity.compose.setContent
    import androidx.compose.foundation.layout.fillMaxSize
    import androidx.compose.material3.MaterialTheme
    import androidx.compose.material3.Surface
    import androidx.compose.ui.Modifier
    import com.yourcompany.advancedconceptsapp.ui.TaskScreen
    import com.yourcompany.advancedconceptsapp.ui.theme.AdvancedConceptsAppTheme
    // Imports for WorkManager scheduling will be added in Step 3
    
    class MainActivity : ComponentActivity() {
        override fun onCreate(savedInstanceState: Bundle?) {
            super.onCreate(savedInstanceState)
            setContent {
                AdvancedConceptsAppTheme {
                    Surface(modifier = Modifier.fillMaxSize(), color = MaterialTheme.colorScheme.background) {
                        TaskScreen()
                    }
                }
            }
            // TODO: Schedule WorkManager job in Step 3
        }
    }
                    
  6. Run the App: Test basic functionality. Tasks should appear and persist in Firestore’s `tasks` collection (initially).

Step 3: WorkManager Implementation

Create a background worker for periodic reporting.

  1. Create the Worker: Create worker/ReportingWorker.kt. (Collection name will be updated later).
    worker/ReportingWorker.kt

    
    package com.yourcompany.advancedconceptsapp.worker
    
    import android.content.Context
    import android.util.Log
    import androidx.work.CoroutineWorker
    import androidx.work.WorkerParameters
    import com.google.firebase.firestore.ktx.firestore
    import com.google.firebase.ktx.Firebase
    // Import BuildConfig later when needed
    import kotlinx.coroutines.tasks.await
    
    // Temporary placeholder - will be replaced by BuildConfig field
    const val TEMPORARY_USAGE_LOG_COLLECTION = "usage_logs"
    
    class ReportingWorker(appContext: Context, workerParams: WorkerParameters) :
        CoroutineWorker(appContext, workerParams) {
    
        companion object { const val TAG = "ReportingWorker" }
        private val db = Firebase.firestore
    
        override suspend fun doWork(): Result {
            Log.d(TAG, "Worker started: Reporting usage.")
            return try {
                val logEntry = hashMapOf(
                    "timestamp" to System.currentTimeMillis(),
                    "message" to "App usage report.",
                    "worker_run_id" to id.toString()
                )
                // Use temporary constant for now
                db.collection(TEMPORARY_USAGE_LOG_COLLECTION).add(logEntry).await()
                Log.d(TAG, "Worker finished successfully.")
                Result.success()
            } catch (e: Exception) {
                Log.e(TAG, "Worker failed", e)
                Result.failure()
            }
        }
    }
                    
  2. Schedule the Worker: Update MainActivity.kt‘s onCreate method.
    MainActivity.kt additions

    
    // Add these imports to MainActivity.kt
    import android.content.Context
    import android.util.Log
    import androidx.work.*
    import com.yourcompany.advancedconceptsapp.worker.ReportingWorker
    import java.util.concurrent.TimeUnit
    
    // Inside MainActivity class, after setContent { ... } block in onCreate
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContent {
            // ... existing code ...
        }
        // Schedule the worker
        schedulePeriodicUsageReport(this)
    }
    
    // Add this function to MainActivity class
    private fun schedulePeriodicUsageReport(context: Context) {
        val constraints = Constraints.Builder()
            .setRequiredNetworkType(NetworkType.CONNECTED)
            .build()
    
        val reportingWorkRequest = PeriodicWorkRequestBuilder<ReportingWorker>(
                1, TimeUnit.HOURS // ~ every hour
             )
            .setConstraints(constraints)
            .addTag(ReportingWorker.TAG)
            .build()
    
        WorkManager.getInstance(context).enqueueUniquePeriodicWork(
            ReportingWorker.TAG,
            ExistingPeriodicWorkPolicy.KEEP,
            reportingWorkRequest
        )
        Log.d("MainActivity", "Periodic reporting work scheduled.")
    }
                    
  3. Test WorkManager:
    • Run the app. Check Logcat for messages from ReportingWorker and MainActivity about scheduling.
    • WorkManager tasks don’t run immediately, especially periodic ones. You can use ADB commands to force execution for testing:
      • Find your package name: com.yourcompany.advancedconceptsapp
      • Force run jobs: adb shell cmd jobscheduler run -f com.yourcompany.advancedconceptsapp 999 (replace 999 with the job ID WorkManager registered for the work; adb shell dumpsys jobscheduler lists the scheduled jobs and their IDs).
      • Or use Android Studio’s App Inspection tab -> Background Task Inspector to view and trigger workers. (A third option, running the worker synchronously from an instrumentation test, is sketched after this list.)
    • Check your Firestore Console for the usage_logs collection.
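    If you prefer the instrumentation-test route mentioned above, WorkManager’s testing artifact can run the worker synchronously. A minimal sketch; it assumes androidx.work:work-testing (plus the usual androidx.test dependencies) is added under androidTestImplementation, and note that doWork() performs a real Firestore write from the test device:

    package com.yourcompany.advancedconceptsapp.worker
    
    import android.content.Context
    import androidx.test.core.app.ApplicationProvider
    import androidx.test.ext.junit.runners.AndroidJUnit4
    import androidx.work.ListenableWorker
    import androidx.work.testing.TestListenableWorkerBuilder
    import kotlinx.coroutines.runBlocking
    import org.junit.Assert.assertEquals
    import org.junit.Test
    import org.junit.runner.RunWith
    
    @RunWith(AndroidJUnit4::class)
    class ReportingWorkerTest {
    
        @Test
        fun reportingWorker_returnsSuccess() {
            val context = ApplicationProvider.getApplicationContext<Context>()
            // Build the worker directly, bypassing WorkManager's scheduler.
            val worker = TestListenableWorkerBuilder<ReportingWorker>(context).build()
            // doWork() is a suspend function, so run it from a blocking coroutine.
            val result = runBlocking { worker.doWork() }
            assertEquals(ListenableWorker.Result.success(), result)
        }
    }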

Step 4: Build Flavors (dev vs. prod)

Create dev and prod flavors for different environments.

  1. Configure app/build.gradle.kts:
    app/build.gradle.kts

    
    android {
        // ... namespace, compileSdk, defaultConfig ...
    
        // ****** Enable BuildConfig generation ******
        buildFeatures {
            buildConfig = true
        }
        // *******************************************
    
        flavorDimensions += "environment"
    
        productFlavors {
            create("dev") {
                dimension = "environment"
                applicationIdSuffix = ".dev" // CRITICAL: Changes package name for dev builds
                versionNameSuffix = "-dev"
                resValue("string", "app_name", "Task Reporter (Dev)")
                buildConfigField("String", "TASKS_COLLECTION", "\"tasks_dev\"")
                buildConfigField("String", "USAGE_LOG_COLLECTION", "\"usage_logs_dev\"")
            }
            create("prod") {
                dimension = "environment"
                resValue("string", "app_name", "Task Reporter")
                buildConfigField("String", "TASKS_COLLECTION", "\"tasks\"")
                buildConfigField("String", "USAGE_LOG_COLLECTION", "\"usage_logs\"")
            }
        }
    
        // ... buildTypes, compileOptions, etc ...
    }
                    

    Sync Gradle files.

    Important: We added applicationIdSuffix = ".dev". This means the actual package name for your development builds will become something like com.yourcompany.advancedconceptsapp.dev. This requires an update to your Firebase project setup, explained next. Also note the buildFeatures { buildConfig = true } block which is required to use buildConfigField.
  2. Handling Firebase for Suffixed Application IDs

    Because the `dev` flavor now has a different application ID (`…advancedconceptsapp.dev`), the original `google-services.json` file (downloaded in Step 1) will not work for `dev` builds, causing a “No matching client found” error during build.

    You must add this new Application ID to your Firebase project:

    1. Go to Firebase Console: Open your project settings (gear icon).
    2. Your apps: Scroll down to the “Your apps” card.
    3. Add app: Click “Add app” and select the Android icon.
    4. Register dev app:
      • Package name: Enter the exact suffixed ID: com.yourcompany.advancedconceptsapp.dev (replace `com.yourcompany.advancedconceptsapp` with your actual base package name).
      • Nickname (Optional): “Task Reporter Dev”.
      • SHA-1 (Optional but Recommended): Add the debug SHA-1 key from `./gradlew signingReport`.
    5. Register and Download: Click “Register app”. Crucially, download the new google-services.json file offered. This file now contains configurations for BOTH your base ID and the `.dev` suffixed ID.
    6. Replace File: In Android Studio (Project view), delete the old google-services.json from the app/ directory and replace it with the **newly downloaded** one.
    7. Skip SDK steps: You can skip the remaining steps in the Firebase console for adding the SDK.
    8. Clean & Rebuild: Back in Android Studio, perform a Build -> Clean Project and then Build -> Rebuild Project.
    Now your project is correctly configured in Firebase for both `dev` (with the `.dev` suffix) and `prod` (base package name) variants using a single `google-services.json`.
  3. Create Flavor-Specific Source Sets:
    • Switch to Project view in Android Studio.
    • Right-click on app/src -> New -> Directory. Name it dev.
    • Inside dev, create res/values/ directories.
    • Right-click on app/src -> New -> Directory. Name it prod.
    • Inside prod, create res/values/ directories.
    • (Optional but good practice): You can now move the default app_name string definition from app/src/main/res/values/strings.xml into both app/src/dev/res/values/strings.xml and app/src/prod/res/values/strings.xml. Or, you can rely solely on the resValue definitions in Gradle (as done above). Using resValue is often simpler for single strings like app_name. If you had many different resources (layouts, drawables), you’d put them in the respective dev/res or prod/res folders.
  4. Use Build Config Fields in Code:
      • Update TaskViewModel.kt and ReportingWorker.kt to use BuildConfig instead of temporary constants.

    TaskViewModel.kt change

    
    // Add this import
    import com.yourcompany.advancedconceptsapp.BuildConfig
    
    // Replace the temporary constant usage
    // const val TEMPORARY_TASKS_COLLECTION = "tasks" // Remove this line
    private val tasksCollection = db.collection(BuildConfig.TASKS_COLLECTION) // Use build config field
                        

    ReportingWorker.kt change

    
    // Add this import
    import com.yourcompany.advancedconceptsapp.BuildConfig
    
    // Replace the temporary constant usage
    // const val TEMPORARY_USAGE_LOG_COLLECTION = "usage_logs" // Remove this line
    
    // ... inside doWork() ...
    db.collection(BuildConfig.USAGE_LOG_COLLECTION).add(logEntry).await() // Use build config field
                        

    TaskScreen.kt already reads its title from resources (stringResource(id = R.string.app_name) in the TopAppBar), so the flavor-specific app name defined via resValue is picked up automatically. If you had set the title as a hard-coded string instead, you would load it from resources like this:

     // In TaskScreen.kt (if needed)
    import androidx.compose.ui.res.stringResource
    import com.yourcompany.advancedconceptsapp.R // Import R class
    // Inside Scaffold -> topBar

    TopAppBar(title = { Text(stringResource(id = R.string.app_name)) }) // Use string resource

  5. Select Build Variant & Test:
    • In Android Studio, go to Build -> Select Build Variant… (or use the “Build Variants” panel usually docked on the left).
    • You can now choose between devDebug, devRelease, prodDebug, and prodRelease.
    • Select devDebug. Run the app. The title should say “Task Reporter (Dev)”. Data should go to tasks_dev and usage_logs_dev in Firestore.
    • Select prodDebug. Run the app. The title should be “Task Reporter”. Data should go to tasks and usage_logs. (A quick way to double-check the active configuration at runtime is sketched after this list.)
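    For that runtime double-check, logging the generated BuildConfig fields is enough. A minimal sketch of lines you could add to MainActivity.onCreate (the two collection fields are the ones defined in the productFlavors block above; FLAVOR is generated automatically when flavors are configured):

    // MainActivity.kt additions (sketch)
    import android.util.Log
    import com.yourcompany.advancedconceptsapp.BuildConfig
    
    // Inside onCreate, after setContent { ... }
    Log.d(
        "FlavorCheck",
        "flavor=${BuildConfig.FLAVOR}, " +
            "tasks=${BuildConfig.TASKS_COLLECTION}, " +
            "usageLogs=${BuildConfig.USAGE_LOG_COLLECTION}"
    )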

Step 5: Proguard/R8 Configuration (for Release Builds)

R8 is the default code shrinker and obfuscator in Android Studio (the successor to Proguard). Once minification is enabled for the release build type, we need to ensure it doesn’t break our app, especially Firestore data mapping.

    1. Review app/build.gradle.kts Release Build Type:
      app/build.gradle.kts

      
      android {
          // ...
          buildTypes {
              release {
                  isMinifyEnabled = true // Enable R8 code shrinking and obfuscation for release
                  isShrinkResources = true // Also strip unused resources (requires minification)
                  proguardFiles(
                      getDefaultProguardFile("proguard-android-optimize.txt"),
                      "proguard-rules.pro" // Our custom rules file
                  )
              }
              debug {
                  isMinifyEnabled = false // Usually false for debug
                  proguardFiles(
                      getDefaultProguardFile("proguard-android-optimize.txt"),
                      "proguard-rules.pro"
                  )
              }
              // ... debug build type ...
          }
          // ...
      }
                 

      isMinifyEnabled = true enables R8 for the release build type.

    2. Configure app/proguard-rules.pro:
      • Firestore uses reflection to serialize/deserialize data classes. R8 might remove or rename classes/fields needed for this process. We need to add “keep” rules.
      • Open (or create) the app/proguard-rules.pro file. Add the following:
      
      # Keep Task data class and its members for Firestore serialization
       -keep class com.yourcompany.advancedconceptsapp.data.Task { <init>(...); *; }
       # Keep any other data classes used with Firestore similarly
       # -keep class com.yourcompany.advancedconceptsapp.data.AnotherFirestoreModel { <init>(...); *; }
      
      # Keep Coroutine builders and intrinsics (often needed, though AGP/R8 handle some automatically)
      -keepnames class kotlinx.coroutines.intrinsics.** { *; }
      
       # Keep companion objects for workers if needed (sometimes R8 removes them).
       # Note: ReportingWorker extends CoroutineWorker, which is a ListenableWorker, so match on ListenableWorker.
       -keepclassmembers class * extends androidx.work.ListenableWorker {
           public static ** Companion;
       }
      
      # Keep specific fields/methods if using reflection elsewhere
      # -keepclassmembers class com.example.SomeClass {
      #    private java.lang.String someField;
      #    public void someMethod();
      # }
      
      # Add rules for any other libraries that require them (e.g., Retrofit, Gson, etc.)
      # Consult library documentation for necessary Proguard/R8 rules.
    • Explanation:
      • -keep class ... { <init>(...); *; }: Keeps the Task class, its constructors (<init>), and all its fields/methods (*) from being removed or renamed. This is crucial for Firestore.
      • -keepnames: Prevents renaming but allows removal if unused.
      • -keepclassmembers: Keeps specific members within a class.

3. Test the Release Build:

    • Select the prodRelease build variant.
    • Go to Build -> Generate Signed Bundle / APK…. Choose APK.
    • Create a new keystore or use an existing one (follow the prompts). Remember the passwords!
    • Select prodRelease as the variant. Click Finish.
    • Android Studio will build the release APK. Find it (usually in app/prod/release/).
    • Install this APK manually on a device: adb install app-prod-release.apk.
    • Test thoroughly. Can you add tasks? Do they appear? Does the background worker still log to Firestore (check usage_logs)? If it crashes or data doesn’t save/load correctly, R8 likely removed something important. Check Logcat for errors (often ClassNotFoundException or NoSuchMethodError) and adjust your proguard-rules.pro file accordingly. (A minified-but-debuggable build type, sketched after this list, makes iterating on keep rules faster.)
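If you find yourself adjusting keep rules repeatedly, going through the signed-APK wizard each time is slow. One option is an extra build type that runs R8 but stays debuggable and debug-signed; the build-type name below is illustrative, not something defined earlier in this codelab:

    android {
        buildTypes {
            val debugType = getByName("debug")
            create("minifiedDebug") {
                initWith(debugType) // start from the debug configuration (debuggable, etc.)
                isMinifyEnabled = true // run R8 with the same keep rules as release
                proguardFiles(
                    getDefaultProguardFile("proguard-android-optimize.txt"),
                    "proguard-rules.pro"
                )
                signingConfig = signingConfigs.getByName("debug") // installable without a release keystore
                matchingFallbacks += listOf("debug") // let library modules fall back to their debug variant
            }
        }
    }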

 


 

Step 6: Firebase App Distribution (for Dev Builds)

Configure Gradle to upload development builds to testers via Firebase App Distribution.

  1. Download private key: In the Firebase console, open Project settings (the gear icon next to Project Overview) -> Service accounts -> Firebase Admin SDK and click the “Generate new private key” button. Move the downloaded api-project-xxx-yyy.json file to the project root, at the same level as the app folder. Keep this file local only; do not push it to the remote repository, because it contains sensitive credentials.
  2. Configure App Distribution Plugin in app/build.gradle.kts:
    app/build.gradle.kts

    
    // Apply the plugin at the top
    plugins {
        // ... other plugins id("com.android.application"), id("kotlin-android"), etc.
        alias(libs.plugins.google.firebase.appdistribution)
    }
    
    android {
        // ... buildFeatures, flavorDimensions, productFlavors ...
    
        buildTypes {
            getByName("release") {
                isMinifyEnabled = true
                isShrinkResources = true
                proguardFiles(
                    getDefaultProguardFile("proguard-android-optimize.txt"),
                    "proguard-rules.pro" // Our custom rules file
                )
                // App Distribution settings for release builds of both flavors.
                // (Add a similar block under getByName("debug") or under each product
                // flavor if you also plan to upload debug builds.)
                firebaseAppDistribution {
                    artifactType = "APK"
                    releaseNotes = "Latest build with fixes/features"
                    testers = "briew@example.com, bri@example.com, cal@example.com"
                    // Do not push this path/credential to the remote repository.
                    serviceCredentialsFile = "$rootDir/api-project-xxx-yyy.json"
                }
            }
            getByName("debug") {
                isMinifyEnabled = false // Usually false for debug
                proguardFiles(
                    getDefaultProguardFile("proguard-android-optimize.txt"),
                    "proguard-rules.pro"
                )
            }
        }
    }

    A variation that targets a tester group and keeps the credentials path out of the build script is sketched after this step.

    Add the plugin version to libs.versions.toml:

    
    [versions]
    googleFirebaseAppdistribution = "5.1.1"
    [plugins]
    google-firebase-appdistribution = { id = "com.google.firebase.appdistribution", version.ref = "googleFirebaseAppdistribution" }
    
    Ensure the plugin is also declared, with apply false, in the project-level build.gradle.kts:

    project build.gradle.kts

    
    plugins {
        // ...
        alias(libs.plugins.google.firebase.appdistribution) apply false
    }
                    

    Sync Gradle files.
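    If you would rather notify a tester group and keep the credentials path out of the build script entirely, the plugin can also be configured per flavor. A possible sketch; the group alias android-testers and the gradle.properties key firebaseAppDistCredentials are illustrative, not values created earlier:

    android {
        productFlavors {
            getByName("dev") {
                firebaseAppDistribution {
                    artifactType = "APK"
                    releaseNotes = "Automated dev build"
                    groups = "android-testers" // tester group alias defined in the Firebase console (assumed to exist)
                    // Read the credentials path from a Gradle property instead of hard-coding it;
                    // fall back to the local JSON path used above if the property is not set.
                    serviceCredentialsFile = providers.gradleProperty("firebaseAppDistCredentials")
                        .getOrElse("$rootDir/api-project-xxx-yyy.json")
                }
            }
        }
    }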

  3. Upload a Build Manually:
    • Select the desired variant (e.g., devDebug, devRelease, prodDebug, prodRelease).
    • In the Android Studio terminal, run the command for each variant you want to build and upload:
      • ./gradlew assembleProdRelease appDistributionUploadProdRelease
      • ./gradlew assembleDevRelease appDistributionUploadDevRelease
      • ./gradlew assembleProdDebug appDistributionUploadProdDebug
      • ./gradlew assembleDevDebug appDistributionUploadDevDebug
    • Check Firebase Console -> App Distribution, selecting the .dev app for dev builds. Add testers or use a configured tester group (e.g., `android-testers`).

Step 7: CI/CD with GitHub Actions

Automate building and distributing the `dev` build on push to a specific branch.

  1. Create GitHub Repository. Create a new repository on GitHub and push your project code to it.
    1. Generate FIREBASE_APP_ID:
      • In the Firebase console, go to Project settings -> General and copy the App ID for the com.yourcompany.advancedconceptsapp.dev app (it looks like 1:xxxxxxxxx:android:yyyyyyyyyy)
      • In GitHub repository go to Settings -> Secrets and variables -> Actions -> New repository secret
      • Set the name: FIREBASE_APP_ID and value: paste the App ID generated
    2. Add FIREBASE_SERVICE_ACCOUNT_KEY_JSON:
      • open api-project-xxx-yyy.json located at root project and copy the content
      • In GitHub repository go to Settings -> Secrets and variables -> Actions -> New repository secret
      • Set the name: FIREBASE_SERVICE_ACCOUNT_KEY_JSON and value: paste the json content
    3. Create GitHub Actions Workflow File:
      • In your project root, create the directories .github/workflows/.
      • Inside .github/workflows/, create a new file named android_build_distribute.yml.
      • Paste the following content:
      name: Android CI 
      
      on: 
        push: 
          branches: [ "main" ] 
        pull_request: 
          branches: [ "main" ] 
      jobs: 
        build: 
          runs-on: ubuntu-latest 
          steps: 
          - uses: actions/checkout@v3
          - name: set up JDK 17 
            uses: actions/setup-java@v3 
            with: 
              java-version: '17' 
              distribution: 'temurin' 
              cache: gradle 
          - name: Grant execute permission for gradlew 
            run: chmod +x ./gradlew 
          - name: Build devRelease APK 
            run: ./gradlew assembleDevRelease 
          - name: upload artifact to Firebase App Distribution
            uses: wzieba/Firebase-Distribution-Github-Action@v1
            with:
              appId: ${{ secrets.FIREBASE_APP_ID }}
              serviceCredentialsFileContent: ${{ secrets.FIREBASE_SERVICE_ACCOUNT_KEY_JSON }}
              groups: testers
              file: app/build/outputs/apk/dev/release/app-dev-release-unsigned.apk
      
    4. Commit and Push: Commit the .github/workflows/android_build_distribute.yml file and push it to your main branch on GitHub.
    5. Verify: Go to the “Actions” tab in your GitHub repository. You should see the workflow running. If it succeeds, check Firebase App Distribution for the new build. Your testers should get notified.

 


 

Step 8: Testing and Verification Summary

    • Flavors: Switch between devDebug and prodDebug in Android Studio. Verify the app name changes and data goes to the correct Firestore collections (tasks_dev/tasks, usage_logs_dev/usage_logs).
    • WorkManager: Use the App Inspection -> Background Task Inspector or ADB commands to verify the ReportingWorker runs periodically and logs data to the correct Firestore collection based on the selected flavor.
    • R8/Proguard: Install and test the prodRelease APK manually. Ensure all features work, especially adding/viewing tasks (Firestore interaction). Check Logcat for crashes related to missing classes/methods.
    • App Distribution: Make sure testers receive invites for the devDebug (or devRelease) builds uploaded manually or via CI/CD. Ensure they can install and run the app.
    • CI/CD: Check the GitHub Actions logs for successful builds and uploads after pushing to the branch configured in the workflow (main in the example above). Verify the build appears in Firebase App Distribution.

 

Conclusion

Congratulations! You’ve navigated complex Android topics including Firestore, WorkManager, Compose, Flavors (with correct Firebase setup), R8, App Distribution, and CI/CD.

This project provides a solid foundation. From here, you can explore:

    • More complex WorkManager chains or constraints.
    • Deeper R8/Proguard rule optimization.
    • More sophisticated CI/CD pipelines (deploying signed APKs/bundles, running tests, publishing to Google Play).
    • Using different NoSQL databases or local caching with Room.
    • Advanced Compose UI patterns and state management.
    • Firebase Authentication, Cloud Functions, etc.

If you would like access to the full code in my GitHub repository, let me know in the comments.


 

Project Folder Structure (Conceptual)


AdvancedConceptsApp/
├── .git/
├── .github/workflows/android_build_distribute.yml
├── .gradle/
├── app/
│   ├── build/
│   ├── libs/
│   ├── src/
│   │   ├── main/           # Common code, res, AndroidManifest.xml
│   │   │   └── java/com/yourcompany/advancedconceptsapp/
│   │   │       ├── data/Task.kt
│   │   │       ├── ui/TaskScreen.kt, TaskViewModel.kt, theme/
│   │   │       ├── worker/ReportingWorker.kt
│   │   │       └── MainActivity.kt
│   │   ├── dev/            # Dev flavor source set (optional overrides)
│   │   ├── prod/           # Prod flavor source set (optional overrides)
│   │   ├── test/           # Unit tests
│   │   └── androidTest/    # Instrumentation tests
│   ├── google-services.json # *** IMPORTANT: Contains configs for BOTH package names ***
│   ├── build.gradle.kts    # App-level build script
│   └── proguard-rules.pro # R8/Proguard rules
├── api-project-xxx-yyy.json # Firebase service account key json
├── gradle/wrapper/
├── build.gradle.kts      # Project-level build script
├── gradle.properties
├── gradlew
├── gradlew.bat
└── settings.gradle.kts
        

 

]]>
https://blogs.perficient.com/2025/04/10/android-development-codelab-mastering-advanced-concepts/feed/ 0 379698
Log Framework Integration in Azure Functions with Azure Cosmos DB https://blogs.perficient.com/2025/04/02/log-framework-integration-in-azure-functions-with-azure-cosmos-db/ https://blogs.perficient.com/2025/04/02/log-framework-integration-in-azure-functions-with-azure-cosmos-db/#respond Wed, 02 Apr 2025 09:30:54 +0000 https://blogs.perficient.com/?p=379516

Introduction

Logging is an essential part of application development, especially in cloud environments where monitoring and debugging are crucial. Azure Functions has no built-in way to persist application-level logs to a centralized database, which means you otherwise have to dig through the Azure portal every time you need them. This blog focuses on integrating NLog into Azure Functions to store all logs in a single database (Cosmos DB), ensuring a unified logging approach for better monitoring and debugging.

Steps to Integrate Logging Framework


 

1. Create an Azure Function Project

Begin by creating an Azure Function project using the Azure Function template in Visual Studio.

2. Install Required Nuget Packages

To enable logging using NLog, install the following NuGet packages:

Install-Package NLog
Install-Package NLog.Extensions.Logging
Install-Package Microsoft.Azure.Cosmos

 

 

3. Create and Configure Nlog.config

NLog uses an XML-based configuration file to define logging targets and rules. Create a new file named Nlog.config in the project root and configure it with the necessary settings.

Refer to the official NLog documentation for database target configuration: NLog Database Target

Important: Set Copy to Output Directory to Copy Always in the file properties to ensure deployment.

N Log Config Code

 

4. Create Log Database

Create an Azure Cosmos DB account with the SQL API.

Sample Cosmos DB Database and Container

  1. Database Name: LogDemoDb
  2. Container Name: Logs
  3. Partition Key: /Application

5. Define Necessary Variables

In the local.settings.json file, define the Cosmos DB connection string.

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "CosmosDBConnectionString": "AccountEndpoint=https://your-cosmosdb.documents.azure.com:443/;AccountKey=your-account-key;"
  }
}


 

6. Configure NLog in Startup.cs

Modify Startup.cs to configure NLog and instantiate database connection strings and log variables.

using Microsoft.Azure.Functions.Extensions.DependencyInjection;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;
using NLog.Extensions.Logging;
using Microsoft.Azure.Cosmos;

[assembly: FunctionsStartup(typeof(MyFunctionApp.Startup))]
namespace MyFunctionApp
{
    public class Startup : FunctionsStartup
    {
        public override void Configure(IFunctionsHostBuilder builder)
        {
            builder.Services.AddLogging(loggingBuilder =>
            {
                loggingBuilder.ClearProviders();
                loggingBuilder.SetMinimumLevel(LogLevel.Information);
                loggingBuilder.AddNLog();
            });

            builder.Services.AddSingleton(new CosmosClient(
                Environment.GetEnvironmentVariable("CosmosDBConnectionString")));
        }
    }
}


 

7. Add Logs in Necessary Places

To ensure efficient logging, add logs according to the standard NLog level hierarchy (Trace, Debug, Info, Warn, Error, Fatal), choosing the lowest level that conveys the information.

Example Logging in Function Code:

 

using System;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public class MyFunction
{
    private readonly ILogger<MyFunction> _logger;
    private readonly CosmosClient _cosmosClient;
    private readonly Container _container;

    public MyFunction(ILogger<MyFunction> logger, CosmosClient cosmosClient)
    {
        _logger = logger;
        _cosmosClient = cosmosClient;

        // Initialize Cosmos DB container
        _container = _cosmosClient.GetContainer("YourDatabaseName", "YourContainerName");
    }

    [FunctionName("MyFunction")]
    public async Task Run([TimerTrigger("0 */5 * * * *")] TimerInfo myTimer)
    {
        var logEntry = new
        {
            id = Guid.NewGuid().ToString(),
            timestamp = DateTime.UtcNow,
            logLevel = "Information",
            message = "Function executed at " + DateTime.UtcNow
        };

        // Insert log into Cosmos DB
        await _container.CreateItemAsync(logEntry, new PartitionKey(logEntry.id));

        _logger.LogInformation("Function executed at {time}", DateTime.UtcNow);
    }
}

8. Deployment

Once the function is ready, deploy it to Azure Function App using Visual Studio or Azure DevOps.

Deployment Considerations:

  • Define necessary environment variables in Azure Function Configuration Settings.
  • Ensure the Azure Function App and the Cosmos DB account can communicate (same virtual network or appropriate firewall/network rules) to avoid connection issues.
  • Monitor logs using Application Insights for additional diagnostics.

Conclusion

By following these steps, you can successfully integrate NLog into your Azure Functions for efficient logging. This setup enables real-time monitoring, structured log storage, and improved debugging capabilities.

]]>
https://blogs.perficient.com/2025/04/02/log-framework-integration-in-azure-functions-with-azure-cosmos-db/feed/ 0 379516
Understanding and Implementing OAuth2 and OpenID Connect in .NET https://blogs.perficient.com/2025/04/01/understanding-and-implementing-oauth2-and-openid-connect-in-net/ https://blogs.perficient.com/2025/04/01/understanding-and-implementing-oauth2-and-openid-connect-in-net/#respond Tue, 01 Apr 2025 11:34:05 +0000 https://blogs.perficient.com/?p=378734

Authentication and authorization are two crucial aspects of web development. In modern applications, it’s essential to ensure that users are who they say they are (authentication) and have permission to access specific resources (authorization). OAuth2 and OpenID Connect are two widely used protocols that help achieve both goals.

What is OAuth2?

OAuth2 (Open Authorization 2.0) is an authorization framework that enables third-party applications to access a user’s resources without requiring them to share their credentials (username and password). It allows for delegated access, meaning that users can grant specific, controlled access to their data without revealing their login information.

OAuth2 is commonly used to enable users to authenticate via their existing accounts from services like Google, Facebook, or Microsoft. This allows users to securely log in to applications without exposing their sensitive credentials to the requesting application.

Key Concepts in OAuth2

  1. Resource Owner: The user who owns the data and grants permission to the client application.
  2. Client Application: The application requesting access to the user’s resources (e.g., a mobile app or a web application).
  3. Authorization Server: The server responsible for authenticating the user and issuing access tokens.
  4. Resource Server: The server that hosts the protected resources and validates the access tokens provided by the client.
  5. Access Token: A token issued by the authorization server that grants the client access to the protected resources.

OAuth2 Flow

  1. The user is redirected to the authorization server (e.g., Google’s OAuth2 server).
  2. The user authenticates and grants permission for the client application to access specific data (e.g., their Google profile).
  3. The authorization server issues an authorization code.
  4. The client application exchanges the authorization code for an access token.
  5. The client application uses the access token to request protected resources from the resource server.

Key Benefits of OAuth2

OAuth 2.0 is a widely adopted authorization framework that allows third-party applications to access user resources without sharing the credentials. It provides a secure and scalable way to manage authorization. Here are some key benefits of OAuth 2.0:

  1. Granular Access Control: Allows user to define fine-grained permissions for specific resources and grant third-party apps access to certain data or actions without providing blanket access to all their information.
  2. Improved Security: Credentials Protection and Scoped Access.
  3. Support for Multiple Grant Types: Supports several grant types (e.g., Authorization Code, Implicit, Client Credentials, and Resource Owner Password Credentials)
  4. Token-Based Authentication: Uses access tokens, which are temporary and can be scoped and time-limited.
  5. Token Expiry and Revocation: Tokens issued by OAuth 2.0 have an expiry time, which helps limit the duration of access.
  6. Interoperability: OAuth 2.0 is a well-defined, open standard that is widely supported by various service providers and applications, ensuring smooth integration between different systems and platforms.

 


 

What is OpenID Connect?

OpenID Connect (OIDC) is an identity layer built on top of OAuth2. It is used to verify the identity of the user and obtain their profile information. While OAuth2 is used for authorization, OpenID Connect extends OAuth2 to include authentication.

In simple terms, OAuth2 tells the client what the user is allowed to do (authorization), while OpenID Connect tells the client who the user is (authentication).

Key Concepts in OpenID Connect

  1. ID Token: This is a JWT (JSON Web Token) that contains information about the authenticated user. It is issued by the authorization server and can be used by the client application to authenticate the user.
  2. Authentication Request: The client sends a request to the authorization server to authenticate the user and receive an ID token along with an access token.

OAuth2 and OpenID Connect in .NET

For better understanding, we’ll integrate with Google as the OAuth2 and OpenID Connect provider.

Step 1: Create a Google Developer Project

1.1 Open the Google Cloud Console

    • Open your browser and navigate to the Google Cloud Console.
    • Sign in to your Google account. If you’re not already signed in, you’ll be prompted to log in.

1.2 Create a New Project

    • On the top left of the page, you’ll see the Google Cloud Platform logo. To the right of the logo, there will be a dropdown that may say something like “Select a Project” or “My First Project.”
    • Click on this dropdown. A new window will appear, showing a list of your existing projects.
    • In the top right of the window, you’ll see a button that says “New Project.” Click on this button.

1.3 Fill in the Project Details

    • Project Name: Enter a name for your project.
    • After filling in the details, click Create.

Once your project is created, you’ll be redirected to the newly created project’s dashboard in the Google Cloud Console.


Step 2: Enable the Google+ API

2.1 Navigate to the APIs & Services Library

    • In the left sidebar, click on the hamburger icon (three horizontal lines) to open the navigation menu.
    • From the menu, go to APIs & Services > Library.

2.2 Search for the Google+ API

    • In the search bar at the top of the Library page, type Google+ API and press enter.
    • Click on the Google+ API result.
    • Then, click the Enable button to enable this API for your project.

Step 3: Create OAuth2 Credentials

3.1 Go to the Credentials Page

    • In the left sidebar, under APIs & Services, click on Credentials.

3.2 Create OAuth 2.0 Client ID

    • On the Credentials page, click the Create Credentials button at the top and select OAuth 2.0 Client ID.

3.3 Configure the OAuth Consent Screen

    • Before creating the OAuth credentials, you need to configure the OAuth consent screen. Click on the OAuth consent screen tab.
    • Choose External as the user type.

3.4 Fill in the Required Fields

    • App Name: Enter a name for your application.
    • User Support Email: Provide your email address.
    • Developer Contact Information: Enter your email address.
    • Click Save and Continue to proceed.

3.5 Create the OAuth 2.0 Client ID

    • After completing the OAuth consent screen setup, you’ll be asked to configure the OAuth credentials.

    • Select Web application as the application type.

    • Add the following Authorized redirect URI:

      Example Redirect URI (if your app is running on https://localhost:{{portnumber}}):

      https://localhost:{{portnumber}}/signin-google
    • Click Create.


Step 4: Obtain Client ID and Client Secret

4.1 Create OAuth Credentials

After completing the OAuth 2.0 configuration and clicking Create, a new window will appear with your Client ID and Client Secret.

4.2 Copy and Store Credentials

It is crucial to copy both the Client ID and Client Secret and store them securely. These credentials will be necessary for integrating Google authentication into your app and ensuring secure access to users’ Google accounts.

4.3 Use the Credentials in Your App

You will use these credentials in your application to authenticate users and interact with Google’s services. Keep them safe, as they allow access to sensitive user data.

Set Up the .NET Core Project:

    1. Create a new ASP.NET Core web application using the template for Web Application (Model-View-Controller).
    2. Add the Microsoft.AspNetCore.Authentication.Google NuGet package.
    3. Configure OAuth2 and OpenID Connect in Program.cs

In the Program.cs file, configure the authentication middleware to use Google’s OAuth2 and OpenID Connect:

// Add authentication services (Google login, etc.)

builder.Services.AddAuthentication(options =>
{
    options.DefaultScheme = "Cookies";
    options.DefaultChallengeScheme = GoogleDefaults.AuthenticationScheme;
})
.AddCookie()
.AddGoogle(options =>
{
    options.ClientId = "client-id";
    options.ClientSecret = "client-secret";
    options.CallbackPath = "/signin-google";
    options.SaveTokens = true;
});


app.UseAuthentication();

app.UseAuthorization();
  • AddGoogle enables OAuth2 and OpenID Connect using Google as the identity provider.
  • SaveTokens = true ensures that both the access token and ID token are saved.

The .AddGoogle extension method is not part of the base framework; it comes from the Microsoft.AspNetCore.Authentication.Google NuGet package, which is available for recent .NET versions (including .NET 5.0, 6.0, and 7.0).

If you prefer not to take that package dependency, or you want to see what the Google handler configures under the hood, you can wire up Google sign-in with the generic AddOAuth method instead. Here’s how that can look:

// Chained after builder.Services.AddAuthentication(...).AddCookie()
.AddOAuth("Google", options =>
{
    options.ClientId = Configuration["Google:ClientId"];
    options.ClientSecret = Configuration["Google:ClientSecret"];
    options.CallbackPath = new PathString("/signin-google");
    options.AuthorizationEndpoint = "https://accounts.google.com/o/oauth2/auth";
    options.TokenEndpoint = "https://oauth2.googleapis.com/token";
    options.UserInformationEndpoint = "https://www.googleapis.com/oauth2/v3/userinfo";
    options.Scope.Add("openid");
    options.Scope.Add("profile");
    options.Scope.Add("email");
    options.SaveTokens = true;
});

Step 5: Create a Controller to Handle Authentication

Create a simple controller to handle login and display the authenticated user’s profile.

  • The Login action redirects users to Google’s login page.
  • The Logout action logs the user out.
  • The Profile action reads the authenticated user’s principal and displays the ID token.

Identity Management in ASP.NET Core relies heavily on OAuth 2.0 or OpenID Connect for user authentication. The Profile action retrieves user information, typically stored in claims, such as the ID token, and uses it for user management. With the appropriate configuration and token handling, the application can securely manage user identity, retrieve profile data, and display relevant information to the user.

Step 6: Add Views to Display User Information

Add a simple view to show the user’s profile in the Views/Account/Profile.cshtml:

Step 7: Run the Application

Run the application using:

dotnet run

Conclusion

OAuth2 and OpenID Connect are powerful protocols for handling authentication and authorization in modern applications. By integrating these protocols into .NET applications, we can securely authenticate users, delegate access to resources, and ensure that only authorized users can access services.

By following the steps we should now have a basic understanding of how to implement OAuth2 and OpenID Connect in a .NET application. These concepts are essential for any developer working on building secure, scalable, and modern web applications.

]]>
https://blogs.perficient.com/2025/04/01/understanding-and-implementing-oauth2-and-openid-connect-in-net/feed/ 0 378734
Daily Scrum: An Agile Essential https://blogs.perficient.com/2025/03/28/daily-scrum-an-agile-essential/ https://blogs.perficient.com/2025/03/28/daily-scrum-an-agile-essential/#respond Fri, 28 Mar 2025 10:13:45 +0000 https://blogs.perficient.com/?p=379305

Mastering the Daily Scrum: A Guide to Effective Agile Meetings

In the fast-paced world of Agile, the Daily Scrum is a critical touchpoint that empowers teams to stay aligned, adapt to changes, and collaborate effectively. Despite its simplicity, this daily meeting often faces challenges that hinder its true potential. In this blog, we’ll explore what the Daily Scrum is, common pitfalls, and practical tips to enhance its effectiveness.

Understanding the Daily Scrum

The Daily Scrum is a short, time-boxed meeting where the development team synchronizes progress and plans the day ahead. It’s a core component of Scrum methodology, designed not as a status update but as a collaborative inspection and adaptation opportunity.


Unlike traditional meetings, the Daily Scrum is not meant for problem-solving or detailed discussions; instead, it focuses on:

  • Inspecting progress toward the Sprint Goal
  • Adapting the Sprint Backlog
  • Identifying potential roadblocks

Key Roles in a Daily Scrum


While the development team leads the conversation, other key stakeholders also play a role:

  • Development Team: Owns the responsibility of conducting the Daily Scrum.
  • Product Owner: May participate to provide insights into product backlog items.
  • Scrum Master: Ensures the meeting’s integrity, fosters discipline, and facilitates effective discussions.
  • Stakeholders/Observers: Can attend as silent listeners, ensuring the team remains focused.

Benefits of a Well-Executed Daily Scrum

When done right, the Daily Scrum offers numerous benefits:

  • Enhanced Team Cohesion: Fosters a sense of shared responsibility and accountability.
  • Quick Issue Identification: Helps identify impediments early.
  • Reduced Meetings: Minimizes the need for other status updates.
  • Faster Decision-Making: Enables swift, informed decisions.
  • Continuous Improvement: Promotes transparency and iterative learning.

Challenges and How to Overcome Them

Despite its advantages, teams often face challenges during the Daily Scrum. Here are some common issues and tips to address them:

    • Unpreparedness and Irrelevant Discussions: Stick to the purpose of the meeting.
    • Selection of Questions: Establish clear ground rules.
    • Visualizing the Work: Leverage Scrum boards for transparency.
    • Skipping or Cancelling: Fix the location and time to maintain consistency.
    • Late Joiners and Poor Attendance: Promote attentiveness and punctuality.
    • Distinguishing Blockers from Impediments: Use the ‘parking lot’ approach for unrelated issues.
    • Micromanaging: Encourage creativity and innovation.
    • Lack of Psychological Safety: Recommend video calls for remote teams to foster open communication.

The Quickest Meeting of Scrum

Once the Daily Scrum becomes a regular practice, the team finds it easy to share project updates. The event is always time-boxed to 15 minutes, regardless of team size, Sprint duration, or the phase of the Sprint.

Daily Scrum vs. Standup: Understanding the Difference

While often used interchangeably, Daily Scrum and standup meetings differ in purpose and structure. A standup may serve as a general team sync, whereas the Daily Scrum is a focused, goal-oriented Agile practice within the Scrum framework.


Final Thoughts

A successful Daily Scrum isn’t just about following the process—it’s about fostering collaboration, adaptability, and continuous improvement. By embracing the principles of transparency and inspection, teams can unlock their true potential and drive project success.

Remember, the key to an effective Daily Scrum is commitment from the team. Keep it concise, keep it focused, and most importantly, keep it valuable.

Happy Scrumming!

]]>
https://blogs.perficient.com/2025/03/28/daily-scrum-an-agile-essential/feed/ 0 379305
Power Fx in Power Automate Desktop https://blogs.perficient.com/2025/03/25/power-fx-in-power-automate-desktop/ https://blogs.perficient.com/2025/03/25/power-fx-in-power-automate-desktop/#respond Wed, 26 Mar 2025 04:52:50 +0000 https://blogs.perficient.com/?p=379147

Power Fx Features

Power Fx is a low-code language expressing logic across the Microsoft Power Platform. It’s a general-purpose, strong-typed, declarative, and functional programming language described in human-friendly text. Makers can use Power Fx directly in an Excel-like formula bar or Visual Studio Code text window. Its concise and straightforward nature makes everyday programming tasks easy for both makers and developers.


Power Fx enables the full spectrum of development, from no-code makers without any programming knowledge to pro-code for professional developers. It enables diverse teams to collaborate and save time and effort.

Using Power Fx in Desktop Flow

To use Power Fx as an expression language in a desktop flow, you must create one and enable the respective toggle button when creating it through Power Automate for the desktop’s console.


Differences in Power Fx-Enabled Flows

Each Power Fx expression must start with an “=” (equals to sign).

If you’re transitioning from flows where Power Fx is disabled, you might notice some differences. To streamline your experience while creating new desktop flows, here are some key concepts to keep in mind:

  • In the same fashion as Excel formulas, desktop flows that use Power Fx as their expression language use 1 (one) based array indexing instead of 0 (zero) based indexing. For example, expression =Index(numbersArray, 1) returns the first element of the numbersArray array.
  • Variable names are case-sensitive in desktop flows with Power Fx. For example, NewVar is different than newVar.
  • When Power Fx is enabled in a desktop flow, variable initialization is required before use. Attempting to use an uninitialized variable in Power Fx expressions results in an error.
  • The If action accepts a single conditional expression. Previously, it accepted multiple operands.
  • While flows without Power Fx enabled have the term “General value” to denote an unknown object type, Power Fx revolves around a strict type system. In Power Fx enabled flows, there’s a distinction between dynamic variables (variables whose type or value can be changed during runtime) and dynamic values (values whose type or schema is determined at runtime). To better understand this distinction, consider the following example. The dynamicVariable changes its type during runtime from a Numeric to a Boolean value, while dynamicValue is determined during runtime to be an untyped object, with its actual type being a Custom object:

With Power Fx Enabled

Picture2

With Power Fx Disabled

Picture3

  • Values that are treated as dynamic values are:
    • Data tables
    • Custom objects with unknown schema
    • Dynamic action outputs (for example, the “Run .NET Script” action)
    • Outputs from the “Run desktop flow” action
    • Any action output without a predefined schema (for example, “Read from Excel worksheet” or “Create New List”)
  • Dynamic values are treated similarly to the Power Fx Untyped Object and usually require explicit functions to be converted into the required type (for example, Bool() and Text()). To streamline your experience, there’s an implicit conversion when using a dynamic value as an action input or as a part of a Power Fx expression. There’s no validation during authoring, but depending on the actual value during runtime, a runtime error occurs if the conversion fails.
  • A warning message stating “Deferred type provided” is presented whenever a dynamic variable is used. These warnings arise from Power Fx’s strict requirement for strong-typed schemas (strictly defined types). Dynamic variables aren’t permitted in lists, tables, or as a property for Record values.
  • By combining the Run Power Fx expression action with expressions using the Collect, Clear, ClearCollect, and Patch functions, you can emulate behavior found in the actions Add item to list and Insert row into data table, which were previously unavailable for Power Fx-enabled desktop flows. While both actions are still available, use the Collect function when working with strongly typed lists (for example, a list of files). This function ensures the list remains typed, as the Add Item to List action converts the list into an untyped object.

Examples

  • The =1 in an input field equals the numeric value 1.
  • The = variableName is equal to the variableName variable’s value.
  • The expression = {‘prop’:”value”} returns a record value equivalent to a custom object.
  • The expression = Table({‘prop’:”value”}) returns a Power Fx table that is equivalent to a list of custom objects.
  • The expression =[1,2,3,4] creates a list of numeric values.
  • To access a value from a List, use the function Index(var, number), where var is the list’s name and number is the position of the value to be retrieved.
  • To access a data table cell using a column index, use the Index() function. =Index(Index(DataTableVar, 1), 2) retrieves the value from the cell in row 1 within column 2. =Index(DataRowVar, 1) retrieves the value from the cell in row 1.
  • Define the Collection Variable:

Give your collection a name (e.g., myCollection) in the Variable Name field.

In the Value field, define the collection. Collections in PAD are essentially arrays, which you can define by enclosing the values in square brackets [ ].

1. Create a Collection of Numbers

Action: Set Variable

Variable Name: myNumberCollection

Value: [1, 2, 3, 4, 5]

2. Create a Collection of Text (Strings)

Action: Set Variable

Variable Name: myTextCollection

Value: [“Alice”, “Bob”, “Charlie”]

3. Create a Collection with Mixed Data Types

You can also create collections with mixed data types. For example, a collection with both numbers and strings:

Action: Set Variable

Variable Name: mixedCollection

Value: [1, “John”, 42, “Doe”]

  • To include an interpolated value in an input or a UI/web element selector, use the following syntax: Text before ${variable/expression} text after
    • Example: The total number is ${Sum(10, 20)}

 If you want to use the dollar sign ($) followed by an opening curly brace ({) within a Power Fx expression or in the syntax of a UI/Web element selector, and you do not want Power Automate for desktop to treat it as string interpolation syntax, use this syntax: $${ (the first dollar sign acts as an escape character).

Available Power Fx functions

For the complete list of all available functions in Power Automate for desktop flows, go to Formula reference – desktop flows.

Known Issues and Limitations

  • The following actions from the standard library of automation actions aren’t currently supported:
    • Switch
    • Case
    • Default case
  • Some Power Fx functions presented through IntelliSense aren’t currently supported in desktop flows. When used, they display the following design-time error: “Parameter ‘Value’: PowerFx type ‘OptionSetValueType’ isn’t supported.”

 

When and When Not to Use Power Fx on Desktop

When to Use Power Fx in Power Automate Desktop

  1. Complex Logic: If you need to implement more complicated conditions, calculations, or data transformations in your flows, Power Fx can simplify the process.
  2. Integration with Power Apps: If your automations are closely tied to Power Apps and you need consistent logic between them, Power Fx can offer a seamless experience as it’s used across the Power Platform.
  3. Data Manipulation: Power Fx excels at handling data operations like string manipulation, date formatting, mathematical operations, and more. It may be helpful if your flow requires manipulating data in these ways.
  4. Reusability: Power Fx functions can be reused in different parts of your flow or other flows, providing consistency and reducing the need for redundant logic.
  5. Low-Code Approach: If you’re building solutions that require a lot of custom logic but don’t want to dive into full-fledged programming, Power Fx can be a good middle ground.

When Not to Use Power Fx in Power Automate Desktop

  1. Simple Flows: For straightforward automation tasks that don’t require complex expressions (like basic UI automation or file manipulations), using Power Fx could add unnecessary complexity. It’s better to stick with the built-in actions.
  2. Limited Support in Desktop: While Power Fx is more prevalent in Power Apps, Power Automate Desktop doesn’t fully support all Power Fx features available in other parts of the Power Platform. If your flow depends on more advanced Power Fx capabilities, it might be limited in Power Automate Desktop.
  3. Learning Curve: Power Fx has its own syntax and can take time to get used to, especially if you’re accustomed to more traditional automation methods. If you’re new to it, weigh the time it takes to learn Power Fx against simply using the built-in features in Power Automate Desktop.

Conclusion

Yes, use Power Fx if your flow needs custom logic, data transformation, or integration with Power Apps and you’re comfortable with the learning curve.

No, avoid it if your flows are relatively simple or if you’re primarily focused on automation tasks like file manipulation, web scraping, or UI automation, where Power Automate Desktop’s native features will be sufficient.

Responsible and Secure Use of GenAI for Software Developers https://blogs.perficient.com/2025/03/19/responsible-and-secure-use-of-genai-for-software-developers/ https://blogs.perficient.com/2025/03/19/responsible-and-secure-use-of-genai-for-software-developers/#comments Wed, 19 Mar 2025 15:01:09 +0000 https://blogs.perficient.com/?p=378898

In today’s development landscape, there are numerous ways to use GenAI in software development: stand-alone IDEs with built-in GenAI, such as Cursor AI and Windsurf, and plugins for existing Integrated Development Environments (IDEs), with popular options including GitHub Copilot, Tabnine, Codeium, and Amazon Q. These tools are easy to use and promise significant productivity gains, but careless use can breach company security policies.

Scenario

An automated process exports an Excel spreadsheet daily and places the file in an AWS S3 bucket. The spreadsheet contains sensitive data, including customer names, account numbers, addresses, order IDs, order dates, product SKUs and quantities, and credit card information. The file has a descriptive name, such as "AcmeCompany_customer_sales_data_2025_02_20.xls".

You are tasked with creating an AWS Lambda function in Python to ingest this file and insert the data into a MongoDB database.

Your Thought Process

To build and test your Python utility, you might use a GenAI prompt like the following:

“Create a Python program that connects to an AWS system using the credentials username=xyz, password=abcde, retrieve the file in the AWS S3 Bucket named XYZBucket whose filename pattern matches AcmeCompany_customer_sales_data_2025_02_20.xls, read in the data from this file, convert it to JSON, and write it to a MongoDB collection named ‘daily_sales_data’ using the connection string ‘https://…/…’”.

The Problem

Great! You have generated a program that does exactly what you need. However, you have also shared Personally Identifiable Information (PII) and Payment Card Information (PCI) with the outside world. This violates your company’s security protocols and likely breaches laws, regulations, and industry standards such as the General Data Protection Regulation (GDPR) and the Payment Card Industry Data Security Standard (PCI DSS). Additionally, you have exposed details about your AWS system and MongoDB installation. This data is now potentially part of the AI model’s training data, and the GenAI tool may share it with another developer who enters the right prompt.

Alternate Approach

You only need a few rows of random data for the GenAI tool to generate suitable code. Create an example test data Excel file with made-up or randomly generated names, account numbers, credit card numbers, etc., and a random file name. You can then prompt your GenAI tool separately with requests for individual pieces of the puzzle like:

  • “Show me an example of how to connect to a MongoDB database from a Python program.”
  • “How do I connect to an AWS S3 instance using the Boto3 library in a Python program?”
  • “How do I open an Excel file in an S3 bucket and read the data?”
  • “I need a method to read in the example file named ‘example.xls,’ convert it to JSON, and write it in a MongoDB collection named ‘test_data’.”

In all cases, omit all connection information and proprietary or protected data. Your GenAI tool will generate the code with placeholder comments like “your connection string here.” You may need to do some additional work to tie it all into real-world code, but you haven’t exposed any protected information or system details to the world. Just because the GenAI tool can do everything you need in a single prompt doesn’t mean you should use it that way.
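To make this concrete, here is a minimal Python sketch of the kind of sanitized skeleton you could assemble from those separate prompts. Everything in it is illustrative: the bucket, key, database, and collection names are made up, the connection string is a placeholder, and in a real deployment these values would come from environment variables or a secrets manager inside your own environment, never from a GenAI prompt.

# Minimal sanitized skeleton (illustrative only): placeholder names, no real credentials.
import io
import os

import boto3                     # AWS SDK for Python
import pandas as pd              # reading Excel files requires openpyxl or xlrd
from pymongo import MongoClient  # MongoDB driver

# Placeholders; supply real values via environment variables or a secrets manager.
S3_BUCKET = os.environ.get("S3_BUCKET", "your-bucket-name-here")
S3_KEY = os.environ.get("S3_KEY", "example.xls")
MONGO_URI = os.environ.get("MONGO_URI", "your-connection-string-here")
MONGO_DB = os.environ.get("MONGO_DB", "your-database-name-here")
MONGO_COLLECTION = os.environ.get("MONGO_COLLECTION", "test_data")


def lambda_handler(event, context):
    # Download the spreadsheet from S3 into memory.
    s3 = boto3.client("s3")
    response = s3.get_object(Bucket=S3_BUCKET, Key=S3_KEY)
    workbook = io.BytesIO(response["Body"].read())

    # Read the rows and convert each one to a JSON-like dictionary.
    df = pd.read_excel(workbook)
    records = df.to_dict(orient="records")

    # Insert the rows into the MongoDB collection.
    client = MongoClient(MONGO_URI)
    if records:
        client[MONGO_DB][MONGO_COLLECTION].insert_many(records)

    return {"inserted": len(records)}

Point a sketch like this at a test file that contains only made-up rows; the structure of the data, not the real customer records, is all the GenAI tool ever needed to see.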

Future Considerations

The next wave of GenAI development tools will focus on analyzing your entire codebase to suggest system-wide improvements. This significantly increases the risk of intellectual property exposure. In addition, credentials, connection strings, and passwords for at least a test system might exist in your codebase. Unless the GenAI tool is hosted locally within your company, the risks to intellectual property and security are extensive.

Guidelines for the Use of GenAI

  1. Data Privacy and Security

  • Avoid Sharing Sensitive Data: Never input Personally Identifiable Information (PII), Payment Card Information (PCI), or any other sensitive data into GenAI tools. Use anonymized or synthetic data instead.
  • Compliance with Regulations: Ensure that your use of GenAI complies with relevant data protection laws and regulations, such as GDPR, HIPAA, and PCI DSS.
  2. Ethical Use

  • Transparency: Be transparent about using GenAI in your projects. Inform stakeholders about how AI is being used and the data it processes.
  • Bias and Fairness: Be aware of potential biases in AI models and strive to mitigate them. Ensure your AI solutions are fair and do not discriminate against any group.
  3. Human Oversight

  • Review Outputs: Always review and validate the outputs generated by GenAI tools. Do not rely solely on AI-generated content for critical decisions.
  • Accountability: Take complete ownership of the results produced by GenAI tools. Ensure that there is human oversight in the decision-making process.
  4. Security Best Practices

  • Secure Development Practices: Follow secure software development practices, such as those outlined in the Secure Software Development Framework (SSDF). These include regular code reviews, vulnerability assessments, and secure coding standards.
  • Access Control: Implement strict access controls to ensure that only authorized personnel can use GenAI tools and access the data they process.
  5. Continuous Monitoring and Improvement

  • Monitor AI Systems: Continuously monitor the performance and behavior of AI systems to detect and address any issues promptly.
  • Update and Improve: Regularly update AI models and tools to incorporate the latest security patches and improvements.

By following these guidelines, software developers can leverage the power of GenAI while ensuring these tools are used responsibly and securely.

Summary

It is easy to forget that, unless you are using a locally hosted GenAI tool, any data you submit as a prompt is not private. Your data is sent to the GenAI tool’s servers, parsed, potentially stored, and potentially shared with the next person who enters the correct prompt. You must constantly assess what you are giving the GenAI tool as a prompt to determine if it exposes sensitive data.

Similarly, you can use GenAI tools to improve your code or perform a code review. However, you must be careful about what code you ask the GenAI tool to review. Does the code contain usernames or passwords? Connection strings to databases? Is the code identifiable as serving a specific business purpose, or does it contain proprietary algorithms or intellectual property?

Exposing proprietary or protected information or intellectual property to a GenAI tool could lead to disciplinary action, termination of employment, or legal action. If the data or code belongs to your customer, the consequences could be even worse, potentially including lawsuits and the cancellation of contracts worth millions to your company.

GenAI development tools are excellent and promise significant productivity increases. However, careful and diligent use of these tools is needed to mitigate potential risks to protected data and intellectual property.
