Power Fx in Power Automate Desktop
https://blogs.perficient.com/2025/03/25/power-fx-in-power-automate-desktop/

Power Fx Features

Power Fx is the low-code language for expressing logic across the Microsoft Power Platform. It is a general-purpose, strongly typed, declarative, and functional programming language described in human-friendly text. Makers can use Power Fx directly in an Excel-like formula bar or in a Visual Studio Code text window; the “low” in low-code comes from the language’s concise, straightforward nature, which makes everyday programming tasks easy for both makers and developers.

Power Fx supports the full spectrum of development, from no-code makers with no programming background to pro-code professional developers, enabling diverse teams to collaborate while saving time and effort.

Using Power Fx in Desktop Flow

To use Power Fx as the expression language in a desktop flow, enable the corresponding toggle when you create the flow through the Power Automate for desktop console.

Picture1

Differences in Power Fx-Enabled Flows

Each Power Fx expression must start with an “=” (equals sign).

If you’re transitioning from flows where Power Fx is disabled, you might notice some differences. To streamline your experience while creating new desktop flows, here are some key concepts to keep in mind:

  • In the same fashion as Excel formulas, desktop flows that use Power Fx as their expression language use 1-based array indexing instead of 0-based indexing. For example, the expression =Index(numbersArray, 1) returns the first element of the numbersArray array.
  • Variable names are case-sensitive in desktop flows with Power Fx. For example, NewVar is different from newVar.
  • When Power Fx is enabled in a desktop flow, variable initialization is required before use. Attempting to use an uninitialized variable in Power Fx expressions results in an error.
  • The If action accepts a single conditional expression. Previously, it accepted multiple operands.
  • While flows without Power Fx enabled have the term “General value” to denote an unknown object type, Power Fx revolves around a strict type system. In Power Fx enabled flows, there’s a distinction between dynamic variables (variables whose type or value can be changed during runtime) and dynamic values (values whose type or schema is determined at runtime). To better understand this distinction, consider the following example. The dynamicVariable changes its type during runtime from a Numeric to a Boolean value, while dynamicValue is determined during runtime to be an untyped object, with its actual type being a Custom object:

With Power Fx Enabled

Picture2

With Power Fx Disabled

Picture3

  • Values that are treated as dynamic values are:
    • Data tables
    • Custom objects with unknown schema
    • Dynamic action outputs (for example, the “Run .NET Script” action)
    • Outputs from the “Run desktop flow” action
    • Any action output without a predefined schema (for example, “Read from Excel worksheet” or “Create New List”)
  • Dynamic values are treated similarly to the Power Fx untyped object and usually require explicit functions to convert them into the required type (for example, Bool() and Text()); see the sketch after this list. To streamline your experience, there’s an implicit conversion when using a dynamic value as an action input or as part of a Power Fx expression. There’s no validation during authoring, but if the conversion fails for the actual runtime value, a runtime error occurs.
  • A warning message stating “Deferred type provided” is presented whenever a dynamic variable is used. These warnings arise from Power Fx’s strict requirement for strong-typed schemas (strictly defined types). Dynamic variables aren’t permitted in lists, tables, or as a property for Record values.
  • By combining the Run Power Fx expression action with expressions using the Collect, Clear, ClearCollect, and Patch functions, you can emulate behavior found in the actions Add item to list and Insert row into data table, which were previously unavailable for Power Fx-enabled desktop flows. While both actions are still available, use the Collect function when working with strongly typed lists (for example, a list of files). This function ensures the list remains typed, as the Add Item to List action converts the list into an untyped object.
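
To make the last two points concrete, here is a minimal sketch. The variable names (statusValue, fileList, newFile) are illustrative placeholders rather than outputs that exist in every flow, so adapt them to your own actions.

  • Converting a dynamic value: in an If action's condition field, an expression such as =Text(statusValue) = "Approved" explicitly converts the dynamic statusValue to text before the comparison, and =Bool(statusValue) does the same for a true/false check.
  • Keeping a list strongly typed: in a Run Power Fx expression action, =Collect(fileList, newFile) appends newFile while fileList keeps its type, and =Clear(fileList) empties it again. Using the Add item to list action instead would convert fileList into an untyped object, as noted above.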

Examples

  • Entering =1 in an input field produces the numeric value 1.
  • Entering =variableName returns the value of the variableName variable.
  • The expression ={'prop': "value"} returns a record value equivalent to a custom object.
  • The expression =Table({'prop': "value"}) returns a Power Fx table that is equivalent to a list of custom objects.
  • The expression =[1, 2, 3, 4] creates a list of numeric values.
  • To access a value from a list, use the function Index(var, number), where var is the list's name and number is the position of the value to retrieve.
  • To access a data table cell using a column index, use the Index() function. =Index(Index(DataTableVar, 1), 2) retrieves the value from the cell in row 1, column 2, and =Index(DataRowVar, 1) retrieves the value in the first column of the data row DataRowVar.
  • Define the Collection Variable:

Give your collection a name (e.g., myCollection) in the Variable Name field.

In the Value field, define the collection. Collections in Power Automate Desktop (PAD) are essentially arrays, which you can define by enclosing the values in square brackets [ ].

1. Create a Collection of Numbers

Action: Set Variable

Variable Name: myNumberCollection

Value: [1, 2, 3, 4, 5]

2. Create a Collection of Text (Strings)

Action: Set Variable

Variable Name: myTextCollection

Value: ["Alice", "Bob", "Charlie"]

3. Create a Collection with Mixed Data Types

You can also create collections with mixed data types. For example, a collection with both numbers and strings:

Action: Set Variable

Variable Name: mixedCollection

Value: [1, "John", 42, "Doe"]
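
Once these collections exist, the same Index() function shown earlier reads values back out. A brief illustration, assuming the sample values defined above:

  • =Index(myTextCollection, 2) returns "Bob", because indexing is 1-based.
  • =Index(mixedCollection, 4) returns "Doe".
  • In an input field, The total is ${Index(myNumberCollection, 1) + Index(myNumberCollection, 5)} produces "The total is 6" through the string interpolation syntax described next.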

  • To include an interpolated value in an input or a UI/web element selector, use the following syntax: Text before ${variable/expression} text after
    • Example: The total number is ${Sum(10, 20)}

If you want to use a dollar sign ($) followed by an opening curly brace ({) within a Power Fx expression or in the syntax of a UI/web element selector, without Power Automate for desktop treating it as string interpolation syntax, use the following syntax: $${ (the first dollar sign acts as an escape character).
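
As a quick illustration (the input text is hypothetical): entering The total is ${Sum(10, 20)} and the raw token is $${Sum(10, 20)} in an input field evaluates the first placeholder to 30, while the escaped second occurrence is passed through as literal text rather than being evaluated.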

Available Power Fx functions

For the complete list of all available functions in Power Automate for desktop flows, go to Formula reference – desktop flows.

Known Issues and Limitations

  • The following actions from the standard library of automation actions aren’t currently supported:
    • Switch
    • Case
    • Default case
  • Some Power Fx functions presented through IntelliSense aren’t currently supported in desktop flows. When used, they display the following design-time error: “Parameter ‘Value’: PowerFx type ‘OptionSetValueType’ isn’t supported.”

 

When and When Not to Use Power Fx on Desktop

When to Use Power Fx in Power Automate Desktop

  1. Complex Logic: If you need to implement more complicated conditions, calculations, or data transformations in your flows, Power Fx can simplify the process.
  2. Integration with Power Apps: If your automations are closely tied to Power Apps and you need consistent logic between them, Power Fx can offer a seamless experience as it’s used across the Power Platform.
  3. Data Manipulation: Power Fx excels at handling data operations like string manipulation, date formatting, mathematical operations, and more. It may be helpful if your flow requires manipulating data in these ways.
  4. Reusability: Power Fx functions can be reused in different parts of your flow or other flows, providing consistency and reducing the need for redundant logic.
  5. Low-Code Approach: If you’re building solutions that require a lot of custom logic but don’t want to dive into full-fledged programming, Power Fx can be a good middle ground.

When Not to Use Power Fx in Power Automate Desktop

  1. Simple Flows: For straightforward automation tasks that don’t require complex expressions (like basic UI automation or file manipulations), using Power Fx could add unnecessary complexity. It’s better to stick with the built-in actions.
  2. Limited Support in Desktop: While Power Fx is more prevalent in Power Apps, Power Automate Desktop doesn’t fully support all Power Fx features available in other parts of the Power Platform. If your flow depends on more advanced Power Fx capabilities, it might be limited in Power Automate Desktop.
  3. Learning Curve: Power Fx has its own syntax and can take time to get used to, especially if you’re accustomed to more traditional automation methods. If you’re new to it, weigh the time it takes to learn Power Fx against simply using the built-in features in Power Automate Desktop.

Conclusion

Yes, use Power Fx if your flow needs custom logic, data transformation, or integration with Power Apps and you’re comfortable with the learning curve.

No, avoid it if your flows are relatively simple or if you’re primarily focused on automation tasks like file manipulation, web scraping, or UI automation, where Power Automate Desktop’s native features will be sufficient.

7 Steps to Define a Data Governance Structure for a Mid-Sized Bank (Without Losing Your Mind)
https://blogs.perficient.com/2025/03/25/7-steps-to-define-a-data-governance-for-a-mid-sized-bank-without-losing-your-mind/

A mid-sized bank I was consulting with for their data warehouse modernization project finally realized that data isn’t just some necessary but boring stuff the IT department hoards in their digital cave. It’s the new gold, the ticking time bomb of risk, and the bane of every regulatory report that’s ever come back with more red flags than a beach during a shark sighting.

Welcome to the wild world of data governance, where dreams of order collide with the chaos of reality. Before you start mainlining espresso and squeezing that stress ball shaped suspiciously like your last audit report, let’s break this down into 7 steps that might just keep you sane.

  1. Wrangle Some Executive Buy-In

Let’s not pretend. Without exec sponsorship, your data governance initiative is just a Trello board with high hopes. You need someone in a suit (preferably with a C in their title) to not just bless your mission but be genuinely convinced of it, and preferably to get it added to their KPIs this year.

Pro tip to get that signature: Skip the jargon about “metadata catalogs” and go straight for the jugular with words like “penalties” and “reputational risk.” Nothing gets an exec’s attention quite like the threat of their club memberships being revoked.

  2. Tame the Scope Before It Turns Into a Stampede

Organizations have a knack for letting projects balloon faster than a tech startup’s valuation. Be ruthless. You don’t need to govern every scrap of data from the CEO’s coffee order to the janitor’s mop schedule.

Focus on the critical stuff:

  • Customer data (because knowing who owes you money is kind of important)
  • Transaction history (aka “where did all the money go?”)
  • Regulatory reporting (because nobody likes surprise visits from auditors)

Start small, prove it works, then expand. Rome wasn’t built in a day, and neither was a decent data governance structure.

  3. Pick a Framework (But Don’t Treat It Like Holy Scripture)

Sure, you could go full nerd and dive into DAMA-DMBOK, but unless you’re gunning for a PhD in bureaucracy, keep it simple. Aim for a model that’s more “I get it” and less “I need an interpreter”.

Focus on:

  • Who’s responsible for what (RACI, if you must use an acronym)
  • What data belongs where
  • Rules that sound smart but won’t make everyone quit in protest

Remember, frameworks are like diets – the best one is the one you’ll actually stick to.

  4. Recruit Your Data Stewards (and Convince Them It’s Not a Punishment)

Your data stewards are the poor souls standing between order and chaos, armed with nothing but spreadsheets and a dwindling supply of patience. Look for folks who:

  • Actually understand the data (a rare breed, cherish them)
  • Can handle details without going cross-eyed
  • Won’t melt down when stuck between the rock of compliance and the hard place of IT

Bonus: Give them a fancy title like “Data Integrity Czar.” It won’t pay more, but it might make them feel better about their life choices.

  5. Define Your Terms (Or Prepare for the “What Even Is a ‘Customer’?” Wars)

Get ready for some fun conversations about what words mean. You’d think “customer” would be straightforward, but you’d be wrong. So very, very wrong.

  • Establish a single source of truth
  • Create a glossary that doesn’t read like a legal document
  • Accept that these definitions will change more often than a teenager’s social media profile

It’s not perfect, but it’s governance, not a philosophical treatise on the nature of reality.

  6. Build Your Tech Stack (But Don’t Start with the Shiny Toys)

For the love of all that is holy and GDPR-compliant, don’t buy a fancy governance tool before you know what you’re doing. Your tech should support your process, not be a $250,000 band-aid for a broken system.

Figure out:

  • Who gets to see what (and who definitely shouldn’t)
  • How you’re classifying data (beyond “important” and “meh”)
  • Where your golden records live
  • What to do when it all inevitably goes sideways

Metadata management and data lineage tracking are great, but they’re the icing, not the cake.

  7. Make It Boring (In a Good Way)

The true test of your governance structure isn’t the PowerPoint that put the board to sleep. It’s whether it holds up when someone decides to get creative with data entry at 4:59 PM on Fridays.

So:

  • Schedule regular data quality check-ups
  • Treat data issues like actual problems, not minor inconveniences
  • Set up alerts (but not so many that everyone ignores them)
  • Reward the good, don’t just punish the bad

Bonus: Document Everything (Then Document Your Documentation)

If it’s not written down, it doesn’t exist. If it’s written down but buried in a SharePoint site that time forgot, it still doesn’t exist.

Think of governance like flossing – it’s not exciting, but it beats the alternative.

Several mid-sized banks have successfully implemented data governance structures, demonstrating the real-world benefits of these strategies. Here are a few notable examples:

Case Study of a Large American Bank

This bank’s approach to data governance offers valuable lessons for mid-sized banks. The bank implemented robust data governance practices to enhance data quality, security, and compliance. Their focus on:

  • Aligning data management with regulatory requirements
  • Ensuring accurate financial reporting
  • Improving decision-making processes

resulted in better risk management, increased regulatory compliance, and enhanced customer trust through secure and reliable financial services.

Regional Bank Case Study

A regional bank successfully tackled data quality issues impacting compliance, credit, and liquidity risk assessment. Their approach included:

  1. Establishing roles and responsibilities for data governance
  2. Creating domains with assigned data custodians and stewards
  3. Collecting and simplifying knowledge about critical data elements (CDEs)

For example, in liquidity risk assessment, they identified core CDEs such as liquidity coverage ratio and net stable funding ratio.

Mid-Sized Bank Acquisition

In another case, a major bank acquired a regional financial services company and faced the challenge of integrating disparate data systems. Their data governance implementation involved:

  • Launching a data consolidation initiative
  • Centralizing data from multiple systems into a unified data warehouse
  • Establishing a cross-functional data governance team
  • Defining clear data definitions, ownership rules, and access permissions

This approach eliminated data silos, created a single source of truth, and significantly improved data quality and reliability. It also facilitated more accurate reporting and analysis, leading to more effective risk management and smoother banking services for customers.

Parting Thought

In the end, defining a data governance structure for your bank isn’t about creating a bureaucratic nightmare. It’s about keeping your data in check, your regulators off your back, and your systems speaking the same language.

When it all comes together, and your data actually starts making sense, you’ll feel like a criminal mastermind watching their perfect plan unfold. Only, you know, legal and with fewer car chases.

Now go forth and govern. May your data be clean, your audits be boring, and your governance meetings be mercifully short.

From FinOps Bean Counting to Value Harvesting: The Rise of Unit Economics
https://blogs.perficient.com/2025/03/25/from-finops-bean-counting-to-value-harvesting-the-rise-of-unit-economics/

Applying FinOps concepts to your cloud consumption is not new. It’s often treated as an IT hygiene task: necessary but not strategic. And while cost optimization and waste reduction are worthy efforts, it’s all too common to see these activities fall victim to higher daily priorities. When they are in focus, the work is often limited to chasing low-hanging wins with cloud-native services that aren’t built to deliver a comprehensive picture of cloud spend. It’s just one of those activities that is hard to get too excited about.

I challenge us to reboot this thinking with a fresh, outcome-focused perspective:

First, let’s expand FinOps to consider the bigger picture of technology spending, which the FinOps Foundation calls “Cloud+” in its 2025 State of FinOps Report (https://data.finops.org). Complexity is increasing: multicloud and hybrid environments are the norm. Real technology spend includes observability tools, containers, data platforms, SaaS licensing, AI/ML, and peripheral services, sometimes hand-waved away as shadow IT or simply accepted as part of an unavoidable cost center. The more we can pull in these broader costs, the more accurate our insight into technology investments. Which leads us to…

Second, let’s start thinking about Unit Economics. This is a challenge, and only a small percentage of organizations fully get there, but the business payoff in shifting to this mindset can bring immediate business performance results, well beyond just optimizing public cloud infrastructure. The story we need to tell in FinOps isn’t “How much are we spending?”; it’s whether we are profiting from our investments and understanding the impact on revenue and margin if cost drivers change. Let’s make sure every dollar spent is a good dollar aligned to business objectives. Controlling costs is necessary. Maximizing value is strategic.

Measuring What Matters

Unit Economics is about shifting focus—from tracking aggregate cloud spend to measuring value at the most meaningful level: per transaction, per customer, per workload, or per outcome. These metrics bridge the gap between cloud consumption and business impact, aligning technology decisions with revenue, profitability, customer experience, and other key performance indicators.

Unlike traditional IT financials, unit economic metrics are built to reflect how your business actually operates. They unify Finance, Engineering, and Product teams around shared goals, fostering a mindset where cost efficiency and value creation go hand in hand. When used effectively, these metrics inform everything from financial forecasting, product planning, digital strategy, M&A onboarding, and feature delivery—turning cloud from a cost center into a competitive advantage.

Asking the Right Questions

Establishing effective unit economics begins with curiosity, a willingness to think differently, and meaningful collaboration. Consider these exploratory questions:

  • Do we have the right visibility into our overall technical spend?
  • Does it feel like there has to be a better way to do this?
  • How mature are our tagging, cost allocation, and mapping practices?
  • Can we define measures that reflect our company’s business performance goals?
  • Are Finance, Product, and Engineering collaborating on goals, reporting, and forecasting?
  • Do we have the right tools to build a complete picture of technology value as we scale?

From Cost to Value: The Unit Economics Flow

To put unit economics into action, organizations can follow this basic flow:

  1. Collect cloud and technology cost data across hybrid infrastructure, marketing, SaaS, and third-party tools
  2. Allocate costs to business segments such as products, BU teams, or delivered services
  3. Define units of value that reflect relevant and meaningful business outcomes
  4. Integrate with business systems to leverage financial, sales & marketing, labor, and performance metrics
  5. Calculate and normalize unit metrics across products, departments, and regions
  6. Visualize, monitor, and act through dashboards, forecasts, and optimization reviews

This approach is a baseline for moving towards more informed decisions and the potential impact of future investments.
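
As a simple worked illustration (the numbers are hypothetical): if the fully allocated monthly technology cost for an online checkout service is $120,000, covering cloud infrastructure, SaaS licensing, observability, and support labor, and the service processes 2.4 million transactions that month, the unit metric is $120,000 / 2,400,000 = $0.05 per transaction. Tracked over time and compared against revenue per transaction, that single figure says far more about value than aggregate cloud spend ever could.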

Maturing Unit Economics with Apptio Cloudability Intelligence

Technology alone doesn’t solve this challenge, but the right platform accelerates the journey. We leverage Apptio Cloudability to bring at-scale intelligence and automation to financial operating models. With Cloudability, our clients can:

  • Automate cost allocation using advanced tagging, showback, and chargeback models
  • Visualize unit cost metrics by product, application, or team
  • Simplify visibility into multicloud and container-based environments
  • Forecast spend using trends and usage patterns tied to real business activity
  • Detect cost anomalies and surface optimization recommendations
  • Track and benchmark progress on key metrics like Cost per Customer or Revenue per Technology Dollar

Our goal is to bring the right intelligence to fit your business strategy, not just your IT infrastructure, delivering insights into your everyday operating model and reinforcing a culture of accountability and shared ownership. Challenge yourself to change your mindset on cost vs. value and see how unit economics can drive impactful outcomes for your organization.

Activate to SFTP from Salesforce Data Cloud
https://blogs.perficient.com/2025/03/12/activate-to-sftp-from-salesforce-data-cloud/

SFTP?  Isn’t that old?

It is an oldie, but a goodie.  🙂

With Data Cloud we can send data to many external destinations, like Marketing Cloud Engagement or Amazon S3, through Activation Targets. But there are times when we are working with a destination system, like Eloqua or Marketo, that has solid support for SFTP. SFTP and Data Cloud work well together!

Even with Marketing Cloud Engagement you might want to get data flowing into Automation Studio instead of pushing directly to a Data Extension or Journey. SFTP would allow that CSV file to flow into Automation Studio where, for example, an SSJS script could loop through those rows and send mass SMS messages.

Is it secure?

Yes. As we will see in this blog post, the SFTP setup through Data Cloud supports both an SSH key with a passphrase and a password on the SFTP site itself.

Let’s connect to Marketing Cloud Engagement through SFTP!

There are five main pieces to setup and test this.

  1. Create a new SSH Key
  2. Configure the SFTP Site in Marketing Cloud Engagement
  3. Test the SFTP Connection using a tool like FileZilla
  4. Configure that SFTP Activation Target in Data Cloud
  5. Build a Segment and Activation to leverage that SFTP Activation Target

This will feel like a lot of steps, but it really does not take that long to do. Leveraging these out-of-the-box Activation Targets, like this SFTP one, is going to save tons of time in the long run.

1. Create the new SSH Key

Here is a good blog post that introduces what an SSH key is and how it works: https://www.sectigo.com/resource-library/what-is-an-ssh-key

Here are a couple of good articles on how to generate an SSH key.

  1. https://www.purdue.edu/science/scienceit/ssh-keys-windows.html
  2. https://www.ssh.com/academy/ssh/keygen

Very important: Marketing Cloud only accepts SSH keys generated in a specific way. See https://help.salesforce.com/s/articleView?id=000380791&type=1

I am on a Windows machine, so I am going to open a command prompt and use the OpenSSH ssh-keygen command.

Sftp 01

Once in the command prompt, type the ssh-keygen command.

Sftp 02

Now enter your filename.

Sftp 03

Now enter your passphrase. This is basically a password tied to your SSH key that makes it harder to break. It is different from the SFTP password that will be set on the Marketing Cloud Engagement side.

Sftp 04

Now that your passphrase was entered twice correctly the SSH Key is generated.

Sftp 06

When using the command prompt, the files were automatically created in my C:\Users\Terry.Luschen directory.

Sftp 07

Now, in the command prompt, as stated in step 3 of the Salesforce documentation above, you need to run one final command.

Change the key to an RFC4716 (SSH2) key format:

  1. ssh-keygen -e -f originalfilename.pub > newfilename
  2. In our example above, the command was ssh-keygen -e -f MCE_SSH_01.pub > MCE_SSH_01b

Sftp 12

The three files will look something like:

  1. MCE_SSH_01.pub – This is the Public Key file to be loaded into Marketing Cloud Engagement.
  2. MCE_SSH_01 – This is the Private Key file which we will use to load into Data Cloud and FileZilla
  3. MCE_SSH_01b – This is another Public Key file that can be used to load into Marketing Cloud Engagement

I opened the .pub file and removed the comment.

I also added a file extension of .txt to the MCE_SSH_01b file so it is now named MCE_SSH_01b.txt

Now that we have generated our SSH files, we can upload the Public Key to Marketing Cloud Engagement.

2. Configure the SFTP Site in Marketing Cloud Engagement

Log into Marketing Cloud Engagement

Go to Setup, Administration, Data Management, Key Management

Sftp 08

Click ‘Create’ on the ‘Key Management’ page

Sftp 09

Fill out the ‘New Key’ details.

Make sure SSH is selected.

Select the ‘Public’ Key file you created earlier which has the .pub extension.

Check the ‘Public Key’ checkbox.

Sftp 10

Save the Key

Now go to Setup, Administration, Data Management, FTP Accounts

Sftp 14

Use the ‘Create User’ button to create a new User.

Sftp 15

Fill out the new FTP User page by entering an email address and password. Note that this password is different from the passphrase created above that was tied to the SSH key. Click on Next.

Sftp 16

Select the ‘SSH Key and Password’ radio button.   Use the file picker to select the Marketing Cloud Key you just created above.  Click on Next.

Sftp 17

Select the type of security you need.  In this screen shot everything is selected but typically you should only select the checkboxes that are absolutely necessary.  Click on Next.

Sftp 18

If you need to restrict access to certain IPs, fill out this screen. In our example we are not restricting access to just Data Cloud IPs. Click on Next.

Sftp 19

Typically you would leave this screen as is. It keeps the root folder as the default, and when you later configure the tool that will send data to the SFTP site, you can select the exact folder to use. Click on Save.

Sftp 20

Yeah! You have now configured your destination SFTP site.

Now we can test this!

3. Test the SFTP Connection using a tool like FileZilla

  1. I like to test using FileZilla, but you could use other SFTP tools.
  2. Download FileZilla and install it on your computer.
  3. Choose Edit, Settings…
    1. Select SFTP under Connection and choose ‘Add key file..’ button
      Filezilla 01 Privatekey
    2. You can either pick the original private key file and FileZilla will produce another file for you. Or you can use the SSH2 file that was produced in the CMD prompt, which was named MCE_SSH_01b.txt in our example above.
    3. Depending on which file is uploaded you might have to enter the Passphrase.
  4. Open FileZilla and choose File, Site Manager…
  5. Click ‘New Site’ and fill out the information on the right.  Save it by clicking on OK.
    Filezilla 01 Newsite
  6. Open up your Site and click on ‘Connect’ at the bottom of the screen.
    1. You will be prompted to enter your Passphrase that is connected to your SSH Key.
  7. Success!   FileZilla shows you the folders on the Marketing Cloud Engagement SFTP Site!
    Filezilla 01 Successfulconnection

4. Configure the SFTP Activation Target in Data Cloud

  1. Now let’s do the same connection in Data Cloud
  2. In Data Cloud Setup go to Data Cloud, External Integrations, Other Connectors
    Sftp In Datacloud 01
  3. Choose the ‘Target’ tab and ‘Secure File Transfer (SFTP)’.  Click on Next
    Sftp In Datacloud 02
  4. Fill out the connection information.
    1. The connection Name and API Name can be anything you want it to be
    2. The ‘Authentication Method’ is ‘SSH Private Key & Password’
    3. The Username and Password are the values from the Marketing Cloud SFTP User.
    4. The SSH Private Key is the first file created in the CMD prompt. It was the MCE_SSH_01 file for us, with no file extension.
    5. The Passphrase is the passphrase entered in the CMD prompt when generating your Key.
    6. No need to put anything in the ‘PGP Encryption’ field.
    7. It should look like this now…
      Sftp In Datacloud 03 Sftp Settings Top
    8. In the ‘Connection Details’ section…
      1. Host Name and Port are from the Marketing Cloud SFTP Screen
        Sftp In Datacloud 05 Hostname And Port
      2. It should look like this now…
        Sftp In Datacloud 04 Sftp Settings Bottom
      3. You can ‘Test’ your connection before saving it.
  5. Now you need to create an Activation Target
    1. Open Data Cloud App
    2. Go to the Activation Targets tab, Click on New
      Activation Target 0
    3. Select ‘Secure File Transfer (SFTP)’ and click on ‘Next’
      Activation Target 1
    4. Fill in the ‘New Activation Target’ screen.
      1. Select the SFTP Connector that you created earlier in the ‘Select SFTP Connector’ drop-down.
        Activation Target 2
      2. Click on Next
    5. Fill out the final page selecting your File Format and other options.
      1. Note the maximum File size is 500MB.
        Activation Target 4
      2. If you leave the ‘File Name Type’ as Predetermined then you should always get a unique filename since it will be appended with a ‘Date/Time Suffix’.
        Activation Target 5

5. Build a Segment and Activation to leverage that SFTP Activation Target

  1. Open up the Data Cloud App
  2. Create your Segment from the Segment Tab
  3. Go to the Activations tab and click on ‘New’
    Activation Target 6
  4. Select your Segment and the ‘Activation Target’ we created above, which is your SFTP site. Click on Continue.
  5. Add ‘Email’ or ‘SMS’ fields as necessary for your Activation.  Click on Next.
    Activation Target 7
  6. Fill out the ‘Add Attributes and Filters to Your Activation’ as necessary.  Click on Next.
    Activation Target 8
  7. Give your Activation a name and finalize Schedule and Refresh Type.  Click on Save.
    Activation Target 9
  8. You should now have your new Activation.
    Activation Target 10
  9. Go back to your Segment and choose ‘Publish Now’ if that is how you need to test your Segment
    Activation Target 11

Conclusion

After you publish your segment, it should run and your file should show up on your Marketing Cloud Engagement SFTP site. You can test this by opening FileZilla, connecting, and looking in the proper folder.
Successpublish

That is it!  SFTP and Data Cloud work well together!

We see that with just clicks and configuration we can send Segment data created in Data Cloud to an SFTP site! We are using the standard ‘Activation Target’ and ‘Activation’ setup screens in Data Cloud.

If you are brainstorming about use cases for Agentforce, please check out this blog post from my colleague Darshan Kukde!

Here is another blog post where I discuss using unstructured data in Salesforce Data Cloud so your Agent in Agentforce can help your customers in new ways!

If you want a demo of this in action or want to go deeper please reach out and connect!

Responsible Design Starts within the Institution
https://blogs.perficient.com/2025/03/08/responsible-design-starts-within-the-institution/

The global business landscape is complex, and responsible design has emerged as a critical imperative for organizations across sectors. It represents a fundamental shift from viewing design merely as a creative output to recognizing it as an ethical responsibility embedded within institutional structures and processes.

True transformation toward responsible design practices cannot be achieved through superficial initiatives or isolated projects. Rather, it requires deep institutional commitment—reshaping governance frameworks, decision-making processes, and organizational cultures to prioritize human dignity, social equity, and environmental stewardship.

This framework explores how institutions can move beyond performative gestures toward authentic integration of responsible design principles throughout their operations, creating systems that consistently produce outcomes aligned with broader societal values and planetary boundaries.

The Institutional Imperative

What is Responsible Design?
Responsible design is the deliberate creation of products, services, and systems that prioritize human wellbeing, social equity, and environmental sustainability. While individual designers often champion ethical approaches, meaningful and lasting change requires institutional transformation. This framework explores how organizations can systematically embed responsible design principles into their core structures, cultures, and everyday practices.

Why Institutions Matter
The imperative for responsible design within institutions stems from their unique position of influence. Institutions have extensive reach, making their design choices impactful at scale. They establish standards and expectations for design professionals, effectively shaping the future direction of the field. Moreover, integrating responsible design practices yields tangible benefits: enhanced reputation, stronger stakeholder relationships, and significantly reduced ethical and operational risks.

Purpose of This Framework
This article examines the essential components of responsible design, showcases institutions that have successfully implemented ethical design practices, and provides practical strategies for navigating the challenges of organizational transformation. By addressing these dimensions systematically, organizations can transcend isolated ethical initiatives to build environments where responsible design becomes the institutional default—creating cultures where ethical considerations are woven into every decision rather than treated as exceptional concerns.

Defining Responsible Design
Responsible design encompasses four interconnected dimensions: ethical consideration, inclusivity, sustainability, and accountability. These dimensions form a comprehensive framework for evaluating the ethical, social, and environmental implications of design decisions, ultimately ensuring that design practices contribute to a more just and sustainable world.

Interconnected Dimensions
These four dimensions function not as isolated concepts but as integrated facets of a holistic approach to responsible design. Ethical consideration must guide inclusive practices to ensure diverse stakeholder perspectives are genuinely valued and incorporated. Sustainability principles should drive robust accountability measures that minimize environmental harm while maximizing social benefit. By weaving these dimensions together throughout the design process, institutions can cultivate a design culture that authentically champions human wellbeing, social equity, and environmental stewardship in every project.

A Framework for the Future
This framework serves as both compass and blueprint, guiding institutions toward design practices that meaningfully contribute to a more equitable and sustainable future. When organizations fully embrace these dimensions of responsible design, they align their creative outputs with their deepest values, enhance their societal impact, and participate in addressing our most pressing collective challenges. The result is design that not only serves immediate business goals but also advances the greater good across communities and generations.

Ethical Consideration

Understanding Ethical Design
Ethical consideration: A thoughtful evaluation of implications across diverse stakeholders. This process demands a comprehensive assessment of how design decisions might impact various communities, particularly those who are vulnerable or historically overlooked. Responsible designers must look beyond intended outcomes to anticipate potential unintended consequences that could emerge from their work.

Creating Positive Social Impact
Beyond harm prevention, ethical consideration actively pursues opportunities for positive social impact. This might involve designing solutions that address pressing social challenges or leveraging design to foster inclusion and community empowerment. When institutions weave ethical considerations throughout their design process, they position themselves to contribute meaningfully to social equity and justice through their creations.

Implementation Strategies
Organizations can embed ethical consideration into their practices through several concrete approaches: establishing dedicated ethical review panels, conducting thorough stakeholder engagement sessions, and developing robust ethical design frameworks. By placing ethics at the center of design decision-making, institutions ensure their work not only reflects their core values but also advances collective wellbeing across society.

Inclusive Practices

Understanding Inclusive Design
Inclusive practices: Creating designs that meaningfully serve and represent all populations, particularly those historically marginalized. This approach demands that designers actively seek diverse perspectives, challenge their inherent biases, and develop solutions that transcend physical, cognitive, cultural, and socioeconomic barriers. By centering previously excluded voices, inclusive design creates more robust and universally beneficial outcomes.

Empowering Marginalized Communities
True inclusive design transcends mere accommodation—it fundamentally shifts power dynamics by elevating marginalized communities from subjects to co-creators. This transformation might involve establishing paid consulting opportunities for community experts, creating accessible design workshops in underserved neighborhoods, or forming equitable partnerships where decision-making authority is genuinely shared. When institutions embrace these collaborative approaches, they produce designs that authentically address community needs while building lasting relationships based on mutual respect and shared purpose.

Implementation Strategies
Organizations can systematically embed inclusive practices by recruiting design teams that reflect diverse lived experiences, conducting immersive community-based research with appropriate compensation for participants, and establishing measurable inclusive design standards with accountability mechanisms. By integrating these approaches throughout their processes, institutions not only create more accessible and equitable designs but also contribute to dismantling systemic barriers that have historically limited full participation in society.

Sustainability

Definition and Core Principles
Sustainability: Minimizing environmental impact and resource consumption across the entire design lifecycle. This comprehensive approach spans from raw material sourcing through to end-of-life disposal, challenging designers to eliminate waste, preserve natural resources, and significantly reduce pollution. Sustainable design necessitates careful consideration of long-term environmental consequences, including addressing critical challenges like climate change, habitat destruction, and biodiversity loss.

Beyond Harm Reduction
True sustainability transcends mere harm reduction to actively generate positive environmental outcomes. This transformative approach creates products and services that harness renewable energy, conserve vital water resources, or restore damaged ecosystems. When institutions fully embrace sustainability principles, they contribute meaningfully to environmental resilience and help foster regenerative systems that benefit both present and future generations.

Implementation Strategies
Organizations can embed sustainability through strategic, measurable approaches including rigorous lifecycle assessments, integrated eco-design methodologies, and significant investments in renewable energy infrastructure and waste reduction technologies. By elevating sustainability to a core organizational value, institutions can dramatically reduce their ecological footprint while simultaneously driving innovation and contributing to planetary health and wellbeing.

Accountability

Definition and Core Principles
Accountability: Taking ownership of both intended and unintended outcomes of design decisions. This principle demands establishing robust systems for monitoring and evaluating design impacts, along with mechanisms for corrective action when necessary. Accountable designers maintain transparency throughout their process, actively seek stakeholder feedback, and acknowledge responsibility for any negative consequences, even those that were unforeseen. This foundation of responsibility ensures designs serve their intended purpose while minimizing potential harm.

Learning and Growth
True accountability transcends mere acknowledgment of errors—it transforms mistakes into catalysts for improvement. This transformative process involves critically examining design failures, implementing process refinements, enhancing designer training, and establishing more comprehensive ethical frameworks. When institutions embrace accountability as a pathway to excellence rather than just a response to failure, they cultivate stakeholder trust while continuously elevating the quality and integrity of their design practices.

Implementation Strategies
Organizations can foster a culture of accountability by establishing well-defined responsibility chains, implementing comprehensive monitoring systems, and creating accessible channels for feedback and remediation. Effective implementation includes regular ethical audits, transparent reporting practices, and systematic incorporation of lessons learned. By prioritizing accountability at every organizational level, institutions ensure their designs consistently uphold ethical standards, promote inclusivity, and advance sustainability goals.

Patagonia’s Environmental Responsibility

Environmental Integration in Design
Patagonia has revolutionized responsible design by weaving environmental considerations into the fabric of its product development process. The company’s groundbreaking “Worn Wear” program—which actively encourages repair and reuse over replacement—emerged organically from the organization’s core values rather than as a response to market trends. Patagonia’s governance structure reinforces this commitment through rigorous environmental impact assessments at every design stage, ensuring sustainability remains central rather than peripheral to innovation.

Sustainability Initiatives
Patagonia demonstrates unwavering environmental responsibility through comprehensive initiatives that permeate all aspects of their operations. The company has pioneered the use of recycled and organic materials in outdoor apparel, dramatically reduced water consumption through innovative manufacturing processes, and committed to donating 1% of sales to grassroots environmental organizations, a pledge that has generated over $140 million in grants to date. These initiatives represent the concrete manifestation of Patagonia’s mission rather than superficial corporate social responsibility efforts.

Environmental Leadership as a Competitive Advantage
Patagonia’s remarkable business success powerfully illustrates how environmental responsibility can create lasting competitive advantage in the marketplace. By elevating environmental considerations from afterthought to guiding principle, the company has cultivated a fiercely loyal customer base willing to pay premium prices for products aligned with their values. Patagonia’s approach has redefined industry standards for sustainable business practices, serving as a compelling case study for organizations seeking to integrate responsible design into their operational DNA while achieving exceptional business results.

IDEO’s Human-Centered Evolution

Organizational Restructuring
IDEO transformed from a traditional product design firm into a responsible design leader through deliberate organizational change. The company revolutionized its project teams by integrating ethicists and community representatives alongside designers, ensuring diverse perspectives influence every creation. Their acclaimed “Little Book of Design Ethics” now serves as the foundational document guiding all projects, while their established ethics review board rigorously evaluates proposals against comprehensive responsible design criteria before approval.

Ethical Integration in Design Process
IDEO’s evolution exemplifies the critical importance of embedding ethical considerations throughout the design process. By incorporating ethicists and community advocates directly into project teams, the company ensures that marginalized voices are heard, and ethical principles shape all design decisions from conception to implementation. The “Little Book of Design Ethics” functions not simply as a reference manual but as a living framework that empowers designers to navigate complex ethical challenges with confidence and integrity.

Cultural Transformation
IDEO’s remarkable journey demonstrates that responsible design demands a fundamental cultural shift within organizations. The company has cultivated an environment where ethical awareness and accountability are celebrated as core values rather than compliance requirements. By prioritizing human impact alongside business outcomes, IDEO has established itself as the preeminent leader in genuinely human-centered design. Their case offers actionable insights for institutions seeking to implement responsible design practices while maintaining innovation and market leadership.

Addressing Resistance to Change
Institutional transformation inevitably encounters resistance. Change disrupts established routines and challenges comfort zones, often triggering reactions ranging from subtle hesitation to outright opposition. Overcoming this resistance requires thoughtful planning, transparent communication, and meaningful stakeholder engagement throughout the process.

Why People Resist Change
Resistance typically stems from several key factors:
• Fear of the unknown and potential failure
• Perceived threats to job security, status, or expertise
• Skepticism about the benefits compared to required effort
• Attachment to established processes and organizational identity
• Past negative experiences with change initiatives

Effective Strategies for Change Management
• Phased implementation with clearly defined pilot projects that demonstrate value
• Identifying and empowering internal champions across departments to model and advocate for new approaches
• Creating safe spaces for constructive critique of existing practices without blame
• Developing narratives that connect responsible design to institutional identity and core values

Keys to Successful Transformation
By implementing these strategies, institutions can cultivate an environment that embraces rather than resists change. Transparent communication creates trust, active stakeholder engagement fosters ownership, and focusing on shared values helps align diverse perspectives. When people understand both the rationale for change and their role in the transformation process, resistance diminishes and the foundation for responsible design practices strengthens.

Balancing Competing Priorities
The complex tension between profit motives and ethical considerations demands sophisticated strategic approaches. Modern institutions navigate a challenging landscape of competing demands: maximizing shareholder value, meeting evolving customer needs, and fulfilling expanding social and environmental responsibilities. Successfully balancing these interconnected priorities requires thoughtful deliberation and strategic decision-making that acknowledges their interdependence.

Tensions in Modern Organizations
These inherent tensions can be effectively managed through:
• Developing comprehensive metrics that capture long-term value creation beyond quarterly financial results, including social impact assessments and sustainability indicators
• Identifying and prioritizing “win-win” opportunities where responsible design enhances market position, builds brand loyalty, and creates competitive advantages

Strategic Decision Frameworks
• Creating robust decision frameworks that explicitly weigh ethical considerations alongside financial metrics, allowing for transparent evaluation of tradeoffs
• Building compelling business cases that demonstrate how responsible design significantly reduces long-term risks related to regulation, reputation, and resource scarcity

Long-term Value Integration
By thoughtfully integrating ethical considerations into core decision-making processes and developing nuanced metrics that capture multidimensional long-term value creation, institutions can successfully reconcile profit motives with responsible design principles. This strategic approach enables organizations to achieve sustainable financial success while meaningfully contributing to a more just, equitable, and environmentally sustainable world.

Beyond Token Inclusion
Meaningful participation requires addressing deep-rooted power imbalances in institutional structures. Too often, inclusion is reduced to superficial gestures—inviting representatives from marginalized communities to consultations while denying them genuine influence over outcomes and decisions that affect their lives.

The Challenge of Meaningful Participation
To achieve authentic participation, institutions must confront and transform these entrenched power dynamics. This means moving beyond symbolic representation to creating spaces where traditionally excluded voices carry substantial weight in shaping both processes and outcomes.

Key Requirements for True Inclusion:
• Redistributing decision-making authority through participatory governance structures that give community members voting rights on critical decisions
• Providing fair financial compensation for community members’ time, expertise, and design contributions—recognizing their input as valuable professional consultation
• Implementing responsive feedback mechanisms with sufficient authority to pause, redirect, or fundamentally reshape projects when community concerns arise
• Establishing community oversight boards with substantive veto power and resources to monitor implementation

Building Equity Through Empowerment
By fundamentally redistributing decision-making authority and genuinely empowering marginalized communities, institutions can transform design processes from extractive exercises to collaborative partnerships. This shift ensures that design benefits flow equitably to all community members, not just those with pre-existing privilege. Such transformation demands more than good intentions—it requires concrete commitments to equity, justice, and collective accountability.

The Microsoft Inclusive Design Transformation

Restructuring Design Hierarchy
Microsoft fundamentally transformed its design process by establishing direct reporting channels between accessibility teams and executive leadership. This strategic restructuring ensured inclusive design considerations could not be sidelined or overridden by product managers focused solely on deadlines or feature development. Additionally, they created a protected budget specifically for community engagement that was safeguarded from reallocation to other priorities—even during tight financial cycles.

Elevating Accessibility Teams
This structural change demonstrates a commitment to inclusive design that transcends corporate rhetoric. By elevating accessibility specialists to positions with genuine organizational influence and providing them with unfiltered access to executive leadership, Microsoft ensures that inclusive design principles are embedded in strategic decisions at the highest levels of the organization. This repositioning signals to the entire company that accessibility is a core business value, not an optional consideration.

Dedicated Community Engagement
The protected budget for community engagement reinforces this commitment through tangible resource allocation. By dedicating specific funding for meaningful partnerships with marginalized communities, Microsoft ensures diverse voices directly influence product development from conception through launch. This approach has yielded measurable improvements in product accessibility and market reach, demonstrating how institutional transformation of design processes can simultaneously advance inclusion, equity, and business outcomes.

Regulatory Alignment

Anticipating Regulatory Changes
Visionary institutions position themselves ahead of regulatory evolution rather than merely reacting to it. As global regulations on environmental sustainability, accessibility, and data privacy grow increasingly stringent, organizations that proactively integrate these considerations into their design processes create significant competitive advantages while minimizing disruption.

Case Study: Proactive Compliance
Consider this example:
• European medical device leader Ottobock established a specialized regulatory forecasting team that maps emerging accessibility requirements across global markets
• Their “compliance plus” philosophy ensures designs exceed current standards by 20-30%, virtually eliminating costly redesigns when regulations tighten

Benefits of Forward-Thinking Regulation Strategy
Proactive regulatory alignment transforms compliance from a burden into a strategic asset. Organizations that embrace this approach not only mitigate financial and reputational risks but also establish themselves as industry leaders in responsible design. This strategic positioning requires continuous environmental scanning and a genuine commitment to ethical design principles that transcend minimum requirements.

Market Differentiation

Rising Consumer Expectations
The evolving landscape of consumer expectations presents strategic opportunities to harmonize responsible design with market advantage. Today’s consumers are not merely preferring but actively demanding products and services that demonstrate ethical production standards, environmental sustainability practices, and social responsibility commitments. Organizations that authentically meet these heightened expectations can secure significant competitive advantages and cultivate deeply loyal customer relationships.

Real-World Success Stories
Consider these compelling examples:
• Herman Miller revolutionized the furniture industry through circular design principles, exemplified by their groundbreaking Aeron chair remanufacturing program
• This innovative initiative established a premium market position while substantially reducing material consumption and environmental impact

Creating Win-Win Outcomes
When organizations strategically align responsible design principles with market opportunities, they forge powerful win-win scenarios that simultaneously benefit business objectives and societal wellbeing. Success in this approach demands both nuanced understanding of evolving consumer expectations and unwavering commitment to developing innovative solutions that address these expectations while advancing sustainability goals.

Beyond Good Intentions
Concrete measurement systems are essential for true accountability. While noble intentions set the direction, only robust metrics can verify real progress in responsible design. Organizations must implement comprehensive measurement frameworks to track outcomes, identify improvement opportunities, and demonstrate genuine commitment.

Effective Measurement Systems
Leading examples include:
• IBM’s Responsible Design Dashboard, which provides quantifiable metrics across diverse product lines
• Google’s HEART framework (Happiness, Engagement, Adoption, Retention, Task success) that seamlessly integrates ethical dimensions into standard performance indicators
• Transparent annual responsible design audits with publicly accessible results that foster organizational accountability

Benefits of Implementation
By embracing data-driven measurement systems, organizations transform aspirational goals into verifiable outcomes. This approach demonstrates an authentic commitment to responsible design principles while creating a foundation for continuous improvement. The willingness to measure and transparently share both successes and challenges distinguishes truly responsible organizations from those with merely good intentions.

Incentive Restructuring

The Power of Aligned Incentives
Human behavior is fundamentally shaped by incentives. To foster responsible design practices, institutions must strategically align rewards systems with desired ethical outcomes. When designers and stakeholders are recognized and compensated for responsible design initiatives, they naturally prioritize these values in their work.

Implementation Strategies
Organizations are achieving this alignment through concrete approaches:
• Salesforce has integrated diversity and inclusion metrics directly into executive compensation packages, ensuring leadership accountability
• Leading firms like Frog Design have embedded responsible design outcomes as key criteria in employee performance reviews
• Structured recognition programs celebrate and amplify exemplary responsible design practices, increasing visibility and adoption

Creating a Culture of Responsible Design
Thoughtfully restructured incentives transform organizational culture by signaling what truly matters. When ethical, inclusive, and sustainable practices are rewarded, they become embedded in institutional values rather than treated as optional considerations. This transformation requires rigorous assessment of current incentive frameworks and bold leadership willing to realign reward systems with responsible design principles.

Institutional Culture and Learning Systems
Responsible design flourishes within robust learning ecosystems. Rather than a one-time achievement, responsible design represents an ongoing journey of discovery, adaptation, and refinement. Organizations must establish comprehensive learning infrastructures that nurture this evolutionary process and ensure design practices remain ethically sound, inclusive, and forward-thinking.

Key Components of Learning Infrastructure
An effective learning infrastructure incorporates:
• Rigorous post-implementation reviews that critically assess ethical outcomes and user impact
• Vibrant communities of practice that facilitate knowledge exchange and cross-pollination across departments
• Strategic partnerships with academic institutions to integrate cutting-edge ethical frameworks and research
• Diverse external advisory boards that provide constructive critique and alternative perspectives

Benefits of Learning Systems
By investing in robust learning infrastructure, organizations cultivate a culture of continuous improvement and adaptive excellence. These systems ensure responsible design practices evolve in response to emerging challenges, technological shifts, and evolving societal expectations. Success requires unwavering institutional commitment to evidence-based learning, collaborative problem-solving, and transparent communication across all levels of the organization.

The Philips Healthcare Example

The Responsibility Lab Initiative
Philips Healthcare established a groundbreaking “Responsibility Lab” where designers regularly rotate through immersive experiences with diverse users from various backgrounds and abilities. This innovative rotation system ensures that responsible design knowledge becomes deeply embedded across the organization rather than remaining isolated within a specialized team.

Benefits of Experiential Learning
This approach powerfully demonstrates how experiential learning catalyzes responsible design practices. By immersing designers directly in the lived experiences of diverse users, Philips enables them to develop profound insights into the ethical, social, and environmental implications of their design decisions—insights that could not be gained through traditional research methods alone.

Organizational Knowledge Distribution
The strategic rotation system ensures that valuable ethical design principles flow throughout the organization, transforming responsible design from a specialized function into a shared organizational capability. This case study exemplifies how institutions can build effective learning systems that not only foster a culture of responsible design but also make it an integral part of their operational DNA.

The Institutional Journey

A Continuous Transformation
Institutionalizing responsible design is not a destination but a dynamic journey of continuous evolution. It demands skillful navigation through competing priorities, entrenched power dynamics, and ever-shifting external pressures. Forward-thinking institutions recognize that responsible design is not merely adjacent to their core mission—it is fundamental to their long-term viability, relevance, and social license to operate in an increasingly conscientious marketplace.

Beyond Sporadic Initiatives
By addressing these dimensions systematically and holistically, organizations transcend fragmentary ethical initiatives to achieve truly institutionalized responsible design. This transformation creates environments where ethical considerations and responsible practices become the natural default—woven into the organizational DNA—rather than exceptional efforts requiring special attention or resources.

Embrace the Journey of Continuous Growth
Immerse yourself in a transformative journey that thrives on continuous learning, adaptive thinking, and cross-disciplinary collaboration. This mindset unlocks the potential for design practices that fuel a more just, equitable, and sustainable world. By embracing this profound shift, institutions can drive real change.

Achieving this radical transformation requires visionary leadership, ethical conduct, and an innovative culture. It demands the united courage to challenge outdated norms and champion a brighter future. When institutions embody this ethos, they become beacons of progress, inspiring others to follow suit.

The path forward is not without obstacles, but the rewards are immense. Institutions that lead with this mindset will not only transform their own practices but also catalyze systemic change across industries. They will set new standards, reshape markets, and pave the way for a more responsible, inclusive, and sustainable future.

Efi Pylarinou, Top Global Tech Thought Leader On FinTech https://blogs.perficient.com/2025/03/05/efi-pylarinou-fintech-leader/ https://blogs.perficient.com/2025/03/05/efi-pylarinou-fintech-leader/#respond Wed, 05 Mar 2025 12:00:20 +0000 https://blogs.perficient.com/?p=378051

In the latest episode of the “What If? So What?” podcast, Jim Hertzfeld had the pleasure of speaking with Efi Pylarinou, a renowned FinTech expert, speaker, and author of “The Fast Future Blur.” Efi shares her journey from Wall Street to becoming a leading voice in financial technology and innovation. The conversation covers a wide range of topics, including the concept of everywhere banking, the impact of AI, and the future of financial services.

Efi Pylarinou’s career has taken her from the cutthroat world of Wall Street to the serene landscapes of Switzerland. With a background in traditional financial services, Efi has witnessed firsthand the transformative power of technology in the industry and emphasizes the importance of adapting to new tech cycles and the challenges posed by legacy systems.

One of the key topics Jim and Efi discuss is everywhere banking, which encapsulates two industry trends: open banking and embedded finance. Efi explains that financial services are no longer confined to physical branches or mobile apps. Instead, banking can be integrated into commerce sites, travel platforms, and educational portals. This shift is driven by advancements in technology and changing consumer expectations.

Efi also highlights the critical role of AI in financial services. While AI is not new, recent advancements have opened up new possibilities for intelligent banking. However, she stresses that simply using AI as a tool is not enough. Businesses need to adopt an AI-native mindset to truly harness its potential.

Another significant trend is the evolution of digital identity and blockchain technology. Efi talks about how these innovations revolutionize our thoughts about money and financial transactions. With more than 90% of central banks exploring digital currencies, the future of money is poised to change dramatically.

Listen to the full episode to stay updated on the latest trends in FinTech and financial services.

Subscribe to the “What If? So What?” podcast for more engaging conversations with industry experts.

Listen now on your favorite podcast platform or visit our website.

 

Subscribe Where You Listen

Apple | Spotify | Amazon | Overcast

Meet our Guest

Efi Pylarinou Headshot

Efi Pylarinou, Top Global Tech Thought Leader On FinTech

Dr. Efi Pylarinou is a seasoned Wall Street professional and ex-academic who has become a Top Global Fintech, LinkedIn, and Tech Thought Leader. Author of The Fast Future Blur, she’s also a domain expert with a Ph.D. in Finance, founder of the Financial Services Intelligence Hub, a prolific content creator, and Faculty at Fast Future Executive.

Connect with Efi

 

Meet the Host

Jim Hertzfeld

Jim Hertzfeld is Area Vice President, Strategy for Perficient.

For over two decades, he has worked with clients to convert market insights into real-world digital products and customer experiences that actually grow their business. More than just a strategist, Jim is a pragmatic rebel known for challenging the conventional and turning grand visions into actionable steps. His candid demeanor, sprinkled with a dose of cynical optimism, shapes a narrative that challenges and inspires listeners.

Connect with Jim:

LinkedIn | Perficient

Tea-Time: Tips for Leveraging Time After Standup https://blogs.perficient.com/2025/02/28/tea-time-tips-for-leveraging-time-after-standup/ https://blogs.perficient.com/2025/02/28/tea-time-tips-for-leveraging-time-after-standup/#respond Fri, 28 Feb 2025 16:17:50 +0000 https://blogs.perficient.com/?p=377830

It’s typical to aim for 15-minute Standups, but how many times have your standups gotten side-tracked and suddenly more than a half-hour has gone by? These occurrences are not exactly my cup of tea…

Of course, sometimes topics need to be discussed, and planning a follow-up meeting will only slow down or delay resolution.

It’s important to keep Standups on-topic, and if run effectively, consider taking time after the Standup (I like to call it a Stay-After) with a smaller audience to cover “Tea-time” topics:

  • T: Tabled discussions.
  • E: Expectation setting.
  • A: Addressing blockers.

Why have a Stay-After

Likely, Standup meetings have all members of a team in attendance. To make the best use of everyone’s time, staying after Standup is a great opportunity to have a smaller, focused discussion with only the relevant team members. Typically, a Stay-After meeting is used to cover time-sensitive topics – “TEA”:

  • Tabled discussions: These are conversations that perhaps went too long during Standup and need to be continued once everyone else completes their updates.
  • Expectations: Often, the project manager or another team member may have process changes or other announcements to make to the team or specific team members, making a Stay-After an ideal time to communicate those quick updates.
  • Addressing blockers: Part of Standup is that team members escalate any blockers they are facing on an assignment. A Stay-After is also a good opportunity to troubleshoot or help provide clarifications to help unblock the team member.

Determining the agenda for a Stay-After

Stay-After meetings can be planned or unplanned.

Planned topics typically come up during the prior workday, usually when a team member requires clarification on a work assignment or wants to share information. The project manager can send an invite immediately following the next standup that contains the necessary attendees and agenda.

Unplanned topics typically arise during the Standup itself because of one of these scenarios:

  • A team member asks specific team members to stay back after the Standup for a particular topic.
  • A team member requires help to troubleshoot a technical blocker.
  • The project manager asks specific team members to stay back after the Standup if they recognize that a conversation is going too long.

It’s not uncommon that there may be both planned and unplanned topics for a Stay-After. The PM or team needs to determine which topics to give priority to for that specific day and time. De-prioritized topics may need to be addressed as part of a different meeting or as part of the next day’s Stay-After.

Running an effective Stay-After

Like actual Standups, there is likely only limited time available to hold a Stay-After. Consider these tips to make sure the time is used most efficiently:

  • Keep the conversation on-topic. Keep the focus on what decisions or help is needed.
  • If you find that a conversation requires more time or team members who are not in attendance, pause and plan a dedicated meeting for that topic.
  • Record any quick decisions or action items and move on to the next topic, if applicable.
  • Allow team members to drop off the call if the remaining topics are no longer relevant to them.

In Summary

Taking advantage of Standup Stay-After “Tea-time” is a great way to make sure that all team members get a chance to participate in the daily Standups while still allowing time-sensitive topics to be addressed without delay. Consider these tips at your next Standup, and they will help get your team started off to a tea-rrific day.

How to Automate Content Updates Using AEM Groovy Scripts https://blogs.perficient.com/2025/02/27/how-to-automate-content-updates-using-aem-groovy-scripts/ https://blogs.perficient.com/2025/02/27/how-to-automate-content-updates-using-aem-groovy-scripts/#respond Thu, 27 Feb 2025 14:34:32 +0000 https://blogs.perficient.com/?p=377880

As an AEM author, updating existing page content is a routine task. However, manual updates, like rolling out a new template, can become tedious and costly when dealing with thousands of pages.

Fortunately, automation scripts can save the day. Using Groovy scripts within AEM can streamline the content update process, reducing time and costs. In this blog, we’ll outline the key steps and best practices for using Groovy scripts to automate content updates.

The Benefits of Utilizing Groovy Scripts

Groovy is a powerful scripting language that integrates seamlessly with AEM. It allows developers to perform complex operations with minimal code, making it an excellent tool for tasks such as: 

  • Automating repetitive tasks
  • Accessing and modifying repository content 
  • Bulk updating properties across multiple nodes
  • Managing template and component mappings efficiently

The Groovy Console for AEM provides an intuitive interface for running scripts, enabling rapid development and testing without redeploying code.   

Important things to know about Groovy Console 

  • Security – Due to security concerns, Groovy Console should not be installed in any production environment.  
  • Any content that needs to be updated in production environments should be packaged to a lower environment, using Groovy Console to update and validate content. Then you can repackage and deploy to production environments.  

How to Update Templates for Existing Web Pages

To illustrate how to use Groovy, let’s learn how to update templates for existing web pages authored inside AEM.

Our first step is to identify the following:

  • Templates that need to be migrated
  • Associated components and their dependencies
  • Potential conflicts or deprecated functionalities

You should have source and destination template component mappings and page paths.  

As a pre-requisite for this solution, you will need to have JDK 11, Groovy 3.0.9, and Maven 3.6.3.   

Steps to Create a Template Mapping Script 

1. Create a CSV File 

The CSV file should contain two columns: 

  • Source → The legacy template path. 
  • Target → The new template path. 

Save this file as template-map.csv.

Source,Target
"/apps/legacy/templates/page-old","/apps/new/templates/page-new"
"/apps/legacy/templates/article-old","/apps/new/templates/article-new"

2. Load the Mapping File in migrate.groovy 

In your migrate.groovy script, insert the following code to load the mapping file: 

def templateMapFile = new File("work${File.separator}config${File.separator}template-map.csv")
assert templateMapFile.exists() : "Template Mapping File not found!"

3. Implement the Template Mapping Logic 

Next, we create a function to map source templates to target templates by utilizing the CSV file. 

// parseCsv is assumed to come from the groovycsv library
// (import static com.xlson.groovycsv.CsvParser.parseCsv);
// ENCODING and SEPARATOR are constants defined elsewhere in migrate.groovy.
String mapTemplate(sourceTemplateName, templateMapFile) {
    // Uses sourceTemplateName to look up the template we will use to create the new XML
    def template = ''
    assert templateMapFile : "Template Mapping File not found!"

    for (templateMap in parseCsv(templateMapFile.getText(ENCODING), separator: SEPARATOR)) {
        def sourceTemplate = templateMap['Source']
        def targetTemplate = templateMap['Target']
        if (sourceTemplateName.equals(sourceTemplate)) {
            template = targetTemplate
        }
    }

    assert template : "Template ${sourceTemplateName} not found!"

    return template
}
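With the mapping helper in place, the final piece is applying it to the pages being migrated. If you apply the mapping directly in a lower environment through the Groovy Console, the update loop might look like the following sketch. It assumes the console's standard session binding, and the page paths are hypothetical placeholders; depending on the new templates, you may also need to update sling:resourceType on the page content.

def pagePaths = [
    '/content/my-site/en/home',
    '/content/my-site/en/articles/sample-article'
]

pagePaths.each { path ->
    def contentNode = session.getNode("${path}/jcr:content")
    def currentTemplate = contentNode.getProperty('cq:template').string
    def newTemplate = mapTemplate(currentTemplate, templateMapFile)
    contentNode.setProperty('cq:template', newTemplate)
    println "Updated ${path}: ${currentTemplate} -> ${newTemplate}"
}

// Persist the changes once all pages have been processed
session.save()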

After creating a package with the Groovy script on your local machine, you can install it directly through the Package Manager. The package can be installed on both AEM as a Cloud Service (AEMaaCS) and on-premises AEM.

Execute the script in a non-production environment, verify that templates are correctly updated, and review logs for errors or skipped nodes. After running the script, check content pages to ensure they render as expected, validate that new templates are functioning correctly, and test associated components for compatibility. 
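As a quick sanity check, a short JCR-SQL2 query in the Groovy Console can confirm that no pages still reference a legacy template. This is a sketch only, reusing the console's session binding and the legacy template path from the mapping file above:

def statement = "SELECT * FROM [cq:PageContent] WHERE [cq:template] = '/apps/legacy/templates/page-old'"
def query = session.workspace.queryManager.createQuery(statement, 'JCR-SQL2')

// Any paths printed here still point at the old template and need attention
query.execute().nodes.each { node ->
    println node.path
}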

Groovy Scripts Minimize Manual Effort and Reduce Errors

Leveraging automation through scripting languages like Groovy can significantly simplify and accelerate AEM migrations. By following a structured approach, you can minimize manual effort, reduce errors, and ensure a smooth transition to the new platform, ultimately improving overall maintainability. 

More AEM Insights

Don’t miss out on more AEM insights and follow our Adobe blog! 

Kotlin Multiplatform vs. React Native vs. Flutter: Building Your First App https://blogs.perficient.com/2025/02/26/kotlin-multiplatform-vs-react-native-vs-flutter-building-your-first-app/ https://blogs.perficient.com/2025/02/26/kotlin-multiplatform-vs-react-native-vs-flutter-building-your-first-app/#respond Wed, 26 Feb 2025 21:50:16 +0000 https://blogs.perficient.com/?p=377508

Choosing the right framework for your first cross-platform app can be challenging, especially with so many great options available. To help you decide, let’s compare Kotlin Multiplatform (KMP), React Native, and Flutter by building a simple “Hello World” app with each framework. We’ll also evaluate them across key aspects like setup, UI development, code sharing, performance, community, and developer experience. By the end, you’ll have a clear understanding of which framework is best suited for your first app.

Building a “Hello World” App

1. Kotlin Multiplatform (KMP)

Kotlin Multiplatform allows you to share business logic across platforms while using native UI components. Here’s how to build a “Hello World” app:

Steps:

  1. Set Up the Project:
    • Install Android Studio and the Kotlin Multiplatform Mobile plugin.
    • Create a new KMP project using the “Mobile Library” template.
  2. Shared Code: In the shared module, create a Greeting class with a function to return “Hello World”.
    // shared/src/commonMain/kotlin/Greeting.kt
    class Greeting {
        fun greet(): String {
            return "Hello, World!"
        }
    }
  3. Platform-Specific UIs: For Android, use Jetpack Compose or XML layouts in the androidApp module. For iOS, use SwiftUI or UIKit in the iosApp module. Android (Jetpack Compose):
    // androidApp/src/main/java/com/example/androidApp/MainActivity.kt
    class MainActivity : ComponentActivity() {
        override fun onCreate(savedInstanceState: Bundle?) {
            super.onCreate(savedInstanceState)
            setContent {
                Text(text = Greeting().greet())
            }
        }
    }

    iOS (SwiftUI):

    // iosApp/iosApp/ContentView.swift
    struct ContentView: View {
        var body: some View {
            Text(Greeting().greet())
        }
    }
  4. Run the App: Build and run the app on Android and iOS simulators/emulators.

Pros and Cons:

Pros:

  • Native performance and look.
  • Shared business logic reduces code duplication.

Cons:

  • Requires knowledge of platform-specific UIs (Jetpack Compose for Android, SwiftUI/UIKit for iOS).
  • Initial setup can be complex.

2. React Native

React Native allows you to build cross-platform apps using JavaScript and React. Here’s how to build a “Hello World” app:

Steps:

  1. Set Up the Project:
    • Install Node.js and the React Native CLI.
    • Create a new project:
      npx react-native init HelloWorldApp
  2. Write the Code: Open App.js and replace the content with the following:
    import React from 'react';
    import { Text, View } from 'react-native';
    
    const App = () => {
        return (
            <View style={{ flex: 1, justifyContent: 'center', alignItems: 'center' }}>
                <Text>Hello, World!</Text>
            </View>
        );
    };
    
    export default App;
  3. Run the App: Start the Metro bundler:
    npx react-native start

    Run the app on Android or iOS:

    npx react-native run-android
    npx react-native run-ios

Pros and Cons:

Pros:

  • Easy setup and quick development.
  • Hot reload for instant updates.

Cons:

  • Performance may suffer for complex apps due to the JavaScript bridge.
  • Limited native look and feel.

3. Flutter

Flutter is a UI toolkit for building natively compiled apps for mobile, web, and desktop using Dart. Here’s how to build a “Hello World” app:

Steps:

  1. Set Up the Project:
    • Install Flutter SDK and Android Studio/VS Code.
    • Create a new project:
      flutter create hello_world_app
  2. Write the Code: Open lib/main.dart and replace the content with the following:
    import 'package:flutter/material.dart';
    
    void main() {
        runApp(MyApp());
    }
    
    class MyApp extends StatelessWidget {
        @override
        Widget build(BuildContext context) {
            return MaterialApp(
                home: Scaffold(
                    appBar: AppBar(title: Text('Hello World App')),
                    body: Center(child: Text('Hello, World!')),
                ),
            );
        }
    }
  3. Run the App: Run the app on Android or iOS:
    flutter run

Pros and Cons:

Pros:

  • Single codebase for UI and business logic.
  • Excellent performance and rich UI components.

Cons:

  • Larger app size compared to native apps.
  • Requires learning Dart.

Comparing the Frameworks

1. Initial Setup

  • KMP: Moderate setup complexity, especially for iOS. Requires configuring Gradle files and platform-specific dependencies.
  • React Native: Easy setup with tools like Expo and React Native CLI.
  • Flutter: Smoothest setup with the Flutter CLI and flutter doctor command.

Best option: Flutter (for ease of initial setup).

2. UI Development

  • KMP: Platform-specific UIs (Jetpack Compose for Android, SwiftUI/UIKit for iOS). Offers native flexibility but requires separate UI code.
  • React Native: Declarative UI with JSX. Powerful but can feel like a middle ground between native and custom rendering.
  • Flutter: Widget-based system for consistent cross-platform UIs. Highly customizable but requires learning Dart.

Best option: A tie between KMP (for native UI flexibility) and Flutter (for cross-platform consistency).

3. Code Sharing

  • KMP: Excels at sharing business logic while allowing native UIs.
  • React Native: High code sharing but may require platform-specific code for advanced features.
  • Flutter: High code sharing for both UI and business logic but requires Dart.

Best option: Kotlin Multiplatform (for its focus on sharing business logic).
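To make the KMP code-sharing model concrete, here is a small sketch of its expect/actual mechanism, which lets shared code call platform-specific implementations. It is a variation on the earlier Greeting example; the file paths follow the same template project layout and are illustrative only.

    // shared/src/commonMain/kotlin/Platform.kt
    // Declared in common code; each platform provides its own implementation.
    expect fun platformName(): String

    // Shared logic can use the declaration without knowing the platform.
    class PlatformGreeting {
        fun greet(): String = "Hello, ${platformName()}!"
    }

    // shared/src/androidMain/kotlin/Platform.kt
    actual fun platformName(): String = "Android"

    // shared/src/iosMain/kotlin/Platform.kt
    actual fun platformName(): String = "iOS"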

4. Performance

  • KMP: Native performance due to native UIs and compiled shared code.
  • React Native: Good performance but can struggle with complex UIs due to the JavaScript bridge.
  • Flutter: Excellent performance, often close to native, but may not match native performance in all scenarios.

Best option: Kotlin Multiplatform (for native performance).

5. Community and Ecosystem

  • KMP: Growing community backed by JetBrains. Kotlin ecosystem is mature.
  • React Native: Large and active community with a rich ecosystem.
  • Flutter: Thriving community with strong Google support.

Best option: React Native (for its large and mature community), but Flutter is a close contender.

6. Developer Experience

  • KMP: Gentle learning curve for Kotlin developers but requires platform-specific UI knowledge.
  • React Native: Familiar for JavaScript/React developers but may require native mobile knowledge.
  • Flutter: Excellent developer experience with hot reload and comprehensive documentation.

Best option: Flutter (for its excellent developer experience and tooling).

7. AI-Assisted Development Speed

With the rise of AI tools like GitHub Copilot, ChatGPT, Gemini, and Claude, developers can significantly speed up app development. Let’s evaluate how each framework benefits from AI assistance:

  • KMP: AI tools can help generate Kotlin code for shared logic and even platform-specific UIs. However, the need for platform-specific knowledge may limit the speed gains.
  • React Native: JavaScript is widely supported by AI tools, making it easy to generate boilerplate code, components, and even entire screens. The large ecosystem also means AI can suggest relevant libraries and solutions.
  • Flutter: Dart is less commonly supported by AI tools compared to JavaScript, but Flutter’s widget-based system is highly structured, making it easier for AI to generate consistent and functional code.

Best option: React Native (due to JavaScript’s widespread support in AI tools).

The Resolution

There’s no one-size-fits-all answer. The best choice depends on your priorities:

    • Prioritize Performance and Native UI: Choose Kotlin Multiplatform.
    • Prioritize Speed of Development and a Large Community: Choose React Native.
    • Prioritize Ease of Use, Cross-Platform Consistency, and Fast Development: Choose Flutter.

For Your First App:

  • Simple App, Fast Development: Flutter is an excellent choice. Its ease of setup, hot reload, and comprehensive widget system will get you up and running quickly.
  • Existing Kotlin/Android Skills, Focus on Shared Logic: Kotlin Multiplatform allows you to leverage your existing knowledge while sharing a significant portion of your codebase.
  • Web Developer, Familiar with React: React Native is a natural fit, allowing you to utilize your web development skills for mobile development.

Conclusion

Each framework has its strengths and weaknesses, and the best choice depends on your team’s expertise, project requirements, and long-term goals. For your first app, consider starting with Flutter for its ease of use and fast development, React Native if you’re a web developer, or Kotlin Multiplatform if you’re focused on performance and native UIs.

Try building a simple app with each framework to see which one aligns best with your preferences and project requirements.

References

  1. Kotlin Multiplatform Documentation: https://kotlinlang.org/docs/multiplatform.html
  2. React Native Documentation: https://reactnative.dev/docs/getting-started
  3. Flutter Documentation: https://flutter.dev/docs
  4. JetBrains Blog on KMP: https://blog.jetbrains.com/kotlin/
  5. React Native Community: https://github.com/react-native-community
  6. Flutter Community: https://flutter.dev/community

 

What To Expect When Migrating Your Site To A New Platform https://blogs.perficient.com/2025/02/26/what-to-expect-when-migrating-your-site-to-a-new-platform/ https://blogs.perficient.com/2025/02/26/what-to-expect-when-migrating-your-site-to-a-new-platform/#respond Wed, 26 Feb 2025 15:59:30 +0000 https://blogs.perficient.com/?p=377633

This series of blog posts will cover the main areas of activity for your marketing, product, and UX teams before, during, and after site migration to a new digital experience platform.

Migrating your site to a different platform can be a daunting prospect, especially if the site is sizable in both page count and number of assets, such as documents and images. However, this can also be a perfect opportunity to freshen up your content, perform an asset library audit, and reorganize the site overall.

Once you’ve hired a consultant, like Perficient, to help you implement your new CMS and migrate your content over, you will work with them to identify several action items your team will need to tackle to ensure successful site migration.

Whether you are migrating from or to one of the major enterprise digital experience platforms like Sitecore, Optimizely, or Adobe, or from the likes of SharePoint or WordPress, there are some common steps to take to make sure content migration runs smoothly and is executed in a manner that adds value to your overall web experience.

Part I – “Keep, Kill, Merge”

One of the first questions you will need to answer is “What do we need to carry over?” The instinctive answer would be everything. The seemingly rational answer is to migrate the site over as is and worry about optimization later. There are multiple reasons why this is usually not the best option.

  • This is a perfect opportunity to do a high-level overview of the entire sitemap and dive a bit deeper into the content. It will help determine if you still need a long-forgotten page about an event that ended years ago or a product that is no longer being offered in a certain market. Perhaps it hasn’t been purged simply because there is always higher-priority work to be done.
  • It is far more rational to do this type of analysis ahead of the migration rather than after. If nothing else, it is simply for efficiency purposes. By trimming down the number of pages, you ensure that the migration process is shorter and more purposeful. You also save time and resources.

Even though this activity might take time, it is essential to use this opportunity in the best possible manner. A consultant like Perficient can help drive the process. They will pull up an initial list of active pages, set up simple audit steps, and ensure that decisions are recorded clearly and organized.

Step I – Site Scan

The first step is to ensure all current site pages are accounted for. As simple as this may seem, it doesn’t always end up being so, especially on large multi-language sites. You might have pages that are not crawlable, are temporarily unpublished, are still in progress, etc.

Depending on your current system capabilities, putting together a comprehensive list can be relatively easy. Getting a CMS export is the safest way to confirm that you have accounted for everything in the system.

Crawling tools, such as Screaming Frog, are frequently used to generate reports that can be exported for further refinement. Cross-referencing these sources will ensure you get the full picture, including anything that might be housed externally.


Step II – Deep Dive

Once you’ve ensured that all pages made it to a comprehensive list you can easily filter, edit, and share, the fun part begins.

The next step involves reviewing and analyzing the sitemap and each page. The goal is to determine which pages will stay and which are candidates for removal. Various factors can impact this decision, from business goals, priorities, page views, conversion rates, SEO considerations, and marketing campaigns to compliance and regulations. Ultimately, it is important to assess each page’s value to the business and make decisions accordingly.

This audit will likely require input from multiple stakeholders, including subject matter experts, product owners, UX specialists, and others. It is essential to involve all interested parties at an early stage. Securing buy-in from key stakeholders at this point is critical for the following phases of the process. This especially applies to review and sign-off prior to going live.

Depending on your time and resources, the keep-kill-merge can either be done in full or limited to keep-kill. The merge option might require additional analysis, as well as follow-up design and content work. Leaving that effort for after the site migration is completed might just be the rational choice.

Step III – Decisions and Path Forward

Once the audit process has been completed, it is important to record findings and decisions simply and easily consumable for teams that will implement those updates. Proper documentation is essential when dealing with large sets of pages and associated content. This will inform the implementation team’s roadmap and timelines.

At this point, it is crucial to establish regular communication between a contact person (such as a product owner or content lead) and the team in charge of content migration from the consultant side. This partnership will ensure that all subsequent activities are carried out respecting the vision and business needs identified at the onset.

Completing the outlined activities properly will help smooth the transition into the next process phase, thus setting your team up for a successful site migration.

Setting Up CloudFront Using Python https://blogs.perficient.com/2025/02/25/setting-up-cloudfront-using-python/ https://blogs.perficient.com/2025/02/25/setting-up-cloudfront-using-python/#respond Wed, 26 Feb 2025 05:40:01 +0000 https://blogs.perficient.com/?p=376999

Python is an open-source programming language. We can use Python to provision and manage AWS services, much like Terraform or other IaC tools. In this blog, we are going to discuss setting up the CloudFront service using Python.

Why We Use Python

As we know, Python is an imperative language. This means that you can write more customized scripts that can perform advanced complex operations, handle errors, interact with APIs, etc. You also have access to AWS SDKs like Boto3 that allow you to perform any AWS operation you desire, including custom ones that might not yet be supported by Terraform.

How It Works

The boto3 library provides methods and classes for AWS services that we can use to create, modify, and update resources.

Prerequisites

We require only Python and the Boto3 library.


How to Write Code

As we know, boto3 has many functions for handling AWS services. Below are the basic functions used to manage the CloudFront service:

  • create_distribution is used to create a CloudFront distribution.
  • update_distribution is used to update a CloudFront distribution.
  • delete_distribution is used to delete a CloudFront distribution.
  • create_cache_policy is used to create a cache policy.
  • create_invalidation is used to create invalidation requests.

create_distribution and update_distribution require a lot of configuration values as well. You can use a Python dictionary variable and pass it to the function, or you can pass JSON, but then you will also have to parse it.

Let me share a basic example of creating a CloudFront distribution using Python & boto3:

import boto3
import os 

s3_origin_domain_name = '<s3bucketname>.s3.amazonaws.com'  
origin_id = 'origin-id'

distribution_config = {
        'CallerReference': str(hash("unique-reference")),
        'Comment': 'My CloudFront Distribution',
        'Enabled': True,
        'Origins': {
            'Items': [
                {
                    'Id': origin_id,
                    'DomainName': s3_origin_domain_name,
                    'S3OriginConfig': {
                        'OriginAccessIdentity': ''
                    },
                    'CustomHeaders': {
                        'Quantity': 0,
                        'Items': []
                    }
                }
            ],
            'Quantity': 1
        },
        'DefaultCacheBehavior': {
            'TargetOriginId': origin_id,
            'ViewerProtocolPolicy': 'redirect-to-https',
            'AllowedMethods': {
                'Quantity': 2,
                'Items': ['GET', 'HEAD'],
                'CachedMethods': {
                    'Quantity': 2,
                    'Items': ['GET', 'HEAD']
                }
            },
            'ForwardedValues': {
                'QueryString': False,
                'Cookies': {
                    'Forward': 'none'
                }
            },
            'MinTTL': 3600
        },
        'ViewerCertificate': {
            'CloudFrontDefaultCertificate': True
        },
        'PriceClass': 'PriceClass_100' 
    }
try:
    aws_access_key = os.getenv('AWS_ACCESS_KEY_ID')
    aws_secret_key = os.getenv('AWS_SECRET_ACCESS_KEY')
    session = boto3.Session(
        aws_access_key_id=aws_access_key,
        aws_secret_access_key=aws_secret_key,
        region_name='us-east-1'
    )
    client = session.client('cloudfront')
    response = client.create_distribution(DistributionConfig=distribution_config)
    print("CloudFront Distribution created successfully!")
    print(response)
except Exception as e:
    print(f"Error creating CloudFront distribution: {e}")

As you can see in the above sample code, after importing the boto3 module, we define the distribution_config variable where all the configuration is stored. After that, we call the create_distribution function to create the CDN distribution:

        response = client.create_distribution(DistributionConfig=distribution_config)

So, in a similar way, you can write more complex Python code to implement your AWS infrastructure and automate a cache-invalidation pipeline, giving users the ability to clear the CDN cache without logging in to the AWS console.
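As a starting point for such a pipeline, here is a minimal sketch of a cache invalidation request using boto3's create_invalidation call. The distribution ID and paths are placeholders that you would supply from your own environment:

import time
import boto3

def invalidate_cache(distribution_id, paths):
    # Submit a CloudFront invalidation for the given paths, e.g. ['/*'] or ['/index.html']
    client = boto3.client('cloudfront')
    response = client.create_invalidation(
        DistributionId=distribution_id,
        InvalidationBatch={
            'Paths': {
                'Quantity': len(paths),
                'Items': paths
            },
            # CallerReference must be unique per request; a timestamp works well
            'CallerReference': str(time.time())
        }
    )
    return response['Invalidation']['Id']

# Example usage with a placeholder distribution ID
invalidation_id = invalidate_cache('E1ABCDEXAMPLE', ['/*'])
print(f"Invalidation created: {invalidation_id}")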

Tell your Project’s Story using Azure DevOps Queries and Dashboards https://blogs.perficient.com/2025/02/24/tell-your-projects-story-using-azure-devops/ https://blogs.perficient.com/2025/02/24/tell-your-projects-story-using-azure-devops/#comments Mon, 24 Feb 2025 14:50:44 +0000 https://blogs.perficient.com/?p=377552

Sometimes purely looking at an Azure DevOps backlog or Board may not tell the right story in terms of progress made towards a specific goal. At first glance, it may seem like a horror story, but in reality that is often not the case; the data simply needs to be read and conveyed in the right way.

Though Azure DevOps provides multiple ways to view work items, it also provides a powerful reporting capability in terms of writing queries and configuring dashboards.

Work Items in Azure DevOps contain various fields which can enable data reports. However, to make that data meaningful, the right queries and the use of dashboards can help to present the precise state of the work.

Author purposeful Queries

Every Azure DevOps query should have a motive. Fields on work items are attributes which can help to provide an answer. Let us look at a few use cases and how those queries are configured.

Example 1: I want to find all Bugs in my project that are not in a State of ‘Closed’ or ‘Removed’ and which contain a tag ‘CMS’. I can use the work item fields ‘Work Item Type,’ ‘State,’ and ‘Tags’ to find any matches.

Query Example 1

Example 2: I want to find all Bugs that are Severity 1 or Severity 2 that are not Closed or Resolved (I want to see only Severity 1 or 2 Bugs that are in New or Active State.) In this example, I have grouped the 2 rows for Severity to be an ‘Or’ condition. This allows me to get results that include both Severity 1 and Severity 2 results.

Query Example 2

Example 3: I want to find all Bugs that contain the Tag “missing requirement” which were created on or after November 5, 2024. Another helpful attribute to report on is by Date – in this example, I am querying for results created after a specific date, but you can change the operator or set a date range for further control of results.

Query Example 3

Tips:

    • In these examples, I am using out-of-box field types, however, there are ways to create custom fields for an Azure DevOps project, to further enrich or customize your reports.
    • Review the columns you have showing once you run a query. You may need to use the ‘Column Options’ to enable additional columns for additional data points.
    • Save your query as a Shared Query, so that its results can be viewed by other members of the team.

Having these queries is great if you need a list of work items or if you want to make bulk updates for items which match your criteria. However, we can take these results a step further by visualizing the data onto a Dashboard.
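If you ever need the same results outside of the Azure DevOps UI, for example to feed a custom report, a saved query's logic can also be expressed in WIQL and run against the REST API. The following is a minimal Python sketch, assuming a personal access token with work item read scope; the organization, project, and token values are placeholders, and the field reference names assume a default process template:

import requests

# Placeholders: substitute your own organization, project, and personal access token
ORGANIZATION = 'my-org'
PROJECT = 'my-project'
PAT = '<personal-access-token>'

# WIQL equivalent of Example 1: open Bugs tagged 'CMS'
wiql = {
    'query': (
        "SELECT [System.Id], [System.Title], [System.State] "
        "FROM WorkItems "
        "WHERE [System.WorkItemType] = 'Bug' "
        "AND [System.State] <> 'Closed' "
        "AND [System.State] <> 'Removed' "
        "AND [System.Tags] CONTAINS 'CMS'"
    )
}

url = f"https://dev.azure.com/{ORGANIZATION}/{PROJECT}/_apis/wit/wiql?api-version=7.0"
response = requests.post(url, json=wiql, auth=('', PAT))
response.raise_for_status()

# The response lists matching work item IDs and URLs
for item in response.json()['workItems']:
    print(item['id'], item['url'])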

Publish your results with Dashboards

Sometimes visuals can help to better portray a story; the same can be true when reporting on a project’s progress.

Out-of-the-box, Azure DevOps provides a variety of widget types which can be used to configure a Dashboard. Some widgets require the use of a Query, while others are purely based on settings you define.

Here are a few examples of widgets I use most often:

  • Burndown: This widget does NOT require a query. When you place the widget, you’ll be able to control how data is pulled:
    • Work Item Type
    • Dates span
    • Team (if applicable)
    • Field Criteria
    • Interval of time (days/weeks/months)
  • Chart for Work Items: This widget is based on a Query. I find this widget to be very versatile, as you can choose the type of chart and which data points you want to display.
  • Query Results: This widget is based on a Query and will simply display the results in a list, however, you can select which columns of data to show/hide in the widget.
  • Query Tiles: This widget is based on a Query and will display the number of results matching a query. You can further customize this widget to dynamically show in unique colors, based on specific count criteria.

Tips:

    • There is no limit to the number of Dashboards you can have. Consider creating Dashboards for unique purposes or for unique audiences, which contain only the relevant data needed.
    • Queries and Dashboards are only as good as your data. Make sure you are regularly maintaining your work items to ensure they are tagged, parented, and prioritized appropriately.
    • There is also an option to export query results into Excel files if you find that dashboard widgets do not fill all of your reporting needs.

Convey the Story

Identify what is most important for your team or client to know, monitor, or be aware of. Once you have the data you need, you will be better equipped to explain progress and status to your team and the client.

In my personal experience, some types of unique dashboards I found to be effective for my clients or team members:

  • Dashboard related to UAT activities
  • Dashboard for Backlog Maintenance monitoring
  • Dashboard for Executives
  • Dashboard for QA Sprint-Based Activities
  • Dashboard for Dependency monitoring

Example of an Executive Dashboard, using Burndown, Chart for Work Items, and Query Tile widgets:
Executive Dashboard

With each of these dashboards, I wrote unique queries to find data that my team or client most often needed to reference. This enabled them to know if we were on track or if some action is needed.

By having a precise way to explain the story of your project, you will find that your team is able to make better decisions when the right data is available to them, in order to lead to a happy project ending.
