Power Fx is a low-code language for expressing logic across the Microsoft Power Platform. It’s a general-purpose, strongly typed, declarative, and functional programming language expressed in human-friendly text. Makers can use Power Fx directly in an Excel-like formula bar or in a Visual Studio Code text window. The “low” in low-code reflects the language’s concise and straightforward nature, which makes everyday programming tasks easy for both makers and developers.
Power Fx enables the full spectrum of development, from no-code for makers without any programming knowledge to pro-code for professional developers. It enables diverse teams to collaborate, saving time and effort.
To use Power Fx as the expression language in a desktop flow, you must enable the respective toggle when creating the flow through the Power Automate for desktop console.
Each Power Fx expression must start with an equals sign (=).
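For instance, a Power Fx expression in a flow input field might look like the following (the CustomerName variable is hypothetical):

```
=Left(CustomerName, 3) & "-" & Text(Today(), "yyyy-mm-dd")
```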
If you’re transitioning from flows where Power Fx is disabled, you might notice some differences. To streamline your experience while creating new desktop flows, here are some key concepts to keep in mind:
With Power Fx Disabled
Give your collection a name (e.g., myCollection) in the Variable Name field.
In the Value field, define the collection. Collections in Power Automate for desktop (PAD) are essentially arrays, which you can define by enclosing the values in square brackets [ ].
Action: Set Variable
Variable Name: myNumberCollection
Value: [1, 2, 3, 4, 5]
Action: Set Variable
Variable Name: myTextCollection
Value: ["Alice", "Bob", "Charlie"]
You can also create collections with mixed data types. For example, a collection with both numbers and strings:
Action: Set Variable
Variable Name: mixedCollection
Value: [1, "John", 42, "Doe"]
If you want to use a dollar sign ($) followed by an opening curly brace ({) within a Power Fx expression, or in the syntax of a UI/web element selector, without Power Automate for desktop treating it as string-interpolation syntax, use the escape sequence $${ (the first dollar sign acts as an escape character).
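As a quick illustration (the UserName variable is hypothetical), the extra dollar sign turns the interpolation into literal text:

```
${UserName}      treated as string interpolation of the UserName variable
$${UserName}     treated as the literal text ${UserName}
```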
For the complete list of all available functions in Power Automate for desktop flows, go to Formula reference – desktop flows.
Yes, use Power Fx if your flow needs custom logic, data transformation, or integration with Power Apps and you’re comfortable with the learning curve.
No, avoid it if your flows are relatively simple, or if you’re primarily focused on automation tasks like file manipulation, web scraping, or UI automation, where Power Automate for desktop’s native features are sufficient.
A mid-sized bank I was consulting with for their data warehouse modernization project finally realized that data isn’t just some necessary but boring stuff the IT department hoards in their digital cave. It’s the new gold, the ticking time bomb of risk, and the bane of every regulatory report that’s ever come back with more red flags than a beach during a shark sighting.
Welcome to the wild world of data governance, where dreams of order collide with the chaos of reality. Before you start mainlining espresso and squeezing that stress ball shaped suspiciously like your last audit report, let’s break this down into 7 steps that might just keep you sane.
Let’s not pretend. Without exec sponsorship, your data governance initiative is just a Trello board with high hopes. You need someone in a suit (preferably with a C in their title) to not just bless your mission but be genuinely convinced of it, and preferably to add it to their KPIs this year.
Pro tip to get that signature: Skip the jargon about “metadata catalogs” and go straight for the jugular with words like “penalties” and “reputational risk.” Nothing gets an exec’s attention quite like the threat of their club memberships being revoked.
Organizations have a knack for letting projects balloon faster than a tech startup’s valuation. Be ruthless. You don’t need to govern every scrap of data from the CEO’s coffee order to the janitor’s mop schedule.
Focus on the critical stuff:
Start small, prove it works, then expand. Rome wasn’t built in a day, and neither was a decent data governance structure.
Sure, you could go full nerd and dive into DAMA-DMBOK, but unless you’re gunning for a PhD in bureaucracy, keep it simple. Aim for a model that’s more “I get it” and less “I need an interpreter”.
Focus on:
Remember, frameworks are like diets – the best one is the one you’ll actually stick to.
Your data stewards are the poor souls standing between order and chaos, armed with nothing but spreadsheets and a dwindling supply of patience. Look for folks who:
Bonus: Give them a fancy title like “Data Integrity Czar.” It won’t pay more, but it might make them feel better about their life choices.
Get ready for some fun conversations about what words mean. You’d think “customer” would be straightforward, but you’d be wrong. So very, very wrong.
It’s not perfect, but it’s governance, not a philosophical treatise on the nature of reality.
For the love of all that is holy and GDPR-compliant, don’t buy a fancy governance tool before you know what you’re doing. Your tech should support your process, not be a $250,000 band-aid for a broken system.
Figure out:
Metadata management and data lineage tracking are great, but they’re the icing, not the cake.
The true test of your governance structure isn’t the PowerPoint that put the board to sleep. It’s whether it holds up when someone decides to get creative with data entry at 4:59 PM on Fridays.
So:
Bonus: Document Everything (Then Document Your Documentation)
If it’s not written down, it doesn’t exist. If it’s written down but buried in a SharePoint site that time forgot, it still doesn’t exist.
Think of governance like flossing – it’s not exciting, but it beats the alternative.
Several mid-sized banks have successfully implemented data governance structures, demonstrating the real-world benefits of these strategies. Here are a few notable examples:
Case Study of a Large American Bank
This bank’s approach to data governance offers valuable lessons for mid-sized banks. The bank implemented robust data governance practices to enhance data quality, security, and compliance. That focus resulted in better risk management, increased regulatory compliance, and enhanced customer trust through secure and reliable financial services.
Regional Bank Case Study
A regional bank successfully tackled data quality issues impacting compliance, credit, and liquidity risk assessment. Their approach included:
For example, in liquidity risk assessment, they identified core critical data elements (CDEs) such as the liquidity coverage ratio and the net stable funding ratio.
Mid-Sized Bank Acquisition
In another case, a major bank acquired a regional financial services company and faced the challenge of integrating disparate data systems. Their data governance implementation involved:
This approach eliminated data silos, created a single source of truth, and significantly improved data quality and reliability. It also facilitated more accurate reporting and analysis, leading to more effective risk management and smoother banking services for customers.
Parting Thought
In the end, defining a data governance structure for your bank isn’t about creating a bureaucratic nightmare. It’s about keeping your data in check, your regulators off your back, and your systems speaking the same language.
When it all comes together, and your data actually starts making sense, you’ll feel like a criminal mastermind watching their perfect plan unfold. Only, you know, legal and with fewer car chases.
Now go forth and govern. May your data be clean, your audits be boring, and your governance meetings be mercifully short.
Applying FinOps concepts to your cloud consumption is not new. It’s often treated as an IT hygiene task: necessary but not strategic. And while cost optimization and waste reduction are worthy efforts, it’s all too common to see these activities fall victim to higher daily priorities. When they are in focus, it’s often attempted by looking for low-hanging wins using cloud-native services that aren’t overly interested in delivering a comprehensive picture of cloud spend. It’s just one of those activities that is hard to get too excited about.
I challenge us to reboot this thinking with a fresh, outcome-focused perspective:
First, let’s expand FinOps to consider the bigger picture of technology spending, which the FinOps Foundation calls “Cloud+” in its 2025 State of FinOps Report (https://data.finops.org). Complexity is increasing: multicloud and hybrid environments are the norm. Real technology spend includes observability tools, containers, data platforms, SaaS licensing, AI/ML, and peripheral services, sometimes hand-waved away as shadow IT or just life as part of an unavoidable cost center. The more we can pull in these broader costs, the more accurate our insight into technology investments. Which leads us to…
Second, let’s start thinking about Unit Economics. This is a challenge, and only a small percentage of organizations fully get there, but the business payoff in shifting to this mindset can bring immediate business performance results, well beyond just optimizing public cloud infrastructure. The story we need to tell in FinOps isn’t “How much are we spending?”; it’s whether we are profiting from our investments and understanding the impact on revenue and margin if cost drivers change. Let’s make sure every dollar spent is a good dollar aligned to business objectives. Controlling costs is necessary. Maximizing value is strategic.
Unit Economics is about shifting focus—from tracking aggregate cloud spend to measuring value at the most meaningful level: per transaction, per customer, per workload, or per outcome. These metrics bridge the gap between cloud consumption and business impact, aligning technology decisions with revenue, profitability, customer experience, and other key performance indicators.
Unlike traditional IT financials, unit economic metrics are built to reflect how your business actually operates. They unify Finance, Engineering, and Product teams around shared goals, fostering a mindset where cost efficiency and value creation go hand in hand. When used effectively, these metrics inform everything from financial forecasting, product planning, digital strategy, M&A onboarding, and feature delivery—turning cloud from a cost center into a competitive advantage.
Establishing effective unit economics begins with curiosity, a willingness to think differently, and meaningful collaboration. Consider these exploratory questions:
To put unit economics into action, organizations can follow a basic flow: pick a meaningful unit, map technology costs to it, and track the resulting metric over time.
This approach is a baseline for making more informed decisions and for understanding the potential impact of future investments.
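As a toy sketch of that flow (all figures and month labels are hypothetical), a cost-per-transaction unit metric can be computed by joining technology spend to business volume:

```python
# Hypothetical illustration of a unit-economics metric: cost per transaction.
# The figures are made up; real data would come from billing exports and
# business telemetry via a FinOps platform.

monthly_spend = {          # total technology spend by month, in dollars
    "2025-01": 120_000,
    "2025-02": 131_000,
    "2025-03": 125_500,
}
monthly_transactions = {   # business volume for the same months
    "2025-01": 2_400_000,
    "2025-02": 2_750_000,
    "2025-03": 2_600_000,
}

def cost_per_transaction(spend, volume):
    """Return a per-month unit cost: dollars of spend per transaction."""
    return {month: spend[month] / volume[month] for month in spend}

unit_cost = cost_per_transaction(monthly_spend, monthly_transactions)
for month, cost in sorted(unit_cost.items()):
    print(f"{month}: ${cost:.4f} per transaction")
```

Watching this number over time tells a richer story than aggregate spend: in this made-up data, total spend rises in February while the cost per transaction actually falls.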
Technology alone doesn’t solve this challenge, but the right platform accelerates the journey. We leverage Apptio Cloudability to bring at-scale intelligence and automation to financial operating models. With Cloudability, our clients can:
Our goal is to bring the right intelligence to fit your business strategy, not just your IT infrastructure, delivering insights into your everyday operating model and reinforcing a culture of accountability and shared ownership. Challenge yourself to change your mindset on cost vs. value and see how unit economics can drive impactful outcomes to your organization.
It is an oldie, but a goodie.
With Data Cloud we can send data to many external destinations, like Marketing Cloud Engagement or Amazon S3, through Activation Targets. But there are times we are working with a destination system, like Eloqua or Marketo, that has solid support for SFTP. SFTP and Data Cloud work well together!
Even with Marketing Cloud Engagement you might want to get data flowing into Automation Studio instead of pushing directly to a Data Extension or Journey. SFTP would allow that CSV file to land in Automation Studio, where an SSJS script, for example, could loop through those rows and send mass SMS messages.
Yes; as we will see in this blog post, the SFTP setup through Data Cloud supports both an SSH key with a passphrase and a password on the SFTP site itself.
There are five main pieces to set up and test this.
It will feel like a lot of steps, but it really does not take that long. Leveraging out-of-the-box Activation Targets, like this SFTP one, is going to save tons of time in the long run.
Here is a good blog post introducing what an SSH key is and how it works: https://www.sectigo.com/resource-library/what-is-an-ssh-key
Here are a couple of good articles on how to generate an SSH key.
One very important note: Marketing Cloud only accepts SSH keys generated a certain way: https://help.salesforce.com/s/articleView?id=000380791&type=1
I am on a Windows machine so I am going to open a command prompt and use the OpenSSH command.
Once in the command prompt type the ssh-keygen command.
Now enter your filename.
Now enter your passphrase. This is basically a password tied to your SSH key to make it harder to break. It is different from the SFTP password that will be set on the Marketing Cloud Engagement side.
Now that your passphrase was entered twice correctly the SSH Key is generated.
When using the command prompt the files were automatically created in my C:\Users\Terry.Luschen directory.
Now, in the command prompt, as stated in step #3 of the Salesforce documentation above, you need to run one final command: change the key to the RFC4716 (SSH2) key format.
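The key-generation steps above can be sketched as a non-interactive command sequence (the filenames and passphrase are hypothetical; check the Salesforce article above for the exact key parameters Marketing Cloud accepts):

```shell
# 1. Generate an RSA key pair, protected by a passphrase (hypothetical values):
ssh-keygen -t rsa -b 2048 -f MCE_SSH_01b -N "MySecretPassphrase" -q

# 2. Export the public key in the RFC4716 (SSH2) format:
ssh-keygen -e -f MCE_SSH_01b.pub > MCE_SSH_01b_ssh2.pub

# The exported file begins with an SSH2 banner line:
head -1 MCE_SSH_01b_ssh2.pub
```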
The three files will look something like:
I opened the .pub file and removed the comment.
I also added a file extension of .txt to the MCE_SSH_01b file so it is now named MCE_SSH_01b.txt
Now that we have generated our SSH files we can upload the Public Key to Marketing Cloud Engagement.
Log into Marketing Cloud Engagement
Go to Setup, Administration, Data Management, Key Management
Click ‘Create’ on the ‘Key Management’ page
Fill out the ‘New Key’ details.
Make sure SSH is selected.
Select the ‘Public’ Key file you created earlier which has the .pub extension.
Check the ‘Public Key’ checkbox.
Save the Key
Now go to Setup, Administration, Data Management, FTP Accounts
Use the ‘Create User’ button to create a new User.
Fill out the new FTP User page by entering an email address and password. Note this is different from the passphrase created above, which was tied to the SSH key. Click Next.
Select the ‘SSH Key and Password’ radio button. Use the file picker to select the Marketing Cloud Key you just created above. Click on Next.
Select the type of security you need. In this screenshot everything is selected, but typically you should select only the checkboxes that are absolutely necessary. Click Next.
If you are trying to restrict to certain IPs fill out this screen. In our example we are not trying to restrict to just Data Cloud IPs for example. Click on Next.
Typically you would leave this screen as is. It allows the Root folder as the default and then when you configure the tool that will send data to the SFTP site you can select the exact folder to use. Click on Save.
You have now configured your destination SFTP site!
Now we can test this!
After you publish your segment, it should run, and your file should show up on your Marketing Cloud Engagement SFTP site. You can test this by opening FileZilla, connecting, and looking in the proper folder.
That is it! SFTP and Data Cloud work well together!
With just clicks and configuration, we can send segment data created in Data Cloud to an SFTP site! We are using the standard ‘Activation Target’ and ‘Activation’ setup screens in Data Cloud.
If you are brainstorming about use cases for Agentforce, please read on with this blog post from my colleague Darshan Kukde!
Here is another blog post where I discuss using unstructured data in Salesforce Data Cloud so your Agent in Agentforce can help your customers in new ways!
If you want a demo of this in action or want to go deeper please reach out and connect!
The global business landscape is complex, and responsible design has emerged as a critical imperative for organizations across sectors. It represents a fundamental shift from viewing design merely as a creative output to recognizing it as an ethical responsibility embedded within institutional structures and processes.
True transformation toward responsible design practices cannot be achieved through superficial initiatives or isolated projects. Rather, it requires deep institutional commitment—reshaping governance frameworks, decision-making processes, and organizational cultures to prioritize human dignity, social equity, and environmental stewardship.
This framework explores how institutions can move beyond performative gestures toward authentic integration of responsible design principles throughout their operations, creating systems that consistently produce outcomes aligned with broader societal values and planetary boundaries.
The Institutional Imperative
What is Responsible Design?
Responsible design is the deliberate creation of products, services, and systems that prioritize human wellbeing, social equity, and environmental sustainability. While individual designers often champion ethical approaches, meaningful and lasting change requires institutional transformation. This framework explores how organizations can systematically embed responsible design principles into their core structures, cultures, and everyday practices.
Why Institutions Matter
The imperative for responsible design within institutions stems from their unique position of influence. Institutions have extensive reach, making their design choices impactful at scale. They establish standards and expectations for design professionals, effectively shaping the future direction of the field. Moreover, integrating responsible design practices yields tangible benefits: enhanced reputation, stronger stakeholder relationships, and significantly reduced ethical and operational risks.
Purpose of This Framework
This article examines the essential components of responsible design, showcases institutions that have successfully implemented ethical design practices, and provides practical strategies for navigating the challenges of organizational transformation. By addressing these dimensions systematically, organizations can transcend isolated ethical initiatives to build environments where responsible design becomes the institutional default—creating cultures where ethical considerations are woven into every decision rather than treated as exceptional concerns.
Defining Responsible Design
Responsible design encompasses four interconnected dimensions: ethical consideration, inclusivity, sustainability, and accountability. These dimensions form a comprehensive framework for evaluating the ethical, social, and environmental implications of design decisions, ultimately ensuring that design practices contribute to a more just and sustainable world.
Interconnected Dimensions
These four dimensions function not as isolated concepts but as integrated facets of a holistic approach to responsible design. Ethical consideration must guide inclusive practices to ensure diverse stakeholder perspectives are genuinely valued and incorporated. Sustainability principles should drive robust accountability measures that minimize environmental harm while maximizing social benefit. By weaving these dimensions together throughout the design process, institutions can cultivate a design culture that authentically champions human wellbeing, social equity, and environmental stewardship in every project.
A Framework for the Future
This framework serves as both compass and blueprint, guiding institutions toward design practices that meaningfully contribute to a more equitable and sustainable future. When organizations fully embrace these dimensions of responsible design, they align their creative outputs with their deepest values, enhance their societal impact, and participate in addressing our most pressing collective challenges. The result is design that not only serves immediate business goals but also advances the greater good across communities and generations.
Ethical Consideration
Understanding Ethical Design
Ethical consideration: A thoughtful evaluation of implications across diverse stakeholders. This process demands a comprehensive assessment of how design decisions might impact various communities, particularly those who are vulnerable or historically overlooked. Responsible designers must look beyond intended outcomes to anticipate potential unintended consequences that could emerge from their work.
Creating Positive Social Impact
Beyond harm prevention, ethical consideration actively pursues opportunities for positive social impact. This might involve designing solutions that address pressing social challenges or leveraging design to foster inclusion and community empowerment. When institutions weave ethical considerations throughout their design process, they position themselves to contribute meaningfully to social equity and justice through their creations.
Implementation Strategies
Organizations can embed ethical consideration into their practices through several concrete approaches: establishing dedicated ethical review panels, conducting thorough stakeholder engagement sessions, and developing robust ethical design frameworks. By placing ethics at the center of design decision-making, institutions ensure their work not only reflects their core values but also advances collective wellbeing across society.
Inclusive Practices
Understanding Inclusive Design
Inclusive practices: Creating designs that meaningfully serve and represent all populations, particularly those historically marginalized. This approach demands that designers actively seek diverse perspectives, challenge their inherent biases, and develop solutions that transcend physical, cognitive, cultural, and socioeconomic barriers. By centering previously excluded voices, inclusive design creates more robust and universally beneficial outcomes.
Empowering Marginalized Communities
True inclusive design transcends mere accommodation—it fundamentally shifts power dynamics by elevating marginalized communities from subjects to co-creators. This transformation might involve establishing paid consulting opportunities for community experts, creating accessible design workshops in underserved neighborhoods, or forming equitable partnerships where decision-making authority is genuinely shared. When institutions embrace these collaborative approaches, they produce designs that authentically address community needs while building lasting relationships based on mutual respect and shared purpose.
Implementation Strategies
Organizations can systematically embed inclusive practices by recruiting design teams that reflect diverse lived experiences, conducting immersive community-based research with appropriate compensation for participants, and establishing measurable inclusive design standards with accountability mechanisms. By integrating these approaches throughout their processes, institutions not only create more accessible and equitable designs but also contribute to dismantling systemic barriers that have historically limited full participation in society.
Sustainability
Definition and Core Principles
Sustainability: Minimizing environmental impact and resource consumption across the entire design lifecycle. This comprehensive approach spans from raw material sourcing through to end-of-life disposal, challenging designers to eliminate waste, preserve natural resources, and significantly reduce pollution. Sustainable design necessitates careful consideration of long-term environmental consequences, including addressing critical challenges like climate change, habitat destruction, and biodiversity loss.
Beyond Harm Reduction
True sustainability transcends mere harm reduction to actively generate positive environmental outcomes. This transformative approach creates products and services that harness renewable energy, conserve vital water resources, or restore damaged ecosystems. When institutions fully embrace sustainability principles, they contribute meaningfully to environmental resilience and help foster regenerative systems that benefit both present and future generations.
Implementation Strategies
Organizations can embed sustainability through strategic, measurable approaches including rigorous lifecycle assessments, integrated eco-design methodologies, and significant investments in renewable energy infrastructure and waste reduction technologies. By elevating sustainability to a core organizational value, institutions can dramatically reduce their ecological footprint while simultaneously driving innovation and contributing to planetary health and wellbeing.
Accountability
Definition and Core Principles
Accountability: Taking ownership of both intended and unintended outcomes of design decisions. This principle demands establishing robust systems for monitoring and evaluating design impacts, along with mechanisms for corrective action when necessary. Accountable designers maintain transparency throughout their process, actively seek stakeholder feedback, and acknowledge responsibility for any negative consequences, even those that were unforeseen. This foundation of responsibility ensures designs serve their intended purpose while minimizing potential harm.
Learning and Growth
True accountability transcends mere acknowledgment of errors—it transforms mistakes into catalysts for improvement. This transformative process involves critically examining design failures, implementing process refinements, enhancing designer training, and establishing more comprehensive ethical frameworks. When institutions embrace accountability as a pathway to excellence rather than just a response to failure, they cultivate stakeholder trust while continuously elevating the quality and integrity of their design practices.
Implementation Strategies
Organizations can foster a culture of accountability by establishing well-defined responsibility chains, implementing comprehensive monitoring systems, and creating accessible channels for feedback and remediation. Effective implementation includes regular ethical audits, transparent reporting practices, and systematic incorporation of lessons learned. By prioritizing accountability at every organizational level, institutions ensure their designs consistently uphold ethical standards, promote inclusivity, and advance sustainability goals.
Patagonia’s Environmental Responsibility
Environmental Integration in Design
Patagonia has revolutionized responsible design by weaving environmental considerations into the fabric of its product development process. The company’s groundbreaking “Worn Wear” program—which actively encourages repair and reuse over replacement—emerged organically from the organization’s core values rather than as a response to market trends. Patagonia’s governance structure reinforces this commitment through rigorous environmental impact assessments at every design stage, ensuring sustainability remains central rather than peripheral to innovation.
Sustainability Initiatives
Patagonia demonstrates unwavering environmental responsibility through comprehensive initiatives that permeate all aspects of their operations. The company has pioneered the use of recycled and organic materials in outdoor apparel, dramatically reduced water consumption through innovative manufacturing processes, and committed to donating 1% of sales to grassroots environmental organizations, a pledge that has generated over $140 million in grants to date. These initiatives represent the concrete manifestation of Patagonia’s mission rather than superficial corporate social responsibility efforts.
Environmental Leadership as a Competitive Advantage
Patagonia’s remarkable business success powerfully illustrates how environmental responsibility can create lasting competitive advantage in the marketplace. By elevating environmental considerations from afterthought to guiding principle, the company has cultivated a fiercely loyal customer base willing to pay premium prices for products aligned with their values. Patagonia’s approach has redefined industry standards for sustainable business practices, serving as a compelling case study for organizations seeking to integrate responsible design into their operational DNA while achieving exceptional business results.
IDEO’s Human-Centered Evolution
Organizational Restructuring
IDEO transformed from a traditional product design firm into a responsible design leader through deliberate organizational change. The company revolutionized its project teams by integrating ethicists and community representatives alongside designers, ensuring diverse perspectives influence every creation. Their acclaimed “Little Book of Design Ethics” now serves as the foundational document guiding all projects, while their established ethics review board rigorously evaluates proposals against comprehensive responsible design criteria before approval.
Ethical Integration in Design Process
IDEO’s evolution exemplifies the critical importance of embedding ethical considerations throughout the design process. By incorporating ethicists and community advocates directly into project teams, the company ensures that marginalized voices are heard, and ethical principles shape all design decisions from conception to implementation. The “Little Book of Design Ethics” functions not simply as a reference manual but as a living framework that empowers designers to navigate complex ethical challenges with confidence and integrity.
Cultural Transformation
IDEO’s remarkable journey demonstrates that responsible design demands a fundamental cultural shift within organizations. The company has cultivated an environment where ethical awareness and accountability are celebrated as core values rather than compliance requirements. By prioritizing human impact alongside business outcomes, IDEO has established itself as the preeminent leader in genuinely human-centered design. Their case offers actionable insights for institutions seeking to implement responsible design practices while maintaining innovation and market leadership.
Addressing Resistance to Change
Institutional transformation inevitably encounters resistance. Change disrupts established routines and challenges comfort zones, often triggering reactions ranging from subtle hesitation to outright opposition. Overcoming this resistance requires thoughtful planning, transparent communication, and meaningful stakeholder engagement throughout the process.
Why People Resist Change
Resistance typically stems from several key factors:
• Fear of the unknown and potential failure
• Perceived threats to job security, status, or expertise
• Skepticism about the benefits compared to required effort
• Attachment to established processes and organizational identity
• Past negative experiences with change initiatives
Effective Strategies for Change Management
• Phased implementation with clearly defined pilot projects that demonstrate value
• Identifying and empowering internal champions across departments to model and advocate for new approaches
• Creating safe spaces for constructive critique of existing practices without blame
• Developing narratives that connect responsible design to institutional identity and core values
Keys to Successful Transformation
By implementing these strategies, institutions can cultivate an environment that embraces rather than resists change. Transparent communication creates trust, active stakeholder engagement fosters ownership, and focusing on shared values helps align diverse perspectives. When people understand both the rationale for change and their role in the transformation process, resistance diminishes and the foundation for responsible design practices strengthens.
Balancing Competing Priorities
The complex tension between profit motives and ethical considerations demands sophisticated strategic approaches. Modern institutions navigate a challenging landscape of competing demands: maximizing shareholder value, meeting evolving customer needs, and fulfilling expanding social and environmental responsibilities. Successfully balancing these interconnected priorities requires thoughtful deliberation and strategic decision-making that acknowledges their interdependence.
Tensions in Modern Organizations
These inherent tensions can be effectively managed through:
• Developing comprehensive metrics that capture long-term value creation beyond quarterly financial results, including social impact assessments and sustainability indicators
• Identifying and prioritizing “win-win” opportunities where responsible design enhances market position, builds brand loyalty, and creates competitive advantages
Strategic Decision Frameworks
• Creating robust decision frameworks that explicitly weigh ethical considerations alongside financial metrics, allowing for transparent evaluation of tradeoffs
• Building compelling business cases that demonstrate how responsible design significantly reduces long-term risks related to regulation, reputation, and resource scarcity
Long-term Value Integration
By thoughtfully integrating ethical considerations into core decision-making processes and developing nuanced metrics that capture multidimensional long-term value creation, institutions can successfully reconcile profit motives with responsible design principles. This strategic approach enables organizations to achieve sustainable financial success while meaningfully contributing to a more just, equitable, and environmentally sustainable world.
Beyond Token Inclusion
Meaningful participation requires addressing deep-rooted power imbalances in institutional structures. Too often, inclusion is reduced to superficial gestures—inviting representatives from marginalized communities to consultations while denying them genuine influence over outcomes and decisions that affect their lives.
The Challenge of Meaningful Participation
To achieve authentic participation, institutions must confront and transform these entrenched power dynamics. This means moving beyond symbolic representation to creating spaces where traditionally excluded voices carry substantial weight in shaping both processes and outcomes.
Key Requirements for True Inclusion:
• Redistributing decision-making authority through participatory governance structures that give community members voting rights on critical decisions
• Providing fair financial compensation for community members’ time, expertise, and design contributions—recognizing their input as valuable professional consultation
• Implementing responsive feedback mechanisms with sufficient authority to pause, redirect, or fundamentally reshape projects when community concerns arise
• Establishing community oversight boards with substantive veto power and resources to monitor implementation
Building Equity Through Empowerment
By fundamentally redistributing decision-making authority and genuinely empowering marginalized communities, institutions can transform design processes from extractive exercises to collaborative partnerships. This shift ensures that design benefits flow equitably to all community members, not just those with pre-existing privilege. Such transformation demands more than good intentions—it requires concrete commitments to equity, justice, and collective accountability.
The Microsoft Inclusive Design Transformation
Restructuring Design Hierarchy
Microsoft fundamentally transformed its design process by establishing direct reporting channels between accessibility teams and executive leadership. This strategic restructuring ensured inclusive design considerations could not be sidelined or overridden by product managers focused solely on deadlines or feature development. Additionally, they created a protected budget specifically for community engagement that was safeguarded from reallocation to other priorities—even during tight financial cycles.
Elevating Accessibility Teams
This structural change demonstrates a commitment to inclusive design that transcends corporate rhetoric. By elevating accessibility specialists to positions with genuine organizational influence and providing them with unfiltered access to executive leadership, Microsoft ensures that inclusive design principles are embedded in strategic decisions at the highest levels of the organization. This repositioning signals to the entire company that accessibility is a core business value, not an optional consideration.
Dedicated Community Engagement
The protected budget for community engagement reinforces this commitment through tangible resource allocation. By dedicating specific funding for meaningful partnerships with marginalized communities, Microsoft ensures diverse voices directly influence product development from conception through launch. This approach has yielded measurable improvements in product accessibility and market reach, demonstrating how institutional transformation of design processes can simultaneously advance inclusion, equity, and business outcomes.
Regulatory Alignment
Anticipating Regulatory Changes
Visionary institutions position themselves ahead of regulatory evolution rather than merely reacting to it. As global regulations on environmental sustainability, accessibility, and data privacy grow increasingly stringent, organizations that proactively integrate these considerations into their design processes create significant competitive advantages while minimizing disruption.
Case Study: Proactive Compliance
Consider this example:
• European medical device leader Ottobock established a specialized regulatory forecasting team that maps emerging accessibility requirements across global markets
• Their “compliance plus” philosophy ensures designs exceed current standards by 20-30%, virtually eliminating costly redesigns when regulations tighten
Benefits of Forward-Thinking Regulation Strategy
Proactive regulatory alignment transforms compliance from a burden into a strategic asset. Organizations that embrace this approach not only mitigate financial and reputational risks but also establish themselves as industry leaders in responsible design. This strategic positioning requires continuous environmental scanning and a genuine commitment to ethical design principles that transcend minimum requirements.
Market Differentiation
Rising Consumer Expectations
The evolving landscape of consumer expectations presents strategic opportunities to harmonize responsible design with market advantage. Today’s consumers do not merely prefer but actively demand products and services that demonstrate ethical production standards, environmental sustainability practices, and social responsibility commitments. Organizations that authentically meet these heightened expectations can secure significant competitive advantages and cultivate deeply loyal customer relationships.
Real-World Success Stories
Consider these compelling examples:
• Herman Miller revolutionized the furniture industry through circular design principles, exemplified by their groundbreaking Aeron chair remanufacturing program
• This innovative initiative established a premium market position while substantially reducing material consumption and environmental impact
Creating Win-Win Outcomes
When organizations strategically align responsible design principles with market opportunities, they forge powerful win-win scenarios that simultaneously benefit business objectives and societal wellbeing. Success in this approach demands both nuanced understanding of evolving consumer expectations and unwavering commitment to developing innovative solutions that address these expectations while advancing sustainability goals.
Beyond Good Intentions
Concrete measurement systems are essential for true accountability. While noble intentions set the direction, only robust metrics can verify real progress in responsible design. Organizations must implement comprehensive measurement frameworks to track outcomes, identify improvement opportunities, and demonstrate genuine commitment.
Effective Measurement Systems
Leading examples include:
• IBM’s Responsible Design Dashboard, which provides quantifiable metrics across diverse product lines
• Google’s HEART framework (Happiness, Engagement, Adoption, Retention, Task success) that seamlessly integrates ethical dimensions into standard performance indicators
• Transparent annual responsible design audits with publicly accessible results that foster organizational accountability
Benefits of Implementation
By embracing data-driven measurement systems, organizations transform aspirational goals into verifiable outcomes. This approach demonstrates an authentic commitment to responsible design principles while creating a foundation for continuous improvement. The willingness to measure and transparently share both successes and challenges distinguishes truly responsible organizations from those with merely good intentions.
Incentive Restructuring
The Power of Aligned Incentives
Human behavior is fundamentally shaped by incentives. To foster responsible design practices, institutions must strategically align rewards systems with desired ethical outcomes. When designers and stakeholders are recognized and compensated for responsible design initiatives, they naturally prioritize these values in their work.
Implementation Strategies
Organizations are achieving this alignment through concrete approaches:
• Salesforce has integrated diversity and inclusion metrics directly into executive compensation packages, ensuring leadership accountability
• Leading firms like Frog Design have embedded responsible design outcomes as key criteria in employee performance reviews
• Structured recognition programs celebrate and amplify exemplary responsible design practices, increasing visibility and adoption
Creating a Culture of Responsible Design
Thoughtfully restructured incentives transform organizational culture by signaling what truly matters. When ethical, inclusive, and sustainable practices are rewarded, they become embedded in institutional values rather than treated as optional considerations. This transformation requires rigorous assessment of current incentive frameworks and bold leadership willing to realign reward systems with responsible design principles.
Institutional Culture and Learning Systems
Responsible design flourishes within robust learning ecosystems. Rather than a one-time achievement, responsible design represents an ongoing journey of discovery, adaptation, and refinement. Organizations must establish comprehensive learning infrastructures that nurture this evolutionary process and ensure design practices remain ethically sound, inclusive, and forward-thinking.
Key Components of Learning Infrastructure
An effective learning infrastructure incorporates:
• Rigorous post-implementation reviews that critically assess ethical outcomes and user impact
• Vibrant communities of practice that facilitate knowledge exchange and cross-pollination across departments
• Strategic partnerships with academic institutions to integrate cutting-edge ethical frameworks and research
• Diverse external advisory boards that provide constructive critique and alternative perspectives
Benefits of Learning Systems
By investing in robust learning infrastructure, organizations cultivate a culture of continuous improvement and adaptive excellence. These systems ensure responsible design practices evolve in response to emerging challenges, technological shifts, and evolving societal expectations. Success requires unwavering institutional commitment to evidence-based learning, collaborative problem-solving, and transparent communication across all levels of the organization.
The Philips Healthcare Example
The Responsibility Lab Initiative
Philips Healthcare established a groundbreaking “Responsibility Lab” where designers regularly rotate through immersive experiences with diverse users from various backgrounds and abilities. This innovative rotation system ensures that responsible design knowledge becomes deeply embedded across the organization rather than remaining isolated within a specialized team.
Benefits of Experiential Learning
This approach powerfully demonstrates how experiential learning catalyzes responsible design practices. By immersing designers directly in the lived experiences of diverse users, Philips enables them to develop profound insights into the ethical, social, and environmental implications of their design decisions—insights that could not be gained through traditional research methods alone.
Organizational Knowledge Distribution
The strategic rotation system ensures that valuable ethical design principles flow throughout the organization, transforming responsible design from a specialized function into a shared organizational capability. This case study exemplifies how institutions can build effective learning systems that not only foster a culture of responsible design but also make it an integral part of their operational DNA.
The Institutional Journey
A Continuous Transformation
Institutionalizing responsible design is not a destination but a dynamic journey of continuous evolution. It demands skillful navigation through competing priorities, entrenched power dynamics, and ever-shifting external pressures. Forward-thinking institutions recognize that responsible design is not merely adjacent to their core mission—it is fundamental to their long-term viability, relevance, and social license to operate in an increasingly conscientious marketplace.
Beyond Sporadic Initiatives
By addressing these dimensions systematically and holistically, organizations transcend fragmentary ethical initiatives to achieve truly institutionalized responsible design. This transformation creates environments where ethical considerations and responsible practices become the natural default—woven into the organizational DNA—rather than exceptional efforts requiring special attention or resources.
Embrace the Journey of Continuous Growth
Immerse yourself in a transformative journey that thrives on continuous learning, adaptive thinking, and cross-disciplinary collaboration. This mindset unlocks the potential for design practices that fuel a more just, equitable, and sustainable world. By embracing this profound shift, institutions can drive real change.
Achieving this radical transformation requires visionary leadership, ethical conduct, and an innovative culture. It demands the united courage to challenge outdated norms and champion a brighter future. When institutions embody this ethos, they become beacons of progress, inspiring others to follow suit.
The path forward is not without obstacles, but the rewards are immense. Institutions that lead with this mindset will not only transform their own practices but also catalyze systemic change across industries. They will set new standards, reshape markets, and pave the way for a more responsible, inclusive, and sustainable future.
In the latest episode of the “What If? So What?” podcast, Jim Hertzfeld had the pleasure of speaking with Efi Pylarinou, a renowned FinTech expert, speaker, and author of “The Fast Future Blur.” Efi shares her journey from Wall Street to becoming a leading voice in financial technology and innovation. The conversation covers a wide range of topics, including the concept of everywhere banking, the impact of AI, and the future of financial services.
Efi Pylarinou’s career has taken her from the cutthroat world of Wall Street to the serene landscapes of Switzerland. With a background in traditional financial services, Efi has witnessed firsthand the transformative power of technology in the industry and emphasizes the importance of adapting to new tech cycles and the challenges posed by legacy systems.
One of the key topics Jim and Efi discuss is everywhere banking, which encapsulates two industry trends: open banking and embedded finance. Efi explains that financial services are no longer confined to physical branches or mobile apps. Instead, banking can be integrated into commerce sites, travel platforms, and educational portals. This shift is driven by advancements in technology and changing consumer expectations.
Efi also highlights the critical role of AI in financial services. While AI is not new, recent advancements have opened up new possibilities for intelligent banking. However, she stresses that simply using AI as a tool is not enough. Businesses need to adopt an AI-native mindset to truly harness its potential.
Another significant trend is the evolution of digital identity and blockchain technology. Efi discusses how these innovations are revolutionizing the way we think about money and financial transactions. With more than 90% of central banks exploring digital currencies, the future of money is poised to change dramatically.
Listen to the full episode to stay updated on the latest trends in FinTech and financial services.
Subscribe to the “What If? So What?” podcast for more engaging conversations with industry experts.
Listen now on your favorite podcast platform or visit our website.
Apple | Spotify | Amazon | Overcast
Efi Pylarinou, Top Global Tech Thought Leader On FinTech
Dr. Efi Pylarinou is a seasoned Wall Street professional and ex-academic who has become a Top Global FinTech, LinkedIn, and Tech Thought Leader. Author of The Fast Future Blur, she’s also a domain expert with a Ph.D. in Finance, founder of the Financial Services Intelligence Hub, a prolific content creator, and Faculty at Fast Future Executive.
Connect with Efi
Jim Hertzfeld is Area Vice President, Strategy for Perficient.
For over two decades, he has worked with clients to convert market insights into real-world digital products and customer experiences that actually grow their business. More than just a strategist, Jim is a pragmatic rebel known for challenging the conventional and turning grand visions into actionable steps. His candid demeanor, sprinkled with a dose of cynical optimism, shapes a narrative that challenges and inspires listeners.
It’s typical to aim for 15-minute Standups, but how many times have your standups gotten side-tracked and suddenly more than a half-hour has gone by? These occurrences are not exactly my cup of tea…
Of course, sometimes topics need to be discussed, and planning a follow-up meeting will only slow down or delay resolution.
It’s important to keep Standups on-topic. If a topic needs deeper discussion, consider taking time after the Standup (I like to call it a Stay-After) with a smaller audience to cover “Tea-time” topics:
Likely, Standup meetings have all members of a team in attendance. To make the best use of everyone’s time, staying after Standup is a great opportunity to have a smaller, focused discussion with only the relevant team members. Typically, a Stay-After meeting is used to cover time-sensitive topics – “TEA”:
Stay-After meetings can be planned or unplanned.
Planned topics typically come up during the prior workday, usually when a team member requires clarification of a work assignment or wants to share information. The project manager can send an invite immediately following the next standup that contains the necessary attendees and agenda.
Unplanned topics typically arise during the Standup itself because of one of these scenarios:
It’s not uncommon that there may be both planned and unplanned topics for a Stay-After. The PM or team needs to determine which topics to give priority to for that specific day and time. De-prioritized topics may need to be addressed as part of a different meeting or as part of the next day’s Stay-After.
Like actual Standups, there is likely only limited time available to hold a Stay-After. Consider these tips to make sure the time is used most efficiently:
Taking advantage of Standup Stay-After “Tea-time” is a great way to make sure that all team members get a chance to participate in the daily Standups while still allowing time-sensitive topics to be addressed without delay. Consider these tips at your next Standup, and they will help get your team started off to a tea-rrific day.
As an AEM author, updating existing page content is a routine task. However, manual updates, like rolling out a new template, can become tedious and costly when dealing with thousands of pages.
Fortunately, automation scripts can save the day. Using Groovy scripts within AEM can streamline the content update process, reducing time and costs. In this blog, we’ll outline the key steps and best practices for using Groovy scripts to automate content updates.
Groovy is a powerful scripting language that integrates seamlessly with AEM. It allows developers to perform complex operations with minimal code, making it an excellent tool for tasks such as:
The Groovy Console for AEM provides an intuitive interface for running scripts, enabling rapid development and testing without redeploying code.
To illustrate how to use Groovy, let’s learn how to update templates for existing web pages authored inside AEM.
Our first step is to identify the following:
You should have source and destination template component mappings and page paths.
As a pre-requisite for this solution, you will need to have JDK 11, Groovy 3.0.9, and Maven 3.6.3.
1. Create a CSV File
The CSV file should contain two columns:
Save this file as template-map.csv.
Source,Target
"/apps/legacy/templates/page-old","/apps/new/templates/page-new"
"/apps/legacy/templates/article-old","/apps/new/templates/article-new"
2. Load the Mapping File in migrate.groovy
In your migrate.groovy script, insert the following code to load the mapping file:
def templateMapFile = new File("work${File.separator}config${File.separator}template-map.csv")
assert templateMapFile.exists() : "Template Mapping File not found!"
3. Implement the Template Mapping Logic
Next, we create a function to map source templates to target templates by utilizing the CSV file.
String mapTemplate(sourceTemplateName, templateMapFile) {
    /* This function uses the sourceTemplateName to look up the template we will use to create new XML */
    def template = ''
    assert templateMapFile : "Template Mapping File not found!"
    for (templateMap in parseCsv(templateMapFile.getText(ENCODING), separator: SEPARATOR)) {
        def sourceTemplate = templateMap['Source']
        def targetTemplate = templateMap['Target']
        if (sourceTemplateName.equals(sourceTemplate)) {
            template = targetTemplate
        }
    }
    assert template : "Template ${sourceTemplateName} not found!"
    return template
}
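For comparison, the same source-to-target lookup can be sketched in Python with the standard csv module. This is an illustration only, not part of the AEM Groovy solution; the mapping text mirrors the template-map.csv example above:

```python
import csv
import io

def map_template(source_template_name, csv_text):
    """Look up the target template for a given source template in the CSV mapping."""
    for row in csv.DictReader(io.StringIO(csv_text)):
        if row['Source'] == source_template_name:
            return row['Target']
    raise ValueError(f"Template {source_template_name} not found!")

mapping = '''Source,Target
"/apps/legacy/templates/page-old","/apps/new/templates/page-new"
"/apps/legacy/templates/article-old","/apps/new/templates/article-new"'''

print(map_template("/apps/legacy/templates/page-old", mapping))
# prints /apps/new/templates/page-new
```

As in the Groovy version, an unknown source template fails loudly rather than silently returning an empty value.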
After creating a package using Groovy script on your local machine, you can directly install it through the Package Manager. This package can be installed on both AEM as a Cloud Service (AEMaaCS) and on-premises AEM.
Execute the script in a non-production environment, verify that templates are correctly updated, and review logs for errors or skipped nodes. After running the script, check content pages to ensure they render as expected, validate that new templates are functioning correctly, and test associated components for compatibility.
Leveraging automation through scripting languages like Groovy can significantly simplify and accelerate AEM migrations. By following a structured approach, you can minimize manual effort, reduce errors, and ensure a smooth transition to the new platform, ultimately improving overall maintainability.
Don’t miss out on more AEM insights and follow our Adobe blog!
Choosing the right framework for your first cross-platform app can be challenging, especially with so many great options available. To help you decide, let’s compare Kotlin Multiplatform (KMP), React Native, and Flutter by building a simple “Hello World” app with each framework. We’ll also evaluate them across key aspects like setup, UI development, code sharing, performance, community, and developer experience. By the end, you’ll have a clear understanding of which framework is best suited for your first app.
Kotlin Multiplatform allows you to share business logic across platforms while using native UI components. Here’s how to build a “Hello World” app:
In the shared module, create a Greeting class with a function to return “Hello World”.
// shared/src/commonMain/kotlin/Greeting.kt
class Greeting {
fun greet(): String {
return "Hello, World!"
}
}
Build the UI with Jetpack Compose in the androidApp module. For iOS, use SwiftUI or UIKit in the iosApp module.

Android (Jetpack Compose):
// androidApp/src/main/java/com/example/androidApp/MainActivity.kt
class MainActivity : ComponentActivity() {
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContent {
Text(text = Greeting().greet())
}
}
}
iOS (SwiftUI):
// iosApp/iosApp/ContentView.swift
struct ContentView: View {
var body: some View {
Text(Greeting().greet())
}
}
Pros:
Cons:
React Native allows you to build cross-platform apps using JavaScript and React. Here’s how to build a “Hello World” app:
npx react-native init HelloWorldApp
Open App.js and replace its content with the following:
import React from 'react';
import { Text, View } from 'react-native';
const App = () => {
return (
<View style={{ flex: 1, justifyContent: 'center', alignItems: 'center' }}>
<Text>Hello, World!</Text>
</View>
);
};
export default App;
npx react-native start
Run the app on Android or iOS:
npx react-native run-android
npx react-native run-ios
Pros:
Cons:
Flutter is a UI toolkit for building natively compiled apps for mobile, web, and desktop using Dart. Here’s how to build a “Hello World” app:
flutter create hello_world_app
Open lib/main.dart and replace its content with the following:
import 'package:flutter/material.dart';
void main() {
runApp(MyApp());
}
class MyApp extends StatelessWidget {
@override
Widget build(BuildContext context) {
return MaterialApp(
home: Scaffold(
appBar: AppBar(title: Text('Hello World App')),
body: Center(child: Text('Hello, World!')),
),
);
}
}
flutter run
Pros:
Cons:
Initial setup issues may require troubleshooting with the flutter doctor command.

Best option: Flutter (for ease of initial setup).
Best option: A tie between KMP (for native UI flexibility) and Flutter (for cross-platform consistency).
Best option: Kotlin Multiplatform (for its focus on sharing business logic).
Winner: Kotlin Multiplatform (for native performance).
Best option: React Native (for its large and mature community), but Flutter is a close contender.
Best option: Flutter (for its excellent developer experience and tooling).
With the rise of AI tools like GitHub Copilot, ChatGPT, Gemini, and Claude, developers can significantly speed up app development. Let’s evaluate how each framework benefits from AI assistance:
Best option: React Native (due to JavaScript’s widespread support in AI tools).
There’s no one-size-fits-all answer. The best choice depends on your priorities:
Each framework has its strengths and weaknesses, and the best choice depends on your team’s expertise, project requirements, and long-term goals. For your first app, consider starting with Flutter for its ease of use and fast development, React Native if you’re a web developer, or Kotlin Multiplatform if you’re focused on performance and native UIs.
Try building a simple app with each framework to see which one aligns best with your preferences and project requirements.
This series of blog posts will cover the main areas of activity for your marketing, product, and UX teams before, during, and after site migration to a new digital experience platform.
Migrating your site to a different platform can be a daunting prospect, especially if the site is sizable in both page count and number of assets, such as documents and images. However, this can also be a perfect opportunity to freshen up your content, perform an asset library audit, and reorganize the site overall.
Once you’ve hired a consultant, like Perficient, to help you implement your new CMS and migrate your content over, you will work with them to identify several action items your team will need to tackle to ensure successful site migration.
Whether you are migrating from or to one of the major enterprise digital experience platforms like Sitecore, Optimizely, or Adobe, or from the likes of SharePoint or WordPress, there are some common steps to take to make sure content migration runs smoothly and is executed in a manner that adds value to your overall web experience.
One of the first questions you will need to answer is, “What do we need to carry over?” The instinctive answer would be everything. The rational answer is that we will migrate the site over as is and then worry about optimization later. There are multiple reasons why this is usually not the best option.
Even though this activity might take time, it is essential to use this opportunity in the best possible manner. A consultant like Perficient can help drive the process. They will pull up an initial list of active pages, set up simple audit steps, and ensure that decisions are recorded clearly and organized.
The first step is to ensure all current site pages are accounted for. As simple as this may seem, it doesn’t always end up being so, especially on large multi-language sites. You might have pages that are not crawlable, are temporarily unpublished, are still in progress, etc.
Depending on your current system capabilities, putting together a comprehensive list can be relatively easy. Getting a CMS export is the safest way to confirm that you have accounted for everything in the system.
Crawling tools, such as Screaming Frog, are frequently used to generate reports that can be exported for further refinement. Cross-referencing these sources will ensure you get the full picture, including anything that might be housed externally.
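The cross-referencing step can be sketched with simple set operations. The URL lists below are hypothetical, assuming the CMS export and the crawler report have both been reduced to plain page paths:

```python
# Hypothetical URL lists: one from a CMS export, one from a crawler report.
cms_export = {"/en/home", "/en/about", "/en/unpublished-draft"}
crawl_report = {"/en/home", "/en/about", "/en/orphaned-landing-page"}

all_pages = cms_export | crawl_report      # the full picture for the audit
not_crawlable = cms_export - crawl_report  # unpublished, blocked, or in-progress pages
external_only = crawl_report - cms_export  # pages living outside the CMS

print(sorted(all_pages))
```

Pages appearing in only one source are exactly the ones that deserve a closer look before the audit list is declared complete.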
Once you’ve ensured that all pages made it to a comprehensive list you can easily filter, edit, and share, the fun part begins.
The next step involves reviewing and analyzing the sitemap and each page. The goal is to determine which pages will stay and which are candidates for removal. Various factors can impact this decision, from business goals, priorities, page views, conversion rates, and SEO considerations to marketing campaigns, compliance, and regulations. Ultimately, it is important to assess each page’s value to the business and make decisions accordingly.
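As a toy illustration of how such audit decisions might be recorded, the sketch below applies purely hypothetical thresholds (page views, conversions, a compliance flag); a real audit would weigh many more factors and involve human judgment:

```python
def audit_decision(page):
    """Classify a page as keep or kill using toy thresholds (illustrative only)."""
    if page["views"] >= 1000 or page["conversions"] > 0:
        return "keep"
    if page["compliance_required"]:
        return "keep"  # regulatory pages stay regardless of traffic
    return "kill"

pages = [
    {"path": "/en/home", "views": 50000, "conversions": 120, "compliance_required": False},
    {"path": "/en/old-campaign", "views": 12, "conversions": 0, "compliance_required": False},
    {"path": "/en/privacy", "views": 40, "conversions": 0, "compliance_required": True},
]

for p in pages:
    print(p["path"], "->", audit_decision(p))
```

Even a crude rule set like this makes the decisions explicit and reviewable, which is the real point of documenting the audit.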
This audit will likely require input from multiple stakeholders, including subject matter experts, product owners, UX specialists, and others. It is essential to involve all interested parties at an early stage. Securing buy-in from key stakeholders at this point is critical for the following phases of the process. This especially applies to review and sign-off prior to going live.
Depending on your time and resources, the keep-kill-merge can either be done in full or limited to keep-kill. The merge option might require additional analysis, as well as follow-up design and content work. Leaving that effort for after the site migration is completed might just be the rational choice.
Once the audit process has been completed, it is important to record findings and decisions simply and easily consumable for teams that will implement those updates. Proper documentation is essential when dealing with large sets of pages and associated content. This will inform the implementation team’s roadmap and timelines.
At this point, it is crucial to establish regular communication between a contact person (such as a product owner or content lead) and the team in charge of content migration from the consultant side. This partnership will ensure that all subsequent activities are carried out respecting the vision and business needs identified at the onset.
Completing the outlined activities properly will help smooth the transition into the next process phase, thus setting your team up for a successful site migration.
Python is an open-source programming language. Much as with Terraform or other IaC code, we can use Python to provision AWS services. In this blog, we are going to discuss setting up the CloudFront service using Python.
As we know, Python is an imperative language. This means that you can write more customized scripts that can perform advanced complex operations, handle errors, interact with APIs, etc. You also have access to AWS SDKs like Boto3 that allow you to perform any AWS operation you desire, including custom ones that might not yet be supported by Terraform.
The boto3 library defines methods and classes for AWS services that we can use to create, modify, and update resources.
We require only Python and the Boto3 library.
As we know, boto3 exposes dedicated functions for each AWS service. There are many of them, but the basic functions for managing the CloudFront service include create_distribution, get_distribution, update_distribution, delete_distribution, list_distributions, and create_invalidation.
create_distribution and update_distribution also require a large set of configuration values. You can build a Python dictionary and pass it to the function, or you can supply the configuration as JSON, but in that case you have to parse it into a dictionary first.
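To illustrate the JSON approach, here is a minimal sketch (the JSON content and keys are illustrative, a subset of a real distribution config) that parses a JSON configuration into the dictionary boto3 expects:

```python
import json

# Configuration kept as JSON (e.g., loaded from a file checked into source control).
config_json = """
{
  "CallerReference": "unique-reference-001",
  "Comment": "My CloudFront Distribution",
  "Enabled": true,
  "PriceClass": "PriceClass_100"
}
"""

# json.loads turns the JSON text into a plain Python dict, which can then be
# passed as DistributionConfig to create_distribution.
distribution_config = json.loads(config_json)

print(distribution_config["PriceClass"])  # PriceClass_100
```

The same pattern works with json.load over an open file handle if the configuration lives on disk.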
Let me share with you a basic example of creating CloudFront distribution using Python & boto3:
import boto3
import os

s3_origin_domain_name = '<s3bucketname>.s3.amazonaws.com'
origin_id = 'origin-id'

distribution_config = {
    # CallerReference must be unique per request; note str(hash(...)) varies
    # between Python runs due to hash randomization.
    'CallerReference': str(hash("unique-reference")),
    'Comment': 'My CloudFront Distribution',
    'Enabled': True,
    'Origins': {
        'Quantity': 1,
        'Items': [
            {
                'Id': origin_id,
                'DomainName': s3_origin_domain_name,
                'S3OriginConfig': {'OriginAccessIdentity': ''},
                'CustomHeaders': {'Quantity': 0, 'Items': []}
            }
        ]
    },
    'DefaultCacheBehavior': {
        'TargetOriginId': origin_id,
        'ViewerProtocolPolicy': 'redirect-to-https',
        'AllowedMethods': {
            'Quantity': 2,
            'Items': ['GET', 'HEAD'],
            'CachedMethods': {'Quantity': 2, 'Items': ['GET', 'HEAD']}
        },
        'ForwardedValues': {
            'QueryString': False,
            'Cookies': {'Forward': 'none'}
        },
        'MinTTL': 3600
    },
    'ViewerCertificate': {'CloudFrontDefaultCertificate': True},
    'PriceClass': 'PriceClass_100'
}

try:
    # boto3 reads these environment variables automatically; passing them
    # explicitly here just makes the dependency visible.
    aws_access_key = os.getenv('AWS_ACCESS_KEY_ID')
    aws_secret_key = os.getenv('AWS_SECRET_ACCESS_KEY')
    session = boto3.Session(
        aws_access_key_id=aws_access_key,
        aws_secret_access_key=aws_secret_key,
        region_name='us-east-1'
    )
    client = session.client('cloudfront')
    response = client.create_distribution(DistributionConfig=distribution_config)
    print("CloudFront Distribution created successfully!")
    print(response)
except Exception as e:
    print(f"Error creating CloudFront distribution: {e}")
As you can see in the sample code above, after importing the boto3 module we build the distribution_config variable, where all the configuration is stored. After that, we call the create_distribution function to create the CDN distribution:
response = client.create_distribution(DistributionConfig=distribution_config)
In a similar way, you can write more complex Python code to implement your AWS infrastructure and, for example, automate a cache invalidation request pipeline, allowing users to clear the CDN cache without logging in to the AWS console.
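As a sketch of that invalidation idea, the hypothetical helper below (build_invalidation_batch is our own name, not a boto3 API) assembles the InvalidationBatch structure that CloudFront's create_invalidation call expects; the live boto3 call is shown in comments since it requires AWS credentials and a real distribution ID:

```python
import time

def build_invalidation_batch(paths):
    """Build the InvalidationBatch payload for cloudfront.create_invalidation."""
    return {
        'Paths': {
            'Quantity': len(paths),
            'Items': list(paths)
        },
        # CallerReference must be unique per invalidation request;
        # a millisecond timestamp is a simple way to achieve that.
        'CallerReference': str(int(time.time() * 1000))
    }

batch = build_invalidation_batch(['/index.html', '/assets/*'])
print(batch['Paths']['Quantity'])  # 2

# With a boto3 CloudFront client (requires credentials), the call would be:
# client.create_invalidation(
#     DistributionId='<distribution-id>',
#     InvalidationBatch=batch,
# )
```

Wrapping this in a small CLI or pipeline step is what lets non-admin users trigger cache clears safely.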
Sometimes, purely looking at an Azure DevOps backlog or board may not tell the right story about progress toward a specific goal. At first glance it may seem like a horror story, but in reality that is often not the case; the data simply needs to be read, and conveyed, in the right way.
Though Azure DevOps provides multiple ways to view work items, it also provides a powerful reporting capability in terms of writing queries and configuring dashboards.
Work items in Azure DevOps contain various fields that can feed data reports. To make that data meaningful, however, the right queries and the use of dashboards help present the precise state of the work.
Every Azure DevOps query should have a motive. Fields on work items are attributes which can help to provide an answer. Let us look at a few use cases and how those queries are configured.
Example 1: I want to find all Bugs in my project that are not in a State of ‘Closed’ or ‘Removed’ and which contain a tag ‘CMS’. I can use the work item fields ‘Work Item Type,’ ‘State,’ and ‘Tags’ to find any matches.
Example 2: I want to find all Bugs that are Severity 1 or Severity 2 that are not Closed or Resolved (I want to see only Severity 1 or 2 Bugs that are in New or Active State.) In this example, I have grouped the 2 rows for Severity to be an ‘Or’ condition. This allows me to get results that include both Severity 1 and Severity 2 results.
Example 3: I want to find all Bugs that contain the Tag “missing requirement” which were created on or after November 5, 2024. Another helpful attribute to report on is by Date – in this example, I am querying for results created after a specific date, but you can change the operator or set a date range for further control of results.
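Queries like these can also be expressed in WIQL (Work Item Query Language) and run through the Azure DevOps REST API's WIQL endpoint. The sketch below builds the WIQL text for Example 1 and the JSON payload that endpoint expects; the organization and project in the commented URL are placeholders:

```python
import json

# WIQL equivalent of Example 1: open Bugs tagged 'CMS'.
wiql = (
    "SELECT [System.Id], [System.Title], [System.State] "
    "FROM WorkItems "
    "WHERE [System.WorkItemType] = 'Bug' "
    "AND [System.State] NOT IN ('Closed', 'Removed') "
    "AND [System.Tags] CONTAINS 'CMS'"
)

# The WIQL endpoint takes a JSON body with a single 'query' field, e.g.:
# POST https://dev.azure.com/{organization}/{project}/_apis/wit/wiql?api-version=7.1
payload = json.dumps({"query": wiql})

print(payload)
```

The same shape covers Examples 2 and 3: the Severity grouping becomes a parenthesized OR clause, and the date filter becomes a comparison on [System.CreatedDate].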
Tips:
Having these queries is great if you need a list of work items or if you want to make bulk updates for items which match your criteria. However, we can take these results a step further by visualizing the data onto a Dashboard.
Sometimes visuals can help to better portray a story; the same can be true when reporting on a project’s progress.
Out-of-the-box, Azure DevOps provides a variety of widget types which can be used to configure a Dashboard. Some widgets require the use of a Query, while others are purely based on settings you define.
Here are a few examples of widgets I use most often:
Tips:
Identify what is most important for your team or client to know, monitor, or be aware of. Once you have the data you need, you will be better equipped to explain progress and status to your team and the client.
In my personal experience, here are some types of dashboards I have found effective for my clients or team members:
Example of an Executive Dashboard, using Burndown, Chart for Work Items, and Query Tile widgets:
With each of these dashboards, I wrote unique queries to find the data my team or client most often needed to reference. This let them know whether we were on track or whether some action was needed.
By having a precise way to tell the story of your project, you will find that your team makes better decisions when the right data is available to them, leading to a happy project ending.