Adobe GenStudio for Marketers in 5 Minutes (https://blogs.perficient.com/2025/03/21/adobe-genstudio-for-marketers-in-5-minutes/)

Adobe launched GenStudio for Performance Marketing and has made many improvements and updates leading up to Adobe Summit 2025. We’ve had an opportunity to use it here at Perficient, and have discovered a number of exciting features (along with nuances) of the product.

We see an evolving future for its rollout, especially as more and more marketing teams adopt its capabilities into their own digital marketing ecosystems.

What GenStudio Is 

GenStudio may very well be a marketer’s dream. We do see it as a game-changer for how marketing content is created, activated, and measured. That’s because it greatly reduces the amount of time that is typically required to request, build, assemble, review, and publish content for marketing campaigns.

These content creation workflows can now be handled by GenStudio’s AI capabilities. Not only that, but the generated content can follow the brand standards and guidelines established in GenStudio.

Some of the main features to highlight: 

  • An AI-first approach 
  • Brand scoring based on generated content, with feedback review 
  • Integrations with existing brand-approved assets 
  • Extensibility options 
  • Channel activation directly from GenStudio 

We’d like to note that although much of the content creation is powered by generative AI, human review is always a part of the approval and publication process.

GenStudio Use Cases 

Adobe has described several use cases that GenStudio can address.

  • Reuse of content from previous campaigns across channels 
  • Personalization of content to products, offers, and segments 
  • Content localization across geos and languages 
  • Content optimization based on insights 

Our experience so far has been focused on the content creation process, and seeing how our content looks and behaves in some of our channels. We look forward to creating personalized experiences, along with seeing how the content performs based on things like Content Analytics, recently announced at Adobe Summit. 

The Process

After onboarding, defining users and groups, and establishing some processes for adopting GenStudio, the first step is to establish the Brand Guidelines.  

Brand Setup 

New Brands can be created (along with Storage Permissions) within the interface, either using a guidelines document or manually.    

Expert Tip: Use a PDF document that has all your brand guidelines defined to upload, and GenStudio will create the various guidelines based on the document(s).   

Once a brand is uploaded, review the guidelines, add new ones, and make necessary adjustments.  

The following example illustrates the WKND brand: 

GenStudio for Adobe's fictitious WKND Brand

Note that the permissions to edit and publish a brand should be kept to brand owners. Changes to the brand which are then published may also impact other systems that use these brand guidelines, such as Adobe Experience Manager, or Orchestration Agents. 

Once the brand has been published, it can then be used to generate emails, meta ads, banner ads, and other display ads.  

Content Creation 

Content creation is based on templates. These templates allow the creation of content in a way that may greatly reduce the time needed to build out content with existing tools. What we would like to see eventually from Adobe in this area is the ability to create and design layouts within the tool, as opposed to having to upload HTML files that need to adhere to certain frameworks. Another approach may be a process that references existing layouts, such as emails from Marketo or Experience Fragments in AEM, and brings them into GenStudio.

Assets can also be brought into GenStudio and then used in generating content. Assets that are managed in AEM as a Cloud Service can also be used.  

Note: The Assets that are part of AEMaaCS need to be marked as “Approved” before being made available in GenStudio. Assets can also be sourced from ContentHub.  

Expert tip: Because there are several ways of sourcing Assets that are brought into GenStudio, we suggest working with a partner such as Perficient to guide these processes. 

Example content generation for an event at Adobe Summit:
GenStudio Content Generation

Content Review 

After the content creation process, content can then be sent for approval. For example, in the above display ad, a content reviewer may ask for re-phrasing to help improve the brand score, if appropriate. Once approved, the content is then published as part of a campaign and can be downloaded in the form of HTML, images, or CSV files for publication. 

Content Activation 

Activating content can also be done on various channels such as Meta, Google Campaign Manager 360, and others. (Note that as of this writing, 3/19/25, the only channel available for activation is Meta.) Once these additional channels are rolled out, we look forward to exploring those capabilities and insights based on those channels, which is another feature available as part of GenStudio.  

Excited About the Future of GenStudio 

We’re excited about the features that Adobe GenStudio for Performance Marketing provides now, and what will be rolled out over time as features become available. Working with the tool itself feels slick, and having the Generative AI features built on top of it makes us feel like we’re really using some cutting-edge technologies. 

Perficient Achieves AWS Glue Service Delivery Designation (https://blogs.perficient.com/2025/03/19/perficient-achieves-aws-glue-service-delivery-designation/)

Perficient has earned the AWS Glue Service Delivery Designation, demonstrating our deep technical expertise and proven success in delivering scalable, cost-effective, and high-performance data integration, data pipeline orchestration, and data catalog solutions.

What is the AWS Service Delivery Program?

The AWS Service Delivery Program is an AWS Specialization Program designed to validate AWS Partners with deep technical knowledge, hands-on experience, and a history of success in implementing specific AWS services for customers.

By achieving the AWS Glue specialization, Perficient is now recognized as a trusted partner to help organizations unlock the full potential of their data—from discovery and transformation to governance and automation.

What This Means for Customers

With the AWS Glue Service Delivery Designation, Perficient provides customers with a faster, more reliable, and cost-effective approach to data transformation, integration, and analytics. This designation translates into tangible business outcomes:

  • Accelerated Time-to-Insight – Automate and streamline ETL processes, enabling real-time and predictive analytics that drive smarter decision-making.
  • Cost Efficiency & Scalability – Reduce operational overhead with a serverless, pay-as-you-go model, ensuring businesses only pay for what they use.
  • Enhanced Data Governance & Compliance – Leverage a centralized, searchable data catalog for better data discovery, security, and compliance across industries.
  • Seamless Data Integration – Connect structured and unstructured data from multiple sources, improving accessibility and usability across the enterprise.
  • Future-Ready Data Strategy – Enable AI/ML-powered insights by preparing data pipelines that fuel advanced analytics and innovation.

Why Perficient?

Perficient is an AWS Advanced Services Partner dedicated to helping enterprises transform and innovate through cloud-first solutions. We specialize in delivering end-to-end data and cloud strategies that drive business growth, efficiency, and resilience.

With deep expertise in AWS Glue and broader AWS analytics services, we empower organizations to modernize their data ecosystems, optimize cloud infrastructure, and harness AI-driven insights. Our industry-focused solutions enable companies to unlock new business value and gain a competitive edge.

Beyond technology, we are committed to building long-term partnerships, solving complex challenges, and making a positive impact in the communities where we operate.

HCL Commerce Containers Explained (https://blogs.perficient.com/2025/03/19/hcl-commerce-containers-explained/)

In this blog, we will explore the various Containers, their functionalities, and how they interact to create a seamless customer shopping experience.

HCL Commerce Containers provide a modular and scalable approach to managing ecommerce applications.

Benefits of HCL Commerce Containers

  • Improved Performance: The system becomes faster and more responsive by caching frequent requests and optimizing search queries.
  • Scalability: Each Container can be scaled independently based on demand, ensuring the system can handle high traffic.
  • Manageability: Containers are designed to perform specific tasks, making the system easier to monitor, debug, and maintain.

 HCL Commerce Containers are individual components that work together to deliver a complete e-commerce solution.

Different Commerce Containers

  1. Cache app: This app implements caching mechanisms to store frequently accessed data in memory, reducing latency and improving response times for repeated requests.
  2. Nextjs-app: This app utilizes the Next.js framework to build server-side rendered (SSR) and statically generated (SSG) React applications. It dynamically interfaces with backend services like store-web or ts-web to fetch and display product data.
  3. Query-app: Acts as a middleware for handling search queries. It leverages Elasticsearch for full-text search capabilities and integrates with the cache app to enhance search performance by caching query results.
  4. Store-web: It handles the user interface and shopping experience, including browsing products, adding items to the cart, and checking out.
  5. Ts-app, Ts-web, Ts-utils:
    • Ts-app: Manages background processes such as order processing, user authentication, and other backend services.
    • Ts-web: This container is for the administrative tools. It supports tasks like cataloging, marketing, promotions, and order management, providing administrators and business users the necessary tools.
    • Ts-utils: Contains utility scripts and tools for automating routine tasks and maintenance operations.
  6. Ingest-app, Nifi-app:
    • Ingest-app: Handles the ingestion of product and catalog data into Elasticsearch, ensuring that the search index is current.
    • Nifi-app: This app utilizes Apache NiFi for orchestrating data flow pipelines. It automates the extraction, transformation, and loading (ETL) processes, ensuring data consistency and integrity across systems.
  7. Registry app: This app implements a service registry to maintain a directory of all microservices and their instances (Containers). It facilitates service discovery and load balancing within the microservices architecture.
  8. Tooling-web: Provides a suite of monitoring and debugging tools for developers and administrators. It includes dashboards for tracking system performance, logs, and metrics to aid in troubleshooting and maintaining system health.
HCL Commerce containers

Conclusion

This blog explored the various HCL Commerce Containers, their functionalities, and how they work together to create a robust e-commerce solution. By understanding and implementing these Containers, you can enhance the performance and scalability of your e-commerce platform.

To learn about deploying HCL Commerce Elasticsearch- and Solr-based solutions, please go through this link: https://blogs.perficient.com/2024/12/11/deploying-hcl-commerce-elasticsearch-and-solr-based-solutions/

Disabling Cookies in Sitecore SDKs (https://blogs.perficient.com/2025/03/12/disabling-cookies-in-sitecore-sdks/)

Intro 📖

In this post, we’ll take a look at a couple Sitecore SDKs commonly used to build XM Cloud head applications. Specifically, we’ll be looking at how to disable cookies used by these SDKs. This can be useful for data privacy and/or regulatory compliance reasons. These SDKs allow your application to integrate with other composable Sitecore services like analytics, personalization, and search. The cookies these SDKs use need to be considered as part of your application’s overall data protection strategy.

It’s worth noting that, even without any additional SDKs, an XM Cloud head application can issue cookies; see XM Cloud visitor cookies for more information.

Sitecore Cloud SDK ☁

The Sitecore Cloud SDK allows developers to integrate with Sitecore’s Digital Experience Platform (DXP) products. These include Sitecore CDP, Sitecore Personalize, etc. You can read the official documentation here. To learn more about the first-party cookies used by this SDK, see Cloud SDK cookies. These cookies include:

  • sc_{SitecoreEdgeContextId}
    • Stored in the browser when the SDK’s initialization function is called (more on this function later).
  • sc_{SitecoreEdgeContextId}_personalize
    • Needed to run A/B/n tests; configured in the addPersonalize() function. Also stored in the browser when the initialization function is called.

📝 Sitecore are actively working on integrating the disparate Sitecore SDKs into the Sitecore Cloud SDK. The latest version, 0.5, was released on January 29, 2025, and added search capabilities (see the XM Cloud changelog entry here). As Sitecore’s Technical Product Manager Christian Hahn put it in this recent Discover Sitecore YouTube video:

“…[the] Cloud SDK is not another Sitecore SDK–it is the Sitecore SDK.”

It’s safe to assume that, eventually, the Sitecore Cloud SDK will be the only Sitecore SDK developers need to include in their head applications to integrate with any other Sitecore DXP offerings (which will be nice 👌).

ℹ For the remainder of this post, assume that a pre-0.5 version of the Cloud SDK is in use, say, 0.3.0—any version that doesn’t include search widgets (such that the Search JS SDK for React is still required).

Sitecore Search JS SDK for React 🔍

The Search JS SDK for React allows developers to create components such as search bars, search results components, etc. These components interact with search sources defined and indexed in Sitecore Search. You can read the official documentation here. While the latest version of the Cloud SDK includes some search dependencies, for older Next.js applications using older versions of the Cloud SDK, the Search JS SDK for React can still be used to build search interfaces.

The Search JS SDK for React uses a cookie to track events context called __ec (reference). This SDK is historically based on Sitecore Discover whose cookies are similarly documented here, e.g., __rutma.

ℹ For the remainder of this post, assume that version 2.5.5 of the Search JS SDK for React is in use.

The Problem 🙅‍♂️

Let’s say your XM Cloud project leverages JSS for Next.js, including the multisite add-on. This add-on (which is included in the official starter kit by default) allows a single Next.js application to drive multiple headless sites. Next, let’s assume that some of these sites operate outside of the United States and are potentially subject to different data protection and privacy laws. Finally, let’s assume that not all of the sites will use the full feature set from these SDKs. For example, what if a couple of the sites are small and don’t need to integrate with Sitecore Search at all?

How do you disable the cookies written to the browser when the Search SDK’s <WidgetsProvider> component is initialized? Even though the smaller sites aren’t using search widgets on any of their pages, the <WidgetsProvider> component is (usually) included in the Layout.tsx file and is still initialized. We don’t want to remove the component since other sites do use search widgets and require the <WidgetsProvider> component.

Can these SDKs be configured to (conditionally) not create cookies on the client browser?

The Solution ✅

🚨 First and foremost (before we get into how to disable cookies used by these SDKs), know that you must ensure that your application is compliant with any and all data privacy and data protection laws to which it is subject. This includes allowing users to opt-out of all browser cookies. Cookie preferences, their management, third-party solutions, GDPR, CCPA, etc. are all great topics but are well outside the scope of this post. To get started, refer to Sitecore’s documentation on data privacy to understand who is responsible for what when building an XM Cloud application.

With that small disclaimer out of the way, the programmatic hooks discussed in the sections below can be used in conjunction with whatever cookie management solution that makes sense for your application. Let’s assume that, for these smaller sites operating in different geographies that require neither CDP nor search, we just want to disable cookies from these SDKs altogether.

To disable Cloud SDK cookies:

The short version: just don’t call the SDK’s init() function 😅. One way this can be done is to add an environment variable and check its value within the .\src\<rendering-host>\src\lib\context\sdk\events.ts file and either return early or throw before the call to Events.init():

import * as Events from '@sitecore-cloudsdk/events/browser';
import { SDK } from '@sitecore-jss/sitecore-jss-nextjs/context';

const sdkModule: SDK<typeof Events> = {
  sdk: Events,
  init: async (props) => {
    // Events module can't be initialized on the server side
    // We also don't want to initialize it in development mode
    if (typeof window === 'undefined')
      throw 'Browser Events SDK is not initialized in server context';
    if (process.env.NODE_ENV === 'development')
      throw 'Browser Events SDK is not initialized in development environment';
    // We don't want to initialize if the application doesn't require it
    if (process.env.DISABLE_CLOUD_SDK === 'true') // <===== HERE
      throw 'Browser Events SDK is not initialized for this site';

    await Events.init({
      siteName: props.siteName,
      sitecoreEdgeUrl: props.sitecoreEdgeUrl,
      sitecoreEdgeContextId: props.sitecoreEdgeContextId,
      // Replace with the top level cookie domain of the website that is being integrated e.g ".example.com" and not "www.example.com"
      cookieDomain: window.location.hostname.replace(/^www\./, ''),
      // Cookie may be created in personalize middleware (server), but if not we should create it here
      enableBrowserCookie: true,
    });
  },
};

export default sdkModule;

By not calling Events.init(), the cookies aren’t written to the browser.

📝 Note that in newer versions of the XM Cloud starter kit using the Cloud SDK, the initialize function may be in the Bootstrap.tsx file; however, the same principle applies—don’t call the initialize() function by either returning early or setting up conditions such that the function is never called.

For consistency, assuming your application uses the OOTB CdpPageView.tsx component, you’d probably want to do something similar within that component. By default, page view events are turned off when in development mode. Simply add another condition to ensure that the return value of disabled() is true:

import {
  CdpHelper,
  LayoutServicePageState,
  useSitecoreContext,
} from '@sitecore-jss/sitecore-jss-nextjs';
import { useEffect } from 'react';
import config from 'temp/config';
import { context } from 'lib/context';

/**
 * This is the CDP page view component.
 * It uses the Sitecore Cloud SDK to enable page view events on the client-side.
 * See Sitecore Cloud SDK documentation for details.
 * https://www.npmjs.com/package/@sitecore-cloudsdk/events
 */
const CdpPageView = (): JSX.Element => {
  ...
  /**
   * Determines if the page view events should be turned off.
   * IMPORTANT: You should implement based on your cookie consent management solution of choice.
   * By default it is disabled in development mode
   */
  const disabled = () => {
    return process.env.NODE_ENV === 'development' || process.env.DISABLE_CLOUD_SDK === 'true'; // <===== HERE
  };
  ...
  return <></>;
};

export default CdpPageView;

To disable Search JS SDK for React (Sitecore Discover) cookies:

The <WidgetsProvider> component (imported from @sitecore-search/react) includes a property named trackConsent (documented here) and it controls exactly that—whether or not tracking cookies related to visitor actions are created. Setting the value of this property to false disables the various cookies. In the Layout.tsx file, assuming we added another environment variable, the code would look something like this:

/**
 * This Layout is needed for Starter Kit.
 */
import React from 'react';
...
import { Environment, WidgetsProvider } from '@sitecore-search/react';

const Layout = ({ layoutData, headLinks }: LayoutProps): JSX.Element => {
  ...
  return (
    <>
      ...
        <div className="App">
          <WidgetsProvider
            env={process.env.NEXT_CEC_APP_ENV as Environment}
            customerKey={process.env.NEXT_CEC_CUSTOMER_KEY}
            apiKey={process.env.NEXT_CEC_API_KEY}
            publicSuffix={true}
            trackConsent={!(process.env.DISABLE_TRACK_CONSENT === 'true') /* <===== HERE */}
          >
            ...
          </WidgetsProvider>
        </div>
      ...
    </>
  );
};

export default Layout;

If trackConsent is false, then the various __r… cookies are not written to the browser.

⚠ It’s worth mentioning that, by default, trackConsent is true. To opt-out of cookies, developers must set the property to false.

 

Whether you control the use of cookies by using environment variables as described in this post or by integrating a more complex cookie preference and consent management system, the onus is on you and your XM Cloud head application to avoid using cookies without a user’s consent.

Thanks for the read! 🙏

Activate to SFTP from Salesforce Data Cloud (https://blogs.perficient.com/2025/03/12/activate-to-sftp-from-salesforce-data-cloud/)

SFTP?  Isn’t that old?

It is an oldie, but a goodie.  🙂

With Data Cloud we can send data to many external destinations like Marketing Cloud Engagement or Amazon S3 through Activation Targets.  But there are times we are working with a destination system like Eloqua or Marketo that has solid support for SFTP.  SFTP and Data Cloud work well together!

Even with Marketing Cloud Engagement you might want to get data flowing into Automation Studio instead of pushing directly to a Data Extension or Journey.  SFTP would allow that CSV file to flow into Automation Studio, where an SSJS script, for example, could loop through those rows and send mass SMS messages.

Is it secure?

Yes, as we will see in this blog post the SFTP setup through Data Cloud supports both a SSH Key with a Passphrase and a Password on the SFTP site itself.

Let’s connect to Marketing Cloud Engagement through SFTP!

There are five main pieces to setup and test this.

  1. Create a new SSH Key
  2. Configure the SFTP Site in Marketing Cloud Engagement
  3. Test the SFTP Connection using a tool like FileZilla
  4. Configure that SFTP Activation Target in Data Cloud
  5. Build a Segment and Activation to leverage that SFTP Activation Target

This will feel like a lot of steps, but it really does not take that long to do.  Leveraging these out of the box Activation Targets, like this SFTP one, is going to save tons of time in the long run.

1. Create the new SSH Key

Here is a good blog post to introduce you to what a SSH Key is and how it works.  https://www.sectigo.com/resource-library/what-is-an-ssh-key

Here are a couple of good articles on how to generate a SSH Key.

  1. https://www.purdue.edu/science/scienceit/ssh-keys-windows.html
  2. https://www.ssh.com/academy/ssh/keygen

It is very important to note that Marketing Cloud only accepts SSH keys generated a certain way:   https://help.salesforce.com/s/articleView?id=000380791&type=1

I am on a Windows machine so I am going to open a command prompt and use the OpenSSH command.

Sftp 01

Once in the command prompt type the ssh-keygen command.

Sftp 02

Now enter your filename.

Sftp 03

Now enter your passphrase.  This is basically a password that is tied to your SSH Key to make it harder to break.  This is different than your SFTP password that will be set on the Marketing Cloud Engagement side.

Sftp 04

Now that your passphrase was entered twice correctly the SSH Key is generated.

Sftp 06

When using the command prompt the files were automatically created in my C:\Users\Terry.Luschen directory.

Sftp 07

Now, in the command prompt, as stated in step 3 of the Salesforce documentation above, you need to run one final command.

Change the key to an RFC4716 (SSH2) key format

  1. ssh-keygen -e -f originalfilename.pub > newfilename
  2. So in our example above my command was
    1. ssh-keygen -e -f MCE_SSH_01.pub > MCE_SSH_01b
      Sftp 12

The three files will look something like:

  1. MCE_SSH_01.pub – This is the Public Key file to be loaded into Marketing Cloud Engagement.
  2. MCE_SSH_01 – This is the Private Key file which we will use to load into Data Cloud and FileZilla
  3. MCE_SSH_01b – This is another Public Key file that can be used to load into Marketing Cloud Engagement

I opened the .pub file and removed the comment.

I also added a file extension of .txt to the MCE_SSH_01b file so it is now named MCE_SSH_01b.txt

Now that we have generated our SSH files we can upload the Public Key to Marketing Cloud Engagement.

2. Configure the SFTP Site in Marketing Cloud Engagement

Log into Marketing Cloud Engagement

Go to Setup, Administration, Data Management, Key Management

Sftp 08

Click ‘Create’ on the ‘Key Management’ page

Sftp 09

Fill out the ‘New Key’ details.

Make sure SSH is selected.

Select the ‘Public’ Key file you created earlier which has the .pub extension.

Check the ‘Public Key’ checkbox.

Sftp 10

Save the Key

Now go to Setup, Administration, Data Management, FTP Accounts

Sftp 14

Use the ‘Create User’ button to create a new User.

Sftp 15

Fill out the new FTP User page by entering an email address and password.  Note this is different from the passphrase created above that was tied to the SSH Key.  Click on Next.

Sftp 16

Select the ‘SSH Key and Password’ radio button.   Use the file picker to select the Marketing Cloud Key you just created above.  Click on Next.

Sftp 17

Select the type of security you need.  In this screen shot everything is selected but typically you should only select the checkboxes that are absolutely necessary.  Click on Next.

Sftp 18

If you need to restrict access to certain IPs, fill out this screen.  In our example we are not restricting access to only Data Cloud IPs.  Click on Next.

Sftp 19

Typically you would leave this screen as is. It allows the Root folder as the default and then when you configure the tool that will send data to the SFTP site you can select the exact folder to use.  Click on Save.

Sftp 20

Yeah! You have now configured your destination SFTP site.

Now we can test this!

3. Test the SFTP Connection using a tool like FileZilla

  1. I like to test using FileZilla, but you could use other SFTP tools (for a scripted alternative, see the Python sketch after this list).
  2. Download FileZilla and install it on your computer.
  3. Choose Edit, Settings…
    1. Select SFTP under Connection and choose ‘Add key file..’ button
      Filezilla 01 Privatekey
    2. You can either pick the original private key file and FileZilla will produce another file for you. Or you can use the SSH2 file that was produced in the CMD prompt, which was named MCE_SSH_01b.txt in our example above.
    3. Depending on which file is uploaded you might have to enter the Passphrase.
  4. Open FileZilla and choose File, Site Manager…
  5. Click ‘New Site’ and fill out the information on the right.  Save it by clicking on OK.
    Filezilla 01 Newsite
  6. Open up your Site and click on the ‘Connect’ on the bottom of the screen.
    1. You will be prompted to enter your Passphrase that is connected to your SSH Key.
  7. Success!   FileZilla shows you the folders on the Marketing Cloud Engagement SFTP Site!
    Filezilla 01 Successfulconnection
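If you prefer to script this check rather than use a GUI client, the connection can also be verified with a short Python sketch using the paramiko library. This is only an illustration: the host, username, key path, passphrase, and password values below are placeholders (substitute your own from the steps above), and depending on how Marketing Cloud enforces the combined SSH key + password authentication, the connect call may need adjusting.

import paramiko

# All of these values are placeholders; use your own MCE SFTP host, user, key, passphrase, and password
HOST = "your-mce-sftp-host.example.com"
USERNAME = "your-ftp-username"
PRIVATE_KEY_PATH = "MCE_SSH_01"        # the private key file generated earlier
KEY_PASSPHRASE = "your-passphrase"     # passphrase entered during ssh-keygen
SFTP_PASSWORD = "your-sftp-password"   # password set on the Marketing Cloud FTP user

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(
    HOST,
    port=22,
    username=USERNAME,
    password=SFTP_PASSWORD,
    key_filename=PRIVATE_KEY_PATH,
    passphrase=KEY_PASSPHRASE,
)

# List the folders on the SFTP site to confirm the connection works
sftp = client.open_sftp()
print(sftp.listdir("."))
sftp.close()
client.close()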

4. Configure the SFTP Activation Target in Data Cloud

  1. Now let’s do the same connection in Data Cloud
  2. In Data Cloud Setup go to Data Cloud, External Integrations, Other Connectors
    Sftp In Datacloud 01
  3. Choose the ‘Target’ tab and ‘Secure File Transfer (SFTP)’.  Click on Next
    Sftp In Datacloud 02
  4. Fill out the connection information.
    1. The connection Name and API Name can be anything you want it to be
    2. The ‘Authentication Method’ is ‘SSH Private Key & Password’
    3. The Username and Password are the values from the Marketing Cloud SFTP User.
    4. The SSH Private Key is the first file created in the CMD prompt.  It was the MCE_SSH_01 file for us, with no file extension.
    5. The Passphrase is the passphrase entered in the CMD prompt when generating your Key.
    6. No need to put anything in the ‘PGP Encryption’ field.
    7. It should look like this now…
      Sftp In Datacloud 03 Sftp Settings Top
    8. In the Connection Details’ section…
      1. Host Name and Port are from the Marketing Cloud SFTP Screen
        Sftp In Datacloud 05 Hostname And Port
      2. It should look like this now…
        Sftp In Datacloud 04 Sftp Settings Bottom
      3. You can ‘Test’ your connection before saving it.
  5. Now you need to create an Activation Target
    1. Open Data Cloud App
    2. Go to the Activation Targets tab, Click on New
      Activation Target 0
    3. Select ‘Secure File Transfer (SFTP)’ and click on ‘Next’
      Activation Target 1
    4. Fill in the ‘New Activation Target’ screen.
      1. Select the SFTP Connector that you created earlier in the ‘Select SFTP Connector’ drop-down.
        Activation Target 2
      2. Click on Next
    5. Fill out the final page selecting your File Format and other options.
      1. Note the maximum File size is 500MB.
        Activation Target 4
      2. If you leave the ‘File Name Type’ as Predetermined then you should always get a unique filename since it will be appended with a ‘Date/Time Suffix’.
        Activation Target 5

5. Build a Segment and Activation to leverage that SFTP Activation Target

  1. Open up the Data Cloud App
  2. Create your Segment from the Segment Tab
  3. Go to the Activations tab and click on ‘New’
    Activation Target 6
  4. Select your Segment and the ‘Activation Target’ we created above, which is your SFTP site. Click on Continue.
  5. Add ‘Email’ or ‘SMS’ fields as necessary for your Activation.  Click on Next.
    Activation Target 7
  6. Fill out the ‘Add Attributes and Filters to Your Activation’ as necessary.  Click on Next.
    Activation Target 8
  7. Give your Activation a name and finalize Schedule and Refresh Type.  Click on Save.
    Activation Target 9
  8. You should now have your new Activation.
    Activation Target 10
  9. Go back to your Segment and choose ‘Publish Now’ if that is how you need to test your Segment
    Activation Target 11

Conclusion

After you publish your segment, it should run and your file should show up on your Marketing Cloud Engagement SFTP site.   You can test this by opening FileZilla, connecting, and looking in the proper folder.
Successpublish

That is it!  SFTP and Data Cloud work well together!

We see with just clicks and configuration we can send Segment data created in Data Cloud to a SFTP site!  We are using the standard ‘Activation Target’ and ‘Activation’ setup screens in Data Cloud.

If you are brainstorming about use cases for Agentforce, please read on with this blog post from my colleague Darshan Kukde!

Here is another blog post where I discuss using unstructured data in Salesforce Data Cloud so your Agent in Agentforce can help your customers in new ways!

If you want a demo of this in action or want to go deeper please reach out and connect!

Deployment of Infra using Terraform (IaC) and Automate CICD using Jenkins on AWS ECS (https://blogs.perficient.com/2025/03/11/deployment-of-infra-using-terraformiac-and-automate-cicd-using-jenkins-on-aws-ecs/)

Terraform

Terraform is a HashiCorp-owned Infrastructure as Code (IaC) technology that allows you to develop, deploy, alter, and manage infrastructure using code. It lets you define resources and infrastructure in human-readable, declarative configuration files and manages your infrastructure’s lifecycle.

The code is simply a set of instructions written in HCL (HashiCorp Configuration Language) in a human-readable format, stored in files with the .tf or .tf.json extension.

What is IaC?

Infrastructure as code (IaC) refers to using configuration files to control your IT infrastructure.

What is the Purpose of  IaC?

Managing IT infrastructure has traditionally been a laborious task. People would physically install and configure servers, which is time-consuming and costly.

Nowadays, businesses are growing rapidly, and manually managed infrastructure can no longer keep up with their demands.

To meet customer demands and save costs, IT organizations are quickly adopting the public cloud, which is mostly API-driven. They architect their applications to support a much higher level of elasticity and deploy them on supporting technologies such as Docker containers and the public cloud. To build, manage, and deploy infrastructure for those technologies, a tool like Terraform is invaluable for delivering the product quickly.

Terraform Workflow

Tf Workflow

Terraform Init

  • The Terraform Init command initializes a working directory containing Terraform configuration files.

Terraform Plan

  • The Terraform Plan command is used to create an execution plan.

Terraform Apply

  • The Terraform Apply command is used to apply the changes required to reach the desired state of the configuration.

Terraform Refresh

  • The Terraform Refresh command reconciles the state Terraform knows about (via its state file) with the real-world infrastructure. This does not modify infrastructure but does modify the state file.

Terraform Destroy

  • The Terraform Destroy command is used to destroy the Terraform-managed infrastructure.
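To tie these commands together, for example from a pipeline step, the workflow can be driven from a small script. Below is a minimal, illustrative Python sketch that simply shells out to the Terraform CLI; the working directory and plan file name are assumptions, not part of the original setup.

import subprocess

def run_terraform(args, workdir="."):
    # Run a Terraform CLI command in the directory containing the .tf files
    print("$ terraform " + " ".join(args))
    subprocess.run(["terraform", *args], cwd=workdir, check=True)

run_terraform(["init"])                 # initialize providers and modules
run_terraform(["plan", "-out=tfplan"])  # create and save an execution plan
run_terraform(["apply", "tfplan"])      # apply the saved plan
# run_terraform(["destroy", "-auto-approve"])  # tear down the managed infrastructure when finished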

Jenkins Pipeline

A Jenkins Pipeline is a suite of plugins that supports building, deploying, and automating continuous integration and delivery (CI/CD) workflows. It provides a way to define the entire build process in a scripted or declarative format called a Jenkinsfile. This allows developers to manage and version their CI/CD processes alongside their application code.

Why Jenkins Pipeline?

Infrastructure as Code (IaC)

  • The build process is defined in a Jenkinsfile written in Groovy-based DSL (Domain-Specific Language).
  • The Jenkinsfile can be stored and versioned in the same repository as the application source code, ensuring synchronization between code and build processes.

Reusability and Maintainability

  • A single Jenkins pipeline can be reused across multiple environments (development, testing, production).
  • Update the Jenkinsfile to change the build process, reducing the need to manually modify multiple jobs in Jenkins.

Improved Version Control

  • Both the application code and build process are versioned together.
  • Older releases can be built using the corresponding Jenkinsfile, ensuring compatibility.

Automation and Scalability

  • The pipeline automates the entire CI/CD workflow, including code fetching, building, testing, and deployment.
  • It supports parallel stages, enabling multiple tasks (e.g., unit and integration tests) to run concurrently.

Simplified Configuration Management

  • Job configurations are no longer stored as XML files in Jenkins. Instead, they are defined as code in the Jenkinsfile, making backup and restoration easier.

Types of Jenkins Pipelines

Jenkins provides two types of pipelines:

Declarative Pipeline

  • Easier to use, structured, and designed for most users.
  • Uses a defined syntax and provides built-in error handling.

Scripted Pipeline

  • More flexible but requires advanced Groovy scripting knowledge.

AWS ECS 

AWS ECS (Elastic Container Service) is a managed container service from AWS that allows you to run and manage Docker containers on a cluster of virtual servers.

Container Deployment Era

Containers provide application isolation at the operating-system level, and container services are widely adopted today.

  • Lightweight: Containers have less overhead than virtual machines. They can be used with the host OS without installing it; they contain only the libraries and modules required to run the application.
  • Portable: Containers can be moved from one host to another and run across the OS distribution and clouds.
  • Efficient: Containers utilize resources better than virtual machines; they do not claim the host’s entire hardware up front and can scale gradually based on requirements.
  • Fast Deployment: Containers can build quickly from container images, and it is easy to roll back.
  • Microservices: Containers are based on a loosely coupled architecture and support distributed workloads, making them a great fit for microservices.

Architecture

Arch

In this architecture, we launch an EC2 instance in AWS using Terraform, with user data handling the Jenkins server configuration. The Jenkins CI/CD pipeline then fetches the source code from GitHub, builds a Docker image, and uploads it to the ECR Docker registry. Finally, we deploy the application on the ECS cluster using this Docker image.

Step 1: Create an IAM user and an Access Key/Secret Key for the IAM user, and provide the appropriate permissions, such as ECR and Docker Container Policy.

Img 1

Step 2: Create an ECR Repository to store the Docker Images.

Img 2

Step 3: Create an ECS Cluster

Img 4

Step 3.1: Create a task Definition. The Task Definition contains all the information to run the Container, such as the container Image URL and Compute Power.

Img 5

Step 3.2 - Execution Role: This role is attached to the Task Definition and grants the permissions the task needs at launch, such as sending real-time container logs to CloudWatch Logs, via the ECS Task Execution Role policy.

Iam Role

Step 3.3: Create a Service in the Cluster: Since a Task Definition alone cannot handle the deployment, we create a Service, which acts as an intermediary between the application and the container instances.

Img 6

Step 4: Jenkins Server Configuration

Img 3.0

Let’s deploy the code using Jenkins on ECS Cluster: Jenkinsfile for CICD Pipeline: https://github.com/prafulitankar/GitOps/blob/main/Jenkinsfile

Create a Jenkins Pipeline, which should be a Declarative Pipeline.

Img 7.0

We are done with the infra setup and the Jenkins Pipeline. Let’s run the Jenkins Pipeline:

Img 7

Once Jenkins Pipeline is successfully executed, the ECS service will try to deploy a new revision of Docker Image.

Img 8
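For reference, the deployment step the pipeline performs can also be expressed directly against the ECS API. The following is a hedged Python (boto3) sketch rather than the exact Jenkinsfile logic; the cluster, service, task family, container name, region, and image URI are placeholder values.

import boto3

# Placeholder names; replace with your own cluster, service, task family, and ECR image URI
CLUSTER = "my-ecs-cluster"
SERVICE = "my-ecs-service"
FAMILY = "my-task-family"
IMAGE_URI = "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-repo:latest"

ecs = boto3.client("ecs", region_name="us-east-1")

# Register a new task definition revision that points at the freshly pushed image
task_def = ecs.register_task_definition(
    family=FAMILY,
    requiresCompatibilities=["EC2"],
    containerDefinitions=[{
        "name": "app",
        "image": IMAGE_URI,
        "memory": 512,
        "essential": True,
        "portMappings": [{"containerPort": 80}],
    }],
)

# Point the service at the new revision; ECS then rolls out the new deployment
ecs.update_service(
    cluster=CLUSTER,
    service=SERVICE,
    taskDefinition=task_def["taskDefinition"]["taskDefinitionArn"],
)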

Output: Once the pipeline executed successfully, our application was deployed on the ECS Cluster. Below is the output of the application.

Img 9

We launched the Jenkins Server on an EC2 Instance with Terraform. Then, we created an ECR repository to store the Docker image, ECS Cluster, task definition, and Service to deploy the application. Using the Jenkins pipeline, we pulled the source code from GitHub, built the code, created a Docker image, and uploaded it to the ECR repository. This is our CI part, and then we deployed our application on ECS, which is CD.

Best Practices for IaC using AWS CloudFormation (https://blogs.perficient.com/2025/03/11/best-practices-for-iac-using-aws-cloudformation/)

In the ever-evolving landscape of cloud computing, Infrastructure as Code (IaC) has emerged as a cornerstone practice for managing and provisioning infrastructure. IaC enables developers to define infrastructure configurations using code, ensuring consistency, automation, and scalability. AWS CloudFormation, a key service in the AWS ecosystem, simplifies IaC by allowing users to easily model and set up AWS resources. This blog explores the best practices for utilizing AWS CloudFormation to achieve reliable, secure, and efficient infrastructure management.

Why Use AWS CloudFormation?

AWS CloudFormation provides a comprehensive solution for automating the deployment and management of AWS resources. The primary advantages of using CloudFormation include:

  • Consistency: Templates define the infrastructure in a standardized manner, eliminating configuration drift.
  • Automation: Automatic provisioning and updating of infrastructure, reducing manual intervention.
  • Scalability: Easily replicate infrastructure across multiple environments and regions.
  • Dependency Management: Automatically handles resource creation in the correct sequence based on dependencies.
  • Rollback Capability: Automatic rollback to the previous state in case of deployment failures.

Comparison with Other IaC Tools

AWS CloudFormation stands out among other IaC tools, such as Terraform and Ansible, due to its deep integration with AWS services. Unlike Terraform, which supports multiple cloud providers, CloudFormation is tailored specifically for AWS, offering native support and advanced features like Drift Detection and Stack Policies. Additionally, CloudFormation provides out-of-the-box rollback functionality, making it more reliable for AWS-centric workloads.

Best Practices for CloudFormation

1. Organize Templates Efficiently

Modularization

Breaking down large CloudFormation templates into smaller, reusable components enhances maintainability and scalability. Modularization allows you to create separate templates for different infrastructure components such as networking, compute instances, and databases.

Example:

Mod

compute.yml

Com

In this example, the network.yml template creates the VPC and subnets, while the compute.yml template provisions the EC2 instance. You can use the Export output attribute and the Fn::ImportValue intrinsic function to share resource outputs between templates.

Nested Stacks

Nested stacks allow you to create a parent stack that references child stacks, improving reusability and modularization.

Example:

Nes

Using nested stacks ensures a clean separation of concerns and simplifies stack management.

2. Parameterization and Reusability

Enhance template reusability and flexibility through parameterization:

  • Parameters Section: Define configurable values such as instance types, environment names, and AMI IDs.
  • Mappings Section: Use mappings to create static mappings between parameter values and resource properties.
  • Default Values: Set default values for optional parameters to simplify deployments.
  • AWS CloudFormation Macros: Use macros to extend template functionality and perform custom transformations.

Example:

Par

3. Security Considerations

Securing infrastructure configurations is paramount. Best practices include:

  • IAM Roles and Policies: Assign least privilege permissions to CloudFormation stacks and resources.
  • Secrets Management: Store sensitive data such as passwords and API keys in AWS Secrets Manager or Systems Manager Parameter Store.
  • Encryption: Enable encryption for data at rest using AWS KMS.
  • Stack Policies: Apply stack policies to protect critical resources from unintended updates.

Example:

Sec

4. Version Control and Automation

Integrating CloudFormation with version control systems and CI/CD pipelines improves collaboration and automation:

  • Version Control: Store templates in Git repositories to track changes and facilitate code reviews.
  • CI/CD Pipelines: Automate template validation, deployment, and rollback using AWS CodePipeline or Jenkins.
  • Infrastructure as Code Testing: Incorporate automated testing frameworks to validate templates before deployment.

Example Pipeline:

Ver

5. Template Validation and Testing

Validation and testing are critical for ensuring the reliability of CloudFormation templates:

  • Linting: Use the cfn-lint tool to validate templates against AWS best practices and syntax rules.
  • Change Sets: Preview changes before applying them using CloudFormation Change Sets.
  • Unit Testing: Write unit tests to verify custom macros and transformations.
  • Integration Testing: Deploy templates in isolated environments to validate functionality and performance.

Example:

cfn-lint template.yml

aws cloudformation create-change-set --stack-name MyStack --template-body file://template.yml --change-set-name my-change-set
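The same change-set preview can also be scripted with boto3. Here is a minimal sketch, assuming a stack named MyStack and a local template.yml; the change set name and capabilities are illustrative.

import boto3

cf = boto3.client("cloudformation")

with open("template.yml") as f:
    template_body = f.read()

# Create a change set to preview the proposed modifications
cf.create_change_set(
    StackName="MyStack",
    ChangeSetName="my-change-set",
    TemplateBody=template_body,
    Capabilities=["CAPABILITY_NAMED_IAM"],
)

# Wait for the change set, then list what would change before executing anything
cf.get_waiter("change_set_create_complete").wait(StackName="MyStack", ChangeSetName="my-change-set")
for change in cf.describe_change_set(StackName="MyStack", ChangeSetName="my-change-set")["Changes"]:
    resource = change["ResourceChange"]
    print(resource["Action"], resource["LogicalResourceId"])

# Execute only after the preview has been reviewed
# cf.execute_change_set(StackName="MyStack", ChangeSetName="my-change-set")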

6. Stack Policies and Drift Detection

Protecting infrastructure from unauthorized changes and maintaining consistency is essential:

  • Stack Policies: Define stack policies to prevent accidental updates to critical resources.
  • Drift Detection: Regularly perform drift detection to identify and remediate unauthorized changes.
  • Audit Trails: Enable AWS CloudTrail to log API activity and monitor changes.

Example Stack Policy:

  1. Define the Stack Policy in a separate JSON file:

Picture7

  2. Apply the policy while creating or updating the stack:

Picture8

AWS CloudFormation Architecture

Below is a high-level architecture diagram illustrating how AWS CloudFormation works.

Picture9

Step-by-Step Configuration

  1. Create a CloudFormation Template: Write the YAML or JSON template defining AWS resources.
  2. Upload to S3: Store the template in an S3 bucket for easy access.
  3. Deploy Stack: Create the stack using the AWS Management Console, CLI, or SDK.
  4. Monitor Stack Events: Track resource creation and update progress in the AWS Console.
  5. Update Stack: Modify the template and update the stack with the new configuration.
  6. Perform Drift Detection: Identify and resolve configuration drift (a scripted sketch follows below).
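As a sketch of step 6, drift detection can be automated with boto3 as shown below; the stack name is a placeholder and the polling interval is arbitrary.

import time
import boto3

cf = boto3.client("cloudformation")

# Start drift detection for the stack (name is a placeholder)
detection_id = cf.detect_stack_drift(StackName="MyStack")["StackDriftDetectionId"]

# Poll until detection finishes, then report the overall drift status
while True:
    status = cf.describe_stack_drift_detection_status(StackDriftDetectionId=detection_id)
    if status["DetectionStatus"] != "DETECTION_IN_PROGRESS":
        break
    time.sleep(5)
print("Stack drift status:", status.get("StackDriftStatus"))

# List the individual resources that have drifted from the template
drifts = cf.describe_stack_resource_drifts(
    StackName="MyStack",
    StackResourceDriftStatusFilters=["MODIFIED", "DELETED"],
)
for drift in drifts["StackResourceDrifts"]:
    print(drift["LogicalResourceId"], drift["StackResourceDriftStatus"])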

Conclusion

AWS CloudFormation is a powerful tool for implementing infrastructure as code, offering automation, consistency, and scalability. By following best practices such as template modularization, security considerations, and automation, organizations can enhance the reliability and efficiency of their cloud infrastructure. Adopting AWS CloudFormation simplifies infrastructure management and strengthens overall security and compliance.

Embracing these best practices will enable businesses to leverage the full potential of AWS CloudFormation, fostering a more agile and resilient cloud environment.

 

Boost Sitecore Performance with Vercel Fluid Compute (https://blogs.perficient.com/2025/03/10/boost-sitecore-performance-with-vercel-fluid-compute/)

Are you using Vercel to supercharge your Sitecore experience? If so, you’re already benefiting from a powerful, globally optimized platform designed for modern web applications. But did you know you can take your performance even further? Vercel Fluid Compute is a game-changer, optimizing workloads for higher efficiency, lower costs, and enhanced scalability—perfect for high-performance Sitecore deployments.

What is Vercel Fluid Compute?

Fluid Compute is Vercel’s next-generation execution model, blending the best of serverless and traditional compute. Unlike conventional serverless architectures, which often suffer from cold starts and limited concurrency, Fluid Compute allows multiple requests to be processed within a single function instance. This leads to reduced latency, faster response times, and better resource utilization.

Why Sitecore Developers Should Care

Sitecore is a powerful digital experience platform, but ensuring smooth, high-speed performance at scale can be challenging. Fluid Compute helps reduce performance bottlenecks and optimize infrastructure costs, making it a perfect fit for Sitecore-powered applications. Here’s how it benefits you:

  • Faster Load Times: By reusing function instances and reducing cold starts, Fluid Compute improves Sitecore’s response times, ensuring users get the content they need—fast.
  • Cost Savings: Efficient resource usage can reduce compute costs by up to 85%, a significant reduction for enterprises managing high-traffic Sitecore applications.
  • Scalability Without Hassle: Fluid Compute dynamically scales functions based on demand, ensuring seamless performance even during traffic spikes.
  • Better Background Processing: Features like waitUntil enable asynchronous tasks such as logging and analytics to run without delaying user responses.

How Fluid Compute Compares

To truly understand the advantages of Fluid Compute for Sitecore, take a look at this comparison chart:

Vercelfluidcompute

As shown, Fluid Compute outperforms both traditional servers and serverless architectures in key areas such as scaling, concurrency, and cost efficiency. By preventing cold starts, enabling efficient auto-scaling, and optimizing resource usage, Fluid Compute ensures your Sitecore application runs at peak performance with minimal overhead.

How to Enable Fluid Compute for Sitecore on Vercel

One of the best aspects of Fluid Compute is how easy it is to implement.

  1. Deploy Your Sitecore-Powered App to Vercel – Ensure your Sitecore front-end is running on Vercel’s platform.
  2. Enable Fluid Compute – Simply update your Vercel project settings to opt into Fluid Compute for serverless functions.
  3. Enjoy Enhanced Performance – With zero additional configuration, your Sitecore app now benefits from better efficiency, lower costs, and higher scalability.

The Future of Sitecore Performance

As brands continue to push the boundaries of digital experiences, having a highly optimized, scalable compute model is essential. With Vercel Fluid Compute, Sitecore developers can future-proof their applications, ensuring exceptional performance while keeping costs in check.

Automating Backup and Restore with AWS Backup Service using Python (https://blogs.perficient.com/2025/03/05/automating-backup-and-restore-with-aws-backup-service-using-python/)

Protecting data is vital for any organization, and AWS Backup Service offers a centralized, automated solution to back up your AWS resources. This blog will examine how to automate backup and restore operations using AWS Backup and Python, ensuring your data remains secure and recoverable.

Why We Use AWS Backup Service

Manual backup processes can be error-prone and time-consuming. AWS Backup streamlines and centralizes our backup tasks, providing consistent protection across AWS services like EC2, RDS, DynamoDB, EFS, and more. By leveraging Python to manage AWS Backup, we can achieve further automation, integrate with other systems, and customize solutions to meet our business needs.

How It Works

AWS Backup enables us to set up backup policies and schedules through backup plans. These plans determine the timing and frequency of backups and their retention duration. By utilizing Python scripts, we can create, manage, and monitor these backup operations using the AWS SDK for Python, Boto3.

Prerequisites

Before we begin, we must have:

  1. An AWS account.
  2. Basic knowledge of Python programming.
  3. AWS CLI installed and configured.
  4. Boto3 library installed in your Python environment.

Automating Backup/Restore with AWS Backup

Step 1: Set Up AWS Backup

To start, we log into the AWS Management Console and navigate to the AWS Backup service. Once there, we create a new backup vault to serve as the designated storage location for our backups. After setting up the vault, the next step is to define a backup plan. This plan should clearly specify the AWS resources we intend to back up, as well as outline the backup schedule and retention period for each backup. By following these steps, we effectively organize and automate our data protection strategy within AWS.

Step 2: Write Python Scripts for Backup Automation

To automate our EC2 instance backups using AWS Backup with Python, we begin by installing the boto3 library with pip install boto3 and configuring our AWS credentials in ~/.aws/credentials. Using boto3, we connect to the AWS Backup service and define a backup plan with our desired schedule and retention policy. We then assign the EC2 instance to this plan by specifying its ARN. Finally, we run the Python script to create the backup plan and associate the instance, efficiently automating the backup process.
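As an illustration of that plan setup, here is a hedged boto3 sketch that creates a daily backup plan and assigns an EC2 instance to it. The vault name, schedule, retention, IAM role ARN, and instance ARN are all placeholder values, not part of the original walkthrough.

import boto3

backup = boto3.client('backup', region_name='eu-west-1')

# Create a backup plan with a daily rule and 30-day retention (values are placeholders)
plan = backup.create_backup_plan(BackupPlan={
    'BackupPlanName': 'ec2-daily-backup-plan',
    'Rules': [{
        'RuleName': 'daily-0500-utc',
        'TargetBackupVaultName': 'my-backup-vault',
        'ScheduleExpression': 'cron(0 5 ? * * *)',
        'Lifecycle': {'DeleteAfterDays': 30},
    }],
})

# Assign the EC2 instance to the plan by its ARN (placeholder ARNs below)
backup.create_backup_selection(
    BackupPlanId=plan['BackupPlanId'],
    BackupSelection={
        'SelectionName': 'ec2-instances',
        'IamRoleArn': 'arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole',
        'Resources': ['arn:aws:ec2:eu-west-1:123456789012:instance/i-0123456789abcdef0'],
    },
)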

Find the complete code here.

import boto3
from botocore.exceptions import ClientError

def start_backup_job(instance_arn, vault_name='my-backup-vault', iam_role_arn='arn:aws:iam::123456789012:role/AWSBackupDefaultServiceRole'):
    client = boto3.client('backup', region_name='eu-west-1')  # Ensure the correct region
    try:  # vault_name and iam_role_arn defaults are placeholders; replace with your own
        return client.start_backup_job(BackupVaultName=vault_name, ResourceArn=instance_arn, IamRoleArn=iam_role_arn)['BackupJobId']
    except ClientError as error:
        print(f"Failed to start backup job: {error}")

After Running the code, we will get the output as below.

1

We can see the Job ID triggered via Code in the AWS Backup Job Console.

2

Step 3: Automate Restore Operations:

To automate our restore operations for an EC2 instance using AWS Backup with Python, we start by using the boto3 library to connect to the AWS Backup service. Once connected, we retrieve the backup recovery points for our EC2 instance and select the appropriate recovery point based on our restore requirements. We then initiate a restore job by specifying the restored instance’s recovery point and desired target. By scripting this process, we can automatically restore EC2 instances to a previous state, streamlining our disaster recovery efforts and minimizing downtime.

Find the complete code here.

import boto3
from botocore.exceptions import ClientError

def restore_backup(recovery_point_arn, iam_role_arn, metadata):
    client = boto3.client('backup', region_name='eu-west-1')
    try:  # metadata keys depend on the resource type being restored (e.g. InstanceType for EC2); iam_role_arn is a placeholder
        return client.start_restore_job(RecoveryPointArn=recovery_point_arn, Metadata=metadata, IamRoleArn=iam_role_arn, ResourceType='EC2')['RestoreJobId']
    except ClientError as error:
        print(f"Failed to start restore job: {error}")

After Running the code, we will get the output as below.

3

We can see the Job ID triggered via Code in AWS Restore Job Console.

4

After the Restore Job is Completed, we can navigate to the EC2 region and see a new EC2 instance launched using the below Job.

5

 

Step 4: Monitor and Schedule

Additionally, we may implement Amazon CloudWatch to monitor our backup and restore operations by tracking key metrics. To automate these processes, we schedule our scripts to run automatically, using either cron jobs on our servers or AWS Lambda for serverless execution. This approach enables us to streamline and manage our backup activities efficiently.
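For example, a scheduled script (cron or Lambda) could poll a job until it finishes. The sketch below uses the describe_backup_job API and assumes the job ID returned by the earlier backup script.

import time
import boto3

backup = boto3.client('backup', region_name='eu-west-1')

def wait_for_backup_job(job_id, poll_seconds=60):
    # Poll the backup job until it reaches a terminal state and return that state
    while True:
        job = backup.describe_backup_job(BackupJobId=job_id)
        state = job['State']  # e.g. CREATED, RUNNING, COMPLETED, FAILED
        print(f"Backup job {job_id} is {state}")
        if state in ('COMPLETED', 'FAILED', 'ABORTED', 'EXPIRED'):
            return state
        time.sleep(poll_seconds)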

Conclusion

We enhance our data protection strategy by automating backup and restore operations with AWS Backup and Python. By leveraging AWS Backup's centralized capabilities and Python's automation power, we ensure consistent and reliable backups, freeing us to focus on more strategic initiatives. From here, we can experiment with different backup policies and extend the automation to meet our organization's unique needs.

]]>
https://blogs.perficient.com/2025/03/05/automating-backup-and-restore-with-aws-backup-service-using-python/feed/ 0 377944
Automate the Deployment of a Static Website to an S3 Bucket Using GitHub Actions https://blogs.perficient.com/2025/03/05/automate-the-deployment-of-a-static-website-to-an-s3-bucket-using-github-actions/ https://blogs.perficient.com/2025/03/05/automate-the-deployment-of-a-static-website-to-an-s3-bucket-using-github-actions/#comments Wed, 05 Mar 2025 06:43:31 +0000 https://blogs.perficient.com/?p=377956

Automating deployments is crucial for efficiency and reliability in today’s fast-paced development environment. GitHub Actions provides a seamless way to implement CI/CD pipelines, allowing developers to automate the deployment of static websites without manual intervention.

In this blog, we will explore how to deploy a static website to an AWS S3 bucket using GitHub Actions. We’ll cover setting up an S3 bucket, configuring IAM roles for secure authentication, and leveraging GitHub Actions workflows to streamline deployment. By the end, you’ll have a fully automated pipeline that ensures quick and secure deployments with minimal effort.

Prerequisites

  1. Amazon S3 Bucket: Create an S3 bucket and enable static website hosting.
  2. IAM User & Permissions: Create an IAM user with access to S3 and store credentials securely.
  3. GitHub Repository: Your static website code should be in a GitHub repository.
  4. GitHub Secrets: Store AWS credentials in GitHub Actions Secrets.
  5. Amazon EC2 – to create a self-hosted runner.

Deploy a Static Website to an S3 Bucket

Step 1

First, create a GitHub repository. I have already created one with the same name, which is why GitHub reports that the repository already exists.

Static 1

 

 

Step 2

You can clone the repository from the URL below to your local system. I have already added the website code to my GitHub repository, so you just need to clone it: https://github.com/Kunal2795/Static-Website.git.

 

Step 3

Make your changes, such as updating the bucket name and AWS region, and then push the code to host this static website. I already have the code locally, so it only needs to be pushed using the Git commands below:

Static 2

Step 4

Once the changes are pushed to your GitHub repository, ensure the main.yaml file is in the .github/workflows directory.

Static 3

If the main.yaml file is not present in the .github/workflows/ directory, create it and add a job to run the static website pipeline in GitHub Actions. The main.yaml file is the primary configuration file that GitHub Actions uses to run the entire pipeline.

Add the following job code to the main.yaml file in the .github/workflows/ directory:

name: Portfolio Deployment2

on:
  push:
    branches:
      - main

jobs:
  build-and-deploy:
    runs-on: [self-hosted, silver]

    steps:
      - name: Checkout
        uses: actions/checkout@v1

      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-2

      - name: Deploy static site to S3 bucket
        run: aws s3 sync . s3://kc-devops --delete

You need to make a few modifications to the job above:

  • runs-on – Specify either a self-hosted runner or a GitHub-hosted runner (I am using a self-hosted runner with the labels self-hosted and silver).
  • aws-access-key-id – Reference the GitHub Actions secret that holds your AWS Access Key ID (adding the secret is shown below).
  • aws-secret-access-key – Reference the GitHub Actions secret that holds your AWS Secret Access Key.
  • aws-region – Set this to the region of your S3 bucket.
  • run – Update the aws s3 sync command with the name of the bucket where you want to store your static website code.

How to Create a Self-hosted Runner

Launch an EC2 instance with Ubuntu OS using a simple configuration.

Static 4

After that, create a self-hosted runner using the commands GitHub provides. To get these commands, go to your repository's Settings in GitHub, navigate to Actions, click on Runners, and then select New self-hosted runner.

Select Linux as the runner image.

Static 5

Static 6

Run the above commands step by step on your EC2 server to download and configure the self-hosted runner.

Static 7

 

Static 8

Once the runner is downloaded and configured, check its status to ensure it is idle or offline. If it is offline, start the GitHub Runner service on your EC2 server.

Also, ensure that AWS CLI is installed on your server.

Static 9

IAM User

Create an IAM user and grant it full access to EC2 and S3 services.

Static 10

Then, go to Security credentials, create an access key, and securely store both the Access Key ID and Secret Access Key in a safe place.

Static 11

 

Next, navigate to GitHub Actions → Secrets & Variables → Actions, then add your AWS Access Key ID and Secret Access Key securely.

Static 12

After adding the Access Key ID and Secret Access Key, proceed to the next section: S3.

Create an S3 bucket. I have created one with the name kc-devops.

Static 13

Add the policy below to your S3 bucket, replacing the bucket name with your own.

Static 14
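If you prefer to apply the policy from a script instead of the console, a typical public-read policy for static website hosting can be attached with Boto3 as sketched below. Treat this as a reference only; the policy shown in the screenshot above may differ, and the bucket's Block Public Access settings must permit public bucket policies for the call to succeed.

import json
import boto3

BUCKET_NAME = 'kc-devops'  # replace with your own bucket name

# A typical public-read policy that allows anyone to fetch objects from the bucket
policy = {
    'Version': '2012-10-17',
    'Statement': [{
        'Sid': 'PublicReadGetObject',
        'Effect': 'Allow',
        'Principal': '*',
        'Action': 's3:GetObject',
        'Resource': f'arn:aws:s3:::{BUCKET_NAME}/*',
    }],
}

s3 = boto3.client('s3')
s3.put_bucket_policy(Bucket=BUCKET_NAME, Policy=json.dumps(policy))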

After setting up everything, go to GitHub Actions, open the main.yaml file, update the bucket name, and commit the changes.

Then, click the Actions tab to see all your triggered workflows and their status.

Static 15

We can see that all the steps for the build and deploy jobs have been successfully completed.

Static 16

Lastly, sign in to the AWS Management Console and open the Amazon S3 console. Verify that all the website files are stored in your bucket.

Static 17

Then, go to the Properties tab. Under Static website hosting, find and click the endpoint URL (the bucket website endpoint).

This Endpoint URL is the Amazon S3 website endpoint for your bucket.

Static 18

Output

Finally, we have successfully deployed and hosted a static website on the Amazon S3 bucket using automation.

Static 19

Conclusion

With this setup, whenever you push changes to your GitHub repository, GitHub Actions automatically triggers the deployment process. This ensures that your static website is seamlessly updated and deployed to your AWS S3 bucket without any manual intervention, making the deployment workflow more efficient and less error-prone.

 

]]>
https://blogs.perficient.com/2025/03/05/automate-the-deployment-of-a-static-website-to-an-s3-bucket-using-github-actions/feed/ 1 377956
7 Ways to Connect at Adobe Summit 2025 https://blogs.perficient.com/2025/03/04/7-ways-to-connect-at-adobe-summit-2025/ https://blogs.perficient.com/2025/03/04/7-ways-to-connect-at-adobe-summit-2025/#respond Tue, 04 Mar 2025 15:03:52 +0000 https://blogs.perficient.com/?p=377999

Adobe Summit is the platform’s premier digital experience conference. This year it’s being held in Las Vegas at The Venetian Convention and Expo Center, March 18 – 20.

Attending the conference is a great way to learn about the latest digital trends, connect with peers and experts from around the world, and explore Adobe’s vision for the future of AI-powered digital experiences.

As a sponsor, Perficient is excited to return to Summit, where our experts will connect with industry leaders and share insights on how to grow their businesses and deliver exceptional digital experiences.

While we’re there, attendees will have several opportunities to engage with us. Read on to learn more about where to find us!

1. Women in Digital Breakfast

This highly popular breakfast event has been a Perficient tradition since 2017, and this year we're excited to partner with Adobe to share it with Summit attendees again.

Attendees will join us before the opening keynote to grab a bite and have an opportunity to connect with peers and engage in meaningful discussions. Our panel of inspiring women in the digital space will share their journeys, insights, and strategies for success and making an impact in today’s digital landscape.

This year’s event has already sold out, but you can still join our waitlist.

Join the Waitlist

2. AI-Enabled, Consumer-Centric Find Care Experiences Lunch

When: Tuesday, March 18 | 11:30 A.M. – 1:30 P.M.

Where: Grand Lux Café in the Palazzo

What: Healthcare systems and providers are invited to join us for lunch and discover how AI-enabled, consumer-centric experiences can build brand loyalty, support better health outcomes, and improve satisfaction across the full care journey, starting with the steps to choose a provider, a critical decision that signals real-time intent.

Register: How AI-Driven Find Care Experiences Drive Growth and Loyalty

3. Welcome Reception in the Community Pavilion

When: Tuesday, March 18 | 5:30 – 7:00 P.M.

Where: Community Pavilion in The Venetian Convention and Expo Center

What: Stop by booth #1189 and see us! You can meet the team, learn more about Perficient’s capabilities, and enjoy refreshments.

4. Make Marketing Data and Your CDP Work for You Breakfast

When: Wednesday, March 19 | 8:00 A.M. – 9:30 A.M.

Where: Grand Lux Café in the Palazzo

What: Join us for breakfast before the day-two keynote and learn how to make sense of the data you need to meet customer experience objectives, boost your marketing team’s impact, and get the most out of your Adobe Experience Platform investment.

Register: Make Marketing Data and Your CDP Work for You

5. Beyond GenStudio: Crafting a Modern Content Supply Chain Vision Lunch

When: Wednesday, March 19 | 11:30 A.M. – 1:30 P.M.

Where: Grand Lux Café in the Palazzo

What: Join us for lunch before heading back to afternoon sessions and explore why having a clear, strategic vision is essential before deploying new technologies. We’ll discuss how GenStudio and other tools can fit into your existing content workflow to maximize efficiency and creativity.

Register: Beyond GenStudio: Crafting a Modern Content Supply Chain Vision

6. Summit Session: Marketo Engage Data Hygiene Strategies With Qualcomm

When: Thursday, March 20 | 1:00 – 2:00 P.M.

Where: The Venetian Convention and Expo Center

What: Delve into advanced Marketo Engage strategies for maintaining data hygiene using executable campaigns. During this session, you’ll learn how to implement a governance framework for managing required fields and validation rules while improving both the quality of data and the efficiency of your marketing operations.

Add to Schedule: Adobe Summit – Sessions

7. Join Us in The Grand Lux Café at The Palazzo

When: Tuesday, March 18, and Wednesday, March 19 from 8:00 A.M. until 5:00 P.M.

Where: The Grand Lux Café in the Palazzo (Note: The Grand Lux Café at The Palazzo is different from The Grand Lux Café at The Venetian)

What: Located on The Palazzo side of the resort, The Grand Lux Café provides a quiet and comfortable space for meetings with Perficient experts and executives.

To find us, stroll through the waterfall atrium and past the Love sculpture. Head north toward Sands Avenue, through the Palazzo Casino, and you’ll see The Grand Lux in the right corner.

See You at Adobe Summit!

From Sneaks to Bash, and all the innovation in between, there is so much to look forward to at Adobe Summit.

No matter what your conference schedule looks like, we hope to see you there.

[BONUS] Join Us After Adobe Summit

Following Adobe Summit, we are co-hosting a virtual Adobe user group meetup to discuss the latest innovations announced at the conference. It'll be an opportunity to debrief on what attendees learned, share what they're excited about, and gain insights from Adobe experts.

Adobe Summit 2025: Top Insights, Favorite Sessions, and What’s Next!

Sign Up Here

 

]]>
https://blogs.perficient.com/2025/03/04/7-ways-to-connect-at-adobe-summit-2025/feed/ 0 377999
RDS Migration: AWS-Managed to CMK Encryption https://blogs.perficient.com/2025/03/04/rds-migration-aws-managed-to-cmk-encryption/ https://blogs.perficient.com/2025/03/04/rds-migration-aws-managed-to-cmk-encryption/#respond Tue, 04 Mar 2025 06:00:17 +0000 https://blogs.perficient.com/?p=377717

As part of security and compliance best practices, it is essential to enhance data protection by transitioning from AWS-managed encryption keys to Customer Managed Keys (CMK).

Business Requirement

When migrating or restoring a database, it is not possible to change the encryption of an existing RDS instance directly from an AWS-managed key to a Customer Managed Key (CMK).

Instead, a snapshot of the database must be created and copied with CMK encryption to ensure a secure and efficient transition while minimizing downtime. This document provides a streamlined approach that saves time and ensures compliance with best practices.

P1

Fig: RDS Snapshot Encrypted with AWS-Managed KMS Key

 

Objective

This document aims to provide a structured process for creating a database snapshot, encrypting it with a new CMK, and restoring it while maintaining the original database configurations. This ensures minimal disruption to operations while strengthening data security.

  • Recovery Process
  • Prerequisites
  • Configuration Overview
  • Best Practices

 

Prerequisites

Before proceeding with the snapshot and restoration process, ensure the following prerequisites are met:

  1. AWS Access: You must have the IAM permissions to create, copy, and restore RDS snapshots.
  2. AWS KMS Key: Ensure you have a Customer-Managed Key (CMK) available in the AWS Key Management Service (KMS) for encryption.
  3. Database Availability: Verify that the existing database is healthy enough to take an accurate snapshot.
  4. Storage Considerations: Ensure sufficient storage is available to accommodate the snapshot and the restored instance.
  5. Networking Configurations: Ensure appropriate security groups, subnet groups, and VPC settings are in place.
  6. Backup Strategy: Have a backup plan in case of any failure during the process.

Configuration Overview

Step 1: Take a Snapshot of the Existing Database

  1. Log in to the AWS console with your credentials.
  2. Navigate to the RDS section where you manage database instances.
  3. Select the existing database for which you want to create the snapshot.
  4. Click on the Create Snapshot button.
  5. Provide a name and description for the snapshot, if necessary.
  6. Click Create Snapshot to initiate the snapshot creation process.
  7. Wait for the snapshot creation to complete before proceeding to the next step.

P2
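This step can also be scripted with Boto3 if preferred. Here is a minimal sketch; the region, instance identifier, and snapshot name are assumptions you would replace with your own values.

import boto3

rds = boto3.client('rds', region_name='us-east-1')  # use your instance's region

# Create a manual snapshot of the existing instance and wait until it is available
rds.create_db_snapshot(
    DBSnapshotIdentifier='mydb-pre-cmk-snapshot',
    DBInstanceIdentifier='mydb-instance',
)
rds.get_waiter('db_snapshot_available').wait(DBSnapshotIdentifier='mydb-pre-cmk-snapshot')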

Step 2: Copy Snapshot with New Encryption Keys

  1. Navigate to the section where your snapshots are stored.
  2. Locate the newly created snapshot in the list of available snapshots.
  3. Select the snapshot and click the Copy Snapshot option.
  4. In the encryption settings, choose New Encryption Key (this will require selecting a new Customer Managed Key (CMK)).
  5. Follow the prompts to copy the snapshot with the new encryption key. Click Next to continue.

P3

 

P4
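For reference, the copy-and-re-encrypt step corresponds to a single API call. Here is a minimal Boto3 sketch with placeholder snapshot names and a placeholder CMK ARN.

import boto3

rds = boto3.client('rds', region_name='us-east-1')  # use your instance's region

# Copying the snapshot with KmsKeyId re-encrypts it with the Customer Managed Key
rds.copy_db_snapshot(
    SourceDBSnapshotIdentifier='mydb-pre-cmk-snapshot',
    TargetDBSnapshotIdentifier='mydb-cmk-snapshot',
    KmsKeyId='arn:aws:kms:us-east-1:123456789012:key/11111111-2222-3333-4444-555555555555',
)
rds.get_waiter('db_snapshot_available').wait(DBSnapshotIdentifier='mydb-cmk-snapshot')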

Step 3: Navigate to the Newly Created Snapshot and Restore It

  1. Once the new snapshot is successfully created, navigate to the list of available snapshots.
  2. Locate the newly created snapshot.
  3. Select the snapshot and choose the Restore or Action → Restore option.

P5

 

Step 4: Fill in the Details to Match the Old Database

  1. When prompted to restore the snapshot, fill in the details using the same configuration as the old database, including instance size, database configuration, networking details, and storage options.
  2. Ensure all configurations match the old setup to maintain continuity.

Step 5: Create the Restored Database

  1. After filling in the necessary details, click Create to restore the snapshot to a new instance.
  2. Wait for the process to complete.
  3. Verify that the new database has been restored successfully.

P6
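If you want to script the restore as well, a minimal Boto3 sketch is shown below. The instance identifier, instance class, subnet group, and security group are placeholders that should be set to match the old instance's configuration.

import boto3

rds = boto3.client('rds', region_name='us-east-1')  # use your instance's region

# Restore a new instance from the CMK-encrypted snapshot, reusing the old instance's settings
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier='mydb-instance-cmk',
    DBSnapshotIdentifier='mydb-cmk-snapshot',
    DBInstanceClass='db.t3.medium',
    DBSubnetGroupName='my-db-subnet-group',
    VpcSecurityGroupIds=['sg-0123456789abcdef0'],
    MultiAZ=False,
    PubliclyAccessible=False,
)
rds.get_waiter('db_instance_available').wait(DBInstanceIdentifier='mydb-instance-cmk')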

 

Best Practices for RDS Encryption

  • Enable automated backups and validate snapshots.
  • Secure encryption keys and monitor storage costs.
  • Test restored databases before switching traffic.
  • Ensure security groups and CloudWatch monitoring are set up.

Following these practices helps ensure a secure and efficient RDS snapshot process.

 

Conclusion

Following these steps ensures a secure, efficient, and smooth process for taking, encrypting, and restoring RDS snapshots in AWS. Implementing best practices such as automated backups, encryption key management, and proactive monitoring can enhance data security and operational resilience. Proper planning and validation at each step will minimize risks and help maintain business continuity.

]]>
https://blogs.perficient.com/2025/03/04/rds-migration-aws-managed-to-cmk-encryption/feed/ 0 377717