Cloud Articles / Blogs / Perficient | Expert Digital Insights
https://blogs.perficient.com/category/services/platforms-and-technology/cloud/

End-to-End Monitoring for EC2: Deploying Dynatrace OneAgent on Linux
https://blogs.perficient.com/2025/04/30/end-to-end-monitoring-for-ec2-deploying-dynatrace-oneagent-on-linux/ (Wed, 30 Apr 2025)

Objective: Enable resource monitoring for AWS EC2 instances using the Dynatrace monitoring tool (OneAgent) to gain real-time insights into system performance, detect anomalies, and optimize resource utilization.

What is Dynatrace?

Dynatrace is a platform for observability and application performance monitoring (APM) that delivers real-time insights into application performance, infrastructure oversight, and analytics powered by AI. It assists teams in detecting, diagnosing, and resolving problems more quickly by providing comprehensive monitoring across logs, metrics, traces, and insights into user experience.

Dynatrace OneAgent

Dynatrace OneAgent is primarily a single binary file that comprises a collection of specialized services tailored to your monitoring setup. These services collect metrics related to various components of your hosts, including hardware specifications, operating systems, and application processes. The agent also has the capability to closely monitor specific technologies (such as Java, Node.js, and .NET) by embedding itself within these processes and analyzing them from the inside. This enables you to obtain code-level visibility into the services that your application depends on.

Key Features of Dynatrace OneAgent

  • Automatic Deployment – OneAgent installs automatically and starts collecting data without manual configuration.
  • Full-Stack Monitoring – It monitors everything from application code to databases, servers, containers, and networks.
  • AI-Powered Insights – Works with Dynatrace’s Davis AI engine to detect anomalies and provide root cause analysis.
  • Auto-Discovery – Automatically detects services, processes, and dependencies.
  • Low Overhead – Designed to have minimal impact on system performance.
  • Multi-Platform Support – Works with Windows, Linux, Kubernetes, AWS, Azure, GCP, and more.

Prerequisites to Implement OneAgent

  1. Dynatrace account
  2. An AWS EC2 instance running Linux with the SSH port (22) enabled.

How to Implement Dynatrace OneAgent

Step 1. Dynatrace OneAgent configuration

Log in to the Dynatrace portal and search for Deploy OneAgent.


Select the platform on which your application is running. In our case, it is Linux.


Create a token that is required for authentication.


After generating a token, you will receive a command to download and execute the installer on the EC2 instance.


Step 2: Log in to the EC2 instance using SSH and run the command to download the installer.

Then run the command to execute the installer; a representative pair of commands is sketched below.
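The exact commands are generated for you on the Deploy OneAgent page and already contain your environment ID and the token created earlier. The two commands below are only a representative sketch of what that generated pair typically looks like; YOUR-ENVIRONMENT-ID and YOUR-API-TOKEN are placeholders to be replaced with the values from your own portal.

# Download the OneAgent installer (placeholder environment ID and token)
wget -O Dynatrace-OneAgent-Linux.sh "https://YOUR-ENVIRONMENT-ID.live.dynatrace.com/api/v1/deployment/installer/agent/unix/default/latest?arch=x86" --header="Authorization: Api-Token YOUR-API-TOKEN"

# Run the installer with root privileges
sudo /bin/sh Dynatrace-OneAgent-Linux.sh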


Dynatrace OneAgent is now installed on the EC2 instance.
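To confirm the agent came up, you can check its service on a systemd-based distribution. The service name oneagent is what current installers register; this is an assumption and may differ on older agent versions.

# Check that the OneAgent service is active
sudo systemctl status oneagent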


Output

We can now monitor resource usage at both the application and infrastructure levels on the Dynatrace dashboard.


Conclusion

Enabling resource monitoring for AWS EC2 instances using Dynatrace provides comprehensive observability, allowing teams to detect performance issues, optimize resource utilization, and ensure application reliability. By leveraging Dynatrace OneAgent, organizations can automate monitoring, gain AI-driven insights, and enhance cloud efficiency. Implementing this solution not only improves operational visibility but also facilitates proactive troubleshooting, reduces downtime, and optimizes cloud costs.

 

 

Redwood is coming…
https://blogs.perficient.com/2025/04/24/redwood-is-coming-or-is-it-already-here/ (Thu, 24 Apr 2025)

If you are a Game of Thrones fan, you are probably familiar with the “winter is coming” phrase.  When it comes to Oracle Fusion, the Redwood experience has been coming for years, but now it’s almost here.

Oracle is in the process of overhauling the whole Fusion suite with what it calls the “Redwood Experience.” The newly designed Redwood pages are not only responsive and more powerful than their predecessors, but they also bring great capability to the table.

  • Redwood pages are built for the future. They are all AI-ready and some come with pre-built AI capabilities.
  • They are geared toward a “Journey Guide” concept, so enterprise-level software implementations are no longer full of “technical jargon.”
  • The new AI Studio and the Visual Studio give Oracle Fusion clients the ability to modify the application for their business needs.

How to Move Forward with the Redwood Experience

Adopting Redwood is not a straightforward task. With every quarterly release, Oracle adds more and more pages with the Redwood design, but how do you take on the Redwood experience and explore AI opportunities?

  1. First, deploy the setup screens where Redwood experience is available.
  2. Second, review quarterly updates and decide what screens are mature enough to be deployed.
  3. Third, review whether the new design brings new functionality or lacks any existing functionality. For instance, the Oracle Work Definition Redwood pages bring new functionality, whereas the newly designed Order Management pages won’t support certain flows. Having said that, the Order Management screens bring a lot when it comes to AI capabilities; if the “not yet available” features are not a business requirement, moving to the Redwood experience will bring efficiency in customer service and a much better user experience.
  4. Fourth, have a game plan to roll out at your own pace. With the cloud, you are in total control of how and when you roll out the SCM pages. According to Oracle, there isn’t yet a definitive timeframe for making the Redwood pages mandatory (as of 04/2025). Please note that some of the pages are already in play and some may have already been made mandatory.

 

 

User acceptance and adoption come with time, so the sooner the transition begins, the more successful the implementation will be. Perficient can help you with your transition from traditional Fusion or legacy on-prem applications to the SCM Redwood experience. When you are ready to take the first step and you’re looking for some advice, contact us. Our strategy is to craft a path for our clients that makes the transition as seamless as possible for the user community and their support staff.

 

Redwood - Manage Manufacturer

The newly designed, modern-looking Manage Manufacturers Redwood experience with built-in AI Assist

 

 

Below are the Supply Chain features Oracle has released from release 24D to 25B (2024 Q3 to 2025 Q2) for Inventory Management alone, and it is already an overwhelming list. Please stay tuned for our Redwood series, which will cover select features.

Inventory Management
24D
Create Guided Journeys for Redwood Pages in the Setup and Maintenance Work Area
Integrate Manufacturing and Maintenance Direct Work Order Transactions with Your Warehouse Management System
Redwood: Audit Receipt Accrual Clearing Balances Using a New User Experience
Redwood: Correct Receipts Using a Redwood Page
Redwood: Create an Interorganization Transfer Using a Mobile Device
Redwood: Create and Edit Accrual Cutoff Rules Using a New User Experience
Redwood: Create Cycle Counts Using a Redwood Page
Redwood: Create Receipt Returns Using a Redwood Page
Redwood: Create Unordered Receipts Using a Redwood Page
Redwood: Inspect Receipts Using a Redwood Page
Redwood: Inspect Received Goods Using a Mobile Device
Redwood: Manage Inbound Shipments and Create ASN or ASBN Using a Redwood Page
Redwood: Review and Clear Open Receipt Accrual Balance Using a New User Experience
Redwood: Review Receipt Accounting Distributions Using a New User Experience
Redwood: Review Receipt Accounting Exceptions using a New User Experience
Redwood: View Item Quantities Using a Redwood Page
Redwood: View Lot Attributes in Mobile Inventory Transactions
Redwood: View Receipts and Receipt Returns in Supplier Portal Using a Redwood Page
Redwood: View the Inventory Management (New) Tile as Inventory Management (Mobile)
Replenish Locations Using Radio Frequency Identification
25A
Capture Recall Notices from the U.S. Food and Drug Administration Curated and Communicated by Oracle
Collaborate with Notes When Reviewing Open Accrual Balances
Complete Recall Containment Tasks Bypassing the Recall Count And Disposition
Create a Flow Manufacturing Work Definition Associated with a Production Line
Manage Shipping Profile Options
Redwood: Approve Physical Inventory Adjustments Using a Redwood Page
Redwood: Compare Standard Costs Using a New User Experience
Redwood: Create and Update Cost Scenarios Using a New User Experience
Redwood: Create and Update Standard Costs Using a New User Experience
Redwood: Create Manual Count Schedules Using a Redwood Page
Redwood: Create Nudges to Notify Users of Item Shortage and Item Stockout
Redwood: Define Pull Sequences and Generate Supplier and Intraorganization Kanban Cards
Redwood: Enhanced Costed BOM Report with Indented View of Lower-Level Subassembly Details
Redwood: Enter Receipt Quantity by Distribution in the Responsive Self-Service Receiving Application
Redwood: Manage ABC Classes, Classification Sets, and Assignment Groups Using a Redwood Page
Redwood: Manage Account Aliases Using a Redwood Page
Redwood: Manage and Create Physical Inventories Using a Redwood Page
Redwood: Manage Consigned Inventory Using a Redwood Page
Redwood: Manage Consumption Rules Using a Redwood Page
Redwood: Manage Interorganization Parameters Using a Redwood Page
Redwood: Manage Intersubinventory Parameters Using a Redwood Page
Redwood: Manage Inventory Transaction Reasons Using a Redwood Page
Redwood: Manage Lot and Serial Attribute Mappings Using a Redwood Page
Redwood: Manage Lot Expiration Actions Using a Redwood Page
Redwood: Manage Lot Grades Using a Redwood Page
Redwood: Manage Movement Requests Using a Redwood Page
Redwood: Manage Pick Slip Grouping Rules Using a Redwood Page
Redwood: Manage Picking Rules and Picking Rule Assignments Using a Redwood Page
Redwood: Manage Receiving Parameters Using a Redwood Page
Redwood: Manage Shipment Lines Using a Redwood Page
Redwood: Manage Shipments Using a Redwood Page
Redwood: Manage Transfer Orders Using a Redwood Page
Redwood: Perform Inventory Transactions Directly from Item Quantities
Redwood: Put Away Receipts Using a Redwood Page
Redwood: Receive Expected Shipments Using a Redwood Page
Redwood: Receive Multiple Lines Together in Responsive Self-Service Receiving as a Casual Receiver
Redwood: Receive Work Order Destination Purchases Using the Responsive Self-Service Receiving Application
Redwood: Record Physical Inventory Tags Using a Mobile Device
Redwood: Record Physical Inventory Tags Using a Spreadsheet
Redwood: Review Completed Transactions Using a Redwood Page
Redwood: Review Consumption Advices Using a Redwood Page
Redwood: Review Standard Costs Import Exceptions Using a New User Experience
Redwood: SCM AI Agents
Redwood: Search and View Supplier ASN in Receiving
Redwood: Signal and Track Supplier and Intraorganization Kanban Replenishment
Redwood: Use Descriptive Flexfields and Attachments in Mobile Inventory
Redwood: Use Redwood Style in Movement Request Approvals Notification
Redwood: View Item Supply and Demand Using a Redwood Page
Redwood: View Rollup Costs Using a New User Experience
Redwood: View Scenario Exceptions Using a New User Experience
Summarize and Categorize the Manual Accrual Clearing Transactions for a Period Using Generative AI
25B
Analyze Kanban Activity Using Oracle Transactional Business Intelligence and Business Intelligence Cloud Connector
Define Pull Sequences and Generate Production and Interorganization Kanban Cards
Define Time Fence to Locate Recalled Parts and Withdraw Irrelevant Recalls
Implement a Temporary Kanban Card for Short-Term Demand Surge
Manage and Track Supplier Kanban Cards Through the Supplier Portal
Receive FYI Notifications when a Recall Notice is Ingested
Redwood: Accounting Overhead Rules
Redwood: Analyze Gross Margin
Redwood: Capture Lot and Serial Numbers with a Streamlined Flow for Mobile Cycle Counting
Redwood: Confirm Picks Using a Mobile Device with an Improved User Experience
Redwood: Confirm Picks Using a Redwood Page
Redwood: Cost Accounting Landing Page
Redwood: Cost Accounting Periods
Redwood: Create and Edit Cost Adjustments
Redwood: Create and Edit Cost Analysis Groups Using a New User Experience
Redwood: Create and Edit Cost Books Using a New User Experience
Redwood: Create and Edit Cost Component Mappings Using a New User Experience
Redwood: Create and Edit Cost Elements Using a New User Experience
Redwood: Create and Edit Cost Organization Relationships Using a New User Experience
Redwood: Create and Edit Cost Organizations Using a New User Experience
Redwood: Create and Edit Cost Profiles Using a New User Experience
Redwood: Create and Edit Default Cost Profiles Using a New User Experience
Redwood: Create and Edit Item Cost Profiles Using a New User Experience
Redwood: Create and Edit Overhead Cost Element Groups Using a New User Experience
Redwood: Create and Edit Overhead Expense Pools Using a New User Experience
Redwood: Create and Edit Valuation Structures Using a New User Experience
Redwood: Create and Edit Valuation Units Using a New User Experience
Redwood: Create Cost Accounting Distributions
Redwood: Enter Miscellaneous Transactions on a Mobile Device Using a Streamlined Flow
Redwood: Implement Cost Accounting Using Quick Setup
Redwood: Manage Cycle Count Sequences Using a Redwood Page
Redwood: Manage Default Packing Configurations Using a Redwood Page
Redwood: Manage Inventory Business Event Configurations Using a Redwood Page
Redwood: Manage Material Statuses Using a Redwood Page
Redwood: Manage Pending Transactions Using a Redwood Page
Redwood: Manage Pick Wave Release Rules Using a Redwood Page
Redwood: Manage Release Sequence Rules Using a Redwood Page
Redwood: Manage Reservation Interface Records Using a Spreadsheet
Redwood: Manage Reservations Using a Redwood Page
Redwood: Manage Ship Confirm Rules Using a Redwood Page
Redwood: Manage Shipment Interface Records Using a Spreadsheet
Redwood: Manage Shipping Cost Types Using a Redwood Page
Redwood: Manage Shipping Document Job Set Rules Using a Redwood Page
Redwood: Manage Shipping Document Output Preferences Using a Redwood Page
Redwood: Manage Shipping Exceptions Using a Redwood Page
Redwood: Manage Shipping Parameters Using a Redwood Page
Redwood: Manage Shipping Transaction Correction Records Using a Spreadsheet
Redwood: Manage Transaction Sources and Types Using a Redwood Page
Redwood: Manage Transportation Schedules Using a Redwood Page
Redwood: Manage Units of Measure Usages Using a Redwood Page
Redwood: Receive Multiple Distribution Purchase Orders on the Expected Shipment Lines and Received Lines Pages
Redwood: Record PAR Counts on a Mobile Device Using a Streamlined Flow
Redwood: Review and Approve Item Cost Profiles
Redwood: Review Consigned Inventory in Supplier Portal Using a Redwood Page
Redwood: Review Consumption Advice in Supplier Portal Using a Redwood Page
Redwood: Review Cost Accounting Distributions
Redwood: Review Cost Accounting Processes
Redwood: Review Inventory Valuation
Redwood: Review Item Costs
Redwood: Review Maintenance Work Order Costs
Redwood: Review Standard Purchase Cost Variances
Redwood: Review Work Order Costs
Redwood: Standard Cost Overhead Absorption Rules
Redwood: Use a Redwood Template for Automatic Debit Memo Failure Notifications
Redwood: Use a Redwood Template for Confirm Receipt Notifications
Redwood: Use a Redwood Template for Create ASN Notifications
Redwood: Use Additional Pick Slip Grouping Rules Criteria
Redwood: Use an Improved Experience for Mobile Inventory Transactions
Redwood: Use Improved Capabilities in the Responsive Self-Service Receiving Application
Redwood: Use Improved Search Capabilities on Expected Shipment Lines Page
Redwood: Use Improved Sorting of Source Picking Locations During Pick Confirm
Redwood: Use Locators on Transfer Orders
Redwood: Use Saved Searches on Redwood Pages
Redwood: Use the Improved Inventory Management Landing Page
Redwood: View Additional Information When Creating a Receipt Using a Mobile Device
Redwood: View Additional Information When Performing a Subinventory Transfer Using a Mobile Device
Redwood: View Electronic Records Using a Redwood Page

 

Meet Perficient at Data Summit 2025
https://blogs.perficient.com/2025/04/22/meet-perficient-at-data-summit-2025/ (Tue, 22 Apr 2025)

Data Summit 2025 is just around the corner, and we’re excited to connect, learn, and share ideas with fellow leaders in the data and AI space. As the pace of innovation accelerates, events like this offer a unique opportunity to engage with peers, discover groundbreaking solutions, and discuss the future of data-driven transformation. 

We caught up with Jerry Locke, a data solutions expert at Perficient, who’s not only attending the event but also taking the stage as a speaker. Here’s what he had to say about this year’s conference and why it matters: 

Why is this event important for the data industry? 

“Anytime you can meet outside of the screen is always a good thing. For me, it’s all about learning, networking, and inspiration. The world of data is expanding at an unprecedented pace. Global data volume is projected to reach over 180 zettabytes (or 180 trillion gigabytes) by 2025—tripling from just 64 zettabytes in 2020. That’s a massive jump. The question we need to ask is: What are modern organizations doing to not only secure all this data but also use it to unlock new business opportunities? That’s what I’m looking to explore at this summit.” 

What topics do you think will be top-of-mind for attendees this year? 

“I’m especially interested in the intersection of data engineering and AI. I’ve been lucky to work on modern data teams where we’ve adopted CI/CD pipelines and scalable architectures. AI has completely transformed how we manage data pipelines—mostly for the better. The conversation this year will likely revolve around how to continue that momentum while solving real-world challenges.” 

Are there any sessions you’re particularly excited to attend? 

“My plan is to soak in as many sessions on data and AI as possible. I’m especially curious about the use cases being shared, how organizations are applying these technologies today, and more importantly, how they plan to evolve them over the next few years.” 

What makes this event special for you, personally? 

“I’ve never been to this event before, but several of my peers have, and they spoke highly of the experience. Beyond the networking, I’m really looking forward to being inspired by the incredible work others are doing. As a speaker, I’m honored to be presenting on serverless engineering in today’s cloud-first world. I’m hoping to not only share insights but also get thoughtful feedback from the audience and my peers. Ultimately, I want to learn just as much from the people in the room as they might learn from me.” 

What’s one thing you hope listeners take away from your presentation? 

“My main takeaway is simple: start. If your data isn’t on the cloud yet, start that journey. If your engineering isn’t modernized, begin that process. Serverless is a key part of modern data engineering, but the real goal is enabling fast, informed decision-making through your data. It won’t always be easy—but it will be worth it.

I also hope that listeners understand the importance of composable data systems. If you’re building or working with data systems, composability gives you agility, scalability, and future-proofing. So instead of a big, all-in-one data platform (monolith), you get a flexible architecture where you can plug in best-in-class tools for each part of your data stack. Composable data systems let you choose the best tool for each job, swap out or upgrade parts without rewriting everything, and scale or customize workflows as your needs evolve.” 

Don’t miss Perficient at Data Summit 2025. A global digital consultancy, Perficient is committed to partnering with clients to tackle complex business challenges and accelerate transformative growth. 

Part 1 – Marketing Cloud Personalization and Mobile Apps: Functionality 101
https://blogs.perficient.com/2025/04/21/part-1-marketing-cloud-personalization-and-mobile-apps-functionality-101/ (Mon, 21 Apr 2025)

Over the past three years working with Marketing Cloud Personalization (formerly Interaction Studio), I’ve always been intrigued by the Mobile icon and its capabilities. A few months ago, I decided to take a hands-on approach by developing my own application to explore this functionality firsthand, testing its implementation and understanding its real-world impact. And that  is what this blog is about.

The Overall Process

The overall steps of the Marketing Cloud Personalization mobile integration go as follows:

  1. Have an Application (Understatement)
  2. Have access to the app project and code.
  3. Integrate the Evergage SDK library to the app.
  4. Create a Mobile App inside Personalization UI
  5. Create a connection between the app and the Personalization Dataset
  6. Track views and actions of the user in the app (code implementation).
  7. Publish and track campaign actions and push notifications.

That’s all… easy, right? In this blog, we will review how to create the connection between MCP and the mobile app and how to create a first interaction (step 1 and part of step 6).

For this demo, I developed an iOS application using the Swift programming language. While I’m not yet an expert, I’ve been steadily learning how to navigate Xcode and implement functionality using Swift. This project has been a great opportunity to expand my skills in iOS development and better understand the tools and frameworks available within Apple’s ecosystem.

Integrate the Evergage SDK in the App

The iOS app I created is very simple (for now): it is just a label, a button, and an input field. The user types something in the input field, then clicks the button, and the text is shown in the label.

Iphone 16 App Simulator View

So, we need to add the Evergage SDK inside the app project. Download the Evergage iOS SDK (v1.4.1), unzip it and open the static folder. There, the Evergage.xcframework is the one we are about to use. When you have the folder ready, you need to copy the folder into your app. You should have something like this:

Evergage Framework Folder / Mobile App Folder Structure

After you add the folder, build your app again with Command + B.

Now we need to validate that the framework is there, so go to Target -> General -> Frameworks, Libraries and Embedded Content. You should see something like this; since I’m using the static folder, the Do Not Embed option is fine.

General Information In Xcode

Validate that the Framework Search Paths setting contains the path where the framework was copied/installed. This step may need to be done manually since sometimes the path doesn’t appear. Build the app again to confirm that no errors appear.

Framework Search Paths

To validate this works, go to AppDelegate.swift and type import Evergage; if no errors appear, you are good to go 🙂

Import Evergage View

 

Create a Mobile App Inside Personalization

Next, we have to create the Native App inside the Personalization dataset of your choice.

Hover over Mobile and click Add Native App.

Mpc Mobile View

Fill in the App Name and Bundle ID. For the Bundle ID, go to Target > General > Identity.

Add Native App

You will end up with something like this:

Demoapp Mpc View

Create the Connection to the Dataset

In AppDelegate.swift, we will do the equivalent of adding the JavaScript beacon to a page.

  1. First, we need to import the Evergage class reference. This allows us to start the Marketing Cloud Personalization iOS SDK. Our tracking interactions should now be done inside UIViewController-inherited classes.
  2. Change didFinishLaunchingWithOptions to willFinishLaunchingWithOptions.
  3. Inside the application function we do the following:
    1. Create a singleton instance of Evergage. A singleton is a creational design pattern that ensures a class has only one instance while providing a global access point to it, which can be used to coordinate actions across our app.
    2. Set the user ID. For this, we set evergage.userId using evergage.anonymousId, but if we already have the email or an ID for the user, we should pass it right away.
    3. Start the Evergage configuration. Here we pass the Personalization account ID and dataset ID. Other values set are usePushNotifications and useDesignMode. The latter helps us connect to the Personalization web console for action mapping screens.

 

//Other imports
import Evergage

@main
class AppDelegate: UIResponder, UIApplicationDelegate {



    func application(_ application: UIApplication, willFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool{
        
        //Create a singleton instance of Evergage
        let evergage = Evergage.sharedInstance()
        
        //Set User ID as anonymous
        evergage.userId = evergage.anonymousId
        
        //Start the Evergage Configuration with our Dataset information
        evergage.start { (clientConfigurationBuilder)   in
            clientConfigurationBuilder.account = "ACCOUNT_ID"
            clientConfigurationBuilder.dataset = "DATASET_ID"
            // If we want to use push notification campaigns
            clientConfigurationBuilder.usePushNotifications = true
            //Allow user-initiated gesture to connect to the Personalization web console for action mapping screens.
            clientConfigurationBuilder.useDesignMode = true
        }
        
        
        
        // Override point for customization after application launch.
        return true
    }
}

 

 

If we launch the app at this very moment, we will see the following inside Marketing Cloud Personalization:

Eventstream Report Interaction Action Description

This is very good, and with that we are certain it’s working and sending the information to Marketing Cloud Personalization.

Track Actions

So, in order to track a screen we can use the evergageScreen property. We use this property as part of the EVGScreen and EVGContext classes for tracking and personalization. This is possible when the app uses a UIViewController for each of the screens or pages we have.

class ViewController: UIViewController {

        override func viewDidLoad() {
            super.viewDidLoad()
            // Do any additional setup after loading the view.
            trackScreen()
        }
        
        func trackScreen(){
            
            evergageScreen?.trackAction("Main Screen")
            
        }
}

 

Interaction Action For Button

If we want to track the action of clicking a button, we can do something similar, for example:

@IBAction func handleClick(_ sender: UIButton) {
        
        labelText.text = inputField.text
        evergageScreen?.trackAction("Button clicked")
        
    }

In this code, each time the user clicks the button, the handleClick function triggers the action: inputField.text is assigned to labelText.text, the trackAction function is called, and the action is sent to our dataset.

Wrapping Up Part 1: What’s next?

That wraps up the first part of this tutorial! We’ve covered the basics of how to add the Personalization SDK to a mobile iOS application, how to create a Mobile App within Personalization, and how to do very basic action tracking in a view. In Part 2, we’ll dive into tracking more complex actions like view item and view item detail, which are part of the catalog object actions for tracking items.

What does SFO have to do with Oracle?
https://blogs.perficient.com/2025/04/21/what-does-sfo-have-to-do-with-oracle/ (Mon, 21 Apr 2025)

Isn’t SFO an airport? It is the airport one would travel to if the destination were Oracle’s Redwood Shores campus. Widely known as the initialism for San Francisco International Airport, SFO would indeed be an airport if this question were posed in that context. However, in Oracle Fusion, SFO stands for Supply Chain Financial Orchestration. Based on what it does, we cannot call it an airport, but it sure is a control tower for financial transactions.

As companies expand their presence across countries and continents through mergers and acquisitions or natural growth, it becomes inevitable for them to transact across borders and produce intercompany financial transactions.

Supply Chain Financial Orchestration (SFO) is where Oracle Fusion handles those transactions. The material may move one way, but for legal or financial reasons the financial flow could follow a different path.

A Typical Scenario

A Germany-based company sells to its EU customers from its Berlin office, but ships from its warehouses in New Delhi and Beijing.

Global

Oracle Fusion SFO takes care of all those transactions: as transactions are processed in Cost Management, financial trade transactions are created, and corporations can see their internal margins, intercompany accounting, and intercompany invoices.

Oh wait, the financial orchestration doesn’t have to be across countries only.  What if a corporation wants to measure its manufacturing and sales operations profitability?  Supply Chain Financial Orchestration is there for you.

In short, SFO is a tool that is part of the Supply Chain management offering that helps create intercompany trade transactions for various business cases.

Contact Mehmet Erisen at Perficient for more introspection of this functionality, and how Perficient and Oracle Fusion Cloud can digitalize and modernize your ERP platform.

www.oracle.com

www.perficient.com

Roeslein and Associates goes live with Oracle Project Driven Supply Chain
https://blogs.perficient.com/2025/04/21/roeslein-and-associates-goes-live-with-oracle-project-driven-supply-chain/ (Mon, 21 Apr 2025)

Roeslein & Associates 

Business Challenge + Opportunity 

We replaced disparate and outdated legacy systems with Oracle Fusion Cloud Manufacturing at a well-established manufacturing company. We implemented a scalable Fusion solution, including Project Driven Supply Chain (PDSC) and the full Financial and Supply Chain Management suites, to enable Roeslein to execute and extend their business processes globally.

The challenge in manufacturing was to set standard manufacturing processes to fulfill highly customized demand originating from their customers. In addition, Perficient designed a Supply Chain Data Architecture to support the functionality of the solution. 

Achievements

  • Created Global Solution Template to be used globally 
  • Redesigned Enterprise Structure to enable Roeslein to track profits in different business units. 
  • Defined processes to execute standard manufacturing processes for custom and highly flexible manufacturing demand 
  • Implemented Project Driven Supply Chain including Inventory, Manufacturing, Order Management, Procurement and Cost Management 
  • Implemented Solutions to support aftermarket part orders in addition to Manufacturing Orders 
  • Designed two integrations between Fusion and UKG to support labor capture in Manufacturing and Projects
  • Built an integration between Roeslein’s eCommerce platform and Fusion to support their aftermarket business

 

Contact Mehmet Erisen at Perficient for more introspection of this phenomenal achievement.  Congratulations to Roeslein & Associates and their entire staff! 

How the Change to TLS Certificate Lifetimes Will Affect Sitecore Projects (and How to Prepare)
https://blogs.perficient.com/2025/04/18/how-the-change-to-tls-certificate-lifetimes-will-affect-sitecore-projects-and-how-to-prepare/ (Fri, 18 Apr 2025)

TLS certificate lifetimes are being significantly reduced over the next few years as part of an industry-wide push toward greater security and automation. Here’s the phased timeline currently in place:

  • Now through March 15, 2026: Maximum lifetime is 398 days

  • Starting March 15, 2026: Reduced to 200 days

  • Starting March 15, 2027: Further reduced to 100 days

  • Starting March 15, 2029: Reduced again to just 47 days

For teams managing Sitecore implementations, this is more than a policy shift—it introduces operational urgency. As certificates begin expiring more frequently, any reliance on manual tracking or last-minute renewals could result in costly downtime or broken integrations.

If your Sitecore environment includes secure endpoints, custom domains, or external integrations, now is the time to assess your certificate strategy and move toward automation.

Why This Matters for Sitecore

Sitecore projects often involve:

  • Multiple environments (development, staging, production) with different certificates

  • Custom domains or subdomains used for CDNs, APIs, headless apps, or marketing campaigns

  • Third-party integrations that require secure connections

  • Marketing and personalization features that rely on seamless uptime

A single expired certificate can lead to downtime, loss of customer trust, or failed integrations—any of which could severely impact your digital experience delivery.

Key Risks of Shorter TLS Lifetimes

  • Increased risk of missed renewals if teams rely on manual tracking

  • Broken environments due to expired certs in Azure, IIS, or Kubernetes configurations

  • Delayed deployments when certificates must be re-issued last minute

  • SEO and trust damage if browsers start flagging your site as insecure

How to Prepare Your Sitecore Project Teams

To stay ahead of the TLS certificate lifecycle changes, here are concrete steps you should take:

1. Inventory All TLS Certificates

  • Audit all environments and domains using certificates

  • Include internal services, custom endpoints, and non-production domains

  • Use a centralized tracking tool (e.g., Azure Key Vault, HashiCorp Vault, or a certificate management platform)

2. Automate Certificate Renewals

  • Wherever possible, switch to automated certificate issuance and renewal

  • Use services like:

    • Azure App Service Managed Certificates

    • Let’s Encrypt with automation scripts (see the certbot sketch after this list)

    • ACME protocol integrations for Kubernetes

  • For Azure-hosted Sitecore instances, leverage Key Vault and App Gateway integrations
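As a minimal sketch of the Let’s Encrypt option on a Linux-hosted endpoint you control (example.com and the web root below are placeholders; Azure-hosted Sitecore roles would typically rely on App Service managed certificates instead):

# Issue a certificate using certbot's webroot plugin (placeholder domain and path)
sudo certbot certonly --webroot -w /var/www/html -d example.com -d www.example.com

# Certbot sets up a renewal timer; verify automatic renewal works
sudo certbot renew --dry-run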

3. Establish Certificate Ownership

  • Assign clear ownership of certificate management per environment or domain

  • Document who is responsible for renewals and updates

  • Add certificate health checks to your DevOps dashboards

4. Integrate Certificate Checks into CI/CD Pipelines

  • Validate certificate validity before deployments (see the openssl sketch after this list)

  • Fail builds if certificates are nearing expiration

  • Include certificate management tasks as part of environment provisioning
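A lightweight way to implement such a check is an openssl probe that fails the pipeline when a certificate is close to expiry. The snippet below is a sketch with a placeholder host and a 30-day threshold; wire it into whatever build step your pipeline uses.

HOST=www.example.com   # placeholder: the endpoint to check
# Exit non-zero if the certificate expires within 30 days (2592000 seconds)
echo | openssl s_client -connect "$HOST:443" -servername "$HOST" 2>/dev/null \
  | openssl x509 -noout -checkend 2592000 \
  || { echo "Certificate for $HOST expires within 30 days"; exit 1; }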

5. Educate Your Team

  • Hold knowledge-sharing sessions with developers, infrastructure engineers, and marketers

  • Make sure everyone understands the impact of expired certificates on the Sitecore experience

6. Test Expiry Scenarios

  • Simulate certificate expiry in non-production environments

  • Monitor behavior in Sitecore XP and XM environments, including CD and CM roles

  • Validate external systems (e.g., CDNs, integrations, identity providers) against cert failures

Final Thoughts

TLS certificate management is no longer a “set it and forget it” task. With shorter lifetimes becoming the norm, proactive planning is essential to avoid downtime and ensure secure, uninterrupted experiences for your users.

Start by auditing your current certificates and work toward automating renewals. Make certificate monitoring part of your DevOps practice, and ensure your Sitecore teams are aware of the upcoming changes.

Action Items for This Week:

  • Identify all TLS certificates in your Sitecore environments

  • Document renewal dates and responsible owners

  • Begin automating renewals for at least one domain

  • Review Azure and Sitecore documentation for certificate integration options

Security Best Practices in Sitecore XM Cloud
https://blogs.perficient.com/2025/04/16/security-best-practices-in-sitecore-xm-cloud/ (Wed, 16 Apr 2025)

Securing your Sitecore XM Cloud environment is critical to protecting your content, your users, and your brand. This post walks through key areas of XM Cloud security, including user management, authentication, secure coding, and best practices you can implement today to reduce your security risks.

We’ll also take a step back to look at the Sitecore Cloud Portal—the central control panel for managing user access across your Sitecore organization. Understanding both the Cloud Portal and XM Cloud’s internal security tools is essential for building a strong foundation of security.


Sitecore Cloud Portal User Management: Centralized Access Control

The Sitecore Cloud Portal is the gateway to managing user access across all Sitecore DXP tools, including XM Cloud. Proper setup here ensures that only the right people can view or change your environments and content.

Organization Roles

Each user you invite to your Sitecore organization is assigned an Organization Role, which defines their overall access level:

  • Organization Owner – Full control over the organization, including user and app management.

  • Organization Admin – Can manage users and assign app access, but cannot assign/remove Owners.

  • Organization User – Limited access; can only use specific apps they’ve been assigned to.

Tip: Assign the “Owner” role sparingly—only to those who absolutely need full administrative control.

App Roles

Beyond organization roles, users are granted App Roles for specific products like XM Cloud. These roles determine what actions they can take inside each product:

  • Admin – Full access to all features of the application.

  • User – More limited, often focused on content authoring or reviewing.

Managing Access

From the Admin section of the Cloud Portal, Organization Owners or Admins can:

  • Invite new team members and assign roles.

  • Grant access to apps like XM Cloud and assign appropriate app-level roles.

  • Review and update roles as team responsibilities shift.

  • Remove access when team members leave or change roles.

Security Tips:

  • Review user access regularly.

  • Use the least privilege principle—only grant what’s necessary.

  • Enable Multi-Factor Authentication (MFA) and integrate Single Sign-On (SSO) for extra protection.


XM Cloud User Management and Access Rights

Within XM Cloud itself, there’s another layer of user and role management that governs access to content and features.

Key Concepts

  • Users: Individual accounts representing people who work in the XM Cloud instance.

  • Roles: Collections of users with shared permissions.

  • Domains: Logical groupings of users and roles, useful for managing access in larger organizations.

Recommendation: Don’t assign permissions directly to users—assign them to roles instead for easier management.

Access Rights

Permissions can be set at the item level for things like reading, writing, deleting, or publishing. Access rights include:

  • Read

  • Write

  • Create

  • Delete

  • Administer

Each right can be set to:

  • Allow

  • Deny

  • Inherit

Best Practices

  • Follow the Role-Based Access Control (RBAC) model.

  • Create custom roles to reflect your team’s structure and responsibilities.

  • Audit roles and access regularly to prevent privilege creep.

  • Avoid modifying default system users—create new accounts instead.


Authentication and Client Credentials

XM Cloud supports robust authentication mechanisms to control access between services, deployments, and repositories.

Managing Client Credentials

When integrating external services or deploying via CI/CD, you’ll often need to authenticate through client credentials.

  • Use the Sitecore Cloud Portal to create and manage client credentials.

  • Grant only the necessary scopes (permissions) to each credential.

  • Rotate credentials periodically and revoke unused ones.

  • Use secure secrets management tools to store client IDs and secrets outside of source code.

For Git and deployment pipelines, connect XM Cloud environments to your repository using secure tokens and limit access to specific environments or branches when possible.
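As a rough illustration of how a pipeline typically exchanges client credentials for a short-lived access token, the curl call below sketches a standard OAuth 2.0 client credentials request. The token endpoint, audience, and environment variable names are placeholders rather than Sitecore-specific values; take the real endpoint and parameters from the credential details shown in the Cloud Portal, and keep the secret in your pipeline’s secret store rather than in source control.

curl --request POST \
  --url "$TOKEN_ENDPOINT" \
  --header "Content-Type: application/x-www-form-urlencoded" \
  --data "grant_type=client_credentials" \
  --data "client_id=$XMCLOUD_CLIENT_ID" \
  --data "client_secret=$XMCLOUD_CLIENT_SECRET" \
  --data "audience=$XMCLOUD_AUDIENCE"

The JSON response contains an access token that subsequent deployment or API calls present as a bearer token until it expires.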


Secure Coding and Data Handling

Security isn’t just about who has access—it’s also about how your code and data behave in production.

Secure Coding Practices

  • Sanitize all inputs to prevent injection attacks.

  • Avoid exposing sensitive information in logs or error messages.

  • Use HTTPS for all external communications.

  • Validate data both on the client and server sides.

  • Keep dependencies up to date and monitor for vulnerabilities.

Data Privacy and Visitor Personalization

When using visitor data for personalization, be transparent and follow data privacy best practices:

  • Explicitly define what data is collected and how it’s used.

  • Give visitors control over their data preferences.

  • Avoid storing personally identifiable information (PII) unless absolutely necessary.


Where to Go from Here

Securing your XM Cloud environment is an ongoing process that involves team coordination, regular reviews, and constant vigilance. Here’s how to get started:

  • Audit your Cloud Portal roles and remove unnecessary access.

  • Establish a role-based structure in XM Cloud and limit direct user permissions.

  • Implement secure credential management for deployments and integrations.

  • Train your developers on secure coding and privacy best practices.

The stronger your security practices, the more confidence you—and your clients—can have in your digital experience platform.

Managed Service Offering (MSO) Support Ticketing System
https://blogs.perficient.com/2025/04/10/managed-service-offering-mso-support-ticketing-system/ (Thu, 10 Apr 2025)

A ticketing system, such as a Dynamic Tracking Tool, can be a powerful tool for MSO support teams, providing a centralized and efficient way to manage incidents and service requests. Here are some more details on the benefits.

  1. Organize and triage cases: With a ticketing system, MSO support teams can easily prioritize cases based on their priority, status, and other relevant information. This allows them to quickly identify and resolve critical issues before they become major problems.
  2. Automate distribution and assignment: A ticketing system can automate the distribution and assignment of incidents to the right department staff member. This ensures that incidents are quickly and efficiently handled by the most qualified support team members.
  3. Increase collaboration: A ticketing system can increase collaboration between customer service teams and other stakeholders. It allows for easy and quick ticket assignment, collaboration in resolving issues, and real-time changes.
  4. Consolidate support needs: Using a ticketing system consolidates all support needs in one place, providing a record of customer interactions stored in the system. This allows support teams to quickly and easily access customer history, track communication, and resolve issues more effectively.
  5. Dynamics Tracking Tool: This shows various reports, such as the Real-Time Tracking Report and Historical Data Report, which are provided to monitor and analyze tracking data efficiently.

Overall, a ticketing system can help MSO support teams to be more organized, efficient, and effective in managing incidents and service requests.

Ticketchart

Benefits of a Dynamic Ticketing Management System


 

  1. Prioritization: A ticketing system efficiently prioritizes incidents based on their impact on the business and their urgency. This ensures critical issues are resolved quickly, minimizing downtime and maximizing productivity.
  2. Efficiency: A ticketing system streamlines the incident management process, reducing the time and effort required to handle incidents. It allows support teams to focus on resolving issues rather than spending time on administrative tasks such as logging incidents and updating users.
  3. Collaboration: A ticketing system enables collaboration between support teams, allowing them to share information and expertise to resolve incidents more efficiently. It also enables users to collaborate with support teams, providing real-time updates and feedback on the status of their incidents.
  4. Tracking & Reporting: A ticketing system provides detailed monitoring and reporting capabilities, allowing businesses to analyze incident data and identify trends and patterns. This information can be used to identify recurring issues, develop strategies to prevent incidents from occurring, and improve the overall quality of support services.
  5. Professionalism: A ticketing system provides a professional and consistent approach to incident management, ensuring that all incidents are handled promptly and efficiently. This helps to enhance the reputation of the support team and the business as a whole.
  6. Transparency: A ticketing system provides transparency in the incident management process, allowing users to track the status of their incidents in real time. It also provides visibility into the actions taken by support teams, enabling users to understand how incidents are being resolved.
  7. Continuity: A ticketing system provides continuity in the incident management process, ensuring that incidents are handled consistently and effectively across the organization. It also ensures that incident data is captured and stored in a centralized location, providing a comprehensive view of the incident management process.

A Support System Orbits Around 3-Tiered Support


Tier 1

Tier 1 tech support is typically the first level of technical support in a multi-tiered technical support model. It is responsible for handling basic customer issues and providing initial diagnosis and resolution of technical problems.

A Tier 1 specialist’s primary responsibility is to gather customer information and analyze the symptoms to determine the underlying problem. They may use pre-determined scripts or workflows to troubleshoot common technical issues and provide basic solutions.

If the issue is beyond their expertise, they may escalate it to the appropriate Tier 2 or Tier 3 support team for further investigation and resolution.

Overall, Tier 1 tech support is critical for providing initial assistance to customers and ensuring that technical issues are addressed promptly and efficiently.

Tier 2

Tier 2 support is the second level of technical support in a multi-tiered technical support model, and it typically involves more specialized technical knowledge and skills than Tier 1 support.

Tier 2 support is staffed by technicians with in-depth technical knowledge and experience troubleshooting complex technical issues. These technicians are responsible for providing more advanced technical assistance to customers, and they may use more specialized tools or equipment to diagnose and resolve technical problems.

Tier 2 support is critical for resolving complex technical issues and ensuring that customers receive high-quality technical assistance.

Tier 3

Tier 3 support typically involves highly specialized technical knowledge and skills, and technicians at this level are often subject matter experts in their respective areas. They may be responsible for developing new solutions or workarounds for complex technical issues and providing training and guidance to Tier 1 and Tier 2 support teams.

In some cases, Tier 3 support may be provided by the product or service vendor, while in other cases, it may be provided by a third-party provider. The goal of Tier 3 support is to ensure that the most complex technical issues are resolved as quickly and efficiently as possible, minimizing downtime and ensuring customer satisfaction.

Overall, Tier 3 support is critical in providing advanced technical assistance and ensuring that the most complex technical problems are resolved effectively.

Determine The Importance of Tickets/Incidents/Issues/Cases

The first step in a support ticketing system is to determine the incident’s importance. This involves assessing the incident’s impact on the user and the business and assigning a priority level based on the severity of the issue.


  1. Receiving: The step is to receive the incident report from the user. This can be done through various channels, such as email, phone, or a web-based form.
  2. Validating: This step involves validating the incident and verifying that it is a valid issue that needs to be addressed by the Support team.
  3. Logging: Once the incident has been validated, it is logged into an incident application, which is used to track and manage it throughout the process.
  4. Screening: The next step is to screen the incident and determine the user’s symptoms. This involves asking questions to gather more information about the issue and to identify any patterns or trends that may help resolve the incident.
  5. Prioritizing: Once the symptoms have been identified, the next step is to prioritize the incident based on its impact on the user and the business.
  6. Assigning: After the incident has been prioritized, it is assigned to a support team that will handle it. If the support team cannot handle the incident, it is escalated to a higher-level tier.
  7. Escalating: If the incident requires more advanced expertise or resources, it is escalated to a higher-level tier where it can be resolved more effectively.
  8. Resolving: The support team or higher-level tier works on resolving the incident and provides updates to the user until the issue is resolved.
  9. Closing: Once the incident has been resolved, the ticket is closed by logging the resolution and changing the ticket status to indicate that the incident has been successfully resolved.

Summary

Ticketing systems are essential for businesses that want to manage customer service requests efficiently. These systems allow customers to submit service requests, track the progress of their requests, and receive updates when their requests are resolved. The ticketing system also enables businesses to assign service requests to the appropriate employees or teams and prioritize them based on urgency or severity. This helps streamline workflow and ensure service requests are addressed promptly and efficiently. Additionally, ticketing systems can provide valuable insights into customer behavior, allowing businesses to identify areas where they can improve their products or services.

Domain Setup and Mail Flow Configuration in Microsoft 365
https://blogs.perficient.com/2025/04/05/domain-setup-and-mail-flow-configuration-in-microsoft-365/ (Sat, 05 Apr 2025)

Why Do We Need to Add and Verify a Domain in Microsoft 365 (M365)?

  1. Establishing Professional Identity
  • By adding your custom domain, you can create email addresses (e.g., you@yourcompany.com) that align with your business name. This adds professionalism and credibility to your communications.
  2. Personalizing Services
  • Verifying your domain allows you to customize services like Teams, SharePoint, and OneDrive to reflect your organization’s identity, making collaboration more consistent and branded.
  3. Email Delivery and Routing
  • To ensure emails sent to your custom domain are routed correctly to Microsoft 365, adding and verifying your domain is critical. This involves setting up DNS records like MX, SPF, and CNAME.
  4. Securing Your Domain
  • Verifying your domain protects it from unauthorized use. Only verified owners can manage the domain within Microsoft 365.

Add a Domain

  1. Go to the Microsoft 365 admin center.
  2. Go to the Settings > Domains page.
  3. Select Add domain.


  4. Enter the name of the domain you want to add, then select Next.


  5. Choose how you want to verify that you own the domain.
    1. If your domain registrar uses Domain Connect, Microsoft will set up your records automatically by having you sign in to your registrar and confirm the connection to Microsoft 365.
    2. Alternatively, we can use a TXT record to verify your domain.
  6. Once the domain is verified, we can go ahead and add the other Exchange Online records: TXT, MX, and CNAME.
  7. Please find the recording below for the above-mentioned steps.

Add new domain

Refer: Add a domain to Microsoft 365 – Microsoft 365 admin | Microsoft Learn

Add and Verify the Exchange Online Record

1. TXT Record Verification

  • Sign in to the Microsoft 365 Admin Center.
  • Navigate to Settings > Domains and select your domain.
  • Add a TXT record to your DNS hosting provider with the following details:
    • Host/Name: @
    • TXT Value: MS=msXXXXXXXX (unique ID provided in the admin center)
    • TTL: 3600 seconds (or default value)
  • Save the record and return to the Admin Center to click Verify.

2. MX Record Verification

  • If TXT verification isn’t supported, use an MX record instead.
  • Add an MX record to your DNS hosting provider:
    • Host/Name: @
    • Points to Address: Domain-com.mail.protection.outlook.com
    • Priority: 0
    • TTL: 3600 seconds
  • Save the record and verify it in the Admin Center.

3. CNAME Record Verification

  • Add a CNAME record for services like Autodiscover:
    • Alias/Name: Autodiscover
    • Target: Autodiscover.outlook.com
    • TTL: 3600 seconds
  • Save the record and ensure it’s correctly configured.

Refer: Add DNS records to connect your domain – Microsoft 365 admin | Microsoft Learn
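
Once the TXT, MX, and CNAME records above have been added at your DNS host, it can be useful to confirm that they have propagated before moving on. The short Python sketch below is one optional way to check; it assumes the dnspython package (not part of the Microsoft 365 setup itself), and yourcompany.com is a placeholder domain.

# pip install dnspython   (assumed prerequisite for this check)
import dns.resolver

domain = "yourcompany.com"  # placeholder; replace with your own domain

# TXT records: look for the MS=msXXXXXXXX verification value (and later the SPF record)
for rdata in dns.resolver.resolve(domain, "TXT"):
    print("TXT:", rdata.to_text())

# MX record: should point at the mail.protection.outlook.com host shown in the admin center, priority 0
for rdata in dns.resolver.resolve(domain, "MX"):
    print("MX:", rdata.preference, rdata.exchange.to_text())

# CNAME record: autodiscover should resolve to autodiscover.outlook.com
for rdata in dns.resolver.resolve("autodiscover." + domain, "CNAME"):
    print("CNAME:", rdata.target.to_text())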

Why Are TXT, MX, and CNAME Records Important for Exchange Online?

TXT, MX, and CNAME records play crucial roles in ensuring that your domain is correctly configured for Exchange Online and that your email and services work smoothly. Here’s why they matter:

TXT Records

TXT records are used to verify domain ownership and secure email systems.

  • Domain Verification: When adding your custom domain to Microsoft 365, a TXT record proves that you own the domain.
  • Email Security: TXT records support SPF (Sender Policy Framework), DKIM (DomainKeys Identified Mail), and DMARC (Domain-based Message Authentication, Reporting, and Conformance), which prevent email spoofing and improve deliverability by ensuring only authorized servers can send emails on behalf of your domain.
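
For example, the SPF record Microsoft typically recommends for Microsoft 365 is a TXT record with the value v=spf1 include:spf.protection.outlook.com -all, which tells receiving mail servers that only Exchange Online is authorized to send mail on behalf of your domain.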

MX (Mail Exchange) Records

MX records are critical for routing emails to the correct servers.

  • They direct incoming emails for your domain to the Microsoft 365/Exchange Online mail servers.
  • A misconfigured MX record can cause email delivery issues, so having it set up correctly is essential.

CNAME Records

CNAME records are used for service configuration.

  • For Exchange Online, CNAME records like Autodiscover ensure that users can seamlessly connect their email clients (like Outlook) to Exchange Online without manually entering settings.
  • They simplify and automate the connection process for end-users.

Together, these DNS records form the backbone of your domain’s email configuration, ensuring that everything from verification to email delivery and client connectivity operates effectively. Without these properly configured records, you might encounter issues like failed email delivery or difficulties in connecting to Exchange Online.

Mastering AWS IaC with Pulumi and Python – Part 2 https://blogs.perficient.com/2025/04/04/mastering-aws-iac-with-pulumi-and-python-part-2/ https://blogs.perficient.com/2025/04/04/mastering-aws-iac-with-pulumi-and-python-part-2/#respond Sat, 05 Apr 2025 04:34:29 +0000 https://blogs.perficient.com/?p=379632

In Part 1 of this series, we learned about the importance of AWS and Pulumi. Now let's move on to the hands-on part and create an AWS VPC (and related resources) using Pulumi.

Before We Start, Ensure You Have the Following

  • An AWS account with IAM permissions to create resources
  • Install the Pulumi CLI:
    • # curl -fsSL https://get.pulumi.com | sh
  • Install Python and create a virtual environment:
    • # python3 -m venv venv
    • # source venv/bin/activate   # On Windows: venv\Scripts\activate
    • # pip install pulumi pulumi_aws boto3

Configure AWS Credentials

  • Check whether the AWS CLI is installed by running:
    • # aws --version
  • If the AWS CLI is not installed, download and install it by following the AWS CLI installation guide.

Create an IAM User and Assign Permissions

  • Go to the AWS Management Console → IAM → Users
  • Click Create User, provide a username, and check Access Key – Programmatic Access
  • Assign necessary policies/permissions (e.g., AdministratorAccess or a custom policy).

Generate Security Credentials

  • After creating the user, download or copy the Access Key ID and Secret Access Key.

Configure AWS CLI with IAM User Credentials

  • Run:
    • # aws configure
  • Enter the credentials when prompted:
    • Access Key ID
    • Secret Access Key
    • Default region (e.g., us-east-1)
    • Output format (e.g., json)

Verify Configuration

  • Run a test command, such as:
    • # aws sts get-caller-identity
  • If everything is set up correctly, this will return the IAM user details.
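
If you prefer to verify the credentials from Python (using the boto3 package installed in the prerequisites), a minimal sketch could look like this:

import boto3

# Uses the credentials and default region configured via `aws configure`
identity = boto3.client("sts").get_caller_identity()
print("Account:", identity["Account"])
print("ARN:", identity["Arn"])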

Pulumi Version

Part2 1

AWS Configuration

Picture2 2

Pulumi Dashboard

Picture3

The Pulumi dashboard includes the following sections for each stack:

  • Overview
  • Readme
  • Updates
  • Deployments
  • Resources
  • Settings

Deployment Steps with Commands and Screenshots

Step 1: Initialize a Pulumi Project

  • # pulumi new aws-python
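
The aws-python template typically prompts for a project name, description, stack name, and AWS region, then scaffolds a project containing Pulumi.yaml, __main__.py, and requirements.txt and installs the Python dependencies into a virtual environment.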

Step 2: Define AWS Resources

  • Modify __main__.py to create a VPC:

Picture4
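
The screenshot above shows the VPC definition used in this demo. For readers following along in text, a minimal __main__.py along the same lines might look like the sketch below; the resource names and CIDR ranges are illustrative assumptions, not copied from the screenshot.

"""A minimal Pulumi program that provisions a VPC with one subnet."""
import pulumi
import pulumi_aws as aws

# Create a VPC with an illustrative /16 CIDR block
vpc = aws.ec2.Vpc(
    "demo-vpc",
    cidr_block="10.0.0.0/16",
    enable_dns_hostnames=True,
    tags={"Name": "demo-vpc"},
)

# Add a single subnet inside the VPC
subnet = aws.ec2.Subnet(
    "demo-subnet",
    vpc_id=vpc.id,
    cidr_block="10.0.1.0/24",
    tags={"Name": "demo-subnet"},
)

# Export identifiers so they show up as stack outputs after `pulumi up`
pulumi.export("vpc_id", vpc.id)
pulumi.export("subnet_id", subnet.id)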

Step 3: Pulumi Preview

  • # pulumi preview

pulumi preview shows a dry run of the changes before applying them. It helps you see which resources will be created (+), updated (~), or deleted (-) without actually making any changes.

Picture5

Step 4: Deploy Infrastructure

  • # pulumi up

Pulumi up deploys or updates infrastructure by applying changes from your Pulumi code.

Picture6

Picture7

Step 5: Verify Deployment

AWS Console Page

Creating VPC Peering with Pulumi

Picture8
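
The screenshot above shows the peering connection in the AWS console. As a rough sketch of how the same peering could be declared in __main__.py (assuming two VPCs defined in the same Pulumi program, account, and region; names and CIDR ranges are illustrative):

import pulumi
import pulumi_aws as aws

# Two VPCs with non-overlapping CIDR ranges (illustrative values)
vpc_a = aws.ec2.Vpc("vpc-a", cidr_block="10.0.0.0/16", tags={"Name": "vpc-a"})
vpc_b = aws.ec2.Vpc("vpc-b", cidr_block="10.1.0.0/16", tags={"Name": "vpc-b"})

# Peer the two VPCs; auto_accept works because both live in the same account and region
peering = aws.ec2.VpcPeeringConnection(
    "vpc-a-to-vpc-b",
    vpc_id=vpc_a.id,
    peer_vpc_id=vpc_b.id,
    auto_accept=True,
    tags={"Name": "vpc-a-to-vpc-b"},
)

pulumi.export("peering_connection_id", peering.id)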

Step 6: Pulumi Destroy

  • # pulumi destroy

pulumi destroy removes all the resources managed by the stack, restoring the environment to its original state.

Picture9

Picture10

Step 7: Pulumi Stack Remove

  • # pulumi stack rm <stack-name>

pulumi stack rm deletes a stack and its state from Pulumi, but it never deletes the underlying cloud resources (run pulumi destroy first for that); the --force flag only allows removing a stack whose state still references resources.

Picture11

Picture12

After the stack has been removed

Picture13

AWS Console Page after deleting VPC

Picture14

Conclusion

Pulumi offers a powerful, flexible, and developer-friendly approach to managing AWS infrastructure. By leveraging Pulumi, you can:

  • Simplify Infrastructure Management – Define cloud resources as code for consistency and repeatability.
  • Enhance Productivity – Create dynamic infrastructure by using Python’s full capabilities, including loops, functions, and modules.
  • Improve Collaboration – Version control your infrastructure with Git and integrate seamlessly with CI/CD pipelines.
  • Achieve Multi-Cloud Flexibility – Deploy AWS, Azure, and Google Cloud workloads without changing tools.
  • Maintain Security & Compliance – Use IAM policies, policy as code, and state management to enforce best practices.

With Pulumi’s modern IaC approach, you can move beyond traditional Terraform and CloudFormation and embrace a more scalable, flexible, and efficient way to manage AWS resources.

Key Takeaways

  • Code-Driven Infrastructure – Use loops, conditionals, and functions for dynamic configurations.
  • Multi-Cloud & Hybrid Support – Pulumi works across AWS, Azure, Google Cloud, and Kubernetes.
  • State Management & Versioning – Store state remotely with Pulumi Cloud or AWS S3 + DynamoDB.
  • Developer-Friendly – No need to learn a new domain-specific language (DSL); use Python!
  • Experiment with More AWS Services – Deploy API Gateway, Lambda, or DynamoDB.
  • Implement CI/CD with Pulumi – Automate deployments using GitHub Actions, Jenkins, or AWS CodePipeline.
  • Explore Pulumi Stacks – Manage multiple environments efficiently.
  • Read the Official Pulumi Docs – Pulumi AWS Documentation


Log Framework Integration in Azure Functions with Azure Cosmos DB https://blogs.perficient.com/2025/04/02/log-framework-integration-in-azure-functions-with-azure-cosmos-db/ https://blogs.perficient.com/2025/04/02/log-framework-integration-in-azure-functions-with-azure-cosmos-db/#respond Wed, 02 Apr 2025 09:30:54 +0000 https://blogs.perficient.com/?p=379516

Introduction

Logging is an essential part of application development, especially in cloud environments where monitoring and debugging are crucial. Azure Functions has no built-in provision for writing application-level logs to a centralized database, which makes it cumbersome to check logs in the Azure portal every time. This blog focuses on integrating NLog into Azure Functions so that all logs are stored in a single database (Azure Cosmos DB), providing a unified logging approach for better monitoring and debugging.

Steps to Integrate Logging Framework

Integration steps

 

1. Create an Azure Function Project

Begin by creating an Azure Function project using the Azure Function template in Visual Studio.

2. Install Required NuGet Packages

To enable logging with NLog, install the following NuGet packages:

Function App Explorer

Install-Package NLog
Install-Package NLog.Extensions.Logging
Install-Package Microsoft.Azure.Cosmos
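
Here, NLog is the core logging library, NLog.Extensions.Logging bridges NLog into the Microsoft.Extensions.Logging pipeline used by Azure Functions, and Microsoft.Azure.Cosmos is the .NET SDK for Azure Cosmos DB.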

 

 

3. Create and Configure Nlog.config

NLog uses an XML-based configuration file to define logging targets and rules. Create a new file named Nlog.config in the project root and configure it with the necessary settings.

Refer to the official NLog documentation for database target configuration: NLog Database Target

Important: Set Copy to Output Directory to Copy Always in the file properties to ensure deployment.

N Log Config Code

 

4. Create Log Database

Create an Azure Cosmos DB account with the SQL API.

Sample Cosmos DB Database and Container

  1. Database Name: LogDemoDb
  2. Container Name: Logs
  3. Partition Key: /Application

5. Define Necessary Variables

In the local.settings.json file, define the Cosmos DB connection string.

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "CosmosDBConnectionString": "AccountEndpoint=https://your-cosmosdb.documents.azure.com:443/;AccountKey=your-account-key;"
  }
}

Json App Settings

 

6. Configure NLog in Startup.cs

Modify Startup.cs to configure NLog as the logging provider and to register a CosmosClient built from the connection string.

using Microsoft.Azure.Functions.Extensions.DependencyInjection;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;
using NLog.Extensions.Logging;
using Microsoft.Azure.Cosmos;

[assembly: FunctionsStartup(typeof(MyFunctionApp.Startup))]
namespace MyFunctionApp
{
    public class Startup : FunctionsStartup
    {
        public override void Configure(IFunctionsHostBuilder builder)
        {
            builder.Services.AddLogging(loggingBuilder =>
            {
                loggingBuilder.ClearProviders();
                loggingBuilder.SetMinimumLevel(LogLevel.Information);
                loggingBuilder.AddNLog();
            });

            builder.Services.AddSingleton(new CosmosClient(
                Environment.GetEnvironmentVariable("CosmosDBConnectionString")));
        }
    }
}

Startup Code

 

7. Add Logs in Necessary Places

To ensure efficient logging, add logs based on the following log level hierarchy:

Log Levels
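
The image above summarizes the log level hierarchy; for reference, NLog's standard levels from most to least verbose are Trace, Debug, Info, Warn, Error, and Fatal, and the minimum level configured in Startup.cs (Information) controls which of them are forwarded to the logging pipeline.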

Example Logging in Function Code:

 

using System;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public class MyFunction
{
    private readonly ILogger<MyFunction> _logger;
    private readonly CosmosClient _cosmosClient;
    private readonly Container _container;

    public MyFunction(ILogger<MyFunction> logger, CosmosClient cosmosClient)
    {
        _logger = logger;
        _cosmosClient = cosmosClient;

        // Initialize the Cosmos DB container created in step 4 (LogDemoDb / Logs)
        _container = _cosmosClient.GetContainer("LogDemoDb", "Logs");
    }

    [FunctionName("MyFunction")]
    public async Task Run([TimerTrigger("0 */5 * * * *")] TimerInfo myTimer)
    {
        var logEntry = new
        {
            id = Guid.NewGuid().ToString(),
            Application = "MyFunctionApp",   // must match the container's partition key path (/Application); value is illustrative
            timestamp = DateTime.UtcNow,
            logLevel = "Information",
            message = "Function executed at " + DateTime.UtcNow
        };

        // Insert the log document into Cosmos DB, partitioned by the Application value
        await _container.CreateItemAsync(logEntry, new PartitionKey(logEntry.Application));

        _logger.LogInformation("Function executed at {time}", DateTime.UtcNow);
    }
}

8. Deployment

Once the function is ready, deploy it to Azure Function App using Visual Studio or Azure DevOps.

Deployment Considerations:

  • Define necessary environment variables in Azure Function Configuration Settings.
  • Ensure the Azure Function App and the Azure Cosmos DB account can reach each other over the network (same virtual network or appropriate firewall/IP rules) to avoid connection issues.
  • Monitor logs using Application Insights for additional diagnostics.

Conclusion

By following these steps, you can successfully integrate NLog into your Azure Functions for efficient logging. This setup enables real-time monitoring, structured log storage, and improved debugging capabilities.
