Migration Articles / Blogs / Perficient
https://blogs.perficient.com/tag/migration/

IOT and API Integration With MuleSoft: The Road to Seamless Connectivity
https://blogs.perficient.com/2025/05/21/iot-and-api-integration-with-mulesoft-the-road-to-seamless-connectivity/
Wed, 21 May 2025

In today’s hyper-connected world, the Internet of Things (IoT) is transforming industries, from smart manufacturing to intelligent healthcare. However, the real potential of IoT is realized only when devices connect continuously with enterprise systems, providing real-time insights and enabling automation. This is where MuleSoft’s Anypoint Platform comes in as a game-changer for integrating IoT devices and APIs into a single ecosystem. This blog explains how MuleSoft provides the foundation for that connectivity, offering an approach to IoT and API integration that goes beyond standalone dashboards to deliver scalability, security, and efficiency.

Objective

In this blog, I will show MuleSoft’s ability to integrate IoT devices with enterprise systems through API connectivity, focusing on real-time data processing. I will provide an example of how MuleSoft’s Anypoint Platform connects to an MQTT broker and processes IoT device sensor data. The example highlights MuleSoft’s ability to handle IoT protocols like MQTT and transform data for insights.

How Does MuleSoft Facilitate IoT Integration?

MuleSoft’s Anypoint Platform combines API-led connectivity, native protocol support, and a comprehensive integration framework to handle the complexities of IoT integration. Here is how MuleSoft makes IoT integration manageable:

  1. API Connectivity for Scalable Ecosystems

MuleSoft’s API-led strategy categorizes integrations into System, Process, and Experience APIs, allowing modular connections between IoT devices and enterprise systems. For example, in a smart city, System APIs gather data from traffic sensors, Process APIs aggregate it, and Experience APIs surface the insights in a dashboard. This scalability avoids the chaos of point-to-point integrations, a common fault in visualization-focused tools.

  2. Native IoT Protocol Support

IoT devices rely on protocols such as MQTT, AMQP, and CoAP, all of which MuleSoft supports. This enables direct communication between sensors, gateways, and back-end systems without extra middleware. For example, MuleSoft can route MQTT data from temperature sensors to a cloud platform such as Azure IoT Hub more easily than tools that require custom plugins.

  3. Real-Time Processing and Automation

IoT requires real-time data processing, and MuleSoft’s runtime engine processes data streams in real time while supporting automation. For example, if a factory sensor picks up a fault, MuleSoft can invoke an API to notify maintenance teams and update systems. MuleSoft integrates visualization with actionable workflows.

  4. Pre-Built Connectors for Setup

MuleSoft’s Anypoint Exchange provides connectors for IoT platforms (e.g., AWS IoT) and enterprise systems (e.g., Salesforce). In healthcare, connectors link patient wearables to EHRs, reducing development time. This plug-and-play approach beats custom integrations commonly required by other tools.

  5. Centralized Management and Security

IoT devices manage sensitive information, and MuleSoft maintains security through API encryption and OAuth. Its Management Center provides a dashboard to track device health and data flows, offering centralized control that standalone dashboard applications cannot provide without additional infrastructure.

  6. Hybrid and Scalable Deployments

MuleSoft’s hybrid model supports both on-premises and cloud environments, providing flexibility for IoT deployments. Its scalability handles growing networks, such as fleets of connected vehicles, making it a future-proof solution.

Building a Simple IoT Integration with MuleSoft

To demonstrate MuleSoft’s IoT integration, I created a simple flow in Anypoint Studio that connects to a public MQTT broker, processes sensor data, and logs the result. MQTT Explorer is used to simulate IoT sensor data by publishing messages to the broker. The following are the steps for the Mule API flow:

[Image: API flowchart]

Step 1: Setting Up the Mule Flow

In Anypoint Studio, create a new Mule project (e.g., ‘IoT-MQTT-Demo’). Design a flow with an MQTT Connector to connect to the broker, a Transform Message component to process the data, and a Logger to output the results.

[Image: Step 1]

Step 2: Configuring the MQTT Connector

Configure the MQTT Connector properties. In General Settings, point the connector at a public broker ("tcp://test.mosquitto.org:1883"). Add the topic filter "iot/sensor/data" and select QoS "AT_MOST_ONCE".

[Image: Step 2]

Step 3: Transforming the Data

Use DataWeave to parse the incoming JSON payload (e.g., ‘{“temperature”: 25.5 }’) and add a timestamp. The DataWeave code is:

```
%dw 2.0
output application/json
---
{
    sensor: "Temperature",
    value: read(payload, "application/json").temperature default "",
    timestamp: now()
}
```

[Image: Step 3]

Step 4: Connect to MQTT

In MQTT Explorer, click Connections and use the settings shown below to connect to the broker:

[Image: Step 4]

 Step 5: Simulating IoT Data

Once MQTT Explorer is connected, publish a sample message ‘{"temperature": 28 }’ to the topic ‘iot/sensor/data’; the message is delivered to the Mule flow as shown below.

[Image: Step 5]
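If you prefer to script this publish step instead of using MQTT Explorer, here is a minimal Python sketch (assuming the paho-mqtt package is installed); the broker, topic, and QoS values match the ones configured in Step 2:

```python
import json

from paho.mqtt import publish  # pip install paho-mqtt

# Sample reading matching the message published from MQTT Explorer above.
payload = json.dumps({"temperature": 28})

# Publish once to the public Mosquitto test broker on the topic the
# Mule flow subscribes to; QoS 0 corresponds to AT_MOST_ONCE.
publish.single(
    topic="iot/sensor/data",
    payload=payload,
    qos=0,
    hostname="test.mosquitto.org",
    port=1883,
)
print(f"Published {payload} to iot/sensor/data")
```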

Step 6: Logging the Output

Run the API, publish the message from MQTT Explorer, and the processed data will be logged to the console. An example log is shown below:

[Image: Step 6]

The above example highlights MuleSoft’s process for connecting IoT devices, processing data, and preparing it for visualization or automation.

Challenges in IoT Integration and MuleSoft’s Solutions

IoT integration faces challenges:

  • Device and Protocol Diversity: IoT ecosystems involve many kinds of devices, such as sensors and gateways, using protocols like MQTT or HTTP and different data formats such as JSON, XML, or binary.
  • Data Volume and Velocity: IoT devices generate high volumes of real-time data, which must be processed efficiently to avoid bottlenecks.
  • Security and Authentication: IoT devices are often exposed, so they require secure communication (such as TLS) and device authentication (such as OAuth).
  • Data Transformation and Processing: Many devices send compact binary payloads, which must be transformed into formats such as JSON and enriched before use (a minimal decoding sketch follows this list).
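To illustrate that last point, here is a minimal Python sketch that decodes a hypothetical fixed-layout binary sensor frame into JSON. The frame layout (a 2-byte device id followed by a 4-byte float temperature) and the field names are assumptions for illustration, not a real device format.

```python
import json
import struct
from datetime import datetime, timezone

def decode_sensor_frame(raw: bytes) -> str:
    """Decode a hypothetical 6-byte frame (2-byte device id plus a
    4-byte little-endian float temperature) and emit it as JSON."""
    device_id, temperature = struct.unpack("<Hf", raw)
    return json.dumps({
        "deviceId": device_id,
        "temperature": round(temperature, 2),
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })

# Example: device 7 reporting 25.5 degrees Celsius.
frame = struct.pack("<Hf", 7, 25.5)
print(decode_sensor_frame(frame))
```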

The Future of IoT with MuleSoft

The future of IoT with MuleSoft is promising. The Anypoint Platform addresses critical integration issues: it connects diverse IoT devices and protocols such as MQTT, keeps data flowing between ecosystems, supports real-time processing and analytics integration, and adds security with TLS and OAuth.

Conclusion

MuleSoft’s Anypoint Platform streamlines IoT and API integration by providing a scalable, secure, real-time solution for connecting devices to enterprise systems. As the example shows, MuleSoft can ingest MQTT-based IoT data and transform it into useful insights without external scripts or additional tooling. By addressing challenges like data volume and security, MuleSoft provides a platform for building IoT ecosystems that deliver automation and insight. As IoT keeps growing, MuleSoft’s API-led connectivity and native protocol support position it as an enabler of smart city, healthcare, and other connected solutions. Explore MuleSoft’s Anypoint Platform to unlock the full potential of your IoT projects and set the stage for a connected future.

Strategic Cloud Partner: Key to Business Success, Not Just Tech
https://blogs.perficient.com/2025/05/13/strategic-cloud-partner-key-to-business-success-not-just-tech/
Tue, 13 May 2025

Cloud is easy—until it isn’t.

Perficient’s Edge: A Strategic Cloud Partner Focused on Business Outcomes

Cloud adoption has skyrocketed. Multi-cloud. Hybrid cloud. AI-optimized workloads. Clients are moving fast, but many are moving blindly. The result? High costs, low returns, and strategies that stall before they scale.

That’s why this moment matters. Now, more than ever, your clients need a partner who brings more than just cloud expertise—they need business insight, strategic clarity, and real results.

In our latest We Are Perficient episode, we sat down with Kiran Dandu, Perficient’s Managing Director, to uncover exactly how we’re helping clients not just adopt cloud, but win with it.

If you’re in sales, this conversation is your cheat sheet for leading smarter cloud conversations with confidence.

Key #1: Start with Business Outcomes, Not Infrastructure

Kiran makes one thing clear from the start: “We don’t start with cloud. We start with what our clients want to achieve.”

At Perficient, cloud is a means to a business end. That’s why we begin every engagement by aligning cloud architecture with long-term business objectives—not just technical requirements.

Perficient’s Envision Framework: Aligning Cloud with Business Objectives

Using this framework, we work with clients to:

  • Define their ideal outcomes
  • Assess their existing workloads
  • Select the right blend of public, private, hybrid, or multi-cloud models
  • Optimize performance and cost every step of the way

This outcome-first mindset isn’t just smarter—it’s what sets Perficient apart from traditional cloud vendors.

Key #2: AI in the Cloud – Delivering Millions in Savings Today

Forget the hype—AI is already transforming how we operate in the cloud. Kiran breaks down the four key areas where Perficient is integrating AI to drive real value:

  • DevOps automation: AI accelerates code testing and deployment, reducing errors and speeding up time-to-market.
  • Performance monitoring: Intelligent tools predict and prevent downtime before it happens.
  • Cost optimization: AI identifies underused resources, helping clients cut waste and invest smarter.
  • Security and compliance: With real-time threat detection and automated incident response, clients stay protected 24/7.

The result? A cloud strategy that’s not just scalable, but self-improving.

Key #3: Beyond Cloud Migration to Continuous Innovation

Moving to the cloud isn’t the end goal—it’s just the beginning.

Kiran emphasizes how Perficient’s global delivery model and agile methodology empower clients to not only migrate, but to evolve and innovate faster. Our teams help organizations:

  • Integrate complex systems seamlessly
  • Continuously improve infrastructure as business needs change
  • Foster agility across every department—not just IT

And it’s not just theory. Our global consultants, including the growing talent across LATAM, are delivering on this promise every day.

“The success of our cloud group is really going to drive the success of the organization.”
Kiran Dandu

Global Talent, Local Impact: The Power of a Diverse Strategic Cloud Partner

While visiting our offices in Medellín, Colombia, Kiran highlighted the value of diversity in driving cloud success:

“This reminds me of India in many ways—there’s talent, warmth, and incredible potential here.”

That’s why Perficient is investing in uniting its global cloud teams. The cross-cultural collaboration between North America, LATAM, Europe, and India isn’t just a feel-good story—it’s the engine behind our delivery speed, technical excellence, and customer success.

Key Takeaways for Sales: Lead Smarter Cloud Conversations

If your client is talking about the cloud—and trust us, they are—this interview is part of your toolkit.
You’ll walk away understanding:

  • Why Perficient doesn’t just build cloud platforms—we build cloud strategies that deliver
  • How AI and automation are creating real-time ROI for our clients
  • What makes our global model the best-kept secret in cloud consulting
  • And how to speak the language of business outcomes, not just cloud buzzwords

Watch the Full Interview: Deep Dive with Kiran Dandu

Want to hear directly from the source? Don’t miss Kiran’s full interview, packed with strategic insights that will elevate your next sales conversation.

Watch now and discover how Perficient is transforming cloud into a competitive advantage.

Choose Perficient: Your Client’s Strategic Cloud Partner for a Competitive Edge

Perficient is not just another cloud partner—we’re your client’s competitive edge. Let’s start leading the cloud conversation like it.

Migrating from Eloqua to Salesforce Marketing Cloud: A Step-by-Step Guide
https://blogs.perficient.com/2025/04/07/migrating-from-eloqua-to-salesforce-marketing-cloud-a-step-by-step-guide/
Mon, 07 Apr 2025

Transitioning from Oracle Eloqua to Salesforce Marketing Cloud (SFMC) is a significant move that can unlock new capabilities, improve personalization, and better align your marketing stack with your broader Salesforce ecosystem. However, it’s not just a matter of copying and pasting your assets over — a successful migration requires thoughtful planning, collaboration, and execution. 

Here’s how to ensure a smooth and effective migration from Eloqua to SFMC. 

Start with a Comprehensive Assessment and Plan 

Before you begin moving anything, it’s essential to take stock of your current Eloqua environment. Conduct a full audit to understand exactly what’s being used — including email assets, forms, landing pages, segmentation data, and any automations in place. 

This is also the time to: 

  • Confirm key stakeholders across marketing, IT, and any other teams currently using Eloqua or planning to use SFMC. 
  • Map out the migration timeline, accounting for potential downtime or overlapping platform usage. 
  • Prepare your SFMC environment, ensuring that your Business Unit is properly configured and that administrative settings (including the Sender Authentication Package) are in place. 

Additionally, make sure that: 

  • The correct data extensions for segmentation are available (these will be equivalent to Eloqua’s contact and profile databases). 
  • All SFMC users have appropriate access and permissions. 
  • Each team member has gone through SFMC training and is comfortable with their responsibilities in the new platform. 

Data Migration: Getting Your Contacts Across 

Data migration is often the most sensitive and technically complex part of the process. You’ll want to follow a structured approach to minimize risk and ensure accuracy. 

  • Export data from Eloqua, including contacts, segments, and campaign history, using Eloqua’s built-in export tools. 
  • Map the data appropriately. The structure in SFMC may differ from Eloqua, so ensure every field is aligned correctly (a minimal mapping sketch follows this list).
  • Import the data into SFMC, placing it into the appropriate data extensions. 
  • Verify the migration, confirming that all contact and segmentation data is imported successfully and accessible. 
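To make the mapping step concrete, the sketch below reshapes an Eloqua contact export into a CSV ready for import into an SFMC data extension. The file names and field names are illustrative assumptions; real exports and data extensions will differ between instances.

```python
import csv

# Hypothetical mapping from Eloqua export columns to SFMC data
# extension fields; adjust to match your actual field definitions.
FIELD_MAP = {
    "C_EmailAddress": "EmailAddress",
    "C_FirstName": "FirstName",
    "C_LastName": "LastName",
    "C_Company": "Company",
}

with open("eloqua_contacts.csv", newline="", encoding="utf-8") as src, \
     open("sfmc_contacts.csv", "w", newline="", encoding="utf-8") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=list(FIELD_MAP.values()))
    writer.writeheader()
    for row in reader:
        # Copy only mapped columns, leaving blanks for anything missing.
        writer.writerow({sfmc: row.get(eloqua, "") for eloqua, sfmc in FIELD_MAP.items()})

print("Wrote sfmc_contacts.csv, ready for data extension import")
```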

Rebuilding Campaigns, Content, and Journeys 

Now comes the creative part — migrating and rebuilding your campaigns. Not everything will transfer one-to-one, so this is also a great opportunity to refresh outdated assets and optimize workflows. 

  • Email templates and content should be rebuilt in SFMC’s Email Studio. Complex designs may require manual recreation. 
  • Campaign workflows from Eloqua need to be recreated in Journey Builder to replicate decision splits, entry criteria, and follow-up actions. 
  • Forms and landing pages that were part of Eloqua campaigns will need to be reimagined using SFMC’s Web Studio and CloudPages. 
  • Any automation processes in Eloqua should be rebuilt in SFMC’s Automation Studio. 

Testing and Validation 

Thorough testing is crucial to ensure nothing breaks during or after migration. 

  • Test all data imports to ensure accurate syncing and storage. 
  • Run campaign QA, confirming that email rendering and AMPScript personalization work across all platforms and browsers. 
  • Validate all Journeys, ensuring proper triggers, filters, and flows are in place. 
  • If using a new dedicated IP, follow proper IP warming protocols based on volume. Whether using a shared or dedicated IP, continuously monitor deliverability and escalate any issues to SFMC Support as needed. 

Training and Knowledge Transfer 

Even with the best tools, your team’s confidence and understanding of SFMC will drive success post-migration. Make sure everyone has completed their Salesforce Trailhead training and feels empowered in their role.  

Pro tip: Maintain comprehensive documentation of your new SFMC configuration and knowledge transfer processes. This can be invaluable for onboarding new team members.

Post-Migration Support and Optimization 

Your migration project shouldn’t end once the data is transferred, and the first campaigns are live. Keep an eye on how the system is performing, and ensure your team remains supported. 

  • Hold regular check-ins (weekly or monthly) to monitor team confidence and identify areas for process improvement. 
  • Actively monitor SFMC performance, looking for system or user issues that may emerge. 
  • Have a support structure in place to address any questions, issues, or needed updates. 

Don’t Forget the Extras 

While you focus on the main assets, make sure to also address the following: 

  • Third-party tools that may support or accelerate migration efforts. 
  • System integrations, such as CRMs or analytics tools, need to be reconnected and validated in SFMC. 
  • An audit of ongoing campaigns — this is the perfect time to retire outdated programs and streamline what’s being migrated. 

 

Migrating from Eloqua to Salesforce Marketing Cloud can feel daunting, but with the right preparation and a strategic approach, it can also be a major opportunity to evolve and enhance your marketing operations. Clear communication, proper planning, and a knowledgeable team will be key to a smooth and successful transition. 

Need help with your Eloqua to SFMC migration? Our team is here to guide you through every step — from strategy and setup to post-launch support. 

Legacy Systems Explained: Why Upgrading Them is Crucial for Your Business
https://blogs.perficient.com/2024/12/04/legacy-systems-explained-why-upgrading-them-is-crucial-for-your-business/
Wed, 04 Dec 2024

What are Legacy Systems? Why is Upgrading those Systems Required?

Upgrading a legacy system means more than making cosmetic improvements to keep things running smoothly. It addresses immediate operational needs rather than chasing a perfect but impractical solution, because when a critical system stops functioning in real time, the situation can quickly spiral out of control.

One such incident happened on January 4, 2024, when South Africa’s Department of Home Affairs was taken offline nationwide due to a mainframe failure. Mainframe failures in such contexts are high-stakes issues because they impact the core infrastructure that supports vital public services. In South Africa, where the Department of Home Affairs handles essential services such as passports, identity documents, and immigration control, a mainframe failure can have widespread repercussions, leading to backlogs, delays, and administrative chaos. The incident is a clear example of a critical legacy system facing significant risk due to outdated technology and operational challenges.

Addressing these issues through modernization and digital transformation is crucial for improving service delivery and ensuring the system’s continued effectiveness and security. A legacy system cannot be migrated in one go, because business and functional testing must happen along the way; a planned, systematic approach is needed when upgrading it.

 

Question: What is the solution to avoid such a case?
Answer: Modernization of Legacy code.

Legacy code modernization is improving and updating outdated software systems to make them more maintainable, scalable, and compatible with modern technologies. Let’s understand this using Apigee (an API Management tool).

1. Scalability

Legacy system: Legacy systems were designed to handle their original tasks, but they could not scale; capacity was limited by the on-premises infrastructure, which constrained business growth.
Apigee: Due to its easy scalability, centralized monitoring, and integration capabilities, Apigee helped the organization plan its approach to business improvements.

2. Security

Legacy system: One of the simplest methods for authenticating users in legacy systems was "Basic Authentication," where the client sends a username and password in every HTTP request. This method is vulnerable to man-in-the-middle (MITM) attacks if not combined with HTTPS, and credentials are exposed on each request.

Apigee: Using Apigee, the organization can quickly implement modern security features like OAuth, API key validation, rate limiting, and threat protection (e.g., bot detection) without changing the core logic of the APIs.

3. User and Developer Experience

Legacy system: The legacy API lacks good documentation, making it harder for external developers to integrate with it. Most systems tend to have a SOAP-based communication format.
Apigee: Apigee provides a built-in API portal, automatic API documentation, and testing tools, improving the overall developer experience and adoption of the APIs so that integration with other tools can be easy and seamless with modern standards.


There are now multiple ways to migrate data from legacy to modern systems, which are listed below.

1. Big Bang Migration
2. Phased Migration
3. Parallel Migration
4. Pilot Migration
5. Hybrid Migration
and more…

Although legacy system owners know these options, they are often very selective when finalizing a migration plan, focused only on the short-term goal of getting the code up and running in production. With many legacy systems, all that is left is the code and a sigh of relief that it is still running: there is no documentation, code history, or record of revisions, which is why a migration can fail on a large scale if something goes wrong.

Below are some points that need to be addressed before finalizing a migration from a legacy system to a modern one.

1. Research and Analysis

We need to understand the motives behind the development of the legacy system, since there is little or no documentation.

As part of this study, we can gather historical data to understand the system’s behavior and dig deeper for anything that helps us understand the system better.

2. Team Management

After studying the system, we can estimate the team size and plan resource management. These systems run on much older technology, so it is hard to find engineers with such outdated skills; in that case, management can cross-skill existing resources into those technologies.

I believe adding a proportionate number of junior engineers is also worthwhile, as the exposure to these challenges helps them improve their skills.

3. Tool to Capture Raw Logs

Analyzing the raw logs can tell us a lot about the system, because the logs record the communication that completes each task the system is asked to perform. By breaking the data down into plain language, identifying from timestamps when request volume peaks, and examining what the request parameters contain, we can characterize the system’s behavior and plan properly. A small scripted analysis, as sketched below, is often enough to start.
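A minimal Python sketch along these lines (assuming each log line begins with an ISO-8601 timestamp; the file name and pattern are placeholders to adjust for the real system) counts requests per hour:

```python
import re
from collections import Counter

# Assumes lines start with an ISO-8601 timestamp, e.g.
# "2024-01-04T09:15:22Z GET /api/status 200 ..."; adjust the pattern
# to whatever format the legacy system actually writes.
TIMESTAMP = re.compile(r"^(\d{4}-\d{2}-\d{2})T(\d{2}):")

requests_per_hour = Counter()
with open("raw_system.log", encoding="utf-8", errors="replace") as log:
    for line in log:
        match = TIMESTAMP.match(line)
        if match:
            day, hour = match.groups()
            requests_per_hour[f"{day} {hour}:00"] += 1

# Show the busiest hours first to see when request volume peaks.
for window, count in requests_per_hour.most_common(10):
    print(f"{window}  {count} requests")
```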

4. Presentation of the Logs

Sometimes we need to present the case study to senior management before proceeding with the plan. To simplify the presentation, we can use tools like Datadog and Splunk to render the data in tabular or graphical form so that other team members can understand it.

5. Replicate the Architect with Proper Functionality

This is the most important part. End-to-end development is the only route to a smooth migration. We need to enforce standards here, such as maintaining core functionality, managing risk, communicating data pattern changes to associated clients, and preserving user access, business processes, and so on. The research from point 1 helps us understand the system’s behavior and decide which modern technology the migration should land on.

We can implement and plan using one of the migration methods I mentioned above in the blog.

6. End-to-end Testing

Once the legacy system is replicated on modern technology, we need a User Acceptance Testing (UAT) environment in which to perform system testing. This can be challenging if the legacy system never had a testing environment; we may need to call mock backend URLs to simulate the behavior of dependent services.

7. Before Moving to Production, do Pre-production Testing Properly

Only after successful UAT testing can we be confident in the functionality and consider moving changes to production. Even then, some points must be ensured, such as following standards and maintaining documentation. On the standards side, we need to confirm that nothing puts the services at risk of failure on the modern technology and that everything is properly compatible.

In the documentation, we need to ensure that all service flows are appropriately documented and that testing is done according to the requirement gathering.

Legacy systems and their inner workings are among the most complex and time-consuming topics to deal with, but the upfront effort described above is what makes the job easier.

XM Cloud content migration: connecting external database
https://blogs.perficient.com/2024/09/13/xm-cloud-content-migration-connecting-external-database/
Sat, 14 Sep 2024

Historically, content migration with Sitecore meant dealing with database backups. In the modern SaaS world we have the luxury of neither managing cloud database backups nor a corresponding UI for doing so. Therefore, we must find an alternative approach.

Technical Challenge

Let’s assume we have a legacy Sitecore website, in my case XP 9.3, and we’ve been provided with only a master database backup containing all the content. The objective is to migrate the content from this master database into a new and shiny XM Cloud environment (or environments).

Without having direct access to the cloud, we can only operate locally. In theory, there could be a few potential ways of doing this:

  1. Set up a legacy XP instance of the desired version with the legacy content database already attached/restored to it, then try to attach (or restore) a vanilla XM Cloud database to the local SQL Server as the recipient database for the content migration. Unfortunately, this approach does not work because of the SQL Server version incompatibility between XM Cloud and XP 9.3. Even if it were possible, running XP 9.3 against the XM Cloud database won’t work, as XP 9.3 neither knows about the XM Cloud schema nor can handle the Items-as-Resources feature, which was introduced later in XP 10.1. Therefore, this option is not possible.

  2. Can we go the other way around by using the old database along with XM Cloud? This is not documented, but let’s assess it:

    1. Definitely won’t work in the cloud since we’re not given any control of DBs and their maintenance or backups.

    2. In a local environment, XM Cloud only works in Docker containers and it is not possible to use it with an external SQL Server where we have a legacy database. But what if we try to plug that legacy database inside of the local SQL Container? Sadly, there are no documented ways of achieving that.

  3. Keep two independent instances side by side (legacy XP and XM Cloud in containers) and use an external tool to connect them and migrate the content. In theory that is possible, but it carries a few drawbacks:
    1. The tool of choice is Razl, which is not free, requires a paid license, and has no free trial to test this out.
    2. Connecting to a containerized environment may not be easy and may require additional preparation.
    3. You may need a high-spec computer (or at least two mid-level machines on the same network) to run both instances side by side.

After some consideration, the second approach seems reasonable to try, so let’s give it a chance and conduct a PoC.

Proof of Concept: local XM Cloud with external content database

Following the second approach, we’re going to attach the external legacy database to XM Cloud running in a local containerized setup. That will let us use the built-in UI for mass-migrating content between the databases (as pictured below), along with Sitecore PowerShell scripts for finalizing and fine-tuning the migrated content.

[Image]

Step 1: Ensure SQL Server port is externally exposed

We will connect SQL Server Management Studio from the host through a port of the SQL Server container that is exposed externally. Luckily, that has already been done for us; just make sure docker-compose has:

    ports:
      - "14330:1433"

Step 2: Spin up the XM Cloud containers and confirm XM Cloud works fine for you

Nothing extraordinary here, as easy as running .\init.ps1 followed by .\up.ps1.

Step 3: Connect SQL Management Studio to SQL Server running in a container.

After you spin up the containers, run SQL Server Management Studio and connect to the SQL Server running in the SQL container through the exposed port 14330, as configured in step 1:

[Image]

Step 4: Restore the legacy database

If you have a data-tier application ("bacpac") file, you may want to do an extra step and convert it into a native binary backup for the particular SQL Server version used by XM Cloud before restoring. This step is optional, but if you expect to restore the backup more than once (which is likely to happen), it makes sense to take a binary backup as soon as you’ve restored the data-tier package the first time. Data-tier backups restore much more slowly than binary backups, so this will definitely save time in the future.

Once connected, let’s enable contained database authentication. This step is mandatory; otherwise, it would not be possible to restore the database:

EXEC sys.sp_configure N'contained database authentication', N'1'
go
exec ('RECONFIGURE WITH OVERRIDE')
go

One more challenge ahead: when performing backup and restore operations, SQL Server shows paths local to the server engine, not to the host machine. That means our backup must exist "inside" the SQL container. Luckily, we have this covered as well. Make sure docker-compose.override.yml contains:

  mssql:
    volumes:
      - type: bind
        source: .\docker\data\sql
        target: c:\data

That means you can place legacy database backups into the .\docker\data\sql folder of the host machine and they will magically appear within the C:\data folder when using the SQL Server Management Studio database restore tool, which you can run now.

Important! Restore the legacy database using the "magic name" format Sitecore.<DB_NAME_SUFFIX>; further below I will use the value RR as DB_NAME_SUFFIX.

Once the database is restored in SQL Server Management Studio under the name Sitecore.RR, we need to plug it into the system. There is a naming convention hidden from our eyes within the CM container.

Step 5: Configure connection strings

Unlike in XM/XP, there is no documented way to plug in an external database. The way connection strings are mapped to the actual system is cumbersome; it relies on some "magic" hidden within the container itself and obfuscated from our eyes, which we could only work out experimentally. Here are the steps to reproduce:

  • Add environmental variable to docker-compose record for CM:

    Sitecore_ConnectionStrings_RR: Data Source=${SQL_SERVER};Initial Catalog=${SQL_DATABASE_PREFIX}.RR;User ID=${SQL_SA_LOGIN};Password=${SQL_SA_PASSWORD}
  • Add a new connection string record. To do so, create a connection strings file in your customization project at .\src\platform\<SITENAME>\App_Config\ConnectionStrings.config, copy into it the content of the connection strings file from the CM container, and add a new entry:

    <add name="rr" connectionString="user id=user;password=password;Data Source=(server);Database=Sitecore_RR" />

Please note the difference in the suffix format between the two records above; that is fine, and the CM container still processes it correctly.

Step 6: Reinstantiating CM container

Simply restarting the CM container is not sufficient; you must remove it and re-create it, as killing or stopping it alone will not pick up the change.

For example, the command below will not work for that purpose:

docker-compose restart cm

… nor will this one:

docker-compose kill cm

The reason is that the CM container will not pick up updated environment variables from the docker-compose file on restart. Do this instead:

docker-compose kill cm
docker-compose rm cm --force
docker-compose up cm -d

Step 7: Validating

  1. Inspecting the CM container’s environment variables will show you the new connection string, as added:

    "Env": [
                "Sitecore_ConnectionStrings_RR=Data Source=mssql;Initial Catalog=Sitecore.RR;User ID=sa;Password=6I7X5b0r2fbO2MQfwKH"

     

  2. Inspecting the connection strings config (located at C:\inetpub\wwwroot\App_Config\ConnectionStrings.config on the CM container) shows the newly added connection string.

Step 8: Register new database with XM Cloud

This can be done with the config patch below. Save it as docker\deploy\platform\App_Config\Include\ZZZ\z.rr.config for testing, and later do not forget to include it in the platform customization project so that it gets shipped with each deployment:
<?xml version="1.0" encoding="UTF-8"?>
<configuration xmlns:patch="www.sitecore.net/.../">
    <sitecore>
        <eventing defaultProvider="sitecore">
            <eventQueueProvider>
                <eventQueue name="rr" patch:after="eventQueue[@name='web']" type="Sitecore.Data.Eventing.$(database)EventQueue, Sitecore.Kernel">
                    <param ref="dataApis/dataApi[@name='$(database)']" param1="$(name)" />
                    <param ref="PropertyStoreProvider/store[@name='$(name)']" />
                </eventQueue>
            </eventQueueProvider>
        </eventing>
        <PropertyStoreProvider>
            <store name="rr" patch:after="store[@name='master']" prefix="rr" getValueWithoutPrefix="true" singleInstance="true" type="Sitecore.Data.Properties.$(database)PropertyStore, Sitecore.Kernel">
                <param ref="dataApis/dataApi[@name='$(database)']" param1="$(name)" />
                <param resolve="true" type="Sitecore.Abstractions.BaseEventManager, Sitecore.Kernel" />
                <param resolve="true" type="Sitecore.Abstractions.BaseCacheManager, Sitecore.Kernel" />
            </store>
        </PropertyStoreProvider>
        <databases>
            <database id="rr" patch:after="database[@id='master']" singleInstance="true" type="Sitecore.Data.DefaultDatabase, Sitecore.Kernel">
                <param desc="name">$(id)</param>
                <icon>Images/database_master.png</icon>
                <securityEnabled>true</securityEnabled>
                <dataProviders hint="list:AddDataProvider">
                    <dataProvider ref="dataProviders/main" param1="$(id)">
                        <disableGroup>publishing</disableGroup>
                        <prefetch hint="raw:AddPrefetch">
                            <sc.include file="/App_Config/Prefetch/Common.config" />
                            <sc.include file="/App_Config/Prefetch/Webdb.config" />
                        </prefetch>
                    </dataProvider>
                </dataProviders>
                <!-- <proxiesEnabled>false</proxiesEnabled> -->
                <archives hint="raw:AddArchive">
                    <archive name="archive" />
                    <archive name="recyclebin" />
                </archives>
                <cacheSizes hint="setting">
                    <data>100MB</data>
                    <items>50MB</items>
                    <paths>2500KB</paths>
                    <itempaths>50MB</itempaths>
                    <standardValues>2500KB</standardValues>
                </cacheSizes>
            </database>
        </databases>
    </sitecore>
</configuration>

Step 9: Enabling Sitecore PowerShell Extension

Next, we’d want to enable PowerShell, if that is not yet done. You won’t be able to migrate the content using SPE without performing this step.

<?xml version="1.0" encoding="utf-8"?>
<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/" xmlns:role="http://www.sitecore.net/xmlconfig/role/" xmlns:set="http://www.sitecore.net/xmlconfig/set/">
  <sitecore role:require="XMCloud">
    <powershell>
      <userAccountControl>
        <tokens><token name="Default"  elevationAction="Block"/>
              <token name="Console" expiration="00:55:00" elevationAction="Allow" patch:instead="*[@name='Console']"/>
              <token name="ISE" expiration="00:55:00" elevationAction="Allow" patch:instead="*[@name='ISE']"/>
              <token name="ItemSave" expiration="00:55:00" elevationAction="Allow" patch:instead="*[@name='ItemSave']"/>
            </tokens>
      </userAccountControl>
    </powershell>
  </sitecore>
</configuration>

Include the above code into a platform customization project as .\docker\deploy\platform\App_Config\Include\ZZZ\z.SPE.config. If everything is done correctly, you can run SPE commands, as below:

[Image]

The Result

After all the above steps are done correctly, you will be able to utilize the legacy content database along with your new shiny local XM Cloud instance:
[Image]
Now you can copy items between databases using the built-in Sitecore UI, preserving their IDs and version history. You can also copy items with SPE from one database to another, since both are visible to the SPE engine.
Accelerate Cloud Migration with AWS OLA
https://blogs.perficient.com/2024/09/03/accelerate-cloud-migration-with-aws-ola/
Tue, 03 Sep 2024

In the wake of VMware’s recent license cost increase under Broadcom’s new pricing model, many enterprises are facing the pressing need to reevaluate their IT strategies. For those reliant on VMware’s virtualization technologies, the cost hike poses a significant challenge to maintaining budgetary control while continuing to drive digital transformation efforts.

Rather than simply absorbing these increased expenses, businesses now have a prime opportunity to explore more cost-effective and future-proof solutions. Perficient and Amazon Web Services (AWS), a robust and versatile cloud platform, can help organizations not only manage but also optimize their IT spending.

Why AWS?

AWS stands out as a premier choice for enterprises seeking to transition from traditional VMware environments to the cloud. Amazon Web Services Optimization and Licensing Assessment (AWS OLA) evaluates your third-party licensing costs to help you right-size your resources, reduce costs, and explore flexible licensing options. With its comprehensive suite of cloud-native services, AWS offers unparalleled flexibility, scalability, and cost-efficiency, enabling businesses to innovate and grow without being constrained by rising license fees.

AWS Finance Data shows that customers who used OLA benefited from a 36% reduction in total cost of ownership (TCO). Licensing is a critical yet often overlooked factor in cloud migration decisions. The cost associated with commercial licenses and the specific terms can significantly impact the total cost of ownership (TCO). A 2023 AWS study of 439 customers, encompassing over 300,000 servers, revealed that factoring in licensing considerations along with utilization optimization resulted in an average potential savings of 25.8%. This highlights the importance of a comprehensive approach to cloud migration, where licensing plays a key role in achieving substantial cost reductions.

 

How Perficient Can Help

As an AWS Advanced Tier Services Partner with the Migration Consulting Competency, Perficient is uniquely positioned to perform your AWS Optimization and Licensing Assessment (OLA). Our deep expertise and proven methodologies ensure a thorough evaluation of your current infrastructure, delivering actionable insights to optimize your AWS environment and reduce costs. With our extensive experience and strategic approach, Perficient helps you navigate the complexities of AWS licensing, ensuring your transition to the cloud is seamless and cost-effective.

 

Turning VMware Cost Hikes into Cloud-Driven Success with AWS and Perficient

Perficient is your trusted partner in this transition, offering the expertise, tools, and support needed to successfully migrate to AWS. Together, we can transform this challenge into a strategic advantage, positioning your business for long-term success in the cloud era.

For more information on how Perficient can help you transition to AWS, contact us today. Let’s embark on this journey to a more agile, efficient, and cost-effective IT future.

 

Sources:

https://aws.amazon.com/optimization-and-licensing-assessment/

https://aws.amazon.com/blogs/mt/reduce-software-licensing-costs-with-an-aws-optimization-and-licensing-assessment/

Perficient XM Cloud JumpStart
https://blogs.perficient.com/2023/09/27/perficient-xm-cloud-jumpstart/
Wed, 27 Sep 2023

Perficient’s XM Cloud Jumpstart helps existing Sitecore customers who want to adopt XM Cloud create and execute a plan to move their MVC solutions to XM Cloud using Next.js and React or rapidly implement a greenfield solution according to the best industry standards from scratch.

Regardless of your starting point, JumpStart delivers a better experience to visitors and ensures your marketing and development teams are ready to take full advantage of the platform. Perficient’s XM Cloud JumpStart solution distills the company’s collective, well-documented experience, derived from our XM Cloud and Headless experts’ discoveries, improvements, and, of course, real project expertise.

What are the key features of Perficient XM Cloud JumpStart?

  • it benefits from the latest versions of XM Cloud offering – we carefully monitor and sync updates regularly which allows you to stay up to date with the latest features, like Next.js SDK.
  • supports multisite architecture in all possible ways, whether you want to share the same resources within the same rendering host or are looking for total isolation between websites
  • multilingual support with scaffolding all the required dependencies, such as dictionaries, language switchers, etc.
  • SEO friendliness – achieved by a combination of our custom implementation and the best OOB SXA configuration
  • DevOps automation and infrastructure provisioning as well as support for multiple Source code providers (such as Azure DevOps) not just GitHub.
  • speaking about GitHub – GitHub Actions are also supported as an advanced CI/CD mechanism.
  • site blueprints – are especially helpful for multibrand and/or multiregional clients.
  • content migration scripts & automation.

Migration from XP/XM

Transferring your existing XP/XM platform to XM Cloud can demand a substantial allocation of resources. In some cases, the expense associated with reconstructing the solution may outweigh the benefits for certain businesses. Fortunately, our XM Cloud JumpStart offers a comprehensive toolkit, including templates, blueprints, migration automation tools, and adjustable pre-developed headless components. This simplifies and expedites the migration process, ultimately reducing the overall cost of transitioning to the cloud solution and making it a more accessible option for brands currently utilizing an on-premises platform.

Components

Headless SXA, provided with each and every XM Cloud environment, contains only a limited, basic set of components. Based on our previous XM Cloud implementations, we’ve identified that a components library is among the top time savers for development and faster time to market. Therefore, we created a components library featuring the most commonly needed components for a typical enterprise-level website. Since all the components are Headless SXA-based, there are plenty of styling configuration options for each of them, including rendering variants and parameters. Configuring components on a page takes significantly less time than implementing homebrewed components from scratch.

XM Cloud JumpStart implementation strategies

Adopting XM Cloud requires careful planning and execution and most importantly, a roadmap. Existing Sitecore customers considering a move to XM Cloud may feel overwhelmed at first.

Your Sitecore solution and business priorities may require different strategies for moving to XM Cloud. Here are a few approaches we’ve seen be successful:

  • Site Migration Factory works best for customers running a number of sites on Sitecore; creating a factory approach for migrating sites yields the most efficiencies.
  • Incremental & Agile suggests moving features incrementally and responding to business priorities along the way instead of focusing on a "big bang" release.
  • Lift & Shift is the most popular strategy; when you want to keep like-for-like functionality and move as efficiently as possible, this approach makes the most sense.
  • Redesign & Rearchitect is helpful when you want to improve the user experience during the migration. This approach can account for changes to design and content.

Want to learn more?

Reach out to me or my resourceful colleague David San Filippo if you’d like a tour of XM Cloud to see it in action with your own eyes, or to get familiar with any of the XM Cloud JumpStart capabilities.

Unleash the Power of Data: The Migration Factory by Perficient on Databricks
https://blogs.perficient.com/2023/08/30/unleash-the-power-of-data-the-migration-factory-by-perficient-on-databricks/
Wed, 30 Aug 2023

Introducing: The Migration Factory

In today’s ever-evolving business environment, staying up to date on best practices and technology is essential for remaining competitive.  Many Fortune 500s have realized the importance of making data work in favor of one’s business.  Without proper data management, corporations begin to fall behind the competition by struggling with things like disorganization, loss of visibility, and siloed operations.  As a Databricks partner with more than 75 Databricks-dedicated consultants, we build end-to-end solutions that empower our clients to gain more value from their data.  


Today, we are pleased to announce our inclusion in the Brickbuilder program, partner-developed solutions for the Lakehouse. A Brickbuilder Solution is a key component of the Databricks Partner Program and recognizes partners who have demonstrated a unique ability to offer differentiated Lakehouse industry and migration solutions in combination with their knowledge and expertise.   

 

The Migration Factory built by Perficient on the Databricks Platform is a repeatable solution that seamlessly migrates on-premises data to the cloud.  It leverages reusable tooling and a proven framework to efficiently migrate workloads, while considering a reduction in technical debt, implementing best practices, and speeding up the migration process. 

 

This solution addresses various challenges that are common for legacy systems: 

  • Unsupported and Costly Legacy Data Platform – Outdated or older Hadoop / Cloudera or on-premises DBMS platforms are a strain on IT budgets and create a lot of administration overhead. Organizations are limited to available compute and storage which makes scaling difficult. ​ 
  • Technical Debt and Disorganization – Aging platforms have significant technical debt, aging codebases, and lack of visibility into data processes. Batches continue to fail and support teams grow to maintain the business needs. Optimization is not a focus for historical processes. ​ 
  • Siloed Data Teams – Data Teams are not communicating and engaging to build a cohesive data ecosystem. This leads to long lead times for engineering, slower output of the data teams, and lack of access to data when it is needed. ​ 

Perficient’s Migration Factory simplifies the shift of legacy platforms to Databricks.  It includes various tools and accelerators such as data validation, data migration, and job conversion tools.  These go hand-in-hand to automate tasks, ensure data integrity, and adapt existing ETL jobs to the new platform. Our proven process consists of nine steps. 

[Image: Migration Factory process]

The Perficient approach to data migration guarantees consistency, scalability, and repeatable results for its clients. A healthcare insurance company needed to replace its aging, poorly managed, and expensive legacy data platform (Cloudera/Hadoop) because it met neither the company’s need for flexibility nor the specialized needs of its data scientists and advanced analysts, which was putting a strain on IT support and budgets.

Our team of engineers created data migration patterns and code transformation frameworks to fully migrate the legacy data platform to Databricks. The usage-based pricing reduced costs while extending the client’s advanced analytics and data modeling capabilities.   

For additional information on the Migration Factory, check out this link to connect with our experts! 

Healthcare Client Gets Major Commerce Upgrade
https://blogs.perficient.com/2021/06/07/healthcare-client-gets-major-commerce-upgrade/
Mon, 07 Jun 2021

Cardinal Health, a major healthcare services client, needed help upgrading from HCL Commerce V7 to V9 to serve its Medical Solutions Segment which is responsible for the sale and distribution of medical products to hospitals and other healthcare providers. The need to upgrade before loss of support of the current, older version was imperative, and our expertise with the technology and long-standing relationship with Cardinal Health made Perficient the perfect choice for a partner on this project.

How We Accomplished the Migration

At the time of the migration, Cardinal Health was going through a code-freeze and had a small window of time to complete the HCL Commerce migration as the company upgraded its enterprise resource planning (ERP). Cardinal Health chose us for this project among other potential partners due to our deep knowledge of the client’s business, in-depth scoping of requirements, and our deep experience with platform migration projects in general. Cardinal Health had discussions with the HCL Commerce product team who were confident in our team’s skillsets and the recommendations we made towards how we would achieve this migration through an accelerated timeline.

We began with conversations and scoping sessions, as well as workshops with Cardinal Health’s Medical ecommerce team to discuss how the platform is a factor in accelerating business growth and creating a more seamless customer experience. From an IT perspective, we highlighted everything from the platform’s infrastructure and development to communication tools, daily scrums, and more.

Perficient partnered closely with the Cardinal Health team to first gain access to the required environments, code artifacts, and test scripts. We then quickly gained a thorough understanding of how the current solution was built, all of the core components, and the customizations, which ensured we were able to properly validate the migrated solution. Then, we created parallel environments to complete setup, configuration, custom code asset migration, database migration, and testing through to a production-ready environment, allowing full production testing before the final cutover and virtually ensuring success on cutover night.

Perficient successfully completed the migration of the Cardinal Health Medical eCommerce site within the timeframe allowed for the project. With the upgraded platform, Cardinal Health has seen an improvement in site-speed, allowing customers to search and buy products more quickly, and enable Cardinal to process orders quickly within the platform. This will not only benefit buyers, but will also help Cardinal Health with efficiency and provide better return-visit rates for the website. For more information on how to upgrade your commerce platform efficiently, contact our experts today.

GitHub Code Migration Using DevOps Automation
https://blogs.perficient.com/2021/04/15/github-code-migration-using-devops-automation/
Fri, 16 Apr 2021

Migration from one code management system to another is a non-trivial exercise.  Most of the time the team wishes to maintain code history, branch structure, team permissions, and integrations. This blog post investigates one such migration from Bitbucket to GitHub for a large health maintenance organization.

Due to growth and acquisition over time, the organization found that development teams were using multiple source control systems. This led to increased expense from duplicate support efforts and license costs.  This included platform management, automated Continuous Integration / Continuous Delivery (CI/CD) integration, and end-user support. To resolve these issues, GitHub was chosen as the single platform for source control. The GitHub enterprise product offers multiple benefits, including tool integrations (e.g., web-hooks, SSH key based access, workflow plugins), an intuitive UI for team and project management, and notifications on specific behavior driven events (e.g., pull-request, merge, branch creation). Additionally, there is the option for cloud or on-premises deployment of their source code management (SCM) platform.

The migration of several thousand repositories presented a significant challenge. Beyond the logistics of coordination, it was also required that the DevOps team meet the tight timeframe around license renewal. To avoid this additional expense, teams were required to migrate not just the code base, but all of the associated meta-data (e.g., branch history, user permissions, tool integrations, etc.). In the approach detailed below we extensively leveraged CloudBees Jenkins™ workflows, Red Hat Ansible™ playbooks, and Python™ scripting to perform much of the required setup and migration work.

Approach

As shown in Figure 1, the migration effort involved creating a Jenkins migration workflow driven by user-provided information defining the Bitbucket source project, the GitHub target project, team ownership, repository details, and additional integration requirements. This migration information was stored in a new file added at the root of the source tree (‘app-info.yml’). This approach facilitates future automation integration and provides a simple way to track application metadata within the code base itself.


Figure 1. GitHub migration automation workflow
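
To make the idea concrete, here is a minimal sketch of how such a metadata file might be validated before a migration run. The field names and the Python check below are illustrative assumptions for this post, not the organization's actual schema or scripts.

```python
# Hypothetical example of validating an 'app-info.yml' migration metadata file.
# The field names below are illustrative assumptions, not the actual schema.
import sys
import yaml  # pip install pyyaml

REQUIRED_FIELDS = ["bitbucket_project", "github_org", "github_repo", "team", "integrations"]

def load_app_info(path="app-info.yml"):
    """Load the migration metadata file and fail fast if required fields are missing."""
    with open(path) as fh:
        info = yaml.safe_load(fh)
    missing = [field for field in REQUIRED_FIELDS if field not in info]
    if missing:
        sys.exit(f"app-info.yml is missing required fields: {', '.join(missing)}")
    return info

if __name__ == "__main__":
    info = load_app_info()
    print(f"Migrating {info['bitbucket_project']} -> {info['github_org']}/{info['github_repo']}")
```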

There were multiple considerations to address in the GitHub migration automation, including ensuring the target GitHub project had proper visibility permissions (e.g., public/private), using consistent project naming standards, integrating with pre-existing or to-be established security scanning automation, applying organization defined branch protection rules, and maintaining all necessary CI/CD pipeline automation.

Code Transfer

Although cloning the code into the new repository was technically the most straightforward migration operation, it required significant manual modifications to several key automation files maintained at the root of the project folder structure.  For example, the pre-existing Jenkins configuration (‘Jenkinsfile’) was updated post-migration to point to the correct shared library project; these libraries had been previously migrated to GitHub from Bitbucket. Unfortunately, because each development team used a specific library version, this step was a manual rather than automated onboarding activity.
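
A rough sketch of what the clone-and-push step could look like is shown below, assuming a mirror clone is acceptable and that credentials for both remotes are already configured. The repository URLs are placeholders, not the actual migration job.

```python
# Minimal sketch: mirror-clone a Bitbucket repository and push it to GitHub.
# URLs are placeholders; assumes git is installed and credentials (e.g., SSH keys)
# are configured for both remotes.
import subprocess

def mirror_repo(source_url: str, target_url: str, workdir: str = "repo.git") -> None:
    # A mirror clone carries all branches and tags, preserving full history.
    subprocess.run(["git", "clone", "--mirror", source_url, workdir], check=True)
    # Push everything (branches, tags, refs) to the new GitHub remote.
    subprocess.run(["git", "-C", workdir, "push", "--mirror", target_url], check=True)

mirror_repo(
    "ssh://git@bitbucket.example.com/project/app.git",   # placeholder source
    "git@github.com:example-org/app.git",                # placeholder target
)
```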

Branch Protection Rules

The organization had established a set of consistent branch management rules for source control trees.  For example, the policy requires that a pull-request be approved by at least one reviewer prior to code merges for the ‘master’, ‘release’, and ‘develop’ branches within the repository.  These rules were encoded within the migration Python scripts and pulled from the Ansible playbook during GitHub project creation.
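
As an illustration of how such a rule might be applied programmatically, here is a hedged sketch using the GitHub REST API's branch protection endpoint. The org name, repository name, and token are placeholders, and the team's actual scripts may have structured this differently.

```python
# Sketch: apply a one-approver pull-request rule to protected branches
# via the GitHub REST API. Org/repo names and token are placeholders.
import requests

GITHUB_API = "https://api.github.com"
HEADERS = {"Authorization": "token ghp_xxx",            # placeholder token
           "Accept": "application/vnd.github+json"}

def protect_branch(org: str, repo: str, branch: str) -> None:
    payload = {
        "required_pull_request_reviews": {"required_approving_review_count": 1},
        "enforce_admins": True,
        "required_status_checks": None,   # no required checks in this sketch
        "restrictions": None,             # no push restrictions in this sketch
    }
    url = f"{GITHUB_API}/repos/{org}/{repo}/branches/{branch}/protection"
    resp = requests.put(url, json=payload, headers=HEADERS)
    resp.raise_for_status()

for branch in ("master", "release", "develop"):
    protect_branch("example-org", "app", branch)
```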

Automated CI/CD Pipeline Modifications 

To support the existing CI/CD pipelines, the migrated code bases required pipeline configuration file updates. This included configuration links for automated Jira issue updates, proper Jenkins master/agent execution (i.e., web-hooks), security automation scans, and integration with library package control (e.g., JFrog Artifactory™).  These modifications were captured in migration Python scripts and pulled from the Ansible playbook during GitHub code migration.
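
A simplified sketch of the kind of reference rewrite involved is shown below. The URL patterns are placeholders, and a real migration script would cover far more configuration than a single Jenkinsfile.

```python
# Sketch: rewrite Bitbucket references in a migrated Jenkinsfile to point at GitHub.
# Patterns are placeholders; error handling is intentionally minimal.
import re
from pathlib import Path

OLD = r"ssh://git@bitbucket\.example\.com/([\w-]+)/([\w-]+)\.git"
NEW = r"git@github.com:example-org/\2.git"

def update_pipeline_refs(repo_root: str) -> None:
    jenkinsfile = Path(repo_root) / "Jenkinsfile"
    text = jenkinsfile.read_text()
    updated = re.sub(OLD, NEW, text)
    if updated != text:
        jenkinsfile.write_text(updated)
        print(f"Updated repository references in {jenkinsfile}")

update_pipeline_refs(".")
```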

Access Key and Service Account Management

Automated CI/CD processes often require service accounts and shared-secret access keys to function properly. During the GitHub migration it was critically important to carry these access keys over without exposing them in logs, notifications, or any other insecure reporting. The GitHub migration team used the Ansible vault feature and Groovy scripts to update the built-in Jenkins credential management, ensuring that project-specific secrets, accounts, and keys were securely transferred to the newly created GitHub-linked jobs during the migration process.

GitHub Pre-Migration Setup

The GitHub Jenkins integration was built as a separate job to create the GitHub ‘team’. This included configuring the team with a proper name, administrative users, and a matching Jenkins build folder. For each repository we also set a Jenkins “web-hook” to ensure the proper Jenkins master is used to run each CI/CD pipeline.
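
The sketch below illustrates one way the team and webhook setup could be scripted against the GitHub REST API. The organization, repository, Jenkins URL, and token are placeholders, and the webhook path assumes the standard Jenkins GitHub plugin endpoint rather than anything specific to this engagement.

```python
# Sketch: create a GitHub team and register a Jenkins webhook for one repository.
# Org, repo, Jenkins URL, and token are placeholders; admin:org and repo scopes assumed.
import requests

GITHUB_API = "https://api.github.com"
HEADERS = {"Authorization": "token ghp_xxx", "Accept": "application/vnd.github+json"}

def create_team(org: str, team_name: str) -> None:
    resp = requests.post(f"{GITHUB_API}/orgs/{org}/teams",
                         json={"name": team_name, "privacy": "closed"},
                         headers=HEADERS)
    resp.raise_for_status()

def add_jenkins_webhook(org: str, repo: str, jenkins_url: str) -> None:
    payload = {
        "name": "web",
        "active": True,
        "events": ["push", "pull_request"],
        # Standard GitHub plugin endpoint on the Jenkins master (assumption).
        "config": {"url": f"{jenkins_url}/github-webhook/", "content_type": "json"},
    }
    resp = requests.post(f"{GITHUB_API}/repos/{org}/{repo}/hooks",
                         json=payload, headers=HEADERS)
    resp.raise_for_status()

create_team("example-org", "payments-team")
add_jenkins_webhook("example-org", "app", "https://jenkins.example.com")
```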

Automated Testing Integration

As part of code quality control, SonarQube code scanning is tied to a defined repository and required as part of the Jenkins CI/CD workflow.  The scan results are reported to a separate GitHub tab, which needed to be matched up with the project team.  In this way, the newly created GitHub project could report the results of the automated code quality analysis directly to developers.

Results

The DevOps enablement team was required to meet a very tight deadline of four months to complete the full migration from Bitbucket to GitHub and avoid the expense of license renewals.  Given the scope of the challenge, the only viable solution was to automate as much of the migration as possible.  Where manual intervention was required, the DevOps team clearly communicated a checklist of activities to the affected teams for both pre- and post-migration changes.  Using the combined tool set of scripted Jenkins jobs, Ansible playbooks, and Python scripting, the DevOps team successfully completed all migrations and modifications to all code bases several weeks prior to the deadline. The organization’s information technology team has reported that all teams are active on GitHub and the Bitbucket repositories have been archived.

Checklist for a Successful Website Migration https://blogs.perficient.com/2020/04/28/checklist-for-a-successful-website-migration/ https://blogs.perficient.com/2020/04/28/checklist-for-a-successful-website-migration/#comments Tue, 28 Apr 2020 19:34:07 +0000 https://blogs.perficient.com/?p=273853

Relaunching or migrating a website is an intensive process, and it’s very easy for small issues to slip through the cracks, especially when teams lack institutional knowledge, are missing critical skills, or are working against an aggressive timeline.

Here are some things to watch out for, drawn from observing and being involved in several dozen migrations over the course of my career.

Get the Experience Right

The most critical part of relaunching or migrating an experience is the experience itself.

Coordinate with Content Authors

Content authors frequently know the site better than business stakeholders. During the migration (or refresh, or whatever you call it), make sure to pull content authors into the process; after all, they know the content and use the system on a daily basis.

Rather than surprising content authors with a new launch and hoping it matches their requirements, pull them in ahead of time to validate website authoring functionality as well as the content. A good content author is passionate about their content and willing to pitch in to validate so you can fix issues ahead of time instead of scrambling afterwards.

Confirm Content Renders Correctly

This may seem like an obvious basic, but it’s surprising how often no one has checked every page on the website to make sure it renders correctly. Automated tools can certainly help here; a good crawler like Screaming Frog is a must-have in every migration toolbox.

If you are doing a 1:1 migration, you can leverage AI-based automated testing tools to compare the website before and after migration; however, if you are changing the UI, this is not a viable option.

In the end, nothing compares to having good ol’ fashioned eyeballs looking at the website and making sure that it’s subjectively rendering correctly.

A simple but effective mechanism for verifying website rendering is to crawl the website, create a spreadsheet of the URLs in a share such as SharePoint or Google Drive, and then coordinate with the QA team and content authors to review and check off each URL.
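
For illustration, here is a minimal crawler sketch that produces such a checklist. The start URL is a placeholder, and a production crawl would need politeness controls (rate limiting, robots.txt handling) that are omitted here for brevity.

```python
# Sketch: crawl a site's internal links and write the discovered URLs to a CSV
# that reviewers can check off. Single-threaded and same-domain only.
import csv
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup  # pip install beautifulsoup4

START_URL = "https://www.example.com/"  # placeholder

def crawl(start_url, limit=500):
    domain = urlparse(start_url).netloc
    seen, queue = set(), [start_url]
    while queue and len(seen) < limit:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        try:
            resp = requests.get(url, timeout=10)
        except requests.RequestException:
            continue
        soup = BeautifulSoup(resp.text, "html.parser")
        for anchor in soup.find_all("a", href=True):
            link = urljoin(url, anchor["href"]).split("#")[0]
            if urlparse(link).netloc == domain and link not in seen:
                queue.append(link)
    return sorted(seen)

with open("url-review-checklist.csv", "w", newline="") as fh:
    writer = csv.writer(fh)
    writer.writerow(["URL", "Reviewed by", "Renders correctly?", "Notes"])
    for url in crawl(START_URL):
        writer.writerow([url, "", "", ""])
```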

While testing, be sure to use multiple browsers and screen sizes so the content appears correctly no matter the device. Responsive design is table stakes, so there’s no good reason not to have images scale down to the viewport.

Validate Website Interactivity

Beyond initial content rendering, pay special attention to interactive functionality on the website such as:

  • Web Forms
  • Quizzes
  • Tabs / Content Switchers
  • Personalization

Prior to migration, the team should identify all such functionality, document a list of interactive functions and perform regression testing prior to re-launch.
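
As a starting point for that inventory, a small script can flag pages that contain forms or embedded widgets. The sketch below is illustrative only and assumes a urls.txt file produced by an earlier crawl; it is not a substitute for hands-on regression testing.

```python
# Sketch: build a rough inventory of interactive features (forms, iframes) per page,
# as input to the regression-test list. Assumes a urls.txt file from an earlier crawl.
import requests
from bs4 import BeautifulSoup

with open("urls.txt") as fh:
    urls = [line.strip() for line in fh if line.strip()]

for url in urls:
    try:
        soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    except requests.RequestException:
        continue
    forms = len(soup.find_all("form"))
    iframes = len(soup.find_all("iframe"))
    if forms or iframes:
        print(f"{url}: {forms} form(s), {iframes} iframe(s) - add to regression test plan")
```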

Don’t forget Web Search!

One frequent miss is website search. It is one of the top features on most websites, but it can easily be overlooked in preparation, planning, or testing. If the content needs to be re-indexed once the site is cut over, it’s especially important to plan enough time for that.

Track Consistently

You need reliable data to prove that a migration is successful and identify any issues. Therefore, it’s critical to ensure tracking is consistent and accurate pre and post-relaunch.

Validate Analytics Tracking

Along with testing the experience, the QA team should be validating that Analytics tags are firing correctly and conveying the correct data. This should be comprehensive, not just testing a small sub-set, but validating across at least a large, representative sample of the experiences.

Preserve Tracking Consistency

There’s no good reason to abandon the website data from before the migration, so make sure to use the same Google Analytics Property or Adobe Analytics Report Suite. If there are issues with the way tracking is implemented, it’s better to fix them before migrating, to ensure good data, than to wait until the migration is complete.

Put the Website through the Wringer

Before the site goes live, check with every tool you can think of to ensure the website will not crash, be compromised or otherwise not work as expected.

Search Engine Optimization

Crawl the whole site to identify metadata issues, bad redirects, and other SEO problems. Make sure to use tools like a SERP simulator and a rich results tester to confirm the metadata is correct on the pages.
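
A quick way to spot obvious metadata gaps at scale is a small script like the sketch below. The URLs and length thresholds are illustrative assumptions, rules of thumb rather than hard SEO limits.

```python
# Sketch: flag pages with missing or unusually long titles and meta descriptions.
# Thresholds are illustrative rules of thumb, not hard SEO limits.
import requests
from bs4 import BeautifulSoup

def check_metadata(url):
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    issues = []
    title = soup.title.string.strip() if soup.title and soup.title.string else ""
    desc_tag = soup.find("meta", attrs={"name": "description"})
    desc = (desc_tag.get("content") or "").strip() if desc_tag else ""
    if not title:
        issues.append("missing <title>")
    elif len(title) > 60:
        issues.append(f"title is {len(title)} characters")
    if not desc:
        issues.append("missing meta description")
    elif len(desc) > 160:
        issues.append(f"meta description is {len(desc)} characters")
    return issues

for url in ["https://www.example.com/", "https://www.example.com/products/"]:  # placeholders
    problems = check_metadata(url)
    if problems:
        print(f"{url}: {'; '.join(problems)}")
```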

 

Validate Redirects

Before migrating or relaunching the site, make sure you have a full inventory of the site’s URLs and legacy redirects, and validate that each redirect leads to the expected page.
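
A sketch of that redirect check might look like the following. The redirect map here is a placeholder; in practice it would be loaded from the full export of legacy URLs and redirect rules.

```python
# Sketch: verify that each legacy URL resolves to the expected destination page.
# The redirect map is a placeholder for the full legacy URL export.
import requests

REDIRECT_MAP = {
    "https://www.example.com/old-products": "https://www.example.com/products",
    "https://www.example.com/about-us.html": "https://www.example.com/about",
}

for source, expected in REDIRECT_MAP.items():
    resp = requests.get(source, allow_redirects=True, timeout=10)
    final_url = resp.url.rstrip("/")
    if final_url != expected.rstrip("/"):
        print(f"MISMATCH: {source} -> {final_url} (expected {expected})")
    elif resp.status_code != 200:
        print(f"BROKEN: {source} resolved to {final_url} with HTTP {resp.status_code}")
```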

Security Testing

Security and penetration testing is a whole realm of expertise, and the implementation varies drastically between platforms. At a minimum, during the migration and relaunch development, make sure to comply with the OWASP Top 10. Beyond development, ensure you have all of the recommended security headers in place to enforce SSL, prevent XSS, and prevent framejacking, and that you are following best practices for securing the web server and infrastructure access.
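
A lightweight way to spot-check those headers is shown below. It is a smoke test only, not a substitute for proper security and penetration testing, and the URL is a placeholder.

```python
# Sketch: spot-check a few commonly recommended security response headers.
# Quick smoke test only; URL is a placeholder.
import requests

EXPECTED_HEADERS = [
    "Strict-Transport-Security",   # enforce SSL/TLS
    "X-Content-Type-Options",      # prevent MIME sniffing
    "X-Frame-Options",             # prevent framejacking/clickjacking
    "Content-Security-Policy",     # mitigate XSS
]

resp = requests.get("https://www.example.com/", timeout=10)
for header in EXPECTED_HEADERS:
    status = "present" if header in resp.headers else "MISSING"
    print(f"{header}: {status}")
```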

System Performance

There are a number of tools for testing system performance from Apache Benchmark to JMeter. Find the appropriate tool for your use case, test the site until it breaks and improve until you are a few standard deviations above the expected traffic.
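
Dedicated tools such as JMeter remain the right choice for real load testing, but a quick concurrency smoke test, sketched below with placeholder values, can catch obvious problems before the formal runs.

```python
# Sketch: a quick concurrency smoke test against a staging URL (placeholder values).
# Use a dedicated load-testing tool for the real performance runs.
import time
from concurrent.futures import ThreadPoolExecutor

import requests

URL = "https://staging.example.com/"  # placeholder
REQUESTS_TOTAL = 200
CONCURRENCY = 20

def timed_get(_):
    start = time.perf_counter()
    status = requests.get(URL, timeout=30).status_code
    return status, time.perf_counter() - start

with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    results = list(pool.map(timed_get, range(REQUESTS_TOTAL)))

latencies = sorted(duration for _, duration in results)
errors = sum(1 for status, _ in results if status >= 500)
p95 = latencies[int(len(latencies) * 0.95) - 1]
print(f"errors: {errors}/{REQUESTS_TOTAL}, "
      f"median: {latencies[len(latencies) // 2]:.2f}s, p95: {p95:.2f}s")
```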

User Performance

Having a well-performing server is great, but the front-end code will drastically influence the perceived website performance. Google PageSpeed Insights and Google Lighthouse are great tools for evaluating perceived performance.
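
If you want to run those checks across many templates rather than one page at a time, the PageSpeed Insights API can be scripted. The sketch below assumes a valid API key and uses placeholder URLs.

```python
# Sketch: pull the Lighthouse performance score from the PageSpeed Insights v5 API
# for a list of key templates. API key and URLs are placeholders.
import requests

PSI_ENDPOINT = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"
API_KEY = "your-api-key"  # placeholder

for url in ["https://www.example.com/", "https://www.example.com/products/"]:
    resp = requests.get(PSI_ENDPOINT,
                        params={"url": url, "strategy": "mobile", "key": API_KEY},
                        timeout=60)
    resp.raise_for_status()
    score = resp.json()["lighthouseResult"]["categories"]["performance"]["score"]
    print(f"{url}: mobile performance score {score * 100:.0f}")
```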

Accessibility

Building a website accessible to everyone is both a business and moral imperative. Browser tools such as WAVE or ANDI can help perform technical accessibility compliance tests for websites and should be used to validate the delivered code as well as the authored content.

Some common accessibility issues include designs that are difficult to read, insufficient color contrast, and missing required accessibility metadata such as alternative text.

Not Just a Website

Finally, a relaunch / migration should consider more than just the website. Make sure that 3rd party services, internal / external branding and communications are aligned to support the relaunch.

Ensure a Successful Relaunch

There’s a lot to consider when approaching a website migration or relaunch. Perficient’s expert consultant teams can support your success no matter the platform.

Upgrading to Microsoft Teams: IT Admin Edition (Part 2) https://blogs.perficient.com/2019/10/25/upgrading-to-microsoft-teams-it-admin-edition-part-2/ https://blogs.perficient.com/2019/10/25/upgrading-to-microsoft-teams-it-admin-edition-part-2/#respond Fri, 25 Oct 2019 15:18:40 +0000 https://blogs.perficient.com/?p=246152

Welcome back to the second blog in our upgrading to Microsoft Teams: IT Admin Edition series. If you are just joining us for the first time, I encourage you to check out the first blog in this series as it covers some important concepts for you IT admins out there! This time we’ll be discussing what the IT admin experience will be like within the Teams Admin Center and PowerShell.

Setting Coexistence Mode (Org-wide)

Let’s start off by showing our IT admins how to set the coexistence mode within the Microsoft Teams admin center. For this you’ll just need to ensure you have the Teams Service Administrator role assigned. Once you are logged into the Teams Admin Center navigate to the Org-wide settings option and then select Teams upgrade.

[Screenshot: Teams upgrade settings under Org-wide settings in the Teams Admin Center. Image provided by Microsoft]

In the picture above you’ll notice that we are presented with five different coexistence options. When setting your coexistence mode, please be aware that this is an organization-wide setting. By default, the coexistence mode is set to Islands, so make sure to plan accordingly before changing it.

Notifying the User (Org-wide)

In the example above you may have also noticed the option to notify Skype for Business users that an upgrade to Teams is available. In addition, you can set the preferred app that users will use to join a Skype for Business meeting. Lastly, you can enable or disable downloading the Teams app in the background for your existing Skype for Business users. The screenshot below shows what this looks like from the client’s perspective when the “Notify Skype for Business users that an upgrade to Teams is available” option is enabled.

[Screenshot: Skype for Business client notification that an upgrade to Teams is available. Image provided by Microsoft]

Setting Coexistence Mode to TeamsOnly

Let’s say you’re finally ready to make the jump to a strictly Teams Only coexistence mode. Setting your coexistence mode to Teams Only in the Teams Admin Center effectively says you are prepared for your organization to use only Microsoft Teams. As soon as you change the coexistence mode to Teams Only and click “Save” at the bottom of the screen, you will get a pop-up warning: “WARNING: Users will no longer be able to use Skype for Business, except when joining Skype meetings. These users will be Teams only, unless an explicit per-user upgrade policy is set.” Additionally, there is a box you must check to indicate you understand the effect of this change before you can “Save” it. Unless you have set per-user upgrade policies, this will affect EVERYONE in your organization, so please make sure to plan accordingly.

Setting Coexistence Mode (Per User)

As I just alluded to, you can set the coexistence mode on a per-user basis, which trumps whatever is set in the org-wide settings. You can do this within the Teams Admin Center by going to Users, locating the particular user, going to their Teams upgrade settings, and selecting Edit. A box will then pop up on the right-hand side of your screen with the Teams upgrade mode options you can select from. In this scenario we’ll select Teams Only mode, meaning this particular user will only use their Teams client except in the rare occasion where they need to join a Skype for Business meeting. By default, every user’s coexistence mode is set to Use Org-wide settings, which follows whatever coexistence mode is defined at the tenant level. As soon as you change this option to something else, it overrides that setting for the individual user.

[Screenshot: Editing a user’s Teams upgrade settings in the Teams Admin Center. Image provided by Microsoft]

Notifying the User (Per User)

We covered what this looks like at a tenant/organization-wide level, but we haven’t covered it on a per-user basis. Luckily for you, just like with setting your coexistence mode, you can also choose how you’d like to notify users on a per-user basis! This kind of flexibility over who gets migrated and communicated to is something many admins take for granted; in Skype for Business Online, certain policies could often only be controlled at an org-wide level. The screenshot below shows what this looks like from the client’s perspective once you enable the “Notify the Skype for Business user” option.

[Screenshot: Skype for Business client notification when the per-user notify option is enabled. Image provided by Microsoft]

PowerShell Reference

Now that we’ve covered what this will look like for admins in the Teams Admin Center, let’s cover what would be required if you opted to do this within PowerShell instead.

Mode | Microsoft Teams Admin Center | PowerShell (per user)
Islands | Available | Grant-CsTeamsUpgradePolicy -PolicyName Islands -Identity $SipAddress
Islands + Notify the Skype for Business User | Available | Grant-CsTeamsUpgradePolicy -PolicyName IslandsWithNotify -Identity $SipAddress
Skype for Business Only | Available (routing only currently) | Grant-CsTeamsUpgradePolicy -PolicyName SfBOnly -Identity $SipAddress
Skype for Business Only + Notify the Skype for Business User | Available (routing only currently) | Grant-CsTeamsUpgradePolicy -PolicyName SfBOnlyWithNotify -Identity $SipAddress
Teams Only | Available | Grant-CsTeamsUpgradePolicy -PolicyName UpgradeToTeams -Identity $SipAddress
Skype for Business with Teams Collaboration | Available | Grant-CsTeamsUpgradePolicy -PolicyName SfBWithTeamsCollab -Identity $SipAddress
Skype for Business with Teams Collaboration + Notify the Skype for Business User | Available | Grant-CsTeamsUpgradePolicy -PolicyName SfBWithTeamsCollabWithNotify -Identity $SipAddress
Skype for Business with Teams Collaboration and Meetings | Available | Grant-CsTeamsUpgradePolicy -PolicyName SfBWithTeamsCollabAndMeetings -Identity $SipAddress
Skype for Business with Teams Collaboration and Meetings + Notify the Skype for Business User | Available | Grant-CsTeamsUpgradePolicy -PolicyName SfBWithTeamsCollabAndMeetingsWithNotify -Identity $SipAddress

So why would you want to use PowerShell over the Teams Admin Center? Great question! If you plan on assigning some of these coexistence modes to a list or subset of users, PowerShell can be very beneficial, as it gives you the ability to assign these coexistence modes in bulk. As of right now (October 2019) there is no capability within the Teams Admin Center to assign these coexistence modes to a batch of users, so you’ll have to use PowerShell to accomplish this. As you may notice in the table, these cmdlets are targeted at a specific user; however, you can also adjust the script to assign these coexistence modes at a tenant level instead.

This concludes our blog series on upgrading to Microsoft Teams: IT Admin Edition. I hope as the IT admin you are feeling more comfortable within the Teams Admin Center and/or PowerShell and that you have gained some knowledge around upgrading to Teams! However, this only touches the tip of the iceberg so I really encourage you to check out some of the other training that Microsoft provides here. Also, I encourage you to check back soon, as it’s almost that time of the year we have all been waiting for…. Microsoft Ignite!! There are over 57 Microsoft Teams sessions planned for Ignite, so make sure you get the sessions added to your calendar ASAP so you don’t miss out!
