Is it really DeepSeek FTW? (Thu, 30 Jan 2025) https://blogs.perficient.com/2025/01/30/is-it-really-deepseek-ftw/

So, DeepSeek just dropped their latest AI models, and while it’s exciting, there are some cautions to consider. Because of the US export controls around advanced hardware, DeepSeek has been operating under a set of unique constraints that have forced them to get creative in their approach. This creativity seems to have yielded real progress in reducing the amount of hardware required for training high-end models in reasonable timeframes and for inferencing off those same models. If reality bears out the claims, this could be a sea change in the monetary and environmental costs of training and hosting LLMs.

In addition to the increased efficiency, DeepSeek’s R1 model is continuing to swell the innovation curve around reasoning models. Models that follow this emerging chain-of-thought paradigm, providing an explanation of their thinking first and then summarizing it into an answer, deliver a step change in response quality. Especially when paired with RAG and a library of tools or actions in an agentic framework, baking this emerging pattern into the models instead of including it in the prompt is a serious innovation. We’re going to see even more open-source model vendors follow OpenAI and DeepSeek in this.

Key Considerations

One of the key factors in considering the adoption of DeepSeek models will be data residency requirements for your business. For now, self-managed private hosting is the only option for maintaining full US, EU, or UK data residency with these new DeepSeek models (the most common needs for our clients). The same export restrictions limiting the hardware available to DeepSeek have also prevented OpenAI from offering their full services with comprehensive Chinese data residency. This makes DeepSeek a compelling offering for businesses needing an option within China. It’s yet to be seen if the hyperscalers or other providers will offer DeepSeek models on their platforms (before I managed to get this published, Microsoft made a move and is offering DeepSeek-R1 in Azure AI Foundry). The good news is that the models are highly efficient, and self-hosting is feasible and not overly expensive for inferencing with these models. The downside is managing provisioned capacity when workloads can be uneven, which is why pay-per-token models are often the most cost-efficient.

We expect these new models and the reduced prices associated with them to put serious downward pressure on per-token costs for other models hosted by the hyperscalers. We’ll be paying specific attention to Microsoft as they continue to diversify their offerings beyond OpenAI, especially with their decision to make DeepSeek-R1 available. We also expect to see US-based firms replicate DeepSeek’s successes, especially given that Hugging Face has already started work within their Open R1 project to take the research behind DeepSeek’s announcements and make it fully open source.

What to Do Now

This is a definite leap forward and progress in the direction of what we have long said is the destination—more and smaller models targeted at specific use cases. For now, when looking at our clients, we advise a healthy dose of “wait and see.” As has been the case for the last three years, this technology is evolving rapidly, and we expect there to be further developments in the near future from other vendors. Our perpetual reminder to our clients is that security and privacy always outweigh marginal cost savings in the long run.

The comprehensive FAQ from Stratechery is a great resource for more information.

Unlock the Future of Integration with IBM ACE (Wed, 15 Jan 2025) https://blogs.perficient.com/2025/01/15/unlock-the-future-of-integration-with-ibm-ace/

Have you ever wondered about integration in API development or how to become familiar with the concept?

In this blog, we will discuss one of the integration technologies that is easy and fun to learn: IBM ACE.

What is IBM ACE?

IBM ACE stands for IBM App Connect Enterprise. It is an integration platform that allows businesses to connect various applications, systems, and services, enabling smooth data flow and communication across diverse environments. IBM ACE supports the creation of Integrations using different patterns, helping organizations streamline their processes and improve overall efficiency in handling data and business workflows.

Through a collection of connectors to various data sources, including packaged applications, files, mobile devices, messaging systems, and databases, IBM ACE delivers the capabilities needed to design integration processes that support different integration requirements.

One advantage of adopting IBM ACE is that it allows current applications to be configured for Web Services without costly legacy application rewrites. By linking any application or service to numerous protocols, including SOAP, HTTP, and JMS, IBM ACE minimizes the point-to-point pressure on development resources.

Modern secure authentication technologies, including LDAP, X-AUTH, OAuth, and two-way SSL, are supported through MQ, HTTP, and SOAP nodes, including the ability to perform activities on behalf of masquerading or delegated users.

How to Get Started

Refer to Getting Started with IBM ACE: https://www.ibm.com/docs/en/app-connect/12.0?topic=enterprise-get-started-app-connect

For installation on Windows, follow the document link below. Change the IBM App Connect version to 12.0 and follow along: https://www.ibm.com/docs/en/app-connect/11.0.0?topic=software-installing-windows

IBM ACE Toolkit Interface


This is what the IBM ACE Toolkit interface looks like. It shows all the applications, APIs, and libraries you have created, and in the Palette you can find all the nodes and connectors needed for application development.

Learn more about nodes and connectors: https://www.ibm.com/docs/en/app-connect/12.0?topic=development-built-in-nodes

IBM ACE provides flexibility in creating Integration Servers and Integration Nodes, where you can deploy and test your developed code and applications; both can be created with the help of mqsi commands, as shown later in this post.

How to Create a New Application

  • To create a new application, click File -> New -> Application.
  • Give the application a name and click Finish.
  • To add a message flow, click New under the application, then Message Flow.
  • Give the message flow a name and click Finish.
  • Once your flow is created, double-click its name. The message flow will open, and you can implement the process.
  • Drag the required nodes and connectors onto the canvas for your development.

How to Create an Integration Node and Integration Server

  • Open the command console for your current installation.
  • To create an Integration server, run the following command in the command shell and specify the parameters for the integration server you want to create (the integration node, here IBNODE, must already exist): mqsicreateexecutiongroup IBNODE -e IServer_2
  • To create an Integration node, run the following command in the command shell and specify the parameters for the integration node you want to create.
    • For example, if you want to create an Integration node with queue manager ACEMQ, use the following command: mqsicreatebroker MYNODE -i wbrkuid -a wbrkpw -q ACEMQ
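Putting the commands above together, a minimal end-to-end sequence on Linux might look like the following. This is a sketch only: it unifies the post’s two example names under a single node name, assumes you have sourced the mqsiprofile script first, and uses the standard mqsistart and mqsilist commands to start the node and list its servers:

mqsicreatebroker MYNODE -i wbrkuid -a wbrkpw -q ACEMQ    # create the integration node
mqsistart MYNODE                                         # start the node
mqsicreateexecutiongroup MYNODE -e IServer_2             # create an integration server on the node
mqsilist MYNODE                                          # verify the new server is listed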

How to Deploy the Application

  • Right-click on the application, then click Deploy.
  • Select the Integration node and click Finish.

Advantages of IBM ACE

  • ACE offers powerful integration capabilities, allowing for smooth communication between different applications, systems, and data sources.
  • It supports a variety of message patterns and data formats, allowing it to handle a wide range of integration scenarios.
  • It meets industry standards, ensuring compatibility and interoperability with many technologies and protocols.
  • ACE has complete administration and monitoring features, allowing administrators to track integration processes’ performance and health.
  • The platform encourages the production of reusable integration components, which decreases development time and effort for comparable integration tasks.
  • ACE offers comprehensive security measures that secure data during transmission and storage while adhering to enterprise-level security standards.
  • ACE offers a user-friendly development environment and tools to design, test, and deploy integration solutions effectively.

Conclusion

In this introductory blog, we have explored IBM ACE and how to create a basic application to learn about this integration technology.

Here at Perficient, we develop complex, scalable, robust, and cost-effective solutions using IBM ACE. This empowers our clients to improve efficiency and reduce manual work, ensuring seamless communication and data flow across their organization.

Contact us today to explore more options for elevating your business.

Building GitLab CI/CD Pipelines with AWS Integration (Wed, 18 Dec 2024) https://blogs.perficient.com/2024/12/18/building-gitlab-ci-cd-pipelines-with-aws-integration/


GitLab CI/CD (Continuous Integration/Continuous Deployment) is a powerful, integrated toolset within GitLab that automates the software development lifecycle (SDLC). It simplifies the process of building, testing, and deploying code, enabling teams to deliver high-quality software faster and more efficiently.

Understanding GitLab CI/CD

Getting started with GitLab CI/CD is simple. Start by creating a GitLab account and setting up a project for your application if you don’t already have one. Then install and configure a GitLab Runner, the tool responsible for executing the tasks defined in your .gitlab-ci.yml file. The runner handles building, testing, and deploying your code, ensuring the pipeline works as intended. This setup streamlines your development process and helps automate workflows efficiently.

What is a GitLab Pipeline?

A pipeline automates the process of building, testing, and deploying applications. CI (Continuous Integration) means regularly merging code changes into a shared repository. CD (Continuous Deployment/Delivery) automates releasing the application to its target environment.

[Diagram: GitLab CI/CD pipeline flow]

Code: In this step, you push your local code changes to the remote repository and commit any updates or modifications.

CI Pipeline: Once your code changes are committed and merged, the build and test jobs defined in your pipeline run. After these jobs complete, the code is ready to be deployed to staging and production environments.

Important Terms in GitLab CI/CD

1. The .gitlab-ci.yml file

A .gitlab-ci.yml file in a GitLab repository is used to define the Continuous Integration/Continuous Deployment (CI/CD) pipeline configuration. This file contains instructions on building, testing, and deploying your project.

2. Gitlab-Runner

In GitLab CI/CD, a “runner” refers to the agent that executes the jobs defined in the .gitlab-ci.yml pipeline configuration. Runners can be either shared or specific to the project.

Here’s how runners work:

  1. Shared Runners: GitLab provides shared runners available to all projects within a GitLab instance. These runners are managed by GitLab administrators and can be used by any project. Shared runners are convenient if we don’t want to set up and manage our own runners.
  2. Specific Runners: We can also set up our own runners that are dedicated to our project. These runners can be deployed on our infrastructure (e.g., on-premises servers, cloud instances) or using a variety of methods like Docker, Kubernetes, shell, or Docker Machine. Specific runners offer more control over the execution environment and can be customized to meet the specific needs of our project.

3. Pipeline:

Pipelines are made up of jobs and stages:

  • Jobs define what you want to do. For example, test code changes, or deploy to a dev environment.
  • Jobs are grouped into stages. Each stage contains at least one job. Common stages include build, test, and deploy.
  • You can run the pipeline either manually or from a scheduled pipeline job.

The first option is commit-driven: when you commit or merge any changes into the code, the pipeline triggers directly.

The second uses rules with a schedule: for that, you need to create a scheduled job.

 


 

 4. Schedule Job:

We use scheduled jobs to automate pipeline execution. To create a scheduled job, follow these steps:

  1. Navigate to Schedule Settings: Go to Build, select Pipeline Schedules, and click Create New Schedule.
  2. Configure Schedule Details:
    1. Description: Enter a name for the scheduled job.
    2. Cron Timezone: Set the timezone according to your requirements.
    3. Interval Pattern: Define the cron schedule to determine when the pipeline should run. If you prefer to run it manually by clicking the play button when needed, uncheck the Activate button at the end.
    4. Target Branch: Specify the branch where the cron job will run.
  3. Add Variables: Include any variables mentioned in the rules section of your .gitlab-ci.yml file to ensure the pipeline runs correctly.
    1. Input variable key = SCHEDULE_TASK_NAME
    2. Input variable value = prft-deployment


Demo

Prerequisites for GitLab CI/CD 

  • GitLab Account and Project: You need an active GitLab account and a project repository to store your source code and set up CI/CD workflows.
  • Server Environment: You should have access to a server environment, such as an AWS EC2 instance, where you will install gitlab-runner.
  • Version Control: Using a version control system like Git is essential for managing your source code effectively. With Git and a GitLab repository, you can easily track changes, collaborate with your team, and revert to previous versions whenever necessary.

Configure Gitlab-Runner

  • Launch an AWS EC2 instance with any operating system of your choice. Here, I used Ubuntu. Configure the instance with basic settings according to your requirements.
  • SSH into the EC2 instance and follow the steps below to install GitLab Runner on Ubuntu.
  1. sudo apt install -y curl
  2. curl -L https://packages.gitlab.com/install/repositories/runner/gitlab-runner/script.deb.sh | sudo bash
  3. sudo apt install gitlab-runner

After installing GitLab Runner, proceed to register it. Navigate to GitLab, go to Settings, then CI/CD, and under Runners, click on the three dots to access the registration options.

Then copy the registration command shown there and run it on your instance; it invokes gitlab-runner register against your GitLab URL.

Run the following command on your EC2 instance and provide the necessary details for configuring the runner based on your requirements:

  1. URL: Press enter to keep it as the default.
  2. Token: Use the default token and press enter.
  3. Description: Add a brief description for the runner.
  4. Tags: This is critical; the tag names define your GitLab Runner and are referenced in your .gitlab-ci.yml file.
  5. Notes: Add any additional notes if required.
  6. Executor: Choose shell as the executor.
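For reference, the same registration can also be done non-interactively in a single command. This is a sketch only: the URL, token, description, and tag values below are placeholders to replace with your own, and newer GitLab releases use runner authentication tokens instead of registration tokens:

sudo gitlab-runner register --non-interactive \
  --url "https://gitlab.com/" \
  --registration-token "<YOUR_REGISTRATION_TOKEN>" \
  --description "ec2-ubuntu-runner" \
  --tag-list "prft-test-runner" \
  --executor "shell"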


Check the GitLab Runner connectivity and list the registered runners using the commands below:

  • gitlab-runner verify
  • gitlab-runner list


Confirm that the runner also shows as active in GitLab:

Navigate to GitLab, then go to Settings and select Runners.

 


Configure the .gitlab-ci.yml file

  • Stages: Define the sequence in which jobs are executed:
    • build
    • deploy
  • build-job: This job is executed in the build stage, which runs first.
    • Stage: build
    • Script:
      • echo "Compiling the code..."
      • echo "Compile complete."
    • Rules:
      • if: '$CI_PIPELINE_SOURCE == "schedule" && $SCHEDULE_TASK_NAME == "prft-deployment"'
    • Tags:
      • prft-test-runner
  • deploy-job: This job is executed in the deploy stage. It executes only after all jobs in the build stage (and the test stage, if you add one) have completed successfully.
    • Stage: deploy
    • Script:
      • echo "Deploying application..."
      • echo "Application successfully deployed."
    • Rules:
      • if: '$CI_PIPELINE_SOURCE == "schedule" && $SCHEDULE_TASK_NAME == "prft-deployment"'
    • Tags:
      • prft-test-runner
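Assembled into an actual .gitlab-ci.yml file, the configuration described above would look roughly like this (a sketch built from the jobs in this post):

stages:
  - build
  - deploy

build-job:
  stage: build
  tags:
    - prft-test-runner
  rules:
    - if: '$CI_PIPELINE_SOURCE == "schedule" && $SCHEDULE_TASK_NAME == "prft-deployment"'
  script:
    - echo "Compiling the code..."
    - echo "Compile complete."

deploy-job:
  stage: deploy
  tags:
    - prft-test-runner
  rules:
    - if: '$CI_PIPELINE_SOURCE == "schedule" && $SCHEDULE_TASK_NAME == "prft-deployment"'
  script:
    - echo "Deploying application..."
    - echo "Application successfully deployed."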

Note: If needed, you can add a test job similar to the BUILD and DEPLOY jobs.

Run Pipeline

Since the Cron job is already configured in the schedule, simply click the Play button to automatically trigger your pipeline.


To check pipeline status, go to Build and then Pipelines. Once the build job completes successfully, the deploy job will start (if you added a test job, it runs in between).


Output

We successfully completed BUILD & DEPLOY Jobs.

[Screenshots: overall pipeline result, build job log, and deploy job log]

Conclusion

As we can see, the BUILD & DEPLOY jobs pipeline has successfully passed.

We’ve provided a brief overview of GitLab CI/CD pipelines and a practical demonstration of how its components work together. Hopefully, everything is running smoothly on your end!

 

IBM OMS Multi-Hop Upgrade (Mon, 01 Jul 2024) https://blogs.perficient.com/2024/07/01/ibm-oms-multi-hop-upgrade/

The IBM OMS (Order Management System) upgrade process updates an existing OMS installation to a newer version. The upgrade can involve updating only the OMS application or also its dependent applications and software. The primary goal of an OMS upgrade is to improve the efficiency, scalability, and performance of order processing.

A multi-hop upgrade moves an existing OMS system or legacy IBM OMS application through multiple versions to reach a newer one. This type of upgrade is necessary when moving from a much older version to the latest version (for example, IBM OMS 9.1 to OMS 10.0). Multi-hop upgrades are very complex due to significant changes in the OMS software architecture, database schema, and other dependent software. They also mitigate risk by applying and validating the upgrade gradually, one hop at a time.

 Multi-hop upgrade steps:

 1. Impact Analysis and Assessment

Analysis and assessment play a very important role in any upgrade, and even more so in a multi-hop upgrade, as most of the dependent applications and software require upgrades to be compatible with the latest OMS versions. Careful verification of the IBM OMS software compatibility matrix is mandatory to plan and upgrade all the required software (for example: Linux OS, Java, database, application server, etc.).

Example:

[Figures: impact analysis levels and examples from the compatibility matrix]

2. POC / Environment setup

Preparing upgrade steps and scripts and executing them in a POC environment is important to reduce risk and ensure a smooth upgrade process for the higher environments.

POC environment steps for a multi-hop upgrade:

    1. Set up a new Linux box/environment identical to the existing DEV/QA box.
    2. Modify sandbox.cfg and jdbc.properties to point to the POC database schema.
    3. Build and deploy the new .ear file.
    4. Bring up the existing OMS application, agent, and integration servers on the POC environment.
    5. Run a high-level validation and make sure the current OMS application is up and running on the POC box.
    6. Download and copy the OMS software and fix packs to the POC environment.
    7. Install the intermediate OMS version (for example, OMS 9.5) and execute the OMS upgrade steps.
    8. Build and deploy a new OMS ear.
    9. Run high-level validations and make sure the OMS upgrade is complete and the application is up and running on the newer version.
    10. Create a snapshot of the OMS Linux box.
    11. Take an OMS DB backup / create DB restore points (see the sketch after this list).
    12. Install the latest OMS version (for example, OMS 10.0) and execute the OMS upgrade steps.
    13. Run high-level validations and make sure the OMS 10.0 upgrade is complete and the application is up and running on the newer version.
    14. Bring up all the agent and integration servers and validate order flow.
    15. Monitor the transactional data flow, exceptions, alerts, etc.
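The backup and restore-point commands in step 11 depend on your database platform. As an illustration only (the database name OMDB, the backup path, and the restore point name are placeholder assumptions, not values from this upgrade):

# Db2: take a backup before the next hop
db2 backup db OMDB to /backup/oms

-- Oracle: create a guaranteed restore point as a rollback anchor (run in SQL*Plus)
CREATE RESTORE POINT before_oms10_upgrade GUARANTEE FLASHBACK DATABASE;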

3. Executing upgrades in multi-hop mode in all the required environments

After the successful completion of the multi-hop upgrade in the POC environment, follow the same steps to upgrade the higher environments (DEV, QA, Master Config, Pre-Production).

 4. Go-Live preparation, Production downtime and upgrade

Multi-hop upgrades typically take much longer than regular updates, so it is very important to plan for production downtime and set expectations with the business. During the go-live window, executing the upgrade steps and verifying the log files and output of each step is crucial to avoid issues or the risk of reverting everything.

5. Rollback strategy

Planning rollback options and rehearsing them in one or more lower environments is important, as a multi-hop upgrade is very complex and the entire upgrade may need to be reverted due to an issue or time constraints within the production downtime window.

6. Post-Production validation and support

Validating all the critical interfaces and functionalities is very important, as upgrades can contain significant changes in the OMS architecture, database schema, user interfaces, and functionality. Identifying all the critical scenarios to be covered helps plan the production go-live and rollback strategies.

 

 

IBM Sterling OMS Order Hub Installation (On Premises) (Thu, 13 Jun 2024) https://blogs.perficient.com/2024/06/13/ibm-sterling-oms-order-hub-installation-on-premises/

Introducing Order Hub: Complete Fulfillment Network Management Solution

Order Hub, part of the IBM® Sterling Order Management System, is the ultimate tool for fulfillment and order management professionals. With its intuitive interface, contextual data, and key performance metrics, Order Hub empowers users to seamlessly translate business goals into actionable steps within their fulfillment network.

Monitoring Network with Ease

Order Hub allows users to effortlessly view various metrics and monitor nodes, orders, and shipments across the network. Stay on top of performance with customizable alert rules that help identify SLA and progress risks, all conveniently displayed on the workspace.

Take Control of Operations

Gain deep insights into nodes and orders with Order Hub’s extensive details. From changing node capacity to reassigning pending order releases, users have the power to optimize operations and maximize efficiency. Manage inventory effortlessly, from viewing item and SKU details to performing actions like moving inventory across nodes, adjusting safe stock, and setting fulfillment options.

Experience Seamless Management

With Order Hub, managing the fulfillment network has never been easier. Stay ahead of the curve and streamline the operations with this powerful interface designed to meet the needs of today’s dynamic business environment.

Unlock the full potential of fulfillment network with Order Hub – the comprehensive solution for modern order and fulfillment management.

Prerequisites:

  • Upgrade to IBM® Sterling Order Management System Software version 10.0.2209.1 or later: the latest version of the OMS software is required to access Order Hub, which has been available for on-premises installations since September 2022.
  • Set up Nginx web server: Install Nginx on any server to serve the Order Hub UI content, allowing users to make REST API calls to the application server. Nginx’s efficient asset serving and caching capabilities enhance performance, while its deployment flexibility ensures seamless integration with the existing infrastructure.

 

A Step-by-Step Guide for Installation

  1. Install nginx web server.
  2. Install Order Hub. The Order Hub archive is located at <INSTALL_DIR>/repository/orderhub, where <INSTALL_DIR> is the Sterling OMS installation home directory.

  • Extract the orderhub archive by running the following command:

tar xf orderhub.tar

  • Grant the orderhub setup script the execute (x) permission by running the following command:

chmod +x orderhub-setup.sh

  • Make a copy of the oh-setup.properties.sample file as oh-setup.properties by running the following command:

cp oh-setup.properties.sample oh-setup.properties

  • Update the oh-setup.properties file.
    • Uncomment the HTML_DIRECTORY and CONFIG_DIRECTORY properties that are applicable to your operating system. If necessary, update them to point to the installed web server’s HTML and configuration directories.
    • Update the OMS_APPSERVER_HOST property to point to the OMS environment.


  • Run the Order Hub setup script:

./orderhub-setup.sh

  • Go to /etc/nginx/conf.d/default.conf and add server details:


For HTTPS, add the port number, server_name, and the certificate and certificate key for your application.
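As an illustration of what those server details might look like, here is a minimal HTTPS server block. This is a sketch only: the host name, certificate paths, and the assumption that the Order Hub UI assets sit under the default nginx HTML directory are placeholders, so consult the IBM documentation for the exact directives your setup requires:

server {
    listen 443 ssl;
    server_name orderhub.example.com;

    ssl_certificate     /etc/nginx/certs/orderhub.crt;
    ssl_certificate_key /etc/nginx/certs/orderhub.key;

    # Serve the Order Hub UI content copied in by the setup script (assumed path)
    location /order-management {
        root /usr/share/nginx/html;
    }
}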

  • Add the below properties to <INSTALL_DIR>/properties/customer_overrides.properties:

# Order Hub UI
xapirest.servlet.cors.enabled=true
xapirest.servlet.cors.allow.credentials=true
xapirest.servlet.jwt.auth.enabled=true
yfs.yfs.jwt.oms.verify.keyloader=jkstruststore
yfs.api.security.token.enabled=Y

 

  • Configure JWT authentication:
    • Locate or create a keystore. To create a keystore, run the following command. For example:

keytool -genkey -keyalg RSA -keysize 2048 -keystore jwtkeystore.jks -validity 365 -storetype JKS -alias oms-default-jwt -storepass secret4ever -keypass secret4ever -dname "CN=oms, OU=oms, O=oms, L=oms, S=oms, C=US"

Where:

-keystore provides the keystore name, for example, jwtkeystore.jks.

-alias describes the alias name that is configured as part of the JWT properties of Sterling Order Management System Software.

-storepass and -keypass provide the passwords for the keystore and the key.
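As an optional check before wiring the keystore into the JVM properties, you can confirm the key pair exists (using the example names from the command above):

keytool -list -v -keystore jwtkeystore.jks -storepass secret4ever -alias oms-default-jwt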

  • Add JVM system startup properties. For example:

-Dycp.jwt.auth.keyStore=/var/oms/keystore/jwtkeystore.jks
-Dycp.jwt.auth.keyStorePassword=secret4ever
-Dycp.jwt.auth.trustStore=/var/oms/keystore/jwtkeystore.jks
-Dycp.jwt.auth.trustStorePassword=secret4ever

  • Set the following property in the customer_overrides.properties file:

yfs.api.security.token.enabled=Y

  • Start or restart the web server.

Access Order Hub from the applicable URL:

http://<hostname>:<port>/order-management

https://<hostname>:<port>/order-management

 

Where hostname is the host name where the web server is running, and port is the port number that is configured in the web server configuration.

Unlocking Specialized AI: IBM’s InstructLab and the Future of Fine-Tuned Models — IBM Think 2024 (Tue, 28 May 2024) https://blogs.perficient.com/2024/05/28/unlocking-specialized-ai-ibms-instructlab-and-the-future-of-fine-tuned-models-ibm-think-2024/

I’ve been reflecting on my experience last week at IBM Think. As ever, it feels good to get back to my roots and see familiar faces and platforms. What struck me, though, was the unfamiliar. Seeing AWS, Microsoft, Salesforce, Adobe, SAP, and Oracle all manning booths at IBM’s big show was jarring, as it’s almost unheard of. It’s a testament to my current rallying cry: prioritize making a diversity of platforms work better together by letting data flow in all directions with minimal effort. I see many partners focusing on this by supporting a diversity of data integration patterns, such as zero-copy and zero-ETL (a recurring theme, thank you Salesforce). In this environment of radical collaboration, I think something really compelling might’ve gotten lost… a little open source project they launched called InstructLab.

IBM spent a lot of time talking about how now is the time to SCALE your investments in AI, how it’s time to get out of the lab and into production. At the same time, there was a focus on fit for purpose AI, using the smallest, leanest model possible to achieve the goal you set.

Think Big. Start Small. Move Fast.

I always come back to one of our favorite mantras, Think Big. Start Small. Move Fast. What that means here is that we have this opportunity to thread the needle. It’s not about going from the lab to the enterprise-wide rollouts in one move. It’s about identifying the right, most valuable use cases and building tailored, highly effective solutions for them. You get lots of fast little wins that way, instead of hoping for general 10% productivity gains across the board, you’re getting 70+% productivity gain on specific measurable tasks.

This is where we get back to InstructLab, a model-agnostic open source AI project created to enhance LLMs. We’ve seen over and over that general-purpose LLMs perform well for general-purpose tasks, but when you ask them to do something specialized, you get intern-in-their-first-week results. The idea of InstructLab is to track a taxonomy of knowledge and task domains, choose a foundation model that’s trained on the most relevant branches of the taxonomy, then add additional domain-specific tuning with a machine-amplified training data set. This opens the door to effective fine-tuning. We’ve been advising against fine-tuning because most enterprises just don’t have enough data to move the needle and make the necessary infrastructure spend for the model retraining worth it. With the InstructLab approach, we can, as we so often do in AI, borrow an idea from biology: amplification. We use an adversarial approach to amplify a not-big-enough training set by adding additional synthetic entries that follow the patterns in the sample.

The cool thing here is that, because IBM chose the Apache 2 license for everything they’ve open sourced, including Granite, it’s now possible to use InstructLab to train new models with Granite models as foundations, and then decide whether to keep the result private or open source it and share it with the world. This could be the start of a new ecosystem of trustable open-source models trained for very specific tasks that meet the demands of our favorite mantra.

Move Faster Today

Whether your business is just starting its AI journey or seeking to enhance its current efforts, partnering with the right service provider makes all the difference. With a team of over 300 AI professionals, Perficient has extensive knowledge and skills across various AI domains. Learn more about how Perficient can help your organization harness the power of emerging technologies. Contact us today.

Promising Facts about IBM Sterling Intelligent Promising (Wed, 15 May 2024) https://blogs.perficient.com/2024/05/15/promising-facts-about-ibm-sterling-intelligent-promising/

In today’s world, every retailer’s biggest challenge is to ensure shopper loyalty, and retailers are constantly dealing with this. Retailers need an intelligent and efficient supply chain to deliver the product. Retailers who operate order fulfillment without synced-up end-to-end order promising risk losing shoppers, increasing costs, and falling behind competitors who can meet customer demands more efficiently.

Order promising ensures that promises are kept, that everything in the cart is actually available, and that customer experiences improve; chaos is transformed into orderliness. IBM Sterling Intelligent Promising combines inventory and capacity visibility with sophisticated fulfillment decisioning to help retailers maximize inventory productivity, make reliable and accurate order promises, and optimize fulfillment decisions at scale.

There are plenty of benefits. They include:

  1. Improved Customer Satisfaction: Accurate delivery estimates and reliable order fulfilment enhance customer trust and satisfaction. This leads to higher conversion rates and fewer cancellations.
  2. Efficient Resource Utilization: Optimized inventory, production, and logistics (such as consolidated shipping) reduce waste and operational costs.
  3. Reduced Delays: Coordination between order promising and order management minimizes delays and ensures timely deliveries.
  4. Enhanced Brand Reputation: Consistently meeting delivery commitments strengthens the brand’s reputation for reliability.
  5. Lower Operating Costs: Better inventory control and resource allocation lead to cost savings.
  6. Streamlined Supply Chain: A well-coordinated system ensures smoother and more efficient supply chain operation.

How This “Promise” can be Achieved

Adoption of cutting-edge technology enables retailers to ensure the most accurate ‘Promise’ to their Shoppers! IBM’s Sterling Intelligent Promising (SIP) solution offers greater certainty, choice and transparency across shoppers’ buying journey. It is designed to revolutionize order promising and fulfilment in the ever-evolving world of commerce.

IBM Sterling Intelligent Promising

It’s a SaaS platform that has the following three services.

  1. Inventory Visibility
  2. Promising
  3. Fulfillment Optimizer

All three service modules are independent, but they share a single common platform, SIP.

  1. Maximize inventory productivity: Use real-time inventory visibility to confidently expose inventory and maximize conversions, gaining granular control over inventory actions, such as safety stock setting based on configurable business rules. Improve inventory turns by applying additional context like channel, fulfillment type and labor availability when making available-to promise decisions.
  2. Make and manage order promises: Improve conversion rates by confidently delivering order and delivery promises across every step of the shopping journey, including the product list page, product detail page, cart, and checkout. Automate the review of inventory, capacity, and costs to make informed promises, and harness powerful AI during fulfillment to simplify complex scenarios like orders with third-party services and support a wide range of fulfillment options.
  3. Optimize omnichannel profitability: Set operating performance objectives and KPIs using real cost drivers (like distance, labor, capacity, and carrier costs) and profit drivers (markdown, stockout), so you can confidently make the best fulfillment decisions for your business objectives. By optimizing across thousands of fulfillment permutations in milliseconds, retailers can ensure balance between profitability and the best customer experience.

SIP is the future of OMS: it ensures that customers receive their orders on time, building trust and loyalty. SIP can employ AI and predictive analytics to anticipate demand, optimize inventory, and offer customer-centric promises. In an increasingly complex supply chain environment, it collaborates with suppliers for synchronized commitments, helping businesses stay agile and responsive to market shifts. IBM Sterling Intelligent Promising is not just a solution for today but a strategic asset for the future.

The Crucial Steps to Success: Preparing for an Order Management Project (Thu, 28 Mar 2024) https://blogs.perficient.com/2024/03/28/the-crucial-steps-to-success-preparing-for-an-order-management-project/

Embarking on an order management project is a significant undertaking for any organization. It involves not only implementing new systems but also reshaping processes and workflows. The success of such projects hinges on meticulous preparation, particularly in terms of collecting and categorizing requirements and effectively managing the associated change. In this article, we will delve into the importance of these preparatory steps and how they contribute to the overall success of an order management project.

Collecting and Categorizing Requirements:

1. Understanding Business Objectives:
Before diving into the technicalities of an order management project, it’s crucial to understand the overarching business objectives. What are the key drivers for implementing a new order management system? Whether it’s improving efficiency, reducing errors, or enhancing customer satisfaction, a clear understanding of these goals will guide the entire project.

2. Stakeholder Collaboration:
The success of an order management project relies heavily on the involvement and collaboration of various stakeholders. Engage with representatives from different departments – sales, finance, logistics, and customer service – to gather a comprehensive set of requirements. Each stakeholder brings unique insights into their department’s needs and challenges, ensuring a holistic approach to system design.

3. Documentation and Analysis:
Systematic documentation of requirements is essential. This involves not only listing the functional requirements but also considering non-functional aspects such as performance, scalability, and security. Thorough analysis of these requirements helps in identifying potential conflicts or dependencies early in the planning stage, preventing issues during implementation.

4. Prioritization and Scope Definition:
Not all requirements are of equal importance, and attempting to implement every feature at once can lead to project delays and budget overruns. Prioritize requirements based on their impact on business goals and create a clear scope for the initial phase. This phased approach allows for a more focused implementation, reducing the risk of project failure.

5. Flexibility and Adaptability: 
Requirements are not static; they can evolve as the project progresses or as external factors change. Build flexibility into the project plan to accommodate changes in requirements. Regularly revisit and reassess requirements throughout the project lifecycle to ensure alignment with evolving business needs.

Getting the Organization Prepared for Change Management:

  1. Communicating the Vision:
    Change is often met with resistance, and employees may be apprehensive about adapting to new systems and processes. Communicating a clear and compelling vision for the order management project is essential. Help employees understand not just the technical aspects but also how the changes align with the organization’s goals and how they will benefit from the improvements.

  2. Inclusive Training Programs:
    Adequate training is key to a smooth transition. Develop comprehensive training programs that cater to employees at all levels. This includes end-users who will interact directly with the new system and administrators who will be responsible for its maintenance. Training should be ongoing, with refresher courses available as needed.

  3. Change Champions:
    Identify and empower change champions within the organization. These individuals, often departmental leaders or influencers, can play a crucial role in promoting the benefits of the order management project and encouraging their teams to embrace the changes. Their support can significantly mitigate resistance.

  4. Addressing Concerns Proactively:
    Change often brings about uncertainties and concerns. Proactively address these by establishing channels for open communication. Encourage employees to voice their concerns, and provide transparent and timely information to address any misconceptions. A proactive approach helps in building trust and reducing resistance.

  5. Monitoring and Evaluation:
    Change management is an ongoing process that extends beyond the initial implementation phase. Implement monitoring mechanisms to assess how well the organization is adapting to the changes. Collect feedback from users, identify pain points, and address them promptly. Continuous evaluation allows for adjustments to be made, ensuring the long-term success of the order management project.

Conclusion:

In conclusion, the success of an order management project hinges on meticulous preparation in terms of collecting and categorizing requirements and effectively managing change within the organization. By understanding business objectives, collaborating with stakeholders, and prioritizing requirements, an organization sets the foundation for a successful implementation. Simultaneously, fostering a positive and adaptive organizational culture through clear communication, inclusive training, and proactive change management strategies ensures that the transition is embraced rather than resisted. Together, these elements create a framework that not only leads to a successful order management project but also sets the stage for continued growth and adaptation in the ever-evolving business landscape.

Deep Dive into IBM Sterling Certified Containers and Cloud Solutions (Fri, 19 Jan 2024) https://blogs.perficient.com/2024/01/19/deep-dive-into-ibm-sterling-certified-containers-and-cloud-solutions/

Many retailers are embarking on a digital transformation to modernize and scale their order management system (OMS) solution. Built on a modern architecture, the solution wraps Docker containers around order management business services. This architecture streamlines application management and the release of new functionality. The container technology also supports varying levels of technical acumen, business continuity, security, and compliance. If you want to reduce capital and operational expenditures, speed time to market, and improve scalability, elasticity, security, and compliance, you should consider moving your on-premises IBM Sterling application to IBM supported native SaaS or other cloud solutions which best suits your business.

Tailored Hybrid Cloud Solutions from IBM

IBM offers retailers three distinct hybrid cloud solutions tailored to their specific needs. The first option involves a do-it-yourself (DIY) approach with containers on any platform. While offering flexibility, it comes with potential downsides such as slower time to market, increased operational costs, and higher risk due to the intricacies of self-managing containerized environments. The second option introduces a more robust solution with IBM Certified Containers deployed using Kubernetes, striking a balance between customization and manageability. Option three, the most advanced choice, employs IBM Certified Containers deployed through the Red Hat OpenShift Containers Platform. This enterprise-grade solution prioritizes faster time to market, reduced operational costs, and lower risk, providing a secure and comprehensive hybrid cloud environment for organizations seeking efficiency and reliability in their IT transformation endeavors.

*K8s refers to Kubernetes. RHOCP refers to Red Hat OpenShift Container Platform.

IBM Sterling Certified Container Overview

IBM Sterling Order Management certified containers are distributed in the form of three images—om-base, om-app, and om-agent—via the IBM Entitled Registry. This distribution utilizes licensed API keys, streamlining the process for customers to conveniently retrieve and access these containers in their local registries or incorporate them seamlessly into their CI/CD pipelines.

  • om-base: Serving as the foundational image, om-base is provisioned on the IBM Cloud Container Registry (Image Registry). It is equipped for the addition of product extensions and customizations, allowing customers to create a customized runtime tailored to their specific needs.
  • om-app: This image is the Order Management application server, designed to manage synchronous traffic patterns. It incorporates the IBM WebSphere Liberty application server. The different om-app images built from the customized runtime can each be deployed with a dedicated route or ingress to expose their applications. Routes are created only when using a Red Hat OpenShift Container Platform cluster; for any other Kubernetes cluster, an ingress is created.
  • om-agent: This container serves as the Order Management workflow agent and integration server, specifically tailored to handle asynchronous traffic patterns.
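To make the distribution flow described above concrete, pulling the images into a local registry might look like the following. This is a sketch: the repository path under cp.icr.io varies by edition and version, so treat the image paths and tag as illustrative assumptions, and the entitlement key comes from your IBM container software library account:

# Log in to the IBM Entitled Registry with the entitled API key
docker login cp.icr.io -u cp -p <entitlement-key>

# Pull the three Order Management images (paths and tag are illustrative)
docker pull cp.icr.io/cp/ibm-oms-enterprise/om-base:<tag>
docker pull cp.icr.io/cp/ibm-oms-enterprise/om-app:<tag>
docker pull cp.icr.io/cp/ibm-oms-enterprise/om-agent:<tag>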
Basic Architecture of Sterling OMS – Post Deployment on K8s or RHOCP

[Architecture diagram: OMS on Red Hat OpenShift. Image courtesy: IBM]

Key Benefits of IBM Sterling Certified Containers

  • Flexibility: Multi cloud & platform validated to run applications anywhere seamlessly.
  • Speed: Faster start-up times and new instance creation with easy install and configurations.
  • Efficient Scaling and Deployment Management: Auto-scaling with standardized deployment across all environments, optimized infrastructure through capacity scaling and reduced compute resources, better logging and monitoring, and support for continuous integration and delivery.
  • Security: Safeguard brand reputation with top-tier security standards.
  • Seamless Upgrades with Zero Downtime: Simplify application deployment and maintenance with zero down-time upgrades.

Cloud Solutions for IBM Sterling Order Management

IBM offers its native Software as a Service (SaaS), commonly known as IBM Cloud or CoC, taking on the responsibility for hosting, managing, maintaining, and monitoring the entire Order Management (OM) ecosystem. This allows customers to direct their focus toward achieving their business requirements and enhancing business services. IBM’s ownership and management of the DevOps process facilitate automatic upgrades of the OMS application with new features, alongside activities such as backup, database reorganization, and upgrades/patches for WebSphere Application Server (WAS) Liberty, MQ, DB2, and Red Hat Enterprise Linux (RHEL). The proactive monitoring of system performance, coupled with the establishment of automatic alerts and remediation procedures for instances of high CPU/memory usage, ensures a seamless experience for customers. Convenient access to detailed audits/graphs of system performance is provided through a self-serve tool, complemented by log monitoring via Graylog.

In contrast, three other well-regarded cloud solutions compatible with IBM Sterling Certified containers—Amazon AWS, Microsoft Azure, and Oracle Cloud Infrastructure (OCI)—present unique advantages. However, customers opting for these alternatives bear the responsibility of implementing measures to manage, maintain, and monitor the entire Order Management (OM) ecosystem. This encompasses tasks such as database backups, infrastructure upgrades, and system performance monitoring. Additionally, customers must seamlessly integrate with logging tools of their choice when opting for these alternatives.

Conclusion: A Path to Modernization and Efficiency

In conclusion, the shift towards a modernized and scalable Order Management System (OMS) is becoming imperative for retailers undergoing digital transformation. The adoption of IBM Sterling Certified Containers and Software as a Service (SaaS) solutions presents a strategic pathway to enhance flexibility, speed, efficiency, and security in managing the OMS ecosystem. IBM’s hybrid cloud offerings provide retailers with tailored choices, allowing them to align their preferences with the desired level of customization, manageability, and risk. The option to leverage IBM’s native SaaS or explore alternate cloud solutions like Amazon AWS, Microsoft Azure or Oracle Cloud underscores the adaptability of IBM Sterling solutions to diverse business needs. As retailers navigate the complexities of modernizing their OMS, the comprehensive support provided by IBM’s SaaS offerings stands out, ensuring a secure, efficient, and future-ready infrastructure for their digital endeavors.

Key Links:

Installing IBM Sterling Order Management System Software using Certified Container – IBM Documentation

A Step-by-Step Guide for Deploying IBM Sterling Order Management on AWS | AWS for Industries (amazon.com)

Deploy Sterling Order Management on Azure Red Hat OpenShift – IBM Developer

Deploy IBM Sterling Order Management Software in a Virtual Machine on Oracle Cloud Infrastructure

Where is the OMS Market Going? (Tue, 17 Oct 2023) https://blogs.perficient.com/2023/10/17/where-is-the-oms-market-going/

Businesses are increasingly looking to innovative solutions that streamline their operations. One critical aspect of business operations is order management, which plays a pivotal role in ensuring customer satisfaction and efficient supply chain management. The future state of commercial order management systems promises to revolutionize this essential function, solving current problems while expanding capabilities to meet the demands of tomorrow’s businesses. With new entrants and expanding use-cases in non-traditional industries, will the definition of OMS get clearer or the definition ‘fuzzier’?

Current Challenges in Order Management

Before delving into the future, it is crucial to understand the problems that order management systems currently aim to address:

  • Orchestration: Many businesses struggle with data scattered across multiple platforms, making it challenging to maintain a centralized and up-to-date view of orders, inventory, and customer information. This fragmentation often leads to errors and delays. Just as a PIM orchestrates product-information, the OMS orchestrates the flow of orders (data) across the entire systems landscape.
  • Inventory Optimization: Optimizing inventory availability across channels is a complex task. Over-exposing inventory leads to poor customer experiences, but under-exposing it leads to lost sales. The controls are increasingly found in more mature OMS solutions, but the challenge currently is that those controls are still heuristic (rules-based).
  • Scalability: Traditional systems may struggle to handle the growing volumes of orders, especially for businesses experiencing rapid expansion. Scalability is a vital concern to ensure smooth operations during periods of high demand. Estimated delivery-dates, for example, are increasingly happening at the PLP… can current systems keep up with that scale?
  • Integration: Seamless integration between the systems involved in capturing, fulfilling, servicing and returning orders is often a challenge, resulting in inefficiencies and data discrepancies.

The Future State: Solving Today’s Problems

I’ve been fortunate enough to see the evolution of order management over the last 15 years in the space, 40 projects in total.
From the first screens in 2008 at Manhattan Associates to my time today across a handful of industry solutions (and for a year within a retailer) here are my predictions for where the market is heading.

AI-Driven Inventory Management:

Artificial intelligence and machine learning algorithms will power predictive inventory management, optimizing stock levels in real-time based on historical data, current demand, and market trends. This will reduce costs and minimize stockouts. Onera, a ToolsGroup company was working on this but got bought… I believe COTS OMS providers may either built this or buy it in the coming years. Keep an eye on Retalon.

Scalability and Cloud Solutions:

Cloud-based order management systems have now become the norm, offering unparalleled scalability to accommodate growing order volumes and business expansion. This flexibility ensures businesses can adapt to changing market conditions. Yet there are still many commerce organizations deployed on premises. Moreover, most OM providers still ship only quarterly major releases, while only a few have truly continuous deployments for fixes and new functionality. Salesforce is and has been out in front on CI/CD, but Manhattan Associates and Körber Supply Chain both do a great job on major/minor releases.

Enhanced Customer Experience:

Future systems will focus on enhancing the customer experience through automation and personalization. AI-powered chatbots and virtual assistants will provide real-time order updates and answer customer queries, ensuring a seamless experience from order placement to delivery. Most chatbots I see still do not fully integrate into the OMS, and specifically they don’t do so proactively or with an understanding of the likely reason a customer needs support. While not an OMS, Zendesk has been pushing forward heavily here. I could see more OM providers offering turnkey connections to Zendesk and other customer-facing ticketing systems.

Integration and Interoperability:

The future will witness the widespread adoption of open APIs and integration platforms, making it easier for businesses to connect their order management systems with other crucial software applications. This integration will streamline operations and eliminate data silos. I see the “ERP vs. OMS” discussion, relative to order-flow, heating up in the coming years. ERPs will always be around to support back-office functions, but the middle-of-house functions for inventory availability, orchestration and service in the front-of-house will still need a focused solution. You’ll see more players in the MACH Alliance grow in the coming years, and I’m banking on Fluent Commerce as one of the only OMS providers solely focused on the OM space.

Expanding Capabilities for the Future

As order management systems evolve, their capabilities will expand to address emerging business needs:

  • Supply Chain Visibility: These systems will offer end-to-end supply chain visibility, enabling businesses to track the movement of goods from suppliers to customers in real-time. This will enhance transparency and reduce the risk of supply chain disruptions. Blue Yonder has been making traction here since acquiring Yantriks solution, keep an eye out.
  • Customization and Personalization: Order management systems will increasingly leverage customer data to provide personalized product recommendations and promotions, further enhancing the customer experience and driving sales. I’m interested to see how B2B use-cases evolve here in the future!
  • Accelerated, Turnkey Integrations: At a conference in May in Vegas, I stood up and talked for a few minutes on how easy it will be, using ChatGPT-like functions, to almost instantaneously create ‘connectors’ for all the major supporting commerce functions moving forward. No more months of waiting on tax/fraud/payment/email connections.
  • Environmental Sustainability: Sustainability will become a key focus, with order management systems helping businesses reduce their carbon footprint by optimizing shipping routes, reducing packaging waste, and minimizing returns. IBM showcased a lot of this at their TechXchange conference last month and currently organizes their OMS within a Sustainability suite.

The future state of order management systems is poised to revolutionize the way businesses handle their operations. “e-Commerce” is dead… “Commerce” is where we’re evolving, and OMS will be front and center in that transformation. I’m very bullish on this space, specifically the point of order-capture and how the post-purchase experience shapes and retains customer trust.

Red Hat Ansible Accelerator (Fri, 28 Jul 2023) https://blogs.perficient.com/2023/07/28/red-hat-ansible-accelerator/

Automate with Ansible

Most server and infrastructure management tasks have been automated for some time, but network changes can still create a bottleneck. Red Hat Ansible enables you to automate many IT tasks including cloud provisioning, configuration management, application deployment, and intra-service orchestration. With Ansible you can configure systems, deploy software, and coordinate more advanced IT tasks such as continuous integration/continuous deployment (CI/CD) or zero-downtime rolling updates.

Our Ansible Accelerator provides an overview of what Ansible can do to help modernize and streamline your DevOps and IT operations. The accelerator is available at three levels of engagement: a workshop, technical enablement, or full-team consulting. In 6-12 weeks, we architect a proof of concept that delivers a more secure, compliant, reliable, and automated solution for you and your business.

What’s Included

  • An Ansible pilot with demo playbooks for common use cases
  • Ansible Engine core workshop
  • Playbook authoring
  • Security encryption
  • Writing custom roles
  • Tips for how to leverage cloud providers and dynamic inventories (see the sketch after this list)
  • Using Ansible Engine as part of a CI/CD pipeline with other tools
  • Ansible Tower accelerator
  • Documentation including best practices and sample style guide to assist developers in adhering to corporate standards
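
Since dynamic inventories come up in the list above, here is a minimal sketch of the contract: Ansible treats any executable that answers --list (and optionally --host <name>) with JSON on stdout as an inventory source. The groups and hosts below are made-up placeholders, and most teams would reach for Ansible's maintained cloud inventory plugins rather than a hand-rolled TypeScript script; this just shows how simple the protocol is.

```typescript
#!/usr/bin/env ts-node
// Minimal Ansible dynamic inventory sketch. Ansible calls the script
// with --list and expects JSON groups on stdout. Hosts here are
// hard-coded placeholders where a real script would query a cloud API.

interface Group {
  hosts: string[];
  vars?: Record<string, unknown>;
}
interface Meta {
  hostvars: Record<string, Record<string, unknown>>;
}
type Inventory = Record<string, Group | Meta>;

function buildInventory(): Inventory {
  return {
    webservers: {
      hosts: ["web-01.example.com", "web-02.example.com"],
      vars: { http_port: 8080 },
    },
    dbservers: { hosts: ["db-01.example.com"] },
    // _meta lets Ansible skip calling --host per machine.
    _meta: {
      hostvars: {
        "web-01.example.com": { ansible_user: "deploy" },
      },
    },
  };
}

const arg = process.argv[2];
if (arg === "--list") {
  console.log(JSON.stringify(buildInventory()));
} else if (arg === "--host") {
  // Per-host vars are already supplied via _meta above.
  console.log(JSON.stringify({}));
} else {
  console.error("usage: inventory.ts --list | --host <hostname>");
  process.exit(1);
}

// Usage (assuming the file is executable and ts-node is installed):
//   ansible-playbook -i ./inventory.ts site.yml
```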

Use Cases

  • Enable automated deployment across devices in a hybrid model (cloud and on premises)
  • Network automation in a hybrid model
  • Automating Windows
  • Application deployment for Windows/Linux
  • Solving classic infrastructure-as-code challenges
  • Support security scanning for code and infrastructure to enable compliance and remediation (DevSecOps)
  • Support for automation of all cloud platforms including Azure and AWS

Ready to Accelerate?

Perficient + Red Hat

Red Hat provides open-source technologies that enable strategic cloud-native development, DevOps, and enterprise integration solutions to make it easier for enterprises to work across platforms and environments. As a Red Hat Premier Partner and a Red Hat Apex Partner, we help drive strategic initiatives around cloud-native development, DevOps, and enterprise integration to ensure successful application modernization and cloud implementations and migrations.

Contact Us

IBM Sterling Next Generation Store Engagement: Revolutionizing Retail Experiences (Fri, 23 Jun 2023) https://blogs.perficient.com/2023/06/22/ibm-sterling-next-generation-store-engagement-revolutionizing-retail-experiences/

In today’s highly competitive retail landscape, providing exceptional customer experiences is paramount to success. Customers now demand seamless interactions across multiple channels, personalized services, and real-time access to product information. To address these evolving expectations, IBM has developed the Sterling Next Generation Store Engagement solution, a powerful platform that helps retailers transform their in-store experiences and bridge the gap between physical and digital realms. In this blog post, we will explore how IBM Sterling Next Generation Store Engagement is revolutionizing retail experiences.

Seamless Omnichannel Integration:

One of the key strengths of IBM Sterling Next Generation Store Engagement is its ability to seamlessly integrate various channels. The solution consolidates data from different sources, such as online stores, mobile apps, and physical stores, creating a unified view of customer preferences, purchase history, and inventory availability. This holistic approach enables retailers to deliver personalized and consistent experiences across all touchpoints, whether customers are shopping online, visiting a brick-and-mortar store, or engaging through mobile devices.
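
As a rough illustration of that consolidation (an assumption-laden sketch, not IBM's actual data model), the snippet below folds per-channel inventory records into a single unified availability view keyed by SKU.

```typescript
// Sketch of consolidating per-channel data into a unified view.
// Record shapes are illustrative assumptions, not IBM's data model.

type Channel = "online" | "store" | "mobile";

interface ChannelInventory {
  channel: Channel;
  node: string; // e.g. "DC-EAST" or "STORE-0042"
  sku: string;
  onHand: number;
}

interface UnifiedAvailability {
  sku: string;
  totalOnHand: number;
  byNode: Record<string, number>;
}

function unify(records: ChannelInventory[]): Map<string, UnifiedAvailability> {
  const view = new Map<string, UnifiedAvailability>();
  for (const r of records) {
    const entry =
      view.get(r.sku) ?? { sku: r.sku, totalOnHand: 0, byNode: {} };
    entry.totalOnHand += r.onHand;
    entry.byNode[r.node] = (entry.byNode[r.node] ?? 0) + r.onHand;
    view.set(r.sku, entry);
  }
  return view;
}

// A single SKU seen from a DC and a store rolls up into one answer an
// associate or a storefront can trust.
const unified = unify([
  { channel: "online", node: "DC-EAST", sku: "SHOE-9", onHand: 120 },
  { channel: "store", node: "STORE-0042", sku: "SHOE-9", onHand: 8 },
]);
console.log(unified.get("SHOE-9")?.totalOnHand); // 128
```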

Empowering Store Associates:

Store associates play a crucial role in enhancing customer experiences, and IBM Sterling Next Generation Store Engagement empowers them with the right tools and information. The solution provides real-time access to inventory data, allowing associates to quickly check product availability, locate items within the store, and offer accurate delivery timeframes. Additionally, the platform gives associates access to comprehensive product details, including specifications and recommendations, enabling them to provide expert guidance and personalized suggestions to customers.

Enhanced Customer Engagement:

IBM Sterling Next Generation Store Engagement offers a range of features that enhance customer engagement and satisfaction. For instance, the platform enables store associates to handle in-store pickup, ship-from-store, and in-store returns of online, POS, and mobile orders, ensuring a dedicated and uninterrupted shopping experience. Moreover, the solution leverages inventory visibility and SIM microservice modules to deliver a seamless customer experience with near-real-time product availability and accuracy across the store and DC networks, saving the sale and creating a true omnichannel experience.

Figure: Omnichannel customer expectations, simplified.

Technical Overview:

IBM Sterling Next Gen Store Engagement is delivered as a single front-end application built with Angular and the Bootstrap framework. It leverages the micro-frontend architecture of the single-spa framework, which splits a monolithic application into smaller, logical modules known as micro front ends while keeping the user experience of a single application. The platform supports multiple features spanning store fulfillment and inventory management operations. Each feature is modeled as an Angular feature module that contains multiple routes or views, and the objective is to break each feature module out into an individual Angular application. Each single-spa-enabled Angular application has its own package.json and controls its own dependencies, so an application can be upgraded independently. The single-spa-enabled Angular applications share a common set of components and services through shared widget libraries.
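
To make the single-spa pattern concrete, here is a minimal root-config sketch showing how feature modules register as independently loaded applications. The application names and package specifiers are hypothetical placeholders, not the product's actual modules; registerApplication and start are single-spa's real entry points.

```typescript
// Minimal single-spa root config sketch. Application names and package
// specifiers are hypothetical placeholders, not the product's modules.
import { registerApplication, start } from "single-spa";

// Each feature (e.g. fulfillment, inventory) is its own Angular app
// with its own package.json, loaded on demand by route.
registerApplication({
  name: "store-fulfillment",
  app: () => import("@acme/store-fulfillment"), // hypothetical package
  activeWhen: (location: Location) =>
    location.pathname.startsWith("/fulfillment"),
});

registerApplication({
  name: "inventory-management",
  app: () => import("@acme/inventory-management"), // hypothetical package
  activeWhen: (location: Location) =>
    location.pathname.startsWith("/inventory"),
});

// Shared widget libraries would typically be exposed to every
// micro frontend via import maps or module federation.
start();
```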

Figure: Micro-frontend architecture and evolving technologies adapted for the solution.

IBM Sterling Next Generation Store Engagement is a game-changer for retailers looking to create exceptional in-store experiences that seamlessly integrate the physical and digital worlds. The use of modern front-end technologies is another advantage, making the application more scalable and easier to extend and customize to a client's needs. By empowering store associates, enhancing customer engagement, streamlining checkout processes, and providing valuable insights, the platform enables retailers to meet and exceed customer expectations. As the retail industry continues to evolve, solutions like Next Generation Store Engagement will play a crucial role in helping retailers thrive in the digital era.

Reference:

IBM Sterling Store Engagement (Next-generation) – IBM Documentation
