Legacy Systems Explained: Why Upgrading Them is Crucial for Your Business
https://blogs.perficient.com/2024/12/04/legacy-systems-explained-why-upgrading-them-is-crucial-for-your-business/

What are Legacy Systems? Why is Upgrading those Systems Required?

Upgrading means more than making practical improvements to keep things running smoothly; it is about addressing immediate needs rather than chasing a perfect but impractical solution. When a critical system stops functioning properly in real time, the situation can quickly spiral out of control.

One such incident happened on January 4, 2024, when South Africa’s Department of Home Affairs was taken offline nationwide due to a mainframe failure. Mainframe failures in such contexts are high-stakes events because they hit the core infrastructure that supports vital public services. In South Africa, where the Department of Home Affairs handles essential services such as passports, identity documents, and immigration control, a mainframe failure can have widespread repercussions, leading to backlogs, delays, and administrative chaos. The department is a clear example of a critical legacy system facing significant risk because of its outdated technology and operational challenges.

Addressing these issues through modernization and digital transformation is crucial for improving service delivery and ensuring the system’s continued effectiveness and security. A legacy system cannot be migrated in one go, because the business and functional sides must be tested as well; a planned, systematic approach is needed when upgrading it.

 

Question: What is the solution to avoid such a case?
Answer: Modernization of Legacy code.

Legacy code modernization is the process of improving and updating outdated software systems to make them more maintainable, scalable, and compatible with modern technologies. Let’s understand this using Apigee (an API management tool).

1. Scalability

Legacy system: Legacy systems were built to solve specific tasks, but they offered little scalability; growth was constrained by the existing infrastructure, which limited business improvements.
Apigee: Thanks to its easy scalability, centralized monitoring, and integration capabilities, Apigee helped the organization plan its approach to business improvements.

2. Security

Legacy system: One of the simplest methods for authenticating users in legacy systems was “Basic Authentication,” where the client sends a username and password with every HTTP request. This method is vulnerable to man-in-the-middle (MITM) attacks if not combined with HTTPS, and credentials are exposed on each request.

Apigee: Using Apigee, the organization can quickly implement modern security features like OAuth, API key validation, rate limiting, and threat protection (e.g., bot detection) without changing the core logic of the APIs.
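As a simple illustration of the difference (not Apigee-specific), the sketch below contrasts sending credentials on every request with exchanging them once for a short-lived OAuth 2.0 token. It assumes Node.js 18+ for the built-in fetch, and every URL and credential shown is a hypothetical placeholder.

// Minimal sketch (Node.js 18+). All URLs and credentials are hypothetical placeholders.

// Legacy style: Basic Authentication sends the username and password with every call.
async function callWithBasicAuth() {
  const credentials = Buffer.from("demo-user:demo-password").toString("base64");
  return fetch("https://legacy.example.com/orders", {
    headers: { Authorization: `Basic ${credentials}` } // credentials exposed on each request
  });
}

// Modern style: exchange client credentials once for a short-lived token,
// then send only the bearer token to the API gateway.
async function callWithOAuth() {
  const tokenResponse = await fetch("https://api.example.com/oauth/token", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "client_credentials",
      client_id: "demo-client-id",
      client_secret: "demo-client-secret"
    })
  });
  const { access_token } = await tokenResponse.json();
  return fetch("https://api.example.com/orders", {
    headers: { Authorization: `Bearer ${access_token}` }
  });
}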

3. User and Developer Experience

Legacy system: The legacy API lacks good documentation, making it harder for external developers to integrate with it. Most such systems rely on SOAP-based communication.
Apigee: Apigee provides a built-in API portal, automatic API documentation, and testing tools, improving the overall developer experience and API adoption, so integration with other tools is easy and seamless with modern standards.


There are now multiple ways to migrate data from legacy to modern systems, which are listed below.

1. Big Bang Migration
2. Phased Migration
3. Parallel Migration
4. Pilot Migration
5. Hybrid Migration
and more…

Although legacy system owners know these options, they are very selective and picky when finalizing a migration plan, and they are often aware only of the short-term goal: getting the code up and running in production. When we speak of legacy systems, frequently all that is left is the code and a sigh of relief that it is still running. For most systems there is no documentation, code history, or record of revisions, which is why a migration can fail on a large scale if something goes wrong.

Here are some points that should be ensured before finalizing the process of migrating from a legacy system to a modern one.

1. Research and Analysis

We need to understand the motives behind the development of the legacy system, since documentation is usually missing or insufficient.

As part of this study, we can gather historical data to understand the system’s behavior and dig deeper for anything that helps us understand the system better.

2. Team Management

After studying the system, we can estimate the team size and plan resource management. Such systems run on much older technology, so it is hard to find resources with these outdated skills; in that case, management can cross-skill existing resources into such technologies.

I believe adding a proportionate number of junior engineers is best, as the exposure to these challenges helps them improve their skills.

3. Tool to Capture Raw Logs

Analyzing the raw logs can tell us a great deal about the system, because the logs capture the communication that completes each task the system is asked to perform. By breaking the data down into plain language, for example using timestamps to see when request volumes peak and inspecting what the request parameters contain, we can describe the system’s behavior and plan properly.
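A few lines of scripting are often enough for this first pass. The sketch below counts requests per hour from a plain-text log; the log file name and line format (an ISO timestamp followed by method and path) are hypothetical examples, not a real system’s format.

// Minimal sketch: count requests per hour from a raw access log.
// The file name and line format are hypothetical examples.
const fs = require("fs");

const lines = fs.readFileSync("access.log", "utf8").split("\n").filter(Boolean);
const requestsPerHour = {};

for (const line of lines) {
  // e.g. "2024-01-04T10:15:32Z GET /v1/identity/documents?id=123"
  const [timestamp] = line.split(" ");
  const hour = timestamp.slice(0, 13); // "2024-01-04T10"
  requestsPerHour[hour] = (requestsPerHour[hour] || 0) + 1;
}

console.table(requestsPerHour); // peak hours show when the system is under the most load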

4. Presentation of the Logs

Sometimes we need to present the case study to senior management before proceeding with the plan. To simplify the presentation, we can use tools like Datadog and Splunk to render the data in tabular or graphical form so that other team members can understand it.

5. Replicate the Architecture with Proper Functionality

This is the most important part. End-to-end development is the only route to a smooth migration. We need to enforce standards here, such as maintaining core functionality, managing risk, communicating data-pattern changes to associated clients, and preserving user access, business processes, and so on. The research from point 1 helps us understand the system’s behavior and decide which modern technology the migration should land on.

We can implement and plan using one of the migration methods I mentioned above in the blog.

6. End-to-end Testing

Once the legacy system has been replicated on modern tech, we need a User Acceptance Testing (UAT) environment to perform system testing. This can be challenging if the legacy system never had a testing environment; in that case we may need to call mock backend URLs to simulate the behavior of dependent services.
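A mock backend can be as small as a few lines. The sketch below uses Node’s built-in http module; the route and canned payload are hypothetical stand-ins for whatever dependent service is unavailable in UAT.

// Minimal mock-backend sketch using Node's built-in http module.
// The route and response payload are hypothetical stand-ins for a dependent legacy service.
const http = require("http");

http
  .createServer((req, res) => {
    if (req.url.startsWith("/legacy/customer")) {
      res.writeHead(200, { "Content-Type": "application/json" });
      res.end(JSON.stringify({ id: "C-001", status: "ACTIVE" })); // canned response
    } else {
      res.writeHead(404);
      res.end();
    }
  })
  .listen(8081, () => console.log("Mock backend listening on http://localhost:8081"));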

7. Before Moving to Production, do Pre-production Testing Properly

Only after successful UAT can one be confident in the functionality and consider moving the changes to production. Even then, some points must be ensured, such as following standards and maintaining documentation. On the standards side, we need to verify that no risk could lead to the failure of services on the modern technology and that everything is properly compatible.

In the documentation, we need to ensure that all service flows are appropriately documented and that testing has been done according to the gathered requirements.

Legacy systems and how they work are among the most complex and time-consuming topics, but putting in this effort up front is what makes the job easier.

Streamline Your PIM Strategy: Key Techniques for Effective inriver Integration
https://blogs.perficient.com/2024/11/13/streamline-your-pim-strategy-key-techniques-for-effective-inriver-integration/

In today’s digital landscape, efficiently managing product information is vital for businesses to enhance customer satisfaction and drive sales growth. A robust Product Information Management (PIM) system with excellent integration features, like inriver, will streamline your PIM strategy. By utilizing the integration frameworks and APIs provided by inriver, businesses can ensure relevant, accurate, and consistent product information across all channels. This article explores key inriver integration techniques that have the potential to transform your PIM approach.

The Importance of PIM Integration

Automating PIM processes leads to significant improvements in efficiency, accuracy, and scalability. By eliminating manual data entry, automated integration reduces errors and ensures that information remains consistent and current across all systems. This not only saves time and cuts labor costs but also enhances business agility and customer satisfaction. With automated integration, companies can swiftly adapt to market changes, make informed decisions, and provide timely, personalized information to their customers.

Streamline the PIM process

Exploring inriver Integration Options

There are several ways to automate the integration between systems that are used to send or receive data –

Leveraging APIs (Application Programming Interfaces) –

  • inriver REST APIs – These can be utilized to build integrations in any programming language and to customize interfaces within inriver, including creating enriched PDF/Preview templates (a minimal call sketch follows this list).
  • inriver Remoting APIs – These require C# programming knowledge and are used with hosted solutions. The Remoting API services consist of six major components:
    • Channel Service – Methods related to channels, e.g. channel structure, publishing/unpublishing a channel, and retrieving entities and links from a channel.
    • Data Service – One of the most widely used services, for creating, updating, deleting, and finding entities and links in the system.
    • Model Service – Contains methods for building and maintaining the PIM data model.
    • Print Service – Used for developing the inriver print plugin.
    • User Service – Provides methods for maintaining users, roles, permissions, and restrictions.
    • Utility Service – Contains various methods, including connector states, HTML templates, languages, and notifications.

Remoting Services

  • Content API – A set of APIs designed to facilitate the onboarding and distribution of large volumes of product data.
    • Content Onboarding API – Helps standardize the data onboarding process by dividing it into five key steps: Landing Area, Field Mapping, Staging Area, PIM Validations, and Import.
    • Content Delivery API – Used for distributing product data to various channels and platforms; it ensures that product data is uniform across all channels.
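As a rough illustration of what technology-agnostic means in practice, the sketch below calls a REST endpoint from Node.js 18+. The endpoint path, header name, and response shape are hypothetical placeholders rather than the documented inriver REST API surface; consult the inriver API documentation for the actual routes.

// Hypothetical sketch only: the endpoint path, header name, and response fields are placeholders.
async function fetchProductEntity(entityId) {
  const response = await fetch(`https://customer.example.com/pim/api/entities/${entityId}`, {
    headers: { "X-Api-Key": process.env.PIM_API_KEY }
  });
  if (!response.ok) throw new Error(`PIM request failed: ${response.status}`);
  return response.json(); // e.g. { id, fields: { ProductName, ... } }
}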

Integration Framework (IIF) – The Integration Framework is a foundation for building adapters and outbound integrations in inriver. It transforms the customer’s unique data model into a standard integration model. It supports custom entity types and delta functionality, and provides standard functions to deliver product data.

High level integration framework flow

The following table highlights the key aspects when considering integration within inriver –

| Feature/Aspect | REST API | Remoting API | inriver Integration Framework (IIF) | Content API |
| --- | --- | --- | --- | --- |
| Functionality | Basic to advanced functionality | Extensive functionality | Outbound integrations | Built on IIF; standardizes inbound and outbound data handling |
| Programming Language | Technology-agnostic | Requires C# programming | Requires C# programming | Technology-agnostic |
| Use Cases | Remote solutions | Hosted solutions, advanced operations | Exporting data to storefronts, building adapters | Onboarding product data, distributing product data |
| Performance | Better performance for remote solutions | Better performance for hosted solutions | Efficient for outbound data handling | Efficient for both inbound and outbound data handling |
| Flexibility | High flexibility, suitable for various platforms | Less flexible, specific to inriver environment | Moderate flexibility, decouples standard adapters | High flexibility, suitable for various platforms |
| Scalability | Highly scalable | Scalable within inriver cloud service | Scalable for outbound integrations | Highly scalable |
| Common Applications | eCommerce platforms, CMS, BI tools | ERP systems, custom extensions | eCommerce platforms, marketplaces | Supplier onboarding, ERP, content distribution |

 

These integration techniques can significantly enhance your PIM strategy, ensuring your product data remains accurate, consistent, and up to date across all channels. At Perficient, we engage in comprehensive discussions throughout our elaboration process and continue to validate during the implementation phase. We help finalize best practices tailored to each customer’s unique needs, recognizing that one approach may work better for one client than another. Get in touch to explore how we can support you on your PIM implementation journey, whether you’re starting fresh or facing challenges with an existing system.

Exploring Apigee: A Comprehensive Guide to API Management
https://blogs.perficient.com/2024/10/15/exploring-apigee-a-comprehensive-guide-to-api-management/

APIs, or application programming interfaces, are essential to digital transformation because they allow companies to expose and work with their data and services quickly and efficiently. Consequently, effective management is essential to ensure these APIs function correctly, stay secure, and provide the desired benefits. This is where Google Cloud’s top-tier API management product, Apigee, comes into play.

What is Apigee?

Apigee is a strong platform for companies that want to manage their APIs effectively. It simplifies the whole process of creating, growing, securing, and deploying APIs, which makes developers’ lives considerably easier. One thing that stands out about Apigee is its flexibility: it can handle both external APIs that third-party partners access and internal APIs used within the company, which makes it a good option for businesses of all sizes and a significant benefit for those looking to simplify their API management. It also integrates well with additional security layers, such as Nginx, which can provide an important layer of authentication between Apigee and the backend. This adaptability enhances security and allows for smooth integration across different systems, making Apigee a reliable choice for managing APIs.

Core Features of Apigee

1. API Design and Development

Apigee offers a rich suite of tools for designing and developing APIs. You can define API endpoints, maintain API specifications, and create and modify API proxies using the OpenAPI standard, which makes it easier to design APIs that are functional and compliant with industry standards. This streamlines the development process while ensuring the APIs meet regulatory requirements, so developers can focus on innovation on top of a solid foundation of compliance and functionality.

2. Security and Authentication

Any API management system must prioritize security, and Apigee leads the field in this regard. It provides security features such as OAuth 2.0, JWT (JSON Web Token) validation, API key validation, and IP validation. By limiting access to your APIs to authorized users, these capabilities help safeguard sensitive data from unwanted access.
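To make the JWT piece concrete outside the gateway, here is a minimal sketch of token verification in application code using the jsonwebtoken npm package. It illustrates the concept only and is not Apigee policy configuration; the secret, issuer, and audience values are placeholders.

// Minimal JWT-validation sketch (not an Apigee policy). Requires: npm install jsonwebtoken
// The secret, issuer, and audience values are placeholders.
const jwt = require("jsonwebtoken");

function authorize(req) {
  const token = (req.headers.authorization || "").replace("Bearer ", "");
  try {
    // Throws if the signature, expiry, issuer, or audience check fails.
    return jwt.verify(token, process.env.JWT_SECRET, {
      issuer: "https://issuer.example.com",
      audience: "orders-api"
    });
  } catch (err) {
    const error = new Error("Unauthorized: " + err.message);
    error.status = 401;
    throw error;
  }
}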

3. Traffic Management

With capabilities like rate limiting, quota management, and traffic shaping, Apigee enables you to optimize and control API traffic. This helps ensure proper usage and maintains consistent performance even under high traffic conditions.
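Conceptually, a quota behaves like the fixed-window counter sketched below. In Apigee this is configured declaratively through policies rather than written by hand, so the sketch is only meant to illustrate the idea, and the limit shown is arbitrary.

// Illustrative fixed-window rate limiter (concept only; Apigee does this via declarative policies).
// Allows at most `limit` requests per client per one-minute window.
const WINDOW_MS = 60000;
const limit = 100; // arbitrary example limit
const counters = new Map(); // clientId -> { windowStart, count }

function isAllowed(clientId, now = Date.now()) {
  const entry = counters.get(clientId);
  if (!entry || now - entry.windowStart >= WINDOW_MS) {
    counters.set(clientId, { windowStart: now, count: 1 });
    return true;
  }
  entry.count += 1;
  return entry.count <= limit; // reject once the window's quota is exhausted
}

console.log(isAllowed("client-a")); // true for the first 100 calls in a given minute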

4. Analytics and Monitoring

You can access analytics and monitoring capabilities with Apigee, which offers insights into API usage and performance. You can track response times, error rates, and request volumes, enabling you to make data-driven decisions and quickly address any issues that arise.
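As a rough illustration of the kind of numbers such dashboards surface, the sketch below computes an error rate and 95th-percentile latency from a small list of request records. The record shape is made up for the example and is not Apigee’s analytics schema.

// Illustrative only: compute error rate and p95 latency from request records.
// The record shape ({ status, durationMs }) is a made-up example.
const requests = [
  { status: 200, durationMs: 120 },
  { status: 500, durationMs: 340 },
  { status: 200, durationMs: 95 },
  { status: 200, durationMs: 210 }
];

const errorRate = requests.filter((r) => r.status >= 500).length / requests.length;

const sorted = requests.map((r) => r.durationMs).sort((a, b) => a - b);
const p95 = sorted[Math.min(sorted.length - 1, Math.floor(0.95 * sorted.length))];

console.log(`Error rate: ${(errorRate * 100).toFixed(1)}%, p95 latency: ${p95} ms`);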

5. Developer Portal

Apigee includes a customizable developer portal where API users can browse documentation, test APIs, and get API keys. This portal builds a community around your APIs and improves the developer experience.

6. Versioning and Lifecycle Management

Keeping an API’s versions separate is essential to preserving backward compatibility and allowing it to change with time. Apigee offers lifecycle management and versioning solutions for APIs, facilitating a seamless upgrade or downgrade process.

7. Integration and Extensibility

Apigee supports integration with various third-party services and tools, including CI/CD pipelines, monitoring tools, and identity providers. Its extensibility through APIs and custom policies allows you to tailor the platform to meet your specific needs.

8. Debug Session

Apigee also offers a debug session feature that helps troubleshoot and resolve issues by providing a real-time view of API traffic and interactions. It is crucial for identifying and fixing problems during the development and testing phases, ensuring issues are caught early and enhancing the overall quality of the final product.

9. Alerts

Furthermore, you can easily set up alerts within Apigee to notify you of critical issues related to performance and security threats. Both types of issues affect system reliability and can lead to significant downtime, so addressing them promptly is essential for maintaining optimal performance.

10. Product Onboarding for Different Clients

Apigee supports product onboarding, allowing you to manage and customize API access and resources for different clients. This feature is essential for handling diverse client needs and ensuring each client has the appropriate level of access.

11. Threat Protection

Apigee provides threat protection mechanisms to ensure that your APIs can handle concurrent requests efficiently without performance degradation. This feature helps in maintaining API stability under high load conditions.

12. Shared Flows

Apigee allows you to create and reuse shared flows, which are common sets of policies and configurations applied across multiple API proxies. This feature promotes consistency and reduces redundancy in API management.

Benefits of Using Apigee

1. Enhanced Security

In summary, Apigee’s comprehensive security features help protect your APIs from potential threats and ensure that only authorized users can access your services.

2. Improved Performance

Moreover, with features like traffic management and caching, Apigee helps optimize API performance, providing a better user experience while reducing the load on your backend systems.

3. Better Visibility

Apigee’s analytics and monitoring tools give valuable insights into API usage and performance, helping you identify trends, diagnose issues, and make informed decisions.

4. Streamlined API Management

Apigee’s unified platform simplifies the management of APIs, from design and development to deployment and monitoring, saving time and reducing complexity.

5. Scalability

Finally, Apigee is designed to handle APIs at scale, making it suitable for both small projects and large enterprise environments.

Getting Started with Apigee

To get started with Apigee, follow these steps:

1. Sign Up for Apigee

Visit the Google Cloud website and sign up for an Apigee account. Based on your needs, you can choose from different pricing plans.
Sign-up for Apigee.

2. Design Your API

Use Apigee’s tools to design your API, define endpoints, and set up API proxies.

3. Secure Your API

Implement security policies and authentication mechanisms to protect your API.

4. Deploy and Monitor

Deploy your API to Apigee and use the analytics and monitoring tools to track its performance.

5. Engage Developers

Set up your developer portal to provide documentation and resources for API consumers.

In a world where APIs are central to digital innovation and business operations, having a powerful API management platform like Apigee can make a significant difference. With its rich feature set and comprehensive tools, Apigee helps organizations design, secure, and manage APIs effectively, ensuring optimal performance and value. Whether you’re just starting with APIs or looking to enhance your existing API management practices, Apigee offers a broad set of capabilities and the flexibility needed to thrive in today’s highly competitive landscape.

Production Deployment and its Basics: Known to Many, Followed by Few
https://blogs.perficient.com/2024/09/04/production-deployment-and-its-basics-known-to-many-followed-by-few/

Did you ever feel tense while taking your exams? Or think of the Olympics and other sports events like cricket or football: when you watch national players during significant events, you can observe the stress and anxiety of performing at that level. An IT professional on a production deployment call is in a similar situation. The moment is crucial because it represents the end of months or years of effort, the results of which will be evaluated by everyone involved, and the stakes are high because the quality and success of the deployment can have a huge impact.

Teams follow a multi-step process called the SDLC (Software Development Life Cycle) model to manage this stress and increase the chances of success. These models provide a framework to guide process improvement, reduce risk, and streamline deployment. The team’s goal is to follow this process and deliver quality software that meets the needs of stakeholders.

Some of the major SDLC models are:

  1. Waterfall Model
  2. V-Model
  3. Incremental Model
  4. RAD Model
  5. Iterative Model

Each SDLC model is suitable for a certain type of project. We can take the example of the Waterfall Model.

The SDLC Waterfall Model


 

  1. Requirements Analysis: Gather and document what the system should do.
  2. System Design: Outline the architecture and design specifications.
  3. Implementation: Write and integrate the code according to the design.
  4. Testing: Evaluate the system to ensure it meets the requirements.
  5. Deployment: Release the system for end-users to use.
  6. Maintenance: Address any issues or updates needed after deployment.

Structured approaches like SDLC emphasize planning, alignment, and risk management to ensure successful deployments. However, gaps can still lead to failures and negatively impact the client’s perception.

Production deployment is always a hassle, even though it is simply your code for a service that will run as you developed it, just in a different organization or environment. So, what’s the drill?

Let me answer this with some of the points I have learned from my IT experience.

1. Insufficient Requirement Gathering

Sometimes demands are not appropriately explained in the documentation, stories, or other requirement-gathering artifacts; for some tasks there are no written standards to track against, only shared understanding. If the process carries on like this, we may face delays in production planning, or issues in production if the work is deployed anyway, and it can cause recurring problems in production.

For example, in one of the requirements meetings, we asked the client for the parameter details, but the client had no such information, which caused a delay in deployment.

2. Incorrect Dev/Sandbox Testing

Developers often test a service only until they get one successful response and then move it straight to production once approval is obtained. For the TL/manager it looks like a win-win, because the service is delivered before the deadline, until clients start exercising scenarios that were never tested.

The weak testing approach is then exposed, and fixes happen live in production. This hurts both the value of the business and the relationship with the client.

3. Inconsistency Between the Code in Lower Environment and Production

Most of the time, developers have to make changes to production services for one reason or another, whether driven by the team or the client. Those changes should be tested in the dev organization/environment first. Implementing them directly in production because of short-term convenience and approvals may satisfy the client and the TL/manager, but it does a disservice to junior team members, who may not understand why the code differs between environments.

4. Improper or incomplete testing by the client

Note: This may be more for the production manager type of folks.

I have been through several such deliveries and have seen the same behavior: clients sometimes rely on the developer for the testing part. But the client knows the end-to-end project, while the developer is responsible only for part of it, so the client side of testing is essential.

5. Pre-production testing

In most cases, the client doesn’t have test data in pre-production to confirm the end-to-end working status of the service, which may cause the service to fail. Always ask the client to do pre-production testing with real-time data and to confirm the status of the service.

6. Load testing

Load testing is often skipped as early as requirement gathering. It is necessary to put the service through load testing so that if, at the production level, it starts to receive more traffic than usual, we can trust its capability to handle such cases.
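A first-pass load test does not need heavy tooling; even a small script that fires concurrent requests and reports latencies will expose obvious bottlenecks before a dedicated tool (JMeter, k6, Gatling, and so on) is brought in. A minimal sketch for Node.js 18+, with a hypothetical target URL and arbitrary volumes:

// Minimal concurrent load-test sketch (Node.js 18+). Target URL and volumes are placeholders;
// use a dedicated load-testing tool for real measurements.
const TARGET = "https://service.example.com/health";
const CONCURRENCY = 50;
const ROUNDS = 10;

async function timedRequest() {
  const start = performance.now();
  const res = await fetch(TARGET);
  return { ok: res.ok, ms: performance.now() - start };
}

(async () => {
  const results = [];
  for (let i = 0; i < ROUNDS; i++) {
    results.push(...(await Promise.all(Array.from({ length: CONCURRENCY }, timedRequest))));
  }
  const failures = results.filter((r) => !r.ok).length;
  const avg = results.reduce((sum, r) => sum + r.ms, 0) / results.length;
  console.log(`${results.length} requests, ${failures} failures, avg latency ${avg.toFixed(0)} ms`);
})();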

That’s a wrap!

These are the gaps to close and the processes to follow properly for a successful and hassle-free production deployment.

Perficient + Apigee

At Perficient, we create complex and robust integration solutions in Apigee, which helps our clients address the full spectrum of challenges with lasting solutions.

Contact us today to learn how we can help you to implement integration solutions with Apigee.

Navigating Snaplogic Integration: A Beginner’s Guide
https://blogs.perficient.com/2024/03/05/navigating-snaplogic-integration-a-beginners-guide/

As businesses rapidly go digital, the need for scalable and reliable ways to connect applications, cloud environments, and on-premises assets has grown. For these complex scenarios, iPaaS is often a perfect fit.

For example, if a developer needs to connect an e-commerce platform to a CRM system and transfer large volumes of data, writing custom code to handle the transfer would be tedious. Instead, the developer can simply consume APIs deployed to the iPaaS, significantly reducing development time and effort.

But What Exactly is iPaaS?

Integration Platform as a Service (iPaaS) is a cloud-based solution that makes integrating different applications, data sources and systems easier. It typically provides built-in connectors, reusable components, and tools for designing, executing, and monitoring integrations. This helps businesses enhance operational efficiency, reduce manual efforts, and quickly adapt to changing technology landscapes.

Today, we will talk about one of the iPaaS solutions that stands as a Visionary in Gartner’s 2023 Magic Quadrant: SnapLogic.


What is SnapLogic?

SnapLogic is an iPaaS (Integration Platform as a Service) tool that allows organizations to connect various applications, data sources, and APIs to facilitate data integration, automation, and workflows.

It provides a visual interface for designing integration pipelines, making it easier for both technical and non-technical users to create and manage data integrations. SnapLogic supports hybrid cloud and on-premises deployment and is used for tasks such as data migration, ETL (Extract, Transform and Load) processes and application integration.

Getting Started with the Basics of SnapLogic

To kick-start your journey, spend 5-10 minutes on setup. Here are the steps to quickly set up your training environment.

  1. Sign Up for SnapLogic: You must sign up for an account. For training and better hands-on experience, SnapLogic provides a training account for 4 weeks. You can start with the training account to explore its features. Here is the link to get the training account: SnapLogic User Login.
  2. Access SnapLogic designer: SnapLogic designer is the heart of its integration capabilities. Once you have signed up, you can access it from your account.
  3. Course suitable for beginners: Click this link to enroll in the “SnapLogic Certified Enterprise Automation Professional” entry-level course to quickly get up to speed on SnapLogic.

Features of SnapLogic

SnapLogic is an integration platform that makes connecting different data sources and applications easier. Some key features include:

  1. Multi-cloud Integration: Supports integration across various cloud platforms.
  2. Low-Code Approach: Reduces the requirement for advanced coding knowledge.
  3. API Management: Helps manage APIs and create custom APIs between different applications.
  4. Real-time Integration: Supports real-time data integration.

Overview of Use Case

Done with sign-up and setup! Theoretical lessons are never easy to learn unless you do the hands-on work in parallel, so let’s look at a practical use case to simplify learning.

The customer must automatically insert the employee records from an Excel file in a shared directory into the Salesforce CRM system.

How Can We Achieve This Using SnapLogic?

SnapLogic provides pre-built Snaps such as File Reader, CSV Parser, Mapper, Salesforce Create, and many more.

To implement this use case, we add the File Reader Snap to fetch the CSV file, the CSV Parser to parse the data, the Mapper Snap to transform the data, and finally Salesforce Create to insert the data into Salesforce.
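For contrast, here is a rough sketch of what the same flow looks like as hand-written code. The Salesforce instance URL, access token, and CSV column names are hypothetical, and a real implementation would also need authentication handling, batching, and error retries, which is exactly the boilerplate the pre-built Snaps take care of for you.

// Hypothetical hand-coded equivalent of the pipeline: read CSV -> map -> create Salesforce Accounts.
// Instance URL, access token, and column names are placeholders. Requires Node.js 18+.
const fs = require("fs");

const [header, ...rows] = fs.readFileSync("Employee_Data.csv", "utf8").trim().split("\n");
const columns = header.split(",");

for (const row of rows) {
  const record = Object.fromEntries(row.split(",").map((value, i) => [columns[i], value]));

  // Mapper step: transform source fields into the target object's fields.
  const account = { Name: record.EmployeeName, Phone: record.Phone };

  // Salesforce Create step: POST to the sObject REST endpoint.
  fetch("https://your-instance.my.salesforce.com/services/data/v58.0/sobjects/Account", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.SF_ACCESS_TOKEN}`,
      "Content-Type": "application/json"
    },
    body: JSON.stringify(account)
  }).then((res) => console.log(record.EmployeeName, res.status));
}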

Creating the pipeline

  1. Upload your CSV file to the SnapLogic file system, as we need to read the CSV file.
  2. Creating a pipeline is the first step in building an integration. Click the “+” sign at the top of the middle canvas, then fill in the pipeline name and parent project and click “Save”.
  3. Add and configure the File Reader Snap: point the File field to the file you uploaded in step 1. Because you are accessing the file system, no authentication information is needed.
  4. Add a CSV Parser Snap; you will use the default configuration.
  5. Add the Mapper: it transforms the incoming data using specific mappings and produces new output data.
  6. Salesforce Create: it creates the records in a Salesforce Account object using the REST API.
  7. After saving, SnapLogic will automatically validate the changes; you can click on the green document icon to view what your data looks like.
  8. Test the pipeline: after the build is done, we can test the pipeline. To do that, click on the “play” icon in the pipeline menu and wait for the pipeline to finish executing. Notice how the color of the Snaps turns yellow while executing, indicating they are currently running.
  9. Validate the results: once the execution finishes, the pipeline turns dark green. If there’s an exception, the failing Snap turns red.
  10. Results: log in to the Salesforce account > Accounts > click on the recently viewed accounts. You will see the records that were fetched from the Employee_Data.csv file.

Conclusion

Congratulations on completing your first SnapLogic integration! In this blog, we went through the basics of iPaaS and SnapLogic, and we walked through a practical use case to build confidence and a better understanding. Our journey with SnapLogic has just started, and we’ll explore more in the future to expand on the knowledge accumulated in this article.

Perficient and SnapLogic

At Perficient, we develop scalable and robust integrations on the SnapLogic platform. With our expertise in SnapLogic, we solve customers’ complex business problems, helping them grow their business efficiently.

Contact us today to explore more options for elevating your business.

Google Gemini AI Integrates Seamlessly with Salesforce for Enhanced Efficiency and Productivity
https://blogs.perficient.com/2023/12/12/google-gemini-ai-integrates-seamlessly-with-salesforce-for-enhanced-efficiency-and-productivity/

Last week, Google announced Gemini, its groundbreaking multimodal AI model designed to push the boundaries of performance and versatility in AI technology. It can generalize and understand information, operate across platforms, and is trained on different types of data, including text, code, audio, images, and video.

It’s also the most flexible AI model yet – it can run on everything from data centers to mobile devices, making it ideal for developers and customers to build and scale with AI.

Gemini is available in three versions:

  • Gemini Ultra: The largest and most capable for complex tasks
  • Gemini Pro: Ideal for scaling across a wide range of tasks
  • Gemini Nano: The most efficient for on-device tasks

 

What Sets Gemini Apart?

Gemini outperforms GPT-4 across multiple benchmarks, including massive multitask language understanding (MMLU), reasoning, math, and code. Gemini Ultra surpasses state-of-the-art results on 30/32 widely used academic benchmarks in large language model (LLM) research.

 

When to Expect Gemini in Action:

12/6/23: Bard and Pixel 8 Pro can now leverage Gemini Pro and Nano for tasks like Summarize in Recorder and Smart Reply in Gboard

12/13/23: Developers and enterprise customers can access Gemini Pro via the Gemini API in Google AI Studio or Google Cloud Vertex AI

Early 2024: Bard Advanced, launching with Gemini Ultra, will offer a new level of interaction with top-tier models and capabilities

 

Why Should Salesforce Users Care?

AI enthusiasts have an exciting new benchmark to explore, comparing Gemini’s performance against other models. Businesses seeking AI integration for growth now have a powerful model to assess and potentially adopt.

Salesforce customers can utilize Gemini Pro and Ultra’s generative capabilities through Google Cloud Vertex AI using the “bring your own model” feature of the Einstein Trust Layer.

  • Einstein GPT: This integration allows users to leverage Gemini’s generative AI capabilities directly within Salesforce. Generate personalized content such as product descriptions, social media posts, and more, based on your customer data.
  • Einstein Copilot: This bidirectional integration connects Salesforce with Google Workspace. Seamlessly generate content in Google Workspace, update records in Salesforce, and trigger workflows based on specific actions.
  • Custom integrations: To allow for a highly tailored solution, you can build custom integrations using Salesforce APIs to connect Gemini with any specific functionalities.

 

Benefits of Integrating Gemini with Salesforce

  • Increased efficiency and productivity
  • Improved customer engagement
  • Enhanced decision making
  • Personalized customer experiences
  • Automated workflows

 

Use Cases for Gemini and Salesforce Integration

Sales Cloud:

  • Personalized email campaigns and product descriptions: Create personalized, targeted email campaigns and product descriptions to drive engagement and conversion
  • Automated lead qualification: Automatically analyze data and score leads to allow sales reps to focus on qualified leads and close deals

Service Cloud:

  • Personalized customer support responses: Generate personalized responses to customer inquiries for faster resolution times and improved customer satisfaction
  • Knowledge base article creation: Create and update knowledge base articles automatically with accurate and informative content
  • Automated ticket routing: Automatically route tickets to the most qualified customer service agent based on the customer data and issue type to ensure quick and efficient resolution

Marketing Cloud:

  • Social media content and ad copy: Draft engaging and relevant copy based on audience insights to increase brand awareness and engagement
  • Automated email marketing campaigns: Design and automate email campaigns to deliver relevant content to customers for maximum campaign effectiveness

 

Interested in Learning More?

Contact us to learn more about Perficient’s AI expertise, and multicloud capabilities. Our partnerships with Google and Salesforce make us the partner of choice to help you integrate Salesforce and Gemini.

Sitecore and Azure AD B2C Integration
https://blogs.perficient.com/2023/11/22/sitecore-and-azure-ad-b2c-integration/

Introduction

In this blog post, we will explore the benefits of Sitecore and Azure AD B2C integration and how its advanced authentication and authorization capabilities can help businesses provide a more secure and personalized digital experience for their customers. Sitecore first introduced Identity Server in Sitecore 9, released in October 2017, as a replacement for the previous identity management system, which relied on the ASP.NET Membership Provider. This built-in support for Identity Server made it easier to integrate Azure AD B2C as an identity provider within the Sitecore ecosystem.

Understanding Azure B2C

Azure AD B2C is a cloud-based identity and access management (IAM) solution that enables businesses to manage user identities and access to their digital assets. It provides a range of features, such as user authentication and authorization, to help businesses protect their digital assets from unauthorized access. With Azure AD B2C, businesses can also collect and manage user data, allowing them to create personalized digital experiences. A significant level of customization enables a seamless experience, with branding and user journeys that can be aligned with specific requirements.

Setting Up Sitecore and Azure AD B2C Integration

Setting up Sitecore and Azure AD B2C integration involves several steps.
  1. Creating an Azure AD B2C tenant and configuring the identity provider settings.
  2. Configuring Sitecore to use Azure AD B2C as the primary identity provider.
    • This involves configuring Sitecore’s federated authentication module to forward login requests to Azure AD B2C.
    • Configuring Sitecore’s roles and permissions based on Azure AD B2C user attributes.
  3. Configuring Sitecore to use Azure AD B2C for collecting user data.
    • This involves mapping user attributes from Azure AD B2C to the corresponding xConnect contact facets and extending xConnect contact model if needed.

Here are the most significant benefits when considering Sitecore and Azure AD B2C integration.

Advanced Authentication Capabilities:

Azure AD B2C provides a wide range of authentication methods, including social identities like Facebook and Google, multi-factor authentication, and passwordless authentication. This gives businesses the flexibility to choose the authentication method that best suits their needs, and also provides additional security options beyond what is available with Sitecore’s Identity Server.

Improved Security:

Security is a critical concern for any business operating online, and Sitecore and Azure AD B2C integration provides an enhanced level of security for user authentication and authorization. With Azure AD B2C, businesses can take advantage of multi-factor authentication (MFA); Azure AD B2C supports several MFA methods, such as SMS, email, or phone call, and Sitecore can leverage these methods to provide an additional layer of security for user authentication. Conditional access policies allow businesses to control access to their digital assets based on user attributes, such as device type, IP address, or location. These added layers of security help prevent unauthorized access, phishing attacks, and other cyber threats that can compromise sensitive data.

Personalized User Experiences:

Sitecore and Azure AD B2C integration also enables businesses to create more personalized user experiences. This is achieved by using Azure AD B2C to collect user data, such as user preferences or behavior, and using it to create more relevant and engaging experiences for customers. User data is stored in a centralized location and can be easily accessed and managed using the Microsoft Graph API or the Azure AD B2C portal. Businesses can leverage user data to create targeted marketing campaigns and personalized content based on user preferences. By tailoring content and offers to individual users, businesses can drive higher engagement and conversion rates.

Seamless Integration:

Sitecore and Azure AD B2C integration is designed to be seamless, making it easy for businesses to set up and manage. With Azure AD B2C, businesses can manage user identities, profiles, and permissions, all within a single cloud-based platform. This makes it easier for businesses to manage user data, streamline workflows, and reduce administrative overhead. In addition to these benefits, Sitecore and Azure AD B2C integration also provides businesses with greater scalability, flexibility, and cost savings. By leveraging cloud-based services, businesses can scale their digital experiences to meet the demands of their customers, while only paying for the resources they use.

Centralized Identity Management:

Azure AD B2C provides a centralized location for managing user identities, which simplifies the process of managing user access across multiple applications and environments. This reduces the risk of errors and makes it easier to maintain a secure identity environment.

Scalability and Availability:

Azure AD B2C is a cloud-based service that is designed to be highly scalable and available. This means that businesses can easily scale up or down based on demand, and also benefit from the high availability and redundancy provided by Microsoft’s cloud infrastructure.

Compliance and Regulation:

Azure AD B2C provides compliance with industry regulations such as GDPR and HIPAA, as well as support for authentication standards like OpenID Connect and OAuth 2.0. This makes it easier for businesses to comply with regulatory requirements and ensure the security of user data.

Integration with Other Azure Services:

Azure AD B2C integrates seamlessly with other Azure services, such as Azure Active Directory and Azure Key Vault, which provides additional security and management options for businesses. Additionally, Azure AD B2C can be easily integrated with other cloud-based services, such as Microsoft Dynamics 365 and Salesforce.

There are some downsides as well which we need to point out.

Complexity:

Azure AD B2C is a more complex system than Sitecore’s Identity Server, and it may take longer to set up and configure. As Azure AD B2C is a separate service, developers and administrators will need to learn a new set of tools and technologies to use it effectively. This may require additional resources and expertise, and could result in a steeper learning curve for those who are not familiar with Azure AD B2C.

Cost:

While Sitecore’s Identity Server is included in the Sitecore license, Azure AD B2C is a separate service that requires a subscription. Depending on the size and complexity of your organization, the cost of using Azure AD B2C may be higher than using Sitecore’s Identity Server.

Dependency on External Service:

Because Azure AD B2C is a cloud-based service, it introduces a dependency on an external service provider. This may result in increased latency, and could potentially cause issues with service availability or performance.

Integration Challenges:

Depending on the complexity of your existing infrastructure, integrating Azure AD B2C with your Sitecore implementation may require additional development and configuration work. This could result in longer development timelines or higher development costs.

In conclusion

Sitecore and Azure AD B2C integration provides businesses with a powerful set of tools to improve their digital experiences. By combining Sitecore’s robust content management capabilities with Azure AD B2C’s advanced identity and access management features, Sitecore can provide businesses with MFA, conditional access policies, and user data collection for personalization. This integration can provide a more secure, personalized, and seamless user experience. With it, businesses can better engage with their customers, drive higher conversion rates, and ultimately grow their business. Setting up Sitecore and Azure AD B2C integration involves several technical steps, but the benefits of this integration definitely make it worth the effort.
Coveo Headless Library Integration with SAPUI5 Framework: Development Environment Setup – Phase I
https://blogs.perficient.com/2023/10/12/coveo-headless-library-integration-with-sapui5-framework-development-environment-setup-phase-i/

In this blog, we will explore how to integrate Coveo Headless, a powerful search and relevance platform, with OpenUI5, a popular UI framework for building web applications. As search functionality becomes increasingly crucial for modern applications, this integration will allow us to create an advanced search experience within OpenUI5 projects.

Introduction

Coveo Headless is a search and relevance platform that offers a set of APIs to build tailored search experiences. It leverages machine learning and AI to deliver personalized results, making it a powerful tool for enhancing search functionality.

OpenUI5 is a UI framework based on JavaScript that facilitates the development of responsive web applications. It provides a collection of libraries and tools for creating consistent and visually appealing user interfaces.

By integrating Coveo Headless with OpenUI5, we can combine the strengths of Coveo’s advanced search capabilities with OpenUI5’s flexible UI components, resulting in a comprehensive and user-friendly search experience.

Requirements

Before we dive in, it’s essential to ensure you have the following prerequisites:

  • Basic knowledge of Coveo and OpenUI5 components.
  • Familiarity with JavaScript and Node.js.
  • Node.js version >= 18.12.0 installed (you can use Node Version Manager, NVM, for this).

Setting Up the Development Environment

In this section, we’ll guide you through the process of setting up your development environment to integrate Coveo Headless with OpenUI5. This includes cloning a sample OpenUI5 repository, upgrading your Node.js version, installing required dependencies, adding dependencies to the  package.json file, and configuring shims for compatibility.

Clone Sample OpenUI5 Repository:

To get started, clone the OpenUI5 sample application repository from GitHub.

Repository URL: https://github.com/SAP/openui5-sample-app

This sample repository provides a basic structure for an OpenUI5 application and will serve as the foundation for integrating the Coveo Headless library.

Configurations

Step-01: Add Dependencies to package.json:

Open the package.json file in your project directory. Add the following dependencies to the “dependencies” section:

"dependencies": {
    "@coveo/headless": "^1.109.0",
    "http-proxy": "^1.18.1",
    "openui5-redux-model": "^0.4.1"
}

Step-02: Add Shim Configuration:

In your ui5.yaml configuration file, add the shim configuration for the Coveo Headless package. This configuration ensures that OpenUI5 correctly loads the Coveo Headless module:

---
specVersion: "2.5"
kind: extension
type: project-shim
metadata:
  name: ui5-ts-shim-showcase.thirdparty
shims:
  configurations:
    "@coveo/headless":
      specVersion: "2.5"
      type: module
      metadata:
        name: "@coveo/headless"
      resources:
        configuration:
          paths:
            "/resources/@coveo/headless/": ""

Step-03: Install Dependencies:

Run the following commands in your project directory to install the newly added dependencies.

npm install
cd webapp
yarn install

Please note that the installation might take some time.

Step-04: Configure Component.js:

Open your Component.js file located within the webapp folder and add the following code. It ensures that Coveo Headless is properly mapped and recognized as a module by OpenUI5:

sap.ui.loader.config({
  map: {
    "*": {
      // Map the bare module name to the pre-built browser bundle shipped in the package.
      "@coveo/headless": "@coveo/headless/dist/browser/headless"
    }
  },
  shim: {
    "@coveo/headless/": {
      "amd": true,
      "deps": [],
      // Expose the bundle to UI5 under the global name CoveoHeadless.
      "exports": "CoveoHeadless"
    }
  }
});

sap.ui.define(["sap/ui/core/UIComponent", "sap/ui/core/ComponentSupport", "@coveo/headless"], function(UIComponent) {
  "use strict";
  return UIComponent.extend("sap.ui.demo.todo.Component", {
    metadata: {
      manifest: "json"
    }
  });
});

Start a local server and run the application (http://localhost:8080/index.html).

npm start or ui5 serve -o index.html

This setup ensures that Coveo Headless is correctly loaded and available within your OpenUI5 project. You can also verify this in your browser’s developer console.

Now you can use the CoveoHeadless variable within your OpenUI5 project to initialize the Coveo search engine and start building advanced search functionality.
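For example, inside the sap.ui.define callback you could initialize an engine against Coveo’s public sample organization. This is a minimal sketch based on the Headless quick-start pattern; verify the exact calls against the @coveo/headless version you installed (1.109.0 here) before relying on them.

// Minimal sketch: initialize a search engine and a search-box controller using the sample configuration.
// Replace getSampleSearchEngineConfiguration() with your own organizationId and accessToken for real use.
var engine = CoveoHeadless.buildSearchEngine({
  configuration: CoveoHeadless.getSampleSearchEngineConfiguration()
});

var searchBox = CoveoHeadless.buildSearchBox(engine);
searchBox.subscribe(function () {
  console.log("Search box state:", searchBox.state);
});

searchBox.updateText("integration");
searchBox.submit(); // triggers a query against the sample index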

Summary

By performing the above steps, you will have successfully prepared your development environment to integrate Coveo Headless with OpenUI5. The sample OpenUI5 application and the added dependencies will serve as the basis for building your enhanced search functionality.

Additional resources

How to Integrate Power BI into Power Apps
https://blogs.perficient.com/2023/10/04/how-to-integrate-power-bi-into-power-apps/

In today’s digital age, businesses need tools that can help them make informed decisions quickly. Power BI and Power Apps are two powerful tools that can help businesses achieve this goal. Power BI is a business intelligence tool that helps businesses visualize and analyze their data, while Power Apps is a low-code platform that allows businesses to create custom applications. In this blog, we’ll explore how to connect Power BI and Power Apps to unleash the power of integration.

Why Integrate Power BI and Power Apps?

Integrating Power BI and Power Apps can provide businesses with a unified platform for visualizing and analyzing their data. By connecting Power BI reports and dashboards to Power Apps, businesses can provide employees with real-time insights into their data while also allowing them to act on that data through custom applications. The integration of Power BI and Power Apps also overcomes the drawback of limited visuals in Power Apps.

Some benefits of integrating Power BI and Power Apps include:

  • Improved Data Visibility: By integrating Power BI and Power Apps, businesses can provide employees with a unified platform for visualizing and analyzing their data, improving overall data visibility.
  • Streamlined Workflows: By allowing employees to act on data insights through custom applications, businesses can streamline their workflows and improve overall efficiency.
  • Increased Collaboration: By integrating Power BI and Power Apps, businesses can foster increased collaboration between employees by providing them with real-time insights into shared data.
  • Field Service Management: Field service technicians can use Power Apps to receive real-time insights into customer data, inventory levels, and other critical information while on the go. By integrating with Power BI, they can quickly access visual reports and dashboards to make informed decisions.
  • Sales and Marketing: Sales and marketing teams can use Power Apps to manage leads, track customer interactions, and analyze sales data. By integrating with Power BI, they can access visual reports and dashboards to track sales performance, identify trends, and adjust their strategies accordingly.
  • Financial Management: Finance teams can use Power Apps to manage financial data, monitor expenses, and analyze financial performance. By integrating with Power BI, they can access visual reports and dashboards to track key metrics, identify areas for improvement, and make informed decisions.

How to Integrate Power BI and Power Apps

  1. Create a Power BI Report: Develop a Power BI report or dashboard to visually represent the data intended for integration into your Power App, and subsequently share and make the Power BI report accessible on the Power BI service.
  2. Access the previously published report within the Power BI service, then proceed to open the report and pin it to a dashboard.
  3. Assign a suitable name to the dashboard and pin it live.
  4. Next, navigate to Power Apps and search for the Power BI tile component. Once found, select the Power BI tile.
  5. A modal will appear, allowing you to choose the Workspace, Dashboard, and Tile. Please select options from all three dropdowns to establish a connection with the Power BI Dashboard.

Once you’ve made selections in all the properties dropdowns, Power Apps will establish a live connection with the Power BI dashboard, and you will be able to view the Power BI report within Power Apps.
Integrating Power BI and Power Apps can provide businesses with a unified platform for visualizing and analyzing their data. By connecting Power BI reports and dashboards to Power Apps, businesses can provide employees with real-time insights into their data while also allowing them to act on that data through custom applications. Follow the steps outlined in this blog to unleash the power of integration and take your business to the next level.

 

Introduction to Boomi Master Data Hub
https://blogs.perficient.com/2023/04/06/introduction-to-boomi-master-data-hub/

Importance of Quality Data:

  • Many organizations, whether they recognize it or not, experience the effects of disparate data. High-quality data is the cornerstone on which success is built when a decision must be made or a plan needs to be put into action.
  • The Master Data Hub steps in at this point. No matter what system the data is coming from, Hub can make sure all the right information is tracked for each customer, avoid confusion from duplication, and more. You can access the appropriate data anytime you need it for collaborative decision-making, reporting, and sales efforts.
  • With the Boomi Master Data Hub, you can choose what kind of data you want to monitor. The bottom line is that Hub gives you access to high-quality information so you can realize business value.

What is Master Data Hub?

Master Data is any and all data about business entities that is valuable to a company and its successful operation. This data must be consistent across all systems, groups, departments, and reports.

A master data hub system is a centralized data management solution that stores, manages, and distributes critical business data across multiple systems and applications. It serves as a single source of truth for all essential data elements, such as customer information, product details, financial data, and more.

Let's look at a few of the many functions that Boomi Master Data Hub offers:

  1. Quickly Model Master Data: A low-code, visual interface experience increases speed for matching and merging accurate data records throughout your business.
  2. Collective Intelligence: Tap into the experience of the Boomi developer community by using the Boomi Suggest wizard to quickly add fields to data models.
  3. Comprehensive Matching: Leverage built-in matching processes to help you create consolidated, error-free data records your business will trust.
  4. Automatic alerts: Real-time notifications inform you when data processing is finished or if records have been isolated for further stewardship.

A master data hub record serves as the single source of truth about a business entity, such as a customer or product. The validated, consolidated record created for each entity is called a Golden Record, and it contains accurate data about each of your customers or products.

In this blog, we will explore the benefits of implementing a master data hub system using Boomi Master Data Hub and how it can help organizations improve their data management processes.

Life Cycle of Master Data Hub: 


A Boomi Master Data Hub’s lifecycle is divided into four phases: define, deploy, synchronize, and stewardship.

Let's dig deeper to gain a better understanding and see how to configure this in the Boomi AtomSphere platform.

 

1. Define: Define your model by identifying the sources, fields, and rules that will form your records.

    • Steps to define Model:
      • Create your Hub repository:
        • Navigate to the Master Data Hub services in Atomsphere.


        • The Repositories tab is the default screen of the Hub platform. Select Create a Repository.
        • A dialog box appears where you select an Atom/Cloud to host your Hub repository and give it a name.


      • Create Hub Sources:

        • Select the Sources tab and select Create Your First Source.
        • Next, in the Create a Source window, configure your source, e.g., MySQL, SFDC, etc.


      • Create a Model:
        • Navigate to the Hub platform, select the Models tab, and then select the Create Your First Model button.
        • In the Model Name field, enter a name for your model.


        • Next, select the Fields option in the Configure Model sub tab.


        • After adding fields, apply Data Quality Steps and Match Rules according to your requirements, then save your model.
  2. Deploy: Once you create and publish a model, you can deploy it to a repository to create a master data domain hosted in that repository, which stores the golden records created from the model's sources.
    • To deploy a model to a repository, switch to the Models tab on the Hub platform screen, select your model, and click the Deploy option.
    • Deploying a Model in Hub:

      • Select the Repositories tab and then select your repository.
      • From the Summary tab, select Deploy Your First Model.
      • Select your model (for example, Contact) in the Model Name menu and the most recent version in the Model Version menu.
    • After the model is deployed successfully, the repository's summary screen reflects the deployed model.
  3. Synchronize: Together, the Integration and Master Data Hub services collect and control the model data so it stays consistent across all sources. Leverage Integration to orchestrate data synchronization and design process flows that ensure data quality.
    • Navigate back to the Integration service by hovering over the Services menu and create a process that uses the Master Data Hub services.
  4. Stewardship: Ensure the quality of your golden records by reviewing and managing potential exceptions to the model's rules. Steward data as it flows into domains to resolve duplicates, fix data entry issues, and identify and correct inaccurate data.

By following the steps above, you can create Golden Records in Boomi using Master Data Hub, helping to ensure that your data is accurate, consistent, and up to date across all your systems.

Benefits of a Master Data Hub System:

  • Centralized Data Management: A master data hub system provides a single, centralized location for storing and managing all critical business data. This ensures data consistency and accuracy across different systems and applications, reducing the risk of errors and data inconsistencies.
  • Improved Data Quality: By maintaining a single source of truth for critical data elements, a master data hub system ensures that all data is accurate, complete, and up to date. This, in turn, improves the overall quality of the data, leading to better decision-making and improved business outcomes.
  • Increased Efficiency: With a master data hub system in place, organizations can eliminate redundant data entry and reduce manual data processing efforts. This, in turn, increases efficiency and productivity, allowing employees to focus on higher-value activities.
  • Faster Time-to-Market: A master data hub system enables organizations to quickly and easily onboard new applications, systems, and data sources. This allows organizations to rapidly deploy new business processes and services, reducing time-to-market and improving agility.
  • Improved Compliance: By maintaining accurate and up-to-date data, a master data hub system helps organizations comply with regulatory requirements and industry standards. This, in turn, reduces the risk of non-compliance and associated penalties.

Note that implementing a master data hub system requires careful planning and execution.

Conclusion

We chose Boomi because it enables seamless data integration and helps users quickly find and leverage the right data to answer any business question. Boomi Flow enables you to quickly build applications and create workflows that make records housed in Boomi Master Data Hub more actionable.

References:

  1. https://help.boomi.com/
  2. https://boomi.com/platform/master-data-hub/

Why Perficient?

Perficient is a Boomi Select Partner with deep expertise in key technologies that support IT modernization. Our expertise, coupled with Boomi's innovative, cloud-native platform, delivers increased value, lowers the cost of ownership, and drives better, faster outcomes for our clients. For more details, visit https://www.perficient.com/who-we-are/partners/boomi

Contact us today to explore options for elevating your business.

 

]]>
https://blogs.perficient.com/2023/04/06/introduction-to-boomi-master-data-hub/feed/ 2 332195
Openshift Essentials and Modern App Dev on Kubernetes https://blogs.perficient.com/2023/03/13/openshift-essentials-and-modern-app-dev/ https://blogs.perficient.com/2023/03/13/openshift-essentials-and-modern-app-dev/#respond Mon, 13 Mar 2023 19:02:00 +0000 https://blogs.perficient.com/?p=322779

Introduction

Whether you have already adopted Openshift or are considering it, this article will help you increase your ROI and productivity by listing 12 essential features included with any Openshift subscription. This is where Openshift shines as a platform compared to pure Kubernetes engine distributions like EKS, AKS, etc., which are more barebones and require quite a bit of setup to be production- and/or enterprise-ready. When you consider the total value of Openshift and factor in the total cost of ownership of the alternatives, Openshift is a very competitive option not only for cost-conscious buyers but also for organizations that like to get things done, and get them done the right way. Here we go:

  1. Managed Openshift in the cloud
  2. Operators
  3. GitOps
  4. Cluster Monitoring
  5. Cluster Logging
  6. Distributed Tracing
  7. Pipelines
  8. Autoscaling
  9. Service Mesh
  10. Serverless
  11. External Secrets
  12. Hyperscaler Operators

Special Bonus: Api Management

ROSA, ARO, ROKS: Managed Openshift in the cloud

If you want an easy way to manage your Openshift cloud infrastructure, these managed Openshift solutions are an excellent value and a great way to get ROI fast. They are pay-as-you-go offerings running on the hyperscaler's infrastructure, and you can save a lot of money by using reserved instances with a one-year commitment. RedHat manages the control plane (master and infra nodes) and you pay a small fee per worker node. We like the seamless integration with native hyperscaler services like storage and node pools for easy autoscaling, zone awareness for HA, networking, and RBAC security with IAM or AAD. Definitely worth considering over EKS/AKS and similar solutions, which are more barebones.

Check out our Openshift Spring Boot Accelerator for ROSA, which leverages most of the tools I’m introducing down below…

Operators

Available by default on Openshift, the OperatorHub is pretty much the app store for Kubernetes. Operators manage the installation, upgrade, and lifecycle of complex Kubernetes-based solutions like the tools presented in this list. They are based on the controller pattern, which is at the core of Kubernetes' architecture, and enable declarative configuration through the use of Custom Resource Definitions (CRDs). Operators are a very common way to distribute 3rd-party software nowadays, and the Operator Framework makes it easy to create custom controllers to automate common Kubernetes operations tasks in your organization.

The OperatorHub included with Openshift out-of-the-box allows you to install said 3rd-party tools with the click of a button, so you can set up a full-featured cluster in just minutes, instead of spending days, weeks, or months gathering installation packages from all over. The Operator Framework supports Helm, Ansible, and plain Go-based controllers to manage your own CRDs and extend the Kubernetes APIs. At Perficient, we leverage custom operators to codify operations of high-level resources like a SpringBootApp. To me, Operators represent the pinnacle of devsecops automation, or at least a giant leap forward.

Openshift GitOps (AKA ArgoCD)

The first thing you should install on your clusters to centralize the management of cluster configuration with Git is GitOps. Openshift GitOps is RedHat's distribution of ArgoCD, delivered as an Operator, and it integrates seamlessly with Openshift RBAC and single sign-on authentication. Instead of relying on a CI/CD pipeline and the oc (kubectl) CLI to implement changes in your clusters, ArgoCD works as an agent running on your cluster which automatically pulls your configuration manifests from a Git repository. This is the single most important tool in my opinion, for many reasons, the main ones being:

  1. Central management and synchronization of multi-cluster configuration (think multi-region active/active setups at the minimum)
  2. Ability to version control cluster states (auditing, rollback, git flow for change management)
  3. Reduction of learning curve for development teams (no new tool required, just git, manage simple yaml files)
  4. Governance and security (quickly propagating policy changes, no need to give non-admin users access to the clusters' APIs)

I have a very detailed series on GitOps on my Perficient blog; it's a must-read whether you're new to Openshift or not.
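To make this concrete, here is a minimal sketch of an Argo CD Application manifest of the kind Openshift GitOps would reconcile; the repository URL, path, and target namespace are placeholders, not values from this article:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cluster-config                  # hypothetical name
  namespace: openshift-gitops           # namespace of the default GitOps Argo CD instance
spec:
  project: default
  source:
    repoURL: https://github.com/your-org/cluster-config.git   # placeholder Git repository
    targetRevision: main
    path: overlays/prod                 # folder of manifests to sync
  destination:
    server: https://kubernetes.default.svc   # the local cluster
    namespace: gitops-managed           # placeholder target namespace
  syncPolicy:
    automated:
      prune: true       # remove resources that were deleted from Git
      selfHeal: true    # revert manual drift back to the Git state

Once applied, Argo CD continuously pulls that Git path and keeps the cluster state in sync with whatever is committed there.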

Cluster Monitoring

Openshift comes with a pre-configured monitoring stack powered by Prometheus and Grafana. Openshift Monitoring manages the collection and visualization of internal metrics like resource utilization, which can be leveraged to create alerts and used as the source of data for autoscaling. This is generally a cheaper and more powerful alternative to the hyperscalers' native monitoring services like CloudWatch and Azure Monitor. Like other RedHat-managed operators, it comes already integrated with Openshift RBAC and authentication. The best part is that it can be managed through GitOps by using the provided, super simple CRDs.

A lesser-known feature is the ability to leverage Cluster Monitoring to collect your own application metrics. This is called user-workload monitoring and can be enabled with one line in a manifest file. You can then create ServiceMonitor resources to indicate where Prometheus can scrape your application's custom metrics, which can then be used to build custom alerts, framework-aware dashboards, and, best of all, serve as a source for autoscaling (beyond CPU/memory). All with a declarative approach which you can manage across clusters with GitOps!
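As a rough sketch of what that looks like (names, namespaces, and the port are placeholders), the one-line enablement plus a ServiceMonitor could be:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    enableUserWorkload: true            # the one line that turns on user-workload monitoring
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-app                          # hypothetical application
  namespace: my-app
spec:
  selector:
    matchLabels:
      app: my-app                       # must match the labels on the application's Service
  endpoints:
    - port: metrics                     # named Service port exposing /metrics
      interval: 30s

Both resources are plain YAML, so they fit naturally into the GitOps repository described above.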

Cluster Logging

Based on a Fluentd-Elasticsearch stack, Cluster Logging can be deployed through the OperatorHub and comes with production-ready configuration to collect logs from the Kubernetes engine as well as all your custom workloads in one place. Like Cluster Monitoring, Cluster Logging is generally a much cheaper and more powerful alternative to the hyperscalers' native services. Again, the integration with Openshift RBAC and single sign-on makes it very easy to secure on day one. The built-in Kibana deployment allows you to visualize all your logs through a web browser without requiring access to the Kubernetes API or CLI. The ability to visualize logs from multiple pods simultaneously, sort and filter messages based on specific fields, and create custom analytics dashboards makes Cluster Logging a must-have.

Another feature of Cluster Logging is log forwarding. Through a simple ClusterLogForwarder CRD, you can easily (and through GitOps too!) forward logs to external systems for additional processing such as real-time notifications or anomaly detection, or simply to integrate with the rest of your organization's logging systems. A great use case of log forwarding is to selectively send log messages to a central location, which is invaluable when managing multiple clusters in an active-active configuration, for example.
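A minimal sketch of such a forwarder, assuming the OpenShift Logging ClusterLogForwarder API; the external Elasticsearch endpoint is a placeholder:

apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
    - name: central-es                                  # hypothetical central log store
      type: elasticsearch
      url: https://elasticsearch.example.com:9200       # placeholder URL
  pipelines:
    - name: forward-app-logs
      inputRefs:
        - application                                   # forward only application logs
      outputRefs:
        - central-es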

Last but not least is the addition of custom Elasticsearch index schema in recent versions, which allows developers to output structured log messages (JSON) and build application-aware dashboards and analytics. This feature is invaluable when it comes to filtering log messages based on custom fields like log levels, or a trace ID, to track logs across distributed transactions (think Kafka messages transiting through multiple topics and consumers). Bonus points for being able to use Elasticsearch as a metrics source for autoscaling with KEDA for example.

Openshift Distributed Tracing

Based on Jaeger and Opentracing, Distributed Tracing can again be quickly installed through the OperatorHub and makes implementing Opentracing for your applications ridiculously easy. Just deploy a Jaeger instance in your namespace and you can annotate any Deployment resource in that namespace with one single line to start collecting your traces. Opentelemetry is invaluable for pinpointing performance bottlenecks in distributed systems. Alongside Cluster Logging with structured logs as mentioned above, it makes up a complete solution for troubleshooting transactions across multiple services if you just log your Opentracing trace IDs.

Openshift Distributed Tracing also integrates with Service Mesh, which we’ll introduce further down, to monitor and troubleshoot traffic between services inside a mesh, even for applications which are not configured with Opentelemetry to begin with.
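As a sketch of how little is involved (names and image are placeholders), the Jaeger instance plus the single-line annotation look like this:

apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: my-jaeger                      # all-in-one Jaeger instance in the application namespace
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service                     # hypothetical application
  annotations:
    sidecar.jaegertracing.io/inject: "true"   # the one line that injects the tracing sidecar
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
        - name: my-service
          image: quay.io/example/my-service:latest   # placeholder image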

Openshift Pipelines

Based on Tekton, Openshift Pipelines allows you to create declarative pipelines for all kinds of purposes. Pipelines are the recommended way to create CI/CD workflows and replace the original Jenkins integration. The granular, declarative nature of Tekton makes creating reusable pipeline steps, tasks, and entire pipelines a breeze, and again can be managed through GitOps (!) and custom operators. Openshift Pipelines can be deployed through the OperatorHub in one click and comes with a very intuitive (Jenkins-like) UI and pre-defined tasks like S2I to containerize applications easily. Creating custom tasks is a breeze as tasks are simply containers, which allows you to leverage the massive ecosystem of 3rd-party containers without having to install anything additional.

You can use Openshift Pipelines for any kind of workflow, from standard CI/CD for application deployments, to on-demand integration tests, to executing operations maintenance tasks, or even step functions. Being Openshift-native, Pipelines are very scalable as they leverage the Openshift infrastructure to execute tasks on pods, which can be finely tuned for maximum performance and high availability, and integrate with Openshift RBAC and storage.
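As a small sketch of that declarative style (task, pipeline, and script contents are placeholders), a Task and the Pipeline that references it could look like this; a real CI/CD pipeline would typically add a git-clone task and a shared workspace:

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: run-tests                      # hypothetical task
spec:
  steps:
    - name: test
      image: registry.access.redhat.com/ubi9/ubi-minimal   # any container image can be a step
      script: |
        echo "running tests against the checked-out sources"
---
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: build-and-test                 # hypothetical pipeline
spec:
  tasks:
    - name: tests
      taskRef:
        name: run-tests                # reuse the Task defined above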

Autoscaling

Openshift supports the three types of autoscalers: the horizontal pod autoscaler, the vertical pod autoscaler, and the cluster autoscaler. The horizontal pod autoscaler is included OOTB alongside the node autoscaler, and the vertical pod autoscaler can be installed through the OperatorHub.

The horizontal pod autoscaler is a controller which increases and decreases the number of pod replicas for a deployment based on CPU and memory metrics thresholds. It leverages Cluster Monitoring to source the Kubernetes pod metrics from the included Prometheus server and can be extended to use custom application metrics. The HPA is great for scaling stateless REST services up and down to maximize utilization and increase responsiveness when traffic increases.
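For example, a plain CPU-based HPA for a hypothetical REST service deployment looks like this (names and thresholds are placeholders):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-rest-service                # hypothetical
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-rest-service              # the Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70       # scale out above 70% average CPU utilization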

The vertical pod autoscaler is another controller which analyzes utilization patterns to optimize pod resource configuration. It automatically tweaks your deployments' CPU and memory requests to reduce waste or under-commitment and ensure maximum performance. It's worth noting that a drawback of VPA is that pods have to be shut down and replaced during scaling operations. Use with caution.

Finally, the cluster autoscaler is used to increase or decrease the number of nodes (machines) in the cluster to adapt to the number of pods and requested resources. The cluster autoscaler paired with the hyperscaler integration with machine pools can automatically create new nodes when additional space is required and remove the nodes when the load decreases. There are a lot of considerations to account for before turning on cluster autoscaling related to cost, stateful workloads requiring local storage, multi-zone setups, etc.  Use with caution too.

Special Mention

Special mention for KEDA, which is not commercially supported by RedHat (yet), although it is actually a RedHat-Microsoft-led project. KEDA is an event-driven scaler which sits on top of the built-in HPA and provides extensions to integrate with 3rd-party metrics-aggregating systems like Prometheus, Datadog, Azure App Insights, and many more. It's best known for autoscaling serverless or event-driven applications backed by tools like Kafka, AMQ, or Azure Event Hubs, but it's very useful for autoscaling REST services as well. Really cool tech if you want to move your existing AWS Lambda or Azure Functions over to Kubernetes.
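Here is a minimal sketch of a KEDA ScaledObject scaling a consumer based on Kafka lag; the broker address, topic, and consumer group are placeholders:

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: orders-consumer-scaler         # hypothetical
spec:
  scaleTargetRef:
    name: orders-consumer              # the Deployment to scale
  minReplicaCount: 0                   # scale to zero when the topic is idle
  maxReplicaCount: 20
  triggers:
    - type: kafka
      metadata:
        bootstrapServers: my-kafka-bootstrap:9092   # placeholder broker
        consumerGroup: orders
        topic: orders
        lagThreshold: "50"             # target lag per replica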

Service Mesh

Service Mesh is supported by default and can be installed through the OperatorHub. It leverages Istio and integrates nicely with other Openshift operators such as Distributed Tracing, Monitoring and Logging, as well as SSO. Service Mesh serves many different functions that you might be managing inside your application today (for example, if you're using Netflix OSS components like Eureka, Hystrix, Ribbon, etc.):

  1. Blue/green deployments
  2. Canary deployments (weighted traffic)
  3. A/B testing
  4. Chaos testing
  5. Traffic encryption
  6. OAuth and OpenID authentication
  7. Distributed tracing
  8. APM

You don't even need to use microservices to take advantage of Service Mesh; a lot of these features apply to re-platformed monoliths as well.

Finally, you can leverage Service Mesh as a simple API management tool thanks to the Ingress Gateway component, in order to expose APIs outside of the cluster behind a single pane of glass.
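As an illustration of the weighted-traffic features above, a canary split between two versions of a hypothetical service could be expressed as:

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: my-service
spec:
  hosts:
    - my-service                       # in-mesh service name (placeholder)
  http:
    - route:
        - destination:
            host: my-service
            subset: v1
          weight: 90                   # 90% of traffic to the stable version
        - destination:
            host: my-service
            subset: v2
          weight: 10                   # 10% canary traffic
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: my-service
spec:
  host: my-service
  subsets:
    - name: v1
      labels:
        version: v1                    # pods labeled version=v1
    - name: v2
      labels:
        version: v2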

Serverless

Now we're getting into real modern application development and deployment. If you want peak performance, want to maximize your compute resources, and/or want to bring down your costs, serverless is the way to go for APIs. Openshift Serverless is based on KNative and provides two main components: Serving and Eventing. Serving handles autoscaling and basic routing for HTTP API containers, while Eventing supports event-driven architectures with CloudEvents.
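To illustrate Serving, here is a minimal KNative Service sketch with a placeholder image; KNative routes HTTP traffic to it and scales it, including down to zero, based on load:

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello-api                      # hypothetical service
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/target: "50"   # target concurrent requests per pod
    spec:
      containers:
        - image: quay.io/example/hello-api:latest   # placeholder image
          ports:
            - containerPort: 8080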

If you’re familiar with AWS Lambda or Azure Functions, Serverless is the equivalent in the Kubernetes world, and there are ways to migrate from one to the other if you want to leverage more Kubernetes in your infrastructure.

We can build a similar solution with some of the tools we already discussed like KEDA and Service Mesh, but KNative is a more opinionated model for HTTP-based serverless applications. You will get better results with KNative if you’re starting from scratch.

The big new thing is Eventing, which promotes a message-based approach to service-to-service communication (as opposed to point-to-point). If you've used that kind of decoupling before, you might have used Kafka, AWS SQS, or other types of queues to decouple your applications, and maybe Mulesoft, Spring Integration, or Camel (Fuse) to produce and consume messages. KNative Eventing is a unified model for message formats with CloudEvents, and it abstracts the transport layer with a concept called the event mesh. Check it out: https://knative.dev/docs/eventing/event-mesh/#knative-event-mesh.

External Secrets Add-On

One of the first things to address when deploying applications to Kubernetes is the management of sensitive configuration values like passwords for external systems. Though Openshift doesn't officially support loading secrets from external vaults, there are widely used solutions which are easily set up on Openshift clusters:

  • Sealed Secrets: if you just want to manage your secrets in Git, you cannot store them in cleartext, even if you're using GitHub or another Git provider. SealedSecrets allows you to encrypt secrets in Git so they can only be read by your Openshift cluster. This requires an extra step before committing, using the provided client certificate, but doesn't require a 3rd-party store.
  • External Secrets: this operator allows you to map secrets stored in external vaults like Hashicorp, Azure Vault and AWS Secret Manager to internal Openshift secrets. Very similar to the CSI driver below, it essentially creates a Secret resource automatically, but doesn’t require an application deployment manifest to be modified in order to be leveraged.
  • Secrets Store CSI Driver: another operator which syncs an external secrets store to an internal secret in Openshift but works differently than the External Secrets operator above. Secrets managed by the CSI driver only exist as long as a pod using it is running, and the application’s deployment manifest has to explicitly “invoke” it. It’s not usable for 3rd party containers which are not built with CSI driver support out-of-the-box.

Each has its pros and cons depending on whether you're in the cloud, whether you use GitOps, your organization's policies, existing secrets management processes, etc. If you're starting from scratch and are not sure which one to use, I recommend starting with External Secrets and your cloud provider's secret store, like AWS Secrets Manager or Azure Key Vault.
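As a sketch of the External Secrets approach (the store name and the path in the external vault are placeholders), a mapped secret looks like this:

apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: db-credentials                 # hypothetical
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: aws-secrets-manager          # a previously configured ClusterSecretStore
    kind: ClusterSecretStore
  target:
    name: db-credentials               # the Kubernetes Secret that gets created
  data:
    - secretKey: password
      remoteRef:
        key: prod/db                   # placeholder path in the external vault
        property: password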

Special Mention: Hyperscalers Operators

If you're running on AWS or Azure, each cloud provider has released its own operators to manage cloud infrastructure components through GitOps (think vaults, databases, disks, etc.), allowing you to consolidate all your cloud configuration in one place instead of using additional tools like Terraform and CI/CD. This is particularly useful when automating integration or end-to-end tests with ephemeral Helm charts to set up the various components of an application.

API Management Add-On

Mulesoft, Boomi, or Cloud Pak for Integration customers: this is an add-on, but it's well worth considering if you want to reduce your APIM costs. The RedHat Application Foundations and Integration suites include a bunch of cool tech like Kafka (with a registry) and AMQ, SSO (SAML, OIDC, OAuth), runtimes like Quarkus, Spring, and Camel, 3scale for API management (usage plans, keys, etc.), CDC, caching, and more.

Again because it’s all packaged as an operator, you can install and start using all these things in just a few minutes, with the declarative configuration goodness that enables GitOps and custom operators.

]]>
https://blogs.perficient.com/2023/03/13/openshift-essentials-and-modern-app-dev/feed/ 0 322779
Integrating Terraform with Jenkins (CI/CD) https://blogs.perficient.com/2022/06/01/integrating-terraform-with-jenkins-ci-cd/ https://blogs.perficient.com/2022/06/01/integrating-terraform-with-jenkins-ci-cd/#comments Wed, 01 Jun 2022 16:57:44 +0000 https://blogs.perficient.com/?p=310456

Automated Infrastructure (AWS) Setup Using Terraform and Jenkins (Launch EC2 and VPC)

In this blog, we will discuss how to execute Terraform code using Jenkins and set up AWS infrastructure such as an EC2 instance and a VPC.

For those of you who are unfamiliar with Jenkins, it is an open-source continuous integration and continuous delivery automation tool that allows us to implement CI/CD workflows, called pipelines.

Getting to Know the Architecture


What is Terraform? – Terraform is an infrastructure-as-code tool delivered by HashiCorp. It is a tool for building, changing, and managing infrastructure in a safe, repeatable way.

What is Jenkins? – An open-source continuous integration/continuous delivery and deployment (CI/CD) automation tool written in the Java programming language. It is used to implement CI/CD workflows, called pipelines.

What is Infrastructure as Code? – It is the process of managing infrastructure in a file, or files, rather than manually configuring resources in a user interface.

 

Advantages of Continuous Integration/Continuous Deployment –

  • Small code changes are easier and less consequential.
  • Isolating faults is easier and faster.
  • Testability is enhanced through smaller, specific changes.

 

The Terraform workflow consists of three stages:

  1. Write: You define resources, which can be split across several cloud providers and services.
  2. Plan: Terraform creates an execution plan based on your configuration and existing infrastructure, describing what it will create, update, or destroy.
  3. Apply: Terraform performs the planned operations in the correct sequence.

 

In this article, we will cover the basic functions of Terraform to create infrastructure on AWS.
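Before wiring anything into Jenkins, it helps to see the kind of .tf file the pipeline will execute. The snippet below is a rough sketch only; the region, CIDR blocks, and AMI ID are placeholders you would replace with your own values:

provider "aws" {
  region = "us-east-1"                       # placeholder region
}

resource "aws_vpc" "demo_vpc" {
  cidr_block = "10.0.0.0/16"                 # placeholder CIDR
}

resource "aws_subnet" "demo_subnet" {
  vpc_id     = aws_vpc.demo_vpc.id
  cidr_block = "10.0.1.0/24"
}

resource "aws_instance" "demo_ec2" {
  ami           = "ami-0123456789abcdef0"    # placeholder AMI ID for your region
  instance_type = "t2.micro"
  subnet_id     = aws_subnet.demo_subnet.id  # launch the instance inside the VPC's subnet

  tags = {
    Name = "jenkins-terraform-demo"
  }
}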

 

  1. Launch One Linux Machine and Install Jenkins. 


    • The initial admin password is generated and stored on the server. To access it, you will need to run the below command.
      • # cat /var/lib/jenkins/secrets/initialAdminPassword
      • Then, customize Jenkins, create the first admin user, and click Save and Continue.


  2. Install the Terraform Plugin in Jenkins
    • In the Jenkins console, go to Manage Jenkins > Manage Plugins > Available and search for “Terraform”.


  3. Configure Terraform
    • You will need to manually set up Terraform on the same server as Jenkins using the steps below.
      • In Manage Jenkins > Global Tool Config > Terraform
      • Add Terraform.
      • Uncheck the “Install Automatically” check box.
      • Name: Terraform
      • Install Directory: /usr/local/bin/


    • After setting up Terraform on the Jenkins server, install Git on your Jenkins VM and write your Terraform code in a .tf file (along the lines of the sketch shown earlier).


  4. Integrate Jenkins with Terraform and our GitHub Repository
    • We need to create a new project to run Terraform using Jenkins.
    • In Jenkins, go to New Item, enter an item name, and create a Pipeline.
    • Now, we will write the script for the GitHub and Terraform job. Here we can use the Jenkins syntax generator to help write the script.


pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps {
                checkout([$class: 'GitSCM', branches: [[name: '*/main']], extensions: [], userRemoteConfigs: [[url: 'https://github.com/suraj11198/Terraform-Blog.git']]])
            }
        }
        stage('terraform init') {
            steps {
                sh 'terraform init'
            }
        }
        stage('terraform Action') {
            steps {
                // "action" is assumed to be a build parameter on the job (for example a
                // Choice parameter with the values "apply" and "destroy"); Jenkins exposes
                // it as an environment variable, so the shell resolves ${action} at runtime.
                echo "Terraform action is --> ${action}"
                sh 'terraform ${action} --auto-approve'
            }
        }
    }
}

 

  5. Using the Previous Steps, We Should Have Successfully Built Our Job


  6. Our EC2 Instance and VPC are Created, and the Same VPC is Attached to Our EC2 Instance


 

Summary:

Using Terraform driven from a Jenkins pipeline, we built an EC2 instance and a VPC on AWS remotely.

We have touched on the basics of Terraform's and Jenkins' capabilities. Terraform offers several functionalities for the construction, modification, and versioning of infrastructure.

 

How Can Perficient Help You?

Perficient is a certified Amazon Web Services partner with more than 10 years of experience delivering enterprise-level applications and expertise in cloud platform solutions, contact center, application modernization, migrations, data analytics, mobile, developer and management tools, IoT, serverless, security, and more. Paired with our industry-leading strategy and team, Perficient is equipped to help enterprises tackle the toughest challenges and get the most out of their implementations and integrations.

Learn more about our AWS practice and get in touch with our team here!

]]>
https://blogs.perficient.com/2022/06/01/integrating-terraform-with-jenkins-ci-cd/feed/ 7 310456