Postbot–An AI Bot that Generates Scripts for You
https://blogs.perficient.com/2025/06/05/postbot-an-ai-bot-that-generates-scripts-for-you/

In the fast-paced world of software development, automated API testing is no longer optional—it’s essential. If you’re looking for a sleek, intelligent tool to streamline your API testing workflow, Postbot might be exactly what you need.

In this blog post, we’ll explore what Postbot is, its core features, and how you can start using it today. We’ll also walk through an example with screenshots so you can follow along step by step.

What is Postbot?

Postbot is an AI assistant developed by Postman that enables users to create and execute API tests without writing any code. Isn’t that interesting? Let’s dive a little deeper and see how it works.

Install Postman on your System

The first step is to download and install Postman, or you can use the web version instead. Download the appropriate installer for your operating system (Windows, macOS, or Linux) from the official Postman website, then run it and follow the instructions to complete the installation. After installation, launch Postman and sign in or create a new account.

How to Use Postbot?

The next step is to create a request and start writing test scripts using Postman.

You can locate the Postbot icon under the Scripts tab, or at the bottom right side of the Postman window, or simply press Ctrl+Alt+P. You will get the screen below.

[Screenshot: Postbot]

Now, hit the Send button, and the request will fetch a response, as shown in the screenshot below.

[Screenshot: Postbot an AI tool]

After getting the response, click on the Postbot icon and start scripting. Begin by simply typing “Add test for validating status” and pressing Enter; Postbot will generate a test for you.

```
pm.test("Status code is 201", function () {
    pm.response.to.have.status(201);
});
```

Click the Send button again, and you will see that the test case for the status code has passed.

[Screenshot: Postbot Generative AI tool]

Similarly, you can ask Postbot any test case you want in simple English, and it will generate tests for you.

One more feature of Postbot is that it can repair your existing test scripts. For example, take the test you just created to validate the status code, change the expected status code to 300, type “Repair the test case for status” in Postbot, and hit Enter. That’s it: your test gets corrected successfully.

[Screenshot: Postbot Repair]
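
As a hedged illustration (Postbot’s exact output can vary), the broken and repaired assertions might look like this:

```
// Broken: the expected status was manually changed to 300, so the test fails
pm.test("Status code is 300", function () {
    pm.response.to.have.status(300);
});

// After asking Postbot to repair it, the assertion matches the actual
// response status again (201 in this example)
pm.test("Status code is 201", function () {
    pm.response.to.have.status(201);
});
```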

We can also generate tables and charts from responses by using Postbot’s “Visualize Response” feature.

[Screenshot: Postbot Visualize]
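
Under the hood, Postman visualizations are driven by the pm.visualizer API. Below is a minimal sketch of the kind of script this feature produces, assuming a list endpoint such as the reqres users API mentioned at the end of this post (the template markup is illustrative):

```
// Handlebars template rendered in Postman's Visualize tab
var template = `<table>
    {{#each users}}
    <tr><td>{{id}}</td><td>{{email}}</td></tr>
    {{/each}}
</table>`;

// Bind the template to the "data" array from the response body
pm.visualizer.set(template, { users: pm.response.json().data });
```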

Postbot provides easy API testing by enabling users to create and run tests without requiring any coding skills or knowledge. Its user-friendly design, AI-driven automation, and real-time support make it suitable for both novices and seasoned developers. By handling repetitive tasks and delivering precise validations, Postbot enhances productivity, minimizes mistakes, and ensures thorough API test coverage. Whether your goal is to enhance workflow efficiency or ensure seamless API performance, Postbot provides a reliable and streamlined solution. It enables development teams to concentrate on innovation instead of manual testing, making it a perfect fit for fast-paced, agile environments.

There are many more things you can do with Postbot. Stay tuned for more great features of Postbot in the upcoming blogs.

Happy Reading!

To explore more about the Postbot tool, you can go through the links below:

https://www.postman.com/product/postbot/

https://www.frugaltesting.com/blog/how-to-use-postbot-for-api-testing-write-api-tests-with-zero-code

NOTE: You can use the API https://reqres.in/api/users for practice.

IOT and API Integration With MuleSoft: The Road to Seamless Connectivity
https://blogs.perficient.com/2025/05/21/iot-and-api-integration-with-mulesoft-the-road-to-seamless-connectivity/

In today’s hyper-connected world, the Internet of Things (IoT) is transforming industries, from smart manufacturing to intelligent healthcare. However, the real potential of IoT lies in connecting continuously with enterprise systems, providing real-time insights and automation. This is where MuleSoft’s Anypoint Platform comes in: a disruptive approach to integrating IoT devices and APIs into a single ecosystem. This blog explains how MuleSoft sets the stage for seamless connectivity and provides a strong foundation for IoT and API integration that goes beyond a standalone dashboard to offer scalability, security, and efficiency.

Objective

In this blog, I will show MuleSoft’s ability to integrate IoT devices with enterprise systems through API connectivity, focusing on real-time data processing. I will provide an example of how MuleSoft’s Anypoint Platform connects to an MQTT broker and processes IoT device sensor data. The example highlights MuleSoft’s ability to handle IoT protocols like MQTT and transform data for insights.

How Does MuleSoft Facilitate IoT Integration?

MuleSoft’s Anypoint Platform combines API-led connectivity, native protocol support, and a comprehensive integration framework to handle the complications of IoT integration. This is how MuleSoft makes IoT integration comfortable:

  1. API Connectivity for Scalable Ecosystems

MuleSoft’s API-led strategy categorizes integrations into System, Process, and Experience APIs, allowing modular connections between IoT devices and enterprise systems. For example, in a smart city, System APIs gather data from traffic sensors, while Process and Experience APIs aggregate it and feed insights into a dashboard. This scalability avoids the chaos of point-to-point integrations, a fault in most visualization-focused tools.

  2. Native IoT Protocol Support

IoT devices rely on protocols such as MQTT, AMQP, and CoAP, all of which MuleSoft supports. This enables direct communication between sensors and gateways without extra middleware. For example, MuleSoft can route MQTT data from temperature sensors to a cloud platform such as Azure IoT Hub more easily than tools that require custom plugins.

  3. Real-Time Processing and Automation

IoT requires real-time data processing, and MuleSoft’s runtime engine processes data streams in real time while supporting automation. For example, if a factory sensor picks up a fault, MuleSoft can invoke an API to notify maintenance teams and update systems. MuleSoft integrates visualization with actionable workflows.

  4. Pre-Built Connectors for Setup

MuleSoft’s Anypoint Exchange provides connectors for IoT platforms (e.g., AWS IoT) and enterprise systems (e.g., Salesforce). In healthcare, connectors link patient wearables to EHRs, reducing development time. This plug-and-play approach beats custom integrations commonly required by other tools.

  5. Centralized Management and Security

IoT devices manage sensitive information, and MuleSoft maintains security through API encryption and OAuth. Its Management Center provides a dashboard to track device health and data flows, offering centralized control that standalone dashboard applications cannot provide without additional infrastructure.

  6. Hybrid and Scalable Deployments

MuleSoft’s hybrid model supports both on-premises and cloud environments, providing flexibility for IoT deployments. Its scalability handles growing networks, such as fleets of connected vehicles, making it a future-proof solution.

Building a Simple IoT Integration with MuleSoft

To demonstrate MuleSoft’s IoT integration, I have created a simple flow in Anypoint Studio that connects to an MQTT broker, processes sensor data, and logs it to the console. The flow subscribes to a public broker, with MQTT Explorer acting as the client that simulates IoT sensor data. The following are the steps for the Mule API flow:

[Diagram: Mule API flow]

Step 1: Setting Up the Mule Flow

In Anypoint Studio, create a new Mule project (e.g., ‘IoT-MQTT-Demo’). Design a flow with an MQTT Connector to subscribe to the broker, a Transform Message component to process data, and a Logger to output results.

[Screenshot: Step 1]

Step 2: Configuring the MQTT Connector

Configure the MQTT Connector properties. In General Settings, point the connector at the public broker (tcp://test.mosquitto.org:1883). Add the topic filter iot/sensor/data and select QoS AT_MOST_ONCE.

[Screenshot: Step 2]

Step 3: Transforming the Data

Use DataWeave to parse the incoming JSON payload (e.g., {"temperature": 25.5}) and add a timestamp. The DataWeave code is:

```
%dw 2.0
output application/json
---
{
    sensor: "Temperature",
    value: read(payload, "application/json").temperature default "",
    timestamp: now()
}
```
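
With the sample payload above, this transform produces JSON along these lines (the timestamp value is illustrative):

```
{
    "sensor": "Temperature",
    "value": 25.5,
    "timestamp": "2025-05-21T09:08:59.123Z"
}
```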

[Screenshot: Step 3]

Step 4: Connect to MQTT

Click on Connections and use the credentials shown below to connect MQTT Explorer to the broker:

[Screenshot: Step 4]

Step 5: Simulating IoT Data

Once MQTT Explorer is connected, publish a sample message {"temperature": 28} to the topic iot/sensor/data; the message is delivered to the Mule flow as shown below.

[Screenshot: Step 5]
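
If you prefer to script the publisher rather than click through MQTT Explorer, here is a minimal Node.js sketch using the mqtt npm package (an assumption; any MQTT client would work):

```
// Publish one sample reading to the public Mosquitto test broker
const mqtt = require("mqtt");
const client = mqtt.connect("mqtt://test.mosquitto.org:1883");

client.on("connect", () => {
    client.publish("iot/sensor/data", JSON.stringify({ temperature: 28 }), () => {
        client.end(); // close the connection once the message is sent
    });
});
```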

Step 6: Logging the Output

Run the API, publish the message from MQTT Explorer, and the processed data will be logged to the console. Below is an example log:

[Screenshot: Step 6]

The above example highlights MuleSoft’s process for connecting IoT devices, processing data, and preparing it for visualization or automation.

Challenges in IoT Integration and MuleSoft’s Solutions

IoT integration faces several challenges:

  • Device and Protocol Diversity: IoT ecosystems involve diverse devices, such as sensors and gateways, using protocols like MQTT or HTTP and different data formats, such as JSON, XML, or binary.
  • Data Volume and Velocity: IoT devices generate high volumes of real-time data, which require efficient processing to avoid bottlenecks.
  • Security and Authentication: IoT devices are often insecure and require protections such as TLS for secure communication and OAuth for device authentication.
  • Data Transformation and Processing: IoT devices often send binary payloads, which must be transformed into formats like JSON and enriched before use.

The Future of IoT with MuleSoft

The future of IoT with MuleSoft is promising. MuleSoft uses the Anypoint Platform to solve critical integration issues: it integrates diverse IoT devices and protocols, such as MQTT, to provide seamless data flow between ecosystems, and it supports real-time data processing and analytics integration, with security provided through TLS and OAuth.

Conclusion

MuleSoft’s Anypoint Platform streamlines IoT and API integration by providing a scalable, secure, real-time solution for connecting devices to enterprise systems. As the example showed, MuleSoft can consume MQTT-based IoT data and transform it into useful insights without external scripts or custom middleware. By addressing challenges like data volume and security, MuleSoft provides a platform for building IoT ecosystems that deliver automation and insight. As IoT keeps growing, MuleSoft’s API connectivity and native protocol support position it as an enabler of smart cities, healthcare, and more. Explore MuleSoft’s Anypoint Platform to unlock the full potential of your IoT projects and set the stage for a connected future.

From IBM APIC to Apigee: Your Step-by-Step Migration Journey
https://blogs.perficient.com/2025/02/24/from-ibm-apic-to-apigee-your-step-by-step-migration-journey/

What is an API, and What is API Migration?

An API (Application Programming Interface) is a set of guidelines and protocols that allows one software application to communicate with another. API migration refers to the process of moving an API from one environment, platform, or version to another.

What is IBM API Connect?

IBM API Connect is an integrated API management platform designed by IBM to create, manage, secure, and socialize APIs across different environments (cloud, on-premises, or hybrid). Below are the steps to go through the APIC interface.

What is Apigee?

Apigee is a full lifecycle API management platform developed by Google Cloud, designed to help organizations create, manage, secure, and scale APIs. Enterprises prefer Apigee because of its robust security features, advanced analytics capabilities, scalability for large enterprises, and compatibility with multiple clouds. Below are the steps to go through the Apigee interface.

Why Are APIC and Apigee Needed?

IBM API Connect and Apigee are two comprehensive API management tools that allow organizations to create, secure, manage, and analyze APIs. Here are the key capabilities that make them necessary:

  • API Management and Governance
  • Security and Compliance
  • API Analytics and Monitoring
  • Developer Ecosystem Management

Why would a company choose to switch from APIC to Apigee, and what are the advantages?

An organization will choose API migration when it needs to improve its API infrastructure, adapt to new business needs, or adopt better technologies. Choosing between Apigee and IBM API Connect depends on the specific needs and priorities of an organization, as each platform has its strengths. However, Apigee may be considered better than IBM API Connect in certain respects based on features, usability, and industry positioning. Apigee is also more flexible: we can easily analyze API monitoring and API metrics and generate custom reports. The following are some advantages that make Apigee a better option:

  • Google Cloud Integration and Ecosystem
  • Advanced Analytics and Monitoring
  • Developer Experience
  • Security and Rate Limiting
  • API Monetization

Migration Process:

[Diagram: Migration process]

 

Applications Used to Migrate:

Below are the applications that we have utilized in the process of migration.

  • IBM API Connect
  • Apigee Edge/Apigee Hybrid
  • Swagger Editor

IBM API Connect

Fetching APIC migration details

  • To migrate an API or product from API Connect, go to the login page, provide your username and password, and then click Sign In.

APIs:

  • Access APIs by clicking on the APIs tab.
  • After locating the API details, confirm that the type is REST/SOAP and, if multiple versions are displayed, choose the appropriate one. [Screenshot: API search]
  • Next, choose the API and navigate to the Assemble section to determine whether the API is Passthrough or Non-Passthrough.
  • Proceed to the Design page and take note of the following mandatory information:
    1. Name of the API
    2. Basepath
    3. Consumes (JSON/XML)
    4. Security Definitions, Security
    5. Properties -> Backend Endpoint URL
    6. Paths

[Screenshots: Design parameters; Source tab]

  • Next, navigate to the API’s source page and retrieve the swagger file that is accessible.

Products:

  • Select the Products tab, use the search box to locate the right product, and then click on it.

[Screenshots: Products search; Product design parameters]

  • Determine how many APIs refer to the same product.
  • Verify the number of plans available for that product.
  • Next, select each plan and take note of the required fields shown below.
    1. Rate Limits (calls/time interval)
    2. Burst Limit (calls/time interval)

Apigee Edge/Apigee Hybrid

Migration of APIs and Products in Apigee

  • Go to the login page, enter your username and password, and then click “sign in” to create an API or product.

APIs:

  • To build a new API, select the API Proxies section and click +Proxy.
  • Choose Reverse Proxy/No Target to manually construct an API.

[Screenshot: API proxies]

  • For a reverse proxy, provide the API name, base path, and target server that we noted from IBM API Connect.
  • After creating the proxy, make sure to establish the flow paths in accordance with APIC, including the GET, POST, PUT, and DELETE methods.

[Screenshot: Conditional flow]

  • Click on the Policies section to add Traffic Management, Security, Mediation, and Extension policies as per APIC and our requirements.

[Screenshot: Policies]

  • Using the host and port from the APIC Endpoint URL, establish a target server, modify the Apigee Target Endpoint XML code as needed, and make the URL dynamic.

```
<HTTPTargetConnection>
    <SSLInfo>
        <Enabled>true</Enabled>
    </SSLInfo>
    <LoadBalancer>
        <Server name="TS-testAPI"/>
    </LoadBalancer>
    <Path>/</Path>
</HTTPTargetConnection>
```

Compare and debug the flow:

  • After the API development is completed, we must verify and compare the API flow between API Connect and Apigee to determine whether the flow looks similar.
  • Once the API has been implemented, deploy it to the appropriate environment and begin testing it using the client’s provided test data. After hitting the proxy endpoint URL, check the flow by using the DEBUG/TRACE section.
  • Pre-production testing should be done by the client using real-time data to verify the service’s end-to-end functioning status prior to the production deployment.

Products:

  • Click on the API Products section and click on + API Product to create a new product.
  • Provide the product name, display name, quota, and burst limits that we noted from IBM API Connect.
  • Then add APIs that refer to the existing product in the Operations (In Hybrid)/API Resources (In Edge) section.

[Screenshot: Create product]

  • If the product contains more than one plan in APIC, repeat the same process and provide required fields to create other plans.

Swagger Editor

Swagger Editor is an open-source, browser-based tool that allows developers to design, define, edit, and document APIs using the OpenAPI Specification (OAS) format.

  • Since we collected the swagger file from APIC, we need to edit it to meet our requirements and, if necessary, change the swagger file’s version using the Swagger Editor.
  • From the swagger file we can remove IBM-related tags and add our own security variables as per our code (see the sketch below).
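
As a hedged illustration, an APIC export typically carries IBM vendor extensions under an x-ibm-configuration key; the values below are placeholders, not taken from the original post:

```
# Before: vendor extension block that an APIC export may carry (delete it)
x-ibm-configuration:
  assembly:
    execute:
      - invoke:
          target-url: "https://backend.example.com/users"

# After: keep standard OpenAPI fields and add your own security definition
securityDefinitions:
  apiKeyAuth:
    type: apiKey
    in: header
    name: x-api-key
```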

Apigee Portal Publishing:

  • The swagger file must be published on the Apigee developer portal once it is ready.
  • Go to the Apigee Home page, select the Portals section, and then click on API Catalog to begin the portal publishing process.
  • Click the plus button to add an API product to the catalog. After choosing the product, click Next, fill out the required fields below, and then click Save to publish.
  • Check the published check box.
  • Check the OpenAPI document in the API documentation section.
  • Select the swagger file and upload.
  • Select API visibility as per the specification.

[Screenshot: API catalog]

Summary:

Migrating from IBM API Connect (APIC) to Apigee involves moving API management capabilities to the Apigee platform to leverage its more advanced features for design, deployment, and analytics. The process of migration involves the assessment of existing APIs and dependencies, exporting and adapting API definitions, mapping and recreating policies like authentication and rate limiting, and thorough testing to ensure functionality in the new environment.

Prospective Developments in API and APIGEE Management: A Look Ahead for the Next Five Years
https://blogs.perficient.com/2025/02/12/prospective-developments-in-api-and-apigee-management-a-look-ahead-for-the-next-five-years/

Application programming interfaces, or APIs, are crucial to the ever-changing digital transformation landscape because they enable businesses to interact with their data and services promptly and effectively. Effective administration is therefore necessary to guarantee that these APIs operate as intended, remain secure, and offer the intended advantages. This is where Apigee, Google Cloud’s premier API management solution, is helpful.

What is Apigee?

Apigee is an excellent tool for businesses wanting to manage their APIs smoothly. It simplifies the process of creating, scaling, securing, and deploying APIs, making developers’ work easier. One of Apigee’s best features is its flexibility—it can manage both external APIs for third-party access and internal APIs for company use, making it suitable for companies of all sizes. Apigee also works well with security layers like Nginx, which adds a layer of authentication between Apigee and backend systems. This flexibility and security make Apigee a reliable and easy-to-use platform for managing APIs.

What is Gemini AI?

Gemini AI is an advanced artificial intelligence tool that enhances the management and functionality of APIs. Think of it as a smart assistant that helps automate tasks, answer questions, and improve security for API systems like Apigee. For example, if a developer needs help setting up an API, Gemini AI can guide them with instructions, formats, and even create new APIs based on simple language input. It can also answer common user questions or handle customer inquiries automatically, making the whole process faster and more efficient. Essentially, Gemini AI brings intelligence and automation to API management, helping businesses run their systems smoothly and securely.

Why Should Consumers Opt for Gemini AI with Apigee?

Consumers should choose Gemini AI with Apigee because it offers more innovative, faster, and more secure API management. It also brings security, efficiency, and ease of use to API management, making it a valuable choice for businesses that want to streamline their operations and ensure their APIs are fast, reliable, and secure. Here are some key benefits: Enhanced Security, Faster Development, and Time-Saving Automation.

Below is the flow diagram for Prospective Developments in APIGEE.

[Diagram: Prospective developments in Apigee]


Greater Emphasis on API Security

  • Zero Trust Security:  The Zero Trust security approach is founded on “never trust, always verify,” which states that no device or user should ever be presumed trustworthy, whether connected to the network or not. Each request for resource access under this architecture must undergo thorough verification.
  • Zero Trust Models: APIs will increasingly adopt zero-trust security principles, ensuring no entity is trusted by default. The future of Zero Trust in Apigee will likely focus on increasing the security and flexibility of API management through tighter integration with identity management, real-time monitoring, and advanced threat protection technologies (see the policy sketch after this list).
  • Enhanced Data Encryption: Future developments might include more substantial data encryption capabilities, both in transit and at rest, to protect sensitive information in compliance with Zero Trust principles.
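
In Apigee terms, a Zero Trust posture usually starts with verifying every caller’s token at the proxy. Below is a minimal sketch of Apigee’s VerifyJWT policy; the issuer, JWKS URI, and audience values are placeholders, not from the original post:

```
<!-- Verify an RS256-signed access token on every request -->
<VerifyJWT name="VJ-VerifyAccessToken">
    <Algorithm>RS256</Algorithm>
    <PublicKey>
        <JWKS uri="https://idp.example.com/.well-known/jwks.json"/>
    </PublicKey>
    <Issuer>https://idp.example.com/</Issuer>
    <Audience>my-api</Audience>
</VerifyJWT>
```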



Resiliency and Fault Tolerance

 The future of resiliency and fault tolerance in Apigee will likely involve advancements and innovations driven by evolving technological trends and user needs. Here are some key areas where we can expect Apigee to enhance its resiliency and fault tolerance capabilities.


  • Automated Failover: Future iterations of Apigee will likely have improved automated failover features, guaranteeing that traffic is redirected as quickly as possible in case of delays or outages. More advanced failure detection and failover methods could be a part of this.
  • Adaptive Traffic Routing: Future updates could include more dynamic and intelligent traffic management features. This might involve adaptive routing based on real-time performance metrics, enabling more responsive adjustments to traffic patterns and load distribution.
  • Flexible API Gateway Configurations: Future enhancements could provide more flexibility in configuring API gateways to better handle different fault scenarios. This includes custom policies for fault tolerance, enhanced error handling, and more configurable redundancy options.

Gemini AI with Apigee

Gemini AI and Apigee’s integration has the potential to significantly improve API administration by making it more intelligent, secure, and usable. Organizations can anticipate improved security, more effective operations, and a better overall user and developer experience by utilizing cutting-edge AI technologies. This integration may open the door to further breakthroughs as AI and API management technologies develop. If the API specifications currently available in API Hub do not satisfy your needs, you can utilize Gemini to create a new one by simply stating your needs in plain English, saving considerable time in development and assessment cycles.

While you add policies during Apigee development, Gemini AI can point you to the relevant policy documentation in parallel and guide you on the formats used in the policies. We can also automate question handling, chatbot-style, with Gemini AI, and use it to answer questions about the APIs available on the Apigee portal.

If an integration is already in use, we can use Gemini AI to accept inquiries from customers or clients and automate responses to the most frequently asked questions. Gemini AI can also keep replying to customers until our professionals become available.


Overview

Apigee, Google Cloud’s API management platform, plays a key role in digital transformation by securely and flexibly connecting businesses with data and services. Future advancements focus on stronger security with a “Zero Trust” approach, improved resilience through automated failover and adaptive traffic routing, and enhanced flexibility in API gateway settings. Integration with Gemini AI will make Apigee smarter, enabling automated support, policy guidance, API creation, streamlining development, and improving customer service.

Legacy Systems Explained: Why Upgrading Them is Crucial for Your Business
https://blogs.perficient.com/2024/12/04/legacy-systems-explained-why-upgrading-them-is-crucial-for-your-business/

What are Legacy Systems? Why is Upgrading those Systems Required?

Upgrading means more than just making practical improvements to keep things running smoothly. It is about addressing immediate needs rather than chasing a perfect but impractical solution, because the situation can spiral out of control when systems stop functioning properly in real time.

One such incident happened on January 4, 2024, when South Africa’s Department of Home Affairs was taken offline nationwide due to a mainframe failure. Mainframe failures in such contexts are high-stakes issues because they impact the core infrastructure that supports vital public services. In South Africa, where the Department of Home Affairs handles a range of essential services such as passports, identity documents, and immigration control, a mainframe failure can have widespread repercussions, leading to backlogs, delays, and potential administrative chaos. The department is a clear example of a critical legacy system facing significant risks due to its outdated technology and operational challenges.

Addressing these issues through modernization and digital transformation is crucial for improving service delivery and ensuring the system’s continued effectiveness and security. One cannot work on migrating the legacy system in one go, as the business and functional side of testing is a must. A planned and systematic approach is needed while upgrading the legacy system.

 

Question: What is the solution to avoid such a case?
Answer: Modernization of Legacy code.

Legacy code modernization is the process of improving and updating outdated software systems to make them more maintainable, scalable, and compatible with modern technologies. Let’s understand this using Apigee (an API management tool).

1. Scalability

Legacy system: Legacy systems were designed to solve the tasks of their day, but they scaled poorly: capacity was constrained by the infrastructure they ran on, which limited business improvements.
Apigee: Thanks to its easy scalability, centralized monitoring, and integration capabilities, Apigee helps organizations plan their approach to business improvements.

2. Security

Legacy system: One of the simplest methods for authenticating users in legacy systems was “Basic Authentication,” where the client sends a username and password in every HTTP request. This method is vulnerable to man-in-the-middle (MITM) attacks if not combined with HTTPS, and credentials are exposed on each request.

Apigee: Using Apigee, the organization can quickly implement modern security features like OAuth, API key validation, rate limiting, and threat protection (e.g., bot detection) without changing the core logic of the APIs.
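
As a hedged sketch of how lightweight that can be, the Apigee policy that replaces a hand-rolled credential check can be as small as the following (the policy name and query parameter are illustrative):

```
<!-- Reject any request that does not carry a valid API key -->
<VerifyAPIKey name="VK-VerifyKey">
    <APIKey ref="request.queryparam.apikey"/>
</VerifyAPIKey>
```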

3. User and Developer Experience

Legacy system: The legacy API lacks good documentation, making it harder for external developers to integrate with it. Most systems tend to have a SOAP-based communication format.
Apigee: Apigee provides a built-in API portal, automatic API documentation, and testing tools, improving the overall developer experience and adoption of the APIs so that integration with other tools can be easy and seamless with modern standards.


There are now multiple ways to migrate data from legacy to modern systems, which are listed below.

1. Big Bang Migration
2. Phased Migration
3. Parallel Migration
4. Pilot Migration
5. Hybrid Migration
and more…

Although these things are known to legacy system owners, they are very selective and picky when finalizing a migration plan. They often focus only on the short-term goal, i.e., getting the code up and running in production, because when we speak of legacy systems, all that is left is code and a sigh of relief that it is still up and running. For most systems there is no documentation, code history, or record of revisions, which is why a migration could fail on a large scale if something goes wrong.

I have found some points that need to be ensured before finalizing the process of migrating from legacy systems to modern systems.

1. Research and Analysis

We need to understand the motives behind the development of the legacy system, since documentation is often missing or insufficient.

During this study, we can gather historical data to understand the system’s behavior and dig deeper for anything that helps us understand the system better.

2. Team Management

After studying the system, we can estimate the team size and plan resource management. Such systems run on much older technology, so it is hard to find people with those dated skills. In that case, management can cross-skill existing resources into such technologies.

I believe adding a suitable number of junior engineers would be best, as the exposure to such challenges can help them improve their skills.

3. Tool to Capture Raw Logs

Analyzing raw logs can tell us a lot about a system, since logs record the communication that completes each task the system performs. By breaking the data down into plain language, we can learn from timestamps when request volumes peak and what the request parameters consist of; with such information, we can characterize system behavior and plan properly.

4. Presentation of the Logs

Sometimes we may need to present the case study to high-level management before proceeding with the plan. To simplify the presentation, we can use tools like Datadog and Splunk to render the data in formats such as tables and graphs that other team members can understand.

5. Replicate the Architecture with Proper Functionality

This is the most important part. End-to-end development is the only solution for a smooth migration activity. We need to enforce standards here, such as maintaining core functionality, managing risk, conveying data pattern changes to other associated clients, and preserving user access, business processes, etc. The study from point 1 helps us understand the system’s behavior and decide which modern technology the migration should land on.

We can implement and plan using one of the migration methods I mentioned above in the blog.

6. End-to-end Testing

Once the legacy system is replicated on modern technology, we need to ensure that we have a User Acceptance Testing (UAT) environment in which to perform system testing. This can be challenging if the legacy system never had a testing environment; we may need to call mock backend URLs to mimic the behavior of services.

7. Before Moving to Production, do Pre-production Testing Properly

Only after successful UAT testing can one be confident in the functionality and move changes to production hassle-free. However, some points must still be ensured, such as following standards and maintaining documentation. Regarding standards, we need to ensure that no risk remains that could lead to service failures on the modern technology and that everything is properly compatible.

In the documentation, we need to ensure that all service flows are appropriately documented and that testing is done according to the requirement gathering.

Legacy systems and their workings are among the most complex and time-consuming topics. But to make the job easier, one must put effort into it.

Exploring Apigee: A Comprehensive Guide to API Management
https://blogs.perficient.com/2024/10/15/exploring-apigee-a-comprehensive-guide-to-api-management/

APIs, or application programming interfaces, are essential to the dynamic world of digital transformation because they allow companies to communicate quickly and efficiently with their data and services. Consequently, effective management is essential to ensure these APIs function correctly, stay safe, and provide the desired benefits. This is where Google Cloud’s top-tier API management product, Apigee, comes into play.

What is Apigee?

Apigee is a great platform for companies that want to manage their APIs effectively. It simplifies the whole process of creating, growing, securing, and implementing APIs, which makes developers’ work a lot easier. One thing that stands out about Apigee is its flexibility: it can handle both external APIs that third-party partners can access and internal APIs used within the company, making it a good option for businesses of all sizes. It also integrates nicely with security layers like Nginx, which provides an important layer of authentication between Apigee and the backend. This adaptability enhances security and allows for smooth integration across different systems, making Apigee a reliable and easy-to-use choice for managing APIs.

Core Features of Apigee

1. API Design and Development

Primarily, Apigee offers a unique suite of tools for developing and designing APIs. You can define API endpoints, maintain API specifications, and create and modify API proxies using the OpenAPI standard. This streamlines the development process and makes it easier to design APIs that are functional and compliant with industry standards, so developers can focus on innovation while maintaining a strong foundation of compliance and functionality.

2. Security and Authentication

Any API management system must prioritize security, and Apigee leads the field in this regard. It provides security features such as OAuth 2.0, JWT (JSON Web Token) validation, API key validation, and IP validation. By limiting access to your APIs to authorized users, these capabilities help safeguard sensitive data from unwanted access.

3. Traffic Management

With capabilities like rate limiting, quota management, and traffic shaping, Apigee enables you to optimize and control API traffic. This helps ensure proper usage and maintains consistent performance even under high traffic conditions.
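
As a small illustration, a SpikeArrest policy that smooths inbound traffic to roughly ten requests per second might look like this (the policy name is a placeholder):

```
<!-- Throttle inbound traffic to protect the backend -->
<SpikeArrest name="SA-ProtectBackend">
    <Rate>10ps</Rate>
</SpikeArrest>
```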

4. Analytics and Monitoring

You can access analytics and monitoring capabilities with Apigee, which offers insights into API usage and performance. You can track response times, error rates, and request volumes, enabling you to make data-driven decisions and quickly address any issues that arise.

5. Developer Portal

Apigee includes a customizable developer portal where API users can browse documentation, test APIs, and get API keys. This portal builds a community around your APIs and improves the developer experience.

6. Versioning and Lifecycle Management

Keeping an API’s versions separate is essential to preserving backward compatibility and allowing it to change with time. Apigee offers lifecycle management and versioning solutions for APIs, facilitating a seamless upgrade or downgrade process.

7. Integration and Extensibility

Apigee supports integration with various third-party services and tools, including CI/CD pipelines, monitoring tools, and identity providers. Its extensibility through APIs and custom policies allows you to tailor the platform to meet your specific needs.

8. Debug Session

Moreover, Apigee offers a debug session feature that helps troubleshoot and resolve issues by providing a real-time view of API traffic and interactions. This feature is crucial during the development and testing phases, because identifying and fixing problems early enhances the overall quality of the final product.

9. Alerts

Furthermore, you can easily set up alerts within Apigee to notify you of critical issues related to performance and security threats. Both types of issues affect system reliability and can lead to significant downtime, so addressing them promptly is essential for maintaining optimal performance.

10. Product Onboarding for Different Clients

Apigee supports product onboarding, allowing you to manage and customize API access and resources for different clients. This feature is essential for handling diverse client needs and ensuring each client has the appropriate level of access.

11. Threat Protection

Apigee provides threat protection mechanisms, such as JSON and XML payload inspection, that guard your APIs against malformed or malicious content. This helps maintain API stability and efficient handling of concurrent requests under high load conditions.

12. Shared Flows

Apigee allows you to create and reuse shared flows, which are common sets of policies and configurations applied across multiple API proxies. This feature promotes consistency and reduces redundancy in API management.

Benefits of Using Apigee

1. Enhanced Security

Apigee’s comprehensive security features help protect your APIs from potential threats and ensure that only authorized users can access your services.

2. Improved Performance

Moreover, with features like traffic management and caching, Apigee helps optimize API performance, providing a better user experience while reducing the load on your backend systems.

3. Better Visibility

Apigee’s analytics and monitoring tools give valuable insights into API usage and performance, helping you identify trends, diagnose issues, and make informed decisions.

4. Streamlined API Management

Apigee’s unified platform simplifies the management of APIs, from design and development to deployment and monitoring, saving time and reducing complexity.

5. Scalability

Finally, Apigee is designed to handle APIs at scale, making it suitable for both small projects and large enterprise environments.

Getting Started with Apigee

To get started with Apigee, follow these steps:

1. Sign Up for Apigee

Visit the Google Cloud website and sign up for an Apigee account. Based on your needs, you can choose from different pricing plans.

2. Design Your API

Use Apigee’s tools to design your API, define endpoints, and set up API proxies.

3. Secure Your API

Implement security policies and authentication mechanisms to protect your API.

4. Deploy and Monitor

Deploy your API to Apigee and use the analytics and monitoring tools to track its performance.

5. Engage Developers

Set up your developer portal to provide documentation and resources for API consumers.

In a world where APIs are central to digital innovation and business operations, having a powerful API management platform like Apigee can make a significant difference. With its rich feature set and comprehensive tools, Apigee helps organizations design, secure, and manage APIs effectively, ensuring optimal performance and value. Whether you’re just starting with APIs or looking to enhance your existing API management practices, Apigee offers the capabilities and flexibility necessary to thrive in today’s highly competitive landscape.

A rabbit hole in web development
https://blogs.perficient.com/2024/09/11/a-rabbit-hole-in-web-development/

A rabbit hole

Recently, I was learning about some new Adobe software, and came across the line of code import Theme from "@swc-react/theme". This quickly dropped me into the web development education rabbit hole…

  • A quick search shows me that "@swc-react/theme" is React Wrappers for Spectrum Web Components.

  • Another search shows that Spectrum Web Components is a particular implementation of Adobe Spectrum that uses Open Web Components’ project generator.

  • What is Open Web Components? Well, whatever it is, it relies on something called Lit.

  • What is Lit? It’s a JavaScript library that relies on Web Components.

  • At the end of the rabbit hole, we learn that Web Components is a collection of modern HTML and JavaScript features that allow implementation of “components”, which are modular, HTML-parameterizable pieces of a webpage that have their own associated HTML, JavaScript, and CSS. Components are typically implemented by more heavyweight frameworks such as React or Angular.

Of course, few of the clarifying details I’ve added in the above bullet points were clear to me during my initial time in the rabbit hole.

The following is an article that presents the relevant content from the rabbit-hole in a more foundational, “bottom-up” approach.

Web components

Web Components is a suite of different technologies [standard to HTML and JavaScript] allowing you to create reusable custom elements – with their functionality encapsulated away from the rest of your code – and utilize them in your web apps.”

The “suite of different technologies” are the custom elements JavaScript API, the shadow DOM JavaScript API, and the <template> and <slot> HTML elements.

Custom elements (JavaScript API)

The custom elements JavaScript API allows

  • extension of built-in HTML elements, such as <p>, so that an extended HTML element can be used in HTML with code such as <p is="word-counter"> . (The argument to is specifies which extension of <p> is used.) These are called customized built-in elements.

  • creation of new HTML elements that have new tag names such as <custom-element>. These are called autonomous (HTML) elements.

A custom element is implemented as a class which extends either

  • an interface corresponding to an HTML element, in the case of extending an existing HTML element

    or

  • HTMLElement, in the case of creating a new HTML element

The class will need to implement several “lifecycle callback functions”. The class, say Cls, is then passed to window.customElements.define("my-custom-element", Cls).

Shadow DOM (JavaScript API)

The shadow DOM JavaScript API allows “hidden” DOM trees, called shadow trees, to be attached to elements in the regular DOM tree. Shadow trees are hidden in the sense that they are not selected by tools such as document.querySelectorAll(). They allow for encapsulation because none of the code inside a shadow tree can affect the portion of the overall DOM tree that is its parent.

Shadow trees are created by using

  • <template shadowrootmode="open"> </template> in HTML

    or

  • const shadow = elem.attachShadow({mode: "open"}) in JavaScript

<template>

The <template> HTML element is not actually rendered by the browser. Instead, when template is the JavaScript Element representing a <template> HTML element (e.g. const template = document.querySelector("#some-template")), we are expected to manually render* template.content. This manual rendering is done by writing code such as document.body.appendChild(template.content).

But- still- what good is this? At this stage, all we know about <template> is that use of it requires manually rendering HTML. It seems useless!

*template.content is of type DocumentFragment, which is a data structure that represents template.innerHTML. You can read about a situation in which you would want to use DocumentFragment over innerHTML here. It’s not clear to me how using DocumentFragment is vastly superior to innerHTML in this scenario, but there is probably some small performance advantage.

Slotting

<template> does become quite useful when it’s paired with the <slot> element. The <slot> element allows us to define portions of the <template> inner HTML that are variable so that we can later “plug-in” custom HTML into those portions of the <template> inner HTML.

In order to achieve this functionality of <slot>, we must actually use <slot> alongside custom element and shadow DOM concepts, as this was how <slot> was designed to be used.

Slotted custom elements

We now describe how <slot> is used with custom elements, the shadow DOM, and templates to implement a “slotted” custom element.

  1. Include code such as

<template id = "some-template">
    ...
    <slot name = "some-slot"> default text </slot>
    ...
</template>

in the HTML.

  2. In the class that defines a custom element, write a constructor that creates a shadow tree by including const shadowRoot = this.attachShadow({mode: "open"}) in the constructor.

  3. In the same constructor, right after the creation of the shadow tree, set template.content to be the inner HTML of the shadow tree: shadowRoot.appendChild(template.content.cloneNode(true)).

(To see an example of this, inspect this webpage with your browser’s development tools.)

We see that the three concepts of custom elements, the shadow DOM, and templates are all involved. (1) and (3) are about templates, (2) is about the shadow DOM, and (2) and (3) occur in the custom element’s constructor!
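
Putting the three steps together, here is a minimal sketch of such a custom element class (the element and template names are illustrative):

```
// Assumes the <template id="some-template"> from step 1 is in the page
class SomeElement extends HTMLElement {
    constructor() {
        super(); // required before touching `this`
        const template = document.querySelector("#some-template");
        const shadowRoot = this.attachShadow({ mode: "open" }); // step 2
        shadowRoot.appendChild(template.content.cloneNode(true)); // step 3
    }
}

window.customElements.define("some-element", SomeElement);
```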

But how does <slot> come into play? Well, suppose that a custom element called “some-element” is configured in the above way. Then the HTML

<some-element> </some-element>

is interpreted by the browser to be the inner HTML of the template with the inner HTML of the template’s <slot> element replacing the template’s <slot> element. So, the browser will render the HTML

...
default text
...

Alternatively, the HTML

<some-element>
    <div slot = "some-slot"> replacement text </div>
</some-element>

is interpreted by the browser to be the inner HTML of the template with the inner HTML of the newly specified <slot> element replacing the template’s <slot> element. So, the browser will render the HTML

...
replacement text
...

Modern components

The type of custom element above implements the idea of a modern component, which is

  • easily reusable

  • encapsulated (in the sense that one component’s code is separate from other components and does not affect other components state or behavior)

  • allows for parameterization of HTML with <slot>

We’ve seen that writing the above type of custom element requires a lot of boilerplate. We could eliminate the boilerplate by writing a class that implements the modern component functionality. The class’s constructor would take the HTML that is to underlie the modern component as an argument*.

* If <slot> functionality is used, then the HTML that is to underlie the modern component would contain the same kind of <slot> element that <template> did above.

Lit

Lit is a library that provides a class, LitElement, that implements this notion of modern component. As Lit’s documentation says, the advantage of this approach is that, since modern components rely on standard HTML and JavaScript APIs, they are supported by almost all web browsers (all web browser that support the required HTML and JavaScript APIs, that is), and do not require any frameworks such as Angular or React to run.
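
To give a feel for this, here is a minimal sketch of a Lit component (assuming the lit npm package; the names are illustrative):

```
import { LitElement, html, css } from "lit";

class MyCounter extends LitElement {
    // Reactive property: changing it re-renders the component
    static properties = { count: { type: Number } };

    // Styles are scoped to this component's shadow DOM
    static styles = css`button { font-size: 1rem; }`;

    constructor() {
        super();
        this.count = 0;
    }

    render() {
        return html`<button @click=${() => this.count++}>Count: ${this.count}</button>`;
    }
}

customElements.define("my-counter", MyCounter);
```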

Open Web Components

Open Web Components is a website that “gives a set of recommendations and defaults on how to write modern web components”. The “Getting Started” page recommends that to begin developing a web component, you should make use of their npm package by running npm init @open-wc, which generates an example Lit component.

Spectrum Web Components

Spectrum Web Components is the “frameworkless” or “as close to vanilla JS as possible” implementation of Adobe Spectrum. Spectrum Web Components are Lit components and thus extend LitElement.

React Wrappers for Spectrum Web Components

swc-react is a collection of React wrapper components for the Spectrum Web Components (SWC) library, enabling you to use SWC in your React applications with ease. It relies on the @lit/react package to provide seamless integration between React and the SWC library.”

Why not just use React Spectrum components?

swc-react and React components are two technologies that implement the idea of a component in some way. I would think that if we’re using React, wouldn’t it be more natural to just use React components, and not import an extra library that makes Lit components usable in React? Well, Adobe documentation says:

We recommend using swc-react over React Spectrum in your add-ons based on React, because it currently offers a more comprehensive set of components which provide built-in benefits as detailed above in the Spectrum Web Components section, and is more actively supported.

So I suppose that answers my question 🙂

Production Deployment and its Basics: Known to Many, Followed by Few
https://blogs.perficient.com/2024/09/04/production-deployment-and-its-basics-known-to-many-followed-by-few/

Did you ever feel tense while taking your exams? Or you must have watched the Olympics or other sports events like cricket and football. When you focus on national players during significant events, you can observe the stress and anxiety of performing at that level. Similar is the situation of an IT professional during a production deployment call. This moment is crucial because it represents the end of months or years of effort, the results of which will be evaluated by everyone involved. The stakes are high because the quality and success of the deployment can have a huge impact.

Teams follow a multi-step process called the SDLC (Software Development Life Cycle) model to manage this stress and increase success. These standards provide a framework to guide process improvement, reduce risk, and streamline deployment. The team’s goal is to follow this process and deliver quality software that meets the needs of stakeholders.

Some of the major SDLC models are:

  1. Waterfall Model
  2. V-Model
  3. Incremental Model
  4. RAD Model
  5. Iterative Model

Each SDLC model is suitable for a certain type of project. We can take the example of the Waterfall Model.

The SDLC Waterfall Model

[Diagram: SDLC (Software Development Life Cycle)]

 

  1. Requirements Analysis: Gather and document what the system should do.
  2. System Design: Outline the architecture and design specifications.
  3. Implementation: Write and integrate the code according to the design.
  4. Testing: Evaluate the system to ensure it meets the requirements.
  5. Deployment: Release the system for end-users to use.
  6. Maintenance: Address any issues or updates needed after deployment.

Structured approaches like SDLC emphasize planning, alignment, and risk management to ensure successful deployments. However, gaps can still lead to failures and negatively impact the client’s perception.

It is always a hassle when it comes to production deployment. It is simply your code for a service that will run as you developed it but in a different organization or environment. So, what’s the drill?

I can answer this by noting down some of the points I have understood from my IT experience.


1. Insufficient Requirement Gathering

Sometimes, demands are not appropriately explained in the documentation, stories, or other requirement-gathering artifacts; for some tasks we have no written standard to track, only informal understandings. If the process carries on this way, we may face delays in production planning, or issues in production if the change is deployed anyway. It can also cause recurring problems in production.

For example, in one of the requirements meetings, we asked the client for the parameter’s details, but the client had no such information, which caused a delay in deployment.

2. Incorrect Dev/Sandbox Testing

Developers often test a service only until they see a successful response and then move it straight to production after getting approval. For the TL/manager, it looks like a win-win situation because the service is delivered before the deadline, until clients start playing Russian roulette with it.

The developers’ poor approach is then exposed, and fixes happen live in production. This affects the value of the business and the relationship with the client.

3. Inconsistency Between the Code in Lower Environment and Production

Developers often have to make changes to production services for various reasons, whether driven by the team or the client. Those changes must first be tested in the dev organization/environment. Implementing them directly in production for short-term convenience, even with approvals, may satisfy the client and the TL/manager, but it does no justice to your junior folks, who may not understand why the code differs between environments.

4. Improper or incomplete testing by the client

Note: This may be more for the production manager type of folks.

I have been through several such projects and have seen the same behavior repeatedly: clients sometimes rely on the developer for the testing part. The client knows the end-to-end project, while the developer is responsible only for part of it, so client-side testing is essential.

5. Pre-production testing

In most cases, the client doesn’t have test data for pre-production to confirm the end-to-end working status of the service, which can cause the service to fail. Always ask the client to do pre-production testing with real-time data and confirm the status of the service.

6. Load Testing

Load testing is often left out as early as requirement gathering. Every service should go through load testing so that if, at the production level, it starts receiving more traffic than usual, we can trust its capability to handle the spike. A quick sketch of the idea follows.
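To make this concrete, here is a minimal, hypothetical concurrency probe in TypeScript (Node 18+, global fetch). It is only a sketch of what "more traffic than usual" means in practice; the URL, user count, and percentile are placeholders, and a real project should use a dedicated tool such as JMeter or BlazeMeter.

// Hypothetical concurrency probe: fire N parallel GETs and report latency.
// URL, user count, and percentile below are placeholders for illustration.
const TARGET_URL = 'https://example.com/api/health';
const CONCURRENT_USERS = 50;

async function timedRequest(): Promise<number> {
  const start = Date.now();
  const res = await fetch(TARGET_URL);
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return Date.now() - start;
}

async function main(): Promise<void> {
  const results = await Promise.allSettled(
    Array.from({ length: CONCURRENT_USERS }, () => timedRequest())
  );
  const latencies = results
    .filter((r): r is PromiseFulfilledResult<number> => r.status === 'fulfilled')
    .map((r) => r.value)
    .sort((a, b) => a - b);
  console.log(`success: ${latencies.length}/${CONCURRENT_USERS}`);
  const p95 = latencies[Math.floor(latencies.length * 0.95)];
  console.log(`p95 latency: ${p95 ?? 'n/a'} ms`);
}

main().catch(console.error);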

That’s a wrap!

Close these gaps and follow the process properly, and your production deployment becomes successful and hassle-free.

Perficient + Apigee

At Perficient, we create complex and robust integration solutions in Apigee, which helps our clients address the full spectrum of challenges with lasting solutions.

Contact us today to learn how we can help you to implement integration solutions with Apigee.

]]>
https://blogs.perficient.com/2024/09/04/production-deployment-and-its-basics-known-to-many-followed-by-few/feed/ 0 367473
Web APIs in Appian: Bridging the Gap Between Systems https://blogs.perficient.com/2024/05/27/appian-web-apis/ https://blogs.perficient.com/2024/05/27/appian-web-apis/#comments Mon, 27 May 2024 08:40:44 +0000 https://blogs.perficient.com/?p=344465

Seamless integration between various systems and applications is crucial for efficient data sharing and enhanced functionality. Appian, a leading low-code automation platform, recognizes this need and provides a powerful toolset for creating Web APIs.

Web APIs: Bridging the Gap

Web APIs, or Application Programming Interfaces, serve as a bridge between different software applications, enabling them to communicate and share data seamlessly. In the context of Appian, Web APIs provide a way to expose Appian data and services to external systems, facilitating integration with other software solutions.

Key Features of Web APIs

  • Integration and Data Exchange: Appian’s Web API feature allows for seamless integration with external systems and services, enabling the exchange of data in real time. It supports RESTful web services, which can be used to expose Appian data and processes to other applications or to consume external data within Appian.
  • Security and Customization: Appian Web APIs come with built-in security features such as authentication and authorization, ensuring that only authorized users can access the API. Additionally, they can be customized to perform complex business logic, validate inputs, and format responses, providing flexible and secure data handling capabilities.
  • Scalability and Performance: Appian Web APIs are designed to handle high volumes of requests efficiently, ensuring that performance remains optimal even as the demand grows. This scalability is crucial for enterprise-level applications that require reliable and fast data processing and integration capabilities.

How to Harness the Power of Web APIs in Appian

Define Your API

  • When defining your API, carefully choose the URLs or URIs that serve as access points for various resources or specific actions within your system. This crucial step sets the foundation for seamless interaction with your API.

Create the API in Appian

  1. Choose the Appropriate HTTP Methods
    • Specify which HTTP methods (GET, POST, PUT, DELETE, etc.) your API will support for each endpoint.
    • Define the request/response formats by specifying the data formats (such as JSON or XML) your API will use for sending requests and receiving responses.
  2. Design Your API
    • Consider the needs of both Appian and the external system when designing your Web API. Define clear and concise documentation that outlines the API’s functionality, required parameters, and expected responses.
  3. Implement Security Measures
    • Security takes centre stage when exposing your Appian data and services to external systems. Implement authentication and authorization mechanisms, such as API keys or OAuth tokens, so that only authorized entities can access your API (a client-side sketch follows this list).
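As an illustration of the consumer side, here is a hypothetical TypeScript client calling an Appian Web API with an API key. The site URL and endpoint path are made up for the sketch; verify the exact endpoint path and API-key header name against your Appian version's documentation.

const APPIAN_BASE = 'https://example.appiancloud.com'; // hypothetical site
const API_KEY = process.env.APPIAN_API_KEY ?? '';      // never hard-code keys

// Calls a hypothetical "people" Web API endpoint. Appian exposes Web APIs
// under /suite/webapi/<endpoint>; confirm the path and header name in your
// site's documentation before relying on them.
async function getPeople(): Promise<unknown> {
  const res = await fetch(`${APPIAN_BASE}/suite/webapi/people`, {
    headers: { 'Appian-API-Key': API_KEY },
  });
  if (!res.ok) throw new Error(`Appian API returned HTTP ${res.status}`);
  return res.json();
}

getPeople().then(console.log).catch(console.error);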

Test Thoroughly

  • Before making your Web API available to external systems, thoroughly test it using various scenarios and edge cases. Identify and resolve potential issues to ensure a smooth and reliable integration experience.

Deploy the API

  • Once you have finished creating and testing your API, deploy it to the desired environment (development, test, or production).
  • Ensure that the necessary resources (servers, databases, etc.) are appropriately configured and accessible for the API to function correctly in the deployment environment.

Document and Publish the API

  • Create documentation for your API, including details about the endpoints, supported methods, request/response formats, input/output parameters, and any authentication/authorization requirements.
  • Publish the documentation internally or externally to make it available to the API consumers.

Monitor and Maintain

  • Establish monitoring and logging mechanisms to track your API’s performance, usage, and errors.

Challenges While Developing Appian Web APIs

  • Authentication Challenges: Struggles with configuring and maintaining authentication methods like API keys, tokens, or OAuth can result in issues accessing the system.
  • Data Validation Complexity: Verifying and managing data input accuracy, as well as dealing with validation errors, can be tricky, particularly with intricate data structures.
  • Endpoint Configuration: Errors in configuring endpoints, including incorrect URLs or URIs, can disrupt API functionality.
  • Security Vulnerabilities: Overlooking security best practices may expose APIs to vulnerabilities, potentially leading to data breaches or unauthorized access.
  • Third-Party Service Dependencies: If the API relies on third-party services, developers may face difficulties when those services experience downtime or changes.
  • Error Handling: Inadequate error handling and unclear error messages can make troubleshooting and debugging challenging.
  • Documentation Gaps: Poorly documented APIs or incomplete documentation can lead to misunderstandings, making it difficult for developers to use the API effectively.
  • Integration Challenges: Integrating the API with external systems, especially those with differing data formats or protocols, can pose integration challenges.

Developers building Web APIs often face tricky situations like ensuring secure access, validating data correctly, and making sure everything communicates smoothly. Solving these challenges leads to powerful APIs that make sharing information between different systems easier and safer.

Creating a Web API to Share Information

We will create a Web API that shares information about people stored in the Appian database with third parties, who can access it via a GET call on a specific URL.

  • Log into Appian Designer from your Appian developer account.
  • In Appian Designer, navigate to the “Objects” section.
  • Create a new object by clicking on “New.”
  • In the object creation menu, select “Web API”.

[Screenshot: Web API template selection]

  • You will be prompted to define your Web API. Provide a name and description for your API.

[Screenshots: Web API creation details – name, method, and endpoint]

  • Configure the endpoints by specifying the URLs or URIs used to access resources or perform actions through your API.
  • Specify the data inputs (request parameters) and outputs (response data) for each endpoint within the Web API.

[Screenshot: rule and test input]

  • Define the structure of the data that your API will send and receive.
  • For each endpoint, implement the logic using Appian expressions, business rules, or by integrating with external data sources or services. Ensure the logic meets the endpoint’s requirements.

[Screenshot: expression mode]

  • After configuring your Web API, save your changes.

[Screenshot: Appian Web API screen]

  • Use the built-in Appian testing capabilities or external tools like Postman to test your Web API. Send requests to the defined endpoints and verify the responses (a scripted smoke test follows below).

[Screenshots: Appian test screen and API response]
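Beyond manual Postman calls, the same check can be scripted. This is a minimal, hypothetical smoke test in TypeScript; the URL and the expected response shape (a JSON array of people) are assumptions for this demo, not Appian specifics.

import assert from 'node:assert';

// A minimal scripted smoke test (ts-node / any Node 18+ runtime). The URL
// and expected shape are placeholders for the walkthrough's demo endpoint.
async function smokeTest(): Promise<void> {
  const res = await fetch('https://example.appiancloud.com/suite/webapi/people');
  assert.strictEqual(res.status, 200, 'expected HTTP 200');
  const body = await res.json();
  assert.ok(Array.isArray(body), 'expected a JSON array of people');
  console.log(`smoke test passed: ${body.length} records`);
}

smokeTest().catch((err) => {
  console.error(err);
  process.exit(1);
});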

In conclusion, by following these steps you can efficiently create and configure a Web API in Appian, ensuring it is ready for use and thoroughly tested for seamless integration with other systems. For more information, you can visit the documentation.

]]>
https://blogs.perficient.com/2024/05/27/appian-web-apis/feed/ 1 344465
Set Your API Performance on Fire With BlazeMeter https://blogs.perficient.com/2024/05/20/set-your-api-performance-on-fire-with-blazemeter/ https://blogs.perficient.com/2024/05/20/set-your-api-performance-on-fire-with-blazemeter/#respond Mon, 20 May 2024 15:45:43 +0000 https://blogs.perficient.com/?p=358370

BlazeMeter, a continuous testing platform, is a perfect solution for your performance needs. Built on open-source tools such as JMeter, it supports web, mobile, and API testing. You can perform large-scale load and performance testing, with the ability to tweak parameters to suit your needs.

We will walk through the process of using BlazeMeter for API testing, step by step.

Register for BlazeMeter

Enter your information on the BlazeMeter site to register and get started

Configure Your First Scenario

The first time you log in, you are taken to the default BlazeMeter view with a default workspace and project. Let us start configuring a new scenario.

Create a New Project

  1. Select Projects -> Create new project
  2. Name the project
  3. Select Create Test
  4. Select Performance Test
  5. You are now taken to the configuration tab

 

Update Your Scenario

  1. The left section holds your test specifications
  2. Tap the Edit link and update your project name; let it be "FirstLoadTest"
  3. You can define the scenario and test data in the Scenario Definition section
  4. For this demo we will configure an API endpoint; tap on Enter URL/API calls (see picture below)
  5. In Scenario Definition, enter "https://api.demoblaze.com/entries", so we are load testing this endpoint with a GET call
  6. Let's name this scenario "DemoWithoutParameters"
  7. Tap the three dots next to the scenario definition and duplicate the scenario
  8. Name the duplicate "DemoWithParameters"

[Screenshot: Test specifications]

Create TestData

Create New Csvfile

  1. Next to Scenario Definition there is a TestData section; tap on it
  2. You can choose from the options available; for this demo we will go with "Create New Data Entity"
  3. Let's name it "DemoTestData" and add it
  4. Tap the + icon next to the created entity to see the parameterization options
  5. In this example we will select New CSV File
  6. You will be taken to a data table. Rename "variableName1" to "Parameter1" and "variableName2" to "Parameter2" (our variable names are "Parameter1" and "Parameter2")
  7. Enter the values "Value1" and "Value2" and save
  8. Configure these parameters in the Query Parameters section (see the picture and the request sketch below)
  9. We have now successfully built a scenario with two endpoints; you can configure one or more endpoints in a single scenario

[Screenshot: Scenario definition]
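For clarity, this is roughly the request the parameterized scenario ends up sending, sketched in TypeScript: the CSV row's values substituted into the query string. The demoblaze API simply ignores extra query parameters, so this only illustrates the mechanics.

// What the "DemoWithParameters" scenario effectively sends: CSV row values
// substituted into the query string. A quick sanity check before scaling.
const url = new URL('https://api.demoblaze.com/entries');
url.searchParams.set('Parameter1', 'Value1'); // from the CSV test data
url.searchParams.set('Parameter2', 'Value2');

fetch(url)
  .then((res) => {
    console.log(`status: ${res.status}`); // expect 200 before load testing
    return res.json();
  })
  .then((data) => console.log(`payload size: ${JSON.stringify(data).length} bytes`))
  .catch(console.error);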

Configure Your First Test Run

  1. Scroll down the scenario definition window to the Load Configuration section
  2. Enter Total Users, Duration, and Ramp up Time. For now we can test with 2 users, Duration: 1 minute, Ramp up Time: 0
  3. Once you update these details, the graph in this section shows what your load test will look like
  4. We can also limit Requests Per Second (RPS) by enabling the "Limit RPS" toggle and selecting the number of requests to allow per second
  5. We can also change the number of users at run time, but this is available only with the Enterprise plan
  6. Let's configure load distribution now in the "Load Distribution" section, right below "Load Configuration"
  7. Select the location from which you need the requests to trigger
  8. We can select multiple locations and distribute load across them, but again, this feature is available only with the Enterprise plan
  9. For now, let's proceed by selecting one location

[Screenshot: Load configuration]

Failure Criteria

  1. Failure criteria are the quickest way to read your load test results
  2. Do you have failure criteria defined? If yes, configure them in this section. This is optional; skip it if you don't have failure criteria defined
  3. You can configure multiple failure criteria as well
  4. Enable "1-min slide window eval" to evaluate your load test against the criteria over a one-minute sliding window
  5. Select the "Stop Test?" checkbox if you want to stop the execution on failure
  6. Select "Ignore failure criteria during rampup" to ignore failures during ramp-up
  7. You can add one or more failure criteria and set these options individually for each criterion
  8. Select "Enable 1-min slide window eval for all" at the top right of this section to enable it for all the failure criteria provided

[Screenshot: Failure criteria]

Test Your Scenario

  1. Run your scenario by clicking "Run Test"
  2. Wait for the Launch Test window to load completely
  3. Now click the "Launch Servers" button
  4. Click "Abort Test" to abort your execution at any time
  5. Observe your execution go through the different stages (Pending, Booting, Downloading, and Ready)
  6. Once it reaches Ready, you can follow the execution progress
  7. Once the execution is done, you can view the summary with a passed/failed status

[Screenshot: BlazeMeter execution status]

Analyze Your LoadTest Results

  1. The most important part of a performance test is analyzing your KPIs
  2. You can see the different KPIs in the test results summary
  3. To dig deeper, navigate to the "Timeline Report" section; at the bottom left you will see the "KPI Panel", which contains the different KPIs to analyze as required
  4. By default it provides a generalized view; you can select a single endpoint to analyze the KPIs for that endpoint alone

[Screenshot: BlazeMeter results analysis]

Schedule Your Load Tests

  1. BlazeMeter supports continuous integration: you can schedule your executions and view the results when required
  2. Select your test from the Tests menu on top
  3. To the left of the project description window you can find the SCHEDULE section
  4. Tap the Add button next to it to open the schedule window
  5. Configure the scheduler with the required timings and save it
  6. The new scheduler will be added to your project
  7. Delete a scheduler by tapping the Delete icon
  8. You can add multiple schedulers
  9. Toggle them on/off to activate/deactivate the schedulers

[Screenshot: Schedule section]

BlazeMeter Pros/Cons

Pros:

  • Built on open-source tools (JMeter, Taurus)
  • Scriptless performance testing
  • Integration with Selenium, JMeter, Gatling, and Locust
  • User-friendly UI
  • Report monitoring from any geographic location
  • Integrates with CI/CD pipelines

Cons:

  • Requires a license for additional features and support
  • Test results analysis requires expertise
  • Functional scenarios still need integration with Selenium/JMeter

If you are looking for a tool that serves your performance needs, BlazeMeter is a strong option. You can generate scripts with its scriptless UI, simulate load, and run your tests; servers spin up, scripts run, and results are generated within seconds.

For more information about Perficient’s Mobile Solutions expertise, subscribe to our blog or contact our Mobile Solutions team today!

]]>
https://blogs.perficient.com/2024/05/20/set-your-api-performance-on-fire-with-blazemeter/feed/ 0 358370
Storybook https://blogs.perficient.com/2024/03/29/storybook/ https://blogs.perficient.com/2024/03/29/storybook/#respond Fri, 29 Mar 2024 20:05:59 +0000 https://blogs.perficient.com/?p=360595

You may never have heard of Storybook, or perhaps you caught only a glimpse and came away feeling it is an unnecessary tool – in that case, this article is for you. I used to share that opinion, but it changed once I put Storybook to work building the JumpStart starter kit with Next.js.

Why

With the advent of responsive design, the uniqueness of user interfaces has increased significantly – with the majority of them having bespoke nuances. New requirements have emerged for devices, browser interfaces, accessibility, and performance. We started using JavaScript frameworks, adding different types of rendering to our applications (CSR, SSR, SSG and ISR) and breaking the monolith into micro-frontends. Ultimately, all this complicated the front end and created the need for new approaches to application development and testing.

The results of a 2020 study showed that 77% of developers consider current development to be more complex than 10 years ago. Despite advances in JavaScript tools, professionals continue to face more complex challenges. The component-based approach used in React, Vue, and Angular helps break complex user interfaces into simple components, but it’s not always enough. As the application grows, the number of components increases; in serious projects, there can be hundreds of them, which gives thousands of permutations. To even further complicate matters, interfaces are difficult to debug because they are entangled in business logic, interactive states, and application context.

This is where Storybook comes to the rescue.

What Storybook is

Storybook is a tool for the rapid development of UI components. It allows you to browse a library of components and track the state of each of them. With Storybook, one can develop components separately from the application, making it easier to reuse and test UI components.

Storybook promotes the Component-Driven Development (CDD) approach, where every part of the user interface is a component. These are the basic building blocks of an application. Each of them is developed, tested, and documented separately from the others, which simplifies the process of developing and maintaining the application as a whole.

A component is an independent fragment of the application interface. In Sitecore, in most cases, a component is equal to a rendering, for example, CTA, input, badge, and so on. If we understand the principles of CDD and know how to apply this approach in development, we can use components as the basis for creating applications. Ideally, they should be designed as independent from each other and be reusable in other parts of the application. You can approach creating components in different ways: start with smaller ones and gradually combine them into larger ones, and vice versa. You can create them both within the application itself and in a separate project – in the form of a library of components.

With Storybook’s powerful functionality, you can view your interfaces the same way users do. It provides the ability to run automated tests, analyze various interface states, work with mock data, create documentation, and even conduct code reviews. All these tasks are performed within the framework of the so-called Story, which allows you to effectively use Storybook for development.

What is a Story

This is the basic unit of Storybook design and allows you to demonstrate different states of a component to test its appearance and behavior. Each component can have multiple stories, and each one can be treated as a separate test case to test the functionality of the component.

You write stories for specific states of UI components and then use them to demonstrate the appearance during development, testing, and documentation.

Using the Storybook control panel, you can edit each of the story function arguments in real time. This allows your team to dynamically change components in Storybook to test and validate different edge cases.

[Image: Storybook explained]
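To make the Story concept concrete, here is a minimal story file in the Component Story Format. The Button component, its props, and the args values are hypothetical; the pattern is what matters: each named export is one story, one reproducible state of the component, and its args can be edited live in the control panel.

import type { Meta, StoryObj } from '@storybook/react';
import { Button } from './Button'; // hypothetical component

const meta: Meta<typeof Button> = {
  title: 'Components/Button',
  component: Button,
};
export default meta;

type Story = StoryObj<typeof Button>;

// Each named export is one story: a single state of the component that
// doubles as a visual test case.
export const Primary: Story = {
  args: { label: 'Buy now', variant: 'primary', disabled: false },
};

export const Disabled: Story = {
  args: { label: 'Buy now', variant: 'primary', disabled: true },
};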

Storybook Capabilities

Creating documentation

Storybook provides the ability to create documentation along with components, making the process more convenient. With its help, you can generate automatic documentation based on code comments, as well as create separate pages with examples of use and descriptions of component properties. This allows you to maintain up-to-date and detailed documentation that will be useful not only for developers but also for designers, testers, and users.

User Interface Testing

Another good use of Storybook – UI Tests identify visual changes to interfaces. For example, if you use Chromatic, the service takes a snapshot of each story in a cloud browser environment. Each time you push the code, Chromatic creates a new set of snapshots to compare existing snapshots with those from previous builds. The list of visual changes is displayed on the build page in the web application so that you can check if these changes are intentional. If they are not, that may be a bug or glitch to be corrected.

Accessibility Compliance

As The State of Frontend 2022 study found, respondents pay close attention to accessibility, with 63% predicting the trend will gain further popularity in the coming years. Accessibility in Storybook can be tested with the @storybook/addon-a11y addon. Once it is installed, an "Accessibility" tab appears where you can see the results of the current audit.

Mocking the data

When developing components for Storybook, one should consider realistic data to demonstrate the capabilities of the components and simulate a real-life use case. For this purpose, mock data is often taken, that is, fictitious data that has a structure and data types similar to real ones but does not carry real information. In Storybook, you can use various libraries to create mock data, and you can also create your own mocks for each story. If a component itself needs to perform network calls pulling data, you can use the msw library.
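For instance, a component that fetches its own data can be backed by a handler like the following – a sketch assuming the msw 2.x API and the msw-storybook-addon, with an endpoint and payload invented for the example:

import { http, HttpResponse } from 'msw';

// Handlers intercept the component's own fetch('/api/articles') call and
// return canned JSON instead of hitting a real server.
export const handlers = [
  http.get('/api/articles', () =>
    HttpResponse.json([
      { id: 1, title: 'Hello Storybook' },
      { id: 2, title: 'Mocking with msw' },
    ])
  ),
];

// With msw-storybook-addon, attach the handlers per story:
// export const Default: Story = { parameters: { msw: { handlers } } };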

Simulating context and API

Storybook addons can help you simulate different component usage scenarios, such as API requests or different context values. This lets you quickly test components in realistic scenarios. If your component uses a provider to pass data, you can use a decorator that wraps the story and provides a mocked version of the provider. This is especially useful if you are using Redux or context.

Real-life advantages

Wasting resources on the user journey

Building a landing page may seem a simple exercise, especially in development mode when changes appear in the browser immediately. However, most cases are not that straightforward. Imagine a site with a backend entirely responsible for routing – one may need to log in first, answer the security questions, and then navigate through a complex menu structure. If you only need to "change the color of a button" on the final screen of the application, the developer has to launch the application in its initial state, log in, get to the desired screen, fill out all the forms along the way, and only after that check whether the new style has been applied to the button.

If the changes have not been applied, the entire sequence of actions must be repeated. Storybook solves this problem. With it, a developer can open any application screen and instantly see how it looks, taking into account the applied styles and the desired state. This allows you to significantly speed up the process of developing and testing components since they can be tested and verified independently of the backend and other parts of the application.

Development without having actual data

UI development often takes place before the backend developers have the API ready. Storybook allows you to create components that stub the data that will later be retrieved from the real API. This lets us prototype and test the user interface regardless of the presence or readiness of the backend, using mock data to demonstrate components.

Frequently changing UI

On a project, we often encounter changes in layout design, and it is very important for us to quickly adapt our components to these changes. Storybook allows you to quickly create and compare different versions of components, helping you save time and make your development process more efficient.

Infrastructural issues

The team may encounter problems when a partner’s test environment or dependencies stop working, which leads to delays and lost productivity. However, with Storybook, it is possible to continue developing components in isolation and not wait for service to recover. Storybook also helps to quickly switch between your application versions and test components in different contexts. This significantly reduces downtime and increases productivity.

Knowledge transfer

In large projects, onboarding may take a lot of time and resources. Storybook allows new developers to quickly become familiar with components and how they work, understand the structure of the project, and start working on specific components without having to learn everything from scratch. This makes the development process easier and more intuitive, even for those not familiar with a particular framework.

Application build takes a long time

Webpack is a powerful tool for building JavaScript applications. However, when developing large applications, building a project can take a long time. Storybook automatically compiles and assembles components whenever changes occur. This way, developers quickly receive updated versions of components without a need to rebuild the entire project. In addition, Storybook supports additional plugins and extensions for Webpack, to improve performance and optimize project build time.

Installation

First, install Storybook using the following commands:

cd nextjs-app-folder
npx storybook@latest init

Once installed, execute it:

npm run storybook

This will run Storybook locally, by default on port 6006; if that port is occupied, it will pick an alternative one.

[Screenshot: Storybook running locally]

Storybook is released under the MIT license, you can access its source code in the GitHub repository.

Making it with Sitecore

When developing headless projects with Sitecore, everything works in the same manner. As part of our mono repository, we set up Storybook with Next.js so that front-end developers don't have to run an instance of Sitecore to do their part of the development work.

Upon installation, you'll find a .storybook folder at the root of your Next.js application (also used as a rendering host), which contains the configuration and customization files for your Storybook setup. This folder is crucial for tailoring Storybook to your specific needs, such as setting up addons and Webpack configurations, and defining the overall behavior of Storybook in your project.

  1. main.js (or main.ts): this is the core configuration file for Storybook. It includes settings for loading stories, adding addons, and custom Webpack configurations. You can specify the locations of your story files, list the addons you're using, and customize the Webpack and Babel configs as needed.
  2. preview.js (or preview.tsx): used to customize the rendering of your stories. You can globally add decorators and parameters here, affecting all stories. This file is often used for setting up global contexts like themes and internationalization, and for configuring the layout or backgrounds of your stories.


One of the best integrations (found thanks to Jeff L'Heureux) allows you to use your own Sitecore context mock along with any placeholder, so you can render any of your components (see the decorators in the lines below).

import React from 'react';
import { LayoutServicePageState, SitecoreContext } from '@sitecore-jss/sitecore-jss-nextjs';
import { componentBuilder } from 'temp/componentBuilder';
import type { Preview } from '@storybook/react';

import 'src/assets/main.scss';

export const mockLayoutData = {
  sitecore: {
    context: {
      pageEditing: false,
      pageState: LayoutServicePageState.Normal,
    },
    setContext: () => {
      // nothing
    },
    route: null,
  },
};

const preview: Preview = {
  parameters: {
    actions: { argTypesRegex: '^on[A-Z].*' },
    controls: {
      matchers: {
        color: /(background|color)$/i,
        date: /Date$/,
      },
    },
  },
  decorators: [
    (Story) => (
      <SitecoreContext
        componentFactory={componentBuilder.getComponentFactory({ isEditing: mockLayoutData.sitecore.context.pageEditing })}
        layoutData={mockLayoutData}
      >
        <Story />
      </SitecoreContext>
    ),
  ],
};

export default preview;

You put stories somewhere under the src/stories/components folder, though it could be any folder as long as it matches the paths referenced from main.ts:

const config: StorybookConfig = {
  stories: ['../src/**/*.mdx', '../src/**/*.stories.@(js|jsx|mjs|ts|tsx)'],
 // ....
}

It is important to understand that getServerSideProps/getStaticProps are not executed when using Storybook. You are responsible for providing all the required data as well as the context, so you need to wrap your story or component.

Component-level fetching works nicely with Sitecore headless components using MSW – you can simply mock the fetch API to return the required data from inside the story file.

Useful tips for running Storybook for Headless Sitecore

  • use next-router-mock to mock the Nextjs router in Storybook (or upgrade to version 7 with the @storybook/nextjs)
  • exclude stories from the componentFactory / componentBuilder file.
  • make sure to run npm run bootstrap before starting storybook or adding it to the package.json, something like: "prestorybook": "npm-run-all --serial bootstrap" – when the storybook script is invoked, this prestorybook will automatically run just before, using a default NPM feature.

Conclusion

Integrating Storybook into a Sitecore headless project requires investing some time to dig into it, but it offers numerous benefits, including improved component visualization and isolation for development and testing.

]]>
https://blogs.perficient.com/2024/03/29/storybook/feed/ 0 360595
GraphQL: not an ideal one! https://blogs.perficient.com/2024/03/06/graphql-whats-wrong-with-it/ https://blogs.perficient.com/2024/03/06/graphql-whats-wrong-with-it/#respond Wed, 06 Mar 2024 16:39:44 +0000 https://blogs.perficient.com/?p=358426

You'll find plenty of articles about how amazing GraphQL is (including mine), but after using it for a while, I have some reservations about the technology and want to share a few bitter thoughts about it.


History of GraphQL

How did it all start? The best way to answer this question is to go back to the original problem Facebook faced.

Back in 2012, we began an effort to rebuild Facebook’s native mobile applications. At the time, our iOS and Android apps were thin wrappers around views of our mobile website. While this brought us close to a platonic ideal of the “write once, run anywhere” mobile application, in practice, it pushed our mobile web view apps beyond their limits. As Facebook’s mobile apps became more complex, they suffered poor performance and frequently crashed. As we transitioned to natively implemented models and views, we found ourselves for the first time needing an API data version of News Feed — which up until that point had only been delivered as HTML.

We evaluated our options for delivering News Feed data to our mobile apps, including RESTful server resources and FQL tables (Facebook’s SQL-like API). We were frustrated with the differences between the data we wanted to use in our apps and the server queries they required. We don’t think of data in terms of resource URLs, secondary keys, or join tables; we think about it in terms of a graph of objects.

Facebook came across a specific problem and created its own solution: GraphQL. To represent data in the form of a graph, the company designed a hierarchical query language; in other words, GraphQL naturally follows the relationships between objects. You can receive nested objects and return them all in a single HTTPS request. Back in the day, many users around the world did not have cheap or unlimited mobile data plans, so the GraphQL protocol was optimized to transmit only what users actually needed.

Therefore, GraphQL solves Facebook’s problems. Does it solve yours?

First, let’s recap the advantages

  • Single request, multiple resources: Compared to REST, which requires multiple network requests to be made to each endpoint, with GraphQL you can request all resources with a single call.
  • Receive accurate data: GraphQL minimizes the amount of data transferred over the wire, selecting it based on the needs of the client application. Thus, a mobile client with a small screen may receive less information.
  • Strong typing: Every request, input, and response object has a type. In web browsers, the lack of types in JavaScript has become a weakness that various tools (Google's Dart, Microsoft's TypeScript) try to compensate for. GraphQL allows you to share types between the backend and frontend.
  • Better tooling and developer friendliness: The introspective server can be queried about the types it supports, allowing for API explorer, autocompletion, and editor warnings. No more relying on backend developers to document their APIs. Simply explore the endpoints and get the data you need.
  • Version independent: the type of data returned is determined solely by the client request, so servers become simpler. When new server-side features are added to the product, new fields can be added without affecting existing clients.

Thanks to the “single request, multiple resources” principle, front-end code has become much simpler with GraphQL. Imagine a situation where a user wants to get details about a specific writer, for example (name, id, books, etc.). In a traditional intuitive REST pattern, this would require a lot of cross-requests between the two endpoints /writers and /books, which the frontend would then have to merge. However, thanks to GraphQL, we can define all the necessary data in the request, as shown below:

{
  writers(id: "1") {
    id
    name
    avatarUrl
    books(limit: 2) {
      name
      urlSlug
    }
  }
}

The main advantage of this pattern is simpler client code. However, some developers expected to use it to optimize network calls and speed up application startup. You don't make the code faster; you simply transfer the complexity to the backend, which has more computing power. And in many scenarios, metrics show REST APIs turning out faster than GraphQL.

This is mostly relevant for mobile apps. If you’re working with a desktop app or a machine-to-machine API, there’s no added value in terms of performance.

Another point: you may indeed save some kilobytes with GraphQL, but if you really want to optimize loading times, it's better to focus on serving lower-quality images to mobile. And, as we'll see, GraphQL doesn't work very well with documents anyway.

But let’s see what actually is wrong or could be better with GraphQL.

Strong Typing

GraphQL defines all API types, queries, and mutations in a schema file (schema.graphql). However, I've found that typing with GraphQL can be confusing, primarily because there is a lot of duplication: GraphQL defines the type in the schema, yet we need to define the types again for our backend (TypeScript with Node.js). You have to spend additional effort to make it all work with Zod, or set up some cumbersome code generation for the types.
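A hypothetical illustration of that duplication, with the same shape declared twice, once in SDL and once in TypeScript:

// The schema declares the shape once, in SDL...
const typeDefs = /* GraphQL */ `
  type Writer {
    id: ID!
    name: String!
    avatarUrl: String
  }
`;

// ...and the backend declares it again in TypeScript. Nothing but discipline
// (or code generation) keeps the two in sync.
interface Writer {
  id: string;
  name: string;
  avatarUrl?: string | null;
}

const resolvers = {
  Query: {
    // The compiler cannot check this return type against the SDL above.
    writer: (): Writer => ({ id: '1', name: 'Mary Shelley', avatarUrl: null }),
  },
};

export { typeDefs, resolvers };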

Debugging

It’s hard to find what you’re looking for in the Chrome inspector because all the endpoints look the same. In REST you can tell what data you’re getting just by looking at the URL:

[Screenshots: Chrome DevTools – REST requests vs. GraphQL requests]

Do you see the difference?

No support for status codes

REST lets you use HTTP status codes like "404 Not Found" or "500 Server Error", but GraphQL does not: it typically returns HTTP 200 and reports errors inside the response payload. To understand which query failed, you need to check each payload. The same applies to monitoring: HTTP error monitoring is easy because every failure carries its own status code, while troubleshooting GraphQL requires parsing JSON objects.

Additionally, some objects may come back empty either because they cannot be found or because an error occurred, and it can be difficult to tell the difference at a glance.
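A sketch of what that means for client code; the envelope shape follows the GraphQL spec's { data, errors } convention, and the endpoint is hypothetical:

// What REST does with one status check, a GraphQL client must do by
// parsing the { data, errors } envelope on every response.
interface GraphQLEnvelope<T> {
  data?: T | null;
  errors?: { message: string; path?: (string | number)[] }[];
}

async function gqlQuery<T>(query: string): Promise<T> {
  const res = await fetch('https://example.com/graphql', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query }),
  });
  // Usually HTTP 200 whether the query succeeded or not.
  const payload = (await res.json()) as GraphQLEnvelope<T>;
  if (payload.errors?.length) {
    // Not found? Unauthorized? A crash? Only the errors array can tell.
    throw new Error(payload.errors.map((e) => e.message).join('; '));
  }
  if (payload.data == null) throw new Error('empty data with no errors');
  return payload.data;
}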

Versioning

Everything has its price. When modifying a GraphQL API, you can deprecate fields, but you are forced to maintain backward compatibility: the fields must remain in place for the older clients that use them. You escape explicit API versioning at the price of maintaining every field indefinitely.

To be fair, REST versioning is also a pain point, but it does offer an interesting property for expiring functionality. In REST, everything is an endpoint, so you can easily block legacy endpoints for new users and measure who is still using the old ones. Redirects can also simplify migrating from older versions to newer ones in some cases.

Pagination

GraphQL Best Practices suggests the following:

The GraphQL specification is deliberately silent on several important API-related issues, such as networking, authorization, and pagination.

How “convenient” (not!). In general, as it turns out, pagination in GraphQL is very painful.
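For reference, this is what the widely used (but unspecified) Relay-style cursor pagination looks like; the field names follow the Relay connection convention, not any particular API, and in real code the cursor would be passed as a GraphQL variable rather than interpolated:

// First page of a Relay-style connection:
const firstPage = /* GraphQL */ `
  {
    writers(first: 10) {
      edges {
        cursor
        node { id name }
      }
      pageInfo {
        hasNextPage
        endCursor
      }
    }
  }
`;

// The next page threads pageInfo.endCursor back in as `after`:
const nextPage = (endCursor: string) => /* GraphQL */ `
  {
    writers(first: 10, after: "${endCursor}") {
      edges { node { id name } }
      pageInfo { hasNextPage endCursor }
    }
  }
`;

export { firstPage, nextPage };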

Caching

The point of caching is to return a server response faster by storing the results of previous computations. In REST, URLs are unique identifiers for the resources users are trying to access, so you can cache at the resource level. Caching is part of the HTTP specification, and both browsers and mobile devices can use the URL to cache resources locally (just as they do with images and CSS).

In GraphQL this gets tricky because each query can be different even though it works on the same entity. It requires field-level caching, which is not easy to do with GraphQL because it uses a single endpoint. Libraries like Prisma and DataLoader have been developed to help with such scenarios, but they still fall short of REST's capabilities.

Media types

GraphQL does not support uploading documents to the server, something REST handles with multipart/form-data by default. Apollo's developers have been working on a file-upload solution, but it is difficult to set up. Additionally, GraphQL does not support media-type headers when retrieving a document, which is what allows a browser to display the file correctly.

I previously made a post about the steps one must take in order to upload an image to Sitecore Media Library (either XM Cloud or XM 10.3 or newer) by using Authoring GraphQL API.

Security

When working with GraphQL, you can query exactly what you need, but you should be aware that this comes with complex security implications. If an attacker crafts costly, deeply nested requests to overload the server, it can amount to a DDoS attack.

An attacker may also be able to access fields that are not intended for public exposure. With REST, you can control permissions at the URL level; with GraphQL, this has to happen at the field level:

user {
  username   # anyone can see this
  email      # private field
  post {
    title    # some posts are private
  }
}
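Field-level authorization then has to live in resolver code (or a schema directive/plugin), since there is no URL to gate. A minimal sketch with graphql-js-style resolvers, where the context shape and the rules are assumptions:

interface Context {
  userId: string | null;
  isAdmin: boolean;
}

interface User {
  id: string;
  username: string;
  email: string;
}

export const userResolvers = {
  User: {
    username: (user: User) => user.username, // public field
    email: (user: User, _args: unknown, ctx: Context) => {
      // Private field: only the owner or an admin may read it.
      if (ctx.isAdmin || ctx.userId === user.id) return user.email;
      return null; // or throw a GraphQLError
    },
  },
};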

Conclusion

REST has become the new SOAP; now GraphQL is the new REST. History repeats itself. It's hard to say whether GraphQL will be just a popular trend that gradually fades, or whether it will truly change the rules of the game. One thing is certain: it still needs some development to reach full maturity.

]]>
https://blogs.perficient.com/2024/03/06/graphql-whats-wrong-with-it/feed/ 0 358426