A Comprehensive Guide to IDMC Metadata Extraction in Table Format https://blogs.perficient.com/2024/11/16/a-comprehensive-guide-to-idmc-metadata-extraction-in-table-format/

Metadata Extraction: IDMC vs. PowerCenter

When we talk about metadata extraction, IDMC (Intelligent Data Management Cloud) can be trickier than PowerCenter. Let’s see why.
In PowerCenter, all metadata is stored in a local database. This setup lets us use SQL queries to get data quickly and easily. It’s simple and efficient.
In contrast, IDMC relies on the IICS Cloud Repository for metadata storage. This means we have to use APIs to get the data we need. While this method works well, it can be more complicated. The data comes back in JSON format. JSON is flexible, but it can be hard to read at first glance.
To make it easier to understand, we convert the JSON data into a table format. We use a tool called jq to help with this. jq allows us to change JSON data into CSV or table formats. This makes the data clearer and easier to analyze.

In this section, we will explore jq. jq is a command-line tool that helps you work with JSON data easily. It lets you parse, filter, and change JSON in a simple and clear way. With jq, you can quickly access specific parts of a JSON file, making it easier to work with large datasets. This tool is particularly useful for developers and data analysts who need to process JSON data from APIs or other sources, as it simplifies complex data structures into manageable formats.
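For example, given a small JSON array shaped like the taskflow status responses used later in this post (the field names match those queries; the file name and values are made up for illustration), jq can flatten it straight to CSV.

Contents of sample.json:

[
  { "assetName": "tf_orders_daily", "startTime": "2024-11-01T01:00:00Z", "endTime": "2024-11-01T01:05:12Z" },
  { "assetName": "tf_customers_sync", "startTime": "2024-11-01T02:00:00Z", "endTime": "2024-11-01T02:03:40Z" }
]

Command:

jq -r '.[] | [.assetName, .startTime, .endTime] | @csv' sample.json

Output:

"tf_orders_daily","2024-11-01T01:00:00Z","2024-11-01T01:05:12Z"
"tf_customers_sync","2024-11-01T02:00:00Z","2024-11-01T02:03:40Z"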

For instance, if the requirement is to gather Succeeded Taskflow details, this involves two main processes. First, you’ll run the IICS APIs to gather the necessary data. Once you have that data, the next step is to execute a jq query to pull out the specific results. Let’s explore two methods in detail.

Extracting Metadata via Postman and jq:-

Step 1:
To begin, utilize the IICS APIs to extract the necessary data from the cloud repository. After successfully retrieving the data, ensure that you save the file in JSON format, which is ideal for structured data representation.
[Screenshot: Postman output of the API call]

[Screenshot: saving the response as a JSON file]

Step 2:
Construct a jq query to extract the specific details from the JSON file. This will allow you to filter and manipulate the data effectively.

Windows:-
(echo Taskflow_Name,Start_Time,End_Time & jq -r ".[] | [.assetName, .startTime, .endTime] | @csv" C:\Users\christon.rameshjason\Documents\Reference_Documents\POC.json) > C:\Users\christon.rameshjason\Documents\Reference_Documents\Final_results.csv

Linux:-
jq -r '["Taskflow_Name","Start_Time","End_Time"],(.[] | [.assetName, .startTime, .endTime]) | @csv' /opt/informatica/test/POC.json > /opt/informatica/test/Final_results.csv

Step 3:
To proceed, run the jq query in the Command Prompt or Terminal. Upon successful execution, the results will be saved in CSV file format, providing a structured way to analyze the data.

[Screenshot: executing the jq query in the Command Prompt]

[Screenshot: the generated CSV file]

Extracting Metadata via Command Prompt and jq:-

Step 1:
Formulate a cURL command that utilizes IICS APIs to access metadata from the IICS Cloud repository. This command will allow you to access essential information stored in the cloud.

Windows and Linux:-
curl -s -L -X GET -u USER_NAME:PASSWORD "https://<BASE_URL>/active-bpel/services/tf/status?runStatus=Success" -H "Accept: application/json"

Step 2:
Develop a jq query along with cURL to extract the required details from the JSON file. This query will help you isolate the specific data points necessary for your project.

Windows:
(echo Taskflow_Name,Start_Time,End_Time & curl -s -L -X GET -u USER_NAME:PASSWORD "https://<BASE_URL>/active-bpel/services/tf/status?runStatus=Success" -H "Accept: application/json" | jq -r ".[] | [.assetName, .startTime, .endTime] | @csv") > C:\Users\christon.rameshjason\Documents\Reference_Documents\Final_results.csv

Linux:
curl -s -L -X GET -u USER_NAME:PASSWORD "https://<BASE_URL>/active-bpel/services/tf/status?runStatus=Success" -H "Accept: application/json" | jq -r '["Taskflow_Name","Start_Time","End_Time"],(.[] | [.assetName, .startTime, .endTime]) | @csv' > /opt/informatica/test/Final_results.csv

Step 3:
Launch the Command Prompt and run the cURL command that includes the jq query. Upon running the query, the results will be saved in CSV format, which is widely used for data handling and can be easily imported into various applications for analysis.

[Screenshot: running the combined cURL and jq command in the Command Prompt]

Conclusion
To wrap up, the methods outlined for extracting workflow metadata from IDMC are designed to streamline your workflow, minimizing manual tasks and maximizing productivity. By automating these processes, you can dedicate more energy to strategic analysis rather than tedious data collection. If you need further details about IDMC APIs or jq queries, feel free to drop a comment below!

Reference Links:-

IICS Data Integration REST API – Monitoring taskflow status with the status resource API

jq Download Link – Jq_Download

Don’t try to fit a Layout Builder peg in a Site Studio hole. https://blogs.perficient.com/2024/11/14/dont-try-to-fit-a-layout-builder-peg-in-a-site-studio-hole/

How to ensure your toolset matches your vision, team, and long-term goals.

Seems like common sense, right? Use the right tool for the right purpose. However, in the DXP and Drupal space, we often see folks trying to fit their project to the tool and not the tool to the project.

There are many modules, profiles, and approaches to building Drupal out there, and almost all of them have their time and place. The key is knowing when to implement which, and why. I am going to take a little time here to dive into one of those key decisions we at Perficient face frequently and how we work with our clients to ensure the proper approach is selected for their Drupal application.

Site Studio vs. Standard Drupal (blocks, views, content, etc.) vs. Layout Builder

I would say this is the most common area where we see confusion about the best tooling and how to pick. To start, let's summarize the various options (there are many more approaches available, but these are the common ones we encounter), as well as their pros and cons.

First, we have Acquia Site Studio, a low-code site management tool built on top of Drupal. And it is SLICK. It provides web-editable templates, components, helpers, and more that allow a well-trained content admin to control almost every aspect of the look and feel of the website. There are drag-and-drop editors for all templates that would traditionally be TWIG, as well as UI editors for styles, fonts, and more. This is the Cadillac of low-code solutions for Drupal, but that comes with some trade-offs in terms of developer customizability and config management strategies. We have also noticed that not every content team actually utilizes the full scope of Site Studio features, which can lead to additional complexity without any benefit; but when the team is right, Site Studio is a very powerful tool.

The next option we frequently see is a standard Drupal build utilizing content types and blocks to control page layouts, with WYSIWYG editors for rich content and a standard Drupal theme with SASS, TWIG templates, and so on. This is the option with the most developer familiarity, the most flexibility for custom work, and clean configuration management. The trade-off here is that most customizations will require a developer to build them out, and content editors are limited to coloring between the lines of what was initially built. We have seen content teams that were very satisfied with the defined controls, but also teams that felt handcuffed by the limitations and wanted more UI/UX customization without deployments or developer involvement.

The third and final option we will be discussing here is the standard Drupal option described above, with the addition of Layout Builder. Layout Builder is a Drupal core module that enables users to attach layouts, such as one-column, two-column, and more, to various Drupal entity types (content, users, etc.). These layouts then support the placement of blocks into their various regions, giving users drag-and-drop flexibility over laying out their content. Layout Builder does not support full site templates or custom theme work such as site-wide CSS changes. Layout Builder can be a good middle ground for content teams not looking for the full customization and accompanying complexity of Site Studio, but desiring some level of content layout control. Layout Builder does come with some permissions and configuration management considerations. It is important to decide what is treated as content and what as configuration, as well as to define roles and permissions so the right editors have access to the right level of customization.

Now that we have covered the options as well as the basic pros and cons of each, how do you know which tool is right for your team and your project? This is where we at Perficient start with a holistic review of your needs, short- and long-term goals, and the technical ability of your internal team. It is important to evaluate this honestly. Just because something has all the bells and whistles doesn't mean you have the team and time to utilize them; it may end up a sunk cost with limited ROI. On the flip side, if you have a very technically robust team, you don't want to handcuff them and leave them frustrated with limitations that could cost you marketing opportunities and, with them, higher ROI.

Additional considerations that can help guide your choice in toolset are future goals and initiatives. Is a rebrand coming soon? Is your team going to quickly expand with more technical staff? These might point toward Site Studio as the right choice. Is your top priority consistency and limiting unnecessary customizations? Then standard structured content might be the best approach. Do you want to be able to customize your site, but just don't have the time or budget to undertake Site Studio? Layout Builder might be something you should look at closely.

Perficient starts these considerations in the first discussions with our potential clients and continues to guide them through the sales and estimation process to ensure the right basic Drupal tooling is selected. That guidance carries through implementation as we inform stakeholders about the best toolsets beyond the core systems. In future articles we will discuss the advantages and disadvantages of various SSO, DAM, analytics, and Drupal module solutions, as well as the new Starshot Drupal initiative and how it will impact the planning of your next Drupal build!

A Step-by-Step Guide to Extracting Workflow Details for PC-IDMC Migration Without a PC Database https://blogs.perficient.com/2024/11/08/a-step-by-step-guide-to-extracting-workflow-details-for-pc-idmc-migration-without-a-pc-database/

In the PC-IDMC conversion process, it can be challenging to gather detailed information about workflows. Specifically, we often need to determine:

  • The number of transformations used in each mapping.
  • The number of sessions utilized within the workflow.
  • Whether any parameters or variables are being employed in the mappings.
  • The count of reusable versus non-reusable sessions used in the workflow.

To obtain these details, we currently have to open each workflow individually, which is time-consuming. Alternatively, we could use complex queries to extract this information from the PowerCenter metadata in the database tables.

This section focuses on XQuery, a versatile language designed for querying and extracting information from XML files. When workflows are exported from the PowerCenter repository or Workflow Manager, the data is generated in XML format. By employing XQuery, we can effectively retrieve the specific details and data associated with the workflow from this XML file.

Step-by-Step Guide to Extracting Workflow Details Using XQuery: –

For instance, if the requirement is to retrieve all reusable and non-reusable sessions for a particular workflow or a set of workflows, we can utilize XQuery to extract this data efficiently.

Step 1:
Begin by exporting the workflows from either the PowerCenter Repository Manager or the Workflow Manager. You have the option to export multiple workflows together as one XML file, or you can export a single workflow and save it as an individual XML file.
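For reference, here is a trimmed sketch of the structure that the XQuery below navigates. Only the elements and attributes referenced by the query are shown, and the names are illustrative; a real export contains many more elements and attributes:

<POWERMART>
  <REPOSITORY>
    <FOLDER NAME="SALES">
      <SESSION NAME="s_m_load_orders" MAPPINGNAME="m_load_orders"/>
      <WORKFLOW NAME="wf_daily_load">
        <SESSION NAME="s_m_load_customers" MAPPINGNAME="m_load_customers"/>
        <TASKINSTANCE TASKNAME="s_m_load_orders"/>
        <TASKINSTANCE TASKNAME="s_m_load_customers"/>
      </WORKFLOW>
    </FOLDER>
  </REPOSITORY>
</POWERMART>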

[Screenshot: exported PowerCenter workflow XML files]

Step 2:
Develop the XQuery based on our specific requirements. In this case, we need to fetch all the reusable and non-reusable sessions from the workflows.

(: Build a CSV header row plus one line per session :)
let $header := "Folder_Name,Workflow_Name,Session_Name,Mapping_Name"
let $dt :=
    (let $data :=
        (: Sessions defined directly under each WORKFLOW :)
        ((for $f in POWERMART/REPOSITORY/FOLDER
        let $fn := data($f/@NAME)
        return
            for $w in $f/WORKFLOW
            let $wn := data($w/@NAME)
            return
                for $s in $w/SESSION
                let $sn := data($s/@NAME)
                let $mn := data($s/@MAPPINGNAME)
                return
                    <Names>
                        {
                            $fn ,
                            "," ,
                            $wn ,
                            "," ,
                            $sn ,
                            "," ,
                            $mn
                        }
                    </Names>)
        |
        (: Folder-level sessions, matched to the workflows that reference them via TASKINSTANCE :)
        (for $f in POWERMART/REPOSITORY/FOLDER
        let $fn := data($f/@NAME)
        return
            for $s in $f/SESSION
            let $sn := data($s/@NAME)
            let $mn := data($s/@MAPPINGNAME)
            return
                for $w in $f/WORKFLOW
                let $wn := data($w/@NAME)
                let $wtn := data($w/TASKINSTANCE/@TASKNAME)
                where $sn = $wtn
                return
                    <Names>
                        {
                            $fn ,
                            "," ,
                            $wn ,
                            "," ,
                            $sn ,
                            "," ,
                            $mn
                        }
                    </Names>))
    (: The element constructor joins the values with spaces; strip them out :)
    for $test in $data
    return
        replace($test/text(), " ", ""))
return
    string-join(($header, $dt), "&#10;")

Step 3:
Select the necessary third-party tools to execute the XQuery, or opt for online tools if preferred. For example, you can use BaseX, Altova XMLSpy, and others. In this instance, we are using BaseX, which is an open-source tool.

Create a database in BaseX to run the XQuery.

[Screenshot: creating a BaseX database]
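If you prefer the command line over the GUI, the BaseX standalone client can also run the query directly against the exported file. Something along these lines should work, where workflows.xml is the export and sessions.xq holds the query above (both file names are placeholders, and options can vary slightly between BaseX versions):

basex -i workflows.xml sessions.xq > sessions.csv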

Step 4: Enter the created XQuery into the third-party tool or online tool to run it and retrieve the results.

[Screenshot: executing the XQuery]

Step 5:
Export the results in the required file format.

[Screenshot: exporting the output]

Conclusion:
These simple techniques let you extract workflow details effectively, helping you plan the migration and identify early which workflows will need complex manual conversion. Many queries exist to fetch different kinds of data. If you need more XQueries, just leave a comment below!

3 Key Insurance Takeaways From InsureTech Connect 2024 https://blogs.perficient.com/2024/10/29/3-key-insurance-takeaways-from-insuretech-connect-2024/

The 2024 InsureTech Connect (ITC) conference was truly exhilarating, with key takeaways impacting the insurance industry. Each year, it continues to improve, offering more relevant content, valuable industry connections, and opportunities to delve into emerging technologies.

This year’s event was no exception, showcasing the importance of personalization to the customer, tech-driven relationship management, and AI-driven underwriting processes. The industry is constantly evolving, and ITC demonstrated how everyone within the insurance industry is aligning around the same purpose.

The Road Ahead: Transformative Trends

As I reflect on ITC and my experience, it is evident that the industry’s progress is remarkable. Here are a few key takeaways from my perspective that will shape our industry roadmap:

1. Personalization at Scale

We’ve spoken for many years about the need to drive greater personalization across interactions in our industry. We know that customers engage with companies that demonstrate authentic knowledge of their relationship. This year, we saw great examples of how companies are treating personalization not as an incremental initiative but as something embedded at key moments in the insurance experience, particularly underwriting and claims.

For example, New York Life highlighted how personalization is driving generational loyalty. We’ve been working with industry-leading insurers to help drive personalization across the distribution network: from carriers to agents to the final policyholder.

Success In Action: Our client wanted to integrate better contact center technology to improve internal processes and allow for personalized, proactive messaging to clients. We implemented Twilio Flex and leveraged its outbound notification capabilities to support customized messaging while also integrating their cloud-based outbound dialer and workforce management suite. The insurer now has optimized agent productivity and agent-customer communication, as well as newfound access to real-time application data across the entire contact center.

2. Holistic, Well-Connected Distribution Network

Insurance has always had a complex distribution network across platforms, partnerships, carriers, agents, producers, and more. Leveraging technology to manage these relationships opens opportunities to gain real-time insights and implement effective strategies, fostering holistic solutions and moving away from point solutions. Managing this complexity and maximizing the value of this network requires a good business and digital transformation strategy.

Our proprietary Envision process has been leading the way to help carriers navigate this complex system with proprietary strategy tools, historical industry data, and best practices.

3. Artificial Intelligence (AI) for Process Automation

Not surprisingly, AI permeated many of the presentations and demos across the sessions. AI offers insurers unique decisioning throughout the value chain to create differentiation. It was evident that while we often talk about AI as an overarching technology, the use cases were more point solutions across the insurance value chain. Moreover, AI is not here to replace the human, but rather to assist the human. By automating mundane process activities, mindshare and human capital can be invested in more value-added activity and the critical problems that improve customer experience. Because these point solutions are available across many disparate groups, organizational mandates demand safe and ethical use of AI models.

Our PACE framework provides a holistic approach to responsibly operationalize AI across an organization. It empowers organizations to unlock the benefits of AI while proactively addressing risks.

Our industry continues to evolve in delivering its noble purpose: protecting individuals’ and businesses’ property, liability, and financial obligations. Technology is certainly an enabler of this purpose, but transformation must be managed to be effective.

Perficient Is Driving Success and Innovation in Insurance

Want to know the now, new, and next of digital transformation in insurance? Contact us and let us help you meet the challenges of today and seize the opportunities of tomorrow in the insurance industry.

The risk of using String objects in Java https://blogs.perficient.com/2024/10/25/the-risk-of-using-string-objects-in-java/

If you are a Java programmer, you may have been engaging in an insecure practice without knowing it. We all know (or should know) that it is not safe to store unencrypted passwords in the database, because that might compromise the protection of data at rest. But that is not the only issue: if at any point our code holds an unencrypted password or other sensitive data in a String variable, even temporarily, there could be a risk.

Why is there a risk?

String objects were not created to store passwords; they were designed to optimize space in our program. String objects in Java are “immutable,” which means that after you create a String object and assign it a value, you cannot remove or modify that value. I know you might be thinking that this is not true, because you can assign “Hello World” to a given String variable and in the following line assign it “Goodbye, cruel world,” and that is technically correct. The problem is that the “Hello World” you created first keeps living in the String pool even if you can no longer see it.

What is the String pool?

Java uses a special memory area called the String pool to store String literals. When you create a String literal, Java checks the String pool first to see if an identical String already exists. If it does, Java reuses the reference to the existing String, saving memory. This means that if you create 25,000 String objects and all of them have the value “Michael Jackson,” only one String literal will be stored in memory and all the variables will point to the same one, optimizing the space in memory.
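A quick way to observe this reuse (a minimal illustration; the == reference comparison is used here only to show pooling, not as a recommended way to compare Strings):

public class PoolDemo {
    public static void main(String[] args) {
        String a = "Michael Jackson";
        String b = "Michael Jackson";
        System.out.println(a == b);                              // true: both variables point to the same pooled literal
        System.out.println(a == new String("Michael Jackson"));  // false: new String() creates a separate object on the heap
    }
}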

Ok, the object is in the String pool, where is the risk?

The String Object will remain in memory for some time before being deleted by the garbage collector. If an attacker has access to the content of the memory, they could obtain the password stored there.

Let’s see a basic example of this. The following code creates a String object and assigns it a secret password: “¿This is a secret password”. Then that same variable is overwritten three times, and the debugger’s Instances Inspector will help us locate String objects starting with the character “¿”.

Example 1 Code:

[Screenshot: Example 1 code]
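The screenshot is not reproduced as text here; the following is a minimal sketch of what Example 1 describes (the class name, the variable name, and the exact reassignments are illustrative):

public class Example1 {
    public static void main(String[] args) {
        String a = "¿This is a secret password";
        a = "Hello World";
        a = "Goodbye, cruel world";
        a = null;
        // Put a breakpoint here and search the debugger's Instances Inspector
        // for String objects starting with "¿": the original password literal
        // is still present in memory.
        System.out.println("done");
    }
}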

Example 1 Debugger:

[Screenshot: Example 1 in the debugger]

 

As you can see in the image, when the debugger reaches line 8, even after the value of the String variable “a” has been changed three times and set to null at the end, all the previous values remain in memory, including our “¿This is a secret password”.

 

Got it. So just avoiding String variables will solve the problem, right?

It is not that simple. Let us consider a second example. Now we are smarter: we are going to use a char array to store the password instead of a String, to avoid having it saved in the String pool. In addition, rather than having the secret password as a literal in the code, it will be stored unencrypted in a text file (which, by the way, is not recommended either, but we will do it for this example). A BufferedReader will read the contents of the file.

Unfortunately, as you will see, the password still ends up in the String pool.

Example 2 Code:

[Screenshot: Example 2 code]
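Again, a minimal sketch of what the screenshot shows (the file name is an assumption):

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class Example2 {
    public static void main(String[] args) throws IOException {
        char[] password;
        try (BufferedReader reader = new BufferedReader(new FileReader("password.txt"))) {
            // No String variable is declared, but readLine() still returns a String,
            // so the unencrypted file content ends up in memory anyway.
            password = reader.readLine().toCharArray();
        }
        System.out.println(password.length);
    }
}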

Example 2 Debugger:

 

[Screenshot: Example 2 in the debugger]

This case is even more puzzling because a String object was never explicitly created in the code. The problem is that BufferedReader.readLine() temporarily returns a String object, and its content, with the unencrypted password, will remain in the String pool.

What can I do to solve this problem?

In this last example, the unencrypted password is again stored in a text file and we again use a BufferedReader to read the contents of the file, but instead of using BufferedReader.readLine(), which returns a String, we use BufferedReader.read(), which stores the content of the file in a char array. As seen in the debugger’s screenshot, this time the file’s contents are not available in the String pool.

Example 3 Code:

[Screenshot: Example 3 code]
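A sketch of the fix described above (the file name and buffer size are assumptions):

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.Arrays;

public class Example3 {
    public static void main(String[] args) throws IOException {
        char[] password = new char[64];
        int length;
        try (BufferedReader reader = new BufferedReader(new FileReader("password.txt"))) {
            // read(char[]) fills the char array directly; no intermediate String is created
            length = reader.read(password);
        }
        System.out.println("Read " + length + " characters");
        // Overwrite the secret as soon as it is no longer needed
        Arrays.fill(password, '\0');
    }
}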

Example 3 Debugger:

[Screenshot: Example 3 in the debugger]

In summary

To solve this problem, consider following the principles listed below:

  1. Do not create String literals with confidential information in your code.
  2. Do not store confidential information in String objects. You can use other types of objects to store this information, such as the classic char array. After processing the data, make sure to overwrite the char array with zeros or some random characters, just to confuse attackers.
  3. Avoid calling methods that will return the confidential information as String, even if you will not save that into a variable.
  4. Consider applying an additional security layer by encrypting confidential information. The SealedObject in Java is a great alternative for this. A SealedObject is a Java object in which you can store sensitive data: you provide a secret key, and the object is encrypted and serialized. This is useful if you want to transmit it and ensure the content remains unexposed. Afterward, you can decrypt it using the same secret key. Just one piece of advice: after decrypting it, please do not store the result in a String object. A brief sketch follows this list.
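A rough sketch of the SealedObject approach follows. AES with its default mode is used only for brevity, and the hard-coded sample value is illustrative; real code should choose an explicit cipher mode and manage the key properly:

import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SealedObject;
import javax.crypto.SecretKey;
import java.util.Arrays;

public class SealedPasswordExample {
    public static void main(String[] args) throws Exception {
        char[] password = {'s', '3', 'c', 'r', '3', 't'};

        SecretKey key = KeyGenerator.getInstance("AES").generateKey();
        Cipher cipher = Cipher.getInstance("AES");
        cipher.init(Cipher.ENCRYPT_MODE, key);

        // Arrays are Serializable, so the char[] can be sealed directly
        SealedObject sealed = new SealedObject(password, cipher);
        Arrays.fill(password, '\0');   // wipe the clear-text copy

        // Later: recover it with the same key, and avoid turning it into a String
        char[] recovered = (char[]) sealed.getObject(key);
        System.out.println(recovered.length);
    }
}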
Perficient Named in Forrester’s App Modernization and Multicloud Managed Services Landscape, Q4 2024 https://blogs.perficient.com/2024/10/25/perficient-in-forresters-app-modernization-and-multicloud-managed-services-landscape-q4-2024/

As new technologies become available within the digital space, businesses must adapt quickly by modernizing their legacy systems and harnessing the power of the cloud to stay competitive. Forrester’s 2024 report recognizes 42 notable providers, and we’re proud to announce that Perficient is among them.

We believe our inclusion in Forrester’s Application Modernization and Multicloud Managed Services Landscape, Q4 2024 reflects our commitment to evolving enterprise applications and managing multicloud environments to enhance customer experiences and drive growth in a complex digital world.

With the demand for digital transformation growing rapidly, this landscape provides valuable insights into what businesses can expect from service providers, how different companies compare, and the options available based on provider size and market focus.

Application Modernization and Multicloud Managed Services

Forrester defines application modernization and multicloud managed services as:

“Services that offer technical and professional support to perform application and system assessments, ongoing application multicloud management, application modernization, development services for application replacements, and application retirement.”

According to the report,

“Cloud leaders and sourcing professionals implement application modernization and multicloud managed services to:

  • Deliver superior customer experiences.
  • Gain access to technical and transformational skills and capabilities.
  • Reduce costs associated with legacy technologies and systems.”

By focusing on application modernization and multicloud management, Perficient empowers businesses to deliver superior customer experiences through agile technologies that boost user satisfaction. We provide clients with access to cutting-edge technical and transformational skills, allowing them to stay ahead of industry trends. Our solutions are uniquely tailored to reduce costs associated with maintaining legacy systems, helping businesses optimize their IT budgets while focusing on growth.

Focus Areas for Modernization and Multicloud Management

Perficient has honed its expertise in several key areas that are critical for organizations looking to modernize their applications and manage multicloud environments effectively. As part of the report, Forrester asked each provider included in the Landscape to select the top business scenarios for which clients select them, and from there determined the extended business scenarios that highlight differentiation among the providers. Out of those extended application modernization and multicloud services business scenarios, Perficient self-reported three key scenarios for which clients work with us:

  • Infrastructure Modernization: We help clients transform their IT infrastructure to be more flexible, scalable, and efficient, supporting the rapid demands of modern applications.
  • Cloud-Native Development Execution: Our cloud-native approach enables new applications to leverage cloud environments, maximizing performance and agility.
  • Cloud Infrastructure “Run”: We provide ongoing support for cloud infrastructure, keeping applications and systems optimized, secure, and scalable.

Delivering Value Through Innovation

Perficient is listed among large consultancies with an industry focus in financial services, healthcare, and the manufacturing/production of consumer products. Additionally, our geographic presence in North America, Latin America, and the Asia-Pacific region was noted.

We believe that Perficient’s inclusion in Forrester’s report serves as another milestone in our mission to drive digital innovation for our clients across industries. We are proud to be recognized among notable providers and look forward to continuing to empower our clients to transform their digital landscapes with confidence. For more information on how Perficient can help your business with application modernization and multicloud managed services, contact us today.

Download the Forrester report, The Application Modernization And Multicloud Managed Services Landscape, Q4 2024, to learn more (link to report available to Forrester subscribers and for purchase).

5 Takeaways: Enhancing Trust in Healthcare [Webinar] https://blogs.perficient.com/2024/10/24/5-takeaways-enhancing-trust-in-healthcare-webinar/

In our recent webinar, “Enhancing Trust in Healthcare,” experts David Allen and Michael Porter, along with Appian’s Matt Collins, addressed the concerning decline in consumer trust within the healthcare sector.

Historically, healthcare has maintained higher levels of trust compared to other industries, but a recent Gallup survey shows that this trust is now at a near-record low.

Related: 9 Healthcare Trends For 2024

The discussion explored actionable strategies to enhance trust among both patients and members, emphasizing the importance of transparency, effective communication, and improving outcomes. Our experts shared insights on how healthcare organizations can rebuild confidence and ease experiences.

5 Ways to Enhance Trust in Healthcare

1. Understand the key factors contributing to patient/member mistrust

Nearly one third of Gallup respondents cited ‘very little’ confidence in the medical system, well above the 20-year average. This highlights a significant gap in public confidence that healthcare organizations must address.

Factors contributing to this mistrust include inconsistent communication, perceived lack of transparency, and negative past experiences.

For instance, consider the following statistics:

  • 30% of consumers have delayed or skipped care after finding inaccurate provider information within their health plan’s transparency tools.
  • 49% of providers identify that patient information errors are a primary cause of denied claims (e.g., authorizations, eligibility, etc.)

Related: Build Empathy and Understanding. Ease Patient and Member Journeys.

2. Optimize your approach by keeping the consumer at the heart of progress

Traditional approaches to technology often lead to friction points that can erode trust with your patients and members. We recommend instead that healthcare organizations embrace an outcomes-based mindset and approach.

This starts by aligning the enterprise around a strategic vision and actionable KPIs. It’s a holistic, iterative process rooted in value creation and supported by change management.

Hallmarks of a business transformation approach include:

  • Alignment with organizational strategy
  • Assessment of overall readiness
  • Orchestration around the user
  • An iterative MVP approach
  • A flexible technical foundation
  • Intentional focus on data and KPIs

Discover More: Business Transformation in Healthcare

3. Tactically and strategically ease the healthcare journey

Consumers are navigating an increasing number of digital touchpoints throughout their healthcare journey. These digital interactions are crucial for engagement and proactive health monitoring.

By leveraging technology to provide timely updates and personalized care, healthcare organizations can strengthen relationships with patients and members.

Focused use cases could include:

  • Referrals + Scheduling: Simpler, faster, more-memorable referral journeys for patients
  • Health Monitoring: Patients feel known and well cared for in their health journey
  • Eligibility: Faster verification and clarity of choice for the consumer
  • Prior Authorizations: Reduce guesswork; patient already feels worried and unprepared
  • Revenue Cycle Management: Provider has insight into revenue and financial status in near-real time
  • Claims Management: Member feels confident the insurer can tell them where they stand at any time

4. Break down silos to improve outcomes

Technologies deployed in narrow silos can ultimately contribute to a challenge as much as they seek to solve it. While different technology systems are good at their specific role in the organization, effective data transfer between systems often proves challenging, hindering health and business outcomes.

Breaking down these silos through integrated systems and collaborative approaches can enhance communication and coordination across the healthcare ecosystem. Ideally, modernization efforts will maximize technology to drive health innovation, efficiency, and interoperability.

  • Orchestrate resources and decision-making processes into a culture that promotes growth
  • Set strategic parameters for operational excellence and champion iterative delivery models
  • Innovate beyond mandated goals to add business value, meet consumers’ evolving expectations, and deliver equitable care and services
  • Accelerate value with secure, compliant, and modern platforms

5. Determine if intelligent automation and advanced analytics can address challenges

Trust gaps are commonly voiced by patients and members alike. These breakdowns in trust often manifest as the result of weakly orchestrated processes and data assets.

Intelligent automation can address a number of these trust-influencing challenges, including:

  • Self-Service + Transparency: Control and visibility over actions and impacts in the digital journey
  • Accuracy + Completeness: Comprehensive, up-to-date information across the digital journey
  • Speed of Response: Close to real-time updates about critical information in the digital journey
  • Privacy + Security: Compliance aligned with appropriate flexibility in using my data to best serve and enhance the digital journey

Success Story: Improving Experiences and Offsetting Call Center Volume

Elevate Trust With Expert Healthcare Guidance

We blend healthcare and automation expertise to help leaders optimize processes and elevate experiences.

Discover why we have been trusted by the 10 largest health systems and the 10 largest health insurers in the U.S., and Modern Healthcare consistently recognizes us as one of the largest healthcare consulting firms.

Our experts will help you identify how work is performed today and how you can optimize for tomorrow. Contact us to get started.

Watch the Full Webinar Now:

A New Era of AI Agents in the Enterprise? https://blogs.perficient.com/2024/10/22/a-new-era-of-custom-ai-in-the-enterprise/

In a move that has sparked intense discussion across the enterprise software landscape, Klarna announced its decision to drop both Salesforce Sales Cloud and Workday, replacing these industry-leading platforms with its own AI-driven tools. This announcement, led by CEO Sebastian Siemiatkowski, may signal a paradigm shift toward using custom AI agents to manage critical business functions such as customer relationship management (CRM) and human resources (HR). While mostly social media fodder at this point, this very public bet on SaaS replacement has raised important questions about the future of enterprise software and how Agentic AI might reshape the way businesses operate.

AI Agents – Impact on Enterprises

Klarna’s move may be a one-off internal pivot, or it may signal broader shifts that impact enterprises worldwide. Here are three ways this transition could affect the broader market:

  1. Customized AI Over SaaS for Competitive Differentiation: Enterprises are always on the lookout for ways to differentiate themselves from the competition. Klarna’s decision may reflect an emerging trend: companies developing custom Agentic AI solutions to better tailor workflows and processes to their specific needs. The advantage here lies in having a system that is purpose-built for an organization’s unique requirements, potentially driving innovation and efficiencies that are difficult to achieve with out-of-the-box software. However, this approach also raises challenges. Building Agentic AI solutions in-house requires significant technical expertise, resources, and time. Not all companies will have the bandwidth to undertake such a transformation, but for those who do, it could become a key differentiator in terms of operational efficiency and personalized customer experiences.
  2. Shift in Vendor Relationships and Power Dynamics: If more enterprises follow Klarna’s lead, we could see a shift in the traditional vendor-client dynamic. For years, businesses have relied on SaaS providers like Salesforce and Workday to deliver highly specialized, integrated solutions. However, AI-driven automation might diminish the need for comprehensive, multi-purpose platforms. Instead, companies might lean towards modular, lightweight tech stacks powered by AI agents, allowing for greater control and flexibility. This shift could weaken the power and influence of SaaS providers if enterprises increasingly build customized systems in-house. On the other hand, it could also lead to new forms of partnership between AI providers and SaaS companies, where AI becomes a layer on top of existing systems rather than a full replacement.
  3. Greater Focus on Data and Compliance Risks: With AI agents handling sensitive business functions like customer management and HR, companies like Klarna must ensure that data governance, compliance, and security are up to the task. This shift toward Agentic AI requires robust mechanisms to manage customer and employee data, especially in industries with stringent regulatory requirements, like finance and healthcare. Marc Benioff, Salesforce’s CEO, raised these concerns directly, questioning how Klarna will handle compliance, governance, and institutional memory. AI might automate many processes, but without the proper safeguards, it could introduce new risks that legacy SaaS providers have long addressed. Enterprises looking to follow Klarna’s example will need to rethink how they manage these critical issues within their AI-driven frameworks.

AI Agents – SaaS Vendors Respond

As enterprises explore the potential of Agentic AI-driven systems, SaaS providers like Salesforce and Workday must adapt to a new reality. Klarna’s decision could be the first domino in a broader shift, forcing these companies to reconsider their own offerings and strategies. Here are three possible responses we could see from the SaaS giants:

  1. Doubling Down on AI Integration: Salesforce and Workday are not standing still. In fact, both companies are already integrating AI into their platforms. Salesforce’s Einstein and the newly introduced Agentforce are examples of AI-powered tools designed to enhance customer interactions and automate tasks. We might see a rapid acceleration of these efforts, with SaaS providers emphasizing Agentic AI-driven features that keep businesses within their ecosystems rather than prompting them to build in-house solutions. However, as Benioff pointed out, the key might be blending AI with human oversight rather than replacing humans altogether. This hybrid approach will allow Salesforce and Workday to differentiate themselves from pure AI solutions by ensuring that critical human elements—like decision-making, customer empathy, and regulatory knowledge—are never lost.
  2. Building Modular and Lightweight Offerings: Klarna’s move underscores the desire for flexibility and control over tech stacks. In response, SaaS companies may offer more modular, API-driven solutions that allow enterprises to mix and match components based on their needs. This would enable businesses to take advantage of best-in-class SaaS features without being locked into a monolithic platform. By offering modular systems, Salesforce and Workday could cater to enterprises looking to integrate AI while maintaining the core advantages of established SaaS infrastructure—such as compliance, security, and data management.
  3. Strengthening Data Governance and Compliance as Key Differentiators: As AI grows in influence, data governance, compliance, and security will become critical battlegrounds for SaaS providers. SaaS companies like Salesforce and Workday have spent years building trusted systems that comply with various regulatory frameworks. Klarna’s AI approach will be closely scrutinized to ensure it meets these same standards, and any slip-ups could provide an opening for SaaS vendors to argue that their systems remain the gold standard for enterprise-grade compliance. By doubling down on their strengths in these areas, SaaS vendors could position themselves as the safer, more reliable option for enterprises that handle sensitive or regulated data. This approach could attract companies that are hesitant to take the AI plunge without fully understanding the risks.

What’s Next?

Klarna’s decision to replace SaaS platforms with a custom AI system may represent a significant shift in the enterprise software landscape. While this move highlights the growing potential of AI to reshape key business functions, it also raises important questions about governance, compliance, and the long-term role of SaaS providers. As organizations worldwide watch Klarna’s big bet play out, it’s clear that we are entering a new phase of enterprise software evolution—one where the balance between AI, human oversight, and SaaS will be critical to success.

What do you think? Is Klarna’s move a sign of things to come, or will it encounter challenges that reaffirm the importance of traditional SaaS systems? Let’s continue the SaaS replacement conversation in the comments below!

Making the Difference with Perficient’s LatAm L&D Tech Ecosystem https://blogs.perficient.com/2024/10/21/making-the-difference-with-perficients-latam-ld-tech-ecosystem/

The L&D Tech Ecosystem is a group of interrelated tools, assets, platforms, and networks that help our colleagues in LatAm (México, Colombia, Chile, Argentina, and Uruguay) learn and develop. The idea behind the Tech Ecosystem is a strong, comprehensive landscape of resources and educational tools that encourage colleagues to learn tech skills and keep them sharp and up to date, contributing to their upskilling and reskilling.

[Diagram: the LatAm L&D Tech Ecosystem]

Taking the Lead in Digital Transformation

This approach contributes to Perficient’s success in leading digital transformation.

Perficient is a global digital consultancy transforming how leading enterprises and biggest brands connect with customers and grow their businesses. Our teams bring an end-to-end approach to digital transformation through our comprehensive portfolio of digital solutions.

With unparalleled strategy, creativity, and technology capabilities, we bring big thinking and innovative ideas, along with a practical approach, to help the world’s largest enterprises and brands succeed. Our teams span national, industry, and geo business units across nearly 40 global locations and deliver compelling digital experiences for our customers.

To develop a solid portfolio of digital solutions and unleash creativity and innovation, we broaden our capabilities, expand our knowledge, and take the thought leadership pathway.

A Way of Working in Our Technical Capabilities

We have multiple ways of improving our technical capabilities. One of them, which has proven to be highly relevant, is the way that the L&D LatAm team talks to the business. At Perficient LatAm, L&D is part of the Tech Team. That means that our learning strategy and decisions are made with a technological mindset, allowing us to align the strategy with the execution in the learning and development dimension.

Making a Difference

Aligning goals, devising and co-creating L&D strategy with the Tech Team was a big bet, but after we did it, we immediately realized it was a game-changer move. It allowed us to improve the technological ecosystem and make a difference by moving even faster than we did in the past. As we moved forward, conversations between L&D and Tech became more straightforward, more frequent, and exclusively tech-oriented. L&D rapidly incorporated the tech mindset, managing to grasp the business strategy and needs clearly and without effort.

The Impact on Our People

The move also had a great impact on our people. After some months of working this way, we realized that colleagues were recognizing the learning opportunities the Tech Ecosystem offers to improve and grow as professionals. They quickly saw the benefits and started to take advantage of all the learning assets, easily identifying the resources and support available to learn, develop, and get certified. As a result, people started to learn and get certified at an unprecedented pace.

Final Thoughts

Aligning goals and devising and co-creating L&D strategies with the Tech Team proved to be a faster and easier way to progress. This pathway has been a game changer with a tremendous impact on the business and on our people. The L&D Tech Ecosystem has demonstrated itself to be a highly valued tool for faster progress, and our technical capabilities became even more robust. Certifications soared, meaning not only that colleagues were learning and improving their careers but also that Perficient strengthened its technical capabilities, adding accredited skills to our solutions portfolio. As a final thought, this alliance between L&D and the Tech Team has shown itself to be a win-win strategy that drives growth for everyone.

Use Column Name as space/numbers/special Characters in Output File Using Talend https://blogs.perficient.com/2024/10/21/to-use-column-name-as-space-numbers-special-characters-in-output-file-using-talend/

Problem Statement

In Talend, while generating an output file, if we need a column name that is a number, contains a space, or includes special characters, Talend won’t allow it directly: adding the below-mentioned column names to the schema produces the errors shown below.

A number as the column name:

[Screenshot: schema with a numeric column name and the resulting error]

A space in the column name:

[Screenshot: schema with a space in the column name and the resulting error]

Special characters in the column name:

[Screenshot: schema with special characters in the column name and the resulting error]

Solution:

The above use case was implemented with a simple Talend job using the steps below.

Step 1: Use the tFixedFlowInput component to provide the actual column names (number/special character/space) as highlighted below.

[Screenshot: defining the column values in tFixedFlowInput]

Step 2: Map the fields to the target file to populate the headers in the first line of the output.

[Screenshot: header row mapped to the target file]

Step 3: Load the actual source data that needs to be written to the output target file. The source data can be an input file or any other stream of data; here an input file is used as the source for this example.

[Screenshot: reading the source input file]

Step 4: To skip the header row of the actual source file and keep only the header produced by the previous flow, use a sequence in tMap with a condition that picks only the records whose sequence value is greater than 1, as shown below.

[Screenshot: tMap configuration with the sequence filter]
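For reference, the filter condition in tMap can be written with Talend's built-in Numeric routine; a sketch of such an expression (the sequence identifier "s1" is arbitrary):

// Filter expression on the source flow inside tMap:
// generates 1, 2, 3, ... for the incoming rows and drops the first row (the source file's own header)
Numeric.sequence("s1", 1, 1) > 1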

Step 5: Load the source data after tMap into the same target file using the append option, in addition to the header loaded from the previous flow.

[Screenshot: target file output component with append enabled]

With the same concept, we can swap the output component from tFileOutputDelimited to tFileOutputExcel to generate an Excel file.

Result:

The output file is generated after the execution of the job, and the given columns are loaded successfully into the header of the target file, as shown below.

[Screenshot: resulting output file with the desired header]

 

Exploring Apigee: A Comprehensive Guide to API Management https://blogs.perficient.com/2024/10/15/exploring-apigee-a-comprehensive-guide-to-api-management/

APIs, or application programming interfaces, are essential to the dynamic world of digital transformation because they allow companies to communicate quickly and efficiently with their data and services. Consequently, effective management is essential to ensure these APIs function correctly, stay safe, and provide the desired benefits. This is where Google Cloud’s top-tier API management product, Apigee, comes into play.

What is Apigee?

Apigee is a great platform for companies that want to manage their APIs effectively. It really simplifies the whole process of creating, growing, securing, and implementing APIs, which makes things a lot easier for developers. One thing that stands out about Apigee is its flexibility; it can handle both external APIs that third-party partners can access and internal APIs used within the company. This makes Apigee a great option for businesses of all sizes. Moreover, its versatility is a significant benefit for those looking to simplify their API management. It also integrates nicely with various security layers, like Nginx, which provides an important layer of authentication between Apigee and the backend. Because of this adaptability, Apigee enhances security and allows for smooth integration across different systems, making it a reliable choice for managing APIs.

Core Features of Apigee

1. API Design and Development

Primarily, Apigee offers a unique suite of tools for designing and developing APIs. You can define API endpoints, maintain API specifications, and create and modify API proxies using the OpenAPI standard. Consequently, it becomes easier to design APIs that are functional and compliant with industry standards. Furthermore, this capability streamlines the development process and ensures that the APIs meet regulatory requirements. Thus, developers can focus on innovation while maintaining a strong foundation of compliance and functionality. Below is a flow diagram related to API design and development with Apigee:

[Flow diagram: API design and development with Apigee]

2. Security and Authentication

Any API management system must prioritize security, and Apigee leads the field in this regard. It provides security features such as OAuth 2.0, JWT (JSON Web Token) validation, API key validation, and IP validation. By limiting access to your APIs to authorized users, these capabilities help safeguard sensitive data from unwanted access.

3. Traffic Management

With capabilities like rate limiting, quota management, and traffic shaping, Apigee enables you to optimize and control API traffic. This helps ensure fair usage and maintains consistent performance even under high traffic conditions.
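As a rough illustration, traffic limits in Apigee are configured as XML policies attached to an API proxy. A minimal Quota policy might look like the following; the policy name, count, and interval are placeholders to adapt to your API product:

<Quota name="Quota-Limit-Calls">
  <Allow count="100"/>
  <Interval>1</Interval>
  <TimeUnit>minute</TimeUnit>
</Quota>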

4. Analytics and Monitoring

You can access analytics and monitoring capabilities with Apigee, which offers insights into API usage and performance. You can track response times, error rates, and request volumes, enabling you to make data-driven decisions and quickly address any issues that arise.

5. Developer Portal

Apigee includes a customizable developer portal where API users can browse documentation, test APIs, and get API keys. This portal builds a community around your APIs and improves the developer experience.

6. Versioning and Lifecycle Management

Keeping an API’s versions separate is essential to preserving backward compatibility and allowing it to change with time. Apigee offers lifecycle management and versioning solutions for APIs, facilitating a seamless upgrade or downgrade process.

7. Integration and Extensibility

Apigee supports integration with various third-party services and tools, including CI/CD pipelines, monitoring tools, and identity providers. Its extensibility through APIs and custom policies allows you to tailor the platform to meet your specific needs.

8. Debug Session

Moreover, Apigee offers a debug session feature that helps troubleshoot and resolve issues by providing a real-time view of API traffic and interactions. This feature is crucial for identifying and fixing problems and is essential during the development and testing phases. In addition, this feature helps ensure that any issues are identified early on; consequently, it enhances the overall quality of the final product.

9. Alerts:

Furthermore, you can easily set up alerts within Apigee to notify you of critical issues related to performance and security threats. It is crucial to understand that both types of threats affect system reliability and can lead to significant downtime; addressing them promptly is essential for maintaining optimal performance.

10. Product Onboarding for Different Clients

Apigee supports product onboarding, allowing you to manage and customize API access and resources for different clients. This feature is essential for handling diverse client needs and ensuring each client has the appropriate level of access.

11. Threat Protection

Apigee provides threat protection mechanisms that help shield your APIs from malformed or malicious requests and keep them responsive under heavy concurrent load. This helps maintain API stability in high-traffic conditions.

12. Shared Flows

Apigee allows you to create and reuse shared flows, which are common sets of policies and configurations applied across multiple API proxies. This feature promotes consistency and reduces redundancy in API management.
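As a sketch, an API proxy typically attaches a shared flow through a FlowCallout policy. The shared flow name below ("common-security") is a hypothetical example of a reusable policy bundle.

<!-- Invoke a reusable shared flow from an API proxy -->
<FlowCallout name="FC-CommonSecurity">
    <SharedFlowBundle>common-security</SharedFlowBundle>
</FlowCallout>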

Benefits of Using Apigee

1. Enhanced Security

Apigee’s comprehensive security features help protect your APIs from potential threats and ensure that only authorized users can access your services.

2. Improved Performance

Moreover, with features like traffic management and caching, Apigee helps optimize API performance, providing a better user experience while reducing the load on your backend systems.

3. Better Visibility

Apigee’s analytics and monitoring tools give valuable insights into API usage and performance, helping you identify trends, diagnose issues, and make informed decisions.

4. Streamlined API Management

Apigee’s unified platform simplifies the management of APIs, from design and development to deployment and monitoring, saving time and reducing complexity.

5. Scalability

Finally, Apigee is designed to handle APIs at scale, making it suitable for both small projects and large enterprise environments.

Getting Started with Apigee

To get started with Apigee, follow these steps:

1. Sign Up for Apigee

Visit the Google Cloud website and sign up for an Apigee account. Based on your needs, you can choose from different pricing plans.
Sign-up for Apigee.

2. Design Your API

Use Apigee’s tools to design your API, define endpoints, and set up API proxies.

3. Secure Your API

Implement security policies and authentication mechanisms to protect your API.

4. Deploy and Monitor

Deploy your API to Apigee and use the analytics and monitoring tools to track its performance.

5. Engage Developers

Set up your developer portal to provide documentation and resources for API consumers.

In a world where APIs are central to digital innovation and business operations, having a powerful API management platform like Apigee can make a significant difference. With its rich feature set and comprehensive tools, Apigee helps organizations design, secure, and manage APIs effectively, ensuring optimal performance and value. Whether you’re just starting with APIs or looking to enhance your existing API management practices, Apigee offers the capabilities and flexibility needed to thrive in today’s highly competitive landscape.

Content Search with Optimizely Graph https://blogs.perficient.com/2024/10/09/content-search-with-optimizely-graph/ https://blogs.perficient.com/2024/10/09/content-search-with-optimizely-graph/#respond Wed, 09 Oct 2024 19:31:53 +0000 https://blogs.perficient.com/?p=370373

Optimizely Graph lets you fetch content and sync data from other Optimizely products. For content search, this means you can build custom search tools that transform user input into a GraphQL query and then process the results into a search results page.

Why use Graph for Content Search?

The benefits of an Optimizely Graph-based search service include:

  • Faster search results.
  • Better error handling.
  • More flexibility over search logic.
  • Cross-application and cross-platform search capability.

Let’s explore the steps to make this work using the Alloy sample project. First, obtain the Content Graph keys and secret from Optimizely.

Implementation: This involves two parts.

#1: Server-side setup and querying

  • Add the Content Graph keys and secret to appsettings.json (a rough sketch of the expected structure follows the screenshot below).

Appsettings
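For reference, the appsettings.json entry generally looks something like the sketch below. The section and key names may vary slightly between package versions, and the values are placeholders for the keys issued by Optimizely.

{
  "Optimizely": {
    "ContentGraph": {
      "GatewayAddress": "https://cg.optimizely.com",
      "AppKey": "<your app key>",
      "Secret": "<your secret>",
      "SingleKey": "<your single key>",
      "AllowSendingLog": "true"
    }
  }
}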

  • Install the Optimizely.ContentGraph.Cms package; note that the Content Delivery API must also be installed as a prerequisite for the graph to function (a sketch of the service registration follows the screenshot below).
    • Highlighted in red are the required packages. Highlighted in green is the initialization of the Content Delivery API and Graph in IServiceCollection. Additional configuration options are available as needed.

Packages
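A minimal sketch of the service registration, assuming the installed package versions expose AddContentDeliveryApi and AddContentGraph extension methods (exact method names and options can differ slightly between versions):

// Startup.cs
using EPiServer.ContentApi.Core.DependencyInjection;
using Microsoft.Extensions.DependencyInjection;

public void ConfigureServices(IServiceCollection services)
{
    // Content Delivery API is a prerequisite for Content Graph indexing.
    services.AddContentDeliveryApi();

    // Registers the Content Graph synchronization/indexing services.
    services.AddContentGraph();
}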

  • After running the application with these changes, you should see the Optimizely Graph option in the CMS.

Cms Graph

  • Run the Indexing job from the Content Synchronization option. Once completed, all indexed content types will be visible.
    • The screen below shows the job status, and you can explore the indexed content by clicking the Details link. Highlighted in green are the indexed content types available for querying.

Indexed Content

  • With this in place, you should be able to query content in the GraphQL playground by selecting any of the page content types (a sample query sketch follows the screenshot below).

Graphql Query
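A playground query for the Alloy start page might look roughly like this. The exact field names depend on the content types that were indexed, so treat them as placeholders and verify them against the schema shown in the playground.

# Hypothetical query against the indexed Alloy StartPage content type
query StartPageQuery {
  StartPage {
    items {
      Name
      RelativePath
    }
  }
}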

#2: Client-side setup and querying

Now that the server-side querying is ready, let’s configure the client side to query from the application code.

  • Install the following packages:
    • StrawberryShake.Server
    • StrawberryShake.Transport.Http.

Strawberry Tools

  • We also need to install StrawberryShake.Tools on the machine
    • dotnet tool install StrawberryShake.Tools -g
  • Next, we’ll create a sample GraphQL query to generate a proxy/schema. Copy the same query from the server-side setup into a .graphql file. Create a Queries folder and place the query file inside it.

Test Query

  • Navigate to the current project folder in the terminal and run the following command, replacing the OptimizelyGraphSingleKeyValue with the key received from Optimizely (as shown in the appSettings step):
    • dotnet-graphql init https://cg.optimizely.com/content/v2?auth={OptimizelyGraphSingleKeyValue} -n AlloyGraphClient

Schema Generator

  • This will generate three files for the GraphQL schema, as highlighted below.

Schema Files

  • The previous step also creates the StrawberryShake client in the solution, which will be used for querying. The generated names match the namespace provided earlier; since we used -n AlloyGraphClient, the generated dependency-injection registration method will be AddAlloyGraphClient.
    • Configure the base address to use the Optimizely single key value from the graph settings (a registration sketch follows the screenshot below).

Schema Client Code
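A sketch of the client registration, assuming the extension method generated from -n AlloyGraphClient and a variable (optimizelyGraphSingleKey, shown here as a placeholder) holding the single key read from configuration:

// Inside ConfigureServices / Program.cs -- register the generated StrawberryShake client.
services
    .AddAlloyGraphClient()
    .ConfigureHttpClient(client =>
        client.BaseAddress = new Uri(
            $"https://cg.optimizely.com/content/v2?auth={optimizelyGraphSingleKey}"));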

  • This completes the setup, and we should now be able to use this client for querying in code. The client generates an interface following the same naming pattern; here, it will be IAlloyGraphClient.
    • Inject this interface into the StartPageController and verify the results (a hedged controller sketch follows the screenshots below).

Querying In Controller Query Results In Controller
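Roughly, the controller usage looks like the sketch below. The StartPageQuery member name comes from the operation name in the .graphql file, the Alloy model and generated-client namespaces are omitted, and the view-model mapping is left out.

// StartPageController.cs -- inject the generated client and execute the query.
using System.Threading.Tasks;
using EPiServer.Web.Mvc;
using Microsoft.AspNetCore.Mvc;

public class StartPageController : PageController<StartPage>
{
    private readonly IAlloyGraphClient _graphClient;

    public StartPageController(IAlloyGraphClient graphClient)
    {
        _graphClient = graphClient;
    }

    public async Task<IActionResult> Index(StartPage currentPage)
    {
        // Executes the operation defined in the .graphql file.
        var result = await _graphClient.StartPageQuery.ExecuteAsync();

        // result.Data follows the shape of the query; map it to a view model as needed.
        var items = result.Data?.StartPage?.Items;

        return View(currentPage);
    }
}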

    • As we can see, the data is being returned according to the query. This is now ready to be mapped and displayed in the Views.
  • GraphQL supports a full range of features, including querying and filtering; a sample filtered query appears after the screenshot below. It’s easy to verify queries in the playground and then apply those changes to the .graphql query file. If you add to or modify the query, ensure that the latest schema is downloaded when you rebuild the code.
    • The following update will show up on build, applying the new schema to the code.

Generate Client
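As an example of the filtering mentioned above, a .graphql operation with a where clause might look like the sketch below. The available filter operators depend on your Graph schema version, so confirm them in the playground before relying on them.

# Hypothetical filtered query -- operator and field names should be verified in the playground
query SearchStartPages($term: String) {
  StartPage(where: { Name: { contains: $term } }) {
    items {
      Name
      RelativePath
    }
  }
}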

 

 

 

 
