Platforms and Technology Articles / Blogs / Perficient

Don’t try to fit a Layout Builder peg in a Site Studio hole.
https://blogs.perficient.com/2024/11/14/dont-try-to-fit-a-layout-builder-peg-in-a-site-studio-hole/
Thu, 14 Nov 2024

How to ensure your toolset matches your vision, team and long term goals.

Seems like common sense, right? Use the right tool for the right purpose. However, in the DXP and Drupal space, we often see folks trying to fit their project to the tool rather than the tool to the project.

There are many modules, profiles, and approaches to building Drupal out there, and almost all of them have their time and place. The key is knowing when to implement which, and why. I am going to take a little time here to dive into one of the key decisions we at Perficient face frequently and how we work with our clients to ensure the proper approach is selected for their Drupal application.

Site Studio vs. Standard Drupal (blocks, views, content, etc.) vs. Layout Builder

I would say this is the most common area where we see confusion about the best tooling and how to pick it. To start, let’s summarize the various options (there are many more approaches available, but these are the common ones we encounter), along with their pros and cons.

First, we have Acquia Site Studio, a low-code site management tool built on top of Drupal. And it is SLICK. It provides user-editable templates, components, helpers, and more that allow a well-trained content admin to control almost every aspect of the look and feel of the website. There are drag-and-drop editors for all templates that would traditionally be Twig, as well as UI editors for styles, fonts, and more. This is the Cadillac of low-code solutions for Drupal, but that comes with trade-offs in terms of developer customizability and configuration management strategies. We have also noticed that not every content team actually utilizes the full scope of Site Studio features, which can lead to additional complexity without any benefit; but when the team is right, Site Studio is a very powerful tool.

The next option we frequently see is a standard Drupal build utilizing content types and blocks to control page layouts, with WYSIWYG editors for rich content and a standard Drupal theme with Sass, Twig templates, and so on. This is the approach with the most developer familiarity, the most flexibility for custom work, and the cleanest configuration management. The trade-off is that most customizations will require a developer to build them out, and content editors are limited to “coloring between the lines” of what was initially built. We have worked with content teams that were very satisfied with the defined controls, as well as teams that felt handcuffed by the limitations and wanted more UI/UX customization without deployments or developer involvement.

The third and final option we will discuss here is the standard Drupal option described above, with the addition of Layout Builder. Layout Builder is a Drupal core module that enables users to attach layouts, such as one-column, two-column, and more, to various Drupal entity types (content, users, etc.). These layouts support the placement of blocks into their various regions, giving users drag-and-drop control over how their content is laid out. Layout Builder does not support full site templates or custom theme work such as site-wide CSS changes. It can be a good middle ground for content teams not looking for the full customization (and accompanying complexity) of Site Studio but desiring some level of layout control. Layout Builder does come with permissions and configuration management considerations: it is important to decide what is treated as content and what as configuration, and to define roles and permissions so the right editors have access to the right level of customization.

Now that we have covered the options and the basic pros and cons of each, how do you know which tool is right for your team and your project? This is where we at Perficient start with a holistic review of your needs, your short- and long-term goals, and the technical ability of your internal team. It is important to evaluate this honestly. Just because something has all the bells and whistles, do you have the team and the time to utilize them, or is it a sunk cost with limited ROI? On the flip side, if you have a very technically robust team, you don’t want to handcuff them and leave them frustrated with limitations that could cost you marketing opportunities and higher ROI.

Additional considerations that can help guide your choice of toolset are future goals and initiatives. Is a rebrand coming soon? Is your team going to expand quickly with more technical staff? These might point toward Site Studio as the right choice. Is your top priority consistency and limiting unnecessary customizations? Then standard structured content might be the best approach. Do you want to be able to customize your site, but don’t have the time or budget to undertake Site Studio? Layout Builder might be something you should look at closely.

Perficient starts these considerations in the first discussions with our potential clients and continues to guide them through the sales and estimation process to ensure the right core Drupal tooling is selected. This continues through implementation as we inform stakeholders about the best toolsets beyond the core systems. In future articles we will discuss the advantages and disadvantages of various SSO, DAM, analytics, and Drupal module solutions, as well as the new Drupal Starshot initiative and how it will impact the planning of your next Drupal build!

A Step-by-Step Guide to Extracting Workflow Details for PC-IDMC Migration Without a PC Database
https://blogs.perficient.com/2024/11/08/a-step-by-step-guide-to-extracting-workflow-details-for-pc-idmc-migration-without-a-pc-database/
Fri, 08 Nov 2024

In the PC-IDMC conversion process, it can be challenging to gather detailed information about workflows. Specifically, we often need to determine:

  • The number of transformations used in each mapping.
  • The number of sessions utilized within the workflow.
  • Whether any parameters or variables are being employed in the mappings.
  • The count of reusable versus non-reusable sessions used in the workflow etc.

To obtain these details, we currently have to open each workflow individually, which is time-consuming. Alternatively, we could use complex queries to extract this information from the PowerCenter metadata in the database tables.

This section focuses on XQuery, a versatile language designed for querying and extracting information from XML files. When workflows are exported from the PowerCenter repository or Workflow Manager, the data is generated in XML format. By employing XQuery, we can effectively retrieve the specific details and data associated with the workflow from this XML file.

Step-by-Step Guide to Extracting Workflow Details Using XQuery

For instance, if the requirement is to retrieve all reusable and non-reusable sessions for a particular workflow or a set of workflows, we can utilize XQuery to extract this data efficiently.

Step 1:
Begin by exporting the workflows from either the PowerCenter Repository Manager or the Workflow Manager. You have the option to export multiple workflows together as one XML file, or you can export a single workflow and save it as an individual XML file.

Step 1 Pc Xml Files

Step 2:
Develop the XQuery based on our specific requirements. In this case, we need to fetch all the reusable and non-reusable sessions from the workflows.

let $header := "Folder_Name,Workflow_Name,Session_Name,Mapping_Name"
let $dt := (let $data := 
    ((for $f in POWERMART/REPOSITORY/FOLDER
    let $fn:= data($f/@NAME)
    return
        for $w in $f/WORKFLOW
        let $wn:= data($w/@NAME)
        return
            for $s in $w/SESSION
            let $sn:= data($s/@NAME)
            let $mn:= data($s/@MAPPINGNAME)
            return
                <Names>
                    {
                        $fn ,
                        "," ,
                        $wn ,
                        "," ,
                        $sn ,
                        "," ,
                        $mn
                    }
                </Names>)
    |           
    (for $f in POWERMART/REPOSITORY/FOLDER
    let $fn:= data($f/@NAME)
    return          
        for $s in $f/SESSION
        let $sn:= data($s/@NAME)
        let $mn:= data($s/@MAPPINGNAME)
        return
            for $w in $f/WORKFLOW
            let $wn:= data($w/@NAME)
            let $wtn:= data($w/TASKINSTANCE/@TASKNAME)
            where $sn = $wtn
            return
                <Names>
                    {
                        $fn ,
                        "," ,
                        $wn ,
                        "," ,
                        $sn ,
                        "," ,
                        $mn
                    }
                </Names>))
       for $test in $data
          return
            replace($test/text()," ",""))
      return
          string-join(($header, $dt), "&#10;")

Step 3:
Select a third-party tool to execute the XQuery, or use an online tool if preferred. For example, you can use BaseX, Altova XMLSpy, and others. In this instance, we are using BaseX, which is an open-source tool.

Create a database in BaseX to run the XQuery.
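If you prefer the command line over the BaseX GUI, the same query can be run with the standalone BaseX client. This is only a sketch: it assumes BaseX is installed locally and that the exported workflows and the XQuery have been saved to files whose names are illustrative here.

basex -i exported_workflows.xml get_sessions.xq > sessions.csv

The -i option sets the exported XML file as the query context, and the redirect writes the comma-separated output to a file that can be opened in Excel.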

Step 3 Create Basex Db

Step 4: Enter the created XQuery into the third-party tool or online tool to run it and retrieve the results.

Step 4 Execute Xquery

Step 5:
Export the results in the required file format.

Step 5 Export The Output

Conclusion:
These simple techniques allow you to extract workflow details efficiently, which helps with planning and with the early identification of workflows that will need complex manual conversion. Many queries exist to fetch different kinds of data. If you need more XQueries, just leave a comment below!

Transforming Friction into Innovation: The QA and Software Development Relationship
https://blogs.perficient.com/2024/11/06/transforming-friction-into-innovation-the-qa-and-software-development-relationship/
Wed, 06 Nov 2024

The relationship between Quality Assurance (QA) and Software Development teams is often marked by tension and conflicting priorities. But what if this friction could be the spark that ignites innovation and leads to unbreakable products? 

The Power of Productive Tension 

It’s no secret that QA and Development teams sometimes clash. QA and testing professionals are tasked with finding flaws and ensuring stability, while developers are focused on building features with speed and innovation. This natural tension, however, can be a powerful force when channeled correctly. 

 One of the key challenges in harnessing this synergy is breaking down the traditional silos between QA and Development and aligning teams early in the development process. 

  1. Shared Goals: Align both teams around common objectives that prioritize both quality and innovation.
  2. Cross-Functional Teams: Encourage collaboration by integrating QA professionals into development sprints from the start.
  3. Continuous Feedback: Implement systems that allow for rapid, ongoing communication between teams.

 Leveraging Automation and AI 

Automation and artificial intelligence are playing an increasingly crucial role in bridging the gap between QA and Software Development Teams: 

  1. Automated Testing: Frees up QA teams to focus on more complex, exploratory testing scenarios.
  2. AI-Powered Analysis: Helps identify patterns and potential issues that human testers might miss.
  3. Predictive Quality Assurance: Uses machine learning to anticipate potential bugs before they even occur.

 Best Practices  

Achieving true synergy between QA and Development isn’t always easy, but it’s well worth the effort. Here are some best practices to keep in mind: 

  1. Encourage Open Communication: Create an environment where team members feel comfortable sharing ideas and concerns early and often.
  2. Celebrate Collaborative Wins: Recognize and reward instances where QA-Dev cooperation leads to significant improvements.
  3. Continuous Learning: Invest in training programs that help both teams understand each other’s perspectives and challenges.
  4. Embrace Failure as a Learning Opportunity: Use setbacks as a chance to improve processes and strengthen the relationship between teams.

  

As business leaders are tasked with doing more with less, the relationship between QA and Development will only become more crucial. By embracing the productive tension between these teams and implementing strategies to foster collaboration, organizations can unlock new levels of innovation and product quality. 

Are you ready to turn your development and testing friction into a strategic advantage?

Effortless Data Updates in Salesforce: Leveraging the Update Record Function in LWC
https://blogs.perficient.com/2024/11/04/leveraging-the-update-record-function-in-lwc/
Mon, 04 Nov 2024

The updateRecord function in Lightning Web Components (LWC) is a powerful tool for Salesforce developers, allowing for seamless data updates directly from the user interface. This feature enhances user experience by providing quick and efficient updates to Salesforce records without the need for page refreshes. In this guide, we’ll explore how the update record function works, its key benefits, and best practices for implementing it in your LWC projects.

UpdateRecord Function:

The updateRecord function in Lightning Web Components (LWC) is used to update a record in Salesforce. It is part of the lightning/uiRecordApi module and allows you to update records with minimal Apex code. The updateRecord function takes an object as input that includes the fields to update and, optionally, client options to control the update behavior.

import { updateRecord } from 'lightning/uiRecordApi';

updateRecord(recordInput, clientOptions)
    .then((record) => {
        // handle success
    })
    .catch((error) => {
        // handle error
    });

Reference: https://developer.salesforce.com/docs/platform/lwc/guide/reference-update-record.html

The function directly modifies the record data in Salesforce, eliminating the need for manual API calls or complex data manipulation.

Key Features and Usage of  updateRecord:

  • Field-Level Security: Ensure that the fields you’re updating are accessible to the current user based on field-level security settings.
  • Data Validation: Perform necessary data validation before updating the record to prevent invalid data from being saved.
  • Field-Specific updates: You can target specific fields for modification, ensuring granular control over the updated data.
  • Automatic UI Refresh: After a successful update, the component’s UI is automatically refreshed to reflect the changes, providing a seamless user experience.

Example:
import { LightningElement, api } from 'lwc';
import { updateRecord } from 'lightning/uiRecordApi';
import { ShowToastEvent } from 'lightning/platformShowToastEvent';

export default class UpdateRecordExample extends LightningElement {
    @api recordId; // Assume this is passed to the component

    handleUpdate() {
        const fields = {
            Id: this.recordId,
            Name: 'Updated Name',    // Example field to update
            Phone: '123-456-7890'    // Another example field
        };

        const recordInput = { fields };

        updateRecord(recordInput)
            .then(() => {
                this.dispatchEvent(
                    new ShowToastEvent({
                        title: 'Success',
                        message: 'Record updated successfully!',
                        variant: 'success'
                    })
                );
            })
            .catch((error) => {
                this.dispatchEvent(
                    new ShowToastEvent({
                        title: 'Error updating record',
                        message: error.body.message,
                        variant: 'error'
                    })
                );
            });
    }
}

Conclusion:

Incorporating the update record function in Lightning Web Components can greatly enhance both the functionality and user experience of your Salesforce applications. By simplifying the process of data manipulation on the client side, this function reduces the need for page reloads, improves performance, and allows for a more interactive and responsive interface. Mastering this feature not only streamlines development but also empowers users with a smoother, more efficient workflow. Embracing such tools keeps your Salesforce solutions agile and ready to meet evolving business needs.

Using PyTest with Selenium for Efficient Test Automation
https://blogs.perficient.com/2024/11/04/using-pytest-with-selenium-for-efficient-test-automation/
Mon, 04 Nov 2024

In our previous post, we explored the basics of Selenium with Python, covering the introduction, some pros and cons, and a basic program to get you started. In this post, we’ll delve deeper into the world of test automation by integrating Selenium with PyTest, a popular testing framework in Python. PyTest makes it easier to write simple and scalable test cases, which is crucial for maintaining a robust test suite.

Picture9

What is PyTest?

PyTest is a testing framework that allows you to write simple yet scalable test cases. It is widely used due to its easy syntax, powerful features, and rich plugin architecture. PyTest can run tests, handle setup and teardown, and integrate with various other tools and libraries.

Why Use PyTest with Selenium?

  • Readable and Maintainable Tests: PyTest’s syntax is clean and concise, making tests easier to read and maintain.
  • Powerful Assertions: PyTest provides powerful assertion introspection, which gives more detailed error messages.
  • Fixtures: PyTest fixtures help in setting up preconditions for your tests and can be reused across multiple test functions.
  • Extensible: PyTest’s plugin architecture allows for easy extension and customization of test runs.

Setting Up PyTest with Selenium

Prerequisites

Before you begin, ensure you have the following installed:

  • Python (>= 3.6)
  • Selenium (pip install selenium)
  • PyTest (pip install pytest)

You also need a WebDriver for the browser you intend to automate. For instance, ChromeDriver for Google Chrome.

Basic Test Setup

  • Project Structure

Create a directory structure for your test project:

Picture1
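A typical minimal layout looks something like the sketch below (folder and file names are illustrative):

selenium_pytest_project/
├── conftest.py        # shared fixtures (e.g., the WebDriver fixture)
├── requirements.txt   # selenium, pytest
└── tests/
    └── test_example.py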

  • Writing Your First Test

In the test_example.py file, write a simple test case:

This simple test opens Google and checks if the page title contains “Google”.

Picture2
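For reference, a minimal version of such a test could look like the sketch below. It assumes Chrome and a matching driver are available (recent Selenium releases resolve the driver automatically via Selenium Manager):

# tests/test_example.py
from selenium import webdriver

def test_google_title():
    driver = webdriver.Chrome()          # launch Chrome
    try:
        driver.get("https://www.google.com")
        assert "Google" in driver.title  # plain assert -- PyTest reports rich failure details
    finally:
        driver.quit()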

  • Using PyTest Fixtures

Fixtures in PyTest are used to manage setup and teardown. Create a fixture in the conftest.py file:

Picture3
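A possible version of that fixture is sketched below; it yields a Chrome WebDriver to each test and quits the browser afterwards:

# conftest.py
import pytest
from selenium import webdriver

@pytest.fixture
def driver():
    driver = webdriver.Chrome()  # setup: launch the browser
    yield driver                 # hand the WebDriver to the test
    driver.quit()                # teardown: always close the browser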

Now, update the test to use this fixture:

Picture4
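With the fixture in place, the test simply declares a driver parameter and PyTest injects the WebDriver for it; a sketch:

# tests/test_example.py
def test_google_title(driver):
    driver.get("https://www.google.com")
    assert "Google" in driver.title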

This approach ensures that the WebDriver setup and teardown are handled cleanly.

  • Running Your Tests

To run your tests, navigate to the project directory and use the following command:

Picture7
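The command shown is simply the PyTest runner invoked from the project root, for example:

pytest -v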

PyTest will discover and run all the test functions prefixed with test_.

Advanced Usage

  • Parameterized Tests

You can run a test with different sets of data using @pytest.mark.parametrize:

Picture5
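A parametrized variant could look like the sketch below; the URLs and expected titles are illustrative, and the test reuses the driver fixture from conftest.py:

import pytest

@pytest.mark.parametrize(
    "url, expected_title",
    [
        ("https://www.google.com", "Google"),
        ("https://www.python.org", "Python"),
    ],
)
def test_page_title(driver, url, expected_title):
    driver.get(url)
    assert expected_title in driver.title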

  • Custom PyTest Plugins

Extend PyTest functionalities by writing custom plugins. For example, you can create a plugin to generate HTML reports or integrate with CI/CD tools.

  • Headless Browser Testing

Run tests in headless mode to speed up execution:

Picture6
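One way to do that is to pass Chrome options into the fixture, as in this sketch (the exact headless flag depends on your Chrome version):

# conftest.py
import pytest
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

@pytest.fixture
def driver():
    options = Options()
    options.add_argument("--headless=new")  # older Chrome builds use plain "--headless"
    driver = webdriver.Chrome(options=options)
    yield driver
    driver.quit()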

Conclusion

Integrating PyTest with Selenium not only enhances the readability and maintainability of your tests but also provides powerful features to handle complex test scenarios. By using fixtures, parameterization, and other advanced features, you can build a robust and scalable test suite.

In the next post, we will explore the Page Object Model (POM) design pattern, which is a crucial technique for managing large test suites efficiently.

 

Streams with Tasks in Snowflake
https://blogs.perficient.com/2024/10/29/snowflake-streams-with-tasks/
Tue, 29 Oct 2024

Snowflake’s Stream

Stream

A stream is Snowflake’s change data capture (CDC) mechanism; it records the DML changes (inserts, updates, and deletes) made to a table. When a stream is created on a table, Snowflake exposes hidden metadata columns (such as METADATA$ACTION and METADATA$ISUPDATE) that track those changes.

 

create or replace stream s_emp on table emp append_only=false;
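As a quick sanity check, you can select those metadata columns straight from the stream; querying a stream does not consume it (the offset only advances when the stream is used in a DML statement). A sketch against the stream above:

-- Inspect the pending changes captured by the stream
select emp_id, metadata$action, metadata$isupdate, metadata$row_id
from s_emp;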

 

Picture1

I have two tables, emp and emp_hist. Emp is my source table, and emp_hist will be my target.

Picture2

 

Picture3

Now, I will insert a new row in my source table to capture the data in my stream.

Picture4

Let’s see our stream result.

Picture5

 

In the same way, I’m going to delete and update my source table.

Picture6

 

I deleted one record and updated another, but in the stream we can see two DELETE actions.

  1. The first delete action is for the row that I deleted, and the second one is for the row that I updated.
  2. If a row is deleted from the source, the stream captures METADATA$ACTION as DELETE and METADATA$ISUPDATE as FALSE.
  3. If a row is updated in the source, the stream captures both a delete and an insert action: the old row as a delete and the updated row as an insert.

Create a Merge Query to Store the Stream Data into the Final Table

I’m using the merge query below to capture newly inserted and updated records (SCD Type 1) into my final table.

merge into emp_hist t1
using (
    select *
    from s_emp
    where not (METADATA$ACTION = 'DELETE' and METADATA$ISUPDATE = 'TRUE')
) t2
on t1.emp_id = t2.emp_id
when matched and t2.METADATA$ACTION = 'DELETE' and t2.METADATA$ISUPDATE = 'FALSE' then delete
when matched and t2.METADATA$ACTION = 'INSERT' and t2.METADATA$ISUPDATE = 'TRUE'
    then update set t1.emp_name = t2.emp_name, t1.location = t2.location
when not matched then
    insert (emp_id, emp_name, location) values (t2.emp_id, t2.emp_name, t2.location);

 

Picture7

 

Picture8

Query for SCD2

BEGIN;

update empl_hist t1
set t1.emp_name = t2.emp_name,
    t1.location = t2.location,
    t1.end_date = current_timestamp :: timestamp_ntz
from (select emp_id, emp_name, location from s_empl where METADATA$ACTION = 'DELETE') t2
where t1.emp_id = t2.emp_id;

insert into empl_hist
select t2.emp_id, t2.emp_name, t2.location, current_timestamp, NULL
from s_empl t2
where t2.METADATA$ACTION = 'INSERT';

COMMIT;

 

 Tasks

Tasks let you schedule and automate SQL statements or stored procedure calls in your data pipeline. A single task can perform anything from a simple statement to a complex procedure.

I have created a task for the merge query above so that it does not have to be run manually every time. I have added the condition system$stream_has_data('emp_s') to the task definition, so the task runs and loads the target table only when data is available in the stream; otherwise the run is skipped.

create task mytask
    warehouse = compute_wh
    schedule = '1 minute'
when
    system$stream_has_data('emp_s')
as
merge into emp_hist t1
using (
    select *
    from emp_s
    where not (METADATA$ACTION = 'DELETE' and METADATA$ISUPDATE = 'TRUE')
) t2
on t1.emp_id = t2.emp_id
when matched and t2.METADATA$ACTION = 'DELETE' and t2.METADATA$ISUPDATE = 'FALSE' then delete
when matched and t2.METADATA$ACTION = 'INSERT' and t2.METADATA$ISUPDATE = 'TRUE'
    then update set t1.emp_name = t2.emp_name, t1.location = t2.location
when not matched then
    insert (emp_id, emp_name, location) values (t2.emp_id, t2.emp_name, t2.location);
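One detail worth remembering: a newly created task starts out suspended, so it has to be resumed before the schedule takes effect. A quick sketch, including an optional check of recent runs:

-- Tasks are created in a suspended state; resume the task so the schedule starts firing
alter task mytask resume;

-- Optional: review recent executions of the task
select *
from table(information_schema.task_history(task_name => 'MYTASK'))
order by scheduled_time desc;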

 

Perficient Named in Forrester’s App Modernization and Multicloud Managed Services Landscape, Q4 2024
https://blogs.perficient.com/2024/10/25/perficient-in-forresters-app-modernization-and-multicloud-managed-services-landscape-q4-2024/
Fri, 25 Oct 2024

As new technologies become available within the digital space, businesses must adapt quickly by modernizing their legacy systems and harnessing the power of the cloud to stay competitive. Forrester’s 2024 report recognizes 42 notable providers, and we’re proud to announce that Perficient is among them.

We believe our inclusion in Forrester’s Application Modernization and Multicloud Managed Services Landscape, Q4 2024 reflects our commitment to evolving enterprise applications and managing multicloud environments to enhance customer experiences and drive growth in a complex digital world.

With the demand for digital transformation growing rapidly, this landscape provides valuable insights into what businesses can expect from service providers, how different companies compare, and the options available based on provider size and market focus.

Application Modernization and Multicloud Managed Services

Forrester defines application modernization and multicloud managed services as:

“Services that offer technical and professional support to perform application and system assessments, ongoing application multicloud management, application modernization, development services for application replacements, and application retirement.”

According to the report,

“Cloud leaders and sourcing professionals implement application modernization and multicloud managed services to:

  • Deliver superior customer experiences.
  • Gain access to technical and transformational skills and capabilities.
  • Reduce costs associated with legacy technologies and systems.”

By focusing on application modernization and multicloud management, Perficient empowers businesses to deliver superior customer experiences through agile technologies that boost user satisfaction. We provide clients with access to cutting-edge technical and transformational skills, allowing them to stay ahead of industry trends. Our solutions are uniquely tailored to reduce costs associated with maintaining legacy systems, helping businesses optimize their IT budgets while focusing on growth.

Focus Areas for Modernization and Multicloud Management

Perficient has honed its expertise in several key areas that are critical for organizations looking to modernize their applications and manage multicloud environments effectively. As part of the report, Forrester asked each provider included in the Landscape to select the top business scenarios for which clients choose them, and from those responses identified the extended business scenarios that highlight differentiation among providers. Perficient self-reported three key business scenarios from that extended list of application modernization and multicloud services scenarios:

  • Infrastructure Modernization: We help clients transform their IT infrastructure to be more flexible, scalable, and efficient, supporting the rapid demands of modern applications.
  • Cloud-Native Development Execution: Our cloud-native approach enables new applications to leverage cloud environments, maximizing performance and agility.
  • Cloud Infrastructure “Run”: We provide ongoing support for cloud infrastructure, keeping applications and systems optimized, secure, and scalable.

Delivering Value Through Innovation

Perficient is listed among large consultancies with an industry focus in financial services, healthcare, and the manufacturing/production of consumer products. Additionally, our geographic presence in North America, Latin America, and the Asia-Pacific region was noted.

We believe that Perficient’s inclusion in Forrester’s report serves as another milestone in our mission to drive digital innovation for our clients across industries. We are proud to be recognized among notable providers and look forward to continuing to empower our clients to transform their digital landscapes with confidence. For more information on how Perficient can help your business with application modernization and multicloud managed services, contact us today.

Download the Forrester report, The Application Modernization And Multicloud Managed Services Landscape, Q4 2024, to learn more (link to report available to Forrester subscribers and for purchase).

Exploring Apigee: A Comprehensive Guide to API Management
https://blogs.perficient.com/2024/10/15/exploring-apigee-a-comprehensive-guide-to-api-management/
Tue, 15 Oct 2024

APIs, or application programming interfaces, are essential to the dynamic world of digital transformation because they allow companies to communicate quickly and efficiently with their data and services. Consequently, effective management is essential to ensure these APIs function correctly, stay safe, and provide the desired benefits. This is where Google Cloud’s top-tier API management product, Apigee, comes into play.

What is Apigee?

Apigee is a strong platform for companies that want to manage their APIs effectively. It simplifies the whole process of creating, securing, deploying, and scaling APIs, which makes things much easier for developers. One thing that stands out about Apigee is its flexibility: it can handle both external APIs accessed by third-party partners and internal APIs used within the company, which makes it a good option for businesses of all sizes. It also integrates cleanly with additional security layers, such as Nginx, which can provide an extra layer of authentication between Apigee and the backend. This adaptability enhances security and allows for smooth integration across different systems, making Apigee a reliable choice for managing APIs.

Core Features of Apigee

1. API Design and Development

Apigee offers a suite of tools for designing and developing APIs. You can define API endpoints, maintain API specifications, and create and modify API proxies using the OpenAPI standard. This makes it easier to design functional APIs that comply with industry standards, streamlines the development process, and lets developers focus on innovation while maintaining a strong foundation of compliance and functionality.

2. Security and Authentication

Any API management system must prioritize security, and Apigee leads the field in this regard. It provides security features such as OAuth 2.0, JWT (JSON Web Token) validation, API key validation, and IP validation. By limiting access to your APIs to authorized users, these capabilities help safeguard sensitive data from unwanted access.

3. Traffic Management

With capabilities like rate limitation, quota management, and traffic shaping, Apigee enables you to optimize and control API traffic. This helps proper usage and maintains consistent performance even under high traffic conditions.
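For illustration, rate limits and quotas are configured as policies attached to an API proxy. The snippet below is only a representative sketch of a quota policy; the policy name, limit, and client identifier header are illustrative, not taken from a real proxy:

<Quota async="false" continueOnError="false" enabled="true" name="Quota-Per-App">
    <DisplayName>Quota-Per-App</DisplayName>
    <Allow count="1000"/>                         <!-- allow 1000 calls... -->
    <Interval>1</Interval>                        <!-- ...per 1... -->
    <TimeUnit>hour</TimeUnit>                     <!-- ...hour -->
    <Identifier ref="request.header.client-id"/>  <!-- count per client instead of globally -->
    <Distributed>true</Distributed>               <!-- share the counter across message processors -->
    <Synchronous>true</Synchronous>
</Quota>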

4. Analytics and Monitoring

You can access analytics and monitoring capabilities with Apigee, which offers insights into API usage and performance. You can track response times, error rates, and request volumes, enabling you to make data-driven decisions and quickly address any issues that arise.

5. Developer Portal

Apigee includes a customizable developer portal where API users can browse documentation, test APIs, and get API keys. This portal builds a community around your APIs and improves the developer experience.

6. Versioning and Lifecycle Management

Keeping an API’s versions separate is essential to preserving backward compatibility and allowing it to change with time. Apigee offers lifecycle management and versioning solutions for APIs, facilitating a seamless upgrade or downgrade process.

7. Integration and Extensibility

Apigee supports integration with various third-party services and tools, including CI/CD pipelines, monitoring tools, and identity providers. Its extensibility through APIs and custom policies allows you to tailor the platform to meet your specific needs.

8. Debug Session

Apigee also offers a debug session feature that helps you troubleshoot and resolve issues by providing a real-time view of API traffic and interactions. This is especially useful during development and testing, where catching problems early improves the overall quality of the final product.

9. Alerts:

You can set up alerts within Apigee to notify you of critical issues, such as performance degradation or security threats. Both affect system reliability and can lead to significant downtime, so addressing them promptly is essential.

10. Product Onboarding for Different Clients

Apigee supports product onboarding, allowing you to manage and customize API access and resources for different clients. This feature is essential for handling diverse client needs and ensuring each client has the appropriate level of access.

11. Threat Protection

Apigee provides threat protection policies, such as spike arrest and payload threat protection, that help your APIs handle bursts of concurrent requests without performance degradation and shield them from malformed or malicious payloads. This helps maintain API stability under high load.

12. Shared Flows

Apigee allows you to create and reuse shared flows, which are common sets of policies and configurations applied across multiple API proxies. This feature promotes consistency and reduces redundancy in API management.

Benefits of Using Apigee

1. Enhanced Security

In summary, Apigee’s comprehensive security features help protect your APIs from potential threats and ensure that only authorized users can access your services.

2. Improved Performance

Moreover, with features like traffic management and caching, Apigee helps optimize API performance, providing a better user experience while reducing the load on your backend systems.

3. Better Visibility

Apigee’s analytics and monitoring tools give valuable insights into API usage and performance, helping you identify trends, diagnose issues, and make informed decisions.

4. Streamlined API Management

Apigee’s unified platform simplifies the management of APIs, from design and development to deployment and monitoring, saving time and reducing complexity.

5. Scalability

Finally, Apigee is designed to handle APIs at scale, making it suitable for both small projects and large enterprise environments.

Getting Started with Apigee

To get started with Apigee, follow these steps:

1. Sign Up for Apigee

Visit the Google Cloud website and sign up for an Apigee account. Based on your needs, you can choose from different pricing plans.
Sign-up for Apigee.

2. Design Your API

Use Apigee’s tools to design your API, define endpoints, and set up API proxies.

3. Secure Your API

Implement security policies and authentication mechanisms to protect your API.

4. Deploy and Monitor

Deploy your API to Apigee and use the analytics and monitoring tools to track its performance.

5. Engage Developers

Set up your developer portal to provide documentation and resources for API consumers.

In a world where APIs are central to digital innovation and business operations, having a powerful API management platform like Apigee can make a significant difference. With its rich feature set and comprehensive tools, Apigee helps organizations design, secure, and manage APIs effectively, ensuring optimal performance and value. Whether you’re just starting with APIs or looking to enhance your existing API management practices, Apigee offers the capabilities and flexibility needed to thrive in today’s highly competitive landscape.

Impact of Item Classification (Oracle PDH Cloud) on Oracle Procurement Cloud
https://blogs.perficient.com/2024/10/14/impact-of-item-classification-oracle-pdh-cloud-on-oracle-procurement-cloud/
Mon, 14 Oct 2024

In today’s fast-paced business environment, efficient procurement processes are essential for maintaining a competitive edge. Organizations must manage a myriad of products and services, ensuring that they are sourced, purchased, and delivered efficiently. Oracle Product Data Hub (PDH) Cloud and Oracle Procurement Cloud are two powerful tools that facilitate this process. A critical component of this integration is item classification, which profoundly impacts procurement activities. In this blog, we will explore the impact of item classification in Oracle PDH Cloud on Oracle Procurement Cloud, along with practical examples.

Understanding Item Classification in Oracle PDH Cloud

Oracle PDH Cloud is a comprehensive product information management solution that serves as a central repository for all product data. One of its key features is item classification, which allows organizations to categorize items based on various attributes such as type, usage, and specific characteristics. This classification system helps in organizing, tracking, and managing products more effectively across the enterprise.

Item classes in Oracle PDH Cloud can be defined using multiple criteria, such as:

  • Product Type: Finished goods, raw materials, components, etc.
  • Usage: Consumables, repairable items, capital goods, etc.
  • Industry-specific Attributes: Chemicals, electronics, pharmaceuticals, etc.

By leveraging these item classes, businesses can ensure consistent and accurate classification of all items, which is critical for effective procurement management.

Impact of Item Classification on Oracle Procurement Cloud

Oracle Procurement Cloud leverages the data provided by Oracle PDH Cloud to enhance procurement processes. The classification of items plays a crucial role in several aspects of procurement:

Streamlined Supplier Management:

  • Proper item classification enables procurement teams to categorize suppliers based on the types of products they provide. This facilitates more targeted supplier management and helps in establishing specialized supplier relationships.
  • Example: A manufacturing company classifies its items into raw materials, spare parts, and office supplies. Suppliers of critical raw materials are managed with rigorous quality checks and regular performance reviews, while suppliers of office supplies are evaluated based on cost-effectiveness and timely delivery.

Efficient Sourcing and Bidding:

  • Item classification helps in the efficient sourcing of products. By categorizing items, procurement teams can develop tailored sourcing strategies for each class of items.
  • Example: An electronics company classifies items into components, finished products, and packaging materials. When sourcing microchips (a component), the company invites bids from specialized suppliers with expertise in semiconductor manufacturing, ensuring competitive pricing and high-quality standards.

Optimized Purchase Order Management:

  • Accurate item classification ensures that purchase orders are managed more efficiently. Procurement teams can create purchase orders based on specific item classes, improving order accuracy and fulfillment.
  • Example: A retail company classifies its inventory into seasonal items, regular stock, and promotional goods. During the holiday season, purchase orders for seasonal items are prioritized to ensure timely stock availability, while regular stock orders are managed on a rolling basis.

Improved Contract Management:

  • With clear item classification, procurement teams can develop and manage contracts that are specific to different classes of items. This allows for more detailed contract terms and conditions, tailored to the unique requirements of each item class.
  • Example: A pharmaceutical company classifies its items into active pharmaceutical ingredients (APIs), excipients, and packaging materials. Contracts for APIs include strict quality control measures and compliance with regulatory standards, while contracts for packaging materials focus on sustainability and cost-efficiency.

Enhanced Spend Analysis and Reporting:

  • Item classification provides a structured way to analyze procurement spend. By categorizing items, procurement teams can gain deeper insights into spending patterns, identifying opportunities for cost savings and strategic sourcing.
  • Example: A food and beverage company classifies its items into perishable goods, non-perishable goods, and packaging. Spend analysis reveals that packaging costs have increased significantly, prompting the procurement team to negotiate better terms with suppliers or seek alternative packaging solutions.

Compliance and Risk Management:

  • Certain industries require strict compliance with regulatory standards. Item classification helps ensure that procurement processes adhere to these standards.
  • Example: A chemical manufacturing company classifies items based on regulatory requirements, such as hazardous chemicals, non-hazardous chemicals, and safety equipment. Procurement processes for hazardous chemicals include adherence to safety regulations, proper documentation, and special handling procedures.

Integrating Oracle PDH Cloud with Oracle Procurement Cloud

The seamless integration between Oracle PDH Cloud and Oracle Procurement Cloud is essential for maximizing the benefits of item classification. This integration ensures:

  • Data Consistency: Item classification data is consistently shared between the two systems, eliminating discrepancies and ensuring a unified view of procurement data.
  • Real-time Updates: Any changes in item classification in Oracle PDH Cloud are automatically reflected in Oracle Procurement Cloud, ensuring that procurement processes are always based on the most current data.
  • Scalability: As businesses grow and evolve, the integrated system can scale accordingly, accommodating new item classes and changing procurement needs.

Conclusion

Item classification in Oracle PDH Cloud is a powerful tool that significantly enhances the capabilities of Oracle Procurement Cloud. By categorizing items based on various attributes, businesses can achieve greater efficiency, accuracy, and control in their procurement processes. The integration of these two systems ensures that procurement activities are streamlined, compliant, and scalable, ultimately leading to improved operational performance and cost savings.

As organizations navigate the complexities of modern supply chains, leveraging the power of item classification and integrated procurement management systems will be a key differentiator in achieving operational excellence and competitive advantage.

Three Tips for Adding a Headless Site in XM Cloud
https://blogs.perficient.com/2024/10/09/three-tips-for-adding-a-headless-site-in-xm-cloud/
Wed, 09 Oct 2024

Intro 📖

In this post, I’ll cover three tips developers should consider when deploying a new headless site to XM Cloud. Having recently added a new headless site to an existing solution, I ran into a few issues. I hope to save others from similar headaches in the future (mostly myself 😉). If you’ve added a new headless site to your XM Cloud solution recently and are having trouble getting the site to appear and function properly, please read on 👇.

1. Verify the New Site’s Start Item 🚩

After deploying your new site, if you notice that the site isn’t appearing on the Sites landing page in XM Cloud (https://xmapps.sitecorecloud.io/?tab=tools&tenantName=<tentant_name>&organization=<organization>), double-check that the site’s Start item field is set. This field can be found on the site’s Site Grouping item whose path is (usually) as follows:

/sitecore/content/<site_collection>/<site>/Settings/Site Grouping/<site>

Moreover, make sure that the referenced item is physically present in the content tree. If the Start item isn’t present, the site won’t appear in XM Cloud.

Site Start Item

Verify that the Start item is set and points to an actual page.

In my particular case, I had initially misconfigured serialization for the new site and inadvertently excluded the new site’s Home item. The Start item field was set, but it didn’t point to anything in the target environment, so my new site wasn’t showing up in XM Cloud 🤦‍♂️.

2. Verify the Rendering Host Items 🤖

If your new site is appearing in XM Cloud but you can’t open any pages in Experience Editor or Preview, something could be wonky with the rendering host items.

Every XM Cloud solution includes an xmcloud.build.json file in the root directory. This is the XM Cloud build configuration file; it controls how XM Cloud builds and deploys the solution. Included in this file is a list of the rendering hosts that XM Cloud should provision and spin up as part of a deployment. Rendering hosts (also sometimes called “editing hosts” in the context of a CM) are necessary to drive Sitecore’s Experience Editor and Preview functionality. The xmcloud.build.json file is pretty important and useful; for more information on this file and what it can do, please refer to the official documentation here: https://doc.sitecore.com/xmc/en/developers/xm-cloud/the-xm-cloud-build-configuration.html.

There should be an entry in the renderingHosts property of the xmcloud.build.json file for every separate headless application in your solution. Note, however, that it is possible to run multiple headless sites with a single head application using the JSS multisite add-on. For more information on the pros and cons of either approach, check out this Sitecore developer article: https://developers.sitecore.com/learn/accelerate/xm-cloud/pre-development/project-architecture/multisite#web-application.

For the purposes of this tip, assume that there are two headless sites, each with their own headless Next.js application running different versions of Node (which also implies the need for two separate rendering hosts–one rendering host can’t run multiple versions of Node/Next.js). Let’s say the xmcloud.build.json file looks something like this:

{
    "renderingHosts": {
        "mysite1": {
            "path": "./src/rendering-mysite1",
            "nodeVersion": "16.15.1",
            "jssDeploymentSecret":"<redacted>",
            "enabled": true,
            "type": "sxa",
            "lintCommand": "lint",
            "startCommand": "start:production"
        },
        "mysite2": {
          "path": "./src/rendering-mysite2",
          "nodeVersion": "20.14.0",
          "jssDeploymentSecret":"<redacted>",
          "enabled": true,
          "type": "sxa",
          "lintCommand": "lint",
          "startCommand": "start:production"
      }
    },
    ...

When XM Cloud runs a deployment, it reads the xmcloud.build.json file, iterates through the renderingHosts property, and provisions the relevant containers behind the scenes. When the deployment completes, the rendering host items in the content tree are created and/or updated under this folder:

/sitecore/system/Settings/Services/Rendering Hosts

The rendering host items in this folder map to the rendering hosts enumerated in the xmcloud.build.json file.

One interesting thing to note is that, regardless of the name of the first rendering host in the xmcloud.build.json file (e.g., mysite1 in the example above), the first rendering host will always be created in the Sitecore content tree with a name of Default. The N + 1 rendering hosts will have the names listed in the xmcloud.build.json file. For example (again, assuming the xmcloud.build.json file above 👆), the post-deployment rendering hosts in the target XM Cloud environment would look like this:

Rendering Hosts

The resulting rendering host items from an XM Cloud deployment.

Once XM Cloud creates these items, it sets them to protected in the content tree–these items should not be modified outside of the XM Cloud deployment process.

If, for whatever reason, you’ve serialized these items and have manually overridden the items (either by making manual changes or by installing a content package), you can get into a situation where the changes and updates during XM Cloud deployments on these items are ignored because Sitecore is looking at the overridden items. This will remain an issue until the overridden serialized items are either cleaned up using the Sitecore CLI itemres cleanup command (reference: https://doc.sitecore.com/xmc/en/developers/xm-cloud/the-cli-itemres-command.html#the-cleanup-subcommand) or the overridden items are simply deleted (to be restored on the next deployment).

The TL;DR for this tip: do not serialize rendering host items corresponding to entries in the renderingHosts property of the xmcloud.build.json file; XM Cloud manages these items, so you don’t have to.

3. Set the Name and Description for the Site Collection 📛

The XM Cloud Sites landing page has been updated recently but, in the past, the name of the site collection to which a site belonged would sometimes be presented as “N/A”:

Collection On Site

A site’s Collection label reading “N/A”.

It turns out that there’s a field section on site collection items that lets developers set the Name and Description for the site collection. Admittedly, initially, I was just annoyed with the “N/A” and wanted a way to set the site collection name. However, it’s generally a good idea to name and describe your site collections anyway, especially if there are (or will be) many of them. To set the Name and Description fields on a site collection item, navigate to the site collection item in the content tree and drill down to the Metadata field section to provide values for these fields:

Site Collection Name And Description

Setting the Name and Description of a site collection.

🥣 Be sure to serialize any updates to these items using the Sitecore CLI ser pull command (reference: https://doc.sitecore.com/xmc/en/developers/xm-cloud/the-cli-serialization-command.html#the-pull-subcommand). Site collection items are developer-controlled and should be consistent between environments.
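For reference, that serialization pull is a single CLI call run from the project root; this assumes the Sitecore CLI with its serialization plugin is installed and a connection to the environment has already been established:

dotnet sitecore ser pull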

Now, in the XM Cloud Sites interface (and possibly elsewhere in the future), it’ll be easier to differentiate between site collections and determine the purpose of a given site collection. ✅

Thanks for the read! 🙏

Selenium – Uploading a File Using Robot Class
https://blogs.perficient.com/2024/10/09/selenium-uploading-a-file-using-robot-class/
Wed, 09 Oct 2024

Need For Robot Class:

Selenium normally allows file uploads using the sendKeys() method on file input fields. However, if the input field is not of type “file,” this approach can run into problems, and that is where the Robot class comes in.

The Robot class, part of the java.awt package, helps you perform system-level events, pressing keys, moving the cursor, and clicking. This makes it a powerful tool, especially when you need to interact with system-level components that Selenium alone cannot handle.

 

When is the Robot Class Suitable for File Uploads?

Use the Robot class for file uploads in cases where:

  • The file input field is hidden or otherwise inaccessible.
  • The input field is not of type “file,” meaning sendKeys() cannot be used.
  • The file upload dialog is customized and does not respond to standard Selenium commands.

How to perform a file upload operation through the Robot Class?

Precondition: Whenever we call the keyPress() method, we must also call the corresponding keyRelease() method. If keyRelease() is not called, the key press remains active; the release call is what ends the pressing action.

Now, let’s look at a step-by-step example of how to upload a file using the Robot class.

  1. Trigger the File Upload Dialog
    First, navigate to the page where you need to upload the file. Click the button that opens the file chooser dialog.
  2. Use the Robot Class to Interact with the Dialog
    After the dialog opens, use the Robot class to input the file path and handle the upload process through simulated keyboard events.

Here’s an example of Java code that demonstrates this process.

 

Code Scr 2
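The sketch below shows the general shape of that code; the page URL, element locator, and file path are illustrative, and the fixed sleeps are used only to keep the example short:

// A sketch of the approach described above
import java.awt.Robot;
import java.awt.Toolkit;
import java.awt.datatransfer.StringSelection;
import java.awt.event.KeyEvent;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class RobotFileUploadExample {
    public static void main(String[] args) throws Exception {
        WebDriver driver = new ChromeDriver();
        driver.get("https://example.com/upload");           // hypothetical page with an upload control

        // Step 1: click the control that opens the native file chooser dialog
        driver.findElement(By.id("uploadButton")).click();   // hypothetical locator
        Thread.sleep(2000);                                   // wait for the dialog to open

        // Step 2: copy the file path to the system clipboard
        StringSelection filePath = new StringSelection("C:\\data\\sample.txt");
        Toolkit.getDefaultToolkit().getSystemClipboard().setContents(filePath, null);

        // Step 3: paste the path (Ctrl+V) and confirm with Enter; every press is paired with a release
        Robot robot = new Robot();
        robot.keyPress(KeyEvent.VK_CONTROL);
        robot.keyPress(KeyEvent.VK_V);
        robot.keyRelease(KeyEvent.VK_V);
        robot.keyRelease(KeyEvent.VK_CONTROL);
        robot.keyPress(KeyEvent.VK_ENTER);
        robot.keyRelease(KeyEvent.VK_ENTER);

        Thread.sleep(2000);                                   // give the upload a moment to start
        driver.quit();
    }
}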

 

Explanation of the Code:

  1. Trigger the File Upload Dialog 
    The first step is locating the upload button using Selenium’s findElement() method and clicking it to open the file upload dialog.
  2. Simulating Keyboard Actions Using the Robot Class
    • Create a Robot object to simulate the required keyboard actions.
    • Use StringSelection and Toolkit to copy the file path to the clipboard.
    • The Robot class then simulates pressing Ctrl + v to paste the file path into the file chooser.
    • Finally, simulate pressing the Enter key to upload the file.
  3. Adding Delays
    Delays between actions ensure that each step is completed before moving on to the next, which helps to avoid timing issues.

Final Thoughts:

Pros of Robot Class:

  • Simulates keyboard and mouse actions easily.
  • Can handle pop-ups and system-level tasks.
  • Useful for automating tasks outside of the browser.

Cons of Robot Class:

  • Limited to basic interactions (no complex web actions).
  • Difficult to test dynamic web elements.
  • Not ideal for cross-browser testing.

When facing challenges with hidden input fields or custom dialogs, consider using the Robot class in your automation strategy.

For more information, you can refer to this website: https://www.guru99.com/using-robot-api-selenium.html

 

Similar tools:

  1. AutoIt https://www.autoitscript.com/site/

  2. Sikuli – https://www.sikuli.org/

  3. AWT Event Queue (Java) – https://docs.oracle.com/javase/7/docs/api/java/awt/EventQueue.html

SNOWPIPE WITH AWS
https://blogs.perficient.com/2024/10/08/snowpipe-with-aws/
Tue, 08 Oct 2024

SNOWFLAKE’S SNOWPIPE

Snowpipe:

Snowpipe is one of Snowflake’s data loading strategies, designed for continuous loading. You create a pipe that loads data from a cloud storage location into a Snowflake table. It is event driven: whenever a file arrives in the source location, a notification is sent to Snowflake and the pipe copies the file from the external stage into the table almost immediately.

 

Snowpipe procedure:

Picture1

 

S3 bucket setup for Snowpipe:

Create an S3 bucket in AWS and a folder inside it:

Creating an IAM Policy

Picture2

  1. From the home dashboard, search for and select IAM.
  2. From the left-hand navigation pane, select Account settings.
  3. Under Security Token Service (STS) in the Endpoints list, find the Snowflake region where your account is located. If the STS status is inactive, move the toggle to Active.
  4. From the left-hand navigation pane, select Policies.
  5. Select Create Policy.
  6. For Policy editor, select JSON.
  7. Add a policy document that will allow Snowflake to access the S3 bucket and folder.

The following policy (in JSON format) provides Snowflake with the required permissions to load or unload data using a single bucket and folder path.

Copy and paste the text into the policy editor:

Picture3

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:GetObjectVersion"
            ],
            "Resource": "arn:aws:s3:::<bucket>/<prefix>/*"
        }
    ]
}

  1. Note that AWS policies support a variety of different security use cases.
  2. Select Next.
  3. Enter a Policy name (for example, snowflake_integration)
  4. Select Create policy.

Step 2: Create the IAM Role in AWS

To configure access permissions for Snowflake in the AWS Management Console, do the following:

  1. From the left-hand navigation pane in the Identity and Access Management (IAM) Dashboard, select Roles.
  2. Select Create role.
  3. Select AWS account as the trusted entity type.
  4. In the Account ID field, enter your own AWS account ID temporarily. Later, you modify the trust relationship and grant access to Snowflake.
  5. Select the Require external ID option. An external ID is used to grant access to your AWS resources (such as S3 buckets) to a third party like Snowflake.

Enter a placeholder ID such as 0000. In a later step, you will modify the trust relationship for your IAM role and specify the external ID for your storage integration.

  1. Select Next.
  2. Select the policy you created in Step 1: Configure Access Permissions for the S3 Bucket(in this topic).
  3. Select Next.
  4. Enter a name and description for the role, then select Create role.

You have now created an IAM policy for a bucket, created an IAM role, and attached the policy to the role.

  1. On the role summary page, locate and record the Role ARN value. In the next step, you will create a Snowflake integration that references this role.

Note

Snowflake caches the temporary credentials for a period that cannot exceed the 60 minute expiration time. If you revoke access from Snowflake, users might be able to list files and access data from the cloud storage location until the cache expires.

Step 3: Create a Cloud Storage Integration in Snowflake

A storage integration is a Snowflake object that stores a generated identity and access management (IAM) user for your S3 cloud storage, along with an optional set of allowed or blocked storage locations (i.e. buckets). Cloud provider administrators in your organization grant permissions on the storage locations to the generated user. This option allows users to avoid supplying credentials when creating stages or loading data.

CREATE OR REPLACE STORAGE INTEGRATION bowiya_inte
    TYPE = EXTERNAL_STAGE
    STORAGE_PROVIDER = 'S3'
    ENABLED = TRUE
    STORAGE_AWS_ROLE_ARN = 'arn:aws:iam::151364773749:role/manojrole'
    STORAGE_ALLOWED_LOCATIONS = ('s3://newbucket1.10/sample.csv');

 

To allow access to all buckets in the account instead, you can set STORAGE_ALLOWED_LOCATIONS = ('*'). Additional external stages that also use this integration can reference the allowed buckets and paths.

Step 4: Retrieve the AWS IAM User for your Snowflake Account

  1. To retrieve the ARN for the IAM user that was created automatically for your Snowflake account, along with the external ID, run:

desc integration bowiya_inte;

Step 5: Grant the IAM User Permissions to Access Bucket Objects

The following step-by-step instructions describe how to configure IAM access permissions for Snowflake in your AWS Management Console so that you can use a S3 bucket to load and unload data:

  1. Log in to the AWS Management Console.
  2. Select IAM.
  3. From the left-hand navigation pane, select Roles.
  4. Select the role you created
  5. Select the Trust relationships tab.
  6. Select Edit trust policy.
  7. Modify the trust policy document using the STORAGE_AWS_IAM_USER_ARN and STORAGE_AWS_EXTERNAL_ID values returned by DESC STORAGE INTEGRATION.

Picture4

 

Step 6: CREATE A STAGE IN SNOWFLAKE:

A stage is an object that points to a location, either internal or in cloud storage, where files can be staged temporarily; using the stage, we can load the data into tables.

 

CREATE OR REPLACE STAGE mystage
    URL = 's3://newbucket1.10/sample.csv'
    STORAGE_INTEGRATION = bowiya_inte;

 

 

 

Step 7: CREATE A SNOWPIPE IN SNOWFLAKE:

 

CREATE OR REPLACE PIPE mypipe
    AUTO_INGEST = TRUE
AS
COPY INTO table1
FROM @mystage
FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1);
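Note that auto-ingest only reacts to new notifications; files that were already sitting in the bucket before the pipe and event notification existed are not picked up automatically. A one-time refresh can backfill them (a sketch):

-- One-time backfill of files staged before the pipe/notification existed (covers roughly the last 7 days)
alter pipe mypipe refresh;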

 

Step 8: CREATE AN EVENT NOTIFICATION IN S3:

An event notification tells Snowflake whenever an object is added to or changed in the bucket.

 

In S3, go to the bucket's Properties tab and create a notification event.

 Picture5

STEP 9: Get the SQS queue ARN (the pipe's notification_channel) from your Snowflake pipe; this is the destination used by the S3 event notification.

Picture6

 

Once the notification event is created, the pipe will load the data whenever a file is added or changed in the S3 bucket.

Picture7

 

STEP 10: MONITOR THE SNOWPIPE STATUS.

Picture8
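The pipe can also be checked from SQL (a sketch using the objects created above):

-- Current state of the pipe, including the number of pending files
select system$pipe_status('mypipe');

-- Recent load history for the target table
select *
from table(information_schema.copy_history(
    table_name => 'TABLE1',
    start_time => dateadd(hour, -1, current_timestamp())
));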

 

NOTE: Snowpipe won't load the same file twice. The pipe records load metadata for each file name it has already processed, so if the same file is uploaded again it is skipped rather than reloaded.