From Cloud to Local: Effortlessly Import Azure SQL Databases

With most systems transitioning to cloud-based environments, databases are often hosted across various cloud platforms. However, during the development cycle, there are occasions when having access to a local database environment becomes crucial, particularly for analyzing and troubleshooting issues originating in the production environment.

Sometimes, it is necessary to restore the production database to a local environment to diagnose and resolve production-related issues effectively. This allows developers to replicate and investigate issues in a controlled setting, ensuring efficient debugging and resolution.

In an Azure cloud environment, database backups are often exported as .bacpac files. To work with these databases locally, the .bacpac file must be imported and restored on a local SQL Server instance.

There are several methods to achieve this, including:

  1. Using SQL Server Management Studio (SSMS).
  2. Using the SqlPackage command-line.

This article will explore the steps to import a .bacpac file into a local environment, focusing on practical and straightforward approaches.

The first approach—using SQL Server Management Studio (SSMS)—is straightforward and user-friendly. However, challenges arise when dealing with large database sizes, as the import process may fail due to resource limitations or timeouts.

The second approach, using the SqlPackage command-line, is recommended in such cases. This method offers more control over the import process, allowing for better handling of larger .bacpac files.

Steps to Import a .bacpac File Using SqlPackage

1. Download SqlPackage

  • Navigate to the SqlPackage download page: SqlPackage Download.
  • Ensure you download the .NET 6 version of the tool, as the .NET Framework version may have issues processing databases with very large tables.

2. Install the Tool

  • Follow the instructions under the “Windows (.NET 6)” header to download and extract the tool.
  • After extracting, open a terminal in the directory where you extracted SqlPackage.

3. Run SqlPackage

  • Put the .bacpac file into the extracted SqlPackage folder (e.g., C:\sqlpackage-win7-x64-en-162.1.167.1).
  • Use the following example command in the terminal to import the .bacpac file:
    SqlPackage /a:Import /tsn:"localhost" /tdn:"test" /tu:"sa" /tp:"Password1" /sf:"database-backup-filename.bacpac" /ttsc:True /p:DisableIndexesForDataPhase=False /p:PreserveIdentityLastValues=True

4. Adjust Parameters for Your Setup

  • /tsn: The server name (IP or hostname) of your SQL Server instance, optionally followed by a port (default: 1433).
  • /tdn: The name of the target database (must not already exist).
  • /tu: SQL Server username.
  • /tp: SQL Server password.
  • /sf: The path to your .bacpac file (use the full path or ensure the terminal is in the same directory).

5. Run and Wait

  • Let the tool process the import. The time taken will depend on the size of the database.

Important: Ensure the target database does not already exist, as .bacpac files can only be imported into a fresh database.

The options /p:DisableIndexesForDataPhase and /p:PreserveIdentityLastValues optimize the import process for large databases and preserve identity column values. SqlPackage provides more reliability and flexibility than SSMS, especially when dealing with larger databases. If you refresh local copies regularly, the same command can also be scripted, as in the sketch below.
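This is only a sketch that shells out to the same SqlPackage command shown above. It assumes SqlPackage is on your PATH; the server name, credentials, and file name are the placeholder values from the example and should be replaced with your own.

import subprocess

# Placeholder values -- adjust the file name, target database, and credentials to your setup.
bacpac_file = "database-backup-filename.bacpac"
target_database = "test"

result = subprocess.run(
    [
        "SqlPackage",
        "/a:Import",
        "/tsn:localhost",
        f"/tdn:{target_database}",
        "/tu:sa",
        "/tp:Password1",
        f"/sf:{bacpac_file}",
        "/ttsc:True",
        "/p:DisableIndexesForDataPhase=False",
        "/p:PreserveIdentityLastValues=True",
    ],
    capture_output=True,
    text=True,
)

print(result.stdout)
if result.returncode != 0:
    print(result.stderr)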

 

Reference:

https://learn.microsoft.com/en-us/azure/azure-sql/database/database-import?view=azuresql&tabs=azure-powershell

Setting Up CloudFront Using Python

Python is an open-source programming language. We can use Python to provision and manage AWS services programmatically, much as we would with Terraform or other IaC tools. In this blog, we are going to discuss setting up the CloudFront service using Python.

Why We Use Python

As we know, Python is an imperative language. This means that you can write more customized scripts that can perform advanced complex operations, handle errors, interact with APIs, etc. You also have access to AWS SDKs like Boto3 that allow you to perform any AWS operation you desire, including custom ones that might not yet be supported by Terraform.

How It Works

The boto3 library provides methods and classes for AWS services that we can use to create, modify, and update resources.

Prerequisites

We require only Python and the Boto3 library.


 

How to Write Code

As we know, boto3 has different functions that handle AWS services. There are many functions, but below are the basic ones for managing the CloudFront service:

  • create_distribution is used to create CloudFront Distribution,
  • update_distribution is used to update CloudFront Distribution,
  • delete_distribution is used to delete CloudFront Distribution,
  • create_cache_policy is used to create cache policy,
  • create_invalidation is used to create invalidation requests.

create_distribution and update_distribution require a lot of configuration values as well. You can build the configuration in a Python dictionary and pass it to the function, or pass it as JSON, though in that case you also have to parse it.

Let me share a basic example of creating a CloudFront distribution using Python and boto3:

import boto3
import os 

s3_origin_domain_name = '<s3bucketname>.s3.amazonaws.com'  
origin_id = 'origin-id'

distribution_config = {
        'CallerReference': str(hash("unique-reference")),
        'Comment': 'My CloudFront Distribution',
        'Enabled': True,
        'Origins': {
            'Items': [
                {
                    'Id': origin_id,
                    'DomainName': s3_origin_domain_name,
                    'S3OriginConfig': {
                        'OriginAccessIdentity': ''
                    },
                    'CustomHeaders': {
                        'Quantity': 0,
                        'Items': []
                    }
                }
            ],
            'Quantity': 1
        },
        'DefaultCacheBehavior': {
            'TargetOriginId': origin_id,
            'ViewerProtocolPolicy': 'redirect-to-https',
            'AllowedMethods': {
                'Quantity': 2,
                'Items': ['GET', 'HEAD'],
                'CachedMethods': {
                    'Quantity': 2,
                    'Items': ['GET', 'HEAD']
                }
            },
            'ForwardedValues': {
                'QueryString': False,
                'Cookies': {
                    'Forward': 'none'
                }
            },
            'MinTTL': 3600
        },
        'ViewerCertificate': {
            'CloudFrontDefaultCertificate': True
        },
        'PriceClass': 'PriceClass_100' 
    }
try:
    aws_access_key = os.getenv('AWS_ACCESS_KEY_ID')
    aws_secret_key = os.getenv('AWS_SECRET_ACCESS_KEY')
    session = boto3.Session(
        aws_access_key_id=aws_access_key,
        aws_secret_access_key=aws_secret_key,
        region_name='us-east-1'
    )
    client = session.client('cloudfront')
    response = client.create_distribution(DistributionConfig=distribution_config)
    print("CloudFront Distribution created successfully!")
    print(response)
except Exception as e:
    print(f"Error creating CloudFront distribution: {e}")

As you can see in the above sample code, after importing the boto3 module, we have the distribution_config variable where all the configuration is stored. After that, we call the create_distribution function to create the CDN distribution:

        response = client.create_distribution(DistributionConfig=distribution_config)

In a similar way, you can write more complex Python code to implement the rest of your AWS infrastructure, and automate a cache invalidation request pipeline so that users can clear the CDN cache without logging in to the AWS console. A minimal sketch of such an invalidation call is shown below.
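This sketch assumes the distribution already exists and credentials are configured as above; the distribution ID and paths below are placeholders.

import time
import boto3

client = boto3.client('cloudfront')

# Placeholder distribution ID -- use the Id returned by create_distribution.
distribution_id = 'E1234567890ABC'

response = client.create_invalidation(
    DistributionId=distribution_id,
    InvalidationBatch={
        'Paths': {
            'Quantity': 1,
            'Items': ['/*']  # invalidate every cached object under the distribution
        },
        # CallerReference must be unique per request; a timestamp works well.
        'CallerReference': str(time.time())
    }
)

print(response['Invalidation']['Id'])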

Windows Password Recovery with AWS SSM

AWS Systems Manager (SSM) streamlines managing Windows instances in AWS. If you've ever forgotten the password for your Windows EC2 instance, SSM offers a secure and efficient way to reset it without additional tools or manual intervention.

Objective & Business Requirement

In a production environment, losing access to a Windows EC2 instance due to an unknown or non-working password can cause significant downtime. Instead of taking a backup, creating a new instance, and reconfiguring the environment—which is time-consuming and impacts business operations—we leverage AWS Systems Manager (SSM) to efficiently recover access without disruption.

  • Recovery Process
  • Prerequisites
  • Configuration Overview
  • Best Practices
  • Conclusion

Prerequisites

Before you start, ensure the following prerequisites are met:

  1. SSM Agent Installed: The SSM agent must be installed and running on the Windows instance. AWS provides pre-configured AMIs with the agent installed.
  2. IAM Role Attached: Attach an IAM role to your instance with the necessary permissions. The policy should include:
    • AmazonSSMManagedInstanceCore
    • AmazonSSMFullAccess (or custom permissions to allow session management and run commands).
  3. Instance Managed by SSM: The instance must be registered as a managed instance in Systems Manager.

Configuration Overview

Follow this procedure if all you need is a PowerShell prompt on the target instance.

1. Log in to the AWS Management Console

  • Navigate to the EC2 service in the AWS Management Console.
  • Open the instance in the AWS console & click Connect.


  • This opens a PowerShell session with “ssm-user”.


2. Verify the Active Users

Run Commands to Reset the Password

With the session active, follow these steps to reset the password:

  • Run the following PowerShell command to list the local users: get-localuser


  • Identify the username for which you need to reset the password.
  • Reset the password using the following command:

Replace <username> with the actual username and <password> with your new password.

net user <username> <password>

3. Validate the New Password

  • Use Remote Desktop Protocol (RDP) to log into the Windows instance using the updated credentials.
  • To open an RDP connection to the instance in your browser, follow this procedure.
  • Open the instance in the AWS console & click Connect:
  • Switch to the “RDP client” tab & use Fleet Manager:


  • You should now be able to access the server using the RDP client.


 

Best Practices

  1. Strong Password Policy: Ensure the new password adheres to your organization’s password policy for security.
  2. Audit Logs: Use AWS CloudTrail to monitor who initiated the SSM session and track changes made.
  3. Restrict Access: Limit who can access SSM and manage your instances by defining strict IAM policies.

Troubleshooting Tips for Password Recovery

  • SSM Agent Issues: If the instance isn’t listed in SSM, verify that the SSM agent is installed and running.
  • IAM Role Misconfigurations: Ensure the IAM role attached to the instance has the correct permissions.
  • Session Manager Setup: If using the CLI, confirm that the Session Manager plugin is installed and correctly configured on your local machine.
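As an alternative to the interactive session shown above, the same reset can be scripted with boto3 and Systems Manager Run Command. This is only a sketch, assuming the instance is SSM-managed; the instance ID, username, and password below are placeholders. For production use, the interactive session is preferable so the new password never appears in command history.

import boto3

ssm = boto3.client('ssm', region_name='us-east-1')

# Placeholder values -- replace with your instance ID, the target username, and a strong password.
instance_id = 'i-0123456789abcdef0'
username = 'Administrator'
new_password = 'ReplaceWithAStrongPassword1!'

# AWS-RunPowerShellScript runs the given PowerShell commands on the managed instance.
response = ssm.send_command(
    InstanceIds=[instance_id],
    DocumentName='AWS-RunPowerShellScript',
    Parameters={'commands': [f'net user {username} "{new_password}"']},
)

print("Command ID:", response['Command']['CommandId'])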

 

Conclusion

AWS Systems Manager is a powerful tool that simplifies Windows password recovery and enhances the overall management and security of your instances. By leveraging SSM, you can avoid downtime, maintain access to critical instances, and adhere to AWS best practices for operational efficiency.

 

From IBM APIC to Apigee: Your Step-by-Step Migration Journey

What is API and API migration?

An API (Application Programming Interface) is a set of guidelines and protocols that allows one software application to communicate with another. API migration is the process of moving an API from one environment, platform, or version to another.

What is IBM API Connect?

IBM API Connect is an integrated API management platform designed by IBM to create, manage, secure, and socialize APIs across different environments (cloud, on-premises, or hybrid). Below are the steps to go through the APIC interface.

What is Apigee?

Apigee is a full lifecycle API management platform developed by Google Cloud, designed to help organizations create, manage, secure, and scale APIs. Enterprises prefer Apigee because of its robust security features, advanced analytics capabilities, scalability to large enterprises and compatibility to multiple clouds. Below are the steps to go through the Apigee interface.

Why are APIC and Apigee needed?

IBM API Connect and Apigee are two comprehensive API management tools that allow organizations to create, secure, manage, and analyze APIs. Here are some of the key advantages they provide:

  • API Management and Governance
  • Security and Compliance
  • API Analytics and Monitoring
  • Developer Ecosystem Management

Why would a company choose to switch from APIC to Apigee, and what are the advantages?

An organization typically chooses API migration when it needs to improve its API infrastructure, adapt to new business needs, or implement better technologies. Choosing between Apigee and IBM API Connect depends on the specific needs and priorities of the organization, as each platform has its strengths. However, Apigee may be considered better than IBM API Connect in certain aspects based on features, usability, and industry positioning. Apigee is also more flexible: we can easily analyze API monitoring data and API metrics and generate custom reports. The following are some advantages that make Apigee a better option:

  • Google Cloud Integration and Ecosystem
  • Advanced Analytics and Monitoring
  • Developer Experience
  • Security and Rate Limiting
  • API Monetization

Migration Process:

[Flow diagram: migration process]

 

Applications Used to Migrate:

Below are the applications that we have utilized in the process of migration.

  • IBM API Connect
  • Apigee Edge/Apigee Hybrid
  • Swagger Editor

IBM API Connect

Fetching APIC migration details

  • To migrate an API or product from API Connect, go to the login page, provide your username and password, and then click Sign In.

APIs:

  • Access the APIs by clicking on the APIs tab.
  • After locating the API details, confirm that the type is REST/SOAP and, if multiple versions are displayed, choose the appropriate one.
  • Next, choose the API and navigate to the Assemble section to determine whether the API is Passthrough or Non-Passthrough.
  • Proceed to the Design page and take note of the necessary information that is mandatory.
    1. Name of the API
    2. Basepath
    3. Consumes (JSON/XML)
    4. Security Definitions, Security
    5. Properties -> Backend Endpoint URL
    6. Paths


  • Next, navigate to the API’s source page and retrieve the swagger file that is accessible.

Products:

  • Select the Products tab, use the search box to locate the right product, and then click on it.


  • Determine how many APIs refer to the same product.
  • Verify the number of plans available for that product.
  • Next, select each plan and take note of the required fields shown below.
    1. Rate Limits (calls/time interval)
    2. Burst Limit (calls/time interval)

Apigee Edge/Apigee Hybrid

Migration of APIs and Products in Apigee

  • Go to the login page, enter your username and password, and then click “sign in” to create an API or product.

APIs:

  • To build a new API, select the API Proxies section and click +Proxy.
  • Choose Reverse Proxy/No Target to manually construct an API.


  • For a reverse proxy, provide the API name, base path, and target server that we noted from IBM API Connect.
  • After creating the proxy, make sure to establish the flow paths in accordance with APIC, including the GET, POST, PUT, and DELETE methods.


  • Click on the policies section to add the Traffic Management policies, Security policies, Mediation policies and Extension policies as per APIC/our requirement.


  • Using the host and port from the APIC Endpoint URL, establish a target server, modify the Apigee Target Endpoint XML code as needed, and make the URL dynamic.

             <HTTPTargetConnection>
                 <SSLInfo>
                     <Enabled>true</Enabled>
                 </SSLInfo>
                 <LoadBalancer>
                     <Server name="TS-testAPI" />
                 </LoadBalancer>
                 <Path>/</Path>
             </HTTPTargetConnection>

Compare and debug the flow:

  • After the API development is completed, we must verify and compare the API flow between API Connect and Apigee to determine whether the flow looks similar.
  • Once the API has been implemented, deploy it to the appropriate environment and begin testing it using the client’s provided test data. Check the flow by using the DEBUG/TRACE section once you’ve reached the proxy endpoint URL.
  • Pre-production testing should be done by the client using real-time data to verify the service’s end-to-end functioning status prior to the production deployment.

Products:

  • Click on the API Products section and click on + API Product to create a new product.
  • Provide product name, display name, quota and burst limits which we have noted from IBM API Connect.
  • Then add APIs that refer to the existing product in the Operations (In Hybrid)/API Resources (In Edge) section.


  • If the product contains more than one plan in APIC, repeat the same process and provide required fields to create other plans.

Swagger Editor

Swagger Editor is an open-source, browser-based tool that allows developers to design, define, edit, and document APIs using the OpenAPI Specification (OAS) format.

  • As we have collected the Swagger file from APIC, we need to edit the file as required and change the version of the Swagger file, if needed, using the Swagger Editor.
  • From the Swagger file, we can remove IBM-related tags and add our security variables as per our code (see the sketch after this list).
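As a small illustration of that cleanup step, vendor-specific extensions can be stripped from a Swagger/OpenAPI document with a short script. This is only a sketch: IBM API Connect typically stores its gateway configuration under x-ibm-prefixed extension keys, and the input and output file names below are assumptions.

import json

def strip_ibm_extensions(node):
    """Recursively drop keys that start with 'x-ibm-' from a Swagger/OpenAPI document."""
    if isinstance(node, dict):
        return {key: strip_ibm_extensions(value)
                for key, value in node.items()
                if not key.startswith('x-ibm-')}
    if isinstance(node, list):
        return [strip_ibm_extensions(item) for item in node]
    return node

# Placeholder file names -- the Swagger file exported from APIC and the cleaned copy for Apigee.
with open('apic-export.json') as source:
    spec = strip_ibm_extensions(json.load(source))

with open('apigee-ready.json', 'w') as target:
    json.dump(spec, target, indent=2)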

Apigee Portal Publishing:

  • The swagger file must be published on the Apigee developer portal once it is ready.
  • Go to the Apigee Home page, select the Portals section, and then click on API Catalog to begin the portal publishing process.
  • Click the plus button to add an API product in the catalog. After choosing the product, click the next button, fill out the below required fields, and then click save to publish.
  • Check the published check box.
  • Check the OpenAPI document in the API documentation section.
  • Select the swagger file and upload.
  • Select API visibility as per the specification.


Summary:

Migrating from IBM API Connect (APIC) to Apigee involves moving API management capabilities to the Apigee platform to leverage its more advanced features for design, deployment, and analytics. The process of migration involves the assessment of existing APIs and dependencies, exporting and adapting API definitions, mapping and recreating policies like authentication and rate limiting, and thorough testing to ensure functionality in the new environment.

How To Create High Availability Kubernetes Cluster on AWS using KUBEONE: Part-2

In Part 1, we learned about the importance of KubeOne. Now, let's explore the demo: this practical session will focus on creating a high-availability Kubernetes cluster on AWS using KubeOne.

Setup KubeOne 

1. Downloading KubeOne

First, create an EC2 instance of any suitable instance type. Then download KubeOne using the commands below:

sudo apt-get update

sudo apt-get -y install unzip

curl -sfL https://get.kubeone.io | sh

 

The above script downloads the latest version of KubeOne from GitHub, and unpacks it in the /usr/local/bin directory.

2. Downloading Terraform

We will use Terraform to manage the infrastructure for the control plane, so we need to install it. We will use the following scripts to download Terraform.

Below is the official documentation link to install terraform:

https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli

 

Install the prerequisite packages needed to verify HashiCorp's GPG signature and to add HashiCorp's Debian package repository:

sudo apt-get update && sudo apt-get install -y gnupg software-properties-common

 

Now Install the HashiCorp GPG key:

wget -O- https://apt.releases.hashicorp.com/gpg | \
gpg --dearmor | \
sudo tee /usr/share/keyrings/hashicorp-archive-keyring.gpg > /dev/null

 

Verify the Key’s Fingerprint:

gpg --no-default-keyring \
--keyring /usr/share/keyrings/hashicorp-archive-keyring.gpg \
--fingerprint

 

Add the Official HashiCorp Repository into System:

echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] \
https://apt.releases.hashicorp.com $(lsb_release -cs) main" | \
sudo tee /etc/apt/sources.list.d/hashicorp.list

 

Download the package information from HashiCorp.

sudo apt update

 

Install Terraform from the new repository.

sudo apt-get install terraform -y

 

3. Configuring The Environment

Download The AWS CLI

sudo apt install unzip -y

curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"

unzip awscliv2.zip

sudo ./aws/install

KubeOne and Terraform need the cloud provider credentials exported as the environment variables.

 

Create an IAM user

We need the IAM account with the appropriate permissions for Terraform to create the infrastructure and for the machine-controller to create worker nodes.

 


 

 

Click on ‘Users’ and ‘Create User’

 


 

Once the user is created, attach the policy.

 


 

Click to create the access key and secret key.

 


 

We will use aws configure to configure both keys:

aws configure

 

4. Creating The Infrastructure

Create a key pair on the server

ssh-keygen -t rsa -b 4096

Now let's move to the directory with the example Terraform configs that was created while installing KubeOne.

cd kubeone_1.9.1_linux_amd64/examples/terraform/aws

Before using Terraform, we will initialize the directory structure and download the required plugins. Use the init command below:

terraform init

 


 

Also, in the same directory, create a file named terraform.tfvars; it will contain Terraform variables used to customize the infrastructure creation process.

vim terraform.tfvars

 

Now add the below two variables:

cluster_name = "kubeone-cluster"

ssh_public_key_file = "~/.ssh/id_rsa.pub"

 

The cluster_name variable is used as a prefix for cloud resources. The ssh_public_key_file is the path to an SSH public key that will be deployed on the instances. KubeOne connects to instances over SSH for provisioning and configuration. If you need to generate a key, run ssh-keygen.

Now run the terraform plan command to see what changes will be made:

terraform plan

 


 

Now run the terraform apply command and enter "yes" when prompted:

terraform apply

The above command will create all the infrastructure that we needed to get started.

 


 

Finally, we need to save the Terraform state in a format KubeOne can read to get info about the AWS resources. This helps with setting up the cluster and creating worker nodes later. The format is already set in a file called output.tf, so all you have to do is run the output command.

terraform output -json > tf.json

This command creates a file named tf.json with the Terraform state in JSON format, which KubeOne can read. Once that’s done, we’re ready to set up our cluster in the next step.

 

5. Provisioning The Cluster

Now that the infrastructure is ready, we can use KubeOne to set up a Kubernetes cluster.

The first step is to create a KubeOne configuration file (kubeone.yaml). This file defines details like how the cluster will be set up and which version of Kubernetes to use.

vim kubeone.yaml

 

Add the below code into above yaml file

apiVersion: kubeone.k8c.io/v1beta2

kind: KubeOneCluster

versions:

  kubernetes: '1.30.0'

cloudProvider:

  aws: {}

  external: true

 

Before proceeding, choose the Kubernetes version you want to use and replace any placeholder values with the actual ones.

Now set the environment variables using the export command:

 

export AWS_ACCESS_KEY_ID=$(aws configure get aws_access_key_id)

export AWS_SECRET_ACCESS_KEY=$(aws configure get aws_secret_access_key)

 

Now run the below command:

kubeone apply -m kubeone.yaml -t tf.json

If the command fails with an SSH authentication error, the SSH agent is probably not running or your key is not loaded.

 


 

Run the below commands to start the agent.

Start a fresh agent:

# Start SSH agent correctly

eval "$(ssh-agent)"

# Verify the environment variables

echo $SSH_AUTH_SOCK

echo $SSH_AGENT_PID

 

Add the ssh keys

# Add your private key

ssh-add ~/.ssh/id_rsa

# Verify keys are added

ssh-add -l

 

Set the Correct Permissions:

 

# Fix SSH directory permissions

chmod 700 ~/.ssh

chmod 600 ~/.ssh/id_rsa

chmod 644 ~/.ssh/id_rsa.pub

 

Run the apply command again
kubeone apply -m kubeone.yaml -t tf.json

This will create the cluster.

 

6. Install Kubectl

Let’s install the Kubectl:

curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"

curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"

echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check

sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl

chmod +x kubectl

mkdir -p ~/.local/bin

mv ./kubectl ~/.local/bin/kubectl

 

7. Configuring The Cluster Access 

KubeOne automatically downloads the Kubeconfig file for your cluster, named <cluster_name>-kubeconfig (where <cluster_name> is from the terraform.tfvars file). You can use this file with kubectl like this:

kubectl --kubeconfig=<cluster_name>-kubeconfig

kubectl get nodes --kubeconfig=kubeone-cluster-kubeconfig

 


 

Now copy the config to the ~/.kube folder:

cp kubeone-cluster-kubeconfig ~/.kube/config

Try now without --kubeconfig:

kubectl get nodes

 


 

Let's create an nginx pod using the commands below:

kubectl run nginx --image=nginx

kubectl get pods

 


 

Shutting down the cluster

The goal of unprovisioning is to delete the cluster and free up cloud resources, so use it only if you no longer need the cluster. To unprovision the cluster, use the reset command below.

kubeone reset --manifest kubeone.yaml -t tf.json

Removing Infrastructure Using Terraform

If you’re using Terraform, you can delete all resources with the destroy command. Terraform will list what will be removed and ask you to confirm by typing “yes.” If your cluster is on GCP, you need to manually remove Routes created by kube-controller-manager in the cloud console before running terraform destroy.

terraform destroy

Finally, remove all servers and the IAM user.

Conclusion:

KubeOne is a solid, reliable choice for automating Kubernetes cluster management, especially for users who need high availability and multi-cloud or hybrid support. It is particularly well-suited for organizations looking for a simple yet powerful solution for managing Kubernetes at scale without the overhead of more complex management platforms. However, it might not have as broad an ecosystem or user base as some of the more widely known alternatives.

Informatica Intelligent Cloud Services (IICS) Cloud Data Integration (CDI) for PowerCenter Experts

Informatica Power Center professionals transitioning to Informatica Intelligent Cloud Services (IICS) Cloud Data Integration (CDI) will find both exciting opportunities and new challenges. While core data integration principles remain, IICS’s cloud-native architecture requires a shift in mindset. This article outlines key differences, migration strategies, and best practices for a smooth transition.

Core Differences Between Power Center and IICS CDI:

  • Architecture: Power Center is on-premise, while IICS CDI is a cloud-based iPaaS. Key architectural distinctions include:
    • Agent-Based Processing: IICS uses Secure Agents as a bridge between on-premise and cloud sources.
    • Cloud-Native Infrastructure: IICS leverages cloud elasticity for scalability, unlike Power Center’s server-based approach.
    • Microservices: IICS offers modular, independently scalable services.
  • Development and UI: IICS uses a web-based UI, replacing Power Center’s thick client (Repository Manager, Designer, Workflow Manager, Monitor). IICS organizes objects into projects and folders (not repositories) and uses tasks, taskflows, and mappings (not workflows) for process execution.
  • Connectivity and Deployment: IICS offers native cloud connectivity to services like AWS, Azure, and Google Cloud. It supports hybrid deployments and enhanced parameterization.

Migration Strategies:

  1. Assessment: Thoroughly review existing Power Center workflows, mappings, and transformations to understand dependencies and complexity.
  2. Automated Tools: Leverage Informatica’s migration tools, such as the Power Center to IICS Migration Utility, to convert mappings.
  3. Optimization: Rebuild or optimize mappings as needed, taking advantage of IICS capabilities.

Best Practices for IICS CDI:

  1. Secure Agent Efficiency: Deploy Secure Agents near data sources for optimal performance and reduced latency.
  2. Reusable Components: Utilize reusable mappings and templates for standardization.
  3. Performance Monitoring: Use Operational Insights to track execution, identify bottlenecks, and optimize pipelines.
  4. Security: Implement robust security measures, including role-based access, encryption, and data masking.

Conclusion:

IICS CDI offers Power Center users a modern, scalable, and efficient cloud-based data integration platform. While adapting to the new UI and development paradigm requires learning, the fundamental data integration principles remain. By understanding the architectural differences, using migration tools, and following best practices, Power Center professionals can successfully transition to IICS CDI and harness the power of cloud-based data integration.

Navigating the Landscape of Development Frameworks: A Guide for Aspiring Developers

Nine years ago, I was eager to become a developer but found no convincing platform. Luckily, the smartphone world was booming, and its extraordinary growth immediately caught my eye. This led to my career as an Android developer, where I had the opportunity to learn the nuances of building mobile applications. As time went on, I expanded my reach into hybrid mobile app development, allowing me to adapt smoothly to various platforms.

I also know the struggles of countless aspiring developers: the dilemma and uncertainty about which direction to head and which technology to pursue. Hence, the idea of writing this blog stemmed from my experiences and insights gained while making my own way through mobile app development. It is geared toward those beginning to learn this subject or adding to their current knowledge.

Web Development

  • Web Development: Focuses on building the user interface (UI) and user experience (UX) of applications.
    • Technologies:
      • HTML (HyperText Markup Language): The backbone of web pages, used to structure content with elements like headings, paragraphs, images, and links.
      • CSS (Cascading Style Sheets): Styles web pages by controlling layout, colors, fonts, and animations, making websites visually appealing and responsive.
      • JavaScript: A powerful programming language that adds interactivity to web pages, enabling dynamic content updates, event handling, and logic execution.
      • React: A JavaScript library developed by Facebook for building fast and scalable user interfaces using a component-based architecture.
      • Angular: A TypeScript-based front-end framework developed by Google that provides a complete solution for building complex, dynamic web applications.
      • Vue.js: A progressive JavaScript framework known for its simplicity and flexibility, allowing developers to build user interfaces and single-page applications efficiently.
    • Upskilling:
      • Learn the basics of HTML, CSS, and JavaScript (essential for any front-end developer).
      • Explore modern frameworks like React or Vue.js for building interactive UIs.
      • Practice building small projects like a portfolio website or a simple task manager.
      • Recommended Resources:

Backend Development

  • Backend Development: Focuses on server-side logic, APIs, and database management.
    • Technologies:
      • Node.js: A JavaScript runtime that allows developers to build fast, scalable server-side applications using a non-blocking, event-driven architecture.
      • Python (Django, Flask): Python is a versatile programming language; Django is a high-level framework for rapid web development, while Flask is a lightweight framework offering flexibility and simplicity.
      • Java (Spring Boot): A Java-based framework that simplifies the development of enterprise-level applications with built-in tools for microservices, security, and database integration.
      • Ruby on Rails: A full-stack web application framework built with Ruby, known for its convention-over-configuration approach and rapid development capabilities.
    • Upskilling:
      • Learn the basics of backend languages like JavaScript (Node.js) or Python.
      • Understand APIs (REST and GraphQL).
      • Practice building CRUD applications and connecting them to databases like MySQL or MongoDB.
      • Recommended Resources:

Mobile App Development

  • Native Development:
    • Android Development
      • Java: A widely used, object-oriented programming language known for its platform independence (Write Once, Run Anywhere) and strong ecosystem, making it popular for enterprise applications and Android development.
      • Kotlin: A modern, concise, and expressive programming language that runs on the JVM, is fully interoperable with Java, and is officially recommended by Google for Android app development due to its safety and productivity features.
    • iOS Development:
      • Swift: A modern, fast, and safe programming language developed by Apple for iOS, macOS, watchOS, and tvOS development. It offers clean syntax, performance optimizations, and strong safety features.
      • Objective-C: An older, dynamic programming language used for Apple app development before Swift. It is based on C with added object-oriented features but is now largely replaced by Swift for new projects.
    • Upskilling:
      • Learn Kotlin or Swift (modern, preferred languages for Android and iOS).
      • Use platform-specific tools: Android Studio (Android) or Xcode (iOS).
      • Start small, like creating a to-do list app or weather app.
      • Recommended Resources:
  • Cross-Platform Development:
    • Technologies:
      • React Native: A JavaScript framework developed by Meta for building cross-platform mobile applications using a single codebase. It leverages React and native components to provide a near-native experience.
      • Flutter: A UI toolkit by Google that uses the Dart language to build natively compiled applications for mobile, web, and desktop from a single codebase, offering high performance and a rich set of pre-designed widgets.
    • Upskilling:

Game Development

  • Technologies:
    • Unity (C#): A popular game engine known for its versatility and ease of use, supporting 2D and 3D game development across multiple platforms. It uses C# for scripting and is widely used for indie and AAA games.
    • Unreal Engine (C++): A high-performance game engine developed by Epic Games, known for its stunning graphics and powerful features. It primarily uses C++ and Blueprints for scripting, making it ideal for AAA game development.
    • Godot: An open-source game engine with a lightweight footprint and built-in scripting language (GDScript), along with support for C# and C++. It is beginner-friendly and widely used for 2D and 3D game development.
  • Upskilling:
    • Learn a game engine (Unity is beginner-friendly and widely used).
    • Explore C# (for Unity) or C++ (for Unreal Engine).
    • Practice by creating simple 2D games, then progress to 3D.
    • Recommended Resources:

Data Science and Machine Learning

  • Technologies:
    • Python (NumPy, Pandas, Scikit-learn): Python is widely used in data science and machine learning, with NumPy for numerical computing, Pandas for data manipulation, and Scikit-learn for machine learning algorithms.
    • R: A statistical programming language designed for data analysis, visualization, and machine learning. It is heavily used in academic and research fields.
    • TensorFlow: An open-source machine learning framework developed by Google, known for its scalability and deep learning capabilities, supporting both CPUs and GPUs.
    • PyTorch: A deep learning framework developed by Facebook, favored for its dynamic computation graph, ease of debugging, and strong research community support.
  • Upskilling:
    • Learn Python and libraries like NumPy, Pandas, and Matplotlib.
    • Explore machine learning concepts and algorithms using Scikit-learn or TensorFlow.
    • Start with data analysis projects or simple ML models.
    • Recommended Resources:

DevOps and Cloud Development

  • Technologies:
    • Docker: A containerization platform that allows developers to package applications with dependencies, ensuring consistency across different environments.
    • Kubernetes: An open-source container orchestration system that automates the deployment, scaling, and management of containerized applications.
    • AWS, Azure, Google Cloud: Leading cloud platforms offering computing, storage, databases, and AI/ML services, enabling scalable and reliable application hosting.
    • CI/CD tools: Continuous Integration and Continuous Deployment tools (like Jenkins, GitHub Actions, and GitLab CI) automate testing, building, and deployment processes for faster and more reliable software releases.
  • Upskilling:
    • Learn about containerization (Docker) and orchestration (Kubernetes).
    • Understand cloud platforms like AWS and their core services (EC2, S3, Lambda).
    • Practice setting up CI/CD pipelines with tools like Jenkins or GitHub Actions.
    • Recommended Resources:

Embedded Systems and IoT Development

  • Technologies:
    • C, C++: Low-level programming languages known for their efficiency and performance, widely used in system programming, game development, and embedded systems.
    • Python: A versatile, high-level programming language known for its simplicity and readability, used in web development, automation, AI, and scientific computing.
    • Arduino: An open-source electronics platform with easy-to-use hardware and software, commonly used for building IoT and embedded systems projects.
    • Raspberry Pi: A small, affordable computer that runs Linux and supports various programming languages, often used for DIY projects, robotics, and education.
  • Upskilling:
    • Learn C/C++ for low-level programming.
    • Experiment with hardware like Arduino or Raspberry Pi.
    • Build projects like smart home systems or sensors.
    • Recommended Resources:

How to Get Started and Transition Smoothly

  1. Assess Your Interests:
    • Do you prefer visual work (Frontend, Mobile), problem-solving (Backend, Data Science), or system-level programming (IoT, Embedded Systems)?
  2. Leverage Your QA Experience:
    • Highlight skills like testing, debugging, and attention to detail when transitioning to development roles.
    • Learn Test-Driven Development (TDD) and how to write unit and integration tests.
  3. Build Projects:
    • Start with small, practical projects and showcase them on GitHub.
    • Examples: A weather app, an e-commerce backend, or a simple game.
  4. Online Platforms for Learning:
    • FreeCodeCamp: For web development.
    • Udemy and Coursera: Wide range of development courses.
    • HackerRank or LeetCode: For coding practice.
  5. Network and Apply:
    • Contribute to open-source projects.
    • Build connections in developer communities like GitHub, Reddit, or LinkedIn.

Choosing the right development framework depends on your interests, career goals, and project requirements. If you enjoy building interactive user experiences, Web Development with React, Angular, or Vue.js could be your path. If you prefer handling server-side logic, Backend Development with Node.js, Python, or Java might be ideal. Those fascinated by mobile applications can explore Native (Kotlin, Swift) or Cross-Platform (React Native, Flutter) Development.

For those drawn to game development, Unity and Unreal Engine provide powerful tools, while Data Science & Machine Learning enthusiasts can leverage Python and frameworks like TensorFlow and PyTorch. If you’re passionate about infrastructure and automation, DevOps & Cloud Development with Docker, Kubernetes, and AWS is a strong choice. Meanwhile, Embedded Systems & IoT Development appeals to those interested in hardware-software integration using Arduino, Raspberry Pi, and C/C++.

Pros and Cons of Different Development Paths

Path | Pros | Cons
Web Development | High-demand, fast-paced, large community | Frequent technology changes
Backend Development | Scalable applications, strong job market | Can be complex, requires database expertise
Mobile Development | Booming industry, native vs. cross-platform options | Requires platform-specific knowledge
Game Development | Creative field, engaging projects | Competitive market, longer development cycles
Data Science & ML | High-paying field, innovative applications | Requires strong math and programming skills
DevOps & Cloud | Essential for modern development, automation focus | Can be complex, requires networking knowledge
Embedded Systems & IoT | Hardware integration, real-world applications | Limited to specialized domains

Final Recommendations

  1. If you’re just starting, pick a general-purpose language like JavaScript or Python and build small projects.
  2. If you have a specific goal, choose a framework aligned with your interest (e.g., React for frontend, Node.js for backend, Flutter for cross-platform).
  3. For career growth, explore in-demand technologies like DevOps, AI/ML, or cloud platforms.
  4. Keep learning and practicing—build projects, contribute to open-source, and stay updated with industry trends.

No matter which path you choose, the key is continuous learning and hands-on experience. Stay curious, build projects, and embrace challenges on your journey to becoming a skilled developer, check out Developer Roadmaps for further insights and guidance. 🚀 Happy coding!

Prospective Developments in API and APIGEE Management: A Look Ahead for the Next Five Years

Application programming interfaces, or APIs, are crucial to the ever-changing digital transformation landscape because they enable businesses to interact with their data and services promptly and effectively. Effective administration is therefore necessary to guarantee that these APIs operate as intended, remain secure, and offer the intended advantages. This is where Apigee, Google Cloud’s premier API management solution, is helpful.

What is Apigee?

Apigee is an excellent tool for businesses wanting to manage their APIs smoothly. It simplifies the process of creating, scaling, securing, and deploying APIs, making developers’ work easier. One of Apigee’s best features is its flexibility—it can manage both external APIs for third-party access and internal APIs for company use, making it suitable for companies of all sizes. Apigee also works well with security layers like Nginx, which adds a layer of authentication between Apigee and backend systems. This flexibility and security make Apigee a reliable and easy-to-use platform for managing APIs.

What is Gemini AI?

Gemini AI is an advanced artificial intelligence tool that enhances the management and functionality of APIs. Think of it as a smart assistant that helps automate tasks, answer questions, and improve security for API systems like Apigee. For example, if a developer needs help setting up an API, Gemini AI can guide them with instructions, formats, and even create new APIs based on simple language input. It can also answer common user questions or handle customer inquiries automatically, making the whole process faster and more efficient. Essentially, Gemini AI brings intelligence and automation to API management, helping businesses run their systems smoothly and securely.

Why Should Consumers Opt for Gemini AI with Apigee?

Consumers should choose Gemini AI with Apigee because it offers more innovative, faster, and more secure API management. It also brings security, efficiency, and ease of use to API management, making it a valuable choice for businesses that want to streamline their operations and ensure their APIs are fast, reliable, and secure. Here are some key benefits: Enhanced Security, Faster Development, and Time-Saving Automation.

Below is the flow diagram for Prospective Developments in APIGEE.

[Flow diagram: prospective developments in Apigee]


Greater Emphasis on API Security

  • Zero Trust Security:  The Zero Trust security approach is founded on “never trust, always verify,” which states that no device or user should ever be presumed trustworthy, whether connected to the network or not. Each request for resource access under this architecture must undergo thorough verification.
  • Zero Trust Models: APIs will increasingly adopt zero-trust security principles, ensuring no entity is trusted by default. The future of Zero-Trust in Apigee will likely focus on increasing the security and flexibility of API management through tighter integration with identity management, real-time monitoring, and advanced threat protection technologies.
  • Enhanced Data Encryption: Future developments might include more substantial data encryption capabilities, both in transit and at rest, to protect sensitive information in compliance with Zero Trust principles.



Resiliency and Fault Tolerance

 The future of resiliency and fault tolerance in Apigee will likely involve advancements and innovations driven by evolving technological trends and user needs. Here are some key areas where we can expect Apigee to enhance its resiliency and fault tolerance capabilities.


  • Automated Failover: Future iterations of Apigee will likely have improved automated failover features, guaranteeing that traffic is redirected as quickly as possible in case of delays or outages. More advanced failure detection and failover methods could be a part of this.
  • Adaptive Traffic Routing: Future updates could include more dynamic and intelligent traffic management features. This might involve adaptive routing based on real-time performance metrics, enabling more responsive adjustments to traffic patterns and load distribution.
  • Flexible API Gateway Configurations: Future enhancements could provide more flexibility in configuring API gateways to better handle different fault scenarios. This includes custom policies for fault tolerance, enhanced error handling, and more configurable redundancy options.

Gemini AI with Apigee

Gemini AI and Apigee’s integration has the potential to improve significantly API administration by enhancing its intelligence, security, and usability. Organizations can anticipate improved security, more effective operations, and better overall user and developer experience by utilizing cutting-edge AI technologies. This integration may open the door to future breakthroughs and capabilities as AI and API management technologies develop. If the API specifications that are currently available in API Hub do not satisfy your needs, you can utilize Gemini to create a new one by just stating your needs in basic English. Considerable time is saved in the cycles of development and assessment.

While adding policies during Apigee development, Gemini AI can surface the relevant policy documentation in parallel and guide you with the formats used in the policies. We can also automate query handling, chatbot-style, with Gemini AI and use it to answer questions about the APIs available on the Apigee portal.

If an integration is already in use, we can use Gemini AI to accept inquiries from customers or clients and automate responses to the most frequently asked questions. Additionally, Gemini AI can reply to customers until our professionals are available.


Overview

Apigee, Google Cloud’s API management platform, plays a key role in digital transformation by securely and flexibly connecting businesses with data and services. Future advancements focus on stronger security with a “Zero Trust” approach, improved resilience through automated failover and adaptive traffic routing, and enhanced flexibility in API gateway settings. Integration with Gemini AI will make Apigee smarter, enabling automated support, policy guidance, API creation, streamlining development, and improving customer service.

AWS Secrets Manager – A Secure Solution for Protecting Your Data

Objective

If you are looking for a solution to securely store secrets such as DB credentials, API keys, tokens, and passwords, AWS Secrets Manager is the service that comes to your rescue. Keeping secrets as plain text in your code is highly risky; storing them in AWS Secrets Manager helps in the following ways.

AWS Secret Manager is a fully managed service that can store and manage sensitive information. It simplifies secret handling by enabling the auto-rotation of secrets to reduce the risk of compromise, monitoring the secrets for compliance, and reducing the manual effort of updating the credentials in the application after rotation.

Essential Features of AWS Secret Manager


  • Security: Secrets are encrypted using encryption keys we can manage through AWS KMS.
  • Rotation schedule: Enable rotation of credentials through scheduling to replace long-term with short-term ones.
  • Authentication and Access control: Using AWS IAM, we can control access to the secrets, control lambda rotation functions, and permissions to replicate the secrets.
  • Monitor secrets for compliance: AWS Config rules can be used to check whether secrets align with internal security and compliance standards, such as HIPAA, PCI, ISO, AICPA SOC, FedRAMP, DoD, IRAP, and OSPAR.
  • Audit and monitoring: We can use other AWS services, such as Cloud Trail for auditing and Cloud Watch for monitoring.
  • Rollback through versioning: If needed, the secret can be reverted to the previous version by moving the labels attached to that secret.
  • Pay as you go: Charged based on the number of secrets managed through the Secret manager.
  • Integration with other AWS services: Integrating with other AWS services, such as EC2, Lambda, RDS, etc., eliminates the need to hard code secrets.

AWS Secret Manager Pricing

At the time of publishing this document, AWS Secret Manager pricing is below. This might be revised in the future.
Component | Cost | Details
Secret storage | $0.40 per secret per month | Charged per month; if a secret is stored for less than a month, the cost is prorated.
API calls | $0.05 per 10,000 API calls | Charged for API interactions such as managing or retrieving secrets.

Creating a Secret

Let us get deeper into the process of creating secrets.

  1. Log in to the AWS Secrets Manager console (https://console.aws.amazon.com/secretsmanager/) and select the "Store a new secret" option.
  2. On the Choose secret type page,
    1. For Secret type, select the type of database secret that you want to store:
    2. For Credentials, input the database credentials that were previously hardcoded in the application.
    3. For the Encryption key, choose AWS/Secrets Manager. This encryption key service is free to use.
    4. For the Database field, choose your database.
    5. Then click Next.
  3. On the Configure secret page,
    1. Provide a descriptive secret name and description.
    2. In the Resource permissions field, choose Edit permissions. Provide the policy that allows RoleToRetrieveSecretAtRuntime and Save.
    3. Then, click Next.
  4. On the Configure rotation page,
    1. Select the rotation schedule you want.
    2. Click Next.
  5. On the Review page, review the details, and then Store.

Output

The secret is now created.

We can now update the application code to fetch the secret from Secrets Manager and remove the hardcoded credentials. Depending on the language, this means adding a call that retrieves the secret at runtime, as in the sketch below. Depending on our requirements, we can also adjust the rotation strategy, versioning, monitoring, and so on.
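As a minimal Python sketch (the secret name below is a placeholder, and the secret is assumed to hold the JSON key-value pairs created above), fetching and parsing the secret at runtime looks like this:

import json
import boto3

client = boto3.client('secretsmanager', region_name='us-east-1')

# Placeholder secret name -- use the name you gave the secret in the console.
response = client.get_secret_value(SecretId='prod/db/credentials')

# For key-value secrets, SecretString contains a JSON document.
credentials = json.loads(response['SecretString'])
db_username = credentials['username']
db_password = credentials['password']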

Secret Rotation Strategy


  • Single user – This strategy updates the credentials for one user in one secret. Open connections are not dropped during rotation. While rotation is in progress, there is a low risk of the database denying calls that use the newly rotated credentials; this can be mitigated with retry strategies. Once rotation completes, all new calls use the rotated credentials.
    • Use case – This strategy works well for one-time or interactive users.
  • Alternating users – This strategy updates secret values for two users in one secret. We create the first user, and during the first rotation the rotation function creates a cloned second user. On each subsequent rotation, the rotation function alternates which user’s password it updates, so the application always gets a valid set of credentials, even during rotation.
    • Use case – This is good for systems that require high availability. (A sketch for enabling rotation programmatically follows this list.)
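Rotation can also be enabled or adjusted from code. The following boto3 sketch assumes a rotation Lambda function already exists; its ARN and the secret name are placeholders.

import boto3

client = boto3.client("secretsmanager", region_name="us-east-1")

# Enable rotation every 30 days using an existing rotation Lambda (placeholder ARN)
client.rotate_secret(
    SecretId="prod/app/db-credentials",
    RotationLambdaARN="arn:aws:lambda:us-east-1:123456789012:function:SecretsManagerRotationFn",
    RotationRules={"AutomaticallyAfterDays": 30},
)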

Versioning of Secrets

A secret consists of the secret value and its metadata. To store multiple values in one secret, we can use JSON key-value pairs. A secret has versions that hold copies of the encrypted secret value, and AWS uses three staging labels:

  • AWSCURRENT – to store current secret value.
  • AWSPREVIOUS – to hold the previous version.
  • AWSPENDING – to hold pending value during rotation.

Custom labeling of versions is also possible. AWS never removes labeled versions of a secret, but unlabeled versions are considered deprecated and may be removed at any time. Staging labels can also be used when reading a secret, as shown in the sketch below.
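A small boto3 sketch (using the same placeholder secret name as earlier):

import boto3

client = boto3.client("secretsmanager", region_name="us-east-1")

# By default, get_secret_value returns the version labeled AWSCURRENT
current = client.get_secret_value(SecretId="prod/app/db-credentials")

# The previous version can be read explicitly through its staging label
previous = client.get_secret_value(
    SecretId="prod/app/db-credentials",
    VersionStage="AWSPREVIOUS",
)
print(current["VersionId"], previous["VersionId"])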

Monitoring Secrets in AWS Secrets Manager

Secrets stored in AWS Secrets Manager can be monitored with the following AWS services.

  • Using CloudTrail – CloudTrail records all API calls to Secrets Manager as events, including secret rotation and version deletion.
  • Monitoring using CloudWatch – We can track metrics such as the number of secrets in the account and the number of secrets marked for deletion, and set alarms on metric changes. (A small CloudTrail query sketch follows this list.)
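As an illustration, recent Secrets Manager API activity recorded by CloudTrail can be pulled with boto3. This is only a sketch; CloudTrail’s LookupEvents history covers roughly the last 90 days of management events.

import boto3

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

# List recent management events whose event source is Secrets Manager
events = cloudtrail.lookup_events(
    LookupAttributes=[
        {"AttributeKey": "EventSource", "AttributeValue": "secretsmanager.amazonaws.com"}
    ],
    MaxResults=20,
)
for event in events["Events"]:
    print(event["EventTime"], event["EventName"], event.get("Username", ""))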

Conclusion

AWS Secrets Manager offers a secure, automated, scalable solution for managing sensitive data and credentials. It reduces the risk of secret exposure and helps improve application security with minimal manual intervention. Adopting best practices around secret management can ensure compliance and minimize vulnerabilities in your applications.

 

]]>
https://blogs.perficient.com/2025/02/05/aws-secrets-manager-a-secure-solution-for-protecting-your-data/feed/ 0 376895
Setting Up Virtual WAN (VWAN) in Azure Cloud: A Comprehensive Guide – I https://blogs.perficient.com/2025/02/05/setting-up-azure-vwan/ https://blogs.perficient.com/2025/02/05/setting-up-azure-vwan/#comments Wed, 05 Feb 2025 11:01:41 +0000 https://blogs.perficient.com/?p=376281

As businesses expand their global footprint, the need for a flexible, scalable, and secure networking solution becomes paramount. Enter Azure Virtual WAN (VWAN), a cloud-based offering designed to simplify and centralize network management while ensuring top-notch performance. Let’s dive into what Azure VWAN offers and how to set it up effectively.

What is Azure Virtual WAN (VWAN)?

Azure Virtual WAN, or VWAN, is a cloud-based networking service that provides secure, seamless, and optimized connectivity across hybrid and multi-cloud environments.

It provides:

I. Flexibility for Dynamic Network Requirements

  • Adaptable Connectivity: Azure VWAN supports various connectivity options, including ExpressRoute, Site-to-Site VPN, and Point-to-Site VPN, ensuring compatibility with diverse environments like on-premises data centers, branch offices, and remote workers.
  • Scale On-Demand: As network requirements grow or change, Azure VWAN allows you to dynamically add or remove connections, integrate new virtual networks (VNets), or scale bandwidth based on traffic needs.
  • Global Reach: Azure VWAN enables connectivity across regions and countries using Microsoft’s extensive global network, ensuring that organizations with distributed operations stay connected.
  • Hybrid and Multi-Cloud Integration: Azure VWAN supports hybrid setups (on-premises + cloud) and integration with other public cloud providers, providing the flexibility to align with business strategies.

II. Improved Management with Centralized Controls

  • Unified Control Plane: Azure VWAN provides a centralized dashboard within the Azure Portal to manage all networking components, such as VNets, branches, VPNs, and ExpressRoute circuits.
  • Simplified Configuration: Automated setup and policy management make deploying new network segments, traffic routing, and security configurations easy.
  • Network Insights: Built-in monitoring and diagnostic tools offer deep visibility into network performance, allowing administrators to quickly identify and resolve issues.
  • Policy Enforcement: Azure VWAN enables consistent policy enforcement across regions and resources, improving governance and compliance with organizational security standards.

III. High Performance Leveraging Microsoft’s Global Backbone Infrastructure

  • Low Latency and High Throughput: Azure VWAN utilizes Microsoft’s global backbone network, known for its reliability and speed, to provide high-performance connectivity across regions and to Azure services.
  • Optimized Traffic Routing: Intelligent routing ensures that traffic takes the most efficient path across the network, reducing latency for applications and end users.
  • Built-in Resilience: Microsoft’s backbone infrastructure includes redundant pathways and fault-tolerant systems, ensuring high availability and minimizing the risk of network downtime.
  • Proximity to End Users: With a global footprint of Azure regions and points of presence (PoPs), Azure VWAN ensures proximity to end users, improving application responsiveness and user experience.

High-Level Architecture of VWAN

This diagram depicts a high-level architecture of Azure Virtual WAN and its connectivity components.

 

[Diagram: high-level Azure Virtual WAN architecture]

 

  • HQ/DC (Headquarters/Data Centre): Represents the organization’s primary data center or headquarters hosting critical IT infrastructure and services. Acts as a centralized hub for the organization’s on-premises infrastructure. Typically includes servers, storage systems, and applications that need to communicate with resources in Azure.
  • Branches: Represents the organization’s regional or local office locations. Serves as local hubs for smaller, decentralized operations. Each branch connects to Azure to access cloud-hosted resources, applications, and services and communicates with other branches or HQ/DC. The HQ/DC and branches communicate with each other and Azure resources through the Azure Virtual WAN.
  • Virtual WAN Hub: At the heart of Azure VWAN is the Virtual WAN Hub, a central node that simplifies traffic management between connected networks. This hub acts as the control point for routing and ensures efficient data flow.
  • ExpressRoute: Establishes a private connection between the on-premises network and Azure, bypassing the public internet. It uses BGP for route exchange, ensuring secure and efficient connectivity.
  • VNet Peering: Links Azure Virtual Networks directly, enabling low-latency, high-bandwidth communication.
    • Intra-Region Peering: Connects VNets within the same region.
    • Global Peering: Bridges VNets across different regions.
  • Point-to-Site (P2S) VPN: Ideal for individual users or small teams, this allows devices to securely connect to Azure resources over the internet.
  • Site-to-Site (S2S) VPN: Connects the on-premises network to Azure, enabling secure data exchange between systems.

Benefits of VWAN

  • Scalability: Expand the network effortlessly as the business grows.
  • Cost-Efficiency: Reduce hardware expenses by leveraging cloud-based solutions.
  • Global Reach: Easily connect offices and resources worldwide.
  • Enhanced Performance: Optimize data transfer paths for better reliability and speed.

Setting Up VWAN in Azure

Follow these steps to configure Azure VWAN (a programmatic sketch using the Azure SDK for Python follows the steps):

Step 1: Create a Virtual WAN Resource

  • Log in to the Azure Portal and create a Virtual WAN resource. This serves as the foundation of the network architecture.

Step 2: Configure a Virtual WAN Hub

  • Create a Virtual WAN hub to act as the central traffic manager, and configure it to meet the company’s needs.

Step 3: Establish Connections

  • Configure VPN Gateways for secure, encrypted connections.
  • Use ExpressRoute for private, high-performance connectivity.

Step 4: Link VNets

  • Create Azure virtual networks and link them to the WAN hub. This integration ensures seamless interaction between resources.
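For reference, steps 1 and 2 can also be scripted. The sketch below uses the azure-mgmt-network Python SDK; the subscription, resource group, names, region, and address space are illustrative assumptions, and exact model fields can vary by SDK version.

from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

subscription_id = "<subscription-id>"      # assumption: your subscription
resource_group = "rg-network-core"         # assumption: an existing resource group
client = NetworkManagementClient(DefaultAzureCredential(), subscription_id)

# Step 1: create the Virtual WAN resource
vwan = client.virtual_wans.begin_create_or_update(
    resource_group,
    "corp-vwan",
    {"location": "eastus", "allow_branch_to_branch_traffic": True},
).result()

# Step 2: create a Virtual WAN hub attached to the WAN
hub = client.virtual_hubs.begin_create_or_update(
    resource_group,
    "corp-vwan-hub-eastus",
    {
        "location": "eastus",
        "address_prefix": "10.100.0.0/23",   # hub address space (assumption)
        "virtual_wan": {"id": vwan.id},
        "sku": "Standard",
    },
).result()
print(hub.provisioning_state)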

Monitoring and Troubleshooting VWAN

Azure Monitor

Azure Monitor tracks performance, availability, and network health in real time and provides insights into traffic patterns, latency, and resource usage.

Network Watcher

Diagnose network issues with tools like packet capture and connection troubleshooting. Quickly identify and resolve any bottlenecks or disruptions.

Alerts and Logs

Set up alerts for critical issues such as connectivity drops or security breaches. Use detailed logs to analyze network events and maintain robust auditing.

Final Thoughts

Azure VWAN is a powerful tool for businesses looking to unify and optimize their global networking strategy. Organizations can ensure secure, scalable, and efficient connectivity by leveraging features like ExpressRoute, VNet Peering, and VPN Gateways. With the correct setup and monitoring tools, managing complex networks becomes a seamless experience.

]]>
https://blogs.perficient.com/2025/02/05/setting-up-azure-vwan/feed/ 1 376281
Apex Security Best Practices for Salesforce Applications https://blogs.perficient.com/2025/02/02/apex-security-practices-building-secure-salesforce-applications/ https://blogs.perficient.com/2025/02/02/apex-security-practices-building-secure-salesforce-applications/#respond Mon, 03 Feb 2025 05:51:18 +0000 https://blogs.perficient.com/?p=373874

As businesses increasingly rely on Salesforce to manage their critical data, ensuring data security has become more important than ever. Apex, Salesforce’s proprietary programming language, runs in system mode by default, bypassing object- and field-level security. To protect sensitive data, developers need to enforce strict security measures.

This blog will explore Apex security best practices, including enforcing sharing rules, field-level permissions, and user access enforcement to protect your Salesforce data.

Why Apex Security is Critical for Your Salesforce Applications

Apex’s ability to bypass security settings puts the onus on developers to implement proper Salesforce security practices. Without these protections, your Salesforce application might unintentionally expose sensitive data to unauthorized users.

By following best practices such as enforcing sharing rules, validating inputs, and using security-enforced SOQL queries, you can significantly reduce the risk of data breaches and ensure your app adheres to the platform’s security standards.

Enforcing Sharing Rules in Apex to Maintain Data Security

Sharing rules are central to controlling data access in Salesforce. Apex doesn’t automatically respect these sharing rules unless explicitly instructed to do so. Here’s how to enforce them in your Apex code:

Using with sharing in Apex Classes

  • with sharing: Ensures the current user’s sharing settings are enforced, preventing unauthorized access to records.
  • without sharing: Ignores sharing rules and is often used for administrative tasks or system-level operations where access should not be restricted.
  • inherited sharing: Inherits sharing settings from the calling class.

Best Practice: Always use with sharing unless you explicitly need to override sharing rules for specific use cases. This ensures your code complies with Salesforce security standards.

Example

public with sharing class AccountHandlerWithSharing {
    public void fetchAccounts() {
        // Sharing rules are enforced: only records visible to the current user are returned
        List<Account> accounts = [SELECT Id, Name FROM Account];
    }
}

public without sharing class AccountHandlerWithoutSharing {
    public void fetchAccounts() {
        // Sharing rules are ignored: all records are returned regardless of the user's access
        List<Account> accounts = [SELECT Id, Name FROM Account];
    }
}

Enforcing Object and Field-Level Permissions in Apex

Apex operates in a system context by default, bypassing object- and field-level security. You must manually enforce these security measures to ensure your code respects user access rights.

Using WITH SECURITY_ENFORCED in SOQL Queries

The WITH SECURITY_ENFORCED keyword ensures that Salesforce performs a permission check on fields and objects in your SOQL query, ensuring that only accessible data is returned.

Example

List<Account> accounts = [
    SELECT Id, Name
    FROM Account
    WHERE Industry = 'Technology'
    WITH SECURITY_ENFORCED
];

This approach guarantees that only fields and objects the current user can access are returned in your query results.

Using the stripInaccessible Method to Filter Inaccessible Data

Salesforce provides the stripInaccessible method, which removes inaccessible fields or relationships from query results. It also helps prevent runtime errors by ensuring no inaccessible fields are used in DML operations.

Example

List<Account> accounts = [SELECT Id, Name FROM Account LIMIT 1];
// stripInaccessible operates on a list of records and returns an SObjectAccessDecision
SObjectAccessDecision decision = Security.stripInaccessible(AccessType.READABLE, accounts);
List<Account> sanitizedAccounts = (List<Account>) decision.getRecords();

Using stripInaccessible ensures that any fields or relationships the user cannot access are stripped out of the Account records before any further processing.

Apex Managed Sharing: Programmatically Share Records

Apex Managed Sharing can be a powerful tool when you need to manage record access dynamically. This feature allows developers to programmatically share records with specific users or groups.

Example

public void shareRecord(Id recordId, Id userId) {
    CustomObject__Share share = new CustomObject__Share();
    share.ParentId = recordId;
    share.UserOrGroupId = userId;
    share.AccessLevel = 'Edit'; // 'Read' or 'Edit' ('All' is an internal value and cannot be granted)
    share.RowCause = Schema.CustomObject__Share.RowCause.Manual; // reason for the manual share
    insert share;
}

This code lets you share a custom object record with a specific user and grant them Edit access. Apex Managed Sharing allows more flexible, dynamic record-sharing controls.

Security Tips for Apex and Lightning Development

Here are some critical tips for improving security in your Apex and Lightning applications:

Avoid Hardcoding IDs

Hardcoding Salesforce IDs, such as record IDs or profile IDs, can introduce security vulnerabilities and reduce code flexibility. Retrieve IDs dynamically at run time, and consider using Custom Settings or Custom Metadata for more flexible and secure configurations.

Validate User Inputs to Prevent Security Threats

It is essential to sanitize all user inputs to prevent threats like SOQL injection and Cross-Site Scripting (XSS). Always use parameterized queries and escape characters where necessary.

Use stripInaccessible in DML Operations

To prevent processing inaccessible fields, always use the stripInaccessible method when handling records containing fields restricted by user permissions.

Review Sharing Contexts to Ensure Data Security

Ensure you use the correct sharing context for each class or trigger. Avoid granting unnecessary access by using with sharing for most of your classes.

Write Test Methods to Simulate User Permissions

Writing tests that simulate various user roles using System.runAs() is crucial to ensure your code respects sharing rules, field-level permissions, and other security settings.

Conclusion: Enhancing Salesforce Security with Apex

Implementing Apex security best practices is essential to protect your Salesforce data. Whether you are enforcing sharing rules, respecting field-level permissions, or programmatically managing record sharing, these practices help ensure that only authorized users can access sensitive data.

When building your Salesforce applications, always prioritize security by:

  • Using with sharing where possible.
  • Implementing security-enforced queries.
  • Using tools like stripInaccessible to filter out inaccessible fields.

By adhering to these practices, you can build secure Salesforce applications that meet business requirements and ensure data integrity and compliance.

Further Reading on Salesforce Security

]]>
https://blogs.perficient.com/2025/02/02/apex-security-practices-building-secure-salesforce-applications/feed/ 0 373874
Sales Cloud to Data Cloud with No Code! https://blogs.perficient.com/2025/01/31/sales-cloud-to-data-cloud-with-no-code/ https://blogs.perficient.com/2025/01/31/sales-cloud-to-data-cloud-with-no-code/#respond Fri, 31 Jan 2025 18:15:25 +0000 https://blogs.perficient.com/?p=376326

Salesforce has been giving us a ‘No Code’ way to have Data Cloud notify Sales Cloud of changes through Data Actions and Flows.   But did you know you can go the other direction too?

The Data Cloud Ingestion API allows us to set up a ‘No Code’ way of sending changes in Sales Cloud to Data Cloud.

Why would you want to do this with the Ingestion API?

  1. You are right that we could surely set up a ‘normal’ Salesforce CRM Data Stream to pull data from Sales Cloud into Data Cloud.  This is also a ‘No Code’ way to integrate the two.  But maybe you want to do some complex filtering or logic before sending the data on to Data Cloud, where a Flow could really help.
  2. CRM Data Streams only run on a schedule of every 10 minutes.  With the Ingestion API we can send to Data Cloud immediately; we just need to wait until the Ingestion API can run for that specific request.  The current wait time for the Ingestion API to run is 3 minutes, but I have seen it run faster at times.  It is not ‘real-time’, so do not use this for ‘real-time’ use cases.  But it is faster than CRM Data Streams for incremental and smaller syncs that need better control.
  3. You could also ingest data into Data Cloud easily through an Amazon S3 bucket.  But again, here we have data in Sales Cloud that we want to get to Data Cloud with no code.
  4. We can do very cool integrations by leveraging the Ingestion API outside of Salesforce like in this video, but we want a way to use Flows (No Code!) to send data to Data Cloud.  (For contrast, a minimal sketch of such an external call follows this list.)
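Purely for contrast with the no-code approach, an external system could post to the Ingestion API directly. This rough Python sketch assumes a tenant-specific Data Cloud endpoint, an Ingestion API source named "SalesCloudLeads" with an object "lead_form_fill", and an access token already obtained through the Data Cloud OAuth token exchange; all of these names are hypothetical.

import requests

# Assumptions: the endpoint, source name, object name, and token below are placeholders for illustration
DC_ENDPOINT = "https://<your-tenant-endpoint>"
ACCESS_TOKEN = "<data-cloud-access-token>"

payload = {
    "data": [
        {"email": "jane.doe@example.com", "state": "Minnesota", "form_name": "Product Interest"}
    ]
}

resp = requests.post(
    f"{DC_ENDPOINT}/api/v1/ingest/sources/SalesCloudLeads/lead_form_fill",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}", "Content-Type": "application/json"},
    json=payload,
    timeout=30,
)
print(resp.status_code, resp.text)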

Use Case:

You have Sales Cloud, Data Cloud and Marketing Cloud Engagement.  As a Marketing Campaign Manager you want to send an email through Marketing Cloud Engagement when a Lead fills out a certain form.

You only want to send the email if the Lead is from a certain state like ‘Minnesota’ and that Email address has ordered a certain product in the past.  The historical product data lives in Data Cloud only.  This email could come out a few minutes later and does not need to be real-time.

Solution A:

If you need to do this in near real-time, I would suggest not using the Ingestion API.  We can query the Data Cloud product data in a Flow and then update your Lead or other record in a way that triggers a ‘Journey Builder Salesforce Data Event’ in Marketing Cloud Engagement.

Solution B:

But our requirements above do not require real-time, so let’s solve this with the Ingestion API.  Since we are sending data into Data Cloud, the Salesforce Data Action gives us more power to reference additional Data Cloud data rather than relying on the Flow ‘Get Records’ element for all data needs.

We can build an Ingestion API Data Stream that we can use in a Salesforce Flow.  The flow can check to make sure that the Lead is from a certain state like ‘Minnesota’.  The Ingestion API can be triggered from within the flow.  Once the data lands in the DMO object in Data Cloud we can then use a ‘Data Action’ to listen for that data change, check if that Lead has purchased a certain product before and then use a ‘Data Action Target’ to push to a Journey in Marketing Cloud Engagement.  All that should occur within a couple of minutes.

Sales Cloud to Data Cloud with No Code!  Let’s do this!

Here is the base Salesforce post sharing that this is possible through Flows, but let’s go deeper for you!

The following are those deeper steps for getting the data to Data Cloud from Sales Cloud.  In my screenshots you will see data moving from a VIN (Vehicle Identification Number) custom object to a VIN DLO/DMO in Data Cloud, but the same process could be used for our ‘Lead’ use case above.

  1. Create a YAML file that we will use to define the fields in the Data Lake Object (DLO).  I put an example YAML structure at the bottom of this post.
  2. Go to Setup, Data Cloud, External Integrations, Ingestion API.   Click on ‘New’

    1. Give your new Ingestion API Source a Name.  Click on Save.
    2. In the Schema section click on the ‘Upload Files’ link to upload your YAML file.
    3. You will see a screen to preview your Schema.  Click on Save.
    4. After that is complete, you will see your new Schema Object.
    5. Note that at this point there is no Data Lake Object created yet.
  3. Create a new ‘Ingestion API’ Data Stream.  Go to the ‘Data Streams’ tab and click on ‘New’.   Click on the ‘Ingestion API’ box and click on ‘Next’.

    1. Select the Ingestion API that was created in Step 2 above.  Select the Schema object that is associated to it.  Click Next.
    2. Configure your new Data Lake Object by setting the Category, Primary Key and Record Modified Fields
    3. Set any Filters you want with the ‘Set Filters’ link and click on ‘Deploy’ to create your new Data Stream and the associated Data Lake Object.
    4. If you want to also create a Data Model Object (DMO) you can do that and then use the ‘Review’ button in the ‘Data Mapping’ section on the Data Stream detail page to do that mapping.  You do need a DMO to use the ‘Data Action’ feature in Data Cloud.
  4. Now we are ready to use this new Ingestion API Source in our Flow!  Yeah!
  5. Create a new ‘Start from Scratch’, ‘Record-Triggered Flow’ on the Standard or Custom object you want to use to send data to Data Cloud.
  6. Configure an Asynchronous path.  We cannot connect to this ‘Ingestion API’ from the ‘Run Immediately’ part of the Flow because this Action will be making an API call to Data Cloud.  This is similar to how we have to use a ‘Future’ call with an Apex Trigger.
  7. Once you have configured your base Flow, add the ‘Action’ to the ‘Run Asynchronously’ part of the Flow.    Select the ‘Send to Data Cloud’ Action and then map your fields to the Ingestion API inputs that are available for that ‘Ingestion API’ Data Stream you created.
  8. Save and Activate your Flow.
  9. To test, update your record in a way that will trigger your Flow to run.
  10. Go into Data Cloud and see your data has made it there by using the ‘Data Explorer’ tab.
  11. The standard Salesforce Debug Logs will show the details of your Flow steps if you need to troubleshoot something.

Congrats!

You have sent data from Sales Cloud to Data Cloud with ‘No Code’ using the Ingestion API!

Setting up the Data Action and connecting to Marketing Cloud Journey Builder is documented here to round out the use case.

Here is the base Ingestion API Documentation.

At Perficient we have experts in Sales Cloud, Data Cloud and Marketing Cloud Engagement.  Please reach out and let’s work together to reach your business goals on these platforms and others.

Example YAML Structure:


openapi: 3.0.3
components:
  schemas:
    VIN_DC:
      type: object
      properties:
        VIN_Number:
          type: string
        Description:
          type: string
        Make:
          type: string
        Model:
          type: string
        Year:
          type: number
        created:
          type: string
          format: date-time

]]>
https://blogs.perficient.com/2025/01/31/sales-cloud-to-data-cloud-with-no-code/feed/ 0 376326