PerficientGDCNagpur Articles / Blogs / Perficient
https://blogs.perficient.com/tag/perficientgdcnagpur/

AI: Security Threat to Personal Data?
https://blogs.perficient.com/2025/08/18/ai-security-threat-to-personal-data/ (Mon, 18 Aug 2025)

In recent years, AI chatbots like ChatGPT have gone from fun tools for answering questions to serious helpers in workplaces, education, and even personal decision-making. With ChatGPT-5 now being the latest and most advanced version, it’s no surprise that people are asking a critical question:

“Is my personal data safe when I use ChatGPT-5?”

First, What Is ChatGPT-5?

ChatGPT-5 is an AI language model created by OpenAI. You can think of it like a super-smart digital assistant that can:

  • Answer questions across a wide range of topics
  • Draft emails, essays, and creative content
  • Write and debug code
  • Assist with research and brainstorming
  • Support productivity and learning

It learns from patterns in data, but here’s an important point – it doesn’t “remember” your conversations unless the developer has built a special memory feature and you’ve agreed to it.

How Your Data Is Used

When you chat with ChatGPT-5, your messages are processed to generate a response. Depending on the app or platform you use, your conversations may be:

  • Temporarily stored to improve the AI’s performance
  • Reviewed by humans (in rare cases) to train and fine-tune the system
  • Deleted or anonymized after a specific period, depending on the service’s privacy policy

This is why reading the privacy policy is not just boring legal stuff – it’s how you find out precisely what happens to your data.

Real Security Risks to Be Aware Of

The concerns about ChatGPT-5 (and similar AI tools) are less about it being “evil” and more about how your data could be exposed if not appropriately handled.

Here are the main risks:

1. Accidental Sharing of Sensitive Information

Many users unknowingly type personal details – such as their full name, home address, phone number, passwords, or banking information – into AI chat windows. While the chatbot itself may not misuse this data, it is still transmitted over the internet and may be temporarily stored by the platform. If the platform suffers a data breach or if the information is accessed by unauthorized personnel, your sensitive data could be exposed or exploited.

Best Practice: Treat AI chats like public forums – never share confidential or personally identifiable information.

2. Data Retention by Third-Party Platforms

AI chatbots are often integrated into third-party platforms, such as browser extensions, productivity tools, or mobile apps. These integrations may collect and store your chat data on their own servers, sometimes without clearly informing you. Unlike official platforms with strict privacy policies, third-party services may lack robust security measures or transparency.

Risk Example: A browser extension that logs your AI chats could be hacked, exposing all stored conversations.

Best Practice: Use only trusted, official apps and review their privacy policies before granting access.

3. Misuse of Login Credentials

In rare but serious cases, malicious AI integrations or compromised platforms could capture login credentials you enter during a conversation. If you share usernames, passwords, or OTPs (one-time passwords), these could be used to access your accounts and perform unauthorized actions – such as placing orders, transferring money, or changing account settings.

Real-World Consequence: You might wake up to find that someone used your credentials to order expensive items or access private services.

Best Practice: Never enter login details into any AI chat, and always use two-factor authentication (2FA) for added protection.

4. Phishing & Targeted Attacks

If chat logs containing personal information are accessed by cybercriminals, they can use that data to craft highly convincing phishing emails or social engineering attacks. For example, knowing your name, location, or recent purchases allows attackers to impersonate trusted services and trick you into clicking malicious links or revealing more sensitive data.

Best Practice: Be cautious of unsolicited messages and verify the sender before responding or clicking links.

5. Overtrusting AI Responses

AI chatbots are trained on vast datasets, but they can still generate inaccurate, outdated, or misleading information. Relying on AI responses without verifying facts can lead to poor decisions, especially in areas like health, finance, or legal advice.

Risk Example: Acting on incorrect medical advice or sharing false information publicly could have serious consequences.

Best Practice: Always cross-check AI-generated content with reputable sources before taking action or sharing it.

How to Protect Yourself

Here are simple steps you can take:

  • Never share sensitive login credentials or card details inside a chat.
  • Stick to official apps and platforms to reduce the risk of malicious AI clones.
  • Use 2-factor authentication (2FA) for all accounts, so even stolen passwords can’t be used easily.
  • Check permissions before connecting ChatGPT-5 to any service – don’t allow unnecessary access.
  • Regularly clear chat history if your platform stores conversations.

Final Thoughts

ChatGPT-5 is a tool, and like any tool, it can be used for good or misused. The AI itself isn’t plotting to steal your logins or credentials, but if you use it carelessly or through untrusted apps, your data could be at risk.

Golden rule: Enjoy the benefits of AI, but treat it like a stranger online – don’t overshare, and keep control of your personal data.

Mastering GitHub Copilot in VS Code
https://blogs.perficient.com/2025/08/12/mastering-github-copilot-in-vs-code/ (Tue, 12 Aug 2025)

Ready to go from “meh” to “whoa” with your AI coding assistant? Here’s how to get started.

You’ve installed GitHub Copilot. Now what?

Here’s how to actually get it to work for you – not just with you.

In the earlier blog, Using GitHub Copilot in VS Code, we covered how to install and start using GitHub Copilot in VS Code. This post focuses on getting better results out of it.

1. Write for Copilot, Not Just Yourself

Copilot is like a teammate who’s really fast at coding but only understands what you clearly explain.

Start with Intention:

Use descriptive comments or function names to guide Copilot.

// Fetch user data from API and cache it locally
function fetchUserData() {

Copilot will often generate useful logic based on that. It works best when you think one step ahead.
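
For instance, Copilot might complete that stub along these lines (a sketch only – the exact suggestion varies from run to run, and the /api/user endpoint and the userData cache key here are placeholders, not part of any real API):

// Fetch user data from API and cache it locally
function fetchUserData() {
  // Serve from the local cache when we already have the data
  const cached = localStorage.getItem('userData');
  if (cached) {
    return Promise.resolve(JSON.parse(cached));
  }

  // Otherwise call the (placeholder) endpoint and cache the result
  return fetch('/api/user')
    .then((response) => {
      if (!response.ok) {
        throw new Error('Request failed with status ' + response.status);
      }
      return response.json();
    })
    .then((data) => {
      localStorage.setItem('userData', JSON.stringify(data));
      return data;
    });
}

If the first suggestion isn’t quite what you had in mind, refine the comment and let Copilot try again.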

2. Break Problems Into Small Pieces

Copilot shines when your code is modular.

Instead of writing:

function processEverything() {
  // 50 lines of logic
}

Break it down:

// Validate form input
function validateInput(data) {

}

// Submit form to backend
function submitForm(data) {

}

This way, you get smarter, more accurate completions.

3. Use Keyboard Shortcuts to Stay in Flow

Speed = flow. These shortcuts help you ride Copilot without breaking rhythm:

  • Accept Suggestion – Tab (Windows) / Tab (Mac)
  • Next Suggestion – Alt + ] (Windows) / Option + ] (Mac)
  • Previous Suggestion – Alt + [ (Windows) / Option + [ (Mac)
  • Dismiss Suggestion – Esc (Windows) / Esc (Mac)
  • Open Copilot Panel – Ctrl + Enter (Windows) / Cmd + Enter (Mac)

Power Tip: Hold Tab to preview full suggestion before accepting it.

4. Experiment With Different Prompts

Don’t settle for the first suggestion. Try giving Copilot:

  • Function names like: generateInvoicePDF()
  • Comments like: // Merge two sorted arrays
  • Descriptions like: // Validate email format

Copilot might generate multiple versions. Pick or tweak the one that fits best.
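
For example, the // Merge two sorted arrays comment typically produces a suggestion along these lines (a sketch – actual output differs between runs):

// Merge two sorted arrays into one sorted array
function mergeSortedArrays(a, b) {
  const result = [];
  let i = 0;
  let j = 0;
  // Walk both arrays, always taking the smaller head element
  while (i < a.length && j < b.length) {
    if (a[i] <= b[j]) {
      result.push(a[i++]);
    } else {
      result.push(b[j++]);
    }
  }
  // Append whatever remains in either array
  return result.concat(a.slice(i), b.slice(j));
}

// mergeSortedArrays([1, 3, 5], [2, 4, 6]) -> [1, 2, 3, 4, 5, 6]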

5. Review & Refactor – Always

Copilot is smart, but not perfect.

  • Always read the output. Don’t blindly accept.
  • Add your own edge case handling and error checks.
  • Use tools like ESLint or TypeScript for safety.

Think of Copilot as your fast-thinking intern. You still need to double-check their work.

6. Use It Across File Types

Copilot isn’t just for JS or Python. Try it in:

  • HTML/CSS → Suggest complete sections
  • SQL → Generate queries from comments
  • Markdown → Draft docs and README files
  • Dockerfiles, .env, YAML, Regex patterns

Write a comment like # Dockerfile for Node.js app – and watch the magic.
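
As an illustration, that comment often leads Copilot to propose something close to the following (a sketch, assuming a typical Node.js app with a package.json and an index.js entry point – adjust to your project):

# Dockerfile for Node.js app
FROM node:20-alpine
WORKDIR /app

# Install dependencies first so Docker can cache this layer
COPY package*.json ./
RUN npm ci --omit=dev

# Copy the application source and define how to start it
COPY . .
EXPOSE 3000
CMD ["node", "index.js"]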

7. Pair It With Unit Tests

Use Copilot to write your test cases too:

// Test case for addTwoNumbers function
describe('addTwoNumbers', () => {

It will generate a full Jest test block. Use this to write tests faster – especially for legacy code.
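
Such a generated block usually ends up looking something like this (a sketch, assuming an addTwoNumbers(a, b) function is exported from ../addTwoNumbers – adjust the import path to your project):

// Test case for addTwoNumbers function
const addTwoNumbers = require('../addTwoNumbers');

describe('addTwoNumbers', () => {
  test('adds two positive numbers', () => {
    expect(addTwoNumbers(2, 3)).toBe(5);
  });

  test('adds a positive and a negative number', () => {
    expect(addTwoNumbers(5, -2)).toBe(3);
  });

  test('adds zeros', () => {
    expect(addTwoNumbers(0, 0)).toBe(0);
  });
});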

8. Learn From Copilot (Not Just Use It)

Treat Copilot suggestions as learning opportunities:

  • Ask: “Why did it suggest that?”
  • Compare with your original approach
  • Check docs or MDN if you see unfamiliar code

It’s like having a senior dev whispering best practices in your ear.

9. Use Copilot Chat (If Available)

If you have access to GitHub Copilot Chat, try it. Ask questions like:

  • What does this error mean?
  • Explain this function
  • Suggest improvements for this code

It works like a Stack Overflow built into your IDE.

Quick Recap

  • Write clear comments – Better suggestions
  • Break logic into chunks – Modular, reusable code
  • Use shortcuts – Stay in flow
  • Cycle suggestions – Explore better options
  • Review output – Avoid bugs
  • Test case generation – Faster TDD
  • Learn as you go – Level up coding skills

Final Thoughts: Practice With Purpose

To truly master Copilot:

  • Build small projects and let Copilot help
  • Refactor old code using Copilot suggestions
  • Try documenting your code with its help

You’ll slowly build trust – and skill.

Using GitHub Copilot in VS Code
https://blogs.perficient.com/2025/08/04/using-github-copilot-in-vs-code/ (Mon, 04 Aug 2025)

Let’s be honest – coding isn’t always easy. Some days, you’re laser-focused, knocking out feature after feature. Other days, you stare at your screen, wondering,
“What’s the fastest way to write this function?”
“Is there a cleaner way to loop through this data?”

That’s where GitHub Copilot comes in.

If you haven’t tried it yet, you’re seriously missing out on one of the biggest productivity boosters available to developers today. In this blog, I’ll walk you through how to use GitHub Copilot with Visual Studio Code (VS Code), share my personal experience, and help you decide if it’s worth adding to your workflow.

What is GitHub Copilot?

Think of GitHub Copilot as your AI pair programmer.
It’s trained on billions of lines of public code from GitHub repositories and can:

  • Suggest whole lines of code or entire functions
  • Autocomplete loops, conditions, or boilerplate code
  • Help you learn new frameworks or syntaxes on the fly

It’s like having a coding buddy that never sleeps, doesn’t get tired, and is always ready to assist.

Setting Up Copilot in VS Code

Getting started is easy. Here’s a step-by-step guide:

Step 1: Install Visual Studio Code

If you don’t have VS Code installed yet, download it from the official site: https://code.visualstudio.com/

Step 2: Install the GitHub Copilot Extension

  • Open VS Code
  • Go to the Extensions tab (Ctrl+Shift+X)
  • Search for GitHub Copilot
  • Click Install

Or search for the GitHub Copilot extension directly on the Visual Studio Code Marketplace.

Step 3: Sign in with GitHub

After installing, you’ll be prompted to sign in using your GitHub account.

Note: GitHub Copilot is a paid service (currently), but there’s usually a free trial to test it out.

How Does Copilot Work?

Once set up, Copilot starts making suggestions as you code. It’s kind of magical.

Here’s how it typically works:

  • Type a comment describing what you want
    • Example:
// Function to reverse a string

Copilot will automatically generate the function for you (see the sketch after this list)!

  • Write part of the code, and Copilot completes the rest
    • Start writing a “for” loop or an API call, and Copilot will suggest the following lines.
  • Cycle through suggestions
    • Press Tab to accept a suggestion, or use Alt + [ / Alt + ] to browse different options.
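
To make the first pattern concrete, the // Function to reverse a string comment above usually yields a suggestion along these lines (a sketch – Copilot’s exact output differs from run to run):

// Function to reverse a string
function reverseString(str) {
  // Split into characters, reverse the order, and join back together
  return str.split('').reverse().join('');
}

// reverseString('hello') -> 'olleh'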

Real-Life Use Cases

Here’s how I personally use Copilot in my day-to-day coding:

  • Boilerplate Code – Saves time writing repetitive patterns
  • API Calls – Auto-completes fetch or axios calls quickly
  • Learning New Syntax – Helps with unfamiliar frameworks like Rust or Go
  • Unit Tests – Suggests test cases faster than starting from scratch
  • Regular Expressions – Generates regex patterns (saves Googling!)

Tips to Get the Most Out of Copilot

  1. Write clear comments:
    • Copilot works better when you describe what you want.
  2. Don’t blindly trust the output:
    • It’s smart, but not always correct.
    • Review the suggestions carefully, especially for security-sensitive code.
  3. Pair it with documentation:
    • Use Copilot for assistance, but keep the official docs open.
    • Copilot is great, but it doesn’t replace your understanding of the framework.
  4. Use Copilot Labs (Optional):
    • If you want more experimental features like code explanation or refactoring suggestions, try Copilot Labs.

Is Copilot Replacing Developers?

Short answer? No.

Copilot is a tool, not a replacement for developers.
It speeds up the boring parts, but:

  • Critical thinking? Still you.
  • Architecture decisions? Still you.
  • Debugging complex issues? Yes, still you.

Think of Copilot as an assistant, not a boss. It helps you code faster, but you’re still in charge of the logic and creativity.

Pros and Cons of Copilot

Pros

  • Saves time on repetitive coding tasks
  • Reduces context-switching to StackOverflow or Google
  • Helps you learn new syntaxes quickly
  • Available right inside VS Code

Cons

  • Requires an active subscription after the free trial
  • Sometimes generates incorrect or outdated code
  • Can make you over-rely on suggestions if you’re not careful

Final Thoughts: Is Copilot Worth It?

If you’re someone who:

  • Codes daily
  • Works across multiple languages or frameworks
  • Wants to focus on the “what” and less on the “how”

Then GitHub Copilot is absolutely worth trying out.

Personally, I’ve found it to be a game-changer for productivity. It doesn’t write all my code, but it takes away the mental fatigue of boilerplate so I can focus on solving real problems.


Optimize Sitecore Docker Instance: Increase Memory Limits
https://blogs.perficient.com/2025/07/28/optimize-sitecore-docker-instance-increase-memory/ (Mon, 28 Jul 2025)

Running a Sitecore Docker instance is a game-changer for developers. It streamlines deployments, accelerates local setup, and ensures consistency across environments. However, performance can suffer – even on high-end laptops – if Docker resources aren’t properly optimized, especially after a hardware upgrade.

I recently faced this exact issue. My Sitecore XP0 instance, running on Docker, became noticeably sluggish after I upgraded my laptop. Pages loaded slowly, publishing dragged on forever, and SQL queries timed out.

The good news? The fix was surprisingly simple: allocate more memory to the right containers using docker-compose.override.yml.

What Went Wrong?

After the upgrade, I noticed:

  • The Content Management (CM) UI was lagging.
  • Publishing and indexing took ages.
  • SQL queries and Sitecore services kept timing out.

At first, this was puzzling because my new laptop had better specs. However, I then realized that Docker was still running with outdated memory limits for containers. By default, these limits are often too low for heavy workloads, such as Sitecore.

Root Cause

Docker containers run with memory constraints either from:

  • docker-compose.override.yml
  • Docker Desktop global settings

When memory is too low, Sitecore roles such as CM and MSSQL can’t perform optimally. They need significant RAM for caching, pipelines, and database operations.

The Solution: Increase Memory in docker-compose.override.yml

To fix the issue, I updated the memory allocation for key containers (mssql and cm) in the docker-compose.override.yml file.

Here’s what I did:

Before

mssql:
  mem_limit: 2G

After

mssql:
  mem_limit: 4GB

cm:
  image: ${REGISTRY}${COMPOSE_PROJECT_NAME}-xp0-cm:${VERSION:-latest}
  build:
    context: ./build/cm
    args:
      BASE_IMAGE: ${SITECORE_DOCKER_REGISTRY}sitecore-xp0-cm:${SITECORE_VERSION}
      SPE_IMAGE: ${SITECORE_MODULE_REGISTRY}sitecore-spe-assets:${SPE_VERSION}
      SXA_IMAGE: ${SITECORE_MODULE_REGISTRY}sitecore-sxa-xp1-assets:${SXA_VERSION}
      TOOLING_IMAGE: ${SITECORE_TOOLS_REGISTRY}sitecore-docker-tools-assets:${TOOLS_VERSION}
      SOLUTION_IMAGE: ${REGISTRY}${COMPOSE_PROJECT_NAME}-solution:${VERSION:-latest}
      HORIZON_RESOURCES_IMAGE: ${SITECORE_MODULE_REGISTRY}horizon-integration-xp0-assets:${HORIZON_ASSET_VERSION}
  depends_on:
    - solution
  mem_limit: 8GB
  volumes:
    - ${LOCAL_DEPLOY_PATH}\platform:C:\deploy
    - ${LOCAL_DATA_PATH}\cm:C:\inetpub\wwwroot\App_Data\logs
    - ${HOST_LICENSE_FOLDER}:c:\license
    - ${LOCAL_ITEM_PATH}:c:\items-mounted

How to Apply the Changes

  1. Open docker-compose.override.yml.
  2. Locate the mssql and cm services.
  3. Update or add the mem_limit property:
    • mssql → 4GB
    • cm → 8GB
  4. Rebuild containers:
    
    docker compose down
    docker compose up --build -d
  5. Check the updated limits:
     docker stats

Impact After Change

After increasing memory:

  • CM dashboard loaded significantly faster.
  • Publishing operations completed in less time.
  • SQL queries executed smoothly without timeouts.

Why It Works

Sitecore roles (especially CM) and SQL Server are memory-hungry. If Docker allocates too little memory:

  • Containers start swapping.
  • Performance tanks.
  • Operations fail under load.

By increasing memory:

  • CM handles ASP.NET, Sitecore pipelines, and caching more efficiently.
  • SQL Server caches queries better and reduces disk I/O.

Pro Tips

  • Ensure Docker Desktop or Docker Engine is configured with enough memory globally.
  • Avoid setting memory limits too high if your laptop has limited RAM.
  • If using multiple Sitecore roles, adjust memory allocation proportionally.
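
For example, a proportional split might look roughly like this (a sketch only, not part of the original setup – the solr service exists in the default XP0 compose files, but the exact service names and sensible limits depend on your topology and how much RAM your machine has):

cm:
  mem_limit: 8GB

mssql:
  mem_limit: 4GB

solr:
  mem_limit: 2GB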

Final Thoughts

A simple tweak in docker-compose.override.yml can drastically improve your Sitecore Docker instance performance. If your Sitecore CM is sluggish or SQL queries are slow, try increasing the memory limit for critical containers.

AI in Sitecore: How Artificial Intelligence is Shaping Modern Digital Experiences
https://blogs.perficient.com/2025/07/24/ai-in-sitecore-digital-experience/ (Thu, 24 Jul 2025)

The world of digital experiences is evolving more quickly than ever before, and let’s be honest, artificial intelligence (AI) is more than just a trendy term these days. It’s becoming a business necessity.

AI is no longer a “nice to have” for companies that use Sitecore as their Digital Experience Platform (DXP). It’s turning into the difference between falling behind and meeting customer expectations.

In this blog, we’ll explore:

  • How AI is transforming Sitecore
  • Current AI tools and integrations
  • Real-world use cases
  • What’s next for Sitecore and AI

Let’s get started!

Why AI Matters in the Sitecore Digital Experience Platform

Sitecore has long been known for managing content, personalization, commerce, and customer data at scale.

However, as digital complexity grows, traditional rule-based systems start to struggle, especially when:

  • Audience segments are too granular
  • Data is too vast to process manually
  • Real-time personalization is required

This is where AI makes a real difference.

Fact Check:

Gartner’s 2024 Magic Quadrant for DXP reports:

“AI-based personalization increases customer engagement compared to rule-based systems.”

Source: Gartner DXP Report 2024

Current AI Capabilities in Sitecore

Sitecore Stream: Enterprise-Grade AI Across the Stack

Sitecore Stream, launched in 2024–2025, is Sitecore’s newest AI-powered platform.

It brings smart, brand-aware copilots, automated workflows, and secure content management – all designed to help teams work faster and deliver better digital experiences.

Key Capabilities:

  • Brand-Aware AI: Upload brand guidelines and style references so AI generates only on-brand content
  • AI Copilots: Assist in writing, summarizing, ideating content, and setting up campaigns directly inside Experience Hub and Content Hub
  • Agentic Workflows: Multi-step campaign orchestration with autonomous task execution (e.g., campaign brief → draft → assign → publish)
  • Grounded AI via RAG (Retrieval-Augmented Generation) on Azure OpenAI for enterprise-grade security and control

All thanks to Mahima Patel for laying out this detailed overview in her excellent blog post “Why AI-Led Experiences Are the Future – And How Sitecore Stream Delivers Them”.

Sitecore Stream brings AI capabilities to Sitecore products, transforming how marketers work in today’s fast-paced digital landscape.

Source: Sitecore Stream

Sitecore Personalize: AI-Powered Real-Time Personalization

Sitecore Personalize leverages advanced AI and machine learning to deliver real-time individualized experiences across channels.

Key Features:

  • AI-driven experimentation (A/B & multivariate testing)
  • Predictive personalization using behavioral data
  • Real-time decisioning & context-aware content delivery
  • Built-in Code Assistant (2025): Helps non-technical users write JavaScript/SQL snippets for:
    • Personalization conditions
    • Session traits
    • Audience exports
    • Experiment logic

Sitecore Personalize uses AI/ML models to predict visitor actions based on historical data and real-time interactions.

Sources: Sitecore Documentation – Personalize AI Models & Dylan Young Blog – First Look at Sitecore Personalize Code Assistant (2025)

Sitecore Content Hub: AI-Generated Content

Sitecore Content Hub integrates directly with OpenAI (ChatGPT) and other generative AI providers.

This streamlines content creation, editing, and distribution workflows.

Use Cases:

  • Automated content drafts for blogs, emails, and campaigns
  • Product descriptions and metadata generation
  • SEO-focused content suggestions
  • Social media copywriting
  • Translation assistant (2025): Auto-translates components/pages using AI

In 2023, Sitecore announced direct integration with generative AI for Content Hub.

Sources: Sitecore Press Release – AI & Content Hub Integration (2023)

AI-Powered Search & Recommendations

Sitecore partners with Coveo, SearchStax, and Azure Cognitive Search to offer intelligent, personalized discovery experiences across websites and commerce platforms.

  • Semantic search using NLP
  • AI-powered relevance tuning based on user behavior
  • Personalized recommendations for content, products, and CTAs
  • Predictive search and autocomplete

Coveo for Sitecore uses machine learning to adjust search relevance automatically based on user behavior.

Sources: Coveo for Sitecore Documentation – Get started with Coveo Machine Learning in Sitecore

AI in Sitecore XM Cloud: The SaaS Evolution (2025)

Sitecore XM Cloud is evolving fast and AI is at the heart of it.

Whether you’re building pages or analyzing performance, AI helps you work smarter, not harder.

  • Suggest Page Layouts: Get smart layout ideas while editing pages, based on your goals.
  • Improve Components: AI recommends tweaks to improve SEO, conversions, or accessibility.
  • Predict What Works: Built-in insights tell you how your content is performing – and what to test next.
  • Help Developers Too: From faster component setup to AI-generated code and test helpers, devs get a boost too.

These features are part of Sitecore’s ongoing investment in AI, highlighted at Sitecore Symposium 2024 and expanded throughout 2025 with Sitecore Stream.

Sources: Sitecore Symposium Keynote 2024 – Roadmap XM Cloud Developer Experience

Generative AI for Sitecore Development Teams

AI isn’t just for marketers – it’s transforming Sitecore development workflows too.

Developer Use Cases:

  • AI-assisted code generation & scaffolding
  • Automated testing with Copilot & ChatGPT plugins
  • AI-based Sitecore log summarization

Challenges & Considerations

AI in Sitecore brings opportunities – but also some challenges:

  • Data Privacy: GDPR, CCPA compliance is crucial
  • Bias in AI Models: Requires careful monitoring
  • Integration Complexity: AI tools need thoughtful orchestration
  • Vendor Lock-In: Cloud service dependencies (OpenAI, Azure, Coveo)

What’s Next? The Future of AI in Sitecore

Here’s what’s coming in the next wave of Sitecore AI innovation:

  • AI-based Content Performance Prediction
  • AI-driven Brand Compliance & Tone Checking
  • Conversational Interfaces for Commerce (ChatGPT Plugins)
  • Hyper-Personalization via AI CDP (Customer Data Platform)

Conclusion

AI is no longer a “nice-to-have” in Sitecore – it’s essential.

From content creation to personalization and commerce optimization, AI is enhancing every layer of the Sitecore ecosystem.

If you’re in Sitecore development, marketing, or digital strategy, now is the time to embrace AI to future-proof your digital experiences.


Reclaim Space: Delete Docker Orphan Layers
https://blogs.perficient.com/2025/07/18/reclaim-space-delete-docker-orphan-layers/ (Fri, 18 Jul 2025)

If you’re using Sitecore Docker containers on Windows, you’ve probably noticed your disk space mysteriously shrinking over time. I recently encountered this issue myself and was surprised to discover the culprit: orphaned Docker layers – leftover chunks of data that no longer belong to any container or image.

My Setup

This happened while I was working with Sitecore XP 10.2 in a Dockerized environment. After several rounds of running ‘docker-compose up’ and rebuilding custom images, Docker started hoarding storage, and the usual ‘docker system prune’ didn’t fully resolve the issue.

That’s when I stumbled upon a great blog post by Vikrant Punwatkar: Regain disk space occupied by Docker

Inspired by his approach, I automated the cleanup process with PowerShell, and it worked like a charm. Let me walk you through it.

So, What Are Orphan Docker Layers?

Docker uses layers to build and manage images. Over time, when images are rebuilt or containers removed, some layers are left behind. These “orphan” layers hang around in your system, specifically under:

C:\ProgramData\Docker\windowsfilter

They’re not in use, but they still consume gigabytes of space. If you’re working with large containers, such as Sitecore’s, these can add up quickly.

Step-by-Step Cleanup with PowerShell

I broke the cleanup process into two simple scripts:

  1. Identify and optionally rename orphan layers
  2. Delete the renamed layers after fixing permissions

Script 1: Find-OrphanDockerLayers.ps1

This script compares the layers used by active images and containers against what’s actually on your disk. Anything extra is flagged as an orphan. You can choose to rename those orphan folders (we add -removing at the end) for safe deletion.

What it does

  • Scans the image and container layers
  • Compares them with the actual Docker filesystem folders
  • Identifies unused (orphan) layers
  • Calculates their size
  • Renames them safely (optional)

A. Download PowerShell script and execute (as Administrator) with the parameter -RenameOrphanLayers

B. To Run:

.\Find-OrphanDockerLayers.ps1 -RenameOrphanLayers

C. Sample Output:

WARNING: YOUR-PC - Found orphan layer: C:\ProgramData\Docker\windowsfilter\abc123 with size: 500 MB
...
YOUR-PC - Layers on disk: 130
YOUR-PC - Image layers: 90
YOUR-PC - Container layers: 15
WARNING: YOUR-PC - Found 25 orphan layers with total size 4.8 GB

This provides a clear picture of the space you can recover.

Delete After Stopping Docker

Stop Docker completely first using the below PowerShell command, or you can manually stop the Docker services:

Stop-Service docker

Script 2: Delete-OrphanDockerLayers.ps1

Once you’ve renamed the orphan layers, this second script deletes them safely. It first fixes folder permissions using takeown and icacls, which are crucial for system directories like these.
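
Under the hood, the per-folder cleanup described above boils down to commands along these lines (a sketch, not the full script – it uses the example layer abc123 renamed to abc123-removing; the real script loops over every *-removing folder):

# Take ownership, grant Administrators full control, then delete the folder
takeown /F "C:\ProgramData\Docker\windowsfilter\abc123-removing" /R /D Y | Out-Null
icacls "C:\ProgramData\Docker\windowsfilter\abc123-removing" /grant Administrators:F /T /C | Out-Null
Remove-Item "C:\ProgramData\Docker\windowsfilter\abc123-removing" -Recurse -Force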

A. Download the PowerShell script and execute (as Administrator)

B. To Run:

.\Delete-OrphanDockerLayers.ps1

C. Sample Output:

Fixing permissions and deleting: C:\ProgramData\Docker\windowsfilter\abc123-removing
...

Simple and effective — no manual folder browsing or permission headaches.

End Result: A Cleaner, Lighter Docker

After running these scripts, I was able to recover multiple gigabytes of storage. You’ll likely see similar benefits from this cleanup if you frequently work with:

  • Sitecore custom Docker images
  • Containerized development setups
  • Large volume-mounted projects

Pro Tips

  • Run PowerShell as Administrator – especially for the delete script.
  • Don’t delete folders manually – rename first to ensure safety.
  • Use -RenameOrphanLayers only when you’re ready to clean up. Otherwise, run the script without it for a dry run.
  • Consider scheduling this monthly if you’re actively building and tearing down containers.

Credit Where It’s Due

Huge thanks to Vikrant Punwatkar for the original idea and guidance. His blog post was the foundation for this automated approach.

Check out his post here: Regain disk space occupied by Docker

Final Thoughts

If your Docker setup is bloated and space is mysteriously disappearing, try this approach. It’s quick, safe, and makes a noticeable difference – especially on Windows, where Docker’s cleanup isn’t always as aggressive as we’d like.

Have you tried it? Got a different solution? Feel free to share your thoughts or suggestions for improvement.

Setting Up and Customizing Experience Cloud
https://blogs.perficient.com/2025/01/16/experience-cloud-implementation-guide/ (Fri, 17 Jan 2025)

Access Salesforce Setup

Before you begin working with Experience Cloud, ensuring that the right people have access to the right information is critical.

  • Log in to Salesforce with administrator credentials.
  • Open the Salesforce App Launcher and search for “Experience Cloud.”
  • Click on Experience Cloud to open the setup page and begin managing your communities.

Create a New Cloud Site


Once you’ve accessed the Experience Cloud setup area, it’s time to create a new site for your community. In the Setup menu, under “All Sites,” click on New Site.

Choose the appropriate template for your needs:

  • Customer Account Portal: For self-service customer portals.
  • Partner Central: For partner collaboration portals.
  • Build Your Own: For a fully customized experience.

Name your site (e.g., “Customer Support Portal”) and assign a URL (e.g., http://www.yourcompany.force.com/portal).

Click Create to build your site. Salesforce will automatically generate a default home page and set up key features.



Configure Your Site’s Branding and Theme

Your site should reflect your company’s branding and aesthetic. Salesforce offers predefined themes and a variety of customization options. Navigate to your newly created site and access the Theme settings.

Choose from predefined themes or customize:

  • Upload your company logo for the header.
  • Adjust the color scheme to match your branding.

Preview your site’s look on both desktop and mobile to ensure it’s responsive.

Customize the Site with Experience Builder

Salesforce’s Experience Builder is a drag-and-drop tool that allows you to personalize your site’s layout and content.

From the “All Sites” page, click the Experience Builder link next to your site.

Customize your site by:

  • Adding essential pages (e.g., Home, Knowledge Base, FAQs).
  • Incorporating prebuilt components like text, images, and videos.
  • Organizing the site’s navigation menu to optimize user experience.
  • Dragging and dropping custom components to enhance functionality.

Preview changes in real-time, with automatic saving.

Define User Profiles and Permissions

Managing user access is crucial to ensuring the correct level of permissions for different users on your site.

  • Go to Settings > Sharing Settings to manage user access.
  • Set up profile-based access (e.g., View, Edit, Admin permissions).
  • For external users (e.g., customers, partners), configure Guest User permissions to allow access to public content like knowledge articles.

You can also create Permission Sets to define access to specific pages or data.

Set Up Site Navigation and Content

Now that the branding and design are in place, it’s time to focus on the site’s structure and content.

Access the Navigation Menu in the Experience Builder.

Add key sections such as:

  • Home: The main landing page.
  • Knowledge Base: A page for FAQs or help articles.
  • Contact Support: A form for user inquiries.
  • Community Forums: For partner or customer collaboration.
  • Add content blocks to each section, including text, images, videos, or knowledge articles.

Test the Site

Before launching your site, testing is critical to ensure everything works as expected.

  • Test the user registration and login processes.
  • Confirm that users with different profiles have access to the correct pages.
  • Preview your site on various devices (desktop, tablet, mobile) to ensure responsiveness.
  • Test automated workflows (e.g., case creation, knowledge article submission) to confirm they function properly.
  • Collect feedback from a small group of users, either internal or external, to identify any potential issues.

Go Live and Monitor Performance

Once testing is complete, it’s time to officially launch your site.

  • In Site Settings, enable the site for public access.
  • Announce the launch and provide users with instructions on how to log in and navigate the portal.
  • Monitor user activity and site performance using Salesforce’s built-in reporting tools.
  • Track engagement metrics such as page views, activity logs, and user feedback to continuously improve your site.

 

Visit these articles below:

Marketing Cloud: Introduction to Journey Builder

Salesforce Documentation : Experience Cloud

Experience cloud-implementation-guide

Seamless GitHub Integration with Azure Storage for Enhanced Cloud File Management
https://blogs.perficient.com/2024/08/05/seamless-github-integration-azure-storage-enhanced-cloud-file-management/ (Mon, 05 Aug 2024)

In the modern digital landscape, efficient collaboration and streamlined workflows are proven elements of successful project management. Integrating GitHub repositories with Azure Storage proves to be a robust solution for the management of project files in the cloud. Whether you’re a developer, a project manager, or a technology enthusiast, understanding how to push files from a GitHub repository to an Azure Storage container can significantly enhance your productivity and simplify your development process. In this comprehensive guide, we’ll explore the steps required to achieve this seamless integration.
You must be wondering why, although the files already exist in the repository, we are sending them from a GitHub repository to an Azure Storage container. While GitHub repositories are excellent for version control and collaboration, they might not be optimized for certain types of file storage and access patterns. Comparatively, Azure Storage provides a scalable, high-performance solution specifically designed for storing various types of data, including large files, binaries, and media assets.

By transferring files from a GitHub repository to an Azure Storage container, you can leverage Azure’s robust infrastructure to enhance scalability and optimize performance, especially in the below scenarios:      

  • Large File Storage
  • High Availability and Redundancy
  • Access Control and Security
  • Performance Optimization

Understanding the Solution

Before we dive into the practical steps, let’s gain a clear understanding of the solution we’re implementing:

  1. GitHub Repository: This is where your project’s source code resides. By leveraging version control systems like Git and hosting platforms like GitHub, you can collaborate with team members, track changes, and maintain a centralized repository of your project files.
  2. Azure Storage: Azure Storage provides a scalable, secure, and highly available cloud storage solution. By creating a storage account and defining containers within it, you can store a variety of data types, including documents, images, videos, and more.
  3. Integration: We’ll establish a workflow to automatically push files from your GitHub repository to an Azure Storage container whenever changes are made. This integration automates deployment, ensuring synchronization between your Azure Storage container and GitHub repository. This not only unlocks new possibilities for efficient cloud-based file management but also streamlines the development process.

Prerequisites

  1. Basic Knowledge of Git and GitHub: Understanding the fundamentals of version control systems like Git and how to use GitHub for hosting repositories is essential. Users should be familiar with concepts such as commits, branches, and pull requests.

  2. Azure Account: Readers should have access to an Azure account to create a storage account and containers. If they don’t have an account, they’ll need to sign up for one.

  3. Azure Portal Access: Familiarity with navigating the Azure portal is helpful for creating and managing Azure resources, including storage accounts.

  4. GitHub Repository, Access to GitHub Repository Settings, and GitHub Actions Knowledge: Readers should have a GitHub account with a repository set up for deploying files to Azure Storage. Understanding how to access and modify repository settings, including adding secrets, is crucial for configuring the integration. Additionally, familiarity with GitHub Actions and creating workflows is essential for setting up the deployment pipeline efficiently.

  5. Azure CLI (Command-Line Interface) Installation: Readers should have the Azure CLI installed on their local machine or have access to a terminal where they can run Azure CLI commands. Instructions for installing the Azure CLI should be provided or linked to.

  6. Understanding of Deployment Pipelines: A general understanding of deployment pipelines and continuous integration/continuous deployment (CI/CD) concepts will help readers grasp the purpose and functionality of the integration.

  7. Environment Setup: Depending on the reader’s development environment (Windows, macOS, Linux), they may need to make adjustments to the provided instructions. For example, installing and configuring Azure CLI might differ slightly across different operating systems.

Let’s Start from Scratch and See Step-By-Step Process to Integrate GitHub Repositories with Azure Storage

Step 1: Set Up Azure Storage Account

  1. Sign in to Azure Portal: If you don’t have an Azure account, you’ll need to create one. Once you’re signed in, navigate to the Azure portal. – “portal.azure.com/#home”
         a. Create a Storage Account: In the Azure portal, click on “Create a resource” and search for “Storage account”. Click on “Storage account – blob, file, table, queue” from the search results. Then, click “Create”.

  2. Configure Storage Account Settings: Provide the required details such as subscription, resource group, storage account name, location, and performance tier. For this guide, choose the appropriate options based on your preferences and requirements.

  3. Retrieve Access Keys: Once the storage account is created, navigate to it in the Azure portal. Go to “Settings” > “Access keys” to retrieve the access keys. You’ll need these keys to authenticate when accessing your storage account programmatically.
    Note: Click on the show button to copy the Access key.


Step 2: Set Up GitHub Repository

  1. Create a GitHub Account: If you don’t have a GitHub account, sign up for one at “github.com”

  2. Create a New Repository: Once logged in, click on the “+” icon in the top-right corner and select “New repository”. Give your repository a name, description, and choose whether it should be public or private. Click “Create repository”.

  3. Clone the Repository: After creating the repository, clone it to your local machine using Git. You can do this by running the following command in your terminal or command prompt:
    Command:

    git clone https://github.com/your-username/your-repository.git

Note: Replace ‘your-username’ with your GitHub username and ‘your-repository’ with the name of your repository.


Step 3: Push Files to GitHub Repository

  1. Add Files to Your Local Repository: Place the files you want to push to Azure Storage in your machine’s local repository directory.

  2. Stage and Commit Changes: In your terminal or command prompt, navigate to the local repository directory and stage the changes by running:
        Command:

    git add .

     Then, commit the changes with a meaningful commit message:
       Command:

    git commit -m "Add files to be pushed to Azure Storage"
  3. Push Changes to GitHub: Finally, push the committed changes to your GitHub repository by running:
         Command: 

    git push origin main

      Note: Replace `main` with the name of your branch if it’s different.

Verify Files in GitHub: Check in your GitHub account that the files have been uploaded.


Step 4: Push Files from GitHub to Azure Storage

  1. Install Azure CLI: If you haven’t already, install the Azure CLI on your local machine.
      Note: You can find installation instructions –  

    https://docs.microsoft.com/en-us/cli/azure/install-azure-cli
  2. Authenticate with Azure CLI: Open your terminal or command prompt and login to your Azure account using the Azure CLI:
     Command:  

    az login

    Follow the prompts to complete the login process.


  3. Upload Files to Azure Storage: Use the Azure CLI to upload the files from your GitHub repository to your Azure Storage container:
       Command:

    az storage blob upload-batch --source <local-path> --destination <container-name> --account-name <storage-account-name> --account-key <storage-account-key>

Note: Replace `<storage-account-name>` and `<storage-account-key>` with the name and access key of your Azure Storage account, respectively. Replace `<container-name>` and `<local-path>` with your container name and the local path to your repository directory, respectively.

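
If you would rather not run the upload by hand on every change, the same az command can run in a GitHub Actions workflow that triggers on each push. Below is one possible sketch (not part of the original steps): it assumes you have added your storage account name, container name, and access key as repository secrets named AZURE_STORAGE_ACCOUNT, AZURE_STORAGE_CONTAINER, and AZURE_STORAGE_KEY, and it relies on the Azure CLI that comes preinstalled on GitHub-hosted Ubuntu runners.

# .github/workflows/deploy-to-azure-storage.yml
name: Deploy files to Azure Storage

on:
  push:
    branches: [ main ]

jobs:
  upload:
    runs-on: ubuntu-latest
    steps:
      - name: Check out the repository
        uses: actions/checkout@v4

      - name: Upload files to the storage container
        run: |
          az storage blob upload-batch \
            --source . \
            --destination "${{ secrets.AZURE_STORAGE_CONTAINER }}" \
            --account-name "${{ secrets.AZURE_STORAGE_ACCOUNT }}" \
            --account-key "${{ secrets.AZURE_STORAGE_KEY }}"

With a workflow like this in place, the verification in Step 5 applies to every push rather than only the manual upload.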

Step 5: Verify Deployment

Once the workflow is complete, navigate to your Azure Storage container. You should see the files from your GitHub repository synchronized to the container. Verify the integrity of the files and ensure that the deployment meets your expectations.
Conclusion

By following these steps, you’ve successfully set up a seamless integration between your GitHub repository and Azure Storage container. This integration automates pushing files from your repository to the cloud, enabling efficient collaboration and simplified project management. Embrace the power of automation, leverage the capabilities of GitHub Actions and Azure Storage, and unlock new possibilities for your development workflow. Happy coding

The Ultimate Guide for Cutting AWS Costs
https://blogs.perficient.com/2024/07/30/the-ultimate-guide-for-cutting-aws-costs/ (Wed, 31 Jul 2024)

The AWS cloud has become a core part of the fast-evolving infrastructure that today’s IT businesses need. Many clients want to move to the cloud for its higher availability and durability, but existing cloud customers are constantly looking for ways to make a real dent in their monthly or yearly Amazon Web Services bill.
In this article, we will look at the AWS tools and features that can help you reduce that bill.

1) AWS Cost allocation Tags

Using AWS cost allocation tags, we can track which resources belong together and enable detailed cost reporting. Activated tags show up as columns in the billing reports.

  • AWS-generated cost allocation tags: applied automatically to resources you create, even if you have not tagged them. They start with the prefix “aws:” (e.g., aws:createdBy) and are not applied to resources created before activation.
  • User-defined cost allocation tags: defined by you, and they start with the prefix “user:”.

Cost allocation tags only show up in the Billing console. It can generally take up to 24 hours for newly activated tags to appear in reports.

To Activate Cost Allocation Tags:

  1. Go to the AWS Billing and Cost Management console.
  2. Select “Cost Allocation Tags” under “Billing preferences.”
  3. Check and activate the tags you want to use for cost allocation.


2) Trusted Advisor

Trusted Advisor provides a high-level assessment of your AWS account. It inspects your resources and proposes improvements across areas such as cost optimization, resilience, reliability, performance, security, and operational excellence. The core checks and basic recommendations are available to all AWS customers.

To benefit from the full set of checks, you need to be on a Business or Enterprise support plan. You can also set up automatic reports and alerts for specific checks to stay informed about your AWS environment’s health and its compliance with best practices.

From AWS management console, we can find Trusted advisor under support section.


In Trusted Advisor, the service limit checks monitor your usage against account limits and surface recommendations. To increase a limit, you can open a case in the AWS Support Center or request an increase through the AWS Service Quotas service.

3) AWS Service Quotas

AWS Service Quotas, or limits, define the maximum number of resources or operations allowed within an AWS account. These quotas help ensure the stability and security of the AWS environment while providing predictable performance. AWS automatically sets these quotas, but many can be adjusted upon request.

We can set up CloudWatch to monitor usage against quotas and create alarms that alert you when you are nearing a quota limit.

Managing Service Quotas

  • AWS Management Console: Use the Service Quotas dashboard to view and manage your service quotas.


  • AWS CLI: Use commands like aws service-quotas list-service-quotas to list quotas.
  • AWS SDKs: Use AWS SDKs to programmatically retrieve quota information.
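
For example, from the CLI (a sketch – the quota code below is a placeholder; list the quotas for a service first to find the code you actually need):

# List all quotas for the EC2 service
aws service-quotas list-service-quotas --service-code ec2

# Request an increase for a specific quota (replace the quota code and value)
aws service-quotas request-service-quota-increase \
    --service-code ec2 \
    --quota-code L-XXXXXXXX \
    --desired-value 64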

 

Categories:

Account Quotas: Limits that apply to your entire AWS account.

Service-specific Quotas: Limits that apply to specific services like EC2, S3, RDS, etc.

 

Common AWS Service Quotas

EC2
  • Running On-Demand Instances: varies depending on the type of instance; for example, 20 for general-purpose instances.
  • Spot Instances: there is a cap on how many Spot Instances you can run.
  • Elastic IP Addresses: 5 in each region.

RDS
  • DB Instances: 40 DB instances per account.
  • Storage: 100 TB of storage across all DB instances.
  • Snapshots: 100 manual snapshots per account.

S3
  • Buckets: 100 per account by default (can be increased on request).
  • Object Size: up to 5 TB per object.

4) AWS Saving Plans

With Savings Plans, you commit to a consistent amount of usage (measured in $/hour) for a one- or three-year term in exchange for a flexible pricing model that offers significant savings over On-Demand pricing.

Compute Savings Plans
  • The most flexible and cost-effective option.
  • Apply to any EC2 instance regardless of region, instance family, operating system, or tenancy.
  • Can also be used with AWS Lambda and Fargate.

EC2 Instance Savings Plans
  • Offer the largest savings, up to 72%.
  • Specific to an individual instance family in a chosen region.

Reserved Instances (RIs)

When compared to On-Demand pricing, Reserved Instances offer a substantial reduction of up to 75%. You can reserve capacity for your EC2 instances with them, but they do require a one- or three-year commitment.

Types of Reserved Instances:

Standard: provide the largest discount; ideal for steady-state usage.
Convertible: offer savings while allowing changes to instance family, operating system, and tenancy.

5) S3 – Intelligent Tiering

Amazon S3 Intelligent-Tiering is designed to automatically optimize storage costs as data access patterns change. Without affecting performance or adding operational overhead, it moves objects between the Frequent Access and Infrequent Access tiers based on how often they are accessed.

There are no retrieval fees, regardless of the access tier an object sits in. Instead, a small monthly monitoring and automation charge is applied per object to cover tracking access patterns and moving objects between tiers. Durability and availability are comparable to the other Amazon S3 storage classes.

Enabling S3 Intelligent-Tiering


AWS Management Console: Navigate to the S3 bucket, select the objects, and choose “Change storage class” to move objects to S3 Intelligent-Tiering.

Alternatively, set up a lifecycle rule to transition objects to Intelligent-Tiering.

AWS CLI: use “aws s3 cp” or “aws s3 mv” with the --storage-class INTELLIGENT_TIERING option to upload or move objects into Intelligent-Tiering.
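
A minimal example (the bucket name and local path are placeholders):

# Upload a local folder directly into the Intelligent-Tiering storage class
aws s3 cp ./data s3://my-example-bucket/data --recursive --storage-class INTELLIGENT_TIERING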

6) AWS Budgets

Using AWS Budgets, a cost management tool, you can create personalized spending plans to monitor your AWS expenses, usage, and consumption. With its alerts, you may efficiently manage your AWS expenditure by being informed when you surpass or are expected to surpass your budget limitations.

Custom Budgets – Create budgets based on cost, usage, Reserved Instance (RI) utilization or coverage, and Savings Plans utilization or coverage. Budgets can be established for several timeframes, such as monthly, quarterly, or annual.

Alerts and Notifications – When your budget is exceeded by actual or projected usage, get warnings via email or Amazon SNS. To receive warnings at different stages for the same budget, set up several thresholds.

Creating a Budget:

Open the AWS Budgets Dashboard.

Click on “Create a budget.” Follow the requirements and click on create Budget.
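
If you prefer to script this, the same budget can be created with the AWS CLI (a sketch – the account ID, budget name, and 100 USD limit below are placeholders):

aws budgets create-budget \
    --account-id 123456789012 \
    --budget '{
        "BudgetName": "monthly-cost-budget",
        "BudgetLimit": {"Amount": "100", "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST"
    }'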


 

7) AWS Compute Optimizer

AWS Compute Optimizer helps you right-size your AWS resources, including EC2 instances, Auto Scaling groups, EBS volumes, and Lambda functions. Based on your usage patterns, it recommends changes that improve performance and efficiency and reduce cost.

  • EC2 Instances: recommends the best instance types based on CPU, memory, and network utilization.
  • Auto Scaling Groups: suggests the ideal instance sizes and types for your groups.
  • EBS Volumes: recommends better-suited EBS volume types and configurations.
  • Lambda Functions: suggests optimal memory sizes and concurrency settings.

It integrates with services such as Amazon CloudWatch, AWS Budgets, and AWS Cost Explorer for thorough cost management and monitoring.

Enable AWS Compute Optimizer:

Go to the AWS Compute Optimizer Console.

Click “Get started” and follow the instructions to enable the service.


Example Use Case

EC2 Instance Optimization – Downsize under-utilized EC2 instances to reduce cost, and upgrade over-utilized instances to improve performance.

Auto Scaling Group Optimization – Optimize instance sizes and types within Auto Scaling groups to keep scaling both effective and economical.

Conclusion

We have now covered seven practical ways to reduce your AWS bill. In most of these areas, you can also use CloudWatch alarms to be notified when a threshold is reached, which helps you avoid unnecessary charges and make better use of the resources you already have.

Salesforce Data Cloud – Introduction on Salesforce Data Cloud
https://blogs.perficient.com/2024/07/25/salesforce-data-cloud-introduction-on-salesforce-data-cloud/ (Thu, 25 Jul 2024)

Salesforce Data Cloud

Salesforce Data Cloud allows you to unify all your data on Salesforce without building complex data pipelines. Data Cloud easily takes action on all your data across every Salesforce cloud and enables trusted AI solutions powered by your data.


 

Data Cloud collects all your data from different sources and puts it to work for your customers. Salesforce Data Cloud is embedded in the Einstein 1 platform, which means any external database or warehouse can now drive actions and workflows inside your CRM. Data Cloud is not only about collecting data from different source applications; it’s about bringing those applications together to provide customers with improved experiences and drive growth.


Data Cloud History

Salesforce Data Cloud started in 2020 as Salesforce Customer 360 Audiences. It then went through several stages of innovation: in 2021 it was renamed Salesforce Customer Data Platform, in 2022 it was known as Marketing Cloud Customer Data Platform and then Salesforce Genie, and finally, in 2023, it officially became “Salesforce Data Cloud.”

Advantages of Salesforce Data Cloud

Data Cloud has multiple advantages, such as generating insightful decisions and unlocking actionable insights. Some of the benefits of Salesforce Data Cloud are shown below:

  1. Salesforce Data Enrichment: By using Salesforce Data Cloud, businesses can get their existing data in an updated, accurate, and more comprehensive format that can be useful for making business decisions.
  2. Third-Party Data Integration: Salesforce Data Cloud seamlessly partners with a wide range of third-party data providers, which means businesses can use industry-specific data sets, market information, and other external data sources.
  3. Data Security: Robust security mechanisms and encryption are used in Data Cloud to protect shared and stored data.
  4. Customer Targeting: Salesforce Data Cloud provides meaningful information to users, which helps businesses target the correct audience.
  5. Scalability: Salesforce Data Cloud can fit the needs of all sorts of businesses. It’s flexible enough to provide the right information to help a company grow.

 

Salesforce Data Cloud Architecture


Let’s understand the data cloud architecture in below:

  1. Data Ingestion

  • Data ingestion is the process of fetching data from external systems into Data Cloud.
  • As shown in the above architecture image on the left side, these are all the data sources a business can have. It can be a Salesforce cloud (Sales, service, marketing, health, etc.) and other external platforms like Amazon S3, mobile and web connectors, Salesforce SDK, and more. Data cloud brings all these data source data together with less effort.
  • Now, these data are coming in two formats: Batch & Streaming. Batch data is received periodically, like 1 hour, six hours, or daily. Streaming data is in real time.
  1. Transform & Govern

  • The next step in transforming your data is assembling it into a structured format using a data cloud. Using a data cloud, you can prepare, filter, and transform data before using it.
  1. Harmonizing the data

  • Data modeling or harmonizing is the process of transforming different data sources into a single standardized data model.
  • In the Harmonisation stage of the data cloud, it will get all random unstructured data and convert it into a standard format.
  1. Unify

  • Unification in Data Cloud is the process of combining data from multiple sources into a single profile. It is based on user-defined identity resolution rules in a ruleset, data mappings, and match and reconciliation rules.
  1. Insight & AI Prediction

  • The insight is a statistically significant finding in your data.
  • We can collect all data related to specific individuals in the Data Cloud. This means we can retrieve the targeted audience record for marketing, enhance our analytics, and improve our generative AI.
  • In this phase, the data cloud helps us get calculated insight to analyze.
  1. Segment & analyze your data

  • Segmentation is a tool that lets users create targeted audiences for marketing campaigns.
  1. Activation

  • In the final stage, the Data Cloud activates the collected, analyzed, and processed data and generates insights. Now, in this phase, you can take appropriate action on processed data.
  • At this stage, the processed data can be used for marketing purposes or to fulfill any other data-based business need.

 

Must-Know Data Cloud Terms

  1. Data Stream: To bring data from an external system into Data Cloud, you create a data stream using a connector; it refreshes continuously or on the schedule you define.
  2. Data Lake Object (DLO): Data fetched from the external system lands in a Data Lake Object first, once the data stream runs.
  3. Data Model Object (DMO): A Data Model Object is the standardized data model that different data sources are harmonized into.
  4. Unified Profile: A unified profile gives a complete view of the information collected about a user.
  5. Identity Resolution: Identity resolution in Data Cloud is a data management process that combines data from different sources into unified profiles of customers and accounts.

Summary

In this blog, we introduced Salesforce Data Cloud, covered its history and architecture, and looked at how it works. We also discussed the key terms you should know when working with Data Cloud.

 

References

  1. Salesforce Data Cloud

You Can Also Read

  1. Salesforce CPQ Overview
]]>
https://blogs.perficient.com/2024/07/25/salesforce-data-cloud-introduction-on-salesforce-data-cloud/feed/ 0 366346
AI Toolkits Magic: Automating CAPTCHA Recognition Using OpenCV and Tesseract https://blogs.perficient.com/2024/07/08/ai-toolkits-magic-automating-captcha-recognition-using-opencv-and-tesseract/ https://blogs.perficient.com/2024/07/08/ai-toolkits-magic-automating-captcha-recognition-using-opencv-and-tesseract/#comments Mon, 08 Jul 2024 05:31:18 +0000 https://blogs.perficient.com/?p=365295

OpenCV and Tesseract are often associated with Artificial Intelligence because they handle tasks that fall under the AI umbrella, such as computer vision and text recognition. To automate solving image CAPTCHAs in Java, you will typically need a few dependencies for image processing and optical character recognition.

OpenCV: A powerful library for computer vision and image processing. You can use the Java bindings for OpenCV.

Tesseract OCR: Tesseract OCR is an optical character recognition library that extracts text from images.

OpenCV (Open-Source Computer Vision Library)

  • Category: AI Toolkit for Computer Vision and Image Processing.
  • Purpose: Provides comprehensive tools for image and video processing, essential for many AI applications.
  • Capabilities: Includes image transformation, filtering, feature detection, object detection, and support for machine learning and deep learning.
  • Usage: Commonly used in AI projects for tasks like object detection, face recognition, and image classification.

Tesseract

  • Category: AI Toolkit for Optical Character Recognition (OCR)
  • Purpose: Converts images of text into machine-readable text using machine learning techniques.
  • Capabilities: Recognizes and extracts text from images, supporting multiple languages and fonts.
  • Usage: Utilized in AI projects for tasks such as document digitization, data extraction from scanned documents, and integrating text recognition into applications.

Step 1: Set up Dependencies

First, add the necessary dependencies to your pom.xml file:

Opencv dependencies

Tesseract dependencies
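In case the dependency screenshots are unavailable, here is a minimal sketch of what those pom.xml entries might look like. It assumes the org.openpnp repackaging of OpenCV (which bundles the native libraries) and Tess4J (a Java wrapper around the Tesseract engine); the version numbers are illustrative only, so check Maven Central for current releases.

    <dependencies>
        <!-- OpenCV Java bindings with bundled native libraries (org.openpnp repackaging) -->
        <dependency>
            <groupId>org.openpnp</groupId>
            <artifactId>opencv</artifactId>
            <version>4.7.0-0</version> <!-- illustrative version -->
        </dependency>
        <!-- Tess4J: Java wrapper for the Tesseract OCR engine -->
        <dependency>
            <groupId>net.sourceforge.tess4j</groupId>
            <artifactId>tess4j</artifactId>
            <version>5.8.0</version> <!-- illustrative version -->
        </dependency>
    </dependencies>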

Step 2: Write the Java Code

Create a Java class to preprocess the CAPTCHA image and extract the text using Tesseract.

CaptchaSolver code for the image CAPTCHA
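If the code screenshot isn't available, a minimal sketch of such a class is shown below. It assumes the org.openpnp OpenCV packaging and Tess4J from Step 1; the file paths, threshold value, and language are illustrative and should be adjusted to your project.

    import java.io.File;

    import org.opencv.core.Mat;
    import org.opencv.imgcodecs.Imgcodecs;
    import org.opencv.imgproc.Imgproc;

    import net.sourceforge.tess4j.ITesseract;
    import net.sourceforge.tess4j.Tesseract;
    import net.sourceforge.tess4j.TesseractException;

    public class CaptchaSolver {

        public static void main(String[] args) {
            // Load the OpenCV native library (this loader is specific to the org.openpnp packaging).
            nu.pattern.OpenCV.loadLocally();

            String inputPath = "captcha/captcha.png";        // CAPTCHA image to solve
            String processedPath = "captcha/processed.png";  // where the cleaned-up image is saved

            // Step 1: Preprocessing - load the image in grayscale and apply a binary threshold.
            Mat gray = Imgcodecs.imread(inputPath, Imgcodecs.IMREAD_GRAYSCALE);
            Mat binary = new Mat();
            Imgproc.threshold(gray, binary, 127, 255, Imgproc.THRESH_BINARY);
            Imgcodecs.imwrite(processedPath, binary);

            // Step 2: OCR - point Tess4J at the tessdata directory and extract the text.
            ITesseract tesseract = new Tesseract();
            tesseract.setDatapath("tessdata");   // folder containing eng.traineddata
            tesseract.setLanguage("eng");
            try {
                String text = tesseract.doOCR(new File(processedPath));
                System.out.println("Extracted CAPTCHA text: " + text.trim());
            } catch (TesseractException e) {
                System.err.println("OCR failed: " + e.getMessage());
            }
        }
    }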

Step 3: Set up Tesseract Data

Place Tesseract's trained data files in a project directory (e.g., tessdata); you need at least the data file for the language used in the CAPTCHA. Tesseract uses these trained data files (often referred to as "language data" or "tessdata") to recognize text in different languages.

For example, if your CAPTCHA contains English text, you only need the English trained data file. If it contains text in another language, you'll need the corresponding trained data file.
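As a quick illustration, switching Tess4J to another language is just a matter of pointing it at the matching trained data file; the language code below ("deu" for German) is only an example.

    ITesseract tesseract = new Tesseract();
    tesseract.setDatapath("tessdata");   // must contain deu.traineddata for this example
    tesseract.setLanguage("deu");        // use the code that matches your CAPTCHA's language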

Step 4: Run the code

Ensure you have a CAPTCHA image in your project directory (e.g., captcha/captcha.png) and adjust the paths in the code accordingly.

Captcha image under folder

Then, run the CaptchaSolver class.

Final Console Output of Extracted Image Text

Output of code

Explanation

  1. Image Preprocessing:
    • Load the image in grayscale mode.
    • Apply a binary threshold to convert the image to black and white.
    • Save the preprocessed image to disk.
  2. Text Extraction with Tesseract:
    • Initialize Tesseract and point it to the tessdata directory.
    • Process the preprocessed image with Tesseract to extract the text.

By running this code, you should be able to automate the solving of simple image CAPTCHAs. Adjust the preprocessing steps as necessary for more complex CAPTCHAs.

Summary

Referring to OpenCV and Tesseract as components of an “AI toolkit” accurately reflects their roles in enabling and enhancing AI applications, particularly in the domains of computer vision and text recognition. They are essential tools for implementing AI-driven solutions, making them integral parts of the AI development ecosystem.

]]>
https://blogs.perficient.com/2024/07/08/ai-toolkits-magic-automating-captcha-recognition-using-opencv-and-tesseract/feed/ 1 365295
Efficient Record Assignment: Assign Records to Queues with Salesforce Flows https://blogs.perficient.com/2024/06/18/efficient-record-assignment-assign-records-to-queues-with-salesforce-flows/ https://blogs.perficient.com/2024/06/18/efficient-record-assignment-assign-records-to-queues-with-salesforce-flows/#respond Tue, 18 Jun 2024 14:36:42 +0000 https://blogs.perficient.com/?p=364560

Hello Trailblazers,

In today's fast-paced business environment, efficiency is key. Salesforce Flows offer a powerful way to automate processes, and one of their most impactful uses is assigning records to a queue.

In the previous parts of this blog series, we discussed everything about queues, from what they are to how to create them. If you would like to learn more, you can check out that part here. It will help you understand the core concepts of Salesforce Queues.

In this blog post, we’ll learn how to assign records to queues using Salesforce Flows, ensuring your team can manage workloads more effectively and respond to tasks promptly.

So let’s get started…

Salesforce Queue

Let’s discuss Salesforce queues very briefly.

Salesforce Queues are a powerful tool that helps manage workloads and distribute tasks among team members efficiently.

Queues are a way to prioritize, distribute, and assign records to teams or groups of users. They act as holding areas where records wait for a user to pick them up.

Why Use Salesforce Flows for Record Assignment?

Assigning records to queues in Salesforce has several advantages:

  1. Improved Efficiency: Automating the assignment process reduces manual effort and the risk of errors, ensuring records are always directed to the right team members.
  2. Enhanced Team Collaboration: Queues help distribute work among team members, preventing bottlenecks and ensuring tasks are handled promptly.
  3. Better Workload Management: Queues make it easier to monitor and manage workloads, helping managers allocate resources more effectively.

How to Assign Record to Queue with Salesforce Flows

To assign records to the queue via flows, you need to create a flow.

So, let’s create a flow…

Step 1: Create a New Flow

  1. Go to the Quick Find box.
  2. Type Flows and select it.
  3. Click “New Flow”.
  4. Select "Record-Triggered Flow".
  5. Click Create.

Step 2: Configure the Trigger

  1. Choose the object for which you want to assign records to queues (e.g., Case, Lead, Opportunity).
    Here we are selecting “Lead Object”.
  2. In the Configure Trigger, select “A record is created”. You can select whatever meets your requirements.
  3. Set the Entry conditions.
    a. In the Condition Requirements dropdown – select “All Conditions Are Met (AND)”.
    b. Field – LeadSource.
    c. Operator – Equal.
    d. Value – Web.
  4. In ‘Optimize the Flow for’: select “Fast Field Updates”.

With this, your conditions will look like this:

Entry conditions of the record-triggered flow

 

Step 3: Get the desired Queue in Flow

Once you are done with the entry conditions, you can start building the further flow logic.

So, to assign the lead record to a particular queue, you need to get the ID of that queue. I have already created a queue, so I'm using it here.

For that, follow the below steps…

  1. Choose the “Add Element” or “+” button in the flow.
  2. Select the “Get Records” element.
  3. Give the Label – GetQueueId.
  4. Select Object – Group.
  5. In the Filter Group Records: Condition Requirements – All Conditions Are Met (AND)
    a. Field – DeveloperName
    b. Operator – Equal
    c. Value – DemoQueue
    [Note: ‘DeveloperName’ is simply a Queue name.]
  6. Click on “Add Condition”.
    a. Field – Type
    b. Operator – Equal
    c. Value – Queue
  7. Sort Group Records – Not Sorted.
  8. In the How Many Records to Store – select “Only the first record.”
  9. And in the How to Store Record Data – select “Automatically store all fields.”

So, with this, your Get Records element will look like this:

Get Records element configuration

 

Step 4: Update the Lead Record

Once you get the Queue ID, you can directly assign it to the lead. To do so, follow the below steps…

  1. Choose the “Add Element” or “+” button in the flow.
  2. Select “Update Triggering Record” from the Shortcuts.
  3. In the Set Filter Conditions, Condition Requirements to Update Record – select “None—Always Update Record”.
  4. In the Set Field Values for the Lead Record…
    a. Field – type OwnerId.
    b. Value – select the Group record's ID from the GetQueueId element we created above.
  5. Save your Flow.
  6. Activate.

So, your flow will appear as shown below.

The completed record-triggered flow

Step 5: Test the Flow

Now, it’s time to test the flow.

Once you’ve configured the flow, activate it. Create sample records to ensure they are correctly assigned to the appropriate queues based on your flow logic.

To demonstrate this, I’m creating a new lead record with the Lead Source set to “Web.” As you can see, while creating the record, the Lead Owner is initially set to “F Demo.”

Once the record is saved, the owner will change, and the record will be assigned to the appropriate queue.

New lead record with Lead Source set to Web

 

After Save:

Lead record assigned to the queue after save

 

This way, the flow triggers and assigns the lead record to the queue.

Conclusion

Salesforce Flows are a powerful tool for automating record assignments, improving efficiency, and enhancing team collaboration. By following the steps outlined in this guide, you can create effective flows that assign records to queues, ensuring your team can manage their workloads more efficiently.

Happy Reading!

The dream is not that which

You see while sleeping;

It is something that

does not let you sleep…

 

Related Posts on Queue:

  1. Salesforce Queues
  2. Salesforce Queue Types

You Can Also Read:

1. Introduction to the Salesforce Queues – Part 1
2. Mastering Salesforce Queues: A Step-by-Step Guide – Part 2
3. How to Assign Records to Salesforce Queue: A Complete Guide
4. An Introduction to Salesforce CPQ
5. Revolutionizing Customer Engagement: The Salesforce Einstein Chatbot

 

]]>
https://blogs.perficient.com/2024/06/18/efficient-record-assignment-assign-records-to-queues-with-salesforce-flows/feed/ 0 364560