“AI-first” has become a buzzword in executive conversations, but what does it really mean? Is it about using artificial intelligence at every turn, or applying it with intention and purpose? For analyst and researcher Susan Etlinger, it’s clearly the latter.
On the latest episode of “What If? So What?”, Susan joins host Jim Hertzfeld to explore what it takes to build AI strategies that are both innovative and responsible. With a background that bridges the humanities and technology, she makes a compelling case for the critical role of human insight in an AI-driven world.
When (and When Not) to Automate
AI’s power lies not just in what it can do, but in knowing when not to use it. Susan argues that leaders must assess whether automation truly improves outcomes or risks eliminating valuable learning opportunities.
She shares a story from early in her career, when manually compiling business data helped her develop essential skills like stakeholder management, strategic thinking, and financial literacy. Her point: AI can accelerate, but only human experience gives results meaning.
From Generative to Agentic AI: Who’s in Control?
The conversation explores the evolution from machine learning to Generative AI, and now to Agentic AI. Susan encourages leaders to ask:
Who sets the goals? Who ensures alignment?
While AI agents can handle tasks from start to finish, intention, ethics, and judgment remain the responsibility of humans.
Smarter AI Strategies, Not Just More AI
Susan’s key takeaway is clear:
Organizations don’t need more AI; they need better AI strategies.
Start with a clear use case, implement with intention, and learn from the outcome. The most effective approaches respect the limits of automation while amplifying human strengths.
Keep People at the Center of Your AI Strategy
For leaders shaping AI strategy, Susan offers a clear reminder: progress isn’t about replacing human decision making, it’s about enhancing it. AI can accelerate outcomes, but it’s people who ensure those outcomes are purposeful, ethical, and aligned to your business goals.
Listen to the full conversation
Apple | Spotify | Amazon | Overcast | Watch the full video episode on YouTube
Susan Etlinger is a globally recognized expert on the business and societal impact of data and artificial intelligence and senior fellow at the Centre for International Governance Innovation, an independent, non-partisan think tank based in Canada. Her TED talk, “What Do We Do With All This Big Data?” has been translated into 25 languages and has been viewed more than 1.5 million times. Her research is used in university curricula around the world, and she has been quoted in numerous media outlets including The Wall Street Journal, The Atlantic, The New York Times and the BBC. Susan holds a Bachelor of Arts in Rhetoric from the University of California at Berkeley.
Follow Susan on LinkedIn
Learn More about Susan Etlinger
Jim Hertzfeld is Area Vice President, Strategy for Perficient.
For over two decades, he has worked with clients to convert market insights into real-world digital products and customer experiences that actually grow their business. More than just a strategist, Jim is a pragmatic rebel known for challenging the conventional and turning grand visions into actionable steps. His candid demeanor, sprinkled with a dose of cynical optimism, shapes a narrative that challenges and inspires listeners.
In recent years, AI chatbots like ChatGPT have gone from fun tools for answering questions to serious helpers in workplaces, education, and even personal decision-making. With ChatGPT-5 now being the latest and most advanced version, it’s no surprise that people are asking a critical question:
“Is my personal data safe when I use ChatGPT-5?”
ChatGPT-5 is an AI language model created by OpenAI. You can think of it as a super-smart digital assistant that can answer questions, draft and summarize text, write code, and help with everyday tasks.
It learns from patterns in data, but here’s an important point – it doesn’t “remember” your conversations unless the developer has built a special memory feature and you’ve agreed to it.
When you chat with ChatGPT-5, your messages are processed to generate a response. Depending on the app or platform you use, your conversations may be stored temporarily, reviewed to improve the service, or shared with third-party integrations.
This is why reading the privacy policy is not just boring legal stuff – it’s how you find out precisely what happens to your data.
The concerns about ChatGPT-5 (and similar AI tools) are less about it being “evil” and more about how your data could be exposed if not appropriately handled.
Here are the main risks:
Many users unknowingly type personal details – such as their full name, home address, phone number, passwords, or banking information – into AI chat windows. While the chatbot itself may not misuse this data, it is still transmitted over the internet and may be temporarily stored by the platform. If the platform suffers a data breach or if the information is accessed by unauthorized personnel, your sensitive data could be exposed or exploited.
Best Practice: Treat AI chats like public forums – never share confidential or personally identifiable information.
AI chatbots are often integrated into third-party platforms, such as browser extensions, productivity tools, or mobile apps. These integrations may collect and store your chat data on their own servers, sometimes without clearly informing you. Unlike official platforms with strict privacy policies, third-party services may lack robust security measures or transparency.
Risk Example: A browser extension that logs your AI chats could be hacked, exposing all stored conversations.
Best Practice: Use only trusted, official apps and review their privacy policies before granting access.
In rare but serious cases, malicious AI integrations or compromised platforms could capture login credentials you enter during a conversation. If you share usernames, passwords, or OTPs (one-time passwords), these could be used to access your accounts and perform unauthorized actions – such as placing orders, transferring money, or changing account settings.
Real-World Consequence: You might wake up to find that someone used your credentials to order expensive items or access private services.
Best Practice: Never enter login details into any AI chat, and always use two-factor authentication (2FA) for added protection.
If chat logs containing personal information are accessed by cybercriminals, they can use that data to craft highly convincing phishing emails or social engineering attacks. For example, knowing your name, location, or recent purchases allows attackers to impersonate trusted services and trick you into clicking malicious links or revealing more sensitive data.
Best Practice: Be cautious of unsolicited messages and verify the sender before responding or clicking links.
AI chatbots are trained on vast datasets, but they can still generate inaccurate, outdated, or misleading information. Relying on AI responses without verifying facts can lead to poor decisions, especially in areas like health, finance, or legal advice.
Risk Example: Acting on incorrect medical advice or sharing false information publicly could have serious consequences.
Best Practice: Always cross-check AI-generated content with reputable sources before taking action or sharing it.
The simple steps are the best practices above: treat AI chats like public forums, use only trusted apps, never enter credentials, enable two-factor authentication, and verify AI-generated information before acting on it.
ChatGPT-5 is a tool, and like any tool, it can be used for good or misused. The AI itself isn’t plotting to steal your logins or credentials, but if you use it carelessly or through untrusted apps, your data could be at risk.
Golden rule: Enjoy the benefits of AI, but treat it like a stranger online – don’t overshare, and keep control of your personal data.
Have you ever wondered what happens when you ask AI to create an image, write a poem, or draft an email?
Most of us picture “the cloud” working its magic in a distant location. The twist is that the cloud is physical, real, and thirsty. Data centers require water, sometimes millions of gallons per day, to stay cool while AI is operating.
By 2025, it is impossible to overlook AI’s growing water footprint. But don’t worry, AI isn’t to blame here. It’s about comprehending the problem, the ingenious ways technology is attempting to solve it, and what we (as humans) can do to improve the situation.
Doesn’t your laptop heat up quickly when you run it on overdrive for hours? Now multiply that by millions of machines running constantly, stacked in enormous warehouses. That is what a data center is.
These facilities are cooled by air conditioning units, liquid cooling, or evaporative cooling to avoid overheating. And gallons of fresh water are lost every day due to evaporative cooling, in which water actually evaporates into the atmosphere to remove heat.
Therefore, there is an invisible cost associated with every chatbot interaction, artificial intelligence-powered search, and generated image: water.
So how big is AI’s water footprint? Pretty big, and expanding. According to a 2025 industry report, data centers related to artificial intelligence may use more than 6 billion cubic meters of water a year by the end of this decade. That is roughly equivalent to the annual consumption of a mid-sized nation.
In short, AI’s water consumption is no longer a “future problem.” The effects are already being felt by the communities that surround big data centers. Concerns regarding water stress during dry months have been voiced by residents in places like Arizona and Ireland.
Can the same technology help fix the problem? Surprisingly, yes: the same intelligence that consumes water is also learning how to save it.
Optimized cooling: Businesses are using AI to run data centers more efficiently by predicting exactly when and how much cooling is needed, which can cut water waste by as much as 20–30%.
Technology for liquid cooling: Some new servers are moving to liquid cooling systems, which consume a lot less water than conventional techniques.
Green data centers: Major corporations, such as Google and Microsoft, are testing facilities that use recycled water rather than fresh water for cooling and are powered by renewable energy.
So the story isn’t “AI is the problem.” It’s “AI is thirsty, but also learning how to drink smarter.”
Can everyday users make a difference? Absolutely. Our decisions have an impact, even though most of us will never manage a data center. Here’s how:
More intelligent use of AI: Just as we try to conserve energy, we can be mindful of how often we run complex AI tasks. (Are 50 AI-generated versions of the same image really necessary?)
Encourage green tech: Selecting platforms and services that are dedicated to sustainable data practices encourages the sector to improve.
Community action: Cities can enact laws that promote the use of recycled water in data centers and openness regarding the effects of water use in their communities.
Think of it like electricity: at first we hardly noticed its hidden costs, but over time efficiency and awareness made a significant difference. The same can happen with AI and water.
AI is only one piece of the global water puzzle. Water stress is still primarily caused by industry, agriculture, and climate change. However, the emergence of AI makes us reevaluate how we want to engage with the planet’s most valuable resource in the digital future.
If this is done correctly, artificial intelligence (AI) has the potential to be a partner in sustainability, not only in terms of how it uses water but also in terms of how it aids in global water monitoring, forecasting, and conservation.
The cloud isn’t magic. It’s water, energy, wires, and metal. And AI’s thirst increases with its growth. However, this is an opportunity for creativity rather than panic. Communities, engineers, and even artificial intelligence (AI) are already rethinking how to keep machines cool without depleting the planet.
So the next time you chat with AI or generate an interesting image, remember that every word and pixel carries a hidden drop of water. And the more we understand that, the better the decisions we can make to keep the digital future sustainable.
Imagine starting your workday with an alert not from a human analyst, but from an AI agent. While you slept, this agent sifted through last night’s sales data, spotted an emerging decline in a key region, and already generated a mini-dashboard highlighting the issue and recommending a targeted promotion. No one asked it to; it acted on its own. This scenario isn’t science fiction or some distant future; it’s the imminent reality of agentic AI in enterprise analytics. Businesses have spent years perfecting dashboards and self-service BI, empowering users to explore data on their own. However, in a world where conditions are constantly changing, even the most advanced dashboard may feel excessively slow. Enter agentic AI: the next frontier where intelligent agents don’t just inform decisions; they make and even execute decisions autonomously. Over the next 1–3 years, this shift toward AI-driven “autonomous BI” is poised to redefine how we interact with data, how analytics teams operate, and how insights are delivered across organizations.
In this post, we’ll clarify what agentic AI means in the context of enterprise analytics and explore how it differs from traditional automation or self-service BI. We’ll forecast specific changes this paradigm will bring, from business users getting proactive insights to data teams overseeing AI collaborators, and call out real examples (think AI agents auto-generating dashboards, orchestrating data pipelines, or flagging anomalies in real time). We’ll also consider the cultural and organizational implications of this evolution, such as trust and governance, and conclude with a point of view on how enterprises can prepare for the agentic AI era.
Agentic AI (often called agentic analytics in BI circles) refers to analytics systems powered by AI “agents” that can autonomously analyze data and take action without needing constant human prompts. In traditional BI, a human analyst or business user queries data, interprets results, and decides on an action. By contrast, an agentic AI system is goal-driven and proactive; it continuously monitors data, interprets changes, and initiates responses aligned with business objectives on its own. In other words, it shifts the analytics model from simply supporting human decisions to executing or recommending decisions independently.
Put simply, agentic analytics enables autonomous, goal-driven analytic agents that behave like tireless virtual analysts. They’re designed to think, plan, and act much like a human analyst would, but at machine speed and scale. Instead of waiting for someone to run a report or ask a question, these AI agents proactively scan data streams, reason over what they find, and trigger the appropriate next steps. For example, an agent might detect that a KPI is off track and automatically send an alert or even adjust a parameter in a system, closing the loop between insight and action. This stands in contrast to earlier “augmented analytics” or alerting tools that, while they could highlight patterns or outliers, were fundamentally passive; they still waited for a human to log in or respond. Agentic AI, by definition, carries the initiative: it doesn’t just explain what’s happening; it helps change what happens next.
It’s worth noting that the term “agentic” implies having agency, the capacity to act autonomously. In enterprise analytics, this means the AI isn’t just crunching numbers; it’s making choices about what analyses to perform and what operational actions to trigger based on those analyses. This could range from generating a new visualization to writing back results into a CRM to launching a workflow in response to a detected trend. Crucially, agentic AI doesn’t operate in isolation of humans’ goals. These agents are usually configured around explicit business objectives or KPIs (e.g., reduce churn, optimize inventory). They aim to carry out the intent set by business leaders, just without needing a person to micromanage each step.
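To make that monitor-reason-act loop concrete, here is a minimal sketch in JavaScript. It is purely illustrative: the helper functions (fetchKpi, sendAlert, launchPromotion), the region, and the thresholds are hypothetical, not part of any specific product.

```javascript
// Illustrative agent loop: watch a KPI, reason about the gap, and act without a human prompt.
// fetchKpi, sendAlert, and launchPromotion are hypothetical helpers.
async function runSalesAgent({ region, dailyTarget }) {
  const kpi = await fetchKpi('daily_sales', region);        // pull the latest metric
  const shortfall = (dailyTarget - kpi.value) / dailyTarget;

  if (shortfall > 0.10) {                                   // more than 10% below goal
    await sendAlert(`Sales in ${region} are ${(shortfall * 100).toFixed(0)}% under target`);
    await launchPromotion(region, { discount: 0.05 });      // close the loop with an action
  }
}

// Run the check every hour, with no one asking for a report.
setInterval(() => runSalesAgent({ region: 'EMEA', dailyTarget: 125000 }), 60 * 60 * 1000);
```

The point of the sketch is the shift in initiative: the goal (the daily target) comes from the business, but the monitoring, the reasoning about the gap, and the follow-up action all happen without a person in the loop.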
It’s important to distinguish agentic AI from the traditional automation and self-service BI approaches that many enterprises have implemented over the past decade. While those were important steps in modernizing analytics, agentic AI goes a step further in several key ways:
In summary, agentic AI goes beyond what traditional automation or self-service BI can do. If a classic self-service dashboard was like a GPS map you had to read, an agentic AI is like a self-driving car; you tell it where you want to go, and it navigates there (while you watch and ensure it stays on track). This evolution is happening now because of converging advances in technology: more powerful AI models, API-accessible cloud tools, and enterprises’ appetite for real-time, automated decisions. With the groundwork laid, analytics is moving from a manual, human-driven endeavor to a collaborative human-AI partnership, and often, the AI will take the first action.
What practical changes should we expect as agentic AI becomes part of enterprise analytics in the next 1–3 years? Let’s explore the forecast across three dimensions: how business users interact with data, how data and analytics teams work, and how analytics capabilities are delivered in organizations.
Impact on Business Users: From Asking for Insights to Acting on Conversations
For business users, the managers, analysts, and non-technical staff who consume data, agentic AI will make analytics feel more like a conversation and less like a hunt for answers. Instead of clicking through dashboards or waiting for weekly reports, users will have AI assistants that deliver insights proactively and in real-time.
Overall, for business users, the next few years with agentic AI will feel like analytics has turned from a static product (dashboards and reports you check) into an interactive service (an intelligent assistant that tells you what you need to know and helps you act on it). The organizations that embrace this will likely see faster decision cycles and a more data-informed workforce, as employees spend less time gathering insights and more time using them.
For data and analytics teams (data analysts, BI developers, data engineers, data scientists), agentic AI will bring a significant shift in roles and workflows. Rather than manually producing every insight or report, these teams will collaborate with AI agents and focus on enabling and governing these agents.
In short, data teams will transition from being the sole producers of analytics output to being the enablers and overseers of AI-driven analytics. Their success will be measured not just by the reports they build, but by how well they can leverage AI to scale insights. This means stronger emphasis on data quality, real-time data availability, and robust governance. Culturally, it may require a mindset shift: accepting that some of the work traditionally done “by hand” can be delegated to machines, and that the value of the team is in how they guide those machines and interpret the results, rather than in producing every chart themselves. Organizations that prepare their data talent for this augmented role, through training in AI tools and proactive change management, will handle the transition more smoothly.
Agentic AI will also transform how analytics capabilities are delivered and consumed in the enterprise. Today, the typical delivery mechanism is a dashboard, report, or perhaps a scheduled email, in other words, the user has to go to a tool or receive a static packet of information. In the coming years, analytics delivery will become more embedded, continuous, and personalized, largely thanks to AI agents working behind the scenes.
To sum up, analytics capabilities will be delivered more fluidly and in an integrated fashion. Rather than thinking of “going to analytics,” the analytics will come to you, often initiated by an agent. Dashboards and reports will not disappear overnight (they still have their place for deep dives and record-keeping), but the center of gravity will shift toward timely insights injected into decision points. The business impact is significant: decisions can be made faster and in context, and fewer opportunities or risks will slip through unnoticed between reporting cycles. It’s a world where, ideally, nothing important waits for the next report; your AI agent has already informed the right people or taken action.
The technical capabilities of agentic AI are exciting, but enterprises must also grapple with cultural and organizational implications. Introducing autonomous AI into analytics workflows will affect how people feel about trust, control, and their own roles. Here are some key considerations:
Agentic AI in analytics is on the horizon, and the time to prepare is now. Here’s a forward-thinking game plan for enterprises to get ready for this shift:
Agentic AI represents a bold leap in the evolution of business intelligence, from tools that we operate to intelligent agents that work alongside us (and sometimes ahead of us). In the next 1–3 years, we can expect early forms of these AI agents to become part of everyday analytics in forward-thinking enterprises. They will likely start by tackling well-defined tasks: automatically generating reports, sending alerts for anomalies, and answering common analytical questions. Over time, as trust and sophistication grow, their autonomy will increase to more complex orchestrations and decision executions. The payoff can be substantial: faster decision cycles, decisions that are more data-driven and less prone to human overlook, and analytics capabilities that truly scale across an organization. Companies that embrace this shift early could gain a competitive edge, outpacing those stuck in manual analytics with speed, agility, and insights that are both deeper and more timely.
Yet, success with agentic AI won’t come just from buying the latest AI tool. It requires a thoughtful approach to technology, process, and people. The enterprises that thrive will be those that pair innovation with governance, enthusiasm with education, and automation with a human touch. By laying the groundwork now, improving data infrastructure, cultivating AI-friendly skills, and establishing clear rules, organizations can confidently welcome their new AI “colleagues” and harness their potential. In the near future, your most trusted analyst might not be a person at all, but an algorithmic agent that never sleeps, never gets tired, and continuously learns. The question is, will your organization be ready to partner with it and leap ahead into this new age of analytics?
In today’s hyper-personalized digital world, delivering the right message to the right customer at the right time is non-negotiable.
Adobe Commerce is a powerful eCommerce engine, but when coupled with Adobe Real-Time CDP (Customer Data Platform), it evolves into an intelligent experience machine capable of deep AI-powered personalization, dynamic segmentation, and real-time responsiveness.
Adobe Real-Time CDP is a Customer Data Platform that collects and unifies data across various sources (websites, apps, CRM, etc.) into a single, comprehensive real-time customer profile. This data is then accessible to other systems for marketing, sales, and service.
Adobe Commerce offers native customer segmentation, but it’s limited to session or behavior data within the commerce environment. When the customer data is vast, the native segmentation becomes very slow, impacting overall performance.
| Feature | Native Commerce | Adobe Real-Time CDP |
|---|---|---|
| Segmentation | Static, rule-based | Real-time, AI-powered |
| Data Sources | Commerce-only | Omnichannel (web, CRM, etc.) |
| Personalization | Session-based | Cross-channel, predictive |
| Identity Graph | No identity graph | Cross-device customer data |
| Activation | Limited to Commerce | Activate across systems |
Integrating Adobe Commerce with CDP empowers both business and technical teams to unify profiles and stay ahead in a dynamic marketplace by delivering personalization.
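As a rough illustration of what this looks like from the storefront side, here is a minimal sketch that sends a product-view event to Adobe Experience Platform using the Web SDK (alloy), where Real-Time CDP can fold it into the unified profile. The SKU, product name, and price are placeholder values, and the snippet assumes the Web SDK has already been configured for your datastream.

```javascript
// Hypothetical example: stream a product-view event from an Adobe Commerce storefront
// into Adobe Experience Platform for Real-Time CDP profile enrichment.
alloy("sendEvent", {
  xdm: {
    eventType: "commerce.productViews",
    commerce: {
      productViews: { value: 1 }
    },
    productListItems: [
      { SKU: "WS12-M-Blue", name: "Radiant Tee", priceTotal: 22.0 } // sample product data
    ]
  }
});
```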
Adobe Real-Time CDP is not just a marketing tool; it’s an asset for creating commerce experiences that adapt to the customer in real time.
The pressure is on for healthcare organizations to deliver more—more value, more equity, more impact. That’s where a well-known approach is stepping back into the spotlight.
If you’ve been around healthcare conversations lately, you’ve probably noticed the term value-based care making a comeback. And there’s a good reason for that. It’s not just a buzzword—it’s reshaping how we think about health, wellness, and the entire care experience.
At its core, value-based care is a shift away from the old-school fee-for-service model, where providers got paid for every test, procedure, or visit, regardless of whether it actually helped the patient. Instead, value-based care rewards providers for delivering high-quality, efficient care that leads to better health outcomes.
It’s not about how much care is delivered, it’s about how effective that care is.
This shift matters because it places patients at the center of everything. It’s about making sure people get the right care, at the right time, in the right setting. That means fewer unnecessary tests, fewer duplicate procedures, and less of the fragmentation that’s plagued the system for decades.
The results? Better experiences for patients. Lower costs. Healthier communities.
Explore More: Access to Care Is Evolving: What Consumer Insights and Behavior Models Reveal
There’s a lot to be excited about, and for good reason! When we focus on prevention, chronic disease management, and whole-person wellness, we can avoid costly hospital stays and emergency room visits. That’s not just good for the healthcare system, it’s good for people, families, and communities. It moves us closer to the holy grail in healthcare: the quintuple aim. Achieving it means delivering better outcomes, elevating experiences for both patients and clinicians, reducing costs, and advancing health equity.
The challenge? Turning value-based care into a scalable, sustainable reality isn’t easy.
Despite more than a decade of pilots, programs, and well-intentioned reforms, only a small number of healthcare organizations have been able to scale their value-based care models effectively. Why? Because many still struggle with some pretty big roadblocks—like outdated technology, disconnected systems, siloed data, and limited ability to manage risk or coordinate care.
That’s where digital transformation comes in.
To make value-based care real and sustainable, healthcare organizations are rethinking their infrastructure from the ground up. They’re adopting cloud-based platforms and interoperable IT systems that allow for seamless data exchange across providers, payers, and patients. They’re tapping into advanced analytics, intelligent automation, and AI to identify at-risk patients, personalize care, and make smarter decisions faster.
As organizations work to enable VBC through digital transformation, it’s critical to really understand what the current research says. Our recent study, Access to Care: The Digital Imperative for Healthcare Leaders, backs up these trends, showing that digital convenience is no longer a differentiator—it’s a baseline expectation.
Findings show that nearly half of consumers have opted for digital-first care instead of visiting their regular physician or provider.
This shift highlights how important it is to offer simple and intuitive self-service digital tools that help people get what they need—fast. When it’s easy to find and access care, people are more likely to trust you, stick with you, and come back when they need you again.
You May Also Enjoy: How Innovative Healthcare Organizations Integrate Clinical Intelligence
Care models are also evolving. Instead of reacting to illness, we’re seeing a stronger focus on prevention, early intervention, and proactive outreach. Consumer-centric tools like mobile apps, patient portals, and personalized health reminders are becoming the norm, not the exception. It’s all part of a broader movement to meet people where they are and give them more control over their health journey.
But here’s an important reminder: none of these efforts work in a vacuum.
Value-based care isn’t just a technology upgrade or a process tweak. It’s a cultural shift.
Success requires aligning people, processes, data, and technology in a way that’s intentional and strategic. It’s about creating an integrated system that’s designed to improve outcomes and then making those improvements stick.
So, while the road to value-based care may be long and winding, the destination is worth it. It’s not just a different way of delivering care—it’s a smarter, more sustainable one.
Success In Action: Empowering Healthcare Consumers and Their Care Ecosystems With Interoperable Data
If you’re exploring how to modernize your digital front door, consider starting with a strategic assessment. Align your goals, audit your content, and evaluate your tech stack. The path to better outcomes starts with a smarter, simpler way to help patients find care.
We combine strategy, industry best practices, and technology expertise to deliver award-winning results for leading healthcare organizations.
Our approach to designing and implementing AI and machine learning (ML) solutions promotes secure and responsible adoption and ensures demonstrated and sustainable business value.
Discover why we have been trusted by the 10 largest health systems and the 10 largest health insurers in the U.S. Explore our healthcare expertise and contact us to learn more.
Ready to go from “meh” to “whoa” with your AI coding assistant? Here’s how to get started.
You’ve installed GitHub Copilot. Now what?
Here’s how to actually get it to work for you – not just with you.
We’ve already covered the basics of setup and everyday use in the blog Using GitHub Copilot in VS Code; this post is about getting more out of it.
Copilot is like a teammate who’s really fast at coding but only understands what you clearly explain.
Use descriptive comments or function names to guide Copilot.
```javascript
// Fetch user data from API and cache it locally
function fetchUserData() {
```
Copilot will often generate useful logic based on that. It works best when you think one step ahead.
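For instance, from the comment and function name above, Copilot might propose something along these lines (suggestions vary, and the API URL here is a made-up placeholder):

```javascript
// A plausible Copilot completion: fetch once, then serve from a local cache.
let cachedUserData = null;

async function fetchUserData() {
  if (cachedUserData) return cachedUserData;                      // reuse the cached copy
  const response = await fetch('https://api.example.com/users');  // hypothetical endpoint
  cachedUserData = await response.json();
  return cachedUserData;
}
```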
Copilot shines when your code is modular.
Instead of writing:
```javascript
function processEverything() {
  // 50 lines of logic
}
```
Break it down:
```javascript
// Validate form input
function validateInput(data) { }

// Submit form to backend
function submitForm(data) { }
```
This way, you get smarter, more accurate completions.
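To illustrate, Copilot might flesh out those smaller stubs roughly like this (again, suggestions vary and the endpoint is a placeholder):

```javascript
// Validate form input
function validateInput(data) {
  return Boolean(data.email) && data.email.includes('@') && data.password.length >= 8;
}

// Submit form to backend
async function submitForm(data) {
  const response = await fetch('/api/form', {            // hypothetical endpoint
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(data),
  });
  return response.json();
}
```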
Speed = flow. These shortcuts help you ride Copilot without breaking rhythm:
| Action | Shortcut (Windows) | Shortcut (Mac) |
|---|---|---|
| Accept Suggestion | `Tab` | `Tab` |
| Next Suggestion | `Alt + ]` | `Option + ]` |
| Previous Suggestion | `Alt + [` | `Option + [` |
| Dismiss Suggestion | `Esc` | `Esc` |
| Open Copilot Panel | `Ctrl + Enter` | `Cmd + Enter` |
Power Tip: Hold `Tab` to preview the full suggestion before accepting it.
Don’t settle for the first suggestion. Try giving Copilot:
Copilot might generate multiple versions. Pick or tweak the one that fits best.
Copilot is smart, but not perfect.
Think of Copilot as your fast-thinking intern. You still need to double-check their work.
Copilot isn’t just for JS or Python. Try it in:
Write a comment like # Dockerfile for Node.js app – and watch the magic.
Use Copilot to write your test cases too:
```javascript
// Test case for addTwoNumbers function
describe('addTwoNumbers', () => {
```
It will generate a full Jest test block. Use this to write tests faster – especially for legacy code.
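A generated block typically looks something like this (illustrative; the actual suggestion depends on how addTwoNumbers is implemented):

```javascript
// Test case for addTwoNumbers function
describe('addTwoNumbers', () => {
  it('adds two positive numbers', () => {
    expect(addTwoNumbers(2, 3)).toBe(5);
  });

  it('handles negative numbers', () => {
    expect(addTwoNumbers(-4, 1)).toBe(-3);
  });
});
```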
Treat Copilot suggestions as learning opportunities:
It’s like having a senior dev whispering best practices in your ear.
If you have access to GitHub Copilot Chat, try it. Ask questions like:
It works like a Stack Overflow built into your IDE.
| Tip | Benefit |
|---|---|
| Write clear comments | Better suggestions |
| Break logic into chunks | Modular, reusable code |
| Use shortcuts | Stay in flow |
| Cycle suggestions | Explore better options |
| Review output | Avoid bugs |
| Test case generation | Faster TDD |
| Learn as you go | Level up coding skills |
To truly master Copilot:
You’ll slowly build trust – and skill.
For most people, JavaScript still conjures images of simple web interactions like toggling menus, validating forms, or animating buttons. But that perception is rapidly changing.
JavaScript has quietly transformed into a powerful tool for machine learning, not in data centers or cloud clusters, but right in your browser. There’s no need for Python scripts or backend servers: client-side intelligence is powered by frameworks like TensorFlow.js and Brain.js.
This shift means developers can now build smart, responsive applications that learn and adapt all without leaving the browser window.
What Is AI at the Edge?
AI at the edge means running artificial intelligence models directly on your device, whether it’s your phone, laptop, or even a micro-controller rather than relying on cloud servers.
Why does this matter?
This opens the door to amazing new features for users, like recognizing hand movements, detecting faces, or translating languages instantly. And the best part? It works right inside your browser, without needing any extra software.
What is TensorFlow.js?
TensorFlow.js is a JavaScript library created by Google that lets you build and run machine learning models directly in your browser or in a Node.js environment.
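As a quick taste, here is the classic “hello world” of TensorFlow.js: a one-neuron model that learns y = 2x - 1 entirely in the browser. It’s a minimal sketch rather than a production setup.

```javascript
import * as tf from '@tensorflow/tfjs';

// A tiny model with a single dense layer learning y = 2x - 1.
const model = tf.sequential();
model.add(tf.layers.dense({ units: 1, inputShape: [1] }));
model.compile({ loss: 'meanSquaredError', optimizer: 'sgd' });

// Six training examples are enough for this toy problem.
const xs = tf.tensor2d([-1, 0, 1, 2, 3, 4], [6, 1]);
const ys = tf.tensor2d([-3, -1, 1, 3, 5, 7], [6, 1]);

await model.fit(xs, ys, { epochs: 200 });
model.predict(tf.tensor2d([10], [1, 1])).print(); // prints a value close to 19
```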
Key Features:
Why JavaScript Is a Natural Fit for Edge AI
JavaScript is changing how we use AI on devices like phones, laptops, and tablets right in the browser. Here’s why it’s so powerful:
Real-World Examples
Limitations:
However, the gap is narrowing. Tools like WebAssembly, model quantization, and on-device hardware acceleration (e.g., WebGPU) are rapidly improving JavaScript’s capabilities in the AI domain.
The Future Is Now: JavaScript
Conclusion:
JavaScript is quietly yet powerfully reshaping the future of AI at the edge. What was once a humble scripting language is now a gateway to real-time, intelligent experiences that run directly in the browser.
The lines between web development and machine learning are blurring and that’s a good thing.
Machine Learning (ML) is no longer limited to research labs — it’s actively driving decisions in real estate, finance, healthcare, and more. But deploying and managing ML models in production is a different ballgame. That’s where MLOps comes in.
In this blog, we’ll walk through a practical MLOps learning project — building a House Price Predictor using Azure DevOps as the CI/CD backbone. We’ll explore the evolution from DevOps to MLOps, understand the model development lifecycle, and see how to automate and manage it effectively.
MLOps (Machine Learning Operations) is the discipline of combining Machine Learning, DevOps, and Data Engineering to streamline the end-to-end ML lifecycle.
It aims to:
MLOps ensures that your model doesn’t just work in Jupyter notebooks but continues to deliver accurate predictions in production environments over time.
DevOps revolutionized software engineering by integrating development and operations through automation, CI/CD, and infrastructure as code (IaC). However, ML projects add new complexity:
| Aspect | Traditional DevOps | MLOps |
|---|---|---|
| Artifact | Source code | Code + data + models |
| Version Control | Git | Git + data versioning (e.g., DVC) |
| Testing | Unit & integration tests | Data validation + model validation |
| Deployment | Web services, APIs | ML models, pipelines, batch jobs |
| Monitoring | Logs, uptime, errors | Model drift, data drift, accuracy decay |
So, MLOps builds on DevOps but extends it with data-centric workflows, experimentation tracking, and model governance.
Our goal is to build an ML model that predicts house prices based on input features like square footage, number of bedrooms, location, etc. This learning project is structured to follow MLOps best practices, using Azure DevOps pipelines for automation.
```
house-price-predictor/
├── configs/              # Model configurations stored in YAML format
├── data/                 # Contains both raw and processed data files
├── deployment/
│   └── mlflow/           # Docker Compose files to set up MLflow tracking
├── models/               # Saved model artifacts and preprocessing objects
├── notebooks/            # Jupyter notebooks for exploratory analysis and prototyping
├── src/
│   ├── data/             # Scripts for data preparation and transformation
│   ├── features/         # Logic for generating and engineering features
│   └── models/           # Code for model building, training, and validation
├── k8s/
│   ├── deployment.yaml   # Kubernetes specs to deploy the Streamlit frontend
│   └── fast_model.yaml   # Kubernetes specs to deploy the FastAPI model service
└── requirements.txt      # List of required Python packages
```
Before getting started, make sure the following tools are installed on your machine:
```bash
# Replace 'xxxxxx' with your GitHub username or organization
git clone https://github.com/xxxxxx/house-price-predictor.git
cd house-price-predictor
```
```bash
uv venv --python python3.11
source .venv/bin/activate
```
uv pip install -r requirements.txt
To enable experiment and model run tracking with MLflow:
```bash
cd deployment/mlflow
docker compose -f mlflow-docker-compose.yml up -d
docker compose ps
```
Or, if you use Podman:

```bash
podman compose -f mlflow-docker-compose.yml up -d
podman compose ps
```
Access the MLflow UI. Once running, open your browser and navigate to http://localhost:5555
Perform cleaning and preprocessing on the raw housing dataset:
python src/data/run_processing.py --input data/raw/house_data.csv --output data/processed/cleaned_house_data.csv
Perform data transformations and feature generation:
python src/features/engineer.py --input data/processed/cleaned_house_data.csv --output data/processed/featured_house_data.csv --preprocessor models/trained/preprocessor.pkl
Train the model and track all metrics using MLflow:
python src/models/train_model.py --config configs/model_config.yaml --data data/processed/featured_house_data.csv --models-dir models --mlflow-tracking-uri http://localhost:5555
The source code for both applications — the FastAPI backend and the Streamlit frontend — is already available in the src/api and streamlit_app directories, respectively. To build and launch these applications:
Once both services are up and running, you can access the Streamlit web UI in your browser to make predictions.
You can also test the prediction API directly by sending requests to the FastAPI endpoint.
```bash
curl -X POST "http://localhost:8000/predict" \
  -H "Content-Type: application/json" \
  -d '{
    "sqft": 1500,
    "bedrooms": 3,
    "bathrooms": 2,
    "location": "suburban",
    "year_built": 2000,
    "condition": "fair"
  }'
```
Be sure to replace http://localhost:8000/predict with the actual endpoint based on where it’s running.
At this stage, your project is running locally. Now it’s time to implement the same workflow using Azure DevOps.
To implement a similar MLOps pipeline using Azure DevOps, the following prerequisites must be in place:
Start by cloning the existing GitHub repository into your Azure Repos. Inside the repository, you’ll find the azure-pipeline.yaml file, which defines the Azure DevOps CI/CD pipeline consisting of the following four stages:
This pipeline automates the end-to-end ML workflow from raw data to production deployment.
The CI/CD pipeline is already defined in the existing YAML file and is configured to run manually based on the parameters specified at runtime.
This pipeline is manually triggered (no automatic trigger on commits or pull requests) and supports the conditional execution of specific stages using parameters.
It consists of four stages, each representing a step in the MLOps lifecycle:

1. Data Processing – runs if run_all or run_data_processing is set to true.
2. Model Training – depends on DataProcessing; runs if run_all or run_model_training is set to true.
3. Build and Publish – depends on ModelTraining; runs if run_all or run_build_and_publish is set to true.
4. Deploy (to AKS) – depends on BuildAndPublish; runs only if the previous stages succeed.
Both deployment and service YAML files for these components are already present in the k8s/ folder and will be used for deploying to Azure Kubernetes Service (AKS).
In short, it deploys the Streamlit frontend and makes it publicly accessible while connecting it to the FastAPI backend for predictions.
In short, it runs the backend API in Kubernetes and makes it accessible for predictions.
Now it’s time for the final run to verify the deployment on the AKS cluster. Trigger the pipeline by selecting the run_all parameter.
After the pipeline completes successfully, all four stages and their corresponding jobs will be executed, confirming that the application has been successfully deployed to the AKS cluster.
Now, log in to the Azure portal and retrieve the external IP address of the Streamlit app service. Once accessed in your browser, you’ll see the House Price Prediction Streamlit application up and running.
Now, go ahead and perform model inference by selecting the appropriate parameter values and clicking on “Predict Price” to see how the model generates the prediction.
In this blog, we explored the fundamentals of MLOps and how it bridges the gap between machine learning development and scalable, production-ready deployment. We walked through a complete MLOps workflow—from data processing and feature engineering to model training, packaging, and deployment—using modern tools like FastAPI, Streamlit, and MLflow.
Using Azure DevOps, we implemented a robust CI/CD pipeline to automate each step of the ML lifecycle. Finally, we deployed the complete House Price Predictor application on an Azure Kubernetes Service (AKS) cluster, enabling a user-friendly frontend (Streamlit) to interact seamlessly with a predictive backend (FastAPI).
This end-to-end project not only showcases how MLOps principles can be applied in real-world scenarios but also provides a strong foundation for deploying scalable and maintainable ML solutions in production.
Let’s be honest – coding isn’t always easy. Some days, you’re laser-focused, knocking out feature after feature. Other days, you stare at your screen, wondering,
“What’s the fastest way to write this function?”
“Is there a cleaner way to loop through this data?”
That’s where GitHub Copilot comes in.
If you haven’t tried it yet, you’re seriously missing out on one of the biggest productivity boosters available to developers today. In this blog, I’ll walk you through how to use GitHub Copilot with Visual Studio Code (VS Code), share my personal experience, and help you decide if it’s worth adding to your workflow.
Think of GitHub Copilot as your AI pair programmer.
It’s trained on billions of lines of public code from GitHub repositories and can:
It’s like having a coding buddy that never sleeps, doesn’t get tired, and is always ready to assist.
Getting started is easy. Here’s a step-by-step guide:
If you don’t have VS Code installed yet, you can install it from here.
Or directly visit here to find the extension.
After installing, you’ll be prompted to sign in using your GitHub account.
Note: GitHub Copilot is a paid service (currently), but there’s usually a free trial to test it out.
Once set up, Copilot starts making suggestions as you code. It’s kind of magical.
Here’s how it typically works:
// Function to reverse a string
Copilot will automatically generate the function for you!
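For the comment above, one plausible completion looks like this (Copilot’s exact output varies):

```javascript
// Function to reverse a string
function reverseString(str) {
  return str.split('').reverse().join('');
}

console.log(reverseString('Copilot')); // "tolipoC"
```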
Press `Tab` to accept a suggestion, or use `Alt + [` / `Alt + ]` to browse different options.

Here’s how I personally use Copilot in my day-to-day coding:
| Use Case | Why I Use Copilot |
|---|---|
| Boilerplate Code | Saves time writing repetitive patterns |
| API Calls | Auto-completes fetch or axios calls quickly (see the example below) |
| Learning New Syntax | Helps with unfamiliar frameworks like Rust or Go |
| Unit Tests | Suggests test cases faster than starting from scratch |
| Regular Expressions | Generates regex patterns (saves Googling!) |
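For the API-call use case, a short comment is usually enough for Copilot to sketch the whole request. The endpoint below is a made-up placeholder:

```javascript
// Fetch the list of users and log their names
async function getUsers() {
  const response = await fetch('https://api.example.com/users'); // hypothetical endpoint
  if (!response.ok) {
    throw new Error(`Request failed with status ${response.status}`);
  }
  const users = await response.json();
  users.forEach((user) => console.log(user.name));
  return users;
}
```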
So, will Copilot replace developers? Short answer: no.
Copilot is a tool, not a replacement for developers.
It speeds up the boring parts, but:
Think of Copilot as an assistant, not a boss. It helps you code faster, but you’re still in charge of the logic and creativity.
If you’re someone who:
Then GitHub Copilot is absolutely worth trying out.
Personally, I’ve found it to be a game-changer for productivity. It doesn’t write all my code, but it takes away the mental fatigue of boilerplate so I can focus on solving real problems.
Whether you’re building embedded software for next-gen diagnostics, modernizing lab systems, or scaling user-facing platforms, the pressure to innovate is universal, and AI is becoming a key differentiator. When embedded into the software development lifecycle (SDLC), AI offers a path to reduce costs, accelerate timelines, and equip the enterprise to scale with confidence.
But AI doesn’t implement itself. It requires a team that understands the nuance of regulated software, SDLC complexities, and the strategic levers that drive growth. Our experts are helping MedTech leaders move beyond experimentation and into execution, embedding AI into the core of product development, testing, and regulatory readiness.
“AI is being used to reduce manual effort and improve accuracy in documentation, testing, and validation.” – Reuters MedTech Report, 2025
Whether it’s generating test cases from requirements, automating hazard analysis, or accelerating documentation, we help clients turn AI into a strategic accelerator.
Outcome: Faster time to submission, reduced manual burden, improved compliance confidence
Regulatory documentation remains one of the most resource-intensive phases of medical device development.
AI won’t replace regulatory experts, but it will eliminate the grind. That’s where the value lies.
For regulatory affairs leaders and product teams, this means faster submissions, reduced rework, and greater confidence in compliance, all while freeing up resources to focus on innovation.
Outcome: Increased development velocity, reduced error rates, scalable automation
Agentic AI—systems of multiple AI agents working in coordination—is emerging as a force multiplier in software development.
Agentic AI can surface vulnerabilities early and propose mitigations faster than traditional methods. This isn’t about replacing engineers. It’s about giving them a smarter co-pilot.
Outcome: Higher product reliability, faster regression cycles, better user experiences
AI is transforming QA from a bottleneck into a strategic advantage.
We’re not just testing functionality—we’re testing intelligence. That requires a new kind of QA.
For organizations managing complex software portfolios, this transforms QA from a bottleneck into a strategic enabler of faster, safer releases.
Outcome: Scalable expertise without long onboarding cycles
AI tools are only as effective as the teams that deploy them. We provide specialized talent—regulatory technologists, QA engineers, data scientists—that bring both domain knowledge and AI fluency.
Whether you’re a CIO scaling digital health initiatives or a VP of Software managing multiple product lines, our AI-fluent teams integrate seamlessly to accelerate delivery and reduce risk.
Outcome: Informed investment decisions, future-ready capabilities
Many of the AI capabilities discussed are already in early deployment or active pilot phases. Others are in proof-of-concept, with clear paths to scale.
We understand that every organization is on a unique AI journey. Whether you’re starting from scratch, experimenting with pilots, or scaling AI across your enterprise, we meet you where you are. Our structured approach delivers value at every stage, helping you turn AI from an idea into a business advantage.
AI isn’t the new compliance partner. It’s the next competitive edge, but only when guided by the right strategy. For MedTech leaders, AI’s real opportunity comes by adopting and scaling it with precision, speed, and confidence. That kind of transformation can be accelerated by a partner who understands the regulatory terrain, the complexity of the SDLC, and the business outcomes that matter most.
No matter where you sit — on the engineering team, in the lab, in business leadership, or in patient care — AI is reshaping how MedTech companies build, test, and deliver value.
From insight to impact, our industry, platform, data, and AI expertise help organizations modernize systems, personalize engagement, and scale innovation. We deliver AI-powered transformation that drives engagement, efficiency, and loyalty throughout the lifecycle—from product development to commercial success.
Ready to move from AI potential to performance? Let’s talk about how we can accelerate your roadmap with the right talent, tools, and strategy. Contact us to get started.
In today’s data-driven landscape, organizations are not just collecting data; they are striving to understand, trust, and maximize its value. One of the critical capabilities that helps achieve that goal is data enrichment, especially when implemented through enterprise-grade governance tools like Collibra.
In this blog, we will explore how Collibra enables data enrichment, why it is essential for effective data governance, and how organizations can leverage it to drive better decision-making.
Data enrichment enhances datasets within the Collibra data governance tool by adding business context, metadata, and governance attributes, and by correcting inaccuracies, helping users understand the data’s meaning, usage, quality, and lineage.
Rather than simply documenting tables and columns, data enrichment enables organizations to transform technical metadata into meaningful, actionable insights. This enriched context empowers business and technical users alike to trust the data they are working with and use it confidently for analysis, reporting, and compliance.
In today’s digital landscape, banks manage various data formats (such as CSV, JSON, XML, and tables) with vast volumes of data originating from internal and external sources like file systems, cloud platforms, and databases. Collibra automatically catalogs these data assets and generates metadata.
But simply cataloging data isn’t enough. The next step is data enrichment, where we link technical metadata with business-glossary terms to give metadata meaningful business context and ensure consistent description and understanding across the organization. Business terms clarify what each data element represents from a business perspective, making it accessible not just to IT teams but also to business users.
In addition, each data asset is tagged with data classification labels such as PCI (Payment Card Information), PII (Personally Identifiable Information), and confidential. This classification plays a critical role in data security, compliance, and risk management, especially in a regulated industry like banking.
To further enhance the trustworthiness of data, Collibra integrates data profiling capabilities. This process analyzes the actual content of datasets to assess their structure and quality. Based on profiling results, we link data to data‑quality rules that monitor completeness, accuracy, and conformity. These rules help enforce high-quality standards and ensure that the data aligns with both internal expectations and external regulatory requirements.
An essential feature in Collibra is data lineage, which provides a visual representation of the data flow from its source to its destination. This visibility helps stakeholders understand how data transforms and moves through various systems, which is essential for impact analysis, audits, and regulatory reporting.
Finally, the enriched metadata undergoes a structured workflow-driven review process. This involves all relevant stakeholders, including data owners, application owners, and technical managers. The workflow ensures that we not only produce accurate and complete metadata but also review and approve it before publishing or using it for decision-making.
Data enrichment in Collibra is a cornerstone of a mature Data Governance Framework; it helps transform raw technical metadata into a living knowledge asset, fueling trust, compliance, and business value. By investing time in enriching your data assets, you are not just cataloging them; you are empowering your organization to make smarter, faster, and more compliant data-driven decisions.