

Connect Ollama to Your Workflows: Power Automate + VS Code Integration Guide


AI is evolving rapidly, and the ability to run local models with Ollama on your own machine opens up powerful new possibilities for developers, hobbyists, and builders. Whether you’re working on automation, development tools, or privacy-sensitive applications, cloud-based models aren’t always ideal.

That’s where Ollama comes in.

Ollama makes it easy to run, customize, and serve LLMs directly from your machine — no GPU setup or Docker needed. You can run models like LLaMA2, Mistral, or Gemma, or even build your own using a simple Modelfile.

To take it further, you can integrate Ollama with Power Automate to trigger real-time, AI-powered workflows — all while keeping your data local and secure. This integration lets you automate tasks like generating email replies, summarizing content, or logging AI responses to SharePoint or Teams — without relying on cloud APIs.

In this blog, I’ll walk you through everything you need to get started with Ollama — from downloading and interacting with models in VS Code to integrating responses into Power Automate flows.

What is Ollama?

Ollama is a local LLM (Large Language Model) runtime that can be installed directly on your PC, making it completely cloud independent. You can use it as your personal AI assistant with the added benefit of enhanced security and privacy since everything runs locally.

Why Do We Need Ollama?

  • Works without internet — ideal for offline or network-restricted environments
  • No cloud dependency — full control over your data and usage
  • Acts like a custom assistant tailored to your tasks
  • Allows you to build your own models using a simple Modelfile

Steps to Download and Install Ollama

  1. Visit the official site: https://ollama.com/download
  2. Download the installer for Windows, macOS, or Linux, depending on your OS.
  3. Run the downloaded installer (.exe or .dmg)
  4. Once Ollama is installed, you can run models directly from your command prompt. First, verify the installation with:
    ollama --version

    or

    ollama
  5. Explore the available commands using:
    ollama --help
    
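If you’d like to run the same check from a script instead of the terminal, here’s a minimal Python sketch (assuming Python is installed) that simply shells out to ollama --version:

import subprocess

# Run "ollama --version" and capture its output; a missing executable or a
# non-zero exit code means Ollama is not installed or not on the PATH.
try:
    result = subprocess.run(
        ["ollama", "--version"],
        capture_output=True,
        text=True,
        check=True,
    )
    print(result.stdout.strip())
except (FileNotFoundError, subprocess.CalledProcessError):
    print("Ollama does not appear to be installed or is not on your PATH.")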

Ollama Command Reference (Terminal Commands)

Command | Context | Description | Example
ollama run | Terminal | Runs the specified model for chat interaction. | ollama run mistral
ollama pull | Terminal | Downloads the model to your machine. | ollama pull llama2
ollama list | Terminal | Shows all downloaded models locally. | ollama list
ollama create -f Modelfile | Terminal | Creates a new model from a custom Modelfile. | ollama create mistral_assistant -f Modelfile
ollama serve | Terminal | Starts the Ollama API server for integrations. | ollama serve
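
Because ollama serve is what the Power Automate integration later in this post relies on, it helps to know the server listens on http://localhost:11434 by default. Here’s a minimal Python sketch (assuming the requests package is installed) to confirm the server is reachable and list the models it exposes:

import requests

# The root endpoint returns a short status message when the server is running.
print(requests.get("http://localhost:11434").text)

# /api/tags returns JSON describing the models downloaded on this machine.
tags = requests.get("http://localhost:11434/api/tags").json()
for model in tags.get("models", []):
    print(model["name"])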

Downloading a Model / Choosing a Model

  1. Visit the model library: https://ollama.com/library — here, you can explore model usage, specialties, and space requirements.
  2. Choose a model (e.g., mistral)
  3. Pull the model by running:
    ollama pull mistral

    or

    ollama pull <model_name>
  4. Confirm the download with:
    ollama list
  5. To interact with the model, use:
    ollama run mistral or ollama run <model_name>


  6. When you’re done, type /bye to end the session — otherwise, it will keep running in the background.

Inside the model session, use /help or /? to see available commands.

In-Model Commands

When you’re interacting inside a model session (after running ollama run <model>), the following shortcuts and commands are available:

Command | Description | Example
/? or /help | Lists all available chat commands. | /?
/bye | Ends the current model session. | /bye
/system | Sets a system prompt to guide the model’s behavior. | /system You are a polite assistant.
/reset | Clears the current conversation history. | /reset

Using Ollama in VS Code

  1. Install the Python package:
    pip install ollama
  2. Ensure Ollama is running in the background by either:
    • Running ollama serve in the terminal, or
    • Searching for “Ollama” and clicking on its icon.
  3. Use this sample Python script to interact with a model:
import ollama

response = ollama.chat(
    model='mistral',
    messages= [
        {
            'role': 'user',
            'content': 'Explain quantum computing in simple terms'
        }
    ],
    options={
        'temperature': 0.8
    }
)

print(response['message']['content'])

Now let’s understand what each part of the code means:
Code Line | Explanation
import ollama | Imports the Ollama Python library to interact with local language models.
model='mistral', options={'temperature': 0.8} | Specifies the model to use (mistral) and sets the temperature option. temperature = 0.8 means the output will be more creative and diverse; lower values (e.g., 0.2) produce more focused and predictable answers.
messages=[{'role': 'user', 'content': 'Explain quantum computing in simple terms'}] | Defines the user message you want to send to the model. You can add multiple messages in a list to maintain chat context.
print(response['message']['content']) | Displays only the model’s reply (text content) in the console.

Running this script prints a valid response from Ollama directly in the VS Code terminal.


4. You can also adjust parameters like temperature, top_p, and repeat_penalty for more control.
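
For example, here’s a minimal variation of the script above that adds a system message to steer the model (the programmatic equivalent of the /system chat command) and tightens the sampling options; the exact values are just illustrative:

import ollama

# A system message guides the model's behavior; additional user/assistant turns
# can be appended to the list to preserve conversation context across calls.
response = ollama.chat(
    model='mistral',
    messages=[
        {'role': 'system', 'content': 'You are a concise technical assistant.'},
        {'role': 'user', 'content': 'Explain quantum computing in simple terms'}
    ],
    options={
        'temperature': 0.2,    # lower = more focused, predictable answers
        'top_p': 0.9,          # nucleus sampling cutoff
        'repeat_penalty': 1.1  # discourages repeated phrases
    }
)

print(response['message']['content'])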

Integrate Ollama with Power Automate

You can connect Ollama to Power Automate by triggering HTTP flows from Python or any backend script. For example, after getting a response from Ollama, you can forward it to Power Automate with a simple POST request. The code below does exactly that; just replace the flow URL with your own.

Make sure you have already created a flow in Power Automate with a “When an HTTP request is received” trigger.
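
When configuring that trigger, you can let Power Automate generate the request body’s JSON schema from a sample payload. Here’s a small sketch that prints the payload this example sends (the response key is just the name used in this post; match whatever your flow expects):

import json

# Sample payload matching what the script below posts to the flow. Paste the
# printed JSON into the trigger's option for generating a schema from a sample payload.
sample_payload = {'response': 'example text from Ollama'}
print(json.dumps(sample_payload, indent=2))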
With the flow in place, the Python script triggers it successfully.

Here’s the code.

import ollama
import requests

# Step 1: Get response from Ollama
response = ollama.chat(
    model='mistral',
    messages=[
        {'role': 'user', 'content': 'Explain quantum computing in simple terms'}
    ],
    options={'temperature': 0.8}
)

result_text = response['message']['content']

# Step 2: Send response to Power Automate
flow_url = 'https://prod-xxx.westus.logic.azure.com:443/workflows/xyz/triggers/manual/paths/invoke?...'  # Replace with your real URL

payload = {
    'response': result_text
}

headers = {
    'Content-Type': 'application/json'
}

r = requests.post(flow_url, json=payload, headers=headers)

print(f"Power Automate Status Code: {r.status_code}")

For step-by-step integration, refer to my other blog:
Python Meets Power Automate: Trigger via URL / Blogs / Perficient

Conclusion

Now you know how to:

  • Install and run Ollama locally
  • Download and interact with models
  • Use Ollama in VS Code
  • Integrate Ollama with Power Automate

Coming Up Next

In the next part of this series, we’ll explore how to create your own model using Ollama and run it using a Modelfile.

Stay tuned!

