Use Ollama to Run LLMs Locally on Windows

Run LLMs locally to ensure that no model is trained on your personal data. Ollama is a great way to do that.

While interacting with AI models, I feel I'm under surveillance, as every AI chatbot collects data on how I interact with its output.

So I hesitate to ask personal questions, because I don't want companies to collect my personal insecurities in the name of "collecting data to improve their product".

But giving up on AI models is not a good choice either, and while looking for a solution, I came across Ollama, which lets you use LLMs locally.

Without internet. Without collecting your data. Your data will never be sent to their servers.

This tutorial covers the following:

  • What is Ollama?
  • How to download AI models from Ollama
  • How to run AI models locally using Ollama
  • How to use a graphical user interface with Ollama (my favourite)
  • How to remove Ollama from Windows (if you want to)

So let’s start with the first one.

What is Ollama?

Ollama is an open-source tool that simplifies the process of running large language models (LLMs) like Llama 2, Mistral, and CodeLlama locally on your own machines. It bundles model weights, configuration, and data into a single package defined by a Modelfile, optimizing setup and GPU usage.
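
To give you an idea of what a Modelfile looks like, here is a minimal sketch (the base model, parameter value and system prompt are just placeholders I picked for illustration; check Ollama's Modelfile documentation for the full syntax):

FROM llama2
PARAMETER temperature 0.7
SYSTEM "You are a concise assistant that answers in plain English."

You would save this as a file named Modelfile and build a custom model from it with ollama create my-assistant -f Modelfile, where my-assistant is any name you like.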

I'd say it is one of the easiest ways to download, customise and run LLMs locally. The best part is that Ollama is available for all major platforms including Linux, Windows and macOS.

To run Ollama, there are a few key prerequisites:

System Requirements:

  • RAM: 8GB for 3B models, 16GB for 7B models, 32GB for 13B models
  • GPU (Optional): An NVIDIA GPU with compute capability 5.0 or higher, or a supported AMD GPU, is recommended for the best performance (see the quick check below if you're not sure what you have).
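
If you are unsure about your hardware, you can check the installed RAM and look for an NVIDIA GPU from the terminal (the second command assumes the NVIDIA drivers are installed; skip it otherwise):

systeminfo | findstr /C:"Total Physical Memory"
nvidia-smi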

Install Ollama on Windows

To install Ollama on Windows, visit the official download page of Ollama, choose Windows and download the executable file:

Download Ollama on Windows

Once it has downloaded, open the file, hit the Install button, and the installer will take care of everything else:

Installer of Ollama on Windows
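
Once the installer finishes, you can quickly confirm that Ollama is available by opening a terminal and checking its version (this assumes the installer added ollama to your PATH, which it does by default):

ollama --version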

Download AI models in Ollama

Visit Ollama's model library to check whether your favourite model is available. For example, I want to run Dolphin Llama, an uncensored version of Llama, so I searched for that particular model:

Search models in Ollama

Once you find the model, let's take a look at how to download it. Open your terminal and replace <model-name> with the actual model name in the following command:

ollama pull <model-name>

For example, I want to download dolphin-llama3, so I will use the following:

ollama pull dolphin-llama3
How to download model in ollama
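
A small tip: many models in the library come in several sizes, and you can pick one with a tag after the model name. For example, something like the following should fetch the 8B variant (check the model's page in the library for the exact tags that exist):

ollama pull dolphin-llama3:8b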

After downloading the model, you can list the models you have added using the following command:

ollama list
List downloaded models through Ollama

Running AI models locally using Ollama

Once you are done downloading AI models, it is time to run them. Remember, Ollama itself is used through the terminal, but in the next part, I will share a tool that gives you a GUI to interact with Ollama.

To run a model, it is necessary to know its exact name and for that purpose, you can list the downloaded models:

ollama list
List downloaded models through Ollama

To run a model, pass its name to the ollama run command as shown here:

ollama run <model-name>

As I downloaded Dolphin Llama, my command would look like this:

ollama run dolphin-llama3
Running AI model locally using Ollama

As you can see, I gave it a prompt asking why running an AI model locally is important and it gave me pretty reasonable answers.

Without sharing my data, without using the internet.
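
One small tip: to leave the interactive chat, type /bye at the prompt (Ctrl+d also works).

Behind the scenes, Ollama also runs a local HTTP API on port 11434, which is what GUI tools (like the one in the next section) talk to. If Ollama is running with its default settings, the following check should reply with "Ollama is running":

curl http://localhost:11434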

Run Ollama in GUI using Open WebUI

While every geek (like me) prefers using a terminal, a GUI is always a neat option for interacting with software. But as you know, Ollama does not come with a GUI out of the box, so we need a third-party solution.

Don’t worry, the solution that I’m about to recommend is free, open-source and trusted by thousands of users. I’ll be using Open WebUI for this tutorial.

To install Open WebUI, you first need to install Docker on Windows. For that purpose, head over to the official download page of Docker and download the Docker executable file for your system architecture:

Download docker in Windows

Open the downloaded setup file and keep all the default settings checked as shown here:

Choose default options while installing Docker on Windows

Once the installation is complete, start Docker Desktop from the start menu and it will ask you to read and accept/decline the terms and conditions. You may go through all the terms and then press the Accept button:

Accept terms and conditions of Docker Desktop

Docker will start automatically, and you can now minimise Docker Desktop.
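
Before moving on, you can confirm that Docker is up and running from the terminal; if the daemon is ready, this command prints both the client and server versions without errors:

docker version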

Now, let’s install Open WebUI on Windows.

To install Open WebUI, which will serve as the GUI for Ollama, open the terminal and execute the following command:

docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
Install Open WebUI in Windows to run Ollama with GUI
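
A quick note on what that command does: -d runs the container in the background, -p 3000:8080 exposes the web interface on port 3000 of your machine, -v open-webui:/app/backend/data creates a named volume so your chats and account survive restarts, and --add-host=host.docker.internal:host-gateway lets the container reach the Ollama API running on your Windows host. If the page ever refuses to load, the container's logs are a good first place to look:

docker logs open-webui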

Once the installation process is complete, make sure the Open WebUI container is running using the following command:

docker ps
Check the status of Open WebUI

It is time to run Open WebUI. To do so, open your browser and enter the following in your address bar:

http://localhost:3000

As you will be using Open WebUI for the first time, it will ask you to create an account. Don't worry, the email and password are stored locally, as everything is hosted in Docker on your own machine:

Create account on Open WebUI to run Ollama on Windows

That’s it! Now, you can choose from the downloaded models at the top and start asking questions without any hesitation (as you are not being monitored):

Running Ollama on Windows in GUI using Open WebUI

There you have it!

How to uninstall Ollama from Windows

If you no longer want Ollama on your computer, it can be removed in a few easy steps.

First, I will explain how to remove the Open WebUI Docker image, then how to remove the installed AI models, and at the end, we will remove Ollama itself from Windows.

Removing Open WebUI from Windows

As Open WebUI was installed as a Docker image, you'd need to remove that image. Worry not. It may sound like a huge task, but it only takes a few commands.

First, stop the Docker instance of Open WebUI by executing the following command in the terminal:

docker container stop open-webui
Stop Open WebUI in Windows

Now, remove the Docker instance of Open WebUI using the following command:

docker container remove open-webui
Remove Open WebUI from Windows

Next, we need to remove the Docker image of Open WebUI. To do so, we first need its image ID, which you can find by listing the Docker images with the following command:

docker images
Know the image ID of Open WebUI
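
If you have many images on your machine, you can narrow the list down to just Open WebUI by passing the repository name (assuming it was pulled from ghcr.io as in the install command above):

docker images ghcr.io/open-webui/open-webui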

Now, enter the image ID of the Open WebUI image in the following command:

docker image rm IMAGE_ID

In my case, the image ID is 7d2c4a94a90f, so my command to remove the Docker image would look like this:

docker image rm 7d2c4a94a90f
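
One optional extra step: the install command above also created a named volume called open-webui to store your chats and account data. If you want that gone as well, remove the volume too:

docker volume rm open-webui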

Removing Ollama from Windows

To remove Ollama from Windows cleanly, you first need to remove the installed models, and for that, list them using the following:

ollama list
List downloaded models through Ollama

Next, enter the exact name of the model in the following command to remove it:

ollama rm <model-name>

As I want to remove the Dolphin Llama3, I will use the following:

ollama rm dolphin-llama3
Remove Ollama models from Windows

If you have more than one model installed, you can repeat this process multiple times.
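
Alternatively, ollama rm should also accept several model names at once, so something like this removes them in a single command (dolphin-llama3 is the model from this tutorial; mistral is just a placeholder for a second model you may have):

ollama rm dolphin-llama3 mistral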

Now, you need to stop the Ollama service. For that purpose, right-click on the Ollama icon located at the bottom right (in the system tray) and choose Quit Ollama:

Stop Ollama from Windows

Now, search for Ollama in the start menu and choose the Uninstall option:

Uninstall Ollama from Windows

Now, double-click on the Ollama entry and it will be removed from your system:

Uninstalling Ollama from Windows

That’s it!

Wrapping Up…

In this tutorial, I went through how to install and use Ollama on Windows, including downloading AI models, using them in the terminal and running Ollama with a GUI.

At the end, I’ve also mentioned how you can remove almost everything that you installed for this project. I hope you will find this guide helpful.

If you have any queries, leave us a comment.
