While interacting with AI models, I feel like I'm under surveillance, as every AI chatbot collects data on how I interact with its output.
So I hesitate to ask personal questions, because I don't want companies collecting my personal insecurities in the name of "collecting data to improve the product".
But giving up on AI models is not a good choice either, and while looking for a solution, I came across Ollama, which lets you run LLMs locally.
Without the internet. Without collecting your data. Your data is never sent to anyone's servers.
This tutorial covers the following: installing Ollama on Windows, downloading and running AI models in the terminal, setting up Open WebUI to use Ollama with a GUI, and removing everything once you are done.
So let’s start with the first one.
Ollama is an open-source tool that simplifies the process of running large language models (LLMs) like Llama 2, Mistral, and CodeLlama locally on your own machines. It bundles model weights, configuration, and data into a single package defined by a Modelfile, optimizing setup and GPU usage.
I'd say it is one of the easiest ways to download, customise and run LLMs locally. The best part is that Ollama is available for all major platforms including Linux, Windows and macOS.
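To give you an idea of what a Modelfile looks like, here is a minimal sketch (the base model, the temperature value, the system prompt and the my-dolphin name are placeholders I picked for illustration, not something this tutorial requires). A plain text file named Modelfile with the following content:
FROM dolphin-llama3
PARAMETER temperature 0.8
SYSTEM You are a concise and helpful assistant.
can be registered as a custom model with ollama create my-dolphin -f Modelfile once Ollama is installed (covered below). You don't need a Modelfile at all to simply run existing models, though.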
To run Ollama, there are a few key prerequisites:
System Requirements:
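The exact numbers depend on the model you choose, but roughly:
A 64-bit installation of Windows 10 or newer.
About 8 GB of RAM for 7B models, 16 GB for 13B models and 32 GB for 33B models (this is the Ollama project's own guidance).
A few gigabytes of free disk space per downloaded model.
Optionally, a dedicated NVIDIA or AMD GPU for faster responses; a CPU-only machine works too, just more slowly.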
To install Ollama on Windows, visit the official download page of Ollama, choose Windows, and download the executable file:
Once done, open the downloaded file. All you have to do is hit the Install button, and the installer will take care of everything else:
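Once the installer finishes, you can quickly confirm that Ollama is ready by opening a terminal (PowerShell or Command Prompt) and checking its version:
ollama --version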
Visit the model library of Ollama to check whether your favourite model is available. For example, I want to run Dolphin Llama, an uncensored version of Llama, so here I checked the availability of that particular model:
Once you have found the model, let's take a look at how to download it. Open your terminal and replace model-name in the following command with the actual model name to download the target model:
ollama pull <model-name>
For example, I want to download dolphin-llama3, so I will use the following:
ollama pull dolphin-llama3
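As a side note, ollama run <model-name> (covered in the next section) also downloads a model automatically the first time you use it, so pulling in advance is optional; it is just a convenient way to get the download out of the way beforehand.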
After downloading the model, you can list the added models with the following command:
ollama list
Once you are done downloading AI models, it is time to run them. Remember, Ollama itself works through the terminal, but in the next part, I will share a tool that lets you use a GUI to interact with Ollama.
To run a model, it is necessary to know its exact name and for that purpose, you can list the downloaded models:
ollama list
To run a model, you need to append the model name to the ollama run command as shown here:
ollama run <model-name>
As I downloaded Dolphin Llama, my command would look like this:
ollama run dolphin-llama3
As you can see, I gave it a prompt asking why running an AI model locally is important and it gave me pretty reasonable answers.
Without sharing my data, without using the internet.
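You can leave the interactive session at any time by typing /bye. And if you only want a quick one-off answer, you can pass the prompt directly on the command line instead of opening a session (the question here is just an example):
ollama run dolphin-llama3 "Why is running an AI model locally important?"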
While every geek (like me) prefers using a terminal, a GUI is always a neat option for interacting with software. But as you know, Ollama does not come with a GUI out of the box, so we need a third-party solution.
Don’t worry, the solution that I’m about to recommend is free, open-source and trusted by thousands of users. I’ll be using Open WebUI for this tutorial.
To install Open WebUI, you first need to install Docker Desktop on Windows. For that purpose, head over to the official download page of Docker and download the Docker Desktop installer for your system architecture:
Open the downloaded setup file and keep all the default settings checked as shown here:
Once the installation is complete, start Docker Desktop from the start menu and it will ask you to read and accept/decline the terms and conditions. You may go through all the terms and then press the Accept button:
Docker will start automatically, and you can now minimise Docker Desktop.
Now, let’s install Open WebUI on Windows.
To install Open WebUI, which will serve as the GUI for Ollama, open the terminal and execute the following command:
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
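In case you are wondering what that long command does: -d runs the container in the background, -p 3000:8080 exposes the web interface on port 3000 of your machine (which is why you will open localhost:3000 in a moment), -v open-webui:/app/backend/data creates a named volume so your account and chat history survive restarts, --restart always brings the container back up whenever Docker starts, and --add-host=host.docker.internal:host-gateway lets the container talk to the Ollama service running on your Windows host.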
Once the installation process is complete, make sure the Open WebUI container is running using the following command:
docker ps
It is time to run Open WebUI. To do so, open your browser and enter the following in your address bar:
http://localhost:3000
As you will be using Open WebUI for the first time, it will ask you to create an account. Don't worry, the email and password are stored locally, as everything runs in Docker on your own machine:
That’s it! Now, you can choose from the downloaded models at the top and start asking questions without any hesitation (as you are not being monitored):
There you have it!
If you don't want to use Ollama on your computer, it can be removed in a few easy steps.
First, I will explain how to remove Open WebUI's Docker image, then how to remove the installed AI models, and finally how to remove Ollama itself from Windows.
As Open WebUI was installed as a Docker container, you need to remove both the container and its image. Worry not; it may sound like a big task, but it only takes a few commands.
First, stop the Docker instance of Open WebUI by executing the following command in the terminal:
docker container stop open-webui
Now, remove the Docker instance of Open WebUI using the following command:
docker container remove open-webui
Next, we need to remove the Open WebUI Docker image. To do so, we first need its image ID, which you can find by listing the Docker images with the following command:
docker images
Now, enter the image ID of the Open WebUI image in the following command:
docker image rm IMAGE_ID
In my case, the image ID is 7d2c4a94a90f, so my command to remove the Docker image would look like this:
docker image rm 7d2c4a94a90f
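Alternatively, you can remove the image by its name instead of the ID:
docker image rm ghcr.io/open-webui/open-webui:main
And if you also want to wipe the Open WebUI data stored locally (your account and chat history), delete the named volume that was created by the -v flag during installation:
docker volume rm open-webui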
To remove Ollama from Windows completely, you first need to remove the installed models. For that purpose, list them using the following command:
ollama list
Next, enter the exact name of the model in the following command to remove it:
ollama rm <model-name>
As I want to remove Dolphin Llama 3, I will use the following:
ollama rm dolphin-llama3
If you have more than one model installed, repeat this process for each of them.
Now, you need to stop the Ollama service. For that purpose, right-click on the Ollama icon located at the bottom right (in the system tray) and choose Quit Ollama:
Now, search for Ollama in the start menu and choose the Uninstall option:
Now, double-click on Ollama in the list of installed programs and it will be removed from your system:
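One last note: on a default installation, Ollama keeps its downloaded model files under the .ollama folder in your user profile (typically C:\Users\<your-username>\.ollama). If anything is left behind there after uninstalling, you can delete that folder manually to reclaim the disk space.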
That’s it!
In this tutorial, I went through how you can install and use Ollama on Windows, including downloading AI models, using them in the terminal, and running Ollama with a GUI.
At the end, I’ve also mentioned how you can remove almost everything that you installed for this project. I hope you will find this guide helpful.
If you have any queries, leave us a comment.