To run a model in Ollama, use the ollama run command.

For example, to run the DeepSeek-R1:8b model and interact with it, use the following command:

ollama run deepseek-r1:8b

If the model isn’t already installed, Ollama will automatically download it.

Once the download is complete, you can interact with the model directly in the terminal:

C:\Users\Administrator>ollama run deepseek-r1:8b
pulling manifest
pulling 6340dc3229b0... 100% 4.9 GB
pulling 369ca498f347... 100% 387 B
pulling 6e4c38e1172f... 100% 1.1 KB
pulling f4d24e9138dd... 100% 148 B
pulling 0cb05c6e4e02... 100% 487 B
verifying sha256 digest
writing manifest
success
>>> hello
<think>

</think>

Hello! How can I assist you today? 😊

>>> why is everyone's fingerprint unique?
<think>
Okay, so the user asked "why is everyone's fingerprint unique?" Hmm, that's an interesting question. I remember
learning about fingerprints in school, but I'm not exactly sure why they're unique. Maybe it has something to
do with genetics or something else.

Wait, I think it's because each person's DNA is different. So, if two people had the same fingerprint, their
DNA would also be the same, right? But no, that doesn't make sense because identical twins can have the same
fingerprints and DNA, but they're still different people. Oh, but maybe the environment affects it too. Like,
how we use our hands differently could change our fingerprints over time.

I also remember hearing about something called "friction ridges" in fingerprints. Maybe those are what make
each fingerprint unique because of how the skin cells are arranged. But I'm not entirely sure if that's the
full story. Maybe there's more to it than just friction ridges.

Oh, and then there's the fact that some people might have identical fingerprints but still be different
individuals. So, uniqueness isn't the only thing determining individuality. There must be other factors like
facial features or voiceprints that also contribute to identifying someone uniquely.

I should probably look up more about how exactly friction ridges work and why they don't make identical
fingerprints common. Also, I wonder if there are any exceptions where two people could have similar
fingerprints but still not be the same person. It's a bit confusing, but I think it has something to do with
other unique traits beyond just the fingerprint itself.
</think>

The uniqueness of fingerprints is primarily due to the intricate structure of friction ridges found on the
skin, which form when the hands are repeatedly used in different ways. These ridges create patterns that are as
individual as a person's DNA, making it highly unlikely for two people to have identical fingerprints. However,
this isn't the sole factor in determining individuality; other traits like facial features and voiceprints also
play a role. Thus, while fingerprints contribute significantly to our identity, they aren't alone in ensuring
each person is unique.

>>> Send a message (/? for help)

To end the conversation, type /bye, or press Ctrl+D or Ctrl+Z.

You can use the ollama list command to view installed models:

C:\Users\Administrator>ollama list
NAME                ID              SIZE      MODIFIED
deepseek-r1:8b      28f8fd6cdc67    4.9 GB    2 hours ago
deepseek-r1:1.5b    a42b25d8c10a    1.1 GB    2 days ago
deepseek-r1:7b      0a8c26691023    4.7 GB    2 days ago
C:\Users\Administrator>

Below is a table of some models and their download commands:

Model                Parameters   Size     Download Command
DeepSeek-R1          671B         404GB    ollama run deepseek-r1:671b
DeepSeek-R1          70B          43GB     ollama run deepseek-r1:70b
DeepSeek-R1          32B          20GB     ollama run deepseek-r1:32b
DeepSeek-R1          14B          9.0GB    ollama run deepseek-r1:14b
DeepSeek-R1          8B           4.9GB    ollama run deepseek-r1:8b
DeepSeek-R1          7B           4.7GB    ollama run deepseek-r1:7b
DeepSeek-R1          1.5B         1.1GB    ollama run deepseek-r1:1.5b
Llama 3.3            70B          43GB     ollama run llama3.3
Llama 3.2            3B           2.0GB    ollama run llama3.2
Llama 3.2            1B           1.3GB    ollama run llama3.2:1b
Llama 3.2 Vision     11B          7.9GB    ollama run llama3.2-vision
Llama 3.2 Vision     90B          55GB     ollama run llama3.2-vision:90b
Llama 3.1            8B           4.7GB    ollama run llama3.1
Llama 3.1            405B         231GB    ollama run llama3.1:405b
Phi 4                14B          9.1GB    ollama run phi4
Phi 3 Mini           3.8B         2.3GB    ollama run phi3
Gemma 2              2B           1.6GB    ollama run gemma2:2b
Gemma 2              9B           5.5GB    ollama run gemma2
Gemma 2              27B          16GB     ollama run gemma2:27b
Mistral              7B           4.1GB    ollama run mistral
Moondream 2          1.4B         829MB    ollama run moondream
Neural Chat          7B           4.1GB    ollama run neural-chat
Starling             7B           4.1GB    ollama run starling-lm
Code Llama           7B           3.8GB    ollama run codellama
Llama 2 Uncensored   7B           3.8GB    ollama run llama2-uncensored
LLaVA                7B           4.5GB    ollama run llava
Solar                10.7B        6.1GB    ollama run solar

For a full list of supported models, visit: https://ollama.com/library.

Using Models with the Python SDK

If you want to integrate Ollama into your Python projects, you can use the Ollama Python SDK to load and run models.

1. Install the Python SDK

First, install the Ollama Python SDK by running the following command in your terminal:

pip install ollama
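
The SDK sends requests to the Ollama server running on your machine, so make sure Ollama is running before executing any scripts. By default the server listens on http://localhost:11434; if yours runs elsewhere, you can point the SDK at it explicitly with a Client object. The following is a minimal sketch using the default address:

from ollama import Client

# Connect to a specific Ollama server; http://localhost:11434 is the default,
# so this is only needed if your server listens on a different host or port.
client = Client(host="http://localhost:11434")

response = client.generate(model="deepseek-r1:8b", prompt="hello")
print(response["response"])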

2. Write a Python Script

Here’s an example of a Python script that uses the DeepSeek-R1:8b model to generate text:

import ollama

response = ollama.generate(
    model="deepseek-r1:8b",                         # Model name
    prompt="why is everyone's fingerprint unique?"  # Prompt text
)
print(response)

3. Run the Python Script

Execute the script in your terminal:

python ollama-example.py

You’ll see the model’s response to your prompt printed in the terminal.
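
Note that printing the whole response object also includes metadata such as the model name and timing statistics. If you only want the generated text, you can index the response field instead; here is a minimal variant of the script above:

import ollama

response = ollama.generate(
    model="deepseek-r1:8b",
    prompt="why is everyone's fingerprint unique?"
)

# The generated text is stored in the "response" field; the rest of the object is metadata.
print(response["response"])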

4. Chat Mode

Here’s an example of a conversational interaction with the model:

from ollama import chat

response = chat(
    model="deepseek-r1:8b",
    messages=[
        {"role": "user", "content": "why is everyone's fingerprint unique?"}
    ]
)
print(response.message.content)

This code will engage the model in a conversation and print its response.
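
Because chat takes the full message history, you can hold a multi-turn conversation by appending each reply to the messages list before the next call. Here is a minimal sketch; the follow-up question is just an illustration:

from ollama import chat

messages = [{"role": "user", "content": "why is everyone's fingerprint unique?"}]
first = chat(model="deepseek-r1:8b", messages=messages)

# Append the model's answer and a follow-up question to the history,
# then call chat again so the model sees the whole conversation.
messages.append({"role": "assistant", "content": first.message.content})
messages.append({"role": "user", "content": "Summarize that in one sentence."})

second = chat(model="deepseek-r1:8b", messages=messages)
print(second.message.content)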

5. Streaming Responses

For long responses, you can stream output from the model as it is generated:

from ollama import chat

stream = chat(
    model="deepseek-r1:8b",
    messages=[{"role": "user", "content": "why is everyone's fingerprint unique?"}],
    stream=True
)
for chunk in stream:
    print(chunk["message"]["content"], end="", flush=True)

This code prints the model’s response in real time, chunk by chunk, as it is generated.
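
If you need the full text after streaming (for example, to save it to a file), you can collect the chunks while printing them. A small sketch building on the example above:

from ollama import chat

stream = chat(
    model="deepseek-r1:8b",
    messages=[{"role": "user", "content": "why is everyone's fingerprint unique?"}],
    stream=True
)

# Print each chunk as it arrives and keep the pieces so the complete answer
# can be reused once the stream finishes.
parts = []
for chunk in stream:
    text = chunk["message"]["content"]
    print(text, end="", flush=True)
    parts.append(text)

full_answer = "".join(parts)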