Ollama Open WebUI
Open WebUI is a user-friendly AI interface that supports Ollama, OpenAI API, and more.
It’s a powerful AI deployment solution that works with multiple language model runners (like Ollama and OpenAI-compatible APIs) and includes a built-in inference engine for Retrieval-Augmented Generation (RAG).
With Open WebUI, you can customize the OpenAI API URL to connect to services like LMStudio, GroqCloud, Mistral, and OpenRouter.
Administrators can create detailed user roles and permissions, ensuring a secure environment while offering a personalized user experience.
Open WebUI is compatible with desktops, laptops, and mobile devices. It also supports Progressive Web Apps (PWA) for offline access on mobile.
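For example, the OpenAI API URL customization mentioned above is typically done through environment variables when starting the container. The sketch below uses Open WebUI's OPENAI_API_BASE_URL and OPENAI_API_KEY settings with OpenRouter as an illustrative endpoint; the key value is a placeholder, and you should verify the variable names against the official documentation for your version:
docker run -d -p 3000:8080 -e OPENAI_API_BASE_URL=https://openrouter.ai/api/v1 -e OPENAI_API_KEY=your_api_key -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main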
Open Source Repository: https://github.com/open-webui/open-webui
Official Documentation: https://docs.openwebui.com/
Installation
Open WebUI offers multiple installation options, including Python pip, Docker, Docker Compose, Kustomize, and Helm.
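As a rough sketch of the Docker Compose route, a minimal compose file could look like the following. The service layout and volume names here are illustrative; the repository ships a canonical docker-compose.yaml that should be preferred:
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3000:8080"
    extra_hosts:
      - "host.docker.internal:host-gateway"
    volumes:
      - open-webui:/app/backend/data
    restart: always
volumes:
  open-webui:
Save this as docker-compose.yaml and start it with docker compose up -d.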
Quick Start with Docker
If Ollama is already installed on your system, use the following command:
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
For Nvidia GPU support:
docker run -d -p 3000:8080 --gpus all --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:cuda
Bundled Installation with Ollama
This method combines Open WebUI and Ollama into a single container for easy setup.
For GPU support:
docker run -d -p 3000:8080 --gpus=all -v ollama:/root/.ollama -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:ollama
For CPU-only setups:
docker run -d -p 3000:8080 -v ollama:/root/.ollama -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:ollama
Once installed, access Open WebUI at http://localhost:3000.
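If the page does not load, a quick way to check that the container came up is to inspect its status and logs (assuming the container name open-webui used above):
docker ps --filter name=open-webui
docker logs -f open-webui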
Updating Open WebUI
Manual Update
Use Watchtower to manually update the Docker container:
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock containrrr/watchtower --run-once open-webui
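Alternatively, you can update by hand by pulling the newer image and recreating the container; a rough sketch, assuming the quick-start command from above (adjust the flags to match whatever options you originally used):
docker pull ghcr.io/open-webui/open-webui:main
docker rm -f open-webui
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
Your data is preserved because it lives in the open-webui named volume, not in the container itself.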
Automatic Updates
To have Watchtower check for and apply updates every 5 minutes:
docker run -d --name watchtower --restart unless-stopped -v /var/run/docker.sock:/var/run/docker.sock containrrr/watchtower --interval 300 open-webui
Note: Replace open-webui with your container name if it’s different.
Manual Installation
Open WebUI can be installed using either the uv runtime manager or Python’s pip.
Using uv (Recommended)
uv simplifies environment management and reduces potential conflicts.
macOS/Linux:
curl -LsSf https://astral.sh/uv/install.sh | sh
Windows:
powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"
After installing uv, run Open WebUI with the following commands. Make sure to set the DATA_DIR environment variable to avoid data loss.
macOS/Linux:
DATA_DIR=~/.open-webui uvx --python 3.11 open-webui@latest serve
Windows:
$env:DATA_DIR="C:\open-webui\data"; uvx --python 3.11 open-webui@latest serve
Using pip
Ensure you’re using Python 3.11 to avoid compatibility issues.
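One way to keep the installation on Python 3.11 is to create a dedicated virtual environment first. This sketch assumes a python3.11 interpreter is already available on your PATH; the environment name is arbitrary:
python3.11 -m venv open-webui-env
source open-webui-env/bin/activate
On Windows, activate it with open-webui-env\Scripts\activate instead.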
Install Open WebUI with:
pip install open-webui
Start the server with:
open-webui serve
Once running, access Open WebUI at http://localhost:8080.