# Setting Up Open WebUI with Docker
This step-by-step guide will walk you through setting up Open WebUI with Docker and configuring it to work with locally running Ollama models, with data persistence for both services.
## Prerequisites
Before you begin, make sure you have:
- Docker and Docker Compose installed on your system
- Basic understanding of Docker concepts
- At least 10GB of free disk space (varies based on which models you'll use)
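Before proceeding, it's worth confirming that Docker and Compose are actually on your PATH. A quick sanity check (version numbers will vary):

```bash
# Verify Docker and Docker Compose are installed
docker --version
docker-compose --version   # or "docker compose version" if you use the Compose v2 plugin
```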
## Step 1: Understand the Components

- **Open WebUI**: An open-source chat interface for AI models
  - Provides a user-friendly interface similar to ChatGPT
  - Can connect to local LLM solutions like Ollama
  - Official GitHub repository: https://github.com/open-webui/open-webui
- **Ollama**: A framework for running large language models locally
  - Provides a simple API to interact with models
  - Handles model management and optimization
  - Official website: https://ollama.com
## Step 2: Create the Docker Compose File

1. Create a new directory for your project:

   ```bash
   mkdir openwebui-ollama
   cd openwebui-ollama
   ```

2. Create a `docker-compose.yml` file with the following configuration:

   ```yaml
   version: '3'
   services:
     openwebui:
       image: ghcr.io/open-webui/open-webui:main
       restart: always
       ports:
         - "3000:8080"
       volumes:
         - ./openwebui_data:/app/backend/data
     ollama:
       image: ollama/ollama:latest
       ports:
         - "11434:11434"
       volumes:
         - ./ollama_data:/root/.ollama
   ```
3. Understanding the configuration:

   - **Open WebUI service**:
     - Uses the official Docker image from the GitHub Container Registry
     - Maps port 3000 on your host to port 8080 in the container
     - Persists data in a local `./openwebui_data` directory
     - Set to restart automatically if it crashes
   - **Ollama service**:
     - Uses the official Ollama Docker image
     - Exposes port 11434 for API access
     - Stores models and configuration in the `./ollama_data` directory
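Before launching, you can have Compose validate the file; `docker-compose config` parses the YAML and prints the resolved configuration, which catches indentation mistakes early:

```bash
# Validate and print the resolved Compose configuration
docker-compose config
```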
## Step 3: Launch the Services

1. Start the services by running:

   ```bash
   docker-compose up -d
   ```

2. Verify both containers are running:

   ```bash
   docker-compose ps
   ```

3. Wait for the images to download and the containers to start (this may take a few minutes depending on your internet connection).
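As a quick smoke test, you can query the Ollama API directly from the host (this assumes the `11434:11434` port mapping from the Compose file above):

```bash
# Confirm the Ollama API is reachable; returns a JSON version string
curl http://localhost:11434/api/version

# List locally installed models (empty until Step 6)
curl http://localhost:11434/api/tags
```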
## Step 4: Access Open WebUI

1. Open a web browser and navigate to `http://localhost:3000`

2. You should see the Open WebUI interface.
## Step 5: Connect Open WebUI to Ollama

1. In the Open WebUI interface, go to Settings > LLM Providers
2. Select Ollama as your provider
3. Set the API endpoint to `http://ollama:11434` (this uses Docker's internal DNS, so the service name resolves to the Ollama container)
4. Click "Test Connection" to verify it works
5. Save your settings
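Alternatively, the connection can be pre-configured through an environment variable so it's set on first start. A minimal sketch, assuming the service names from Step 2 (`OLLAMA_BASE_URL` is the variable Open WebUI reads for this):

```yaml
# Excerpt of docker-compose.yml: pre-configure the Ollama endpoint
services:
  openwebui:
    image: ghcr.io/open-webui/open-webui:main
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434   # "ollama" resolves via Docker's internal DNS
    depends_on:
      - ollama
```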
## Step 6: Install AI Models

### Example Model: Mistral 7B
For this tutorial, we'll use Mistral 7B as our example model. It's an excellent choice for local deployment because:
- Balanced size and performance: At 7 billion parameters, it offers good performance while being manageable on consumer hardware
- Efficient resource usage: Requires approximately 4-6GB of VRAM when running
- Good capabilities: Handles reasoning, code generation, and general text tasks well
- Fast responses: Generates text at reasonable speeds even on consumer hardware
### Option A: Using the Ollama CLI

1. Access the Ollama container:

   ```bash
   docker exec -it $(docker ps -q --filter name=ollama) /bin/bash
   ```

2. Pull the Mistral 7B model:

   ```bash
   ollama pull mistral
   ```

   This will download approximately 4.1GB of data.

3. Verify the model was installed correctly:

   ```bash
   ollama list
   ```

4. Exit the container:

   ```bash
   exit
   ```
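You can also skip the interactive shell and run the pull in a single command, or trigger it through Ollama's HTTP API; both are equivalent to the steps above (the `/api/pull` endpoint streams download progress as JSON lines):

```bash
# Pull the model without opening a shell in the container
docker exec -it $(docker ps -q --filter name=ollama) ollama pull mistral

# Or pull via the Ollama HTTP API from the host
curl http://localhost:11434/api/pull -d '{"model": "mistral"}'
```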
### Option B: Using the Open WebUI Interface

1. Navigate to the Models section in Open WebUI
2. Find "mistral" in the available models list and click Download
3. Wait for the download to complete (about 4.1GB)
4. After installation, the model will appear in your available models list
## Step 7: Start Chatting

1. Select your downloaded model from the model dropdown
2. Start a new conversation
3. Enter prompts and interact with the AI
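If you'd rather test the model outside the web interface, Ollama's generate endpoint works from the command line; setting `"stream": false` returns the whole completion as a single JSON object:

```bash
# One-shot completion against the local Mistral model
curl http://localhost:11434/api/generate -d '{
  "model": "mistral",
  "prompt": "Explain Docker volumes in one sentence.",
  "stream": false
}'
```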
## Step 8: Understand Data Persistence

1. Your data is stored in two locations:

   - `./openwebui_data` - conversations, settings, and Open WebUI configuration
   - `./ollama_data` - AI models and Ollama configuration

2. These directories will persist your data even if you restart or rebuild the containers.
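To see how much space each directory is consuming (model files dominate; Mistral 7B alone is roughly 4GB), check from the host:

```bash
# Show disk usage of the persisted data directories
du -sh openwebui_data ollama_data
```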
## Troubleshooting

### Connection Issues
If Open WebUI can't connect to Ollama:
1. Verify both containers are running:

   ```bash
   docker-compose ps
   ```

2. Check that the Ollama API endpoint is set correctly in the Open WebUI settings
3. Ensure there are no firewall rules blocking the connections
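The container logs usually reveal the root cause; a couple of commands worth trying (using the service names from the Compose file above):

```bash
# Tail recent logs from both services
docker-compose logs --tail=50 openwebui
docker-compose logs --tail=50 ollama

# Confirm the Ollama API responds from the host
curl http://localhost:11434/api/version
```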
### Model Download Failures
If model downloads fail:
1. Check your internet connection
2. Ensure you have enough disk space
3. Try downloading a smaller model first to test
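Insufficient disk space is the most common culprit; these commands show what's free and what Docker itself is consuming:

```bash
# Free space on the filesystem holding your project directory
df -h .

# Space used by Docker images, containers, and volumes
docker system df
```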
## Upgrading
To upgrade to newer versions:
1. Pull the latest images:

   ```bash
   docker-compose pull
   ```

2. Restart the services with the new images:

   ```bash
   docker-compose up -d
   ```
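After an upgrade, the superseded image layers remain on disk; once you've confirmed the new versions work, you can reclaim the space:

```bash
# Remove dangling images left behind by the upgrade
docker image prune -f
```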
## Step 9: Backup and Restore

### Backing Up Your Data
1. Stop the running containers (this ensures data integrity during the backup):

   ```bash
   docker-compose down
   ```

2. Create a compressed backup of your data directories:

   ```bash
   tar -czvf openwebui-backup-$(date +%Y%m%d).tar.gz openwebui_data ollama_data
   ```

3. Verify the backup was created successfully:

   ```bash
   ls -la openwebui-backup-*.tar.gz
   ```

4. Store the backup file in a secure location (cloud storage, external drive, etc.)

5. Restart your services:

   ```bash
   docker-compose up -d
   ```
### Restoring on Another Machine

1. Install Docker and Docker Compose on the new machine

2. Create a new directory for your Open WebUI setup:

   ```bash
   mkdir openwebui-ollama
   cd openwebui-ollama
   ```

3. Copy your `docker-compose.yml` file to this directory

4. Copy your backup file to this directory and extract it:

   ```bash
   tar -xzvf openwebui-backup-YYYYMMDD.tar.gz
   ```

5. Verify the data directories were restored correctly:

   ```bash
   ls -la openwebui_data ollama_data
   ```

6. Start the services:

   ```bash
   docker-compose up -d
   ```

7. Verify everything is working by accessing Open WebUI at `http://localhost:3000`
### Important Notes on Backups

- **Schedule Regular Backups**: For important data, consider setting up a cron job to automate backups (a sketch follows this list)
- **Large Model Files**: Be aware that Ollama model files can be several gigabytes in size. If you have limited bandwidth or storage:
  - Consider backing up only the Open WebUI data:

    ```bash
    tar -czvf openwebui-config-backup.tar.gz openwebui_data
    ```

  - Re-download models on the new machine instead of transferring them
- **Version Compatibility**: When restoring, ensure you're using compatible versions of Open WebUI and Ollama
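A minimal sketch of an automated backup script, assuming the project lives at `/home/user/openwebui-ollama` (a hypothetical path; adjust to your setup). It stops the stack for a consistent snapshot, archives the data directories, and restarts the services:

```bash
#!/bin/sh
# backup.sh - nightly snapshot of the Open WebUI and Ollama data (paths are examples)
cd /home/user/openwebui-ollama || exit 1
docker-compose down                    # stop containers for a consistent copy
tar -czf "openwebui-backup-$(date +%Y%m%d).tar.gz" openwebui_data ollama_data
docker-compose up -d                   # bring the stack back up
```

A crontab entry such as `0 3 * * * /home/user/openwebui-ollama/backup.sh` would run it nightly at 03:00; note that the services are briefly offline while the archive is created.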
## Advanced Configuration
For more advanced setups, you can modify the Docker Compose file to:
- Change ports
- Add environment variables
- Implement custom networking
- Add authentication
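For instance, a sketch combining a changed host port with an environment variable that disables open registration (`ENABLE_SIGNUP` is one of Open WebUI's documented settings, but verify it against the current docs before relying on it):

```yaml
# Excerpt: serve the UI on host port 8080 and turn off self-service signup
services:
  openwebui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "8080:8080"             # host port changed from 3000
    environment:
      - ENABLE_SIGNUP=false     # existing accounts only
```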
Check the official documentation for Open WebUI and Ollama for more configuration options.