Open WebUI
This guide covers the installation, configuration, and usage of Open WebUI with Ollama, providing a user-friendly graphical interface for interacting with your local large language models.
What is Open WebUI?
Open WebUI (formerly Ollama WebUI) is an open-source web interface designed specifically for Ollama. It provides:
A chat-like interface for interacting with models
File upload and RAG capabilities
Multi-modal support (text and images)
Vision features for supported models
Prompt templates and history
User management
API integrations
Custom model configurations
Prerequisites
Before installing Open WebUI, ensure you have:
A working Ollama installation (follow the installation guide)
Docker and Docker Compose (recommended for easy setup)
4GB+ RAM available (in addition to what Ollama requires)
At least one model installed in Ollama
Ollama running and accessible on port 11434
Installation Methods
Method 1: Docker (Recommended)
The easiest way to install Open WebUI is using Docker:
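A minimal run command, using the image published on GHCR (the container and volume names here are arbitrary; `--add-host` lets the container reach an Ollama instance running on the host):

```shell
docker run -d \
  -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui \
  --restart always \
  ghcr.io/open-webui/open-webui:main
```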
Access the interface at http://localhost:3000
Method 2: Docker Compose
Create a docker-compose.yml file containing both Ollama and Open WebUI:
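A sketch of such a compose file (service names, volume names, and the published port are choices, not requirements; note that recent Open WebUI releases may expect `OLLAMA_BASE_URL` instead of `OLLAMA_API_BASE_URL` — check your version's documentation):

```yaml
services:
  ollama:
    image: ollama/ollama
    volumes:
      - ollama:/root/.ollama
    restart: always

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3000:8080"
    environment:
      - OLLAMA_API_BASE_URL=http://ollama:11434/api
    volumes:
      - open-webui:/app/backend/data
    depends_on:
      - ollama
    restart: always

volumes:
  ollama:
  open-webui:
```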
Deploy with:
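From the directory containing the compose file:

```shell
docker compose up -d
```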
Access the interface at http://localhost:3000
Method 3: Manual Installation
For users who prefer not to use Docker:
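A rough sketch of a from-source setup, assuming a Python backend and a Node.js frontend (the exact scripts and requirements vary between releases — consult the repository README for your version):

```shell
git clone https://github.com/open-webui/open-webui.git
cd open-webui

# Backend (Python)
cd backend
pip install -r requirements.txt
bash start.sh &
cd ..

# Frontend development server (Node.js)
npm install
npm run dev
```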
Access the interface at the URL provided by the frontend development server.
Configuration Options
Environment Variables
Open WebUI can be configured with various environment variables:
| Variable | Description | Default |
| --- | --- | --- |
| `OLLAMA_API_BASE_URL` | URL of the Ollama API | `http://localhost:11434/api` |
| `PORT` | Port for the web interface | `8080` |
| `HOST` | Host binding for the interface | `0.0.0.0` |
| `DATA_DIR` | Directory for storing data | `/app/backend/data` |
| `ENABLE_SIGNUP` | Allow new users to register | `true` |
| `ENABLE_AUTH` | Enable authentication | `false` |
| `LOG_LEVEL` | Logging detail level | `error` |
For Docker deployments, pass these as environment variables:
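For example, with `-e` flags on `docker run` (values shown are illustrative):

```shell
docker run -d -p 3000:8080 \
  -e OLLAMA_API_BASE_URL=http://host.docker.internal:11434/api \
  -e ENABLE_AUTH=true \
  -e ENABLE_SIGNUP=true \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main
```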
Authentication
To enable multi-user authentication:
Set ENABLE_AUTH=true in your environment variables
Set ENABLE_SIGNUP=true to allow new user registration (can be disabled later)
Open the web interface and create your first admin user
Set ENABLE_SIGNUP=false to prevent further registrations if desired
Persistent Storage
For persistent storage of conversations, settings, and users:
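One way to do this is a named Docker volume mounted at the data directory (the volume name is arbitrary; a host bind mount such as `-v /srv/open-webui:/app/backend/data` works too):

```shell
# The named volume keeps /app/backend/data across container
# restarts and image upgrades
docker run -d -p 3000:8080 \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main
```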
Using Open WebUI
Initial Setup
Open your browser and navigate to http://localhost:3000 (or your configured port)
If authentication is enabled, create an account or log in
In the sidebar, you should see available models from your Ollama instance
If no models appear, check that Ollama is running and accessible
Basic Chat Interface
The Open WebUI interface includes:
Left sidebar: Models, conversations, and settings
Main chat area: Messages between you and the model
Input area: Text field for sending prompts to the model
Model settings: Configuration panel for adjusting model parameters
Advanced Features
Custom Model Parameters
To customize model parameters for a specific conversation:
Click on the model name in the top bar of the chat
Adjust parameters:
Temperature: Controls randomness (0.0-2.0)
Top P: Nucleus sampling threshold (0.0-1.0)
Maximum length: Limits response length
Context window: Sets available context tokens
Save settings to apply them to the current conversation
RAG (Retrieval-Augmented Generation)
Enable RAG capabilities for improved responses with external knowledge:
In the sidebar, navigate to "RAG" section
Click "Upload files" to add documents (PDFs, text files, etc.)
Create a new collection and add your documents
When chatting, toggle the RAG feature to use your document collection
Ask questions related to your documents to see context-aware responses
Chat Templates
Create templates for common prompts:
Navigate to "Templates" in the sidebar
Click "New Template"
Define your template with placeholder variables
Save and use these templates in conversations
Vision Support
For models that support image input (like LLaVA):
Ensure you have a multimodal model like llava installed
In the chat interface, click the upload button (📎)
Select an image to upload
Ask questions about the image
DevOps Team Setup
For teams using Ollama in DevOps workflows:
Collaborative Setup
With a Caddyfile:
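A minimal Caddyfile that fronts the interface for the whole team (the hostname is a placeholder; Caddy provisions TLS automatically for public domains):

```
webui.example.com {
    reverse_proxy localhost:3000
}
```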
Custom DevOps Modelfile
Create a special Modelfile for your team:
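For example, a Modelfile that bakes in a team-specific system prompt (the base model and prompt text here are illustrative):

```
FROM llama3

PARAMETER temperature 0.3
PARAMETER num_ctx 4096

SYSTEM """
You are a DevOps assistant for our team. Prefer concise answers,
include shell commands where relevant, and flag destructive
operations before suggesting them.
"""
```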
Build the model:
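The model name below is a hypothetical choice; once created, it appears in Open WebUI's model list:

```shell
ollama create devops-assistant -f Modelfile
```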
Security Considerations
When deploying Open WebUI:
Authentication: Always enable authentication in production
Network access: Limit access using a reverse proxy with TLS
User management: Control who has access to the interface
Document handling: Be aware that uploaded documents are stored in the data volume
API security: Protect the Ollama API endpoint from unauthorized access
Securing with Nginx
Example nginx.conf for securing Open WebUI:
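A sketch of a TLS-terminating server block (hostname and certificate paths are placeholders; the WebSocket upgrade headers matter for streaming responses):

```nginx
server {
    listen 443 ssl;
    server_name webui.example.com;

    ssl_certificate     /etc/ssl/certs/webui.crt;
    ssl_certificate_key /etc/ssl/private/webui.key;

    location / {
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;

        # WebSocket support for streaming responses
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```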
Troubleshooting
| Issue | Solution |
| --- | --- |
| "Cannot connect to Ollama" error | Verify Ollama is running: `curl http://localhost:11434/api/tags` |
| Models not appearing | Ensure `OLLAMA_API_BASE_URL` is correctly set |
| File uploads failing | Check that the data directory is writable |
| Authentication issues | Clear browser cache or check the user database |
| Slow performance | Adjust model parameters or upgrade hardware |
Extending Open WebUI
API Access
Open WebUI provides its own HTTP API, documented through a built-in Swagger UI that allows for programmatic interaction with the interface.
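As a sketch, the chat endpoint follows the OpenAI-compatible shape with a bearer token for authentication. The base URL, API key, and model name below are placeholders (generate a real key in the web interface under your account settings), and the helper function is illustrative, not part of Open WebUI:

```python
import json

def build_chat_request(base_url: str, api_key: str, model: str, prompt: str):
    """Assemble the URL, headers, and JSON body for a chat completion call."""
    url = f"{base_url.rstrip('/')}/api/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
    return url, headers, body

url, headers, body = build_chat_request(
    "http://localhost:3000", "sk-placeholder", "llama3", "Hello!"
)
# Send with, e.g.: requests.post(url, headers=headers, data=body)
```

Separating request construction from sending makes it easy to inspect or log the exact payload before it leaves your machine.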
Custom Plugins
You can extend Open WebUI with custom plugins:
Clone the repository
Create a new directory in backend/plugins/
Implement the plugin interface
Add your plugin to the configuration
Next Steps
After setting up Open WebUI:
Customize models for specific use cases
Optimize GPU acceleration for better performance
Implement DevOps workflows using the web interface
Create team-specific templates and RAG collections
Additional Resources