Open WebUI
This guide covers the installation, configuration, and usage of Open WebUI with Ollama, providing a user-friendly graphical interface for interacting with your local large language models.
Open WebUI (formerly Ollama WebUI) is an open-source web interface designed specifically for Ollama. It provides:
- A chat-like interface for interacting with models
- File upload and RAG capabilities
- Multi-modal support (text and images)
- Vision features for supported models
- Prompt templates and history
- User management
- API integrations
- Custom model configurations
Before installing Open WebUI, ensure you have:
- A working Ollama installation (follow the Ollama installation guide)
- Docker and Docker Compose (recommended for easy setup)
- 4GB+ RAM available (in addition to what Ollama requires)
- At least one model installed in Ollama
- Ollama running and accessible on port 11434
The easiest way to install Open WebUI is using Docker:
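A typical invocation, based on the project's published image, looks like this (adjust the host port and volume name to taste):

```bash
docker run -d \
  -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui \
  --restart always \
  ghcr.io/open-webui/open-webui:main
```

The `--add-host` flag lets the container reach an Ollama instance running on the host via `host.docker.internal`.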
Access the interface at http://localhost:3000
Create a `docker-compose.yml` file containing both Ollama and Open WebUI:
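A minimal sketch (service and volume names are illustrative):

```yaml
version: "3.8"

services:
  ollama:
    image: ollama/ollama:latest
    volumes:
      - ollama:/root/.ollama
    ports:
      - "11434:11434"

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    depends_on:
      - ollama
    environment:
      # Point the UI at the ollama service on the Compose network
      - OLLAMA_API_BASE_URL=http://ollama:11434/api
    volumes:
      - open-webui:/app/backend/data
    ports:
      - "3000:8080"
    restart: always

volumes:
  ollama:
  open-webui:
```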
Deploy with:
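```bash
docker compose up -d
```

(Use `docker-compose up -d` if you are on the older standalone Compose binary.)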
Access the interface at http://localhost:3000
For users who prefer not to use Docker:
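The manual setup roughly follows the steps below (paths and the start script reflect the upstream repository; check its README for the current procedure):

```bash
git clone https://github.com/open-webui/open-webui.git
cd open-webui

# Copy the example environment file and adjust it as needed
cp -RPp .env.example .env

# Build the frontend
npm install
npm run build

# Install dependencies and start the backend
cd backend
pip install -r requirements.txt -U
bash start.sh
```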
Access the interface at the URL provided by the frontend development server.
Open WebUI can be configured with various environment variables:
| Variable | Description | Default |
| --- | --- | --- |
| `OLLAMA_API_BASE_URL` | URL of the Ollama API | `http://localhost:11434/api` |
| `PORT` | Port for the web interface | `8080` |
| `HOST` | Host binding for the interface | `0.0.0.0` |
| `DATA_DIR` | Directory for storing data | `/app/backend/data` |
| `ENABLE_SIGNUP` | Allow new users to register | `true` |
| `ENABLE_AUTH` | Enable authentication | `false` |
| `LOG_LEVEL` | Logging detail level | `error` |
For Docker deployments, pass these as environment variables:
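For example (the values shown are illustrative):

```bash
docker run -d \
  -p 3000:8080 \
  -e OLLAMA_API_BASE_URL=http://host.docker.internal:11434/api \
  -e ENABLE_AUTH=true \
  -e ENABLE_SIGNUP=true \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main
```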
To enable multi-user authentication:
1. Set `ENABLE_AUTH=true` in your environment variables
2. Set `ENABLE_SIGNUP=true` to allow new user registration (can be disabled later)
3. Open the web interface and create your first admin user
4. Set `ENABLE_SIGNUP=false` to prevent further registrations if desired
For persistent storage of conversations, settings, and users:
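The Docker examples above already mount a named volume at `/app/backend/data`. You can inspect or back up that volume with standard Docker tooling (a generic pattern, not specific to Open WebUI):

```bash
# See where Docker keeps the volume on disk
docker volume inspect open-webui

# Archive the volume contents to the current directory
docker run --rm \
  -v open-webui:/data \
  -v "$PWD":/backup \
  alpine tar czf /backup/open-webui-backup.tar.gz -C /data .
```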
1. Open your browser and navigate to http://localhost:3000 (or your configured port)
2. If authentication is enabled, create an account or log in
3. In the sidebar, you should see available models from your Ollama instance
4. If no models appear, check that Ollama is running and accessible
The Open WebUI interface includes:
- Left sidebar: Models, conversations, and settings
- Main chat area: Messages between you and the model
- Input area: Text field for sending prompts to the model
- Model settings: Configuration panel for adjusting model parameters
To customize model parameters for a specific conversation:
1. Click on the model name in the top bar of the chat
2. Adjust parameters:
   - Temperature: Controls randomness (0.0-2.0)
   - Top P: Nucleus sampling threshold (0.0-1.0)
   - Maximum length: Limits response length
   - Context window: Sets available context tokens
3. Save settings to apply them to the current conversation
Enable RAG capabilities for improved responses with external knowledge:
1. In the sidebar, navigate to the "RAG" section
2. Click "Upload files" to add documents (PDFs, text files, etc.)
3. Create a new collection and add your documents
4. When chatting, toggle the RAG feature to use your document collection
5. Ask questions related to your documents to see context-aware responses
Create templates for common prompts:
Navigate to "Templates" in the sidebar
Click "New Template"
Define your template with placeholder variables
Save and use these templates in conversations
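For example, a code-review template might look like the sketch below (the double-brace placeholders are illustrative; use whatever variable syntax your Open WebUI version supports):

```text
Review the following {{language}} code for bugs, style issues,
and missing tests. Respond with a numbered list of findings.

{{code}}
```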
For models that support image input (like LLaVA):
1. Ensure you have a multimodal model such as `llava` installed (see below)
2. In the chat interface, click the upload button (📎)
3. Select an image to upload
4. Ask questions about the image
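If you don't have a vision-capable model yet, pull one first:

```bash
ollama pull llava
```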
For teams using Ollama in DevOps workflows:
With a `Caddyfile`:
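A minimal reverse-proxy sketch (the hostname is a placeholder; Caddy provisions TLS automatically for public hostnames):

```
chat.example.com {
    reverse_proxy localhost:3000
}
```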
Create a special Modelfile for your team:
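For example (the base model and system prompt are illustrative):

```
FROM llama3
SYSTEM """You are the internal assistant for the platform team.
Prefer concise answers and include shell examples where relevant."""
PARAMETER temperature 0.7
```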
Build the model:
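```bash
ollama create team-assistant -f Modelfile
```

The new model then appears in Open WebUI's model list alongside the base models.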
When deploying Open WebUI:
- Authentication: Always enable authentication in production
- Network access: Limit access using a reverse proxy with TLS
- User management: Control who has access to the interface
- Document handling: Be aware that uploaded documents are stored in the data volume
- API security: Protect the Ollama API endpoint from unauthorized access
Example `nginx.conf` for securing Open WebUI:
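A sketch along these lines (the server name and certificate paths are placeholders; the `Upgrade`/`Connection` headers matter for streamed responses):

```nginx
server {
    listen 443 ssl;
    server_name chat.example.com;

    ssl_certificate     /etc/ssl/certs/chat.example.com.crt;
    ssl_certificate_key /etc/ssl/private/chat.example.com.key;

    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}

# Redirect plain HTTP to HTTPS
server {
    listen 80;
    server_name chat.example.com;
    return 301 https://$host$request_uri;
}
```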
"Cannot connect to Ollama" error
Verify Ollama is running: curl http://localhost:11434/api/tags
Models not appearing
Ensure OLLAMA_API_BASE_URL
is correctly set
File uploads failing
Check that the data directory is writable
Authentication issues
Clear browser cache or check user database
Slow performance
Adjust model parameters or upgrade hardware
Open WebUI provides its own API; the backend serves interactive documentation, typically at `http://localhost:3000/docs`.
This Swagger UI allows for programmatic interaction with the interface.
You can extend Open WebUI with custom plugins:
1. Clone the repository
2. Create a new directory in `backend/plugins/`
3. Implement the plugin interface
4. Add your plugin to the configuration
After setting up Open WebUI:
- Create team-specific templates and RAG collections
- Build custom models for specific use cases
- Tune model parameters for better performance
- Manage your models using the web interface