This guide provides detailed instructions for installing Ollama on common Linux distributions and NixOS, and for running it in Docker containers.
Before installing Ollama, ensure your system meets these minimum requirements:
CPU: 64-bit Intel/AMD (x86_64) or ARM64 processor
RAM: 8GB minimum (16GB+ recommended for larger models)
Storage: 10GB+ free space (varies by model size)
Operating System: Linux (kernel 4.15+), macOS 12.0+, or Windows 10/11
GPU (optional but recommended):
NVIDIA GPU with CUDA 11.4+ support
AMD GPU with ROCm 5.4.3+ support
Intel Arc GPU with OneAPI support
For most Linux distributions, the simplest installation method is using the official install script:
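The official one-line installer, as published on ollama.com:

```shell
# Downloads and runs the official install script (requires curl; the script
# will prompt for sudo to install the binary and set up a systemd service)
curl -fsSL https://ollama.com/install.sh | sh
```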
This script automatically detects your Linux distribution and installs the appropriate package.
For Debian-based distributions (Ubuntu, Debian, Linux Mint, etc.):
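Ollama is not in the default Debian/Ubuntu repositories, so this sketch assumes a repository that provides an `ollama` package (the package name is an assumption; availability varies by distribution):

```shell
# Hypothetical packaged install; assumes an "ollama" package is available
# from your configured repositories
sudo apt-get update
sudo apt-get install -y ollama
```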
For Red Hat-based distributions (RHEL, Fedora, CentOS, etc.):
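As with the Debian case, this assumes an `ollama` package exists in a repository you have enabled (package name is an assumption, not an official RPM):

```shell
# Hypothetical packaged install for dnf-based systems
sudo dnf install -y ollama
```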
If packages are not available for your distribution:
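You can fall back to the standalone Linux build. A sketch based on the official download URL (use the `arm64` tarball on ARM systems):

```shell
# Download the standalone Linux build and unpack it into /usr
curl -LO https://ollama.com/download/ollama-linux-amd64.tgz
sudo tar -C /usr -xzf ollama-linux-amd64.tgz

# No systemd unit is created this way; start the server manually
ollama serve
```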
Ollama is available in the Nixpkgs collection, making it easy to install on NixOS.
For a system-wide installation, add Ollama to your configuration.nix:
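A minimal sketch using the `services.ollama` module (present in recent Nixpkgs releases; check your channel's options if this does not evaluate):

```nix
{ config, pkgs, ... }:
{
  # Enable the Ollama server as a system service
  services.ollama.enable = true;
}
```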
After updating your configuration, apply the changes:
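Rebuild and switch to the new generation:

```shell
# Rebuilds the system from configuration.nix and activates it
sudo nixos-rebuild switch
```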
If you're using Home Manager:
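A minimal sketch that installs the package into your user profile (some Home Manager versions also ship a `services.ollama` module, but adding the package is the most portable option):

```nix
{ pkgs, ... }:
{
  # Install the ollama CLI/server binary for this user
  home.packages = [ pkgs.ollama ];
}
```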
Running Ollama in Docker provides a consistent environment across different systems.
Pull and run the official Ollama Docker image:
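From the official image on Docker Hub:

```shell
# Run the Ollama server in the background, persisting downloaded models
# in a named volume and exposing the API on the default port 11434
docker run -d \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  --name ollama \
  ollama/ollama
```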
Create a docker-compose.yml file:
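A minimal compose file equivalent to the `docker run` invocation above (volume and container names are conventional, not required):

```yaml
services:
  ollama:
    image: ollama/ollama
    container_name: ollama
    ports:
      - "11434:11434"
    volumes:
      - ollama:/root/.ollama
    restart: unless-stopped

volumes:
  ollama:
```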
Launch with Docker Compose:
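From the directory containing the compose file:

```shell
# Start the service in detached mode
docker compose up -d
```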
To enable NVIDIA GPU support:
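This assumes the NVIDIA Container Toolkit is already installed and configured on the host; with that in place, pass the GPUs through to the container:

```shell
# --gpus=all makes all host GPUs visible inside the container
docker run -d --gpus=all \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  --name ollama \
  ollama/ollama
```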
After installing Ollama, perform these steps to complete the setup:
Start the Ollama service:
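If you used the official install script, a systemd unit was set up for you; otherwise run the server in the foreground:

```shell
# With systemd (unit installed by the official script)
sudo systemctl enable --now ollama

# Without systemd, run the server directly
ollama serve
```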
Test the installation by running a model:
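The model name below is just an example; any model from the Ollama library will do, and it is pulled automatically on first run:

```shell
# Downloads the model if missing, then opens an interactive chat prompt
ollama run llama3.2
```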
Verify API access:
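Two quick checks against the REST API on its default port:

```shell
# The root endpoint replies "Ollama is running" when the server is up
curl http://localhost:11434

# List the models available locally
curl http://localhost:11434/api/tags
```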
Permission Denied Errors:
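The script-based install creates an `ollama` system user and group that own the model directory; adding your user to that group (an assumption that you used the script install) usually resolves these errors:

```shell
# Join the ollama group, then start a new login session so it takes effect
sudo usermod -aG ollama "$USER"
newgrp ollama
```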
Network Connectivity Issues:
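First confirm the server is actually listening; by default Ollama binds to localhost only, and the documented `OLLAMA_HOST` environment variable controls the bind address:

```shell
# Check that something is listening on the default port 11434
ss -tlnp | grep 11434

# To accept connections from other hosts, bind to all interfaces
OLLAMA_HOST=0.0.0.0 ollama serve
```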
GPU Not Detected:
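For NVIDIA systems, confirm the driver sees the GPU and then check Ollama's own startup logs (the `journalctl` form assumes a systemd-based install):

```shell
# Verify the NVIDIA driver and GPU are visible to the host
nvidia-smi

# Inspect recent Ollama logs for GPU detection messages
journalctl -u ollama --no-pager | tail -n 50
```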
Now that you have Ollama installed, proceed to the follow-up guides on tuning your setup for optimal performance and achieving faster inference.
For DevOps engineers, the companion guide shows how Ollama can be integrated into your workflows.