Installation Guide

This guide provides detailed instructions for installing Ollama on various Linux distributions, on NixOS, and in Docker containers.

System Requirements

Before installing Ollama, ensure your system meets these minimum requirements:

  • CPU: 64-bit Intel/AMD (x86_64) or ARM64 processor

  • RAM: 8GB minimum (16GB+ recommended for larger models)

  • Storage: 10GB+ free space (varies by model size)

  • Operating System: Linux (kernel 4.15+), macOS 12.0+, or Windows 10/11

  • GPU (optional but recommended):

    • NVIDIA GPU with CUDA 11.4+ support

    • AMD GPU with ROCm 5.4.3+ support

    • Intel Arc GPU with oneAPI support

Linux Installation (Direct Method)

For most Linux distributions, the simplest installation method is using the official install script:

curl -fsSL https://ollama.com/install.sh | sh

This script detects your system architecture and available GPU, installs Ollama, and, on distributions that use systemd, registers an ollama service that starts at boot.

Manual Installation (Debian/Ubuntu)

For Debian-based distributions (Ubuntu, Debian, Linux Mint, etc.):
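
The project does not appear to publish an official apt repository, so Debian-based systems are normally covered by the install script above or the generic binary method below. If a packaged build is available to you (the package name ollama here is an assumption, not a confirmed upstream package), installation would follow the usual pattern:

sudo apt update
sudo apt install ollama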

Manual Installation (Red Hat/Fedora)

For Red Hat-based distributions (RHEL, Fedora, CentOS, etc.):
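
As with Debian, there is no confirmed official dnf/yum repository; the command below is a sketch assuming a packaged build (for example from a community repository) named ollama:

sudo dnf install ollama

On older releases that still use yum, substitute yum for dnf.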

Manual Installation (Binary)

If packages are not available for your distribution:
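
The following matches the upstream Linux install documentation at the time of writing (substitute ollama-linux-arm64.tgz on ARM64 systems):

curl -L https://ollama.com/download/ollama-linux-amd64.tgz -o ollama-linux-amd64.tgz
sudo tar -C /usr -xzf ollama-linux-amd64.tgz

Start the server manually (or create a systemd unit for it), then use the CLI from another terminal:

ollama serve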

NixOS Installation

Ollama is available in the Nixpkgs collection, making it easy to install on NixOS.

Using Nix Package Manager
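
For an imperative, per-user install, a quick sketch (the attribute path depends on how your channels are named; on NixOS channels it is often nixos.ollama):

nix-env -iA nixpkgs.ollama

Or, with flakes enabled:

nix profile install nixpkgs#ollama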

NixOS Configuration (configuration.nix)

For a system-wide installation, add Ollama to your configuration.nix:
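
A minimal sketch, assuming a NixOS release recent enough to ship the services.ollama module (on older releases, fall back to environment.systemPackages = [ pkgs.ollama ]; instead):

services.ollama = {
  enable = true;
  # Optional, if your release supports this option; values include "cuda" and "rocm":
  # acceleration = "cuda";
};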

After updating your configuration, apply the changes:
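
sudo nixos-rebuild switch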

Using Home Manager

If you're using Home Manager:
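
A minimal sketch for your Home Manager configuration (recent Home Manager releases also provide a services.ollama module, but the plain package is the most portable option):

home.packages = [ pkgs.ollama ];

Apply it with:

home-manager switch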

Docker Installation

Running Ollama in Docker provides a consistent environment across different systems.

Basic Docker Setup

Pull and run the official Ollama Docker image:
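
These commands follow the image's documented usage; the named volume ollama simply persists downloaded models across container restarts:

docker pull ollama/ollama
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

You can then run a model inside the container (llama3.2 is just an example):

docker exec -it ollama ollama run llama3.2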

Docker Compose Setup

Create a docker-compose.yml file:
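
A minimal sketch; the service name, container name, and volume name are conventions rather than requirements:

services:
  ollama:
    image: ollama/ollama
    container_name: ollama
    ports:
      - "11434:11434"
    volumes:
      - ollama:/root/.ollama
    restart: unless-stopped

volumes:
  ollama: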

Launch with Docker Compose:
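
docker compose up -d

(Older standalone installs use docker-compose up -d instead.)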

Docker with GPU Support (NVIDIA)

To enable NVIDIA GPU support:
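
Per the upstream Docker instructions, install the NVIDIA Container Toolkit on the host first (setup varies by distribution and is omitted here), then pass the GPUs through to the container:

docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama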

Post-Installation Setup

After installing Ollama, perform these steps to complete the setup (example commands follow the list):

  1. Start the Ollama service.

  2. Test the installation by running a model.

  3. Verify API access.
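
On a systemd-based install, these three steps might look like the following sketch (llama3.2 is only an example model; any model from the library works):

sudo systemctl start ollama
ollama run llama3.2
curl http://localhost:11434/api/version

The last command queries the local REST API on its default port (11434) and should return the server version as JSON.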

Troubleshooting

Common Issues

  1. Permission Denied Errors: the install script creates a dedicated ollama user and group; check that your user is in the ollama group (log out and back in after adding it) and that the service is running.

  2. Network Connectivity Issues: by default the server listens only on 127.0.0.1:11434; to reach it from other machines or containers, set OLLAMA_HOST (for example to 0.0.0.0) in the service environment and restart.

  3. GPU Not Detected: confirm the driver stack works outside Ollama first (nvidia-smi for NVIDIA, rocminfo for AMD ROCm), then inspect the server logs for GPU discovery messages.
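
A few diagnostic commands that usually narrow things down (the ollama service name assumes the script-based install):

systemctl status ollama
journalctl -u ollama -n 50 --no-pager
curl http://localhost:11434/api/version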

Next Steps

Now that you have Ollama installed, proceed to:

  1. Configure Ollama for optimal performance

  2. Set up GPU acceleration for faster inference

For DevOps engineers, check out DevOps Usage Examples to see how Ollama can be integrated into your workflows.
