Docker Setup

This guide provides detailed instructions for deploying Ollama in Docker containers, enabling consistent, isolated environments and streamlined deployment across different systems.

Why Use Ollama with Docker?

Docker provides several advantages for running Ollama:

  • Isolation: Run Ollama in a contained environment without affecting the host system

  • Portability: Deploy the same Ollama setup across different environments

  • Resource control: Limit CPU, memory, and GPU resources allocated to Ollama

  • Version management: Easily switch between different Ollama versions

  • Orchestration: Integrate with Kubernetes or Docker Swarm for scaling

Prerequisites

Before getting started, ensure you have:

  1. Docker installed on your system:

    # Linux
    curl -fsSL https://get.docker.com | sh
    sudo usermod -aG docker $USER
    # Log out and back in to apply group changes
    
    # Verify Docker installation
    docker --version
  2. Docker Compose (optional but recommended):

    # Install Docker Compose V2
    sudo apt update && sudo apt install -y docker-compose-plugin
    
    # Verify installation
    docker compose version
  3. At least 8GB of RAM and sufficient disk space for models (~5-10GB per model)

Basic Ollama Docker Setup

Using Official Docker Image

Pull and run the official Ollama Docker image:

# Pull the latest Ollama image
docker pull ollama/ollama:latest

# Create a volume for persistent storage
docker volume create ollama-data

# Run Ollama container
docker run -d \
  --name ollama \
  -p 11434:11434 \
  -v ollama-data:/root/.ollama \
  ollama/ollama

Testing Your Ollama Container

# Check if the container is running
docker ps

# Download and run a model
docker exec -it ollama ollama run mistral "Hello, how are you?"

# Access the API from the host
curl http://localhost:11434/api/generate -d '{
  "model": "mistral",
  "prompt": "What is Docker?"
}'
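
By default, /api/generate streams the response back as a series of JSON objects, one per line. For scripting, the API's stream flag requests a single JSON document instead:

# Request a single, non-streamed response
curl http://localhost:11434/api/generate -d '{
  "model": "mistral",
  "prompt": "What is Docker?",
  "stream": false
}'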

Docker Compose Setup

Docker Compose offers a more manageable way to configure and run Ollama.

Basic Docker Compose Configuration

Create a file named docker-compose.yml:

version: '3'

services:
  ollama:
    image: ollama/ollama:latest
    container_name: ollama
    volumes:
      - ollama-data:/root/.ollama
    ports:
      - "11434:11434"
    restart: unless-stopped

volumes:
  ollama-data:

Run with:

docker compose up -d
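
Day-to-day management then goes through Compose as well, using the service name from the file above:

# Pull a model into the running service
docker compose exec ollama ollama pull mistral

# Follow the container logs
docker compose logs -f ollama

# Stop and remove the stack (the named volume is preserved)
docker compose down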

Advanced Docker Compose with Resource Limits

For more control over container resources:

version: '3'

services:
  ollama:
    image: ollama/ollama:latest
    container_name: ollama
    volumes:
      - ollama-data:/root/.ollama
      - ./modelfiles:/modelfiles
    ports:
      - "11434:11434"
    environment:
      - OLLAMA_HOST=0.0.0.0:11434
      - OLLAMA_KEEP_ALIVE=15m
    deploy:
      resources:
        limits:
          cpus: '8'
          memory: 16G
        reservations:
          cpus: '4'
          memory: 8G
    restart: unless-stopped

volumes:
  ollama-data:
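
To confirm the limits are actually applied, docker stats shows live usage against the configured memory ceiling:

# One-shot snapshot; MEM USAGE / LIMIT should reflect the 16G cap
docker stats ollama --no-stream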

GPU-Accelerated Docker Setup

NVIDIA GPU Support

To enable NVIDIA GPU acceleration with Docker:

  1. Install the NVIDIA Container Toolkit:

    # Add the NVIDIA Container Toolkit repository (the older apt-key method is deprecated)
    curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | \
      sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
    curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
      sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
      sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
    
    # Install nvidia-container-toolkit
    sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit
    
    # Configure Docker runtime
    sudo nvidia-ctk runtime configure --runtime=docker
    sudo systemctl restart docker
  2. Run Ollama with GPU support:

    docker run -d \
      --name ollama-gpu \
      --gpus all \
      -p 11434:11434 \
      -v ollama-data:/root/.ollama \
      ollama/ollama
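
To verify the container can actually see the GPU, run nvidia-smi inside it (the NVIDIA runtime mounts the utility into containers started with --gpus):

docker exec -it ollama-gpu nvidia-smi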

Docker Compose with NVIDIA GPUs

version: '3'

services:
  ollama:
    image: ollama/ollama:latest
    container_name: ollama-gpu
    volumes:
      - ollama-data:/root/.ollama
    ports:
      - "11434:11434"
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
    restart: unless-stopped

volumes:
  ollama-data:

AMD ROCm GPU Support

For AMD GPUs, use the ROCm build of the image (ollama/ollama:rocm) and pass the kernel GPU devices through to the container:

docker run -d \
  --name ollama-rocm \
  --device=/dev/kfd \
  --device=/dev/dri \
  --security-opt seccomp=unconfined \
  --group-add video \
  -p 11434:11434 \
  -v ollama-data:/root/.ollama \
  ollama/ollama:rocm

Multi-Container Setups

Ollama with Open WebUI

This setup combines Ollama with Open WebUI for a more user-friendly interface:

version: '3'

services:
  ollama:
    image: ollama/ollama:latest
    container_name: ollama
    volumes:
      - ollama-data:/root/.ollama
    ports:
      - "11434:11434"
    restart: unless-stopped

  open-webui:
    image: ghcr.io/open-webui/open-webui:latest
    container_name: open-webui
    volumes:
      - open-webui-data:/app/backend/data
    ports:
      - "3000:8080"
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
    depends_on:
      - ollama
    restart: unless-stopped

volumes:
  ollama-data:
  open-webui-data:
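
Start the stack with docker compose up -d and open http://localhost:3000 in a browser; Open WebUI reaches the Ollama container over the Compose network using the service name as the hostname.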

Ollama for DevOps

A setup designed for DevOps workflows with Ollama and RAG capabilities:

version: '3'

services:
  ollama:
    image: ollama/ollama:latest
    container_name: ollama-devops
    volumes:
      - ollama-data:/root/.ollama
      - ./models:/models
      - ./devops-docs:/data
    ports:
      - "11434:11434"
    environment:
      - OLLAMA_MODELS=/models
    restart: unless-stopped

  vector-db:
    image: chromadb/chroma:latest
    container_name: chroma-db
    volumes:
      - chroma-data:/chroma/data
    ports:
      - "8000:8000"
    restart: unless-stopped

  rag-service:
    image: ghcr.io/yourusername/ollama-rag-service:latest
    container_name: rag-service
    volumes:
      - ./data:/data
    ports:
      - "5000:5000"
    environment:
      - OLLAMA_HOST=ollama:11434
      - CHROMA_HOST=vector-db:8000
    depends_on:
      - ollama
      - vector-db
    restart: unless-stopped

volumes:
  ollama-data:
  chroma-data:
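
A quick smoke test of the stack from the host, assuming the Chroma image exposes its usual heartbeat endpoint:

# Ollama API
curl http://localhost:11434/api/tags

# Chroma heartbeat
curl http://localhost:8000/api/v1/heartbeat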

Docker Network Configuration

Creating an Isolated Network

For multi-container deployments, create an isolated network:

# Create a dedicated network
docker network create ollama-network

# Run Ollama in the network
docker run -d \
  --name ollama \
  --network ollama-network \
  -p 11434:11434 \
  -v ollama-data:/root/.ollama \
  ollama/ollama

Accessing Ollama from Other Containers

Other containers can access Ollama using the container name as hostname:

docker run -it --rm --network ollama-network alpine/curl \
  -X POST http://ollama:11434/api/generate \
  -d '{"model": "mistral", "prompt": "Hello!"}'

Custom Ollama Docker Images

Creating a Custom Dockerfile

Create a Dockerfile with pre-loaded models and custom configuration:

FROM ollama/ollama:latest

# Set environment variables
ENV OLLAMA_HOST=0.0.0.0:11434
ENV OLLAMA_KEEP_ALIVE=5m

# Copy custom Modelfiles
COPY ./modelfiles /modelfiles

# Pre-download models during build (optional).
# Models pulled here live in the image layer; an empty named volume mounted at
# /root/.ollama is seeded from them on first use, but a bind mount would hide them.
RUN ollama serve & sleep 5 && \
    ollama pull mistral:7b && \
    ollama pull codellama:7b && \
    ollama create devops-assistant -f /modelfiles/DevOps-Assistant

# Expose port
EXPOSE 11434

# Default command
CMD ["ollama", "serve"]

Build and run your custom image:

# Build the image
docker build -t custom-ollama:latest .

# Run the container
docker run -d \
  --name custom-ollama \
  -p 11434:11434 \
  -v ollama-data:/root/.ollama \
  custom-ollama:latest
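
To confirm the baked-in models are visible at runtime (see the volume-seeding note in the Dockerfile above):

docker exec -it custom-ollama ollama list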

Production Best Practices

Security Considerations

  1. Limit network exposure: Ollama has no built-in TLS or authentication, so avoid publishing the API on all interfaces of a public host. Binding the published port to localhost keeps it reachable only through a local reverse proxy:

    services:
      ollama:
        ports:
          - "127.0.0.1:11434:11434"
  2. TLS and authentication (terminated at a reverse proxy like Nginx):

    # Example nginx.conf snippet
    server {
        listen 443 ssl;
        server_name ollama.example.com;
        
        ssl_certificate /etc/nginx/certs/cert.pem;
        ssl_certificate_key /etc/nginx/certs/key.pem;
        
        auth_basic "Ollama API";
        auth_basic_user_file /etc/nginx/.htpasswd;
        
        location / {
            proxy_pass http://ollama:11434;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }
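
The credentials file referenced by auth_basic_user_file can be created with htpasswd (from the apache2-utils package on Debian/Ubuntu):

# Create the file with an initial user; prompts for a password
sudo htpasswd -c /etc/nginx/.htpasswd admin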

Health Checks and Monitoring

Add health checks to your Docker Compose:

services:
  ollama:
    # ...
    healthcheck:
      # The Ollama image may not ship curl, and there is no /api/health endpoint;
      # the CLI itself is a convenient probe because it fails if the server is down.
      test: ["CMD", "ollama", "list"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 20s
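
Once the health check is in place, Docker reports the probe result through inspect:

# Prints "starting", "healthy", or "unhealthy"
docker inspect --format='{{.State.Health.Status}}' ollama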

Docker Swarm and Kubernetes

Docker Swarm Deployment

# Initialize Swarm if not already done
docker swarm init

# Deploy Ollama stack
docker stack deploy -c docker-compose.yml ollama-stack
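
Verify the deployment with the standard Swarm service commands (service names take the form <stack>_<service>):

# List services in the stack
docker stack services ollama-stack

# Tail the Ollama service logs
docker service logs -f ollama-stack_ollama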

Kubernetes Deployment

Create a kubernetes.yaml file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ollama
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ollama
  template:
    metadata:
      labels:
        app: ollama
    spec:
      containers:
      - name: ollama
        image: ollama/ollama:latest
        ports:
        - containerPort: 11434
        volumeMounts:
        - name: ollama-data
          mountPath: /root/.ollama
        resources:
          limits:
            memory: "16Gi"
            cpu: "8"
          requests:
            memory: "8Gi"
            cpu: "4"
      volumes:
      - name: ollama-data
        persistentVolumeClaim:
          claimName: ollama-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: ollama
spec:
  selector:
    app: ollama
  ports:
  - port: 11434
    targetPort: 11434
  type: ClusterIP
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ollama-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi

Apply with:

kubectl apply -f kubernetes.yaml
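
Then confirm the pod is ready and reach the ClusterIP service from your workstation with a port-forward:

# Watch the pod come up
kubectl get pods -l app=ollama

# Forward the service port and test the API locally
kubectl port-forward svc/ollama 11434:11434 &
curl http://localhost:11434/api/tags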

Troubleshooting Docker Issues

  • Container won't start: check the logs with docker logs ollama
  • Permission errors: verify volume permissions with docker exec -it ollama ls -la /root/.ollama
  • Network connectivity: test with docker exec -it ollama ollama list, or from the host with curl localhost:11434/api/tags (curl may not be available inside the container)
  • Out of memory: increase the container's memory limits in Docker settings
  • GPU not detected: verify the NVIDIA Container Toolkit installation and check the container logs

Next Steps

After setting up Ollama in Docker:

  • Explore GPU acceleration for faster model inference
  • Configure and optimize models for your specific use cases
  • Implement DevOps workflows with Ollama
  • Set up Open WebUI for a graphical user interface