GPU Setup

This guide provides detailed instructions for configuring Ollama to use GPU acceleration on NVIDIA, AMD, and Intel hardware.

GPU Acceleration Overview

GPU acceleration dramatically improves Ollama's performance, enabling:

  • Faster model loading times

  • Increased inference speed (token generation)

  • Higher throughput for concurrent requests

  • Ability to run larger models efficiently
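
A quick way to quantify the benefit on your own hardware: the --verbose flag makes ollama run print timing statistics, including the eval rate in tokens per second. Run the same prompt before and after enabling GPU acceleration and compare (mistral here is just an example model you have already pulled):

# Print load time and tokens/s (eval rate) for a sample prompt
ollama run mistral "Explain GPU acceleration in one sentence." --verbose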

Hardware Requirements

GPU Manufacturer | Minimum Requirements | Recommended
NVIDIA | CUDA-capable GPU (Compute 5.0+), Pascal/10xx series or newer | RTX series (30xx/40xx)
AMD | ROCm-compatible GPU (CDNA/RDNA), Radeon RX 6000 series or newer | Radeon RX 7000 series
Intel | Intel Arc GPU with oneAPI support | Intel Arc A770/A750

NVIDIA GPU Setup

NVIDIA GPUs offer the best performance and compatibility with Ollama through CUDA integration.

Prerequisites

  1. Install the NVIDIA driver:

    # Ubuntu/Debian
    sudo apt update
    sudo apt install -y nvidia-driver-535 nvidia-utils-535
    
    # RHEL/CentOS/Fedora
    sudo dnf install -y akmod-nvidia
  2. Install the CUDA toolkit (11.4 or newer recommended):

    # Download and install CUDA toolkit
    wget https://developer.download.nvidia.com/compute/cuda/11.8.0/local_installers/cuda_11.8.0_520.61.05_linux.run
    sudo sh cuda_11.8.0_520.61.05_linux.run
  3. Add CUDA to your PATH:

    echo 'export PATH=/usr/local/cuda/bin:$PATH' >> ~/.bashrc
    echo 'export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH' >> ~/.bashrc
    source ~/.bashrc
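
Before moving on, it is worth confirming that the driver and toolkit are actually visible; both commands below ship with the packages installed above:

# Driver check: should list your GPU(s) and the driver version
nvidia-smi

# Toolkit check: should print the CUDA compiler version
nvcc --version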

Configuring Ollama for NVIDIA GPUs

Ollama automatically detects NVIDIA GPUs when available. You can customize GPU utilization with environment variables:

# Use specific GPUs (zero-indexed)
export CUDA_VISIBLE_DEVICES=0,1

# Limit GPU memory utilization (percent)
export GPU_MEMORY_UTILIZATION=90

# Start Ollama with GPU acceleration
ollama serve
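
Once the server is running, you can exercise the GPU directly through Ollama's HTTP API (the request below uses the standard /api/generate endpoint; mistral assumes you have pulled that model):

# Send a single non-streaming generation request
curl http://localhost:11434/api/generate -d '{
  "model": "mistral",
  "prompt": "Why is the sky blue?",
  "stream": false
}'

Watch nvidia-smi in a second terminal while this runs; GPU utilization and memory use should jump as the model loads.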

Verifying GPU Usage

# Check if CUDA is detected
ollama run mistral "Are you using my GPU?" --verbose

# Monitor GPU usage
nvidia-smi -l 1
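
Newer Ollama releases also expose this through ollama ps, which shows where each loaded model is resident; a PROCESSOR column reading 100% GPU confirms the model is fully offloaded:

# List loaded models and their CPU/GPU placement
ollama ps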

NVIDIA Docker Setup

For Docker-based deployments:

# Install NVIDIA Container Toolkit
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/libnvidia-container/gpgkey | sudo apt-key add -
curl -s -L https://nvidia.github.io/libnvidia-container/$distribution/libnvidia-container.list | sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit

# Configure Docker
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker

# Run Ollama with GPU support
docker run --gpus all -p 11434:11434 ollama/ollama
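
If the container cannot see the GPU, verify the NVIDIA Container Toolkit wiring independently of Ollama first (the CUDA base image tag below is one example; any recent tag works):

# A correctly configured toolkit lets a plain CUDA container run nvidia-smi
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi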

AMD GPU Setup

AMD GPU support in Ollama uses the ROCm platform.

Prerequisites

  1. Install the ROCm driver stack:

    # Add ROCm apt repository
    wget -q -O - https://repo.radeon.com/rocm/rocm.gpg.key | sudo apt-key add -
    echo 'deb [arch=amd64] https://repo.radeon.com/rocm/apt/5.4.3/ ubuntu main' | sudo tee /etc/apt/sources.list.d/rocm.list
    
    # Install ROCm
    sudo apt update
    sudo apt install -y rocm-dev rocm-libs miopen-hip
  2. Add your user to the render group:

    sudo usermod -aG render $USER
    sudo usermod -aG video $USER
  3. Set up environment variables:

    echo 'export PATH=/opt/rocm/bin:$PATH' >> ~/.bashrc
    echo 'export HSA_OVERRIDE_GFX_VERSION=10.3.0' >> ~/.bashrc
    source ~/.bashrc
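
After logging out and back in (so the new group memberships take effect), confirm the ROCm stack can see the card:

# Should list your GPU as an HSA agent
rocminfo

# Should show temperature/clock/VRAM stats for the card
rocm-smi

# Confirm your user picked up the render and video groups
groups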

Configuring Ollama for AMD GPUs

# Configure Ollama for AMD GPUs
export OLLAMA_COMPUTE_TYPE=rocm

# For specific AMD GPU settings
export HSA_OVERRIDE_GFX_VERSION=10.3.0

# Start Ollama
ollama serve

Verifying AMD GPU Support

# Check if ROCm is detected
rocm-smi

# Check Ollama logs
ollama run mistral "Are you using my GPU?" --verbose

AMD Docker Setup

# Set up Docker container with ROCm
docker run --device=/dev/kfd --device=/dev/dri \
    --security-opt seccomp=unconfined \
    --group-add render \
    -p 11434:11434 \
    -e OLLAMA_COMPUTE_TYPE=rocm \
    -e HSA_OVERRIDE_GFX_VERSION=10.3.0 \
    ollama/ollama:rocm

Intel GPU Setup

Intel Arc GPUs can accelerate Ollama through oneAPI integration.

Prerequisites

  1. Install the Intel GPU drivers:

    # Ubuntu
    sudo apt update
    sudo apt install -y intel-opencl-icd intel-level-zero-gpu level-zero
    
    # Install Intel oneAPI base toolkit
    wget https://registrationcenter-download.intel.com/akdlm/irc_nas/18673/l_BaseKit_p_2022.2.0.262_offline.sh
    sudo sh l_BaseKit_p_2022.2.0.262_offline.sh
  2. Source the oneAPI environment:

    echo 'source /opt/intel/oneapi/setvars.sh' >> ~/.bashrc
    source ~/.bashrc

Configuring Ollama for Intel GPUs

# Enable Intel GPU acceleration
export NEOCommandLine="-cl-intel-greater-than-4GB-buffer-required"
export OLLAMA_COMPUTE_TYPE=sycl

# Start Ollama
ollama serve

Verifying Intel GPU Support

# Check the oneAPI configuration
sycl-ls

# Test with Ollama
ollama run mistral "Are you using my GPU?" --verbose

Troubleshooting GPU Issues

Common NVIDIA Issues

Issue | Solution
CUDA not found | Verify the CUDA installation: nvcc --version
Insufficient memory | Use a smaller quantization (e.g. ollama run mistral:7b-q4_0) or reduce the context window (num_ctx)
Multiple GPU conflict | Specify a device: export CUDA_VISIBLE_DEVICES=0
Driver/CUDA mismatch | Install compatible driver and toolkit versions (see NVIDIA's compatibility matrix)

Common AMD Issues

Issue | Solution
ROCm device not found | Check the installation: rocm-smi
HIP runtime error | Set HSA_OVERRIDE_GFX_VERSION=10.3.0
Permission issues | Add your user to the render group: sudo usermod -aG render $USER

Common Intel Issues

Issue | Solution
GPU not detected | Verify the driver installation: clinfo
Memory allocation failed | Set the -cl-intel-greater-than-4GB-buffer-required build option
Driver too old | Update the Intel GPU driver
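
When an issue does not fit the tables above, a quick inventory of what each vendor stack can actually see narrows things down (this sketch only calls a tool if it is installed):

# Cross-vendor GPU visibility check
command -v nvidia-smi >/dev/null && nvidia-smi --query-gpu=name,driver_version --format=csv
command -v rocm-smi   >/dev/null && rocm-smi --showproductname
command -v sycl-ls    >/dev/null && sycl-ls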

Performance Optimization

NVIDIA Performance Tips

# Use mixed precision for better performance
export OLLAMA_COMPUTE_TYPE=float16

# For large models on limited VRAM
export OLLAMA_GPU_LAYERS=35

AMD Performance Tips

# Adjust compute type for better performance
export OLLAMA_COMPUTE_TYPE=float16

# For large models on limited VRAM
export HIP_VISIBLE_DEVICES=0
export OLLAMA_GPU_LAYERS=28

Intel Performance Tips

# Optimize for Intel GPUs
export OLLAMA_COMPUTE_TYPE=float16
export SYCL_CACHE_PERSISTENT=1

Multi-GPU Configuration

For systems with multiple GPUs:

# Use specific GPUs (comma-separated, zero-indexed)
export CUDA_VISIBLE_DEVICES=0,1  # NVIDIA
export HIP_VISIBLE_DEVICES=0,1   # AMD

# Set number of GPUs to use
export OLLAMA_NUM_GPU=2
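
An alternative to spanning one instance across GPUs is pinning one Ollama instance per GPU, each listening on its own port via OLLAMA_HOST, and load-balancing across them (a sketch, assuming two NVIDIA cards):

# Instance 1: GPU 0 on the default port
CUDA_VISIBLE_DEVICES=0 OLLAMA_HOST=0.0.0.0:11434 ollama serve &

# Instance 2: GPU 1 on a second port
CUDA_VISIBLE_DEVICES=1 OLLAMA_HOST=0.0.0.0:11435 ollama serve &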

Real-World Deployment Examples

High-Performance Server (4x NVIDIA RTX 4090)

# Create a systemd service
sudo nano /etc/systemd/system/ollama.service

# /etc/systemd/system/ollama.service should contain:
[Unit]
Description=Ollama Service
After=network.target

[Service]
Environment="OLLAMA_HOST=0.0.0.0:11434"
Environment="CUDA_VISIBLE_DEVICES=0,1,2,3"
Environment="OLLAMA_COMPUTE_TYPE=float16"
Environment="OLLAMA_NUM_GPU=4"
ExecStart=/usr/local/bin/ollama serve
Restart=always
User=ollama

[Install]
WantedBy=multi-user.target
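
Then reload systemd and start the service (standard systemd workflow; the ollama user referenced in the unit must already exist):

# Pick up the new unit, enable it at boot, and start it now
sudo systemctl daemon-reload
sudo systemctl enable --now ollama

# Confirm it is running and that the GPUs were picked up
systemctl status ollama
journalctl -u ollama --no-pager | tail -n 20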

Mixed GPU Environment (NVIDIA + AMD)

For environments with both NVIDIA and AMD GPUs:

# For NVIDIA
CUDA_VISIBLE_DEVICES=0 ollama serve

# For AMD (separate instance; ollama serve has no --port flag, so set the port via OLLAMA_HOST)
HIP_VISIBLE_DEVICES=0 OLLAMA_COMPUTE_TYPE=rocm OLLAMA_HOST=0.0.0.0:11435 ollama serve

NixOS GPU Configuration

For NixOS users, configure GPU acceleration in configuration.nix:

{ config, pkgs, ... }:

{
  # Enable the NVIDIA driver and CUDA
  hardware.opengl.enable = true;
  services.xserver.videoDrivers = [ "nvidia" ]; # Load the proprietary NVIDIA driver
  hardware.nvidia.package = config.boot.kernelPackages.nvidiaPackages.stable;
  hardware.nvidia.modesetting.enable = true;

  # Enable the Ollama service with GPU acceleration
  services.ollama = {
    enable = true;
    acceleration = "cuda"; # Or "rocm" for AMD; null for CPU-only
    package = pkgs.ollama;
    environmentFiles = [ "/etc/ollama/env.conf" ]; # Custom environment variables
  };

  # /etc/ollama/env.conf might contain, for example:
  # OLLAMA_COMPUTE_TYPE=float16
  # OLLAMA_HOST=0.0.0.0:11434
}
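
Apply the configuration with the usual rebuild:

# Rebuild and switch to the new configuration
sudo nixos-rebuild switch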

Next Steps

After configuring GPU acceleration for Ollama:

  • Explore available models optimized for GPU acceleration
  • Set up advanced configurations for optimal performance
  • Try real-world DevOps usage examples
  • Set up Open WebUI for a graphical interface