Models and Fine-tuning

This guide covers the available models in Ollama, how to use them, and techniques for customizing models to suit your specific requirements.

Available Models

Ollama supports a variety of open-source LLMs. Here are some of the most commonly used models:

General-Purpose Models

| Model | Size | Description | Command |
| --- | --- | --- | --- |
| Llama 2 | 7B to 70B | Meta's general-purpose model | ollama pull llama2 |
| Mistral | 7B | High-quality open-source model | ollama pull mistral |
| Mixtral | 8x7B | Mixture-of-experts model | ollama pull mixtral |
| Phi-2 | 2.7B | Microsoft's compact model | ollama pull phi |
| Neural Chat | 7B | Optimized for chat | ollama pull neural-chat |
| Vicuna | 7B to 33B | Fine-tuned Llama model | ollama pull vicuna |

Code-Specialized Models

| Model | Size | Description | Command |
| --- | --- | --- | --- |
| CodeLlama | 7B to 34B | Code-focused Llama variant | ollama pull codellama |
| WizardCoder | 7B to 34B | Fine-tuned for code tasks | ollama pull wizardcoder |
| DeepSeek Coder | 6.7B to 33B | Specialized for code | ollama pull deepseek-coder |

Small/Efficient Models

| Model | Size | Description | Command |
| --- | --- | --- | --- |
| TinyLlama | 1.1B | Compact model for limited resources | ollama pull tinyllama |
| Gemma | 2B to 7B | Google's lightweight model | ollama pull gemma |
| Phi-2 | 2.7B | Efficient and compact | ollama pull phi |

Multilingual Models

| Model | Description | Command |
| --- | --- | --- |
| BLOOM | Multilingual capabilities | ollama pull bloom |
| Qwen | Chinese and English | ollama pull qwen |
| Japanese Stable LM | Japanese language | ollama pull stablej |

Model Management

Listing Models

# List all downloaded models
ollama list

Pulling Models

# Pull a specific model version
ollama pull mistral:7b-v0.1
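
Pulls are plain CLI calls, so they script cleanly. A minimal sketch for provisioning a workstation with a baseline set of models (the model list here is only an example):

# Pull a baseline set of models in one go
for model in mistral codellama tinyllama; do
  ollama pull "$model"
done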

Removing Models

# Remove a model
ollama rm mistral

Model Parameters

Control model behavior with these parameters:

| Parameter | Description | Range |
| --- | --- | --- |
| temperature | Controls randomness | 0.0 - 2.0 |
| top_p | Nucleus sampling threshold | 0.0 - 1.0 |
| top_k | Limits vocabulary to top K tokens | 1 - 100+ |
| context_length | Maximum context window size | Model dependent |
| seed | Random seed for reproducibility | Any integer |

Example usage:

# Run a model and adjust parameters from inside the interactive session
ollama run mistral
>>> /set parameter temperature 0.7
>>> /set parameter top_p 0.9
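
Parameters can also be passed per request through Ollama's local REST API. A minimal sketch, assuming the server is listening on its default address (localhost:11434) and the mistral model has been pulled:

# Send a one-off request with explicit sampling options
curl http://localhost:11434/api/generate -d '{
  "model": "mistral",
  "prompt": "Summarize the benefits of GitOps in three bullet points.",
  "stream": false,
  "options": { "temperature": 0.7, "top_p": 0.9, "seed": 42 }
}'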

Customizing Models with Modelfiles

Ollama uses Modelfiles (similar to Dockerfiles) to create custom model configurations.

Basic Modelfile Example

FROM mistral:latest
PARAMETER temperature 0.7
SYSTEM You are an expert DevOps engineer specializing in cloud infrastructure.

Save this in a file named Modelfile and create a custom model:

ollama create devops-assistant -f ./Modelfile
ollama run devops-assistant
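
Once created, the custom model behaves like any other local model; for example (the question is illustrative):

# Query the custom model directly from the CLI
ollama run devops-assistant "How should I structure Terraform state for multiple environments?"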

Advanced Modelfile Example

FROM codellama:latest
PARAMETER temperature 0.3
PARAMETER top_p 0.8
PARAMETER stop "```"
TEMPLATE """
<system>
You are a senior software developer specialized in infrastructure as code, container orchestration, and CI/CD pipelines.
</system>

<user>
{{.Prompt}}
</user>

<assistant>
"""

Modelfile Commands Reference

| Command | Description | Example |
| --- | --- | --- |
| FROM | Base model | FROM mistral:latest |
| PARAMETER | Set inference parameter | PARAMETER temperature 0.7 |
| SYSTEM | Set system message | SYSTEM You are a helpful assistant |
| TEMPLATE | Define chat template | TEMPLATE <s>{{.System}}</s>{{.Prompt}} |
| ADAPTER | Apply LoRA adapter | ADAPTER ./adapter.bin |
| MESSAGE | Add example conversation | MESSAGE user "Hi" |
| PROMPT | Default prompt | PROMPT Answer in bullet points |
| LICENSE | Specify model license | LICENSE MIT |

Fine-tuning with Custom Data

While Ollama doesn't directly support fine-tuning, you can use pre-fine-tuned models and adapt them with Modelfiles.

Using External Fine-tuned Models

  1. Convert the model to GGUF format

  2. Import into Ollama:

# Import a fine-tuned model
ollama create custom-model -f Modelfile

With Modelfile:

FROM ./path/to/fine-tuned-model.gguf
PARAMETER temperature 0.7
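
Putting the two steps together, a typical flow looks roughly like this. It assumes a locally fine-tuned Hugging Face checkpoint and a llama.cpp checkout for its GGUF converter; the script name and paths are illustrative and vary between llama.cpp versions:

# 1. Convert the fine-tuned checkpoint to GGUF with llama.cpp's converter
python llama.cpp/convert_hf_to_gguf.py ./my-finetuned-model --outfile fine-tuned-model.gguf

# 2. Register the GGUF file with Ollama via the Modelfile shown above, then test it
ollama create custom-model -f Modelfile
ollama run custom-model "Hello, who are you?"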

Behavior Fine-tuning with Examples

You can "soft fine-tune" model behavior by providing examples in the Modelfile:

FROM mistral:latest
SYSTEM You are a DevOps automation expert who explains complex concepts clearly.

# Include examples of desired behavior
MESSAGE user "How can I automate AWS infrastructure provisioning?"
MESSAGE assistant "For AWS infrastructure automation, I recommend these approaches:

1. **Terraform**: Infrastructure as Code (IaC) tool with declarative configuration
   - Maintains state of resources
   - Supports modules for reusability
   - Works with multiple cloud providers

2. **AWS CloudFormation**: Native AWS IaC service
   - JSON/YAML templates
   - Stack management with change sets
   - Deep integration with AWS services

3. **AWS CDK**: Define infrastructure using familiar programming languages
   - TypeScript, Python, Java support
   - Compiles to CloudFormation
   - Object-oriented approach

For most teams, Terraform offers the best balance of flexibility and functionality."""
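
As with any Modelfile, build the example-tuned configuration and try it out (the model name is just an example):

# Create and test the behavior-tuned model
ollama create devops-tutor -f ./Modelfile
ollama run devops-tutor "Compare Terraform and CloudFormation for a small team."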

Model Quantization

Ollama supports various quantization levels to balance performance and resource usage:

| Quantization | Size Impact | Performance | Command |
| --- | --- | --- | --- |
| Q4_K_M | 3-4GB (7B model) | Good | ollama pull mistral:7b-q4_k_m |
| Q5_K_M | 4-5GB (7B model) | Better | ollama pull mistral:7b-q5_k_m |
| Q8_0 | 7-8GB (7B model) | Best | ollama pull mistral:7b-q8_0 |

For resource-constrained environments, use more aggressive quantization:

# Pull a highly quantized model
ollama pull tinyllama:1.1b-q4_0
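
Choose the quantization level against the memory actually free on the host. A quick check on Linux (free ships with most distributions):

# Show available memory before picking a quantization level
free -h | awk '/^Mem:/ {print "Available memory: " $7}'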

RAG (Retrieval-Augmented Generation)

Enhance models with external knowledge using RAG:

#!/bin/bash

# Simple RAG implementation with Ollama
MODEL="mistral:latest"
QUERY="What are the key components of a Kubernetes cluster?"
CONTEXT_FILE="kubernetes-docs.txt"

# Get context from a document
CONTEXT=$(grep -i "kubernetes components\|control plane\|node components" "$CONTEXT_FILE" | head -n 15)

# Create prompt with context (printf expands the newlines)
PROMPT=$(printf "Based on the following information:\n\n%s\n\nPlease answer: %s" "$CONTEXT" "$QUERY")

# Send to Ollama; the prompt is passed as a positional argument
ollama run "$MODEL" "$PROMPT"
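
For retrieval that goes beyond grep, Ollama also exposes an embeddings endpoint that can be used to rank document chunks by similarity before building the prompt. A minimal call, assuming an embedding-capable model such as nomic-embed-text has been pulled:

# Generate an embedding for one document chunk via the local API
curl http://localhost:11434/api/embeddings -d '{
  "model": "nomic-embed-text",
  "prompt": "The Kubernetes control plane consists of the API server, etcd, the scheduler, and the controller manager."
}'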

Practical Model Selection Guide

| Use Case | Recommended Model | Why |
| --- | --- | --- |
| General chat | mistral:7b | Good balance of size and capability |
| Code assistance | codellama:7b | Specialized for code understanding/generation |
| Resource-constrained | tinyllama:1.1b | Small memory footprint |
| Technical documentation | neural-chat:7b | Clear instruction following |
| Complex reasoning | mixtral:8x7b or llama2:70b | Sophisticated reasoning capabilities |

DevOps-Specific Model Configuration

For DevOps-specific tasks, create a specialized model configuration:

# DevOps Assistant Modelfile
FROM codellama:latest
PARAMETER temperature 0.3
PARAMETER top_p 0.8
SYSTEM You are an expert in DevOps practices, cloud infrastructure, CI/CD pipelines, and infrastructure as code. You provide concise, accurate answers with practical examples when appropriate. You're familiar with AWS, Azure, GCP, Kubernetes, Docker, Terraform, Ansible, GitHub Actions, and other DevOps tools.

# Example prompt for debugging
PROMPT """
I'm encountering the following issue with my CI/CD pipeline or infrastructure:

{{.Input}}

Please help me by:
1. Identifying potential causes
2. Suggesting troubleshooting steps
3. Recommending a solution
4. Providing a brief example if applicable
"""

Create this model:

ollama create devops-assistant -f ./DevOps-Modelfile
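
In day-to-day use you can hand it real artifacts, for example a failing pipeline log (the file name is illustrative):

# Ask the assistant to diagnose a failing pipeline from its log
ollama run devops-assistant "Diagnose this CI failure and suggest a fix: $(cat failed-pipeline.log)"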

Next Steps

Now that you understand Ollama's models:

  1. Fine-tune a model using an external tool like LLaMA Factory
  2. Configure GPU acceleration to speed up model inferencing
  3. Set up Open WebUI for a graphical interface
  4. Explore DevOps usage examples for practical applications