LLMOps Guide

Model Deployment

Ray Serve Configuration

apiVersion: ray.io/v1alpha1
kind: RayService
metadata:
  name: llm-inference
spec:
  serviceUnhealthySecondThreshold: 300
  deploymentUnhealthySecondThreshold: 300
  # Ray Serve application definition (serveConfigV2 is a YAML string)
  serveConfigV2: |
    applications:
      - name: llm-app
        import_path: llm_server.app:deployment   # placeholder for your Serve app
        deployments:
          - name: LLMDeployment
            num_replicas: 2
            ray_actor_options:
              num_gpus: 1
  rayClusterConfig:
    headGroupSpec:
      rayStartParams: {}
      template:
        spec:
          containers:
            - name: ray-head
              image: llm-server:latest
    workerGroupSpecs:
      - groupName: gpu-workers
        replicas: 2
        rayStartParams:
          num-cpus: "16"
          num-gpus: "1"
        template:
          spec:
            containers:
              - name: ray-worker
                image: llm-server:latest
                env:
                  - name: MODEL_NAME
                    value: "llama2-7b"
                  - name: BATCH_SIZE
                    value: "4"

Model Monitoring

Prometheus Rules

# Prometheus Operator rule group for LLM inference SLO alerts.
# The llm_inference_* metric names must match what the serving application exports.
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: llm-monitoring
spec:
  groups:
  - name: LLMMetrics
    rules:
    # Mean inference latency over the last 5 minutes exceeds 1 second
    - alert: HighLatency
      expr: |
        rate(llm_inference_duration_seconds_sum[5m])
        /
        rate(llm_inference_duration_seconds_count[5m])
        > 1.0
      for: 5m
      labels:
        severity: warning
      annotations:
        summary: "LLM inference latency above 1s for 5 minutes"
    # More than 1% of inference requests are failing
    - alert: HighErrorRate
      expr: |
        rate(llm_inference_errors_total[5m])
        /
        rate(llm_inference_requests_total[5m])
        > 0.01
      for: 5m
      labels:
        severity: critical
      annotations:
        summary: "LLM inference error rate above 1%"
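
These alerts only fire if Prometheus is actually scraping the inference metrics. A minimal ServiceMonitor sketch, assuming the Serve pods sit behind a Service labelled app: llm-inference that exposes a port named metrics (both names are placeholders to adapt to your deployment):

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: llm-inference
spec:
  selector:
    matchLabels:
      app: llm-inference       # placeholder label on the metrics Service
  endpoints:
    - port: metrics            # named Service port exposing /metrics
      path: /metrics
      interval: 30s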

Performance Optimization

Triton Inference Server

# KServe InferenceService running NVIDIA Triton Inference Server
# against the model repository referenced by storageUri.
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: llm-triton
spec:
  predictor:
    containerConcurrency: 4    # max in-flight requests per replica
    triton:
      storageUri: s3://models/llm
      resources:
        limits:
          cpu: "8"
          memory: "16Gi"
          nvidia.com/gpu: "1"
      env:
        - name: TRITON_CACHE_CONFIG
          value: |
            {
              "cache_size": "8GB",
              "cache_policy": "LRU"
            }
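
Raw resource limits are only half of the story; replica scaling determines sustained throughput and cost. A hedged sketch of the KServe predictor scaling knobs (minReplicas, maxReplicas, scaleMetric, scaleTarget) applied to the same InferenceService, with illustrative values:

apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: llm-triton
spec:
  predictor:
    minReplicas: 1            # keep one warm replica to avoid cold starts
    maxReplicas: 4            # cap GPU spend
    scaleMetric: concurrency  # scale on in-flight requests per replica
    scaleTarget: 4            # target concurrency before adding a replica
    containerConcurrency: 4
    triton:
      storageUri: s3://models/llm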

Best Practices

  1. Model Management
    • Version control
    • A/B testing
    • Canary deployment (see the sketch after this list)
    • Model registry
  2. Observability
    • Performance metrics
    • Token usage
    • Response quality
    • Cost tracking
  3. Optimization
    • Quantization
    • Batching
    • Caching
    • Load balancing
  4. Security
    • Input validation
    • Output filtering
    • Rate limiting
    • Access control
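
For the canary deployment practice above, KServe can shift a percentage of traffic to a new revision via canaryTrafficPercent. A minimal sketch, assuming the llm-triton service from the previous section and a placeholder URI for the candidate model:

apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: llm-triton
spec:
  predictor:
    canaryTrafficPercent: 10           # send 10% of traffic to the new revision
    triton:
      storageUri: s3://models/llm-v2   # placeholder path to the candidate model
      resources:
        limits:
          cpu: "8"
          memory: "16Gi"
          nvidia.com/gpu: "1"

Once the latency and error-rate alerts stay quiet for the canary, promote it by raising canaryTrafficPercent to 100 or removing the field so the new revision takes all traffic.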
