Cloud Run
Deploying and managing Google Cloud Run for containerized applications
Google Cloud Run is a fully managed compute platform that allows you to run stateless containers directly on top of Google's scalable infrastructure. It abstracts away infrastructure management so you can focus on developing applications in the language of your choice.
Key Features
Fully managed: No infrastructure to provision or manage
Serverless: Pay only for the resources you use
Scale to zero: No charges when your service isn't running
Autoscaling: Automatically scales based on traffic
Multiple languages: Supports any language or framework that can be packaged as a container image
Custom domains: Connect your own domain names
Private services: Restrict access to authorized users or internal services
Traffic splitting: Gradually roll out new versions with percentage-based traffic splitting
VPC connectivity: Connect to VPC resources
Cloud SQL connection: Direct connection to Cloud SQL databases
Concurrency: Process multiple requests per container instance
WebSockets: Support for WebSockets and HTTP/2
Deploying Cloud Run with Terraform
Basic Service Deployment
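A minimal sketch of a public service using the `google_cloud_run_v2_service` resource. The service name, region, scaling limits, and image (Google's public hello container) are assumptions to adapt to your project.

```hcl
resource "google_cloud_run_v2_service" "default" {
  name     = "hello-service"
  location = "us-central1"

  template {
    containers {
      image = "us-docker.pkg.dev/cloudrun/container/hello"

      resources {
        limits = {
          cpu    = "1"
          memory = "512Mi"
        }
      }
    }

    scaling {
      min_instance_count = 0
      max_instance_count = 10
    }
  }
}

# Allow unauthenticated invocations so the service is publicly reachable.
resource "google_cloud_run_v2_service_iam_member" "public" {
  name     = google_cloud_run_v2_service.default.name
  location = google_cloud_run_v2_service.default.location
  role     = "roles/run.invoker"
  member   = "allUsers"
}
```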
Cloud Run Service with VPC Access
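A sketch that attaches a Serverless VPC Access connector so the service can reach private resources. The connector name, network, IP range, and image path are placeholders.

```hcl
resource "google_vpc_access_connector" "connector" {
  name          = "run-connector"
  region        = "us-central1"
  network       = "default"
  ip_cidr_range = "10.8.0.0/28"
}

resource "google_cloud_run_v2_service" "internal_api" {
  name     = "internal-api"
  location = "us-central1"

  template {
    containers {
      image = "us-central1-docker.pkg.dev/my-project/apps/internal-api:latest"
    }

    # Route only private-range traffic through the connector.
    vpc_access {
      connector = google_vpc_access_connector.connector.id
      egress    = "PRIVATE_RANGES_ONLY"
    }
  }
}
```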
Cloud Run Service with Traffic Splitting
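A sketch of percentage-based traffic splitting between the latest revision and a pinned earlier revision. The revision and image names are placeholders.

```hcl
resource "google_cloud_run_v2_service" "api" {
  name     = "api"
  location = "us-central1"

  template {
    revision = "api-v2"
    containers {
      image = "us-central1-docker.pkg.dev/my-project/apps/api:v2"
    }
  }

  # Send 90% of traffic to the newest revision, keep 10% on the old one.
  traffic {
    type    = "TRAFFIC_TARGET_ALLOCATION_TYPE_LATEST"
    percent = 90
  }

  traffic {
    type     = "TRAFFIC_TARGET_ALLOCATION_TYPE_REVISION"
    revision = "api-v1"
    percent  = 10
  }
}
```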
Cloud Run with Custom Domain Mapping
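A sketch mapping a custom domain to the `api` service from the previous example. The domain must already be verified for the project; the domain name and project ID shown are placeholders.

```hcl
resource "google_cloud_run_domain_mapping" "default" {
  name     = "api.example.com"
  location = "us-central1"

  metadata {
    # The namespace is the project ID.
    namespace = "my-project-id"
  }

  spec {
    route_name = google_cloud_run_v2_service.api.name
  }
}
```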
Deploying Cloud Run with gcloud CLI
Building and Deploying a Container
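A typical build-and-deploy flow, assuming an existing Artifact Registry repository named `apps`; project, region, and service names are placeholders.

```bash
# Build the image with Cloud Build and push it to Artifact Registry.
gcloud builds submit \
  --tag us-central1-docker.pkg.dev/my-project/apps/my-service:v1

# Deploy the image to Cloud Run as a public service.
gcloud run deploy my-service \
  --image us-central1-docker.pkg.dev/my-project/apps/my-service:v1 \
  --region us-central1 \
  --platform managed \
  --allow-unauthenticated
```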
Updating an Existing Service
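Deploying again under the same service name creates a new revision, and `gcloud run services update` changes configuration without rebuilding the image. The values below are placeholders.

```bash
# Roll out a new image; settings not specified here carry over from the
# previous revision.
gcloud run deploy my-service \
  --image us-central1-docker.pkg.dev/my-project/apps/my-service:v2 \
  --region us-central1

# Adjust resources and scaling without changing the image.
gcloud run services update my-service \
  --region us-central1 \
  --memory 512Mi \
  --concurrency 80 \
  --max-instances 20
```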
Creating a Private Service
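A sketch of a private service: deploy without public access, then grant `roles/run.invoker` to the identities allowed to call it. The service account shown is a placeholder.

```bash
# Deploy without allowing unauthenticated invocations.
gcloud run deploy internal-api \
  --image us-central1-docker.pkg.dev/my-project/apps/internal-api:v1 \
  --region us-central1 \
  --no-allow-unauthenticated

# Grant a specific service account permission to invoke the service.
gcloud run services add-iam-policy-binding internal-api \
  --region us-central1 \
  --member "serviceAccount:caller@my-project.iam.gserviceaccount.com" \
  --role "roles/run.invoker"
```

Authorized callers then pass an identity token, for example `Authorization: Bearer $(gcloud auth print-identity-token)`.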
Configure VPC Connector
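A sketch that creates a Serverless VPC Access connector and attaches it to an existing service; the connector name, network, and IP range are placeholders.

```bash
# Create the connector in the same region as the service.
gcloud compute networks vpc-access connectors create run-connector \
  --region us-central1 \
  --network default \
  --range 10.8.0.0/28

# Route the service's private-range egress through the connector.
gcloud run services update internal-api \
  --region us-central1 \
  --vpc-connector run-connector \
  --vpc-egress private-ranges-only
```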
Configure Environment Variables and Secrets
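A sketch setting plain environment variables and exposing Secret Manager secrets as environment variables; the variable and secret names are placeholders.

```bash
gcloud run services update my-service \
  --region us-central1 \
  --set-env-vars "ENVIRONMENT=production,LOG_LEVEL=info" \
  --set-secrets "DB_PASSWORD=db-password:latest,API_KEY=api-key:1"
```

The service's runtime service account needs the Secret Manager Secret Accessor role on each referenced secret.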
Real-World Example: Microservices Application
This example demonstrates a complete microservices architecture using Cloud Run:
Step 1: Infrastructure Setup with Terraform
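A condensed Terraform sketch of the infrastructure: a shared runtime service account and two Cloud Run services. The project ID, region, image paths, and `TOKEN_TTL_SECONDS` variable are placeholders, and a production setup would usually keep internal services private behind a gateway or load balancer.

```hcl
locals {
  project_id = "my-project-id"
  region     = "us-central1"
}

# Dedicated runtime identity for the microservices.
resource "google_service_account" "run_services" {
  account_id   = "run-microservices"
  display_name = "Cloud Run microservices runtime"
}

resource "google_cloud_run_v2_service" "auth" {
  name     = "auth-service"
  location = local.region

  template {
    service_account = google_service_account.run_services.email
    containers {
      image = "${local.region}-docker.pkg.dev/${local.project_id}/apps/auth-service:latest"
      env {
        name  = "TOKEN_TTL_SECONDS"
        value = "3600"
      }
    }
  }
}

resource "google_cloud_run_v2_service" "product" {
  name     = "product-service"
  location = local.region

  template {
    service_account = google_service_account.run_services.email
    containers {
      image = "${local.region}-docker.pkg.dev/${local.project_id}/apps/product-service:latest"
    }
  }
}

# Allow public invocation; the services enforce their own token checks.
resource "google_cloud_run_v2_service_iam_member" "public" {
  for_each = {
    auth    = google_cloud_run_v2_service.auth
    product = google_cloud_run_v2_service.product
  }
  name     = each.value.name
  location = each.value.location
  role     = "roles/run.invoker"
  member   = "allUsers"
}
```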
Step 2: Example Authentication Service Code
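A minimal authentication service sketch in TypeScript with Express and `jsonwebtoken`. The hard-coded demo credentials and fallback secret are placeholders for a real user store and a secret injected from Secret Manager.

```typescript
import express from "express";
import jwt from "jsonwebtoken";

const app = express();
app.use(express.json());

const JWT_SECRET = process.env.JWT_SECRET ?? "dev-only-secret";

// Health check used by Cloud Run and load balancers.
app.get("/healthz", (_req, res) => res.status(200).send("ok"));

// Issue a short-lived token for valid credentials (stubbed here).
app.post("/login", (req, res) => {
  const { username, password } = req.body ?? {};
  if (username !== "demo" || password !== "demo") {
    return res.status(401).json({ error: "invalid credentials" });
  }
  const token = jwt.sign({ sub: username }, JWT_SECRET, { expiresIn: "1h" });
  return res.json({ token });
});

// Cloud Run provides the port via the PORT environment variable.
const port = Number(process.env.PORT) || 8080;
const server = app.listen(port, () => console.log(`auth-service on ${port}`));

// Shut down cleanly when Cloud Run sends SIGTERM.
process.on("SIGTERM", () => server.close(() => process.exit(0)));
```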
Step 3: Example Product Service Code
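A matching product service sketch that verifies the token issued by the authentication service before serving data; the in-memory catalog stands in for a real Cloud SQL or Firestore backend.

```typescript
import express from "express";
import jwt from "jsonwebtoken";

const app = express();
const JWT_SECRET = process.env.JWT_SECRET ?? "dev-only-secret";

const products = [
  { id: "1", name: "Widget", price: 9.99 },
  { id: "2", name: "Gadget", price: 19.99 },
];

// Reject requests without a valid bearer token.
app.use((req, res, next) => {
  const header = req.headers.authorization ?? "";
  const token = header.startsWith("Bearer ") ? header.slice(7) : "";
  try {
    jwt.verify(token, JWT_SECRET);
    next();
  } catch {
    res.status(401).json({ error: "unauthorized" });
  }
});

app.get("/products", (_req, res) => res.json(products));
app.get("/products/:id", (req, res) => {
  const product = products.find((p) => p.id === req.params.id);
  return product ? res.json(product) : res.status(404).json({ error: "not found" });
});

const port = Number(process.env.PORT) || 8080;
const server = app.listen(port, () => console.log(`product-service on ${port}`));
process.on("SIGTERM", () => server.close(() => process.exit(0)));
```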
Step 4: Example of Frontend Integration
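A browser-side sketch showing how a frontend might call the two services; the `*.run.app` URLs are placeholders for the URLs Cloud Run assigns at deploy time.

```typescript
const AUTH_URL = "https://auth-service-xxxxxxxx-uc.a.run.app";
const PRODUCT_URL = "https://product-service-xxxxxxxx-uc.a.run.app";

// Authenticate against the auth service and return the issued token.
async function login(username: string, password: string): Promise<string> {
  const res = await fetch(`${AUTH_URL}/login`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ username, password }),
  });
  if (!res.ok) throw new Error(`login failed: ${res.status}`);
  const { token } = await res.json();
  return token;
}

// Fetch the catalog using the bearer token from the auth service.
async function listProducts(token: string) {
  const res = await fetch(`${PRODUCT_URL}/products`, {
    headers: { Authorization: `Bearer ${token}` },
  });
  if (!res.ok) throw new Error(`request failed: ${res.status}`);
  return res.json();
}

// Example flow: authenticate, then load the product list.
login("demo", "demo")
  .then((token) => listProducts(token))
  .then((products) => console.log(products))
  .catch((err) => console.error(err));
```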
Best Practices
Container Design
Use distroless or minimal base images
Follow the single responsibility principle
Optimize Dockerfile for layer caching
Implement proper health checks
Handle graceful shutdowns (SIGTERM)
Keep container images small
Performance Optimization
Configure appropriate memory and CPU limits
Minimize container startup time
Implement connection pooling for databases (see the sketch after this list)
Use caching when appropriate
Scale container instances based on actual load
Use concurrency settings effectively
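As referenced above, a sketch of database connection pooling using the Node `pg` client: one pool is created per container instance and shared across concurrent requests. The table and environment variable names are placeholders.

```typescript
import { Pool } from "pg";

// One pool per container instance, reused across concurrent requests.
const pool = new Pool({
  host: process.env.DB_HOST, // e.g. /cloudsql/PROJECT:REGION:INSTANCE for Cloud SQL
  user: process.env.DB_USER,
  password: process.env.DB_PASSWORD,
  database: process.env.DB_NAME,
  max: 5, // keep (max connections x max instances) below the database limit
});

export async function getProductCount(): Promise<number> {
  const result = await pool.query("SELECT COUNT(*) AS count FROM products");
  return Number(result.rows[0].count);
}
```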
Security
Use dedicated service accounts with minimal permissions
Store secrets in Secret Manager
Enable Binary Authorization if needed
Implement proper authentication and authorization
Scan container images for vulnerabilities
Follow the principle of least privilege
Cost Optimization
Use request-based CPU allocation (CPU throttled between requests) for services that don't need background processing
Scale to zero when possible
Use min instances only for critical services
Monitor and set budget alerts
Optimize container image size
Use instance concurrency to handle multiple requests
Networking and Connectivity
Use VPC Service Controls for additional security
Use VPC connectors for private services
Implement proper service-to-service authentication
Configure appropriate connection timeouts
Implement retry logic for network failures
Use Cloud CDN for static content
Common Issues and Troubleshooting
Container Startup Problems
Check container logs in Cloud Logging (see the example query after this list)
Verify correct environment variables
Ensure the container listens on the port provided in the PORT environment variable (8080 by default)
Check for permissions issues with service accounts
Review memory and CPU limits
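An example query for the first item above, reading recent logs for a service with `gcloud logging read`; the service name is a placeholder.

```bash
# Show the 50 most recent log entries for a Cloud Run service.
gcloud logging read \
  'resource.type="cloud_run_revision" AND resource.labels.service_name="my-service"' \
  --limit 50 \
  --format "value(textPayload)"
```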
Connection Issues
Verify VPC connector configuration
Check firewall rules
Ensure IAM permissions are set correctly
Verify service accounts have appropriate permissions
Check Cloud SQL connection settings
Performance Problems
Review concurrency settings
Check for memory leaks
Monitor CPU and memory usage
Verify database connection pooling
Look for slow external API calls
Implement proper caching
Deployment Failures
Check for errors in Cloud Build logs
Verify container can be pulled from registry
Ensure build process completes successfully
Review resource quota limitations
Check for syntax errors in configuration
Further Reading