This directory contains Terraform configurations for managing the complete infrastructure of the Cover-Letter-LLM application on Google Cloud Platform.
```bash
# 1. Setup environment
export GOOGLE_PROJECT="your-project-id"
gcloud auth application-default login

# 2. Deploy to development
cd terraform/environments/development
cp terraform.tfvars.example terraform.tfvars
# Edit terraform.tfvars with your project details
../../deploy.sh development init
../../deploy.sh development apply

# 3. Access your deployed application
terraform output app_url
```
```
terraform/
├── deploy.sh                      # Main deployment script
├── SETUP_GUIDE.md                 # Detailed setup guide
├── README.md                      # This file
├── environments/                  # Environment-specific configurations
│   ├── development/               # Development environment
│   │   ├── main.tf                # Main configuration
│   │   ├── variables.tf           # Input variables
│   │   ├── outputs.tf             # Output values
│   │   └── terraform.tfvars.example
│   ├── staging/                   # Staging environment
│   │   ├── main.tf                # Production-like configuration
│   │   ├── variables.tf           # Staging variables
│   │   ├── outputs.tf             # Staging outputs
│   │   └── terraform.tfvars.example
│   └── production/                # Production environment
│       ├── main.tf                # High-availability configuration
│       ├── variables.tf           # Production variables
│       ├── outputs.tf             # Production outputs
│       └── terraform.tfvars.example
└── modules/                       # Reusable Terraform modules
    ├── compute/                   # Cloud Run services
    │   ├── main.tf                # Cloud Run configuration
    │   ├── variables.tf           # Compute variables
    │   └── outputs.tf             # Service outputs
    ├── database/                  # PostgreSQL database
    │   ├── main.tf                # Cloud SQL configuration
    │   ├── variables.tf           # Database variables
    │   └── outputs.tf             # Database outputs
    ├── redis/                     # Redis cache
    │   ├── main.tf                # Memorystore configuration
    │   ├── variables.tf           # Redis variables
    │   └── outputs.tf             # Redis outputs
    ├── secrets/                   # Secret Manager
    │   ├── main.tf                # Secrets configuration
    │   ├── variables.tf           # Secret variables
    │   └── outputs.tf             # Secret outputs
    ├── networking/                # VPC and networking
    │   ├── main.tf                # Network configuration
    │   ├── variables.tf           # Network variables
    │   └── outputs.tf             # Network outputs
    └── monitoring/                # Monitoring and alerting
        ├── main.tf                # Monitoring setup
        ├── variables.tf           # Monitoring variables
        └── outputs.tf             # Monitoring outputs
```
| Environment | Database | Redis | Scaling | Cost/Month |
|---|---|---|---|---|
| Development | db-f1-micro (1GB) | 1GB Basic | 0-2 instances | ~$50-100 |
| Staging | db-custom-1-2048 (2GB) | 2GB Basic | 0-5 instances | ~$150-300 |
| Production | db-custom-2-4096 (4GB) | 4GB HA | 1-20 instances | ~$300-800 |
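The scaling column above corresponds to Cloud Run autoscaling bounds. A minimal sketch of how the compute module might wire them up, assuming hypothetical `min_instances`/`max_instances` variables (the annotation keys are the standard Cloud Run v1 autoscaling annotations; this is not the repo's actual module code):

```hcl
# Sketch only: variable names are assumptions, not this repo's actual module API.
resource "google_cloud_run_service" "app" {
  name     = "cover-letter-llm"
  location = var.region

  template {
    metadata {
      annotations = {
        # e.g. 0-2 for development, 1-20 for production
        "autoscaling.knative.dev/minScale" = tostring(var.min_instances)
        "autoscaling.knative.dev/maxScale" = tostring(var.max_instances)
      }
    }
    spec {
      containers {
        image = var.container_image
      }
    }
  }
}
```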
```bash
# Use the deployment script
./terraform/deploy.sh development plan
./terraform/deploy.sh development apply
```
Deployments also run automatically via GitHub Actions: plans from the `terraform-infrastructure` branch and applies on `main` branch merges.

Or run Terraform manually:

```bash
cd terraform/environments/development
terraform init
terraform plan
terraform apply
```
Enable the required Google Cloud APIs:

```bash
gcloud services enable compute.googleapis.com run.googleapis.com \
  sql-component.googleapis.com redis.googleapis.com \
  secretmanager.googleapis.com storage-component.googleapis.com \
  monitoring.googleapis.com servicenetworking.googleapis.com
```
Each environment requires a `terraform.tfvars` file:
```hcl
# Required variables
project_id      = "your-gcp-project-id"
container_image = "gcr.io/your-project/cover-letter-llm:latest"

# Optional variables
region      = "us-central1"
domain_name = "coverletter.yourcompany.com"
```
Manual secret creation required:

```bash
# Rails master key
gcloud secrets create cover-letter-llm-dev-rails-master-key --data-file=config/master.key

# Google AI API key
echo "your-api-key" | gcloud secrets create cover-letter-llm-dev-google-api-key --data-file=-
```
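The secrets created above must also be readable by the runtime service account. A hedged sketch of the IAM binding in Terraform (the `service_account_email` variable is an assumption; this repo's secrets module may already handle this):

```hcl
# Sketch only: grant the Cloud Run runtime service account read access to a secret.
resource "google_secret_manager_secret_iam_member" "rails_master_key_access" {
  secret_id = "cover-letter-llm-dev-rails-master-key"
  role      = "roles/secretmanager.secretAccessor"
  member    = "serviceAccount:${var.service_account_email}" # assumed variable
}
```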
For more details, see SETUP_GUIDE.md, the terraform.tfvars.example files, and the ./deploy.sh script. SETUP_GUIDE.md includes a troubleshooting section.

To destroy infrastructure:
```bash
./terraform/deploy.sh development destroy
./terraform/deploy.sh staging destroy
./terraform/deploy.sh production destroy
```
⚠️ Warning: This will permanently delete all infrastructure and data!
This project implements both GitHub Actions automation and serverless backend storage following modern Infrastructure as Code (IaC) best practices.
✅ Already Implemented: see .github/workflows/terraform.yml

Automated workflows:

- `terraform plan` runs on PRs, with results posted as comments
- Applies run from the `terraform-infrastructure` branch
- Production deploys run on `main` branch merges

Benefits:
✅ Already Configured: uses Google Cloud Storage with state locking
Implementation:

```hcl
# In each environment's main.tf
terraform {
  backend "gcs" {
    bucket = "your-terraform-state-bucket"
    prefix = "cover-letter-llm/production"
  }
}
```
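The state bucket itself has to exist before `terraform init` can use it, so it is typically created once out of band. A sketch of that bootstrap resource, assuming the same placeholder bucket name (versioning guards against state corruption):

```hcl
# Sketch only: create the GCS state bucket once, outside the environment configs.
resource "google_storage_bucket" "tf_state" {
  name                        = "your-terraform-state-bucket"
  location                    = "US"
  uniform_bucket_level_access = true

  versioning {
    enabled = true # keep old state versions for recovery
  }
}
```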
Benefits:
```mermaid
graph TD
    A[Developer pushes code] --> B[GitHub Actions triggered]
    B --> C[Terraform init - connects to GCS backend]
    C --> D[Terraform plan/apply]
    D --> E[State stored in Google Cloud Storage]
    E --> F[Infrastructure deployed]
    G[Multiple developers] --> H[All use same remote state]
    H --> I[No conflicts, consistent state]
```
Workflow example:

1. Open a PR: CI runs `terraform plan` and posts the result for review.
2. Merge: CI runs `terraform apply` against the shared remote state.
| Component | Purpose | Benefit |
|---|---|---|
| GitHub Actions | Automation & CI/CD | Consistent deployments, collaboration |
| Remote Backend | State management | Team collaboration, state safety |
| Environment Isolation | Risk management | Safe testing, production protection |
| Manual Production | Safety controls | Prevent accidental production changes |
This setup follows the "Infrastructure as Code" principle: your infrastructure is versioned, reviewable, and reproducible.
Terraform is an open-source Infrastructure as Code (IaC) tool by HashiCorp that allows you to define, provision, and manage cloud infrastructure using configuration files written in HashiCorp Configuration Language (HCL).
Key features: declarative configuration, execution plans, state tracking, and a large ecosystem of providers.

Basic workflow: describe your infrastructure in `.tf` files, for example:
```hcl
resource "google_sql_database_instance" "main" {
  name             = "cover-letter-db"
  database_version = "POSTGRES_15"
  region           = "us-central1"
}
```
```bash
terraform init
terraform plan
terraform apply
```
💾 State Management: Terraform tracks infrastructure in state files.
When to adopt Terraform:

- Before provisioning any cloud infrastructure manually.
- Early in the project, during the setup of your cloud environment (before deploying your Rails app, databases, etc.).
- Before scaling up or introducing additional environments (staging, production, etc.).

Why?

- You avoid "snowflake" environments (where infra is different everywhere).
- It enables repeatable, automated, and auditable infrastructure deployment.
- It makes team collaboration and disaster recovery much easier.

If you already have some manual infrastructure:

- Consider importing existing resources into Terraform (`terraform import`) and start managing them as code.
- Avoid manual changes after adopting Terraform to prevent state drift.

Summary table:

| Terraform Phase | What to do/expect |
|---|---|
| Project start | Plan and write infrastructure code; version it in git |
| Before/after initial cloud setup | Use Terraform to provision cloud resources (servers, DBs, buckets, etc.) |
| Ongoing | Use Terraform for all infra changes; review with `terraform plan` before applying |
| Already have infra? | Import resources into Terraform, then manage via code |

Example use case in this project: for a Rails app like this one, you might use Terraform to:

- Provision GCP/AWS/Azure resources (compute instances, databases, storage, networking)
- Set up DNS, SSL, and load balancers
- Manage infrastructure for different environments (dev, staging, prod)
- Integrate with CI/CD pipelines (GitHub Actions can trigger Terraform plans/applies)

TL;DR: Implement Terraform as early as possible, ideally before manually creating infrastructure. It lets you manage your cloud resources safely, repeatably, and as code: define your infrastructure, let Terraform provision it, and use version control to track changes.
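For the import step recommended above, note that Terraform 1.5+ also supports declarative `import` blocks as an alternative to the `terraform import` CLI. A sketch using the Cloud SQL example from this document (project ID and instance name are placeholders):

```hcl
# Sketch only: adopt an existing Cloud SQL instance into Terraform state.
import {
  to = google_sql_database_instance.main
  id = "projects/your-gcp-project-id/instances/cover-letter-db"
}

resource "google_sql_database_instance" "main" {
  name             = "cover-letter-db"
  database_version = "POSTGRES_15"
  region           = "us-central1"
}
```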
(TL;DR stands for "Too Long; Didn't Read": a quick summary or the main point.)

How to Implement Terraform in Dev for Your Project
What Terraform does: Terraform lets you define all your infrastructure (servers, databases, DNS, etc.) as code in `.tf` files. You can version control this code, review changes, and safely apply updates.
Install Terraform: Follow Terraform installation instructions.
Create a new directory in your repo for infra code (e.g., `infrastructure/` or `terraform/`).
```hcl
provider "aws" {
  region = "us-west-2"
}

resource "aws_db_instance" "default" {
  allocated_storage    = 20
  engine               = "postgres"
  instance_class       = "db.t3.micro"
  db_name              = "mydb" # recent AWS provider versions use db_name, not name
  username             = "myuser"
  password             = "mypassword"
  parameter_group_name = "default.postgres15"
}
```
You'd adjust this for GCP, Azure, etc.
```hcl
provider "cloudflare" {
  email   = "your@email.com"
  api_key = "your_cloudflare_api_key"
}

resource "cloudflare_record" "github_pages" {
  zone_id = "your_zone_id"
  name    = "www"
  value   = "your-username.github.io"
  type    = "CNAME"
  ttl     = 3600
}
```
```bash
cd terraform/
terraform init
terraform plan
terraform apply
```
Best practice: Your terraform (or infrastructure) directory should be at the root of your repository, not inside your app subdirectory.
Why root?

- Separation of concerns: infrastructure code is separate from application code.
- Supports multi-app/monorepo setups: if you later add more apps (e.g., frontend, API), they can share the same infrastructure code.
- Standardization: most teams and cloud providers expect infra code at the root (`/terraform`, `/infrastructure`, etc.).
- CI/CD compatibility: it's easier to trigger infra workflows and manage state files from the root.

Typical structure:
```
/
├── app/          # Your Rails or main app code
├── terraform/    # All Terraform code here (main.tf, variables.tf, etc.)
├── README.md
├── .gitignore
└── …other root files…
```
Summary: Put your terraform directory at the root of your repository. Only put Terraform code inside the app directory if you have a strong, app-specific reason (rare in most projects).
A monorepo (short for "monolithic repository") is a single version-controlled code repository that holds the code for multiple projects, applications, or services, often all the code for a company, organization, or product suite.
Key features of a monorepo: a single repository for many projects, shared tooling and dependencies, and atomic commits that can span multiple projects.
Example Monorepo Structure:
```
/
├── apps/
│   ├── web-frontend/      # React/Next.js frontend
│   └── api-server/        # Rails/Node.js backend
├── libs/
│   ├── auth-lib/          # Shared authentication library
│   └── shared-utils/      # Common utilities
├── infrastructure/
│   └── terraform/         # Infrastructure as Code
├── package.json           # Root package configuration
├── .github/               # CI/CD workflows
└── README.md              # Main documentation
```
| Aspect | Monorepo | Polyrepo |
|---|---|---|
| Structure | All projects in one repo | Each project in its own repo |
| Code Sharing | ✅ Easy to share code across projects | ❌ Requires publishing packages |
| Dependency Management | ✅ Unified dependency management | ❌ Each repo manages its own |
| CI/CD | ✅ Single pipeline for all projects | ❌ Separate pipelines per repo |
| Team Coordination | ✅ Easy cross-project changes | ❌ Requires coordination across repos |
| Repository Size | ❌ Can become very large | ✅ Smaller, focused repositories |
Popular monorepo tools include Nx, Turborepo, Bazel, and Lerna.
For Our Rails + Terraform Project:
```
cover-letter-llm/              # Monorepo root
├── CoverLetterApp/            # Rails application
│   ├── app/
│   ├── config/
│   └── ...
├── terraform/                 # Infrastructure code (at root level)
│   ├── environments/
│   ├── modules/
│   └── ...
├── docs/                      # Project documentation
├── .github/                   # CI/CD workflows
└── README.md                  # Main project README
```
Why This Structure Works:
✅ Use Monorepo When:

❌ Avoid Monorepo When:

Our Project's Monorepo Benefits:
In Summary: A monorepo is a single repository that contains code for multiple related projects, making shared development and management easier. For our Cover-Letter-LLM project, the monorepo approach allows us to manage both the Rails application and Terraform infrastructure in one place, with shared documentation and unified CI/CD workflows.
Configure for Local Development

If you're not using GCP locally, you'll want a separate configuration for local resources (e.g., using LocalStack, Docker, or `null_resource`/external provider for bootstrapping local PostgreSQL). Don't keep the GCP provider enabled in local configs if you're not using GCP locally.
```hcl
terraform {
  required_version = ">= 1.1"
  required_providers {
    docker = {
      source  = "kreuzwerker/docker"
      version = "~> 3.0"
    }
  }
}

provider "docker" {}

resource "docker_image" "postgres" {
  name = "postgres:15"
}

resource "docker_container" "postgres" {
  image = docker_image.postgres.image_id # provider v3 uses image_id (.latest was removed)
  name  = "local-postgres"

  ports {
    internal = 5432
    external = 5432
  }

  env = [
    "POSTGRES_USER=postgres",
    "POSTGRES_PASSWORD=password",
    "POSTGRES_DB=cover_letter_app_dev",
  ]
}
```

This runs a PostgreSQL container on your machine, not in the cloud.

C. Directory Layout for Local/Cloud Separation. Have a structure like:
```
terraform/
├── environments/
│   ├── local/
│   │   └── main.tf        # Local resources (Docker, null_resource, etc.)
│   ├── development/
│   │   └── main.tf        # Cloud resources (GCP, etc.)
│   └── production/
│       └── main.tf
└── modules/
    └── …                  # Shared modules, if any
```
Summary table:

| Environment | Uses GCP provider? | Uses Docker? | Uses GCP resources? | Uses local containers? |
|---|---|---|---|---|
| local | No | Yes | No | Yes |
| dev/prod | Yes | No | Yes | No |

TL;DR: For local, remove GCP provider/resources from your Terraform config and use the Docker provider or local solutions. For dev/prod/staging, use GCP provider/resources as needed. Keep configs organized and separated per environment.
```sh
terraform init
```

```sh
terraform plan
terraform apply
```
If you're still referencing GCP resources, either remove/comment them out for local, or create a new local-only configuration.
Usage instructions:
1. Place this file in a directory (e.g., `terraform/local/`).
2. Run `terraform init` in that directory.
3. Run `terraform apply` to spin up local PostgreSQL and Redis containers using Docker.
4. Set your app's environment variables to connect to `localhost:5432` for Postgres and `localhost:6379` for Redis.

Note:

- Requires Docker to be installed and running.
- These resources are local and disposable; no cloud resources will be created.
- You can customize usernames, passwords, and database names as needed.
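The usage notes mention a Redis container alongside Postgres, but the Docker-provider example earlier only covers the database. A matching hedged sketch for Redis (image tag and container name are assumptions):

```hcl
# Sketch only: a local Redis container managed by the Docker provider.
resource "docker_image" "redis" {
  name = "redis:7"
}

resource "docker_container" "redis" {
  image = docker_image.redis.image_id
  name  = "local-redis"

  ports {
    internal = 6379
    external = 6379
  }
}
```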
Summary:

1. Upgrade the Terraform CLI to at least 1.5.
2. Separate local and cloud configs for clarity and safety.
3. Initialize, format, validate, commit, document, and open a PR as described.
Created the script run_local_infra.sh (all commands should be run from the directory that contains main.tf).
How it works: When you run the script (which runs `terraform apply`), Terraform checks for the necessary Docker images and containers. If the Postgres (or Redis) container does not exist or is not running, Terraform will create and start it for you.
Assuming you are in your Terraform directory (e.g., terraform/local/):
```bash
# Make sure Docker is running (see below)
docker info

# Run your script (or run terraform manually)
./run_local_infra.sh
# or, step by step:
terraform init
terraform apply
```

```bash
sudo systemctl start docker    # if not already running
sudo systemctl status docker   # to check status
docker ps
```
Terraform practice: Keep your local and cloud Terraform configs separate. Use modules for shared logic, but have distinct environments (local, dev, prod). Don't apply GCP modules in your local environment unless you really need GCP resources for local testing.

Summary table:

| Resource | Local Dev | GCP for Local Dev? | Notes |
|---|---|---|---|
| PostgreSQL/Redis | Docker/local | No | Use containers or a local install |
| GCS bucket (file storage) | Local FS/MinIO | No | Use the local filesystem or MinIO for emulation |
| GCP Secret Manager | `.env` files | No | Only use if testing cloud-specific secrets logic |
| BigQuery, Pub/Sub, etc. | Local, if possible | Sometimes | Only if you need cloud features or can't emulate |

TL;DR: Don't use GCP for local dev unless you have a specific, unavoidable need. Use local services (via Docker, etc.) for speed, cost, and reliability. Save GCP usage for staging, integration, and production environments.
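For the MinIO-based GCS emulation suggested in the table, a minimal sketch under the same Docker provider (image tag, port, and credentials are assumptions):

```hcl
# Sketch only: MinIO as a local stand-in for cloud object storage.
resource "docker_image" "minio" {
  name = "minio/minio:latest"
}

resource "docker_container" "minio" {
  image   = docker_image.minio.image_id
  name    = "local-minio"
  command = ["server", "/data"]

  ports {
    internal = 9000
    external = 9000
  }

  env = [
    "MINIO_ROOT_USER=minioadmin",     # assumption: default dev credentials
    "MINIO_ROOT_PASSWORD=minioadmin",
  ]
}
```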