cover-letter-llm

Terraform Infrastructure for Cover-Letter-LLM

This directory contains Terraform configurations for managing the complete infrastructure of the Cover-Letter-LLM application on Google Cloud Platform.

Table of Contents

🚀 Quick Start

# 1. Setup environment
export GOOGLE_PROJECT="your-project-id"
gcloud auth application-default login

# 2. Deploy to development
cd terraform/environments/development
cp terraform.tfvars.example terraform.tfvars
# Edit terraform.tfvars with your project details
../../deploy.sh development init
../../deploy.sh development apply

# 3. Access your deployed application
terraform output app_url

๐Ÿ“ Complete Directory Structure

terraform/
├── deploy.sh                    # Main deployment script
├── SETUP_GUIDE.md               # Detailed setup guide
├── README.md                    # This file
├── environments/                # Environment-specific configurations
│   ├── development/             # Development environment
│   │   ├── main.tf              # Main configuration
│   │   ├── variables.tf         # Input variables
│   │   ├── outputs.tf           # Output values
│   │   └── terraform.tfvars.example
│   ├── staging/                 # Staging environment
│   │   ├── main.tf              # Production-like configuration
│   │   ├── variables.tf         # Staging variables
│   │   ├── outputs.tf           # Staging outputs
│   │   └── terraform.tfvars.example
│   └── production/              # Production environment
│       ├── main.tf              # High-availability configuration
│       ├── variables.tf         # Production variables
│       ├── outputs.tf           # Production outputs
│       └── terraform.tfvars.example
└── modules/                     # Reusable Terraform modules
    ├── compute/                 # Cloud Run services
    │   ├── main.tf              # Cloud Run configuration
    │   ├── variables.tf         # Compute variables
    │   └── outputs.tf           # Service outputs
    ├── database/                # PostgreSQL database
    │   ├── main.tf              # Cloud SQL configuration
    │   ├── variables.tf         # Database variables
    │   └── outputs.tf           # Database outputs
    ├── redis/                   # Redis cache
    │   ├── main.tf              # Memorystore configuration
    │   ├── variables.tf         # Redis variables
    │   └── outputs.tf           # Redis outputs
    ├── secrets/                 # Secret Manager
    │   ├── main.tf              # Secrets configuration
    │   ├── variables.tf         # Secret variables
    │   └── outputs.tf           # Secret outputs
    ├── networking/              # VPC and networking
    │   ├── main.tf              # Network configuration
    │   ├── variables.tf         # Network variables
    │   └── outputs.tf           # Network outputs
    └── monitoring/              # Monitoring and alerting
        ├── main.tf              # Monitoring setup
        ├── variables.tf         # Monitoring variables
        └── outputs.tf           # Monitoring outputs

๐Ÿ—๏ธ Infrastructure Components

Core Services

Environment Configurations

| Environment | Database | Redis | Scaling | Cost/Month |
| --- | --- | --- | --- | --- |
| Development | db-f1-micro (1GB) | 1GB Basic | 0-2 instances | ~$50-100 |
| Staging | db-custom-1-2048 (2GB) | 2GB Basic | 0-5 instances | ~$150-300 |
| Production | db-custom-2-4096 (4GB) | 4GB HA | 1-20 instances | ~$300-800 |

Deployment Features

๐Ÿ› ๏ธ Deployment Options

# Use the deployment script
./terraform/deploy.sh development plan
./terraform/deploy.sh development apply

2. GitHub Actions (Automated CI/CD)

3. Direct Terraform Commands

cd terraform/environments/development
terraform init
terraform plan
terraform apply

📋 Prerequisites

Required Tools

Google Cloud Setup

  1. Create GCP Projects (one per environment)
  2. Enable Required APIs:
    gcloud services enable compute.googleapis.com run.googleapis.com \
      sql-component.googleapis.com redis.googleapis.com \
      secretmanager.googleapis.com storage-component.googleapis.com \
      monitoring.googleapis.com servicenetworking.googleapis.com
    
  3. Create Service Accounts for Terraform deployment
  4. Setup State Storage in Google Cloud Storage
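
State storage (step 4) needs a versioned GCS bucket before the first terraform init. A minimal sketch, assuming a hypothetical bucket name and region (adjust both to your project):

```shell
# Create a bucket to hold Terraform state (name and region are placeholders)
gsutil mb -l us-central1 gs://your-terraform-state-bucket

# Enable object versioning so earlier state revisions can be recovered
gsutil versioning set on gs://your-terraform-state-bucket
```

Versioning is what lets you roll back to a previous state file if an apply goes wrong.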

🔧 Configuration

Environment Variables

Each environment requires a terraform.tfvars file:

# Required variables
project_id = "your-gcp-project-id"
container_image = "gcr.io/your-project/cover-letter-llm:latest"

# Optional variables
region = "us-central1"
domain_name = "coverletter.yourcompany.com"

Secrets Setup

Manual secret creation required:

# Rails master key
gcloud secrets create cover-letter-llm-dev-rails-master-key --data-file=config/master.key

# Google AI API key
echo "your-api-key" | gcloud secrets create cover-letter-llm-dev-google-api-key --data-file=-
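
The deployed service also needs permission to read these secrets at runtime. The Terraform modules may already wire this up; if not, a manual grant might look like the following (the service-account address and PROJECT_NUMBER are placeholders):

```shell
# Grant the Cloud Run runtime service account read access to a secret
# (replace PROJECT_NUMBER with your GCP project's number)
gcloud secrets add-iam-policy-binding cover-letter-llm-dev-rails-master-key \
  --member="serviceAccount:PROJECT_NUMBER-compute@developer.gserviceaccount.com" \
  --role="roles/secretmanager.secretAccessor"
```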

📊 Monitoring and Observability

Dashboards

Alerts

Logs

🔒 Security Features

🎯 Environments

Development Environment

Staging Environment

Production Environment

🚀 Getting Started

  1. Read the detailed setup guide: SETUP_GUIDE.md
  2. Configure your environment: Copy and edit terraform.tfvars.example
  3. Deploy infrastructure: Use ./deploy.sh script
  4. Setup secrets: Create required secrets in Google Secret Manager
  5. Deploy application: Use GitHub Actions or manual deployment

🆘 Support

💰 Cost Optimization

🧹 Cleanup

To destroy infrastructure:

./terraform/deploy.sh development destroy
./terraform/deploy.sh staging destroy
./terraform/deploy.sh production destroy

โš ๏ธ Warning: This will permanently delete all infrastructure and data!

🔄 CI/CD and Backend Architecture

This project implements both GitHub Actions automation and serverless backend storage following modern Infrastructure as Code (IaC) best practices.

GitHub Actions Integration

✅ Already Implemented - see .github/workflows/terraform.yml

Automated Workflows:

Benefits:

Serverless Backend (Remote State)

✅ Already Configured - uses Google Cloud Storage with state locking

Implementation:

# In each environment's main.tf
terraform {
  backend "gcs" {
    bucket = "your-terraform-state-bucket"
    prefix = "cover-letter-llm/production"
  }
}

Benefits:

How They Work Together

graph TD
    A[Developer pushes code] --> B[GitHub Actions triggered]
    B --> C[Terraform init - connects to GCS backend]
    C --> D[Terraform plan/apply]
    D --> E[State stored in Google Cloud Storage]
    E --> F[Infrastructure deployed]
    
    G[Multiple developers] --> H[All use same remote state]
    H --> I[No conflicts, consistent state]

Workflow Example:

  1. 👨‍💻 Developer pushes Terraform changes to a branch
  2. 🔄 GitHub Actions automatically runs terraform plan
  3. 📊 Plan results posted as a PR comment for review
  4. ✅ On merge to main, GitHub Actions runs terraform apply
  5. 💾 State safely stored in Google Cloud Storage
  6. 🚀 Infrastructure deployed consistently

Why This Architecture?

| Component | Purpose | Benefit |
| --- | --- | --- |
| GitHub Actions | Automation & CI/CD | Consistent deployments, collaboration |
| Remote Backend | State management | Team collaboration, state safety |
| Environment Isolation | Risk management | Safe testing, production protection |
| Manual Production | Safety controls | Prevent accidental production changes |

This setup follows the "Infrastructure as Code" principle, where your infrastructure is versioned, reviewed, and deployed like application code.


📚 Terraform Implementation Guide

What is Terraform?

Terraform is an open-source Infrastructure as Code (IaC) tool by HashiCorp that allows you to define, provision, and manage cloud infrastructure using configuration files written in HashiCorp Configuration Language (HCL).

Key Features:

How Does Terraform Work?

Basic Workflow:

  1. ๐Ÿ“ Write Code: Define infrastructure resources in .tf files
    resource "google_sql_database_instance" "main" {
      name             = "cover-letter-db"
      database_version = "POSTGRES_15"
      region          = "us-central1"
    }
    
  2. 🔧 Initialize: Set up the working directory and download providers
    terraform init

  3. 📋 Plan: Review proposed changes before applying
    terraform plan

  4. 🚀 Apply: Execute the plan and provision infrastructure
    terraform apply

  5. 💾 State Management: Terraform tracks infrastructure in state files
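
Two read-only commands are handy for seeing what that state file is tracking (the resource address below assumes the example instance from step 1):

```shell
# List every resource recorded in the current workspace's state
terraform state list

# Show the stored attributes of a single resource
terraform state show google_sql_database_instance.main
```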

When is the best time to implement and execute Terraform in your project?

Best time:
- Before provisioning any cloud infrastructure manually.
- Early in the project, during the setup of your cloud environment (before deploying your Rails app, databases, etc.).
- Before scaling up or introducing environments (staging, production, etc.).

Why?
- You avoid "snowflake" environments (where infra is different everywhere).
- It enables repeatable, automated, and auditable infrastructure deployment.
- It makes team collaboration and disaster recovery much easier.

If you already have some manual infrastructure:
- Consider importing existing resources into Terraform (terraform import) and start managing them as code.
- Avoid manual changes after adopting Terraform to prevent state drift.

Summary Table

| Terraform Phase | What to do/expect |
| --- | --- |
| Project start | Plan and write infrastructure code; version it in git |
| Before/after initial cloud setup | Use Terraform to provision cloud resources (servers, DBs, buckets, etc.) |
| Ongoing | Use Terraform for all infra changes; review with terraform plan before applying |
| Already have infra? | Import resources into Terraform, then manage via code |

Example Use Case in Your Project

For a Rails app like yours, you might use Terraform to:
- Provision GCP/AWS/Azure resources (compute instances, databases, storage, networking)
- Set up DNS, SSL, and load balancers
- Manage infrastructure for different environments (dev, staging, prod)
- Integrate with CI/CD pipelines (GitHub Actions can trigger Terraform plans/applies)

TL;DR: Implement Terraform as early as possible, ideally before manually creating infrastructure. It lets you manage your cloud resources safely, repeatably, and as code. You define your infrastructure, let Terraform provision it, and use version control to track changes.

TL;DR stands for "Too Long; Didn't Read": a quick summary or the main point.

How to Implement Terraform in Dev for Your Project

  1. What Terraform Does

Terraform lets you define all your infrastructure (servers, databases, DNS, etc.) as code in .tf files. You can version control this code, review changes, and safely apply updates.

  2. Should You Start Fresh With Terraform?

Best practice: yes. If you haven't set up much infrastructure yet, start with Terraform so everything is managed and repeatable. If you already have cloud infrastructure, you can import existing resources into Terraform (terraform import); it can be tricky, but it is still worth doing for future maintainability.

  3. How to Use Terraform for GitHub Pages and PostgreSQL

A. GitHub Pages

Note: GitHub Pages hosting itself is not managed by Terraform (it is a GitHub feature), but you can use Terraform to:
- Manage DNS records (if you use a custom domain with your Pages site)
- Manage any cloud resources (if you use a CDN or extra services)

B. PostgreSQL

Use Terraform to create a managed PostgreSQL instance (on AWS RDS, GCP Cloud SQL, Azure, etc.) or to provision a VM/container with PostgreSQL. You define the DB resource in your .tf files, then run terraform apply to create it.

  4. How to Implement Terraform (Step-by-Step)

  5. Install Terraform: follow the official Terraform installation instructions.

  6. Create a new directory in your repo for infra code, e.g. infrastructure/ or terraform/.

  7. Write your resource definitions. Example for AWS RDS PostgreSQL:

provider "aws" {
  region = "us-west-2"
}

resource "aws_db_instance" "default" {
  allocated_storage    = 20
  engine               = "postgres"
  instance_class       = "db.t3.micro"
  db_name              = "mydb" # "name" in older AWS provider versions
  username             = "myuser"
  password             = "mypassword"
  parameter_group_name = "default.postgres15"
}

You'd adjust this for GCP, Azure, etc.

  8. (For DNS, GitHub Pages custom domain) Example for a Cloudflare DNS record:

provider "cloudflare" {
  email   = "your@email.com"
  api_key = "your_cloudflare_api_key"
}

resource "cloudflare_record" "github_pages" {
  zone_id = "your_zone_id"
  name    = "www"
  value   = "your-username.github.io"
  type    = "CNAME"
  ttl     = 3600
}

  9. Initialize & apply:

cd terraform/
terraform init
terraform plan
terraform apply

  10. Should You Import Existing Resources?

If you already have a DB, DNS record, etc., use terraform import to bring them under Terraform management. If you don't, let Terraform create them for you.

  11. Recommended Approach

If possible, start managing all new infra with Terraform now. For existing resources, import them or recreate them if possible. Avoid mixing manual and Terraform-managed resources to prevent confusion and "drift."

TL;DR for your question: start using Terraform as soon as possible, ideally before creating resources manually. For new projects, use Terraform to create your DB and manage DNS for your GitHub Pages. For existing infra, you can import resources or recreate them with Terraform for better control and repeatability.

Best practice: your terraform (or infrastructure) directory should be at the root of your repository, not inside your app subdirectory.

Why root?
- Separation of concerns: infrastructure code is separate from application code.
- Supports multi-app/monorepo setups: if you later add more apps (e.g., frontend, API, etc.), they can share the same infrastructure code.
- Standardization: most teams and cloud providers expect infra code at the root (/terraform, /infrastructure, etc.).
- CI/CD compatibility: it's easier to trigger infra workflows and manage state files from the root.

Typical structure:

/
├── app/          # Your Rails or main app code
├── terraform/    # All Terraform code here (main.tf, variables.tf, etc.)
├── README.md
├── .gitignore
└── ...other root files...

Summary: put your terraform directory at the root of your repository. Only put Terraform code inside the app directory if you have a strong, app-specific reason (rare in most projects).

A monorepo (short for "monolithic repository") is a single version-controlled code repository that holds the code for multiple projects, applications, or services, often all the code for a company, organization, or product suite.

Key Features of a Monorepo:

Example Monorepo Structure:

/
├── apps/
│   ├── web-frontend/         # React/Next.js frontend
│   └── api-server/           # Rails/Node.js backend
├── libs/
│   ├── auth-lib/             # Shared authentication library
│   └── shared-utils/         # Common utilities
├── infrastructure/
│   └── terraform/            # Infrastructure as Code
├── package.json              # Root package configuration
├── .github/                  # CI/CD workflows
└── README.md                 # Main documentation

Monorepo vs Polyrepo

| Aspect | Monorepo | Polyrepo |
| --- | --- | --- |
| Structure | All projects in one repo | Each project in its own repo |
| Code Sharing | ✅ Easy to share code across projects | ❌ Requires publishing packages |
| Dependency Management | ✅ Unified dependency management | ❌ Each repo manages its own |
| CI/CD | ✅ Single pipeline for all projects | ❌ Separate pipelines per repo |
| Team Coordination | ✅ Easy cross-project changes | ❌ Requires coordination across repos |
| Repository Size | ❌ Can become very large | ✅ Smaller, focused repositories |

Popular Monorepo Tools:

Directory Structure Best Practices

For Our Rails + Terraform Project:

cover-letter-llm/                    # Monorepo root
├── CoverLetterApp/                  # Rails application
│   ├── app/
│   ├── config/
│   └── ...
├── terraform/                       # Infrastructure code (at root level)
│   ├── environments/
│   ├── modules/
│   └── ...
├── docs/                            # Project documentation
├── .github/                         # CI/CD workflows
└── README.md                        # Main project README

Why This Structure Works:

When to Use Monorepo

✅ Use Monorepo When:

❌ Avoid Monorepo When:

Our Project's Monorepo Benefits:


In Summary: A monorepo is a single repository that contains code for multiple related projects, making shared development and management easier. For our Cover-Letter-LLM project, the monorepo approach allows us to manage both the Rails application and Terraform infrastructure in one place, with shared documentation and unified CI/CD workflows.

Best Practice Workflow

  1. Branch Strategy

Work in a feature branch (e.g., infra/terraform-local-pg) off of main or your latest stable branch. Make small, logical commits as you iterate.

  2. Directory Structure

Recommended: keep all Terraform code in a top-level infrastructure/ or terraform/ directory, with subfolders for environments/, modules/, etc. Example:

infrastructure/
├── environments/
│   └── development/
│       ├── main.tf
│       └── variables.tf
└── modules/
    ├── database/
    ├── redis/
    ├── secrets/
    └── compute/

  3. Update Terraform Version

Your main.tf requires Terraform >= 1.5. Best practice: use a Terraform CLI version that satisfies your configuration's requirements; if your installed CLI is older, upgrade to v1.5.0 or newer.

  4. Configure for Local Development

If you're not using GCP locally, you'll want a separate configuration for local resources (e.g., using LocalStack, Docker, or a null_resource/external provider for bootstrapping local PostgreSQL). Don't keep the GCP provider enabled in local configs if you're not using GCP locally.

  5. Why Separate Configurations?

GCP resources (like Cloud SQL, GCS buckets, GCP IAM, etc.) are cloud-based and require credentials, billing, and network connectivity. For local development, you usually want fast, disposable, and free resources, typically run locally (like Docker containers). Keeping cloud provider blocks (like provider "google") in your local Terraform config causes errors if you don't set up GCP credentials, or could lead to accidentally creating or altering cloud resources from your laptop.

  6. What Does "Configure for Local Development" Mean?

A. Use Only What You Need Locally

Remove or comment out the provider "google" block and any GCP-specific resources (e.g., google_sql_database, google_storage_bucket) in your local environment's Terraform files. Replace them with resources that make sense for local development:
- Docker containers for PostgreSQL, Redis, etc.: use the Docker provider in Terraform, or manage containers outside Terraform (e.g., Docker Compose).
- For secrets, use .env files or local secrets managers, not GCP Secret Manager.

B. Example: Local PostgreSQL with Terraform

Suppose you want to provision a local PostgreSQL container with Terraform (instead of a GCP Cloud SQL instance):

terraform {
  required_version = ">= 1.1"
  required_providers {
    docker = {
      source  = "kreuzwerker/docker"
      version = "~> 3.0"
    }
  }
}

provider "docker" {}

resource "docker_image" "postgres" {
  name = "postgres:15"
}

resource "docker_container" "postgres" {
  # kreuzwerker/docker v3 references the image via image_id
  # (the older .latest attribute was removed)
  image = docker_image.postgres.image_id
  name  = "local-postgres"

  ports {
    internal = 5432
    external = 5432
  }

  env = [
    "POSTGRES_USER=postgres",
    "POSTGRES_PASSWORD=password",
    "POSTGRES_DB=cover_letter_app_dev",
  ]
}

This runs a PostgreSQL container on your machine, not in the cloud.

C. Directory Layout for Local/Cloud Separation

Have a structure like:

terraform/
├── environments/
│   ├── local/
│   │   └── main.tf          # Local resources (Docker, null_resource, etc.)
│   ├── development/
│   │   └── main.tf          # Cloud resources (GCP, etc.)
│   └── production/
│       └── main.tf
└── modules/                 # Shared modules, if any
    └── ...

  7. Don't Keep the GCP Provider in Local Configs

If you keep the provider "google" block in your local/main.tf, Terraform will always expect GCP credentials and may attempt to create cloud resources. Remove or comment out any GCP-specific providers/blocks from local configs.

  8. Alternatives to Terraform for Local Dev

Docker Compose is often simpler for local dev, but using the Docker Terraform provider is fine if you want to keep all infra as code. You can still use .env files to set up your app's connection strings for local containers.

  9. Summary Table

| Environment | Uses GCP provider? | Uses Docker? | Uses GCP resources? | Uses local containers? |
| --- | --- | --- | --- | --- |
| local | No | Yes | No | Yes |
| dev/prod | Yes | No | Yes | No |

TL;DR: For local, remove GCP providers/resources from your Terraform config and use the Docker provider or other local solutions. For dev/staging/prod, use GCP providers/resources as needed. Keep configs organized and separated per environment.

  10. Initialize Terraform

From your environment directory (e.g., infrastructure/environments/development/):

terraform init

  11. Validate and Format

terraform fmt -recursive
terraform validate

  12. Plan and Apply

For local development (assuming you have only local resources configured):

terraform plan
terraform apply

If you're still referencing GCP resources, either remove or comment them out for local use, or create a new local-only configuration.

  13. Version Control

Add and commit only your Terraform scripts, not .terraform/ or .tfstate files. Add/update .gitignore:

.terraform/
*.tfstate
*.tfstate.*
terraform.tfvars

Commit with descriptive messages.

  14. Document

Update README.md in your infra directory to include instructions for local setup, required versions, and how to apply.

  15. Peer Review & Merge

Open a Pull Request for your branch. Request review from teammates. Merge only after passing review and a successful CI run.

Example: Local PostgreSQL with Terraform

If you want to spin up local PostgreSQL using Terraform, you'd likely use the local-exec provisioner to run Docker, or leverage the docker_container and docker_image resources (via the kreuzwerker/docker provider).

Usage instructions:

1. Place this file in a directory (e.g., terraform/local/).
2. Run terraform init in that directory.
3. Run terraform apply to spin up local PostgreSQL and Redis containers using Docker.
4. Set your app's environment variables to connect to localhost:5432 for Postgres and localhost:6379 for Redis.

Note:
- Requires Docker to be installed and running.
- These resources are local and disposable; no cloud resources will be created.
- You can customize usernames, passwords, and database names as needed.

Use Terraform to provision local PostgreSQL and Redis containers on Docker

  1. Ensure Docker is Running

Start Docker Desktop or the Docker daemon on your machine.

  2. Create a Terraform Directory

mkdir -p terraform/local
cd terraform/local

  3. Add Your Terraform Configuration

Save the provided sample code (for PostgreSQL and Redis) to a file, e.g., main.tf, in this directory.

  4. Initialize Terraform in the Directory

terraform init

This downloads the required providers (e.g., Docker).

  5. Format and Validate the Terraform Code (optional but recommended)

terraform fmt
terraform validate

  6. Plan the Terraform Deployment

See what Terraform will do:

terraform plan

  7. Apply the Terraform Configuration

Provision the containers:

terraform apply

Confirm when prompted, or use -auto-approve to skip confirmation.

  8. Verify Containers Are Running

docker ps

You should see local-postgres and local-redis containers running.

  9. Connect Your App to Local Services

Use environment variables like:

DATABASE_URL=postgresql://postgres:postgres@localhost:5432/cover_letter_app_dev
REDIS_URL=redis://localhost:6379

  10. (Optional) Destroy Containers When Done

To clean up:

terraform destroy

Summary Table

| Step | Action |
| --- | --- |
| 1 | Start Docker daemon |
| 2 | Create and enter Terraform directory |
| 3 | Add main.tf configuration |
| 4 | terraform init |
| 5 | terraform fmt and terraform validate |
| 6 | terraform plan |
| 7 | terraform apply |
| 8 | Check containers with docker ps |
| 9 | Connect your app to localhost:5432 and localhost:6379 |
| 10 | (Optional) terraform destroy to clean up |

Summary Table

| Step | Action/Command |
| --- | --- |
| 1. Branch | git checkout -b infra/terraform-local-pg |
| 2. Upgrade Terraform | Install v1.5+ |
| 3. Structure code | infrastructure/environments/development/ etc. |
| 4. Configure local | Remove GCP, set up local providers |
| 5. Init | terraform init |
| 6. Format/validate | terraform fmt -recursive / terraform validate |
| 7. Plan/apply | terraform plan / terraform apply |
| 8. Ignore state | Update .gitignore |
| 9. Document | Update README.md |
| 10. Review/Merge | PR, review, merge |

Action items for you:
- Upgrade the Terraform CLI to at least 1.5.
- Separate local and cloud configs for clarity and safety.
- Initialize, format, validate, commit, document, and open a PR as described.

Script: ./run_local_infra.sh

The script run_local_infra.sh wraps these steps (all commands are to be run from the directory that stores main.tf).

How it works: when you run the script (which runs terraform apply), Terraform checks for the necessary Docker images and containers. If the Postgres (or Redis) container does not exist or is not running, Terraform will create and start it for you.
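
The script's contents are not shown in this README; a minimal sketch of what run_local_infra.sh might contain, assuming it only wraps init and apply, is:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of run_local_infra.sh -- the actual script may differ.
set -euo pipefail

# Fail fast if the Docker daemon is unreachable
if ! docker info >/dev/null 2>&1; then
  echo "Docker is not running; start it first" >&2
  exit 1
fi

terraform init -input=false   # download providers (e.g. kreuzwerker/docker)
terraform apply -auto-approve # create/start the local containers
```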

Commands in Context

Assuming you are in your Terraform directory (e.g., terraform/local/):

# Make sure Docker is running (see below)
docker info

# Run your script (or run terraform manually)
./run_local_infra.sh
# or, step by step:
terraform init
terraform apply

How to Check if Docker is Running

sudo systemctl start docker   # (if not already running)
sudo systemctl status docker  # to check status

How to Check if PostgreSQL is Running (after apply)

docker ps
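
Beyond docker ps, you can ask Postgres itself whether it is accepting connections (the container name comes from the earlier example configuration):

```shell
# pg_isready ships inside the official postgres image
docker exec local-postgres pg_isready -U postgres
```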

GCP vs. Local Development

Terraform practice: keep your local and cloud Terraform configs separate. Use modules for shared logic, but have distinct environments (local, dev, prod). Don't apply GCP modules in your local environment unless you really need GCP resources for local testing.

Summary Table

| Resource | Local Dev | GCP for Local Dev? | Notes |
| --- | --- | --- | --- |
| PostgreSQL/Redis | Docker/local | No | Use containers or a local install |
| GCS bucket (file storage) | Local/MinIO | No | Use the local FS or MinIO for emulation |
| GCP Secret Manager | .env files | No | Only use if testing cloud-specific secrets logic |
| BigQuery, Pub/Sub, etc. | Local, if possible | Sometimes | Only if you need cloud features or can't emulate |

TL;DR: Don't use GCP for local dev unless you have a specific, unavoidable need. Use local services (via Docker etc.) for speed, cost, and reliability. Save GCP usage for staging, integration, and production environments.