cover-letter-llm

PersonaForge - AI-Powered Job Application Documents

Ruby Rails PostgreSQL Tailwind CSS Google AI Terraform Docker License

This application uses LLM/AI technology to generate personalized job application documents including cover letters, resumes, professional emails, and elevator pitches based on user profiles and job descriptions.


Table of Contents


Features

🚀 Multi-Document Generation

👤 User Management & Profiles

💼 Job Description Management

🎨 Modern User Interface

📊 Document Management & History

⚡ Advanced Features

Technology Stack / Key Decisions

This section outlines the primary technologies and design choices for this project, emphasizing modern best practices.

For more detailed commands, code snippets, and initial setup walkthroughs related to these technologies, please see the Full-Stack Implementation Guide.

Prerequisites

For Local Development

For Infrastructure Deployment (Optional)

Setup & Installation

These are the general steps to get the application running. For a more detailed walkthrough with example commands and initial code structures, please refer to the Full-Stack Implementation Guide.

  1. Clone the repository: git clone https://github.com/ElReyUno/cover-letter-llm.git
  2. Navigate to the project directory: cd cover-letter-llm
  3. Install Ruby dependencies: bundle install
  4. Install JavaScript dependencies (if applicable): yarn install (or npm install)
  5. Set up database: rails db:create db:migrate db:seed
  6. Configure secrets:
    • Run bin/rails credentials:edit to add your GOOGLE_API_KEY and any other necessary secrets (see the credentials sketch after this list).
  7. … (any other setup steps)
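
For step 6, a minimal sketch of storing and reading the key (assuming it is kept under a google_api_key entry; adjust the name to your credentials layout):

# Inside bin/rails credentials:edit, add:
#   google_api_key: your-key-here

# Reading it from application code:
api_key = Rails.application.credentials.google_api_key
# or, equivalently:
api_key = Rails.application.credentials.dig(:google_api_key)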

Running the Application

  1. Start the Rails server: rails server (or ./bin/dev if using Foreman/Procfile.dev; a sample Procfile.dev sketch follows this list)
  2. Start Sidekiq: bundle exec sidekiq
  3. Start Redis (if not managed by a service): redis-server (or sudo systemctl start redis)
  4. Access the application at http://localhost:3000
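
If you use ./bin/dev (step 1), a minimal Procfile.dev might look like the sketch below (assuming Tailwind is watched via tailwindcss-rails; adjust to your setup):

web: bin/rails server -p 3000
css: bin/rails tailwindcss:watch
worker: bundle exec sidekiq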

Running Tests

Deployment

Infrastructure as Code with Terraform

This project includes a complete Terraform infrastructure setup for deploying to Google Cloud Platform with automated CI/CD:

Quick Start:

# Run the interactive setup
./terraform/setup.sh

# Or deploy manually
cd terraform/environments/development
terraform init && terraform apply

Documentation: See terraform/README.md and terraform/SETUP_GUIDE.md for complete setup instructions.

Alternative Deployment Options

Key Architectural Choices / Patterns Used

Document Generation Architecture

The application uses a multi-controller architecture to handle different document types:

Controllers & Routes
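
The routes below are an illustrative sketch of this layout (resource names follow the document types above; the actual config/routes.rb may differ):

# config/routes.rb (sketch)
Rails.application.routes.draw do
  resources :job_descriptions
  resources :cover_letters
  resources :resumes
  resources :emails
  resources :elevator_pitches
  resource  :user_profile, only: [:show, :edit, :update]
end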

Models & Relationships

User
β”œβ”€β”€ has_one :user_profile
β”œβ”€β”€ has_many :job_descriptions
β”œβ”€β”€ has_many :cover_letters
β”œβ”€β”€ has_many :resumes
β”œβ”€β”€ has_many :emails  
└── has_many :elevator_pitches

# Each document type belongs_to:
# - :user, :job_description, :user_profile
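
Each generated-document model mirrors the comment above; a minimal sketch (the real model files likely add validations):

class CoverLetter < ApplicationRecord
  belongs_to :user
  belongs_to :job_description
  belongs_to :user_profile
end
# Resume, Email, and ElevatorPitch follow the same pattern.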

Generation Flow

  1. Form Submission: User selects document types via checkboxes
  2. Data Validation: Job description and user profile validation
  3. Multi-Document Creation: Batch creation of the selected document types (sketched below)
  4. Results Display: Unified results page with links to individual documents
  5. Individual Views: Specialized views for each document type with unique features
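
A hypothetical sketch of steps 1-3 follows; the controller and parameter names here are illustrative, not the actual implementation:

# Illustrative only: real controller/service names may differ.
class GenerationsController < ApplicationController
  DOCUMENT_TYPES = {
    "cover_letter"   => CoverLetter,
    "resume"         => Resume,
    "email"          => Email,
    "elevator_pitch" => ElevatorPitch
  }.freeze

  def create
    job_description = current_user.job_descriptions.find(params[:job_description_id])
    selected = Array(params[:document_types]) & DOCUMENT_TYPES.keys

    # Batch-create one record per selected document type (step 3)
    @documents = selected.map do |type|
      DOCUMENT_TYPES[type].create!(
        user: current_user,
        job_description: job_description,
        user_profile: current_user.user_profile
      )
    end

    # Step 4 would render the unified results page linking to @documents
  end
end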

Document-Specific Features

Future Capabilities / Roadmap

Contributing

CONTRIBUTE

Quick Start (Local Development)

Current Status ✅

Setup Summary

The application has been set up for local development with:

Next Steps

  1. Local Development: Application ready at http://localhost:3000
  2. Cloud Deployment: Terraform configurations available in /terraform/
  3. CI/CD: GitHub Actions workflow available but disabled (can be re-enabled)

GitHub Actions Workflow Management

Basic Workflow Commands

# List all workflows
gh workflow list

# View workflow runs
gh run list

# Delete workflow runs script
./delete_workflow_runs.sh <owner/repo>  # Replace <owner/repo> with your GitHub username/repository name

GitHub Workflow Management Script

The delete_workflow_runs.sh script for bulk workflow run deletion:

#!/bin/bash

# Ensure the repo argument is provided.
if [[ -z "$1" ]]; then
  echo "Usage: $0 <owner/repo>"
  exit 1
fi

repo=$1

# Fetch all workflow runs for the given repository.
runs=$(gh api "repos/$repo/actions/runs" --paginate | jq -r '.workflow_runs[] | .id')

# Delete each run.
while IFS= read -r run; do
  echo "Deleting run $run..."
  gh api -X DELETE "repos/$repo/actions/runs/$run" --silent
done <<< "$runs"

echo "All workflow runs for $repo have been deleted."

Usage:

chmod +x delete_workflow_runs.sh
./delete_workflow_runs.sh <owner/repo>  # Replace <owner/repo> with your GitHub username/repository name

Individual Workflow Run Management

# Delete a specific workflow run by ID
gh run delete <run_id>

# Interactive deletion: select runs to delete from a list
gh run delete

Delete All Runs for a Specific Workflow

# Set environment variables for your repository
export OWNER="my-user"        # Or your organization name
export REPOSITORY="my-repo"
export WORKFLOW="My Workflow"

# Delete all runs for a specific workflow
gh api -X GET /repos/$OWNER/$REPOSITORY/actions/runs --paginate \
| jq '.workflow_runs[] | select(.name == "'"$WORKFLOW"'") | .id' \
| xargs -t -I{} gh api -X DELETE /repos/$OWNER/$REPOSITORY/actions/runs/{}

Complete Workflow Removal

Once all the runs associated with the workflow are deleted, the workflow itself will disappear from the Actions tab on GitHub. To prevent it from reappearing:

  1. Navigate to your repository’s .github/workflows directory
  2. Delete the workflow’s .yml or .yaml file
  3. Commit and push the changes to all relevant branches (usually main or master)

Example:

# Remove workflow file
rm .github/workflows/unwanted-workflow.yml

# Commit and push changes
git add .github/workflows/
git commit -m "Remove unwanted workflow file"
git push origin main

By following these steps, you can efficiently delete unwanted workflow runs and remove the associated workflow files, keeping your GitHub Actions tab clean and organized.


🚀 Performance Testing with GitHub Actions

Apache JMeter & Locust Integration

This project integrates performance testing tools into the CI/CD pipeline to ensure application scalability and reliability under load.

Performance Testing Tools

GitHub Actions Workflow Setup

JMeter Performance Testing

Create .github/workflows/performance-jmeter.yml:

name: JMeter Performance Tests

on:
  push:
    branches: [ main, staging ]
  pull_request:
    branches: [ main ]
  workflow_dispatch:
    inputs:
      target_url:
        description: 'Target URL for performance testing'
        required: true
        default: 'https://your-app.cloud.run'
      duration:
        description: 'Test duration in seconds'
        required: false
        default: '300'
      threads:
        description: 'Number of concurrent users'
        required: false
        default: '50'

jobs:
  jmeter-test:
    runs-on: ubuntu-latest
    
    steps:
    - name: Checkout code
      uses: actions/checkout@v4
      
    - name: Setup Java
      uses: actions/setup-java@v4
      with:
        java-version: '11'
        distribution: 'temurin'
        
    - name: Download JMeter
      run: |
        wget https://archive.apache.org/dist/jmeter/binaries/apache-jmeter-5.6.2.tgz
        tar -xzf apache-jmeter-5.6.2.tgz
        
    - name: Create JMeter Test Plan
      run: |
        mkdir -p performance-tests/jmeter
        cat > performance-tests/jmeter/load-test.jmx << 'EOF'
        <?xml version="1.0" encoding="UTF-8"?>
        <jmeterTestPlan version="1.2">
          <hashTree>
            <TestPlan guiclass="TestPlanGui" testclass="TestPlan" testname="Cover Letter App Load Test">
              <elementProp name="TestPlan.arguments" elementType="Arguments" guiclass="ArgumentsPanel">
                <collectionProp name="Arguments.arguments"/>
              </elementProp>
              <stringProp name="TestPlan.user_define_classpath"></stringProp>
              <boolProp name="TestPlan.serialize_threadgroups">false</boolProp>
              <boolProp name="TestPlan.functional_mode">false</boolProp>
            </TestPlan>
            <hashTree>
              <ThreadGroup guiclass="ThreadGroupGui" testclass="ThreadGroup" testname="User Load">
                <stringProp name="ThreadGroup.on_sample_error">continue</stringProp>
                <elementProp name="ThreadGroup.main_controller" elementType="LoopController">
                  <boolProp name="LoopController.continue_forever">false</boolProp>
                  <intProp name="LoopController.loops">-1</intProp>
                </elementProp>
                <stringProp name="ThreadGroup.num_threads">${{ github.event.inputs.threads }}</stringProp>
                <stringProp name="ThreadGroup.ramp_time">30</stringProp>
                <longProp name="ThreadGroup.duration">${{ github.event.inputs.duration }}</longProp>
                <boolProp name="ThreadGroup.scheduler">true</boolProp>
              </ThreadGroup>
              <hashTree>
                <HTTPSamplerProxy guiclass="HttpTestSampleGui" testclass="HTTPSamplerProxy" testname="Home Page">
                  <elementProp name="HTTPsampler.Arguments" elementType="Arguments">
                    <collectionProp name="Arguments.arguments"/>
                  </elementProp>
                  <stringProp name="HTTPSampler.domain">${{ github.event.inputs.target_url }}</stringProp>
                  <stringProp name="HTTPSampler.port">3000</stringProp>
                  <stringProp name="HTTPSampler.path">/</stringProp>
                  <stringProp name="HTTPSampler.method">GET</stringProp>
                </HTTPSamplerProxy>
              </hashTree>
            </hashTree>
          </hashTree>
        </jmeterTestPlan>
        EOF
        
    - name: Run JMeter Tests
      run: |
        ./apache-jmeter-5.6.2/bin/jmeter -n -t performance-tests/jmeter/load-test.jmx \
          -l performance-tests/jmeter/results.jtl \
          -e -o performance-tests/jmeter/report
          
    - name: Upload JMeter Results
      uses: actions/upload-artifact@v4
      with:
        name: jmeter-results
        path: |
          performance-tests/jmeter/results.jtl
          performance-tests/jmeter/report/

Locust Performance Testing

Create .github/workflows/performance-locust.yml:

name: Locust Performance Tests

on:
  push:
    branches: [ main, staging ]
  pull_request:
    branches: [ main ]
  workflow_dispatch:
    inputs:
      target_url:
        description: 'Target URL for performance testing'
        required: true
        default: 'https://your-app.cloud.run'
      users:
        description: 'Number of concurrent users'
        required: false
        default: '100'
      spawn_rate:
        description: 'User spawn rate per second'
        required: false
        default: '5'
      run_time:
        description: 'Test duration (e.g., 5m, 300s)'
        required: false
        default: '5m'

jobs:
  locust-test:
    runs-on: ubuntu-latest
    
    steps:
    - name: Checkout code
      uses: actions/checkout@v4
      
    - name: Setup Python
      uses: actions/setup-python@v4
      with:
        python-version: '3.11'
        
    - name: Install Locust
      run: |
        pip install locust requests beautifulsoup4
        
    - name: Create Locust Test File
      run: |
        mkdir -p performance-tests/locust
        cat > performance-tests/locust/locustfile.py << 'EOF'
        from locust import HttpUser, task, between
        import random
        
        class CoverLetterAppUser(HttpUser):
            wait_time = between(1, 3)
            
            def on_start(self):
                """Perform login if authentication is required"""
                # Uncomment and modify if authentication is needed
                # self.client.get("/users/sign_in")
                pass
            
            @task(3)
            def view_home_page(self):
                """Test home page load"""
                self.client.get("/")
                
            @task(2)
            def view_cover_letters(self):
                """Test cover letters page"""
                self.client.get("/cover_letters")
                
            @task(2)
            def view_new_cover_letter_form(self):
                """Test new cover letter form"""
                self.client.get("/cover_letters/new")
                
            @task(1)
            def view_resumes(self):
                """Test resumes page"""
                self.client.get("/resumes")
                
            @task(1)
            def view_emails(self):
                """Test professional emails page"""  
                self.client.get("/emails")
                
            @task(1)
            def view_elevator_pitches(self):
                """Test elevator pitches page"""
                self.client.get("/elevator_pitches")
                
            @task(1)
            def simulate_api_heavy_request(self):
                """Simulate document generation (without actual API call)"""
                # This would test the form submission without actually calling LLM APIs
                # Modify based on your application's specific endpoints
                with self.client.get("/cover_letters/new", catch_response=True) as response:
                    if response.status_code == 200:
                        response.success()
        EOF
        
    - name: Run Locust Tests
      run: |
        cd performance-tests/locust
        locust -f locustfile.py --host=${{ github.event.inputs.target_url }} \
          --users=${{ github.event.inputs.users }} \
          --spawn-rate=${{ github.event.inputs.spawn_rate }} \
          --run-time=${{ github.event.inputs.run_time }} \
          --html=report.html \
          --csv=results \
          --headless
          
    - name: Upload Locust Results
      uses: actions/upload-artifact@v4
      with:
        name: locust-results
        path: |
          performance-tests/locust/report.html
          performance-tests/locust/results_stats.csv
          performance-tests/locust/results_failures.csv

Combined Performance Testing Workflow

Create .github/workflows/performance-suite.yml:

name: Performance Testing Suite

on:
  workflow_dispatch:
    inputs:
      environment:
        description: 'Target environment'
        required: true
        default: 'staging'
        type: choice
        options:
        - staging
        - production
      test_duration:
        description: 'Test duration in minutes'
        required: false
        default: '5'

jobs:
  prepare-environment:
    runs-on: ubuntu-latest
    outputs:
      target_url: ${{ steps.set-url.outputs.url }}
    steps:
    - name: Set Target URL
      id: set-url
      run: |
        if [ "$" = "production" ]; then
          echo "url=https://your-production-app.com" >> $GITHUB_OUTPUT
        else
          echo "url=https://your-staging-app.com" >> $GITHUB_OUTPUT
        fi

  jmeter-test:
    needs: prepare-environment
    # Note: to be called here, performance-jmeter.yml (and performance-locust.yml
    # below) must also declare a workflow_call trigger with matching inputs.
    uses: ./.github/workflows/performance-jmeter.yml
    with:
      target_url: ${{ needs.prepare-environment.outputs.target_url }}
      duration: ${{ github.event.inputs.test_duration }}
      threads: 50

  locust-test:
    needs: prepare-environment
    uses: ./.github/workflows/performance-locust.yml
    with:
      target_url: ${{ needs.prepare-environment.outputs.target_url }}
      users: 100
      spawn_rate: 5
      run_time: ${{ github.event.inputs.test_duration }}m

  performance-analysis:
    needs: [prepare-environment, jmeter-test, locust-test]
    runs-on: ubuntu-latest
    steps:
    - name: Download Test Results
      uses: actions/download-artifact@v4
      
    - name: Analyze Performance Results
      run: |
        echo "## Performance Test Summary" >> $GITHUB_STEP_SUMMARY
        echo "### Test Configuration" >> $GITHUB_STEP_SUMMARY
        echo "- Environment: $" >> $GITHUB_STEP_SUMMARY
        echo "- Duration: $ minutes" >> $GITHUB_STEP_SUMMARY
        echo "- Target URL: $" >> $GITHUB_STEP_SUMMARY
        echo "" >> $GITHUB_STEP_SUMMARY
        echo "### Results Available" >> $GITHUB_STEP_SUMMARY
        echo "- JMeter HTML Report: Available in artifacts" >> $GITHUB_STEP_SUMMARY
        echo "- Locust HTML Report: Available in artifacts" >> $GITHUB_STEP_SUMMARY
        echo "- Raw CSV Data: Available in artifacts" >> $GITHUB_STEP_SUMMARY

Performance Testing Best Practices

Test Scenarios
Key Metrics to Monitor
Integration with Deployment Pipeline
# Add to your main deployment workflow. A reusable workflow is called at the
# job level (not as a step), so wire the suite in as jobs; performance-suite.yml
# must also declare a workflow_call trigger to be callable this way.
performance-tests:
  if: github.ref == 'refs/heads/main'
  uses: ./.github/workflows/performance-suite.yml
  with:
    environment: staging
    test_duration: 3

performance-gate:
  needs: performance-tests
  runs-on: ubuntu-latest
  steps:
  - name: Performance Gate Check
    run: |
      # Add logic to fail deployment if performance thresholds are not met
      # This could parse Locust/JMeter results and compare against baselines
      echo "Checking performance thresholds..."

Local Performance Testing

For local development and testing:

# Install tools locally
pip install locust
# Download JMeter from https://jmeter.apache.org/download_jmeter.cgi

# Run Locust locally
cd performance-tests/locust
locust -f locustfile.py --host=http://localhost:3000

# Run JMeter locally (GUI mode)
./apache-jmeter-5.6.2/bin/jmeter.sh

This comprehensive performance testing setup ensures your Rails application can handle expected loads and provides early detection of performance regressions through automated CI/CD integration.

📊 Local Monitoring with Prometheus & Grafana

Local Monitoring Stack

This project includes local monitoring capabilities using Docker-based Prometheus and Grafana for development and testing environments. No cloud dependencies required.

Monitoring Components

Enable Local Monitoring

To enable local monitoring with Terraform:

# Navigate to development environment
cd terraform/environments/development

# Enable monitoring in variables
# Edit terraform.tfvars or set environment variable:
export TF_VAR_enable_local_monitoring=true

# Apply Terraform configuration
terraform init
terraform apply

Manual Docker Compose Setup

Alternatively, you can run monitoring manually:

# Create docker-compose.monitoring.yml
cat > docker-compose.monitoring.yml << 'EOF'
version: '3.8'

services:
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    ports:
      - "9090:9090"
    volumes:
      - ./monitoring/prometheus.yml:/etc/prometheus/prometheus.yml
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
      - '--web.console.libraries=/etc/prometheus/console_libraries'
      - '--web.console.templates=/etc/prometheus/consoles'
      - '--web.enable-lifecycle'

  grafana:
    image: grafana/grafana:latest
    container_name: grafana
    ports:
      - "3001:3000"
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=admin123
    volumes:
      - grafana-storage:/var/lib/grafana

volumes:
  grafana-storage:
EOF

# Create basic Prometheus configuration
mkdir -p monitoring
cat > monitoring/prometheus.yml << 'EOF'
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'rails-app'
    static_configs:
      - targets: ['host.docker.internal:3000']
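      # host.docker.internal resolves to the host from Docker Desktop; on Linux,
      # add extra_hosts: ["host.docker.internal:host-gateway"] to the prometheus
      # service in docker-compose.monitoring.yml so this target is reachable.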
    metrics_path: '/metrics'
    scrape_interval: 5s

  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']
EOF

# Start monitoring stack
docker compose -f docker-compose.monitoring.yml up -d

Access Monitoring Services

Once running, access your monitoring services:
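
  • Grafana: http://localhost:3001 (default login: admin / admin123)
  • Prometheus: http://localhost:9090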

Rails Application Metrics

To expose metrics from your Rails application, add the prometheus-client gem:

# Add to Gemfile
gem 'prometheus-client'

# Create app/controllers/metrics_controller.rb
require 'prometheus/client/formats/text'

class MetricsController < ApplicationController
  def show
    # Serialize the registry in the Prometheus text exposition format
    render plain: Prometheus::Client::Formats::Text.marshal(Prometheus::Client.registry)
  end
end

# Add to config/routes.rb
get '/metrics', to: 'metrics#show'

Health Check Scripts

The Terraform monitoring module includes health check scripts:

# After terraform apply, check container status
./check_containers.sh

# Script checks:
# - Prometheus container health
# - Grafana container health  
# - Network connectivity
# - Service endpoint availability

Grafana Dashboard Setup

  1. Access Grafana: Navigate to http://localhost:3001
  2. Login: Use admin/admin123 (change on first login)
  3. Add Data Source (or provision it automatically; see the sketch after this list):
    • Type: Prometheus
    • URL: http://prometheus:9090
  4. Import Dashboard: Import a community dashboard by ID (for example, 3662) as a starting point and adapt its panels to your application's metrics
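
Step 3 can also be automated by mounting a Grafana provisioning file into the container (a sketch; the path under ./monitoring and the mount are assumptions):

# monitoring/grafana/provisioning/datasources/prometheus.yml
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus:9090
    isDefault: true

# Mount it in docker-compose.monitoring.yml under the grafana service:
#   volumes:
#     - ./monitoring/grafana/provisioning:/etc/grafana/provisioning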

Custom Metrics Examples

Add custom metrics to your Rails application:

# config/initializers/prometheus.rb
require 'prometheus/client'

# Create metrics
REQUEST_COUNTER = Prometheus::Client::Counter.new(
  :http_requests_total,
  docstring: 'Total HTTP requests',
  labels: [:method, :path, :status]
)

RESPONSE_TIME = Prometheus::Client::Histogram.new(
  :http_request_duration_seconds,
  docstring: 'Response time histogram',
  labels: [:method, :path]
)

# Register metrics
Prometheus::Client.registry.register(REQUEST_COUNTER)
Prometheus::Client.registry.register(RESPONSE_TIME)

# app/controllers/application_controller.rb
class ApplicationController < ActionController::Base
  around_action :track_request_metrics

  private

  def track_request_metrics
    start_time = Time.current
    
    begin
      yield
    ensure
      duration = Time.current - start_time
      
      REQUEST_COUNTER.increment(
        labels: {
          method: request.method,
          path: request.path,
          status: response.status
        }
      )
      
      RESPONSE_TIME.observe(
        duration,
        labels: {
          method: request.method,
          path: request.path
        }
      )
    end
  end
end

Monitoring Best Practices

Key Metrics to Monitor
Alerting (Optional)
# Add to prometheus.yml for basic alerting
rule_files:
  - "alert_rules.yml"

# Create alert_rules.yml
groups:
  - name: rails_app
    rules:
      - alert: HighErrorRate
        expr: rate(http_requests_total{status=~"5.."}[5m]) > 0.1
        for: 2m
        labels:
          severity: warning
        annotations:
          summary: "High error rate detected"

Cleanup Monitoring

To stop and remove monitoring containers:

# If using Terraform
terraform destroy

# If using Docker Compose manually
docker compose -f docker-compose.monitoring.yml down -v
docker volume rm $(docker volume ls -q | grep grafana)

Integration with Performance Testing

Combine monitoring with performance testing for comprehensive analysis:

# Start monitoring
docker compose -f docker-compose.monitoring.yml up -d

# Run performance tests while monitoring
cd performance-tests
locust -f locustfile.py --host=http://localhost:3000 --users=50 --spawn-rate=5 --run-time=300s

# Monitor results in:
# - Grafana: http://localhost:3001 (dashboards and graphs)
# - Prometheus: http://localhost:9090 (raw metrics and queries)

This local monitoring setup provides production-like observability without any cloud dependencies, perfect for development and testing environments.