This application uses LLM/AI technology to generate personalized job application documents including cover letters, resumes, professional emails, and elevator pitches based on user profiles and job descriptions.
This section outlines the primary technologies and design choices for this project, emphasizing modern best practices.
- JavaScript: `jsbundling-rails` with esbuild is used for efficient JavaScript asset management, replacing the deprecated Webpacker. For new projects, initialize with:
  `rails new CoverLetterApp --database=postgresql --javascript=esbuild`
- CSS: Tailwind CSS via the `tailwindcss-rails` gem.
- Forms: the Rails `form_with` helper. Consider `simple_form` for more complex scenarios. Client-side validation is encouraged using Stimulus or a gem like `client_side_validations`.
- LLM integration: encapsulated in a dedicated service object (e.g., `LlmCoverLetterGeneratorService`). This service handles prompt construction, API client initialization, API calls, and response processing, ensuring a clear separation of concerns and testability. A minimal sketch follows this list.
- Secrets: Rails encrypted credentials (`bin/rails credentials:edit`) for development and simpler production setups. Google Secret Manager is recommended for more robust production secret management.

For more detailed commands, code snippets, and initial setup walkthroughs related to these technologies, please see the Full-Stack Implementation Guide.
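For illustration, here is a minimal sketch of such a service object, assuming the Gemini REST `generateContent` endpoint, the `gemini-pro` model name, and an API key stored in Rails credentials as `google_api_key` (all assumptions; the project's actual service may differ):

```ruby
# app/services/llm_cover_letter_generator_service.rb (illustrative sketch only)
require "net/http"
require "json"

class LlmCoverLetterGeneratorService
  API_URL = "https://generativelanguage.googleapis.com/v1beta/models/gemini-pro:generateContent"

  def initialize(user_profile:, job_description:)
    @user_profile = user_profile
    @job_description = job_description
  end

  def call
    extract_text(post_to_llm(build_prompt))
  end

  private

  # Prompt construction is isolated so it can be unit-tested on its own.
  def build_prompt
    "Write a tailored cover letter.\n" \
      "Candidate profile: #{@user_profile}\n" \
      "Job description: #{@job_description}"
  end

  def post_to_llm(prompt)
    uri = URI("#{API_URL}?key=#{Rails.application.credentials.google_api_key}")
    body = { contents: [{ parts: [{ text: prompt }] }] }.to_json
    JSON.parse(Net::HTTP.post(uri, body, "Content-Type" => "application/json").body)
  end

  def extract_text(response)
    response.dig("candidates", 0, "content", "parts", 0, "text")
  end
end
```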
Prerequisites:

- Ruby (version specified in `.ruby-version`)
- Node.js and Yarn (needed when `jsbundling-rails` is paired with a bundler that requires them, though esbuild often doesn't require Node directly for basic use)

These are the general steps to get the application running. For a more detailed walkthrough with example commands and initial code structures, please refer to the Full-Stack Implementation Guide.
1. Clone the repository:
   `git clone https://github.com/ElReyUno/cover-letter-llm.git`
   `cd CoverLetterApp`
2. Install dependencies: `bundle install`, then `yarn install` (or `npm install`).
3. Set up the database: `rails db:create db:migrate db:seed`
4. Configure credentials: run `bin/rails credentials:edit` to add your `GOOGLE_API_KEY` and any other necessary secrets.
5. Start the server: `rails server` (or `./bin/dev` if using Foreman/`Procfile.dev`; a sample `Procfile.dev` sketch follows these steps).
6. Start background job processing: `bundle exec sidekiq`, with Redis running via `redis-server` (or `sudo systemctl start redis`).
7. Visit `http://localhost:3000`.
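If the repository does not already include one, a typical `Procfile.dev` for this esbuild + Tailwind + Sidekiq stack might look like the following sketch (process names and scripts are assumptions; adjust to the project's actual setup):

```
web: bin/rails server -p 3000
js: yarn build --watch
css: bin/rails tailwindcss:watch
worker: bundle exec sidekiq
```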
Run the full test suite with `bundle exec rspec`, or a single spec file with `bundle exec rspec spec/models/user_spec.rb`.
This project includes a complete Terraform infrastructure setup for deploying to Google Cloud Platform with automated CI/CD:
See the `terraform/` directory for the complete infrastructure configuration.

Quick Start:
# Run the interactive setup
./terraform/setup.sh
# Or deploy manually
cd terraform/environments/development
terraform init && terraform apply
Documentation: see `terraform/README.md` and `terraform/SETUP_GUIDE.md` for complete setup instructions.
Deployment notes:

- A `Dockerfile` using a current Ruby version (e.g., `FROM ruby:3.4.4`) is provided; a rough sketch of its shape follows this list.
- Ensure production secrets (e.g., `RAILS_MASTER_KEY`, Google Secret Manager configuration) are set up correctly on your chosen platform.
- Confirm the configuration required by the LLM generation service (`LlmCoverLetterGeneratorService`) in the target environment.
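The repository's actual `Dockerfile` is authoritative; as a rough sketch of the shape such a file takes (package setup for Node/esbuild and other details are omitted, and the asset-precompile step assumes a recent Rails that honors `SECRET_KEY_BASE_DUMMY`):

```dockerfile
# Illustrative sketch only - see the Dockerfile shipped with the repository.
FROM ruby:3.4.4

WORKDIR /app

# Install gems first to take advantage of Docker layer caching.
COPY Gemfile Gemfile.lock ./
RUN bundle install

COPY . .

# Precompile assets without needing the real master key at build time.
RUN SECRET_KEY_BASE_DUMMY=1 bin/rails assets:precompile

EXPOSE 3000
CMD ["bin/rails", "server", "-b", "0.0.0.0"]
```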
The application uses a multi-controller architecture to handle different document types (a sample `config/routes.rb` sketch follows the list):
- `CoverLettersController`: Main orchestrator for multi-document generation
  - `GET /cover_letters/new` - Multi-document generation form
  - `POST /cover_letters` - Processes the form and dispatches to the appropriate services
  - `GET /generation_results` - Shows results of multi-document generation
- `ResumesController`: Manages resume-specific operations (`/resumes/new`, `/resumes/:id`, `/resumes`)
- `EmailsController`: Handles professional email generation (`/emails/new`, `/emails/:id`, `/emails`)
- `ElevatorPitchesController`: Manages elevator pitch creation (`/elevator_pitches/new`, `/elevator_pitches/:id`, `/elevator_pitches`)
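A `config/routes.rb` consistent with the endpoints above might look roughly like this (a sketch; the restricted actions are assumptions beyond the paths listed):

```ruby
# config/routes.rb (illustrative sketch)
Rails.application.routes.draw do
  # Main orchestrator: form, submission, and results for multi-document generation.
  resources :cover_letters, only: [:index, :show, :new, :create]
  get "generation_results", to: "cover_letters#generation_results"

  # Per-document-type controllers.
  resources :resumes, only: [:index, :show, :new, :create]
  resources :emails, only: [:index, :show, :new, :create]
  resources :elevator_pitches, only: [:index, :show, :new, :create]
end
```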
User
├── has_one :user_profile
├── has_many :job_descriptions
├── has_many :cover_letters
├── has_many :resumes
├── has_many :emails
└── has_many :elevator_pitches

# Each document type belongs_to:
# - :user, :job_description, :user_profile
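A minimal sketch of the corresponding ActiveRecord models (the `dependent: :destroy` options are assumptions, not part of the diagram above):

```ruby
# app/models/user.rb (sketch)
class User < ApplicationRecord
  has_one  :user_profile, dependent: :destroy
  has_many :job_descriptions, dependent: :destroy
  has_many :cover_letters, dependent: :destroy
  has_many :resumes, dependent: :destroy
  has_many :emails, dependent: :destroy
  has_many :elevator_pitches, dependent: :destroy
end

# app/models/cover_letter.rb (sketch; Resume, Email, and ElevatorPitch follow the same pattern)
class CoverLetter < ApplicationRecord
  belongs_to :user
  belongs_to :job_description
  belongs_to :user_profile
end
```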
Uploaded resume PDFs are parsed with the `pdf-reader` gem and NLP techniques (potentially another LLM call) to extract skills, experience, and other relevant data to pre-fill user profiles and inform cover letter generation; a small text-extraction sketch follows.
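A minimal sketch of the text-extraction step with the `pdf-reader` gem (class and method names are illustrative; the downstream skill/experience extraction is omitted):

```ruby
# app/services/resume_pdf_text_extractor.rb (illustrative sketch)
require "pdf-reader"

class ResumePdfTextExtractor
  def initialize(path)
    @path = path
  end

  # Returns the plain text of every page, ready to be fed to an NLP step
  # or another LLM call that extracts skills and experience.
  def call
    PDF::Reader.new(@path).pages.map(&:text).join("\n")
  end
end
```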
The application has been set up for local development with infrastructure configuration under `/terraform/`. The following GitHub CLI commands help manage Actions workflows and runs:
# List all workflows
gh workflow list
# View workflow runs
gh run list
# Delete workflow runs script
./delete_workflow_runs.sh <owner/repo> # Replace <owner/repo> with your GitHub username/repository name
The `delete_workflow_runs.sh` script handles bulk workflow run deletion:
#!/bin/bash
# Ensure the repo argument is provided.
if [[ -z "$1" ]]; then
echo "Usage: $0 <owner/repo>"
exit 1
fi
repo=$1
# Fetch all workflow runs for the given repository.
runs=$(gh api repos/$repo/actions/runs --paginate | jq -r '.workflow_runs[] | .id')
# Delete each run.
while IFS= read -r run; do
echo "Deleting run $run..."
gh api -X DELETE repos/$repo/actions/runs/$run --silent
done <<< "$runs"
echo "All workflow runs for $repo have been deleted."
Usage:
chmod +x delete_workflow_runs.sh
./delete_workflow_runs.sh <owner/repo> # Replace <owner/repo> with your GitHub username/repository name
# Delete a specific workflow run by ID
gh run delete <run_id>
# Interactive deletion - select runs from a list
gh run delete # Allows you to select runs from a list
# Set environment variables for your repository
export OWNER="my-user" # Or your organization name
export REPOSITORY="my-repo"
export WORKFLOW="My Workflow"
# Delete all runs for a specific workflow
gh api -X GET /repos/$OWNER/$REPOSITORY/actions/runs --paginate \
| jq '.workflow_runs[] | select(.name == "'"$WORKFLOW"'") | .id' \
| xargs -t -I{} gh api -X DELETE /repos/$OWNER/$REPOSITORY/actions/runs/{}
Once all the runs associated with the workflow are deleted, the workflow itself will disappear from the Actions tab on GitHub. To prevent it from reappearing:
1. Check out your default branch (`main` or `master`).
2. Open the `.github/workflows` directory.
3. Delete the workflow's `.yml` or `.yaml` file.

Example:
# Remove workflow file
rm .github/workflows/unwanted-workflow.yml
# Commit and push changes
git add .github/workflows/
git commit -m "Remove unwanted workflow file"
git push origin main
By following these steps, you can efficiently delete unwanted workflow runs and remove the associated workflow files, keeping your GitHub Actions tab clean and organized.
This project integrates performance testing tools into the CI/CD pipeline to ensure application scalability and reliability under load.
Create `.github/workflows/performance-jmeter.yml`:
name: JMeter Performance Tests

on:
  push:
    branches: [ main, staging ]
  pull_request:
    branches: [ main ]
  workflow_dispatch:
    inputs:
      target_url:
        description: 'Target URL for performance testing'
        required: true
        default: 'https://your-app.cloud.run'
      duration:
        description: 'Test duration in seconds'
        required: false
        default: '300'
      threads:
        description: 'Number of concurrent users'
        required: false
        default: '50'
  # Allows this workflow to be reused from performance-suite.yml below.
  workflow_call:
    inputs:
      target_url:
        required: true
        type: string
      duration:
        required: false
        type: string
        default: '300'
      threads:
        required: false
        type: string
        default: '50'

jobs:
  jmeter-test:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Setup Java
        uses: actions/setup-java@v4
        with:
          java-version: '11'
          distribution: 'temurin'

      - name: Download JMeter
        run: |
          wget https://archive.apache.org/dist/jmeter/binaries/apache-jmeter-5.6.2.tgz
          tar -xzf apache-jmeter-5.6.2.tgz

      - name: Create JMeter Test Plan
        run: |
          mkdir -p performance-tests/jmeter
          cat > performance-tests/jmeter/load-test.jmx << 'EOF'
          <?xml version="1.0" encoding="UTF-8"?>
          <jmeterTestPlan version="1.2">
            <hashTree>
              <TestPlan guiclass="TestPlanGui" testclass="TestPlan" testname="Cover Letter App Load Test">
                <elementProp name="TestPlan.arguments" elementType="Arguments" guiclass="ArgumentsPanel">
                  <collectionProp name="Arguments.arguments"/>
                </elementProp>
                <stringProp name="TestPlan.user_define_classpath"></stringProp>
                <boolProp name="TestPlan.serialize_threadgroups">false</boolProp>
                <boolProp name="TestPlan.functional_mode">false</boolProp>
              </TestPlan>
              <hashTree>
                <ThreadGroup guiclass="ThreadGroupGui" testclass="ThreadGroup" testname="User Load">
                  <stringProp name="ThreadGroup.on_sample_error">continue</stringProp>
                  <elementProp name="ThreadGroup.main_controller" elementType="LoopController">
                    <boolProp name="LoopController.continue_forever">false</boolProp>
                    <intProp name="LoopController.loops">-1</intProp>
                  </elementProp>
                  <stringProp name="ThreadGroup.num_threads">${{ inputs.threads }}</stringProp>
                  <stringProp name="ThreadGroup.ramp_time">30</stringProp>
                  <longProp name="ThreadGroup.duration">${{ inputs.duration }}</longProp>
                  <boolProp name="ThreadGroup.scheduler">true</boolProp>
                </ThreadGroup>
                <hashTree>
                  <HTTPSamplerProxy guiclass="HttpTestSampleGui" testclass="HTTPSamplerProxy" testname="Home Page">
                    <elementProp name="HTTPsampler.Arguments" elementType="Arguments">
                      <collectionProp name="Arguments.arguments"/>
                    </elementProp>
                    <!-- Supply the bare hostname (no protocol) via the target_url input. -->
                    <stringProp name="HTTPSampler.domain">${{ inputs.target_url }}</stringProp>
                    <stringProp name="HTTPSampler.port">3000</stringProp>
                    <stringProp name="HTTPSampler.path">/</stringProp>
                    <stringProp name="HTTPSampler.method">GET</stringProp>
                  </HTTPSamplerProxy>
                </hashTree>
              </hashTree>
            </hashTree>
          </jmeterTestPlan>
          EOF

      - name: Run JMeter Tests
        run: |
          ./apache-jmeter-5.6.2/bin/jmeter -n -t performance-tests/jmeter/load-test.jmx \
            -l performance-tests/jmeter/results.jtl \
            -e -o performance-tests/jmeter/report

      - name: Upload JMeter Results
        uses: actions/upload-artifact@v4
        with:
          name: jmeter-results
          path: |
            performance-tests/jmeter/results.jtl
            performance-tests/jmeter/report/
Create `.github/workflows/performance-locust.yml`:
name: Locust Performance Tests

on:
  push:
    branches: [ main, staging ]
  pull_request:
    branches: [ main ]
  workflow_dispatch:
    inputs:
      target_url:
        description: 'Target URL for performance testing'
        required: true
        default: 'https://your-app.cloud.run'
      users:
        description: 'Number of concurrent users'
        required: false
        default: '100'
      spawn_rate:
        description: 'User spawn rate per second'
        required: false
        default: '5'
      run_time:
        description: 'Test duration (e.g., 5m, 300s)'
        required: false
        default: '5m'
  # Allows this workflow to be reused from performance-suite.yml below.
  workflow_call:
    inputs:
      target_url:
        required: true
        type: string
      users:
        required: false
        type: string
        default: '100'
      spawn_rate:
        required: false
        type: string
        default: '5'
      run_time:
        required: false
        type: string
        default: '5m'

jobs:
  locust-test:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Setup Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.11'

      - name: Install Locust
        run: |
          pip install locust requests beautifulsoup4

      - name: Create Locust Test File
        run: |
          mkdir -p performance-tests/locust
          cat > performance-tests/locust/locustfile.py << 'EOF'
          from locust import HttpUser, task, between
          import random

          class CoverLetterAppUser(HttpUser):
              wait_time = between(1, 3)

              def on_start(self):
                  """Perform login if authentication is required"""
                  # Uncomment and modify if authentication is needed
                  # self.client.get("/users/sign_in")
                  pass

              @task(3)
              def view_home_page(self):
                  """Test home page load"""
                  self.client.get("/")

              @task(2)
              def view_cover_letters(self):
                  """Test cover letters page"""
                  self.client.get("/cover_letters")

              @task(2)
              def view_new_cover_letter_form(self):
                  """Test new cover letter form"""
                  self.client.get("/cover_letters/new")

              @task(1)
              def view_resumes(self):
                  """Test resumes page"""
                  self.client.get("/resumes")

              @task(1)
              def view_emails(self):
                  """Test professional emails page"""
                  self.client.get("/emails")

              @task(1)
              def view_elevator_pitches(self):
                  """Test elevator pitches page"""
                  self.client.get("/elevator_pitches")

              @task(1)
              def simulate_api_heavy_request(self):
                  """Simulate document generation (without actual API call)"""
                  # This tests the form load without actually calling LLM APIs
                  # Modify based on your application's specific endpoints
                  with self.client.get("/cover_letters/new", catch_response=True) as response:
                      if response.status_code == 200:
                          response.success()
          EOF

      - name: Run Locust Tests
        run: |
          cd performance-tests/locust
          locust -f locustfile.py --host=${{ inputs.target_url }} \
            --users=${{ inputs.users }} \
            --spawn-rate=${{ inputs.spawn_rate }} \
            --run-time=${{ inputs.run_time }} \
            --html=report.html \
            --csv=results \
            --headless

      - name: Upload Locust Results
        uses: actions/upload-artifact@v4
        with:
          name: locust-results
          path: |
            performance-tests/locust/report.html
            performance-tests/locust/results_stats.csv
            performance-tests/locust/results_failures.csv
Create `.github/workflows/performance-suite.yml`:
name: Performance Testing Suite

on:
  workflow_dispatch:
    inputs:
      environment:
        description: 'Target environment'
        required: true
        default: 'staging'
        type: choice
        options:
          - staging
          - production
      test_duration:
        description: 'Test duration in minutes'
        required: false
        default: '5'
  # Allows the suite itself to be reused from a deployment workflow.
  workflow_call:
    inputs:
      environment:
        required: true
        type: string
      test_duration:
        required: false
        type: string
        default: '5'

jobs:
  prepare-environment:
    runs-on: ubuntu-latest
    outputs:
      target_url: ${{ steps.set-url.outputs.url }}
    steps:
      - name: Set Target URL
        id: set-url
        run: |
          if [ "${{ inputs.environment }}" = "production" ]; then
            echo "url=https://your-production-app.com" >> $GITHUB_OUTPUT
          else
            echo "url=https://your-staging-app.com" >> $GITHUB_OUTPUT
          fi

  jmeter-test:
    needs: prepare-environment
    uses: ./.github/workflows/performance-jmeter.yml
    with:
      target_url: ${{ needs.prepare-environment.outputs.target_url }}
      duration: ${{ inputs.test_duration }}  # note: the JMeter workflow interprets this value as seconds
      threads: '50'

  locust-test:
    needs: prepare-environment
    uses: ./.github/workflows/performance-locust.yml
    with:
      target_url: ${{ needs.prepare-environment.outputs.target_url }}
      users: '100'
      spawn_rate: '5'
      run_time: ${{ inputs.test_duration }}m

  performance-analysis:
    needs: [prepare-environment, jmeter-test, locust-test]
    runs-on: ubuntu-latest
    steps:
      - name: Download Test Results
        uses: actions/download-artifact@v4

      - name: Analyze Performance Results
        run: |
          echo "## Performance Test Summary" >> $GITHUB_STEP_SUMMARY
          echo "### Test Configuration" >> $GITHUB_STEP_SUMMARY
          echo "- Environment: ${{ inputs.environment }}" >> $GITHUB_STEP_SUMMARY
          echo "- Duration: ${{ inputs.test_duration }} minutes" >> $GITHUB_STEP_SUMMARY
          echo "- Target URL: ${{ needs.prepare-environment.outputs.target_url }}" >> $GITHUB_STEP_SUMMARY
          echo "" >> $GITHUB_STEP_SUMMARY
          echo "### Results Available" >> $GITHUB_STEP_SUMMARY
          echo "- JMeter HTML Report: Available in artifacts" >> $GITHUB_STEP_SUMMARY
          echo "- Locust HTML Report: Available in artifacts" >> $GITHUB_STEP_SUMMARY
          echo "- Raw CSV Data: Available in artifacts" >> $GITHUB_STEP_SUMMARY
# Add these jobs to your main deployment workflow
# (reusable workflows are invoked at the job level with `uses:`, not as steps)
performance-tests:
  if: github.ref == 'refs/heads/main'
  uses: ./.github/workflows/performance-suite.yml
  with:
    environment: staging
    test_duration: '3'

performance-gate:
  needs: performance-tests
  runs-on: ubuntu-latest
  steps:
    - name: Performance Gate Check
      run: |
        # Add logic to fail deployment if performance thresholds are not met
        # This could parse Locust/JMeter results and compare against baselines
        # (a sketch of such a check follows below)
        echo "Checking performance thresholds..."
For local development and testing:
# Install tools locally
pip install locust
# Download JMeter from https://jmeter.apache.org/download_jmeter.cgi
# Run Locust locally
cd performance-tests/locust
locust -f locustfile.py --host=http://localhost:3000
# Run JMeter locally (GUI mode)
./apache-jmeter-5.6.2/bin/jmeter.sh
This comprehensive performance testing setup ensures your Rails application can handle expected loads and provides early detection of performance regressions through automated CI/CD integration.
This project includes local monitoring capabilities using Docker-based Prometheus and Grafana for development and testing environments. No cloud dependencies required.
To enable local monitoring with Terraform:
# Navigate to development environment
cd terraform/environments/development
# Enable monitoring in variables
# Edit terraform.tfvars or set environment variable:
export TF_VAR_enable_local_monitoring=true
# Apply Terraform configuration
terraform init
terraform apply
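The same flag can also live in the environment's tfvars file instead of the `TF_VAR_` override; a one-line sketch, assuming the variable is indeed named `enable_local_monitoring`:

```hcl
# terraform/environments/development/terraform.tfvars (sketch)
enable_local_monitoring = true
```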
Alternatively, you can run monitoring manually:
# Create docker-compose.monitoring.yml
cat > docker-compose.monitoring.yml << 'EOF'
version: '3.8'

services:
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    ports:
      - "9090:9090"
    volumes:
      - ./monitoring/prometheus.yml:/etc/prometheus/prometheus.yml
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
      - '--web.console.libraries=/etc/prometheus/console_libraries'
      - '--web.console.templates=/etc/prometheus/consoles'
      - '--web.enable-lifecycle'

  grafana:
    image: grafana/grafana:latest
    container_name: grafana
    ports:
      - "3001:3000"
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=admin123
    volumes:
      - grafana-storage:/var/lib/grafana

volumes:
  grafana-storage:
EOF
# Create basic Prometheus configuration
mkdir -p monitoring
cat > monitoring/prometheus.yml << 'EOF'
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'rails-app'
    static_configs:
      - targets: ['host.docker.internal:3000']
    metrics_path: '/metrics'
    scrape_interval: 5s

  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']
EOF
# Start monitoring stack
docker compose -f docker-compose.monitoring.yml up -d
Once running, access your monitoring services:

- Prometheus: http://localhost:9090
- Grafana: http://localhost:3001 (login `admin` / `admin123`, from `GF_SECURITY_ADMIN_PASSWORD` above)

To expose metrics from your Rails application, add the `prometheus-client` gem:
# Add to Gemfile
gem 'prometheus-client'

# Create app/controllers/metrics_controller.rb
require 'prometheus/client/formats/text'

class MetricsController < ApplicationController
  def show
    # Render all registered metrics in the Prometheus text exposition format.
    render plain: Prometheus::Client::Formats::Text.marshal(Prometheus::Client.registry)
  end
end

# Add to config/routes.rb
get '/metrics', to: 'metrics#show'
The Terraform monitoring module includes health check scripts:
# After terraform apply, check container status
./check_containers.sh
# Script checks:
# - Prometheus container health
# - Grafana container health
# - Network connectivity
# - Service endpoint availability
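The script generated by the Terraform module is authoritative; as a rough illustration, a health check along these lines (container names and ports assumed from the Docker Compose file above):

```bash
#!/bin/bash
# check_containers.sh - illustrative sketch of the health checks described above.
set -e

# Container health: both monitoring containers must be running.
for name in prometheus grafana; do
  if [ "$(docker inspect -f '{{.State.Running}}' "$name" 2>/dev/null)" = "true" ]; then
    echo "$name container is running"
  else
    echo "$name container is NOT running" >&2
    exit 1
  fi
done

# Service endpoint availability.
curl -fsS http://localhost:9090/-/healthy > /dev/null && echo "Prometheus endpoint healthy"
curl -fsS http://localhost:3001/api/health > /dev/null && echo "Grafana endpoint healthy"
```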
Grafana dashboard ID `3662` can be imported for basic Rails metrics.

Add custom metrics to your Rails application:
# config/initializers/prometheus.rb
require 'prometheus/client'

# Create metrics
REQUEST_COUNTER = Prometheus::Client::Counter.new(
  :http_requests_total,
  docstring: 'Total HTTP requests',
  labels: [:method, :path, :status]
)

RESPONSE_TIME = Prometheus::Client::Histogram.new(
  :http_request_duration_seconds,
  docstring: 'Response time histogram',
  labels: [:method, :path]
)

# Register metrics
Prometheus::Client.registry.register(REQUEST_COUNTER)
Prometheus::Client.registry.register(RESPONSE_TIME)

# app/controllers/application_controller.rb
class ApplicationController < ActionController::Base
  around_action :track_request_metrics

  private

  def track_request_metrics
    start_time = Time.current
    begin
      yield
    ensure
      duration = Time.current - start_time
      REQUEST_COUNTER.increment(
        labels: {
          method: request.method,
          path: request.path,
          status: response.status
        }
      )
      RESPONSE_TIME.observe(
        duration,
        labels: {
          method: request.method,
          path: request.path
        }
      )
    end
  end
end
# Add to prometheus.yml for basic alerting
rule_files:
  - "alert_rules.yml"

# Create alert_rules.yml
groups:
  - name: rails_app
    rules:
      - alert: HighErrorRate
        expr: rate(http_requests_total{status=~"5.."}[5m]) > 0.1
        for: 2m
        labels:
          severity: warning
        annotations:
          summary: "High error rate detected"
To stop and remove monitoring containers:
# If using Terraform
terraform destroy
# If using Docker Compose manually
docker compose -f docker-compose.monitoring.yml down -v
docker volume rm $(docker volume ls -q | grep grafana)
Combine monitoring with performance testing for comprehensive analysis:
# Start monitoring
docker compose -f docker-compose.monitoring.yml up -d
# Run performance tests while monitoring
cd performance-tests/locust
locust -f locustfile.py --host=http://localhost:3000 --users=50 --spawn-rate=5 --run-time=300s
# Monitor results in:
# - Grafana: http://localhost:3001 (dashboards and graphs)
# - Prometheus: http://localhost:9090 (raw metrics and queries)
This local monitoring setup provides production-like observability without any cloud dependencies, perfect for development and testing environments.