Operations
Day-to-day operations for running and maintaining your self-hosted PullApprove5 instance.
Container Commands
The Docker image supports the following commands:
- `server` — starts the server
- `worker` — starts a background job worker
- `migrate` — runs database migrations
- `enable-admin-user <email>` — grants admin access to a user (must have logged in first)
- `python-shell` — opens an interactive Python shell for debugging and data inspection
The `server` and `worker` commands automatically run preflight checks on startup (skip with `--skip-preflight` if needed).
The admin UI is available at `/admin/`. To grant access, run `enable-admin-user` after the user has logged in:
    docker run --env-file .env pullapprove5:5.x.x enable-admin-user user@example.com

    # Or on Kubernetes:
    kubectl exec -it deploy/pullapprove-server -- enable-admin-user user@example.com
Health Check and Scaling
PullApprove5 exposes a `GET /up/` endpoint that returns a `200 OK` response when the server is running. Use this for load balancer health checks and Kubernetes readiness/liveness probes.
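A minimal health check can be scripted with curl. This is a sketch; the default URL assumes the container's port 8000 is reachable locally, so substitute your instance address:

```shell
# Return 0 when /up/ answers 200, nonzero otherwise.
# -f: fail on HTTP errors, --max-time: bound the probe duration.
check_health() {
  curl -fsS -o /dev/null --max-time 5 "${1:-http://localhost:8000}/up/"
}
```

The same endpoint is what Kubernetes readiness and liveness probes should target on port 8000.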
PullApprove5 is stateless and horizontally scalable. All shared state is stored in PostgreSQL.
- Server — run multiple `server` containers behind a load balancer. Adjust `PULLAPPROVE_SERVER_WORKERS` to control the number of workers per container.
- Worker — run multiple `worker` containers to increase background job throughput. The job queue uses database-level locking to prevent duplicate processing. The default job timeout is 60 minutes.
Proxy Configuration
The server container serves HTTP on port 8000. Terminate TLS at your reverse proxy and forward traffic to the container.
Your reverse proxy must send the `X-Forwarded-Proto: https` header. PullApprove redirects HTTP to HTTPS and uses this header to detect whether the original request was already over HTTPS. Without it, requests will enter a redirect loop.
    # Change the proxy header
    # (default: "X-Forwarded-Proto: https")
    PULLAPPROVE_HTTPS_PROXY_HEADER="X-Forwarded-Proto: https"

    # Disable HTTPS redirect (default: true)
    PULLAPPROVE_HTTPS_REDIRECT_ENABLED=false
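You can verify the redirect behavior by probing the container directly on port 8000: a request carrying the header should return 200, while one without it should get redirected to HTTPS. A sketch, assuming the container is reachable at the URL you pass in:

```shell
# Print the HTTP status code for a request to /up/.
# Any extra arguments (e.g. -H headers) are passed through to curl.
probe_status() {
  url=$1
  shift
  curl -s -o /dev/null --max-time 5 -w '%{http_code}' "$@" "$url/up/"
}

# With the header, expect 200:
#   probe_status http://localhost:8000 -H 'X-Forwarded-Proto: https'
# Without it, expect a redirect status instead.
```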
If your network requires an HTTP proxy for outbound traffic, contact us for configuration details.
Logging and Monitoring
Both the server and worker containers log to stdout in structured key-value format.
Example log lines:
    [INFO] Completed job job_class=app.pullrequests.jobs.PullRequestProcessUpdates job_duration=0.014 job_queue="default"
    [INFO] Job worker stats worker_processes=2 jobs_requested=4 jobs_processing=0
    [INFO] api_response status=200 from_cache=True
Key fields to be aware of:
- `request_id` — included in request logs for end-to-end tracing
- `job_class` and `job_duration` — track background job performance
- `jobs_processing` and `worker_processes` — monitor worker utilization
- `rate_limit_remaining` — track Git provider API rate limit consumption
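For ad-hoc inspection, individual fields can be pulled out of a keyvalue log line with standard shell tools. A sketch, using one of the example log lines above:

```shell
# Extract the value of a key=value field from a log line.
get_field() {
  printf '%s\n' "$2" | tr ' ' '\n' | awk -F= -v key="$1" '$1 == key { print $2 }'
}

line='[INFO] Completed job job_class=app.pullrequests.jobs.PullRequestProcessUpdates job_duration=0.014 job_queue="default"'
get_field job_duration "$line"   # prints 0.014
```

For anything beyond quick spot checks, `PULLAPPROVE_LOG_FORMAT=json` with a log aggregator is the better fit.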
Log Configuration
    # Log level for application logs
    # Set to DEBUG for troubleshooting (default: INFO)
    PULLAPPROVE_LOG_LEVEL=INFO

    # "keyvalue" (default) or "json"
    # Use "json" for log aggregation (Datadog, Splunk, ELK)
    PULLAPPROVE_LOG_FORMAT=keyvalue

    # "split" (default), "stdout", or "stderr"
    # "split" sends INFO to stdout, WARNING+ to stderr
    PULLAPPROVE_LOG_STREAM=split
Sentry
PullApprove5 comes with Sentry support built-in — just provide a DSN to enable error tracking and performance monitoring.
    # Sentry DSN for error tracking
    SENTRY_DSN=https://your-sentry-dsn@sentry.io/project

    # Environment name shown in Sentry
    # (e.g., production, staging)
    SENTRY_ENVIRONMENT=production
What to Monitor
- Webhook endpoint — response codes and latency on your webhook path (e.g., `POST /webhooks/github/`)
- Background job health — the worker periodically logs `Job worker stats` with `jobs_processing` and `worker_processes` counts. Alert if `jobs_processing` stays elevated or workers drop to zero.
- Rate limits — logs containing `rate_limit_remaining` track Git provider API quota consumption
- Database — connection count, cache hit rate, and disk usage
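As a sketch of the background-job check above, `jobs_processing` can be extracted from a `Job worker stats` line and compared against a threshold. The threshold here is illustrative; tune it to your workload:

```shell
# Flag elevated jobs_processing from a worker stats log line.
stats_line='[INFO] Job worker stats worker_processes=2 jobs_requested=4 jobs_processing=0'

jobs_processing=$(printf '%s\n' "$stats_line" | grep -o 'jobs_processing=[0-9]*' | cut -d= -f2)

if [ "$jobs_processing" -gt 10 ]; then
  echo "ALERT: jobs_processing=$jobs_processing"
fi
```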
Rate Limits
GitHub Apps have an API rate limit per installation (per org the app is installed on). PullApprove monitors rate limit consumption and automatically pauses background sync jobs (history, insights) when remaining requests drop below a safe threshold. This preserves API capacity for real-time pull request processing, which is never paused. Paused jobs retry automatically when the rate limit window resets.
API calls are cached where possible and retried with exponential backoff on transient errors. Webhook deliveries are handled idempotently — if GitHub retries a webhook (due to a timeout or error), PullApprove will process it safely without duplicating work.
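The retry-with-backoff pattern described above can be sketched generically. This is an illustrative wrapper for your own operational scripts, not PullApprove's internal retry policy:

```shell
# Retry a command with exponential backoff.
# RETRY_MAX and RETRY_BASE_DELAY are illustrative knobs with defaults.
retry_with_backoff() {
  max=${RETRY_MAX:-5}
  delay=${RETRY_BASE_DELAY:-1}
  attempt=1
  while ! "$@"; do
    if [ "$attempt" -ge "$max" ]; then
      return 1   # give up after max attempts
    fi
    sleep "$delay"
    delay=$((delay * 2))   # double the wait each round
    attempt=$((attempt + 1))
  done
}
```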
Upgrading
To upgrade to a new version:
1. Take a database snapshot before upgrading
2. Load the new Docker image and push it to your registry
3. Run migrations: `docker run --env-file .env pullapprove5:5.x.x migrate`
4. Restart the `server` and `worker` containers with the new image
Migrations run separately from the application, so rolling deploys generally work without downtime. Release notes will call out any exceptions that require a maintenance window.
We recommend upgrading through each release in sequence; skipping versions is untested. Release notes are provided with each new version.
Rollback: Database migrations do not have reverse operations. To roll back a failed upgrade, restore your database from the snapshot taken in step 1 and revert to the previous Docker image.
Your license agreement allows non-production instances, so you can run a staging environment to test upgrades before applying them to production.
Backups and Data Retention
PostgreSQL is the only stateful component. Back up your database using your preferred method (e.g., `pg_dump`, managed database snapshots). There is no additional file storage to back up.
If PullApprove5 is temporarily unavailable, webhooks from GitHub and GitLab will be retried automatically by the provider. When the instance comes back up, incoming webhooks will re-trigger processing for open PRs. Historical insights and metrics data cannot be recovered from the Git provider, so regular database backups are recommended.
PullApprove automatically cleans up some data:
- Background job history — deleted after 7 days
- Insights summaries — deleted after the organization's configured retention period (default 60 days)
Pull request records, processing results, and event history are not automatically cleaned up and will grow over time. For most organizations this growth is modest.