Custom Metrics and Autoscaling
Learn how to configure custom metrics collection and autoscaling in Porter
Porter now supports exporting custom metrics from your applications and using them for autoscaling. This guide explains how to configure these features.
Configuring Metrics Scraping
Note: Metrics scraping is only available for web services.
You can now configure Porter to scrape metrics from your application’s `/metrics` endpoint. This is useful for:
- Collecting application-specific metrics
- Setting up custom autoscaling based on your metrics
- Monitoring application performance
How to Enable Metrics Scraping
- Navigate to your application dashboard
- Select your web service
- Go to the “Advanced” tab under service settings
- Find the “Metrics scraping” section
- Enable “Enable metrics scraping”
- Configure the following options:
  - Port: The port where your metrics endpoint is exposed (defaults to your web service’s default port)
  - Path: The path where metrics are exposed (defaults to `/metrics`)
Important: Our telemetry collector will automatically send requests to the specified port and path to collect metrics from your service.
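To verify that the endpoint responds the way the collector expects, you can request it directly while the service is running. This is an optional sanity check; `localhost:8080` and `/metrics` below are assumptions, not values Porter sets for you.

```python
# Illustrative sanity check: fetch the endpoint the telemetry collector will scrape.
# localhost:8080 and /metrics are assumptions; substitute your configured port and path.
import urllib.request

with urllib.request.urlopen("http://localhost:8080/metrics") as resp:
    body = resp.read().decode()

# Show the first few Prometheus exposition-format lines
print("\n".join(body.splitlines()[:5]))
```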
Metrics scraping configuration in the Advanced tab of a web service
Prometheus Metrics Format
Your application must expose metrics in Prometheus format, which follows these basic principles:
- Metrics are exposed as HTTP endpoints (typically `/metrics`)
- Each metric follows the format: `metric_name{label1="value1",label2="value2"} value`
- Common metric types:
  - Counter: Values that only increase (e.g., `total_http_requests`)
  - Gauge: Values that can go up and down (e.g., `current_memory_usage`)
  - Histogram: Observations distributed into buckets (e.g., `request_duration_seconds`)
Example metrics output:
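For instance (metric names and values here are illustrative):

```
# Illustrative output; your application’s metric names will differ
# HELP http_requests_total Total HTTP requests processed
# TYPE http_requests_total counter
http_requests_total{method="GET",path="/api/events"} 1027
# HELP current_memory_usage_bytes Current resident memory in bytes
# TYPE current_memory_usage_bytes gauge
current_memory_usage_bytes 52428800
# HELP request_duration_seconds HTTP request latency in seconds
# TYPE request_duration_seconds histogram
request_duration_seconds_bucket{le="0.1"} 933
request_duration_seconds_bucket{le="0.5"} 1012
request_duration_seconds_bucket{le="+Inf"} 1027
request_duration_seconds_sum 48.3
request_duration_seconds_count 1027
```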
For detailed information about implementing Prometheus metrics in your application, refer to the official Prometheus documentation on client libraries: https://prometheus.io/docs/instrumenting/clientlibs/
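If your framework doesn’t already expose Prometheus metrics, the client libraries make this straightforward. Here is a minimal sketch using the official Python client (`prometheus-client` on PyPI); the metric names and port 9090 are illustrative, not Porter defaults:

```python
# Minimal sketch: expose a counter and a histogram with the official Python client.
# Metric names and port 9090 are illustrative; match the port to your Porter settings.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

requests_total = Counter("myapp_requests_total", "Total requests handled")
request_latency = Histogram("myapp_request_duration_seconds", "Request latency in seconds")

if __name__ == "__main__":
    # Serves Prometheus-format metrics at http://0.0.0.0:9090/metrics
    start_http_server(9090)
    while True:
        with request_latency.time():
            time.sleep(random.random() / 10)  # simulated work
        requests_total.inc()
```

Point the Port and Path settings above at wherever your client library serves its metrics.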
Custom Autoscaling
With metrics scraping enabled, you can now set up autoscaling based on your custom metrics instead of just CPU and memory usage.
How to Configure Custom Autoscaling
- Navigate to your application dashboard
- Select your service
- Go to the “Resources” tab
- Configure basic autoscaling:
  - Enable “Enable autoscaling (overrides instances)”
  - Set “Min instances” (e.g., 1)
  - Set “Max instances” (e.g., 10)
- Switch to custom metrics mode by clicking the customize icon
- Configure custom metrics:
  - Metric Name: Select a metric from your exposed Prometheus metrics
  - Query: Write or modify the PromQL query (defaults to `avg(<metric_name>)`)
  - Threshold: Set the threshold value that triggers scaling
When your selected metric exceeds the threshold, Porter will automatically scale your service between the min and max instances you’ve specified.
Custom autoscaling configuration in the Resources tab of a service
Query Requirements for Autoscaling
When using custom metrics for autoscaling, your PromQL query must:
- Return a single numeric value (scalar)
- Examples of valid queries:
  - `avg(metric_name)` → Returns a single average value
  - `sum(rate(http_requests_total[5m]))` → Returns a single sum value
  - `max(some_latency_metric)` → Returns a single maximum value
Invalid query examples:
- Vector results (multiple time series)
- String results
- No data/empty results
If your query returns multiple values or time series, use aggregation operators like `avg()`, `sum()`, or `max()` to reduce it to a single value.
Switching Between Autoscaling Modes
You can switch between:
- Default Mode: Autoscale based on CPU/Memory usage
- Custom Mode: Autoscale based on your application metrics
Click the customize/restore icons to switch between modes.
Example Use Case: Data Processing Pipeline
Consider a data processing pipeline with two separate Porter applications:
Analytics Ingestion API
A Flask/FastAPI service that ingests analytics events from multiple client applications and publishes them to RabbitMQ for processing.
Metrics Configuration:
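One way to make queue depth visible to Porter (and to the autoscaler in the next section) is to have the ingestion service report it; in many setups this metric would instead come from RabbitMQ’s own Prometheus plugin or an exporter. The sketch below is illustrative only: the broker host, queue name, and metric label mirror the autoscaling example that follows and are not Porter defaults.

```python
# Illustrative FastAPI sketch: expose the current RabbitMQ queue depth as a gauge.
# Broker host "rabbitmq", queue "user_events", and the metric/label names are assumptions.
from fastapi import FastAPI, Response
from prometheus_client import Gauge, generate_latest, CONTENT_TYPE_LATEST
import pika

app = FastAPI()

queue_messages = Gauge(
    "rabbitmq_queue_messages",
    "Messages currently waiting in a RabbitMQ queue",
    ["queue_name"],
)

@app.get("/metrics")
def metrics():
    # passive=True only inspects the queue; message_count is the current depth
    connection = pika.BlockingConnection(pika.ConnectionParameters(host="rabbitmq"))
    channel = connection.channel()
    depth = channel.queue_declare(queue="user_events", passive=True).method.message_count
    connection.close()
    queue_messages.labels(queue_name="user_events").set(depth)
    return Response(generate_latest(), media_type=CONTENT_TYPE_LATEST)
```

Computing the depth inside the `/metrics` handler keeps the reported value fresh at every scrape.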
Event Processing Service
A separate application with Celery workers that process events from RabbitMQ and store them in your data warehouse.
Custom Autoscaling Configuration:
- Metric Name: `rabbitmq_queue_messages{queue_name="user_events"}`
- Query: `sum(rabbitmq_queue_messages{queue_name="user_events"})`
- Threshold: `1000` (scale up when more than 1000 events are waiting)