Customizing Network Settings for an Application
Most of the time, you can customize the NGINX configuration by adding an ingress annotation when deploying a web service. These annotations are set in the "Advanced" tab of the web template:
To add an annotation, click "Add row" and enter the annotation as a key-value pair.
This document covers the most common use cases. For the full list of supported annotations, see the ingress-nginx annotations documentation.
Setting Custom Read/Write Timeouts
Read/write timeouts are very application-specific, and are subject to change on the Porter templates. For example, if you have a websocket application that does not handle dropped connections gracefully, you may be inclined to set your read/write timeout to be much higher than what is standard (~30 seconds). For those coming from Heroku, the Heroku router enforces a write timeout of 30 seconds, and will keep a connection alive if a single byte is sent within a 55 second window.
On Porter, you can configure your own read/write timeouts by adding annotations:
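For example, the following annotations raise the send and read timeouts to 30 minutes while keeping a short connect timeout (the values here are illustrative, not recommendations — tune them to your application):

```yaml
nginx.ingress.kubernetes.io/proxy-connect-timeout: "30"
nginx.ingress.kubernetes.io/proxy-send-timeout: "1800"
nginx.ingress.kubernetes.io/proxy-read-timeout: "1800"
```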
For an explanation of these values:

| Annotation | Description |
| --- | --- |
| `nginx.ingress.kubernetes.io/proxy-connect-timeout` | The timeout for NGINX to establish a connection with your application. |
| `nginx.ingress.kubernetes.io/proxy-send-timeout` | The timeout for NGINX to transmit a request to your application. |
| `nginx.ingress.kubernetes.io/proxy-read-timeout` | The timeout for NGINX to read a response from your application. |

It is thus recommended to set these values with the ordering `proxy-connect-timeout <= proxy-send-timeout <= proxy-read-timeout`.
Client Max Body Size
If you are getting undesired `413 Request Entity Too Large` errors, you can increase the maximum size of the client request by setting the NGINX field `client_max_body_size`. You can do this by adding the following annotation:
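A sketch of this annotation, using the ingress-nginx `proxy-body-size` key (which sets `client_max_body_size` in the generated NGINX config):

```yaml
nginx.ingress.kubernetes.io/proxy-body-size: "8m"
```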
This will set the maximum client request body size to 8 megabytes.
To learn more about NGINX units, see the NGINX documentation on configuration file measurement units.
If you are occasionally getting 500-level errors on certain endpoints, your application may be sending response headers that are too large for NGINX to process. You can try increasing the maximum response header size by setting something like the following (the ingress-nginx default is `4k`):
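For example, using the ingress-nginx `proxy-buffer-size` annotation (`16k` is an illustrative value; pick a size large enough for your headers):

```yaml
nginx.ingress.kubernetes.io/proxy-buffer-size: "16k"
```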
Preserving Client IP Addresses
Note that the preserved client IP will be accessible in the `X-Forwarded-For` header if you implement the configuration below.
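As a minimal sketch of reading that header in your application (the `headers` dict and `client_ip` helper are hypothetical; the convention shown — the first entry in a comma-separated `X-Forwarded-For` chain is the original client — is standard):

```python
def client_ip(headers):
    """Return the original client IP from X-Forwarded-For, or None if absent."""
    # X-Forwarded-For may carry a chain: "client, proxy1, proxy2".
    # The original client IP is the first entry.
    xff = headers.get("X-Forwarded-For", "")
    first = xff.split(",")[0].strip()
    return first or None
```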
Changing this configuration may result in a few minutes of downtime. It is recommended to set up client IP preservation before the application is live, or to update it during a maintenance window. For more information, see this GitHub issue.
You will need to update your NGINX configuration to proxy external IP addresses through to Porter. In the `ingress-nginx` application, you'll be modifying the following Helm values:

```yaml
controller:
  config:
    use-proxy-protocol: "true" # <-- CHANGE
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*" # <-- CHANGE
```
Update: December 10, 2021
Note that clusters provisioned on Amazon EKS after December 10, 2021 will have support for proxying external IPs configured by default.
You must have a health check endpoint for your application. This endpoint must return a `200 OK` status when it is pinged.
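A health check endpoint can be as simple as a route that returns `200 OK`. A minimal sketch as a WSGI app (the `/healthz` path and the `app` name are examples; any framework works):

```python
def app(environ, start_response):
    """Minimal WSGI app exposing a /healthz health check endpoint."""
    if environ.get("PATH_INFO") == "/healthz":
        # Health checks only need a 200 OK response; the body is irrelevant.
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [b"ok"]
    start_response("404 Not Found", [("Content-Type", "text/plain")])
    return [b"not found"]
```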
On Porter clusters provisioned through GCP, traffic flows through a regional TCP load balancer by default. These load balancers do not support the PROXY protocol (only global TCP load balancers or regional/global HTTP(S) load balancers support it), so the client IP cannot be accessed through the default load balancer. To get client IP addresses to your applications, you must create a new load balancer with a custom IP address. This guide will show you how to do that.
- You must first create a static global IP address in the GCP console. You can do this by navigating to External IP addresses (VPC Network > External IP Addresses) and clicking "Reserve External Address". Name this address something like `porter-ip-address` and select "Global" for the type:
Copy the created IP address to the clipboard.
In your DNS provider, configure a custom domain to point to that IP address by creating an A record with the IP address as its value. Check that the domain points to the IP address with `nslookup <domain>`; the address in the response should be the IP address you just created.
Install an HTTPS issuer on the Porter dashboard by going to Launch > Community Addons > HTTPS Issuer. Toggle the checkbox Create GCE Ingress. If you have already installed the HTTPS issuer, you will have to delete your current issuer and create a new one.
- Create the web service by going to the Porter dashboard and navigating to Launch > Web service. Link up your source, and then configure the following three settings:
Toggle "Configure Custom Domain" at the bottom of the "Main" tab, and add your custom domain.
Go to the "Advanced" tab. In the "Ingress Custom Annotations" section, add the following three parameters:
```yaml
kubernetes.io/ingress.global-static-ip-name: porter-ip-address # IMPORTANT: replace this with the name of your static IP address!
```
It should look something like this:
- Still in the "Advanced" tab, you must set up a custom health check at an application endpoint. This is set to `/healthz` by default, but you can choose whichever path you'd like. This endpoint must return a `200 OK` status when it is pinged.
- Click "Deploy". It will take 10-15 minutes for the load balancer to be created and the certificates to be issued.