Porter simplifies database provisioning by automatically setting up all networking components between your cluster and Porter-provisioned databases.
Datastores are currently supported on AWS only. GCP and Azure datastore support is on the roadmap.

AWS Architecture

Datastores are provisioned in a VPC that is separate from the VPC of your clusters. Porter automatically:
  • Peers the datastore VPC to your cluster VPC
  • Configures subnets, routing tables, and security groups
  • Ensures traffic flows exclusively through private subnets
This architecture keeps your database secure and accessible only from applications running in your cluster.
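If you want to inspect the peering connection yourself (entirely optional; Porter creates and manages it for you), a small boto3 sketch can list the peering connections visible in your AWS account. The region below is a placeholder.

```python
# Optional inspection only: Porter manages this peering connection for you.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

for peering in ec2.describe_vpc_peering_connections()["VpcPeeringConnections"]:
    requester = peering["RequesterVpcInfo"]["VpcId"]
    accepter = peering["AccepterVpcInfo"]["VpcId"]
    # A healthy Porter-managed peering shows status code "active"
    print(peering["VpcPeeringConnectionId"], requester, accepter, peering["Status"]["Code"])
```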

Setup

1. Create a datastore
Navigate to Add-ons in your Porter dashboard and select the datastore type you want to create (Postgres or Redis).
2. Configure settings
Configure your datastore settings including instance size, storage, and high availability options.
3. Connect to your application
Porter creates an environment group with the connection details. Inject this environment group into your applications.
4. Deploy
Deploy your application. It can now connect to the database using the injected environment variables.
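The exact keys in the environment group depend on your datastore; the sketch below assumes hypothetical variable names such as POSTGRES_HOST and POSTGRES_PASSWORD and shows how application code might open a connection with psycopg2.

```python
import os

import psycopg2  # pip install psycopg2-binary

# Placeholder variable names; check the environment group created for your
# datastore for the exact keys it injects.
conn = psycopg2.connect(
    host=os.environ["POSTGRES_HOST"],
    port=int(os.environ.get("POSTGRES_PORT", "5432")),
    dbname=os.environ["POSTGRES_DB"],
    user=os.environ["POSTGRES_USER"],
    password=os.environ["POSTGRES_PASSWORD"],
)

with conn, conn.cursor() as cur:
    cur.execute("SELECT version();")
    print(cur.fetchone())
```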

Connecting from your laptop

To connect to a datastore from your local machine, use the Porter CLI:
porter datastore connect my-datastore
psql -h localhost -p <port> -U <username> -l
If your cluster's control plane access is set to private, this command requires Tailscale VPN to be configured for your cluster.
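For a Redis datastore you can use the same tunnel with a Redis client instead of psql. The sketch below assumes the tunnel forwards the Redis port in the same way; the local port and password are placeholders.

```python
import redis  # pip install redis

# Placeholder port and credentials; use the values for your datastore and the
# local port opened by `porter datastore connect`.
r = redis.Redis(host="localhost", port=6379, password="<password>")
print(r.ping())  # True if the tunnel and credentials are working
```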

Postgres

Postgres datastores can be deployed in different configurations depending on your needs:
| Configuration | Use Case | Recommended For |
| --- | --- | --- |
| In-cluster | Quick setup for development | Dev/staging environments |
| Single RDS instance (Multi-AZ) | Standard managed database | Production workloads |
| Aurora cluster (Multi-AZ) | Auto-scaling storage and enhanced failover | Production workloads with stringent high-availability requirements |

In-cluster Postgres

Deploys Postgres as a container within your cluster. This is the fastest way to get started but is not recommended for production data.

RDS Instance

Provisions a standard Amazon RDS instance with Multi-AZ deployment for automatic failover. This is the recommended option for most production workloads.

Aurora Cluster

Aurora provides:
  • Automatic storage scaling
  • Enhanced failover capabilities
  • High availability settings
You can create an Aurora datastore with a single instance or with an additional read replica.

Read Replicas

To enable a read replica, select the HA toggle when creating the datastore. With read replicas:
  • The dashboard displays connection details for both primary and replica
  • During modifications, Porter automatically fails over to the replica and promotes it
  • This minimizes downtime during these operations
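As a sketch of how an application might use both endpoints, assuming hypothetical environment variable names for the primary and reader hosts (check the datastore's environment group for the actual keys):

```python
import os

import psycopg2

def connect(host_env_var: str):
    # Placeholder variable names; check the datastore's environment group.
    return psycopg2.connect(
        host=os.environ[host_env_var],
        dbname=os.environ["POSTGRES_DB"],
        user=os.environ["POSTGRES_USER"],
        password=os.environ["POSTGRES_PASSWORD"],
    )

writer = connect("POSTGRES_HOST")         # primary endpoint: reads and writes
reader = connect("POSTGRES_READER_HOST")  # reader endpoint: read-only queries

with reader.cursor() as cur:
    cur.execute("SELECT now();")  # route read-only queries to the replica
    print(cur.fetchone())
```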

Configuration

The following table outlines the configurable fields and behaviors for each datastore type during creation and updates:
| Configuration | In-cluster | Standard RDS | Aurora |
| --- | --- | --- | --- |
| Connected cluster | Local (via K8s Service) | External (via VPC Peering) | External (via VPC Peering) |
| Region | Matches connected cluster | Matches connected cluster | Matches connected cluster |
| Database name | - | User-defined | User-defined |
| Master username | - | User-defined | User-defined |
| Postgres version | - | Postgres 12-18 | Postgres 12-18 |
| Instance type | CPU/RAM limits | All RDS-compatible instances | All Aurora-compatible classes, except serverless |
| Allocated storage | Fixed (cannot be modified) | Modifiable (increase only) | Managed (auto-scales) |
| Snapshot restore | - | From RDS snapshot | From Aurora snapshot |
| Cloning | - | - | Fast Database Cloning |

Redis

Redis datastores can be provisioned in different configurations:
| Configuration | Use Case | Recommended For |
| --- | --- | --- |
| In-cluster | Quick setup for development | Dev/staging environments |
| Elasticache replication group | Managed cache with automatic failover | Production workloads |

In-cluster Redis

Deploys Redis as a container within your cluster. This is the fastest way to get started but is not recommended for production data.

Elasticache Replication Group

Provisions an Amazon Elasticache replication group with:
  • Primary and reader replica by default
  • Automatic failover if the primary fails
  • Minimal downtime during modifications
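A minimal sketch of an application connecting to the replication group, assuming hypothetical environment variable names from the datastore's environment group:

```python
import os

import redis

# Placeholder variable names; check the datastore's environment group.
r = redis.Redis(
    host=os.environ["REDIS_HOST"],
    port=int(os.environ.get("REDIS_PORT", "6379")),
    password=os.environ.get("REDIS_PASSWORD"),
    ssl=True,  # drop if in-transit encryption is not enabled
)
r.set("healthcheck", "ok")
print(r.get("healthcheck"))
```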

Monitoring

You can monitor the performance of your database from the Porter dashboard. Metrics are available in the “Metrics” tab of each datastore. The following metrics are currently displayed:
  • CPU utilization
  • RAM utilization
  • Storage capacity

Disaster Recovery

Porter supports the following disaster recovery options for RDS and Aurora datastores.

Restoring from a snapshot

You can restore a snapshot to a new datastore, and the datastore will be accessible from the applications running in your cluster. This can significantly reduce the time to recovery during an emergency.
1. Enable
Create a new datastore in the dashboard. In the creation form, click on “Enable snapshot restore”.
2. Confirm
Select one of the available snapshots to restore, or enter the snapshot ID manually. Only snapshots in the same region as the database are listed.
3. Create
Create the datastore. This will start the process of restoring the snapshot.
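If you need to look up a snapshot ID outside the dashboard, a small boto3 sketch (independent of Porter; the region is a placeholder) can list the snapshots available in the target region:

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")  # placeholder region

# Instance snapshots; for Aurora, use rds.describe_db_cluster_snapshots() instead.
for snap in rds.describe_db_snapshots()["DBSnapshots"]:
    print(snap["DBSnapshotIdentifier"], snap["SnapshotCreateTime"], snap["Status"])
```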

Cloning an Aurora cluster

Aurora clusters support cloning an existing cluster using fast-cloning. This process is faster than restoring from a snapshot, and can be used to recover from an emergency, or to quickly create copies of your database for experiments.
1. Enable
Create a new datastore in the dashboard. In the creation form, click on “Enable database cloning”.
2. Confirm
Select one of the existing Aurora clusters, or enter the cluster identifier manually. Only clusters in the same region as the new datastore are listed.
3. Create
Create the datastore. This will start the process of cloning the cluster.
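For reference, Aurora fast cloning is a copy-on-write restore of an existing cluster. Porter performs the operation for you; the equivalent AWS API call, shown here only to illustrate what happens under the hood, looks roughly like this (identifiers and region are placeholders):

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")  # placeholder region

# Fast cloning = copy-on-write restore of an existing cluster.
rds.restore_db_cluster_to_point_in_time(
    SourceDBClusterIdentifier="source-aurora-cluster",  # placeholder
    DBClusterIdentifier="cloned-aurora-cluster",        # placeholder
    RestoreType="copy-on-write",
    UseLatestRestorableTime=True,
)
```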

Compliance

If the compliance feature is enabled for your project, Porter automatically configures monitoring alarms for RDS and Aurora datastores:
  • CPU utilization alarms
  • Memory utilization alarms
  • Storage capacity alarms
These alarms help ensure your databases remain healthy and within operational thresholds.
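As an illustration of what such an alarm looks like (the names and thresholds below are placeholders, not Porter's actual settings), a CPU utilization alarm on an RDS instance can be defined with a single CloudWatch call:

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")  # placeholder region

cloudwatch.put_metric_alarm(
    AlarmName="my-datastore-cpu-high",  # placeholder name
    Namespace="AWS/RDS",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "my-datastore"}],
    Statistic="Average",
    Period=300,          # seconds
    EvaluationPeriods=3,
    Threshold=80,        # percent; placeholder threshold
    ComparisonOperator="GreaterThanThreshold",
)
```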

Roadmap

The following features are not yet supported natively in Porter; reach out to the support team for help setting them up.
  • Connection pooling
  • External access to datastores