Your application is now successfully deployed on your very own Kubernetes cluster. Although the complexity has been abstracted away from you, your application will be able to take advantage of the full power of Kubernetes that is running under the hood.
Kubernetes clusters provisioned by Porter are architected to be highly available and scalable. The infrastructure is battle-tested in real-world situations and is production-grade out of the box, equipping you with the kind of reliability and scalability that typically only larger companies have the resources to attain.
You can configure your applications from the dashboard and scale them effortlessly to serve millions of requests per day. Below are the most common operations.
Setting the Start Command and Port
You can configure your application to run any start command by simply editing it from the Porter Dashboard. If your application was built using a Dockerfile, the start command you set here will override the CMD directive specified in your Dockerfile.
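As an illustration, a Dockerfile might end with a CMD directive like the one below; a start command entered in the Porter Dashboard takes precedence over it. The base image, app, and commands here are examples, not Porter requirements:

```dockerfile
# Illustrative Dockerfile for a Node.js app (names and commands are examples).
FROM node:18-alpine
WORKDIR /app
COPY . .
RUN npm install
# Default start command; a start command set in the Porter Dashboard
# overrides this CMD at runtime.
CMD ["npm", "start"]
```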
If your application was built using buildpacks, please make sure that the command launcher is prepended to the start command you are trying to run. This is a requirement of Cloud Native Buildpacks.
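For instance, if your buildpack-built application would normally be started with npm start (a hypothetical command for a Node.js app), you would enter the following in the Dashboard:

```shell
# Hypothetical example: prepend `launcher` to your usual start command
launcher npm start
```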
If you’re deploying a web application, you must also set the port that Porter will expose to traffic. This should be set to the port number on which your application process listens.
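Under the hood, the port you set corresponds to the container port that Kubernetes routes traffic to. The sketch below shows the general shape of the underlying Service object; the names and values are illustrative, not Porter's actual generated manifest:

```yaml
# Illustrative sketch of the Kubernetes Service behind the Port setting.
apiVersion: v1
kind: Service
metadata:
  name: my-web-app        # illustrative name
spec:
  selector:
    app: my-web-app
  ports:
    - port: 80            # port the Service exposes to traffic
      targetPort: 8080    # must match the port your process listens on
```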
From the Resources tab, you can assign vCPU and RAM to each of your applications as granularly as you desire, down to 1 MB of RAM and 0.01 vCPU. The only constraint on the amount of resources you can assign is an upper limit imposed by the machine type of the EC2 instances that comprise your Kubernetes cluster. By default, your cluster is provisioned with the
t3.xlarge instance, which means you can assign up to 4 vCPU and 16 GB of RAM to a single application.
This is discussed in more detail in the previous Provisioning section.
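In Kubernetes terms, the vCPU and RAM you assign map to the container's resource requests and limits. A hedged sketch of the equivalent spec fragment, with illustrative values:

```yaml
# Illustrative container resource fields; the exact values Porter
# generates depend on what you set in the Resources tab.
resources:
  requests:
    cpu: "250m"       # 0.25 vCPU
    memory: "512Mi"
  limits:
    cpu: "1"          # 1 vCPU
    memory: "1Gi"
```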
You can also set the number of replicas you’d like to run for your application from this tab.
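The replica count corresponds to the replicas field of the Kubernetes Deployment that runs your application. A rough sketch, with illustrative names and image:

```yaml
# Illustrative Deployment showing where the replica count lands;
# names and image are examples, not Porter's generated manifest.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-web-app
spec:
  replicas: 3               # three identical copies of your application
  selector:
    matchLabels:
      app: my-web-app
  template:
    metadata:
      labels:
        app: my-web-app
    spec:
      containers:
        - name: my-web-app
          image: my-registry/my-web-app:latest   # illustrative image
```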