Resources
Each Kubernetes cluster consists of underlying nodes. Nodes are basic compute instances (e.g. EC2 instances on AWS) on which your applications run. One Kubernetes cluster manages multiple nodes, and each node can run multiple applications. Kubernetes intelligently schedules your applications across these nodes based on how many resources each node has available.

An application you deploy on Porter consists of one or more pods. Each pod is a replica of your application, and incoming requests are automatically distributed across all pods of your application. You can scale each pod vertically by allocating more resources to it, or scale your application horizontally by adding more pods.

A useful analogy is to think of nodes as buckets in which your pods are placed. You can fill an entire bucket with one big pod, or run several smaller pods on a single node. You can assign as many resources as you want to each pod, and Kubernetes will intelligently distribute the pods across the nodes. The only rule is that you cannot allocate more resources to a single pod than what is available on a single node - a bucket simply cannot fit a pod that is larger than itself.

Takeaway: You cannot allocate more resources to a single pod than what is available on a single node. The instance type you choose when provisioning the cluster should have at least double the amount of resources you want to assign to a single pod.
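To make the bucket rule concrete, here is a minimal Python sketch (not Porter's or Kubernetes' actual scheduling logic) that estimates how many pods of a given size fit on one node. The node sizes, pod requests, and system-reserved overhead values are hypothetical placeholders; the real allocatable capacity of a node depends on the instance type and the kubelet's reservations.

```python
# A minimal sketch of the "bucket" rule above, using hypothetical node sizes
# and pod resource requests. Not the actual Kubernetes scheduler.

def pods_per_node(node_cpu_m, node_mem_mi, pod_cpu_m, pod_mem_mi,
                  system_reserved_cpu_m=200, system_reserved_mem_mi=500):
    """Return how many pods with the given requests fit on one node.

    Resources use common Kubernetes units: millicores (m) for CPU and
    mebibytes (Mi) for memory. The system-reserved defaults are hypothetical
    placeholders for OS/kubelet overhead that reduce what is allocatable.
    """
    allocatable_cpu = node_cpu_m - system_reserved_cpu_m
    allocatable_mem = node_mem_mi - system_reserved_mem_mi

    # The bucket rule: a single pod cannot request more than one node offers.
    if pod_cpu_m > allocatable_cpu or pod_mem_mi > allocatable_mem:
        return 0

    # A pod needs both its CPU and memory requests satisfied on the same node,
    # so the tighter of the two dimensions determines the fit.
    return min(allocatable_cpu // pod_cpu_m, allocatable_mem // pod_mem_mi)


# Example: a node with 2 vCPU (2000m) and 4 GiB (4096 Mi) of memory.
print(pods_per_node(2000, 4096, 900, 1500))   # 2 pods fit per node
print(pods_per_node(2000, 4096, 2500, 1500))  # 0 - the pod is bigger than the bucket
```

As the second example shows, a pod that requests more than a node can offer never gets scheduled, which is why the instance type should comfortably exceed the resources you plan to give any single pod.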