Resource Management in Kubernetes
Updated: Jul 6, 2021
There are two main resource types in Kubernetes - CPU and memory.
CPU is measured in cores. It can be expressed as a decimal or in the "xm" (millicore) format; for example, 0.1 and 100m are the same amount, where m stands for millicores.
Memory is measured in bytes and is usually expressed with binary suffixes such as xMi (mebibytes) and xGi (gibibytes), e.g. 128Mi or 1Gi.
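As a quick illustration, here are a few value fragments (not a complete manifest) showing equivalent ways of writing the same quantities:

  cpu: "0.1"        # one tenth of a core
  cpu: "100m"       # the same amount, written in millicores
  memory: "128Mi"   # 128 mebibytes
  memory: "1Gi"     # one gibibyte (1024 Mi)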
Resources are specified on containers, not on pods. There are two settings for resource management in Kubernetes.
Requests - the amount the container is guaranteed to get. The scheduler only places the pod on a node that can provide this amount; if it cannot find such a node, the pod is not scheduled.
Limits - the upper bound placed on CPU and memory. The container will never be allowed to use more than this.
Requests can never be higher than limits.
...
containers:
- name: mycontainer1
  image: my_image:v3
  resources:
    requests:
      memory: "64Mi"
      cpu: "200m"
    limits:
      memory: "128Mi"
      cpu: "600m"
- name: mycontainer2
  image: my_image:v2
  resources:
    requests:
      memory: "32Mi"
      cpu: "100m"
    limits:
      memory: "64Mi"
      cpu: "300m"
In the above example, the total requests and limits for the pod are as below.
Request
total cpu: 300 millicores
total memory: 96Mi
Limit
total cpu: 900 millicores
total memory: 192Mi
If you do not specify a CPU limit, the container has no upper bound and can use all of the available CPU on its node. If the namespace has a default limit (set through a LimitRange), the container inherits that default.
If you specify only a limit and no request, Kubernetes sets the request equal to the limit.
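As a small sketch of that behaviour (container and image names are made up), a container that declares only limits ends up with matching requests:

containers:
- name: limit-only
  image: my_image:v1
  resources:
    limits:
      memory: "128Mi"
      cpu: "500m"
    # No requests block: Kubernetes copies the limits,
    # so the effective requests are also 128Mi and 500m.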
Resource Quota and LimitRanges at the namespace level
ResourceQuota
A ResourceQuota sets aggregate limits for all containers in a namespace; it is not specific to a node.
"requests.cpu" is the total CPU that can be requested by all containers in the namespace,
and "requests.memory" is the total memory that can be requested by all containers in the namespace.
Similarly, "limits.cpu" is the total CPU limit across all containers in the namespace,
and "limits.memory" is the total memory limit across all containers in the namespace.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: asd
spec:
  hard:
    requests.cpu: 500m
    requests.memory: 256Mi
    limits.cpu: 1000m
    limits.memory: 1024Mi
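To use it, apply the manifest to the target namespace and then check consumption against the quota. The file and namespace names below are just placeholders:

kubectl apply -f quota.yaml -n my-namespace
kubectl describe resourcequota asd -n my-namespace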
LimitRange
Unlike a ResourceQuota, a LimitRange applies to individual containers rather than to the total for all containers in a namespace. It provides control over container sizes by setting minimum and maximum values, ensuring every container gets reasonable limits and preventing the creation of extremely large or extremely small containers.
default sets the default CPU and memory limits for containers that do not define their own, acting as a control on pods without limits.
defaultRequest sets the default CPU and memory requests if they are not specified for a container in a pod.
min and max set bounds on what an individual container may request or be limited to.
apiVersion: v1
kind: LimitRange
metadata:
  name: mylimitrange
spec:
  limits:
  - default:
      cpu: 500m
      memory: 256Mi
    defaultRequest:
      cpu: 250m
      memory: 256Mi
    min:
      cpu: 256m
      memory: 128Mi
    max:
      cpu: 1000m
      memory: 1Gi
    type: Container
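As a sketch of how these defaults behave (container and image names are made up): if the container below is created in a namespace that has the LimitRange above and declares no resources, the defaults are filled in at admission time.

containers:
- name: no-resources-set
  image: my_image:v1
  # resources omitted on purpose: the LimitRange supplies
  # limits of cpu 500m / memory 256Mi and
  # requests of cpu 250m / memory 256Mi.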
At the time of scheduling a pod, the scheduler checks nodes to determine whether there is enough capacity. It compares the pod's resource requests against each node's allocatable capacity; the actual CPU and memory usage may be lower, but scheduling on requests leaves room for load later. If a node does not have enough room, the scheduler checks the other nodes, and if none fits, the pod remains in the Pending state until capacity becomes available.
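To see why a pod is stuck in Pending, or how much of a node's capacity is already spoken for, the standard kubectl commands below are useful (names are placeholders):

kubectl describe pod <pod-name>     # the Events section shows scheduling failures and their reasons
kubectl describe node <node-name>   # shows Allocatable capacity and the requests/limits already allocated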
Conclusion
Resource management is a very important aspect of Kubernetes. By setting requests and limits on containers, and ResourceQuotas and LimitRanges on namespaces, CPU and memory are managed sensibly and applications are scheduled properly based on availability.