
Karpenter vs. Kubernetes Cluster Autoscaler

 

What is Karpenter?

Karpenter is an open-source, high-performance Kubernetes cluster autoscaler that provisions the right kind of compute resources for your cluster in response to changing application load.

With Karpenter running in your cluster, you can improve efficiency and reduce the cost of running your workloads.

Basically, once you configure Karpenter in your cluster, it watches for pods that the scheduler has marked unschedulable. It evaluates their requirements, such as node selectors and resource requests, and provisions nodes that meet the requirements of the workload.
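For example, a pod like the sketch below (names and image are illustrative) would stay Pending if no existing node satisfies its node selector and resource requests; these are exactly the fields Karpenter reads when deciding what kind of node to provision:

```yaml
# Illustrative pod: the nodeSelector and resource requests below are
# the kind of constraints Karpenter evaluates before provisioning a node.
apiVersion: v1
kind: Pod
metadata:
  name: demo-app            # hypothetical name
spec:
  nodeSelector:
    kubernetes.io/arch: amd64
  containers:
    - name: app
      image: nginx          # placeholder image
      resources:
        requests:
          cpu: "2"          # needs 2 vCPUs...
          memory: 4Gi       # ...and 4 GiB of memory on one node
```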

It also helps by terminating nodes that are no longer needed. Moreover, all of this node autoscaling happens automatically for you once the setup is done.

Karpenter manages each instance directly, without using node groups. This allows it to retry within seconds when capacity is not available, and to take advantage of the many instance types offered by the cloud platform.

Example: using Spot instances and a variety of instance types (compute-optimized, on-demand, etc.) helps schedule your workload efficiently and avoids unexpected cloud bills, too.
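This flexibility can be expressed in a Karpenter NodePool. Below is a minimal sketch (field names follow Karpenter's NodePool API; the instance types, limits, and the referenced EC2NodeClass name are illustrative) that allows both Spot and On-Demand capacity across several instance types:

```yaml
# Illustrative Karpenter NodePool: lets Karpenter pick from Spot or
# On-Demand capacity and several instance types when provisioning nodes.
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default                  # assumes an EC2NodeClass named "default"
      requirements:
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["spot", "on-demand"]   # prefer Spot, fall back to On-Demand
        - key: node.kubernetes.io/instance-type
          operator: In
          values: ["c5.large", "c5.xlarge", "m5.large"]  # example types
  limits:
    cpu: "100"                         # cap total provisioned CPU for this pool
```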

What is Kubernetes Cluster Autoscaler?

Kubernetes Cluster Autoscaler is a utility that automatically adjusts the number of nodes in a cluster: it adds nodes when pods fail to schedule due to insufficient capacity, and removes nodes when utilization metrics show they are underused and their pods can be rescheduled onto other nodes. It is used to achieve high availability.

It works automatically, so no manual node creation is required whenever the workload needs new nodes.
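Unlike Karpenter, Cluster Autoscaler scales predefined node groups between fixed bounds. A sketch of a typical container spec fragment from its Deployment (the node group name, version tag, and sizes are illustrative; the flags themselves are real Cluster Autoscaler options):

```yaml
# Fragment of a cluster-autoscaler Deployment: scales one node group
# between 1 and 10 nodes on AWS.
containers:
  - name: cluster-autoscaler
    image: registry.k8s.io/autoscaling/cluster-autoscaler:v1.29.0  # version illustrative
    command:
      - ./cluster-autoscaler
      - --cloud-provider=aws
      # Format is min:max:node-group-name
      - --nodes=1:10:my-node-group
      # Nodes below this utilization are candidates for scale-down
      - --scale-down-utilization-threshold=0.5
```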

How does Karpenter work?

Karpenter works alongside the Kubernetes scheduler by observing incoming pods. As long as enough capacity is present, the Kubernetes scheduler works normally and places the pods. When pods cannot be scheduled within the cluster's current capacity, Karpenter comes into the picture: it works directly with the cloud compute service (for example, Amazon EC2) to provision the right node instances and schedule the workloads on them. Once pods are removed or rescheduled, it checks whether the nodes can be terminated.

Thank you

Tripura Kant

https://www.linkedin.com/in/tripurakant/
