10 Books Every DevOps Engineer Should Read

Introduction:

DevOps is a set of practices that combines software development (Dev) and IT operations (Ops) to shorten the systems development life cycle and provide continuous delivery with high quality. DevOps engineers are responsible for implementing and maintaining these practices.

There are many great books available on DevOps. Here are 10 of the best books that every DevOps engineer should read:

  1. The DevOps Handbook: How to Create World-Class Agility, Reliability, and Security in Technology Organizations by Gene Kim, Jez Humble, Patrick Debois, John Allspaw, and John Willis. This book is considered to be the "DevOps bible" and is a comprehensive guide to the principles and practices of DevOps.
  2. The Phoenix Project: A Novel about IT, DevOps, and Helping Your Business Win by Gene Kim, Kevin Behr, and George Spafford. This novel illustrates the benefits of DevOps through the eyes of a fictional IT manager.

Kubernetes Workloads (Deployments, Jobs, CronJobs, etc.)

 

Deployments

In Kubernetes, a Deployment provides declarative updates for Pods and ReplicaSets. It is essentially a tool for managing how Pods behave inside the cluster.

Once we declare the desired state in a YAML file, the Deployment controller continuously works to make the actual state match the desired state.
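As a sketch, a minimal Deployment manifest declaring a desired state of three replicas might look like this (the name, labels, and image are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3              # desired state: three identical Pods
  selector:
    matchLabels:
      app: nginx
  template:                # Pod template the ReplicaSet will stamp out
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.25
          ports:
            - containerPort: 80
```

Applying this with `kubectl apply -f deployment.yaml` lets the Deployment controller create the ReplicaSet and keep three Pods running.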

Linux Services KodeKloud Engineer Task Success

As per the details shared by the development team, the new application release has some backend dependencies. Some packages/services need to be installed on all app servers under Stratos Datacenter. As per the requirements, please perform the following steps:


a. Install cups package on all the application servers.

b. Once installed, make sure it is enabled to start during boot.





ssh tony@stapp01   (then enter the password)

sudo su -

sudo yum install cups -y

sudo systemctl start cups

sudo systemctl enable cups

sudo systemctl status cups

Repeat the same commands on all the remaining app servers.
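Instead of repeating the steps by hand, the per-server work can be sketched as a small shell loop. The hostnames (stapp01–stapp03) and the ssh user `tony` are assumptions based on the task description; the `ssh` line is commented out so the sketch can be dry-run safely.

```shell
#!/bin/sh
# deploy_cups: report (and optionally run) the install/enable steps for one host.
# Uncomment the ssh line to actually execute against real servers.
deploy_cups() {
  host="$1"
  echo "configuring cups on ${host}"
  # ssh "tony@${host}" 'sudo yum install -y cups && sudo systemctl enable --now cups'
}

# Assumed app server hostnames from the Stratos Datacenter task.
for h in stapp01 stapp02 stapp03; do
  deploy_cups "$h"
done
```

Note that `systemctl enable --now` combines the separate `start` and `enable` steps above into one command.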



Notes

Karpenter Vs Kubernetes Cluster Autoscaler

 

What is Karpenter?

Karpenter is an open-source, high-performance Kubernetes cluster autoscaler that provisions the right kind of compute resources for your cluster in response to changing application load.

Using Karpenter in your cluster can improve the efficiency and reduce the cost of running your workloads.

Once you configure Karpenter in your cluster, it observes pods that have been marked unschedulable, evaluates their requirements based on parameters like node selectors and resource requests, and provisions nodes that meet the requirements of the workload.

It also helps by terminating nodes that are no longer needed. Moreover, all of this node autoscaling happens automatically once the setup is done.

Karpenter manages each instance directly, without using node groups. This allows it to retry within seconds when capacity is unavailable, and to leverage the many different instance types available on the cloud platform.

Example: mixing spot instances, on-demand instances, and varying compute-optimized instance types helps schedule your workload efficiently and saves on unexpected cloud bills too.
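As a sketch, the spot/on-demand mixing described above is configured through a Karpenter NodePool (the v1 API; older releases used a Provisioner resource instead). The name, requirements, and limits here are illustrative, and the nodeClassRef assumes an AWS setup:

```yaml
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      requirements:
        # Let Karpenter choose between spot and on-demand capacity.
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["spot", "on-demand"]
      nodeClassRef:            # cloud-specific node configuration (AWS assumed)
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default
  limits:
    cpu: "100"                 # cap the total CPU this pool may provision
```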

What is Kubernetes Cluster Autoscaler?

Kubernetes Cluster Autoscaler is a utility that automatically adjusts the number of nodes in a cluster: it adds nodes when pods fail to schedule for lack of resources, and removes nodes based on node utilization metrics. It is used to achieve high availability.

It works automatically; no manual node creation is required whenever the workload needs new nodes.

How does Karpenter work?

Karpenter works alongside the Kubernetes scheduler by observing incoming pods. As long as enough capacity is present, the Kubernetes scheduler works normally and schedules the pods. When pods cannot be scheduled within the cluster's current capacity, Karpenter comes into the picture: it works directly with the cloud compute service (for example, Amazon EC2) to provision the right node instances and schedule the workloads on them. Once pods are removed or rescheduled, it evaluates whether the now-idle nodes can be terminated.

Thank you

Tripura Kant

https://www.linkedin.com/in/tripurakant/

DevOps Series - Day 2 - Become a Linux Pro


Pre-requisites:


Oracle VM VirtualBox

Download the ISO image of CentOS 7


About Linux OS

  • Just like Windows, iOS, and macOS, Linux is an operating system. 
  • An operating system is software that manages all of the hardware resources associated with your desktop or laptop. To put it simply, the operating system manages the communication between your software and your hardware. Without the operating system (OS), the software wouldn’t function.
  • Linux is an open-source operating system: its source code is freely available to everyone.
  • Linux provides security through its permission-based model.
  • Linux is highly customizable.
  • It is free to use, and the overall cost of running Linux is low.
  • It has large community support.
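The permission model behind the security point above can be seen directly in the shell. This is a small sketch; the filename is illustrative:

```shell
# Create a file and restrict it to the owner only.
touch secret.txt
chmod 600 secret.txt     # owner read/write; no access for group or others
ls -l secret.txt         # permission string starts with -rw-------
```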