In this video, Kelsey Hightower, Principal Engineer at Google, talks about the evolution and functionality of Kubernetes.
He also looks at the current state of managing Kubernetes-based infrastructure collaboratively, and at how GitOps helps in this process.
Role of GitOps
At 11:32 Kelsey begins the GitOps live demo where he describes how the app gets deployed to the proper environments.
He states that GitOps is a way to do Kubernetes cluster management and application delivery. It acts as an operating model that builds on Kubernetes' convergence properties to deliver Kubernetes-based infrastructure.
Once a developer writes code, it goes into a version control system, and artifacts are then produced in an ideal state. He adds that Kubernetes requires a container as the price of admission. The container registry component is responsible for normalizing your workflow across all applications, and building Docker images from Dockerfiles can be automated.
At 18:10 Kelsey explains that in GitHub, developers can set up actions. Whenever a developer pushes to a repository or tags a branch in the repository, a set of actions is kicked off. Developers can rely on GitHub Actions to take a specific commit through the build process and build a Docker image. Once built, the image is pushed to the container registry.
All of this happens in the background whenever a commit is pushed or a branch is tagged. Each tag produces a container image. These images can be pulled locally by developers onto their laptops, or by platforms such as Kubernetes or a serverless platform that supports containers.
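The tag-triggered build described above could look roughly like the following GitHub Actions workflow. This is a hedged sketch, not the workflow from the talk: the project ID, image name, and tag pattern are placeholders.

```yaml
# Hypothetical workflow: build a Docker image for the tagged commit
# and push it to a container registry. Names are illustrative.
name: build-and-push
on:
  push:
    tags:
      - 'v*'          # run whenever a version tag is pushed
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # GITHUB_REF_NAME holds the tag name, so each tag yields one image
      - name: Build image
        run: docker build -t gcr.io/my-project/my-app:${GITHUB_REF_NAME} .
      - name: Push image
        run: docker push gcr.io/my-project/my-app:${GITHUB_REF_NAME}
```

In a real setup a registry login step would precede the push; it is omitted here to keep the sketch minimal.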
Declarative YAML configuration files
At 20:25 Kelsey mentions that the application is now ready to be released to Kubernetes. The next step is to write YAML files. He adds that GitOps requires developers to precisely describe the desired state of the system by means of a declarative specification for each environment. Using a robust editor such as VS Code helps when writing YAML deployment files.
The main idea of the YAML file is to articulate what you want the system to do. Developers can specify the version of the image to be deployed, as well as memory and CPU limits. There are tools that help generate these configuration files, and the files can be pushed to any Kubernetes cluster.
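A declarative deployment file of the kind described above might look like this minimal sketch; the application name, image reference, and resource values are illustrative assumptions, not taken from the demo.

```yaml
# Minimal Kubernetes Deployment: pins an image version and declares
# memory/CPU requests and limits. All names and values are examples.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: gcr.io/my-project/my-app:v1.0.1   # exact version to deploy
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 250m
              memory: 256Mi
```

Because the file only states the desired end state, the same manifest can be applied to any cluster and Kubernetes converges toward it.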
Splitting the configuration repository from the source code repository is recommended. The configuration repository holds all the data artifacts and configuration files that can be consumed by other systems.
At 23:15 Kelsey says that configuration files should be pushed to their targets only when they are ready. In practice, different Kubernetes clusters may run different versions of the config at the same time.
At 23:55 he elaborates on the pull model used in this scenario. He uses the Config Sync tool, which links the clusters to the configuration repository. In addition, you can leverage multiple branches and deploy them to different clusters of your choice.
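Linking a cluster to a configuration repository with Config Sync is done declaratively; a sketch along the lines of Config Sync's RootSync resource is shown below. The repository URL, branch, and auth method are assumptions for illustration, and the talk may have used an earlier configuration format.

```yaml
# Hypothetical Config Sync RootSync: tells the cluster which Git
# repository and branch to pull its configuration from.
apiVersion: configsync.gke.io/v1beta1
kind: RootSync
metadata:
  name: root-sync
  namespace: config-management-system
spec:
  sourceFormat: unstructured
  git:
    repo: https://github.com/example/config-repo   # placeholder repo
    branch: main                                   # branch this cluster tracks
    dir: /
    auth: none
```

Pointing different clusters at different branches of the same repository is what enables the branch-per-cluster deployment model he mentions.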
He adds that nomos is a command-line tool that allows interaction with the Config Sync repositories. The nomos command helps derive the status of the current deployment.
Once the ‘status’ command is run, a connection is established to the clusters, and it reports the current status of the repositories they are syncing to. You can make changes to the code and check in your code changes.
After this, the build process kicks off, and you will find a new tag that is eligible for deployment. The developer is then required to make a change in the config repository. He also adds that this process of updating the config repository can be automated.
At 27:04 Kelsey explains that the config management directory is critical when releasing the change to the targets. This directory is created during cluster creation. A separate load balancer is set up to send traffic to the clusters. Each load balancer has its own IP address and its own set of domain names, and always points to only one cluster.
Once the config is in place, the Kubernetes cluster will go to the new branch of the config repository and try to update the state of the world based on the configs. The best part is that developers can keep checking in code and tagging releases, which will land in the container registry.