Declarative deployment of Cloud Native applications with Kubernetes, without compromise
July 2 — 2020
For several years, designing and developing digital Web products has led us to deploy them to different cloud providers such as Amazon Web Services, Google Cloud, and Azure.
Each of these providers offers a turnkey solution for deploying Cloud Native applications: provisioning Docker containers, configuring their environment variables, and orchestrating deployments and other management operations. We've put several projects into production and maintained them using one or another of these solutions, but for the past few years we have turned to Kubernetes.
Kubernetes is an alternative that stands out by offering numerous advantages over these solutions, without compromise.
Originally developed at Google to help manage its huge fleet of containers deployed across its data centers, Kubernetes now evolves thanks to a vibrant open-source community. It is currently used by thousands of companies around the world to do the same thing, on a scale obviously much smaller than Google's.
Benefit #1: Declarative philosophy (Infrastructure as code)
The philosophy behind Kubernetes is interesting. It's not a traditional orchestration tool that is given a list of defined tasks to perform (e.g. A → B → C). Rather, it is a set of independent, composable processes that continuously bring the system toward a desired, declared state.
Suppose we currently have two container instances that run the image foo:1.0.0. We want to replace this image with foo:2.0.0 while moving to four instances. Rather than imperatively asking Kubernetes to perform these steps sequentially:
- Find the instances currently deployed with the image foo:1.0.0;
- Deploy four new instances that run the image foo:2.0.0;
- When all the new instances are ready, stop the old ones found in the first step;
- Make the new instances available.
We declaratively express the state of the system that we want to achieve:
There should be four instances deployed with the image foo:2.0.0.
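In Kubernetes terms, that declared state typically lives in a Deployment manifest. Here is a minimal sketch of what it could look like; the resource and label names (foo-deployment, app: foo) are illustrative, and only the image and replica count come from the example above.

```yaml
# Minimal illustrative Deployment: "four instances running foo:2.0.0".
apiVersion: apps/v1
kind: Deployment
metadata:
  name: foo-deployment   # hypothetical name
spec:
  replicas: 4            # the desired number of instances
  selector:
    matchLabels:
      app: foo
  template:
    metadata:
      labels:
        app: foo
    spec:
      containers:
        - name: foo
          image: foo:2.0.0   # the desired image
```

Applying this manifest again with a different image or replica count is all it takes to express a new desired state.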
Using this declarative method, Kubernetes will autonomously decide how to bring the current state (the two instances running the image foo:1.0.0) to the declared state.
Ultimately, the steps will be very similar to those listed above, but Kubernetes will perform them automatically in the background and will do everything it can to make them as optimal as possible. There is nothing magical here, however; if you don't want Kubernetes to cause downtime during a deployment, it must be configured so it knows the state of the instances and when they are truly ready, as sketched below.
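As an illustration, a readiness probe and a rolling update strategy could be added to the Deployment sketched above so Kubernetes knows when an instance is ready before retiring an old one; the health endpoint and port here are assumptions, not part of the original example.

```yaml
# Illustrative fragment extending the Deployment above (not a standalone manifest).
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0      # never remove an old instance before a new one is ready
      maxSurge: 1            # bring up at most one extra instance at a time
  template:
    spec:
      containers:
        - name: foo
          image: foo:2.0.0
          readinessProbe:    # how Kubernetes knows an instance is "really ready"
            httpGet:
              path: /healthz # assumed health endpoint
              port: 8080     # assumed container port
            initialDelaySeconds: 5
            periodSeconds: 10
```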
In short, we don't perform orchestration operations with Kubernetes. We declare the desired state of the system, and Kubernetes performs the necessary operations, as optimally as possible, to reach that state.
Benefit #2: Portability and knowledge sharing
Kubernetes is completely agnostic of the cloud provider on which it runs. It's not a proprietary solution specific to one vendor. This portability of expertise gives us an undeniable advantage.
For a company like Mirego that often deploys and maintains dozens of Cloud Native applications in different environments, the more we can rely on a technology with this independence, the more efficient our knowledge sharing and continuous improvement become.
Choosing the cloud provider where a web product will be deployed no longer adds complexity, which allows our development teams to remain versatile rather than specializing in each provider's proprietary solution. Obviously, that doesn't mean Kubernetes isn't complex. Rather, it means the time we spend mastering its complexity benefits more of our customers, regardless of their situation.
We have even set up a boilerplate to gather our best practices in one place, and we plan to publish this project soon for the benefit of the community.
Kubernetes allows us to focus our development and maintenance efforts on building the best possible digital products, regardless of the cloud provider on which they will be deployed or the technological stack they use (Node.js, Elixir, Ruby, PHP, etc.).
Benefit #3: Managed solutions from cloud providers
Not being tied to a specific cloud provider does not mean we are left to fend for ourselves with the major providers.
Kubernetes has become an important enough reference in the DevOps community for the three major cloud providers (Amazon Web Services, Google Cloud and Azure) to now include a managed Kubernetes solution in their offered services.
These solutions are a little more expensive because of their management fees, but it means we have no machines to provision and no manual security updates to perform, which saves a significant amount of time and gives us greater confidence in the stability and security of our infrastructure.
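As a rough illustration of how little provisioning is left to do, each provider exposes its managed offering through a single CLI command; the cluster and resource group names below are hypothetical, and each command accepts many more options (region, node count, machine type, etc.).

```sh
gcloud container clusters create demo-cluster             # Google Kubernetes Engine (GKE)
eksctl create cluster --name demo-cluster                 # Amazon EKS
az aks create --resource-group demo-rg --name demo-aks    # Azure AKS
```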
Benefit #4: Community and unified tools
Kubernetes has a set of tools designed by developers, for developers, that allow us to be productive and efficient when introspecting and manipulating clusters.
CLI (command-line interface) tools, often very popular with developers, are not treated as second-class citizens; they are the foundation of the Kubernetes ecosystem and serve as the basis for the development of additional community tools.
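A few everyday kubectl commands give a sense of this workflow; the deployment name reuses the hypothetical foo-deployment from the sketch above.

```sh
kubectl apply -f deployment.yaml                   # declare (or update) the desired state
kubectl get pods                                   # list the running instances
kubectl describe deployment foo-deployment         # inspect a deployment's current state
kubectl logs deployment/foo-deployment             # read application logs
kubectl rollout status deployment/foo-deployment   # follow a rollout's progress
```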
The vHub digital platform was developed to revolutionize the freight transportation industry. Since its launch, we have continued to develop it, and it is now made up of several complete and independent components sharing resources.
Using Kubernetes as our orchestration and application deployment system from the moment the platform went live has allowed us to dramatically reduce infrastructure maintenance time.
Thanks to the principles of Infrastructure as code and Kubernetes' declarative philosophy, adding an important new piece to the infrastructure doesn't mean duplicating its complexity. This makes the ongoing evolution of projects more efficient; we can develop new features and add new components with confidence.
Kubernetes allows us to remain generalists in our approach to developing and maintaining Cloud Native infrastructures, without giving up too many of the advantages of working with a platform tied to a cloud provider.
Its declarative philosophy and solid ecosystem of CLI tools allow us to be productive and confident when deploying or modifying these infrastructures, and its portability allows us to constantly improve our DevOps processes, since our findings can be shared across all our projects, regardless of the cloud provider used.