text corrections from pauldotyu
palma21 committed May 21, 2024
1 parent 2820120 commit 1b1fbf6
Showing 1 changed file with 2 additions and 2 deletions.
blog/_posts/2024-05-21-aks-past-present-future.md (4 changes: 2 additions & 2 deletions)
@@ -10,11 +10,11 @@ categories: general

Hi! My name is Jorge Palma, I’m a PM Lead for AKS and I’m excited to be inaugurating our new AKS Engineering Blog. In this new blog, we will complement and extend some of our existing channels, providing extra context to announcements, sharing product tips & tricks that may not fit in our core [documentation](https://learn.microsoft.com/azure/aks), and giving you a peek behind the curtain of how we build the product.

-In this initial post, taking inspiration from Mark Russinovich who has [named so many similar series and talks like this](https://ia600807.us.archive.org/23/items/InsideNTFS/Inside%20NTFS.pdf), I hope to take you in a (shortened) journey through the history of the Azure Kubernetes Service. We will talk about the past, how we started and how we got here, where we are today and also some thoughts about what the future holds for AKS.
+In this initial post, taking inspiration from Mark Russinovich who has [named so many similar series and talks like this](https://ia600807.us.archive.org/23/items/InsideNTFS/Inside%20NTFS.pdf), I hope to take you on a (shortened) journey through the history of the Azure Kubernetes Service. We will talk about the past, how we started and how we got here, where we are today and also some thoughts about what the future holds for AKS.

## Past - How we got here

-An AKS history recap should start a little before it actually came to existence. Circa 2015, containerization was starting to bubble up, containers were becoming not only a developer tool or software packaging mechanism, but brewing to be much more, a whole new way to build and deliver software. But before this was possible a key piece was still required; how could we go from delivering software to running services at all scale? This meant addressing the requirements to run containers in production, and more than that to run full services, platforms and applications in production. These requirements could range from container placement and scheduling, guaranteeing its health and execution, to facilitating communication between different containers that could represent different parts of the application/service or with external services like a PaaS database for example. These and many more tasks became the purview of the emerging “Container Orchestrators” at the time. However, bootstrapping and configuring your container orchestrator was not a simple task, it could involve cumbersome tasks from setting up infrastructure to configuring dozens of components within it to create your “cluster”, the set of hardware or virtualized infrastructure that would host your containers. To assist our users the Azure team at Microsoft decided to create, first a tool/project – ACS-engine, and then a service based on it, Azure Container Service (ACS), whose main mission was to help users quickly bootstrap a cluster of some of the most popular container orchestrators at the time. One of those orchestrator options was Kubernetes, which went to General Availability (GA) in ACS in February of 2017.
+An AKS history recap should start a little before it actually came to existence. Circa 2015, containerization was starting to bubble up, containers were becoming not only a developer tool or software packaging mechanism, but brewing to be much more, a whole new way to build and deliver software. But before this was possible a key piece was still required; how could we go from delivering software to running services at scale? This meant addressing the requirements to run containers in production, and more than that to run full services, platforms and applications in production. These requirements could range from container placement and scheduling, guaranteeing its health and execution, to facilitating communication between different containers that could represent different parts of the application/service or with external services like a PaaS database for example. These and many more tasks became the purview of the emerging “Container Orchestrators” at the time. However, bootstrapping and configuring your container orchestrator was not a simple task, it could involve cumbersome tasks from setting up infrastructure to configuring dozens of components within it to create your “cluster”, the set of hardware or virtualized infrastructure that would host your containers. To assist our users the Azure team at Microsoft decided to create, first a tool/project – ACS-engine, and then a service based on it, Azure Container Service (ACS), whose main mission was to help users quickly bootstrap a cluster of some of the most popular container orchestrators at the time. One of those orchestrator options was Kubernetes, which went to General Availability (GA) in ACS in February of 2017.

It would be fair to say that the mission of providing a fully configured container orchestrator was in fact achieved by that service, but the journey was only beginning. On one hand, the user community and market at large clearly self-elected Kubernetes as the standard container orchestrator, due to its extensible and pluggable architecture that made a whole ecosystem surge alongside it, as well as its enterprise backers (such as Microsoft) who made strides alongside other key contributors to ensure key enterprise requirements were baked in or facilitated from early on. On the other hand, getting a running Kubernetes cluster that would orchestrate your containers turned out to be only the beginning of users’ needs with respect to container orchestrators. As tools continued to advance and improve their UX, creating clusters by yourself also became achievable for most people; however, you’d still need to maintain and operate those clusters, often having to understand a lot of their internal components, behaviors and inner workings, as well as ensuring the resiliency and business continuity of the control plane and system components that sustained the cluster to run the user applications.
