Deploying ASP.NET Core apps on Kubernetes/Container Engine

In my previous post, I talked about how to deploy a containerised ASP.NET Core app to App Engine (flex) on Google Cloud. App Engine (flex) is an easy way to run containers in production: just send your container and let Google Cloud figure out how to run it at scale. It comes with some nice default features such as versioning, traffic splitting, dashboards and autoscaling. However, it doesn't give you much control over how your containers are deployed and scaled.
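For reference, the App Engine (flex) flow from that post boils down to two commands. The publish path below matches the .NET Core 1.0 layout and is only illustrative; it assumes an app.yaml (with runtime: custom and env: flex) sitting next to the Dockerfile in the publish folder:

    # Publish a release build of the app.
    dotnet publish -c Release

    # Deploy the published output, Dockerfile and all, to App Engine (flex).
    gcloud app deploy ./bin/Release/netcoreapp1.0/publish/app.yaml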

Sometimes, you need to create a cluster of containers and control how each container is deployed and scaled. That's when Kubernetes comes into play. Kubernetes is an open-source container management platform that helps you manage a cluster of containers, and Container Engine is Kubernetes managed by Google Cloud.
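To make the "managed by Google Cloud" part concrete, this is roughly how you bring up a cluster; the cluster name, node count and zone below are placeholders:

    # Create a Container Engine cluster.
    gcloud container clusters create hello-dotnet-cluster --num-nodes=3 --zone=us-central1-b

    # Fetch credentials so kubectl can talk to the new cluster.
    gcloud container clusters get-credentials hello-dotnet-cluster --zone=us-central1-b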

In this Cloud Minute, I show how to deploy an ASP.NET Core app to Kubernetes running on Container Engine.
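In outline, the deployment boils down to the following steps. The image and deployment names are illustrative, and PROJECT_ID stands for your own Google Cloud project id:

    # Build the container image and push it to Google Container Registry.
    docker build -t gcr.io/${PROJECT_ID}/hello-dotnet:v1 .
    gcloud docker -- push gcr.io/${PROJECT_ID}/hello-dotnet:v1

    # Run the image on the cluster and expose it behind a load balancer.
    kubectl run hello-dotnet --image=gcr.io/${PROJECT_ID}/hello-dotnet:v1 --port=8080
    kubectl expose deployment hello-dotnet --type=LoadBalancer --port=80 --target-port=8080

    # Wait for the service to get an external IP, then browse to it.
    kubectl get service hello-dotnet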

If you want to go through these steps yourself, we also have a codelab for you that you can access here.


From the Monolith to Microservices

I remember the old days when we used to package all our modules into a single app (aka the Monolith), deploy everything all at once and call it an enterprise app. I have to admit, the first time I heard the term enterprise app, it felt special. Suddenly, my little module was not so little anymore. It was part of something bigger and more important, at least that's what I thought. There was a lot of convention and overhead that came with working in this enterprise app model, but it was a small price to pay for consistency, right?

This approach worked for small projects with a small number of modules. As projects got bigger and the number of teams and modules involved increased, it became obvious to me that the monolith approach wasn't scalable anymore, for a number of reasons.

  1. Integration was way too difficult. To create a single app, we had to bundle a number of modules, which was not only difficult but also happened too late in the release cycle. This meant that we didn't really test our integrated app end-to-end until very late in the release cycle. Integration time was a constant source of stress.
  2. Not agile at all. We had to wait for the slowest module to finish its development cycle before we could release any of our modules.
  3. Debugging was way too difficult. I could debug my module on its own, but debugging the whole app with all the modules was almost impossible. I didn't have access to the source code of other modules, and the whole app was so heavy that I could not run it on my laptop anyway.
  4. Environmental inconsistencies. Everything worked fine on my laptop, but the production environment was always slightly different, which caused hard-to-debug and hard-to-anticipate bugs.

In the last few years, a number of things happened that helped with these problems. The industry came to the realisation that the Monolith approach was not scalable, and there was a shift towards smaller, more manageable microservices. This took care of the integration, debugging and agility issues. Docker provided a consistent packaging and runtime environment for those microservices. This took care of the environmental inconsistency problem.

But we still needed to run containers in production and deal with all the issues that come with it. We had to find a way to provision nodes for containers. We had to make sure that containers were up and running. We had to do reliable rollouts and rollbacks. We had to write health checks and do all the other things that running software in production requires.

Thankfully, we started seeing open-source container management platforms like Kubernetes. Kubernetes provided us with a high-level API to automate deployments, manage rollouts and rollbacks, scale up and down, and much more. The best thing is that Kubernetes runs anywhere, from your laptop to the cloud, and it can span multiple clouds, so there is no lock-in.
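To give a feel for that API, here is a sketch of those day-to-day operations with kubectl, reusing the illustrative deployment name from above:

    # Scale from the current replica count up to 4.
    kubectl scale deployment hello-dotnet --replicas=4

    # Roll out a new image version; pods are replaced gradually.
    kubectl set image deployment/hello-dotnet hello-dotnet=gcr.io/${PROJECT_ID}/hello-dotnet:v2
    kubectl rollout status deployment/hello-dotnet

    # Something wrong with v2? Roll back to the previous version.
    kubectl rollout undo deployment/hello-dotnet

Health checks become declarative too: a livenessProbe in the deployment manifest tells Kubernetes how to probe a container and restart it automatically when it stops responding.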

As a result, I feel like we brought some sanity back into how we build and run software, and that's always a good thing!