Istio + Kubernetes on Windows


I’ve recently been looking into Istio, an open platform to connect and manage microservices. After containers and Kubernetes, I believe Istio is the next step in our microservices journey, where we standardize on tools and methods for managing and securing microservices. Naturally, I was very excited to get my hands on Istio.

While setting up Istio on Google Kubernetes Engine (GKE) is pretty straightforward, it’s always useful to have a local setup for debugging and testing. I specifically wanted to set up Istio on my local Minikube Kubernetes cluster on my Windows machine. I ran into a few minor issues that I want to outline here in case they’re useful to someone out there.

I assume you already have a Minikube cluster set up and running. If not, you can check out my previous post on how to set up and run a Minikube cluster on your Windows machine. Istio has a Quickstart tutorial for Kubernetes. I’ll follow that, but it’s Linux-centric and some of the commands have to be adapted for Windows.

Download Istio

Here is the command from the Quickstart to download Istio:

curl -L https://git.io/getLatestIstio | sh -

This is a Linux shell command and it won’t work in Windows cmd or PowerShell. Thankfully, someone already wrote an equivalent PowerShell script here. I used the script as is, only changing IstioVersion to 0.5.1, the latest Istio version as of today:

 [string] $IstioVersion = "0.5.1"

The script downloads Istio and sets ISTIO_HOME as an environment variable.

PS C:\dev\local\istio> .\getLatestIstio.ps1
Downloading Istio from https://github.com/istio/istio/releases/download/0.5.1/ to path C:\dev\local\istio

Then, I added %ISTIO_HOME%\bin to my PATH to make sure I can run istioctl commands.
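In PowerShell, that amounts to something like the sketch below; the SetEnvironmentVariable line is optional and persists the change for your user account, and istioctl version is just a quick way to confirm the client is reachable:

# Current session only ($env:ISTIO_HOME is set by getLatestIstio.ps1):
$env:PATH = "$env:PATH;$env:ISTIO_HOME\bin"
# Persist for your user account (optional):
[Environment]::SetEnvironmentVariable("PATH", "$env:PATH;$env:ISTIO_HOME\bin", "User")
# Sanity check:
istioctl version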

Install and Verify Istio

To install Istio and enable mutual TLS authentication between sidecars, I ran the same command from the quickstart:

PS C:\istio-0.5.1> kubectl apply -f install/kubernetes/istio-auth.yaml
namespace "istio-system" created
clusterrole "istio-pilot-istio-system" created
clusterrole "istio-sidecar-injector-istio-system" created
clusterrole "istio-mixer-istio-system" created
clusterrole "istio-ca-istio-system" created
clusterrole "istio-sidecar-istio-system" created

And verified that all the Istio pods are running:

PS C:\istio-0.5.1> kubectl get pods -n istio-system
NAME                             READY     STATUS    RESTARTS   AGE
istio-ca-797dfb66c5-x4bzs        1/1       Running   0          2m
istio-ingress-84f75844c4-dc4f9   1/1       Running   0          2m
istio-mixer-9bf85fc68-z57nq      3/3       Running   0          2m
istio-pilot-575679c565-wpcrf     2/2       Running   0          2m
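If some of the pods are still in ContainerCreating at this point, you can watch them until everything settles (Ctrl+C to stop watching):

kubectl get pods -n istio-system -w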

Deploy the sample app

Deploying an app is a little different on Windows as well. To deploy the BookInfo app with Envoy container injection, this is the command you would normally run on Linux:

kubectl create -f <(istioctl kube-inject -f samples/bookinfo/kube/bookinfo.yaml)

That <(...) process substitution doesn’t exist in PowerShell. Instead, you can first run the istioctl command and save its output to an intermediate yaml:

istioctl kube-inject -f .\samples\bookinfo\kube\bookinfo.yaml > bookinfo_inject.yaml
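As a quick sanity check (my own suggestion, not part of the quickstart), you can search the intermediate file for the injected sidecar container before applying it; Select-String is PowerShell’s grep, and istio-proxy is the name of the sidecar that kube-inject adds:

Select-String -Path .\bookinfo_inject.yaml -Pattern "istio-proxy"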

Then, you can apply the intermediate yaml:

PS C:\istio-0.5.1> kubectl create -f .\bookinfo_inject.yaml
service "details" created
deployment "details-v1" created
service "ratings" created
deployment "ratings-v1" created
service "reviews" created
deployment "reviews-v1" created
deployment "reviews-v2" created
deployment "reviews-v3" created
service "productpage" created
deployment "productpage-v1" created
ingress "gateway" created


With that, you have the BookInfo app deployed and managed by Istio. Hope this was useful to get Istio + Kubernetes running in Minikube on Windows.


Minikube on Windows

When I’m playing with Kubernetes, I usually get a cluster from Google Kubernetes Engine (GKE) because it’s literally a single gcloud command to get a Kubernetes cluster up and running. It is sometimes useful, though, to have a Kubernetes cluster running locally for testing and debugging. Minikube is perfect for this.


Minikube runs a single-node Kubernetes cluster inside a VM on your laptop. There are instructions on how to install it on Linux, Mac and Windows. Unfortunately, the instructions for Windows are a little lacking, so I want to document how I got Minikube up and running on my Windows 10 machine.

First, you need to download the minikube-windows-amd64.exe file, rename it to minikube.exe and add it to your path, as explained in the instructions for Windows. In PowerShell, that step looks roughly like this (C:\tools is my own choice of location):
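# Create an install folder, download minikube under its new name, and add it to PATH:
New-Item -ItemType Directory -Force -Path C:\tools | Out-Null
Invoke-WebRequest -Uri "https://storage.googleapis.com/minikube/releases/latest/minikube-windows-amd64.exe" -OutFile "C:\tools\minikube.exe"
$env:PATH = "$env:PATH;C:\tools"

The instructions make it sound like you’re done at this point, but if you try to start Minikube in PowerShell admin mode, you will get an error.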


This is because Minikube needs a VM, and by default it tries to use VirtualBox. On Windows, you can either use Hyper-V (available natively on Windows 10) or install VirtualBox. Many people (here and here) recommend VirtualBox to avoid issues, but I didn’t want to install something extra like VirtualBox, so I decided to go with Hyper-V. You can tell Minikube to use Hyper-V via the --vm-driver=hyperv flag, but this also failed with the same error as before.


What’s happening? It turns out there’s a bug involving Minikube and Windows 10. This GitHub issue explains the details if you’re interested, and this very useful comment explains what you need to do: create a Virtual Switch in Hyper-V and let your actual internet connection share its connection with this Virtual Switch.
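If you’d rather script the switch creation, here’s a sketch using the Hyper-V module; the switch name is my own choice, and the sharing step still happens in the adapter’s Sharing tab under Network Connections:

# Run in an elevated PowerShell:
New-VMSwitch -Name "MinikubeSwitch" -SwitchType Internal
# Then open your real adapter's Properties > Sharing tab and share its
# connection with the new "vEthernet (MinikubeSwitch)" adapter.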

After you create the Virtual Switch, you can start Minikube on Hyper-V with the additional --hyperv-virtual-switch flag. It’s a good idea to also delete the .minikube folder under your home folder first, and don’t forget to start PowerShell in admin mode.
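Putting it together, assuming the switch is named MinikubeSwitch as above:

# Clear state from any earlier failed attempts, then start with Hyper-V:
Remove-Item -Recurse -Force -ErrorAction SilentlyContinue $HOME\.minikube
minikube start --vm-driver=hyperv --hyperv-virtual-switch="MinikubeSwitch"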


This takes a while the first time as it needs to download ISO images and so on, but once it’s done, you can test the local cluster by creating a deployment and deploying an echoserver pod:
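A sketch along the lines of the Minikube quickstart of this vintage (the hello-minikube name and echoserver image come from that tutorial, not from this post):

kubectl run hello-minikube --image=k8s.gcr.io/echoserver:1.4 --port=8080
kubectl expose deployment hello-minikube --type=NodePort
kubectl get pod
minikube service hello-minikube --url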


That’s it! Hope this was useful for you to get Minikube up and running on Windows.

Deploying ASP.NET Core apps on Kubernetes/Container Engine

In my previous post, I talked about how to deploy a containerised ASP.NET Core app to App Engine (flex) on Google Cloud. App Engine (flex) is an easy way to run containers in production: Just send your container and let Google Cloud figure out how to run it at scale. It comes with some nice default features such as versioning, traffic splitting, dashboards and autoscaling. However, it doesn’t give you much control.

Sometimes, you need to create a cluster of containers and control how each container is deployed and scaled. That’s when Kubernetes comes into play. Kubernetes is an open-source container management platform that helps you manage a cluster of containers, and Container Engine is Kubernetes managed by Google Cloud.

In this Cloud Minute, I show how to deploy an ASP.NET Core app to Kubernetes running on Container Engine.
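If you just want the shape of the workflow without watching the video, it goes roughly like this; the image and cluster names are placeholders of mine, not taken from the video:

# Build the image and push it to Google Container Registry:
docker build -t gcr.io/<your-project-id>/aspnetcore-app:v1 .
gcloud docker -- push gcr.io/<your-project-id>/aspnetcore-app:v1

# Create a cluster, run the image, and expose it behind a load balancer:
gcloud container clusters create aspnet-cluster --num-nodes=2
kubectl run aspnetcore-app --image=gcr.io/<your-project-id>/aspnetcore-app:v1 --port=8080
kubectl expose deployment aspnetcore-app --type=LoadBalancer --port=80 --target-port=8080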

If you want to go through these steps yourself, we also have a codelab that you can access here.

From the Monolith to Microservices

I remember the old days when we used to package all our modules into a single app (aka the Monolith), deploy everything all at once, and call it an enterprise app. I have to admit, the first time I heard the term enterprise app, it felt special. Suddenly, my little module was not so little anymore. It was part of something bigger and more important, at least that’s what I thought. There was a lot of convention and overhead that came with working in this enterprise app model, but it was a small price to pay for consistency, right?

This approach worked for small projects with a small number of modules. As projects got bigger and the number of teams and modules involved increased, it became obvious to me that the monolith approach wasn’t scalable anymore, for a number of reasons.

  1. Integration was way too difficult. To create a single app, we had to bundle a number of modules, and that was not only difficult but always happened too late in the release cycle. This meant we didn’t really test our integrated app end-to-end until very late in the release cycle. Integration time was a constant cause of stress.
  2. Not agile at all. We had to wait for the slowest module to finish its development cycle before we could release any of our modules.
  3. Debugging was way too difficult. I could debug my module on its own but debugging the whole app with all the modules was almost impossible. I didn’t have access to the source code of other modules and the whole app was so heavy that I could not run it on my laptop anyway.
  4. Environmental inconsistencies. Everything worked fine on my laptop but the production environment was always slightly different and caused hard-to-debug and hard-to-anticipate bugs.

In the last few years, a number of things happened that helped with these problems. The industry came to the realisation that the Monolith approach is not scalable, and there was a shift towards smaller, manageable microservices. This took care of the integration, debugging and agility issues. Docker provided a consistent context for those microservices, which took care of the environmental inconsistency problem.

But we still needed to run containers in production and deal with all the issues that come with it. We had to find a way to provision nodes for containers. We had to make sure that containers were up and running. We had to do reliable rollouts and rollbacks. We had to write health checks and do all the other things required to run software in production.

Thankfully, we started seeing open-source container management platforms like Kubernetes. Kubernetes provided us with a high-level API to automate deployments, manage rollouts/rollbacks, scale up/down and much more. The best thing is that Kubernetes runs anywhere from your laptop to the cloud, and it can span multiple clouds, so there is no lock-in.
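To make that concrete, these are the kinds of one-liners that replaced whole runbooks; the deployment name and image are placeholders:

kubectl set image deployment/myapp myapp=gcr.io/myproject/myapp:v2   # rolling update
kubectl rollout status deployment/myapp                              # watch the rollout
kubectl rollout undo deployment/myapp                                # rollback
kubectl scale deployment/myapp --replicas=5                          # scale out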

As a result, I feel like we brought some sanity back in how we build and run software and that’s always a good thing!