Istio + Kubernetes on Windows


I’ve recently been looking into Istio, an open platform to connect and manage microservices. After containers and Kubernetes, I believe Istio is the next step in our microservices journey, where we standardize on the tools and methods for managing and securing microservices. Naturally, I was very excited to get my hands on it.

While setting up Istio on Google Kubernetes Engine (GKE) is pretty straightforward, it’s always useful to have a local setup for debugging and testing. I specifically wanted to set up Istio on my local Minikube Kubernetes cluster on my Windows machine. I ran into a few minor issues that I want to outline here in case they are useful to someone out there.

I assume you already have a Minikube cluster set up and running. If not, you can check out my previous post on how to set up and run a Minikube cluster on your Windows machine. Istio has a Quickstart tutorial for Kubernetes. I’ll follow that, but it’s Linux-centric and some of the commands have to be adapted for Windows.

Download Istio

Here is the command to download Istio from Quickstart:

curl -L https://git.io/getLatestIstio | sh -

This is a Linux shell command and it won’t work in Windows cmd or PowerShell. Thankfully, someone already wrote an equivalent PowerShell script here. I used the script as is, only changing IstioVersion to 0.5.1, the latest Istio version as of today:

param(
 [string] $IstioVersion = "0.5.1"
)

The script downloads Istio and sets ISTIO_HOME as an environment variable.

PS C:\dev\local\istio> .\getLatestIstio.ps1
Downloading Istio from https://github.com/istio/istio/releases/download/
0.5.1/istio_0.5.1_win.zip to path C:\dev\local\istio

Then, I added %ISTIO_HOME%\bin to PATH to make sure I can run istioctl commands.
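If you prefer to do this from PowerShell instead of the system environment variables dialog, here is a minimal sketch that works for the current session only (it assumes ISTIO_HOME was set by the download script):

# Current session only; assumes the download script set ISTIO_HOME
$env:PATH = "$env:ISTIO_HOME\bin;$env:PATH"
istioctl version   # quick check that the client is found on PATH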

Install and Verify Istio

To install Istio and enable mutual TLS authentication between sidecars, I ran the same command as in the quickstart:

PS C:\istio-0.5.1> kubectl apply -f install/kubernetes/istio-auth.yaml
namespace "istio-system" created
clusterrole "istio-pilot-istio-system" created
clusterrole "istio-sidecar-injector-istio-system" created
clusterrole "istio-mixer-istio-system" created
clusterrole "istio-ca-istio-system" created
clusterrole "istio-sidecar-istio-system" created

And verified that all the Istio pods are running:

PS C:\istio-0.5.1> kubectl get pods -n istio-system
NAME                           READY STATUS  RESTARTS AGE
istio-ca-797dfb66c5-x4bzs      1/1   Running  0       2m
istio-ingress-84f75844c4-dc4f9 1/1   Running  0       2m
istio-mixer-9bf85fc68-z57nq    3/3   Running  0       2m
istio-pilot-575679c565-wpcrf   2/2   Running  0       2m
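It’s also worth checking the Istio services. Keep in mind that on Minikube there is no cloud load balancer, so the EXTERNAL-IP of the istio-ingress service will likely stay pending; that’s expected, and the ingress is still reachable through its NodePort:

PS C:\istio-0.5.1> kubectl get svc -n istio-system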

Deploy the sample app

Deploying an app is a little different on Windows as well. To deploy the BookInfo sample app with the Envoy sidecar injected, this is the command you would normally run on Linux:

kubectl create -f <(istioctl kube-inject -f samples/bookinfo/kube/bookinfo.yaml)

The <(...) process substitution doesn’t work in PowerShell. Instead, you can first run the istioctl command and save its output to an intermediate yaml file:

istioctl kube-inject -f .\samples\bookinfo\kube\bookinfo.yaml > bookinfo_inject.yaml
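One Windows-specific caveat: depending on your PowerShell version, the > redirect may write the file in UTF-16 encoding, which kubectl may refuse to parse. If the next step complains about the yaml, piping through Out-File with an explicit UTF-8 encoding is a safe workaround (same file name as above, only the encoding is forced):

istioctl kube-inject -f .\samples\bookinfo\kube\bookinfo.yaml | Out-File -Encoding utf8 bookinfo_inject.yaml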

Then, you can apply the intermediate yaml:

PS C:\istio-0.5.1> kubectl create -f .\bookinfo_inject.yaml
service "details" created
deployment "details-v1" created
service "ratings" created
deployment "ratings-v1" created
service "reviews" created
deployment "reviews-v1" created
deployment "reviews-v2" created
deployment "reviews-v3" created
service "productpage" created
deployment "productpage-v1" created
ingress "gateway" created


With that, you will have the BookInfo app deployed and managed by Istio. I hope this was useful for getting Istio + Kubernetes running in Minikube on Windows.


From the Monolith to Microservices

I remember the old days when we used to package all our modules into a single app (aka the Monolith), deploy everything all at once and call it an enterprise app. I have to admit, the first time I heard the term enterprise app, it felt special. Suddenly, my little module was not so little anymore. It was part of something bigger and more important, at least that’s what I thought. There was a lot of convention and overhead that came with working in this enterprise app model, but it was a small price to pay for consistency, right?

This approach worked for small projects with a small number of modules. As projects got bigger and the number of teams and modules involved increased, it became obvious to me that the monolith approach wasn’t scalable anymore, for a number of reasons.

  1. Integration was way too difficult. To create a single app, we had to bundle a number of modules, and that was not only difficult but also happened too late in the release cycle. This meant that we didn’t really test our integrated app end-to-end until very late in the release cycle. Integration time was a constant cause of stress.
  2. Not agile at all. We had to wait for the slowest module to finish its development cycle before we could release any of our modules.
  3. Debugging was way too difficult. I could debug my module on its own but debugging the whole app with all the modules was almost impossible. I didn’t have access to the source code of other modules and the whole app was so heavy that I could not run it on my laptop anyway.
  4. Environmental inconsistencies. Everything worked fine on my laptop but the production environment was always slightly different and caused hard-to-debug and hard-to-anticipate bugs.

In the last few years, a number of things happened that helped with these problems. The industry came to the realisation that the Monolith approach is not scalable, and there was a shift towards smaller, manageable microservices. This took care of the integration, debugging and agility issues. Docker provided a consistent context for those microservices, which took care of the environmental inconsistency problem.

But we still needed to run containers in production and deal with all the issues that come with that. We had to find a way to provision nodes for containers. We had to make sure that containers were up and running. We had to do reliable rollouts and rollbacks. We had to write health checks and do all the other things you need to do to run software in production.

Thankfully, we started seeing open-source container management platforms like Kubernetes. Kubernetes provided us with a high-level API to automate deployments, manage rollouts/rollbacks, scale up/down and much more. The best thing is that Kubernetes runs anywhere from your laptop to the cloud, and it can span multiple clouds, so there is no lock-in.
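To make that concrete, here are a few kubectl commands covering the kind of day-2 operations we used to script by hand. The deployment and image names are just placeholders for illustration:

kubectl set image deployment/productpage-v1 productpage=example/productpage:v2   # rolling update to a new image
kubectl rollout status deployment/productpage-v1                                 # watch the rollout progress
kubectl rollout undo deployment/productpage-v1                                   # roll back if something goes wrong
kubectl scale deployment/productpage-v1 --replicas=3                             # scale out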

As a result, I feel like we brought some sanity back in how we build and run software and that’s always a good thing!