Migrating from Knative Build to Tekton Pipelines

Knative 0.8.0 and Build Deprecation

Knative 0.8.0 came out a couple of weeks ago with a number of fixes and improvements. One of the biggest changes in 0.8.0 is that Knative Build is now deprecated, according to the docs:

Knative Installation docs also only include Knative Serving and Eventing without mentioning Build:

kubectl apply \
-f https://github.com/knative/serving/releases/download/v0.8.0/serving.yaml \
-f https://github.com/knative/eventing/releases/download/v0.8.0/release.yaml \
-f https://github.com/knative/serving/releases/download/v0.8.0/monitoring.yaml

Good to know, but there’s no explanation of why Knative Build was deprecated, nor any guidance on what the replacement is, if any. After a little bit of research, I have more information on the deprecation and also a migration path that I’d like to share in this post.

There’s a Knative issue (614) with more details, but basically it has been decided that building and pushing an image for a service should not be one of the core responsibilities of Knative.

Instead, Knative users can rely on a number of other, better tools. One of those tools is Tekton Pipelines. Inspired by Knative Build, the Tekton Pipelines project provides Kubernetes-style resources for declaring CI/CD-style pipelines. It does everything Knative Build does and more.

Hello Tekton Pipelines

In Tekton Pipelines, you can create simple one-off tasks or more complicated CI/CD pipelines. There are 4 main primitives in Tekton Pipelines (a minimal example follows the list):

  • Task defines the work that needs to be executed, in one or more steps.
  • PipelineResource defines the artifacts that can be passed in and out of a Task.
  • TaskRun runs the Task you defined with the supplied resources.
  • Pipeline defines a list of Tasks to execute in order.
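
To make these concrete, here’s a minimal sketch of a Task with a single step that just echoes a message (the echo-hello name and the step contents are purely illustrative):

apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: echo-hello
spec:
  steps:
  # Each step runs as a container inside the Task's pod, in order.
  - name: echo
    image: ubuntu
    command: ["echo"]
    args: ["Hello Tekton!"]

A TaskRun with a taskRef pointing at echo-hello is what actually executes it, as you’ll see with the Kaniko Task below.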

Before you can use Tekton Pipelines, you need to install it in your Kubernetes cluster. Detailed instructions are here but it’s as easy as: 

kubectl apply -f https://storage.googleapis.com/tekton-releases/latest/release.yaml

Once you have it installed, you can check the Tekton pods:

kubectl get pods -n tekton-pipelines

NAME                                           READY   STATUS
tekton-pipelines-controller-55c6b5b9f6-8p749   1/1     Running
tekton-pipelines-webhook-6794d5bcc8-pf5x7      1/1     Running

Knative Build ==> Tekton Pipelines

There’s basic documentation on Migrating from Knative Build to Tekton. In a nutshell, these are Tekton equivalents of Knative Build constructs:

Knative                 Tekton
Build                   TaskRun
BuildTemplate           Task
ClusterBuildTemplate    ClusterTask
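
For comparison, building and pushing an image with Knative Build and the kaniko BuildTemplate looked roughly like this (a rough sketch; the exact argument names depended on the template you used):

apiVersion: build.knative.dev/v1alpha1
kind: Build
metadata:
  name: build-kaniko-helloworld-gcr
spec:
  source:
    git:
      url: https://github.com/meteatamel/knative-tutorial
      revision: master
  template:
    name: kaniko
    arguments:
    - name: IMAGE
      # Replace {PROJECT_ID} with your GCP Project's ID.
      value: gcr.io/{PROJECT_ID}/helloworld:kaniko-knative

In the Tekton version later in this post, the git source becomes an input PipelineResource and the image becomes an output PipelineResource on a TaskRun.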

Additionally, the Tekton Catalog aims to provide a catalog of reusable Tasks, similar to what the Knative build-templates repository used to offer.

Build with Kaniko Task

As an example, let’s take a look at how to build and push an image to Google Container Registry (GCR) using Tekton Pipelines. 

In the Tekton world, you start by either defining your own custom Task (example) or re-using someone else’s Task (example). Let’s use the Kaniko Task already available in the Tekton Catalog.

First, install the Kaniko Task and make sure it’s installed:

kubectl apply -f https://raw.githubusercontent.com/tektoncd/catalog/master/kaniko/kaniko.yaml

kubectl get task

NAME     AGE
kaniko   45m

Second, define a TaskRun to use the Task and supply the required parameters:

apiVersion: tekton.dev/v1alpha1
kind: TaskRun
metadata:
  name: build-kaniko-helloworld-gcr
spec:
  taskRef:
    name: kaniko
  inputs:
    resources:
    - name: source
      resourceSpec:
        type: git
        params:
        - name: url
          value: https://github.com/meteatamel/knative-tutorial
    params:
    - name: DOCKERFILE
      value: Dockerfile
    - name: CONTEXT
      value: serving/helloworld/csharp
  outputs:
    resources:
    - name: image
      resourceSpec:
        type: image
        params:
        - name: url
          # Replace {PROJECT_ID} with your GCP Project's ID.
          value: gcr.io/{PROJECT_ID}/helloworld:kaniko-tekton

Finally, start the TaskRun and check that it’s succeeded:

kubectl apply -f taskrun-build-kaniko-helloworld-gcr.yaml

kubectl get taskrun

NAME                          SUCCEEDED
build-kaniko-helloworld-gcr   True

At this point, you should see the container image built and pushed to GCR.
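
If you want to double-check from the command line, you can list the image’s tags with gcloud (assuming the Cloud SDK is installed; replace {PROJECT_ID} with your GCP project ID as before):

gcloud container images list-tags gcr.io/{PROJECT_ID}/helloworld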


Hopefully, this blog post provided you with the basics needed to move from Knative Build to Tekton Pipelines. I also updated my Knative Tutorial for the 0.8.0 release. Check it out for more examples of converting Knative Build to Tekton Pipelines.


Istio 101 (1.0) on GKE

Istio 1.0 has finally been announced! In this post, I updated my previous Istio 101 post with Istio 1.0-specific instructions. Most of the instructions are the same, with a few minor differences about where things live (folder names/locations changed), and most commands now default to kubectl instead of istioctl.

For those of you who haven’t read my Istio 101 post: I show how to install Istio 1.0 on Google Kubernetes Engine (GKE), deploy the sample BookInfo app, and explore some of the add-ons and traffic routing.

Create Kubernetes cluster

First, we need a Kubernetes cluster to install Istio. On GKE, this is a single command:

gcloud container clusters create hello-istio \
 --cluster-version=latest \
 --zone europe-west1-b \
 --num-nodes 4

I’m using 4 worker nodes, which is the recommended number of nodes for the BookInfo sample.

Once the cluster is created, we also need to create a clusterrolebinding so that Istio can manage the cluster:

kubectl create clusterrolebinding cluster-admin-binding \
 --clusterrole=cluster-admin \
 --user=$(gcloud config get-value core/account)

Download & Setup Istio

Now that we have a cluster, let’s download the latest Istio (1.0.0 as of today):

curl -L https://git.io/getLatestIstio | ISTIO_VERSION=1.0.0 sh -

Add Istio’s command line tool istioctl to your PATH. We’ll need it later:

export PATH="$PATH:./istio-1.0.0/bin"

Install Istio

It’s time to install Istio with mutual authentication between sidecars:

kubectl apply -f install/kubernetes/istio-demo-auth.yaml

Once it’s done, you can check that pods are running under istio-system namespace:

kubectl get pods -n istio-system

You’ll notice that in addition to the Istio base components (e.g. pilot, mixer, ingress, egress), a number of add-ons are also installed (e.g. prometheus, servicegraph, grafana). This is different from previous versions of Istio.
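
You can also list the services in the istio-system namespace to see the add-ons and how they are exposed:

kubectl get svc -n istio-system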

Enable sidecar injection

When we deploy our services, Envoy sidecars can be automatically injected into each service pod. For that to work, we need to enable sidecar injection for the namespace (‘default’) that we will use for our microservices. We do that by applying a label:

kubectl label namespace default istio-injection=enabled

And verify that the label was applied successfully:

kubectl get namespace -L istio-injection
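
You should see output along these lines (other namespaces and the AGE column trimmed), with the default namespace showing enabled:

NAME           STATUS   ISTIO-INJECTION
default        Active   enabled
istio-system   Active
kube-system    Active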

Deploy BookInfo app

Let’s deploy the BookInfo sample app now:

kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml

And make sure all the pods are running. Notice that each pod has 2 containers (the actual service and the Envoy sidecar), shown as 2/2 in the READY column:

kubectl get pods

Deploy BookInfo Gateway

In Istio 1.0.0, you need to create a gateway for ingress traffic. Let’s go ahead and create a gateway for BookInfo app:

kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml
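
You can confirm that the Gateway and the VirtualService defined in that file were created:

kubectl get gateway
kubectl get virtualservices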

Use BookInfo app

We can finally take a look at the app. We need to find the ingress gateway IP and port:

kubectl get svc istio-ingressgateway -n istio-system

To make it easier for us, let’s define a GATEWAY_URL variable:

export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].port}')
export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT

Let’s see if the app is working. You should get 200 with curl:

curl -o /dev/null -s -w "%{http_code}\n" http://${GATEWAY_URL}/productpage

You can also open a browser and see the web frontend for the product page. At this point, we have the app deployed and managed by a basic installation of Istio.

Next, we’ll take a look at some of the add-ons. Unlike previous versions, the add-ons are already installed automatically. Let’s start sending some traffic first:

for i in {1..100}; do curl -o /dev/null -s -w "%{http_code}\n" http://${GATEWAY_URL}/productpage; done

Grafana dashboard

There’s Grafana for dashboards. Let’s set up port forwarding first:

kubectl -n istio-system port-forward $(kubectl -n istio-system get pod -l app=grafana -o jsonpath='{.items[0].metadata.name}') 8080:3000

Navigate to http://localhost:8080 to see the dashboard:

Istio Dashboard in Grafana

Prometheus metrics

Next, let’s take a look at Prometheus for metrics. Set up port forwarding:

kubectl -n istio-system port-forward $(kubectl -n istio-system get pod -l app=prometheus -o jsonpath='{.items[0].metadata.name}') 8083:9090

Navigate to http://localhost:8083/graph to see Prometheus:

Prometheus in Istio

ServiceGraph

For dependency visualization, we can take a look at ServiceGraph:

kubectl -n istio-system port-forward $(kubectl -n istio-system get pod -l app=servicegraph -o jsonpath='{.items[0].metadata.name}') 8082:8088

Navigate to http://localhost:8082/dotviz:

ServiceGraph dotviz view of the BookInfo services

Tracing

For HTTP tracing, there are Jaeger and Zipkin. Let’s take a look at Jaeger. Set up port forwarding as usual:

kubectl port-forward -n istio-system $(kubectl get pod -n istio-system -l app=jaeger -o jsonpath='{.items[0].metadata.name}') 8084:16686

Navigate to http://localhost:8084

Jaeger tracing dashboard

Traffic Management

Before you can use Istio to control the Bookinfo version routing, you need to define the available versions, called subsets, in destination rules. Run the following command to create default destination rules for the Bookinfo services:

kubectl apply -f samples/bookinfo/networking/destination-rule-all-mtls.yaml

You can then see the existing VirtualServices and DestinationRules like this:

kubectl get virtualservices -o yaml
kubectl get destinationrules -o yaml

When you go to the product page of the BookInfo application and refresh the browser a few times, you will see that the reviews section on the right keeps changing (the stars change color). This is because there are 3 different versions of the reviews microservice and a different one is invoked every time. Let’s pin all microservices to version 1:

kubectl apply -f samples/bookinfo/networking/virtual-service-all-v1.yaml

This creates the VirtualServices and DestinationRules needed to pin all microservices to version 1. Now, if you go back to the product page and refresh the browser, nothing changes because the reviews microservice is now pinned to version 1.
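
To see what the pinning looks like, you can inspect the reviews VirtualService:

kubectl get virtualservice reviews -o yaml

The relevant part of the spec should be a single route to the v1 subset, roughly like this:

  http:
  - route:
    - destination:
        host: reviews
        subset: v1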

To pin a specific user (e.g. Jason) to a specific version (v2), we can do the following:

kubectl apply -f samples/bookinfo/networking/virtual-service-reviews-test-v2.yaml

With this rule, if you log in to the product page with the username “Jason”, you should see the v2 version of the reviews microservice.
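
Under the hood, that rule matches on the end-user header that the product page forwards when you log in; the reviews VirtualService ends up looking roughly like this (a sketch, not the full sample file):

  http:
  - match:
    - headers:
        end-user:
          exact: jason
    route:
    - destination:
        host: reviews
        subset: v2
  - route:
    - destination:
        host: reviews
        subset: v1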

To remove those routing rules and go back to the beginning with 3 different versions of the reviews microservice, run the following:

kubectl delete -f samples/bookinfo/networking/virtual-service-all-v1.yaml

Cleanup

This wraps up all the basic functionality of Istio 1.0.0 that I wanted to show on GKE. To clean up, let’s first delete the BookInfo app:

kubectl delete -f samples/bookinfo/networking/bookinfo-gateway.yaml
kubectl delete -f samples/bookinfo/platform/kube/bookinfo.yaml

Confirm that BookInfo app is gone:

kubectl get gateway
kubectl get virtualservices
kubectl get pods

Finally, clean up Istio:

kubectl delete -f install/kubernetes/istio-demo-auth.yaml

Confirm that Istio is gone:

kubectl get pods -n istio-system

.NET Days in Zurich, Shift Conf in Split

Last week was quite an interesting week in terms of travel. First I got to visit Zurich again after a while for .NET Day, and then I got to visit the Croatian coastal town of Split for the first time for Shift Conference.

.NET Days in Zurich

When I used to work at Adobe, part of my team was based in Basel, Switzerland. As a result, I used to visit Basel, Zurich and other Swiss cities quite often. Since leaving Adobe, I had visited Switzerland only once, 2 years ago, so I was naturally excited to visit Zurich again for .NET Day.

I arrived a day early for the conference and explored Zurich a little bit. Google has a big office in Zurich with a strong engineering presence. I got to visit that office for the first time as well and spent half a day working from there.

Talk & Questions

.NET Day is a small, .NET-focused conference with 2 tracks and about 200 attendees. It was my first time presenting there. I did my “Google Home meets .NET Containers” talk, where I show how to connect a Google Home mini to a .NET container running in Google Cloud. It’s a fun talk and it always gets a good reaction from the crowd.

The conference organizers did a couple of special things for speakers. First, we got speaker t-shirts with our names on them; I think this was the first time I got a t-shirt with my name on it, which was nice. Second, they organized a photo shoot with the conference photographer, Irene Bizic. She did an amazing job and, as a result, I got a few very nice pictures of myself.

After my talk, I got some questions on the pricing model of the Vision API. Someone also asked me how to test Dialogflow end to end.

Shift Conference in Split

After Zurich, I flew to Split, Croatia. As you might remember, I was in Zagreb, the Croatian capital, last October, but this was the first time I got to visit the coastal part of Croatia.

I have to say I was impressed with Split. It’s a small town with a rich history, great food and good beaches. The weather was very good, 30 degrees and sunny almost every day. I tried food in 3-4 different places and every place was very good. I had opened the beach season back in January in Rio, but it had been a while since then, so it was nice to swim again one afternoon in Split.

Talk & Questions

This was the first time I spoke at Shift Conference. I was expecting a small conference in a small town, but I was totally wrong. Shift is a big, well-organized conference (1000+ attendees) with a single track (and a workshop) over 2 days. There were lots of speakers from all over the place and a ton of technical content. The conference takes place in an old theatre, and I was super impressed with the stage. It was probably the most impressive stage I have ever spoken on.

I did my “Google Home meets .NET containers” talk again. It was super fun again, and I got a good reaction from the crowd both during and after my talk. After the conference, I got some general questions about Google Cloud and Dialogflow.

I have to say the organizers did an amazing job with the conference. There were speaker dinners and parties every night, and they really tried to make it a fun event not just for attendees but for speakers as well.

I hope to visit Split again next year and explore more of Croatia and surroundings.

Istio 101 (0.8.0) on GKE

In one of my previous posts, I showed how to install Istio on Minikube and deploy the sample BookInfo app. A new Istio version (0.8.0) is out with a lot of changes, especially to traffic management, which made the steps in my previous post a little obsolete.

In this post, I want to show how to install Istio 0.8.0 on Google Kubernetes Engine (GKE), deploy the sample BookInfo app, and walk through some of the add-ons and traffic routing.

Create Kubernetes cluster

First, we need a Kubernetes cluster to install Istio. On GKE, this is a single command:

gcloud container clusters create hello-istio \
 --cluster-version=latest \
 --zone europe-west1-b \
 --num-nodes 4

I’m using 4 worker nodes, which is the recommended number of nodes for the BookInfo sample.

Once the cluster is created, we also need to create a clusterrolebinding so that Istio can manage the cluster:

kubectl create clusterrolebinding cluster-admin-binding \
 --clusterrole=cluster-admin \
 --user=$(gcloud config get-value core/account)

Download & Setup Istio

Now that we have a cluster, let’s download the latest Istio (0.8.0 as of today):

curl -L https://git.io/getLatestIstio | ISTIO_VERSION=0.8.0 sh -

Add Istio’s command line tool istioctl to your PATH. We’ll need it later:

export PATH="$PATH:./istio-0.8.0/bin"
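
A quick sanity check that istioctl is on your PATH; the client version should show 0.8.0:

istioctl version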

Install Istio

It’s time to install Istio with mutual authentication between sidecars:

kubectl apply -f install/kubernetes/istio-demo-auth.yaml

Once it’s done, you can check that pods are running under istio-system namespace:

kubectl get pods -n istio-system

You’ll notice that in addition to the Istio base components (e.g. pilot, mixer, ingress, egress), a number of add-ons are also installed (e.g. prometheus, servicegraph, grafana). This is different from previous versions of Istio.

Enable sidecar injection

When we deploy our services, Envoy sidecars can be automatically injected into each service pod. For that to work, we need to enable sidecar injection for the namespace (‘default’) that we will use for our microservices. We do that by applying a label:

kubectl label namespace default istio-injection=enabled

And verify that the label was applied successfully:

kubectl get namespace -L istio-injection

Deploy BookInfo app

Let’s deploy the BookInfo sample app now:

kubectl apply -f samples/bookinfo/kube/bookinfo.yaml

And make sure all the pods are running. Notice that each pod has 2 containers (the actual service and the Envoy sidecar), shown as 2/2 in the READY column:

kubectl get pods

Deploy BookInfo Gateway

In Istio 0.8.0, traffic management completely changed, and one of those changes is that you now need to create a gateway for ingress traffic. Let’s go ahead and create a gateway for the BookInfo app:

istioctl create -f samples/bookinfo/routing/bookinfo-gateway.yaml

Use BookInfo app

We can finally take a look at the app. We need to find the ingress gateway IP and port:

kubectl get svc istio-ingressgateway -n istio-system

To make it easier for us, let’s define a GATEWAY_URL variable:

export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http")].port}')
export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT

Let’s see if the app is working. You should get 200 with curl:

curl -o /dev/null -s -w "%{http_code}\n" http://${GATEWAY_URL}/productpage

You can also open a browser and see the web frontend for the product page. At this point, we have the app deployed and managed by a basic installation of Istio.

Next, we’ll take a look at some of the add-ons. Unlike previous versions, the add-ons are already installed automatically. Let’s start sending some traffic first:

for i in {1..100}; do curl -o /dev/null -s -w "%{http_code}\n" http://${GATEWAY_URL}/productpage; done

Grafana dashboard

There’s Grafana for dashboards. Let’s set up port forwarding first:

kubectl -n istio-system port-forward $(kubectl -n istio-system get pod -l app=grafana -o jsonpath='{.items[0].metadata.name}') 8080:3000

Navigate to http://localhost:8080 to see the dashboard:

Istio Dashboard in Grafana

Prometheus metrics

Next, let’s take a look at Prometheus for metrics. Set up port forwarding:

kubectl -n istio-system port-forward $(kubectl -n istio-system get pod -l app=prometheus -o jsonpath='{.items[0].metadata.name}') 8083:9090

Navigate to http://localhost:8083/graph to see Prometheus:

Prometheus in Istio

ServiceGraph

For dependency visualization, we can take a look at ServiceGraph:

kubectl -n istio-system port-forward $(kubectl -n istio-system get pod -l app=servicegraph -o jsonpath='{.items[0].metadata.name}') 8082:8088

Navigate to http://localhost:8082/dotviz:

ServiceGraph dotviz view of the BookInfo services

Tracing

For HTTP tracing, there are Jaeger and Zipkin. Let’s take a look at Jaeger. Set up port forwarding as usual:

kubectl port-forward -n istio-system $(kubectl get pod -n istio-system -l app=jaeger -o jsonpath='{.items[0].metadata.name}') 8084:16686

Navigate to http://localhost:8084

Jaeger tracing dashboard

Traffic Management

Traffic management changed dramatically in 0.8.0. You can read more about it here, but basically, instead of routing rules, we now have VirtualServices and DestinationRules.
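
As a rough sketch of the new model (simplified, not the actual sample files): a DestinationRule declares the available subsets of a service, and a VirtualService routes traffic to those subsets.

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
  # Subsets map named versions to pod labels.
  - name: v1
    labels:
      version: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  # Route all HTTP traffic for "reviews" to the v1 subset.
  - route:
    - destination:
        host: reviews
        subset: v1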

You can see the existing VirtualServices and DestinationRules like this:

istioctl get virtualservices -o yaml
istioctl get destinationrules -o yaml

When you go to the product page of the BookInfo application and refresh the browser a few times, you will see that the reviews section on the right keeps changing (the stars change color). This is because there are 3 different versions of the reviews microservice and a different one is invoked every time. Let’s pin all microservices to version 1:

istioctl create -f samples/bookinfo/routing/route-rule-all-v1-mtls.yaml

This creates the VirtualServices and DestinationRules needed to pin all microservices to version 1. Now, if you go back to the product page and refresh the browser, nothing changes because the reviews microservice is now pinned to version 1.

To pin a specific user (e.g. Jason) to a specific version (v2), we can do the following:

istioctl replace -f samples/bookinfo/routing/route-rule-reviews-test-v2.yaml

With this rule, if you log in to the product page with the username “Jason”, you should see the v2 version of the reviews microservice.

To remove those routing rules and go back to the beginning with 3 different versions of the reviews microservice, run the following:

istioctl delete -f samples/bookinfo/routing/route-rule-all-v1.yaml

Cleanup

This wraps up all the basic functionality of Istio 0.8.0 that I wanted to show on GKE. To clean up, let’s first delete the BookInfo app:

samples/bookinfo/kube/cleanup.sh

Confirm that BookInfo app is gone:

istioctl get gateway
istioctl get virtualservices
kubectl get pods

Finally, clean up Istio:

kubectl delete -f install/kubernetes/istio-demo-auth.yaml

Confirm that Istio is gone:

kubectl get pods -n istio-system