One year on


I’ve been neglecting my blog recently. Not only was I really busy with work, but I also gravitated towards blogging on the Google Cloud Platform (GCP) blog. I will continue writing on the GCP blog, but my goal for 2017 is to write here more often on broader tech and non-tech topics.

As some of you might know, I started working at Google as Developer Advocate for Google Cloud almost a year ago. As we start the new year and as I get closer to my one year anniversary at Google, I thought this would be a good time to reflect on the past year.

2016 has been a crazy ride for me. I had a feeling that this job would be fun and different from any of my previous jobs but I never imagined that it would be this great in so many ways.

My job involves speaking/teaching at tech conferences, which requires quite a bit of travel. In 2016, I visited 33 cities in 21 countries. I probably traveled to more places in 2016 than in all my previous years combined.

I was a speaker/teacher/attendee at dozens of conferences. Preparing for so many conferences wasn’t easy, but the rewards in the end were great. I had never been exposed to this many diverse conferences in such a short amount of time. I learned a lot and got to meet a lot of talented engineers from all over the globe.

In terms of my talks, I had a lot of topics to choose from because Google Cloud is a huge platform with so many different pieces. I gravitated towards Kubernetes, gRPC, Node.js and Dataflow talks. Due to my .NET background, I started supporting our .NET story on Google Cloud more recently and I expect this to continue this year.

Being a Developer Advocate means you need to juggle many different tasks all at once. On any given day, you might find yourself writing code for a demo, submitting talks to a conference, writing friction logs for a product, or attending a customer meeting, all while figuring out your next travel plans. And sometimes you have to do all of this on the road. I have to admit, there were times I was stressed. There was a week where I was in 4 different cities in 4 different countries, and I was overwhelmed. But I learned my lesson. This year, I will try to plan my travels better and make sure to build in recovery time, as it’s really important.

Overall, 2016 has been a very exciting, rewarding and productive year for me professionally. 2017 is already shaping up to be an even better year and in one of my next posts, I’ll talk about some of the cool conferences that I’m excited to be part of in 2017.


Cloud Minute: Online Resizing of Persistent Disks

Google Cloud Platform introduced online resizing of Google Cloud Persistent Disks almost a month ago. When I first read about this feature, I was so amazed that I had to try it right away.

I started with a Compute Engine instance with a 100GB persistent disk and doubled it to 200GB with a few clicks and the resize2fs command.
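For reference, the steps look roughly like this. This is just a sketch: the disk name, zone, and device path are illustrative, and it assumes an ext4 filesystem on a secondary disk with no partition table.

```shell
# Grow the persistent disk itself; this works while the disk is
# attached to a running instance, with no downtime.
gcloud compute disks resize my-data-disk --size 200GB --zone us-central1-a

# Then, on the instance, grow the filesystem to fill the new space.
sudo resize2fs /dev/sdb
```

If the disk is the boot disk or has a partition table, there is an extra step to grow the partition first before resize2fs can expand the filesystem.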

Not only did it work flawlessly, it was also very quick. I documented my experience in this Cloud Minute video.


Cloud Functions

In this post, I want to take a look at Cloud Functions. It’s still in Alpha but you can already play with it and I really like the idea of deploying functions without having to worry about the underlying infrastructure.

What are Cloud Functions?

In a nutshell, Cloud Functions enable you to write managed functions to respond to events in your cloud environment.

  • Managed: Cloud Functions are written in JavaScript and run on a fully managed Node.js environment on Google Cloud Platform. No need to worry about instances or infrastructure. Just deploy your function to the cloud and you’re good to go.
  • Events: Cloud Storage and Cloud Pub/Sub events or simple HTTP invocations can act as triggers for Cloud Functions.
  • Response: Cloud Functions can respond asynchronously (to Storage and Pub/Sub events) or synchronously (to HTTP invocations).

Writing Cloud Functions

  • Cloud Functions are written in JavaScript as Node.js modules.
  • Each function must accept context and data as parameters and must signal completion by calling one of the context.success, context.failure, or context.done methods.
  • console can be used to log error and debug messages, and logs can be viewed using the gcloud get-logs command.

Deploying Cloud Functions

Cloud Functions can be deployed using gcloud deploy from 2 locations:

  1. Local filesystem: You can create your function locally and use gcloud to deploy it. (One caveat is that you need to create a Cloud Storage bucket for gcloud to store your function before it can deploy it.)
  2. Cloud Source repository: You can put your function into a Cloud Source repository (a Git repository hosted on Google Cloud Platform) and deploy it from there using gcloud.
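A deployment from the local filesystem might look something like this. This is a sketch: the function, bucket, and topic names are illustrative, and since the product is still in Alpha, the exact command and flags may differ.

```shell
# Deploy a local function, staging it through a Cloud Storage
# bucket and wiring a Pub/Sub topic as its trigger.
gcloud alpha functions deploy helloPubSub \
    --bucket my-staging-bucket \
    --trigger-topic my-topic
```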

Triggering Cloud Functions

Cloud Functions can be triggered (async or sync) in 3 ways:

  1. Cloud Pub/Sub: A new message to a specific topic in Cloud Pub/Sub (async).
  2. Cloud Storage: An object created/deleted/updated in a specific bucket (async).
  3. HTTP POST: A simple HTTP POST (sync). (This requires an HTTP endpoint on the Cloud Function, which is created by specifying the --trigger-http flag when deploying the function.)


Long time and new beginnings at Skype

London Eye

I can’t believe it’s been more than 8 months since I last posted. It’s been hectic both personally and professionally, but that’s no excuse, as life is always hectic, and I hope to get back into the habit of posting more often. There’s a lot to share.

A lot has changed since the last time I posted, but the biggest change is that I decided to leave Adobe after 6.5 years. I had a wonderful time at Adobe, but I felt like it was time to try something new, so I accepted a new role at Skype. I always loved and relied on Skype for communicating with family, friends, and more recently with co-workers in the US and Switzerland when I was working remotely from Cyprus, and I thought it’d be cool to be part of such a positive brand and work on something that millions of people use every day. I started working for Skype on April 15 as a Senior Software Development Engineer in London.

If you didn’t know, Skype is part of Microsoft now, and as a Mac and Java guy, it’s been quite an experience to switch to Microsoft land. I plan to share my experiences of switching from Java to C#, from Eclipse to Visual Studio, from Amazon Web Services to Windows Azure, and from Android to Windows Phone 🙂

At Skype, I’m part of a group split between London and Palo Alto, and we do technology research and incubation. The coolest part of my new role is that I get to play with different programming languages, frameworks, and technologies, and I’m no longer constrained to just Java. This means that I’ll be able to share more diverse technologies with my followers, like Node.js, .NET/C#, Azure, etc.

Software lessons learned in 2011

It’s hard to believe that it’s been a year since my Software lessons learned in 2010 post but it’s time again to reflect on lessons learned in software development in 2011.

Build on what you already know

I worked on a number of diverse client- and server-side features in 2011, and I was pleased to see that the lessons I outlined in 2010 were very useful in keeping me focused and guiding me in the right direction throughout the year. For example, I was tasked with implementing a JavaScript/HTML5 client library for Data Services, but the problem was that I didn’t really know JavaScript all that well. I tackled that by allocating enough time for research and prototyping, and in the end, we ended up with a great-performing JavaScript library built on Google Closure. Another example is specs. I made sure the specs of the features I worked on were up to date at all times, and in the end, I was pleased to see that our documentation and QA teams had almost no questions for me about the features because everything was outlined in detail in the specs.

I know this sounds obvious but wisdom accumulates over time and learning new things should not come at the expense of forgetting what you already know. Keep good lessons around and keep building on them to get better at what you do.

Don’t get attached

2011 has definitely been an interesting and challenging year, with Adobe’s announcement on Flash mobile, Flex/BlazeDS going to Apache, and re-orgs. In the re-org, I got assigned to a new group with a new technology stack, so it’s been quite a learning experience and adjustment for me towards the end of the year. The announcements about Flash mobile and Flex definitely made a lot of people think about the choice of technology in their organizations/projects, myself included.

One big lesson I learned is to never get attached to a particular technology or group. Technologies come and go, and even though one might be better at something than the others, no technology is better at everything, so it’s not wise to invest a lot of time in one technology at the expense of ignoring the rest. When the day comes for that technology to become obsolete (and it will happen), you don’t want to feel abandoned or lost. Similarly, you might be working on the most interesting project ever with the smartest people around, but don’t let the daily grind fool you into thinking that you’ll do this forever. It’s always wise to think about and plan for the next step in your career.

Treat tests as part of the product

Most of the time, the “product” is thought to be what gets shipped to the customer, but I think this is a mistake in software craftsmanship. What gets shipped to the customer is a small portion of what is actually needed to build something shippable. Without unit tests, integration tests, performance tests, documentation, build scripts, etc., it’s impossible to ship anything decent. By not treating these as part of the product, we often let them fall to a lower standard than the actual product, and that’s a mistake because tests eventually start rotting, which makes the actual product harder and harder to change and maintain. I suggest we give tests the same amount of attention, dedication, and time by considering them part of the product; you’ll be surprised by the results if you do.

One example is my Java client SDK. In 2011, I wrote a Java client SDK, but this time, instead of writing a few unit tests and handing it over to QA, I wrote a fully automated unit test suite that tested every single piece of functionality in the SDK. It took time and effort to cover everything and make sure it ran fast and consistently, but in the end, it paid off big time. Shortly after the Java client SDK, we decided to work on an Android client SDK, then a JavaScript/HTML5 client SDK, then an Objective-C client SDK, and we simply ported these tests to the respective languages. Throughout the process, we ran into issues, and it was extremely useful to have a fully automated, fast test suite readily available for the whole team, developers and QA alike. If something needs to change in any of the clients now, we make the change with no fear because we know that there are solid unit tests behind us to make sure everything still works. Investing in testing pays off in ways that you can’t even anticipate, so why not treat tests as part of the product?

Just do it

I wanted to work on a side project, and I spent a lot of time in 2011 reading different technology blogs about different technology stacks, RIA frameworks, servers, mobile development platforms, etc. in order to find the right fit for my project. In the end, I learned a lot, but I didn’t have anything tangible until late in the year, when I actually started working on the project. Reading about things is definitely useful, but it’s not as rewarding as actually building something. After I started to actually build my project, I realized that I was learning more, and I actually felt like I was doing something more tangible and useful; I was solving a real problem rather than just reading up. So, my plan for 2012 is to read less, do more, and get more side projects finished, because in the end, shipping quality software that solves something useful is what matters.

Happy and prosperous 2012!