Author Archives: Mete

Microsoft Azure vs. Google Cloud – Part 2


In my previous post, I talked about the similarities between Microsoft Azure and Google Cloud in their Compute sections. In this post, I want to compare the Storage options the two providers offer.

Microsoft Azure provides the following storage options under its Data & Storage section:

  • SQL Database: Relational SQL DB as a service.
  • Storage: Blobs, tables (NoSQL), queues, files and disks.
  • Redis Cache: High throughput, low latency cache.
  • DocumentDB: NoSQL document DB as a service.
  • StorSimple: Cloud storage for enterprises.
  • Azure Search: Search as a service for mobile and app development.
  • SQL Data Warehouse (Preview): Elastic data warehouse as a service.

Google Cloud provides the following storage options under its Storage section:

  • Cloud SQL: MySQL DB in the cloud.
  • Bigtable: High-volume, low-latency data store (no queries).
  • Datastore: Scalable store for NoSQL data.
  • Cloud Storage: Binary/object store.
  • Memcache: Key/value cache.
  • Persistent Disk: Network attached block storage. 

Microsoft Azure’s and Google Cloud’s storage options are not listed in a similar way, so let’s break them down in a way that makes sense for both.


Relational database

Azure’s SQL Database and Google’s Cloud SQL are pretty much the same thing: a relational database in the cloud, except the former runs Microsoft SQL Server whereas the latter runs MySQL. Azure also has a newer offering called SQL Data Warehouse, an enterprise-class distributed database for petabyte volumes of data, but it’s currently in preview.


NoSQL

In Google Cloud, we have Bigtable and Datastore for NoSQL support.

  • Bigtable is a scalable, distributed, highly available structured storage system. It’s not really a database, as it doesn’t support a query language. It provides strong consistency for single-row operations and eventual consistency for multi-row operations. In the Bigtable data model, a row has a key and one or more columns, so it’s basically a key/value store. It supports CRUD on a single row, preserves single-row consistency, and allows range scans by key. It achieves scalability through automatic sharding, reliability through replication, and performance through reduced lock granularity and co-location of data.
  • Datastore is built on top of Bigtable, and it’s a database for entities and the properties on those entities. It supports queries by doing index scans on the property being queried (not on the actual underlying Bigtable), so all complex queries require a composite index table to be built up front (single-property indexes come for free). This means that the performance of a query depends on the size of the result set, rather than the size of the whole data set.
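To make the two models concrete, here is a tiny, self-contained Python sketch: toy code, not the real Bigtable or Datastore APIs. It shows a key-ordered row store with single-row CRUD and range scans, plus a Datastore-style single-property index built on top of it. All names here are illustrative.

```python
class ToyRowStore:
    """Bigtable-style model: each row is key -> {column: value};
    operations are single-row CRUD plus range scans ordered by key."""

    def __init__(self):
        self._rows = {}

    def put(self, key, columns):
        self._rows[key] = dict(columns)   # a single-row write is one atomic unit

    def get(self, key):
        return self._rows.get(key)

    def delete(self, key):
        self._rows.pop(key, None)

    def scan(self, start, end):
        # Range query by key: results come back in key order.
        return [(k, self._rows[k]) for k in sorted(self._rows) if start <= k < end]


def build_index(store, prop):
    """Datastore-style single-property index: a sorted list of
    (property value, row key) pairs, precomputed before any query runs."""
    return sorted((cols[prop], key)
                  for key, cols in store._rows.items() if prop in cols)


def query_index(index, value):
    # An index scan touches only matching entries, so the cost is
    # proportional to the result set, not the whole data set.
    return [key for v, key in index if v == value]


store = ToyRowStore()
store.put("user#001", {"name": "ada", "city": "london"})
store.put("user#002", {"name": "alan", "city": "cambridge"})
store.put("user#003", {"name": "grace", "city": "london"})

in_range = store.scan("user#001", "user#003")                 # rows 001 and 002
londoners = query_index(build_index(store, "city"), "london")
```

The index is just a sorted list of (property value, row key) pairs, which is why the cost of a Datastore-style query tracks the result set rather than the whole table.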

Azure provides its own Table and DocumentDB for NoSQL support.

  • Table (listed under Storage) is quite similar to Datastore in functionality. I haven’t run extensive tests on either Azure Table or Datastore to compare their scalability promises, though.
  • DocumentDB is a NoSQL document database service designed from the ground up to natively support JSON, and it is a unique NoSQL offering from Azure.


Binary/object store

Google provides Cloud Storage as a binary/object store. In the Azure world, this functionality is provided via Blobs (listed under Storage).

Block storage

Google Cloud also provides Persistent Disk, which is network-attached block storage; Azure provides Disks for similar functionality.

Cache (or in-memory key-value store)

Azure provides Redis Cache, basically an in-memory database based on open source Redis. Google Cloud offers a key-value cache via Memcache.
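Both services are typically used in the same cache-aside pattern: check the cache first, fall back to the backing store on a miss, then populate the cache with a time-to-live. Here is a minimal Python sketch of that pattern, with a plain dict standing in for the cache server; the names are illustrative, not a real Redis or Memcache client API.

```python
import time

def cached(fetch, ttl_seconds=60):
    """Cache-aside sketch: wrap a slow lookup function with a TTL cache."""
    cache = {}  # key -> (value, expiry timestamp)

    def get(key):
        entry = cache.get(key)
        if entry is not None and entry[1] > time.time():
            return entry[0]                        # cache hit
        value = fetch(key)                         # cache miss: hit the backing store
        cache[key] = (value, time.time() + ttl_seconds)
        return value

    return get

# Usage: count how often the "slow" backing store is actually hit.
calls = []
def slow_lookup(key):
    calls.append(key)       # stands in for a database query
    return key.upper()

get_user = cached(slow_lookup)
first = get_user("mete")    # miss: goes to the backing store
second = get_user("mete")   # hit: served from the cache
```

The point of both Redis Cache and Memcache is exactly this: the second read never touches the database.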


Other

Azure provides StorSimple, cloud storage for enterprises; Google Cloud does not seem to have anything similar. Azure Search is also listed under Storage, which is odd, considering search is not really a storage feature. We’ll cover Azure Search when we talk about services.

This sums up our storage investigation in Azure vs. Google Cloud. In the next post, I want to look at different networking options provided by the two cloud platforms.


Posted on November 3, 2015 in Azure, Cloud Platforms, Google Cloud


Microsoft Azure vs. Google Cloud – Part 1


I’ve been working with the Microsoft Azure cloud platform for a while now, and in my free time I’ve also been looking into Google’s Cloud Platform out of curiosity. It’s interesting to see how the two platforms compare. On one hand, both platforms provide similar functionality, usually under different names. On the other hand, there are some unique offerings from Microsoft Azure and some unique offerings from Google Cloud.

In this series of posts, I want to go through Microsoft Azure and Google Cloud and highlight similarities and differences as we go along. My goal is not to rank one cloud platform as better or worse than the other. Rather, I want to look at both platforms technically, clarify similarities and differences, and hopefully make the lives of fellow cloud developers a little bit easier.

Many organizations come to the cloud for compute, and not surprisingly, both Microsoft Azure and Google Cloud group a number of features under “Compute”, so this is a good starting point for our investigation.

In Microsoft Azure, the Compute section contains the following features:

  • Virtual Machines (IaaS): Fully customizable Windows and Linux VMs and apps (typically Microsoft apps, some Linux apps, Docker and a few more).
  • Cloud Services (PaaS): Web and Worker Roles. Support for Java, Node.js, PHP, Python, .NET, Ruby. Autoscaling, load balancing and health monitoring of instances with automatic OS and application patching.
  • Batch: Cloud-scale job scheduling and compute management. Stage data and execute compute pipelines. Takes care of scheduling, queuing, dispatching and monitoring jobs.
  • RemoteApp: Install Windows apps on a Windows Server in the cloud and let Remote Desktop clients access those apps on their Internet-connected laptop, tablet, or phone.

In Google Cloud, the Compute section contains the following features:

  • Compute Engine (IaaS): Unmanaged Linux VMs running in Google’s infrastructure.
  • Container Engine: Run Docker containers in the cloud.
  • App Engine (PaaS): Managed platform for building web and mobile backends. Support for Java, Python, PHP and Go. Autoscaling and health monitoring of instances with automatic OS updates.

In terms of similarities, conceptually Virtual Machines = Compute Engine and Cloud Services = App Engine. Azure does not have a separate offering for Docker containers, but Virtual Machines can be configured to run Docker, so Virtual Machines can be thought of as covering Container Engine’s functionality as well.

In terms of differences, RemoteApp is a Windows-specific service, so it’s not surprising that Google Cloud Platform does not have anything similar. Azure’s Batch service has no corresponding service under the Google Cloud Compute section, but Google Cloud has something called Cloud Dataflow under its Big Data section that looks very similar. We will cover the Big Data section of Google Cloud in later posts.

This wraps up our first post on the topic. In the next post, I will look at the different storage options provided by Microsoft Azure and Google Cloud.


Posted on November 2, 2015 in Azure, Cloud Platforms, Google Cloud


Unnecessary complexity, why does it happen?

In the software industry, there’s a strange disease called complexity, especially complexity that exists for no good reason. I don’t know exactly why it happens, but I know from experience that it’s widespread. Software development is a complex process, and the problems we deal with are somewhat complex as well. But these problems are hardly rocket science; they are nothing compared to space exploration or cancer research, for example.

Yet, every day, I see unnecessary complexity scattered around software projects. It’s usually a hassle to read someone else’s code. Trying to use a new framework means a steep learning curve. A new programming language? That’s even worse. Something that intuitively feels like it should be simple usually ends up being quite complex to implement. Why is that? I don’t know the exact answer, but I have some ideas.

The first reason has to do with the people involved in software development. These are usually smart, motivated people who like challenges and puzzles; otherwise, they wouldn’t be in the business of writing thousands and thousands of lines of code in front of a screen, day in and day out. Smart people have a tendency to overthink, overengineer, and overanalyze. While regular people have difficulty grasping complexity, these people have difficulty appreciating simplicity. When you have so many smart people working on the same problem, complexity arises naturally, and simple solutions are often forgotten or ignored.

The second has to do with human nature. People are social animals with egos, and they like to assert their dominance. In animals, dominance is often asserted with raw physical power, whereas in humans, at least in software development, it’s more about intellectual power. In a software project, whoever shows mastery of complex problems and solutions is often assumed to be competent. In design meetings, everyone loves to yell out complex ideas and buzzwords to assert their competence within the group. While mastery of complex concepts is definitely a good attribute, a better one is the ability to find the simplest solution to a given problem; the latter is often ignored because it doesn’t help in asserting dominance.

The third has to do with the overall lack of discipline in the software design process. While designing software systems, there are a lot of moving parts, a lot of people involved, and limited time and patience. The whole process is often rushed because of those constraints. In that environment, it’s easier to find a “good-enough” solution and move on. That works for one part of the problem, but when you add up many “good-enough” solutions, you usually end up with an over-engineered, overly complex system. There’s almost certainly a simpler and more elegant solution, but finding it requires a more disciplined and rigorous design process that hardly exists in a typical software shop.

Given all this, simple problems that should have simple solutions end up with unnecessarily complex ones. Complexity is like a virus: once it infects an inner layer of the system, all the outer layers eventually suffer from the same problem. Pretty soon, you have an overly complex system that nobody knows how to maintain, where every change causes a ton of other changes. Unnecessary complexity turns the optimistic, positive, creative process of software development into a fearful, dull, and frustrating one for creators, testers, and users alike.

All of this happens because we didn’t take the time to find a simple solution and we didn’t resist the waves of complexity during software design. Next time you design a piece of software, make sure you spend some time finding a simple solution: one that is easy to grasp, easy to explain, easy to implement, and easy to test. Such a solution exists; you just need to spend the time to look for it, and you need to have the courage and will to resist the evil of unnecessary complexity. The success of your projects depends on it.


Posted on December 25, 2014 in software industry


Accountability and Sense of Ownership in Software Development

One of the most overlooked concepts in software development is accountability. Accountability means a team is totally responsible for the successful implementation and execution of a piece of software or a service from beginning to end. From the time the software/service is envisioned and designed, to the time it is implemented and used by end users, a single group of people is involved in the whole process, and once it goes out the door, that team is held accountable for the successes and failures of the end result.

This is the only way to make sure there’s a sense of ownership in a software project. A sense of ownership ensures that the people on a project feel some kind of emotional attachment and responsibility towards the end result. Without a healthy dose of it, it’s very easy for teams to develop an “ain’t my problem” kind of attitude, and that’s not healthy in any project.

In big software companies, priorities often change, and as a result some teams are asked to stop working on a certain project, transition it to another team, and start a new project. To upper-level management, this is not a big deal; they’re just moving “resources” around from one project to another. But anyone who has done any kind of software development knows that this is much more than simply moving resources around.

By switching teams and projects around, accountability and sense of ownership in a software project are ruined. The team leaving the project will no longer care about it or be held accountable for it, because they will have a new project to work on. The team inheriting the project cannot be held accountable or feel ownership either, because they were not involved from the initial design. In the end, customers suffer from poor quality. After a few project shuffles, people in the company gradually stop caring about accountability or sense of ownership, because they expect that sooner or later their current project will be reshuffled and they will not be the ones accountable for that nasty production performance bug.

To sum up, if someone asks you how to make sure a software project fails, the answer is easy: assign it to one team, then reassign it to another team in the middle. In the end, you’ll have an orphan piece of software that nobody feels responsible or accountable for, and you’ll have planted the seeds of a non-caring, unaccountable culture. It’s a sure way of killing current and future software projects.


Posted on November 19, 2014 in software industry


Team formation: another broken process in software development

In my previous post, I talked about why I think the technical interview process is totally broken. In this post, I want to talk about yet another broken process in the software industry: how teams are assembled (or more like misassembled!) for a software project.

In a typical software company, a person is hired by a team to work on a particular project within that team. This makes total sense. However, after some time, things change. The actual time frame varies from place to place; it could be as soon as one month, or longer, like a year or more. Either way, change is inevitable: the project either ends or completely changes, or the team gets a completely new assignment. This is the point where it gets really weird. Now, the person who was hired to work on a particular project is asked to work on a completely new one. The hired person is basically stuck with the team he/she was hired into, working on a completely different project that he/she did not sign up for.

If the project and the team sound interesting, he/she can carry on and all is good. However, the problem arises when the project is not interesting, or does not align with the person’s goals, or the team is not what the person hoped it would be. At this point, the hired person has a few choices, but none of them are good. He/she can try to get out of the project/team, but that means finding something else to do within the company. In big companies, there’s a lot of bureaucracy around switching to a new group. You usually need to get your manager’s approval (which is quite awkward as it is) and then go through the process of getting yourself accepted in the new group. It’s just not as straightforward as it should be.

The other choice is to quit the company and join some other company, which is an even bigger deal, with more interviews and more process. Either way, you end up spending a lot of time not doing any real work. So, what usually happens is that many people carry on, accept their fate, and continue working on uninteresting projects with uninspiring people because it’s easier for the time being. This naturally results in uninspired, mediocre work, and it goes on until either the company or the person realizes how ridiculous the whole situation is. The person either quits or gets fired.

I’ve seen this happen a few times to solid people, and it is quite sad. It does not happen because the person is incompetent; rather, it happens because there is a misalignment between the person’s abilities and his/her place in the company, and there isn’t a good, open process to fix that alignment within the company. This is especially true in big companies.

I think the solution is not that difficult, though. In big companies, there should not be strict hierarchies with solid lines between managers and developers. The weird notion that a single manager effectively owns a software engineer within a particular group should change. Instead, software engineers should be treated more like free agents. They should be assigned to projects for a period of time, and when that time elapses, there should be an opportunity for an open discussion among the employee, the manager, and possibly someone from HR to figure out what makes sense as the next step. Software engineers should be presented with project choices within the company at that point in time, and they should have the real option of staying in the current group with the current project or moving to another project with little or no process. The company hierarchy should not dictate what people do and where. Rather, business needs should dictate what the projects are, and then people should be able to gather around those projects freely, somewhat like the open source project model, but within an organization.

Big companies will tell you that they already encourage people to move around within the company, but in reality this is far from true. First, the amount of process and bureaucracy involved is so high that many people do not even bother. Second, the culture at these companies does not encourage moving around: the very existence of a strict hierarchy dictates the boundaries that an engineer needs to adhere to, and it is quite difficult to get out of those boundaries. Third, a software engineer is not presented with other choices once he/she is hired into a role, and there is hardly ever open communication about the employee’s next step. Employees end up sneaking around the company hierarchy to find the right alignment, and it just should not be like that.

Open source software is so successful mainly because of the people involved in those projects. Most open source contributors put time and effort into those projects because they really care about them, not because of some arbitrary hierarchy. The same should happen in software companies as well. The company that figures out a way to let people voluntarily gather around projects they care about will create the winning, innovative culture that is so lacking in big companies nowadays.


Posted on September 5, 2014 in software industry


Why is the technical interview process so broken in the software industry?

I’m convinced that the technical interview process, the process a software team uses to select good candidates to add to a project, is totally broken in our industry. It’s broken mainly because most technical interviews try to answer only one question about a software developer (is he/she a good coder?), whereas software development requires skills well beyond just coding, and those skills are completely ignored in most interviews.

First of all, we need to talk a little about the characteristics of a good software developer. The list might vary a little from person to person, but overall I think most developers will agree with the following:

  • Great coder. Writes maintainable, testable code for others to read and understand.
  • Passionate. Cares about the work and tries to produce the best work possible.
  • Team player. Cares about team members, seeks help and provides help to the team as needed.
  • Independent. Pulls his/her own weight and gets stuff done for the team.
  • Takes initiative. Does not just try to cruise along.
  • Great communicator. Communicates clearly and concisely both verbally and written.
  • Fun to be around. He/she is someone you would like to hang out with after work.

Over the years, I’ve found that rockstar developers have most, if not all, of the characteristics above. What’s alarming is that only the first point (great coder) is traditionally measured in technical interviews. A candidate goes through hours and hours of hard technical questions (questions that sometimes the interviewer does not know the answer to; this happens more often than you’d think!), and while coding skills eat up the majority of the time, the remaining non-technical skills are either completely ignored or, at best, guessed at through a series of informal, chatty questions.

But why is that? Why don’t interviews test non-technical skills that are just as important? The main answer I can give from my experience is the lack of time and effort. In a typical interview setting, one has 30 minutes to an hour at most to assess a candidate. That’s barely enough time to get to know the person and gauge his/her technical background. It’s hard to predict in 30 minutes whether someone is passionate or a team player; there are no questions that can answer “Is this an independent person?”. The other part is that it’s much easier for developers to throw a couple of technical questions at a candidate than to really try to get to know the candidate. Coming up with good technical and behavioral questions is hard, and most developers do not have the time or willingness to put their day work aside (where they get assessed at the end of the year for a bonus) and spend time on interview preparation (where they have no incentive for a bonus). As a result, short technical interviews are bound to fail, because they can only test whether the person is a great coder and nothing else.

So, are technical interviews doomed? Not really, if we put more time and effort into the process. In one of the groups I was part of at Adobe, after the initial interview, we used to give candidates a one- to two-day programming exercise to complete at home. The exercise was deliberately vague in order to force the candidate to ask questions and communicate. After the exercise was complete, we’d go through the code together, asking him/her to explain it. This was very valuable: we got to see the candidate’s coding style, got a sense of his/her design and communication style, and got a more thorough view, away from the pressures of a regular interview.

The same idea could be extended even further. Why not bring candidates in for a trial period of a week or two on your project and ask them to fix something? They get paid, of course; you get to work with them as if they were coworkers and let them show themselves in a natural setting rather than an interview setting. At the end of the trial period, the team decides whether they want to work with that person or not. There are practical limitations (what if the candidate is currently employed?), and this is definitely harder and more time-consuming for the team than the traditional interview process, but it could be made to work, and it’s much better than hiring the wrong candidate and having him/her quit in the middle of the project.

The success and failure of a project depends a lot on the people who are part of the project. If we don’t have a good process to select good candidates for our project, what chance do we have to create the best software we can possibly create as a team?


Posted on August 26, 2014 in Interview


Thoughts on performance

Performance can make or break a piece of software; this much is clear. Nobody puts up with an unresponsive client UI or a slow back-end server in today’s age of software abundance and choice. Despite this, performance is often overlooked until late in the release cycle and doesn’t get the attention it deserves. This might not be a big deal in one release cycle, but after a few release cycles you can end up with a slow-moving giant that nobody knows how to fix, instead of the lean, fast machine you used to have. At this point, you either accept what you have, or you take the hit and go through the painful process of profiling, analyzing, fixing, and in some cases redesigning. How does software end up like this in the first place? I can think of four reasons.

First, it’s not easy to sell performance. New features, especially visual ones that people can see and play with, are often much easier to market and sell than subtle yet more important qualities like performance. Two more features in the release cycle look better than a 20% increase in throughput, for example, so performance is not treated as a proper feature but rather seen as a thing to check at the end of the release cycle. As a result, performance does not get the time and resources it needs.

Second, it’s not easy to reason about performance. You need to define what metrics are being measured in the name of performance, what qualifies as acceptable performance, and which use cases performance matters for. This requires a thorough understanding of the software and the use cases around it. It’s hard to get the scope of performance work right; it’s either too broad to implement or too narrow to produce anything useful.

Third, performance work is hard, sometimes harder than implementing the software itself. It usually needs additional tools outside of the software in order to write tests that simulate the agreed-upon use cases and track numbers around those cases. In most places, there is simply not enough time left over outside feature development to build those tools. Even if you have all the tools, you need time to run the complicated performance scenarios, and if the numbers don’t look right, you need time to find out why; the problem can be anywhere in the code. You also need to do all of this again every release cycle, or find the time to implement automated performance tests that can track performance for you. This is a lot of work.
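As a sketch of what “tracking some numbers” can look like at its simplest, here is a toy Python harness that times a scenario and checks it against an agreed budget. The function names and the budget are made up for illustration; a real harness would also control warm-up, machine load, and data sets.

```python
import time

def time_it(fn, runs=100):
    """Run a scenario several times and report the median wall-clock
    duration in seconds; the median is more robust to outliers than the mean."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - start)
    return sorted(samples)[len(samples) // 2]

def within_budget(fn, budget_seconds, runs=100):
    """A minimal 'performance test': fail when the scenario exceeds its
    agreed budget, so regressions surface every release cycle."""
    return time_it(fn, runs) <= budget_seconds

# Usage: a trivial scenario with a deliberately generous budget.
scenario = lambda: sum(range(1000))
ok = within_budget(scenario, budget_seconds=0.05)
```

Automating a check like this in the build is what turns performance from an end-of-cycle scramble into something tracked continuously.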

Fourth, performance is usually not tightly integrated into overall feature development. When a new feature is being developed, there is a lot of focus from Engineering, QA, and Product Management on the new capabilities the feature brings, but not much focus on two questions: 1. How does this feature perform by itself? 2. How does this feature affect the overall performance? The result of ignoring #1 is that a new feature gets designed and developed without performance in mind; the result of ignoring #2 is that the overall performance of the existing system gets worse, which is the bigger problem.

Despite all this, professional software developers have an obligation to design and implement performant software, no matter what the realities of the workplace are. I think that with some effort, performance can be preserved and maintained across release cycles with a few guidelines that I hope to share in a future post.


Posted on June 16, 2014 in Performance, Programming
