Nov 05 2009

In my last blog, I wrote about how Windows Azure could benefit the customer by decoupling the application from the infrastructure, allowing the customer to concentrate on application logic alone. So this time I decided to blog about the technical implementation of Windows Azure. I am new to this, so please bear with me if it's not detailed enough.

Here is a diagram showing the architecture of how Azure works.

As you can see in the diagram, Windows Azure separates the application from the underlying hardware. There is a fabric controller which acts as an intermediary between the application and the hardware. This fabric controller internally uses services to perform many activities like load balancing. The fabric also monitors the state of the servers: if a server fails, it switches the VM to another server in real time, so no downtime is involved. The servers installed in the data center are not high-end machines. They have medium processing power and no RAID setup.
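To make the failover idea concrete, here is a toy sketch of what the fabric controller's job looks like. This is purely illustrative, not the actual Azure fabric controller: the class, method names, and placement logic are all invented for the example.

```python
# Toy sketch of fabric-controller-style failover (illustrative only;
# not the real Azure fabric controller).

class FabricController:
    def __init__(self, servers):
        # Map each server name to the list of VMs it hosts.
        self.placement = {server: [] for server in servers}

    def deploy(self, vm, server):
        self.placement[server].append(vm)

    def handle_failure(self, failed_server):
        # Move every VM off the failed server onto the least-loaded
        # healthy server, so the application keeps running.
        orphans = self.placement.pop(failed_server)
        for vm in orphans:
            target = min(self.placement, key=lambda s: len(self.placement[s]))
            self.placement[target].append(vm)
        return orphans

fc = FabricController(["server-a", "server-b"])
fc.deploy("app-vm-1", "server-a")
fc.deploy("app-vm-2", "server-b")
fc.handle_failure("server-a")
print(fc.placement)  # app-vm-1 now lives on server-b
```

The key point the sketch captures is that the application never talks to a specific physical server; the controller owns the mapping and can rewrite it when hardware fails.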

One more important component of Azure is the Storage Services. Storage in Azure comes in three forms – blobs, tables and queues. The reason we need this kind of storage is that the data must be available for copying to multiple VMs on the fly. The simplest way to store data is in the form of blobs, with a maximum size of 50 GB. However, we may need a more structured data-handling mechanism at times. This is where tables and queues come in. Tables are hierarchical data structures, not relational ones. They are indexed tables which are queried with LINQ rather than SQL. These tables can reside on several servers at the same time. Also, to ensure data is not corrupted, it is stored in three different locations. This more than compensates for the absence of a RAID setup in the servers.
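As a rough mental model of table storage, here is a sketch in Python. The entity shape (a partition key plus a row key) matches the description above, but the field names and data are made up, and the list comprehension simply stands in for the LINQ-style query – this is not the real storage API.

```python
# Illustrative sketch of table-style storage: entities addressed by a
# partition key and a row key, queried declaratively (the rough Python
# analogue of querying Azure tables with LINQ). All names are invented.

songs = [
    {"PartitionKey": "rock", "RowKey": "001", "Title": "Song A"},
    {"PartitionKey": "rock", "RowKey": "002", "Title": "Song B"},
    {"PartitionKey": "jazz", "RowKey": "001", "Title": "Song C"},
]

# "Query" the table: filter on the indexed partition key.
rock_songs = [e["Title"] for e in songs if e["PartitionKey"] == "rock"]
print(rock_songs)  # ['Song A', 'Song B']

# Durability: each write is kept in three separate locations,
# standing in for the triple replication described above.
replicas = {loc: list(songs) for loc in ("location-1", "location-2", "location-3")}
```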

Queues are somewhat different from both blobs and tables in that they are used as a means of communication. Suppose you have an application for downloading and playing MP3 songs. Any request for a particular song would be stored in a queue. This request will be read by a server, processed, and removed from the queue. The queue system decouples the front end from the back end, and as long as both are able to understand and process queue messages, they can work irrespective of the technology.
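The MP3 example can be sketched in a few lines using Python's standard-library queue. This is only a single-process stand-in for a cloud queue service, and the message format is invented, but it shows the decoupling: the two sides share nothing except the agreed message shape.

```python
import queue

# Minimal sketch of the MP3 example: the front end enqueues song
# requests, and a back-end worker reads, processes, and removes them.

requests = queue.Queue()

# Front end: a user asks for a song.
requests.put({"song": "song-123", "user": "alice"})

# Back end: pick up the next request and process it.
msg = requests.get()
print(f"Serving {msg['song']} to {msg['user']}")
requests.task_done()  # message handled and removed from the queue
```

Either side could be rewritten in a different language or framework without touching the other, which is exactly the decoupling the queue buys you.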

Well, that's it for today. More to follow about Windows Azure services.

Nov 04 2009

Though Azure has been a “hot” word for nearly a year now, I didn't really look into it much. Sure, everyone was talking about Cloud Computing and how it would define the web in the years to come, but there was no motivation for me to go and dive head first into Azure. That motivation came today at the Tech Days event, at a session on Azure. So I decided to write a blog on how Azure would be beneficial from a customer's point of view. Will follow it up with a post on the technical details.


Azure is Microsoft’s very own Cloud Computing platform. It will allow organizations to focus their attention solely on their business purposes rather than IT-centric concerns like capacity, load balancing etc. So the only activity in maintaining an application would be development and testing, and Azure would do the rest of the magic for you. This means fewer resources would be required for maintaining an application, and you won't require a full-time support guy on call just in case something goes wrong.

So how does this work? Microsoft has many data centers across the US and is in the process of opening many more. These data centers house tens of thousands of servers, which are used for virtualization and for creating the VMs on which the client application is hosted. Each VM is a system with a 2 GHz processor and 250 GB of disk space, which acts as the server for your application. Suppose there is a sudden surge in traffic and the application VM is getting overloaded; the client can immediately request another VM instance (it costs extra money, of course). In real time, a new VM instance would be created and the application copied to it, and with that your site has instant extra capacity, with absolutely no downtime at all! So no more need to plan for that sudden festive-season rush to your site and pay for the extra hardware all year long. Azure's pay-as-you-go model is extremely economical and would work out far cheaper than what most companies currently spend on their IT infrastructure.
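A back-of-the-envelope way to see the scale-out logic: decide how many instances you need from the current traffic, instead of provisioning for the yearly peak. The per-VM capacity figure below is an invented number purely for illustration.

```python
import math

REQUESTS_PER_VM = 1000  # assumed capacity of a single instance (made-up figure)

def instances_needed(incoming_requests):
    # Scale out just enough to absorb the current traffic,
    # rather than paying for peak capacity all year long.
    return max(1, math.ceil(incoming_requests / REQUESTS_PER_VM))

print(instances_needed(250))   # 1 — normal traffic
print(instances_needed(4500))  # 5 — festive-season rush
```

Under pay-as-you-go, the four extra instances in the second case only cost money while the rush lasts.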

The first thought that came to my mind was whether organizations would be able to host Azure in their own datacenters rather than relying on Microsoft to provide hosting. But alas, it is not to be. Steven Martin wrote on his blog quite some time back, “We don’t envision something on our price list called ‘Windows Azure’ that is sold for on-premises deployment”. I am hoping that this is temporary, until Azure is stable enough to hand over to a third party, and that some day it will be available for on-premises deployment. For very large organizations, it makes a lot of sense to maintain their own data center, which would be more reassuring from a security point of view. And VMs could be shared between applications: instead of having buffer capacity for each application as in the current model, all the applications on the network could share a common buffer which could be interchanged between apps instantly.

Another issue is permissions. On dedicated servers, we have the luxury of operating with full privileges for both Windows and IIS. On Azure, where in spite of having a separate VM a single server could be shared by multiple applications, would customers have the same level of control?

There are many more questions and concerns like these which need to be clarified. The PDC is coming up in another 2 weeks. Let's hope there will be more clarity then.