Wednesday, March 11, 2009

Application Delivery Controller and cloud integration

My colleagues ask me this question: "What is needed to integrate
application delivery controllers into cloud computing data centers?"
I recently read about a switch vendor claiming to sell cloud-ready
layer 2 switches. After going through their website, I felt there
was nothing special about them compared to their competitors; they
were simply marketed with eye-catching cloud computing terms.

Coming to the load balancer market, vendors pitch customers on the
idea that application intelligence, optimization, and complex
configuration tools are needed for cloud computing. There are already
specialized devices for those jobs. The application delivery
controller should not meddle with applications by trying to speak
their language.


In my opinion, any network vendor claiming to be cloud ready should
have the following.


Scalability: Generally, load balancers support up to 1024 real servers.
That is not enough to position a load balancer in cloud data centers,
where several thousand servers will be running. The load balancer
must scale to several thousand real servers, virtual servers,
IP interfaces, and so on.
The load balancer must not become a bottleneck to cloud performance.
For example, it should be able to run health checks against
thousands of servers without its CPU hitting 100%.
It should also support a large routing table and session table to
handle the huge traffic expected in cloud data centers.
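As a rough illustration of why health checks at this scale need not peg the CPU, here is a minimal sketch of staggered, bounded-concurrency TCP probes. The function names and the concurrency limit are my own invention, not any vendor's implementation; the idea is simply that a semaphore keeps the number of in-flight probes flat no matter how many backends exist.

```python
import asyncio

# Hypothetical sketch: probe many backends concurrently, with a semaphore
# capping in-flight connections so CPU and socket usage stay bounded
# even with thousands of real servers.
async def check_server(host: str, port: int, sem: asyncio.Semaphore,
                       timeout: float = 2.0) -> bool:
    async with sem:  # at most `max_concurrent` probes run at once
        try:
            reader, writer = await asyncio.wait_for(
                asyncio.open_connection(host, port), timeout)
            writer.close()
            await writer.wait_closed()
            return True
        except (OSError, asyncio.TimeoutError):
            return False

async def sweep(servers, max_concurrent: int = 500):
    sem = asyncio.Semaphore(max_concurrent)
    results = await asyncio.gather(
        *(check_server(h, p, sem) for h, p in servers))
    return dict(zip(servers, results))

# Example: 192.0.2.1 is a reserved TEST-NET address (RFC 5737),
# so the probe fails and the server is reported as down.
statuses = asyncio.run(sweep([("192.0.2.1", 80)]))
```

A production health checker would layer HTTP or application-level checks on top of this, but the bounded-concurrency pattern is the same.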


Ease of configuration: Load balancers must support a simple XML
interface so that external management applications can configure the
load balancer's settings. For example, many LB vendors have come up
with applications that work with VMware's Virtual Center (VC). These
small applications integrate with VC and add new servers to a server
farm when traffic surges. Cloud data centers do not consist of just
load balancers and servers; they comprise many network devices, and
all of these devices need to be managed in a simple way.
Instead of independently deciding to add a new server to a server
farm, load balancers must provide a simple XML interface for more
capable external applications that can understand and talk to many
more network devices. Load balancers must also evolve to be
independent of any particular virtualization vendor.
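To make the XML-interface idea concrete, here is a sketch of the kind of payload an external orchestrator might build and push to a load balancer to add a real server to a farm. The element and attribute names below are invented for illustration; a real ADC would publish its own schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical schema: "lb-config", "server-farm", "real-server" are
# illustrative names, not any vendor's actual XML API.
def add_server_request(farm: str, ip: str, port: int) -> bytes:
    root = ET.Element("lb-config")
    farm_el = ET.SubElement(root, "server-farm", name=farm)
    server = ET.SubElement(farm_el, "real-server")
    ET.SubElement(server, "address").text = ip
    ET.SubElement(server, "port").text = str(port)
    ET.SubElement(server, "enabled").text = "true"
    return ET.tostring(root, encoding="utf-8")

payload = add_server_request("web-farm-1", "10.0.0.21", 8080)
# An external management application would POST this payload to the
# load balancer's management endpoint, typically over HTTPS.
```

The point is that the decision to scale out lives in the orchestrator, which can see the whole data center; the load balancer only exposes a simple, machine-consumable interface.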


Virtualized CLI and reports: The CLI must be remotely administrable
and integrate with the cloud management tool of choice. The CLI
should be virtualized so that each subscriber of the cloud data
center sees only their own settings, as if they had a dedicated box.
Reports must likewise be generated per subscriber. That makes life
easier for administrators; for example, an admin may want to
decommission a subscriber's configuration after a specific job is
completed. Per-subscriber reports also let admins calculate the cost
per subscriber based on the services used, bandwidth consumed, and
so on.
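As a toy example of the per-subscriber cost calculation such reports would enable, here is a sketch with invented record fields and rates (they stand in for whatever the provider actually meters and charges):

```python
from dataclasses import dataclass

# Hypothetical accounting record: field names and rates are invented
# for illustration, not any vendor's actual report format.
@dataclass
class UsageRecord:
    subscriber: str
    requests: int    # requests served through the subscriber's virtual servers
    bytes_out: int   # bandwidth consumed, in bytes

def monthly_cost(rec: UsageRecord,
                 per_million_requests: float = 0.50,
                 per_gb: float = 0.08) -> float:
    # Bill request volume and bandwidth separately.
    return (rec.requests / 1_000_000 * per_million_requests
            + rec.bytes_out / 1_000_000_000 * per_gb)

bill = monthly_cost(UsageRecord("tenant-a", 4_000_000, 50_000_000_000))
# 4M requests at $0.50/M plus 50 GB at $0.08/GB = 2.00 + 4.00 = 6.00
```

With virtualized reporting, each subscriber's counters are already separated, so the billing step reduces to arithmetic like this instead of untangling a shared device's logs.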

The points above are a few things load balancers must have before
they can be deployed in cloud computing data centers today.

1 comment:

oweng said...

Ravi,

I agree - any load balancer claiming it can be run in the cloud must be able to scale to tens of thousands of dedicated users (each user has full admin access to their own load balancer). I'd add an important point - the scaling must be done on the x86 infrastructure the cloud is built on. It's no good if you have to buy appliances or single-purpose blades to scale. The whole business model of the cloud is that you invest in general purpose, reusable compute, network and storage and carve it up dynamically according to customer demand.

As far as I know, the only way to achieve this is to use a software load balancer and give each cloud user their own copy as a virtual machine. Zeus offer this with their ZXTM ADC and their SPLA pricing, and Joyent are one cloud provider to take it up.

Owen