
Friday, January 6, 2012

Distributed load sharing with virtual appliances

Cloud computing has been a revolution in the recent past. All network device manufacturers are evolving their products to meet the new requirements, and many networking appliances for firewalls, load balancers, IPS, and so on are now available on the market.

Load Distributors:

Even though VM appliances give network administrators a lot of flexibility in deployment and maintenance, the setback in most cases is scalability, performance, and reliability. Hardware appliances still hold a considerable edge over software appliances in this regard.

To increase scalability, we either need to add more resources to the virtual appliance or deploy multiple instances that can share the load.

With multiple instances, the load must be shared to make the most of them. We can do that by deploying a load distributor appliance, which takes care of distributing traffic among the network appliances.


One such architecture has been released by Embrane; here is a short overview of their products.
But this load distributor must not itself become a bottleneck in performance and scalability. One way to avoid that is to add more CPUs and resources to the load distributor.

There is a good article that discusses the same problem. It suggests adding more functionality, such as ALG-like intelligence, to the load distributor and making use of OpenFlow to create flows in the L2 switches (OpenFlow-enabled switches will be widely used in the future). A fast path thus gets created, and traffic flows directly from the L2 switches to the network service appliances. For new connections, the OpenFlow switches send the traffic to the load distributor so that a flow can be created.
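As a rough illustration of that fast-path idea, the sketch below models a flow table in plain Python (no real OpenFlow library is used, and the appliance names and flow key are made up). New connections miss the table and are punted to the load distributor, which picks an appliance; the "controller" then installs a flow entry so later packets of that connection bypass the distributor.

```python
import itertools

# Hypothetical appliance pool behind the load distributor.
APPLIANCES = ["fw-vm-1", "fw-vm-2", "fw-vm-3"]
_rr = itertools.cycle(APPLIANCES)

# Flow table emulating what the OpenFlow controller would program
# into the L2 switch: 5-tuple -> appliance (the fast path).
flow_table = {}

def load_distributor_pick(flow_key):
    """Slow path: the load distributor chooses an appliance (round robin here)."""
    return next(_rr)

def forward(packet):
    """Switch logic: use the fast path if a flow exists, else punt to the distributor."""
    key = (packet["src_ip"], packet["src_port"],
           packet["dst_ip"], packet["dst_port"], packet["proto"])
    appliance = flow_table.get(key)
    if appliance is None:
        # Table miss: the new connection goes via the load distributor,
        # and a flow is installed for subsequent packets.
        appliance = load_distributor_pick(key)
        flow_table[key] = appliance
    return appliance

# First packet of a connection takes the slow path; the next one
# hits the installed flow and goes straight to the same appliance.
pkt = {"src_ip": "10.0.0.5", "src_port": 40000,
       "dst_ip": "192.0.2.10", "dst_port": 80, "proto": "tcp"}
print(forward(pkt))
print(forward(pkt))
```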

There is still the challenge of a single point of failure. VMware has an option, vMotion, with which the load distributor VM can be moved to another ESX server. For other virtualization solutions, handling this will be challenging.

One would also ask about failures of the L2 OpenFlow switch itself. It is interesting to see what solution fits here, since traditional VRRP solutions work at L3 and above. In the future, will we see backup switches too?

SoftADC as load distributor:

Coming back to load distributors, I feel softADCs can be tuned for that role. A feature-rich softADC can be stripped down to play the load distributor. A simple L7 load balancer appliance may be the best bet to act as the OpenFlow controller, since it already brings advantages such as persistence and failover handling via VRRP. The time to market and ROI should be good, as these products can easily be tuned to act as load distributors. Vendors like Radware, Citrix, etc. should think of adding OpenFlow controller capabilities and pairing them with L2 switches that have OpenFlow-enabled data paths.
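To make the persistence point concrete, here is a minimal sketch, not any vendor's actual algorithm, of hash-based persistence for a load distributor: the same client always maps to the same appliance instance, and adding or removing an instance remaps only a small share of clients. The appliance names are invented.

```python
import bisect
import hashlib

def _hash(value):
    """Map a string to a point on the hash ring."""
    return int(hashlib.md5(value.encode()).hexdigest(), 16)

class PersistentDistributor:
    """Toy hash-ring distributor giving the persistence property an ADC already has."""

    def __init__(self, appliances, vnodes=100):
        self.ring = []  # sorted list of (point, appliance)
        for app in appliances:
            for i in range(vnodes):
                self.ring.append((_hash(f"{app}#{i}"), app))
        self.ring.sort()
        self.points = [p for p, _ in self.ring]

    def pick(self, client_ip):
        """Return the appliance owning the ring segment for this client."""
        idx = bisect.bisect(self.points, _hash(client_ip)) % len(self.ring)
        return self.ring[idx][1]

dist = PersistentDistributor(["adc-vm-1", "adc-vm-2", "adc-vm-3"])
print(dist.pick("10.1.2.3"))  # always the same appliance for this client
```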

Wednesday, March 11, 2009

Application Delivery Controller and cloud integration

My colleagues ask me this question: "what is needed to integrate application delivery controllers in cloud computing data centers?" I happened to read about a switch vendor claiming cloud-computing-ready layer 2 switches. After going through their website, I felt there is nothing special they do compared to their competitors; they simply market it with eye-catching cloud computing terms.

Coming to the load balancer market, vendors pitch to customers that application intelligence, optimization, and complex configuration tools are what cloud computing needs. There are specialized devices to do those jobs. The application delivery controller should not meddle with applications by trying to speak their language.


In my opinion, any network vendor claiming to be cloud ready should have the following.


Scalability: Generally, load balancers support up to 1024 real servers. That is not enough to position a load balancer in cloud data centers, where several thousand servers will be running. The load balancer must scale to several thousand real servers, virtual servers, IP interfaces, and so on. It must not become a bottleneck for cloud performance; for example, it should be able to run health checks for thousands of servers without its CPU hitting 100%. It should also support a huge routing table and session table to handle the traffic expected in cloud data centers.
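As a back-of-the-envelope illustration of the health-check point, the sketch below (server addresses, port, and batch size are made up) shows how thousands of TCP probes can run concurrently in bounded batches instead of one blocking check per server.

```python
import asyncio

async def check(host, port, timeout=2.0):
    """One TCP health probe: healthy if the server accepts a connection."""
    try:
        reader, writer = await asyncio.wait_for(
            asyncio.open_connection(host, port), timeout)
        writer.close()
        await writer.wait_closed()
        return host, True
    except (OSError, asyncio.TimeoutError):
        return host, False

async def sweep(servers, port=80, batch=500):
    """Probe thousands of servers a bounded batch at a time, so one sweep
    does not pin the CPU or exhaust sockets."""
    results = {}
    for i in range(0, len(servers), batch):
        chunk = servers[i:i + batch]
        for host, up in await asyncio.gather(*(check(h, port) for h in chunk)):
            results[host] = up
    return results

# Hypothetical server farm of a few thousand addresses.
servers = [f"10.0.{i // 250}.{i % 250 + 1}" for i in range(5000)]
status = asyncio.run(sweep(servers))
print(sum(status.values()), "servers healthy out of", len(servers))
```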


Ease of configuration: Load balancers must support a simple XML interface that allows external management applications to configure the load balancer settings. For example, many LB vendors came up with small applications that work with VMware's Virtual Center (VC) and add new servers to a server farm when traffic is heavy. But cloud data centers do not have just load balancers and servers; they comprise many network devices, and all of these devices need to be managed in a simple way. Instead of independently deciding to add a new server to a server farm, load balancers must provide a simple XML interface for more capable external applications that understand and talk to many more network devices. Load balancers must also evolve to be virtualization-vendor independent.
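To show the shape such an interface could take, here is a minimal sketch that builds an XML request to add a real server to a farm and posts it to the load balancer's management endpoint. The element names, URL, and farm/server fields are invented for illustration and are not any vendor's actual API.

```python
import urllib.request
import xml.etree.ElementTree as ET

def add_real_server_xml(farm, ip, port, weight=1):
    """Build a hypothetical <add-real-server> request for an LB management API."""
    req = ET.Element("add-real-server")
    ET.SubElement(req, "farm").text = farm
    ET.SubElement(req, "ip").text = ip
    ET.SubElement(req, "port").text = str(port)
    ET.SubElement(req, "weight").text = str(weight)
    return ET.tostring(req, encoding="utf-8", xml_declaration=True)

def push_config(lb_mgmt_url, payload):
    """POST the XML to the load balancer's (hypothetical) management endpoint."""
    request = urllib.request.Request(
        lb_mgmt_url, data=payload,
        headers={"Content-Type": "application/xml"})
    with urllib.request.urlopen(request) as resp:
        return resp.status

payload = add_real_server_xml("web-farm-1", "10.0.3.27", 8080, weight=2)
# push_config("https://lb.example.net/api/config", payload)  # endpoint is illustrative
print(payload.decode())
```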


Virtualized CLI and reports: The CLI must support remote administration and integration with the cloud management tool of choice. It should be virtualized so that each subscriber of the cloud data center sees only their own settings, as if they had a dedicated box. The generated reports must likewise be customized per subscriber, which makes administration easier; for example, an admin may want to decommission a subscriber's configuration after a specific job is completed. If the reports are per subscriber, admins can calculate the cost incurred per subscriber based on the services used, bandwidth consumed, and so on.
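The per-subscriber billing idea can be as simple as aggregating the usage counters the virtualized box exports by subscriber. A tiny sketch, with made-up counter names, rates, and sample records, could look like this:

```python
# Hypothetical per-subscriber usage counters exported by the load balancer.
usage = [
    {"subscriber": "tenant-a", "gb_transferred": 120.0, "health_checks": 40000},
    {"subscriber": "tenant-a", "gb_transferred": 35.5,  "health_checks": 12000},
    {"subscriber": "tenant-b", "gb_transferred": 410.2, "health_checks": 90000},
]

RATE_PER_GB = 0.08          # illustrative prices, not real ones
RATE_PER_1K_CHECKS = 0.01

def bill(records):
    """Aggregate usage per subscriber and turn it into a cost figure."""
    totals = {}
    for r in records:
        t = totals.setdefault(r["subscriber"], {"gb": 0.0, "checks": 0})
        t["gb"] += r["gb_transferred"]
        t["checks"] += r["health_checks"]
    return {s: round(t["gb"] * RATE_PER_GB + t["checks"] / 1000 * RATE_PER_1K_CHECKS, 2)
            for s, t in totals.items()}

print(bill(usage))  # {'tenant-a': 12.96, 'tenant-b': 33.72}
```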

The points explained above are a few things that are mandatory for load balancers to be deployed in cloud computing data centers right away.