Friday, January 6, 2012

Distributed load sharing with virtual appliances

Cloud computing has been a revolution in the recent past. All network device manufacturers are evolving their products to meet the new requirements, and we can now see many virtual networking appliances for firewalls, load balancers, IPS and so on available in the market.

Load Distributors:

Even though VM appliances give network administrators a lot of flexibility in maintaining them, their main setback in most cases is scalability, performance and reliability. Hardware appliances hold a considerable edge over software appliances in this regard.

To increase scalability, we either need to increase the resources of the virtual appliance or add multiple instances that can share the load.

With multiple instances, load sharing is needed to make the most of them. This can be done by deploying a load distributor appliance, which takes care of distributing the traffic among the network appliances.


One such architecture has been released by Embrane; here is a short overview of their products.
But the load distributor must not become a bottleneck itself in performance and scalability. One way to avoid that is to add more CPUs/resources to the load distributor.

There is a good article that talks about the same problem and suggests adding more functionality, such as ALG-like intelligence, to the load distributors and using OpenFlow to create flows in the L2 switches (OpenFlow-enabled switches will be widely used in the future). A fast path thus gets created, and traffic flows from the L2 switches to the network service appliances directly. For new connections, the OpenFlow switches send the traffic to the load distributor so that a flow gets created.
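As a rough sketch of that division of labour (a toy model only, not tied to any real OpenFlow controller API): the distributor sees only the first packet of each connection, picks an appliance instance, and records a flow entry that the switches would then use as the fast path.

# Toy model of a load distributor that records a flow entry for each new connection.
# Hypothetical; a real deployment would push these entries to OpenFlow-enabled
# switches through a controller, so later packets bypass the distributor entirely.
import itertools

class LoadDistributor:
    def __init__(self, appliances):
        self.appliances = itertools.cycle(appliances)  # round-robin over the appliance instances
        self.flow_table = {}  # (src, dst, sport, dport, proto) -> chosen appliance

    def handle_first_packet(self, flow_key):
        # Only the first packet of a connection reaches the distributor; once a
        # flow entry exists, the switch forwards the rest of the flow directly.
        if flow_key not in self.flow_table:
            self.flow_table[flow_key] = next(self.appliances)
        return self.flow_table[flow_key]

ld = LoadDistributor(["fw-vm-1", "fw-vm-2", "fw-vm-3"])
print(ld.handle_first_packet(("10.0.0.5", "10.0.1.9", 51514, 443, "tcp")))  # new flow -> fw-vm-1
print(ld.handle_first_packet(("10.0.0.5", "10.0.1.9", 51514, 443, "tcp")))  # same flow -> fw-vm-1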

There is still the challenge of a single point of failure. VMware has the option of vMotion, where the load distributor VM is transferred to another ESX server; for other virtualization solutions, handling this will be challenging.

One would also ask about failures of the L2 OpenFlow switch. It is interesting to see what solution fits here, since traditional VRRP solutions work at L3 and above. Will we see backup switches too in the future?

SoftADC as load distributor:

Coming back to load distributors, I feel softADCs can be tuned for that role. A feature-rich softADC can be stripped down to act as a load distributor. A simple L7 load balancer appliance may be the best bet to act as an OpenFlow controller, providing advantages in persistence, failover handling via VRRP and so on. The time to market and ROI will be good, as these products can easily be tuned to act as load distributors. Vendors like Radware, Citrix and others should think of adding OpenFlow controller capabilities, together with L2 switches running OpenFlow-enabled data paths, for this purpose.

Monday, November 15, 2010

IPv6: How GSLB can gain?

With the advent of smartphones and intelligent household devices connecting to the internet, the address space problem is going to blow up soon, and IPv6 will soon find its place in ADCs to a large extent. In one of my articles, I wrote about the cons of DNS GSLB and described how a client proximity feature can help a client reach the nearest site. I was wondering if we can solve our GSLB problems with IPv6.

IPv6 provides anycast addresses. For quick reference, here is the relevant text:
An identifier for a set of interfaces (typically belonging to different nodes). A packet sent to an anycast address is delivered to one of the interfaces identified by that address (the "nearest" one, according to the routing protocol's measure of distance).


As the above text explains, a router can find the nearest host that listens on the anycast address. This can be used to advantage in GSLB: all the sites can share the same anycast address as a virtual IP, and when the domain name resolves to this anycast address, the IPv6 routers find the nearest site for the client.

Implementations must note the site failover case, in which the routers will choose a different site and existing connections will break. How to maintain site stickiness I would leave to the implementation; one way is to redirect the client to a unicast address as soon as a site is selected.
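A minimal sketch of that stickiness workaround, assuming every site binds an HTTP listener to the shared anycast VIP (the addresses below are documentation examples): the first contact lands on whichever site is nearest, and that site immediately redirects the client to its own unicast address so that later packets are not re-routed to a different site after a failover.

# Minimal sketch: every site runs this on the shared anycast VIP and pins the
# client to its own unicast address with a redirect. Addresses are examples only.
import socket
from http.server import BaseHTTPRequestHandler, HTTPServer

SITE_UNICAST = "2001:db8:b::80"  # this site's own (unicast) service address

class V6HTTPServer(HTTPServer):
    address_family = socket.AF_INET6

class AnycastRedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # First contact arrives on the anycast VIP; hand the client a
        # site-specific unicast URL so the session stays on this site.
        self.send_response(302)
        self.send_header("Location", f"http://[{SITE_UNICAST}]{self.path}")
        self.end_headers()

if __name__ == "__main__":
    # In a real deployment this listener would be bound to the anycast VIP at every site.
    V6HTTPServer(("::", 8080), AnycastRedirectHandler).serve_forever()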

In summary, IPv6's anycast addressing brings an advantage to GSLB. Soon, we will see this implemented in ADCs.

Tuesday, March 23, 2010

WAN optimization on Smart phones

With the increasing usage of smartphones in enterprises, ADC vendors have a new market opening up. Mobile users want to access documents and send emails seamlessly.
Some vendors have already ventured into this smartphone ADC market by offering a proxy application that runs on the smartphone. A smartphone needs considerable memory, CPU processing speed and battery power to run these proxies, but seeing how quickly mobile phones are evolving these days, that should not be a problem in the days to come.

I visualize two different paths:
1) WAN optimization client proxy combined with an IPsec client as one offering.

Apart from providing security, this offering would give the benefits of WAN optimization, and enterprises can restrict the remote user's level of access to their documents.

2) WAN optimization client proxy as a plug-in for the smartphone browser.

A browser plug-in for WAN optimization on a smartphone can work hand in hand with the browser's caching capabilities and download only the changes to a given file from the WAN optimization device/appliance running in the head office. This also holds good for enterprise laptops that roam mostly outside their office networks.
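The "download only the changes" idea rests on block signatures, much as rsync does it. A rough sketch, not tied to any vendor's plug-in: the client hashes fixed-size blocks of its cached copy and fetches only the blocks whose hashes differ from the server's.

# Rough sketch of the delta-transfer idea behind "download only the changes":
# hash fixed-size blocks of the cached copy, compare with the server's hashes,
# and fetch only the blocks that differ. (Real rsync adds a rolling checksum so
# matches are found even when data shifts; this sketch keeps blocks aligned.)
import hashlib

BLOCK = 4096

def block_signatures(data: bytes):
    return [hashlib.md5(data[i:i + BLOCK]).hexdigest() for i in range(0, len(data), BLOCK)]

def blocks_to_fetch(cached: bytes, server_sigs):
    local = block_signatures(cached)
    # Any block missing locally, or with a different hash, must be downloaded.
    return [i for i, sig in enumerate(server_sigs) if i >= len(local) or local[i] != sig]

old = b"A" * 8192
new = b"A" * 4096 + b"B" * 4096
print(blocks_to_fetch(old, block_signatures(new)))  # -> [1]: only the second block changed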

In any case, the market will see ADC vendors moving into the smartphone segment.

Sunday, March 21, 2010

Reduce costs: WAN optimization in your hands

The WAN optimization market is expected to reach revenue of 1.2 billion in 2010, as per this site.
It is indeed a growing market, and many vendors are venturing into it.

The basis of any WAN optimization technique is reducing WAN bandwidth usage; these products help with data de-duplication, compression and so on.

But the commercial solutions are very expensive. There are open source projects for WAN optimization; check these links if you want to deploy at low cost.
WANProxy
TrafficSqueezer

The basis for WANProxy is the rsync utility, which almost every Linux user knows. As a first step towards WAN optimization, users can rely on it.
Set up an rsync server at each site of your company and sync files when needed or on a daily basis. It can save a lot of money if your requirements are not critical. Here is a good tutorial on setting up rsync. File transfers can be encrypted, which adds to the security of the transfer.

One good use case is a multi-branch environment: ClearCase servers cannot be hosted at every branch due to cost constraints, so users log in to remote servers and use ClearCase remotely. They tend to download the files and binaries built at the remote site over FTP/SCP. In some cases these run to several megabytes, and transfer over WAN links is very slow. In such scenarios, I would recommend setting up an rsync daemon on the remote server; rsync on the client machine can then fetch only the differences in file content, saving a lot of time and cost.
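For example, the client-side pull could look something like this (the host, module and path names are placeholders); rsync transfers only the changed parts of the build outputs.

# Example client-side pull from an rsync daemon on the remote build server.
# Host, module and path names are placeholders; rsync sends only the deltas.
import subprocess

subprocess.run(
    [
        "rsync",
        "-avz",       # archive mode, verbose, compress over the WAN link
        "--partial",  # keep partially transferred files if the link drops
        "rsync://buildserver.example.com/builds/latest/",
        "/home/user/builds/latest/",
    ],
    check=True,
)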

To Developers: Way to improve ADC performance

In any ADC device or networking appliance, there is data that is accessed very often: the session table, the routing table, shared memory for IPC and, in the case of an ADC, cached objects, signature files for WAN optimization and so on. Depending on their size, these memory entities span multiple pages. Whenever there is a context switch, the CPU needs to access these pages again, which involves TLB and cache lookups.

The TLB (translation lookaside buffer) is a CPU cache used to speed up virtual-to-physical address mapping. A TLB has a fixed, and quite small, number of slots that contain page table entries mapping virtual addresses to physical addresses. If the requested address is present in the TLB, it is a TLB hit; if not, the translation proceeds by looking up the page table in a process called a page walk. The page walk is expensive, as it involves reading multiple memory locations and using their contents to compute the physical address. After the physical address is determined, the virtual-to-physical mapping is entered into the TLB.

Fewer TLB misses mean better performance. Since session tables, routing tables and cached objects are contiguous in memory, allotting one TLB entry to such a contiguous region avoids the need for many TLB entries. Having each TLB entry map a huge chunk of contiguous memory is made possible by "hugetlbfs".

Allocate the session table, routing table and cached object memory from this hugetlbfs, and CPU performance improves greatly, as it conserves TLB entries and avoids costly page walks.
You can check this link to see how MySQL benefited from hugetlbfs.
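A minimal sketch of the mechanics, assuming a Linux system with a hugetlbfs mount at /mnt/huge and 2 MB huge pages (the path and sizes are examples): memory mapped from a file on hugetlbfs is backed by huge pages, so one TLB entry covers 2 MB instead of 4 KB.

# Minimal sketch: back a large table with huge pages via hugetlbfs.
# Assumes hugetlbfs is mounted at /mnt/huge (e.g. "mount -t hugetlbfs none /mnt/huge")
# and that huge pages have been reserved via /proc/sys/vm/nr_hugepages.
import mmap
import os

HUGE_PAGE = 2 * 1024 * 1024   # 2 MB huge pages on most x86_64 systems
SIZE = 64 * HUGE_PAGE         # must be a multiple of the huge page size

fd = os.open("/mnt/huge/session_table", os.O_CREAT | os.O_RDWR, 0o600)
os.ftruncate(fd, SIZE)
table = mmap.mmap(fd, SIZE, mmap.MAP_SHARED, mmap.PROT_READ | mmap.PROT_WRITE)
# The session table / cached objects would be laid out inside this mapping;
# each TLB entry now covers 2 MB of it instead of a single 4 KB page.
table[:16] = b"\x00" * 16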

You can go through this Linux documentation for implementation details and more information.

References:
http://netsecinfo.blogspot.com

TLB - Wikipedia

Friday, December 18, 2009

SoftADC - Future is here! (Part -1)

In my last article, I wrote about the different types of virtualization and which is best for an ADC. A SoftADC is a software version of an ADC product. It can be classified further into:
1) Software that runs as a virtual appliance on generic hardware on top of a hypervisor like VMware, Xen or Hyper-V.
From VMware's website, three vendors provide such a softADC:
- Zeus
- JetNexus
- Citrix NetScaler

2) Software that runs in an environment providing virtualization customized for networking appliances.
- Nortel/Radware Virtual Services Switch

Most ADC vendors have shown an inclination towards the first type of softADC. The customized virtualization environment approach does not seem to have been successful so far; the reason could be that the attempt was not well executed, or Nortel's financial crisis. I will leave that to your analysis.

But I back the idea of customized networking virtualization, as it is designed to provide high efficiency for networking virtual appliances such as firewalls, SSL accelerators, application delivery controllers, IPS and so on:
- Built-in hardware support for SSL acceleration
- The advantages of both resource virtualization and network virtualization
- A simplified hypervisor tailored to networking appliances, so less overhead
- The ability to run a chain of virtual appliances for a given client connection (for example: Firewall - SSL - Firewall - IPS - SSL)

The generic softADC has its own advantages. Because it runs on generic hardware and works with established cloud computing infrastructure, it is simple to configure and manage from an administrator's point of view. The performance it can generate suits small to medium-scale businesses, and it can scale further by adding more cores.

Some softADCs seem to have identified the bottlenecks of running on generic hardware. For example, Zeus supports offloading SSL acceleration to hardware when it is available.

SoftADCs are here to stay, especially in this era of cloud computing. Business requirements will define which type of SoftADC to deploy.

Monday, November 9, 2009

Alteon is back

It's wonderful to see Alteon bouncing back to reclaim its lost market share.
Check out their fancy ad at http://alteonisback.com
For a quick look, you can also watch http://www.youtube.com/watch?v=1FlvEdIs_5s

The Alteon 5412 is the power source that brings Alteon back to life and gives him his super-abilities.

From the website:
This new Alteon 5412 high-end platform is an exciting, stand out addition to the Alteon Application Switch product line – making Alteon performance SCREAM again! The Alteon 5412 features:

* An incredible 20 Gbps throughput
* Four 10GE ports and 12 Gigabit Ethernet ports
* 340K of Layer 4 TPS & 340K of Layer 7 TPS
* 2.5M DNS QPS
* Virtual Matrix Architecture
* Renowned Alteon OS
* On demand scalability
* 5-year, guaranteed longevity

Radware's new product, derived from Alteon expertise, is "screaming back".
Take a look at their brochures:
http://www.radwarealteon.com/products/applicationswitches/series-5/
http://www.radwarealteon.com/wp-content/uploads/Documents/DataSheets/NAS/Radware_Alteon_5412_Data_Sheet.pdf

More to come

Monday, August 17, 2009

ADC - Design guidelines for Virtual DCs

Different types of virtualization


As we are aware, there are different types of virtualization technologies. In this article, I will try to analyze which one to opt for in ADCs:

- CLI virtualization
- OS virtualization
- Network Virtualization

Many ADCs have already implemented CLI virtualization.

CLI virtualization is a quick solution to address various customer needs in virtual data centers. The main challenge ends up being resource reservation per subscriber: such virtualization can only distribute the virtual IPs among the customers and cannot restrict throughput on a per-user basis, nor can it stop one subscriber from using up all the route entries or Layer 2 forwarding database tables. Most importantly, it cannot suit virtual DCs with overlapping networks and IPs.

OS virtualization suits software-based ADCs perfectly. Attempting it in hardware-based LBs is risky, as it impacts performance and makes the product more complex to manage and to enhance later for scalability or version upgrades. The advantage is a single interface for provisioning the instances. This type of virtualization adds a lot of OS overhead to each instance, but it provides memory and crash protection from other instances, which is worth it. Still, creating multiple VMs on high-end hardware has its own disadvantages: the virtualization OS is not designed or tailor-made to run network switching applications, and it carries OS overhead, CPU scheduling and workload-sharing behavior that may not match ADC requirements. Running more VMs will also impact each ADC instance's performance.
The right approach, and the best bet, is specialized hardware with OS virtualization tailor-made to suit the ADC environment.

Network virtualization works by virtualizing the network stack. It is as if each customer had a different network stack from Layer 2 to Layer 7: the Layer 2 FDB, L3 routing tables and SLB tables are separate for each subscriber. The differentiating factor is the VLAN; each subscriber is allocated one VLAN. This helps in virtual DC environments with overlapping networks and IPs. Of course, this is not crash-proof against other instances, but it avoids the overhead of OS virtualization.

Now the question: which is best?
Every virtualization technology has its own pros and cons. It does not matter which path is taken, as long as it meets the customer's needs.
What should the market look for when choosing virtualized ADCs?
- Scalability: To suit virtual DCs, scalability is a must. Ideally, the ADC should not restrict connections per second, throughput and so on. Limits can of course be license-based, provided the ADC can scale to the maximum possible for that system and remains reasonable to use in virtual DCs.

- Provisioning ready: A third-party solution should be able to provision the virtualized ADC, creating instances as well as LB-related configuration. These ADCs should support XML-based configuration; hopefully a standard emerges for configuring any type of ADC.

- Seamless updates: Dynamic configuration changes must be supported. Configuration changes or version upgrades on one instance should not impact the runtime behavior of other instances, and when one instance shuts down or restarts, other instances should not suffer, even with respect to resources.

- Throughput protection: A virtualized ADC can have the ability to add more subscribers, but doing so should not affect the throughput promised to existing subscribers.

- Overlapping networks: It should support the overlapping IPs that are quite common in virtual DCs.

- Usage reports per subscriber: These reports help virtual DC administrators track each subscriber's usage of the ADC for accounting or debugging purposes.

Conclusion:
Every virtualization approach has its own pros and cons. Network and OS virtualization are better than CLI virtualization, and OS virtualization on a hypervisor tailor-made for running ADCs is the better bet: it can get better support from the vendor and is crash-proof against other instances. ADCs running on hypervisors like Xen, VMware and so on will be completely crash-proof, but tuning the hypervisor parameters for good performance is a challenge, and the problem becomes more visible as more VMs run on the hypervisor. One should look at overall network performance with ADCs as VMs rather than just single ADC instance performance.
Network virtualization wins on performance, since it has less OS overhead, but loses with respect to crash protection from other instances.

Sunday, August 16, 2009

Guide to select best GSLB solution

Someone asked me:
'OK, if an XX ADC supports client proximity that is not based on DNS, is it the best one to go with? Do you suggest any other features to look for?'

While all the current ADCs support most of the eye-catching features, there are some things to check before selecting a GSLB-equipped ADC.

Here is a checklist:
- Client proximity: Most LBs calculate round-trip time (RTT) via ICMP, TCP probes and so on.
But consider whether this works in real-world scenarios:
a) RTT calculation should work with firewalls sitting in front of the client network.
b) It should not add overhead to the TCP communication with the client.
c) Most importantly, RTT should be calculated to the real client. Assume there is a proxy between the client and the ADC: if the ADC under consideration does an ICMP or TCP probe, it may not reach the real client, so the measured RTT is to the proxy and not to the client.
d) VPN environments - If the client is behind a VPN gateway, the client's actual location is in a private network that can be geographically anywhere, but its source IP is NATed to the public IP of the VPN gateway connected to the Internet. In this case, TCP-probe or ICMP-based RTT goes completely wrong. This is especially true with hub-and-spoke networks.
e) The solution should work for all protocols, including HTTP, SIP, RTSP, SMTP and so on.
f) The client proximity solution should honor Layer 7 features like cookie persistence.
g) The client proximity information obtained must be kept in sync across all sites.

- Site health checks: Health checks should be content-aware and must be performed against the remote server farms.

- System properties: The GSLB solution must consider system health properties such as CPU and session utilization across all sites.

- Persistence and availability: GSLB persistence must be maintained. If a client connects to site B because site A is down, subsequent requests should continue to go to site B even after site A comes back.
- Ask for throughput and connections-per-second figures with GSLB configured on the ADCs.
- Remote forwarding: When the local servers are down, client requests must be proxied to a remote site transparently; the client should not even know the site is down.

Wednesday, June 3, 2009

Journey after DNS GSLB

It is interesting to see the discussion on the load balancing mailing list about GSLB and a popular URL on the page of shame. I thought I would share my thoughts on how GSLB has advanced since then.

In the first article, “Why DNS-based GSLB does not work”, the emphasis is mostly on the problems associated with DNS caching and Layer 7 cookie information, but at the same time, given its advantage in keeping services available when a site breaks down, these were assumed to be acceptable.

Before we go into the details of modern DNS GSLB, let's see what the second article on GSLB highlights:

  • Clients are often not topologically close to their caching servers

  • Calculation of proximity can itself significantly degrade the user experience

  • Problems with static proximity and geo-targeting

These problems are very relevant to DNS-based GSLB implementations, but some enhancements have gone into GSLB devices in recent times. Remember the Alteon Content Director?

ACD implemented client proximity way back in 2000.

From the ACD documentation:

Site-Selection Proximity Methods in ACD

The site-selection methods are:
- FastPath
- SmartPath
- DualPath
- DNS Round-Robin
- DNS FastPath
- MediaPath

The first three methods determine which local domain can provide the fastest route from its server to the client. The DNS Round-Robin method uses round-robin selection to equalize domain traffic across distributed sites instead of client proximity to the site. (DNS Round-Robin does not require an ACD for each local domain.) DNS FastPath awards the connection based on the fastest route from its server to the client's DNS server. MediaPath is specifically designed for MMS requests.

True client proximity is achieved by the first three methods listed above. FastPath, for example, on receiving the client request, searches its cache to find the nearest site. If there is no entry, it starts creating one by sending a redirect message to all the sites; each site responds with a REDIRECT packet to the client, with optional changes in the URL path. The client opens a connection to whichever site's REDIRECT packet arrives first, which ensures that the chosen site is the closest to the client.

The above implementation carries a security drawback: the firewalls at all the sites need rules allowing REDIRECT packets that do not appear to originate from their internal networks.

In the recent GSLB implementations of some products, like the Alteon Application Switch (NAS), true client proximity is achieved while addressing the limitations of ACD.

“After receiving the client request, the GSLB device at that site scans its client proximity table. If it does not find an entry, it goes on to calculate the response time to the client from each site. It sends the client a response with HTTP text data containing the following information:

(Assume there are three sites – A, B and C)

http://<site A address>/cntpurl

http://<site B address>/cntpurl

http://<site C address>/cntpurl

cntpurl — a special type of URL used by the sites to compute RTT

Client X sends an HTTP request to Site A, Site B and Site C: it establishes TCP connections with Site B and Site C and sends a cntpurl request to each. Sites B and C respond with a dummy response and, in the process, compute the RTT of their TCP connections with Client X. Sites B and C report the computed RTTs to Site A, and on receiving them, Site A sends the consolidated RTT list to all sites.

- At this time, Site A serves the request from the client.
- On the next request from Client X, Site A redirects the HTTP request to the site with the lowest RTT (Site C in this example).
- Client X opens a new connection with Site C.
- Site C serves the HTTP request.

If the best site is down, the client is given the next-best site based on the RTTs in the client proximity table.

The client proximity table is a cache maintained at all the sites and kept completely in sync.
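A rough model of the bookkeeping described above (the site names and RTT values are invented for illustration): each site keeps a client proximity cache mapping the client to the measured RTTs and redirects the next request to the lowest-RTT site that is still up.

# Rough model of the client proximity table kept (and synchronized) by every site.
# Site names and RTT values are invented; the real table is built from the
# cntpurl measurements described above.
proximity_table = {}  # client IP -> {site: RTT in ms}

def record_rtts(client_ip, rtts):
    # Called once the consolidated RTT list has been distributed to all sites.
    proximity_table[client_ip] = rtts

def best_site(client_ip, sites_up):
    rtts = proximity_table.get(client_ip)
    if not rtts:
        return None  # no entry yet: serve locally and start measuring
    # Pick the lowest-RTT site that is still up; fall back to the next best on failure.
    candidates = {s: r for s, r in rtts.items() if s in sites_up}
    return min(candidates, key=candidates.get) if candidates else None

record_rtts("198.51.100.7", {"site-a": 180.0, "site-b": 40.0, "site-c": 25.0})
print(best_site("198.51.100.7", {"site-a", "site-b", "site-c"}))  # -> site-c
print(best_site("198.51.100.7", {"site-a", "site-b"}))            # site-c down -> site-b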

This solution works for all HTTP traffic and works hand in hand with DNS GSLB. Now let's go back to what the second article said. This solution:

  • Always calculates true client proximity, with a mechanism that works hand in hand with DNS GSLB.

  • Does not depend on geo-location, so there is no problem with VPN clients.

  • Adds a little traffic overhead, and only during the client's first contact.

I agree completely with the views in the GSLB article, and I wanted to share the latest features that address the problems it mentions.


Few words

I got a lot of mails encouraging me to write more technical articles.
Thank you, guys.
I promise I will write more in the coming days, so come back to my blog and check out the new articles.

You can mail me your comments, and you are welcome to suggest any topics you wish to see here. Reach me at ravivsn@gmail.com

Tuesday, March 24, 2009

Need for Third party certification

Are there any third-party certifications for load balancers/ADCs?

I came across the Tolly Group, which conducts tests when a vendor requests them. Even then, the Tolly Group concentrates only on a given set of tests and does not cover the entire box and its full feature set. And there is the Gartner Group, classifying the LBs into magic quadrants based on feature set, but not on test results.

The LB market lacks a genuine third-party certification like NSS for intrusion prevention systems.

LBs are now ADCs, with many more complex jobs to do. These days, ADCs are pitched against application firewalls and protocol anomaly detection systems. Those products (application firewalls and IDPs) always go for third-party certification, which gives customers confidence that they are effective against zero-day attacks, known vulnerabilities and exploits. Since ADCs are also entering that segment, they need third-party certification too. ADCs front-end the servers, and these devices themselves should not be vulnerable to attacks; the third parties would also certify how hardened the ADC OS and its proxies/applications are.

I suggest that vendors either go with groups like NSS Labs or ask the Tolly Group to come out with a complete set of test cases for ADCs.

Third-party certifications will help customers choose the best ADC based on performance and test results rather than being carried away by feature-rich marketing terminology.

Monday, March 23, 2009

Implementation suggestion for LBs to use in Virtual Data Center (VDC)

A Virtual Data Center (VDC) uses virtualization technologies in the data center. Virtualization platforms like Xen, VMware and Microsoft Hyper-V are used in VDCs. With the increasing number of subscribers, many of them relying on cloud data centers, IP addresses for internal server allocation will run out.

Fortunately, VDCs, with the help of virtualization platforms, can have overlapping IP addresses; subscribers are given the freedom to choose their own set of IP addresses. In such setups, the network devices should be programmed in such a way that they identify a server not just by IP address but by subscriber ID too.

Let's take an example. The data center has two subscribers, with a load balancer in front of the real servers. Subscriber A has VIP1 and subscriber B has VIP2, but both chose to put their real servers in the 10.x.x.x network and chose the same IPs for them.


VIP1 --------- 10.1.1.1 (Real 1)
     --------- 10.1.1.2 (Real 2)
     --------- 10.1.1.3 (Real 3)

VIP2 --------- 10.1.1.1 (Real 4)
     --------- 10.1.1.2 (Real 5)
     --------- 10.1.1.3 (Real 6)


In the above setup, the load balancer has to distinguish between real servers based on the subscriber. Traffic coming to VIP1 must be load balanced across Real 1, Real 2 and Real 3, while traffic to VIP2 must be distributed among Real 4, Real 5 and Real 6.

To achieve this setup, the load balancer must have the intelligence to classify traffic by subscriber. This can be achieved by allocating a different VLAN ID per subscriber: subscriber A is allocated vlan-a and subscriber B a different VLAN, vlan-b. After making the LB decision, the load balancer tags the packets with the respective VLAN ID, and the packet makes it to its destination.



VIP1 --------- 10.1.1.1 (Real 1)
     --------- 10.1.1.2 (Real 2)   -- vlan-a
     --------- 10.1.1.3 (Real 3)

VIP2 --------- 10.1.1.1 (Real 4)
     --------- 10.1.1.2 (Real 5)   -- vlan-b
     --------- 10.1.1.3 (Real 6)

- There should be a different VLAN per subscriber; the VLAN is the identification parameter for the subscriber.
- The LB should be able to accept the same IP for different real servers and associate it with a VLAN ID.
- Routing should be per subscriber: either the route lookups take the VLAN ID into account, or a separate routing table is maintained per subscriber.


Load balancers should implement the above to fit the virtual data center and solve these coming problems.
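A simplified sketch of the resulting lookup (the VLAN IDs and addresses below are just examples, not from any particular product): the real-server table is keyed per VIP together with its subscriber VLAN, so identical 10.x.x.x addresses belonging to different subscribers never collide.

# Simplified sketch: real servers selected per VIP and tagged with the
# subscriber's VLAN, so overlapping address ranges never collide.
import itertools

server_farms = {
    "VIP1": {"vlan": 100, "reals": ["10.1.1.1", "10.1.1.2", "10.1.1.3"]},  # subscriber A
    "VIP2": {"vlan": 200, "reals": ["10.1.1.1", "10.1.1.2", "10.1.1.3"]},  # subscriber B
}
_rr = {vip: itertools.cycle(farm["reals"]) for vip, farm in server_farms.items()}

def pick_real(vip):
    farm = server_farms[vip]
    real_ip = next(_rr[vip])
    # The LB decision returns both the real server IP and the VLAN tag to apply,
    # so 10.1.1.1 on vlan 100 and 10.1.1.1 on vlan 200 are different servers.
    return real_ip, farm["vlan"]

print(pick_real("VIP1"))  # ('10.1.1.1', 100)
print(pick_real("VIP2"))  # ('10.1.1.1', 200)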

Wednesday, March 11, 2009

Future talk - What ADCs need to replace IDP?

A few weeks back there was a question on the load balancing mailing list: "What is the difference between an ADC and a load balancer?" I would say that ADCs are next-generation load balancers, capable of doing far more than just server load balancing. The present generation of ADCs implements:
a) Application compression and optimization
b) Application validation by protocol anomaly detection
c) Improved network utilization through TCP connection pooling
d) XML parsing and validation

Some implementations went beyond this and started modifying application data based on user input. In summary, ADC products are doing a sort of application protection, like Intrusion Detection and Prevention (IDP) devices.

With all these features, ADCs have become multi-functional, and the market and the experts alike feel that this is the end of plain load balancing and the emergence of the ADC market.


The first generation of load balancers did not use a TCP stack. They were not vulnerable to TCP/IP exploits, and they fortunately gained huge performance by having no stack and, in some cases, no operating system overhead. The current generation of load balancers, called ADCs, run application proxies. ADCs require a network stack as well as application intelligence to do the application processing, compression, validation and so on. All these features have now opened up many vulnerabilities: cross-site scripting (XSS), TCP vulnerabilities and more have started appearing in ADC products. Check out the BIG-IP and NetScaler vulnerabilities. ADCs should be written to secure coding standards, leaving no scope for buffer overflows and other stack exploits, and evolve further.

ADCs do have protocol anomaly detection, but it is not as refined as in IDP devices. ADCs have the potential to replace the IDP, provided they implement the following to move into the next league:
a) Statistical anomaly detection
b) Protocol anomaly detection algorithms
c) Signature-based detection (a rough sketch follows this list)
d) Automatic live updates for signatures, plus software updates
e) Useful reporting to detect zero-day attacks
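As a rough illustration of the signature-based piece (the patterns below are toy examples, not a real signature set), the idea is simply to scan request payloads against a maintained list of signatures:

# Toy illustration of signature-based detection: scan request payloads against
# a small list of regex signatures. Real signature sets (and their live updates)
# are far richer; the patterns below are only examples.
import re

SIGNATURES = {
    "sql-injection": re.compile(rb"(?i)union\s+select"),
    "xss-script-tag": re.compile(rb"(?i)<script\b"),
    "path-traversal": re.compile(rb"\.\./\.\./"),
}

def inspect(payload: bytes):
    return [name for name, pattern in SIGNATURES.items() if pattern.search(payload)]

print(inspect(b"GET /index.php?id=1 UNION SELECT password FROM users"))  # ['sql-injection']
print(inspect(b"GET /images/logo.png"))                                  # []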

Most of the ADCs in the market do have protocol anomaly detection. That is not sufficient; they should also have live signature updates to stop newly found application exploits before the admins patch the server applications. Currently, admins still depend on the IDP to prevent exploits, even with ADCs that have application firewalls and protocol anomaly detection. It will not be long before we see ADCs competing with IDP devices.

Application Delivery Controller and cloud integration

My colleagues ask me this question: "What is needed to integrate application delivery controllers into cloud computing data centers?" I happened to read about a switch vendor claiming cloud-computing-ready Layer 2 switches. After going through their website, I felt there was nothing special they do compared to their competitors; they just market it with eye-catching cloud computing terms.

Coming to the load balancer market, vendors tell customers that application intelligence, optimization and complex configuration tools are what is needed for cloud computing. There are specialized devices to do those jobs; the application delivery controller should not meddle with the application by trying to speak its language.


In my opinion, any network vendor claiming to be cloud-ready should have the following.


Scalability: Load balancers generally support up to 1024 real servers. That is not enough to position a load balancer in cloud data centers, where several thousand servers will be running. The load balancer must scale to several thousand real servers, virtual servers, IP interfaces and so on. It must not become a bottleneck for cloud performance; for example, it should be able to run health checks against thousands of servers without its CPU hitting 100%, and it should support routing and session tables large enough for the huge traffic expected in cloud data centers.
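For instance, here is a sketch of how health checks for thousands of real servers could be spread out so the control CPU is never saturated; the interval, concurrency limit and server addresses are arbitrary examples.

# Sketch: run TCP health checks for thousands of real servers with a bounded
# number of probes in flight, so the control CPU is never saturated.
import asyncio

CHECK_INTERVAL = 10.0   # seconds between checks of the same server (example value)
MAX_CONCURRENT = 200    # probes in flight at any moment (example value)

async def probe(host, port, sem):
    async with sem:
        try:
            _, writer = await asyncio.wait_for(asyncio.open_connection(host, port), timeout=2.0)
            writer.close()
            return host, True
        except (OSError, asyncio.TimeoutError):
            return host, False

async def health_check_round(servers):
    sem = asyncio.Semaphore(MAX_CONCURRENT)
    results = await asyncio.gather(*(probe(h, p, sem) for h, p in servers))
    return dict(results)

# Example: 5000 hypothetical real servers checked once per interval.
servers = [(f"10.{i // 250}.{i % 250}.10", 80) for i in range(5000)]
# asyncio.run(health_check_round(servers))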


Ease of configuration: The load balancer must support a simple XML interface that lets external management applications configure its settings. For example, many LB vendors have come up with applications that work with VMware's vCenter (VC); those small applications watch VC and add new servers to a server farm when traffic is heavy. But cloud data centers do not consist only of load balancers and servers: they comprise many network devices, and all of them need to be managed in a simple way. Instead of independently deciding to add a new server to a server farm, the load balancer should provide a simple XML interface for more capable external applications that can understand and talk to many more network devices. Load balancers must evolve to be virtualization-vendor independent.
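A sketch of what such a vendor-neutral XML provisioning call might look like; the element names are invented for illustration and do not correspond to any existing standard.

# Sketch of a vendor-neutral XML provisioning request an external management
# application might push to the load balancer. Element names are invented.
import xml.etree.ElementTree as ET

def add_real_server_request(farm, ip, port, weight=1):
    req = ET.Element("adc-config")
    srv = ET.SubElement(req, "real-server", farm=farm)
    ET.SubElement(srv, "ip").text = ip
    ET.SubElement(srv, "port").text = str(port)
    ET.SubElement(srv, "weight").text = str(weight)
    return ET.tostring(req, encoding="unicode")

# The management tool (not the LB itself) decides when to grow the farm,
# e.g. after vCenter reports a new VM, and pushes the generated XML.
print(add_real_server_request("web-farm-1", "10.1.2.30", 8080))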


Virtualized CLI and reports: The CLI must be remotely administrable and integrate with the cloud computing management tool of choice. The CLI should be virtualized so that each subscriber of the cloud data center sees only their own settings, as if the box were dedicated to them. The reports generated must be customized per subscriber, which makes management easier for the administrators; for example, an admin may want to decommission an instance after a specific job is completed. With per-subscriber reports, admins can also calculate the cost per subscriber based on services used, bandwidth consumed and so on.

The points explained above are a few things that are mandatory for load balancers to be deployed immediately in cloud computing data centers.