Come, catch up with the latest happenings and future direction of Application Delivery Controllers. Mail me what you want to see here.
Sunday, March 21, 2010
To Developers: A way to improve ADC performance
A TLB (translation lookaside buffer) is a CPU cache used to speed up virtual-to-physical address translation. A TLB has a fixed, fairly small number of slots that contain page table entries, which map virtual addresses to physical addresses. If the requested address is present in the TLB, it is a TLB hit; if not, the translation proceeds by looking up the page table in a process called a page walk. The page walk is expensive, as it involves reading multiple memory locations and using their contents to compute the physical address. After the physical address is determined, the virtual-to-physical mapping is entered into the TLB.
Fewer TLB misses mean better performance. Since session tables, routing tables and cached objects occupy large contiguous regions of memory, a single TLB entry can be made to cover such a region, avoiding the need for many separate entries. For example, with 4 KB pages a 64-entry TLB covers only 256 KB of memory, while with 2 MB huge pages the same 64 entries cover 128 MB. Mapping one TLB entry to a huge chunk of contiguous memory is made possible by "hugetlbfs".
Allocate the session table, routing table and cached object memory from hugetlbfs and CPU performance improves greatly, as it conserves TLB entries and avoids costly page walks.
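For illustration, here is a minimal sketch, assuming a Linux box with 2 MB huge pages reserved beforehand (e.g. echo 64 > /proc/sys/vm/nr_hugepages), that backs a hypothetical session table with a huge page using mmap's MAP_HUGETLB flag. The Linux documentation referenced below also describes the equivalent approach of mapping files from a mounted hugetlbfs.

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#define HUGEPAGE_SIZE (2UL * 1024 * 1024)   /* assume 2 MB huge pages */

int main(void)
{
    /* Map one 2 MB huge page for a (hypothetical) session table.
     * Requires huge pages to be reserved in advance, e.g.:
     *   echo 64 > /proc/sys/vm/nr_hugepages */
    void *session_table = mmap(NULL, HUGEPAGE_SIZE,
                               PROT_READ | PROT_WRITE,
                               MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB,
                               -1, 0);
    if (session_table == MAP_FAILED) {
        perror("mmap(MAP_HUGETLB)");
        return 1;
    }

    memset(session_table, 0, HUGEPAGE_SIZE);   /* touch the memory */
    printf("Session table backed by one huge page at %p\n", session_table);

    munmap(session_table, HUGEPAGE_SIZE);
    return 0;
}

The same kind of mapping can back the routing table and cached objects as well.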
You can check this link to see how MySQL benefited from hugetlbfs.
You can go through the Linux kernel documentation for implementation details and more information.
References:
http://netsecinfo.blogspot.com
TLB, Wikipedia
Friday, December 18, 2009
SoftADC - Future is here! (Part 1)
SoftADC solutions come in two flavors:
1) Software that runs as a virtual appliance on generic hardware, on top of a hypervisor like VMware, Xen or Hyper-V.
According to VMware's website, three vendors provide this type of SoftADC:
- Zeus
- JetNexus
- Citrix Netscaler
2) Software that runs in an environment that provides virtualization customized for networking appliances.
- Nortel/Radware Virtual Services Switch
Most ADC vendors have shown an inclination towards the first type of SoftADC. The customized virtualization environment does not seem to have been successful so far. The reason could be that the attempt was not well executed, or it could be Nortel's financial crisis; I will leave that to your analysis.
But I back the idea of customized networking virtualization, as it is designed to provide high efficiency for networking virtual appliances like firewalls, SSL acceleration, Application Delivery Controllers, IPS, etc.:
- Built in Hardware support for SSL acceleration
- Advantage of Resource virtualization and network virtualization
- Simplified hypervisor tailored to networking appliances, so less overhead.
- Ability to run a chain of virtual appliances for a given client connection (for example: Firewall - SSL - Firewall - IPS - SSL)
SoftADC has its own advantages. Because it runs on generic hardware and works with established cloud computing infrastructure, it is simple to configure and manage from an administrator's point of view. The performance it can generate suits small to medium scale businesses, and it is also scalable: add more cores and gain more performance.
Some SoftADCs seem to have identified the bottlenecks of running on generic hardware. For example, Zeus can offload SSL to hardware acceleration when it is available.
SoftADCs are here to stay, especially in this era of cloud computing. The business requirements will define what type of SoftADC to deploy.
Sunday, August 16, 2009
Guide to selecting the best GSLB solution
'OK, if an XX ADC supports client proximity that is not based on DNS, is it the best one to go with? Do you suggest any other features to look for?'
While all the current ADCs support most of the eye-catching features, there are some things to check before selecting a GSLB-equipped ADC.
Here is a checklist.
- Client proximity: Most LBs calculate Round Trip Time (RTT) via ICMP or TCP probes.
But think about whether this works in real-world scenarios (see the sketch after this checklist):
a) RTT calculation should work with firewalls sitting at the client network
b) It should not add overhead in TCP communication with the client
c) Most importantly, RTT should be calculated to the real client. Assume there is a proxy between the client and the ADC. If the ADC under consideration does an ICMP or TCP probe, it may not reach the real client; the RTT measured is to the proxy, not the client.
d) VPN environments - If the client is behind a VPN gateway, the client actually sits on a private network that can be geographically anywhere, but its source IP is NATed to a public IP of the VPN gateway connected to the Internet. In this case, TCP probe or ICMP based RTT will go completely wrong. This is especially true with hub-and-spoke networks.
e) The solution should work for all protocols, including HTTP, SIP, RTSP, SMTP, etc.
f) The client proximity solution should honor Layer 7 features like cookie persistency.
g) The client proximity information obtained must be kept in sync across all sites.
- Site health checks - Health checks should be content-aware and must be performed against the remote server farms.
- System properties - The GSLB solution must consider system health properties like CPU and session utilization of all sites.
- Persistency and availability - GSLB persistency must be maintained. If a client connects to Site B because Site A is down, the next request should continue to go to Site B even after Site A comes back.
- Ask for throughput and connections-per-second numbers with GSLB configured on the ADCs.
- Remote forward - When local servers are down, the client request must be proxied to a remote site transparently; the client should not even know that the site is down.
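To make the client proximity concern above concrete, here is a minimal sketch (hypothetical addresses, not any vendor's code) that measures RTT as the time taken to complete a TCP handshake. Note that whatever terminates the handshake (a proxy or a VPN gateway) is what actually gets measured, not the real client.

#include <stdio.h>
#include <string.h>
#include <time.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

/* Measure RTT as the time taken by a TCP handshake to the given address.
 * Returns RTT in milliseconds, or -1.0 on failure. */
static double tcp_probe_rtt_ms(const char *ip, int port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1.0;

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    inet_pton(AF_INET, ip, &addr.sin_addr);

    struct timespec start, end;
    clock_gettime(CLOCK_MONOTONIC, &start);
    int rc = connect(fd, (struct sockaddr *)&addr, sizeof(addr));
    clock_gettime(CLOCK_MONOTONIC, &end);
    close(fd);
    if (rc < 0)
        return -1.0;

    return (end.tv_sec - start.tv_sec) * 1000.0 +
           (end.tv_nsec - start.tv_nsec) / 1e6;
}

int main(void)
{
    /* 192.0.2.10 is a placeholder "client" address for illustration.
     * If a proxy or VPN gateway answers the handshake, the measured
     * RTT belongs to that device, not to the real client behind it. */
    double rtt = tcp_probe_rtt_ms("192.0.2.10", 80);
    printf("TCP probe RTT: %.2f ms\n", rtt);
    return 0;
}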
Wednesday, June 3, 2009
Journey after DNS GSLB
It is interesting to see the discussion on the load balancing mailing list about GSLB, and a popular URL on the page of shame. I thought I would share my thoughts on how GSLB has advanced since then.
The first article, "Why DNS based GSLB does not work?", emphasizes mostly the problems associated with DNS caching and Layer 7 cookie information. But at the same time, given DNS GSLB's advantage in providing service during a site breakdown, these problems were assumed to be acceptable.
Before we go into the details of modern DNS GSLB, let's see what the second article on GSLB highlights:
- Clients are often not topographically close to their caching servers
- Calculation of proximity can itself significantly degrade the user experience
- Problems with static proximity and Geo-targeting
These problems are very much relevant to DNS based GSLB implementations, but some enhancements have gone into GSLB devices in recent times. Remember Alteon Content Director?
ACD implemented client proximity way back in 2000.
From ACD's documentation:
Site-Selection Proximity Methods in ACD
The site-selection methods are:
- FastPath
- SmartPath
- DualPath
- DNS Round-Robin
- DNS FastPath
- MediaPath
The first three methods determine which local domain can provide the fastest route from its server to the client. The DNS Round-Robin method uses round-robin selection to equalize domain traffic across distributed sites instead of client proximity to the site. (DNS Round-Robin does not require an ACD for each local domain.) DNS FastPath awards the connection based on the fastest route from its server to the client's DNS server. MediaPath is specifically designed for MMS requests.
True client proximity is achieved by the first three methods listed above. FastPath, for example, upon receiving the client request, searches its cache to find the nearest site. If there is no entry, it starts creating one by sending a redirect message to all the sites; each site then responds with a REDIRECT packet to the client, with optional changes in the URL path. The client will open a connection to whichever site's REDIRECT packet arrives first, which ensures that site is the closest to the client.
The above implementation carries a security drawback: the firewalls at all the sites need rules to allow REDIRECT packets that do not look like they originated from their internal networks.
In the recent GSLB implementations of some products, like the Alteon Application Switch (NAS), true client proximity is achieved while addressing the limitations of ACD.
After receiving the client request, the GSLB device at that site scans its client proximity table. If it does not find an entry, it goes on to calculate the response time to the client from each site. It sends a response to the client as HTTP text data with the following information (assume there are three sites - A, B and C):
- http://
- http://
- http://
cntpurl - a special type of URL used by the sites to compute RTT.
Client X sends an HTTP request to Site A, Site B and Site C. Client X establishes a TCP connection with Site B and Site C and sends a cntpurl request. Site B and Site C respond with a dummy response and, in the process, compute the RTT of their TCP connections with Client X. Site B and Site C report the computed RTTs to Site A. On receiving the RTTs from Sites B and C, Site A sends the consolidated RTT list to all sites.
- At this time, Site A serves the request from the client.
- During the next request from the Client X, Site A redirects the HTTP request to the closest RTT Site (Site C in this example).
- Client X opens a new connection with Site C.
- Site C serves the HTTP request.
If the best site is down, the client is given the next best site based on the RTTs in the client proximity table.
The client proximity table is a cache maintained at all the sites and kept completely in sync.
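As an illustration of the mechanism described above, here is a minimal sketch (an assumed design, not the actual NAS implementation) of a client proximity table entry keyed by client subnet, with per-site RTTs and a best-site lookup that falls back to the next best site when the closest one is down.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_SITES 3   /* Sites A, B and C from the example */

/* One entry of the client proximity cache, keyed by client subnet.
 * The same table is replicated to every site and kept in sync. */
struct proximity_entry {
    uint32_t client_subnet;        /* e.g. client IP masked to /24 */
    uint32_t rtt_ms[NUM_SITES];    /* consolidated RTT from each site */
    bool     site_up[NUM_SITES];   /* current site health */
};

/* Pick the reachable site with the lowest RTT; if the best site is
 * down, the next best one is returned automatically. */
static int best_site(const struct proximity_entry *e)
{
    int best = -1;
    for (int s = 0; s < NUM_SITES; s++) {
        if (!e->site_up[s])
            continue;
        if (best < 0 || e->rtt_ms[s] < e->rtt_ms[best])
            best = s;
    }
    return best;   /* -1 means no site is available */
}

int main(void)
{
    struct proximity_entry e = {
        .client_subnet = 0xC0000200,          /* 192.0.2.0/24, placeholder */
        .rtt_ms  = { 80, 35, 20 },            /* RTTs from sites A, B, C */
        .site_up = { true, true, true },
    };
    printf("Redirect client to site %d\n", best_site(&e));   /* site C */

    e.site_up[2] = false;                     /* site C goes down */
    printf("Redirect client to site %d\n", best_site(&e));   /* site B */
    return 0;
}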
This solution works for all HTTP traffic and works hand in hand with DNS GSLB. Now let's go back and see what the second article said. This solution:
- Always calculates true client proximity, with a mechanism that works hand in hand with DNS GSLB
- Does not depend on Geo location, so there is no problem with VPN clients
- Adds a little traffic overhead only during the first contact by the client
I agree completely with the views mentioned in the GSLB article, and I wanted to share the latest features that address the problems raised there.
Monday, March 23, 2009
Implementation suggestion for LBs to use in Virtual Data Center (VDC)
Virtualization platforms like Xen, VMware and Hyper-V from Microsoft are used in virtual data centers (VDCs). With an increasing number of subscribers, many of them relying on cloud data centers, there will be IP address exhaustion for internal server allocation. Fortunately, VDCs, with the help of virtualization platforms, can have overlapping IP addresses: subscribers are given the freedom to choose their own set of IP addresses. In such setups, the network devices should be programmed in such a way that they identify a server not just by IP address but by subscriber ID too.
Let's take an example. The data center has two subscribers, and there is a load balancer in front of the real servers. Subscriber A has VIP1 and subscriber B has VIP2, but both chose to put their real servers on the 10.x.x.x network and picked the same IPs for their real servers.
VIP1 ---------- 10.1.1.1 (Real 1)
     \_________ 10.1.1.2 (Real 2)
     \_________ 10.1.1.3 (Real 3)

VIP2 ---------- 10.1.1.1 (Real 4)
     \_________ 10.1.1.2 (Real 5)
     \_________ 10.1.1.3 (Real 6)
In the above setup, the load balancer has to distinguish between real servers based on the subscriber. Traffic coming to VIP1 must be load balanced among Real 1, Real 2 and Real 3, while traffic to VIP2 must be distributed among Real 4, Real 5 and Real 6.
In order to achieve this, the load balancer must have the intelligence to classify traffic by subscriber. This can be achieved by allocating a different VLAN ID per subscriber: subscriber A is allocated vlan-a and subscriber B is given a different VLAN, vlan-b. After taking a load balancing decision, the load balancer tags the packet with the respective VLAN ID, and thus the packet makes it to its destination.
VIP1 ---------- 10.1.1.1 (Real 1)  \
     \_________ 10.1.1.2 (Real 2)   |-- vlan-a
     \_________ 10.1.1.3 (Real 3)  /

VIP2 ---------- 10.1.1.1 (Real 4)  \
     \_________ 10.1.1.2 (Real 5)   |-- vlan-b
     \_________ 10.1.1.3 (Real 6)  /
To summarize the requirements (a sketch follows this list):
- There should be a different VLAN per subscriber; the VLAN is the identification parameter for the subscriber.
- The LB should be able to accept the same IP for different real servers and associate each with a VLAN ID.
- Route lookups should be subscriber-aware: either the lookup takes the VLAN ID into account, or a separate routing table is maintained per subscriber.
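To illustrate the last two points, here is a minimal sketch (an assumed design, not any vendor's implementation) of a real server lookup keyed by the (VLAN ID, IP) pair, so that the same 10.x.x.x address maps to different real servers for different subscribers. The VLAN IDs 100 and 200 are placeholders for vlan-a and vlan-b.

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* A real server is identified by the pair (vlan_id, ip),
 * not by the IP address alone. */
struct real_server {
    uint16_t vlan_id;      /* subscriber identification */
    uint32_t ip;           /* may overlap across subscribers */
    char     name[16];
};

static struct real_server servers[] = {
    { 100, 0x0A010101, "Real 1" },   /* vlan-a (100), 10.1.1.1 */
    { 100, 0x0A010102, "Real 2" },
    { 100, 0x0A010103, "Real 3" },
    { 200, 0x0A010101, "Real 4" },   /* vlan-b (200), 10.1.1.1 */
    { 200, 0x0A010102, "Real 5" },
    { 200, 0x0A010103, "Real 6" },
};

/* Lookup must match on both VLAN and IP; a plain IP lookup would be
 * ambiguous because subscribers use overlapping addresses. */
static const struct real_server *find_real(uint16_t vlan_id, uint32_t ip)
{
    for (size_t i = 0; i < sizeof(servers) / sizeof(servers[0]); i++)
        if (servers[i].vlan_id == vlan_id && servers[i].ip == ip)
            return &servers[i];
    return NULL;
}

int main(void)
{
    /* The same IP resolves to different real servers per subscriber VLAN. */
    printf("vlan 100, 10.1.1.1 -> %s\n", find_real(100, 0x0A010101)->name);
    printf("vlan 200, 10.1.1.1 -> %s\n", find_real(200, 0x0A010101)->name);
    return 0;
}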
Load balancers should implement the above to fit into the virtual data center and solve these upcoming problems.