I can’t tell you how many times I have had to explain this to someone who has just upgraded their connection to 10, 45, or 100 Mb so they can quickly transfer their critical files to the other side of the planet, only to find they aren’t getting any better throughput than they had before. This is a common misunderstanding about the relationship between bandwidth and throughput. The bottleneck is not the bandwidth, it’s the latency, and it is tough to argue with the speed of light.
The main problem comes down to our reliance on very effective but very old protocols that were designed for a different type of network. The culprit here is TCP, the Transmission Control Protocol: the widely used transport protocol that most of the internet relies on.
TCP provides the mechanisms to control the flow of packets across a network. It allows you to recognize when packets get lost and retransmit them. It can identify when the network begins to get congested and throttle back. It does a great many very useful things. The problem is that in order for all of this to happen, the sender needs to receive an acknowledgment, or ACK packet, from the receiver.
This is where the latency of a network comes in. As the round trip time between endpoints grows, the amount of data that can flow across a TCP stream goes down. TCP gets hung up waiting for ACK packets, and the transfer rate drops. This is why someone can put in 10 Mb connections in Kansas and Jakarta and still be disappointed when they get less than a megabit of throughput between them.
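The ceiling is easy to compute: a single TCP stream can move at most one window of data per round trip, no matter how big the pipe is. Here is a back-of-the-envelope sketch in Python; the 250 ms round trip time is an illustrative guess for a Kansas-to-Jakarta path, not a measured value:

```python
def max_tcp_throughput_bps(window_bytes: int, rtt_seconds: float) -> float:
    """Upper bound for a single TCP stream: one window of data per round trip."""
    return window_bytes * 8 / rtt_seconds

# A typical 64 KB window with an assumed 250 ms round trip
bps = max_tcp_throughput_bps(64 * 1024, 0.250)
print(f"{bps / 1_000_000:.1f} Mb/s")  # about 2.1 Mb/s, on a 10 Mb and a 100 Mb link alike
```

Nothing about the link speed appears in the formula, which is exactly the point: past roughly 2 Mb/s on that path, the extra bandwidth sits idle.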
We are starting to see the limitations of the protocols that form the basis of network communication. This is why a whole new breed of network appliance is starting to emerge: the WAN accelerator. Without getting too complicated, something I’ll save for a later post, there are several ways in which you can make things go faster, and different products take advantage of different techniques.
Compress the data
If you compress the data going across your network, there are fewer packets to send and you get greater throughput. This assumes, of course, that you can compress and decompress fast enough to make it worth your while and that your data is compressible. A lot of traffic will benefit from compression: things like database log shipping or Word documents will see real gains. Things that are already compressed or encrypted will see little or no benefit, and in some cases compression can actually make them larger.
A lot of systems also perform header compression, which reduces the amount of data transmitted by compressing each packet’s headers, although this generally yields only a minimal improvement.
Increase the window size
There is a certain amount of data that can be sent before an ACK packet must be received. By significantly increasing this window, you can improve the throughput of a TCP stream. Most OSes default to a 64 KB limit on their window size, which is quickly exhausted on a high-bandwidth, high-latency network connection. Using the TCP window scaling extension, the window can be increased to as much as 1 GB.
This is a technique that is often used by acceleration appliances, and it can be very effective if there is not a lot of packet loss on the network. When there is a significant amount of packet loss this benefit can quickly disappear.
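On modern stacks, window scaling itself is negotiated automatically; what you can tune per socket is the buffer space that backs the window. A minimal Python sketch; the 4 MB request is an arbitrary example, and the kernel is free to clamp it to its configured maximum (net.core.rmem_max on Linux):

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Ask for a 4 MB receive buffer so more data can stay in flight between ACKs
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4 * 1024 * 1024)
# Read back what the kernel actually granted (Linux reports a doubled value)
granted = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print(f"kernel granted a {granted // 1024} KB receive buffer")
sock.close()
```

Acceleration appliances do the equivalent on your behalf, terminating the TCP connection locally and running a tuned stack across the WAN.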
Cache the data
The fastest packet is the one that never needs to be transmitted, which is why the biggest bang for your buck tends to come from caching. When you think about caching, you probably picture a whole file being stored and retrieved locally. Most of the accelerators instead use byte-level caching: they don’t look at files, but at byte patterns. This gives them the ability to cache portions of different documents that share the same chunks of data.
There is a huge amount of complexity around identifying and storing these byte patterns, and it is a big point of differentiation between vendors. Some do this very effectively and others not so much. Depending on the nature of your data, you might see certain vendors perform better than others.
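The core idea can be sketched in a few lines: split the byte stream into chunks, key each chunk by a hash, and store (or transmit) any given byte pattern only once. Real appliances use variable-size, content-defined chunking and far more sophisticated indexing; the fixed 64-byte chunks below are purely illustrative:

```python
import hashlib

CHUNK = 64  # toy fixed chunk size; real systems use content-defined boundaries

def chunk_refs(data: bytes, store: dict) -> list:
    """Split data into chunks and return hash references, filling the shared store."""
    refs = []
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)  # only previously unseen byte patterns are stored
        refs.append(digest)
    return refs

store = {}
doc_a = b"shared boilerplate " * 20 + b"unique to document A"
doc_b = b"shared boilerplate " * 20 + b"unique to document B"
refs_a = chunk_refs(doc_a, store)
refs_b = chunk_refs(doc_b, store)

# Chunks common to both documents are stored, and would be sent, only once
print(f"{len(refs_a) + len(refs_b)} references, {len(store)} unique chunks stored")
```

Two different documents that share boilerplate end up referencing the same stored chunks, so only the genuinely new bytes cross the wire.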
Protocol specific acceleration
The other place vendors seek to differentiate themselves is with protocol-specific tricks that improve throughput. This involves intelligent systems that can anticipate usage patterns and start pre-caching information before it is requested. If you open a file from a server, the accelerator will start caching the rest of the file so it is there when you need it. There are tricks for decrypting, then compressing and caching, portions of SSL traffic, and many more such optimizations. There is wide variance in how the different vendors perform in different environments.
Convert TCP to UDP
UDP does not rely on ACK packets and does not guarantee that packets make it to the other side. Because of this, UDP can traverse high-latency networks with greater throughput than TCP. The trade-off is that the application or device doing the conversion needs to handle flow control and data integrity on its own, and do it more effectively than TCP would. There are many products on the market that do this quite well.
The problem is that when you do this, you have to take on the work yourself: making sure all the packets arrive, putting them back in the correct order, and prioritizing the traffic across the network. So, depending on what you are sending across the wire, this may or may not be a worthwhile technique. I see this approach most often in software-based acceleration products.
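Since UDP gives you neither ordering nor delivery guarantees, that bookkeeping moves up into the application. A hypothetical, minimal framing sketch in Python, just sequence numbers and receiver-side reordering; a real product would also handle retransmission, pacing, and congestion control:

```python
import struct

HEADER = struct.Struct("!I")  # 4-byte big-endian sequence number per datagram

def frame(seq: int, payload: bytes) -> bytes:
    """Stamp a payload with a sequence number before handing it to a UDP socket."""
    return HEADER.pack(seq) + payload

def reassemble(datagrams: list) -> bytes:
    """Put datagrams back in order by sequence number, as the receiver must."""
    parsed = [(HEADER.unpack_from(d)[0], d[HEADER.size:]) for d in datagrams]
    return b"".join(payload for _, payload in sorted(parsed))

chunks = [b"fast ", b"file ", b"transfer"]
datagrams = [frame(i, c) for i, c in enumerate(chunks)]
datagrams.reverse()  # simulate out-of-order arrival
print(reassemble(datagrams))  # b'fast file transfer'
```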
Break it into multiple TCP streams
Another technique is to break a single TCP stream into multiple parallel TCP streams. Rather than waiting on the acknowledgments of a single stream, you can have multiple ACKs in flight at once. You run into issues with reassembling the stream, packet order, and so on, but you can saturate a circuit with TCP traffic this way.
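The splitting-and-reassembly bookkeeping looks roughly like this; the per-stream fetch is faked with a local byte string, where a real tool would open one TCP connection per byte range (for example with HTTP Range requests):

```python
from concurrent.futures import ThreadPoolExecutor

data = bytes(range(256)) * 100  # stand-in for the remote file

def fetch_range(start: int, end: int) -> tuple:
    # Stand-in for one TCP connection pulling bytes [start, end) of the file
    return start, data[start:end]

def parallel_fetch(size: int, streams: int) -> bytes:
    step = -(-size // streams)  # ceiling division: bytes per stream
    ranges = [(i, min(i + step, size)) for i in range(0, size, step)]
    with ThreadPoolExecutor(max_workers=streams) as pool:
        parts = pool.map(lambda r: fetch_range(*r), ranges)
    # Streams can finish in any order, so reassemble by offset
    return b"".join(chunk for _, chunk in sorted(parts))

print(parallel_fetch(len(data), 4) == data)  # True: the reassembled copy matches
```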
Another TCP extension option is Selective Acknowledgments, or SACK, which allows finer control over how packets are acknowledged, reducing some of the penalties of packet loss on high-latency network connections.
So the question remains: if the growth of bandwidth is outpacing TCP’s ability to deliver throughput, where do we go? There really is a looming crisis in the protocols we use. They are deeply embedded in the applications we use every day, but at the same time, they are not performing at the level we now require. It remains to be seen whether acceleration will be built into every network we build going forward, or whether there will be a shift away from some of the underlying protocols. I have to believe there is a limit to the benefits we will get out of acceleration, and that soon we will be talking about how to fix the network itself.