This is not as hot as it seems. The RTT quoted (50 ms) is ridiculously slow for most broadband connections.
If you really do have a connection that slow then yes, you can improve matters a bit by adjusting the window sizes; here is an article that goes into a lot more detail:
http://www.speedguide.net/articles/the-tcp-window-latency-an...
Also, you normally don't control the sending side, which can overrule any window size you request.
Tuning TCP has its uses, but client-side tuning by itself is very limited in effect; clients and servers tuned together for their particular connection can see some gain.
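For concreteness, the client-side knobs being talked about on Linux are the receive-buffer sysctls, which bound the window the client can advertise. A minimal sketch (the values shown are illustrative, not recommendations):

```shell
# Inspect the TCP receive buffer limits (min / default / max, in bytes)
# that cap the advertised receive window.
sysctl net.ipv4.tcp_rmem
sysctl net.ipv4.tcp_window_scaling   # must be 1 for windows > 64 KB

# As root, raise the maximum (example figures only):
sysctl -w net.ipv4.tcp_rmem="4096 87380 6291456"
```

Note that this only changes what the receiver is willing to accept; as the comment says, the sending side ultimately decides how much it puts on the wire.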
> The RTT quoted (50 ms) is ridiculously slow for most broadband connections.
The speed of light would like a word with you.
Even if you had a "straight" link across the surface of the earth, the information transited at 100% of c, and there were no processing/equipment-induced delays, New York to Los Angeles is ~26ms round-trip.
Unfortunately, the speed of light in fiber is not 100% of c, it's more like 66% of c, so your round-trip time is more like 40ms.
This still assumes zero congestion, zero equipment delays, and a direct A to B path. In practice, you will never have any of those.
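The arithmetic above is easy to check with a one-liner, using the same figures as the comment (~3,940 km New York to Los Angeles, light in fiber at ~66% of c):

```shell
awk 'BEGIN {
  c = 299792458        # speed of light in vacuum, m/s
  d = 3940000          # approx. New York - Los Angeles distance, m
  rtt = 2 * d / c      # ideal round trip at 100% of c
  printf "vacuum RTT: %.1f ms\n", rtt * 1000          # ~26.3 ms
  printf "fiber  RTT: %.1f ms\n", rtt / 0.66 * 1000   # ~39.8 ms
}'
```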
In practice, I get ~80ms from the SF Bay area to New York on an atypically fast (for the US) connection.
On similar distances/hop counts to the ones described in the article I get ~15 ms.
He's in Sweden and his host is in Stockholm.
If you test with google, then you have to take into account that google has 'points of presence' all over the globe, so if you test using 'google.com' their DNS server will hand you an IP that is physically close to you which will result in fewer hops.
> google has 'points of presence' all over the globe, so if you test using 'google.com' their DNS server will hand you an IP that is physically close to you which will result in fewer hops.
For anyone interested, it's called "GeoDNS".
On the other hand, the Google public DNS (IP: 8.8.8.8) uses "anycast" routing. You use the same IP address range no matter where you are located and the magic happens using BGP (a routing protocol).
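You can see both mechanisms from the command line; the commands below are illustrative and the answers depend entirely on where you run them:

```shell
# GeoDNS: google.com resolves to an address near the querying resolver,
# so two people in different countries get different answers.
dig +short google.com

# Anycast: Google Public DNS is reached at the same address everywhere;
# BGP routes each query to the nearest instance of 8.8.8.8.
dig +short google.com @8.8.8.8
```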
I've read the article three times, and I see no mention of anyone's location, distances, or number of hops, merely a statement that the link is 50ms round-trip.
Meanwhile, your statement also made no mention of distance, only a blanket "ridiculously slow for most broadband connections" without qualification, which is what I was responding to.
I'd be fascinated to know how you determined where he personally was when conducting the test, and the location of the other, unnamed server he was using to download the content:
"I only have approximate numbers since the RTT between the servers I was testing was a bit unstable. Virtual servers and all that"
I'd also be very grateful if you could respond to my broader point about your blanket statement, rather than continuing to draw attention away from it with snide remarks.
He's a Swedish national and his host is in Sweden. You can determine that from the .se at the end of the domain, and from a lookup of the domain name, which gives you an IP address attached to a router in Stockholm.
Sweden is a fairly large country, but not nearly as large as the continental US, so the in-flight time of packets on a single wire is going to be limited by roughly half the cross section of Sweden, say 500 km or so, rather than the thousands of kilometers from your example. Broadband is extremely common in Europe, with Sweden the leading country with respect to broadband penetration.
If he was testing from a location very far away from his local server I would assume (yes, that's a risk but I'll take it) that he would have mentioned that in the article.
The default assumption is that people who are from Sweden, and who host their machines in Sweden, are themselves currently in Sweden.
If he is on a very slow link (which is always possible) then he should have mentioned this in the article, and he probably should have done a traceroute -n to rule out a congested hop somewhere along the way.
The virtual server is most likely the culprit, that's why I said 'the result is not as hot as it seems'. My guess is that there is some underlying artifact here but there is not enough info in the article to figure out what that is. A real analysis would require much more information.
I ran the same tests in the article (minus the virtual servers) on two links that I control and saw a very minimal improvement, another reason to write 'not as hot as it seems'.
You can repeat all of this in a couple of minutes.
So, in closing: if you are going to make blanket statements about TCP performance you should either give all the particulars or I will assume reasonable defaults in trying to repeat your experiment.
Both the title here and the article suggest that this is some sort of silver bullet, and it definitely isn't. The specifics of the situation are what make it work for him; there is zero guarantee that doing the same will improve your situation, and I can think of some circumstances where it will make things worse.
'The speed of light would like a word with you' was plenty snide in its own right, and that's why I responded the way I did; apologies for that. But you made a huge assumption yourself by extrapolating to a situation that you are possibly familiar with, rather than researching the most likely situation the author of the article was in, which to me seemed to be the way to approach the problem and verify the result.
Of course I'd be interested in a magic one-line tweak that would improve all my short-lived connections, but this isn't it. Or if it is, then it will need a lot more documentation on how it can be used productively on random connections, rather than the way it is used here, to optimize a single link.
Better like that?
By the way, the google article linked below gives a much more in-depth argument for increasing the window size and also gives you a good idea on the limits of its effectiveness:
The article was about dealing with the slow start problem. It didn't seek to examine the causes or likelihood of latency in its or any other particular situation.
It had a well-defined problem: 50ms RTT combined with TCP slow start == more round trips and significantly more time for a small transfer.
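That cost is easy to quantify with a rough model: a hypothetical 100 KB transfer in ~1460-byte segments, with the congestion window doubling each round trip (this ignores delayed ACKs and receive-window limits, so it's a lower bound):

```shell
awk 'BEGIN {
  mss = 1460; size = 100000; rtt_ms = 50
  for (iw = 3; iw <= 10; iw += 7) {   # classic vs. enlarged initial window
    cwnd = iw; segs = 0; rtts = 0
    while (segs * mss < size) { segs += cwnd; cwnd *= 2; rtts++ }
    printf "initcwnd %2d: %d round trips (~%d ms)\n", iw, rtts, rtts * rtt_ms
  }
}'
# initcwnd  3: 5 round trips (~250 ms)
# initcwnd 10: 3 round trips (~150 ms)
```

At 50 ms per round trip, a couple of round trips saved is exactly the kind of win the article describes; at 5 ms RTT the same saving is barely noticeable, which is the crux of the disagreement above.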
The problem was mitigated with a neat little option in recent Linux kernel versions that most people don't know about.
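The thread doesn't reproduce the command, but assuming the option in question is the per-route initial congestion window exposed through iproute2 on recent kernels, the tweak looks roughly like this (needs root; the gateway address and interface are placeholders):

```shell
# Show the current default route, then re-add it with a larger
# initial congestion window. 192.0.2.1 / eth0 are made-up values;
# substitute whatever "ip route show default" prints for your machine.
ip route show default
ip route change default via 192.0.2.1 dev eth0 initcwnd 10
```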
Your response was a very spartan, mostly off-topic, exceedingly broad statement: "ridiculously slow for most broadband connections".
It doesn't matter that this particular situation might not have been so bad had the network been better (something we still haven't established, because all you've done is make an assumption about where the second server was located), what matters is that 50ms+ round-trip times are not in any way unusual in the general case.
You decided to silently narrow the issue dramatically to what you perceived to be the specific circumstances of the author, then make a broad statement about latency on broadband connections without presenting evidence or examining applicability to the wider Internet. This is especially strange, since it implies the author and his fellow Swedes will be the only ones reading an (English-language?!) blog, thus he wouldn't be concerned about latency to, for example, the UK, US, or Australia.
When called on it, your response was that I should have done the "research" you did, and effectively, that everyone should make the same assumptions you did.
Question for the advanced: with that kernel version warning, since CentOS 4.8/4.9 is stuck at 2.6.9-023stab053.2, does that mean I can definitely not use this workaround?
The .9 seems to imply not, but I am not sure how to interpret all those versioning extensions (i.e. stab053.2).
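A sketch of how to read such a version string (the suffix naming here is an assumption based on common vendor conventions):

```shell
uname -r
# e.g. 2.6.9-023stab053.2
#   2.6.9        -> the upstream kernel the vendor branched from; features
#                   from later mainline kernels are absent unless the
#                   vendor explicitly backported them.
#   023stab053.2 -> the vendor patch level; the "stab" suffix is used by
#                   OpenVZ "stable" kernels, common on VPS hosts.
```

So the base version, not the patch suffix, is what matters for a feature like this: a 2.6.9-based kernel will not have it regardless of the patch level.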
It's impossible to state with absolute certainty without going through the patches that have been added by Red Hat to your kernel, but I'd be surprised if something like this from so far down mainline got backported. 4.x is pretty old, and the regular life cycle ends early next year.
To be fair, CentOS 6.1 is current - and has a "continuous release" repository too.
But those (like me) running old installs on CentOS 4.x are pretty much stuck unless they are running fully dedicated and can do an unofficial upgrade (tricky stuff).
The "continuous release" repository provides individual updates from RHEL 6.2 as CentOS finishes repackaging them, rather than requiring you to wait until they've repackaged all of them. It still won't get you a kernel newer than (a heavily-patched) 2.6.32 from 2009.
Yep, this server is stuck on CentOS 5.5, which is apparently the latest the host offers.
I'm glad the new version uses a current kernel - I (naively) assumed that as the server was set up recently, the version of CentOS installed would be the newest one.
What are your requirements? There are a couple of good VPS providers out there that offer up-to-date OS options. Look at Linode if you haven't already. (They've added London and Tokyo datacenters in the recent past, so if geography was a factor those might help.)
You can actually run unofficial/custom images on Linode with the pvgrub bootloader, too, though of course it's a little more work. People have even been running FreeBSD.
If you're going to experiment with this, make sure you look at things like the MTU as well; it is not always set to the optimal size for your link. If you have a high-latency link in there somewhere (say you're in the mountains in Colorado and your only way to get online is through a satellite link) then this really starts to matter.
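One way to sanity-check the path MTU by hand from a Linux client (the target host is hypothetical; 1472 bytes of ICMP payload plus 28 bytes of headers equals the common 1500-byte Ethernet MTU):

```shell
# Send a single do-not-fragment ping at full Ethernet size; if it fails
# with a "message too long" / "Frag needed" error, step the size down
# until it succeeds to find the path MTU.
ping -M do -s 1472 -c 1 example.com
```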
If you really do have a connection that slow then yes, you can improve matters a bit by adjusting the window sizes; here is an article that goes into a lot more detail:
http://www.speedguide.net/articles/the-tcp-window-latency-an...