Node.js is not a fad. It represents the first workable JavaScript-based server with mass appeal. The real story is that JavaScript is here to stay. There have been other server-side JavaScript frameworks in the past, but none of them have taken off like Node.js. If Vert.x wins over the JavaScript crowd, then that's great, because coders will be able to write JavaScript.
I think it's actually unfortunate that the first popular server-side JS framework is so tied to the async model. I'd be more likely to try it if it weren't.
I wouldn't dismiss the node.js stack quite so easily. Those "proven" technologies you mention had to go through their own lifecycle of continued improvement.
I remember a time when my colleagues who were steeped in C++ had a good laugh at my expense because I was building server-side web applications with a new framework and a hot, new language. It woefully under-performed similar C++ applications in benchmark tests.
It was 1998, and the language was Java. I could write my applications much faster and in a more maintainable way than they could, but they didn't care. Their technology was proven, and Java was simply a fad.
Not really. It shows that this benchmark is crap (likely benchmarking disk I/O versus disk I/O plus some caching). Read Isaac's comment for more detail; he sums it up pretty well. No profiling info, a custom test, and no analysis besides some pretty graphs.
I have a hard time believing the JVM is really 10x faster than v8 for such a simple server.
It would surprise me because there just isn't that much time being spent in JavaScript on this test. Do the math.
If a program spends 2% of its time in V8, 10% of its time in the network stack, and the remaining ~88% reading a file from disk, how could you possibly make it 10 times faster by optimizing away the 2%? Even if the JVM were 100 times faster than V8, the overall speedup would be just under 2%.
I.e., if you were seeing 1000 requests per second before, and you're spending 2% of your time parsing and running actual JavaScript, and you make the VM go to literally zero latency (which is impossible, but it's the asymptotic goal), then you'd expect each request to take 2% less time. So, they'd go from an average of 1ms to 0.98ms. Congratulations: you've increased your 1000 qps server to 1020.4 qps.
On the other hand, if you take the 80% of time spent reading the file over and over again, and optimize that down to zero (again, impossible, but the asymptote we approach as it is reduced), then you would expect every request to take 80% less time. So, your 1ms response becomes a 0.2ms response, and your 1000 qps server is now a 5000 qps server.
So, no, if you respond to 10x as many requests, it's almost certainly either a bug in the benchmark, or some apples-to-oranges comparison of the work it's doing. I called out one obvious issue like this, that the author is using a deprecated API that's known to be slow. But even still, it's not THAT slow.
You can't summon speed out of the ether. All you can do is reduce latency, and you can only reduce the latency that exists. Even if your VM is faster, that only matters if your VM is actually a considerable portion of the work being done. The JVM and V8 are both fast enough to be almost negligible.
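The arithmetic above is just Amdahl's law. A quick sketch (the 2% and 80% fractions are the illustrative numbers from this comment, not measurements):

```javascript
// Amdahl's law: overall speedup from accelerating one fraction of the work.
function speedup(fractionOptimized, factor) {
  // New total time as a fraction of the old: the optimized slice shrinks
  // by `factor`, everything else is unchanged.
  const newTime = (1 - fractionOptimized) + fractionOptimized / factor;
  return 1 / newTime;
}

const baseQps = 1000;

// Optimize away the 2% spent in the JS VM entirely (factor -> Infinity):
console.log((baseQps * speedup(0.02, Infinity)).toFixed(1)); // "1020.4"

// Optimize away the 80% spent on repeated file reads instead:
console.log((baseQps * speedup(0.80, Infinity)).toFixed(1)); // "5000.0"
```

Even an infinitely fast VM caps the win at the share of time the VM actually accounts for, which is the whole point of the comment above.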
The flaw in your argument is that the server is spending 80% of its time reading a file from disk.
It's more than likely that it spends close to 0% of its time in disk access, since it's serving the same file, which the OS will cache in memory.
About the deprecated API. Earlier on I updated the results so they don't use that API, and I also added results for using streams. The results are slightly better but not by very much.
I use node but I kind of felt that this sort of scenario should be pretty obvious before you use it. I never use node to serve up static files, I use nginx instead. Small static files will be cached by the OS, as you said, which makes subsequent reads really quick. Since this is a small text file, it compresses really well over the wire too, so the time to serve up the request is lowered too. There's simply not much I/O to be a bottleneck in this benchmark scenario.
I wouldn't say that this is an unfair benchmark. But then I don't use node because it's "web scale". I use it because using javascript on the server, client, and on the wire (JSON) is pretty damn slick.
I'm interested in checking out vert.x. But, and this goes for everyone, let's not let this whole affair degenerate. Right tool for the right job. This particular benchmark scenario is explicitly the wrong way to use node. I'd suspect that if you were to change the readFile into an HTTP request, however, the numbers might change. I also wouldn't be butt-hurt if vert.x still came out on top. There are still a ton of things to love about node.
The statement, "The JVM and V8 are both fast enough to be almost negligible" is flawed. OS file system caching pretty much makes disk I/O negligible, so the time spent in the JVM and V8 is the majority of the time. The benchmark is consistent with the system's behavior, showing that the difference between the JVM and V8 is not negligible but substantial.
The discrepancy in this benchmark is almost entirely copying the results of read(2) into a userspace buffer. OS caching is important, but it's not the whole story here.
As some other commenters pointed out, if you pre-load the file contents, the discrepancy goes away. Also, if you actually limit vert.x to 1 CPU, it gets erratic and loses any performance benefits.
Is it possible that his JVM is set up to execute with different ulimit settings than Node? The Node numbers look suspiciously close to multiples of 1024 (e.g., the default file descriptor limit).
EDIT: whoops, saw 4096 for the stream but it's 4896.
I cannot comment for deelowe, but every time in the past I've seen such a wide difference for such a simple benchmark, there was some methodology problem. After all, for such a simple benchmark, most of your time is spent in the OS.
It's possible vert.x really is that much faster, but given that history I'm reserving judgment until profiles and root cause(s) become available.