I feel that the cause of the misunderstanding is that "thread" in this context usually means "thread-based IO": when a thread issues an IO request, it remains blocked until the request completes, leaving CPU time to other threads. This holds regardless of how many processors you have; it works perfectly fine on a single processor.
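A minimal sketch of that blocking model in Java (using an in-process pipe rather than a real socket, purely for illustration): the reading thread parks inside `read()`, consuming no CPU, until another thread produces data.

```java
import java.io.IOException;
import java.io.PipedInputStream;
import java.io.PipedOutputStream;

// Thread-based (blocking) IO sketch: the reader thread blocks inside
// read() and uses no CPU until a byte arrives; the OS scheduler runs
// other threads in the meantime.
public class BlockingIoSketch {
    public static void main(String[] args) throws Exception {
        PipedOutputStream out = new PipedOutputStream();
        PipedInputStream in = new PipedInputStream(out);

        Thread reader = new Thread(() -> {
            try {
                int b = in.read(); // blocks here until data is available
                System.out.println("read byte: " + b);
            } catch (IOException e) {
                e.printStackTrace();
            }
        });
        reader.start();

        Thread.sleep(100); // reader is now parked in read()
        out.write(42);     // unblocks the reader
        reader.join();
    }
}
```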
Async IO is different: it's a different pattern of access to IO, and as such it's orthogonal to whatever threading or multiprocessing is going on to actually do stuff in response to that IO.
> It's been found that if you employ only a single thread (that can run any number of tasks) you get a performance boost over using a larger threadpool under some conditions, but a single thread wouldn't let you scale by taking advantage of all processors.
Indeed. Node.js's solution to this problem is to run a cluster of Node.js processes with a dispatcher process on top, so multiprocessing is done the "old way".
In that case, Java gives you both options: blocking IO and non-blocking, polling IO. Netty can use either, though most people use it with the non-blocking option. Experiments have shown that sometimes one is faster and sometimes the other, depending on the workload.
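For contrast with the blocking model, here is a minimal sketch of Java's non-blocking option using `java.nio`: a single thread drives a `Selector` that multiplexes many channels, reacting only when a channel reports readiness. (The loopback client and single-byte payload are just illustration; this is roughly the mechanism Netty builds on, not Netty's own API.)

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

// Non-blocking IO sketch: one thread, one Selector, readiness-driven.
// Instead of one blocked thread per connection, the event loop is told
// which channels are ready and acts only on those.
public class NonBlockingIoSketch {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();

        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress("127.0.0.1", 0)); // any free port
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        // A plain client, connecting and sending one byte.
        int port = ((InetSocketAddress) server.getLocalAddress()).getPort();
        SocketChannel client =
                SocketChannel.open(new InetSocketAddress("127.0.0.1", port));
        client.write(ByteBuffer.wrap(new byte[] { 7 }));

        boolean done = false;
        while (!done) {
            selector.select(); // waits until at least one channel is ready
            for (SelectionKey key : selector.selectedKeys()) {
                if (key.isAcceptable()) {
                    SocketChannel conn = server.accept();
                    conn.configureBlocking(false);
                    conn.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    ByteBuffer buf = ByteBuffer.allocate(16);
                    ((SocketChannel) key.channel()).read(buf);
                    buf.flip();
                    System.out.println("got byte: " + buf.get());
                    done = true;
                }
            }
            selector.selectedKeys().clear();
        }
        client.close();
        server.close();
        selector.close();
    }
}
```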