It's said that one of the main advantages of Node (and presumably Twisted et al.) over more conventional threaded servers is the high concurrency enabled by the event loop model. The biggest reason for this is that each thread has a high memory footprint and switching contexts is fairly expensive. When you have thousands of threads, the server spends most of its time switching from thread to thread.

My real question is, why don't operating systems or the underlying hardware support much more lightweight threads? If they did, could you solve the 10k problem with plain threads? If they can't, why is that?

Modern operating systems can support the execution of very many threads.
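As a rough illustration (a sketch only, with arbitrary numbers, not a benchmark), a stock OS will happily schedule a thousand mostly-idle threads, which is the shape of a typical I/O-bound server workload:

```python
# Sketch: spawn many mostly-idle threads to show a modern OS can schedule them.
# The thread count and sleep times are arbitrary illustrative choices.
import threading
import time

def idle_worker(stop_event):
    # Each thread mostly sleeps, like a connection waiting on I/O.
    while not stop_event.is_set():
        time.sleep(0.1)

stop = threading.Event()
threads = [threading.Thread(target=idle_worker, args=(stop,)) for _ in range(1000)]
for t in threads:
    t.start()

time.sleep(2)   # let them all run for a moment
stop.set()
for t in threads:
    t.join()
print("ran", len(threads), "threads")
```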

More generally, hardware just keeps getting faster (and lately, it has been getting faster in a way that's much friendlier to multithreading and multiprocessing than to single-threaded event loops, i.e. an increased number of cores rather than increased processing throughput in a single core). If you can't afford the overhead of a thread today, you can probably afford it tomorrow.

What the cooperative multitasking systems of Twisted (and presumably Node.js et al.) offer over pre-emptive multithreading (at least in the form of pthreads) is ease of programming.

Correctly using multithreading involves being much more careful than correctly using a single thread. An event loop is just a way of getting multiple things done without going beyond a single thread.
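To make that concrete, here is a minimal sketch of the idea (using Python's asyncio purely for illustration; Twisted and Node follow the same principle): several tasks interleave on one thread by yielding whenever they wait for I/O, so there is never shared state being touched from two places at once.

```python
# Minimal event-loop sketch: two "connections" interleave on a single thread.
import asyncio

async def handle(name, delay):
    # Simulate waiting on I/O; while this task waits, the loop runs others.
    await asyncio.sleep(delay)
    print(f"{name} done after {delay}s, all on one thread")

async def main():
    await asyncio.gather(handle("conn-1", 1.0), handle("conn-2", 0.5))

asyncio.run(main())
```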

Considering the proliferation of parallel hardware, it would be ideal for multithreading or multiprocessing to get easier to do (and easier to do correctly). Actors, message passing, perhaps even petri nets are some of the solutions people have attempted for this problem. They are still very marginal compared to the mainstream multithreading approach (pthreads). Another approach is SEDA, which uses multiple threads to run multiple event loops. This hasn't caught on either. A toy message-passing sketch follows below.
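For a flavour of the message-passing style (a toy sketch, not any particular actor library), each worker owns its own state and communicates only through queues, so no locks are needed on shared data:

```python
# Toy message-passing sketch: the worker owns its counter and receives
# messages over a queue, so no lock guards the state.
import queue
import threading

def counter_actor(inbox: queue.Queue, results: queue.Queue):
    count = 0
    while True:
        msg = inbox.get()
        if msg == "stop":
            results.put(count)
            return
        count += msg   # only this thread ever touches `count`

inbox, results = queue.Queue(), queue.Queue()
threading.Thread(target=counter_actor, args=(inbox, results)).start()

for _ in range(1000):
    inbox.put(1)
inbox.put("stop")
print("final count:", results.get())
```

Because every piece of state has exactly one owner, correctness doesn't depend on getting lock discipline right; that is the appeal of the approach.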

So, the people using event loops have probably decided that programmer time is worth more than CPU time, the people using pthreads have probably decided the opposite, and the people exploring actors and the like would like to value both kinds of time more highly (they're clearly insane, which is probably why nobody listens to them).

The problem isn't really how heavyweight the threads are, but the fact that to write correct multithreaded code you need locks on shared items. That prevents it from scaling with the number of threads, because threads end up waiting on one another to acquire locks, and you quickly reach the point where adding more threads has no effect, or even slows the system down, as lock contention increases.
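A contrived sketch of the pattern (illustrative only; in CPython the GIL muddies real timings, so don't read it as a benchmark): every thread has to take the same lock to touch the shared counter, so adding threads mostly adds waiting rather than throughput.

```python
# Contrived sketch of lock contention: all threads serialize on one lock.
import threading

counter = 0
lock = threading.Lock()

def worker(iterations):
    global counter
    for _ in range(iterations):
        with lock:        # every thread queues up here
            counter += 1

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("counter:", counter)   # correct, but the lock was the bottleneck
```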

In many cases you can avoid locking, but it's very hard to get right, and sometimes you simply need a lock.

So if you're limited to a small number of threads, you may well find that removing the overhead of having to lock resources at all, or even think about it, makes a single-threaded program faster than a multithreaded program no matter how many threads you add.
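For comparison with the contended sketch above (same hypothetical workload, same total number of increments), the single-threaded version needs no lock at all:

```python
# Single-threaded version of the same workload: no lock, no contention.
counter = 0
for _ in range(8 * 100_000):
    counter += 1
print("counter:", counter)
```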

Basically, locks can (depending on your program) be very expensive and can stop your program from scaling beyond a few threads. And you almost always have to lock something.

It isn't the overhead of a thread that's the problem, it's the synchronization between the threads. Even if you could switch between threads instantly and had infinite memory, none of that helps if each thread just ends up waiting in a queue for its turn at some shared resource.