Why does SpiderMonkey use its own NIO Impl?

Computers are normally fast enough to just drain a network connection. In the new SM you are informed about new messages on another thread, so you don’t block the network with your data processing (which you should do on yet another thread anyway, e.g. the OpenGL thread).

scrubalub said:
I can imagine setting this up; the only problem I have is that since I'm using basic java.io, the API calls tend to block when you are reading from an InputStream. For example, ObjectInputStream.readObject() blocks until it can read from the stream. Having multiple connections on one thread (at least in my case) would potentially cause starvation.

Is what I just described the reason for using NIO?


Yes, that's exactly what NIO is for... specifically, the Selector class allows one thread to handle hundreds of connections.
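
Roughly, the pattern looks like this; just a sketch of a single-threaded selector loop (arbitrary port, no message framing), not the actual SpiderMonkey code:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class SelectorLoop {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();

        // Non-blocking server socket, registered for accept events.
        ServerSocketChannel server = ServerSocketChannel.open();
        server.configureBlocking(false);
        server.bind(new InetSocketAddress(5110)); // port is just an example
        server.register(selector, SelectionKey.OP_ACCEPT);

        ByteBuffer buffer = ByteBuffer.allocate(4096);

        // One thread services every connection: select() blocks until
        // some registered channel is ready, then we handle only those.
        while (true) {
            selector.select();
            Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
            while (keys.hasNext()) {
                SelectionKey key = keys.next();
                keys.remove();

                if (key.isAcceptable()) {
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    SocketChannel channel = (SocketChannel) key.channel();
                    buffer.clear();
                    int read = channel.read(buffer);
                    if (read == -1) {
                        key.cancel();
                        channel.close();
                    } else {
                        buffer.flip();
                        // hand the bytes off to another thread for decoding,
                        // so message processing never stalls the selector
                    }
                }
            }
        }
    }
}
```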

Different situations will call for different strategies, though. I can imagine if someone was running a game server on a 32 core system they'd hope that more than one thread was reading from the network... ;)

The new SM redesign is supposed to allow different threading/connection implementations underneath the new standard API. Currently it just does the single-threaded NIO approach for the server, but this could be swapped out for a thread-per-connection model. I doubt either model would be noticeably different under 100 connections or so, though.

My own plugin framework uses NIO as well for the distributed functionality. It has three threads: one for reading, one for writing, and one for accepting connections. Oh, and of course a main thread to link those three together. This works like a charm, since on a dual-core system four threads can usually run side by side without any reduced performance.

…and most of the time, network-related threads are actually waiting.



Do you use a different selector for each thread then? One of old (old as in two weeks ago :)) SpiderMonkey’s problems was messing with the same SelectionKeys from multiple threads. Random badness ensues…

Different selectors. Each thread has its own selector, which listens for a specific event: read, write, or accepting connections. If you want to take a look at the code, the entire plugin framework can be found at sourceforge: http://sourceforge.net/projects/pffj/. Hardly any documentation there at the moment, though, since I’m the only person using it.



No wait, that’s not true. There’s the slightly outdated wiki of course. I haven’t updated that in a while, but it should be mostly up to date.
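
For the curious, the split is roughly this shape; a bare-bones sketch of “one selector per thread, one interest op per selector” (channel registration and handlers left out), not the actual pffj code:

```java
import java.io.IOException;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;

// Sketch only: each thread owns its own Selector, so no SelectionKey is
// ever touched from two threads at once.
public class PerOpSelectors {

    static Thread selectorThread(String name, Selector selector) {
        return new Thread(() -> {
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    selector.select();
                    for (SelectionKey key : selector.selectedKeys()) {
                        // dispatch to the handler for this op (accept/read/write)
                    }
                    selector.selectedKeys().clear();
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        }, name);
    }

    public static void main(String[] args) throws IOException {
        Selector acceptSelector = Selector.open(); // channels registered with OP_ACCEPT
        Selector readSelector   = Selector.open(); // channels registered with OP_READ
        Selector writeSelector  = Selector.open(); // channels registered with OP_WRITE

        selectorThread("accept", acceptSelector).start();
        selectorThread("read", readSelector).start();
        selectorThread("write", writeSelector).start();
        // a main thread would register channels and link the three together
    }
}
```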

Not sure how widely accepted this is, but it was an interesting read:

http://www.thebuzzmedia.com/java-io-faster-than-nio-old-is-new-again/

I believe it. And certainly on a multicore system, especially as core counts get larger, a multithreaded network core will be better. I actually architected the new SpiderMonkey to support either approach; I just didn’t get a chance to implement the socket-only approach on the server.



…the client side in the new SpiderMonkey is 100% not NIO. NIO is only used in the TCP server kernel… and that’s swappable.

I have written my own networking piece with one thread per connection. However, I want to improve that model, but after reading all these different sources I can’t decide which approach to use. I wonder if a multiplexer approach with java.io and thread pooling would be a good choice?

I’ll tell you that the pitfalls with an NIO implementation are numerous. It is very unforgiving of threading issues… which was one of the reasons I rewrote it rather than keeping the old SpiderMonkey implementation.



Whereas I think a straight multi-threaded java.io implementation is reasonably straightforward. Modern OSes can create and manage thousands and thousands of threads… and when they are blocked on TCP calls they cost little to nothing. Let the thread scheduler do the work of the NIO selector thread, in my opinion. :slight_smile:
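
Something like this is really all the skeleton you need; just a sketch with an arbitrary port and no message framing, not what SpiderMonkey would ship:

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class BlockingServer {
    public static void main(String[] args) throws IOException {
        // One thread per connection; a cached pool reuses idle threads.
        ExecutorService pool = Executors.newCachedThreadPool();

        try (ServerSocket server = new ServerSocket(5110)) { // example port
            while (true) {
                Socket socket = server.accept();
                pool.execute(() -> handle(socket));
            }
        }
    }

    static void handle(Socket socket) {
        try (Socket s = socket; InputStream in = s.getInputStream()) {
            byte[] buf = new byte[4096];
            int read;
            // read() blocks, but only this connection's thread waits here;
            // the OS scheduler plays the role of the NIO selector.
            while ((read = in.read(buf)) != -1) {
                // decode and dispatch the message
            }
        } catch (IOException e) {
            // connection dropped; nothing else to clean up in this sketch
        }
    }
}
```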