SpiderMonkey and ByteBuffer serialization

I have been writing a voxel game with multiplayer support… Needless to say, I need to transmit block data from the server to the client quite fast. Since I also need to save block data on the server side, I have decided to just use the same serializer for both saving and networking.

Here we get to my problem. My message (tagged with @Serializable) contains a direct ByteBuffer, which holds all the data the client needs to render that chunk of blocks. What that data actually is isn’t relevant; I have checked that the buffer contents (and size) are as intended.

When I try to send a message containing that buffer to the client, the sending thread freezes and the client receives nothing. No error messages, warnings or anything… It just doesn’t work. I added a debug message before and after that method call; only the one before is ever printed.

I’m wondering if the buffer is too big to be sent over the network, but it was only 201 bytes long in one test case and it still didn’t work.

Any advice would be appreciated. I have no idea if I’m doing something wrong or if SpiderMonkey (version 3.1-beta1) has a bug.

SpiderMonkey messages are limited to 32k… but I think it should at least have logged an error if that were the issue. SpiderMonkey also has no built-in support for sending ByteBuffers, so you must have added a serializer for that? Otherwise I’d have expected the registration of the message class itself to throw an exception.

Maybe the thread you are calling send() from is swallowing the exceptions higher up. Put a try/catch(Throwable) around your send() call to see if it is throwing exceptions.
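Something along these lines should surface it (connection and message here are just placeholder names):

try {
    connection.send(message);
} catch (Throwable t) {
    // Some thread pools silently swallow exceptions thrown from their tasks,
    // so log anything that escapes the send.
    t.printStackTrace();
}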

Edit: do note that ByteBuffers are not really safe to use in this way unless you create different views. Thread safety will also be an issue if you are writing to them… but ByteBuffers have state even when being read.


Thanks for the quick response!

Doing the try/catch allowed me to get the following exception (it seems using slf4j messed up some of JME’s logging):
java.nio.BufferUnderflowException
    at java.nio.Buffer.nextGetIndex(Buffer.java:506)
    at java.nio.HeapByteBuffer.getShort(HeapByteBuffer.java:310)
    at com.jme3.network.serializing.Serializer.readClass(Serializer.java:375)
    at com.jme3.network.serializing.Serializer.readClassAndObject(Serializer.java:389)
    at com.jme3.network.serializing.serializers.FieldSerializer.readObject(FieldSerializer.java:163)
    at com.jme3.network.serializing.Serializer.readClassAndObject(Serializer.java:392)
    at com.jme3.network.base.MessageProtocol.createMessage(MessageProtocol.java:180)
    at com.jme3.network.base.MessageProtocol.addBuffer(MessageProtocol.java:160)
    at com.jme3.network.base.ConnectorAdapter.run(ConnectorAdapter.java:169)

The interesting thing here is that this stack trace has no reference to my custom buffer serializer. Even stranger, I always use direct byte buffers in my code… This exception comes from a heap byte buffer, which I guess is what SpiderMonkey uses internally. It is quite possible that I’m doing something wrong, but I have no idea what that could be. Any ideas?

Edit: This error appears on the client side. For some reason, sending byte buffers using my serializer makes them impossible to deserialize.

If that’s on the server then it’s when reading a message from the network connection, not even sending.

The exception indicates that it is trying to read the class ID from an empty buffer. So it’s like the sender sent a bad message.

Edit: and I have no idea how you’ve set up your serializers or anything. Your ByteBuffer won’t serialize without a custom serializer, so I don’t know how you’ve wired that up. If you haven’t and are still managing to send messages somehow, then that’s a strange issue in itself.

Edit: Oops. I think I figured it out.

I misunderstood something in the ByteBuffer javadocs, and as a result the JME buffer got corrupted. Naturally the error did not mention my code, since messing with the buffer was not enough to cause the exception on its own; something had to read data from it afterwards.

Edit 2: But I still get the same error… Weird. Serializer: http://hastebin.com/erizafogek.java

And I’m registering this serializer with:
Serializer.registerClass(ByteBuffer.allocateDirect(0).getClass(), new ByteBufferSerializer());
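For illustration only (the actual serializer is at the link above and may well differ), a ByteBuffer serializer along these lines would length-prefix the remaining bytes on write and rebuild a direct buffer on read:

import java.io.IOException;
import java.nio.ByteBuffer;

import com.jme3.network.serializing.Serializer;

@SuppressWarnings("unchecked")
public class ByteBufferSerializer extends Serializer {

    @Override
    public ByteBuffer readObject(ByteBuffer data, Class c) throws IOException {
        int length = data.getInt();                 // length prefix
        byte[] payload = new byte[length];
        data.get(payload);                          // copy the payload bytes
        ByteBuffer result = ByteBuffer.allocateDirect(length);
        result.put(payload);
        result.flip();                              // ready for reading
        return result;
    }

    @Override
    public void writeObject(ByteBuffer buffer, Object object) throws IOException {
        // Duplicate so the original buffer's position/limit stay untouched.
        ByteBuffer value = ((ByteBuffer) object).duplicate();
        buffer.putInt(value.remaining());           // length prefix
        buffer.put(value);                          // payload
    }
}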

Without some care to use ByteBuffer.duplicate() for every different view (i.e. every outbound message), you will also get really weird state across senders of the same data.
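For example (chunkData and ChunkMessage are placeholder names):

// Give each outbound message its own view of the shared data; position and
// limit changes then stay local to that view and never touch the original.
ByteBuffer view = chunkData.duplicate();
connection.send(new ChunkMessage(view));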

Personally, I’d steer away from ByteBuffers here. Not sure what the direct memory buys you anyway.

Well, I could use a byte array in the message. I haven’t done so because chunk serialization/deserialization operates on byte buffers to allow fast loading of data from disk using memory mapped files. I’m using the same code for network serialization.

But maybe I can leave optimizations out here and just construct byte arrays… I’ll try that.
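For example, flattening a direct buffer into a plain array for the message is just this (chunkBuffer is a placeholder for the chunk’s buffer):

// Copy the buffer's remaining bytes into a plain array for the message;
// duplicating first leaves the original buffer's position/limit untouched.
ByteBuffer view = chunkBuffer.duplicate();
byte[] payload = new byte[view.remaining()];
view.get(payload);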

You may also want to benchmark whether it’s ultimately any faster.

Obviously, I don’t know what exactly you are using your chunks for, but in Mythruna’s case I found that disk I/O was ultimately negligible compared to all of the actual data access at runtime… which is slower using direct buffers. You might be optimizing one relatively unimportant place while increasing the cost of everything else.

In Mythruna’s case, the data is even compressed on disk… so the benefit of memory mapped files disappeared super early. It only takes <10 ms to fully read and decompress a chunk anyway, as I recall.

Good to know, thanks! In my case, it might be a bit different since a “chunk” is actually an octree which gets deserialized into a Java object. Time to benchmark it, I guess…

My guess is that this direct memory access may eat anything you saved by memory mapping the files.

Honestly, I’ve rarely found a case where BufferedInputStream wasn’t the magic bullet. The use-case where memory mapped files are useful is very narrow.
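For comparison, reading a compressed chunk the plain-stream way is roughly this much code (the length-prefixed gzip file layout here is just an assumption for the example):

import java.io.BufferedInputStream;
import java.io.DataInputStream;
import java.io.FileInputStream;
import java.io.IOException;
import java.util.zip.GZIPInputStream;

public class ChunkIO {

    // Reads one compressed chunk file into a byte array.
    public static byte[] readChunk(String path) throws IOException {
        try (DataInputStream in = new DataInputStream(
                new GZIPInputStream(
                    new BufferedInputStream(new FileInputStream(path))))) {
            byte[] data = new byte[in.readInt()];   // assumed length prefix
            in.readFully(data);
            return data;
        }
    }
}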