
You split the message and use an adapted protocol.
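A minimal sketch of what "split the message and use an adapted protocol" could look like: each chunk carries a small header (message id, chunk index, chunk count) so the receiver can reassemble. The header layout and function names here are illustrative assumptions, not part of any real library.

```python
import struct

# Hypothetical chunk header: message_id, chunk_index, chunk_count (network order).
HEADER = struct.Struct("!IHH")
CHUNK_PAYLOAD = 1200  # stay safely under a 1500-byte MTU

def split_message(message_id: int, payload: bytes) -> list:
    """Split a payload into MTU-friendly chunks, each with a reassembly header."""
    chunks = [payload[i:i + CHUNK_PAYLOAD]
              for i in range(0, len(payload), CHUNK_PAYLOAD)] or [b""]
    return [HEADER.pack(message_id, i, len(chunks)) + c
            for i, c in enumerate(chunks)]

def reassemble(packets: list) -> bytes:
    """Rebuild the original payload; raises if any chunk is missing."""
    parts = {}
    count = None
    for p in packets:
        _msg_id, idx, cnt = HEADER.unpack_from(p)
        parts[idx] = p[HEADER.size:]
        count = cnt
    if count is None or len(parts) != count:
        raise ValueError("missing chunks")
    return b"".join(parts[i] for i in range(count))
```

A real protocol would also need retransmission or ordering guarantees if this runs over UDP; the sketch only shows the framing.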

Yep, what he said.



It’s a tradeoff because of the way Serializer works.

Compression is not hard to support… there is already a GZIPMessage type that doesn’t work only because of a quirk in the API conversion I did.



Variable message length is harder, though, because of the way Serializer is architected. I really want to rewrite it someday, but that’s a sizable effort. I’d really like something based on bit streams, personally. Because Serializer uses ByteBuffer to write messages, it needs some maximum size allocated before the message can even be serialized. That’s a pain, and it’s the root of the size limitation.
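To make the size limitation concrete: a writer backed by a fixed-capacity buffer must pick a maximum size before serializing anything, while a growable writer has no such cap. `FixedBuffer` and `GrowableWriter` below are stand-ins for the types being discussed, not the library's real API.

```python
class FixedBuffer:
    """Mimics a preallocated buffer: capacity must be chosen up front."""
    def __init__(self, capacity: int):
        self._buf = bytearray(capacity)
        self._pos = 0

    def write(self, data: bytes):
        if self._pos + len(data) > len(self._buf):
            # The root of the size limitation: no room to grow.
            raise OverflowError("message exceeds preallocated capacity")
        self._buf[self._pos:self._pos + len(data)] = data
        self._pos += len(data)

    def to_bytes(self) -> bytes:
        return bytes(self._buf[:self._pos])


class GrowableWriter:
    """Grows as needed, so no maximum message size must be known in advance."""
    def __init__(self):
        self._buf = bytearray()

    def write(self, data: bytes):
        self._buf += data

    def to_bytes(self) -> bytes:
        return bytes(self._buf)
```

A bit-stream based rewrite would go further than this byte-level sketch, but the growable-versus-fixed distinction is where the current limit comes from.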

Hey, you know most routers connecting two peers only accept frames of at most 1500 bytes? That’s the Ethernet MTU. Are you sure about what you’re doing?

The 1500-byte max is at a lower level.

Yeah, the MTU is at a lower level. It does affect throughput, though. For example, a UDP packet that exceeds the MTU is more likely to be dropped, since IP has to fragment it and reassemble the full datagram on the other end, and losing any one fragment loses the whole packet. So while 100% of 100-byte UDP messages might get there, statistically it’s very likely that many 32 KB UDP packets get lost.
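A back-of-the-envelope check of that claim: an IP datagram is lost if any one of its fragments is lost, so with per-fragment loss rate p and n fragments, delivery probability is (1 − p)^n. The header-size arithmetic is simplified and the loss rate is an illustrative assumption, not a measurement.

```python
import math

def fragments(udp_payload: int, mtu: int = 1500) -> int:
    """Rough fragment count: 20-byte IP header per fragment, 8-byte UDP
    header counted once (simplified; ignores the 8-byte alignment rule)."""
    per_fragment = mtu - 20
    return math.ceil((udp_payload + 8) / per_fragment)

def delivery_probability(udp_payload: int, p_loss: float) -> float:
    """Probability the whole datagram arrives, assuming independent
    per-fragment losses at rate p_loss."""
    return (1 - p_loss) ** fragments(udp_payload)
```

With a hypothetical 1% per-fragment loss rate, a 100-byte message fits in one fragment and arrives ~99% of the time, while a 32 KB message spans ~23 fragments and arrives only ~79% of the time, which is the asymmetry being described.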



…but most of us shouldn’t worry about that most of the time.

Just saying: if you expect a 32 KB message to arrive as reliably and as quickly as a 1.5 KB one, don’t.