How does a network client know when it can send its first message?

I’ll first ask my simple networking question and then explain how I got there…

Q. Using JME 3.1, when a network client connects to a server and does client.start(), how does it know that everything has settled down from creating the connection and setting up the serializer classes so that it can safely send its first message?

Explanation of how I got here, purely for those who may be interested:
My ‘game’ started just as a bit of tinkering about 6 months ago with JME 3.0 and Java, neither of which I’d used before, though many years ago I was involved with a few 3D games. Without going into the gory details, it morphed into a distributed system with multiple servers and multiple clients, each of which may or may not be on the same PC. After a bit of trouble with the way it handled serialization I got it all working, but I wanted to use Java 8. This meant JME 3.1 was required, and that broke my setup in major ways.
Firstly, the clients no longer register classes for serialization… easy fix, just don’t do it on the client.
Secondly, it doesn’t like more than one server on a JVM, which was a big problem for me, but I’ve overcome this with a hastily written message broker; every message now has to make two hops, but it turns out this has other advantages, including that I can now monitor all the messages through the broker.
Finally, under 3.0 both server and client had to register serialization classes, so serialization was guaranteed to be set up before the first message was sent. In 3.1 you only code it on the server, and the server informs the client of what can be serialized when the connection is made. However, this takes some time, and I was left scratching my head for a while as to why my client failed to send when run normally but was happy when run in the debugger. Once I twigged that the serialization hadn’t been fully initialised, my quick fix was to put in a 1-second delay. However, I’d like to know the proper way of doing it.

I may also have got the wrong end of the stick about the networking, so any corrections/clarifications are also welcome.

Cheers,
Mui.

Do you have a ClientStateListener added to your client?
This interface provides the methods you are looking for!

/** Called when the specified client is fully connected to the remote server. */
void clientConnected(Client c)

/** Called when the client has disconnected from the remote server. */
void clientDisconnected(Client c, ClientStateListener.DisconnectInfo info)

ClientStateListener should be a safe way, as Domenic mentions. Also, if you use a client service, it is guaranteed not to be started until all of the server-side services have done their thing… which includes the service that sends out the serialization stuff.
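A rough sketch of the client-service variant (hedged: FirstContactService is a made-up name, ReadyMessage is the same placeholder message as above, and this assumes the 3.1 service API with AbstractClientService/ClientServiceManager):

import com.jme3.network.service.AbstractClientService;
import com.jme3.network.service.ClientServiceManager;

// A client service is not started until connection setup has finished,
// so start() is a safe place for the first send.
public class FirstContactService extends AbstractClientService {

    private ClientServiceManager services;

    @Override
    protected void onInitialize(ClientServiceManager s) {
        // Called early; keep a reference but don't send anything yet.
        this.services = s;
    }

    @Override
    public void start() {
        super.start();
        // By now the server-side services, including the one that pushes
        // the serializer registrations, have done their work.
        services.getClient().send(new ReadyMessage());
    }
}

You’d add it with something like client.getServices().addService(new FirstContactService()) before calling client.start().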

Another common approach is for the server to send the first message to the client. (Though if you do it fast enough you will find you hit a bug where the client receives the serializer list message and the "welcome" message at the same time and won’t know how to process the latter.)
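A sketch of that server-greets-first approach (WelcomeMessage and WelcomeListener are made-up names; the message class is registered on the server like any other):

import com.jme3.network.AbstractMessage;
import com.jme3.network.ConnectionListener;
import com.jme3.network.HostedConnection;
import com.jme3.network.Server;
import com.jme3.network.serializing.Serializable;

// Stand-in message; registered on the server like any other message class.
@Serializable
class WelcomeMessage extends AbstractMessage {
    public WelcomeMessage() {}
}

public class WelcomeListener implements ConnectionListener {

    @Override
    public void connectionAdded(Server server, HostedConnection conn) {
        // Greet the new client; note the caveat above about racing the
        // serializer-registration message if this goes out too quickly.
        conn.send(new WelcomeMessage());
    }

    @Override
    public void connectionRemoved(Server server, HostedConnection conn) {
        // Nothing needed for this sketch.
    }
}

Hook it up with server.addConnectionListener(new WelcomeListener()) when you set up the server.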

client.send() is supposed to block until the connection is fully set up, but I think there are ways to confuse it. It couldn’t be a total ban-hammer until all services are started, because the services themselves often need to send messages during setup.

You can also remove the serializer service and go back to the more fragile approach of manually registering on the clients. Lack of automatic client registration makes it ridiculously hard to write self-contained services… which is why the default is to include it.
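For reference, the manual 3.0-style registration looks roughly like this (MyMessage is a stand-in for your own message classes; removing the automatic registration service itself isn’t shown, since the exact service class depends on your build):

import com.jme3.network.AbstractMessage;
import com.jme3.network.serializing.Serializable;
import com.jme3.network.serializing.Serializer;

@Serializable
class MyMessage extends AbstractMessage {
    public MyMessage() {}
}

public class Messages {
    // Call this on BOTH client and server before connecting/starting,
    // registering every message class in the same order everywhere.
    public static void registerAll() {
        Serializer.registerClass(MyMessage.class);
    }
}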

Thanks for your prompt replies. I don’t think I was doing anything unusual/fancy in my setup. client.send() really didn’t seem to have waited for all the messages to get through.
It’s working now with a ReentrantLock and a Condition, roughly as sketched below. :slight_smile:
Thanks both.
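
For anyone who finds this later, here’s roughly what that lock/condition gate looks like (the host, port and 10-second timeout are placeholders, and it leans on the ClientStateListener discussed above):

import java.io.IOException;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

import com.jme3.network.Client;
import com.jme3.network.ClientStateListener;
import com.jme3.network.Network;

public class ConnectGate {

    private final ReentrantLock lock = new ReentrantLock();
    private final Condition connected = lock.newCondition();
    private boolean isConnected = false;

    // Connects, then parks the calling thread until clientConnected() fires.
    public Client connectAndWait(String host, int port) throws IOException, InterruptedException {
        Client client = Network.connectToServer(host, port);

        client.addClientStateListener(new ClientStateListener() {
            @Override
            public void clientConnected(Client c) {
                lock.lock();
                try {
                    isConnected = true;
                    connected.signalAll();
                } finally {
                    lock.unlock();
                }
            }

            @Override
            public void clientDisconnected(Client c, DisconnectInfo info) {
                // Not needed for the gate itself.
            }
        });

        client.start();

        // Wait (with a timeout) until the listener reports the connection is ready.
        lock.lock();
        try {
            while (!isConnected) {
                if (!connected.await(10, TimeUnit.SECONDS)) {
                    client.close();
                    throw new IOException("Timed out waiting for connection setup");
                }
            }
        } finally {
            lock.unlock();
        }
        return client;
    }
}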