Writable RemoteEntityData

@pspeed I would like to formally ask for a potential small upgrade to RemoteEntityData.

Requested feature:
I would like to be able to have a read/write version of RemoteEntityData

Use Case:
My server scaling is done via a proprietary peer-to-peer cluster. I would like to have a master EntityData node, with slave nodes able to publish changes to the master, and of course the master pushing changes to the slaves, which the system already does properly in the as-is design.

Alternatively, if you have another design pattern for dealing with clustered servers, I am all ears (eyes)

IIRC RemoteEntityData is on the client and HostedEntityData on the server?
The pattern is there because the client only has a read-only view of the server state, especially because the server should be authoritative: content could already have changed on the server while the client is trying to change something.

So the solution would be to have a packet that is sent to the server and triggers an EntityData change, something like the sketch below. Not sure if there is an easy way to bypass this otherwise.
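
Roughly like this, assuming jME's SpiderMonkey networking plus Zay-Es — SetComponentMessage and the handler are invented names, and the component types would still need to be registered with the Serializer:

```java
import com.jme3.network.AbstractMessage;
import com.jme3.network.HostedConnection;
import com.jme3.network.Message;
import com.jme3.network.MessageListener;
import com.jme3.network.serializing.Serializable;
import com.simsilica.es.EntityComponent;
import com.simsilica.es.EntityData;
import com.simsilica.es.EntityId;

// Client -> server request describing a desired component change.
@Serializable
public class SetComponentMessage extends AbstractMessage {
    public long entityId;
    public EntityComponent component; // must be a Serializer-registered type

    public SetComponentMessage() {
    }
}

// Server-side handler: the server stays authoritative because it can
// validate (and reject) the request before touching the EntityData.
class SetComponentHandler implements MessageListener<HostedConnection> {
    private final EntityData ed;

    SetComponentHandler(EntityData ed) {
        this.ed = ed;
    }

    @Override
    public void messageReceived(HostedConnection source, Message m) {
        if (m instanceof SetComponentMessage) {
            SetComponentMessage msg = (SetComponentMessage) m;
            // ...check that 'source' is allowed to modify this entity...
            ed.setComponent(new EntityId(msg.entityId), msg.component);
        }
    }
}
```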

For a simple single-server, multi-client use case I would agree. The feature request is so that the server side can be clustered. This would allow Zay-Es to support a large-scale MMO server architecture … which is what I am building out. It would be a RemoteSlaveServerEntityData class or something aptly named.

Edit: Of course, the entity IDs would be generated by the master server.

@zissis thanks for this request. I also have the same requirement. I think this would be cool.

Having something like RemoteSlaveServerEntityData would let us spread work across slave servers while keeping all data on the master server.

For clustering a bunch of federated servers I think you definitely DON’T want to reuse the remote entity data stuff. That’s specifically for client-server communication and it misses a lot of opportunities for efficiency in the case of shared state.

For example, it’s fine if each member of the cluster generates its own entity IDs as long as they include the machine ID.
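
For illustration, a toy generator along these lines (nothing like this ships with Zay-Es; the class name and bit split are invented):

```java
import java.util.concurrent.atomic.AtomicLong;

// Each cluster member mints globally unique 64-bit entity IDs without
// asking a master: the top 16 bits hold the machine ID and the low
// 48 bits are a local counter.
public class MachineScopedIdGenerator {

    private static final int MACHINE_BITS = 16;
    private static final long COUNTER_MASK = (1L << (Long.SIZE - MACHINE_BITS)) - 1;

    private final long machinePrefix;
    private final AtomicLong counter = new AtomicLong();

    public MachineScopedIdGenerator(int machineId) {
        this.machinePrefix = ((long) machineId) << (Long.SIZE - MACHINE_BITS);
    }

    public long nextId() {
        long local = counter.incrementAndGet();
        if (local > COUNTER_MASK) {
            throw new IllegalStateException("local ID space exhausted");
        }
        return machinePrefix | local;
    }

    /** Recovers which machine created a given entity ID. */
    public static int machineOf(long entityId) {
        return (int) (entityId >>> (Long.SIZE - MACHINE_BITS));
    }
}
```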

I don’t know enough about the specific use-cases you are trying to solve but I strongly believe that making RemoteEntityData client-hackable is not the right solution.

@pspeed can we have multiple SqlEntityData instances (one per cluster server) that are all connected to one HSQLDB HTTP server running on the master server?

I guess technically yes… though I wonder what it buys you.

It’s really hard to talk in the hypothetical. I don’t understand what the motivation is to cluster in this way so it limits my ability to offer any advice at all, really.

In a clustered environment, there are certain unavoidable constraints like Consistency, Availability, and Partition tolerance (pick two of three). Within those constraints you have to decide what the priorities are and then architect the clustering around the side effects.

Your approach sounds like performance is not a concern at all and consistency is primary. Unfortunate, since in a properly constructed set of ES systems, “eventual consistency” is usually OK.

Edit: for further reading: CAP theorem - Wikipedia

@pspeed you are correct. The dimension of interest is consistency. The server cluster itself deals with availability, and the 100-gigabit local network in combination with custom dynamic node scaling deals with performance. I will describe the problem statement below:

Some known architectural constraints:

  • Server authoritative game
  • Must support over 40k simultaneous users
  • Must use Zay-Es for an entity system
  • Zay-Es Systems must scale dynamically across nodes (servers) when load is too high
  • Single master server for EntityData repository (Single source of truth)
  • Slave EntityData servers should be able to run Systems and update the master EntityData server over TCP/IP
  • ALL Clients get their data from master EntityData server

The problem statement
Zay-Es does not have a RemoteSlaveServerEntityData implementation that all slave servers can use.
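
Roughly, I picture something like the following sketch; the class and its MasterLink interface are hypothetical, nothing like them exists in Zay-Es today:

```java
import com.simsilica.es.EntityComponent;
import com.simsilica.es.EntityData;
import com.simsilica.es.EntityId;

// Hypothetical: reads come from a local replica kept in sync by the master,
// writes are forwarded to the master, which stays the single source of
// truth (including ID generation, per the edit above).
public abstract class RemoteSlaveServerEntityData implements EntityData {

    /** Assumed transport to the master server (RMI, TCP, whatever). */
    public interface MasterLink {
        long createEntityId();
        void setComponent(long entityId, EntityComponent component);
    }

    protected final EntityData localReplica;
    protected final MasterLink master;

    protected RemoteSlaveServerEntityData(EntityData localReplica, MasterLink master) {
        this.localReplica = localReplica;
        this.master = master;
    }

    @Override
    public EntityId createEntity() {
        // IDs come from the master so slaves never collide.
        return new EntityId(master.createEntityId());
    }

    @Override
    public void setComponents(EntityId entityId, EntityComponent... components) {
        for (EntityComponent c : components) {
            master.setComponent(entityId.getId(), c);
        }
        // The master then pushes the change back out to every slave's replica.
    }

    // Reads (getComponent, getEntities, watchEntity, ...) would delegate to
    // localReplica; elided here.
}
```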

The current KLUDGE
I have subclassed RemoteEntityData and implemented RMI plus some ugly sync code in order to send changes to the server and update all the remote entity sets synchronously. VERY YUCKY POO POO!!

Hope this helps frame the request better.

Edit: the master server generates ALL entity IDs through an RMI call from the slaves in my yucky poo poo kludge.
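
Stripped down, the RMI side of it looks something like this (the interface name is made up):

```java
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;
import java.util.concurrent.atomic.AtomicLong;

// The master hands out IDs; every slave calls it remotely.
public interface MasterIdService extends Remote {
    long createEntityId() throws RemoteException;
}

// Master-side implementation and registration.
class MasterIdServiceImpl extends UnicastRemoteObject implements MasterIdService {

    private final AtomicLong next = new AtomicLong();

    MasterIdServiceImpl() throws RemoteException {
    }

    @Override
    public long createEntityId() {
        return next.incrementAndGet();
    }

    public static void main(String[] args) throws Exception {
        Registry registry = LocateRegistry.createRegistry(1099);
        registry.rebind("masterIdService", new MasterIdServiceImpl());
    }
}
```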

Yes, but what are your separate servers handling? Zones? That’s the classic case for an MMO, divide zones across servers, rehome them as needed.

(Note: this single-master star architecture is probably the worst possible approach for handling distributed zones.)

Nodes are just compute devices that run Systems which can be distributed. In my specific game, for example, I have tens of thousands of battle scenes that get distributed across the compute server network.

Just a blurb: distribute the processing of the systems instead of distributing the systems themselves? Use Terracotta, Hadoop, or some other framework to achieve scaling?

Disclaimer: I know nothing of this :slight_smile:

Processing still seems locally coherent to me. A single process needs real consistency, but all of the other federated processes could probably get away with “eventual consistency”.

…unfortunately, with a single-server (SQL server) star architecture, everyone contends with everyone else. And then they still have to find a way to share changes with each other even though the database already has those changes.

A true federated ES might allow localized data storage and simply send out cache invalidation messages to peers. I suppose they could still share a centralized DB but it should be something enterprise-level like Postgres.
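
As a toy sketch of that invalidation idea (every name here is invented, none of it exists in Zay-Es):

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;

// Each peer keeps a local component cache and, on a write, tells the
// others to drop their stale copies instead of shipping the new value.
public class FederatedComponentCache {

    /** Assumed transport to one peer; the wire format is elided. */
    public interface PeerChannel {
        void sendInvalidate(long entityId, Class<?> componentType);
    }

    private final Map<String, Object> cache = new ConcurrentHashMap<>();
    private final List<PeerChannel> peers = new CopyOnWriteArrayList<>();

    public void addPeer(PeerChannel peer) {
        peers.add(peer);
    }

    private static String key(long entityId, Class<?> type) {
        return entityId + ":" + type.getName();
    }

    /** Local write: update our copy, then invalidate everyone else's. */
    public void put(long entityId, Object component) {
        cache.put(key(entityId, component.getClass()), component);
        for (PeerChannel peer : peers) {
            peer.sendInvalidate(entityId, component.getClass());
        }
    }

    /** Called when a peer tells us our cached copy is stale. */
    public void onInvalidate(long entityId, Class<?> componentType) {
        cache.remove(key(entityId, componentType));
        // The next read falls through to the shared DB (or the owning peer).
    }
}
```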

I always envisioned a federated ES to be federated along system boundaries, though, and not anything spatially aware… but ultimately it shouldn’t matter too much. The thing you’d want distributed most is physics and it should be easy enough to split on zones… which could still be local data.

For true MMO style federation, normally you’d have a server (or block of servers) handle a specific area of the world/universe. Things can be very localized in that case.
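
A minimal sketch of that kind of zone homing, with arbitrary numbers and invented names:

```java
// A world position maps to a zone, and a zone maps to whichever server
// currently owns it, so most interactions stay on one machine.
// ZONE_SIZE and the hash routing are placeholders.
public class ZoneRouter {

    private static final double ZONE_SIZE = 256.0; // world units per zone, assumed

    private final String[] servers;

    public ZoneRouter(String... servers) {
        this.servers = servers;
    }

    /** Packs the two 32-bit zone coordinates into one id. */
    public long zoneOf(double x, double z) {
        long zx = (long) Math.floor(x / ZONE_SIZE);
        long zz = (long) Math.floor(z / ZONE_SIZE);
        return (zx << 32) ^ (zz & 0xffffffffL);
    }

    /** A real system would use a rehomable directory; hashing is the toy version. */
    public String serverFor(long zoneId) {
        return servers[(int) Math.floorMod(zoneId, (long) servers.length)];
    }
}
```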

Anyway, this is all more complicated than just smacking hackable remote support into the client-facing RemoteEntityData. That might have been simple but it’s just about the worst architecture to use for federation and won’t scale well at all.

I would agree … it is starting to smell like the old Novell network architecture, with the master being the bottleneck … and we all know how Novell worked out lol

Edit: I am switching my design so I just have many battle servers, each with its own local EntityData, and the client can just connect to the appropriate battle server when it needs to get into a battle scene. Then there is no single bottleneck or single point of failure.
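
The new shape is basically just a tiny directory, something like this sketch (names invented):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Each battle scene lives on exactly one battle server with its own local
// EntityData; a small directory tells clients where to connect. No shared
// master in the data path.
public class BattleDirectory {

    private final Map<Long, String> battleToServer = new ConcurrentHashMap<>();

    /** Called when a battle server spins up a new battle scene. */
    public void register(long battleId, String hostAndPort) {
        battleToServer.put(battleId, hostAndPort);
    }

    /** Clients look up the battle and connect directly to that server. */
    public String lookup(long battleId) {
        return battleToServer.get(battleId);
    }

    public void unregister(long battleId) {
        battleToServer.remove(battleId);
    }
}
```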

@pspeed I formally withdraw the upgrade request :slight_smile:

Regarding the DB… If you require scalability at the DB layer I highly highly highly recommend you ditch SQL altogether and use a database/data model that was designed for this, such as Cassandra.

If having SQL is particularly important, at least switch to an implementation such as CockroachDB that was designed for this use-case. Any other clustering pattern of SQL servers will be a time-consuming minefield to navigate successfully.