So using an ECS like Zay-ES and splitting code up into systems is a very nice and efficient way to implement logic.
But as my project grows I find it difficult to determine what should go into the ECS and what shouldn't.
A few examples that I’d like to get some opinions on:
Physics Simulation: I’m using the Bullet Physics integration bundled with JME for the simulation of rigid bodies, joints, etc. It works fine so far, but I’m wondering about forces, impulses, and torques. Right now I have Impulse and Torque components that get set on entities created by the Controls system (which takes in user input and creates these one-component entities). They get consumed by the Physics system, which applies the forces to the actual PhysicsRigidBody objects. This approach feels very ECS-like, but in theory I could also invoke forces and torques on the Physics system directly from the Controls system (or even from the Input system), with something like “physics.applyForce( … )” called from “controls.update( … )”. The same goes for information coming out of the Physics system, like collisions and overlaps (e.g. I could poll them directly from the Physics system to restrict player behavior). I imagine that less indirection and abstraction could help me understand my own code better in the long run…
Player Input: So I am at the point where I need to decide whether user input should go through the ECS too. As with the physics example, I could directly invoke forces, impulses, and torques from the Input system on the Physics system, or, to make it fully ECS-style, let the Input system create or set Input components on controllable entities. I am unsure which approach would give more benefits as the game grows in complexity.
So I have to make these large-scale, architecture-defining decisions and I am curious how others do it.
Coming from the classic “input → simulate → render → repeat” game loop structure, I am not quite sure what should go through the ECS. I have a feeling that only game objects that could potentially be placed in a hypothetical level editor should be handled by the ECS, but that’s just a guess…
For clarification, when I talk about “complex 3D games” I mean games where the player acts in a 3D scene with simulated rigid body physics, complex level geometry, other AI-driven entities, potentially networked, etc.
The advice I got was: split it up logically into a server/client design (even if it is not multiplayer). Whatever belongs to the server is ECS; whatever is purely client does not belong in the ECS. It turned out that this guideline is very useful and handy for these kinds of decisions.
Player input does not belong in the ECS directly, but in the end it will generate entities with components that trigger the “server” to move your character/ship/car/whatever.
Think about both, and try to imagine their advantages/disadvantages for your situation.
E.g. if I have some entity with a physics object, I could store the physics object/control/spatial in a component.
Then to apply methods such as applyForce:
I could call the method directly, e.g. by obtaining the appropriate physics object/control/spatial. This might take a few method calls and might not be the most user-friendly approach.
OR I could create a Force component, which would be read and removed by a ForceSystem that applies the force for me at a specific time. This is a much cleaner solution in terms of final usage; however, if I do not care about the specific timing, the whole ForceSystem and Force component seem like too much code for no actual benefit.
OR I could create a static method applyForce(id, force). Very similar to approach 1, except it makes the code more user-friendly: you apply a force by supplying the entity id and the force. The method can also throw an exception if the supplied entity does not have the physics object/spatial/control.
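To make option 2 concrete, here is a minimal standalone sketch of a one-shot Force component consumed and removed by a ForceSystem each update. All names here (Force, ForceSystemSketch, the map-based stores) are illustrative stand-ins, not the actual Zay-ES or Bullet API:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative stand-ins, not the real Zay-ES types.
public class ForceSystemSketch {
    // A one-shot Force component: just a vector to apply this frame.
    public record Force(double x, double y, double z) {}

    // Stand-in for the physics side: accumulated force per entity id.
    public static final Map<Long, double[]> appliedForces = new HashMap<>();

    // Stand-in entity store: entity id -> pending Force component.
    public static final Map<Long, Force> forceComponents = new HashMap<>();

    // The "ForceSystem": consume every Force component, apply it to the
    // rigid body's accumulator, then remove the component so it fires once.
    public static void forceSystemUpdate() {
        for (Map.Entry<Long, Force> e : forceComponents.entrySet()) {
            Force f = e.getValue();
            double[] acc = appliedForces.computeIfAbsent(e.getKey(), k -> new double[3]);
            acc[0] += f.x(); acc[1] += f.y(); acc[2] += f.z();
        }
        forceComponents.clear(); // components are one-shot
    }
}
```

In a real setup the inner loop would resolve the entity's PhysicsRigidBody and call applyForce on it; the shape of "read components, act, remove them" stays the same.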
One benefit of using components is that you get to accumulate all forces before applying them. If you apply player input directly, you don't do that. It may end up with the same result, but I feel that aggregating the forces suits me better.
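The accumulation idea can be shown in isolation: several systems (input, AI, stabilizers) each contribute a force during a frame, and only the summed result would be handed to physics once. This is a sketch with made-up names, not any existing API:

```java
import java.util.ArrayList;
import java.util.List;

public class ForceAccumulator {
    private final List<double[]> pending = new ArrayList<>();

    // Any system can contribute a force this frame.
    public void add(double x, double y, double z) {
        pending.add(new double[] { x, y, z });
    }

    // At physics time, sum all contributions into one vector, then clear.
    public double[] flush() {
        double[] sum = new double[3];
        for (double[] f : pending) {
            sum[0] += f[0]; sum[1] += f[1]; sum[2] += f[2];
        }
        pending.clear();
        return sum; // this single vector would go to one applyForce(...) call
    }
}
```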
SiO2 integrates Bullet with Zay-ES in an extension.
Demo code here:
This may or may not be helpful.
To extend that example to player input, I’d have a different control driver attached to the physical object representing the player. A system would be responsible for applying the latest translated input (similar to what the steering driver wants).
I’ve opted for this approach because quite often the forces applied to the object need instant feedback from the physics simulation. Even the acceleration applied may depend on the current velocity, orientation, etc… for example, it’s quite common to apply some lateral forces to keep the player from sliding/drifting. And especially in cases like going uphill or climbing stairs, you may want to instantly change how forces are applied so that the player’s input feels smooth. After all, they can’t press the key harder to indicate that they’d like to apply more force going uphill (so that speed stays roughly the same).
Generally, I consider things like force and torque components to be too low level for the ES… but a lot would depend on the type of game.
Indeed a useful guideline!
I’m trying to split my code up like that from now on. How far should one take this approach, though? I mean, you could go as far as having two separate applications talking to each other through a messaging layer even when only playing locally.
And should both client and server side share the same ECS data?
I think I will go in this direction (I’ve already tested it with basic forces and torques), but I’m thinking about keeping it at a higher level: rather than individual Force components, components that represent control “intents” (e.g. MoveForward, TurnLeft, Jump, etc.), with the Physics system taking care of applying the forces accordingly. These intent components would be created by a Control system that reads local inputs, and deleted in the same frame after the Physics system has consumed them.
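A rough sketch of what such same-frame intent consumption could look like — the intent names mirror the ones above, and the magnitudes and the enum-set representation are purely illustrative assumptions:

```java
import java.util.Set;

public class IntentSketch {
    // Hypothetical control intents, as discussed above.
    public enum Intent { MOVE_FORWARD, TURN_LEFT, JUMP }

    // Translate this frame's intents into (forwardForce, torque, jumpImpulse).
    // Magnitudes are made up for illustration.
    public static double[] applyIntents(Set<Intent> intents) {
        double forward = 0, turn = 0, jump = 0;
        if (intents.contains(Intent.MOVE_FORWARD)) forward = 10.0;
        if (intents.contains(Intent.TURN_LEFT))    turn    = 2.0;
        if (intents.contains(Intent.JUMP))         jump    = 50.0;
        intents.clear(); // intents are consumed the same frame they are set
        return new double[] { forward, turn, jump };
    }
}
```

In a real ECS the intents would be components on the controllable entity rather than a set, but the consume-and-delete lifecycle is the same.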
That sounds indeed like a nice benefit! If I go the higher-level control “intent” route, I think that would have the same benefit as well.
Thanks for the links, will have a closer look at that!
If I understand this correctly, a “control driver” is basically a marker for controllable entities, right? The Physics system would then have a set of those and apply forces accordingly? And what do you mean by “translated” input? I’m guessing it’s about converting individual key presses into a “move” vector, for example. Sorry for my misunderstanding.
That’s the case for my game too, because I need some “stabilizers” for certain things, for example. And that is also the point at which my first attempt at Force components got pretty hairy.
After having tried the lower-level Force and Torque components approach, I feel the same. As mentioned above, I’m trying to implement higher-level “intent” components now.
Well, that depends. You could, but if you don’t have plans to make a multiplayer game it’s a bit overkill.
Yes they do, or more precisely, the server sends them to the client (not the other way around!). You can also study the sim-eth-es example from Paul. The keyboard/joystick/mouse/controller input in that example is sent via RMI to the server (at least in the version I studied a while ago to write my game).
You can look at the steering driver to see how control drivers work. In that case, there is only thrust and ‘steer’. For players, I use a generic three- (or two-) axis thrust plus an orientation. I send that information to the server every frame, async and unreliable. The server then updates a component on the player’s avatar entity. The player input system then makes sure the control driver has the latest info.
The control driver uses this intent (fwd + strafe + orientation) to decide how to apply forces to the rigid body. It’s similar to how steering is done for the NPCs… but different.
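A minimal sketch of how such a driver might turn (fwd + strafe + orientation) into a force with instant velocity feedback, as described above. This is not the SiO2 driver API; the field names, the rotation, and the damping constant are all assumptions for illustration:

```java
public class PlayerDriverSketch {
    // Latest translated input: forward/strafe thrust plus a facing angle
    // (radians), updated by the player input system each frame.
    public volatile double fwd, strafe, facing;

    // Called from the physics loop; the current velocity gives the instant
    // feedback needed to damp lateral sliding/drifting.
    public double[] computeForce(double velX, double velZ) {
        // Rotate the (strafe, fwd) intent into world space by 'facing'.
        double cos = Math.cos(facing), sin = Math.sin(facing);
        double fx = strafe * cos - fwd * sin;
        double fz = strafe * sin + fwd * cos;
        // Illustrative lateral damping: push back against current velocity.
        double damping = 0.5;
        return new double[] { fx - damping * velX, fz - damping * velZ };
    }
}
```

The real driver would also handle uphill/stair cases by scaling the force against the slope, but the shape — read latest intent, read current physics state, emit a force — is the point here.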
There is no player avatar in that demo… but when I release spacebugs it will have all of that. I could post code if necessary.