
Any time dotZ is negative, you are going to have issues steering. For one thing, you should be steering at max speed, but your dotLeft and dotUp are going to be smaller values. In classic steering cases this often doesn’t come up, because you only steer towards things you can see (ie: in front of you).

Otherwise you have to do some detection of “behind”, like…

if( dotZ < -0.998 ) {
    // Object is directly behind us... pick a random or arbitrary direction
    dotX = 1;
    dotY = 0;
} else if( dotZ < 0 ) {
    // Steer at max rate: saturate to +/-1 while keeping the sign...
    // though you may decide to only do this for the larger of the two
    dotX = Math.signum(dotX);
    dotY = Math.signum(dotY);
}
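For reference, a minimal sketch of how those dot products might be computed with jME’s Vector3f (the setup and variable names are just illustrative assumptions, not from the code above):

import com.jme3.math.Vector3f;

public class SteeringDots {
    public static void main( String[] args ) {
        // Assumed setup: ship at the origin facing +Z, target slightly left and behind
        Vector3f position = new Vector3f(0, 0, 0);
        Vector3f forward = new Vector3f(0, 0, 1);
        Vector3f left = new Vector3f(1, 0, 0);
        Vector3f up = new Vector3f(0, 1, 0);
        Vector3f target = new Vector3f(0.5f, 0, -10);

        // Normalized direction to the target
        Vector3f dir = target.subtract(position).normalizeLocal();

        // Project that direction onto our local axes
        float dotX = dir.dot(left);    // left/right component
        float dotY = dir.dot(up);      // up/down component
        float dotZ = dir.dot(forward); // positive in front, negative behind

        System.out.println("dotX=" + dotX + " dotY=" + dotY + " dotZ=" + dotZ);
    }
}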

Also looks like my inverse multiplication was redundant… but now it looks better indeed, thanks a lot!

Back to continue the dumb questions digest…
Let’s say I have entities with components as in the picture below (it is simplified; the real composition is kinda more complicated):


Now I want those that are Armed and Movable to do one type of movement, and those that are Movable and Dockable but NOT Armed to do another. How would I achieve that without checking for Armed being null? If I place all the checks within one system, it gets huge; if I offload something to another system, I have to resolve the intersections somehow.
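To make the question concrete, roughly what I’m trying to avoid, as a sketch against a Zay-ES-style API (the component classes are mine; the manual exclusion is the part that bothers me):

// Two sets: Armed+Movable and Movable+Dockable... the NOT part has to be
// resolved by hand, since there is no NOT filter that I know of.
EntitySet fighters = ed.getEntities(Armed.class, Movable.class);
EntitySet dockers = ed.getEntities(Movable.class, Dockable.class);

fighters.applyChanges();
dockers.applyChanges();

for( Entity e : dockers ) {
    if( fighters.containsId(e.getId()) ) {
        continue; // Movable + Dockable but also Armed... not ours to move
    }
    // ...docking-style movement...
}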

What are the different types of movement?

None of those seem like valid components to me. So you might need more explanation.

Well, those are different units that may receive some generic order (= a component broadcast by some supervising system) and process it differently. Those that are Armed pick, say, the closest operational target and open fire (those that are also Movable don’t just pick the target, they do some engage movement towards it). Those that are not Armed process that same order differently: they pick disabled targets (not firing nor moving) and move to dock and capture them. So I am trying to split these functionality types - ones that do disabling and ones that do capturing - while both groups receive the same “capture these targets” order. So I’m a bit confused now: if these traits aren’t supposed to be represented by components, then how?

Not sure… to me they sound like something internal to the AI system.

How many different systems will be using the Armed component? Which system will be producing the Armed component?

Same questions for Movable and Dockable.

They all are produced by a unit stats refreshing system that assigns them depending on conditions (actually, if a unit has operational armed parts it’s considered armed; if they all are down, it’s unarmed; so it may change at run time, etc). And yes, all this is more or less AI work, conceptually, but this way that system has too many things to consider… being one system. And I’m worried about its resulting size.

So I tried to split it by target assignment and movement first - it didn’t split (without additional component checks). I tried to split it by resulting movement types - it doesn’t split again, since a target might be assigned or not, so I need to check whether it was, again… So far I have successfully isolated just the weaponry processing (armed units form an entity set that is then used in the weaponry processing system), but as for movements I still fail.

I would like to process the “engage” set of movements (like different engage strategies depending on target speed, armament, etc) in one system, and the “docking/landing” type of movements in some other system (yeah, they all will set orders for the general movement system, but I need to make sure they don’t set them for the same entities, obviously)… and some other types of behavior (withdrawal, patrolling, maybe something else) in some other system. But it doesn’t look like I can split it this way. So I’m thinking how I might do it without placing too many things in one class…

What kind of AI are you using?

Whether behavior trees, state machines, GOAP, or whatever… this kind of stuff is generally intrinsic to that. Whether or not you choose to store the information in components for storage or operational simplicity is another thing… but it may be that you never need “entity sets” for most of this because the AI is already managing it.

…and if your AI feels like it’s sprawling then it could be just that you are trying to use the ES as an AI architecture. In my opinion, that’s a mistake.

Could be something I really overlooked in all the posts I’ve been through here by now. Was there some example from you or someone else of how to properly integrate them together? I’m sorry if I missed it.

I have not posted any AI examples. Maybe others have.

AI is one of those areas where I have more ideas on paper than I have practically implemented. I’m well read on the subject but haven’t coded a lot of my own versions yet. But I don’t have concrete examples to post… and that would be a pretty huge corpus anyway as each different type of AI architecture would potentially use a different approach. I generally treat AI as another player. It will use the same game object state as the player/clients see and it will manipulate objects in a similar way.

But for example, some of the things you mentioned as “object state” can be modeled as “behavior” in a behavior tree. It’s not a “this attribute” kind of state as much as it is a “I’m using this behavior right now” kind of state. Or in a finite state machine, it’s related to the current state you are in. The idea of “I’m not ‘armed’ anymore” is just a state transition.

Yep, that’s actually what I was also trying to model with components: if something needs to engage, it receives an “Engage” component and then another system picks it up and processes it. But since the AI system is not an “ES” system, I’m kinda confused now about where the ES ends and OOP starts. You think of AI as another player, well… I could mimic controls, but getting the data is in question now. The player gets it from the screen, basically, and then uses their own memory to remember state… The AI should get it from, well, something that is not an entity set and not an entity with components… I’m not sure I’m getting the whole picture now: how should I initialize and process it separately from what’s happening in the ES? How do I keep them coherent, at least very schematically?

But the screen gets them from game object state… ie: the ES.

Your AI will probably have entity sets to access the world game state that is kept there. It doesn’t necessarily treat its internal AI state as world game state. And probably doesn’t want to.

But it will have one state (“current behavior”, let’s say) for each unit, so it will have some kind of map, sorta Map<EntityId, Integer>, that is processed - well, in its update - and it controls units by setting their components? If so, isn’t it effectively just another ES system? It looks like just one more abstraction layer over the general movement and other systems, or is it something that works fundamentally differently? You say it is not ES. If it takes entity sets and sets components back (therefore affecting world game state), then in what way is it not ES?

I mean that it is one system from the ES’s perspective. It is one ‘unit’ that reads ES state and sets ES state. It is not a collection of systems that have to super-communicate with one another.
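Very schematically, something like this... just a sketch assuming a Zay-ES-style API, where AiDriven, AiContext, and MoveOrder are made-up names:

import java.util.HashMap;
import java.util.Map;

import com.simsilica.es.*;

public class AiSystem {
    private EntityData ed;
    private EntitySet units;

    // Internal AI state lives here, private to this system...
    // it is never exposed as world game state.
    private Map<EntityId, AiContext> contexts = new HashMap<>();

    public AiSystem( EntityData ed ) {
        this.ed = ed;
        this.units = ed.getEntities(AiDriven.class);
    }

    public void update( float tpf ) {
        units.applyChanges();

        for( Entity e : units ) {
            AiContext ctx = contexts.computeIfAbsent(e.getId(), id -> new AiContext());

            // Run the behavior tree / state machine for this unit...
            ctx.update(tpf);

            // ...and publish only the results back to the ES as components
            ed.setComponent(e.getId(), new MoveOrder(ctx.desiredDirection()));
        }
    }
}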

I don’t know what AI architectures you are familiar with so I don’t know what better short hand to use. And if the answer to that is “none” then you may have some reading to do.

Behavior trees may be the easiest place to start though I guess state machines resonate better with software developers. (In my mind, on some level, a behavior tree is like a specific kind of state machine anyway.)

Whatever approach you use, a lot of the “AI doing its job” information you are talking about is either direct properties within the AI “actors” or is implicit in the current state/behavior… (the current state or behavior may or may not be reflected in the ES at all).

Even if you want some AI state to be visible to the player, it is likely these are made visible through separate game components or game entities. For example, some games like to have the actors/NPCs announce state transitions using audio or text messages. This isn’t a system watching AI state to decide this… this is the AI system deciding to create sound or text message objects that the game already knows how to deal with. (Then the AI can choose when/why/where to do it, also.) (Note: they are also no different than player emotes in this case.)

Now it becomes clearer, thanks.
Pity I didn’t meet anything related here before. Either it is such a trivial aspect that nobody ever had any question implementing it in an ES and it is just me who didn’t get it, or I dunno. So, it is a system, and one system. Something at least to start from :slight_smile:
Looks like my main trouble was that I was trying to split something I shouldn’t even try to.

I think it doesn’t come up often here because people don’t get that far, maybe… or their AI is so simple/game-specific that it’s just cobbled together as whatever they needed.

I know that’s often true in my case… all of it.

Which isn’t to say that an ES isn’t ever involved in the AI. But you should be careful to know the exact reasons why. For example, in Mythruna, I can store general “user data” style properties on an entity. I use this in my object scripts to set/retrieve script-specific info. I do this for convenience because the ES data is already persisted… but the scripts are the only system to use it, these components are never retrieved in an “ES” way, etc… They are technically not part of the ES managed game state in that sense.

And sometimes it might even make sense to expose state to the ES if it means you can get nice bulk operations. Let me morph your Armed example into something I can maybe use to illustrate different approaches and their trade-offs.

Let’s say that “Armed” represents some state that says the gun can shoot. This is because it has charged up, or collected enough resources, or is not in disrepair, or some other complex state conditions. It is necessary to complicate it in this way; otherwise there is likely no point in having an Armed component at all.

In that example, we have a few choices…

In a behavior system, when it is time to check for a behavior transition (either because one behavior has ended or because our behavior is constantly checking for some interruptible conditions) we would check all of the necessary prerequisites for our gun to decide if we transition into the “shooting the bad guys” behavior.

In a state machine style system, one of the exit conditions for our current state might be to check to see if we are armed, performing similar logic to the above based on the type of gun we have.

In both cases, there may be a whole bunch of states/behaviors that never check for this. For example, the “find health now at all costs” behavior is likely to override anything else and not bother checking for “am I armed right now” and transitioning to any of the “kill the bad guy” behaviors.
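To make the state machine flavor concrete, a hedged fragment where all of the types (Unit, State, PatrolState, EngageState) are invented for the example:

interface State {
    // Returns the state to run next frame
    State update( Unit unit, float tpf );
}

public class PatrolState implements State {
    public State update( Unit unit, float tpf ) {
        // ...normal patrol logic...

        // "Armed" is never stored anywhere: it is derived from the real
        // conditions right when this exit condition is evaluated.
        boolean armed = unit.getAmmo() > 0
                && unit.getCharge() >= unit.getGun().getRequiredCharge()
                && !unit.getGun().needsRepair();
        if( armed && unit.canSeeEnemy() ) {
            return new EngageState(); // transition into "shooting the bad guys"
        }
        return this; // keep patrolling
    }
}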

The idea of being “Armed” in a component sense is duplicate information that is implicit in the data elsewhere. For example, it’s reflecting ‘loaded’ state or ‘in good repair’, etc… However, as the number of conditions that roll up into an “Armed” state grows, it may be desirable to do these as a system that can efficiently operate on a ton of small entities all at once. Be careful, this flexibility spirals fast, though.

One system could watch the ammo counts and set the FullAmmo component. Another system could watch the power level and set the FullCharge component. Another system could watch the repair level and set the FullyFunctional component. If you still deem it necessary to have an Armed component that rolls these up, then another system could have entity sets for all of those and set Armed for every entity that is in all three sets. (Easily checked with three loops; see the sketch below.)
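For example, one way that roll-up might look, as a sketch against a Zay-ES-style API (FullAmmo, FullCharge, FullyFunctional, and Armed are the hypothetical marker components from above):

EntitySet ammo = ed.getEntities(FullAmmo.class);
EntitySet charge = ed.getEntities(FullCharge.class);
EntitySet functional = ed.getEntities(FullyFunctional.class);

// Each update:
ammo.applyChanges();
charge.applyChanges();
functional.applyChanges();

for( Entity e : ammo ) {
    if( charge.containsId(e.getId()) && functional.containsId(e.getId()) ) {
        ed.setComponent(e.getId(), new Armed());
    } else {
        ed.removeComponent(e.getId(), Armed.class);
    }
}
// (A real version would also clear Armed from entities that dropped out of
// the ammo set entirely... getRemovedEntities() after applyChanges covers that.)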

Alternately, you could flip it over and have all guns be armed by default unless specifically “Off”. In that case, you flip each of the systems and still combine into an “Off” state.

Alternately alternately, you treat “On/Off” as a debuff… in the sense that each of those three systems is adding an “Off” debuff entity. Then the “Off debuff” system sets “Off” on the target entity for any entity it finds pointed to by an off debuff. Then you can collapse the system that combines the specific “Loaded”, “Charged”, etc. components out into a more general off debuff system. (It also makes more sense to debuff, since any of those negative conditions means ‘off’, rather than having to combine many ‘on’ conditions into an AND.)
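Sketched with invented names (Zay-ES-style again... here a Debuff is its own tiny entity that points at the thing it affects):

public class Debuff implements EntityComponent {
    private EntityId target;
    private String type; // "off", etc.

    public Debuff( EntityId target, String type ) {
        this.target = target;
        this.type = type;
    }

    public EntityId getTarget() {
        return target;
    }
}

// The ammo system just creates a debuff entity when ammo runs out:
EntityId debuff = ed.createEntity();
ed.setComponent(debuff, new Debuff(gunEntity, "off"));

// The "Off debuff" system watches all Debuff entities and sets Off on each
// target... any single debuff is enough to turn the gun off, so there is
// no AND-combining of many 'on' conditions.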

Anyway, if you have a hundred thousand entities, and the state transition between Armed and not Armed happens often… and the AI will be checking that frequently… then the above might make sense. If the rolled-up conditions are few, the transitions relatively infrequent, or the AI checks relatively infrequent, then it may not make sense.

It’s important to also point out (probably) that the same state probably applies to the player’s gun.

So, either the player and AI are checking for a few conditional states before they know they can fire… or you have a handful of systems monitoring these and the player/AI are just looking at the “Off” state of their gun.

Point being: the player is just another actor sharing the same state… which means we know we haven’t done anything wrong. Whether it’s overly complicated or overly flexible/overly engineered for the task is dependent on the factors already mentioned.


This looks more like the way I tried to imagine it: one set of components forms another, “higher in hierarchy” component (or the absence of one) that is then taken by some other system exclusively as a marker for that system to operate on. The downside, however, is that this way particular components will sometimes be checked multiple times, as they may be involved in forming multiple other components (one for fire, one for movement, etc).

As to performance - well, what occurred to me is that it is not necessary to have all that information frame-accurate; I could tolerate some latency here (like the AI learning it’s unarmed a few seconds later than it actually is, so for a few seconds it operates the “wrong way” - of course I still have to make sure it can’t fire when it can’t “physically”, but it can think it can fire and move the wrong way for that time). So I just implemented a kind of general Counter component (that has a percentage and a rate) and do the checks just when the percentage is over 1… This way, by varying the rate, I can make a unit more or less “fast reacting” depending on context. So maybe I could elaborate this approach further, and then the problem is just to specify a proper component hierarchy. At least it looks worth giving it a shot, since it is easy to represent via ordinary ES things, maybe…
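Roughly what I mean by that Counter, as a sketch (the names and the accumulation scheme are mine and may well change):

public class Counter implements EntityComponent {
    private double percentage;
    private double rate; // accumulated per second

    public Counter( double percentage, double rate ) {
        this.percentage = percentage;
        this.rate = rate;
    }

    // Components are immutable values, so advancing returns a new one
    public Counter add( double seconds ) {
        return new Counter(percentage + rate * seconds, rate);
    }

    // The expensive checks (like "am I still armed?") run only when this
    // trips, and then the counter is reset
    public boolean isReady() {
        return percentage >= 1.0;
    }

    public Counter reset() {
        return new Counter(0, rate);
    }
}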

None of it gets around having to ultimately find an AI framework/approach you like… and then all of that may be wasted effort once something easier comes into play.

Actually, this is the part of development I’m interested in the most, so it’s a good talk, and I’m not afraid of over-engineering things right now. I’ve rarely met an “over-engineered” AI in games before; typically all the behavior diversity is limited to adding some randomness here and there, like a 1 in 3 chance that the monster will turn this way, or spawn in this area, etc. So experimenting here and trying to make things more general looks interesting :slight_smile:

For me, AI is just another system, like it is for physics. I would have an AI component to tell me what AI I should run. The AI system just has an entity id to AI context map. The AI will then maybe apply a force or fire a weapon, which will produce a bullet entity or a spell entity. I would not “distribute” AI over several systems, just like I wouldn’t do this for physics.
Just my two cents.
