Object action definition with Groovy

Regarding this

@pspeed I am interested to know which of these approaches you would take.

1- Have a separate Groovy script for each object type, and inside the script define all the methods that object type has.

2- Or organize methods by name and define all methods that share a name in one place. For example, take the “Take” action: I would create a Groovy script named “Take.groovy”, and in there implement and register “Take” actions for all object types that are pickable.

To me, the advantage of the second approach is that for object types whose effects are the same, I can define the method once and register it with all of them.
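A minimal sketch of what I mean by the second approach, assuming a made-up registry API (ActionRegistry and the type names are invented, not real engine code):

```groovy
// Take.groovy -- everything below is invented for illustration;
// the real engine API may look nothing like this.

// Tiny stand-in for whatever the engine's action registry would be.
class ActionRegistry {
    static Map<String, Map<String, Closure>> actions = [:].withDefault { [:] }
    static void register(String type, String name, Closure action) {
        actions[type][name] = action
    }
}

// One shared 'take' implementation...
def take = { actor, object ->
    actor.inventory << object
    println "${actor.name} takes ${object}"
}

// ...registered once for every pickable object type.
['Sword', 'Key', 'Potion'].each { type ->
    ActionRegistry.register(type, 'take', take)
}

// Usage:
def bob = [name: 'Bob', inventory: []]
def keyTake = ActionRegistry.actions['Key']['take']
keyTake(bob, 'rusty key')
```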

May I know your thoughts on this, and which of them you are taking in your case?

Regards

In my system there is no ‘compilation unit’ style correlation between any sort of concept. Any Groovy script can add methods to whatever ‘object type’ it wants, too.

An object type inherits from another object type, so it can inherit all of its methods, override them, call the original, etc… In my vague recollection, I also had a way to just add a new closure to the chain for the parent’s method so that they are all called… but I may just be thinking of calling the parent.
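From memory, the shape of it is roughly like the following sketch (illustrative only, not the actual engine code):

```groovy
// Illustrative only -- not the real engine code.
class ObjectType {
    String name
    ObjectType parent
    Map<String, Closure> actions = [:]

    // Walk up the parent chain until some type defines the action.
    Closure find(String action) {
        actions[action] ?: parent?.find(action)
    }
}

def item = new ObjectType(name: 'Item')
item.actions['take'] = { ctx -> println "generic take of ${ctx.object}" }

def potion = new ObjectType(name: 'Potion', parent: item)
// Override the method, but still call the 'original' first.
potion.actions['take'] = { ctx ->
    item.find('take').call(ctx)
    println 'apply potion-specific pickup effect'
}

potion.find('take').call([actor: 'bob', object: 'healing potion'])
```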

I’d have to look up the code to be more specific.

1 Like

I see, thanks.

1 Like

I do still plan to look up that code soon. I’m just trying to clear some other things off my plate first. I’ve set some aggressive “end of year” goals for myself and I’m falling behind already.

1 Like

Hehe, me too :sweat_smile:

@pspeed in your case, is the AI interacting with this object action system as well?

And may I ask, in the new version of your engine, what is your approach toward AI?

AI using the actions was the plan.

I haven’t decided on a final approach for AI in the new engine. I like GOAP but all of the documentation implements it in a dumb way, I think. Behavior trees are easier to get things started but ultimately I’d like to incorporate GOAP-style solvers. I have a stack of reading material I plan to plow through when I get there.

I think the root level will still probably be a behavior tree since you can implement any other style of AI as a behavior on top of that.
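As a sketch of what I mean (not any particular library’s API): the root is a behavior tree node, and an entire GOAP solver, FSM, or script can hide behind a single leaf:

```groovy
// Sketch only -- all names are invented, not any particular library.
enum Status { SUCCESS, FAILURE, RUNNING }

interface Behavior {
    Status tick()
}

// Classic selector node: try children in order until one doesn't fail.
class Selector implements Behavior {
    List<Behavior> children = []
    Status tick() {
        for (child in children) {
            Status s = child.tick()
            if (s != Status.FAILURE) return s
        }
        return Status.FAILURE
    }
}

// A leaf behavior that could wrap a GOAP planner, an FSM, a script...
class PlannerBehavior implements Behavior {
    Status tick() {
        // plan-and-execute would go here
        return Status.RUNNING
    }
}

def root = new Selector(children: [new PlannerBehavior()])
assert root.tick() == Status.RUNNING
```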

1 Like

I see, thanks for your response. :slightly_smiling_face:

Actually, I’d be curious to know what others are doing, especially with GOAP or similar solver systems.

The problem I had with GOAP was representing game state properly during the solve. A lot of the online material seemed to assume that everything could be solved with the action chains, but that seems to be centered around a ‘type’ of interaction and not the actual world-state changes.

For example, if you need two of “object X” to do some action, then the solver might try to chain the action of picking up the same thing twice, because it doesn’t know that after the first action, it’s not there anymore. And while there are ways to work around that simple example, it’s easy to construct multi-step examples or branching sets that are not so easy to resolve.
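A toy illustration of that failure mode (completely made up, just to show the shape of the problem):

```groovy
// Toy illustration -- not a real planner. Goal: have 2 of 'x'.
// Only one 'x' exists, but the naive matcher only looks at symbolic
// effects and never touches the actual world state.
def world = [itemsOnGround: ['x']]

def actions = [
    [name: 'pickup(x)', effect: 'has(x) += 1']
]

// Naive chaining: "I need has(x) == 2, pickup gives +1, do it twice."
def plan = []
int needed = 2
while (needed > 0) {
    def a = actions.find { it.effect == 'has(x) += 1' }
    plan << a.name
    needed--
}
println plan   // [pickup(x), pickup(x)]
// ...but world.itemsOnGround only ever held one 'x'. Without copying
// and mutating world state per branch, the solver can't see that the
// first pickup consumed the only one.
```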

Admittedly, I only played with it a little while.

Maybe I am wrong, but aren’t “procedural preconditions” what you are referring to?
They are checked both during planning and while actually performing the action. So when the item is gone after the first action and the second action starts running, the procedural precondition check should fail on the second action, which will fail the whole plan, I believe.

From a quick glance, that’s my understanding from https://github.com/p1387h/JavaGOAP/wiki
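Roughly the shape I understand it to have, as a hedged sketch (the map-of-closures layout is mine, not JavaGOAP’s actual API):

```groovy
// Sketch of a procedural precondition: a closure evaluated against
// live world state, both while planning and again before executing.
def world = [itemsOnGround: ['x']]

def pickup = [
    name        : 'pickup(x)',
    precondition: { w -> w.itemsOnGround.contains('x') },
    perform     : { w -> w.itemsOnGround.remove('x') }
]

// First execution: the precondition holds and the item is consumed.
assert pickup.precondition(world)
pickup.perform(world)

// Second execution: the item is gone, the re-check fails, and the
// whole plan fails with it.
assert !pickup.precondition(world)
```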

But implemented properly, you end up with entire copies of parts of the world state for each tree branch you explore… and their tree branches, etc… So often this isn’t done. The “effect” doesn’t actually alter the world state; it’s only used when matching actions to goals.

…and then for any action set greater than 3-4 deep, you end up with the potential for loops. On the other hand, for any action set greater than 3-4 deep you end up with ever-expanding duplicate world state data if you do it ‘properly’.
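Doing it ‘properly’ looks something like this sketch (all names invented); the per-branch state copies are exactly where the memory goes:

```groovy
// Illustrative 'proper' expansion: every branch gets its own copy of
// the world state so effects can really be applied. The copies are
// where the ever-expanding memory use comes from.
def pickup = [
    name  : 'pickup(x)',
    pre   : { s -> s.groundX > 0 },
    effect: { s -> s.groundX--; s.heldX++ }
]

def expand(Map state, List actions, int depth, List path) {
    if (depth == 0) return
    actions.each { a ->
        if (a.pre(state)) {
            def branch = new HashMap(state)   // copy per branch...
            a.effect(branch)                  // ...so the effect is real
            println "${path + a.name} -> ${branch}"
            expand(branch, actions, depth - 1, path + a.name)
        }
    }
}

// With real copied state, pickup can only chain while items remain:
expand([groundX: 1, heldX: 0], [pickup], 3, [])
```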

So without macro goals/actions/conditions, it’s only ‘sort of smart’. It can pick up the ammo to load the gun but not get the key to unlock the door so that it can get to the key to unlock the chest that has the ammo, etc…

A couple of years ago I did some work in AI automation. The result of that work was a new algorithm for quickly solving large GOAP-style systems, and it’s quite robust: it usually converges to a good solution quickly (the original implementation is in CPython and can solve a several-step system with 10k actions in ~6 seconds, or a several-step problem with 15 actions in several milliseconds), properly handles world state changes, doesn’t get stuck in loops, etc. I have a slide deck for a presentation I did on it not too long ago that I’ll publish tonight when I’ve got a few minutes. Sources are not available because I don’t own the copyright, but the algorithm is not patented and I’ve been wanting to do an open source implementation for some time.

Edit: I incorrectly reported the solver as being able to solve a problem with 10k actions in small fractions of a second - that problem actually took it ~6 seconds to do, and the problem I had in mind had 15 actions (enough to totally choke a traditional non-A* solver).

4 Likes

You may want to look into PDDL and its planners, and the field of Automated Planning and Scheduling in general.
It shares its ancestor “STRIPS” with GOAP. PDDL won’t be your solution because it doesn’t support the concept of probability, but the workings are very similar.

I’m doing my master’s thesis in that field and can give you a brief overview:
Typically, states are only booleans, but some planners support numerics too. You then have actions consisting of preconditions and effects.

Now, as you figured, the state space is too big to keep in memory, so the branches of the graph are evaluated lazily as it is built.

Then the task is “just” to find an optimal solution, or any solution (it depends), on that graph.
As you also figured, there is the problem of loops, but there is a solution to that.
Naive algorithms just keep track of already-visited nodes, but as you also figured, this can be bypassed by taking bigger loops.

When searching for the optimal solution with defined action costs, algorithms such as a heuristic-guided A* won’t get stuck, because the heuristic lets them realize that the optimal solution becomes unreachable when looping.
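A minimal sketch of that search loop (illustrative, not a real PDDL planner): boolean state maps, lazy node expansion, and a visited set against loops; an A* variant would just order the queue by cost plus heuristic:

```groovy
// Illustrative mini-planner, not real PDDL. States are boolean maps;
// nodes are expanded lazily; a visited set blocks revisiting states.
def actions = [
    [name: 'getKey', pre: { s -> !s.hasKey },           eff: { s -> s + [hasKey: true] }],
    [name: 'open',   pre: { s -> s.hasKey && !s.open }, eff: { s -> s + [open: true] }],
]

def goal = { s -> s.open }

// Breadth-first here; an A* version just orders the queue by
// accumulated cost plus a heuristic estimate.
def search = { Map start ->
    def queue = [[state: start, plan: []]]
    def visited = [start] as Set
    while (queue) {
        def node = queue.remove(0)              // expand one node lazily
        if (goal(node.state)) return node.plan
        actions.each { a ->
            if (a.pre(node.state)) {
                def next = a.eff(node.state)    // new map; state isn't mutated
                if (visited.add(next)) {        // the visited set kills loops
                    queue << [state: next, plan: node.plan + [a.name]]
                }
            }
        }
    }
    return null
}

println search([hasKey: false, open: false])    // [getKey, open]
```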

There is, however, yet another solution, and that is clever preconditions.
For instance, my action goto ?a ?b would have two preconditions: at ?a and not at ?b.
Negative preconditions like that are a problem for a lot of planners, but this may or may not be true for GOAP.
Either way, this prevents looping effectively, as the action simply cannot be taken.

For the example of a pickup loop, one could have an inventory size or weight that is tracked. This even leads into the fancy part of the AI, where it has to consider how much ammo it will pick up if it knows it WILL pick up keys.
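In GOAP-flavored pseudocode (all names and numbers invented), those two ideas could look like this:

```groovy
// All names invented, GOAP-flavored. goto(a, b) requires at(a) and
// NOT at(b), so a move that goes nowhere can never be taken.
def gotoAction = [
    name  : 'goto',
    pre   : { s, a, b -> s.at == a && s.at != b },
    effect: { s, a, b -> s.at = b }
]

// A pickup that tracks inventory weight, which kills the pickup loop:
def pickup = [
    name  : 'pickup',
    pre   : { s, item -> s.ground.contains(item) && s.weight + 1 <= s.capacity },
    effect: { s, item -> s.ground.remove(item); s.weight++ }
]

def state = [at: 'room', ground: ['ammo'], weight: 0, capacity: 1]
assert !gotoAction.pre(state, 'room', 'room')   // 'not at ?b' blocks the no-op loop
assert pickup.pre(state, 'ammo')
pickup.effect(state, 'ammo')
assert !pickup.pre(state, 'ammo')               // item gone, and at capacity anyway
```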

FWIW, I think I also base my stuff on behavior trees, because they are superior to FSMs and the like while being capable of implementing everything as a behavior.

1 Like

Yep, and if you need an FSM then you can implement that in a behavior.

I like behavior trees because you can use whatever approach is best for a particular type of activity.

Regarding GOAP, in Mythruna the list of actions also includes things like “tear down this block”, etc… which increases the state space significantly. Before deciding to do more research, I ended up figuring that I’d be dealing with macro-actions rather than some small thing (dig a tunnel versus remove a block)… which also very much fits a behavior tree. So the action could be “get some ammo” rather than “walk here, pick up ammo, etc.”… and “get some ammo” is a complicated behavior in a behavior tree that bubbles up in priority and competes with other behaviors like eat/sleep/defend, etc… But that particular behavior is free to ‘strategize’ however it likes to find ammo.

It just cuts down on some of the more nuanced “there is no ammo so I’m going to beat them over the head with my gun” style decision making. But I feel like if I build “imperatives” in, then those things will work themselves out in the priorities. i.e.: both behaviors are active, and the more desperate I am to kill the bad guy and the longer ‘get ammo’ has been sitting unresolved, the better and better ‘beat him with the gun’ starts to look.
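Back-of-the-napkin, that prioritization could be as dumb as this (all names and weights invented):

```groovy
// All names and weights invented. Utility rises the longer a goal
// sits unresolved, so the fallback behavior eventually wins out.
def score = { Map b, double desperation, double secondsUnresolved ->
    b.base + b.desperationWeight * desperation + b.staleWeight * secondsUnresolved
}

def getAmmo = [name: 'get ammo',          base: 0.8, desperationWeight: 0.2, staleWeight: 0.01]
def clubHim = [name: 'beat him with gun', base: 0.3, desperationWeight: 0.6, staleWeight: 0.02]

// Early on, 'get ammo' dominates; after 30 seconds unresolved at high
// desperation, the melee fallback overtakes it.
[[0.2, 0], [0.9, 30]].each { desperation, stale ->
    def winner = [getAmmo, clubHim].max { score(it, desperation, stale) }
    println "desperation=$desperation stale=${stale}s -> ${winner.name}"
}
```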

It also feels like it’s more debuggable… and could even easily have ‘personality traits’ feed into the imperative-based prioritization.

Anyway, it’s all just been napkin-level notes at this point. My first AI will be decidedly simple (chickens and dogs to chase them around).

Edit: it also occurs to me that for very targeted goals, this sort of parallel prioritization is very much how a human might do things. Like picking up the things you might need while walking to a particular task, even if you ultimately don’t need them.

2 Likes

I didn’t get a chance to post this last night, but better late than never: https://myworldvw.com/blog/wavefront-planning/.

1 Like

Heheh… half leaving this as a note for myself… but I thought of another simple test AI: “the toddler”. He just walks around looking for nearby objects with a “use” or “open/close” action, etc., and then just does it over and over a few times before getting bored and finding the next one. Depending on the objects in the environment, it could get quite chaotic.

It was based on a note to myself that, through an “AI perception” interface, the mobs should be able to look for objects with certain criteria, including what actions they have.
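That perception query might look something like this (invented API, napkin-level):

```groovy
// Invented 'AI perception' query: find nearby objects filtered by
// which actions they expose, nearest first.
def nearby = [
    [name: 'door',  actions: ['open', 'close'], dist: 2.0],
    [name: 'lever', actions: ['use'],           dist: 5.0],
    [name: 'rock',  actions: [],                dist: 1.0],
]

def findToys = { objs, double range, List wanted ->
    objs.findAll { obj -> obj.dist <= range && obj.actions.any { a -> a in wanted } }
        .sort { it.dist }
}

def toys = findToys(nearby, 4.0, ['use', 'open', 'close'])
println toys*.name    // [door] -- the lever is out of range, the rock has no actions
```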

3 Likes

Oh… that’s a nice idea. :ok_hand:

You can also use this for active learning: recording the effects and factoring those into future choices.