AI Engine

@toolforger said: Ca. 60k lines of C++ code, if I read it correctly. Things that I don't see it covering: 1) Going upwards is harder than walking on a level surface. Agents should consider that by factoring in a higher distance for plans that involve an upward slope (this could complicate the computations). Similar considerations apply to rough ground (gravel, swamp, etc.) and to obstacles that don't block a path but need to be jumped or otherwise require extra effort. 2) Agents that can walk on vertical or overhanging surfaces, e.g. the Xenomorphs that climb walls and ceilings in the various Alien/Predator games.
Perhaps abstracted out to A) a material on the navmesh that encodes both slope (baked into RGB, perhaps, at creation or load time) and "friction" or "cost", for want of a better word (in the alpha channel). The idea being to do as much pre-calculation as possible, and to store it in an accessible form, of course. But then you also get into use cases where some elves can slip through the woods easily and some grav-tanks ignore rough terrain... which implies needing to know what *type* of friction is present. (I'm specifically thinking of algorithmically generated arbitrary terrain.) So how much more efficient (if at all) is looking up a value and vector from the navmesh material over calculating the same from the mesh?

B) a controller that applies (or doesn’t apply) the material, plus LoS tests for obstructing mesh. This would also allow or disallow a “suction” effect for reversed slopes (one byte for angle of slope and two for direction?) to counteract gravity. The LoS test would run from the creature’s feet (local -Y); if the creature is vertical or inverted, it would cling to the navmesh beside/above it rather than straight down (unlike the LoS test I see in the terrain-following examples).

(Caveat: please forgive the terminology; I am self-taught and trying to catch up on both jargon and methods. :slight_smile:)

Thinking a bit more: the ability to cling or to overcome friction would depend on the agent’s type of drive or movement, as applied to both friction and slope. Wheels, relying 100% on gravity and friction, could not negotiate vertical or even very steep surfaces. Feet and hands, using a combination of friction and “grip”, could. Anti-gravity drives could ignore friction completely.

So, perhaps a combination of friction (a function of material and gravity) and grip (a function of the material only). Either way, the cost of traversing the navmesh should be factored into the decisions, and I’d seriously like to implement variable Escher-esque geometries :wink: And Xenomorph-ish things lurking above.

I’d leave the details of friction, grip etc. out of the AI engine itself. It’s enough if the application program provides a “resistance” and lets the nav algorithm determine effective distances (i.e. multiply the distance across a triangle by the triangle’s resistance).
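A minimal sketch of that division of labour, assuming hypothetical names rather than any existing jME API: the application supplies a per-triangle resistance factor, and the nav algorithm keeps computing plain Euclidean distances and just scales them.

```java
import com.jme3.math.Triangle;
import com.jme3.math.Vector3f;

/** Sketch only; the class and interface names are made up for illustration. */
class NavCost {

    /** Application-supplied cost multiplier (>= 1) for crossing a triangle. */
    interface ResistanceFunction {
        float resistanceOf(Triangle triangle);
    }

    /**
     * Effective distance from 'from' to 'to' across 'triangle':
     * plain Euclidean distance scaled by the triangle's resistance,
     * so the pathfinder never needs to know about friction, slope, grip, etc.
     */
    static float effectiveDistance(Vector3f from, Vector3f to,
                                   Triangle triangle,
                                   ResistanceFunction resistance) {
        return from.distance(to) * resistance.resistanceOf(triangle);
    }
}
```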

One thing that I don’t know how to include well is directional resistance.
I.e. walking up a slope is slower than going down.

Escheresque stuff doesn’t sound hard to do. If you can transform it into a triangle mesh, the algorithm can work with it. You should even be able to deal with “impossible” geometries (huts that are larger inside than outside, teleporters/wormholes, whatever). The triangles don’t need to have fixed coordinates; they just need a size and a resistance (a resistance function, I guess).

Final note: Building this from scratch is probably easier than porting from C++. The C++ code might still be useful to make out the devils in the details before doing the Java design.

@toolforger said: I'd leave the details of friction, grip etc. out of the AI engine itself. It's enough if the application program provides a "resistance" and lets the nav algorithm determine effective distances (i.e. multiply the distance across a triangle by the triangle's resistance).

One thing that I don’t know how to include well is directional resistance.
I.e. walking up a slope is slower than going down.


I have problems visualizing how to divorce the cost of movement from the AI pathfinding. It really seems to me (from a position of relative ignorance) that including cost data (in the form of mesh/material-dependent data and application/control-dependent methods) in the AI is crucial to implementing new behaviors. I.e. if we want to implement a climbing/clinging method, the AI must be able to expand the mesh it considers walkable/searchable. I’m thinking the AI needs to set up a navmesh (with cost factors) per control, as the cost factors vary by control, but the original calculation of cost would be control-dependent and would default to simple, cost-less traversal, so the control doesn’t need to calculate cost.

So I think we are saying the same thing in different ways… :slight_smile:
And while I agree that the cost-calc makes more sense in the controller, implementing an interface for it in the AI core makes a lot of sense to me. Which means working out a few more details, eh?

Directional resistance only makes sense in a grav-aware environment, which means the cost calculation has to be aware of the direction of travel and the angle of the mesh (in the direction of movement, compared to the grav normal). This means you need to (at the least) make the navmesh (and perhaps a local gravity constant?) available to the cost-calc method.
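A hedged sketch of what such an interface in the AI core could look like (hypothetical names again, not an existing jME or MonkeyBrains API): the cost-calc hook receives the triangle, the direction of travel and the local gravity vector, and the default implementation is simple, cost-less traversal, so a control that doesn’t care about cost implements nothing.

```java
import com.jme3.math.Triangle;
import com.jme3.math.Vector3f;

/**
 * Hypothetical cost-calculation hook exposed by the AI core. A control
 * (wheels, legs, anti-grav, ...) overrides it to make travel cost depend
 * on slope relative to gravity; the default is plain, cost-less traversal.
 */
interface TraversalCostCalculator {

    /** Cost multiplier for crossing 'triangle' along 'travelDirection' (a unit vector). */
    default float costFactor(Triangle triangle, Vector3f travelDirection, Vector3f gravity) {
        return 1f; // default: geometry-only pathfinding, no extra cost
    }
}

/** Example: a walking control that makes uphill movement more expensive. */
class WalkingCostCalculator implements TraversalCostCalculator {
    @Override
    public float costFactor(Triangle triangle, Vector3f travelDirection, Vector3f gravity) {
        // How much of the travel direction points "uphill", i.e. against gravity.
        float uphill = -travelDirection.dot(gravity.normalize());
        // Uphill movement (uphill > 0) gets more expensive, downhill slightly cheaper.
        return Math.max(0.5f, 1f + 2f * uphill);
    }
}
```

The pathfinder would multiply this factor into the edge distance it already computes; that also suggests one answer to the later question of how the resistance calculator learns the direction: the pathfinder simply passes in the direction of the edge it is currently evaluating.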

@toolforger said: Escheresque stuff doesn't sound hard to do. If you can transform it into a triangle mesh, the algorithm can work with it. You should even be able to deal with "impossible" geometries (huts that are larger inside than outside, teleporters/wormholes, whatever). The triangles don't need to have fixed coordinates; they just need a size and a resistance (a resistance function, I guess).
Yes, but an engine that knows the Dweller on the Threshold can walk the ceiling (which would be the floor, if only gravity were a variable thing... ;-) while your PC can't (unless he could somehow control gravity... ;-). That would take a general-purpose pathfinder that literally constructs a different navmesh (or at least defines different traversable nodes) depending on the abilities (controls) of the mobs.

Or are tesseracts completely out of the question? :slight_smile:

I’d let the pathfinder accept a resistance-returning function. Probably with a triangle as input and a float as output.

For directional friction, I don’t know how to do that well.
Gravity is simple - with triangle vertices as input, the direction of gravity is known, and the resistance calculator could be initialized with whatever data it needs to properly factor gravity into its calculations.
What I don’t know is how to make the resistance calculator aware of what direction the pathfinder is interested in. The pathfinder might not even know yet.

The real question is what the pathfinder algorithm assumes. If it assumes the resistance function is linear, that limits the kinds of resistance functions we can use - e.g. you can’t use a resistance function that changes directionality, such as a vortex in the middle of a triangle that increases speed with every cycle around it.

Yes, you need a different navmesh per resistance class. You have that anyway - some units can swim, some can’t. Humans are faster than horses between trees; walls that are impenetrable for infantry merely slow down a tank.
You don’t want to calculate an arbitrary number of navmeshes, but you need to be able to calculate different navmeshes.

Tesseracts wouldn’t be a problem, as long as each triangle has a locally consistent coordinate system.
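For illustration (types invented for this sketch, not an existing jME structure), here is why fixed coordinates aren’t needed: if the navmesh is just a graph of triangles with per-edge traversal costs (resistance already folded in, and a different graph per resistance class if needed), the search never looks at world positions, so wormholes, tesseracts and “bigger on the inside” rooms are simply more edges.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.PriorityQueue;

/**
 * Sketch of a coordinate-free navigation graph: nodes are triangles,
 * edges carry effective traversal costs (resistance already folded in).
 * Because the search below only ever looks at edges and costs,
 * "impossible" geometry is just another set of edges.
 */
class NavGraph {

    static class Node {
        final List<Edge> edges = new ArrayList<>();
    }

    static class Edge {
        final Node target;
        final float cost;
        Edge(Node target, float cost) { this.target = target; this.cost = cost; }
    }

    /** Plain Dijkstra; returns the cheapest known cost to each reachable node. */
    static Map<Node, Float> cheapestCosts(Node start) {
        Map<Node, Float> best = new HashMap<>();
        PriorityQueue<Map.Entry<Node, Float>> queue =
                new PriorityQueue<>(Map.Entry.comparingByValue());
        best.put(start, 0f);
        queue.add(Map.entry(start, 0f));

        while (!queue.isEmpty()) {
            Map.Entry<Node, Float> current = queue.poll();
            if (current.getValue() > best.get(current.getKey())) {
                continue; // stale queue entry
            }
            for (Edge edge : current.getKey().edges) {
                float candidate = current.getValue() + edge.cost;
                Float known = best.get(edge.target);
                if (known == null || candidate < known) {
                    best.put(edge.target, candidate);
                    queue.add(Map.entry(edge.target, candidate));
                }
            }
        }
        return best;
    }
}
```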

I’d happily help out with this. My graphics knowledge isn’t great, but my degree is in AI and cognitive science. I’d love to be able to give back to the jMonkey community for all your help.

Regarding neural networks, their use may be somewhat limited at the moment, but I can’t see a reason not to include them if someone wants to do it. I’ve used neural nets when choosing strategies, and if a game is meant to learn the more it’s played, they could be a cool thing to use, or at least to experiment with.

Also, a bit random: the conversations have mostly been about path finding/planning. What about things like language processing? Can anyone think of a use for it apart from having more intelligent dialogue from NPCs?


I have yet to see a use case for language processing where the effort-to-effect ratio wouldn’t be horrible.

I’d be interested in model building - NPCs building a mental model of what’s going on around them. Most “AI” engines simply use the real model, so the NPCs never err, and if you can deceive them, it’s because they were scripted that way.
However, I have no idea how to do that without incurring a huge per-NPC processing overhead, so it’s one of those “maybe once I have all my other projects finished” daydreams.
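A rough sketch of the smallest version of that idea (hypothetical classes, nothing from jME): each NPC keeps a tiny store of last-observed facts and reasons only over those, so it can be out of date or deceived without any scripting.

```java
import com.jme3.math.Vector3f;
import java.util.HashMap;
import java.util.Map;

/**
 * Minimal per-NPC "mental model": the NPC only knows where it last saw
 * something and when, rather than reading the authoritative game state.
 * Kept deliberately tiny, since per-NPC overhead is exactly the worry.
 */
class MentalModel {

    static class Belief {
        final Vector3f lastSeenAt;
        final float lastSeenTime;
        Belief(Vector3f lastSeenAt, float lastSeenTime) {
            this.lastSeenAt = lastSeenAt.clone();
            this.lastSeenTime = lastSeenTime;
        }
    }

    private final Map<String, Belief> beliefs = new HashMap<>();

    /** Called by the perception code only when the NPC actually sees the entity. */
    void observe(String entityId, Vector3f position, float gameTime) {
        beliefs.put(entityId, new Belief(position, gameTime));
    }

    /** May be stale or null; that is the point: the NPC can err or be deceived. */
    Belief believedStateOf(String entityId) {
        return beliefs.get(entityId);
    }
}
```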

Other than that, AI is reduced to the usual strategic planning part. It can be more rule-based or more script-based, depending on what the game designers were aiming for; usually it’s a mixture of both (and since plausible AI is so hard to do in a rule-based manner, only the really large studios do that well, or do it at all).


Anything new on this front, @sevarac and @foxhavendesigns? GSoC is upon us, and even though we’ve never made it in before, we’re gonna take another crack at it this year, with a twist (more on that later).

Maybe you could put together some ideas for a suitable student project relating to AI?


Hi,

We’re making good progress. The basic design of the agent system is almost complete, and we’re working on integration with JME. We’ll put it all together with some neural networks, I guess, in the following month. It would be very interesting to go for GSoC; I’ll take a look and come up with some ideas.
We’re using a basic shooter demo, but for the real thing and a GSoC project it would be good to have a specific game environment to work on. Any suggestions as to which game would be most interesting/useful to use as a base for AI Engine development?


Is there any chance of being approved to work on this for GSoC?

I am a CS major transitioning to a Cog Sci Ph.D. My undergrad concentration has been in game engine architecture and artificial intelligence, and for my year-long senior project I am building (with a team of four other undergrads) a general purpose AI engine for the Leadwerks game engine. We have so far implemented an abstract agent system and finite state machines, and have prototyped behavior trees, planners, and a steering system.

Given that Leadwerks has a built-in implementation of Recast, I also have some experience with that if it is a desired feature.


I’m not in any position to decide anything, but from what you write, this sounds like it has a really good chance of getting accepted, both topic-wise and from your situation.

I’m not sure whether a group can apply as a collective; it depends on Google’s rules which tend to get tweaked a little bit every year, but @erlend_sh should know.
If you can’t apply as a group, split the work into chunks and have your undergrads apply individually.

I’m not sure but I think CS majors can’t apply; Erlend should know for sure though.

@tim-m-shea (dang that’s a tricky name to @mention!) it definitely seems like you’re eligible:
http://www.google-melange.com/document/show/gsoc_program/google/gsoc2014/help_page#2._Whos_eligible_to_participate_as_a

And it sounds like you know your stuff as well. The application period is still 2 weeks off but you should definitely apply :slight_smile:

@sevarac can bring you up to speed on what he had in mind for a 2 month AI project. Actually I have some questions for him myself:

  • To what extent are you planning to implement pathfinding? (if there’s a chance we can integrate Recast, it would be better if the AI engine would focus on communicating effectively with Recast)
  • It’s my understanding that you’d like the AI to be quite flexible, but which genre will you be focusing on? FPS?
    I would imagine games like Half Life, Skyrim and maybe even Sims could all benefit from the same AI groundwork, but you suddenly have completely different targets if you’re making AI for a game like StarCraft 2 or League of Legends.
  • I would love an example scenario of what you’d like this AI engine to be capable of by the end of the summer, e.g. “AI Character identifies two different weapons on the ground, chooses one over the other, sees player, tries to kill player.”

I can try to find you a good example game once I know better what you’re looking for.

To me, the most exciting project would be Recast navigation for jME3 that integrates well with the AI engine, but can also be used independently of it.

Thanks @toolforger and @erlend_sh. Just to clarify, I plan to apply alone, although I have shared the information with my team members in case any of them are also interested.

I am excited about this. I haven’t been active on the forums previously, but I’ve been using jMonkey for about a year for a research project and I really enjoy it.

Regarding the project scope and general requirements, I can say that the most challenging aspect of the senior project has been the design of an abstract agent system that is:
a) flexible enough to work in multiple genres
b) usable enough to work at all
c) no more complex than ad-hoc solutions (what I mean by this is that game developers are already creating agent systems, but those agents are typically implicit and unstructured; game characters invoke behaviors, pursue goals, make decisions, etc., but not within a well-defined framework; if we are going to replace the implicit agents with an explicit, structured agent system, it needs to be no harder to understand or use than the status quo)

Achieving all three of these requires some delicate balancing of concerns. My initial vision for the senior project was to build an agent system that could accommodate any AI reasoning technique (e.g. NNs, GAs, hFSMs). We found that although a majority of techniques can be supported by a range of designs, it is probably impossible to design one agent framework that will work for every single algorithm out there.
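For concreteness, this is roughly the kind of minimal explicit structure being discussed; the classes below are purely illustrative and are not taken from MonkeyBrains or the Leadwerks project.

```java
/**
 * Illustrative skeleton of an explicit agent framework. The point is only
 * that the agent/behavior split is no harder to use than the ad-hoc code
 * it replaces: a game object owns an Agent and calls update() each frame.
 */
interface Behavior {
    /** Advance this behavior by one tick; tpf = time per frame in seconds. */
    void update(Agent agent, float tpf);
}

class Agent {
    private Behavior currentBehavior;

    void setBehavior(Behavior behavior) {
        this.currentBehavior = behavior;
    }

    /** Called once per frame by the game. */
    void update(float tpf) {
        if (currentBehavior != null) {
            currentBehavior.update(this, tpf);
        }
    }
}
```

An FSM, behavior tree or planner then just becomes another Behavior implementation, which is what keeps the framework no harder to use than the ad-hoc code it replaces.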

Other significant concerns are ensuring that there is adequate time to produce examples, and the possibility of producing “applications”, for example producing AI templates that can be used and extended easily.

Another issue that was mentioned earlier in this thread, whether to support (or design) a custom AI language, is perhaps more difficult to assess. Without establishing the specific goals of that language, it’s hard to decide whether it would be valuable. It is also closely connected to other decisions, such as which reasoning techniques will be implemented, because some have more precedent for this than others (e.g. designing a rule-based system is almost always done with a domain-specific language).

Finally, with regards to Recast (or navigation in general) I found that much of game AI can be reduced to making decisions about where to go, so having a robust navigation system is important, but I do think it’s just as important to keep it decoupled from the AI itself, because the NavMesh paradigm will only be applicable to a subset of games.

Sorry for the braindump!

Edit: Just rereading, and I should make it clear that when I say replace the implicit agents with structured agents, I don’t mean forcibly, but just, “provide a replacement for”. :smiley:

No worries, that braindump was really interesting.
A lot of proposals for AI and agent stuff have been floated here. This was one of the few with clear goals, rationales for them, and trade-off considerations.
It’s also a far more solid foundation than what I have come to expect from GSoC applicants, so please go ahead and apply, I’d really love to see that work done.

If you find work for your undergrads, those could be GSoC applications, too. I’m not sure what the expected minimum work volume for a GSoC project is, but I did see applications accepted that turned out to be no more than a week of work.
They could do stuff like glue code, example projects, tutorials etc. that would add value to your project without distracting you from it.

I’m very glad to see that there is great interest in getting AI and agent stuff to JME; that would be very exciting.
@tim.m.shea Is there any code available online that we could see?

I like the part:

...design of an abstract agent system that is: a) flexible enough to work in multiple genres

We tried the same thing, and it is available at GitHub - QuietOne/MonkeyBrains: Agent framework for jMonkey Engine
At the moment we’re working on a demo.

@erlend_sh

- To what extent are you planning to implement pathfinding? (if there’s a chance we can integrate Recast, it would be better if the AI engine would focus on communicating effectively with Recast)

I was thinking of www.gamesitb.com/nnpathgraham.pdf
I guess this can be done with Recast.

- It’s my understanding that you’d like the AI to be quite flexible, but which genre will you be focusing on? FPS? I would imagine games like Half Life, Skyrim and maybe even Sims could all benefit from the same AI groundwork, but you suddenly have completely different targets if you’re making AI for a game like StarCraft 2 or League of Legends.

As quoted above: an abstract agent system flexible enough to work in multiple genres.
I would go for FPS and RTS

- I would love an example scenario of what you’d like this AI engine to be capable of by the end of the summer, e.g. “AI Character identifies two different weapons on the ground, chooses one over the other, sees player, tries to kill player.”

It’s just like that, and we’re working on it: an agent looks around, spots an enemy agent, goes towards him and shoots. We’ll add weapon selection and a scenario with many agents, and maybe two groups of agents.

My primary idea is to use JME to create a virtual environment/testground for AI algorithms, specifically neural networks. I guess this can also be used/reused in JME-based games.

It would be great if we could put these two AI projects together.

Cheers
Zoran

hi,

lots of interesting stuff here. @erlend_sh pointed me in this direction to ask about a possible GSoC project. I have a PhD in cog science and philosophy, and I’m currently on an MSc in Software Engineering. I’ve been looking into OpenSteer (the GSoC ideas page suggests porting this to Java as a possible project).

I’m aware that the ai-library already has some steering behaviour in there. Would it be useful to expand this? Perhaps including some more sophisticated context behaviours as discussed here? How compatible would it be with the other ideas being discussed above?

Some handy assets for testing purposes:

http://www.blendswap.com/blends/view/71409

http://www.blendswap.com/category/rigs/sort:downloads/direction:desc

@shoga

I’ve been looking into OpenSteer (The GSoC ideas page suggests porting this to java as a possible project)

I’m aware that the ai-library already has some steering behaviour in there. Would it be useful to expand this? Perhaps including some more sophisticated context behaviours as discussed here? How compatible would it be with the other ideas being discussed above?

That would be good - a fast way to move forward and get something usable.
Let’s say, wrap the existing steering behaviours with the Behaviour class from GitHub - QuietOne/MonkeyBrains: Agent framework for jMonkey Engine
The nice thing about having steering implemented as agent behaviours is that we can easily try, test and replace or evolve them.
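A rough sketch of what that wrapping could look like; the names below are illustrative and deliberately avoid claiming to be the actual MonkeyBrains or OpenSteer API. The steering math stays where it is, and a thin behaviour asks it for a force each tick and integrates it.

```java
import com.jme3.math.Vector3f;

/**
 * Illustrative wrapper, not the real MonkeyBrains API: an agent behaviour
 * that delegates to an existing steering calculation each tick, so different
 * steering implementations can be swapped, tested and evolved independently.
 */
interface SteeringCalculation {
    /** Desired steering force for this tick, in world space. */
    Vector3f steer(Vector3f position, Vector3f velocity, float tpf);
}

class SteeringBehaviour {
    private final SteeringCalculation steering;
    private final Vector3f velocity = new Vector3f();

    SteeringBehaviour(SteeringCalculation steering) {
        this.steering = steering;
    }

    /** Called once per frame by the agent; returns the displacement for this frame. */
    Vector3f update(Vector3f position, float tpf) {
        Vector3f force = steering.steer(position, velocity, tpf);
        velocity.addLocal(force.mult(tpf));   // integrate acceleration into velocity
        return velocity.mult(tpf);            // displacement this frame
    }
}
```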

I've been reading up on steering behaviours quite a bit recently, and tinkering with OpenSteer. I'm wondering how you would expect this to integrate with the AI engine and/or any navigation system? Regarding the latter, my thinking would be that steering is different from, and should be kept separate from, navigation, because it's about simulating realistic behaviour on the journey, rather than plotting the course. I'd love to hear your thoughts on this.

I’m open to your suggestions. The ideal approach would be to have a NavigationBehaviour which can listen to events fired by other behaviours, and take different parameters while planning the path.

And we could test it with the assets @erlend_sh suggested above.
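As a sketch of that event-driven idea (types invented for illustration, not existing MonkeyBrains classes): other behaviours publish small navigation requests, and the navigation behaviour reacts by replanning with whatever parameters the request carries.

```java
import com.jme3.math.Vector3f;
import java.util.ArrayDeque;
import java.util.Queue;

/** Hypothetical event published by another behaviour, e.g. "enemy spotted here". */
class NavigationRequest {
    final Vector3f destination;
    final float urgency;   // could select different planning parameters
    NavigationRequest(Vector3f destination, float urgency) {
        this.destination = destination;
        this.urgency = urgency;
    }
}

/** Navigation behaviour that only replans when another behaviour asks it to. */
class NavigationBehaviour {
    private final Queue<NavigationRequest> pending = new ArrayDeque<>();

    /** Other behaviours call this instead of touching the pathfinder directly. */
    void onEvent(NavigationRequest request) {
        pending.add(request);
    }

    void update(float tpf) {
        NavigationRequest request = pending.poll();
        if (request != null) {
            // Replan, e.g. trade path quality for speed when urgency is high.
            planPathTo(request.destination, request.urgency);
        }
        // ...otherwise keep following the current path...
    }

    private void planPathTo(Vector3f destination, float urgency) {
        // Placeholder: hand off to whatever pathfinder is in use (Recast, A*, ...).
    }
}
```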

@sevarac said: I'm very glad to see that there is great interest in getting AI and agent stuff to JME; that would be very exciting. @tim.m.shea Is there any code available online that we could see?

Unfortunately I can’t make the decision to share the code at the moment, because the project is being sponsored by the developer of a proprietary engine. He may or may not decide to monetize it. If he doesn’t, we will likely open-source the library when we’re done.

@sevarac said: It would be great if we could put these two AI projects together.

I would very much like to collaborate, given that we have some obvious overlap in terms of goals; however, on the ideas page it is sort of implied that you were interested in potentially mentoring an AI engine project, so if that isn’t the case we would need to identify a mentor with enough knowledge.

@tim.m.shea said: (...) I would very much like to collaborate, given that we have some obvious overlap in terms of goals; however, on the ideas page it is sort of implied that you were interested in potentially mentoring an AI engine project, so if that isn't the case we would need to identify a mentor with enough knowledge.
What other knowledge are you referring to?

FYI, all SoC participants will have a direct line to the whole jME core team, alongside the mentors, so I think we’ll have the “knowledge-spectrum” pretty well covered :wink: