What's happening with MonkeyBrains?

Just to throw in my ten cents' worth, and I am probably barking up the wrong tree, but I was discussing designing AI for my game with a friend a while ago, and he suggested I look at the Drools library. Admittedly it is for business decision making, but it is scripted (I think) and seems fairly adaptable. At the time, I thought it showed some promise of being useful.

Regarding the Behavior Tree (BT) of libgdx-ai: yes, it's well written, I have to say. Even so, I'm not a big fan of BTs in general, because I prefer a more reactive style of AI (or half scripted, half reactive), or a goal-driven one. In my experience, and from what I've read, it's hard to make realistic AI with BT modelling, because BTs are too "sharp", too case-by-case.

libgdx-ai has a handful of utilities, and they even introduce a DSL for building BTs, which is nice. But after looking at the code, I saw concepts spread all over the place, forcing the user into their way of thinking instead of providing a "library" or "framework".

My take on behavior composing (not just a tree but a graph of behaviors): keep behaviors as data structures if possible. Don't bring black magic into data.

  • Task, for example, is very common in the AI field, but would be too generic for an imperative language like Java. Task should be just a wrapper of Runnable or Callable, like in java.util. A game AI framework may give Task two more characteristics:
 void update/interpolate(float time); 
 Set<Task> decompose(); 
//or float getComplexity();

update() lets a Task hook into the update beat of the game. decompose() lets the Task expose its internal logic instead of staying monolithic. So any kind of Agent, Rule or Scheduler can run a task, evaluate it, or judge its complexity.
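To make the idea concrete, here is a minimal sketch of such a Task in Java. These names are illustrative only, not an existing API:

```java
import java.util.Collections;
import java.util.Set;

// Hypothetical sketch of the Task idea above: a plain Runnable that can
// also hook into the game's update loop and expose its internal structure.
interface Task extends Runnable {
    // Hook into the update beat of the game (tpf = time per frame).
    void update(float tpf);

    // Expose internal sub-tasks so any Agent, Rule or Scheduler can
    // evaluate or schedule them; atomic tasks return an empty set.
    default Set<Task> decompose() {
        return Collections.emptySet();
    }
}
```

An atomic task simply inherits the empty `decompose()`, while a composite task would override it to expose its children.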

  • The Tree stands alone as a generic data structure.
  • The Visitor is an interface for traversing the Tree (or Graph).
  • MetaTask can take any valid description (script) and produce a Task.
  • ChainedTask allows composing tasks together; composing a ChainedTask is the opposite of decomposing a normal Task.

Now, because all of the above is just organizing data in well-defined data structures, if I want to:

  • execute a Task in the Tree, I visit it with an ExecuteTaskVisitor and let it roll. Such a visitor would likely be an AppState in JME3 code.
  • evaluate a Task, I use an EvaluateVisitor, which takes a single node or the whole tree into account.

And if I have another tree (a tree of Spatials, for example) whose structure matches my task tree in some way, I can also visit them in parallel and combine the results! This is the main idea behind using well-defined structures: divide and conquer.

For example: visit the behavior tree at the same time as the dialogue tree. You can see it as decomposing one thing into two! A lot of frameworks tend to make it like this:

[image]

Instead, I do:

[image]
For the DSL, I use Groovy, to save myself a world of problems later.
As a DIY version, you can introduce a MetaEvaluator or ScriptEvaluator, depending on what you want in a behavior-composing system. Some of the things Groovy can feed you well:

  • logic evaluation
  • inspecting/invoking real Java objects' attributes and methods
  • decomposability into small, meaningful pieces (human-readable and -writable)
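A rough sketch of the MetaTask/ScriptEvaluator split described above. The names are mine, not a real API; in practice the evaluator could be backed by a Groovy GroovyShell, but here a plain lookup table stands in for the script engine so the sketch stays self-contained:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical ScriptEvaluator: turns a textual description into a task.
interface ScriptEvaluator {
    Runnable compile(String script);
}

// Hypothetical MetaTask: takes any valid description and produces a task,
// delegating the actual interpretation to a pluggable evaluator.
class MetaTask {
    private final ScriptEvaluator evaluator;

    MetaTask(ScriptEvaluator evaluator) {
        this.evaluator = evaluator;
    }

    Runnable produce(String description) {
        return evaluator.compile(description);
    }
}
```

Swapping the lookup table for a real script engine changes only the `ScriptEvaluator` implementation, not the MetaTask or the tasks it produces.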

Regarding Drools, I don't think it adapts well to games either. It is a big and overly complex subject for the simple rules or actions your "common" game needs.

I like the idea of behavior trees; if you look at demo games, that was the idea behind their AI. As for AI items, I made the GameEntity class for that. That class should represent all entities relevant to AI.

I don't like the name BluePillAgent, as it doesn't describe the difference between Agent and BluePillAgent. When I made this I was primarily focused on FPS games, but to make it more flexible we could make Agent a real base class and extend it for different games, with RealTimeAgent and TurnBasedAgent as the first big split, and then build a tree from there. :smile:

Ok. I think we should remove Node from MonkeyBrainsAppState.

I’m the main developer behind gdx-ai. :smile:

Actually, the whole gdx-ai framework is dimension independent. Thanks to the use of interfaces and generics, both the steering behaviors and the pathfinding API work with 2D and 3D, because it is the developer's responsibility to provide the actual implementation of the model. Also, about the "actor" model, I don't really get what you're talking about. If you're referring to something scene2d-related, then I think you misunderstood the API.

Behavior trees have proven to be a real means of implementing complex agent behaviors. Think of games like Halo 2, whose AI relies heavily on behavior trees. :wink:


Hi @davebaol,

Thanks for your reply.

Don't get me wrong, I said your library is well written. Of course you abstracted out the dimension with generic wrappers. The problem with 3D is: it's not "2D with a Z axis"!

I'm not saying JME is only for making 3D games, but when I wrote my own AI lib, for example, I tried to cover all the cases in 3D space first, rather than wrapping my mind around 2D space and adding a Z axis on top.

Also, I DID try to integrate libgdx-ai into JME, and in fact my Atom code base includes the whole libgdx utils package, including the vector math, Array and such. So it's not a big deal to integrate it into JME. What is the problem, then? There are a few things in between that make me uncomfortable using libgdx-ai in my JME projects (yes, mine only).

  1. 2D or 3D: there are no physics constraints, there is no Floor, there is no Mesh. This is my main point in saying your library is wrapped around 2D, or at least that you are not thinking in three dimensions first. What does physics really have to do with AI, after all? Say your Steerable's calculation produces a vector that forces it toward the ground (digging through it). This situation will almost never happen in a 2D game, but it is really common in 3D games, because there are floors! It is then handed to the user to solve. When it comes to physics, a Steerable will hit harder problems than a WeightedBlendSteering or PriorityBlendSteering can solve. I suggest adding a PhysicsBasedSteering and a FloorBasedSteering to handle those situations. Another thing is Graph versus Mesh; you probably know NavMesh in detail. Your framework introduces a handful of graph utilities but no Mesh. A Mesh is an extension of a lattice graph that also cares about its Cells, the areas between non-overlapping vertices. Mesh is a big deal in 3D AI. In fact, I will never go back from mesh-based to node-based pathfinding: it's cleaner, faster and much easier to control. But that's another story.

  2. MessageDispatcher: why do I need a MessageDispatcher? Because of the "actor". You may not see it, because the assumption is hidden here. MessageDispatcher assumes that "actors" don't deliver "messages" themselves but pass them to another system to handle the job. In turn, MessageDispatcher joins the AI framework just to deliver a handful of uncategorized, even AI-unrelated messages. As a system designer, I am really against this idea. I want a GenericMessageDispatcher that delivers anything (I already have one), and I want specific, managed dispatchers to handle messages with a well-defined scope (like the AI Telegram). You may see the point: your framework doesn't intend to, but it introduces a handful of utilities that invite "bad practice".

Edited (I think I have to make myself clearer at this point):
I suggest making TelegraphMessageDispatcher an interface instead, with a solid implementation using the libgdx utils as DefaultMessageDispatcher (or MessageManager).
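The shape of that suggestion could look roughly like this. These are generic sketch names of my own, not gdx-ai's actual signatures:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Sketch: dispatching becomes an interface that the rest of the AI code
// programs against; the libgdx-utils-backed class is just one implementation.
interface GenericMessageDispatcher<M> {
    void addListener(Consumer<M> listener);
    void dispatch(M message);
}

// A "default" implementation; a real one could use libgdx collections
// and add telegram-specific scheduling on top.
class DefaultMessageDispatcher<M> implements GenericMessageDispatcher<M> {
    private final List<Consumer<M>> listeners = new ArrayList<>();

    public void addListener(Consumer<M> listener) {
        listeners.add(listener);
    }

    public void dispatch(M message) {
        for (Consumer<M> l : listeners) {
            l.accept(message);
        }
    }
}
```

With this split, an AI-specific, well-scoped dispatcher is just another implementation, and user code never has to depend on the concrete class.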

The Telegraph interface here is super generic and looks exactly like Predicate.


I think Telegraph should be eliminated (or at least become internal API), because people are going to write various classes extending Telegraph and will think they have to depend on it everywhere. This will make your Telegraph a "black hole": everything has to go through it, and the bottleneck will soon show!

Telegram should be a utility class, immutable and not extendable in any way. Telegram is what I call a Signal in my own concepts (you know I'm not a native speaker).


Here we are talking about an Agent framework, which does not really need to introduce the concept of message delivery; that belongs to each implementation.

  3. Scheduler: this is a util, and I'm not too much against it, but I think I might use a better library for this job. Introducing too many utils is not great if the user comes back and complains about them later.

About BehaviorTree:
As I said, it looks promising and is a nice write-up too; I used it in my soon-to-be-released game :slight_smile: But it still goes against my philosophy of writing a framework: it introduces too many concepts.

It's like you mapped concepts from the AI field one-to-one into solid Java classes or interfaces. Implementing in Java, you know we already have a handful of interfaces in java.util, written 14 years ago and proven by millions of users. Before writing another one that looks exactly the same, or could extend the old ones, I think we should decide carefully. You know, as the official AI lib of libgdx, your lib is going to have a million users soon. :slight_smile: That's just my opinion, as one of your users!

Compare your Task with something like:

FutureTask<MetaData> implements Set<Task>


Minor but annoying problem:
In libgdx we have a Vector interface, and Vector2 and Vector3, for example, implement Vector<Vector2> and Vector<Vector3> respectively.


So this code is straightforward:

public class SteeringAgent implements Steerable<Vector2> {

and the agent can use any steering behavior parameterized on the same vector type to calculate.

In JME, Vector2f and Vector3f are not related in any way. :slight_smile: That's silly but true. The cores of the two engines are different!

Agent should only "provide" Signals.

A Signal is immutable and cloneable; it lets the world know how the agent looks, what weapon it has, and where it is.

Letting Agent expose mutable Java attributes/references as "properties" will cause concurrency trouble later!

In BluePillAgent (you should change the name to something more specific, like PropertyAgent), the agent "stores" some items inside, forming an "inventory"; it may provide signals about its inventory and let observers look at them continuously.

The big question is: do we really need to introduce a solid concept of "property" for "agent"? Does an agent really need to touch another's items in order to think?

What I suggest above is: let's introduce a minimal Java implementation, as an interface or as an immutable final utility class, to represent this concept. Or suggest to users of the library that they implement such a "property" that way.
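As a sketch of that "immutable final utility class" option, the Signal idea could look like this. All names and fields here are hypothetical, just to illustrate the shape:

```java
// Hypothetical immutable Signal: the agent publishes read-only snapshots
// of its state instead of exposing mutable fields, which sidesteps the
// concurrency troubles mentioned above.
final class Signal {
    final String agentId;
    final float x, y, z;   // where the agent is
    final String weapon;   // what it carries

    Signal(String agentId, float x, float y, float z, String weapon) {
        this.agentId = agentId;
        this.x = x;
        this.y = y;
        this.z = z;
        this.weapon = weapon;
    }

    // "Cloning" an immutable value is just handing out another copy;
    // observers can keep it as long as they like without locking.
    Signal copy() {
        return new Signal(agentId, x, y, z, weapon);
    }
}
```

Observers read whichever snapshot they hold; the agent publishes a new Signal when its state changes, so no shared mutable state ever crosses threads.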

Just pushed the first small commit of RedPill, with AISpace and a basic Sight :slight_smile:

The naming (red/blue) is hopefully provisional, just for keeping it consistent with this discussion. Once we have settled things, we may opt for cooler names.

Glad to have you on board! :wink:

To me this doesn't look like a disadvantage. Physics isn't used everywhere, and an AI for a spaceship doesn't need a Floor either. On the other hand, we could write a specialized AI with these constraints, and that's why I'd like the AI modules to be somehow pluggable.

Well, I couldn't tell whether you are suggesting improvements to gdx-ai or describing how MonkeyBrains should be written :grin:

So far, even with the faults(?) mentioned by @atomix, I think that MonkeyBrains could leverage libgdx-ai (probably by means of a libgdx-ai-monkeybrains bridge). I don't want to reinvent the wheel myself.

But I also don't want to force design decisions! Since @atomix is so passionate about this, and if @Tihomir agrees, I'd gladly let him put his code in the repo. :yum:

About steering behaviors and full 3D, did you read this? Steering Behaviors · libgdx/gdx-ai Wiki · GitHub
Extending the API to support full 3D for the behaviors that have an angular component is not so hard.
You just need to use vectors and quaternions in place of scalars for angular acceleration and orientation. Of course, the four behaviors mentioned there need an additional implementation that uses full 3D math. However, 2.5D geometry likely covers more than 90% of the games out there, so I decided that support for full 3D steering was not a priority when I wrote the API. It will be added if and when people ask for it.

About physics, I think it's an added value that the framework doesn't impose physics. You are free to implement your model with whatever underlying physics engine you prefer, or with no physics at all. It's totally up to you. That's one of the reasons why I call gdx-ai a framework instead of a library.

About the message system, there’s nothing in the framework forcing you to use it in your game. It’s just a simple and well-established technique that you are free to use if you want.

Finally, behavior trees: they are nothing more than a formalism. All formalisms introduce some concepts by nature, so you have to learn those concepts in order to use btrees at their best and appreciate their expressive power.
Also, since you are comparing Java classes with btree concepts, ask yourself: "Are all game designers able to modify source code?" Luckily, a team is made up of many people with different skills and roles. Ideally, a game designer should be able to change the AI of an agent without having to modify and compile the source code.

Anyways, if you want to contribute or concretely share your knowledge I’ll be glad to discuss things with you.

Thanks, I found this board by following links on GitHub.

@jmonkeyengine developers
Of course if you decide to use gdx-ai in your engine I’ll be glad to provide all the support you might need.
I’m open to API changes and extensions if required. Just let me know. :wink:

See ya


That's definitely a road I would personally like to see us go down eventually. But for now, I fear I might have brought gdx-ai into the discussion prematurely. MonkeyBrains was in the midst of a new iteration, and I feel like I made that transition harder by adding yet another factor into the mix.

@Tihomir and @Pesegato have enough on their plate for now just figuring out amongst themselves how to further MonkeyBrains as-is.

As for jME and gdx-ai, I would love it if someone (@atomix maybe? it's hard to determine where you stand! ^^) took it upon themselves to use gdx-ai as another alternative AI engine for jMonkeyEngine. I don't see that as a loss for MonkeyBrains, since there are already enough "chefs" involved :stuck_out_tongue: And honestly, I would be kind of bummed if someone introduced Yet Another AI Library at this point.



As I said, I have tried to integrate libgdx-ai into JME in my code base. So yes, it's possible!

For now, it will include several libgdx utils, since libgdx-ai depends on them. For everything except the physics-related problems, I'm quite confident that libgdx-ai can handle things well. Let's say we will have a libgdx-ai-jme library, a fork of libgdx-ai. And, as advertised in its wiki, it supports 2.5D, and soon 3D :smile:

For "physics" and "floors", I think I have to summon the dark lord from hell and... modify some code to see if it fits. The NavMesh can also be integrated after adding MeshGraph and MeshGraphCell, and of course other pathfinding methods: Theta*, and cooperative pathfinding, for example. (I may contribute these to the original repo as well.)

For the "telegram" stuff, which I find quite "out of hand", I will make it internal API, used only by the StateMachine and the existing Telegraph implementations.

Introducing "Agent" in this lib is very debatable; the Agent and Actor concepts overlap in some parts and differ in others. To be honest, I don't have enough brain power to think about merging the two into one right now. But let's see.

For BT, I will retain almost everything, because I'm not really that interested in it :slight_smile:

Sorry for my stubbornness, and for any unintended offense.

No offense at all. We were just discussing, right? :innocent:

If you give me read access to the source code I’ll look into it to see if I can improve gdx-ai in order to fix the problems you encountered while integrating steering behaviors and physics.


This is going to be a long discussion, isn't it? The main problem in developing MonkeyBrains further is: "what makes a good AI framework, and what are its responsibilities?"

I will start with my idea, and it would be great if we could agree on some points so we can improve MonkeyBrains.

  1. I think of an AI framework as a mind. It doesn't have physical properties, so it is up to the user to implement the body. Just as a mind can exist without a body, the AI framework must be flexible enough to work with any physics library or similar. Given that premise, the AI framework must include, besides agents and their behaviors, the objects of interest to the AI: objects they can perceive, touch, use and so on.
  2. The goal is to make a lot of independent behaviors, and it is up to the user to pick whichever behaviors are useful to them.
  3. Because an AI framework should represent the mind, behaviors are the main focus, not agents.

I thought there would be more, but I think this sums it up.

Hi @Tihomir,

It's not like I'm going to write a PhysicsForce calculation or anything like that. These are just utils, and they should be there just like steering or pathfinding.
A PhysicsSteering is exactly like any other steering behavior, except that it abstracts out some methods for the user to implement with the physics engine they need:

  • PhysicsAvoidSteering looks a lot like WallAvoidance: it scans for obstacles around the vehicle, but uses a physics cast instead of a normal raycast. This solves the dig-through-the-ground case I mentioned before, because this steering examines the terrain.
  • PhysicsGravityForceSteering takes force into account, so any steering vector normally calculated in a virtual plane is reshaped by the "gravity" and "force" its physical body has at that moment. It looks like PriorityBlendSteering but with some physics elements built in. For example: two balls in a physics world don't just collide and stop (like boids in normal steering); they are pushed away from each other immediately, and can deform and absorb the force if they are soft bodies.

All those elements will be abstracted out as much as possible to hook into modern physics engines.
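A hedged sketch of what the first of those behaviors could look like. All names are hypothetical, and the steering math is drastically simplified; the point is only that the ray query is abstract, so users plug in their own physics engine's cast:

```java
// Hypothetical PhysicsAvoidSteering: like a wall-avoidance behavior, but
// the obstacle query is left abstract so it can be backed by a physics
// cast (e.g. a jME/Bullet ray test) instead of a plain mesh raycast.
abstract class PhysicsAvoidSteering {
    // To be implemented against the physics engine of choice.
    // Returns true if something solid lies along `dir` within maxDist.
    protected abstract boolean physicsCast(float[] from, float[] dir, float maxDist);

    // Grossly simplified steering: if the cast ahead hits terrain or an
    // obstacle, push back against the current heading; otherwise no-op.
    float[] calculateSteering(float[] pos, float[] velocity) {
        if (physicsCast(pos, velocity, 2f)) {
            return new float[] { -velocity[0], -velocity[1], -velocity[2] };
        }
        return new float[] { 0f, 0f, 0f };
    }
}
```

Because the terrain query goes through the physics world, the "dig through the ground" vector gets caught here instead of being handed to the user to fix afterwards.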

A "floor" is also very common in 2.5D games. And I'm not talking about a real floor but about virtual planes, with virtual gaps between them that a character can go through.


Support for floors, because I think people coming to JME tend to write 3D or 2.5D games rather than just 2D games (even though they could). That's what is fascinating about our engine compared to libgdx or cocos, for example.

Anyway, I want and need suggestions :slight_smile: I overthink from time to time.


I've made a new thread to discuss libgdx-ai for those interested in the idea, because we also have the main dev of the lib joining this conversation. It's going to be a huge advantage and a good opportunity to benefit both engines. :slight_smile: So I've made the move; I will push my demo code as soon as I get some weekend time.

P.S.: And should I also revive my AI lib? Anyone interested in seeing it in action? :wink: (Note: it's not a lightweight library.)


@atomix I agree that terrain (floor) should be part of the AI; it is necessary for good spatial reasoning. I started the jNavigation project to use Recast Navigation for MonkeyBrains, but I never finished it. I will probably start it all over again, as Recast Navigation was updated with new methods of creating navmeshes. The best way to represent terrain is with polygons, and Recast works well with them, but the problem with that kind of terrain representation is implementing basic senses on top of it. How did you implement terrain reasoning, and what are your thoughts on it?

@Pesegato I was thinking about your posts, and I think there was a misunderstanding about what the inventory does. It is possible that the name was misleading. I think of the Inventory class not as storage the agent has, but as things the agent can do periodically. I can explain this better with a few examples.

  1. In FPS games, agents have guns, weapons, bombs. I think it is wrong to extend the Agent class and possibly override something that shouldn't be overridden in the framework. From the big picture, the agent entity and the weapon entity can exist separately, and the logic of them working together shouldn't live in the agent class, but in behaviors. So extending agents in the direction of weapons should be done via the Inventory interface and basic behaviors that go well with it. An example of that in MonkeyBrains is SimpleAttackBehavior together with the Inventory class.

  2. Inventory will never have the status of a concrete class, because the storage of weapons differs from game to game. It may be a list, an array or a matrix; it is up to the programmer to define how to store items.

  3. Inventory is embedded in the agent class and will be given the tpf each frame. This makes it possible to reduce the cooldown of everything added to the inventory that has one. Remember that weapons have cooldowns and cannot be used until the cooldown is over. Without this, the programmer would have to extend the agent and override the appropriate methods to reduce cooldowns.

  4. Steering behaviors are influenced by the mass of the agent. That mass depends on the agent's own mass and the mass of its inventory, also embedded in the agent. If you don't want to include the inventory's mass, just return zero from the appropriate method.

  5. I am aware that some agents don't use weapons, so this is handled inside the agent class: if you don't use an inventory, nothing changes.

  6. Super powers, magic and skills should be stored in the inventory too, as they can be used periodically. They will not have mass, but the inventory is the right place for this kind of stuff. I think a MagicInventory for magic stuff is a good extension of Inventory, as it can include things like mana, and with Java 1.8 we can put additional logic inside these inventories.
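The cooldown bookkeeping described in point 3 could be as small as this. The names are illustrative, not MonkeyBrains' actual API:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of point 3: the inventory receives tpf every frame
// and counts down the cooldown of whatever items were registered, so the
// agent class never needs to be extended just for cooldown handling.
class Inventory {
    private final Map<String, Float> cooldowns = new HashMap<>();

    // Called when an item is used; it becomes unusable for `seconds`.
    void startCooldown(String item, float seconds) {
        cooldowns.put(item, seconds);
    }

    boolean isReady(String item) {
        return cooldowns.getOrDefault(item, 0f) <= 0f;
    }

    // Called by the agent each frame with the time-per-frame.
    void update(float tpf) {
        cooldowns.replaceAll((item, remaining) -> Math.max(0f, remaining - tpf));
    }
}
```

A behavior such as an attack behavior would then just check `isReady("gun")` instead of the agent overriding update methods.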

I think I explained my point of view nicely. What do you say, @Pesegato, should we bring back Inventory, and possibly rename it if the name was misleading?


Yes, there should be an interface Terrain (unless there is an even more abstract name for it), VirtualGround I suppose. VirtualSpace and VirtualGround are the worlds of full 3D physics and of 2.5D/2D steering and pathfinding, respectively.

A NavMesh is a candidate for VirtualGround, because it wraps around 3D geometry and suggests a surface the AI character can walk on and think within. Virtual grounds have boundaries; think of floors and rooms.

VirtualGround and VirtualSpace should only affect "behaviors" and "senses", not the "overall strategy" or any layer above it. In an FPS, for example, VirtualGround should affect the local movement and animation of characters, but not team tactics or team events. That said, VirtualGround is semi-real, related to the physical embodiment of the AI agent, as opposed to AISpace, which is a fully virtual space in the mind.

Of course, bringing anything like VirtualGround or VirtualSpace into an AI framework will limit flexibility. I think they should be utils only.

When I implemented the AI for an RTS game, I used VirtualGround a lot. Its two main uses in such a game are pathfinding and team tactics, and the underlying structure is a versionable, changeable navmesh.

Let me explain this version of NavMesh. It's not a general NavMesh like the one in Recast; it's a Mesh, and very changeable.

  • When my units do pathfinding in the Mesh, they insert their positions as inner vertices in the Mesh's Cells. Each Cell has a capacity limit, and whenever a Cell is considered full, you have to re-route along another path.
  • The calculated path is also embedded in the Mesh. If a unit finds a similar path already built into the Mesh (within a threshold), it may reuse that path.
  • The Mesh is updated over time. The inner vertices can be cleared after they expire. This ticket-holder method is much more efficient than the normal cooperative pathfinding method, which involves a 3D table.
  • By travelling the mesh through adjacent Cells, a team tactic can extend its vision much further than thinking in a node model, because we have information about surface attributes (dirt, grass, lava...), as well as positions and areas.
  • Versioning here is a concept borrowed from the database world. To the user, it looks like we have multiple versions of the NavMesh geometry sharing the same reference. That characteristic enables multiple agents and multiple threads to work on the NavMesh at the same time. An agent can decide to do pathfinding in its outdated Mesh (within a threshold of differences) or to update its local Mesh.

Operations on the Mesh follow multiversion concurrency control or timestamp-based concurrency control rules.

If you don't want to mess with this concurrent design, you can just forget about it :slight_smile:
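The ticket-holder idea from the bullets above could be sketched roughly like this. All names are hypothetical, and the versioning/MVCC part is omitted; this only shows the capacity-plus-expiry mechanics:

```java
import java.util.ArrayList;
import java.util.List;

// Hedged sketch: a mesh cell holds the positions ("tickets") of units
// currently routed through it, with a capacity limit and an expiry time,
// so stale reservations fall out on their own.
class MeshCell {
    static class Ticket {
        final float x, z;
        final float expiresAt;  // game-time at which the reservation lapses
        Ticket(float x, float z, float expiresAt) {
            this.x = x;
            this.z = z;
            this.expiresAt = expiresAt;
        }
    }

    private final int capacity;
    private final List<Ticket> tickets = new ArrayList<>();

    MeshCell(int capacity) { this.capacity = capacity; }

    // A unit inserts its position as an inner vertex; if the cell is
    // full, the caller must re-route along another path.
    boolean reserve(float x, float z, float now, float ttl) {
        expire(now);
        if (tickets.size() >= capacity) return false;
        tickets.add(new Ticket(x, z, now + ttl));
        return true;
    }

    // Drop reservations whose time has lapsed.
    void expire(float now) {
        tickets.removeIf(t -> t.expiresAt <= now);
    }
}
```

Compared with a full cooperative-pathfinding reservation table, the per-cell tickets are cheap: nothing global is kept, and expiry is handled lazily on the next reservation attempt.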

That's a small slice of what can be done if you use Mesh and Cell instead of Node, as an implementation of VirtualGround and for terrain reasoning.

One last note: when I write a library, I never try to force users to use my utils. Terrain reasoning only exists in a few genres of games, so if a game has no VirtualGround, or no terrain at all, users don't have to know about it.


First of all, I like your tenacity :wink:
I have to say that your Inventory makes much more sense now; however, I still think you are leaking (too much) game logic into the AI.
This approach also somewhat enforces a way of coding things (weapons/attack) inside behaviors, which may not be what the game developer wants.
If you take a look at the image I built several posts ago, you'll notice that I try to keep things as simple as possible (to the point of making them dumb), but that's just my personal preference.
Keep in mind that I'm not an AI expert, nor an expert game developer, so I may be mistaken and might change my mind later on. But for now this is my point of view... hope you don't get offended! :innocent:

Since the library is for jMonkeyEngine, I think an abstraction layer is good; however, the prime candidate for a "floor" should be TerrainQuad.

It has a lot of nice goodies, is well documented, and is even editable from the SDK! :smiley:


TerrainQuad is a data structure for terrain, I suppose! We have to separate these two concepts when implementing them.

VirtualGround should be something independent of any Spatial. Later on, we can have TerrainQuadVirtualGround or NavMeshVirtualGround... depending on what really affects the way the AI thinks and decides to move. (It may not actually affect the way the AI moves.) The virtual ground is the mind's conceptual model of the surrounding "walkable surface", as distinguished from VirtualSpace, the conceptual model of the surrounding environment in general. These utils are really helpful in games where characters or agents actually "walk" or "move". That's most games, but not all, which is why they should be utils only. That's the point.

I don't know whether that is a good thing, but it helped me a lot in achieving my goals.

When you say it like that, it sounds like the framework is leaking game logic, and I agree that this approach enforces a way of coding with MonkeyBrains. I see the Inventory as yet another observable space that seems important for the AI.

Example: some Mortal Kombat look-alike game. A player will have in the inventory all the possible skills, plus behaviors that use those skills. I think behaviors can't and shouldn't be independent of observable state (though skill status shouldn't live directly inside the behaviors). Reinforcement learning, a really important method for AI in this kind of game, is based on the possible moves an agent can make and on the success those moves had: moves with better outcomes occur more frequently than others. The idea is simple, but the results are good. For this kind of learning process, which depends on "self-awareness", some concept of agent state must be included.

Like this Inventory (maybe we should rename it AgentAwarenessState or something similar; I am open to suggestions, as AgentAwarenessState doesn't sound great), health status is also important for AI decision making, yet it is also part of the game logic. The idea that an agent shoots first at the agent with the lower HP is in the realm of AI; how the HP drops is game logic, or it should be, but the boundary between the two is really fuzzy.

I like your approach. My approach is to make blocks for AI and then recombine them, but you have shown plenty of times that my blocks can be too big to use, so we have split them to fit both your needs and my idea of how the framework should be. My idea is to have one very simple, flexible base and various blocks with less flexibility but great results for particular game types.

I am not an easily offended kind of person. It is great to have you on board, as my idea is that MonkeyBrains shouldn't just be a framework for game developers, but also for the average programmer who wants to build AI easily without thinking much about how AI works (hence the idea of an AI framework with a very flexible base, and branches that are not so flexible).

I agree with @atomix's VirtualGround point of view: VirtualGround as an interface, with TerrainQuadVirtualGround and NavMeshVirtualGround as concrete implementations of that interface, just like List, ArrayList and LinkedList. :smiley:
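The List/ArrayList analogy could be sketched like this. All names are the hypothetical ones from this discussion, and the two methods are just a guess at a minimal contract:

```java
// Sketch of the interface split discussed above: the AI talks only to
// VirtualGround; how the walkable surface is backed (a TerrainQuad, a
// NavMesh, ...) is an implementation detail behind the interface.
interface VirtualGround {
    // Can a character stand or walk at this (x, z) position?
    boolean isWalkable(float x, float z);

    // Height of the walkable surface at this position.
    float heightAt(float x, float z);
}

// The simplest possible backing: one flat plane, everywhere walkable.
// TerrainQuadVirtualGround / NavMeshVirtualGround would implement the
// same interface over real geometry.
class FlatGround implements VirtualGround {
    private final float height;
    FlatGround(float height) { this.height = height; }
    public boolean isWalkable(float x, float z) { return true; }
    public float heightAt(float x, float z) { return height; }
}
```

Steering and pathfinding utilities would accept a `VirtualGround`, so games without terrain simply never touch this part of the library.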


@tihomir I think your goals exceed mine, so for now I'll just concentrate on what I want to implement. As a second step, we'll think about how best to extend the functionality :wink: