API suggestion

This is the code of MonkeyBrainsAppState.addAgent():

    public void addAgent(Agent agent) {
        agents.add(agent);
        agent.setId(setIdCounterToAgent());
        if (inProgress) {
            agent.start();
        }
        rootNode.attachChild(agent.getSpatial());
    }

Not only does this code assume that the agent’s Spatial is not already attached to the rootNode, it also assumes that the developer actually WANTS to attach the Spatial to the rootNode. Which is not always true (for example, I want to attach it to a shootables Node).

I suggest deleting that line and delegating this work to the developer… please :slight_smile:
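To illustrate the suggested change, here is a minimal sketch with simplified stand-in classes (`Spatial`, `Node`, `Agent` and `BrainsAppState` below are illustrative stand-ins, not the real jME3 or MonkeyBrains types): `addAgent()` only registers the agent, and the caller attaches the Spatial to whichever parent Node it belongs in.

```java
import java.util.ArrayList;
import java.util.List;

// NOTE: simplified stand-ins for illustration, not the real jME3 classes.
class Spatial {}

class Node extends Spatial {
    final List<Spatial> children = new ArrayList<>();
    void attachChild(Spatial s) { children.add(s); }
}

class Agent {
    private final Spatial spatial = new Spatial();
    Spatial getSpatial() { return spatial; }
}

class BrainsAppState {
    final List<Agent> agents = new ArrayList<>();

    // addAgent only registers the agent; attaching its Spatial is left
    // to the caller, who knows the right parent Node.
    void addAgent(Agent agent) {
        agents.add(agent);
    }
}
```

The caller then decides the parent explicitly, e.g. `shootables.attachChild(agent.getSpatial())`.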

That isn’t a problem for now. MonkeyBrainsAppState gets its reference to the rootNode through the setApp() method; if it causes that much of a problem, you can extend the class and override setApp(). All fields in MonkeyBrainsAppState are protected, so developers can change them if needed. I will keep this suggestion in mind. :slight_smile:

Well, actually I had to fork MonkeyBrains just to delete this line. I understand that moving the rootNode out of MonkeyBrains requires a huge refactoring, but it’s for the good of the library in the long run… at least I think so.

I am currently very busy with my work and research, so I don’t have enough time to fix it all. Your (@pesegato) ideas were very helpful in refactoring MonkeyBrains. If you like, I can give you access to the GitHub code so you can improve the API without me. Post your GitHub username and I will add you as a contributor.

On another note, I was thinking about how to remove boilerplate code. As I don’t see how it is possible within MonkeyBrains itself, I am exploring the possibility of creating a language on top of MonkeyBrains, or something similar. If you have an idea for the next improvement, post it here. :smiley:

My username is Pesegato also on github :smile:

Thanks! If you add me, this is what I’ll do:

  1. fix the above issue
  2. split MonkeyBrains into 2 packages: monkeybrains and monkeystuff (where all non-AI-related stuff will go).

This will obviously break the monkeybrains samples :frowning:

What do you mean by that exactly? What kind of language would this be?

Glad to see more MonkeyBrains contributors coming on board :smile:

@Pesegato I have added you to both MonkeyBrains repository and MonkeyBrainsDemoGames repository.

@erlend_sh I don’t have the slightest idea what the code will look like yet. I am researching programming paradigms to see their potential, as they need to interoperate with object-oriented Java. For now I am leaning towards functional programming in the style of Clojure, but it is still too early to discuss it, so if anybody has an idea of how they want to write AI code, please suggest it here.

The idea is that there will be 2 supported ways to use the MonkeyBrains framework, like Objective-C and Swift on iOS (I am an iOS developer now): everything can be built with Java code, but there will be support for that new language to make some things easier.

BTW @erlend_sh, what is the recommended Java version for working with jME?

@Tihomir, take a look at Xtend (Xtext); one of its goals is to remove (hide) boilerplate and generate Java code. Currently it’s only supported on Eclipse, Maven and Gradle, but an IDEA port is coming.

Done my first commit! Now I want a shiny gold badge :smile:

Back to more serious matters: since I am a contributor to MonkeyBrains, I can speak openly about what I think are the problems of this lib.
The core concept is cool: agents, behaviours, teams, not to mention the steering behaviours.
Unfortunately, the lib makes several assumptions about your game that may or may not fit the developer’s needs.
This is what I was referring to above: split the non-AI-related stuff into a separate package.
Until this step is finished, I doubt that a higher-level language will alleviate the issue… :frowning:

Nice, I did not know an IDEA port is coming. It’s on my radar now :smile:

I’ve kind of wanted MonkeyBrains to become an AI framework that works and is useful. But from my point of view, it’s not really well adapted to jME3, nor does it have a good abstraction for an extensible AI framework yet… I wrote myself an AI library a long time before MonkeyBrains, and I keep going back to it… Anyway, this is my opinion; as this thread says, it’s open for suggestions. I should introduce myself as a game developer, not an AI expert or anything near that, to judge MonkeyBrains. I just know what I know, bear with me :slight_smile:

I’ve also used the Agent concept in a lot of my games written in Java, Java dialects and C-family languages.

In Java, I’ve implemented my framework by leveraging Guava and RxJava, for their extensive utility collections and for composing functions and routines (threads, actions, events…). So it’s not very pure Java, but since I use these two libraries intensively anyway, I think it’s a good idea to use them for AI. The foremost abstractions of the framework are:

  • Agent is an interface: a collection, provider and updater of Behaviors.
  • Behavior is a signal the Agent gives to the outside world to make it aware of the Agent, and also the activity it performs in the meantime.
  • Team is somewhat too much to include in a framework. I think any extended Collection interface works just fine.
  • World is somewhat too much for a framework as well.
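The Agent/Behavior abstraction above can be sketched in plain Java (a minimal illustration; the names `Behavior`, `Agent` and `CountingAgent` are my own, not from any existing library):

```java
import java.util.ArrayList;
import java.util.List;

// Behavior is both a signal the agent exposes and the activity it runs.
interface Behavior {
    void update(float tpf);
}

// Agent as an interface: a collection, provider, and updater of Behaviors.
interface Agent {
    List<Behavior> behaviors();

    default void update(float tpf) {
        for (Behavior b : behaviors()) {
            b.update(tpf);
        }
    }
}

// A trivial agent whose single behavior just counts update ticks.
class CountingAgent implements Agent {
    int ticks = 0;
    private final List<Behavior> behaviors = new ArrayList<>();

    CountingAgent() {
        behaviors.add(tpf -> ticks++);
    }

    public List<Behavior> behaviors() {
        return behaviors;
    }
}
```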

What I have learned from a mixed background of game development and software development research (tools for AI) is: make AI things as composable as possible to make them helpful!

-) I like the Container (scope and space) concept. A Container defines an overall Layout for Agents (alignment, movement, formation, rules)… A Container may also introduce something like a Blackboard of information exposed to the Agents within its scope.
-) Layer is a software-managed concept, introduced to bring code generation and GUI creation into AI. Layers, along with Containers, provide managed information (as Events and Data) for Agents to process in the update loop. A Layer is something like Steering for movement, cooperative pathfinding, or goal-driven decision making… and Layers should be built so that one can be stacked on top of another with a minimum of interleaving.
-) Event, State, Data and Sampling are too generic, and a lot of libraries have done excellent work defining them. I just wrap the library I want and use it with jME3.

In JME3:

  • Agent is an interface, which exposes its Behaviors to other Agents, and also exposes Events to its Container.
  • Events (with good libraries already taking care of them) let Data flow concurrently, so we don’t need an overall sophisticated method to handle them. For personal use, I’ve used RxJava and Guava. A fast RingBuffer can also be used to handle Events within Containers and Layers.
  • A Layer can be built from the outside, via code generation or a GUI tool, or injected into one, some, or all of the Agents. Talking about injection usually means DI (inject when constructing the object), but injection can also happen at runtime (like adding a Component to an Entity in an ES). For my current jME3 game, I use a GUI tool to generate AppStates and Controls to layer and cascade movement and animation. My colleagues have researched Qi4j to build much more comprehensive layering for the whole system; that is a very advanced thing this very slim abstraction is capable of achieving.
  • A solid candidate for Container is Spatial, or to be exact Node, which can represent a simple tree of Space, Scope and Layout. One can argue with that naive approach and instead make another Layer and stream data into the Spatial via a Control, for example. Or even make it MVC if that suits their model of thinking and toolset. It’s possible.
    -) We can also use an EntitySystem to handle Data and use a Signal (the simplest Entity plus Component) to represent interactions. In my AI framework on top of a fork of Jay-ES, I’ve used Signals intensively, and after a while I’ve found them very composable and even easy to understand and to debug AI with.

That’s what’s on top of my brain right now. Hope this is useful in some way…

I must say @atomixnmc that I didn’t completely understand your explanation, as I am an AI programmer and not a game programmer. A lot of stuff probably looks like it doesn’t belong there, like Team. Team is a Collection for now, and I agree with you on that, but it won’t be forever. The problem with this kind of agent is communication. In a first-person shooter game, agents won’t know each other’s positions unless we let them know the whole map, which I think is cheating; so the idea behind Team is a communication group. The expectation is also for the Team class to become a sort of group agent for real-time strategies, enabling group behaviors. I think some steering behaviors take a Team as a parameter.
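The “team as a communication group” idea can be sketched like this (a hypothetical illustration; `TeamAgent` and `Team` are not the actual MonkeyBrains classes): members only learn about each other through messages relayed by the team, never by reading the global map.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of a team as a communication group.
class TeamAgent {
    final List<String> inbox = new ArrayList<>();
    void receive(String msg) { inbox.add(msg); }
}

class Team {
    private final List<TeamAgent> members = new ArrayList<>();
    void add(TeamAgent a) { members.add(a); }

    // Relay a report from one member to all of its teammates; an agent
    // never reads another agent's position directly from the world.
    void report(TeamAgent sender, String msg) {
        for (TeamAgent m : members) {
            if (m != sender) m.receive(msg);
        }
    }
}
```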

Inventory has also been a target, as if it doesn’t belong to AI. I beg to differ. Steering behaviors use the mass of the agent to properly calculate velocity. Inventory is an interface: if you don’t implement it and add it to an agent, nothing will happen, but I think you should include the mass of the inventory in the mass of the agent. Without it you would have to override the agent’s mass… a lot of work for little benefit. Also, the inventory inside the agent will update the cooldowns of all the items in the inventory that the programmer thinks should have cooldowns. I hope I have defended my idea.

Agent is an interface: OK, it can be, but that way a lot of AI stuff can’t be implemented. I have made Agent generic, so you can use that generic class to extend whatever you like. In the MonkeyBrainsDemoGame Sword & Gun, I demonstrated this when the generic class extends a physical-object class. I also used it in the game for updating the agent; as a matter of fact I used it as a dual (the agent as the thinking object and the model as the physical object).

Behavior is a signal: I don’t completely understand. Some explanation, literature, or a tutorial?

World is too much for a light framework, but it will be needed for complex calculations. An example for now would be Recast Navigation (please do not ask me when it will be completed :frowning: )

MonkeyBrainsAppState should update all agents in the game. It is based on an entity system.

I hope this explains my plans for MonkeyBrains and why the things are now as they are.

I know the struggle to make things abstract and also useful at the same time.

The problem is that if we assume too much about the “universe structure”, we will miss a lot of features that would be easier to adapt if we didn’t assume it.

The Java utility libraries themselves do excellent work in not assuming anything, so they can be used as generically as possible. I think we should continue to build it that way: stick with Collection, and don’t force an Event mechanism, but let users provide their own.

I’m talking about a blackboard architecture, for example. An Agent will want to contribute its knowledge to the knowledge base by giving away a piece of data called a “Signal”… This depends on the user’s implementation. A Signal can be anything from a primitive (numbers) to complex data.
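A minimal blackboard sketch under those constraints (Signal as an empty marker interface; `Blackboard` and `EnemySpotted` are illustrative names, not an existing API):

```java
import java.util.ArrayList;
import java.util.List;

// Signal is deliberately an empty marker interface: the user decides
// what a piece of knowledge looks like.
interface Signal {}

// The blackboard just collects signals; readers filter for the
// concrete types they understand.
class Blackboard {
    private final List<Signal> knowledge = new ArrayList<>();

    void post(Signal s) { knowledge.add(s); }

    List<Signal> read() { return new ArrayList<>(knowledge); }
}

// An example user-defined signal: anything from primitives to complex data.
class EnemySpotted implements Signal {
    final float x, y, z;
    EnemySpotted(float x, float y, float z) { this.x = x; this.y = y; this.z = z; }
}
```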

To support this kind of flexibility, we should not:

  • assume the Event is anything specific (at most an empty interface with a unique Id)
  • assume the Behavior is anything specific, because it can be decomposed into Signals. In some implementations it can be a Component of an Entity; in others, it can be a Control like what you did.

I will give two examples of how I do this differently in several projects:

/** Because this AILayer is built manually, instead of by code generation or an editor, we need an interface for AIEvent. */
public interface AIEvent {
    UniqueID getId();
}

public interface Signal extends Component { //, AIEvent {   // Yes, you can make Signal extend AIEvent too if you want!

}

public interface Agent {
    UniqueID getId();
    void aware(Signal... datas);
    void signal(Signal data);
}

/** It's fair enough to make this abstract instead of an interface, because we involve movement here! */
abstract class SteeringBehavior implements Signal {
    abstract Vector3f getForces();
}

public class CharacterEntity {
    Entity aiEntity;

    public CharacterEntity() {
        aiEntity.add(new SteeringBehavior()); // in practice a concrete SteeringBehavior subclass
    }
}

/** This is a USER class! This class will normally compose different Layers in whatever order the user wants. Let's say SteeringBehavior, then CharacterMovement, then GoalDriven, and finally a TeamTacticStrategy. Those States are normally generated by an editor! */
public class GameWorld extends AppState {
    /** EventDispatcher is our unopinionated implementation that suits both an update-beat mechanism and an event-based mechanism. It broadcasts Signals under the hood and notifies Subscribers of Signals all over the game. */
    static EventDispatcher eventDispatcher = new RxJavaEventDispatcher();
    ArrayList<AppState> layers;
    //StateManager stateManager = new StateManager(null); // My implementation of StateManager that is engine-agnostic

    public void initialize(Application app, AppStateManager stateManager) {
        layers.add(new SteeringBehaviorState());
        layers.add(new CharacterMovementState());
        stateManager.attachAll(layers);
    }

    public void update(float tpf) {
        entityData.resolve(eventDispatcher);

        //stateManager.updateInternal(layers);
    }
}

EventDispatcher will be our unopinionated implementation that suits both an update-beat mechanism and an event-based mechanism. EventDispatcher will broadcast Signals under the hood and notify Subscribers of Signals all over the game.

Then later, if you want to debug the game AI, you just make a Swing frame Subscriber that takes the signals and displays them in another thread (Observer pattern).
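The dispatcher-plus-debug-subscriber setup can be sketched in plain Java (no RxJava; `EventDispatcher` and `Subscriber` here are illustrative names, not an existing API):

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Minimal Observer pattern: the dispatcher broadcasts every signal to
// all subscribers; a debug view (e.g. a Swing frame) is just one more
// subscriber.
interface Subscriber<T> {
    void onSignal(T signal);
}

class EventDispatcher<T> {
    private final List<Subscriber<T>> subscribers = new CopyOnWriteArrayList<>();

    void subscribe(Subscriber<T> s) { subscribers.add(s); }

    void broadcast(T signal) {
        for (Subscriber<T> s : subscribers) {
            s.onSignal(signal);
        }
    }
}
```

A `CopyOnWriteArrayList` keeps broadcasting safe even if a subscriber on another thread (like the Swing debug frame) is added mid-loop.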

This is incredibly transparent, composable and extensible. Users can make up their own solutions, build components and contribute back. I’ve learned from a variety of sources, and I sell my Unity AI components for a living. :smile:

I suggest you take a look at RxJava’s Observer pattern to see how they wrap their minds around real-time pipelines and events. I like to think of AI as nothing but a stream of ideas; that’s why I looked at RxJava intensively and came up with this.

It’s modern and cool as hell at the same time!

And maybe a video of how cool Rx can be in a game.

Solution 2:

For a character-based game, of which an FPS or a fighting game is an example:

/** Because this AILayer is built manually, instead of by code generation or an editor, we need an interface for AIEvent. */
public interface AIEvent {
    UniqueID getId();
}

public interface Signal extends AIEvent {
    Vector3f getPosition();
    Weapon getWeapon();
    CharacterTeam getTeam();
    float getHealth();
    ....
}

public interface Agent {
    UniqueID getId();
    void aware(Signal... datas);
    void signal(Signal data);
}

/** It's fair enough to make this abstract instead of an interface, because we involve movement here! */
abstract class SteeringBehavior extends AbstractControl implements Signal {
    abstract Vector3f getForces();
}
/** This is both an Agent and a Signal. Why? Because it can be aware of other signals and react, but it can also be a signal for other systems, like radars that scan the whole map! */
class PlayerCharacterBehavior extends BetterCharacterControl implements Agent, Signal {
    UniqueID id;

    public UniqueID getId() {
        return id;
    }

    public void aware(Signal... datas) {
        // attack the enemy if there is any signal of them around
    }

    public void signal(Signal data) {
        // this is my chance to tell the world how I'm doing
    }

    public void update(float tpf) {
        // if I'm talkative, I will try to signal every time I have a chance
        signal(new CombatSignal("Enemy is near", new Vector3f(myPos), myTeam, myHealth, ...));
    }
}

class CombatBehavior extends AbstractControl implements Signal {

}

public class CharacterEntity extends Node {
    String id;

    public CharacterEntity(String id) {
        // add some model...

        addControl(new PlayerCharacterBehavior());
        addControl(new CombatBehavior());
    }
}

public class CharacterTeam extends Node implements List<CharacterEntity> {
    EventDispatcher teamRadio = new RxJavaSpatialRangeDispatcher(3); // A radio that works within a 3-meter range

    public void update(float tpf) {
        teamRadio.broadCast(AlivePredicate.filter(getTeamList()), "We are alive");
        teamRadio.broadCastWithEnvelope(AlivePredicate.filter(getTeamList()), DeathPredicate.filter(getTeamList()), "They are dead!");
    }
}

Behaviour related to Spatial information (positions, boundaries, visibility) should be implemented in jME (I’m afraid of making this too wide without focusing on anything; in other engines it can be different):

abstract class SpatialBehavior implements Signal, Control {
    Spatial spatial;
    Seek seek; // steering seek
    PathFollow pathFollow; // steering path following and finding
    List<Behavior> behaviors = new ArrayList<>();

    public void setSpatial(Spatial spatial) {
        this.spatial = spatial;
        this.seek = new Seek(spatial);
        this.pathFollow = new PathFollow(spatial);
        this.behaviors.add(seek);
        this.behaviors.add(pathFollow);

        // or something like this, if each behavior is not a Control:
        /*
        this.seek = new Seek();
        this.pathFollow = new PathFollow(other);
        this.team = new TeamCohesion(team);
        this.spatial.addControl(seek);
        this.spatial.addControl(pathFollow);
        this.spatial.addControl(team);
        */
    }

    public void update(float tpf) {
        Vector3f forces = calculateForces(behaviors);
        spatial.setLocalTranslation(forces);
        // or, if each behavior is not a Control:
        //for (Behavior behavior : behaviors) { behavior.update(tpf); }
    }
}

Please continue the discussion in this thread: What's happening with MonkeyBrains?