MotiveAI (New Video)

Hello,



I would like to present:

MotiveAI – intelligent agent wrapper for adaptive AI learning


Code license:
New BSD License


Video about functionality:
http://www.youtube.com/watch?v=OuAPR5aqO60

Project, code and download:
https://code.google.com/p/motiveai/

Please look at the Tests to see how the system is used.

I hope you will like it.

Do you have a link to a nice article that explains the theory behind this?


This is definitely something people will be interested in, keep at it. Btw your video is currently unlisted on YouTube, so it can only be reached via the direct link.



Also, when explaining a lot of theory like this, lacking English skills can indeed become a problem. Definitely don’t let that keep you from being public about your work and otherwise interacting on the forum to help and be helped; you’re doing it right. Just be aware that by mastering the English language you will have a much easier time communicating your ideas and collaborating with other developers.


The video works fine here. :confused:

@madjack said:
The video works fine here. :/

I just meant no one will find it via a search or related video on YouTube.

That’s what I figured after posting.



As for it being unlisted I imagine it’s so until it’s more complete? shrug


Very cool thing tho, forgot to say… I absolutely see the potential. That guard chasing down the player is very interesting. I don’t understand what it is that he “learns” tho, but I understand it works through your AI system somehow.


I'm happy that you are interested in the idea.



@erlend_sh:

Yes, my lacking English skills are a problem, but at least you understand me. Maybe I will get better after some years of reading posts and articles written in English.



Like @madjack mentioned, the video link is not public, because it was made only for the jME community.

It is actually a "trash video" that just shows the idea working.





@androlo:

Generally, it is based on general information, not a specific algorithm.

Here is a very general image showing the idea:

http://en.wikipedia.org/wiki/File:IntelligentAgent-Learning.png



I will prepare detailed information about how it works and post it here.

androlo:

I don’t understand what it is that he “learns”

He learns which behaviour was good in a certain environment.

I don't know if it's enough, but here is some of the theory I made:
http://i.imgur.com/dDTNw.jpg

http://i.imgur.com/DBsfF.jpg

Interesting.



In some types of games it feels like the agent would become very difficult to beat after a while. Is there any "stop" where it doesn't learn more, or does it use everything it has learned?

RasmusEneman:



This is a very good question.



Actually this "stop" depends on:

[java]
public enum BasedDecision {

    /**
     * Try to use behaviours with a low execution count, based on performance,
     * in case they can perform better than they did before.
     * Choose factor:
     * ((maxcountedBehave_count + 1) / (actualBehave_count + 1)) * actualBehave_performance
     */
    GIVE_A_TRY,

    /**
     * Choose only tested solutions; give a chance only to solutions that were never tested.
     * Every tested solution worse than the best solution will never be tested again.
     */
    ONLY_CHECKED
}
[/java]

Also, the count is an integer, so it can't go too far; I need to limit that.

I also plan to make something like a "fast simulation" that will teach the agent everything it needs.

Any ideas are welcome here.
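To make the GIVE_A_TRY choose factor concrete, here is a standalone sketch of the formula from the javadoc above. The class and parameter names are illustrative, not the actual MotiveAI code; only the formula itself comes from the enum's documentation. The point is that a rarely-tried behaviour gets a boost proportional to how far its execution count lags behind the most-executed one, so it can still win a try even with weaker past performance.

```java
// Standalone sketch of the GIVE_A_TRY choose factor described above.
// Names are illustrative, not the actual MotiveAI API.
public class GiveATryScore {

    /**
     * ((maxcountedBehave_count + 1) / (actualBehave_count + 1)) * actualBehave_performance
     */
    static double score(int maxCount, int actualCount, double performance) {
        return ((double) (maxCount + 1) / (actualCount + 1)) * performance;
    }

    public static void main(String[] args) {
        // A behaviour tried 9 times with performance 4.0, versus a
        // never-tried behaviour with a weaker past performance of 1.0:
        double veteran = score(9, 9, 4.0); // (10/10) * 4.0 = 4.0
        double novice  = score(9, 0, 1.0); // (10/1)  * 1.0 = 10.0, so the untried one wins a try
        System.out.println("veteran=" + veteran + " novice=" + novice);
    }
}
```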

Thank you for those diagrams, and the info. Still a bit hard to grasp hehe, but I guess studying a few concrete examples will fix that. Downloaded this code now, very excited.

@androlo:

Yeah, it's certainly not easy to understand. The diagrams help me too.



It's good that the Forester (BioMonkey) guru is interested in it :slight_smile:



The System is great, but the Tests are not. I will need to make better AI Tests.

Today I fixed the use of public @interface DurationOfBehaviour, because it didn't work when the manager's UpdateRate was set to more than 0. It's already fixed.



The "can be used" and "must be used" procedures should limit the AI's decisions.

The more it is limited, the more predictable its choices are.

"Must be used" is not implemented yet, but it will be.



By "can be used" I mean:

[java]
/*
 * Predefined condition, limits agent decisions.
 * Returns true if the agent can use this Behaviour in the actual Environment.
 * Returns false if the agent can't use this Behaviour in the actual Environment.
 */
public abstract boolean canBeUsedInEnvironment(Environment environment);
[/java]
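For illustration, here is a minimal, self-contained sketch of how a behaviour might implement this predicate. The Environment stub and the AttackEnemy class here are hypothetical stand-ins, not the real MotiveAI classes; only the method signature, getBooleanFactor, and the "isEnemyClose" factor name mirror the code shown in this thread.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only: a minimal Environment stub and a behaviour that
// restricts itself via canBeUsedInEnvironment. Not the real MotiveAI API.
public class CanBeUsedSketch {

    static class Environment {
        private final Map<String, Boolean> factors = new HashMap<>();
        void setFactor(String name, boolean value) { factors.put(name, value); }
        boolean getBooleanFactor(String name) { return factors.getOrDefault(name, false); }
    }

    static abstract class Behaviour {
        public abstract boolean canBeUsedInEnvironment(Environment environment);
    }

    // An "attack" behaviour that is only a legal choice while an enemy is close,
    // so the learning system never even considers it otherwise.
    static class AttackEnemy extends Behaviour {
        @Override
        public boolean canBeUsedInEnvironment(Environment environment) {
            return environment.getBooleanFactor("isEnemyClose");
        }
    }

    public static void main(String[] args) {
        Environment env = new Environment();
        Behaviour attack = new AttackEnemy();
        System.out.println(attack.canBeUsedInEnvironment(env)); // no enemy yet: not usable
        env.setFactor("isEnemyClose", true);
        System.out.println(attack.canBeUsedInEnvironment(env)); // now usable
    }
}
```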





The BasedDecision type also specifies how the AI should think.

BasedDecision.ONLY_CHECKED currently needs to be fixed, so please don't try to use it.

I don't know exactly how to make the FactorInteger class.

It is limited; I need to think about how to change it into a better version.



About programming with this System:
  • The number of possible Environments is the product of all Factors' possible values, so it is better to use the lowest number of Factors (and of their possible values, especially in the FactorInteger case).
  • New Environment choices will be based on which factors a Behaviour modifies; I just need to make it.
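To see why the factor count matters, here is a tiny standalone illustration (no MotiveAI code involved): the number of distinct Environments is the product of each Factor's number of possible values, so every extra boolean Factor doubles the state space the agent must learn over, and a wide FactorInteger multiplies it much faster.

```java
// Illustrative: environment count = product of each factor's possible values.
public class EnvironmentCount {

    static long count(int... possibilitiesPerFactor) {
        long total = 1;
        for (int p : possibilitiesPerFactor) {
            total *= p; // each factor multiplies the number of distinct environments
        }
        return total;
    }

    public static void main(String[] args) {
        // Five boolean factors, as in the Chest Defender example: 2^5 = 32 environments.
        System.out.println(count(2, 2, 2, 2, 2));
        // Swap one boolean for a FactorInteger with 10 possible values: 160 environments.
        System.out.println(count(2, 2, 2, 2, 10));
    }
}
```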



Soon I will finish Test4_Advanced, which will show the true possibilities of this AI wrapper. Maybe a video will come with it.

Actually a pretty interesting topic; I also have a real-world example where I would try this…



I have a few questions left after checking the code (Chest Defender):



As far as I understood, with the lines:

[java]
env.setFactor("isEnemyClose", new FactorBoolean(false, false, 0));
env.setFactor("isDefendingChest", new FactorBoolean(false, true, 6));
env.setFactor("isBored", new FactorBoolean(false, false, 1));
env.setFactor("isRested", new FactorBoolean(false, true, 1));
env.setFactor("isAttacking", new FactorBoolean(false, true, 4));
[/java]



you are setting the possible actions with their performance. Following this, the above code makes the overall goal 'defending the chest'; so far so good. However, it occurred to me that a perfect agent would always defend the chest and never attack (since the performance is lower).



I looked through the sources and 'Test2_Sensor' got my attention, especially the following part:

[java]
float distance = agent.getSpatial().getLocalTranslation().distance(node.getLocalTranslation());
System.out.println("distance: " + distance);
if (distance < 15f) {
    System.out.println("enemy set to node: " + node);
    agent.closeEnemy = node;
    System.out.println("enemy node: " + agent.closeEnemy);
    environment.setFactor("isEnemyClose", true);
}
[/java]



the above part of the sensor, in combination with this (from Test2AttackEnemy.java):

[java]
if (environment.getBooleanFactor("isEnemyClose")) {
    environment.setFactor("isAttacking", true);
    environment.setFactor("isRested", false);
    // ...
[/java]

forces the agent to attack the enemy if it's closer than 15 WU, independently of the performance.



What I do not understand here is what influence the learning/performance/past results have on the outcome…

Actually I see one thing I would like:



Allow shared and non-shared experience.



→ E.g. in a strategy game it is good to have a common base knowledge shared by all AI players; however, depending on the map and the players' play styles it might be very interesting to allow each AI to have its own experiences. So you might end up with an aggressive bot, a camper bot, etc., whatever worked for that AI a few times.



→ In a shooter/stealth game, having AIs that do not share all knowledge is essential. If you kill and dispose of the one that saw you before he could call for help, the rest should not know about it.

@zzuegg said: What I do not understand here is what influence the learning/performance/past results have on the outcome…


He learns exactly when he needs to be active during combat, and when there is possibility to rest, and even play a game. After a few years of chest guarding he'll be able to break out the gameboy in between every exchanged blow. He will later learn how to eat and go to the bathroom as well - during combat, making him inexhaustible. Maybe? :D

@oxplay2 said:
Soon I will finish Test4_Advanced, which will show the true possibilities of this AI wrapper. Maybe a video will come with it.


It seems to have a lot of content already. It also appears to be a good way to just put general AI stuff in, even if not using the most advanced components, the UI seems clean, etc. Hope it keeps going :D

And again, very nice diagram. Everything makes a lot more sense now.

As far as I can tell, the best 'AI in games' communities I know of would be:

http://forums.aigamedev.com/

http://www.gamedev.net/forum/9-artificial-intelligence/

http://devmaster.net/forums/forum/32-artificial-intelligence/



Any one of these would be a good place to share your work and ask for feedback, look for similar projects and get some tips for further reading. If anyone knows of any other good AI communities, specifically for game development or otherwise, I’d love to hear about it.

I WANT TO USE IT IN MY PROJECTS!!!



REALLY NICE MAN!!!

zzuegg:



Yeah, the Tests are not good enough.


you are setting the possible actions with their performance


Not actions; these are Factors and the performance gained from their good values,
so the Agent knows when he is "happy / happier / happiest".

In this example it could be done really differently.
I made it fast because I prefer to work on the System, not the tests.

that a perfect agent would always defend the chest and never attack


In this example "defend the chest" is the action of running to the chest (it makes him happy if he moves toward it OR is close to it).

I looked trough the sources and the ‘Test2_Sensor’ got my attention


Now, since I added FactorReference (it also still needs testing), it could be just:
environment.setFactor("isEnemyClose", node);

What I do not understand here is what influence the learning/performance/past results have on the outcome…


How does he learn? (Generally:)
- After the execution of a behaviour,
he checks how much the environment has changed and what performance was gained/lost from that action,
and remembers that behaviour's data for the actual environment. So if the agent needs to act in the same Environment,
he can choose a behaviour from memory (the decision is also based on a few other things).
- (This part will be added in Beta:) after execution he also remembers exactly which factors changed, so for similar new environments
the agent will choose behaviours that could change those factors for the better.
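That loop could be sketched roughly like this (a standalone illustration with hypothetical names, not the MotiveAI classes): the environment is reduced to a key, the observed performance of each executed behaviour is recorded for that key, and in a known environment the best-remembered behaviour can be recalled.

```java
import java.util.HashMap;
import java.util.Map;

// Rough standalone sketch of the learning loop described above.
// All names here are hypothetical, not the actual MotiveAI API.
public class LearnSketch {

    // memory: environment key -> (behaviour name -> best observed performance)
    private final Map<String, Map<String, Double>> memory = new HashMap<>();

    /** After executing a behaviour, remember the performance it produced. */
    public void remember(String envKey, String behaviour, double performance) {
        memory.computeIfAbsent(envKey, k -> new HashMap<>())
              .merge(behaviour, performance, Math::max);
    }

    /** For a known environment, recall the behaviour with the best past performance. */
    public String bestBehaviourFor(String envKey) {
        Map<String, Double> known = memory.get(envKey);
        if (known == null || known.isEmpty()) {
            return null; // never seen this environment: the agent must explore
        }
        return known.entrySet().stream()
                .max(Map.Entry.comparingByValue())
                .get().getKey();
    }

    public static void main(String[] args) {
        LearnSketch brain = new LearnSketch();
        brain.remember("enemyClose=true", "attack", 4.0);
        brain.remember("enemyClose=true", "rest", -1.0);
        System.out.println(brain.bestBehaviourFor("enemyClose=true")); // attack
        System.out.println(brain.bestBehaviourFor("enemyClose=false")); // null: unexplored
    }
}
```

A real system layers a decision policy (like GIVE_A_TRY) on top of this memory instead of always picking the maximum.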


Empire Phoenix:

Allow shared and non-shared experience.


You can easily make that: just create two MotiveAI Managers. Each manager has its own Learn Central Unit.

-> E.g. in a strategy game it is good to have a common base knowledge shared by all AI players; however, depending on the map and the players' play styles it might be very interesting to allow each AI to have its own experiences. So you might end up with an aggressive bot, a camper bot, etc., whatever worked for that AI a few times.


For non-shared experience:
you can just add a FactorInteger "player"; it will split the Environments between AI players.

But about shared factors...
it is a great idea.

Maybe Factors could be set to "global" or "local".
I actually don't know the best way to implement the sharing yet, but I will need to find the best solution.

-> In a shooter/stealth game, having AIs that do not share all knowledge is essential. If you kill and dispose of the one that saw you before he could call for help, the rest should not know about it.


It's only a wrapper/universal system.
IMO the fastest solution for that would be a user-programmed "call for help" that modifies the environment factor of nearby agents (for example a FactorReference named "enemy"). Or maybe I will add something to the system to make it easier.

About automatic sharing between agents that are "close": the system is universal, and an agent can even be a packing machine (I mean the agent does not need to know about its location), so the user needs to make it, not the system.
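A user-programmed "call for help" could look roughly like this (a standalone sketch; the Agent and Environment classes and the radius are all hypothetical stand-ins, only the idea of pushing an "enemy" reference into nearby agents' environments comes from the post above). Agents outside the radius never receive the factor, which is exactly the "killed before he could call" stealth case.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Standalone sketch of a user-programmed "call for help" as discussed above.
// Agent/Environment here are minimal stand-ins, not the MotiveAI classes.
public class CallForHelpSketch {

    static class Environment {
        final Map<String, Object> factors = new HashMap<>();
        void setFactor(String name, Object value) { factors.put(name, value); }
    }

    static class Agent {
        final String name;
        final double x, y;
        final Environment environment = new Environment();
        Agent(String name, double x, double y) { this.name = name; this.x = x; this.y = y; }
        double distanceTo(Agent other) { return Math.hypot(x - other.x, y - other.y); }
    }

    /** The caller shares the enemy reference only with agents within earshot. */
    static List<Agent> callForHelp(Agent caller, List<Agent> all, Object enemy, double radius) {
        List<Agent> alerted = new ArrayList<>();
        for (Agent a : all) {
            if (a != caller && caller.distanceTo(a) <= radius) {
                a.environment.setFactor("enemy", enemy); // like a FactorReference
                alerted.add(a);
            }
        }
        return alerted;
    }

    public static void main(String[] args) {
        Agent guard = new Agent("guard", 0, 0);
        Agent near  = new Agent("near", 5, 0);
        Agent far   = new Agent("far", 50, 0);
        List<Agent> alerted = callForHelp(guard, List.of(guard, near, far), "player", 15.0);
        System.out.println(alerted.size()); // only the near agent hears the call
    }
}
```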

androlo:

It seems to have very much contents already. It also appears to be a good way to just put general AI stuff in, even if not using the most advanced components, UI seems clean etc. Hope it keeps going


Yeah, exactly.
And yes, the UI is clean,
but IMO the Tests are actually very ugly.

erlend_sh:

Yeah, very good links.
Most of the information found there is related to algorithms for pathfinding/cover/other things,
or just math. But there is some agent-related info.
Mostly I read PDFs about agents, so I know the theory, then put it into practice in Java.

mifth:

What passion! :)

You can use the Alpha, of course. If you find any issues or just have good ideas, write here or send an email.

I'm sorry about the earlier "trash video".

The new video should clearly explain the functionality.

@RasmusEneman asked about a "stop" for learning; it's now implemented.

The next steps will be more about easy and universal sharing of information.



Video:

http://www.youtube.com/watch?v=OuAPR5aqO60
