zzuegg:
Yeah, the tests are not good enough.
you are setting the possible actions with their performance
Not actions; these are Factors, and the performance gained when they have good values.
That way the agent knows when it is "happy / happier / happiest".
In this example it could be done quite differently.
I made it quickly because I prefer to work on the system rather than on the tests.
that a perfect agent would always defend the chest and never attack
In this example "defend the chest" is an action of running to chest. (it make him happy if he move to it OR is close to it).
I looked through the sources and the ‘Test2_Sensor’ got my attention
Now that I have added FactorReference (it still needs testing), it could be just:
environment.setFactor("isEnemyClose", node);
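For context, a rough sketch of the difference this makes. Only setFactor("isEnemyClose", node) comes from the line above; everything else (class and method names) is invented for illustration:

// Hypothetical sketch (only setFactor(...) is from the post; the rest is invented).
import com.jme3.scene.Node;

class EnemySensorSketch {

    // Minimal stand-in for the real Environment class, just so the sketch compiles.
    interface EnvironmentSketch {
        void setFactor(String name, Object value);
    }

    // Without FactorReference: compute the boolean yourself on every update.
    void updateManually(EnvironmentSketch environment, Node agent, Node enemy) {
        boolean close = agent.getWorldTranslation()
                             .distance(enemy.getWorldTranslation()) < 10f;
        environment.setFactor("isEnemyClose", close);
    }

    // With FactorReference: hand the node to the environment once and let the
    // system read the value it needs from it.
    void registerReference(EnvironmentSketch environment, Node enemy) {
        environment.setFactor("isEnemyClose", enemy);
    }
}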
What I do not understand here is what influence the learning/performance/past results have on the outcome…
How does it learn? (Generally:)
- After executing a behaviour, the agent checks how much the environment has changed and how much performance was gained or lost by that action, and remembers that behaviour's data for the current environment. So if the agent has to act in the same environment again, it can choose a behaviour from memory. (The decision is also based on a few other things.) A rough sketch of this is below.
- (This part will be added in Beta.) After execution the agent will also remember exactly which factors changed, so in similar new environments it will choose behaviours that are likely to change the factors for the better.
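A minimal sketch of the first point (all names here are invented; the real Learn Central Unit stores more data and decides on more criteria). The key idea is mapping an environment snapshot to the behaviour that performed best in it:

// Minimal sketch of the learning loop described above (invented names).
import java.util.HashMap;
import java.util.Map;

class LearningSketch {
    // environment snapshot key (e.g. a hash of its factor values) -> best known behaviour
    private final Map<String, Remembered> memory = new HashMap<>();

    record Remembered(String behaviour, double performanceGain) {}

    // Called after a behaviour has finished executing.
    void remember(String environmentKey, String behaviour,
                  double performanceBefore, double performanceAfter) {
        double gain = performanceAfter - performanceBefore;
        Remembered best = memory.get(environmentKey);
        // Keep whichever behaviour gave the biggest performance gain here.
        if (best == null || gain > best.performanceGain()) {
            memory.put(environmentKey, new Remembered(behaviour, gain));
        }
    }

    // Called when the agent meets the same environment again.
    String chooseFromMemory(String environmentKey) {
        Remembered best = memory.get(environmentKey);
        return best != null ? best.behaviour() : null; // null -> nothing learned yet, explore
    }
}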
Empire Phoenix:
Allow shared and not-shared experience.
You can do that easily: just create two MotionAI Managers. Each manager has its own Learn Central Unit.
-> e.g. in a strategy game it is good to have a common base of knowledge shared by all AI players; however, depending on the map and the players' playstyle it might be very interesting to allow each AI to have its own experiences. So you might end up with an aggressive bot, a camper bot, etc.: whatever worked for that AI a few times.
For not-shared experience:
you can just add a FactorInteger "player". It will split the environments between AI players.
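Roughly like this (only the idea of a "player" FactorInteger comes from the post; the classes here are stand-ins so the sketch compiles on its own). The alternative of two separate MotionAI Managers simply means two completely independent Learn Central Units, so nothing is shared at all:

// Hypothetical sketch: splitting learned experience per AI player.
class PerPlayerExperienceSketch {

    // Stand-in for the real Environment class.
    interface Env {
        void setFactor(String name, Object value);
    }

    // Called whenever an agent's environment is built or updated.
    void tagEnvironment(Env environment, int playerId) {
        // Because the player id is part of the environment, two AI players will
        // never match (or overwrite) each other's remembered behaviours.
        environment.setFactor("player", playerId);
    }
}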
But about shared factors...
It is a great idea.
Maybe Factors could be set to "global" or "local".
I actually don't know yet how sharing could best be done, but I will need to find a good solution.
-> In a shooter/stealth game, having AIs that do not share all knowledge is essential. If you kill and dispose of the one that saw you before he could call for help, the rest should not know about it.
MotionAI is only a wrapper/universal system.
IMO the fastest solution for that would be a user-programmed "call for help" that modifies the environment factor of nearby agents (for example a FactorReference named "enemy"). Or maybe I will add something to the system to make this easier.
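A rough sketch of such a user-programmed "call for help" (names invented; only FactorReference and the "enemy" factor name come from the post). When one agent spots the player, it pushes a reference to the enemy into the environments of all agents within some radius:

// Hypothetical "call for help" sketch (all class and method names are invented).
import com.jme3.math.Vector3f;
import java.util.List;

class CallForHelpSketch {

    // Stand-in for an agent together with its environment.
    interface AgentEnv {
        Vector3f position();
        void setFactor(String name, Object value); // e.g. a FactorReference
    }

    // One agent spotted the enemy: tell every agent within 'radius' where it is.
    void callForHelp(AgentEnv caller, List<AgentEnv> allAgents,
                     Object enemyNode, float radius) {
        for (AgentEnv other : allAgents) {
            if (other != caller
                    && other.position().distance(caller.position()) <= radius) {
                other.setFactor("enemy", enemyNode); // nearby agents now know about the enemy
            }
        }
    }
}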
About automatic sharing between agents that are "close": the system is universal, and an agent can even be a packing machine (I mean, an agent does not need to know its own location), so the user needs to implement that, not the system.
androlo:
It seems to have a lot of content already. It also appears to be a good place to put general AI stuff, even if you are not using the most advanced components; the UI seems clean, etc. Hope it keeps going
Yeah, exactly.
And yes, the UI is clean,
but IMO the tests are actually done very badly.
erlend_sh:
Yeah, very good links.
Most of the information found there is related to algorithms for pathfinding/cover/other algorithms, or just math. But there is some agent-related info.
Mostly I read PDFs about agents, so I know the theory, and then I put it into practice in Java.
mifth:
what a passion! :)
You can use the Alpha, of course. If you find any issues or just have good ideas, write here or send me an email.