[Bolidz] AI

I'm thinking about implementing AI these days, and reading a lot of Google links.



First there is path finding, with different accelerations, different speeds at different moments, boosts, etc.

This shouldn't be a problem, I'll probably use genetic algorithms.



But I want my bolides to be clever at using weapons, so here is the kind of AI I want:



-> If the front opponent's life level is low, fire a bullet, a rocket, or something else. But if the rear opponent is near too and my own life level is low, prefer slaloming, or try to do both (use a mine? use a shield?). But if the race is close to ending, prefer to use a boost and try to take first place.



What are the possible ways to achieve this?

A complex state machine? A neural network with a lot of training? Any other possibilities? I want to have a good AI, but I don't have much time.



Any suggestions, comments, or ideas are very much appreciated :slight_smile:



Kine


A simple approach like Insect AI (Nick Porcino) comes to mind… maybe it's worth reading a bit about it.

kine said:

I'm thinking about implementing AI these days, and reading a lot of Google links.


Okay, I will give my opinion on this.


First there is path finding, with different accelerations, different speeds at different moments, boosts, etc.
This shouldn't be a problem, I'll probably use genetic algorithms.


It would be interesting to see how a genetic algorithm would solve this.  GAs are a search and optimization technique that finds a solution to a problem by following survival of the fittest.  The GA would need one or more populations, where each individual in a population represents a candidate solution to the path problem you mentioned above.  By generating new individuals, you eventually come up with one that solves your path-finding problem.  Setting that up well is going to be very difficult, though you may have an approach I'm not familiar with. :)
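Just to make that concrete, a bare-bones GA loop for this might look something like the sketch below. Everything in it is made up for illustration: the racing line is encoded as a list of steering offsets, and the fitness function is a placeholder where a real lap simulation would go.

```python
# Minimal GA sketch (illustrative only): each individual is a list of
# steering offsets sampled along the track. evaluate() is a stand-in
# fitness function; a real one would simulate a lap and return its time.
import random

TRACK_POINTS = 20
POP_SIZE = 30
GENERATIONS = 50

def random_individual():
    return [random.uniform(-1.0, 1.0) for _ in range(TRACK_POINTS)]

def evaluate(individual):
    # Placeholder fitness: prefer smooth steering (small changes between points).
    smoothness = sum(abs(a - b) for a, b in zip(individual, individual[1:]))
    return -smoothness  # higher is better

def crossover(a, b):
    cut = random.randint(1, TRACK_POINTS - 1)
    return a[:cut] + b[cut:]

def mutate(individual, rate=0.1):
    return [g + random.gauss(0, 0.2) if random.random() < rate else g
            for g in individual]

population = [random_individual() for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=evaluate, reverse=True)
    survivors = population[:POP_SIZE // 2]          # survival of the fittest
    children = [mutate(crossover(random.choice(survivors),
                                 random.choice(survivors)))
                for _ in range(POP_SIZE - len(survivors))]
    population = survivors + children

best = max(population, key=evaluate)
print("best fitness:", evaluate(best))
```

The hard part is not the loop above but writing a fitness function that actually captures "a good lap with boosts and varying speeds", which is where most of the difficulty I mentioned comes from.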

Instead, try a Google search on planning algorithms or real-time path planning.  The type of problem you are describing falls into the category of kinodynamic planning, which is motion planning where velocities and accelerations are involved.  I don't know a lot about it in detail, but kinodynamic planning is a very difficult problem.

Another approach, which I think is easier, is steering behaviors.  Google "Craig Reynolds Steering Behaviors".
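To give an idea of how lightweight these are, here is a minimal "seek" behavior in the spirit of Reynolds' work: a toy point-mass vehicle chasing a waypoint. The class, constants, and numbers are invented for illustration.

```python
# Minimal "seek" steering sketch: desired velocity points at the target,
# and the steering force is (desired - current velocity), clamped.
import math

class Vehicle:
    def __init__(self, x, y, max_speed=10.0, max_force=2.0):
        self.pos = [x, y]
        self.vel = [0.0, 0.0]
        self.max_speed = max_speed
        self.max_force = max_force

    def seek(self, target):
        dx, dy = target[0] - self.pos[0], target[1] - self.pos[1]
        dist = math.hypot(dx, dy) or 1e-6
        desired = [dx / dist * self.max_speed, dy / dist * self.max_speed]
        steer = [desired[0] - self.vel[0], desired[1] - self.vel[1]]
        mag = math.hypot(*steer)
        if mag > self.max_force:
            steer = [s / mag * self.max_force for s in steer]
        return steer

    def update(self, target, dt=0.1):
        force = self.seek(target)
        self.vel = [v + f * dt for v, f in zip(self.vel, force)]
        speed = math.hypot(*self.vel)
        if speed > self.max_speed:
            self.vel = [v / speed * self.max_speed for v in self.vel]
        self.pos = [p + v * dt for p, v in zip(self.pos, self.vel)]

car = Vehicle(0, 0)
for _ in range(100):
    car.update(target=(50, 30))
print("final position:", car.pos)
```

You would then layer other behaviors (obstacle avoidance, path following, separation from other cars) on top and blend their steering forces.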


But I want my bolides to be clever at using weapons, so here is the kind of AI I want:

-> If the front opponent's life level is low, fire a bullet, a rocket, or something else. But if the rear opponent is near too and my own life level is low, prefer slaloming, or try to do both (use a mine? use a shield?). But if the race is close to ending, prefer to use a boost and try to take first place.

What are the possible ways to achieve this?
A complex state machine? A neural network with a lot of training? Any other possibilities? I want to have a good AI, but I don't have much time.


A state machine is worth looking into.  Neural networks are good at the locomotion level, not at the decision-making level.  I also suggest you look into behavior trees: go to http://www.aigamedev.net and do a search on Behavior Trees.
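As a rough sketch of what that decision layer could look like for the rules you described (state names and thresholds are invented for illustration; the state is just re-evaluated every frame here, and a fuller FSM would also restrict which transitions are legal, while a behavior tree would arrange the same checks as selector/sequence nodes):

```python
# Simple state selection for the weapon/boost decisions from the question.
def next_state(car):
    if car["race_progress"] > 0.9:
        return "FINAL_SPRINT"                    # race nearly over: go for position
    if car["my_health"] < 0.3 and car["rear_opponent_near"]:
        return "EVADE"                           # weak and threatened from behind
    if car["front_opponent_health"] < 0.3:
        return "ATTACK"                          # leader is weak: fire something
    return "RACE"                                # default: just drive

ACTIONS = {
    "FINAL_SPRINT": "use boost",
    "EVADE":        "slalom, drop a mine or raise a shield",
    "ATTACK":       "fire a rocket at the car in front",
    "RACE":         "follow the racing line",
}

situation = {
    "race_progress": 0.5,
    "my_health": 0.2,
    "rear_opponent_near": True,
    "front_opponent_health": 0.8,
}
state = next_state(situation)
print(state, "->", ACTIONS[state])               # EVADE -> slalom, ...
```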

Just something for you to think about.

Jeff

Definitely go to aigamedev. I'd look up hierarchical finite state machines (HFSMs). They seem to be fairly prevalent in modern games, and once the framework is in place, the behaviors created with it are reusable and modular.



As far as NNs go… this may not always be true, but they aren't your best bet for game AI. Creating complex behaviors like the ones you mentioned would be seriously difficult. Halo uses HFSMs: the developers created a different set of behaviors for each enemy and used slightly modified versions for the more difficult Elites. Training separate NNs to do something like that would be quite an undertaking.
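For what it's worth, the "hierarchical" part is mostly about states owning sub-states. A toy sketch (all names, conditions, and thresholds invented) of how that nesting gives you the reusable, modular behaviors mentioned above:

```python
# Rough HFSM sketch: each state owns its own sub-machine, so the combat
# logic can be reused or slightly tweaked per opponent type.
class State:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []           # list of (condition, sub-state)

    def select(self, situation):
        # Returns the path of active states, e.g. ['Root', 'Combat', 'Evade'].
        for condition, child in self.children:
            if condition(situation):
                return [self.name] + child.select(situation)
        return [self.name]

combat = State("Combat", [
    (lambda s: s["my_health"] < 0.3, State("Evade")),
    (lambda s: True,                 State("Attack")),
])
root = State("Root", [
    (lambda s: s["enemy_in_range"], combat),
    (lambda s: True,                State("Race")),
])

print(root.select({"enemy_in_range": True, "my_health": 0.2}))
# -> ['Root', 'Combat', 'Evade']
```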

I think simple steering will work here: use a few ray sensors on the sides of the car, and when one activates, just steer in the opposite direction. Use a larger weight if the sensor is at a higher angle or the object is close. For weapons, you could use neural networks and genetic algorithms, but that would be overkill for a simple game like this. A few if statements can go a long way.
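Something along these lines, roughly; the sensor angles, ranges, and the cast_ray() stand-in are all placeholders, just to show how the weighting and the "few if statements" could fit together:

```python
# Sketch of the ray-sensor idea: a few whisker angles, with steering pushed
# away from whichever side reports the closest obstacle.
SENSOR_ANGLES = [-60, -30, 0, 30, 60]   # degrees relative to car heading

def cast_ray(car, angle):
    # Stand-in: a real implementation would ray-cast into the physics world
    # and return the distance to the first hit, or None.
    return car["sensor_hits"].get(angle)

def steering_input(car, max_range=20.0):
    steer = 0.0
    for angle in SENSOR_ANGLES:
        hit = cast_ray(car, angle)
        if hit is None or hit > max_range:
            continue
        # Closer obstacles and wider angles both push harder the other way.
        weight = (1.0 - hit / max_range) * (abs(angle) / 60.0 + 0.5)
        steer += -weight if angle > 0 else weight if angle < 0 else 0.0
    return max(-1.0, min(1.0, steer))

def weapon_choice(car):
    # The "few if statements" for the weapon logic.
    if car["front_opponent_health"] < 0.3 and car["has_rocket"]:
        return "fire_rocket"
    if car["rear_opponent_near"] and car["has_mine"]:
        return "drop_mine"
    return None

car = {
    "sensor_hits": {30: 5.0},            # obstacle close on the right
    "front_opponent_health": 0.2,
    "has_rocket": True,
    "rear_opponent_near": False,
    "has_mine": True,
}
print(steering_input(car), weapon_choice(car))   # -0.75 fire_rocket
```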

Back in my Robocode days, I implemented a lot of strategies and simply tested which one worked best under which condition. A condition was a vector (n dimensions), so I could calculate the "distance" to the "current vector", meaning the distance between two situations, and choose the learned reaction for the vector nearest to the current one.

Once I had a lot of data, I could try to find similarities between the vectors leading to the same best reaction, and so reduce the data and thus the CPU load.



This isn't a real NN, but it does work if the situation can be described simply and has only a few clear input parameters.
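A stripped-down sketch of that lookup (the feature vectors and reactions below are invented, just to show the nearest-situation idea):

```python
# Nearest-situation lookup: store (situation vector, best reaction) pairs
# learned offline, then pick the reaction whose stored situation is closest
# to the current one.
import math

learned = [
    ((0.9, 0.1, 0.0), "fire_rocket"),    # front enemy weak, I'm healthy
    ((0.2, 0.8, 1.0), "slalom"),         # I'm weak, enemy right behind
    ((0.5, 0.5, 0.0), "boost"),          # nothing urgent, push for position
]

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def choose_reaction(current):
    _, reaction = min(((distance(vec, current), r) for vec, r in learned),
                      key=lambda pair: pair[0])
    return reaction

print(choose_reaction((0.8, 0.3, 0.1)))   # -> fire_rocket
```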