Hi guillaumevds,
This will not be easy unless you have a solid multithreading approach first. The keywords are “interactive” and “reactive”.
The abstract picture first:
By interactive, you want your game (graphics and sound) to keep displaying while your character AI runs its calculations and returns results later (pathfinding, strategy). Think of the AI computing units as sending loads of data into the graphics and sound pipelines, like playing a movie in your media player.
Now let’s say you organize your AI into “agents” and “units of work” so that you can compute their “costs”. There is also a time “budget” that you allow the AI framework to spend each frame, and across a span of 30 frames. The problem then becomes: “how do I control how much data I send to the graphics pipeline, balancing the time spent computing the data against the time spent playing it?”
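To make the budget idea concrete, here is a minimal sketch of per-frame AI budgeting in plain Java. Everything here is hypothetical (names like `UnitOfWork`, `FRAME_BUDGET_MS`, and the cost estimates are made up for illustration, not from any particular framework):

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Sketch: run queued AI work each frame until a time budget is spent.
// Real budgets come from profiling; the numbers here are placeholders.
public class AiScheduler {
    interface UnitOfWork {
        double estimatedCostMs(); // rough cost estimate, refined by profiling
        void run();
    }

    static final double FRAME_BUDGET_MS = 2.0; // AI's slice of a 16 ms frame

    private final Queue<UnitOfWork> pending = new ArrayDeque<>();

    void submit(UnitOfWork w) { pending.add(w); }

    // Execute work while it still fits in the budget; leftovers wait
    // for the next frame, keeping graphics and sound responsive.
    int runFrame() {
        double spent = 0.0;
        int executed = 0;
        while (!pending.isEmpty()
                && spent + pending.peek().estimatedCostMs() <= FRAME_BUDGET_MS) {
            UnitOfWork w = pending.poll();
            w.run();
            spent += w.estimatedCostMs();
            executed++;
        }
        return executed;
    }
}
```

With a 2 ms budget and units costing roughly 1 ms each, only a couple of units run per frame and the rest naturally spill into later frames; that spill is exactly the “balance” you tune with profiling.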
At this point, balancing requires profiling (a lot of profiling and tweaking, of course), but in general you should count on a framework to make your life easier. I really recommend taking a look at RxJava and the way it implements its pipeline controllers.
In my framework, I used RxJava and structured my AI computing costs into “agents”/“actors”, “units of work”, “layers”, and “data streams”/“data packs”. In general, those concepts let me wrap my AI algorithms in any order and with any parallelism mechanism I wish. That’s a big win.
[Edit after re-reading] The sentences above mean: I can send data between actors and between layers, push it all into a pipe, and guarantee that the observers receive the signals after some ticks. The streaming continues in pipelines that are transparent to the developer, who should think carefully about any “static overall order” and ask whether there is a reason to use one at all. In my experience after restructuring, in almost every game there is no point in a static order; independent streams work better. So the AI doesn’t have to wait for graphics, graphics don’t have to wait for the AI, the main character doesn’t have to wait for the enemy, and the enemy doesn’t have to wait either. They are independent agents and just work with “signals”!
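To show the “independent agents + signals” idea without pulling in a dependency, here is a toy signal bus in plain Java. In RxJava, Subjects/Observables play this role for real, adding schedulers, operators, and backpressure; every name below (`SignalBus`, topic strings, etc.) is made up for illustration:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Toy observer-pattern bus: agents subscribe to topics and react to
// signals; no agent ever waits on another agent's schedule.
public class SignalBus {
    private final Map<String, List<Consumer<Object>>> listeners = new HashMap<>();

    public void subscribe(String topic, Consumer<Object> listener) {
        listeners.computeIfAbsent(topic, k -> new ArrayList<>()).add(listener);
    }

    // Emitting just notifies whoever is listening; topics with no
    // subscribers are silently dropped.
    public void emit(String topic, Object payload) {
        for (Consumer<Object> l : listeners.getOrDefault(topic, List.of())) {
            l.accept(payload);
        }
    }
}
```

The AI, graphics, and sound systems would each subscribe to the topics they care about, so removing one agent never breaks the others; that decoupling is what makes a static overall order unnecessary.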
To cooperate with the AI in level design, our level designers place “helpers” around the map. A helper broadcasts useful hints to the agents near that point. Navmeshes, obstacles, and tactical points are this kind of helper, and they send signals all the time. In an FPS I wrote, for example, the budget really depends on where my main character is standing. This kind of balancing, along with level-of-detail concerns, leads you to some kind of “filter”, and thankfully RxJava also has a “filter” concept to help you decrease the load on a stream.
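Here is a sketch of that filtering idea: helpers broadcast hints constantly, but an agent only pays attention to helpers within some radius of the player. In RxJava this would be an `Observable.filter(...)` stage in the pipeline; below it is a plain Java stream filter with hypothetical names (`HelperSignal`, `RADIUS`) invented for the example:

```java
import java.util.List;
import java.util.stream.Collectors;

// Sketch: drop helper signals that are too far away to matter,
// shrinking the stream before any expensive AI work runs on it.
public class HelperFilter {
    record HelperSignal(String hint, double x, double y) {}

    static final double RADIUS = 10.0; // hypothetical attention radius

    static List<HelperSignal> nearby(List<HelperSignal> all, double px, double py) {
        return all.stream()
                .filter(h -> Math.hypot(h.x() - px, h.y() - py) <= RADIUS)
                .collect(Collectors.toList());
    }
}
```

The same predicate dropped into an RxJava `filter` operator would keep distant tactical points from ever reaching the agent, which is exactly the level-of-detail trimming described above.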
As you can see, I’ve just been talking about data structures and general design (the observer pattern via RxJava in particular) applied to some cases of AI programming.
In a real game, there are a lot of places where general design WILL NOT WORK, and you’ll have to optimize deep down in the dark, evil low-level code (poor AI programmer)… But at first you can, and should, use RxJava if you’re still on your road of learning.
Cheers,