Live stream was another good one. Thanks to those who joined and participated.
If you want to watch the recording, I’ve added chapters to make it easier to skip around. The preview:
1:25 The model “gaefen”
3:00 Corn side-track
5:40 Game AI First Principles
17:20 Mythruna specific AI stuff
23:37 Other AI implementation-specific considerations
25:50 Previous AI demo
28:00 Prepping the “gaefen” for integration
32:30 Adding collision shapes
41:20 Movement thresholds
45:40 Loading it into the world
50:25 Will it work?
51:40 Fixing the bugs
57:50 Looking in totally the wrong place
1:09:30 Fixing the missing texture
1:13:35 It works!
1:15:30 Wrapping up
The discussion of near and far AI was pretty interesting to me: using active AI for nearby objects and passive AI (schedule tables!) for very far objects… it sounds complicated, but very interesting for an infinite world!
Thanks, it’s one of those open issues. Encounter tables are the obvious fallback but I believe that making the “encounter” logic more and more aware eventually leads to what I’m looking for.
I can picture it in my head, anyway. Remains to be seen if it will actually work.
The trickier problem seems to be how to manage the near and the next-to-near AI in a way that isn’t really invasive to all of the AI behavior logic. In the simple cases, it feels like “Joe the Blacksmith” can just indicate “I’d like to go to the tavern” and then the AI system would work out whether to break it up into real actions or just go through the motions. But I don’t know if that works in every case.
We’ll see… because I’m trying to build that in from the beginning this time so I can infect all of my APIs with it.
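As a rough illustration of that idea, here is a minimal sketch of "behavior LOD": the actor only declares an intent ("I'd like to go to the tavern"), and the AI system decides whether to break it into real actions or just go through the motions on a schedule. All of the names, the distance threshold, and the duration are hypothetical illustrations, not Mythruna's actual API.

```java
// Hypothetical behavior-LOD sketch: one goal, two fidelities.
interface Goal {
    String describe();
    long abstractDurationMs(); // how long the goal "takes" when resolved off-screen
}

class GoToTavern implements Goal {
    public String describe() { return "go to the tavern"; }
    public long abstractDurationMs() { return 5 * 60_000; } // assume a 5-minute walk
}

class Actor {
    final String name;
    double distanceToPlayer;
    String state = "idle";
    long busyUntil = 0; // only used by the far/abstract path

    Actor( String name, double distanceToPlayer ) {
        this.name = name;
        this.distanceToPlayer = distanceToPlayer;
    }
}

class AiSystem {
    static final double NEAR_RADIUS = 64.0; // illustrative threshold, in blocks

    // The actor never knows which path was taken; the system picks the fidelity.
    void submit( Actor actor, Goal goal, long now ) {
        if( actor.distanceToPlayer <= NEAR_RADIUS ) {
            // Near: decompose into real actions (pathfind, walk, open doors, ...)
            actor.state = "simulating: " + goal.describe();
        } else {
            // Far: just mark the actor busy until the goal would have completed.
            actor.state = "abstract: " + goal.describe();
            actor.busyUntil = now + goal.abstractDurationMs();
        }
    }
}

public class BehaviorLodDemo {
    public static void main( String[] args ) {
        AiSystem ai = new AiSystem();
        Actor joe = new Actor("Joe the Blacksmith", 500.0); // far from the player
        ai.submit(joe, new GoToTavern(), 0);
        System.out.println(joe.name + " -> " + joe.state);
    }
}
```

The point of the sketch is that the decision lives in one place (`AiSystem.submit()`), so the individual behavior logic stays uninvasive: "Joe" expresses intent the same way whether he is near or far.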