Pathfinding control development considerations

About the pathfinding control: let's say, for example, that the pathfinding algorithm produces 3 points.

I thought about letting the model follow the path from the first to the second point, adding a check in the update method to see whether the second point has been reached, then changing the direction of the movement using the characterControl methods, and letting the path iterator move on to the next point until the final point is reached.

The thing I don't like in terms of OO programming is calling a Control inside another Control.
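For what it's worth, here is a minimal sketch of that waypoint-following idea, assuming a jME3 CharacterControl is attached to the same spatial; the class and field names are made up for illustration, not code from this thread.

```java
import com.jme3.bullet.control.CharacterControl;
import com.jme3.math.Vector3f;
import com.jme3.renderer.RenderManager;
import com.jme3.renderer.ViewPort;
import com.jme3.scene.control.AbstractControl;

import java.util.List;

/**
 * Hypothetical waypoint-following control: walks the spatial from point to
 * point and advances the index when each waypoint is reached.
 */
public class WaypointFollowControl extends AbstractControl {

    private final List<Vector3f> waypoints; // e.g. the 3 points from the pathfinder
    private final float step;               // CharacterControl applies this per physics tick, so keep it small (e.g. 0.1f)
    private final float reachRadius = 0.5f; // how close counts as "reached"
    private int current = 0;

    public WaypointFollowControl(List<Vector3f> waypoints, float step) {
        this.waypoints = waypoints;
        this.step = step;
    }

    @Override
    protected void controlUpdate(float tpf) {
        if (current >= waypoints.size()) {
            return; // final point reached
        }
        CharacterControl character = spatial.getControl(CharacterControl.class);
        if (character == null) {
            return; // no character control attached to this spatial
        }
        Vector3f toTarget = waypoints.get(current).subtract(spatial.getWorldTranslation());
        toTarget.y = 0; // ignore height differences for the "reached" check

        if (toTarget.length() < reachRadius) {
            current++; // advance the path iterator to the next point
            if (current >= waypoints.size()) {
                character.setWalkDirection(Vector3f.ZERO); // stop at the end
                return;
            }
            toTarget = waypoints.get(current).subtract(spatial.getWorldTranslation());
            toTarget.y = 0;
        }
        Vector3f dir = toTarget.normalizeLocal();
        character.setWalkDirection(dir.mult(step));
        character.setViewDirection(dir);
    }

    @Override
    protected void controlRender(RenderManager rm, ViewPort vp) {
        // nothing to render
    }
}
```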

At each point you will want to check if you can see farther down the path without running into anything. Without that, it will be somewhat jagged and not the most optimal path.
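One way to do that look-ahead check, sketched with a Bullet ray test; this assumes a PhysicsSpace is available and is only an illustration, not the nav mesh plugin's own smoothing code.

```java
import com.jme3.bullet.PhysicsSpace;
import com.jme3.math.Vector3f;

import java.util.List;

public class PathLookahead {

    /**
     * Returns the index of the farthest waypoint that is directly visible from
     * 'from', so the intermediate waypoints can be skipped. Simplified: a real
     * version would filter out the ground and the character's own collision
     * shape from the ray test results.
     */
    public static int farthestVisible(PhysicsSpace space, Vector3f from,
                                      List<Vector3f> waypoints, int current) {
        for (int i = waypoints.size() - 1; i > current; i--) {
            if (space.rayTest(from, waypoints.get(i)).isEmpty()) {
                return i; // nothing blocks the straight line to waypoint i
            }
        }
        return current; // no shortcut found, keep heading to the current waypoint
    }
}
```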

A big issue with the nav mesh pathfinder is that it tightly couples your code with it and stores its A* state in the nodes, preventing multiple pathfinders from running at once in background threads. Hence my resistance to submit the code into any repos.
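To illustrate the coupling point: if the per-search bookkeeping lived in its own object instead of in the shared nodes, each background thread could own its own search. A rough sketch, with placeholder types rather than the actual nav mesh classes:

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Illustrative only: per-search A* bookkeeping kept outside the nodes.
 * 'N' stands in for whatever node type the nav mesh uses.
 */
public class SearchState<N> {

    private final Map<N, Float> gCost = new HashMap<N, Float>();
    private final Map<N, N> parent = new HashMap<N, N>();

    public void record(N node, float cost, N from) {
        gCost.put(node, cost);
        parent.put(node, from);
    }

    public Float costOf(N node) {
        return gCost.get(node);
    }

    public N parentOf(N node) {
        return parent.get(node);
    }
}

// Each pathfinding thread owns its own SearchState, so concurrent searches
// don't trample each other the way they do when the state lives in shared nodes.
```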



What I did with it, though, was not have it move my units; they moved themselves each frame and snapped themselves to the terrain. This was done in a control.
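A sketch of that kind of self-moving, terrain-snapping control, assuming a jME3 TerrainQuad and a spatial attached directly to the root node; the names are illustrative.

```java
import com.jme3.math.Vector2f;
import com.jme3.math.Vector3f;
import com.jme3.renderer.RenderManager;
import com.jme3.renderer.ViewPort;
import com.jme3.scene.control.AbstractControl;
import com.jme3.terrain.geomipmap.TerrainQuad;

/** Hypothetical control that keeps its spatial glued to the terrain surface. */
public class TerrainSnapControl extends AbstractControl {

    private final TerrainQuad terrain;

    public TerrainSnapControl(TerrainQuad terrain) {
        this.terrain = terrain;
    }

    @Override
    protected void controlUpdate(float tpf) {
        // Assumes the spatial is attached directly to the root node, so its
        // local translation is also its world position.
        Vector3f pos = spatial.getLocalTranslation();
        float height = terrain.getHeight(new Vector2f(pos.x, pos.z));
        spatial.setLocalTranslation(pos.x, height, pos.z);
    }

    @Override
    protected void controlRender(RenderManager rm, ViewPort vp) { }
}
```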

Controls can definitely access other controls; they are where your logic lives.

myMovementControl.getSpatial().getControl(MyPathfinderControl.class).doSomething();
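Spelled out a little more, a control can cache a sibling control when it is attached and call it every frame; both class names here are hypothetical, and this assumes the pathfinder control is added to the spatial first.

```java
import com.jme3.renderer.RenderManager;
import com.jme3.renderer.ViewPort;
import com.jme3.scene.Spatial;
import com.jme3.scene.control.AbstractControl;

/** Hypothetical sibling control exposing the method we want to call. */
class MyPathfinderControl extends AbstractControl {
    public void doSomething() { /* pathfinding work would go here */ }
    @Override protected void controlUpdate(float tpf) { }
    @Override protected void controlRender(RenderManager rm, ViewPort vp) { }
}

/** Looks up the sibling control once and calls it every frame. */
public class MyMovementControl extends AbstractControl {

    private MyPathfinderControl pathfinder;

    @Override
    public void setSpatial(Spatial spatial) {
        super.setSpatial(spatial);
        // Cache the sibling control instead of looking it up every frame.
        pathfinder = (spatial != null)
                ? spatial.getControl(MyPathfinderControl.class)
                : null;
    }

    @Override
    protected void controlUpdate(float tpf) {
        if (pathfinder != null) {
            pathfinder.doSomething();
        }
    }

    @Override
    protected void controlRender(RenderManager rm, ViewPort vp) { }
}
```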

Hmm, I will study the whole thing further, because I don't really want to create a one-shot control.

I would rather create a dynamic pathfinding system that can switch among a few different states, and let the entities' movement decisions communicate with each other at the thread level, so as to create coordinated movements among the entities.

For movement between entities, look at the steering library in the jmonkeyplatform-contributions/AI project.

Yes, it will be my next step.

Are the contributions in the trunk or in a branch?

https://jmonkeyplatform-contributions.googlecode.com/svn/trunk

Thank you

Ok, I took a brief look at the steering classes.

These are algorithms used to implement a kind of emergent behaviour that arises from decentralized intelligence. What I would rather implement in the long term is a centralized intelligence for every agent in the system, so that in the future I can connect a backward- and forward-chaining inference module to every agent. Even though for now I am building a simple controller, my idea is to aim the future development in that direction. I thought about communication among the agents, but always keeping a separate knowledge base for every agent. That said, the idea of emergent behaviour is very interesting.

The steering is relatively easy to implement, and is actually very effective. That's why I suggested it. I'm using it with quite decent success.

There is some communication between the entities with steering, they need to know who their neighbours are, but beyond that it doesn’t care much about them. You can plug it into your centralized controller probably fairly easily.



You can always combine steering with a central system, or store your units in squads and manage the squads directly. This is useful for moving them all towards a common location without them all wanting to sit on the same spot. But I think that to have realistic movement you need both steering and squads.
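One simple way to manage a squad centrally while steering handles the local avoidance: give each member its own slot around the squad's goal so they don't all converge on one point. A sketch with made-up types:

```java
import com.jme3.math.FastMath;
import com.jme3.math.Vector3f;

import java.util.List;

/** Hypothetical squad helper: spreads the members' goals around a shared target. */
public class Squad {

    private final List<Vector3f> memberGoals; // one goal vector per squad member

    public Squad(List<Vector3f> memberGoals) {
        this.memberGoals = memberGoals;
    }

    /** Places each member's goal on a ring around the squad goal. */
    public void moveTo(Vector3f squadGoal, float spacing) {
        int n = memberGoals.size();
        for (int i = 0; i < n; i++) {
            float angle = FastMath.TWO_PI * i / n;
            Vector3f offset = new Vector3f(FastMath.cos(angle), 0f,
                                           FastMath.sin(angle)).multLocal(spacing);
            memberGoals.get(i).set(squadGoal.add(offset));
        }
    }
}
```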

Sure,

I was just thinking right now about integrating some kind of state machine to switch from path following to a fight state, and then going on to implement particular techniques like steering.
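A minimal sketch of that kind of state switch inside a control; the states and placeholder methods below are illustrative, not an existing API.

```java
import com.jme3.renderer.RenderManager;
import com.jme3.renderer.ViewPort;
import com.jme3.scene.control.AbstractControl;

/**
 * Minimal sketch of a state switch between path following and fighting.
 * The method bodies are placeholders; the real logic would live elsewhere.
 */
public class AgentStateControl extends AbstractControl {

    private enum State { PATH_FOLLOWING, FIGHTING }

    private State state = State.PATH_FOLLOWING;

    @Override
    protected void controlUpdate(float tpf) {
        switch (state) {
            case PATH_FOLLOWING:
                followPath(tpf);
                if (enemyInRange()) {
                    state = State.FIGHTING;       // enemy spotted: start fighting
                }
                break;
            case FIGHTING:
                fight(tpf);
                if (!enemyInRange()) {
                    state = State.PATH_FOLLOWING; // enemy gone: resume the path
                }
                break;
        }
    }

    private void followPath(float tpf) { /* drive the path-following control */ }

    private void fight(float tpf)      { /* attack logic */ }

    private boolean enemyInRange()     { return false; /* placeholder check */ }

    @Override
    protected void controlRender(RenderManager rm, ViewPort vp) { }
}
```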

Hello,



After your discussion, I decided to take a look at the library you mentioned (AI in JMonkey Contributions). In the middle of my tests I'm facing some problems that I can't figure out. For some reason, the vehicle sometimes goes through an obstacle, simply because it's picking the wrong closest obstacle. This happens when there are 2 obstacles in front of the vehicle, one behind the other, and they are close together. In such situations the logic breaks and it stops working. I'm trying to implement an obstacle avoidance behaviour, but I'm not succeeding at it.



Can someone take a look at this? http://dl.dropbox.com/u/3279456/Steering.7z - here is a link to the project. I removed the binaries as they are big (around 30 times the size of the whole project).

I think these algorithms are meant to be integrated with the navigation mesh; that's why you're encountering problems with obstacles.

Well, they are totally independent of the navigation meshes. Maybe if I added them to a physics space there would be no way for two objects to overlap, but I'm almost sure the problem lies somewhere else. Need some help :stuck_out_tongue:

The “Steering” code is separate from pathfinding entirely. It will walk through objects, especially in the test, if the target object is moving. How it works is it measures avoidance forces from all nearby objects and adds them together. For instance, if you are walking between two equally distant obstacles it will have cancelling forces and continue straight. Same when it is following something that is moving: it forces towards that object, but also forces away from an obstacle, sometimes cancelling each other out and colliding. Not perfect, but good for groups of moving items trying to avoid each other.
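Roughly the idea in simplified form (not the contrib library's actual code): each nearby obstacle contributes a repulsive force and the forces are simply summed, which is why two symmetric obstacles can cancel each other out.

```java
import com.jme3.math.Vector3f;

import java.util.List;

public class AvoidanceExample {

    /**
     * Sums a simple repulsive force from each nearby obstacle.
     * Two obstacles at equal distance on opposite sides cancel out,
     * which is the behaviour described above.
     */
    public static Vector3f avoidanceForce(Vector3f position,
                                          List<Vector3f> obstacles,
                                          float radius) {
        Vector3f total = new Vector3f();
        for (Vector3f obstacle : obstacles) {
            Vector3f away = position.subtract(obstacle);
            float distance = away.length();
            if (distance > 0 && distance < radius) {
                // Closer obstacles push harder; the direction is away from the obstacle.
                total.addLocal(away.normalizeLocal().multLocal(1f - distance / radius));
            }
        }
        return total;
    }
}
```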

@Sploreg said:
The "Steering" code is separate from pathfinding entirely. It will walk through objects, especially in the test, if the target object is moving. How it works is it measures avoidance forces from all nearby objects and adds them together. For instance, if you are walking between two equally distant obstacles it will have cancelling forces and continue straight. Same when it is following something that is moving: it forces towards that object, but also forces away from an obstacle, sometimes cancelling each other out and colliding. Not perfect, but good for groups of moving items trying to avoid each other.

Yes, I understand that, but what I'm trying to implement here is the obstacle avoidance behaviour, which should result in almost always successful avoidance of obstacles. I'm failing at that.

Can you explain a test case?

It is hard to visualize when the obstacle problem could happen, and whether the two obstacles are moving or standing still.

This is a solution to obstacle avoidance provided by the same guy who made the navigation mesh that is already available as a plugin in the jMonkeyEngine SDK:

http://digestingduck.blogspot.com/2011/02/very-temporary-obstacle-avoidance.html

His technique is the same as the steering code in the platform-contributions/AI package. He just goes one step further, grouping clusters of objects and taking the farthest one to adjust the steering's tangential force.

@shirkit, you have to combine navmesh pathfinding with steering. Fixed obstacles are avoided with the navmesh pathfinder, and movable obstacles are avoided with steering. What you can also do with the steering code is add some collision detection, so that if the entity actually bumps into something, it can sidestep it until it can get around. You can just add another Behaviour that does this when something is bumped, and remove the other Behaviours; then, when the path is clear again, add them back in.
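A rough sketch of that behaviour swap; the agent interface and method names below are placeholders, not the contrib library's real API.

```java
/** Placeholder interface standing in for whatever holds an entity's behaviours. */
interface SteeredAgent {
    void removeAllBehaviours();
    void addSideStepBehaviour();
    void addDefaultBehaviours();
}

/**
 * Hypothetical helper implementing the idea above: when the entity bumps into
 * something, swap its normal steering behaviours for a side-step behaviour,
 * then restore them once the path is clear.
 */
public class BumpHandler {

    private boolean sideStepping = false;

    public void onBump(SteeredAgent agent) {
        if (!sideStepping) {
            agent.removeAllBehaviours();  // drop the normal steering set
            agent.addSideStepBehaviour(); // slide along the blocking object
            sideStepping = true;
        }
    }

    public void onPathClear(SteeredAgent agent) {
        if (sideStepping) {
            agent.removeAllBehaviours();
            agent.addDefaultBehaviours(); // restore the usual behaviours
            sideStepping = false;
        }
    }
}
```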

I think the steering behaviour can also be applied to smooth the path of a character following the generated spline; sometimes there can be some overlap with meshes while walking.

Yeah, it definitely can. I have found it to be quite powerful, but it does miss some edge cases and special circumstances. I am okay with that, though, since my entities usually stay inside the world thanks to the nav mesh pathfinder. I use the steering mostly to avoid friendly entities that are moving together, to group them into a squad, and to avoid some movable enemy obstacles.