I have a question regarding the CollisionResult object.

I have a planet and I’m building a simplified mesh over it. What I did was create a sphere with a slightly bigger radius, and then cast rays from the center of the planet towards the points of the new mesh. If the ray collides with the mesh first, it means that point needs to be translated to the collision point with the planet (so in this case it means there is a mountain at that point).

To give you the idea, this is the code:

Vector3f origin = planet.getPlanet().getLocalTranslation();
Ray r = new Ray(origin, v.subtract(origin).normalize());
CollisionResults res = new CollisionResults();
node.collideWith(r, res); // node contains the planet geometry and the sphere geometry
// if the ray hits the simplified mesh first, the vertex is under terrain
if (res.size() > 0 && res.getClosestCollision().getGeometry().getName().equals("navMesh")) {
    float distance = res.getFarthestCollision().getDistance(); // this gives Infinity sometimes
    Vector3f realP = res.getFarthestCollision().getContactPoint();
    v.set(realP); // move the vertex onto the planet surface
}

What I don’t get is that when I check the distance of the farthest collision, it sometimes returns Infinity. How is that possible? How can the ray collide with the planet at Infinity?

I also want to point out that it should never be the case that the ray misses the planet, since it starts from the centre of it and goes outwards…

Can someone help me figure this out?
Thanks in advance!

Did you debug the two vectors you are using as the ray’s origin and direction? They might not be the values you expect them to be (e.g. using getLocalTranslation vs getWorldTranslation).

What is “slightly bigger” to you? You can’t go about and make actual planetary scales with bullet. 1 unit is one meter by default, 50 square kilometers is about the maximum area you can get reliable physics in. If you scale your weight and gravity (or time) values accordingly you can make 1 unit any actual “real-world” size though.

So if you make a solar system with physics (why? it would take ages to see any interesting “physical” movement on a video feed of the solar system - but anyway), scale it so that those 50 square kilometers encompass the whole solar system.

If you want a whole solar system where you can zoom in on a single human flying in space, you a) need a good GUI to find stuff in an overview and b) can’t do it in one coordinate system on a computer in any way. Look at the WorldOfInception test case for “infinite resolution” universes.

I’m using just a planet with no physics; it’s just an environment for a pursuit and evasion game with agents moving around.

The algorithm just misses a few vertices, and as @NemesisMate proposed, I checked them: apparently the problem is that sometimes the vertices given by the mesh’s position buffer are (Infinity, Infinity, -Infinity). How is that possible? I’m using the mesh from the Sphere class. I don’t see how they can be Infinity…
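This is roughly the check I ran, sketched in plain Java on a flat x,y,z float array (in the real code the positions come from the mesh’s position buffer, so the names here are just for illustration):

```java
import java.util.ArrayList;
import java.util.List;

public class BufferCheck {
    // Return the indices of vertices whose coordinates are not all finite.
    static List<Integer> findBadVertices(float[] positions) {
        List<Integer> bad = new ArrayList<>();
        for (int i = 0; i < positions.length / 3; i++) {
            float x = positions[3 * i], y = positions[3 * i + 1], z = positions[3 * i + 2];
            if (!Float.isFinite(x) || !Float.isFinite(y) || !Float.isFinite(z)) {
                bad.add(i);
            }
        }
        return bad;
    }

    public static void main(String[] args) {
        float[] positions = {
            0f, 1f, 0f,                                                                // vertex 0: fine
            Float.POSITIVE_INFINITY, Float.POSITIVE_INFINITY, Float.NEGATIVE_INFINITY  // vertex 1: broken
        };
        System.out.println(findBadVertices(positions)); // prints [1]
    }
}
```

And the broken vertices always show up in the same spots, which made me suspect the mesh generation rather than the collision code.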

Could be degenerate splits near the poles. I’ve never looked much into JME’s Sphere mesh as I find that sort of sphere mesh generally useless for anything but making a ball (and even then it uses a lot of triangles, but it is convenient).

For your purposes, you may be better off using a sphere made from a subdivided-then-projected cube.

Actually, I’m not sure what the point of using a sphere is in the first place exactly. Why intersect the sphere mesh when you could just check the radius of your existing point? I’m not sure how a mesh needs to be involved in this but I don’t fully understand what you are doing. Pictures maybe?
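To illustrate what I mean (a rough plain-Java sketch with made-up names; `surfaceDistance` would be the hit distance of the ray cast against the planet geometry alone, and the planet center is assumed to be at the origin):

```java
public class RadiusSnap {
    // If the planet surface is farther from the center than the vertex
    // (which starts at a known radius), the vertex is buried under terrain,
    // so scale it radially out to the surface.
    static float[] snapToSurface(float[] vertex, float vertexRadius, float surfaceDistance) {
        if (surfaceDistance > vertexRadius) {
            float scale = surfaceDistance / vertexRadius;
            return new float[]{vertex[0] * scale, vertex[1] * scale, vertex[2] * scale};
        }
        return vertex; // surface is below the vertex: leave it alone
    }

    public static void main(String[] args) {
        // vertex sits at radius 10; the mountain surface under it is at distance 12
        float[] snapped = snapToSurface(new float[]{0f, 0f, 10f}, 10f, 12f);
        System.out.println(snapped[2]); // prints 12.0
    }
}
```

That’s a plain distance comparison; no second sphere mesh has to be intersected at all.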

Here is when the mesh is updated (I know it’s not precise; for example, if a mountain is entirely inside a triangle it won’t be detected by the algorithm, but it gets better if I increase the number of vertices of the Sphere mesh).

But sometimes, even if a vertex is under the mountain, it does not get translated to the collision point (see below), and I think this is the case where the mesh vertex is Infinity.

(I’m sure that it stays under the mountain and is not connected on the side, because if I zoom in I can see it lying under the mountain.)

So you are trying to make the sphere mesh conform to your actual mesh. I see.

Where does your original mesh come from? Do you have an easy way to “unproject”? ie: if you know a location on the sphere, can you figure out where in your map data that is?

Either way, I think you will ultimately be better off with a subdivided cube-based sphere, as the points will be better distributed and you won’t get pinching at the poles.

The process of making a subdivided cube-based sphere also lends itself well to projecting into these sorts of shapes. Each time you subdivide one of the faces you must project the new vertexes out to the sphere. If you had some way of finding that location in your original map data then you could just use that value and save a bunch of heavy steps.
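To sketch just that projection step (plain Java, planet center assumed at the origin; the helper names are my own, not anything from JME):

```java
import java.util.Arrays;

public class CubeSphere {
    // Midpoint of an edge: each subdivision step creates these new vertexes.
    static float[] midpoint(float[] a, float[] b) {
        return new float[]{(a[0] + b[0]) / 2, (a[1] + b[1]) / 2, (a[2] + b[2]) / 2};
    }

    // Project a point radially onto a sphere of the given radius
    // centered at the origin: normalize, then scale.
    static float[] project(float[] p, float radius) {
        float len = (float) Math.sqrt(p[0] * p[0] + p[1] * p[1] + p[2] * p[2]);
        return new float[]{p[0] / len * radius, p[1] / len * radius, p[2] / len * radius};
    }

    public static void main(String[] args) {
        // new vertex from subdividing a cube edge, pushed out to radius 10
        float[] m = midpoint(new float[]{1, 1, 1}, new float[]{1, 1, -1});
        System.out.println(Arrays.toString(project(m, 10f)));
    }
}
```

The direction you project along is exactly the lookup key into your map data, so if you can sample the map at that direction you can set the vertex height directly instead of raycasting afterwards.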

(If you’d based your planet geometry on a subdivided cube-based sphere then you’d get this LOD for free, actually.)

We actually have a class for generating the mesh of the planet. But to be honest, I haven’t coded it myself, and since this program is for a school project I’m not sure I have time at this point to understand it and adapt it for my purposes.
But technically, yes, I have access to all the buffers of the planet mesh, if that is what you mean by map data (sorry, I’m still a newbie in this field), but I don’t see how this can be useful in this case.
I’d try using the subdivided cube-based sphere, but again I’m not sure I have time to implement it (or maybe there is a way to choose how to generate the sphere?)