[SOLVED] Point mesh FPS drop at a specific point

I’m having trouble figuring out why the FPS drops whenever a specific area of a large mesh is on screen.

Some background: I used to have 2 meshes with around 6 million vertices in total, but I realised I was positioning those vertices with the wrong calculations, so I have since switched to a different method of computing the Cartesian coordinates. Back then the FPS wasn’t great, and some areas in particular (ones with a high vertex density) obviously gave a lower FPS, which was understandable.

Cut to the present day - I am testing everything with one mesh of around 2.6 million vertices, and one area in particular (the centre coordinates at 0,0,0) massively drops the FPS for no apparent reason. I can have 50% of the vertices on screen right in front of my eyes and get a reasonable FPS, but as soon as I move the camera to look at the centre coordinates, where only 1%-5% of the vertices are located, the FPS drops.

Now I’ve changed the data files to have only 100k entries (100k vertices to be created) and noticed something I can’t seem to grasp. If, when creating the mesh, I declare that it will have 100,000 vertices, everything is fine. If I leave the original value (2.6 million) from testing the previous data, but still only draw 100 thousand vertices on screen, I get the same FPS drop at the centre coordinates.

I’ll upload a couple of pictures for you to better understand the issue I’m facing.

https://imgur.com/a/ngYxb

Bear in mind that the answer “you have too many vertices on screen” shouldn’t apply here, because I previously had 6 million vertices and it was fine.

This is how I’m creating the mesh:

Calling the constructor from an app state:
galaxyUniverse = new UniverseMesh(rootNode, assetManager, 100000);
If I replace the 100000 with, let’s say, 1 million but initialise only 100k vertices, the centre still causes lag; if I leave it at 100k and initialise 100k vertices, then everything is alright. I need at least 2.6 million vertices, like before.

Constructor itself:

private Node rNod;                  // scene node the geometry gets attached to
private AssetManager aMan;          // used to load the material
private Vector3f[] vertices;        // pre-allocated; unfilled entries stay null
private int vertexCount = 0;

public UniverseMesh(Node rootNode, AssetManager assetManager, int objectSize){
     rNod = rootNode;
     aMan = assetManager;
     vertices = new Vector3f[objectSize];   // room for objectSize vertices
}

Adding Vertices:

public void AddVertex(double cx, double cy, double cz, double redshift){
        // Scale each direction component by the redshift-derived distance.
        float x = (float) (getDistance(redshift) * cx);
        float y = (float) (getDistance(redshift) * cy);
        float z = (float) (getDistance(redshift) * cz);
        vertices[vertexCount] = new Vector3f(x, y, z);
        vertexCount++;
}

I understand I’m calling getDistance three times; I know. All of this is beta, so please don’t be harsh about stuff like that.

Actually creating the mesh after all the vertices have been set:

public void createMesh(ColorRGBA color){
        mesh = new Mesh();   // create the mesh before setting its buffers
        mesh.setMode(Mesh.Mode.Points);   // render each vertex as a single point
        mesh.setBuffer(Type.Position, 3, BufferUtils.createFloatBuffer(vertices));
        mesh.updateBound();   // recompute the bounding volume once the buffer is set
        mesh.setStatic();     // hint that the mesh data will not change
        geom = new Geometry("OurMesh", mesh);
        Material mat = new Material(aMan, "Common/MatDefs/Misc/Unshaded.j3md");
        mat.setColor("Color", color);
        mat.getAdditionalRenderState().setFaceCullMode(FaceCullMode.Off);
        geom.setMaterial(mat);
        rNod.attachChild(geom);
}

Just from the symptoms, I would say you have vertices piled on top of each other at 0,0,0.

Given that the buffer is allocated based on that max size or whatever, I’d also guess that beyond a certain index, those vertices are just left at 0,0,0… so eventually, the bigger your vertex buffer, the more vertices are sitting right there at 0,0,0.

If it were me, I would dump all of the vertex buffer’s values to a file and see how many 0,0,0 points there are.
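
For example, a quick count like this would confirm it (a minimal sketch, assuming access to the vertices array from the constructor above; jME’s BufferUtils writes null entries out as zeros):

int zeroed = 0;
for (Vector3f v : vertices) {
    // Slots never filled by AddVertex are null and become (0,0,0) in the buffer.
    if (v == null || v.equals(Vector3f.ZERO)) {
        zeroed++;
    }
}
System.out.println(zeroed + " of " + vertices.length + " points sit at (0,0,0)");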

Edit: actually, rereading your post… you’re essentially saying that. In one example, if I read it right, you have millions of vertices sitting right at 0,0,0… so obviously drawing when 0,0,0 is in view is going to take longer. The GPU doesn’t know that they are all the same and so has to do the work for each one.

Moral of the story: don’t create vertex buffers massively bigger than you need. (Also, break up your data into smaller chunks to avoid having to create massive buffers because you “might need them” later.)


I’m not saying I have millions of vertices sitting at 0,0,0, but your suggestion might solve the problem. Thanks, I’ll look into it tonight.

What is getDistance(redshift) being called on? I assumed it returned the Pythagorean distance between two positions (Vector3f), but it does not seem to be called on any vector, and it is given a scalar argument.

No, you’re right… not millions. Just 900,000… unless I read the above statement incorrectly. You’ve left out just enough information that I have to guess.

It’s used to calculate the distance to a galaxy using Hubble’s law:
d (distance to galaxy) = z (redshift) * c (speed of light) / H0 (Hubble constant)
I know the calculations won’t be astronomically precise, but it will do.
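
For reference, a minimal getDistance along those lines (the H0 value and the units are assumptions; with c in km/s and H0 in km/s per Mpc, the result comes out in megaparsecs):

private static final double C = 299792.458; // speed of light in km/s
private static final double H0 = 70.0;      // Hubble constant in km/s per Mpc (assumed value)

public double getDistance(double redshift){
    // Hubble's law: d = z * c / H0
    return redshift * C / H0;
}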

You’re right, I completely missed that, thank you.

Edit:
I completely missed the fact that the empty (never-filled) vectors are still added as the mesh’s vertices.

Note: ultimately you will be happiest creating your visualization as spatially organized chunks so that they can be properly culled. Figuring out the best grid size can require some experimentation, but then the math is pretty easy.

zoneKey.x = x / zoneSize
zoneKey.y = y / zoneSize
…etc.

Look up your vertex list in a Map or whatever.

When done, build meshes for all zones in the Map.
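
A minimal sketch of that flow, assuming a zoneSize picked by experiment and jME’s Vector3f as the map key (it implements equals/hashCode):

float zoneSize = 1000f; // illustrative; tune to your data
Map<Vector3f, List<Vector3f>> zones = new HashMap<>();
for (Vector3f v : vertices) {
    if (v == null) continue; // skip unfilled slots
    Vector3f key = new Vector3f(FastMath.floor(v.x / zoneSize),
                                FastMath.floor(v.y / zoneSize),
                                FastMath.floor(v.z / zoneSize));
    zones.computeIfAbsent(key, k -> new ArrayList<>()).add(v);
}
// Then build one Points-mode mesh per zone (as in createMesh) and attach each
// as its own Geometry, so the camera can frustum-cull whole zones at once.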

Edit: and the bonus is that you will easily spot a chunk with a crazy number of stars if you are dropping a lot of them in one place.

That’s interesting. I am curious how you have chosen to populate cx, cy, cz, and redshift for each vertex/star?

Edit: I’m guessing they are spherical coords across the unit sphere?

Thanks, I will look into that.

One more thing I wanted to ask, if anyone knows the answer: is it possible to change the material/colouring of a single point/vertex or a group of them? Right now I’m applying a single material to the whole mesh.

You can set the color buffer if you want to colour each vertex, or the texture coordinates if you want to change what part of a texture is used (for point sprites), etc. Essentially, create more buffers on the mesh for the shader to deal with.
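
For example, a per-vertex color buffer on the existing mesh could look like this (a sketch; VertexColor is the Unshaded material’s flag for reading it, if I recall the parameter name correctly):

ColorRGBA[] colors = new ColorRGBA[vertexCount];
for (int i = 0; i < vertexCount; i++) {
    colors[i] = ColorRGBA.White; // replace with a per-point colour, e.g. derived from redshift
}
mesh.setBuffer(Type.Color, 4, BufferUtils.createFloatBuffer(colors));
mat.setBoolean("VertexColor", true); // tell Unshaded.j3md to use the color buffer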


The cx, cy, cz and redshift values are taken from the Sloan Digital Sky Survey (SDSS) database. For some reason my conversion from right ascension and declination to Cartesian coordinates produced wrong results, so I went with multiplying the distance calculated from the redshift by cx, cy and cz individually.

Maybe use a gradient texture and change the texture coords of each vertex according to its redshift value? Although I do not know how that applies to Mode.Points.

I was thinking of giving points that are further from the camera a slight transparency, to convey depth a bit better, if that is even possible and wouldn’t make my processor boil.

Do that in the shader… else you will constantly have to reset the color buffers every time the camera moves. It’s pretty trivial to do this in a shader though.


This can be marked as solved now. Thanks pspeed for making me realise that 948,178 points were sitting within a radius of just 100 world units of the centre coordinates (0,0,0). Now I’ll have to go research a way to optimise this.


Unless you need to dynamically modify points, I’d suggest just removing duplicates before creating the mesh.
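
A minimal dedup sketch, assuming exact coordinate equality is good enough (jME’s Vector3f implements equals/hashCode, so a Set works):

Set<Vector3f> unique = new LinkedHashSet<>();
for (Vector3f v : vertices) {
    if (v != null) {
        unique.add(v); // identical coordinates collapse to a single entry
    }
}
vertices = unique.toArray(new Vector3f[0]);
vertexCount = vertices.length;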

In theory there shouldn’t be any duplicates anymore. Previously, a specific field in my data file had a value of 0 for some objects (~20k), and since I multiplied that value (0) by the other values to get the positions of the points, they all collapsed to the origin. I noticed that today while trying to solve the problem described in this topic. I’ll need to contact my university’s cosmology department to look further into the data and the calculations I do.
