Chunk data sent to the GPU

Hello jmonkeys,

believe it or not, this time I’m not facing a problem. I’m just curious what you think of my approach.

When meshing my chunks (blocky voxel stuff) I create 2 triangles per face, that’s 6 indices and 4 vertices (since 2 of the vertices are shared).
Quite a bit of the information I send to the GPU is redundant: I have to send the colored light and sunlight strength, as well as the blockID and the direction the face is pointing, per vertex, although they are the same for all four vertices of a face.
So now I’m thinking: instead of doing that, I could send a single point per face to the GPU, get rid of all the redundant information, replace it with a small amount of information about the width and height of the face (because of greedy meshing), and then span the face in the geometry shader.
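For illustration, here is a minimal sketch of how such a per-face format could be bit-packed into two ints. All field widths (blockID, face direction, sunlight, RGB light, and the greedy-meshing width/height) are assumptions and would need to match your actual value ranges; the geometry shader would unpack them with the same shifts and masks.

```java
// Hypothetical per-face packing: two ints instead of four full vertices.
// Field sizes are assumptions, not taken from the original post.
public class FacePacker {
    // int A: blockID (12 bits) | direction (3 bits) | sunlight (4 bits)
    public static int packA(int blockId, int dir, int sun) {
        return (blockId & 0xFFF) | ((dir & 0x7) << 12) | ((sun & 0xF) << 15);
    }
    // int B: red (4) | green (4) | blue (4) | width (5) | height (5)
    public static int packB(int r, int g, int b, int w, int h) {
        return (r & 0xF) | ((g & 0xF) << 4) | ((b & 0xF) << 8)
             | ((w & 0x1F) << 12) | ((h & 0x1F) << 17);
    }
    // Matching unpackers (the same bit logic would live in the shader).
    public static int blockId(int a) { return a & 0xFFF; }
    public static int dir(int a)     { return (a >> 12) & 0x7; }
    public static int sun(int a)     { return (a >> 15) & 0xF; }
    public static int width(int b)   { return (b >> 12) & 0x1F; }
    public static int height(int b)  { return (b >> 17) & 0x1F; }
}
```

With 5 bits each, width and height cover runs up to 31 blocks, which fits a 32-wide chunk if you store `size - 1`.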

Is it obviously a bad idea because geometry shaders are ‘heavyweight’?
I’m using a chunk size of 32x16x32 and a view distance of up to 1024 blocks.
I only mention these values because I’m afraid that the further the view distance, the more the geometry shader would hurt performance, if geometry shaders really are that bad.

Looking forward to your opinions and greetings from the shire,
samwise

So why the long view distance, and what type of project is this? To put it in perspective: the Minecraft max view distance is a 240 block radius. Even if you could compress block data to a single byte per block, you are still maxing out memory for the average computer / GPU out there, and that’s not including what you would need loaded in system memory if you have more than one player.
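To put rough numbers on that concern, here is a back-of-envelope sketch. The one-byte-per-block figure is from the post above; the 256-block world height is an assumption for illustration.

```java
public class MemoryEstimate {
    // Blocks kept loaded for a square region of side 2 * viewRadius
    // around the player, at a given world height.
    public static long loadedBytes(long viewRadius, long worldHeight,
                                   long bytesPerBlock) {
        long side = 2 * viewRadius;
        return side * side * worldHeight * bytesPerBlock;
    }

    public static void main(String[] args) {
        // 1024-block view radius, assumed 256-block height, 1 byte/block
        long bytes = loadedBytes(1024, 256, 1);
        System.out.println(bytes / (1024 * 1024) + " MiB"); // prints "1024 MiB"
    }
}
```

So even at one byte per block, a 1024-block radius with a 256-block height would mean a gibibyte of raw block data loaded, before any mesh or lighting data.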

They aren’t inherently bad, like the devil, but I would keep them as a last resort.
I assume your sunlight has some form of attenuation/shadowing based on the chunk matrix (like old (and maybe current?) Minecraft), is that correct?

Well, the game is round-based, and each round is set on a procedurally generated island with a diameter of 1024 blocks.
At any time you should be able to see the whole terrain when looking in the appropriate direction, without distance fog.
You know those games where you start in a plane and hop off? Right at the start you can see the whole map, and I need something similar.
And that actually works, even on my GeForce 840M, and I am by far not maxing out GPU memory.
I am not compressing anything in RAM, but I’m heavily compressing the data I send to the GPU, especially since I provide 3 LOD levels per chunk, which makes for even more data.

Now I was just wondering if it might be worth it to construct the quads in the geometry shader instead of passing redundant information to the GPU. (I’m not actually doing it for the RAM; I just thought that when I do it in a geometry shader, I can make decisions based on the camera position per frame. For example, I could simply not emit any vertices for faces that would otherwise be culled by backface culling.)
Probably it’s not worth it, as the GPU is probably good at such optimizations, but I was wondering if anyone had experience with or thoughts about it.
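For axis-aligned voxel faces, that per-face visibility decision is just one comparison; here is a sketch written as plain Java, though the same test would run per primitive in the geometry shader before emitting vertices. The method name and signature are made up for illustration.

```java
public class FaceCull {
    // Returns true if an axis-aligned face is back-facing as seen from
    // cameraPos, i.e. a candidate for not being emitted at all.
    // axis: 0 = x, 1 = y, 2 = z; positive: face normal points in +axis.
    public static boolean backFacing(float[] cameraPos, float[] faceOrigin,
                                     int axis, boolean positive) {
        // The face lies in the plane coordinate faceOrigin[axis];
        // it is visible only from the side its normal points toward.
        float d = cameraPos[axis] - faceOrigin[axis];
        return positive ? d <= 0 : d >= 0;
    }
}
```

This is equivalent to the usual dot(normal, faceCenter - cameraPos) test, reduced to a single component because voxel face normals are axis-aligned.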

I am not sure what you mean by chunk matrix.
My sunlight is just a direction and a color passed to the shader. Additionally, when meshing, I give each face information about how much sunlight reaches it; in the shader I multiply this strength with the color to get the final sun color at the face, then dot the sunDirection with the normal from the normal map, etc.
For colored light, I propagate it like Minecraft does, but in 3 different channels: one each for red, green and blue. The sum of all colored light at a face is also sent to the GPU to apply the appropriate ‘light’.
I’m sorry if I didn’t get what you mean :smiley:
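A minimal sketch of that per-channel flood fill, in the Minecraft style the post describes: the light level drops by 1 per block, and the fill is run once per color channel. The grid layout and method names here are assumptions, not the poster's actual code.

```java
import java.util.ArrayDeque;

public class LightPropagation {
    // Flood-fill one color channel from a light source at (x, y, z).
    // Runs only when blocks change, so it stays off the GPU.
    public static void propagate(int[][][] light, int x, int y, int z,
                                 int level) {
        ArrayDeque<int[]> queue = new ArrayDeque<>();
        light[x][y][z] = level;
        queue.add(new int[]{x, y, z});
        int[][] dirs = {{1,0,0},{-1,0,0},{0,1,0},{0,-1,0},{0,0,1},{0,0,-1}};
        while (!queue.isEmpty()) {
            int[] p = queue.poll();
            int next = light[p[0]][p[1]][p[2]] - 1; // fades by 1 per block
            if (next <= 0) continue;
            for (int[] d : dirs) {
                int nx = p[0] + d[0], ny = p[1] + d[1], nz = p[2] + d[2];
                if (nx < 0 || ny < 0 || nz < 0 || nx >= light.length
                        || ny >= light[0].length
                        || nz >= light[0][0].length) continue;
                if (light[nx][ny][nz] < next) { // only brighten, never dim
                    light[nx][ny][nz] = next;
                    queue.add(new int[]{nx, ny, nz});
                }
            }
        }
    }
}
```

A real implementation would also skip opaque blocks and handle removal of light sources, which needs a separate "un-propagation" pass.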

I’m assuming that the island isn’t very deep then? I.e. the max height isn’t 1024? To be honest, I wouldn’t spend time optimizing until it is an actual issue. That being said, I’ve run into a TON of people trying to run my game on VERY slow computers, so I would find a machine that fits your minimum requirements, see how it holds up, and profile it.

Is there a reason to encode the light data into a vertex buffer instead of computing it dynamically in the fragment shader?

That’s right, max height is 256 blocks.
I guess you’re right with that premature-optimization hint. It’s just that since I was working on the voxel shader the last few days to get parallax mapping done (and to actually fix a bug in normal mapping I had never noticed), I had wrapped my head around shaders a little. I’m still a noob, but after reading about geometry shaders and having tried some grass shader stuff already, I thought maybe the geometry shader is good at emitting vertices and might speed things up, in that I would need to send less data per chunk to the GPU, which would mean I could theoretically add more chunks to the scene graph per frame.
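The "less data per chunk" idea can be sketched with rough arithmetic. The attribute sizes below (12 bytes of position plus 8 bytes of packed attributes per vertex, 32-bit indices) are assumptions for illustration, not the actual layout in use.

```java
public class BufferSizeEstimate {
    static final long PER_VERTEX = 12 + 8; // assumed: position + packed attrs

    // Conventional layout: 4 vertices and 6 indices (32-bit) per face.
    public static long quadMeshBytes(long faces) {
        return faces * (4 * PER_VERTEX) + faces * 6 * 4;
    }

    // One point per face, no index buffer; the geometry shader
    // would expand each point into a quad.
    public static long pointMeshBytes(long faces) {
        return faces * PER_VERTEX;
    }
}
```

Under these assumptions a quad mesh costs 104 bytes per face versus 20 for a point, roughly a 5x reduction in upload size, independent of whether the geometry shader itself ends up being a win.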

Yes, definitely. I’ve got hundreds of light-emitting blocks; I could never send all those positions and colors to the GPU and do it there. I propagate the light somewhat similarly to what Minecraft does, because it only needs to update when blocks change. Doing it on the GPU would of course yield more realistic results, but it would have to be computed every frame.

You’ve got a good point about finding the lowest-spec device I want to support, but I’m totally fine if that is my mediocre notebook :smiley: