In case you are using mipmaps:
Huh, I didn’t know that was done on the CPU. That would definitely explain why mipmapping represented half of the total render time!
I am currently using glGenerateMipmap for the initial mipmap generation, then using compute shaders to propagate updates to the mipmaps. Before, I was calling glGenerateMipmap every frame and was getting about 15 ms for a 64-voxel texture and 30 ms for a 128-voxel texture.
In the other thread you said you were getting corrupted/black/flickering voxels. Just as an FYI, imageStore is not thread-safe. You have to use one of the atomic operations on the image to avoid corrupted data:
https://www.khronos.org/opengl/wiki/Image_Load_Store#Atomic_operations
Unfortunately, you have to convert to an integer-based image format.
I am aware that imageStore isn’t ideal here. The biggest limitation of image atomics is that they can only be performed on images containing only a red channel. I need four channels, so I’d have to create one image per channel to do it that way.
Another possibility I read somewhere – and I’m a little fuzzy on the details – is to read/write a plain integer buffer with atomics for initial voxelization, then transfer the results to the 3D texture.
Sorry for hijacking this thread. Do you need the alpha channel? I thought of encoding the color in an integer. If you use normalized colors, a 32-bit integer gives you about 10 bits per channel if you stick to vec3, or 8 bits per channel if you need alpha.
If a moderator sees this, could you move it to a different thread?
That could work. I do need the alpha channel, but it doesn’t need to be very precise, so maybe allocate the bits as 9-9-9-5 or 10-10-10-2? That doesn’t leave much precision for the RGB channels though, especially in HDR cases.
It depends on the usage of the alpha channel. IMHO, 10 bits per channel should be enough. Depending on your current format, you might save a lot of memory and bandwidth. I am also not sure HDR is needed on the voxel data; you have better insight there.