Texture arrays from rendered framebuffer

I’m trying to use texture arrays as input to a shader. It works fine for a set of images loaded through the assetLoader, but I cannot get it to work between a framebuffer’s output and the input of the next shader.

The first problem is that TextureArray expects a set of images rather than a set of textures. I tried passing the framebuffer textures’ image collection into the TextureArray constructor, but it then failed while uploading the image buffers (because they are null). When I tried to cheat with ‘clearUpdateNeeded’ on the images, I got no crash, but the texture appears to be completely empty.
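Roughly what I’m attempting, as a sketch (the `shadowMaps` list, the material and the “ShadowMaps” uniform name are placeholders from my own code):

    import com.jme3.material.Material;
    import com.jme3.texture.Image;
    import com.jme3.texture.Texture2D;
    import com.jme3.texture.TextureArray;
    import java.util.ArrayList;
    import java.util.List;

    public class ShadowArraySetup {

        /**
         * Builds a TextureArray from the images backing the framebuffer render targets.
         * This is the step that fails: the images have no CPU-side data (getData() is null),
         * so the upload crashes, or the array stays empty if the update flag is cleared.
         */
        public static TextureArray buildArray(List<Texture2D> shadowMaps, Material material) {
            List<Image> layers = new ArrayList<Image>();
            for (Texture2D map : shadowMaps) {
                layers.add(map.getImage());
            }
            TextureArray shadowArray = new TextureArray(layers);
            material.setTexture("ShadowMaps", shadowArray);   // "ShadowMaps" is a placeholder uniform name
            return shadowArray;
        }
    }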

Alternatively, I could use an array of textures, but those don’t seem to be supported at all.

I don’t want to have to fall back to things like

    Texture2D ShadowMap0
    Texture2D ShadowMap1
    Texture2D ShadowMap2
    Texture2D ShadowMap3
    //pointLights
    Texture2D ShadowMap4
    Texture2D ShadowMap5

instead of using texture arrays/cube maps.
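(For context, the fallback on the Java side would just bind each map to its own numbered uniform, roughly like the sketch below; names follow the material parameters above.)

    import com.jme3.material.Material;
    import com.jme3.texture.Texture2D;
    import java.util.List;

    public class ShadowMapFallback {
        /** Binds each framebuffer depth texture to its own numbered uniform, as the existing shadow renderers do. */
        public static void bind(Material material, List<Texture2D> shadowMaps) {
            for (int i = 0; i < shadowMaps.size(); i++) {
                material.setTexture("ShadowMap" + i, shadowMaps.get(i));
            }
        }
    }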

I guess the issue is that the data is set in the texture array in the constructor, but at that point the framebuffer’s images have not been rendered yet and contain no data.

Have you tried creating your texture array in the postRender method (I guess you are in a processor, right?)? It’s not a solution, but it would give an indication.
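Something like this rough sketch; the names are illustrative, the point is just to defer creating the array until after the shadow framebuffers have been drawn:

    import com.jme3.material.Material;
    import com.jme3.texture.Image;
    import com.jme3.texture.Texture2D;
    import com.jme3.texture.TextureArray;
    import java.util.ArrayList;
    import java.util.List;

    public class LazyShadowArray {

        private final List<Texture2D> shadowMaps;
        private final Material material;
        private TextureArray shadowArray;

        public LazyShadowArray(List<Texture2D> shadowMaps, Material material) {
            this.shadowMaps = shadowMaps;
            this.material = material;
        }

        /** Called from the processor's post-render hook, after the shadow FBOs have been drawn. */
        public void postRender() {
            if (shadowArray != null) {
                return;                              // build the array only once
            }
            List<Image> layers = new ArrayList<Image>();
            for (Texture2D map : shadowMaps) {
                layers.add(map.getImage());          // the GPU-side targets exist by now
            }
            shadowArray = new TextureArray(layers);  // the images still have no CPU data, which is the open question
            material.setTexture("ShadowMaps", shadowArray);
        }
    }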

I guess we could make an updateTextures(Texture…) method to refresh the data.
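Purely hypothetical sketch of that idea (no such method exists on TextureArray today):

    import com.jme3.texture.Texture;
    import com.jme3.texture.TextureArray;

    // Hypothetical helper for the idea above. The intent: re-point the array's layers at
    // freshly rendered textures and flag the array for re-upload, instead of rebuilding
    // the whole TextureArray each frame.
    public class TextureArrayUpdateSketch {

        public static void updateTextures(TextureArray array, Texture... sources) {
            for (int layer = 0; layer < sources.length; layer++) {
                // copy/attach sources[layer].getImage() as layer 'layer' of the array's image
            }
            // then mark the array's image as changed so the renderer re-uploads it
        }
    }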

After thinking about it for some time, it probably cannot work. A TextureArray is supposed to be a single bind id, optimized behind the scenes (a kind of 3D texture with no mipmapping along one axis), while my render targets were created separately, one after another. It could possibly work with:

  • an array of textures (quite a few changes to the material type)
  • MTR rendering directly into a texture array using layers (not sure if this is realistic without geometry shaders, plus I need them to be depth textures)

but these are probably a bit out of scope for me right now.

I will probably have to fall back on ShadowMapN mechanism after all…

On a somewhat related topic, are there any plans to support FrameBuffers with a depth cube map face as output?

You can assign a render target to a particular texture array slice or cubemap side - we already do this for shadow mapping.

@Momoko_Fan said: You can assign a render target to a particular texture array slice or cubemap side - we already do this for shadow mapping.

Can you tell me where I can find an example? From what I can see at the moment, all shadow implementations use 4-6 separate textures, not a GLSL texture array or cube map.

I know that you can assign a face of a color cube map as framebuffer output (TestRenderToCubeMap). I have not seen an API for assigning a depth cube map face or a color/depth texture array slice to a framebuffer. I’m using the git source.

@Momoko_Fan said: You can assign a render target to a particular texture array slice or cubemap side - we already do this for shadow mapping.
we don't. At least I never did it.
@nehon said: we don't. At least I never did it.
Okay, well, we do have a FrameBuffer.addColorTexture() that takes a cubemap face parameter. It should be fairly easy to add a similar method for a texture array slice. @abies: If you like, you can make the change and then open a pull request on GitHub.

Although, I am not sure exactly how to treat a depth texture as a render target; you may need to use a color texture for shadow mapping …
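For reference, a minimal sketch of the existing cubemap-face path; the texture-array variant in the comment is the hypothetical method I mean, it does not exist yet:

    import com.jme3.texture.FrameBuffer;
    import com.jme3.texture.Image.Format;
    import com.jme3.texture.TextureCubeMap;

    public class CubeFaceTarget {
        /** Existing path (what TestRenderToCubeMap relies on): attach one cube map face as the color target. */
        public static FrameBuffer createFaceTarget(TextureCubeMap cube, TextureCubeMap.Face face) {
            FrameBuffer fb = new FrameBuffer(cube.getImage().getWidth(), cube.getImage().getHeight(), 1);
            fb.setDepthBuffer(Format.Depth);
            fb.addColorTexture(cube, face);
            // Proposed analogue for texture arrays (hypothetical, does not exist yet):
            // fb.addColorTexture(myTextureArray, layerIndex);
            return fb;
        }
    }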

@abies said: After thinking about it for some time, it probably cannot work. TextureArray is supposed to be a single bind id, optimized behind the scenes (kind of 3d texture with no mipmaps on one axis), while my render targets were created separately one after another. It could possibly work with: - array of textures (quite a few changes to material type) - MTR rendering directly to texture array using layers (not sure if this is realistic without geometry shaders, plus I need it to be _depth_ textures) but these are probably bit out of scope for me right now. I will probably have to fall back on ShadowMapN mechanism after all… On bit related topic, any plans to support FrameBuffers with cube depth buffer side as output?

The geometry shader gets awfully slow when emitting complex structures (or lots of triangles).
Some info I have found on this is that it is better to use instancing and something like “gl_Layer = gl_InstanceID”.

Another idea would be to use image load/store to write to a 3D texture; as far as I know, image loads and stores on the same pixel are barrier-free.

As far as I know, all of these things require OpenGL 4 support, which we haven’t gotten to yet.

At the moment we support OpenGL 2.x and many OpenGL 3.x features, but no OpenGL 4…