Attaching a Texture3D object to a FrameBuffer

Hello,
I was wondering why one is unable to attach a 3D texture to a framebuffer object. If there is no good reason, has anyone made a patch for this?
Thanks.

So I attempted to edit the source on my own to get this working. After the easy edits to the FrameBuffer class to make it accept 3D textures, I edited the call in the updateRenderTexture() method of the LwjglRenderer class like so:
[java]
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT,
        convertAttachmentSlot(rb.getSlot()),
        convertTextureType(tex.getType(),
                image.getMultiSamples(),
                rb.getFace()),
        image.getId(),
        0);
[/java]

to
[java]
glFramebufferTexture3DEXT(GL_FRAMEBUFFER_EXT,
        convertAttachmentSlot(rb.getSlot()),
        convertTextureType(tex.getType(),
                image.getMultiSamples(),
                rb.getFace()),
        image.getId(),
        0);
[/java]

When I compile the source I get the following error regarding this change:

required: int,int,int,int,int,int
found: int,int,int,int,int
reason: actual and formal argument lists differ in length

I am not well versed in raw OpenGL, so I am not sure how to fix this…
Thanks for any help!

Well, what exactly do you expect to happen when rendering into a 3D texture?
-> Rendering always produces only 2D images, so you could only fill in one z level of the 3D texture at a time. (This should work somehow, though, as far as I know.)

@Empire Phoenix said: Well what exactly do you expect to happen when rendering into a 3d texture? -> Rendering always creates only 2d images, so you could only fill in one z level of the 3d texture then. (This should work someway however as far as i know)
That's a question I was about to ask. Actually what is the use case for this? About the compilation issue, I guess the signature of the 3D version of the method is different from the 2D version. You'd have to check the specs.
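Checking the EXT_framebuffer_object spec: the 3D variant indeed takes one extra parameter, a zoffset selecting which slice of the 3D texture to attach. So the call would look something like this (a sketch following the spec, not tested here):
[java]
// glFramebufferTexture2DEXT(target, attachment, textarget, texture, level)
// glFramebufferTexture3DEXT(target, attachment, textarget, texture, level, zoffset)
glFramebufferTexture3DEXT(GL_FRAMEBUFFER_EXT,
        GL_COLOR_ATTACHMENT0_EXT,
        GL_TEXTURE_3D,
        image.getId(),
        0,  // mip level
        0); // zoffset: slice of the 3D texture to attach
[/java]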
@Empire Phoenix said: Well what exactly do you expect to happen when rendering into a 3d texture? -> Rendering always creates only 2d images, so you could only fill in one z level of the 3d texture then. (This should work someway however as far as i know)
@nehon said: That's a question I was about to ask. Actually what is the use case for this? About the compilation issue, I guess the signature of the 3D version of the method is different from the 2D version. You'd have to check the specs.
I simply use a geometry shader to set gl_Layer to a specified value so that the fragment is written into the correct slice of the texture (theoretically). I think I fixed the problem with the LwjglRenderer method… I just changed the aforementioned part of the method to this:

[java]
if (tex.getType() == Texture.Type.ThreeDimensional) {
    glFramebufferTexture3DEXT(GL_FRAMEBUFFER_EXT,
            convertAttachmentSlot(rb.getSlot()),
            convertTextureType(tex.getType(),
                    image.getMultiSamples(),
                    rb.getFace()),
            image.getId(),
            0,  // mip level
            0); // zoffset: slice of the 3D texture
} else {
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT,
            convertAttachmentSlot(rb.getSlot()),
            convertTextureType(tex.getType(),
                    image.getMultiSamples(),
                    rb.getFace()),
            image.getId(),
            0); // mip level
}
[/java]
Which seems to work…I am having a slight issue with my 3D texture, but I think it is unrelated to this.

Thanks!

Multiple shadow maps with one draw call would be one possible use case.

In case anyone who is ever doing something similar looks at this for help: the above edit to the method will only draw to the first slice of the 3D texture. To make it work with a geometry shader, I think one has to attach the texture as a layered target, something like:
[java]
org.lwjgl.opengl.GL32.glFramebufferTexture(GL_FRAMEBUFFER_EXT,
        convertAttachmentSlot(rb.getSlot()),
        image.getId(),
        0);
[/java]
or
[java]
org.lwjgl.opengl.NVGeometryProgram4.glFramebufferTextureEXT(GL_FRAMEBUFFER_EXT,
        convertAttachmentSlot(rb.getSlot()),
        image.getId(),
        0);
[/java]

@zzuegg said: Multiple shadowmaps with one drawcall would be one possible usecase.
ok you got my attention :p. How?

@okelly4408, what do you want to use it for? Geometry shaders are not supported by the engine right now (even if their support is planned).

Once you have geometry shader support it is pretty simple:

[java]
//vertex shader
out vec3 position;
void main(){
    position = inPosition;
}

//geometry shader
layout(triangles) in;
layout(triangle_strip, max_vertices = 96) out; // must be a constant; sized here for up to 32 layers * 3 vertices
in vec3 position[];
uniform int layers;
uniform mat4 mvps[32]; // uniform arrays need a declared size; 32 is a placeholder bound
void main(){
    for(int i = 0; i < layers; i++){
        gl_Layer = i; // route the primitive into slice i of the layered target
        for(int j = 0; j < 3; j++){
            gl_Position = mvps[i] * vec4(position[j], 1.0); // w = 1 for positions
            EmitVertex();
        }
        EndPrimitive();
    }
}

//fragment shader: does the usual depth write

[/java]
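For completeness, the depth-only fragment shader can be as simple as an empty main: as long as depth writes are enabled, depth is written by the fixed-function stage without any color output (a minimal sketch):
[java]
//fragment shader for a depth-only pass: no color outputs needed
void main(){
}
[/java]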

@nehon said: ok you got my attention :p. How?

@okelly4408, what do you want to use it for? Geometry shaders are not supported by the engine right now (even if their support is planned).


I am using it to optimize an atmospheric scattering model I have been working on for a while. And I patched the engine to add support for geometry shaders.


Ok, this is interesting. I look forward to this.

So I am having a bit of trouble with this…
I am creating a multi-target framebuffer, attaching the two 3D textures, and then drawing full screen quads for each layer of the textures. The problem is that only one layer of the texture is ever written to… When I set the layer manually in the geometry shader to some value, then it is that layer that is drawn into… so the geometry shader is doing its job. It almost seems like every time a new quad is attached to the viewport, the texture is cleared before being written into.
Anyway, here is some code to show what I am doing:

[java]
Texture3D deltaSM, deltaSR;
deltaSM = new Texture3D(RES_MU_S * RES_NU, RES_MU, RES_R, Format.RGB16F);
deltaSR = new Texture3D(RES_MU_S * RES_NU, RES_MU, RES_R, Format.RGB16F);

deltaSM.setMinFilter(MinFilter.BilinearNoMipMaps);
deltaSM.setMagFilter(MagFilter.Bilinear);
deltaSM.setWrap(WrapAxis.S, WrapMode.EdgeClamp);
deltaSM.setWrap(WrapAxis.T, WrapMode.EdgeClamp);

deltaSR.setMinFilter(MinFilter.BilinearNoMipMaps);
deltaSR.setMagFilter(MagFilter.Bilinear);
deltaSR.setWrap(WrapAxis.S, WrapMode.EdgeClamp);
deltaSR.setWrap(WrapAxis.T, WrapMode.EdgeClamp);

Material inscatter1 = new Material(assetManager, "Inscatter1.j3md");
inscatter1.setTexture("transmittanceSampler", transmittance);

fboInscatter1 = new FrameBuffer(RES_MU_S * RES_NU, RES_MU, 1);
fboInscatter1.setMultiTarget(true);
fboInscatter1.addColorTexture(deltaSM);
fboInscatter1.addColorTexture(deltaSR);

ca = new Camera(RES_MU_S * RES_NU, RES_MU);
ca.setLocation(new Vector3f (0,0,1));
ca.lookAt(new Vector3f(0,0,0), Vector3f.UNIT_Y);

ViewPort p = renderManager.createPreView("Pre", ca);
p.setOutputFrameBuffer(fboInscatter1);

for(int i = 0; i < RES_R; ++i){
	setLayer(inscatter1, i);
	inscatter1.setInt("layer", i);
	
	Geometry g = drawQuad(inscatter1);	

	p.attachScene(g);

	if(p.isEnabled()){
		g.updateGeometricState();
	}
} 

[/java]
To explain the problem a bit more clearly: in the loop, if I draw only when “i” equals 4 or 9, then the 4th layer is not drawn into but the 9th one is.
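One thing that might be worth trying (a guess based on the symptom, not verified against this setup): if the framebuffer is being cleared each time the viewport renders, disabling the viewport’s clear flags would preserve the layers written by earlier quads:
[java]
// Hypothetical fix: keep previously rendered layers by disabling
// color/depth/stencil clearing on the preview viewport.
p.setClearFlags(false, false, false);
[/java]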
BTW: the updateRenderTexture() method in LwjglRenderer now looks like this…
[java]

public void updateRenderTexture(FrameBuffer fb, RenderBuffer rb) {
    Texture tex = rb.getTexture();
    Image image = tex.getImage();
    if (image.isUpdateNeeded()) {
        updateTexImageData(image, tex.getType(), 0);

        // NOTE: For depth textures, sets nearest/no-mips mode
        // Required to fix "framebuffer unsupported"
        // for old NVIDIA drivers!
        setupTextureParams(tex);
    }

    // method obtained with the following import statement:
    // import static org.lwjgl.opengl.EXTGeometryShader4.*;
    glFramebufferTextureEXT(GL_FRAMEBUFFER_EXT,
            convertAttachmentSlot(rb.getSlot()),
            image.getId(),
            0);
}

[/java]

Any ideas?
Thanks.

According to okelly4408’s posts, he gave up on this and implemented it in LWJGL directly.

Has anything changed in the meantime? What’s the suggested way for rendering to a slice in a 3D texture, if there is any?

I use Texture2DArray for this purpose. Is there any benefit when rendering to a real 3D texture?

3D textures offer automatic interpolation and better cache locality in all three directions. Their mipmaps are also smaller, since each level is scaled down along the depth axis as well.
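To make the mipmap difference concrete with plain arithmetic (a standalone sketch; the class and method names are made up for illustration):

```java
// Compares the texel count of a mip level for a true 3D texture
// (depth halves per level) vs. a 2D texture array (layer count is fixed).
public class MipSizes {

    // OpenGL mip rule for one dimension: max(1, floor(d / 2^level))
    static int mipDim(int d, int level) {
        return Math.max(1, d >> level);
    }

    // 3D texture: width, height AND depth shrink with each mip level.
    static long texels3D(int w, int h, int d, int level) {
        return (long) mipDim(w, level) * mipDim(h, level) * mipDim(d, level);
    }

    // 2D array: only width and height shrink; every level keeps all layers.
    static long texelsArray(int w, int h, int layers, int level) {
        return (long) mipDim(w, level) * mipDim(h, level) * layers;
    }

    public static void main(String[] args) {
        // Mip 1 of a 128x128x64 3D texture is 64x64x32 texels,
        // while a 128x128 array with 64 layers keeps 64x64x64.
        System.out.println(texels3D(128, 128, 64, 1));    // 131072
        System.out.println(texelsArray(128, 128, 64, 1)); // 262144
    }
}
```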

I don’t want to hijack this thread for my own use case. My goal is a different one, but I’m considering texture arrays too.
I tried something with TextureArray. Is that the same thing?
(I used its internal Image and passed it to a Texture3D - not knowing what I’m doing. It can’t work because those things use a different memory layout)

I would try out OpenCL, but our target hardware doesn’t support the extension that is needed for 3D texture writing. Hence I’m trying to update the texture with a shader.


Yes, TextureArray was what I meant. You can do manual interpolation in TextureArrays. I investigated the possibilities of using Texture3D some years ago but came to no suitable conclusion. Maybe you would have to implement it yourself.
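For anyone trying that route, the manual interpolation between array layers could look something like this in GLSL (a hypothetical helper, untested):
[java]
// Emulates the z-axis filtering a sampler3D would give for free:
// sample the two neighbouring layers and blend between them.
vec4 sampleArrayTrilinear(sampler2DArray tex, vec3 coord, int layerCount) {
    float layer = coord.z * float(layerCount - 1);
    float lo = floor(layer);
    vec4 a = texture(tex, vec3(coord.xy, lo));
    vec4 b = texture(tex, vec3(coord.xy, lo + 1.0));
    return mix(a, b, layer - lo);
}
[/java]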

My last post recently received a heart, so there still might be interest out there.

I added support for 3D textures to the engine some time ago, but I thought it was too weird for a pull request:
https://github.com/FennelFetish/jmonkeyengine/commit/45fc51999b9b56401e1943d0403346c11216500e

Also includes example java/shader code (requires geometry shader):
https://github.com/FennelFetish/jmonkeyengine/commit/bceeaaea2aa5ad208d0858c156b56fb303b58770

Just realized the heart came from OP.
WB and thanks for the input!
