I want to obtain a matrix with per-pixel depth information for my current view. It seemed like a good idea to use the internal ZBuffer to acquire this data, but in the new JME3 API I can't find anything like renderer.createZBufferState().
How can I grab this or other internal buffers from OpenGL?
Thanks!
Stuff like that is normally done at the shader level. There is a depth map of the current frame you can access in the renderer; look at SimpleWaterProcessor, which creates a preView that uses a depth texture.
Thanks for the pointer. I studied it, but I have a hard time pulling out the parts I need.
I made the method below for HelloCollision. The returned image is not null itself, but im.getData(0) is null.
I can't really blame it, because I don't see where depthTexture is ever actually filled with content.
BROKEN:
[java]
public Image createZBufferImage(int renderWidth, int renderHeight) {
    Texture2D depthTexture = new Texture2D(renderWidth, renderHeight, Format.Depth);
    // a brand-new camera, not synced with the main view's cam
    Camera depthCam = new Camera(renderWidth, renderHeight);
    // create a pre-view: a view that is rendered before the main view
    ViewPort depthView = renderManager.createPreView("Depth View", depthCam);
    // create an offscreen framebuffer
    FrameBuffer zBuffer = new FrameBuffer(renderWidth, renderHeight, 1);
    // set up the framebuffer to render its depth into the texture
    zBuffer.setDepthBuffer(Format.Depth);
    zBuffer.setDepthTexture(depthTexture);
    // make the viewport render to the offscreen framebuffer
    depthView.setOutputFrameBuffer(zBuffer);
    // attach the scene to the viewport to be rendered
    depthView.attachScene(sceneModel);
    // the rendered depth data only ever exists on the GPU, so the
    // returned Image's backing data stays empty on the CPU side
    Image im = depthTexture.getImage();
    return im;
}[/java]
I have the feeling that returning an Image is actually not smart: then I only have access to a ByteBuffer, while I want a float[] or int[]. But OK… I have to get it working first anyway.
Any suggestions?
A ByteBuffer can be accessed as a FloatBuffer or any other kind of buffer.
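For example (a minimal sketch; width and height are whatever your buffer was sized for):
[java]
int width = 800, height = 600;
ByteBuffer bytes = BufferUtils.createByteBuffer(4 * width * height); // 4 bytes per float
FloatBuffer floats = bytes.asFloatBuffer(); // same memory, viewed as floats
float firstDepth = floats.get(0);
[/java]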
Ok, that’s nice. The problem remains that im.getData() == [] in the above code. Does it make sense how I recombined bits and pieces from SimpleWaterProcessor?
Hi, for what purpose do you want to get the depth buffer in the first place?
For example, in jME3 the depth buffer is used in post-process effects or for shadows.
But the image data is used in a shader like a classic texture.
I wonder why you would want it outside of the shader?
I want to retrieve the depth buffer information to have comparison material for 3D sensors (in particular a stereo camera). So I acquire some real-world data and want to compare it to similar data simulated in a simple JME scene.
That is why I really want to get my fingers on the z-buffer (as if it were a sensor). I might even export it for further processing outside a Java environment. But I have to admit that I want to see the z-buffer as well (as a simple effect) to get some visual feedback, when possible.
So I was wondering if I could grab that info from the LWJGL renderer or, as normen said, render the z-buffer to some texture (but I still have difficulties connecting the pieces together).
Why don't you just do a ray test at that location in the scene instead of trying to do it in the rendering?
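Something like this (a rough sketch; sceneModel and cam are the fields from HelloCollision):
[java]
// cast a ray from the camera into the scene and read the hit distance
Ray ray = new Ray(cam.getLocation(), cam.getDirection());
CollisionResults results = new CollisionResults();
sceneModel.collideWith(ray, results);
if (results.size() > 0) {
    float distance = results.getClosestCollision().getDistance();
}
[/java]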
Well… a ray test gives me just one point, while I'm interested in a whole set of that kind of information (the entire depth image, as mentioned before). I know that that information is already available internally somewhere, so it feels like a waste to calculate, say, 800 x 600 points via ray intersections all over again.
Although the online JME tutorials are great, for this kind of 'backdoor' I have to dig into the API itself, and I find it harder to find my way around there.
OK, so I guess the image rendering is your only option.
In the example above I think the issue is the camera. You should use the original camera (the cam attribute of SimpleApplication) instead of creating a new one.
The thing is, with this approach the scene will be rendered twice… which is too bad.
What I would do is use a processor to get hold of the depth buffer that is rendered into a framebuffer during the render of the main viewport.
But then you would have to render the scene to a full-screen quad.
For this, use a Filter.
Look at the ColorOverlayFilter: what it does is render the scene to a full-screen quad and multiply the output with a color.
Create your own filter based on this one and add this method:
[java]
@Override
public boolean isRequiresDepthTexture() {
    return true;
}
[/java]
Then you'll be able to grab the depth of the scene via defaultPass.getDepthTexture().
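A minimal version could look like the sketch below. Note that this is an assumption-heavy outline: DepthGrabFilter is a made-up name, and the exact Filter method signatures may differ between jME3 versions.
[java]
import com.jme3.asset.AssetManager;
import com.jme3.material.Material;
import com.jme3.math.ColorRGBA;
import com.jme3.post.Filter;
import com.jme3.renderer.RenderManager;
import com.jme3.renderer.ViewPort;

public class DepthGrabFilter extends Filter {

    @Override
    public void initFilter(AssetManager manager, RenderManager renderManager, ViewPort vp, int w, int h) {
        // pass the scene through unchanged; we only want the side effect
        // of the depth texture being rendered
        material = new Material(manager, "Common/MatDefs/Post/Overlay.j3md");
        material.setColor("Color", ColorRGBA.White);
    }

    @Override
    public Material getMaterial() {
        return material;
    }

    @Override
    public boolean isRequiresDepthTexture() {
        return true; // tells the FilterPostProcessor to render depth into defaultPass
    }
}
[/java]
After adding it to a FilterPostProcessor on the main viewport, defaultPass.getDepthTexture() should hold the scene depth.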
That said… maybe @Momoko_Fan knows a better way to get hold of the depth buffer.
I’ll definitely give that a try. Thanks guys for your help so far!
You can get the data from the GPU by using glReadPixels with the format argument set to GL_DEPTH_COMPONENT. However, I don't know how it handles 24-bit depth buffers …
Basically you need to take the method LwjglRenderer.readFrameBuffer() and adapt it so it reads the depth buffer. We should really make this built-in functionality …
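The core of such an adapted method would be a single call, along these lines (a sketch; it assumes the framebuffer you want to read is currently bound and matches the given size):
[java]
int width = 800, height = 600; // size of the bound framebuffer
FloatBuffer depth = BufferUtils.createFloatBuffer(width * height);
GL11.glReadPixels(0, 0, width, height, GL11.GL_DEPTH_COMPONENT, GL11.GL_FLOAT, depth);
// depth now holds one normalized [0,1] value per pixel, bottom row first
[/java]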
Is the depth data not available to shaders as well, without using a method like the one described above?
Because there is some cool stuff you could do with this data available, like DoF (depth of field: the further away objects are, the less sharp they are, until everything fades into blur; mostly useful for things like cutscenes where you want the viewer to focus on something).
You can already access the depth data in a shader by using depth textures; the OP is asking about accessing the data on the CPU. That is only possible by using the OpenGL function glReadPixels.
@arie
Hi, from what I read of this post, I'm trying to accomplish a very similar thing, i.e. to use the internal data from jMonkey to simulate depth sensing (laser, sonar, etc.). Were you able to get this working using the suggestions above?
Hi Adam. Indeed we want to achieve the same thing. Unfortunately I haven't gotten it working yet.
As Momoko mentioned, OpenGL’s glReadPixels is the way to go. But:
- making a function similar to readFrameBuffer() outside com.jme3.renderer.lwjgl.LwjglRenderer seems undoable, since it uses private fields.
- adding a method to LwjglRenderer itself should work, but I don't want to build JME locally and have to keep it in sync with official updates. I don't have enough knowledge to contribute a well-considered patch, though.
I hope that more experienced people can add this functionality. Adding a parameter may do the job: readFrameBuffer(FrameBuffer fb, ByteBuffer byteBuf, int channel), where the channel can be GL_DEPTH_COMPONENT (the existing readFrameBuffer(FrameBuffer fb, ByteBuffer byteBuf) is then a convenience method). Again, I’m not sure what other things should change inside that method.
JME team… can you help us out?
How about extending the LwjglRenderer? Then you can get updates as long as they don't change the parts of the interface you need.
I think I'm getting close, but the attempt is not successful yet. Some issues:
- I cannot access some private fields of LwjglRenderer (in particular context)
- I don't know exactly when/how to call my code
(can I call it at any time? has a full frame been rendered yet?)
When I call it, all values are zero…
NB: I initialize the renderer in simpleInitApp() of HelloCollision.java, which seems to work when I try to read out the buffer later in onAction().
[java]import com.jme3.renderer.lwjgl.LwjglRenderer;
import com.jme3.texture.FrameBuffer;
import com.jme3.texture.FrameBuffer.RenderBuffer;
import com.jme3.texture.Image;
import com.jme3.util.BufferUtils;
import java.nio.ByteBuffer;
import java.nio.FloatBuffer;
import org.lwjgl.opengl.GL11;

public class TestRenderer extends LwjglRenderer {

    /**
     * Reads the depth attachment of the given framebuffer into byteBuf
     * (4 bytes per pixel, to be interpreted as floats in [0,1]).
     */
    public void readDepthBuffer(FrameBuffer fb, ByteBuffer byteBuf) {
        if (fb != null) {
            RenderBuffer rb = fb.getDepthBuffer();
            if (rb == null) {
                throw new IllegalArgumentException("Specified framebuffer"
                        + " does not have a depth buffer");
            }
            setFrameBuffer(fb);
            // context is private in LwjglRenderer, so I cannot rebind the
            // read buffer the way readFrameBuffer() does:
            // if (context.boundReadBuf != rb.getSlot()) {
            //     glReadBuffer(GL_COLOR_ATTACHMENT0_EXT + rb.getSlot());
            //     context.boundReadBuf = rb.getSlot();
            // }
        } else {
            setFrameBuffer(null);
        }
        // NB: this throws a NullPointerException when fb == null; for the
        // default framebuffer the viewport size would have to be used instead
        GL11.glReadPixels(0, 0, fb.getWidth(), fb.getHeight(),
                GL11.GL_DEPTH_COMPONENT, GL11.GL_FLOAT, byteBuf);
    }

    // convenience: an Image backed by a buffer big enough for 32-bit depth
    public Image createDepthImage(int w, int h) {
        return new Image(Image.Format.Depth32F, w, h,
                BufferUtils.createByteBuffer(4 * w * h));
    }

    // convenience: a FloatBuffer view on a fresh w * h * 4-byte buffer
    public FloatBuffer createDepthBuffer(int w, int h) {
        ByteBuffer byteBuffer = BufferUtils.createByteBuffer(4 * w * h);
        byteBuffer.rewind();
        return byteBuffer.asFloatBuffer();
    }
}[/java]
If you're using SceneProcessors, you can call it in the postFrame callback.
For AppStates, use the postRender callback.
For SimpleApplication, use the simpleRender() callback.
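For example, with the TestRenderer above it could look roughly like this in HelloCollision (a sketch; zBuffer is assumed to be stored as a field holding the offscreen FrameBuffer the depth pre-view renders into):
[java]
private FrameBuffer zBuffer; // the offscreen buffer the depth pre-view renders into
private ByteBuffer depthBytes;

@Override
public void simpleRender(RenderManager rm) {
    if (depthBytes == null) {
        depthBytes = BufferUtils.createByteBuffer(4 * zBuffer.getWidth() * zBuffer.getHeight());
    }
    depthBytes.clear();
    // the frame has been rendered at this point, so the read-back returns real data
    ((TestRenderer) rm.getRenderer()).readDepthBuffer(zBuffer, depthBytes);
    FloatBuffer depth = depthBytes.asFloatBuffer(); // one [0,1] float per pixel
}
[/java]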
@arie
Hey Arie, any progress on this issue? Right now I’m doing a bunch of ray casts and it’s a little slow…