Visualizing the Depth Buffer

I’m trying to display depth data from a framebuffer, and I’m running into problems. I’ve come across responses to similar questions on the forum and thought I had implemented everything correctly; however, the material I’m applying gets garbled.



I’ve been modifying the TestRenderToMemory example. I’ve added the depth data to my framebuffer as you mention in this post: http://hub.jmonkeyengine.org/groups/general-2/forum/topic/loading-and-accessing-a-framebuffer-for-each-viewport/ and then I set the texture on the material of the box to my depthData. I expected the results to be a smoothly shaded texture, showing the depth of the box. What I get looks like garbage from my video card. I’m not sure what I’m doing wrong here. I’d really appreciate some pointers.



My code so far:

[java]package post;

import com.jme3.app.SimpleApplication;
import com.jme3.material.Material;
import com.jme3.math.ColorRGBA;
import com.jme3.math.FastMath;
import com.jme3.math.Quaternion;
import com.jme3.math.Vector3f;
import com.jme3.post.SceneProcessor;
import com.jme3.renderer.Camera;
import com.jme3.renderer.RenderManager;
import com.jme3.renderer.ViewPort;
import com.jme3.renderer.queue.RenderQueue;
import com.jme3.scene.Geometry;
import com.jme3.scene.shape.Box;
import com.jme3.system.AppSettings;
import com.jme3.system.JmeContext.Type;
import com.jme3.texture.FrameBuffer;
import com.jme3.texture.Image.Format;
import com.jme3.texture.Texture;
import com.jme3.texture.Texture2D;
import com.jme3.util.BufferUtils;
import com.jme3.util.Screenshots;

import java.awt.Color;
import java.awt.Dimension;
import java.awt.Graphics;
import java.awt.Graphics2D;
import java.awt.event.WindowAdapter;
import java.awt.event.WindowEvent;
import java.awt.image.BufferedImage;
import java.nio.ByteBuffer;

import javax.swing.JFrame;
import javax.swing.JPanel;
import javax.swing.SwingUtilities;

/**
 * This test renders a scene to an offscreen framebuffer, then copies
 * the contents to a Swing JFrame. Note that some parts are done
 * inefficiently; this is done to make the code more readable.
 */
public class depthCamera extends SimpleApplication implements SceneProcessor {

    private Geometry offBox;
    private float angle = 0;

    private FrameBuffer offBuffer;
    private ViewPort offView;
    private Texture2D offTex;
    private Texture2D depthData;

    private Camera offCamera;
    private ImageDisplay display;

    private static final int width = 800, height = 600;

    private final ByteBuffer cpuBuf = BufferUtils.createByteBuffer(width * height * 4);
    private final byte[] cpuArray = new byte[width * height * 4];
    private final BufferedImage image = new BufferedImage(width, height,
            BufferedImage.TYPE_4BYTE_ABGR);

    private class ImageDisplay extends JPanel {

        private long t;
        private long total;
        private int frames;
        private int fps;

        @Override
        public void paintComponent(Graphics gfx) {
            super.paintComponent(gfx);
            Graphics2D g2d = (Graphics2D) gfx;

            if (t == 0)
                t = timer.getTime();

            // g2d.setBackground(Color.BLACK);
            // g2d.clearRect(0, 0, width, height);

            synchronized (image) {
                g2d.drawImage(image, null, 0, 0);
            }

            long t2 = timer.getTime();
            long dt = t2 - t;
            total += dt;
            frames++;
            t = t2;

            if (total > 1000) {
                fps = frames;
                total = 0;
                frames = 0;
            }

            g2d.setColor(Color.white);
            g2d.drawString("FPS: " + fps, 0, getHeight() - 100);
        }
    }

    public static void main(String[] args) {
        depthCamera app = new depthCamera();
        app.setPauseOnLostFocus(false);
        AppSettings settings = new AppSettings(true);
        settings.setResolution(1, 1);
        app.setSettings(settings);
        app.start(Type.OffscreenSurface);
    }

    public void createDisplayFrame() {
        SwingUtilities.invokeLater(new Runnable() {
            public void run() {
                JFrame frame = new JFrame("Render Display");
                display = new ImageDisplay();
                display.setPreferredSize(new Dimension(width, height));
                frame.getContentPane().add(display);
                frame.setDefaultCloseOperation(JFrame.DISPOSE_ON_CLOSE);
                frame.addWindowListener(new WindowAdapter() {
                    public void windowClosed(WindowEvent e) {
                        stop();
                    }
                });
                frame.pack();
                frame.setLocationRelativeTo(null);
                frame.setResizable(false);
                frame.setVisible(true);
            }
        });
    }

    public void updateImageContents() {
        cpuBuf.clear();
        renderer.readFrameBuffer(offBuffer, cpuBuf);

        synchronized (image) {
            Screenshots.convertScreenShot(cpuBuf, image);
        }

        if (display != null)
            display.repaint();
    }

    public void setupOffscreenView() {
        offCamera = new Camera(width, height);

        // create a pre-view: a view that is rendered before the main view
        offView = renderManager.createPreView("Offscreen View", offCamera);
        offView.setBackgroundColor(ColorRGBA.DarkGray);
        offView.setClearFlags(true, true, true);

        // this will let us know when the scene has been rendered to the
        // frame buffer
        offView.addProcessor(this);

        // create offscreen framebuffer
        offBuffer = new FrameBuffer(width, height, 1);

        // setup framebuffer's cam
        offCamera.setFrustumPerspective(45f, 1f, 1f, 1000f);
        offCamera.setLocation(new Vector3f(0f, 0f, -5f));
        offCamera.lookAt(new Vector3f(0f, 0f, 0f), Vector3f.UNIT_Y);

        // setup framebuffer's texture
        offTex = new Texture2D(width, height, Format.RGB8);

        // I added the two lines below to bind the depthData to the FrameBuffer
        depthData = new Texture2D(width, height, Format.Depth);
        offBuffer.setDepthTexture(depthData);

        // setup framebuffer to use renderbuffer
        // this is faster for gpu -> cpu copies
        offBuffer.setDepthBuffer(Format.Depth);
        offBuffer.setColorBuffer(Format.RGBA8);
        offBuffer.setColorTexture(offTex);

        // set viewport to render to offscreen framebuffer
        offView.setOutputFrameBuffer(offBuffer);

        // setup framebuffer's scene
        Box boxMesh = new Box(Vector3f.ZERO, 1, 1, 1);
        // Material material = assetManager.loadMaterial("Interface/Logo/Logo.j3m");
        Material material = new Material(assetManager, "Common/MatDefs/Misc/Unshaded.j3md");

        offBox = new Geometry("box", boxMesh);

        // I added the code below to attach the depth data to the box
        Texture tex_ml = assetManager.loadTexture("Interface/Logo/Monkey.jpg");
        material.setTexture("ColorMap", depthData);
        offBox.setMaterial(material);

        // attach the scene to the viewport to be rendered
        offView.attachScene(offBox);
    }

    @Override
    public void simpleInitApp() {
        setupOffscreenView();
        createDisplayFrame();
    }

    @Override
    public void simpleUpdate(float tpf) {
        Quaternion q = new Quaternion();
        angle += tpf;
        angle %= FastMath.TWO_PI;
        q.fromAngles(angle, 0, angle);

        offBox.setLocalRotation(q);
        offBox.updateLogicalState(tpf);
        offBox.updateGeometricState();
    }

    public void initialize(RenderManager rm, ViewPort vp) {
    }

    public void reshape(ViewPort vp, int w, int h) {
    }

    public boolean isInitialized() {
        return true;
    }

    public void preFrame(float tpf) {
    }

    public void postQueue(RenderQueue rq) {
    }

    /**
     * Update the CPU image's contents after the scene has
     * been rendered to the framebuffer.
     */
    public void postFrame(FrameBuffer out) {
        updateImageContents();
    }

    public void cleanup() {
    }
}[/java]

I know this is possible; I’m at a loss right now, though.

You cannot setDepthBuffer and setDepthTexture on the same FrameBuffer … You must use either one or the other.
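A minimal sketch of what that might look like, reusing the field names from the code above and assuming jME3’s FrameBuffer API as it is used there (untested, fragment only):

```java
// Sketch: attach depth as a texture, and do NOT also call setDepthBuffer() --
// a FrameBuffer gets either a depth renderbuffer or a depth texture, not both.
offBuffer = new FrameBuffer(width, height, 1);

offTex = new Texture2D(width, height, Format.RGBA8);
offBuffer.setColorTexture(offTex);       // color attachment as a texture

depthData = new Texture2D(width, height, Format.Depth);
offBuffer.setDepthTexture(depthData);    // depth attachment as a texture
// (no offBuffer.setDepthBuffer(...) call here)

offView.setOutputFrameBuffer(offBuffer);
```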

Also, are you trying to make the buffer available in system memory or in a texture?

The ultimate goal is to simulate a depth camera. To do so, I expect to use memory, then write that to a screengraph. Is there a flaw in my logic there?

@morrowsend said:
The ultimate goal is to simulate a depth camera. To do so, I expect to use memory, then write that to a screengraph. Is there a flaw in my logic there?

Do you only want to see the depth data on-screen or do you want to process it somehow? What is a screengraph?

I would like the ability to do both, visually see the results on the screen, as well as process the data (externally, as one might do with the results of a kinect point cloud data).

@morrowsend said:
I would like the ability to do both, visually see the results on the screen, as well as process the data (externally, as one might do with the results of a kinect point cloud data).

There's no built-in way to fetch the depth buffer from the GPU at the moment. Probably Renderer.readFrameBuffer() would have to be modified to allow that.
As for showing the results on screen, you can just take the depth texture and put it on an object, it should work fine.

I commented out the “setDepthBuffer” line (number 178) in the above code and just tried it with the texture alone, but the box is completely white: there is no depth shading, and you can’t even see the edges of the box. (Ignore the extra edge; my screenshot program wasn’t fast enough to catch the full frame at once.)

http://i.imgur.com/i3wG7.png
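For what it’s worth, a mostly-white image is what raw depth values tend to look like: the stored depth is a non-linear (hyperbolic) function of eye-space distance, so much of the frustum compresses toward 1.0. To get a smooth gradient you generally linearize the sampled value first. A small standalone sketch of that math (the class and method names here are my own, not jME API; near/far must match the frustum, which the code above sets to 1 and 1000):

```java
// Hypothetical helper: convert a raw [0,1] depth-buffer sample back to
// linear eye-space distance for a standard perspective projection.
class DepthUtil {
    static float linearizeDepth(float d, float near, float far) {
        float ndc = 2f * d - 1f;  // map [0,1] back to NDC z in [-1,1]
        return (2f * near * far) / (far + near - ndc * (far - near));
    }

    public static void main(String[] args) {
        // A surface ~5 units from the camera stores roughly 0.8008
        // with near=1, far=1000; linearized it comes back as ~5.0:
        System.out.println(linearizeDepth(0.8008f, 1f, 1000f));
    }
}
```

Dividing the linearized value by `far` then gives a 0–1 gradient you could actually see as shades of gray.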

If I do get the depthTexture working, won’t I have to manually apply this texture to everything in the scene?



I know there is a guy from the forums who has done this. Here’s a screenshot from a video of his simulator; the left screen shows the depth camera (from the robot’s perspective):

http://i.imgur.com/1sDUS.png



I’ve checked his posts, but haven’t been able to figure out how he did it, and he is no longer an active member of the forums. Here are the relevant posts:

http://hub.jmonkeyengine.org/groups/effects/forum/topic/heat-camera-in-jme/

http://hub.jmonkeyengine.org/groups/graphics/forum/topic/edge-detection-from-camera-image/

http://hub.jmonkeyengine.org/groups/graphics/forum/topic/jme3-zbuffer/



This is where I got the setDepthBuffer() code in my first post, which I can’t get to work.

http://hub.jmonkeyengine.org/groups/general-2/forum/topic/loading-and-accessing-a-framebuffer-for-each-viewport/



My idea is to take screenshots of the viewport at a set frequency, then process the screenshots with an external program, so I don’t need the raw data from the depth buffer except to create the image in the viewport.
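The fixed-frequency part of that plan is simple to bolt onto the existing `simpleUpdate`/`postFrame` loop. A sketch of the timing logic, standalone and independent of jME (the class and method names are hypothetical, not engine API):

```java
// Hypothetical timer for capturing screenshots at a set frequency:
// accumulate per-frame time (tpf) and fire when an interval has elapsed.
class CaptureTimer {
    private final float interval;  // seconds between captures
    private float accum = 0f;

    CaptureTimer(float capturesPerSecond) {
        this.interval = 1f / capturesPerSecond;
    }

    // Call once per frame with that frame's tpf; returns true when a
    // screenshot is due this frame.
    boolean shouldCapture(float tpf) {
        accum += tpf;
        if (accum >= interval) {
            accum -= interval;  // keep the remainder so timing doesn't drift
            return true;
        }
        return false;
    }
}
```

In `simpleUpdate(tpf)` you would then guard the expensive readback/write-to-disk step with `if (timer.shouldCapture(tpf)) { ... }` instead of doing it every frame.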

Still no luck on my end figuring out how to do this. Anyone have suggestions?