Edge Detection from Camera Image

Hi,



I’m trying to take the image produced by a camera object, convert it to a bitmap image, and then run an edge detection algorithm on it and render the resultant bitmap to a viewport. I’ve been looking at TestRenderToMemory but am having trouble understanding how to proceed.



If I have a camera object with an associated viewport which produces the view I need, and I use ViewPort.setOutputFrameBuffer, I think that puts the rendered image into a framebuffer object, but I’m not sure how to convert this to a bitmap, nor how to get a viewport to display this bitmap…



Any hints or thoughts would be greatly appreciated…

If you are trying to see how to get it as a Java image, then you might want to look at the code for: http://hub.jmonkeyengine.org/javadoc/com/jme3/app/state/ScreenshotAppState.html



It converts the frame buffer to an AWT image and saves it to disk… should show you how to get the image.
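
In rough outline, the conversion boils down to something like this (just a sketch; renderer, frameBuffer, width and height stand in for whatever renderer and framebuffer you have):

// Read the framebuffer back from the GPU into a byte buffer
ByteBuffer outBuf = BufferUtils.createByteBuffer(width * height * 4);
renderer.readFrameBuffer(frameBuffer, outBuf);

// Convert the raw bytes to an AWT image and save it
BufferedImage awtImage = new BufferedImage(width, height, BufferedImage.TYPE_4BYTE_ABGR);
Screenshots.convertScreenShot(outBuf, awtImage);
ImageIO.write(awtImage, "png", new File("screenshot.png"));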



…however, to me edge detection works better when you also have the z-buffer… which that won’t get you. If I were to do edge detection, I might do it in a post-processing shader where that stuff is readily available.

Thanks for quick reply! I will look at that code to understand how to get the image.



The other question is how to take that (or other) image and render it to a viewport?



Sorry for my noobness, what’s a post-processing shader? How do I learn about that?

The post-processing shaders run after the scene has been rendered. They have access to the depth buffer and color buffer as rendered so far and can write new values out to the color buffer. There are numerous examples. I tried to comment the heck out of the DepthOfFieldFilter to make sure someone could understand it: http://hub.jmonkeyengine.org/javadoc/com/jme3/post/filters/DepthOfFieldFilter.html



More the realm of edge detection is the cartoon edge filter: http://hub.jmonkeyengine.org/javadoc/com/jme3/post/filters/CartoonEdgeFilter.html



There are tests for these in the jME3 bundle that show how to use them.
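
Wiring one of them into a viewport follows the usual FilterPostProcessor pattern, roughly like this (assetManager and viewPort being the usual SimpleApplication fields):

FilterPostProcessor fpp = new FilterPostProcessor(assetManager);
fpp.addFilter(new CartoonEdgeFilter());
// attach the processor to the viewport you want filtered
viewPort.addProcessor(fpp);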

Hey so I'm using the code you suggested from ScreenshotAppState like this:



RobotView.setOutputFrameBuffer(fb);

// Read the framebuffer back into a byte buffer and convert it to an AWT image
bb = BufferUtils.createByteBuffer(RobotCam.getWidth() * RobotCam.getHeight() * 4);
image = new BufferedImage(RobotCam.getWidth(), RobotCam.getHeight(), BufferedImage.TYPE_4BYTE_ABGR);
world.getRenderer().readFrameBuffer(fb, bb);
Screenshots.convertScreenShot(bb, image);

// Write the image to disk
try {
    ImageIO.write(image, "png", new File("Goddamn.png"));
} catch (IOException ex) {
    logger.log(Level.SEVERE, "Error while saving screenshot", ex);
}





Which writes a file with the image to disk. The problem, however, is that it's not the image as seen by the viewport RobotView (top line); it's the entire image with multiple viewports. Is it possible to just get the image from a particular camera/viewport?

…I’m afraid that is something that I don’t know. I’m not familiar enough with the viewport code to know how the above might be adapted to fit.

Ok no problem,



One more question, where within the DepthOfFieldFilter.java is there access to the depth buffer? That would be amazingly useful!



Cheers

DepthOfFieldFilter.java is just setting up the GLSL shader that will run. Because it reports that it wants depth data, a depth texture is made available to the fragment shader.



The real work is done in the fragment shader:

http://code.google.com/p/jmonkeyengine/source/browse/trunk/engine/src/core-data/Common/MatDefs/Post/DepthOfField.frag

OK, now I see a float called zBuffer, which is heartening. But is this value just for one pixel? How do I access the entire depth image with all the pixels?

The fragment shader is executed on the GPU once per pixel. The entire depth buffer is in the depth texture that is pulled from to get the z value.



Note: if you are not trying to write GLSL shaders then none of that is helpful for you because it’s all done down on the graphics card and, theoretically, the textures all live there where they are nice and efficient.



…it doesn’t speak to how to bring any of it into “CPU space”.



I only mentioned it because if I were doing edge detection, I'd do it in a shader because it will be way more efficient than in Java code on the CPU… never mind the big buffer copies to get the data in the first place.

If you want edge detection you should look into the CartoonEdgeFilter. Maybe you can even use it as it is.



On a side note, if you want to go into developing your own filters, you can have the depth buffer by overriding the isRequiresDepthTexture() method in the filter and making it return true (see how it's done in the cartoon edge filter).

The depth texture will be passed to the shader as the m_DepthTexture uniform.
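
A minimal sketch of such a filter could look like this (the material definition "MatDefs/MyEdge.j3md" is just a placeholder for your own shader):

public class MyEdgeFilter extends Filter {

    public MyEdgeFilter() {
        super("MyEdgeFilter");
    }

    @Override
    protected boolean isRequiresDepthTexture() {
        // Tells the FilterPostProcessor to pass the depth buffer
        // to the shader as the m_DepthTexture uniform.
        return true;
    }

    @Override
    protected void initFilter(AssetManager manager, RenderManager renderManager,
                              ViewPort vp, int w, int h) {
        material = new Material(manager, "MatDefs/MyEdge.j3md");
    }

    @Override
    protected Material getMaterial() {
        return material;
    }
}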

What I really need to do is extract both the visual image and the depth image from a number of camera/viewport pairs, and from what you have said, developing a filter that extends ColorOverlayFilter and CartoonEdgeFilter will allow me to do this using the methods



myFilter().getRenderFrameBuffer() ----> to get the visual image



and

public boolean isRequiresDepthTexture() {
    return true;
}

myFilter().getDepthTexture() to get the depth texture…



Is this right?



Thanks

I don’t know how much this is related to your other post http://hub.jmonkeyengine.org/groups/general-2/forum/topic/loading-and-accessing-a-framebuffer-for-each-viewport/



Depending on what you really want to do, maybe you should choose the way I describe in my last post.

However, if you stick with the filter approach, here are some hints:


adam said:
What I really need to do is extract both the visual image and the depth image from a number of camera/viewport pairs, and from what you have said, developing a filter that extends ColorOverlayFilter and CartoonEdgeFilter will allow me to do this using the methods

myFilter().getRenderFrameBuffer() ----> to get the visual image

and
public boolean isRequiresDepthTexture() {
    return true;
}
myFilter().getDepthTexture() to get the depth texture…

Is this right?

Thanks

Not exactly. The FilterPostProcessor feeds all the filters with the rendered scene as a uniform sent to the shader (m_Texture). If isRequiresDepthTexture() returns true on the filter, it's fed with the depth buffer too (m_DepthTexture).

Once in your shader, you'll have these two textures, and you can do whatever you want with them.

However, I think there is an issue with filters and multiple viewports: what happens if you try to add, say, a ColorOverlayFilter on viewport1 and a CartoonEdgeFilter on viewport2?

Thanks so much for your reply… To be clear:


  1. It looks like, from your post on the other thread, that a filter isn't necessary for what I need, and I'll just extract from the viewport's output framebuffer as you detail.
  2. And you are right, there is an issue with a filter being applied to the entire screen and not just the viewport it is attached to, but since I'm not going to use filters this is not a drama for me.