Loading and accessing a framebuffer for each viewport

Sorry to post this in multiple threads, but I’ve been at this for days and cannot believe that there isn’t an easy solution.



I have multiple viewports in a SimpleApplication, and I am trying to access (separately) the image data rendered from each viewport. I’ve used RobotView.setOutputFrameBuffer(), and I can read out the buffer and convert it to an image which I can view, but the image I get is the same for every viewport. This is my code to access the framebuffers and write the image to disk:



bb = BufferUtils.createByteBuffer(RobotCam.getWidth() * RobotCam.getHeight() * 4);
image = new BufferedImage(RobotCam.getWidth(), RobotCam.getHeight(), BufferedImage.TYPE_4BYTE_ABGR);
shotcount++;

if (frameb != null) {
    world.getRenderer().readFrameBuffer(frameb, bb);
    Screenshots.convertScreenShot(bb, image);
    if (shotcount % 20 == 0) {
        try {
            ImageIO.write(image, "png", new File(robot.name + shotcount + ".png"));
        } catch (IOException ex) {
            logger.log(Level.SEVERE, "Error while saving screenshot", ex);
        }
    }
}





I can’t work out how to put the image from each viewport into its associated framebuffer… Any help would be greatly appreciated…

You need one frameBuffer for each viewPort.

How do you initialize them?

I suspect you have only one fb, so every viewport renders into it before you convert it to a file; that would explain why you get the same image each time.

Yeah, that’s what I suspect as well. I can’t work out how to initialize them correctly. What I’m doing now is:



RobotView = world.getRenderManager().createMainView(robot.name + " View", RobotCam);

RobotView.setOutputFrameBuffer(frameb);



and then the above code to update it. The step that’s missing (I think) is to actually load the framebuffer with data from the viewport, since I think it’s null at the moment…

I found this thread, which seems to be doing what I’m trying to do:



http://hub.jmonkeyengine.org/groups/general-2/forum/topic/multiple-applications-sharing-one-rootnode/



Is it really necessary to implement these two new classes (AppCanvas and GuiState) just to get the image data from each viewport?

No, it’s not necessary.

What I was asking is how you initialize the frameb variable.



What you should do is something like this (pseudocode):



// One framebuffer per viewport, each with its own color texture (and a depth texture if you need it).
FrameBuffer fb1 = new FrameBuffer(viewport1Width, viewport1Height, 1); // width, height, samples (1 = no multisampling)
Texture2D tex1 = new Texture2D(viewport1Width, viewport1Height, Format.RGBA8);
Texture2D depth1 = new Texture2D(viewport1Width, viewport1Height, Format.Depth); // only if you need the depth buffer
fb1.setColorTexture(tex1);
fb1.setDepthTexture(depth1);

viewport1.setOutputFrameBuffer(fb1);

FrameBuffer fb2 = new FrameBuffer(viewport2Width, viewport2Height, 1);
Texture2D tex2 = new Texture2D(viewport2Width, viewport2Height, Format.RGBA8);
Texture2D depth2 = new Texture2D(viewport2Width, viewport2Height, Format.Depth); // only if you need the depth buffer
fb2.setColorTexture(tex2);
fb2.setDepthTexture(depth2);

viewport2.setOutputFrameBuffer(fb2);





With this, tex1 and depth1 will hold the rendered scene and depth buffer of viewport1, and tex2 and depth2 the rendered scene and depth buffer of viewport2.
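For reference, a self-contained version of that setup might look like the following. This is a sketch assuming jME3 3.x of this era (where setColorTexture/setDepthTexture are the current API); the class and method names here are mine:

import com.jme3.renderer.ViewPort;
import com.jme3.texture.FrameBuffer;
import com.jme3.texture.Image.Format;
import com.jme3.texture.Texture2D;

public class OffscreenSetup {

    // Gives a viewport its own offscreen framebuffer and returns the color texture.
    public static Texture2D attachOffscreenBuffer(ViewPort vp, int width, int height) {
        FrameBuffer fb = new FrameBuffer(width, height, 1); // width, height, samples (1 = no multisampling)
        Texture2D colorTex = new Texture2D(width, height, Format.RGBA8);
        Texture2D depthTex = new Texture2D(width, height, Format.Depth);
        fb.setColorTexture(colorTex); // the scene's colors render into this texture
        fb.setDepthTexture(depthTex); // the scene's depth values render into this texture
        vp.setOutputFrameBuffer(fb);  // redirect the viewport away from the screen
        return colorTex;
    }
}

The point is that each viewport gets its own FrameBuffer instance; sharing a single buffer between viewports is exactly what makes every readback return the same image.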


Wow, thank you so much, that’s exactly what I was looking for! So I initialize as you suggest, and now I’m trying to get a look at the image data that’s embedded in the Texture2D objects you helped me extract:



FrameBuffer buf = new FrameBuffer(w, h, 0);
imageData = new Texture2D(w, h, Format.RGBA8);
depthData = new Texture2D(w, h, Format.Depth);
buf.setColorTexture(imageData);
buf.setDepthTexture(depthData);
RobotView.setOutputFrameBuffer(buf);



if (imageData.getImage() != null) {

    // ** this is the line that throws
    BufferedImage image = ImageToAwt.convert(imageData.getImage(), false, false, 0);
    shotcount++;

    if (shotcount % 20 == 0) {
        try {
            ImageIO.write(image, "png", new File(robot.name + shotcount + ".png"));
        } catch (IOException ex) {
            logger.log(Level.SEVERE, "Error while saving screenshot", ex);
        }
    }
}



But the line marked ** throws a NullPointerException and I can’t work out why… I hope that conversion method is being used correctly?

This can’t work: the scene has to be rendered first so that the framebuffer has data in it.

The code I gave you is only the framebuffer initialization; your image conversion has to be done after the scene has been rendered.
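One way to hook in at the right time is a SceneProcessor, as jME3’s TestRenderToMemory does: its postFrame callback runs after the viewport has been drawn. A minimal sketch, assuming a jME3 3.x SceneProcessor of this era (the class and field names are mine):

import java.awt.image.BufferedImage;
import java.nio.ByteBuffer;

import com.jme3.post.SceneProcessor;
import com.jme3.renderer.RenderManager;
import com.jme3.renderer.ViewPort;
import com.jme3.renderer.queue.RenderQueue;
import com.jme3.texture.FrameBuffer;
import com.jme3.util.BufferUtils;
import com.jme3.util.Screenshots;

public class FrameGrabber implements SceneProcessor {

    private RenderManager rm;
    private final FrameBuffer buf;     // the offscreen buffer set on the viewport
    private final ByteBuffer cpuBuf;   // CPU-side copy of the pixels
    private final BufferedImage image; // AWT image rebuilt from cpuBuf

    public FrameGrabber(FrameBuffer buf, int width, int height) {
        this.buf = buf;
        this.cpuBuf = BufferUtils.createByteBuffer(width * height * 4);
        this.image = new BufferedImage(width, height, BufferedImage.TYPE_4BYTE_ABGR);
    }

    public void initialize(RenderManager rm, ViewPort vp) { this.rm = rm; }
    public void reshape(ViewPort vp, int w, int h) { }
    public boolean isInitialized() { return rm != null; }
    public void preFrame(float tpf) { }
    public void postQueue(RenderQueue rq) { }

    // Called after the viewport has been rendered, so the framebuffer is filled.
    public void postFrame(FrameBuffer out) {
        rm.getRenderer().readFrameBuffer(buf, cpuBuf);    // copy the pixels off the GPU
        synchronized (image) {
            Screenshots.convertScreenShot(cpuBuf, image); // ByteBuffer -> BufferedImage
        }
    }

    public void cleanup() { }
}

You would attach it with something like RobotView.addProcessor(new FrameGrabber(buf, w, h)) after setting the output framebuffer.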



Why do you need a BufferedImage?

I am simulating a vision camera and a depth camera on a robot for an AI simulation project. I will need the data from both ‘cameras’ to process perception information for the bot, and I thought that a BufferedImage would be a suitable format…



I’ve now used a class which implements SceneProcessor, in the same way as TestRenderToMemory does, and this allows the framebuffer to be filled after the scene is rendered. I get the vision image data out of the framebuffer like this:



world.getRenderer().readFrameBuffer(buf, cpuBuf);

synchronized (image) {
    Screenshots.convertScreenShot(cpuBuf, image);
}



So now I know that the rendered scene is getting put into the framebuffer. To get the depth information, I should be able to dig it out of the Texture2D called depthData, assuming that I initialized it in the same way as we discussed?

Yes, exactly. Bind your depthData texture to your framebuffer with setDepthTexture and you’ll have your depth buffer.
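If you want a quick visual sanity check of what ends up in depthData, one option is to draw the texture on the GUI node. A sketch, assuming jME3’s Picture helper and that app is your SimpleApplication (how raw depth values display on screen can vary by format and hardware):

import com.jme3.ui.Picture;

// Debug sketch: show the depth texture in the lower-left corner of the screen.
// depthData is the Texture2D you attached with setDepthTexture().
Picture depthPreview = new Picture("depth preview");
depthPreview.setTexture(app.getAssetManager(), depthData, false);
depthPreview.setWidth(256);
depthPreview.setHeight(256);
depthPreview.setPosition(0, 0);
app.getGuiNode().attachChild(depthPreview);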