Image processing for AI using render-to-texture - getImage() not working

Hi everyone,



So far it's been smooth sailing, but I've run into an odd problem trying to get the pixel color data from a texture.



The texture is being generated by a TextureRenderer. I am successfully applying this texture to a quad (based on the jmetest.renderer.TestCameraMan example), so I know the image information must be present somewhere in the texture.



In simpleRender() I have the following code. Just for testing purposes I want to print the reference to the image data of the textureVirtuaCam texture.



protected void simpleRender() {
        tRenderer.render(vehicleWorld, textureVirtuaCam);
        //Image Processing of texture textureVirtuaCam here
        System.out.println(textureVirtuaCam.getImage());
}




My problem is that "textureVirtuaCam.getImage()" always returns null.

The purpose of this is image processing for an AI implementation. If someone can tell me what I've got conceptually wrong, or if there is an easier way of doing this, please let me know. I haven't found any posts relating to getImage() so far, nor have I found a scanline or image-processing implementation.

Any help is greatly appreciated. If you need more code samples, let me know.

cheers,

Micah

Render-to-texture is a feature that is used for handling offscreen surfaces on the GPU, so you cannot access the data on the CPU. For your application I suggest you take a look at the method Renderer.grabScreenContents().
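
For reference, a rough sketch of what a grabScreenContents() readback can look like. The exact overload differs between jME versions (some take an extra image-format argument), so treat the signature below as an assumption and check the Renderer javadoc:

import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Hedged sketch: read the main framebuffer back to the CPU once per frame.
// Assumes a grabScreenContents(ByteBuffer, x, y, width, height) overload.
int width = display.getWidth();
int height = display.getHeight();
ByteBuffer pixels = ByteBuffer.allocateDirect(width * height * 4).order(ByteOrder.nativeOrder()); // RGBA
display.getRenderer().grabScreenContents(pixels, 0, 0, width, height);
pixels.rewind(); // pixels arrive bottom row first, ready for CPU-side processing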

Thanks Momoko_Fan for the quick reply!



I have looked at “Renderer.grabScreenContents()”, but it seems that this is meant to read byte values from the current (main) renderer/camera. I am indeed trying to offscreen-render a secondary camera from my AI agent’s perspective (see the screenshot below for the setup) and retrieve the byte/pixel values from that camera.







In the meantime I have found this sample in the jME wiki, which I hope might do what I am looking for:



http://www.jmonkeyengine.com/wiki/doku.php?id=offscreen_renderer





Although I am not sure if grabScreenContents() or a similar method is available for their implementation of offscreen rendering in “LWJGLOffscreenRenderer”.



I will try to implement my simulation using this OffscreenRenderer example next and will keep you updated on my successes/troubles.



Thanks again for any further advice,



Micah

Have you had a look at http://jmonkeyengine.googlecode.com/svn/trunk/src/jmetest/renderer/TestCameraMan.java ?  It sounds very close to what you are looking to do.

Hi renanse,



yep, I actually used TestCameraMan.java as the basis for my method, and that gives me the result you see in the screenshot. The perspective of the robot is rendered onto the quad surface in real time - just like TestCameraMan.java does.



My problem now is that I need to retrieve the actual colour values from the texture that is being rendered to the quad, and according to what Momoko_Fan said, this poses problems as the "image" data of that texture is inaccessible to the CPU.



I'm now basically wondering whether:


  1. I can somehow copy the "image" data from the GPU to something that is accessible to the CPU



    or


  2. I can use a different render class other than TextureRenderer to render the robot's perspective and

    a. Display the rendered perspective (as a texture on a quad surface, or in a separate window)

    b. Get the image byte data from this renderer using grabScreenContents()



Thanks again for any further advice.


jME doesn't support what you're trying to do. You can only access the pixels of the main screen using grabScreenContents(); you can't access the TextureRenderer pixels.

You can either edit TextureRenderer to support this feature, or you can hack it in your code by making direct OpenGL calls.
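
For example, a rough, untested sketch of the direct-OpenGL route, assuming LWJGL, a "width" x "height" RGBA texture that has already been rendered to by the TextureRenderer, and that this runs on the OpenGL thread:

import java.nio.ByteBuffer;
import org.lwjgl.BufferUtils;
import org.lwjgl.opengl.GL11;

// Read the texture contents back from the GPU after tRenderer.render(...)
ByteBuffer pixels = BufferUtils.createByteBuffer(width * height * 4); // RGBA, placeholder dimensions
GL11.glBindTexture(GL11.GL_TEXTURE_2D, textureVirtuaCam.getTextureId());
GL11.glGetTexImage(GL11.GL_TEXTURE_2D, 0, GL11.GL_RGBA, GL11.GL_UNSIGNED_BYTE, pixels);
// "pixels" now holds the texture data, bottom row first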

Hi Momoko_Fan,



thanks again for the advice. Although this is not directly supported by jME, the “offscreen_renderer” class I cited before seems to be the kind of modified TextureRenderer that you are suggesting.



http://www.jmonkeyengine.com/wiki/doku.php?id=offscreen_renderer



If it’s not too much to ask, could you have a look and tell me whether you think that implementation will make the pixels accessible? It could save me a lot of time, as I’ll start working on applying the OffscreenRenderer today.





Notes on why I think “OffscreenRenderer” might solve my problem:


  1. The usage example from that page makes me think that I should be able to pass my rootNode (or the node containing all the objects I want the robot to 'see') to the OffscreenRenderer:




osRenderer = ((LWJGLDisplaySystem) display).createOffscreenRenderer(320, 320);
osRenderer.setBackgroundColor(new ColorRGBA(.667f, .667f, .851f, 1f));
if ( osRenderer.isSupported() ) {
    osRenderer.getCamera().setFrustumPerspective(45f, 4f/3f, 1, 200);
    osRenderer.getCamera().setLocation(new Vector3f(0, 0, 75f));
} else {
    LOGGER.debug( "Offscreen rendering is not supported!");
}
osRenderer.render(someNode);
IntBuffer buffer = osRenderer.getImageData();




2. The part of the example where the image data is copied into an SWT Image looks exactly like the scanline loop I'm looking for (the (height - y - 1) index flips the rows, since OpenGL reads the framebuffer back bottom-up):

ImageData imgData = new ImageData(width, height, 32, new PaletteData(0xFF0000, 0x00FF00, 0x0000FF));
for (int x = 0; x < width; x++) {
    for (int y = 0; y < height; y++) {
        imgData.setPixel(x, y, buffer.get((height - y - 1) * width + x));
    }
}
org.eclipse.swt.graphics.Image image = new Image (org.eclipse.swt.widgets.Display.getCurrent(), imgData);

Yeah the OffscreenRenderer class is what you're looking for. It works like TextureRenderer but lets you access the rendered image directly on the CPU.

Brilliant! Thanks for checking, Momoko_Fan. Also a big thank you to 'The Librarian', who I believe is the author of OffscreenRenderer. Working on it as we speak - I'll post when I'm successful.



Micah



Finally got it working  :D I'm now offscreen-rendering the robot's perspective, applying post-processing (in this case lateral inhibition / edge detection) and displaying the result in a separate SWT shell.

If anyone else needs help with this, I think I can help now. Thanks again to everyone who helped me.

This is what I did in simpleInitGame() to create the OffscreenRenderer:




        /* Create the OffscreenRenderer */
        osRenderer = ((LWJGLDisplaySystem) display).createOffscreenRenderer(320, 320);
        osRenderer.setBackgroundColor(new ColorRGBA(0f, 0f, 0f, 1f));
        if (osRenderer.isSupported()) {
            osRenderer.getCamera().setFrustumPerspective(45f, 4f / 3f, 1, 200);
            osRenderer.getCamera().setLocation(new Vector3f(0, 0, 75f));
        } else {
            //LOGGER.debug("Offscreen rendering is not supported!");
        }

        /* I attached it to a Node so I can make that node follow the Vehicle */
        camNodeOS = new CameraNode("Camera Node Offscreen", osRenderer.getCamera());
        camNodeOS.setLocalTranslation(new Vector3f(0, 50, -50));
        camNodeOS.updateGeometricState(0, true);

        camNodeOS.setLocalTranslation(-1, 2, 0);
        Quaternion q2 = new Quaternion().fromAngleNormalAxis(90 * FastMath.DEG_TO_RAD, new Vector3f(0, 1, 0));
        camNodeOS.setLocalRotation(q2);

        // Attach the CameraNode to the Vehicle
        vehicle.attachChild(camNodeOS); // Remember "vehicle" is attached to the rootNode






Here is the code in simpleInitGame() to create the second shell that displays the result of processing the offscreen image. The "image" is an org.eclipse.swt.graphics.Image that is accessible to both simpleRender() and simpleInitGame().



        display2 = new Display();
        shell = new Shell(display2, SWT.SHELL_TRIM | SWT.DOUBLE_BUFFERED);
        shell.setSize(320, 320);
        shell.addPaintListener(new PaintListener() {
            public void paintControl(PaintEvent e) {
                if (image != null) {
                    e.gc.drawImage(image, 0, 0);
                }
            }
        });
        shell.open();
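
One thing to watch out for (not shown above): SWT only repaints the shell if its event loop gets pumped. Assuming everything runs on the thread that created "display2", something like this once per frame keeps the shell responsive:

// Drain any pending SWT events so the paint listener actually fires.
// Assumes this runs on the same thread that created display2.
while (display2.readAndDispatch()) {
    // keep processing until the event queue is empty
}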





In simpleRender() I tell the OffscreenRenderer to render and then retrieve the image data from it:

          
osRenderer.render(rootNode);
IntBuffer buffer = osRenderer.getImageData();

// Create a new ImageData object for the retrieved (and possibly modified) pixels
imgData = new ImageData(width, height, 32, new PaletteData(0xFF0000, 0x00FF00, 0x0000FF));

/* Later I access the pixels via */
for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {

        int p = buffer.get((height - y - 1) * width + x);

        /* Do Postprocessing on Pixels Here */

        // Write the pixel to the new imgData object
        imgData.setPixel(x, y, p);
    }
}



// Then update the secondary shell using the new "imgData"

// Dispose of the previous SWT image to free its resources
if (image != null) {
    image.dispose();
}
// Create the new image from the processed pixel data
image = new org.eclipse.swt.graphics.Image(display2, imgData);

// Redraw the shell we created in simpleInitGame()
if (!shell.isDisposed()) {
    shell.redraw();
}
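
In case it helps anyone, here is a minimal sketch of what the "/* Do Postprocessing on Pixels Here */" step can look like, using a crude horizontal-difference edge detector (a simplified stand-in, not my exact lateral-inhibition filter; pixels are assumed to be packed as 0x00RRGGBB, matching the PaletteData masks above):

// Simple horizontal-gradient edge detector over the IntBuffer.
for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
        int p  = buffer.get((height - y - 1) * width + x);
        int pR = buffer.get((height - y - 1) * width + Math.min(x + 1, width - 1));

        // Approximate luminance of the pixel and its right-hand neighbour
        int lum  = (((p  >> 16) & 0xFF) + ((p  >> 8) & 0xFF) + (p  & 0xFF)) / 3;
        int lumR = (((pR >> 16) & 0xFF) + ((pR >> 8) & 0xFF) + (pR & 0xFF)) / 3;

        // Edge strength = scaled absolute difference of neighbouring luminances
        int edge = Math.min(255, Math.abs(lum - lumR) * 4);

        // Write a grey-scale edge image into imgData
        imgData.setPixel(x, y, (edge << 16) | (edge << 8) | edge);
    }
}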


Hello mrosenki, I've been working on a project like this and I've been following your code, but I don't understand what "osRenderer" is.



Could you please tell me what type of variable it is?

The "osRenderer" is an object of OffscreenRenderer, you can download the source code here:

http://www.jmonkeyengine.com/wiki/doku.php?id=offscreen_renderer

Hi Momoko_Fan



thanks for the advice.

I already downloaded the source code, added it to jME, and modified the TestCameraMan code.



I added mrosenki's code

and added these imports:

import java.nio.IntBuffer;

import org.eclipse.swt.SWT;
import org.eclipse.swt.events.PaintEvent;
import org.eclipse.swt.events.PaintListener;
import org.eclipse.swt.graphics.Image;
import org.eclipse.swt.graphics.ImageData;
import org.eclipse.swt.graphics.PaletteData;
import org.eclipse.swt.widgets.Display;
import org.eclipse.swt.widgets.Shell;

import com.jme.renderer.OffscreenRenderer;
import com.jme.renderer.lwjgl.LWJGLOffscreenRenderer;
import com.jme.system.lwjgl.LWJGLDisplaySystem;



My problem is that when I run the application it always prints:

"ADVERTENCIA: FBO not supported." ("ADVERTENCIA" is just the Spanish-locale logger prefix for WARNING)



What can I do?



If you could help me, I would be very grateful.

Your video card does not support FBOs (framebuffer objects), the OpenGL feature the OffscreenRenderer relies on.

This is common with onboard Intel cards and some older cards.

There's a way to do this using Pbuffers; however, no easy solution is available at the moment (unless you want to write one yourself).

The code might work on a newer PC.
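
If you want to double-check what your driver exposes, here is a quick sketch (assuming LWJGL 2.x; it must be called on the thread that owns the GL context, i.e. inside the jME render loop):

import org.lwjgl.opengl.GLContext;

// Prints whether the EXT framebuffer-object extension is available on this context.
boolean fboSupported = GLContext.getCapabilities().GL_EXT_framebuffer_object;
System.out.println("FBO supported: " + fboSupported);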

Ok

Thanks, Momoko_Fan

Hi Momoko_Fan,



do you know how to create a bitmap in Java?

http://tinyurl.com/382z89a
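
For what it's worth, a minimal sketch using the standard java.awt.image API (the variable names are just placeholders for your own pixel data):

import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

// Build a bitmap from 0x00RRGGBB-packed pixels and save it as a PNG.
// "width", "height" and "pixels" stand in for your own data.
BufferedImage img = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
        img.setRGB(x, y, pixels[y * width + x]);
    }
}
ImageIO.write(img, "png", new File("frame.png")); // throws IOException; handle or declare it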

Hi everyone,

I'm trying to do something like this: I've used TestCameraMan to display the image of another camera and show two objects.

What I need now is to segment the image to identify where each object is and to know when they are touching.

I don't know if I can use mrosenki's code for this.



I don't know exactly how “OffscreenRenderer” works.

I've found this page:

https://wiki.jmonkeyengine.org/legacy/doku.php/jme2:offscreen_renderer?s[]=process&s[]=image&s[]=color

From what I gather, I need to create those classes and add them to their packages (OffscreenRenderer.java, LWJGLOffscreenRenderer.java, LWJGLDisplaySystem.java), because I can't find them in jME. Am I right?



The problem is that for LWJGLDisplaySystem.java the code seems wrong or incomplete,

because the methods isCreated() and getRenderer() don't exist.

mrosenki, how did you make it work?



Thank you all for your further help.