Set initial background image and depth buffer on frame

Hi, I am trying to render a background image before the scene is rendered, and would also like to set the z-buffer data before the scene is rendered. What is the simplest way to do this?

My image will be the raw RGBA or BGRA pixel colors. So far I’m thinking I’ll need to implement a SceneProcessor that will own a pre-view ViewPort (created with RenderManager.createPreView()) that will be rendered to in the SceneProcessor’s postQueue() method. But I don’t know how to simply render a 2D image to the ViewPort, or set the initial z-buffer.

What are you trying to accomplish? Rendering a skybox?

I want to render video frames that come from my own separate API, into the scene before anything else. The image should not move with the jme Camera; it should always fill the view port.

Separately, I want to be able to set the z-buffer samples to arbitrary values before the scene is rendered.

For the z-buffer, you can read a value from a texture and write it into gl_FragDepth in the fragment shader.
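If it helps, a minimal fragment shader along those lines might look like the sketch below. The texture name m_DepthTex and the surrounding .j3md material definition are assumptions for illustration, not something that ships with jME:

```glsl
uniform sampler2D m_DepthTex; // hypothetical texture holding your depth samples
varying vec2 texCoord;

void main() {
    // Read a depth value from the texture and write it to the depth buffer.
    // If you only want to seed the z-buffer, the color output can be anything
    // (e.g. discard color writes via the material's render state).
    float d = texture2D(m_DepthTex, texCoord).r;
    gl_FragDepth = d;
    gl_FragColor = vec4(0.0);
}
```

You would render this on a fullscreen quad before the scene, with depth writing enabled.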

I guess I’m really looking for how to code it: which jme classes/methods to use, and how to hook into the render chain where I need to. For example, I feel like I’m getting closer with the following code, but I don’t know how to finish it off, or I may be way off from the best way to do it:

    ViewPort videoViewPort = renderManager.createPreView("Video View", camera);
    // I think this will cause the created pre-view to be rendered automatically
    // by the RenderManager? And since it's a pre-view, it will be rendered
    // before the scene objects.
    videoViewPort.addProcessor(new VideoProcessor(...));
    ...

class VideoSceneProcessor implements SceneProcessor {

    public void postQueue(RenderQueue rq) {
        // Render the current video frame
        renderManager.getRenderer().renderImage(0, 0, width, height, frameImage);
        // THIS METHOD DOESN'T EXIST. How do I draw an image (preferably
        // without directly depending on LWJGL)? Do I have to use the Picture class?

        // Set initial z-buffer samples
        renderManager.getRenderer().setZBuffer(byteBuffer); // This method doesn't exist either...
    }
}

Your help is appreciated! Thanks.

Yes, use the Picture class. But either way you’ll need to make a shader to modify the z-buffer.
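One detail that is easy to miss when feeding raw frames to a Picture’s texture: the byte[] you fill each frame has to be copied into the direct ByteBuffer that backs the Image, and the buffer rewound, or the texture never sees the new data. Here is a minimal, engine-free sketch of that copy; the jME-side calls are indicated as comments, and names like frameImage and videoTexture are just placeholders matching the code in this thread:

```java
import java.nio.ByteBuffer;

public class FrameCopy {
    public static void main(String[] args) {
        int w = 320, h = 240;
        byte[] bytes = new byte[w * h * 4]; // RGBA8

        // Fill with random "TV snow" pixels, alpha fully opaque.
        for (int i = 0; i < bytes.length; i += 4) {
            bytes[i]     = (byte) (Math.random() * 255); // R
            bytes[i + 1] = (byte) (Math.random() * 255); // G
            bytes[i + 2] = (byte) (Math.random() * 255); // B
            bytes[i + 3] = (byte) 255;                   // A
        }

        // The direct buffer that backs the jME Image must be refilled and
        // rewound every frame, otherwise the texture keeps showing whatever
        // was uploaded first.
        ByteBuffer frameData = ByteBuffer.allocateDirect(bytes.length);
        frameData.clear();
        frameData.put(bytes);
        frameData.flip();

        // In jME you would then do (not runnable here without the engine):
        //   frameImage.setData(frameData);
        //   videoTexture.setImage(frameImage);
        //   picture.setTexture(assetManager, videoTexture, false);

        System.out.println(frameData.remaining()); // → 307200 bytes ready for upload
    }
}
```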

Ok, so I’m trying to use the Picture class, right now just focusing on the video frames without the z-buffer. For now I’m generating random RGB values for the video frames, which would cause a “color TV snow” look if displayed; however, I have been unable to render it. What am I missing? Here is my code (the commented-out code in postQueue() is other code I used to try to get it to work):

public class MixedRenderer implements SceneProcessor {

    private ByteBuffer frameData;

    /** Picture which shows the current video frame. */
    private Picture picture;

    /** Scene Node that the video frame Picture is attached to. */
    private Node videoFrameNode;

    /** Pre-view ViewPort for displaying video in the background. */
    private ViewPort videoViewPort;

    /** Display width/height. */
    private int width;
    private int height;

    private AssetManager assetManager;

    /** Texture which holds the video frame. */
    private Texture2D videoTexture;

    private RenderManager rm;
    private ViewPort vp;

    /** Camera that views the video frame; does not move with the scene camera. */
    private Camera videoCam;

    /** Image which holds the current video frame. */
    private Image frameImage;

    /** Video frame pixel data. */
    private byte[] bytes;

    private boolean initialized = false;

    public MixedRenderer(AssetManager assetManager, int width, int height) {
        this.width = width;
        this.height = height;
        this.assetManager = assetManager;
    }

    public void initialize(RenderManager rm, ViewPort vp) {
        this.rm = rm;
        this.vp = vp;
        this.videoCam = new Camera(width, height);
        picture = new Picture("Video Frame", false);
        picture.setPosition(0, 0);

        videoFrameNode = new Node("Video Node");

        videoViewPort = rm.createPreView("Video View", videoCam);
        bytes = new byte[320 * 240 * 4];
        frameData = ByteBuffer.allocateDirect(bytes.length);
        frameImage = new Image(Image.Format.RGBA8, 320, 240, frameData);
        videoTexture = new Texture2D(320, 240, Image.Format.RGBA8);
        initialized = true;
    }

    public void reshape(ViewPort vp, int w, int h) {
    }

    public boolean isInitialized() {
        return initialized;
    }

    public void preFrame(float tpf) {
    }

    public void postQueue(RenderQueue rq) {
        // videoCam.setProjectionMatrix(null);
        Camera prevCam = rm.getCurrentCamera();
        rm.setCamera(videoCam, false);
        // rm.getRenderer().clearBuffers(true, true, true);
        // rm.getRenderer().setDepthRange(1, 1);

        for (int i = 0; i < bytes.length - 3; i += 4) {
            bytes[i] = (byte) (Math.random() * 255 - 128);
            bytes[i + 1] = (byte) (Math.random() * 255 - 128);
            bytes[i + 2] = (byte) (Math.random() * 255 - 128);
            bytes[i + 3] = (byte) (Math.random() * 255 - 128);
        }
        frameData.clear();
        frameData.put(bytes); // copy the new frame into the direct buffer
        frameData.flip();
        frameImage.setData(frameData); // needed?
        videoTexture.setImage(frameImage); // needed?
        picture.setTexture(assetManager, videoTexture, false);
        vp.getQueue().renderQueue(Bucket.Sky, rm, videoCam);
        rm.setCamera(prevCam, false);
    }

    public void postFrame(FrameBuffer out) {
    }

    public void cleanup() {
    }
}
And in the main class which extends SimpleApplication we have:

    public void simpleInitApp() {
        . . .
        mixedRenderer = new MixedRenderer(assetManager, settings.getWidth(), settings.getHeight());
        viewPort.addProcessor(mixedRenderer);
    }

You’re supposed to attach the picture to the Gui bucket, not the Sky bucket.
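For what it’s worth, a sketch of that change (untested; it assumes the fields from the MixedRenderer code above, and gives the Picture an explicit size so it fills the pre-view):

```java
import com.jme3.asset.AssetManager;
import com.jme3.renderer.ViewPort;
import com.jme3.texture.Texture2D;
import com.jme3.ui.Picture;

// Sketch: Picture places itself in the Gui bucket, so instead of pushing it
// through the Sky bucket manually in postQueue(), size it, attach it to the
// pre-view's scene, and let the RenderManager draw it before the main scene.
public class BackgroundPictureSketch {

    static void attachBackground(AssetManager assetManager, Texture2D videoTexture,
                                 ViewPort videoViewPort, int width, int height) {
        Picture picture = new Picture("Video Frame", true);
        picture.setPosition(0, 0);
        picture.setWidth(width);   // without an explicit size the quad is tiny
        picture.setHeight(height);
        picture.setTexture(assetManager, videoTexture, false);

        // Attach to the pre-view so it is rendered before the main scene;
        // a scene attached this way must have its state updated once.
        videoViewPort.attachScene(picture);
        picture.updateGeometricState();
    }
}
```

With this, the SceneProcessor only needs to update the texture data each frame; no manual renderQueue() call is required.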

Try this, hope it helps.

setting a stationary background image in jme3/