Offscreen rendering

Hello, this is my first post here.

Correct me if I'm wrong:

I would like to render something (with jME…) and then use the rendered pixels in some way (e.g. to produce a video).

  • From the docs it seems that TextureRenderer is the class I'm looking for, isn't it?
  • If TextureRenderer goes through Pbuffers, does it share the same context with other TextureRenderers?

Cheers,
Mik

Hi,

This may help (or not :wink:):

http://www.jmonkeyengine.com/jmeforum/index.php?topic=519.msg47760#msg47760

AFAIK, you cannot retrieve an image with the existing TextureRenderer class (which is the reason for this OffscreenRenderer class…)
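
A minimal sketch of the readback step such an offscreen renderer has to perform: after the frame has been drawn, glReadPixels copies the colour buffer into a buffer the application can use. This is raw LWJGL, not the actual OffscreenRenderer code, and the class/method names here are made up:

```java
import java.nio.IntBuffer;

import org.lwjgl.BufferUtils;
import org.lwjgl.opengl.GL11;

public class ReadbackSketch {

    /**
     * Reads the current colour buffer into an IntBuffer (four RGBA bytes
     * packed into each int; the byte order inside the int depends on
     * platform endianness). Must be called on the thread that owns the
     * GL context, after the frame has been rendered.
     */
    public static IntBuffer grabPixels(int width, int height) {
        IntBuffer pixels = BufferUtils.createIntBuffer(width * height);
        GL11.glReadPixels(0, 0, width, height,
                GL11.GL_RGBA, GL11.GL_UNSIGNED_BYTE, pixels);
        pixels.rewind();
        return pixels;
    }
}
```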

Many thanks!

As it seems to be a popular question, what about asking "the officials" to put the snippet into the next jME distribution?

I'd be honored, but before that I'd like some user input :slight_smile:

BTW how did it work for you?

I looked at the OffscreenRenderer class… can framebuffer objects (FBOs) be created without a display context? I know it's possible to create a pbuffer because it has its own context.

Hmm, good question. Normally FBOs share the main display context…

That is what I suspected, since creating an FBO requires using GL calls…
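
For reference, this is the difference in practice: with LWJGL a Pbuffer brings its own GL context that you make current before issuing GL calls, whereas an FBO is just an object created inside a context that already exists. A rough sketch (LWJGL's Pbuffer and PixelFormat classes; exact constructor arguments vary between LWJGL versions, and the rendering part is assumed):

```java
import org.lwjgl.LWJGLException;
import org.lwjgl.opengl.Pbuffer;
import org.lwjgl.opengl.PixelFormat;

public class PbufferContextSketch {

    public static void main(String[] args) throws LWJGLException {
        // A Pbuffer carries its own GL context, so no window or canvas is needed.
        Pbuffer pbuffer = new Pbuffer(512, 512, new PixelFormat(), null);
        pbuffer.makeCurrent();

        // ... render the scene and read the pixels back here ...

        pbuffer.destroy();
    }
}
```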

IMO TextureRenderer and OffscreenRenderer should be merged and provide the following features:

  1. A Pbuffer should be used if FBOs are not supported or if there is no display context (see the fallback sketch after this list).
  2. A more sophisticated way to read data back from the framebuffer (grabScreenContents should write to an Image object).
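
The selection logic could look something like this (only a sketch; the capability flag is LWJGL's ContextCapabilities field, everything else is hypothetical):

```java
import org.lwjgl.opengl.GLContext;

public class RendererSelectionSketch {

    /** Decide between the FBO path and the Pbuffer fallback. */
    public static boolean useFbo(boolean displayContextAvailable) {
        // An FBO can only be created inside an existing GL context,
        // and only if the driver reports the extension.
        return displayContextAvailable
                && GLContext.getCapabilities().GL_EXT_framebuffer_object;
    }
}
```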

I agree. The main difference between the two classes is how the FBO is initialized.

  1. A fallback method is now implemented in TextureRenderer; it should be easy to adapt.
  2. Low-level access to the IntBuffer should be kept, even if higher-level utility functions are added (see the conversion sketch after this list). Image objects tend to depend on the GUI toolkit… I'm also concerned about performance: today it's impossible to update the IntBuffer 60 times per second. I haven't checked whether the bottleneck is grabScreenContents or the image conversion.
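
To illustrate the GUI-toolkit point in item 2: turning the raw IntBuffer into, say, an AWT BufferedImage is where the toolkit dependency (and part of the per-frame cost) comes from. A sketch, assuming each int is packed as 0xRRGGBBAA, which depends on how grabScreenContents filled the buffer:

```java
import java.awt.image.BufferedImage;
import java.nio.IntBuffer;

public class BufferToImageSketch {

    /**
     * Converts a buffer of RGBA pixels (as read back from GL, origin at the
     * bottom-left) into an AWT BufferedImage (origin at the top-left).
     */
    public static BufferedImage toBufferedImage(IntBuffer pixels, int width, int height) {
        BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                int rgba = pixels.get((height - 1 - y) * width + x); // flip vertically
                int r = (rgba >>> 24) & 0xFF;
                int g = (rgba >>> 16) & 0xFF;
                int b = (rgba >>> 8) & 0xFF;
                int a = rgba & 0xFF;
                image.setRGB(x, y, (a << 24) | (r << 16) | (g << 8) | b);
            }
        }
        return image;
    }
}
```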

This sounds like a good objective. I'll be happy to look into it for inclusion in the post-1.0 code base.

I'll try to submit a patch to TextureRenderer as soon as I find some time…

2) Low-level access to the IntBuffer should be kept, even if higher-level utility functions are added. Image objects tend to depend on the GUI toolkit… I'm also concerned about performance: today it's impossible to update the IntBuffer 60 times per second. I haven't checked whether the bottleneck is grabScreenContents or the image conversion.

I don't think you understood what I meant…
grabScreenContents makes a direct call to GL, but uses RGBA8888 as the image format. The jME Image class maps directly to the OpenGL image formats, so very little effort is needed to implement reading into an Image object.
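
For what it's worth, reading straight into a jME Image would then look roughly like this; the Image constructor and the RGBA8888 constant are written from memory, so treat them as assumptions:

```java
import java.nio.ByteBuffer;

import org.lwjgl.BufferUtils;
import org.lwjgl.opengl.GL11;

import com.jme.image.Image;

public class GrabToImageSketch {

    /**
     * Reads the colour buffer straight into a jME Image. RGBA8888 matches
     * the layout glReadPixels produces with GL_RGBA/GL_UNSIGNED_BYTE, so no
     * per-pixel conversion is needed.
     */
    public static Image grabToImage(int width, int height) {
        ByteBuffer data = BufferUtils.createByteBuffer(width * height * 4);
        GL11.glReadPixels(0, 0, width, height,
                GL11.GL_RGBA, GL11.GL_UNSIGNED_BYTE, data);
        data.rewind();
        return new Image(Image.RGBA8888, width, height, data); // assumed constructor and constant
    }
}
```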

Indeed, I had not understood that :wink: I'll look into it then

IRT speed: when you are evaluating, be aware that reading data back from the card is going to be fairly slow on most cards, as they are optimized mostly for sending data one way (to the card). I know there were some concerns about that in a related thread.

Well, this is mainly for offscreen rendering: producing a video, thumbnails, etc. If you want to render to a GUI, you would use a canvas instead.

In my case, it's for integrating into a GUI, but outside of the jME canvas (for instance, a separate window, etc.).

I don't need 60 fps, but I'd still like to optimize the process (right now I'm working around the problem with on-demand rendering, i.e. I update the image only when some event is received).
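
The on-demand approach can stay quite simple: mark the view dirty when the event arrives and only render and grab when the flag is set. A minimal sketch of that pattern (all names hypothetical):

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class OnDemandRenderSketch {

    private final AtomicBoolean dirty = new AtomicBoolean(true);

    /** Called from the GUI/event thread when the view needs refreshing. */
    public void invalidate() {
        dirty.set(true);
    }

    /** Called from the render loop; only does the expensive work when needed. */
    public void updateIfNeeded() {
        if (dirty.compareAndSet(true, false)) {
            renderScene();    // hypothetical: draw the scene offscreen
            grabAndPublish(); // hypothetical: read the pixels back and hand them to the GUI
        }
    }

    private void renderScene() { /* ... */ }

    private void grabAndPublish() { /* ... */ }
}
```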