I have just started using jME and the physics suite as a possible replacement for Xith with Odejava. My reasons for no longer using Xith… and so I was hoping the following are possible in jME:
1. Capture a rendered scene / render to some sort of offscreen buffer (efficiently), so the image data can be used within the program.
2. Render multiple views of a scene (on- or offscreen).
3. Dynamically update a texture efficiently (so that I can stream a video / web camera image into the program).
I have taken a look at a number of the newly available demos (online), which appear to demonstrate the ability to render multiple views. I would appreciate any pointers on the above requirements. Can these things be done, or do I need to use a lower-level OpenGL API?
1. Generally, yes - whether you can call it 'efficient' depends on what you want to do with the image. You can render to pbuffers and you can send them to system mem…
2. Yes - have a look at TestCameraMan - it uses Render To Texture (a rough sketch follows below these answers).
3. ;)) Streaming video etc. to a texture is still an unsolved problem (wonder if lwjgl supports it - or even OpenGL?) - have you seen any code for it using OpenGL? (In any engine or programming language?)
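For (2), the pattern from TestCameraMan looks roughly like the sketch below. This is a from-memory approximation of the TextureRenderer API of this era - the exact createTextureRenderer arguments have changed between releases, and monitorQuad is just a stand-in for whatever geometry you want to show the live view on:

import com.jme.image.Texture;
import com.jme.math.Vector3f;
import com.jme.renderer.ColorRGBA;
import com.jme.renderer.TextureRenderer;
import com.jme.scene.shape.Quad;
import com.jme.scene.state.TextureState;

// Fields of a SimpleGame subclass:
private TextureRenderer tRenderer;
private Texture fakeTex;
private Quad monitorQuad; // hypothetical quad in the main scene

protected void simpleInitGame() {
    // A second camera rendering into an offscreen pbuffer
    tRenderer = display.createTextureRenderer(
            256, 256, false, true, TextureRenderer.RENDER_TEXTURE_2D);
    tRenderer.getCamera().setLocation(new Vector3f(0, 0, 75));
    tRenderer.setBackgroundColor(new ColorRGBA(0, 0, 0, 1));

    // An empty texture object for the offscreen view to land in
    fakeTex = new Texture();
    tRenderer.setupTexture(fakeTex);

    // Apply the live texture to geometry in the main scene
    // (the quad will also show up in the offscreen view - harmless here)
    monitorQuad = new Quad("monitor", 4, 3);
    TextureState ts = display.getRenderer().createTextureState();
    ts.setTexture(fakeTex);
    monitorQuad.setRenderState(ts);
    rootNode.attachChild(monitorQuad);
}

protected void simpleUpdate() {
    // Refresh the offscreen view every frame
    tRenderer.render(rootNode, fakeTex);
}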
Many players in fact have an option to use OpenGL for rendering video. Many even take advantage of the video card for enhanced scaling and things like YUV->RGB conversion. A good (open source) example is MPlayer, which I think can do that on all 3 major platforms (at least Windows/*nix). I also remember seeing some Windows/C examples…
JMF was quietly abandoned by Sun, though they'll refuse to acknowledge it. An initial glimpse at it shows promise, but once you spend hours of your life in the hell that is trying to use JMF, you quickly change your opinion and just try to find your way back out. Suffice it to say I've had a bit of experience with JMF. :o
JMF is extremely buggy, inconsistent, and lacks support for most file formats. IBM released a plugin for JMF a few years ago that supports some extra formats, but it's Linux-only, and it always errored out when I tried to use it. I would love to have video support in jME, but JMF is not the way to go, in my opinion.
Well, that's why I said "if you supply the media". I wouldn't touch JMF with a 10-foot pole for trying to make anything that plays "generic" media… but do you know a better alternative? That said, I don't even know if JMF gives you any access to the underlying data stream, and it's probably far from efficient, since it probably hasn't been updated for the Java 1.4 API.
An active forum! Coming from the Xith forums this is like some new hip movement or something! Thank you for your replies.
On the offscreen rendering requirement:
I would like to be able to render to an offscreen buffer; system memory is fine. I need to use each rendered frame as it is produced in machine vision algorithms - so essentially rendering into a memory area that can be used from within the program, e.g. in a BufferedImage. Is this possible without a massive performance hit? For example, there was a hack in Xith which required an on-screen canvas to be visible for offscreen capturing… nasty.
On multiple view rendering: Brilliant!
I am happy to use JMF for streaming web camera data into a BufferedImage. The way I have seen a number of APIs achieve dynamic texture updating is by getting a reference to an image object which contains the texture data; this is updated each frame with the streaming data, and the texture is then marked 'dirty', which I assume uploads the image data into the graphics card's texture memory (or something!).
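At the raw OpenGL level I assume that 'dirty' upload boils down to something like glTexSubImage2D. A minimal sketch through LWJGL - updateVideoTexture, texId and the frame buffer are just placeholder names of mine, and it assumes the texture was already allocated with glTexImage2D at the same size and format:

import java.nio.ByteBuffer;
import org.lwjgl.BufferUtils;
import org.lwjgl.opengl.GL11;

public class VideoTextureUpdater {
    static final int WIDTH = 320, HEIGHT = 240;
    // Latest frame as tightly packed RGB bytes (e.g. copied out of a JMF buffer)
    static final ByteBuffer frame = BufferUtils.createByteBuffer(WIDTH * HEIGHT * 3);

    // Replace the pixel data of an existing texture with the latest frame
    static void updateVideoTexture(int texId) {
        GL11.glBindTexture(GL11.GL_TEXTURE_2D, texId);
        // glTexSubImage2D only swaps the pixels, so the driver does not
        // have to re-allocate texture memory every frame
        GL11.glTexSubImage2D(GL11.GL_TEXTURE_2D, 0, 0, 0, WIDTH, HEIGHT,
                GL11.GL_RGB, GL11.GL_UNSIGNED_BYTE, frame);
    }
}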
Thank you again for your time. If anyone has any pointers on how to do (1) in particular, that would be a tremendous help, as the main aim of my using jME with the physics suite is to provide a 'camera' view stream on a mobile robot which can be used in computer vision algorithms.
Perhaps an effort needs to be made in the direction of taking the current JMF and turning it into a real API, with features for rendering to LWJGL… I am willing to look into this in a couple of weeks, but would love some assistance, as video in jME would be the coolest thing since Irrisor's Swing integration in 3D.
Depending on the nastiness of the JMF source code, it may just be a learning experience in how to do certain things, followed by writing an implementation from scratch. I've often considered trying to write an MPEG-2 player in straight Java. :o
Grabbing the screen contents is also very easy (check Renderer.grabScreenContents), so you could copy each rendered frame to a BufferedImage. Not the most optimal approach (BufferedImage will never be that anyway), but easy. For better performance, look into pbuffers.
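Something like this, for example - assuming the IntBuffer flavour of grabScreenContents in the current renderer (check your version, and note you may need to swizzle the channel order depending on what glReadPixels hands back):

import java.awt.image.BufferedImage;
import java.nio.IntBuffer;
import org.lwjgl.BufferUtils;

// Inside the game loop, after the frame has been rendered:
int w = display.getWidth(), h = display.getHeight();
IntBuffer buf = BufferUtils.createIntBuffer(w * h);
display.getRenderer().grabScreenContents(buf, 0, 0, w, h);

BufferedImage img = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
for (int y = 0; y < h; y++) {
    for (int x = 0; x < w; x++) {
        // OpenGL rows start at the bottom, BufferedImage rows at the top
        img.setRGB(x, h - 1 - y, buf.get(y * w + x));
    }
}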
As for the multiple views, you can do as many passes as you like over the scenegraph; you don't even need render-to-texture for it. Just change the camera and render the scene from whatever angle you want. Change the viewport, or grab the screen contents, or whatever you want, and do the next angle.
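Roughly like this - again only a sketch against the camera/renderer API of the moment (the viewport setters may be named a little differently in your version):

import com.jme.math.Vector3f;
import com.jme.renderer.Renderer;

// In the render loop: draw the same scenegraph twice, side by side,
// from two different camera positions (cam/rootNode as in SimpleGame)
Renderer r = display.getRenderer();
r.clearBuffers();

// Left half of the window, first angle
cam.setViewPortLeft(0.0f);   cam.setViewPortRight(0.5f);
cam.setViewPortBottom(0.0f); cam.setViewPortTop(1.0f);
cam.setLocation(new Vector3f(0, 0, 50));
cam.lookAt(new Vector3f(0, 0, 0), new Vector3f(0, 1, 0));
cam.update();
r.draw(rootNode);

// Right half of the window, second angle
cam.setViewPortLeft(0.5f);   cam.setViewPortRight(1.0f);
cam.setLocation(new Vector3f(50, 0, 0));
cam.lookAt(new Vector3f(0, 0, 0), new Vector3f(0, 1, 0));
cam.update();
r.draw(rootNode);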
An API (even if it were using JNI) to access the audio/video codecs on Windows, Linux, and Mac would really be an awesome feat. Anyone with enough JNI experience willing to do that?
I would be willing to take the burden of cheering you on.
Yes, but what should be stubbed? I don't know of any good players out there that have ports for Windows, Unix, and Linux and would also be capable of outputting the audio and video to OpenAL/OpenGL.