Custom draw call

How do I override the draw call of a Node? I'm trying to render the camera view there.

I tried:

    @Override
    public void render(RenderManager rm, ViewPort vp)



Or is there a different way to render that … Please help.



Thanks.

@svarvi said:
I'm trying to render camera view there.

Mhh… if not a camera view, what do you think the engine is rendering exactly?

There is no need to override any render method. Maybe you should explain what you are trying to achieve exactly so we can point you in the right direction.

Hey man…

I was talking about the device camera … That's an augmented reality app that I'm building…

I'm trying to render what the device sees…

The more I use the word "camera", the more confusing it gets…

oh ok :stuck_out_tongue:

So you want to render what the camera captures to an object in the scene, right?

http://hub.jmonkeyengine.org/AndroidCamera.zip

@normen … That was very helpful, but the Vuforia SDK that I'm using already takes care of switching on the device camera and rendering it to a GL surface.



All I have to do is call a native function inside the GL callback public void onDrawFrame(GL10 gl) …
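For context, that callback belongs to Android's GLSurfaceView.Renderer interface. A minimal sketch of the setup being described, with a hypothetical native binding (renderVuforiaFrame is made up here; the real SDK entry point will differ):

    import javax.microedition.khronos.egl.EGLConfig;
    import javax.microedition.khronos.opengles.GL10;
    import android.opengl.GLSurfaceView;

    // Sketch of the setup described above: Android's GLSurfaceView.Renderer
    // callback, with a hypothetical JNI binding into the Vuforia SDK.
    public class ARRenderer implements GLSurfaceView.Renderer {

        // Hypothetical native entry point; the real SDK call will differ.
        public native void renderVuforiaFrame();

        @Override
        public void onDrawFrame(GL10 gl) {
            renderVuforiaFrame(); // the SDK draws the video background here
            // anything drawn afterwards shares this GL context
        }

        @Override
        public void onSurfaceCreated(GL10 gl, EGLConfig config) { }

        @Override
        public void onSurfaceChanged(GL10 gl, int width, int height) { }
    }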

Then you cannot use jME for that but have to use vanilla GL.

Well, if you can run jME in the same OpenGL context and get the texture pointer of the framebuffer that Vuforia renders to (and let it render to a secondary buffer), you can link that to a jME texture and use it in any material. (Of course you need to make sure that the SDK and jME render on the same thread.)
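A rough sketch of that idea, assuming you can obtain the GL texture id Vuforia renders into (vuforiaTextureId, width, height and material are placeholders here) and that both sides share the GL thread; the lifecycle of the shared texture is glossed over:

    import com.jme3.texture.Image;
    import com.jme3.texture.Texture2D;

    // Rough sketch: adopt an existing GL texture id as a jME texture.
    Image image = new Image(Image.Format.RGBA8, width, height, null);
    image.setId(vuforiaTextureId);   // reuse the texture Vuforia renders into
    image.clearUpdateNeeded();       // keep jME from re-uploading image data
    Texture2D camTex = new Texture2D(image);
    material.setTexture("ColorMap", camTex); // e.g. on an Unshaded material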

He says it also renders it, so there has to be some thread already :slight_smile:

Yes, but I suspect the library is only able to activate the camera; alternatively, an application could probably be merged with the library's internals → of course only if source-code access is given.

I'd still use jME to render it to GL; what's the point in using their render example and trying to push it into jME? ^^

Hi people…

I get what you are trying to say…

But unfortunately Vuforia isn't open source…

I thought that instead of digging for the texture there, I would dig into the renderer here.



I'm not very sure what's happening behind the scenes.

QCAR/Vuforia gets the Activity class … It might get the GL context (not even sure if that's possible, just guessing) … and renders the video background there.



All I want is jMonkey + QCAR. If there is any other way… please let me know…



Vuforia: https://ar.qualcomm.at/qdevnet/api

Well, jME and the SDK need to run in the same GL context; if you can somehow make this happen, the rest is relatively simple.



But if I read the API right, you could make it work with an image target and then use that for a texture?

(So one round trip over the CPU, but it might work well enough.)

There should be no need at all to let the VR API render the picture, especially if you want to use some other engine to further modify that image. Just use the VR data and not the display routines.


Okay … I get half of what you are saying…

Let me just say what I have in mind.

I'm doing this on Android.



There are a bunch of GL calls, like gltransform (which I have no control over), which I called in onDrawFrame(GL10 gl) while using jPCT, which I think means they share the same GL context.



In jMonkey I thought that if I extend Node I can override draw(RenderManager rm).

But now they have changed it.

Am I wrong so far? If I am, please correct me.

If not, and there is some other class that I should extend… you get it, right?
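For what it's worth, jME3's supported hook into the render loop is a SceneProcessor attached to a ViewPort rather than an overridden draw call on Node. A minimal sketch (the hypothetical native call from earlier in the thread could go in preFrame):

    import com.jme3.post.SceneProcessor;
    import com.jme3.renderer.RenderManager;
    import com.jme3.renderer.ViewPort;
    import com.jme3.renderer.queue.RenderQueue;
    import com.jme3.texture.FrameBuffer;

    // Attach with viewPort.addProcessor(new NativeDrawProcessor());
    public class NativeDrawProcessor implements SceneProcessor {

        private boolean initialized = false;

        @Override
        public void initialize(RenderManager rm, ViewPort vp) {
            initialized = true;
        }

        @Override
        public void reshape(ViewPort vp, int w, int h) { }

        @Override
        public boolean isInitialized() {
            return initialized;
        }

        @Override
        public void preFrame(float tpf) {
            // Runs on the render thread before the scene is drawn;
            // a native call into the SDK could go here.
        }

        @Override
        public void postQueue(RenderQueue rq) { }

        @Override
        public void postFrame(FrameBuffer out) { }

        @Override
        public void cleanup() {
            initialized = false;
        }
    }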

The VR API does two things: a) it reads the video stream and deduces spatial data from it, so it can judge the current scene in a 3D context; b) it renders the image to OpenGL. Just leave out b) and do that with jME instead is what I'm saying.

I'm trying to tackle the same problem with Vuforia AR + jME. My thoughts were to take the pose matrix that Vuforia produces with the image data, pass it to jME, and transform the object to be rendered accordingly. But the problem I see is how to take the physical camera view from Vuforia and give it to jME so that it can render it as a texture on some sort of mesh. P.S. I'm new to this, so disregard anything dumb I just said :slight_smile:

@jlambert said:
I'm trying to tackle the same problem with Vuforia AR + jME. My thoughts were to take the pose matrix that Vuforia produces with the image data, pass it to jME, and transform the object to be rendered accordingly. But the problem I see is how to take the physical camera view from Vuforia and give it to jME so that it can render it as a texture on some sort of mesh. P.S. I'm new to this, so disregard anything dumb I just said :)

Any displayed image is basically just an array of bytes arranged in some way (RGB, RGBA, BGR etc.), so you only have to find out where you can access that data "on the other side" and then render it as exemplified in the AndroidCamera example I linked on the previous page.
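To illustrate, a sketch of the jME side of that, assuming the frame arrives as a tightly packed RGBA8 byte array (frameBytes, width and height are placeholders for whatever the SDK exposes "on the other side"):

    import java.nio.ByteBuffer;
    import com.jme3.texture.Image;
    import com.jme3.texture.Texture2D;
    import com.jme3.util.BufferUtils;

    // Wrap a raw camera frame in a jME texture.
    ByteBuffer buf = BufferUtils.createByteBuffer(frameBytes.length);
    buf.put(frameBytes).flip();
    Image image = new Image(Image.Format.RGBA8, width, height, buf);
    Texture2D camTex = new Texture2D(image);

    // For each new frame, refill the buffer and flag the image for re-upload:
    buf.clear();
    buf.put(frameBytes).flip();
    image.setData(buf);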

All tracker objects have a matrix associated with them … You can extract the position and rotation (Matrix3f → Quaternion) from that and set it on a Spatial.
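As a sketch, assuming the pose has already been converted into a jME Matrix4f using jME's conventions (the raw Vuforia matrix may need row/column-order and axis fixes first):

    import com.jme3.math.Matrix4f;
    import com.jme3.scene.Spatial;

    // Apply a tracker pose matrix to a spatial.
    public static void applyPose(Spatial spatial, Matrix4f pose) {
        spatial.setLocalTranslation(pose.toTranslationVector());
        spatial.setLocalRotation(pose.toRotationQuat());
    }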