Wouldn't it be nice…?

…to have an angliph render?

Is there a way to obtain it?

http://www.google.com/search?q=angliph

zero hits… I give up. What is angliph?

Google: http://www.google.com/search?hl=en&q=anaglyph+render&btnG=Search

Quoted from: http://www.engg.uaeu.ac.ae/a.okeil/stereo-vision/anaglyph.htm



Anaglyphs



An anaglyph is a moving or still picture consisting of two slightly different perspectives of the same subject in contrasting colors that are superimposed on each other, producing a three-dimensional effect when viewed through two correspondingly colored filters. They offer a simple and inexpensive method for viewing stereo images. The quality of stereoscopic experience is not as good as the quality of stereo images seen through shutter glasses.  Your eyes and brain need some time to start experiencing the stereoscopic effect.  The more you train your eyes the faster they adapt to seeing the red-blue images.  If you have no anaglyph glasses click here to download a file that shows you how to make one yourself in a few minutes.





Gray Anaglyphs



Most anaglyph images are gray-scale images with some areas rendered in red and others rendered in blue. There are different types of gray anaglyph images, such as red-green, red-blue, red-cyan, and red-yellow anaglyphs. I personally prefer red-cyan because the final images appear brighter and more neutral in color compared to other types of gray anaglyphs.
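For illustration, the red-cyan combination described above can be sketched in a few lines (names and layout are my own, not from the article): the left eye's gray value feeds the red channel, the right eye's feeds green and blue.

```java
// Combine a gray-scale stereo pair into one red-cyan anaglyph pixel.
// Channel layout: {r, g, b}, values in [0, 1].
public class GrayAnaglyph {
    // leftGray is seen through the red filter, rightGray through the
    // cyan (green + blue) filter.
    public static float[] combine(float leftGray, float rightGray) {
        return new float[] { leftGray, rightGray, rightGray };
    }

    public static void main(String[] args) {
        float[] px = combine(0.8f, 0.3f);
        System.out.printf("r=%.1f g=%.1f b=%.1f%n", px[0], px[1], px[2]);
    }
}
```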





Colored Anaglyphs



Colored anaglyph images maintain the RGB information of objects in the scene. They offer an interesting alternative to gray-scale anaglyph images as long as the original images contain neither red nor blue objects or backgrounds. The wind tower below is a good example of a colored anaglyph. Red or blue areas, such as the red geodesic dome or the blue sky background shown below, reach only one eye, which causes a lot of confusion and spoils the stereoscopic effect.
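The "reach only one eye" problem is easy to check numerically (a sketch of my own, not from the article): a red filter passes only the R channel and a cyan filter only G and B, so a pure red pixel is completely dark for the cyan-filtered eye and no stereo match is possible.

```java
// Model what each eye sees through its filter: the red filter keeps
// only R, the cyan filter averages G and B.
public class FilterCheck {
    public static float throughRed(float[] rgb)  { return rgb[0]; }
    public static float throughCyan(float[] rgb) { return (rgb[1] + rgb[2]) / 2f; }

    public static void main(String[] args) {
        float[] pureRed = { 1f, 0f, 0f };
        // The cyan-filtered eye receives zero brightness here.
        System.out.println(throughRed(pureRed) + " vs " + throughCyan(pureRed));
    }
}
```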





From another source: http://ozviz.wasp.uwa.edu.au/~pbourke/texture_colour/anaglyph/



OpenGL



Rendering anaglyphs in OpenGL is straightforward with the help of the routine glColorMask(). The basic idea is to create the scene with all the surfaces coloured pure white. Render the scene twice, once for each eye. If designing for glasses with the red filter on the right eye and the blue filter on the left eye then before rendering the right eye, call glColorMask(GL_TRUE,GL_FALSE,GL_FALSE,GL_FALSE); and before rendering the left eye, call glColorMask(GL_FALSE,GL_FALSE,GL_TRUE,GL_FALSE); If your OpenGL hardware supports stereo buffers then the above can be implemented directly, if not then one needs to render the scene twice and use the accumulation buffer to merge them, see later.
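glColorMask() itself just gates which channels a draw may write. A tiny simulation of the two-pass idea over a single pixel (plain Java, no GL context; all names here are made up for illustration):

```java
// Simulate glColorMask on one pixel: a write only lands in channels
// whose mask flag is true. Two masked passes build the anaglyph.
public class MaskSim {
    // pixel = {r, g, b}; mask = {rOn, gOn, bOn}
    public static void maskedWrite(float[] pixel, boolean[] mask, float[] src) {
        for (int i = 0; i < 3; i++) {
            if (mask[i]) pixel[i] = src[i];
        }
    }

    public static void main(String[] args) {
        float[] pixel = { 0f, 0f, 0f };
        // Pass 1: right eye, red filter -> allow writes to red only.
        maskedWrite(pixel, new boolean[]{ true, false, false },
                    new float[]{ 0.9f, 0.9f, 0.9f });
        // Pass 2: left eye, blue filter -> allow writes to blue only.
        maskedWrite(pixel, new boolean[]{ false, false, true },
                    new float[]{ 0.4f, 0.4f, 0.4f });
        System.out.printf("r=%.1f g=%.1f b=%.1f%n", pixel[0], pixel[1], pixel[2]);
    }
}
```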

Ah ok, thank you! I knew this by the name "stereo imaging", but anaglyph sounds a lot more mysterious :smiley:

I made a StereoRenderPass that does teal/red anaglyphs…  I will post it sometime with the rest of the JmeContext system.

Man, you are on fire!  XD

Very cool Momoko! I was actually researching OpenGL anaglyphic rendering and stereo rendering for a home-brew wiimote head-tracking VR system (similar to Johnny Lee's head-tracking system, but using LCD VR goggles).



Does the StereoRenderPass support both anaglyphic rendering using accumulation buffers and GL_STEREO using the stereo buffers?



If anyone is interested here’s some links to do with wii, stereo rendering, vr tracking:

http://del.icio.us/dougnukem/wii

Very cool topic… but dang it, the thread subject has got an old Beach Boys song stuck in my head. :frowning:

It doesn't use accumulation or stereo buffers, just color masking. I made it only to try out my brother's 3D glasses (was very nice, with things sticking out like they should, etc)

It is very easy to do it yourself using normal jME; render scene with ColorMask set to red only, then render it with the camera shifted a bit and ColorMask set to green and blue.
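The camera shift mentioned above boils down to a bit of vector math: cross the up vector with the view direction to get a sideways vector, normalize it, and scale it by half the eye separation. A standalone sketch using plain float arrays instead of jME's Vector3f (my own illustration, not the actual pass code):

```java
// Compute the half-eye offset used to shift the camera sideways
// for each stereo pass.
public class EyeOffset {
    static float[] cross(float[] a, float[] b) {
        return new float[] {
            a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]
        };
    }

    public static float[] offset(float[] up, float[] dir, float eyeSeparation) {
        float[] side = cross(up, dir);
        float len = (float) Math.sqrt(
            side[0] * side[0] + side[1] * side[1] + side[2] * side[2]);
        // normalize, then scale to half the eye separation
        for (int i = 0; i < 3; i++) {
            side[i] = side[i] / len * eyeSeparation * 0.5f;
        }
        return side;
    }

    public static void main(String[] args) {
        // up = +Y, looking down -Z: the side vector comes out along -X.
        float[] off = offset(new float[]{ 0, 1, 0 }, new float[]{ 0, 0, -1 }, 0.1f);
        System.out.printf("%.2f %.2f %.2f%n", off[0], off[1], off[2]);
    }
}
```

Add the offset to the camera location for one eye and subtract it for the other, re-rendering with the matching color mask each time.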

Wouldnt it be nice if we were older

Then we wouldnt have to wait so long

And wouldnt it be nice to live together

In the kind of world where we belong […]



So I just have to wait till the code shows up in jME, right?

Ack, great… thanks a lot, elettrozero! 

I will release the code early then, elettrozero, just for you :slight_smile:

    protected float eyeSeperation = 15f;
    protected float focalLength = 100f;

    protected final Vector3f temp = new Vector3f(),
                             temp2 = new Vector3f();

    public void setEyeSeperation(float eyeSeperation){
        this.eyeSeperation = eyeSeperation;
    }

    public void setFocalLength(float focalLength){
        this.focalLength = focalLength;
    }

    @Override
    public void runPass(JmeContext cx){
        if (!enabled) {
            return;
        }
        doUpdate(cx);
        RenderContext rc = cx.getRenderContext();
        applyPassStates(rc);
        cx.getRenderer().setPolygonOffset(zFactor, zOffset);

        Camera cam = cx.getRenderer().getCamera();
        temp2.set(cam.getLocation());

        // sideways vector: up x direction, scaled to half the eye separation
        cam.getUp().cross(cam.getDirection(), temp);
        temp.normalizeLocal();
        temp.multLocal(eyeSeperation * 0.5f);

        // **************************
        // * PASS 1, LEFT EYE       *
        // **************************
        // render left eye by adding temp (the left vector)
        temp2.addLocal(temp);
        cam.getLocation().set(temp2);
        cam.update();
        GL11.glColorMask(false, true, true, true);
        doRender(cx);

        // **************************
        // * PASS 2, RIGHT EYE      *
        // **************************
        // make sure to clear the depth buffer between passes
        cx.getRenderer().clearZBuffer();

        // render right eye by subtracting temp twice
        temp2.subtractLocal(temp).subtractLocal(temp);
        cam.getLocation().set(temp2);
        cam.update();
        GL11.glColorMask(true, false, false, true);
        doRender(cx);

        // move the camera back to where it was
        temp2.addLocal(temp);
        cam.getLocation().set(temp2);
        cam.update();

        GL11.glColorMask(true, true, true, true);

        cx.getRenderer().clearPolygonOffset();
        resetOldStates(rc);
    }

If you pass from Italy some times, I'll offer you  a beer :smiley:

Sorry for my ignorance… what is JmeContext?

If you look at the code, he just uses it to get the renderer, so you can do that without much of a problem. The real problem comes from the several other variables that are not declared in the code, plus the @Override annotation, which shows he is extending a class and referencing those fields from there… We'll see what he does :slight_smile:

Tadah!!!

Thanks so much Momoko_Fan.







Not so impressive yet; I still have to tune the eye distance and a few other things, but look at it for a while, especially the difference between the small avatar (bottom left) and the terrain behind it.