Understanding parts of the Core JME

Hi there,

If I want to understand some parts of the core jME, is there a good reference for this? There are methods/functions running math and jME streams that I cannot understand even a little bit (because I am not a graphics programmer).

For example, in this code:

What does this part do? What are the OutputCapsule & InputCapsule?

@Override
public void write(JmeExporter ex) throws IOException {
    super.write(ex);
    // Save each parameter as (value, key name, default value).
    OutputCapsule oc = ex.getCapsule(this);
    oc.write(lightPosition, "lightPosition", Vector3f.ZERO);
    oc.write(nbSamples, "nbSamples", 50);
    oc.write(blurStart, "blurStart", 0.02f);
    oc.write(blurWidth, "blurWidth", 0.9f);
    oc.write(lightDensity, "lightDensity", 1.4f);
    oc.write(adaptative, "adaptative", true);
}

@Override
public void read(JmeImporter im) throws IOException {
    super.read(im);
    // Read each parameter back by key, falling back to the default if absent.
    InputCapsule ic = im.getCapsule(this);
    lightPosition = (Vector3f) ic.readSavable("lightPosition", Vector3f.ZERO);
    nbSamples = ic.readInt("nbSamples", 50);
    blurStart = ic.readFloat("blurStart", 0.02f);
    blurWidth = ic.readFloat("blurWidth", 0.9f);
    lightDensity = ic.readFloat("lightDensity", 1.4f);
    adaptative = ic.readBoolean("adaptative", true);
}
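(For context: as far as I can tell, OutputCapsule and InputCapsule are jME's serialization abstraction. A capsule stores named fields together with a default value, and each readX(name, default) call returns the default when the field is missing, which keeps old save files compatible. The following is a rough, self-contained toy model of that behaviour; the class names and internals are mine, not the real jME API.)

```java
import java.util.HashMap;
import java.util.Map;

public class CapsuleDemo {
    // Toy stand-in for OutputCapsule: stores named values.
    static class ToyOutputCapsule {
        final Map<String, Object> data = new HashMap<>();
        void write(Object value, String name, Object defVal) {
            // Values equal to the default are skipped, so only
            // non-default settings end up in the saved data.
            if (value != null && !value.equals(defVal)) {
                data.put(name, value);
            }
        }
    }

    // Toy stand-in for InputCapsule: returns the default when a field is missing.
    static class ToyInputCapsule {
        final Map<String, Object> data;
        ToyInputCapsule(Map<String, Object> data) { this.data = data; }
        int readInt(String name, int defVal) {
            Object v = data.get(name);
            return v instanceof Integer ? (Integer) v : defVal;
        }
        float readFloat(String name, float defVal) {
            Object v = data.get(name);
            return v instanceof Float ? (Float) v : defVal;
        }
    }

    public static void main(String[] args) {
        ToyOutputCapsule oc = new ToyOutputCapsule();
        oc.write(75, "nbSamples", 50);     // differs from default -> stored
        oc.write(0.9f, "blurWidth", 0.9f); // equals default -> skipped

        ToyInputCapsule ic = new ToyInputCapsule(oc.data);
        System.out.println(ic.readInt("nbSamples", 50));     // stored value: 75
        System.out.println(ic.readFloat("blurWidth", 0.9f)); // fell back to default: 0.9
    }
}
```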

And where do these equations generally come from?

@Override
protected void postQueue(RenderQueue queue) {
    getClipCoordinates(lightPosition, screenLightPos, viewPort.getCamera());
    viewPort.getCamera().getViewMatrix().mult(lightPosition, viewLightPos);
    if (adaptative) {
        float densityX = 1f - FastMath.clamp(FastMath.abs(screenLightPos.x - 0.5f), 0, 1);
        float densityY = 1f - FastMath.clamp(FastMath.abs(screenLightPos.y - 0.5f), 0, 1);
        innerLightDensity = lightDensity * densityX * densityY;
    } else {
        innerLightDensity = lightDensity;
    }
    display = innerLightDensity != 0.0 && viewLightPos.z < 0;
}

private Vector3f getClipCoordinates(Vector3f worldPosition, Vector3f store, Camera cam) {

    float w = cam.getViewProjectionMatrix().multProj(worldPosition, store);
    store.divideLocal(w);

    store.x = (store.x + 1f) * (cam.getViewPortRight() - cam.getViewPortLeft()) / 2f + cam.getViewPortLeft();
    store.y = (store.y + 1f) * (cam.getViewPortTop() - cam.getViewPortBottom()) / 2f + cam.getViewPortBottom();
    store.z = (store.z + 1f) / 2f;

    return store;
}
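(A note on the adaptative branch in postQueue() above: screenLightPos is in viewport coordinates, so (0.5, 0.5) is the screen centre, and the density fades linearly as the light moves toward the edges. A quick standalone check of those two lines, with FastMath replaced by plain java.lang.Math, which behaves the same here:)

```java
public class DensityDemo {
    static float clamp(float v, float min, float max) {
        return Math.max(min, Math.min(v, max));
    }

    // Same formula as the adaptative branch in postQueue(): density
    // falls off as the light leaves the screen centre (0.5, 0.5).
    static float innerDensity(float x, float y, float lightDensity) {
        float densityX = 1f - clamp(Math.abs(x - 0.5f), 0f, 1f);
        float densityY = 1f - clamp(Math.abs(y - 0.5f), 0f, 1f);
        return lightDensity * densityX * densityY;
    }

    public static void main(String[] args) {
        System.out.println(innerDensity(0.5f, 0.5f, 1.4f)); // centre: full density, 1.4
        System.out.println(innerDensity(0.0f, 0.5f, 1.4f)); // left edge: 1.4 * 0.5 = 0.7
        System.out.println(innerDensity(0.0f, 0.0f, 1.4f)); // corner: 1.4 * 0.25 = 0.35
    }
}
```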

I am not asking for something super detailed, no… I want a general guide for this core code, or for the effects plugins in this case, if one exists. I hope you understand me.

Thank you !

@mitm Awesome, thank you! And what about the math formulas?

Well, the math is magic to me, but I would assume it's mostly well-known geometric formulas relating to 3D.

There are others with way more knowledge who could answer you better on the math.

I can read the magic and figure it out if forced, but I'd rather stick to using the public methods that use these things to create the magic, for whatever reason.

It's hard enough learning how to use the engine, so I tend to stay away from digging deeper unless something breaks and I am forced to pull the curtain back.

Okay, no problem. I have found this book:

In chapter 7, "Pinhole Camera System", it describes most of the math. The projection matrix is of course the main target: it projects the 3D object, as a 2D object, onto the 2D screen in front of the camera view:

(EDIT): the final equations (7.8 & 7.9): [image]

So the formula is based on the similarity rule for the red triangles: each side is divided by the corresponding side of the other triangle. Anyway, as far as I can understand the base formula for clipping, he is trying to keep the light beams (normalized vectors) always visible on the screen, or in the camera view, as far as he can. That is why he calls this function inside the postQueue() renderer: to keep updating it so it follows the camera. I hope I am right, but so far I am not sure about the 1f in (store.x + 1f), (store.y + 1f), (store.z + 1f):

private Vector3f getClipCoordinates(Vector3f worldPosition, Vector3f store, Camera cam) {

    /**
     * Rendering a scene is always relative to the camera, and as such the
     * scene's vertices must also be defined relative to the camera's view.
     *
     * This applies the common series of matrix transformations that takes a
     * vertex defined in model space into the clip space the camera is looking
     * at: multiplying the view-projection matrix by the position of the light
     * source and storing the result in `store`, so you now have the light
     * beams positioned in front of the camera view.
     */
    float w = cam.getViewProjectionMatrix().multProj(worldPosition, store);
    /**
     * Normalizing the position vector of the light source.
     */
    store.divideLocal(w);

    /**
     * To make the objects displayed on your homogeneous 2D plane, the image
     * should pass through the clip plane.
     * Getting the values of the projection matrix P of the clip plane (mostly
     * a 2D vector; z must be resized to 1) using this general formula:
     *   Pc = (u, v, f) = (u/f, v/f, f/f), and since the focal length
     *   (the normalized z-axis) = 1, then Pc = (u, v).
     *
     * From triangle similarity, after enclosing a triangle around the vertex
     * we want to project onto the 2D screen:
     *     u/X = v/Y = f/Z
     *   -> then: X = u = (f * x) / Z
     *            Y = v = (f * y) / Z
     *
     * So the projected point is Pc = (u, v) = ((f*x)/Z, (f*y)/Z).
     *
     * Next, we need to translate Pc to the desired origin, not the camera
     * origin (the principal point). Let this translation be defined by
     * (tu, tv); then: Pc = (u, v) = ((f*x)/Z + tu, (f*y)/Z + tv).
     *
     * Finally, get the final value by multiplying the results u & v (or
     * x & y) by the resolution of the object.
     *
     * So the final formula is:
     *   Pc = (u, v) = (mu * (((f*x)/Z) + tu), mv * (((f*y)/Z) + tv), 1)
     * where: Pc : clip projection point
     *        u  : x-coordinate of the projection point
     *        v  : y-coordinate of the projection point
     *        f  : focal length, which is parallel to the z-axis & represents its normalized value
     *        x  : the original x component of the Vector3f of that object
     *        y  : the original y component of the Vector3f of that object
     *        Z  : the principal axis that passes through the camera origin, usually = 2 * focal length
     *
     * -> Plugging in values:
     *        x = store.x;
     *        y = store.y;
     *
     *        mu = 1f;
     *        mv = 1f;
     *
     *        focal length along the x-perspective (f) = cam.getViewPortRight() - cam.getViewPortLeft();
     *        focal length along the y-perspective (f) = cam.getViewPortTop() - cam.getViewPortBottom();
     *
     *        Z = 2f;
     *
     *        tu = cam.getViewPortLeft();
     *        tv = cam.getViewPortBottom();
     */
    store.x = (store.x + 1f) * (cam.getViewPortRight() - cam.getViewPortLeft()) / 2f + cam.getViewPortLeft();
    store.y = (store.y + 1f) * (cam.getViewPortTop() - cam.getViewPortBottom()) / 2f + cam.getViewPortBottom();
    // Normalizing the z projection to convert the visible vertices into a 2D
    // object that appears on the 2D screen.
    store.z = (store.z + 1f) / 2f;

    return store;
}
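(A note on the first two lines above: multProj() multiplies the point by the 4x4 view-projection matrix and returns the resulting homogeneous w component, and divideLocal(w) is the standard perspective divide that takes clip coordinates into normalized device coordinates, not a vector normalization. A plain-Java sketch of that step; the hand-built matrix below is a deliberately simplified toy, not a real jME projection matrix:)

```java
public class PerspectiveDivideDemo {
    // Multiply (x, y, z, 1) by a row-major 4x4 matrix; store xyz, return w.
    // This mirrors the shape of what Matrix4f.multProj() computes.
    static float multProj(float[] m, float[] p, float[] store) {
        store[0] = m[0]*p[0]  + m[1]*p[1]  + m[2]*p[2]  + m[3];
        store[1] = m[4]*p[0]  + m[5]*p[1]  + m[6]*p[2]  + m[7];
        store[2] = m[8]*p[0]  + m[9]*p[1]  + m[10]*p[2] + m[11];
        return     m[12]*p[0] + m[13]*p[1] + m[14]*p[2] + m[15];
    }

    public static void main(String[] args) {
        // A toy perspective matrix: w = -z, so depth drives the divide
        // (real projection matrices also scale by the focal length and
        // remap the near/far depth range).
        float[] proj = {
            1, 0,  0, 0,
            0, 1,  0, 0,
            0, 0,  1, 0,
            0, 0, -1, 0,
        };
        float[] point = { 2f, 1f, -4f };  // a point 4 units in front of the camera
        float[] store = new float[3];

        float w = multProj(proj, point, store);
        // Perspective divide: clip coordinates -> NDC.
        for (int i = 0; i < 3; i++) store[i] /= w;
        System.out.printf("w=%.1f  ndc=(%.2f, %.2f, %.2f)%n", w, store[0], store[1], store[2]);
        // Farther points divide by a larger w, so they shrink toward the centre.
    }
}
```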

Man, it's real magic

Probably the +1 is taking a value from -1…1 and turning it into a value from 0…2.

Given that it later divides the value by 2, it's effectively taking a value from -1…1 and turning it into a value from 0…1, because I think that's the space that viewports are in.

Actually, it's even more straightforward than "viewports are in blah blah" space. This is taking a value (store.x) that is in the range -1 to 1 and calculating where that is in the range viewportLeft to viewportRight.

Same as if it was:
(val + 1) * (max - min) / 2 + min
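(That formula is a plain linear remap from the range [-1, 1] into [min, max]. A quick standalone check; the class and method names below are mine, not from jME:)

```java
public class RemapDemo {
    // Map val from the range [-1, 1] into [min, max], exactly as in
    // store.x = (store.x + 1f) * (right - left) / 2f + left;
    static float remap(float val, float min, float max) {
        return (val + 1f) * (max - min) / 2f + min;
    }

    public static void main(String[] args) {
        // Full-screen viewport: left = 0, right = 1.
        System.out.println(remap(-1f, 0f, 1f)); // -1 in NDC -> 0.0 (left edge)
        System.out.println(remap( 0f, 0f, 1f)); // centre    -> 0.5
        System.out.println(remap( 1f, 0f, 1f)); // +1 in NDC -> 1.0 (right edge)
        // A half-width viewport, e.g. left = 0.5, right = 1:
        System.out.println(remap(0f, 0.5f, 1f)); // centre of that viewport -> 0.75
    }
}
```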

Aha, yes, it must be in the range -1…1 on each axis to be seen (not clipped) on your screen. So the 1f is a compensating value in Normalized Device Coordinates (NDC). But is this value the same as the focal length f?

No. It’s the viewport left and viewport right on the screen (frame buffer).
