gl_LightSource?

Does JME bind lights to this?
If not, what are the alternatives for getting lighting info for single pass rendering?

On a similar note, is there any document that has a FULL list of global bindings for JME?

I see that CameraPosition, FrustumNearFar, and a few others I have stumbled across are available, but they aren’t documented anywhere I can find.

Thanks in advance.

I think gl_LightSource is part of the Fixed Function pipeline… newer shader-based engines are supposed to define the data interface against the GPU themselves.

This is my go-to reference for global bindings :wink:
http://code.google.com/p/jmonkeyengine/source/browse/trunk/engine/src/core/com/jme3/shader/UniformBinding.java

And here is some info about the light uniforms.
https://wiki.jmonkeyengine.org/legacy/doku.php/jme3:advanced:jme3_shaders
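
If you want the complete list without digging through the source, you can also just dump that enum at runtime. A minimal sketch, assuming jme3-core on the classpath and the UniformBinding enum from the file linked above:

```java
import com.jme3.shader.UniformBinding;

public class ListGlobalBindings {
    public static void main(String[] args) {
        // Each constant maps to a g_<Name> uniform that a material definition
        // can request in its WorldParameters block (e.g. CameraPosition,
        // FrustumNearFar, WorldViewProjectionMatrix, ...).
        for (UniformBinding binding : UniformBinding.values()) {
            System.out.println("g_" + binding.name());
        }
    }
}
```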

<cite>@kwando said:</cite> I think gl_LightSource is part of the Fixed Function pipeline… newer shader-based engines are supposed to define the data interface against the GPU themselves. This is my go-to reference for global bindings: http://code.google.com/p/jmonkeyengine/source/browse/trunk/engine/src/core/com/jme3/shader/UniformBinding.java And here is some info about the light uniforms: https://wiki.jmonkeyengine.org/legacy/doku.php/jme3:advanced:jme3_shaders

Ok… I see where the g_LightX globals are defined, but the wiki also specifies that you HAVE to set LightMode to MultiPass. How are these passed in when using the SinglePass LightMode?

Ugh… I am going to have to chain the composite for each light, aren’t I?

AFAIK there is no working singlepass mode. It has been some time since I dug into that part of the engine; maybe some of the core devs have better / more recent information for you.

Out of curiosity; what are you building?

<cite>@kwando said:</cite> AFAIK there is no working singlepass mode. It has been some time since I dug into that part of the engine; maybe some of the core devs have better / more recent information for you. Out of curiosity; what are you building?

Same ol’ same ol’… a deferred rendering implementation. I was trying a Post Processing Filter approach: render everything needed in the postQueue and then use a single shader to handle compositing and lighting. That way I could forward the normals, depth, etc. through the process chain and really only render the scene a single time for everything (minus shadows… at least for now). It looks like I’ll have to compile the light info myself… which isn’t much of an issue; I just got the idea that this was being done because the LightMode enum contains SinglePass as well as MultiPass.

EDIT: Bit more info on the implementation:

The rootNode of the scene contains nothing but lights. The scene itself is self-contained and contains no lights. Since there isn’t really a single pass mode for lighting, I’ll probably store the lights in some sort of light manager, compile the info and track updates… or some such nonsense (a rough sketch of that idea follows below).

EDIT 2: Bah… I should have mentioned. This is for a specific project that uses a single texture atlas for everything.
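
Something along these lines is what I’m picturing for the light manager (just a sketch; the class name, the MAX_LIGHTS cap and the array layout are made up, not engine API):

```java
import com.jme3.light.DirectionalLight;
import com.jme3.light.Light;
import com.jme3.light.PointLight;
import com.jme3.math.ColorRGBA;
import com.jme3.math.Vector3f;
import com.jme3.scene.Node;

/** Hypothetical helper that flattens the root node's lights into arrays. */
public class SimpleLightManager {

    private static final int MAX_LIGHTS = 16; // arbitrary cap for the uniform arrays

    public final ColorRGBA[] colors = new ColorRGBA[MAX_LIGHTS];
    public final Vector3f[] positions = new Vector3f[MAX_LIGHTS];
    public int lightCount;

    /** Re-reads the lights attached to the given node; call whenever they change. */
    public void update(Node rootNode) {
        lightCount = 0;
        for (Light light : rootNode.getLocalLightList()) {
            if (lightCount >= MAX_LIGHTS) {
                break;
            }
            if (light instanceof PointLight) {
                positions[lightCount] = ((PointLight) light).getPosition();
            } else if (light instanceof DirectionalLight) {
                // store the direction instead; the shader can tell the two apart
                positions[lightCount] = ((DirectionalLight) light).getDirection();
            } else {
                continue; // ignore ambient/spot lights in this sketch
            }
            colors[lightCount] = light.getColor();
            lightCount++;
        }
    }
}
```

The arrays would then be pushed to the compositing material as uniform arrays whenever the list changes (e.g. via Material.setParam with VarType.Vector3Array).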

I might point you to the other deferred rendering thingy, as it already does most of what you do and works quite fine so far (so you could either help get it cleaned up, or at least get inspiration from it).

Also, there are almost no gl_ calls or variables in OpenGL 3.0 anymore; people are supposed to do it all via shaders “themselves”.

<cite>@Empire Phoenix said:</cite> I might point you to the other deferred rendering thingy, as it already does most of what you do and works quite fine so far (so you could either help get it cleaned up, or at least get inspiration from it).

I doubt I have anything new or useful to contribute… I’m just piecing together a solution based on a few articles (more than likely the same articles everyone else is using). If either of the different ideas I’m trying pans out, I’ll post the findings. Otherwise, I’d just be confusing the issue.

<cite>@normen said:</cite> Also, there are almost no gl_ calls or variables in OpenGL 3.0 anymore; people are supposed to do it all via shaders "themselves".

I get this… however, we are sorta bound to 2.0, aren’t we? (i.e. we have no null check, arrays require a const int for defining their length, etc.)

Anyways, how does JME pass in multiple lights for a single pass render? (I see the enum but no documentation on the subject.)
What is JME’s equivalent of gl_MaxLights?

EDIT: I’m ok with these needing to be established myself, I’m just trying to leverage anything that JME is currently doing for me.

Well, jME simply doesn’t; the single pass never really worked, as far as I know.

Look at this method https://code.google.com/p/jmonkeyengine/source/browse/trunk/engine/src/core/com/jme3/material/Material.java#753
It’s responsible for computing and sending the light direction, position and so on.
There are some tricks done to send all the needed data in the fewest possible uniforms. Also some inconsistencies.
I never used the single pass light mode to be honest, but I think some did (@phroot maybe?)
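
To give an idea of the kind of packing that happens there, here is a rough sketch in plain Java (not the actual engine code, so the exact conventions may differ): the light type rides in the color’s w component and, for point lights, an inverse radius rides in the position’s w component, so each light fits in two vec4 uniforms.

```java
import com.jme3.light.DirectionalLight;
import com.jme3.light.Light;
import com.jme3.light.PointLight;
import com.jme3.math.ColorRGBA;
import com.jme3.math.Vector3f;
import com.jme3.math.Vector4f;

public class LightPacking {

    /** Returns { packedColor, packedPositionOrDirection } for one light (sketch only). */
    public static Vector4f[] pack(Light light) {
        ColorRGBA c = light.getColor();
        // the w channel carries the light type id instead of alpha
        Vector4f packedColor = new Vector4f(c.r, c.g, c.b, light.getType().getId());

        Vector4f packedPos;
        if (light instanceof PointLight) {
            PointLight pl = (PointLight) light;
            Vector3f p = pl.getPosition();
            float invRadius = pl.getRadius() > 0 ? 1f / pl.getRadius() : 0f;
            packedPos = new Vector4f(p.x, p.y, p.z, invRadius);
        } else if (light instanceof DirectionalLight) {
            Vector3f d = ((DirectionalLight) light).getDirection();
            // using w = -1 here to flag "this is a direction, not a position"
            packedPos = new Vector4f(d.x, d.y, d.z, -1f);
        } else {
            packedPos = new Vector4f(0, 0, 0, 0); // ambient and others: unused in this sketch
        }
        return new Vector4f[]{ packedColor, packedPos };
    }
}
```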

<cite>@nehon said:</cite> Look at this method https://code.google.com/p/jmonkeyengine/source/browse/trunk/engine/src/core/com/jme3/material/Material.java#753 It's responsible for computing and sending the light direction, position and so on. There are some tricks done to send all the needed data in the fewest possible uniforms. Also some inconsistencies. I never used the single pass light mode to be honest, but I think some did (@phroot maybe?)

Oh… I didn’t catch that the light type was passed in with the color on my first go-around with that. Interesting!

Question: Why are the light directions calc’d in the lighting shader if they are being passed in? Or are they just being converted? I haven’t actually read through the shader… just skimmed it for method names to see how it was structured.

@t0neg0d said: Question: Why are the light directions calc'd in the lighting shader if they are being passed in? Or are they just being converted? I haven't actually read through the shader... just skimmed it for method names to see how it was structured.
The light direction/position is passed to the shader in world space; it has to be converted to tangent space per vertex or per fragment for lighting, depending on whether you use vertex lighting or not.

EDIT: Actually, that’s tangent space if you have normal maps; otherwise it’s computed in view space, but the direction is relative to each vertex.
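
The conversion itself is just a change of basis. A tiny sketch of the tangent-space case with jME’s math classes (not the actual shader code, and ignoring tangent handedness for brevity):

```java
import com.jme3.math.Vector3f;

public class TangentSpaceDemo {

    /** Express dir in the tangent space defined by the vertex's tangent and normal. */
    public static Vector3f toTangentSpace(Vector3f dir, Vector3f tangent, Vector3f normal) {
        Vector3f bitangent = normal.cross(tangent).normalizeLocal();
        // multiplying by the transpose of the TBN matrix is just three dot products
        return new Vector3f(dir.dot(tangent), dir.dot(bitangent), dir.dot(normal));
    }

    public static void main(String[] args) {
        Vector3f lightDir = new Vector3f(0, -1, 0); // light shining straight down
        Vector3f normal   = new Vector3f(0, 1, 0);  // flat, Y-up surface
        Vector3f tangent  = new Vector3f(1, 0, 0);
        // prints (0.0, 0.0, -1.0): in tangent space the light points straight "into" the surface
        System.out.println(toTangentSpace(lightDir, tangent, normal));
    }
}
```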