Problem creating a PointLight with realistic lighting

Hey folks!
In the following video there is a point light that doesn't light properly when it is in the middle of the model's passage.

Point Light declaration:

PointLight test = new PointLight();
myLight light = new myLight(test, ownReference); // own wrapper class
test.setColor(new ColorRGBA(3.0f, 3.0f, 3.0f, 3.0f));
test.setPosition(new Vector3f(0.0f, 0.0f, 0.0f));
test.setRadius(3.0f);
rootNode.addLight(test);
Material declaration:

Material mat_brick = new Material(assetManager, "Common/MatDefs/Terrain/TerrainLighting.j3md");
Texture myTex = loadDiffuseTexture(); // gets the diffuse texture
mat_brick.setTexture("DiffuseMap", myTex);
test.theNode.setMaterial(mat_brick); // test.theNode -> node that contains the mesh of the passage
I have no idea why this doesn't work!
Thanks for any help!

Lighting parameters are calculated per vertex, so when you have large expanses that are just one huge triangle but a really small point light, it won't look right.

Break your shape up into smaller triangles and it will look better with a close up point light like that.
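To illustrate the suggestion, here is a minimal plain-Java sketch (class and method names are my own, not jME API) that generates the vertex positions for a quad subdivided into an n x n grid. In jME you would feed such an array into a Mesh via setBuffer(VertexBuffer.Type.Position, 3, positions); the point is just that a subdivided quad gives per-vertex lighting many more sample points near a small point light than a single 4-vertex quad does.

```java
// Sketch: vertex positions for a quad subdivided into n x n cells,
// so per-vertex lighting has enough samples near a small point light.
public class SubdividedQuad {

    // Returns a flat x,y,z array for (n+1)*(n+1) vertices covering a
    // width x height quad in the XY plane, origin at the lower-left corner.
    public static float[] positions(int n, float width, float height) {
        float[] pos = new float[(n + 1) * (n + 1) * 3];
        int i = 0;
        for (int row = 0; row <= n; row++) {
            for (int col = 0; col <= n; col++) {
                pos[i++] = width * col / n;   // x
                pos[i++] = height * row / n;  // y
                pos[i++] = 0f;                // z (flat quad)
            }
        }
        return pos;
    }

    public static void main(String[] args) {
        // 8 x 8 cells -> 81 lit vertices instead of 4 for a single quad
        float[] p = positions(8, 4f, 4f);
        System.out.println(p.length / 3); // prints 81
    }
}
```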

Thanks, mate!
Your answer wasn't the complete fix, but it pointed me in the right direction.
For anyone interested:
To solve this problem without increasing the vertex count of the models or writing your own shader, you can simply use

Material mat_brick = new Material(assetManager, "Common/MatDefs/Light/Lighting.j3md");
mat_brick.setBoolean("VertexLighting", false);

the setBoolean call to turn off vertex lighting, and you get correct lighting from a point light even with a very low vertex count.

o_O That should be off by default.

Something is fishy.

@emtonsit:

While this can be a good and easy solution if the polycount of the scene is rather small, I just want to point out that it switches the lighting calculation from per-vertex to per-pixel level, which causes a lot more GPU usage.

@naas said: @emtonsit:

While this can be a good and easy solution if the polycount of the scene is rather small, I just want to point out that it switches the lighting calculation from per-vertex to per-pixel level, which causes a lot more GPU usage.

Being pedantic, this is not always true. It largely depends on the scene and the graphics card. For example, Mythruna actually runs slower with per-vertex lighting on my graphics card… and that’s even considering that bump mapping, etc. are no longer on.

One theory is that vertex calculations always have to be done but frequently fragments don’t because of the z-buffer. Since JME sorts the opaque bin front to back it’s possible that moving calculations to the vertex calculation can actually slow things down if you have a lot of vertexes and a lot of overdraw.

At any rate, vertex lighting is supposed to be off by default, I thought. Vertex lighting turns off normal and bump mapping, so it's strange.
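The trade-off described above can be sketched with a hypothetical back-of-envelope model (my own illustration, not a profiler or anything from jME): per-vertex lighting pays for every vertex whether it ends up visible or not, while per-fragment lighting only pays for fragments that survive the depth test. All numbers below are made up to show that either side can win depending on the scene.

```java
// Hypothetical cost model: count lighting calculations for per-vertex
// vs per-fragment shading. Every vertex is always processed, but with
// front-to-back sorting, early-z lets the GPU skip occluded fragments.
public class LightingCost {

    // Per-vertex lighting: one calculation per vertex, occluded or not.
    public static long vertexLit(long vertexCount) {
        return vertexCount;
    }

    // Per-fragment lighting: only fragments passing the depth test are
    // shaded. visibleFraction models how much overdraw early-z rejects.
    public static long fragmentLit(long coveredPixels, double visibleFraction) {
        return Math.round(coveredPixels * visibleFraction);
    }

    public static void main(String[] args) {
        long verts = 2_000_000;       // heavy mesh, much hidden geometry
        long pixels = 1920L * 1080;   // covered pixels incl. overdraw
        // if early-z rejects 40% of covered pixels, per-fragment lighting
        // can end up doing *fewer* calculations than per-vertex lighting
        System.out.println("per-vertex:   " + vertexLit(verts));
        System.out.println("per-fragment: " + fragmentLit(pixels, 0.6));
    }
}
```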

@pspeed said:

Being pedantic, this is not always true. It largely depends on the scene and the graphics card. For example, Mythruna actually runs slower with per-vertex lighting on my graphics card… and that’s even considering that bump mapping, etc. are no longer on.

One theory is that vertex calculations always have to be done but frequently fragments don’t because of the z-buffer. Since JME sorts the opaque bin front to back it’s possible that moving calculations to the vertex calculation can actually slow things down if you have a lot of vertexes and a lot of overdraw.

At any rate, vertex lighting is supposed to be off by default, I thought. Vertex lighting turns off normal and bump mapping, so it's strange.

Hmm, this is interesting… doesn't that mean that more vertexes than pixels/fragments are being processed?
Maybe it's a special case there, because Mythruna has very few overlapping surfaces compared to the polygon count due to the block system?
But a calculation usually takes the same time no matter whether it's done by the vertex or fragment shader, right?

@naas said:
@pspeed said:

Being pedantic, this is not always true. It largely depends on the scene and the graphics card. For example, Mythruna actually runs slower with per-vertex lighting on my graphics card… and that’s even considering that bump mapping, etc. are no longer on.

One theory is that vertex calculations always have to be done but frequently fragments don’t because of the z-buffer. Since JME sorts the opaque bin front to back it’s possible that moving calculations to the vertex calculation can actually slow things down if you have a lot of vertexes and a lot of overdraw.

At any rate, vertex lighting is supposed to be off by default, I thought. Vertex lighting turns off normal and bump mapping, so it's strange.

Hmm, this is interesting… doesn't that mean that more vertexes than pixels/fragments are being processed?
Maybe it's a special case there, because Mythruna has very few overlapping surfaces compared to the polygon count due to the block system?
But a calculation usually takes the same time no matter whether it's done by the vertex or fragment shader, right?

It indicates that there are more vertexes than drawn fragments. Mythruna only renders the surfaces of things, but if you are looking directly at a mountain then you don't see the stuff on the other side of it. If those are drawn last then their fragments are never processed… so you incur vertex processing for things that aren't drawn.

That being said, on some cards it is faster to have vertex lighting on. This indicates that some cards (looking at you, ATI) are really bad at this optimization and some cards (looking at you, nVidia) are really good at it.
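The early-z behavior being discussed can be shown with a toy depth-buffer simulation (purely illustrative, my own code): two full-screen layers drawn in different orders, counting how many fragments would actually run the fragment shader.

```java
// Toy depth-buffer simulation: layers drawn in the given order; a
// fragment is shaded only if its depth is closer than what the depth
// buffer already holds, mimicking early-z rejection.
public class DepthOrderDemo {

    // layerDepths: one uniform depth per full-screen layer, in draw order.
    // Returns the total number of fragments that get shaded.
    public static int shadedFragments(float[] layerDepths, int pixelsPerLayer) {
        float zBuffer = Float.MAX_VALUE; // depth buffer starts at "far"
        int shaded = 0;
        for (float depth : layerDepths) {
            if (depth < zBuffer) {       // passes the depth test -> shaded
                shaded += pixelsPerLayer;
                zBuffer = depth;
            }                            // otherwise early-z rejects it
        }
        return shaded;
    }

    public static void main(String[] args) {
        int pixels = 1000;
        // front-to-back: the far layer is fully occluded, never shaded
        System.out.println(shadedFragments(new float[]{1f, 2f}, pixels)); // 1000
        // back-to-front: both layers are shaded (full overdraw)
        System.out.println(shadedFragments(new float[]{2f, 1f}, pixels)); // 2000
    }
}
```

This is why jME's front-to-back sorting of the opaque bucket matters: with per-fragment lighting, the occluded layer's lighting math is skipped entirely; with per-vertex lighting it is paid regardless.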


@pspeed:
Aah, ok, so if a polygon is completely "behind" a previously drawn polygon, that situation can be recognized from the z-buffer information and the graphics card can skip that polygon's fragments? Wouldn't that also mean that it's better to draw polygons closer to the camera first?

Sorry for so many questions, but I think it's really useful information :slight_smile:

@naas said: @pspeed: Aah, ok, so if a polygon is completely "behind" a previously drawn polygon, that situation can be recognized from the z-buffer information and the graphics card can skip that polygon's fragments? Wouldn't that also mean that it's better to draw polygons closer to the camera first?

Sorry for so many questions, but I think it's really useful information :slight_smile:

From my post above:

@pspeed said: Since JME sorts the opaque bin front to back it's possible that moving calculations to the vertex calculation can actually slow things down if you have a lot of vertexes and a lot of overdraw.

So, yes. :slight_smile:

Though I should have said “opaque bucket”. “Bin” is from another scene graph that I used to use.