How many pointlights?

Generally, how many pointlights in a scene would be too much?

I think I currently have about 5-6. I have an i5 in my laptop and a 1 GB graphics card, so not the most overpowered machine in the world, but I generally get FPS between 20-50.

How many pointlights should be used before I would be expecting this sort of FPS? Although not an easy question to answer with limited information, is 5-6 generally low?

@avpeacock said: Generally, how many pointlights in a scene would be too much? …

Every light causes the scene to be rendered an additional time.

If you think you will have a lot of point lights then you may want a different solution for lighting. Are the lights dynamic?

I would consider 5-6 high; I rarely use more than 2 lights.

Obligatory:
[video]http://www.youtube.com/watch?v=EAdGhMRBbzY[/video]

Do your point lights have a large or small radius? If I remember correctly jME uses some form of culling on point lights when a radius is set.

Also, are all your lights in the root node? Try to move them down as much as possible, as then the renderer needs to render less per light.
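For example, a rough sketch (assuming a SimpleApplication; the per-room sub-node is made up):

```java
import com.jme3.light.PointLight;
import com.jme3.math.ColorRGBA;
import com.jme3.math.Vector3f;
import com.jme3.scene.Node;

// Attach the light to the room's sub-node instead of rootNode, so the
// renderer only re-renders that subtree for this light.
Node roomNode = new Node("livingRoom");   // hypothetical sub-node
rootNode.attachChild(roomNode);

PointLight lamp = new PointLight();
lamp.setPosition(new Vector3f(2f, 2.5f, -1f));
lamp.setColor(ColorRGBA.White.mult(1.5f));
lamp.setRadius(6f);   // a finite radius lets jME cull geometry outside it

roomNode.addLight(lamp);   // lights only what's under roomNode
```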

@pspeed said: If you think you will have a lot of point lights then you may want a different solution for lighting. Are the lights dynamic?

I’m not entirely sure what you mean by dynamic. It’s a house, so the pointlights need to be on the root node in order to light surrounding objects, so that when the light goes off in a room everything goes dark. Are there any disadvantages to baking lights into textures/materials?

@Empire Phoenix said: Also, are all your lights in the root node? Try to move them down as much as possible, as then the renderer needs to render less per light.

Yes =). But I’m not sure I can use an alternative such as putting them in sub-nodes/subtrees, because then it’ll only light that single object?

The alternative I have, I guess (and I think it will probably be a much better one), is splitting the house model into individual rooms and loading between entering/exiting doors.

Thank you all for the replies!!

The only disadvantage of baking light is that you cannot move the lights anymore. But if you just want to turn them on and off, you can have two textures, one with “light on” and the other with “light off”. However, I don’t know how to bake lights, i.e. I am not sure that we have something like that in jME.
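In jME the on/off toggle could be as simple as swapping the light map texture. A sketch, assuming the room uses the stock Lighting material (which has a LightMap slot) and both baked textures already exist (the asset paths are made up):

```java
import com.jme3.material.Material;
import com.jme3.texture.Texture;

// Pre-baked light maps (hypothetical assets), loaded once at startup:
Texture lightsOnMap  = assetManager.loadTexture("Textures/room_lights_on.png");
Texture lightsOffMap = assetManager.loadTexture("Textures/room_lights_off.png");

// Toggle the room's baked lighting by swapping the LightMap texture:
Material roomMat = roomGeometry.getMaterial();
roomMat.setTexture("LightMap", lightsOn ? lightsOnMap : lightsOffMap);

// If the light map uses its own UV set, also enable:
// roomMat.setBoolean("SeparateTexCoord", true);
```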

If you are good with Blender you can try to do it there, then export the result.

Also, if you are going this way, remember that you’ll also be able to bake shadows: very accurate and nice shadows. However, you’ll still need to handle the shadow of the player at runtime (but it will be a lot less expensive).
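For the player-only runtime shadow, something like this should do it (assuming a DirectionalLight `sun` and a SimpleApplication; `worldNode` and `playerNode` are made-up names):

```java
import com.jme3.renderer.queue.RenderQueue.ShadowMode;
import com.jme3.shadow.DirectionalLightShadowRenderer;

// Environment shadows are baked into the textures, so the world only
// receives shadows, and only the player casts one at runtime:
worldNode.setShadowMode(ShadowMode.Receive);
playerNode.setShadowMode(ShadowMode.Cast);

DirectionalLightShadowRenderer dlsr =
        new DirectionalLightShadowRenderer(assetManager, 1024, 1);
dlsr.setLight(sun);   // the light the player's shadow is cast from
viewPort.addProcessor(dlsr);
```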

For the record, that’s how Minecraft handles lights: they have different levels of lighting and they change the textures of the cubes according to these levels. OK, that was how they did it before; I don’t know how they do it now, even if it seems to still be true.


@bubuche Pretty sure baking the light and shadows in is something you do in blender, or even photoshop/gimp by just drawing darkness onto the texture


No, I don’t do it in Blender, because I hate Blender. And yes, it’s pretty much just giving the correct “darkness” value on the texture. But if you have a lot of point lights and a lot of reflective things, you cannot afford to do that while the game is running. If you can pre-render it, you can have a very rich environment without killing your GPU.

And if the camera is not moving, you can even go for a “picture-like” approach with a lot of other optimizations. Old 3D games did that (for example FF8. And it’s the best of the series, period :stuck_out_tongue: ).


A quick rundown on deferred rendering!

  1. Create a frame buffer with multiple target textures (see the sketch after this list).
  2. Render the scene offscreen: use a pass to render the normals, and a modified version of Unshaded to write color, normal, lightmap, etc., plus the scene’s depth texture, into their individual target textures of the frame buffer.
  3. Write a composite shader that uses each of the above textures and a list of lights to produce the final render. Set the target output framebuffer to the main buffer to display your output on screen.
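A minimal sketch of steps 1 and 2, using the jME 3.0-era FrameBuffer API in a SimpleApplication context (newer versions may differ; untested):

```java
import com.jme3.renderer.ViewPort;
import com.jme3.texture.FrameBuffer;
import com.jme3.texture.Image.Format;
import com.jme3.texture.Texture2D;

int w = cam.getWidth(), h = cam.getHeight();

// One target texture per G-buffer component:
Texture2D colorTex  = new Texture2D(w, h, Format.RGBA8);
Texture2D normalTex = new Texture2D(w, h, Format.RGBA8);
Texture2D depthTex  = new Texture2D(w, h, Format.Depth);

FrameBuffer gBuffer = new FrameBuffer(w, h, 1);
gBuffer.setMultiTarget(true);          // enable multiple render targets
gBuffer.addColorTexture(colorTex);     // attachment 0
gBuffer.addColorTexture(normalTex);    // attachment 1
gBuffer.setDepthTexture(depthTex);

// Render the scene offscreen into the G-buffer before the main pass:
ViewPort gBufferView = renderManager.createPreView("gBufferPass", cam);
gBufferView.setClearFlags(true, true, true);
gBufferView.setOutputFrameBuffer(gBuffer);
gBufferView.attachScene(rootNode);
```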

Ambient lights are handled differently; you only need the color to apply them. However, your list of lights will need the following info at a minimum:

Type (directional, point, spot, etc)
Position (where the light is)
Color (what the light does)

And then there are a few extras depending on the type of light… spots have direction, attenuation, and falloff (I believe); points have radius… potentially falloff… decay… whatever.

NOTE: OpenGL 2.0 does NOT allow data structures as uniforms. Though it says you can use them… it doesn’t work.

So the tricky part, really, is getting the info into the shader. You can pack some of this info into the color…

For example:

The alpha channel isn’t needed… use it for the radius of pointlights… and the attenuation or falloff of spotlights.
For position… you only need x/y/z… so… the w of the vec4 can be used to store other info.

Basically… you can pack the extra info needed for the lights into existing uniforms and not worry about reaching the limit just to perform deferred rendering.
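Here’s a rough sketch of that packing on the Java side. The parameter names (“LightPositions”, “LightColors”) are made up; your composite shader’s material definition would have to declare them as Vector4Array params:

```java
import java.util.List;

import com.jme3.light.PointLight;
import com.jme3.material.Material;
import com.jme3.math.ColorRGBA;
import com.jme3.math.Vector3f;
import com.jme3.math.Vector4f;
import com.jme3.shader.VarType;

// Pack each light into two vec4s:
//   positions[i] = (x, y, z, type)    - w carries the light type
//   colors[i]    = (r, g, b, radius)  - alpha carries the radius
void pushLights(Material compositeMat, List<PointLight> lights) {
    Vector4f[] positions = new Vector4f[lights.size()];
    Vector4f[] colors    = new Vector4f[lights.size()];
    for (int i = 0; i < lights.size(); i++) {
        PointLight light = lights.get(i);
        Vector3f p = light.getPosition();
        ColorRGBA c = light.getColor();
        positions[i] = new Vector4f(p.x, p.y, p.z, 1f);  // 1 = point light
        colors[i]    = new Vector4f(c.r, c.g, c.b, light.getRadius());
    }
    compositeMat.setParam("LightPositions", VarType.Vector4Array, positions);
    compositeMat.setParam("LightColors",    VarType.Vector4Array, colors);
}
```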

Yep yep… lots of rambling… but… worth mentioning how to start looking into making this happen.

SIDE NOTE: The depth texture is obviously needed to composite the final frame output… however, many of the post filters require this (and a normal pass as well).

The problem is… the way filters expect these does not account for deferred rendering. You will need to modify them: override the methods that say they require the depth texture or the scene texture and have them return false, then pass in the needed textures yourself from the buffer you created for capturing the rendered scene components during the deferred process.
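A skeleton of what that might look like as a custom Filter (the .j3md path and the “DepthTexture” param name are made up; the protected hooks are from jME3’s com.jme3.post.Filter):

```java
import com.jme3.asset.AssetManager;
import com.jme3.material.Material;
import com.jme3.post.Filter;
import com.jme3.renderer.RenderManager;
import com.jme3.renderer.ViewPort;
import com.jme3.texture.Texture2D;

public class DeferredAwareFilter extends Filter {

    private final Texture2D gBufferDepth;  // depth captured in the G-buffer pass
    private Material material;

    public DeferredAwareFilter(Texture2D gBufferDepth) {
        super("DeferredAwareFilter");
        this.gBufferDepth = gBufferDepth;
    }

    @Override
    protected boolean isRequiresDepthTexture() {
        return false;  // don't let the FilterPostProcessor render its own depth pass
    }

    @Override
    protected void initFilter(AssetManager assets, RenderManager rm,
                              ViewPort vp, int w, int h) {
        material = new Material(assets, "MatDefs/MyFilter.j3md");  // hypothetical
        material.setTexture("DepthTexture", gBufferDepth);         // feed G-buffer depth
    }

    @Override
    protected Material getMaterial() {
        return material;
    }
}
```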

SIDE NOTE 2:

Keep shadow rendering in mind when you do this… there are pitfalls. Maybe someone else can elaborate on this a bit more. I have to run errands!! Or was supposed to, anyway.

FINALLY:

The whole purpose of doing this is to allow for as many lights as you would like without impacting your FPS past the initial process. You can then still use lights dynamically (make fire light flicker… move point lights with torches, etc, etc, etc).

You should be able to use more lights than you would need with no problem at this point.

@t0neg0d said: A quick rundown on deferred rendering! …

Thank you so much T0neg0d for your detailed and helpful responses as always!

I’m unsure about a few parts you’ve written and not 100% sure how to go about this, but that’s good! It gives me a platform to learn and work from, so hopefully in a week or two I’ll be able to fully decipher your wisdom ;).

Please remember that the tradeoff of deferred lighting is the problem with translucent materials. You need to hack around to get transparency working: by doing a multi-layer G-buffer, screen-door transparency with blurring, or using forward rendering for transparent objects with different shader/light logic.
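The forward-rendering workaround is probably the least painful in jME: keep transparent geometry out of the G-buffer viewport and leave it in the normal forward pass. A sketch (node and geometry names are made up, reusing `gBufferView` from the setup sketch above):

```java
import com.jme3.renderer.queue.RenderQueue.Bucket;

// Transparent geometry stays in the main (forward) viewport,
// sorted back-to-front by the Transparent bucket:
glassGeom.setQueueBucket(Bucket.Transparent);
transparentNode.attachChild(glassGeom);

// Only the opaque scene goes into the deferred G-buffer pass:
gBufferView.attachScene(opaqueNode);
```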

Plus, if you use the 2-stage process described above, you won’t be able to use different fancy materials for objects. Again, there are workarounds: you can pass a material ID to the light stage and use an uber-shader with a switch per material type, or you can use a 3-stage process, where you output the light stage into another G-buffer and then do the composition of aggregated lights and material in a final, per-object shader. Again, tradeoffs and complexity.

But effects can be stunning.

Now, if somebody would code something which would handle a lot of lights for opaque objects, a few lights for transparent objects, proper particles, multiple point-light shadows, volumetric light/fog, reflections and glow in one easy-to-use package… please let me know :wink:

@javagame said: @bubuche Pretty sure baking the light and shadows in is something you do in Blender, or even Photoshop/GIMP, by just drawing darkness onto the texture

Blender actually has a tool that lets you ray trace a light map from the scene - so it actually gives proper lights and shadows in the generated light map.