2D Shadow and Lighting - Best Method?

So I have been trying quite a few different methods over the last week or more on how to actually accomplish 2D shadow and lighting in JME - for those that are unsure of what I mean, I am talking about this:

I want to do this on the GUI node because it functions how I would expect a 2D game to function (x and y are based on the resolution, and z is basically a z-order). I already know about parallel projection for the camera so it faces directly down onto a 3D scene, but I'm effectively trying to draw a 2D scene, not embed a 2D scene in a 3D one. It doesn't look right and I really don't want to go down that route. I much prefer using the GUI node, and I would appreciate it if this discussion didn't involve doing it any other way. I appreciate that doing it in the GUI node means I'll have to "roll my own" lighting and shadows - but I knew that anyway. The JME lights and shadows are for 3D environments, not 2D.

I’ll list my points of reference first, just so it’s easier to follow my thought process:

So this is the process as far as I am aware to produce this result:

  • Create an “occlusion map” - create a black and white texture that displays all items that cast a shadow in black, and the rest in white.


In the image above, the occlusion map shows that the black area will cast a shadow, and the white area will not.

  • Create a “distance map” - create a black and white texture that represents the distance of these objects from the center of the image (the light origin).


The image above isn’t quite right - I’ve been tampering with the code…

  • Create a “reduced” distance map - collapse the distance map into a 1D texture via a rectangular-to-polar conversion, so that each pixel represents an angle and its shade of grey encodes the distance to the nearest occluder along that angle.


I managed to get this somewhat working using the CPU. It’s slow as hell and I would really like to get this thing on the GPU to speed things up.
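The CPU-side reduction boils down to something like this plain-Java sketch (the class and parameter names are illustrative, not my actual code, and it assumes the light sits at the centre of the occlusion grid):

```java
// Sketch of the CPU polar scan that builds the 1D "reduced" distance map:
// for each angle, march from the light (the grid centre) outward and record
// the normalised distance to the first occluder.
public class ReducedMap {

    /**
     * @param occluder occluder[y][x] is true where the occlusion map is black
     * @param samples  width of the resulting 1D texture (one entry per angle)
     * @return distances in [0,1]; 1.0 means "no occluder on this ray"
     */
    public static float[] build(boolean[][] occluder, int samples) {
        int h = occluder.length, w = occluder[0].length;
        float cx = w / 2f, cy = h / 2f;
        float maxR = Math.min(cx, cy);
        float[] reduced = new float[samples];
        for (int i = 0; i < samples; i++) {
            // map sample index back to an angle in [-PI, PI)
            double theta = (i / (double) samples) * 2.0 * Math.PI - Math.PI;
            float dist = 1f; // default: the ray escapes unoccluded
            for (float r = 0f; r <= maxR; r += 0.5f) {
                int x = (int) (cx + r * Math.cos(theta));
                int y = (int) (cy + r * Math.sin(theta));
                if (x < 0 || y < 0 || x >= w || y >= h) break;
                if (occluder[y][x]) { dist = r / maxR; break; }
            }
            reduced[i] = dist;
        }
        return reduced;
    }
}
```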

I managed to get a fragment shader working that does the last step (polar conversion) to display the lighting somewhat - although it’s still buggy.

uniform sampler2D m_reducedMap;
uniform vec2 resolution;
uniform vec2 lightPosition;

varying vec2 texCoord;

//sample from the 1D distance map
float sample(vec2 coord, float r) {
    return step(r, texture2D(m_reducedMap, coord).r);
}

vec4 processReducedMap() {

    float PI = 3.1415927;

    //rectangular to polar
    vec2 norm = texCoord.st * 2.0 - 1.0;
    float theta = atan(norm.y, norm.x);
    float r = length(norm);
    float coord = (theta + PI) / (2.0 * PI);

    //the tex coord to sample our 1D lookup texture
    //always 0.0 on y axis
    vec2 tc = vec2(coord, 0.0);

    //the center tex coord, which gives us hard shadows
    float center = sample(tc, r);

    //we multiply the blur amount by our distance from center
    //this leads to more blurriness as the shadow "fades away"
    float blur = (1.0 / resolution.y) * smoothstep(0.0, 1.0, r);

    //now we use a simple gaussian blur
    float sum = 0.0;

    sum += sample(vec2(tc.x - 4.0*blur, tc.y), r) * 0.05;
    sum += sample(vec2(tc.x - 3.0*blur, tc.y), r) * 0.09;
    sum += sample(vec2(tc.x - 2.0*blur, tc.y), r) * 0.12;
    sum += sample(vec2(tc.x - 1.0*blur, tc.y), r) * 0.15;

    sum += center * 0.16;

    sum += sample(vec2(tc.x + 1.0*blur, tc.y), r) * 0.15;
    sum += sample(vec2(tc.x + 2.0*blur, tc.y), r) * 0.12;
    sum += sample(vec2(tc.x + 3.0*blur, tc.y), r) * 0.09;
    sum += sample(vec2(tc.x + 4.0*blur, tc.y), r) * 0.05;

    //multiply the summed amount by a radial falloff from the light
    vec4 vColor = vec4(1.0, 1.0, 1.0, 1.0);
    return vColor * vec4(vec3(1.0), sum * smoothstep(1.0, 0.0, r));
}

void main() {
    gl_FragColor = processReducedMap();
}
The fruits of my labour are somewhat frustrating - and not really working in my favour:


My process involves creating each texture on the CPU - looping over X and Y to create an occlusion map, then using that map to create a distance map, then using the distance map to create a reduced map, then finally passing the reduced map to the shader.
I do this by creating a com.jme3.texture.Image and a com.jme3.texture.image.ImageRaster to modify the pixels. As is expected, continually looping 480,000 times per frame is not cool.

I have been reading about framebuffers - and have seen one or two examples of how to do this in JME - but if I were to do this using shaders alone, I would need to get the output of the framebuffer 3 times in one frame - which I don’t think is possible.

Does anyone have any ideas on how best to approach this?


Note: I don’t know why you are so hung up on the GUI node. It is very special in that it completely flattens Z: Z is only used for render order, and there is no depth buffer. The depth of every pixel is forced to 0, as if all your Z were 0. This is why all of your regular shadow approaches would fail - there is no depth to work with.

Contrast that to an orthographic view, where you will still have depth and it would otherwise behave exactly like the GUI node… just without the flattening. (i.e.: if you chose to render 3D models then they wouldn’t look all messed up as they will in the GUI node.) So if you rendered boxes they would have depth that could be correlated to a shadow map. Still, conventional shadows for so many lights will always be slow.

Anyway, since you are already grid-based, I wonder if a cellular automata approach would work better. The simple approach would be to only use that but you can use it as the basis for more.

Pseudo code:

pending.add(lightLocation, lightIntensity)
while( pending not empty ) {
    (location, intensity) = pending.remove()
    plot the light intensity on your grid at location
    for all four directions {
        if( intensity - falloff > existing intensity there ) {
            pending.add(neighborLocation, intensity - falloff)
        }
    }
}
That is minecraft style lighting in 2D. And it’s very fast. It bleeds around corners, though.
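A minimal Java sketch of that flood fill, assuming a falloff of one intensity level per cell and a solid[][] mask for blocking tiles (the names are mine):

```java
import java.util.ArrayDeque;

public class FloodLight {

    /** Propagate light outward from (lx, ly), losing one intensity level
     *  per cell stepped; solid cells block propagation entirely. */
    public static int[][] propagate(boolean[][] solid, int lx, int ly, int intensity) {
        int h = solid.length, w = solid[0].length;
        int[][] light = new int[h][w];
        ArrayDeque<int[]> pending = new ArrayDeque<>();
        pending.add(new int[]{lx, ly, intensity});
        int[][] dirs = {{1, 0}, {-1, 0}, {0, 1}, {0, -1}};
        while (!pending.isEmpty()) {
            int[] cur = pending.poll();
            int x = cur[0], y = cur[1], level = cur[2];
            if (x < 0 || y < 0 || x >= w || y >= h) continue;
            // only continue if this path actually brightens the cell
            if (solid[y][x] || level <= light[y][x]) continue;
            light[y][x] = level; // plot the light intensity on the grid
            for (int[] d : dirs) {
                pending.add(new int[]{x + d[0], y + d[1], level - 1});
            }
        }
        return light;
    }
}
```

Because a cell is only revisited when a brighter path reaches it, the queue drains quickly - but, as noted, light happily bleeds around corners.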

At any rate, something similar could be done by only checking visited state for that particular light and casting a ray for each cell encountered that is not solid. You’d just have a lot more cells to consider. If you aren’t already, you will certainly want to process things light-centric, though. You will avoid doing work for 90% of your grid cells that way.

I have a project doing almost this - not JME, and the project is mostly dead - but anyhoo, I used a library called straight edge to build a polygon of “things in light”, then some shaders, buffers and blend modes to draw the light.
Here is a rough draft showing the light-polygons (the white polygons):

Never mind the graphics - those are not the final shaders.
I read some other articles on doing 2D lighting; if you want to go with image-based techniques, this seemed pretty nice:

Edit: Oh, you already knew about the image based stuffs, sorry, should’ve checked your links.

I did attempt tile-based lighting, but it was just as slow, if not slower, than this method, because each tile with a different color needed a new material - and thus was a unique object - and if the whole scene was lit with, say, 20 torches, the object count would be through the roof. This is what got me thinking of putting a full-screen quad in front of the game and using transparency attenuation on a large texture for light - which got me into this whole debacle…

…unless the color is part of the tile.

You will end up writing a shader no matter what you do.

Ah. Ok… I see. So I could create a chunk of tiles, say 16x16 for argument’s sake, use GeometryBatchFactory on it to reduce the object count - which requires them to share the same material - then send an additional 16x16 texture to the shader (one pixel for each tile) assigning the light value to each tile in that chunk. Also, instead of iterating over each tile every frame to query its light value, only update the ‘light texture’ whenever the light changes; otherwise keep giving the shader the same texture.
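The packing side would be something like this - the actual upload would go through com.jme3.texture.Image / ImageRaster, but the buffer layout is just one RGBA pixel per tile (helper names are mine):

```java
public class ChunkLightTexture {

    public static final int CHUNK = 16; // 16x16 tiles per chunk

    /** Flat RGBA8 buffer, one pixel per tile, row-major like ImageRaster. */
    public static byte[] create() {
        return new byte[CHUNK * CHUNK * 4];
    }

    /** Write one tile's light colour (values 0..255). Only called when a
     *  light actually changes - never every frame. */
    public static void setTile(byte[] data, int x, int y, int r, int g, int b, int a) {
        int i = (y * CHUNK + x) * 4;
        data[i]     = (byte) r;
        data[i + 1] = (byte) g;
        data[i + 2] = (byte) b;
        data[i + 3] = (byte) a;
    }
}
```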

Sounds intriguing. I’ll give it a shot.

Ok - so I gave it a shot - and lo and behold - it works! And it works very, very well. Notice the frame rate below.


The white is the background tiles. The grey is the foreground tiles. The green is the light (which doesn’t clean up after itself right now). I achieved it by putting an additional quad in front of each chunk and creating two textures that are passed to a custom shader. Each texture is 16x16 (16x16 tiles in each chunk) - the first texture contains lighting data (color, etc.), and the second texture (for now) uses the red channel to distinguish a back tile from a front tile, which allows me to churn down the opacity and stop lighting if it’s a foreground tile.

Using an ImageRaster to modify the textures makes it super-quick. So far I’m really pleased :smiley: It seems to fulfil all my requirements, and best of all it’s all on the GPU.

Thanks again for the advice <3


So here are the results for those looking for this effect - not bad really. Lights are pretty much cost free.