I am noticing rendering issues with the DirectionalLightShadowFilter when I have large values in my x and z coordinates. The rendering issues go away if I “normalise” my coordinates so they are closer to 0.
As an example of “large coordinates” I mean x=14500 and z=75000.
If I rerender the exact same scene, but shifting my coordinates so that x=1100 and z=4600, the rendering issues go away.
If I turn off the DirectionalLightShadowFilter, or use a DirectionalLightShadowRenderer instead, the issues go away.
The “issues” I am reporting are as follows:
Small changes in the direction of the camera result in large changes to the shadows. These changes seem to cycle (i.e. if I slowly pan the camera from right to left, the shadows pass through a few different states and then return to the original state)
I can see square “tiling” of the shadows. I think these match the shape of the meshes in the scene (I am using the Blocks framework, which creates square meshes)
I have attached some screenshots that attempt to show the issue. I have shown the same scene viewed from slightly different angles, and you can see the shadows are rendering very differently. You can also see the “tiling” effect I was describing.
Another screenshot shows the same scene, with coordinates reduced so that the issue does not appear.
Workarounds I have considered:
Use DirectionalLightShadowRenderer instead of DirectionalLightShadowFilter. The DirectionalLightShadowRenderer does not give the right look for my application, whereas DirectionalLightShadowFilter does (when I use smaller coordinates)
Use smaller coordinates. My world is quite large, so if I “normalise” one part of the world so that it renders correctly, other parts of the world will show this issue instead.
I increased the shadowMapSize from 1024 (my previous value) through several intermediate values up to 16384 (which I believe is the maximum). That didn’t fix the issue.
The ShadowRenderer looks very different to the ShadowFilter when I use it (even before I noticed this issue). Are you saying they should look the same? I don’t see any of these issues when I use the ShadowRenderer. Even the API is quite different…
Here is my code (obviously I only enable one or the other: I am not adding both the Renderer and the Filter at once):
```java
sun = new DirectionalLight(new Vector3f(-0.1f, -1f, -0.1f).normalizeLocal(), ColorRGBA.White);
final int SHADOWMAP_SIZE = 1024;
DirectionalLightShadowRenderer dlsr = new DirectionalLightShadowRenderer(assetManager, SHADOWMAP_SIZE, 4);
dlsr.setLight(sun);
DirectionalLightShadowFilter dlsf = new DirectionalLightShadowFilter(assetManager, SHADOWMAP_SIZE, 3);
dlsf.setLight(sun);
FilterPostProcessor fpp = new FilterPostProcessor(assetManager);
fpp.addFilter(dlsf);
```
Note that ‘float’ (single-precision floating point) starts to lose noticeable precision for physics, etc. at 65,000 or thereabouts… so you will run into other issues eventually as well.
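To illustrate the precision point, here is a standalone sketch (plain Java, nothing jME-specific): `Math.ulp` gives the gap between adjacent representable `float` values, and near the reported coordinates that gap is already large enough to swallow small movements.

```java
public class FloatPrecisionDemo {
    public static void main(String[] args) {
        // Spacing between adjacent representable floats near the "large" coordinates:
        System.out.println(Math.ulp(75000f)); // 0.0078125 — steps below ~8 mm are unrepresentable
        // ...versus near the "normalised" coordinates:
        System.out.println(Math.ulp(1100f));  // 1.2207031E-4 — sub-millimetre resolution

        // Adding a tiny offset to a large coordinate is simply lost:
        float large = 75000f;
        System.out.println(large + 0.001f == large); // true — the offset rounds away
    }
}
```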
As others have said, for anything other than fixed-size maps of relatively small size (a square km or so), you are better off keeping the camera at or near 0,0,0 and moving the world.
Note 2: because of the way JME handles its world transforms, world bounding volumes, etc., it’s ever so slightly more performant to avoid moving the root node every frame. If you choose to keep the camera at 0,0,0 then you should at least not move the root node when the player hasn’t actually moved (a simple if check will avoid recalculating the entire world transform/volume hierarchy). And for games that page in other tiles at boundary crossings, it can be a little better to let the camera roam freely inside a tile and reset its position only when it crosses a tile border (since you will already be shifting the paging region, etc. anyway). It’s trickier, though.
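A minimal, engine-agnostic sketch of the floating-origin idea described above (the class and method names here are hypothetical illustrations, not jME API): keep a running world offset in doubles, shift the scene back by the camera's position when you recenter, and recover true world positions from the small local coordinates when needed.

```java
// Hypothetical floating-origin helper; plain Java, no engine dependency.
public class FloatingOrigin {
    // Accumulated world-space offset of the local frame's origin.
    // Doubles here, so the large absolute positions keep full precision.
    private double originX, originZ;

    /**
     * Recenter the local frame so the camera returns to (0, 0).
     * Returns the shift to apply to every object (and the camera itself).
     */
    public float[] recenter(float camX, float camZ) {
        originX += camX;
        originZ += camZ;
        return new float[] { -camX, -camZ };
    }

    /** True world position of a small local coordinate, recovered in double precision. */
    public double[] toWorld(float localX, float localZ) {
        return new double[] { originX + localX, originZ + localZ };
    }
}
```

With the coordinates from the original report: recentering on a camera at (14500, 75000) shifts everything by (-14500, -75000), so an object that was at world (14503, 74998) now sits at local (3, -2) — well inside float's precise range — and `toWorld(3, -2)` recovers (14503, 74998). The "simple if check" from the note above corresponds to only calling `recenter` when the camera has actually moved (or crossed a tile border).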
Also, either way, things like the water filter will look strange as they don’t provide any way to offset the world position. (Always meant to add that to the water filter.)