Optimize terrain shaders for tri-planar mode per layer, remove fog defines to make room?

Not entirely sure which category to put this in, since the only terrain-related category is troubleshooting, and this isn't a troubleshooting question.

Lately I've been thinking about ways to optimize PBR terrains for rendering large open worlds, and one of the biggest resource sinks in all three of jME's stock terrain shaders is tri-planar mapping, since it triples the number of texture reads.

Currently the terrain shaders use a single boolean define to toggle tri-planar mapping, so you either turn it on or off for all 12 texture layers at once.

So I'm thinking of refactoring jME's terrain shaders to add 12 new boolean defines that enable tri-planar mapping on a per-layer basis. That way you could, for example, enable it for a mountain texture that is primarily painted onto vertical terrain, while disabling it for layers that don't need it, and you could be even more aggressive for terrain at a far distance. In some cases this could save 20+ texture reads per terrain when using AdvancedPbrTerrain.
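For context on why tri-planar mapping is so expensive: it projects the texture along each world axis and blends the three samples by the surface normal. Here is a minimal sketch of that blend-weight math, written in Java purely for illustration (none of these names are jME API; the real logic lives in the GLSL shader):

```java
// A sketch of the blend-weight math behind tri-planar mapping, in Java for
// illustration only. The shader samples the texture once per world axis and
// mixes the three results by these weights, which is why tri-planar mode
// triples the texture reads per layer.
public class TriPlanarWeights {

    // Weights for the X, Y, and Z projections, derived from the world normal.
    public static float[] blendWeights(float nx, float ny, float nz) {
        float ax = Math.abs(nx), ay = Math.abs(ny), az = Math.abs(nz);
        float sum = ax + ay + az;
        return new float[] { ax / sum, ay / sum, az / sum };
    }

    public static void main(String[] args) {
        // Flat ground: the top-down (Y) projection dominates, so a single
        // UV-mapped sample would look nearly identical at a third of the cost.
        float[] flat = blendWeights(0f, 1f, 0f);

        // A cliff face: the side projections dominate, which is exactly
        // where a per-layer tri-planar toggle would pay off.
        float[] cliff = blendWeights(1f, 0f, 0f);

        System.out.println(flat[1] + " " + cliff[0]); // prints "1.0 1.0"
    }
}
```

This is why a per-layer toggle helps: layers painted mostly onto flat ground get almost nothing out of the two extra samples.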

The only potential issue is that the AdvancedPbrTerrain shader (the one that would benefit most from this optimization) has already hit the limit of 64 defines in its .j3md file, so in order to make this change on my local branch I had to remove the in-shader fog code that jayfella added to the PBR shader a long time ago. Personally I had no issue axing that fog code: it was broken for terrains, the original author is no longer around to support it, and the actual FogFilter works much better in my experience.

Do any other jME devs have thoughts on this? I'm also curious whether anyone currently using the PBR terrain shaders with in-shader fog would be negatively affected by this (although that in-shader fog is so broken with terrains that I'd be surprised if anyone is actually using it).


Is this a JME limitation or a GLSL limitation?

The problem with post-processing fog is that it applies to everything even if you don’t want it to. And if you want to render your terrain in multiple layers with different scales then you are totally out of luck.

Post processing fog is a kludgy hack, really.

Also, killing 12 parameter slots for a bunch of booleans seems unfortunate. If we can fix the limitation that makes that a scarce resource then that would be the better option.

Is the issue that JME requires material parameters in order to have defines? I’ve often felt that the inability to set defines directly is a big limitation.


I haven't tried filter-based fog much yet, not as much as the in-shader fog. I actually disliked how the in-shader fog didn't apply to the water and skybox I was using, but I can see how the fog filter would also be bad when it renders fog over objects you don't want it to, like overhead health bars.

I think I experimented with the fog filter for very light fog, which was likely too light for me to notice any issues, and at the same time used the in-shader fog to blend out the far terrain so players can't tell where the frustum culling is happening. But intense fog was where I ran into issues that the original author of the in-shader fog unfortunately wasn't able to help with, and to be honest I think I got too frustrated to keep following up on it at the time lol.

Here's an example video I made at the time showing the issue with in-shader fog at high intensities. Any TerrainPatch within range of a point light would suddenly light up weirdly like this, but it only happened at high fog intensities:

I suppose it's been a few jME versions since then, so maybe I should try again and see if anything has changed. (edit: I tried the in-shader fog again and it is still bugged)

That is a good question. I tried looking this up in the past, and it sounded like there is no official value set by GLSL itself, but I found documentation referring to a parameter that determines this limit, so I think it can be changed when configuring the engine.

Although I'm noticing that I'm now able to go past the limit of 64 without getting an error. The last time I hit that error was on an older version of jME, so maybe something changed since then... I'm certain I did get the error before, though, otherwise I would have left the fog code in on my own fork.

But I guess if I'm no longer hitting that limit, I should be able to go ahead and add the 12 extra tri-planar defines without having to worry about removing the fog code.

I suppose it's not an issue now, since the define-limit error is no longer occurring and I've gone past 64, but it would definitely be nice to be able to set defines without having to declare a matching material parameter, especially in cases like this where it's just a boolean value that never needs to be read as a uniform.

Yeah, I have a bunch of cases where I had to "waste" a material parameter on a boolean or a single float. And while I'm not entirely sure this counts against the material parameter limit (since that may only apply to uniforms actually bound in the shader), it still bothered me a bit to set up infrastructure for a material parameter that I will only ever map to a define.


…but it's bugged only for terrain, yes? JME terrain is not well written. I do not trust it very much. There is no engine reason that in-shader fog would affect lighting at all… I certainly do not run into this issue with my own forked Lighting.j3md shaders.

In Mythruna, it's the lights that I want left unfogged. Post-processing fog would kill all of my far-away lights at night, and that's no fun. There are some other effects like that which I would prefer to either not fog at all or fog with a different set of parameters.


No, it is also happening with my other models, actually. I wasn't sure, so I tested again, and it seems like any intense fog makes the entire geometry flicker when I get close enough while holding a weapon that has a point light.

If I remember correctly, it didn't even end up being related to fog specifically. I reduced my shader to a single line in main() that hard-coded the output color to the same gray that intense fog produces, and the issue still occurred. So it seemed to be a deeper jME issue that the gray of intense fog just happened to highlight.

Here was the thread I made about the issue from a while ago:

I think the last thing mentioned in that thread was you saying I should make a test case, but I wasn't certain at the time how best to show the issue, and I eventually forgot about it once I stopped using the fog in my game. Maybe I should look into it more, since the issue is still happening.


Ah, I don’t use point lights and so wouldn’t have seen this issue.

I hit the error telling me I have too many defines again, but it only happens with one specific map that must be using more textures and consequently more defines.

I think I made the incorrect assumption that the 64 limit referred to the number of defines declared in the .j3md file. It seems like it only counts the defines actually in use, since the error happens on this map and not another one using the same shader. Although I'm still slightly confused, to be honest, since the other day I thought I was using more than 64 defines with values assigned to their matching uniforms... so I was likely making some mistake in my testing at the time.

Here’s the error I get when it fails to load a terrain that is using too many defines:

Caused by: java.lang.IllegalStateException: Cannot have more than 64 defines on a technique.
	at com.jme3.material.TechniqueDef.addShaderUnmappedDefine(TechniqueDef.java:449)
	at com.jme3.material.logic.SinglePassAndImageBasedLightingLogic.<init>(SinglePassAndImageBasedLightingLogic.java:74)
	... 57 more

So to make the changes I planned, I do think we would need to find a way to raise the engine's 64-define limit (without causing other issues in doing so, of course), and I think raising it would likely benefit other jME users working with large shaders of their own as well.

It looks like it should be as simple as changing this constant value here:

Any thoughts on this from jme devs/users?


I looked in the code.

…it’s 100% arbitrary and because we were lazy and fully allocated the array instead of letting it expand as we go.

Also, considering this class is just spitting out strings in the end… it seems needlessly complicated that it stores int[] in the first place rather than just using the Object values already provided. (In which case a SafeArrayList would have been a fine choice for keeping the values.)

As to your issue randomly cropping up, there is something called “unmapped defines” that are set on the technique. I did not look into it but maybe these are internally used defines that are not part of the j3md and so wouldn’t always factor into your count.

In any case, the DefineList definitely needs an overhaul.


I've opened an issue regarding refactoring this class in the ways you mentioned.

I think it may be a slightly bigger task than I first thought, since (depending on the specifics of the refactor) it may require changes in a lot of other classes that use DefineList. But I'm by no means an expert on this part of the engine and have only briefly looked at the relevant source in the past day, so this discussion can continue on the issue I just filed.

In the meantime, I've also submitted another PR increasing the define limit from 64 to 128 to at least bypass the issue until DefineList is refactored to let the array grow as needed. That way I can continue making the tri-planar optimizations to the terrain shaders and potentially have them included in the next 3.6.1 engine release as well.


I don't know if this information will help you fix this, but according to the git blame, the DefineList class used to be manipulated through a TreeMap and was a Savable. It might be a good idea to look back through the git blame in detail before settling on a solution.

There is a commit that seems to fully change the logic of how this works.

Not sure what you are referring to. DefineList doesn’t use TreeMap.

The issue in question starts here:

…and to some extent, the line above.

DefineList pre-allocates an int array, but because it cannot tell whether a value has been set or not, it uses the isSet variable (64 bits) to track which entries have been set and which haven't. This is why it can only support a maximum of 64 defines.

And it’s the only reason.

Fixes can be made without changing any other classes; all of them would be internal. Even just switching to an object array and having isSet check for null would remove the 64-entry maximum and cost essentially nothing more than the int-array approach.

Your PR cannot work as-is because it doesn't fix the isSet issue. Upping the limit to 128 would require a second isSet long… or a different approach (maybe a BitSet instead).
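To make the mechanism concrete, here is a simplified sketch of the two approaches being discussed (hypothetical classes, not jME's actual DefineList code): a single long can only track 64 "is set" flags, while a BitSet grows as needed.

```java
import java.util.BitSet;

// Simplified sketch of the pattern described above (not jME's actual code):
// one bit per define index in a long means only 64 entries can be tracked.
public class DefinePresence {
    private long isSet; // only 64 bits available, hence the hard limit

    public void set(int index) {
        if (index >= 64) {
            // Mirrors the error the terrain shader runs into.
            throw new IllegalStateException(
                    "Cannot have more than 64 defines on a technique.");
        }
        isSet |= (1L << index);
    }

    public boolean isSet(int index) {
        return index < 64 && (isSet & (1L << index)) != 0;
    }
}

// Alternative: a BitSet grows internally, so there is no fixed maximum.
class BitSetPresence {
    private final BitSet isSet = new BitSet();

    public void set(int index) { isSet.set(index); }
    public boolean isSet(int index) { return isSet.get(index); }
}
```

The null-check-on-an-Object-array idea mentioned above would work the same way: presence is encoded by "slot is non-null" instead of a bit, so no separate mask is needed at all.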

It’s a very strange implementation and has existed for a really long time… since the beginning of JME3 I think.


I am talking about this commit (in case it has concerns related to the fix):

It concerns it only in the sense that DefineList was added in that commit.

Again, we are not talking about changing the outward behavior of this class other than removing the max defines limit.

If someone can point to relevant code that would have a problem with a new max defines limit (preferably something more narrow than ‘the whole codebase’) then that would be useful.


Is anyone able to take a look at this recent PR and offer any insight on how to fix the final failing test? It looks like this one failing unit test is the last big roadblock to getting the PR finished.

Someone will have to run the test to see what the order of the sort is and then change this line:

So that the order matches what’s expected.

It looks like some information is dumped as the test is run which will be in the report if you run tests locally.


Are we talking about MaterialParameters here? Why not define one ByteArray rather than 12 bytes?

I’m not sure, to be honest. These type of lower level optimizations are not my specialty.

If I had written this class from scratch I'd probably have just used an array of booleans instead of bit-shifting a long or even using a BitSet :laughing: although I do suspect there was a good reason DefineList chose bits over booleans, since I think new DefineList instances get created extensively in the engine… but again, I'm not a specialist in this area, so I'm just assuming.

If anyone wants to further optimize DefineList, I think that would be best done in another PR. But for now I’m just focused on getting that last wonky unit-test to pass to fix the original bug limiting defines to 64.

I removed 5 failing unit tests that were apparently unnecessary, so now I just need help getting the last one to pass, since it sounds like that one is necessary and should not be removed. (Although I disagree and would remove it, I'm still too confused about why this last test even exists to say that confidently. It just seems strange that the test can be made to pass by changing the order of parameters in the test's code rather than the code in my PR… so it's clearly not telling me anything useful. What is it even doing, then? Maybe I'm just overly skeptical, considering 5 of the 6 other failing tests turned out to be unnecessary too.)

When you are sorting geometry front to back, it's important that if a > b and b > c then a > c. Or bad, bad, bad things happen: the rendering pipeline crashes, the app stops, etc. (Edit: it also stands to reason that if A > B then B < A, and that comparing A and B again should give the same order… all of which matters for a quicksort, which may compare values on different sides multiple times in the same sort and expects consistent results each time.)

Bad ju-ju.

I believe the tests that are checking sort stability are doing that. In order to do that they had to make some assumptions about INITIAL SORT ORDER… ie: the SORT ORDER THAT’S PASSED TO THAT METHOD.

If you pass sort(a, b, c, d, e) to that method, it sorts it backwards, reverses it… and now has a, c, b, d, e because now the hashcode sorts them differently… it doesn’t mean that the sort is unstable. It means that the initial assumption about sort order is now different and sort(a, c, b, d, e) needs to be called instead.

I really don’t know how to explain this in more detail.
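To illustrate the contract being described with a toy Java example (sorting plain integers, not the engine's actual geometry comparator): a comparator must be transitive and must give the same answer for the same pair every time, which is exactly what makes re-sorting an already-sorted list a no-op.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class SortConsistency {
    // A comparator used for sorting must be transitive and consistent:
    // comparing the same pair must always give the same answer, or a
    // sort can corrupt its own partitions mid-run.
    static final Comparator<Integer> BY_VALUE = Comparator.naturalOrder();

    public static void main(String[] args) {
        List<Integer> items = new ArrayList<>(List.of(42, 7, 99, 7, 13));

        List<Integer> once = new ArrayList<>(items);
        once.sort(BY_VALUE);

        // Re-sorting an already-sorted list must leave it unchanged.
        List<Integer> twice = new ArrayList<>(once);
        twice.sort(BY_VALUE);

        System.out.println(once.equals(twice)); // prints "true"
    }
}
```

The flip side, as described above, is that a stability test which hard-codes an expected input order has baked in an assumption about the keys; if the keys change (e.g. a hash-derived order shifts), the hard-coded order in the test must change with it even though the sort itself is still correct.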




I didn't mean to imply that I intend to actually remove it, since it is doing things I still don't understand, and (unlike the other 5 more straightforward tests that were removed) I could at least tell at first glance that this test was not written solely for troubleshooting DefineList and has ramifications beyond the scope of the DefineList class I'm working on.

I'm more confused as to why my PR broke this test. I didn't quite understand how the changes I made would affect the sort order and cause the test to suddenly fail.

But I'm happy enough changing the order of the parameters in the unit test to solve it.

But I couldn't help mentioning that it seems illogical that the test is written in a way where a correct PR can still cause it to fail. That could imply something is broken with the unit test itself, and it will need updating again the next time someone's PR breaks it for the same reason mine did. (Or maybe it isn't possible to write the test so that it automatically accounts for changes to hash codes, in which case it isn't broken and just needs to be updated manually whenever the hash-derived order changes… in which case I guess I should at least add a comment to the test explaining that.)

But my apologies for implying I want to remove that test; that certainly isn't the correct solution. I guess I'm just ranting, since it's frustrating that these failing unit tests made me think my code was wrong multiple times while I was working with parts of Java I'm less familiar with, and they had me second-guessing things I was right about before the other failing tests (which ended up needing to be removed) confused me further… I should just be happy that the code in my PR turned out to be okay and that it's only the unit test that needs updating.