Pluggable PBR shader?

I often find myself trying to write a shader that dynamically generates textures based on arbitrary parameters, but I want to apply those textures through the standard PBR shader. The way I go about it is by copying the stock PBR shader and modifying it, but this gets tedious very fast. Is there a way I could do this without duplicating existing PBR code? I’m thinking of something along the lines of Unreal’s material nodes, which produce the outputs (base color, metallic, roughness, etc. textures) that then get applied to UE’s PBR shader.

What’s the current status of jme’s shader nodes? As far as I’m aware there’s no PBR node, right?


I remember someone said it should be moved to Shader Nodes, or at least exposed as importable methods.

Shader Nodes already work. You can even edit them visually in the SDK’s Material preview Shader Nodes tab (like Blender’s shading nodes).

It would require rebuilding all the shaders; that is possible, but I believe it’s a lot of work.

PBR with triplanar mapping is expensive. Doing it for 4, 5, 6 PBR materials (grass, dirt, rocks, snow, etc.) is crazy talk, so what you need is an array of packed textures (use every channel you can). Putting that in a shader node would be unique enough that it wouldn’t make sense to use shader nodes anymore.
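To make the packing idea concrete, here is a minimal GLSL sketch, assuming an invented channel layout and uniform name (neither comes from any existing jME shader), of unpacking one terrain material’s PBR parameters from a single layer of a texture array:

```glsl
// Sketch only: the uniform name and channel layout below are
// assumptions for illustration, not part of any stock shader.
// Each array layer packs one material: R = metallic, G = roughness,
// B = ambient occlusion, A = height (for blending between layers).
uniform sampler2DArray m_PackedMaps;

void unpackMaterial(in vec2 uv, in float layer,
                    out float metallic, out float roughness,
                    out float ao, out float height) {
    vec4 texel = texture(m_PackedMaps, vec3(uv, layer));
    metallic  = texel.r;
    roughness = texel.g;
    ao        = texel.b;
    height    = texel.a;
}
```

One texture fetch per material instead of four separate ones is exactly the kind of saving the packed layout buys you.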

Having said that, terrain aside due to its uniqueness, there’s no reason you can’t use shader nodes with PBR.

It would be nice if someone could break PBR apart into shader nodes so it was more composable.

…this was never done for regular lighting either which is why we had to fork it to put things like fog, etc. in.

A shame, and probably a relatively straightforward task for someone wanting to know the shaders and shader node system “inside and out”. A good learning experience. lol


Just in case this is related!! :wink:

Here is what @nehon wanted to do about shader nodes. A programmable shader node with a Java API.

I’d like to address this, by providing an easier way to code them and build them from java code.

Yeah, I think I convinced him to move towards what my original proposal was… which could determine things more implicitly. The code generator is pluggable… so maybe I’ll take a look at it at some point to see what can be done.

In my approach, shaders could be evolved toward shader nodes as regular shaders and then just broken up into nodes as a last pass. (I called mine shaderlets because they were just composable code snippets that were still valid GLSL.)


So the advantages would be:

  • a single shader node file, not j3sn + frag/vert
  • no need for name-collision avoidance via prefixes
  • a programmable shader via shader node usage
  • generally easier/faster merging of nodes into a shader (it would just join the files)

I assume it would, of course, be able to use the same Shader Node twice in the process?

Btw, an alternative to shader nodes is to just use a “.glsllib”, e.g. put the PBR routines in a single file and import it where needed.
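A minimal sketch of that approach (the file, function, and variable names are made up for illustration; jME’s actual directive for pulling in a glsllib is `#import`):

```glsl
// MyProcedural.glsllib -- reusable routines collected in one file
// (hypothetical names; adjust to your own project layout)
float cheapNoise(in vec2 uv) {
    return fract(sin(dot(uv, vec2(12.9898, 78.233))) * 43758.5453);
}

vec4 proceduralBaseColor(in vec2 uv) {
    return vec4(vec3(cheapNoise(uv)), 1.0);
}
```

Then any .frag that needs the routines just imports the lib:

```glsl
#import "MyShaders/MyProcedural.glsllib"

varying vec2 texCoord;

void main() {
    gl_FragColor = proceduralBaseColor(texCoord);
}
```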

From what I understand, a node graph is like a chain of modules that each accept inputs and produce outputs, and you chain them to get a result.

So I could have a lighting node or pbr node, I guess the “trick” is where it goes (probably at the end).

As with a lot of work from @nehon - it’s usually years later that it gets recognised for what it is.

Yes, for this use case a glsllib could work; you just need to define the input texture names and you’re done. You can’t make changes in between, though.

Regarding glsllibs, there’s already this:

It’s not documented how to use it, and I suspect it’s mostly helper functions, not a computePBRColorFromTextures() or whatever one would call the function I was describing in the first post.

But with shader nodes you theoretically don’t need to toy with shaders at all. You have a PBR node that wants textures and outputs a color. So you create a node, give it “modules”, and programmatically build it. Before that node one could have a PBR IN node, a noise node, a height node that chooses between multiple IN nodes, whatever, then finally pass it through the PBR lighting node and voilà. It’s all programmatic. Nodes can be reused.

GLSL libs require editing vert and frag files. Shader nodes are a lot more reusable and don’t require shader editing. It’s a lot more OOP.

As cool as node editors are, I have yet to find a node implementation that’s more efficient at math (which generating textures usually is) than good old GLSL. In Blender, creating procedural textures is a mess exactly because of this.

It can be less efficient at times, but what you gain is flexibility, ease of use, and speed of creation. And you don’t need to learn a shader language. Often it’s also really good to design with a GUI to test ideas and quickly change things until you get the output you’re looking for. For a newcomer that could be the end of it, and an advanced user could use that visual graph as a sheet to work from when writing it all in a dedicated shader.

I’m not saying that a shader-nodes PBR implementation wouldn’t be good for the engine. I’m just saying that it doesn’t fit my use case.

Well, as everyone said, I think Shader Nodes should look similar to Blender’s node system, like the one an artist showed in the link:

So you just connect what you need. It will auto-manage glsllib imports, variables, and so on while merging everything into one shader, as I understand it.

But what is nice is that you can do things like this (also in the provided link):

Variable color = new Variable("vec4", "color").set("vec4(0.0)");
Shader.node("dash100.frag", "Dash") // finds the Dash node
    new Variable("vec4", "inColor").set("vec4(1.0)"),

// then use color as an input for another node for example

which you can’t do just using glsllibs.

Since most JME users are developers anyway, it would not be a problem to just use glsllibs, but I think it would be good to have programmable Shader Nodes, or GUI-edited ones too. IMO it’s quite easy and not a “mess”. Look at the image that artist provided: IMO there would be much more “mess” doing it as a single shader (even with glsllibs) than as shader nodes, where you only care about in/out. You don’t need to know everything behind a node, while with glsllibs you often need to know things like “use this function, but before it I need to use that function, and I can’t use it twice because some variable might be doubled in value”. So here I would say shader nodes are like proper object encapsulation.

Of course, it’s just my opinion; I will be happy with anything that upgrades the shader system. (Also, I think @RiccardoBlb might have a lot to say here.)

I’m the wrong person to ask; I generally dislike any form of node editor.

The beauty of GLSL is that the language itself is simple and bare-bones; there is not much that can go wrong. Using a node system on top of that adds a new layer of abstraction and complexity that is not really justified, in my opinion.

I agree that the current shaders are very hard to maintain and poorly designed, but I believe we can improve them by using only properly designed functions and structs.

6 of the 21 nodes in that node graph are math nodes, not to mention the various separate/join RGB nodes, which are not needed in GLSL. That’s why I think some things are much easier done in GLSL.

The idea behind shaderlets was that you split your shader up into functions with inputs/outputs (which can generally be determined implicitly). This idea morphed into what shader nodes are now, which essentially REQUIRES graphical editing to do anything non-tedious. I mean, you can do it manually, but it’s very painful.

The idea for shaderlets was that you’d just add them all into a list and your shader would come out the other end. It would autowire based on naming and/or you could insert shaderlets to remap things.
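As an illustration of that autowiring idea (this is a mock-up of the proposal, not an existing API; the snippet names and variables are invented): two shaderlets chain simply because the first one’s output name matches the second one’s input name, so appending them in order yields valid GLSL:

```glsl
// Shaderlet A: declares and produces 'albedo'
vec4 albedo = texture2D(m_BaseColorMap, texCoord);

// Shaderlet B: consumes 'albedo' by name and produces 'outColor'.
// If the names didn't line up, a small remapping shaderlet could be
// inserted between the two instead of editing either snippet.
vec4 outColor = albedo * vec4(lightColor, 1.0);
```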

I lobbied strongly to have the shader generation pluggable so that I might come back and play with my ideas. It was only years later that nehon and I were talking and he saw the benefit of the approach.

I still want to do it. Every time I want to add just one little thing to the end of some shader, I want to cry.