I often find myself writing a shader that dynamically generates textures from some set of parameters, but I want to apply those textures through the standard PBR shader. The way I go about it now is by copying the stock PBR shader and modifying it, but that gets tedious very fast. Is there a way I could do this without duplicating existing PBR code? I'm thinking of something along the lines of Unreal's material nodes: you produce what the material output wants (base color, metallic, roughness, etc. textures) and those get applied through UE's PBR shader.
What’s the current status of jme’s shader nodes? As far as I’m aware there’s no PBR node, right?
PBR with triplanar mapping is expensive. Doing it for 4, 5, or 6 PBR materials (grass, dirt, rocks, snow, etc.) is crazy talk, so what you need is an array of packed textures (use every channel you can). Putting that in a shader node would be unique enough that using shader nodes wouldn't make sense anymore.
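To make the "use all the channels you can" point concrete, here's a minimal sketch of offline channel packing: four grayscale maps (say metallic, roughness, AO, height) combined into one RGBA image so the terrain shader can fetch all four with a single texture sample. The class and channel assignment are my own illustration, not part of jME:

```java
import java.awt.image.BufferedImage;

// Pack four grayscale maps into one RGBA image so a shader can read
// all four values with a single texture fetch. Each input is treated
// as grayscale; we take its red channel.
public class ChannelPacker {
    public static BufferedImage pack(BufferedImage r, BufferedImage g,
                                     BufferedImage b, BufferedImage a) {
        int w = r.getWidth(), h = r.getHeight();
        BufferedImage out = new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB);
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int rv = (r.getRGB(x, y) >> 16) & 0xFF;
                int gv = (g.getRGB(x, y) >> 16) & 0xFF;
                int bv = (b.getRGB(x, y) >> 16) & 0xFF;
                int av = (a.getRGB(x, y) >> 16) & 0xFF;
                // assemble ARGB: alpha carries the fourth map
                out.setRGB(x, y, (av << 24) | (rv << 16) | (gv << 8) | bv);
            }
        }
        return out;
    }
}
```

In the shader, one `texture()` call then yields a vec4 whose components are the four maps, which is what makes multi-material triplanar splatting affordable.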
Yeah, I think I convinced him to move towards what my original proposal was… which could determine things more implicitly. The code generator is pluggable… so maybe I’ll take a look at it at some point to see what can be done.
In my approach, shaders could be evolved towards shader nodes as regular shaders and then just broken up into nodes as a last pass. (I called mine shaderlets because they were just composable code snippets that were still valid GLSL.)
But with shader nodes you theoretically don't need to toy with shaders at all. You have a PBR node that wants textures and outputs a color. So you create a node, give it "modules", and build it programmatically. Before that one node you could have a PBR IN node, a noise node, a height node that chooses between multiple IN nodes, whatever, then finally pass it all through the PBR lighting node and voilà. It's all programmatic, and nodes can be reused.
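The graph described above (noise node, height node choosing between inputs, then a lighting node) can be sketched in plain Java. To be clear, all of the class names here are hypothetical stand-ins, not a real jME API, and real shader nodes would emit GLSL rather than evaluate on the CPU; only the wiring idea is the point:

```java
// Hypothetical node sketch: each node is a composable unit with inputs
// wired at construction time, mirroring how a programmatic node graph
// could be assembled and reused.
interface ColorNode { double eval(); } // stand-in for a vec4 output

class ConstNode implements ColorNode {
    final double value;
    ConstNode(double value) { this.value = value; }
    public double eval() { return value; }
}

class HeightBlendNode implements ColorNode {
    // chooses between two IN nodes based on a height threshold
    final ColorNode low, high;
    final double height, threshold;
    HeightBlendNode(ColorNode low, ColorNode high, double height, double threshold) {
        this.low = low; this.high = high;
        this.height = height; this.threshold = threshold;
    }
    public double eval() { return height < threshold ? low.eval() : high.eval(); }
}

class PbrLightingNode implements ColorNode {
    // fake "lighting" term so the sketch has a final pass
    final ColorNode baseColor;
    PbrLightingNode(ColorNode baseColor) { this.baseColor = baseColor; }
    public double eval() { return baseColor.eval() * 0.5; }
}
```

Usage would look like wiring grass and rock inputs into the blend node and handing the result to the lighting node; any subgraph can be reused as an input elsewhere, which is the reuse benefit being claimed.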
GLSL libs require editing vert and frag files. Shader nodes are a lot more reusable and don't require shader editing. It's a lot more OOP.
As cool as node editors are, I have yet to find a node implementation that's more efficient at math (which generating textures usually is) than good old GLSL. In Blender, creating procedural textures is a mess for exactly this reason.
It can be less efficient at times, but what you gain is flexibility, ease of use, and speed of creation. And you don't need to learn a shading language. It's often also really good to design with a GUI so you can test ideas and quickly change things until you get the output you're looking for. For a newcomer that could be the end of the road; an advanced user could use that visual graph as a reference sheet for writing it all out in a dedicated shader.
Well, as everyone said, to me Shader Nodes should look similar to Blender's node system, like the one an artist showed in the link:
So you just connect what you need. As I understand it, it auto-manages glslib imports, variables, and so on when merging everything into one shader.
What's also nice is that you can do things like this (also in the provided link):
Variable color = new Variable("vec4", "color").set("vec4(0.0)");
Shader.node("dash100.frag", "Dash") // finds the Dash node
    new Variable("vec4", "inColor").set("vec4(1.0)"),
// then use color as an input for another node, for example
which you can't do using just glslibs.
Since most JME users are developers anyway, it wouldn't be a problem to use just glslibs, but I think it would be good to have programmable Shader Nodes, or GUI-edited ones too. IMO it's quite easy and it's not a "mess". Look at the image that artist provided; IMO there would be much more of a "mess" doing it as a single shader (even with glslibs) than with shader nodes, where you only care about in/out. You don't need to know everything behind it, while with glslibs you often need to know things like "use this function, but first I need to call this other function, and I can't use it twice because some variable might be doubled in value". So here I would say shader nodes are like proper object encapsulation.
Of course it's just my opinion; I'll be happy with anything that upgrades the shader system. (I also think @RiccardoBlb might have a lot to say here.)
I'm the wrong person to ask; I generally dislike any form of node editor.
The beauty of GLSL is that the language itself is simple and bare-bones; there is not much that can go wrong. Using a node system on top of that adds a new layer of abstraction and complexity that is not really justified, in my opinion.
I agree that the current shaders are very hard to maintain and poorly designed, but I believe we can improve them using only properly designed functions and structs.
The idea behind shaderlets was that you split your shader up into functions with inputs/outputs (which can generally be determined implicitly). This idea morphed into what shader nodes are now, which essentially REQUIRES graphical editing to do anything non-tedious. I mean, you can do it manually, but it's very painful.
The idea for shaderlets was that you'd just add them all to a list and your shader would come out the other end. It would autowire based on naming, and/or you could insert shaderlets to remap things.
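The "add them to a list and autowire by naming" idea can be sketched roughly like this. These class names are my own invention for illustration (shaderlets were never shipped as an API); each snippet stays valid GLSL and just declares which variable names it reads and writes, so the composer can check the wiring while concatenating:

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Hypothetical shaderlet sketch: a snippet of valid GLSL plus the
// variable names it consumes and produces.
class Shaderlet {
    final String code;
    final Set<String> reads, writes;
    Shaderlet(String code, Set<String> reads, Set<String> writes) {
        this.code = code; this.reads = reads; this.writes = writes;
    }
}

class ShaderletComposer {
    // Concatenate snippets in list order, failing loudly if a snippet
    // reads a name that nothing earlier (or no shader input) has written.
    static String compose(List<Shaderlet> lets, Set<String> shaderInputs) {
        Set<String> available = new HashSet<>(shaderInputs);
        StringBuilder out = new StringBuilder();
        for (Shaderlet s : lets) {
            for (String r : s.reads) {
                if (!available.contains(r))
                    throw new IllegalStateException("unwired input: " + r);
            }
            available.addAll(s.writes);
            out.append(s.code).append('\n');
        }
        return out.toString();
    }
}
```

A remapping shaderlet would just be a snippet like `vec2 uv = worldUv;` inserted into the list to rename one output to the input name the next snippet expects, which is the "insert shaderlets to remap things" part.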
I lobbied strongly to have the shader generation pluggable so that I might come back and play with my ideas. It was only years later that nehon and I were talking and he saw the benefit of the approach.
I still want to do it. Every time I want to add just one little thing to the end of some shader, I want to cry.