AST-based ShaderNode Generator

And 75% of your propositions are already implemented, but you refuse to see it.

Why, instead of offering an alternative option, don't you try to keep the current one and just enhance the visual tools? Make it easier for artists; if something is lacking, add buttons, add visual magic to make it work with the current system. If there is something that really can't be done with buttons, menus, and boxes because the system itself doesn't support it in any manner, then I would complain.

OK, let's be a bit more constructive and take it point by point:

  1. Possibility to have default values.
    I guess you mean per shader node input? Why not. Though we already have default values for shader params… so I feel there is some redundancy… but why not. It would need a slight change in the ShaderNodeDefinition reader, plus assigning the value to the variable in the generated code if the mapping is not set (see the first sketch after this list).
  2. Auto-resolving names of additional methods in shader nodes to avoid name collisions.
    This is IMO just nice to have, and barely. Users can manage this themselves easily with a naming scheme. It also requires a lot more work in the generation phase (like building a method/variable lexer), which can increase generation time. Remember, generation is done at runtime, so we want it as fast as possible… So IMO the benefit doesn't outweigh the cost.
  3. Auto-resolving some defines inside shader nodes.
    Already in: each material param can be mapped to a define, and the generator will do the internal plumbing…
    so if you have a Color material parameter you can write #ifdef Color in your code. The define is set whenever the param is set (see the second sketch after this list).
  4. Auto-resolving names of local variables of shader nodes to avoid name collisions.
    Same as point 2 to me.
  5. Auto-resolving world bindings from imported shaders inside shader nodes.
    It's just a matter of declaring the globals you use as uniforms at the top of the shader node code… but this could be automated by the generator… why not (also shown in the second sketch after this list).
  6. Convenient solution to use imports/extensions inside the shader nodes.
    This is already in. You can import glsllib files or load extensions in any shader code; they will be deduplicated if relevant and inserted at the beginning of the generated shader code (see the sketch below).
  7. Possibility to visualize the binding of material parameters to custom defines of shader nodes.
    As said before, the mapping is direct: MaterialParam → define. The define name will be uppercase in the generated code, but you can use it with the same case in the shader node code.
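
For point 1, here is a minimal sketch of what a generated declaration could look like when an input mapping is not set and the reader falls back to a default from the definition (the names and exact output format here are hypothetical, not actual generator output):

    // Hypothetical generated code for an unmapped input "inColor":
    // no mapping is set in the technique, so the generator assigns
    // the default value declared in the ShaderNodeDefinition.
    vec4 SomeNode_inColor = vec4(1.0, 1.0, 1.0, 1.0);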
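
For points 3, 5, and 6, a sketch of what shader node code can already rely on with the existing plumbing (the param, uniform, and glsllib names are just placeholders):

    // Point 6: glsllib imports are deduplicated and hoisted to the
    // top of the generated shader.
    #import "Common/ShaderLib/Optics.glsllib"

    // Point 5: declare the world bindings you use as uniforms at the
    // top of the shader node code.
    uniform mat4 g_WorldViewProjectionMatrix;

    // Point 3: a material param named "Color" maps to a define that
    // is set whenever the param is set.
    #ifdef Color
        outColor = inColor;
    #endif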

EDIT: if you want, let's start with 1 and 5. Make a separate PR for each of them and we'll discuss during the review.

But should an artist have to worry about this? My implementation does it automatically.

I didn't see deduplication logic for this in the current shader generator.

Let me show you the resulting performance of this in the near future :wink:

I can do point 1 for your shader generator as well, but in the case of point 5, it's based on the AST now :frowning:

The main problem for me now is some limitations around developing shader nodes.

About big nodes: they are an example of how we can now make a shader node with more complex logic than one or two method calls. If someone doesn't like big nodes, that's OK, but right now, if I want to make some big nodes for my artists, I can't do it.

Oh, you mean your implementation AUTOMATICALLY adds a define around all the code that uses a shader param, with an #ifdef???

Well… what could possibly go wrong?
You know defines are not free: each one of them creates a shader compilation permutation. We don't want the generator to do this automatically; we want to keep control over it.
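
To make that cost concrete, here is a hypothetical node fragment with two param-mapped defines; the engine can end up compiling a separate variant for each define combination (the names are illustrative only):

    // Two param-mapped defines yield up to 2^2 = 4 compiled
    // permutations: {}, {Color}, {Fog}, {Color, Fog}.
    #ifdef Color
        outColor *= inColor;
    #endif
    #ifdef Fog
        // fogColor and fogFactor are made-up inputs for this example
        outColor = mix(outColor, fogColor, fogFactor);
    #endif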

… That means nothing to me…

It's an example. I have this source for a shader node:

    #ifdef SD_IS_SET_inColor
        vertexColor = inColor;
    #endif

If the shader generator sees that we set the input parameter inColor on the shader node, the generator appends the define to the top of the resulting shader:

    #define SomeShaderNode_SD_IS_SET_inColor 1

and updates the code to:

    #ifdef SomeShaderNode_SD_IS_SET_inColor
        SomeShaderNode_vertexColor = SomeShaderNode_inColor;
    #endif

I use an AST representation of the imported glsllibs to understand which uniforms are missing from the technique def.
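
For illustration, a made-up glsllib fragment; an AST pass over code like this can see that g_Time is referenced, and can then check whether the technique def actually declares the corresponding world parameter:

    // Hypothetical glsllib content: the AST pass finds the g_Time
    // uniform used by this function and can report it if the
    // technique definition does not declare the Time world parameter.
    uniform float g_Time;

    float pulse() {
        return 0.5 + 0.5 * sin(g_Time);
    }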

-_- You don't need this to implement point 5; you have all the world params declared in the j3md.

As said before, this looks like optional inputs. If you need this, IMO it's just a sign that your shader node is not modular enough.

Look, I'm tired; I've repeated this too many times. You have been given a lot of tracks to explore to implement what you need without changing the core engine, but you just seem to ignore them. I'm offering you the possibility to make pull requests for some of your propositions. Take it or pass, I don't really care; I'm out of this discussion.


In this case, you don’t have it here

I think we need to allow developers to use both approaches. If a developer wants to make a set of smart shader nodes for his artists, why not?

@nehon, so I think I don't have a chance of getting your approval for this, but I think I can make a PR with minimal changes to make it possible to use my generator as an external library. I need to add some fields to the ShaderNodeDefinition class and update its loader.

The point of shader nodes was to include or not include the code as needed… not to wrap it all in #ifdefs.

The easiest approach for the artists in your case is just to give them a bunch of parameters they can set on your super-shader. You've effectively removed all useful configurability anyway.

I think at this point your goals are sooooo far off the original goals of shader nodes that you might as well write your own from scratch. We wanted to avoid giant shaders like Lighting.j3md. You want to slip back in that direction because monolithic giant nodes are "non-technical artist friendly". That was never our goal. Never. Not once… ever ever. Never. Not ever. In fact, a node editor was never even in the original plan. Just an ability to wire up shaderlets into something that could be easily reconfigured, to avoid having to fork ALL of Lighting.j3md just to add fog.

You have basically started slipping back in that direction.
Artist: “How do I add fog?”
Developer: “Oh, wait… I need to code that into the giant fragment node… one sec… oh, wait, I also need to add something to the giant vertex node…”
…and now if there are any problems, you get a giant debug output of the 9000-line shader, full of stuff that isn't even used.
…and then this directly impacts the community, as we have to sort out this strange horse shit when it's posted to the forum.

Anyway, one of the side effects of shader nodes was that you now have a place to plug in your own shader generators (because I requested it… so I hope it's really there… I had my own shaderlets idea for a long time). So just go your own way, I guess. You won't be getting any use out of our modular nodes anyway, since they likely won't wire into your two giant nodes.


The big nodes are good examples of how a shader generator can handle complex shader nodes. You can make big nodes, or you can make small nodes; the point is having the possibility to make different kinds of shader nodes, not only simple, single-line ones.

If I may offer a completely unasked-for opinion… I read the whole thread. @nehon is striving for modularity; that is the base of his idea. @javasabr, you seem to be building on top of that, an abstraction layer if you will. Remy's idea is required in order for yours to function, and you provide an abstracted version of it, like BaseAppState. But without AppState (Remy's code), yours becomes as limited as an ill-designed application would be.

I think your debates should be more focused on the implementation you need to provide the abstraction you desire, not on the implementation itself, as clearly @nehon has that covered already - and has provided you with examples such as PBR of how to do that - and its limitations. Much like SimpleApplication, you are kind of painting yourself into a corner, defining it closer to your object. Which is OK for some, but it limits you. Hence the whole point of 3.2.

At least I think that's what's happening here.

What you see in a UI should never be bound 1:1 to the underlying model. In a UI you can basically emulate a bigger thing that, in the underlying model, is several small things, generally speaking. It is for sure a bad idea to change the underlying modular thing to fit a UI; that's true in general, not just for shader nodes.

I think that only point 7 is related to the UI… and I don't think it's important to implement.

When you’re done, call it AST/io :penguin: