Shader Injection, first try

I had to try it. I changed Material.java to support more than one default technique and to allow merging techniques from a different material.

I changed MaterialDef.java and RenderManager.java as well to support these changes.



Here is the first result:



http://i.imgur.com/5moC6.png



The only difference between these two runs is the line:

[java]

materialCenter=materialRight.clone().merge(materialLeft);

[/java]



The good thing is that something gets injected; the bad thing is that it gets injected into every shader used :frowning:

Another bad thing is that for each ‘injected’ shader an additional render pass is performed.


Oh god… did you ever hear of Pandora’s box?

Now we’re doomed…

Stat text got sexier :slight_smile:


Good start!

I’ve been thinking about a “shader generator” for some time now. Register it with the assetManager and then generate materials, matdefs and shaders dynamically…



It’s currently just a rough idea; I’m gonna try a simple implementation when I have the time and the spirit :slight_smile:


I’m taking a different approach with less code modification; maybe we could talk (all three of us :P). I’m currently doing the same thing.

I think everyone who has made more than one shader and had to go back and add something to every one of them has probably thought about fixing this problem. :slight_smile:



I, too, have designed a “shaderlet” system where you can write code snippets (fraglets and vertlets) and have them assembled into composite shaders with auto-wiring. Tweaking my existing shaders has still been quicker than writing a shader composition system and so I haven’t done anything beyond a design. Done properly, the code chunks are still editable in existing GLSL editors because they are still 100% valid GLSL code on their own.



I keep telling myself “someday” but motivation and time are at a premium right now. If my game income ever pays the bills then maybe.
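A minimal sketch of what such a shaderlet assembler could look like, just to make the auto-wiring idea concrete (all class and field names here are made up for illustration, nothing from jME):

```java
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

/** Sketch of the "shaderlet" idea: each fraglet is valid GLSL on its own;
    the assembler deduplicates shared declarations and chains the renamed
    entry points from one generated main(). All names are hypothetical. */
public class ShaderletAssembler {

    // A fraglet: its shared declarations plus the body of its entry function.
    record Fraglet(String name, List<String> declarations, String body) {}

    static String assemble(List<Fraglet> fraglets) {
        Set<String> decls = new LinkedHashSet<>();   // dedupe shared declarations
        StringBuilder funcs = new StringBuilder();
        StringBuilder main = new StringBuilder("void main(){\n");
        for (Fraglet f : fraglets) {
            decls.addAll(f.declarations());
            funcs.append("void ").append(f.name()).append("_main(){\n")
                 .append(f.body()).append("\n}\n");
            main.append("    ").append(f.name()).append("_main();\n");
        }
        main.append("}\n");
        return String.join("\n", decls) + "\n" + funcs + main;
    }

    public static void main(String[] args) {
        Fraglet tex = new Fraglet("Tex", List.of("uniform sampler2D m_ColorMap;"),
                "    // sample the texture");
        Fraglet tint = new Fraglet("Tint", List.of("uniform vec4 m_Color;"),
                "    // apply the tint");
        System.out.println(assemble(List.of(tex, tint)));
    }
}
```

Running main() prints a single fragment-shader source with both declarations, both renamed entry points and a generated main() that calls them in order.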


@pspeed really we should do that once and for all!!


@kwando:


I’ve been thinking about a “shader generator” some time now.


Me too... are you planning to write some code for a "shader generator"?

Here is my design for such a system:

Concept:
+ Compose from nodes (Input, Operator, Output) -> a graph tree -> traverse the graph tree and build a parse tree
+ From the parse tree:
> ToCode (C++-like syntax via a template system; my idea is to use Velocity or the Groovy template system, both good!)
> Some nodes have previews (should use CPU computing, I think)

UI:
+ NetBeans Platform and Visual Library to visualize the nodes

Now I can produce some simple C++-like code:
Input node A named varA, input node B named varB, operator AxB,
Output = AxB => result = varA * varB, which has a Velocity template like this: result = {varA} * {varB}

Anyway, such a system can become unusable or inefficient, because the generated code is somehow not really easy to read, and the result ends up as a fixed uber-shader anyway!
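A minimal sketch of that node-to-code idea in plain Java (class names and the template syntax are made up for illustration; a real version would use Velocity or Groovy templates as described):

```java
/** Sketch of the node-graph code generator: input nodes name variables,
    operator nodes expand a small string template over their children.
    All names here are hypothetical, not from any real tool. */
public class NodeCodeGen {

    interface Node { String emit(); }

    // An input node simply names a variable.
    record Input(String name) implements Node {
        public String emit() { return name; }
    }

    // An operator node combines two children through a template
    // such as "({a} * {b})".
    record Op(String template, Node a, Node b) implements Node {
        public String emit() {
            return template.replace("{a}", a.emit()).replace("{b}", b.emit());
        }
    }

    public static void main(String[] args) {
        Node graph = new Op("({a} * {b})", new Input("varA"), new Input("varB"));
        System.out.println("result = " + graph.emit() + ";");
        // prints: result = (varA * varB);
    }
}
```

Traversing the graph bottom-up like this is essentially the "graph tree -> parse tree -> ToCode" pipeline from the concept above.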

@atomix. Hmm, I’ve been thinking along those lines.

I think a visual library would be nice; surely it would make shaders more accessible. I’m not sure I would use it myself though, since I can’t sleep with badly written shaders…

I once set out to build such a system, but failed since I did not find a way to generate code. I’m no expert in either parsing or language theory.

This is a bit like Unreal’s material editor :slight_smile:



@pspeed I think your approach is valid. Writing small reusable shader fragments and then combining them, great! Actually, I at least don’t mind writing GLSL at all, so I like this one. :smiley:



@Setekh three people are usually smarter than one, so it might be a good idea.

I still think you guys have come further than me in this “process”, so I might not be that helpful atm :stuck_out_tongue:


@nehon said:
@pspeed really we should do that once and for all!!

Just do it ;)
@pspeed said:

I, too, have designed a "shaderlet" system where you can write code snippets (fraglets and vertlets) and have them assembled into composite shaders with auto-wiring. Tweaking my existing shaders has still been quicker than writing a shader composition system and so I haven't done anything beyond a design. Done properly, the code chunks are still editable in existing GLSL editors because they are still 100% valid GLSL code on their own.

I keep telling myself "someday" but motivation and time are at a premium right now. If my game income ever pays the bills then maybe.


So true... I used RenderMonkey and NvidiaFX and found it too "easy" to compose a general shader. The point is that our engine is heavily based on shaders, and a manual tweak in a shader can also cause a bad Exception... That's why I want to make such a thing...


@zzuegg said:
The good thing is, something gets injected, the bad thing is, it gets injected in every shader used :(
Another bad thing is that for each shader ‘injected’ an additional render pass is performed.

Is it really true that the "bad thing" can't be fixed? Or else we are just wasting our time... BIG ONE! :p

Well, I did think about this too, and for all normal stuff this is probably quite powerful.



However, for shaders that go near the hardware limits you could get problems (like the uniform count etc.) which are better solved manually.

@atomix said:
Is it real that the "bad thing" can't be fixed, or else, we just waste our time... BIG ONE! :p


Without runtime modification of the shader code, no. I assume this can't be fixed, since only one shader per geometry and render pass is processed.
Correct me if I am wrong; all I know about rendering comes from a two-day experience in the jME source :D

A better approach would be to keep a list of injected shaders and generate the combined shader automatically.
I am thinking of something like:

[java]
// Shader1.frag
void main(){
    // some shader stuff
}

// Shader2.frag
void main(){
    // some shader stuff
}
[/java]

becomes:

[java]
// UsedShader.frag
void Shader1_main(){
    // some shader stuff
}
void Shader2_main(){
    // some shader stuff
}

void main(){
    Shader1_main();
    Shader2_main();
}
[/java]

Of course, this requires that all passed variables/uniforms be checked (and modified too).
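A rough sketch of that renaming step as plain string rewriting (nothing jME-specific; the class and method names are made up, and a real version would need more than a single regex):

```java
/** Sketch of the merge-by-renaming idea: prefix each sub-shader's
    main() with its shader name, then emit a new main() that calls
    them in order. All names here are hypothetical. */
public class ShaderMerger {

    // Rename "void main(" to "void <prefix>_main(" in one shader source.
    static String prefixMain(String source, String prefix) {
        return source.replaceFirst("void\\s+main\\s*\\(", "void " + prefix + "_main(");
    }

    // Naive merge: concatenate the renamed shaders and chain their
    // entry points (the last write to gl_FragColor would win).
    static String merge(String shader1, String shader2) {
        return prefixMain(shader1, "Shader1") + "\n"
             + prefixMain(shader2, "Shader2") + "\n"
             + "void main(){\n"
             + "    Shader1_main();\n"
             + "    Shader2_main();\n"
             + "}\n";
    }

    public static void main(String[] args) {
        String s1 = "void main(){ gl_FragColor = vec4(0.0); }";
        String s2 = "void main(){ gl_FragColor = vec4(1.0); }";
        System.out.println(merge(s1, s2));
    }
}
```

This only handles the entry points; as noted above, variables and uniforms would still have to be checked and renamed separately.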

Nothing offensive, but the approach sounds a little bit… crazy. How can jME, then LWJGL and OpenGL understand shader code like that? And the two main functions would conflict with each other immediately, which is useless again…



Something like:

[java]
// Shader1.frag
void main(){
    // some shader stuff
    gl_FragColor = vec4(0,0,0,0);
}

// Shader2.frag
void main(){
    // some shader stuff
    gl_FragColor = vec4(1,1,1,1);
}
[/java]

becomes:

[java]
// UsedShader.frag
void Shader1_main(){
    gl_FragColor = vec4(0,0,0,0);
}
void Shader2_main(){
    gl_FragColor = vec4(1,1,1,1);
}

void main(){
    Shader1_main();
    Shader2_main();
    // OK, this produces only the last shader's output.
    // We can tweak the code to understand passes, but somehow this approach leaves us too much manual work to do.
}
[/java]

Hm, sorry, I don’t get your point. Here is a real-life example. Not a good one, but it should show my point.

Take two shaders:



ColoredTexture.frag

[java]
varying vec2 texCoord;

uniform sampler2D m_ColorMap;
uniform vec4 m_Color;

void main(){
    vec4 texColor = texture2D(m_ColorMap, texCoord);
    gl_FragColor = vec4(mix(m_Color.rgb, texColor.rgb, texColor.a), 1.0);
}
[/java]

and ShowNormal.frag

[java]
varying vec3 normal;

void main(){
    gl_FragColor = vec4((normal * vec3(0.5)) + vec3(0.5), 1.0);
}
[/java]



The resulting shader after combination could be:



CombinedShader

[java]
varying vec3 ShowNormal_normal;
varying vec2 ColoredTexture_texCoord;

uniform sampler2D m_ColorMap;
uniform vec4 m_Color;

vec4 ColoredTexture_main(){
    vec4 ColoredTexture_texColor = texture2D(m_ColorMap, ColoredTexture_texCoord);
    return vec4(mix(m_Color.rgb, ColoredTexture_texColor.rgb, ColoredTexture_texColor.a), 1.0);
}

vec4 ShowNormal_main(){
    return vec4((ShowNormal_normal * vec3(0.5)) + vec3(0.5), 1.0);
}

void main(){
    // Additive mixing; probably need to add a lot of mixing properties
    gl_FragColor = ColoredTexture_main() + ShowNormal_main();
}
[/java]



The result should be a valid .frag shader, and a guy with good RegEx experience probably has no problem creating it :smiley:

My point is that as long as the two shaders have no interfering vars and methods, maybe there is no problem…



However, if some code changes a var which another shader needs, both can go completely wrong.

And even if we try to track things and keep the two separated, the idea of combining the results by changing their code doesn’t convince me.

You know, the merging problem doesn’t tell the whole story of shader injection, but it’s still a very good example…

If we want to merge or automatically generate GLSL, we should have better knowledge of GLSL:

which names will be attributes or varyings… which names should be renamed like “Shader1_varA”, and which should be changed into a return value…



I just have those in mind; please tell me more about your idea…

The varyings and attributes can be clearly identified (as they are declared with the corresponding keyword) and modified in both the vert and the frag shader.

The remaining problem is the uniforms, because they have to be kept in sync with the material definitions.

An option would be to rewrite their names in the material loader so that they include a material/shader name.



Instead of adding a return value, we could easily introduce new helper variables, like vec4 gl_FragColorTemp. Replacing every gl_FragColor=something with gl_FragColorTemp=something is easy. In the real main() afterwards we could simply write gl_FragColor=gl_FragColorTemp;



I know that this is just the beginning of a more complex rewriting system. I only wanted to share the idea for discussion; if someone knows an argument why this would not work at all, I do not need to spend the time trying to implement it.
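A first sketch of that gl_FragColor rewrite as plain string handling (class and method names are made up for illustration; a robust version would need a real GLSL parser rather than regexes):

```java
/** Sketch of the helper-variable rewrite: redirect every write to
    gl_FragColor into gl_FragColorTemp, rename the sub-shader's main(),
    and let the generated main() restore the final assignment.
    All names here are hypothetical. */
public class FragColorRewriter {

    // Turn a sub-shader's main() into a named function that writes
    // only to gl_FragColorTemp instead of gl_FragColor.
    static String rewrite(String fragSource, String prefix) {
        return fragSource
                .replaceAll("gl_FragColor\\b", "gl_FragColorTemp")
                .replaceFirst("void\\s+main\\s*\\(", "void " + prefix + "_main(");
    }

    public static void main(String[] args) {
        String src = "void main(){ gl_FragColor = vec4(1.0); }";
        String combined = "vec4 gl_FragColorTemp;\n"
                + rewrite(src, "Shader1") + "\n"
                + "void main(){\n"
                + "    Shader1_main();\n"
                + "    gl_FragColor = gl_FragColorTemp;\n"
                + "}\n";
        System.out.println(combined);
    }
}
```

The uniform renaming discussed above would be a similar pass, except that the material definitions would have to be rewritten in step with the shader source.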

We can’t be the first ones encountering this problem. Maybe we should take a careful look at other engines to see how they solved it?

@kwando said:
We can't be the first ones encountering this problem. Maybe we should take a careful look at other engines to see how they solved it?

I'd just pay more attention to what pspeed said.

We don't need to overcomplicate things with language translation that won't mean shader injection whatsoever.. what we really need is an engine which loads the shader and its code blocks (uniforms, ifdefs, etc.) so we will be able to write inside a code block or replace/remove it.

Take Javassist for example: http://www.csg.ci.i.u-tokyo.ac.jp/~chiba/javassist/tutorial/tutorial.html
It's a kind of injector too, though it injects bytecode.

Also, jME3 is coded pretty well, so adding new technique defs and changing material defs isn't a problem whatsoever >.> I bet we won't even need to change anything in the LwjglRenderer.

This would be something like http://code.google.com/p/shadernet/ with the ability to add custom blocks at runtime

@Setekh

Good point. I think we can go a long way without overly complicated solutions.