Effective shader use

I'm currently learning GLSL to make my game less static, but I'm having trouble understanding how to use shaders effectively; I feel like I'm trying to put in too many textures?
Take my simple health bar shader as an example:

        // Pick the "full" or "empty" texture based on the current fill fraction
        if (v_texCoord.x < m_healthPercent.x) {
            color = texture(m_fullHealthBar, v_texCoord.xy);
        } else {
            color = texture(m_emptyHealthBar, v_texCoord.xy);
        }

        // Alpha test: discard fragments whose alpha is below the cutoff
        if (color.a < m_cutoff) {
            discard;
        }

I think I have a good understanding of how the Java side works (for example, passing HP as a vector by reference), but as of now I feel like every “layered” shader (one not defined by a formula) would require at least two textures? Also, if I wanted to alter a ready-made shader (for instance, the vertex lighting one), I'd have to copy it, alter the copied code, and define my own material definition, right? Or am I missing something big?
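
For context, the Java side I mean is roughly this kind of thing (just a sketch; HealthBar.j3md and the parameter names are placeholders from my own material definition):

    import com.jme3.asset.AssetManager;
    import com.jme3.material.Material;
    import com.jme3.math.Vector2f;

    // Sketch only: "MatDefs/HealthBar.j3md" and its parameters are placeholders,
    // not stock JME assets.
    public class HealthBarMaterial {

        private final Material mat;

        public HealthBarMaterial(AssetManager assetManager) {
            mat = new Material(assetManager, "MatDefs/HealthBar.j3md");
            mat.setTexture("fullHealthBar", assetManager.loadTexture("Textures/barFull.png"));
            mat.setTexture("emptyHealthBar", assetManager.loadTexture("Textures/barEmpty.png"));
            mat.setFloat("cutoff", 0.1f);
        }

        // The fragment shader sees this parameter with the m_ prefix: m_healthPercent.
        public void setHealthPercent(float fraction) {
            mat.setVector2("healthPercent", new Vector2f(fraction, 0f));
        }

        public Material getMaterial() {
            return mat;
        }
    }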

Sometimes you can bring out the sledgehammer of a shader to do a sledgehammer's worth of a job. And sometimes you can bring out the sledgehammer of a shader to do the equivalent of 2-3 lines of Java code. Ultimately the trade-off is up to you.

For example, a health bar might be good for learning but I don’t think I’d ever in a million years write a custom shader for that. A few lines of mesh manipulation code could take care of it with a standard shader.
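
For example, roughly this kind of thing (just a sketch; the names are made up) would do it with the stock Unshaded material and no custom shader at all:

    import com.jme3.asset.AssetManager;
    import com.jme3.material.Material;
    import com.jme3.math.ColorRGBA;
    import com.jme3.scene.Geometry;
    import com.jme3.scene.Node;
    import com.jme3.scene.shape.Quad;

    public class SimpleHealthBar {

        private final Geometry fill;

        public SimpleHealthBar(AssetManager assetManager, Node guiNode) {
            // Red "empty" background quad using the stock Unshaded material.
            Geometry background = new Geometry("healthBarBg", new Quad(100, 10));
            Material bgMat = new Material(assetManager, "Common/MatDefs/Misc/Unshaded.j3md");
            bgMat.setColor("Color", ColorRGBA.Red);
            background.setMaterial(bgMat);

            // Green fill quad, drawn slightly in front of the background.
            fill = new Geometry("healthBarFill", new Quad(100, 10));
            Material fillMat = new Material(assetManager, "Common/MatDefs/Misc/Unshaded.j3md");
            fillMat.setColor("Color", ColorRGBA.Green);
            fill.setMaterial(fillMat);
            fill.setLocalTranslation(0, 0, 0.01f);

            guiNode.attachChild(background);
            guiNode.attachChild(fill);
        }

        // The "mesh manipulation" part: scale the fill quad to the health fraction.
        public void setHealthPercent(float fraction) {
            fill.setLocalScale(fraction, 1, 1);
        }
    }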

Indeed, I find as the years go on, my biggest performance problems end up getting tackled better by FEWER shaders and not more. The more things I can batch into one big geometry with one shader, the better.
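
As a concrete example of what I mean by batching (sketch only, names made up), JME ships with jme3tools.optimize.GeometryBatchFactory, which merges geometries that share a material into fewer, bigger ones:

    import com.jme3.scene.Node;
    import jme3tools.optimize.GeometryBatchFactory;

    public class BatchingExample {

        // Sketch only: merge everything under "chunk" that shares the same material
        // into a few big meshes, so it renders with far fewer draw calls.
        public static void batch(Node chunk, Node rootNode) {
            GeometryBatchFactory.optimize(chunk);
            rootNode.attachChild(chunk);
        }
    }

(BatchNode is the alternative if the batched pieces still need to move around individually.)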

But as to your query: “shader nodes” were supposed to be a way to compose shaders from parts. You can do it graphically in the SDK or in code/config. But most people, I think, just roll one or two custom shaders for their game (probably just light forks of the standard JME ones) and are done with it. That is, in the case where they can't just use the default JME shaders.

The bulk of Mythruna is only using two different shaders: one for blocks and one for characters/mobs. Both originally based on JME’s lighting shader. I have some incidental shaders to render billboard particles and fire… but probably 99% of the mesh data on screen is just two shaders.

I recently submitted a PR that moves all of the reusable PBR shader code into .glsllib files so that the PBR shader is more modular. That way you wouldn't have as much duplicate PBR shader code to maintain in every one of your forks; you could instead just call the reusable PBR functions in the .glsllib.

However, my initial PR was never merged. A core dev then added additional features to it, which made its scope much larger than I had planned, and I didn't have time to finish testing before being pulled away from my JME project, so unfortunately it still hasn't been merged into core.

But it may still be worth looking at the PR and implementing this modular design (or something similar) in your own project, especially if you find yourself frequently forking PBRLighting to make minor changes for special effects.

(Modularize PBRLighting.frag by yaRnMcDonuts · Pull Request #2191 · jMonkeyEngine/jmonkeyengine · GitHub)

will definitely take a look, thanks!!