A quick sanity check regarding how shaders work

So I’m trying to wrap my head around how shaders work, and I’ve hit a sort of wall in my understanding. I was hoping someone could point me at a decent tutorial to start from, or at least tell me whether my thinking is correct.

Here’s a list of how I think shaders work in a practical way.

  1. Shaders in jME are part of materials.
  2. GLSL shaders have access to a special set of information provided to them by the GPU, BUT-
  3. Much of the information used is passed to the shader by setting values on it. (For the tessellation shader, quad.setBuffer(VertexBuffer.Type.Index, 4, BufferUtils.createIntBuffer(0, 1, 2, 3)); ← this says that a patch consists of vertices 0, 1, 2, 3.)

So if I wanted to, say, tell a shader to make something red, I’d make a new material that accepted a value for colour (probably a vec4 storing RGBA values) and use that value in a fragment shader node in the material?

I know this has gotta be super basic stuff but I’m finding the language used to teach shaders a bit difficult to wrap my puny mind around.


It feels like you are trying to explain the plot of a movie of which you missed the first hour.
Idk if you read it already, but once upon a time I made a doc about shaders in JME.

This is barely an introduction, but at least you will start from the beginning :stuck_out_tongue:


Yes and no. Shaders are part of the MaterialDefinition; a Material is basically the set of parameters that gets passed to the shader.

You create a new MaterialDefinition, but yes, you would also change every Material to be based on the new MaterialDefinition and pass a color there. Note that you are talking about the fragment shader here; Shader Nodes are another topic (they are set up slightly differently, but I can’t tell you much about that).
Do note that you could also just have a draw-red shader without the color vec4 (unless you want variable colors).
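As a concrete sketch, a draw-red fragment shader can be as small as this. GLSL, with the names purely illustrative; in jME a material parameter named "Color" would typically show up in the shader as m_Color:

```glsl
// Sketch of a minimal "draw red" fragment shader.
// Variant A: hard-coded red, no material parameter needed.
void main() {
    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); // RGBA: opaque red
}

// Variant B (for a variable color): declare a uniform that the
// MaterialDefinition exposes as a parameter, e.g. "Color" -> m_Color.
// uniform vec4 m_Color;
// void main() { gl_FragColor = m_Color; }
```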


heh… Possibly. I’ve been reading up on them and it seems like there’s a secret course somewhere that just lays out how the stuff works programmatically. When I’ve been reading, it just seems that the article is too far along, even if it is literally labelled “Shaders for Beginners”.

I’ll have a read over the article you sent along. Thanks!


Ok. So I’m not super far off base… Just missing a bit of terminology and a bit of the process. I’ll go over the article Nehon posted and see how that goes.

Thanks for your reply!

Step 1: fork unshaded.j3md and related files.
Step 2: break it and fix it and break it and fix it… until understanding blossoms.
Step 3: look at Lighting.j3md’s shaders.
Step 4: run away in horror. (j/k)


lol I’m very familiar with your step 4 there. It’s pretty much what stopped me last time I dove into shaders. I could be wrong, but they do seem to have a bit of a Dwarf Fortress-type learning curve, almost exclusively thanks to the fact that they seem to just “do” stuff… somehow.

It’s not hard at the basic level.

Every vertex passes through the .vert shader.

Every rasterized pixel (fragment) passes through the .frag shader.

The .vert shader can set up things that interpolate across the .frag shader calls.

You will kind of see this if you play with Unshaded.j3md as it’s super-simple and does all of those things in a simple way.
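A rough GLSL sketch of those three points (jME-style names like inPosition and g_WorldViewProjectionMatrix are assumed for illustration, not copied verbatim from any shipped shader):

```glsl
// --- the .vert: runs once per vertex ---
uniform mat4 g_WorldViewProjectionMatrix; // supplied by the engine
attribute vec3 inPosition;                // per-vertex data from the Mesh
attribute vec4 inColor;
varying vec4 vertColor;                   // handed off for interpolation

void main() {
    vertColor = inColor; // set up a value that will interpolate
    gl_Position = g_WorldViewProjectionMatrix * vec4(inPosition, 1.0);
}

// --- the .frag: runs once per rasterized pixel (fragment) ---
// varying vec4 vertColor;  // same declaration; arrives interpolated
// void main() { gl_FragColor = vertColor; }
```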


So you’ve touched on one thing I’m rather confused on.

Let’s say I wanted to use a fragment shader to make a pixel on a shape a certain colour based on its position relative to the vertices. How does the fragment (or pixel) know which vertices it should be using for its calculations?

In one tutorial I followed, it basically painted the pixels that a geometry covered with the pixels from an image. It appeared “flat” and not wrapped onto the geometry. How does pixel space figure out “where” it is on a shape?

…in this case, you probably want texture coordinates. Which, by the way, is what Unshaded.j3md does to map textures to a mesh.

You should probably wait to ask more questions until you’ve done steps 1 through 3 unless your questions are regarding setting that up.


Fair enough. :slight_smile:

Maybe I’ll try documenting my voyage through this endeavor. I’m finding the mental gymnastics I’m having to do between how Java works and how GLSL works a bit confusing. I’m probably overcomplicating things in my head, so I’m probably just missing the mental map of “feature x in shaders is like feature y in Java”.

Thanks again!


A cool book I read last month.

After finishing it you will also have learned Phong lighting.

Then you can follow the JME tutorials.




It helps if you understand how a rasterizer works. Maybe that is a fundamental gap.

Do some googling on rasterization and maybe it will become clearer what OpenGL is doing with your .vert and .frag files.


@pspeed Funny story. I was reading up on rasterization and had an “Ah-ha” moment… Which then I managed to destroy later that night while trying to apply that knowledge… So ya. I might’ve almost been right there… Annnnd then I wasn’t. :frowning:

@Ali_RS Might have to give that a whirl if rereading over the tutorials fails. Thanks!

If you have time then this might be useful:

I only gave it a quick skim but it seems to cover everything. I was trying to find a simple overview, and that’s the closest I’ve found so far.

I mean, so many of them look promising from the images but then dive right into math cryptograms:

Essentially, you give OpenGL some vertexes (position, texture coordinates, etc.) and tell it what kind of shape you are making (triangles usually). This is JME’s Mesh.

OpenGL then takes those vertexes and calls the .vert shader for each one. The .vert shader can define varyings that will be interpolated across the edges.

OpenGL/GPU takes your vertexes and interpolates them over the edges of the triangle (along with any varying variables). It then interpolates from edge to edge and calls your .frag for each ‘pixel’.

Your .frag has access to the varying values that your .vert set up, which will (by default) be linearly interpolated over the shape (by OpenGL).


… Dude… I think you just made it click in my head… holy…

I… I think I get it now… Let me try to put it into thicko English for a minute…

I have a shape… For ease of example, let’s say it’s just a line. The 2 vertices are 5 pixels away from each other.

Your vert shader works on the 2 points at either end of the line to come up with some sort of thing to do at those points.

The 5 pixels in between are interpolated from what is done at the ends. So if I said “At end 1 be blue” and “At end 2 be red”, the middle pixel (pixel 3) would be… purple, and the ones on either side would be more red or more blue depending on how close they are to either end.

The values used for interpolation along those pixels are passed in variables called varyings.

Remember this is super thicko speak. But is that more or less how it works? /me crosses fingers.

In any case. Thanks everyone I’ll run off and read the links sent along. :slight_smile:

Yep, more or less.

That interpolation works for anything, too… texture coordinates, colors, normal vectors, positions, etc…
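The blue-to-red line example above can be sketched in plain Java to watch that interpolation happen by hand. A toy illustration of what the GPU does per fragment, not jME code; all names are made up:

```java
// Sketch of the interpolation the GPU does for us: a 5-pixel "line"
// with blue at pixel 1 and red at pixel 5. The middle pixel comes out
// purple; the rest shade toward whichever end they are closer to.
public class LerpDemo {
    // Linear interpolation: t = 0 gives a, t = 1 gives b.
    static float lerp(float a, float b, float t) {
        return a + (b - a) * t;
    }

    // Returns {r, g, b} for pixel i (1-based) of n along the line.
    static float[] pixelColor(int i, int n) {
        float t = (float) (i - 1) / (n - 1); // 0.0 at pixel 1, 1.0 at pixel n
        float[] blue = {0f, 0f, 1f};
        float[] red  = {1f, 0f, 0f};
        return new float[] {
            lerp(blue[0], red[0], t),
            lerp(blue[1], red[1], t),
            lerp(blue[2], red[2], t)
        };
    }

    public static void main(String[] args) {
        for (int i = 1; i <= 5; i++) {
            float[] c = pixelColor(i, 5);
            System.out.printf("pixel %d: r=%.2f g=%.2f b=%.2f%n", i, c[0], c[1], c[2]);
        }
        // pixel 3 prints r=0.50 g=0.00 b=0.50 (purple)
    }
}
```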


@thecyberbob ok so a year ago I was a shader noob… all I knew was how to set parameters in materials, and I thought GLSL was the devil’s work. Today I create meshes procedurally and can write shaders that draw things like rotating stars and 3D nebulas WITHOUT meshes. How did I get from idiot to “shaders and procedural meshes are the coolest things since sliced bread”? Well, it was a frustrating year, but here is a cheat sheet of basic things that can help you understand the big picture… by the way, without understanding the big picture you can read tech docs a million times and still not understand. So here is my shaders-for-noobs list:

  1. Mesh
    A mesh consists of points in 3d space. You then connect the dots to form triangles. Each corner point of a triangle is called a vertex (the lines between them are edges)… let’s skip normals for now. So now we have a mesh. Somehow, we must color the triangles before we render them on the screen. The magic starts in step 2.

  2. Vertex coloring
    Remember our triangles in our mesh? Well, we need to figure out how to color them. There are two ways to do that. We can set colors on each point in the mesh. What will happen at runtime is that if all 3 points of a triangle have the same color (red, for example) then the triangle will be red. If we have different colors on each point, then at runtime each point will have its own color, but the pixels between points will be interpolated. For example, point 1 is red and point 2 is green… looking along the edge you will see the line start red and transition to green. This lets us color meshes in a very primitive way. Good start, but not good enough. Now we want an image to be wrapped onto our mesh. How will this magic happen? Let’s look at 3.

  3. UV Mapping
    How will I take a 2d image and magically wrap it around my mesh? Well… remember the triangles? UV mapping works like this. The mesh knows that you want to wrap a 2d image onto it; it has no clue of its size and it does not care. UVs measure the X and Y size of your picture ONLY with floats from 0.0 to 1.0… Now you are thinking… WHY??? Let’s clear it up with an example. We have a 1024x1024 picture. A pixel located in Photoshop at coordinate 0,0 is seen by the mesh as 0.0,0.0 (meshes measure coordinates in floats)… Makes sense, you say… now for the magic. The pixel located in Photoshop at 1024,1024 is seen by the mesh as… 1.0,1.0… Even if your image is one million pixels by one million pixels, the pixel at 1000000,1000000 in Photoshop will still be seen by the mesh as 1.0,1.0… Back to our 1024x1024 image. If you are still following along, then you would understand that the Photoshop pixel located at 512,512 is seen by the shader as coordinate 0.5,0.5, because 512 is 1/2 of the image size. Now that you understand how the mesh sees your image (texture), let’s explore how your mesh will contain UV maps to tell the shader how to use part of your image to paint a triangle. You have to think of your mesh from the perspective of one or more triangles. What do I mean? Again, let’s look at an example, using the 1024x1024 image. Let’s say the 2 triangles in your mesh represent a square, and you want the top left-hand quarter of your 1024x1024 image to be drawn on your square. Well, you assign x,y coordinates to the 4 points in your mesh based on the mesh coordinate system: 0 to 0.5 across the x and y axes. Now… we know that the mesh has the data to determine what pixel needs to be drawn at any point on your object, because it interpolates the x,y UV locations… so you now have the basics to start understanding materials. (Remember, I did not explain normals because at this point it would just make your head explode.)
    … Now let’s move to the shader side of the equation.
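The pixel-to-UV arithmetic above is just a division by the image size. A tiny Java sketch to make the numbers concrete (illustrative only, not jME API):

```java
// Sketch of the pixel -> UV conversion: UVs are pixel coordinates
// divided by the image size, so they always land in the 0.0..1.0
// range no matter how big the texture actually is.
public class UvDemo {
    // Returns {u, v} for a pixel position in an image of the given size.
    static float[] pixelToUv(int px, int py, int width, int height) {
        return new float[] { (float) px / width, (float) py / height };
    }

    public static void main(String[] args) {
        float[] uv = pixelToUv(512, 512, 1024, 1024);
        System.out.println("UV of (512,512) in a 1024x1024 image: "
                + uv[0] + ", " + uv[1]); // 0.5, 0.5
        float[] corner = pixelToUv(1024, 1024, 1024, 1024);
        System.out.println("UV of the far corner: "
                + corner[0] + ", " + corner[1]); // 1.0, 1.0
    }
}
```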

  4. Shaders
    Although there are more parts, let’s keep it simple and just explain vertex and fragment shaders. First off, you need to understand that shaders are written in a language called GLSL. This is a language that video cards understand; the driver compiles it to the card’s native binary code at runtime, so you don’t need to know the different nuances of each video card’s machine code… We are programming in the 21st century, for god’s sake. That said, this is the job of shaders… The fragment shader is executed once for EVERY visible pixel on screen. The vertex shader’s primary job is to tell the fragment shader what texcoord (texture coordinate) should be used for a particular run cycle. The fragment shader takes the texture coordinate and tells the rendering engine what pixel color to paint on the screen… Generally the fragment shader will have access to your 1024x1024 image we talked about, pull the appropriate pixel color, and return it.

This is the out-of-the-box happy path… Now, once you understand the basics, keep in mind that each shader can also manipulate its environment. For example, the vertex shader uses your mesh and its UV data to tell the fragment shader what pixel in the image it should use… well, the vertex shader can, for instance, distort your mesh data before it identifies the coord to pass to the fragment shader… an example of this is a shader that simulates damage on a vehicle at runtime. I will stop there because I fear that I have already blown your mind with this paragraph.

I hope this at least gives you the foundation in your mind to be able to now read @nehon’s tutorial … Once you do and understand it, come back to this post and read @pspeed’s comments. You will understand those at that point.

Hope this helps.

EDIT: Please keep in mind that I explained this in a super simple, basic way. There are many other steps involved, both in creating a mesh and in what happens between a vert shader and a frag shader. This is a starting point just to ground your mind in the basics so you can grasp basic tutorials. For example, in reality the vert shader does not directly pass the frag shader the texture coordinates; it actually translates the mesh vertex position into screen space, and then the graphics card calls the frag shader to get the color of each pixel. But starting with that explanation would blow your mind. You will understand what I mean when you read @nehon’s tutorial and then READ the reference link at the bottom.


I don’t really get why people seem to be afraid of shaders most of the time. It’s actually pretty simple from a developer’s standpoint. The math used in shaders can be daunting, sure, but math is daunting in any other language too…
There is some weirdness due to the fact that it’s executed on a GPU and not a CPU, but all in all it’s pretty simple business.