GLSL Shader - Display Part of Texture

Hi guys! A few months ago I asked this question, and it was suggested that I use a texture atlas. This time round, however (also part of my new year's resolution :P), I'm trying to learn the shading language properly, and I decided to tackle this problem again, seeing as I'm re-creating my sprite engine from scratch. In short, I'm using GLSL to draw sprites from a sprite-sheet.

The most important part of drawing a sprite from a sprite-sheet, in this case, is drawing only a subset/range of its pixels, for example the range from (100, 0) to (200, 100). In the following test-case sprite-sheet, using those bounds, only the green part of the sheet would be drawn.
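(For concreteness, assuming the sheet is 300x100 pixels, which matches the three-cell example: texture coordinates are normalized, so u = x / width and v = y / height, and the pixel range (100, 0) to (200, 100) becomes the UV range (1/3, 0) to (2/3, 1).)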

This is what I have so far:

Definition:

[java]MaterialDef Solid Color {
    // This is the list of user-defined variables to be used in the shader
    MaterialParameters {
        Vector4 Color
        Texture2D ColorMap
    }
    Technique {
        VertexShader GLSL100:   Shaders/tc_s1.vert
        FragmentShader GLSL100: Shaders/tc_s1.frag

        WorldParameters {
            WorldViewProjectionMatrix
        }
    }
}[/java]

.vert file:

[java]uniform mat4 g_WorldViewProjectionMatrix;
attribute vec3 inPosition;

attribute vec4 inTexCoord;
varying vec4 texture_coordinate;

void main(){
    gl_Position = g_WorldViewProjectionMatrix * vec4(inPosition, 1.0);
    texture_coordinate = vec4(inTexCoord);
}[/java]
.frag:

[java]uniform vec4 m_Color;
uniform sampler2D m_ColorMap;
varying vec4 texture_coordinate;

void main(){
    vec4 color = vec4(m_Color);
    // texture2D expects a vec2, so sample with the first two (s, t) components
    vec4 tex = texture2D(m_ColorMap, texture_coordinate.st);
    color *= tex;
    gl_FragColor = color;
}[/java]

As you can see, I tried giving texture_coordinate a vec4 type rather than a vec2, so as to be able to reference its p and q values (texture_coordinate.p and texture_coordinate.q). Modifying them only resulted in different hues. (In GLSL's stpq swizzle set, p and q are the third and fourth components; the two actual UV components are s and t.)

m_Color refers to the color input by the user, and serves the purpose of altering the hue. In this case, it should be disregarded.

So far, the shader works as expected and the texture displays correctly.

I’ve been using resources and tutorials from NeHe (NeHe Productions: GLSL: An Introduction) and Lighthouse3D (http://www.lighthouse3d.com/tutorials/glsl-tutorial/simple-texture/).

I would appreciate any help, especially if it points me in the right direction as to which functions/values I should alter to get the desired effect of displaying only part of the texture. Thanks in advance!

Make your quad (initially) have texture coordinates from 0 -> 1, then divide the x tex coords by the number of columns of "squares" in your texture, and the y tex coords by the number of rows; in this case that is 3 and 1. Where you do this is up to you. This will give you the tex coords for the bottom-left square (the red one). Then just shift them around appropriately; for the green one, multiply the x tex coords by 2 in the shader.
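As a minimal sketch of that divide-and-shift, assuming the 3x1 sheet and a hypothetical Float CellIndex material parameter (not in the definition posted above); note it shifts by adding the cell index rather than multiplying, which is what the follow-up below settles on:

[java]uniform mat4 g_WorldViewProjectionMatrix;
uniform float m_CellIndex; // hypothetical parameter: 0 = red, 1 = green, 2 = blue

attribute vec3 inPosition;
attribute vec2 inTexCoord;
varying vec2 texture_coordinate;

void main(){
    gl_Position = g_WorldViewProjectionMatrix * vec4(inPosition, 1.0);
    // squeeze the quad's 0..1 coords into one cell, then shift by the cell index
    texture_coordinate = vec2((inTexCoord.x + m_CellIndex) / 3.0, inTexCoord.y);
}[/java]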

Just be careful of mipmapping: smaller mip levels average neighbouring texels together, so colours can bleed across cell borders unless you pad the cells or disable mipmaps.

Thanks @wezrule!

I followed what you said, except for the last part. I divided texture_coordinate.x by 3, which in retrospect should have been obvious from the beginning.

Then, I didn't multiply, since when I tried that I got weird results. Instead, I added 1/n to texture_coordinate.x, where n is the number of columns, as you described. By adding 1/3, I 'displaced' (offset) the texture by one 'frame'. Problem solved :D I also moved it from the .frag to the .vert file, so now the .vert looks something like this:

[java]uniform mat4 g_WorldViewProjectionMatrix;
attribute vec3 inPosition;

attribute vec2 inTexCoord;
varying vec2 texture_coordinate;

void main(){
    gl_Position = g_WorldViewProjectionMatrix * vec4(inPosition, 1.0);
    texture_coordinate = vec2(inTexCoord);
    texture_coordinate.x /= 3.0;       // squeeze into one cell of the 3x1 sheet
    texture_coordinate.x += 1.0 / 3.0; // shift one frame to the right (the green cell)
}[/java]

I tested around with a few values, and it seems to be working well. Thanks a lot! :)

I think what you really want is to just put the texture coordinates on the quad corners like any other geometry. You can even go back to using a vec2 for it, since there is no reason every corner needs all four texture coordinates. (Though I see in the latest post that you've already done that.)

Since the code using the material will have to somehow pick which image it wants from the atlas, that's the point at which you change the texture coordinates of the quad. The shader then just uses them normally… no special dividing or anything, since you've already baked that into the texture coordinates themselves.
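(Concretely, with the 3x1 sheet from before: to show the green cell you would set the quad's four corner UVs to (1/3, 0), (2/3, 0), (2/3, 1) and (1/3, 1) on the CPU, and the shader samples with them unchanged.)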

Thanks for the input pspeed :) Is that approach shader-based? And if it is, is it better than the code I posted, performance-wise?

I admit that I didn't understand 100% of what you said, although from what I managed to gather, it's the approach I used last year, when I just changed the texture coordinates to update the animation.

This is the code I have, which takes a 9-frame sprite and loops through it (one frame per second), for anyone who'd be interested:

[java]uniform mat4 g_WorldViewProjectionMatrix;
attribute vec3 inPosition;

attribute vec2 inTexCoord;
varying vec2 texture_coordinate;

uniform float g_Time;

void main(){
    gl_Position = g_WorldViewProjectionMatrix * vec4(inPosition, 1.0);
    texture_coordinate = vec2(inTexCoord);
    texture_coordinate.x /= 9.0;               // squeeze into one frame of the 9x1 sheet
    float time = mod(g_Time, 9.0);             // wrap the clock into 0..9 seconds
    texture_coordinate.x += floor(time) / 9.0; // advance one frame per second
}[/java]

I see. You want to animate in the shader. I guess that could work in a few isolated cases, if the atlas is set up properly.

The idea is to use as few atlases as possible, though, so that you can batch everything and use one material. You’d have to then enforce some constraints on the type of animation that could be included… or go back to having a vec4 that included a range of cells or something.

What exactly do you mean by texture atlas? A sprite-sheet? I can't really grasp the problem you're getting at, sorry.

With regards to animations within a texture atlas, I was planning on using bounds (think JComponents: [x-coord, y-coord, width, height]) plus the number of columns/rows within those bounds, so that each animation would be isolated on its own. Of course, this implies that each sprite would have to have its own material, although it would share the sprite-sheet with other sprites.
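(As a worked example of those bounds, with made-up numbers: on a sheet 512 px wide, an animation with bounds [x=100, y=0, width=300, height=100] and 3 columns has a frame width of 300 / 3 = 100 px, so frame i starts at u = (100 + i * 100) / 512 and is 100 / 512 wide.)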

I’m just putting everything on the table here, so if you have any suggestions or I’m taking the wrong approach, please let me know.

Each sprite having its own material will kill performance since you can no longer batch.

To me, the entire point would be to use as few materials as possible.

And I see no difference between a texture atlas and a sprite-sheet other than how they are used. So maybe I should leave this conversation while my sanity still holds. ;)

Thought so (re: sprite-sheets); just wanted to clarify :)

I can understand that a grass tile repeated 100 times (100 sprites) could easily share the same material and thus be batched.

What I can't understand is how two different sprites that access different parts of the sprite-sheet can use the same material. Say one sprite is a ghost and the other is the player: since they access different parts of the sprite-sheet, how can they have the same material while the user-defined uniforms are different?

Anyway, hope I’m not making a mess here, and thanks for your help so far :slight_smile:

Encode it into something else.

For example, the 2nd texcoord set could be used to define which sprite to use; then only the texcoord buffer needs to be updated, no matter how many sprites are shown, so it's kind of a constant cost (preparing the upload takes the time, not the upload itself).

Bonus points if you are able to pack more logic into this, like which animation should be played (aka the sprite-sheet row), and only pass the time via a uniform.
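A minimal vertex-shader sketch of that encoding, assuming a 3x3 sheet where each row is one 3-frame animation, jME's inTexCoord2 attribute carrying [animation row, time offset] per vertex, and g_Time as the clock:

[java]uniform mat4 g_WorldViewProjectionMatrix;
uniform float g_Time;

attribute vec3 inPosition;
attribute vec2 inTexCoord;  // plain 0..1 coords on the quad
attribute vec2 inTexCoord2; // x = animation row, y = per-sprite time offset

varying vec2 texture_coordinate;

const float COLS = 3.0; // frames per row (assumed layout)
const float ROWS = 3.0; // animations, one per row (assumed layout)

void main(){
    gl_Position = g_WorldViewProjectionMatrix * vec4(inPosition, 1.0);
    // pick the current frame within the row, one frame per second
    float frame = floor(mod(g_Time + inTexCoord2.y, COLS));
    // squeeze the 0..1 coords into one cell, then shift to that frame's
    // column and this sprite's animation row
    texture_coordinate = vec2((inTexCoord.x + frame) / COLS,
                              (inTexCoord.y + inTexCoord2.x) / ROWS);
}[/java]

The fragment shader stays as before; all sprites share one material, and switching a sprite's animation only means rewriting its four entries in the TexCoord2 buffer.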

Thanks Empire Phoenix :) How would that work with sprites whose widths/heights vary, and where the time/frame varies too? I guess that would mean that different arrays would have to hold information about each individual sprite making use of the material, no?

Yes, if you do animation in the shader then you will potentially need to have restrictions on how the sprite-sheet/texture atlas is set up, or encode a lot of additional data into the vertex attributes.

So far I've understood that by using one material containing a single sprite-sheet, different sprites can show different parts of it. What I can't figure out in Empire Phoenix's approach is how to use a second texture coordinate within the shader to change which parts are shown.

PS: I’m assuming that the whole process takes place in the shader, correct me if I’m wrong.

Well, for example, if you didn't want to do animation in the shader, then you could 100% do this with Unshaded.j3md and texture coordinate manipulation on the CPU… i.e. the typical way to do batched sprites.

Ah! I was getting confused because of the animated sprites. That's what I did last year (including for the animations, which was bad practice in my project). In light of what you said earlier, it makes perfect sense to use that approach with multiple static sprites so that they can be batched.

Tested it right here: two sprites sharing the same material, batched.

When it comes to animated sprites, I looked around the forums and found a reply saying that shaders should be used for animations. Anyway, since I'm stepping up my efforts to learn GLSL, I liked the suggestion, and I'm relishing the opportunity to learn more about shader programming.

Well, the single most important performance optimization you can make is to batch. If whatever you end up doing breaks batching then it will be less performant. Updating one buffer with lots of stuff in it is almost always going to be faster than sending lots of separate stuff.

Yes, that’s the logic I’m trying to follow. I guess the same thing with static sprites could also work for animated sprites, although for that to work, the animations would have to be simultaneous.

Thanks a lot for your help @pspeed :) Really appreciate it!

It just gets complicated because you have to put everything you will need to figure out which frame to play in some vertex attributes… whether texture coordinates or whatever.

For sprites that are animated but don't move, you get kind of a clear win if nothing else is going on. But for sprites that are moving, you are already updating the position buffer anyway, so I don't see why calculating on the CPU and updating the texture coordinates isn't sufficient. Aside from "learning GLSL", at least.

That's exactly the reason why I'm creating a custom sprite engine for my own game: to try to optimize it with a specific case in mind.

Well, basically, assuming you use point meshes as a base (ignoring some problems with the size limitation they have, for now):

You could set a fixed framerate for sprite animations (let's say 60 fps), then use one free texcoord to determine the animation index, and another one for a time offset.

Then pass the time to all of them as a uniform, calculate the frame to use in the vertex part, and fetch that tile and display it in the fragment part.

This would be powerful for many static but animated sprites; for moving ones the benefit is probably far lower.
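A rough sketch of the fragment half of that point-mesh variant, assuming point sprites are enabled so gl_PointCoord is available, and that the vertex part has already written the selected cell's top-left UV into a varying:

[java]uniform sampler2D m_ColorMap;

// set by the vertex shader: top-left UV of the selected cell
varying vec2 cellOffset;

const float COLS = 3.0; // frames per row (assumed layout)
const float ROWS = 3.0; // rows (assumed layout)

void main(){
    // gl_PointCoord runs 0..1 across the point sprite; squeeze it into one
    // cell of the sheet, then shift to the selected cell
    vec2 uv = cellOffset + gl_PointCoord / vec2(COLS, ROWS);
    gl_FragColor = texture2D(m_ColorMap, uv);
}[/java]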