How to billboard geometries inside a vertex shader?

Hi, I’ve been investigating ways to billboard thousands of batched QUAD geometries in a shader so that their Y rotation always aims at the camera, while their positions remain untouched in world coordinates. Basically, I batched those thousands of quads using GeometryBatchFactory, and every frame I want the GPU to adjust the rotation of those quads to aim at the camera via a shader (vertex, probably). I couldn’t find ANY shader code that left the geometry position alone and only rotated it. The closest I’ve got is this:

[java]
// Build a camera-facing basis at this vertex
vec3 vAt = normalize( g_CameraPosition - pos.xyz );
vec3 vRight = normalize( cross( vec3( 0.0, 1.0, 0.0 ), vAt ) );
vec3 vUp = normalize( cross( vAt, vRight ) );

vec2 s = inTexCoord * vec2( 5.0, 2.0 ); // NOTICE THIS LINE
vec3 vR = s.xxx * vRight;
vec3 vU = s.yyy * vUp;

// Offset the vertex in the camera-facing plane
vec4 dir = vec4( vR + vU, 0.0 );
gl_Position = g_WorldViewProjectionMatrix * (pos + dir);
[/java]

You’ll notice the line with the TexCoord XY multiplier… if this value is not as high as this, it doesn’t work: the geometry looks THIN from some angles, just as it normally would. The problem with this GLSL code is that it looks like it’s (kind of) working, but the quads have to be very tall for this billboarding method to work. I tried sending half-sized quads to the shader hoping that would compensate, but the result is exactly as if I had used low TexCoord multiplier values, which cancels the billboarding effect.

I also noticed with the above GLSL code that if the camera is near, the quad is sometimes deformed, and if the camera looks down a little, going around the quad shows it moving in world space, as if it were not rotating around its own center.

I’m looking for your help on this matter, guys; I know somebody has a solution to this somewhat /simple/ billboarding problem that I could not figure out. PLEASE KEEP IN MIND that I can’t rotate individual geometries in the simpleUpdate() loop because they are batched; I would have to lose the batching, and that would considerably degrade performance. I have thousands of those quads to update every frame (to aim at the camera), so I would much prefer the GPU to adjust their orientation via a shader of some sort, but I’m open to any solution you might have.

Thx :smiley:

This is kind of non-trivial.

To do this, each vertex will have to be located at the center of its quad (at least in X,Z space), and then you can use something like texture coordinates to offset them in camera space.

The trick is to first apply the world view transform (without projection: g_WorldViewMatrix) to the vertex. Then offset this position by some amount based on the x of the texture coordinate. Like, if they are supposed to be 1 meter wide then you can worldViewPos.x -= texCoord.x/2 or whatever.

Then apply the projection matrix to get it into projection space.

Yes, I’ve seen a technique like this but I was thinking there would be something simpler. The problem is that I’m using the same material for many textures and some of them are taller than the others, so the texture proportion offsetting (I think) wouldn’t work in this situation.

That’s tricky it seems!

Couldn’t we all position the quads normally but say with a (0,0,0) orientation and then compute a vector from camera to vertex and then use GLSL to rotate the vertex by this vector or something?

EDIT: Are the thousand quads sent to the GPU as one huge 6 vertices geometry? I guess “no” but I just want to make sure I’m not trying to individually rotate vertices if the GPU treats the thousand quads as one object instead of individual objects.

A .vert shader operates on vertices. It has no idea what shape a vertex came from. Therefore, everything needed to describe where a vertex sits within its shape must be encoded as vertex attributes. Usually the texture coordinates closely match the corners, so it’s easy to use them.

I’m not sure what the “taller than others” issue is since I specifically took y out of the problem since you seemed to be describing y-axis aligned billboarding.

Here is rough pseudo code for how you’d do it with texture coordinates. You are welcome to use some other vertex attribute to encode the same information.
[java]
vec4 modelSpacePos = vec4(inPosition, 1.0);

// Convert vertex location to world-view space, ie: camera-relative space
vec4 wvPos = g_WorldViewMatrix * modelSpacePos;

// Figure out how to offset it in camera space
vec2 texCoord = inTexCoord;
float offset = texCoord.x - 0.5; // remap 0..1 to -0.5..0.5
wvPos.x += offset;

gl_Position = g_ProjectionMatrix * wvPos;
[/java]

If you need widths that are not 1 meter, then you will have to scale the offset calculation. If you need different widths in the same batch, then you can encode the ‘size’ as either inTexCoord.z or as a separate vertex attribute (maybe ‘size’).
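For illustration, here is a plain-Java sketch of how that per-quad data could be packed on the CPU side. It uses no jME3 types, and the class and method names are hypothetical; it only shows the layout described above: all four corners of a quad share the quad-center x,z, the texture coordinate carries the corner identity, and a third texcoord component carries the per-quad width.

```java
public class QuadPacking {

    // Packs one quad: all 4 corners share the quad's center x,z;
    // y keeps the real corner height; texcoord.z carries the quad width.
    static void packQuad(float[] pos, float[] tex, int quadIndex,
                         float cx, float cy, float cz,
                         float width, float height) {
        // corner order: bottom-left, bottom-right, top-right, top-left
        float[][] uv = { {0, 0}, {1, 0}, {1, 1}, {0, 1} };
        for (int i = 0; i < 4; i++) {
            int p = (quadIndex * 4 + i) * 3;
            pos[p]     = cx;                     // center x, NOT the corner x
            pos[p + 1] = cy + uv[i][1] * height; // real corner height
            pos[p + 2] = cz;                     // center z
            tex[p]     = uv[i][0];               // u, remapped to -0.5..0.5 in the shader
            tex[p + 1] = uv[i][1];               // v
            tex[p + 2] = width;                  // per-quad width for the shader
        }
    }

    public static void main(String[] args) {
        float[] pos = new float[4 * 3];
        float[] tex = new float[4 * 3];
        packQuad(pos, tex, 0, 10f, 0f, -5f, 2f, 3f);
        // every corner sits at the quad center in x,z:
        System.out.println(pos[0] + " " + pos[2]); // 10.0 -5.0
        System.out.println(tex[2]);                // per-quad width: 2.0
    }
}
```

In jME3 these arrays would presumably end up in the Position and TexCoord buffers of a custom Mesh; the vertex shader then offsets the camera-space x by `(texCoord.x - 0.5) * texCoord.z`, as described above.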

Thank you so much for taking the time to reply.
I tried to implement what you described, but it seems to do nothing compared to not altering the position at all. Here’s my GLSL vertex shader code:

[java]
vec4 wvPos = g_WorldViewMatrix * pos;
float offset = texCoord.x - 0.5; // (make it -0.5 to 0.5)
wvPos.x += offset;
gl_Position = g_ProjectionMatrix * wvPos;
[/java]

Maybe I don’t understand additional things I should do? Maybe you could explain a little more how this code should affect the situation?

Here are some screenshots to show the problem:

///// #1 : Quad is facing camera, everything looks OK

///// #2 : Cam goes around on the left of the quad; it’s not facing camera anymore…

///// #3 : If I went about a little more it would completely disappear, you get the point already…


EDIT: In the meantime, I am investigating another approach, something similar to this:

[java]
float angle = dot(pos.xz, g_CameraPosition.xz);
//angle = acos(angle);
pos.xz += sin(angle);
[/java]

The above code does not fully work; I am trying to figure out why the acos(angle) displaces the vertex SO far away that it completely disappears. I must have the math wrong, but I can see my quads spin VERY, VERY fast even if the camera moves very slowly, so I guess it’s just a matter of tuning. I’m just unsure about the math; I really think the acos() should be executed, but like I said, everything disappears (probably it’s moved so far that it’s not in the frustum anymore?)
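A quick aside on why that acos() likely blows up: dot(pos.xz, g_CameraPosition.xz) is a dot product of two *unnormalized* positions, so it can have any magnitude, while acos() is only defined for inputs in [-1, 1]. Outside that range the GLSL result is undefined (Java returns NaN), which would throw the vertex anywhere. A small plain-Java check of the math; the in-range version normalizes both vectors first so the dot really is a cosine:

```java
public class AcosCheck {
    public static void main(String[] args) {
        // unnormalized positions, like dot(pos.xz, cameraPos.xz)
        double[] pos = {3, 4};
        double[] cam = {10, 0};
        double raw = pos[0] * cam[0] + pos[1] * cam[1]; // = 30, way outside [-1, 1]
        System.out.println(Math.acos(raw));             // NaN: input out of domain

        // normalize both vectors first; then the dot IS cos(angle)
        double lp = Math.hypot(pos[0], pos[1]);         // 5
        double lc = Math.hypot(cam[0], cam[1]);         // 10
        double cos = (pos[0] / lp) * (cam[0] / lc)
                   + (pos[1] / lp) * (cam[1] / lc);     // 0.6
        System.out.println(Math.acos(cos));             // ~0.927 rad, a sane angle
    }
}
```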

I am emitting billboarded quads from the geometry shader… Since my vertex is always located in the center, I have an advantage… Don’t know if it helps, but I use this code to calculate the rotationMatrix:

[java]
vec2 cameraOffset=normalize((g_WorldMatrix*center).xz-g_CameraPosition.xz);
spriteRotatation.x = atan(cameraOffset.x, cameraOffset.y); // spriteRotatation is a vec2 declared elsewhere

mat2 rotationMatrix = mat2( cos( spriteRotatation.x ), -sin( spriteRotatation.x ),
sin( spriteRotatation.x ), cos( spriteRotatation.x ));

[/java]

@.Ben. said: Thank you so much for taking the time to reply. I tried to implement what you described, but it seems to do nothing compared to not altering the position at all. Here's my GLSL vertex shader code:

…yes, but how are you setting up your geometry? Are you centering the x,z as previously described?

All of your vertex coordinates for a particular quad will need the same x,z. Only the y value will be regular.

@zzuegg said: I am emitting billboarded quads from the geometry shader.. Since my vertex is always located in the center, I have an advantage.. Don't know if it helps, but I use this code to calculate the rotationMatrix

[java]
vec2 cameraOffset=normalize((g_WorldMatrix*center).xz-g_CameraPosition.xz);
spriteRotatation.x = atan(cameraOffset.x, cameraOffset.y); // spriteRotatation is a vec2 declared elsewhere

mat2 rotationMatrix = mat2( cos( spriteRotatation.x ), -sin( spriteRotatation.x ),
sin( spriteRotatation.x ), cos( spriteRotatation.x ));

[/java]

Note: there is no reason to do atan and then sin/cos… since atan was sin/cos to begin with.

Also, I think you left out the parts about what you do with the rotation matrix. I think if you post those, then I can show you how to get rid of the matrix, too.

Anyway, your solution seems to be rotating the quads towards camera position instead of rotating them to camera orientation. (I agree it’s better for most things but it is a little more complicated than the solution I posted.)

Well, what I’m looking for is that the quads always look at the camera position yes. BTW I like the “Rotatation” var names all over the place @zzuegg :smiley: jk

@pspeed OK so I missed that part about centering the vertices, but then it means I’d have to fiddle with the VBO, correct? I think I’m not ready to do all those changes just to get the billboard effect, I really thought I could do that without changing anything except for the shader. Maybe I was wrong about this. The whole thing is to save fps by not de-batching the geometries. That’s why I thought the GPU could do all the work.

@.Ben. said: Well, what I'm looking for is that the quads always look at the camera position yes. BTW I like the "Rotatation" var names all over the place @zzuegg :D jk

@pspeed OK so I missed that part about centering the vertices, but then it means I’d have to fiddle with the VBO, correct? I think I’m not ready to do all those changes just to get the billboard effect, I really thought I could do that without changing anything except for the shader. Maybe I was wrong about this. The whole thing is to save fps by not de-batching the geometries. That’s why I thought the GPU could do all the work.

You HAVE TO DO IT to get the billboard effect. Your vertices do not know where the object center is. They cannot possibly guess where the object center is. So you need to set the positions to the object center and then push them outward based on rotation.

It is literally the only way to batch billboarded objects. The only way.

Note: if you get my approach working then it is only a small change to make them face position instead of orientation but it is a little more complicated math.

Both approaches require that the vertexes be centered, though.

I keep forgetting to say stuff…

If you post the code on how you create your quads+geometry+etc then I will show you how to fix it.

Hi again, thank you so much for all these replies. I understand from what you said that there is only one way to do it, but I’m trying to understand the concept and I’m pretty confused now, as I swear I thought I could simply rotate the vertex using some vector-based math, dotting the camera and the vertex to get an angle; I understand now that I’m wrong about this. I did get the geometries to rotate, though, I just don’t have the right angle.

I can post code if you want, but I don’t know what to post; all I do is use GeometryBatchFactory and attach nodes like I do for everything else. That’s all; I don’t do anything fancy here. I’ve been trying code in the shader only since the beginning, nothing in Java code yet. From what I understand, I’d have to pass unusual texture coordinates to the shader, but I have NO CLUE how to do that exactly. I don’t recall ever passing texture coordinates before.

@pspeed said: Note: there is no reason to do atan and then sin/cos... since atan was sin/cos to begin with.

Also, I think you leave out the next parts about what you do with the rotation matrix. I think if you do that then I can show you how to get rid of the matrix, too.

Anyway, your solution seems to be rotating the quads towards camera position instead of rotating them to camera orientation. (I agree it’s better for most things but it is a little more complicated than the solution I posted.)

I thought the full code was a bit off-topic, but if you have time to look through it, all performance benefits are welcome:

The workflow in my case is:

The mesh consists of:

- Vector2f worldPosition
- Vector2f worldSize
- Vector2f worldTexCoord
- int vegetationType (basically an index for accessing the texture array)

Since I know the world position of the sprite, as well as the world position of the camera, I thought the best way of calculating the 4 vertices of the emitted quad is to rotate the offsets with a rotation matrix.

I left out the “snoise” code to make it a bit shorter…

[java]
void main(){
    // Cull sprites where the terrain normal is too steep
    vec3 vertNormal = texture(m_normalMap, spriteTexCoord[0]).xyz;
    if(vertNormal.z < 0.9){
        return;
    }

    fragSpriteType = spriteType[0];

    vec4 center = gl_in[0].gl_Position;

    vec2 size = spriteSize[0] / 2.0;
    vec2 spriteRotatation = rotation[0];

    // Direction from the sprite to the camera in the XZ plane
    vec2 cameraOffset = normalize((g_WorldMatrix * center).xz - g_CameraPosition.xz);

    spriteRotatation.x = atan(cameraOffset.x, cameraOffset.y);

    mat2 rotationMatrix = mat2( cos( spriteRotatation.x ), -sin( spriteRotatation.x ),
                                sin( spriteRotatation.x ),  cos( spriteRotatation.x ));

    // Half-width offsets for the left and right edges, rotated towards the camera
    vec2 v1 = vec2(-size.x, 0.0);
    vec2 v2 = vec2( size.x, 0.0);

    v1 = rotationMatrix * v1;
    v2 = rotationMatrix * v2;

    vec4 worldPos = g_WorldMatrix * center; // matrix goes on the left

    // Pseudo-random wind sway based on world position and time
    vec2 windOffset = vec2(sin(2.0 * g_Time * snoise(worldPos.xy)) - 0.5,
                           sin(2.0 * g_Time * snoise(worldPos.xy * 2.0)) - 0.5);
    windOffset = windOffset / 2.0;

    varTex = vec2(0.0, 0.0);
    gl_Position = g_WorldViewProjectionMatrix * vec4(
        center.x + v1.x,
        center.y,
        center.z + v1.y,
        1.0);
    EmitVertex();

    varTex = vec2(0.0, 1.0);
    gl_Position = g_WorldViewProjectionMatrix * vec4(
        center.x + v1.x + windOffset.x,
        center.y + (size.y * 2.0),
        center.z + v1.y + windOffset.y,
        1.0);
    EmitVertex();

    varTex = vec2(1.0, 0.0);
    gl_Position = g_WorldViewProjectionMatrix * vec4(
        center.x + v2.x,
        center.y,
        center.z + v2.y,
        1.0);
    EmitVertex();

    varTex = vec2(1.0, 1.0);
    gl_Position = g_WorldViewProjectionMatrix * vec4(
        center.x + v2.x + windOffset.x,
        center.y + (size.y * 2.0),
        center.z + v2.y + windOffset.y,
        1.0);
    EmitVertex();

    EndPrimitive();
}

[/java]

That’s the result:

[video]http://www.youtube.com/watch?v=NmNPYtbEeN0[/video]

@.Ben. said: Hi again, thank you so much for all these replies. I understand from what you said that there is only one way to do it, but I'm trying to understand the concept and I'm pretty confused now, as I swear I thought I could simply rotate the vertex using some vector-based math, dotting the camera and the vertex to get an angle; I understand now that I'm wrong about this. I did get the geometries to rotate, though, I just don't have the right angle.

I can post code if you want, but I don’t know what to post; all I do is use GeometryBatchFactory and attach nodes like I do for everything else. That’s all; I don’t do anything fancy here. I’ve been trying code in the shader only since the beginning, nothing in Java code yet. From what I understand, I’d have to pass unusual texture coordinates to the shader, but I have NO CLUE how to do that exactly. I don’t recall ever passing texture coordinates before.

@pspeed said: I keep forgetting to say stuff...

If you post the code on how you create your quads+geometry+etc then I will show you how to fix it.

Note again: quads + geometry.

So, show me the code where you create the Quad. Show me the code where you create the Geometry. Show me the code where you batch them. I’m not sure how else to explain it.

It can’t be done ‘simply’ as you say, because by the time the vertex shader sees a vertex it’s just a point in space: (60, 1515, 20). Now, given no additional information, how do I rotate that towards the camera? Rotate relative to what?

So first I’ll deal with the extraneous atan…

@zzuegg said: [java] vec2 cameraOffset=normalize((g_WorldMatrix*center).xz-g_CameraPosition.xz);
spriteRotatation.x=atan(cameraOffset.x,cameraOffset.y);

mat2 rotationMatrix = mat2( cos( spriteRotatation.x ), -sin( spriteRotatation.x ),
			          sin( spriteRotatation.x ),  cos( spriteRotatation.x ));

[/java]

So, atan(y, x) returns an angle where cos(angle) = x and sin(angle) = y (kind of backwards from what you have, but I think you have a couple of things canceling each other out).

So in effect: [java]atan(cameraOffset.x, cameraOffset.y) = atan(sin(angle), cos(angle))[/java]

So… [java]sin(angle) = cameraOffset.x; cos(angle) = cameraOffset.y[/java]

So… [java]mat2 rotationMatrix = mat2( cameraOffset.y, -cameraOffset.x, cameraOffset.x, cameraOffset.y );[/java]

But in reality, you are only going to be using half of that matrix, because what you are really doing is projecting along a vector perpendicular to the camera direction (that is why the y value of your initial v1 is 0). And you already have that projection vector easily enough:

[java]vec2 rightOffset = vec2(-cameraOffset.y, cameraOffset.x);[/java]

Then just add that to go right or subtract it to go left (scaled as appropriate by size.x).
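To sanity-check that simplification, here is a small plain-Java verification (names are only for illustration) that multiplying v1 = (-size, 0) by the mat2 above yields exactly size * rightOffset. Recall that GLSL mat2 is column-major, so mat2(a, b, c, d) * v = (a*v.x + c*v.y, b*v.x + d*v.y):

```java
public class RightOffsetCheck {
    public static void main(String[] args) {
        // a unit camera offset, i.e. (sin(angle), cos(angle))
        double ox = 0.6, oy = 0.8;
        double size = 2.0;

        // GLSL: mat2(cos, -sin, sin, cos) * vec2(-size, 0)
        // column-major: result = (cos * -size + sin * 0, -sin * -size + cos * 0)
        double rx = oy * -size;  // cos(angle) = oy
        double ry = ox * size;   // sin(angle) = ox
        System.out.println(rx + ", " + ry); // -1.6, 1.2

        // the shortcut: size * rightOffset, with rightOffset = (-oy, ox)
        double sx = size * -oy;
        double sy = size * ox;
        System.out.println(sx + ", " + sy); // -1.6, 1.2 -- the same vector
    }
}
```

So the full rotation matrix and the atan() are unnecessary for this case; one perpendicular vector does the job.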


Ha, actually, pretty straightforward.

Upside is, it saves some lines of code…
Downside is, even with 1 million plants showing, there is nearly no speed difference, which is kind of strange. Seems that the current GPU bottleneck is not the computation part…

But nice to have a living shader optimizer…

I’ll someday drop you the tessellation shader, since that causes the biggest fps drop :smiley:

Also, I would add a few more upvotes for the kind help if I could…

@zzuegg said: Ha, actually, pretty straightforward.

Upside is, it saves some lines of code…
Downside is, even with 1 million plants showing, there is nearly no speed difference, which is kind of strange. Seems that the current GPU bottleneck is not the computation part…

But nice to have a living shader optimizer…

:slight_smile:

It’s more that I cringe when I see atan() or anything that deals with “angles”, as it is so often unnecessary. When we already have vectors, we actually have a surplus of information. I’m surprised it didn’t make a difference, because I recall atan() being an expensive trig function… but as you say, ‘everything else’ must already be taking the lion’s share of the time.

@zzuegg said: I'll someday drop you the tessellation shader, since that causes the biggest fps drop :D

Also, I would add a few more upvotes for the kind help if I could…

re: tessellation shader, sounds fun. I’m already jealous that you have a geometry shader. I wonder if it works out faster than using straight-up batched quads. I know it will use less memory, but I wonder if there is any performance difference.

I would actually bet that the geometry shader way is slower than batched quads. It actually has to be, because it is just added work per frame plus an additional shader stage.

I am expecting overall performance benefits because you are able to add a few optimizations:

- I can change from a star-type placement of 3 quads to a billboarded placement on the fly. Star placement is probably required only for the very near vegetation.
- Since the vegetation patch meshes are reused and paged, I save 3/4 of the bus bandwidth and 3/4 of the CPU time spent changing the buffer values. If I find time, I am going to write a quad-based vegetation patch and run some benchmarks.

It is more of a concept up to now, but it seems that it works out quite well.

All the literature I have found says: don’t use geometry shaders for optimisation; use them when there is no other way of doing it.

As an addition: my next book is going to be a math book. It’s really a part of gamedev that is largely underrated…

Very interesting conversation here guys, thank you for this. Well, I guess I had to try it myself and fail for a day to finally understand what you were talking about, @pspeed. What I do understand now is that since vertices are POINTS, they have no rotation of their own, only a position. Therefore, we couldn’t adjust their “rotation” (read: position, since it’s a point) like I intended to, even if we know the camera’s and the point’s coordinates, simply because the point does not know where ITS OWN geometry origin is relative to the other points that compose the quad. As a matter of fact, the quad’s 4 vertices (or 6, since it’s 2 triangles, I guess) don’t know if they’re the top-left corner of the quad or the bottom-right, and so forth. Yes, we could guess approximately what the point’s new coordinates should be, but only using our common sense, not mathematically. If somebody could prove this wrong, it would have been done in the last 20 years, and I couldn’t find a single solution to this on Google after a day of research. So now I really do think it’s not mathematically possible: it’s EASY, using common sense, to approximate where the new coordinates should be, but it’s not computable from the vertex alone.

Therefore, like you tried to explain to me, we have no choice but to pass additional vertex information to the shader. This seems like the most optimized scenario to me:

1. Send a single point instead of a quad’s 4 vertices to the GPU
2. For each point, send the width and the height; the point coordinates themselves represent the “quad” center
3. The shader would then make a rotation matrix from the camera position to the point’s coordinates (the “quad” center)
4. and then emit and process 4 vertices derived/expanded from the center coordinates by width/height
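Steps 3 and 4 above can be sketched in plain Java (hypothetical names, no jME3 types, just the math): given a quad center, a camera position, and a width, compute the two bottom corners of a camera-facing quad using the perpendicular-vector trick from earlier in the thread instead of a full rotation matrix.

```java
public class BillboardCorners {

    // Returns {leftX, leftZ, rightX, rightZ} for a quad of the given width
    // centered at (cx, cz), facing the camera at (camX, camZ) in the XZ plane.
    static double[] bottomCorners(double cx, double cz,
                                  double camX, double camZ, double width) {
        // direction from quad to camera, XZ only (Y-axis billboarding)
        double dx = camX - cx, dz = camZ - cz;
        double len = Math.hypot(dx, dz);
        dx /= len; dz /= len;
        // perpendicular to that direction = the quad's "right" vector
        double rightX = -dz, rightZ = dx;
        double half = width / 2.0;
        return new double[] {
            cx - rightX * half, cz - rightZ * half,  // one corner
            cx + rightX * half, cz + rightZ * half   // the other corner
        };
    }

    public static void main(String[] args) {
        // quad at the origin, camera 5 units away along +Z, width 1:
        double[] c = bottomCorners(0, 0, 0, 5, 1);
        System.out.println(c[0] + " " + c[1] + " " + c[2] + " " + c[3]);
        // corners land at x = +/-0.5, z = 0: the quad faces the camera
    }
}
```

The top corners would be the same x,z with the height added to y; a geometry or vertex shader version of this is the same arithmetic in GLSL.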

It may sound stupid, but I have no clue how to put this into jME3 Java code. I understand I have to replace my quad geometry with a point geometry, but then how am I supposed to pass the other values to the shader? Via the material definition? That must not be how it’s done, since it would mean that every update, more than 1000 materials are sent back to the GPU, which doesn’t make sense. Do we have to hack the VBO float[] directly, or what’s the best option to do that? KEEP IN MIND that GeometryBatchFactory ran on the 1000 geometries, so that adds a layer of complexity, I guess?

Thx :smiley: