Shaders: Transformations in Model Space

Hello everyone,

so I’ve read about the concept of model space and world space in GLSL.
As far as I understand it, model space is the space the model is created in (in Blender, or when viewing the .j3o), and world space is the space of the whole scene, isn’t it?

As an experiment, I tried to set the coordinates of some vertices to the origin of model space. That origin should be equal to the position of the model in the scene, if I’m not mistaken.
So I wrote this small vertex shader node, which I place right before the CommonVert node:

void main(){ 
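	// collapse every vertex whose texCoord.y is above 0.1 to the model-space origin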
	if (texCoord.y > 0.1) {
		modelPositionOut = vec3(0.0);
	} else {
		modelPositionOut = modelPositionIn;
	}
}

This is before the CommonVert node, so every transformation should take place in model space.
However, when I look at the result, the affected vertices are placed at the origin of the whole scene, not at the origin of the object!

What’s wrong?
I hope I didn’t misunderstand model and world space.

To make it clear: model space is the space whose origin is the origin of the model. That’s not really related to Blender or .j3o files.
World space is the space whose origin is the center of your scene.

Your assumption sounds about right, so if it’s not working, something else may be wrong.
Could you post the generated shader?

Just to be sure: if I take a look at the scene graph, the origin of the model is the origin of the Geometry, isn’t it?

This is the generated Vertex Shader:


uniform mat4 g_WorldViewProjectionMatrix;

attribute vec2 inTexCoord;
attribute vec4 inPosition;

varying vec2 CommonVert_texCoord1;

void main(){
		vec4 Global_position = inPosition;


	//location : Begin
	vec3 location_modelPositionIn = Global_position.xyz;
	vec2 location_texCoord = inTexCoord;
	vec3 location_modelPositionOut;
 
	if (location_texCoord.y > 0.1) {
		location_modelPositionOut = vec3(0.0);
	} else {
		location_modelPositionOut = location_modelPositionIn;
	}
	//location : End

	//CommonVert : Begin
	CommonVert_texCoord1 = inTexCoord;
	vec3 CommonVert_modelPosition = location_modelPositionOut;
	vec4 CommonVert_projPosition;
	vec2 CommonVert_texCoord2;
	vec4 CommonVert_vertColor;

    CommonVert_projPosition = g_WorldViewProjectionMatrix * vec4(CommonVert_modelPosition, 1.0);
	Global_position = CommonVert_projPosition;
	//CommonVert : End

	gl_Position = Global_position;
}

This is the result:

The screenshot shows 25 Geometries, translated so that they form a 5x5 grid.
The vertices that I wanted to move to the origin of each Geometry are moved to the origin of my scene instead.

Just to be clear: the origin of the model is 0,0,0. If you have a vertex in the mesh at 0,0,0, then it will be located IN WORLD SPACE wherever the Geometry is located IN WORLD SPACE. If the Geometry is located at 45, 67, 123135, then that translation contributes to the WORLD SPACE position of its vertices.
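In code, that mapping looks roughly like this (an untested sketch; the Box and variable names are just placeholders, and it assumes the standard Spatial.localToWorld() call):

import com.jme3.math.Vector3f;
import com.jme3.scene.Geometry;
import com.jme3.scene.shape.Box;

// somewhere inside simpleInitApp(), for example
Geometry box = new Geometry("box", new Box(1, 1, 1));
box.setLocalTranslation(45, 67, 123135);

// a vertex at the model-space origin...
Vector3f modelPoint = new Vector3f(0, 0, 0);
// ...ends up exactly where the Geometry sits in world space: (45, 67, 123135)
Vector3f worldPoint = box.localToWorld(modelPoint, null);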

I just found my mistake, and I feel a bit dumb now.
My 25 Geometries were batched!
Of course all those vertices were moved to the same point: once batched, they all belong to a single mesh and share a single model space! :facepalm:

Thanks to both of you for the clarification of model space and world space anyway.

Maybe this can be useful for someone who has the same problem:

I found a way to work around this batching limitation.
I store the position of each object as texture coordinates in the object’s mesh. This way every object has the same material and can still be batched.
However, this technique requires deep clones of the objects, so performance is reduced a bit, but still nowhere near as much as when you don’t use batching at all.
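Roughly, the idea looks like this (an untested sketch; the helper name bakeObjectPosition and the choice of the TexCoord2 buffer are my own, so adapt them to whatever your shader reads):

import com.jme3.math.Vector3f;
import com.jme3.scene.Geometry;
import com.jme3.scene.Mesh;
import com.jme3.scene.VertexBuffer;
import com.jme3.util.BufferUtils;
import java.nio.FloatBuffer;

public static void bakeObjectPosition(Geometry geom) {
	// each Geometry needs its own copy of the mesh, otherwise all the
	// clones would write into the same shared buffer
	Mesh mesh = geom.getMesh().deepClone();
	Vector3f pos = geom.getLocalTranslation();

	// store the object's position in every vertex as a 3-component "texcoord"
	FloatBuffer buf = BufferUtils.createFloatBuffer(mesh.getVertexCount() * 3);
	for (int i = 0; i < mesh.getVertexCount(); i++) {
		buf.put(pos.x).put(pos.y).put(pos.z);
	}
	buf.flip();
	mesh.setBuffer(VertexBuffer.Type.TexCoord2, 3, buf);
	geom.setMesh(mesh);
}

In the shader node the buffer then shows up as the inTexCoord2 attribute, so every vertex still knows the position of the object it originally belonged to, even after batching.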


That’s cool you figured that out :wink:

If your objects are of a relatively decent size (and all the same), then you might be able to get away with instancing, too. Then the only per-object information would be the world transform (or just the position, if that’s all it takes).
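For reference, a minimal sketch with jME’s InstancedNode (assuming a material based on a definition that supports instancing, e.g. Lighting.j3md with its UseInstancing flag; myGeometries and mat are placeholders):

import com.jme3.scene.Geometry;
import com.jme3.scene.instancing.InstancedNode;

mat.setBoolean("UseInstancing", true);

InstancedNode instancedNode = new InstancedNode("tiles");
for (Geometry g : myGeometries) {
	g.setMaterial(mat);
	// each child keeps its own local/world transform
	instancedNode.attachChild(g);
}
// groups the children into instanced batches
instancedNode.instance();
rootNode.attachChild(instancedNode);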