Problem with fetching the scale from transformation matrix

Hey everyone,

I have come across several Blender models whose transformations did not load properly.
It turned out that the objects’ scales were negative along some of the axes.

The current method that fetches the scale from the transformation matrix (in the Matrix4f class) looks like this:
[java]
public void toScaleVector(Vector3f vector) {
    // Each column of the 3x3 part is a (scaled) rotated basis axis,
    // so the length of the column gives the scale magnitude along that axis.
    float scaleX = (float) Math.sqrt(m00 * m00 + m10 * m10 + m20 * m20);
    float scaleY = (float) Math.sqrt(m01 * m01 + m11 * m11 + m21 * m21);
    float scaleZ = (float) Math.sqrt(m02 * m02 + m12 * m12 + m22 * m22);
    vector.set(scaleX, scaleY, scaleZ);
}
[/java]

The magnitudes of the scales will be correct, but their signs might be wrong. As you can see, the results here will always be positive.

My question is: how can I determine whether the sign of each scale component should be positive or negative?

I have been searching for some time, but I did not find any answer to that.
I only learned that if one or three of the axes are scaled by a negative value, then the determinant of the matrix is negative. This means there is no combination of rotation and positive scale in 3D space that would transform the identity matrix into the one we currently have; a negative scale is required.

If two axes are mirrored, then the determinant is positive and we can always find a combination of rotation and positive scale that reaches the target transformation.
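For illustration, here is a minimal sketch of that determinant check (assuming the same m00…m22 fields used by toScaleVector() above); note that it only tells us that an odd number of axes is mirrored, not which one:
[java]
/**
 * Sketch: returns true if the upper-left 3x3 part of the matrix has a
 * negative determinant, i.e. an odd number of axes (one or three)
 * carries a negative scale. It cannot tell WHICH axis is mirrored.
 */
public boolean hasNegativeScale() {
    float det = m00 * (m11 * m22 - m12 * m21)
              - m01 * (m10 * m22 - m12 * m20)
              + m02 * (m10 * m21 - m11 * m20);
    return det < 0f;
}
[/java]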

I would be grateful for any help here :slight_smile:
I could of course load the proper scale from the Blender file and apply it, but we would still have a piece of code that does not work entirely correctly.

Hmmm… I’ve only thought about this for five seconds, but I’m not sure you can detect when two axes have a negative scale, since it would mimic a 180-degree rotation around the third axis.

I can’t think of a great way, but if you already know that there is a negative axis, one thing you could do is take cross products of every two-axis combination and check the dot product against the third axis.

In case this is new information, in a transform matrix, each column represents a vector of a rotated axis. So column 0 is the x axis, 1 is the y axis, 2 is the z axis. These form the orthogonal axes of a rotated space… and it’s nice because they are easy to visualize. Normally each of these axes would be length 1 which is why taking the length can tell you the scale.

If you take the cross product of normalized X and Y, then that should produce normalized Z. The cross product of Z and X should produce Y, and so on. If you do a dot product against the axis you actually have and the result is negative, then that scale is negative.

Actually, that can’t work. I’m leaving it as an explanation though because it shows why it can’t work. If one axis is backwards then they will all appear backwards using that test. So I guess it’s more complicated than that.
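To make that concrete, the cross/dot test boils down to the scalar triple product of the three column axes, which is just the sign of the 3x3 determinant again, so it flips whenever an odd number of axes is mirrored rather than pointing at a particular one. A sketch, assuming the same m00…m22 fields and jME’s Vector3f:
[java]
// Column 0 = x axis, column 1 = y axis, column 2 = z axis.
Vector3f xAxis = new Vector3f(m00, m10, m20);
Vector3f yAxis = new Vector3f(m01, m11, m21);
Vector3f zAxis = new Vector3f(m02, m12, m22);
// cross(x, y) dot z is the scalar triple product, i.e. the determinant
// of the 3x3 part; it goes negative for one OR three mirrored axes.
float tripleProduct = xAxis.cross(yAxis).dot(zAxis);
boolean mirrored = tripleProduct < 0f;
[/java]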

I can’t even mentally work out how one would detect which axis is negative, as it’s ambiguous (for the reasons stated above). A Y inversion seems (to me) no different from an X inversion on a 180-degree rotated version… and in that case, I’m wondering if it’s OK to just arbitrarily apply the negative sign to any axis. If you know one is negative, then make X negative or something.

…otherwise, the information is not there.
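For illustration, a variant of toScaleVector() along those lines might look something like this (just a sketch, assuming the same m00…m22 fields as above; since the mirrored axis is ambiguous, the negative sign is arbitrarily pinned to X):
[java]
/**
 * Hypothetical variant: if the 3x3 determinant is negative, the matrix
 * contains a reflection, but the mirrored axis is ambiguous, so the
 * negative sign is arbitrarily assigned to X.
 */
public void toScaleVector(Vector3f vector) {
    float scaleX = (float) Math.sqrt(m00 * m00 + m10 * m10 + m20 * m20);
    float scaleY = (float) Math.sqrt(m01 * m01 + m11 * m11 + m21 * m21);
    float scaleZ = (float) Math.sqrt(m02 * m02 + m12 * m12 + m22 * m22);
    float det = m00 * (m11 * m22 - m12 * m21)
              - m01 * (m10 * m22 - m12 * m20)
              + m02 * (m10 * m21 - m11 * m20);
    if (det < 0f) {
        scaleX = -scaleX;
    }
    vector.set(scaleX, scaleY, scaleZ);
}
[/java]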

I had similar thoughts about computing the cross product of each pair of axes and comparing it with the third one, but I came to the same conclusions, so I guess you might be right about it.

What we could do is store data in the Matrix4f about the scale sign for each of the axes.
A single byte would be sufficient, and it would be used in the method that reads the scale from the matrix.
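Just as a sketch of that idea (none of these fields or methods exist in Matrix4f today), it could look something like this:
[java]
// Hypothetical extra field: one bit per axis, set by whoever builds the
// matrix (e.g. the Blender loader) and read back when decomposing it.
private byte negativeScaleFlags; // bit 0 = x, bit 1 = y, bit 2 = z

public void toScaleVector(Vector3f vector) {
    float scaleX = (float) Math.sqrt(m00 * m00 + m10 * m10 + m20 * m20);
    float scaleY = (float) Math.sqrt(m01 * m01 + m11 * m11 + m21 * m21);
    float scaleZ = (float) Math.sqrt(m02 * m02 + m12 * m12 + m22 * m22);
    // Re-apply the remembered signs when handing the scale back.
    if ((negativeScaleFlags & 1) != 0) scaleX = -scaleX;
    if ((negativeScaleFlags & 2) != 0) scaleY = -scaleY;
    if ((negativeScaleFlags & 4) != 0) scaleZ = -scaleZ;
    vector.set(scaleX, scaleY, scaleZ);
}
[/java]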

But a Matrix4f with a -x scale is no different from a Matrix4f with a -y or -z scale that’s been rotated. Like, they are literally the exact same numbers.

So I’m starting to believe it shouldn’t matter. If it’s detected that one of the axes is negative, then it shouldn’t matter which one we say is negative. It just means you aren’t guaranteed to get the same thing out as you put in… like a Quaternion. But it seems somehow wrong to add an extra data element to track something that JME doesn’t even deal with properly anyway (like non-uniform scale).

Where does this come up?

Basically, what I’d be curious to know:

  1. what does the source data look like exactly?
  2. what is the purpose of putting it in a lossy matrix if you are just going to be pulling the parts out separately later?
  3. what are the separated parts then used for?