New ASE converter; but what to do about animation?

about an eternity ago at a MacHack

Oh, neat! Those were the days!

Indeed they were. I was only able to go to one, and I think I still have my "Where's Waldemar?" t-shirt. From that you could probably figure out the year - I'm thinking it was '90 or '91, but I really can't remember.

Oh, and sorry for the thread hijack :).

MrCoder said:

sounds cool hevee... more jme addon projects please! :)

OK. To me it looks like we need a better skeletal animation system - more features, and easier to maintain than the current one. That's the project I'd like to see, but I don't have the time right now to start it off. Maybe some time next month, but if somebody beats me to it I'd like that a lot and hereby apply for contributor status :)

I offer my collaboration: as a modeler, tester and, where I can, as a developer.

As already said, I am interested in this because I need to decide what to do with MD5 Reader 2's skeletal system.

If we need some example code, we could take it from several different projects. On the jME side there are kman's loaders (Cal3D and MD5) and the code inside MD5 Reader 2. I know that is only basic support, but you are more expert than me, and I don't know whether these projects contain any code useful for your purpose.

On the C side there are examples like Blender's skeletal system and the Cal3D library. Cal3D, in particular, is used in a lot of projects; one is the Crystal Space 3D engine.

What I need (to make a simple request) is something that supports different cases (and that does not totally break compatibility) so I can easily migrate MD5 Reader 2 to it. Otherwise I would be constrained to keep MD5 Reader 2's internal system.

I would like to let you know that I found an interesting old discussion about bones and the Node class in my MD5 Reader 2 fixed thread.

White_Flame was speaking about some checks that the Node class (and its subclasses) perform. These checks, he said, are completely useless for a bone. Essentially, he was talking about implementing bones without extending the Node class, to save the cycles those checks waste and improve performance.

I don't know how the current jME bones are implemented, so I post this link even though it could be outdated.

The Spatial and subclasses are made for scene graph handling, I think skeleton information should be stored in more light-weight classes…

Well, there are people who think that a scene graph should be able to deal with any spatial hierarchy, including bones. For example, if you want to put a sword in the hand of a character, adding a Sword node to the Hand node is the most logical way to express the spatial relationship. In fact, that's one of the things that scene graphs do well in my mind: spatial hierarchy. All this other business of trying to force render state into the spatial hierarchy doesn't do it for me, though – there should be a different management mechanism for a different kind of state (material vs position, say).

That being said, I don't think you should need a special Bone object. You could have a SkinController on the skin node, which would find the Node objects that have the same names as its expected bones. You could also have an AnimationController on, say, the root of the skeleton, which would animate the different Node objects. When time came to render the skinned object, the SkinController would simply gather the transform matrices from the Nodes it knows relates to its bones, and use that as the matrix palette. That, in turn, assumes per-vertex skinning (which can be done in the vertex shader, if you want), instead of the per-bone skinning that's currently being done.
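A rough sketch of that idea in plain Java (the class names, fields, and `Node` stand-in below are illustrative, not the real jME API):

```java
import java.util.HashMap;
import java.util.Map;

// Minimal stand-in for a scene-graph node -- not the real jME class.
class Node {
    final String name;
    // world transform as a row-major 4x4 matrix, identity by default
    float[] worldTransform = {1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1};
    Node(String name) { this.name = name; }
}

// Sketch of the design described above: the skin controller stores only
// bone *names*; at bind time it resolves them to scene-graph Nodes, and
// each frame it copies their world transforms into a flat matrix palette.
public class SkinControllerSketch {
    private final String[] boneNames; // position here == bone index in the vertex data
    private Node[] bones;

    SkinControllerSketch(String[] boneNames) { this.boneNames = boneNames; }

    void bind(Map<String, Node> sceneByName) {
        bones = new Node[boneNames.length];
        for (int i = 0; i < boneNames.length; i++) {
            bones[i] = sceneByName.get(boneNames[i]);
        }
    }

    // Gather one 4x4 matrix per bone into a contiguous palette,
    // ready to hand to the skinning code or upload as shader constants.
    float[] gatherPalette() {
        float[] palette = new float[bones.length * 16];
        for (int i = 0; i < bones.length; i++) {
            System.arraycopy(bones[i].worldTransform, 0, palette, i * 16, 16);
        }
        return palette;
    }
}
```

The animation controller would simply mutate the Nodes' transforms; the skin controller never needs to know animation exists.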

jwatte said:
[...] the SkinController would simply gather the transform matrices from the Nodes it knows relates to its bones, and use that as the matrix palette.

There is a high probability that I am the only one on this forum who does not understand this sentence. If you have the time, can you explain it to me in a little more depth? I am genuinely interested.

In particular, I do not understand which matrices the SkinController gathers. The translation/rotation matrices of the mesh vertices of the Nodes? Their animation transformations? Or the Nodes' transformation matrices?

Hmm. That sentence just assumes you understand how skinning and animation actually works. If you don't, it's kind of hard to explain in a simple post. But I'll try!

Creating the data in 3ds Max, Maya, etc, plus the exporter/converter involved:

  1. Each vertex is weighted to between 1 and 4 bones.
  2. A table is built, saying "a bone named RightShoulder is index 13" for each bone used by the skin.
  3. A vertex buffer is exported where each vertex contains 4 indices and 4 (or 3) weights. If a vertex is weighted to fewer than 4 bones, the remaining weights and indices are 0.
  4. Typically, the bone table is sorted so that parents come before children, so the root is typically bone 0, and the pelvis bone 1 (if rooted on the ground).
  5. The mesh is written out as the bone index-to-name table, and the vertex buffer. This can conveniently assume that the transform for all bones is identity when the character is in the reference pose.
  6. The starting position for each bone can be written out, together with the bone name and the bone parent.
  7. Animations are written out as a set of transforms (scale, rotation, translation) for each bone over time, relative to the reference pose.
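The exported data from steps 1–7 could be held in something like the following plain data holders (all names hypothetical, just to make the layout concrete):

```java
// Hypothetical data holders mirroring the export steps above.
public class SkinData {
    // steps 2/5: bone index-to-name table, parents before children (step 4)
    String[] boneNames;   // e.g. boneNames[13] == "RightShoulder"
    int[]    boneParents; // boneParents[0] == -1 for the root

    // step 3: per-vertex skinning attributes, 4 influences per vertex,
    // padded with index 0 / weight 0 when fewer bones influence a vertex
    int[]   boneIndices;  // 4 ints per vertex
    float[] boneWeights;  // 4 floats per vertex, summing to 1 per vertex

    // step 6: reference ("bind") pose, one 4x4 transform (16 floats)
    // per bone; step 7's animation curves are relative to this pose
    float[] bindPoseTransforms;

    int vertexCount() { return boneIndices.length / 4; }
}
```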

Rendering the data in a scene graph (such as jMonkeyEngine):
  8. Mesh is loaded into vertex buffer.
  9. Animations are loaded, typically as sets of splines for various key values (translation, rotation, etc).
  10. The skeleton is built, typically by placing a Node at each bone position, parented accordingly. The mesh does a look-up from Node name to index.
  11. The animation data is interpolated over time to drive the Nodes.
  12. When rendering, the mesh reads the transform matrix from each Node (as applied by the animation), and puts that in the matrix palette passed to the vertex shader (or skinning code).
  13. For each vertex, the transform matrices are blended according to index/weight found in the per-vertex data, and the vertex is transformed by that blended result.
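Step 13, done on the CPU for clarity (a minimal sketch of matrix-palette skinning, not jME code; on a GPU the same blend runs in the vertex shader):

```java
// Blend the four palette matrices by weight, then transform the
// vertex position by the blended matrix.
public class PaletteSkinning {
    // palette: 16 floats (row-major 4x4) per bone; pos: x,y,z
    static float[] skinVertex(float[] palette, int[] indices,
                              float[] weights, float[] pos) {
        float[] m = new float[16];
        for (int k = 0; k < 4; k++) {          // 4 influences per vertex
            int base = indices[k] * 16;
            for (int j = 0; j < 16; j++) {
                m[j] += weights[k] * palette[base + j];
            }
        }
        // apply the blended matrix as a point transform (w = 1)
        float[] out = new float[3];
        for (int r = 0; r < 3; r++) {
            out[r] = m[r*4] * pos[0] + m[r*4+1] * pos[1]
                   + m[r*4+2] * pos[2] + m[r*4+3];
        }
        return out;
    }
}
```

Padding unused influences with index 0 and weight 0 (step 3 above) means they simply contribute nothing to the blend.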

That's not what jME does, though. jME implements a different method of skinning, which may do less work for CPU implementations of the algorithm if there are lots of vertices that only have a single weight, but which is impossible to implement on the graphics card. The bigger CPU savings is to not have the CPU do the work at all, in my opinion :slight_smile: If you are targeting GeForce 2 or earlier cards (TNT or ATI Rage, anyone?) then those cards are not shader capable, and you'd have to run skinning in software, at which point trying to weight to 4 bones per vertex is kind-of wasteful anyway – better degrade to one bone per vertex at that point!

Btw: An alternative to sending matrices in a palette is to send scale/translation/rotation as vector and quaternion to the shader, have the shader blend the weights in that space, and turn to matrix right before transforming the vertex. Some people believe this leads to less deflation of joints when interpolating.
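The rotation part of that alternative, blending quaternions instead of matrices, might look like this sketch (done on the CPU here, though the point of the technique is to do it in the shader; names are illustrative):

```java
// Blend per-bone rotations as quaternions (nlerp), normalizing at the
// end; a matrix would only be built after blending, right before the
// vertex transform.
public class QuatBlendSketch {
    // quats: 4 floats (x,y,z,w) per bone
    static float[] blendQuat(float[] quats, int[] indices, float[] weights) {
        float[] q = new float[4];
        for (int k = 0; k < 4; k++) {
            int b = indices[k] * 4;
            // flip to the same hemisphere as the running sum so the
            // lerp takes the short way around
            float dot = q[0]*quats[b] + q[1]*quats[b+1]
                      + q[2]*quats[b+2] + q[3]*quats[b+3];
            float s = (dot < 0f) ? -weights[k] : weights[k];
            for (int j = 0; j < 4; j++) q[j] += s * quats[b + j];
        }
        // normalize -- the "n" in nlerp
        float len = (float) Math.sqrt(q[0]*q[0] + q[1]*q[1] + q[2]*q[2] + q[3]*q[3]);
        for (int j = 0; j < 4; j++) q[j] /= len;
        return q;
    }
}
```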

Thanks for the explanation. Reading it, I discovered that the point I did not understand was whether or not jME uses GPU capabilities. The rest was something I already knew a little (from my experience with the M2 Loader, MD5 Reader 2 and the MD5 format), but I lack the technical vocabulary, so sometimes I do not understand the names used by experts.

In my opinion, using GPU capabilities when possible is the right thing to do. But, at the same time, I would put in a little extra effort to support older cards or weaker GPUs (like Intel integrated chips).

I think it should be possible to do this in an acceptable way with a good object-oriented architecture, switching between the two (or more) solutions. For me, support for older software or hardware (where possible) is a must.


This explanation also helped me to make a diagram to use when analysing MD5 Reader 2 and choosing what to modify in it.

Note that MD5, and jME, use a non-GPU format, where the data is stored per bone. So, for each bone, a list of target vertices, weights, and per-bone vertex position are stored. Then when skinning, the code walks each bone, and does a transform of the source vertex by the bone transform, and adds it (weighted) into the target vertex. At the end, the vertex array is sent to the card. That method of skinning, while mathematically equivalent to the matrix palette method, cannot be accelerated by the GPU. The reason MD5 does this is because the DOOM 3 engine runs some algorithms on the meshes post-skinning to extract minimal stencil shadow outlines, so it can't use GPU skinning no matter what. If you use other ways of getting the stencil outline, or use shadow maps instead, having the data on the CPU is not necessary at all.
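The per-bone loop described above can be sketched like this (a simplified illustration of the MD5-style approach, not actual jME or MD5 Reader 2 code):

```java
// Per-bone ("MD5-style") CPU skinning: each bone stores its own list of
// target vertices, weights, and bone-local source positions; the loop
// accumulates weighted results into the output vertex array.
public class PerBoneSkinning {
    static class BoneBatch {
        float[] transform; // row-major 4x4 world transform of this bone
        int[]   targets;   // which output vertices this bone influences
        float[] weights;   // one weight per target
        float[] localPos;  // x,y,z per target, in bone space
    }

    // out: x,y,z per vertex, zeroed by the caller before each frame
    static void skin(BoneBatch[] bones, float[] out) {
        for (BoneBatch b : bones) {
            for (int i = 0; i < b.targets.length; i++) {
                float x = b.localPos[i*3], y = b.localPos[i*3+1], z = b.localPos[i*3+2];
                int t = b.targets[i] * 3;
                float[] m = b.transform;
                // transform the bone-local position, then add it in weighted
                out[t]   += b.weights[i] * (m[0]*x + m[1]*y + m[2]*z  + m[3]);
                out[t+1] += b.weights[i] * (m[4]*x + m[5]*y + m[6]*z  + m[7]);
                out[t+2] += b.weights[i] * (m[8]*x + m[9]*y + m[10]*z + m[11]);
            }
        }
    }
}
```

Mathematically this gives the same vertices as the matrix-palette blend, but because the data is grouped per bone rather than per vertex, it has to run on the CPU.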

Note that even if you store all data using matrix palette format, you can easily support pre-shader graphics cards. (Note that Intel graphics cards do support shaders in modern versions) Just do what the shader would do, but on the CPU, and send the vertices to the card when you're done.

Btw: if you want to integrate my working ASE exporter into jME, it's available on googlecode, and the license is compatible with jME, so have at it!


I'm trying to use this jme/ase tool, but obviously I'm a newbie and making something wrong.  :stuck_out_tongue:

When I've imported it into Eclipse, and sorted the jME dependencies, there are still things missing!

These are the issues I have that might be able to clarify what I'm doing wrong:

Missing Imports:

import lc.Services;

import lc.basic.BasicServices;


(there are no packages with names basic and gui at all in my project under lc)

I also had some cast errors in AxisAngle where cloning of Vector3f was going on (which seems to be OK with an explicit cast, but maybe it's a symptom of me doing something wrong to start with when importing the project into Eclipse).

Any idea what is wrong here?  :smiley:

jwatte said:

Note that MD5, and jME, use a non-GPU format [...]

I am sorry, jwatte, for answering with such a delay. I want to thank you for the explanations.
I would like to ask you (and every other developer with the necessary knowledge) whether you think it is possible to take data stored in a non-GPU format and convert it to take advantage of GPU acceleration.
In particular I refer to the Nvidia example on how to do skinning
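One possible answer, sketched in plain Java: walk the per-bone weight lists and regroup them into the per-vertex index/weight arrays that a matrix-palette vertex shader expects (all names here are illustrative, not an existing API):

```java
import java.util.ArrayList;
import java.util.List;

// Regroup MD5-style per-bone influence lists into per-vertex
// index/weight arrays (4 influences per vertex, GPU palette format).
public class PerBoneToPalette {
    // boneTargets[b] / boneWeights[b]: vertices bone b influences, and how much.
    // outIndices / outWeights: 4 entries per vertex, pre-zeroed by the caller.
    static void convert(int vertexCount,
                        int[][] boneTargets, float[][] boneWeights,
                        int[] outIndices, float[] outWeights) {
        // collect (boneIndex, weight) pairs per vertex
        List<List<float[]>> influences = new ArrayList<>();
        for (int v = 0; v < vertexCount; v++) influences.add(new ArrayList<>());
        for (int b = 0; b < boneTargets.length; b++) {
            for (int i = 0; i < boneTargets[b].length; i++) {
                influences.get(boneTargets[b][i]).add(new float[]{b, boneWeights[b][i]});
            }
        }
        for (int v = 0; v < vertexCount; v++) {
            List<float[]> inf = influences.get(v);
            // keep the 4 largest weights; the palette format allows only 4
            inf.sort((a, c) -> Float.compare(c[1], a[1]));
            float sum = 0f;
            for (int k = 0; k < 4 && k < inf.size(); k++) {
                outIndices[v*4 + k] = (int) inf.get(k)[0];
                outWeights[v*4 + k] = inf.get(k)[1];
                sum += inf.get(k)[1];
            } // unused slots stay padded with index 0 / weight 0
            for (int k = 0; k < 4 && sum > 0f; k++) outWeights[v*4 + k] /= sum;
        }
    }
}
```

One extra step this sketch omits: the per-bone format stores bone-local positions, so building a single bind-pose position per vertex also requires transforming each bone-local position by its bone's bind-pose matrix before the weighted sum.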
whirl said:

Missing Imports:
import lc.Services;
import lc.basic.BasicServices;

I'm getting the same errors. It seems the file is incomplete  :(