Some questions about hardware skinning

I’ve recently been familiarizing myself with hardware skinning in 3.1 and with the differences between it and software skinning. I now have a few questions about hardware skinning.

  1. I have an algorithm that worked with software skinning and enabled me to perform a ray cast on a geometry. Using its bone index buffer, I could determine which bones control the particular vertex hit by the ray cast. The problem I am having now with hardware skinning is that the bone index buffer’s internal array is null (essentially, it has no array and throws an UnsupportedOperationException when calling array()). I saw that there is another buffer type for bone indexes when using hardware skinning, called HWBoneIndex, but that buffer gives me the same result. Where is the bone index data stored when using hardware skinning?

  2. I have another algorithm that lets me swap parts of the body/clothing in and out for a character. All parts were exported under the same skeleton. With software skinning it is as simple as detaching and attaching geometries to the node that has the skeleton control, and the geometries adjust automatically to the skeleton’s transforms. With hardware skinning, however, it is not that simple. I found that I had to use reflection to call the updateTargetsAndMaterials() and switchToHardware() methods on the skeleton control so that my geometry adjusted to the skeleton’s transforms. Is there an easier way to let the skeleton control know that it needs to add a new geometry and a new material to its lists?
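For background on the lookup in 1): jME packs four bone indices per vertex in the BoneIndex buffer, so mapping a picked vertex to its bones is just an indexed read. A minimal sketch with a plain byte[] standing in for the buffer’s backing array (the helper name and layout assumptions are mine, not from the engine):

```java
import java.util.ArrayList;
import java.util.List;

public class BoneIndexLookup {
    // jME stores up to 4 bone indices per vertex in the BoneIndex buffer.
    static final int INDICES_PER_VERTEX = 4;

    // Hypothetical helper: given the backing array of the BoneIndex buffer
    // and a vertex index from a ray-cast hit, return the bone indices in
    // that vertex's four slots (unused slots simply hold 0).
    static List<Integer> bonesForVertex(byte[] boneIndices, int vertexIndex) {
        List<Integer> bones = new ArrayList<>();
        int base = vertexIndex * INDICES_PER_VERTEX;
        for (int slot = 0; slot < INDICES_PER_VERTEX; slot++) {
            bones.add(boneIndices[base + slot] & 0xFF); // read as unsigned byte
        }
        return bones;
    }

    public static void main(String[] args) {
        // Two vertices, four bone-index slots each.
        byte[] buffer = {0, 1, 0, 0,   2, 3, 0, 0};
        System.out.println(bonesForVertex(buffer, 1)); // slots of vertex 1
    }
}
```

Note that a slot holding bone 0 may be a real influence or an unused slot; the matching BoneWeight entry disambiguates.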

  1. It can’t work anymore, at least not like that. With hardware skinning, the underlying buffers are never updated. A solution could be to create a shape for each bone based on all the vertices that the bone controls (look into KinematicRagdollControl, we do this there). Make a geometry from each shape and attach it to the corresponding bone (via its attachments node), then set the geometry’s cull hint to CullHint.Always so that it’s not rendered. If you want to be able to get the bone from the geometry when you pick it, you can add the bone’s name or index as UserData on the geometry.

  2. For this you may have to reset the skeleton before attaching the new mesh… otherwise the mesh is attached while the skeleton is not in its bind pose, and the transforms are all messed up.
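The per-bone shape suggestion in 1) starts from knowing which vertices each bone controls. An engine-free sketch that groups vertices by their highest-weighted bone, assuming jME’s layout of four indices and four weights per vertex (all names here are hypothetical, and a real implementation would read the BoneIndex/BoneWeight buffers instead of plain arrays):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class VerticesPerBone {
    // Hypothetical sketch: group vertex indices by the bone with the largest
    // weight, as a first step toward building one pick shape per bone.
    static Map<Integer, List<Integer>> groupByDominantBone(byte[] indices, float[] weights) {
        Map<Integer, List<Integer>> byBone = new TreeMap<>();
        int vertexCount = indices.length / 4;
        for (int v = 0; v < vertexCount; v++) {
            // Find the slot with the highest weight for this vertex.
            int best = 0;
            for (int slot = 1; slot < 4; slot++) {
                if (weights[v * 4 + slot] > weights[v * 4 + best]) {
                    best = slot;
                }
            }
            int bone = indices[v * 4 + best] & 0xFF; // unsigned byte
            byBone.computeIfAbsent(bone, k -> new ArrayList<>()).add(v);
        }
        return byBone;
    }

    public static void main(String[] args) {
        // Vertex 0 is mostly driven by bone 0, vertex 1 mostly by bone 2.
        byte[] idx  = {0, 1, 0, 0,   1, 2, 0, 0};
        float[] w   = {0.7f, 0.3f, 0, 0,   0.2f, 0.8f, 0, 0};
        System.out.println(groupByDominantBone(idx, w));
    }
}
```

Each resulting vertex list could then back a pick-only geometry attached to that bone, as described above.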

  1. Well, I’m not particularly fond of that solution because it would mean rewriting some things in my code. I did find another solution though: if the skeleton control uses hardware skinning, I just store the bone index data as a Byte array on the geometry as UserData. It currently works using some code adapted from the Mesh.prepareForAnim() method:

    ByteBuffer boneBuffer = (ByteBuffer) geom.getMesh().getBuffer(VertexBuffer.Type.BoneIndex).getData();
    if (!boneBuffer.hasArray()) {
        // No accessible backing array, so copy into a heap buffer that has one.
        ByteBuffer arrayIndex = ByteBuffer.allocate(boneBuffer.capacity());
        boneBuffer.clear(); // rewind before the bulk copy
        arrayIndex.put(boneBuffer);
        geom.setUserData("BoneIndex", ArrayUtils.toObject(arrayIndex.array()));
    }

  2. I still need to try what you said.
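The copy in the snippet above works because a direct buffer reports hasArray() == false, while an allocate()-backed heap buffer does not. The same pattern can be exercised with plain java.nio (no jME types; this just mirrors the copy step in isolation):

```java
import java.nio.ByteBuffer;
import java.util.Arrays;

public class BoneBufferCopy {
    public static void main(String[] args) {
        // A direct buffer has no accessible backing array, similar to the
        // BoneIndex buffer's data under hardware skinning.
        ByteBuffer boneBuffer = ByteBuffer.allocateDirect(4);
        boneBuffer.put(new byte[] {1, 2, 0, 0});

        byte[] copy;
        if (!boneBuffer.hasArray()) {
            // Copy into a heap buffer whose backing array we can read.
            ByteBuffer arrayBacked = ByteBuffer.allocate(boneBuffer.capacity());
            boneBuffer.clear();          // rewind so the whole buffer is copied
            arrayBacked.put(boneBuffer); // bulk copy
            copy = arrayBacked.array();
        } else {
            copy = boneBuffer.array();
        }
        System.out.println(Arrays.toString(copy));
    }
}
```

From there the array can be stashed as UserData and indexed per vertex, as in the snippet above.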

For 1), I’m not sure why doing this helps pick a limb. I’m not sure I get what you want to do in the end.

Well, in short: I had a working algorithm that let me pick a limb, as you said. But the main goal of the algorithm is to find which bone affects that limb (or rather, the vertex the ray collided with). That is why I used the bone index buffer to determine which bone affects the vertex. Since it worked with software skinning but not with hardware skinning (because the buffer isn’t updated), I just needed a copy of the buffer kept somewhere else. So I copied the buffer into the UserData, so that my algorithm can still access it and do what it needs to do.

Edit: Actually, now that I think about it, I have another question. Is hardware skinning unsupported on some graphics cards? Should I code my algorithms for both hardware and software skinning?