So far I have had no problems. Looks very cool by the way - nicely polished in many areas.
I’m going to dig into https://github.com/stephengold/Maud/blob/master/README.md and try to do something more tomorrow to see how I get on. I’m interested in giving the animation retargeting a try since that could save me from pulling out my hair with regards to the only semi-decent walk cycle I’ve ever managed to make.
One thing I noticed instantly, though, is that rotating the camera with the middle mouse button stops working when the cursor leaves the JME window. Not sure if that’s something you can prevent.
Here’s an outline of the process to retarget an animation from model A to model B:
(1) Set asset folders to access each model (Settings -> Asset folders -> Add)
(2) Models -> Load -> navigate to model B
(3) Models -> Source model -> Load -> navigate to model A
(4) Select the Mapping Tool (Map -> Tool)
(5) Click the “Show retargeted pose” button and the “Twist” button
(6) For each bone used in the animation, starting from the root bone: RMB the bone in model A, RMB the corresponding bone in model B, click the “Map” button, use the Twist Tool or axis dragging to align the bones similarly
(7) (optional) Map -> Save
(8) Animations -> Add new -> Retarget
(9) Click the “Load” button for source animation and select the walk cycle of model A
(10) Click the “Retarget” button, edit the animation name, and click “Retarget”
(11) Models -> Save
Oh, right… that’s the default behavior in a SimpleApplication when the cursor is hidden by the flyCam’s .setDragToRotate(true) chain. The app keeps accepting mouse input while the cursor is hidden, though the cursor is temporarily placed smack dab in the middle of the rendering area and then restored when the drag finishes.
I’ve set my recording software to give a hint of where the mouse cursor is at all times by highlighting the area around it in yellow. You can see the flyCam’s behavior in the following video at 30:21.
So, you could probably take a cue from what the FlyByCamera class is doing to prevent any “mouse dragged out of bounds” issues.
Someone suggested off-forum that I should post a video demonstrating some real-life uses for Maud. I plan to do so for the next release, around the end of August. Here’s my list of anticipated real-life uses:
develop jME animations from motion-capture data
copy/retarget animations between models
convert models in other formats to native J3O format
troubleshoot issues with models (or with your model-asset pipeline)
I’m interested in doing some of the retargeting at runtime. That way I could store one set of animations for my humanoid figures and load and retarget an animation as needed in-game, saving disk space at the cost of extra computation. Is there a library for this?
I don’t currently offer a library for animation retargeting. Thank you for that idea.
Before I charge ahead, I want to understand your use case a bit better. The big question in my mind is how similar those humanoid figures are, in particular how similar their armatures/skeletons are.
The 1st step in retargeting animations is simply converting the bone names/indices in the source skeleton to those of the target skeleton. For instance, bone #60 (“Calf.L”) in the Sinbad model corresponds to bone #97 (“shin.L”) in the Jaime model. With a little care in creating your models, you could avoid such issues, I suspect.
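At runtime, that first step could be as simple as a lookup table from source bone names to target bone names. Here is a minimal sketch in plain Java using the Sinbad/Jaime bones mentioned above; the `BoneNameMap` class and its methods are illustrative only, not part of Maud or jME:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of step 1: translating source-skeleton bone names to target-skeleton
// names. Class and method names are hypothetical, not any existing library API.
public class BoneNameMap {
    private final Map<String, String> sourceToTarget = new HashMap<>();

    public void put(String sourceName, String targetName) {
        sourceToTarget.put(sourceName, targetName);
    }

    // Look up the target-skeleton name for a source bone, or null if unmapped.
    public String targetName(String sourceName) {
        return sourceToTarget.get(sourceName);
    }

    public static void main(String[] args) {
        BoneNameMap map = new BoneNameMap();
        map.put("Calf.L", "shin.L"); // Sinbad bone #60 -> Jaime bone #97
        System.out.println(map.targetName("Calf.L")); // prints shin.L
    }
}
```

With consistent naming conventions across your models, this table could even be the identity map, which is why similar armatures make runtime retargeting so much cheaper.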
The 2nd step is re-orienting the bone rotations from one context to another. For instance, the local +Z axis of Sinbad’s Calf.L points forward, but in Jaime it’s the local +X axis of shin.L that points forward. Again, I suspect you could easily avoid such troublesome differences between your models.
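Mathematically, that re-orientation can be expressed as conjugating each source rotation by a fixed per-bone offset quaternion that carries the source bone’s axes onto the target bone’s. A minimal sketch in plain Java, using the forward-axis example above (+Z to +X); the `Reorient` class is illustrative, not Maud’s or jME’s API:

```java
// Sketch of step 2: re-expressing a bone rotation in a differently-oriented
// local frame via quaternion conjugation. Quaternions are stored as {w,x,y,z}.
public class Reorient {
    static double[] mul(double[] a, double[] b) {
        return new double[] {
            a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3],
            a[0]*b[1] + a[1]*b[0] + a[2]*b[3] - a[3]*b[2],
            a[0]*b[2] - a[1]*b[3] + a[2]*b[0] + a[3]*b[1],
            a[0]*b[3] + a[1]*b[2] - a[2]*b[1] + a[3]*b[0]
        };
    }

    static double[] conjugate(double[] q) {
        return new double[] {q[0], -q[1], -q[2], -q[3]};
    }

    // qTarget = offset * qSource * offset^-1 (offset assumed unit-length).
    static double[] reorient(double[] qSource, double[] offset) {
        return mul(mul(offset, qSource), conjugate(offset));
    }

    public static void main(String[] args) {
        double s = Math.sqrt(0.5);
        // Offset: 90 degrees about +Y, mapping the source forward axis +Z
        // onto the target forward axis +X.
        double[] offset = {s, 0.0, s, 0.0};
        // Source keyframe: 90 degrees about the source bone's +Z axis.
        double[] qSource = {s, 0.0, 0.0, s};
        double[] qTarget = reorient(qSource, offset);
        // 90 degrees about the target's +X axis:
        System.out.printf("%.3f %.3f %.3f %.3f%n",
                qTarget[0], qTarget[1], qTarget[2], qTarget[3]);
        // prints 0.707 0.707 0.000 0.000
    }
}
```

The per-bone offsets are exactly the kind of data a skeleton map has to capture, which is why the map must be built once per pair of skeletons.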
Maud performs steps (1) and (2) using a “skeleton map”. In general, the map must be created manually for each pair of skeletons you plan to use for retargeting. Maud makes it relatively easy to create maps, but it’s not automatic except in the trivial case of equivalent skeletons.
The 3rd step is creating animation(s) in your target model and adding the retargeted bone tracks to those animations. If the source model and target model have equivalent skeletons, this becomes a simple cloning operation. You probably don’t need a library for that! If the skeletons aren’t equivalent, you’ll need skeleton maps, but once you have them, the creating and retargeting can be automated.
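For the equivalent-skeleton case, that cloning operation amounts to a deep copy of each track’s keyframe data, so later edits to the new animation don’t disturb the source. A small sketch in plain Java; the float arrays stand in for an engine’s bone-track class and are not a real library API:

```java
// Sketch of step 3, equivalent-skeleton case: deep-copy per-keyframe rotation
// quaternions so the retargeted animation owns its own data.
public class TrackClone {
    static float[][] cloneRotations(float[][] rotations) {
        float[][] copy = new float[rotations.length][];
        for (int i = 0; i < rotations.length; i++) {
            copy[i] = rotations[i].clone(); // copy each {x, y, z, w} keyframe
        }
        return copy;
    }

    public static void main(String[] args) {
        float[][] source = { {0f, 0f, 0f, 1f}, {0.707f, 0f, 0f, 0.707f} };
        float[][] target = cloneRotations(source);
        target[0][0] = 0.5f; // editing the retargeted animation...
        System.out.println(source[0][0]); // ...leaves the source untouched: 0.0
    }
}
```

With non-equivalent skeletons, the same loop would additionally remap each track’s bone index and re-orient its rotations, using the skeleton map from the earlier steps.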
The final step is fixing up the retargeted animation(s) to address issues like feet sliding or not touching the ground, limbs poking into the torso, and so on. Maud includes tools for doing this fixup manually, and I’ll be writing more over the next few weeks. But maybe with a little care you can avoid such issues, too. Perhaps some experimentation is in order.
A new demo video is up on YouTube. It’s 15 minutes long and demonstrates animation retargeting for 2 cases: (a) equivalent skeletons and (b) using an existing skeleton map. https://youtu.be/4UwxbsOewow
I plan to create/upload another video showing how to create a skeleton map from scratch. Hopefully this weekend.
For the rest of September I plan to focus on physics. Perhaps Maud can be useful for tuning ragdoll simulations.
I just wish you had uploaded this video in HD instead of 360p, but other than that it’s fine. The commentary is professionally made, and the tool is working well.
The retargeting feature is going to save us a huge amount of time from animating each model separately.
I think that the community and the engine need great tools such as this, so thanks for all your efforts!