Seeking alpha testers for Maud


My Maud editor for animated models is now at a point where I’d appreciate constructive feedback. I’m seeking a few brave JME users to act as alpha testers.

The project (including documentation) is online at

The documentation explains how to build and use Maud; please read it.

I’m interested in feedback about:

  • issues you encounter while building Maud
  • places where the documentation is incorrect, misleading, incomplete, or unclear
  • bugs you encounter while using Maud, especially reproducible ones
  • suggestions on making Maud easier to use or more intuitive
  • features you want added (see the Wishlist section of the documentation for my thoughts)

Kind words are also welcome.

In return, I’ll support Maud as time permits and try to answer questions about it.

(August 2017) Monthly WIP screenshot thread

mmm, I see a lot of things that I need to include in SS Editor :wink:


After several false starts, I got Desktop Deployment to work for me.

I’ve uploaded pre-built ZIP files to GitHub:



So far I have had no problems. Looks very cool by the way - nicely polished in many areas.

I’m going to dig in and try to do something more tomorrow to see how I get on. I’m interested in giving the animation retargeting a try, since that could save me from pulling out my hair over the only semi-decent walk cycle I’ve ever managed to make.

One thing I noticed instantly though is that using middle mouse to rotate the camera stops when the mouse leaves the JME window. Not sure if that’s something you can prevent.


The app should hide the cursor during rotation to prevent it.


I hadn’t noticed this before. Thanks for pointing it out. The stoppage seems to happen only when the mouse’s X and/or Y are negative. I suspect a fix is possible. I’ll investigate further.


Here’s an outline of the process to retarget an animation from model A to model B:
(1) Set asset folders to access each model (Settings -> Asset folders -> Add)
(2) Models -> Load -> navigate to model B
(3) Models -> Source model -> Load -> navigate to model A
(4) Select the Mapping Tool (Map -> Tool)
(5) Click the “Show retargeted pose” button and the “Twist” button
(6) For each bone used in the animation, starting from the root bone: RMB the bone in model A, RMB the corresponding bone in model B, click the “Map” button, use the Twist Tool or axis dragging to align the bones similarly
(7) (optional) Map -> Save
(8) Animations -> Add new -> Retarget
(9) Click the “Load” button for source animation and select the walk cycle of model A
(10) Click the “Retarget” button, edit the animation name, and click “Retarget”
(11) Models -> Save


That’s a clever workaround. However, I think the fix is to obtain cursor motion directly from LWJGL.

If InputManager lies about something as fundamental as the cursor position, bypass it!


Oh, right… the default behavior in a SimpleApplication… yeah, the cursor is hidden by the flyCam’s .setDragToRotate(true) chain. It keeps accepting mouse input while the cursor is hidden, though the cursor is temporarily put smack dab in the middle of the rendering area and then restored when the drag finishes.

I’ve set my recording software to give a hint where the mouse cursor is at all times by highlighting the surrounding area in yellow. You can see the flyCam’s behavior in the following video at 30:21

So, you could probably cue off what the FlyCam class is doing to prevent any “mouse dragged out of bounds” issues.
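The recenter-while-dragging trick described above could be sketched roughly like this. This is a hypothetical stand-alone class, not FlyCam’s actual code: while a drag is active, the hidden cursor is warped back to the window center every frame and the per-frame offsets are accumulated as rotation deltas, so the cursor can never wander off-screen (or go negative).

```java
// Hypothetical sketch of the recenter-on-drag technique; the class name and
// API are invented for illustration and are not part of jME or Maud.
class DragTracker {
    final int centerX, centerY;
    int totalDx, totalDy; // accumulated drag offsets since the drag began

    DragTracker(int windowWidth, int windowHeight) {
        centerX = windowWidth / 2;
        centerY = windowHeight / 2;
    }

    // Called once per frame with the cursor position reported this frame.
    // Accumulates the offset from center, then returns the position the
    // caller should warp the (hidden) cursor back to.
    int[] onFrame(int cursorX, int cursorY) {
        totalDx += cursorX - centerX;
        totalDy += cursorY - centerY;
        return new int[] { centerX, centerY };
    }
}
```

Because the cursor is re-pinned to the center each frame, the reported coordinates stay well inside the window, which sidesteps the negative-coordinate stoppage entirely.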


Neat trick. Anyway, the camera-dragging issue is closed, fixed in version ALPHA+1.

I’m (still) interested in feedback about:

  • issues you encounter while building Maud
  • places where the documentation is incorrect, misleading, incomplete, or unclear
  • bugs you encounter while using Maud, especially reproducible ones
  • suggestions on making Maud easier to use or more intuitive
  • features you want added (see the Wishlist section of the documentation for my thoughts)


Someone suggested off-forum that I should post a video demonstrating some real-life uses for Maud. I plan to do so for the next release, around the end of August. Here’s my list of anticipated real-life uses:

  1. develop jME animations from motion-capture data
  2. copy/retarget animations between models
  3. convert models in other formats to native J3O format
  4. troubleshoot issues with models (or with your model-asset pipeline)

Which of these are of greatest interest?


Yes, a demo/tutorial is always welcome.
For me, the second feature is the most interesting:
“copy/retarget animations between models”.
Keep up the good work :slight_smile:


I’m interested in doing some of the retargeting at runtime, so I can have animations saved for humanoid figures and then load and retarget an animation as needed in-game. That saves disk space; the trade-off is more computation. Is there a library for this?


I don’t currently offer a library for animation retargeting. Thank you for that idea.

Before I charge ahead, I want to understand your use case a bit better. The big question in my mind is how similar those humanoid figures are, in particular how similar their armatures/skeletons are.

The 1st step in retargeting animations is simply converting the bone names/indices in the source skeleton to those of the target skeleton. For instance, bone #60 (“Calf.L”) in the Sinbad model corresponds to bone #97 (“shin.L”) in the Jaime model. With a little care in creating your models, you could avoid such issues, I suspect.
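This first step could be sketched as a simple lookup table. The class below is an illustration only (not Maud’s API); the one bone-name pair is taken from the Sinbad/Jaime example above.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of step 1: translate source-skeleton bone names to
// their target-skeleton equivalents before any tracks are copied.
class BoneNameMap {
    private final Map<String, String> sourceToTarget = new HashMap<>();

    void map(String sourceBone, String targetBone) {
        sourceToTarget.put(sourceBone, targetBone);
    }

    // Returns the target bone name, or null if the source bone is unmapped.
    String targetFor(String sourceBone) {
        return sourceToTarget.get(sourceBone);
    }
}
```

Usage: `map("Calf.L", "shin.L")` records the Sinbad-to-Jaime correspondence; unmapped bones come back as `null`, which a retargeting pass can treat as “skip this track”.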

The 2nd step is re-orienting the bone rotations from one context to another. For instance, the local +Z axis of Sinbad’s Calf.L points forward, but in Jaime it’s the local +X axis of shin.L that points forward. Again, I suspect you could easily avoid such troublesome differences between your models.
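To make the re-orientation concrete, here is a minimal quaternion sketch (deliberately not jME’s com.jme3.math.Quaternion) showing a fixed “twist” rotation that carries one bone’s forward axis onto another’s. The 90° rotation about +Y below maps a local +Z forward axis onto +X, the kind of axis mismatch described above.

```java
// Minimal quaternion math for illustration; names and API are invented.
final class Quat {
    final double w, x, y, z;

    Quat(double w, double x, double y, double z) {
        this.w = w; this.x = x; this.y = y; this.z = z;
    }

    // Build a unit quaternion from a unit axis and an angle in radians.
    static Quat fromAxisAngle(double ax, double ay, double az, double angle) {
        double s = Math.sin(angle / 2);
        return new Quat(Math.cos(angle / 2), ax * s, ay * s, az * s);
    }

    // Rotate vector v by this quaternion: v' = v + w*t + u x t,
    // where u = (x,y,z) and t = 2 * (u x v).
    double[] rotate(double[] v) {
        double tx = 2 * (y * v[2] - z * v[1]);
        double ty = 2 * (z * v[0] - x * v[2]);
        double tz = 2 * (x * v[1] - y * v[0]);
        return new double[] {
            v[0] + w * tx + (y * tz - z * ty),
            v[1] + w * ty + (z * tx - x * tz),
            v[2] + w * tz + (x * ty - y * tx)
        };
    }
}
```

A retargeting pass would compose a twist like this with each source bone rotation before writing it into the target track.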

Maud performs steps (1) and (2) using a “skeleton map”. In general, the map must be created manually for each pair of skeletons you plan to use for retargeting. Maud makes it relatively easy to create maps, but it’s not automatic except in the trivial case of equivalent skeletons.

The 3rd step is creating animation(s) in your target model and adding the retargeted bone tracks to those animations. If the source model and target model have equivalent skeletons, this becomes a simple cloning operation. You probably don’t need a library for that! If the skeletons aren’t equivalent, you’ll need skeleton maps, but once you have them, the creation and retargeting can be automated.
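For the simple case (equivalent skeletons), the cloning step might look like this sketch. `Track` here is a stand-in for a real engine track (e.g. jME’s BoneTrack), reduced to keyframe times; none of this is Maud’s actual API.

```java
import java.util.HashMap;
import java.util.Map;

// Stand-in for an engine bone track, reduced to its keyframe times.
class Track {
    final float[] times; // keyframe times in seconds

    Track(float[] times) {
        this.times = times.clone();
    }
}

class Retarget {
    // Clone every source track under its mapped bone name. Bones absent
    // from the map keep their original names (equivalent-skeleton case).
    static Map<String, Track> cloneTracks(Map<String, Track> sourceTracks,
                                          Map<String, String> boneMap) {
        Map<String, Track> result = new HashMap<>();
        for (Map.Entry<String, Track> e : sourceTracks.entrySet()) {
            String targetBone = boneMap.getOrDefault(e.getKey(), e.getKey());
            result.put(targetBone, new Track(e.getValue().times));
        }
        return result;
    }
}
```

With non-equivalent skeletons, each cloned track’s rotations would additionally be re-oriented per bone (step 2) before being added to the target animation.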

The final step is fixing up the retargeted animation(s) to address issues like feet sliding or not touching the ground, limbs poking into the torso, and so on. Maud includes tools for doing this fixup manually, and I’ll be writing more over the next few weeks. But maybe with a little care you can avoid such issues, too. Perhaps some experimentation is in order.


A quick status update:

  1. A new release (ALPHA+2) is up on GitHub, incorporating all changes to Maud during the month of August. Please use this release for testing.

  2. A new demo video is up on YouTube. It’s 15 minutes long and demonstrates animation retargeting for 2 cases: (a) equivalent skeletons and (b) using an existing skeleton map.

  3. I plan to create/upload another video showing how to create a skeleton map from scratch. Hopefully this weekend.

  4. For the rest of September I plan to focus on physics. Perhaps Maud can be useful for tuning ragdoll simulations.


Thank you for doing this, it’s super useful :slight_smile:


Here’s that other video I promised, walking through the process of constructing a skeleton map from scratch. 28 minutes long.


I just wish you had uploaded this video in HD instead of 360p, but other than that it’s OK. The commentary is professionally made and the tool is working fine.
The retargeting feature is going to save us a huge amount of time over animating each model separately.
I think that the community and the engine need great tools such as this, so thanks for all your efforts :slight_smile:


Thanks for the feedback.

I’ll attempt HD (or at least a resolution above 360p) the next time I record a video.


If you have a recent (last 3-4 years) Nvidia, AMD, or Intel graphics card, I believe there should be software that uses the hardware graphics processor to record video.