(January 2024) Monthly WIP Screenshot Thread

Happy New Year!

If you’ve got a JMonkeyEngine work in progress, share your screenshots (and/or captured video) to this topic.


With the basic gameplay features out of the way (optimized world generation, collision handling and synchronization), I’ve started tinkering with a custom implementation of behavior trees and pathfinding for the mobs:

Here’s a look at the pathfinding progress (in a test project):

So far everything is going well. At the moment the agent AI runs on a separate thread (and so does pathfinding). The algorithm used is A* running on a byte array for faster lookup of the values. I also plan to switch to HPA* if I decide I need a finer graph.
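For readers curious what “A* on a byte array” can look like, here is a minimal sketch under the same idea: the grid is a flat `byte[]` (0 = walkable, 1 = blocked) so each passability check is a single array read. All names are illustrative, not the poster’s actual implementation.

```java
import java.util.*;

// Minimal A* over a flat byte-array grid (illustrative sketch).
// Cell index = x + y * width; 0 = walkable, 1 = blocked.
class GridAStar {
    final byte[] cells;
    final int width, height;

    GridAStar(byte[] cells, int width, int height) {
        this.cells = cells;
        this.width = width;
        this.height = height;
    }

    boolean walkable(int x, int y) {
        return x >= 0 && y >= 0 && x < width && y < height
                && cells[x + y * width] == 0; // one array lookup per cell
    }

    /** Returns the path as cell indices from start to goal, or empty if unreachable. */
    List<Integer> findPath(int sx, int sy, int gx, int gy) {
        int start = sx + sy * width, goal = gx + gy * width;
        int[] cameFrom = new int[cells.length];
        int[] gScore = new int[cells.length];
        Arrays.fill(cameFrom, -1);
        Arrays.fill(gScore, Integer.MAX_VALUE);
        gScore[start] = 0;
        // Entries are {cellIndex, g + Manhattan heuristic}; lazy deletion of stale entries.
        PriorityQueue<int[]> open = new PriorityQueue<>(Comparator.comparingInt(a -> a[1]));
        open.add(new int[]{start, Math.abs(sx - gx) + Math.abs(sy - gy)});
        int[] dx = {1, -1, 0, 0}, dy = {0, 0, 1, -1};
        while (!open.isEmpty()) {
            int current = open.poll()[0];
            if (current == goal) {
                List<Integer> path = new ArrayList<>();
                for (int n = goal; n != -1; n = cameFrom[n]) path.add(0, n);
                return path;
            }
            int cx = current % width, cy = current / width;
            for (int i = 0; i < 4; i++) {
                int nx = cx + dx[i], ny = cy + dy[i];
                if (!walkable(nx, ny)) continue;
                int next = nx + ny * width;
                int g = gScore[current] + 1;
                if (g < gScore[next]) { // found a cheaper route to this cell
                    gScore[next] = g;
                    cameFrom[next] = current;
                    open.add(new int[]{next, g + Math.abs(nx - gx) + Math.abs(ny - gy)});
                }
            }
        }
        return Collections.emptyList();
    }
}
```

Because the Manhattan heuristic never overestimates on a 4-connected unit-cost grid, the first time the goal is polled the path is optimal.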

I’m especially happy about how the behavior trees turned out, because in my previous attempts to tackle AI I would always get stuck trying to get more refined behavior to work:

        List<BehaviorNode> children = Arrays.asList(
                new SequenceNode(Arrays.asList(
                        new LeafNode(new CheckForEnemyAction()),
                        new LeafNode(new AttackAction()),
                        new LeafNode(new KeepDistanceAction()))),
                new SelectorNode(Arrays.asList(
                        new LeafNode(new WalkAction()))));

        var rootNode = new ParallelNode(children);
        behaviorTree = new BehaviorTree(rootNode);
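For readers new to behavior trees, the composite nodes above typically tick their children like this (a hedged sketch with illustrative names, not the poster’s actual classes): a sequence fails fast on the first failing child, while a selector succeeds on the first succeeding child.

```java
import java.util.List;

// Minimal behavior-tree composites (illustrative sketch).
interface BTNode { boolean tick(); }

// Runs children in order; fails as soon as one child fails.
class Sequence implements BTNode {
    private final List<BTNode> children;
    Sequence(List<BTNode> children) { this.children = children; }
    public boolean tick() {
        for (BTNode c : children)
            if (!c.tick()) return false; // stop at the first failure
        return true;
    }
}

// Tries children in order; succeeds as soon as one child succeeds.
class Selector implements BTNode {
    private final List<BTNode> children;
    Selector(List<BTNode> children) { this.children = children; }
    public boolean tick() {
        for (BTNode c : children)
            if (c.tick()) return true; // stop at the first success
        return false;
    }
}
```

A parallel node, by contrast, would tick every child each frame and combine the results according to some policy (e.g. succeed when all succeed).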

Some mobs appear to go through walls because of the path smoothing I implemented, but that won’t be the case in practice, since the mobs slide along walls on collision. My biggest issue now is handling obstacles that aren’t grid-aligned (such as chests) when pathfinding. With bigger objects I could just mark the whole tile as impassable, but I’ll have to come up with some clever idea for smaller obstacles that don’t take up the majority of the tile.


Also, here is the first enemy in the game:

Planning to have its AI implemented by tomorrow evening!


and an option to have it as a pet ;p

cool work


Today I have played a bit with animal behaviour.
In this video I showcase animal behavior using my game editor, Envision3D.
I came across this video by @HawkesByte and thought I’d try to implement something similar. https://www.youtube.com/watch?v=DBf2OvuVy8Y

So far the behavior is very simple: each sheep has a hunger bar, and once a sheep gets hungry it walks to the nearest grass patch to eat some grass. When it’s full it roams the grasslands until it gets hungry again, at which point it repeats the process. Also, each grass patch has a limited life span.
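That hunger loop can be sketched as a small state machine (illustrative names and thresholds, not Envision3D’s actual code): hunger drains while the sheep roams, crossing a threshold sends it toward grass, and eating refills it.

```java
// Hunger-driven sheep behavior (illustrative sketch).
class SheepBrain {
    enum State { ROAMING, SEEKING_GRASS, EATING }

    float hunger = 0f;            // 0 = full, 1 = starving
    State state = State.ROAMING;

    static final float HUNGRY_AT = 0.6f;      // threshold to go look for grass
    static final float DRAIN_PER_SEC = 0.05f; // hunger gained per second
    static final float EAT_PER_SEC = 0.5f;    // hunger recovered per second while eating

    /** @param tpf time per frame in seconds; @param atGrass true once the sheep reached a patch */
    void update(float tpf, boolean atGrass) {
        switch (state) {
            case ROAMING:
                hunger += DRAIN_PER_SEC * tpf;
                if (hunger >= HUNGRY_AT) state = State.SEEKING_GRASS; // walk to nearest patch
                break;
            case SEEKING_GRASS:
                hunger += DRAIN_PER_SEC * tpf;
                if (atGrass) state = State.EATING;
                break;
            case EATING:
                hunger -= EAT_PER_SEC * tpf; // the grass patch would also lose life span here
                if (hunger <= 0f) { hunger = 0f; state = State.ROAMING; }
                break;
        }
    }
}
```

The grass patch’s life span could be decremented in the `EATING` branch, despawning the patch when it runs out.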


It is rather quiet in this forum at the start of 2024.
I had some free time the last couple of days and started integrating PointLight shadows into my editor.
Here is a screenshot of 3 pointlights in a room.


I’ve been updating my ship control screens from the previous flat UI to a push-button style UI (this is for a VR game, Starlight, so the buttons really will be pressed by the player).

Every part of that panel (including the panel itself) is a deep token, which freed me from building buttons in Blender (and worrying about UV mapping) and let me just draw the shapes I wanted. The more active elements (the bar displays, bar buttons and the buttons with circular progress bars) are all deep tokens with custom shaders providing the dynamic effects.

Although deep tokens are pretty fast to build, I did start to feel bad creating them at runtime (starting from the images), so I began generating them as part of the Gradle build process and saving them as j3os. I used a Gradle task to do this so I could still iterate quickly (i.e. no utility program to create them; instead I just change the png, run, and see the new meshes).

I’ve included an example of how I did this build time generation at DeepTokensTestBed


Started updating Depthris to add customizable controls. I am using a Listbox and listeners so that the keyboard/gamepad can be used to navigate it without the mouse.


Hi @bloodwalker ,
I think this is the most interesting GUI and joystick usage example I’ve seen in the last few years! Could you show me and the community, with a working code example, how you handled joystick input on the GUI, please?

  • Did you use jme3-lwjgl3?
  • Did you use a com.jme3.input.RawInputListener or a com.simsilica.lemur.input.InputMapper to detect joystick input and convert it into actions?
  • How do you change the focus on the GUI components via joystick?
  • How did you add the animation effects to the GUI?
  • How does key remapping work?

Thanks in advance


Note that joystick UI navigation is included in Lemur by default. Wherever the arrow keys take you, the joysticks will take you.


The last couple of days I have spent some time implementing “decals” in my editor.
First I implemented static decals; then, after playing with this new feature, I soon realized I would need dynamic decals as well.
Here is a video showing what I did and how to make use of the decal tool.
At the end of the video I show the dynamic decals, where I slapped a sticker onto a gun.


Thank you for the compliment. I used a custom navigation controller because of my custom GUI components and my UI design for the game (not the art, that was purchased :wink: ). In my case I needed different actions for different UI elements, and to run effects for each. I could have done it with Lemur’s default navigation, but I am still learning the library as I go.

I will share what I have done soon. I will clean up what I have and move it to a sample project.


I used com.jme3.input.RawInputListener since it’s what I know best. I will look into com.simsilica.lemur.input.InputMapper in the future.

In my case, I have a list of “focus-able” panels that may have listeners attached to them. Once I select the index of a panel, I run an effect on it that shows the focus. It’s similar to what Lemur does, but I needed separate controls, and I’m still learning Lemur, so I work with what I know (I just learned how to use the Listbox recently).

For the buttons, I just used effects with a custom style.
For the opening/close of the UI panels, I used Lemur’s Tween animations.

In my case I defined an enum that contains all the game’s actions, then I mapped its values to action listeners. This makes the actions independent of the raw inputs.
Finally, I map raw inputs to the enum values. This map is what is editable in the UI and I load/save it on the machine.
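The scheme described above can be sketched roughly like this (all names are illustrative, not the actual game code): listeners hang off the enum values, while a separate, editable map ties raw input codes to actions, so rebinding never touches the gameplay code.

```java
import java.util.*;

// Enum-based input remapping (illustrative sketch).
class InputRemapper {
    public enum GameAction { JUMP, FIRE, MENU_UP, MENU_DOWN }

    // action -> listeners (fixed in code)
    private final Map<GameAction, List<Runnable>> listeners = new EnumMap<>(GameAction.class);
    // raw input code (key/button id) -> action (editable in the UI, saved to disk)
    private final Map<Integer, GameAction> bindings = new HashMap<>();

    void addListener(GameAction action, Runnable l) {
        listeners.computeIfAbsent(action, a -> new ArrayList<>()).add(l);
    }

    /** Called by the options screen when the player rebinds a control. */
    void bind(int rawCode, GameAction action) {
        bindings.put(rawCode, action);
    }

    /** Called from the raw input listener with the raw key/button code. */
    void onRawInput(int rawCode) {
        GameAction action = bindings.get(rawCode);
        if (action == null) return; // unbound input, ignore
        for (Runnable l : listeners.getOrDefault(action, List.of())) l.run();
    }
}
```

Persisting the `bindings` map (e.g. as action-name/code pairs) is then all the load/save step needs to do.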

I hope this helped :slight_smile:


Just in case anyone wonders, Lemur’s InputMapper does similar in that it maps FunctionId objects instead of an enum. Input combinations are mapped to a FunctionId and application listeners are added to a FunctionId. The text in a FunctionId is meant to be human readable so that it could be used in a UI like the above.

FunctionId also has an optional parent group which could be useful for games that have a lot of mapped input… and is also useful for turning on/off entire groups of inputs. (For example, turning off UI navigation when in first-person mode or turning off movement while in a UI.)


It looks very good.

  • Are you using AWT components?
  • I imagine you are using lwjgl, right?

@ndebruyn, do you plan to release a version or publish the source code?

If I’m not mistaken, you created a game with this editor, I would like to try it (editor).

Hi @SwiftWolf , thanks for the kind words and for your interest in my project.

For the in-editor components I am using my own library, GalagoUI, which I have developed over the years, and I wrote some additional components for this editor. Then for all the selection dialogs, such as file selection and color picking, I am making use of Swing/AWT.

This editor is built on top of jME3, which runs on top of LWJGL.

Yes I do plan to release a version of this editor in the future. Hopefully this year. I have tried my best to make it as simple and user friendly as possible.
This editor is tightly coupled to my own envisionutils library, which will be bundled with it.
Also, the idea with this editor is to create a full game with it and pack or bundle it as an executable.

You are correct. I have done some game jams with it just to test the product and identify missing features. Here are some references to the games:


Thanks for the answers @bloodwalker. I am waiting to see some code examples to understand the main steps, and I’ll return the favor with some useful advice. Thanks for the info @pspeed, I will try to connect the dots with the example provided by @bloodwalker. I have seen on the forum that many have tried to find a way to use joysticks with a UI. I would like to dig deeper to see what common issues users are having.

Good job @ndebruyn! Did you implement the decals algorithm from scratch, or did you tweak some existing code? Does it also work with animated models (those with a Skeleton)?
Perhaps you may find the code example posted by @xuan , who translated the decal algorithm from three.js into JME, helpful. I found it very clear and interesting. :wink:

Thanks. Yes I have made use of that code by @xuan.
I think it might work with animated models.
Let me check.


It looks very nice and polished. Your editor is looking great!
I first thought it was being drawn in screen space because it updates in real time when you move it, but it seems I was wrong (I should do the same thing in my editor).

How did you solve the z-fighting? I simply moved the triangles a bit forward; I noticed some seams in your video, so I think you did the same. I noticed it starts to z-fight from a distance, so I added an option to adjust the offset, but I wonder if there’s a better way.
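For context, the “move the triangles a bit forward” approach means pushing each decal vertex out along its surface normal by a small epsilon so the decal renders in front of the target face. A minimal sketch using plain float arrays instead of jME mesh buffers (illustrative, not the actual editor code):

```java
// Offset decal vertices along their normals to avoid z-fighting (illustrative sketch).
class DecalOffset {
    /** positions and normals are packed xyz triples; positions are modified in place. */
    static void offsetAlongNormals(float[] positions, float[] normals, float epsilon) {
        for (int i = 0; i < positions.length; i += 3) {
            positions[i]     += normals[i]     * epsilon;
            positions[i + 1] += normals[i + 1] * epsilon;
            positions[i + 2] += normals[i + 2] * epsilon;
        }
    }
}
```

An alternative worth trying is a depth bias on the decal material via jME’s `RenderState.setPolyOffset(factor, units)` (on `Material.getAdditionalRenderState()`), which is applied in the depth buffer rather than geometrically and scales with polygon slope, so it tends to hold up better at a distance than a fixed epsilon.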

When you activate the dynamic option (or disable the static one), does the decal simply rotate together with the target (added to the same Node), or does it recalculate the projection? I think the latter would be inefficient during the game.