(December 2021) Monthly WIP Screenshot Thread

Starting this month off late with two things.

First is a semi-low-quality video from last weekend of the actual Mythruna engine progress:

YouTube always makes the videos darker and I haven’t gotten the Sony Vegas settings dialed in right. Also, Mythruna grass really messes with video encoders.

Next up, I just did my very first live stream where I created a block model for testing some things in Mythruna. So if you ever wondered what my voice sounds like… you can hear me blather on with one chat participant for the better part of two hours. (I’m trying to trim the beginning where I’m saying “testing” etc. but YouTube is making me wait… so if you catch it before that, just skip to 5:12.)

Spoiler, I made this:


I’m always impressed by the sense of scale when you share bits from Mythruna. It feels as big as a world.

Not sure if this is me, or stuff on your end, but the sound for your live stream seems really quiet. I usually can run with my audio settings at about 50%. I had to turn both Youtube and my speakers up to 100%, and it was still a bit on the quiet side.

Impressive :wink:
Do you think your engine might run on an Android device?
Would that be a viable option, or is there some hard limitation that would definitely prevent it?

Thanks. This was the driving force behind the engine redesign so it’s nice to get feedback that others ‘feel’ it properly. I just need to make sure I fill the world with enough interesting things that it doesn’t feel empty. But it’s already a good sign that I played the game for over an hour last night and you literally can’t do anything but walk around and take pictures.

Mmm… thanks for this feedback. I didn’t know this and will look into the issue. Watching the stream myself the other night after I was done, I also noticed that I speak at two distinct volumes: “rambling quiet” and “louder direct information” which I will try to temper in the future. (“Rambling quiet” also has the other semi-embarrassing trait of my slipping into whatever casual accent I choose to adopt at that moment.)

I’d like to. In my dreams, I get the client to run on the Oculus Quest 2, which is an Android device.

I think in theory it should be possible to strip down the client a little and get it to run. The client code is basically dumb and the server does all of the work. The far terrain was specifically designed to allow for lower quality… so while in the version you see in the video there are 3 million+ triangles devoted to the far terrain, I should be able to reduce that to 500k triangles and maybe it will still look good enough to work. I have no experience on Android and don’t know if the tricks that I use will perform extra badly or anything.

There is basically no way that the server side would run on an Android device, though. So no single player Mythruna games on Android.

Are you going to have a single official server that all players connect to, or are you going to release the server app so anyone can host it on their own VPS? If so, will there be a master server to list all available servers worldwide?

The server is built into the game. (Even the single player game is local client-server.) So anyone can create a multiplayer server. There will also be a stand-alone server like original Mythruna had with both a command line only mode and a Swing GUI mode.

There will be something. It remains to be seen what. The original plan from 10 years ago was that users could pay some small amount to have a “registered server” in the master server list. This would allow a certain amount of traceability and permanence. Plus registered servers could hit the Mythruna auth server to validate the clients connecting.

That was a long time ago, though, and the entire landscape has changed. I think most of that probably doesn’t make sense anymore… the tax/VAT complications alone make it infeasible. Today I’m focused on getting the game features to alpha and will let the rest grow organically. If I end up releasing to app stores like Steam then that will also probably guide the direction.


This weekend is a very productive one. I managed to build a simple jme native template using jni and bash.

The template supports:

  • Auto-generating C header files for Java native methods.
  • Compiling the project files with the project dependencies and assets.
  • Packaging automation for Java, and for C++ under (/natives/libs), (/natives/includes), and (/natives/main).
  • Jar assembling with the project dependencies and assets.
  • Letting the user change the project directories (without touching the real build script) using the variables.sh script in build/compile and build/assemble.
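
As an aside on the first bullet: the header generation can be reproduced in-process through the JDK compiler API, which is essentially what a `javac -h` call in a bash script boils down to (`javac -h` replaced the old standalone `javah` tool). This is a minimal sketch; the class and directory names are invented for the demo and are not the template’s actual layout.

```java
import javax.tools.ToolProvider;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;

/**
 * Sketch of JNI header auto-generation, done in-process via the
 * compiler API. NativeBridge and the directory layout are made up
 * for the demo, not taken from the actual template.
 */
public class GenHeaders {

    /** Writes a class with a native method, compiles it, and returns
     *  true if javac emitted the matching JNI header. */
    public static boolean generate(Path srcDir, Path includeDir) {
        try {
            Path source = srcDir.resolve("NativeBridge.java");
            // javac emits a C prototype for every `native` method it sees
            Files.writeString(source,
                    "public class NativeBridge {\n"
                  + "    public native int readGpio(int pin);\n"
                  + "}\n");
            // Equivalent of: javac -h <includeDir> -d <includeDir> NativeBridge.java
            int rc = ToolProvider.getSystemJavaCompiler().run(null, null, null,
                    "-h", includeDir.toString(),
                    "-d", includeDir.toString(),
                    source.toString());
            return rc == 0 && Files.exists(includeDir.resolve("NativeBridge.h"));
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```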

The main benefit of this template is IoT game applications using wiringPi on a Raspberry Pi together with jME. One could build a custom jME input implementation against the jME input interfaces to control games with Pi peripherals (GPIO pins). For example, MonkeyIoT is an ongoing project based on this template that will hopefully expose all the hardware peripherals of the RPi to jME game developers.

Another usage is building your jME game (jar) on the desktop normally using Gradle, then bundling that jar with the other dependencies and controlling the game from the native code and the GPIOs. That way you can re-use your game for an IoT project.
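
A custom input implementation along those lines can be sketched without any jME or wiringPi dependency. The listener below only mimics the rough shape of jME’s ActionListener callback, and the pin reader is a stand-in for a JNI digitalRead call; all names here are hypothetical, not part of the actual template.

```java
import java.util.function.IntPredicate;

/**
 * Sketch of turning a GPIO pin into press/release game events.
 * The Listener mimics the shape of jME's ActionListener; the
 * IntPredicate stands in for a wiringPi/JNI digitalRead call.
 */
public class GpioButton {

    public interface Listener {
        void onAction(String name, boolean pressed);
    }

    private final int pin;
    private final String mapping;
    private final IntPredicate readHigh; // pin -> true if the line is high
    private boolean lastState;

    public GpioButton(int pin, String mapping, IntPredicate readHigh) {
        this.pin = pin;
        this.mapping = mapping;
        this.readHigh = readHigh;
    }

    /** Poll once per frame; fires only on state changes (edge detection). */
    public void poll(Listener listener) {
        boolean state = readHigh.test(pin);
        if (state != lastState) {
            listener.onAction(mapping, state);
            lastState = state;
        }
    }
}
```

An AppState could own a few of these buttons and poll them in its update loop, forwarding the events into the game’s normal input handling.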

More useful cases to try:

  • A custom keyboard using some pushbuttons, multiplexers (which take several inputs and produce one binary output line), and of course PISO shift registers for the inputs.
  • A custom joystick using an MCP3008 ADC and a joystick module.
  • Controlling jME input using analog sensors such as gyroscopes.
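
For the MCP3008 joystick case, the decoding step is simple enough to sketch. The byte layout follows the MCP3008 datasheet (10-bit, single-ended read); the class and method names are made up for illustration.

```java
/**
 * Sketch of decoding one MCP3008 SPI transaction into a 10-bit
 * joystick axis value, then normalizing it to a -1..1 axis range.
 * Only the pure bit-twiddling is shown; the 3-byte SPI transfer
 * itself would go over wiringPi/JNI.
 */
public class Mcp3008 {

    /** Bytes to send for a single-ended read of the given channel (0-7). */
    public static byte[] request(int channel) {
        return new byte[] { 0x01, (byte) (0x80 | (channel << 4)), 0x00 };
    }

    /** The 10-bit result sits in the low 2 bits of rx[1] plus all of rx[2]. */
    public static int decode(byte[] rx) {
        return ((rx[1] & 0x03) << 8) | (rx[2] & 0xFF);
    }

    /** Map the raw 0..1023 reading to a -1..1 analog axis. */
    public static float toAxis(int raw) {
        return (raw / 1023f) * 2f - 1f;
    }
}
```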

The next step is trying jme with wiringPi directly using this template.

wiringPi by Gordon:

MonkeyIoT would be similar to Pi4J (a JNI project that wraps Gordon’s wiringPi) but will more likely be coded against the jME3 input interfaces and app states.

Also, a GUI app could auto-generate a native jME3 template and drive the bash scripts on limited hardware that couldn’t run IDEA or even NetBeans; Gradle isn’t good for natives anyway.


Recently spent some time making a few new models for my game with blender and substance painter. I still have a lot more work to do on the coding end before I really start focusing on modeling and level design as a top priority, but I still find it useful to dip my toes back into a bit of modeling every few months to get some practice and keep the workflow fresh in my memory.

Here are some new textures for the gatherable minerals in my game that I made in Substance Painter. More importantly, I set up (and finally understand) the layers and workflow in Substance Painter so that it will be easy to add new variations of the texture set for any new ore types I add in the future.

And here are some other models I’ve been working on as well. One is a chapel that’s supposed to be haunted; it will be where players learn the Necromancy skill in my game.

And the other is some type of royal elven tower that will be part of a new High Elf / Royal city themed map I’ve been imagining lately.

It will likely be a while until I get back to work and finish them, but for models that will be used in quests like the chapel, I at least like to get an untextured version in the game asap so I can get started working on coding the quest.

I also like to see the untextured version of my models in-game prior to starting the texturing process, so I can make sure the model is clean and 100% ready to be textured. Nothing is worse than having to throw away and redo the textures because of some minor artifact that goes unnoticed or doesn’t show up until it’s in the engine with full lighting and shadows.


After refining the volumetric lights, this is what I get for an airport scene with thick fog.
This is exactly what I wanted to achieve :smiley:

Edit: running on version 3.6.0 (latest)


Really really nice.

The better the lighting gets the more conspicuously absent the shadows are:

Is that a limitation of your approaches? Or is it something just turned off while you work on volumetric lights?

…even an oval drop-shadow splat can be better than nothing in these cases.

But I really hate to nitpick a beautiful scene like that.


You’re certainly right, some shadows would be nice. The truth is that I haven’t implemented shadows for spot lights yet. It should be quite straightforward, if only the day had more than 24 hours :wink:

I need to focus on some aircraft systems next (flight guidance and autopilot). Maybe after that, but the higher priority would be on rain, snow and thunderstorms.


In theory, you could try dropping in the DropShadowFilter as a stop gap. It’s often better than nothing and if you already have a FilterPostProcessor setup then it should be just a drop-in. In theory.

Just a little something to give the models some presence in the world until they have proper shadows.

Edit: and if you are curious “how does it work?”, it’s basically rendering a bounding-box-sized shadow “egg” some distance below the object. So it will only really show up when the objects are on or near the ground… which for me is when the lack of shadow is most apparent.

It’s what provides the shadows in this picture:

…which would look decidedly more like a high-school tech demo if not for the shadows.
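
For the curious, the bounding-box “egg” idea from the edit above can be illustrated with a bit of geometry. This is a back-of-the-envelope sketch of the concept, not DropShadowFilter’s actual internals, and all names are invented.

```java
/**
 * Illustration of the "shadow egg" idea: an ellipsoid sized from
 * the object's axis-aligned bounding box and dropped just below it.
 * Vectors are {x, y, z}; drop is the extra distance below the box,
 * flatten squashes the egg vertically into a splat.
 */
public class ShadowEgg {

    /** Egg center: the box center pushed down past its bottom face. */
    public static float[] eggCenter(float[] boxCenter, float[] boxExtent, float drop) {
        return new float[] {
            boxCenter[0],
            boxCenter[1] - boxExtent[1] - drop,
            boxCenter[2]
        };
    }

    /** Egg radii: same footprint as the box, squashed vertically. */
    public static float[] eggRadii(float[] boxExtent, float flatten) {
        return new float[] { boxExtent[0], boxExtent[1] * flatten, boxExtent[2] };
    }
}
```

Because the egg sits a fixed distance under the box, it naturally intersects the ground only when the object is on or near it, which matches the behavior described above.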


It makes sense. Thank you, I will give it a try.

@Apollo I remember you got bad FPS previously using volumetric lights. Could you tell us what your FPS is now and what actions you took to improve it (i.e., where the biggest slowdowns were)?


Sure. In my first screenshot I had about 30 FPS. The technique which I used was quite heavy because I was doing the numerical integration in full resolution on each light during the lighting pass (I use a deferred rendering system, where all scene lighting is done in one single pass).
I improved the performance to about 120 FPS. The overall performance impact of all volumetric lights is about 3% only. Now I perform the integration in a TextureArray, i.e. multiple very low resolution texture slices along the view direction. This gave me the first 50% of improvement. In addition, I found and corrected a bug in the geometry batching algorithm for deferred light sources, which gave me the last 50%.
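
To make those two “50%” steps concrete, here is a little frame-time arithmetic, under my own assumption that each optimization saved an equal share of frame *time* (time, not FPS, being the additive quantity):

```java
/**
 * Worked numbers for the 30 FPS -> 120 FPS speedup, assuming the
 * two optimizations each recovered half of the total frame-time
 * saving. That even split is an interpretation, not a measurement.
 */
public class FrameMath {

    public static double frameMs(double fps) {
        return 1000.0 / fps;
    }

    /** FPS after recovering `fraction` of the total frame-time saving. */
    public static double fpsAfter(double beforeFps, double afterFps, double fraction) {
        double saved = frameMs(beforeFps) - frameMs(afterFps); // 33.3 - 8.3 = 25 ms
        return 1000.0 / (frameMs(beforeFps) - saved * fraction);
    }
}
```

Under that reading, the texture-array integration alone would land around 48 FPS (half of the 25 ms saving applied), and the batching fix carries it the rest of the way to 120.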


I’ve been practicing with Blender a bit, and decided to render and upload something relaxing to look at.
The original is a 15 min animation in Eevee. I tried to imitate the results as best I could in jme and ended up with this:

I’d be happy to improve the visuals if anyone has ideas on what to do. Currently it’s using DirectionalLightShadowRenderer, a BloomFilter and some FXAA.


Hi everyone,
here is a collection of demos of particle effects and shaders that I am experimenting with. I tried 3 different libraries for particle effects; I would like to discuss the pros and cons with you in a separate post. I want to thank @grizeldi for sharing the hologram shader, it helped me understand many things. To practice with jME3 engine Materials, I ported the sci-fi hexagons shader from the Shadertoy site. I hope you like it and that it inspires you.


I’m glad you found my work useful :slight_smile:

Is the first shader showcased really running at like 4 FPS, or is it a YouTube issue? It looks cool, but due to the low framerate I can’t really tell what’s going on. Apart from that, the other shaders look pretty cool.


Quick and dirty progress video of hooking the MOSS physics collision shapes up to JME animated skeletons… and stuff. I’m already set up to do live streams, so I used that to record, saving the time of adding text to the video.

So those of you who haven’t been to one of my live streams will get to hear my voice for the first time… such as it is.

I’m about two days behind where I wanted to be. We’ll see if I finish all that I want to accomplish this week (hint: no, I definitely won’t.)