I took the entire week between Christmas and New Year's to work on some critical remaining features for the MOSS (Mythruna Open Source Software) libraries. I had foolishly hoped that this particular feature would go more quickly and that I could have spent the week doing cool stuff.
…such was not the case.
After a solid week of heads-down coding, I finally had something working earlier this evening, only to discover a critical flaw that almost made me throw it all away. Fortunately, a clearer head found a workaround, and now I can say that it works.
The short version is that this is similar to the work I did with Jaime and Bullet as far as “reactive animation” goes. The tricky parts are that a) this works with my custom zoned physics engine, and b) it’s networked.
A driver object on the RigidBody (the thing that controls the AI, etc.) decides which animations to play and how fast, based on velocity, collisions, and so on, then updates the collision-shape hierarchy with the animation playback. This playback information is also sent across to the clients using SimEthereal, so the client can sync the real mesh/skin animation.
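Roughly, the idea looks something like this sketch (MobDriver, AnimState, and decide() are invented names for illustration, not the actual MOSS API):

```java
import com.jme3.math.Vector3f;

// Purely illustrative sketch; none of these names are the real MOSS API.
public class MobDriver {

    /** Immutable playback info shared by the physics and the clients. */
    public static final class AnimState {
        public final String animation;
        public final double startTime;
        public final double speed;

        public AnimState(String animation, double startTime, double speed) {
            this.animation = animation;
            this.startTime = startTime;
            this.speed = speed;
        }
    }

    /**
     * Decide what to play from physics state. The same AnimState would
     * drive the collision-shape hierarchy server-side and be replicated
     * to clients (e.g. via SimEthereal) to sync the visible animation.
     */
    public AnimState decide(Vector3f linearVelocity, double gameTime) {
        double speed = linearVelocity.length();
        String animation = speed > 0.1 ? "Walk" : "Idle";
        return new AnimState(animation, gameTime, speed);
    }
}
```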
Why is this important? Well, it means that the animation frames are synced and tweened just like position and orientation… and it’s all synced together, so they will always match.
Simple in principle. Hard to make it a generic-ish library. In retrospect, I probably could have hacked this into Mythruna in a couple of days… 80% of the time was spent working out the design for the libraries.
I initially planned to post these in the July thread but kept putting it off until now. Some of the long-time members might remember me working on my Hostile Sector project 10 (!) years ago.
A couple of years ago, I found myself in a situation with very little time to work on spare-time projects, but more time to think about them. I thought I had a mostly finished game ready to be polished up, and I started using what little time I had to refactor it and transform it into a location-based Android game instead of the desktop experience it used to be. I gravely underestimated the amount of work required and the time it would take. But I have a different mindset now than when I worked on this the first time. No pressure. If I don’t find it fun to work on, I won’t. Eventually it’ll be in a state where others may play it and find it fun. Or perhaps it won’t.
If it does, I’ll make more noise, but for now there’s no “call to action” or anything. I just want to show something.
To get a link from Imgur that I can share in JME posts, I usually have to right-click the image/GIF, choose “open video/image in new tab”, and then copy/paste the URL that ends with .png or .mp4.
I wanted to try my hand at an FPS game, having never made one. I wanted something simple, so I picked duplicating Wolfenstein 3D from the 90s in JME 3.5.
I started making one by going through the JME examples and the JME cookbook book, even though some of those examples have holes in their implementations: they aren’t real-world examples, so they can’t be applied directly to real game environments.
But it is working out, as each example is a very simple framework for what a feature can be used for…
I also created a simple map-maker application to edit and create new maps. That is working out. I did look at Nifty, but with no sprite-sheet support it broke down pretty quickly, just like stock JME. In JME I ended up writing my own shader to support sprite sheets, but I didn’t want to take the time to find out whether I could get Nifty to work with sprite sheets, since basic Nifty doesn’t support them. Or at least I couldn’t locate any support for sprite sheets.
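The core of sprite-sheet support is just UV math, something along these lines (an illustrative sketch, not the actual shader code):

```java
// Generic sprite-sheet math: map a cell index to a UV offset/scale
// for an atlas laid out as cols x rows cells.
static float[] cellUv(int index, int cols, int rows) {
    float scaleU = 1f / cols;
    float scaleV = 1f / rows;
    float offsetU = (index % cols) * scaleU;  // column within the row
    float offsetV = (index / cols) * scaleV;  // integer division = row
    return new float[] {offsetU, offsetV, scaleU, scaleV};
}
```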
I ended up with my own simple GUI system, but I had already built one 6 months ago for the real game I’m making.
It is turning out very well. I have simple AI routines in the game: the enemies move around, react to seeing the player and to being shot at, and shoot back. I’m still missing different levels of reaction, where better enemies react more sharply than lesser ones.
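The reaction logic amounts to a tiny state machine, something like this sketch (EnemyState and EnemyBrain are illustrative names, not the actual code):

```java
// Illustrative sketch of the kind of reaction logic described above.
enum EnemyState { PATROL, CHASE, ATTACK }

class EnemyBrain {
    EnemyState state = EnemyState.PATROL;

    void update(boolean seesPlayer, boolean wasShot, float distanceToPlayer) {
        if (wasShot || seesPlayer) {
            // better enemies could react faster or at longer range here
            state = (distanceToPlayer < 10f) ? EnemyState.ATTACK
                                             : EnemyState.CHASE;
        } else {
            state = EnemyState.PATROL; // lost the player, resume patrol
        }
    }
}
```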
It is a 3D game using Quads and textures for everything.
I’m going to look into it myself some more first. It took me a few hours to rule out my own code, and even then I’m only 99% sure.
But if you are curious: in my case, I’m stepping through time with setTime() and have set the control’s global speed to 0 so that it never advances on its own. Mostly it works, but every once in a while the armature goes all screwy (see the video at about the 50-second mark, 3… 2… 1…: https://youtu.be/We4psCUck2E?t=50).
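For anyone trying to reproduce the setup, it boils down to something like this (the “Walk” clip name and the wrapper methods are just for illustration):

```java
import com.jme3.anim.AnimComposer;
import com.jme3.scene.Spatial;

// Freeze the composer so it never advances on its own,
// then drive it manually with setTime().
void startStepping(Spatial model) {
    AnimComposer composer = model.getControl(AnimComposer.class);
    composer.setCurrentAction("Walk"); // clip name is just an example
    composer.setGlobalSpeed(0f);       // stop automatic advancement
}

// called once per pump with the time to display:
void step(Spatial model, double animTimeSeconds) {
    model.getControl(AnimComposer.class).setTime(animTimeSeconds);
}
```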
I’ve gone as far as logging all of the times that I’m pumping, action changes, etc… and nothing strange coincides with the glitch. I’ve also logged the locations of attachments, and they don’t seem to go screwy either, but I only did limited testing with that while trying to find a trigger to capture more state.
My suspicion is that some bit of math gets a time really close to the end/beginning of a tween, ends up with a tween-time delta at or close to zero, and the transform interpolations don’t like it.
It’s outdated in many areas; JME has changed. The particle system is broken in the book, and the animation examples call many deprecated methods, but I was able to figure out how to adapt them to current JME.
I have been working for years now to make bad weather look good, at least virtually. I decided that the latest improvements are worth a new video, which I proudly present to you. Unfortunately the video lost a bit of quality during the YouTube conversion.
What you can see is an instrument approach in fog, with only 300 meters of visibility. This reminds me of a few bad-visibility approaches that I have flown in reality. The video starts at an altitude of 500 feet (displayed by the green numbers just below the aircraft symbol in the attitude indicator). The aircraft strobe lights are flashing outside, and the landing lights brighten the fog ahead of the aircraft. The autopilot is engaged until we see the approach lights; you can hear the beep as the autopilot is disengaged. From that point on, I keep the wings level and hold a constant rate of descent with very small control inputs until shortly before touchdown.
There is still some work to do on the aircraft systems, but as far as I know there is no other flight simulator with this much lighting.
Tonight I wrote a demo to show how the Heart library can be used to create efficient custom meshes by translating and merging simple meshes.
The checkerboards in the screenshot look identical. Both were generated procedurally (no Blender). However, the one on the left (made using CenterQuad) uses 100 geometries and 400 vertices, while the one on the right (made by merging 100 CenterQuads) consists of a single geometry with only 238 vertices.
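In outline, the demo does something like the following (a rough sketch; the exact MyMesh.translate()/merge() signatures shown here are assumptions, not verbatim Heart API):

```java
import com.jme3.math.Vector3f;
import com.jme3.scene.Geometry;
import com.jme3.scene.Mesh;
import jme3utilities.MyMesh;
import jme3utilities.mesh.CenterQuad;

// Build a 10x10 board as a single Mesh: translate unit quads into
// place, then merge them pairwise.
static Geometry buildBoard() {
    Mesh board = null;
    for (int x = 0; x < 10; ++x) {
        for (int y = 0; y < 10; ++y) {
            Mesh square = new CenterQuad(1f, 1f);
            MyMesh.translate(square, new Vector3f(x, y, 0f));
            board = (board == null) ? square : MyMesh.merge(board, square);
        }
    }
    return new Geometry("checkerboard", board);
}
```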
Does it work with complex meshes as well?
Is this similar to “remove duplicate vertices” in Blender? If so, what is the default distance threshold when checking for a duplicate vertex? And is threshold configurable?
The utility methods work with very complex meshes. However, JME meshes were designed for display, not editing or analysis. As a result, JME’s definition of a valid mesh is very flexible. That flexibility makes it very difficult to write methods that handle ANY valid mesh.
The merge() function is designed to work with:
both list-mode meshes and indexed ones (including byte and short indices)
all 3 kinds of primitives (triangles, lines, and points)
strip-, fan-, and loop-mode meshes
bone-animated, morphing, and non-animated meshes
all vertex-buffer types, including normals, texture coordinates, colors, tangents, and binormals
buffers containing all data types (float, double, int, short, byte, long)
The merge() function does not support:
hybrid-mode meshes
levels-of-detail (LoD)
64-bit vertex indices
merging meshes with different primitives (for example, triangles with lines)
merging meshes with different vertex-buffer types (for example, normals with no normals)
The translate() function has similar limitations. In addition, translate() requires that the position buffer be a FloatBuffer.
In some cases, the results of a merge won’t be useful. For instance, if you merge 2 animated meshes, it’s likely the meshes use the same morph-target/bone indices for different purposes, in which case the result will be a mess.
Deduplication of vertices is not included in the merge() function. In the demo, dedup is performed by the addIndices() function, which is based on exact matches. I haven’t developed a threshold-based dedup function yet.
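To illustrate what exact-match dedup means, here is a generic sketch (not Heart’s actual implementation): vertices that match exactly collapse to a single index, with no distance threshold.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

// Exact-match dedup: bit-for-bit identical vertices map to one index.
static int[] dedupExact(float[][] vertices) {
    Map<String, Integer> seen = new HashMap<>();
    int[] indices = new int[vertices.length];
    for (int i = 0; i < vertices.length; i++) {
        String key = Arrays.toString(vertices[i]); // exact match only
        indices[i] = seen.computeIfAbsent(key, k -> seen.size());
    }
    return indices;
}
```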
After I posted the screenshot, it occurred to me that the demo wasn’t very convincing. For the geometries shown, it would be more efficient to apply the checkerboard pattern to a single quad using a texture. So yesterday I modified the demo to apply different Z coordinates to the squares based on their colors.
Minie already uses MyMesh.addIndices() to simplify debug meshes containing up to 6000 vertices. The impact depends on the topology and debug options, but a 3x-to-6x reduction in vertices seems possible.
Here’s a screenshot showing keyboard localization in JME:
Notice that many of the actions documented in the help node (the black box with rounded corners) are bound to the (English) names of Greek letters. That’s because the screenshot is from a system with a Greek keyboard!
Here’s the same application on a system with a German QWERTZ keyboard:
The help node indicates that pressing the “Y” key will lower the camera. On a QWERTY keyboard, that key would be labelled “Z”.
The help nodes in these screenshots were generated using version 0.9.8 of my Acorus user-interface library. For keyboard localization to work as shown, you need to build your app with the “jme3-lwjgl3” library, not the old “jme3-lwjgl” one.
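For example, with Gradle (a minimal sketch; the version shown is simply a recent release, adjust as needed):

```groovy
dependencies {
    implementation 'org.jmonkeyengine:jme3-core:3.5.2-stable'
    // the LWJGL 3 back-end, needed for keyboard localization:
    implementation 'org.jmonkeyengine:jme3-lwjgl3:3.5.2-stable'
}
```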