(July 2016) Monthly WIP screenshot thread

Nothing to be sorry about :chimpanzee_amused:

Well, for now I just prefer to use my own implementation of the system I described; it’s too ugly to share and sometimes a pain to maintain, but it works for me. I replaced the animation controller with my own thing, I use XML to store the state machine and event definitions, I use my own slightly defective event bus implementation, and I use a constantly edited List for events instead of a plain read-only array and an integer counter. All of this should be changed. Hopefully. Some day.

I am not sure whether @prog and I are talking about the same topic regarding UE4, but this is what I know about blending animations in UE4 using a variable; it is a 9-10 minute lesson:

The explanation of the blend space at the beginning is brilliant…


Adding light level stuff, using the Doom way to do it, i.e.:

  1. every texture contains indexes.
  2. each index is combined with the ambient light level to pick a gray pixel in a lookup texture (range: 0-255).
  3. this gray value is actually an index into another picture with 14 lines and 256 columns. The gray value selects the column; the line is decided by something else: whether you take damage, pick up an item, wear a suit, etc.

Advantage of this method: you don’t multiply colors, you only do lookups.
Disadvantage… well, a lot for me. For example: you can’t interpolate indexes. For this reason I had to use NearestNoMipMaps and Nearest filtering. I have several ideas to solve this problem; I’ll work on that later.
You can also notice that a lot of texture bugs were fixed (mostly stretched textures).
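The two-table lookup described above can be sketched in plain Java. The tables here are tiny stand-ins I made up for illustration; in the real engine they would come from the COLORMAP and PLAYPAL lumps of the WAD:

```java
// Sketch of the two-step Doom-style lookup described above.
// All tables here are tiny stand-ins; in the real engine they come
// from the COLORMAP and PLAYPAL lumps of the WAD.
public class DoomLookup {

    // colormap[lightRow][texelIndex] -> remapped palette index
    static final int[][] COLORMAP = {
        {0, 1, 2, 3},   // brightest row
        {0, 0, 1, 2},   // darker row
    };

    // playpal[paletteLine][paletteIndex] -> packed RGB.
    // Line 0 = normal; other lines = damage / item pickup / suit tints.
    static final int[][] PLAYPAL = {
        {0x000000, 0x555555, 0xAAAAAA, 0xFFFFFF}, // normal palette
        {0x200000, 0x752020, 0xCA5555, 0xFF8080}, // "taking damage" tint
    };

    static int shade(int texelIndex, int lightRow, int paletteLine) {
        int remapped = COLORMAP[lightRow][texelIndex]; // step 2: light lookup
        return PLAYPAL[paletteLine][remapped];         // step 3: palette lookup
    }

    public static void main(String[] args) {
        // Texel index 3, darker light row, normal palette:
        System.out.println(Integer.toHexString(shade(3, 1, 0))); // aaaaaa
    }
}
```

Note that both steps are pure array indexing, which is exactly why there is no color multiplication, and also why interpolation between indexes is meaningless.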

About the ground: it doesn’t use texcoords. This comes from Doom: you can’t rotate or shift the ground texture. The big advantage for me: I don’t have to generate texcoords. I just wrote a shader that uses the pixel’s world position, and voilà.
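A CPU-side illustration of that idea (the shader does the same per fragment): derive the UVs from the world X/Z of the pixel, so the texture tiles automatically. Doom’s floor textures (“flats”) are 64x64, hence the constant:

```java
// Illustrative CPU-side version of the "ground shader" idea above:
// UVs are derived from world position instead of stored texcoords.
// Doom flats are 64x64, so the texture repeats every 64 world units.
public class FlatUv {
    static final float FLAT_SIZE = 64f;

    // Map a world-space (x, z) to a [0,1) texture coordinate pair.
    // The double modulo keeps negative world coordinates positive.
    static float[] uvFromWorld(float worldX, float worldZ) {
        float u = ((worldX % FLAT_SIZE) + FLAT_SIZE) % FLAT_SIZE / FLAT_SIZE;
        float v = ((worldZ % FLAT_SIZE) + FLAT_SIZE) % FLAT_SIZE / FLAT_SIZE;
        return new float[] {u, v};
    }

    public static void main(String[] args) {
        float[] uv = uvFromWorld(96f, -16f); // 96 % 64 = 32 -> u = 0.5
        System.out.println(uv[0] + ", " + uv[1]);
    }
}
```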

If the colors you see in the video look strange, there are two reasons for that. First, the described algorithm didn’t work as expected and I had to fix it with duct tape. Second, the video recorder darkened the result.

I also started to look into the music. It seems to be a MIDI-like format, so maybe there is something I can do here. I’ll also try to add support for sprites (changing them according to the player position and the enemy rotation, you know). If I can do all of that plus two or three other things, I’ll consider it a good enough pre-Doom engine. I don’t plan to go all the way to a complete playable Doom game, just to have something under a BSD license.


Have you thought about doing a two step algorithm?
First extract the texture and convert it to a normal jME texture, then use this instead?
It would allow you to do all kind of stuff with colors and do interpolation.

Yes, I said this somewhere (not on this forum, it seems): I have three solutions.

  1. stay as is. There are ugly pixels and aliasing => it was like that in Doom, afaik.
  2. do the calculation (palette etc.), render it to a texture, then use the result in-game. That’s what you said.
  3. throw away the “light level” palette, pass the light level (the int between 0-255) to the fragment/vertex shader, render the texture at full brightness (like you said, but only for full brightness), and multiply the texture color by the light level divided by 255 (or, actually: 1 minus the light level divided by 255, but whatever).

Option 3) is possible because the reason the light level exists in the first place (limiting the number of different colors on screen) no longer applies.
And I’ll likely end up with a “let the user choose between 1), 2) and 3)” system.
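Option 3) boils down to one multiply per channel. A minimal sketch (which of the two conventions applies, level/255 or 1 minus level/255, depends on your light-level encoding, as noted above; this uses level/255):

```java
// Sketch of option 3): skip the "light level" palette and scale
// full-brightness texture colors by the sector light level (0-255).
public class LightScale {

    // Scale a packed 0xRRGGBB color by lightLevel / 255.
    static int applyLight(int rgb, int lightLevel) {
        float f = lightLevel / 255f;
        int r = (int) (((rgb >> 16) & 0xFF) * f);
        int g = (int) (((rgb >> 8) & 0xFF) * f);
        int b = (int) ((rgb & 0xFF) * f);
        return (r << 16) | (g << 8) | b;
    }

    public static void main(String[] args) {
        System.out.println(Integer.toHexString(applyLight(0xFFFFFF, 255))); // ffffff
        System.out.println(Integer.toHexString(applyLight(0xFFFFFF, 0)));   // 0
    }
}
```

In the shader this would be a single `texColor * f` instead of two texture lookups, which is why interpolation and mipmapping start working again.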
But thanks for the idea.

Hi, after a busy May/June, here is a small release for SkyHussars.

Let me introduce SkyHussars R7!

I upgraded to 3.1.0-beta1, and I was not able to work out all the issues yet. However, what I have is pretty good so far.
The sound was a bit tricky to fix; I still don’t know whether this is a bug or not:

The shadows have some issues too: I get jaggies, and apparently they do not always appear in the right place, but this is acceptable for now. Also, the ComposeFilter did not seem to work after the update. These issues will be left for the next release to clear up.

But here is what is new:

  • AI is on the logic thread now. The graphics thread is more independent of updates, and multi-threaded performance has increased.
  • This enabled stress testing the game with 250 planes.
  • Wing configuration moved to a configuration file, for a more moddable architecture.
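The logic/render thread split in the first bullet can be sketched with a double-buffered snapshot: the logic thread computes and publishes immutable state, and the render thread only reads the latest snapshot. All names here are hypothetical, not SkyHussars’ actual code:

```java
import java.util.concurrent.atomic.AtomicReference;

// Minimal sketch of an AI-on-logic-thread split. Names are made up
// for illustration, not taken from SkyHussars.
public class LogicThreadSketch {

    // Immutable snapshot the render thread is allowed to read.
    record PlaneState(float x, float y, float z) {}

    static final AtomicReference<PlaneState> latest =
            new AtomicReference<>(new PlaneState(0, 0, 0));

    // Runs on the logic thread: compute the next state, publish it.
    static void logicTick(float dt) {
        PlaneState s = latest.get();
        latest.set(new PlaneState(s.x() + 10 * dt, s.y(), s.z()));
    }

    public static void main(String[] args) throws InterruptedException {
        Thread logic = new Thread(() -> {
            for (int i = 0; i < 5; i++) logicTick(0.1f);
        });
        logic.start();
        logic.join();
        // The render thread just reads whatever was last published;
        // it never blocks on the logic thread.
        System.out.println(latest.get().x());
    }
}
```

The point of the design is that the graphics thread never waits on AI computation; it renders the most recent state it can see.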

The framerate in the video is terrible, but without recording, my PC is able to churn out 30 fps with 250 high-poly planes in the sky without any kind of LOD. I did not measure the triangle count, because that hotkey is disabled at the moment; nonetheless, the performance is nice.

All thanks to the jMonkey team; you people are great!


This is the game on which I’m working in my spare time.


Me again, sorry. I promise I won’t post more this month.
I added things (things = actors in Doom), started to implement a sky, fixed some bugs on textures, started to add alpha…

Slowly I am reaching what I want. I’ll also need to optimize all of this (I would like to talk about a “soft” batching; to make a long story short: if context switching is what is expensive in OpenGL, is it possible to sort geometry rendering so that geometries with the same material render one after the other (so no context switching), without creating a new geometry, copying data, etc.?)
(I would also ask: is there a way to give a value to an object that will not prevent it from being batched? For example the light level: it’s just an int. I can give it to every vertex in the mesh and pass it along to the fragment shader, but that seems inefficient (copying the same value everywhere…). I can give it to the material, but then two sectors with different light levels won’t be batched.)
(I should open a topic for that.)


post more post more!

JME already sorts by material.


But when you batch you only have one object… so where would the light level live if not in the mesh?

Any plans to make it compatible with user created WADs (i.e. Brutal Doom?)

Working on player customization, Android.


I’ll post a video soon :wink:


@pspeed :
Ok, but if jME already sorts, why is rendering 200 geometries with the same material slower (a lot slower) than rendering a batch node of those same 200 geometries? At least it was true last time I checked; maybe it isn’t anymore.

But when you batch you only have one object… so where would the light level live if not in the mesh?

Well, I agree with that, but… why not store it not in the material but in a plain variable (so it never triggers a recompilation of the shader, and the current context can stay as it is)?
I think that telling the graphics card “use this single variable in the shader” costs less than “use this array containing the same value everywhere, do linear interpolation on identical values, etc.”
Something like mesh.setSingleFloat(blabla); (like we have setBuffer).
Or even define some “user” buffers. The user would define constants like this:
const int the_index_used_for_light_level = 0;
const int the_foo_used_for_bar = 1;

then he binds his buffer to the light level in the first case, to the foo in the second case, etc.
It seems I am trying to re-create what a material already does, and that’s not entirely wrong, but it’s because material swapping costs so much and it’s so easy to kill the framerate just by having a lot of geometries in the scene. I know you shouldn’t have a lot of geometries, but that’s where the “flow” of the engine takes you.
And people end up with texture atlases, playing with texcoords, and using buffers and textures for what they are not (for example, I am using a texture as a palette in my code). People try to inject data by means other than the material because its main purpose (giving data to the shader) costs so much (and part of that comes from conditional compilation: because the value of a boolean can entirely change the shader code, you can’t assume that two materials with even a single difference between them can be swapped just by changing that value).
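For what it’s worth, the usual batching-friendly answer is the one hinted at above: bake the per-sector value into a per-vertex attribute, so it survives batching without touching the material. A plain-Java sketch of building such a buffer (in jME the resulting float[] would be uploaded with mesh.setBuffer(...), for example into a spare TexCoord slot; that detail is an assumption about how one would wire it up, not code from this project):

```java
// Sketch: bake a per-sector light level into a per-vertex attribute
// so sectors with different light levels can still share one batched
// mesh and one material.
public class LightBuffer {

    // One float per vertex, repeated for every vertex of the sector.
    static float[] fillLightLevels(int vertexCount, int lightLevel) {
        float[] buf = new float[vertexCount];
        java.util.Arrays.fill(buf, lightLevel / 255f);
        return buf;
    }

    // Concatenate per-sector buffers in the same order the vertices
    // are appended to the batched mesh.
    static float[] concat(float[] a, float[] b) {
        float[] out = new float[a.length + b.length];
        System.arraycopy(a, 0, out, 0, a.length);
        System.arraycopy(b, 0, out, a.length, b.length);
        return out;
    }

    public static void main(String[] args) {
        float[] dark = fillLightLevels(3, 0);     // sector A: 3 verts
        float[] bright = fillLightLevels(3, 255); // sector B: 3 verts
        float[] batched = concat(dark, bright);
        System.out.println(batched.length); // 6
    }
}
```

Yes, the value is duplicated per vertex, but a few redundant floats in a vertex buffer are usually far cheaper than breaking the batch.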
But I talk too much; once again, this would fit better in a dedicated thread.
And maybe it’s already possible to do this; I am not a jME professional (even though I have used it regularly for several years).

Any plans to make it compatible with user created WADs (i.e. Brutal Doom?)

I don’t plan to re-create Doom, not even the vanilla game. I am only trying to create a basic library that would be a good “starting point” for anyone willing to recreate it. If I can manage animations, fix every texture bug, and give access to all the data in a usable and handy way… well, that would be great and my “job” here would be done.
In particular, I don’t plan to re-create Doom’s A.I. As it doesn’t exist in the WAD, I would need to redo everything from scratch, with a lot of reverse engineering, and… well, nope.
I already plan to implement “portals” (not at all like in the game of that name) to only display sectors that are actually in view. It will be less efficient than the “subsectors + BSP tree” structure that exists in the WAD, but still better than displaying everything all the time.
And I may try to implement the DECORATE language used by ZDoom and Brutal Doom. I have already implemented several languages (with the byaccj library, a bison/yacc for Java).
But recreating Doom is not in my plans, and never has been.

(and if someone wants to know: work in progress, fixing bugs, clarifying code, etc. :slight_smile: )

200 draw calls versus 1 draw call. Draw calls are expensive… way waaaaaaaaaaaaaay way (way) more expensive than a context switch.

It sounds like maybe you don’t understand what batching is doing.


As I understand it: create one big mesh (one for each different material), compute the absolute position of every vertex in every geometry to batch (each geometry sharing that material, of course), and store the result in the big mesh.
At rendering: just render each big mesh with its material.

Right or wrong?
Isn’t a draw call something like this: “move the camera to blablabla, with bliblibli rotation. Set up the context. Render the vertices already stored on the GPU”, rinse and repeat?
In particular, the render part (the last sentence) is only one command (so a very small order sent from the CPU to the GPU), as it only says “draw the vertices corresponding to this id”, the id being obtained when the vertices were uploaded to the GPU. Once again, that’s what I understood when I used OpenGL directly.
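That understanding of batching can be sketched as a toy pre-processing step (positions only, translation only, to keep it short; jME’s BatchNode and GeometryBatchFactory do a full-transform version of this):

```java
import java.util.ArrayList;
import java.util.List;

// Toy version of the batching step described above: apply each
// geometry's world translation to its vertices and append them all
// into one big position array (one draw call instead of N).
public class ToyBatcher {

    static float[] batch(List<float[]> meshes, List<float[]> translations) {
        List<Float> out = new ArrayList<>();
        for (int m = 0; m < meshes.size(); m++) {
            float[] pos = meshes.get(m);     // x,y,z triplets
            float[] t = translations.get(m); // world translation of this geometry
            for (int i = 0; i < pos.length; i += 3) {
                out.add(pos[i] + t[0]);
                out.add(pos[i + 1] + t[1]);
                out.add(pos[i + 2] + t[2]);
            }
        }
        float[] result = new float[out.size()];
        for (int i = 0; i < result.length; i++) result[i] = out.get(i);
        return result;
    }

    public static void main(String[] args) {
        float[] tri = {0, 0, 0, 1, 0, 0, 0, 1, 0};
        float[] batched = batch(
                List.of(tri, tri),
                List.of(new float[] {0, 0, 0}, new float[] {10, 0, 0}));
        System.out.println(batched.length); // 18: two triangles, one buffer
    }
}
```

The price is that the baked positions are in world space, so moving one of the original geometries means rebuilding (part of) the big buffer.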

It’s too late to continue this discussion, and this isn’t the place. But I appreciate your help, really.
Also, there was an attack (not sure it’s terrorism, it isn’t clear yet) in France this evening. I am in France; I’ll follow the news and likely won’t answer any more this evening. But, once again, thanks!

Yes, and hopefully you can see that doing that once (with driver round trips, etc.) versus 200 times is going to be a lot faster. Plus, the data may not already be there or may not be resident where it’s needed and has to be shuffled.

Anyway, you already have the proof, from your own example, that batching is faster. But that has nothing to do with material context.

JME sorts the opaque bucket by shader first and then front to back distance (to prevent overdraw). The transparent bucket is sorted back to front by distance… because otherwise it won’t look right.
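The two sort orders described there can be sketched as comparators: opaque keyed by shader first, then by camera distance ascending (front to back); transparent by distance descending (back to front). The fields are illustrative; jME’s real buckets sort on more criteria than this:

```java
import java.util.Arrays;
import java.util.Comparator;

// Sketch of the two render-bucket sort orders described above.
// "shaderId" and "camDistance" are illustrative stand-ins, not
// jME's actual internals.
public class BucketSort {

    record Geom(int shaderId, float camDistance) {}

    // Opaque: group by shader, then front-to-back to reduce overdraw.
    static final Comparator<Geom> OPAQUE =
            Comparator.comparingInt(Geom::shaderId)
                      .thenComparingDouble(Geom::camDistance);

    // Transparent: strictly back-to-front so blending looks right.
    static final Comparator<Geom> TRANSPARENT =
            Comparator.comparingDouble(Geom::camDistance).reversed();

    public static void main(String[] args) {
        Geom[] geoms = {
            new Geom(2, 1f), new Geom(1, 9f), new Geom(1, 3f)
        };
        Arrays.sort(geoms, OPAQUE);
        // shader 1 first (near then far), shader 2 last
        System.out.println(Arrays.toString(geoms));
    }
}
```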

Following the tutorials in the jME Beginner’s Guide, I just created 1000 boxes of different shapes with the default white unshaded material; some of them are rotating. They are all placed between -500 and 500 WU, and the rootNode rotates very slowly. That’s all. The fps is around ~10, and I think it can be improved greatly with geometry instancing.

If you stare at it for a while it starts to look like stars in the night sky :stuck_out_tongue:


I’ve been doing some internal improvements to controls and the info screen, so nothing much to show, BUT I’ve also done some PR-related stuff, like a paper model to make booths at conventions more interesting (and to satisfy my inner urge to make stuff):

Full Imgur Album Here

And also some promotional posters/wallpapers. Directed by J. J. Abrams.

These are too large to upload directly to the forum, so here they are as links in case they don’t display.