jME3 project

Out of curiosity, how much time do you estimate you have put into this so far?

Not really sure, because most of the engine was actually written a few months ago and I don't remember anymore. I estimate it's about the same amount of time it would take to level a Draenei shaman to 25 in WoW, because I spent about an equal amount of time on both ;)

By the way, there's a new version based on the comments in this thread.
Change Log:

  • Now uses LWJGL 2.1, which adds Windows XP/Vista 64-bit support

  • Binary now compatible with Java 1.5

  • All GLSL shaders have been validated with the 3Dlabs validation tool, so ATI cards should now be properly supported

  • Node's children list is not synchronized anymore

Starnick said:

Very interesting... though I'm curious what the G in G3D stands for too :)


Apparently the "G" stands for Gorilla. A bigger, stronger monkey?

Tried the latest source again this morning…

It still bugs out…

Checked the log and got this message:

shader link success but validate failure

When I comment out the validation check, it works fine…

I discussed this with a friend and came to the conclusion that the validate status only matters when you actually call glValidateProgram; it isn't needed when simply using shaders. So yeah, the validation check shouldn't be there.
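For reference, roughly what that distinction looks like with LWJGL's GL20 class. This is a minimal sketch, not jME's actual checker, and the String-returning info-log overload may differ between LWJGL versions:

    import java.nio.IntBuffer;
    import org.lwjgl.BufferUtils;
    import org.lwjgl.opengl.GL11;
    import org.lwjgl.opengl.GL20;

    public final class ShaderCheck {
        /** Checks a linked program; treats validate failure as a warning only. */
        public static void check(int program) {
            IntBuffer status = BufferUtils.createIntBuffer(1);

            // The link status decides whether the program is usable at all.
            GL20.glGetProgram(program, GL20.GL_LINK_STATUS, status);
            if (status.get(0) == GL11.GL_FALSE) {
                // A failed link really is fatal.
                throw new RuntimeException("Link failed: "
                        + GL20.glGetProgramInfoLog(program, 1024));
            }

            // glValidateProgram only asks whether the program could execute
            // against the *current* GL state; it is a debugging aid, not a
            // pass/fail gate, so a failure here should not abort.
            GL20.glValidateProgram(program);
            GL20.glGetProgram(program, GL20.GL_VALIDATE_STATUS, status);
            if (status.get(0) == GL11.GL_FALSE) {
                System.err.println("Validate warning: "
                        + GL20.glGetProgramInfoLog(program, 1024));
            }
        }
    }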

Importers must conform to the GL specification of texture coordinates rather than the DX specification


A small nit: OpenGL and Direct3D both index textures the same way. The first byte you shove into the texture buffer has V = 0, and the last byte you shove in has V = 1. The only "convention" that differs is that Direct3D is generally used with BMP, TGA, and DDS textures, where the first scanline is at the bottom, while OpenGL is more often used with JPG and PNG textures, where the common texture loaders for those formats load the top scanline first.

Thus, the difference in conventions comes from the image loaders (which scanline goes first in memory), not from the APIs. And, to be really nit-picky, OpenGL doesn't have an image loader at all, so it has no convention. Direct3D has an image loader in D3DX, which has a bottom-scanline-first convention. What you have really done is define that the jME image loaders need to put the top scanline first.

Given that OpenGL has no preference one way or the other, and Direct3D itself has none either, but D3DX loads images bottom-first, I'm curious to know why you chose to standardize on the non-standard format?
mulova said:

But sometimes the performance / memory handling is not sufficient and
it is too hard to meet the requirements of a commercial game.

I disagree with you even though some parts of the engine are not reliable enough.

mulova said:

jme2 is now stable and it is a good time to discuss and implement version 3.

I don't consider JME 2.0 stable, as there are still some important problems, mainly in the AWT input handling.

Would a port from JME 2.0 to JME 3.0 be straightforward? I mean, would it be easy to adapt an existing game written with JME 2.0 so that it works with JME 3.0 too?

There are certain features in jME2 that are not structured as well as they should be. In addition, jME3 will take away some of the burdens that jME2 places on the user. As a result, a porting job would touch quite a lot of user code, mainly in material setup, model caching, and input handling.

I'm curious to know why you chose to standardize on the non-standard format?

If we assume all texture loaders load their data top scanline first (AWT does), then we obviously have a problem, since most models and modeling tools assume the image is loaded bottom scanline first. This forces us either to adopt a standard where loaders deliver textures bottom scanline first, to flip the image itself after it is loaded (currently done in jME), or to flip the texture coordinates. Since flipping texture coordinates is faster than flipping an image, I chose that option.
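To make the trade-off concrete, here is a hypothetical sketch of both options in plain Java (the method names are illustrative, not part of the jME API):

    public final class TexFlip {
        // Option A: flip the image rows in place.
        // Cost grows with the number of pixel bytes (width * height * bpp).
        static void flipScanlines(byte[] pixels, int width, int height, int bpp) {
            int stride = width * bpp;
            byte[] row = new byte[stride];
            for (int y = 0; y < height / 2; y++) {
                int top = y * stride;
                int bottom = (height - 1 - y) * stride;
                System.arraycopy(pixels, top, row, 0, stride);
                System.arraycopy(pixels, bottom, pixels, top, stride);
                System.arraycopy(row, 0, pixels, bottom, stride);
            }
        }

        // Option B: flip only the V coordinate of each vertex (v' = 1 - v).
        // Cost grows with the vertex count, which is typically far smaller.
        static void flipTexCoords(float[] uv) {
            for (int i = 1; i < uv.length; i += 2) {
                uv[i] = 1.0f - uv[i];
            }
        }
    }
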
Momoko_Fan said:
since flipping texture coordinates is faster than flipping an image, I chose that option.


I assume you have never worked with a real art path based on baked normal maps, then? Flipping the texture coordinates throws a big spanner into the works of any tangent basis, and there is very poor tool support for pre-correcting art for such a transform. Thus, that was a poor choice IMO. (Of course, there was no shortage of those in the design of jME 1... but why keep with tradition?)
gouessej said:

I disagree with you even though some parts of the engine are not reliable enough.

As said before (in other posts), the jme2 core has weaknesses in its VBO handling, and it has been said that it needs to be redesigned.
When a node is locked, as many display lists are created as the node has children. I think just one would be enough.
The current BoneAnimation structure makes it difficult to switch from CPU skinning to GPU skinning and vice versa.
Vector and the other math data structures should expose read-only interfaces and hide their members so they cannot be changed by external access (see the sketch below).
Textures are loaded and released only as a whole.
These are the problems I have felt while using jme2.
I think I'm using most of the features of jme2, but the performance is not sufficient.
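On the read-only point, a minimal sketch of the pattern being asked for (the interface and class names are illustrative, not jME's actual math classes):

    // Callers that only need to inspect a vector receive the interface
    // and cannot mutate the underlying data.
    interface ReadOnlyVector3 {
        float getX();
        float getY();
        float getZ();
    }

    // Mutable implementation; the engine hands it out as ReadOnlyVector3
    // wherever callers must not change engine-owned state.
    final class Vector3 implements ReadOnlyVector3 {
        private float x, y, z;

        public float getX() { return x; }
        public float getY() { return y; }
        public float getZ() { return z; }

        public void set(float x, float y, float z) {
            this.x = x; this.y = y; this.z = z;
        }
    }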

Anyway, I brought up starting jme3 just because there are many things that need to be changed, and that cannot be done in the current version.
I assume you have never worked with a real art path that is based on baked normal maps, then?

I have worked with such an art path before, but usually one wouldn't concern oneself with such problems, since it is the responsibility of the engine programmer, not the game programmer or the artist, to make sure that the model and texture loaders conform to the same specification for texture coordinates.

Flipping the texture coordinates throws a big spanner into the works of any tangent basis, and there is very poor tool support for pre-correcting art for such a transform. Thus, that was a poor choice IMO. (Of course, there was no shortage of those in the design of jME 1... but why keep with tradition?)

Regenerating the tangents and binormals is possible with minimal effort inside a model loader, in the same pass that flips the texture coordinates, and doing so is still faster than flipping images.
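As a sketch of what such a loader pass could look like, here is per-triangle tangent accumulation in the style of Lengyel's method (names are illustrative, not jME code):

    public final class TangentGen {
        /**
         * pos: xyz per vertex, uv: st per vertex, idx: triangle indices.
         * Returns one unnormalized xyz tangent per vertex; the caller
         * normalizes and derives binormals (e.g. B = N x T).
         */
        static float[] computeTangents(float[] pos, float[] uv, int[] idx) {
            float[] tan = new float[pos.length];
            for (int t = 0; t < idx.length; t += 3) {
                int i0 = idx[t], i1 = idx[t + 1], i2 = idx[t + 2];
                // Position edges of the triangle.
                float e1x = pos[i1 * 3]     - pos[i0 * 3];
                float e1y = pos[i1 * 3 + 1] - pos[i0 * 3 + 1];
                float e1z = pos[i1 * 3 + 2] - pos[i0 * 3 + 2];
                float e2x = pos[i2 * 3]     - pos[i0 * 3];
                float e2y = pos[i2 * 3 + 1] - pos[i0 * 3 + 1];
                float e2z = pos[i2 * 3 + 2] - pos[i0 * 3 + 2];
                // Texture-coordinate edges; these change sign when V is
                // flipped, which is exactly why tangents must be rebuilt.
                float du1 = uv[i1 * 2]     - uv[i0 * 2];
                float dv1 = uv[i1 * 2 + 1] - uv[i0 * 2 + 1];
                float du2 = uv[i2 * 2]     - uv[i0 * 2];
                float dv2 = uv[i2 * 2 + 1] - uv[i0 * 2 + 1];
                float det = du1 * dv2 - du2 * dv1;
                if (det == 0f) continue; // degenerate UV mapping
                float r = 1f / det;
                float tx = (dv2 * e1x - dv1 * e2x) * r;
                float ty = (dv2 * e1y - dv1 * e2y) * r;
                float tz = (dv2 * e1z - dv1 * e2z) * r;
                // Accumulate on all three vertices; shared vertices average.
                for (int i : new int[] { i0, i1, i2 }) {
                    tan[i * 3] += tx; tan[i * 3 + 1] += ty; tan[i * 3 + 2] += tz;
                }
            }
            return tan;
        }
    }
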
Regenerating the tangents and binormals is possible


Rendering a baked normal map properly requires the exact same tangent basis when rendering as was used for baking. Usually, that basis will not be orthonormal (!).
If all you're using a normal map for is to add some roughness to a surface, then who cares, but that's kind of aiming low in quality IMO.
jwatte said:

Regenerating the tangents and binormals is possible


Rendering a baked normal map properly requires the exact same tangent basis when rendering as was used for baking. Usually, that basis will not be orthonormal (!).
If all you're using a normal map for is to add some roughness to a surface, then who cares, but that's kind of aiming low in quality IMO.


I see. Well, it doesn't really matter now anyway. It's not like all the decisions made at this point are final; if there are issues with the current approach, they will show up in tests and a new approach will be developed.