"Clipping" problem with glasses through glasses

Hi all,

I have a model of a “shelter” with a lot of glass panes, so with transparency, and the model can be extended by adding slices (3 in the picture).

To simplify, I used SharedMesh to duplicate the slices, but I have a problem with the glass panes: part of them disappears when they are visible through another pane, as you can see in the picture. It depends on the camera position.

It seems to be a rendering-order problem, but I tried a lot of things and nothing changed.

I tried reducing the far plane distance and changing “twoPassTransparency”.

The glass panes are 3D meshes, not just flat surfaces, but I tried both and the problem is the same.

The problem doesn’t appear when all the slices are a single object (made with 3DS Max). All transparent batches are set to QUEUE_TRANSPARENT.

Without SharedMesh it’s the same problem (with distinct objects).

One solution is to merge the duplicated meshes, but that is a bit complicated and not really satisfying.

Do you have any ideas?



Maybe changing the alpha test function (setTestFunction) to AlphaState.TF_ALWAYS would fix this…

Is this set?

rootNode.setRenderQueueMode( Renderer.QUEUE_TRANSPARENT );

OpenGL alpha blending is accumulative: when one of your panes of glass is rendered, it blends against whatever is already in the framebuffer under the current OpenGL state. If the fragments behind the glass haven't been drawn yet, nothing appears 'behind' the glass when the glass is rendered.

I believe that call organizes the scene graph so that farther objects are rendered first.

I already tried setting the rootNode to QUEUE_TRANSPARENT, but nothing changed.

No, I tested all the test functions but nothing changed (with testEnable == true) :frowning:

It's caused by the rendering order, I guess. I never had a look into the ordering, though, so I'm not sure how to influence it… do you already have multi-pass (jME can at least do two-pass) rendering for transparent objects switched on?

Yes, I use the jME Renderer's multi-pass for transparency.

I also think it's a rendering-order problem, because the slices are separate objects and there is no bug when they are merged into a single object.

Maybe it doesn't like the curved glass?

Have you verified that your normals are correct? That is important for two-sided transparency, as it uses face culling to render the two sides properly.


Did you solve the problem?

Hmm, yes in some cases, but I still have problems with textures that use alpha.

In this case, I think it was due to the fact that my transparent objects were not in QUEUE_TRANSPARENT mode, or something like that. Make sure that transparent objects are in QUEUE_TRANSPARENT, and that the others are not.

It's an old project so I don't really remember.

I had this kind of problem in Java3D, and it was related to the sorting of transparent objects during rendering. In Java3D I had to disable sorting due to bad performance; after that, the transparency looked the same. I also think the problem may be in the sorting. For example, no sorting at all, or bad bounding boxes, may lead to the wrong order (a guess).

I think I have found the problem, but I don't really know how to correct it.

After several hours of searching, I found that RenderQueue.renderTransparentBucket() sorts the bucket. If I comment out the line transparentBucket.sort(), everything works perfectly in my test. As mazander said, the sorting seems to be the problem.

(I'm still working in jME 1.0.)

In the sort code, the bucket uses a TransparentComp comparator, and this comparator sorts elements by the distance between the camera and each spatial. But that distance is measured from the center of the spatial, so in many cases it doesn't correspond to the actual positions of the elements. That's why rendering fails in some cases, mostly when spatials are nested or interleaved.
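To make the failure mode concrete, here is a minimal pure-Java sketch (not the actual jME code; Pane and the z-extents are made-up stand-ins for spatials along the view axis). A long slice that surrounds a smaller pane gets ordered differently depending on whether you sort by center distance or by farthest point:

```java
import java.util.Arrays;
import java.util.Comparator;

// Sketch: why sorting transparent objects by the distance from the camera
// to each object's *center* can pick a bad order when objects are nested.
public class CenterSortDemo {
    // A "spatial" reduced to its extent along the view axis (camera at z = 0).
    static final class Pane {
        final String name;
        final double zNear, zFar; // nearest / farthest point from the camera
        Pane(String name, double zNear, double zFar) {
            this.name = name; this.zNear = zNear; this.zFar = zFar;
        }
        double center() { return (zNear + zFar) / 2.0; }
    }

    // Back-to-front order by center distance, TransparentComp-style.
    static String[] sortByCenter(Pane[] panes) {
        Pane[] copy = panes.clone();
        Arrays.sort(copy, Comparator.comparingDouble(Pane::center).reversed());
        return Arrays.stream(copy).map(p -> p.name).toArray(String[]::new);
    }

    // Back-to-front order by the farthest point of each pane.
    static String[] sortByFarthest(Pane[] panes) {
        Pane[] copy = panes.clone();
        Arrays.sort(copy, Comparator.comparingDouble((Pane p) -> p.zFar).reversed());
        return Arrays.stream(copy).map(p -> p.name).toArray(String[]::new);
    }

    public static void main(String[] args) {
        // A long curved slice that surrounds a smaller pane along the view axis:
        Pane big   = new Pane("big",   2.0, 12.0); // center 7.0, farthest 12.0
        Pane small = new Pane("small", 8.0,  9.0); // center 8.5, farthest  9.0
        Pane[] scene = { big, small };
        System.out.println(Arrays.toString(sortByCenter(scene)));   // [small, big]
        System.out.println(Arrays.toString(sortByFarthest(scene))); // [big, small]
    }
}
```

The two criteria disagree, and neither single per-object order is truly correct here, since part of "big" is in front of "small" and part is behind it; that matches the view-dependent artifacts described above.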

So why does the transparent bucket sort its elements? Why doesn't the bucket simply use the Z-buffer?

For the moment I simply comment out the sort line, but I suppose it has a purpose. For the ortho bucket I understand why, but for transparency I don't.
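For what it's worth, the usual reason engines sort transparent objects is that standard "over" alpha blending (src·a + dst·(1−a)) is order-dependent; the Z-buffer can only reject fragments, it cannot fix the blend order. A tiny single-channel arithmetic sketch (values are made up for illustration):

```java
// Sketch: "over" alpha blending gives different results depending on
// which pane is drawn first, which is why back-to-front order matters.
public class BlendOrderDemo {
    // Blend a source color with alpha over an existing destination color.
    static double blend(double src, double srcAlpha, double dst) {
        return src * srcAlpha + dst * (1.0 - srcAlpha);
    }

    public static void main(String[] args) {
        double background = 0.0;               // black background
        double farPane = 1.0, nearPane = 0.5;  // single-channel stand-in colors
        double alpha = 0.5;

        // Back-to-front: far pane first, then the near pane over it.
        double backToFront = blend(nearPane, alpha, blend(farPane, alpha, background));
        // Front-to-back: near pane first, then the far pane over it (wrong).
        double frontToBack = blend(farPane, alpha, blend(nearPane, alpha, background));

        System.out.println(backToFront); // 0.5
        System.out.println(frontToBack); // 0.625
    }
}
```

Same two panes, same colors, different final pixel, so some ordering pass is needed; the question is only how good the per-object ordering criterion is.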

I hope somebody knows. I notice the same code is in jME 2.0.

The Z-buffer is just a set of per-pixel 0–1 depth values; there is nothing that ties it to an object…

You could implement your own ordering system; I think you would basically need to find the farthest point on each object with respect to the camera. But this would still have issues when objects 'overlap', and I think it could be quite a performance drain (depending on the number and complexity of the models)…
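The farthest-point idea above could be sketched like this (plain Java, not jME API; Vec3 and Mesh are hypothetical stand-ins for whatever vector/mesh types you actually use):

```java
import java.util.Arrays;
import java.util.Comparator;

// Sketch: order objects back-to-front by the distance from the camera to
// their *farthest* vertex instead of their center.
public class FarthestPointSort {
    static final class Vec3 {
        final double x, y, z;
        Vec3(double x, double y, double z) { this.x = x; this.y = y; this.z = z; }
        double distSq(Vec3 o) {
            double dx = x - o.x, dy = y - o.y, dz = z - o.z;
            return dx * dx + dy * dy + dz * dz;
        }
    }

    static final class Mesh {
        final String name;
        final Vec3[] vertices; // world-space vertices
        Mesh(String name, Vec3... vertices) { this.name = name; this.vertices = vertices; }
    }

    // Squared distance from the camera to the farthest vertex of the mesh.
    // Scanning every vertex per frame is the performance drain mentioned above.
    static double farthestDistSq(Mesh mesh, Vec3 camera) {
        double max = 0.0;
        for (Vec3 v : mesh.vertices) max = Math.max(max, v.distSq(camera));
        return max;
    }

    // Back-to-front order: the object whose farthest point is deepest comes first.
    static Mesh[] backToFront(Mesh[] meshes, Vec3 camera) {
        Mesh[] copy = meshes.clone();
        Arrays.sort(copy, Comparator
                .comparingDouble((Mesh m) -> farthestDistSq(m, camera))
                .reversed());
        return copy;
    }

    public static void main(String[] args) {
        Vec3 camera = new Vec3(0, 0, 0);
        Mesh near = new Mesh("near", new Vec3(0, 0, 1), new Vec3(0, 0, 3));
        Mesh far  = new Mesh("far",  new Vec3(0, 0, 2), new Vec3(0, 0, 5));
        for (Mesh m : backToFront(new Mesh[]{ near, far }, camera))
            System.out.println(m.name); // far, then near
    }
}
```

In practice you would probably use the bounding volume's farthest point rather than scanning raw vertices, but as noted, no single per-object key can be correct when objects interpenetrate.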