Shader on ATI card normal(?) problem

Hey all!

This is not a support request, more of a question.

I made an animated water shader based on the Lighting shader, with moving textures.
It works perfectly on nVidia, but on ATI cards it looks like the normals are messed up.

Can you give me some guidance on where to start with this issue?

Thanks in advance!

Florin

This is the image; it seems it’s not displayed properly in the previous post: http://i.imgur.com/gBprVNL.jpg

The images are blocked at my work so I can’t see them. But I would suggest: make sure you have the latest drivers, put GLSL110 as a minimum in your .j3md if you haven’t already (it makes the compiler stricter), and make sure there are no implicit casts. Otherwise we will need to see the shader in question, and if possible a cut-down version illustrating the issue.

While many say AMD is buggy, I have never found an actual bug in their shader compiler; so far it has always been a matter of the compiler interpreting the shader as it is written, not as it was meant.

E.g. nVidia accepts

float var = 0;
AMD reports an invalid int-to-float cast for that. Technically AMD is right, though.
There are a few dozen of those things, probably also different depending on the driver version.

I suggest trying to debug your shader by testing each line’s expected result; if a line does not produce what you expect, set the pixel to a color encoding which line fails.
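For example, something along these lines (just a sketch; the sampler and varying names are placeholders, not from the actual shader):

#version 110

uniform sampler2D m_NormalMap;  // placeholder name, not necessarily yours
varying vec2 texCoord;

void main() {
    // nVidia also accepts "float scale = 0;", but strict compilers reject
    // the int literal, so always write float literals explicitly:
    float scale = 2.0;

    // Debugging idea: dump the suspect intermediate value straight to the
    // screen. If nVidia and AMD show different colors here, the problem is
    // at or before this line; otherwise move this early output further down.
    vec3 n = texture2D(m_NormalMap, texCoord).xyz * scale - 1.0;
    gl_FragColor = vec4(n * 0.5 + 0.5, 1.0);
}

Move that output line by line through the shader, and the first place where the two cards disagree is your culprit.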

@Empire Phoenix said: While many say AMD is buggy, I have never found an actual bug in their shader compiler; so far it has always been a matter of the compiler interpreting the shader as it is written, not as it was meant.

E.g. nVidia accepts

float var = 0;
AMD reports an invalid int-to-float cast for that. Technically AMD is right, though.
There are a few dozen of those things, probably also different depending on the driver version.

I suggest trying to debug your shader by testing each line’s expected result; if a line does not produce what you expect, set the pixel to a color encoding which line fails.

If you put GLSL110 in your matdef (and there is zero reason not to as GLSL100 isn’t a real version at all) then nVidia will complain also.

The issue is that nVidia has been supporting GLSL since before it was official. So if you don’t specify an actual version then it will default to compatibility mode to avoid breaking all of the shaders that were written before the spec was complete. GLSL100 in JME-speak is the equivalent of saying “no version specified”. All other GLSLXXX will put the proper #version into the shader.

This is why people say 15 or so times a week to put GLSL110 in your material defs.
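On the shader side, all the GLSL110 setting really does is make sure the source starts with a version directive, roughly:

#version 110
// ^ this is the line that GLSL110 in the .j3md buys you; with GLSL100 it is
//   simply omitted and each driver falls back to its own default behavior,
//   which is how the same shader can behave differently per vendor
void main() {
    gl_FragColor = vec4(1.0);  // trivial body, just to keep this compilable
}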

Even still, there are many ATI drivers that will kick out an error with no info other than “couldn’t compile”. I’d call that a driver bug. Not to mention that there are some driver versions (even recent) that just crash or don’t display things properly. @nehon had a case where he had to back-rev his driver last year to get things to work properly. He has since switched to nvidia. My experience with ATI bugginess comes from about a decade of them saying they supported OpenGL when it was almost impossible to get OpenGL actually working on their cards. For most of the 2000 decade, they were basically a joke in the OpenGL community… at least on the lists that I was on. (“My card is an ATI card and I’m having troubles…” “Heheh, why don’t you get a card that supports OpenGL?” Hilarity ensues.)

When AMD bought them it improved and I think it is still improving. But compared to nVidia’s robust driver model, ATI is always playing catch-up there.

However, back on topic, we don’t have enough information to solve this problem. For example, my immediate thought was lack of tangents or something because that is one area where nVidia and ATI guess differently. But I don’t know if normal or bump mapping is even being used here.

Just as a sidenode, if you are ever used nvidias drivers under linux, you would hate them as well ^^.
The amd ones suck as well, but at least they kinda support the open source driver, and it is getting to production ready performance by now.

Thanks for all the hints guys!

@wezrule I tried putting GLSL110 to specify the version, no change.
@EmpirePhoenix I should have specified that there is no compile error for the shader.
@pspeed I use normal maps, not bump maps: two of them, and I mix them after moving the texture coordinates based on time to achieve the animation.

I have a couple more ideas to try; either way, I will post results or more details here.
Thanks again!

Florin

@fbucur said: Thanks for all the hints guys!

@wezrule I tried putting GLSL110 to specify the version, no change.
@EmpirePhoenix I should have specified that there is no compile error for the shader.
@pspeed I use normal maps, not bump maps: two of them, and I mix them after moving the texture coordinates based on time to achieve the animation.

I have a couple more ideas to try; either way, I will post results or more details here.
Thanks again!

Florin

If you use normal maps do you have tangents in your mesh? If not then you will get random results because normal maps cannot be applied without proper tangents. The GPU will pick whatever random tangent vector it wants (often just garbage from memory).

@pspeed said: If you use normal maps do you have tangents in your mesh? If not then you will get random results because normal maps cannot be applied without proper tangents. The GPU will pick whatever random tangent vector it wants (often just garbage from memory).

Would it work properly just on nVidia?

@fbucur said: Would it work properly just on nVidia?

It might look like it.

Are you using your own shader or one of JMEs?

Edit: or more specifically how are you using normal maps?

@pspeed said: It might look like it.

Are you using your own shader or one of JMEs?

Edit: or more specifically how are you using normal maps?

My own shader, based on the jME3 Lighting shader. I just added more normal and specular maps that are offset in time and mixed.
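Roughly, the fragment side does something along these lines (a simplified sketch with made-up names, not my exact code):

#version 110

uniform sampler2D m_NormalMap1;   // made-up parameter names
uniform sampler2D m_NormalMap2;
uniform float g_Time;             // or however the time gets passed in
varying vec2 texCoord;

void main() {
    // scroll the two layers at different speeds/directions
    vec2 uv1 = texCoord + vec2(g_Time * 0.03, g_Time * 0.01);
    vec2 uv2 = texCoord - vec2(g_Time * 0.02, g_Time * 0.04);

    // unpack from [0,1] to [-1,1], blend and renormalize
    vec3 n1 = texture2D(m_NormalMap1, uv1).xyz * 2.0 - 1.0;
    vec3 n2 = texture2D(m_NormalMap2, uv2).xyz * 2.0 - 1.0;
    vec3 n  = normalize(n1 + n2);

    // the blended normal is still in tangent space; the real shader feeds
    // it into the lighting math, here it is just visualized
    gl_FragColor = vec4(n * 0.5 + 0.5, 1.0);
}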

@fbucur said: My own shader, based on the jME3 Lighting shader. I just added more normal and specular maps that are offset in time and mixed.

JME’s lighting shader requires tangents. If you do not have tangents then that is your problem. You must have tangents or normal maps cannot be used correctly because normal maps require tangents. In order to properly calculate the lighting from normal maps it is necessary to know the tangent space and therefore it is necessary to know tangents.
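In sketch form, this is roughly what the vertex side of a Lighting-style shader has to do with the tangent (simplified, not the exact jME code; the attribute names just follow jME’s convention):

#version 110

uniform mat4 g_WorldViewProjectionMatrix;
uniform mat3 g_NormalMatrix;

attribute vec3 inPosition;
attribute vec3 inNormal;
attribute vec4 inTangent;   // if the mesh has no tangent buffer, this is garbage

varying mat3 tbnMat;        // used by the fragment shader to rotate the
                            // normal-map sample out of tangent space

void main() {
    vec3 n = normalize(g_NormalMatrix * inNormal);
    vec3 t = normalize(g_NormalMatrix * inTangent.xyz);
    vec3 b = cross(n, t) * inTangent.w;   // w stores the handedness
    tbnMat = mat3(t, b, n);

    gl_Position = g_WorldViewProjectionMatrix * vec4(inPosition, 1.0);
}

If the mesh has no tangent buffer, that matrix gets built from whatever happens to be in memory, which is why one card can happen to look right while another does not.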

Does your mesh have tangents?

@pspeed said: JME's lighting shader requires tangents. If you do not have tangents then that is your problem. You must have tangents or normal maps cannot be used correctly because normal maps require tangents. In order to properly calculate the lighting from normal maps it is necessary to know the tangent space and therefore it is necessary to know tangents.

Does your mesh have tangents?

Sorry for the late reply, I missed the notification.
If you are referring to TangentBinormalGenerator, I see no effect with or without the generate method applied to the node containing the quads, both on nVidia and ATI.
I do generate the normals for each quad also.

@fbucur said: Sorry for the late reply, I missed the notification. If you are referring to TangentBinormalGenerator, I see no effect with or without the generate method applied to the node containing the quads, both on nVidia and ATI. I do generate the normals for each quad also.

Maybe you could tell us more about how you are generating your mesh. For example, the tangent binormal generator will only work if you run it after the normals are in place. Tangent generation requires normals and texture coordinates.

What does it look like if you use the regular lighting shader? Is there still an issue? For example, if you have different texture coordinates for your different normal mapped layers or whatever then you might also need different tangents somehow… which starts to get kind of tricky.

Essentially, I think we might be getting tired of fishing for random things that may be wrong without seeing any code or samples or anything. Unless you are able to provide some kind of test case we can look at (with code posted here, too) then I’m not sure there is much more we can do to help.

@pspeed said: Maybe you could tell us more about how you are generating your mesh. For example, the tangent binormal generator will only work if you run it after the normals are in place. Tangent generation requires normals and texture coordinates.

What does it look like if you use the regular lighting shader? Is there still an issue? For example, if you have different texture coordinates for your different normal mapped layers or whatever then you might also need different tangents somehow… which starts to get kind of tricky.

Essentially, I think we might be getting tired of fishing for random things that may be wrong without seeing any code or samples or anything. Unless you are able to provide some kind of test case we can look at (with code posted here, too) then I’m not sure there is much more we can do to help.

Thanks so much for the help! For the moment I changed to a parallax map and the result is great! It works better than the normal map and there’s no need for tangents.
I will however get back to this issue at a later time and investigate more. I have excluded the shader as a cause: the same thing happened with the core Lighting shader, with a normal map and tangents generated, but only on ATI cards.

Thanks again to all of you for your help!

Cheers,

Florin
