I have problems achieving a certain result: I draw clouds using small impostor quads that have a sprite texture. To speed up overall performance (for thousands of sprites), the clouds are drawn to textures that are placed on a ring around the camera. The ring consists of eight impostors that are updated when the camera has moved a certain distance.
Using RenderState.BlendMode.AlphaAdditive on the cloud material gives the following result:
But this is not correct: The clouds should become more dense when there are more sprites in the cloud! I get this result because the alpha value of the sprite drawn last is written to the ring texture.
I got the latter result using RenderState.BlendMode.Color but this does not work for black clouds, of course.
Conclusion: The alpha values of the cloud sprites should add up in the frame buffer. How can I achieve this?
The RenderState.BlendModes are basically shorthand for actual OpenGL blend mode calls; they just use some preset values. You can look in the code for BlendMode (I think) to see how they translate to OpenGL.
There is an online applet you can use to play with blend modes. It’s possible that accumulating alpha may not be possible… but that’s where I would start. If you can identify which combination of settings works, then you can see whether there is already a blend mode similar enough, or potentially submit a patch to add one.
I’ve added at least one blend mode myself (maybe two, I can’t remember) exactly that way.
So, today I had some time to read through the documentation about what we call BlendMode. I’d like to give the interested reader a more in-depth view of the problem and propose a solution.
Basically, what we have in JME are some predefined sets of glBlendFunc parameters. Together with glBlendEquation, they determine how a pixel’s color will be blended.
We could for example use RenderState.BlendMode.Alpha, which is exactly equal to glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA). This means that every component of the source color that we want to render is multiplied by the source color’s alpha value. The pixel’s color in the underlying image (or the destination frame buffer) is multiplied by (1 - src_alpha), and together with an additive glBlendEquation (which is the default) we get: result_color = src_alpha * src_color + (1 - src_alpha) * dst_color
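This formula can be checked with a few lines of plain Java that mimic what the GPU does per component (just a sketch of the math, not jME or OpenGL API):

```java
// Simulates glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) with an
// additive blend equation, for a single color component.
public class AlphaBlend {
    static float blend(float srcColor, float srcAlpha, float dstColor) {
        // result_color = src_alpha * src_color + (1 - src_alpha) * dst_color
        return srcAlpha * srcColor + (1f - srcAlpha) * dstColor;
    }

    public static void main(String[] args) {
        // A 50% transparent white sprite drawn over a black background:
        System.out.println(blend(1.0f, 0.5f, 0.0f)); // 0.5
    }
}
```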
Remember my problem, where I render multiple transparent sprites to a texture and use the texture on a billboard. The final alpha value of the texture will therefore be that of the last rendered transparent sprite.
As I stated above, RenderState.BlendEquationAlpha.Max gives better results, because we take the maximum of the two alpha values (src, dst) instead of the additive relation above. Yet my cloud sprites never have an alpha value of 1.0, so I can always see through them. This is not desired.
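The shortcoming of the Max equation is easy to see in a little simulation (again just a sketch of the math, not jME API):

```java
// Simulates the GL_MAX blend equation applied to the alpha channel.
public class MaxAlphaBlend {
    static float blendMax(float srcAlpha, float dstAlpha) {
        return Math.max(srcAlpha, dstAlpha);
    }

    public static void main(String[] args) {
        // Stacking sprites that each have alpha 0.4 never gets the texture past 0.4:
        float dst = 0f;
        for (int i = 0; i < 3; i++) {
            dst = blendMax(0.4f, dst);
        }
        System.out.println(dst); // 0.4 — still see-through, no matter how many sprites
    }
}
```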
Now the good news: the Khronos Group (the guys who make OpenGL) have a wonderful solution to the problem. With glBlendFuncSeparate you can define separate blend factors for the alpha channel! This means that you can keep RenderState.BlendMode.Alpha for the RGB components and use GL_ONE, GL_ONE for the alpha channel only, i.e. glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_ONE, GL_ONE). Together with an additive BlendEquationSeparate (we have this already), I reckon that this would give: result_alpha = 1 * src_alpha + 1 * dst_alpha
WE NEED THIS
(Well, at least I do.)
Instead of using only predefined values for RenderState.BlendMode, how about a fallback where we can define the parameters for glBlendFunc and glBlendFuncSeparate ourselves? I would like to implement this in GLRenderer if you agree.
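To make the idea concrete, here is one possible shape such a fallback could take. All names here are hypothetical — none of this exists in jME; the point is just that GLRenderer could translate four user-supplied factors into a single glBlendFuncSeparate call:

```java
// Hypothetical sketch: a value object holding separate RGB and alpha blend
// factors, which a renderer could feed into
// glBlendFuncSeparate(srcRgb, dstRgb, srcAlpha, dstAlpha).
public final class CustomBlendFunc {
    public final int srcRgb, dstRgb, srcAlpha, dstAlpha; // raw GL factor enums

    public CustomBlendFunc(int srcRgb, int dstRgb, int srcAlpha, int dstAlpha) {
        this.srcRgb = srcRgb;
        this.dstRgb = dstRgb;
        this.srcAlpha = srcAlpha;
        this.dstAlpha = dstAlpha;
    }

    public static void main(String[] args) {
        // GL constant values, copied here so the sketch is self-contained:
        final int GL_ONE = 1;
        final int GL_SRC_ALPHA = 0x0302;
        final int GL_ONE_MINUS_SRC_ALPHA = 0x0303;
        // Regular alpha blending for RGB, additive accumulation for alpha:
        CustomBlendFunc cloud = new CustomBlendFunc(
                GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_ONE, GL_ONE);
        System.out.println(cloud.srcAlpha == GL_ONE); // true
    }
}
```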