Hi, I would like to make a post-process filter for my game to simulate a shattered glass/screen effect. I suppose I could use a gray image with the crack pattern and apply it to the screen somehow, but I want something more versatile and adjustable. Like the glass filter in Photoshop, but more “faceted”, with the ability to move and rotate the pieces apart.
It seems that the first step would be to subclass the Filter class, but there is surely more to it. For example, the RadialBlurFilter makes use of 3 more files: RadialBlur.j3md, RB.vert, and RB.frag.
I would greatly appreciate some help, just to give me a little impetus. I am kind of confused at the moment. Where do I start, and what comes next? Are the 3 file types above mandatory for such a filter?
Thanks
The .j3md defines which shaders to use and which render states.
The .vert file is a vertex shader, capable of modifying vertex positions (wind in grass, for example); the .frag file is a fragment shader, which determines the final color of each pixel.
The term you need to search for is GLSL. However, the effect above seems like a pretty hard subject to start with; I suggest first learning a bit more about shaders with some simpler materials, and then moving on.
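To make the relationship between the three files concrete, here is a rough sketch of what a material definition for such a filter might look like. All names (CrackedGlass, the CrackMap parameter, the shader paths) are made up for illustration; the exact parameters depend on your filter:

```
// CrackedGlass.j3md — declares which shaders to use and which parameters they take
MaterialDef CrackedGlass {
    MaterialParameters {
        Texture2D Texture   // the rendered scene, supplied by the filter
        Texture2D CrackMap  // grayscale crack pattern (hypothetical)
    }
    Technique {
        VertexShader GLSL100:   Shaders/CrackedGlass.vert
        FragmentShader GLSL100: Shaders/CrackedGlass.frag
        WorldParameters {
            WorldViewProjectionMatrix
        }
    }
}
```

The .vert and .frag files referenced here would then contain the actual GLSL code, with the .frag sampling Texture (and possibly offsetting the sample coordinates based on CrackMap) to produce the final pixel color.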
You probably don’t want to use a shader for it, but rather, instead of using a quad to draw on, you use a mesh that is split to several polygons (to represent the glass). You can then move those polygons around to simulate the glass effect like in the screenshot.
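The mesh approach above boils down to per-shard transforms: each glass piece is a small polygon of the screen quad, rotated about its own centroid and pushed apart. A minimal sketch of that math in plain Java (no engine types; the shard coordinates and offsets are arbitrary example values):

```java
// Sketch: given a triangular "shard" of the screen quad, rotate it about its
// centroid and translate it, producing the vertex positions you would upload
// to that shard's mesh each frame.
public class ShardDemo {
    // Rotate (x, y) around pivot (cx, cy) by angle radians, then translate by (dx, dy).
    static double[] transform(double x, double y, double cx, double cy,
                              double angle, double dx, double dy) {
        double cos = Math.cos(angle), sin = Math.sin(angle);
        double rx = cx + (x - cx) * cos - (y - cy) * sin;
        double ry = cy + (x - cx) * sin + (y - cy) * cos;
        return new double[] { rx + dx, ry + dy };
    }

    public static void main(String[] args) {
        // One shard: a triangle in normalized screen coordinates.
        double[][] shard = { { 0.0, 0.0 }, { 0.5, 0.0 }, { 0.25, 0.4 } };
        // Centroid of the shard, used as the rotation pivot.
        double cx = (shard[0][0] + shard[1][0] + shard[2][0]) / 3.0;
        double cy = (shard[0][1] + shard[1][1] + shard[2][1]) / 3.0;
        for (double[] v : shard) {
            double[] p = transform(v[0], v[1], cx, cy, 0.1, 0.02, -0.01);
            System.out.printf("%.4f %.4f%n", p[0], p[1]);
        }
    }
}
```

Texture coordinates stay fixed while positions move, so each shard keeps showing "its" part of the rendered scene while drifting apart.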
Thank you for the quick answers.
I took some time to read up on OpenGL shaders, which gave me a clearer idea of the subject. It still seems kind of weird to me to use shaders to distort the image on screen. This is because I don’t necessarily want to modify vertex positions or pixel colors; I just want to move already-rendered pixels of the image to another position. Like what you can see in the Puzzle effect of VLC media player.
So the solution Momoko suggested seems to suit my needs better. I could use a mesh, but before getting to work: isn’t there a way to simply obtain an object/structure holding the rendered image, and to modify it according to an algorithm?
Thanks again
You can render the scene to a texture and then use it as a material map.
Look at the TestRenderToTexture test case.
You can acquire the image data as a native ByteBuffer from the Image class, which is stored in the Texture class.
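Once you have the pixel data as a ByteBuffer, a puzzle-style effect is just rearranging tiles of bytes. A self-contained sketch in plain Java (the RGBA layout and tile bookkeeping here are illustrative assumptions about how you would traverse the buffer, not jME-specific API):

```java
import java.nio.ByteBuffer;

// Sketch: treat a ByteBuffer as a WxH RGBA image and swap two square tiles,
// the core operation of a puzzle-style effect.
public class TileSwapDemo {
    static final int BPP = 4; // bytes per pixel (RGBA)

    // Swap the tile at tile-grid position (ax, ay) with the one at (bx, by).
    static void swapTiles(ByteBuffer img, int width, int tile,
                          int ax, int ay, int bx, int by) {
        byte[] rowA = new byte[tile * BPP];
        byte[] rowB = new byte[tile * BPP];
        for (int row = 0; row < tile; row++) {
            int offA = ((ay * tile + row) * width + ax * tile) * BPP;
            int offB = ((by * tile + row) * width + bx * tile) * BPP;
            img.position(offA); img.get(rowA);
            img.position(offB); img.get(rowB);
            img.position(offA); img.put(rowB);
            img.position(offB); img.put(rowA);
        }
    }

    public static void main(String[] args) {
        int w = 8, h = 8, tile = 4;
        ByteBuffer img = ByteBuffer.allocate(w * h * BPP);
        // Fill each pixel's red channel with its x coordinate, for visibility.
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                img.put((y * w + x) * BPP, (byte) x);
        swapTiles(img, w, tile, 0, 0, 1, 1); // swap top-left and bottom-right tiles
        System.out.println(img.get(0)); // red of pixel (0,0), now taken from pixel (4,4): prints 4
    }
}
```

Note that modifying pixels on the CPU like this works, but uploading the changed texture every frame is slow compared to the mesh approach, which keeps the rendered texture untouched and only moves geometry.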