I wanted to make this a surprise, but since it's an epic failure I don't have a choice but to ask for help…
I found this very neat LensFlare tutorial (I won't get into the tutorial side of it, because it honestly sucks as such: barely any comments in the source code or in the shader, and pretty bare at the site where I found it), but that LensFlare is exactly what I have been looking for for the longest time… The main problem is that it's written in C++ (I also won't get into that; I could flame VS C++ 10.0 Express for many pages), but suffice to say that for the last week I've been working on this. Yeah. 7+ days.
Honestly, this isn't just the tutorial's fault. The added layers in jME didn't help, but that's to be expected. Because of all this I had to swallow a LOT of information. The good side is that I'm starting to really get the hang of the engine's internal workings.
Let's move on to the crux of the problem.
The way this lens flare works (it's far from optimal, but at this point I only want it to work) is that each frame, positions are calculated and textures are applied inside a shader depending on the view matrices, etc. All those textures are applied to a quad.
Now, the flare's internals work. By that I mean that I can trigger the shader to do its thing when the light source is in view. The statistics window reflects the texture churning while the camera moves, and it stops when the light source is outside the view frustum.
The main problem is that the quad is invisible, or something along those lines. Even if I explicitly set the quad's cull hint to CullHint.Never (before it's attached to the scene node) and set the material to wireframe, it's nowhere to be seen.
It's (very) possible the shader is wrongly adapted, but shouldn't I still see the quad's wireframe? The quad is sent to the Gui bucket, if that helps.
Here’s the init method and the quad maker method:
[java]
private void initFlare(AssetManager manager, RenderManager renderManager, ViewPort vp) {
    this.renderManager = renderManager;
    // Set basic values for the flare.
    lightScaleFactorX = 1000.0f;
    lightScaleFactorY = 1000.0f;
    lensScaleFactor = 50.0f;
    flareIntensityDecrease = 0.00001f;
    this.viewport = vp;

    // Flare and light textures.
    lensFlare1 = (Texture2D) manager.loadTexture("Textures/Flare/Flare1.png");
    lensFlare2 = (Texture2D) manager.loadTexture("Textures/Flare/Flare2.png");
    lensFlare3 = (Texture2D) manager.loadTexture("Textures/Flare/Flare3.png");
    lensFlare4 = (Texture2D) manager.loadTexture("Textures/Flare/Flare4.png");
    lightTexture = (Texture2D) manager.loadTexture("Textures/Flare/SunLight.png");

    lensMat = new Material(manager, "Materials/TestLensFlare.j3md");
    lensMat.getAdditionalRenderState().setWireframe(true); // debugging
    makeFlareQuad();
    lensMat.setMatrix4("WorldViewProjectionMatrix", viewport.getCamera().getViewProjectionMatrix());

    fb = new FrameBuffer(viewport.getCamera().getWidth(), viewport.getCamera().getHeight(), 1);
    fb.setDepthBuffer(Format.Depth);
}

private void makeFlareQuad() {
    float width = (float) viewport.getCamera().getWidth();
    float height = (float) viewport.getCamera().getHeight();

    // Screen-sized quad.
    Vector3f[] vertices = new Vector3f[4];
    vertices[0] = new Vector3f(0.0f, 0.0f, 0.0f);
    vertices[1] = new Vector3f(width, 0.0f, 0.0f);
    vertices[2] = new Vector3f(0.0f, height, 0.0f);
    vertices[3] = new Vector3f(width, height, 0.0f);

    Vector2f[] texCoord = new Vector2f[4];
    texCoord[0] = new Vector2f(0.0f, 0.0f);
    texCoord[1] = new Vector2f(width, 0.0f);
    texCoord[2] = new Vector2f(0.0f, height);
    texCoord[3] = new Vector2f(width, height);

    int[] indices = {2, 0, 1, 1, 3, 2};

    Mesh flareMesh = new Mesh();
    flareMesh.setBuffer(Type.Position, 3, BufferUtils.createFloatBuffer(vertices));
    flareMesh.setBuffer(Type.TexCoord, 2, BufferUtils.createFloatBuffer(texCoord));
    flareMesh.setBuffer(Type.Index, 1, BufferUtils.createIntBuffer(indices));

    lensQuad = new Geometry("FlareQuad", flareMesh);
    lensQuad.setQueueBucket(Bucket.Gui);
    lensQuad.setMaterial(lensMat);
    lensQuad.setCullHint(Spatial.CullHint.Never);
}
[/java]
The way I set this up for now is to instantiate the flare's class in the scene class (the one containing the Node it'll be attached to), like so:
[java]
[… snipped …]
lensFlare = new LensFlare(starSphere, star.getColor(), Base.getGame().getAssetManager(), Base.getGame().getRenderManager(), getViewport());
sceneNode.attachChild(lensFlare.getFlareQuad());
[… snipped …]
[/java]
And in that same class (an AbstractAppState), call lensFlare.update() from the update() method.
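The hookup looks roughly like this (a minimal sketch; MyFlareState is a made-up name, and I'm assuming LensFlare exposes the update() method mentioned above):
[java]
import com.jme3.app.state.AbstractAppState;

// Sketch only: MyFlareState is a hypothetical name; lensFlare is the
// LensFlare instance created as shown above.
public class MyFlareState extends AbstractAppState {

    private LensFlare lensFlare;

    @Override
    public void update(float tpf) {
        super.update(tpf);
        lensFlare.update(); // recompute the flare each frame
    }
}
[/java]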
With my luck it’ll be something awfully stupid.
Thanks for the help guys.
If you set the material to regular Unshaded, does it show up? Even in wireframe it will be using your shader to color the frame… so it's not really a good test on its own.
Note: I think it would be good to figure out why what you have isn't working, but ultimately the post-processing stuff is doing nearly exactly what you are manually doing with your quad. Worth considering switching to that at some point.
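For a quick sanity check, a swap like this takes the custom shader out of the equation entirely (a sketch; the Unshaded.j3md path is the stock jME one, and the color is arbitrary):
[java]
// Debug sketch: temporarily use the stock Unshaded material so the
// quad's visibility no longer depends on the custom shader.
Material debugMat = new Material(assetManager, "Common/MatDefs/Misc/Unshaded.j3md");
debugMat.setColor("Color", ColorRGBA.Orange);
debugMat.getAdditionalRenderState().setWireframe(true);
lensQuad.setMaterial(debugMat);
[/java]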
pspeed said:
If you set the material to regular Unshaded, does it show up? Even in wireframe it will be using your shader to color the frame... so it's not really a good test on its own.
Gawdamit. :x
I'll try using unshaded just to see if it'll blink on/off.
Note: I think it would be good to figure out why what you have isn't working, but ultimately the post-processing stuff is doing nearly exactly what you are manually doing with your quad. Worth considering switching to that at some point.
Before I finally figured out pretty much everything, I had it set up that way, although I was never able to make it fully work. I also tested it using particles. That was actually "working", but it was static in space and having it move would have been a real pain.
Thanks for the suggestion. I'll report back after some testing.
Ok, I do see the quad with unshaded. At least that’s that.
When you get to the point of wanting to do this the filter post-processing way… take a look at the fog or depth of field post processors. Fog was the one I used to learn how to make them and depth of field is the one that I wrote… I tried to comment it well in hopes that it might help someone else.
Most of what you do with the quad and depth buffer, etc. will be done for you in the filter post-processing environment. It’s really nice.
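For reference, a jME3 Filter skeleton is quite small (a sketch; "Materials/LensFlare.j3md" is a placeholder for whatever material definition the flare shader ends up in):
[java]
import com.jme3.asset.AssetManager;
import com.jme3.material.Material;
import com.jme3.post.Filter;
import com.jme3.renderer.RenderManager;
import com.jme3.renderer.ViewPort;

// Sketch of a minimal filter; the .j3md path is a placeholder.
public class LensFlareFilter extends Filter {

    public LensFlareFilter() {
        super("LensFlareFilter");
    }

    @Override
    protected void initFilter(AssetManager manager, RenderManager rm,
                              ViewPort vp, int w, int h) {
        material = new Material(manager, "Materials/LensFlare.j3md");
    }

    @Override
    protected Material getMaterial() {
        return material;
    }
}
[/java]
It gets hooked up through a FilterPostProcessor: create one with the asset manager, addFilter() the filter, and addProcessor() the whole thing onto the viewport.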
Yup. I went through pretty much all the shaders, but that was in the beginning phase, when I knew next to nothing about the layers, how to get the depth buffer, etc. Once I have -some- result, it shouldn't be that much of a problem to migrate it there.
I’m currently implementing the vertex coloring in my own shader so I can see if things are working on that front.
I know that something is being rendered, I just can’t see the darn thing on the screen. I’m stubborn though.
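For the record, a cheap way to sanity-check vertex colors without touching the custom shader is the stock Unshaded material's VertexColor flag (a sketch; the color values are arbitrary):
[java]
// Debug sketch: feed the mesh a color buffer and let the stock
// Unshaded material show it, bypassing the custom shader entirely.
float[] colors = {
    1f, 0f, 0f, 1f,  // bottom-left:  red
    0f, 1f, 0f, 1f,  // bottom-right: green
    0f, 0f, 1f, 1f,  // top-left:     blue
    1f, 1f, 1f, 1f   // top-right:    white
};
flareMesh.setBuffer(Type.Color, 4, colors);
Material debugMat = new Material(assetManager, "Common/MatDefs/Misc/Unshaded.j3md");
debugMat.setBoolean("VertexColor", true); // stock Unshaded parameter
lensQuad.setMaterial(debugMat);
[/java]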
Why create your own quad? Why not use the Quad class?
Also, your texture coordinates are wrong:
texCoord[0] = new Vector2f(0.0f, 0.0f);
texCoord[1] = new Vector2f(width, 0.0f);
texCoord[2] = new Vector2f(0.0f, height);
texCoord[3] = new Vector2f(width, height);
Texture coordinates go from 0.0 to 1.0; doing this will yield weird results.
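Normalized, they would be:
[java]
// Texture coordinates covering the whole texture exactly once.
texCoord[0] = new Vector2f(0.0f, 0.0f);
texCoord[1] = new Vector2f(1.0f, 0.0f);
texCoord[2] = new Vector2f(0.0f, 1.0f);
texCoord[3] = new Vector2f(1.0f, 1.0f);
[/java]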
So you're making a fullscreen quad to render the effect; why did you give up on Filters?
To render the quad correctly you have to make the camera orthogonal before rendering it, or you'll have perspective issues and your quad won't end up where you want it to be.
Look at how it's done in the FilterPostProcessor. The quad is even a Picture object.
nehon said:
Why create your own quad? Why not use the Quad class?
I know, I just wanted to copy the exact same thing the tutorial was doing.
Also your texture coordinates are wrong :
`
texCoord[0] = new Vector2f(0.0f, 0.0f);
texCoord[1] = new Vector2f(width, 0.0f);
texCoord[2] = new Vector2f(0.0f, height);
texCoord[3] = new Vector2f(width, height);
`
Texture coordinates go from 0.0 to 1.0; doing this will yield weird results.
Yeah. I should've cleaned the code better. Honestly at the point where I was, I was willing to test/try pretty much anything to have something, anything on the screen. :/
So you're making a fullscreen quad to render the effect; why did you give up on Filters?
Because I was too dumb to get some things through my thick skull... :( Now I know how to properly (I hope anyway) make the transition to the filter.
To render the quad correctly you have to make the camera orthogonal before rendering it, or you'll have perspective issues and your quad won't end up where you want it to be.
Look at how it's done in the FilterPostProcessor. The quad is even a Picture object.
I understand what you're saying here. The thing is, from the C++ code I'm reading, it looks like the guy who wrote it is either a moron or he's trying to do something that escapes me.
The quad he's making is actually reversed… Then, in his flare rendering method, he culls the front face, effectively rendering the back face. That doesn't make sense to me; that's doing several steps for nothing.
Here's his quad init.
[java]
[stuff]
QuadIndices[0] = 1;
QuadIndices[1] = 0;
QuadIndices[2] = 2;
QuadIndices[3] = 1;
QuadIndices[4] = 2;
QuadIndices[5] = 3;
[other stuff]
/* The above should be 2 0 1 1 3 2 for a camera-facing quad. Why reverse it!? */
[/java]
Then later on, in the CLensFlares::Render_LensFlare method he goes:
[java]
[stuff before]
glEnable(GL_CULL_FACE);
glCullFace(GL_FRONT);
glEnable(GL_BLEND);
glBlendFunc(GL_ONE,GL_ONE_MINUS_SRC_COLOR);
glDisable(GL_DEPTH_TEST);
[stuff after]
[/java]
After the above, he computes the camera, positions, etc., and blends some flare textures together, doing this:
[java]
glBindBuffer( GL_ARRAY_BUFFER, VertexBufferId);
glBindBuffer( GL_ELEMENT_ARRAY_BUFFER, IndexBufferId);
LensFlareShader->Set_VertexAttributes();
[some more calculations]
LensFlareShader->Set_ShaderFloatVector3(&ProjectedCameraSpacePosition, "ProjectedCameraSpacePosition");
LensFlareShader->Set_ShaderMatrix4X4(&tempMatrix, "matWorldViewProjection");
LensFlareShader->Set_ShaderFloatVector4(&Brightness, "Brightness");
LensFlareShader->Set_Texture(GL_TEXTURE0, 0, pDepthTexture, "DepthTexture");
LensFlareShader->Set_Texture(GL_TEXTURE1, 1, pLightTexture, "LensFlareTexture");
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, NULL);
[/java]
to finally call glDrawElements(...). That's not really working right now... well, not in my game. All I get is sometimes a flickering orange texture over the whole screen.
I'm not putting any of my code because it's a horrible mess of comments and haphazard lines. Ok, it's not that bad, but it's not good looking.
Forgot to mention that the last part is repeated 8 times if the conditions are right (light source not in the middle of the screen and inside the view frustum). It looks like this:
[java]
[calculations…]
LensFlareShader->Set_ShaderMatrix4X4(&tempMatrix, "matWorldViewProjection");
LensFlareShader->Set_Texture(GL_TEXTURE1, 1, pLensFlareTexture1, "LensFlareTexture");
/* the texture switches between these: pLensFlareTexture1, pLensFlareTexture2, pLensFlareTexture3 and pLensFlareTexture4 */
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, NULL);
[more calculations for the other 7 times]
[/java]
As I said, he’s stamping those flares all over the place and I’m having a real hard time replicating what he’s doing.
Well, maybe you shouldn't stick so closely to what is done in the C++ example.
It uses no engine, really, only direct OpenGL calls.
Maybe you should rethink the whole thing.
Basically it's 8 quads rendered in ortho mode on the screen, in front of everything, that move according to the camera angle.
Maybe you should do a processor that just does that first; then you can focus on rendering the correct pictures on these quads.
hehe. Nice of you to post that 30 minutes after I decided this whole damn thing was a waste of time.
Actually, that's not entirely true. I have learned so much in the last week that I probably wouldn't have without this crap tutorial, so, yeah, that's a huge plus. But you're right. Lens flares are pretty simple, and wanting to emulate this thing was not the best idea from the start.
Now that I’m switching gears it should be easier and faster to get that thing out the door.
Thanks for all your insights though.
From my experience, you should have 2 or 3 current subjects you're working on.
So when you bump your head on one thing you can switch to another to rest your mind, move on to something else, and limit the frustration. That's what I did for SSAO and Water when my eyes started to bleed…
nehon said:
From my experience, you should have 2 or 3 current subjects you're working on.
So when you bump your head on one thing you can switch to another to rest your mind, move on to something else, and limit the frustration. That's what I did for SSAO and Water when my eyes started to bleed...
It's good advice in theory, but not when you're like me and *kinda* obsessive. ;)
On a different note, I've switched back the lens flare to a post process filter. Up to now it's working except for a few things.
My main problem is that the engine stretches the PNG images to the whole screen. They're 2@128x128, 2@256x256, and 1@64x64. Now, what would be the way to constrain those images to their proper sizes on the screen? Right now they're all centered in the middle of the screen (I haven't transferred the position algorithm to the shader yet) but taking up the whole screen. That doesn't really work, as you can imagine. ;)
Yeah, that's expected: the filter renders itself on a fullscreen quad. That's why I talked about a processor that would manage regular quads, but it could even be a control…
Maybe something like the particle emitter, with only one mesh, to avoid sending 8 objects to the GPU…
A control would position them according to the camera direction and then let the engine render them, disabling depth test and depth write.
I take it you're talking about a SceneProcessor? If that's the case, then I imagine I'd have to do a quad for each texture. I'm a bit confused about that.
Yeah, my bad, I just throw my ideas at you as they come.
I agree it can be confusing.
I'd see 2 ways of doing it:
1- Create a LensFlareSceneProcessor: the processor manages 8 Pictures textured with the different flare images. In preFrame or postQueue, compute the position of the quads according to the camera direction and sun position; in postFrame, render the 8 quads on top of the rendered scene, using a cam in parallel projection, disabling depth write and depth test. (A minimal sketch follows the cons list below.)
Pros:
- relatively easy to do, because you can easily plug the calculations and rendering into the jME rendering flow
- managing 8 Pictures is easy; you can use the setPosition method to position them on screen
- you won't even have to make a shader, since the Unshaded material should work for this
Cons:
- I feel it's a bit overkill to use a processor for this, but maybe that's just me…
- 8 Pictures are 8 objects to send to the GPU
- it probably won't get along very well with the FilterPostProcessor… or maybe use it before the FPP…
- what if you have several lens flares? Would it work? Something to test.
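A minimal sketch of that processor idea (LensFlareProcessor and flarePics are made-up names; the positioning math is just a placeholder that spreads the sprites along the line through the screen center):
[java]
import com.jme3.math.Vector2f;
import com.jme3.math.Vector3f;
import com.jme3.post.SceneProcessor;
import com.jme3.renderer.Camera;
import com.jme3.renderer.RenderManager;
import com.jme3.renderer.ViewPort;
import com.jme3.renderer.queue.RenderQueue;
import com.jme3.texture.FrameBuffer;
import com.jme3.ui.Picture;

public class LensFlareProcessor implements SceneProcessor {

    private RenderManager rm;
    private ViewPort vp;
    private final Picture[] flarePics; // one Picture per flare texture
    private final Vector3f lightPos;   // world-space light position

    // The Pictures are assumed to be set up elsewhere with
    // setImage()/setWidth()/setHeight().
    public LensFlareProcessor(Picture[] flarePics, Vector3f lightPos) {
        this.flarePics = flarePics;
        this.lightPos = lightPos;
    }

    public void initialize(RenderManager rm, ViewPort vp) {
        this.rm = rm;
        this.vp = vp;
    }

    public void postQueue(RenderQueue rq) {
        // Project the light into screen space, then spread the flare
        // sprites along the line through the screen center.
        Camera cam = vp.getCamera();
        Vector3f screen = cam.getScreenCoordinates(lightPos);
        Vector2f center = new Vector2f(cam.getWidth() * 0.5f, cam.getHeight() * 0.5f);
        Vector2f dir = new Vector2f(screen.x, screen.y).subtractLocal(center);
        for (int i = 0; i < flarePics.length; i++) {
            float t = 1f - 2f * i / (flarePics.length - 1f); // 1 down to -1
            flarePics[i].setPosition(center.x + dir.x * t, center.y + dir.y * t);
            flarePics[i].updateGeometricState();
        }
    }

    public void postFrame(FrameBuffer out) {
        // Draw the sprites on top of the rendered scene with the camera
        // in parallel projection.
        rm.setCamera(vp.getCamera(), true);
        for (Picture p : flarePics) {
            rm.renderGeometry(p);
        }
        rm.setCamera(vp.getCamera(), false);
    }

    public void reshape(ViewPort vp, int w, int h) { }
    public boolean isInitialized() { return rm != null; }
    public void preFrame(float tpf) { }
    public void cleanup() { }
}
[/java]
It would be attached with viewPort.addProcessor(...), like any other processor.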
2- Create something similar to the particle emitter: a Geometry with a single mesh (the 8 quads packed together), with an associated control that positions the quads according to the camera direction and sun position. There again, no depth write nor depth test.
Pros:
- Damn fast!! Just a 16-triangle geometry to render and some buffer updates
- Several flares are no problem; the engine will just handle them like any other geometry
Cons:
- Tricky buffer updates to position the quads in the scene, i.e. updating the packed mesh (see the sketch after this list)
- You'll have to pack the 8 textures into some kind of texture atlas, or maybe feed the shader with 8 textures, why not…
- You'll have to implement a shader that can handle this texture atlas or texture list and render the correct texture on the corresponding quad
- Might be tricky to correctly place the flares in the scene (maybe you'll have to scale them, etc.) because you won't be able to render them on top of the scene like with the processor.
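Those buffer updates would look roughly like this (a sketch; quadCenters and quadSize are hypothetical values the control would compute each frame from the camera direction and sun position):
[java]
// Sketch of the per-frame buffer update, particle-emitter style.
// This would be a method on the control managing the packed mesh.
void updateQuadPositions(Mesh mesh, Vector3f[] quadCenters, float quadSize) {
    VertexBuffer pvb = mesh.getBuffer(VertexBuffer.Type.Position);
    FloatBuffer pb = (FloatBuffer) pvb.getData();
    pb.rewind();
    float s = quadSize * 0.5f;
    for (Vector3f c : quadCenters) {
        // Four corners per quad, matching the index buffer's winding.
        pb.put(c.x - s).put(c.y - s).put(c.z);
        pb.put(c.x + s).put(c.y - s).put(c.z);
        pb.put(c.x - s).put(c.y + s).put(c.z);
        pb.put(c.x + s).put(c.y + s).put(c.z);
    }
    pvb.updateData(pb); // flags the buffer for re-upload to the GPU
    mesh.updateBound();
}
[/java]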
I would choose solution 2 because the “Damn fast” pro is the only one that really matters IMO, but it’s really trickier.
Just curious, but in the second case, is there overhead for putting it in its own ViewPort… and if so, how much do you think?
I don't know, but it would be worth a try indeed! Or maybe just putting it in the transparent or translucent bucket could be enough…
Also, there might be a way to achieve solution 2 with the existing particle emitter system, making a LensFlareParticleInfluencer…
That would make it the best solution IMO.
Let it be known here that @nehon wants my soul.
Alright, thanks for your, again, invaluable input. I will be going to bed soon and hopefully I’ll wake up with an epiphany at the ready. If not then I’ll deeply analyze what you’re suggesting and go with it. Since the second suggestion seems hellish I might just go with that. Besides, it would give me a great opportunity to bug and annoy you with questions and stuff.
For your viewing pleasure, or displeasure…
Here’s what it looks like right now. As discussed, the flares are centered in the middle of the screen and full screen… But even though it’s an “almost-fail”, I think it looks spiffy.
You be the judge.
http://www.youtube.com/watch?v=fw3uYS8kZhc
You like or not?
PS: Pretty hard to tweak and fix when they're all jumbled together like that.