LensFlare Effect Quad not rendered

I’ve been able to scale the flares’ textures, but as expected they’re at 0, 0.







I can tweak the scaling so the textures are placed somewhere other than 0,0, but then they're overly big…



/rant on

Maybe I'm misunderstanding things here, but I can't see why I can't achieve what I want. Why use images of different sizes, 128x128, 64x64 or even 4096x4096, if the end result will be screen-sized? Sounds moronic to me.



/rant dimmed down



The material says: we have X textures/images. Why can't those images be used as is, at their proper sizes, on the quad? I've been looking at "texture bombing", which might help, but I'm still analyzing the technique. There's also clamping, as Paul suggested to me; I'll check that out later if bombing fails or isn't "compatible".



Then there's the PostProcessing… If I can process the scene (as I would with a distortion filter or whatnot) and apply other textures depending on the backbuffer depth (for example), then why can't I do this with a flare texture if I provide the location of the flare? Take the water shader, for example: I do understand that it's on a different plane, but those textures are not full-quad (or are they?); they're repeating, afaik. So if they're repeating and you remove the repeat, I imagine the texture would simply be applied at a certain location and the rest would probably be transparent, black or whatever.



/rant off



Again, I don’t get it. I am certain there’s a way to do that flare by modifying what I have right now. There has to be something, some functions that can enable me to do this. I refuse to believe otherwise.

In the .vert shader you can set the texture coordinates to whatever you want. If the texture is clamped instead of repeating then, for example, if you set the texture coordinates to -1, -1 and 5, 5 then your image will be 1/6th the screen size and in the lower left corner 1/6th in and 1/6th up. Scale by setting the difference between the lower corner and the upper corner and control the position by moving those corners.



You can use the values of the incoming texture coordinates (0,0), (0,1), (1,0), (1,1) to figure out which corner you are in. Done right, I think you can even do the math without resorting to if statements but I haven’t worked it out on paper or anything.
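
In jME terms you don't even have to touch the .vert for that; the same thing can be done from the Java side by writing the stretched coordinates into the quad's TexCoord buffer and clamping the texture. Untested sketch, assuming it runs inside a SimpleApplication, with a made-up texture path and the numbers from the example above:

```java
// Untested sketch: a full-screen quad whose texture coordinates run from (-1,-1) to (5,5).
// With the texture clamped, the image only shows up in the 0..1 sub-range, i.e. 1/6th of
// the screen, 1/6th in and 1/6th up from the lower-left corner.
Texture flareTex = assetManager.loadTexture("Textures/flare0.png"); // made-up path
flareTex.setWrap(Texture.WrapMode.EdgeClamp); // clamp instead of repeat
                                              // (assumes the flare fades to black at its edges)

Quad fullScreenQuad = new Quad(1, 1); // stretched over the screen by whatever renders it
fullScreenQuad.setBuffer(VertexBuffer.Type.TexCoord, 2, new float[] {
    -1, -1,  // lower-left vertex
     5, -1,  // lower-right
     5,  5,  // upper-right
    -1,  5   // upper-left
});
// Scale = upper corner minus lower corner (a span of 6 here, so the image is 1/6th of the
// screen); move both corners together to move the flare. In a .vert the branch-free
// equivalent would be: texCoord = mix(lowerCorner, upperCorner, inTexCoord).
```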



…or you could just render multiple non-full screen quads in their own ViewPort. It’s probably faster.
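
Roughly, that route would look something like this (untested sketch assuming a SimpleApplication, so renderManager, cam and assetManager are its fields; the texture path and the splat position are made up):

```java
// Untested sketch: flare splats as small quads in their own post ViewPort.
ViewPort flareView = renderManager.createPostView("FlareView", cam);
flareView.setClearFlags(false, false, false); // keep the scene that was already rendered

Node flareScene = new Node("FlareScene");
flareScene.setQueueBucket(RenderQueue.Bucket.Gui); // screen-space (pixel) coordinates
flareScene.setCullHint(Spatial.CullHint.Never);    // gui-style quads shouldn't be frustum culled

Material flareMat = new Material(assetManager, "Common/MatDefs/Misc/Unshaded.j3md");
flareMat.setTexture("ColorMap", assetManager.loadTexture("Textures/flare0.png")); // made-up path
flareMat.getAdditionalRenderState().setBlendMode(RenderState.BlendMode.AlphaAdditive);

Geometry splat = new Geometry("FlareSplat", new Quad(128, 128)); // texture-sized, in pixels
splat.setMaterial(flareMat);
splat.setLocalTranslation(200, 150, 0); // put the projected flare position here

flareScene.attachChild(splat);
flareView.attachScene(flareScene);

// Scenes attached to your own ViewPort are not updated for you; in simpleUpdate(tpf):
flareScene.updateLogicalState(tpf);
flareScene.updateGeometricState();
```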

I won't argue with you, Paul, but in my mind, if I take a texture and a coordinate and send those to the shader, I'm willing to bet it'll be faster than any additional viewport. Even with bloated code, using the GPU directly while we're already processing the scene would be faster.

Well, the issue is fill rate. If you are drawing a full screen quad just to render 8 little splats on the screen you are filling the screen with mostly nothing. As I understand it, a full screen ViewPort with no clearing is essentially free… at worst you’d clear the Z-buffer and you’d have one very efficient full screen z-buffer clear.



Maybe the difference is insignificant… but then why not go with the easier one?



Getting the full screen approach to work is probably an interesting exercise anyway. If it were me, I’d do it just so that I could “win”. :slight_smile:

pspeed said:
Getting the full screen approach to work is probably an interesting exercise anyway. If it were me, I'd do it just so that I could "win". :)

Welcome to my world. ;)
pspeed said:
In the .vert shader you can set the texture coordinates to whatever you want. If the texture is clamped instead of repeating then, for example, if you set the texture coordinates to -1, -1 and 5, 5 then your image will be 1/6th the screen size and in the lower left corner 1/6th in and 1/6th up. Scale by setting the difference between the lower corner and the upper corner and control the position by moving those corners.

You can use the values of the incoming texture coordinates (0,0), (0,1), (1,0), (1,1) to figure out which corner you are in. Done right, I think you can even do the math without resorting to if statements but I haven't worked it out on paper or anything.

...or you could just render multiple non-full screen quads in their own ViewPort. It's probably faster.

The problem with that approach is that when you set gl_Position = XYZ, by the time the .frag comes up most of your pre-rendered scene will be lost, because you'll start at the position you've set gl_Position to, effectively discarding everything of the pre-rendered texture outside those coordinates.

Since you can't use gl_Position in a .frag file, even if you use a custom out in your .vert (to save the position where that effect should be placed), you can't iterate over gl_Position. You -have- to start from the value gl_Position was set to when the .vert returned.

PostProcessFilter -expects- you, unless I'm really getting this wrong, to take care of rendering an image/texture/whatever that reflects the scene graph, and rightfully so. So in reality, the filter should only be used to add something full-screen to the rendered frame you've been passed. If you don't draw that scene yourself, it won't be drawn for you (entirely black screen).
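
Just to show what I mean, the bare skeleton of a filter is something like this (the .j3md name is made up); everything interesting has to happen in its material/shader, including writing the scene back out:

```java
// Bare-bones jME3 Filter; the material name is hypothetical. The post processor binds the
// rendered scene to this material as "Texture" (m_Texture in GLSL), but it's the filter's
// own shader that has to output that scene again, or the frame is gone.
public class FlareFilter extends Filter {

    private Material material;

    public FlareFilter() {
        super("FlareFilter");
    }

    @Override
    protected void initFilter(AssetManager manager, RenderManager renderManager,
                              ViewPort vp, int w, int h) {
        material = new Material(manager, "MatDefs/Flare/Flare.j3md"); // hypothetical .j3md
    }

    @Override
    protected Material getMaterial() {
        return material;
    }
}
```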

I think what would be needed is to implement an Overlay system. I know Unity has something like that, because I found a very interesting snippet that I thought I could use, but that led me to the above problem.

With an overlay system you could add effects to the GUI itself, to the screen, etc. If someone were involved enough, a whole GUI could even be built from it, or effects added to Nifty itself. Imagine being able to add a twisting light effect (similar to a particle) around a Nifty scroll bar. With an overlay system, that could be done. At least I think so. ;)

That link explains how that Unity feature works, and I tried to use a variant of the example, but as I said it didn't work. :(

Well, hopefully I'm not too far off-base and that post was understandable (I surely don't want to edit it. :P)

Why is an overlay system different than a full screen view port?

madjack said:
The problem with that approach is that when you set gl_Position = XYZ, by the time the .frag comes up most of your pre-rendered scene will be lost, because you'll start at the position you've set gl_Position to, effectively discarding everything of the pre-rendered texture outside those coordinates.


I never suggested messing with gl_Position. Presumably you are adding your own textures that are getting rendered and you could also give them texture coordinates as varying values. I made a lot of assumptions about how you were rendering the overlaid textures I guess.
pspeed said:
I never suggested messing with gl_Position. Presumably you are adding your own textures that are getting rendered and you could also give them texture coordinates as varying values. I made a lot of assumptions about how you were rendering the overlaid textures I guess.

Fair enough. :) I thought that was what you were suggesting.

pspeed said:
Why is an overlay system different than a full screen view port?

Because that could be extended by anyone, and I mean extended in the "used" sense. Since we only have access to the "Texture" and "DepthTexture", every time you want to write a filter it has to use the whole screen, and by doing so you're responsible for making sure your filter doesn't effectively delete the scene's rendered frame. With an overlay system, the engine could render a quad, a flare for example, that would then be shrunk and blended with the actual generated scene frame at a position I tell it to. It could also be a full-screen effect. Even if the latter could be done as a filter, an overlay would free the user from having to make sure the scene frame is present in his effect. That frees users from jumping through hoops like I'm doing right now (which I'll soon have to admit is pretty much impossible. Maybe if I were a GLSL guru... but I'm not.) ;)

Quick example: in the failed experiment in the video above, if the point light of the sun wasn't in the view frustum, I had to make sure "m_Texture" was rendered. Even when the point light was in the view frustum, I had to add the "m_Texture" colors to the composition of the other images. It's an obligation; if you don't, the screen will be black/blank.

Maybe I'm not explaining it right... Let's look at it like an AbstractAppState if you will... Maybe that makes more sense?

As I said, it's entirely possible I'm completely off track and totally missing how things work. If that's the case, forgive me.

[Woah... I think we're having an Earthquake! :/]

Just in case… I think there may have been a misinterpretation somewhere. There are post processor filters and there are ViewPorts. Post processor filters have a full-screen shader infrastructure, and you must make sure you don't accidentally fail to render the scene. ViewPorts are mostly organizational entities that allow you to "overlay" different "scenes" on top of each other.



I was talking about the latter when I said view port. The only chance you have of obliterating the previously rendered stuff is if you set the clear operations badly… though that's at least more straightforward than accidentally forgetting to render the texture that contains the scene. Also, I'd hope that a ViewPort is more efficient than a post processor when you don't need the full scene in a texture, but I'm only guessing.
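
By "clear operations" I just mean the ViewPort's clear flags. A quick sketch, where 'overlayView' stands for the extra ViewPort layered over the main one:

```java
// 'overlayView' is the hypothetical extra ViewPort drawn after the main view.
overlayView.setClearFlags(false, true, false);   // keep colour, clear only depth: the scene survives
// overlayView.setClearFlags(true, true, true);  // clearing colour too is how you'd wipe it out
```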

Ah I see.



Hmm…



I'm tired, about to hit the hay… I guess when you think about it, if that viewport is empty, no spatials except the quads, that could work and at the same time be speedy with low overhead. I never really thought about viewports in that sense, as a "projection screen"… Food for thought, I guess.



Thanks.