PostProcessing with multiple viewports

Greetings! First post, so let's start by thanking the team for this most precious engine/framework/thing!



I have multiple viewports set up ( 8 ), each with its own camera.

All cameras are synced in rotation and location. Each one has its own viewing “range”, for example:

[java]
/* Unit.AU = 1 AstronomicalUnit, Settings.scale = 0.001f, don't kill me for using ridiculously large numbers */
cams[ 0 ].setFrustumPerspective( 45f, (float)cam.getWidth( ) / cam.getHeight( ), 0.000001f, 0.000100f );
cams[ 1 ].setFrustumPerspective( 45f, (float)cam.getWidth( ) / cam.getHeight( ), 0.0001f, 0.0100f );
cams[ 2 ].setFrustumPerspective( 45f, (float)cam.getWidth( ) / cam.getHeight( ), 0.01f, 1.00f );
cams[ 3 ].setFrustumPerspective( 45f, (float)cam.getWidth( ) / cam.getHeight( ), 1f, 100f );
cams[ 4 ].setFrustumPerspective( 45f, (float)cam.getWidth( ) / cam.getHeight( ), 100f, 10000f );
cams[ 5 ].setFrustumPerspective( 45f, (float)cam.getWidth( ) / cam.getHeight( ), 10000f, 1000000f );
cams[ 6 ].setFrustumPerspective( 45f, (float)cam.getWidth( ) / cam.getHeight( ), 1000000f, Unit.AU );
cams[ 7 ].setFrustumPerspective( 45f, (float)cam.getWidth( ) / cam.getHeight( ), Unit.AU * Settings.scale, ( Unit.AU * 200L ) * Settings.scale );
[/java]



All these viewports get rendered on top of each other: the farthest one [ 7 ] gets rendered first and clears everything, and all following ones only clear the z-buffer.
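Roughly, the layering is wired up like this (simplified sketch, not my exact code; "cams" is the array from above and the default viewport is left unused):

[java]
// farthest layer is created first so it renders first; it clears color + depth,
// all closer layers only clear the depth buffer
ViewPort[] vps = new ViewPort[ 8 ];
for ( int i = 7; i >= 0; i-- ) {
    vps[ i ] = renderManager.createMainView( "layer" + i, cams[ i ] );
    vps[ i ].attachScene( rootNode );
    vps[ i ].setClearFlags( i == 7, true, i == 7 );
}
[/java]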



I do this to get more precision in the z-buffer, so I can get very close to objects but also have extremely far render distances without getting z-fighting between objects that are 10000000km apart :wink:



I want to use some postprocessors, like bloom ( or pssm shadows [ is this even post? ] ). Now I can't just go ahead and apply the same bloom filter to every viewport, as they accumulate in the shader and make the whole thing explode into rainbows. I also cannot apply a separate bloom filter to every viewport, as they would overwrite each other, effectively resulting in only the closest cam getting rendered.



One idea I did not try yet is to render everything to a texture and then apply the bloom, but somehow I believe that isn't going to work, as the texture will only be a simple colormap. Is there a way to render only the glow maps ( as the bloom filter does, I believe ), use that for the bloom filter, and then merge the result with the merged viewport thing?

Anyways, that is only ONE thing; I don't even know how to get started with the other filter stuff, like pssm and AO.



Any alternatives to my approach, maybe even to the viewport splitup?



Sidenote: I noticed this:

http://i.imgur.com/BAQnJ.jpg

This “gap” shows up between the different viewports.

I tried overlapping the frustums a bit, but it still shows up when there is transparency involved.

1 Like

Thanks are much appreciated :slight_smile:

You should probably scale your sizes so that you only use values up to, say, 50,000 in the OpenGL context. Most OpenGL implementations run at 32-bit accuracy and you won't get good results with too large numbers. Try to translate from your “real” world values to screen values. So if you have a complete galaxy, check which solar system (at least) you are in and then make the coordinate system and visual object scale according to that. If you work at scales this large there's no way to have OpenGL object == world object, due to scale and sheer count/memory issues.

1 Like

Scale: Yes, I already did that. I actually have that as a variable ( Settings.scale ) that is used on all size-related things (here I had it at 0.001, but I also tried it with much smaller scales, such as 0.00000001 … )

I also translate the root node instead of moving the cameras, so I get the most precision close to the camera and all that floating-point jittering doesn't happen.

But the problem I try to solve with the multiple z-buffers still persists, as the size relationship is always the same, regardless of scale.

For example, for a spaceship or asteroid of 30wu length, I want to get up to 1wu close, or even closer, without clipping, but at the same time have that gas giant about 500000kwu away not clipped AND render an atmosphere about 100kwu above the surface without z-fighting with the planet. If I scale everything down by a factor of 1000 I will have a 0.03wu spaceship and need to get 0.001wu close, and then the atmosphere would be 100wu away from the planet → voilà, z-fighting again.



So I'm in a bit of a predicament here. I read about another solution, scaling far objects down and placing them “not so far”, but I am sceptical about the parallax movement destroying the viewer's depth perception of a large far-away object when moving at very high speeds… hm hm hm…



edit:

Ok, maybe I should be more specific with the requirements:

a space of 1AU in radius, with z-buffer precision at the rim of ~100km and a ~3m near plane…



uuhm… http://www.sjbaker.org/steve/omniv/love_your_z_buffer.html … gotta calculate… -.-
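For anyone calculating along, here is a quick back-of-the-envelope helper based on that page (my own sketch, assuming a standard non-linear depth buffer):

[java]
// the stored value is roughly s * ( a + b / z ) with a = zFar / ( zFar - zNear ),
// b = zFar * zNear / ( zNear - zFar ), s = 2^bits - 1,
// so the smallest resolvable step at distance z is about
//   dz ≈ z * z * ( zFar - zNear ) / ( zFar * zNear * s )
static double zPrecision( double z, double zNear, double zFar, int bits ) {
    double s = Math.pow( 2, bits ) - 1;
    return ( z * z * ( zFar - zNear ) ) / ( zFar * zNear * s );
}
// zPrecision( 1.5e11, 3, 1.5e11, 24 ) is on the order of 1e14 wu ... nowhere near ~100km
[/java]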



ok, i think its time to

1.) go to bed -.-

2.) …zzZZZz

Hm, can you print out the values of the fighting objects somehow, to see if it's actually an accuracy issue? I mean, you still have pretty high values in your frustum settings; any way to scale that down? If the distances are too small, maybe you should go with actual LOD at this scale already. A spaceship that's 100000 units of any kind away cannot be much larger than a few pixels if you see another ship at 0 at full scale, so replacing those ships with, say, image particles which get a different perspective/position to simulate the distance should also work.

1 Like

Ah, yes, of course, I could move the scaled versions to simulate the distance. I guess I'll try that tomorrow after some good sleep; I hope it won't be too heavy on the CPU, though… hm… no, it won't ( gotta stop thinking so generic, I don't have THAT many objects… and never will have… uuuuh… tired )



Thanks til now. I'll post back tomorrow or later, once I've given the thing more thought and have tried that parallax simulation idea.



seeeees.

For the post processing, it won’t be easy…

Processors are made to work on a single viewport. So I'm afraid you'll have to go through a lot of tricks to make it work.



The idea would be to accumulate the 8 viewports' renders into a single frame buffer and then apply the post process on this accumulation.
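Something along these lines could work as a starting point (just a sketch, untested; "vps" would be your 8 viewports, all assumed to render at the same resolution):

[java]
int w = settings.getWidth(), h = settings.getHeight();

// offscreen buffer that all 8 layered viewports draw into
FrameBuffer fb = new FrameBuffer( w, h, 1 );
Texture2D tex = new Texture2D( w, h, Image.Format.RGBA8 );
fb.setDepthBuffer( Image.Format.Depth );
fb.setColorTexture( tex );
for ( ViewPort vp : vps ) {
    vp.setOutputFrameBuffer( fb );
}

// show the accumulated texture on a fullscreen Picture in the gui viewport
Picture fullscreen = new Picture( "accumulated" );
fullscreen.setTexture( assetManager, tex, false );
fullscreen.setWidth( w );
fullscreen.setHeight( h );
guiNode.attachChild( fullscreen );

// the post process then runs once, on the accumulation
FilterPostProcessor fpp = new FilterPostProcessor( assetManager );
fpp.addFilter( new BloomFilter() );
guiViewPort.addProcessor( fpp );
[/java]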

But you'll have issues anyway with depth… because I don't see how this buffer could be properly accumulated with different layers and still be correct.

So you may have strange things with SSAO because it uses depth.



For shadows, one shadow processor for each viewport might work (if you use the DirectionalLightShadowRenderer it’s not a post process, if you use the DirectionalLightShadowFilter it is). So use the processor.
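For example, something like this (sketch only; "viewPorts" and "sun" being whatever you already have):

[java]
// one shadow renderer per viewport, all driven by the same directional light;
// 1024 map size and 3 splits are just example values
for ( ViewPort vp : viewPorts ) {
    DirectionalLightShadowRenderer dlsr = new DirectionalLightShadowRenderer( assetManager, 1024, 3 );
    dlsr.setLight( sun );
    vp.addProcessor( dlsr );
}
[/java]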



IMO your 8 layers of viewports will just get you into trouble…

You'd better spend the time making it all fit in one viewport.

1 Like

Tip for your planets ( 40km frustum ):

Use the same scale linearly for everything up to 20km, then use the last 20km for all the rest exponentially, by scaling the geometry according to the virtual distance. Now why use 20km for this? Well, if two planets are behind each other they will be at, say, 22 and 25 km, so they can still be depth-sorted normally.
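One way to read that (just my sketch of the idea, numbers are only examples): compress everything beyond 20km logarithmically into the 20-40km band, so each extra rendered meter stands for exponentially more real distance, and scale the geometry by renderedDistance( d ) / d so it keeps its apparent size.

[java]
static final float NEAR_LIMIT = 20000f;   // 20 km, rendered 1:1
static final float FAR_LIMIT  = 40000f;   // frustum far plane
static final float MAX_REAL   = 1.5e11f;  // farthest "real" distance expected (~1 AU)

static float renderedDistance( float realDist ) {
    if ( realDist <= NEAR_LIMIT ) {
        return realDist;
    }
    // map [NEAR_LIMIT, MAX_REAL] logarithmically into [NEAR_LIMIT, FAR_LIMIT]
    float t = (float) ( Math.log( realDist / NEAR_LIMIT ) / Math.log( MAX_REAL / NEAR_LIMIT ) );
    return NEAR_LIMIT + t * ( FAR_LIMIT - NEAR_LIMIT );
}
[/java]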

1 Like

I think you will always have a problem with seams if you split a planet/moon across two viewports. Getting rounding errors of some kind not to show up here will be quite difficult, I think.



Usually when I think of splitting up a scene like this, I think “near”, “mid”, and “far”. (or just “near” and “far” if I could get away with it.)



The far-away stuff will not require an accurate z-buffer/sorting, as that stuff is usually far apart anyway. The near sorting will require an accurate z-buffer, but you can also adjust what near and far mean dynamically as you get closer to a planet (this is often where “mid” is useful).



When I did whole-earth 3D fly-throughs and stuff, I adjusted near and far based on distance from the surface. At the surface, near was 0.1 meters and far was “however many meters to the horizon”, whereas from space near was 10 or 100 and far was “diameter of planet divided by two”. A similar scheme could be set up for the solar system, I think. When you are on a planet, every other planet, star, moon, etc. is pretty far away.
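In sketch form, something like this (illustrative numbers only, not my actual code; planetCenter/planetRadius are whatever your scene already tracks):

[java]
// adjust the frustum every frame from the camera's altitude above the surface
float altitude = cam.getLocation().distance( planetCenter ) - planetRadius;
float near = Math.max( 0.1f, altitude * 0.01f );
float far = Math.max( 1000f, horizonDistance( altitude, planetRadius ) );
cam.setFrustumPerspective( 45f, (float) cam.getWidth() / cam.getHeight(), near, far );

// distance to the horizon on a spherical planet of radius R seen from height h
static float horizonDistance( float h, float R ) {
    return (float) Math.sqrt( h * ( 2f * R + h ) );
}
[/java]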



This would certainly simplify your shadow situation.

1 Like

Because you asked for alternatives:



http://www.gamedev.net/blog/715/entry-2001520-logarithmic-depth-buffer/



http://www.gamedev.net/blog/73/entry-2006307-tip-of-the-day-logarithmic-zbuffer-artifacts-fix/



Since it looks like you will always have problems with your multiple-viewports implementation, it might be worth thinking about different approaches that will give you more z-buffer precision … the technique explained in those links works great (using one viewport) and the only tradeoff is that you need custom (read: patched) shaders for everything, to manipulate the z-coordinate before writing it to the z-buffer - but that's affordable, I think :slight_smile:



Hope that helps

2 Likes

awesome, never expected this many replies on such short notice… you're a lively bunch :smiley:



ok, so if I want to use postprocessing ( and not have to jump from ugly hack to ugly hack ) I really gotta ditch my multiple viewports…

they are not worth the trouble.



@cmur2 :

that really seems quite interesting, but I'm not very good at shaders yet. I hope this will go well, as it really seems the most appropriate solution.



as far as I can tell @nehon is the shader guy around here? Have you had a look at what @cmur2 posted? Logarithmic z-buffer. I already peeked a bit into the vertex and fragment shaders of the Lighting material, but I'm not sure where to start with the z calculations. It was mentioned in the article that the z calculations should happen right after the modelviewprojection matrix has transformed the vertex position. I can only find worldviewprojection, worldview, normalview, and view matrices in the vertex shader… just a hint please :slight_smile:



@pspeed :

do you also mean splitting up in viewports, “near” “far” and “mid” ?

or are you referring to dynamically changing the near and far plane?





p.s.: what's up with the forum registration? I tried signing up but the activation mail never arrived; that's why I'm called lopho.org now, it's my OpenID URL… ( can I change that to something more “not so advertising of my domain” ?? )

If you really want to stick with the logarithmic z-buffer, you should create a copy of the Lighting material and vert/frag shader files and manipulate those … as explained in the second link, you need to edit the vertex and fragment shaders:



In the Lighting vertex shader (latest SVN revision, 10004) line 135 is the transform you are looking for:

[java]
gl_Position = g_WorldViewProjectionMatrix * pos;
[/java]

You need to declare a “varying vec4 tempPos” somewhere before main() that will transport the vertex' position to the fragment shader; after line 135 add the assignment “tempPos = gl_Position”.



(Edit2: AFAIR jME hands you inPosition and uses g_WorldViewProjectionMatrix to transform it into screen space, while the folks in the article get their gl_Vertex in model coordinates and use gl_ModelViewProjectionMatrix to achieve the same thing)



In the Lighting fragment shader you should again declare “varying vec4 tempPos” somewhere before main() and assign gl_FragDepth the improved z-buffer value explicitly, using:

[java]
const float C = 1.0;
const float far = 1000000000.0;
const float offset = 1.0;

gl_FragDepth = (log(C * tempPos.z + offset) / log(C * far + offset));
[/java]



This may be a bit hacky since my shader skills are somewhat rusty with respect to OpenGL 3.1 and newer, but it should work :slight_smile: if you find out it doesn't, I will try it myself tomorrow



Edit: in general you will need to update all shaders on all objects you use in such a way, and I imagine even the post processor shaders, when they attempt to read e.g. depth textures of the scene, will need to convert the logarithmic values back into the usual ones - the GPU hardware is only interested in relative z-buffer values for comparisons, but things like SSAO maybe need real/absolute values…



p.s.: my activation mail got lost too back in the day … you should create a separate thread for it in the Site & Project area, I think

1 Like
@lopho-org said:
@pspeed :
do you also mean splitting up in viewports, "near" "far" and "mid" ?
or are you referring to dynamically changing the near and far plane?


Yes, I was still talking about using multiple viewports... but the problems might be simpler to solve since the particular fields are much more isolated... and nothing ever exists in both. For example, bloom could be done on the final render maybe because it doesn't require depth. Things that require depth can just be done in near since far is too far to matter and probably mid too.
1 Like
@lopho-org said:
as far as i can tell @nehon is the shaderguy around here? Have you had a look at what @cmur2 posted? Logarithmic zbuffer. I already peeked a bit into the vertex and fragment shaders of the lighting mat, but im not sure were to start with the z calculations. It was mentionend in the article that the z calculations should happen right after the modelviewprojection matrix has transformed the vertex position. I can only find worldviewprojection, worldview, normalview, and view matricies in the vertexshader... just a hint please :)

Well... all the team members know their way around shaders.... I guess I'm just the one actually enjoying it.

Right now you can't see any depth-handling related code in the shader because depth is handled directly by the hardware, but it's a non-linear depth.
The problem with a log z-buffer is that you'll have to handle depth yourself.
Doing what cmur2 describes should work; I don't feel that's really hacky.... The depth test should work even with a logarithmic depth.
The big drawback, though, is that you can't use any built-in shaders and have to use your own for everything.
1 Like

I kind of agree with @pspeed here. You only need depth on the closest viewport, the rest can be treated the same as “sky”, i.e. way too far away to care about.

1 Like

this is great! thank you all for your help, this is really getting me somewhere.

I think I'll go for the log z depth, trying to implement it right now… if that turns out to be more than I can chew I'll just scale everything relative to real distance and do some parallax trickery. Everything else should be too small to worry about anyways.

thanks again, I really think I picked just the right place for this project :slight_smile:

1 Like

ok, I've gotten the Lighting fragment and vertex shaders working with log z; works like a charm, I get 168kwu precision at 150 000 000 000.0wu distance

small roundup so far

[java]
/* C is in this range: (0,1]
 * C higher -> more resolution close to the near plane, less far
 * C lower -> more resolution at the far plane, less near
 * far = far plane of the frustum
 * depth = gl_FragDepth
 */

// from z coordinate to log(z) depth
depth = ( log( C * z + offset ) / log( C * far + offset ) );

// from log(z) depth back to z coordinate ( for shadows, AO, etc. )
z = ( exp( depth * log( C * far + 1 ) ) - 1 ) / C;
[/java]



will go for pssm next
2 Likes

Another one on the topic of logarithmic depth buffers: http://outerra.blogspot.sk/2012/11/maximizing-depth-buffer-range-and.html

2 Likes

ooh, found an even more optimal function for log depth ( via @cmur2 's link, it's a very interesting read )

http://tulrich.com/geekstuff/log_depth_buffer.txt

http://tulrich.com/geekstuff/log_depth_buffer_vis.html

[java]
/* k = bits of depth buffer (16/24/32)
 * z = z coord in view space
 * zn = near plane
 * zf = far plane
 * ( 2^k - 1 ) should be stored as a constant, as pow k isn't a very cheap op
 * log( zf / zn ) could also be constant, at the optimisation stage once far + near are fixed
 */

// z view-space coord to depth
depth = ( 2^k - 1 ) * log( z / zn ) / log( zf / zn );
// alternative with constants, K = ( 2^k - 1 ) / log( zf / zn )
depth = K * log( z / zn );

// depth to z coord in view space
z = zn * exp( ( depth / ( 2^k - 1 ) ) * log( zf / zn ) );
// alternative with constants, K1 = ( 2^k - 1 ); K2 = log( zf / zn )
z = zn * exp( ( depth / K1 ) * K2 );
[/java]



It gives constant relative precision ( z * rel. precision = real precision ), meaning the same good precision at far distances as before, but at the near plane the precision is also quite a bit higher compared to the previously mentioned log depth function.
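To put a number on that, a quick helper I derived myself (assuming a 24-bit buffer and the ~3m near / ~1AU far from before, so take it with a grain of salt):

[java]
// depth = K * log( z / zn ) with K = ( 2^k - 1 ) / log( zf / zn )
// => one depth step covers dz = z / K, i.e. dz / z = log( zf / zn ) / ( 2^k - 1 ),
//    which is the same for every z -> constant relative precision
static double relativePrecision( double zn, double zf, int bits ) {
    return Math.log( zf / zn ) / ( Math.pow( 2, bits ) - 1 );
}
// e.g. relativePrecision( 3, 1.5e11, 24 ) ≈ 1.5e-6, so roughly 2e5 wu at 1.5e11 wu
[/java]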

Also, shaders are really growing on me … <3 …
2 Likes

Yeah, shaders are not too much magic once you get a taste :slight_smile: especially not with jME, as the interfaces to the shaders are well done! And they are very powerful, even if sometimes hard to debug.

Thanks for sharing your new insights, the thing with constant relative precision was new to me too!

1 Like