Portal effect (with depth buffer)

It’s not done yet. I thought about it for several days and I don’t see why it would not work. And I know it’s not the fastest way to do it.

Here is what I want to do:

  1. Render scene A.
  2. Throw away the depth buffer (or set it to infinity for every pixel).
  3. Render the portal, but only its depth. At this point you still have the original scene in the color buffer, but the depth buffer is the depth buffer of your “portal” (which can be anything, any shape).
  4. Render scene B (where the player is). Pixels of scene B that are behind pixels in the depth buffer are discarded (as normal). So you end up with scene B containing a “window” into scene A. If you move the camera in both scenes at the same time, the illusion should be perfect.

For this to work, scene B must not have any “holes”: if there is a pixel of scene B that is not rendered, you’ll see a pixel from scene A there instead. This can be solved by adding a skybox to scene B.

What I want to do with that… well, I need it in my Doom engine. It’s a bit hard to explain, but the sky in Doom is not a skybox: you can have a hallway behind the sky. A picture will make this clearer:

(This picture is taken from GZDoom).
If you only render the sky as a classical skybox (i.e. a cube of 1m or so around the player, that follows it and is rendered in the sky bucket) you’ll see the hallway “floating” in the sky.
The sky in Doom is more like a texture applied to solid walls, which display the sky with no perspective or something like that (I don’t know how to describe a “sky” effect).

So, I can either:

  1. do some calculation to project vertices onto the camera with correct interpolation (or do that in the fragment shader),
  2. or do what I said: render the skybox as usual, then render only the depth of those walls and ceilings (the ones that should let you see the sky), then render the rest of the scene.

I thought I could achieve that by just disabling color writing in the material of these walls/ceilings. However, when I do that I also discard the existing colors, so I see the background color of the viewport. I think I did something wrong.

So my questions are:

  1. Is it possible to do what I described?
  2. Am I being too generic? (Did I miss something important, like someone saying “well, we only need to detect the face in the picture, then solve some NP-complete problem”? I really think the steps I described are as close as possible to pseudo-code, and I only miss the actual lines of code to perform them.)
  3. Am I going the wrong way? I think that even without this Doom thing, the portal effect I described looks cool.

Thanks for any help and comments.

This would be easier to fix than anything else you’ve described. Essentially it’s a form of environment mapping of texture coordinates… instead of reflecting, you’re just projecting. It’s also the fastest (by far) way to get the effect you want.

Basically, just pick a texture coordinate in the vertex shader based on the view vector. Done.

I am really bad with shaders. Note that I do try.
OK, if I use gl_FragCoord (and divide it by the size of the viewport) I can “attach” the picture (it will follow the camera vertically and horizontally, just as if I had painted it on the canvas before rendering anything).
If I inject the camera’s horizontal rotation I can get the wanted horizontal effect (when you turn the camera horizontally you get the standard skybox effect).
But I didn’t find a way to get the wanted vertical effect (when you look up at the sky, the mountains should not move… I mean, they should leave your view, but they should not give the feeling that they are moving… argh, so hard to explain). It seems related to the FOV, because when I change it the effect is more or less noticeable. And even with a constant movement of the camera from down to up I get non-constant movement of the mountains (like they go up, then down, then up).
I spent the day on that and I just want to bang my head on the desk.
And injecting the camera’s horizontal rotation already sounds like I am doing it wrong.

Any code? I’ll try to grab some PDF book on GLSL, but it feels like there are no conventions in this language.
It’s the first time I don’t get a language; it’s like the language is pushing me away all the time. Turing machine languages were easy, functional languages were easy, object-oriented languages were easy. Assembly language was easy.
But this one, I don’t get it. Partly because of the matrices, partly because of non-declared variables that exist nonetheless, partly because I can’t print output from my programs to check step by step that everything is alright… and also partly because the IDE puts red everywhere, even in stock programs (from jME).

But it’s likely just me; I also had problems with Blender and never “got into” it.

Anyway, for the problem here: some code, a link?

That’s the tricky part. I usually end up resorting to setting colors a certain way to validate values. Things like “if value > x set color red” or just setting the red or green component to some value to see if it’s growing/shrinking like it should.

I think the basic gist of what you want to do (off the top of my head) is in the vert shader, calculate the world space view vector for the vertex. That will be the g_WorldMatrix * pos value - g_CameraPosition. (You’ll have to make sure that WorldMatrix and CameraPosition are set in the WorldParameters in the j3md… Lighting.j3md already has these for example)

Set that vector up as a varying that your frag shader can get. Then you should just be able to use it to index into a texture. Hmm… then again, it may be the angles you need and those would be better to calculate in the vert shader and pass as varyings. atan2() on the vector calculated above for x,z. Then maybe just use the vector’s y component for the y texture value.

…I’m starting to ramble now.

You could always look at how the sky is rendered.

Heheh… looking at the shader you might just be able to use it directly.


Thanks, I’ll try that, but:
the world position of the pixel is kind of useless. Yes, I thought it was the key, but no: the shape of the ceiling/wall can be anything, and it should display the sky as if it were a perfect sphere. So it’s only about the position of the pixel on the screen and the angle of the camera. The tricky part is the FOV: a FOV of 90° and a FOV of 10° will not move the same number of pixels in the camera view (for the same movement of the camera). I need to integrate the field of view into the calculation, and it’s not linear: pixels near the center of the screen don’t move much, while pixels far from the center of the view move a lot. And that’s why I get this “go up a little, go down a little” effect.

I’ll try what you said; I already have Lighting.vert, Lighting.frag and Lighting.j3md open in PSPad (a text editor). Same goes for Unshaded. I also download pieces of code here and there and try to understand and reuse them, but… well, you know.

I also noted the “if value > x set color red” trick: very good idea.

I never said world position of the pixel. I said view direction of the pixel in world space. Which is precisely what you need.

But really, I looked at Sky.j3md and related and it’s kind of already doing these calculations. I think you can just use it directly and/or modify it slightly.

I tried to work with that today (Sky.j3md, frag and vert) and… nope, it doesn’t work.
First, in the vertex shader gl_Position is not set to the standard g_WorldViewProjectionMatrix * vec4(inPosition, 1.0), and the result is that the whole “sky” mesh appears to be far from the rest of the scene (and doesn’t hide anything).
The fragment shader is a bit interesting
(even if I don’t get the import. OK, I get that it is a file; I even found it and read it. But in the orange book, the GLSL book, they said there is no include or anything like it).
But it relies on normals, and after spending some time on the algorithm I think I can say that it works when the sky is a box or a sphere with normals all pointing at the camera. This is because it uses interpolation between the normals of a face (then normalization to restore the length of 1) to get the position in the texture.
And my triangles don’t have normals, nor should they: it’s sky, there will never be lighting on it.

So I think an image of my case is needed.

(I displayed them in wireframe mode with a fuchsia color and no culling. Of course, in the end they won’t be in wireframe; they will have a real texture (what we are talking about) and front-face culling.)

You can see in the second picture that the hallway would be visible with a standard sky approach. That’s why I need to project the sky onto the walls.
You can also see that the “shape” of the sky is absolutely not a simple one, and can even be non-convex.

If I fake the normals with the subtraction (inPosition - CameraPosition),
I get a pretty “end of days” effect.

Strangely enough, the behavior of this sky is good (it doesn’t follow the camera and doesn’t make the underlying mesh guessable).
If I normalize the subtraction I get a strange water effect (which I can’t show here; it appears more when you move), but it moves with the camera and “shows” the mesh.

Note that the normalization is always done in the fragment shader, even with the end of days effect.

I think that Sky.j3md etc. is perfect for a standard sky but will not work in my case.

Not related: I also spent some time on an OBJ exporter for Doom levels (basically the same algorithm already used to build the scene, but without any reference to jME: it will work as a standalone plugin).

Also: the sphere used in the SkyFactory does have smooth normals, as shown here with a show-normals material:

Or maybe I misunderstood this image I produced.

So I guess you’ll have to go back to my original suggestion of calculating the view direction for each vertex and using that to index into the texture. You still might be able to use the library that the frag shader includes. (JME adds the ability to include files. GLSL doesn’t support that.)

Ok :slight_smile: :slight_smile: :slight_smile:

Things are getting better.

And the way to do it is… much too simple, but I wouldn’t have been able to do it without your help (especially the normals trick in the sky shader was a good start).

sky.j3md (not interesting):

MaterialDef Simple {
    MaterialParameters {
        Color Color
        Texture2D ColorMap
        Texture2D Playpal
        Texture2D LightLevels
        Boolean HiRes

        Int Index
        Int LightLevel

        Vector3 CameraPosition
    }
    Technique {
        VertexShader GLSL100: Shaders/sky.vert
        FragmentShader GLSL100: Shaders/sky.frag
        WorldParameters {
            WorldViewProjectionMatrix
        }
        Defines {
            HI_RES : HiRes
        }
    }
}

sky.vert :

attribute vec3 inPosition;
uniform vec3 m_CameraPosition;
uniform mat4 g_WorldViewProjectionMatrix;
varying vec3 direction;

void main() {
  gl_Position = g_WorldViewProjectionMatrix * vec4(inPosition, 1.0);
  direction = -inPosition + m_CameraPosition;
}

sky.frag :

/* fragment shader template */
const float PI = 3.14159265358979323846;

uniform sampler2D m_ColorMap;
uniform sampler2D m_Playpal;
uniform sampler2D m_LightLevels;

uniform int m_Index;
uniform int m_LightLevel;

varying vec3 direction;

/* NB_REPEAT_HORIZONTAL (how many times the sky texture repeats around
   the horizon) is expected to come from a material define */
vec2 direction_to_coords(in vec3 dir) {
  float angle_x = NB_REPEAT_HORIZONTAL * (((atan(dir.z, dir.x) / PI) + 1.0) / 2.0);
  float angle_y = -dir.y + 0.3;
  return vec2(angle_x, angle_y);
}

void main() {
    vec3 dir = normalize(direction);
    vec2 pos = direction_to_coords(dir);
    gl_FragColor = texture2D(m_ColorMap, pos);
    /* stuff related to the game rendering that i disabled to test and that i'll re-enable now */
}

I still need to handle the upper part of the sky, but this was clear from the beginning: Doom doesn’t include an upper part for its sky, as the player is not capable of looking up and down. GZDoom has a color fade, and I think I’ll do the same thing.
Also, I need to address this “+ 0.3” in the fragment shader, as it defines the height of the sky (before the part I’ll need to fill myself). If it’s zero, the sky is distorted to fill the whole “sphere” around the player. If it’s 0.5 the sky is too low, etc. I need to find out if there is a real value or if it’s only about feeling. Anyway, I am on the right track and it should be OK from here.


P.S.: also, this doesn’t work with mipmaps, as they create an ugly seam near z = 0. I also found some interesting parameters and formulas; I’ll keep them and maybe use them later.

Make sure you have wrapping turned on for the texture… though it may just be a problem with the mesh sharing the vertexes of that seam. (can’t do that.)