Infinite shadow volumes - GLSL

First a few screen shots



shader off





shader on, front view





shader on, side view showing backfacing polygons being extruded along the light path



shader

void main() {
   
      vec3 normal, lightDir;
      vec4 diffuse;
      float NdotL;
      float extrusionFactor;
      
      /* first transform the normal into eye space and normalize the result */
      normal = normalize(gl_NormalMatrix * gl_Normal);
      
      /* now normalize the light's direction. Note that according to the
      OpenGL specification, the light is stored in eye space. Also since
      we're talking about a directional light, the position field is actually
      direction */
      lightDir = normalize(vec3(gl_LightSource[0].position));
      vec4 lightPos =normalize(gl_LightSource[0].position - gl_Vertex);
      
      /* compute the cos of the angle between the normal and lights direction.
      The light is directional so the direction is constant for every vertex.
      Since these two are normalized the cosine is the dot product. We also
      need to clamp the result to the [0,1] range. */
      NdotL = dot(normal, lightDir);
      vec4 Position=gl_Vertex;
      if(NdotL <= 0.0)
      {
         gl_FrontColor = vec4(1,0,0,1);
         gl_BackColor = vec4(1,0,0,1);
         Position = gl_Vertex+(gl_Vertex-gl_LightSource[0].position)*100;
         Position.w=0.5;
      }
      else
      {
         gl_FrontColor = vec4(0,0,1,1)*NdotL;
         gl_BackColor = vec4 (0,0,1,1)*NdotL;
      }
      /* Compute the diffuse term */
      gl_Position = gl_ModelViewProjectionMatrix*Position;
   
      
      
   }


You might have to mess with it if you want per-pixel shading; I just set the model to blue and the outline and extruded stuff to white.
The first 2 are from the light's view. I don't know what to do next; I don't know how to make the stencil buffer count these.
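For reference, the counting the stencil buffer has to do is simple once the volume is extruded. Below is a minimal sketch of the depth-pass counting rule for a single pixel, as plain Python rather than actual OpenGL stencil calls (the function names and the per-pixel face list are made up for illustration):

```python
# Depth-pass (z-pass) stencil counting, simulated for one pixel.
# Each shadow-volume face covering the pixel that passes the depth test
# bumps the stencil: +1 for front-facing faces, -1 for back-facing ones.
# The pixel is in shadow iff the final count is non-zero.

def stencil_count(volume_faces, pixel_depth):
    """volume_faces: list of (face_depth, is_front_facing) covering the pixel."""
    stencil = 0
    for depth, front_facing in volume_faces:
        if depth < pixel_depth:          # depth test passes (face is in front)
            stencil += 1 if front_facing else -1
    return stencil

def in_shadow(volume_faces, pixel_depth):
    return stencil_count(volume_faces, pixel_depth) != 0

# Pixel behind both faces of one volume: enter (+1) and exit (-1) cancel -> lit.
print(in_shadow([(1.0, True), (2.0, False)], 5.0))   # False
# Pixel inside the volume: only the front face is in front of it -> shadowed.
print(in_shadow([(1.0, True), (2.0, False)], 1.5))   # True
```

In real OpenGL this counting is done by rendering the volume (not the model) with color and depth writes off, using stencil increment on front faces and decrement on back faces.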

Doesn't this only work when the object is untransformed? gl_Vertex is in object space, and the light is in eye space…

one way to get them into the same space would be to transform the light into object space like this:



gl_ModelViewMatrixInverse * gl_LightSource[0].position
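A toy numeric illustration of that object-space transform, assuming a translation-only modelview matrix so the inverse is just the opposite translation (the names and numbers here are made up):

```python
# Moving a light from eye space into object space, illustrated with a
# translation-only modelview matrix (its inverse is the opposite translation).

def translate(point, offset):
    """Apply a translation to a homogeneous point (w scales the offset,
    so directions with w == 0 are unaffected, as they should be)."""
    x, y, z, w = point
    return (x + offset[0] * w, y + offset[1] * w, z + offset[2] * w, w)

# Suppose the modelview translates the object by +3 on x.
# A positional light (w == 1) sitting at the eye-space origin:
light_eye = (0.0, 0.0, 0.0, 1.0)

# gl_ModelViewMatrixInverse * light: undo the model transform.
light_object = translate(light_eye, (-3.0, 0.0, 0.0))
print(light_object)   # (-3.0, 0.0, 0.0, 1.0)

# Now the light and gl_Vertex are in the same (object) space, so
# (gl_Vertex - light_object) is a valid light-to-vertex direction.
```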



by the way, the lightPos vector isn't used…

vec4 lightPos = normalize(gl_LightSource[0].position - gl_Vertex);

Thanks, I was working from several sources and they didn't match up.

gl_ModelViewMatrixInverse is used in lightPos, and lightPos is used to make the computation of lightDir easier on the eyes. If you see anything else wrong, please say so.


void main() {
   
      vec3 normal;
      vec4 lightDir,lightPos,diffuse;
      float NdotL;
      float extrusionFactor;
      
      /* first transform the normal into eye space and normalize the result */
      normal = normalize(gl_NormalMatrix * gl_Normal);
      
      /* now normalize the light's direction. Note that according to the
      OpenGL specification, the light is stored in eye space. Also since
      we're talking about a directional light, the position field is actually
      direction */
      lightPos = normalize(vec4(gl_LightSource[0].position*gl_ModelViewMatrixInverse));
      lightDir = normalize(vec4(lightPos-gl_Vertex));
      
      /* compute the cos of the angle between the normal and lights direction.
      The light is directional so the direction is constant for every vertex.
      Since these two are normalized the cosine is the dot product. We also
      need to clamp the result to the [0,1] range. */
      NdotL = dot(normal, lightPos.xyz);
      vec4 Position=gl_Vertex;
      if(NdotL <= 0.0)
      {
         gl_FrontColor = vec4(1,1,1,1);
         gl_BackColor = vec4(1,1,1,1);
         Position = gl_Vertex+(gl_Vertex-lightDir)*100;
         Position.w=0.0;
      }
      else
      {
         gl_FrontColor = vec4(1,1,1,1)*NdotL;
         gl_BackColor = vec4 (1,1,1,1)*NdotL;
      }
      /* Compute the diffuse term */
      gl_Position = gl_ModelViewProjectionMatrix*Position;
   
      
      
   }

Interesting, but it looks really slow based on your fps. Is it just a crazy number of triangles? Also, it looks like a Z-pass algorithm of some sort, but where is the shadowing done? I don't see stencil code or such. (And no screenshot with shadows :slight_smile: )



On a different note, I have geometry-based Z-pass shadows almost ready for check-in. I'm not too happy with a straight Z-pass-only implementation, so I've been delaying until I could find a more complete implementation, but I guess we might as well start somewhere.

Yeah, right now I'm just working in a shader IDE that always reports 6 fps, and I'm running a Ti 4200 since my computer burned up, so I've got what, 2 software vertex shaders :smiley: but I make use of what I have. Yeah, it's going to have to be a Z-pass algorithm, and it's multi-pass. I'll post some documentation on it later if you're interested, but in return I'd like to know how I can do multi-pass now without hacking the core to pieces. Just call render.draw multiple times?

Actually, ever since I made those fixes it hasn't looked right. It now just extrudes whichever way it wants, but I got some ideas after a good night's sleep.



You can see my plan for this in the Monkey World 3D post I just made. My thought is: why do realtime shadows when light maps can look 10 times better and be compressed to almost nothing? At least that's what I thought, but it looks like the light maps will have to include the light color (maybe in a separate texture) for cards that don't support shaders. But you know, I'm running shaders on my card and it looks pretty good. Also, if you're interested in shaders, check out nvidias.developers.com; under their SDK they have shader particle engines that can run up to a million particles, because it's all done on the card with no transfer between CPU and GPU. They have raytracers, global illumination, and a lot of cool stuff my card can't run.

normalizing the lightPos sort of kills the position  :wink:

duh that shouldn't have been so hard to see

hehe tell me about it…so darn easy to get blind on your own code  :slight_smile:

I'm glad I posted my code, because this computer's hard drive is about to go out. It's getting worse every day; it seems like every computer I touch goes to pieces.

Sorry to ask this of the forum, but would somebody please plug this code in and get some screenshots of it for me in jME? I've made it so you don't have to set any variables, and I also have it set so you don't have to set any light sources. I just want to see the extruding of verts on anything: a sphere, a cube, anything. Thanks, very helpful. Just to let you know, this system is corrupt to the point that if Windows does load, explorer.exe crashes, so I have to use Task Manager to start Firefox.



void main() {
   
      vec3 normal;
      vec4 lightDir, lightPos, diffuse;
      float NdotL;
      float extrusionFactor;
      
      /* first transform the normal into eye space and normalize the result */
      normal = normalize(gl_NormalMatrix * gl_Normal);
      
      /* now normalize the light's direction. Note that according to the
      OpenGL specification, the light is stored in eye space. Also since
      we're talking about a directional light, the position field is actually
      direction */
      lightPos = normalize(vec4(vec4(0,0,1,0) * gl_ModelViewMatrixInverse));
      lightDir = normalize(vec4(lightPos - gl_Vertex));
      
      /* compute the cos of the angle between the normal and lights direction.
      The light is directional so the direction is constant for every vertex.
      Since these two are normalized the cosine is the dot product. We also
      need to clamp the result to the [0,1] range. */
      NdotL = dot(normal, lightPos.xyz);
      vec4 Position = gl_Vertex;
      if (NdotL <= 0.0)
      {
         gl_FrontColor = vec4(1,1,1,1);
         gl_BackColor = vec4(1,1,1,1);
         Position = gl_Vertex + (gl_Vertex - lightDir) * 100.0;
         Position.w = 0.0;
      }
      else
      {
         gl_FrontColor = vec4(1,1,1,1) * NdotL;
         gl_BackColor = vec4(1,1,1,1) * NdotL;
      }
      /* Compute the diffuse term */
      gl_Position = gl_ModelViewProjectionMatrix * Position;
   }

Got the shader to work, but I found something out: I'm extruding some front-facing verts too.



See, the yellow faces are front-facing and the back-facing ones are white. The problem stems from the shader extending the outline here and not there. Any suggestions? PS: I'm getting a warning that gl_ModelViewMatrixInverse is not defined; is this just the IDE, or do you have to set the modelview inverse matrix yourself?



Latest code:

void main() {
   
      vec3 normal;
      vec4 lightDir, lightPos, diffuse;
      float NdotL;
      float extrusionFactor;
      
      /* first transform the normal into eye space and normalize the result */
      normal = normalize(gl_NormalMatrix * gl_Normal);
      
      /* now take the light into object space. Note that according to the
      OpenGL specification, the light is stored in eye space. Also since
      we're talking about a directional light, the position field is actually
      direction */
      lightPos = gl_ModelViewMatrixInverse * gl_LightSource[0].position;
      lightDir = normalize(vec4(lightPos - gl_Vertex));
      
      /* compute the cos of the angle between the normal and lights direction.
      The light is directional so the direction is constant for every vertex.
      Since these two are normalized the cosine is the dot product. We also
      need to clamp the result to the [0,1] range. */
      NdotL = max(dot(normal, lightDir.xyz), 0.0);
      
      vec4 Position = gl_Vertex;
      if (NdotL <= 0.0)
      {
         gl_FrontColor = vec4(1,1,0,1) * NdotL;
         gl_BackColor = vec4(0,0,0,1) * NdotL;
         Position = gl_Vertex + (gl_Vertex - lightPos) * 1.5;
         Position.w = 1.0;
      }
      else
      {
         gl_FrontColor = vec4(1,1,0,1) * NdotL;
         gl_BackColor = vec4(0,0,0,1) * NdotL;
      }
      /* Compute the diffuse term */
      gl_Position = gl_ModelViewProjectionMatrix * Position;
   }

this will always be the case when extruding without splitting the vertices on the boundary and filling in the hole with new polygons… if you just move a polygon's vertices, the front-facing polygon connected to it will also have one or two of its vertices moved…

the normal way is to split the vertices and rebuild the hole that comes from the extrusion…

when using shader optimizations, the usual way is to build the mesh with all edges made from degenerate polygons, so that you can move polygons without affecting their non-extruded neighbours (and with that, the degenerate polygons stretch out so that you don't have to fill in any hole)
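The preprocessing step described above can be sketched as follows; this is an illustrative version with made-up names and mesh layout, not any particular engine's API:

```python
# Preprocessing sketch: give each triangle its own vertex copies, then insert a
# degenerate quad (two zero-area triangles) along each shared edge. The quad
# reuses the edge's positions twice, so it's invisible until the extrusion
# pushes one triangle's copies away -- then it stretches to close the gap, and
# no silhouette hole needs filling at runtime.

def add_degenerate_edge_quads(vertices, triangles):
    """vertices: list of (x, y, z); triangles: list of (i, j, k) index triples.
    Returns (new_vertices, new_triangles) with unshared vertices and two extra
    (initially degenerate) triangles per shared edge."""
    new_vertices, new_triangles = [], []
    edge_owner = {}                       # sorted edge key -> first copy's indices
    for tri in triangles:
        base = len(new_vertices)
        new_vertices.extend(vertices[i] for i in tri)   # unshare the vertices
        new_triangles.append((base, base + 1, base + 2))
        for a, b in ((0, 1), (1, 2), (2, 0)):
            key = tuple(sorted((tri[a], tri[b])))
            if key in edge_owner:         # second triangle on this edge:
                oa, ob = edge_owner[key]  # bridge the two copies with a quad
                new_triangles.append((base + a, base + b, oa))
                new_triangles.append((base + b, ob, oa))
            else:
                edge_owner[key] = (base + a, base + b)
    return new_vertices, new_triangles

# Two triangles sharing one edge -> 6 unshared vertices, 2 original triangles
# plus 2 degenerate ones bridging the shared edge.
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0)]
tris = [(0, 1, 2), (1, 3, 2)]
v2, t2 = add_degenerate_edge_quads(verts, tris)
print(len(v2), len(t2))   # 6 4
```

As MrCoder notes, this costs nothing at render time but does mean more vertices for the GPU to process.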

Would changing if(NdotL<=0) to if(NdotL<0) fix it? Otherwise, splitting vertices can get pretty expensive for a high polygon count. Can you imagine how powerful shaders will be when you can create vertices in shaders?

no, it can never look very good unless you're working with very high-poly meshes… that's my experience at least…

imagine a bad case like shadowing a box that way: extruding the backfacing polys wouldn't leave any box-looking frontfacing polys :wink:



generating a mesh with degenerate polys can be done in a preprocessing stage, so it doesn't cost anything at render time… (although there are more vertices to process)



yeah, damn it will be nice when shaders get vertex creation power…"real" displacement mapping, dynamic subdivisions and on and on…

Actually, I don't remember why I care about this. I'm planning on using it in a 3D editor and using what's projected as a light map, but I guess I was thinking about somebody else wanting to use it for their thing. So in the preprocessing stage I would find the outline (NdotL == 0), duplicate those outline vertices, attach the duplicates to the outline, and extend them (in my case) to the static geometry. Just wondering, but since shaders process all the elements, could I make an all-directional ambient light? Also, this could be bad: how can I limit it to do just the opposite, like a spot light or directional light?
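The outline-finding step mentioned above is usually phrased per edge rather than per vertex: an edge is on the silhouette when one of its two triangles faces the light and the other faces away (the sign of N·L flips across the edge). A minimal sketch, with made-up mesh layout:

```python
# Silhouette detection: collect the edges where a front-facing triangle
# meets a back-facing one with respect to the light direction.

def facing(normal, light_dir):
    """True if the face normal points toward the light (N dot L > 0)."""
    return sum(n * l for n, l in zip(normal, light_dir)) > 0.0

def silhouette_edges(triangles, normals, light_dir):
    """triangles: list of (i, j, k) vertex-index triples;
    normals: per-triangle face normals. Returns the outline edges."""
    edge_faces = {}
    for t, tri in enumerate(triangles):
        for a, b in ((0, 1), (1, 2), (2, 0)):
            key = tuple(sorted((tri[a], tri[b])))
            edge_faces.setdefault(key, []).append(t)
    outline = []
    for edge, faces in edge_faces.items():
        if len(faces) == 2 and \
           facing(normals[faces[0]], light_dir) != facing(normals[faces[1]], light_dir):
            outline.append(edge)
    return outline

# Two triangles over a ridge: one normal toward +z, one toward -z.
tris = [(0, 1, 2), (1, 3, 2)]
norms = [(0, 0, 1), (0, 0, -1)]
print(silhouette_edges(tris, norms, (0, 0, 1)))   # [(1, 2)]
```

The vertices along these edges are the ones that get duplicated and extruded in the preprocessing scheme described above.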


Actually I don't remember why I care this I am planning on using this in a 3d editor and using whats projected as a light map


what do you mean by using this as a lightmap?


MrCoder said:

Actually I don't remember why I care this I am planning on using this in a 3d editor and using whats projected as a light map


what do you mean by using this as a lightmap?

Light maps: I make a texture out of what is projected on the static geometry.
MrCoder said:


Just wondering but  since shaders process all the elements I could make a all directional ambient light.  Also this could be bad how can i limit it to do just the opposite like a spot light or directional light.


dunno what you mean here either...what's a directional ambient light? why is it bad? and what would the limit with a spotlight do to this extrusion code?

Forget about it, I found a site with tutorials on directional, point, and spot lights (all per-pixel); still looking for one on ambient. I hope to include all these light types in my 3D editor. (A little off topic, but seeing how I can't work on the shadow volume with this computer failing more every day.) (And that link, for anybody interested: http://www.lighthouse3d.com/opengl/glsl/index.php?lights)

A good question: in the depth-pass stencil shadow volume, you render the front faces and increase the stencil buffer where the depth test passes. Well, my shader doesn't output only the shadow volume, it outputs the whole model, so wouldn't it consider itself in shadow, thus rendering a black object?