Rendering an object from the inside

Hi all,

just checking for avenues to investigate before I start experimenting.

Standard behaviour is that if you place the camera inside an object, the object vanishes. (The reasoning behind this behaviour is that a camera isn’t normally supposed to end up inside a solid object, so drawing the back faces of its polygons would just create unnecessary load.)
Now I have this idea of a “ghost mode” with unrestricted camera movement. I.e. I want to render any building, starship etc. even when inside. As an added touch, I think it should be made transparent (ghost mode after all).

The naive approach would be to accompany each polygon with a “backside polygon” that has a mirrored orientation, using the same texture at, say, 50% transparency.

Which of the following potential problems should I worry about?

  • JME might not (easily) allow creating the backside polygons.
  • The number of polygons involved in each edge doubles, creating more potential for edge fighting.
  • I’m doubling the number of polygons in the scene, so checking polygon visibility will double.
  • I’m also doubling the pressure on the texture cache on the graphics card.

Maybe there are better approaches.

One idea that might or might not work is to apply some clever shader trick so that any polygon will render opaque when looking at it from the front, and with 50% transparency when looking at it from the back.
This would eliminate the backfacing polygons entirely.
To make this work, I’d have to instruct JME to not do backface culling (i.e. ignoring polygons that face away from the camera). I’d still double visibility checking overhead, but all the other problems would be eliminated.

Is any of this feasible? If no, what would be the best approach to have ghost mode?

You can tell OpenGL which side of a triangle it should render (or both).

[java]
material.getAdditionalRenderState().setFaceCullMode(RenderState.FaceCullMode.FrontAndBack);
[/java]


Thanks, that’s the first building block for the shaders approach.
(Aside: Wouldn’t that be FaceCullMode.Off? FrontAndBack seems to cull (eliminate) both front and back side. Can’t test right now.)

[Starting my way through the shaders and materials webpages… it’s going to be an easier read, now that I have a concrete use case for the info. Will be back here with questions or reports of success, as the case may be.]

@toolforger, yes you are right. Of course you want FaceCullMode.OFF if you want to render triangles double sided. Sorry about that, I’m tired =P
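
For reference, here’s the earlier snippet with that correction applied (Off disables culling entirely, so both sides of each triangle get rendered):

[java]
// Corrected version of the snippet above: Off disables face culling,
// so both front and back faces are rendered.
material.getAdditionalRenderState().setFaceCullMode(RenderState.FaceCullMode.Off);
[/java]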

Shaders are fun to write once you get the hang of it =)

Easiest way:
- one Geometry with the mesh and normal material.
- another Geometry with the same mesh (no duplication) and a different material that turns back faces on (but not front) and uses a 50% transparent diffuse color, etc…

Cheap. No shaders to write. No duplication of textures. No duplication of meshes. Just one mesh rendered twice… and even then not really… since the inside one is only rendered when you are inside, outside only when outside.

The only impact is that you now have two objects instead of one… you can always disable the second object when not needed if that really concerns you.
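
A minimal sketch of that two-Geometry setup, assuming a Lighting-based material; the names here (mesh, normalMat and so on) are just placeholders:

[java]
import com.jme3.material.Material;
import com.jme3.material.RenderState;
import com.jme3.math.ColorRGBA;
import com.jme3.renderer.queue.RenderQueue;
import com.jme3.scene.Geometry;
import com.jme3.scene.Node;

// Outside view: the mesh with its normal, opaque material.
Geometry outside = new Geometry("outside", mesh);
outside.setMaterial(normalMat);

// Inside view: the *same* Mesh instance, but with front faces culled so only
// back faces are drawn, blended at 50% alpha.
Material ghostMat = normalMat.clone();
ghostMat.getAdditionalRenderState().setFaceCullMode(RenderState.FaceCullMode.Front);
ghostMat.getAdditionalRenderState().setBlendMode(RenderState.BlendMode.Alpha);
ghostMat.setBoolean("UseMaterialColors", true);
ghostMat.setColor("Diffuse", new ColorRGBA(1f, 1f, 1f, 0.5f));

Geometry inside = new Geometry("inside", mesh);
inside.setMaterial(ghostMat);
inside.setQueueBucket(RenderQueue.Bucket.Transparent);

// Group both geometries so they can be moved as a single spatial.
Node building = new Node("building");
building.attachChild(outside);
building.attachChild(inside);
[/java]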


Heh. That’s brilliant.

My knee-jerk reaction was “but it won’t work in a concave object”, but the switch happens on a per-polygon basis, not on a per-object basis, right?
I’d just need to keep both the “inside” and the “outside” object enabled. (No, I’m not worrying about that overhead very much; I’ll put that on the list of things to consider if/when performance becomes an issue. It could become a bit of a hassle if I have to move two scenegraph objects instead of one, but I can always group them.)

Do you know whether z-fighting at the edges can become an issue? E.g. at a far edge of a cube, the inside of the back wall might fight with the outside of a side wall where they meet.
I understand there’s a mechanism to slightly move polygons to combat z-fighting, but I’m not sure how this applies to the polygons of a mesh (my best guess would actually be “not at all”).

@toolforger said: Do you know whether z-fighting at the edges can become an issue? E.g. at a far edge of a cube, the inside of the back wall might fight with the outside of a side wall where they meet.

I can’t imagine how there would be z-fighting any more than on any other regular shape. If you are inside of a cube then it is drawing the inside of a cube… exactly as if you were just drawing… the inside of a cube.

I’m considering a case where I’m looking at an inside-textured building from the outside.
If there’s a round-off error, the inside textures might stick through the side wall. Without inside textures, such errors don’t matter because the protruding wall faces away from the camera and isn’t drawn, but with an inside texture, the error becomes visible.

I guess I’ll have to test then :-)

@toolforger said: If there's a round-off error, the inside textures might stick through the side wall.

The vertexes are the same so the edges will be calculated the same. The round off errors will be consistent.

If you have Z issues then it will be because of transparent sorting if the outside and inside of the shape are both in the transparent bucket. In that case you will have to take care to make the inside always sort farther away… which won’t be perfect but will be better than many alternatives.

I don’t think you will get a proper effect “for free”: in the mentioned example of a space ship you will inevitably get unrealistic results. All your objects would look like “chocolate bunnies” without organs inside. So to get a proper immersion effect you should probably really do complete models with an interior, as otherwise there won’t be any real interior. Plus, the “realistic” view of a sub-atomic camera stuck inside a wall of a space ship would be pitch black, as there’s no light reaching the center of the wall. So you would in fact have to track collisions and movement from inside the mesh to outside the mesh in the case of concave mesh collision shapes, or make the “solid” parts of the ship consist of convex shapes to be able to more easily track actual overlaps instead of surface collisions.

Else, just turning off backface culling and additionally enabling transparency for objects whose bounding box overlaps with the cam position (+ near frustum) would probably be the easiest solution. Personally, I find backface culling the least confusing thing when moving freely in a level editor, though, so I don’t know if I’d attempt to change the “default behavior” at all here.
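
A rough sketch of that per-frame check, using only the bounding volume (not the near frustum) and placeholder names like geom, ghostMat, and normalMat:

[java]
// Hypothetical per-frame check (e.g. in simpleUpdate): if the camera is inside
// the object's world bounds, swap in a double-sided, semi-transparent material.
if (geom.getWorldBound().contains(cam.getLocation())) {
    geom.setMaterial(ghostMat);   // FaceCullMode.Off + alpha blending
    geom.setQueueBucket(RenderQueue.Bucket.Transparent);
} else {
    geom.setMaterial(normalMat);
    geom.setQueueBucket(RenderQueue.Bucket.Opaque);
}
[/java]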

The problem statement was simplified and taken out of context because I hadn’t worked everything out yet as a basis for discussion. “Transparent chocolate bunny” is exactly what I’m after right now. (I thought about explaining the situation in more detail, but then I realized I still haven’t sorted enough details out to make that useful in any way.)

Turning off backface culling plus local transparency. Right, that should do it and would be easiest with zero overhead.
Nice to see that the original complicated problem can be downsized to such a simple solution.

I think I’ll be using a degrading transparency setting. Make nearby stuff very transparent, increasing opacity more and more as distance to camera increases. Distance measured from camera position to center-of-object-mass.
That’s probably going to reduce the framerate, though. Does anybody have a rough estimate of how bad this is going to be in, say, a standard FPS scenario? (FPS is not what I’m aiming for, I’m just after a rough idea of how relevant this could become.)
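
A sketch of what that per-frame fade could look like, reusing a ghost material with a Diffuse color as above; ghostGeom is a placeholder, the bounding-volume center stands in for the centre of mass, and the fade range is a made-up tuning value:

[java]
// Hypothetical per-frame update: very transparent up close, more opaque with distance.
float dist = cam.getLocation().distance(ghostGeom.getWorldBound().getCenter());
float fadeStart = 5f, fadeEnd = 50f;   // made-up fade range, needs tuning
float alpha = FastMath.clamp((dist - fadeStart) / (fadeEnd - fadeStart), 0.1f, 1f);
ghostMat.setColor("Diffuse", new ColorRGBA(1f, 1f, 1f, alpha));
[/java]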

@pspeed Thanks for reminding me of the transparent bucket, I knew about it but it somehow slipped.

I guess making the few objects around your cam transparent will amount to no overhead worth mentioning. For the CPU it’s just setting a flag, and most GPUs can handle it just fine; you really only increase the fill rate by that one geometry’s faces.

I was thinking more about making a substantial interval of the viewable range transparent.
E.g. look at this awesome video:
[video]http://www.youtube.com/watch?feature=player_embedded&v=6AVAzGQMxEg#![/video]
Then consider making the hull transparent so you can see the inside structure without having to click an option.
Plus, maybe, making the aggregates transparent so you can see that there’s something behind it, and zoom in on it if you want to find out what’s there.

Here’s a video that’s nearer to what I have in mind (dammit, player doesn’t start at the timecode given in the link - skip to 0:15 and watch the semitransparent buildings shine through each other to see what I mean):
[video]http://www.youtube.com/watch?v=qtbHOG5w4Ew&list=UUqHYtvq4aTD1e3LP4t-kLzA&feature=player_detailpage#t=15s[/video]
Transparency for everything on the screen might be a lot more work I guess, but by how much?

CPU load is no different - everything still gets sorted just in a different order.

However you get MUCH higher overdraw - since the object behind needs drawing, then the one in front of that, then in front of that, etc.

You also get some unavoidable artifacts due to sorting and z-issues.

In other words, mass amounts of transparency aren’t recommended for most situations but might work depending on your specific case…

Agreed - CPU load is not a problem, GPU load can be if you insist on drawing all details. Amounts to some LOD reduction strategy I guess.

Is there a page somewhere that discusses the kinds of artifacts you get with too much transparency?

@zarch: the fill rate is only increased by the back-face rendering; you pay the fill rate for the objects behind anyway, even when the object in front is not transparent.

Are hidden objects not culled?
I had hoped that would happen either in JME (at object level) or inside the GPU (at triangle level). I never got around to verifying that, though.

No, z-ordering as well as z-culling is a very complicated topic and mostly comes down to application-specific implementations. Definitely not happening on the OpenGL or GPU level.

http://http.developer.nvidia.com/GPUGems/gpugems_ch29.html - as @normen says, it’s quite complicated


Ah, I see. That article was really helpful to see the issues involved.
GPU occlusion culling not being worth it below several hundred fragments - that was a surprise.
So for a Minecraft-style level of detail, occlusion culling would be relevant only at the application level (since JME doesn’t do any occlusion culling).
Good to know, that’s definitely going to influence application design.