I’ve read a small article about “PolyBumping”: basically, you make an extremely high-poly model (~300K polygons) and then derive a low-poly model (only 1.5K polygons!) from it.
My question is: how is this done? Is it a feature in some 3D modelling program? And how can I use it?
The idea of copying the detail of a high-poly model and sticking it onto a low-poly model doesn’t seem too far-fetched, and it gives nice results.
A little more insight from a site I found:
10. Could you please give us more explanation about this technology? How does it work? What are its advantages and risks?
A: The key idea behind PolyBump is deceptively simple. First we make a very high-poly model - most of our characters are around ~400,000 polys, but some go up to a million or more. We then take that mesh and transform it into a complex "normal map" - a map showing how light should reflect off all of the details, on a per-pixel basis. We then use that normal map as a sort of "lighting calculation texture" on a low-poly model - in the case of most of our mercenaries, 2,000 polys. This means that as we move this model beneath a real dynamic light, we see the light change as though all the original details were actually in place, when the model is actually much simpler. The end result is a character (wall, machine, vehicle, etc.) that looks like it is a million polys but is actually far cheaper. This means we can use a lot more of them in an environment, and more lights with them as well.
The big drawback, of course, is that this only works when we use real dynamic lights - because the details of a PolyBumped object are essentially built from their interaction with the light. This is why we developed the Dot3 Lightmapping technology, a new way of combining traditional lightmapping with per-pixel lighting to create a vivid lighting model that still lights the world with an effectively unlimited number of lights, all of which can express the PolyBump normal maps. Lightmaps and bump mapping used to be mutually exclusive, but our patent-pending dot3-lightmap technology makes the combination possible.
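That "lighting calculation texture" idea boils down to a per-pixel dot product. Here is a minimal Python sketch, assuming the standard RGB normal-map encoding; the helper names are my own, not Crytek's:

```python
import math

def decode_normal(rgb):
    """Map an 8-bit RGB texel (0-255) back to a unit normal:
    n = rgb / 127.5 - 1, then renormalize."""
    n = [c / 127.5 - 1.0 for c in rgb]
    inv = 1.0 / math.sqrt(sum(c * c for c in n))
    return [c * inv for c in n]

def dot3_lighting(rgb_texel, light_dir):
    """Per-pixel Lambert term max(0, N . L) - the core of dot3 bump mapping."""
    n = decode_normal(rgb_texel)
    inv = 1.0 / math.sqrt(sum(c * c for c in light_dir))
    l = [c * inv for c in light_dir]
    return max(0.0, sum(a * b for a, b in zip(n, l)))

# A "flat" texel (128, 128, 255) decodes to roughly (0, 0, 1),
# so a light shining straight along the normal gives full brightness.
print(round(dot3_lighting((128, 128, 255), (0.0, 0.0, 1.0)), 2))  # 1.0
```

In a real engine this runs in the pixel shader for every fragment, which is exactly why the lighting must be dynamic for the effect to show.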
Many 3D modelling programs support this. ZBrush is the one most commonly used for the process (it easily handles working with 3-million-poly models, and you can switch between the different LODs while editing). The programs generate a normal map and a UV set for the low-poly version; on the client side you just use normal mapping with that map (just Google it).
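For a rough idea of what such a baker outputs, here is a toy height-map-to-normal-map conversion in Python. It uses central differences only; the function name and the simplifications are my own - real tools instead ray-cast from the low-poly surface against the high-poly mesh:

```python
import math

def bake_normal_map(height, scale=1.0):
    """Convert a 2-D height grid into encoded RGB normal-map texels
    using central differences, clamped at the borders."""
    h, w = len(height), len(height[0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            # Height gradients in x and y.
            dx = (height[y][min(x + 1, w - 1)] - height[y][max(x - 1, 0)]) * scale
            dy = (height[min(y + 1, h - 1)][x] - height[max(y - 1, 0)][x]) * scale
            inv = 1.0 / math.sqrt(dx * dx + dy * dy + 1.0)
            n = (-dx * inv, -dy * inv, 1.0 * inv)
            # Encode each component from [-1, 1] into an 8-bit channel.
            row.append(tuple(int((c * 0.5 + 0.5) * 255) for c in n))
        out.append(row)
    return out

# A perfectly flat height field bakes to the uniform "flat" blue texel.
flat = bake_normal_map([[0.0] * 4 for _ in range(4)])
print(flat[0][0])  # (127, 127, 255)
```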
Yes, but the "new" development here seems to be that you can use a normal map together with lightmapping. Bla bla bla patents.
So if I understand correctly, the thing they're doing that makes it look so good is applying bump mapping to a model?
I'm going to find some info about normal mapping; I was under the impression that a normal map was a version of the high-polygon model made with fewer polygons, but enough to keep good detail (somewhere in between the PolyBump low-poly count and the full high-poly count).
A normal map uses a texture, instead of normals (which are per vertex), to do the lighting. You could look at http://www.unrealtechnology.com/html/technology/ue30.shtml , but a little closer to home we also have http://www.jmonkeyengine.com/jmeforum/index.php?topic=3054.0 (press 6 to see the difference).
The bad news is that we artists need to work a lot more now.
Normal maps, parallax mapping, displacement: there are several flavours. But to me it's all the same (with different results): I have to make both the high-res model and the low-poly one, and combining the two is not easy or comfortable… yet. Not at all.
ZBrush, Max 7 or later, and many other packages can do this.
What about that "Bla bla bla patents"?
Maybe don't call it "PolyBump" or any other fancy name that's already registered, but engines all over the internet are adding normal maps to their feature lists (Ogre, Irrlicht, etc.). I'm not that interested in it myself: it's an extra load of work, and it has killed many small companies, which now need a cinema-film-sized team to make the same games…
My friend and I are targeting low-end machines, and this technique tends to need Shader Model 2.0, which isn't even in the GeForce 4 generation, so it's no good for that…
But many open source engines do add it. It's not a problem, and there must be more than enough papers and code for it, freely available.
BTW, the flavours actually offer important differences; I wasn't being accurate before (I was a bit in a rush).
Normal maps, or dot3 bump mapping (maybe there are differences, I don't know), were popularized by Doom 3 but came from Crytek before that; the shot shown here is from that company. It's a technique I don't like very much: on silhouette contours you still see the peaks of the low-poly shape.
Parallax mapping is sort of more realistic; it's hard for me to explain.
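The core trick of parallax mapping can be sketched in a few lines (my own minimal formulation; real implementations do this per pixel in a shader): the texture coordinate is shifted along the view direction in tangent space, proportional to the sampled height, so raised areas appear to move relative to low ones as the camera moves.

```python
def parallax_uv(uv, view_dir, height_sample, height_scale=0.05):
    """Classic parallax offset.
    uv: texture coordinate; view_dir: (x, y, z) in tangent space with
    z > 0 pointing away from the surface; height_sample: grey in [0, 1]."""
    u, v = uv
    vx, vy, vz = view_dir
    offset = height_sample * height_scale
    # Shift the lookup along the view direction, scaled by height.
    return (u + vx / vz * offset, v + vy / vz * offset)

# Looking straight down the surface normal produces no shift at all.
print(parallax_uv((0.5, 0.5), (0.0, 0.0, 1.0), 1.0))  # (0.5, 0.5)
```

Tilt the view and the shift grows, which is exactly the depth illusion the technique is after.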
Displacement mapping has been in 3D art packages like Max forever. It's a kind of elevation map for a whole character or object, whose grey values contain the elevation data in a very accurate way…
I like this one the best, if it's really true displacement. The good thing is that new 3D cards can, or soon will be able to, do it in hardware in real time… Anyway, to me the polygon processing is happening either way…
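Unlike the two previous techniques, displacement actually moves geometry. A minimal sketch of that elevation-map idea (the function name and the one-height-per-vertex data layout are my own assumptions, not any package's API):

```python
def displace(vertices, normals, heights, amount=0.1):
    """True displacement: push every vertex along its normal by the grey
    value sampled from the displacement map (one height in [0, 1] per
    vertex here, to keep the sketch short)."""
    out = []
    for (vx, vy, vz), (nx, ny, nz), h in zip(vertices, normals, heights):
        d = h * amount
        out.append((vx + nx * d, vy + ny * d, vz + nz * d))
    return out

# One vertex on a flat surface, pushed straight up by half the max amount.
print(displace([(0.0, 0.0, 0.0)], [(0.0, 0.0, 1.0)], [0.5], amount=1.0))
# [(0.0, 0.0, 0.5)]
```

Because the silhouette is built from real displaced polygons, it has none of the flat-contour problem mentioned above for normal maps - at the cost of actually processing that geometry.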
I think DX10 has support for this, but my info is too vague.
The moment true displacement totally replaces normal maps, I'll be happy. It gets really close to reproducing the original high-res model the artist made.
BTW, Doom 3 demonstrated to me that it's all a balanced combination of graphics features… speculars are really important too, along with many other material properties. To me, normal mapping alone does not bring realism, and often a well-worked 3D model with good textures convinces me more…
I came across the JH Labs site while reading up on normal/displacement/parallax mapping. The Java source is available for review, and the Image Editor demos the displacement filter. It's a very well-put-together editor.