Anybody have any experience with this?
I’ve found D3D examples but I think I need some OpenGL examples to really understand. Maybe even with some pictures…
The silence is deafening.
Here’s the original paper: http://www.cbloom.com/3d/techdocs/splatting.txt
Here’s screenshots of what it turns out like: http://www.cbloom.com/3d/splatting/index.html
The concept, as far as I can gather, is to alpha blend the textures of one quad into another using a multitextured alpha map. We can do this a lot more easily nowadays (the article was written in 2000) since most hardware supports at least a couple of multitexture channels. My old 2002 card supports 3.
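For what it's worth, the core per-texel operation here is just an alpha-weighted interpolation between two textures; if I remember the extension right, this is roughly what OpenGL's ARB_texture_env_combine GL_INTERPOLATE mode computes (Arg0*Arg2 + Arg1*(1-Arg2)) in hardware. Here's a minimal CPU-side sketch in plain Java (no jME or GL calls; single-channel textures, and all the names are illustrative, not any real API):

```java
// Minimal sketch of the splatting blend: per texel, the alpha map decides
// how much of each texture shows through. Not jME API; single-channel
// float arrays stand in for real textures.
class SplatBlend {
    // result = base * (1 - alpha) + overlay * alpha, per texel
    static float[] blend(float[] base, float[] overlay, float[] alpha) {
        float[] out = new float[base.length];
        for (int i = 0; i < base.length; i++) {
            out[i] = base[i] * (1f - alpha[i]) + overlay[i] * alpha[i];
        }
        return out;
    }
}
```

With alpha 0 you get pure base texture, alpha 1 pure overlay, and anything in between gives the smooth edge the screenshots show.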
I’m missing some understanding of the above article. Is he saying that a particular texel is a combination of all the surrounding textures?
Right now I’m putting a different texture on each of my quads. Instead of having the blatant edges showing I’d like them to blend from quad to quad.
In my terrain, the height is produced by a height map, but the textures on the quads are not derived from height but from a land-usage bitmap. Therefore, the procedural system in jME doesn’t seem to be a good fit. Plus, since my map is VERY large, having just one texture over the whole thing would be memory intensive.
Could someone who has a better idea of how these things work take a look at the article above and help me out? Thanks.
I probably won’t be much help… I’m just learning 3D programming myself.
After your initial post, I searched on google to see if I could find anything… and found the same article that you reference. I haven’t tried to read it very carefully, but I think it’s a little beyond my knowledge at this point. The images I saw are definitely nice, though.
I also saw an old archived post that said they had successfully implemented the theory in that article in Java3D. Not that that’s much help for jME usage, but it’s encouraging at least.
Is he saying that a particular texel is a combination of all the surrounding textures?
Isn't a texel just the "pixel" of the texture image being used for a given pixel on your screen (on a 3D surface, of course)? It's something like that; I have a good definition in a book at home.
Anyway, if that's the case, it would make sense that that's what he's saying: any given texel is a combination of all the texels around it. That would give you the smooth transitions/combinations that splatting is supposed to give you.
My guess is you're trying to do something like this:
1) Find all the places where your textures butt up against each other
2) For every point, select a small area around it
3) Average all the texture details of that area to come up with a generated texture for the individual point.
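If it helps, steps 2–3 can be sketched as a simple box filter over the land-usage grid: for one terrain type, the fraction of that type in a small window around each cell becomes its alpha. This is just my layman's reading, in plain Java; the 3x3 window and all the names are assumptions, not anything from the article:

```java
// From a land-usage grid of terrain-type ids, derive a per-cell alpha for
// one type by averaging its presence over the 3x3 neighborhood. The alpha
// is 1 deep inside a patch of that type and fades out at the edges.
class AlphaFromUsage {
    static float[][] alphaFor(int[][] usage, int type) {
        int h = usage.length, w = usage[0].length;
        float[][] alpha = new float[h][w];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int hits = 0, count = 0;
                for (int dy = -1; dy <= 1; dy++) {
                    for (int dx = -1; dx <= 1; dx++) {
                        int ny = y + dy, nx = x + dx;
                        if (ny < 0 || ny >= h || nx < 0 || nx >= w) continue;
                        count++;                       // cells actually sampled
                        if (usage[ny][nx] == type) hits++;
                    }
                }
                alpha[y][x] = (float) hits / count;    // fraction of this type nearby
            }
        }
        return alpha;
    }
}
```

A bigger window (or a proper Gaussian) would just give a wider, softer transition band.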
Sorry, probably not much help. That's my layman's understanding of it, though. If I get some time today (unfortunately, not very likely), I'll try to read the article in a little more depth and see if I can understand any of it.
--K
Reading through the article myself, what he’s doing is generating alpha textures for each terrain type (grass, rock, water, etc.) per TerrainBlock (to use our terms). He then renders each TerrainBlock X number of times (where X is the number of terrain types there are on that block). On each “pass” you set up multitexturing with the type’s texture (say “grass.png” or something) as your first texture and the generated alpha map as your second texture (setting the blend params to use alpha to decide what the final texture color will be).
He suggests rendering the whole block first with the most common texture set in order to eliminate possible transparent holes between patches of terrain types. (in that pass, single texturing would do fine.)
He also suggests giving each terrain type a global priority level so they are drawn in the same order for each block, otherwise you’ll get artifacts at the edges of blocks.
He then goes on to discuss more advanced techniques to get better visual performance. But I think this is probably enough for you to start with.
So basically, I think you can do this in jME, you just need to generate the alphatextures on your own (the hardest part, but not that hard… :)) and then use multitexturing.
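A CPU-side sketch of that pass structure might look like this, in plain Java (single-channel textures; not jME API): the base pass writes the most common type opaquely, then each remaining type is blended in with its generated alpha map. The type arrays are assumed to be pre-sorted by the global priority he mentions, so every block composites in the same order.

```java
// Simulates the multi-pass composite: pass 0 lays down the most common
// type with no alpha, then one "pass" per remaining type alpha-blends its
// texture using its generated alpha map. typeTex/typeAlpha are assumed
// sorted by the global priority order the article suggests.
class SplatPasses {
    static float[] composite(float[] baseTex, float[][] typeTex, float[][] typeAlpha) {
        float[] frame = baseTex.clone();              // pass 0: opaque base fill
        for (int t = 0; t < typeTex.length; t++) {    // one pass per terrain type
            for (int i = 0; i < frame.length; i++) {
                float a = typeAlpha[t][i];
                frame[i] = frame[i] * (1f - a) + typeTex[t][i] * a;
            }
        }
        return frame;
    }
}
```

On real hardware each of those loop iterations would be a draw call with the type texture and its alpha map bound to two texture units.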
The real catch is that at the moment you would have to have multiple copies of the same TerrainBlock, one for each terraintype encountered on it. What you really want is a way to reuse the geometry of one TerrainBlock for multiple draw calls. You could almost use CloneNode and Clone for this, but it doesn’t support unique renderstates per Clone. I’ve been pondering a GeometryClone object for some time now, perhaps I’ll have a go at it soon… For the now though you could experiment getting splatting working via multiple copies of your TerrainBlocks.
Hope that helps!
Thanks, it does. I was wondering if he was advocating multiple renders to make it work… that’s scary when I’m only getting 35 frames per second to start with.
So, what I’m now going to do is create one texture per chunk (TerrainBlock) programmatically, using the source textures (really just images loaded short-term) as starting pixel colors and mixing them together into one. I can do all this in the same external application that creates the map mesh for me. This way my load time can be much smaller, and I’ll just save the generated terrain images.
This is kind of a ‘poor’ man’s version of what he’s doing so I don’t have to do multiple passes or even multi-texturing.
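The offline bake could look something like this: a normalized, per-texel weighted mix of all the source textures into one image per block, so runtime needs neither extra passes nor multitexturing. Plain Java sketch; the weight arrays (e.g. derived from the land-usage bitmap) and all names are my assumptions, not the poster's actual tool:

```java
// Offline "poor man's splatting": collapse N source textures into one baked
// texture per block by a normalized weighted average per texel. The weights
// would come from something like the land-usage bitmap.
class BakeBlockTexture {
    static float[] bake(float[][] sources, float[][] weights) {
        int n = sources[0].length;
        float[] out = new float[n];
        for (int i = 0; i < n; i++) {
            float sum = 0f, wsum = 0f;
            for (int s = 0; s < sources.length; s++) {
                sum += sources[s][i] * weights[s][i];
                wsum += weights[s][i];
            }
            out[i] = wsum > 0f ? sum / wsum : 0f; // guard against uncovered texels
        }
        return out;
    }
}
```

The result is a single texture per TerrainBlock, which is exactly why it trades runtime cost for offline processing time and disk space.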
Good, I ended up using a variation of the procedural terrain code you find in jME, and then saving out each terrain block’s texture after I generate it. I ended up creating an external application that generates my terrain mesh and terrain textures based on height maps and land usage maps.
I also use jME’s CLOD code for each block, and can keep the fps above 100 with a mesh that starts out with more than 100K tris.
For this particular map I ended up with about 200 textures. It takes a bit to load, but I haven’t optimized yet.
http://hicom.resonus.net/ftproot/newmapscreen.jpg
The grid effect is because I’m not good enough with my textures yet :). Plus I still have some leftover floating-point and integer<->float math issues that aren’t placing my texture coordinates quite right.
I haven’t run into a problem with the gpu memory.
Yes, you can put more than one layer of polys over each other, but to get them to blend right you can’t have them in exactly the same place (you’d use multitexturing for that). You’d have to have one slightly above the other… might be tricky.
Considering that I’m blending 20 different textures together, the processing time for the texture creation is not minor, which is why I want to do it outside the main application. It also rules out multitexturing, since there are at most (that I know of) 8 multitexture channels, which means multiple render passes.
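The arithmetic behind that: if each pass can bind at most C textures, T source textures need at least ceil(T/C) passes, so 20 textures over 8 channels is at least 3 passes. A trivial helper, ignoring the extra units the alpha maps themselves would consume (the real budget is tighter):

```java
// Rough lower bound on render passes: ceil(textures / channels).
// Real budgets are tighter, since alpha maps also occupy texture units.
class PassCount {
    static int passes(int textures, int channels) {
        return (textures + channels - 1) / channels; // integer ceiling division
    }
}
```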
So I feel it’s all a trade off. Plus, since I’m going to be adding a bunch of stuff still to the map, I want to save the number of tris I’m doing.
Thanks for the questions. It helps me to figure things out in my brain.
Guurk, nice job, that’s looking fantastic.
Looks good.
I’m going to put terrain and clodmesh into the jme loader. This way, you can generate your clod records before users use the application, and those records can be quickly loaded into jME when it’s run. That’s where a lot of that load time goes.
That would be awesome. Thanks.
The jme reader/writer should now support clod/autoclod/terrainblock/terrainpage/bounding volumes. It doesn’t yet support the procedural texture generator, though. Try saving the terrain of your block to a file and reading it from there, and see if that helps any. If you’re using clods for your terrain it should load a lot faster. You’ll have to add the texture yourself (if you use the procedural texture generator), though.