Alternate methods of multitexturing (not alpha-map png based like the terrain examples)

I’m trying to mimic the appearance of another application’s texturing. I have a list of vertices and faces (triangles) making up a “landscape mesh”, which should be textured with about 7-15 different textures (depending on the terrain of the “landscape”). Each triangle has a texture code associated with it, signifying which texture that particular triangle should mostly consist of. And of course, the textures should blend smoothly across face boundaries.



So I’m trying to develop a strategy that allows this (which does NOT utilize pre-made alpha map PNG files; the texture alphas need to be computed at run time). Right now I figure that if I calculate the “strength” of each texture at each vertex (in the vertex shader), by factoring in the terrain types of all of its neighboring faces (unsure how to do this yet), I should be able to set alpha values based on how far a pixel is from a vertex. The generated ‘alpha map’ would be used by the frag shader to blend each texture per pixel.
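Roughly what I’m picturing for the per-vertex weight calculation (just a sketch; the types here are stand-ins for my map data, and actually finding a vertex’s adjacent faces is the part I’m unsure about):

[java]
import java.util.List;

// A minimal sketch of the per-vertex weight idea: the caller passes the terrain codes of
// the faces adjacent to one vertex, each face "votes" for its texture, and the votes are
// normalized into blend weights. Hypothetical helper, not working code from my project.
class VertexTextureWeights {

    static float[] weightsFor(List<Integer> adjacentFaceTerrainCodes, int numTextures) {
        float[] weights = new float[numTextures];
        for (int code : adjacentFaceTerrainCodes) {
            weights[code] += 1f;                 // one vote per neighboring face
        }
        float total = Math.max(1, adjacentFaceTerrainCodes.size());
        for (int i = 0; i < numTextures; i++) {
            weights[i] /= total;                 // normalize so the weights sum to 1
        }
        return weights;
    }
}
[/java]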



Is this even feasible, or should I be looking at a totally different strategy? Even if it does work, I’m not certain how it’d look. I have the shader code for the application I’m trying to mimic (it’s HLSL while we use GLSL), but it seems like they’re doing this blending step elsewhere:



[java]sampler MeshTextureSampler = sampler_state
{
    Texture = diffuse_texture;
    AddressU = WRAP;
    AddressV = WRAP;
    MinFilter = LINEAR;
    MagFilter = LINEAR;
};[/java]



I’m not sure what this HLSL “MeshTextureSampler” is, but it seems like this application may have pre-blended all the textures as needed and created a single texture for the entire mesh based on the face/terrain code data. In the pixel/fragment shader, all they really seem to do is this:



float4 tex_col = tex2D(MeshTextureSampler, In.Tex0);



After that it’s just shadows, lighting, etc. – no texture blending at all as far as I can tell, which leads me to believe this blending work is being done on the CPU beforehand. Any suggestions welcome.
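(For what it’s worth, if they really are pre-baking one blended texture on the CPU, I’d guess the rough shape is something like the sketch below; the tile images and the weight lookup are placeholders I made up, not anything from their code.)

[java]
import java.awt.AlphaComposite;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

// Guesswork sketch of CPU-side pre-blending: tile each terrain texture across one big
// image, weighted by a per-tile alpha derived from the face/terrain codes.
class BakedTerrainTexture {

    static BufferedImage bake(BufferedImage[] tiles, float[][][] tileWeights, int size) {
        BufferedImage baked = new BufferedImage(size, size, BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = baked.createGraphics();
        for (int t = 0; t < tiles.length; t++) {
            int tw = tiles[t].getWidth();
            int th = tiles[t].getHeight();
            for (int y = 0; y < size; y += th) {
                for (int x = 0; x < size; x += tw) {
                    float alpha = tileWeights[t][x / tw][y / th];   // how strongly texture t shows here, 0..1
                    g.setComposite(AlphaComposite.getInstance(AlphaComposite.SRC_OVER, alpha));
                    g.drawImage(tiles[t], x, y, null);
                }
            }
        }
        g.dispose();
        return baked;
    }
}
[/java]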

Hi, here is my idea:

Let’s suppose I have a terrain with 30 textures. Do I have to use all 30 texture coordinates?


  1. Have one big image with all 30 possible textures; you only change the UV coordinates to specify which texture the corresponding triangle shows (see the sketch after this list).

    Advantage: easy if you only want to show one specific mini-texture (one of the 30) on a specific triangle.

    Disadvantage: you must be a math freak to understand how to make a smooth transition from one mini-texture to another, so it doesn’t look tiled, i.e. so there are no seams.


  2. Texture blending is merging all textures into one? Won’t that have issues with the size of the texture, e.g. if your map is bigger than 2048x2048 pixels?
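For idea 1, the UV remapping itself is the easy part; assuming the mini-textures sit in a regular grid inside the atlas (an assumption, and this ignores the bleeding/seam problem entirely), it would be roughly:

[java]
// Sketch of idea 1: pack the 30 mini-textures into a grid atlas and remap each triangle's
// UVs into the sub-rectangle picked by its terrain code.
class AtlasUV {

    static float[] remap(float u, float v, int terrainCode, int tilesPerRow, int tilesPerColumn) {
        float tileW = 1f / tilesPerRow;
        float tileH = 1f / tilesPerColumn;
        int col = terrainCode % tilesPerRow;
        int row = terrainCode / tilesPerRow;
        // squeeze the original 0..1 UVs into the chosen tile's sub-rectangle
        return new float[] { col * tileW + u * tileW, row * tileH + v * tileH };
    }
}
[/java]

E.g. with 30 textures in a 6x5 grid, terrain code 13 ends up in column 1, row 2 of the atlas.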

1 - The UV coords can probably be shared between textures, but I think my issue may be that even getting access to the “adjacent face data” I need might not be possible?



2 - I don’t know; is there a maximum texture dimension? If the other app is doing this, I’m not sure how yet. This is the shader code which I believe it uses, but like I said, I don’t see it doing any texture blending anywhere; they just seem to be using a single texture, “MeshTextureSampler”:



(of course these are HLSL, not GLSL!)

Vertex shader



[java]struct VS_OUTPUT_MAP
{
    float4 Pos            : POSITION;
    float4 Color          : COLOR0;
    float2 Tex0           : TEXCOORD0;
    float4 SunLight       : TEXCOORD1;
    float4 ShadowTexCoord : TEXCOORD2;
    float2 ShadowTexelPos : TEXCOORD3;
    float  Fog            : FOG;

    float3 ViewDir        : TEXCOORD6;
    float3 WorldNormal    : TEXCOORD7;
};

VS_OUTPUT_MAP vs_main_map(uniform const int PcfMode, float4 vPosition : POSITION, float3 vNormal : NORMAL,
                          float2 tc : TEXCOORD0, float4 vColor : COLOR0, float4 vLightColor : COLOR1)
{
    INITIALIZE_OUTPUT(VS_OUTPUT_MAP, Out);

    Out.Pos = mul(matWorldViewProj, vPosition);

    float4 vWorldPos = (float4)mul(matWorld, vPosition);
    float3 vWorldN = normalize(mul((float3x3)matWorld, vNormal)); //normal in world space

    Out.Tex0 = tc;

    float4 diffuse_light = vAmbientColor;

    if (true /*_UseSecondLight*/)
    {
        diffuse_light += vLightColor;
    }

    //directional lights, compute diffuse color
    diffuse_light += saturate(dot(vWorldN, -vSkyLightDir)) * vSkyLightColor;

    //apply material color
    // Out.Color = min(1, vMaterialColor * vColor * diffuse_light);
    Out.Color = (vMaterialColor * vColor * diffuse_light);

    //shadow mapping variables
    float wNdotSun = saturate(dot(vWorldN, -vSunDir));
    Out.SunLight = (wNdotSun) * vSunColor * vMaterialColor * vColor;
    if (PcfMode != PCF_NONE)
    {
        float4 ShadowPos = mul(matSunViewProj, vWorldPos);
        Out.ShadowTexCoord = ShadowPos;
        Out.ShadowTexCoord.z /= ShadowPos.w;
        Out.ShadowTexCoord.w = 1.0f;
        Out.ShadowTexelPos = Out.ShadowTexCoord * fShadowMapSize;
        //shadow mapping variables end
    }

    Out.ViewDir = normalize(vCameraPos - vWorldPos);
    Out.WorldNormal = vWorldN;

    //apply fog
    float3 P = mul(matWorldView, vPosition); //position in view space
    float d = length(P);

    Out.Fog = get_fog_amount_new(d, vWorldPos.z);
    return Out;
}
[/java]

“Pixel” (Fragment) shader



[java]PS_OUTPUT ps_main_map(VS_OUTPUT_MAP In, uniform const int PcfMode)
{
    PS_OUTPUT Output;

    float4 tex_col = tex2D(MeshTextureSampler, In.Tex0);
    INPUT_TEX_GAMMA(tex_col.rgb);

    float sun_amount = 1;
    if ((PcfMode != PCF_NONE))
    {
        sun_amount = GetSunAmount(PcfMode, In.ShadowTexCoord, In.ShadowTexelPos);
    }
    Output.RGBColor = tex_col * ((In.Color + In.SunLight * sun_amount));

    //add fresnel term
    {
        float fresnel = 1 - (saturate(dot(normalize(In.ViewDir), normalize(In.WorldNormal))));
        fresnel *= fresnel;
        Output.RGBColor.rgb *= max(0.6, fresnel + 0.1);
    }
    // gamma correct
    OUTPUT_GAMMA(Output.RGBColor.rgb);

    return Output;
}
[/java]

How about this - what if I used the extra texCoord vertex attributes (texCoord2-8) to store the “strength/alpha” of each texture at any particular vertex? If a vertex is dominated 90% by one texture (say a mountain texture), I’d store a high value in the mountain texture’s texCoord field, meaning the frag shader should sample that texture more heavily than the others. It just so happens that there are 2 components * 7 = 14 possible values, so that’d be enough for sure (I could probably get away with just 7 textures). The main thing I’d need to work out is how to calculate these alpha weights per vertex, but I could probably manage that since I’d be in Java with all the map data available, simply writing to the texCoord vertex buffers.

I’d be counting on the GPU to do what it does with vertex colors: interpolate the texture alphas based on distance to the vertices, theoretically blending these things somewhat. (Using a varying input to the frag shader, would it do this, or am I completely misunderstanding how this works?)

Make any sense?

  1. So you are proposing to fill texCoord2-8 with alphas that specify the positions of the textures. Generate one heightmap for each texcoord that represents that texture’s alpha.



    [java]HillHeightMap heightmap = null;

    try {
        heightmap = new HillHeightMap(513, 1000, 50, 100, (byte) 3);
    } catch (Exception ex) {
        ex.printStackTrace();
    }[/java]

    Heightmap chooses N random points and increases their value by a random amount. Then it smooths those heights with the nearby points.



  2. I realised that using only one texcoord will cause seams. Maybe if you use a second texcoord it will allow a smooth transition between two nearby textures, but it will require leet haxx maths.
  3. The other idea is to have one texture for the whole map: http://en.wikipedia.org/wiki/MegaTexture

Well, mostly yes. (I however can’t see the benefit of using an additional texture coord instead of using an alphamap.)

(example images: a heightmap and an alphamap)


Sorry, my mistake; now that I see it better, I think EmpirePhoenix is right.

What you actually want to do is generate an “alphamap”. I think this is easy, because it is only a hue operation, e.g. you take the heightmap and hue some places with red or green or blue.

So you will have to implement the hue operation in Java and create the alphamap from your code.
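Something along these lines (just a sketch; WeightSource is a made-up stand-in for wherever your per-texture weights actually come from):

[java]
import java.awt.image.BufferedImage;

// A small sketch of painting an "alphamap" in code: each color channel stores the blend
// weight of one texture at that map position.
class AlphaMapPainter {

    interface WeightSource {
        float weightAt(int textureIndex, int x, int y);   // 0..1, hypothetical lookup
    }

    static BufferedImage paint(int size, WeightSource weights) {
        BufferedImage alphaMap = new BufferedImage(size, size, BufferedImage.TYPE_INT_ARGB);
        for (int y = 0; y < size; y++) {
            for (int x = 0; x < size; x++) {
                int r = (int) (255 * weights.weightAt(0, x, y));   // texture 0 -> red
                int g = (int) (255 * weights.weightAt(1, x, y));   // texture 1 -> green
                int b = (int) (255 * weights.weightAt(2, x, y));   // texture 2 -> blue
                int a = (int) (255 * weights.weightAt(3, x, y));   // texture 3 -> alpha
                alphaMap.setRGB(x, y, (a << 24) | (r << 16) | (g << 8) | b);
            }
        }
        return alphaMap;
    }
}
[/java]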

I think a megatexture is what they’re doing in the other app, but I’m not familiar with that. (And wouldn’t that mean putting more work on the CPU? This ‘alpha’ has to be able to change often as a terrain is ‘painted’.)



I was able to do this by using additional texCoords (letting the GPU interpolate the alpha values). The solution I posted above (the last one) does seem to work, and it looks pretty darn similar to what I was aiming for (minus shadows, of course).

I still need to work on the water/river textures so they aren’t blurry. Not yet sure if I need to do a second pass with another SceneProcessor, tweak my current shaders, or what. Then there’s some shadows and other minor effects to add. If anyone has any suggestions for tackling that stuff I’m listening.



http://s1.postimage.org/2an3watey/mb2.jpg

This has about 4-5 blending textures.



Maybe I’m wrong, but I’m not sure a heightmap or alphamap would have done this so easily? If I can do this ‘cleaner’ with an alphamap I’d be interested (it’s not so pretty now), but are you talking about creating the alphamap beforehand (on the CPU)? I’m not sure how I’d generate all that alpha data. Basically what I’m doing here is no different than using plain vertex colors and letting the GPU interpolate between those. If there’s a better way of getting the GPU to do that for me, that’d be great, but the only alpha examples I found in the jME source were with static, pre-made PNG alphas.

Well, basically with an alphamap you’d define something like red = (texture one); blue = (texture two); etc. (Parentheses for better reading; it’s quite confusing without them.)

Now you can simply define how much of which texture should be present at which fragment.

Actually, what you did there looks really similar to the TerrainTest stuff already in the engine.

(Also, it does not matter where the alphamap comes from; it can be anything from a JPG to a procedural creation right on the fly on the GPU.)

Also note that if you use an alphamap, the values will also be interpolated between the next few pixels.

I strongly suggest you take a look at the alphamap terrain stuff in the test/terrain package, and try to understand how it works and whether it will help you.
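From memory, the material setup in those tests looks roughly like this (a sketch only; the parameter names and asset paths follow the jME3 test assets and may differ in your version):

[java]
import com.jme3.asset.AssetManager;
import com.jme3.material.Material;
import com.jme3.texture.Texture;

// Rough sketch of how the terrain tests drive splatting: one RGBA alphamap plus
// Tex1/Tex2/Tex3, where each alphamap channel controls how strongly one texture shows.
class TerrainSplatMaterial {

    static Material create(AssetManager assetManager, Texture alphaMap) {
        Material mat = new Material(assetManager, "Common/MatDefs/Terrain/Terrain.j3md");
        mat.setTexture("Alpha", alphaMap);    // loaded from a PNG or generated at runtime

        Texture grass = assetManager.loadTexture("Textures/Terrain/splat/grass.jpg");
        grass.setWrap(Texture.WrapMode.Repeat);
        mat.setTexture("Tex1", grass);        // controlled by the alphamap's red channel
        mat.setFloat("Tex1Scale", 64f);

        return mat;
    }
}
[/java]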

Yes, why not generate the alphamap beforehand on the CPU?

A texture is just an image.



So load a BufferedImage with the original data (java.awt says hi) and do your stuff.

Why reinvent the wheel? You already have all the alphas in the heightmap; it just needs some color.

When you finish, tell jME to convert the BufferedImage to a texture.



And if you care about performance, this process adds zero burden since it is only done once at application init.
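The conversion itself could look something like this (a sketch assuming jME3’s AWTLoader; adjust for your version):

[java]
import java.awt.image.BufferedImage;

import com.jme3.texture.Image;
import com.jme3.texture.Texture2D;
import com.jme3.texture.plugins.AWTLoader;

// Sketch of the BufferedImage -> texture step. For an editor you can repaint the
// BufferedImage and push a new Image onto the same Texture2D instead of rebuilding
// the material.
class AlphaMapUpload {

    private final AWTLoader loader = new AWTLoader();

    Texture2D toTexture(BufferedImage alphaMap) {
        Image img = loader.load(alphaMap, true);            // flipY for OpenGL's convention
        return new Texture2D(img);
    }

    void repaint(Texture2D existing, BufferedImage repainted) {
        existing.setImage(loader.load(repainted, true));    // re-upload after editing
    }
}
[/java]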

Well, an additional complication is that it can’t just be done once; the terrain textures (even the mesh itself) will be painted and changed in real time (i.e. a map editor).



I’ve already looked at the terrain examples before, and came to the same conclusion I’m at now (right or wrong). I think I understand how alphamaps basically work (no different from a stretched mask in a graphics editor app?), using each channel as an alpha for one texture. (Now, wouldn’t this alphamap need to stretch over the entire terrain? That’ll be a couple of huge alphamaps, even if the resolution is reduced.) I’d need to use four alphamaps, which may not be an issue, but I still can’t see how this would work efficiently. Let me try to explain where I’m at (because it is ugly; I was focusing on getting it working first), and see if we can’t improve on it.



Currently I have a bunch of float arrays I’m passing as “texcoords” (which are actually vertex texture alpha values). Notice I had to stuff two texture alphas into each texcoord (one per component, x and y), hence the long names (plainForestBuffer = plain & forest buffer, etc.). The values being set (texWeights) are weighted values calculated from the vertex’s neighboring face terrain types (recall the form of the data I’m working with: the terrain codes are per-face).



Sorry for code formatting, java tag is still mangling things a bit.

[java]
float[] oceanDeepOceanBuffer = new float[vertices.size() * 2];
float[] mountainSteppeMountainBuffer = new float[vertices.size() * 2];
float[] steppeSteppeForestBuffer = new float[vertices.size() * 2];
float[] plainForestBuffer = new float[vertices.size() * 2];
float[] snowSnowForestBuffer = new float[vertices.size() * 2];
float[] desertDesertForestBuffer = new float[vertices.size() * 2];
float[] riverFordBuffer = new float[vertices.size() * 2];
index = 0;

for (Vertex v : vertices) {
    float[] texWeights = MapUtil.calcuateVertexTextureWeights(v);

    oceanDeepOceanBuffer[index] = texWeights[0];
    oceanDeepOceanBuffer[index + 1] = texWeights[1];
    mountainSteppeMountainBuffer[index] = texWeights[2];
    mountainSteppeMountainBuffer[index + 1] = texWeights[3];
    steppeSteppeForestBuffer[index] = texWeights[4];
    steppeSteppeForestBuffer[index + 1] = texWeights[5];
    plainForestBuffer[index] = texWeights[6];
    plainForestBuffer[index + 1] = texWeights[7];
    snowSnowForestBuffer[index] = texWeights[8];
    snowSnowForestBuffer[index + 1] = texWeights[9];
    desertDesertForestBuffer[index] = texWeights[10];
    desertDesertForestBuffer[index + 1] = texWeights[11];
    riverFordBuffer[index] = texWeights[12];
    riverFordBuffer[index + 1] = texWeights[13];
    index += 2;
}
[/java]



I’m setting the buffers in the mesh like so…



[java]
mapMesh.setBuffer(VertexBuffer.Type.TexCoord2, 2, oceanDeepOceanBuffer);
mapMesh.setBuffer(VertexBuffer.Type.TexCoord3, 2, mountainSteppeMountainBuffer);
// … up to TexCoord8
[/java]



In the vertex shader, the “texCoords” simply pass through like so:



[java]
varying vec2 texCoord2;

void main() {

    texCoord2 = inTexCoord2;

[/java]



The fragment shader uses the “texCoords” (alpha weights), interpolated by the GPU, to mix the corresponding textures as needed. (All textures use the same actual texCoords currently):



[java]
varying vec2 texCoord2;

void main() {

    vec4 des = texture2D(m_desertTex, texCoord.xy * m_desertScale);
    vec4 ste = texture2D(m_steppeTex, texCoord.xy * m_steppeScale);
    vec4 mou = texture2D(m_mountainTex, texCoord.xy * m_mountainScale);
    vec4 oce = texture2D(m_oceanTex, texCoord.xy * m_oceanScale);
    vec4 pla = texture2D(m_plainTex, texCoord.xy * m_plainScale);
    vec4 riv = texture2D(m_riverTex, texCoord.xy * m_riverScale);
    vec4 sno = texture2D(m_snowTex, texCoord.xy * m_snowScale);

    outColor = mix(outColor, oce, texCoord2.x);
    outColor = mix(outColor, oce, texCoord2.y);
    outColor = mix(outColor, mou, texCoord3.x);
    outColor = mix(outColor, mou, texCoord3.y);
    outColor = mix(outColor, ste, texCoord4.x);
    outColor = mix(outColor, ste, texCoord4.y);
    outColor = mix(outColor, pla, texCoord5.x);
    outColor = mix(outColor, pla, texCoord5.y);
    outColor = mix(outColor, sno, texCoord6.x);
    outColor = mix(outColor, sno, texCoord6.y);
    outColor = mix(outColor, des, texCoord7.x);
    outColor = mix(outColor, des, texCoord7.y);
    outColor = mix(outColor, riv, texCoord8.x);
    outColor = mix(outColor, oce, texCoord8.y);
[/java]



An old post I found in search seemed to indicate that “misusing” vertex buffers like texCoordX was the only way to get different types of data to the shaders, so I just accepted that as a solution and tried this. I don’t particularly like the specifics of the solution (especially abusing the texCoord buffers), but since I only need a single value (weight) per texture, per vertex (for the GPU to interpolate), I can’t picture how I’d map those weights to each vertex in an alphamap. Or “define how much of which texture is present at which fragment”, as it was said - wouldn’t this need to be a huge alphamap where I’d have to figure out a way to fill each polygon with a “blurred edge triangle”? I think that’d take forever, even if I could do it without a ton of code.



This is the same data using vertex colors instead; you can see where the GPU interpolates the various colors. In the example below all I had to do was set a single color value on each vertex (via the Color vertex buffer), and the GPU did all the rest - same as what I’m doing with the alphas:



http://s3.postimage.org/yk83th4iq/mb3.jpg
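For reference, that vertex-color test boils down to something like this (a sketch; the actual color values come from the per-face terrain codes):

[java]
import com.jme3.scene.Mesh;
import com.jme3.scene.VertexBuffer;

// Sketch of the vertex-color version pictured above: one RGBA color per vertex, which the
// GPU interpolates across each triangle exactly like it does for the texCoord "weights".
class VertexColorPreview {

    static void apply(Mesh mapMesh, float[] rgbaPerVertex) {
        // rgbaPerVertex holds 4 floats (r, g, b, a) for every vertex in the mesh
        mapMesh.setBuffer(VertexBuffer.Type.Color, 4, rgbaPerVertex);
    }
}
[/java]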