Using GPU for erosion

Hi everybody, I am working on a procedural world generator (GitHub - ftomassetti/worldgen) and I visualize the resulting worlds with jmonkeyengine. I was wondering how I could use GPU to perform real time erosion. Can you point me to the techniques I need to learn? Thank you so much!

This is one of the go-to articles for that: http://hal.inria.fr/docs/00/40/20/79/PDF/FastErosion_PG07.pdf
It’s called Hydraulic Erosion.

Thank you! I read the paper, and I had previously implemented the same algorithm in… JRuby. As you might suspect, it is pretty slow, so I thought I could run it on the GPU, as the author of the algorithm did.
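For anyone following along, here is a deliberately simplified CPU-side sketch of one hydraulic-erosion step, just to show the general idea before moving it to the GPU: rain falls, water dissolves ground, flows downhill carrying sediment, and deposits whatever exceeds its carry capacity. This is not the paper's full pipe model; the constants (`rain`, `solubility`, `capacity`) and the 1D grid are illustrative assumptions.

```java
public class ErosionSketch {
    // One simplified erosion pass over a 1D heightmap. Illustrative only:
    // the constants and the 1D flow rule are NOT from the paper.
    static void erodeStep(double[] height, double[] water, double[] sediment) {
        int n = height.length;
        double rain = 0.01, solubility = 0.1, capacity = 0.3; // made-up constants
        // 1. rain falls; some ground dissolves into the water column
        for (int i = 0; i < n; i++) {
            water[i] += rain;
            double d = solubility * water[i];
            height[i] -= d;
            sediment[i] += d;
        }
        // 2. water (and the sediment it carries) flows to the lower neighbor
        for (int i = 0; i < n - 1; i++) {
            double ti = height[i] + water[i];         // total column height, cell i
            double tj = height[i + 1] + water[i + 1]; // total column height, cell i+1
            int hi = ti > tj ? i : i + 1;             // higher cell gives water away
            int lo = (hi == i) ? i + 1 : i;
            double move = Math.min(water[hi], Math.abs(ti - tj) / 2);
            if (move <= 0 || water[hi] <= 0) continue;
            double sedMove = (move / water[hi]) * sediment[hi]; // sediment moves with water
            water[hi] -= move;
            water[lo] += move;
            sediment[hi] -= sedMove;
            sediment[lo] += sedMove;
        }
        // 3. deposit sediment the water can no longer carry
        for (int i = 0; i < n; i++) {
            double excess = sediment[i] - capacity * water[i];
            if (excess > 0) {
                sediment[i] -= excess;
                height[i] += excess;
            }
        }
    }
}
```

A useful sanity check for any implementation like this: material is conserved, i.e. the sum of heights plus suspended sediment never changes, since dissolving and depositing only shuffle mass between the two.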

I think I should use a fragment shader (a .frag file referenced from my material).

I don’t know how to invoke the fragment shader from my material to modify the height of a point.

Right now I use a frag shader in my material:

Technique {
    VertexShader GLSL100:   Common/MatDefs/Terrain/Terrain.vert
    FragmentShader GLSL100: MatDefs/MyTerrain.frag

    WorldParameters {
        WorldViewProjectionMatrix
    }

    Defines {
        TRI_PLANAR_MAPPING : useTriPlanarMapping
    }
}

but the frag shader just calculates the colors of the pixels (I only modified the Terrain material to use more textures):

void main(void) {
    ....
    gl_FragColor = outColor;
}

It does not modify the height of the point; the height comes from the heightmap, which is then passed to the TerrainQuad.

At this stage I am confused about the general design. I have experience in other fields, but I have never used 3D or shaders. I am not worried about learning the syntax of shaders; it is just that I don’t know what I can do with them, when I can invoke them, or how to combine them with TerrainQuad…

You’ll want to do this in a vertex shader.

Frag shader = let’s paint pixels!!

Vert shader is your big opportunity to F with the placement of vertices. But keep in mind: if you need to know the actual positions of those vertices outside the shader, that’s impossible without transform feedback.

I think the article actually uses a fragment shader.
They’re using R for the height, G for water content, etc.
(I’d probably use a greyscale 3D texture, it’s easier to add additional planes to cater for additional data in the erosion algorithm. But that’s just a detail.)
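To make that channel layout concrete, here is a CPU-side sketch of the same idea: one RGBA texel per terrain cell, with R = height, G = water, B = suspended sediment, and A left free for extra data. On the GPU a fragment shader would read and write these channels; the flat-array layout below mirrors how such a texture is laid out in memory (the class and accessor names are my own, not from the article or jME).

```java
// One RGBA texel per terrain cell: R = height, G = water,
// B = suspended sediment, A = spare. Stored as a flat float array,
// matching the memory layout of a width x height RGBA float texture.
public class TexelState {
    static final int R = 0, G = 1, B = 2, A = 3;
    final int width, height;
    final float[] data; // width * height * 4 floats

    TexelState(int w, int h) {
        width = w;
        height = h;
        data = new float[w * h * 4];
    }

    int base(int x, int y) { return (y * width + x) * 4; }

    float getHeight(int x, int y)   { return data[base(x, y) + R]; }
    float getWater(int x, int y)    { return data[base(x, y) + G]; }
    float getSediment(int x, int y) { return data[base(x, y) + B]; }

    void setHeight(int x, int y, float v)   { data[base(x, y) + R] = v; }
    void setWater(int x, int y, float v)    { data[base(x, y) + G] = v; }
    void setSediment(int x, int y, float v) { data[base(x, y) + B] = v; }
}
```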

I’m not sure whether a vertex shader would be faster. You’ll need the vertex array anyway, since that’s what the heightmap part of the final result gets transformed into. I just don’t know whether vertex shaders can execute as quickly as fragment shaders: they carry the additional coordinate baggage that, in a fragment shader, is simply a computation result. It probably depends on the GPU and driver version, though I suspect fragment shaders are faster simply because more of their operations follow standard patterns that drivers optimize for - but that’s just a guess; benchmarks would give more reliable answers.

Whatever the technique, this needs:

  • multipass rendering
  • possibly reading back texture data into Java-side buffers
    I have too little experience with these things to give concrete advice about them, unfortunately.
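The multipass part boils down to ping-pong rendering: the GPU can’t read and write the same texture in a single pass, so you keep two buffers and swap their roles each iteration. Here is a minimal CPU-side sketch of that control flow, with a plain function standing in for the erosion shader (the class and method names are my own):

```java
import java.util.function.BiConsumer;

// Ping-pong multipass sketch: two buffers alternate between being the
// read source and the write destination, one swap per "render pass".
public class PingPong {
    // Runs `pass` for the given number of iterations; `pass` reads its
    // first argument and writes its second. Returns the buffer that
    // holds the final result.
    static float[] run(float[] a, float[] b, int iterations,
                       BiConsumer<float[], float[]> pass) {
        float[] src = a, dst = b;
        for (int i = 0; i < iterations; i++) {
            pass.accept(src, dst);                    // one "render pass"
            float[] tmp = src; src = dst; dst = tmp;  // swap for the next pass
        }
        return src; // after the final swap, src holds the last output
    }
}
```

On the GPU the two `float[]` buffers become two textures attached to framebuffer objects, and the `pass` function becomes a full-screen quad drawn with the erosion fragment shader.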
@toolforger said: I think the article actually uses a fragment shader. They’re using R for the height, G for water content, etc.

nm… scratch that… frag shader… keep forgetting.

Thank you everybody!

Possibly reading back texture data into Java-side buffers

Mmm, so I could use the frag shader to calculate the “colors”, and then use those calculated colors to update the heightmap, right?

Any idea about reading back the texture into java-side buffers? Is it feasible with jmonkeyengine?

That’s exactly where my expertise ends, I know these techniques exist but I don’t know how to do it, or how to do it in JME.
I hope somebody else can chime in.

@ftomassetti said: Any idea about reading back the texture into java-side buffers? Is it feasible with jmonkeyengine?

I will answer the one question that I can: Yes, it’s feasible.

You might have a look at the render-to-texture examples - that’s where I’d start if I needed this.
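Once the bytes are back on the Java side, there is still a decode step. Assuming the frag shader wrote normalized height into the red channel of an RGBA8 texture and you read the pixels back into a ByteBuffer (jME’s Renderer has a readFrameBuffer method for this, though I haven’t verified the exact call sequence here), the conversion to a float heightmap might look like the sketch below; the class name and `maxHeight` parameter are my own:

```java
import java.nio.ByteBuffer;

public class HeightmapDecode {
    // Converts a read-back RGBA8 pixel buffer into a float heightmap,
    // taking the red byte of each texel as normalized height in [0,1]
    // and scaling it by maxHeight back into world units.
    static float[] decode(ByteBuffer rgba, int width, int height, float maxHeight) {
        float[] map = new float[width * height];
        for (int i = 0; i < map.length; i++) {
            int r = rgba.get(i * 4) & 0xFF; // red byte of texel i, read unsigned
            map[i] = (r / 255f) * maxHeight;
        }
        return map;
    }
}
```

Note that RGBA8 only gives you 256 height levels; for real terrain you’d probably want a float texture, or height packed across more than one channel.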

If you let a vertex shader render the heights of your heightmap, then you wouldn’t necessarily even need to extract the values back to the CPU. I didn’t read the articles, though.