Perlin noise: permutation and gradient textures

I’m working on Perlin noise. Most of the examples I found implement the permutation and gradient tables using textures. Is there any way to do this using jME? I did a search through the available documentation and didn’t come across any case where a texture was created manually rather than from an image or something. Or maybe I’m missing the concept here? Please advise :slight_smile:

@alexd said: I'm working on Perlin noise. Most of the examples I found implement the permutation and gradient tables using textures. Is there any way to do this using jME? I did a search through the available documentation and didn't come across any case where a texture was created manually rather than from an image or something. Or maybe I'm missing the concept here? Please advise :)

Not sure I follow… when you say a texture created manually, do you mean an image, or a texture?

Is your goal to create an image? Or are you reading data from an image?

I imagine he means a texture lookup.

Here’s the thing though. If it’s done with a shader, it can be done in jME. It most probably can be done in jME anyway. :wink:

@t0neg0d said: Not sure I follow... when you say a texture created manually, do you mean an image, or a texture?

Is your goal to create an image? Or are you reading data from an image?

@madjack said: I imagine he means a texture lookup.

Here’s the thing though. If it’s done with a shader, it can be done in jME. It most probably can be done in jME anyway. :wink:

madjack is right, I need to create lookup tables using 1D and 2D textures, but all I found was how to create a texture from an image.

Here’s what I want to do:

The permutation table:

The first big change that is needed is the way we store the permutation table. In the CPU version, we iterated through a for loop in the class constructor and ‘randomly’ added the integer values between 0–255 to an array, which we called the permutation array. For the GPU, however, we will pass this ‘array’ to the GPU as a texture and use tex2D to sample the information. Here is the permTexture2D file that I will be using.


taken from here: http://britonia.wordpress.com/2010/02/01/procedural-generation-–-textures-on-the-gpu/
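The quoted steps can be sketched in plain Java. This is a minimal, hypothetical sketch rather than code from the article: it builds the shuffled 0–255 permutation table on the CPU and packs it into a single-channel byte buffer, which could then be wrapped in a jME Image (e.g. with Format.Luminance8) and sampled from a shader. The class and method names are my own.

```java
import java.nio.ByteBuffer;
import java.util.Random;

public class PermTable {

    // Build the classic 0..255 permutation, shuffled, then duplicated
    // so that perm[i + 256] == perm[i]; the duplication avoids
    // wrap-around arithmetic in the noise lookup.
    static int[] buildPermutation(long seed) {
        int[] base = new int[256];
        for (int i = 0; i < 256; i++) base[i] = i;

        Random rnd = new Random(seed);
        for (int i = 255; i > 0; i--) {          // Fisher–Yates shuffle
            int j = rnd.nextInt(i + 1);
            int tmp = base[i]; base[i] = base[j]; base[j] = tmp;
        }

        int[] perm = new int[512];
        for (int i = 0; i < 512; i++) perm[i] = base[i & 255];
        return perm;
    }

    // Pack the first 256 entries into a single-channel byte buffer,
    // ready to be wrapped in a texture image (one byte per texel).
    static ByteBuffer toLuminanceBuffer(int[] perm) {
        ByteBuffer buf = ByteBuffer.allocateDirect(256);
        for (int i = 0; i < 256; i++) buf.put((byte) perm[i]);
        buf.flip();
        return buf;
    }
}
```

From there, the jME side would (as I understand it) be a matter of constructing an Image around the buffer and a Texture2D around the Image, with filtering set to nearest-neighbour so the shader reads exact table entries rather than interpolated ones.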

Have you looked at ImageRaster (part of jme core) and ImagePainter (plugin library, available from SDK) ?

@alexd said: madjack is right, I need to create lookup tables using 1D and 2D textures, but all I found was how to create a texture from an image.

Here’s what I want to do:

taken from here: http://britonia.wordpress.com/2010/02/01/procedural-generation-–-textures-on-the-gpu/

Honestly, pushing this process off to the GPU in this case is going to decrease performance. Consider these two scenarios:

CPU handling some floats

as opposed to…

CPU creating image
Image being piped to GPU
Every pixel of rendered geometry for every frame performing a texture lookup.

I think the choice between the two is a no-brainer… but perhaps someone else will have a different take.

@t0neg0d said: Honestly, pushing this process off to the GPU in this case is going to decrease performance. Consider these two scenarios:

CPU handling some floats

as opposed to…

CPU creating image
Image being piped to GPU
Every pixel of rendered geometry for every frame performing a texture lookup.

I think the choice between the two is a no-brainer… but perhaps someone else will have a different take.

I am not that good at such an advanced topic as performance comparisons between GPU and CPU for noise-generation algorithms, but I’ve done some quick reading. Since different approaches are used for GPU and CPU noise generation, and there are algorithms such as Improved Perlin Noise that are optimized specifically for the GPU, the comparison doesn’t seem that straightforward. But the tendency is clear: noise generation is migrating from the CPU to the GPU. It was mentioned back in GPU Gems 2 ( http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter26.html ), which pointed out several advantages of procedural generation on the GPU.
There are a number of more up-to-date articles that confirm the trend as well, such as http://www.graphicon.ru/2008/proceedings/English/S8/Paper_3.pdf. The main reason is that the GPU architecture is much better at large numbers of parallel calculations than the CPU.

@t0neg0d said: Honestly, pushing this process off to the GPU in this case is going to decrease performance. Consider these two scenarios:

CPU handling some floats

as opposed to…

CPU creating image
Image being piped to GPU
Every pixel of rendered geometry for every frame performing a texture lookup.

I think the choice between the two is a no-brainer… but perhaps someone else will have a different take.

Note: the image only needs to be created and piped to the GPU once. Depending on what you are using the noise for, you can reuse that same set of images for every noise-based texture.

What you get in return is infinitely varied procedural textures instead of the same image repeated over and over.

Will it perform worse than a single texture? Of course. But it’s not emulating a single texture. To get the equivalent with static textures you’d have to generate hundreds or thousands of them… and you couldn’t animate them either.

Thanks pspeed, couldn’t explain it better :wink:

@zarch said: Have you looked at ImageRaster (part of jme core) and ImagePainter (plugin library, available from SDK) ?
As I understand from looking at these classes, I first need to create an image from my lookup table (array) and pass it to jME, which will convert it to a texture that I can use inside a shader. So the lookup-table values should be translated to colors when creating the image. The thing that remains unclear to me is how a single array-element value can be written to an image pixel, which usually requires several values (RGBA or similar). Should I use fromIntARGB(int color) from the ColorRGBA class, which ImagePainter uses to create an image, or is there a different way?
@alexd said: I am not that good at such an advanced topic as performance comparisons between GPU and CPU for noise-generation algorithms, but I've done some quick reading. Since different approaches are used for GPU and CPU noise generation, and there are algorithms such as Improved Perlin Noise that are optimized specifically for the GPU, the comparison doesn't seem that straightforward. But the tendency is clear: noise generation is migrating from the CPU to the GPU. It was mentioned back in GPU Gems 2 ( http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter26.html ), which pointed out several advantages of procedural generation on the GPU. There are a number of more up-to-date articles that confirm the trend as well, such as http://www.graphicon.ru/2008/proceedings/English/S8/Paper_3.pdf. The main reason is that the GPU architecture is much better at large numbers of parallel calculations than the CPU.

But without vertex feedback, what’s the point?

Oh wait… holy crap. I missed the boat here all together. I honestly thought the end goal was for generating meshes, not textures. Sorry!

Where is your noise function going to reside? In the shader? Or prior to this point?

@alexd said: Thanks pspeed, couldn't explain it better ;)

As I understand from looking at these classes, I first need to create an image from my lookup table (array) and pass it to jME, which will convert it to a texture that I can use inside a shader. So the lookup-table values should be translated to colors when creating the image. The thing that remains unclear to me is how a single array-element value can be written to an image pixel, which usually requires several values (RGBA or similar). Should I use fromIntARGB(int color) from the ColorRGBA class, which ImagePainter uses to create an image, or is there a different way?

Use an image format with the number of channels you need. If you only have a single value, use a greyscale format; if you have four, use RGBA; etc.

(RGBA to greyscale just gives the average of the values, IIRC, so just set all three colour channels to the value you want to store.)
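To make the single-value-per-pixel question concrete, here is a small, hypothetical sketch of the bit packing involved: writing one 0–255 lookup value as an opaque grey ARGB pixel (the integer layout that ColorRGBA.fromIntARGB decodes), and averaging R, G and B back to a single value as described above. The class and method names are illustrative, not from jME.

```java
public class PixelPack {

    // Pack one 0..255 lookup value as an opaque grey ARGB int:
    // alpha in bits 24-31, then the same value in R, G and B.
    static int greyARGB(int v) {
        v &= 0xFF;
        return (0xFF << 24) | (v << 16) | (v << 8) | v;
    }

    // Recover the single value by averaging R, G and B - the reverse
    // lookup if the table ended up stored as RGBA instead of greyscale.
    static int greyFromARGB(int argb) {
        int r = (argb >> 16) & 0xFF;
        int g = (argb >> 8) & 0xFF;
        int b = argb & 0xFF;
        return (r + g + b) / 3;
    }
}
```

With a single-channel greyscale format none of this packing is needed at all: each byte in the image buffer is the lookup value itself, which is why the greyscale suggestion above is the simpler route.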