The algorithm itself is not any trouble (I hope!), but the method of implementation is giving me some hang-ups. I’m just not certain how to fit it into JME.
Ultimately what I need to do is generate a height map based on some disturbance points. This height map changes each frame, and as such relies on its previous state to calculate its new state. So I am thinking I need two textures to double-buffer between them.
Here is what I want to do:
1. Create two textures: one to hold the previous height map, one to hold the current.
2. Pass the previous height map to a shader along with any new disturbance points.
3. Have the shader output the new height map based on the algorithm.
4. Use the new height map on a model to disturb its look.
5. Swap the previous and current height maps so the next frame works correctly.
I have two problems I’m not sure how to approach. The first is I don’t know how to just render to a texture. Like, create a 256x256 texture, run a shader on it, and get an output texture of some dimension. Is this kind of thing possible with JME? I feel like I’m missing something obvious…
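From poking around the jME3 examples (TestRenderToTexture), I’m guessing the setup looks something like this; `renderManager` would come from SimpleApplication, and `fullScreenQuad` is a quad I’d still have to build with the ripple material on it, so treat this as a sketch:

```java
import com.jme3.renderer.Camera;
import com.jme3.renderer.ViewPort;
import com.jme3.texture.FrameBuffer;
import com.jme3.texture.Image;
import com.jme3.texture.Texture2D;

// Off-screen viewport that renders before the main scene each frame.
Camera offCam = new Camera(256, 256);
ViewPort offView = renderManager.createPreView("heightmap", offCam);
offView.setClearFlags(true, true, true);

// The texture the shader's output ends up in.
Texture2D heightTex = new Texture2D(256, 256, Image.Format.RGBA8);
FrameBuffer fb = new FrameBuffer(256, 256, 1);
fb.setColorTexture(heightTex);
offView.setOutputFrameBuffer(fb);

// A full-screen quad carrying the ripple material; whatever it renders
// lands in heightTex, which the water model can then sample.
// (Scenes attached to a pre-view need updateGeometricState() called manually.)
offView.attachScene(fullScreenQuad);
```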
The second problem is, after I get that texture, how can I swap it with the previous texture to achieve the double-buffer effect?
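For the swap itself, I’m imagining ping-ponging two FrameBuffers, something like this (the material parameter names and the `rippleMaterial`/`waterMaterial`/`offView` variables are just placeholders of mine):

```java
// Two buffers/textures: each frame the shader reads one and writes the other.
FrameBuffer[] buffers = new FrameBuffer[2];
Texture2D[] heightMaps = new Texture2D[2];
int current = 0; // index of the buffer written this frame

void swapAndBind() {
    int previous = 1 - current;
    // The ripple shader samples last frame's result...
    rippleMaterial.setTexture("PrevHeightMap", heightMaps[previous]);
    // ...and renders into the other buffer.
    offView.setOutputFrameBuffer(buffers[current]);
    // The water surface then samples the freshly written map.
    waterMaterial.setTexture("HeightMap", heightMaps[current]);
    current = previous; // roles flip for the next frame
}
```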
Actually, you might be happier using direct vertex manipulation on a grid for display; that way you don’t have the problem of manipulating the texture. And do the calculations on the CPU with two 2-dimensional arrays.
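The usual two-buffer ripple step is pretty simple, roughly like this (the classic Hugo Elias water algorithm, sketched for a square grid; the damping value is just an example):

```java
float damping = 0.99f;
float[][] prev = new float[256][256];
float[][] next = new float[256][256];

void step() {
    for (int x = 1; x < 255; x++) {
        for (int y = 1; y < 255; y++) {
            // Average of the neighbours from last frame, minus the height
            // from two frames ago, gives a travelling wave.
            next[x][y] = (prev[x - 1][y] + prev[x + 1][y]
                        + prev[x][y - 1] + prev[x][y + 1]) / 2f
                        - next[x][y];
            next[x][y] *= damping; // energy loss so ripples die out
        }
    }
    // Swap roles: 'next' becomes the state the following frame reads.
    float[][] tmp = prev;
    prev = next;
    next = tmp;
}
```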
@EmpirePhoenix that idea sounds interesting. If one were to use a grid mesh (or terrain?), is there a way to grab the height of each point in the terrain’s grid to fill the first 2D array, run a displacement algorithm on those values to produce the second array, and then copy that back into the first array for the next frame? Something like this is what I’m picturing for the read/write part, if the mesh positions are a flat FloatBuffer (just a sketch):
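```java
import java.nio.FloatBuffer;
import com.jme3.scene.Mesh;
import com.jme3.scene.VertexBuffer;

// Pull the Y component of every vertex out of the position buffer.
// Assumes a mesh whose vertices map 1:1 onto the simulation grid.
void readHeights(Mesh mesh, float[] heights) {
    FloatBuffer pos = (FloatBuffer) mesh.getBuffer(VertexBuffer.Type.Position).getData();
    for (int i = 0; i < heights.length; i++) {
        heights[i] = pos.get(i * 3 + 1); // Y is the second of (x, y, z)
    }
}

// Push the displaced heights back into the mesh.
void writeHeights(Mesh mesh, float[] heights) {
    VertexBuffer vb = mesh.getBuffer(VertexBuffer.Type.Position);
    FloatBuffer pos = (FloatBuffer) vb.getData();
    for (int i = 0; i < heights.length; i++) {
        pos.put(i * 3 + 1, heights[i]);
    }
    vb.updateData(pos);   // tell jME the buffer changed
    mesh.updateBound();   // keep culling bounds in sync
}
```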
And if so, could a physics collision shape be updated each frame to match it? Like a water surface that provides a buoyancy force to a boat, moving it up and down relative to the ripples (waves).
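Roughly what I have in mind for the buoyancy (very much a sketch; `waveHeightAt` would sample the ripple simulation, and the force constant is made up):

```java
import com.jme3.bullet.control.RigidBodyControl;
import com.jme3.math.Vector3f;

// If the boat sits below the local wave height, push it up in
// proportion to how deep it is submerged, like a spring.
void applyBuoyancy(RigidBodyControl boat) {
    Vector3f pos = boat.getPhysicsLocation();
    float depth = waveHeightAt(pos.x, pos.z) - pos.y;
    if (depth > 0) {
        float force = depth * 50f; // invented restoring-force constant
        boat.applyCentralForce(new Vector3f(0, force, 0));
    }
}

// Hypothetical sampler into the ripple grid (interpolation omitted).
float waveHeightAt(float x, float z) {
    return 0f; // placeholder
}
```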
I ultimately decided to go with vertex manipulation for a couple of reasons. The first was that with arbitrarily large meshes, using textures to cover them wasn’t all that feasible (massive video memory consumption). Also, I found that the resolution of the ripples was hard to keep consistent between grids of varying size. The second reason, which relates to the first, is that I need to handle meshes of any size and shape. Using a texture for this gets more tricky as the UV map becomes very awkward for bizarre shapes, like a Yin-Yang symbol for instance. Neighbouring texture coordinates aren’t necessarily neighbours on the mesh, so applying the ripple calculations gets difficult.
I have a working sample using a mesh and a simple grid, but it’s not very flexible. It’s too hard-coded and only supports a simple grid. I’m currently working on an algorithm to use any mesh, and when I manage to get it working I’ll show it here.
Also, I don’t see why you couldn’t add a collision object to it to mimic a wavy water surface and have floating objects. Could be kind of cool! You could ride a jet ski along it … or build a tubing game where you have to try to knock the riders off with expert wave-making skills.