Pathfinder on GPU

I'm trying to generate simulations on the GPU using jCUDA, and I decided to use jME as my viewing environment. The concept is very simple: I'm working on something more or less like a simulation where the jCUDA part is completely decoupled from jME, the engine that will do the display. Thus the simulation of the agents is left to jCUDA, while the scene is up to jME.  :smiley: }:-@ :smiley:



With this scenario in mind, I was wondering what the kernel would actually have to do. The character is created within jME but must be controlled within jCUDA; the creation side is written in Java while the kernel is in C. So how can the character stay under the kernel's control without being completely tied to either language, just sending data across and getting the result back? Thinking about it, one alternative may be to treat only the movement logic in the kernel: it returns a new movement, which is then effectively performed by the character within the scene.  :? :? :?
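One way to keep the Java side and the C kernel loosely coupled, as described above, is to exchange only primitive arrays: Java flattens the agents' positions and targets into float arrays, the kernel computes a new movement per agent, and Java applies it to the jME characters. Here is a minimal sketch of that boundary in pure Java, with the kernel step stubbed out as a loop; `AgentBuffer` and `stepTowardsTarget` are names I made up for illustration, not a jCUDA API:

```java
// Sketch: exchange only flat primitive arrays with the kernel.
// AgentBuffer and stepTowardsTarget are hypothetical names.
public class AgentBuffer {
    // Interleaved x,y,z per agent — the layout a C kernel would read.
    final float[] positions;
    final float[] targets;
    final int count;

    public AgentBuffer(int count) {
        this.count = count;
        this.positions = new float[count * 3];
        this.targets = new float[count * 3];
    }

    // Stand-in for the CUDA kernel: move each agent a fixed step
    // towards its target. On the GPU this loop becomes one thread per agent.
    public void stepTowardsTarget(float step) {
        for (int i = 0; i < count; i++) {
            for (int axis = 0; axis < 3; axis++) {
                int k = i * 3 + axis;
                float d = targets[k] - positions[k];
                // Clamp the displacement so we never overshoot the target.
                positions[k] += Math.max(-step, Math.min(step, d));
            }
        }
    }
}
```

The jME side would then only read `positions` back each frame and move the spatials accordingly, so neither side depends on the other's object model.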



But when you are visualizing something that is processed by the GPU, you can use the interoperability that exists between OpenGL and CUDA.



So, for example, if you do a particle simulation, you have the (x, y, z) of each particle. OpenGL renders these particles using those coordinates, sending them through the graphics pipeline. The idea is to skip the round trip through the CPU: have CUDA write directly into a VBO (where the X, Y and Z coordinates of the particles are updated) so that the graphics pipeline renders from that same buffer. The data never leaves the GPU, and intuitively we can expect that, in general, the simulation will be faster.
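The key point of the interop idea is that the kernel updates the vertex data in place, in the same memory the renderer draws from. Outside of a real GPU context this can only be mimicked, but the data layout is easy to show: one direct `FloatBuffer` holding interleaved x, y, z, updated per frame. This is a host-side sketch only (the names are mine, not an OpenGL or jCUDA API):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

public class VboSketch {
    // A direct buffer, like the memory a mapped VBO exposes.
    static FloatBuffer makeParticleBuffer(int particles) {
        return ByteBuffer.allocateDirect(particles * 3 * Float.BYTES)
                .order(ByteOrder.nativeOrder())
                .asFloatBuffer();
    }

    // Stand-in for the CUDA kernel writing new coordinates in place:
    // here, every particle drifts upward by dy on the Y axis.
    static void advance(FloatBuffer xyz, float dy) {
        for (int i = 0; i < xyz.capacity(); i += 3) {
            xyz.put(i + 1, xyz.get(i + 1) + dy);
        }
    }
}
```

In the real interop setup the buffer would be registered with CUDA and mapped before the kernel launch, and `advance` would be the kernel itself; nothing would be copied back to Java between frames.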



And my question is how to do this, how to interoperate between Java and C. Can someone help me with this? I was looking at some pathfinders in C just to get an idea.



http://www.koders.com/c/fid4E309F9D459219CB8032447C3DEEC379EB83DC63.aspx

Wouldn't jCUDA handle this? The C for CUDA code (the kernel) is what runs on the GPU; all you're doing is passing data in and getting data back. That's really all the interaction that would be needed, and I'd assume the jCUDA binding makes the necessary JNI calls.



Or is this more a question of what data to pass and/or how to encode that data? (Obviously you wouldn't be passing objects; you'd be passing all the little details that pertain to pathfinding.)
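To make those "little details" concrete: a grid map flattens naturally into one array of cell flags or costs plus two cell indices for start and goal, which is exactly the kind of payload you can hand across the Java/C boundary. A small host-side sketch (plain Java, my own layout, no jCUDA calls) doing a breadth-first search over that flat encoding:

```java
import java.util.ArrayDeque;
import java.util.Arrays;

public class FlatGridPath {
    // walkable[y * width + x] == 1 means the cell is passable.
    // Returns the number of steps from start to goal, or -1 if unreachable.
    static int bfs(int[] walkable, int width, int start, int goal) {
        int height = walkable.length / width;
        int[] dist = new int[walkable.length];
        Arrays.fill(dist, -1);
        ArrayDeque<Integer> queue = new ArrayDeque<>();
        dist[start] = 0;
        queue.add(start);
        int[] dx = {1, -1, 0, 0};
        int[] dy = {0, 0, 1, -1};
        while (!queue.isEmpty()) {
            int cell = queue.poll();
            if (cell == goal) return dist[cell];
            int x = cell % width, y = cell / width;
            for (int d = 0; d < 4; d++) {
                int nx = x + dx[d], ny = y + dy[d];
                if (nx < 0 || ny < 0 || nx >= width || ny >= height) continue;
                int n = ny * width + nx;
                if (walkable[n] == 1 && dist[n] == -1) {
                    dist[n] = dist[cell] + 1;
                    queue.add(n);
                }
            }
        }
        return -1;
    }
}
```

On the GPU side the same three inputs (the flat array plus two ints) are all a kernel would need, which is why the encoding question matters more than the language question.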

At first I thought I would only do the pathfinder; from jME I get the projection and the rendering, and moving the processing to the GPU would be the next step.



I tried to think of a number of things that would be interesting, for example copying part of the terrain neighborhood around an agent into texture memory and computing the next move over it. But I don't know whether, with that approach, it is feasible to implement some kind of communication between these characters and the others in the simulation, in order to make it more realistic.
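The neighborhood idea can be sketched on the host side: cut a small square window of terrain cells around the agent out of the full grid, which is the block you would then upload (to texture or shared memory) for the kernel to score the next move. Pure Java, and the names are mine, not part of any library:

```java
public class Neighborhood {
    // Copies a (2*r+1) x (2*r+1) window of terrain costs centered on
    // (cx, cy); cells outside the map are filled with a blocking cost,
    // so the kernel never has to bounds-check.
    static float[] extract(float[] terrain, int width, int height,
                           int cx, int cy, int r, float blocked) {
        int side = 2 * r + 1;
        float[] window = new float[side * side];
        for (int dy = -r; dy <= r; dy++) {
            for (int dx = -r; dx <= r; dx++) {
                int x = cx + dx, y = cy + dy;
                float v = (x < 0 || y < 0 || x >= width || y >= height)
                        ? blocked
                        : terrain[y * width + x];
                window[(dy + r) * side + (dx + r)] = v;
            }
        }
        return window;
    }
}
```

Padding the out-of-map cells with a blocking cost is a design choice: it keeps the kernel branch-free at the borders, at the price of copying a few extra values.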



But I think it is possible and plausible; perhaps not the best optimization, but we can keep iterating until it's good enough.