I'm starting to test jCUDA with jME, because I want to create some NPCs using this setup, but I was thinking about how to create and move these NPCs on the ground. I first started wondering whether to process the whole terrain at once, or split it up per NPC, in order to read information from the ground. But I'm not sure how to do it. Does anyone in the forum have any ideas? :?

Nice, CUDA has been discussed in other threads such as this with regard to AI:


Since this is such an unexplored topic (especially with jCUDA), you're probably going to have to do a lot of research and feeling around. The most straightforward way, I'd say, would be to set up each thread to represent one NPC, and group your NPCs into 1D/2D blocks. That may not be the best way, though. Or are you asking about something else?
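To make the thread-per-NPC idea concrete, here is a plain-Java sketch (hypothetical sizes, not anyone's actual code) of how a 1D grid of blocks covers N agents. In a real JCuda kernel the same index would come from `blockIdx.x * blockDim.x + threadIdx.x`; here the two loops stand in for the grid and block dimensions.

```java
// CPU-side emulation of CUDA's thread-per-agent indexing scheme:
// each simulated "thread" computes one NPC's global index from its
// block and thread coordinates, as a kernel would via
// blockIdx.x * blockDim.x + threadIdx.x.
public class AgentIndexing {
    public static void main(String[] args) {
        int numAgents = 1000;
        int blockDim = 256;                                  // threads per block
        int gridDim = (numAgents + blockDim - 1) / blockDim; // blocks, rounded up

        boolean[] visited = new boolean[numAgents];
        for (int blockIdx = 0; blockIdx < gridDim; blockIdx++) {
            for (int threadIdx = 0; threadIdx < blockDim; threadIdx++) {
                int agent = blockIdx * blockDim + threadIdx;
                if (agent < numAgents) {    // guard against the padded last block
                    visited[agent] = true;  // a kernel would update this NPC here
                }
            }
        }

        for (boolean v : visited) {
            if (!v) throw new AssertionError("agent missed");
        }
        System.out.println("gridDim=" + gridDim + ", all " + numAgents + " agents covered");
    }
}
```

The `agent < numAgents` guard matters because the grid is rounded up, so the last block has surplus threads that must do nothing.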

but I was thinking about how to create and move these NPCs on the ground

Hmm... it might be a good idea to start with the basics before trying to optimise the AI!? :D

My initial idea is to create the NPCs inside the GPU, so I'll have to implement a few things, such as path analysis. With that in mind, I began to think of a much more agent-oriented approach, as can be found in some current articles, such as:

"A Framework for Megascale Agent Based Model Simulations on the GPU";

"Continuum Crowds";

"The High Performance Agent Based Modeling Framework on Graphics Hardware Card with CUDA";

"GPU-Accelerated Path-planning for Multi-agents in Virtual Environments";

"Multi Agent Navigation on the GPU";

among others …

I can share these articles later, although they are easy to find. I found it quite interesting to see the proposal of software implemented on the GPU, treated as an agent with each kernel as one of its behaviors.

Well, what I'm looking for here is someone who has some experience using jME and jCUDA, because I was thinking of loading a piece of terrain into texture memory so that I can process it with a path-planning algorithm that is already optimized for the GPU.
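The first step of that plan, getting height data into a layout the GPU can consume, might look like this in plain Java. The `heights` grid and the `flatten` helper are hypothetical stand-ins for whatever height data you extract from your jME terrain; the point is the row-major `float[]` layout that a JCuda host-to-device copy or texture upload expects.

```java
// Hypothetical sketch: flatten a 2D terrain height grid into the
// row-major float[] layout used for GPU uploads (e.g. via JCuda).
public class TerrainFlatten {
    static float[] flatten(float[][] heights) {
        int rows = heights.length, cols = heights[0].length;
        float[] out = new float[rows * cols];
        for (int z = 0; z < rows; z++) {
            for (int x = 0; x < cols; x++) {
                out[z * cols + x] = heights[z][x]; // row-major: index = z * width + x
            }
        }
        return out;
    }

    public static void main(String[] args) {
        float[][] h = { {0f, 1f}, {2f, 3f} };
        float[] flat = flatten(h);
        System.out.println(java.util.Arrays.toString(flat)); // [0.0, 1.0, 2.0, 3.0]
    }
}
```

Once the terrain is in a flat array on the device, a path-planning kernel can sample it with the same `z * width + x` arithmetic.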

I've been reading some articles and talking to some people who have already implemented things in JCUDA, and came to this conclusion:

The simulation in JCUDA and the rendering in jME are completely decoupled.

I simulate the agents in JCUDA, and jME renders the scene.

The technique is the same one used in the CUDA Particles demo (by Simon Green), which is included in the CUDA SDK.

And here comes the problem: I'm thinking about how to control it. We can run the simulation on CUDA by launching kernels and then read back the agent positions and orientations in order to draw them.

However, this is not an efficient solution. The better solution is direct rendering (avoiding transferring agent data from the GPU back to the CPU).
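For the direct-rendering route, the Particles demo relies on CUDA-OpenGL interop: the simulation kernel writes agent positions straight into an OpenGL vertex buffer, and the renderer draws from that same buffer with no copy back to the CPU. A pseudocode outline of that sequence (JCuda exposes the corresponding driver-API calls; getting at the underlying OpenGL buffer id from inside jME is the part that would need engine-level work, and is an open question here):

```
// Pseudocode outline of the CUDA-OpenGL interop pattern
vbo = create OpenGL vertex buffer sized for N agent positions
cuGraphicsGLRegisterBuffer(resource, vbo, flags)       // once, at startup
each frame:
    cuGraphicsMapResources(resource)                   // hand the VBO to CUDA
    cuGraphicsResourceGetMappedPointer(devPtr, resource)
    launch simulation kernel writing positions to devPtr
    cuGraphicsUnmapResources(resource)                 // hand it back to OpenGL
    draw the VBO from the renderer                     // no GPU-to-CPU copy
```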

And that's where I'm a little lost. How do I do this, so that I can then render the result in jME?