Anyway, here’s a first result of the improved raytracer with a nice jeep model imported directly from Blender (thanks to the Blender importer, which works very well, good job)
You can just check which triangle it is and then check the texture coords for each of its vertices… It’s just two buffers containing that data… I don’t know what exactly you want to “find out”; all the data is there, else the model could not be displayed :?
Yes, it’s in the mesh, in the buffer, stored as floats, so best get it as a FloatBuffer. I don’t remember exactly how the data is stored though; I think it should be two floats for each vertex (not three, like positions). But that is easily found out by searching the site and… :google:
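To make that layout concrete, here’s a minimal plain-Java sketch of the lookup (not actual jME3 code; in jME3 you’d fetch the two buffers from the Mesh, e.g. its Index and TexCoord vertex buffers). Triangle corner indices sit in one buffer, and the texture coordinates are stored as two floats per vertex in the other:

```java
import java.nio.FloatBuffer;
import java.nio.IntBuffer;

public class TriangleUv {
    // Returns the three (u, v) pairs of the given triangle as a float[6]:
    // {u0, v0, u1, v1, u2, v2}. Two floats per vertex in the texcoord buffer.
    static float[] uvsOfTriangle(IntBuffer indices, FloatBuffer texCoords, int triIndex) {
        float[] uv = new float[6];
        for (int corner = 0; corner < 3; corner++) {
            int vertex = indices.get(triIndex * 3 + corner); // vertex index of this corner
            uv[corner * 2]     = texCoords.get(vertex * 2);     // u
            uv[corner * 2 + 1] = texCoords.get(vertex * 2 + 1); // v
        }
        return uv;
    }

    public static void main(String[] args) {
        // A quad made of two triangles, with one UV pair per vertex.
        IntBuffer indices = IntBuffer.wrap(new int[] {0, 1, 2,  2, 1, 3});
        FloatBuffer texCoords = FloatBuffer.wrap(new float[] {
                0f, 0f,  1f, 0f,  0f, 1f,  1f, 1f});
        float[] uv = uvsOfTriangle(indices, texCoords, 1); // second triangle
        System.out.println(java.util.Arrays.toString(uv));
        // prints [0.0, 1.0, 1.0, 0.0, 1.0, 1.0]
    }
}
```

Once you know which triangle the ray hit, its three UV pairs fall out of two indexed reads.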
First, it’s certainly faster than if I had done it myself (to obtain collision results). All the stuff in jME is very well done; I never thought it was possible to raytrace a scene as easily as this.
But I use a simple raytracing algorithm (costly and inefficient) and can’t really tell you how the performance is.
Sometimes, depending on the scene, I get good results, and sometimes (even when it seems to have fewer polys) not…
But I can try to give you some timings next time. For these images, count less than an hour, depending on shadows, AO or reflection depth…
One with texture (I still have some problems finding the correct UVs or something, so I’ve cheated a bit):
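If it helps with the UV trouble: the usual way to get the texture coordinate at the hit point is to interpolate the three vertex UVs with the hit’s barycentric weights. A small self-contained sketch (the weights would come from your triangle intersection routine; the names here are made up):

```java
public class UvInterp {
    // Interpolates the texture coordinate at a hit point inside a triangle,
    // given the barycentric weights (w0, w1, w2) of the hit and the UVs of
    // the three triangle vertices. The weights must sum to 1.
    static float[] interpolateUv(float w0, float w1, float w2,
                                 float[] uv0, float[] uv1, float[] uv2) {
        return new float[] {
                w0 * uv0[0] + w1 * uv1[0] + w2 * uv2[0], // u
                w0 * uv0[1] + w1 * uv1[1] + w2 * uv2[1]  // v
        };
    }

    public static void main(String[] args) {
        // Hit point at the centroid of the triangle: each weight is 1/3,
        // so the result is the average of the three vertex UVs.
        float[] uv = interpolateUv(1f / 3, 1f / 3, 1f / 3,
                new float[] {0f, 0f}, new float[] {1f, 0f}, new float[] {0f, 1f});
        System.out.println(java.util.Arrays.toString(uv));
    }
}
```

Using the raw per-vertex UV of the nearest corner instead of interpolating is one of the classic causes of “almost right” textures.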
For my personal amazement. I also love raytracers, as well as jME, and when I found the “raytracer” sample in jME3 I was surprised and had to try it.
After that, I’m using JME3 for all my personal 3D experiments, but also at work I use JME3 when 3D is needed. For real-time presentation of something, jme3 rocks. But if you want more beautiful images to show, you need something else, like raytracing… so here I’ve decided to directly integrate a “custom” renderer suitable for jme3 that renders better images (but slower than real time, of course ^^).
I don’t think you can do it in real time with quality like this ^^.
The only possible things I can do now to optimize it are:
Optimize the algorithm in terms of how to raytrace efficiently.
Find out if it’s possible to multithread it.
But for the 2nd point, I’ve tried and got a nice ConcurrentModificationException… but I’ll try to duplicate my scene to see if I can multithread to save render time.
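For what it’s worth, that ConcurrentModificationException usually means the worker threads are mutating (or traversing while something else mutates) shared scene state. An alternative to duplicating the scene is to keep it strictly read-only during rendering and give each thread its own disjoint band of pixels to write. A toy, self-contained sketch of that pattern — shade() is just a stand-in for tracing one ray:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class TiledRender {
    static final int W = 64, H = 64;

    // Stand-in for tracing one ray; a real tracer would intersect the scene
    // here. It only READS shared data, which is what makes the loop safe.
    static int shade(int x, int y) {
        return (x ^ y) & 0xFF; // deterministic toy pattern
    }

    // Renders the image with one interleaved band of rows per thread. Each
    // task writes a disjoint set of pixels, so no locks are needed and
    // nothing is modified concurrently.
    static int[] render(int threads) throws InterruptedException {
        int[] pixels = new int[W * H];
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        CountDownLatch done = new CountDownLatch(threads);
        for (int t = 0; t < threads; t++) {
            final int band = t;
            pool.execute(() -> {
                for (int y = band; y < H; y += threads)
                    for (int x = 0; x < W; x++)
                        pixels[y * W + x] = shade(x, y);
                done.countDown();
            });
        }
        done.await();
        pool.shutdown();
        return pixels;
    }

    public static void main(String[] args) throws InterruptedException {
        long sum = 0;
        for (int p : render(Runtime.getRuntime().availableProcessors())) sum += p;
        System.out.println("checksum: " + sum); // same as a single-threaded render
    }
}
```

Since every task only reads the shared scene and writes its own pixels, the result is identical to a single-threaded render, whatever the thread count.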
I’ll post some results from time to time, and perhaps, why not, the (or some) code
If it was, we’d see it in games all the time. There are example applications rendering maybe one model using two GPUs, etc… But it’s simply not that far along yet.
I have seen some real-time engines using CUDA, doing real-time ray tracing at more than 30 fps. I even played one of them on an NVIDIA 210M chipset. Don’t remember the names right now.
But I’ve always wondered: if it’s possible with CUDA, why not OpenCL? As far as I know, OpenCL is pretty mature by now. Is it just a lack of initiative or a lack of standard hardware?
I already did some Mandelbulb and simple sphere/triangle raytracers with GLSL, and a billiard game attempt with OptiX, but for sure the hardware still isn’t powerful enough; it will be in the future, as normen said.
But for sure, rendering everything in real time without cheating isn’t for now.