Interesting problem! Intersection help needed! (or collisions? not sure!)

Hi all,

I am developing an AUV (autonomous underwater vehicle) simulator using Java 3D (I should say at this point that I am not using jME specifically, but the Java 3D API). I tend to read these forums for tips, though, and find them a lot more helpful than the Java 3D forums at Sun (which no one seems to read).

Anyway, my question is to do with 3D shapes intersecting.

The AUV has three sonar units which I am attempting to model at the moment. Each sonar unit is a squashed cone object. What I need is not only that they intersect with another object (i.e. they detect an object) but the portion of the object that they intersect.

What I really want to do is produce a grayscale sonar map for each unit: the closer an object is, the darker the pixels on the map would be. Of course, the 2D map would not need to be included in any scene; it would merely be used for calculations (perhaps shown in a separate JPanel).

As you can see, the intersection between the sonar and an object is needed; it is then mapped in 2D by using the height of the intersection to represent the strength of colour on the map. This map will then be used as a sensory input to a neural network controller on the AUV.

So, is it simple to get the portion of intersection between two geometries? I have seen a lot on picking and collision detection, and I am not sure if getting an intersection is the same thing?

Any thoughts?



You could do ray sampling (depending on how high-fidelity the sonar map must be). You shoot out a number of rays from the sonar origin, all contained within the sonar cone. Check for collisions; if a ray hits something, calculate the distance from the origin of the ray to the collision point. Pretty brute force, but effective if you don't need extremely high resolution.
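The core of the ray method is just a per-ray distance test against each obstacle. A minimal sketch in plain Java (no Java3D or jME types; the sphere obstacle and the method names are purely illustrative, since real obstacles would use the engine's own pick/intersection calls):

```java
public class SonarRaySampler {

    /**
     * Distance along a ray to a sphere, or -1 if there is no hit.
     * o = ray origin, d = unit ray direction, c = sphere centre, r = radius.
     */
    static double raySphere(double[] o, double[] d, double[] c, double r) {
        double ox = o[0] - c[0], oy = o[1] - c[1], oz = o[2] - c[2];
        // Quadratic t^2 + b*t + cc = 0 (a = 1 because d is unit length)
        double b  = 2 * (d[0] * ox + d[1] * oy + d[2] * oz);
        double cc = ox * ox + oy * oy + oz * oz - r * r;
        double disc = b * b - 4 * cc;
        if (disc < 0) return -1;                   // ray misses the sphere
        double t = (-b - Math.sqrt(disc)) / 2;     // nearest intersection
        return t >= 0 ? t : -1;                    // behind origin = no hit
    }
}
```

In the simulator you would loop over each sonar's rays, take the minimum hit distance over all obstacles, and fall back to the sonar's maximum range when nothing is hit.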

Supposing I have, say, 10 AUVs with 3 units each = 30 units, and say I wanted to cover a good proportion of the sonar area, maybe 50 rays per unit?

That's 1,500 rays each time I update the model. Is this going to kill me? By the way, because this is an evolutionary system I want to update the model as quickly as possible (only sleeping the system to slow things down when I wish to view the simulation).

Also, can you explain how I would send out rays at different angles from the origin? If you look back at the sonar 'field', it's not uniform in all dimensions; it all begins from a single point and disperses (which is why I represented it as a cone object).
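One standard way to disperse rays from a single point inside a cone is to sample directions in spherical coordinates, uniform in the cosine of the polar angle. A sketch, assuming for simplicity that the cone axis is +Z (directions would then be rotated to the actual sonar heading; the class and method names are illustrative):

```java
import java.util.Random;

public class ConeRays {

    /**
     * Sample a unit direction within halfAngle radians of the +Z axis,
     * uniformly distributed over the spherical cap of the cone.
     */
    static double[] sampleConeDir(double halfAngle, Random rng) {
        double cosMax = Math.cos(halfAngle);
        // Uniform in cos(theta) gives uniform area coverage on the cap
        double cosT = 1 - rng.nextDouble() * (1 - cosMax);
        double sinT = Math.sqrt(1 - cosT * cosT);
        double phi  = 2 * Math.PI * rng.nextDouble();
        return new double[]{ sinT * Math.cos(phi),
                             sinT * Math.sin(phi),
                             cosT };
    }
}
```

A fixed grid of angles (rather than random samples) would work just as well if you want the sonar map to be stable between updates.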

So there is no way to get a resulting volume from the intersection of the two shapes?

Thanks for the quick reply!

edd  :slight_smile:

I might be missing something, but can you just use the z-buffer? If you put the camera "in" the sonar unit, you will get an image showing the distance to the first collision, which seems like what you want. You can make sure you only render objects the sonar would "see". You could even blur it with a shader if you want :wink: You can just treat z values greater than the sonar range as the same value, whatever your sonar unit would give for "nothing there". Some of the tests let you show the z-buffer as grayscale so you can see the effect.
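One caveat with this approach: a perspective depth buffer stores depth non-linearly, so the raw [0, 1] z value has to be converted back to eye-space distance before it means anything as a sonar range. A small sketch of that conversion, assuming the standard OpenGL-style perspective projection (the class and method names are illustrative):

```java
public class DepthToDistance {

    /**
     * Convert a [0,1] depth-buffer value to eye-space distance,
     * assuming a standard perspective projection with the given
     * near and far clip planes (OpenGL convention).
     */
    static double linearize(double z, double near, double far) {
        return (near * far) / (far - z * (far - near));
    }

    /** Map a distance to a grayscale level: closer = darker. */
    static int toGray(double dist, double maxRange) {
        double t = Math.min(dist / maxRange, 1.0);
        return (int) Math.round(t * 255);
    }
}
```

Note that because of the non-linearity, most of the depth precision sits near the near plane; keeping the near plane as far out as the simulation allows helps the sonar range resolution.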

As far as the ray count goes, just use the appropriate resolution for the renders. I would imagine you will be able to render a lot of low-resolution images very quickly, with the right settings.

Using the camera idea, I would need 30 cameras, one for each unit, each producing images. Is that feasible?

Just to clarify my requirements fully:

The simulation is going to model the movements of around 10 AUVs, each capable of moving along the x, y, and z axes.

Each AUV will be fitted with 3 sonar units (forward, left, right).

The sonar units need to produce sonar maps, which will then have features extracted and fed into the inputs of the neural controllers in the AUVs.

The simulation is an evolutionary one, so the training of the networks happens over a long time (hundreds of generations).

A generation may consist of 2 minutes of simulated time in which the AUVs move around by associating the outputs of their networks with their thrusters. The inputs contain information from the sonars. I want this 'simulated time' to be as short as possible in 'real time' for computational reasons.

AUVs will be assessed by calculating how well they do things such as flocking and obstacle avoidance.

So, the only problem I have is, as I have said, gathering the sonar data. Preferably I need a 2D image/map so that I can extract features from it.

I am not too hot on z-buffers, but if the camera you propose to put in the unit just gives me a 2D image of the view, how exactly do I get the distance information for each point?

I really was hoping I could get the volumes from the intersections :frowning: it would have been so much easier!

Let's have some more crazy ideas then, guys!


**EDIT: I just looked up z-buffers on Wikipedia and saw the grayscale image produced by one, which looks very similar to what I need! If I could somehow get the z-buffer representations for all the sonar units' fields of view, that would be brilliant, I think…**

Boolean operations in 3D like that are very complex.

Z-buffers are non-linear and have pretty sucky resolution.

So I would go for mojo's ray method; easy to do. And can you really do as many as 50 inputs on that neural network without killing your comp? *s*

MrCoder said:

Boolean operations in 3D like that are very complex.
Z-buffers are non-linear and have pretty sucky resolution.

So I would go for mojo's ray method; easy to do. And can you really do as many as 50 inputs on that neural network without killing your comp? *s*

I wouldn't actually have an input for each ray; rather, I would plot the ray traces on a map and perhaps divide it into 6 sections, with each section providing an input to a neuron, like a weighted average. Thus there would be, say, 18 input neurons.
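The binning step described above is straightforward to sketch: group the rays by angle into a fixed number of sections and average the hit distances per section, normalised so each neuron input falls in [0, 1]. A minimal sketch (the class, method, and parameter names are illustrative, not from any engine API):

```java
public class SonarMapBinning {

    /**
     * Average the ray hit distances falling into each of nSections
     * angular bins, normalised to [0,1] by maxRange.
     * angles[i] is the azimuth of ray i in radians, spanning [-fov/2, fov/2];
     * dists[i] is its hit distance (use maxRange when nothing was hit).
     */
    static double[] binInputs(double[] angles, double[] dists,
                              double fov, int nSections, double maxRange) {
        double[] sum = new double[nSections];
        int[] count  = new int[nSections];
        for (int i = 0; i < angles.length; i++) {
            int bin = (int) ((angles[i] + fov / 2) / fov * nSections);
            bin = Math.max(0, Math.min(nSections - 1, bin)); // clamp edges
            sum[bin] += dists[i];
            count[bin]++;
        }
        double[] inputs = new double[nSections];
        for (int b = 0; b < nSections; b++)
            // Empty bin = nothing detected = full range
            inputs[b] = count[b] > 0 ? sum[b] / (count[b] * maxRange) : 1.0;
        return inputs;
    }
}
```

With 6 sections per unit and 3 units per AUV this gives the 18 inputs mentioned above; weighting rays by how central they are in each section would be a small extension.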

However, 50 inputs to an NN is not a big deal (200-300 starts getting big), and it could be possible, perhaps even better, to use the ray samples directly as the input.

Do you think someone could explain how I would go about getting ray sampling to happen from each unit? Do I add ray-sampling objects? I am afraid I have never used them at all.

Thanks for all your input so far,


Sort of depends on how many hidden layers you use, and how big they are, of course… but great! :wink:

check out TestObjectWalking and Spatial.calculatePick…

Are they Java3D examples or jME examples?

And will all of this be doable using the Java3D API rather than the jME one?



I think it's relevant to clarify here that apart from both jME and Java3D being based on Java, that's about where their similarities end. :o

Everything discussed in these forums pertains to jME and has absolutely nothing to do with Java3D, apart from the shared ideology of a 3D engine. I would highly recommend making the switch to jME if you can allocate the time to do so… you can gain much more from these forums, as well as from the engine, than you'll ever get out of Java3D. As you've said, the community support is dead and the engine is nearly so.

The only reason I used Java 3D is that I don't need any special effects or graphical beauty in my model.

I just needed a simple scene graph, so I thought Java3D would be fine.

If jME can offer me something useful (such as a solution to the problem I have in this thread) then I would make the change straight away.

Now, back to the thread question: I still don't know how to do this ray sampling lark! Is it a simple process, and can it be done in Java3D?