# Ideas for implementing robot vision sensors

Hello,

I’m making a robotic simulator in which I have multiple (1-100) simple robots whizzing around on a planar terrain (the whole thing is in 3D). I need to give the robots some perception, and one of the things I want to try is giving them ‘vision’, so I would like to hear your input. My idea so far is to cast out rays spread across a cone-shaped visual field, say 7-15 of them, and analyze the collisions. The arena contains objects belonging to 3 different nodes (fence, other robots, dynamic objects), all of which sit on the rootNode, and as a result of the sensor reading I would like to receive a list of all the objects that the rays collided with (first collisions only). Is there a way to limit the distance that a ray travels (the robots only have a limited vision range)?
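For what it’s worth, here is a rough, engine-agnostic sketch of such a cone-of-rays sensor in plain Java. Objects are approximated as bounding spheres, and all names here (`ConeVision`, `rayHit`, `castCone`) are made up for illustration. The vision-range limit falls out naturally by rejecting hits beyond `maxDist`:

```java
// Sketch of a cone-of-rays vision sensor (no engine dependency).
// Objects are approximated as bounding spheres.
public class ConeVision {

    // Distance along the ray to the first intersection with a sphere,
    // or -1 if there is none within maxDist. 'dir' must be normalized.
    static double rayHit(double[] origin, double[] dir,
                         double[] center, double radius, double maxDist) {
        double lx = center[0] - origin[0];
        double ly = center[1] - origin[1];
        double lz = center[2] - origin[2];
        double t = lx * dir[0] + ly * dir[1] + lz * dir[2]; // projection onto ray
        double cx = lx - t * dir[0];                        // vector from closest
        double cy = ly - t * dir[1];                        // point on ray to the
        double cz = lz - t * dir[2];                        // sphere center
        double d2 = cx * cx + cy * cy + cz * cz;
        if (d2 > radius * radius) return -1;                // ray misses sphere
        double hit = t - Math.sqrt(radius * radius - d2);   // entry distance
        return (hit >= 0 && hit <= maxDist) ? hit : -1;     // enforce range limit
    }

    // Fan 'rayCount' rays across 'fov' radians around heading 'yaw'
    // (the terrain is planar, so rays stay in the XZ plane). Returns,
    // per ray, the index of the closest object hit, or -1 for a miss.
    static int[] castCone(double[] origin, double yaw, double fov,
                          int rayCount, double maxDist,
                          double[][] centers, double[] radii) {
        int[] hits = new int[rayCount];
        for (int i = 0; i < rayCount; i++) {
            double a = yaw - fov / 2 + fov * i / (rayCount - 1);
            double[] dir = { Math.cos(a), 0, Math.sin(a) };
            double best = Double.MAX_VALUE;
            hits[i] = -1;
            for (int j = 0; j < centers.length; j++) {
                double t = rayHit(origin, dir, centers[j], radii[j], maxDist);
                if (t >= 0 && t < best) { best = t; hits[i] = j; }
            }
        }
        return hits;
    }
}
```

In an actual scene-graph engine the per-object loop would be replaced by the engine’s own ray-collision call, but the structure stays the same: one ray per direction, keep only the closest hit, discard hits past the range limit.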

Do you think this will perform well for a large number of robot units (at least 20-30) at a decent FPS? Does anyone have any better ideas?

It’s much easier doing it the other way around and checking each object for visibility, e.g. by using the corners of the model and its middle point and casting rays back to the “robot”. Before that you can do a general FOV (angle) / distance check based on the location of the object, to avoid unnecessary ray checks. Contrary to the “real world”, you know about all objects in your virtual space.

Same for the ray “distance”: just do a normal distance check between the object position and the robot position. You’re kind of thinking backwards; again, the virtual space is not the real world.
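A minimal sketch of that combined angle/distance pre-check in plain Java (all names are invented for illustration; the robot’s facing direction is assumed to be a unit vector). An object passes if it is within view distance and inside the view cone:

```java
// Cheap FOV pre-check: distance test first, then a dot-product angle test.
public class FovCheck {

    // True if 'objPos' is within 'maxDist' of the robot and inside the
    // cone of half-angle 'halfAngle' (radians) around 'forward'.
    // 'forward' must be a unit vector.
    static boolean inFov(double[] robotPos, double[] forward,
                         double[] objPos, double halfAngle, double maxDist) {
        double dx = objPos[0] - robotPos[0];
        double dy = objPos[1] - robotPos[1];
        double dz = objPos[2] - robotPos[2];
        double dist = Math.sqrt(dx * dx + dy * dy + dz * dz);
        if (dist > maxDist) return false;   // out of vision range
        if (dist == 0) return true;         // object at the robot itself
        // Cosine of the angle between the forward vector and the
        // direction to the object; larger cosine = smaller angle.
        double cosAngle = (dx * forward[0] + dy * forward[1] + dz * forward[2]) / dist;
        return cosAngle >= Math.cos(halfAngle);
    }
}
```

Only objects that pass this test need the more expensive ray-visibility check.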

I’m not sure I understand what you mean. Yes, I know about all the objects in my virtual space and I can calculate every robot’s FOV, but I can’t just go getAllObjectsAt(FOV), can I? I mean, I could loop through all my objects in the rootNode and check whether they are located within my robot’s FOV, but that would be an O(n·m) calculation, where n is the number of robots and m the number of objects. Is there some structure that allows me to search for Nodes spatially in an efficient way, without having to loop over and poll all of them?

What do you think raycasting does? You totally overestimate the overhead of looping through your objects.

I see :)) I thought raycasting used some parallel GPU OpenGL magic to do quick collision analysis.
Okay… so what you are saying is that the most efficient way to do it is for each robot to loop through all the objects, find the ones in its FOV, and then check those individually for visibility?

If you have multiple robots it might be smart to check each one’s FOV in a single loop over all objects, but the difference versus looping once per robot shouldn’t be very significant.

The GPU only has information about objects that are currently rendered, and even if that weren’t the case, it would have to do the very same thing. There is no “magic” in programming, apart from insanely fast iterations of calculations.

Ok, thanks for the answer. One last thing though: aren’t there ways to structure/partition Nodes that make geometric searches faster?

Sure, you can store a list or map of your stuff keyed on any parameters you want. But again, I think you immensely overestimate the overhead. Just implement it, see what your actual bottlenecks are, and then micro-optimize.
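As one example of such a structure, a uniform grid (spatial hash) buckets objects by cell, so a range query only inspects the cells the query circle can overlap instead of every object. A hypothetical sketch for 2D positions on the planar terrain (the class and its sizing are illustrative, not taken from any engine):

```java
import java.util.*;

// Uniform-grid spatial partition over (x, z) positions.
public class UniformGrid {
    final double cell;
    final double[][] pos;
    final Map<Long, List<Integer>> buckets = new HashMap<>();

    UniformGrid(double cellSize, double[][] positions) {
        cell = cellSize;
        pos = positions;
        for (int i = 0; i < positions.length; i++)
            buckets.computeIfAbsent(key(positions[i][0], positions[i][1]),
                                    k -> new ArrayList<>()).add(i);
    }

    // Pack the two cell coordinates into one long map key.
    long key(double x, double z) {
        return (((long) Math.floor(x / cell)) << 32)
               ^ (((long) Math.floor(z / cell)) & 0xffffffffL);
    }

    // All object indices within 'radius' of (x, z), scanning only the
    // cells that the query circle can overlap.
    List<Integer> query(double x, double z, double radius) {
        List<Integer> out = new ArrayList<>();
        int c0x = (int) Math.floor((x - radius) / cell);
        int c1x = (int) Math.floor((x + radius) / cell);
        int c0z = (int) Math.floor((z - radius) / cell);
        int c1z = (int) Math.floor((z + radius) / cell);
        for (int cx = c0x; cx <= c1x; cx++)
            for (int cz = c0z; cz <= c1z; cz++) {
                List<Integer> b =
                    buckets.get(((long) cx << 32) ^ ((long) cz & 0xffffffffL));
                if (b == null) continue;
                for (int i : b) {
                    double dx = pos[i][0] - x, dz = pos[i][1] - z;
                    if (dx * dx + dz * dz <= radius * radius) out.add(i);
                }
            }
        return out;
    }
}
```

Note the advice above still applies: for moving robots the grid must be rebuilt or updated every frame, so with only a few dozen objects a plain loop may well be just as fast. Measure first.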

Thanks a lot for the advice. I’ll try it and see.