I’m considering using jMonkeyEngine to build a simple Dynamic Vision Sensor (DVS) simulator (a.k.a. an Address Event Representation (AER) camera).
The main difference between a standard camera and a DVS is that standard cameras are frame-based, i.e. they produce a full frame every period, while a DVS produces a stream of timestamped events from the pixels that change. If you want to know more, look at
I am looking at the jMonkeyEngine code and trying to decide on the best approach to implement this.
I am trying to test it with the TestJaime demo.
As far as I can see, there are two main approaches:

1. Slow down the action by a large factor (e.g. 1000) and then capture the difference between consecutive frames to produce the events. In the mentioned demo this is controlled by the cinematic FPS.
2. Modify the main rendering loop so that frame rendering is no longer tied to the real-time clock. By doing so, I could increase the temporal resolution and compute consecutive frame differences to produce the stream of pixel-change events.
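For the second approach, my current idea is to replace the engine's wall-clock timer with one that advances simulated time by a tiny fixed step per rendered frame (jME has an abstract `com.jme3.system.Timer` that the application can be given, if I read the source right). Here is a minimal sketch of just the clock logic, with names of my own choosing (`SimulatedClock`, `stepMicros` are not jME API):

```java
/**
 * Sketch of a fixed-step simulated clock (my own class, not a jME API).
 * In jMonkeyEngine this logic would presumably live in a custom
 * com.jme3.system.Timer subclass handed to the application, so that
 * update loops and cinematics see simulated time instead of wall time.
 */
public class SimulatedClock {
    private final long stepMicros;  // simulated microseconds advanced per rendered frame
    private long nowMicros = 0;     // current simulated time in microseconds

    public SimulatedClock(long stepMicros) {
        this.stepMicros = stepMicros;
    }

    /** Called once per rendered frame: advance simulated time by one fixed step. */
    public void tick() {
        nowMicros += stepMicros;
    }

    /** The fixed tpf (in seconds) the engine would see, e.g. 0.0001 s for a 100 µs step. */
    public float timePerFrame() {
        return stepMicros / 1_000_000f;
    }

    /** Timestamp to attach to the events generated from this frame. */
    public long nowMicros() {
        return nowMicros;
    }
}
```

With a 100 µs step, rendering 10,000 frames covers one simulated second, i.e. a 10 kHz effective temporal resolution, regardless of how long the frames actually take to render.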
In both cases the goal is to increase the temporal resolution of the events. To clarify: in a standard frame-based approach, a fast-moving object would “jump” from one position in the frame to another, distant position (more than 1 pixel away).
In a DVS system the same movement would produce a stream of events at every pixel along the path, at a much higher temporal resolution.
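Whichever way I get the high-rate frames out of the renderer, the frame-differencing stage should be the same. Here is a rough sketch of what I have in mind, in plain Java (class and method names are mine, not from any library; a real DVS thresholds log-intensity changes, but I use plain intensity here for simplicity):

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Minimal sketch (my own naming, not a jME API): turns consecutive
 * grayscale frames into DVS-style address events.
 */
public class DvsEmulator {

    /** One AER event: pixel address, polarity of the change, timestamp in microseconds. */
    public record Event(int x, int y, boolean on, long timestampUs) {}

    private final int width, height;
    private final float threshold;  // minimum intensity change needed to fire an event
    private float[] reference;      // per-pixel intensity at the last emitted event

    public DvsEmulator(int width, int height, float threshold) {
        this.width = width;
        this.height = height;
        this.threshold = threshold;
    }

    /**
     * Compare the new frame against the per-pixel reference and emit an event
     * for every pixel whose intensity changed by more than the threshold.
     */
    public List<Event> process(float[] frame, long timestampUs) {
        List<Event> events = new ArrayList<>();
        if (reference == null) {          // first frame: just initialize the reference
            reference = frame.clone();
            return events;
        }
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                int i = y * width + x;
                float diff = frame[i] - reference[i];
                if (Math.abs(diff) >= threshold) {
                    events.add(new Event(x, y, diff > 0, timestampUs));
                    reference[i] = frame[i];  // reset reference, like a real DVS pixel
                }
            }
        }
        return events;
    }
}
```

The frame data itself would come from reading back the framebuffer each rendered frame (something like what jME's screenshot support does), converted to grayscale intensities; the timestamp would come from the simulated clock rather than the wall clock.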
Does anyone have an idea (or a pointer) about the best way to implement this?