I am currently trying to simulate a sensor that rotates 360 degrees around one axis in 0.25 degree increments. My current problem is that when I perform this rotation in the simpleUpdate loop, even though I get 3500+ fps, the object actually rotates quite slowly; in other words, it performs a full rotation at 10.5Hz. If I go full screen it drops down to 2.7Hz. I would expect that if the frame rate is 3500+ fps the object would be moving so fast that I wouldn't be able to see it. I need the object to rotate as fast as possible, while always ensuring that it does so in the specified angle increments.
Any thoughts? Any help would be appreciated. Below I show what I am doing.
@Override
public void simpleUpdate(float tpf)
{
    super.simpleUpdate(tpf);
    // ... rotate the object by one 0.25 degree step here ...
}
A few things to remember as others have hinted at:
definitely multiply by tpf, or your speed will vary with frame rate, which is probably not what you want. That's what tpf is there for.
tpf loses accuracy the more FPS you have… which is normally not a problem, but…
no matter how many FPS you are cranking through, you are only going to see your monitor’s refresh rate. If you are rendering 3500 FPS then you aren’t even seeing 90% of those frames… and you are also wrecking tpf accuracy at the same time.
Conclusion: turn vsync on and use tpf as a multiplier. Rotating by FastMath.TWO_PI * tpf each frame would be one full rotation per second. Adjust accordingly.
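A minimal sketch of that, assuming the sensor geometry hangs off a Node called sensorNode (a hypothetical name, not from the original post):

@Override
public void simpleUpdate(float tpf)
{
    // FastMath.TWO_PI radians * tpf advances one full revolution per second,
    // independent of the frame rate; scale it up or down for other speeds.
    float speed = FastMath.TWO_PI;            // radians per second (1 Hz sweep)
    sensorNode.rotate(0f, speed * tpf, 0f);   // spin around the Y axis
}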
I guess I should have provided a little more detail on what I am trying to do. I am trying to simulate a LIDAR sensor. Therefore I have a laser sensor spinning around. With every rotational step I cast a ray from the laser onto the environment and return the collision. This is why I need very fast and small rotational increments. Typically these sensors provide a full 270 degree reading (1080 points at 0.25 degree resolution) at 40Hz. I am trying to simulate such a device…
Are you trying to simulate what it’s doing or simulate what it might look like if you could see the laser? The former has nothing to do with a visualization (and should not even be using JME’s render thread I guess) and the second just looks like a blur.
…unless you mean that each single degree sample comes in at 40 Hz… that would be quite different and is actually slow enough visually that you might see it. It would take almost 7 seconds (270 / 40 = 6.75 s) to do the full 270 degree sweep.
But if it’s a full 270 degree sweep, 40 times a second, you’ll never see that. Your monitor is only drawing (likely) 60 frames a second anyway. At best you’ll see a sweeping flicker.
So I guess we need to know what you are simulating exactly.
Running the above snippet, I see that the frequency at which simpleUpdate is called on my system is around 4000Hz (Every Tick Freq). When I measure the frequency at which a full sweep happens (Every Sweep Freq) I get 2.7Hz, which makes sense: at 0.25 degree increments I need 1440 updates to perform a full sweep (360/0.25 = 1440), so if simpleUpdate runs at 4000Hz, full scan sweeps happen at about 2.7Hz (4000/1440 ≈ 2.7Hz) - which is what I am seeing. 2.7Hz for a full 360 degree sweep at 0.25 degree increments is too slow. I can work with it of course, but I am trying to get as close to 40Hz as I can. My experiments so far show that simpleUpdate is not running fast enough for me to reach anywhere close to 40Hz; I would be happy with at least 10Hz. Maybe there is an alternative way to simulate a LIDAR without using collisions in the update loop? I read about the z-buffer before… would that be a faster option? I do not know the inner workings of JME3 well enough to make a wise decision on this…
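The measuring code itself isn't shown above; a minimal sketch of that kind of frequency measurement (hypothetical field names, one 0.25 degree step per update as described) might look like this:

private int tickCount;   // simpleUpdate calls since the last report
private int sweepCount;  // completed 360 degree sweeps since the last report
private float elapsed;   // seconds since the last report
private float angle;     // current sweep angle in radians

@Override
public void simpleUpdate(float tpf)
{
    tickCount++;
    elapsed += tpf;
    angle += 0.25f * FastMath.DEG_TO_RAD;   // one 0.25 degree step per update
    // ... rotate the sensor to 'angle' and cast the ray here ...
    if (angle >= FastMath.TWO_PI) {         // one full 360 degree sweep done
        angle -= FastMath.TWO_PI;
        sweepCount++;
    }
    if (elapsed >= 1f) {                    // report roughly once per second
        System.out.println("Every Tick Freq: " + (tickCount / elapsed)
                + " Hz, Every Sweep Freq: " + (sweepCount / elapsed) + " Hz");
        tickCount = 0;
        sweepCount = 0;
        elapsed = 0f;
    }
}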
Well, I think you need to decide if you are simulating a visualization or simulating a LIDAR sensor and then do one or the other. Because trying to use a visualization to drive sensor simulation is going to be problematic.
…but note: none of the JME structures are set up to be fast enough to do distance queries at 21000 FPS. It would take special (very special) data structures to support that.
When I simulated LIDAR before, I did it differently. I took frame grabs and pretended there were beams. I could get a whole “square” of beams that way just by sampling the frame buffer texture of my off-screen render. At the very least you could get a whole line of values all at once.
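A rough sketch of that sampling idea, assuming a 512x512 RGBA8 frame grab has already been read back into a ByteBuffer named cpuBuffer (hypothetical; see the readback sketch further down) and that the scene was rendered with a material that writes the value of interest into the color channels:

int width = 512, height = 512;
int row = height / 2;                     // the horizontal scan line
float[] beamValues = new float[width];    // one "beam" sample per column
for (int x = 0; x < width; x++) {
    int index = (row * width + x) * 4;    // 4 bytes per RGBA8 pixel
    // The red channel is assumed here to encode the value of interest
    // (e.g. a distance written by a custom material); 0xFF strips the sign.
    beamValues[x] = (cpuBuffer.get(index) & 0xFF) / 255f;
}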
Since I don’t know what you are doing with the data, it’s hard to guess what would work.
I am trying to have a virtual sensor that I can move around an environment with obstacles. The sensor publishes every scan of points which I then plot on the screen as a point cloud. Hence as I move around it basically “maps” the environment through these ray collisions.
This sounds like a better approach, and I would definitely like to try it. Do you know of any examples I could follow or use as a starting point regarding the use of frame buffers? I am not very familiar with frame buffers, but after looking through this forum I was able to follow an example where I extend LwjglRenderer and create a depthBuffer from it. I was able to convert the depth values to actual distances from the camera. I want to keep my current 3rd-person camera so that I can still pan/zoom and move around the scene. My understanding is that the frame buffer gets generated from a camera? Do I need a second camera acting as the lidar, from which the depthBuffer is generated? This is where it gets confusing for me. Thanks a lot for your help. I really appreciate it!!
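For reference, converting a raw [0,1] depth buffer value back to a linear distance is usually done with the formula below for a standard perspective projection; this is a generic sketch, not the poster's code:

// d is the raw depth buffer value in [0,1]; near/far are the camera's
// frustum planes (e.g. 1 and 1000 for the offscreen camera set up below).
public static float depthToDistance(float d, float near, float far) {
    float ndcZ = 2f * d - 1f;   // back to normalized device coordinates [-1,1]
    return (2f * near * far) / (far + near - ndcZ * (far - near));
}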
You can render to a texture and then read from that texture. That’s what you need to be looking for.
There is a JME test in the test suite that shows rendering to a texture. I don’t recall the name but a familiarity with all of those tests would serve you well anyway.
Then you just have to figure out how to copy the data from the texture to memory that you can read. You can probably look at ScreenShotAppState for that maybe… I don’t remember what example I used for my off-screen capture (I used it in SimArboreal). Worst case, you can look at the sim arboreal editor’s code online.
Great! I found those examples and I can run them. I just need to take some time today to understand them well. I will report back sometime later today. Thanks a lot!!
So after playing around with RenderToTexture and RenderToMemory I am having a little trouble trying to extract the depth information from the frameBuffer:
// offscreen camera and pre-view (rendered before the main view each frame)
Camera offCamera = new Camera(512, 512);
offCamera.setFrustumPerspective(45f, 1f, 1f, 1000f);
offView = renderManager.createPreView("Offscreen View", offCamera);
offView.setClearFlags(true, true, true);
offView.setBackgroundColor(ColorRGBA.DarkGray);

// create the offscreen framebuffer
offBuffer = new FrameBuffer(512, 512, 1);

// color texture (currently unused; uncomment setColorTexture below to attach it,
// which is needed if the framebuffer's color data is ever read back)
Texture2D offTex = new Texture2D(512, 512, Format.RGBA8);
offTex.setMinFilter(Texture.MinFilter.Trilinear);
offTex.setMagFilter(Texture.MagFilter.Bilinear);
// offBuffer.setColorTexture(offTex);

// depth target: attach a depth texture so the depth values end up somewhere
// usable. Note: also calling offBuffer.setDepthBuffer(Format.Depth) is
// redundant, since setDepthTexture() replaces that depth attachment.
Texture2D depthTexture = new Texture2D(512, 512, Format.Depth);
offBuffer.setDepthTexture(depthTexture);

// render the offscreen viewport into the framebuffer
offView.setOutputFrameBuffer(offBuffer);

// attach the scene to the viewport to be rendered
offView.attachScene(collidables);
How can I do this? Any thoughts? There is a FrameBuffer.getDepthBuffer() which returns a RenderBuffer, but I cannot get any depth information about the rendered image from it…
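One possible direction, sketched against the 3.0-era SceneProcessor interface (newer JME versions also require a setProfiler stub): attach a SceneProcessor to the offscreen viewport and copy the framebuffer back to CPU memory in postFrame, the way the render-to-memory test does. Note that Renderer.readFrameBuffer() returns the color attachment as RGBA bytes, so reading distances this way assumes the offscreen scene is rendered with a material that encodes depth/distance into the color output (or that you keep the lower-level depth readback from the extended LwjglRenderer mentioned earlier). Class and field names here are hypothetical:

import java.nio.ByteBuffer;

import com.jme3.post.SceneProcessor;
import com.jme3.renderer.RenderManager;
import com.jme3.renderer.ViewPort;
import com.jme3.renderer.queue.RenderQueue;
import com.jme3.texture.FrameBuffer;
import com.jme3.util.BufferUtils;

public class ReadbackProcessor implements SceneProcessor {

    private RenderManager rm;
    private final FrameBuffer offBuffer;
    private final ByteBuffer cpuBuffer;   // RGBA8: 4 bytes per pixel

    public ReadbackProcessor(FrameBuffer offBuffer, int width, int height) {
        this.offBuffer = offBuffer;
        this.cpuBuffer = BufferUtils.createByteBuffer(width * height * 4);
    }

    @Override
    public void initialize(RenderManager rm, ViewPort vp) {
        this.rm = rm;
    }

    @Override
    public void postFrame(FrameBuffer out) {
        // Copy the offscreen framebuffer's color attachment into CPU memory.
        cpuBuffer.clear();
        rm.getRenderer().readFrameBuffer(offBuffer, cpuBuffer);
        // ... sample cpuBuffer here, e.g. one scan line per sweep ...
    }

    // Remaining SceneProcessor methods left minimal for this sketch.
    @Override public void reshape(ViewPort vp, int w, int h) { }
    @Override public boolean isInitialized() { return rm != null; }
    @Override public void preFrame(float tpf) { }
    @Override public void postQueue(RenderQueue rq) { }
    @Override public void cleanup() { }
}

It would be registered after the viewport setup above with something like offView.addProcessor(new ReadbackProcessor(offBuffer, 512, 512));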