Where transform is the camera object and target is the target object which the camera tries to follow.
I’m interested especially in:
how to get the euler angles of the camera and the target
how to lerp (I guess lerp == interpolate?) between angles
this is not mandatory - can I make my app state run last, after all the other updates on the scene graph have finished their jobs? Note - I’m going to add & remove app states dynamically during program execution.
Afaik, to get the angle between a coordinate axis (x, y, or z) and another vector (for the Euler angles), use transform.normalize().angleBetween(Vector3f.UNIT_Y); — the result is in radians. Then build the rotation from the interpolated angle: Quaternion rotation = new Quaternion().fromAngles(0, interpolatedAngle, 0);
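To make the fromAngles part concrete, here is a pure-Java sketch of what a yaw-only quaternion is (the same thing new Quaternion().fromAngles(0, yaw, 0) builds in jME3, as far as I know) and how it rotates a vector. YawDemo, yawQuaternion and rotate are illustrative names, not jME3 API:

```java
public class YawDemo {
    // Pure-yaw quaternion (w, x, y, z): rotation about +Y by 'yaw' radians.
    // Afaik this is what new Quaternion().fromAngles(0, yaw, 0) builds in jME3.
    static double[] yawQuaternion(double yaw) {
        double h = yaw / 2.0;
        return new double[] { Math.cos(h), 0.0, Math.sin(h), 0.0 };
    }

    // Rotate vector v by unit quaternion q using t = 2(u x v); v' = v + w*t + u x t,
    // where u is the quaternion's vector part.
    static double[] rotate(double[] q, double[] v) {
        double[] u = { q[1], q[2], q[3] };
        double[] t = scale(cross(u, v), 2.0);
        double[] wt = scale(t, q[0]);
        double[] uxt = cross(u, t);
        return new double[] { v[0] + wt[0] + uxt[0],
                              v[1] + wt[1] + uxt[1],
                              v[2] + wt[2] + uxt[2] };
    }

    static double[] cross(double[] a, double[] b) {
        return new double[] { a[1]*b[2] - a[2]*b[1],
                              a[2]*b[0] - a[0]*b[2],
                              a[0]*b[1] - a[1]*b[0] };
    }

    static double[] scale(double[] a, double s) {
        return new double[] { a[0]*s, a[1]*s, a[2]*s };
    }
}
```

For example, rotating (0, 0, 1) by a 90° yaw lands on roughly (1, 0, 0), which matches a right-handed rotation about +Y.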
yes, lerp = Linear intERPolation, so Vector3f#interpolate(vec3f, vec3f, float)
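One caveat worth adding: plain linear interpolation of angles breaks at the 360°→0° wrap. Here is a small pure-Java helper (AngleLerp / lerpAngle are illustrative names, not jME3 methods) that always takes the shortest path around the circle:

```java
public class AngleLerp {
    // Linearly interpolate between two angles (in radians), going the
    // shortest way around the circle: 350° -> 10° passes through 0°,
    // not backwards through 180°. atan2(sin d, cos d) normalizes the
    // difference d into (-pi, pi].
    static double lerpAngle(double from, double to, double t) {
        double diff = Math.atan2(Math.sin(to - from), Math.cos(to - from));
        return from + diff * t;
    }
}
```

If I remember right, jME3 also has FastMath.interpolateLinear for plain floats, but it does not do this wrap-around for you.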
I think maybe you could use a static AtomicInteger (something similar to a mutex, i.e. mutual exclusion) shared by all AppStates: increment it from every BaseAppState’s update, and when it reaches appStates.length() - 2 (the last index to execute before your event), start executing your exclusive event or state. You could put that guard inside the update of this “mutex” state.
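The shared-counter idea above, sketched in plain Java with a simulated frame loop rather than the real AppStateManager (all names here are illustrative, not jME3 API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class LastStateDemo {
    // Runs one simulated "frame": every ordinary state does its work and
    // bumps a shared counter; the guard state only fires once the counter
    // shows that all other states have updated this frame.
    public static List<String> runOneFrame(int otherStates) {
        AtomicInteger finished = new AtomicInteger();  // shared mutex-like counter
        List<String> log = new ArrayList<>();
        List<Runnable> states = new ArrayList<>();

        // The ordinary states: do their work, then bump the counter.
        for (int i = 0; i < otherStates; i++) {
            final int id = i;
            states.add(() -> { log.add("state" + id); finished.incrementAndGet(); });
        }

        // The guard state: runs its exclusive work only after everyone else.
        states.add(() -> {
            if (finished.get() == otherStates) {
                log.add("exclusive");
                finished.set(0);  // reset for the next frame
            }
        });

        for (Runnable s : states) s.run();  // one simulated update pass
        return log;
    }
}
```

That said, afaik jME3’s state manager updates states in the order they were attached, so simply attaching your state last (and re-attaching it last after dynamic add/remove) may already be enough.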
Best way is to keep the euler angles yourself and construct the quaternion only when needed. That’s what Unity is doing.
quat.toAngles() can kind of give them to you but they may jump around because you cannot reliably get back euler angles from a quaternion… at least not the same ones you put in.
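A minimal sketch of “keep the euler angles yourself” in plain Java (FollowYaw is an illustrative name; in jME3 you would rebuild the Quaternion with fromAngles(0, yaw, 0) whenever you need it):

```java
public class FollowYaw {
    // The yaw angle, in radians, is the single source of truth.
    // We never extract it back out of a quaternion.
    private double yaw;

    public void addYaw(double delta) {
        // Keep the stored angle normalized to (-pi, pi] so it never drifts.
        yaw = Math.atan2(Math.sin(yaw + delta), Math.cos(yaw + delta));
    }

    public double getYaw() { return yaw; }

    // Build the rotation only when it is needed, e.g. once per frame.
    // Returned as (w, x, y, z): a pure rotation about +Y.
    public double[] toQuaternion() {
        return new double[] { Math.cos(yaw / 2), 0.0, Math.sin(yaw / 2), 0.0 };
    }
}
```

This avoids the round-trip problem entirely: toAngles is never called, so the “jumping” representations of the same rotation can’t bite you.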
One thing I don’t understand, and it prevents me from using this camera script:
No matter what angle I set in the “angle” variable, it seems to be ignored, so in this code:
And I don’t understand why. I would expect the first version to maintain a specific angle relative to the target’s location, and that’s not happening.
I’m missing something in the vector math…
Add println or logging for the rotation and the offset. For example if offset is 0,0,0 then it won’t matter how much you rotate it.
We get a very narrow view of your code so we can’t tell if there is a mistake somewhere else… and anyway, if this were my program step 1 would be confirming all assumptions by logging.
Thanks! I’ll post the entire class and a demonstration video. I just wanted to understand whether my assumption was right - given that the offset is not 0,0,0, when using that math, should I expect the camera to follow the object at a constant angle?
This looks like you’re trying to get a 45 degree angle (?), but it’s actually only 4° which may be barely visible.
I’d use angle = 45 * FastMath.DEG_TO_RAD; to make it easier to understand and less bug prone.
Thanks a lot! I’ll change that. But no matter what angle I use (even if it’s 4 deg), my issue with this code is that the camera is not tracking the object at that constant angle. I’ll post a video sample soon.
Not randomly. The offset seems right, but whenever the object rotates I expected the camera to rotate as well, to maintain the same point of view, and this doesn’t happen. I’ll post a demo video.
If your angle is constant then the offset is effectively constant… it won’t matter what direction the camera is facing unless you also include the camera rotation.
Here is a sample video showing the problem. The angle is fixed on 45 degrees (45*FastMath.DEG_TO_RAD)
As you can see, the car is rotating according to the track’s path but the camera doesn’t follow the rotation of the car.
Let me try to explain what your code is doing:
1. Create an angle rotation.
2. Rotate an offset by that rotation.
3. Get the world-space translation of the target and add the WORLD SPACE rotated offset… irrespective of object or camera rotation.
4. Set the camera location to that.
5. Have the camera look at the object.
If you want the camera rotated relative to the object then you will have to include the object’s rotation in your calculation.
The original code probably got the object’s rotation for the angle.
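In jME3 terms the fix is presumably something along the lines of cam.setLocation(target.getWorldTranslation().add(target.getWorldRotation().mult(offset))) followed by the lookAt - I’m guessing at the exact variable names here. The underlying math, yaw-only, in plain Java (CameraMath / cameraPos are illustrative helpers):

```java
public class CameraMath {
    // World-space camera position = target position + offset rotated by the
    // target's yaw, so the camera keeps the same point of view as the object
    // turns. Right-handed rotation about +Y: x' = x*cos + z*sin, z' = -x*sin + z*cos.
    static double[] cameraPos(double[] targetPos, double targetYaw, double[] offset) {
        double c = Math.cos(targetYaw), s = Math.sin(targetYaw);
        double ox = offset[0] * c + offset[2] * s;
        double oz = -offset[0] * s + offset[2] * c;
        return new double[] { targetPos[0] + ox,
                              targetPos[1] + offset[1],   // height is unaffected by yaw
                              targetPos[2] + oz };
    }
}
```

With yaw = 0 this reduces to the original (broken) behavior of adding a fixed world-space offset; as soon as the car turns, the offset turns with it, which is exactly what the video shows is missing.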