This returns a Vector3f… x and y are no-brainers… z is depth in some form. Since x and y will be valid screen coordinates even when the object is behind the camera, how do you determine which side of the camera the object is on from the z value of the returned vector?
I think z is bounded between the near and far planes, but I could be wrong. Have you tried to see if it goes negative behind you? I know that everything goes all wonky for a point right on the camera (since that point technically maps to the entire screen).
At any rate, if you want to know if something is in front of or behind the camera (and don’t care about the screen coordinate) then that’s probably the slowest way to do it. A dot product with the look vector (and the point relative to the camera position) would tell you. If you are trying to ignore the points behind you then you could do that before ever passing the point to getScreenCoordinates.
A dot product of the look vector with pos - cameraPos will tell you the signed distance from the camera along the look vector (assuming the look vector is normalized)… which you could also use to filter out super near and super far points.
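A minimal sketch of that dot-product test, using plain arrays rather than any engine's vector class (the names `cameraPos`, `lookDir`, and `distanceAlongLook` are just placeholders here, not real API):

```java
public class FrontBehindTest {
    // Signed distance of `point` along the camera's (normalized) look direction.
    // Positive => in front of the camera; negative => behind it; zero => in the
    // plane containing the camera.
    static float distanceAlongLook(float[] cameraPos, float[] lookDir, float[] point) {
        float dx = point[0] - cameraPos[0];
        float dy = point[1] - cameraPos[1];
        float dz = point[2] - cameraPos[2];
        return dx * lookDir[0] + dy * lookDir[1] + dz * lookDir[2];
    }

    public static void main(String[] args) {
        float[] cam = {0, 0, 0};
        float[] look = {0, 0, -1}; // camera looking down -Z
        // A point 5 units in front and one 3 units behind:
        System.out.println(distanceAlongLook(cam, look, new float[]{0, 0, -5})); // 5.0
        System.out.println(distanceAlongLook(cam, look, new float[]{0, 0, 3}));  // -3.0
    }
}
```

Since this gives you the distance too, a single dot product handles both the behind-the-camera check and near/far filtering before you ever call getScreenCoordinates.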
Will definitely do that instead. But in case anyone does need to know… it looks like z is always a value between 0 and 2: anything less than 1 is in front of the camera, anything greater than 1 is behind it. If you treat exactly 1 as a special case and ignore it, you avoid the degenerate situation you mentioned where the point sits right on the camera. But your approach is obviously much, much better.
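If someone does want to filter on the returned z, the check reduces to a one-liner. Note this leans entirely on the observed 0–2 range above, which is engine-specific behavior and worth verifying in your own version before relying on it:

```java
public class ScreenZCheck {
    // Assumes the empirically observed behavior: z < 1 => in front of the camera,
    // z > 1 => behind it, z == 1 => degenerate (point at the camera itself).
    // This is NOT documented API behavior; verify it before depending on it.
    static boolean isInFront(float screenZ) {
        return screenZ < 1.0f;
    }

    public static void main(String[] args) {
        System.out.println(isInFront(0.3f)); // true  (in front)
        System.out.println(isInFront(1.7f)); // false (behind)
        System.out.println(isInFront(1.0f)); // false (degenerate, skipped)
    }
}
```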