Need help with camera

Hi,

I tried various samples with respect to camera movement and control and finally found TestChaseCameraAppState, which is the closest to what I want to get.
The behaviour is not bad, but the mouse controls are not enabled.
Recently I got FocusCameraState from jayfella, which has the mouse controls, but not the behaviour I want.

So I tried to combine both, but was not successful.
I added logs to onAction and onAnalog, but I don’t get the mouse movement directions.
I use chaseCamAS.setToggleRotationTrigger() for the middle mouse button and added an inputManager mapping for toggle translate and the left mouse button.
Of course I added input mappings for all 4 directions and mouse movement, but I don’t get them.

I tried the strings from FocusCamera, as well as the strings from CameraInput, but both listener methods only ever report the translation trigger as the name.

I don’t know how to proceed.

Should I subclass ChaseCamera, or can I get it with ChaseCameraAppState?

Maybe it’s better to describe what I want to get. I don’t know the jME terminology, so please allow “normal” words:
As I already mentioned, TestChaseCameraAppState is very close to what I want with respect to visual behaviour.
That is, the quad with the monkey image is my object. I don’t have a visual counterpart to the teapot; an invisible object as replacement would be fine.
So I’m talking about the movement of the monkey quad.

I’d like to get mouse controls similar to Blender (2.7) with turntable control. Which means: the middle mouse button enables rotation. Moving the mouse with the middle button pressed left/right rotates around the Y-axis, and moving the mouse up/down rotates around the X-axis.
Pressing the middle mouse button while holding the Shift key changes the rotation to a translation. If mouse button plus Shift key does not work, it would be fine to use the left mouse button for translation and the middle button for rotation.

But the ultimate highlight of Blender’s mouse control: turning the mouse wheel while holding Shift translates the object along the Y-axis, and turning the mouse wheel while holding the Control key translates the object along the X-axis.

Is it possible to implement a similar mouse control, and if so, where/how should I start?

Using the arrow keys works fine.

… and when I have moved the scene to a state I like, which properties do I have to log/set to get that visual aspect back at app creation time?

I read the advice that subclassing is not the preferred way. So I need a helping hand to learn how to do it right and how to put things together.
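For clarity, the modifier logic I mean can be sketched without any engine code. This is just an illustration in plain Java (the class, enum, and method names are all invented for this sketch, nothing here is jME API):

```java
// Minimal sketch of the Blender-style (2.7, turntable) modifier logic:
// which mouse gesture plus modifier key triggers which action.
// All names are hypothetical; this is not jME API.
public class TurntableControls {
    public enum Action { ROTATE, TRANSLATE, ZOOM, PAN_Y, PAN_X, NONE }

    // Mouse-drag actions: middle button rotates, Shift+middle translates.
    public static Action onDrag(boolean middleDown, boolean shiftDown) {
        if (middleDown) {
            return shiftDown ? Action.TRANSLATE : Action.ROTATE;
        }
        return Action.NONE;
    }

    // Wheel actions: plain wheel zooms, Shift+wheel translates along Y,
    // Ctrl+wheel translates along X.
    public static Action onWheel(boolean shiftDown, boolean ctrlDown) {
        if (shiftDown) return Action.PAN_Y;
        if (ctrlDown)  return Action.PAN_X;
        return Action.ZOOM;
    }
}
```

In jME terms, each (trigger, modifier) pair would become its own input mapping, and the modifier keys would be tracked in onAction while the drag/wheel amounts arrive in onAnalog.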

This may get you close to where you want to be.

    //create 3rd person view.
    private ChaseCamera addHeadNode(Node body) {
        BoundingBox bounds = (BoundingBox) body.getWorldBound();
        Node head = new Node("headNode");
        body.attachChild(head);
        //offset head node using spatial bounds to pos head level
        head.setLocalTranslation(0, bounds.getYExtent() * 2, 0);
        //use offset head node as target for cam to follow
        Camera cCam = getApplication().getCamera();
        //Change cam far plane from default of 1000 wu
        cCam.setFrustum(1.0f, 150.0f, -0.5f, 0.5f, 0.5f, -0.5f);
        ChaseCamera chaseCam = new ChaseCamera(cCam, head,
                getApplication().getInputManager());
        //Set arrow keys to rotate view, sets CHASECAM_TOGGLEROTATE, you still
        //have to map the triggers globally, see KeyBoardRunState.
        //Uses default mouse scrolling to zoom.
        chaseCam.setToggleRotationTrigger(new MouseButtonTrigger(
                MouseInput.BUTTON_MIDDLE),
                new KeyTrigger(KeyInput.KEY_LEFT),
                new KeyTrigger(KeyInput.KEY_RIGHT),
                new KeyTrigger(KeyInput.KEY_UP),
                new KeyTrigger(KeyInput.KEY_DOWN));
        //duplicate blender rotation
        chaseCam.setInvertVerticalAxis(true);
        //disable so camera stays same distance from head when moving
        chaseCam.setSmoothMotion(false);
        //never hide cursor if used for picking
        //if setting to true, it will restrict cursor movement to viewport
        //window on rotation.
//        chaseCam.setHideCursorOnRotate(false);
        //set camera to face spatial on start
        chaseCam.setDefaultHorizontalRotation(1.57f);
        chaseCam.setRotationSpeed(4f);
        chaseCam.setMinDistance(bounds.getYExtent() * 2);
        chaseCam.setDefaultDistance(10);
        chaseCam.setMaxDistance(25);
        //prevent camera rotation below head
        chaseCam.setDownRotateOnCloseViewOnly(false);     
        
        return chaseCam;
    }

Body is a node the model is attached to.

Hi,

thank you for your attention and support.

I don’t want to offend you, but your code has nothing that improves my situation. It does not change the mouse triggers, and the frustum makes it even worse.
For me, frustum sounds like frustration :frowning:

Do you know FocusCameraState from @jayfella? That’s way beyond your code. Sorry, but that’s the truth.

As I don’t know how to improve mouse support with ChaseCameraAppState, maybe I can get further with FocusCameraState?

The problem for me is that I don’t like the deformations of the camera, and I don’t understand the math at all.

I tried to add the motion keys from TestChaseCameraAppState to FocusCameraState, which works fine at first sight. But as soon as the mouse is touched, any key-based motion is forgotten and the model jumps back to the place it was before.

I then tried to adapt the math from the mouse translation to the motion keys, but with that, the model does not move at all.
I’m not sure at all that I did it right, so sorry if that should have worked.
The problem is, that I don’t understand the math from FocusCameraState.
@jayfella: are you willing to explain the math of your camera?
I would like to understand it. But there are so many words in jME I have never heard before, and even with the help of a dictionary I don’t know what they mean.

I guess I can’t solve my situation until I understand the background.
Motion handling from TestChaseCameraAppState is pretty easy and works fine for me.
Is there a way in between that could work?
From a user’s point of view I want a clear distinction between rotation and translation. I don’t get that feeling with FocusCameraState. Nearly every mouse action results in a rotation, and sometimes I don’t know why the model rotates the way it does (based on my mouse movement and my very small understanding).

Currently I have created a node for the ground image (monkey quad) and a node for my model, so I think they can move independently.
Maybe I should create another node for the focus point, but actually I don’t know what to use as the focus point.

My ground plane has a dimension of 2000x2000 and a height of about 500 - the model I create is within that space. But from model to model the difference may be big with respect to size and spatial count.

So I don’t want visual clipping in that space and (if possible) I don’t want any deformations.

But as I don’t know what’s possible with jME3, I have to ask weird questions.

Unless @jayfella named his class ChaseCamera, which seems unlikely because that would be the same name used by jME, I can only assume you gave up on using his class.

It was not meant to be used as anything other than a demo of how to do what you asked below.

Which is exactly what this code does, using the ChaseCamera class without subclassing it, just using methods that already exist.

The arrow keys function exactly as Blender does, moving the camera up or down or rotating around Y; so does the middle mouse button, i.e. the scroll button.

Read the wiki, it has your answer.

The best route is probably educating yourself in JME.

https://wiki.jmonkeyengine.org/jme3/beginner/hello_input_system.html

And read the tutorials on the scene graph and maths docs.

This should help you understand the majority of what’s happening.

Hi,

I have already read those tuts. I don’t know - your camera is hard stuff. Maybe too hard for me.

Anyway - I started with TestChaseCameraAppState from scratch and now I have the mouse triggers working. I don’t know what went wrong the first time.
So I’m on my way …

What I have not found out yet: how can I save the current scene so that it looks the same on restart of the application?

I tried to log the camera settings and set them at startup, but the result is not as it was before. Then I logged the settings from the focusPoint too and set them at startup - still not the scene as before.

I apply movements to the focus spatial, but rotation and zooming happen internally in jME3. So I don’t know where I have to look for the properties that I need to set to get the same visual impression at restart.

The scene is a visual representation of data - but it is not the data itself, so you should save the data that is used to represent the scene so you can reload it. For example, you have a “Player” game object. The player has a position and rotation (and maybe even a scale) - called a Transform. Save that information so you can reload it.
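As a minimal sketch of that idea in plain Java (no jME types; the class, field, and key names below are made up for illustration), the handful of values that define the view can be written out and read back:

```java
import java.util.Properties;

// Minimal sketch of "save the data, not the scene": keep the few values
// that define the view and round-trip them through java.util.Properties.
// All names here are invented for illustration, not jME API.
public class SceneState {
    public float posX, posY, posZ;   // e.g. camera or focus location
    public float yaw, pitch, roll;   // e.g. model rotation as Euler angles
    public float zoom;               // e.g. zoom factor

    public Properties save() {
        Properties p = new Properties();
        p.setProperty("pos", posX + " " + posY + " " + posZ);
        p.setProperty("rot", yaw + " " + pitch + " " + roll);
        p.setProperty("zoom", Float.toString(zoom));
        return p;
    }

    public void load(Properties p) {
        String[] pos = p.getProperty("pos").split(" ");
        posX = Float.parseFloat(pos[0]);
        posY = Float.parseFloat(pos[1]);
        posZ = Float.parseFloat(pos[2]);
        String[] rot = p.getProperty("rot").split(" ");
        yaw   = Float.parseFloat(rot[0]);
        pitch = Float.parseFloat(rot[1]);
        roll  = Float.parseFloat(rot[2]);
        zoom  = Float.parseFloat(p.getProperty("zoom"));
    }
}
```

Properties.store()/load() can then move this to a file between runs; on startup, apply the loaded values back to the camera and model before the first frame is rendered.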

Zooming happens from code that you have the ability to change. It will only do what you tell it to do.

When you start out writing games you come to realize that it’s not quite the same as traditional desktop/server development. There’s a lot to learn. Take it slowly, keep grinding away. Even the mighty and great Arnold Schwarzenegger had his first day at the gym.

That’s exactly what I wanted, but that did not work.
From my understanding I have a camera and a focus point. Both are moved by different operations, so if I save the position and rotation of both, I should get the same picture as before.
But that’s not the case.

Having a lot to learn is not the problem for me. I like learning!
The frustration begins when I have to realize that things don’t fit together.

Recently I worked out concentric circular buttons for Swing. I started the wrong way, like this: oh, this does not work, I have to override this; and that does not work, I have to override it too …
Then someone told me that I had started the wrong way, and I learned to understand the difference.
When I got it, I was able to delete most of the code and it worked like a charm.
The good point: I only had to care about my point of interest. The rest of the system fit together perfectly.

When starting with jME, I thought: well, I’m not creating a game. Just create a model and let it move around. And I was convinced that creating the model would be the most challenging part for me.
But I was wrong - the model was the easy part.

I can’t use the jME infrastructure as it does not fit together.
When I use a model of 100x100x100, I need different factors for motion, rotation and zooming than for a model of 1000x2000x200.

… but zoom factor is not available.

Some samples do zooming by changing the viewport, others change the distance between camera and focus point …
From the jME user’s point of view (not the game user’s), I don’t care how zooming works. The same is true for transforming/translating, whatever. But I want a reliable/repeatable way to use it.
Apparently there is no way to do so.

Another issue: when you turn on parallel projection, the zoom factor changes so much that the whole model is invisible (the camera is inside the model). I have to reduce the zoom factor by several thousand to get the same visual size as before.
Of course no zooming, rotation or translation works as before. All need new factors as well :frowning:

Yes, I have reached the point where I understand why you worked out your camera the hard way. Then you have everything under your control and can reload a current scene.

I thought I’d use jME3 since others more intelligent than me have worked out all the math and I could use their stuff.
Now I’m at the point where I understand that things only work if I work them out myself. So I have to care about the math …

Hm, didn’t want that - but looks like I have to.

I mean, I haven’t seen any of your code so I can only guess what you are trying to do from meandering descriptions… but it sounds like you want more of a “3d editor” sort of camera than a “play a game” sort of camera.

If you don’t know the math to do this yourself, then the easiest way is to create a pivot node that will represent your focus for orbiting (ie: the point in space the camera will rotate around).

Make a child node at (0, 0, -zoomDistance). Attach it as a child to the pivot node.

To zoom, move the child node in and out using zoomDistance above.

To ‘truck’ the camera, move the pivot node. (Like shift+middle mouse button in Blender.)

To orbit the camera, just use pitch and yaw to rotate the pivot node. pivotNode.setLocalRotation(new Quaternion().fromAngles(pitch, yaw, 0));

Then use a camera control to attach your camera to the child node. (Or make the child node a CameraNode).
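To make the pivot idea concrete, here is a small engine-free sketch (plain Java; the names are my own, not jME API) of the math the scene graph performs for you when you rotate the pivot node: the camera child sits at a fixed local offset, and pitch/yaw of the pivot move it on a sphere around the focus point. The rotation order assumed here is pitch about X first, then yaw about Y, which is the usual turntable convention:

```java
// Engine-free sketch of the orbit ("turntable") math: the camera sits at
// a child offset (0, 0, -distance) under a pivot node; rotating the pivot
// by pitch (about X), then yaw (about Y), moves the camera on a sphere
// around the pivot. Names are my own, not jME API.
public class OrbitMath {
    // Returns the camera's world offset from the pivot as {x, y, z}.
    public static double[] cameraOffset(double pitch, double yaw, double distance) {
        // the child node's local offset behind the pivot
        double x = 0, y = 0, z = -distance;
        // pitch: rotate about the X axis
        double y1 = y * Math.cos(pitch) - z * Math.sin(pitch);
        double z1 = y * Math.sin(pitch) + z * Math.cos(pitch);
        // yaw: rotate about the Y axis
        double x2 =  x * Math.cos(yaw) + z1 * Math.sin(yaw);
        double z2 = -x * Math.sin(yaw) + z1 * Math.cos(yaw);
        return new double[] { x2, y1, z2 };
    }
}
```

Panning (“trucking”) then just moves the pivot itself, and zooming only changes the distance; neither disturbs the rotation, which is why the state stays easy to save and restore.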

Zoom will always be tricky in parallel projection as you can only do it with the viewport settings. There’s no foreshortening, so the only way to control “how big” things look is where you put the sides of the camera… since those are also parallel planes.

I mean, I guess you can scale the root node in the second case but it’s not going to be as logical and satisfying as the ‘sliding zoom’ in the perspective case. And you still need to figure out where your viewport sides start out to capture that part of the scene that you want.

Even experienced 3d developers have trouble wrapping their heads around orthogonal projection… because it’s not really “3D” anymore from a certain perspective.

Edit: and if you sometimes want to rotate the camera regularly like a first-person camera, then the ‘complex math’ just moved to a different place as now you need to allow free look and then decide where the new pivot should be. Easier to do if you understand the math in the first place.
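A small numeric sketch of that point, assuming the usual convention that an orthographic frustum’s left/right/bottom/top extents directly define the visible world rectangle (plain Java; names are my own, not jME API):

```java
// Sketch: in parallel (orthographic) projection there is no foreshortening,
// so an object's on-screen size depends only on the frustum extents, not on
// its distance from the camera. Names are invented, not jME API.
public class OrthoZoom {
    // Frustum extents {left, right, bottom, top} for a desired
    // visible world height and screen aspect ratio.
    public static float[] extents(float visibleHeight, float aspect) {
        float halfH = visibleHeight * 0.5f;
        float halfW = halfH * aspect;
        return new float[] { -halfW, halfW, -halfH, halfH };
    }

    // On-screen size (in pixels) of an object of worldSize units -
    // note that camera distance does not appear anywhere.
    public static float pixelSize(float worldSize, float visibleHeight,
                                  int screenHeightPx) {
        return worldSize / visibleHeight * screenHeightPx;
    }
}
```

So ortho “zoom” is just shrinking or growing the visible world height, which is exactly why a frustum sized for a 2000-unit machine needs shift/rotate factors thousands of times larger than the perspective defaults.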


That’s true!

The situation is like this: you stand in front of a machine and want to look at a workpiece. For smaller workpieces you need a closer look, for bigger workpieces you need more distance.
The control should work for both extremes.

Well, that’s what I thought ChaseCamera was doing. Therefore I thought ChaseCamera would be the best place to start.
But there are too many things that are protected, hidden, not accessible …

What made me crazy is not the frustum handling. I don’t like the word, but I understand what it is doing.
The real drawback is the huge difference in scale. With a “normal” camera you have to shift or rotate by values like 4-10.
Doing the same with parallel projection, you need values of 2000.
Both for the same model size!
That’s hard to figure out - if you have never seen anything similar.

I have to confess, I have no idea about gaming. I mean, I have never played a game - ever! So I’m much more of an outsider than a noob.

You see the difference?
For me, thinking in 3D orthogonal projection is daily work.

Everything is easier if you understand the math. That’s one of my biggest drawbacks - I didn’t learn enough in my younger days, and now it’s hard to understand even pretty easy things …

Meanwhile I have read nearly everything in the wiki, but the gap to my understanding is so HUGE …

… anyway: I started with parallel projection from scratch and think I’m on the way to understanding a bit. First I tried to move the point the camera looks at, but it resulted in a rotation and not a translation. So I had to learn to distinguish when to move the camera and when to modify the orientation of the model.

Here’s my test code (without mouse for better separation of actions):

package jme3test.renderer;

import com.jme3.app.SimpleApplication;
import com.jme3.input.KeyInput;
import com.jme3.input.controls.ActionListener;
import com.jme3.input.controls.AnalogListener;
import com.jme3.input.controls.KeyTrigger;
import com.jme3.material.Material;
import com.jme3.math.ColorRGBA;
import com.jme3.math.FastMath;
import com.jme3.math.Quaternion;
import com.jme3.math.Vector3f;
import com.jme3.scene.Geometry;
import com.jme3.scene.Node;
import com.jme3.scene.debug.WireBox;
import com.jme3.scene.shape.Quad;
import com.jme3.util.JmeFormatter;
import java.util.logging.ConsoleHandler;
import java.util.logging.Handler;
import java.util.logging.Level;
import java.util.logging.Logger;


/**
 * @author django
 */
public class TestMyCamera extends SimpleApplication implements AnalogListener, ActionListener {
   public TestMyCamera(float[] limits) {
      size = new Vector3f(limits[1] - limits[0], limits[5] - limits[4],
                          limits[3] - limits[2]);
      l.log(Level.INFO, "initial size calculation: " + size);
      rotation = new Quaternion();
   }


   @Override
   public void onAction(String name, boolean isPressed, float tpf) {
      if (RotateTrigger.equals(name)) {
         rotate = isPressed;
      }
   }


   @Override
   public void onAnalog(String name, float value, float tpf) {
      if (ZoomIN.equals(name)) {
         zoomFactor -= 300f * tpf;
         resizeView();
      }
      else if (ZoomOUT.equals(name)) {
         zoomFactor += 300f * tpf;
         resizeView();
      }
      if (rotate) {
         if (KeyLeft.equals(name)) {
            rotation.fromAngleAxis(tpf, Vector3f.UNIT_Y);
            rotateMachine();
         }
         else if (KeyRight.equals(name)) {
            rotation.fromAngleAxis(-tpf, Vector3f.UNIT_Y);
            rotateMachine();
         }
         else if (KeyUp.equals(name)) {
            rotation.fromAngleAxis(tpf, Vector3f.UNIT_X);
            rotateMachine();
         }
         else if (KeyDown.equals(name)) {
            rotation.fromAngleAxis(-tpf, Vector3f.UNIT_X);
            rotateMachine();
         }
      }
      else {
         if (KeyLeft.equals(name)) {
            camLoc.x += 150f * tpf;
            moveCamera();
         }
         else if (KeyRight.equals(name)) {
            camLoc.x -= 150f * tpf;
            moveCamera();
         }
         else if (KeyUp.equals(name)) {
            camLoc.z += 150f * tpf;
            moveCamera();
         }
         else if (KeyDown.equals(name)) {
            camLoc.z -= 150f * tpf;
            moveCamera();
         }
      }
   }


   @Override
   public void simpleInitApp() {
      flyCam.setEnabled(false);
      cam.setParallelProjection(true);
      camLoc = new Vector3f(size.x * 0.5f + 50f, size.y * 2f, size.z *
                            0.5f + 50f);

// INFORMATION RootLogger 09:06:12 camera  location: (-150.48007, 1000.0, -426.1831)
// INFORMATION RootLogger 09:06:12 camera direction: (-0.3407991, -0.6815982, -0.64751816)
// INFORMATION RootLogger 09:05:26 resizeView - aspect: 1.6 - zoomFactor: 1780.575
      cam.setLocation(camLoc);
      cam.lookAt(camLoc.negate(), camDir);
      camLoc.x = -150f;
      camLoc.z = -430f;
      moveCamera();

      registerInputs();
      resizeView();
      createMachine();
   }


   protected void createMachine() {
      Geometry box = new Geometry("Box", new WireBox(size.x, size.y, size.z));
      Geometry ground = new Geometry("Ground", new Quad(size.x * 2, size.z * 2));
      Material m = new Material(assetManager, Unshaded);

      l.log(Level.INFO, "workspace: " + new Vector3f(size.x, size.y, size.z));
      m.getAdditionalRenderState().setWireframe(true);
      m.setColor("Color", ColorRGBA.Green);
      box.setMaterial(m);
      box.setLocalTranslation(-(size.x * 0.5f), size.y, -(size.z * 0.5f));
      m = new Material(assetManager, Unshaded);
      m.setTexture("ColorMap", assetManager.loadTexture(
                   "Interface/Logo/Monkey.jpg"));
      ground.setMaterial(m);
      ground.setLocalRotation(new Quaternion().fromAngleAxis(-FastMath.HALF_PI,
                                                             Vector3f.UNIT_X));
      ground.setLocalTranslation(-1.5f * size.x, 0, size.z * 0.5f);

      machine = new Node("Machine");
      machine.attachChild(box);
      machine.attachChild(ground);

      rootNode.attachChild(machine);
   }


   protected void moveCamera() {
      l.log(Level.INFO, "camera location: " + camLoc);
      l.log(Level.INFO, "camera direction: " + cam.getDirection());

      cam.setLocation(camLoc);
   }


   protected void registerInputs() {
      inputManager.addMapping(ZoomIN, new KeyTrigger(KeyInput.KEY_ADD));
      inputManager.addMapping(ZoomOUT, new KeyTrigger(KeyInput.KEY_SUBTRACT));
      inputManager.addMapping(KeyLeft, new KeyTrigger(KeyInput.KEY_LEFT));
      inputManager.addMapping(KeyRight, new KeyTrigger(KeyInput.KEY_RIGHT));
      inputManager.addMapping(KeyUp, new KeyTrigger(KeyInput.KEY_UP));
      inputManager.addMapping(KeyDown, new KeyTrigger(KeyInput.KEY_DOWN));
      inputManager
              .addMapping(RotateTrigger, new KeyTrigger(KeyInput.KEY_LSHIFT),
                          new KeyTrigger(KeyInput.KEY_RSHIFT));

      inputManager.addListener(this, ZoomIN, ZoomOUT, KeyLeft, KeyRight, KeyUp,
                               KeyDown, RotateTrigger);
   }


   protected void resizeView() {
      float aspect = (float) cam.getWidth() / (float) cam.getHeight();

      l.log(Level.INFO, "resizeView - aspect: " + aspect + " - zoomFactor: " +
            zoomFactor);
      // calculate Viewport
      cam.setFrustum(-2000, 6000, -aspect * zoomFactor, aspect * zoomFactor,
                     zoomFactor, -zoomFactor);
   }


   protected void rotateMachine() {
      l.log(Level.INFO, "machine orientation is: " + machine.getLocalRotation());
      l.log(Level.INFO, ">>>  rotate machine by: " + rotation);
      l.log(Level.INFO, "    machine's location: " + machine
            .getLocalTranslation());

      machine.rotate(rotation);
   }


   public static void main(String[] args) {
      JmeFormatter formatter = new JmeFormatter();

      Handler consoleHandler = new ConsoleHandler();
      consoleHandler.setFormatter(formatter);

      Logger.getLogger("").removeHandler(Logger.getLogger("").getHandlers()[0]);
      Logger.getLogger("").addHandler(consoleHandler);
      TestMyCamera app = new TestMyCamera(
              // machine limits given from outside
              new float[]{0, 900, 0, 1800, -500, 0});

      app.start();
   }
   private Node machine;
   private float zoomFactor = 1800f;
   private boolean rotate;
   private Quaternion rotation;
   private Vector3f camLoc;
   private final Vector3f size;
   private static final Logger l;
   private static final Vector3f camDir = new Vector3f(0, 1, 0);
   private static final String Unshaded = "Common/MatDefs/Misc/Unshaded.j3md";
   private static final String ZoomIN = "zoomIN";
   private static final String ZoomOUT = "zoomOUT";
   private static final String RotateTrigger = "trigRotation";
   private static final String KeyLeft = "kLeft";
   private static final String KeyRight = "kRight";
   private static final String KeyUp = "kUp";
   private static final String KeyDown = "kDown";


   static {
      l = Logger.getLogger("");
   }
}

You can’t imagine how long it took to get the monkey to fit the bottom face of the WireBox :frowning:

It’s that kind of inconsistency that drives me crazy.

but … well, everything is a challenge :wink:

Yeah, but understand that “3D orthogonal” is not really 3D in any sense that makes regular 3D hard. When everything isn’t divided by Z, then 3D is SUPER simple… but everything in your life is controlled by the viewport size.

You place those sides really far apart and your model will have to be scaled up to 1000 to be seen. You place those sides really close together and even 1 unit will be too big. (See previous diagram.)

…and given your comment about units, I still think that’s tripping you up. If the sides of your projection area are set up properly then the objects should be close to the same size… exactly the same size at a specific distance.

I mean, you can’t really just call that on its own and expect to get anything sensible. Probably your frustum sides end up being the screen/window dimensions… so yeah, everything would have to be huge. Your camera ‘sides’ would be thousands of units apart.

Chase camera is meant to follow a 3D game character running around.

You will find that the further you get outside of “game”, the harder it will be to make a “game engine” do the things you want without extra work… and the harder it will be to get sensible answers from “game developers”.

For example, for anything other than fixed angle isometric view style games, ortho views will make many players physically ill. So it doesn’t get a lot of play here… and for the games that it does, developers likely hand-tweaked their camera settings once and then never touched it again under fear of pain.

Well, things becoming smaller with rising distance isn’t the problem.
The real “problem” is that vertical lines become horizontal near the front. That’s extremely unrealistic.
I don’t know - maybe a fisheye lens would produce such artefacts.

For “my” users it’s hard to explain why a vertical tool moves horizontally from near to far …

Sure, but from the description I read that the camera can turn around the player. So, although the math is much more complicated, the visual effect is the same whether the camera turns around the player or the player rotates around its own axis (assuming there are no other objects in the scene).

I know that. Don’t think that I was happy about using jME. I have no affinity for games or gamers …
But - when you search for Java and 3D - there are a lot of dead projects. Maybe one or two can be considered serious. And I didn’t want to bother with graphics cards, different hardware, whatever.
So from my research jME is the only one that could do the job. I want to run my app on a PC, others would like to run it on a Pi …

I had to kick myself to start with jME. I didn’t want that job.
But - I’ll do it anyway, and you are a game developer who is able to understand a foreign mind and help him anyway.

So thank you!

I don’t understand at all what you mean here but I guess it doesn’t really matter.

I think he means that as it moves away it also moves toward the center - like a fisheye lens or perspective projection.

@jayfella
You’re right!
I’m not that good with words. I only know fisheye lenses from photographs that produce such bending of lines.

Here’s what I mean:


The line from the two small lines toward the corner of the L is vertical

Can you show me a different view where it’s vertical?

I can’t think of a way that line would be vertical (x, z the same at top and bottom) and show up horizontal unless you rotate the camera… and then it doesn’t matter if you are orthographic or perspective, that line isn’t going to look vertical anymore.

Maybe you mean something other than orthographic.

Currently not. I haven’t implemented the parallel projection yet.
But I can show you a picture from another application, that visualizes the same source code:
3D-Source
In jME-speech I swapped the Z and Y axes.
But the visual result should be the same.

I was able to work out the “restore scene”. As my model will never be moved, I only have to store the rotation of the model. My cam will never be rotated, so I only have to store the location of the camera, and last but not least I need to store the zoom factor …
3 values and everything is fine :slight_smile:

When you try out the source, you can see what I mean by vertical lines. On restoring the scene, I apply a rotation on 3 axes, but before and after, the vertical lines are vertical.

package jme3test.renderer;

import com.jme3.app.SimpleApplication;
import com.jme3.input.KeyInput;
import com.jme3.input.controls.ActionListener;
import com.jme3.input.controls.AnalogListener;
import com.jme3.input.controls.KeyTrigger;
import com.jme3.material.Material;
import com.jme3.math.ColorRGBA;
import com.jme3.math.FastMath;
import com.jme3.math.Quaternion;
import com.jme3.math.Vector3f;
import com.jme3.scene.Geometry;
import com.jme3.scene.Node;
import com.jme3.scene.debug.WireBox;
import com.jme3.scene.shape.Quad;
import com.jme3.util.JmeFormatter;
import java.util.logging.ConsoleHandler;
import java.util.logging.Handler;
import java.util.logging.Level;
import java.util.logging.Logger;


/**
* @author django
*/
public class TestMyCamera extends SimpleApplication implements AnalogListener, ActionListener {
  public TestMyCamera(float[] limits) {
     size = new Vector3f(limits[1] - limits[0], limits[5] - limits[4],
                         limits[3] - limits[2]);
     l.log(Level.INFO, "initial size calculation: " + size);
     rotation = new Quaternion();
  }


  @Override
  public void onAction(String name, boolean isPressed, float tpf) {
     if (RotateTrigger.equals(name)) {
        rotate = isPressed;
     }
     else if (AltRotateTrigger.equals(name)) {
        altRotate = isPressed;
     }
     else if (Restore.equals(name)) {
        restoreScene();
     }
  }


  @Override
  public void onAnalog(String name, float value, float tpf) {
     if (ZoomIN.equals(name)) {
        zoomFactor -= 250f * tpf;
        resizeView();
     }
     else if (ZoomOUT.equals(name)) {
        zoomFactor += 250f * tpf;
        resizeView();
     }
     if (rotate) {
        if (KeyLeft.equals(name)) {
           rotation.fromAngleAxis(0.1f * tpf, Vector3f.UNIT_Y);
           rotateMachine();
        }
        else if (KeyRight.equals(name)) {
           rotation.fromAngleAxis(-0.1f * tpf, Vector3f.UNIT_Y);
           rotateMachine();
        }
        else if (KeyUp.equals(name)) {
           rotation.fromAngleAxis(0.1f * tpf, Vector3f.UNIT_X);
           rotateMachine();
        }
        else if (KeyDown.equals(name)) {
           rotation.fromAngleAxis(-0.1f * tpf, Vector3f.UNIT_X);
           rotateMachine();
        }
     }
     else if (altRotate) {
        if (KeyUp.equals(name)) {
           rotation.fromAngleAxis(0.1f * tpf, Vector3f.UNIT_Z);
           rotateMachine();
        }
        else if (KeyDown.equals(name)) {
           rotation.fromAngleAxis(-0.1f * tpf, Vector3f.UNIT_Z);
           rotateMachine();
        }
     }
     else {
        if (KeyLeft.equals(name)) {
           camLoc.x += 150f * tpf;
           moveCamera();
        }
        else if (KeyRight.equals(name)) {
           camLoc.x -= 150f * tpf;
           moveCamera();
        }
        else if (KeyUp.equals(name)) {
           camLoc.z += 150f * tpf;
           moveCamera();
        }
        else if (KeyDown.equals(name)) {
           camLoc.z -= 150f * tpf;
           moveCamera();
        }
     }
  }


  @Override
  public void simpleInitApp() {
     flyCam.setEnabled(false);
     cam.setParallelProjection(true);
     camLoc = new Vector3f(size.x * 0.5f + 50f, size.y * 2f, size.z *
                           0.5f + 50f);

// INFORMATION RootLogger 09:06:12 camera  location: (-150.48007, 1000.0, -426.1831)
// INFORMATION RootLogger 09:06:12 camera direction: (-0.3407991, -0.6815982, -0.64751816)
// INFORMATION RootLogger 09:05:26 resizeView - aspect: 1.6 - zoomFactor: 1780.575
     cam.setLocation(camLoc);
     cam.lookAt(camLoc.negate(), camDir);
     camLoc.x = -150f;
     camLoc.z = -430f;
     moveCamera();

     registerInputs();
     resizeView();
     createMachine();
  }


  protected void createMachine() {
     Geometry box = new Geometry("Box", new WireBox(size.x, size.y, size.z));
     Geometry ground = new Geometry("Ground", new Quad(size.x * 2, size.z * 2));
     Material m = new Material(assetManager, Unshaded);

     l.log(Level.INFO, "workspace: " + new Vector3f(size.x, size.y, size.z));
     m.getAdditionalRenderState().setWireframe(true);
     m.setColor("Color", ColorRGBA.Green);
     box.setMaterial(m);
     box.setLocalTranslation(-(size.x * 0.5f), size.y, -(size.z * 0.5f));
     m = new Material(assetManager, Unshaded);
     m.setTexture("ColorMap", assetManager.loadTexture(
                  "Interface/Logo/Monkey.jpg"));
     ground.setMaterial(m);
     ground.setLocalRotation(new Quaternion().fromAngleAxis(-FastMath.HALF_PI,
                                                            Vector3f.UNIT_X));
     ground.setLocalTranslation(-1.5f * size.x, 0, size.z * 0.5f);

     machine = new Node("Machine");
     machine.attachChild(box);
     machine.attachChild(ground);

     rootNode.attachChild(machine);
  }


  protected void moveCamera() {
     l.log(Level.INFO, "camera location: " + camLoc);
     l.log(Level.INFO, "camera direction: " + cam.getDirection());

     cam.setLocation(camLoc);
  }


  protected void registerInputs() {
     inputManager.addMapping(Restore, new KeyTrigger(KeyInput.KEY_R));
     inputManager.addMapping(ZoomIN, new KeyTrigger(KeyInput.KEY_ADD));
     inputManager.addMapping(ZoomOUT, new KeyTrigger(KeyInput.KEY_SUBTRACT));
     inputManager.addMapping(KeyLeft, new KeyTrigger(KeyInput.KEY_LEFT));
     inputManager.addMapping(KeyRight, new KeyTrigger(KeyInput.KEY_RIGHT));
     inputManager.addMapping(KeyUp, new KeyTrigger(KeyInput.KEY_UP));
     inputManager.addMapping(KeyDown, new KeyTrigger(KeyInput.KEY_DOWN));
     inputManager
             .addMapping(RotateTrigger, new KeyTrigger(KeyInput.KEY_LSHIFT),
                         new KeyTrigger(KeyInput.KEY_RSHIFT));
     inputManager
             .addMapping(AltRotateTrigger,
                         new KeyTrigger(KeyInput.KEY_LCONTROL),
                         new KeyTrigger(KeyInput.KEY_RCONTROL));

     inputManager
             .addListener(this, Restore, ZoomIN, ZoomOUT, KeyLeft, KeyRight,
                          KeyUp, KeyDown, KeyForward, KeyBack, RotateTrigger,
                          AltRotateTrigger);
  }
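
// --- Hedged sketch, not part of the original post: onAction/onAnalog only
// --- ever report mapping names that were registered with a matching
// --- trigger, so mouse-movement deltas never arrive unless MouseAxisTrigger
// --- mappings are added as well. The MouseInput constants and trigger
// --- classes below are jME3's; the mapping strings are made up here.
//
//      inputManager.addMapping("mLeft",  new MouseAxisTrigger(MouseInput.AXIS_X, true));
//      inputManager.addMapping("mRight", new MouseAxisTrigger(MouseInput.AXIS_X, false));
//      inputManager.addMapping("mUp",    new MouseAxisTrigger(MouseInput.AXIS_Y, false));
//      inputManager.addMapping("mDown",  new MouseAxisTrigger(MouseInput.AXIS_Y, true));
//      inputManager.addMapping("wheelUp",
//                              new MouseAxisTrigger(MouseInput.AXIS_WHEEL, false));
//      inputManager.addMapping("wheelDown",
//                              new MouseAxisTrigger(MouseInput.AXIS_WHEEL, true));
//      inputManager.addMapping("mMiddle",
//                              new MouseButtonTrigger(MouseInput.BUTTON_MIDDLE));
//      inputManager.addListener(this, "mLeft", "mRight", "mUp", "mDown",
//                               "wheelUp", "wheelDown", "mMiddle");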


  protected void resizeView() {
     float aspect = (float) cam.getWidth() / (float) cam.getHeight();

     l.log(Level.INFO, "resizeView - aspect: " + aspect + " - zoomFactor: " +
           zoomFactor);
     // calculate Viewport
     cam.setFrustum(-2000, 6000, -aspect * zoomFactor, aspect * zoomFactor,
                    zoomFactor, -zoomFactor);
  }


// camera location: (570.54846, 1000.0, -435.03146)
// resizeView - aspect: 1.6 - zoomFactor: 1312.2611
// machine orientation is: (-0.052158512, -0.36279693, 0.05984255, 0.92846483)
  protected void restoreScene() {
     if (restored) {
        return;
     }
     restored = true;

     // try to set a saved rotation
     rotation.set(-0.023189535f, -0.36359754f, 0.027150702f, 0.93086207f);
     rotateMachine();

     camLoc.x = 570.54846f;
     camLoc.z = -435.03146f;
     moveCamera();

     zoomFactor = 1312f;
     resizeView();
  }


  protected void rotateMachine() {
     l.log(Level.INFO, "machine orientation is: " + machine.getLocalRotation());
     l.log(Level.INFO, ">>>  rotate machine by: " + rotation);
     l.log(Level.INFO, "    machine's location: " + machine
           .getLocalTranslation());

     machine.rotate(rotation);
  }


  public static void main(String[] args) {
     JmeFormatter formatter = new JmeFormatter();
     Handler consoleHandler = new ConsoleHandler();
     consoleHandler.setFormatter(formatter);

     Logger.getLogger("").removeHandler(Logger.getLogger("").getHandlers()[0]);
     Logger.getLogger("").addHandler(consoleHandler);
      TestMyCamera app = new TestMyCamera(
              // machine limits given from outside
              new float[]{0, 900, 0, 1800, -500, 0});

     app.start();
  }
  private Node machine;
  private float zoomFactor = 1800f;
  private boolean rotate;
  private boolean altRotate;
  private boolean restored;
  private Quaternion rotation;
  private Vector3f camLoc;
  private final Vector3f size;
  private static final Logger l;
  private static final Vector3f camDir = new Vector3f(0, 1, 0);
  private static final String Unshaded = "Common/MatDefs/Misc/Unshaded.j3md";
  private static final String ZoomIN = "zoomIN";
  private static final String ZoomOUT = "zoomOUT";
  private static final String Restore = "Restore";
  private static final String RotateTrigger = "trigRotation";
  private static final String AltRotateTrigger = "altTrigRotation";
  private static final String KeyLeft = "kLeft";
  private static final String KeyRight = "kRight";
  private static final String KeyUp = "kUp";
  private static final String KeyDown = "kDown";
  private static final String KeyForward = "kForward";
  private static final String KeyBack = "kBack";


  static {
     l = Logger.getLogger("Test");
  }
}
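The Blender-style modifier rules described above (middle-drag rotates, wheel plus Shift translates along Y, wheel plus Ctrl translates along X) can be sketched engine-agnostically. This is a minimal sketch with made-up names, not jME API; in jME the modifier flags would be tracked in onAction and the axis deltas would arrive in onAnalog once MouseAxisTrigger mappings are registered.

```java
// Hypothetical helper: maps one input event plus modifier state to the
// Blender 2.7 turntable action. All names here are invented for the sketch.
public class TurntableDispatch {

   public enum Action { ROTATE_Y, ROTATE_X, TRANSLATE_X, TRANSLATE_Y, NONE }

   /**
    * Blender 2.7 turntable rules: wheel+Shift translates along Y,
    * wheel+Ctrl translates along X, middle-drag rotates (horizontal
    * drag around the Y-axis, vertical drag around the X-axis).
    */
   public static Action dispatch(boolean middleDown, boolean shift,
                                 boolean ctrl, boolean wheelTurned,
                                 boolean horizontalDrag) {
      if (wheelTurned && shift) return Action.TRANSLATE_Y;
      if (wheelTurned && ctrl)  return Action.TRANSLATE_X;
      if (middleDown) {
         return horizontalDrag ? Action.ROTATE_Y : Action.ROTATE_X;
      }
      return Action.NONE;
   }

   public static void main(String[] args) {
      // wheel turned while Shift is held -> translate along Y
      System.out.println(dispatch(false, true, false, true, false));
      // wheel turned while Ctrl is held -> translate along X
      System.out.println(dispatch(false, false, true, true, false));
      // middle button held, horizontal drag -> rotate around Y
      System.out.println(dispatch(true, false, false, false, true));
   }
}
```

Inside onAnalog, such a helper would decide whether the incoming delta feeds the rotation quaternion or the translation vector.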

I’ll implement that in my application next …

Here’s the first attempt at parallel projection integrated into my app.
I’m very happy - it looks promising.


The currently running command is highlighted in red, the already processed path is brown, and white is the remaining path.

Now I can turn to fine-tuning, like creating a moving tool, saving properties, and the like.

Thanks for all the help.