I’ve already posted this topic on the Android forum; I’ll post it again here:
I’ve been working on the JME/Android package and have a few suggestions. I think that trying to map Android MotionEvents to mouse/keyboard events is not a good idea at all … From my perspective, we should forget about mouse/keyboard in JME on Android, for the following reasons:
- Android devices are exclusively touch devices (some also have a trackball, but it generates the same events).
- The keyboard should only be used for typing text, not for driving a game/application.
I built a completely new AndroidInput Class with high level events:
– TAP: User taps the screen (single touch)
– DOUBLETAP: User double-taps the screen (single touch)
– LONGPRESS: User touches down without releasing and holds the touch for more than 1 s
– DRAG: User drags a finger across the screen (single touch)
– SCALE: User pinches (or spreads) two fingers on the screen (double touch)
– FLING: User drags rapidly across the screen, as if throwing something away (single touch)
and low level events:
– GRAB: User touches down on the screen (single/double touch)
– RELEASE: User lifts the finger off the screen (single/double touch)
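As a rough illustration of how the single-touch high-level events can be told apart, here is a simplified classifier based on a stroke’s duration and travel distance. This is a sketch only, not the actual AndroidInput code; the thresholds and names are illustrative (only the 1 s long-press delay comes from the description above):

```java
// Simplified sketch: classify a completed single-touch stroke into one of
// the high-level events. Thresholds are illustrative assumptions.
public class GestureClassifier {
    static final long LONGPRESS_MS = 1000;   // "more than 1s" from the post
    static final float DRAG_SLOP = 10f;      // assumed movement threshold (px)
    static final float FLING_SPEED = 1.0f;   // assumed speed threshold (px/ms)

    enum Gesture { TAP, LONGPRESS, DRAG, FLING }

    // durationMs: time between GRAB and RELEASE; distancePx: total travel.
    static Gesture classify(long durationMs, float distancePx) {
        if (distancePx < DRAG_SLOP) {
            // Finger barely moved: it is a tap or a long press.
            return durationMs > LONGPRESS_MS ? Gesture.LONGPRESS : Gesture.TAP;
        }
        // Finger moved: a fast stroke is a fling, a slow one a drag.
        float speed = distancePx / Math.max(durationMs, 1);
        return speed > FLING_SPEED ? Gesture.FLING : Gesture.DRAG;
    }
}
```

(DOUBLETAP and SCALE need more state, respectively a second tap within a delay and a second pointer, so they are left out of this sketch.)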
Every event is synchronized with the application thread using a classic ArrayBlockingQueue + pool mechanism, to avoid deadlocks and lost events.
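The queue + pool idea looks roughly like this (a simplified sketch, not the actual class; field and method names are illustrative): the UI thread takes a pre-allocated event from the pool, fills it in, and queues it; the application thread drains the queue each frame and returns the events to the pool, so no allocation happens per event and neither thread ever blocks:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.function.Consumer;

public class EventQueueSketch {
    // Hypothetical event holder; fields are illustrative only.
    static class TouchEvent {
        int type; float x, y;
        void set(int type, float x, float y) { this.type = type; this.x = x; this.y = y; }
    }

    private final ArrayBlockingQueue<TouchEvent> pool;
    private final ArrayBlockingQueue<TouchEvent> queue;

    EventQueueSketch(int capacity) {
        pool = new ArrayBlockingQueue<>(capacity);
        queue = new ArrayBlockingQueue<>(capacity);
        for (int i = 0; i < capacity; i++) pool.add(new TouchEvent());
    }

    // Called on the UI thread; never blocks, drops the event if the pool is empty.
    boolean push(int type, float x, float y) {
        TouchEvent e = pool.poll();
        if (e == null) return false;   // no free slot: drop rather than deadlock
        e.set(type, x, y);
        return queue.offer(e);         // queue capacity matches the pool, so this fits
    }

    // Called on the application thread each frame; returns events to the pool.
    int drain(Consumer<TouchEvent> handler) {
        int n = 0;
        TouchEvent e;
        while ((e = queue.poll()) != null) {
            handler.accept(e);
            pool.offer(e);
            n++;
        }
        return n;
    }
}
```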
Once an event is detected and synchronized:
- A pick (only one, at event start) is done to find the scene object at the event location.
- The object is asked whether it has an action defined for this type of event.
=> If so, the corresponding action is called.
=> If not, the application is asked whether it has an action defined, and that one is called instead.
This hierarchical process lets the programmer decide whether his application is ‘event driven’ or ‘object driven’.
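The object-first, application-second dispatch described above can be sketched like this (names such as ActionMap and dispatch are my shorthand, not the real API):

```java
import java.util.HashMap;
import java.util.Map;

public class DispatchSketch {
    enum EventType { TAP, DOUBLETAP, LONGPRESS, DRAG, SCALE, FLING, GRAB, RELEASE }

    interface Action { void perform(); }

    // Both scene objects and the application hold a map of event type -> action.
    static class ActionMap {
        private final Map<EventType, Action> actions = new HashMap<>();
        void bind(EventType t, Action a) { actions.put(t, a); }
        // Returns true if an action was defined (and was run) for this event type.
        boolean handle(EventType t) {
            Action a = actions.get(t);
            if (a == null) return false;
            a.perform();
            return true;
        }
    }

    // Try the picked scene object first; fall back to the application's actions.
    static boolean dispatch(EventType t, ActionMap pickedObject, ActionMap application) {
        if (pickedObject != null && pickedObject.handle(t)) return true;
        return application.handle(t);
    }
}
```

With this shape, the demo below is just a matter of binding actions: the left box binds TAP/DRAG/SCALE, the middle box binds nothing, so its events fall through to the application.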
I have been using this mechanism since JME2 on multitouch screens/walls; it is very easy to program and use, and very responsive.
In fact, I ‘downgraded’ it from multitouch to bi-touch by reducing the number of event types, and ported it to Android.
I’m working on a SensorInput class (almost finished).
The enclosed video shows a demo on a Samsung Galaxy Tab.
. The left box has a ‘reversed scale’ animation attached to the TAP action, a ‘move’ action on DRAG, and a ‘scale’ action on SCALE.
. The right box has a ‘rotation’ animation attached to TAP.
. The middle box has no action attached, so its events pass through to the application.
. The middle box is also attached to the gyroscope sensor, so it keeps its 3D orientation vertical and points north…
. The application has a ‘pan’ action (a real pan, i.e. a move in the XY plane) attached to DRAG, a ‘zoom camera’ action attached to SCALE, and a ‘fit view’ action attached to TAP.
I released the source code of this demo and a PDF document explaining the technical concept in a zip here: https://docs.google.com/leaf?id=0Bwx_qcgLXtQtMjFmYmM5ZTctNTAyOS00ODhlLWFiMjUtNDlkYzFhY2RmYTA4&hl=fr:
It’s rough source code, with no Javadoc and no comments, but the enclosed tech doc will help greatly (I think).
Any questions? Ask me!
Hope this helps.