Android Input/Sensor system

I’ve already posted this topic on the Android forum; I’ll post it again here:



I’ve worked on the JME/Android package and have a few suggestions. I think that trying to map Android MotionEvents to mouse/keyboard events is not a good idea at all… From my perspective, we should forget mouse/keyboard in JME on Android, for the following reasons:


  • Android devices are exclusively touch devices (or trackball for some, but the events are the same)
  • The keyboard should only be used for typing text, not to drive a game/application.



    I built a completely new AndroidInput class with high-level events:



    – TAP: User taps the screen (single touch)

    – DOUBLETAP: User double-taps the screen (single touch)

    – LONGPRESS: User touches down without releasing and holds the touch for more than 1 s

    – DRAG: User drags a finger across the screen (single touch)

    – SCALE: User pinches (or spreads) two fingers on the screen (double touch)

    – FLING: User drags the screen rapidly, as if throwing it away (single touch)



    and low-level events:



    – GRAB: User touches down on the screen (single/double touch)

    – RELEASE: User lifts off the screen (single/double touch)
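
    Roughly, the event set looks like this (a sketch only; see the zip for the real code):

```java
// Sketch of the event set described above (see the zip for the real code).
public enum TouchEventType {
    // High-level events
    TAP,        // single short touch
    DOUBLETAP,  // two taps in quick succession
    LONGPRESS,  // touch held for more than 1 s
    DRAG,       // single finger dragged across the screen
    SCALE,      // two-finger pinch or spread
    FLING,      // rapid drag, like throwing the screen away
    // Low-level events
    GRAB,       // finger(s) touch down
    RELEASE     // finger(s) lift off
}
```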



    Every event is synchronized with the application thread using a classical ArrayBlockingQueue + pool mechanism, to avoid deadlocks and lost events.
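
    A minimal sketch of that queue + pool idea (class and method names are illustrative, not the actual AndroidInput code):

```java
import java.util.concurrent.ArrayBlockingQueue;

// Minimal sketch of the ArrayBlockingQueue + pool mechanism; names are
// illustrative, not the actual AndroidInput code.
public class TouchEventQueue {

    public static class TouchEvent {
        public TouchEventType type;
        public float x, y;
        void set(TouchEventType type, float x, float y) {
            this.type = type;
            this.x = x;
            this.y = y;
        }
    }

    public interface Handler {
        void handle(TouchEvent e);
    }

    private static final int CAPACITY = 64;

    // Events waiting for the application thread.
    private final ArrayBlockingQueue<TouchEvent> pending =
            new ArrayBlockingQueue<TouchEvent>(CAPACITY);
    // Recycled event objects, so no allocation happens per event.
    private final ArrayBlockingQueue<TouchEvent> pool =
            new ArrayBlockingQueue<TouchEvent>(CAPACITY);

    public TouchEventQueue() {
        for (int i = 0; i < CAPACITY; i++) {
            pool.offer(new TouchEvent());
        }
    }

    // Android UI thread: enqueue a detected gesture without ever blocking.
    public void post(TouchEventType type, float x, float y) {
        TouchEvent e = pool.poll();
        if (e == null) {
            return;                  // pool exhausted: drop rather than deadlock
        }
        e.set(type, x, y);
        if (!pending.offer(e)) {
            pool.offer(e);           // queue full: recycle and drop
        }
    }

    // Application thread: consume everything queued since the last frame.
    public void drain(Handler handler) {
        TouchEvent e;
        while ((e = pending.poll()) != null) {
            handler.handle(e);
            pool.offer(e);           // return the event to the pool
        }
    }
}
```

    Because both sides only ever `poll`/`offer` (never block), the UI thread can never stall the render thread, and the pool keeps the garbage collector quiet.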

    Once an event is detected and synchronized:
  1. A pick (done only once, at event start) finds the scene object at the event location.
  2. The object is asked whether it has an action defined for this type of event.

    => If so, the corresponding action is called.

    => If not, we ask the application whether it has an action defined, and call that instead (see the sketch below).
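
    In code, the dispatch could look roughly like this (again a sketch building on the types above; the real pick would be a ray cast into the scene graph):

```java
// Sketch of the hierarchical Event / Action dispatch; the types are
// illustrative, not the demo's actual classes.
interface Action {
    void perform(Object target, TouchEventQueue.TouchEvent e);
}

interface ActionSource {
    Action getAction(TouchEventType type); // null if nothing is bound
}

class TouchDispatcher {
    private final ActionSource application;

    TouchDispatcher(ActionSource application) {
        this.application = application;
    }

    // Would do a ray pick into the scene graph; stubbed here.
    ActionSource pick(float x, float y) {
        return null;
    }

    void dispatch(TouchEventQueue.TouchEvent e) {
        // 1. One pick, at event start, to find the touched scene object.
        ActionSource target = pick(e.x, e.y);
        // 2. The object's own binding wins if it has one...
        Action action = (target != null) ? target.getAction(e.type) : null;
        // 3. ...otherwise fall back to the application-level binding.
        if (action == null) {
            action = application.getAction(e.type);
        }
        if (action != null) {
            action.perform(target, e);
        }
    }
}
```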



    This hierarchical process lets the programmer decide whether the application is ‘event driven’ or ‘object driven’.



    I have used this mechanism since JME2 on multitouch screens/walls; it is very easy to program and use, and very responsive.

    In fact I ‘downgraded’ it from multitouch to bi-touch by reducing the number of event types, and ported it to Android.



    I’m working on a SensorInput class (almost finished).



    The enclosed video shows a demo on a Samsung Galaxy Tab.



    . Left box has a ‘reversed scale’ animation attached to the TAP action, a ‘move’ action on DRAG and a ‘scale’ action on SCALE.

    . Right box has a ‘rotation’ animation attached to TAP.

    . Middle box has no action attached, so its events pass through to the application.

    . Middle box is also attached to the gyroscope sensor, so it keeps its 3D orientation vertical and points north…

    . Application has a ‘pan’ action (a real pan, i.e. movement in the XY plane) attached to DRAG, a ‘zoom camera’ action attached to SCALE and a ‘fit view’ action attached to TAP.
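
    In pseudo-code, that wiring amounts to something like this (`setAction` and the variable names are illustrative assumptions; the real API is in the zip):

```java
// Illustrative wiring only; method and variable names are assumptions,
// see the zip for the real demo code.
leftBox.setAction(TouchEventType.TAP, reversedScaleAnimation);
leftBox.setAction(TouchEventType.DRAG, moveAction);
leftBox.setAction(TouchEventType.SCALE, scaleAction);

rightBox.setAction(TouchEventType.TAP, rotationAnimation);

// Middle box has no bindings, so its events fall through to the application.
app.setAction(TouchEventType.DRAG, panAction);        // pan in the XY plane
app.setAction(TouchEventType.SCALE, zoomCameraAction);
app.setAction(TouchEventType.TAP, fitViewAction);
```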



    I released the source code of this demo, plus a PDF document explaining the technical concept, in a zip here: https://docs.google.com/leaf?id=0Bwx_qcgLXtQtMjFmYmM5ZTctNTAyOS00ODhlLWFiMjUtNDlkYzFhY2RmYTA4&hl=fr

    It’s rough source code, with no Javadoc and no comments, but the enclosed tech doc should help a lot (I think).

    Any questions? Ask me!



    Hope this helps.



    http://vimeo.com/19178070


WOWWWWW!!! Thanks MAN!!! RESPECT!

Thanks mifth!

Great stuff!! IMHO a gesture-specialized input handler is urgently needed for the Android port of jme3.

I would really like to see that in the repository.

+1

:wink: Thanks larynx and mifth for your support!

I have worked on a similar synchronized system dedicated to sensors (gyro, GPS, orientation…), relying on the same Event/Action mechanism as the input handler.
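
Roughly, it wraps Android’s SensorEventListener and hands the values to the application thread, along these lines (a sketch, not the final class):

```java
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

// Sketch of a SensorInput feeding the application thread; this is a
// simplified illustration, not the final class.
public class SensorInput implements SensorEventListener {

    private final SensorManager sensorManager;
    // Latest gyroscope values, shared with the application thread.
    private final float[] latestRotation = new float[3];

    public SensorInput(SensorManager sensorManager) {
        this.sensorManager = sensorManager;
    }

    public void start() {
        Sensor gyro = sensorManager.getDefaultSensor(Sensor.TYPE_GYROSCOPE);
        if (gyro != null) {
            sensorManager.registerListener(this, gyro,
                    SensorManager.SENSOR_DELAY_GAME);
        }
    }

    public void stop() {
        sensorManager.unregisterListener(this);
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        // UI thread: copy the values out under a lock.
        synchronized (latestRotation) {
            System.arraycopy(event.values, 0, latestRotation, 0, 3);
        }
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) {
        // Not needed for this sketch.
    }

    // Application thread: read the most recent gyroscope values.
    public void getRotation(float[] out) {
        synchronized (latestRotation) {
            System.arraycopy(latestRotation, 0, out, 0, 3);
        }
    }
}
```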

I’m away all this week, but I can release it next week if someone is interested.