More iOS woes.

I’m struggling to get the touch-screen handler to work properly on iOS.

I’m getting a response from the listener, and I’ve managed (eventually) to interpret the co-ordinates returned so that they match input.injectTouchEnd from the harness.

The problem is that when I use a Ray to pick the spatials, the collision detection produces no results.

It’s almost like the co-ordinates returned don’t match the ones the spatials are using, so the Ray doesn’t intersect anything.
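For reference, this is the standard jME picking idiom I’m using (a sketch, assuming a SimpleApplication with cam and rootNode in scope, and x/y being the touch co-ordinates). If the Y co-ordinate is upside down, click3d lands at the vertically mirrored point and the ray sails past everything:

```
// Unproject the 2D touch point onto the near and far planes to build a pick ray.
Vector2f click2d = new Vector2f(x, y);
Vector3f click3d = cam.getWorldCoordinates(click2d, 0f).clone();
Vector3f dir = cam.getWorldCoordinates(click2d, 1f)
                  .subtractLocal(click3d).normalizeLocal();
Ray ray = new Ray(click3d, dir);

CollisionResults results = new CollisionResults();
rootNode.collideWith(ray, results);
if (results.size() > 0) {
    Geometry hit = results.getClosestCollision().getGeometry();
}
```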

Do I need to use an inverted X or Y axis (input.invertX() and input.invertY())?

Do I need to use the getJmeX() and getJmeY() functions in IOSInputHandler to convert to the weird and wonderful Apple version of the co-ordinates?

If so, which X and Y inputs do I use? Is it event.getX() and event.getY(), or the inverted X and Y, or some combination of all of the above?

I noticed that in one example on here related to touch co-ordinates, the Y co-ordinate was derived by subtracting Y from the screen height. What’s that all about? Inverting Y?
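That subtraction is exactly a Y-axis flip: most touch APIs (including Apple’s) put the origin at the top-left with Y growing downward, while jME’s screen space puts Y = 0 at the bottom. A tiny plain-Java sketch of the conversion (the screen height of 480 is just an example value):

```java
// Convert a top-left-origin touch Y into a bottom-left-origin ("jME-style") Y.
public final class TouchCoords {

    /** Flip Y: 0 at the top becomes screenHeight at the bottom-origin system. */
    public static float toJmeY(float touchY, float screenHeight) {
        return screenHeight - touchY;
    }

    public static void main(String[] args) {
        float h = 480f;
        System.out.println(toJmeY(0f, h));    // top of screen -> 480.0
        System.out.println(toJmeY(480f, h));  // bottom of screen -> 0.0
    }
}
```

Note the flip is its own inverse, so applying it twice gets you back to the original co-ordinate — one reason it’s easy to accidentally double-flip.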

Too many questions… Sorry.


By the way, as you’ll see in many of my other posts, it worked perfectly first time on Windows, Linux and Android.

Indeed many questions. :smile:

You implemented the TouchListener interface? I guess that’s what you’re talking about. Good, but why do you need to match the harness input?

I used TouchEvent.getX()/getY() and that worked for me, but I just wanted to match some nifty buttons.
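In case it helps, here’s roughly what my setup looks like (a sketch; the mapping name "Touch" is just my choice):

```
// Register a TouchListener for all touch events under an arbitrary mapping name.
inputManager.addMapping("Touch", new TouchTrigger(TouchInput.ALL));
inputManager.addListener(new TouchListener() {
    @Override
    public void onTouch(String name, TouchEvent evt, float tpf) {
        if (evt.getType() == TouchEvent.Type.UP) {
            float x = evt.getX();  // screen X
            float y = evt.getY();  // screen Y -- this is the value that may
                                   // need flipping on iOS before picking
        }
    }
}, "Touch");
```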

If you don’t have success with TouchEvents, what about the simulated mouse events? Maybe that helps you find out which co-ordinates to expect. I had to use TouchEvents because I wanted to support multi-touch.