More iOS woes.
I’m struggling to get the touch-screen handler to work properly on iOS.
I’m getting a response from the listener, and I’ve managed (eventually) to interpret the co-ordinates returned to match the input.injectTouchEnd call from the harness.
The problem is that when I use the Ray method of picking the spatials, the collision detection doesn’t produce any results.
It’s almost as if the co-ordinates returned don’t match the ones the spatials are using, so the Ray never intersects them.
Do I need to use an inverted X or Y axis (input.invertX() and input.invertY())?
Do I need to use the getJmeX() and getJmeY() functions in IOSInputHandler to convert to the weird and wonderful Apple version of the co-ordinates?
If so, which X and Y inputs do I use? Is it event.getX() and event.getY(), or invertedX and invertedY, or some combination of all of the above?
I noticed that in one example on here related to touch co-ordinates, the Y co-ordinate was derived by subtracting Y from the screen height. What’s that all about? Inverting Y?
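In case it helps anyone answer: my current guess at what that subtraction is doing, written up as a standalone sketch (the helper names below are my own invention, not the real IOSInputHandler API), is that UIKit puts the origin at the top-left of the screen with Y growing downward, while jME’s screen space puts the origin at the bottom-left with Y growing upward, so the flip would be jmeY = screenHeight - iosY:

```java
// Sketch of the suspected Y flip: UIKit's origin is top-left (Y down),
// jME's screen space is bottom-left (Y up). Helper names are made up
// for illustration; they are not the actual IOSInputHandler methods.
public class TouchCoords {

    /** X should be unchanged between the two coordinate systems. */
    static float toJmeX(float iosX) {
        return iosX;
    }

    /** Flip Y by subtracting the iOS Y from the screen height. */
    static float toJmeY(float iosY, float screenHeight) {
        return screenHeight - iosY;
    }

    public static void main(String[] args) {
        float screenHeight = 480f;
        // A touch 20px from the top of an iOS screen should be 460px
        // from the bottom in jME's screen space.
        System.out.println(toJmeY(20f, screenHeight)); // 460.0
    }
}
```

If that’s right, a pick Ray built from the raw iOS Y would be aimed at the mirror image of the touch point, which would explain why it never hits anything.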
Too many questions… Sorry.
By the way, as you’ll see in many of my other posts, it worked perfectly first time on Windows, Linux & Android.