I have a separate thread for AI which calculates the next action for the AI to take and then waits for a couple of seconds.
It’s quite CPU heavy: it takes about 500 ms to calculate one action on my 7-year-old 2.1 GHz Intel Core 2 Duo laptop,
whereas it takes MINUTES on a quad-core 1.5 GHz Android Nexus 7 tablet. (I also tried my code on two more phones with different Android versions and got similar results.)
I added a counter which counts how many times my AI thread changes state (java.lang.Thread.State); a rough sketch of the counter follows the results:
On my laptop the counter roughly corresponds to completing one iteration and waiting.
On the Android tablet the state switches between RUNNABLE and TIMED_WAITING/WAITING THOUSANDS of times before it completes one iteration.
(Changing the thread priority had no effect, and there is no memory shortage.)
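Roughly, the counter works like this (a minimal sketch, not my exact code; aiThread is a placeholder for the actual AI thread, and the polling interval is approximate):

```java
// Samples the AI thread's state from a cheap monitor thread and counts
// how many times the state changes before the thread finishes.
static void watchStateChanges(final Thread aiThread) {
    Thread monitor = new Thread(new Runnable() {
        @Override
        public void run() {
            Thread.State last = aiThread.getState();
            long changes = 0;
            while (aiThread.isAlive()) {
                Thread.State current = aiThread.getState();
                if (current != last) {
                    changes++;
                    last = current;
                }
                try {
                    Thread.sleep(1); // coarse polling is enough here
                } catch (InterruptedException e) {
                    break;
                }
            }
            System.out.println("AI thread state changes: " + changes);
        }
    });
    monitor.setDaemon(true);
    monitor.start();
}
```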
So my question is: has anyone experienced similar issues, and how could they be solved?
I am considering making the AI more primitive but less CPU-intensive if no solution turns up.
Well… phone CPUs are slow compared to desktop CPUs. Comparing clock frequencies makes no sense because they have different architectures.
You should definitely use a more primitive AI, or, if the results are predictable, you could consider precomputing lookup tables and just looking the values up at runtime.
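Something along these lines, assuming your results can be indexed by a discrete state (all the names here are made up, just to show the idea):

```java
// Hypothetical sketch: pay the expensive evaluation once at load time,
// so the AI thread only does a cheap array lookup per decision.
public class ActionScoreTable {
    private final float[] scores;

    public ActionScoreTable(int stateCount) {
        scores = new float[stateCount];
        for (int state = 0; state < stateCount; state++) {
            scores[state] = expensiveEvaluation(state); // done once, up front
        }
    }

    public float scoreFor(int stateIndex) {
        return scores[stateIndex]; // O(1) at runtime
    }

    private float expensiveEvaluation(int state) {
        return 0f; // placeholder for whatever the AI currently computes
    }
}
```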
But milliseconds compared to minutes? Maybe my thread is constantly being put into a waiting state to prevent overheating, since tablets and phones have no active cooling?
Well, idk. I don’t know what your code does; some operations are a lot more expensive than others on Android. Remember you’re not executing your code on the same VM on desktop and on Android, so there could be millions of reasons…
Also, the system can choose to kill a process when it’s in the background, so I guess it also has some way to limit CPU usage. On that matter, maybe a search on Google would be more fruitful than a question on this forum, though.
I have already spent a weekend on this issue and didn’t manage to find anything relevant. I’m asking here hoping that someone has already made a game of significant complexity for Android using jME.
I’m concerned because I’m still using placeholder assets (no textures, no sounds, primitive models) and already experiencing performance issues.
As nehon mentioned, processor speed isn’t really relevant. Mobile CPUs also have a reduced instruction set, which further reduces their capability. I’ll get it out right now: writing a game for mobile devices is hard. Really hard, unless of course it’s a very simple game like Flappy Bird. Absolutely everything you do will impact performance. There is very little room for lazy code. But… if you dig in and keep going, it can be very rewarding. It’s somewhat interesting to see how far you can push yourself to gain a few milliseconds.
The only solution here is to “roll your own” - making a decent game will never be a case of drag n drop, and games that work really smoothly will almost always be of custom design.
Having said that, here are two interesting articles to read. They should get you started.
The Dalvik VM used on Android is more primitive in terms of CPU optimizations and garbage collection. Based on what you described, it is half a second vs. 180 seconds (3 minutes), roughly a 360x slowdown, which is not unusual when comparing a slow VM running on a limited mobile device against a fast VM running on a fast PC. With that said, I would use the Android built-in profiling tools to see which method(s) are causing the slowdown; it might be something obvious you’ve overlooked that the desktop VM happened to optimize efficiently.
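For example, something like this around the suspect code (just a sketch; the trace name and the Runnable are placeholders) writes a .trace file you can pull off the device and open in the Android tooling:

```java
import android.os.Debug;

public final class AiTraceHelper {
    // Wraps one planning step in a method trace. The resulting
    // ai_planning.trace file can then be inspected with the Android
    // profiling tools to see where the time actually goes.
    public static void traceOneIteration(Runnable oneAiIteration) {
        Debug.startMethodTracing("ai_planning");
        try {
            oneAiIteration.run(); // placeholder for the actual AI step
        } finally {
            Debug.stopMethodTracing();
        }
    }
}
```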
So I managed to identify the problem.
Since the planner navigates between game world states, I only stored the changes between world states in the graph nodes. Upon visiting a node I would CLONE the original world state, apply the changes stored on the node, and then evaluate the goal conditions.
All that cloning allocated a new world state for every node visit, so the AI thread was being constantly interrupted by the garbage collector; hence the thousands of thread state changes between running and waiting that I noticed earlier.
To avoid creating new objects through cloning, I now keep TWO world states. When I need a fresh copy of the world state, I iterate through every field of every object in the original world state and assign its value to the corresponding field in the second world state.
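In code the idea is roughly this (a simplified sketch; WorldState and its fields are stand-ins for my actual classes):

```java
// Simplified sketch of the reuse pattern: the scratch state is allocated
// once and overwritten on every node visit instead of cloning the original.
public class WorldState {
    int[] unitHealth;
    float[] unitX, unitY;

    WorldState(int units) {
        unitHealth = new int[units];
        unitX = new float[units];
        unitY = new float[units];
    }

    // Copy every field of 'source' into this instance; no new objects are
    // created, so there is nothing for the garbage collector to reclaim.
    void copyFrom(WorldState source) {
        System.arraycopy(source.unitHealth, 0, unitHealth, 0, unitHealth.length);
        System.arraycopy(source.unitX, 0, unitX, 0, unitX.length);
        System.arraycopy(source.unitY, 0, unitY, 0, unitY.length);
    }
}
```

During the search it becomes scratch.copyFrom(original), apply the node’s changes to scratch, evaluate the goals: no clone() and no per-node allocations.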
Performance on the Android tablet is now almost the same as on the laptop (0-100 ms difference), so the issue was the Android VM being shit, not ARM processors being slow.
Specifically, it’s Android’s garbage collector that sucks. Many other things perform worse on Android too, but that (in my opinion) seems to be the biggest one.