I have started working on another port of a Unity3D tutorial to JMonkeyEngine3. The project is simply called Penelope.
This is a simple mobile game where you run about collecting objects within a time limit.
I am setting a personal target date of 07/26/2015, and I will be hosting the game project on GitHub. I’ll post a link to the project after I have the majority of the assets converted.
Taking what I learned from my first attempt at porting a Unity3D FPS Shooter Tutorial to JMonkeyEngine3, this project should be more achievable: there are fewer assets and the overall concept is simpler. Hopefully I will learn a few more things and eventually be able to finish the Unity3D FPS Tutorial.
In the Unity3D tutorial:
Models in FBX format
Scripts written in JavaScript
Textures in TIF and PSD format
Instructional tutorial in PDF format
Asset Conversion
Model conversion process: FBX → OBJ → Blender → J3O
Image conversion process: TIF/PSD → GIMP → PNG
So there is one thing I came across right off the bat during image conversion. I found images whose pixels contained little to no alpha value, while the red, green, and blue values were all there. The images look fine when turned into an Unshaded material, but has anyone seen anything like this? And if you have, what is the purpose of doing so?
The RGBA channels can be used to store any kind of information, so there might be specular/glossiness/roughness/ambient occlusion or something entirely different in the alpha channel.
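If you want to see for yourself what ended up in the alpha channel after conversion, a quick check with plain Java ImageIO is enough. This is just a sketch; the file path is a placeholder for one of the converted PNGs.

```java
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class AlphaCheck {
    public static void main(String[] args) throws Exception {
        // Hypothetical path; point this at one of the converted textures.
        BufferedImage img = ImageIO.read(new File("assets/Textures/example.png"));
        long opaque = 0, nonOpaque = 0;
        for (int y = 0; y < img.getHeight(); y++) {
            for (int x = 0; x < img.getWidth(); x++) {
                int alpha = (img.getRGB(x, y) >>> 24) & 0xFF; // top byte is alpha
                if (alpha == 255) opaque++; else nonOpaque++;
            }
        }
        System.out.println("opaque pixels: " + opaque + ", non-opaque: " + nonOpaque);
    }
}
```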
I lose all animation and any secondary UV maps. This means all models need to be reanimated, and baked shadow maps need to be recreated. But I am OK with this; I need practice animating anyway.
So, a little update to go with this: I found that the skybox is throwing an error on my Galaxy S4, so I have created a simple sphere model and mapped the textures to it. I set the QueueBucket to Sky and the CullHint to Never, but it still looks like it is just a large sphere encompassing the level. If you have an idea, let me know; otherwise I will post what I end up doing.
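For reference, this is roughly the setup I would expect for a hand-rolled sky sphere (a sketch, not the project’s actual code; the texture path and sizes are placeholders). The step that usually makes it behave like a sky instead of a big ball around the level is keeping the geometry centered on the camera every frame.

```java
// Inside a SimpleApplication subclass; "Textures/sky.png" is a placeholder path.
Sphere mesh = new Sphere(16, 32, 10f, false, true); // 'interior' flips the sphere inside out
Geometry sky = new Geometry("SkySphere", mesh);

Material mat = new Material(assetManager, "Common/MatDefs/Misc/Unshaded.j3md");
mat.setTexture("ColorMap", assetManager.loadTexture("Textures/sky.png"));
sky.setMaterial(mat);

sky.setQueueBucket(RenderQueue.Bucket.Sky); // sky queue: drawn behind everything else
sky.setCullHint(Spatial.CullHint.Never);    // never frustum-cull it
rootNode.attachChild(sky);

// In simpleUpdate(float tpf): keep the sphere glued to the camera so the
// player can never walk "out" of it.
sky.setLocalTranslation(cam.getLocation());
```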
I just want to say that Android development in JMonkeyEngine is the most amazing thing I have seen in a long time. I just create the on-screen controls using the TouchListener interface, and this is cake.
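For anyone curious, the basic wiring looks roughly like this (a minimal sketch; the mapping name and the hit-testing of on-screen buttons are left out):

```java
// Register a single touch mapping that receives all touch events.
inputManager.addMapping("Touch", new TouchTrigger(TouchInput.ALL));
inputManager.addListener(new TouchListener() {
    @Override
    public void onTouch(String name, TouchEvent event, float tpf) {
        switch (event.getType()) {
            case DOWN:
                // event.getX()/getY() are screen coordinates; check them
                // against the on-screen button rectangles here.
                break;
            case MOVE:
                // drag handling (e.g. virtual joystick or camera rotation)
                break;
            case UP:
                // release handling
                break;
            default:
                break;
        }
    }
}, "Touch");
```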
FYI: this is more general advice than something specific to your tutorial, since you might be trying to steer clear of external dependencies… but I thought I’d mention it anyway.
The Lemur GUI library also supports touch “out of the box” for its GUI elements. It even supports multitouch (i.e. you can drag multiple sliders at the same time, hit multiple buttons at the same time, etc.).
Edit: internally it’s using the JME listener support you mention; it just wraps it in a platform-agnostic way.
I have to share this: I just extended the ChaseCamera to work for my game. It works really nicely, unless you move both thumbs to the right side of the screen and then let go of the one that entered the screen second, but this might not be a bad thing.
Lemur GUI actually looks really cool, but for this project I think I can get away with only using Picture and BitmapText for my GUI. This project is very simple: the title is only one image, and the in-game HUD has a few lines of text and one button.
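For the record, a HUD built from just those two classes is only a handful of lines. This is a sketch with placeholder asset paths, text, and positions:

```java
// Title image drawn in the 2D GUI scene; "Interface/title.png" is a placeholder.
Picture title = new Picture("Title");
title.setImage(assetManager, "Interface/title.png", true); // true = use alpha
title.setWidth(settings.getWidth());
title.setHeight(settings.getHeight());
title.setPosition(0, 0);
guiNode.attachChild(title);

// A HUD text line using the default font.
BitmapFont font = assetManager.loadFont("Interface/Fonts/Default.fnt");
BitmapText timeLeft = new BitmapText(font);
timeLeft.setText("Time: 120");
timeLeft.setLocalTranslation(10, settings.getHeight() - 10, 0);
guiNode.attachChild(timeLeft);
```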
Makes sense, and for a project where you are trying to show off JME I understand not wanting to pull in any extra complexities.
Note for other types of projects: Lemur’s event/picking can work easily with any spatial, including BitmapText and Picture. One line of code adds a mouse listener to any JME Spatial, and then the picking is handled for you.
Lemur was made to build custom GUI implementations on top of its different modules… it just so happens that it also includes its own GUI implementation. But picking, input mapping, etc. are all independently useable parts.
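To illustrate the one-line listener (a sketch assuming GuiGlobals.initialize(app) has already been called to set up Lemur’s picking state; the click body is just an example):

```java
// Attach a mouse/touch listener to any existing JME Spatial; Lemur's
// picking takes care of the ray casting and event routing.
MouseEventControl.addListenersToSpatial(someSpatial, new DefaultMouseListener() {
    @Override
    protected void click(MouseButtonEvent event, Spatial target, Spatial capture) {
        System.out.println("Clicked: " + target.getName());
    }
});
```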
It depends on the situation. At least as often, I simply enable/disable app states and leave them attached.
…that’s why there is BaseAppState, which unifies all of the enable/disable/initialize/terminate handling so that you don’t have to worry about questions like “am I initialized and enabled, or just initialized and not yet enabled… but when I’m later enabled, what do I do?” etc.
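A minimal sketch of what that looks like in practice (assuming the BaseAppState that later shipped in JME 3.1 core; Lemur provides an equivalent class). The state content here is hypothetical:

```java
public class GamePlayState extends BaseAppState {

    private final Node gameRoot = new Node("GamePlay");

    @Override
    protected void initialize(Application app) {
        // One-time setup: load scenes, create controls, etc.
    }

    @Override
    protected void cleanup(Application app) {
        // One-time teardown, the mirror of initialize().
    }

    @Override
    protected void onEnable() {
        // Called every time the state is enabled (and once after initialize).
        ((SimpleApplication) getApplication()).getRootNode().attachChild(gameRoot);
    }

    @Override
    protected void onDisable() {
        // Called every time the state is disabled; undo onEnable().
        gameRoot.removeFromParent();
    }
}
```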
I have finished the main logic for the TitleState, but I still need to tweak the MotionPath tension and waypoints to get the CameraNode to move the way I want.
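For context, the camera fly-through is driven by JME’s cinematics classes. A rough sketch (the waypoint positions, duration, and tension are placeholder values, which is exactly the part I am still tuning):

```java
// Camera follows a spline through a few waypoints.
CameraNode camNode = new CameraNode("TitleCam", cam);
camNode.setControlDir(CameraControl.ControlDirection.SpatialToCamera);
rootNode.attachChild(camNode);

MotionPath path = new MotionPath();
path.addWayPoint(new Vector3f(0, 5, 20));
path.addWayPoint(new Vector3f(10, 5, 10));
path.addWayPoint(new Vector3f(0, 8, -10));
path.setCurveTension(0.5f); // 0 = straight segments, higher = rounder curve

MotionEvent motion = new MotionEvent(camNode, path, 10f); // 10 seconds along the path
motion.setDirectionType(MotionEvent.Direction.Path);      // face along the path
motion.play();
```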
I started working on a Cinematic IntroState. I am having trouble making the camera shake like an earthquake is happening.
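One approach I have seen for this kind of effect (not what the project currently does, just a sketch): add a small random offset to the camera every frame and let its strength decay as the quake dies down.

```java
// Fields in the IntroState; 'shakeTime' is set when the quake starts.
private float shakeTime;          // seconds of shake remaining
private Vector3f basePosition;    // camera position without shake

public void update(float tpf) {
    if (shakeTime > 0) {
        shakeTime -= tpf;
        float strength = 0.3f * shakeTime; // decays toward zero
        Vector3f jitter = new Vector3f(
                FastMath.nextRandomFloat() - 0.5f,
                FastMath.nextRandomFloat() - 0.5f,
                0).multLocal(strength);
        cam.setLocation(basePosition.add(jitter));
    } else {
        cam.setLocation(basePosition);
    }
}
```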
I have finished all the logic for the on-screen controls. I removed the touch logic from the ChaseCamera and am now using a RemoteControlState to handle all the controls. This did require exposing the rotate and zoom methods of the ChaseCamera.
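Exposing those methods is just a thin subclass, roughly like this (a sketch; rotateCamera/zoomCamera are protected in JME’s ChaseCamera, and the wrapper names here are my own):

```java
public class TouchChaseCamera extends ChaseCamera {

    public TouchChaseCamera(Camera cam, Spatial target, InputManager inputManager) {
        super(cam, target, inputManager);
    }

    // Public wrappers so an outside state (e.g. a RemoteControlState)
    // can drive the camera from touch input.
    public void rotateBy(float value) {
        rotateCamera(value);
    }

    public void zoomBy(float value) {
        zoomCamera(value);
    }
}
```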
What I have left to do is: create collision shapes, create the GamePlayState, create a control for the emeracite sprite, bake shadows, animate the main character, and create a smooth scrolling sewage texture.
I think I am going to do some testing and see which method works better for Android. I am just wondering if the memory used to keep a state alive will have an impact.
Wow, that’s cool. I painfully noticed that Nifty doesn’t, and that iOS is even harder to support. So I watched the events carefully, keeping an eye on Android and iOS at the same time… the best idea I had. Maybe Lemur would be easier. Does it run on iOS?
I don’t know if anyone has tried it. Presuming the touch events are delivered like they are on Android, I see no reason it wouldn’t work on iOS if all other things are working.