Penelope Android Game

I have started working on another port of a Unity3D tutorial to JMonkeyEngine3. The project is simply called Penelope.
This is a simple mobile game where you run about collecting objects within a time limit.

I am setting a personal target date of 07/26/2015, and I will be hosting the game project on GitHub. I’ll post a link to the project after I have the majority of the assets converted.

Taking what I learned from my first attempt at porting a Unity3D FPS Shooter Tutorial to JMonkeyEngine3, this project might be more achievable: there are fewer assets and the overall concept is simpler. Hopefully I will learn some more things and be able to finish the Unity3D FPS Tutorial.

In the Unity3D tutorial:

  1. Models in FBX format
  2. Scripts written in JavaScript
  3. Textures in TIF and PSD format
  4. Instructional tutorial in PDF format

Asset Conversion
Model conversion process: FBX → OBJ → Blender → J3O
Image conversion process: TIF/PSD → GIMP → PNG
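
For the final J3O step, here is a minimal sketch of the kind of conversion code involved (the paths are placeholders, and this assumes it runs inside a SimpleApplication where assetManager is available; the jME SDK can also do this import for you):

// Sketch only: load the Blender/OBJ export and save it as .j3o with jME's BinaryExporter.
Spatial model = assetManager.loadModel("Models/Penelope/penelope.obj");
try {
    BinaryExporter.getInstance().save(model, new File("assets/Models/Penelope/penelope.j3o"));
} catch (IOException e) {
    e.printStackTrace();
}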

Here is the game running on Unity3D


So there is one thing I came across right off the bat in image conversion. I found images that contained little to no alpha value in their pixels, while the red, green, and blue values were there. The images look fine when turned into an Unshaded material, but has anyone seen anything like this? And if you have, what is the purpose of doing so?

The RGBA channels can be used to store any kind of information, so there might be specular/glossiness/roughness/ambient occlusion or something entirely different in the alpha channel.


That’s good to know. I just killed all the alpha on those images.

import java.awt.Graphics;
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class FixStupidImages {
    public static void main(String[] args) {
        File directory = new File("C:\\Users\\U292337\\Desktop\\fix_stupid_images");
        for(File file : directory.listFiles()) {
            if(file.getName().toLowerCase().endsWith("png")) {
                try {
                    BufferedImage tempBufferedImage = ImageIO.read(file);
                    BufferedImage bufferedImage = new BufferedImage(tempBufferedImage.getWidth(), tempBufferedImage.getHeight(), BufferedImage.TYPE_INT_ARGB);

                    Graphics graphics = bufferedImage.getGraphics();
                    graphics.drawImage(tempBufferedImage, 0, 0, null);
                    graphics.dispose();
                    // Walk every pixel and force the alpha channel to fully opaque.
                    for(int y = 0; y < bufferedImage.getHeight(); y++) {
                        for(int x = 0; x < bufferedImage.getWidth(); x++) {
                            int argb = bufferedImage.getRGB(x, y);
                            int a = 255; // force full opacity; the original value was (argb >> 24) & 0xFF
                            int r = (argb >> 16) & 0xFF;
                            int g = (argb >> 8) & 0xFF;
                            int b = (argb >> 0) & 0xFF;

                            argb = ((a & 0xFF) << 24) | ((r & 0xFF) << 16) | ((g & 0xFF) << 8) | ((b & 0xFF) << 0);
                            bufferedImage.setRGB(x, y, argb);
                        }
                    }

                    ImageIO.write(bufferedImage, "png", file);
                } catch(Exception e) {
                    e.printStackTrace();
                }
            }
        }
    }
}

IMHO you could also look up what the alpha channel is used for in the original tutorial :wink:

I looked in the PDF and didn’t see anything, so I am going to split out the textures and see what they look like.

// Copy the source pixel back into the color image with the alpha forced to fully opaque
int argb = colorImage.getRGB(x, y);
int a = (argb >> 24) & 0xFF;
int r = (argb >> 16) & 0xFF;
int g = (argb >> 8) & 0xFF;
int b = (argb >> 0) & 0xFF;

argb = ((255 & 0xFF) << 24) | ((r & 0xFF) << 16) | ((g & 0xFF) << 8) | ((b & 0xFF) << 0);
colorImage.setRGB(x, y, argb);

// Write the alpha channel out as a grayscale image so it can be inspected on its own
argb = ((255 & 0xFF) << 24) | ((a & 0xFF) << 16) | ((a & 0xFF) << 8) | ((a & 0xFF) << 0);
alphaImage.setRGB(x, y, argb);

I have never used Unity, but I would look at the shaders to see what they use the alpha channel for.

It looks like the images contain a specular map. I think I am going to turn black to alpha and bake it onto the texture.
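
Roughly what I mean by "black to alpha", as a sketch along the lines of the image-fixing code above (this assumes image is a TYPE_INT_ARGB BufferedImage, and using brightness as the alpha is just one way to do it):

// Sketch: use pixel brightness as the new alpha so black becomes fully transparent.
for(int y = 0; y < image.getHeight(); y++) {
    for(int x = 0; x < image.getWidth(); x++) {
        int argb = image.getRGB(x, y);
        int r = (argb >> 16) & 0xFF;
        int g = (argb >> 8) & 0xFF;
        int b = argb & 0xFF;
        int a = Math.max(r, Math.max(g, b)); // brightness drives opacity; pure black becomes alpha 0
        image.setRGB(x, y, (a << 24) | (r << 16) | (g << 8) | b);
    }
}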

@skidrunner, cool idea.
I can’t wait to see the progress you make with this task.
I am watching this space; please keep us informed.

Regarding the FBX models, how will you be converting them, especially the character model with animation?

I lose all animation and any secondary UV maps. This means all models need to be reanimated and baked shadow maps need to be recreated. But I am OK with this; I need practice animating anyway.

A good place to get animation references is www.referencereference.com

So, a little update to go with this: I found that the skybox is throwing an error on my Galaxy S4, so I have created a simple sphere model and mapped textures to it. I set the QueueBucket to Sky and the CullHint to Never, but it still looks like it is just a large sphere encompassing the level. If you have an idea let me know; otherwise I will post what I end up doing.
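
For reference, this is roughly the setup I am describing (the model path is a placeholder):

// Manual sky sphere: render it in the Sky bucket and never cull it.
Spatial sky = assetManager.loadModel("Models/Sky/skySphere.j3o");
sky.setQueueBucket(RenderQueue.Bucket.Sky);
sky.setCullHint(Spatial.CullHint.Never);
rootNode.attachChild(sky);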

I just want to say Android development in JMonkeyEngine is the most amazing thing I have seen in a long time. I just created the on-screen controls using the TouchListener interface and this is cake.

FYI: this is more general than for your tutorial, since you might be trying to stay clear of external dependencies… but I thought I’d mention it anyway.

The Lemur GUI library also supports touch “out of the box” for its GUI elements. It even supports multitouch (ie: you can drag multiple sliders at the same time, hit multiple buttons at the same time, etc.)

Edit: internally it’s using the JME listener support you mention; it just wraps it in a platform-agnostic way.

I have to share this: I just extended the ChaseCamera to work for my game. It works really nicely, unless you move both thumbs to the right side of the screen and then let go of the one that entered the screen second, but this might not be a bad thing.

This is how I initialize the class.

chaseCamera = new ChaseCamera(cam, geometry, inputManager);
chaseCamera.setInvertVerticalAxis(false);
chaseCamera.setTouchBounds(settings.getWidth() / 2, 0, settings.getWidth() / 2, settings.getHeight());
chaseCamera.setUseTouch(settings.isEmulateMouse());

And here is the rest of it. This needs a lot of clean up but it’s working for now.

import com.jme3.input.InputManager;
import com.jme3.input.TouchInput;
import com.jme3.input.controls.TouchListener;
import com.jme3.input.controls.TouchTrigger;
import com.jme3.input.event.TouchEvent;
import com.jme3.math.FastMath;
import com.jme3.math.Vector2f;
import com.jme3.renderer.Camera;
import com.jme3.scene.Spatial;

public class ChaseCamera extends com.jme3.input.ChaseCamera implements TouchListener {
    
    private boolean useTouch;
    
    private int touchIndex = -1;
    private Vector2f lastTouch = new Vector2f();
    
    private float touchBoundsX, touchBoundsY;
    private float touchBoundsWidth, touchBoundsHeight;
    
    public ChaseCamera(Camera cam, final Spatial target, InputManager inputManager) {
        super(cam, target, inputManager);
        registerTouchInput(inputManager);
        setTouchBounds(0, 0, cam.getWidth(), cam.getHeight());
    }
    
    @Override
    public void onAnalog(String name, float value, float tpf) {
        if(!useTouch) {
            super.onAnalog(name, value, tpf);
        }
    }
    
    public void onTouch(String name, TouchEvent event, float tpf) {
        if(!useTouch) {
            return;
        }
        
        float touchX = event.getX();
        float touchY = event.getY();
        
        if(name.equals("touch_all")) {
            if(inTouchBounds(touchX, touchY)) {
                if(touchIndex == -1) {
                    touchIndex = event.getPointerId();
                    lastTouch.set(touchX, touchY);
                    event.setConsumed();
                } else if (touchIndex == event.getPointerId()) {
                    switch(event.getType()) {
                        case MOVE:
                            rotateCamera(((touchX - lastTouch.x) / touchBoundsWidth) * (FastMath.PI / 2));
                            vRotateCamera(((touchY - lastTouch.y) / touchBoundsHeight) * (FastMath.PI / 2));
                            lastTouch.set(touchX, touchY);
                            // intentional fall-through to DOWN so the event is also consumed
                        case DOWN:
                            lastTouch.set(touchX, touchY);
                            event.setConsumed();
                            break;
                        case UP:
                            touchIndex = -1;
                            event.setConsumed();
                            break;
                    }
                }
            } else {
                if(touchIndex == event.getPointerId()) {
                    touchIndex = -1;
                }
            }
        }
    }
    
    public final void registerTouchInput(InputManager inputManager) {
        this.inputManager = inputManager;
        inputManager.addMapping("touch_all", new TouchTrigger(TouchInput.ALL));
        inputManager.addListener(this, "touch_all");
    }
    
    public void setUseTouch(boolean useTouch) {
        this.useTouch = useTouch;
    }
    
    public final void setTouchBounds(float x, float y, float width, float height) {
        this.touchBoundsX = x;
        this.touchBoundsY = y;
        this.touchBoundsWidth = width;
        this.touchBoundsHeight = height;
    }
    
    private boolean inTouchBounds(float x, float y) {
        return x > touchBoundsX && x < (touchBoundsX + touchBoundsWidth)
                && y > touchBoundsY && y < (touchBoundsY + touchBoundsHeight);
    }
}


Lemur GUI actually looks really cool, but for this project I think I can get away with only using Picture and BitmapText for my GUI. This project is very simple: the title is only one image, and the in-game HUD has a few lines of text and one button.
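
A minimal sketch of that kind of HUD, assuming a SimpleApplication (the image path, text, and positions are placeholders):

// Title image attached to the 2D guiNode.
Picture title = new Picture("title");
title.setImage(assetManager, "Interface/title.png", true); // true = use the image's alpha
title.setWidth(settings.getWidth());
title.setHeight(settings.getHeight());
guiNode.attachChild(title);

// HUD text using the default gui font.
BitmapText hudText = new BitmapText(guiFont);
hudText.setText("Time: 60");
hudText.setLocalTranslation(10, settings.getHeight() - 10, 0);
guiNode.attachChild(hudText);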

Makes sense, and for a project where you are trying to show off JME, I understand not wanting to pull in any extra complexities.

Note for other types of projects, Lemur’s event/picking can work easily with any spatial, including BitmapText and Picture. It takes one line of code to add a mouse listener to any JME Spatial, and then the picking is handled for you.

Lemur was made to build custom GUI implementations on top of its different modules… it just so happens that it also includes its own GUI implementation. But picking, input mapping, etc. are all independently usable parts.
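
For example, something along these lines (a sketch, assuming GuiGlobals has been initialized for the application; the listener body is a placeholder):

// One call wires Lemur's picking to an arbitrary spatial.
MouseEventControl.addListenersToSpatial(someSpatial, new DefaultMouseListener() {
    @Override
    protected void click(MouseButtonEvent event, Spatial target, Spatial capture) {
        System.out.println("Clicked: " + target);
    }
});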


Quick question: how are people switching states? I know the obvious answer is to attach the new state and detach the old one, but is that the right method?

It depends on the situation. At least as often, I simply enable/disable them and leave them attached.

…that’s why there is BaseAppState, which unifies all of the enable/disable/initialize/terminate handling so that you don’t have to worry about questions like “am I initialized and enabled, or just initialized and not yet enabled… and when I’m later enabled, what do I do?” etc.
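
A stripped-down sketch of that pattern (the state name and the comments in the method bodies are placeholders):

import com.jme3.app.Application;
import com.jme3.app.state.BaseAppState;

public class TitleState extends BaseAppState {

    @Override
    protected void initialize(Application app) {
        // one-time setup: load assets, build this state's scene graph, etc.
    }

    @Override
    protected void cleanup(Application app) {
        // one-time teardown
    }

    @Override
    protected void onEnable() {
        // attach this state's scene graph / register its input mappings
    }

    @Override
    protected void onDisable() {
        // detach the scene graph / remove the mappings; the state itself stays attached
    }
}

Switching “states” then becomes a matter of calling setEnabled(true/false) on the attached states instead of attaching and detaching them.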

Another little update

I have finished the main logic for the TitleState, but I still need to tweak the MotionPath tension and waypoints to get the CameraNode to move the way I want.
I started working on a Cinematic IntroState; I am having trouble making the camera shake like an earthquake is happening.
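
One rough way to fake the shake is to jitter the camera around a base location each frame; a sketch of the idea (baseLocation, timeLeft, and the amplitude value are made-up placeholders that would live in the IntroState):

// Inside the state's update(tpf): fade the jitter out as the shake ends.
if (timeLeft > 0) {
    timeLeft -= tpf;
    float amplitude = 0.2f * timeLeft;
    Vector3f jitter = new Vector3f(FastMath.nextRandomFloat() - 0.5f,
                                   FastMath.nextRandomFloat() - 0.5f,
                                   0f).multLocal(amplitude);
    cam.setLocation(baseLocation.add(jitter));
} else {
    cam.setLocation(baseLocation);
}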

I have finished all the logic for the on-screen controls. I removed the touch logic from the ChaseCamera and am now using a RemoteControlState to handle all the controls. This did require exposing the rotate and zoom methods of the ChaseCamera.
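
By “exposing” I just mean adding public pass-throughs to the protected methods in the ChaseCamera subclass above (the method names here are placeholders):

// Public wrappers so RemoteControlState can drive the camera.
public void rotate(float value) {
    rotateCamera(value);
}

public void verticalRotate(float value) {
    vRotateCamera(value);
}

public void zoom(float value) {
    zoomCamera(value);
}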

What I have left to do is: create collision shapes, create the GamePlayState, create the control for the emeracite sprite, bake shadows, animate the main character, and create a smooth-scrolling sewage texture.

I think I am going to do some testing and see which method works better for Android. I am just wondering if the memory used to keep a state alive will have an impact.

Wow, that’s cool. I painfully noticed that Nifty doesn’t support touch, and that iOS is even harder to support. So I watched the events carefully, keeping an eye on Android and iOS at the same time… the best idea I had. Maybe Lemur would be easier. Does it run on iOS?

I don’t know if anyone has tried it. Presuming the touch events are delivered like they are on Android, I see no reason it wouldn’t work on iOS if all other things are working.