2.5D Hybrid Proto - News and questions

Intro

Hello everyone, I'm back :smiley:

I don’t know whether you remember that I was working on a 2.5D engine with pre-rendered images and 3D objects (like the old Alone in the Dark, Resident Evil, etc…).

I achieved some good results and created the base “engine” to manage a game with this system (I will provide the sources as soon as possible :smile:).

For example, here is a screenshot:

The features are currently the following:

  • Automatic set-up of scene cameras, pre-rendered layers and models from an XML file.
  • Automatic switching between camera views based on the player's position (mapped in the XML);
  • A dynamic way to show the player (and other 3D entities) in front of or behind the mid-ground layers (as you can see in the example image the player is behind the desk, but it could just as well be in front of it).

I want to explain a bit of the algorithm that manages the depth relationship between entities and layers (semi-transparent pictures rendered at screen depth), because I want to discuss additional (and maybe better) ways to do it with you.

2.5D Management

In Blender, after creating the HD render (and, as you can see, some manual post-editing) I simplify the scene, keeping just:

  • All I need for collision detection;
  • A floor split into different areas.

The names of these area geometries (retrieved with ray casting) are mapped inside an XML file. For every possible view in the scene I have a map that assigns a relative depth value (RDV) to each geometry name.
In this way, when a 3D entity walks on an area, the entity takes the RDV of that geometry.
Inside the XML there is also a mapping that gives a static RDV to each pre-rendered image.
So it is sufficient to create a gap between two adjacent areas (for example A1 and A2) wherever a layer (for example L1) should sit between them, and to set the RDVs inside the XML such that:

RDV(A1) < RDV(L1) < RDV(A2)
or
RDV(A2) < RDV(L1) < RDV(A1)

(it obviously depends on the point of view).
After that, the renderer draws entities and layers in order of their RDVs (using @pspeed's LayerComparator).
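Just to make the idea concrete, a custom comparator that sorts by RDV could look roughly like this (only a sketch of the concept: the "RDV" user-data key and the class name are made up, and pspeed's LayerComparator already does something along these lines):

    import com.jme3.renderer.Camera;
    import com.jme3.renderer.queue.GeometryComparator;
    import com.jme3.scene.Geometry;

    /** Orders geometries by the relative depth value stored in their user data. */
    public class RdvComparator implements GeometryComparator {

        @Override
        public void setCamera(Camera cam) {
            // The ordering depends only on the RDVs, not on the camera.
        }

        @Override
        public int compare(Geometry g1, Geometry g2) {
            return Float.compare(getRdv(g1), getRdv(g2));
        }

        private float getRdv(Geometry g) {
            Float rdv = g.getUserData("RDV"); // value taken from the XML mapping
            return rdv != null ? rdv : 0f;
        }
    }

Layers would get their static RDV from the XML (e.g. layerGeometry.setUserData("RDV", 1.5f)), entities would get the RDV of the floor area they are standing on, and the comparator would be installed on the bucket that holds them, e.g. viewPort.getQueue().setGeometryComparator(RenderQueue.Bucket.Transparent, new RdvComparator()).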

That’s all. This system has some pros and cons:

PRO:

  • Good performance and memory usage;
  • Only a simple scene is required.

CONS:

  • The mechanism is quite intricate and difficult to manage;
  • Entities can only be completely behind or completely in front of a layer, never partway through it (this is good in some cases and bad in others).

Other Ways

I want to keep this possibility anyway, but I also want to look at other implementations of the algorithm. I was thinking of two ways:

  • Render the static image onto the collision box as a normal texture, BUT without conforming to the box: it must always be seen parallel to the camera and with its original dimensions on screen, irrespective of the box's depth in the scene. I don’t know how to do it, but maybe you could help me understand how.
  • Use a depth map to fill the z-buffer. I don’t know whether it is possible, but my idea is that if I create one render for the pre-rendered image and one for the depth map, I have the information about which parts should cover the actor and which should not. In this case I don’t have any idea how to proceed, and I’m a bit afraid about performance.

What do you think about these two last points? Are they good ways, or is it better to keep my own?

Thank you very much.

Ray


Hey nice! Yes, I remember!
You achieved a very nice vintage look!
Nice work


Thanks nehon,
now I hope I can simplify the pre-rendered layers' depth management; meanwhile I’m doing a significant refactoring, just to get a more normalized, robust and scalable architecture.
In particular I want to simplify the depth management because right now there is too much dependency between depth management and the base scene geometry, and it would be better to cut it.

EDIT:
Or maybe, well…
Try to abstract the depth management, so that the strategy implementation can be chosen at runtime and additional implementations can be added in the future… this sounds better to me :smiley:


Someone else was doing something like this recently and even had lighting working. I don’t know how similar their approach was but the effect was nice as they had lighting and shadows in the engine.

They rendered a simplified (geometrically) version of the scene and then draped the high def image over it or something. I forget the details even though I helped solve some issues with it. :smile:

Hi pspeed!
It will be interesting to discover a bit more about the “best” way to manage depth for 2.5D games. I think there are a lot of possibilities, but I have no idea which is the best, or at least the standard one.

Regarding the two points I talked about:

  • Render the static image onto the collision box as a normal texture, BUT without conforming to the box: it must always be seen parallel to the camera and with its original dimensions on screen, irrespective of the box's depth in the scene. I don’t know how to do it, but maybe you could help me understand how.
  • Use a depth map to fill the z-buffer. I don’t know whether it is possible, but my idea is that if I create one render for the pre-rendered image and one for the depth map, I have the information about which parts should cover the actor and which should not. In this case I don’t have any idea how to proceed, and I’m a bit afraid about performance.

The first one seems very close to the standard approach as I've understood it from reading some documents (the only trick is to put a quad inside the scene space, but keep the image as if it were at screen depth).

The second one seems a more modern solution, but if performance is good, it's welcome :smile:

Anyway, I want to abstract the management of depth, and maybe test all 3 implementations (the 2 above, and the implementation that I already have).

I looked and found the other thread:

…took a little digging. :slight_smile:

Hi pspeed, thanks for the information!
From what I saw, it seems that in this case the low-poly scene is used to fill the depth buffer (that's what I understood, is it right?).
This is the first solution that I tried, but in my opinion it has some limits, because:

  • Everything strictly depends on the low-poly geometry;
  • If I have mid-grounds with a lot of detail, even the “low”-poly geometry must have a lot of detail, right?
  • In order to have good low-poly geometry you must be a good 3D modeler :smiley:

In particular the second one is the real problem on my side, because the idea is to have a very light and performant solution. For example with my system, with one place composed of 3 scenes (collision boundary only), 2 3D characters (with skeletal animation) and more or less 20 pre-rendered backdrops/mid-layers (1280x720, 8-bit), the situation is this:

  • Executable with all assets: ~20 MB
  • Test run on a mid/low-end notebook: ~250-300 FPS

But of course my solution is not as precise and automatic as the low-poly depth rendering solution (let's call it that :smiley: ). In fact it would be fantastic to find a junction point between the two ways and keep only the pros of both systems :smiley:

Hi, that looks cool!

You’re right about what I did; when it comes to performance, it's got nothing on yours. As for the low-poly models, I actually just used the decimate modifier selectively in Blender, which is more than adequate and requires very little effort. In my project the character crawls under things and stuff like that, so it just seemed easier.

If you don’t mind me asking, after you’ve done a render in blender do you then use the camera position in blender to set the camera in JME?

Hi JERSTERRRRRR,
I will gladly answer you.
I don’t know how it works in your case, but in my case every scene is managed by a different app state.
The scene is loaded directly as a native .blend scene. Inside the scene I only keep:

  • Base geometry (for collisions);
  • Floor area information (where for each floor area I can manage a different view);
  • All cameras (with the same view as the pre-rendered backdrops).

For example, if for a certain region I need a specific view (say, the one of the camera named “VIEW_1”), I will name that region in Blender something like “FLOOR_VIEW_1”.

In practice, when a new app state is initialized I load the .blend scene and create a visitor that walks the entire scene structure.
Inside the visit method I look for all the cameras; you can do it in two ways:

  • Check whether the spatial name matches a pattern like “%VIEW%”;
  • Create an immutable list of strings with all the camera names
    that you need to manage, and check whether the list contains the
    name of the currently visited node; if it does, you are visiting
    a camera, so you can add it.

In both cases, when you find a camera node you can add it to a list (or, probably better, a map) of CameraNode, as in the sketch below.
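Just as a rough sketch (the scene path and the “VIEW” pattern are only examples), the visitor could look like this:

    // Collect all CameraNodes of the loaded .blend scene, keyed by their node name.
    final Map<String, CameraNode> cameras = new HashMap<String, CameraNode>();

    Spatial scene = assetManager.loadModel("Scenes/room1.blend"); // example path
    scene.depthFirstTraversal(new SceneGraphVisitor() {
        @Override
        public void visit(Spatial spatial) {
            // Way 1: match the naming pattern (every camera contains "VIEW" in its name).
            // Way 2 would check the name against a fixed list of camera names instead.
            if (spatial instanceof CameraNode && spatial.getName().contains("VIEW")) {
                cameras.put(spatial.getName(), (CameraNode) spatial);
            }
        }
    });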

OK, now you have all your camera settings (taken directly from Blender). When your character walks onto a new area (you can detect this through a ray cast) you should
check your camera list, and if there is a camera name that matches the current region name (for example your character is on FLOOR_VIEW_1 and a camera named CAMERA_VIEW_1 exists)
you should copy the new camera settings into your camera. You can do this simply with:

camera.copyFrom(newCameraNode.getCamera());

This is the concept, but there’s a little trick: the coordinate conventions in Blender are a bit different from JME3's, so when you copy the camera info into the current camera you should apply a small transform first. You could try it this way:

// Assume that you have the new view camera info inside a CameraNode called newCameraNode.
// Work on a clone so the original node is left untouched (clone() returns Spatial, hence the cast).
CameraNode tempNode = (CameraNode) newCameraNode.clone();
// Remap the rotation components to account for the different axis conventions.
Quaternion rot3 = tempNode.getLocalRotation();
tempNode.setLocalRotation(new Quaternion(-rot3.getZ(), rot3.getW(), rot3.getX(), -rot3.getY()));
// Finally copy the converted settings into the active camera.
camera.copyFrom(tempNode.getCamera());

This is more or less the idea; of course there are a thousand ways to optimize it. For example, what I'm trying to do now is to create an abstraction over everything:
camera info would be retrieved through a specific implementation of a CamerasHandler, so you can implement your favorite way without changing the overall structure.
If I can make a suggestion, it would be a good idea to try all of this without using a .blend scene directly, because as I understand it the Blender format is not so good to use directly;
it is better to use, for example, the OGRE XML format. In that case the problem is that you have to rebuild your camera settings, because the focal length and FoV
information are a bit different from the format used in JME3. I want to study this.
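For example, the abstraction could be little more than an interface like this (purely an illustrative sketch; the method is hypothetical):

    import java.util.Map;
    import com.jme3.scene.CameraNode;

    /** Strategy for retrieving the view cameras of a scene, keyed by view/region name. */
    public interface CamerasHandler {
        Map<String, CameraNode> loadCameras(String sceneName);
    }

One implementation could read the cameras from the .blend scene as above, another could load previously serialized camera settings, without the rest of the scene setup having to change.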

Hope this can help you!


Hey, thanks for the detailed reply. This is something I am particularly interested in.

I cannot get my camera settings from blender to look the same in JME. I am using some code from another post on here (sorry can’t remember who…)

    float blenderFocalLength = 28f;
    //film back for a 35mm camera is 36x24mm
    float filmBackRatio = FastMath.sqrt(FastMath.pow(36f, 2) + FastMath.pow(24f, 2f));
    float frustumSize = 0.02075f * filmBackRatio / blenderFocalLength;
    float aspect = (float) cam.getWidth() / cam.getHeight();
    cam.setFrustum(0.1f, 1000f, -aspect * frustumSize, aspect * frustumSize, frustumSize, -frustumSize);

    //location and rotation values copied from blender
    cam.setLocation(new Vector3f(0.92148f,0.59512f,4.35292f));
    Quaternion q1 = new Quaternion(0.0028521444f, 0.99713457f, 0.050813507f, -0.055968814f);
    cam.setRotation(q1);

The camera appears to be looking the right way, but its positioning is slightly off; I have to manually shift it left slightly. I was wondering if you had managed to set the camera up to get the same view in Blender and JME?

I think it's a bit of a tricky issue… I remember that I even tried to export the camera from Blender with OgreXML
and load it in JME3, but the two views didn't overlap well.
Until now the only way that works on my side is the one I described in the previous post, but I know it is not a good way because
you must work only with .blend scenes, and you lose a lot of flexibility.

But maybe there’s another way (I want to try it as soon as possible):

  1. Write a tool class to import a dummy .blend scene with all the cameras that you need.
  2. For each camera found, copy the Blender camera info into a JME camera (with the code from my previous post) and serialize it to a file.
  3. Now in your game/app you can load your scene (in your format, for example OgreXML) and then load the camera info from the cameras you serialized before.

Does it sound good to you? For now, in my opinion, it seems the fastest way (and still quite flexible).
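I imagine steps 2 and 3 roughly like this, using the jME3 binary exporter/importer (untested sketch, class and path names are just examples):

    import java.io.File;
    import java.io.IOException;
    import com.jme3.export.binary.BinaryExporter;
    import com.jme3.export.binary.BinaryImporter;
    import com.jme3.renderer.Camera;
    import com.jme3.scene.CameraNode;

    public class CameraSerializationUtil {

        /** Tool side: save the (already converted) camera of a CameraNode to a file. */
        public static void exportCamera(CameraNode cameraNode, File target) throws IOException {
            BinaryExporter.getInstance().save(cameraNode.getCamera(), target);
        }

        /** Game side: load the exported settings and copy them into the active camera. */
        public static void importCamera(File source, Camera activeCamera) throws IOException {
            Camera stored = (Camera) BinaryImporter.getInstance().load(source);
            activeCamera.copyFrom(stored);
        }
    }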

Let me know if you discover another way.

Ray

UPDATE:

Maybe it will be useful to have a look at the Blender camera helper:
(jmonkeyengine/CameraHelper.java at master · jjpe/jmonkeyengine · GitHub)
In particular the private method CameraHelper#toCamera250(Structure structure, Structure sceneStructure) seems very interesting;
in this case I think it will be possible to emulate the Blender camera completely (a quick sketch follows the list), given:

  • width
  • height
  • focal_length
  • sensor_size
  • near
  • far
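If those values are enough, the conversion should just be the usual pinhole-camera formula; this is the untested sketch I have in mind (I'm ignoring Blender's sensor-fit setting and assuming the sensor size is the horizontal one):

    import com.jme3.math.FastMath;
    import com.jme3.renderer.Camera;

    public class BlenderCameraUtil {

        /** Builds a perspective frustum from blender-style focal length and sensor width (both in mm). */
        public static void applyBlenderCamera(Camera cam, float focalLengthMm, float sensorWidthMm,
                                              float near, float far) {
            float aspect = (float) cam.getWidth() / (float) cam.getHeight();
            // Horizontal FOV from focal length and sensor width...
            float fovX = 2f * FastMath.atan(sensorWidthMm / (2f * focalLengthMm));
            // ...converted to the vertical FOV (in degrees) that setFrustumPerspective() expects.
            float fovY = 2f * FastMath.atan(FastMath.tan(fovX / 2f) / aspect);
            cam.setFrustumPerspective(fovY * FastMath.RAD_TO_DEG, aspect, near, far);
        }
    }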

I don’t know whether this is the same thing you tried; tell me if you have some good news!