Hello everyone, I'm back!
I don't know whether you remember, but I was working on a 2.5D engine with pre-rendered images and 3D objects (like the old Alone in the Dark, Resident Evil, etc.).
I've achieved some good results: I created the base "engine" to manage a game with this system (I will provide the sources as soon as possible).
For example, here is a screenshot:
The features are currently the following:
- Automatic set-up of scene cameras, pre-rendered layers and models from an XML file.
- Automatic switching between camera views based on the player position (mapped in the XML);
- A dynamic way to show the player (and other 3D entities) in front of or behind the mid-ground layers (as you can see in the example image, the player is behind the desk, but it could just as well be in front of it).
I want to explain a bit the algorithm that manages the depth relationship between entities and layers (semi-transparent pictures rendered at screen depth), because I'd like to discuss additional (and maybe better) ways to do it with you.
In Blender, after creating the HD render (with, as you can see, some manual post-editing), I simplify the scene, keeping only:
- Everything I need for collision detection;
- A floor split into different areas.
The geometry names of these areas (retrieved with ray casting) are mapped inside an XML file. For every view in the scene I have a map that assigns a relative depth value (RDV) to each geometry name.
This way, when a 3D entity walks on an area, the entity takes the RDV of that geometry.
Inside the XML there is also a mapping that gives static RDVs to the pre-rendered layers.
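The area-to-RDV lookup can be sketched like this (a minimal stand-in: the map contents and area names A1/A2 are made up for illustration, and in the engine the area name would come from the downward ray cast against the simplified floor mesh rather than being passed in directly):

```java
import java.util.HashMap;
import java.util.Map;

public class RdvLookup {
    // Hypothetical mapping loaded from the XML for the current camera view:
    // floor-area geometry name -> relative depth value (RDV).
    private final Map<String, Float> rdvByArea = new HashMap<>();

    public RdvLookup() {
        rdvByArea.put("A1", 1.0f);
        rdvByArea.put("A2", 3.0f);
    }

    // The entity standing on an area takes that area's RDV.
    public float rdvFor(String areaName) {
        return rdvByArea.getOrDefault(areaName, 0.0f);
    }
}
```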
This way, it is sufficient to create a gap between two areas (for example A1 and A2) in close proximity to where the layer (for example L1) should appear, and then to set the RDVs inside the XML such that
RDV(A1) < RDV(L1) < RDV(A2)
or
RDV(A2) < RDV(L1) < RDV(A1)
(it obviously depends on the point of view).
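A hypothetical fragment of such an XML mapping (the element and attribute names here are invented for illustration, not my actual schema) could look like:

```xml
<view id="camera01">
    <!-- floor areas, keyed by geometry name -->
    <area name="A1" rdv="1"/>
    <area name="A2" rdv="3"/>
    <!-- static RDV for the mid-ground layer between them -->
    <layer name="L1" rdv="2"/>
</view>
```

which gives RDV(A1) < RDV(L1) < RDV(A2) for this point of view.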
After that, the renderer draws entities and layers in order of their RDVs (using @pspeed's LayerComparator).
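The ordering itself boils down to a simple comparator; here is a generic stand-in (not the actual LayerComparator, and the Renderable type and the names used are made up) just to show the idea:

```java
import java.util.Comparator;
import java.util.List;

public class DepthOrdering {
    // Minimal stand-in for anything the renderer must order:
    // a 3D entity or a pre-rendered layer, each carrying an RDV.
    public static class Renderable {
        public final String name;
        public final float rdv;
        public Renderable(String name, float rdv) {
            this.name = name;
            this.rdv = rdv;
        }
    }

    // Draw in ascending RDV order, so higher RDVs paint on top.
    public static void sortForRendering(List<Renderable> items) {
        items.sort(Comparator.comparingDouble(r -> r.rdv));
    }
}
```

With the RDVs from the earlier example, a player standing on A2 (RDV 3) sorts after the layer L1 (RDV 2), so the player is drawn in front of the desk.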
That's all. This system has some pros and cons:
- Good performance and memory usage;
- Only a simple scene is required;
- The mechanism is quite intricate and difficult to manage;
- Entities can only be completely behind or completely in front of a layer, never partway through it (which is good in some cases and bad in others).
I want to keep this possibility anyway, but I'd also like to look at other implementations of the algorithm. The two ways I was thinking of are:
- Render the static image onto the collision box as a normal texture, BUT without it wrapping around the box: it must always be seen completely parallel to the camera and with its original dimensions on screen, regardless of the box's depth in the scene. I don't know how to do this, but maybe you could help me understand how.
- Use a depth map to fill the z-buffer. I don't know whether this is possible, but my idea is that if I create one render for the pre-rendered image and one for the depth map, I can know which parts should cover the actor and which should not. In this case I have no idea how to proceed, though, and I'm a bit worried about performance.
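For the first idea, the geometric part at least is straightforward: for a perspective camera you can compute the world-space size a camera-facing quad must have so that it covers a fixed number of pixels at any distance. A sketch of that math (function and parameter names are mine, not any engine API):

```java
public class ScreenConstantScale {
    // World-space height a camera-facing quad must have so that it
    // covers desiredPixels out of viewportPixels on screen, whatever
    // its distance. Assumes a perspective camera with vertical field
    // of view fovYRadians.
    public static double worldHeightFor(double distance, double fovYRadians,
                                        double desiredPixels, double viewportPixels) {
        // Total height of the view frustum at that distance.
        double frustumHeight = 2.0 * distance * Math.tan(fovYRadians / 2.0);
        return frustumHeight * (desiredPixels / viewportPixels);
    }
}
```

Rescaling the quad every frame with this value keeps its on-screen size constant; the quad itself can be oriented with an ordinary billboard control so it stays parallel to the camera.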
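For the second idea, the core of the comparison would be turning a normalized depth-map value back into an eye-space distance so it can be compared against the actor's depth. Assuming the depth map was produced with a standard OpenGL-style perspective projection with known near/far planes (names below are mine, for illustration), the linearization is:

```java
public class DepthMapCompare {
    // Convert a normalized depth value d in [0,1] (as stored in a depth
    // map rendered with the same camera) back to eye-space distance,
    // assuming a standard OpenGL perspective depth encoding.
    public static double linearize(double d, double near, double far) {
        return 2.0 * near * far / (far + near - (2.0 * d - 1.0) * (far - near));
    }

    // An actor pixel is hidden wherever the pre-rendered scene is closer.
    public static boolean layerCoversActor(double depthMapValue, double actorEyeDepth,
                                           double near, double far) {
        return linearize(depthMapValue, near, far) < actorEyeDepth;
    }
}
```

In practice this per-pixel test would live in a fragment shader that writes the depth map into the z-buffer rather than in Java code, so the GPU does the comparison for free; the snippet only shows the arithmetic involved.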
What do you think about these two last points? Are they good approaches, or is it better to keep my own?
Thank you very much.