Sounds and the scenegraph

I was thinking of ways to integrate sounds with a scenegraph. One possibility is that Spatial objects can have Sounds added to them. Then, whenever a Spatial is rendered, it is displayed to the screen and its sound is played. This allows for quick culling of sound as well as display data, so it would be fast. BUT, and this is a big but, it means that if you can’t see the object you can’t hear it. That’s a major no-no. So we would have to figure out a way to have two separate bounding spheres, one for visualization and one for the sound system, where the sound system’s bounding sphere is larger.



Any ideas how to get two separate bounding spheres working? The speed of a scene graph comes from ignoring nodes whose bounding spheres are not within the camera frustum; if we add huge bounding spheres and have to process sound and rendering separately, this could become a performance issue.

Are there also concepts to model a sounding object that comes nearer and becomes louder?

As a suggestion (which may or may not fit into the current design), for sound processing we need a lightweight status-checking mechanism. If we imagine that sound travels in all directions at the same speed, then we simply need to know the maximum x, y and z extent within which it can be heard, and check that to decide whether the sound should be played or not. So if we have a separate rendering loop for sound objects (which can be attached to other objects) that sets the maximum distance within which a sound is audible, we can do a quick check of the listener's distance from the sound object against that maximum distance. If it is within range, we play the sound.



We can support distance simply by reducing the volume of the sound in a stepwise fashion (i.e. dividing the max distance into 10 steps) and setting the volume accordingly.
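
In rough pseudo-code, the check and the stepwise volume could look something like this (SoundSource and its fields are just made up for illustration, not engine classes):

```java
// Illustration only: a made-up SoundSource with a position, a maximum
// audible distance, and a volume quantized into 10 steps.
public class SoundSource {
    public float x, y, z;       // world position of the sound
    public float maxDistance;   // beyond this the sound is inaudible

    /** Returns a volume in [0,1] in 10 steps, or 0 if the listener is out of range. */
    public float volumeFor(float listenerX, float listenerY, float listenerZ) {
        float dx = x - listenerX, dy = y - listenerY, dz = z - listenerZ;
        float dist = (float) Math.sqrt(dx * dx + dy * dy + dz * dz);
        if (dist > maxDistance) {
            return 0.0f; // quick reject: don't play at all
        }
        // divide the max distance into 10 steps and pick the matching volume
        int step = (int) (dist / (maxDistance / 10.0f)); // 0..10
        return 1.0f - (step / 10.0f);
    }
}
```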



tomcat

Are there also concepts to model a sounding object that comes nearer and becomes louder?


I believe that is part of OpenAL’s 3D sound capability. Each sound has a position and attenuation; volume is calculated from the user’s relative distance.
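
For reference, if I remember the spec correctly, OpenAL’s inverse-distance model attenuates roughly like this (the parameters mirror AL_REFERENCE_DISTANCE and AL_ROLLOFF_FACTOR; this little class is only a sketch, not engine code, and it ignores the max-distance clamp):

```java
// Sketch of OpenAL-style inverse-distance attenuation.
public final class Attenuation {
    public static float gain(float distance, float referenceDistance, float rolloffFactor) {
        // Inside the reference radius the gain stays at 1.0.
        float d = Math.max(distance, referenceDistance);
        return referenceDistance
                / (referenceDistance + rolloffFactor * (d - referenceDistance));
    }
}
```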

tomcat: The more I think about it, the less sense it makes to plug the sound into the scenegraph. Perhaps it should have its own processor for playing sounds, independent of the graphics scenegraph. Arman can make it a clean queue system or something, and once it's finished we can try to determine a good way to associate a sound with a Spatial object (which should have the same position). Perhaps this is where Entity comes in, the glue that holds all these subsystems together.

That is, Entity has a reference to Spatial, Sound, Finite State (for the Finite State Machine AI), physics etc.

Still not sure yet, I need to just get back to the graphics stuff where I know what I am doing. :)
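
Roughly, the idea would be something along these lines (every type and method here is a placeholder, nothing is decided):

```java
// Placeholder sketch of an Entity that ties the subsystems together.
// Spatial is the scenegraph class; Sound, FiniteState and PhysicsBody
// stand in for the sound, AI and physics pieces and are not real classes.
public class Entity {
    private Spatial spatial;      // graphics: node in the scenegraph
    private Sound sound;          // audio: entry in the sound queue/renderer
    private FiniteState aiState;  // AI: current state of the state machine
    private PhysicsBody physics;  // physics representation

    public Entity(Spatial spatial, Sound sound, FiniteState aiState, PhysicsBody physics) {
        this.spatial = spatial;
        this.sound = sound;
        this.aiState = aiState;
        this.physics = physics;
    }

    public Spatial getSpatial() { return spatial; }
    public Sound getSound() { return sound; }
    public FiniteState getAiState() { return aiState; }
    public PhysicsBody getPhysics() { return physics; }
}
```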

That is, Entity has a reference to Spatial, Sound, Finite State (for the Finite State Machine AI), physics etc.


I think this is a great idea as we need to hang things off an anchor and the entity could be a good anchor.

Looking at the new code, I see that something called "Node" extends the "Spatial" object, but I don't really understand the difference. I am intrigued as to how the new system works (a quick explanation would be great).

I also don't see the terrain, particle system, and some of the other good stuff that was done in the previous version. I hope they will be added to the new one, as they were cool features.

As a matter of interest, when will the new version be ready to play with? ;)

tomcat
Looking at the new code, I see that something called "Node" extends the "Spatial" object, but I don't really understand the difference. I am intrigued as to how the new system works (a quick explanation would be great).


Spatial is the base class for the scenegraph. It handles rotation, translation, and scale (as well as the cumulative world versions of these). It also holds render states, etc.

Node extends Spatial and maintains children. This represents the "inner" nodes of the scenegraph. It handles propagating information to the parents and merging bounding spheres to allow for quick culling of branches.

Geometry extends Spatial as well and represents the leaf nodes of the tree. Once a Geometry node is reached (it means it hasn't been culled out) it is rendered.

Then classes like TriMesh extend Geometry; they contain the information needed to render themselves.

So, starting at the root node, you can recursively travel down Nodes and their children, culling whenever possible.

That's a pretty basic overview of how the scenegraph works.
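
In very rough pseudo-Java, the walk looks something like this (the class relationships follow the description above, but the method names are just illustrative, not the actual signatures):

```java
// Illustrative sketch of the recursive cull-and-draw walk.
void drawScene(Spatial spatial, Camera camera, Renderer renderer) {
    // If the node's bounding volume is outside the frustum, skip the whole branch.
    if (camera.culls(spatial.getWorldBound())) {
        return;
    }
    if (spatial instanceof Node) {
        // Inner node: recurse into each child.
        Node node = (Node) spatial;
        for (int i = 0; i < node.getQuantity(); i++) {
            drawScene(node.getChild(i), camera, renderer);
        }
    } else if (spatial instanceof Geometry) {
        // Leaf that survived culling (e.g. a TriMesh): hand it to the renderer.
        renderer.draw((Geometry) spatial);
    }
}
```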

I also don't see the terrain, particle system, and some of the other good stuff that was done in the previous version. I hope they will be added to the new one, as they were cool features.


Terrain will extend TriMesh. I will be implementing terrain before too long, and it will be an improvement. First, I am going to implement a level-of-detail scheme for TriMesh. This means ALL geometry will have level of detail added to really speed things up, where the lowest level will be impostors and the highest will be the full mesh. Then terrain will be nothing more than a mesh defined by a heightmap. I will also implement paging, so you can have unlimited (limited by disk size) sized worlds; pages (heightmaps) are loaded into memory from disk as they are needed. Terrain will be very exciting.
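
Just to illustrate the heightmap part, building the vertex positions is basically a nested loop over the grid (the flat array layout and the spacing parameter here are assumptions for the sketch, not the final format):

```java
// Sketch: turn a size-by-size heightmap into a flat array of vertex positions.
float[] buildTerrainVertices(float[] heights, int size, float spacing) {
    float[] vertices = new float[size * size * 3];
    int i = 0;
    for (int z = 0; z < size; z++) {
        for (int x = 0; x < size; x++) {
            vertices[i++] = x * spacing;           // x position on the grid
            vertices[i++] = heights[z * size + x]; // height sampled from the map
            vertices[i++] = z * spacing;           // z position on the grid
        }
    }
    return vertices; // would be fed into a TriMesh along with an index buffer
}
```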

Particle System is also on the to do list. I just have to figure out how to make it work within a scenegraph.

I plan on keeping all the same functionality from the previous version in the new one. The new one will just be MUCH MUCH faster; TriMeshes are now drawn using VertexArrayPointers, which alone makes for about a 10x speed increase.

I will also have models going in before too long.

As a matter of interest, when will the new version be ready to play with?


That's a tough one. I'm making good progress, believe it or not. I'm getting all the things under the covers finished up. Once that is taken care of, special effects and stuff should be quick. The thing that is keeping you from really playing is building models and terrain. I'd put that at a month or two.

To give an idea here is my to do list, in order:

Input Controllers <---- here now.
Extend TriMesh with primitive shapes.
Picking
Collision Detection
Bezier curves (get complex looking models from low poly count models).
Surfaces (leading into models)
Models (custom format)
Loader (convert MD3, Milkshape into custom)
Timer
Continuous Level of Detail
Terrain
Terrain Pages
Quadtrees/Octrees
Portals
BSP
Lens Flare
Environmental Mapping
Bump Mapping
Volumetric Fog
Projected Lights
Projected Shadows
Particle System
Morphing

That's what the initial official release of the graphics system is slated to have. That will give a powerful start.

Notice GUI is not in there yet, not sure where I want to slide that in.

Wow, thanks :smiley: for the explanation and the impressive list of items. I guessed you’d be busy, and I see why. I have been compiling the new version (I had to do that for the sound to work) and have run a couple of tests (line, etc.). I’d like to start using some of the features when you upload them.



tomcat

I also think it is reasonable to separate the rendering process of graphics and the rendering process of sounds, for instance using a GraphicsRenderer and a SoundRenderer to traverse the scene graph. But I also think that sound should be part of the scene graph, because the scene graph describes the model of the game world, and the game world is the aggregation of sounds, graphics, and AI. Of course, there then has to be different scene graph clipping when rendering sounds than when rendering graphics.



Rendering sounds purely in relation to distance is not enough, I think. In a dungeon, for instance, the sounds from a nearby room shouldn’t be audible just because it is close.



PS: Very impressive to-do list, I am very curious about the first jme release. Meanwhile I’ll play with the CVS code. :slight_smile:

Good point. Sound does have its own “renderer”, so those are separated. Arman is currently adding support for MP3 and Ogg. So when he’s finished with that, we’ll have to make a decision one way or another.

Just wanted to update the pointer… :slight_smile:



Input Controllers
Extend TriMesh with primitive shapes.
Timer
Picking
Collision Detection
Bezier curves (get complex looking models from low poly count models).
Surfaces (leading into models)
Models (custom format) <---- here now.
Loader (convert MD3, Milkshape into custom)
Continuous Level of Detail
Terrain
Terrain Pages
Quadtrees/Octrees
Portals
BSP
Lens Flare
Environmental Mapping
Bump Mapping
Volumetric Fog
Projected Lights
Projected Shadows
Particle System
Morphing