Best practice to individually control volumes for different audio tracks

What is the best practice for having multiple audio tracks, for example one for ambient sounds like wind or rain, one for effect sounds generated by player interaction (shooting a gun, hitting a target), and one for the soundtrack?

How would I assign an audio source to one of these different audio tracks? And how would I be able to individually control the master volume for each of these tracks? I know that one can set the global master volume for all audio sources via the listener, but this does not help that much.

Also, I do not want to modify the volume of each individual audio node in the scene graph, as these volumes should be relative to the master volume of their assigned sound layer. Of course, the volume of an audio source may still change over time or based on some effect, but even then it must remain relative to the overall master volume of the assigned audio track.

Just keep a list of your playing sources and change the volume for all of them. Or have logic in the sources that updates the volume based on some global values, like “update(tpf){ volume = globalVolume * layerVolume * myVolume }”.
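A minimal sketch of that second approach as a Control attached to each AudioNode; the static globalVolume field and the shared layerVolume holder are application code invented for this example, not existing jME API:

import com.jme3.audio.AudioNode;
import com.jme3.renderer.RenderManager;
import com.jme3.renderer.ViewPort;
import com.jme3.scene.control.AbstractControl;

public class LayerVolumeControl extends AbstractControl {

    // application-wide master volume, written by e.g. the options screen
    public static volatile float globalVolume = 1f;

    private final float[] layerVolume; // one-element holder shared by all sources of a layer
    private final float myVolume;      // this source's own relative volume

    public LayerVolumeControl(float[] layerVolume, float myVolume) {
        this.layerVolume = layerVolume;
        this.myVolume = myVolume;
    }

    @Override
    protected void controlUpdate(float tpf) {
        ((AudioNode) spatial).setVolume(globalVolume * layerVolume[0] * myVolume);
    }

    @Override
    protected void controlRender(RenderManager rm, ViewPort vp) {
        // nothing to draw for a pure volume control
    }
}

Attaching it with ambientNode.addControl(new LayerVolumeControl(ambientLayerVolume, 0.8f)), where ambientLayerVolume is a float[] the application keeps per layer, would then hold the node at 80% of the ambient layer's volume, times the global volume.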

Hm, this sounds like a good idea, but then I would have to keep track of all of my audio sources and the assigned audio layers/tracks.

How about this, for example:

My initial thought was to extend the audio listener interface to allow setting different volumes for the available tracks, with the audio renderer then determining the master volume for each track from that setting. In addition, both AudioParam and ListenerParam would be extended with additional constants for assigning the correct track to each audio source, for example AudioParam.TRACK and ListenerParam.TRACKVOLUME.

Upon creating the audio node, I would be able to assign an AudioParam.TRACK parameter to it, say 1 for ambient. The track assignment would be application-defined.

Now, to set the volume for the ambient track, I would use the method listener.setTrackVolume(int track, float volume). The audio renderer would then be able to determine the correct master volume from the listener by checking the AudioParam.TRACK parameter of each audio source.

The audio renderer would query the listener for the ListenerParam.TRACK param, retrieve all defined tracks and their assigned volumes, and configure itself as required, e.g. via TrackVolumeParam[] getTrackVolumes() on the listener, with TrackVolumeParam being a simple POJO that holds the track identifier and the assigned volume.

That way, one would assign the audio track once for each of the available audio nodes and control the master volume with a single call to the audio listener interface, either during each update or whenever it changes based on some general application setting.
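In code, usage of the proposal could look roughly like this; note that setTrack() and setTrackVolume() are exactly the additions being suggested here, neither exists in jME today, and the asset path is just a placeholder:

// 1 = ambient; the track numbering is application defined
AudioNode rain = new AudioNode(assetManager, "Sound/Environment/Rain.ogg", true);
rain.setTrack(1); // proposed AudioParam.TRACK assignment
rootNode.attachChild(rain);
rain.play();

// later, e.g. from the options screen:
listener.setTrackVolume(1, 0.25f); // proposed: quarter volume for all ambient sources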

Would that be possible?

I found http://hub.jmonkeyengine.org/forum/topic/volume-control/ but this dates back to 2003 and no work has been done so far to make this happen.

You are trying to adapt the library to your code, which is never a good idea. That's why, for example, we discourage extending Node; it's a bad programming pattern. If you just write a class that holds your audio nodes for one layer and write the code to loop over them, you're done, and you can use your class with any audio system you ever encounter. If you have a class that creates the AudioNodes instead of instantiating them yourself, you can even automate the adding to a list (though that's not exactly complicated).
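A rough sketch of such a class; AudioLayer and everything in it is plain application code, not part of jME:

import com.jme3.asset.AssetManager;
import com.jme3.audio.AudioNode;
import java.util.ArrayList;
import java.util.List;

public class AudioLayer {

    private final List<AudioNode> nodes = new ArrayList<AudioNode>();
    private final List<Float> baseVolumes = new ArrayList<Float>();
    private float layerVolume = 1f;

    // creating the nodes here automates the adding to the list
    public AudioNode createNode(AssetManager assetManager, String name, boolean stream) {
        AudioNode node = new AudioNode(assetManager, name, stream);
        nodes.add(node);
        baseVolumes.add(1f);
        return node;
    }

    // use this instead of node.setVolume() so the base volume stays known
    public void setBaseVolume(AudioNode node, float volume) {
        baseVolumes.set(nodes.indexOf(node), volume);
        node.setVolume(volume * layerVolume);
    }

    public void setLayerVolume(float layerVolume) {
        this.layerVolume = layerVolume;
        for (int i = 0; i < nodes.size(); i++) {
            nodes.get(i).setVolume(baseVolumes.get(i) * layerVolume);
        }
    }
}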

But what if, for a change, the library would, and as the great comedians over at Monty Python once put it, adapt, adopt and improve upon user input and, as far as I can tell, also user requirements?

And I would most gladly provide you with a patch for making this happen, so that you do not have to do all the work.

SCNR.

@axnsoftware said: But what if, for a change, the library would, and as the great comedians over at Monty Python once put it, adapt, adopt and improve upon user input and, as far as I can tell, also user requirements?

And I would most gladly provide you with a patch for making this happen, so that you do not have to do all the work.

SCNR.

If we did that we’d have lots of funny code like this in our engine. Again, the interface is for the audio renderer; can you tell me what the audio renderer would do with a setVolume method? Please don’t act as if I would reject sensible extensions here; you clearly misunderstand what the interface is for if you want us to add this.

No objection to your objection on my part. Ultimately, this is not about the renderer, as all of the required code needs to be implemented in the audio node and the audio listener.
Apart from that, the audio renderer must expose the audio listener via a public getListener() method, so that the audio node can access the listener and query it for its assigned audio track’s master volume.

The listener will keep track of the per-track master volumes and expose them via a simple API extension, namely setTrackMasterVolume() and getTrackMasterVolume().
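Inside the listener, the extension amounts to little more than this (a sketch of the patch, not the stock Listener API; it needs java.util.Map/HashMap imports):

private final Map<Integer, Float> trackMasterVolumes = new HashMap<Integer, Float>();

public void setTrackMasterVolume(int track, float volume) {
    trackMasterVolumes.put(track, volume);
}

public float getTrackMasterVolume(int track) {
    Float volume = trackMasterVolumes.get(track);
    return volume != null ? volume : 1f; // unset tracks default to full volume
}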

In the audio node, the master volume of the assigned track is then taken into account in updateGeometricState(), play() and playInstance(), and the effective volume is adjusted accordingly.

Basically like so:

float masterVolume = getRenderer().getListener().getTrackMasterVolume(getTrack());
float actualVolume = getVolume() * masterVolume;

where masterVolume is in the range of 0.0f (muted) to 1.0f (maximum volume).

And, yes, I have already implemented the feature and I have already begun to implement a test case for it. And, apart from the immature looks of it, it seems to be working just fine.

As for the test case, I have found that audio nodes, regardless of whether they are played using playInstance() or play(), will not be assigned a channel by the underlying audio renderer implementation unless they have been attached to the rootNode or some child node thereof. In the case of playInstance()/playSourceInstance() this is correct, but in the case of play()/playSource() it seems to be an error. Looking through the code, I cannot imagine how this could ever happen, so I have not been able to track the issue down to its root cause. However, attaching the audio node to, for example, the root node remedies the situation, and the channel is properly assigned by the audio renderer implementation. This basically means that whenever I play an audio node that has not been attached as a child to the scene graph, that node will always have a channel id of -1 during its updateGeometricState() phase.
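A minimal repro of the observation, assuming the jME 3.0-era AudioNode constructor; the asset path is just a placeholder:

AudioNode gun = new AudioNode(assetManager, "Sound/Effects/Gun.wav", false);
gun.play(); // note: never attached to the rootNode
// observed: gun.getChannel() keeps returning -1, i.e. the audio renderer
// never assigns a channel to the source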

Apart from that, AudioNode#updateGeometricState will only update the location/position of audio sources that have been assigned a channel. While this seems to be OK, it might be an edge case or an over-optimization of some sort: depending on the current listener position and the audio node’s position, longer-playing positional sounds started via playInstance() might not be repositioned in space correctly.

The point of AudioNode is to attach it, to have sound “in the scene”. If you need anything else you will have to either use it as it is or implement your own AudioSource. Nobody said that the current system is perfect, in fact we plan to change it quite a bit. Still your propositions are specific to your problems and again you try to use the system as your own code logic. You don’t need to “get” the master volume from the renderer, you set it at some point, just save that.

Actually, my point is about extending the framework so that one does not have to either add multiple instances of the very same control, which keeps track of a simple application setting and sets the audio node’s volume according to that setting, or derive from AudioNode and use those nodes instead of the original audio node.
In fact, this can all be done inside the framework without much hassle. Perhaps at the cost of a wee bit of API compatibility, unless one introduced an AudioSourceExt interface or an AudioRendererExt interface to overcome that.

See also http://code.google.com/p/jmonkeyengine/issues/detail?id=625 for a patch that adds a global mechanism for controlling the volume of multiple, application-defined audio tracks.

Please see also http://hub.jmonkeyengine.org/forum/topic/volume-control/ for a rather dated feature request.

Not that it is an option that you can necessarily use, but it is another option:

tonegodGUI has all this setup in it. As well as global alpha for visuals.

If you are using another GUI library, you can gank the code from here if needed. But, it looks like you have that covered from your last post!

@normen said: The point of AudioNode is to attach it, to have sound "in the scene".

Your existing TestAmbient… “test case” says otherwise, as it does not attach the audio nodes to the root node or any child node thereof.

@t0neg0d said: Not that it is an option that you can necessarily use, but it is another option:

tonegodGUI has all this setup in it. As well as global alpha for visuals.

If you are using another GUI library, you can gank the code from here if needed. But, it looks like you have that covered from your last post!

Hm, I will have a look into this, but I doubt that it will fit my needs, especially when it comes to programmatically assigning audio tracks to individual audio sources and switching them between individual tracks and so on, with both the user and the game logic having the ability to set the volume for each of these tracks on an individual basis.

@axnsoftware said: Hm, I will have a look into this, but I doubt that it will fit my needs, especially when it comes to programmatically assigning audio tracks to individual audio sources and switching them between individual tracks and so on, with both the user and the game logic having the ability to set the volume for each of these tracks on an individual basis.

You misunderstand… this provides nothing more than a way of storing / applying global audio. The rest would be up to you. It’s not a game audio engine, just happens to have a way of applying global audio volume over whatever volume you have defined at the time you play an audio file.

@normen said: Nobody said that the current system is perfect, in fact we plan to change it quite a bit.

Great. However, I never said that the system is imperfect; I, and possibly others, may just need some extra functionality in the framework, that is all.

Still your propositions are specific to your problems and again you try to use the system as your own code logic.

Nope, it is about extending the framework so that it provides much-needed functionality, implemented in such a way that it saves CPU cycles during each frame.

You don't need to "get" the master volume from the renderer, you set it at some point, just save that.

I do not get the master volume from the renderer, I get it from the listener. Please see the above posting and the provided patch.

@t0neg0d said: You misunderstand... this provides nothing more than a way of storing / applying global audio. The rest would be up to you. It's not a game audio engine, just happens to have a way of applying global audio volume over whatever volume you have defined at the time you play an audio file.

Oh, I understood quite clearly. The global audio volume I can set via appinstance.listener.setVolume(). For that, I do not need a UI framework. Apart from that, I will definitely have a look into your framework after being done fiddling around with Nifty.

@axnsoftware said: Hm, I will have a look into this, but I doubt that it will fit my needs, especially when it comes to programmatically assigning audio tracks to individual audio sources and switching them between individual tracks and so on, with both the user and the game logic having the ability to set the volume for each of these tracks on an individual basis.

See, if you expect any library to be the code that you want to create, you will run into this issue every time. I don’t know how to explain it otherwise: your approach is flawed; you always try to make the library look like the code you are about to write. Just write your own code that works the way you want it to, and try to understand and accept how the library works instead of trying to bend it into another shape.

@axnsoftware said: Oh, I understood quite clearly. The global audio volume I can set via appinstance.listener.setVolume(). For that, I do not need a UI framework. Apart from that, I will definitely have a look into your framework after being done fiddling around with Nifty.

Also try out Lemur… depending on your needs. Awesome work there.

@normen said: See, if you expect any library to *be* the code that you want to create, you will run into this issue every time. I don't know how to explain it otherwise: your approach is flawed; you always try to make the library look like the code you are about to write. Just write your own code that works the way you want it to, and try to understand and accept how the library works instead of trying to bend it into another shape.

Come on, just have a look at the patch and envision the possibilities that could arise from having that kind of functionality. And, given previous similar feature requests, it seems to me that this would be the most logical step to take in order to satisfy customer demand.

Alternatively, I could always fork the existing engine and create my very own version from it, if that is what you want, in order to get things done.

@axnsoftware said: Come on, just have a look at the patch and envision the possibilities that could arise from having that kind of functionality.

The patch would be easier to read if it didn’t have your non-jME code formatting changes included… Anyway, the code binds completely independent code to the audio renderer and uses it in an extended AudioSource (namely AudioNode). You could just as well have your own instance of the listener, completely independent from the renderer, extend AudioNode, and use your listener there yourself. It’s the old problem again: trying to cram your stuff into the library code.

@axnsoftware said: Come on, just have a look at the patch and envision the possibilities that could arise from having that kind of functionality. And, given previous similar feature requests, it seems to me that this would be the most logical step to take in order to satisfy customer demand.

Alternatively, I could always fork the existing engine and create my very own version from it, if that is what you want, in order to get things done.

Global volume control is game specific… granted, most games provide the option, but it is still outside the scope of the engine itself. Instead of a patch, why not submit your control as a proposed add-in? BetterCharacterControl, for example, is not physics specific… it’s game specific… even more so, it’s specific to games with an upright character. So, perhaps submitting the Control instead of a patch might be a better approach?
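For what it’s worth, such an add-in Control might look something like this; TrackVolumeControl and its static track registry are hypothetical, not existing jME (or tonegodGUI) code, and they simply recast the track-id idea from the patch as self-contained application code:

import com.jme3.audio.AudioNode;
import com.jme3.renderer.RenderManager;
import com.jme3.renderer.ViewPort;
import com.jme3.scene.control.AbstractControl;
import java.util.HashMap;
import java.util.Map;

public class TrackVolumeControl extends AbstractControl {

    // shared per-track master volumes; track ids are application defined
    private static final Map<Integer, Float> trackVolumes = new HashMap<Integer, Float>();

    public static void setTrackVolume(int track, float volume) {
        trackVolumes.put(track, volume);
    }

    private final int track;
    private final float baseVolume;

    public TrackVolumeControl(int track, float baseVolume) {
        this.track = track;
        this.baseVolume = baseVolume;
    }

    @Override
    protected void controlUpdate(float tpf) {
        Float master = trackVolumes.get(track);
        ((AudioNode) spatial).setVolume(baseVolume * (master != null ? master : 1f));
    }

    @Override
    protected void controlRender(RenderManager rm, ViewPort vp) {
        // purely logical control, nothing to render
    }
}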