TestMusicStreaming throws IllegalStateException

After upgrading to 3.0 Stable, I am experiencing audio issues. While investigating, I tried running the jme3test.audio.TestMusicStreaming test and encountered an apparently unrelated error:

IllegalStateException: Only mono audio is supported for positional audio nodes

Perhaps the test case should use a different OGG file. Or perhaps this is a regression.

The test case should use a different audio file, or more likely just call setPositional(false). Positional is true by default, and many cases like this don’t need positional audio at all.
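For reference, a minimal sketch of that fix, assuming the stock test asset and the usual assetManager field:

    // Streamed background music: stereo is fine as long as the node is non-positional.
    AudioNode music = new AudioNode(assetManager, "Sound/Environment/Nature.ogg", true);
    music.setPositional(false); // positional is true by default
    music.play();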


It turns out the error wasn’t unrelated at all. My game was throwing the same exception, but because the game sound was played by a Nifty button, the exception was masked.

I guess I either punt on positional sound or else convert my stereo sound effects to mono.

@sgold said: It turns out the error wasn’t unrelated at all. […]

Mono and positional. Stereo/mono and not positional. Those are really the only choices. So, yeah. You will have to choose one.
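In code, the two legal combinations look roughly like this (the asset paths and the playerPosition vector are placeholders):

    // Mono and positional: the sound is placed in the 3D scene.
    AudioNode shot = new AudioNode(assetManager, "Sound/Effects/Gun.wav", false);
    shot.setPositional(true);
    shot.setLocalTranslation(playerPosition);
    shot.playInstance();

    // Stereo (or mono) and not positional: plays "in your head", e.g. music or UI sounds.
    AudioNode music = new AudioNode(assetManager, "Sound/Music/Theme.ogg", true);
    music.setPositional(false);
    music.play();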

Done. This wasn’t an issue with RC2.

@sgold said: Done. This wasn't an issue with RC2.

RC2 would silently fail to position audio that was stereo. The only thing that changed was that now it gives you an error when you try to play a sound positionally that is not mono.


That’s good to know. Thanks.

I’ve proposed fixing the test: here

Actually, let’s talk about this.
Since we’re in 3.1, let’s think a bit more about it…
Shouldn’t we check whether the sound is mono and, if it’s not, just disable positional and log a warning instead of crashing? (Sketched below.)

Also, about reverb being defaulted to true… I never got that one.

@normen, couldn’t we generalize the audioSource concept by making an AmbientAudioSource that would just play ambient sounds and not be in the scene graph?
And have the AudioNode be positional, because that’s the point of having it as a node in the scene graph.
AudioNode would be to sound as LightNode and CameraNode are to light and camera. Only mono audio could be assigned to them.
IMO it would be a lot more consistent.
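A rough sketch of the warn-and-disable behavior proposed above; this is hypothetical engine code, and the field and logger names are invented for illustration:

    // Hypothetical guard inside AudioNode, run before positional playback:
    if (positional && data.getChannels() > 1) {
        logger.log(Level.WARNING,
                "{0} has {1} channels; only mono audio can be positional."
                + " Falling back to non-positional playback.",
                new Object[]{audioKey.getName(), data.getChannels()});
        positional = false; // degrade gracefully instead of throwing
    }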

@nehon said: […] couldn't we generalize the audioSource concept by making an AmbientAudioSource that would just play ambient sounds and not be in the scene graph? […]

Actually that’s already possible: since I changed the Audio == AudioNode connection, there’s now an AudioSource interface, so you can make any audio source. The (incomplete) audio system I have locally goes a slightly different way though, and uses only a few AudioSources while mixing other audio before the OpenAL layer.

The reason I put effort into documenting this “feature” is that it doesn’t do what a naive developer would expect. If we can change the interface to make it more intuitive, that would be great.

The naive developer (I think I can speak for him or her :-)) expects that if you play a stereo sound on a positional AudioNode, both channels play from that location, as if there were two AudioNodes at that location, one for each channel. No exception, no warning message, just do the most reasonable thing. Would this be difficult to implement?

For non-positional audio, I would prefer an interface which does not include irrelevant methods like setParent(), getChildren(), setShadowMode(), setLocalTransform(), addLight(), and setQueueBucket(). The interface should be as independent of the scene graph (and graphics in general) as possible.

For positional audio, my preferred interface would be an AudioControl which I could add to any spatial in my scene. This would avoid the need to subclass com.jme3.scene.Node.

@sgold said: […] The naive developer expects that if you play a stereo sound on a positional AudioNode, both channels play from that location […]

It’s more of a general misunderstanding of stereo vs. mono vs. positional. Stereo already implies that the contents of the audio file have a position in the stereo field, e.g. panned hard right or left or somewhere in between. If you wanted to place that in a 3D sound field, you’d have to specify a direction and angle for that audio (as if you were positioning your two loudspeakers in the room). That in turn would be the same as placing the individual audio sources in the stereo file in the 3D field separately. If you just play both channels from the same 3D position, that’s the same as reducing the stereo file to a mono audio file containing both the left and right channels and placing that in the 3D field. Add to that, even stereo sound files can encode positional information via psychoacoustic “tricks” (as you might have in your TV’s “3D sound” settings).

Generally there’s no disagreement that the audio functionality of jME3 is not very exhaustive; the AudioNode (or AudioSource) basically just wraps the OpenAL functionality (and its limits). We do intend to extend this. As said, you can easily make your own AudioControl with the AudioSource interface.
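For instance, without implementing AudioSource directly, such an AudioControl could wrap an AudioNode in an AbstractControl that copies the controlled spatial’s world position each frame. A sketch (the class name and details are invented):

    import com.jme3.audio.AudioNode;
    import com.jme3.renderer.RenderManager;
    import com.jme3.renderer.ViewPort;
    import com.jme3.scene.control.AbstractControl;

    /** Plays a mono AudioNode at the location of whatever spatial it is added to. */
    public class SimpleAudioControl extends AbstractControl {

        private final AudioNode audio;

        public SimpleAudioControl(AudioNode audio) {
            this.audio = audio;
            audio.setPositional(true); // requires a mono source
        }

        public void play() {
            audio.play();
        }

        @Override
        protected void controlUpdate(float tpf) {
            // Follow the controlled spatial; the AudioNode itself
            // never has to be attached to the scene graph.
            audio.setLocalTranslation(spatial.getWorldTranslation());
        }

        @Override
        protected void controlRender(RenderManager rm, ViewPort vp) {
        }
    }

Usage would then be along the lines of enemy.addControl(new SimpleAudioControl(growlSound)).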

No need to make the ideal the enemy of the good. Simulating a stereo music system in a 3-D environment might make a cool demo, but it’s not something I ever intend to do in a game. Nor am I suggesting that TestMusicStreaming should be such a demo.

The thing is, sometimes a game uses sound effects or spoken sounds which were recorded for other purposes. Many of these happen to come in stereo, but if they contain pans or other stereo effects, those are irrelevant to the game. What I want to do is play the sound effect or spoken sound at a specific location without worrying whether the asset is mono or stereo. Automatically downmixing to mono would give me that. Second choice would be to implement automatic downmixing in my game, which I could do if AudioNode or AudioSource allowed me to play a specific channel of a stereo sound asset.

Third choice would be to downmix by hand, which is what I do now. In that case it would be nice if the SDK included a sound editing tool.
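Downmixing by hand doesn’t strictly require an editor, though; a small offline tool will do. Here is a sketch using javax.sound.sampled, assuming 16-bit signed little-endian PCM stereo input (the file names are placeholders):

    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.File;
    import javax.sound.sampled.AudioFileFormat;
    import javax.sound.sampled.AudioFormat;
    import javax.sound.sampled.AudioInputStream;
    import javax.sound.sampled.AudioSystem;

    public class StereoToMono {
        public static void main(String[] args) throws Exception {
            AudioInputStream in = AudioSystem.getAudioInputStream(new File("stereo.wav"));
            AudioFormat src = in.getFormat();

            // Read the whole stream into memory.
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            byte[] chunk = new byte[4096];
            for (int n; (n = in.read(chunk)) > 0; ) {
                buf.write(chunk, 0, n);
            }
            byte[] stereo = buf.toByteArray();
            in.close();

            // Average the left and right samples of each 4-byte stereo frame.
            byte[] mono = new byte[stereo.length / 2];
            for (int i = 0, o = 0; i + 3 < stereo.length; i += 4, o += 2) {
                int left  = (short) ((stereo[i + 1] << 8) | (stereo[i] & 0xff));
                int right = (short) ((stereo[i + 3] << 8) | (stereo[i + 2] & 0xff));
                int avg = (left + right) / 2;
                mono[o]     = (byte) avg;
                mono[o + 1] = (byte) (avg >> 8);
            }

            AudioFormat dst = new AudioFormat(src.getSampleRate(), 16, 1, true, false);
            AudioInputStream out = new AudioInputStream(
                    new ByteArrayInputStream(mono), dst, mono.length / 2);
            AudioSystem.write(out, AudioFileFormat.Type.WAVE, new File("mono.wav"));
        }
    }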

@sgold said: […] What I want to do is play the sound effect or spoken sound at a specific location without worrying whether the asset is mono or stereo. […]

Obviously you should prepare your assets accordingly before running the game. If the stereo position is irrelevant, they would/could be mono files. If they are stereo files that contain the same info on both channels, they just waste space. If somebody recorded a mono (e.g. voice) sound in a stereo file, he’s a (sound) retard. If it were a multilayer image file, you’d just reduce it to one layer in Photoshop before exporting instead of creating a new Material and letting the GPU do that, wasting resources.

I don’t think that the engine should waste resources this way without mentioning it, as in “just downmix it if it’s stereo”. As said, it’s important to know about these things, and no proper game sound designer will make these mistakes. There’s a reason there’s a difference between pan (mono) and balance (stereo) in stereo mixing. So even if this were handled somehow by the engine, you’d always get a warning and should always convert your files to mono before deploying/selling the game.

If you get around to making a stereo-to-mono audio tool for the SDK, I’ll be happy to add it; for now, free audio editors like Audacity make this trivial.

I suppose that since resources are being wasted, a warning is in order. Throwing an exception would serve as a fine warning, if I could count on seeing the exception.

Unfortunately, Nifty seems to silently catch all exceptions triggered by Nifty controls. Can anything be done about that?

I’m curious how you would prefer to fix TestMusicStreaming for 3.1.

I’ll keep “an audio tool for the SDK” in the back of my mind as a future project. Could be fun!

@sgold said: […] I'm curious how you would prefer to fix TestMusicStreaming for 3.1. […]

TestMusicStreaming is fixed in svn.

@sgold said: […] Unfortunately, Nifty seems to silently catch all exceptions triggered by Nifty controls. Can anything be done about that? […]

Nifty would more than likely stop working in most people’s games if this were to happen, for instance with duplicate ids on controls. Have you tried actually tracking one of those down? It’s a nightmare… even if you could narrow down which warning is relevant, it still gives you no clue as to what the actual problem is.

As for automatically converting stereo to mono… this is a truly bad idea. Creating proper audio assets is as critical as any other part of game dev, and it is worth the time to learn how this should be done and to do it properly. @normen gave a clear and precise description of why mono audio files are used for positional audio, and:

Making the assumption that stereo files were incorrectly formatted is likely to be just as wrong as assuming that setPositional was improperly called. There is NO WAY to know which the developer meant, and I’d be beyond frustrated if I made this mistake and JME decided what I “really meant” to do with nothing more than a warning.

Hi, I’m having the same problem as the OP and am quite new to JME. I downloaded a WAV file just to see if it would work, and it did, regardless of whether setPositional was true or false. Then I used a custom WAV file, which would throw the error when positional was set to true. However, when I set it to false, the sound plays but stutters; I think it just restarts from the beginning every second. Below is the method, which is what the JME audio tutorial suggests. Any help as to why this might be happening?

private void initAudio() {
    bg = new AudioNode(assetManager, "Sounds/111.wav", true);
    bg.setLooping(true); // activate continuous playing
    bg.setPositional(false);
    bg.setVolume(3);
    rootNode.attachChild(bg);
    bg.play(); // play continuously
}

EDIT: I was able to fix the problem by setting looping to false.

Yes, looping and streaming are incompatible in JME 3.0.x.

And in all versions, positional audio requires mono sources.
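To tie it together, here is a corrected version of the snippet above under those two constraints (same placeholder asset):

    private void initAudio() {
        // Streamed sources cannot loop in 3.0.x: either stream without looping...
        bg = new AudioNode(assetManager, "Sounds/111.wav", true); // stream = true
        bg.setLooping(false);
        bg.setPositional(false); // a stereo file must stay non-positional
        bg.setVolume(3);
        bg.play();

        // ...or load the sound fully buffered (stream = false) if it must loop:
        // bg = new AudioNode(assetManager, "Sounds/111.wav", false);
        // bg.setLooping(true);
    }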