Wwise integration in JME, good or bad idea?

Hello! I talked with my sound artist recently and he told me that Wwise is a nice tool he could use to ease the sound integration process.

Wwise is sound middleware that allows you to do some stuff I don't really understand, but hey, I'm not a sound guy xD Anyway, from what I have seen it's built in C++, but there are tools that let a Java application call into native C++ code.

But how good is it? Is it reasonable to add that C++ element to my code for sound's sake? Will it slow down the application? Will it stop me from deploying my app on Android? Will it add a huge amount of complexity to my application deployment? I'm kind of a Java purist, but that doesn't really mean anything x)

Well, the integration can be difficult but also easy; it depends on how good you are with Wwise, C++, and JNI/JNA.

The main question is: what does it add?
Certainly a maintenance cost, as you now have to maintain the binding.
Does it add features that jME cannot already provide? (If it doesn't add anything useful here, throw it out ASAP; middleware creep can become a problem.)

If jME cannot do the feature as easily, compare that with the cost of maintaining a middleware; usually, writing three times more code in jME will still be faster for all but large projects.

As for Android, be aware that there are ARMv7, ARMv6, ARM64, and x86 CPUs, so you need to compile the C code for all of them (or the vendor needs to provide the binaries); I'm not sure if there are further limitations.
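For what it's worth, a JNA-based binding would look roughly like the sketch below. This is only a minimal sketch under a big assumption: JNA can only call C-style exports, not C++ classes, so you would first have to build a small C wrapper around the Wwise SDK yourself. The library name `wwise_wrapper` and the `ww_*` functions are hypothetical placeholders, not the real Wwise API.

```java
import com.sun.jna.Library;
import com.sun.jna.Native;

// Hypothetical C wrapper around the Wwise SDK, built as a shared library
// (.dll/.so/.dylib) per target platform. None of these names are real Wwise calls.
public interface WwiseWrapper extends Library {
    WwiseWrapper INSTANCE = Native.load("wwise_wrapper", WwiseWrapper.class);

    boolean ww_init();                      // initialize the sound engine
    void    ww_postEvent(String eventName); // trigger an event authored in Wwise
    void    ww_renderAudio();               // call once per frame from the update loop
    void    ww_shutdown();
}
```

Note that you would have to ship that native library compiled for every platform and ABI you target (the ARM/x86 point above), and every call crosses the JNI boundary, which is exactly the maintenance cost mentioned earlier.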


Thanks, this does confirm the doubts I had. But no matter what, I feel like I still need some sound integration software somewhere :confused: I can't expect my sound guy to work on everything, and I ain't coding it xD

What software do people usually use for sound in jME?

I don’t understand the question.

jME is built with OpenAL for sound;
take a look at some of the examples containing an AudioNode.
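For reference, basic playback in jME3 looks like the HelloAudio tutorial. A minimal sketch, assuming the jme3-testdata assets are on the classpath (the ocean-waves .ogg is just the stock example file):

```java
import com.jme3.app.SimpleApplication;
import com.jme3.audio.AudioData;
import com.jme3.audio.AudioNode;

public class HelloAudio extends SimpleApplication {

    @Override
    public void simpleInitApp() {
        // Buffered, non-positional ambient sound that loops forever.
        AudioNode ambient = new AudioNode(assetManager,
                "Sound/Environment/Ocean Waves.ogg", AudioData.DataType.Buffer);
        ambient.setLooping(true);
        ambient.setPositional(false);
        ambient.setVolume(2f);
        rootNode.attachChild(ambient);
        ambient.play();
    }

    public static void main(String[] args) {
        new HelloAudio().start();
    }
}
```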

Yes, you can play sounds really easily, but you can't really mix them.

For example, if you have multiple sound nodes in a town (wind, fire, the character's steps and breathing, plus the music), you need to deal with the intensity of all of those, and doing that without an integration tool seems… difficult.

I still cannot really follow.

If you add multiple sounds, they will play and mix based on the settings you gave them via OpenAL.
What exactly do you expect this mixing to do? Maybe then I can understand better.
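To make "mix based on the settings" concrete: each AudioNode has its own volume (plus position, pitch, etc.), and balancing sources against each other is just a matter of setting those values, possibly every frame. A rough sketch, assuming it runs inside a SimpleApplication where assetManager is available; the asset paths are made up:

```java
// Two looping ambient sources; the "mix" is just their relative volumes.
AudioNode wind = new AudioNode(assetManager, "Sounds/wind.ogg",
        AudioData.DataType.Buffer);  // hypothetical asset
AudioNode fire = new AudioNode(assetManager, "Sounds/fire.ogg",
        AudioData.DataType.Buffer);  // hypothetical asset
wind.setLooping(true);
fire.setLooping(true);
wind.play();
fire.play();

// e.g. called from simpleUpdate() when the weather or the player's position changes:
wind.setVolume(0.8f);  // wind dominates outdoors
fire.setVolume(0.3f);  // fire stays faint unless the player is close
```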

With wwise you can adjust settings in real time, change transitions and such without touching the code.

Ah, I see. Without code, stock jME cannot do similar things.

Exactly, and because most of the time sound artists are not programmers and need to work on lots of files at the same time, you need some sort of help :confused: And I'm wondering what help I could bring right now.

Sound artists make tracks; they mix sounds in their studio app of choice to make music.
But I don't think you have to mix a whole song directly in a game. E.g. the main theme isn't mixed.

You mix tracks in jME as well, as they can be played simultaneously (maybe with some sync issues). You will more often mix ambient/environment sounds, which OpenAL can do.

I agree that OpenAL is quite mysterious. There are a lot of different settings you can use, and I don't know any of them.
So I think you are looking for a WYSIWYG way to edit the OpenAL settings so your audio guy can handle it. Which sounds good… let's build this into the SDK.


Off-topic: EAX for environment reverberation is based on a hardware feature, and in some cases people don't have the correct driver for it. It's a shame that it's still hardware-based.

Yeah, that sort of thing is usually handled by a scene editor. You control the sound environment in the scene by placing audio nodes in the right places and configuring them.
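In code terms, "placing and configuring" an audio node usually means making it positional and tuning its attenuation. A sketch with illustrative values and a made-up asset path, again inside a SimpleApplication:

```java
// A 3D sound tied to a spot in the scene, e.g. a campfire.
AudioNode fire = new AudioNode(assetManager, "Sounds/fire.ogg",
        AudioData.DataType.Buffer);       // hypothetical asset
fire.setPositional(true);                 // attenuated by distance from the listener
fire.setLocalTranslation(new Vector3f(10f, 0f, -5f));
fire.setRefDistance(5f);                  // full volume within 5 world units
fire.setMaxDistance(50f);                 // attenuation stops past 50 units
fire.setLooping(true);
rootNode.attachChild(fire);
fire.play();
```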

EAX has been obsolete for quite a while. It's been replaced with EFX in OpenAL, which is done in software. jME3 supports it using the Environment class.
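For completeness, the EFX reverb goes through that Environment class roughly like this, inside a SimpleApplication where audioRenderer and assetManager are available (Dungeon is one of the built-in presets; the asset path is made up):

```java
// Apply a reverb preset to the audio renderer, then opt individual nodes in.
audioRenderer.setEnvironment(Environment.Dungeon);

AudioNode steps = new AudioNode(assetManager, "Sounds/steps.ogg",
        AudioData.DataType.Buffer);  // hypothetical asset
steps.setReverbEnabled(true);        // only reverb-enabled nodes use the effect
steps.play();
```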
