In 2017 Valve released this library (Steam Audio), which is designed to provide physics-based sound propagation, HRTF-based binaural audio, and some other fancy things to game engines and editors.
There are already implementations for Unity and UDK, but the interesting part is that there is a very nice C99 header that should allow us to write a JNI binding and integrate it with jME without too much effort.
I might do this on my own at some point, but to speed things up I’m asking if there is someone interested in collaborating.
From my initial investigation, I've come up with a list of pros and cons.
Pros:
Very nice features.
Multiplatform support (Windows, Linux, macOS, and Android)
Permissive license
Support for accelerated computation with AMD TrueAudio and OpenCL
With Steam Audio, sound appears to flow and wrap its way around mazes and corridors accurately, and adapts to changes in geometry and materials on the fly.
I wonder about the license as well:
Are you allowed to use it in DRM-free, non-Steam games? Does it require Steam to be running?
Is it like the NVIDIA stuff, where we'd have to hide the implementation/bindings until the user has agreed to their license terms and maybe sent in a company ID, etc.?
And the other question: is it feasible for regular 3D games, or is it more suited to ahead-of-time simulations? Though I guess you don't need the maze features and could instead work with simpler setups.
And the last question: can one simply use another jME audio listener and keep AudioNodes as they are?
The Steam Audio SDK is available free of charge, for use by teams of any size, without any royalty requirements. Steam Audio currently supports Windows, Linux, macOS, and Android. Just like Steam itself, Steam Audio is available for use with a growing list of VR devices and platforms.
Steam Audio SDK is not restricted to any particular VR device or to Steam.
If implemented correctly, it should behave pretty much like jME's current audio renderer when configured with minimal features. The more advanced features will certainly be more taxing on the CPU (or GPU), but it is a realtime library.
It also seems to have a built-in caching system, so we could integrate it into the jME SDK or add a flag that enables caching on the first run, to speed up some of the computations.
The way I imagine it, this would be implemented as an audio renderer for jME, so it would replace ALAudioRenderer in a desktop application. Once the audio renderer is attached, the audio nodes would be played through Steam Audio without any additional changes, provided we implement all the features jME needs.
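To give an idea of the per-frame glue such a renderer would need, here is a rough sketch of pushing the jME Listener pose across JNI. Note that setListenerPose is an invented wrapper name for illustration, not an actual Steam Audio function:

```java
import com.jme3.audio.Listener;
import com.jme3.math.Vector3f;

// Sketch only: one small piece of a hypothetical SteamAudioRenderer.
public class SteamAudioListenerSync {

    static {
        // Hypothetical JNI wrapper library around phonon.h
        System.loadLibrary("steamaudio_jni");
    }

    // Invented wrapper, not part of Steam Audio's C API.
    private static native void setListenerPose(
            float px, float py, float pz,   // position
            float dx, float dy, float dz,   // look direction
            float ux, float uy, float uz);  // up vector

    /** Would be called once per frame from the renderer's update(). */
    public void update(Listener listener) {
        Vector3f pos = listener.getLocation();
        Vector3f dir = listener.getDirection();
        Vector3f up  = listener.getUp();
        setListenerPose(pos.x, pos.y, pos.z,
                        dir.x, dir.y, dir.z,
                        up.x,  up.y,  up.z);
    }
}
```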
Really great then. I guess you could look into projects that generate the JNI bindings.
I'd love to help, but I fear it's not something you can put only a few hours of work into (you'd need to get into the whole thing first).
The JNI bindings should be the easiest part, since the API is quite simple. The biggest effort seems to be translating the information we get from jME into something meaningful to Steam Audio.
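To make that concrete, the Java side of the binding could look roughly like this. The function names mirror phonon.h, but the signatures below are simplified placeholders; the real C API works with structs, callbacks, and out-parameters:

```java
// Sketch of the Java side of a JNI binding; native handles are passed as longs.
public final class SteamAudio {

    static {
        // Hypothetical native wrapper library around phonon.h
        System.loadLibrary("steamaudio_jni");
    }

    // Simplified placeholders for the corresponding phonon.h functions.
    public static native long iplCreateContext();
    public static native long iplCreateEnvironment(long context);
    public static native void iplDestroyContext(long context);

    private SteamAudio() {}
}
```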
For now, the plan is to convert the audio data into a format usable by Steam Audio and get the Java mixer to play it.
Once we have this basic system working, we'll start implementing Steam Audio.
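As a minimal sketch of that first step, feeding raw PCM to the Java mixer through javax.sound.sampled could look like this (16-bit signed stereo at 44.1 kHz is just an assumed format):

```java
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.LineUnavailableException;
import javax.sound.sampled.SourceDataLine;

public class MixerPlayback {
    public static void main(String[] args) throws LineUnavailableException {
        // 44.1 kHz, 16-bit, stereo, signed, little-endian PCM (assumed format)
        AudioFormat format = new AudioFormat(44100f, 16, 2, true, false);
        SourceDataLine line = AudioSystem.getSourceDataLine(format);
        line.open(format);
        line.start();

        // The decoded (and eventually Steam-Audio-processed) samples go here.
        byte[] buffer = new byte[4096];
        line.write(buffer, 0, buffer.length); // silence for now

        line.drain();
        line.close();
    }
}
```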
Thanks for the information. I was not aware of the advantages this library has in terms of JNI binding. And I totally agree that the process will take a lot longer than a few hours, but it might be worth it.
looks at the Lemur wiki
I'm not saying that you're not doing great work with the code, but the beginner documentation is a mess. I learned Lemur purely by browsing through @MoffKalast's code, since there's no explanation anywhere of how to get started. Sure, once you know what you're doing, the javadoc is pretty well written, but until then there's about zero documentation to get started with.
The layout managers especially need their own wiki page, because nobody sane is going to use absolute positioning. I do wonder how I managed to miss the page you linked, since I somehow remember seeing a different getting-started page; my bad.
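For reference, this is roughly the kind of minimal example I was missing when starting out. As far as I can tell from the code, a Container lays out its children with SpringGridLayout by default, so only the window itself needs to be positioned:

```java
import com.jme3.app.SimpleApplication;
import com.simsilica.lemur.Button;
import com.simsilica.lemur.Container;
import com.simsilica.lemur.GuiGlobals;
import com.simsilica.lemur.Label;
import com.simsilica.lemur.style.BaseStyles;

public class LemurHello extends SimpleApplication {

    public static void main(String[] args) {
        new LemurHello().start();
    }

    @Override
    public void simpleInitApp() {
        // One-time Lemur setup with the bundled "glass" style.
        GuiGlobals.initialize(this);
        BaseStyles.loadGlassStyle();
        GuiGlobals.getInstance().getStyles().setDefaultStyle("glass");

        // The default layout (SpringGridLayout) positions the children;
        // no absolute positioning needed inside the container.
        Container window = new Container();
        window.addChild(new Label("Hello, Lemur"));
        window.addChild(new Button("Click Me"));

        guiNode.attachChild(window);
        window.setLocalTranslation(300, 300, 0); // place the window itself
    }
}
```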
Yeah, layout documentation is for sure needed. It’s stuck right behind a refactoring of how fill is done.
Edit: this topic probably deserves its own thread. I like feedback on what documentation is missing, because sometimes I just can't tell; I already know how to use it all.