Sound physics engine

Has anyone ever heard of something like that? It sounds like a good and obvious idea to me, but I couldn't find anything like it googling around.

Imagine the following:

You model an object (give it a shape), define some parameters like material (wood, iron, concrete) and density, say whether it's solid or hollow, etc. Then you load it into your physics engine and, booooom, let it collide with your environment. You could even calculate the reverb/delay from your surroundings.
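To make the idea concrete, here is a minimal sketch of how such an engine might turn material parameters into sound, using simple modal synthesis (a bank of exponentially decaying sine modes excited by the impact). The material values and the function name are invented for illustration, not taken from any real engine:

```python
import math

# Hypothetical material presets: (base frequency in Hz, damping factor).
# The numbers are made up for illustration, not measured values.
MATERIALS = {
    "wood":     (220.0, 8.0),
    "iron":     (880.0, 2.0),
    "concrete": (110.0, 20.0),
}

def impact_sound(material, impact_velocity, duration=0.5, sample_rate=44100):
    """Render a collision as a small bank of decaying sine modes."""
    base_freq, damping = MATERIALS[material]
    samples = []
    for n in range(int(duration * sample_rate)):
        t = n / sample_rate
        s = 0.0
        # Sum a few overtones; higher modes decay faster.
        for k in range(1, 5):
            s += (1.0 / k) * math.exp(-damping * k * t) \
                 * math.sin(2 * math.pi * base_freq * k * t)
        samples.append(impact_velocity * s)
    return samples

sound = impact_sound("iron", impact_velocity=0.8)
print(len(sound))  # 22050 samples = 0.5 s at 44.1 kHz
```

A real implementation would derive the mode frequencies and dampings from the object's geometry and material constants (e.g. via finite element analysis) rather than from a lookup table, but the rendering step looks much like this.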

I know there are physical modeling synthesizers that do something like that for real instruments.

I can barely imagine the CPU power you would need for something like that, but I think it would be less than, for example, fluid simulation :wink:

Any ideas on that?


I don't even think it would be that difficult… I think the issue would be with the sound samples themselves. Basically you would probably have to create all of them yourself (even with royalty-free samples, I think there would still be an issue if you made any money off of a game built with them…)

Good idea though :D — it could even be incorporated into the physics engine, with people supplying their own samples.

I think what bitkid means is to synthesize the sounds, and thus largely eliminate the need for samples. And to answer your question, bitkid: no, I have never heard of any such thing, but I find the idea incredibly interesting!

I think purely synthesizing the sound would be incredibly difficult (and probably not likely to produce very good results)…

(but who knows, I am wrong a lot it seems)

Probably easier to take two samples (say glass and wood) and 'blend' the two to create the desired sound…
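The simplest form of that blending idea is a straight linear mix of two sample buffers. A sketch, with placeholder sample data (real "glass" and "wood" recordings would be thousands of samples long):

```python
def blend(sample_a, sample_b, mix=0.5):
    """Linearly mix two equal-length sample buffers.

    mix=0.0 -> all of A, mix=1.0 -> all of B.
    """
    if len(sample_a) != len(sample_b):
        raise ValueError("buffers must be the same length")
    return [(1.0 - mix) * a + mix * b for a, b in zip(sample_a, sample_b)]

glass = [0.0, 0.5, -0.5, 0.25]   # placeholder "glass" samples
wood  = [0.0, 0.2, -0.1, 0.05]   # placeholder "wood" samples
print(blend(glass, wood, mix=0.5))
```

Note that a time-domain mix like this just layers the two sounds on top of each other; blending that actually sounds like an in-between material would more likely interpolate in the frequency domain (morphing spectra or resonant modes rather than raw samples).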

"… thus largely eliminate the need for samples …"

That's basically the main idea behind it: just calculate the "resonance body" of a given object and let it sound :slight_smile:

As I said, I can't really estimate how much power you would need to do that, especially if you also want to calculate the room.

" … probably easier to take two samples (say glass and wood) and 'blend' the 2 to create the sound desired …"

Good idea, but the samples should be included in the engine, freeing the user from searching for the right ones.

Just some related links:

Some time ago there was another post about generating sounds dynamically, which also referred to a paper explaining how to create the samples.

Unfortunately I can't find it anymore :slight_smile:


Not exactly what I was looking for, but it might help with getting started:

See also

I've thought about something like this before: being able to completely synthesize sound.

The only thing you can do at this point is write a real-time sound engine and improve the performance later; don't worry about people telling you it's too processor-intensive. Get the basics working with a physics engine. I'd love to see this technique develop into an industry standard. I believe it would make for much more immersive games all around, especially for indie developers.

If there already is an experimental sound engine out there, I'd probably support it by making a game demo to show its usefulness.

What are your plans, if any?

There are various plans :wink:

At the moment I'm doing the sound design for a computer game where I need a lot of "physical" sounds … so what I do is search sample databases for the right sounds (or parts of sounds), load them into the sequencer, arrange them, then load the result into a wave editor and do the cutting/editing, etc.

Imagine the following (simplified!) example … you need the sound of a collapsing building. If I'm lucky I find the right sample; if not, I have to find samples of "small things colliding" and arrange them into one big bang. It would be pretty sweet to just model a house from small concrete bricks, give them some velocity and record the result :slight_smile: (It's just an example, so don't give me "it's much easier to do the sample searching/cutting".)

Another plan would be, for example, to build an interactive sound-generating device (you have a character walking through a world, interacting with the objects). At the moment I do this with a Max/MSP patch where I play the sounds.

Edit: @core-dump … I see where he is going, but that would be only one (small) part of what I'm thinking about …

I once saw an amazing documentary about the next generation of physical sound synthesis, and found quite an interesting paper about it.

I wrote a line about it in a post where a guy asked for interesting topics for his thesis…

…let me check the history…

…ah in April 2008…


And the paper can be found here:

This site might also be interesting… I haven't taken a closer look yet, but there is a link to a master's thesis about this topic:

That was the post I was looking for, .:emp…imm0|82:. :slight_smile:


It is a fuckin' interesting topic, and it would make life much easier for game developers if you got your sounds automatically just by parameterizing your physical objects.

Nice one, thanks … it will make for good bedside reading :slight_smile:

(also posted unintentionally on the original thread  :smiley: )

Edit: wow … just from reading the introduction it's already clear … it's exactly what I wanted to know and was talking about. Nice to see that something is happening in academic research in this direction :slight_smile:

I don’t think this would be practical in real time at the moment, at least on a large scale. For complex sounds such as a collapsing building, you would cache the sound of a simpler simulation that gives “believable” results (you mentioned something to this effect). I would argue this is much easier to do with sound than with graphics. If you look at 3D packages (Maya, 3ds Max) as opposed to real-time implementations (i.e. games), you will get the general idea of how to do what you want to do, conceptually at least. Graphics deals with the physics of light, so those packages give us materials and shaders to tweak how light interacts with the objects we create, often achieving a realistic result. Your idea deals with the physics of vibrations instead, so just as there are Lambert, Blinn and Phong materials for graphics, each with its own set of properties, it would be helpful to replicate that structure for sound.
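The material analogy above could be sketched as a small data structure: a "sound material" that plays the role Lambert/Blinn/Phong materials play for light. The fields and values here are hypothetical; a real engine would derive the modes from the object's geometry and material constants rather than store them per material name:

```python
from dataclasses import dataclass, field

@dataclass
class SoundMaterial:
    """Per-material vibration properties, analogous to a shader material."""
    name: str
    mode_frequencies: list = field(default_factory=list)  # resonant frequencies (Hz)
    mode_dampings: list = field(default_factory=list)     # per-mode decay rates (1/s)
    hardness: float = 0.5  # scales how "sharp" the initial impact transient is

# Illustrative presets (invented numbers):
WOOD  = SoundMaterial("wood",  [180.0, 410.0, 950.0],   [6.0, 9.0, 14.0], hardness=0.4)
GLASS = SoundMaterial("glass", [900.0, 2100.0, 4800.0], [3.0, 4.0, 6.0],  hardness=0.9)

def describe(mat):
    return f"{mat.name}: {len(mat.mode_frequencies)} modes, hardness {mat.hardness}"

print(describe(GLASS))
```

Just as an artist tweaks a Phong material's specular exponent without touching the renderer, a sound designer could tweak these fields without touching the synthesis code.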

As for computing power: with architectures such as CUDA on the rise, and multi-core processors being the standard for the average computer buyer, I don’t think that will really be a problem.

Well, I think it is neither implemented at all nor fast enough for real-time apps … yet.

But if you look at what is possible with physics and shaders today, I bet it won’t take long (once there is an implementation) until someone finds a way to use it in real-time apps, even if it’s just a kind of fake solution, as with today’s “reflection” shaders. Real, physically correct ray tracing is likewise not (yet) usable in real-time apps with current computing power.