Recording Video & Audio

Hey all,

So I would like Gentrieve 2 to have built-in recording so people can upload “Speed Run” videos to YouTube. I knew of the VideoRecorderAppState, which I was planning to use, until I realized it doesn’t record audio :frowning:
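
For reference, this is all it takes to use (the file name is just an example), which is why the missing audio is such a shame:

```java
import com.jme3.app.SimpleApplication;
import com.jme3.app.state.VideoRecorderAppState;
import java.io.File;

public class RecordingDemo extends SimpleApplication {
    @Override
    public void simpleInitApp() {
        // Captures video only -- no audio track, which is the whole problem.
        stateManager.attach(new VideoRecorderAppState(new File("speedrun.avi")));
    }

    public static void main(String[] args) {
        new RecordingDemo().start();
    }
}
```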

Fortunately, according to this: https://wiki.jmonkeyengine.org/legacy/doku.php/jme3:advanced:capture_audio_video_to_a_file

I can get audio & video via the “Advanced Way”. However, it looks like this creates separate video and audio files, which doesn’t make uploading to YouTube simple, since the two must be combined first. I’ve been looking around for a solution and found Xuggler and JAVE, but they are both GPL (and I don’t feel like releasing the source code for my game). I tried compiling Xuggler myself to get an LGPL version, but hit source code errors (ugh, compiling from source never seems to work).
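
One low-tech workaround would be to shell out to a separately installed ffmpeg binary after capture, so nothing gets linked into the game itself. A sketch (file names and the container choice are illustrative):

```java
public class MuxAudioVideo {
    public static void main(String[] args) throws Exception {
        // Merge the two capture files into one container without re-encoding.
        Process p = new ProcessBuilder(
                "ffmpeg",
                "-i", "video.avi",  // video-only capture
                "-i", "audio.wav",  // audio-only capture
                "-c", "copy",       // copy both streams as-is
                "output.mkv")
            .inheritIO()            // show ffmpeg's console output
            .start();
        System.exit(p.waitFor());
    }
}
```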

JFFMPEG (http://jffmpeg.sourceforge.net/download.html) looks like it can easily be made LGPL, but it says it uses JMF (Java Media Framework), and pretty much everything I read about JMF says it is old and shouldn’t be used. I found a JMF alternative here: http://fmj-sf.net/, but this all seems to be getting overly complex, and I’m wondering if anyone else has a straightforward solution for recording video, with audio, into a single movie file.

EDIT: I also just read that enabling audio recording would result in the player not hearing any audio during gameplay… maybe this just won’t work as hoped :frowning:

Thanks,

- Phr00t

Just a note, since I’m not familiar with the projects and might be misunderstanding you… but if an open-source project is GPL, then you can’t just turn it into LGPL on your own. That’s the author’s decision.

JFFMPEG says this: “The GPL classes may be separated from the rest of JFFMPEG by simply removing all classes that implement the GPLLicense interface.”

So, if those classes (which I don’t think I would need) are removed, only LGPL code would be left.

However, even if all these problems were solved, not having any sound play for the player during recording is probably a showstopper. Maybe I should just link to CamStudio and be done with it. :confused:

I guess there is a reason this isn’t more commonly built into games. :frowning:

Syncing audio/video can be quite a bitch, yes, especially when you have no sync data in the first place (the audio buffer size and the frame rate in a game are not synced per se, and neither are the frame rate and the actual playback start time). I started working on a Syphon library for jME (Syphon is an OSX-only framework for sharing textures between applications), which would allow piping a “video stream” of textures into another application. Together with a virtual audio device and a recorder application on the “other side”, that would let me grab audio and video from the game as “finished” streams. The same could be done one way or another on Windows, but I guess starting at the system level rather than at the game/engine level won’t really be avoidable if you don’t want to go through sync hell (I know what I am talking about ^^).
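
To put rough numbers on that mismatch (every constant below is an assumed, illustrative value, not a measurement):

```java
// Why raw audio/video streams drift apart without timecodes.
public class DriftExample {
    public static void main(String[] args) {
        int sampleRate = 44100;   // PCM samples per second
        int bufferSize = 1024;    // samples per audio mixing buffer (assumed)
        double audioChunksPerSec = (double) sampleRate / bufferSize; // ~43.07

        double videoFps = 58.7;   // uncapped render rate; varies frame to frame

        // Audio advances in fixed ~23.2 ms steps, video in ~17.0 ms steps,
        // and the video step size wobbles besides. Without per-frame
        // timecodes there is no shared clock to line the streams up against.
        System.out.printf("audio step: %.1f ms%n", 1000.0 / audioChunksPerSec);
        System.out.printf("video step: %.1f ms%n", 1000.0 / videoFps);
    }
}
```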

CamStudio it is then! :stuck_out_tongue:

I’ve captured HD video with a Canon 1100D on a tripod. You have to tinker with the screen refresh rate and so on. I’m sure sound would be doable too. Low-tech, I know.

This definitely can be done… You can get the video frames in RGB format by using Renderer.readFrameBuffer(). The audio is available through an OpenAL loopback extension. With just this data you can write the frames by compressing them to MJPEG (since Java comes with a JPEG encoder), the PCM audio can be written uncompressed. Most media containers support this combination of codecs, so now all you need is just the logic to take the data you have and put it into the video file.
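
For the video half of that recipe, here is a minimal sketch against jME3’s SceneProcessor interface (the class name and output handling are illustrative; the MJPEG/AVI container logic and the OpenAL loopback audio are still up to you):

```java
import com.jme3.post.SceneProcessor;
import com.jme3.renderer.RenderManager;
import com.jme3.renderer.ViewPort;
import com.jme3.renderer.queue.RenderQueue;
import com.jme3.texture.FrameBuffer;
import com.jme3.util.BufferUtils;
import com.jme3.util.Screenshots;

import javax.imageio.ImageIO;
import java.awt.image.BufferedImage;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.ByteBuffer;

public class JpegFrameGrabber implements SceneProcessor {

    private RenderManager renderManager;
    private ByteBuffer cpuBuffer;
    private BufferedImage image;
    private final OutputStream sink; // wherever the MJPEG frames should go

    public JpegFrameGrabber(OutputStream sink) {
        this.sink = sink;
    }

    public void initialize(RenderManager rm, ViewPort vp) {
        renderManager = rm;
        reshape(vp, vp.getCamera().getWidth(), vp.getCamera().getHeight());
    }

    public void reshape(ViewPort vp, int w, int h) {
        cpuBuffer = BufferUtils.createByteBuffer(w * h * 4);
        image = new BufferedImage(w, h, BufferedImage.TYPE_4BYTE_ABGR);
    }

    public boolean isInitialized() {
        return renderManager != null;
    }

    public void preFrame(float tpf) { }

    public void postQueue(RenderQueue rq) { }

    public void postFrame(FrameBuffer out) {
        // Pull the finished frame back from the GPU as raw pixels...
        cpuBuffer.clear();
        renderManager.getRenderer().readFrameBuffer(out, cpuBuffer);
        // ...convert it into an AWT image...
        Screenshots.convertScreenShot(cpuBuffer, image);
        try {
            // ...and compress it with the JPEG encoder that ships with Java.
            ImageIO.write(image, "jpg", sink);
        } catch (IOException e) {
            throw new RuntimeException("Failed to write video frame", e);
        }
    }

    public void cleanup() { }
}
```

Attach it with viewPort.addProcessor(new JpegFrameGrabber(out)). Note that readFrameBuffer() stalls the rendering pipeline while the pixels are copied back, so expect the same kind of FPS hit VideoRecorderAppState has.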

@Momoko_Fan said:
so now all you need is just the logic to take the data you have and put it into the video file.

...and sync audio and video :)
@normen said:
...and sync audio and video :)

It seems that if you do it right, you just need to shift the audio by a constant number of samples to account for the time it takes OpenAL to mix the audio. Now that I think about it, most GPUs queue up a certain number of frames too, so you need to convert the number of queued frames to time, convert the audio latency to time, and take the difference: that's your shift.
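
In code, that difference works out to something like this (every constant below is an assumed, illustrative value; the real ones have to be measured per system):

```java
// Back-of-the-envelope for the constant audio shift described above.
public class SyncShift {
    public static void main(String[] args) {
        int    sampleRate      = 44100;  // PCM samples per second
        double audioLatency    = 0.050;  // seconds OpenAL buffers/mixes (assumed)
        int    queuedGpuFrames = 2;      // frames the driver keeps in flight (assumed)
        double frameRate       = 60.0;   // capture frame rate

        double videoLatency = queuedGpuFrames / frameRate;          // ~0.033 s
        double relativeLag  = audioLatency - videoLatency;          // ~0.017 s
        long   shiftSamples = Math.round(relativeLag * sampleRate); // ~735

        // Delay (or advance) the audio track by shiftSamples relative to the
        // first video frame when muxing, so the two streams line up.
        System.out.println("shift audio by " + shiftSamples + " samples");
    }
}
```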

Yeah, you have to write a timecode for the video frames with that data and know all the latencies ^^