Film production teams use so-called Foley tracks when background sound is not available. However, recreating the necessary sounds in a studio is expensive, so filmmakers often rely on pre-recorded tracks, and in that case the accompanying sound is frequently out of sync with the video.
A recent study on arXiv.org explores building a deep learning algorithm that learns the correspondence between audio and video and generates sound for a given video clip. The researchers propose a visually guided, class-conditioned deep adversarial Foley generation network. The samples produced by the GAN are conditioned on temporal visual information from a sequence of video frames. Quantitative and qualitative evaluations show that the proposed system can synthesize synchronous sound with good audio quality.
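The conditioning scheme described above can be sketched in a few lines: the generator input is assembled from a noise vector, a class label, and a temporal embedding pooled from per-frame visual features. All names, layer sizes, and the mean-pooling choice below are illustrative assumptions, not the paper's actual architecture, which uses a deep adversarial network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not taken from the paper).
NOISE_DIM, NUM_CLASSES, FRAME_FEAT_DIM, SPEC_DIM = 64, 10, 128, 256

def pool_frames(frame_features):
    """Collapse per-frame visual features (T x D) into one temporal embedding.
    Mean pooling is a simple stand-in for the paper's temporal modeling."""
    return frame_features.mean(axis=0)

def generator_input(noise, class_id, frame_features):
    """Concatenate noise, a one-hot class label, and the pooled visual embedding."""
    one_hot = np.zeros(NUM_CLASSES)
    one_hot[class_id] = 1.0
    return np.concatenate([noise, one_hot, pool_frames(frame_features)])

# Hypothetical single-layer generator mapping the conditioned input to a
# spectrogram-sized output; real models use deep (de)convolutional stacks.
W = rng.standard_normal((NOISE_DIM + NUM_CLASSES + FRAME_FEAT_DIM, SPEC_DIM)) * 0.01

def generate(class_id, frame_features):
    z = rng.standard_normal(NOISE_DIM)
    x = generator_input(z, class_id, frame_features)
    return np.tanh(x @ W)  # toy "spectrogram" slice in [-1, 1]

frames = rng.standard_normal((30, FRAME_FEAT_DIM))  # 30 frames of visual features
sample = generate(class_id=3, frame_features=frames)
print(sample.shape)  # (256,)
```

The key point the sketch illustrates is that both the sound class and the frame-level visual dynamics enter the generator as conditioning signals, which is what lets the output track the timing of on-screen events.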
Deep learning based visual-to-sound generation systems must be designed with particular attention to the temporal synchronicity of visual and audio features. In this research we introduce a novel task: guiding a class-conditioned generative adversarial network with the temporal visual information of a video input for visual-to-sound generation, exploiting the synchronicity between audio-visual modalities. Our proposed FoleyGAN model conditions on the action sequences of visual events to generate visually aligned, realistic soundtracks. We expand our previously proposed Automatic Foley dataset to train FoleyGAN, and we evaluate the synthesized sound through a human survey that shows noteworthy audio-visual synchronicity performance (81% on average). Our approach also outperforms other baseline models in statistical experiments on audio-visual datasets.
Research paper: Ghose, S. and Prevost, J. J., “FoleyGAN: Visually Guided Generative Adversarial Network-Based Synchronous Sound Generation in Silent Videos”, 2021. Link: https://arxiv.org/abs/2107.09262