Accompanying website to the paper _MambaFoley: Foley Sound Generation using Selective State-Space Models_, by Marco Furio Colombo, Francesca Ronchini, Luca Comanducci, and Fabio Antonacci, submitted at ICASSP 2024.
Abstract
Recent advancements in deep learning have led to widespread use of techniques for audio content generation, notably employing Denoising Diffusion Probabilistic Models (DDPM) across various tasks. Among these, Foley Sound Synthesis is of particular interest for its role in applications for the creation of multimedia content. Given the temporal-dependent nature of sound, it is crucial to design generative models that can effectively handle the sequential modeling of audio samples. Selective State Space Models (SSMs) have recently been proposed as a valid alternative to previously proposed techniques, demonstrating competitive performance with lower computational complexity. In this paper, we introduce MambaFoley, a diffusion-based model that, to the best of our knowledge, is the first to leverage the recently proposed SSM known as Mamba for the Foley sound generation task. To evaluate the effectiveness of the proposed method, we compare it with a state-of-the-art Foley sound generative model using both objective and subjective analyses.
Audio Examples
On this page, we present audio samples generated by our model MambaFoley alongside samples generated by T-Foley and AttentionFoley.
To provide a meaningful comparison between the generative models, we present samples from seven different sound categories, each generated with the same temporal conditioning.