In this contribution, we discuss a prototype that allows a group of users to design sound collaboratively in real time on a multi-touch tabletop. We use a machine learning method to generate a mapping from perceptual audio features to synthesis parameters; this mapping then drives both visualization and interaction. Finally, we discuss the results of a comparative evaluation study.
Niklas Klügel, Gerhard Hagerer and Georg Groh. “Designing Sound Collaboratively - Perceptually Motivated Audio Synthesis.” In Proceedings of the 14th International Conference on New Interfaces for Musical Expression (NIME’14), 30 June - 4 July 2014, Goldsmiths, University of London. Extended version available on arXiv.