I specialize in Ambisonics and binaural spatialisation techniques. I design tools and carry out independent research, sound experiments and artistic projects in that field.
Extended abstract, 21st International Conference on Auditory Display (ICAD 2015), July 8-10, 2015, Graz, Austria. Co-authors: Miguel Ortiz (Goldsmiths, University of London) and Stéphanie Bertet (Sonic Arts Research Centre, Belfast).
A modular tool for the real-time spatialisation of multiple sources in three dimensions, based on a mixed Ambisonics or Higher Order Ambisonics (HOA) to binaural technique, coupled with an interface that lets the user position sound sources with free-hand gestures, captured by a Leap Motion controller, in a visual 3D environment. It is implemented in the real-time programming environment Max/MSP.
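The tool itself is a Max/MSP patch; as a language-agnostic illustration of the first stage of the pipeline, here is a Python sketch of first-order B-format encoding of a mono source from an azimuth and elevation. The FuMa channel convention (W carrying a -3 dB weighting) is an assumption, not necessarily the convention the patch uses.

```python
import math

def encode_bformat(sample, azimuth, elevation):
    """Encode a mono sample into first-order B-format (W, X, Y, Z).

    Assumes the traditional FuMa convention, where the omnidirectional
    W channel carries a 1/sqrt(2) weighting; angles are in radians.
    """
    w = sample / math.sqrt(2.0)                           # omnidirectional
    x = sample * math.cos(azimuth) * math.cos(elevation)  # front-back
    y = sample * math.sin(azimuth) * math.cos(elevation)  # left-right
    z = sample * math.sin(elevation)                      # up-down
    return w, x, y, z
```

A source straight ahead (azimuth 0, elevation 0) encodes entirely into W and X, with Y and Z at zero; moving the source with the Leap Motion amounts to re-evaluating these four gains each time the hand position changes.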
ICAD 2015 Live Patch Demo!
These works were produced at SARC under the guidance of Dr Stéphanie Bertet and Dr Miguel Ortiz. They describe tools designed with Max/MSP for the spatialisation of sound sources in two and three dimensions, using a mixed Ambisonics-to-binaural spatialisation technique that simulates a virtual speaker layout over ordinary headphones, and a positioning system implementing an emergent-behaviour (boids) algorithm.
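The boids idea can be condensed into a few lines. The sketch below is a minimal 2D Python version of the three classic rules (cohesion, separation, alignment); the weights and the function name are illustrative guesses, not the parameters of the actual Max/MSP positioning system, which works in three dimensions.

```python
def boids_step(positions, velocities, dt=0.1,
               w_coh=0.01, w_sep=0.05, w_ali=0.05, sep_dist=1.0):
    """One update of a minimal 2D boids flock.

    positions and velocities are lists of [x, y] pairs; the weights
    are illustrative, not those used in the installation.
    """
    n = len(positions)
    cx = sum(p[0] for p in positions) / n   # flock centroid
    cy = sum(p[1] for p in positions) / n
    avx = sum(v[0] for v in velocities) / n  # average velocity
    avy = sum(v[1] for v in velocities) / n
    new_vel = []
    for i, (p, v) in enumerate(zip(positions, velocities)):
        # cohesion: steer toward the flock centroid
        ax = (cx - p[0]) * w_coh
        ay = (cy - p[1]) * w_coh
        # separation: steer away from neighbours that are too close
        for j, q in enumerate(positions):
            if i != j:
                dx, dy = p[0] - q[0], p[1] - q[1]
                if dx * dx + dy * dy < sep_dist ** 2:
                    ax += dx * w_sep
                    ay += dy * w_sep
        # alignment: match the average flock velocity
        ax += (avx - v[0]) * w_ali
        ay += (avy - v[1]) * w_ali
        new_vel.append([v[0] + ax, v[1] + ay])
    new_pos = [[p[0] + v[0] * dt, p[1] + v[1] * dt]
               for p, v in zip(positions, new_vel)]
    return new_pos, new_vel
```

Each boid's position can then be fed to the spatialiser as a source position, so the cloud of sources drifts and regroups like a flock.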
This report describes the design of an algorithm for the real-time conversion of 2D and 3D B-format Ambisonic signals to binaural signals. A first algorithm was designed for the 2D Ambisonics-to-binaural conversion; a 3D version is currently being developed from the 2D algorithm, which is fully functional.
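The conversion described in the report is implemented in Max/MSP; as an illustration of the usual virtual-loudspeaker approach, here is a Python sketch of the 2D decoding stage: the (W, X, Y) channels are decoded to a regular ring of virtual speakers, and each feed would then be convolved with the HRIR pair of its direction (not shown) to produce the binaural signal. The basic first-order velocity decode and the FuMa W weighting are assumptions; the report's decoder may differ.

```python
import math

def decode_2d(w, x, y, n_speakers=8):
    """Decode 2D first-order B-format (FuMa W weighting) to a regular
    ring of n_speakers virtual loudspeakers.

    Each returned feed would then be convolved with the HRIR pair of
    its speaker direction to obtain the left/right binaural signals.
    """
    feeds = []
    for i in range(n_speakers):
        az = 2.0 * math.pi * i / n_speakers
        # basic first-order velocity decode for a regular ring
        g = (2.0 / n_speakers) * (w * math.sqrt(2.0)
                                  + x * math.cos(az) + y * math.sin(az))
        feeds.append(g)
    return feeds
```

A quick sanity check: a source encoded straight at speaker 0 (W = 1/sqrt(2), X = 1, Y = 0) produces its strongest feed at speaker 0 and silence at the opposite speaker.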
As a sound artist and designer, I am particularly interested in the development of three-dimensional sound installations using Ambisonics, binaural and Wave Field Synthesis spatial audio techniques. In addition to a proposal, you will find reports, papers and sound samples that relate to the virtual sound sculpture project I am currently developing.
This document is a PhD proposal written in continuation of my spatial audio studies, which I use as a framework for a virtual sound sculpture project that encompasses sound installation/design and interaction, and involves sound creation, spatial audio techniques, psychoacoustics and cognition. From a historical perspective, the concept of sound sculpture embraces the plastic, conceptual and sonic arts, making an idiosyncratic use of motion in the composition of shape. The early prominence of matter over sound first tied sound sculpture to the plastic arts, sonority being a mere illustration of a motion conceived for its visual effect, until the sonic dimension came to be treated as a property in its own right, with sound-specific goals and criteria.
With virtuality comes the dematerialization of sources, and therefore the autonomy of sound. Visual representation should only be envisaged as an assessment tool, a concurrent representation of a common idea (which can be an exciting prospect). Hence, a virtual sound sculpture can be defined as a sound stream apprehended as a coherent morphology, produced by dematerialized sources, and characterized and positioned in a three-dimensional sound field. My desire to create perceptually meaningful sound structures was nurtured by a sensibility for the plastic arts and by my life as a musician and listener. Experiencing sound in space with various techniques (Ambisonics, WFS...) allowed me to sense a range of object- and space-related sensations, how these could affect the material itself and generate specific listening modes. I am therefore interested in the use of perceptual effects as compositional material: the extremes of the frequency range, microtones, roughness, beats, difference tones; how they can provoke haptic, sound density or volume sensations; how they interact with timbre, motion and space; how to use those "ear-born" and "head-born" sounds and relate them to "psybertonal topology" (the mapping of interaural spatial imaging onto acoustic spatial imaging), borrowing from Maryanne Amacher. Integrating these elements into a compositional language inevitably requires a knowledgeable use of auditory perception, psychoacoustics, cognition and spatial audio techniques.
Research and experimental design should assume a large part in this project, in order to ground creation on informed techniques and languages. The techniques, material means and pluri-disciplinary tools available today require extensive skills in spatial audio and auditory perception, and an understanding of the acoustic, perceptual and cognitive facts that characterize the interaction of sound and space. Besides, while visual cues dominate human perception, auditory perception is less accurate and qualitatively different. The former is straightforward and benefits from intuitive, robust tools for the manipulation of visual signals; the latter is generally affected by our natural propensity to translate visual intuitions into auditory terms when thinking about sound in space.
The first part of this document presents the artistic, technical and conceptual approach of the project. The second part describes the technical elements of an installation-based implementation. A third part proposes several experimental design axes to assess human subjects' perception of sound streams in three dimensions. Finally, an overview is given of the main interests of this theme with respect to research and creation.
This paper aims to provide a critical standpoint for the design of virtual sound sculptures. Few studies have been conducted in this field, and some difficulties arise when combining creation and research. Firstly, perceptual and cognitive factors interact in the perception of sound shapes.
This interaction is all the more complex as it occurs in ecologically valid conditions, that is, in natural environments. Secondly, there is no obvious equivalence or direct mapping between the sense of proportion, depth, density or distance in visual and in auditory perception, each of which has its own mechanisms. Thirdly, real-life distance estimation depends on source and space properties, including spectral complexity and reverberation.
We will particularly investigate distance and sound shape assessment, which relate to three-dimensional sound sculpture design, through a review of recent studies. This review will lead to a spatial audio installation and an experimental project. Elementary synthetic signals will be spatialised in natural room acoustics, using decorrelated point sources. Combining binaural and Ambisonics techniques will allow the internal and external exploration of auditory scenes in the near and far field. Measuring distance and shape assessment could allow the study of the influence of sound and space properties, and of the auditory system, on the perception of sound shapes. This could contribute to the design of perceptually meaningful auditory scenes.
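Decorrelated copies of a signal can be produced in several ways; one common method, shown below as a Python sketch (an illustrative choice, not necessarily the method used in the installation), is phase randomization: keep the magnitude spectrum and replace the phase with random values, so the copies share a timbre but are mutually decorrelated.

```python
import numpy as np

def decorrelate(signal, seed=0):
    """Return a decorrelated copy of `signal` by randomizing its phase
    while preserving its magnitude spectrum (FFT phase randomization)."""
    rng = np.random.default_rng(seed)
    spectrum = np.fft.rfft(signal)
    phase = rng.uniform(0.0, 2.0 * np.pi, size=spectrum.shape)
    # keep the DC and Nyquist bins real so the inverse transform is real
    phase[0] = 0.0
    if len(signal) % 2 == 0:
        phase[-1] = 0.0
    new_spec = np.abs(spectrum) * np.exp(1j * phase)
    return np.fft.irfft(new_spec, n=len(signal))
```

Calling the function with different seeds yields as many mutually decorrelated point-source feeds as needed, all with the same spectral envelope.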
This report describes the modal formulation, in the digital domain, of the modeling of elementary vibrating systems: the harmonic oscillator and the string. Despite the important share given to mathematical developments, an effort is made to relate them to physical reality.
Starting with the study of the harmonic oscillator based on the spring-mass system, we describe the discretization and modeling of the system from the analog to the digital domain, allowing for the computation of digital signals. A complementary approach to the modeling of complex vibrations is then described from the point of view of another system: wave propagation. Finally, we show how the digital model of the vibrating string, and by extension that of the membrane, can be integrated into a modal synthesis algorithm. The Matlab code used to generate sounds with the oscillator, string and membrane models is given in the appendix of the report.
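The discretization step for the spring-mass system can be condensed into a few lines. Here is a Python sketch (rather than the report's Matlab) of the simplest explicit finite-difference scheme for the undamped oscillator u'' = -w0^2 u, which is stable as long as w0*T < 2, with T the sampling period; the report's own scheme may differ in detail.

```python
import math

def oscillator(f0, fs, n_samples, u0=1.0, v0=0.0):
    """Simulate u'' = -(2*pi*f0)^2 * u with the explicit scheme
    u[n+1] = (2 - (w0*T)^2) * u[n] - u[n-1], stable for w0*T < 2."""
    w0t = 2.0 * math.pi * f0 / fs
    assert w0t < 2.0, "scheme unstable: f0 too high for this sample rate"
    coeff = 2.0 - w0t * w0t
    u_prev = u0
    # Taylor-series starter: u[1] = u0 + v0*T - 0.5*(w0*T)^2*u0
    u_curr = u0 + v0 / fs - 0.5 * w0t * w0t * u0
    out = [u_prev, u_curr]
    for _ in range(n_samples - 2):
        u_prev, u_curr = u_curr, coeff * u_curr - u_prev
        out.append(u_curr)
    return out
```

With u0 = 1 and v0 = 0 the scheme reproduces a cosine at (very nearly) f0; the small frequency detuning of the scheme grows with w0*T, which is one of the effects the report's mathematical development makes explicit.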
This study should be extended to membranes. The results can then be exploited in the broader context of a spatial audio installation, to bring a sense of auditory coherence to clouds of spatialised point sources whose displacement is controlled by simulated emergent behaviours, fed and filtered by the digital model according to their position. While the membrane is characterized by a set of material parameters, the simulated position in the final rendering depends on the excitation points and pick-up positions of the signals.
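To illustrate how excitation and pick-up positions enter a modal model, here is a Python sketch (rather than the report's Matlab) of modal synthesis for an ideal rectangular membrane: each mode (m, n) rings at its own frequency, with an amplitude given by the mode shape evaluated at the strike point and at the pick-up point. The uniform damping rate is an illustrative simplification; real modal damping is frequency dependent.

```python
import math

def membrane(fs, dur, c=100.0, Lx=1.0, Ly=1.0,
             strike=(0.3, 0.4), pickup=(0.7, 0.6),
             modes=5, decay=4.0):
    """Modal synthesis of an ideal rectangular membrane.

    Each mode (m, n) is a damped sinusoid at
        f_mn = (c / 2) * sqrt((m / Lx)^2 + (n / Ly)^2),
    weighted by the mode shape sin(m*pi*x/Lx) * sin(n*pi*y/Ly)
    evaluated at the strike and pick-up points. `decay` is a uniform
    damping rate in 1/s, an illustrative simplification.
    """
    n_samples = int(fs * dur)
    out = [0.0] * n_samples
    for m in range(1, modes + 1):
        for n in range(1, modes + 1):
            f_mn = 0.5 * c * math.sqrt((m / Lx) ** 2 + (n / Ly) ** 2)
            if f_mn >= fs / 2:
                continue  # skip modes above the Nyquist frequency
            amp = (math.sin(m * math.pi * strike[0] / Lx)
                   * math.sin(n * math.pi * strike[1] / Ly)
                   * math.sin(m * math.pi * pickup[0] / Lx)
                   * math.sin(n * math.pi * pickup[1] / Ly))
            for k in range(n_samples):
                t = k / fs
                out[k] += amp * math.exp(-decay * t) * math.sin(
                    2.0 * math.pi * f_mn * t)
    return out
```

Moving the strike or pick-up point re-weights the modes without changing their frequencies, which is exactly what lets a source's simulated position colour its sound in the final rendering: striking or picking up at a nodal line of a mode silences that mode entirely.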