The toolkit gives participants a sketching tool for developing prototypes based on interactive gestural-sound mappings. It integrates complex sound-synthesis techniques with machine learning of movement, making these techniques available to participants with no background in interactive systems, sound design or, more generally, programming and physical computing. It comprises modules for receiving movement data from sensors, analysing the data through machine learning and gesture recognition, and mapping participants’ gestures to sound synthesis.
The toolkit consists of three main categories of modules. The first is the Receiver module, which receives motion data from devices such as IMU sensors, the Leap Motion, the Myo armband and other input devices.
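The toolkit itself is built in Max/MSP, but the receiving step can be illustrated outside it: sensors of this kind typically stream motion data as OSC messages over UDP. The following is a minimal Python sketch, not part of the toolkit, that decodes a simple OSC message carrying three accelerometer floats; the `/accel` address and the helper names are hypothetical.

```python
import struct

def parse_osc(packet: bytes):
    """Parse a simple OSC message: address, type tags, float arguments."""
    def read_padded_string(data, offset):
        end = data.index(b"\x00", offset)
        s = data[offset:end].decode()
        # OSC strings are null-terminated and padded to 4-byte boundaries
        offset = end + 1
        offset += (-offset) % 4
        return s, offset

    address, offset = read_padded_string(packet, 0)
    tags, offset = read_padded_string(packet, offset)
    args = []
    for tag in tags.lstrip(","):
        if tag == "f":  # 32-bit big-endian float
            (value,) = struct.unpack_from(">f", packet, offset)
            args.append(value)
            offset += 4
    return address, args

def build_osc(address: str, values):
    """Build a matching OSC message with float arguments (for testing)."""
    def pad(s: bytes) -> bytes:
        s += b"\x00"
        return s + b"\x00" * ((-len(s)) % 4)

    tags = "," + "f" * len(values)
    body = b"".join(struct.pack(">f", v) for v in values)
    return pad(address.encode()) + pad(tags.encode()) + body

# A hypothetical accelerometer frame: three axes in g.
packet = build_osc("/accel", [0.1, -0.5, 0.98])
address, args = parse_osc(packet)
print(address, args)
```

In practice a Receiver module would read such packets from a UDP socket in a loop and forward the decoded values to the analysis modules.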
The second category of modules performs movement and gesture analysis and machine learning for gesture recognition, based on algorithms developed by Dr Baptiste Caramiaux. The system draws on a pre-recorded database of a user’s gestures and permits early recognition of a live gesture as soon as it starts. It also estimates variations in speed, scale and orientation. This is particularly useful for real-time interaction with sound in the prototyping phase, as it facilitates gestural-sound mapping.
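The actual algorithms are considerably more sophisticated (they track variations in speed, scale and orientation while the gesture unfolds), but the basic idea of early recognition against a pre-recorded database can be sketched in a few lines. This is a deliberately simplified, hypothetical version that compares the prefix of a live gesture against stored templates:

```python
def early_recognize(partial, templates, min_samples=3):
    """Return the best-matching template name given only the first
    samples of a live gesture (naive prefix comparison)."""
    if len(partial) < min_samples:
        return None  # not enough data yet to commit to a match
    best, best_cost = None, float("inf")
    for name, template in templates.items():
        n = min(len(partial), len(template))
        # Mean absolute distance over the overlapping prefix
        cost = sum(abs(a - b) for a, b in zip(partial[:n], template[:n])) / n
        if cost < best_cost:
            best, best_cost = name, cost
    return best

# Hypothetical 1-D templates (e.g. one accelerometer axis over time).
templates = {
    "circle": [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5],
    "swipe":  [0.0, 0.3, 0.6, 0.9, 1.2, 1.5, 1.8, 2.1],
}
print(early_recognize([0.0, 0.45, 0.95, 0.6], templates))  # "circle"
```

The point of the sketch is only the interaction pattern: a best guess is available after a handful of samples, long before the gesture is complete, which is what makes live gestural-sound mapping feel responsive.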
The Synthesis modules compose the third block of our toolkit. They enable participants to play and manipulate pre-recorded sounds. In the Trigger module, participants play a sound sample once, as if pressing a key on a keyboard or hitting a snare drum. The Scratch module works like a vinyl turntable: varying the playback speed changes the pitch of the sound sample. The Scrubbing module lets users start playback from a chosen playhead position and change it in real time. The Manipulate module controls the pitch, speed and filter-cutoff values of the sampler. Other sound synthesis modules are currently under development.
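These modules are implemented as Max/MSP patches, but the core idea behind Scratch and Scrubbing — a playhead whose speed is driven by gesture, with speed changes heard as pitch changes — can be sketched conceptually. The function below is a hypothetical illustration, not toolkit code:

```python
def variable_rate_playhead(speeds, start=0.0):
    """Compute successive playhead positions (in samples) for
    variable-rate playback: speed 1.0 is normal playback, 2.0 sounds
    an octave higher, and negative values play backwards (scratching)."""
    positions = [start]
    pos = start
    for s in speeds:
        pos += s  # advance (or rewind) by the current speed per sample tick
        positions.append(pos)
    return positions

# Speeding up and then reversing, as when scratching a virtual record.
print(variable_rate_playhead([1.0, 1.0, 2.0, -1.0, -1.0]))
# [0.0, 1.0, 2.0, 4.0, 3.0, 2.0]
```

A real sampler would then read the buffer at these (fractional) positions with interpolation; mapping a gesture feature onto `speeds` is what produces the vinyl-like coupling between movement and pitch.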
The modules can be assembled and linked however users prefer. Each is self-contained and can be copied, duplicated and rearranged.
The toolkit is freely available from Github at https://github.com/12deadpixels/Gestural-Sound-Toolkit
This toolkit was designed by Baptiste Caramiaux and myself, and implemented to investigate research questions in sonic interaction design through a series of workshops at IRCAM, Goldsmiths, Parsons (New York) and ZHdK (Zurich). The results of this research were published at the ACM SIGCHI 2015 conference.
The following is a short documentation video showing the toolkit being used by participants during the workshops.
GST has been developed as part of the MetaGestureMusic project, which received funding from the European Research Council under the European Union’s Seventh Framework Programme (FP/2007-2013) / ERC Grant Agreement n. FP7-283771.
Original authors: Alessandro Altavilla, Baptiste Caramiaux
CHI 2015 was an impressive showcase of the state of the art in HCI research, covering a broad and expanding galaxy of practices and disciplines, reflected in the many diverse sessions running each day. The differences between approaches always found a way to manifest themselves, generating interesting points of debate, positive tension and critical interventions, often in very simple ways.
In the Speech & Auditory Interfaces paper session on 23rd April, Baptiste Caramiaux and I presented the paper Form Follows Sound: Designing Interactions from Sonic Memories, which we wrote with Atau Tanaka and Scott Pobiner (Parsons, The New School for Design, New York). We presented a series of participatory Sonic Interaction Design workshops in which we explored how to generate scenarios for interaction with sound using embodied gestural interfaces, drawing upon participants’ memories of everyday sounds and the situations connected to them.
We then presented the method used in the workshops, consisting of an ideation phase followed by the realisation of working prototypes, and concluded the presentation by discussing the results.
The following is a very short video we prepared for the ACM Digital Library. It shows the structure of the workshops, gives details about the Sonic Incident and the Gestural-Sound Toolkit, and presents the three embodied sonic interaction models (Conducting, Manipulating, Substituting) we discuss in the paper.
In the short Q&A at the end of the session, members of the audience raised interesting questions about the applicability of the Sonic Incident technique to product sound design. Although there was no time during the presentation to show this, the paper describes an initial guideline that designers can follow step by step. The paper is available in the ACM Digital Library.
Here are some personal highlights of the conference:
Last but not least, congratulations to Goldsmiths BA Computing student Pedro Kirk for winning the CHI 2015 Student Research Competition! You can read his paper here.
In the context of Human-Computer Interaction, Sonic Interaction Design is commonly considered a design practice that exploits the use of sound to facilitate users’ interactions with products and services, mediated by computational technologies. As interaction with sound may be task-oriented or experience-based, understanding the contextual aspects of bodily sonic experience and action–sound relationships is an important factor in designing richer sonic interactions.
Our approach to Sonic Interaction Design examines the relationship between human movement and everyday listening experience, to imagine novel interactions with sound and to inform the design of gestural-sound mappings using interactive technologies. To do so, we investigate a user-centric approach that first considers the affordances of sounds to evoke gestures in order to imagine embodied interaction with sound, and, based on this, generates interaction models for interaction designers wishing to work with sound.
We designed Form Follows Sound, a series of workshops in Sonic Interaction Design in which participants drew upon their everyday sonic experiences to ideate imagined sonic interactions, and then realise interactive prototypes.
We carried out the workshop four times in four different locations, with a total of 43 participants with varying degrees of experience with sound and music:
Participants’ background included graphic and interaction design, theatre and dance performance, music technology, engineering, rehabilitation, physics, bioengineering, social science and art.
The workshops started with an ideation phase, which did not involve technology. The aim of this phase was to generate ideas for action-sound relationships based on memories of sounds from participants’ everyday lives. This phase consisted of three different activities:
The ideation phase was structured in this way to help participants (1) access the sonic qualities and the context of the experience described in the incident and generate design ideas, and (2) include aspects of body movement that we could explore with motion-tracking technology in the later design phases.
The subsequent realisation phase led participants to build working technology prototypes of the imagined sonic interactions. We created breakout groups to facilitate a group dynamic of mutual negotiation, teaching and understanding. We provided a technological toolkit for realising a wide range of gestural-sound interactions. The toolkit is an open project available on Github, and includes gesture sensors and software tools for motion-data processing, mapping and sound synthesis. The system allows real-time interaction, meaning that sound is sculpted and modified live as movement is performed. Finally, each group selected one imagined scenario from the set of sonic incidents described in the ideation phase and developed it into an interactive prototype.
From a research perspective, this project provided three contributions:
Overall, our workshop methodology helps participants without specialist audio engineering or musical training to work with sound, changing the meaning of sound in their practices by encouraging them to think about, discuss, and manipulate it.
This short video shows the structure of the workshops, then gives details about the Sonic Incident, the Gestural-Sound Toolkit and finally presents the three embodied sonic interaction models.
This research has been funded within the ERC project MetaGestureMusic, granted by the European Research Council under the European Union’s Seventh Framework Programme (FP/2007-2013) / ERC Grant Agreement n. FP7-283771. The results of this research have been published in a paper presented at ACM CHI 2015, available here: Form Follows Sound: Designing Interactions from Sonic Memories.
This blog post also appears on the research project website mgm.goldsmithsdigital.com.
Alessandro Altavilla, Baptiste Caramiaux and Atau Tanaka.
Thanks to the ERC, EAVI and the Department of Computing, who are funding my trip, I will be in Seoul, together with Dr Baptiste Caramiaux, to present our research to colleagues from all over the world. CHI is one of the most exciting and established conferences on human-computer interaction, and I hope to receive valuable feedback on my research there.
As the paper will be published in the ACM Digital Library soon, you can read the abstract below.
Title: Form Follows Sound: Designing Interactions from Sonic Memories
Authors: Caramiaux, B., Altavilla, A., Pobiner, S., Tanaka, A.
Sonic interaction is the continuous relationship between user actions and sound, mediated by some technology. Because interaction with sound may be task oriented or experience-based it is important to understand the nature of action-sound relationships in order to design rich sonic interactions. We propose a participatory approach to sonic interaction design that first considers the affordances of sounds in order to imagine embodied interaction, and based on this, generates interaction models for interaction designers wishing to work with sound. We describe a series of workshops, called Form Follows Sound, where participants ideate imagined sonic interactions, and then realize working interactive sound prototypes. We introduce the Sonic Incident technique, as a way to recall memorable sound experiences. We identified three interaction models for sonic interaction design: conducting; manipulating; substituting. These three interaction models offer interaction designers and developers a framework on which they can build richer sonic interactions.
See you in Seoul!
Following the structure of the previous workshops at IRCAM, Parsons and Goldsmiths, I scaled the workshop down from three days to one. I enjoyed the students’ fresh approach to the topic and their energy and creativity during the ideation phase, and was impressed by the quality of the prototypes realised at the end of the day. Considering that many of the students were experiencing sound design and Max/MSP for the first time, this was a considerable achievement for a single day!
Daniel Hug and Moritz Kemper published here a detailed report of their two-week course in Sonic Interaction Design (in German only), in which my workshop took place as a guest lecture.
Thanks to Daniel, Moritz, the Interaction Design course at ZHdK and all the students for the fantastic time and energy during the workshop!
This was a commission for the HCC2 (Human Computer Confluence) Summer School 2013, organised by Isabelle Viaud-Delmon, Hugues Vinet, Marine Taffou, Sylvie Benoit and Fivos Maniatakos (IRCAM).
In this workshop we focused mainly on the augmentation of the body with accelerometers, giving participants a framework for the user-centred design of sonic interactions, developed over the preceding months and refined during the previous workshops at Goldsmiths and IRCAM.
The first day of the workshop was co-developed with Frederic Bevilacqua, Jules Françoise and Eric Boyer (Real Time Musical Interactions team, IRCAM), and with Patrick Susini, Olivier Houix and Nicolas Misdariis (Perception and Sound Design team, IRCAM).
The six participants, coming from fields as diverse as HCI, fine arts and performance, robotics, music technology and rehabilitation, biophysics, and electroacoustic music composition, developed and presented two collaborative working prototypes realised during the workshop.
I will post more detailed information soon.
In the meantime, I would like to personally thank Fivos and Isabelle for organising this, and all our participants for three fantastic, productive days.
This is the accepted abstract.
Affordance is a concept that originated in the field of ecological psychology, described by Gibson as the potential for action between an animal and its environment. Norman applied this concept in HCI and industrial design, focusing on perceivable possible actions between humans and objects. A perceived affordance constitutes a social signifier: an indicator of social usage shared by people.
Sound is a fundamental property of everyday interactions, as it contributes to the perception of complex affordances. Emerging studies consider sound as a medium for everyday physical, embodied, technological and social interactions. According to Gaver, sound conveys information about interactions between materials and substances. For LaBelle, sound defines relations and thresholds between private and public space. Goodman describes a materiality of affective vibrations created by modern acoustic technologies as a political appropriation of sonic domains.
Our practice-based research in sonic interactions seeks to investigate the validity of analysing sound-related affordances. In our user studies on embodied sound cognition, participants related sound to physical affordances and social signifiers, such as known musical and everyday sounds. We extend this approach by delivering workshops in Sonic Interaction Design and by examining recent sonic arts practices in public space as a field for investigating sonic affordances.
This year, I presented in the poster session a research paper written with Atau Tanaka and Baptiste Caramiaux on our current investigation of sonic affordances.
Following up on our research on sound-related affordances as a design principle for interactive musical interfaces and sonic interaction design, whose initial approach was presented at SMC 2012, I conducted a second user study earlier this year.
In this second user study, based on an experiment and follow-up interviews, we were interested in exploring topics such as gesture-sound relationships (Godøy, Leman and Caramiaux), the physical affordances of the control device, and cultural associations between sound, the control device and everyday, non-specialist users.
We designed three sound stimuli based on Schaeffer/Chion sound categories, using digital synthesis (PhISEM, STK, AM).
These sounds were then assigned to three different gesture-sound mappings, as described in the image below.
The experiment was task-oriented: seven participants (non-musicians) were asked to play digital sound by moving their limbs. To realise this experiment of active control of digital sound through movement, a 3D wireless micro accelerometer (the Axivity WAX3) was attached to each participant’s hand with a velcro strap. The accelerometer was mapped to a sound-synthesis engine running on an external laptop, invisible to the participants, connected to loudspeakers in a dual-mono configuration.
The participants were asked to explore the three different sound-gesture mappings, randomly assigned every 90 seconds. No information was given to the participants about the technological system, the mapping, or the expectations and goals of the experiment.
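The specific mappings used in the experiment are described in the image above; as a generic, hypothetical illustration of how a 3D accelerometer signal can be mapped onto a synthesis parameter, one might compute the motion magnitude and scale it into a parameter range (the 0-3 g input range and the 200-8000 Hz cutoff range below are illustrative values, not those of the study):

```python
import math

def accel_magnitude(x, y, z):
    """Magnitude of a 3-axis accelerometer frame (in g)."""
    return math.sqrt(x * x + y * y + z * z)

def map_range(value, in_lo, in_hi, out_lo, out_hi):
    """Linearly map a sensor value onto a synthesis parameter,
    clamping to the output range."""
    t = (value - in_lo) / (in_hi - in_lo)
    t = max(0.0, min(1.0, t))
    return out_lo + t * (out_hi - out_lo)

# Device at rest reads roughly 1 g; map motion energy to a filter cutoff.
mag = accel_magnitude(0.0, 0.0, 1.0)
cutoff_hz = map_range(mag, 0.0, 3.0, 200.0, 8000.0)
print(cutoff_hz)  # 2800.0 (one third of the way through the range)
```

Such a mapping is deliberately simple; as the interviews below suggest, it is the perceivability of the mapping, more than its complexity, that shapes how participants describe the experience.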
The follow-up interviews revealed that the participants’ descriptions of the gestures they produced were influenced by the identification of plausible sound sources. These actions were often related to everyday actions and objects, which was particularly evident during the impulsive and iterative tasks.
For the continuous gesture-sound mapping, the sounds and the actions produced were described more abstractly. However, the perceivable modulating parameters in the mapping encouraged further kinetic exploration, with changes in the sound perceived as clearly related to qualities of the movement.
In this sense, the visibility of the mapping, together with the more abstract quality of the sound stimuli, aided kinetic exploration and a process of articulating the experience that went beyond the identification of everyday sounds and related actions. Participants found the gesture-sound mapping intuitive, often describing the experience as “natural” or “easy”.
This research can be seen as an initial step to question the role of sound-related affordances, cultural constraints and physical affordances of the device for Sonic Interaction Design.
On June 12th-14th 2013 we gave our second workshop on Embodied Sonic Interactions at Parsons, The New School for Design, New York.
Baptiste Caramiaux, Atau Tanaka and I were invited to work with Scott Pobiner (Parsons Assistant Professor, Design Strategies) to deliver a two-day workshop for a group of students from different areas of design, music and the performing arts.
The FORM FOLLOWS SOUND workshop was introduced by a lecture from Atau Tanaka on our research on Embodied Sonic Interactions.
The workshop was organised over two days, adopting a similar structure to the pilot we ran at Goldsmiths one month earlier. On the first day, participants took part in a series of activities to sensitise attention to sound, such as the Sonic Incident and the Sonic Mimic Game, and wrote associations between sound, action and everyday sonic experience.
We used descriptor cards to guide this process, but included blank ones that participants could use to add their own words.
At the end of the first day, Baptiste gave a technical tutorial on designing interactive sound with Max 6, using the 3D micro accelerometers we provided to participants (Axivity WAX), user-friendly machine learning interfaces and sound-design scenarios.
Participants ended the day familiarising themselves with the interactive system.
On the second day, participants formed groups, and each group chose one sonic interaction to prototype, realise, perform and present to the others.
After a final test, five groups presented five working demos to the public, followed by an open discussion.
In the proposed demos we observed sound used as an expressive medium to drive imagined interactions in the everyday, mainly expressed through the augmentation of objects or parts of the body.
It is interesting to note that none of the participants produced a musical instrument.
Sound was used to drive sequences of actions situated in the everyday, relating them to the sonic experiences described on the first day.
The limitations and constraints of the proposed system were translated and used proficiently in the designed objects.
Machine learning techniques were understood in these terms and translated into physical constraints that let the system work as designed.
The participants put tremendous energy into the whole workshop, and we want to personally thank all of them, Dan Winckler for the video documentation, Bridget for her help and suggestions, and Scott for making the workshop possible and such a great and informative experience.
(posted on eavi.goldsmithsdigital.com)
More info here:
Announcement on EAVI website.
Looking forward to it!