-
Godøy, Rolf Inge
(2023).
Exploring sound-motion links in motormimetic cognition.
Show summary
The focus of my talk is on the intimate links between sensations of sound and of motion in music, summarized in the expression motormimetic cognition. This neologism was coined to name the mental re-enactment (in some cases also as overt, visible body motion) of sound-related motion when listening to, or merely imagining, musical sound, typically as re-enactment of assumed sound-producing body motion, but also of more overall sensations of energy and/or affect.
My motivation for exploring this topic was a number of personal, introspection-based experiences of sound-producing body motion sensations when listening to music, or when merely imagining music. After quite extensive reading in various domains of the cognitive sciences, it dawned on me that other people might have similar motion sensations when listening to, or merely imagining, music. When I published papers on motormimetic cognition in musical experience, the response from the music cognition community was quite varied. However, over the last couple of decades, with the growing popularity of so-called embodied cognition in the cognitive sciences, it has become more accepted that there are indeed extensive links between perception and body motion in most, perhaps all, domains of human behavior. Yet there are, needless to say, still very many outstanding questions as to what we mean by embodied cognition in music, and in my opinion we in particular lack detailed and systematic knowledge of how such embodied elements play out in very concrete musical features. This is the aim of my presentation: to give an account of how the fusion of sound and motion can be explored in more detail.
One leading idea here is that there are constraints in sound production, both of instruments and sound-producing body motion, concerning biomechanics as well as motor control, and that we may enhance our understanding of motormimetic cognition in music by studying such constraints, first of all in performance, but also in improvisation and composition. This will include constraints and affordances of motion and body postures associated with patterns of textures, rhythm, various figures, ornaments, contours, spectral and formantic shapes, as well as the associated sense of effort and affect.
The basic idea here is to regard musical sound as intimately linked with sensations of motion, to the extent that we may actually perceive salient musical features as multimodal phenomena, e.g. in the case of a drum fill where sensations of drum sound and hands/arms motion are totally fused. Recognizing the extent of this multimodal fusion of sound and motion in music perception should then have consequences for how we think about various theoretical and practical music-related activities, i.e. encourage us to think about a work of music as just as much a choreography of sound-producing motion as a sequence of sounds.
-
Karbasi, Seyed Mojtaba; Jensenius, Alexander Refsum; Godøy, Rolf Inge & Tørresen, Jim
(2023).
Exploring Emerging Drumming Patterns in a Chaotic Dynamical System using ZRob.
Show summary
ZRob is a robotic system designed for playing a snare drum. The robot is constructed with a passive flexible spring-based joint inspired by the human hand. This paper describes a study exploring rhythmic patterns by exploiting the chaotic dynamics of two ZRobs. In the experiment, we explored the control configurations of each arm by trying to create unpredictable patterns. Over 200 samples have been recorded and analyzed. We show how the chaotic dynamics of ZRob can be used for creating new drumming patterns.
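The "chaotic dynamics" mentioned here can be illustrated with a standard textbook model. The sketch below is purely illustrative and is not the actual ZRob model (the summary does not give the robot's equations); it integrates a driven, damped pendulum, a classic example of how a sinusoidally driven compliant joint can become sensitive to initial conditions, which is the property that makes the resulting patterns unpredictable. All function names and parameter values are invented for the example.

```python
import math

def simulate(theta0, steps=20000, dt=0.002,
             damping=0.5, drive_amp=1.5, drive_freq=2.0 / 3.0):
    """Driven damped pendulum integrated with semi-implicit Euler.

    A minimal stand-in for a sinusoidally driven flexible joint;
    these parameter values lie in a well-known chaotic regime.
    """
    theta, omega = theta0, 0.0
    for i in range(steps):
        t = i * dt
        alpha = (-damping * omega - math.sin(theta)
                 + drive_amp * math.cos(drive_freq * t))
        omega += alpha * dt
        theta += omega * dt
    return theta

# Two nearly identical starting angles can end up far apart:
# the sensitivity that can be mined for new stroke patterns.
a = simulate(0.2)
b = simulate(0.2 + 1e-6)
```

In such a regime, systematically varying the drive configuration of each arm, as in the experiment described above, yields stroke patterns that are repeatable in character but not in detail.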
-
Godøy, Rolf Inge & Hestholm, Marion
(2022).
Spillerom: Ernst Simon Glaser and Rolf Inge Godøy.
[Radio].
radio.nrk.no.
Show summary
When Ernst Simon Glaser, principal cellist of the Norwegian Radio Orchestra (Kringkastingsorkesteret), commissioned six new works for solo cello, he wanted the composers to write in dialogue with Bach's cello suites. What did he achieve by that? Echoes of what will come is the title of the release, featuring works by six composers who each deliver one or more nods to old Johann Sebastian.
The name Rolf Inge Godøy is today associated with research on music and motion at the University of Oslo. Godøy has been central in building up a research community there, but in his new life as professor emeritus he intends to take up composing again!
-
Lartillot, Olivier; Godøy, Rolf Inge & Christodoulou, Anna-Maria
(2022).
Computational detection and characterisation of sonic shapes: Towards a Toolbox des objets sonores.
Show summary
Computational detection and analysis of sound objects is of high importance for both musicology and sound design. Yet Music Information Retrieval technologies have so far mostly focused on transcribing music into notes in a classical sense, whereas we are interested in detecting sound objects and their feature categories, as suggested by Pierre Schaeffer's typology and morphology of sound objects in 1966, reflecting basic sound-producing action types. We propose a signal-processing-based approach to segmentation, based on tracking salient characteristics over time and, dually, on Gestalt-based segmentation decisions based on changes. Tracking of pitched sound relies on partial tracking, whereas the analysis of noisy sound requires tracking of larger frequency bands, possibly varying over time. The resulting sound objects are then described based on Schaeffer's taxonomy and morphology, expressed first in the form of numerical descriptors, each related to one type of taxonomy (percussive/sustained/iterative, stable/moving pitch vs. unclear pitch) or morphology (such as grain). This multidimensional feature representation is further divided into discrete categories related to the different classes of sounds. The typological and morphological categorisation is driven by the theoretical and experimental framework of the morphodynamical theory. We first experiment on isolated sounds from the Solfège des objets sonores (which features a large variety of sound sources) before considering more complex configurations featuring a succession of sound objects without silence, or simultaneous sound objects. Analytical results are visualised in the form of graphical representations, aimed at both musicological and pedagogical purposes. This will be applied to the graphical description of, and browsing within, large music catalogues. The application of the analytical descriptions to music creation is also investigated.
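As a toy illustration of segmentation by change detection (not the authors' actual method, which involves partial tracking and Schaefferian descriptors), the sketch below marks object boundaries wherever the short-time energy of a signal jumps sharply between frames. All names, frame sizes, and thresholds are invented for the example.

```python
import math

def frame_energy(signal, frame=64):
    """Energy per non-overlapping frame."""
    return [sum(s * s for s in signal[i:i + frame])
            for i in range(0, len(signal) - frame + 1, frame)]

def segment_boundaries(energies, threshold=2.0):
    """Mark a boundary wherever the frame-to-frame energy ratio jumps;
    a crude stand-in for tracking salient characteristics over time."""
    bounds = []
    for i in range(1, len(energies)):
        prev, cur = energies[i - 1] + 1e-9, energies[i] + 1e-9
        if max(cur / prev, prev / cur) > threshold:
            bounds.append(i)
    return bounds

# Synthetic signal: silence, a 440 Hz tone, silence again (8 kHz rate).
sr = 8000
sig = ([0.0] * 1024
       + [math.sin(2 * math.pi * 440 * t / sr) for t in range(1024)]
       + [0.0] * 1024)
bounds = segment_boundaries(frame_energy(sig))
```

Run on this synthetic silence-tone-silence signal, the function returns the two frame indices where the tone starts and ends; the approach generalises by swapping raw energy for richer salience features.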
-
Karbasi, Seyed Mojtaba; Jensenius, Alexander Refsum; Godøy, Rolf Inge & Tørresen, Jim
(2022).
A Robotic Drummer with a Flexible Joint: the Effect of Passive Impedance on Drumming.
Show summary
Intelligent robots aimed at performing music and playing musical instruments have been developed in recent years. With the advancements in artificial intelligence and robotic systems, new capabilities have been explored in this field. One major aspect of musical robots that can lead to the emergence of creative results is the ability to learn skills autonomously. To make this feasible, it is important to make the robot utilize its maximum potential and mechanical capabilities to play a musical instrument. Furthermore, the robot needs to find the musical possibilities based on the physical properties of the instrument to provide satisfying results. In this work, we introduce a drum robot with certain mechanical specifications and analyze the robot's capabilities according to the resulting drum sounds. The robot has two degrees of freedom, actuated by one quasi direct-drive servo motor. The gripper of the robot features a flexible joint with passive springs, which adds complexity to the movements of the drumstick. In a basic experiment, we have looked at the robot's drum roll performance while changing a few control variables such as frequency and amplitude of the motion. Both single-stroke and double-stroke drum rolls can be performed by the robot by changing the control variables. The main focus of this study is the effect of the flexible gripper on the robot's drumming results. Additionally, we have divided the control space according to the type of drum rolls. The results of this experiment lay the groundwork for developing an intelligent algorithm for the robot to learn musical patterns by interacting with the drum.
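To make concrete why passive impedance matters for drum rolls, here is a deliberately minimal damped-spring model of the sprung stick tip (illustrative only; it does not reproduce ZRob's two-DoF mechanics or its motor drive): a stiffer passive spring makes the tip oscillate faster, so it can return for a second stroke sooner.

```python
def rebound_count(stiffness, mass=0.05, damping=0.02,
                  x0=0.01, steps=50000, dt=1e-4):
    """Count how often a passively sprung stick tip recrosses its rest
    position within 5 s, modeled as a damped harmonic oscillator
    (all parameter values are illustrative, not ZRob's)."""
    x, v = x0, 0.0
    crossings = 0
    for _ in range(steps):
        a = (-stiffness * x - damping * v) / mass
        v += a * dt
        x_new = x + v * dt
        if x_new * x < 0:  # tip passes the rest position: a rebound
            crossings += 1
        x = x_new
    return crossings

soft = rebound_count(stiffness=10.0)    # compliant joint
stiff = rebound_count(stiffness=40.0)   # stiffer passive spring
```

The stiffer joint recrosses the rest position more often in the same time window, which is the kind of passive rebound a double stroke exploits.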
-
Godøy, Rolf Inge
(2021).
Impulse-driven rhythm objects.
Show summary
Various recent research in movement science has converged in suggesting that human motor control is intermittent, i.e. that skilled and rapid body motion requires piecewise pre-planning combined with point-by-point triggering of such anticipatorily planned chunks of body motion (Loram et al. 2014). Known by various names such as open loop, feedforward, or serial ballistic, the idea of intermittent control is based on constraints of our motor system: first of all, that the motor system is too slow to allow for continuous feedback control in demanding tasks, but also on the need for a contextual activation spreading of preparatory motion of the effectors (fingers, hands, vocal apparatus) for upcoming events, known as coarticulation. Based on our motion capture data of musical performance, the aim of this paper is to show how these constraints are manifest in what we call impulse-driven rhythm objects, i.e. in holistically conceived and perceived multimodal chunks of combined motion and sound.
Besides drawing on insights from movement science, this object focus also has a background in notions of objects and/or gestalts in music (Schaeffer 1966), with sound objects typically in the duration range of approximately 0.3 to 3 seconds, recognizing the overall shape of the sound object as crucial for both the generation and perception of musical features. With an added related idea of action gestalts in motor control (Klapp and Jagacinski 2011), we can speak of multimodal sound-motion objects where the sensations of sound are closely linked with sensations of body motion and effort.
To better understand the manifestation of such impulse-driven rhythm objects, it will be useful first to give an overview of the basic ideas of intermittent motor control and to provide some plausible theoretical framework for how such piecewise anticipatory motor control might work, and then go on to demonstrate how motion capture data testifies to rather strong consistencies in the performance, suggesting a high degree of pre-planning. Also, this motion capture data can demonstrate how the workings of coarticulation are manifest in preparatory motion (e.g. fingers/hands move ahead to optimal position for upcoming tone onsets (Godøy 2014)). This is typically evident in fast rhythm objects such as ornaments, where due to the required speed, it will not be possible to make corrections mid-course, but where error corrections will have to wait until the next instance of the ornament. Lastly, the element of effort in the performance of such rhythm objects may be indirectly inferred from the motion capture data, but better explored with EMG data, i.e. muscle activity data. As demonstrated with our own and other EMG data, this concerns in particular optimization of motion, where the holistic anticipatory control of motion chunks may contribute to minimizing energy cost and enhancing sense of fluency (Gonzales Sanchez et al. 2019).
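The contrast between continuous feedback and intermittent, chunked control described above can be sketched in a toy reaching model (hypothetical, not taken from the cited literature): the command is recomputed only at chunk boundaries, around the 0.5-second scale, and runs open loop in between, yet still homes in on the target.

```python
def reach(target, control_interval, steps=200, dt=0.01, gain=3.0):
    """Point-mass reaching where the velocity command is recomputed only
    every `control_interval` steps and held open loop in between:
    a toy version of piecewise pre-planned (intermittent) control."""
    x, v = 0.0, 0.0
    for i in range(steps):
        if i % control_interval == 0:   # intermittent re-planning point
            v = gain * (target - x)     # ballistic command for the chunk
        x += v * dt
    return x

continuous = reach(1.0, control_interval=1)     # feedback every 10 ms
intermittent = reach(1.0, control_interval=50)  # ~0.5 s chunks
```

With 0.5 s chunks the trajectory overshoots and corrects only at the re-planning points, mirroring the observation that errors in fast ornaments can only be corrected at the next instance.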
Keywords: rhythm objects, sound objects, intermittent, multimodal, motor control
-
-
Karbasi, Seyed Mojtaba; Godøy, Rolf Inge; Jensenius, Alexander Refsum & Tørresen, Jim
(2021).
A Learning Method for Stiffness Control of a Drum Robot for Rebounding Double Strokes.
Show summary
In robot drumming, performing double-stroke rolls is a key ability. Human drummers learn to play double strokes simply by trying them several times. To perform them, a model needs to be learned that provides anticipatory commands during drumming. Joint stiffness plays a key role in the rebounding double-stroke task and should be considered in the model. We have introduced an interactive learning method for a drum robot to learn joint stiffness for the rebounding double-stroke task. The model is simulated for a 2-DoF robotic arm. The algorithm is simulated with 3 different drum kits to show the robustness of the learning approach. The simulation results also show significant compatibility with human performance results. In addition, the refined learning algorithm adjusts the stroke timing, which is important for producing proper rhythms.
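A toy version of such an interactive learning loop (entirely illustrative; the rebound model and update rule below are invented, not the paper's) might adjust joint stiffness from the observed rebound error after each trial:

```python
def second_bounce(stiffness):
    """Toy plant: fraction of stroke energy returned by the passive
    joint, growing with stiffness and saturating at 1 (invented)."""
    return stiffness / (stiffness + 5.0)

def learn_stiffness(target=0.8, k=1.0, rate=20.0, iters=200):
    """Nudge stiffness after each simulated stroke until the rebound
    matches the target; a stand-in for the interactive learning loop."""
    for _ in range(iters):
        error = target - second_bounce(k)  # observed rebound error
        k = max(k + rate * error, 0.0)     # proportional update
    return k

k = learn_stiffness()
```

Each iteration plays a stroke, measures how far the rebound falls short of the target, and nudges stiffness proportionally, converging on the stiffness whose rebound matches the target.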
-
Godøy, Rolf Inge
(2020).
Musical Intermittency.
Show summary
One major topic in music perception, music analysis, music information retrieval, and related domains is how listeners parse continuous streams of sound into more discontinuous chunks. One possible source of such parsing could lie in the so-called intermittency of motor control and effort in sound-producing body motion. In my talk, I'll focus on my ongoing research exploring such intermittency in musical experience.
-
Godøy, Rolf Inge
(2020).
Understanding intermittency in sound-producing body motion.
Show summary
One major topic in music perception, music analysis, and music information retrieval is how listeners parse continuous streams of sound into more discontinuous chunks. One possible source of such parsing could lie in the so-called intermittency of motor control and effort in sound-producing body motion. In my talk, I'll focus on our ongoing research exploring such intermittency in experiences of music.
-
Godøy, Rolf Inge
(2019).
Constraint-based musical expression - John Blacking Memorial Lecture, European Seminar of Ethnomusicology (ESEM), Durham University, September 5, 2019.
Show summary
The focus of my talk is on how various human and instrumental constraints shape musical expression. There can be no doubt that the repertoire of musical expression is vast, as is the variety of musical instruments and associated kinds of human sound-producing body motion. Yet there are obvious constraints on what sounds traditional musical instruments can produce (e.g. a flute cannot sound like a trombone, a piano cannot sound like an oboe), and, needless to say, also constraints on human sound-producing body motion (e.g. the need for breathing, the need for rests, limits to speed), engendering constraint-based idioms (easy and well-sounding vs. difficult and not-so-well-sounding snippets on instruments).
This may all seem rather trivial, but the crucial point is to see how constraints are carried into musical expression and strongly influence it, even to the point that we expect such constraints in music. And: that musical expression is constraint-based does not diminish its value; it is rather a testimony to extensive human agency in music. It can be argued that human body motion constraints actually shape salient perceptual features of musical sound, first of all in what we perceive as motion and contour patterns in music, as well as in various details of articulation. Taking constraints into account amounts to a more holistic approach to musical expression by focusing on musical sound as produced by human body motion, or focusing on what I have previously called motormimetic features in musical experience. This perspective is a departure from Western abstract notions of pitch and duration in favor of holistic sound and body motion features in music, a shift of perspective I believe was also a recurrent topic for John Blacking.
-
Godøy, Rolf Inge
(2019).
Music and Body Motion.
Show summary
How, and to what extent, is our experience of music influenced by the body motion of the musicians, both those we can see and those we can only imagine? Could it be that music is just as much a matter of choreography as a matter of sound?
-
Godøy, Rolf Inge
(2018).
Presentation of RITMO.
Show summary
A presentation of the aims and organisation of the RITMO Centre of Excellence and various ideas for continued cooperation with IPEM in research and PhD training.
-
Godøy, Rolf Inge
(2018).
Impulse-driven sound-motion objects in musical imagery.
Show summary
Musical imagery can be defined as having experiences of music in our minds in the absence of any physically present sound. Experiences of 'tunes in the head' seem to be quite common among both musically trained and untrained people, but one of the pressing questions here is how such images are triggered, or what is the engine of musical imagery in our minds. One possible answer could be that mental images of sound-producing body motion may trigger images of sound in our minds, e.g. images of hitting motion triggering images of drum sound. In this presentation, the focus will be on motor elements of musical imagery, in particular on how fragments of combined sound and motion, what may be called sound-motion objects, are triggered by motion impulses in our minds.
-
Godøy, Rolf Inge & Song, Min-Ho
(2017).
Impulse-driven sound-motion objects.
Show summary
Our own and other research seems to suggest that perception and cognition of musical sound are closely linked with images of sound-producing body motion, and that chunks of sound are perceived as linked with chunks of sound-producing body motion, leading us to the concept of sound-motion objects in music (Godøy et al. 2016). One challenge in our research is trying to understand how such sound-motion objects actually emerge in music. Taking into account findings in motor control research as well as in our own research, we hypothesize that there is a so-called intermittent motor control scheme (Sakaguchi et al. 2015) at work in sound-producing body motion, meaning a discontinuous, point-by-point control scheme, resulting in a series of holistically conceived chunks of sound-producing motion, in turn resulting in the perception of music as concatenations of coherent sound-motion objects.
References
Godøy, R. I., Song, M.-H., Nymoen, K., Romarheim, M. H., & Jensenius, A. R. (2016). Exploring Sound-Motion Similarity in Musical Experience. Journal of New Music Research, 45(3), 210-222. doi:10.1080/09298215.2016.1184689
Sakaguchi, Y., Tanaka, M., & Inoue, Y. (2015). Adaptive intermittent control: A computational model explaining motor intermittency observed in human behavior. Neural Networks, 67, 92-109. doi:10.1016/j.neunet.2015.03.012
-
Godøy, Rolf Inge
(2016).
Understanding the musical instant.
Show summary
Granted that musical experience may involve feature durations ranging from the short (on the order of a few hundred milliseconds) to the very long (several hours), the focus of this chapter is on the short range, on what we may subjectively perceive as the musical instant. This is based on the conviction that very many salient features, in both the perception and the production of musical sound, may be found at this timescale.
We have had theories suggesting the importance of short fragments in musical experience, from Husserl's idea of perception by a series of 'now-points' to Schaeffer's theories of sonic objects, as well as systematic psychoacoustic studies of duration thresholds in sound perception and recent studies of duration thresholds for salient musical features by Gjerdingen and Perrot, by Krumhansl, and by Plazak and Huron.
In parallel, research on human motor control has suggested that human motion is goal-directed and proceeds by what may be called key-postures at intermittent points of orientation such as at downbeats and other accents, surrounded by continuous motion, e.g. of the mallet/hand accelerating from the starting position to the impact with the drum membrane and bouncing back again to equilibrium. We believe there is a close relationship between sound-producing motion and perception, and that experiences of the musical instant are linked with biomechanical and motor control constraints involved in music. In particular, we think that motion acceleration peaks and ensuing impacts in performance are typical of salient moments in music.
Understanding the musical instant as linked with various constraints of sound-producing motion could be useful for several domains of music-related research, e.g. in understanding chunking, beat extraction, and entrainment in musical experience.
-
Godøy, Rolf Inge; Song, Min-Ho; Nymoen, Kristian; Haugen, Mari Romarheim & Jensenius, Alexander Refsum
(2016).
¿Por qué marcamos el ritmo de la música con los pies? [Why do we mark the rhythm of music with our feet?]
[Internet].
BBC Mundo.
Show summary
Have you ever been in a bar or a restaurant, or sitting outside or in a lounge, when music comes on and you and others start tapping your feet on the floor to the rhythm of the music?
-
Godøy, Rolf Inge
(2015).
Motormimetic Feature Mapping in Musical Experience.
Show summary
There can be no doubt that in music we often experience correspondences between different sense modalities such as sound, vision, motion, and touch (to mention only the most prominent ones), evident in dance and other kinds of music-related body motion, and also reflected in listeners' innumerable accounts of visual associations with music and in the ubiquitous use of visual metaphors for musical sound. In short, it should not be controversial to claim that music is a multimodal form of art, involving a number of sensations beyond 'pure sound'. But how the different sense modalities are activated and interact in musical experience still presents us with a number of unanswered questions.
In our research, we have been pursuing the idea of what we call "motormimetic cognition", meaning that we see evidence of an incessant mental simulation of sound-related body motion in music perception, primarily of assumed sound-producing body motion (e.g. hitting, stroking, bowing, blowing), but also of various kinds of sound-accompanying body motion (e.g. dancing, walking, gesticulating). We regard such mental simulation of body motion as applicable to most (perhaps all) features of music, and we believe that motormimetic cognition is an amodal and universally applicable mental activity that can translate between modalities in musical experience. We also believe motormimetic cognition can be the basis for a systematic research effort on feature mapping between different modalities in music, in particular between sound and vision by way of studying body motion trajectory shapes and/or posture shapes in music-related contexts.
In my presentation, I shall give an overview of the main ideas of motormimetic cognition and also demonstrate how it is relevant for work with new technologies and in multimedia art.
-
Godøy, Rolf Inge
(2015).
Chunking by intermittent motor control in music.
Show summary
Various theories in music perception and cognition, from classical gestalt theory to more recent experimental work and data-driven modeling, have contributed to our understanding of chunking in musical experience. But our own research on music-related body motion has singled out intermittency in motor control, i.e. a basically discontinuous and point-by-point control scheme, as an essential factor in chunking. Classical motor control theories with claims of so-called closed loop continuous feedback have in recent years been challenged by models suggesting intermittent control manifest in so-called open loop and preprogrammed motor commands, because continuous feedback loops are thought to be too slow for many highly demanding tasks. Various findings in human motor control, e.g. the so-called psychological refractory period, principles of posture-based motion control, of action hierarchies, of goal-directed behaviour, and our own research on music-related body motion, seem to converge in suggesting the existence of intermittent motor control for chunking at the short-term timescale of very approximately 0.5 seconds duration. Following a so-called motor theory approach, the basic tenet here is that schemas of sound-producing body motion are projected onto whatever musical sound it is that we are hearing. This in turn means understanding chunking by recognizing a number of constraints of body motion and motor control, something that suggests an unequal distribution of attention and effort in musical experience, hence, the idea of chunking by intermittent motor control in music.
-
Nuwer, Rachel; Kobb, Christina Sofie; Godøy, Rolf Inge & Jensenius, Alexander Refsum
(2015).
Playing Mozart’s Piano Pieces as Mozart Did.
[Newspaper].
The New York Times.
Show summary
Classical piano pieces by such composers as Beethoven, Mozart and Chopin likely sounded much different when the masters first performed those works than they do today. Pianos themselves have changed considerably — but so, too, has technique.
-
Godøy, Rolf Inge
(2015).
Sound-Motion Similarity in Musical Experience.
Show summary
Main similarity issues in our sound-motion research:
• Cross-modal similarity in music, in particular subjective perception of similarities between sound and body motion
• Approximate similarity in imitation: the ability to reproduce salient and readily recognizable features in oral and improvisational musical contexts
• Similarity in musical translations: recognizing musical ideas as similar across very different instrumental and/or vocal versions and arrangements
A suspected common factor in these (and several other similarity issues) is motor cognition, i.e. our disposition to perceive and think in terms of body motion
-
Godøy, Rolf Inge
(2015).
Discontinuity within continuity: intermittency in musical experience.
Show summary
Although we may experience music as continuous, as making us feel carried away in an unbroken flow of sound and body motion, we also know that music consists of a series of sonic and body motion events in succession. With events such as tone/sound onsets, or more composite events such as chunks with distinct rhythmical or melodic patterns, we could say that there is a duality of discontinuity and continuity in musical experience. This duality has intrigued phenomenological studies of temporal consciousness, leading Husserl (in dialogue with several of his contemporaries) to suggest that musical experience proceeds by a series of intermittent moments in time, by so-called 'now-points', each including a micro context of the immediate past, present and immediate future expectations. Various researchers in the 20th century have followed up with other ideas for reconciling the discontinuous and continuous, but in recent decades we have seen some significant advances in understanding the behavioral and neurocognitive bases for unit formation, by what we call intermittency in musical experience, closely linked with our present research on body motion in musical experience.
Our basic tenet is that we have several concurrent timescales in musical experience, ranging from the very fast (e.g. single vibrations or impulses) to the slower (various singular sounds and/or concatenations of sounds) and very slow (phrases, sections or even whole works of music), but that there is a discontinuity at work both in the production and the perception of musical sound. We see motor control as basically intermittent (as opposed to the continuous control claimed by classical control theory) and proceeding by a series of anticipatory images of future motion chunks. This is linked with a general constraint on action and perception, the so-called psychological refractory period, suggesting that our organism is optimally attuned to controlling and perceiving chunks of action and sound in the approximately 0.5 seconds timescale. These (and some additional) factors converge in suggesting that there are basic intermittency constraints in musical experience, but that with the concatenation of discontinuous chunks in succession, we may also experience continuity at larger timescales in music.
-
Godøy, Rolf Inge
(2014).
Quantal elements in music cognition.
-
Godøy, Rolf Inge
(2014).
Coarticulation in the production and perception of music.
Show summary
The term 'coarticulation' designates the fusion of small-scale events such as single sounds and single sound-producing actions into larger chunks of sound and body motion, resulting in qualitatively new features at the medium-scale level of the chunk. Coarticulation has been extensively studied in linguistics and to a certain extent in other domains of human body motion, but so far not so much in music, so the main aim of our lecture is to provide a background for how we can explore coarticulation in music. The contention is that coarticulation in music should be understood as based on a number of physical, biomechanical, and cognitive constraints, and that it is an essential shaping factor for several perceptually salient features of music.
-
Godøy, Rolf Inge
(2014).
Motor constraints shaping musical experience.
Show summary
We have in recent decades seen a surge in publications on embodied music cognition, and it is now broadly accepted that musical experience is intimately linked with experiences of body motion. Going further into this, it is also clear that music performance is not something abstract and without restrictions, but something traditionally (i.e. before the advent of electronic music) also constrained by our possibilities for body motion.
There are a number of biomechanical constraints reflected in musical sound, such as maximal speeds of human motion, the need for rest, economy of effort, and the avoidance of strain injury, and there are also constraints of motor control, such as the need for grouping and planning ahead. These constraints often lead to a fusion or contextual smearing of sound-producing body motion, in turn also affecting the sound output, effectively contributing to shaping musical sound.
One such prominent constraint-based phenomenon is so-called phase-transition, designating the fusion of otherwise singular actions into more superordinate actions with increasing speed of body motion, e.g. as happens when we accelerate the performance of any rhythmical pattern from slow to fast. Another constraint-based outcome is so-called coarticulation, meaning the fusion of otherwise distinct body motions into more superordinate body motion, entailing also a contextual smearing of musical sound. In our research on music-related body motion we see evidence of such body motion constraints on the shaping of musical sound. We can even claim that we expect such constraints reflected in segmentation, phase-transition, and coarticulation in music, hence, that we may speak of a mutual attunement of bodily constraints and perception in music. Such constraint-based phenomena in musical performance could then be seen as an alternative to more traditional notation-based paradigms in music research.
-
Godøy, Rolf Inge
(2014).
Sound and body motion timescales in musical experience.
Show summary
Musical experience, be it in performance or listening, obviously unfolds in time; however, this may be forgotten when we focus on musical features such as style and historical context. When considering musical schemata, it could be useful to clarify the timescales. Granted that timescales in music extend from the very short of audible vibrations to the very long of whole works, we also have different schemata at different timescales. This has become particularly evident in our research on music and body motion, which leads us to suggest three main timescales at work in musical experience:
• The micro timescale of continuous sound and body motion with features such as pitch, stationary dynamics and timbre, as well as fast fluctuations of these features.
• The meso timescale, approximately at the 0.5 to 5 seconds timescale, of what we call chunks or sonic objects. This is the timescale of many salient musical sound features such as rhythm, texture, melodic fragments, modality, expressivity, as well as most salient body motion features.
• The macro timescale is that of several meso timescale chunks in succession, such as in sections and whole works of music; this is the scale on which narrative or dramaturgical musical elements are found.
Although historically informed listening may variably involve all these timescales, there can be little doubt that the most important is the meso timescale, and this is also the timescale where music-related body motion elements are most clearly manifest. Clearer notions of timescales along these lines could be useful for discussions of schemata in musical experience, and should encourage us to be more critical of various inherited notions of form in Western musical thought.
-
Godøy, Rolf Inge
(2014).
Postures and motion in musical experience.
There are innumerable and strong links between sound and body motion in musical experience, as we may readily observe everywhere in listening and performance situations. In what may be broadly called a motor theory perspective, our perception of musical sound is so closely linked with our experiences of sound-producing body motion that music could be understood as a fusion of sound and body motion, i.e. as a composite, multimodal form of art.
A fair amount of research in music psychology and other cognitive sciences from the last couple of decades seems to support such a motor theory perspective. In our own research we have looked at how people with different levels of musical training make spontaneous body motions that reflect salient features of sound production, and we have also looked at the actual sound-producing body motion made by professional musicians in various performance situations.
In making a summary of our own and others' findings, we are now developing a model of sound-motion feature correspondences based on the twin concepts of postures and motion in musical experience. Briefly stated, postures denote the shape and position of sound-producing effectors (fingers, hands, arms, torso, feet, vocal tract) at salient moments in the music, and motion denotes the continuous transition between these postures. The basic idea is that these postures are landmarks, or what we have called goal-points, in the continuous stream of sound and body motion in music, and that they are the basis for the formation of chunks (gestalts, sonic objects) in musical experience. The long-term aim of this work is to enhance our understanding of unit formation in music, or more generally, to understand the interplay of continuity and discontinuity in musical experience.
-
Haugnes, Gunhild M.; Jensenius, Alexander Refsum; Tørresen, Jim & Godøy, Rolf Inge
(2014).
Musikk + IT = kreativ boom.
[Internet].
Institutt for informatikk.
A tool that detects cerebral palsy in premature infants, a musician who became a ski-app founder, and the development of trousers with built-in drums.
-
Godøy, Rolf Inge
(2013).
Understanding Coarticulation in Music.
The term 'coarticulation' designates the fusion of small-scale events such as single sounds and single sound-producing actions into larger chunks of sound and body motion, resulting in qualitatively new features at the medium-scale level of the chunk. Coarticulation has been extensively studied in linguistics and to a certain extent in other domains of human body motion, but so far not so much in music, so the main aim of this paper is to provide a background for how we can explore coarticulation in music. The contention is that coarticulation in music should be understood as based on a number of physical, biomechanical and cognitive constraints, and that it is an essential shaping factor for several perceptually salient features of music.
-
Godøy, Rolf Inge
(2013).
The Convergence of "Hard" and "Soft" in Music Technology.
Music has had a long-lasting relationship with technology, extending from sophisticated mechanical instruments in earlier centuries to present-day digital means of music production and distribution. With one foot in technology and the other in musical aesthetics, the convergence of "hard" and "soft" is more evident than ever in current music technology research and development. Given the seemingly limitless possibilities of digital music technology to generate any sound, previously heard or unheard, one major challenge now is to develop better means for accessing subjective and affective features of music, in short, to make more musically meaningful man-machine interaction schemes.
-
Godøy, Rolf Inge
(2013).
Coarticulation in the production and perception of music.
In the past couple of decades, we have seen much research documenting close links between music and body motion. However, we need to have a better understanding of how meaningful units of sound and body motion are generated and perceived in music. The phenomenon of coarticulation, meaning the fusion of micro-level actions and sonic events into larger and somehow meaningful chunks of sound and motion, could help us not only to better understand sound and body motion links in music, but also contribute to our understanding of expressive and affective features of music. Coarticulation has been extensively studied in linguistics, to a certain extent in human movement science, but not so much in music. In my presentation, I shall give an overview of our own and other research on coarticulation in music.
-
Godøy, Rolf Inge; Jensenius, Alexander Refsum & Grønli, Kristin Straumsheim
(2013).
Musikkens elektroniske fremtid.
[Internet].
Forskningsrådets Nyheter.
Musicians will need new technological skills and will play digitally extended instruments. Listeners will go from being passive consumers to taking part in influencing and producing the music.
-
Godøy, Rolf Inge
(2012).
Continuity and discontinuity in music-related motion.
The many and close links between sound and body motion in music seem now to be well documented, and it seems fair to claim that sensations of body motion are very often (or perhaps always) integral to musical experience.
But in spite of enhanced methods for studying music-related body motion in the last couple of decades, we still have substantial challenges in understanding how such body motion is perceived and conceived by musicians and listeners alike. One main question here is how continuous streams of sound and body motion are segmented into somehow meaningful chunks, in other words, how continuity and discontinuity interact in our subjective experience.
In our research, we have found it useful to distinguish between different timescales of sound and body motion, and furthermore, to focus on what we call the meso-level timescale with chunks of sound and body motion in the approximately 0.5 to 5 seconds duration range. At this timescale, we believe sensations of continuity and discontinuity coexist in holistically perceived chunks of sound and body motion, and that this coexistence is based on the convergence of various physical, biomechanical, neurocognitive and musical-aesthetical constraints. In my talk, I shall give a summary of past and present research on this topic, including practical applications of our ideas here to various music-related body motion data.
-
Godøy, Rolf Inge
(2012).
Sonic Object Design.
-
Godøy, Rolf Inge; Jensenius, Alexander Refsum; Voldsund, Arve; Glette, Kyrre Harald; Høvin, Mats Erling & Nymoen, Kristian
(2012).
Classifying Music-Related Actions.
Our research on music-related actions is based on the conviction that sensations of both sound and body motion are inseparable in the production and perception of music. The expression "music-related actions" is here used to refer to chunks of combined sound and body motion, typically in the duration range of approximately 0.5 to 5 seconds. We believe that chunk-level music-related actions are highly significant for the experience of music, and we are presently working on establishing a database of music-related actions in order to facilitate access to, and research on, our fast growing collection of motion capture data and related material. In this work, we are confronted with a number of perceptual, conceptual and technological issues regarding classification of music-related actions, issues that will be presented and discussed in this paper.
-
Godøy, Rolf Inge
(2012).
Postures, Trajectories, and Sonic Shapes.
During the last decades, there has been a growing interest in the relationships between sound and body motion in music, resulting in several publications claiming that music is multimodal, i.e. that music in addition to sound also includes elements of body motion such as kinematics (visual images of motion trajectories), dynamics (sense of motion effort) and haptics (sense of touch): we hear the sound of a musical performance and at the same time see (or imagine) the body motions of the performers and mentally simulate the effort and sense of touch related to the performance.
One common feature of these multimodal elements in music is the notion of shape: we see or imagine the shape of the body motion trajectories, of the fluctuating effort, and of the tactile experience of playing the instruments (or the motions of the vocal apparatus in the case of singing). Also, notions of shape are well established in the perceptual attributes of sound as so-called envelopes, both in the overall dynamic unfolding of sounds and in the stable, as well as in the evolving, or even transient spectral content of sounds. And needless to say, notions of shape are integral to our Western conceptual apparatus as reflected in common practice music notation (and its more recent extensions such as MIDI) for representing e.g. melodic, textural and intensity shapes.
Given this background, the focus of my presentation will be on modelling shape in musical experience by sequences of key-postures of the effectors (fingers, hands, arms, torso, etc.) at salient moments in the musical performance (downbeats and other accents), with continuous and so-called coarticulated (fused) motion trajectories between these key-postures. Based on evidence from so-called motor theories of perception, sonic shapes can be linked with the shapes of such key-postures and trajectories, enhancing our understanding of music as multimodal embodied shapes.
-
Godøy, Rolf Inge
(2012).
Thinking Shapes in Musical Experience.
-
Godøy, Rolf Inge; Andresen, Kari; Jensenius, Alexander Refsum & Thomsson, Annica
(2012).
Lyd for kroppen.
[Internet].
uio.no.
Music and motion belong together. So what happens in the head when you sit completely still? The connection between the body and sounds is now being researched.
-
Godøy, Rolf Inge
(2011).
Coarticulation in Music-Related Gestures.
In our research on music-related gestures (http://www.fourms.uio.no/), we have come to believe that the phenomenon of coarticulation plays an essential role in both the production and the perception of music. Coarticulation here means the fusion of singular actions and sound-events into more superordinate continuous movements and sound passages, e.g. the singular rapid finger movements and sounds of a piano performance fused into more superordinate hand/arm movements and continuous melodic contours.
Coarticulation is well known in linguistics and in some human movement sciences, but relatively little studied in music. However, the few available studies of coarticulation in music as well as our own video and motion capture data seem to show coarticulation at work in relation to singular sound-events, and analyses of the sound similarly show contextual smearing of sound events that are the hallmarks of coarticulation.
With a recognition of coarticulation at work in the production and perception of music, we believe we can better understand how various contextual effects emerge in music, i.e. that phenomena such as rhythmic, textural, and melodic patterns can be understood as shaped by coarticulation.
-
Godøy, Rolf Inge & Bjørkeng, Per Kristian
(2011).
Kropp og sinn i ett og alt.
[Newspaper].
Aftenposten.
Interview with Rolf Inge Godøy and colleagues about body-related experience of music.
-
Godøy, Rolf Inge
(2011).
Sound-Action Timescales. Lecture at the International Summer School in Systematic Musicology, Jyväskylä, Finland, 08/08/11-18/08/11.
In our ongoing research, we seek to correlate different sonic feature timescales with sensations of body movements, ranging from fast (e.g. trembling, shaking, etc.), to slower (e.g. whole arm movement), to slow (e.g. torso, whole body movement), and also to quasi-stationary body postures. In many cases, there are clear causal relationships between sound-producing actions of musicians and emergent sonic features (e.g. tremolo movements of the hand and tremolo sounds), causal relationships that seem to be readily perceived by listeners. But images of embodied energy patterns in sound can also be extended into generic categories applicable to sounds regardless of origin, providing a conceptual apparatus for categorizing sonic features in music theory, music analysis and music information retrieval.
-
Godøy, Rolf Inge
(2011).
Images of sound, postures and trajectories in music. Keynote lecture, the Embodiment-Experiment Seminar, Department of Music, University of York, May 10th-11th, 2011.
-
Godøy, Rolf Inge
(2011).
Sonic feature timescales and music-related actions.
-
Kozak, Mariusz; Nymoen, Kristian & Godøy, Rolf Inge
(2011).
The Effects of Spectral Features of Sound on Gesture Type and Timing.
-
Godøy, Rolf Inge
(2010).
Musical Gestures: Sound, Movement, and Meaning - a book presentation.
-
Glette, Kyrre Harald; Jensenius, Alexander Refsum & Godøy, Rolf Inge
(2010).
Extracting action-sound features from a sound-tracing study.
The paper addresses possibilities of extracting information from music-related actions, in the particular case of what we call sound-tracings. These tracings are recordings from a graphics tablet of subjects' drawings associated with a set of short sounds. Although the subjects' associations with sounds are highly subjective, and the resulting tracings therefore differ widely, an attempt is made at extracting some global features that can be used for comparison between tracings. These features are then analyzed and classified with an SVM classifier.
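The global-feature step described in this abstract can be sketched in a few lines. This is a hypothetical illustration, not the authors' actual feature set: the function name `tracing_features` and the three features (total path length, bounding-box aspect ratio, mean per-sample step) are assumptions chosen for the sketch; such feature vectors could then be fed to an SVM classifier.

```python
import numpy as np

def tracing_features(points):
    """Global features of one 2-D sound-tracing (an N x 2 array of
    tablet samples). Hypothetical feature set for illustration only."""
    d = np.diff(points, axis=0)              # per-sample displacement vectors
    step = np.linalg.norm(d, axis=1)         # per-sample displacement lengths
    span = points.max(axis=0) - points.min(axis=0)   # bounding-box extent
    aspect = span[0] / span[1] if span[1] else np.inf
    return {"path_length": float(step.sum()),
            "aspect": float(aspect),
            "mean_step": float(step.mean())}
```

For example, a straight-line tracing from (0, 0) to (3, 4) yields a path length of 5.0 and an aspect ratio of 0.75.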
-
Jensenius, Alexander Refsum & Godøy, Rolf Inge
(2010).
Input technologies for music-related actions.
-
Godøy, Rolf Inge
(2010).
Music-related Actions.
The close links between sound and movement are ubiquitous in musical performance, listening, or innumerable everyday situations. Body movements (real or imagined) seem so integral to musical experience that it is hard to think of music without also thinking of body movement.
Increasing interest in studying music-related body movement has given us improved methods and technologies for our research, yet one of the most intriguing issues is how we conceptualize and represent sound and movement in our minds as meaningful actions: how can we have more or less solid images of sound and movement as these in their very nature are transient and ephemeral? The question is both conceptual and pragmatic as it directly concerns how we capture, process, and represent sound and movement data.
Our strategy is to focus on fragments of sound and movement, on what we call music-related actions at the chunk-level, and in my presentation I shall give an overview of the main elements of our ongoing research here.
-
Godøy, Rolf Inge
(2010).
Sound Shapes.
We have seen important advances in musical acoustics, psychoacoustics, and more recently in embodied music cognition, but we still seem to lack a good conceptual apparatus for speaking about subjectively experienced sonic features in music. Inspired by the seminal work of Pierre Schaeffer and coworkers half a century ago, one of our long-term goals is trying to bridge the gap between subjectively experienced sonic features expressed in various tactile and/or kinematic metaphors such as rough, smooth, narrow, open, thick, thin, etc., and corresponding sound signal features. Such tactile-kinematic metaphors can be collectively called 'sound shapes', and could also be useful for musical aesthetics as a conceptual apparatus for speaking about subjectively experienced sonic features.
-
Jensenius, Alexander Refsum; Glette, Kyrre Harald; Godøy, Rolf Inge; Høvin, Mats Erling; Nymoen, Kristian & Skogstad, Ståle Andreas van Dorp
(2010).
fourMs, University of Oslo – Lab Report.
-
Godøy, Rolf Inge
(2009).
Anticipatory chunking of music-related actions.
One major element in music-related body movement is the emergence of meaningful units of sound and movement, what we call chunking. Chunking is often explained by various gestalt-like principles such as closure or belonging, or by qualitative discontinuities such as shifts between sound and silence, or by detecting repetitions of rhythmic, melodic, timbral, etc. patterns, hence essentially by looking at the signal (be that in sound and/or in movement). As a supplement to these signal-based cues for chunking, we now turn to the role of anticipatory cognition, meaning to the preparatory elements in movement and the control of movement. Anticipatory elements are clearly observable in the phenomenon of coarticulation, meaning the subsumption and contextual smearing of otherwise separate actions and sounds into more superordinate units, so that the shape and position of the effectors (lips, vocal tract, fingers, hands, etc.) at any moment are determined by what to do next (as well as by what was just done). There is now converging evidence from various behavioral research for the existence of anticipatory chunking, suggesting that we may conceive of a chunk of music-related movement by an "instantaneous" overview image, "in-a-now", as was suggested by phenomenological philosophers more than 100 years ago. The challenge now is to substantiate these ideas of anticipatory chunking in our research on music-related movement.
-
Godøy, Rolf Inge
(2009).
Sound, Movement, Key-Frames and Inter-Frames.
Close links between sound and movement are ubiquitous in musical performance, listening, or innumerable everyday situations. Body movements (real or imagined) seem so integral to musical experience that it is hard to think of music without also thinking of body movement.
Increasing interest in studying music-related body movement has given us improved methods and technologies for our research, yet one of the most intriguing issues is how we conceptualise and represent sound and movement in our minds and in our research: How can we have more or less solid images of sound and movement as they in their very nature are transient and ephemeral? The question is both conceptual and pragmatic as it directly concerns how we capture, process, and represent sound and movement data.
After years of theoretical reflection alternating with practical work, our solution is to regard music-related movement as focused around key-frames, meaning salient postures in time, interleaved with inter-frames, meaning continuous movement between the key-frames. Borrowed from film animation and now applied in human movement science, we believe key-frames and inter-frames correspond to similar elements in the sound, giving us a coherent framework for studying music-related movement.
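The key-frame/inter-frame scheme borrowed from film animation can be illustrated with a minimal interpolation sketch. The function below is a hypothetical example, not from the lecture: it treats a posture as a numeric vector (e.g. joint angles or marker positions) and fills in inter-frames by piecewise-linear interpolation between key-frame postures.

```python
import numpy as np

def interpolate_postures(key_times, key_postures, t):
    """Inter-frame posture at time t, computed by piecewise-linear
    interpolation between key-frame postures (one posture vector per
    key time). A simple animation-style key-frame sketch."""
    key_times = np.asarray(key_times, float)
    key_postures = np.asarray(key_postures, float)
    # interpolate each posture dimension independently
    return np.array([np.interp(t, key_times, key_postures[:, d])
                     for d in range(key_postures.shape[1])])
```

With key postures [0, 0], [2, 4], [4, 0] at times 0, 1, 2, the inter-frame posture at t = 0.5 is the midpoint [1, 2] of the first segment.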
-
Godøy, Rolf Inge & Jensenius, Alexander Refsum
(2009).
Body Movement in Music Information Retrieval.
-
Godøy, Rolf Inge
(2009).
Chunking sound-actions in musical experience.
One of the major challenges in studying sound-action relationships in music is that of how somehow meaningful units of sound and action emerge from the continuous stream of sensations, a process we like to call chunking. It seems that chunking is related to a number of constraints for body movement and for sound perception, as well as related to various musical-aesthetical elements. In this lecture, I will give an overview of some current theories of chunking as well as demonstrate how we are trying to study chunking in our ongoing research.
-
Jensenius, Alexander Refsum; Skogstad, Ståle Andreas van Dorp; Nymoen, Kristian; Godøy, Rolf Inge; Tørresen, Jim & Høvin, Mats Erling
(2009).
Reduced displays of multidimensional motion capture data sets of musical performance.
Background: Carrying out research in the field of music and movement involves working with different types of data (e.g. motion capture and sensor data) and media (i.e. audio, video), each having its own size, dimensions, speed etc. While each of the data types and media have their own analytical tools and representation techniques, we see the need for developing more tools that allow for studying all the data and media together in a synchronised manner. We have previously developed solutions for studying musical sound and movement in parallel by using synchronised spectrograms of audio and motiongrams of video. Now as we have started using an infrared motion capture system in our research, we see the need for better visualisation techniques of the highly multidimensional data sets being recorded (e.g. 50 markers x 3 dimensions x 100 Hz). While there are several techniques for doing this independently of audio and video, we are working on tools that integrate well with our displays of spectrograms and motiongrams.
Aims: Creating reduced representations of multidimensional motion capture data of complex music-related body movement that can be used together with spectrograms and motiongrams.
Results/Main Contribution: We present some of the visualisation techniques we have been developing to display multidimensional data sets: 1) reduction based on collapsing dimensions, 2) reduction based on frame differencing, 3) colour coding of movement features. We show how these techniques allow reduced displays of multidimensional motion capture data sets to be shown synchronised with spectrograms and motiongrams.
Conclusions/Implications: The techniques presented allow for studying relationships between movement and sound in music performance, and make it possible to create visual displays of movement and sound that can be used on screen and in printed documents.
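As a rough illustration of reduction technique 2) above (frame differencing), the sketch below collapses a (frames × markers × 3) motion-capture array to a single per-frame curve. The function name `quantity_of_motion` and the exact reduction (summed marker displacements between consecutive frames) are assumptions for the example, not the paper's implementation.

```python
import numpy as np

def quantity_of_motion(mocap):
    """Reduce a (frames x markers x 3) motion-capture array to one value
    per frame transition: the summed displacement of all markers between
    consecutive frames (frame differencing)."""
    diff = np.diff(mocap, axis=0)        # frame-to-frame displacement vectors
    dist = np.linalg.norm(diff, axis=2)  # per-marker displacement magnitudes
    return dist.sum(axis=1)              # collapse markers into a 1-D curve
```

Such a one-dimensional curve can be plotted on the same time axis as a spectrogram or motiongram, which is the kind of synchronised display the abstract describes.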
-
Godøy, Rolf Inge; Jensenius, Alexander Refsum & Nymoen, Kristian
(2009).
Coarticulation of sound and movement in music.
-
Godøy, Rolf Inge & Jensenius, Alexander Refsum
(2009).
Typomorphological features of sonic objects.
-
Lillebo, Maria Røbech & Godøy, Rolf Inge
(2008).
Sanger som fester seg på hjernen.
[Internet].
P4.
-
Godøy, Rolf Inge; Jensenius, Alexander Refsum & Landsverk, Johanne
(2008).
Musikk = kroppsrørsle.
[Newspaper].
Forskerforum.
-
Godøy, Rolf Inge
(2008).
Goal-points and trajectories in music-related movement.
In our research on music-related movement (http://musicalgestures.uio.no), we have seen that listeners with very different levels of musical training all seem to be able to imitate sound-producing gestures suggested by the music, evident in various kinds of 'air instrument' performance such as air guitar, air drums, air piano, etc. We understand this in the framework of 'embodied cognition', meaning that perception is closely linked with incessant mental simulations of body movements (Gallese and Metzinger 2003). This means that we make sense of what we see, hear, feel, etc., by mentally simulating (and sometimes also overtly carrying out) various body movements, both our own and those of others, associated with whatever we perceive and think.
Also, the various 'air performance' gestures and other sound-related gestures we have studied seem centered on certain salient points in the music such as various accents (downbeats or other kinds of accents) or melodic or textural peaks. We understand this rendering of salient points as goal-directed behavior, meaning robust perception and rendering of the goals of movements and more variability or 'inaccuracies' in the movement trajectories between these goals (Wohlschläger et al. 2003). We use the expression 'goal-points' to denote this phenomenon, meaning the shape or posture and the positions of the effectors (e.g. shape and position of the hands on the keyboard, angle and position of hands and arms in relation to the drums, etc.) at certain points in time. Between these goal-points, we have more or less continuous movement trajectories; however, these trajectories are subordinate to the goal-points.
We thus see music-related movements (both sound-producing and sound-accompanying movements) as organized around such a succession of goal-points, and this may have significant consequences not only for how we interpret music-related movement, but also for how we segment or chunk musical sound in general.
-
Godøy, Rolf Inge
(2008).
Chunking Sound for Musical Analysis.
One intriguing issue in music analysis is that of segmentation, or parsing, of continuous auditory streams into some kinds of meaningful and analytically convenient units, a process I here prefer to denote as chunking. The purpose of this paper is to present a theory of chunking in musical analysis based on recent ideas of embodied auditory cognition and our own research on musical gestures (http://musicalgestures.uio.no).
Although the topic of chunking in sound has been discussed from the time of early music-related gestalt theory and phenomenology at the end of the nineteenth century up to present theories of auditory perception, there can be no doubt that the most consistent focus on chunking of musical sound may be found in the theoretical works of Pierre Schaeffer with his ideas of the fragment, of the sonic object as the most significant phenomenon in music (Schaeffer 1966). Typically, the sonic object in Schaeffer's theory is in the range of a few seconds, what I here call a meso-size chunk. Interestingly, recent neurocognitive research seems to agree with the idea of attention spans of approximately three seconds (Pöppel 1997, Varela 1999). And as to the human action side, there seems to be a similar convergence of human actions to such meso-size chunks in the approximately three-second range (Schleidt and Kien 1997). Our own observation studies of sound-related gestures also seem to converge on this size as a 'normal' size chunk of music-related movement (Godøy 2006a, Godøy, Haga, and Jensenius 2006a and 2006b).
In this paper, I shall present converging evidence in support of the primordial role of such meso-size chunks in music perception and cognition, and argue that such meso-size chunks also should be the basis for musical analysis, as well as present various musical examples to illustrate this.
-
Godøy, Rolf Inge
(2008).
Sound Actions: Human movement in the perception and cognition of music.
We can see people moving to music everywhere: in dancing, in marching, in all kinds of everyday private or not so private listening situations like walking down the street making movements to the music of an iPod, or at concerts (provided it is socially acceptable for listeners to move), and of course in the performance of music. Listeners, regardless of training or level of expertise, seem to be able to spontaneously make movements that more or less reflect various salient features of the music. This makes us believe that human movement is an integral part of not only music perception, but also of music cognition in general, in that we may remember and imagine music as movements and not only as "pure sound".
In this lecture, various research findings on the intimate links between human movement and music will be reviewed, and the consequences these findings could (or should) have for other areas of music research will be discussed. In particular, the issue of segmentation of music-related movements into somehow meaningful action chunks will be focused on, suggesting that various biomechanical and motor control elements may be influential in how we perceive and/or imagine musical sound.
-
Jensenius, Alexander Refsum; Nymoen, Kristian & Godøy, Rolf Inge
(2008).
A Multilayered GDIF-Based Setup for Studying Coarticulation in the Movements of Musicians.
The paper presents some challenges faced in developing an experimental setup for studying coarticulation in music-related body movements. This has included solutions for storing and synchronising motion capture, biosensor and MIDI data, and related audio and video files. The implementation is based on a multilayered Gesture Description Interchange Format (GDIF) structure, written to Sound Description Interchange Format (SDIF) files using the graphical programming environment Max/MSP.
-
Godøy, Rolf Inge; Jensenius, Alexander Refsum & Nymoen, Kristian
(2008).
Production and perception of goal-points and coarticulations in music.
From our studies of sound-related movement (http://musicalgestures.uio.no), we have reason to believe that both sound-producing and sound-accompanying movements are centered around what we call goal-points, meaning certain salient events in the music such as downbeats, or various accent types, or melodic peaks. In music performance, these goal-points are reflected in the positions and shapes of the performers' effectors (fingers, hands, arms, torso, etc.) at certain moments in time, similar to what is known as keyframes in animation. The movement trajectories between these goal-points, similar to what is known as interframes in animation, may often demonstrate the phenomenon of coarticulation, i.e. that the various smaller movements are subsumed under more superordinate and goal-directed movement trajectories. In this paper, we shall present a summary of recent human movement research in support of this scheme of goal-points and coarticulations, as well as demonstrate this scheme with data from our ongoing motion capture studies of pianists' performance and other researchers' motion capture data. ©2008 Acoustical Society of America
-
Godøy, Rolf Inge
(2007).
Gesture research in music composition context.
-
Godøy, Rolf Inge
(2007).
Temporal phenomena and enigmas in the perception and cognition of musical sound.
-
Godøy, Rolf Inge
(2007).
Gestural-Sonorous Objects.
-
Godøy, Rolf Inge
(2007).
Chunking sound in listening and analysis.
Short abstract:
One intriguing issue in the analysis of electroacoustic music (and other kinds of music as well) is the segmentation or parsing of continuous auditory streams into meaningful and analytically convenient units, a process I here denote as chunking. This paper shall present elements of a theory of chunking and propose a three-layer model that can accommodate musical features at three different time-scales: i) Micro-level (or sub-chunk level), focused on the content of the chunk, what Schaeffer called its contexture, including features such as grain and motion. ii) Meso-level (or chunk-level), focused on the overall shape-features of the chunk, corresponding to Schaeffer's typological categories. iii) Macro-level (or supra-chunk level), consisting of the cumulative memory of several successive chunks, as in the case of longer passages of music. This three-level model is reasonably well founded and could be convenient for analytical purposes, something that will be illustrated with sound examples during the presentation.
-
Godøy, Rolf Inge
(2007).
Geometry and effort in gestural renderings of musical sound.
In our current research on music-related gestures (http://musicalgestures.uio.no), we have had a particular focus on the spontaneous gestures that listeners make to musical sound. This has been motivated by the belief that perception and cognition of musical sound is intimately linked with mental images of movement, and that a process of incessant motor imagery is running in parallel with listening to, or even just imagining, musical sound. We have called this motormimetic cognition, and see evidence for this in a number of research findings as well as in our own observation studies. Furthermore, we believe hand movements have a privileged role in motormimetic cognition of musical sound, and that these hand movements may trace the geometry (i.e. elements such as pitch contours, pitch spread, rhythmical patterns, textures, and even timbral elements as shapes) as well as convey sensations of effort of musical sound, hence the focus in this paper on geometry and effort in the gestural renderings of musical sound.
There are many different gestures that may be associated with music. Using the Gibsonian concept of affordance, we can thus speak of rich gestural affordances of musical sound. For practical purposes, we can in this paper think of two main categories, sound-producing gestures (such as hitting, stroking, bowing) and sound-accompanying gestures (such as dancing, marching, making various movements to the music), as well as several sub-categories of these. The distinction between these two main categories, as well as between their sub-categories, may not always be so clear (e.g. musicians make gestures in performance that are probably not strictly necessary for producing sound, but may be useful for reasons of motor control or physiological comfort, or have communicative functions towards other musicians or the audience).
But in order to carry out more systematic observation studies of gestural renderings, we have proceeded from giving subjects rather well-defined tasks with limited gestural affordances to progressively more open tasks with quite rich gestural affordances: from studies of air-instrument performances where subjects were asked to make sound-producing movements, to what we have called sound-tracing studies where the musical excerpts were quite restricted in their number of salient features, and on to what we called free dance gestures with more complex, multi-feature excerpts and rather general instructions to subjects about making spontaneous gestural renderings based on what they perceived as the most salient features.
The idea of gestural rendering of musical sound is based on a large body of research, ranging from the classical motor theory of perception to more recent theories of motor involvement in perception in general, in audio perception more specifically, and in music-related tasks in particular.
Obviously, auditory-motor couplings, as well as the capacity to render and/or imitate sound, are not restricted to hand movements, as is evident from vocal imitation of both non-musical and musical sound (e.g. so-called beat-boxing in hip-hop and other music, and scat singing in jazz). But the focus on hand movements in our case is based not only on innumerable informal observations of listeners making hand movements to musical sound, but also on the belief that hand movements have a privileged role from an evolutionary point of view and from a general gesture-cognitive point of view. Furthermore, we believe that a listener, through a process of translation by the principle of motor equivalence, may switch from one set of effectors to another, revealing more amodal gestural images of musical sound.
-
Jensenius, Alexander Refsum; Kvifte, Tellef & Godøy, Rolf Inge
(2006).
Towards a Gesture Description Interchange Format [Poster].
-
Godøy, Rolf Inge
(2006).
Coarticulated gestural-sonorous objects in music.
-
Godøy, Rolf Inge
(2006).
Gestural-Sonorous Awareness in Musical Imagery.
One intriguing issue in research on musical imagery (e.g. various contributions in Godøy and Jørgensen 2001) has been the relationship between sonorous and gestural images in our consciousness. Informal accounts by musicians and neurocognitive studies (e.g. Zatorre and Halpern 2005 and various references given there) seem clearly to support the idea of close links between auditory and motor elements in musical imagery. Furthermore, earlier conceptual work on embodied cognition (e.g. Johnson 1987) and related neurocognitive work (e.g. Berthoz 1997) now seem to fuse into a coherent understanding of bodily movement as a general basis for cognition and consciousness (e.g. Gallese and Lakoff 2005). Lastly, our own current studies of musical gestures (http://musicalgestures.uio.no) seem to indicate that listeners, even non-experts (novices), have spontaneous and fairly robust images of sound-producing gestures (Godøy, Haga, and Jensenius 2006), leading us to the idea of gestural-sonorous awareness in the perception and imagery of music.
This paper will briefly review the abovementioned research and propose a model for understanding how gestural images are integral to our awareness of musical sound, furthermore link this with phenomenological theory of internal temporal consciousness in music (Husserl 1893), and finally suggest some practical applications of volitional, "gesture-guided" musical imagery.
Keywords:
Consciousness, awareness, intentionality, gesture, sound, musical imagery, motor imagery.
-
Godøy, Rolf Inge; Haga, Egil & Jensenius, Alexander Refsum
(2006).
Exploring Music-Related Gestures by Sound-Tracing - A Preliminary Study.
-
Jensenius, Alexander Refsum; Godøy, Rolf Inge & Kvifte, Tellef
(2006).
Towards a gesture description interchange format.
This paper presents our need for a Gesture Description Interchange Format (GDIF) for storing, retrieving and sharing information about music-related gestures. Ideally, it should be possible to store all sorts of data from various commercial and custom made controllers, motion capture and computer vision systems, as well as results from different types of gesture analysis, in a coherent and consistent way. This would make it possible to use the information with different software, platforms and devices, and also allow for sharing data between research institutions. We present some of the data types that should be included, and discuss issues which need to be resolved.
-
Jensenius, Alexander Refsum; Gupta, Ram Eivind; Godøy, Rolf Inge; Haga, Egil; Aksnes, Hallgjerd & Kristoffersen, Kristian Emil
(2005).
Kroppslyd.
-
Jensenius, Alexander Refsum; Godøy, Rolf Inge & Spilde, Ingrid
(2005).
Mellom teknologi og musikk.
[Internet].
Forskning.no.
Music technology is a fairly young field. On 30 September and 1 October, experts from across the country gather at Musikkteknologidagene 2005 to discuss the field's form and future.
-
Godøy, Rolf Inge & Ng, Kia
(2005).
COST287-ConGAS Project Presentation.
-
Godøy, Rolf Inge
(2005).
Embodied Phenomenological Music Theory.
-
Godøy, Rolf Inge
(2005).
Gestural Sonorous Objects: Re-thinking Schaeffer's Typo-morphological concepts.
In this paper, I will try to show how Pierre Schaeffer's focus on fragments of musical sound, on what he called sonorous objects (Schaeffer 1966), can be re-interpreted as intimately linked with mental images of action fragments, with what I here call gestural objects. To demonstrate these gestural-sonorous object links, I will briefly present some relevant concepts from Schaeffer's work, some ideas from recent work on embodied cognition, and conclude with some Schaeffer-inspired elements in our on-going research on gesture-based explorations of musical sound and the relevance of this for the analysis of electro-acoustic music. Essentially, this means rethinking Schaeffer's concepts of sonorous objects as gesture-related concepts, or as procedural knowledge, i.e. as active knowledge of movement.
-
Godøy, Rolf Inge; Haga, Egil & Jensenius, Alexander Refsum
(2005).
Playing "Air Instruments": Mimicry of Sound-producing Gestures by Novices and Experts.
-
Paulsen, Cathrine TH & Godøy, Rolf Inge
(2004).
Luftgitaren viktigere enn du tror.
[Internet].
Forskning.no.
-
Jensenius, Alexander Refsum; Godøy, Rolf Inge; Haga, Egil & Aksnes, Hallgjerd
(2004).
Musical Gestures.
-
Godøy, Rolf Inge; Jensenius, Alexander Refsum & Borchgrevink, Hild
(2004).
Seminar innleder prosjekt om musikk og gestikk.
[Internet].
MIC.
In connection with the launch of a research project on musical gesture, the Department of Music and Theatre at the University of Oslo invites all interested to a seminar on this topic on 14 and 15 May. The project, which involves both Norwegian and international research groups, will investigate connections between music, human movement, and musical concepts. The project is led by Rolf Inge Godøy at IMT. The seminar is open to all.
-
Godøy, Rolf Inge
(2004).
Musical Gestures Research.
-
Godøy, Rolf Inge
(2004).
Motor-Mimetic Cognition: Mediating Naturalistic and Culturalistic Approaches in Musicology.
-
Godøy, Rolf Inge
(2003).
Gestural Imagery in the Service of Musical Imagery.
-
Godøy, Rolf Inge
(2002).
L'imagerie musicale.
-
Godøy, Rolf Inge
(2001).
Temporal re-coding by action-images.
The basic idea in this paper is that of an ecologically conditioned re-coding of musical sound into action-images when we listen to music, and hence, that temporal coding in music perception and cognition could be understood as a matter of forming motor images.
-
Godøy, Rolf Inge
(2001).
Simple yet complex: Action-images in musical thought.
The ecological bases of musical practice are a promising field for musicology in the future. Abandoning traditional symbol-based approaches, ecological approaches take as their point of departure the continuous audio-acoustic flux, as well as the human capacity for extracting focused and relatively stable sound-objects from this flux. Images of sound-producing actions play an important role in this emergence of focused sound-objects from the continuous flux, hence the idea of music cognition by action-images in musical thought. There are of course many unresolved questions and challenges in such an approach; however, action-images hold promise for elucidating some enigmatic elements of music cognition, such as parsing, chunking, and temporal coding.
-
Godøy, Rolf Inge
(2000).
Cognition musicale par mimetisme moteur.
The capacity to perceive and imagine musical sounds through images of sound production, that is, to mentally simulate the actions one assumes to lie behind what one hears.
-
Godøy, Rolf Inge
(1999).
Lyden og kroppen.
Parergon.
ISSN 0313-6221.
A brief statement on the future of the discipline of composition and of sound research.
-
Schneider, Albrecht & Godøy, Rolf Inge
(1999).
Perspectives and Challenges.
Although we have tried to define musical imagery as "our mental capacity for imagining musical sound in the absence of a directly audible sound source", this may in turn imply a number of different approaches and paradigms. Actually, this pluralism of approaches is fortunate, as we believe that the shifts of perspective caused by the various paradigms are fruitful and constructive. In particular, the phenomenological and gestaltist schools of thought of the late nineteenth and early twentieth centuries still seem to have relevance for our work today. We shall also try to see the topic of musical imagery in relation to other topics in music cognition, as the boundaries are in many cases rather unclear (e.g. in relation to auditory memory, various components of audition, music and bodily movement, etc.), ending with a summary of what we see as some of the key questions in the field of musical imagery at the present stage.
-
Godøy, Rolf Inge
(1999).
Imagined Action, Excitation and Resonance.
The aim of this paper is to show how images of sound-producing actions can enhance our capacity for imagining sonorous qualities. It presents a conceptual model of imagined sound production which separates excitation and resonance, classifies different modes of excitation and resonance, and relates all this to the notions of motor programs, motor equivalence and coarticulation in motor imagery. Various research supporting this model will be presented together with some ideas for further research, the conclusion being that most sounds can be perceived as included in action-trajectories, and that in the case of musical imagery, this exploration of the "silent" choreography of sound-producing actions opens up differentiations and hence also enhances our means for imagery of sonorous qualities.
-
Godøy, Rolf Inge
(1998).
Compositional Sketching by Kinematic Images of Sound Production.
-
Godøy, Rolf Inge
(1997).
Chunking in Music Theory by Imagined Sound-Producing Actions.
-
Godøy, Rolf Inge
(1996).
Knowledge by Shapes.
-
Godøy, Rolf Inge
(1996).
Kunnskap gjennom musikkobjekter.
-
Godøy, Rolf Inge
(1995).
Knowledge by distantiation.
-
Godøy, Rolf Inge
(1994).
Shapes and Spaces in Musical Thought.
-
Godøy, Rolf Inge
(1992).
Spatializations in Music Theory.
-
Godøy, Rolf Inge
(2022).
Holistic identification of musical harmony: A theoretical and empirical study.
Norges musikkhøgskole.
ISBN 978-82-7853-312-3.
-
Nymoen, Kristian; Tørresen, Jim; Godøy, Rolf Inge; Jensenius, Alexander Refsum & Høvin, Mats Erling
(2013).
Methods and Technologies for Analysing Links Between Musical Sound and Body Motion.
Akademisk Forlag.
ISSN 1501-7710.
There are strong indications that musical sound and body motion are related. For instance, musical sound is often the result of body motion in the form of sound-producing actions, and musical sound may lead to body motion such as dance. The research presented in this dissertation is focused on technologies and methods of studying lower-level features of motion, and how people relate motion to sound. Two experiments on so-called sound-tracing, meaning representation of perceptual sound features through body motion, have been carried out and analysed quantitatively. The motion of a number of participants has been recorded using state-of-the-art motion capture technologies. In order to determine the quality of the data that has been recorded, these technologies themselves are also a subject of research in this thesis. A toolbox for storing and streaming music-related data is presented. This toolbox allows synchronised recording of motion capture data from several systems, independently of system-specific characteristics like data types or sampling rates. The thesis presents evaluations of four motion tracking systems used in research on music-related body motion. They include the Xsens motion capture suit, optical infrared marker-based systems from NaturalPoint and Qualisys, as well as the inertial sensors of an iPod Touch. These systems cover a range of motion tracking technologies, from state-of-the-art to low-cost and ubiquitous mobile devices. Weaknesses and strengths of the various systems are pointed out, with a focus on applications for music performance and analysis of music-related motion. The process of extracting features from motion data is discussed in the thesis, along with motion features used in analysis of sound-tracing experiments, including time-varying features and global features. Features for realtime use are also discussed related to the development of a new motion-based musical instrument: The SoundSaber.
Finally, four papers on sound-tracing experiments present results and methods of analysing people's bodily responses to short sound objects. These papers cover two experiments, presenting various analytical approaches. In the first experiment participants moved a rod in the air to mimic the sound qualities in the motion of the rod. In the second experiment the participants held two handles and a different selection of sound stimuli was used. In both experiments optical infrared marker-based motion capture technology was used to record the motion. The links between sound and motion were analysed using four approaches. (1) A pattern recognition classifier was trained to classify sound-tracings, and the performance of the classifier was analysed to search for similarity in motion patterns exhibited by participants. (2) Spearman's ρ correlation was applied to analyse the correlation between individual sound and motion features. (3) Canonical correlation analysis was applied in order to analyse correlations between combinations of sound features and motion features in the sound-tracing experiments. (4) Traditional statistical tests were applied to compare sound-tracing strategies between a variety of sounds and participants differing in levels of musical training. Since the individual analysis methods provide different perspectives on the links between sound and motion, the use of several methods of analysis is recommended to obtain a broad understanding of how sound may evoke bodily responses.
-
Godøy, Rolf Inge
(1999).
Shapes and Spaces in Musical Thinking.
This is the first draft of a book on musical analysis, based on the idea of knowledge in music theory as shapes of various trajectories in timespace. Music perception and cognition are here understood as being fundamentally cross-modal, in particular involving action and vision in addition to audition. A triangular model of action, sound and vision is presented, and various issues of representation are discussed. An analysis-by-synthesis approach to musical sound is regarded as the best suited to give us knowledge of previously little-explored elements such as timbre, texture and contour, and various issues of the relationship between complex subsymbolic substrates of musical sound and more singular, focused notions of musical objects are addressed.
-
Godøy, Rolf Inge
(1993).
Skisse til en instrumentasjonsanalytisk systematikk.
Institutt for musikk og teater.
-
Godøy, Rolf Inge
(1983).
Den strukturelle tenkningen. Noen begreper i musikken etter 1945.
Universitetet i Oslo.
-
Godøy, Rolf Inge
(1975).
Olivier Messiaen som en tonal komponist.
Norges Musikkhøgskole.