2015, Sarah-Indriyati Hardjowirogo
By the term instrumentality I refer to the potential property of things to be used as musical instruments or, more precisely, to their instrumental potential as such. While sharing the assumption that virtually anything – a shoe, a bottle, a pen – can become a musical instrument under certain conditions, I keep wondering what exactly these conditions are. Utilitarian approaches to this question, such as "It's a musical instrument if it is used as a musical instrument", are by no means trivial, since they emphasise the importance of purposeful use for the process that turns an arbitrary object into a commodity with a well-defined function. Still, I cannot help finding this kind of circular reasoning less than satisfactory: What, then, does it mean to use something as a musical instrument? What are the actions and procedures associated with musical instruments? What kind of mental and physical knowledge do we have to access in order to recognise or use something as a musical instrument? How is this knowledge shaped by cultural conventions and temporal conditions? And, finally, how has this complex structure of actions, knowledge and meaning changed over time? More broadly, what are the conditions that constitute a musical instrument as such?
Exactly a century ago, it seemed quite clear to the German ethnomusicologists Curt Sachs and Erich Moritz von Hornbostel what all musical instruments had in common: they obviously produced sound, and each in its own characteristic way, so that they could be categorised according to their specific way of producing sound.
Not much has changed since then — and everything at the same time. It can still be said that all musical instruments produce sound. But they are no longer the only things that do. Instead, they are surrounded by hundreds of artefacts that also produce sound, and even music: sound media, used not only to play back but also to record, edit, and produce sound, have come to occupy a place that did not yet exist in the days of Hornbostel and Sachs. With the boundaries between categories like musical instruments and sound media becoming increasingly blurred, it is clear that producing sound is no longer a property exclusive to musical instruments and, conversely, that the ability to produce sound is not a sufficient condition for an object to be a musical instrument.
The question of what we can take musical instruments to mean today has occupied musicological organology and related disciplines for some years now. Since the modularisation of the musical instrument in the synthesizers of the 1970s at the latest, and in fact as early as the musique concrète of the 1940s, defining the musical instrument, its instrumental identity, has ceased to be a trivial task. The once necessary unity of sound generation and control in a single device has become optional, as has the correlation between material and sound and between playing action and resulting sound. The new freedoms of instrumental play, i.e. the adoption of previously recorded sounds, the possibility of creating and choosing sound material from the entire repertoire of audible sound, and control through any kind of (physical) interaction, at the same time mark the boundaries of a traditional concept of the musical instrument whose present scope is yet to be negotiated.
A few years ago, John Croft defined a number of "conditions of instrumentality" — conditions that must be fulfilled for an audience to recognise a given setup of live electronics as an instrument. Though I agree with the general idea that the relationship between a performer's actions and the resulting sound should be somewhat plausible (or even scrutable, as Croft suggests) in order for the setup to appear as an instrument, I see a couple of issues that argue against simply adopting his list of conditions: His argument is based on a very specific techno-aesthetic setting, namely that of electroacoustic music, in which the sound of a traditional instrument, e.g. a violin, is in some way modified in real time by electronic sound devices or accompanied by pre-recorded sound from a loudspeaker. In any case, the traditional instrument remains an integral part of the configurations Croft describes. His focus on electroacoustic music means focusing on the realm of Western art music and thus disregarding the numerous popular contexts in which electronic and digital technologies are deployed as a matter of course. Also, one fundamental question remains unanswered: Why should a performer be interested in having their setup recognised as an instrument? What difference does it make (to the performer, to the audience, to the performance, …) whether a sound-producing device (or a configuration of such devices) is regarded as an instrument or not?
If we want to find out how instrumentality can be defined on a more general level and how it differs from related concepts, there is no virtue in confining such an analysis to a case as particular as the one Croft describes. Taking into account, for example, the crucial role a sampler like the Akai MPC 2000 may play in a Hip Hop setting, it becomes clear that including popular culture as well as instrumental border cases might be fruitful for our purpose. Unlike electroacoustic settings, popular settings frequently do without traditional instruments altogether, relying instead on a set of standard technical configurations that fulfil instrumental purposes. It remains undisputed that the realm of electroacoustic music offers an exciting field of research on instrumentality precisely because it is so particular. And, in a way, it is precisely the presence of a traditional instrument that renders the setting a border case, because the question of instrumentality arises in spite of it. But first, if we think of instrumentality as a general phenomenon we encounter in connection with contemporary music, it would be a missed opportunity not to ask whether and how it appears in other musical contexts. Second, if we are already considering the possibility of electronic and digital media being instruments, why not be consistent and leave out traditional instruments altogether for the time being? (After all, we do not seem to have this kind of theoretical problem with them.)
In various conversations on this topic, I realised that the third issue described above is one of particular importance: Why does it matter whether something is an instrument or not? Why should we want to maintain a concept that seems increasingly hard to grasp? What, after all, would be the use of a concept like instrumentality?
I believe the possible answers to these questions might have to do with the value we assign to concepts so fundamental to human culture that there is hardly a time or place in which they do not exist. As Margaret Kartomi has shown in her impressive study of the different concepts of musical instruments found in the various cultures of the world, the concept of the musical instrument, even if its precise meanings differ, is one of the few universals almost all cultures have in common: On a very basic level, musical instruments are those cultural artefacts we use to play music with. And conversely, what is used to play music is, in our cultural understanding, a musical instrument. What exactly these artefacts are has always been and will always be a matter of negotiation or, better, of convention. The situation we are facing today is age-old and entirely new at the same time: odd things are being used to make music.
Trevor Pinch has made a similar point in his tellingly entitled article "Why Do You Go to a Piano Store to Buy a Synthesizer", describing how improbable it was in the 1960s for the early synthesizers to 'become' musical instruments, and how it was actually the keyboard built into the MiniMoog, with its highly symbolic status in Western culture, that became decisive for their success. The status of musical instrument assigned to them could by no means be taken for granted, even though, of course, synthesizers had been built for no other purpose than producing sound in the first place.
There is a lot to be learned from this process of becoming an instrument that the early synthesizers underwent, the most important lesson being, in my opinion, that instrumentality is something socially and culturally constructed. And it is precisely the principles, rules, and conditions of such construction that I am most curious to study.
Sarah-Indriyati Hardjowirogo is a Research Associate in the Einstein research project 3DMIN (Design, Development and Dissemination of New Musical Instruments) at Technische Universität Berlin. Her main interests are in the areas of music and technoculture, audio media and musical instruments, as well as the conceptual history of culture. She is a PhD candidate at the ((audio)) division of the Institute of Culture and Aesthetics of Digital Media at Leuphana University Lüneburg, working on a dissertation entitled “Cult Objects, Sound Generators, Body Technologies. The Musical Instrument in Flux”, which explores the musical instrument as a cultural concept and its transformation through changing media.