By Jackie Green, President & CTO at Alteros
Have you ever walked through the woods and really listened to the world around you? The sounds can be almost symphonic: the rush of a stream provides a persistent bass note, the rustle of leaves underfoot adds percussion, the wind pulling through the trees plays the flute, and the clicks and whistles of animal life add the trill of a violin.
But is it music? At what point can we truly say that a sound has become music? Sound professionals, musicians and philosophers alike have explored a variety of elements in relation to this question.
Let’s return to our example in the woods. You may think it’s an exaggeration to call the sounds of the woods a symphony. But it is acknowledged that ‘music’ can be made from things we wouldn’t necessarily call instruments. Think of percussive spoon players, wine glass performances, bin lids, buckets and bashing – all of these we would be comfortable calling ‘music’. Conversely, the presence of a musical instrument doesn’t necessarily equate to the production of music: the single ‘toot’ of a trumpet is not music, and while the strangled efforts of a five-year-old on their violin may one day be concert-worthy, in that moment of ear-piercing practice they are far from music.
Where does the distinction arise, then? One of the first areas considered is that there needs to be a collection of sound instances for sound to become music. In a study conducted by Elizabeth Margulis, respondents were asked to listen to an extract and rate where they felt it lay on a scale ranging from zero (purely ambient noise) to five (complete music). A single ‘waterdrop’ played to respondents was generally given a low rating; it was perceived as being only ambient noise. But when the drop was repeated, and especially when it was repeated rhythmically, the rating rose slightly. When the pitch and tone of the drop were altered, repeated and layered, the musicality rating given was much higher.
Of course, a simple collection of noises is not enough either. The scramble of TV static or the old dial-up tone of the internet constitutes a collection of sounds and tones, but it is not music. There needs to be a relationship between those sounds. A simple definition often given is that music is ‘ordered sound’: sounds placed in succession, with temporal relationships, resulting in something that has unity and continuity.
Definitions of ‘ordered sound’ construct the idea of music at a particularly ‘open’ level, whilst others have sought more technical and specific definitions. The ‘elements’ required for something to constitute music are frequently contested, with authors such as Eugene Narmour suggesting that melody, harmony, rhythm, dynamics, timbre, tempo, meter and texture are all necessary components. Professor of composition Harold Owen relates the idea of music to the presence of specific cognitive processes: pitch, duration, loudness, timbre, texture and spatial location.
This concept of cognitive processes is interesting, because some definitions focus on music as a ‘subjectively’ defined concept; for instance, Jean-Jacques Nattiez wrote in his book Music and Discourse: Toward a Semiology of Music that the distinction between noise and music is nebulous, culturally defined and impossible to reach consensus on. However, because musical understanding is based upon cognitive processes, there are in fact levels of similarity across the musical practices of cultures that developed completely independently. This potentially suggests an objective component to the idea of music.
For instance, the interval relationships of the pentatonic scale familiar in the West tend to be replicated across the world, even if the notes themselves are not the same (though some cultures use a greater number of subdivisions than the Western pentatonic scale). There are only a few notable examples of cultures with completely ‘out there’ concepts of note formation, such as Balinese Gamelan music. However, the associations we make with these notes – happy, melancholic – are strongly defined by culture. Music may therefore be both an objective and a subjective construct.
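To make “the same interval relationships, even if the notes themselves are not the same” concrete, here is a minimal Python sketch. It assumes 12-tone equal temperament and MIDI note numbers; the helper functions are hypothetical illustrations, not from any particular library:

```python
# A scale is defined by its interval pattern, not its absolute pitches.
# Notes are MIDI numbers (assumes 12-tone equal temperament).

MAJOR_PENTATONIC_STEPS = [2, 2, 3, 2, 3]  # semitone gaps between successive notes

def pentatonic(root):
    """Build a major pentatonic scale (root through octave) from any root note."""
    notes = [root]
    for step in MAJOR_PENTATONIC_STEPS:
        notes.append(notes[-1] + step)
    return notes

def intervals(notes):
    """The gaps between successive notes -- what stays fixed under transposition."""
    return [b - a for a, b in zip(notes, notes[1:])]

c_scale = pentatonic(60)  # rooted on middle C
a_scale = pentatonic(69)  # rooted on the A above middle C
print(c_scale)                                    # different notes...
print(a_scale)
print(intervals(c_scale) == intervals(a_scale))   # ...identical interval pattern: True
```

Two scales built on different roots share none of the same notes, yet the gap pattern between their notes is identical – which is the sense in which the ‘same’ scale can recur across cultures with different tuning references.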
But if the ability to create or listen to music is based around cognitive processes and requires the ability to ‘think’, this leads us to the question of ‘who’ can make music. Research into animals’ musical abilities has found that some animals do have certain dimensions of music-making ability, but these are considered rare. Whilst most animals may perform activities in a repetitive manner, very few have the ability to match a metronomic rhythm.

I believe it is possible that animals are responding to, and creating, music in a manner meaningful to them, and one that perhaps humans do not fully understand. I have ridden a number of horses who clearly enjoy adjusting their gait and cadence to the timing of the music. In fact, over the years I have noticed that certain of my horses had preferences for different types of music. One has only to watch the stunning performance of Blue Hors Matine, ridden by Andreas Helgstrand, to realize that there is more to the story than the traditional thinking that animals don’t utilize rhythm. Researchers say that rhythm keeping is actually a ‘predictive’ activity, in which you anticipate the pattern a split second before it occurs. They think that animals do not predict and anticipate, but I disagree. Prey animals such as horses evolved with strong predicting and anticipating abilities in order to remain safe. Why couldn’t they apply this to something they enjoy?
There are other interesting exceptions too. Parrots can dance to a beat – even when the tempo changes. There is also a sea lion, Ronan, who seems to have mastered the beat. What about other birds? Songbirds also ‘learn’ their vocalisations, which is a prerequisite cognitive process for the production of music; interestingly, though, if you play a songbird a ‘transposed’ version of a call (i.e. the same interval pattern shifted to a different pitch range), it fails to recognise it. Rhesus monkeys can handle transposition, but they can’t keep a beat. When I think of this, I wonder: what if their clock speed is simply different? What if they are just out of sync with human concepts of timing? Many animals make sounds that sound musical, but no animal combines all of the cognitive music-making processes into what we humans consider to be “music” – they tend to have just one or two of the whole set. Does that help us to define, then, what we consider to be music?
So this leads us to one final question. Can computers make music? We know that computers have augmented the way we make music – from the Auto-Tune on a pop singer’s single to the entirely electronic compositions of much modern dance music, computers are heavily integrated into music production. But there has always been a human component.
In theory, perhaps music is a field ripe for AI deployment. It is driven by conventions, patterns and replicable elements with definable relationships. Feed a deep learning system enough music and it should be able to figure out which bits go where, and when. Interestingly, though, AI isn’t quite there yet. There are a few tools out there; an example as good as any is called Amper. It’s very easy to use: you select a style, it has a little think about it, and then it delivers thirty seconds of a completely original composition. Watson Beat is also working away on eliminating our need for X Factor or Pop Idol.
So before you read on, go and try Amper. What do you think of the results it churns out? They’re interesting and understandable – pleasant, even. They contain all the conventions you would expect to hear in a piece of that style. But there’s something missing. And it’s not just the psychological element of knowing that a computer made it and believing it must therefore feel soulless – there really is an intangible dimension that is missing. It might pass as the background track in an advert, but it isn’t going to be blasting through our radios any time soon. It’s unlikely to pass the equivalent of a musical Turing test.
The intersection of art and AI is receiving increasing attention; in the field of painted art, the algorithms and deep learning processes being applied are good enough to trick the layperson to the point where they cannot tell whether something was produced by Van Gogh or his digital impersonator. However, our ears are a far more sensitive and complex ‘measurement tool’ than our eyes. We may be close to ‘fooling’ our eyes – but our ears? That will take more time and understanding. It also takes us into a far deeper philosophical argument: we may be able to identify the practical components of music, but to define what music truly is may also require us to examine philosophical concepts of aesthetics, art, and the role of intention, emotion and response in the whole process. And that opens up a whole different can of worms…