The Mozart in the Machine
Sometime in the coming decades, an external system that collects and analyzes endless streams of biometric data will probably be able to understand what’s going on in my body and in my brain much better than I do. Such a system will transform politics and economics by allowing governments and corporations to predict and manipulate human desires. What will it do to art? Will art remain humanity’s last line of defense against the rise of the all-knowing algorithms?
In the modern world art is usually associated with human emotions. We tend to think that artists are channeling internal psychological forces, and that the whole purpose of art is to connect us with our emotions or to inspire in us some new feeling. Consequently, when we come to evaluate art, we tend to judge it by its emotional impact and to believe that beauty is in the eye of the beholder.
This view of art developed during the Romantic period in the 19th century, and came to maturity exactly a century ago, when in 1917 Marcel Duchamp purchased an ordinary mass-produced urinal, declared it a work of art, named it "Fountain," signed it, and submitted it to an art exhibition. In countless classrooms across the world, first-year art students are shown an image of Duchamp’s "Fountain," and at a sign from the teacher all hell breaks loose. It is art! No it isn’t! Yes it is! No way!
After letting the students release some steam, the teacher focuses the discussion by asking "What exactly is art? And how do we determine whether something is a work of art or not?" After a few more minutes of back and forth the teacher steers the class in the right direction: "Art is anything people think is art, and beauty is in the eye of the beholder." If people think that a urinal is a beautiful work of art -- then it is. What higher authority is there to tell people they are wrong?
And if people are willing to pay millions of dollars for such a work of art -- then that’s what it is worth. After all, the customer is always right.
In 1952, the composer John Cage outdid Duchamp by creating "4’33”." This piece, originally composed for a piano but today also played by full symphonic orchestras, consists of 4 minutes and 33 seconds during which no instrument plays anything. The piece encourages the audience to observe their inner experiences in order to examine what music is, what we expect of it, and how music differs from the random noises of everyday life. The message is that it is our own expectations and emotions that define music and that separate art from noise.
If art is defined by human emotions, what might happen once external algorithms are able to understand and manipulate human emotions better than Shakespeare, Picasso or Lennon? After all, emotions are not some mystical phenomenon -- they are a biochemical process. Hence, given enough biometric data and enough computing power, it might be possible to hack love, hate, boredom and joy.
In the not-too-distant future, a machine-learning algorithm could analyze the biometric data streaming from sensors on and inside your body, determine your personality type and your changing moods, and calculate the emotional impact that a particular song -- or even a particular musical key -- is likely to have on you.
Of all forms of art, music is probably the most susceptible to Big Data analysis, because both inputs and outputs lend themselves to mathematical depiction. The inputs are the mathematical patterns of soundwaves, and the outputs are the electrochemical patterns of neural storms. Allow a learning machine to go over millions of musical experiences, and it will learn how particular inputs result in particular outputs.
Suppose you just had a nasty fight with your boyfriend. The algorithm in charge of your sound system will immediately discern your inner emotional turmoil, and based on what it knows about you personally and about human psychology in general, it will play songs tailored to resonate with your gloom and echo your distress. These particular songs might not work well for other people, but they are just perfect for your personality type. After helping you get in touch with the depths of your sadness, the algorithm would then play the one song in the world that is likely to cheer you up -- perhaps because your subconscious connects it with a happy childhood memory that even you are not aware of. No human DJ could ever hope to match the skills of such an AI.
You might object that such an AI would kill serendipity and lock us inside a narrow musical cocoon, woven by our previous likes and dislikes. What about exploring new musical tastes and styles? No problem. You could easily adjust the algorithm to make 5 percent of its recommendations completely at random, unexpectedly throwing at you a recording of an Indonesian gamelan ensemble, a Rossini opera, or the latest Mexican narcocorrido. Over time, by monitoring your reactions, the AI could even determine the ideal level of randomness that will optimize exploration while avoiding annoyance, perhaps lowering its serendipity level to 3 percent or raising it to 8 percent.
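For readers curious how such a tunable "serendipity level" might look in practice, here is a minimal sketch. It uses an epsilon-greedy exploration strategy, a standard technique in recommender systems; the function names, the feedback rule, and the 3-to-8-percent bounds taken from the paragraph above are all illustrative assumptions, not any real product's design.

```python
import random

def recommend(familiar, catalog, serendipity=0.05, rng=random):
    """Pick a song: usually a predicted favorite, occasionally a random
    discovery drawn from the whole catalog (the 'explore' branch)."""
    if rng.random() < serendipity:
        return rng.choice(catalog)   # explore: anything at all
    return rng.choice(familiar)      # exploit: a song you are known to like

def adjust_serendipity(level, liked_surprise, lo=0.03, hi=0.08, step=0.01):
    """Nudge the exploration rate up or down depending on how the last
    surprise recommendation landed, clamped to the 3-8 percent band."""
    level += step if liked_surprise else -step
    return max(lo, min(hi, level))
```

So a listener who enjoyed the last random pick would drift from 5 percent toward 8 percent exploration, while one who kept skipping surprises would settle at the 3 percent floor.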
Another possible objection is that it is unclear how the algorithm could establish its emotional goal. If you just fought with your boyfriend, should the algorithm aim to make you sad or joyful? Would it blindly follow a rigid scale of “good” emotions and “bad” emotions? Maybe there are times in life when it is good to feel sad? The same question, of course, could be directed at human musicians and DJs. Yet with an algorithm, there are many interesting solutions to this puzzle.
One option is to just leave it to the customer. You can evaluate your emotions whichever way you like, and the algorithm will follow your dictates. Whether you want to wallow in self-pity or jump for joy, the algorithm will slavishly follow your lead. Indeed, the algorithm may learn to recognize your wishes even without you being aware of them.
Alternatively, if you don’t trust yourself, you can instruct the algorithm to follow the recommendation of whichever eminent psychologist you trust. If your boyfriend eventually dumps you, the algorithm may walk you through the official five stages of grief, first helping you deny what happened by playing Bobby McFerrin’s "Don’t Worry Be Happy," then whipping up your anger with Alanis Morissette’s "You Oughta Know," encouraging you to bargain with Jacques Brel’s "Ne me quitte pas" and Paul Young’s "Come Back and Stay," dropping you into the pit of depression with Adele’s "Someone Like You" and "Hello," and finally aiding you to accept the situation with Gloria Gaynor’s "I Will Survive" and Bob Marley’s "Three Little Birds."
The next step is for the algorithm to start tinkering with the songs and melodies themselves, changing them ever so slightly to fit your quirks. Perhaps you dislike a particular bit in an otherwise excellent song. The algorithm knows it because your heart skips a beat and your oxytocin level drops slightly whenever you hear that bit. The algorithm could rewrite or edit out the offending part.
The idea of computers composing music is hardly new. David Cope, a musicology professor at the University of California, Santa Cruz, created a computer program called EMI (Experiments in Musical Intelligence), which specialized in imitating the style of Johann Sebastian Bach. In a public showdown at the University of Oregon, an audience of university students and professors listened to three pieces -- one a genuine Bach, another produced by EMI, and a third composed by a local musicology professor, Steve Larson. The audience was then asked to vote on who composed which piece. The result? The audience thought that EMI’s piece was genuine Bach, that Bach’s piece was composed by Larson, and that Larson’s piece was produced by a computer.
Hence in the long run, algorithms may learn how to compose entire tunes, playing on human emotions as if they were a piano keyboard. Using your personal biometric data the algorithms could even produce personalized melodies, which you alone in the entire world would appreciate.
It is often said that people connect with art because they find themselves in it. This may lead to surprising and somewhat sinister results if and when, say, Facebook begins creating personalized art based on everything it knows about you. If your boyfriend dumps you, Facebook might treat you to a hit song about that all-too-familiar bastard rather than about the unknown person who broke the heart of Adele or Alanis Morissette. Talk about art as a narcissistic extravaganza.
Alternatively, by using massive biometric databases garnered from millions of people, the algorithm could produce a global hit, which would set everybody swinging like crazy on the dance floors. If art is really about inspiring (or manipulating) human emotions, few if any human musicians will have a chance of competing with such an algorithm, because they cannot match it in understanding the chief instrument they are playing on: the human biochemical system.
Will this result in great art? That depends on the definition of art. If beauty is indeed in the ears of the listener, and if the customer is always right, then biometric algorithms stand a chance of producing the best art in history. If art is about something deeper than human emotions, and should express a truth beyond our biochemical vibrations, biometric algorithms might not make very good artists. But neither would most humans. In order to enter the art market, algorithms won’t need to surpass Beethoven right away. It is enough if they outperform Justin Bieber.
This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.
To contact the author of this story:
Yuval Noah Harari at email@example.com
To contact the editor responsible for this story:
Tobin Harshaw at firstname.lastname@example.org