We have by now established that we can teach machines to remember and think – perhaps in a way similar to how people do. OK. But can their thinking be flawed, random and imperfect like that of living beings? Can voltage instability or digital jitter produce movements that render human-like imperfections? If so, can those imperfections be interpreted as the Machine’s emotions on some level, or even serve as evidence of its free will? I decided to experiment in search of those answers. Here’s what I found.
After a year or so of hiatus from refining Promethej’s algo, I played with a simple Python app influenced by the notation typography of John Cage’s famous piece. The app turns any text you feed it into notes (music) that can be conveniently saved as MIDI files. To see what would happen, I copied those MIDI files into tracks within a sequencer connected to a modular synth and – voilà – the synth came alive, playing music from texts…
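The app itself isn’t public, but the core idea – mapping characters of a text onto pitches – can be sketched in a few lines. Everything below is an illustrative assumption (the scale, the base note, the function name), not the app’s actual code:

```python
# Hypothetical sketch of a text-to-notes mapping, assuming a simple
# letter-to-scale-degree scheme. The real app's mapping is unknown.

SCALE = [0, 2, 4, 5, 7, 9, 11]  # C major intervals (semitones above root)
BASE_NOTE = 60                  # MIDI note 60 = middle C

def text_to_notes(text: str) -> list[int]:
    """Map each alphabetic character to a MIDI note number; skip the rest."""
    notes = []
    for ch in text.lower():
        if ch.isalpha():
            idx = ord(ch) - ord("a")                  # 0..25
            octave, degree = divmod(idx, len(SCALE))  # wrap up the octaves
            notes.append(BASE_NOTE + 12 * octave + SCALE[degree])
    return notes

if __name__ == "__main__":
    print(text_to_notes("cage"))  # → [64, 60, 71, 67]
```

From a list like this, each number becomes a MIDI note-on event, which any MIDI library can then write out as a file for the sequencer.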
I borrowed the opening chapters of Nabokov’s and Poe’s famous pieces and put them through the app. I then did the same with five or six more classic authors’ works and arrived at some unbelievably curious results. The tones coming out of the synth sounded pretty good to begin with, but with each new pass the Machine moved the notes around a bit. This “moving” made the music slightly different (and therefore “better”, or more listenable in a traditional sense).
When this random, unplanned occurrence had happened before, I aborted the process, thinking some user-related (i.e., my own) “voodoo” was probably taking place. What I really wanted to test was whether textual content has some kind of musical intelligence – that is, whether text transferred into “music” carries over at least some of the emotion (or atmosphere) of its textual form. But this accidental offering now seems far more interesting…
Both Poe and Nabokov – turned into music – seemed to have a life of their own, as if the synth got more “skilled” with each new reproduction. And the end result became ever so delicately, yet audibly, different with each new pass. At first, I thought it was just familiarity bias, so I recorded each pass to confirm the fine differences that shaped the listening experience. A few notes (here and there) were simply “moved”, but with each pass they were moved into a “better” place… Weird?
I then checked the notes in the sequencer track, but they always seemed to be in the same spot. So it must be that, once the notes entered the analog domain, the DC current within the synth shifted them into (arbitrarily) “better” places. I thought long and hard and realised that the analog side of the system must be operating like a child learning something new. With each new pass, it came to interpret the material in a “better” way, as if the Machine learned something new from the same notes each time – with a natural tendency to simplify them, to “restructure” them for easier communication…
In other words, the Machine’s “intelligence” in this case seems to “want” to organise notes in such a way that it creates less dissonance and more space around them, and in turn produces arbitrarily prettier melodies. Even if my experiments are theoretically and practically flawed (which they probably are), they open up some pretty curious playgrounds.
Last year, I assembled an audio-video installation around this idea at the Dani komunikacija 2017 event in Rovinj, Croatia. The piece, called Donald na Danima, was built on the concept of using the machine’s interpretative capability when dealing with data. The idea was to use the app to see whether Trump’s words – transcribed from his inaugural speech – contain any musically relevant emotional or otherwise psychoacoustic content that sounds musically interesting. The sounds coming from the synth were rather eerie and oppressive, and I felt they provided an interesting backdrop to images of Trump from the same speech displayed on screen. Here’s an image of the installation in situ:
In the next several days, I will post several music pieces “composed” this way, so stay tuned!