For MIDI piano and any melodic instrument
2009
This piece uses my MaxMSP-based “com_poser” automated performance/composition tool. For each movement, a basic tonal area and length are chosen first. Parameters that determine the form (tempo, rhythmic type/sporadicity, harmonic depth, number of voices, number of rhythmic voices, volume range/sporadicity, and harmonic range/sporadicity) are graphed in time beforehand to control the MIDI player piano’s improvisation. This forms the skeleton of the piece, over which the performer improvises his or her own accompaniment.
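For readers curious about the mechanics, here is a minimal sketch of the general idea of pre-graphed parameter envelopes steering an automated part: breakpoint functions are sampled at the current time to decide things like tempo, voice count, and register spread. This is a hypothetical Python illustration, not the actual MaxMSP “com_poser” patch; the envelope values and note-choice logic are invented for clarity.

```python
import random

# Hypothetical sketch of pre-graphed parameter control (not the actual
# MaxMSP "com_poser" patch): each parameter is a breakpoint envelope over
# time, sampled to steer the automated piano skeleton.

def envelope(breakpoints, t):
    """Linearly interpolate a list of (time, value) breakpoints at time t."""
    t0, v0 = breakpoints[0]
    for t1, v1 in breakpoints[1:]:
        if t <= t1:
            frac = (t - t0) / (t1 - t0) if t1 > t0 else 0.0
            return v0 + frac * (v1 - v0)
        t0, v0 = t1, v1
    return breakpoints[-1][1]

# Invented example envelopes for a 60-second movement.
tempo_env  = [(0, 60), (30, 90), (60, 50)]   # beats per minute
voices_env = [(0, 1), (45, 4), (60, 2)]      # simultaneous voices
range_env  = [(0, 12), (60, 36)]             # pitch spread in semitones

def sample_skeleton(t, tonal_center=60):
    """Generate one 'frame' of the piano skeleton at time t (seconds)."""
    n_voices = max(1, round(envelope(voices_env, t)))
    spread = int(envelope(range_env, t))
    notes = [tonal_center + random.randint(-spread, spread)
             for _ in range(n_voices)]
    return {"tempo": envelope(tempo_env, t), "notes": notes}

if __name__ == "__main__":
    for t in (0, 15, 30, 45, 60):
        print(t, sample_skeleton(t))
```

In the piece itself, envelopes like these are drawn by hand before a performance, so the “improvising” piano is really reading a shape that was decided in advance.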
Philosophically, this plays on the idea of interactivity: since the parameters are chosen beforehand, often by the player, is there really any interaction, or is this just an elaborate solo? It also works as a musical demonstration of the Turing Test: many people unfamiliar with player pianos assume the recording is of an actual human player, and even those familiar with computer music can’t always tell whether the piano is responding to the instrumentalist, or vice versa, or both. What does it mean when we can no longer tell?
I. Contagion
[audio:http://amusesmile.com/old/sound/fol1.mp3|titles=contagion]
II. Folie imposée
[audio:http://amusesmile.com/old/sound/fol2.mp3|titles=folie imposée]
III. Afferent Feedback
[audio:http://amusesmile.com/old/sound/fol3.mp3|titles=afferent feedback]
IV. Paracusia
[audio:http://amusesmile.com/old/sound/fol4.mp3|titles=paracusia]
V. Folie simultanée
[audio:http://amusesmile.com/old/sound/fol5.mp3|titles=folie simultanée]