In short, it was underwhelming. The dancing robot had very few degrees of freedom (gimme an Aibo or a Qrio, please), and its movements were, I am quite sure, pre-programmed: in effect, a slightly more complex version of a wind-up toy. Ditto the Heliphon, which is supposedly MIDI-controlled, yet in the two pieces in which it was featured I saw no sign of it being played interactively. It was a glorified player piano. More importantly, the Heliphon is one-dimensional: yes, it has some flashy high-output LEDs, but it has little dynamic or tonal range, making for a very inexpressive musical instrument. So what if it can play more than 200 notes per minute? It left me cold and wanting more.
The last piece played, by Christine Southworth, was the most compelling. It had elements of Glass (not surprising, as he drew inspiration from gamelan, as does Southworth, who is part of a gamelan ensemble), with a lovely range of timbres and a hint of funk rhythms. The parts taken by the bots, however, were lacklustre. The dancing robot (BlowBot) reappeared, again completely uninspiring. It is a tetrahedron whose edges are remote-controlled pneumatic devices: they look like the supports used to hold up the hatch of a hatchback car. Each edge can be lengthened or contracted. As I sat in the audience, I thought of the many possibilities for performing on that bot. The actual performance, however, was slow, lumbering, and lacking in rhythm.
Technology can provide so many more modes of expression than I saw that evening. I have seen some of these possibilities, ones that expand the range of experience for the audience (which must not be forgotten) as well as for the performer, both live and on the web. What I saw at the MoS disappointed me: the player piano in various guises.