Richard Hoadley

updated 1.1.2014 by rrs


Calder's Violin:



The following interview was conducted via an email questionnaire in early 2014 by Ryan Ross Smith.


RRS: When did you first start using animated notation in your work?


RH: In 2011, when I discovered the dynamic score environment INScore: http://inscore.sourceforge.net


RRS: What was the first piece you created using animated notation?


RH: Calder's Violin. There are links to lots of papers, images and videos here: http://rhoadley.net/comp/calder/


RRS: What led you to start using animated notation? [This could be aesthetic/artistic concerns, technological experimentation, a bet and/or dare, etc.]


RH: I've always thought of myself as primarily a notation-based composer. My primary interest has been, and my PhD was in, musical patterning interpreted in performance. I'm also very interested in the role played by technology in music and performance. While fully respecting those who can, I've found it hard to match the physical aspects of live performance with acousmatic or electroacoustic forms. This has meant that the majority of my electronic works have tended to be algorithmic in nature. INScore allows the algorithmic control and display of western notation as well as (developing) aspects of augmented scoring, such as images, text and graphics. It's important to point out that I'm still very interested in the electroacoustic and algorithmic parts of composition and intend (when I have time) to develop my skills further in the former. However, algorithmic synthesis is a complex programming challenge for me as a musician (though I am currently studying for a Masters in Computer Science).
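[For context: INScore is driven over OSC, by default listening on UDP port 7000, with messages of the documented shape `/ITL/scene/score set gmn "[ c d e ]"` that create or update a score from a Guido Music Notation string. A minimal stdlib-only sketch of how such a message could be encoded and sent; the scene name, note fragment and localhost target are illustrative, not taken from the interview.]

```python
import socket

def osc_string(s: str) -> bytes:
    """Encode a string as OSC requires: UTF-8, null-terminated, padded to a 4-byte boundary."""
    b = s.encode("utf-8")
    return b + b"\x00" * (4 - len(b) % 4)

def osc_message(address: str, *args: str) -> bytes:
    """Build an OSC message whose arguments are all strings (type tag ',s' per argument)."""
    tags = "," + "s" * len(args)
    return osc_string(address) + osc_string(tags) + b"".join(osc_string(a) for a in args)

# Ask INScore to display a short Guido Music Notation fragment.
# '/ITL/scene/score' names an object in the default scene; 'set gmn ...' gives it content.
msg = osc_message("/ITL/scene/score", "set", "gmn", "[ c d e ]")

# Send to a locally running INScore viewer (default input port 7000).
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(msg, ("127.0.0.1", 7000))
sock.close()
```

An algorithmic score then becomes a stream of such messages: regenerate the GMN string and re-send `set gmn` to update the notation the performer sees in real time.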


RRS: Were there particular compositions/notational approaches/technologies/video games/etc. that exerted any influence over your [early and/or present] work?


RH: I've been a composer of algorithmically patterned music for many years. One of the pieces submitted for my PhD included sections generated using Cubase's 'logical' editor of the time, which allowed experimentation with complex canonic structures (tempi in 5:4 relationships, for instance). These, however, had to be transcribed by hand to make them readable by human performers. Later pieces included algorithmic generation of audio using synthesisers controlled via MIDI system exclusive messages (for instance, the Yamaha SY77 and SY99). INScore's implementation of common practice notation was one of the most appropriate I have seen for my activities, and I immediately began experimenting with it. Calder's Violin was the first result; I find the ability to take advantage of performers' experience and skill in reading notation to be very exciting.


RRS: How would you describe your current work with animated notation?


RH: At the moment I'm working in three main areas:

- dancer-modulated/controlled audio and notation (Quantum Canticorum);

- graphics-modulated notation (December Variations (on a Theme by Earle Brown));

- a forthcoming project with an old friend of mine, a poet, involving text-modulated notation, dance and music.


RRS: Where do you see your work with animated notation going in the future?


RH: • Exploring what augmented notation means in the live domain (I'm interested, for instance, in Cornelius Cardew's semi-musical scores such as Octet '61, Treatise and The Great Learning, and would like to experiment with similar structures).

• Multiple instruments and parts: synchronisation

• Text- and graphics-modulated notations.

• Live-coded notation.


RRS: What potential, if any, does animated notation have for future work IN GENERAL?


RH: This very much depends on who you ask and what exactly you're referring to. If the 'animation' part is in fact controllable/programmable live, then it has the power to transform the way we 'use' music and notation, particularly our attitude towards its erstwhile permanence. To an extent, in terms of algorithmic composition, 'wrong notes' effectively no longer exist! There's also a lot to experiment with in terms of how we read and interpret scores and what they really are: graphic, semantic, or somewhere in between. It would end the need for 'fixed' sight-reading tests! Although I think the struggle with paper will continue for many years to come, there is the distinct possibility that it will become less necessary. On the other hand, one of the real problems of screen-based notation is the difficulty of annotation.


About:

"In recent years Richard Hoadley has composed using his own bespoke systems implementing physical interfaces and algorithmic software which together generate original compositions in real-time as a feature of the performance. He has developed a number of devices including the 'Gaggle' which investigate and facilitate physical interactions with musically expressive algorithms for installations, performances (including dance) and therapeutic environments. In 'Calder's Violin' (2011-12) he included methods for the live presentation of algorithmically generated notation and augmented scores, an approach developed further in 'The Fluxus Tree' (2012), 'Three Streams' and 'Quantum Canticorum' (2013) in which physical movement generates music notation which is then performed live by an instrumentalist. He is affiliated with the Digital Performance Laboratory at Anglia Ruskin University."[1]

1. "Ways of Making People Move: composing through the live generation of musical scores [biography]," rhoadley.net, accessed November 30, 2013, http://rhoadley.net/research/abstracts/composition%20in%2021st%20century.pdf.

