Dr. Godfried-Willem RAES

Course in Experimental Music: Volume 9: Literature and Current Events

Hogeschool Gent: Departement Muziek & Drama



9535:

Warren BURT

 

I first met Warren BURT in 1969, when, along with many young composers bitten by the '68 bug, he attended the Internationale Ferienkurse fuer Neue Musik, which at the time still enjoyed a certain prestige. He is American; although born in Baltimore, Maryland, and raised in Waterford, New York, he was above all shaped by his training on the American west coast, at the University of California, San Diego. The influential composers on that campus at the time were Pauline Oliveros, Larry Polansky and Kenneth Gaburo.

In line with the earlier work of Pauline Oliveros, he very early came under the spell of the possibilities offered to the composer by electronics on the one hand, and by alternative tuning systems (just intonation) on the other. As with Oliveros, Burt's original instrument is the accordion.

In his compositional work, besides the technological aspect, the meta-musical stands out. Many of his pieces refer in one way or another to existing music, drawn from the classical as well as the popular repertoire.

In 1975 he emigrated to Australia, where he settled in Melbourne and grew into one of the most prominent figures of the Australian experimental music scene.


More information about Warren Burt can be found at http://farben.latrobe.edu.au/NMA/22CAC/burt.html

some unedited notes on Warren Burt by Warren Burt:

Warren Burt was born October 10, 1949 in Baltimore, Maryland. He grew up in Waterford, New York where he studied accordion and flute. He decided on music as a career because it looked like an easy major in University. He went to the State University of New York at Albany, (his composition teachers were William Thomas McKinley and Joel Chadabe), where he became fascinated by problems of composition/organization and decided to get serious about music as long as he could laugh at himself. He went to the University of California at San Diego for graduate work, (his composition teachers were Robert Erickson and Kenneth Gaburo; Pauline Oliveros was also a source of inspiration). While at UCSD he became a fellow in the Center for Music Experiment being in charge of the Analog Electronic Music and Video Synthesis facilities. He also became associated with Serge Tcherepnin at this time and participated in the design and construction of the first and subsequent generations of Serge Modular Music Systems. Also while in San Diego he was a founder member, (with Ronald Al Robboy and David Dunn), of Fatty Acid, an incompetent performance group. In 1975 he left the USA and moved to Australia, taking a job teaching freshman theory and building a hybrid sound-video studio at La Trobe University in Melbourne. He is one of the founding members, (with Ronald Nagorcka) of the Plastic Platypus, (an experimental music performance group), and one of the founders of the Clifton Hill Community Music Center, (a community-music-resource-centre). He has written probably far too many works for instruments, electronics, voice, video, theater, prose, poetry, et cetera. However, he is still laughing.

From an undated information sheet supplied by the composer.


Some relevant quotes and thoughts

W. Burt: from 'An Emotional Geography of Australian Composition', in Sounds Australian No. 34, Winter 1992.


On his recent work, we let him speak for himself:


Warren BURT:

 

INTERACTIVE IMPROVISATIONS WITH ELECTRONIC MUSIC SYSTEMS

 

Since 1968, I have been performing interactively and improvisationally with electronic music systems. This kind of music making has formed the core of my musical output, both as works for live presentation and as works for tape or radio.

I've used most of the electronic music available over the past 23 years: analog and digital synthesizers, home-made circuits of both great complexity and extreme simplicity, electronic toys, and computer/synthesizer setups, both commercially available and homebuilt.

The main idea behind all this work has been that of live interaction with an intelligent or semi-intelligent machine process. In one sense, this work can be seen as a precursor to the current work being done in virtual reality, in that I have continually been setting up machine-generated musical environments with their own behaviours and rules and then wandering through them, exploring the implications of the environments I've created. When this exploration is done before an audience, or onto a tape recorder, the result is a musical composition that combines both spontaneous and pre-considered elements.

The main work in creating a piece in this way consists of three stages, not necessarily sequential.

1) Conceiving of the machine process; assembling the "rules of the game" for the machine, if you will.

2) Setting the limits over which this process will occur and selecting the musical elements the process will control.

3) Deciding what aspects of the process I'm going to control and how that will be done.
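The three stages can be sketched schematically. This is a minimal illustration in Python rather than in the tools Burt actually used (Serge modules, the Ravel language); all names, the rule, and the ranges are hypothetical.

```python
# Stage 1: the machine process -- the "rules of the game".
def process(row):
    """One arbitrary rule: rotate the row, then swap its outer elements."""
    row = row[1:] + row[:1]
    row[0], row[-1] = row[-1], row[0]
    return row

# Stage 2: the limits and the musical material the process will control.
PITCH_RANGE = (48, 72)            # MIDI notes the output is folded into
row = [60, 62, 63, 65, 67, 68]    # initial material

# Stage 3: the performer's controls -- here, a single tempo factor.
tempo_factor = 1.0

def step(row, tempo_factor):
    """Advance the process one step; return the new state and the event."""
    row = process(row)
    lo, hi = PITCH_RANGE
    pitches = [lo + (p - lo) % (hi - lo) for p in row]   # fold into range
    duration = 0.25 / tempo_factor
    return row, pitches, duration

row, pitches, dur = step(row, tempo_factor)
```

In performance, only stage 3 is touched live: the performer nudges controls like `tempo_factor` while the stage-1 rule keeps running inside its stage-2 limits.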

For example, in my recent piece Okay-This One's for Arnie (1991), I first conceived of one very specific permutation process which I thought might produce interesting musical results. Writing a computer program to perform the process and hearing the results constituted the first part of the work.

 

I next thought that it might be interesting to see how this process would sound if applied to 12-tone rows. (The fact that I was reading Schoenberg's 'Style and Idea' at the time might have had something to do with this.) On hearing the results, which seemed interesting enough to continue with (the process broke the rows up into interesting segments and combined them in pleasing ways), I then had to decide which rows would be used, at what tempi, in what combinations, volume levels, etc. This was the second part of the process.

The third section involved writing the computer program (in the Ravel language) to interact with, and deciding what controls I would have and how they would be activated. For this piece, I decided to simply sit at the computer, using the computer typewriter keyboard to improvisationally change certain musical elements. The Ravel language allows you to create "windows" on the computer screen that you can type information into, which the program will then use in its music making. For "arnie", the program I wrote for this piece, I had four 12-element rows of three columns each (pitch, duration, loudness) on display. I could change any element of any row, and when that element was selected by the permutation process, the newly selected element would be heard. I also had independent controls for the tempo of each of four voices, a control to select the length of row the permutation process would be applied to (12 being the maximum length of the row), a control to send any of the four permutation processes to any point in their length (the process produced a permutation sequence about 4 million elements long), an overall tempo control, and timbre selection, transposition, and maximum loudness controls for each of the four voices.
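The data layout of "arnie" as described here can be sketched in Python. This is a hypothetical reconstruction: four voices, each a 12-element row of (pitch, duration, loudness) triples, indexed by a long permutation stream. The permutation process itself is a placeholder, since the text does not specify the one Burt actually used.

```python
import itertools

ROW_LEN = 12

def make_row(base_pitch):
    """A 12-element row of (MIDI pitch, duration in beats, MIDI velocity)."""
    return [(base_pitch + i, 0.25, 80) for i in range(ROW_LEN)]

voices = [make_row(p) for p in (60, 48, 55, 67)]   # four independent voices

def permutation_stream(length=ROW_LEN):
    """Placeholder process: walk through orderings of range(length),
    yielding one row index at a time."""
    for perm in itertools.cycle(itertools.permutations(range(length))):
        yield from perm

stream = permutation_stream()
idx = next(stream)                 # which row element sounds next
pitch, dur, vel = voices[0][idx]   # the event actually played
```

Editing a row element from the screen, as described above, then simply means assigning a new triple into `voices`; it is heard the next time the stream selects that index.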

After working with this program for a while, I noticed that I was getting frustrated at being able to change the rows only gradually, one element at a time. Since the choice of rows was the critical factor determining the harmonic world of the piece, I wanted both the ability to change the harmonic world gradually and to change it suddenly. So I added one more control to the screen: a window which would allow me to save any row to disk and to recall it instantly, applying it to any of the four voices.

In performance, then, I might start off with, say, the four main forms of the principal row from Schoenberg's Piano Concerto (I decided to use only rows from Schoenberg's pieces) all at the same tempo with piano sounds. After shortening the rows to play with various segments in various ways for a while, I might suddenly load all four voices with the same row from the Violin Concerto, but change the four tempi to be proportionally related to each other by 13:17:19:23, and change the four timbres to guitar, marimba, vibraphone and bell. This would have the effect of changing from a single-timbre freely polyphonic world to a multi-timbral strict canon. I could then deform the canon in various ways, transposing voices, changing the lengths of rows, changing the overall tempo of the piece, until I wanted to change to another harmonic world, etc.
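The arithmetic behind the multi-tempo canon is worth making explicit. A small sketch, assuming one note per beat of each voice's own tempo, so note n of voice v starts at n / ratio_v in beats of a common reference tempo:

```python
from fractions import Fraction
from math import lcm

ratios = [13, 17, 19, 23]   # the four voice tempi, proportionally related

def onset(voice, note_index):
    """Start time of note `note_index` in `voice`, in reference beats:
    a faster ratio gives proportionally earlier onsets."""
    return Fraction(note_index, ratios[voice])

# Because 13, 17, 19 and 23 are mutually prime, the four voices only
# realign after lcm(13, 17, 19, 23) reference beats.
realign = lcm(*ratios)
```

With mutually prime ratios the strict canon effectively never audibly repeats, which is presumably part of the appeal of choosing 13:17:19:23 over simpler proportions.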

 

For this piece, I set up an extremely rich performing environment. For other pieces, this has not necessarily been the case. For example, in Aardvarks IV (1975), I had an extremely complex hybrid (analog/digital) patch producing quite interesting timbral variations and pitch waverings in a relatively constant sound. My only performing action in the first 20 minutes of the piece was to turn, extremely gradually, a single control from full off to full on, which produced an imperceptibly slow change from a relatively pitch-oriented sound to a modulated noiseband full of glissandi and other sudden changes of sonic character.

In the past few years, I've been working with a number of computer programs that allow real-time interactive performance. Some of these have been Sound Globs, M, Drummer, Cakewalk, Ravel and Music Box. I've written about some of these in a recent issue of Chroma (1), so I won't go into them here. One important point to make, however, is that each program has its own limits and abilities, and more importantly, its own flavour. That is, each program makes you conceptualize musical problems in one way rather than another. One result of this has been that recently I've found myself rewriting pieces several times in different languages, trying to find the right mix of musical conception and software capability.

For example, another recent work, 21 Studies in the Modes of Archytas (1991), began with Serge analog synthesizer modules and a control-voltage-to-MIDI converter interacting with the sequencer program Cakewalk to produce a series of variations in ancient Greek modes using harp and flute samples. Finding certain limitations with this approach (mainly that I had to carry around too much equipment!), I decided to rewrite the piece using software instead of hardware to generate the melodies. So far, I've written new versions in Ravel and Music Box, but haven't yet been happy with the results. The right combination of portability, flexible control and MIDI sysex capabilities continues to elude me. Perhaps I need to rethink the basis of the piece in order to find the environment that's right for it.

 

The main problem with this way of working is that learning each new language is hard work and takes time. It usually takes quite a while before, like one of the characters in a William Gibson science fiction novel, you can feel the structure of a new language settle into your body. When this happens, you can begin to both think and feel naturally in the new language. Being a computer polyglot is damn hard work, but the flexibility it allows is, at least for me, worth it.

Currently, with the help of a Composer's Fellowship from the Performing Arts Board of the Australia Council, I'm composing Some Kind of Seasoning, a set of 16 interactive computer pieces, including the three 1991 pieces mentioned here, each of which can be of varying durations, from around 8 minutes minimum for some of them, out to around 90 minutes for others, with the average duration being about one hour each. The pieces are being performed by me using a portable setup I've developed consisting of an IBM-compatible laptop computer with MIDI interface, an E-mu Proteus synthesizer, a Roland CP-40 pitch-to-MIDI converter, and a Sony Walkman cassette player. This setup is small enough to travel with me as cabin baggage on an aeroplane, yet powerful enough to do most of what I want.

 

I conceive of the set as being performed over several days in a gallery setting. Since performing Syd Clayton's 9-hour keyboard work, Lucky Number, I do not find extreme musical duration daunting at all, but indeed, rather refreshing. However, understanding the nervous nature of the act of sitting in a concert hall, I've decided not to perform these works in those settings, but rather to place them in a gallery space where the pressure of directed listening seems not so mandated. A gallery seems to me more free: each individual member of the audience can choose to listen intensely or casually, as their moods and abilities permit. The idea of the performer as installation, or the performer as moving sculpture, if you will, is one that appeals to me greatly.

For this large set, I've developed a number of computer programs which I improvise with. I've described one of these, "arnie", above. I'd now like to describe another one, "randie", in some detail.

Whereas with "arnie" I chose to create a world where one improvisationally explores the results of one particular kind of permutation process, "randie" began with wanting to hear whether different kinds of "random" number generators actually produced different musical results. The Ravel language has several kinds of random number generators in it, and it was quite easy to write several others of my own. The idea here is to apply the results of different kinds of randomness to similar sets of musical material and see if you can hear the difference, exploring the qualities of each kind of randomness. The computer screen that you see when you are performing with "randie" is shown here.

Each of the windows is numbered near its bottom lefthand corner, and each does a different thing. Window 1 is the main control window. Most of the essential elements of the sound are chosen here. To change an element, simply move the screen cursor (with the directional arrows) over the desired element, type in the new value, and press return. If the element you changed is relevant to the sound you are currently hearing, you will hear the change instantly.

With "randie" you are performing up to 5 simultaneous, mostly monophonic lines. The parameters for each of the 5 voices (called vcos) are listed in window 1 below the voice numbers and are labelled in the box to their left. Reading down from the top, we see we can change the range of pitches, the range of times between notes, the durations of notes, and the loudness (velocity) values for each voice. Additionally, we can multiply all rhythm values by a tempo factor, change the timbre (patch) the voice is playing, change the MIDI channel it is on, and select which of seven types of randomness will be used by each voice.

The selection of different kinds of randomness is the heart of the program. The parameter list continues with a listing of the kinds of randomness available. "Rndrange" is an equally weighted randomness: there is an equal probability of anything within the selected range occurring.

"Triangle" is a crude approximation of a bell-shaped distribution. That is, if you use it to select a range of events, there is a much higher probability that events in the middle of the range will be selected than events at the limits of the range.

"Fractal" is a random number generator that uses a 1/f distribution. That's a kind of number distribution that the Chaos boys and girls in science are all excited over because it shows "fractal" or "self-similar" properties. So far, I haven't been impressed by it, but I included it here in the interests of mathematical fairness and diversity of resources.
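The three kinds of randomness described above can be sketched in Python. These are hedged, textbook versions carrying Burt's names (rndrange, triangle, fractal), not reconstructions of his Ravel code; the 1/f generator follows the common Voss dice scheme.

```python
import random

def rndrange(lo, hi, rng=random.random):
    """Equally weighted: every value in [lo, hi) is equally probable."""
    return lo + rng() * (hi - lo)

def triangle(lo, hi, rng=random.random):
    """Crude bell curve: averaging two uniform draws piles probability
    toward the middle of the range."""
    return lo + (rng() + rng()) / 2 * (hi - lo)

def fractal(lo, hi, n_dice=4, rng=random.random):
    """Rough 1/f generator after Voss: sum several uniform 'dice',
    re-rolling die k only every 2**k steps, so some components drift
    slowly -- the self-similar behaviour the text mentions."""
    dice = [rng() for _ in range(n_dice)]
    step = 0
    while True:
        for k in range(n_dice):
            if step % (2 ** k) == 0:
                dice[k] = rng()
        step += 1
        yield lo + sum(dice) / n_dice * (hi - lo)

gen = fractal(0, 127)                      # e.g. over the MIDI pitch range
values = [next(gen) for _ in range(8)]
```

Feeding the same pitch range through each generator and listening for the difference is exactly the kind of comparison "randie" makes playable from the screen.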


Filedate: 900928/971201/98-09-07
