Godfried-Willem Raes

"Namuda"

a suite of studies in expressive gesture recognition

for one or more nude dancers, radar/sonar based invisible instrument and robot orchestra

2010-2017

 

This suite of compositions was started in 2010, and the number of sections and components keeps growing as we build more robots for the M&M orchestra and add more software modules for expressive gesture recognition. All studies are designed to be completely interactive, receiving their inputs from one or more dancers. The compositions make use of the author's invisible instrument, based on sonar and microwave Doppler shifts caused by the moving, reflective body. For this reason, the performer always has to be naked. The hardware is fully described in our article on the ii2010 invisible instrument (Holosound); details on our gesture recognition system are given in another extensive article on Namuda gesture analysis. These studies may serve as demonstrations of embodied music production.

The automatons that can be played are selected from the complete catalogue of musical robots realised so far.

<Troms> <Rotomoton> <Springers> <Dripper> <ThunderWood> <Psch> <Simba> <Flex> <Casta> <Synchrochord> <Horny>
<Piperola> <Bourdonola> <Vox Humanola> <Puff> <Trump> <Qt> <Krum> <Bomi> <Casta Due> <Klar> <Asa>
<Klung> <Vibi> <Tubi> <Belly> <Xy> <Vacca> <Vitello> <Llor> <Toypi> <Sire> <Whisper>
<So> <Heli> <Bono> <Korn> <Ob> <Fa> <Autosax> <Snar> <Ake> <Pedal> <Rodo>
<Harma> <Bako> <HarmO> <Player Piano> <Hurdy> <Aeio> <Troms> <pp2> <Spiro> <Temblo> <Rumo>
<Hybr> <Zi> <Balmec> <HybrHi> <Tinti> <Chi> <Bello> <HybrLo> <Melauton> <Pi>  

These robots constitute almost the entire robot orchestra built by the author up to the year 2016. The setup on stage takes at least 120 square meters. The naked dancer(s) should be placed centrally, surrounded by the robots as well as the gesture sensing system.

Namuda is in fact a collection of pieces that can be performed alone or in sequence, as a suite. The 'scores' are in general completely embedded in the software. The choreography is laid down in notes and descriptive plots. Some of the studies are written for a dancing musician playing an instrument; in that case there is a written-out score for the instrumental part.

Namuda Study #1: "Links" (april 2010) - duration 10'

Gestural particles recognized in this choreographed study: Implosion/Explosion, Freeze, Speedup/Slowdown, Fluency, Constancy of speed, Collision, Theatrical Collision. The recognition of these elementary shapes forms the basis of the composition. Timing resolution is better than 10 ms. In one section of the piece a spectral transform is applied to the gesture data stream, reflecting the gesture shape very well. The results of the transform are mapped onto the overtone series of our <Hurdy> robot, each of the two movement vectors on a different string. The hardware platform used is ii2010, using omnidirectional MEMS sensors. Details are published by the author in a separate article. Hardware as well as software were developed entirely by the author using the PowerBasic programming language on the Intel/Windows platform.
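The spectral mapping described above can be illustrated in a few lines. The sketch below is in Python, whereas the actual implementation is in PowerBasic; the buffer contents, band layout and overtone count are assumptions for illustration only.

```python
import numpy as np

def gesture_to_overtones(samples, n_overtones=16):
    """Map the spectrum of a gesture-data buffer onto an overtone series.

    `samples` is a hypothetical buffer of Doppler amplitude values for one
    movement vector; the real Namuda code is written in PowerBasic.
    """
    spectrum = np.abs(np.fft.rfft(samples * np.hanning(len(samples))))
    spectrum[0] = 0.0                      # discard the DC component
    # Compress the spectrum into as many bands as there are overtones.
    bands = np.array_split(spectrum[1:], n_overtones)
    energy = np.array([b.mean() for b in bands])
    # Overtone numbers (1 = fundamental) sorted by band energy:
    return 1 + np.argsort(energy)[::-1]

# Example: a slow oscillating gesture lights up the low overtones first.
t = np.linspace(0, 1, 256, endpoint=False)
gesture = np.sin(2 * np.pi * 3 * t)        # 3 Hz movement component
print(gesture_to_overtones(gesture)[:4])
```

In a two-string setting such as <Hurdy>'s, each movement vector would feed its own buffer and drive the overtone selection on its own string.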

'Links' was originally written to be performed as a duet by the author and Dominica Eyckmans, but because the volcanic ash cloud made air transportation impossible -she was stranded somewhere on Rhodos- the piece was premiered with a. rawlings on april 20th, 2010.

Herewith some snapshots taken during the performance:

Technical note: Source code for this composition is in \gmt\namuda\links.inc. The run-time compilation is namuda.exe


Namuda Study #2: "Polyv.i.'s" (may 2010) - duration 7'

In this study only a very small subset of the recognisable gestural particles is used: the dipole Speedup/Slowdown and, in the introduction only, the fluency gesture. The hardware platform used is ii2010, using omnidirectional MEMS sensors. Details are published by the author in a separate article. Hardware as well as software were developed entirely by the author using the PowerBasic programming language on the Intel/Windows platform.

'Polyv.i.'s' was originally written to be performed by a dancing viola player, in this case Dominica Eyckmans. The piece was premiered by her on may 20th, 2010.


Namuda Study #3: "Collis" (may 2010) - duration 8'

This study uses mostly a single gesture, collision, and fully exploits the recognition conditions for this gestural property. As an extension, it also introduces the recognition of jumps. It was written to be performed by a Cuban dancer and percussionist. The instrumentation makes use of all the percussive robots in the M&M orchestra, augmented with the lower brass instruments for the jumps. The hardware platform used is again ii2010, using omnidirectional MEMS sensors. Details are published by the author in a separate article. Hardware as well as software were developed entirely by the author using the PowerBasic programming language on the Intel/Windows platform.

'Collis' was premiered on may 20th, 2010.

A photoshoot of the 'Robozara' production, july 22nd, 2010, by Bart Gabriel can be found here.


Namuda Study #4: "Robodomi" (july 2010) - duration 15'

This study, written for Dominica Eyckmans, uses the combined recognition of the gestural prototypes edgy, smooth, fluent, speedup and slowdown.

A small photoshoot by Bart Gabriel can be found here.

This study has been superseded by Study #40 (november 2013). The original #4 can only be performed by disabling the robots added for Study #40, 'DunkelDark'. The added robots are <Whisper>, <Bomi>, <Asa>, <Horny>, <Fa>, <Spiro>, <Toypi>.

Technical note: Source code for this composition is in \gmt\namuda\robodomi.inc. The run-time compilation is namuda.exe


Namuda Study #5: "RoboGo" (july 2010) - duration 12 - 14'

This study, written to be performed by the author, uses the combined recognition of concatenations of the gestural prototypes.

More pictures can be found in the small photoshoot by Bart Gabriel.

For the Namuda: Black & White production, this piece was slightly adapted so that it could be performed by three black dancers: Zam Martino Ebale, Flavio Marques and Ousmane Gansore. This version was performed on the 18th, 19th and 20th of july 2012. Pictures as well as video available on request.

A modified and reworked version for prepared player piano is listed here as Namuda Study #50 (2015). This study was performed and used in the 'Belgica' movie by Felix Van Groeningen.

A revised version, with greatly extended orchestration and improved interactivity, was performed by Emilie De Vlam on january 19th, 2016.

There are five sections in the piece:

opening:
piano chords rising from low to high, triggered by jumps.
Every jump attempt has to be well-prepared.
Duration is movement dependent. The section ends when the highest note is reached on the piano for all three vectors.
Jumps recognised in all three vectors at the same time will trigger the large castanets on our <Simba> robot.
Rogo4:
Chord-solves in Shepard chords.
Orchestration using So, Vibi, Piano, Puff, Qt, Xy
Triggering on edgy-smooth gesture properties
This procedure starts automatically as soon as the highest notes are reached in the opening.
Rogo1:
Gesture recognition based on edgy-smooth gestures
Edgy gestures mapped on Troms
Smooth gestures on Snar (half tempo)
Theatrical collision may trigger Thunderwood.
Collisions may trigger Vacca
Jumps may trigger the snares on Snar
Casta makes a continuo
Duration: 120" (2')
Rogo2:
Dance-beats
Heli, Autosax, So, Simba, Piano, Thunderwood, Qt
Tempo varies with movement speed.
Movements have to be dance-like, with a clear pulse tracking the music.
Duration: 200" (3'20")
Rogo3:
Finale: Sire mapped on gestural slowdowns.
Duration: 60" (1')


Namuda Study #6: "RoboEmi" (july 2010) - duration 15'

This study, written by Kristof Lauwers to be performed by Emilie De Vlam, uses the namuda gesture recognition engine. Photoshoot by Bart Gabriel.

Technical note: Source code for this composition is in \gmt\namuda\emilie.inc. The run-time compilation is namuda.exe


Namuda Study #7: "RoboBomi" (september 2010) - duration 10'

This study uses the namuda gesture recognition engine to create a serial composition in real time, scored for a very limited subset of our robot orchestra: <Bomi> -the main voice- with accompaniment voices scored for <Toypi> and <Aeio>, all controlled by a viola-playing dancer. The piece was written for Dominica Eyckmans and premiered on september 16th at the Logos Foundation. Here are some pictures of this performance:

Namuda Study #8: "Features" (october 2010) - duration 6'

This study uses the namuda gesture recognition engine to create a composition in real time, scored for a very limited subset of our robot orchestra. In particular the newly added sound sources in robots such as <Thunderwood>, <Simba>, <Bomi> and the <Piano-Pedal> come into view. The piece was written for Dominica Eyckmans and was premiered on october 20th at the Logos Foundation. Main gesture properties used here are edgyness and speedup. Compositional structures are based on the numbers 10 and 20.


Namuda Study #9: "Zwiep & Zwaai" (november 2010) - duration 6'30"

This study uses the namuda gesture recognition engine to create a composition in real time, scored for the subset of our robot orchestra characterized by the capability to move visibly. Hence the moving robots <Korn> and <Ob> get the most important roles, but <Puff>'s eyes, <Springers>' shakers, <Simba>'s arms as well as <Thunderwood>'s wind machine also come into scope. The piece was written for Dominica Eyckmans and was premiered on november 17th at the Logos Foundation. Main gesture properties used here are expansion, freezing and collision. Compositional structures are mainly based on scales.

A second version of this study, with the addition of <Klar>, was worked out with Emilie De Vlam for the M&M 'Sense and Sentiment' production, premiered february 13th, 2013.

Technical note: Source code for this composition is in \gmt\namuda\robodomi.inc. The run-time compilation is namuda.exe


Namuda Study #10: "Icy Vibes" (december 2010) - duration 7'00"

This study exploits a few new features added to our <Vibi> robot: precise modulation of the resonator wheels as well as its LED stroboscopic lights. It uses the namuda gesture recognition engine to steer the compositional structure of the interaction between <Vibi> and <Bomi>, <Piperola> and <Harma>. The study is scored for a dancing viola player, Dominica Eyckmans, and was premiered on december 16th at the Logos Foundation. Main gesture properties used here are collision, theatrical collision, edgyness and smoothness.

 


Namuda Study #11: "Prime 2011" (january 2011) - duration minimum 6'40" , maximum 33'00"

This rather extensive study is based on the numerical analysis of the prime number 2011: it is the sum of a series of 11 successive prime numbers: 157, 163, 167, 173, 179, 181, 191, 193, 197, 199 and 211. Moreover, it is at the same time the sum of three successive prime numbers: 661, 673 and 677. The global architecture of the piece is based on these prime numbers. The details are made to be interactive and make use of the Namuda gesture properties. The harmonic structure is entirely based on spectral distributions of slowly expanding irrational overtone series. The study fully exploits the quartertone and microtonal possibilities of the robot orchestra. The mappings of gestural properties on musical robots are as follows:
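The number-theoretic claims behind the title can be checked mechanically. A short Python verification (the piece itself is, of course, coded in PowerBasic):

```python
def is_prime(n):
    """Trial-division primality test, adequate for these small numbers."""
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

# The two decompositions of 2011 mentioned above:
run11 = [157, 163, 167, 173, 179, 181, 191, 193, 197, 199, 211]
run3 = [661, 673, 677]

print(all(is_prime(p) for p in run11 + run3))  # all terms are prime
print(sum(run11), sum(run3))                    # both sums equal 2011
```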

Prime-number-derived markers, independent of gestures, are confined to <Trump>, <Krum>, <Vox Humanola>, <Snar>, <Llor>, <Springers>, <Vibi> and <Tubi>. The piece was premiered by Dominica Eyckmans on january 19th, 2011. The piece can be performed by one or two dancers.

In february 2013, when writing Study #31 along very similar lines, we made an upgraded version of this study and removed some bugs from the code.

Technical note: Source code for this composition is in \gmt\namuda\prime.inc. The run-time compilation is namuda.exe


Namuda Study #12: "Poly-a" (february 2011) - duration minimum 6'40" , maximum 10'00"

This study is about polyphony and involves a performer playing the viola, singing and dancing all at the same time. On top of these three voices of polyphony, another set of three voices is generated interactively and mapped on instruments of the robot orchestra: <Aeio>, <Vibi>, <Bomi>, <Bourdonola>, <Toypi>, <Puff>, <Qt>, <Xy>. The display on <So> is used to communicate with and cue the performer. The counterpoint makes use of rules derived from slowly shifting irrational spectral harmony. The thematic material is derived from a simple note series:

The composition was premiered at the M&M concert of february 10th, in the Logos Foundation Tetrahedron Hall. The dancer/vocalist/viola player was Dominica Eyckmans.


Namuda Study #13: "AI" (march 2011) - duration minimum 5'00" , maximum 10'00"

This study is about artificial intelligence, and its software is an implementation thereof. It makes extensive use of information about the piece's own past, and thus becomes capable of analysing its own musical context in real time. The study not only uses our gesture interfaces, but also takes into account the acoustic input from the performer on the viola as well as the sounds produced by the orchestra itself. The performer can play an instrument, sing and dance all at the same time.

The composition was premiered at the M&M concert of march 15th, in the Logos Foundation Tetrahedron Hall. The dancer/vocalist/viola player was Dominica Eyckmans.


Namuda Study #14: "Miked" (april 2011) - duration minimum 3'00" , maximum 6'00"

This study is for a singing/rapping dancer with a handheld microphone. It is intended as a parody of typical pop singers' 'seductive' gestures. The singer's vocal utterances are spectrally mapped on Qt, the quartertone organ, such that we can clearly hear Qt speak. The singer's gestures are likewise subjected to fast Fourier transforms and, in their three vectors, spectrally mapped on the robots <Harma>, <Xy>, <Puff>, <Piperola> and <Bourdonola>. Other robots in the orchestra may be used as well.

The composition was premiered at the M&M concert of april 4th, 2011, in the Logos Foundation Tetrahedron Hall. The dancer/vocalist was Dominica Eyckmans.

This study is to a large extent superseded by Study #57, 'Tekstuur' (2015), scored for Hybr and HybrHi, robots much more suitable for the concept of the original study.


Namuda Study #15: "Early Birds" (may 2011) - duration minimum 3'00" , maximum 5'00"

This study was written on the occasion of the preliminary introduction of our <Fa> robot -an automated bassoon- into the robot orchestra. Two main gesture properties, speedup and slowdown -a dipole- are mapped onto the output of a single monophonic instrument. All controls available on the bassoon robot are made to be controlled by parameters derived from the gesture analysis.

The composition was premiered at the M&M concert of may 19th, in the Logos Foundation Tetrahedron Hall. The dancer/viola player was Dominica Eyckmans.

The piece was revised in october 2012, after a substantial upgrade of our <Fa> robot.


Namuda Study #16: "Lonely Tango" (june 2011) - duration 6'30"

This study was written on the occasion of the tango concert and dance production in june 2011. It makes use of tango music, interactively combined with the gesture sensing technology. It is a fully choreographed piece to be performed by a tango couple, one of whom also plays the viola. The piece was premiered by the author and Dominica Eyckmans.


Namuda production: "Diptych" (july 2011) - duration 60' (full evening collective production with Kristof Lauwers, Sebastian Bradt and dancers Emilie De Vlam and Dominica Eyckmans)

Video Opening panel 1 (Emilie De Vlam) Video Closing panel 1

Video Opening Panel 2 (Dominica Eyckmans) Video Closing Panel2

The video clips are in MP4 format. Video recording by Svend Thomson.

Technical note: Source code for this composition is in \gmt\namuda\namuda_gf.inc. The run-time compilation is namuda.exe


Namuda Study #17: "Spirals for Spiro" (september 2011) - duration 12'00"

This study was written on the occasion of the introduction of our <Spiro> robot -an automated spinet (small harpsichord)- into the robot orchestra. All controls available on the <Spiro> robot are made to be controlled by parameters derived from the gesture analysis.


Namuda Study #18: "Low Level" (october 2011) - duration 10'00"

This study is based on a very low-level arithmetic yet fascinating function, visualized in a chart, encountered in a post on Facebook. The numeric sequences were remapped in different ways on notes and intervals. Interactivity was applied to orchestration, tempo and dynamics. The dancer can double the numeric score vocally, and the choreography can also be based on this same sequence. This study uses almost the entire robot orchestra. It was premiered by Dominica Eyckmans on october 19th, 2011 at the Logos Tetrahedron. A non-interactive version of this algorithmic composition is also in the making.

 


Namuda Study #19: "Elfjes" (november 2011) - duration 6'00"

In this study we make use of eleven precomposed musical cells that are triggered by the simultaneous detection of eleven gesture properties: colliding, slowing down, theatrical collision, airborneness, fluency, smoothness, edgyness, implosive, explosive, speeding up and speed constancy. The freeze property cancels all others. The musical gesture of each cell corresponds to the recognized gesture type. The playing speed as well as the transposition of the cells are functions of overall gestural characteristics. The choreography is based on a set of eleven visually attractive asanas taken from yoga technique. The asanas themselves trigger the freeze property, whereas the gesture recognition system tracks the transitions from one asana to another. The mappings on the robot orchestra are such that each gesture type has its own unique instrumentation. Robots used in this composition are: <Player Piano>, <Qt>, <Bomi>, <Piperola>, <Bourdonola>, <Vox Humanola>, <Krum>, <Autosax>, <Bono>, <Ob>, <Korn>, <Aeio>, <Xy>, <Vibi>, <Fa>, <Heli>, <Puff>, <Spiro>. Only pitched instruments are used.
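The triggering logic described above amounts to a dispatch table with a freeze override. The following Python sketch is purely illustrative: the actual engine is written in PowerBasic, and the property names and cell indices here are hypothetical paraphrases of the list above.

```python
# Hypothetical mapping of gesture properties to precomposed cell indices.
CELLS = {
    "colliding": 0, "slowing down": 1, "theatrical collision": 2,
    "airborne": 3, "fluent": 4, "smooth": 5, "edgy": 6,
    "implosive": 7, "explosive": 8, "speeding up": 9, "constant speed": 10,
}

def triggered_cells(detected):
    """Return the indices of the precomposed cells to start.

    `detected` is the set of property names the recognition engine reports
    for the current analysis frame; 'freeze' cancels all the others.
    """
    if "freeze" in detected:
        return []
    return sorted(CELLS[p] for p in detected if p in CELLS)

print(triggered_cells({"edgy", "speeding up"}))   # two cells fire
print(triggered_cells({"freeze", "edgy"}))        # freeze cancels all
```

Playing speed and transposition of each triggered cell would then be scaled by the overall gestural characteristics, as described in the text.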

The composition was premiered at the M&M robot orchestra concert on friday 11.11.11, performed by Dominica Eyckmans. Place of the event: Logos Tetrahedron, Ghent. Here are some pictures taken at that occasion:

A second performance took place on friday march 2nd 2012.

Technical note: Source code for this composition is in \gmt\namuda\elfjes.inc. The run-time compilation is namuda.exe


Namuda Study #20: "Solstice" (december 2011) - duration 7'40"

In this study we make use of the tone clock as a means of organising both our melodic and harmonic material. The essentials of the tone clock (an idea of the Dutch composer Peter Schat) can be given in the following chart. Note that each of the twelve 'hours' makes use of only two intervals, and that for each hour a 12-tone series can be made such that, in groups of 3 notes, we have only those intervals. Of course all intervals can be inverted and all series transposed freely. The main parameters of the music thus generated depend on detected gesture properties. The instrumentation is limited to: <Player Piano>, <Spiro>, <Vibi>, <Fa>, <Xy>, <Bomi>, <Bourdonola>, <Llor> and <Psch>. There is an optional part for the viola as well.
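The fact underlying the chart can be checked mechanically: the twelve 'hours' correspond to the twelve distinct trichord classes under transposition and inversion. A short Python sketch (independent of the PowerBasic implementation) enumerates them, together with their two constituent intervals:

```python
from itertools import combinations

def prime_form(pcs):
    """Canonical form of a pitch-class set under transposition + inversion,
    taken as the lexicographic minimum over all rotations of the set and
    of its inversion, transposed to start on 0."""
    best = None
    for form in (sorted(pcs), sorted((-p) % 12 for p in pcs)):
        for i in range(len(form)):
            rot = form[i:] + [p + 12 for p in form[:i]]
            norm = tuple(p - rot[0] for p in rot)
            if best is None or norm < best:
                best = norm
    return best

# Classify every 3-note subset of the chromatic scale:
hours = sorted({prime_form(c) for c in combinations(range(12), 3)})
print(len(hours))   # twelve classes = the twelve 'hours'
for h in hours:
    print(h, "intervals:", (h[1] - h[0], h[2] - h[1]))
```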

The composition was premiered at the M&M robot orchestra concert on 21.12.11, performed by Dominica Eyckmans. Place of the event: Logos Tetrahedron, Ghent.


Namuda Study #21: "Emergence" (january 2012) - duration 8'28"

In this study we make use of the microtonal possibilities of many of our robots. The harmony is completely spectral, using irrational and changing spectra throughout. There are ten autonomous voices, each using different robots in its instrumentation. The tempo relationships between the voices are spectral, as are the pitches played. The number of the spectral component played by each voice depends on the global body surface involved in the gestures, whereas the speed of movement serves as a global tempo parameter. Instrumentation is as follows:

The fundamental pitches run through the following series of ever-increasing intervals:

The choreography -elegant, sensual and smooth- is for a single naked dancer. Premiere: January 11th, 2012, by Dominica Eyckmans. Recording available.

Technical note: Source code for this composition is in \gmt\namuda\emergence.inc. The run-time compilation is namuda.exe


Namuda Study #22: "Lites" (february 2012) - duration 6'00"

In this study we make use of the visual possibilities of many of our robots. The colors of the lights are determined by the gesture recognition engine. Only near the end of the study are <Bomi> and <Spiro> gently mapped on gestural properties; the sounds otherwise stem from the viola. The study requires darkness on stage and flashlights, operated by a second moving player/performer using vintage photographic flashbulbs.

Technical note: Source code for this composition is in \gmt\namuda\lites.inc. The run-time compilation is namuda.exe


Namuda Study #23: "Wet" (march 2012) - duration 10'00"

In the Proceedings of the National Academy of Sciences, an article by three musicologists appeared that attracted our attention. They performed numerical analysis on scores by a variety of 'great' composers and found that both the rhythmical and the melodic material strictly follow a 1/f distribution law. The variance, as well as the predictability, differed somewhat from composer to composer, but the 1/f distribution was always found. This led us to undertake an inverse approach, whereby we guarantee the 1/f law to be respected throughout this entire study. Thus we guarantee this piece a place in the very privileged top class of great masterpieces our music culture has produced...

The challenge was not so much to generate music following these rules, but much rather to do so in a fully real-time interactive context.

Gesture prototypes used for this study are:

The piano accompaniment (sometimes <Vibi> is interjected in this role as well) guarantees fulfilment of the postulated 1/f laws for pitch/melody as well as rhythm at all times. Contrary to the common property of all the music analysed in the source mentioned above, we did not place our rhythm in a time grid. Thus the music lacks a 'beat', but not a tempo. The pitch classes used in the seven sections of the composition (the seventh section repeats the first) are: Although in each class all 12 pitches of the chromatic scale are used, the music is far from dodecaphonic, due to the 1/f probability distributions applied. Similarly, the rhythm classes, again six, are:
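One simple way to impose a 1/f probability distribution on a pitch set is to weight the pitch of rank k by 1/k. The Python sketch below is a hedged illustration only: the rank ordering of the pitches is hypothetical, and the actual constraint logic lives in the PowerBasic source (wet.inc).

```python
import random

def one_over_f_weights(n):
    """Probability weights proportional to 1/k for n rank-ordered items."""
    w = [1.0 / k for k in range(1, n + 1)]
    total = sum(w)
    return [x / total for x in w]

def draw_notes(pitches, length, seed=0):
    """Sample a melody whose pitch ranks obey a 1/f distribution.

    `pitches` lists the 12 chromatic pitches ordered from most to least
    probable; this ordering is an assumption for the example.
    """
    rng = random.Random(seed)
    return rng.choices(pitches, weights=one_over_f_weights(len(pitches)),
                       k=length)

notes = draw_notes(list(range(60, 72)), 1000)
# The rank-1 pitch occurs far more often than the rank-12 pitch:
print(notes.count(60), notes.count(71))
```

All 12 pitches remain available, yet the strongly skewed probabilities keep the result far from dodecaphonic, as noted above.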

This highly choreographed study was premiered on thursday march 22nd by Dominica Eyckmans and the author at the occasion of the 'Jeux d'eaux' production with the M&M robotorchestra at the Logos Tetrahedron concert hall in Ghent. Here are some pictures from this production:

reference:

LEVITIN, Daniel J. et al., "Musical rhythm spectra from Bach to Joplin obey a 1/f law". In: PNAS (Proceedings of the National Academy of Sciences), March 6th, 2012, Vol. 109, nr. 10.

Technical note: Source code for this composition is in \gmt\namuda\wet.inc. The run-time compilation is namuda.exe


Namuda Study #24: "No/Si" (april 2012) - duration 8'24"

This study is based on the same 1/f distribution constraints as used in #23, but applied even more strictly. All rhythmic structures and distributions in this study are framed within a strict metric grid. There are eight sections, each using different tonal and rhythmic data sets. In the first six sections, we made use of the same pitch data sets as used in 'Wet'. The sections thereafter use more and more limited data sets. The rhythms are very different though, and wander through all the different permutations of note values obeying the 1/f distribution law. Whenever a given rhythm cannot be played using a specific pitch within the constraints of the 1/f distribution combined with the gestural properties used for interactivity, the rhythm is performed on non-pitched percussion instruments such as <Troms>, <Snar>, <Thunderwood>, <Psch>, <Vacca>, <Simba>.

The distribution sets conform to:

  • for 2 metric values: 2a + b
  • for 3 metric values: 6a + 3b + 2c
  • for 4 metric values: 12a + 6b + 4c + 3d
  • for 5 metric values: 60a + 30b + 20c + 15d + 12e
  • for 6 metric values: 60a + 30b + 20c + 15d + 12e + 10f. If we tolerate a small error, much simpler rhythms can be obtained in this 6-value case with the distribution 12a + 6b + 4c + 3d + 2e + f.
  • for 7 metric values: 420a + 210b + 140c + 105d + 84e + 70f + 60g
  • for 8 metric values: 840a + 420b + 280c + 210d + 168e + 140f + 120g + 105h
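The coefficient patterns above all follow one rule: for n metric values, the k-th coefficient is lcm(1..n)/k, presumably so that each note value contributes an equal total duration (the discrete counterpart of the 1/f law). A quick Python check generates all the sets:

```python
from math import lcm  # requires Python 3.9+

def metric_weights(n):
    """Distribution coefficients for n metric values: lcm(1..n)/k."""
    m = lcm(*range(1, n + 1))
    return [m // k for k in range(1, n + 1)]

for n in range(2, 9):
    print(n, metric_weights(n))
```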

Here again, the challenge was not so much to generate music following these very restrictive rules, but rather to do so in a fully real-time interactive context. The development of the code for this composition has led to a substantial growth of our <GMT> real-time composition libraries. We now have built-in support for metric and melodic 1/f distributions as well as for multi-instance histogram tracking of real-time parameters and data sets. The new functions and procedures are in the libraries g_indep.dll and g_mus.dll. They are placed in the public domain.

The piece is scored for a dancing viola player, and the viola interjections are entirely written out. The viola score material can be downloaded in a higher resolution. The note durations in this score can be shortened ad libitum, but the note onset timings have to be preserved. Alternative permutations of the material are permissible as well. Other possible instruments for this study are: flute, clarinet, violin, trumpet, piccolo, saxophone, trombone.

Gesture prototypes used for this study are:

  • edgy-smooth dipole: x-vector mapped on <Ob> and <Bono>, y-vector mapped on <Korn> and <Heli>, z-vector mapped on <Autosax> and <Fa>
  • speedup-slowdown dipole (non vectorial) property mapped on <So> and <Spiro>
  • general movement properties mapped in the <Player Piano> part in three voices: staccato/legato control and velocities.
  • implode-explode dipole mapped on <Vibi> and <Xy>

This study was premiered on april 19th by Dominica Eyckmans and the author. Here are some pictures:


Further performances:

 

 

references:

LEVITIN, Daniel J. et al., "Musical rhythm spectra from Bach to Joplin obey a 1/f law". In: PNAS (Proceedings of the National Academy of Sciences), March 6th, 2012, Vol. 109, nr. 10.

RAES, Godfried-Willem "Namuda Studies: Doppler radar based gesture recognition for the control of a musical robot orchestra", in: Actes des Journees d'Informatique Musicale (JIM 2012), Mons, Wallonie, mai 2012

TOUSSAINT, Godfried, "The Euclidean Algorithm generates traditional musical rhythms", paper: School of Computer Science, McGill University, Montreal, Canada.

Technical notes:
The different versions of this study require a specific compilation of the code (namuda.exe) with the metaconstant %Breda or %Glasgow set. The versions cannot be selected on the fly. The piece should be started in the GMT cockpit by selecting the sync task, after the viola player has finished playing the first theme. This is the only user input required. Optionally, slider(0) in the cockpit can be used for overall staccato/legato control. The duration is the same for all versions. The instrumentation for the different versions is:

Technical note: Source code for this composition is in \gmt\namuda\nosi.inc. The run-time compilation is namuda.exe


Namuda Study #25: "Black & White" (july 2012) - duration 60'

This is a full-evening staged choreography involving six dancers, three black and three white: Zam Martino Ebale, Flavio Marques, Ousmane Bilogo Gansore, Emilie De Vlam, Dominica Eyckmans and the author. There are musical and code contributions by Kristof Lauwers and Sebastian Bradt. The entire piece is interactive on gesture input, but also uses our long-range distance sensor and radar interface (Lorangus Discens). Next to the complete robot orchestra, it also involves our polymetronome.

A representative set of pictures from this performance is available here.

The premiere took place on july 18th, 2012, with subsequent performances on the 19th and 20th of july, in the Logos Tetrahedron.


Namuda Study #26: "Ritual" (september 2012) - duration 10'

This is part of a full-evening staged choreographic production around the theme 'ritual'. The study was written on the occasion of the newborn robots <Klar> and <Synchrochord> and their first introduction into the robot orchestra. The final section of this study uses the same code as the 'Slones' piece for our wind-section robots and a solo trombone. This section is not interactive but features, in this study, a part for the viola.

Premiere: September 20th, 2012 in the Logos Tetrahedron.

Technical note: Source code for this composition is in \gmt\namuda\ritual.inc. The run-time compilation is namuda.exe


Namuda Study #27: "Specs" (october 2012) - duration 6'30"

In this study a spectral transform is performed on the gesture data buffers and remapped on the keys of the piano, with all possible dynamic nuances. In addition there are parts for the newly finished <Klar> robot as well as for the newly upgraded <Fa> and <Korn>. A few later versions were worked out as well, using different robots. By the nature of the algorithms used, it is a spectral piece of music.

Premiere #27.1: October 21st, 2012, Miry Concert Hall, Conservatory of Music Ghent (with <Player Piano> and <Klar> only).
Premiere #27.2: October 24th, 2012 in the Logos Tetrahedron (With <Spiro>, <Fa> and <Korn> added).
Premiere #27.3: October 30th, 2012 in the Logos Tetrahedron: World Music Days, ISCM. (Two performances)
Try Out #27.4: April 16th, 2013 in the Logos Tetrahedron, performed by the author (Glasgow version)
Premiere #27.4: April 19th, 2013, Glasgow Royal Concert Hall. Danced by the author, without a live instrumental part. Instrumentation without <Spiro> and <Fa>. Robots <Xy>, <Vibi>, <Bono>, <Heli>, <Piperola> added. This version runs fully automatically, and the mapping of the gesture spectra on the notes played is projected on frequency bands slowly descending over a range of two octaves.
Revision #27.5: May 15th, 2017, Logos Tetrahedron by the author.

Performer: Dominica Eyckmans, viola and dance.

Technical note: Source code for this composition is in the module \gmt\namuda\specs.inc. The compilation to be used for runtime is namuda.exe.


Namuda Study #28: "Unisons" (november 2012) - duration 8'00"

In this study only a single pitch sounds at any moment. However, the orchestration is subject to continuous changes, as a function of the gestural properties detected from the dancer. So it becomes basically a study in Klangfarbenmelodie. Obviously, this study uses all pitched instruments available in the robot orchestra.

Premiere November 15th 2012, in the Logos Tetrahedron

Technical note: Source code for this composition is in \gmt\namuda\uni.inc. The run-time compilation is namuda.exe


Namuda Study #29: "Dozens" (december 2012) - duration 12'00"

In this study we make use of a form of jazz harmony. All chords are based on or derived from 15th chords. Melody generation is gesture dependent: upward movement is steered by acceleration and downward movement by deceleration. The orchestration is gesture dependent as well. The harmonic structure can be summarized in the following overview:

The instrumentation uses just twelve robots: Player Piano, Ob, Klar, Autosax, Heli, Korn, So, Fa, Bono, Krum, Troms, Spiro. The choreography, for a viola-playing dancer, alternates between duets with a solo robot (Klar, Krum and Autosax) and choruses for the robots, triggered by dance. Near the end, some elements from tango come sneaking in.
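The acceleration-driven melody rule described above can be reduced to a few lines. The Python sketch below is hypothetical in every detail (scale, pitch numbers, step size); the actual rules and the 15th-chord harmony live in the PowerBasic source.

```python
def next_pitch(current, acceleration, scale):
    """Step up through `scale` while the dancer accelerates,
    down while decelerating, clamped at the scale boundaries."""
    i = scale.index(current)
    if acceleration > 0:
        i = min(i + 1, len(scale) - 1)
    elif acceleration < 0:
        i = max(i - 1, 0)
    return scale[i]

# Illustrative scale (MIDI note numbers, C lydian):
c_lydian = [60, 62, 64, 66, 67, 69, 71]
print(next_pitch(64, +1.5, c_lydian))   # speeding up: step upward
print(next_pitch(64, -0.2, c_lydian))   # slowing down: step downward
```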

Premiere: Logos Tetrahedron, 12.12.12, by Dominica Eyckmans and the author.


Namuda Study #30: "Force" (January 2013) - duration 5'05"

This study is derived from our very first Namuda study ('Links'). In fact it is a remake of a version of Links that we wrote in 2010 for Jin Hyun Kim, who wanted a 5-minute version for scientific analysis using a Mocap video recording. As that version was never performed in public, we decided to have another look at the code so as to make it suitable for this 'Force' study. The choreography instructions are all derived from Tai Chi movements. The idea was to have this piece performed three times in the same evening, by three different dancers.

Some pictures from the performance by Dominica Eyckmans:

Some pictures from the performance by Zam Martino Ebale:

Some pictures of the performance by Emilie De Vlam:

Premiere: Logos Tetrahedron, 16.01.2013


Namuda Study #31: "Sense" (February 2013) - duration 11' or 16'.

This rather extensive study is a further development of our Namuda Study #11 (Prime 2011), though not prime-number based in quite the same way. Since this is 2013, and given that 2013 = 3 x 11 x 61, these numbers became the underlying base of this study. But 2013 is also the following sum of primes: 149 + 157 + 163 + 167 + 173 + 179 + 181 + 191 + 193 + 197 + 263, another constructive element in this study. All details are made to be interactive and make use of nearly all the Namuda gesture properties. The harmonic structure is again entirely based on spectral distributions of slowly expanding irrational overtone series. The study fully exploits the quartertone and microtonal possibilities of the robot orchestra. The mappings of gestural properties on musical robots are as follows:

  • smooth gestures @ Qt
  • fluent gestures @ Ob & Klar
  • constant speed gestures @ Korn
  • edgy gestures @ Xy
  • accelerating gestures @ Bono, Autosax, Heli (alternating)
  • slowdown gestures @ Bomi
  • shrinking gestures @ Puff
  • exploding gestures @piano
  • collision @ Vacca, Vitello, Belly
  • theatrical collision @ Toypi
  • gestural freeze @ Bourdonola

Time markers, independent of gestures, are confined to <Trump>, <Krum>, <Vox Humanola>, <Snar>, <Llor>, <Springers>, <Spiro>, <Vibi> and <Tubi>. The piece was premiered by Dominica Eyckmans on February 13th, 2013. The piece can be performed by one or two dancers.
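The number-theoretic base described above is easy to verify. This small check (not part of the composition's own GMT code) confirms both the factorization of 2013 and the stated prime sum:

```python
# Verify the arithmetic cited in the text: 2013 = 3 x 11 x 61, and
# 2013 as a sum of eleven primes.

def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

assert 3 * 11 * 61 == 2013

primes = [149, 157, 163, 167, 173, 179, 181, 191, 193, 197, 263]
assert all(is_prime(p) for p in primes)
assert sum(primes) == 2013
print("2013 checks out")
```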

 


Namuda Study #32: "Spring" (March 2013) - duration 9'.

This study is the first to include the newly finished <Temblo> robot. The mappings of gestural properties on musical robots are as follows:

  • smooth/edgy gestures @ Temblo
  • fluent gestures @
  • constant speed gestures @ Toypi
  • accelerating gestures @ Bomi
  • slowdown gestures @
  • shrinking gestures @
  • exploding gestures @
  • collision @ Player Piano (chordal structures)
  • theatrical collision @ Xy
  • gestural freeze @ Harma, HarmO and Vibi

The pitch material is based on a note series; the part for the viola is freely based on this series.

Technical note: Source code for this composition is in \gmt\namuda\springs.inc. The run-time compilation is namuda.exe


Namuda Study #33: "Sprong" (March 2013) - duration 10'.

This study is a collaborative work with Kristof Lauwers. The main robotic instruments used in this piece are <Springers>, <Puff>, <Psch>, <Vacca>, <Qt>, <Tubi>, <Troms> and <Temblo>. The choreography was worked out by Emilie De Vlam, who also gave the premiere performance on March 21st, 2013, at Logos. A high-resolution video is available in the Logos Foundation archives.

Technical note: Source code for this composition is in \gmt\namuda\springs.inc. (?) The run-time compilation is namuda.exe


Namuda Study #34: "Eggs for Glasgow" (April 2013) - duration 25'.

The choreography for this study is very similar to that worked out for #19 (Elfjes), as it again uses asanas from yoga practice as the goal point of every section of the piece. The music as well as the mapping of gestural properties onto the instructions for the musical robots is totally different, though. As asanas are basically static poses involving no substantial gesture in time, here we underline them with precomposed music fragments, arranged and orchestrated by Sebastian Bradt. The gestures required to go from one asana to the next make use of different mappings, depending on the gestural way the next asana is to be reached. In these transitions the composition becomes fully interactive.

The following gesture mappings apply throughout this piece:

  • Fluency @ <Ob>, <Korn>, <Klar>, <So>, <Bono>, <Heli>
  • Edgy @ <player piano>
  • Smooth @ <Harma>
  • Slowdown @ <piperola>, <Bomi>, <Toypi>
  • Speedup @ <Xy>
  • Collision @ <Vacca>
  • Expanding @ <Thunderwood>
  • Theatrical collision @ <Temblo>
  • Shrinking/Imploding @ <Vibi>
  • FixedSpeed @ <Bono>, <So>
  • Jump @ <Troms>, <Snar>
  • Freeze @ <piperola>,<Bomi>,<Harma>,<Player Piano>

The Freeze property is only used in the final section of the piece. This study was finished and premiered in April 2013. There are music prototype contributions by Sebastian Bradt and code contributions by Kristof Lauwers.

First try-out performance: 16.04.2013, Ghent, Logos Tetrahedron, by Dominica Eyckmans

International premiere: 19.04.2013, Glasgow, Royal Concert Hall, by Dominica Eyckmans.

Technical note: Source code for this composition is in \gmt\namuda\glasgow.inc. The run-time compilation is namuda.exe. The required precomposed midi files are: 1_Intro_Solemn.mid, 2_Wood.mid, 3_Blurry.mid, 4_Winds1.mid, 5_Wispelturig.mid, 6_Ascending.mid, 7_Makaber.mid, 8_Speelgoed.mid, 9_Twisted.mid.


Namuda Study #35: "Faves" (May 2013) - duration 10'.

This study is structurally very similar to #23 ('Wet') and uses 1/f distributions calculated in real time for both pitch and durations. The data sets are completely different, however, and reflect the general theme of this M&M production: 'Waves'. The pitch class datasets are: The duration sets: The choreography, including dancing viola playing, was worked out in collaboration with the performer, Dominica Eyckmans. The lighting uses mostly sodium vapour light, with a trace of blue at the end.

The orchestration includes following robots: <Ob>,<Korn>, <Heli>,<Bono>,<Fa>, <Klar>, <So>, <Spiro>, <Player Piano>, <Toypi>, <Troms>, <Temblo>, <Dripper>,<Vibi>,<Bomi>,<Krum>.

The premiere of this study, with Dominica Eyckmans, took place on May 16th, 2013 in the Logos Tetrahedron.


Namuda Study #36: "Waves" (May 2013) - duration 20'.

This study was written in collaboration with Kristof Lauwers, who took care of all mapping of our gesture recognition engine onto the choreography worked out with Emilie De Vlam.



Namuda Study #37: "Namuda Three" (July 2013) - duration 60'.

This is a full-evening choreographed production written in collaboration with Kristof Lauwers and Sebastian Bradt. There are also music contributions by Jan Baumers. The dancers are Dominica Eyckmans, Emilie De Vlam and the author. The staging features the introduction into the robot orchestra of two newborn robots: <Asa> and <Horny>. Next to the gesture recognition engine at work here, this production also uses the audio analysis features that give the robots the possibility to react to audio input from interacting human players. The dancing viola player has a small DPA microphone on the instrument as well as a wireless transmitter.

This composition was performed three times in a row, on July 23rd, 24th and 25th, 2013.

Technical note: Source code for this composition is in \gmt\namuda\gf2013.inc. The run-time compilation is namuda.exe. The required precomposed midi files are: ...


Namuda Study #38: "Whispered Questions" (September 2013) - duration 6'00".

This study was written for the newly finished <Whisper> robot. It uses no sounds with a specified pitch, only 'noises'. The dancer uses only non-sung vocal sounds and utterances.

Gesture prototypes and their mappings used for this study are:

  • acceleration mapped on the whisper sounds in <Whisper>
  • deceleration mapped on the shakers in <Whisper>
  • edgy-smooth dipole, mapped on <Temblo>
  • collision mapped on <Psch> and <Snar>
  • implode property mapped on <Dripper>
  • explode property mapped on <Thunderwood> and <Springers>
  • theatrical collision mapped on <Casta>, <Casta2> and <Simba>


Namuda Study #39: "Gentle Math" (October 2013) - duration 10'00".

This study, just like studies #23 and #35, makes use of 1/f distributions. However, not only are the data sets strictly reduced here; the sizes of the elements are very restricted as well. The 1/f distribution law is enforced on pitch, duration and dynamics alike. The choreography also attempts to use gestures submitted to 1/f distributions. There is an optional part for the viola, and here as well all material strictly follows the given distribution laws.

These are the data sets for pitch and for the durations used in this study:

Robots used in this composition: <Asa>,<Korn>,<Bono>,<Heli>,<Fa>,<Horny>,<Ob>,<Klar>,<Piperola>,<Bourdonola>,<Qt>,<Psch>,<Whisper>,<Player Piano>,<Temblo>,<Bomi>,<Vibi>.
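One common reading of such a 1/f selection law, sketched here with placeholder data sets (the study's actual pitch and duration sets are not reproduced in the text): the n-th element of an ordered set is drawn with probability proportional to 1/n.

```python
import random

# Sketch of a 1/f (Zipf-like) choice: element n is drawn with a weight
# of 1/n. The data sets below are placeholders, not the study's own.

def one_over_f_choice(data, rng=random):
    weights = [1.0 / (i + 1) for i in range(len(data))]
    return rng.choices(data, weights=weights, k=1)[0]

pitch_set = [60, 62, 65, 67, 70]       # hypothetical pitches (MIDI)
duration_set = [0.25, 0.5, 1.0, 2.0]   # hypothetical durations (beats)

note = (one_over_f_choice(pitch_set), one_over_f_choice(duration_set))
print(note)
```

Applied independently to pitch, duration and dynamics, this yields the strongly skewed but never fully predictable event streams such a distribution law implies.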

Gestural mappings (in order of appearance in the composition):

  • 0'00": Ob-Korn: Ob @ edgy, Korn @ Smooth
  • 0'45"- 1'10": Vibi: 1/f equaliser, amplitudes are body-surface dependent (Vectors x,y)
  • 1'16"- 8'30": Temblo-Troms: @ speedup gestures
  • 2'10"-2'45": Vibi: 1/f equaliser, amplitudes are body-surface dependent (Vectors x,y)
  • 3'00": Piano: 1/f equaliser, amplitudes are body-surface dependent (Vectors x,y)
  • 5'00" - 7'30": Fa, Heli, Bono @ slowdown gestures (x,y,z)
  • 8'40": Psch @ freeze gesture
  • 8'45": finale start: Temblo count down in crescendo.
  • 8'50": Vibi: 1/f equaliser, amplitudes are body-surface dependent (Vectors x,y)
  • 10'00": End.

Manual fill-ins (decided in real time by the laptop operator as a function of the choreography):

  • Klar-So: Klar@ Edgy, So@smooth [X-vector]
  • Asa-Bomi: Bomi @ Edgy, Asa @ Smooth [Y-vector]
  • Toypi-Spiro: Toypi @ Edgy, Spiro @ Smooth [Z-vector]

 

Some pictures from this production:

Premiere: October 22nd, 2013 Logos Tetrahedron. Performed by Dominica Eyckmans and the author.


Namuda Study #40: "DunkelDark" (November 2013) - duration ca.10'.

This study builds further on the initial coding written for Namuda Study #4, 'Robodomi'. Written for Dominica Eyckmans, it uses the combined recognition of the gestural prototypes edgy, smooth, fluent, speedup and slowdown. It makes use of the harmony-building algorithms after Hermann von Helmholtz integrated in GMT's harmony library.

Mapping of gesture recognition on robots, in the order of appearance in the performance is as follows:

  • Fluency @ <Ob>, <Korn>, <Asa>, using pitch bending. This can be combined with viola. It is the first module used in the performance. [Task Rob3]
  • Explode @ <Klar>, <Horny>, <Fa>, <Springers> [Task Rob7, started as soon as the dancer reaches ground]
  • Slowdown @ <Piperola>, <Qt>, <Bourdonola>, <Whisper> on very low wind pressure. [Task Rob2, pp. Task Rob7 and Rob3 OFF]
  • Slowdown @ <Toypi>, <Spiro> [Task Rob6]
  • Speedup @ <Bono>, <So>, <Heli>. On peak detection, <Llor> will sound. [Task Rob4, started after 3 short-long cues from the viola]
  • Edgy-Smooth dipole @ <Player Piano>, <HarmO> and <Harma>. The spectrum of the gestures is mapped on the chord structures played by the reed organs. [Task Rob1, other tasks OFF]
  • Speedup @ <Xy>, <Puff> [Task Rob5, for preparation to ending]
  • Fluency @ <Ob>, <Korn>, <Asa>. [short reprise of Task Rob3, together with:]
  • Fluency @ Whisper [Task Rob8, ending with Whisper.]

 

Premiere: November 21st, 2013, Logos Tetrahedron. Performed by Dominica Eyckmans and the author.

Technical note: Source code for this composition is in \gmt\namuda\robodomi.inc. The run-time compilation is namuda.exe. This study supersedes #4.


Namuda Study #41: "White" (December 2013) - duration 15'.

This study was written for Emilie De Vlam. Since in optics white is a balanced mixture of all spectral components, we took the omnipresence of all chromatic notes as the compositional base for this study.

Premiere: December 17th, 2013, Ghent, Logos Tetrahedron.

 


Namuda Study #42: "Happy Robots" (January 2014) - duration 8'.

This study was written for Dominica Eyckmans. Premiered January 15th, 2014 at Logos Foundation.

A high-resolution video made by TVF is available for download here (554 MByte).

More pictures on this production.


Namuda Study #43: "High Order Derivatives" (February 2014) - duration 10'.

This study was written for Dominica Eyckmans. Premiered February 20th, 2014 at Logos Foundation.

The study is based on the exploration of the first, second and third derivatives of the surface vectors. Rather than a composition, this study is much more an implementation of an invisible instrument. It has extreme sensitivity to dynamic gesture properties and very high responsiveness. In one of the six sections of the piece, we try out the possibilities of using the higher-order derivatives of the speed vectors. This is a highly experimental and explorative study, mainly written to test extensions and further possibilities of our gesture recognition software. It may lead to the discrimination of quite a few more gesture characteristics than we have had hitherto.
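As a plausible reading of this derivative exploration (the actual GMT implementation is not reproduced here), higher-order derivatives of a sampled gesture signal can be obtained by repeated finite differencing:

```python
# Sketch: repeated finite differences of a sampled signal (an assumption;
# the real gesture engine works on sonar-derived surface and speed vectors).

def derivatives(samples, dt=0.01, order=3):
    """Return [signal, 1st, 2nd, ..., order-th derivative] as lists."""
    result = [list(samples)]
    for _ in range(order):
        prev = result[-1]
        result.append([(b - a) / dt for a, b in zip(prev, prev[1:])])
    return result

# A quadratic sampled at dt=1 has a constant second derivative:
print(derivatives([0, 1, 4, 9, 16], dt=1)[2])  # [2.0, 2.0, 2.0]
```

In practice each extra order of differencing amplifies sensor noise, which fits the extreme sensitivity and responsiveness noted above.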

A high-resolution video made by TVF is available for download here (981 MByte).

Technical note: Source code for this composition is in \gmt\namuda\deriv.inc. The run-time compilation is namuda.exe.


Namuda Study #44: "Rods for Rodo" (April 2014) - duration 10'.

This study was written for Dominica Eyckmans. Premiered April 16th, 2014 at Logos Foundation.

It was written on the occasion of the first public appearance of our newly built <Rodo> robot, and therefore uses but a single robot. The piece consists of seven modules, each with its own gesture mappings, exploiting the different musical possibilities of the newborn <Rodo> robot.

  • Module #1: 3-note clusters, rising. Repeat rate depending on movement speed, amplitude on body surface.
  • Module #2: harmonic dyads. Repeat rate depending on movement speed, amplitude on body surface.
  • Module #3: Singing Rods: here we use the electromagnetic driver to make the rods sing. The music is monophonic and the expansion gesture property is used (first derivative of surface). The coding is non-positional.
  • Module #4: Edgy-Smooth properties. Edgy steers the fast 4-note chord sequences and uses the damper mechanism. Smooth steers the pitch of the injected notes, derived from the strongest spectral component in the gesture spectrum.
  • Module #5: Slowdown property: sustain is off, and the pitches follow a descending curve. The feedback mechanism comes into action at times.
  • Module #6: Speedup property: sustain is on here and the pitches will be rising, the size of the rising intervals being a function of acceleration.
  • Module #7: Using the dampers on Rodo only, and thus a very soft section. The melody lines, in three voices, are a mapping of the implode gesture property, whereas the tempo is a function of the global speed of movement.
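By way of illustration only, the Module #1 mapping (repeat rate from movement speed, amplitude from body surface) could be sketched as below; every range and scaling here is an assumption, not a value from the study's source code:

```python
# Hypothetical sketch of Module #1: repeat rate follows movement speed,
# MIDI velocity follows detected body surface. All ranges are assumptions.

def module1_params(speed, surface, max_speed=1.0, max_surface=1.0):
    """Map normalised gesture data to (repeats_per_second, midi_velocity)."""
    speed = min(max(speed, 0.0), max_speed)          # clamp to valid range
    surface = min(max(surface, 0.0), max_surface)
    rate = 1.0 + 9.0 * (speed / max_speed)           # 1..10 clusters/second
    velocity = int(20 + 107 * (surface / max_surface))  # MIDI 20..127
    return rate, velocity

print(module1_params(0.5, 0.8))
```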

More pictures on this production.

Technical note: Source code for this composition is in \gmt\namuda\deriv.inc. The run-time compilation is namuda.exe. Code module order for the performance is: P1, P4, P3, P7, P6, +P5, P2.


Namuda Production #4: "Kine" (July 2014) - duration 50'.

This is a full-evening dance production with dancers Dominica Eyckmans, Emilie De Vlam and the author. The production uses some 35 robots and has musical as well as code contributions by Kristof Lauwers and Sebastian Bradt. Premiered July 22nd, 2014 and presented again July 23rd and 24th at Logos Foundation. A picture album is available on this site.

This production, available to organisers throughout the next season, requires a fully equipped theatre, preferably with a steel floor.


Namuda Study #45: "Litany" (September 2014) - duration 15'.

This study was written for Dominica Eyckmans. Premiered September 16th, 2014 at Logos Foundation. It makes use of a fugue I wrote in 1991 (Fugue #13, Fuga Litania), above which interactive gesture-controlled elements are juxtaposed. We made an orchestration of the original 4-voice material using some twelve robots. Pitched musical events triggered by gesture properties are made to take the global harmony of the fugue fully into account. The code can serve as a model for applications where a pre-existing midi file has to be meaningfully combined with fully interactive polyphonic voices. In a future study, we want to also lock the rhythm and meter to such a file. For this study there was no need to do so, as the tempo is absolutely constant and fixed at MM60. The metre is a quite bizarre 13/8 and is at the base of the choreography we worked out.


Technical note: Source code for this composition is in \gmt\namuda\litany.inc. The run-time compilation is namuda.exe. Next to the robots, the code also requires both displays to be connected.


Namuda Study #46: "Okto" (October 2014) - duration 10'.

This study is basically a reworking of studies #40 (Dunkeldark) and #4 (Robodomi). The underlying code, apart from some additions to take advantage of the newborn robots, is fundamentally the same. However, the choreography is completely new, and hence the music also came out sounding very different. The choreography makes use of the figure-8 motif and the ocho, derived from tango dance. It was premiered October 22nd, 2014 by Dominica Eyckmans, dance and viola.


Namuda Study #47: "Bellies" (November 2014) - duration 8'.

Namuda Study #48/49: "Robo Hybr" (December 2014 / January 2015) - duration 15'.

This study was written as a demonstration of the possibilities of our newborn robot <Hybr>, a membrane-driven pipe organ.

The study was premiered in two sections, with different performers: Emilie De Vlam and Dominica Eyckmans. A preliminary version was performed by the author in december 2014.


Namuda Study #50: "RoboTec" (February 2015) - duration 4'.

This study is a complete rewrite of our RoboGo! study, for prepared player piano and a single dancer. It was written as a performance contribution to Felix Van Groeningen's movie 'Belgica', to be performed on the film-set location by the author on February 5th, 2015. Due to severe restrictions on the available space, I decided to use only the player piano, prepared however, to gain access to some percussion-like sounds. The tempo was given (MM120) as part of the commission, hence we did not make this parameter dependent on any gesture property. Depending on gesture speed, however, the tempo can be halved, kept constant or doubled, preserving the basic beat.
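The halve/keep/double rule can be sketched as follows; the MM120 base is given above, but the speed thresholds here are hypothetical:

```python
# Sketch of the tempo rule: basic beat fixed at MM120 (given in the
# commission); gesture speed selects half, normal or double tempo.
# The threshold values are assumptions.

BASE_TEMPO = 120  # MM, given

def effective_tempo(gesture_speed, low=0.3, high=0.7):
    """Return the tempo in MM for a normalised gesture speed in 0..1."""
    if gesture_speed < low:
        return BASE_TEMPO // 2   # slow movement: halved tempo
    if gesture_speed > high:
        return BASE_TEMPO * 2    # fast movement: doubled tempo
    return BASE_TEMPO            # otherwise: keep the basic beat

print(effective_tempo(0.9))  # → 240
```

Because only factors of 2 are used, the underlying beat grid stays intact whichever branch is taken.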

The introduction is 5 bars (in 4/4). The duration of the core part is not limited in the score, but should not exceed 3'30". When this section is stopped, a final cadenza starts automatically.

The piano preparations are to be as follows:

  • notes 22, 23, 24 (A, Bb, B): used together, these should sound like a bass drum. Wooden clothes pegs can be clamped on the strings.
  • notes 97, 98 (c#, d): a rubber wedge between the strings, to produce a dry, almost unpitched sound
  • notes 99, 100 (eb, e): metallic sound, should refer to a cymbal: an M4 bolt with tambourine bells.
  • notes 102 and 103 (f#, g): metallic sound: an M4 bolt with a loose washer can be inserted in the strings (should refer to a snare drum sound)
  • notes 104 to 108 (g# to c): a flat piece of foam with a 1 kg weight on top to keep it in place.

Technical note: Source code for this composition is in \gmt\namuda\robogo.inc. The run-time compilation is namuda.exe. Select the robogo study in the menu.


Namuda Study #51: "Seduction" (February 2015).

This study reconnects to our research on higher-order derivatives from the gesture data engine, started in study #43. Here the new possibilities offered by our newly introduced <Hybr> robot are exploited, as well as those of our newest robot <HybrHi>, which adds three more octaves to the ambitus of <Hybr>.

The study was premiered by Dominica Eyckmans, February 17th, 2015 at the Logos Tetrahedron, on the occasion of the robot orchestra concert on the theme of seduction.

Technical note: Source code for this composition is in \gmt\namuda\namuda_hybr.inc. The run-time compilation is namuda.exe. Select the hybr study in the menu. Operator instructions in namuda#51.txt.

This composition was performed again in May 2016. Here is an MP3 file of this performance.


Namuda Study #52: "Tumult" (March 2015).

This study is a further development of study #38. It uses non-pitched sounds.


Namuda Study #53: "Mekano" (April 2015).

This study introduces our newly finished <Bello> robot into the robot orchestra.


Namuda Study #54: "Dancing" (June 2015).

This study was written for Dominica Eyckmans.


Namuda Study #55: "Impossible" (July 2015). [60']

This is a one-hour production scored for three dancers and mostly musically 'impossible' instruments. The staging makes use of two separate performance spaces. The first space, a lodge, contains the three robotic airplane propellers controlled by our 24 GHz radar system and makes use of the newly installed staircase there. The second space is the Logos Tetrahedron, where most of the robot orchestra is set up. Here only the sonar-based gesture recognition system is used to steer 'impossible' robots: Llor, Belly, Vacca, Springers, Thunderwood, Bello, Vitello, Klung, Simba. The dancers also play three large Javanese gongs suspended so high from the ceiling that they have to jump in order to reach them.

Collaborators in this production: Emilie De Vlam, Dominica Eyckmans, Lara Van Wynsberghe, Kristof Lauwers, Mattias Parent. General director and production leader: Godfried-Willem Raes

The production was performed on July 21st, 22nd and 23rd, 2015.


Namuda Study #56: "Tintinabuli" (September 2015). [15']

This is a Namuda study using just a single robot, in this case the newly finished <Tinti> robot. The piece starts with a non-interactive exposition wherein interactivity is gradually introduced. In the following section, the speed of gesture steers the repetition frequency of the tintinnabuli and the body surface the velocity of the strokes. The third section maps the slowdown gesture property on Tinti's bells. The fourth section uses the ultrasonic aspect of Tinti without using any bells at all: here we play with the interferences caused by our ultrasound-based invisible instrument and the variable, gesture-controlled ultrasonic frequencies Tinti can produce. The fifth section recognises the speedup property and maps it on a combination of bells and varying ultrasound. The last section uses the edgy property in combination with gesture-controlled ultrasonic chromatic scales. An optional finale can be added, but has no gestural interactivity.

Collaborators/ performers in this production: Dominica Eyckmans and Godfried-Willem Raes

The production was performed on September 24th, 2015.


Namuda Study #57: "Tekstuur" (October 2015). [15']

This study for a vocalising dancer with a handheld microphone is similar to study #14, Miked. The singer's vocal utterances are spectrally mapped on Hybr, such that we can clearly hear Hybr, in tandem with HybrHi, speak. This is a major improvement over study #14, where the mapping made use of the Qt robot. The singer's gestures are likewise submitted to fast Fourier transforms and, in their three vectors, spectrally mapped on the robots <Harma>, <Xy>, <Puff>, <Piperola> and <Bourdonola>.
Collaborators/performers in this production: Dominica Eyckmans and Godfried-Willem Raes

This production was performed on October 28th, 2015.


Namuda Study #58: "Oor-sprong" (February 2016). [12']

This study, for a dancer manipulating a pair of very large ears, is highly theatrical and humorous in nature.
Collaborators/performers in this production: Dominica Eyckmans and Godfried-Willem Raes

This production was performed on February 18th, 2016. Picture album of this production.


Namuda Study #59: "Vorm" (March 2016). [10-12']

This study uses the newest feature added to our player piano: automated note repeats. It is written in the form of a sonata (ABA) and uses chordal structures derived from the shape of the performer's gestures.
Collaborators/performers in this production: Emilie De Vlam and Godfried-Willem Raes

Technical note: Source code for this composition is in \gmt\namuda\ascent.inc. The run-time compilation is namuda.exe. Select the pp_nam study in the menu.

MP3 file of this study, recorded March 2016


 

Namuda Study #60: "De Passie van Chi" (April 2016). [12-16']

This study uses the newest (at the time of writing...) robot in the orchestra: <Chi>. The study was performed by Dominica Eyckmans and the author, April 21st, 2016.


Namuda Production #6: "Haram" (July 2016) - duration 60'.

This is a full-evening dance production with dancers Dominica Eyckmans, Emilie De Vlam and the author. The production uses some 60 robots and has musical as well as code contributions by Kristof Lauwers, Xavier Verhelst and Lara Van Wynsberghe. Premiered July 19th, 2016 and presented again July 20th and 21st at Logos Foundation. There is a special place for the newly reborn <Flex> robot in this production.

 

The 'Namuda' project on gesture control, as well as the development of the robot orchestra, are post-doctoral research projects carried out with the support of the Ghent University Association, School of Arts (2001 to 2014), and now with the support of Ghent University as well as the Orpheus Institute.


 

Namuda Study #61: "Family Completed" (October 2016). [6'- 8']

This study uses the newest (at the time of writing...) robot in the orchestra: <HybrLo>. This completes the family of hybrid robots, consisting of <Hybr>, <HybrHi> and <HybrLo>. The study was performed by Dominica Eyckmans and the author, October 19th, 2016.

Performance operator note: Task order in the <GMT> cockpit for the performance:

  1. HybrLo_0: intro, lowest note only. Amplitude modulated by gesture. Switching this task off will start the next task:
  2. HybrLo_6: mapping on slowdown.
  3. Add Hybr_Edgy_Trig
  4. Edgy_Trig OFF, Deriv's ON (alternate on/off)
  5. Slow_gw only until end.

A picture album on the concert where this study was performed can be found here.

 

P.S.: No royalties must or may be paid for performances of these pieces. Any attempt to collect royalties, by whatever agency, in connection with public performances of these pieces, or with recordings thereof, can be prosecuted in court as attempted extortion.
Godfried-Willem RAES
Public Domain, may 2017
