Zoom Presentation at the DMLab Event

Finally, we’ve been able to access the recording of our presentation at Rich Mix on the 28th of May. Watching ourselves talk in public is a valuable experience, and it will be very helpful for future exercises and meetings of this kind. The talk ran smoothly from beginning to end, and the audience found the topic of using sensors as an accessible feature for musical instruments quite interesting and innovative. Before our presentation there was also an informative introductory video from our course leader Annie Goh, explaining the content of BA Sound Arts in more depth, but for some reason it wasn’t added to this excerpt. The whole event was presented by Rob Parton, the DMLab project manager & associate fundraiser, who introduced the speeches brilliantly and asked some questions too, and Deborah Borg Brincat, programme delivery manager at Drake Music, who organised and brought in questions from the Zoom chat. The whole event was also made accessible for deaf attendees by two brilliant BSL interpreters. It was a great evening, and here we have the recording of our presentation.

DMLab at Rich Mix

The very final test in this project was to give a presentation at a public event organised by Drake Music. The event hosted another presentation apart from ours, about accessible DJing, which was really interesting. Our presentation went great, and we enjoyed talking through the different stages and concepts of our project. We had loads of engaging questions and various suggestions for future improvements to our instrument. The attendees were very interested in the device, and the conversation branched into diverse fields of accessibility, physicality and technology. We accompanied the talk with pictures of the development process as well as a little demonstration of the instrument, and some people even tried it after the presentation. Here is a short video recorded by one of the attendees.

The whole presentation was recorded on Zoom and will probably be shared soon. However, despite Annie and us having contacted Drake Music via email, I’m afraid that at the time of writing, on the submission day, we still don’t have access to these recordings. I will make another blog post with them, so by the time this unit is marked they might be available. I hope this can still be taken into consideration, given that the presentation happened just two days before the submission deadline, and I think it is valuable documentation of our progress during this unit.

Anyway, here is Drake Music’s website for any further information; they usually upload the full presentations to YouTube too. We really enjoyed this experience, and it was a really interesting project. Here are some pictures that I took at the event.

https://www.drakemusic.org

Wiring, Coding and finishing up the Air Soundscape Generator

Our patch in Pure Data is now finished, and we need to load our own samples into the project so it is ready to generate soundscapes. After finishing the design and laser cutting together, we decided to split the next steps: Lucas would be in charge of the sound design, and I would do the coding, as I’ve shown previously. He sent me the samples and I programmed them to load in PD using the message [open filename.wav]; the object [readsf~] then plays the sound file after being triggered with a bang. The files are played and looped from the beginning using the object [loadbang], and their volume is controlled with the distance sensors, as we have seen before. The initial volume is 0, so the instrument stays silent until it detects some activity on the sensors. This is the Bela IDE, the built-in application where we load PD patches onto the Bela Mini.

The Bela IDE helps us load patches, samples and other customisations, as well as set up a project to run on boot so the Bela works as a standalone instrument. In the patch we can see the final programming for each sensor: on the left side, the sensor equations that calculate distance; on the right side, the sample file system; and in the middle, the volume control.

Now I was ready to wire up the four sensors and attach them to the instrument’s enclosure. Here are some pictures of the main wiring system.

As mentioned before, our instrument plays three different sounds commonly used in Sound Arts: noise, ambience and field recordings; the fourth sensor adds an effect to the master channel. I also added two hinges to the top side in order to create a lid that can be opened and closed if we want to access the wiring inside. And this is the amazing look of our finished accessible instrument:

I did a little jam to share on social media, which received good feedback, with plenty of intrigued people asking questions about it. I also tagged Bela Platform in it and they shared it on their main profile. Here is that jam, where you’ll be able to see and hear how the Air Soundscape Generator works.

Ultrasonic Sensors

One of the key elements of our accessible instrument, and something Lucas and I were very clear about from the beginning of its development, was the idea of implementing sensors in the functionality of the device. These sensors, the HC-SR04, known as ultrasonic or distance sensors, are commonly used in interactive installations, and they have been widely used in combination with the Bela board and Pure Data, tools that we studied at the beginning of this course in the unit “Expanded Studio Practice for 21st Century Sound Artists”.

The way these sensors work is similar to the echolocation that allows bats to navigate through space. One of the two round transducers emits an ultrasonic pulse, while the other receives the echo. By sending a trigger from PD to the Bela board and applying a space/time equation, which is available in Bela’s online documentation, the sensor can output a value equivalent to the distance of any object interfering in that direction. The sensor also needs some wiring and a couple of resistors to work, and I built this circuit on a breadboard to start experimenting with one of the sensors.

Little setup to test one sensor
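The space/time equation itself is simple: sound travels at roughly 343 m/s at room temperature, and the echo covers the distance twice, out and back, so the round-trip time is halved. Here is a minimal Python sketch of the idea; the function name and constant are my own illustration, not Bela’s exact code:

```python
# Illustrative sketch of the HC-SR04 space/time equation.
# The sensor reports the time between emitting the ultrasonic pulse and
# receiving its echo; the sound travels out AND back, so we halve it.

SPEED_OF_SOUND_CM_PER_US = 0.0343  # ~343 m/s at ~20 degrees C

def echo_time_to_distance_cm(echo_time_us):
    """Convert a round-trip echo time in microseconds to a distance in cm."""
    return (echo_time_us * SPEED_OF_SOUND_CM_PER_US) / 2

# An object ~10 cm away returns an echo after roughly 583 microseconds:
print(round(echo_time_to_distance_cm(583), 1))  # → 10.0
```

This is why a warmer or colder room slightly shifts the readings: the speed of sound changes with temperature, so the constant is only an approximation.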

But using these sensors to control parameters is not as straightforward as it might look at first sight, and it took me quite a lot of time to work out the best possible patch to obtain a natural, smooth feel when controlling the volume of a sample, which is the behaviour we wanted for this instrument. Here is a short video of the first experiments with volume control of a sample; in Bela’s console we can also see the distance printed, calculated every 60 ms.

To make this volume control more natural, I added some smoothing to the printed values with the object [line~], sending it the message [$1 800], where $1 is a variable, in this case the distance; the ramp takes 800 ms to move from one value to the next, making the volume control smoother. I also used the objects [samphold~] and [snapshot~] to hold the distance at any point; otherwise the distance, and therefore the volume, kept increasing after removing the hand. The maximum distance is also capped at 25 cm to give a reasonable movement range. Once I had a good patch for the first sensor, I just needed to replicate it for the four sensors; the only difference is that the last one controls a filter I made with [vcf~], while the other three samples are routed to the filter and the master output. Here is the final patch in Pure Data, ready to be tested with four sensors.
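To show the same control logic outside Pure Data, here is a rough Python sketch of what the patch does: cap the raw distance at 25 cm, map it to a 0–1 volume target, and ramp towards each new target over 800 ms, much like [line~] driven by the [$1 800] message. All names and numbers besides the 25 cm cap and the 800 ms ramp are my own illustration, not the actual patch:

```python
# Illustrative sketch of the sensor-to-volume logic used in the PD patch.

MAX_DISTANCE_CM = 25.0  # readings are capped for a reasonable hand range
RAMP_MS = 800.0         # like [line~] receiving the message [$1 800]

def distance_to_target_volume(distance_cm):
    """Map a capped distance reading to a 0..1 volume target."""
    capped = min(max(distance_cm, 0.0), MAX_DISTANCE_CM)
    return capped / MAX_DISTANCE_CM

def ramp_step(current, target, elapsed_ms):
    """Move `current` linearly towards `target`, completing in RAMP_MS."""
    step = abs(target - current) * min(elapsed_ms / RAMP_MS, 1.0)
    return current + step if target > current else current - step

vol = 0.0                                 # the instrument starts silent
target = distance_to_target_volume(12.5)  # hand detected at 12.5 cm → 0.5
vol = ramp_step(vol, target, 400)         # after 400 ms we are halfway there
print(round(vol, 2))                      # → 0.25
```

The hold behaviour of [samphold~]/[snapshot~] corresponds to simply keeping the last target when no fresh reading arrives, so the volume settles instead of drifting upwards.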

Laser-cutting the enclosure in the 3D Workshop

Once our design for an accessible instrument had been created in Adobe Illustrator, it was time to join the 3D Workshop staff, run the laser cutter and build the structure of the Air Soundscape Generator. The Illustrator file is loaded into dedicated laser-cutting software, and after setting up the machine and placing the selected material, the laser cut is ready to run.

The design is cut and engraved on a 3 mm plywood board, a good, resistant material normally used for these kinds of projects. The cutting process lasted 11 minutes, and our enclosure is ready to be assembled. We are really happy with how the engraving came out: the design looks precise and delicate, and the titles explain the instrument’s different functions well. The next step will be coding and wiring up the sensors.

First design for our accessible instrument

For the accessible instrument that we’ll be creating for Drake Music, following path 2 in this collaborative unit, we are going to use the laser cutters in the 3D Workshop to create a wooden enclosure for the device. I’m teaming up with my BA Sound Arts classmate Lucas Yoshimura on this project, and our first step has been creating this design together in Illustrator, ready to be cut into plywood with the laser cutter. The Air Soundscape Generator, as we’ve called it, is a kind of sampler controlled with sensors, which can be played with any part of the body; it will feature three different kinds of atmospheres common in Sound Arts, plus an effect. This is the final design, ready to be assembled.

Sessions with Megan Steinberg from Drake Music

Following our visit to Drake Music and ahead of our project for an accessible instrument, we’ve also had some support lectures with Megan Steinberg, an accessibility specialist who works with Drake Music and holds a PhD in accessible instruments. These lectures focused on explaining different kinds of disabilities and how to adapt musical elements to make them accessible. They were helpful and inspiring in terms of instrument design, and we also got to discover several projects related to disability from both Megan and Drake Music. We also did an exercise choosing from a couple of different options, where I chose creating a musical score for a pianist with anxiety. I really enjoyed this exercise and learnt a bit more about making accessible musical notation. In this design I used a piano-roll pattern with different relaxing icons that could make a piano player feel more comfortable. This was the result:

Future Goals

Following my recent research into audio programming languages, I feel really motivated to keep exploring these kinds of techniques. I can’t wait to find other interesting related tools, and I’ll certainly be investing some time into improving my skills with new applications. I think these tools are great for sound design and procedural audio, and I’m really looking forward to developing my artistic practice in these fields as well as finding professional opportunities.

Within coding, my next step will probably be adapting what I’ve learnt about sound programming during this unit to live performance, or live coding. One tool that will allow me to do this, and that is compatible with Python, is FoxDot. This language is well known among live coders; it links SuperCollider with Python, allowing us to code in real time. SuperCollider itself is another utility I’d like to look at soon, and I will probably be researching it a little.

Also in terms of audio programming, I would like to come back to Pure Data. I recently downloaded the library ‘Cyclone’, which provides some of the main features in which Max/MSP differs from PD, and I would like to have a look at it. I would also like to do some research into visuals, coming back to Gem for PD to make some generative visuals, and investigate which applications are available for making computer-generated visuals in Python, my new favourite platform for coding projects.

A PD patch using Cyclone

Looking forward to next year, and probably at some interactivity projects, I will try to create little sound installations, maybe have a look at Arduino or Raspberry Pi, and see what kinds of simple projects I could practise with. I will combine these new experiments with my more usual compositions with synthesisers, hopefully finding some free time to compose some tracks. Another little project I had in mind was to make modular synth cases with the laser cutter in the 3D Workshop and expand my modular rig a bit more, so I could be designing them soon, with the design ready for September this year. Apart from all this, I hope to use this time off to read some related books and, of course, to get some well-deserved rest and be fully recovered before the start of the 3rd year.

Taking some time this summer to make recordings in the studio

Sequencing and finishing my composition with Python

I’m really close to finishing my composition for the Creative Sound Work in Element 2, and I’m really happy with this research into audio programming. I’d like to say that this DSP library for Python in particular sounds amazing, really clear and powerful, and I would strongly recommend these tools for sound creation. There are hundreds of objects and commands to explore, and although the workflow might look a bit tedious, especially compared with DAWs, it becomes intuitive and fun once you get familiar with it. However, learning these basic notions of PYO has been a difficult task, as there are hardly any tutorials available online; everything I’ve learnt comes from my own research in the library’s documentation and some threads on GitHub written by PYO’s creator, Olivier Bélanger.

The track I came up with has plenty of generative elements, introducing chance and probability into different parameters, but there are also some more traditional musical elements, which I thought were important to show in my process and understanding of an audio tool. From the beginning I wanted to explore a kind of high-pitched texture that could give an ASMR feel to the sound, something that could tickle our hearing. That’s why I created these aquatic sounds, as well as other elements like a wind/water kind of sound created with noise, which ended up forming a soundscape that I could say is very lake/pond oriented. I love it, because I really like these rainforest, beach and exotic-place types of sonic environment.

In this audio work I wanted to express a strong experimental touch, because I think that is what we study the most on this course. As I mentioned before, I added extra elements like percussion, noise and effects to the final code, but this would take too long to explain in writing, so I’ve recorded a video showing the whole code. Then I’ll explain a bit about the sequencing methods and the final mastering to finish this research.

Creating an arrangement for the sounds I made was an important part of this creative process. The command ‘.play()’, added at the end of an object, is what determines when that object, for example a metronome, will start to run. This command has other parameters, like duration and delay, which count in seconds, and this is what I’ve used to create the arrangement for this composition. Another useful object is ‘Fader’, which creates a fade-in and fade-out of volume; when applied to the mul parameter, it allows us to manipulate volumes in real time. Let’s see some examples:

# Here the percussion metronome will start after 32 seconds and will run
# for a duration of 124 seconds.

pmeter = Metro(prnd, 2).play(delay=32, dur=124)

# We can use this method more strategically, to place a sound at a
# specific moment, using various objects, and even loop them.

def repeat6():
    cenv.play(delay=8)

pat6 = Pattern(function=repeat6, time=32).play(delay=32, dur=128)

# The Fader object allows us to increase and decrease volume gradually,
# as I did with the wind effects at the beginning.

fad = Fader(fadein=16, fadeout=5, dur=64, mul=0.6).play(delay=16)
windrev = Freeverb(input=[windpass, windpass2], size=2, damp=2, mul=fad).out()
 

These are the kinds of techniques I’ve used to create an arrangement. It has worked for me, but there may be other ways to create automation more precisely. I think that to unlock the full potential of making music with PYO, we could export individual sounds and then manage them in a DAW or sampler to work more precisely; but for this sound work, the challenge I was looking for was to create the whole composition with coding only, and that meant sequencing it completely in the programming console.
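Since every section is placed with delay and dur values counted in seconds, one way to keep a longer arrangement readable is to write the timeline as absolute start/end times and convert them to the keyword arguments that ‘.play()’ expects. This is a hypothetical helper of my own, not part of PYO, and the section names are just illustrations:

```python
# Illustrative helper: describe the arrangement as (start, end) times in
# seconds, then convert each section into the delay/dur keyword arguments
# used by PYO's .play() method.

timeline = {
    "wind":       (16, 80),   # e.g. the Fader example: delay=16, dur=64
    "percussion": (32, 156),  # e.g. the Metro example: delay=32, dur=124
}

def play_kwargs(start, end):
    """Turn absolute start/end seconds into .play(delay=..., dur=...) args."""
    return {"delay": start, "dur": end - start}

print(play_kwargs(*timeline["percussion"]))  # → {'delay': 32, 'dur': 124}
```

Writing the timeline this way makes it easier to shift a whole section later in the piece without recalculating every duration by hand.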

Finally, I used the recording option that appears every time we run the code; it starts and stops recording and saves the file in the user folder. Later I will load this sound into Pro Tools to adjust the volume accordingly, add some mastering and create an ending fade-out. This was the whole process behind the research and development of my Creative Sound Work. I will definitely keep using PYO to create sound, and I want to keep improving my skills in audio programming, also trying other tools; the next step will probably be live coding audio with these kinds of methods.

Guest Lecture Series (Summer Term)

Seymour Wright

Seymour is an experimental musician and saxophonist, and in this lesson he showed us his work and influences. He also played some of his field recordings from Chelsea Bridge, as well as some of his favourite records and his own releases. He loves jazz and vinyl, and his style merges noise, jazz and funk from different approaches; his recordings are full of energy. He used to live in Elephant and Castle and knows the area well.

Luciano Maggiore

Luciano is a sound artist and experimental musician originally from Palermo, Italy. He plays synths and tape recorders, and loves vinyl too. His lo-fi compositions and field recordings are full of interesting nuances. He often plays versions of famous experimental musicians at Cafe OTO and other venues. The whole session was hosted by Rory and Ecka on stage, and they also played some vinyl for us; it was interesting and fun.

Mosquito Farm

This London-based female duo has played at numerous venues lately, using little DIY noise devices. They are also interested in sculptural electronics, having created a couple of installations; one of them, which launches table tennis balls around the venue, became a controversial piece among the exhibition’s attendees. Apart from showing us their studio in Woolwich, they performed a little live set for all of us at LCC.