I went to Berlin for Transmediale 2023, having watched lectures from the conference for a long time. I found it really interesting that they embraced the fluidity between performance, lecture and screening, and this, combined with the Site Writing module, made me more experimental with my own work. It was amazing to see that academic outputs can still be engaging and, really, just cool. It was also where I had the chance to talk to McKenzie Wark and Farzin Lotfi-Jam, who both work in the same field as I do.
Before I found Porthcurno Beach, I found it hard to pin my project to a specific site. This was a problem not only because of the nature of the course, but also because it made it difficult to concretise the abstract philosophical and theoretical concepts my project was dealing with. I attempted to create an intervention in which people's movements in a space would generate sounds; however, this turned out to be beyond my coding ability. In hindsight this was lucky, because I later realised that it would not have got to the core of what I was talking about anyway, and I could focus on other things. I used Max MSP, and some of the sounds I made are linked below.
https://on.soundcloud.com/YspU4
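The movement-to-sound idea can be sketched outside Max MSP too. The snippet below is a minimal illustration in Python/NumPy, not the actual patch: it simulates two greyscale camera frames, measures how much has changed between them, and maps that motion amount onto the pitch of a sine tone. All names, frame sizes and frequency ranges are my own assumptions for the sketch.

```python
import numpy as np

SAMPLE_RATE = 44100

def motion_amount(prev_frame, frame):
    """Mean absolute pixel difference between two 8-bit greyscale frames, scaled to 0-1."""
    diff = np.abs(frame.astype(float) - prev_frame.astype(float))
    return float(np.mean(diff) / 255.0)

def motion_to_tone(amount, duration=0.25, base_hz=220.0, span_hz=660.0):
    """Map a motion amount (0-1) onto a sine tone between base_hz and base_hz + span_hz."""
    freq = base_hz + amount * span_hz
    t = np.linspace(0, duration, int(SAMPLE_RATE * duration), endpoint=False)
    return freq, np.sin(2 * np.pi * freq * t)

# Two fake camera frames: a still scene, then one where a bright region "enters".
rng = np.random.default_rng(0)
still = rng.integers(0, 256, (120, 160), dtype=np.uint8)
moved = still.copy()
moved[40:80, 60:100] = 255

amount = motion_amount(still, moved)
freq, samples = motion_to_tone(amount)
print(f"motion={amount:.3f} -> {freq:.1f} Hz, {len(samples)} samples")
```

In a live version the fake frames would be replaced by a webcam feed and the samples sent to an audio output, but the mapping step (motion measure in, frequency out) is the core of the intervention described above.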
After experimenting with making sounds from machine vision, I wanted to see whether I could create a closed pipeline of sound-making by connecting machine vision and machine learning. Still working in Max MSP, I used cameras and microphones to pick up how people were moving through spaces and what that sounded like. Unfortunately, the ML/AI toolkits in Max are rudimentary, so I struggled to make anything that was actually listenable. What I learned from this, however, would later come in handy when I used similar processes to make the sounds for Ground Truth. I also started to experiment with how radio waves can interact with and distort each other, as a more lo-fi way of making sounds from people in a space.
https://on.soundcloud.com/jgPxZ
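The interference effect behind the radio experiments can be shown numerically. This is an illustrative sketch with made-up values, not a model of the actual setup: two sine waves at nearby frequencies, summed as if occupying the same medium, produce beats at their difference frequency, which is the kind of distortion the bodies-in-a-space experiments were listening for.

```python
import numpy as np

SAMPLE_RATE = 8000
t = np.linspace(0, 2.0, int(SAMPLE_RATE * 2.0), endpoint=False)

f1, f2 = 440.0, 443.0  # hypothetical carriers, 3 Hz apart
mixed = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# Trig identity: sin(a) + sin(b) = 2 cos((a - b) / 2) sin((a + b) / 2).
# The sum is a carrier at the average frequency, amplitude-modulated by a
# slow envelope, heard as |f2 - f1| = 3 beats per second.
envelope = 2 * np.cos(np.pi * (f2 - f1) * t)
carrier = np.sin(2 * np.pi * (f1 + f2) / 2 * t)
print("max deviation from identity:", np.max(np.abs(mixed - envelope * carrier)))
```

With real radio waves the carriers drift and bodies in the space detune them, so the beating is messier, but the underlying mechanism is this same interference between nearby frequencies.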