Algorithmic Urban Composition

This project explores the possibilities of “urban composition” by mashing up urban landscapes through an algorithmic sonification process driven by object detection software. Because the modern city contains diverse elements (e.g. nature, buildings, and human activities), we aim to aurally re-represent alternative cityscapes by generating sounds through the eyes of machines. Our question is what unique sounds and spaces emerge when urban complexity is brought into the field of audiovisual composition.
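The statement above does not describe the actual mapping from detected objects to sound; as a minimal sketch of one possible detection-to-sonification step, the snippet below mixes a sine tone per detected object, scaled by detection confidence. All class names, frequencies, and the `sonify_detections` function are hypothetical illustrations, not the work's implementation.

```python
import numpy as np

SAMPLE_RATE = 44100

# Hypothetical mapping from detected object classes to base frequencies (Hz).
CLASS_TO_FREQ = {"person": 220.0, "car": 110.0, "tree": 330.0, "building": 55.0}

def sonify_detections(detections, duration=1.0, sample_rate=SAMPLE_RATE):
    """Mix one sine tone per detection.

    `detections` is a list of (label, confidence) pairs, standing in for
    the output of an object detector; confidence scales each tone's amplitude.
    """
    t = np.linspace(0.0, duration, int(sample_rate * duration), endpoint=False)
    mix = np.zeros_like(t)
    for label, confidence in detections:
        freq = CLASS_TO_FREQ.get(label, 440.0)  # fall back to A4 for unknown classes
        mix += confidence * np.sin(2 * np.pi * freq * t)
    # Normalize to [-1, 1] so the summed tones never clip.
    peak = np.max(np.abs(mix))
    return mix / peak if peak > 0 else mix

audio = sonify_detections([("person", 0.9), ("car", 0.6)])
```

In a live setting, the detections would instead stream in frame by frame from the detector, so the sound field shifts as the visible cityscape changes.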

This work was accepted and exhibited at the Linux Audio Conference 2019.

Date: March 25th 2019
Place: Stanford University CCRMA Listening Room

Artists: Kenta Tanaka and Kye Shimizu
Kenta Tanaka: Composition, Sound Programming
Kye Shimizu: Machine Learning Engineering, Visual Programming
Ryo Yumoto: Technical Support
Yuki Aizawa: Video Filming
Shinya Fujii: Academic Adviser