EASTN-DC Manchester SoundWalk

This year the Royal Musical Association Annual Conference took place in Manchester (11-13 Sept 2019), hosted by the Royal Northern College of Music and The University of Manchester. For the occasion, I received a commission to create a soundwalk for RMA participants exploring some of the history and urban features of the University’s campus. It was a great opportunity to demonstrate the new SonicMaps V2 platform for Locative Audio, which is now in beta and will be released in the coming weeks with a whole new interface and geolocation engine. https://sonicmaps.xyz

Maia Demo @ CCRMA Stanford

On October 25th, I had the opportunity to demonstrate Maia’s AR-VR synchronization feature at the CCRMA Stage. For this purpose, we spent the first day of our visit 3D-modelling the concert space and the multichannel speaker array so it could be explored in a Unity VR instance while serving as a spatial map to position content in the parallel AR experience.

[Photos: Maia presentation at CCRMA Stanford; avatar navigating the virtual concert space; content synchronisation AR demo]

MAIA (EASTN artist-in-residence)

MAIA is a mixed reality simulation framework designed to materialise a digital overlay of creative ideas in synchronised trans-real environments. It has been proposed as an extension to the author’s previous research on Locative Audio (SonicMaps).

For this purpose, a number of hyper-real virtual replicas (1:1 scale) of real world locations are to be built in the form of 3D parallel worlds where user-generated content is accurately localised, persistent, and automatically synchronised between both worlds—the real and the virtual counterparts.

The focus is on blurring the boundaries between the physical and digital worlds, facilitating creative activities and social interactions in selected locations regardless of the ontological approach. We should thus be able to explore, create, and manipulate “in-world” digital content whether we are physically walking around these locations with an AR device (local user), or visiting their virtual replicas from a computer located anywhere in the world (remote user). In both cases, local and remote agents will be allowed to collaborate and interact with each other while experiencing the same existing content (i.e. buildings, trees, data overlays, etc.). This idea resonates with some of the philosophical elaborations by Umberto Eco in his 1975 essay “Travels in Hyperreality”, where models and imitations are not just a mere reproduction of reality, but an attempt at improving on it. In this context, VR environments in transreality can serve as accessible simulation tools for the development and production of localised AR experiences, but they can also constitute an end in themselves, an equally valid form of reality from a functional, structural, and perceptual point of view.

Content synchronisation takes place in the MAIA Cloud, an online software infrastructure for multimodal mixed reality. Any changes in the digital overlay should be immediately perceived by local and remote users sharing the same affected location. For instance, if a remote user navigating the virtual counterpart of a selected location decides to attach a video stream to a building’s wall, any local user pointing at that physical wall with an AR-enabled device will not only watch the newly added video but might also be able to comment on it with its creator, who will be visible in the scene as an additional AR element.
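To give a concrete (though purely illustrative) idea of what might travel through the MAIA Cloud in that scenario, the C# sketch below shows a hypothetical “content placed” event that a client could publish when a new asset is attached to a location. Every class, field and value here is an assumption made for the example, not the actual MAIA schema or API.

```csharp
using System;
using System.Text.Json;

// Hypothetical payload for a "content placed" event. All names and fields
// are illustrative assumptions, not the actual MAIA Cloud schema.
public record ContentPlacedEvent(
    Guid ContentId,      // unique id of the new digital asset
    string AuthorId,     // user who attached the content
    string MediaUri,     // e.g. a video stream URL
    double Latitude,     // WGS-84 anchor position of the asset
    double Longitude,
    double AltitudeM,
    DateTime PlacedUtc);

public static class SyncDemo
{
    public static void Main()
    {
        // A remote (VR) user attaches a video stream to a building's wall.
        var evt = new ContentPlacedEvent(
            Guid.NewGuid(), "remote-user-42",
            "https://example.org/stream.m3u8",
            53.4668, -2.2339, 45.0, DateTime.UtcNow);

        // The client publishes this JSON to the cloud, which relays it to
        // every AR and VR client currently subscribed to that location.
        Console.WriteLine(JsonSerializer.Serialize(evt));
    }
}
```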

Watch the Maia AR App Demo Video

A number of technical and conceptual challenges need to be overcome in order to meet the project’s goals, namely:

Developing appropriate abstractions (API) for the structuring and manipulation of MR objects in the MAIA server.
Modelling precise 3D reproductions of real world locations via 3D scanning, architectural plans, photographs, etc.
Finding the best Networking and Multi-Player solutions to operate within the simulation engine.
Designing reliable outdoors 3D object-recognition algorithms (computer vision).
Providing robust localisation and anchoring of digital assets in world absolute coordinates (a minimal coordinate-conversion sketch follows this list).
Enabling interactions between virtual and physical objects in AR mode.
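
On the anchoring point above, a common first step is converting WGS-84 coordinates into the local metric frame of the simulation engine. The snippet below is a minimal sketch of one standard approach, an equirectangular approximation around a reference origin, which is adequate for neighbourhood-scale scenes; it is not MAIA’s actual implementation, which would require a proper geodetic solution.

```csharp
using System;

// Minimal sketch: map WGS-84 (lat, lon, alt) to a local East-North-Up frame
// centred on a reference origin. Adequate for scenes a few hundred metres
// across; a production system would use a full geodetic library.
public static class GeoAnchor
{
    const double EarthRadiusM = 6378137.0; // WGS-84 equatorial radius

    public static (double east, double north, double up) ToLocal(
        double lat, double lon, double alt,
        double originLat, double originLon, double originAlt)
    {
        double dLat = (lat - originLat) * Math.PI / 180.0;
        double dLon = (lon - originLon) * Math.PI / 180.0;
        double cosLat = Math.Cos(originLat * Math.PI / 180.0);

        double east  = EarthRadiusM * dLon * cosLat; // metres east of origin
        double north = EarthRadiusM * dLat;          // metres north of origin
        double up    = alt - originAlt;              // metres above origin
        return (east, north, up);
    }
}
```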
The outcome of this research project will be made publicly available to content aggregators and general users as multiple separate apps (the MAIA client) for specific platforms. The options currently under consideration, depending on the access and navigation medium, are:

Virtual Reality Client (Virtual World Navigation)

A Unity WebGL online app to explore virtual spaces using an HTML5-compliant web browser.
Fully immersive stereoscopic rendering of virtual environments using VR headsets such as Oculus Rift or HTC Vive.
A dedicated iOS app (iPad only).
Augmented Reality Client (Physical World Navigation)

A custom AR app for Android and iOS mobile devices (smartphones, tablets, iPads).
Ideally, the Maia VR iPad app will also include an AR mode, so a single app will be enough to explore both local and remote locations. Virtual reality visualisations on smartphones have not been considered because of the small viewport—a large screen is recommended for optimal content editing/management in these 3D environments.

All the apps will share similar functionality, as they should provide multiplayer access to selected locations and their characteristic objects and urban features (e.g. buildings, traffic signs, benches, statues, etc.). These physical objects—either visualised through a mobile device’s camera or as virtual representations in the parallel virtual world—may be used as anchors, markers or containers for our digital content (i.e. text, images, sounds, static/dynamic 3D models), and we shall be able to set privacy levels to decide who can access it. If no privacy restrictions are imposed, the new content would be added to MAIA’s public layer and could be accessed by anyone in the MAIA network.
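As a purely illustrative example of how a client might honour those privacy levels before rendering anchored content, the sketch below defines a minimal set of levels and a visibility check. The level names and rules are assumptions for the example, not MAIA’s actual policy.

```csharp
using System.Collections.Generic;

// Illustrative only: the privacy levels and visibility rules below are
// assumptions, not the actual MAIA policy.
public enum Privacy { Public, Restricted, Private }

public class AnchoredContent
{
    public string OwnerId = "";
    public Privacy Level = Privacy.Public;
    public HashSet<string> AllowedUsers = new();  // used when Level == Restricted

    // Decide whether a given user should see this content in AR or VR.
    public bool IsVisibleTo(string userId) => Level switch
    {
        Privacy.Public     => true,   // public layer: anyone on the network
        Privacy.Restricted => userId == OwnerId || AllowedUsers.Contains(userId),
        Privacy.Private    => userId == OwnerId,  // owner only
        _ => false
    };
}
```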

As a curiosity, the name MAIA was chosen with reference to the Sanskrit word māyā, interpreted in Indian philosophy as “illusion”, “magic”, or the mysterious power that turns an idea into physical reality—ideas being “the real thing”, and their physical materialisation “just an illusion” to our human eyes. In this sense, the digital overlay proposed here constitutes a world of existing ideas in the MAIA cloud, materialising in trans-real environments via virtual or augmented reality.

Dr. Ignacio Pecino
Artist-in-residence EASTN-DC Manchester
Funded by the European Art-Science-Technology Network for Digital Creativity (EASTN-DC)

Virtual Piano WebGL App

Recursive Arts Virtual Piano is an ultra-realistic musical keyboard with intelligent auto-accompaniment of popular songs. This Web app is compatible with most modern browsers supporting WebGL. No additional plugins are required.

3D modelling and lighting were performed using Blender. Interaction was implemented using the Unity Game Engine.

It can be played at https://recursivearts.com/virtual-piano/


Recursive Arts Virtual Piano features two separate key mappings for the computer keyboard, although it can also be played using the mouse. The MAX (maximum) mapping provides access to the full five octaves of the piano by using the ‘Shift’ modifier key to play any black piano key. The REAL (realistic) mapping emulates a real piano keyboard layout, offering faster and more direct access to the black keys without having to press the Shift modifier.
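
For readers curious how such mappings can be expressed in code, here is a much-reduced C# sketch of the two ideas. The specific letters and note numbers are made up for illustration and are not the app’s actual bindings.

```csharp
using System.Collections.Generic;

// Reduced sketch of the two mapping ideas. Key choices and note numbers are
// illustrative only (MIDI note 60 = middle C), not the app's real bindings.
public static class PianoMapping
{
    // One letter per natural (white) note, shared by both layouts.
    static readonly Dictionary<char, int> Naturals = new()
    {
        ['a'] = 60, ['s'] = 62, ['d'] = 64, ['f'] = 65,
        ['g'] = 67, ['h'] = 69, ['j'] = 71, ['k'] = 72
    };

    // Naturals with no black key a semitone above (E and B).
    static readonly HashSet<int> NoSharpAbove = new() { 64, 71 };

    // REAL layout: black keys get their own row, like a physical keyboard.
    static readonly Dictionary<char, int> RealBlacks = new()
    {
        ['w'] = 61, ['e'] = 63, ['t'] = 66, ['y'] = 68, ['u'] = 70
    };

    // MAX layout: Shift + a white-key letter plays the black key above it,
    // freeing the rest of the keyboard to cover the full five octaves.
    public static int? MaxNote(char key, bool shift)
    {
        if (!Naturals.TryGetValue(char.ToLower(key), out int note)) return null;
        return shift && !NoSharpAbove.Contains(note) ? note + 1 : note;
    }

    // REAL layout lookup: naturals plus the dedicated black-key row.
    public static int? RealNote(char key) =>
        Naturals.TryGetValue(key, out int n) ? n :
        RealBlacks.TryGetValue(key, out int b) ? b : (int?)null;
}
```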

Thanks to its flexible, automated accompaniment mode, users can focus on the melodic line while the AI system keeps up with their tempo.
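
Conceptually, the tempo-following can be reduced to estimating the current beat length from the player’s recent note onsets and scheduling accompaniment events against that estimate. The sketch below is a generic formulation of that idea, not the app’s actual algorithm.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Generic tempo-follow sketch (not the app's actual algorithm): keep a short
// history of the player's inter-onset intervals and schedule accompaniment
// relative to the smoothed estimate.
public class TempoFollower
{
    readonly Queue<double> _intervals = new();
    double _lastOnset = double.NaN;

    public double BeatSeconds { get; private set; } = 0.5; // 120 BPM default

    public void OnMelodyNote(double timeSeconds)
    {
        if (!double.IsNaN(_lastOnset))
        {
            _intervals.Enqueue(timeSeconds - _lastOnset);
            if (_intervals.Count > 8) _intervals.Dequeue(); // short memory
            BeatSeconds = _intervals.Average();  // inter-onset average as tempo proxy
        }
        _lastOnset = timeSeconds;
    }

    // Time at which the next accompaniment event should sound.
    public double NextAccompanimentTime(double now) => now + BeatSeconds;
}
```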

It currently includes many popular pieces such as:
1. “Gymnopedie n.1” by Erik Satie (Difficulty: Beginner)
2. “Pavane pour une infante défunte” by Maurice Ravel (Difficulty: Medium)
3. “Rêverie” by Claude Debussy (Difficulty: Medium-Pro)

Songs can be loaded into the piano using links from the website’s songs section.

IX Symposium: Sonic Perspectives

The IX Symposium takes place every year at the Society for Arts and Technology (SAT) in Montreal. This year’s edition focused on sound as an essential element for creating convincing immersive experiences. At the core of the SAT is the Satosphere, an 11 m tall hemisphere featuring 8 projectors and 157 loudspeakers, where I had the opportunity to perform my procedural composition “Boids” with an unprecedented sense of immersion. As part of the programmed workshops, I was also invited to provide some hands-on experience on how Unity and SuperCollider can be used together for the real-time sonification of 3D spatial/kinematic information in procedural composition (“Recursive and Emergent Systems as Spatial Virtual Instruments for Procedural Composition using Game Engines”), which connects strongly with Zack Settel’s approach to “Object-Audio” and the new implementations of procedural audio suggested by the creators of Heavy.

XYZ @ NIME 2015

XYZ is an interactive installation proposing three non-conventional virtual instruments based on spatial and kinematic models to explore timbre, gesture and spatialisation. These models are implemented in a 3D simulation environment (Unity Game Engine), presenting emergent and recursive characteristics that minimise visual information while maximising the exploration of aural space through gesture and motion.

Sounds are procedurally generated in SuperCollider using the incoming spatial and kinematic data from Unity via OSC messages. This approach reinforces the strong connection between the visual (gestural) and sonic aspects of these instruments.
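As a rough sketch of that data path, the C# snippet below hand-encodes a minimal OSC message and sends a position over UDP; on the SuperCollider side, an OSCdef listening on the same address can map the incoming values to synthesis parameters. The address, port and helper names are example choices, not the piece’s actual code.

```csharp
using System;
using System.Net.Sockets;
using System.Text;

// Minimal OSC 1.0 sender: enough to push a position (x, y, z) from the
// simulation to SuperCollider. The address "/xyz/pos" and port 57120
// (SuperCollider's default language port) are example choices.
public static class OscSender
{
    static readonly UdpClient Udp = new UdpClient();

    public static void SendPosition(string host, float x, float y, float z)
    {
        byte[] msg = Concat(
            OscString("/xyz/pos"),   // address pattern
            OscString(",fff"),       // type tags: three float32 arguments
            OscFloat(x), OscFloat(y), OscFloat(z));
        Udp.Send(msg, msg.Length, host, 57120);
    }

    static byte[] OscString(string s)
    {
        int len = Encoding.ASCII.GetByteCount(s);
        int padded = (len / 4 + 1) * 4;          // null-terminated, 4-byte aligned
        byte[] bytes = new byte[padded];
        Encoding.ASCII.GetBytes(s, 0, s.Length, bytes, 0);
        return bytes;
    }

    static byte[] OscFloat(float f)
    {
        byte[] b = BitConverter.GetBytes(f);
        if (BitConverter.IsLittleEndian) Array.Reverse(b); // OSC is big-endian
        return b;
    }

    static byte[] Concat(params byte[][] parts)
    {
        int total = 0;
        foreach (var p in parts) total += p.Length;
        byte[] result = new byte[total];
        int offset = 0;
        foreach (var p in parts)
        {
            Buffer.BlockCopy(p, 0, result, offset, p.Length);
            offset += p.Length;
        }
        return result;
    }
}
```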

Multiple simultaneous users are invited to interact with the piece using custom software interfaces on touch-screen hardware devices, allowing them to explore the proposed sounds and instrumental techniques in a collaborative performance/improvisation. These instrumental techniques were implemented as control methods (API) in the scripting language (C#), including random and generative elements that introduce a certain level of indeterminacy and variety into the system.

This work was premiered on 31 May as part of the NIME 2015 conference (New Interfaces for Musical Expression) and will be open to the public until 28 June at the Glassell Gallery (Shaw Center for the Arts) in Baton Rouge, Louisiana.


Boids @ NYCEMF 2015

The New York City Electroacoustic Music Festival will take place June 22-28, 2015 at the Abrons Arts Center in New York City.

My latest procedural-generative audiovisual piece “Boids” (2014) will be performed during the festival (exact date to be confirmed).

For further details please visit:
http://nycemf.org

My City, My Sounds

A Symposium on Sonic Augmented Reality organised by ZKM | Institute for Music and Acoustics.

12-14 Dec 2014, Karlsruhe, Germany.

I was kindly invited to participate with the paper “SonicMaps – Locative Audio: Technical Report, Challenges and Opportunities”, alongside very interesting talks by Fred Adam, Ricardo Climent, Torsten Michaelsen, Udo Noll, Thomas Resch and many more.

Special thanks to Ludger Brümmer, Yannick Hofmann and Marie-Kristin Meier for their warm reception and hospitality.

Abstract

“This talk offers an insight into the design and implementation of the SonicMaps geolocation engine and its Locative Audio platform. Created in 2012 using the Unity Game Engine, SonicMaps offered new solutions for 3D spatial audio, in-app soundwalk creation, editing and online publishing. Current research and development explores adaptive audio (dynamic content), real-time binaural synthesis and new proximity technologies for areas with reduced GPS signal or indoor spaces, all of which should help extend the means of sonic interaction with our environment.”

For further details please visit:

http://zkm.de/en/event/2014/12/my-city-my-sounds

Boids @ Sines & Squares Festival

“Sines & Squares” is one of the UK’s first festivals and concert series celebrating the recent resurgence of analogue and modular synthesisers, featuring some of the most renowned UK and international performers, composers, lecturers and designers working with Buchla, Serge, Eurorack, Hordijk and EMS modular systems.

Here are some pics from this amazing festival, where I was given the opportunity to present my latest procedural/generative work, “Boids”.


ICMC-SMC 2014

The joint ICMC|SMC|2014 Conference took place in Athens, Greece, from 14 to 20 September 2014, bringing together two well-established events: the 40th International Computer Music Conference (ICMC) and the 11th Sound & Music Computing conference (SMC). The main theme of the Conference was “Music Technology meets Philosophy: From Digital Echos to Virtual Ethos”.

My contribution to the conference was a paper on “Spatial and Kinematic Models for Procedural Audio in 3D Virtual Environments”, which discusses and summarises new ideas and methodologies that emerged from three recent pieces: Singularity (2013), Apollonian Gasket (2014) and Boids (2014). This trilogy proposes models for non-conventional 3D virtual instruments that explore musical gesture and spatialisation as a result of spatial and kinematic data.