ListenTree is an audio-haptic display embedded in the natural environment. Motivated by a need for forms of display that fade into the background, our installation invites attention rather than requiring it. We consume most of our digital information through devices that often alienate us from our immediate surroundings; ListenTree points to a future where digital information might become enmeshed in the material world.


Introduction

A visitor to the installation notices a faint sound that seems to emerge from a tree (or several), and might feel a slight vibration under their feet as they approach. By resting their head against the tree, they can both feel and hear crystal-clear sound through bone conduction. To create this effect, a specialized audio exciter transducer is weatherproofed and attached underground at the base of each tree, transforming the tree into a living speaker that channels audio through its branches and provides vibrotactile feedback. Any kind of sound can be played through the tree, including live audio or pre-recorded tracks.

How It Works

A single controller unit is wired to multiple underground transducers, each affixed to the roots of a tree. The controller is designed to be a self-powered, self-contained, plug-and-play module, adaptable to any tree. We use a solar panel to charge a battery; computation and wireless network connectivity are provided by an embedded computer. Audio signals for each tree are generated on the embedded computer and output through a USB sound card and a stereo audio amplifier. Weatherproof connectors on the control module lead to buried speaker cables, which run underground to the transducers. The transducers are cast in silicone rubber to protect them from the elements.
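As a rough illustration of how audio might be routed from the embedded computer to a particular tree, the sketch below plays a pre-recorded track through one channel of a stereo USB sound card, with each channel assumed to drive one buried transducer through the amplifier. The device name, file path, channel assignment, and choice of Python libraries (sounddevice, soundfile, numpy) are illustrative assumptions, not details of the actual ListenTree controller software.

# Minimal sketch under the assumptions above: send a pre-recorded track
# to a single tree by playing it on one channel of the USB sound card.
import numpy as np
import sounddevice as sd
import soundfile as sf

USB_DEVICE = "USB Audio Device"        # assumed sound card name; see sd.query_devices()
TRACK_PATH = "forest_recording.wav"    # hypothetical pre-recorded track

def play_to_tree(path, channel):
    """Play a mono mix of the file on one output channel (one tree)."""
    data, rate = sf.read(path, dtype="float32")
    if data.ndim > 1:                  # downmix stereo/multichannel to mono
        data = data.mean(axis=1)
    stereo = np.zeros((len(data), 2), dtype="float32")
    stereo[:, channel] = data          # route audio only to the chosen tree
    sd.play(stereo, rate, device=USB_DEVICE)
    sd.wait()                          # block until playback finishes

if __name__ == "__main__":
    play_to_tree(TRACK_PATH, channel=0)

In the installation itself, the controller presumably runs as a long-lived process and drives multiple amplifier channels at once; this sketch only shows the basic routing idea.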

Locations

You can view currently active ListenTree installations at the locations below. Some pieces are on exhibit for a limited time, while others reside in permanent collections, such as the one at the MIT Museum.

MIT Museum
Cambridge, USA



Who We Are

Gershon Dublon

gershon@media.mit.edu

Gershon Dublon is an artist, engineer, and PhD student at the MIT Media Lab, where he develops new tools for exploring and understanding sensor data. In his research, he imagines distributed sensor networks forming a collective electronic nervous system that becomes prosthetic through new interfaces to sensory perception: visual, auditory, and tactile. These interfaces can be located both on the body and in the surrounding environment. Gershon received an MS from MIT and a BS in electrical engineering from Yale University. Before coming to MIT, he worked as a researcher at the Embedded Networks and Applications Lab at Yale, contributing to research in sensor fusion.

Edwina Portocarrero

edwina@media.mit.edu

Edwina Portocarrero is a PhD candidate at the MIT Media Lab. She designs hybrid physical/digital objects and systems for play, education, and performance. She previously studied lighting and set design at CalArts. An avid traveler, she has worked at a documentary production house in Brazil, served as a lighting designer in her native Mexico, hitchhiked her way to Nicaragua, lived in a Garifuna village in Honduras, documented the soccer scene in Rwanda, and honed a special skill for pondering after sitting still for hundreds of hours while modeling for world-renowned artists. Currently, telepresence and telepistemology, cognitive development, and the question of what play means in the 21st century occupy her mind as she tries to imagine and reinvent the playgrounds of tomorrow.