Text from a well-known play, presented in order but with many omissions, flits by repeatedly until someone holds down a button. As long as pressure is maintained, the screen displays one particular selection of one character's already-degraded lines. This allows visitors to read of forgetting and the loss of bodily form, and to trace some of the material efforts that have been made to enhance memory, reaching from the dim history of the book into computing.
Bubble Beyond is an interactive video installation that lets visitors engage in an augmented reality where they can play with dozens of virtual bubbles. As visitors come into view, bubbles emanate from their heads, capture their faces, float about randomly, and then morph into expressive emoji. Visitors can pop their own bubbles as well as other people's.
Up to six visitors are tracked by a Kinect V2 sensor, which captures both their motion and their image using its 1080p video camera. A custom algorithm generates the virtual bubbles, which are projected on an 8 x 4.5 foot screen. After 30 seconds without human interaction, clouds of emoji float by. Bubble Beyond is a playful representation of our interaction and individual expression in the Internet Age.
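The 30-second idle behavior described above can be sketched as a simple two-mode state machine. This is a hypothetical illustration, not the installation's actual code; the class and method names are invented:

```python
import time

IDLE_TIMEOUT = 30.0  # seconds without interaction before emoji clouds appear


class InstallationState:
    """Tracks the last visitor interaction and reports the current mode."""

    def __init__(self):
        self.last_interaction = time.monotonic()

    def on_visitor_detected(self):
        # Called whenever the Kinect reports a tracked body in view.
        self.last_interaction = time.monotonic()

    def mode(self, now=None):
        """Return 'bubbles' while visitors interact, 'emoji-clouds' after
        30 seconds of inactivity."""
        now = time.monotonic() if now is None else now
        if now - self.last_interaction >= IDLE_TIMEOUT:
            return "emoji-clouds"
        return "bubbles"
```

The render loop would simply ask `mode()` each frame and draw face-bubbles or drifting emoji clouds accordingly.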
Can natural elements from a virtual reality take solid form and exist in our physical reality? Can virtual particles affect real world molecules?
This project is a mixed reality installation in which the wind of Second Life is used to move a windchime in real physical space. The virtual wind’s direction and speed are the variables that determine the device’s functionality in real time. This work creates a parallel between these two realities (virtual and physical), showing how they relate and interact with each other, creating a portal from one world to the other.
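The piece says the virtual wind's direction and speed drive the physical device, without detailing how. As a hedged sketch, one could imagine mapping those two variables to actuator commands for a fan aimed at the chime; the function name, the PWM range, and `max_speed` are all assumptions, not the artist's implementation:

```python
def wind_to_actuator(speed, direction_deg, max_speed=10.0):
    """Map virtual wind (speed in m/s, direction in degrees) to a fan
    power level (0-255 PWM) and a heading for the chime rig."""
    clamped = max(0.0, min(speed, max_speed))
    pwm = int(round(clamped / max_speed * 255))  # linear speed -> fan power
    heading = direction_deg % 360.0              # normalize the compass angle
    return pwm, heading
```

In such a setup, each update from Second Life's wind API would be converted to a `(pwm, heading)` pair and sent to a microcontroller driving the fan.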
This project explores how a graffiti mural changes over time as existing images are painted over by new artists.
My aim for this project was to create an interface that works as an intuitive metaphor: pushing forward moves the user forward in time by displaying more recent images.
The artwork I used is from a graffiti mural in Central Square, Cambridge MA. The mural is frequently repainted, and by photographing the mural weekly I was able to capture the mural as it changed over time. Sometimes the changes were gradual evolutions; at other times the mural changed suddenly when large sections were completely painted over.
DreamDrops is an interactive fiber/video installation where viewers can interact with felt sculptures, brought to life as a colorful, immersive video and audio environment.
Three DreamDrops are constructed from felt and paper. The felt gives them their form, and the paper provides a window into the virtual worlds. The Drops are suspended by their "tails" from the ceiling at three heights: the red Drop hangs highest, the blue Drop lower, and the green Drop lowest.
As visitors walk among the DreamDrops, they can see the three animated worlds. The red world looks like clouds at sunset, the blue world like the depths of the ocean, and the green world like a lush tropical forest. The three worlds are populated by "boids", flocks of animated critters that dash about in small groups.

Visitors are invited to poke their heads up into the DreamDrops. There they'll see bursts of color and dynamic sparks triggered by their motion, which is detected by a Microsoft Kinect sensor. People can interact with the boids, who will playfully investigate their human visitors, calling out with a futuristic wail. A hand-clap gesture triggers a colorful tunnel effect that transports the visitor to other worlds.
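"Boids" refers to Craig Reynolds' classic flocking model, in which each critter steers by three local rules: cohesion (move toward nearby flockmates), alignment (match their velocity), and separation (avoid crowding). A minimal 2D sketch of those rules follows; the weights and structure are illustrative, not the ofxBoids code used in the installation:

```python
import math


class Boid:
    def __init__(self, x, y, vx, vy):
        self.x, self.y = x, y
        self.vx, self.vy = vx, vy


def step(boids, dt=0.1, radius=5.0, max_speed=2.0):
    """Advance the flock one tick using cohesion, alignment, separation."""
    for b in boids:
        neighbors = [o for o in boids
                     if o is not b and math.hypot(o.x - b.x, o.y - b.y) < radius]
        if neighbors:
            n = len(neighbors)
            # Cohesion: steer toward the local center of mass
            cx = sum(o.x for o in neighbors) / n - b.x
            cy = sum(o.y for o in neighbors) / n - b.y
            # Alignment: match the neighbors' average velocity
            ax = sum(o.vx for o in neighbors) / n - b.vx
            ay = sum(o.vy for o in neighbors) / n - b.vy
            # Separation: push away from close neighbors
            sx = sum(b.x - o.x for o in neighbors)
            sy = sum(b.y - o.y for o in neighbors)
            b.vx += 0.01 * cx + 0.05 * ax + 0.03 * sx
            b.vy += 0.01 * cy + 0.05 * ay + 0.03 * sy
        speed = math.hypot(b.vx, b.vy)
        if speed > max_speed:  # cap speed so the flock stays watchable
            b.vx, b.vy = b.vx / speed * max_speed, b.vy / speed * max_speed
    for b in boids:
        b.x += b.vx * dt
        b.y += b.vy * dt
```

Because every rule only looks at nearby neighbors, small groups naturally form and dash about together, which is the behavior visitors see inside the Drops.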
With their installation, Rob and Kristina are exploring the threshold between personal space and public space in the technology era. The installation allows people to experience a private environment of light and color in a very public arena.
Several open source software projects were used in this installation:
- OpenFrameworks - an open-source C++ library for creative coding
- ofxKinectNui - Sadam Fujioka's addon for using the Microsoft Kinect
- ofxMSAFluid - an addon for solving and drawing 2D fluid systems, by Memo Akten
- ofxBoids - a flocking motion addon by Satoshi Okami
Rob and Kristina would like to thank Jennifer Lim for her help with this installation.
What if we could receive real-time feedback on our social interactions? To explore this question in the form of a performance, I built such a system for myself using Amazon Mechanical Turk. During a month of continuous dates with new people I met on OkCupid, I streamed each interaction to the web using an iPhone app. Turk workers were paid to watch the stream, interpret what was happening, and offer feedback on what I should do or say next. Their directions were communicated to me via text message.
Dial-A-Style - An Algorithmic Portrait Studio is an interactive video installation that allows visitors to create digital self-portraits in a variety of painterly styles. Through experimentation, viewers will come to understand and appreciate how various styles of painting impact the emotional connection to the artwork.
For online images created by this system, please see http://www.robgon.com/DialastyleImages.aspx
- Click the Start Button
- Position the Webcam
- Spin the Wheel
- Click the Upload Button
There are four major styles represented on the wheel:
- Impressionism - in the style of Van Gogh
- Cubism - in the style of Picasso and Braque
- Pointillism - in the style of Chuck Close
- Anime - the Japanese comic style
The wheel can stop in between neighboring styles, which results in a hybrid styled portrait.
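One way the wheel position could select a style, or a hybrid of two neighbors, is to divide the wheel into equal sectors and use the fractional position as a blend weight. This is a hypothetical sketch; the actual mapping and style ordering in Dial-A-Style are not specified here:

```python
STYLES = ["Impressionism", "Cubism", "Pointillism", "Anime"]


def wheel_to_styles(angle_deg):
    """Map a wheel angle (degrees) to two adjacent styles and a blend
    weight: 0.0 means pure first style, 1.0 means pure second style."""
    sector = 360.0 / len(STYLES)        # 90 degrees per style
    pos = (angle_deg % 360.0) / sector  # fractional sector index
    i = int(pos)
    t = pos - i                         # 0..1 position within the sector
    return STYLES[i], STYLES[(i + 1) % len(STYLES)], t
```

A wheel stopped exactly between two styles (t near 0.5) would then yield an even hybrid, such as an Impressionist-Cubist portrait.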
Open Source Components:
- OpenCV library
- XDoG - eXtended difference-of-Gaussians
- flandmark - face detector
I would like to thank Jennifer Lim for her help with this installation.
SONAR Duel positions two TVs in a computer-generated audio/visual dialogue. Sonar sensors embedded in the televisions and connected to hidden computers converse: the sonars' interaction triggers code on the computers that creates unique patterns on the screens. On their own, the TVs "talk" to each other, moving in and out of visual and harmonic sync. But people can literally step into the conversation by standing between the two televisions. This human intervention generates human-like responses in the sonar dialogue. The screen images react with patterns, colors, and sounds that appear to reflect emotions such as happiness, agitation, and even jealousy. The interaction of the sonars with each other and with people in the space introduces randomness in real time, so that the audio and visual effects depend on the changing environment. In this way, SONAR Duel conveys the illusion of human sentience in technology.
For more information see Will Copps' article here: Full Circle: SONAR Duel Install at Boston CyberArts Gallery
Everything is Made of Atoms is an interactive new media installation that explores the entangled and ever-changing relationship between the body and technology. It draws on earlier works such as Simon Penny's Traces (1999). The piece draws parallels between participants and their digitally-mediated images, expressing both as wholes and, at the same time, as flows of constituent parts whose lifetimes, as philosopher Karen Barad (2003) argues, are not attributes but ongoing reconfigurings of the world.
Everything is Made of Atoms has two major software components: methods to access the streams of image, depth, and skeleton data from the Microsoft Kinect sensor, and routines to perform a high-performance computation of three-dimensional vortex dynamics. These components are connected by an extensible framework of the artists' own design.
Swarm is an interactive, real-time artwork that puts the viewer in an ever-changing autumn forest full of falling leaves. Normally, leaves will simply wobble slightly as they fall and cover the ground. When a viewer comes close to the screen, or passes by quickly, a whirlwind will pick up and swirl the leaves in complex, never-repeated patterns. The dense texture of motion and shape can be calming or torrential. In addition to the constantly-changing wind and turbulence, the piece exhibits a day/night cycle, subtle longer-term changes in the leaf colors, and other details.
The piece is physically composed of a flatscreen monitor for display, and a small desktop computer to run the simulation. To compute the motions of the thousands of leaves, a creative and lesser-known fluid dynamics algorithm is applied to the particles. Both the rendering and this special computation are performed on the GPU. The title, "Swarm," refers to the fact that the essential algorithm used for the wind flow is a relative of the swarming algorithms that were among the first to provide evidence in support of self-organization in complex systems. Viewing Swarm, you could be forgiven for thinking that the leaves had a global plan.
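The particular fluid algorithm is not named, but the family of particle-based wind methods it belongs to can be illustrated with a toy vortex-particle field: each vortex induces a swirl around itself, and the leaves are advected through the summed field while falling. Everything below (function names, constants, the 2D simplification) is an illustrative assumption, not the piece's GPU implementation:

```python
import math


def wind_velocity(x, y, vortices):
    """Sum the rotational velocity induced at (x, y) by a set of point
    vortices, each given as (cx, cy, strength). A small core term
    avoids the singularity at each vortex center."""
    u, v = 0.0, 0.0
    for cx, cy, strength in vortices:
        dx, dy = x - cx, y - cy
        r2 = dx * dx + dy * dy + 0.1  # regularized squared distance
        # Velocity perpendicular to the radius: a swirl around the vortex
        u += -strength * dy / r2
        v += strength * dx / r2
    return u, v


def step_leaves(leaves, vortices, dt=0.02, gravity=1.0):
    """Advect falling leaves (mutable [x, y] pairs) through the wind field."""
    for leaf in leaves:
        u, v = wind_velocity(leaf[0], leaf[1], vortices)
        leaf[0] += u * dt
        leaf[1] += (v - gravity) * dt  # leaves also fall under gravity
```

Because each leaf responds only to the local field, and the vortices themselves can drift and decay, the resulting motion is complex and never exactly repeats, much like the whirlwinds described above.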