Flagscape: Data-visualizing Global Economic Exchange in Virtual Reality

Overview

Scott Kildall is conducting research into data-navigation techniques in virtual reality with a project called Flagscape, which constructs a surreal world of economic exchange between nations, based on United Nations data.

The work deploys “data bodies,” which represent exports such as metal ores and fossil fuels that move through space and convey the complexities of economic relations. Viewers move through the procedurally-generated datascape rather than acting upon the data elements, inverting the common paradigm of legible and controlled data access.

Economic exchange in VR

Details

At runtime, the code draws on several databases, including population, carbon emissions per capita, military personnel per capita and a United Nations database on resource extraction. These get combined to construct the Flagscape data bodies, each of which represents a single datum linked to a specific country.

The only stationary data body is the population model for each country, which scales to that country’s relative value and resembles a 3D person formed by a revolve around a central axis. The code positions these forms at their appropriate 3D world locations, such that China and India, the two largest population bodies, act as waypoints whose forms dwarf all others.
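To make the construction concrete, here is a minimal C++ sketch of how a population body might be assembled: combine a country’s values, scale against the largest population, and place the form at the country’s location in the 3D world. The struct fields, scaling and placement math are my own placeholder assumptions, not the project’s actual code.

```cpp
#include <algorithm>
#include <cmath>
#include <iostream>
#include <string>
#include <vector>

const double kPi = 3.14159265358979;

// One record per country, combined from the source datasets
// (population, CO2 per capita, military per capita, resource extraction).
struct CountryData {
    std::string name;
    double population;   // people
    double latitude;     // degrees
    double longitude;    // degrees
};

// A single data body: a position and a scale in the 3D world.
struct DataBody {
    std::string country;
    double x, y, z;      // world position
    double scale;        // relative size of the revolved figure
};

// Place a population body on a sphere at the country's lat/long and
// scale it relative to the largest population in the dataset.
DataBody makePopulationBody(const CountryData& c, double maxPopulation,
                            double worldRadius = 100.0) {
    double lat = c.latitude * kPi / 180.0;
    double lon = c.longitude * kPi / 180.0;
    DataBody body;
    body.country = c.name;
    body.x = worldRadius * std::cos(lat) * std::cos(lon);
    body.y = worldRadius * std::sin(lat);
    body.z = worldRadius * std::cos(lat) * std::sin(lon);
    body.scale = c.population / maxPopulation;  // China and India dwarf the rest
    return body;
}

int main() {
    std::vector<CountryData> countries = {
        {"China", 1.39e9, 35.0, 103.0},
        {"India", 1.34e9, 21.0, 78.0},
        {"Brazil", 2.09e8, -10.0, -55.0},
    };
    double maxPop = 0;
    for (const auto& c : countries) maxPop = std::max(maxPop, c.population);
    for (const auto& c : countries) {
        DataBody b = makePopulationBody(c, maxPop);
        std::cout << b.country << " scale=" << b.scale << "\n";
    }
    return 0;
}
```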

Population bodies of India and China

A moshed flag skins every data body, acting as a glitched representation that subverts its own national identity. Underneath the flag is a complex set of relations of exchange that exceeds nationhood. For example, resource-extraction machines are built in one country and then purchased by another to extract the very resources used to make those machines.

Brazil flag, moshed

Flagscape reminds us that our borders are imaginary. In this idealized 3D space there are no delineations of territory, only lines that guide trade between countries, with forms magically gliding along invisible paths. What the database cannot tell us is how exactly the complex power relations move resources from one nation to another. Meanwhile, carbon emissions, the only untethered data bodies in Flagscape and the ones that affect the entire planet, spin out of control into the distance, only to be endlessly respawned.

Carbon emissions by Canada and Australia

The primary acoustic element triggers when you navigate close to a population body. That country’s national anthem plays, filling your ears with a wash of drums, horns and militaristic melodies that flow into a state of sameness.
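As a rough sketch of that trigger logic, assuming a simple distance check run each frame, with playAnthem and stopAnthem as hypothetical stand-ins for whatever the engine actually provides:

```cpp
#include <iostream>
#include <string>

struct PopulationBody { std::string country; double x, y, z; };

// Hypothetical stand-ins for the engine's audio calls.
void playAnthem(const std::string& country) { std::cout << "play " << country << " anthem\n"; }
void stopAnthem(const std::string& country) { std::cout << "stop " << country << " anthem\n"; }

// Called once per frame: start the anthem when the viewer drifts within
// the trigger radius of a population body, stop it when they move away.
void updateAnthem(const PopulationBody& body, double vx, double vy, double vz,
                  double triggerRadius = 10.0) {
    double dx = body.x - vx, dy = body.y - vy, dz = body.z - vz;
    bool isNear = dx * dx + dy * dy + dz * dz < triggerRadius * triggerRadius;
    if (isNear) playAnthem(body.country);
    else        stopAnthem(body.country);
}

int main() {
    PopulationBody india{"India", 0.0, 0.0, 0.0};
    updateAnthem(india, 3.0, 0.0, 4.0);   // within 10 units, so the anthem plays
    return 0;
}
```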

Initial Inspiration

The project is inspired by early notions of cyberspace described by writers such as William Gibson, where virtual reality is a space of infinity and abstraction. In Neuromancer, published in 1984, he describes cyberspace as:

“Unthinkable complexity. Lines of light ranged in the nonspace of the mind, clusters and constellations of data. Like city lights, receding…”

Neuromancer

While this text entices, most VR content recreates physical spaces, such as the British Museum with the same artwork, floor tiles and walls as the real building, or it builds militarized spaces in which “you” are a set of hands that trigger weapons as you walk through combat mazes. At some level, this is a consequence of the linear thinking embedded in our fast-paced capitalist economy, arcing towards functionality but ignoring artistic possibilities. This research project acts as an antidote to these constrained environments.

OverkillVR, a virtual reality game

Initial conversations around virtual datascapes with Ruth Gibson and Bruno Martelli led to my being invited to join Reality Remix, an AHRC-funded project under the Next Generation Immersive Experiences call. My role is that of a “collaborator” (aka artist) creating my own project under these auspices.

Spatialization and Materializing Data

Unlike the 2D screen, which has a flatness and everyday familiarity, VR offers full spatialization and a new form of non-materiality, which Flagscape fully plays with. One concept that I have been working with is that since data has physical consequences, it should exist as a “real” object. This project will expand this idea but will also blur sensorial experiences, tricking the visitor into a boundary zone of the non-material.

At the same time, Flagscape is its own form of landscape, creating an entire universe of possibility. It refers to traditions of depicting landscapes as art objects as well as iconic Earthworks pieces such as Spiral Jetty, where the Earth itself acts as a canvas. However, this type of datascape will be entirely infinite, like the boundaries of the imagination.

Spiral Jetty

Finally, Flagscape continues the stream of instruction-based work by artists such as Sol LeWitt, where an algorithm rather than the artist creates the work. Here, this accomplishes a few things: it takes the artist’s hand away from creating the form itself, and it recognizes the power of artificial intelligence to assist in creating new forms of artwork.

Alternate Conception of Space in Virtual Reality

VR offers many unique forms of interaction, perception and immersion, but one aspect that defines it is its alternate sense of space. Similar to the religious spaces that preceded the dominance of science, as described by Margaret Wertheim in The Pearly Gates of Cyberspace, this “other” space has the potential to create a set of rules that transport us to a unique imagination space.

As technology progresses and culture responds, the linearity of engineering-thinking often confines creativity rather than enhances it. Capitalist spaces get replicated and modified to adapt to the technology, validating McLuhan’s predictions of instantaneous, group-like thinking. The swipe gestures we use on our phones get encoded in muscle memory. We slyly refer to Wikipedia as the “wonder-killer”. The flying car is often cited as the most desirable future invention.

Flying car from Blade Runner

At stake with technological progress is imagination itself. Will the content of the spaces that get opened up with new technologies be ones that enhance our creativity or dull it? Who has access to technology-inspired culture? How can we use, enhance and subvert online distribution channels? These are just some of the questions and conversations that this project will ask — in the context of virtual space.

I see VR in a place similar to where video art was in the 1970s, when it thrived with access to affordable camcorders. However, VR, and this specific project, have the ability to disseminate easily into homes and public spaces through various app stores. Ultimately, with this project I hope to direct conversations around access and imagination with art and technology.

Marshall McLuhan with many telephones

Work-in-progress Presentation

Our Reality Remix group will be presenting its research, proof-of-concepts and prototypes at two venues in London on July 27th and July 28th, 2018 at Ravensbourne and Siobhan Davies Studios. Both free events are open to the public.

Bibliography
Gibson, W. (1993). Neuromancer. London: Harper Collins Science Fiction & Fantasy.
McLuhan, M. (1967). The Medium is the Massage: An Inventory of Effects. New York: Bantam Books.
Wertheim, M. (2010). The Pearly Gates of Cyberspace. New York: Norton.

Revamping Moon v Earth

My artwork occupies the space between the digital and analog as I generate physical expressions of the virtual. In the last several years, most of my work has involved transforming data into sculptures and installations.

But sometimes I return to narratives themselves. It’s not so much a lack of focus but rather a continual inquiry into technology and its social expression. Imaginary narratives seem particularly relevant these days with the subjectivity of truth magnifying an already polarized political discourse.

I recently finished revamping a project called Moon v Earth, originally presented in 2012 at the Adler Planetarium. This augmented reality installation depicts a future narrative where a moon colony run by elites declares its independence from Earth. It is now on display at the Institute of Contemporary Art in San Jose.

Here are a few augments from the 2012 exhibition that made it into the 2018 show. My favorite was this pair of newspapers, which showed two different ‘truths’. At the time, “fake news” meant nothing and the idea of seeding false stories into online outlets wasn’t yet remarkable.

The last augment — the ridiculous wooden catapult about to launch rocks at Earth — refers to the Robert Heinlein novel, The Moon is a Harsh Mistress, which inspired my project many years ago. In his plot line, the moon is a penal colony, much like Australia 200 years ago, and an AI is one of the three heads of the revolution. The independence-seekers achieve victory by hurling asteroids at Earth, their most effective weapon.

I created this absurd 3D model in the imaginary world of Second Life as an amateur 3D assemblage. It was quick and dirty, like much digital artwork and as we see nowadays, like the fragility of truth.

The turn of Moon v Earth, at least in the 2018 version, is that the augments aren’t virtual at all; instead they are constructed as physical augments, hanging from fishing line or hot-glued against cardboard backings. At first, I tried working with AR technology, but soon discovered its compromises: device-dependence and a distance between the viewer and the experience. Instead, the physical objects show the fragile and fragmentary nature of the work through cheap cardboard facades and flimsy hanging structures distributed throughout the venue.

NextNewGames is at the San Jose ICA until September 16th, 2018

Farewell, Dinacon

I just spent 20 days on a sparsely-inhabited island in Thailand with about 80 artists, scientists and other imaginative people. Everyone worked on their own projects ranging from jungle-foraged dinners to plant-piloted drones to creating batteries from microbial energy. We had no AC for much of the day, got bitten by weaver ants, were surrounded by jungle cats and ate off each other’s plates. And, I absolutely loved the experience.

Microbial Battery Workshop

The gathering was Dinacon, the first Digital Naturalism Conference, co-organized by Tasneem Khan and Andrew Quitmeyer. I was a “node leader”, which meant that I spent a bit of time reviewing applications, organizing workshops and staying at the event longer than most.

Dinacon registration area

The site was Koh Lon, a small island just off the coast of Phuket. We stayed at a “resort”, which was actually fairly minimal: small cabins, a common house and options for tent camping. From the main work area, you walk a few minutes in one direction and you’re on the beach. In the other direction is jungle. There were no cars on the island, a handful of scooters, two hundred or so local residents and not a single dog. The soundscape felt entirely tropical, with cicadas, birds and frogs filling the airwaves with their chatter. Our dinner was boated in each day, and at the small restaurant we could get the three essentials: wifi (when the power was on), breakfast + lunch, and beer.

Selfie with Koh Lon in the background

The participants came from all over the world and arrived and left at random times, such that there was a constant inflow of new friends and outflow of sad goodbyes. Each day, we had about 40 people on the island. I could nerd out on my project, kayak in the water, take a break on the ship we had access to (the Diva Andaman), find myself sitting in a chair sharing ideas, play with hermit crabs or get away from everyone and walk in the jungle. Helping one another was something that effortlessly emerged in our temporary community.

Saying hi to the Diva Andaman

The questions I asked myself upon arrival: what happens when you assemble a group of project-creating strangers in a natural environment, where you can take a break by putting on a pair of swim trunks and walking into the ocean? What does building things on the island, with its outdoor space and natural light, do to your creative practice? How can I prototype an artwork that collaborates with this specific place?

I quickly became a lot less efficient and much more connected to people and place. I ended up creating better work and my body was utterly relaxed. Any shoulder pain I might have in an office space dissipated quickly. There were no Google calendar invites, no afternoon soy lattes and certainly no eating at my desk.

I found myself in daily arrhythmic patterns of production, often sitting on my neighbor’s porch with headphones on and composing audio synth code, then stopping suddenly and reveling in nature. I would get interrupted to see a tree snake or find myself lost in conversation about someone’s project. In the evenings, we usually had self-organized small workshops or informal talks. I drank beer sometimes but also often went to bed early, worn out from the humidity and brain swell each day.

Arduino coding by sunset

I did make a thing! This experiment, a potential new artwork, has a working title of DinaSynth Quartet. It is a live audio-synth performance between a plant, the soil, the air and the water: an electronics installation designed to exist only outside. I connected each of these four “players” to sensors: the plant with electrodes, the ground with a soil-moisture sensor, the water with an EC sensor and the air with a humidity sensor.

Each one used a variation on my Sonaqua boards — a kit which I am actively using for workshops — to make a dynamic audio synth track, modulating bleeps and clicks according to its sensor readings and creating a concert performance of sorts.
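To give a flavor of how one “player” works, here is a minimal Arduino-style sketch using the Mozzi synthesis library (the 1.x API with MozziGuts.h). The pin choice, frequency range and mapping are placeholder assumptions rather than the actual Sonaqua/DinaSynth code.

```cpp
#include <MozziGuts.h>
#include <Oscil.h>
#include <tables/sin2048_int8.h>

#define CONTROL_RATE 64            // control updates per second

// One oscillator reading from a sine wavetable.
Oscil<SIN2048_NUM_CELLS, AUDIO_RATE> osc(SIN2048_DATA);

const int SENSOR_PIN = A0;         // plant electrodes, soil, EC or humidity sensor

void setup() {
  startMozzi(CONTROL_RATE);
  osc.setFreq(220);                // starting pitch
}

void updateControl() {
  // Map the raw 10-bit sensor reading onto a frequency range so the
  // "player" bleeps higher or lower as its environment changes.
  int reading = mozziAnalogRead(SENSOR_PIN);   // 0..1023
  int freq = map(reading, 0, 1023, 80, 880);   // Hz, arbitrary range
  osc.setFreq(freq);
}

int updateAudio() {
  return osc.next();               // next audio sample from the wavetable
}

void loop() {
  audioHook();                     // keeps Mozzi's audio buffer filled
}
```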

I’m not sure exactly where the work will go next, but I’m happy with the results. It was my first audio synth project and I’m far from being an expert, calling my approach “beginner’s mind”. However, most of the participants liked the idea and the specific composition that the jungle played.

I already miss everyone there: Jen, Tina, David, Rana, Pom, Sebastian and so, so many other delightful friends. And this is one thing I love about the life I’ve created for myself as a new media artist: after events like this, I now have friends who are doing inspiring work all over the world.

Jungle-foraged dinner party

 

Kira’s birthday party
Putting a heart rate sensor on one of the local cats
Sonaqua Workshop in the common space
Local lotus flower

 

Millipedes were everywhere

 

Soldering work in the main space
Dani doing a lizard dissection on the beach at sunset

 

Andrew holding a snake

 

Little Niko, my favorite of the Dinacon Cats

Dinacon: 2 more environmental synths

Dinacon — the Digital Naturalism Conference on the island of Koh Lon in Thailand — has been amazing. It’s been an opportunity to meet and collaborate with other artists, scientists, hackers, writers and more. The caliber of the participants has been extraordinary.

My art experiments have been around creating audio synth compositions from the environment, using low-cost sensors and custom electronics to make site-specific results.

In the last two days, I’ve made two composition-circuits. This one (below) uses a soil sensor and tracks moisture in the sand.

And this one, which uses electrodes on plant leaves to simulate what the plants might be “saying”.

The GitHub repo for all my experiments is here.

Dinacon: First Audio Synth Recording

At Dinacon, I’m conducting many experiments with electronics, using audio synth and environmental sensors to make site-specific compositions.

I’m extending my Sonaqua custom boards to use the Mozzi audio synthesis libraries. Yesterday I put together my first mini-composition.

These will eventually lead to more dynamic 4-channel compositions and could also extend into some live performances by plants and the environment.

This is the first of several sensors that I’m deploying in the environment — a humidity sensor produced by SparkFun.

With some post-processing in Adobe Audition, I smoothed out an annoying low-pitched whine. I still have loads to learn about the transition from algorithmically-generated sound to recording and getting the glitches out — I’m certainly no audio engineer.

But, I’m pleased with what my little board can do and am excited about more environmental sensors on this amazing little island of Koh Lon.

Oh and here is the GitHub Repo for Sonaqua_Dinacon.

Dinacon: A walking tour of Koh Lon Island

As I often do, when I get to a new place, I get lost. I follow the advice of Rebecca Solnit in A Field Guide to Getting Lost and just wander. Before establishing patterns, your perceptions are the most open and so the day after arriving at Dinacon, I wandered around the island and just looked at things.

Various boats at low tide.

Lots of garbage, unfortunately. I saw this as an opportunity. Perhaps to do some cleanup or more likely to use as scavenged materials for some sculptural-sound installations. This would harken back to my work several years ago as an artist-in-residence at Recology.

Patterns in architecture. Patterns in nature.

An active school.

Small trails everywhere. There are no cars here and so one thing I noticed was the soundscape is different. Sometimes you’ll hear the sounds of a motorcycle or scooter, but even then, only occasionally.

Some sort of nest on a tree.

Intersection markers with plastic bags and red paint.

This island is quite large and much of it is impassable.

Holes in the sand into which crabs scurry.

So many coconuts.

Various signs, hand painted and more.

Abandoned architecture.

 

New paths freshly cut by locals.

And as I was warned, if I venture out at low tide, I might be returning at high tide. Fortunately the water is warm and I was wearing shorts, so I could wade back home.

Some thoughts about the work I’m doing here and ways I can engage with the space:

— Nature: there are plenty of plants and a fair number of critters such as ants. How can I collaborate with various critters and foliage? Some of the things that are easily scavenged are bamboo, coconuts, dead coral and shells.

— Trash: what could be scavenged or collected to make temporary sculptures? Would this expand my practice here or should I stick with my original plan of electronics that make sounds? Perhaps I could put speakers inside of things that amplify the sound, like discarded gas cans.

— Architecture: there are some beautiful abandoned buildings and structures that no one seems to care about. I could probably do a performance or something in these spaces.

 

And finally, jungle cats!

Sonaqua at Currents 2018

I jokingly referred to my Sonaqua artwork as “the most annoying piece at the festival”. The exhibition was Currents New Media 2018, which was an incredible event.

It was a hit with the public and invited multi-user interaction. Kids went crazy for it. Adults seemed to enjoy the square-waves of audio glitch all night.

So yes, perhaps a tad abrasive, but it was also widely popular.

A number of people were intrigued by the water samples and the electronics, which looked like a tangled mess of wires. It was actually a solid wiring job and nothing broke!

After working at the Exploratorium for a couple of years, I adjusted my approach to public engagement so that anyone can get something from this artwork.

How does it work?

The electrodes take a reading of the electrical current flow in various water samples that I collected throughout New Mexico. If more current flows through the water, then this means there are more minerals and salts, which is usually an indicator of less clean water.

The technical measurement is electrical conductivity, which correlates to total dissolved solids, which is one measure of water quality that scientists frequently use.

The installation plays lower tones for water that is more conductive (less pure) and higher tones for water that has fewer pollutants in it.
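In outline, the mapping looks something like the sketch below. This is an illustration of the idea rather than the installation’s actual firmware: the pin numbers, probe circuit and frequency range are assumptions.

```cpp
// Conductivity-to-pitch mapping (illustrative, not the actual Sonaqua firmware).
// An electrode pair sits in a water sample; more dissolved solids means more
// current, a higher analog reading and a lower tone.

const int PROBE_POWER_PIN = 8;    // energizes one electrode
const int PROBE_READ_PIN  = A0;   // voltage across a reference resistor
const int SPEAKER_PIN     = 9;

void setup() {
  pinMode(PROBE_POWER_PIN, OUTPUT);
}

void loop() {
  // Pulse the probe briefly to limit electrolysis, then sample it.
  digitalWrite(PROBE_POWER_PIN, HIGH);
  delay(10);
  int conductivity = analogRead(PROBE_READ_PIN);     // 0 (pure) .. 1023 (salty)
  digitalWrite(PROBE_POWER_PIN, LOW);

  // Lower tones for more conductive (less pure) water, higher for cleaner water.
  int freq = map(conductivity, 0, 1023, 1600, 120);  // Hz, inverted range
  tone(SPEAKER_PIN, freq, 180);                      // short square-wave burst
  delay(200);
}
```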

The results are unpredictable and fun, with 12 different water quality samples.

The light table is custom-built with etchings of New Mexico rivers and waterways, indicating where the original water sample was taken.


Gun Control (revisited)

My writing (below) was originally printed as part of the Disobedient Electronics project by Garnet Hertz. It is a limited edition publishing project that highlights confrontational work from industrial designers, electronic artists, hackers and makers who disobey conventions.

 

Gun Control (revisited)

In 2004, I created Gun Control — a set of four electromechanical sculptures, which used stepper motors, servos and cheap cameras that were controlled by AVR code. The distinguishing feature of each unit is a police-issue semi-automatic replica handgun. You can purchase these authentic-looking firearms for less than $100.

The make-believe weapons arrived in the mail a week after I ordered them. That night, I closed the blinds, drank too much whisky and danced around my apartment in my underwear waving my new guns around. The next morning, I packed them in a duffel bag and took the “L” in Chicago to my studio. During the 45-minute commute I felt like a criminal.

Each gun is connected to a stepper motor via a direct-drive shaft and flexible couplings. I used a lathe and a milling machine to make custom fittings. I hid the unsightly electronics in a custom-sewn leather pouch, resembling some sort of body bag.

As people enter the Gun Control installation space, the cameras track their movement, and the guns follow their motion. Well, at least this is what I had hoped it would do. However, I had committed to using the first gen CMUCam and its blob-tracking software was spotty at best. I was under a deadline. It was too late to spec out new cameras. Plus, these were the right size for the artwork, which was using decentralized embedded hardware. I shifted my focus to building a chaotic system.

I re-coded the installation so the guns would point at different targets. They would occasionally twirl about playfully and re-home themselves. I programmed the stepper motors to make the armatures shake and rattle when they got confusing target information. The software design embraced unpredictability, which made the whole artwork feel uncertain, embodying the primal emotion of fear.
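As a readable approximation of that chaotic behavior (and not the project’s actual code), here is an Arduino-style sketch: it picks random targets, occasionally twirls and re-homes, and jitters when the tracking data looks unreliable. The helper functions and thresholds are hypothetical stand-ins.

```cpp
// Behavioral sketch of the re-coded Gun Control armature (not the 2004 AVR code).

long currentTarget = 0;           // stepper position, in steps
unsigned long nextTwirlAt = 0;    // when to playfully spin and re-home

// Hypothetical stand-ins for the real hardware interfaces.
int  readBlobConfidence() { return analogRead(A0) / 4; }   // camera confidence, 0..255
void stepTo(long pos)     { /* drive the stepper toward pos */ }
void twirl()              { /* spin the armature playfully  */ }
void reHome()             { /* return to the home position  */ }

void setup() {
  randomSeed(analogRead(A1));     // unconnected pin as a noise source
}

void loop() {
  unsigned long now = millis();
  int confidence = readBlobConfidence();

  if (confidence < 20) {
    // Confusing target information: shake and rattle around the current position.
    stepTo(currentTarget + random(-15, 16));
  } else if (now > nextTwirlAt) {
    // Occasionally twirl about and re-home.
    twirl();
    reHome();
    nextTwirlAt = now + random(20000, 60000);   // 20 to 60 seconds later
  } else {
    // Otherwise drift toward a target, sometimes picking a fresh one.
    if (random(100) < 5) currentTarget = random(0, 400);   // 400 steps per revolution
    stepTo(currentTarget);
  }
  delay(50);
}
```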

Gun Control was my heavy-handed response to the post-9/11 landscape and the onset of the Iraq War. I exhibited it twice, then packed it up. It lacked subtlety and tension. At the time, there was not enough room for the viewer.

Just last month, I pulled the artwork out of deep storage. I brought the pieces to my studio and plugged in one of the units. It functioned perfectly. Upon revisiting this piece after 12 years, my combination of guns and surveillance seems eerily prescient.

Mass shootings have drastically increased in the last several years. Surveillance is everywhere, both with physical cameras and the invisible data-tracking from internet servers. Documentation of police shootings of unarmed African Americans is, sadly, commonplace. I no longer recoil from the explicit violence of this old artwork.

I coded this using AVR microcontrollers, just before the Arduino was launched. It was tedious work just to get the various components working. I can no longer understand the lines of C code that I wrote many years ago. The younger me was technically smarter than the current me. My older self can put this historical piece into perspective. I plan to re-exhibit it in the coming years.

GitHub repo: https://github.com/scottkildall/GunControl

Collecting Sacred Fluids

I recently debuted a new art installation called Cybernetic Spirits at the L.A.S.T. Festival. This is an interactive electronic artwork, where participants generate sonic arrangements based on various sacred fluids. These include both historical liquids-of-worship such as holy water, blood and breast milk and more contemporary ones such as gasoline and coconut water.

My proposal got accepted. Next, I had to actually collect these fluids.

My original list included: blood, holy water, coffee, gasoline, adrenaline, breast milk, corn syrup, wine, coca-cola, coconut water, vaccine (measles), sweat and kombucha

Some of these were easily procured at the local convenience store and a trip to the local gas pump. No problem.

But what about the others? I found holy water on Amazon, which didn’t surprise me, but then again this wasn’t anything I had ever thought about before.

I knew the medical ones would be the hardest: adrenaline and a measles vaccine. After hours scouring the internet and emailing with a doctor friend of mine, I realized I had to abandon these two. They were either prohibitively expensive or would require deceptive techniques that I wasn’t willing to try.

Art is a bag of failures and I expected not to be entirely successful. Corn syrup surprised me, however. After my online shipment arrived, I discovered it was sticky and too thick. It is syrup after all. Right. My electrical probes got gunky and, more to the point, it didn’t conduct any electrical current. No current = no sound.

Meanwhile, I put out feelers for the human bodily fluids: blood, sweat and breast milk. Although it was easy to find animal blood, what I really wanted was human blood (mine). I connected with a friend of a friend, who is a licensed nurse and supporter of the arts. After many emails, we arranged an in-home blood draw. I thought I’d be squeamish about watching my blood go into several vials (I needed 50ml for the installation), but instead was fascinated by the process. We used anti-coagulant to make it less clotty, but it still separated into a viscous section at the bottom.

Since I am unable to produce breast milk, I cautiously inquired with some good friends who are recent moms and found someone willing to help. So grateful! She supplied me with one baby-serving size of breast milk just a couple of days before the exhibition, so that it would preserve better. At this point, along with the human blood in the fridge, I was thankful that I live alone and didn’t have to explain what was going on to skeptical housemates.

I saved the sweat for the last minute, thinking that there was some easy way I could get sweaty in an exercise class and extract some. Once again a friend helped me, or at least tried, by going to an indoor cycling class and sweating into a cotton t-shirt. However, wringing it out produced maybe a drop or two of sweat, nowhere close to the required 50ml for the vials.

I was sweating over the sweat and really wanted it. I made more inquiries. One colleague suggested tears. Of course: blood, sweat and tears, though admittedly I felt like I was treading into Kiki Smith territory at this point.

So I did a calculation on the amount of tears you would need to collect 50ml, and it would mean crying a river every day for about 8 months. Not enough time and not enough sadness.

Finally, just before shooting the documentation for the installation, the sweat came through. A friend’s father works for a company that produces artificial sweat and gave me 5 gallons of this mixture. It was a heavy thing to carry on BART, but I made it home without any spillage.

Artificial sweat? Seems gross and weird. The truth is a lot more sensible. A lot of companies need to test human sweat effects on products from wearable devices to steering wheels and it’s more efficient to make synthetic sweat than work with actual humans. Economics carves odd channels.

My artwork often takes me on odd paths of inquiry and this was no exception. Now, I just have to figure out what to do with all the sweat I have stored in my fridge. 


Reality Remix: Salon 1

I just returned from our first Reality Remix workshop in Dundee, Scotland. The prompt we gave ourselves afterwards was to write up the things that came up for us, note returning thoughts and think about what is next. I write now on the train journey back to London.

The background is that Reality Remix is part of an Arts & Humanities Research Council (AHRC) grant around Immersive Experiences and I am one of the “collaborators” (artists) — others include Ruth Gibson and Bruno Martelli (Ruth is the Principal Investigator), Joe DeLappe, Darshana Jayemanne, Alexa Pollman and Dustin Freeman. We are also working with several “partners” in academia, industry and government, who act as advisors and contribute in various ways. These are Nicolas Lambert (Ravensbourne University), Lauren Wright (Siobhan Davies Dance), Alex Woolner (Ads Reality) and Paul Callaghan (British Council).

The short project description is:

Reality Remix will explore how we move in and around the new spaces that emergent technologies afford. Through the development and examination of a group of prototypes, initiated from notions of Memory, Place and Performance and with a team of artists, computer programmers, fashion and game designers, we aim to uncover the mysteries of these new encounters, focussing on human behaviour, modes of moving, and kinaesthetic response. Reality Remix brings a unique dance perspective in the form of somatic enquiry that questions concepts of embodiment, sensory awareness, performance strategy, choreographic patterning and the value of touch in virtual worlds.

Within this framework, each of us will be developing our own VR/AR projects and possible collaborations might arise in the process.

Some of the reasons that I was invited to be part of this project stem from core inquiries about what we call “Gibsonian” cyberspace versus a simulated cyberspace. I find it odd that we so often depict virtual reality — and for the purposes of simplicity, I will treat VR as a subset of cyberspace — as a simulation, a weak reproduction of some sort of physical reality. VR has immense possibilities that most people don’t tap into. With the dominance of first-person shooter games, reproductions of museums, and non-spaces such as TiltBrush, I have often wondered how we can conceptualize VR landscapes in new ways. And so, this was my starting point for our first session.

Improving the functionality of the headset

However, technical skills are not a prerequisite for producing compelling work and, in fact, this is part of my artistic practice: I quickly adapt. For example, in 2014 I dove into 3D printing without knowledge of any real 3D modeling package and, in the space of a few months, produced some conceptually-driven 3D print work that drew strong responses. I will easily pick up Unity, Unreal, 3ds Max or whatever else is needed.

It is with this lack of technical knowledge that I can approach concepts with a beginner’s mind, a core concept of Buddhist thinking in which you approach a situation without preconceptions and cultivate a disposition of openness. Without deep investment in the structures of discourse, you can ask questions about the effectiveness of the technology, such as the nature of immersive spaces, the bodiless hands of VR and hyperreality.

For this initial meeting, we each have our own project ideas that we will be researching and producing in various forms. Some of my own inquiries stem from these Gibsonian landscapes. On the train I re-read Neuromancer and was still inspired by this seminal quote:

Cyberspace. A consensual hallucination experienced daily by billions of legitimate operators, in every nation, by children being taught mathematical concepts… A graphic representation of data abstracted from banks of every computer in the human system. Unthinkable complexity. Lines of light ranged in the nonspace of the mind, clusters and constellations of data. Like city lights, receding…

I arrived with this general framework and began to ponder how to incorporate threads of previous work such as physical data-visualizations and network performance. What about the apparatus of the headset itself? How can we play with the fact that we are partially in real space and partially out of it? And, as one of our partners (Alex) pointed out, rather than being immersed in VR, we are absorbed by it. Like a fish in water, we live in full reality immersion. And when we talked about this, I chuckled to myself, remembering this David Foster Wallace joke:

There are these two young fish swimming along and they happen to meet an older fish swimming the other way, who nods at them and says “Morning, boys. How’s the water?” And the two young fish swim on for a bit, and then eventually one of them looks over at the other and goes “What the hell is water?”

The first day was a full-out setup day: installing our Windows work machines, getting the Oculus Rifts working, Google Pixel 2 phones, Unreal, Unity and anything else. We all centralized on some specific platforms so we could easily share work with each other and invite possible collaborations. Fighting with a slow wi-fi connection was the biggest challenge here, but within a few hours I got my Alienware work machine making crude VR on the Oculus.

Goodies! This is the Oculus Rift hardware that we will all be using for project production

I yearned to explore Dundee despite the cold weather, but all of our time was spent in this workspace, supported by the NEoN Digital Arts Festival (where I showed my Strewn Fields pieces last November). In the evenings, we saw art lectures by our own Joe DeLappe and by my friend and former colleague Sarah Brin, and we ate and drank in the local speakeasy bar, chitchatting about ideas.

Lecture by Joe DeLappe

Within the first couple of days, what was previously a mystery to me became clearer. While some of the collaborators had well-developed projects (Dustin and Bruno/Ruth), others such as myself, Alexa and Joe were approaching it from a more conceptual angle with less technical aptitude.

I kept in mind that our goals for this project are to create compelling proof-of-concepts rather than finalized work, which makes it more of a forward-looking, research-oriented project than something that will compete in the already littered landscape of the good, bad and ugly of the Oculus Store.

We started each day with movement exercises led by Ruth, reminding us that we all live in “meatspace”. Our minds and bodies are not separate as we hunch over the machines and then stand with a headset on and wave our arms around. We began experimenting with the technology. I vacillated between diving into Unreal and Unity, each with its own advantages. While Unity has more generalized support and is easier to learn, Unreal undoubtedly has a better graphics engine and is Bruno’s weapon of choice. So, solving the early technical challenges began to help coalesce my ideas.

Ruth leading us on some Skinner Releasing exercises

We soon entered into a vortex of artistic energy — some of us coming from performance, wearables and immersive theater with various conceptual practices, and the partners from other organizations bringing a less artistic approach but loads of experience in the gaming world, the university community and impact studies. I knew this on paper, but in reality, the various talents of our Reality Remix dream team soon became apparent.

Twice a day, each of the collaborators led workshops related to their practices. Alexa treated us to her performance-based clothing, which registered AR markers, and asked us to do an exercise where we tried to perceive something through someone else. Bruno and I made a drawing where he wore the VR headset and I sketched on his back, which he replicated on paper. The process was fun and, predictably, the drawing was unimpressive.

 

Ruth is wearing fabric created by Alexa while she shows us some of her responsive AR augments

 

Our collaborative drawing in response to Alexa’s prompt

Dustin led us through a prototype for a sort of semi-immersive experience where actors jump into various avatars. With his deep background in improv, theater and role-playing, he got me thinking about how to involve the people who are not in a headset, who make up the majority of people in any VR experience, as essential players.

Darshana in the VR headset while Dustin demonstrates some ways in which “non-players” can interact in VR space

I am now wondering about intimacy and vulnerability in VR. There is a certain amount of trust in this space. You are blind and often suffused in another audio dimension. Could you, then, guide people through a VR experience like a child in a baby carriage? What can be done with multiple actors? So many questions. So many possibilities.

One thing that I was reminded of was the effectiveness of simple paper-prototyping and physical movement. Make things free from technology; keep it accessible. Stick to the concepts.

My own orientation began shifting more into virtual landscapes and thinking about data as the generator. I asked people to brainstorm various datasets and come up with some abstract representations based on that earlier quote from Neuromancer. I do want to get away from the sci-fi notion of cyberspace, since it is limiting and enmeshed in VR 1.0, but will still claim it as the starting point for an antidote to the often mundane reality-simulation space.

This was useful for my own brainstorming. Alexa brought in an interesting point of view because she was thinking about time rather than landscape, opening conversations around anticipation, reality and memory, which reminded me of the work of Bergson. Her own explorations revolved around personal data, which is what captured her attention.

Meanwhile, Ruth made marks on the wall, translating gesture to 2D. Bruno worked with abstract visual forms. Though I am a poor draftsman, the question arose: how can we incorporate movement into a system? My own perception is highly visual and oriented towards abstract patterns. The success of my exercise lay in the fact that some useful (to me) renderings were produced, while I also quickly learned that a line-based VR landscape doesn’t resonate with everyone.

Drawings by Bruno Martelli in response to my workshop prompt

As a manifesto bullet item: the scriptedness of VR is something we would all like to break. With all the possibilities of VR, why do the dominant forms assume a feeling of immersion? Why don’t we consider what can be done before rushing to produce so much content?

Where is the element of surprise? VR is a solitary experience. I’m reminded of Joe’s work with the military and what one can do with a gaming space. Could we intervene or somehow interfere with VR space?

Darshana, the theorist amongst the group, gave a beautiful summary of the session. My head was spinning with ideas at this point, so I can’t recall everything he spoke about, but ideas certainly surfaced around how to be both engaging and critical in this space. He envisioned a nexus around abstract spatialization, performance, role play and the body that tied our various projects together.

Bruno, Ruth and Alexa wearing some of Bruno + Ruth’s Dazzle Masks

There is much more to write and think about. I made progress on the technical side of things, such as getting an OSC pathway into Unreal working, so that I can begin playing with electronic interfaces into a VR world.

More importantly, I feel like I’ve found my people with this Reality Remix team: one where we understand the history of new media and the subversion of forms, and aren’t dazzled by simplicity. We got along very well, with mutual respect and laughter. I’m excited about what comes next.

Reality Remix group photo