The Joshua Tree and Infrared Light: A Sensor-driven Sound Performance

I create artwork that captures live data from natural phenomena invisible to humans and translates it into soundscapes, a practice many call data sonification. I’ve worked with water quality, air quality, electrical activity in mycelium and much more. The artworks aren’t just soundtracks. They take the form of sculptural installations or even performances that others can experience viscerally.

This spring, I was invited to be an artist-in-residence at Joshua Tree National Park, which is my favorite of all the National Parks. I love this landscape so much, and feel so alive and present in the high desert.

I’ve created many site-specific sound installations in different parts of the world, including Abu Dhabi, Thailand, Slovenia and various ecosystems in the United States. Here, I knew I had to focus on the Joshua Tree itself. It’s iconic, and it’s a keystone species that hosts a variety of other organisms. Without the Joshua Tree, this desert ecosystem would be drastically different.

The challenge was that, because this was a National Park, I couldn’t damage the flora in any way. Putting nails into the plants (the Joshua Tree is a yucca), or even attaching any sort of sensor to one, wasn’t going to happen.

In the weeks before the residency, after some research and brainstorming, I designed experiments in which I would hold spectral sensors that tracked visible light near the Joshua Tree. These worked surprisingly well, and I could integrate them into my wireless custom hardware + software system. Then I found similar sensors that captured data from wavelengths of near-infrared light: light just outside the visible spectrum (visible to humans, at least).

I did research on how the Joshua Tree might react. Maybe there would be some differences in the data, maybe not. I based my experiments on sources such as this article by NASA and this one by USGS, which suggest that a high percentage of IR light is reflected (not emitted) by the leaves of healthy plants, and that the chlorophyll itself is responsible.

On my first day at the park, I did some data logging from a nearby Joshua Tree.

The wavelengths of light captured here are 730, 760, 810 and 860nm.

The first one is of the “barky part” — the dead brown leaves of the plant.

For the scientists in the audience, the labels on the graph correspond to my sensor hardware transmission code:

988_T = 730nm

988_U = 760nm

988_V = 810nm

988_W = 860nm

The second one is of the live green leaves.

When I saw this, I was floored. The low readings were from the base of the tree and the high readings were from the leafy parts. This was exactly what I had thought it could do, but the difference was stark.

What’s amazing is that we can think of these sensor readings roughly as an indicator of plant health. I would expect this to work on other plants and trees, but I haven’t yet tested the project on them. That’s for another day.
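For the coders in the audience, here’s a rough Python sketch of how you could turn these channel readings into a simple bark-versus-leaf contrast. This is not my actual software, and the sample values are made up for illustration; only the channel codes and wavelengths come from my sensor hardware.

```python
# Hypothetical sketch: comparing near-infrared readings from the dead
# "barky" base of a Joshua Tree against its live green leaves.
# Channel codes (988_T..988_W) are from my sensor transmission labels;
# the sample values below are invented for illustration.

WAVELENGTHS_NM = {"988_T": 730, "988_U": 760, "988_V": 810, "988_W": 860}

def mean_reading(samples):
    """Average a list of raw sensor samples for one channel."""
    return sum(samples) / len(samples)

def reflectance_contrast(leaf, bark):
    """Normalized contrast between leaf and bark readings.
    Higher values suggest healthier, more NIR-reflective green tissue."""
    return (leaf - bark) / (leaf + bark)

bark_samples = {"988_T": [12, 14, 13], "988_V": [15, 16, 14]}
leaf_samples = {"988_T": [88, 92, 90], "988_V": [120, 118, 122]}

for channel in ["988_T", "988_V"]:
    leaf = mean_reading(leaf_samples[channel])
    bark = mean_reading(bark_samples[channel])
    print(WAVELENGTHS_NM[channel], round(reflectance_contrast(leaf, bark), 2))
```

This kind of normalized ratio is the same basic move that remote-sensing vegetation indices make: the absolute brightness matters less than the contrast between healthy and dead tissue.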

This was one of my early experiments. Forgive the sound quality!

Further research led me to this National Library of Medicine article, which indicates that the high near-infrared reflectance is due to “high scattering of light by the leaf mesophyll tissues”, not the chlorophyll (as I stated in the video). I find the scientific source material fascinating and want my sensor-driven soundscapes to be based in true signal, not just noise.

Remember that color isn’t real. Color is data that we receive and construct in our brains. What is real are photons: small particles of electromagnetic energy, each with a unique wavelength. In the near-infrared spectrum is a stream of data that we cannot perceive, but it is out there, and other organisms, usually non-mammalian ones, can perceive it. That’s how mosquitos find tasty meals. Frogs and salmon use the IR spectrum to navigate through murky waters. Vampire bats use infrared vision to locate prey.

I spent my time during this short residency, which was less than a month, building a stable electronics system and mostly working on a soundscape that I felt would express the Joshua Trees, for a performance in the park on May 4th, 2024. I decided to “play” a few different Joshua Trees like a theremin, using a “sloth glove” that would host the sensor on the palm. More on the sloth below…

(Photo by NPS/ Paul Martinez)

For the sound design, I built four different “instruments” that I could activate with a handheld controller with latching buttons. I could mute/unmute different tracks, creating a dynamic performance where I could move around without using the computer. Each instrument corresponded to one of the four wavelengths of light.
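As a rough illustration, the latching-button logic looks something like this in Python. The track names match the recordings below; the class structure and function names are my own invention, not the actual performance software.

```python
# Hypothetical sketch of the performance logic: four latching buttons,
# each muting/unmuting one "instrument" track tied to a wavelength.

TRACKS = {730: "high_notes", 760: "like_a_theremin",
          810: "water_guitar", 860: "electric_piano"}

class Performance:
    def __init__(self):
        # all tracks start muted
        self.active = {wl: False for wl in TRACKS}

    def toggle(self, wavelength_nm):
        """A latching button flips one track's mute state."""
        self.active[wavelength_nm] = not self.active[wavelength_nm]

    def playing(self):
        """Return the unmuted track names, in wavelength order."""
        return [TRACKS[wl] for wl, on in sorted(self.active.items()) if on]

p = Performance()
p.toggle(730)
p.toggle(860)
print(p.playing())  # → ['high_notes', 'electric_piano']
```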

Here are some of the live data sound recordings from each of the tracks:

Joshua Tree National Park, 2024, High Notes

Joshua Tree National Park, 2024, Like a Theremin

Joshua Tree National Park, 2024, Water Guitar

Joshua Tree National Park, 2024, Electric Piano

About the sloth

The Shasta ground sloth, now extinct, used to eat the seeds of the Joshua Tree and poop them out, dispersing the plant over a far wider range than it can now travel. It was a symbiotic relationship.

As climate change alters the ecology of the desert environment where the Joshua Tree lives, the plant is now under environmental distress. Since it is a keystone species that hosts many organisms and is essential to its desert ecosystem, its health is all the more important to this environment.

I see this artwork as a performance where I commune with the tree, reading its health and generating compositions from it.


And here is the final documentation video for the project.



Documentation Debt

At Mars College, where I’ve been studying and living for the last couple of months, I’ve been thinking about the term “Documentation Debt” and how it applies to artists.

In the business world, Documentation Debt is when you skip documenting things to save time, money, or effort. Over time, this neglect accumulates and becomes harder to fix, similar to monetary debt with compounded interest. The more time passes, the more the lack of documentation becomes a problem.

For artists, you are in Documentation Debt when you have shot the documentation of your work but haven’t actually published it on your website, or otherwise. It’s more likely with video than photos, since video needs editing, sound synchronizing, and titling. The files sit there on your hard drive, collecting digital dust.

I often say to my students: the documentation is the artwork. An overstatement, yes, but most people see the documentation rather than the physical work itself. Website views are cheap; museum shows are not.

Future applications: grants, residencies, shows all hinge on having solid documentation. We have to document our projects all the time and make them look aesthetically compelling as well as tell the story of what they are. It baffles me that documentation is often not considered a proper line item in a project proposal budget. Artists have to pay expert photographers (often other artists) to set up a post-show studio shoot.

I have had to shoot my own documentation, work out trade deals with other videographers, do post-shoot color correction, sound mixing and other attempts to make something look semi-professional and save money. And, I’m not an expert. Documentation is just expensive and a pain to manage.

A case of Documentation Debt happened recently with an artwork I made in Slovenia in 2022 called River Glitch, as part of PIF Camp. I conceptualized and built this installation in a week, setting up eight of my dispersed sensor-sound players (Datapods) on the Soča River. We had a videographer onsite to help out, and while they shot footage, I also rushed around and shot video on my phone, just in case. Before my audience of 40 other artists came to see it, a surprise heavy rainstorm pummeled the river installation.

I panicked, packed up, and trudged for a kilometer in the rain, drenched, while the other event attendees were huddled under a rooftop, silk-screening t-shirts. I was crushed, as I had planned an engaging installation event for my colleagues to see that afternoon. No one saw it but the journalist and the videographer.

The video footage that the videographer shot was, sadly, unusable. Some sort of filter was on that made it look like Werner Herzog’s Cave of Forgotten Dreams. My own phone footage was at 1920×1080 instead of 4K, and though I got enough coverage, there was no planned shot list and the footage felt haphazard. I felt defeated and unmotivated to put together a video for the project.

The clips sat untouched on my drive for many months, through 2022 and into 2023. I knew that the audio would be difficult to re-edit and would require studio recordings, and that the footage would have to be cleaned up.

In October of 2023, I had just finished a successful project called Poet Trees, which I had also shot loads of footage for. Now I was in some serious Documentation Debt. Two sound installations were undocumented, save for some images on my website.

I gave myself the goal of finishing it up over the holiday season. I buckled down and got videos of both of these works edited, titled and complete. It was onerous and took a long time, and the dark nights of December were perfect for it. There were some problems with my documentation — some shots needed stabilizing, for example. But once I got these done, I had cleared my Documentation Debt, for a little while at least. Phew.

In the end, I was pretty happy with River Glitch. It’s a conceptually complex piece, the footage worked out well, and the sound design felt strong.

Documentation Debt is a useful term for planning. Now I ask, before I shoot documentation: what will my timeline be for finishing it? Can I make in-progress edits that are not expertly polished? Can I show these to colleagues as I progress toward final installations and documentation? All of these questions rattle around my head as I plan for the documentation stage of the artwork.

Final video edit is here!



An Evening of Mushroom Delights

Maria Finn is one of my favorite people. This summer she took me on a huckleberry foraging trip to a secret spot that was, well, secret. She told me nature stories, we laughed a lot, went swimming in a lake, and I came home with several cups of huckleberries that I used for infusions.

A few weeks ago, we collaborated on Fungitopia, which was an evening of food, conversation and art that combined her talents as a chef and storyteller along with my work: a sound installation that generated “Mushroom Music” from the electrical activity of mycelium.


We packed people into my home studio at Xenoform Labs and sold out the event. Folks left feeling satiated, informed and excited about the sound installation.

Maria concocted a tasty evening of mushroom delights: a mushroom-centric grazing table, mushroom pizza, infused bourbon and even a dessert.

But what impressed me the most was her talk on how truffles have seduced humans. The culture with the longest recorded use of desert truffles is that of the Bedouins of the Negev desert, who refer to desert truffles as “the thunder fungus”. Science now shows that lightning may pull hydrogen from the air and deposit it into the earth, helping truffles grow. Truffles were revered by Etruscans, considered aphrodisiacs by the Ancient Greeks and Romans, and outlawed as “witches’ stones” during the Middle Ages.

She also talked about how by training a truffle dog, she is learning how scent moves through the forest and how she is developing a language with her dog that is teaching her to be more connected with the animal and plant world. Plus, her dog, Flora Jane, is adorable.

I decided to use the name Fungitopia for my future sound installation, which is currently a work-in-progress. This video shows my first generation of it. I plan to develop a slowly changing soundscape, where new instruments play in and old instruments fade out, so that it shifts without you really noticing.
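A minimal sketch of that idea in Python: each instrument’s gain creeps toward a target so slowly that the change is imperceptible from moment to moment. The function names and timing values are my own illustration, not the installation’s actual code.

```python
# Illustrative sketch of a slowly shifting soundscape: a gain drifts
# toward its target over many minutes, so new voices fade in while
# old ones fade out without the listener noticing.

def step_gain(current, target, minutes_to_full=15, tick_seconds=1.0):
    """Move a gain one tiny step toward its target; called once per tick."""
    step = tick_seconds / (minutes_to_full * 60)
    if current < target:
        return min(current + step, target)
    return max(current - step, target)

gain = 0.0
for _ in range(900):           # 15 minutes of 1-second ticks
    gain = step_gain(gain, 1.0)
print(round(gain, 3))          # → 1.0  (fully faded in)
```

The same scheduler can run many instruments at once, each with its own target, which is what produces the "completely different 15 minutes later" effect.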

This direction is hugely influenced by Brian Eno’s 77 Million Paintings, which I enjoyed at YBCA over a decade ago. As I watched the slowly shifting digital paintings, I could barely notice the work changing, but 15 minutes later, it looked completely different.

We’ve trained ourselves for immediacy and my sensor-based sound work celebrates nature-time: tree time, plant time, fungus time and slower cycles of nature.


Introducing Catsronauts

As an extension of the Exotopia sci-fi storytelling project, I am presenting a new series called Catsronauts: space characters in future narratives. I created them in collaboration with an AI image-generation engine.

These are portraits of six of them.

Science Officer Paw Paws

Catsronauts are certainly playful and cute, juxtaposing “low culture” internet cat memes with the science of space travel.

The portraits reflect human-like characteristics, from the scared eyes of Ensign Magros to the determined and curious face of Science Officer Paw Paws to the grizzled feel that Captain Valery exudes.

Referring to the “Dogs of the Soviet Space Program” exhibition at the Museum of Jurassic Technology, Catsronauts looks at the idea of pets-in-space, speculating on imaginary futures.

Created by an AI engine with detail work by me, this work postulates a new hybrid creativity between artists and AI that simply wasn’t possible just a year ago.

Ensign Magros

Captain Valery

Tactical Officer Manchu


Coding Methodologies for New Media Artists


Last week, I hosted Raunak Singh as a coder-in-residence at Xenoform Labs. I normally provide artist residencies for new media artists throughout the year, but Covid torpedoed the program for 2020 and 2021. I am thrilled to be resuming some sort of programming for 2022 and am now experimenting with different models.

Raunak is a trained software engineer, and we collaborated on this blog post, which I designed for new media artists working with software code on their own projects.

My background: I am an educator who teaches code to Design and CS students, and a new media artist who is proficient with programming and always learning new technologies for new works. I used to run my own software company in my early 20s and am self-taught, with an undergrad degree in Political Philosophy and an M.F.A.

The problem is that new media artists need to know how to use a range of technologies in their projects. They may be strong in a single domain (electronics, Java coding, VR, etc), but don’t usually have the engineering experience that makes learning and applying new technologies efficient. These artists often must pivot to new tech as it comes out, in order to be current with their critical discourse.

What methodologies can we learn from software engineering and apply to an art practice, without becoming a full-blown engineer or getting stuck in the technical weeds? After all, artists want to focus on researching and developing the concepts and creating the art. The tech is the means to an end, just a tool.

But first, an introduction to Raunak, in his own words.

Raunak is a software engineer who comes from a civil engineering background. He enjoys working on inter-disciplinary projects that combine different areas of engineering to do interesting things. This has given him a lot of practice in quickly learning and building with new tools.

Raunak has worked with tools ranging from virtual reality to full-stack applications to machine learning. His most recent work with Xenoform Labs was on NFT Culture Proof, a collaboration with Scott Kildall and Nathaniel Stern: a large-scale participatory text performance on the blockchain. Through this project, Raunak exercised his ability to dive into new technologies, learning how to build smart contracts as well as the frontend and backend components that interact with them.


Time is a finite resource

Most of this article is about practical time-management techniques, which have implications in both the short term (a specific project) and the long term (generally becoming a better builder).

Raunak introduced me to the concept of the Planning Fallacy, one of the many cognitive biases that impact time management. After reading about it, I quickly realized that I’ve often been its victim.

To quote Wikipedia, “The planning fallacy is a phenomenon in which predictions about how much time will be needed to complete a future task display an optimism bias and underestimate the time needed.”

People often (1) think they can do the task faster than others who have done it before them, (2) don’t account for unknown delays or hiccups, and (3) don’t possess the information necessary for realistic estimates.

Experience counters this bias. As Raunak points out, junior engineers overwhelmingly underestimate completion times, because it’s hard to see roadblocks they haven’t yet experienced.

So don’t be discouraged when finishing a project takes even 2 or 3 times longer than you initially thought it would! Even experienced (and especially inexperienced) engineers have this problem. Set expectations: if you know this is the case ahead of time, stretch out your timeline. Then, when it does take 3 times longer to, for example, get your fancy interactive electronics pushing live data to a website seamlessly, there are fewer surprises and you’re not brimming with frustration halfway through the project.
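As a toy illustration in Python (the 3x multiplier is just an assumption; calibrate it against your own track record), padding an estimate is a one-liner that is easy to skip and costly to forget:

```python
# Toy sketch of padding estimates against the planning fallacy.
# The default multiplier is an assumption, not a rule; tune it to
# your own history of how long projects actually take.

def padded_estimate(optimistic_hours, multiplier=3.0):
    """Stretch an optimistic estimate to set realistic expectations."""
    return optimistic_hours * multiplier

# "A couple of weekends" of naive estimate becomes a month of evenings.
print(padded_estimate(20))  # → 60.0
```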

One thing that we spoke about was how to ascertain when to outsource and when to learn a new tool. With NFT Culture Proof, we hired Raunak to be the lead developer and do all the blockchain coding. It was beyond my level of expertise — I’m an expert coder (at least for an artist), but learning this new tech and properly deploying it was beyond what I could do in the 3-4 month timeframe we had.

Nathaniel and I scoped out this problem early and focused on securing a solid developer. Our first person fell through and she recommended Raunak, who worked out spectacularly well.

The problem with outsourcing work is that you don’t learn new tools. And, the more distance that you have from the actual software coding, the harder it is to (1) make serendipitous discoveries — there are often unintended good outcomes that you discover when you write code (2) maintain your code for future projects or changes (3) implement your vision, since you have to both find and trust someone to do it for you.


Spend 5-10 hours solely on learning the new tool

This guideline helps to manage technical debt, a concept that Raunak spoke about: the cost of additional rework caused by choosing an easy (limited) solution now instead of a better approach that would take longer. To summarize: a decision that favors the short term at the cost of the long term.

Spending just 5-10 hours on learning a new tool can result in net gains down the project timeline. An example of this that I’ve seen is learning how to use GitHub, the standard source-code archiving and sharing tool. Just about every coder uses it to manage their projects. I teach it in my Interaction Design class. It’s notoriously confusing.

In about 2-3 hours, newbies can learn how to use this tool, and the more you use it, the easier the workflow gets. GitHub enables you to back up your projects, revert to older versions and have a cloud-based storage solution. I have about 100 source-code repositories, some public, some private, for artwork I’ve made dating back to 2005. I even have an intro to GitHub video here.

Too often, I’ve seen new media artists working with custom JavaScript, Arduino code, or something else and they have multiple versions of the project — old semi-working ones with names like “TalkingShell_final_final_3”. It’s a mess of confusion and creates its own sense of anxiety.

The trick here is, as Raunak points out, that when learning a new technology, you don’t know how long everything will take. This makes it hard to make decisions like “should I spend time writing this code that will make this other code easier to write”?

This is why a guideline of 5-10 hours is helpful. In Raunak’s case, with NFT Culture Proof, he learned the basics of Solidity, the smart-contract language used on the Polygon blockchain, which we chose for the project. He spent time with a Udemy course on the topic and then developed the project timeline.

That 5-10 hours can also help you out with the decision to outsource or not. Perhaps it’s going to be something you can bite off with some limited technical knowledge, or maybe you’ll figure out that WTF, this is really hard and beyond what I can do, as I did with any sort of smart contract coding. That time will not be wasted.


What technologies should you use?

Ask a trusted colleague. What you find online through Googling won’t be nearly as valuable as what someone in your own field, another artist, would recommend. For example, if you are doing Arduino development on a complex project, I can recommend which development environment to set up based on your specific needs.

The Arduino IDE, for example, is fine for setting up something quick and easy, but for a more complex embedded project, I’d recommend Visual Studio Code with PlatformIO. This provides syntax coloring, context-specific lookups of functions, debugging and much more. But it takes a few hours to set up and to sort through the project-management details.

How long will the technology be around? Languages like JavaScript or Python will certainly be around in 5 years. Will the Polygon blockchain be relevant? Maybe. Others are more transient. When I worked at the Exploratorium, for example, I used the Eclipse IDE with Proclipsing to run our Processing projects. It was powerful, but at some point along the way, the Proclipsing extension stopped being updated, which left us all with projects that were near-impossible to update. The popularity of a tool also means that you will likely find more resources (e.g. Stack Overflow posts) on how to use it.

Check out the popularity of such tools. Will it be easy to go back to a more stable system? For example, with Visual Studio Code + PlatformIO, I could port a project back to the Arduino IDE very easily. And since VS Code is a Microsoft product, it’s likely to be around for a while anyhow.

How hard is it to learn? If, after 5 hours of mucking around, the new technology is just too damn difficult, then it’s probably time to think about outsourcing the work to a qualified engineer. Is it just that far beyond your skill set? Are you completely disoriented after a few hours, or do you have a sense that this software tool will be your friend?

How is it suited to your needs? Don’t try to muscle something around that isn’t well-suited for your project. I have a close friend, not a coder, who uses the After Effects scripting language to generate his projects. That’s fine for video work, but when he started talking about a fun project called Biden Bingo, which generated bingo cards for different states during the 2020 presidential election, that’s where I stepped in with Processing and solved the problem much more quickly. Make sure the tool in question is actually useful and not a circuitous route to the solution.

How much time should you spend researching vs building? 

As Raunak pointed out, in software engineering it’s likely that someone has already implemented something you are trying to do. Successful engineering involves learning when to look online for libraries or other people’s code and when to implement your own. On one hand, it can be a huge timesaver if you find that one library or Stack Overflow post that does exactly what you need and fits perfectly with the other components of your project. On the other, this holy grail is often hard to find, and code that looks useful often requires doing acrobatics to make it fit into your own.

A general rule is to familiarize yourself with common libraries used by the new technology, and to always do a couple of quick searches on how other people have accomplished the tasks you are working on. In a lot of ways, searching for code and discovering what’s out there is itself experience in getting to know a tool.


You don’t know how much you can do with the time you are given if you don’t know how long things take

Rank project necessities from highest priority to lowest, e.g. do you need a completely accurate visual sensor, or do you want to focus on an aesthetically pleasing UI? Most of the time, some features don’t get completed, so it helps to bump the lowest-priority items down the chain. Once again, this seems obvious, but many don’t do it. Having this sketched out before you start a project helps, since what often happens is that you get emotionally attached to the work you’ve already sunk into the project and make poor decisions. Maybe the least important things have become more vital in your work-addled brain.

Timelines are also important in balancing long term growth with project completion. Most people have an idea of how much time they want to spend on a project, and how much time they have to learn other things. But it’s hard to make decisions like this when you don’t know how much time things take.

This takes us to the classic Donald Rumsfeld quote:

…because as we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns—the ones we don’t know we don’t know…it is the latter category that tends to be the difficult ones.

Right-wing war mongering aside, there is a chunk of wisdom for us to chew on here, which is simply that we don’t know what we don’t know.

First, try to get those unknown unknowns solved as quickly as possible. Don’t leave them for the second half of the project. Overestimate your timeline. Do the best you can.

Raunak’s advice, echoed here, is to take a short course or work through a book to learn what you don’t know; then you can recast your timeline. Courses and books are great for unknown unknowns.

Build what you don’t know first; then you can handle the known unknowns. For example, there was a lot of JavaScript coding I did for NFT Culture Proof, and I had to make various forms and other things that I had never done before. But I know JavaScript reasonably well and have worked with it before, so I was confident I could solve those problems easily.

Some parting tips:

  1. Iterate on your design often. It’s hard to keep a design fully in your head.
  2. Don’t get stuck in the “I can’t code” mental loop. It is a self-fulfilling prophecy. While some people have a more natural aptitude for code structures, anyone can learn to code in a “good enough” way for art projects.
  3. Identify bottlenecks, and design around them. There are often multiple solutions to the same problem. Determine the specific components that you are most concerned about working with, and come up with potential backup plans if you find that it’s too complicated to make it work. For example, in the NFT Culture Proof project, we initially wanted to generate our SVGs in Solidity, but we switched to doing that in JavaScript when that proved too difficult.
  4. Try to have multiple ways of accomplishing things. If something doesn’t work like it should or if it’s taking too much time/energy, then it’s always great to have some other technique to fall back on.
  5. Break down problems into bite-size pieces. It’s like doing work on your house. If you look at everything at once, it’s easy to get overwhelmed. If you work on just the shelving unit, that task can get done in a weekend, and then you can move on to the next.
  6. Focus on handling one unknown at a time if possible. Don’t try to implement a smart contract and the UI for it in one go. It’s much easier to divide and conquer. Focus on a single component (e.g. the smart contract) first, and then distill your work into as many simple tasks as you can (e.g. deploy a trivially simple smart contract first, then work on how the smart contract token balances are stored, and then add transferring tokens between users).
  7. Know when you’re at the point of diminishing returns. If it’s 11pm and you can’t figure out a bug in the code, time to shut off the computer and work on it with a fresh can of Club-Mate in the morning.
  8. Prepare yourself for success. Figure out a workflow that is organized and soothing. For example, at the end of most days, I put together a to-do list for the next day. My brain is sharper in the morning and this is when I want to jump in and get shit done. Doing this is a good way to close out the work day and be focused on the next day.


Thanks Raunak for your insights!


Climate Change and NFTs

NFTs and Climate Change
Aren’t NFTs disastrous for the environment? How can I reconcile being an environmentalist with being a proponent of NFTs?

There is a path here. Let’s get into it.

NFT 101
A non-fungible token (NFT) is a unique and non-interchangeable unit of data stored on a blockchain. NFTs can be used to represent photos, videos, audio, and other types of digital files as unique items, essentially acting as a certificate of authenticity. The blockchain technology establishes a verified and public proof of ownership for them.

This Medium article has more detailed information, if you want to glean a deeper understanding of them.

Here are a few critical aspects for a better understanding of what NFTs are:

  1. Each cryptocurrency has its own blockchain, consisting of verified, decentralized transactions that are publicly viewable. So Bitcoin, Ethereum, Tezos, Dogecoin, etc. all have their own blockchains, each of which has unique features related to consensus, security, speed, scalability and other factors.
  2. An NFT is generated (“minted”) using what is called a smart contract. This computer code is stored on a blockchain and executes automatically when certain conditions are met.
  3. Since NFTs are on the blockchain, they are immutable, public, decentralized and much more secure than most other financial transactions. 
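To make the data model concrete, here is a toy, in-memory sketch in Python of what an NFT contract tracks: unique token IDs mapped to owners, with mint and transfer rules. Real smart contracts (e.g. ERC-721 contracts written in Solidity) run on-chain with far more machinery; this is only an illustration, and all names here are my own.

```python
# Toy illustration of the NFT data model: a ledger of unique,
# non-interchangeable token IDs, each with exactly one owner.

class ToyNFTLedger:
    def __init__(self):
        self.owners = {}      # token_id -> owner address
        self.next_id = 1

    def mint(self, to):
        """Create a new unique token and assign its first owner."""
        token_id = self.next_id
        self.owners[token_id] = to
        self.next_id += 1
        return token_id

    def transfer(self, token_id, sender, to):
        """Only the current owner may transfer a token."""
        if self.owners.get(token_id) != sender:
            raise PermissionError("only the current owner can transfer")
        self.owners[token_id] = to

ledger = ToyNFTLedger()
t = ledger.mint("alice")
ledger.transfer(t, "alice", "bob")
print(t, ledger.owners[t])  # → 1 bob
```

The “certificate of authenticity” idea is just this mapping made public and immutable: anyone can check which address owns token 1.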


Energy and NFTs
The production of NFTs on their most-used blockchain (Ethereum) currently requires large amounts of energy, consumed by the computers that verify transactions. There is no question that this tangibly contributes to climate change, and it is a deep concern for many in the NFT community.

That said, there are several things to keep in mind about misinformation, energy mixtures and how the future is unfolding for NFTs, and this is where things get complex.

(1) Bitcoin ≠ NFTs
First off, NFTs pretty much don’t use the Bitcoin blockchain. Bitcoin doesn’t have the smart contract architecture of Ethereum or other cryptocurrencies necessary for minting and distributing NFTs. 

The dominant blockchain/cryptocurrency that people are using for NFT collecting is Ethereum. It uses about 13.5 times less power per transaction than Bitcoin.1

For now, we’ll talk about Ethereum instead of Bitcoin in terms of NFTs (there are other blockchains that support NFTs, which we will cover later), since it is the dominant blockchain on which NFTs get minted and distributed.

So let’s stop passing those Bitcoin headlines around and pretending they’re accurate for NFTs.

(2) Energy ≠ Carbon Emissions 

We can’t extrapolate carbon-emission calculations without knowing the precise mix of energy sources powering the computers that process transactions on Ethereum.

For example, one unit of hydro-electric energy will have much less environmental impact than the same unit of coal-powered energy.

The statistics needed to calculate the climate impact are not readily available, but various sources put renewable energy use in mining operations anywhere from 40% to 75%.2 By contrast, the United States is at ~12.5% renewable sources.3
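To make that point concrete, here’s a rough sketch in JavaScript. The carbon-intensity figures are round numbers I’m assuming for illustration, not measured values:

```javascript
// Rough illustration: carbon footprint depends on the energy mix,
// not just the total energy. The intensity figures below are assumed
// round numbers for illustration, not measured values.
const GRAMS_CO2_PER_KWH = {
  coal: 1000, // assumption: ~1 kg CO2 per kWh for coal
  hydro: 25,  // assumption: ~25 g CO2 per kWh for hydro (lifecycle)
};

// Emissions (in kg CO2) for the same energy draw under a given mix,
// where mix gives the fraction of each source, e.g. { coal: 0.5, hydro: 0.5 }.
function emissionsKg(totalKwh, mix) {
  let grams = 0;
  for (const [source, fraction] of Object.entries(mix)) {
    grams += totalKwh * fraction * GRAMS_CO2_PER_KWH[source];
  }
  return grams / 1000;
}

const kwh = 100;
const mostlyCoal = emissionsKg(kwh, { coal: 0.9, hydro: 0.1 });
const mostlyHydro = emissionsKg(kwh, { coal: 0.1, hydro: 0.9 });
```

Under these assumed figures, the same 100 kWh produces roughly seven times more CO2 with the mostly-coal mix, which is why the mix matters as much as the raw energy number.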

(3) Many blockchain transactions use what is called stranded energy

My research here comes from a Harvard Business Review article,4 so I’m summarizing one of their key points.

Hydro-electric power exemplifies how stranded energy works. In China, during the wet season in the Sichuan and Yunnan provinces, enormous quantities of renewable hydro-electric energy are wasted every year. Production capacity simply outpaces local demand, and battery technology is not yet able to efficiently store and transport energy from these rural regions to the urban centers of demand.

According to their well-researched article, these regions probably represent the single largest stranded energy resource on the planet, as they are responsible for almost 10% of global Bitcoin mining (and presumably Ethereum mining) in the dry season and 50% in the wet season.

(4) Ethereum is moving to a much lower-energy model
Ethereum 2.0 will use what is called a Proof-of-Stake mechanism for validating transactions, with current estimates targeting summer 2022.5 This is a huge deal for the future of NFTs and is what will finally bring them into green territory.

Currently, both Ethereum and Bitcoin use what are called Proof-of-Work (PoW) consensus algorithms to verify transactions, while many other blockchains already use Proof-of-Stake (PoS) models.

PoW verification requires huge amounts of computing power to solve problems and verify transactions, and that computing power requires significant energy. Ethereum and Bitcoin were not originally designed to scale to their current size, which is one reason cryptocurrency now impacts the climate so negatively.

With PoS verification, the computers need much less power, with energy requirements estimated at roughly 1/10,000th of current levels, which will mitigate, by orders of magnitude, the climate impact of NFTs and, by extension, of all crypto transactions on Ethereum.6 A PoS mechanism can also run many more transactions per second, making it a more efficient form of exchange.

With this shift to Ethereum 2.0, the energy requirements for all of Ethereum would be roughly that of a small town (about 2,100 people).7 That seems pretty reasonable for what it offers.

Ethereum-based NFTs are not good for the environment now, but they soon will be very low-energy.

(5) There are currently many “green” blockchains that people are using for NFTs.

While Ethereum is the dominant blockchain for NFTs and the one that is used for many high-end collectibles, there are thriving markets on alt-blockchains such as Tezos, Flow, Cardano, Solana and Polygon which use Proof-of-Stake mechanisms for their transactions.

The collectibles here are often lower-priced, such as NBA Top Shot moments on the Flow network, which are often in the $5 range. The transaction fees (“gas costs”) are very low, which gives them an advantage from an economic standpoint, and their energy consumption is also quite low.

For this very reason, my first NFT project, a collaboration with Nathaniel Stern called NFT Culture Proof, uses Polygon, a PoS sidechain of Ethereum that will offer a future bridge to Ethereum.


Yes, currently many NFTs are not very energy-efficient since they use Ethereum

Ethereum will soon move to a much more energy-efficient model (Ethereum 2.0)

There are many other blockchains where artists and creators can mint NFTs that are energy efficient. If you’re an artist and concerned about the environment, you may want to consider those, at least until Ethereum 2.0 is released.










Symphonic Forest – Pre-installation Walkthrough

Just before I showed my new sound installation work, Symphonic Forest, I did this walkthrough at the Bloedel Reserve. This was the sound check before the final installation and I was quite happy with the results.


And here are some final pictures of the installation.


Their videographer shot some video on the morning of the installation premiere and now I have to edit it together, so give me a couple weeks for that.

Symphonic Forest @ Bloedel Reserve

I’m midway through my 3-week summer residency at the Bloedel Reserve, where I am creating a new installation called Symphonic Forest: a sculptural sound installation that builds a data-driven soundscape from live tree data.

It’s a lot of work! Come check out its premiere on Thursday, August 12th (noon to 5pm) and Friday, August 13th (10am-2pm).

You can buy advance tickets here

(there are no walkups)

What you will experience is a site-specific installation with emergent acoustic behavior driven by tree data, depicting several states that the forest moves through. When one tree shifts to a new emotional state, for example excitement, it influences the other trees to do the same.

I’m thrilled to have this opportunity to create this new work in this amazing reserve. It’s an evolution of much of the work that I’ve done in the last 4 years, including Sonaqua, Unnatural Language (in collaboration with Michael Ang), and Botanic Quartet.

These are some preliminary videos:

Initial walk-through of the installation site (below)


Initial sound tests (below) at the installation site


Emergent Behavior Test (below) — this shows the 12 trees and the communication model, where they go from one state to another, influencing one another. I modeled this in p5.js, and the final version will use OSC over wifi with the project’s ESP32 chips.
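For the curious, here is a simplified JavaScript sketch of that communication model. The state names and the influence probability are my illustrative stand-ins, not the installation’s actual parameters:

```javascript
// Simplified sketch of the emergent-behavior model: 12 trees in a
// ring, each holding a state; an "excited" tree may pull its calm
// neighbors along on each beat. States and the influence probability
// are illustrative assumptions.
const NUM_TREES = 12;

function makeForest() {
  return Array.from({ length: NUM_TREES }, () => ({ state: 'calm' }));
}

// One beat of the model. `rand` is injectable so the step is testable.
function step(trees, influence, rand = Math.random) {
  return trees.map((tree, i) => {
    if (tree.state === 'excited') return { ...tree };
    const left = trees[(i + NUM_TREES - 1) % NUM_TREES];
    const right = trees[(i + 1) % NUM_TREES];
    const excitedNeighbor = [left, right].some(n => n.state === 'excited');
    return excitedNeighbor && rand() < influence
      ? { state: 'excited' } // neighbor pulls this tree along
      : { ...tree };
  });
}

let forest = makeForest();
forest[0].state = 'excited';          // one tree gets excited...
forest = step(forest, 1.0, () => 0);  // ...and its neighbors follow
```

With influence set to 1, excitement spreads one tree in each direction per beat until the whole ring is excited; lower values make the contagion slower and patchier.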


P5.js + GitHub Tutorials (intro)

In addition to being an artist, I am also an educator. I feel passionate about sharing the methodologies, technical tools and conceptual pathways that I’ve developed over the years.

This semester, I’m teaching Interaction Design (2 sections) at the University of San Francisco, where I’m an adjunct professor. Rather than doing hardware, I’ll be focusing exclusively on software (not apps) interaction and networked experiences.

P5.js, which is essentially a JavaScript version of Processing, helps with learning to code since its output is visual. Graphics provide gratification: students can see their work as shapes, lines and text rather than as printed output in a console window.
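For reference, a minimal p5.js sketch looks like this (a generic starter example, not the exact code from my tutorials):

```javascript
// Minimal p5.js sketch: p5 calls setup() once, then draw() every frame.
// A generic starter example, not the exact tutorial code.
function setup() {
  createCanvas(400, 400); // p5 provides createCanvas in the browser
}

function draw() {
  background(220);               // light gray backdrop, redrawn each frame
  textSize(32);
  text('Hello World', 100, 200); // immediate visual feedback, no console
}
```

p5 calls setup() once and then draw() roughly sixty times a second, so even this tiny sketch gives students something visible immediately.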

With remote learning, I’ve developed some videos that I want to share with folks. These are just some intro videos and include some simple GitHub tutorials for classroom use.

Key to teaching is making it easy for students and to provide both a straightforward and uniform experience with the tech. Then the conceptual aspects can blossom.

Daniel Shiffman has created many in-depth tutorials for P5 and Processing, and he is animated, personable and thorough. These are an amazing resource. For a classroom environment, however, Shiffman’s videos are more than I need. For example, I ask all the students to use the same text editor (Sublime), mirror my work on GitHub with a Hello World example, and generally give them less of a universe of possibilities at the start.


P5.js Tutorials

Getting Started in P5.js, (3 min, 45 sec)

Hello World in P5.js, by Scott Kildall (3 min, 24 sec)

Intro to GitHub by Scott Kildall (10 min)

GitHub Tutorials

Cloning and Pulling from GitHub by Scott Kildall (4 min, 39 sec)

Using the JavaScript Console with P5.js by Scott Kildall (4 min, 39 sec)


A flowchart guide for my online tutorials (below)



Biden Bingo

The 2020 Presidential Election is just several days away. We anticipate hours in front of the TV on election night, waiting for the results. Biden Bingo is a game you can play to entertain yourself and your guests as you watch each state go for Biden (yay!) or Trump (boo!). This project is a collaboration between Scott Kildall and Mark Woloschuk.

Download 100 unique Biden Bingo Cards here.

Print out several cards, circulate them amongst your friends, either in person or virtually and play the game as the election results trickle in. We may not have a winner, due to all the confusion and delays associated with this year’s election, so Biden Bingo could be a several day experience.

However, I’m still optimistic that we will have a winner (Biden) on November 3rd.

Please circulate this page link to your friends.  We’d love to have everyone join in a Bingo celebration with a hopeful Biden victory.


How it Works

Download the .zip file (above), which contains 100 printable PDFs (8.5″ x 11″) of unique Bingo cards. You may select any number of cards to print out for you and your guests. You may want to play two or three cards, select which cards from the lot to play or improvise the game in any way.

Each card has only three paths to a possible Biden victory, all of which contain at least one swing state and no Republican states. When a state is called for Biden, X it out. Bingo is 5 in a row: across, down or diagonal.
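The actual card generator is the Processing code on GitHub, but the win condition itself is simple enough to sketch in a few lines of JavaScript (the helper name and the sample card are just for illustration):

```javascript
// Sketch of the Bingo win check: 5 in a row across, down or diagonal.
// The real generator is the Processing code on GitHub; this just
// illustrates the rule on a 5x5 grid of booleans (true = state called).
const SIZE = 5;

function hasBingo(marked) {
  for (let i = 0; i < SIZE; i++) {
    if (marked[i].every(Boolean)) return true;       // row i
    if (marked.every(row => row[i])) return true;    // column i
  }
  if (marked.every((row, i) => row[i])) return true;            // diagonal
  if (marked.every((row, i) => row[SIZE - 1 - i])) return true; // anti-diagonal
  return false;
}

// Example: a card with the middle row (including the New York
// center square) fully called.
const card = Array.from({ length: SIZE }, () => Array(SIZE).fill(false));
card[2] = Array(SIZE).fill(true);
```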

New York is always the center square since it will go Democrat, it is Trump’s home state and, well, it’s New York, so we may offend them if we don’t give them the center square.

Of course, as we saw in 2016, anything can happen. Texas could turn blue (yes!) or a blue state could flip to red (nooooo…), but we are both hopeful and confident that Biden will win despite Trump’s shenanigans. Even though we are optimistic, people have to actually vote for Biden to boot Trump out of office.

Interested in the Source Code?

I built this in Processing and the source code is available here, on GitHub under a Creative Commons License.


Copyright 2020 Xenoform Labs

Contagious Whisper at Schaumbad (Austria)

Badeverbot Welle 3, Schaumbad, Graz

We are delighted to invite you to see our sci-fi short film Contagious Whisper (US/UK 2020, 8min. dir. Kildall / Luksch) at the exhibition Badeverbot Welle 3, which will open Sept 24th; the vernissage will take place on Sept 27th.

Venue: Schaumbad – Freies Atelierhaus GrazPuchstrasse 41, 8020 Graz, Austria


Botanic Quartet at Ars Electronica

I haven’t written here in a long time, and so much has happened in the past year-plus. Amongst other global meltdowns, yes, we are still in a pandemic. Like all artists, this changes everything for me: cancelled exhibitions, postponed residencies and so on. You can’t make projects for physical spaces. And most of all, no plans.

We all have to be adaptable, right?

This year, for the first time ever, I’m not only “going to” but also participating in Ars Electronica. It has hitherto been cost-prohibitive for me to attend and so I’ve only looked at the documentation and heard stories about this long-standing art festival that explores critical issues around technology.

This year, they’re doing a virtual exhibition with the concept of “gardens” curated by different nodes — essentially different cities that are doing some sort of online exhibitions that an audience of thousands can navigate to and view via the web.

I admit that right now, I don’t understand it fully. The exhibition begins today (Sept 9 2020), I will explore various gardens and come up with my own feelings and conclusions about them — it’s odd but I’m compelled by their curatorial direction. I’m grateful to not only participate but also that they’re sharing this format with a more global audience.

I was invited by Freaklab Thailand to participate in the Psych Garden along with several other projects, essentially part of the Thailand node, which continues my work with the Bangkok 1899 Residency earlier this year.

This new installation of Unnatural Language, my ongoing collaboration with Michael Ang, is called “Botanic Quartet”.

This installation of Unnatural Language is one that I’ve developed at my house and consists of four plants — all plants that are native to Thailand that play in a synchronized quartet with one another.

I will be presenting this as a 24-hour livestream performance on September 12th (12:01 am – 11:59 pm) at UTC+2, the timezone in Linz, Austria.

I chose four instruments: a baritone ukulele, which I play myself; drums, which are me banging on pots and pans; a musical saw, played by my sweetie; and a sampler, which plays recorded sounds from Thailand.

Each plant is hooked up to a sensor — in most cases these are electrodes that read their rapidly-changing electrochemical activity. The data from this sensor gets processed by an embedded chip — an ESP32 that plays synthesized effects of these instruments, changing tones, pitches and patterns.

Specifically what is happening is that:

the drums vary their style and tempo

the ukulele changes its chord patterns

the musical saw alters its pitch

the sampler triggers with data spikes

The “clock” is a different ESP32 chip which sends synchronized beat signals to all the other ESP32 chips over OSC — through my local wifi network.

They receive signals at different times and so respond slightly out of sync. So, like a musically expressive performance, each beat does not align exactly with the tick of a metronome.
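Here is a small JavaScript simulation of that timing behavior. This is not the actual ESP32/OSC code, and the delay values are assumptions for illustration:

```javascript
// Simulation of the clock/receiver timing described above: one "clock"
// broadcasts beats, and each receiver hears them after a small,
// per-node network delay, so no two nodes land exactly on the tick.
// Delay values are illustrative assumptions, not measurements.
function beatTimes(bpm, beats, networkDelayMs) {
  const interval = 60000 / bpm; // ms between metronome ticks
  return Array.from({ length: beats }, (_, i) => i * interval + networkDelayMs);
}

// Two nodes with slightly different wifi latencies stay a constant
// few milliseconds apart, which reads as a loose, human feel.
const nodeA = beatTimes(120, 4, 3); // assumed 3 ms delay
const nodeB = beatTimes(120, 4, 8); // assumed 8 ms delay
```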

What you don’t see in the video is how technically sturdy and extensible this work is. Each “Datapod” can run off of batteries and be located at various distances from one another in physical space. They are weatherproof, solid and don’t require a central computer and so easily can be brought to different festivals and events.

I’m super-curious where this will go in the future, as this is a project I’m dedicated to and one with so much potential for site-specific installations, both during the pandemic and after it. Despite everything, I’m excited to share a physical installation with you.

Lessons from my Dry January

Rain blanketed San Francisco throughout January but it was dry for me. No alcohol for 31 days. Both easy and annoying. I know many people who make this commitment annually, well before someone dubbed it “Dry January” (or just #Dryuary).

I had imagined a cinema-like moment where my right hand would reach for the whiskey bottle and my left hand would slap it away, but the temptation never beckoned. Alcoholism flows in my family and I sure like drinking. My internal deal is that I keep a close watch on the booze. If it ever becomes a problem, I’m going 12-step.

I don’t sink my emotions into the bottle and I don’t indulge in sessions of wine therapy when things get tough. I committed to Dry January because during November and December I ended up drinking a couple of glasses of wine each night. I felt sloppy. I didn’t like where the path was heading.

A few days into 2019, I discovered the land of deep sleep. I felt more focused in the daytime. I got significantly more work done. I sorted out new art projects, I banged out tedious grants, my meetings energized rather than depleted me. I didn’t feel overwhelmed as I often do. I felt clear in my intentions. I felt in balance. The effect accumulated.

And…I missed the socializing. I missed the taste of a French red loaded with tannins. I missed dressing up and going out for a cocktail date. The dinner parties where I was dry, while enjoyable, didn’t have quite that epic feeling of social joy. In January, I holed up and nurtured my introverted side, a smaller version of me. Most importantly, I didn’t feel like my genuine self.

What’s the balance between alcohol and not? I’ve been reading about Mindful Drinking, which seems to have several variations. Feel free to rabbit hole on this search term.

My interpretation is this: before ordering or pouring a drink, ask yourself: How does this serve me?

When I go to an art opening, I often head towards the free wine corral and pour myself a glass before any social exchange. Alcohol acts as both social lubricant and social crutch. However, I’m a slightly worse conversationalist in this particular situation: a loose network of friends of varying degrees of intimacy, with a lot of chaotic traffic flow. So, at art openings, probably not.

This Saturday night, I went to a social dance party-thing, where I knew only a few people, other than my date. It’s an intimate circle and I’m new to that community of creative thinkers and warm folks. I feel more rigid without drinking. Perhaps the conversations are better, but my personal vibe feels off. I’m a little less relaxed. Saturday night, I wanted to unwind a bit, smile, laugh and integrate. Two glasses of wine served me well.

How does this drink serve me? Make it a quick question. Answer yes or no and then drink or not. It’s simple, but takes mostly practice and a smidge of discipline.

And, as a good friend of mine points out, if you’re on the fence about having another drink, then the answer is always no.

As a practicing artist who wants to amplify creativity, when does drinking help? Very rarely. Drinking makes me sloppy and unstructured. Lateral thinking, as opposed to linear thinking, is what I need for increased creativity. Drinking certainly shatters linear thinking, but the resulting journey is a meandering path of randomness.

My next inquiries will be into thinking more laterally and less literally. What are the spaces and mindsets that make this happen? Alcohol is clearly not one of them. The research begins.

I Started an Art Residency

In August, I quit my job at Autodesk Pier 9 as a Shop Staff member in their unique fabrication facility. I had started there as an artist-in-residence in 2014 and continued on as a part-time employee to help other artists realize amazing projects, specifically teaching them electronics, coding and virtual reality techniques.

Everything changes. The company that I worked for is no longer supporting artists as they once did. I’ve spent the summer doing some soul-searching. I reflected on what was special about the Pier 9 Artist-in-Residency program. I moved on.

What I have for you is Xenoform Labs — a new experiment that I launched just this October.


Xenoform Labs is located in the Mission District in San Francisco and is my studio, a workshop space and an art residency program.

That’s right, an artist-in-residency program.

Art Residency

The Xenoform Labs Residency is an invitation-only art residency program for new media artists from outside of the Bay Area. I provide free housing and studio space for 1 month for one selected artist/couple. The studio includes digital media, virtual reality hardware, media production and light fabrication. During the residency period, I will host events for the artists to connect with local thinkers, artists and curators in the Bay Area. I hope to support 2-3 artists per year with flexible timing.


The idea for the residency is to provide a space for experimentation and the development of new works and ideas. I hope to support open-ended inquiry and possible collaborations instead of production work for a final exhibition.

The First Artists-in-Residence

My first guests were Ruth Gibson and Bruno Martelli, friends and colleagues whom I met at the Banff New Media Institute in 2009.


British electronic arts duo Gibson / Martelli make live simulations using performance capture, computer generated models and an array of technologies including Virtual Reality. Artworks of infinite duration are built within game engines where surround sound heightens the sense of immersion. Playfully addressing the position of the self, the artists examine ideas of player, performer and visitor – intertwining familiar tropes of videogames and art traditions of figure & landscape.

What they proposed and worked on was…

Extending the physical into the virtual, we will work to develop novel body-based interfaces for virtual reality. One of the drivers for our research has been thinking about how we can interface performance with virtual reality. In earlier works, performers were motion-captured and avatars visualized in a kind of ‘inside-out’ performance-in-the-round. Taking this a step further, we see live performance being ‘beamed’ into a virtual space using electronics hardware. The idea for this residency is about extending Ruth’s somatic practice into virtual space, so that the user experiences it more as a visceral sensation rather than as a primarily visual experience. This will be enabled, perhaps, by creating physical interfaces that subtly encourage this.

One month wasn’t long enough! I miss them already.

I felt energized

There is nothing quite like setting up an art residency while it is happening.

The studio space is in an apartment. One kitchen. Two studio rooms. A small balcony in the backyard. Ruth and Bruno stayed downstairs in a separate bedroom. Located in the center of the Mission, the Xenoform Labs Residency was a vortex of activity.

We set up a common workspace in the front room. Since it is also my studio, I am essentially co-working with the artists. I stocked the kitchen with dishes on loan, put vinyl for Xenoform Labs on the front door, added my own artwork to the center room, bought some camping chairs so we could lounge on the back deck and did loads of other small things to make it feel homey.

In the evenings, we would all work or go out to an art event. Ruth and Bruno met colleagues: curators, artists, thinkers, technologists and many more. At times it was overwhelming…I hope in a good way.

I discovered that the Xenoform Labs Residency could be a site for conversations around leveraging technology as a critical art practice. With this residency, I plan to slowly build networks of cross-geographical understanding and experimentation around new media art.


What I learned

First of all, breath sensors are a remarkable way to navigate in VR. Ruth and Bruno assembled a sensor that monitors the rhythm of your lungs as you inhale and exhale. On the inhale, you ascend in VR space, and on the exhale, you descend. It was magical, like scuba diving.
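The mapping is simple to sketch. This JavaScript snippet is my reconstruction of the idea, not Ruth and Bruno’s actual code; the gain and resting level are assumed values:

```javascript
// Sketch of breath-driven VR navigation: the breath sensor's signal
// rises on the inhale and falls on the exhale, and the avatar's
// altitude follows. The mapping constant is an assumption; the
// artists' actual implementation may differ.
function updateAltitude(altitude, breathLevel, restingLevel, gain) {
  // Above the resting level = inhaling = ascend; below = exhaling = descend.
  return altitude + gain * (breathLevel - restingLevel);
}

let altitude = 10;
altitude = updateAltitude(altitude, 0.8, 0.5, 2); // inhale: rise
altitude = updateAltitude(altitude, 0.2, 0.5, 2); // exhale: sink back
```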

Also, I love hosting and giving a reason for people to come over to the studio — engaging their curiosity with the work and experiments of Ruth & Bruno. We hosted three separate events: a meet n’ greet, VR Salon and closing event. The format was casual for all of these, where invited guests could drop in and see what the residents were doing. This event structure worked well.

And there are a lot of mundane things to do: cleaning, grocery shopping, finding tools, directing the residents to the best coffee joint in town and so on. I embraced being a tour guide and was also grateful that they pitched in on dishes and were lovely housemates.

Finally, the visitors were super-enthusiastic. The concept of doing a small-scale residency in San Francisco generated so much interest and support amongst friends and colleagues. As often is the case, you go through a moment of imposter syndrome. Is this really happening? Yes, it is, because the residents arrived, made art and people came over to talk with them. It’s the real deal.

What’s Next?

Good question!

In the short term, I’m setting up a private workshop + talk series starting in early December, with programming in January and February. You can find all the workshops here.

Of course, I’m working on the next round of residents for 2019. It’s an invite-only program for artists who live outside of San Francisco. I am open to nominations, just contact me here.

This turns out to be more work than I thought! I’m trying to find the right fit: not just any qualified artist, but one who is angling to be on the more social side and yearns for conversation and connection to the unique cultural scene in the Bay Area.

Plus, there is a lot of maneuvering around complex schedules. In the last few weeks, I’ve developed a deep empathy for arts administrators.

Yes. I’m quite excited. 

Interview with Tasneem Khan and Andy Quitmeyer


Last month, at the conclusion of my time at Dinacon, I interviewed the two organizers: Tasneem Khan and Andy Quitmeyer. This was a special time and I was grateful for the opportunity to get their thoughts before I left.

Scott Kildall: Hi! Would each of you please give a short self-introduction?

Andy: I’m Andy Quitmeyer, a researcher studying how we can use interactive technology to help us explore nature and other living creatures.

Tasneem: I’m Tasneem Khan, a researcher exploring the idea of using place-based learning with different learner groups, to understand how immersive experiences in ecosystems might affect their responses.

Scott: And how did you two meet each other?

Andy: Tasneem sent me a random email that said something like “hi, I’m a friend of a friend who said I should check out your work and I’m going to be in Singapore soon”, which is where I was living. And a few weeks later we had lunch. My reaction was “holy crap you’re awesome and driven, let’s run a giant conference together”. And Tasneem was game for it.

Tasneem: We literally discussed that just a half an hour into our lunch. I loved the idea of running it and mixing both our styles. I’ve worked with running programs for large groups of people in weird remote places and Andy has done a lot of exploring and teaching with the concept of “digital naturalism.”

Scott: So you jumped into doing this conference, but you both had pretty complementary backgrounds in running programs and organizing people. So it wasn’t that you came in without experience, just without experience of working with one another.

Tasneem: Not really. I have had experience with learning groups, curating residencies and collaborative expeditions, but not conferences of this scale.

Andy: Yeah, this is new for both of us. I’ve organized expeditions and workshops but never anything on this scale. This is easily eight to ten times bigger than anything I’ve ever done before.

Drone’s-eye view of Koh Lon

Scott: Maybe it’s time to explain what Dinacon is. Why do you call it a conference? Being here, it sure doesn’t feel like one. It feels more like a residency or a hacker camp.

Andy: Dinacon is a six-week conference that Tasneem and I are running with the help of lots of other amazing, wonderful people. It was originally targeted towards interaction designers, artists and field biologists, but we’re open to anyone who’s interested in any of these commingling ideas.

Tasneem: I’d say that Dinacon is the coming together of people, which is a conference in the most literal sense. We decided to do it because of a common ideology about how people should work, regardless of their areas of expertise. We believe people can collaborate better in a specific shared space, applying their expertise both in the field and in real life.

Andy: Yes, it also emerged from a common disappointment in how conferences are often run with a rigid structure where many people care more about what they can put on their C.V. than the actual conference itself. So, we wanted to undo the things that are not so great about conferences and open up the structure and give people time. It’s important to have both freedom and time to relax and soak in both the natural context and the impact of all these amazing people around you.

Scott: Great. And where are we right now? What is the conference venue for Dinacon?

Tasneem: We’ve chosen a tropical island — Koh Lon — which is a small island off the southeastern side of Phuket in Thailand. The reason we chose this specific site, as opposed to any of the hundred other islands around here, is its proximity to Phuket. This was our first conference of this scale and we expected a hundred and thirty people from all over the world, most of whom we hadn’t met before. So from a logistics and safety point of view it made a lot of sense: it’s a ten-minute boat ride from a big city, with hospitals, airports, provisions and anything one might need.

Andy: But we’re still quite on our own.

Tasneem: Yes. The great part about it is that the island has a population of only a couple of hundred people. You have access to pristine forest on one hand and the ocean on the other. In terms of selecting a location beyond practicality, one of the things we really wanted was to give people access to a cross-section of environments, and Koh Lon gives us that: everything from the ocean to the forest and a range of connecting systems in between.

Workshop in the Dinacon “headquarters”

Andy: The main facility of our conference is a big tropical jungle house; that’s the central area, which we call the headquarters. In front of it is a large campground, a grassy field surrounded by jungle with a beachfront, as well as little cabins that people can rent at a pretty subsidized rate.

We also have the Diva Andaman, a glorious ship that we were able to use thanks to the generosity of Yannick Mazy, the owner of Diva Marine. So people can work on the beach, work out at sea, or voyage off into the forest and just soak everything in, rapidly testing whatever kinds of devices or art projects involving nature they want to do, right away with the natural resources there.

The one and only Diva Andaman

Scott: Wow. And then with all these sites of activities, what’s the role of chance encounters? What’s the intention here about how people might be interacting?

Andy: I see my role as a conference organizer to heighten serendipity, and so I just try to mix together loads of interesting factors: nature, the people, the places, the devices they might be able to use and just try to increase the chances that these things might lead someone, for example, to make a cool hermit crab project.

Tasneem: This fits with my practice in general and this attempt to push interdisciplinary work across subjects and across spaces. I view it like Andy said, but also as a way to think through the experience and ask questions – like, what are the kinds of people we want to bring in? What are the kinds of people we hope to attract? What is the work that we have no idea about, that can surprise and illuminate us all? And we do all this while curating how aspects of the place influence people’s work and interactions.

What we make available to participants can change the way they work, the way they think, who they interact with and what they produce in that space.

We have intentionally not put too much work into programming activities every day, because we want that to be organic and flow from the participants and from the place. But we have put a lot of effort into thinking about what to make accessible, and which experiences to create for people, in order to trigger and drive that enthusiasm and inspiration to work with each other and with the place.

Scott: So then, how did you select these people? Was there an application process? How did you run that process, and how did you reach such diverse networks?

Andy: We had lots of forms for people to fill out. [laughter]

Scott: It was not bad at all.

Tasneem: We were trying to steer away from the overly bureaucratic approach to conferences and all the ways people need to prove themselves – like only being allowed to attend if you had a certain paper to present. Also, we wanted to be cognizant that many people don’t have institutional backing or money to spend on a conference.

We still had a couple of basic forms. Andrew’s great social media network and ability to reach out to people made a big difference.

Andy: In order for people to get here, we first had a super-simple initial application form, which we sent around. Anyone could apply. It asked people what they might want to do here and to share an idea of what they would spend their time doing.

We also wanted to convey the understanding that a project might change in the six months between when you first think of your idea and when it gets closer to coming here, and then of course once you’re on the island, everyone’s ideas blow up or somehow transform.

We had some different criteria, because one thing about having an extended kind of conference like this is that it makes certain time slots a bit trickier. If everybody wants to come the first of July or something like that, it was a bit harder for us to choose some people. So we asked people to keep their dates open, be flexible and move around, because we tried to fit in as many people as possible.

Tasneem: We tried to ensure that we had no more than forty people on a single day. So one of the big criteria was just practicality and logistics. If people were willing to move around, they could most often be slotted in.

Andy: Given our extremely minimal application process, if applicants showed that they were genuinely interested in Dinacon, that was something I think we evaluated more positively than whatever their project was. We looked at how interested this person seemed in the place, the people and the kinds of tools they work with.

Tasneem: Many people wanted to come and just learn, but we felt that they needed to be making and doing something that other people can learn from as well. Therefore, one of the main things we were looking at in applications was their own project ideas and intent.

Hermit Crab Behavior by Margaret Minsky

Scott: So that brings up the three rules of Dinacon…Dinacon pronounced like a dinosaur right?

Andy: I think it depends where you come from. Technically, since it’s the Digital Naturalism Conference, you’d say “dinna-con.”

Scott: That sounds like a British person saying dinner.

Andy: Yeah exactly.

Tasneem: Which is another thing, by the way: a subculture of “dinna-con” has emerged for people who like to cook, forage, eat and experiment with food.


Andy: But at least from an American perspective of a kid who likes dinosaurs, it’s totally Dinacon.

Scott: OK. So what are the three rules of Dinacon?

Tasneem: First, you have to make something, which puts the emphasis on creation and your own thought process in the context of our location. Then, you have to document it, because we’re all for making things available and accessible, not storing them away on your shelf or in a journal. So you have to share what you do in whichever format you like, and finally take the time to engage with, review and provide feedback on somebody else’s work. Those are the three rules: make, document and review.

Andy: They’re still based off this idea of how our academic conferences work, but in a kind of inverted model. Instead of writing a paper and then getting it pre-reviewed by a bunch of busy people who tend to not have time to give anything but a quick glimpse and be like “oh, well they didn’t cite blah blah-blah…”

So instead we’re inverting that and proposing, “Hey, you’re still going to be productive here.” I think people tend to be more productive here than at a lot of other conferences, and you’re still going to get valuable feedback from people in the rest of your community. Another real key factor of Dinacon is taking people from very different fields and showing them that their work can be valued by, and meaningful to, people across these invisible borders we set up.

Tasneem: Here, your work can be reviewed by anybody who is from a completely different field of practice, place and perspective.

Andy: An artist can review a field biologist’s paper, and the biologist can review someone’s robot design.

I think that will truly test how effective your work is in terms of: have people been able to understand and relate to your work? Have people been able to apply it, or at least generate ideas that engage with you and your work? That forces an interdisciplinary approach, pushing people to step outside their comfort zone and express themselves in new formats rather than just a written paper. There’s nothing wrong with the paper, but we are asking how we can communicate in different ways.

The forest/jungle of Koh Lon

Scott: Excellent. So then, can you talk briefly about the term Digital Naturalism? Is it one that you made up? I didn’t find that defined on Wikipedia.

Andy: Digital Naturalism was the subject of my PhD research, so in essence, I just made it up. What I was looking to do was take all of this digital technology that we have available, which is really fascinating for looking at nature because it’s the first new medium we have that can really enact behaviors. If you think about animals, they can take input from the environment, they can sense things, and they can also react to the environment: they can move, they can create light, they can make sounds, they can do all these kinds of actions that contribute back to the environment.

You can get these behavior cycles and networks of things interacting with each other, and then you have computer technology, which is kind of the first technology we have that can also do this: it can take in inputs from its sensors, it can buzz, it can beep, it can turn on LEDs, so it can communicate back. What I’m really fascinated by is how we can use this interactive digital technology to join these networks of natural interactions, create dynamic systems between creatures and computers, and see what happens. This is a bit in contrast to the way a lot of technology gets used for looking at nature in a lot of the sciences.

And that’s where it has much more of a purely utilitarian use: there’s a bunch of things happening in nature, we want to extract all of this data and then do something with it.

Such as find out where the oil is, or see how we can get the honeybees to pollinate our field better. But instead, well, that’s why it’s digital naturalism, not digital science or digital field biology: it goes back to the naturalistic roots of field biology, which is more concerned with learning about creatures and systems for the sake of learning about them, experiencing them in visceral, interesting ways, and doing this out of a love and appreciation for nature. That can also be quite useful and quite enlightening to people, but its basis is more in love than utility.

Tasneem: I just wanted to add that often scientists or biologists are out there working in the field, and all this amazing equipment and technology exists that works really well in laboratories. However, if you look at the sphere of field equipment that can actually survive and do the work that the people observing, probing and studying nature or the environment need, it’s so limiting. That’s because the people who develop the technology are very rarely embedded in the space where the technology has to be used.

So I guess that’s what Andy’s PhD was about in many ways: the idea stemmed from this need to go out there and build in context, without it having to cost a fortune.

Scott: Directly related to this is the idea you two mentioned to me last week called “place-based learning”. So maybe you can talk about that in the context of this question, how have you seen that working in the jungle or on the boat has affected people’s work from their proposals to the actuality?

Andy: Oh, it’s a lovely mutation that we have been witnessing. Two main things that I see a lot at Dinacon make me happy. One is intergenerational knowledge transfer within Dinacon: you have people coming and going from Koh Lon, and the older guard will demonstrate, for example, how to open a coconut. The new people learn from the elders at Dinacon, and so knowledge is transferred. But there is also a parallel mutation of practices, where someone wants to make a thing and then someone else might contribute something sideways, like “oh, here’s this kind of stuff that I do with these weird leaves or these corkscrew devices,” and then the first person says “huh, I’m going to incorporate that into my design,” and suddenly you have these writhing, wriggling bamboo creatures that are different from anything either of the original people was even thinking about at the beginning.

Island Caterpillar by Hannah Wolfe

Scott: Tasneem, I’m wondering if you can tell me a little bit more? I hadn’t really heard of place-based learning, which means in my wide readership, hundreds of thousands of people will not have heard of place-based learning. I’m wondering if you could talk a little bit about what that really is.

Tasneem: Place-based learning is a big subject, but I will talk about it in the context of Dinacon. Let’s return to one of the initial questions of why eight weeks and how we designed this. A lot of the way that I think learning happens comes from that act of allowing time in a particular place. If you go back to the origins of art, science or philosophy, it all stems from extended observation of systems that then led to inquiry, thought and expression. The subsequent subject divisions are just based on the ways of thinking and the methods we then choose.

Much about ‘learning from place and in context’ is about giving yourself the opportunity and the time to explore. One of our main goals has been to provide people time embedded in a particular place before going so far as learning about it: to push them to ask questions, to spend extended durations observing, and then to move forward to the next step, whether that’s experimenting, creating or learning.

Place-based learning is basically that: how do we learn the things that we otherwise compartmentalize into the subjects of biology, engineering, sociology, physics, robotics or chemistry, or whatever subject you might choose? How do you remove those barriers and illuminate the context of the place you’re living in? Look at what one can extract from a space and you’ll notice that it’s actually a huge mesh of interconnections among all these different subjects. I can’t start answering questions about the water or the ocean without addressing the chemistry of the water, the composition of the water, the physics of what happens when you go one meter below the surface, the biology of what actually lives in a single drop of water, and so on.

So I would say in a sentence — well, I don’t want to be defining place-based learning because it’s already defined by many experts — but to me, place-based learning is being able to learn in context without subject barriers, so the emphasis is more on the method of learning than on a linear process. Even though this is not a class, it’s a group of people each exploring their own subjects and interests, all feeding off and learning from the common systems they are situated in.

All the people and projects at Dinacon have gone through a process of metamorphosis: they came in with an idea (which is why we didn’t put too many constraints on how they had to make their proposal), and then the place and the people in it affected that idea.

The place has helped refine the question and define the methodology that they then use. We’ve seen that with so many people. Jennifer, for instance, came to work on a project about food, eating, foraging and documenting herself eating things that she collected, but she ended up discovering so much more, like the finer aspects of how to make salt. She then spent her time collecting seawater and making different kinds of salt, which for her was a revelation.

She also explored how one can extract formic acid from a weaver ant and then use it in the right way to add flavor to a salad. Her work was not something that I had given much thought to before she came here, but it was so rewarding to see how people’s learning process, and then the practice and the output, can transform.

It wasn’t anything we did. It was just what she observed from the ants and from the ocean, what she learned from other scientists and practitioners, and how she then chose to apply it — that’s what place-based learning is.

Scott: Wow. Excellent. Shifting gears here…can you tell me about the different areas for making. What do you have available?

Andy: In terms of the facilities, we have a whole suite of interactive electronics and prototyping stuff: zillions of different types of sensors, actuators, motors, breadboards, soldering irons, different things that you would see in an interactive electronics lab, and a whole crap-ton of Arduinos in various different flavors, shapes, sizes and powers.

We also have mold-making equipment for casting natural forms, and biological workbenches with microscopes, vials, tubes and insect aspirators. We have a whole textile zone with sewing machines, buckles, zippers and fabrics, and yarn-crafting stuff like yarn, plastic yarn, needles and a loom.

Tasneem: Lots of art and craft stuff. Anything you might need, from bits of copper strip to glue of every kind, and tapes.

Andy: Sharp cutting knives, hand tools and power tools like drills, little mini projectors, and robotic arms with different heads on them, which can function as 3D printers or laser engravers.

Tasneem: We have a vinyl cutter and a sticker cutter. Since we’re on an island, electricity is diesel-generated, which is not always reliable and not very sustainable, so we’ve set up solar panels as part of our collaboration with Yannick. And we have electricity pretty much all the time, even in the storms.

Island take away sound glasses by Mónica Rikić

Scott: What were some of your expectations with this event and in which ways were the expectations met and what were some of the surprises both positively and negatively?

Andy: It feels a little weird to say, but it kind of came out how I expected. We got a bunch of weirdos together, we put them in a really amazing place, and things started taking off: they really enjoyed working with each other, chatting, cooking, living and sharing tons of cool ideas, and that’s kind of what I expected. I was a little bit primed for that from experiences at other places with similar models that we built off of, like PIFCamp in Slovenia or the Signal Fire Arts Residency.

So we’d kind of seen this model in action before, but what I was not as prepared for is how well it would work, the caliber of the people, and how many just brought it when they got here.

Tasneem: People come in for one or two weeks, they arrive with such great energy, and they’re willing to give all seven or fourteen days everything they have, which is a great vibe, because we all feed off each other’s energy.

Andy: I think maybe one thing I didn’t expect as much, not in a good or bad way, is the life cycle of a person here at Dinacon, where for the first day or two they’re kind of in a daze. They just show up and are confused, or just stoking things in their brain, maybe swollen with all kinds of crazy stuff.

Tasneem: A sensory overload!

Andy: Totally. So then they start jumping on it, and then something switches: they’re starting interesting projects and they’re helping out with the next round of dazed new people who are coming in. Then they realize, “oh, I have to leave.” It’s always too early.

Andy, always hamming it up

Scott: How long do people stay here at Dinacon?

Tasneem: One of the things that was intentional was to not have a structure that would define how this must run, so we left it open to participants to choose how long they would stay. I do, however, feel strongly about extended time: whenever I’ve run programs for students, I’ve noticed that nothing less than seven days is something I want to engage with. When you work with this model of immersing someone in a new environment, and the whole idea of trans-locality and what people learn from a new place, you have to acknowledge the fact that the body and all your senses together need time to absorb, assimilate and then respond in a new environment.

For example, someone who has never been to Asia before suddenly finds themselves staying in the jungle and riding on a boat, with new sounds, new smells, a new time zone and new flavors. You’re surrounded by all sorts of people, with so much information being thrown at every part of your body, that you need to give yourself time to take it in and reflect. So in terms of an expectation, I wish that everyone had stayed for a minimum of seven days.

Andy: Yeah, I think the average stay of a person here is about six days, and the longest stay has been about twenty days. You’re around that.

Scott: It’s been incredible.

Tasneem: You and Vanessa and I think a few others… You can see that the work, the outcomes, the collaboration and the interaction in general around people who stay long is different from those who just got a brief taste of it.

Andy: Our original rules were just something like a minimum of three days and a maximum of three weeks, and the three-day minimum was in response to academic conferences, which often only last three days.

Scott: Can you describe what might happen in a single day at Dinacon?

Andy: For a slice of a single day: maybe people wake up, kind of slowly getting up at different times. The kitchen might be busy with people cooking different leftovers, things like that, people waking up, getting into the day. Someone’s busting out the soldering iron already and carving into some bamboo, making a fun robot caterpillar, and then maybe someone decides to take a kayak trip around the island, so they lead people off.

Meanwhile, other people are collecting people’s saliva to look at the crystallizations of different hormones in it throughout the day, and then you’ll have…

Tasneem: A lot of building happening, people making things like robots, from working on project boards to actual outdoor building and bamboo crafts.

Andy: Yeah, and then usually people are snacking throughout the day, getting some kind of lunch; again, it’s still pretty informal. Towards the afternoon there usually tends to be a spike in activity. We have an online forum chat room where we keep each other updated, so maybe the kayak people say, “Hey, we found this weird creature, we’re going to bring it back to the microscope,” and then someone’s coordinating bringing the microscope back from the ship, and people are talking about different things that they’ve shared throughout the day. Then maybe food will come in, people might organize a beautiful sunset yoga, suddenly the giant flying foxes — huge bats — come out and people gather around to see that, maybe we go see someone do a presentation or an art performance outdoors or indoors, and then suddenly someone posts a message that the water is glowing and they found a bunch of bioluminescence, and everyone runs out to the ocean to start exploring and investigating what’s going on. Why is it glowing and how do you make it glow? So there’s a lot of these serendipitous moments that appear throughout.

Tasneem: And the whole programming of it is also intentionally informal. We have a couple of boards on which everyone collectively builds schedules, and general information about the day is put up. And then there’s an online chat room which functions as a board for announcements and coordination, as Andy was saying, so if someone feels like sharing their work, or going out for a walk, or setting up sensors on plants, they usually put it out there and open it up for anyone else who is interested to come and join them, help them, learn from them or contribute to the work, maybe with other devices and expertise. So it creates potential for multiple parallel activities, and you can plug into anything that you’re interested in or create your own. The evenings, because of the group dinner, tend to become an interesting time of reflection, sharing of information, sharing of exciting things that happened that day, and every so often semiformal presentations.

Random Forests exercise, one of the many ad-hoc workshops

Scott: Can you talk about some of the logistical challenges? It sounds crazy to me. And how do you maintain your own energy and positivity?

Tasneem: Like you said, this is our project. Curating this experience and seeing it actually come to life is so exciting, and the fact that it’s all going so well puts us in a high-energy state.

Andy: We just kind of roll with it all. Even if I’m crushing through hours on a spreadsheet that’s just a monstrosity, figuring out what the hell’s going on, it’s still a pleasant experience because of how much joy and activity is going on around you. And if things get too intense, I just go walk around in the forest or take a swim; the nature kind of revives you.

Tasneem: It helps you put things in perspective. It’s not so tragic if somebody misses a boat for instance, it’s all okay in the larger scheme of events.

Scott: Andy was the one who saw me when I came off the boat, when it was like spaghetti noodles were flowing out of my head. [laughter]

Tasneem: And that comes back to what you said about people arriving, and how they sort of metamorphose and learn to sit back and loosen up a bit.

Andy: Yes, many people on their arrival here are upset that the boat was an hour late, but then they realize it’s okay in the larger scheme of things. It’s a part of working in the field; there is so much to learn in the adventure itself.

Tasneem: We have a lot more confidence that we can fix pretty much everything, and like Andy was saying, it’s got a zillion moving parts. But if you were to completely break down the logistics, there are a couple of main principles that hold it together, and it’s the people, primarily.

So we need to keep track of people coming and going to the best of our ability, with 56 days of arrivals and 56 days of departures. Maybe some sort of grouping would be wiser, but then again it wouldn’t have allowed people the flexibility they had this time. In terms of the moving parts, there are many, and those change every day. So that just needs us to be attentive and responsive and willing to play back and forth with it.

But in terms of the key logistics that hold it together, it’s the people and detailed information on them: any serious medical conditions that we need to be aware of and things like that. Food and water availability for everyone. And making available the environment and tools that participants would need; we’ve really had to make those as accessible as possible.

And in terms of those four verticals (arrival/departure; health and safety; food/water; place/access/tools), if we’re able to keep those together, I think everything else is fluid and manageable.

Scott: I also want to ask about the code of conduct, and how you get people to cooperate and treat each other respectfully. It has been drama-free as far as I can tell.

Andy: Our code of conduct is based off of two things. One is from the Signal Fire Arts Residency: they do these backpacking hikes that are also art residencies, so they have all of these different people who’ve never met each other living in close quarters, doing stressful, intense things together and bonding in different ways. We adapted a lot from them, with their permission, because I always thought it was a good code of conduct. It basically sums up to: don’t be an asshole, and if someone asks you to stop being an asshole, listen to them and don’t continue that process.

Tasneem: If we really had trouble we could ask someone to leave; that was specified in the code of conduct. But nothing has even remotely led to anything of that nature. I guess it’s really that simple, if you’re respectful and you respect people’s boundaries, equipment, belongings and practice. And everyone’s been, as expected, very nice to each other. When new people come in, even people who have been here a while take the time to show them around and share their work. It’s been as simple as that, actually, and fortunately we’ve never had to go into the details of the code of conduct that we made.

Andy: The other half of the code of conduct comes from this other big, strange, wonderful art project called the SV Seeker, where this guy is making his own open-source research vessel for scientists, and he’s doing it in his backyard in Tulsa, Oklahoma. He has basically built an apartment for any kind of artist or engineer to go live there and help him grind stuff on the ship and build parts of the ship. Before you go there, you have to sign this code of conduct that is basically about radical self-responsibility: hey, if you’re coming, you need to bring everything that will keep you alive and healthy; you are entirely responsible for all aspects of yourself. So I think between really stressing that with the participants (take care of yourself, you’re responsible for all of your own actions) and then “don’t be a dick to other people,” it’s turned into a good mix, and I think everyone got that.

Tasneem: No one’s pushed those boundaries.

Scott: In the future, where might this take place and how would you improve this event?

Andy: Yeah definitely, we’re going to have Dinacon 2.

Tasneem: Yes, that’s always been the plan. I mean, that’s the whole point. We’re not doing this conference because we had a few thousand dollars left over in a conference budget that we had to use. We put this together because it was part of a larger dream and ideology.

Andy: It came from ourselves and this is what we want to do with our lives.

Tasneem: And given that the response, the participation and the outcomes are so exciting, and we are learning at every step along the way, there’s no question of not doing it again. The only question is where we are doing it again.

Andy: We didn’t know where we were doing this one until like October of last year, so we’ve still got a couple of months.

Tasneem: I guess once we wrap up here, we’ll be actively thinking about that, and we’ve thought about several very good ideas in other places, like South America. Who knows? Let’s see. In terms of doing things differently, I think in essence we are happy with how it’s gone. We might make adjustments to certain logistical aspects, but that would be site-specific, depending on where it is.

Scott: Anything else you would change?

Andy: No, maybe different contract styles with the place for rental, but that’s just logistics.

Tasneem: Yeah, pushing the participants a little bit more to tell us what date they’re arriving, without being too pushy.

Scott: Great, and where can people learn more about Dinacon? Is there a website?

Andy: There is a website you can go to, and that’s where you’ll find contact information for me and Tasneem, and you’ll be able to see all the projects by all the wonderful participants and node leaders. You can find it all there.

Scott: Any last things you want to add? This has been amazing. Is there anything you want to add that you think wasn’t covered?

Andy: I’m just going to plug: if you’re building something, or doing art, or doing whatever you’re doing, go try to do it outside.

Tasneem: I would say the same thing: have a lot of fun using the space you’re in, and don’t coop yourself up in a white box.

Scott: Thank you.

Flagscape: Data-visualizing Global Economic Exchange in Virtual Reality


Scott Kildall is conducting research into data-navigation techniques in virtual reality with a project called Flagscape, which constructs a surreal world of economic exchange between nations, based on United Nations data.

The work deploys “data bodies,” which represent exports such as metal ores and fossil fuels that move through space and impart complexities of economic relations. Viewers move through the procedurally-generated datascape rather than acting upon the data elements, inverting the common paradigm of legible and controlled data access.

Economic exchange in VR


The code constructs data from several databases at runtime including population, carbon emissions per capita, military personnel per capita and a United Nations database on resource extraction. All of these get combined to construct the Flagscape data bodies. Each one represents a single datum, linked to a specific country.
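The source doesn't show the actual engine code, but the runtime combination step described above can be sketched roughly as follows. This is a minimal, hypothetical illustration: the field names, record structure and sample values are assumptions, not the project's real schema.

```python
# Hypothetical sketch: combine per-country datasets into "data body" records,
# one record per datum, each linked to a specific country.
# Field names and sample values are illustrative only.

def build_data_bodies(population, co2_per_capita, exports):
    """Join several per-country tables into a flat list of data-body records."""
    bodies = []
    for country, pop in population.items():
        bodies.append({"country": country, "kind": "population", "value": pop})
        if country in co2_per_capita:
            bodies.append({"country": country, "kind": "co2",
                           "value": co2_per_capita[country]})
        # exports maps country -> {resource: amount}
        for resource, amount in exports.get(country, {}).items():
            bodies.append({"country": country, "kind": resource, "value": amount})
    return bodies

bodies = build_data_bodies(
    {"China": 1.4e9, "India": 1.35e9},   # population
    {"China": 7.4},                       # CO2 per capita (illustrative)
    {"India": {"iron_ore": 2.1e8}},       # resource exports (illustrative)
)
```

In a real build, each record would then be handed to the renderer to spawn a flag-skinned form at its 3D position.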

The only stationary data body is a population model for each country, which is scaled to that country’s relative value and resembles a 3D person formed by a revolve around a central axis. The code positions these forms at their appropriate 3D world locations, such that China and India, the two largest population bodies, act as waypoints as their forms dwarf all others.
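The relative scaling described above could look something like the sketch below. The scale range and population figures are assumptions for illustration; the actual engine code isn't shown in the text.

```python
# Illustrative sketch: scale each population body relative to the largest,
# so China and India dwarf the rest. The min/max scale range is an assumption.

def relative_scales(populations, max_scale=10.0, min_scale=0.5):
    """Map each country's population to a scale factor in [min_scale, max_scale]."""
    biggest = max(populations.values())
    return {
        country: min_scale + (max_scale - min_scale) * (pop / biggest)
        for country, pop in populations.items()
    }

scales = relative_scales({"China": 1.4e9, "India": 1.35e9, "Iceland": 3.6e5})
# China gets the maximum scale; smaller countries shrink toward min_scale.
```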

Population bodies of India and China

A moshed flag skins every data body, acting as a glitched representation that subverts its own national identity. Underneath the flag is a complex set of relations of exchange that exceeds nationhood. For example, resource-extraction machines are made in one country that then get purchased by another to extract the very resources that make those machines.

Brazil flag, moshed

Flagscape reminds us that our borders are imaginary. In this idealized 3D space, there are no delineations of territory, only lines that guide trade between countries, forms magically gliding along an invisible path. What the database cannot tell us is how exactly the complex power relations move resources from one nation to another. Meanwhile, the carbon-emission bodies, the only untethered data bodies in Flagscape, affecting the entire planet, spin out of control into the distance, only to get endlessly respawned.

Carbon emissions by Canada and Australia

The primary acoustic element triggers when you navigate close to a population body. That country’s national anthem plays, filling your ears with a wash of drums, horns and militaristic melodies that flow into a state of sameness.
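A proximity trigger like the one described above could be sketched as follows. The positions, distance threshold and return convention are hypothetical; a real VR engine would use its own spatial queries and audio playback calls.

```python
import math

# Hypothetical sketch of a proximity-triggered anthem: find the nearest
# population body within an assumed threshold distance of the viewer.

def nearest_body_within(viewer_pos, bodies, threshold=5.0):
    """Return the country of the closest population body in range, else None."""
    best, best_dist = None, threshold
    for country, pos in bodies.items():
        d = math.dist(viewer_pos, pos)  # Euclidean distance, Python 3.8+
        if d < best_dist:
            best, best_dist = country, d
    return best

# e.g. check each frame and start that country's anthem when a body is near
country = nearest_body_within(
    (0.0, 0.0, 1.0),
    {"China": (0, 0, 3), "India": (40, 0, 0)},
)
```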

Initial Inspiration

The project is inspired by early notions of cyberspace described by writers such as William Gibson, where virtual reality is a space of infinity and abstraction. In Neuromancer, published in 1984, he describes cyberspace as:

“Unthinkable complexity. Lines of light ranged in the nonspace of the mind, clusters and constellations of data. Like city lights, receding…”


While this text entices, most VR content recreates physical spaces, such as the British Museum with the same artwork, floor tiles and walls as the real one, or builds militarized spaces in which “you” are a set of hands that trigger weapons as you walk through combat mazes. At some level, this is a consequence of the linear thinking embedded in our fast-paced capitalist economy, arcing towards functionality but ignoring artistic possibilities. This research project acts as an antidote to these constrained environments.

OverkillVR, a virtual reality game

It was through these initial conversations around virtual datascapes with Ruth Gibson and Bruno Martelli that I was invited to be part of the Reality Remix project and included in the AHRC Reality Remix grant, which is part of their Next Generation Immersive Experiences call. My role is a “collaborator” (aka artist) creating their own project under these auspices.

Spatialization and Materializing Data

Unlike the 2D screen, which has a flatness and everyday familiarity, VR offers full spatialization and a new form of non-materiality, which Flagscape fully plays with. One concept that I have been working with is that since data has physical consequences, it should exist as a “real” object. This project expands that idea but also blurs sensorial experiences, tricking the visitor into a boundary zone of the non-material.

At the same time, Flagscapes is its own form of landscape, creating an entire universe of possibility. It refers to traditions of depicting landscapes as art objects as well as iconic Earthworks pieces such as Spiral Jetty, where the Earth itself acts as a canvas. However, this type of datascape will be entirely infinite, like the boundaries of the imagination.

Spiral Jetty

Finally, Flagscapes continues the stream of instruction-based work by artists such as Sol LeWitt, where an algorithm rather than the artist creates the work. Here, it accomplishes a few things: it takes the artist’s hand away from creating the form itself while also recognizing the power of artificial intelligence to assist in creating new forms of artwork.

Alternate Conception of Space in Virtual Reality

VR offers many unique forms of interaction, perception and immersion, but one aspect that defines it is the alternate sense of space. Similar to the religious spaces that preceded the dominance of science, as described by Margaret Wertheim in The Pearly Gates of Cyberspace, this “other” space has the potential to create a set of rules that transport us to a unique imagination space.

As technology progresses and culture responds, the linearity of engineering-thinking often confines creativity rather than enhances it. Capitalist spaces get replicated and modified to adapt to the technology, validating McLuhan’s predictions of instantaneous, group-like thinking. The swipe gestures we use on our phones get encoded in muscle memory. We slyly refer to Wikipedia as the “wonder-killer”. The flying car is often cited as the most desirable future invention.

Flying car from Blade Runner

At stake with technological progress is imagination itself. Will the content of the spaces that get opened up with new technologies be ones that enhance our creativity or dull it? Who has access to technology-inspired culture? How can we use, enhance and subvert online distribution channels? These are just some of the questions and conversations that this project will ask — in the context of virtual space.

I see VR in a similar place as video art was in the 1970s, when it thrived with access to affordable camcorders. However, VR, and this specific project, has the ability to easily disseminate into homes and public spaces through various app stores. Ultimately, with this project I hope to direct conversations around access and imagination with art and technology.

Marshall McLuhan with many telephones

Work-in-progress Presentation

Our Reality Remix group will be presenting its research, proof-of-concepts and prototypes at two venues in London on July 27th and July 28th, 2018 at Ravensbourne and Siobhan Davies Studios. Both free events are open to the public.

Gibson, W. (1993). Neuromancer. London: Harper Collins Science Fiction & Fantasy.
McLuhan, M. (1967). The Medium is the Massage: An Inventory of Effects. New York: Bantam Books.
Wertheim, M. (2010). The Pearly Gates of Cyberspace. New York: Norton.

Revamping Moon v Earth

My artwork occupies the space between the digital and analog as I generate physical expressions of the virtual. In the last several years, most of my work has involved transforming data into sculptures and installations.

But sometimes I return to narratives themselves. It’s not so much a lack of focus but rather a continual inquiry into technology and its social expression. Imaginary narratives seem particularly relevant these days with the subjectivity of truth magnifying an already polarized political discourse.

I recently finished revamping a project called Moon v Earth, originally presented in 2012 at the Adler Planetarium. This augmented reality installation depicts a future narrative where a moon colony run by elites declares its independence from Earth. It is now on display at the Institute of Contemporary Art in San Jose.

Here are a few augments from the 2012 exhibition that made it into the 2018 show. My favorite was this pair of newspapers, which showed two different ‘truths’. At the time, “fake news” meant nothing and the idea of seeding false stories into online outlets wasn’t yet remarkable.

The last augment — the ridiculous wooden catapult about to launch rocks at Earth — refers to the Robert Heinlein novel, The Moon is a Harsh Mistress, which inspired my project many years ago. In his plot line, the moon is a penal colony, much like Australia 200 years ago, and an AI is one of the three heads of the revolution. The independence-seekers achieve victory by hurling asteroids at Earth as their most effective weapon.

I created this absurd 3D model in the imaginary world of Second Life as an amateur 3D assemblage. It was quick and dirty, like much digital artwork and as we see nowadays, like the fragility of truth.

The twist of Moon v Earth, at least in the 2018 version, is that the augments aren’t virtual at all; instead, they are constructed as physical augments hanging from fishing line or hot-glued against cardboard backing. At first, I tried working with AR technology, but soon discovered its compromises: device-dependence and a distance between the viewer and the experience. Instead, the physical objects show the fragile and fragmentary nature of the work in cheap cardboard facades and flimsy hanging structures distributed throughout the venue.

NextNewGames is at the San Jose ICA until September 16th, 2018

Farewell, Dinacon

I just spent 20 days on a sparsely-inhabited island in Thailand with about 80 artists, scientists and other imaginative people. Everyone worked on their own projects ranging from jungle-foraged dinners to plant-piloted drones to creating batteries from microbial energy. We had no AC for much of the day, got bitten by weaver ants, were surrounded by jungle cats and ate off each other’s plates. And, I absolutely loved the experience.

Microbial Battery Workshop

The gathering was Dinacon, the first Digital Naturalism Conference, co-organized by Tasneem Khan and Andrew Quitmeyer. I was a “node leader”, which meant that I spent a bit of time reviewing applications, organizing workshops and staying longer at the event than most.

Dinacon registration area

The site was Koh Lon, a small island just off the coast of Phuket. We stayed at a “resort”, which was actually fairly minimal, with small cabins, a common house and options for tent camping. From the main work area, you walk a few minutes in one direction and you’re on the beach. In the other direction is jungle. There were no cars on the island, a handful of scooters, two hundred or so local residents and not a single dog. The soundscape felt entirely tropical, with cicadas, birds and frogs filling the airwaves with their chatter. Our dinner was boated in each day, and at the small restaurant we could get the three essentials: wifi (when the power was on), breakfast + lunch, and beer.

Selfie with Koh Lon in the background

The participants came from all over the world, arriving and leaving at random times, so there was a constant inflow of new friends and outflow of sad goodbyes. Each day, we had about 40 people on the island. I could nerd out on my project, kayak in the water, take a break on the ship that we had access to (the Diva Andaman), find myself sitting in a chair sharing ideas, play with hermit crabs or get away from everyone and walk in the jungle. Helping one another was something that effortlessly emerged in our temporary community.

Saying hi to the Diva Andaman

Questions I asked myself upon arrival: what happens when you assemble a group of project-creating strangers in a natural environment, where you can take a break by putting on a pair of swim trunks and walking into the ocean? What does building things on the island, with its outdoor space and natural light, do to your creative practice? How can I prototype an artwork that collaborates with this specific place?

I quickly became a lot less efficient and much more connected to people and place. I ended up creating better work and my body was utterly relaxed. Any shoulder pain I might have in an office space dissipated quickly. There were no Google calendar invites, no afternoon soy lattes and certainly no eating at my desk.

I found myself in daily arrhythmic patterns of production, often sitting on my neighbor’s porch with headphones on and composing audio synth code, then stopping suddenly and reveling in nature. I would get interrupted to see a tree snake or find myself lost in conversation about someone’s project. In the evenings, we usually had self-organized small workshops or informal talks. I drank beer sometimes but also often went to bed early, worn out from the humidity and brain swell each day.

Arduino coding by sunset

I did make a thing! This experiment — a potential new artwork — has the working title of DinaSynth Quartet. It is a live audio-synth performance between a plant, the soil, the air and the water: an electronics installation designed to exist only outdoors. I connected each of these four “players” to sensors: the plant to electrodes, the ground to a soil-moisture sensor, the water to an EC sensor and the air to a humidity sensor.

Each one used a variation on my Sonaqua boards — a kit which I am actively using for workshops — to make a dynamic audio synth track, modulating bleeps and clicks according to their sensor readings, creating a concert performance of sorts.
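The sensor-to-sound mapping can be sketched roughly like this. This is not the actual Sonaqua/Mozzi code (the names and ranges here are my own illustrative assumptions); it only shows the core idea of rescaling a raw analog reading into synth parameters such as pitch and click rate:

```cpp
// Hypothetical mapping from a raw 10-bit ADC reading (0-1023) to synth
// parameters. The real DinaSynth/Sonaqua firmware differs; this is only a
// sketch of the sensor-driven modulation idea.
struct SynthParams {
    int frequencyHz;   // oscillator pitch
    int clickPeriodMs; // time between percussive clicks
};

// Linear rescale from one integer range to another, like Arduino's map()
int rescale(int x, int inLo, int inHi, int outLo, int outHi) {
    return outLo + (long)(x - inLo) * (outHi - outLo) / (inHi - inLo);
}

SynthParams paramsFromReading(int adcReading) {
    SynthParams p;
    // Low readings -> higher, sparser tones; high readings -> lower, busier ones.
    p.frequencyHz   = rescale(adcReading, 0, 1023, 880, 110);
    p.clickPeriodMs = rescale(adcReading, 0, 1023, 900, 100);
    return p;
}
```

With four such voices, each reading a different sensor, you get the quartet: each player’s bleeps drift as its patch of the environment changes.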

I’m not sure exactly where the work will go next, but I’m happy with the results. It was my first audio synth project and I’m far from being an expert, so I call my approach “beginner’s mind”. However, most of the participants liked the idea and the specific composition that the jungle played.

I already miss everyone there: Jen, Tina, David, Rana, Pom, Sebastian and so, so many other delightful friends. And this is one thing I love about the life I’ve created for myself as a new media artist: after events like this, I now have friends doing inspiring work all over the world.

Jungle-foraged dinner party


Kira’s birthday party

Putting on a heart rate sensor on one of the local cats

Sonaqua Workshop in the common space

Local lotus flower


Millipedes were everywhere


Soldering work in the main space

Dani doing a lizard dissection on the beach at sunset


Andrew holding a snake


Little Niko, my favorite of the Dinacon Cats

Dinacon: 2 more environmental synths

Dinacon — the Digital Naturalism Conference on the island of Koh Lon in Thailand — has been amazing. It’s been an opportunity to meet and collaborate with other artists, scientists, hackers, writers and more. The caliber of the participants has been extraordinary.

My art experiments have been around creating audio synth compositions from the environment, using low-cost sensors and custom electronics to make site-specific results.

In the last two days, I’ve made two composition-circuits: this one (below), which uses a soil sensor and tracks moisture in the sand.

And this one, which uses electrodes on plant leaves to simulate what the plants might be “saying”.

The GitHub repo for all my experiments is here.

Dinacon: First Audio Synth Recording

At Dinacon, I’m conducting many experiments with electronics, using audio synth and environmental sensors to make site-specific compositions.

I’m extending my Sonaqua custom boards to use the Mozzi audio synthesis library. Yesterday I put together my first mini-composition.

These will eventually lead to more dynamic 4-channel compositions and could also extend into some live performances by plants and the environment.

This is the first of several environmental sensors that I’m deploying — a humidity sensor produced by SparkFun.

With some post-processing in Adobe Audition, I smoothed out an annoying low-pitched whine. I still have loads to learn about the transition from algorithmically-generated sound to recording and getting the glitches out — I’m certainly no audio engineer.

But, I’m pleased with what my little board can do and am excited about more environmental sensors on this amazing little island of Koh Lon.

Oh and here is the GitHub Repo for Sonaqua_Dinacon.

Dinacon: A walking tour of Koh Lon Island

As I often do, when I get to a new place, I get lost. I follow the advice of Rebecca Solnit in A Field Guide to Getting Lost and just wander. Before establishing patterns, your perceptions are the most open and so the day after arriving at Dinacon, I wandered around the island and just looked at things.

Various boats at low tide.

Lots of garbage, unfortunately. I saw this as an opportunity. Perhaps to do some cleanup or more likely to use as scavenged materials for some sculptural-sound installations. This would harken back to my work several years ago as an artist-in-residence at Recology.

Patterns in architectures. Patterns in nature.

An active school.

Small trails everywhere. There are no cars here, and so one thing I noticed was that the soundscape is different. Sometimes you’ll hear a motorcycle or scooter, but even then, only occasionally.

Some sort of nest on a tree.

Intersection markers with plastic bags and red paint.

This island is quite large and much of it is impassable.

Holes in the sand into which crabs scurry.

So many coconuts.

Various signs, hand painted and more.     


Abandoned architecture.


New paths freshly cut by locals.

And as I was warned, if I venture out at low tide, I might be returning at high tide. Fortunately the water is warm and I was wearing shorts, so could wade back home.

Some thoughts about the work I’m doing here and ways I can engage with the space:

— Nature: there are plenty of plants, some amount of critters such as ants. How can I collaborate with various critters and foliage? Some of the things that are easily scavenged are bamboo, coconuts, dead coral and shells.

— Trash: what could be scavenged or collected to make temporary sculptures? Would this expand my practice here or should I stick with my original plan of electronics that make sounds? Perhaps I could put speakers inside of things that amplify the sound, like discarded gas cans.

— Architecture: there are some beautiful abandoned buildings and structures that no one seems to care about. I could probably do a performance or something in these spaces.


And finally, jungle cats!

Sonaqua at Currents 2018

I jokingly referred to my Sonaqua artwork as “the most annoying piece at the festival”. The exhibition was Currents New Media 2018, which was an incredible event.

It was a hit with the public and invited multi-user interaction. Kids went crazy for it. Adults seemed to enjoy the square-waves of audio glitch all night.

So yes, perhaps a tad abrasive, but it was also widely popular.

A number of people were intrigued by the water samples and electronics with what looked like a tangly mess of wires. It was actually a solid wiring job and nothing broke!

After working at the Exploratorium for a couple of years, I adjusted my approach to public engagement so that anyone can get something from this artwork.

How does it work?

The electrodes take a reading of the electrical current flowing through various water samples that I collected throughout New Mexico. If more current flows through the water, there are more dissolved minerals and salts, which is usually an indicator of less clean water.

The technical measurement is electrical conductivity, which correlates to total dissolved solids, which is one measure of water quality that scientists frequently use.

The installation plays lower tones for water that is more conductive (less pure) and higher tones for water with fewer pollutants in it.
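That inverse mapping can be sketched in a few lines. This is my illustrative reconstruction, not the installation’s actual code; the frequency range and the logarithmic scaling are assumptions, chosen because conductivity spans several orders of magnitude:

```cpp
#include <cmath>

// Hedged sketch of the Sonaqua tone mapping: lower pitch for more
// conductive (less pure) water, higher pitch for cleaner samples.
// Returns a frequency in Hz for a conductivity reading in uS/cm, or
// 0 to mean "silent" when essentially no current flows (a
// non-conductive sample produces no sound).
double toneForConductivity(double uSPerCm) {
    if (uSPerCm < 1.0) return 0.0;         // no current = no sound
    const double cleanHz = 1760.0;          // near-pure water -> high tone
    const double dirtyHz = 110.0;           // very conductive -> low tone
    // Log scale over roughly 1 to 10000 uS/cm
    double t = std::log10(uSPerCm) / 4.0;   // 0..1 across that span
    if (t > 1.0) t = 1.0;
    return cleanHz + t * (dirtyHz - cleanHz);
}
```

The silent case matters in practice: a sample that conducts no current (like the corn syrup mentioned later in this post) simply makes no sound.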

The results are unpredictable and fun, with 12 different water quality samples.

The light table is custom-built with etchings of New Mexico rivers and waterways, indicating where the original water sample was taken.






Gun Control (revisited)

My writing (below) was originally printed as part of the Disobedient Electronics project by Garnet Hertz. It is a limited-edition publishing project that highlights confrontational work from industrial designers, electronic artists, hackers and makers that disobeys conventions.


Gun Control (revisited)

In 2004, I created Gun Control — a set of four electromechanical sculptures, which used stepper motors, servos and cheap cameras that were controlled by AVR code. The distinguishing feature of each unit is a police-issue semi-automatic replica handgun. You can purchase these authentic-looking firearms for less than $100.

The make-believe weapons arrived in the mail a week after I ordered them. That night, I closed the blinds, drank too much whisky and danced around my apartment in my underwear waving my new guns around. The next morning, I packed them in a duffel bag and took the “L” in Chicago to my studio. During the 45-minute commute I felt like a criminal.

Each gun is connected to a stepper motor via a direct-drive shaft and flexible couplings. I used a lathe and a milling machine to make custom fittings. I hid the unsightly electronics in a custom-sewn leather pouch, resembling some sort of body bag.

As people enter the Gun Control installation space, the cameras track their movement, and the guns follow their motion. Well, at least this is what I had hoped it would do. However, I had committed to using the first-gen CMUcam, and its blob-tracking software was spotty at best. I was under a deadline. It was too late to spec out new cameras. Plus, these were the right size for the artwork, which was using decentralized embedded hardware. I shifted my focus to building a chaotic system.

I re-coded the installation so the guns would point at different targets. They would occasionally twirl about playfully and re-home themselves. I programmed the stepper motors to make the armatures shake and rattle when they got confusing target information. The software design embraced unpredictability, which made the whole artwork feel uncertain, embodying the primal emotion of fear.
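The behavior selection described above amounts to a small state machine. The sketch below is my hypothetical reconstruction, not the original AVR firmware (which I wrote in C long ago); the thresholds and names are invented for illustration:

```cpp
// Chaotic targeting sketch: confident tracking follows the visitor,
// low-confidence blob data makes the armature shake and rattle, and
// now and then the gun twirls playfully and re-homes itself.
enum class Behavior { Track, Shake, Twirl };

// confidence: 0..100 from the blob tracker; dieRoll: 0..99 random value.
Behavior chooseBehavior(int confidence, int dieRoll) {
    if (dieRoll < 5)     return Behavior::Twirl;  // rare playful twirl + re-home
    if (confidence < 40) return Behavior::Shake;  // confusing target information
    return Behavior::Track;                        // follow the visitor's motion
}
```

Each unit would run this loop independently, which is what gives the installation its decentralized, unpredictable feel.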

Gun Control was my heavy-handed response to the post-9/11 landscape and the onset of the Iraq War. I exhibited it twice, then packed it up. It lacked subtlety and tension. At the time, there was not enough room for the viewer.

Just last month, I pulled the artwork out of deep storage. I brought the pieces to my studio and plugged in one of the units. It functioned perfectly. Upon revisiting this piece after 12 years, my combination of guns and surveillance seems eerily prescient.

Mass shootings have drastically increased in the last several years. Surveillance is everywhere, both with physical cameras and the invisible data-tracking of internet servers. Documentation of police shootings of unarmed African Americans is, sadly, commonplace. I no longer recoil from the explicit violence of this old artwork.

I coded this using AVR microcontrollers, just before the Arduino was launched. It was tedious work just to get the various components working. I can no longer understand the lines of C code that I wrote many years ago. The younger me was technically smarter than the current me. My older self can put this historical piece into perspective. I plan to re-exhibit it in the coming years.

GitHub repo:

Collecting Sacred Fluids

I recently debuted a new art installation called Cybernetic Spirits at the L.A.S.T. Festival. This is an interactive electronic artwork where participants generate sonic arrangements based on various sacred fluids. These include both historical liquids-of-worship such as holy water, blood and breast milk, and more contemporary ones such as gasoline and coconut water.

My proposal got accepted. Next, I had to actually collect these fluids.

My original list included: blood, holy water, coffee, gasoline, adrenaline, breast milk, corn syrup, wine, Coca-Cola, coconut water, vaccine (measles), sweat and kombucha.

Some of these were easily procured at the local convenience store and a trip to the local gas pump. No problem.

But what about the others? I found holy water on Amazon, which didn’t surprise me, but then again this wasn’t anything I had ever thought about before.

I knew the medical ones would be the hardest: adrenaline and a measles vaccine. After hours scouring the internet and emailing with a doctor friend of mine, I realized I had to abandon these two. They were either prohibitively expensive or would require deceptive techniques that I wasn’t willing to try.

Art is a bag of failures, and I expected not to be entirely successful. Corn syrup surprised me, however. After my online shipment arrived, I discovered it was sticky and too thick. It is syrup, after all. Right. My electrical probes got gunky, and more to the point, it didn’t conduct any electrical current. No current = no sound.

Meanwhile, I put out feelers for the human bodily fluids: blood, sweat and breast milk. Although it was easy to find animal blood, what I really wanted was human blood (mine). I connected with a friend of a friend, who is a licensed nurse and supporter of the arts. After many emails, we arranged an in-home blood draw. I thought I’d be squeamish about watching my blood go into several vials (I needed 50ml for the installation), but instead was fascinated by the process. We used anti-coagulant to make it less clotty, but it still separated into a viscous section at the bottom.

Since I am unable to produce breast milk, I cautiously inquired with some good friends who are recent moms and found someone willing to help. So grateful! She supplied me with one baby-serving size of breast milk just a couple of days before the exhibition, so that it would preserve better. At this point, along with the human blood in the fridge, I was thankful that I live alone and didn’t have to explain what was going on to skeptical housemates.

I saved the sweat for the last minute, thinking that there was some easy way I could get sweaty in an exercise class and extract some. Once again a friend helped me, or at least tried, by going to an indoor cycling class and sweating into a cotton t-shirt. However, wringing it out produced maybe a drop or two of sweat, nowhere close to the required 50ml for the vials.

I was sweating over the sweat and really wanted it. I made more inquiries. One colleague suggested tears. Of course: blood, sweat and tears, though admittedly I felt like I was treading into Kiki Smith territory at this point.

So, I did a calculation on the amount of tears you would need to collect 50ml, and it would mean crying a river every day for about 8 months. Not enough time and not enough sadness.

Finally, just before shooting the documentation for the installation, the sweat came through. A friend’s father works for a company that produces artificial sweat and gave me 5 gallons of the mixture. It was a heavy thing to carry on BART, but I made it home without any spillage.

Artificial sweat? Seems gross and weird. The truth is a lot more sensible. A lot of companies need to test human sweat effects on products from wearable devices to steering wheels and it’s more efficient to make synthetic sweat than work with actual humans. Economics carves odd channels.

My artwork often takes me on odd paths of inquiry and this was no exception. Now, I just have to figure out what to do with all the sweat I have stored in my fridge. 





Reality Remix: Salon 1

I just returned from our first Reality Remix workshop in Dundee, Scotland. The prompt that we gave ourselves afterwards was to write up what came up for us, recurring thoughts, and what comes next. I write now on the train journey back to London.

The background is that Reality Remix is part of an Arts & Humanities Research Council (AHRC) grant around Immersive Experiences, and I am one of the “collaborators” (artists) — others include Ruth Gibson and Bruno Martelli (Ruth is the Principal Investigator), Joe DeLappe, Darshana Jayemanne, Alexa Pollman and Dustin Freeman. We are also working with several “partners” in academia, industry and government, who act as advisors and contribute in various ways. These are Nicolas Lambert (Ravensbourne University), Lauren Wright (Siobhan Davies Dance), Alex Woolner (Ads Reality) and Paul Callaghan (British Council).

The short project description is:

Reality Remix will explore how we move in and around the new spaces that emergent technologies afford. Through the development and examination of a group of prototypes, initiated from notions of Memory, Place and Performance and with a team of artists, computer programmers, fashion and game designers, we aim to uncover the mysteries of these new encounters, focussing on human behaviour, modes of moving, and kinaesthetic response. Reality Remix brings a unique dance perspective in the form of somatic enquiry that questions concepts of embodiment, sensory awareness, performance strategy, choreographic patterning and the value of touch in virtual worlds.

Within this framework, each of us will be developing our own VR/AR projects and possible collaborations might arise in the process.

Some of the reasons that I was invited to be part of this project stem from core inquiries about what we call “Gibsonian” cyberspace versus a simulated cyberspace. I find it odd that we so often depict virtual reality — and for the purposes of simplicity, I will treat VR as a subset of cyberspace — as a simulation, a weak reproduction of some sort of physical reality. VR has immense possibilities that most people don’t tap into. With the dominance of first-person shooter games, reproductions of museums, and non-spaces such as Tilt Brush, I have often wondered how we can conceptualize VR landscapes in new ways. And so, this was my starting point for our first session.

Improving the functionality of the headset

However, technical skills are not a prerequisite for producing compelling work, and, in fact, this is part of my artistic practice: I quickly adapt. For example, in 2014 I dove into 3D printing without knowledge of any real 3D modeling package and, in the space of a few months, produced some conceptually-driven 3D print work that drew strong responses. I will easily pick up Unity, Unreal, 3ds Max or whatever else is needed.

It is with this lack of technical knowledge that I can approach concepts with a beginner’s mind, a core concept of Buddhist thinking where you approach a situation without preconceptions and cultivate a disposition of openness. Without deep investment in the structures of discourse, you can ask questions about the effectiveness of the technology, such as the nature of immersive spaces, the bodiless hands of VR and hyperreality.

For this initial meeting, we each have our own project ideas that we will be researching and producing in various forms. Some of my own inquiries stem from these Gibsonian landscapes. On the train I re-read Neuromancer and was still inspired by this seminal quote:

Cyberspace. A consensual hallucination experienced daily by billions of legitimate operators, in every nation, by children being taught mathematical concepts… A graphic representation of data abstracted from banks of every computer in the human system. Unthinkable complexity. Lines of light ranged in the nonspace of the mind, clusters and constellations of data. Like city lights, receding…

I arrived with this general framework and began to ponder how to incorporate threads of previous work such as physical data-visualizations and network performance. What about the apparatus of the headset itself? How can we play with the fact that we are partially in real space and partially out? And as one of our partners (Alex) pointed out, rather than being immersed in VR, we are absorbed by it. Like a fish in water, we live in full reality immersion. And when we talked about this, I chuckled to myself, remembering this David Foster Wallace joke:

There are these two young fish swimming along and they happen to meet an older fish swimming the other way, who nods at them and says “Morning, boys. How’s the water?” And the two young fish swim on for a bit, and then eventually one of them looks over at the other and goes “What the hell is water?”

The first day was a full-on setup day: installing our Windows work machines and getting the Oculus Rifts, Google Pixel 2 phones, Unreal, Unity and everything else working. We all centralized on some specific platforms so we could easily share work with each other and invite possible collaborations. Fighting with a slow wi-fi connection was the biggest challenge here, but within a few hours I had my Alienware work machine making crude VR on the Oculus.

Goodies! This is the Oculus Rift hardware that we will all be using for project production

I yearned to explore Dundee despite the cold weather, but all of our time was spent in this workspace, supported by the NEoN Digital Arts Festival (where I showed my Strewn Fields pieces last November). In the evenings, we saw art lectures by our own Joe DeLappe and my friend and former colleague Sarah Brin, and we ate food and drank in the local speakeasy bar, chitchatting about ideas.

Lecture by Joe DeLappe

Within the first couple of days, what was previously a mystery to me became clearer. While some of the collaborators had well-developed projects (Dustin and Bruno/Ruth), others such as myself, Alexa and Joe were approaching it from a more conceptual angle with less technical aptitude.

I kept in mind that our goals for this project are to create compelling proof-of-concepts rather than finalized work, which makes it more of a forward-facing, research-oriented project rather than something that will compete in the already littered landscape of the good, bad and ugly of the Oculus Store.

We started each day with movement exercises led by Ruth, reminding us that we all live in “meatspace”. Our minds and bodies are not separate as we hunch over the machines and then stand with a headset on and wave our arms around. We began experimenting with the technology. I vacillated between diving into Unreal or Unity, each with their own advantages. While Unity has more generalized support and is easier to learn, Unreal undoubtedly has a better graphics engine and is the weapon of choice for Bruno. So, solving the early technical challenges began to help coalesce my ideas.

Ruth leading us on some Skinner Releasing exercises

We soon entered into a vortex of artistic energy — some of us from performance, wearables and immersive theater with various conceptual practices, and the partners from other organizations, who had a less artistic approach but loads of experience in the gaming world, the university community and impact studies. I knew this on paper, but in reality, the various talents of our Reality Remix dream team soon became apparent.

Twice a day, each of the collaborators led workshops related to their practices. Alexa treated us to her performance-based clothing, which registered AR markers, and asked us to do an exercise where we tried to perceive something through someone else. Bruno and I made a drawing where he wore the VR headset and I sketched on his back, which he replicated on paper. The process was fun and, predictably, the drawing was unimpressive.


Ruth is wearing fabric created by Alexa while she shows us some of her responsive AR augments


Our collaborative drawing in response to Alexa’s prompt

Dustin led us through a prototype for a sort of semi-immersive experience where actors jump into various avatars. With his deep background in improv, theater and role-playing, I began to shift, thinking about how to involve people not in a headset — who make up the majority of people in a VR experience — as essential players.

Darshana in the VR headset while Dustin demonstrates some ways in which “non-players” can interact in VR space

I am now wondering about intimacy and vulnerability in VR. There is a certain amount of trust in this space. You are blind and often suffused in another audio dimension. Could you, then, guide people through a VR experience like a child in a baby carriage? What can be done with multiple actors? So many questions. So many possibilities.

One thing that I was reminded of was the effectiveness of simple paper-prototyping and physical movement. Make things free from technology; keep it accessible. Stick to the concepts.

My own orientation began shifting more into virtual landscapes and thinking about data as the generator. I asked people to brainstorm various datasets and come up with some abstract representations based on that earlier quote from Neuromancer. I do want to get away from the sci-fi notion of cyberspace since it is limiting and enmeshed in VR 1.0, but will still claim this as the starting point to an antidote to the often mundane reality-simulation space.

This was useful for my own brainstorming. Alexa brought in an interesting point of view because she was thinking about time rather than landscape, opening conversations around anticipation, reality and memory, which reminded me of the work of Bergson. Her movements centered on personal data and what captured her attention.

Meanwhile, Ruth made marks on the wall, translating gesture to 2D, and Bruno worked with abstract visual forms. My own perception is highly visual and oriented toward abstract patterns, and despite my being a poor draftsman, the exercise succeeded in producing some renderings that were useful to me. I also quickly learned that a line-based VR landscape doesn’t resonate with everyone. The question remains: how can we incorporate movement into a system?

Drawings by Bruno Martelli in response to my workshop prompt

As a manifesto bullet item: the scriptedness of VR is something we would all like to break. With all the possibilities of VR, why do the dominant forms assume a feeling of immersion? Why don’t we consider what can be done before rushing to produce so much content?

Where is the element of surprise? VR is a solitary experience. I’m reminded of Joe’s work with the military and what one can do with a gaming space. Could we intervene or somehow interfere with VR space?

Darshana, the theorist among the group, gave a beautiful summary of the session. My head was spinning with ideas at this point, so I can’t recall everything he spoke about, but certainly ideas surfaced around how to be both engaging and critical in this space. He envisioned a nexus of abstract spatialization, performance, role play and the body that tied our various projects together.

Bruno, Ruth and Alexa wearing some of Bruno + Ruth’s Dazzle Masks

There is much more to write and think about. I made progress on the technical side of things, such as getting an OSC pathway into Unreal working, so that I can begin playing with electronic interfaces into a VR world.
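Under the hood, an OSC pathway is mostly a matter of emitting correctly padded OSC packets over UDP and letting an OSC plugin on the Unreal side parse them. As a rough illustration (not my actual residency code; the function name and address are hypothetical), here is how a single-float OSC 1.0 message is laid out, byte by byte:

```cpp
#include <cstdint>
#include <cstring>
#include <string>
#include <vector>

// Pad a buffer with zeros to a multiple of 4 bytes, per the OSC 1.0 spec.
// The do-while guarantees at least one zero, which also NUL-terminates strings.
static void padTo4(std::vector<uint8_t>& buf) {
    do { buf.push_back(0); } while (buf.size() % 4 != 0);
}

// Build a minimal OSC message: "/address" + ",f" type tag + one big-endian float.
std::vector<uint8_t> oscFloatMessage(const std::string& address, float value) {
    std::vector<uint8_t> buf(address.begin(), address.end());
    padTo4(buf);                       // padded address string
    buf.push_back(','); buf.push_back('f');
    padTo4(buf);                       // padded type tag string ",f"
    uint32_t bits;
    std::memcpy(&bits, &value, sizeof bits);
    for (int shift = 24; shift >= 0; shift -= 8)
        buf.push_back(static_cast<uint8_t>(bits >> shift));  // big-endian float
    return buf;
}
```

Sending the resulting bytes is then just a UDP datagram to whatever port the Unreal-side OSC receiver is listening on.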

More importantly, I feel like I’ve found my people with this Reality Remix team: one where we understand the history of new media and the subversion of forms, and aren’t dazzled by simplicity. We got along very well, with mutual respect and laughter. I’m excited about what comes next.

Reality Remix group photo

DIY Water Sensors Workshop

This write-up is a bit tardy, but that’s what happens when the holidays hit. In December, I hosted a DIY Water Sensors Workshop at Autodesk Pier 9 in collaboration with The Center for Investigative Reporting.

I’ve been fortunate enough to work at Autodesk, first as an artist-in-residence (2014) and for the last few years, running their Electronics Lab and more recently their Simulation Lab (VR/AR). For the workshop, we hosted a combination of journalists and members of the Autodesk community.

The idea for the workshop sprang out of my Sonaqua artwork. The project sonifies (makes sounds from) water quality by testing for electrical conductivity (EC), which correlates with pollution: the more heavy metals and minerals in the water, the higher the EC. It’s one of a number of measurements that scientists make in the field, along with indicators such as pH and dissolved oxygen.

That’s a brief summary of the artwork; what I wanted to do for the workshop was make the basic module circuit available for anyone to use. We breadboarded the basic circuit and, within a couple of hours, everyone was up and running, making sounds from water samples that they had brought in.
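The sensing principle behind a circuit like this is simple: put the water probe in series with a known resistor, read the voltage at the tap with the Arduino’s ADC, and solve the divider equation for the water’s resistance, whose inverse tracks conductivity. The sketch below shows only that math on the desktop, with illustrative component values; it is not the actual Sonaqua schematic, and `waterResistanceOhms` is a hypothetical helper name:

```cpp
#include <cstdlib>

// Illustrative voltage-divider math (NOT the actual Sonaqua schematic):
//   Vcc -- R_known -- [ADC tap] -- water probe -- GND
// A 10-bit ADC reading gives the tap voltage as a fraction of Vcc;
// from that we recover the water's resistance. Lower resistance means
// higher conductivity (EC), i.e. more dissolved minerals and metals.
double waterResistanceOhms(int adcReading, double rKnown = 10000.0,
                           int adcMax = 1023) {
    // Caller should keep the reading strictly below adcMax to avoid
    // dividing by zero (a reading of adcMax means an open circuit).
    double v = static_cast<double>(adcReading) / adcMax;  // fraction of Vcc
    // Divider: v = R_water / (R_known + R_water)  =>  solve for R_water
    return rKnown * v / (1.0 - v);
}
```

On the Arduino itself, the same formula would be fed by `analogRead()` on the tap pin.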

Working with the Center for Investigative Reporting (CIR) was valuable — afterwards, we got into a long discussion about data journalism. I was impressed with their breadth of projects and related works which include:

Sonifying the Seismic Activity in Oklahoma – tracks earthquake activity increases due to fracking

Wet Prince of BelAir – uses satellite data to find water-wasters during the big drought.

Cicada Tracker – a project by WNYC & Radiolab using Arduinos to predict the cycle of 17-year cicadas



After a few hours, the breadboarded circuits were complete! I mailed circuit boards, designed in Autodesk EagleCAD, to the workshop participants a few days later. There are always production delays, but they did get the boards in time for the holidays.

Photo credit, Blue Bergen, Autodesk

Sonaqua goes to Biocultura

Last month (yes, blogging can be slow) I traveled to Santa Fe with the support of Andrea Polli and taught a workshop on my Sonaqua project.

The basic idea of Sonaqua is to sonify — create sounds — based on water quality. As modules, these are Arduino-based and designed for a single user to make a sound. I’m actively teaching workshops on these and have open-sourced the software and made the hardware plans available.

Interested in a Sonaqua workshop? Then contact me.

My Sonaqua installation creates orchestral arrangements of water samples based on electrical conductivity. Here’s a link to the video that explains the installation, which I did in Bangkok this June.

Back to New Mexico. In the early part of the week, I taught a workshop on the Sonaqua circuit in one of Andrea’s classes at UNM, creating single-player modules for each student. We collected water samples and played each one separately. The students were fun and set up this small arrangement of water samples with progressive frequencies, almost like a scale.

The lower the pitch, the more polluted* the water sample and so higher-pitched samples might correspond to filtered drinking water.
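That mapping (higher conductivity, lower pitch) is easy to express in code. A minimal sketch of how such a mapping might look, with illustrative EC and frequency ranges rather than the actual Sonaqua calibration, and a hypothetical function name:

```cpp
#include <algorithm>
#include <cmath>

// Map electrical conductivity (microsiemens/cm) to a pitch in Hz:
// polluted water (high EC) gets a low drone, clean water a high tone.
// The EC and frequency ranges here are illustrative, not the real calibration.
double ecToFrequency(double ecMicroSiemens,
                     double ecMin = 50.0,  double ecMax = 3000.0,
                     double fLow = 110.0,  double fHigh = 880.0) {
    double t = (ecMicroSiemens - ecMin) / (ecMax - ecMin);
    t = std::clamp(t, 0.0, 1.0);
    // Interpolate on a log scale so equal EC steps feel like equal
    // musical intervals; inverted so that high EC maps to the low end.
    return fHigh * std::pow(fLow / fHigh, t);
}
```

The log-scale interpolation is a design choice: ears hear pitch logarithmically, so a linear EC-to-Hz map would bunch most samples into a narrow perceived range.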

Later in the week, I traveled to Biocultura in Santa Fe, which is a space that Andrea co-runs. Here, I installed the orchestral arrangement of the work, based on 12 water samples in New Mexico. She had a whole set of beakers and scientific-looking vessels, so I used what we had on hand and installed it on a shelf behind the presentation.

A physical map (hard to find!) of the sites where I took water samples.

And a close-up shot of one of the water samples + speakers. If you look closely, you can see an LED inside the water sample.

My face is obscured by the backlit screen. I presented my research with Sonaqua, as well as several other projects around water that evening to the Biocultura audience.

And afterwards, the attendees checked out the installation while I answered questions.

Photos from Longnow Talk

Last week, I gave a talk at the LongNow Interval space, detailing my interpretation of the term Art Thinking. More on that later. I also discussed a 4-part model of time and several art projects that I’ve made over the last several years.

It was one of my best talks and I felt so honored to be part of this series.

Here are some photos from the event.



Data Crystals at EVA

I just finished attending the EVA London conference this week, where I did a demonstration of my Data Crystals project. Below is the formal abstract for the demonstration; writing it helped clear up some of my ideas about the Data Crystals project and the digital fabrication of physical sculptures and installations.


Embodied Data and Digital Fabrication: Demonstration with Code and Materials
by Scott Kildall


Data has tangible consequences in the real world. Accordingly, physical data-visualizations have the potential to engage with the actual effects of the data itself. A data-generated sculpture or art installation is something that people can move around, through or inside of. They experience the dimensionality of data with their own natural perceptual mechanisms. However, creating physical data-visualizations presents unique material challenges, since these objects exist in stasis rather than in a virtual space with a guided UX design. In this demonstration, I will present my recent research into producing sculptures from data using my custom software code, which creates files for digital fabrication machines.


The overarching question that guides my work is: what does data look like? Referencing architecture, artwork of mine such as Data Crystals (figure 2) executes code that maps, stacks and assembles data “bricks” to form unique digital artifacts. The forms of these objects are impossible to predict from the original data-mapping, and the clustering code produces different variations each time it runs.
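The gather-until-touching idea behind that clustering can be sketched in a few lines. This is not the Data Crystals source (which is C++/openFrameworks), just an illustration under simplifying assumptions: each brick is a unit cube, and each pass pulls every brick toward the group centroid, freezing it in place once it would collide with another brick.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Brick { double x, y, z; };  // a unit data "brick" at this position

// Two unit cubes overlap if they are closer than 1 on every axis.
bool overlaps(const Brick& a, const Brick& b) {
    return std::fabs(a.x - b.x) < 1.0 && std::fabs(a.y - b.y) < 1.0 &&
           std::fabs(a.z - b.z) < 1.0;
}

// One illustrative clustering pass (not the actual Data Crystals code):
// step each brick toward the centroid, skipping any move that would
// cause a collision. Repeated passes gather scattered bricks into a
// crystal-like cluster whose final shape depends on the data.
void clusterStep(std::vector<Brick>& bricks, double step = 0.1) {
    double cx = 0, cy = 0, cz = 0;
    for (const Brick& b : bricks) { cx += b.x; cy += b.y; cz += b.z; }
    cx /= bricks.size(); cy /= bricks.size(); cz /= bricks.size();
    for (std::size_t i = 0; i < bricks.size(); ++i) {
        Brick moved = bricks[i];
        double dx = cx - moved.x, dy = cy - moved.y, dz = cz - moved.z;
        double d = std::sqrt(dx * dx + dy * dy + dz * dz);
        if (d < 1e-9) continue;  // already at the centroid
        moved.x += step * dx / d; moved.y += step * dy / d; moved.z += step * dz / d;
        bool blocked = false;
        for (std::size_t j = 0; j < bricks.size(); ++j)
            if (j != i && overlaps(moved, bricks[j])) { blocked = true; break; }
        if (!blocked) bricks[i] = moved;  // otherwise freeze in place
    }
}
```

Because the freeze order depends on which bricks collide first, runs over the same data can settle into different configurations, which is consistent with the variation described above.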

Other sculptures remove material through intense kinetic energy. Bad Data (figure 3) and Strewn Fields (figure 1) both use the waterjet machine to gouge data into physical material with a high-pressure stream of water. The materials in this case — aluminum honeycomb panels and stone slabs — react in adverse ways, splintering and deforming under the violence of the machine.

2.1 Material Expression

Physical data-visualizations act on materials instead of pixels, so there is a dialogue between the data and its material expression. Data Crystals depicts municipal data of San Francisco and has an otherworldly, ghostly quality of stacked and intersecting cubes. The data is served from a web portal and is situated in the urban architecture, so the 3D-printed bricks are an appropriate form of expression.

Bad Data captures data that is “bad” in the shallow sense of the word, rendering datasets such as Internet Data Breaches, Worldwide UFO Sightings or Mass Shootings in the United States. The water from the machine gouges and ruptures aluminum honeycomb material in unpredictable ways, similar to the way data tears apart our social fabric. This material is emblematic of the modern era, as aluminum began to be mass-refined at the end of the 19th century. These datasets exemplify conflicts of our times such as science/heresy and digital security/infiltration.

2.2 Frozen in Time

Once created, these sculptures cannot be endlessly altered like screen-based data visualizations. This challenges the artwork to work with fixed data or to consider the effect of capturing a specific moment.

For example, Strewn Fields is a data-visualization of meteorite impact data. When a large asteroid enters the earth’s atmosphere, it does so at a high velocity of approximately 30,000 km/hour. Before impact, it breaks up into thousands of small fragments (meteorites), which usually hit our planet in the ocean or at remote locations. The intense energy of the waterjet machine gouges the surface of each stone, mirroring the raw kinetic energy of a planetoid colliding with the surface of the Earth. The static etching captures the act of impact and survives as an antithetical gesture to the event itself. The actual remnants and debris (the meteorites) have been collected, sold and scattered; what remains is just a dataset, which I have translated into a physical form.

2.3 Formal Challenges to Sculpture

This sort of “data art” challenges the formal aspects of sculpture. Firstly, machine-generated artwork removes the artist’s hand from the work, building upon the legacy of algorithmic artwork by Sol LeWitt and others. Execution of the work is conducted by the stepper motor rather than by the gestures of the artist.

Secondly, the input data takes unknowable forms until it is actually rendered. The patterns are neither mathematical nor random, giving a certain perceptual coherence to the work. Data Crystals: Crime Incidents has 30,000 data points. Using code-based clustering algorithms, it creates forms only recently possible with the combination of digital fabrication and large amounts of data.


My sculpture-generation tools are custom-developed in C++ using openFrameworks, an open-source toolkit, and my code repositories are on GitHub. My software bypasses any conventional modeling package. It can handle very complex geometry and, more importantly, doesn’t have the “look” that a program such as Rhino/Grasshopper generates.

3.1 Direct-to-Machine

My process of data-translation is optimized for specific machines. Data Crystals generates STL files, which most 3D printers can read. For the waterjet machine, my code generates PostScript (.ps) files. The conversation with the machine itself is direct. During the production and iteration process, once I define the workflow, the refinements proceed quickly. The process is optimized, like the machines that create the artwork.
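Writing STL directly is less exotic than it sounds: ASCII STL is just a list of triangles, each with a normal and three vertices. A minimal sketch (not my production exporter; the function names are hypothetical) that emits the two triangles of one axis-aligned brick face shows the whole format:

```cpp
#include <sstream>
#include <string>

// Emit one ASCII STL facet: a normal followed by a three-vertex loop.
std::string facet(double nx, double ny, double nz, const double v[3][3]) {
    std::ostringstream s;
    s << "  facet normal " << nx << ' ' << ny << ' ' << nz << "\n"
      << "    outer loop\n";
    for (int i = 0; i < 3; ++i)
        s << "      vertex " << v[i][0] << ' ' << v[i][1] << ' '
          << v[i][2] << "\n";
    s << "    endloop\n  endfacet\n";
    return s.str();
}

// A complete (if trivial) STL solid: the +Z face of a square brick,
// split into two triangles. A full brick would emit all six faces.
std::string quadTopFace(double x, double y, double z, double size) {
    const double a = x, b = x + size, c = y, d = y + size;
    const double t1[3][3] = {{a, c, z}, {b, c, z}, {b, d, z}};
    const double t2[3][3] = {{a, c, z}, {b, d, z}, {a, d, z}};
    return "solid brick\n" + facet(0, 0, 1, t1) + facet(0, 0, 1, t2) +
           "endsolid brick\n";
}
```

Concatenating facets like this for every brick in a cluster yields a file that slicers accept as-is, which is why no modeling package is needed in between.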

3.2 London Layering

In my demonstration, I will use various open data from London. I focus not on data that I want to acquire but rather on data that I can acquire. I will demonstrate a custom build of Data Crystals that shows multiple layers of municipal data, and I will run clustering algorithms to create several Data Crystals for the City of London.


Figure 1: Strewn Fields (2016)
by Scott Kildall
Waterjet-etched stone

Figure 2:
Data Crystals: Crime Incidents (2014)
by Scott Kildall
3D-print mounted on wood

Figure 3:
Bad Data: U.S. Mass Shootings (2015)
by Scott Kildall
Waterjet-etched aluminum honeycomb panel

Playing with the e-mail scammers

When someone sends you an email scam, think of it as an opportunity for fun. After several responses, they stopped replying to my emails.

Here is the exchange:


Good Day,
How is everything with you? I picked interest in your artwork and decided to write you. I will like to know if your artwork can be purchased and shipped internationally?. I can email the artwork of interest and payment will be completed in full once you confirm my purchase order with a quotation.
Kindly let me know when you are in office and ready to take my artwork order also let me know if you accept either Visa Card or Master Card for payment furthermore you can email me your recently updated website or art price list in your response.
Best Regards

Hi Yoshida,

Thank you for contacting me.

I’m curious which artwork you are interested in, I have available:

(1) Shoe-gazing — a 96-hour performance art video of me looking at my shoes. Audio track is optional.

(2) MDMA Buttplug — I think the title says it all. Leave it to your imagination.

(3) The Salmonella Experience — A crowdsourced experiment on Mechanical Turk, where I send people salmonella-infested eggs, which they ingest and document over a 4-day period.

Scott Kildall

Hi Scott,
Good to hear from you please can you email me the cost of three available pieces


Hi Yoshida,

Which one do you like best from my list?

That is the most important question. Price is secondary.


(3) The Salmonella Experience — A crowdsourced experiment on Mechanical Turk, where I send people salmonella-infested eggs, which they ingest and document over a 4-day period.


Hi Yoshida,

Thank you for choosing The Salmonella Experience.

I had thought that MDMA Buttplug would be more to your liking, for some reason. I do want to give you one last chance to reconsider. For, once we go down a financial path, then we cannot turn back and choose another artwork.

So, are you sure about The Salmonella Experience?

Question: What attracted you to this project over the other ones that were available?

Thank you,
Scott Kildall

<no response after this one…>

A friend of mine pointed me to this TED talk by James Veitch. So, obviously I’m not the first: