I Started an Art Residency

In August, I quit my job at Autodesk Pier 9 as a Shop Staff member in their unique fabrication facility. I had started there as an artist-in-residence in 2014 and continued on as a part-time employee to help other artists realize amazing projects, specifically teaching them electronics, coding and virtual reality techniques.

Everything changes. The company I worked for is no longer supporting artists as it once did. I’ve spent the summer doing some soul-searching. I reflected on what was special about the Pier 9 Artist-in-Residence program. I moved on.

What I have for you is Xenoform Labs — a new experiment that I launched just this October.

 

Xenoform Labs, located in the Mission District of San Francisco, is my studio, a workshop space and an art residency program.

That’s right, an artist-in-residency program.

Art Residency

The Xenoform Labs Residency is an invitation-only art residency program for new media artists from outside the Bay Area. I provide free housing and studio space for one month for one selected artist or couple. The studio supports digital media, virtual reality hardware, media production and light fabrication. During the residency period, I will host events for the artists to connect with local thinkers, artists and curators in the Bay Area. I hope to support 2-3 artists per year with flexible timing.

The idea for the residency is to provide a space for experimentation and the development of new works and ideas. I hope to support open-ended inquiry and possible collaborations instead of production work for a final exhibition.

The First Artists-in-Residence

My first guests were Ruth Gibson and Bruno Martelli, who are friends and colleagues that I met at the Banff New Media Institute in 2009.

Bio

British electronic arts duo Gibson / Martelli make live simulations using performance capture, computer generated models and an array of technologies including Virtual Reality. Artworks of infinite duration are built within game engines where surround sound heightens the sense of immersion. Playfully addressing the position of the self, the artists examine ideas of player, performer and visitor – intertwining familiar tropes of videogames and art traditions of figure & landscape.

Here is what they proposed and worked on:

Extending the physical into the virtual, we will work to develop novel body-based interfaces for virtual reality. One of the drivers for our research has been thinking about how we can interface performance with virtual reality. In earlier works, performers were motion-captured and avatars visualized in a kind of ‘inside-out’ performance-in-the-round. Taking this a step further, we see live performance being ‘beamed’ into a virtual space using electronics hardware. The idea for this residency is about extending Ruth’s somatic practice into virtual space – so that the user experiences it more as a visceral sensation, rather than as a primarily visual experience. This will be enabled, perhaps, by creating physical interfaces that subtly encourage this.


One month wasn’t long enough! I miss them already.

I felt energized

There is nothing quite like setting up an art residency while it is happening.

The studio space is in an apartment. One kitchen. Two studio rooms. A small balcony in the backyard. Ruth and Bruno stayed downstairs in a separate bedroom. Located in the center of the Mission, the Xenoform Labs Residency was a vortex of activity.

We set up a common workspace in the front room. Since it is also my studio, I was essentially co-working with the artists. I stocked the kitchen with dishes on loan, put vinyl for Xenoform Labs on the front door, added my own artwork to the center room, bought some camping chairs so we could lounge on the back deck and did loads of other small things to make it feel homey.

In the evenings, we would all work or go out to an art event. Ruth and Bruno met colleagues: curators, artists, thinkers, technologists and many more. At times it was overwhelming…I hope in a good way.

I discovered that the Xenoform Labs Residency could be a site for conversations around leveraging technology as a critical art practice. With this residency, I plan to slowly build networks of cross-geographical understanding and experimentation around new media art.

 

What I learned

First of all, breath sensors are a remarkable way to navigate in VR. Ruth and Bruno assembled a sensor that monitors the rhythm of your lungs as you inhale and exhale. On the inhale, you ascend in VR space, and as you exhale, you descend. It was magical, like scuba diving.
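I don’t have Ruth and Bruno’s code, but the mapping is simple enough to sketch. Here is a minimal, hypothetical Python version of the idea, assuming a sensor that delivers normalized breath readings (0.0 = fully exhaled, 1.0 = fully inhaled): you rise while inhaling and sink while exhaling.

```python
# A hypothetical sketch of the breath-navigation mapping (not the residents'
# actual code). Assumes normalized breath readings from a chest sensor:
# 0.0 = fully exhaled, 1.0 = fully inhaled.
import math

def update_altitude(altitude, breath, dt, speed=2.0, neutral=0.5):
    """Ascend while inhaling (breath > neutral), descend while exhaling."""
    velocity = (breath - neutral) * speed  # signed vertical velocity
    return altitude + velocity * dt

# Simulate one full breath cycle at 60 frames per second.
altitude = 0.0
for frame in range(120):
    breath = 0.5 + 0.5 * math.sin(2 * math.pi * frame / 120)  # fake sensor
    altitude = update_altitude(altitude, breath, dt=1.0 / 60.0)
print(round(altitude, 3))  # ~0.0: a full inhale/exhale cycle cancels out
```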

Also, I love hosting and giving people a reason to come over to the studio — engaging their curiosity with the work and experiments of Ruth & Bruno. We hosted three separate events: a meet n’ greet, a VR salon and a closing event. The format for all of these was casual: invited guests could drop in and see what the residents were doing. This event structure worked well.

And there are a lot of mundane things to do: cleaning, grocery-shopping, finding tools, directing the residents to the best coffee joint in town and so on. I embraced being a tour guide and was also grateful that they pitched in on dishes and were lovely housemates.

Finally, the visitors were super-enthusiastic. The concept of doing a small-scale residency in San Francisco generated so much interest and support amongst friends and colleagues. As is often the case, you go through a moment of imposter syndrome. Is this really happening? Yes, it is, because the residents arrived, made art and people came over to talk with them. It’s the real deal.

What’s Next?

Good question!

In the short-term, I’m setting up a private workshop + talk series starting in early December, with programming in January and February. You can find all the workshops here.

Of course, I’m working on the next round of residents for 2019. It’s an invite-only program for artists who live outside of San Francisco. I am open to nominations, just contact me here.

This turns out to be more work than I thought! I’m trying to find the right fit: not just any qualified artist, but one who is angling to be on the more social side and yearns for conversation and connection to the unique cultural scene in the Bay Area.

Plus, there is a lot of maneuvering around complex schedules. In the last few weeks, I’ve developed a deep empathy for arts administrators.

Yes. I’m quite excited. 

GPS Tracks

I am building water quality sensors which will capture geolocated data. This was my first test with this technology. This is part of my ongoing research at the Santa Fe Water Rights residency (March-April) and for the American Arts Incubator program in Thailand (May-June).

This GPS data-logging shield from Adafruit arrived yesterday and after a couple of hours of code-wrestling, I was able to capture the latitude and longitude to a CSV data file.

This is me walking from my studio at SFAI to the bedroom. The GPS signal at this range (100m) fluctuates greatly, but I like the odd compositional results. I did the plotting in OpenFrameworks, my tool-of-choice for displaying data that will be later transformed into sculptural results.

The second one is me driving in the car for a distance of about 2km. The tracks are much smoother. If you look closely, you can see where I stopped at the various traffic lights.

Now, GPS tracking alone isn’t super-compelling, and there are many mapping apps that will do this for you. But as soon as I can attach water sensor data to latitude/longitude, it can transform into something much more interesting, as the data will become multi-dimensional.
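As a sketch of where this is headed (the real plotting happens in OpenFrameworks, and the water sensor here is a stub), the logging step could look something like this: parse a standard NMEA $GPGGA sentence of the kind the Adafruit shield produces, attach a sensor reading, and append a CSV row.

```python
# Hypothetical sketch of geolocated sensor logging (not the shield firmware):
# parse a standard NMEA $GPGGA sentence, attach a stubbed water-sensor value,
# and append one CSV row per GPS fix.
import csv

def nmea_to_decimal(coord, hemisphere):
    """Convert NMEA ddmm.mmmm (or dddmm.mmmm) to signed decimal degrees."""
    degrees = int(float(coord) / 100)
    minutes = float(coord) - degrees * 100
    value = degrees + minutes / 60.0
    return -value if hemisphere in ("S", "W") else value

def read_water_sensor():
    return 7.2  # placeholder: a real probe (e.g., pH) would be sampled here

sentence = "$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47"
fields = sentence.split(",")
lat = nmea_to_decimal(fields[2], fields[3])
lon = nmea_to_decimal(fields[4], fields[5])

with open("water_log.csv", "a", newline="") as f:
    csv.writer(f).writerow([fields[1], lat, lon, read_water_sensor()])
```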

Views from 9000 feet

9000 feet in the air gives you an entirely different perspective on the world. Last Sunday, I had the opportunity to fly in a single-engine Cessna with my old friend, Gary. His plane was from the 1970s and had similar instrumentation to my dad’s plane from the same era.

My father, coincidentally also named Gary, loved flying. When I was a kid, he took me up in his plane for countless hours. It’s been about 35 years since I’ve been in a small plane like this. It was comforting, loud, fun and magical.

Although I have no interest in being a pilot, I certainly appreciated the view. Moving slowly (130 mph) at 9000 feet gives you the opportunity to see the landscape at a slower pace, and at such a low altitude I saw dimensionality in the terrain unlike anything I’ve seen in a long time.

The folds of the hills, the washouts from snow melt and the various waterways fascinated me. The odd manmade structures and dirt access roads punctuated the depopulated desert terrain.

I saw the acequias — community-owned irrigation canals for family farms, which delineated the parcels of land. As they say, water is life. The area has no agribusiness here, just family farms, often growing alfalfa on the side in addition to a day job.

I gazed at the results of the San Juan-Chama Project, which diverts water from the Colorado River basin through the Abiquiu Dam on the Rio Chama and into the Rio Grande so that Santa Fe and Albuquerque can have drinking water.

The most fantastic sight was the Rio Grande Gorge near Taos. Here you can see how the Earth was split apart by tectonic forces. Rather than carving its own path, the Rio Grande trickles through the gorge because it’s the easiest way for the water to flow.

After a couple hours and a lunch stop, we landed back on the ground. I was again bound by gravity as I drove back to Santa Fe, along the highway that earlier that day I had seen from the sky.

Santa Fe River Walk

When you get to a new place, take a long walk. This is essential to ground yourself in that space. Rebecca Solnit writes about it; Guy Debord speaks of diverting the stream of capitalism with it; Richard Long incorporates it into his art practice.

Just after unpacking at a new art residency, Water Rights at the Santa Fe Art Institute, I went on a walk up the Santa Fe River with two of my fellow residents, Christina Catanese and Megan Heeres.

Santa Fe is a new place with new people. Before jumping into studio practice, which can be a crutch for compulsive art-making, I wanted to engage with the physical environment. At the residency, the purpose will be to open the mind and the art practice.

We started at Frenchy’s Field and walked up the riverbed itself towards downtown. We walked, talked and observed.

At the head of the trail was a poem kiosk with laminated sheets of poetry and a little shelf full of rocks. The riverbed here was dry and sandy.

We began walking in the bed itself. Christina is a trained hydrologist and Megan knows much about plants. I know a little bit about geology after my Strewn Fields project.

At the start of the walk, we encountered a collection of heart-shaped rocks, obviously put here by humans. I love this organically-generated “land art”.

We wondered why these large rocks were stacked this way. Was it for humans? Or for the river? Christina later determined that it was to control the river flow, as future steps required tricky traversals.

Here I am with a backpack full of branches that I collected. I’m specifically intrigued by the salt cedar, which is an invasive species that was brought to the area many years ago as a windbreak for agriculture. Oops, as is often the case, the introduction of a new species created more problems than it solved. The salt cedar is a water-sucker and consumes the area’s most precious resource.

Here is the “rock penitentiary”: maybe these rocks were bad and had to be put behind fencing.

And here is a rock that escaped. Fly away, be free!

Under a bridge, we found a rope swing. Wheeee!

As we traversed further, the salt cedar thinned out and we saw various grasses along the banks of the (dry) river.

And I found my own heart-shaped rock. A beautiful specimen, which looks like two geological samples that were grafted together.

We took a side path and disturbed two birds of prey who had been feasting on this treat.

Around the mid-point of the walk, we started seeing icy formations.

I love these alluring crystalline structures surrounding various stones.

And the ground was damp. We noticed various animal prints. What was this? I still do not know. The front foot matches the hind foot, which seems like an odd walking pattern.

Finally, we began to see actual water with this miniature waterfall.

As we approached downtown, there was more and more human-generated waste.

And one shoe? Who loses a single shoe?

At the end of the walk was a patch of rainbow in the sky.

Movies about Water

A few days ago, I asked on Facebook:

What’s your favorite movie about water? We’re doing a Monday movie night at the Water Rights residency and I’m taking suggestions. Narrative or documentary, but not exceedingly lengthy.

78 responses! Here is the list, in order of posting, which has fewer than 78 entries because there were duplicates:

Blue Planet
SlingShot
Chinatown
Milagro Bean Field War
Riding Giants
Step Into Liquid
Deliverance
One Water
Jaws
Dune
Waterworld
Force 10 from Navarone
Knife in the Water
The Abyss
Darwin’s Nightmare
Marvelous Resources
Dripping Water (Joyce Wieland, Michael Snow)
Into Blue
Sharknado
Even the Rain
Salween Spring (Travis Winn)
Glass-memory of Water (Leighton Pierce)
The Old Man and the Sea
Flow
Gasland
Plagues & Pleasures on the Salton Sea
Water Warriors!
The Swimmer!
Peter Hutton (various films)
Erin Brockovich
Paddle to the Sea
H20 Film (Ralph Steiner)
Titanic
Splash
Point Break
Patagonia Rising
Step Into Liquid
Civil Action
Trouble the Water
Like Water for Chocolate
Guy Sherwin’s black and white film of his daughter watering shadows. Prelude – 1996, 12 mins
Moana
The Same River Twice
The Illustrated Man
Whale Rider
The Gods Must Be Crazy
The Big Blue
The Dry Summer
Joe Versus the Volcano
Watermark
Water & Power: A California Heist
The Finest Hours
The Woman in the Dunes
Total Recall (first one)
Tears
“Water Wrackets” by Peter Greenaway
“Watersmith” by Will Hindle
My Winnipeg

Joining SETI as an artist-in-residence

The SETI Institute just announced their new cohort of artists-in-residence for 2016 and I couldn’t be happier to be joining this amazing organization for a long-term (up to 2 years!) stint.

This includes a crew of other amazing artists: Dario Robleto (Conceptual Artist, Houston), Rachel Sussman (Photographer, Artist, Writer, New York), George Bolster (Filmmaker, Artist, New York), Jen Bervin (Visual Artist, Writer, Brooklyn), David Neumann (Choreographer, New York). The SETI Air program is spearheaded by Charles Lindsay (artist) and Denise Markonish (curator at MASS MoCA). I first met Charles at ISEA 2012 in Albuquerque, New Mexico when we were on the same panel around space-related artwork.

On January 13, 2016, at 7pm, in San Francisco’s Millennium Tower, SETI Institute President and CEO Bill Diamond will formally welcome the incoming artists and our new institutional partners, as well as patrons and friends of the program. This event is invitational and seating is limited.

So, what will I be working on?

Well, this follows on the heels of a number of artworks related to space, such as Tweets in Space (in collaboration with Nathaniel Stern), Uncertain Location, Black Hole Series and Moon v Earth, which were meditations on metaphors of space and potential.

Roughly speaking, I will be researching, mapping and creating installations of asteroid, meteor and meteorite data, and working with scientists such as Peter Jenniskens, who is an expert on meteor showers. These will be physical data-visualizations (installations, sculptures, etc.), which follow my interests in digital fabrication and code with projects such as Water Works.

What specifically fascinates me about the potential between outer space and the Earth is the metaphor of both danger and possibility from above. These range from numerous spiritual interpretations to practical ones, such as the extinction of the human race or the possibility of organic material from other planets being carried to our solar system. Despite appearances to the contrary, Earth is not only a fragile ecosystem but also one that could easily be transformed from outside.

And already I have begun mapping some meteor showers with my custom 3D software, working in collaboration with Dr. Jenniskens and a dataset of ~230,000 meteors recorded over Northern California in the last few years. This makes the data-space-geek in me very happy.
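To give a flavor of the mapping (the real work happens in my custom 3D software; the two sample meteors below are invented for illustration), here is a minimal Python sketch that draws meteor tracks as 3D line segments:

```python
# A toy stand-in for the custom 3D mapping software: plot each meteor as a
# line segment in 3D. The two sample tracks below are invented, not from
# Dr. Jenniskens' dataset.
import matplotlib.pyplot as plt

# Each meteor: (lon, lat, altitude_km) at the begin and end of its track.
meteors = [
    ((-122.1, 38.2, 95.0), (-122.3, 38.0, 60.0)),
    ((-121.7, 37.9, 88.0), (-121.9, 38.1, 55.0)),
]

ax = plt.figure().add_subplot(projection="3d")
for begin, end in meteors:
    ax.plot([begin[0], end[0]], [begin[1], end[1]], [begin[2], end[2]])
ax.set_xlabel("longitude")
ax.set_ylabel("latitude")
ax.set_zlabel("altitude (km)")
plt.show()
```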

Stay subscribed for more.

And I will heed Carl Sagan’s words: “Imagination will often carry us to worlds that never were, but without it we go nowhere.”

EquityBot goes live!

During my time at Impakt as an artist-in-residence, I have been working on a new project called EquityBot, which is an online commission from Impakt. It fits well into the festival’s Soft Machines theme, where machines integrate with the soft, emotional world.

EquityBot exists entirely as a networked art or “net art” project, meaning that it lives in the “cloud” and has no physical form. For those of you who are Twitter users, you can follow it on Twitter: @equitybot

What is EquityBot? Many people have asked me that question.

EquityBot is a stock-trading algorithm that “invests” in emotions such as anger, joy, disgust and amazement. It relies on a classification system of twenty-four emotions, developed by psychologist and scholar, Robert Plutchik.

How it works
During stock market hours, EquityBot continually tracks worldwide emotions on Twitter to gauge how people are feeling. In the simple data-visualization below, which is generated automatically by EquityBot, the larger circles indicate the more prominent emotions that people are Tweeting about.

At this point in time, just 1 hour after the stock market opened on October 28th, people were expressing emotions of disgust, interest and fear more prominently than others. During the course of the day, the emotions contained in Tweets continually shift in response to world events and many other unknown factors.

EquityBot then uses various statistical correlation equations to find patterns matching changes in emotions on Twitter to fluctuations in stock prices. The details are thorny, so I’ll skip the boring stuff. My time did involve a lot of work with scatterplots, which looked something like this.

Once EquityBot sees a viable pattern, for example that “Google” is consistently correlated to “anger” and that anger is a trending emotion on Twitter, EquityBot will issue a BUY order on the stock.

Conversely, if Google is correlated to anger, and the Tweets about anger are rapidly going down, EquityBot will issue a SELL order on the stock.
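Stripped of the thorny details, the decision logic boils down to something like the sketch below. This is illustrative only, not the production pipeline: it correlates an emotion’s hourly Tweet counts with a stock’s hourly prices, then maps the correlation and the emotion’s trend to BUY or SELL.

```python
# Simplified sketch of the correlation-and-trend logic described above
# (illustrative only; all data here is fake).
from scipy.stats import pearsonr

def trade_signal(emotion_counts, stock_prices, threshold=0.7):
    """BUY if the stock tracks a rising emotion, SELL if a falling one."""
    r, _ = pearsonr(emotion_counts, stock_prices)
    if abs(r) < threshold:
        return None  # no viable pattern
    rising = emotion_counts[-1] > emotion_counts[0]
    # positive correlation + rising emotion -> expect the stock to rise
    return "BUY" if (r > 0) == rising else "SELL"

anger = [120, 150, 180, 240, 300]              # hourly Tweet counts
google = [550.0, 552.5, 555.1, 559.8, 563.0]   # hourly prices
print(trade_signal(anger, google))             # -> BUY
```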

EquityBot runs a simulated investment account, seeded with $100,000 of imaginary money.

In my first few days of testing, EquityBot “lost” nearly $2000. This is why I’m not using real money!

Disclaimer: EquityBot is not a licensed financial advisor, so please don’t follow its stock investment patterns.

The project treats human feelings as tradable commodities. It will track how “profitable” different emotions are over the course of months. As a social commentary, I propose a future scenario in which just about anything can be traded, including that which is ultimately human: the very emotions that separate us from a machine.

If a computer cannot be emotional, at the very least it can broker trades of emotions on a stock exchange.

As a networked artwork, EquityBot generates these simple data visualizations autonomously (they will get better, I promise).

Its Twitter account (@equitybot) serves as a performance vehicle, where the artwork “lives”. Also, all of these visualizations are interactive and on the EquityBot website: equitybot.org.

I don’t know if there is a correlation between emotions in Tweets and stock prices. No one does. I am working with the hypothesis that there is some sort of pattern involved. We will see over time. The project goes “live” on October 29th, 2014, which is the day of the opening of the Impakt Festival and I will let the first experiment run for 3 months to see what happens.

Feedback is always appreciated, you can find me, Scott Kildall, here at: @kildall.

 

Data-Visualizing + Tweeting Sentiments

It’s been a busy couple of weeks working on the EquityBot project, which will be ready for the upcoming Impakt Festival. Well, at least a functional prototype of this ongoing research project will be online for public consumption.

The good news is that the Twitter stream is now live. You can follow EquityBot here.

EquityBot now tweets images of data-visualizations on its own; it is autonomous. I’m constantly surprised, and a bit nervous, by its Tweets.

At the end of last week, I put together a basic data visualization using D3, which is a powerful JavaScript data-visualization tool.

Using code from Jim Vallandingham, in just one evening I created dynamically-generated bubble maps of Twitter sentiments as they arrive from EquityBot’s own sentiment analysis engine.

I mapped the colors directly from the Plutchik wheel of emotions, which is why they are still a little wonky; for example, the emotion of Grief is unreadable. This will be fixed.

I did some screen captures and put them on my Facebook and Twitter feeds. I soon discovered that people were far more interested in images of the data visualizations than in plain text describing the emotions.

I was faced with a geeky problem: how to get my Twitterbot to generate images of the data visualizations using D3, a front-end Javascript client? I figured it out eventually, after stepping into a few rabbit holes.

I ended up using PhantomJS, the Selenium web driver and my own Python management code to solve the problem. The biggest hurdle was getting Google webfonts to render properly. Trust me, you don’t want to know the details.
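For the curious, the gist (minus the webfont pain) fits in a few lines. A simplified sketch: Selenium drives PhantomJS, a headless WebKit browser, to load the chart page and save a screenshot. The local URL is hypothetical, and the PhantomJS driver has since been deprecated in Selenium, but this is the shape of it.

```python
# Simplified sketch of rendering a D3 page to a PNG with PhantomJS via
# Selenium. Assumes PhantomJS is installed and the chart is served locally
# at a hypothetical URL.
import time
from selenium import webdriver

driver = webdriver.PhantomJS()        # headless WebKit browser
driver.set_window_size(1024, 768)     # viewport size = image size
driver.get("http://localhost:8000/bubbles.html")  # hypothetical chart page
time.sleep(3)                         # let D3 transitions and webfonts settle
driver.save_screenshot("sentiment_bubbles.png")
driver.quit()
```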

But I’m happy with the results. EquityBot will now move on to other Tweetable data-visualizations such as its own simulated bank account, stock correlations and sentiment-stock pairings.

Blueprint for EquityBot

For my latest project, EquityBot, I’ve been researching, building and writing code during my 2 month residency at Impakt Works in Utrecht (Netherlands).

EquityBot is going through its final testing cycles before a public announcement on Twitter. For those of you who are Bot fans, I’ll go ahead and slip you EquityBot’s Twitter feed: https://twitter.com/equitybot

The initial code-work has involved configuration of a back-end server that does many things, including “capturing” Twitter sentiments, tracking fluctuations in the stock market and running correlation algorithms.

I know, I know, it sounds boring. Often it is. After all, the result of many hours of work: a series of well-formatted JSON files. Blah.

But it’s like building city infrastructure: now that I have the EquityBot Server more or less working, it’s been incredibly reliable, cheap and customizable. It can act as a Twitterbot, a data server and a data visualization engine using D3.

This type of programming is yet another skill in my Creative Coding arsenal. It consists mostly of Python code that lives on a Linode server, a low-cost alternative to options like HostGator or GoDaddy, which incur high monthly costs. And there’s a geeky sense of satisfaction in creating a well-oiled software engine.

The EquityBot Server looks like a jumble of Python and PHP scripts. I cannot possibly explain it in excruciating detail, nor would anyone in their right mind want to wade through the technical details.

Instead, I wrote up a blueprint for this project.

For those of you who are familiar with my art projects, this style of blueprint may look familiar. I adapted this design from my 2049 Series, which are laser-etched and painted blueprints of imaginary devices. I made these while an artist-in-residence at Recology San Francisco in 2011.

EquityBot: Capturing Emotions

In my ongoing research and development of EquityBot — a stock-trading bot* with a philanthropic personality, which is my residency project at Impakt Works — I’ve been researching various emotional models for humans.

The code I’m developing will try to make correlations between stock prices and group emotions on Twitter. It’s a daunting task and one where I’m not sure what the signal-to-noise ratio will be (see disclaimer). As an art experiment, I don’t know what will emerge from this, but it’s geeky and exciting.

In the last couple of weeks, I’ve been creating a rudimentary system that will just capture words. A more complex system would use sentiment analysis algorithms. My time and budget are limited, so phase 1 will be a simple implementation.

I’ve been looking for some sort of emotional classification system. There are several competing models (of course).

My favorite is the Plutchik Wheel of Emotions, which was developed in 1980. It has a symmetrical look to it and apparently is deployed in various AI systems.

 

Other models, such as the Lövheim cube of emotion, are more recent and seem compelling at first. But it’s missing something critical: sadness or grief. Really? This is such a basic human emotion, and when I saw it was absent, I tossed the cube model.

Back to the Plutchik model…my “Twitter bucket” captures certain words from the color wheel above. I want enough words for a reasonable statistical correlation (about 2000 tweets/hour). Too many of one word will strain my little Linode server. For example, the word “happy” is a no-go, since there are thousands of Tweets with that word in them each minute.

Many people tweet about anger by just using the word “angry” or “anger”, so that’s an easy one. Same thing goes with boredom/boring/bored.

For other words, I need to go synonym-hunting, like: apprehension. The twitter stream with this word is just a trickle. I’ve mapped it to “worry” or “anxiety”, which shows up more often in tweets. It’s not quite correct, but reasonably close.

The word “terror” has completely lost its meaning, and now only refers to political discourse. I’m still trying to figure out a good synonym-map for terror: terrifying, terrify, terrible? It’s not quite right. There’s not a good word to represent that feeling of absolute fear.

This gets tricky and I’m walking into the dark valley of linguistics. I am well-aware of the pitfalls.
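To make the approach concrete, here is a toy version of the word-capture step. The synonym map is hypothetical, not my production list, and the real system feeds on the Twitter stream rather than a list of strings.

```python
# Toy sketch of the Plutchik word-capture step (hypothetical synonym map).
from collections import Counter

EMOTION_WORDS = {
    "anger":        ["anger", "angry"],
    "boredom":      ["boredom", "boring", "bored"],
    "apprehension": ["worry", "worried", "anxiety", "anxious"],
    "terror":       ["terrified", "terrifying"],  # imperfect stand-ins
}

def tally_emotions(tweets):
    """Count how many tweets contain a word mapped to each emotion."""
    counts = Counter()
    for text in tweets:
        words = set(text.lower().split())
        for emotion, synonyms in EMOTION_WORDS.items():
            if words & set(synonyms):
                counts[emotion] += 1
    return counts

print(tally_emotions(["So angry right now", "this meeting is boring"]))
# -> anger: 1, boredom: 1
```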

 

* Disclaimer:
EquityBot doesn’t actually trade stocks. It is an art project intended for illustrative purposes only, and is not intended as actual investment advice. EquityBot is not a licensed financial advisor. It is not, and should not be regarded as, investment advice or a recommendation regarding any particular security or course of action.

 

EquityBot @ Impakt

My exciting news is that this fall I will be an artist-in-residence at Impakt Works, which is in Utrecht, the Netherlands. The same organization puts on the Impakt Festival every year, which is a media arts festival that has been happening since 1988. My residency is from Sept 15-Nov 15 and coincides with the festival at the end of October.

Utrecht is a 30 minute train ride from Amsterdam and 45 minutes from Rotterdam and by all accounts is a small, beautiful canal city with medieval origins and also hosts the largest university in the Netherlands.

Of course, I’m thrilled. This is my first European art residency and I’ll have a chance to reconnect with some friends who live in the region as well as make many new connections.

The project I’ll be working on is called EquityBot and will premiere at the Impakt Festival in late October as part of their online component. It will have a virtual presence like my Playing Duchamp artwork (a Turbulence commission) and my more recent project, Bot Collective, produced while an artist-in-residence at Autodesk.

Like many of my projects this year, this will involve heavy coding, data-visualization and a sculptural component.

At this point, I’m in the research and pre-production phase. While configuring back-end server code, I’m also gathering reading materials about capital and algorithms for the upcoming plane rides, train rides and rainy Netherland evenings.

Here is the project description:

EquityBot

EquityBot is a stock-trading algorithm that explores the connections between collective emotions on social media and financial speculation. Using custom algorithms, EquityBot correlates group sentiments expressed on Twitter with fluctuations in related stocks, distilling trends in worldwide moods into financial predictions which it then issues through its own Twitter feed. By re-inserting its results into the same social media system it draws upon, EquityBot elaborates on the ways in which digital networks can enchain complex systems of affect and decision-making to produce unpredictable and volatile feedback loops between human and non-human actors.

Currently, autonomous trading algorithms comprise the large majority of stock trades. These analytic engines are normally sequestered by private investment companies operating with billions of dollars. EquityBot reworks this system, imagining what it might be like if this technological attention were directed towards the public good instead. How would the transparent, public sharing of powerful financial tools affect the way the stock market works for the average investor?

I’m imagining a digital fabrication portion of EquityBot, which will be the more experimental part of the project and will involve 3D-printed joinery. I’ll be collaborating with my longtime friend and colleague, Michael Ang, on the technology — he’s already been developing a related polygon construction kit — as well as doing some idea-generation together.

“Mang” lives in Berlin, which is a relatively short train ride, so I’m planning to make a trip where we can work together in person and get inspired by some of the German architecture.

My new 3D printer — a Printrbot Simple Metal — will accompany me to Europe. This small, relatively portable machine produces decent-quality results, at least for 3D joints, which will be hidden anyway.

After Thought at Art in Odd Places

Last Thursday, at Art in Odd Places in New York, I exhibited After Thought, a performance-installation that I developed while at Eyebeam Art + Technology Center (check out the AIOP website; there are some great projects there).

As the name implies, these are performances that happen in unusual spots in the city, this one being at the 14th Street Y.

We scheduled this to happen during the CSA pick-up, where folks were picking up their weekly organic veggies.

Here I am posing with my two assistants: Minha Lee and Zack Frater. We used the lab coat + eyeglasses props to reel people in.

I began with a short intake form with questions such as “What is your greatest physical fear?” I discovered that an inordinate number of people are afraid of snakes.

After completing the intake form, people wear a brainwave-reading headset — I use the Neurosky Mindset — to capture stress and relaxation levels. They turn over flashcards while I monitor their reactions.

I can’t see what they are looking at. If their stress or relaxation responses spike, I ask them for the card, then note it down on my result form. This person was especially negatively triggered by cockroaches.
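The spike-watching itself can be sketched in a few lines. This hypothetical version assumes a stream of 0-100 eSense-style stress or relaxation values from the headset and flags any reading that jumps well above a rolling baseline.

```python
# Hypothetical sketch of the spike detection (not the actual test software).
# Assumes a stream of 0-100 eSense-style readings from the headset.
from collections import deque

class SpikeDetector:
    def __init__(self, window=10, jump=25):
        self.recent = deque(maxlen=window)  # rolling baseline of readings
        self.jump = jump                    # how far above baseline counts

    def is_spike(self, value):
        baseline = sum(self.recent) / len(self.recent) if self.recent else value
        self.recent.append(value)
        return value - baseline > self.jump

detector = SpikeDetector()
for reading in [40, 42, 38, 41, 44, 43, 78]:  # fake stress values
    if detector.is_spike(reading):
        print("Spike! Ask for the flashcard. Reading:", reading)
```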

And this gentleman was relaxed by the guys hanging out in the hot tub. Give me that flashcard!

Minha, who interned for me at Eyebeam, also administered tests. This subject had no reaction, good or bad, to the image of the police car.

Here you can see how the intervention occurs. People had no idea why we were there. Many were suspicious, thinking that our Scientology-style relaxation/stress test was trying to sell them something or lure them into a cult. Others were immediately intrigued. Some needed convincing. One respondent offered us a bundle of Swiss chard for barter.

Afterward, I would sit down with each respondent and we would talk about their results. “Why did you get stressed out by the cute puppy?”

In the background here, you can see one of the two curators, Yaelle Amir, who demonstrates her ambidexterity by texting while typing.

One of my last tests of the day was with Stephanie Rothenberg, a good friend of mine. I knew her too well to provide unbiased analysis. The image of the crying baby was one of her stress indicators. Hmmm.

01SJ Day 5: Public Viruses

Today we shifted to the virus-making portion of Gift Horse, where anyone can assemble a virus sculpture to be placed inside the belly of the Trojan Horse. The gesture is to gather people in real space, give them a way to hand-construct their “artwork” and to hide hundreds of the mini-sculptures inside the horse.

The first virus to go inside, the Rat of the Chinese zodiac, was The Andromeda Strain, an imaginary virus from the film of the same name. This father-daughter team cut, folded and glued the paper sculpture together, and the daughter did the honors of secreting it inside the armature.

It takes a long time to cut each virus from the printed sheet. This is where the lasercutter from the Tech Shop came in handy. In the afternoon, we traced the outlines of the Snow Crash virus and tried cutting it out. After about an hour of fiddling around with settings and alignment, I was able to get a batch done.

Hurray for mechanized production!

This halved the assembly time from 30 minutes to 15 minutes, bypassing the tedious cutting step. Perhaps this is a compromise in the process of hand-construction techniques, but I’ll gladly make the trade-off for practicality.

The next person to sit with us was Jeff who worked on one of the freshly-cut Snow Crash viruses.

Once finished, it joined The Andromeda Strain. Come on down to South Hall (435 S. Market, San Jose) and check us out — we will be holding workshops on building viruses all weekend.

After Thought goes to Flux Factory

I just finished writing the software, which tracks your emotions using brainwave analysis. From a flashcard-style test, it creates a custom video for each participant from a melange of silent clips, such as balloons floating in the sky, a tapping foot and an angry dog. This weekend, Flux Factory along with The Metric System will be presenting The Science Fair (New York), where I will be showing After Thought, which I developed as a resident artist at Eyebeam Art + Technology Center.
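One way to picture the selection step is the sketch below. The clip tags, filenames and weighting scheme are my invention for illustration; the real software draws from a much larger pool of silent clips.

```python
# Illustrative sketch of emotion-weighted clip selection (invented tags and
# filenames; not the actual After Thought engine).
import random

CLIPS = {
    "calm":    ["balloons.mov", "clouds.mov"],
    "anxious": ["tapping_foot.mov", "angry_dog.mov"],
}

def build_playlist(scores, total_clips=8):
    """Draw clips so that stronger emotions contribute more of the edit."""
    moods = list(scores)
    weights = [scores[m] for m in moods]
    playlist = []
    for _ in range(total_clips):
        mood = random.choices(moods, weights=weights)[0]
        playlist.append(random.choice(CLIPS[mood]))
    return playlist

print(build_playlist({"calm": 0.3, "anxious": 0.7}))
```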

This project expands my deep interest in personal emotional spaces created by video. My first exploration was with Future Memories in 2006, which sources the “in-between” shots from Hollywood cinema to create a series of black-and-white videos which evoke feelings of displaced familiarity.

With my Home Stories (2008) project, which I call an “experimental narrative,” I use a silent, looped 5-minute edit from assorted 8mm home movies (including my own parents, now deceased) and invite 5 different storytellers to come up with narratives for the video.

I’m excited to see the possibilities. If you are in New York this weekend (June 5-6), you will be guaranteed a memorable experience by coming to The Science Fair.

The Great Avatar Challenge

Live from New York this Saturday: The Great Avatar Challenge. This mixed-realities performance is a collaboration with Stephanie Rothenberg for Eyebeam’s Mixer: Olympiad in New York. Get your tickets now, as it is certain to sell out.

Our performance is one of many spectacular events going on in this two-night series. We will be conducting races where real-life contestants will compete against my Second Life avatar, Great Escape. The course winds through Eyebeam’s main space and is a hurdle-sprint, in a gesture of pure physicality against a simulated one.

Projected against the real-life wall at Eyebeam, our Second Life track will be an extension of the real-life space.

Hatch and Afterthought

New documentation! During my 6-month residency at Eyebeam, I worked on about 6 different projects. Two of them, Hatch and After Thought, are now documented on my site.

Hatch is the first of a series of acrylic plexiglass installations. This one depicts a mass of sperm (up to 200!) swarming around a doorway. It was cut with Eyebeam’s lasercutter, can be site-specific in its installation, and is cheap to ship.

After Thought is the most experimental of my individual works. Here, I use a Neurosky Mindset to test people while they look at flashcards of charged imagery. I monitor their responses in a subjective application of science, noting them on an indicator sheet (below). After the test, I feed the results back into video generation software that I wrote, which makes a custom five-minute video reflecting their emotional state of mind.

Another artist that I am close friends with, Luther Thie, uses the same headset for the Acclair project, in a compelling but conceptually different repurposing of the brain-computer interface (BCI).

Apple’s Jailhouse (part 2)

We are making good progress on the Open Video Sync project. It’s buggy but the syncing code works!

After some thought about how best to make this available to a wide set of users while still supporting some of Apple’s undocumented APIs (ones as basic as pausing a movie or playing it back on an external device; see this thread for the geeks out there), we have decided to release two versions of Open Video Sync: a slimmed-down version for the App Store and a full-featured version for the Cydia Store, for jailbroken phones.

I’m still disappointed with Apple and their closing down of the iPhone. But apparently I am just one of many.

Apple’s Jailhouse (part 1)

Open Video Sync is one of my Eyebeam projects and will be a way to turn your iPhone or iPod touch into a cheap and wireless video synchronization tool.

We have unfortunately come to the conclusion that we will have to release this as a jailbroken application, which means it will be released on the Cydia Store rather than the App Store (here is a glossary of what these terms mean). This restricts the audience to a more tech-savvy group, but there is no other way.

The bone of contention is the use of undocumented interfaces, specifically one called MPTVOutInterface, which lets you play back video on an external device. Apple doesn’t support this for the development community, which is a foot-shooting maneuver.

First of all: any video player should have a direct-to-device output. In fact, here is a great iPhone hardware hack that will let you do just that.

Second: this is already something that works for Apple’s own iPod video player. It is well-tested and should be folded into the general API.

The shoot-in-foot problem is this: it is only a matter of time before the open-source Google Android phone catches up. Right now, it still lacks the necessary inter-phone communication via a Bluetooth/wireless API. The phone is also too expensive, requiring a service plan. The iPod touch is an excellent model: cheap, great UI and a lot of application support. Hopefully Android will come up with a similar model sooner rather than later.

Apple could profit from the iPhone as a gaming device, as in this example.

In the meantime, my co-developer, Eric Brelsford and I have decided to jailbreak and go Cydia on this one.  Stay tuned.

Astronauts without a home

As part of the Postgravity Art: Synaptiens event which invites hour-long interventions into a 50-hour performance cycle, I will be enacting a two-person performance: Space Age Love.

Here, Victoria Scott and I will be floating in space — in Second Life space — and communicating via chat, while our cameras point at one another and our astronaut avatars perform acrobatics. The two viewpoints will be projected onto the Synaptiens structures at Eyebeam. This is happening today (Nov 12, 2009), along with performances by Jamie O’Shea at 2:30 and Rashaad Newsome at 4:30.

I call this an autobiographical performance, as the two of us are floating between San Francisco and New York, working out opportunities, desires and finances to find a home. The chat will be entirely improvised, discussing these issues in live space at Eyebeam.

Better Diagrams

This is a more readable diagram than the chicken-scratch one I drew last Friday.

This shows the Open Hardware modular component design for the custom LED projectors that I have begun prototyping at Eyebeam.

The gray boxes are the mandatory components and the white ones are optional, depending on the design. The idea here is to let others come up with better battery systems and LED bulbs but still keep the structure of this project intact.

Incidentally, if you are looking for a good Arduino startup kit, check this one out from adafruit — just $50. I just ordered one as a prototyping tool for things such as the PWM for the LEDs.

First Week at Eyebeam

I’m excited to be one of the Resident Artists for Eyebeam this Fall along with the other artists: Diana Eng, Nora Ligorano & Marshall Reese, Rashaad Newsome and Marina Zurkow. Today marks the end of my first week: getting oriented, research, setting up my workspace and more — a real treat to be in Chelsea and part of an amazing organization that has funded and assisted so many artists as well as public programs for students and much more.

For the last 3 years, I’ve been focused on a studio practice in San Francisco, developing many individual works, including popular videos and prints such as Future Memories, Uncertain Location, Video Portraits and Paradise Ahead, along with several collaborations such as No Matter, Wikipedia Art and Second Front.

While this period has been prolific and fruitful, I could feel myself straying from my roots of community activism and group collaboration. Here at Eyebeam, I will be developing open source and open hardware technologies that will enable mobile and networked video projectors using LED bulbs as their light source.

It is ambitious, I know. But I think this is an amazing and prescient technology that will soon be ubiquitous. I’d like to make the means available for modification and customization by artists and others. I have my own ideas for several projects which could use cheap, mobile projection systems that can synchronize video channels.

So far, my favorite resource for the build-your-own projector community has been the forum at Lumen Labs, which is a storehouse for ideas and conversations. Additionally, there are some useful examples of DIY projectors on Instructables and on Engadget. Most involve ripping apart off-the-shelf components and modifying them into home-brew projectors. Remember that the DIY projector is different from the open hardware designs.

Here is a crude diagram, which illustrates my poor handwriting, of a general design for opening up the hardware. I want to make a design that is cheap, modular, open and effective. All of this for less than $500. Each unit will be able to be synchronized using custom iPhone software that I will write during my stay here (more on that later).
