Asteroids and Celebrities

Asteroids! Planetary scientists have found and mapped about 700,000 of them and some estimate upwards of 150 million asteroids in our solar system. Most of them are in the asteroid belt, between Mars and Jupiter.

David Bowie has one named after him. Prince does not, though both have songs about being in space. Recently, Freddie Mercury was awarded one on what would have been his 70th birthday, which seems a fitting tribute to a star whose life was cut short by AIDS.

Photo: Freddie Mercury of Queen performing in Sydney, Australia, in 1985. Queen guitarist Brian May announced that the International Astronomical Union's Minor Planet Centre has designated an asteroid discovered in 1991, the year of Mercury's death, as "Asteroid 17473 Freddiemercury" on what would have been his 70th birthday, Sept. 5, 2016. (AP Photo/Gill Allen, File)

Most asteroids have provisional designations. The full list of human-named asteroids is here. A few pets and fictional characters have even made it onto the list.

I saw this as an opportunity, as part of my SETI Artist-in-Residency, to work with asteroid orbital data from JPL and estimated spaceship velocities* to create a new work called Celebrity Asteroid Journeys, which charts imaginary travels from one asteroid to another as silkscreen prints on wood panels.

20161025_165421_web
Celebrity Asteroid Journey: Make Believe Land Mashup

I will be presenting the Celebrity Asteroid Journeys as part of my Machine Data Dreams solo show at Black and White Projects. The reception is on Saturday, November 5th, 7-9pm.

Representation is important, and the list of asteroids named after people is no exception. Even though the majority of the asteroids are named after Western men, I worked to balance my selections as much as possible.

20161025_165440_web
Celebrity Asteroid Journey: Singers

And how are asteroids named? According to my research, an asteroid is first given a provisional name. Then, once its orbit is determined, it is assigned a sequential number. The discoverer can then request a formal name from the International Astronomical Union.

*The spaceship speeds do not account for true acceleration and deceleration (the math was beyond my skills), but I did work with the best numbers I could find: about 140,000 km/hour using a nuclear-electric engine.
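For the curious, the back-of-the-envelope travel-time math at a constant cruise speed is just distance divided by speed. Here is a minimal sketch (my own illustration, not the code behind the prints), using the 140,000 km/hour figure above:

```cpp
#include <cmath>

// Constant cruise speed from the estimate above (no acceleration model).
const double kSpeedKmPerHour = 140000.0;

// Straight-line travel time, in days, for a distance given in kilometers.
double travelDays(double distanceKm) {
    return distanceKm / kSpeedKmPerHour / 24.0;
}

// Example: one astronomical unit (~149.6 million km) works out to
// roughly 44.5 days at this speed.
```

At that rate, even neighboring asteroids in the belt are journeys of weeks or months, which is part of what made the imaginary itineraries fun to chart.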

Display at Your Own Risk by Owen Mundy

I get a lot of press for my artwork. These articles often gloss over the nuances, distilling the essence of a story.

What thrills me most, though, are well-written academic articles about my artwork.

Such is the case with Owen Mundy's article, Display at Your Own Risk, which looks at 3D printing, copyright and photogrammetry in art.

bts_matrix

The work he refers to, in our case, is Chess with Mustaches, which is detailed here.

cwm_fullset_adjusted -960x540

What Mundy homes in on is that our original Duchamp chess set is not like ‘ripping’ music from physical media to a computer, but rather a “hand” tracing from a set of photographs to create a 3D model. It is essentially a translation rather than a crude copy.

These are the sorts of comparisons and nuances that garner my appreciation.

cw_duchamp_pieces

 

 

Waterjet Etching Tests

For the last several weeks, I have been conducting experiments with etching on the waterjet — a digital fabrication machine that emits a 55,000 psi stream of water, usually used for precision cutting. The site for this activity is Autodesk Pier 9 Creative Workshops. I continue to have access to their amazing fabrication machines, where I work part-time as one of their Shop Staff.

My recent artwork focuses on writing software code that transforms datasets into sculptures and installations, essentially physical data-visualizations. One of my new projects is called Strewn Fields, which is part of my work as an artist-in-residence with the SETI Institute. I am collaborating with the SETI research scientist, Peter Jenniskens, who is a leading expert on meteor showers and meteorite impacts. My artwork will be a series of data-visualizations of meteorite impacts at four different sites around the globe.

While the waterjet is normally used for cutting stiff materials like thick steel, it can also etch at lower water pressure, scoring the surface rather than piercing the material. OMAX — the company that makes the waterjet we use at Pier 9 — provides a simple etching software package called Intelli-ETCH. The problem is that it etches the entire surface of the material. This is appropriate for some artwork, such as my Bad Data series, where I wanted to simulate raster lines.

Meth Labs in Albuquerque (Data source: http://www.metromapper.org)

The technique I apply to my artistic practice is to write custom software that generates files specific to digital fabrication machines: laser-cutters, 3D printers, the waterjet and CNC machines. The look-and-feel is unique, unlike that of the conventional tools artists often work with.

For meteorite impacts, I first map data like the pattern below (this is from a 2008 asteroid impact). For these impacts, it doesn’t make sense to etch the entire surface of my material, but rather, just pockets, simulating how a meteorite might hit the earth.

strewn_field_15scaled_no_notation

I could go the route of working with a CAM package and generating paths for the OMAX waterjet; Fusion 360 even offers a pathway to this. However, I am dealing with four different datasets, each with 400-600 data points. It just doesn't make sense to go from a 2D mapping into a 3D package, generate 3D tool paths, and then come back to what is essentially a 2D profiling machine.

So, I generated my own tool paths using OpenFrameworks, outputting simple vector shapes scaled to each data point. For the tool paths, I settled on spirals rather than left-to-right traverses, which spend too much time at the edges of the material and blow it out. The spirals produce very pleasing results.

My first tests were on some stainless steel scrap, and you can see the results here: the jagged areas where the water eats away at the material are the desired effect. I also found that you have to start the etching from the outside of the spiral and wind towards the inside. If you start from the inside and go out, you get a nipple, like the one on the middle right of this test, where the waterjet essentially has to “warm up.” I'm still getting the center divots, but am working to solve this problem.
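For those curious about the path generation itself, here is a minimal sketch of an outside-in spiral, a simplified stand-in for my OpenFrameworks code (the function and parameter names here are made up for illustration): it computes points along an Archimedean spiral, starting at the outer radius and winding toward the center.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Point { double x, y; };

// Generate an Archimedean spiral tool path that starts at the outside
// and winds inward. 'pitch' is the radial gap between windings, in the
// same units as outerRadius; stepsPerTurn controls point density.
std::vector<Point> spiralOutsideIn(double outerRadius, double pitch,
                                   int stepsPerTurn) {
    const double twoPi = 2.0 * std::acos(-1.0);
    // Angle at which r = pitch * theta / twoPi reaches the outer radius.
    const double thetaMax = twoPi * outerRadius / pitch;
    const int steps = std::max(1, (int)(thetaMax / twoPi * stepsPerTurn));
    std::vector<Point> path;
    for (int i = 0; i <= steps; i++) {
        // theta runs from thetaMax down to 0, so the radius shrinks
        // toward the center: the outside-in order described above.
        double theta = thetaMax * (steps - i) / steps;
        double r = pitch * theta / twoPi;
        path.push_back({r * std::cos(theta), r * std::sin(theta)});
    }
    return path;
}
```

In the actual workflow, the outer radius of each pocket would be driven by the data value for that point, and the resulting point list exported as a vector path for the machine.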

This was a promising test, as the non-pocketed surface doesn’t get etched at all and the etching is relatively quick.

IMG_0286

I showed this test to other people and received many raised eyebrows of curiosity. I became more diligent with my test samples and produced this etch sample of 8 spirals, with interior paths ranging from 2mm to 9mm, to test on a variety of materials.

sprial_paths.png

I was excited about this material, an acrylic composite left over from a landscape project. It is 1/2″ thick with green on one side and a semi-translucent white on the other. However, as you can see, the waterjet is too powerful and ends up shattering the edges, which is less than desirable.

IMG_0303

I then began to survey various stone samples, starting by scavenging material from Building Resources, which has an assortment of unnamed, cheap tiles and other samples.

Forgive me…I wish I hadn’t sat in the back row of “Rocks for Jocks” in college. Who knew that a couple decades later, I would actually need some knowledge of geology to make artwork?

I began with some harder stone — standard countertop stuff like marble and granite. I liked seeing how the spiral breaks down along the way. But, there is clearly not enough contrast. It just doesn’t look that good.

IMG_0280

IMG_0294

I’m not sure what stone this is, but like the marble, it’s a harder stone and doesn’t have much of an aesthetic appeal. The honed look makes it still feel like a countertop.

IMG_0295

I quickly learned that thinner tile samples would be hard to dial in. Working with 1/4″ material like this often results in blowing out the center.

IMG_0282

But, I was getting somewhere. These patterns started resembling an impact of sorts and certainly express the immense kinetic energy of the waterjet machine, akin to the kinetic energy of a meteorite impact.

white_tile_detail

This engineered brick was one of my favorite results from this initial test. You can see the detail on the aggregate inside.

IMG_0290brick_all

And I got some weird results. This material, whatever it is, is simply too delicate, kind of like a pumice.

IMG_0289

This is a cement compound of some flavor, and for a day I even thought about pouring my own forms, but that's too much work, even for me.

 

IMG_0291

I think these two are travertine tile samples and I wish I had more information on them, but alas, that’s what you get when you are looking through the lot. These are in the not-too-hard and not-too-soft zone, just where I want them to be.

 

IMG_0274

IMG_0292

I followed up these tests by hitting up several stoneyards and tiling places along the Peninsula (south of San Francisco). This basalt-like material is one of my favorite results, but it is probably too porous for accuracy. Still, the fissures that it opens up in the pockets are amazing. Perhaps if I could tame the waterjet further, this would work.

IMG_0275basalt-detail

basalt-more-detail
This rockface/sandstone didn't fare so well. The various layers shattered, producing unusable results.

IMG_0299discolored_slate

Likewise, this flagstone was a total fail.

IMG_0302flagstone-shatter

The non-honed quartzite gets very close to what I want, starting to look more like a data-etching. I just need to find one that isn't so thick; this one will be too heavy to work with.

IMG_0284  quartzite_close_IMG_0340

Although this color doesn’t do much for me, I do like the results of this limestone.

IMG_0298

Here is a paver that I picked up, though I can't remember which kind it is. Better notes next time! Anyhow, it is clearly too weak for the waterjet.

IMG_0297

This is a slate. Nice results!

IMG_0296

And a few more, with mixed results.

IMG_0300 IMG_0301

And if you are a geologist and have some corrections or additions, feel free to contact me.

Strewn Field Map @ SETI

I've been an artist-in-residence at SETI — the Search for Extraterrestrial Intelligence — for several weeks now. Many people think of SETI as scientists listening for signals from advanced alien life in the deep desert.

Of course, this isn't even close to the full story. SETI is also doing amazing work in the field of planetary science: the stuff in our solar system.

Why would SETI scientists be playing in our astronomical backyard in the quest for extraterrestrial life? There are a couple of reasons:

(1) there is a decent chance of microbial life in our solar system, which certainly counts as “extraterrestrial” life, though not as exciting as an advanced alien species.

(2) if we understand how life began on Earth, then we can apply that knowledge to determine how life might originate on other planets.

Planetary data is ripe with amazing possibilities. My current artistic focus is to write custom software code which translates datasets into physical sculptures and installations. My first foray is meteorite impact data from SETI.

The scientist I am currently working with is Dr. Peter Jenniskens, who is one of the world’s experts on meteors and meteorites. And, as I have discovered, he is also interested in the artistic possibilities.

seti_peter_in_front_of_sign

The 2008 TC3 asteroid was discovered on October 6th, 2008, heading right for Earth. Calculations were made to determine its approximate impact location, which ended up being in Sudan just 19 hours later. The event was significant — it was the first time we had been able to calculate the location of a “small body” impact with Earth in advance. For all its importance, 2008 TC3 deserves a much better name. After all, even Lance Armstrong has an asteroid named after him.

2008TC3-groundpath-rev
Dr. Jenniskens was not only near the impact zone the next day, on October 7th, but also led an expedition to map and collect the meteorite fragments. He worked with nearly 100 students at the University of Khartoum to find, geolocate and weigh everything they could recover.

It is very unusual to get an accurate strewn field map like this. Fresh meteorites usually hit the ocean or land in areas where collection is difficult for various reasons.

323213main_Petersmeteorites_946-710

I work at the Creative Workshops at Autodesk and have access to their 3D printers. I printed out a model of the 2008 TC3 asteroid, at least one possible physical mapping that approximates its shape. Dr. Jenniskens got a gift of plastic that day.

seti_peter_with_meteor

He later showed me the fragments of one of the meteorites. The crust has an amazing texture, which looks like baked clay. Inside, it looks like a regular rock, at least to my untrained eyes.

seti_scott_n_meteroites

Onto the datasets! Peter Jenniskens provided me with the geolocation and mass of the 639 meteorites that his team found. It is now my job to do something with this amazing information.

With my Bad Data series, I wrote custom software that translates the datasets into a map of vector shapes which I then cut, etch, mill or work with somehow on a CNC machine — laser-cutter, water-jet, Shopbot, etc.

I applied similar code to this dataset, creating this map. The larger circles correspond to more mass. It even looks like an impact, with the smaller fragments being shed off before the bulk of the extraterrestrial rocks hit our planet.
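The mass-to-circle mapping can be sketched like this. It is a simplified illustration of my approach; the post doesn't state the exact scaling, so I assume here the common cartographic choice of making circle *area*, rather than radius, proportional to mass, so that heavy fragments don't visually overwhelm the map:

```cpp
#include <cmath>

// Map a meteorite's mass to a plotted circle radius. With area
// proportional to mass, the radius goes with the square root of mass;
// 'scale' is an arbitrary constant chosen to fit the material.
double radiusForMass(double massGrams, double scale) {
    return scale * std::sqrt(massGrams);
}

// Example: a fragment with 4x the mass gets a circle with 2x the
// radius, and therefore 4x the area.
```

The same function runs over all 639 records, and the output circles become the pockets that the spiral tool paths later fill.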

strewn_field_15scaled_no_notation

It will be a slog of testing with various materials before I get a final result that I'm happy with. I love this part — the back n' forth playing with data and materials to get a final aesthetic result that is pleasing.

But, I did manage to squeeze out some tests on wood and have this result. It’s promising.

sudan_image

PARISOMA art panel reportback

Sometimes I’m an art ambassador to the tech community in San Francisco.

Last week, I was on a panel of artists and non-profit educators called “How Technology is Revolutionizing the World of Art,” held at PARISOMA — a co-working space in San Francisco. The panel included colleagues Matt Ganucheau, Danille Siembieda and Barry Threw.

I talk at these sorts of events fairly often, addressing a tech crowd that is art-curious. This forces me out of my comfort zone: I know the art world well, but the tech world of start-up lingo and social entrepreneurship is slightly unfamiliar. Still, I think art-technology discourse is essential, especially in SF in these times, so I do my part.

PARISOMA is faithfully trying to stir up this conversation, which is much appreciated, especially since it would be easy to exclude artists from the “tech conversation.”

Oh, the naming problem: “How Technology is Revolutionizing the World of Art” presumes that technology is only now changing the world of art. Let's not forget our history. (New) technology has been turning the art world on its head for decades, and for centuries it has been influencing art-making in overt and subtle ways.

Projects such as E.A.T. (Experiments in Art and Technology) were grappling with this very issue 35-40 years ago. I won't get into the manifold examples here, but the research is out there and easy to find.

bild
…and the overuse of the word “revolution” is well-documented. It's a disservice to actual revolution: the overturning of a political state. Language is important. The point being that art and technology have been intertwined for a very long time. This is not happening just now, nor is it a sudden turn of events that is redefining art.

Patrol_of_the_October_revolution

However, the positive things from the dialogue were immense. A few key observations:

(1) The attendance for this panel in a tech venue was much higher than in an art venue (~100 people on a Wed night). Why is this? Why does the tech community garner more bodies? Is it because there is some flavor of “networking” involved? This happens at art events as well, so I don’t get it.

(2) Art jargon alienates the wider community. Tech folks get intimidated by art galleries and the language describing the works. At one point I brought this up and saw a sea of faces that were nodding. This is a thing that everyone seems to feel. I suppose the art dialogue is in my comfort zone, so I don’t think about the barriers it creates.

(3) Techies have a bad reputation for driving up prices, displacing old-time residents of San Francisco and hopping on corporate buses to work in the Peninsula. But, here was an audience of 80-100 people who wanted to integrate art somehow into their culture. Techies aren’t all bad!

The take-home message is that we should build bridges between the art folks and the technology folks…somehow. I don’t have the answers, but do feel like there are slow inroads being made by just having the conversations.

This video is a bit long (1 hour +), but for those of you who are curious, here it is. Thanks everyone.

 

Art in Space: the First Art Exhibition in Space

Art in Space is the first art exhibition in space, created in conjunction with Autodesk's Pier 9 Creative Workshops and Planet Labs, a company that deploys many fast-orbiting imaging satellites to document rapid changes on the Earth's surface.

For this exhibition, they selected several Pier 9 artists to create artworks, which were then etched onto the satellites' panels. Though certainly not the first artwork in space*, this is the first exhibition of art in space. And, if you consider that several satellites are constantly orbiting on opposite sides of the planet, it may also be the largest art exhibition ever.

My contribution is an artwork called Hello, World! It is the first algorithmically-generated artwork sent to space and also the first data-visualization artwork in space. It was deployed on August 19th, 2015 on the satellite Dove 0C47 and will circle the Earth for 18 months, until the satellite's orbit decays and it burns up in our atmosphere.

 

The left side of the satellite panel depicts the population of each city, represented by squares proportional to population size. The graphics on the right side represent the carbon footprint of each city, with circles proportional to carbon emissions. By comparing the two, one can draw correlations between national policies and their effects on the atmosphere. For example, even though Tokyo is the most populated city on Earth, its per-capita carbon emissions are very low, making its carbon footprint much smaller than those of Houston, Shanghai or Riyadh, which have disproportionately large footprints.
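The encoding described above can be sketched as follows. This is an illustration with made-up function names and arbitrary scale factors, not the actual panel data or scaling:

```cpp
#include <cmath>

// Square side length for a city's population: the square's area tracks
// population, so the side goes with the square root. 'k' is an
// arbitrary plotting scale.
double squareSide(double population, double k) {
    return k * std::sqrt(population);
}

// Circle radius for a city's total carbon emissions, same idea.
double circleRadius(double totalEmissions, double k) {
    return k * std::sqrt(totalEmissions);
}

// Total emissions are population times per-capita emissions, which is
// why a huge, low-emitting city can still get a small circle.
double totalEmissions(double population, double perCapita) {
    return population * perCapita;
}
```

Note the arithmetic behind the Tokyo example: a city with ten times the population but a tenth of the per-capita emissions ends up with exactly the same total, and so the same size circle.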

The etched panel resembles a constellation of interconnected activity and inverts the viewpoint of the sky with that of the earth. It is from this “satellite eye” that we can see ourselves and the effect of humans on the planet. The poetic gesture of the artwork burning up as the satellite re-enters the Earth's atmosphere serves as a reminder of the fragile nature of Earth.

Also consider this: the Art in Space exhibition is something you can neither see nor keep. After only 18 months, the satellite, along with the artwork, vaporizes. I thought of this as an opportunity to work with ephemerality and sculpture. And this is the first time I have had the chance to plan for the natural destruction of my work. Everything dies, and we need to approach life with care.

A few people have asked me where my title came from. Anyone who has written software is familiar with the phrase “Hello, World!” It is the first test program that nearly every tutorial has you write. It shows the basic syntax for constructing a working program, which is helpful since every language has its own constructions. By making this test code work, you also verify that your development environment is set up properly.

“Hello, World!” C implementation.

/* Hello World program */
#include <stdio.h>

int main(void) {
    printf("Hello, World!\n");
    return 0;
}

And here is the full video that explains more about the Art in Space exhibition.

 

* There has been plenty of other art in space; more recent projects include my collaboration with Nathaniel Stern, Tweets in Space (2012), and Trevor Paglen's The Last Pictures.

KALW Story: Brothers and Cisterns

“What's beneath those brick circles in San Francisco intersections?” is a story that Audrey Dilling, editor and reporter for KALW's Crosscurrents, recently released for broadcast on the radio. She joined us as a journalist-on-a-bike for the Cistern Mapping Project, shadowing a pair of volunteers.

The bike-mapping project is just one part of her larger story about the San Francisco Cisterns.

cistern30_VanNess_Bay

Audrey did a fantastic job on the production work. I like the asides, such as the question “What do they call manholes in Holland?” The radio personality talks over my own words, bridging the inevitable gaps and fissures in any audio interview, making me look smarter than I am.

Other interviewees include Robert Graysmith, author of Black Fire, who gives character to the story by talking about 1850s San Francisco, and a representative from the San Francisco Fire Department.

And the sound cues, such as “Can I get some period music?” Great job.

Me: not the best interviewee, not the worst, still sounding too matter of fact. But hey, I’m learning slowly the lessons of being a better communicator.

And the finale, bookending the story with Chava and Lucas: very nice.

 

 

 

 

Cistern Mapping Project Reportback

On October 11th, 2015, 18 volunteer bike and mapping aficionados gathered at my place to work on the Cistern Mapping Project — an endeavor to physically document the 170 (or so) Cisterns in San Francisco. There exists no comprehensive map of these unique underground vessels. The resulting map is here.

I personally became fascinated by them, when working on my Water Works project*, which mapped the water infrastructure of San Francisco.

The history of the cisterns is unique, and notably incomplete.

Image2

The cisterns are part of the AWSS (Auxiliary Water Supply System) of San Francisco, a water system that exists entirely for emergency use and is separate from the potable drinking water supply and the sewer system.

In the 1850s, after a series of Great Fires in San Francisco tore through the city, about 23 cisterns were built. These smaller cisterns were all in the city proper, at that time between Telegraph Hill and Rincon Hill. They weren’t connected to any other pipes and the fire department intended to use them in case the water mains were broken, as a backup water supply.

They languished for decades. Many people thought they should be removed, especially after incidents like the 1868 Cistern Gas Explosion.

However, after the 1906 Earthquake, fires once again decimated the city. Many water mains broke and the neglected cisterns helped save portions of the city. Afterward, the city passed a $5,200,000 bond and began building the AWSS in 1908. This included the construction of many new cisterns and the rehabilitation of other, neglected ones. Most of the new cisterns could hold 75,000 gallons of water. The largest one is underneath the Civic Center and has a capacity of 243,000 gallons.

The original ones, presumably rebuilt, hold much less, anywhere from 15,000 to 50,000 gallons.

Cistern109 22nd Dolores

Armed with a list of intersections marking potential cistern locations, the plan was to bike to each intersection, get the exact latitude and longitude, and photograph each cistern marker — either the circular bricks or the manholes themselves.

We had 18 volunteers, which is a huge turnout for a beautiful Sunday morning. I provided coffee and bagels, and soon folks from my different communities (the bike team, the Exploratorium and other friends) were chatting with one another.

Cistern Prep Meeting 3

One way to thank my lovely volunteers was to provide gifts: a series of Moleskine notebooks with vinyl stickers of cisterns and bikes. I was originally planning to laser-etch them, but found out that the notebooks were on the “forbidden materials” list at the Creative Workshops at Autodesk Pier 9, where I made them. Luckily, I always have a Plan B, so I made vinyl stickers instead.

Cistern Prep Meeting 4Cistern Prep Meeting 1

Here I am, in desperate need of a haircut, greeting everyone and explaining the process. I grouped the cisterns into 10 sets of about 10-20 each, which covered most of them, and then we paired off riders in groups of 2 to figure out the best route for their ride.

Cistern Prep Meeting 2

Some of the riders were friends beforehand and others became friends during the course of riding together. Here, you can see two riders figuring out the ideal route for their morning. Some folks were smart and brought paper maps, too!

Cistern Prep Meeting 7

Here are the bike-mappers just before embarking on their day-of-mapping. Great smiles all around!

 

Cistern Group 1

I would have preferred to ride, but instead was busy arranging the spreadsheet and verifying locations. Ah, admin work.

How did we do this? Simple: each team used a GPS app and emailed me the coordinates of the cistern marker, along with a photo of the cistern: the bricks, manhole or fire hydrant. I would coordinate via email and confirm that I got the right info and slowly fill out the spreadsheet. It was a busy few hours.

Screen Shot 2015-12-10 at 4.09.31 PM

The hills were steep, but fortunately we had a secret weapon: some riders from the Superpro Racing team! Here is Chris Ryan crushing the hills in Pacific Heights.

Cistern chris uphill

One reason that we traveled in pairs is that documentation can be dangerous. Sometimes we had to put folks on the edge of, or actually in, the street so they could get some great documentation.

Cisterns chris

So, how many cisterns did we map? The end result was 127 cisterns, about 75% of them, all in one day. We missed a few, and then there were some outliers, such as ones in Glen Park, the Outer Sunset and Bayview, that we didn't quite make.

And the resulting map is here. There are still some glitches, but what I like about it is that you can now see the different intersections for each cistern. These have not been documented before, so it’s exciting to see most of them on the map.

Cistern web map

What did we discover?

Most of the cisterns are not actually marked with brick circles; many just have a manhole that says “Cisterns” or even just “AWSS” on it.

Image1

What I really enjoyed, especially being in the “backroom” was how the cyclists captured the beautiful parts of the city in the background of the photos, such as the cable car tracks.

IMG 0609

Also, the green-capped fire hydrants are usually nearby. These are the ones the SF Fire Department uses to fill up the cisterns.

Green hydrant

A few were almost like their own art installations, with beautiful brickwork.

Cistern118 24th Noe

The ones in the Sunset and Richmond districts are newer and are actually marked by brick rectangles.

Cistern135 46th Geary

Thanks to the AMAZING volunteers on this day.

* Water Works is supported by a Creative Code Fellowship through Stamen Design, Autodesk and Gray Area.

Joining SETI as an artist-in-residence

The SETI Institute just announced its new cohort of artists-in-residence for 2016, and I couldn't be happier to be joining this amazing organization for a long-term (up to 2 years!) stint.

This includes a crew of other amazing artists: Dario Robleto (Conceptual Artist, Houston), Rachel Sussman (Photographer, Artist, Writer, New York), George Bolster (Filmmaker, Artist, New York), Jen Bervin (Visual Artist, Writer, Brooklyn) and David Neumann (Choreographer, New York). The SETI Air program is spearheaded by Charles Lindsay (artist) and Denise Markonish (curator at MASS MoCA). I first met Charles at ISEA 2012 in Albuquerque, New Mexico, when we were on the same panel about space-related artwork.

On January 13, 2016, at 7pm, in San Francisco’s Millennium Tower, SETI Institute President and CEO Bill Diamond will formally welcome the incoming artists and our new institutional partners, as well as patrons and friends of the program. This event is invitational and seating is limited.

SETI_Logo

So, what will I be working on?

Well, this follows on the heels of a number of artworks related to space, such as Tweets in Space (in collaboration with Nathaniel Stern), Uncertain Location, the Black Hole Series and Moon v Earth, which were meditations on the metaphors of space and potential.

uncertainlocation_main

Roughly speaking, I will be researching, mapping and creating installations from asteroid, meteor and meteorite data, working with scientists such as Peter Jenniskens, an expert on meteor showers. These will be physical data-visualizations — installations, sculptures, etc., following my interests in digital fabrication and code, as in projects such as Water Works.

What specifically fascinates me about the relationship between outer space and the Earth is the metaphor of both danger and possibility from above. Interpretations range from the numerous spiritual ones to practical ones: from the potential extinction of the human race to the possibility that organic material from other planets was carried to our own. Despite appearances to the contrary, Earth is not only a fragile ecosystem but also one that could easily be transformed from outside.

And I have already begun mapping some meteor showers with my custom 3D software, working in collaboration with Dr. Jenniskens and a dataset of ~230,000 meteors recorded over Northern California in the last few years. This makes the data-space-geek in me very happy.

Stay subscribed for more.

meteor-of-Screen Shot 2015-11-18 at 8.41.10 AM

meteor-2Screen Shot 2015-10-27 at 4.55.25 PM

And I will heed Carl Sagan’s words: “Imagination will often carry us to worlds that never were, but without it we go nowhere.”

EquityBot World Tour

Art projects are like birthing little kids. You have grand aspirations but never know how they're going to turn out. And no matter what, you love them.

20151125 125225

It's been a busy year for EquityBot. I didn't expect last year that my stock-trading algorithm Twitterbot would resonate so well with curators, thinkers and general audiences. I've been very pleased with how well this “child” of mine has been doing.

This year, from August to December, it has been exhibited in 5 different venues in 4 countries, including MemFest 2015 (Bilbao), ISEA 2015 (Vancouver), MoneyLab 2: Economies of Dissent (Amsterdam) and Bay Area Digitalists (San Francisco).

Of course, it helps the narrative that EquityBot is doing incredibly well, with a return rate (as of December 4th) of 19.5%. I don't have the exact figures, but the S&P for this time period is, according to my calculations, in the neighborhood of -1.3%.

Screen Shot 2015-12-05 at 9.13.20 AM

 

The challenge with this networked art piece is how to display it. I settled on making a short video, with the assistance of a close friend, Mark Woloschuk. This does a great job of explaining how the project works.

And, accompanying it is a visual display of vinyl stickers, printed on the vinyl sticker machine at the Creative Workshops at Autodesk Pier 9, where I once had a residency and now work (part-time).

EquityBot_installation_screen_c

 

from-columbus-show

Cistern Mapping Project…with Bikes

Do you like riding bikes and mapping urban space?

On October 11th, 2015, I will be leading the Cistern Mapping Project, which will be an urban treasure hunt, where we document and geolocate all of the 170 (or so) Cisterns of San Francisco.

The easiest way to let me know you want to participate is to sign up with this contact form. This will email me (Scott Kildall) and I can give you some more exact details.

The plan
We will meet at a specific location in the Mission District at 11am on Sunday, October 11th. I am hoping to gather about 20 riders, paired up in groups. Each pair will be provided with a map of approximate locations of several cisterns.

Each pair will search for the exact location of each brick circle, photo-document it and get the geolocation (latitude + longitude) of the cistern, using an app on their iPhone or Android. Plan for 4-5 hours or so of riding, mapping, documenting and tweeting.

The background story
Underneath our feet, usually marked by brick circles, are cisterns. There are 170 or so of them spread throughout the city. They're part of the AWSS (Auxiliary Water Supply System) of San Francisco, a water system that exists entirely for emergency use and is separate from the potable drinking water supply and the sewer system.

cistern-mapping

In the 1850s, after a series of Great Fires in San Francisco tore through the city, 23 (or so) cisterns were built. These smaller cisterns were all in the city proper, at that time between Telegraph Hill and Rincon Hill. They weren’t connected to any other pipes and the fire department intended to use them in case the water mains were broken, as a backup water supply.

They languished for decades. Many people thought they should be removed, especially after incidents like the 1868 Cistern Gas Explosion.

However, after the 1906 Earthquake, fires once again decimated the city. Many water mains broke and the neglected cisterns helped save portions of the city. Afterward, the city passed a $5,200,000 bond and began building the AWSS in 1908. This included the construction of many new cisterns and the rehabilitation of other, neglected ones. Most of the new cisterns could hold 75,000 gallons of water. The largest one is underneath the Civic Center and has a capacity of 243,000 gallons.

cistern_main

Augmenting an Existing Map
Last year, as part of the Creative Code Fellowship between Stamen Design, Gray Area and Autodesk, I worked on a project called Waterworks, which mapped the San Francisco water infrastructure as a series of 3D prints and web maps.

As part of this project, I created an interactive web map of the San Francisco Cisterns (the only one), based on the intersections listed in the SFFD water supplies manual. However, this map is less-than-complete.

The problem is that the listed intersections are approximate and sometimes a block or so off. Also, there are very few photographs of the brick circles that mark the San Francisco cisterns. I think it would be an urban service to map these out for anyone to look at.

The goal will be to geolocate (lat + long) each cistern, photograph the bricks that (usually) mark it and produce a dataset that anyone can use.

cistern-web-map
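A record in such an open dataset might look something like this (a minimal sketch; the field names and sample coordinates are hypothetical, and the real data would come from the ride itself):

```python
# Hypothetical sketch of a shareable cistern record (field names and
# the sample coordinates are made up for illustration).
import csv
import io

fields = ["cistern_id", "latitude", "longitude",
          "nearest_intersection", "photo_url"]
rows = [{"cistern_id": "001", "latitude": 37.7599, "longitude": -122.4148,
         "nearest_intersection": "20th St & Valencia St", "photo_url": ""}]

# Write the records as plain CSV, so anyone can open or remix them.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=fields)
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```

Keeping the format this simple means the dataset works equally well in a spreadsheet, a web map, or someone else's code.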

A Live Twitter Performance…on bikes

This will be a live Twitter event, where we update each cistern location live using Twitter and Google docs, adding photographs and building out the cistern map in real-time.

Bikes are the perfect mode of transport. Parking won’t be an issue and we can conveniently hit up many parts of the city.

Will we map all of these cisterns? This is up to you. Contact me here if you would like to join.

 

Press for Chess with Mustaches: the response to the Duchamp Estate

Press coverage is like an improv performance. It's unpredictable and erratic; sometimes it works and sometimes it falls on its face, usually through lack of press.

I’ve seen my work get butchered, my name get dragged in the mud. I’ve been called a “would-be performance artist”, an “amateur cartographer” and even Cory Doctorow recently called me a “hobbyist”.

But as long as my name is spelled right, I’m happy.

We recently went public with our response to the Duchamp Estate and the Chess with Mustaches artwork.

We soon received coverage from three notable press sources: Hyperallergic, 3DPrint.com and The Atlantic, and this was soon followed up by Boing Boing and later 3ders.com, plus a mention in Fox News (scroll down) and then Tech Dirt.

These are arts blogs, 3D printing blogs, tech rags — and, well, The Atlantic, a well-read politics and culture news source — so there's a wide audience for this story.

The press has certainly reached the critical threshold for the work. The cat is out of the bag, after being inside for nearly a year…a frustrating process where we kept silent about the cease-and-desist letter from the Duchamp Estate.

This is perhaps the hardest part of any sort of potential legal conflict. You have to be quiet about it, otherwise it might imperil your legal position. The very act of saying anything might make the other party react in some sort of way.

But the outpouring of support has been amazing, both on a personal and a press level. Sure, some of the articles have overlooked certain aspects of the project.

And as always #dontreadthecomments. But overall, it has been such a relief to be able to talk about the Duchamp Estate and the chess pieces, and to devise an appropriate artistic response.

 

cwm_fullset_adjusted

What Happened to the Readymake: Duchamp Chess Pieces?

Over the last several months, we (Scott Kildall and Bryan Cera) have been contacted by many people asking the same question: What happened to the Readymake: Duchamp Chess Pieces?

cwm_orig_set

The answer is that we ran into an unexpected copyright concern. The Marcel Duchamp Estate objected to the posting of our reconstructed 3D files on Thingiverse, claiming that our project was an infringement of French intellectual property law. Although the copyright claim never went to legal adjudication, we decided that it was in our best interests to remove the 3D-printable files from Thingiverse – both to avoid a legal conflict, and to respect the position of the estate.

For those of you who are unfamiliar with Readymake: Duchamp Chess Set by Scott Kildall and Bryan Cera, this was our original project description:

Readymake: Duchamp Chess Set is a 3D-printed chess set generated from an archival photograph of Marcel Duchamp’s own custom and hand-carved game. His original physical set no longer exists. We have resurrected the lost artifact by digitally recreating it, and then making the 3D files available for anyone to print.

We were inspired by Marcel Duchamp's readymade — an ordinary manufactured object that the artist selected and modified for exhibition. The readymake brings the concept of the appropriated object to the realm of the internet, exploring the web's potential to re-frame information and data, and their reciprocal relationships to matter and ideas. Readymakes transform photographs of objects lost in time into shared 3D digital spaces to provide new forms and meanings.

Just for the sake of clarity, what we call a “readymake” is a play on the phrase “readymade”. It is ready-to-make, since it can be physically generated by a 3D printer.

Our Readymake project was not to exist solely as the physical 3D prints that we made, but rather as the gesture of posting the 3D-printable files for anyone to download, as well as the initiation of a broader conversation around digital recreation in the context of artwork. We chose to reconstruct Duchamp’s chess set, specifically, for several reasons.

The chess set, originally created by Duchamp in 1917-18, was a material representation of his passion for the game. Our intention was not to create a derivative art work, but instead to re-contextualize an existing non-art object through a process of digital reconstruction as a separate art project.

What better subject matter to speak to this idea than a personal possession of the father of the Readymade himself? Given the artifact's creation date, we believed it would be covered under U.S. Copyright Law. We'll get back to that in a bit.

cwm_bw_duchamp_set

 cw_duchamp_pieces

On April 21st, 2014, we published this project on our website and also uploaded the 3D (STL) files onto Thingiverse, a public online repository of free 3D-printable models.  We saw our gesture of posting the files not only as an extension of our art project, but also as an opportunity to introduce the conceptual works of Duchamp, specifically his Readymades, to a wider audience.

cwm_makerbot_grouping

The project generated a lot of press. By encouraging discussion between art-oriented and technology-oriented audiences, it tapped into a vein of critical creative possibilities with 3D printing. And perhaps, with one of Marcel Duchamp’s personal belongings as the context, the very notions of object, ownership and authenticity were brought into question among these communities.

Unfortunately, the project also struck a nerve with the Duchamp Estate. On September 17th, 2014, we received a cease and desist letter from a lawyer representing the heirs of Marcel Duchamp. They were alleging intellectual property infringement on grounds that they held a copyright to the chess pieces under French law.

Gulp.

cwm_170914-letter-blackedout-p1

cwm_170914-letter-blackedout-p2

cwm_170914-letter-blackedout-p3

We assessed our options and talked to several lawyers. Yes, we talked to the Electronic Frontier Foundation…and others. We kept publicly quiet about our options, as one must with legal matters such as this. The case was complex since jurisdiction was uncertain. Does French copyright law apply? Does that of the United States? We didn't know, but had a number of conversations with legal experts.

Some of the facts, at least as we understand them

1)  Duchamp’s chess pieces were created in 1917-1918. According to US copyright law, works published before 1923 are in the realm of “expired copyright”.

2) The chess pieces themselves were created in 1917-1918 while Duchamp was in Argentina. He then brought the pieces back to France where he worked to market them.

3)  According to French copyright law, copyrighted works are protected for 70 years after the author’s death.

4)  Under French copyright law, you can be sued for damages and even serve jail time for copyright infringement.

5)  The only known copy of the chess set is in a private collection. We were originally led to believe the set was ‘lost’ – as it hasn’t been seen, publicly, for decades.

6) For the Estate to pursue us legally, the most common method would be to get a judgment in French court, then get a judgment in a United States court to enforce it.

7) Legal jurisdiction is uncertain. As United States citizens, we are protected by U.S. copyright law. But, since websites like Thingiverse are global, French copyright could apply.

Our decision to back off

Many people have told us to fight the Estate on this one. This, of course, is an obvious response. But our research indicated this would be a costly battle. We pursued pro-bono representation from a variety of sources, and while those we reached out to agreed it was an interesting case, each declined. We even considered starting a legal defense fund or crowdsourcing legal costs through an organization such as Kickstarter. However, deeper research showed us that people were far more interested in funding technology gadgets than legal battles.

Finally we ascertained, through various channels, that the Estate was quite serious. We wanted to avoid a serious legal conflict.

And so, without proper financial backing or pro-bono legal representation, we backed off — we pulled the files from Thingiverse. This was painful – it was incredible to see how excited people were to take part in our project, and when we deleted the Thingiverse entry and with it the comments and photo documentation shared by users, we did so with much regret. But we didn’t see any other option.

Initially, we really struggled to understand where the estate was coming from. Since part of the estate's task is to preserve Duchamp's legacy, we were surprised that our project was seen by them as anything other than a celebration, and in some ways a revitalization, of his ideas and artworks. Despite the strongly-worded legal letter, we heard that the heirs were quite reasonable.

The resolution was this: we contacted the estate directly. We explained our intention for the project: to honor the legacy of Duchamp, and notified them that we had pulled the STL files from online sources.

We were surprised by the amicable email response — written sans lawyers — directly from one of the heirs. Their reply highlighted an appreciation for our project, and an understanding of our artistic intent. It turns out that their concern was not that we were using the chess set design, but rather that the files – then publicly available — could be taken by others and exploited.

We understand the Estate's point-of-view – their duty, after all, is to preserve Duchamp's legacy. Outside of an art context, a manufacturer could easily take the files and mass-produce the set. Despite the fact that we released the files under a Creative Commons license stipulating that the chess set couldn't be used for commercial purposes, we understand the concern.

If we had chosen to stand our ground, we would have had various defenses at our disposal. One of them is that French law wouldn’t have applied since we are doing this from a U.S. server. But, the rules around this are uncertain.

If we had been sued, we would have argued two propositions: (1) our project would be protected under U.S. law; (2) notwithstanding this, under U.S. law, we have a robust and widely-recognized defense in the doctrine of Fair Use.

We would make the argument that our original Duchamp Chess Pieces added value to these objects, and we would consider invoking Fair Use in this case.

But, the failure of a legal system is that it is difficult to employ these defenses unless you have the teeth to fight. And teeth cost a lot of money.

Parody: Our resolution

We thought about how to recover the intent of this project without inviting another copyright infringement claim from the Duchamp Estate, and realized that one approach would likely guarantee its status as commentary: parody.

Accordingly, we have created Chess with Mustaches, which is based on our original design but adds a mustache to each piece. The pieces no longer look like Duchamp's originals; instead, each one is adorned with a mustache.

chesswithmustaches_fullset

cwm_plastic_set

The decorative mustache references vandalized artworks, including Duchamp's own adornment of the Mona Lisa.

cwm_mona_lisa

Coming out with this new piece is risky. We realize the Duchamp Estate could try to come back at us with a new cease-and-desist. However, we believe that this parody response and retitled artwork will be protected under U.S. Copyright Law (and perhaps under French law as well). We are willing to stand up for ourselves with the Chess with Mustaches.

Also for this reason, we decided not to upload the mustachioed-pieces to Thingiverse or any other downloadable websites. They were created as physical objects solely in the United States.

cwm_king

Final thoughts

3D printing opens up entire new possibilities of material production. With the availability of cheap production, the very issue of who owns intellectual property comes into play. We’ve seen this already with the endless reproductions on sites such as Thingiverse. Recently, Katy Perry’s lawyers demanded that a 3D Print of the Left Shark should be removed from Shapeways.

And in 2012, Golan Levin and Shawn Sims provided the Free Universal Construction Kit, a set of 3D-printable files for anyone to print connectors between Legos, Tinker Toys and many other construction kits for kids. Although they seem to have dodged legal battles, this was perhaps a narrow victory.

Our belief is that our project of reviving Duchamp's chess set is strong as both a conceptual and an artistic gesture. It is unfortunate that we had to essentially delete this project from the Internet. What copyright law has done in this case is squelch an otherwise compelling conversation about Duchamp's notion of the readymade in the context of 3D printing.

Will our original Duchamp Chess pieces, the cease-and-desist letter from the Duchamp Estate and our response of the Chess with Mustaches be another waypoint in this conversation?

We hope so.

And what would Marcel Duchamp have thought of our project? We can only guess.

   cwm_knight

Scott Kildall’s website is: www.kildall.com
Twitter: @kildall

Bryan Cera’s website is: http://bryancera.com
Twitter: @BryanJCera

BOOM! WaterWorks

My Water Works project recently got coverage in BOOM: A Journal of California and I couldn’t be more pleased.

Screen Shot 2015-08-25 at 9.06.28 AM

A few months ago, I was contacted by the editorial staff to write about the 3D printed maps and data-visualization for Water Works.

What most impressed me is the context for this publication, which is a conversation about California, in their own words: “to create a lively conversation about the vital social, cultural, and political issues of our times, in California and the world beyond.”

So, while my Water Works project is an artwork, it is having the desired effect of a dialogue outside of the usual art world.

EquityBot got clobbered

Just after the Dow Jones dropped 1,000 points on Aug 24th (yesterday), I checked on how EquityBot was doing: an annual rate of return worse than -50%.

Screen Shot 2015-08-24 at 11.20.25 PM

Crazy! Of course, this is like taking the tangent of any curve and making a projection. A day later, EquityBot is at -32%.

Screen Shot 2015-08-25 at 8.57.06 AM

Still not good, but if you were to invest yesterday, you could be much richer today.

I’m not that much of a gambler, so I’m glad that EquityBot is just a simulated (for now) bank account.

EquityBot Goes to ISEA

EquityBot will be presented at this year's International Symposium on Electronic Art in Vancouver. The theme is Disruption. You can always follow EquityBot here: @equitybot.

EquityBot is an automated stock-trading algorithm that uses emotions on Twitter as the basis for investments in a simulated bank account.

This art project poses the question: can an artist create a stock-trading algorithm that will outperform professional managed accounts?

The original EquityBot, which I will call version 1, launched on October 28th via the Impakt organization, which supported the project last fall during an artist residency.

I intended for it to run for 6 months and then to assess its performance results. I ended up letting it run a little bit longer (more on this later).

I revamped EquityBot about a month ago. The new version is doing *great*, with an annual rate of return of 10.86%. Most of this is due to some early investments in Google, whose stock price has been doing fantastically.

equitybot-isea-8emotions-1086percent

How does EquityBot work? During stock market hours, EquityBot scrapes Twitter to determine the frequency of eight basic human emotions: anger, fear, joy, disgust, anticipation, trust, surprise and sadness.

equitybot-8emotions

The software code captures fluctuations in the number of tweets containing these emotions. It then correlates them to changes in stock prices. When an emotion is trending upwards, EquityBot selects a stock that follows a similar trajectory, deems it a "correlated investment" and buys that stock.
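The correlation step can be sketched roughly like this (an illustrative sketch only, not EquityBot's actual code; the emotion counts, tickers and prices below are made up):

```python
# Sketch of the correlation idea: given a tweet-frequency series for
# an emotion and price series for candidate stocks, pick the stock
# whose prices best track the emotion's trajectory.
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def pick_correlated_stock(emotion_counts, stock_prices):
    """Return the ticker whose price series best tracks the emotion."""
    return max(stock_prices,
               key=lambda t: pearson(emotion_counts, stock_prices[t]))

joy = [120, 135, 150, 170, 180, 210]          # made-up tweet counts, trending up
prices = {
    "GOOG": [520, 527, 534, 541, 548, 556],   # rising, like joy
    "XYZ":  [44, 43, 45, 42, 41, 40],         # falling
}
print(pick_correlated_stock(joy, prices))     # the "correlated investment"
```

Here the upward-trending "joy" series correlates most strongly with the rising ticker, which is the stock the bot would buy.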

equitybot_correlation_graph

The ISEA version of EquityBot will run for another 6 months or so. In version 1, I tracked 24 different emotions, all based on the Plutchik wheel.

Plutchik-wheel.svg_1

 

The problem I found was that this was too many emotions to track. Statistically speaking, there were too few tweets for many of the emotions for the correlation code to function properly.

The only change with the ISEA version (what I will call v1.1) is that it now tracks eight emotions instead of 24.

popular-emotions

How did v1 of EquityBot perform? It came out of the gates super-strong, hitting a high point of 20.21%. Wowza. These are also some earlier data-visualizations, which have since improved slightly.
equitybot-nov26-2021percent

But a month later, by December 15th, EquityBot had dipped down to -4.58%. Yikes. These are the vicissitudes of the market and a short time-span.

equitybot-dec15-minus-458percent

 

By January 21st 2015, EquityBot was almost back to even at -0.96%.

 

equitybot-jan21-minus096percent

Then by February 4th, 2015, EquityBot was back at a respectable 5.85%.

equitybot-feb4-585percent

And on March 1st, it was doing quite well at 7.36%.

equitybot-march1-736percent

I let the experiment run until June 11th. The date was arbitrary, but -9.15% was the end result. This was pretty terrible.

equitybot-jun11-minus915percent

And which emotions performed the "best"? The labels aren't on this graph, but the ones doing well were Trust and Terror. The worst was Rage (extreme Anger).

equitybot-investing-results-jun11

 

How do other managed accounts perform? According to the various websites, these are the numbers I’ve found.

Janus (Growth & Income): 7.35%
Fidelity (VIP Growth & Income): 4.70%
Franklin (Large Cap Equity): 0.46%
American Funds (The Income Fund of America): -1.23%
Vanguard (Growth and Income): 4.03%

This would put EquityBot v1.0 as dead last. Good thing this was a simulated bank account.

I’m hoping that v1.1 will do better. Eight emotions. Let’s see how it goes.

 

Machine Data Dreams: Barbie Video Girl Cam

One of the cameras they have here at the Signal Culture Residency is the Barbie Video Girl cam. This was a camera embedded inside a Barbie doll, produced in 2010.

The device was discontinued, most notably after the FBI accidentally leaked a warning about possible predatory misuses of the camera, a warning that is patently ridiculous.

The interface is awkward. The camera can’t be remotely activated. It’s troublesome to get the files off the device. The resolution is poor, but the quality is mesmerizing.

 

barbie_disassembly_1

The real perversion is the way you have to change the batteries for the camera, by pulling down Barbie’s pants and then opening up her leg with a screwdriver.

b-diss

I can only imagine kids wondering if the idealized female form is some sort of robot.

barbie_disassembly_3

The footage it takes is great. I brought it first to the local antique store, where I shot some of the many dolls for sale.

 

 

And, of course, I had to hit up the machines at Signal Culture to do a live analog remix using the Wobbulator and Jones Colorizer.

In the evening, as dusk approached, I took Barbie to the Evergreen Cemetery in Owego, which has gravestones dating from the 1850s and is still an active burial ground.

Here, Barbie contemplated her own mortality.

barbie_cemetery barbie_close_cross barbie_good barbie_gravestone_1 barbie_headstore barbie_warren

barbie_cemetery_mother

It was disconcerting for a grown man to be holding a Barbie doll with an outstretched arm to capture this footage, but I was pretty happy with the results.

I made this short edit.

And remixed with the Wobbulator. I decided to make a melodic harmony (life), with digital noise (death) in a move to mirror the cemetery — a site of transition between the living and the dead.

How does this look in my Machine Data Dreams software?

You can see the waveform here — the 2nd channel is run through the Critter & Guitari Video Scope.

Screen Shot 2015-08-04 at 10.43.09 AM

And the 3D model looks promising, though once again, I will work on these post-residency.

Screen Shot 2015-08-04 at 10.45.04 AM

Machine Data Dreams: Critter & Guitari Video Scope

Not to be confused with Deleuze and Guattari, this is a company that makes various hardware music synths.

For my new project, Machine Data Dreams, I’m looking at how machines might “think”, starting with the amazing analog video machines at Signal Culture.

signal_culture-fullsetup

This morning, I successfully stabilized my Arduino data logger. This captures the raw video signal from any device with RCA outputs and stores values at a sampling rate of ~3600 Hz.

It obviously misses a lot of the samples, but that's the point: a machine-to-machine listener, bypassing any sort of standard digitizing software.

data_logger

For my first data-logging experiment, I decided to focus on this device, the Critter & Guitari Video Scope, which takes audio and converts it to a video waveform.

critterguitari_3 critterguitari_2 critterguitari_1

Using the synths, I patched and modulated various waveforms. I've never worked with this kind of system until a few days ago, so I'm new to the concept of control voltages.

audio_sythn

This is the 15-minute composition that I made for the data-logger.

Critter & Guitari Videoscope Composition (below)

And the captured output, in my custom OpenFrameworks software.

 

Screen Shot 2015-08-02 at 10.56.15 PM

The 3D model is very preliminary at this point, but I am getting some solid waveform output into a 3D shape. I'll be developing this in the next few months. But since I only have a week at Signal Culture, I'll tackle the 3D-shape generation later.

Screen Shot 2015-08-02 at 11.02.25 PM

My data logger can handle 2 channels of video, so I’m experimenting with outputting the video signal as sound and then running it back through the C&G Videoscope.

This is the Amiga Harmonizer output, which looks great by itself. The audio, however, treated as a video signal, comes out sounding like noise, as expected.

But the waveforms are compelling. There is a solid band at the bottom, which is the horizontal sync pulse. This is the signature of any composite (NTSC) device.

2000px-Composite_Video.svg

 

So, every device I log should have this signal at the bottom, which you can see below.

Screen Shot 2015-08-02 at 10.58.12 PM
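Picking out that sync band from the logged samples is simple in principle: in composite video, the sync tips sit at the lowest voltage level, so thresholding near the sample minimum isolates them. A rough sketch with made-up 10-bit ADC values (the logger's ~3600 Hz rate heavily undersamples real NTSC line timing, so this is conceptual rather than a decoder):

```python
# Rough sketch: find the horizontal-sync band in raw ADC samples.
# Illustrative only; the sample values are made up.
def sync_indices(samples, margin=30):
    """Indices of samples within `margin` counts of the signal minimum,
    i.e. the solid band at the bottom of the waveform plots."""
    floor = min(samples)
    return [i for i, v in enumerate(samples) if v <= floor + margin]

capture = [40, 42, 300, 650, 820, 41, 39, 500, 710, 43]  # made-up capture
print(sync_indices(capture))  # positions of the low "sync" samples
```

In a real capture, the density of these low samples is what draws that solid band along the bottom of the plot.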

Once again, the 3D forms I’ve generated in OpenFrameworks and then opened up in Meshlab are just to show that I’m capturing some sort of raw waveform data.

Screen Shot 2015-08-02 at 11.00.14 PM

Atari Adventure Synth

Hands down, my favorite Atari game when I was a kid was Adventure (2). The dragons looked like giant ducks, your avatar was just a square, and a bat wreaked chaos by stealing your objects.

In the ongoing research for my new Machine Data Dreams project, beginning here at Signal Culture, I’ve been playing with the analog video and audio synths.

Yesterday afternoon, I explored the town of Owego. I ran across a used DVD, CD & electronics store and bought an Atari Flashback Console for $25. I didn’t even know these existed.

blog_pic

I can plug it directly into their video synth system. After futzing around with the various patch cables, I came up with this 5-minute composition, which shows me playing the game. The audio sounds like marching with dirty noise levels.

Also, here is the latest 3D model from my code, which now has a true 3D axis for data-plotting.

Time is one axis, video signal is another, audio signal is the third.
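The plotting itself amounts to turning each pair of synchronized samples into a vertex. A minimal sketch of that mapping (my actual code is in OpenFrameworks; this illustrative Python version uses made-up sample values):

```python
# Sketch of the 3D data-plot mapping: time on one axis, video level
# on another, audio level on the third. Illustrative only; the
# project's actual code is written in OpenFrameworks.
def samples_to_vertices(video, audio, rate=3600.0):
    """One (time, video, audio) vertex per synchronized sample pair,
    assuming the logger's ~3600 Hz sampling rate."""
    return [(i / rate, v, a)
            for i, (v, a) in enumerate(zip(video, audio))]

video = [512, 530, 498, 470]   # made-up ADC values
audio = [300, 310, 305, 290]
for t, v, a in samples_to_vertices(video, audio):
    print(f"{t:.6f} {v} {a}")
```

From a vertex list like this, the mesh-building step can extrude or connect points into the cube-plots shown in the screenshots.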

Screen Shot 2015-07-31 at 9.26.05 PM

And a crude frequency plot.

Screen Shot 2015-08-01 at 3.03.24 PM

 

Van Gogh Wobbulator

In the first full day of the residency at Signal Culture, I played around with the video and audio synthesizers. It’s a new world for me.

While my focus is on the Machine Data Dreams project, I also want to play with what they have and get familiar with the amazing analog equipment.

I started with this 2 minute video, which I shot earlier this summer at Musee d’Orsay. I had to document the odd spectacle: visitor after visitor would take photos of this famous Van Gogh self-portrait…despite the fact you can get a higher-quality version online.

I ran this through a few patches and into the Wobbulator, which affects the electronic signal on the CRT itself.

20150730_200415

 

 

20150730_152742

Ewa Justka, the toolmaker-in-residence here, who is building her own audio synthesizer, spruced up the accompanying audio. I captured a 20-minute sample.

ewa-blog-trash

What I love about the result is that the repetitive 2-minute video takes on its own life, as the two of us tweaked knobs, made live patches and laughed a lot.

Introducing Machine Data Dreams

Earlier this year, I received an Individual Artist Commission grant from the San Francisco Arts Commission for a new project called Machine Data Dreams.

I was notified months ago, but the project was on the back-burner until now — where I’m beginning some initial research and experiments at a residency called Signal Culture. I expect full immersion in the fall.

The project description
Machine Data Dreams will be a large-scale sculptural installation that maps the emerging sentience of machines (laptops, phones, appliances) into physical form. Using the language of machines — software program code  — as linguistic data points, Scott Kildall will write custom algorithms that translate how computers perceive the world into physical representations that humans can experience.

The project’s narrative proposition is that machines are currently prosthetic extensions of ourselves, and in the future, they will transcend into something sentient. Computer chips not only run our laptops and phones, but increasingly our automobiles, our houses, our appliances and more. They are ubiquitous and yet, often silent. The key to understanding their perspective of the world is to envision how machines view the world, in an act of synthetic synesthesia.

Scott will write software code that will perform linguistic analysis on machine syntax from embedded systems — human-programmable machines that range from complex, general-purpose devices (laptops and phones) to specific-use machines (refrigerators, elevators, etc.). Scott's code will generate virtual 3D geometric monumental sculptures. More complex structures will reflect the higher-level machines, and simpler structures will be generated from lower-level devices. We are intrigued by the experimental nature of what the form will take — this is something that he will not be able to plan.
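As a purely hypothetical illustration of the idea (not the project's actual algorithm), the complexity of a machine's syntax could be reduced to a number that drives the geometry, say by mapping the count of distinct tokens in a snippet of firmware to the vertex count of a polygon:

```python
# Hypothetical sketch of the code-to-geometry idea (not the project's
# actual algorithm): score a snippet of machine syntax by its distinct
# tokens, then use that score as the vertex count of a regular polygon.
import math
import re

def vertex_count(source, lo=3, hi=24):
    """Map a snippet's distinct-token count into the range [lo, hi]."""
    tokens = set(re.findall(r"\w+|[^\w\s]", source))
    return max(lo, min(hi, len(tokens)))

def polygon(n, radius=1.0):
    """Vertices of a regular n-gon in the XY plane."""
    return [(radius * math.cos(2 * math.pi * k / n),
             radius * math.sin(2 * math.pi * k / n)) for k in range(n)]

snippet = "while (temp > setpoint) { fan_on(); }"  # made-up firmware line
n = vertex_count(snippet)
print(n, polygon(n)[0])
```

Under this toy scheme, richer syntax (a laptop's software) yields denser geometry than a thermostat's few lines, echoing the complex-versus-simple distinction described above.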

kildall_5

Machine Data Dreams will utilize 3D printing and laser-cutting techniques, which are digital fabrication techniques that are changing how sculpture can be created — entirely from software algorithms. Simple and hidden electronics will control LED lights to imbue a sense of consciousness to the artwork. Plastic joints will be connected via aluminum dowels to form an armature of irregular polygons. The exterior panels will be clad by a semi-translucent acrylic, which will be adhered magnetically to the large-sized structures. Various installations can easily be disassembled and reassembled.

The project will build on my experiments with the Polycon Construction Kit by Michael Ang, where I'm doing some source-code collaboration. This will heat up in the fall.

PCK-small-mountain-768x1024

At Signal Culture, I have 1 week of residency time. It’s short and sweet. I get to play with devices such as the Wobbulator, originally built by Nam June Paik and video engineer Shuya Abe.

The folks at Signal Culture built their own from the original designs.

What am I doing here, with analog synths and other devices? Well, I’m working with a home-built Arduino data logger that captures raw analog video signals (I will later modify it for audio).

20150730_200511

I've optimized the code to capture about 3600 signals/second. The idea is to get a raw data feed of what a machine might be "saying", or the electronic signature of a machine.

20150730_150950

Does it work? Well, I hooked it up to a Commodore Amiga (yes, they have one).

I captured about 30 seconds of video and ran it through a crude version of my custom 3D data-generation software, which generates models. Here is what I got. Whoa…

It is definitely capturing something.

Screen Shot 2015-07-30 at 10.08.49 PM

It's early research. The forms are flat 3D cube-plots, but also very promising.
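
A toy version of that cube-plot mapping (my reconstruction of the idea, not the actual software): lay the captured samples out on a grid and give each one a cube whose height follows the signal amplitude.

```python
def cube_plot(samples, width=4, cube=1.0):
    """Place one cube per sample on a (width x rows) grid; the cube's
    height is proportional to the sample value. Returns (x, y, z, height)
    tuples, which downstream code could turn into OBJ geometry."""
    cubes = []
    for i, s in enumerate(samples):
        x = (i % width) * cube       # column on the grid
        y = (i // width) * cube      # row on the grid
        cubes.append((x, y, 0.0, s * cube))
    return cubes

plot = cube_plot([0.2, 0.8, 0.5, 1.0, 0.1], width=4)
assert len(plot) == 5
assert plot[4][:2] == (0.0, 1.0)   # fifth sample wraps to the second row
```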

People I Want To Punch in the Face

“People I Want to Punch in the Face” is a book sold at the Whitney (and apparently on Etsy as well) with blank pages.

In one of them, unbeknownst to the bookstore staff, assorted visitors filled in their choices.

11222053_10153612163059274_906440019349147799_n

11058240_10153612163614274_2887715013415183768_n 11145561_10153612163409274_2681781276863721911_n 11214197_10153612163304274_2279591990549537961_n 11220458_10153612163539274_7186057447222198758_n 11220879_10153612163809274_422588321701773217_n  11223531_10153612163859274_8381130295349143957_n 11224733_10153612163924274_8767296298503521774_n 11702794_10153612163894274_5660821899910988506_n 11796299_10153612163389274_5730743001320813356_n 11800074_10153612163124274_3322783156749186006_n 11800312_10153612163149274_169491647891715725_n 11811389_10153612163489274_7267997940101690339_n 11811500_10153612163334274_2498593643581833620_n 11811505_10153612163654274_978369019232892087_n 11816853_10153612163719274_5747074713365788879_n 11825711_10153612163984274_5919445799299516647_n 11828615_10153612163759274_6926645940978707847_n

Bad Data: SF Evictions and Airbnb

The inevitable conversation about evictions at every San Francisco party…art organizations closing, friends getting evicted…the city is changing. It has become a boring topic, yet it is absolutely, completely 100% real.

For the Bad Data series — 12 data-visualizations depicting socially-polarized, scientifically dubious and morally ambiguous datasets, each etched onto an aluminum honeycomb panel — I am featuring two works, 18 Years of Evictions in San Francisco and 2015 AirBnb Listings, for exactly this reason. These two etchings are the centerpieces of the show.

evictions_airbnb

This is the reality of San Francisco: it is changing, and the data is 'bad' — not in the sense of inaccurate, but in the deeper sense of cultural malaise.

By the way, the reception for the “Bad Data” show is this Friday (July 24, 2015) at A Simple Collective, and the show runs through August 1st.

The Anti-Eviction Mapping Project has done a great job of aggregating data on this discouraging topic, hand-cleaning it and producing interactive maps that animate over time. They’re even using the Stamen map tiles, which are the same ones that I used for my Water Works project.

Screen Shot 2015-07-23 at 4.52.36 PM

When I embarked on the Bad Data series, I reached out to the organization and they assisted me with their data sets. My art colleagues may not know this, but I'm an old-time activist in San Francisco, which helped me in getting the datasets. I know that the story of evictions is not new, though it has certainly never been on this scale.

In 2001, I worked in a now-defunct video activist group called Sleeping Giant, which worked on short videos in the era when Final Cut Pro made video-editing affordable and when anyone with a DV camera could make their own videos. We edited our work, sold DVDs and had local screenings, stirring up the activist community and telling stories from the point-of-view of people on the ground. Sure, now we have Twitter and social media, but at the time, this was a huge deal in breaking apart the top-down structures of media dissemination.

Here is No Nos Vamos, a hastily edited video about evictions in San Francisco. Yes, this was 14 years ago.

I’ve since moved away from video documentary work and towards making artwork: sculpture, performance, video and more. The video-activist work and documentary video in general felt overly confining as a creative tool.

My current artistic focus is to transform datasets using custom software code into physical objects. I’ve been working with the amazing fabrication machines at Autodesk’s Pier 9 facility to make work that was not previously possible.

This dataset (also provided through the SF Rent Board) includes all the no-fault evictions in San Francisco. I got my computer geek on…well, I do try to use my programming powers for non-profit work and artwork.

I mapped the data into vector shapes using the open-source C++ toolkit OpenFrameworks, writing code that transformed the ~9300 data points into plottable shapes, which I could open in Illustrator. I did some work tweaking the strokes and styles.
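
The actual pipeline used OpenFrameworks in C++; a Python sketch of the same kind of mapping (a simple linear projection of lat/lon points into page coordinates, emitted as SVG that Illustrator can open — all names and the bounding box are my assumptions) might look like:

```python
def to_svg(points, bbox, size=400):
    """Map (lon, lat) points into an SVG document for stroke/style
    tweaks in Illustrator. bbox = (min_lon, min_lat, max_lon, max_lat)."""
    min_lon, min_lat, max_lon, max_lat = bbox
    shapes = []
    for lon, lat in points:
        x = (lon - min_lon) / (max_lon - min_lon) * size
        y = (max_lat - lat) / (max_lat - min_lat) * size  # SVG y grows downward
        shapes.append(f'<circle cx="{x:.1f}" cy="{y:.1f}" r="2"/>')
    return (f'<svg xmlns="http://www.w3.org/2000/svg" '
            f'width="{size}" height="{size}">' + "".join(shapes) + "</svg>")

# Rough San Francisco bounding box (an assumption for illustration)
sf = (-122.52, 37.70, -122.35, 37.83)
svg = to_svg([(-122.42, 37.77)], sf)
assert svg.count("<circle") == 1
```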

sf_evictions_20x20

This is what the etching looks like from above, once I ran it through the water jet. There were a lot of settings and tests to get to this point, but the final results were beautiful.

waterjet-overhead

The material is a 3/4″ honeycomb aluminum. I tuned the water-jet's high pressure to pierce through the top layer, but not the bottom layer. However, the water has to go somewhere. The collisions against the honeycomb produce unpredictable results.

…just like the evictions themselves. We don’t know the full effect of displacement, but can only guess as the city is rapidly becoming less diverse. The result is below, a 20″ x 20″ etching.

Bad Data: 18 Years of San Francisco Evictions

baddata_sfevictions

The Airbnb debate is a little less clear-cut. Yes, I do use Airbnb. It is incredibly convenient. I save money while traveling and also see neighborhoods I’d otherwise miss. However, the organization and its effect on city economies is a contentious one.

For example, there is the hotel tax in San Francisco, which Airbnb finally consented to paying after 3 years — 14% to the city of San Francisco. Note: this was after they already had a successful business.

There also seems to be a long-term effect on rent. Folks, and I've met several who do this, are renting out places as tenants on Airbnb. Some don't actually live in their apartments any longer. The effect is to take a unit off the rental market and turn it into a vacation rental. Some argue that this also skirts rent-control law in the first place, which was designed as a compromise solution between landlords and tenants.

There are potential zoning issues, as well…a myriad of issues around Airbnb.

BAD DATA: 2015 AIRBNB LISTINGS, etching file

airbnb_sf

In any case, the locations of the Airbnb rentals (self-reported, not a complete list) certainly fit the premise of the Bad Data series. It's an amazing dataset. Thanks to darkanddifficult.com for this data source.

BAD DATA: 2015 Airbnb Listings

baddata_airbnb

Selling Bad Data

The reception for my solo show “Bad Data”, featuring the Bad Data series is this Friday (July 24, 2015) at A Simple Collective.

Date: July 24th, 2015
Time: 7-9pm
Where: ASC Projects, 2830 20th Street (btw Bryant and York), Suite 105, San Francisco

The question I had when pricing these works was: how do you sell Bad Data? The material costs were relatively low. The labor time was high. And the data sets were (mostly) public.

We came up with this price list, subject to change.

///  Water-jet etched aluminum honeycomb:

baddata_sfevictions
18 Years of San Francisco Evictions, 2015 | 20 x 20 inches | $1,200
Data source: The Anti-Eviction Mapping Project and the SF Rent Board


baddata_airbnb
2015 AirBnB Listings in San Francisco, 2015 | 20 x 20 inches | $1,200
Data source: darkanddifficult.com


baddata_hauntedlocations
Worldwide Haunted Locations, 2015 | 24 x 12 inches | $650
Data source: Wikipedia


baddata_ufosightings

Worldwide UFO Sightings, 2015 | 24 x 12 inches | $650
Data source: National UFO Reporting Center (NUFORC)


baddata_missouriabortionalternatives

Missouri Abortion Alternatives, 2015 | 12 x 12 inches
Data source: data.gov (U.S. Government) | $150


baddata_socalstarbucks

Southern California Starbucks, 2015 | 12 x 8 inches | $80
Data source: https://github.com/ali-ce


baddata_usprisons

U.S. Prisons, 2015 | 18 x 10 inches | $475
Data source: Prison Policy Initiative prisonpolicy.org (via Josh Begley’s GitHub page)


///  Water-jet etched aluminum honeycomb with anodization:

baddata_denvermarijuana

Albuquerque Meth Labs, 2015 | 18 x 12 inches | $475
Data source: http://www.metromapper.org


baddata_usmassshootings

U.S. Mass Shootings (1982-2012), 2015 | 18 x 10 inches | $475
Data source: Mother Jones


baddata_blacklistedips-banner

Blacklisted IPs, 2015 | 20 x 8 ½  inches | $360
Data source: Suricata SSL Blacklist


baddata_databreaches

Internet Data Breaches, 2015 | 20 x 8 ½ inches | $360
Data source: http://www.informationisbeautiful.net

Bad Data, Internet Breaches, Blacklisted IPs

In 1989, I read Neuromancer for the first time. What fascinated me most was not the concept of "cyberspace" that Gibson introduced; rather, it was the physical description of virtual data. The oft-quoted line is:

“The matrix has its roots in primitive arcade games. … Cyberspace. A consensual hallucination experienced daily by billions of legitimate operators, in every nation, by children being taught mathematical concepts. … A graphic representation of data abstracted from banks of every computer in the human system. Unthinkable complexity. Lines of light ranged in the nonspace of the mind, clusters and constellations of data. Like city lights, receding.”

What was this graphic representation of data that struck me at first and has stuck with me ever since? I could only imagine what it could be. This concept of physicalizing virtual data later led to my Data Crystals project. Thank you, Mr. Gibson.

dc_sfart_v1

In Neuromancer, the protagonist Case is a freelance "hacker". The book was published well before Anonymous, back in the days when KILOBAUD was the equivalent of Spectre for the BBS world.

At the time, I thought that there would be no way corporations would put their data in a central place that anyone with a computer and a dial-up connection (and, later, T1, DSL, etc.) could access. That would be incredibly stupid.

And then, the Internet happened, albeit more slowly than people remember. Now hacking and data breaches are commonplace.

My “Bad Data” series — waterjet etchings of ‘bad’ datasets onto aluminum honeycomb panels — capture two aspects of internet hacking: Internet data breaches and Blacklisted IPs.

In these examples, 'bad' has a two-layered meaning. The abrogation of accepted norms of Internet behavior is widely considered a legal, though not always a moral, crime. The data is also 'bad' in the sense that it is incomplete. Data breaches are usually not advertised by the entities that get breached. That would be poor publicity.

For the Bad Data series, I worked not necessarily with the data I wanted, but rather with the data I could get. From Information Is Beautiful, I found this dataset of Internet data breaches.

Screen Shot 2015-07-12 at 8.22.04 PM

What did I discover? …that Washington DC is the leader in breached information. I suspect this is mostly because the U.S. government is the biggest target, rather than because of lax government security. The runner-up is New York City, the center of American finance. Other notable cities are San Francisco, Tehran and Seoul. San Francisco makes sense — the city is home to many internet companies. So does Tehran, which is the target of Western internet attacks, government or otherwise. But Seoul? They claim to be targeted by North Korea. However, as we found out with the Sony Pictures Entertainment hack, North Korea is an easy scapegoat.

BAD DATA: INTERNET DATA BREACHES (BELOW)

baddata_databreaches

Conversely, there are many lists of banned IPs. The one I worked with is the Suricata SSL Blacklist. This may not be the best source, as there are thousands of IP Blacklists, but it is one that is publicly available and reasonably complete. As I’ve learned, you have to work with the data you can get, not necessarily the data you want.
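
The etching lays these addresses out spatially. One crude way to place IPv4 addresses on a grid (my assumption for illustration, not the mapping actually used for the panel) is by their leading octets:

```python
def ip_to_xy(ip):
    """Place an IPv4 address on a 256x256 grid by its first two octets --
    a deliberately crude layout for plotting a blacklist."""
    a, b, _, _ = (int(octet) for octet in ip.split("."))
    return (a, b)

# Documentation-range example addresses, not real blacklist entries
blacklist = ["203.0.113.7", "198.51.100.23"]
points = [ip_to_xy(ip) for ip in blacklist]
assert points[0] == (203, 0)
```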

I ran both of these etched panels through an anodization process, which created a filmy residue on the surface. I'm especially pleased with how the Blacklisted IPs panel came out.

Bad Data: BLACKLISTED IPs (below)

baddata_blacklistedips

Genetic Portraits and Microscope Experiments

I recently finished a new artwork — called Genetic Portraits — which is a series of microscope photographs of laser-etched glass that data-visualize a person’s genetic traits.

I specifically developed this work as an experimental piece, for the Bearing Witness: Surveillance in the Drone Age show. I wanted to look at an extreme example of how we have freely surrendered our own personal data for corporate use. In this case, 23andMe provides a (paid) extensive genetic sequencing package. Many people, including myself have sent in saliva samples to the company, which they then process. From their website, you can get a variety of information, including their projected likelihood that you might be prone to specific diseases based on your genetic traits.

Following my line of inquiry with other projects such as Data Crystals and Water Works, where I wrote algorithms that transformed datasets into physical objects, this project processes an individual's genetic sequence to generate vector files, which I later use to laser-etch onto microscope slides. The full project details are here.

gp_scott_may11

Concept + Material
I began my experiment months earlier, before the project was solidified, by examining the effect of laser-etching on glass underneath a microscope. This stemmed from conversations with some colleagues about the effects of laser-cutting materials. When I looked at the glass underneath a microscope, I saw amazing results: an erratic universe accentuated by curved lines. Even with the same file, each etching is unique. The glass cracks in different ways. Digital fabrication techniques still result in distinct analog effects.

blog-IMG_4106

When the curators of the show, Hanna Regev and Matt McKinley, invited me to submit work on the topic of surveillance, I considered how to leverage various experiments of mine, and came back to this one, which would be a solid combination of material and concept: genetic data etched onto microscope slides and then shown at a macro scale as 20" x 15" digital prints.

Surrendering our Data
I had so many questions about my genetic data. Is the research being shared? Do we have ownership of this data? Does 23andMe even ask for user consent? As many articles point out, the answers are exactly what we fear. Their user agreement states that "authorized personnel of 23andMe" can use the data for research. This official-sounding text simply means that 23andMe decides who gets access to the genetic data I submitted. 23andMe is not unique: other gene-sequencing companies have similar provisions, as the article suggests.

Some proponents suggest that 23andMe is helping the research front while still making money. It's capitalism at work. This article in Scientific American sums up the privacy concerns. Your data becomes a marketing tool, and people like me have handed a valuable dataset to a corporation, which can then sell us products based on the very data we provided. I completed the circle and even paid for it.

However, what concerns me even more than 23andMe selling or using the data — after all, I did provide my genetic data, fully aware of its potential use — is the statistical accuracy of genetic data. Some studies have reported a Eurocentric bias in the data, and the FDA has also battled with 23andMe regarding the health data they provide. The majority of the data (with the exception of Bloom's Syndrome) simply wasn't predictive enough. Too many people had false positives with the DNA testing, which not only causes worry and stress but could lead to customers taking pre-emptive measures, such as getting a mastectomy, if they mistakenly believe they are genetically predisposed to breast cancer.

A deeper look at the 23andMe site shows a variety of charts that make it appear as if you might be susceptible (or immune) to certain traits. For example, I have lower-than-average odds of having "Restless Leg Syndrome", which is probably the only neurological disorder that makes most people laugh when they hear about it. My genetic odds of having it are simply listed as a percentage.

Our brains aren't very good with probabilistic models, so we tend to inflate and deflate statistics. Hence one of the many problems with false positives.

And, as I later discovered, from an empirical standpoint, my own genetic data strayed far from my actual personality. Our DNA simply does not correspond closely enough to reality.

Screen Shot 2015-06-16 at 11.06.44 AM

Data Acquisition and Mapping
From the 23andMe site, you can download your raw genetic data. The resulting many-megabyte file is full of rsid data and the actual allele sequences.

Screen Shot 2015-06-15 at 10.37.08 AM

Isolating useful information from this was tricky. I cross-referenced some of the rsids used for common traits from 23andMe with the SNP database. At first I wanted to map ALL of the genetic data, but the dataset was too complex for this short experiment and a straightforward artwork.
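
For reference, the raw export is a simple tab-separated text file: comment lines beginning with '#', then columns of rsid, chromosome, position and genotype. A minimal parser (my sketch; the rsid shown is the commonly cited lactose-tolerance SNP):

```python
def parse_raw_genome(text):
    """Parse a 23andMe-style raw-data export: '#' comment lines, then
    tab-separated columns of rsid, chromosome, position, genotype."""
    genome = {}
    for line in text.splitlines():
        if not line or line.startswith("#"):
            continue
        rsid, chrom, pos, genotype = line.split("\t")
        genome[rsid] = genotype
    return genome

sample = "# rsid\tchromosome\tposition\tgenotype\nrs4988235\t2\t136608646\tAA"
genome = parse_raw_genome(sample)
assert genome["rs4988235"] == "AA"
```

With the file reduced to an rsid-to-genotype dictionary, cross-referencing trait rsids against the SNP database becomes a matter of lookups.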

Instead, I worked with some specific indicators that correlate to physiological traits, such as lactose tolerance, sprinter-based athleticism, norovirus resistance, pain sensitivity, the "math" gene and cilantro aversion — 15 in total. I avoided genes that might correlate to general medical conditions like Alzheimer's and metabolism.

For each trait I cross-referenced the SNP database with 23andMe data to make sure the allele values aligned properly. This was arduous at best.

There was also a limit on the physical space for etching the slide: more than 24 marks or etchings on one plate would be chaotic. Through days of experimentation, I found that 12-18 curved lines made for compelling microscope photography.

To map the data onto the slide, I modified Golan Levin's decades-old Yellowtail Processing sketch, which I had been using as a program to generate curved lines on my test slides. I found that he had developed an elegant data-storage mechanism for captured gestures. From the isolated rsids, I then wrote code that assigned weighted numbers to allele values (i.e. AA = 1, AG = 2, GG = 3, depending on the rsid).

gp_illustrator

Based on the rsid numbers themselves, my code generated (x, y) anchor points and curves with the allele values changing the shape of each curve. I spent some time tweaking the algorithm and moving the anchor points. Eventually, my algorithm produced this kind of result, based on the rsids.
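
A sketch of both steps — the allele weighting and the rsid-driven anchor points — follows. All function names and the exact mapping are my assumptions for illustration, not the actual code:

```python
def allele_weight(genotype, order=("A", "G")):
    """Weight a two-letter genotype, e.g. AA=1, AG=2, GG=3 for an A/G
    SNP (the relevant letter pair varies per rsid)."""
    return 1 + sum(order.index(base) for base in genotype)

def curve_anchors(rsid, weight, n=4, size=100):
    """Derive deterministic (x, y) anchor points from the rsid's digits,
    letting heavier allele weights bend the curve further."""
    digits = [int(d) for d in rsid if d.isdigit()]
    pts = []
    for i in range(n):
        x = (digits[i % len(digits)] / 9) * size
        y = (digits[-(i % len(digits)) - 1] / 9) * size * (weight / 3)
        pts.append((x, y))
    return pts

assert allele_weight("AA") == 1
assert allele_weight("AG") == 2
assert allele_weight("GG") == 3
assert len(curve_anchors("rs4988235", allele_weight("AG"))) == 4
```

Because the anchors are derived from the rsid itself, the same SNP always produces the same curve, while the genotype modulates its shape.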

genome_scott_notated

The question I always get asked about my data-translation projects is about legibility: how can you infer results from the artwork? It's a silly question, like asking a Kindle engineer to analyze a Shakespeare play. A designer of data-visualization will try to tell a story using data and visual imagery.

My research and work focus on deep experimentation with the formal properties of sculpture — or physical forms — based on data. I want to push the boundaries of what art can look like, continuing the lineage of algorithmically-generated work by artists such as Sol LeWitt, Sonya Rapoport and Casey Reas.

Is it legible? Slightly so. Does it produce interesting results? I hope so.

gp_slide_image

But, with this project, I’ve learned so much about genetic data — and even more about the inaccuracies involved. It’s still amazing to talk about the science that I’ve learned in the process of art-making.

Each of my 5 samples looks a little bit different. This is the mapping of actual genetic traits of my own sample and that of one other volunteer named “Nancy”.

genome_scott_notated

Genetic Traits for Scott (ABOVE)
GENETIC TRAITS FOR NANCY (BELOW)

genome_scott_notated

We both share a number of genetic traits, such as the "empathy" gene and curly hair. The latter seems correct — we both have remarkably straight hair. I'm not sure about the empathy part. Neither one of us is lactose intolerant (also true in reality).

But the test accuracy breaks down on several specific points. Nancy and I do have several differences, including athletic predisposition. I have the "sprinter" gene, which means I should be great at fast running. I also do not have the math gene. Neither one of these is at all true.

I’m much more suited to endurance sports such as long-distance cycling and my math skills are easily in the 99th percentile. From my own anecdotal standpoint, except for well-trodden genetics like eye color, cilantro aversion and curly hair, the 23andMe results often fail.

The genetic data simply doesn't seem to support the physical results. DNA is complex. We know this; it is non-predictive. Our genotype results in different phenotypes, and the environmental factors are too complex for us to understand with current technology.

Back to the point about legibility. My artwork is deliberately non-legible based on the fact that the genetic data isn’t predictive. Other mapping projects such as Water Works are much more readable.

I'm not sure where this experiment will go. I've been happy with the results of the portraits, but I'd like to pursue this further, perhaps with scientists who would be interested in collaborating around the genetic data.

FOUR FINAL SLIDE ETCHINGS  (BELOW)

gp_allison_may11

 

gp_michele_may11 gp_nancy_may11 gp_scott_may11

Dérive in Paris

The first day after arriving in Paris, we embarked on a dérive — the French word for a "drift" — an unplanned journey (usually) through an urban space. The idea is to immerse yourself in the moment, the now of a city. No maps, no mobile phones, no direction; just walk and make choices about where to go based on your senses: the smells, sights and sounds of the city. The hope was that this experiment would yield some sort of authentic, subjective experience, devoid of the usual modes of organization.

I did this once before, in Berlin, while reading Rebecca Solnit’s A Field Guide to Getting Lost. That time was by bicycle and I spent the first day meandering through the city with no direction. Every couple of hours, I’d stop for a cup of coffee or a snack and read Solnit’s book, which covered themes of mental and emotional wandering. It was profound. I noticed odd things, mostly architectural.

solnit_gettinglost

My recommendation is to do this when you first arrive in an unfamiliar city, after getting a night’s sleep but before you’ve done anything else. At this point, your body is still jet-lagged. Daily patterns have yet to be formed. Memories are unestablished. The brain is at its most receptive state.

IMG_1178

We started here, near where we were staying. All I knew was that the 6th Arrondissement was on the Left Bank. I've since become familiar with the shell-like ordering of the city's districts.

We picked the direction that we most "liked", based on whatever looked best down the street.

IMG_1179

When you're not trying to get somewhere or having a conversation about something, you notice funny things, like tons of push-scooters locked with cheap cable locks everywhere.

IMG_1183

Or custom-painted tiles like these. Of course, these are "touristy", but the walk pushed these labels out of my mind.

IMG_1184

I wanted to document the dérive but didn't want to be in a documentation state of mind, so I just snapped photos without much consideration for what I was shooting.

IMG_1185

The space-for-women was inviting, but also seemed to be closed. It was some sort of library.

IMG_1187

We never would have found this old store on Yelp, but it was incredible. Lots of old science and medical devices and posters were inside! The dérive soon meant that we could go inside shops, and here is where my expectations of the 1950s Paris that Guy Debord lived in quickly got dashed on the rocks. There were tons of distracting shops and restaurants everywhere. I guess that was the case 60 years ago as well, but I'm sure capitalist advertising techniques have advanced significantly since his time.

IMG_1190

We found some contemporary art galleries, too.

IMG_1192

Though the Jesus spinning on the turntable didn't "work" for me.

IMG_1193

With two people, the dérive meant compromising. Sometimes I wanted to walk on one side of the street and Victoria would walk on the other. And when we made a decision, we had to pick one person's "way" if we disagreed. I would have been curious to see where my own choices would have led me.

IMG_1194

Sure, you notice all sorts of details.

IMG_1195

And signs in French, mostly about parking rules.

IMG_1196

Interesting chimneys on buildings.

IMG_1198

You're not supposed to stop to do errands, but we had to get some coffee capsules for the espresso machine in our room. And then I noticed the shrink-wrapped cheese.

IMG_1201

Wide boulevards with complex intersections. Surprisingly little traffic noise and congestion for a major city.

IMG_1202

Street signs and greenery.

IMG_1203

Plaques with names of historical figures and where they once lived.

IMG_1204

The smell of dog shit everywhere. Cigarettes, lots of cigarette smoke. I still hate getting an exhale of smoke in my face.

IMG_1206

Many apartment buildings with exactly the same window dressing. Why do only the 2nd-story windows have planters on them?

IMG_1207

Everywhere, ads for various services, including "Tantra Massage" on drain pipes.

IMG_1209

A giant old wooden door with intricate carvings.

IMG_1210

An old church interspersed amongst the apartment buildings.

IMG_1211

Odd urban compositions.

IMG_1213

A time portal to the year 1858.

IMG_1214

Bubble windows.

IMG_1215

Ah, the ironwork.

IMG_1217

A gold-leafing shop. Isn't it dangerous to leave this in the window, inviting potential thievery?

IMG_1218

Real estate ads everywhere. Prices are comparable to San Francisco.

IMG_1219

French flags outside what look like government buildings.

IMG_1223

Lots of small dogs, and apparently it's okay to bring them into the restaurant with you.

IMG_1222

A sign for a movie theater…or something else.

IMG_1226

The most amazing air vent I've ever seen.

IMG_1227

Reserving your parking spot with trash.

IMG_1228

The stop-sign figurine is fatter than the walk-sign figurine.

IMG_1229

Goats in a park.

IMG_1230

One cannot escape the Eiffel Tower as a point of orientation.

IMG_1231

Bodily functions rule in the end. The toilets are free, but the lines are long.

IMG_1232

Make Art, Not Landfill

This Thursday (June 8, 2015) will be the opening of Make Art, Not Landfill, the 25th Anniversary show of the Recology Artists in Residence program. If you are in San Francisco, you should go to the show.

I first heard about the program in the late 1990s. In 2010, I saw the 20th Anniversary show, and later that year, applied and was accepted. I started my residency in February 2011. During this time, I made a series called “2049” — where I played the role of a prospector from the year 2049, who was mining the dump for resources to construct “Imaginary Devices” to help me survive.

skl_051811_050

These included items such as the Sniffer, the 2049 Hotline, the Universal Mailbox, Reality Simulator and Infinite Power. Each one was accompanied by a blueprint with imaginary symbols on it.

skl_051711_053_eq

Using these scavenged items, I built a complex narrative around some sort of future collapse. The work was odd, funny and touched veins of consumption for many people. Dorothy Santos did a writeup for Asterisk Magazine on the 2049 Series, which captured some of the feelings evoked by the sculptures, paintings and videos.

skl_051711_047_eq

Part of the deal with being an artist-in-residence at The Dump is that they get to keep one of your artworks. And exhibitions like this are exactly the reason why. The good folks at Recology put on shows, featuring work from their program. The artwork that they elected to retain was the Universal Mailbox (below), which will be in tomorrow’s show.

I constructed the Universal Mailbox from a discarded UPS keypad, scrap wood, a found satellite dish and dryer hose. I found the paint at the dump as well.

skl_051811_018

I used a similar technique for the 2049 Hotline, and during the opening, friends of mine played the role of "emissaries from the year 2049", who would talk to exhibit-goers on the phone. Their only directive was to stay in character — they had to be from the future, but the environment they imagined could be anything they wanted.

skl_051711_003

The artwork later traveled to the New York Hall of Science for their Regeneration show (walkthrough below).

This was a one-way mission for many of my sculptures: they were fragile to begin with, and 4 months at an interactive science museum decimated the work. I knew this would happen. I always viewed the sculptures as temporary, and I was even able to save some money on shipping costs. The artwork, after all, came from the dump!

skl_051811_001_prs

The blueprints survived, as well as rebuilt versions of the Universal Mailbox and the 2049 Hotline, which I will continue to exhibit. The 2049 project and my 4 months at the dump were a lesson in attachment to material things, which flow from hands to hands, eventually to landfill and hopefully, sometimes, to art.

Water Works, NPR and Imagination

I recently achieved one of my life goals. I was on NPR!

The article, “Artists In Residence Give High-Tech Projects A Human Touch” discusses my Water Works* project as well as artwork by Laura Devendorf, and more generally, the artist-in-residence program at Autodesk.

sewer-works

“Water Works” 3D-printed Sewer Map in 3D printer at Autodesk

The production quality and caliber of the reporting are high. It's NPR, after all. But what makes this piece important is that it talks about the value of artists, because they are the ones who infuse imagination into culture. The reporter, Laura Sydell, did a fantastic job of condensing this thought into a 6-minute radio program.

Arts funding has been cut from many government programs, at least in the United States. And education curricula increasingly teach engineering and technology over the humanities. But without the fine arts and the teaching of actual creativity (and not just startup strategies), how can we, as a society, be truly creative?

Well, that's what this article suggests. And specifically, that corporations such as Autodesk will benefit from having artists in their facilities.

Perhaps one problem is that “imagination” is not quantifiable. We have the ability to measure so much: financial impact, number of clicks, test scores and more, but creativity and imagination, not so much. These are — at least to date — aspects of our culture that we cannot track on our phones or run web analytics on.

So, embracing imagination means embracing uncertainty, which is an existential problem that technology will have to cope with along the way.

waterwork_in_lobby

“WATER WORKS” Installed in AUTODESK LOBBY

At the end of the article, the reporter talks about Xerox PARC in the 1970s, which had a thriving artist-in-residence program. Early computer technology was filled with imagination, which is why this era was ripe with technology and excitement.

This is close to my heart. My father, Gary Kildall, was a key computer scientist back in the 1970s. His passions when he was in school were mathematics and art. By the time I was a kid, he was no longer drawing or working in the wood shop; instead, he was designing the computer architectures that defined the personal computer. He passed away in 1994, but I often wish he could see the kind of work I'm doing with art + technology now.

gary-kildall

Gary Kildall on television, examining computer hardware, circa 1981

* Water Works was part of Creative Code Fellowship in 2014 with support from Gray Area, Stamen Design and Autodesk.

EEG Dinner Party @ SXSW

I’m experimenting with a new model for sustainable art practice: leveraging the intellectual property from my technology-infused artworks into lucrative contracts. And why not? Artists are creative engines and deserve to be compensated.

Teaching is how many of my ilk get their income and every professor I’ve talked to about the university-academia track constantly moans about the silo-like environment, the petty politics, the drudgery of the adjunct lifestyle and the low pay. They are overworked and burdened by administration. No thanks.

The other option is full-time work. Recently (2012-13), I was on full-time staff at the Exploratorium as a New Media Exhibit Developer. I love the people, the DIY shop environment and the mission of the organization. It was there that I fully re-engaged with my software coding practice and learned some of the basics of data visualization. But ultimately, a full-time job meant that I wasn’t making my own artwork. My creative spirit was dying. I couldn’t let this happen, so when my fixed-term position ended, I decided not to pursue full-time employment. I now work with the Explo on selected part-time projects.

This year, I started an LLC, and in January I landed my first contract: the software coding, technical design and visitor interaction for a project called “EEG Dinner Party”, part of a larger installation for General Electric called the “GE BBQ Research Center”, presented at SXSW in Austin.

ge-bbq-research-center
The folks I directly worked with were Sheet Metal Alchemist (Lara and Sean, who run the company, below) — they are a fantastic company that builds custom-fabrication solutions. They helped General Electric produce this interactive experience for SXSW, which featured a giant BBQ smoker with sensors and the EEG Dinner Party, the portion I was working on.

eegdinnerparty-17

My “intellectual property” was my artwork, After Thought (2009), which I made while an artist-in-residence at Eyebeam Art + Technology Center in New York. This is a portable personality-testing kit using EEG brainwaves and flashcards, which generates a personal video that expresses your “true” personality. I dressed in a lab coat and directed viewers in a short, 5-10 minute experiment with technology and EEG testing.

afterthought_main-1024x683

When the folks at Sheet Metal Alchemist (SMA) contacted me about doing the EEG work, I was confident that I could transform the ideas behind this interactive experience into one that would work for SXSW and General Electric.

From the get-go, I knew this wasn’t my art project and that I wouldn’t have my usual creative control. For one, General Electric had a very specific message: “Your Brain on BBQ”, and the entire SXSW site was designed as a research lab of sorts. It was a promotional and branding engine for GE, which provided free meat and beer for the event.

The work that SMA did was just a portion of what was going on, albeit the attractor (the smoker) and the high-tech demo (the EEG), but GE also had videos, displays in vitrines, speakers, DJs and a mix-your-own-BBQ-sauce station. New Creatures put the entire event together (check out the video at the end of this post).

The irony: I don’t even eat meat.

eegdinnerparty-20

I tread the delicate balance of doing client work combined with my own artistic and technical designs. While I don’t work for GE, nor is their message mine, we were all in it for a temporary goal: to produce a successful event. The end result was an odd compromise of social messaging, technology and visitor experience, and it turned out to be a very successful installation.

The concept was that we would conduct a series of “dinner parties” (actually held during the day) where two tables of four people each would sit down and eat a 5-minute “meal”. We would track their brainwaves in real time and generate a graph showing a composite index of what they were experiencing.

All of the event staff dressed in lab coats. Here I am with the two-monitor display, wearing the EEG headset we chose: the Muse, which, after a lot of research, beat the pants off competitors Neurosky and Emotiv for its comfort and developer API.

eegdinnerparty-19

The technical setup took a while to figure out, but I finally settled on a system that was very stable. Each of the eight headsets was paired to a cheap Android tablet, and each tablet streamed its EEG data via Open Sound Control (OSC) to one of two separate Processing applications, one per table.

eegdinnerparty-18

The tablet software I wrote was based on some of the Android sample code from Muse and showed useful bits of information like the battery life and the connection status of the four headset sensors. Also check out the “Touching Forehead” value: this simple on/off flag was invaluable, letting us know whether the headset was actually on someone’s head. This way, I could run tables of just one person or all four people at a time.
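The per-headset bookkeeping can be sketched roughly like this. This is an illustrative Python sketch, not the original Android/Processing code; the class, field names and OSC address strings (modeled on the Muse OSC paths) are my assumptions:

```python
# Sketch: track each headset's status from OSC-style messages, and use the
# touching-forehead flag to decide which seats should actually be graphed.

class HeadsetStatus:
    def __init__(self, headset_id):
        self.headset_id = headset_id
        self.battery = None           # percent, from the status message
        self.touching_forehead = False

    def handle_message(self, address, args):
        """Dispatch on an OSC-style address pattern (addresses assumed)."""
        if address == "/muse/batt":
            self.battery = args[0]
        elif address == "/muse/elements/touching_forehead":
            self.touching_forehead = bool(args[0])

def active_seats(headsets):
    """Only graph headsets that are actually on someone's head,
    so a table can run with anywhere from 1 to 4 guests."""
    return [h.headset_id for h in headsets if h.touching_forehead]
```

In practice a real OSC library would deliver the `(address, args)` pairs; the point is that a single on/off value per headset is enough to run partially filled tables.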

eegdinnerparty-13

Each headset was assigned a separate graph color and icon. My software then graphed the real-time composite brainwave index over the course of 5 minutes. The EEG signals are alpha, beta, delta, gamma and theta waves, but showing all of these would be way too much information, so I produced a composite value of all five, weighting certain waves such as beta and theta (stress and meditation) more heavily than others such as alpha (sleep).
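A weighted composite like that can be sketched as follows. This is Python rather than the original Processing code, and the specific weight values are my own assumptions; the post only says that beta and theta were weighted more heavily than alpha:

```python
# Sketch: collapse five relative band powers (each in [0, 1]) into one
# plottable index in [0, 1], using a normalized weighted average.

WEIGHTS = {
    "alpha": 0.5,   # de-emphasized (sleep/relaxation)
    "beta":  1.5,   # emphasized (stress/engagement)
    "delta": 1.0,
    "gamma": 1.0,
    "theta": 1.5,   # emphasized (meditation)
}

def composite_index(bands):
    """`bands` maps band name -> relative power in [0, 1];
    missing bands count as zero."""
    total_weight = sum(WEIGHTS.values())
    return sum(WEIGHTS[name] * bands.get(name, 0.0)
               for name in WEIGHTS) / total_weight
```

Dividing by the total weight keeps the index in [0, 1] no matter how the individual weights are tuned.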

eegdinnerparty-6

We ran the installation for 3 days. We soon had an efficient setup for registration and social media: you would make a reservation ahead of time, and a greeter would fill in the spots at empty tables.

eegdinnerparty-12

The next lab technician would have everyone digitally sign consent forms and ask for personal information such as name and Twitter handle.

eegdinnerparty-10

We soon had a reasonably-sized line.

eegdinnerparty-15

My job was to make sure the technology worked flawlessly. I would clean headsets, check the tablets and do any troubleshooting as necessary. Fortunately, the installation went off very smoothly. We had just one headset stop working on the 2nd day, and on the 3rd day a drunk guest knocked one of the tablets off the table, shattering it. Of course, we had backups.

After fitting all the guests with headsets and making sure the connections worked, I’d pass them off to Sean, who talked about EEG signals and answered questions about what the installation was all about. After about 5 minutes, we had people sitting at tables.

eegdinnerparty-9

Then, they got served. Food, that is.

eegdinnerparty-1

Here is a piece of sausage from the smoker, some coleslaw and a bit of banana cream pudding.
eegdinnerparty-5

As folks ate, they watched their brainwaves graph in real-time.

eegdinnerparty-8

Each headset was marked with the corresponding color on the graph. One dot was for Table A and two for Table B.

eegdinnerparty-3

The guests got a kick out of it, that’s for sure.

eegdinnerparty-7

And while they consumed food, the photographer shot closeups of people eating.

eegdinnerparty-14

If you chose to be at the EEG Dinner Party, you certainly had to have no fear of the media.

eegdinnerparty-16

Then, the social media team would do a hand-tracing of the graph and send out an animated image via Twitter, like these.

Screen Shot 2015-03-18 at 11.07.56 PM
Screen Shot 2015-03-18 at 11.06.58 PM
Screen Shot 2015-03-18 at 11.07.33 PM
Of course, they ended up getting retweeted.

Screen Shot 2015-03-18 at 11.07.11 PM

And we had some celebrities! Here is Questlove.
questlove

My concluding thoughts on artwork-as-IP: it’s solid paid work. My billable rate is at least 3 times higher than for any non-profit work that I do, which translates into a more sustainable art practice. My coding skills got sharper — this was my first Android application. I didn’t feel like I had to dial in fine-grained creativity; I was more of a tech lead on the project. So, overall a success, and I’m hoping I can do future paid gigs with my technology-based artwork.


*As promised, here is the Hot Wheels Double Dare project, produced by New Creatures.