Machine Data Dreams @ Black & White Projects

This week, I opened a solo show called Machine Data Dreams at Black & White Projects. The show is the culmination of several months of work in which I created three new series of works reflecting themes of data-mapping, machines and mortality.

The opening reception is Saturday, November 5th from 7-9pm. Full info on the event is here.

Two of the artworks come out of my artist residency with SETI and the third was supported by a San Francisco Arts Commission grant.

All of the artwork uses custom algorithms to translate datasets into physical form, which is an ongoing exploration that I’ve been focusing on in the last few years.

Each set of artwork deserves more detail but I’ll stick with a short summary of each.

Fresh from the waterjet, Strewn Fields visualizes meteorite impact data at four different locations on Earth.

Strewn Fields: Almahata Sitta

As an artist-in-residence with SETI, I worked with planetary scientist Peter Jenniskens to produce these four sculptural etchings into stone.

When an asteroid enters the Earth's atmosphere, it does so at high velocity — approximately 30,000 km/hour. Before impact, it breaks into thousands of small fragments — meteorites — which spread over areas as large as 30 km. Usually the space debris falls into the ocean or lands in remote locations where scientists can't collect the fragments.

Only recently have scientists been able to use GPS technology to geolocate hundreds of meteorites, which they also weigh as they gather them. The spread patterns of this data are called "strewn fields".

Dr. Jenniskens is not only one of the world's experts on meteorites but also led the famous 2008 TC3 fragment recovery in Sudan of the Almahata Sitta impact.

With four datasets that he both provided and helped me decipher, I used the high-pressure waterjet machine at Autodesk’s Pier 9 Creative Workshops, where I work as an affiliate artist and also on their shop staff, to create four different sculptures.

Strewn Fields: Sutter's Mill

The violence of the waterjet machine gouges the surface of each stone, mirroring the raw kinetic energy of a planetoid colliding with the surface of the Earth. My static etchings capture the act of impact, and survive as an antithetical gesture to the event itself. The actual remnants and debris — the meteorites themselves — have been collected, sold and scattered and what remains is just a dataset, which I have translated into a physical form.

A related work, Machine Data Dreams, is a set of data-etched memorials to the camcorder, a consumer device which birthed video art by making video production accessible to artists.

MACHINE DATA DREAMS: PIXELVISION

This project was supported by a San Francisco Individual Arts Commission grant. I did the data collection itself during an intense week-long residency at Signal Culture, which has many iconic and working camcorders from 1969 to the present.

SONY VIDEORECORDER (1969)
PIXELVISION CAMERA (1987)

During the residency, I built a custom Arduino data-logger which captured the raw electronic video signals, bypassing any computer or digital-signal processing software.

data_logger

With custom software that I wrote, I transformed these into signals that I could then etch onto 2D surfaces.

Screen Shot 2015-08-02 at 10.56.15 PM

I paired each etching with its source video in the show itself.

MACHINE DATA DREAMS: PIXELVISION

Celebrity Asteroid Journeys is the last of the three artworks and is also a project from the SETI Artist in Residency program, though it is decidedly more light-hearted than Strewn Fields.

Celebrity Asteroid Journeys charts imaginary travels from one asteroid to another. There are about 700,000 known asteroids, with charted orbits. A small number of these have been named after celebrities.

Working with asteroid orbital data from JPL and estimated spaceship velocities, I charted 5 journeys between different sets of asteroids.

My software code ran calculations over two centuries (2100 to 2300) to figure out the best path between four celebrities. I then transposed the 3D data into 2D space to make silkscreens with the dates of each stop.
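
As a rough illustration of the kind of calculation involved (this is not the actual project code; the positions and ship velocity below are made-up assumptions), a minimal Python sketch might look like this:

import math
from datetime import date, timedelta

def travel_days(pos_a, pos_b, ship_kmh=50000):
    """Straight-line travel time in days between two positions given in km."""
    return math.dist(pos_a, pos_b) / ship_kmh / 24

# hypothetical heliocentric positions (km) for two named asteroids
asteroid_a = (2.1e8, 1.5e7, 3.0e6)
asteroid_b = (3.4e8, -2.2e7, 1.1e6)

depart = date(2100, 1, 1)
arrive = depart + timedelta(days=travel_days(asteroid_a, asteroid_b))
print("arrival:", arrive)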

CELEBRITY ASTEROID JOURNEY: MAKE BELIEVE LAND MASHUP

This was my first silkscreened artwork, which was a messy antidote to the precise cutting of the machine tools at Autodesk.

All of these artworks depict the ephemeral nature of the physical body in one form or another. Machine Data Dreams is a clear memorial itself, a physical artifact of the cameras that once were cutting-edge technology.

With Celebrity Asteroid Journeys, the timescale is unreachable. None of us will ever visit these asteroids. And the named asteroids are themselves memorials to celebrities (stars) who are now dead or who, relatively soon, will no longer be with us.

Finally, Strewn Fields captures the potential for an apocalyptic event from above. Although these are merely minor impacts, it remains a reality that an extinction-level event could wipe out the human species with a large rock from space. This ominous threat of death reminds us that our own species is just a blip in Earth's history of life.

 

Waterjet Etching Tests

For the last several weeks, I have been conducting experiments with etching on the waterjet — a digital fabrication machine that emits a 55,000 psi stream of water, usually used for precision cutting. The site for this activity is Autodesk Pier 9 Creative Workshops. I continue to have access to their amazing fabrication machines, where I work part-time as one of their Shop Staff.

My recent artwork focuses on writing software code that transforms datasets into sculptures and installations, essentially physical data-visualizations. One of my new projects is called Strewn Fields, which is part of my work as an artist-in-residence with the SETI Institute. I am collaborating with the SETI research scientist, Peter Jenniskens, who is a leading expert on meteor showers and meteorite impacts. My artwork will be a series of data-visualizations of meteorite impacts at four different sites around the globe.

While the waterjet is normally used for cutting stiff materials like thick steel, it can etch using lower water pressure rather than pierce the material. OMAX — the company that makes the waterjet that we use at Pier 9 —  does provide a simple etching software package called Intelli-ETCH. The problem is that it will etch the entire surface of the material. This is appropriate for some artwork, such as my Bad Data series, where I wanted to simulate raster lines.

Meth Labs in Albuquerque (data source: http://www.metromapper.org)

The technique that I apply to my artistic practice is to write custom software that generates specific files for digital fabrication machines: laser-cutters, 3D printers, the waterjet and CNC machines. The look-and-feel is unique, unlike that of the conventional tools that artists often work with.

For meteorite impacts, I first map data like the pattern below (this is from a 2008 asteroid impact). For these impacts, it doesn’t make sense to etch the entire surface of my material, but rather, just pockets, simulating how a meteorite might hit the earth.

strewn_field_15scaled_no_notation

I could go the route of working with a CAM package and generating paths that work with the OMAX Waterjet. Fusion 360 even offers a pathway to this. However, I am dealing with four different datasets, each with 400-600 data points. It just doesn’t make sense to go from a 2D mapping, into a 3D package, generate 3D tool paths and then back to (essentially) a 2D profiling machine.

So, I worked on generating my own tool paths using OpenFrameworks, which outputs simple vector shapes based on the size of each data point. For the tool paths, I settled on spirals rather than left-to-right traverses, which spend too much time on the outside of the material and blow it out. The spirals produce very pleasing results.
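
As a sketch of the idea (in Python rather than the OpenFrameworks C++ I actually used, and with made-up parameters), an outside-in spiral toolpath for a single data point could be generated like this:

import math

def spiral_toolpath(cx, cy, outer_radius, pitch=0.5, points_per_turn=72):
    """Points spiraling from the outer radius inward to the center."""
    path, r, angle = [], outer_radius, 0.0
    step = 2 * math.pi / points_per_turn
    while r > 0:
        path.append((cx + r * math.cos(angle), cy + r * math.sin(angle)))
        angle += step
        r -= pitch / points_per_turn   # shrink the radius a little each step
    path.append((cx, cy))              # finish at the center
    return path

# e.g. a pocket whose 6 mm radius is scaled from a meteorite's mass
points = spiral_toolpath(0, 0, outer_radius=6.0)
print(len(points), "toolpath points, starting at the outside")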

My first tests were on some stainless steel scrap and you can see the results here, with the jagged areas where the water eats away at the material, which is the desired effect. I also found that you have to start the etching from the outside of the spiral and then wind towards the inside. If you start from the inside and go out, you get a nipple, like on the middle right of this test, where the water-jet has to essentially “warm-up”. I’m still getting the center divots, but am working to solve this problem.

This was a promising test, as the non-pocketed surface doesn’t get etched at all and the etching is relatively quick.

IMG_0286

I showed this test to other people and received many raised eyebrows of curiosity. I became more diligent with my test samples and produced this etch sample with 8 spirals, with an interior path ranging from 2mm to 9mm, to test on a variety of materials.

sprial_paths.png

I was excited about this material, an acrylic composite that I had leftover from a landscape project. It is 1/2″ thick with green on one side and a semi-translucent white on the other. However, as you can see, the water-jet is too powerful and ends up shattering the edges, which is less than desirable.

IMG_0303

And then I began to survey various stone samples. I began with scavenging some material from Building Resources, which had an assortment of unnamed, cheap tiles and other samples.

Forgive me…I wish I hadn’t sat in the back row of “Rocks for Jocks” in college. Who knew that a couple decades later, I would actually need some knowledge of geology to make artwork?

I began with some harder stone — standard countertop stuff like marble and granite. I liked seeing how the spiral breaks down along the way. But, there is clearly not enough contrast. It just doesn’t look that good.

IMG_0280

IMG_0294

I’m not sure what stone this is, but like the marble, it’s a harder stone and doesn’t have much of an aesthetic appeal. The honed look makes it still feel like a countertop.

IMG_0295

I quickly learned that thinner tile samples would be hard to dial in. Working with 1/4″ material like this often results in blowing out the center.

IMG_0282

But, I was getting somewhere. These patterns started resembling an impact of sorts and certainly express the immense kinetic energy of the waterjet machine, akin to the kinetic energy of a meteorite impact.

white_tile_detail

This engineered brick was one of my favorite results from this initial test. You can see the detail on the aggregate inside.

IMG_0290brick_all

And I got some weird results. This material, whatever it is, is simply too delicate, kind of like a pumice.

IMG_0289

This is a cement compound of some flavor and for a day, I even thought about pouring my own forms, but that’s too much work, even for me.

 

IMG_0291

I think these two are travertine tile samples and I wish I had more information on them, but alas, that’s what you get when you are looking through the lot. These are in the not-too-hard and not-too-soft zone, just where I want them to be.

 

IMG_0274

IMG_0292

I followed up these tests by hitting up several stoneyards and tiling places along the Peninsula (south of San Francisco). This basalt-like material is one of my favorite results, but is probably too porous for accuracy. Still, the fissures that it opens up in the pockets are amazing. Perhaps if I could tame the waterjet further, this would work.

IMG_0275basalt-detail

basalt-more-detail

This rockface/sandstone didn't fare so well. The various layers shattered, producing unusable results.

IMG_0299discolored_slate

Likewise, this flagstone was a total fail.

IMG_0302flagstone-shatter

The non-honed quartzite gets very close to what I want, starting to look more like a data-etching. I just need to find one that isn’t so thick. This one will be too heavy to work with.

IMG_0284  quartzite_close_IMG_0340

Although this color doesn’t do much for me, I do like the results of this limestone.

IMG_0298

Here is a paver that I got, but I can't remember which kind it is. Better notes next time! Anyhow, it is clearly too weak for the waterjet.

IMG_0297

This is a slate. Nice results!

IMG_0296

And a few more, with mixed results.

IMG_0300 IMG_0301

And if you are a geologist and have some corrections or additions, feel free to contact me.

Art in Space: the First Art Exhibition in Space

Art in Space is the first art exhibition in space, which was created in conjunction with Autodesk’s Pier 9 Creative Workshops and Planet Labs, a company which dispatches many fast-orbiting imaging satellites that document rapid changes on the Earth’s surface.

For this exhibition, they selected several Pier 9 artists to create artworks, which were then etched onto the satellites' panels. Though certainly not the first artwork in space*, this is the first exhibition of art in space. And, if you consider that several satellites are constantly orbiting Earth on opposite sides of the planet, this would be the largest art exhibition ever.

My contribution is an artwork called Hello, World! It is the first algorithmically-generated artwork sent to space and also the first art data visualization in space. The artwork was deployed on August 19th, 2015 on the satellite Dove 0C47. The artwork will circle the Earth for 18 months until the satellite's orbit decays and it burns up in our atmosphere.

 

The left side of the satellite panel depicts the population of each city, represented by squares proportional to the population size. The graphics on the right side represent the carbon footprint of each city, with circles proportional to carbon emissions. By comparing the two, one can make correlations between national policies and effects on the atmosphere. For example, even though Tokyo is the most populated city on Earth, its carbon emissions per capita are very low, making its carbon footprint much smaller than those of Houston, Shanghai or Riyadh, which have disproportionately large footprints.

The etched panel resembles a constellation of interconnected activity and inverts the viewpoint of the sky with that of the earth. It is from this "satellite eye" that we can see ourselves and the effect of humans on the planet. The poetic gesture of the artwork burning up as the satellite re-enters the Earth's atmosphere serves as a reminder of the fragile nature of Earth.

Also consider this: the Art in Space exhibition is something you can neither see nor is it lasting. After only 18 months, the satellite, as well as the artwork, vaporizes. I thought of this as an opportunity to work with ephemerality and sculpture. And this is the first time I have had the chance for a natural destruction of my work. Everything dies and we need to approach life with care.

A few people have asked me where my title came from. Anyone who has written any software code is familiar with the phrase "Hello, World!" This is the first test program that almost any instructional text has you write. It shows the basic syntax for constructing a working program, which is helpful since every programming language has different constructions. By making this test code work, you also verify that your development environment is working properly.

“Hello, World!” C implementation.

/* Hello World program */
#include <stdio.h>

int main(void) {
    printf("Hello, World!\n");
    return 0;
}

And here is a video that explains more about the Art in Space exhibition.

 

* There has been plenty of other art in space, including more recent projects such as my collaboration with Nathaniel Stern, Tweets in Space (2012), and Trevor Paglen's The Last Pictures.

EquityBot World Tour

Art projects are like birthing little kids. You have grand aspirations but never know how they're going to turn out. And no matter what, you love them.

20151125 125225

It’s been a busy year for EquityBot. I didn’t expect at all last year that my stock-trading algorithm Twitterbot would resonate with curators, thinkers and  general audience so well. I’ve been very pleased with how well this “child” of mine has been doing.

This year, from August to December, it has been exhibited in 5 different venues in 4 countries. They include MemFest 2015 (Bilbao), ISEA 2015 (Vancouver), MoneyLab 2: Economies of Dissent (Amsterdam) and Bay Area Digitalists (San Francisco).

Of course, it helps the narrative that EquityBot is doing incredibly well, with a return rate (as of December 4th) of 19.5%. I don't have the exact figures, but the S&P for this time period, according to my calculations, is in the neighborhood of -1.3%.

Screen Shot 2015-12-05 at 9.13.20 AM

 

The challenge with this networked art piece is how to display it. I settled on making a short video, with the assistance of a close friend, Mark Woloschuk. This does a great job of explaining how the project works.

And, accompanying it is a visual display of vinyl stickers, printed on the vinyl sticker machine at the Creative Workshops at Autodesk Pier 9, where I once had a residency and now work (part-time).

EquityBot_installation_screen_c

 

from-columbus-show

EquityBot Goes to ISEA

EquityBot will be presented at this year’s International Symposium on Electronic Art at Vancouver. The theme is Disruption. You can always follow EquityBot here: @equitybot.

EquityBot is an automated stock-trading algorithm that uses emotions on Twitter as the basis for investments in a simulated bank account.

This art project poses the question: can an artist create a stock-trading algorithm that will outperform professional managed accounts?

The original EquityBot, which I will call version 1, launched on October 28th via the Impakt organization, which supported the project last fall during an artist residency.

I intended for it to run for 6 months and then to assess its performance results. I ended up letting it run a little bit longer (more on this later).

Since then, I’ve revamped EquityBot about 1 month ago. The new version is doing *great* with an annual rate of return of 10.86%. Most of this is due to some early investments in Google, whose stock prices have been doing fantastic.

equitybot-isea-8emotions-1086percent

How does EquityBot work? During stock market hours, EquityBot scrapes Twitter to determine the frequency of eight basic human emotions: anger, fear, joy, disgust, anticipation, trust, surprise and sadness.

equitybot-8emotions

The software code captures fluctuations in the number of tweets containing these emotions. It then correlates them to changes in stock prices. When an emotion is trending upwards, EquityBot will select a stock that follows a similar trajectory. It deems this to be a "correlated investment" and will buy this stock.
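
A minimal sketch of this kind of correlation test (not EquityBot's production code; the hourly counts and the threshold below are purely illustrative) could look like this in Python:

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

anger_tweets = [120, 135, 160, 150, 180, 210]   # tweets per hour (made up)
stock_deltas = [0.1, 0.3, 0.6, 0.4, 0.9, 1.2]   # hourly price changes (made up)

if pearson(anger_tweets, stock_deltas) > 0.8:   # arbitrary threshold
    print("correlated investment: candidate BUY while the emotion trends up")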

equitybot_correlation_graph

The ISEA version of EquityBot will run for another 6 months or so. In version 1, I tracked 24 different emotions, all based on the Plutchik wheel.

Plutchik-wheel.svg_1

 

The problem I found was that this was too many emotions to track. Statistically speaking, there were too few tweets for many of the emotions for the correlation code to function properly.

The only change with the ISEA version (what I will call v1.1) is that it now tracks eight emotions instead of 24.

popular-emotions

How did v1 of EquityBot perform? It came out of the gates super-strong, hitting a high point of 20.21%. Wowza. The data-visualizations shown here are also earlier ones, which have since improved slightly.
equitybot-nov26-2021percent

But a month later, by December 15th, EquityBot had dipped down to -4.58%. Yikes. These are the vicissitudes of the market over a short time-span.

equitybot-dec15-minus-458percent

 

By January 21st 2015, EquityBot was almost back to even at -0.96%.

 

equitybot-jan21-minus096percent

Then by February 4th, 2015, EquityBot was back at a respectable 5.85%.

equitybot-feb4-585percent

And on March 1st, it was doing quite well at 7.36%.

equitybot-march1-736percent

I let the experiment run until June 11th. The date was arbitrary, but -9.15% was the end result. This was pretty terrible.

equitybot-jun11-minus915percent

And which emotions performed the "best"? The labels aren't on this graph, but the ones that were doing well were Trust and Terror. And the worst…was Rage (extreme Anger).

equitybot-investing-results-jun11

 

How do other managed accounts perform? According to the various websites, these are the numbers I’ve found.

Janus (Growth & Income): 7.35%
Fidelity (VIP Growth & Income): 4.70%
Franklin (Large Cap Equity): 0.46%
American Funds (The Income Fund of America): -1.23%
Vanguard (Growth and Income): 4.03%

This would put EquityBot v1.0 as dead last. Good thing this was a simulated bank account.

I’m hoping that v1.1 will do better. Eight emotions. Let’s see how it goes.

 

Machine Data Dreams: Barbie Video Girl Cam

One of the cameras they have here at the Signal Culture Residency is the Barbie Video Girl cam. This was a camera embedded inside a Barbie doll, produced in 2010.

The device was discontinued, most notably after the FBI accidentally leaked a warning about possible predatory misuses of the camera, which is patently ridiculous.

The interface is awkward. The camera can’t be remotely activated. It’s troublesome to get the files off the device. The resolution is poor, but the quality is mesmerizing.

 

barbie_disassembly_1

The real perversion is the way you have to change the batteries for the camera, by pulling down Barbie’s pants and then opening up her leg with a screwdriver.

b-diss

I can only imagine kids wondering if the idealized female form is some sort of robot.

barbie_disassembly_3

The footage it takes is great. I brought it first to the local antique store, where I shot some of the many dolls for sale.

 

 

And, of course, I had to hit up the machines at Signal Culture to do a live analog remix using the Wobbulator and Jones Colorizer.

In the evening, as dusk approached, I took Barbie to the Evergreen Cemetery in Owego, which has gravestones dating from the 1850s and is still an active burial ground.

Here, Barbie contemplated her own mortality.

barbie_cemetery barbie_close_cross barbie_good barbie_gravestone_1 barbie_headstore barbie_warren

barbie_cemetery_mother

It was disconcerting for a grown man to be holding a Barbie doll with an outstretched arm to capture this footage, but I was pretty happy with the results.

I made this short edit.

And remixed with the Wobbulator. I decided to make a melodic harmony (life), with digital noise (death) in a move to mirror the cemetery — a site of transition between the living and the dead.

How does this look in my Machine Data Dreams software?

You can see the waveform here — the 2nd channel is run through the Critter & Guitari Video Scope.

Screen Shot 2015-08-04 at 10.43.09 AM

And the 3D model looks promising, though once again, I will work on these post-residency.

Screen Shot 2015-08-04 at 10.45.04 AM

Machine Data Dreams: Critter & Guitari Video Scope

Not to be confused with Deleuze and Guattari, this is a company that makes various hardware music synths.

For my new project, Machine Data Dreams, I’m looking at how machines might “think”, starting with the amazing analog video machines at Signal Culture.

signal_culture-fullsetup

This morning, I successfully stabilized my Arduino data logger. This captures the raw video signal from any device with RCA outputs and stores values at a sampling rate of ~3600 Hz.

It obviously misses a lot of the samples, but that's the point: a machine-to-machine listener, bypassing any sort of standard digitizing software.
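
The logger itself is an Arduino sketch, but the host-side capture can be summarized in a few lines of Python. This is only a sketch, assuming the pyserial package and that the Arduino streams one raw sample value per line; the port name and baud rate are placeholders.

import serial  # pyserial

PORT, BAUD = "/dev/ttyUSB0", 115200   # placeholders; depends on the machine

with serial.Serial(PORT, BAUD, timeout=1) as ser, open("video_samples.csv", "a") as log:
    while True:
        line = ser.readline().decode("ascii", errors="ignore").strip()
        if line.isdigit():            # expect one raw ADC value per line
            log.write(line + "\n")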

data_logger

For my first data-logging experiment, I decided to focus on this device, the Critter & Guitari Video Scope, which takes audio and converts it to a video waveform.

critterguitari_3 critterguitari_2 Crittcritterguitari_1

Using the synths, I patched and modulated various waveforms. I had never worked with this kind of system until a few days ago, so I'm new to the concept of control voltages.

audio_sythn

This is the 15-minute composition that I made for the data-logger.

Critter & Guitari Videoscope Composition (below)

And the captured output, in my custom OpenFrameworks software.

 

Screen Shot 2015-08-02 at 10.56.15 PM

The 3D model is very preliminary at this point, but I am getting some solid waveform output into a 3D shape. I'll be developing this in the next few months. But since I only have a week at Signal Culture, I'll tackle the 3D-shape generation later.

Screen Shot 2015-08-02 at 11.02.25 PM

My data logger can handle 2 channels of video, so I’m experimenting with outputting the video signal as sound and then running it back through the C&G Videoscope.

This is the Amiga Harmonizer output, which looks great by itself. The audio, as a video signal, comes out sounding like noise, as expected.

But the waveforms are compelling. There is a solid band at the bottom, which is the horizontal sync pulse. This is the signature of any composite (NTSC) device.

2000px-Composite_Video.svg

 

So, every device I log should have this signal at the bottom, which you can see below.

Screen Shot 2015-08-02 at 10.58.12 PM

Once again, the 3D forms I’ve generated in OpenFrameworks and then opened up in Meshlab are just to show that I’m capturing some sort of raw waveform data.

Screen Shot 2015-08-02 at 11.00.14 PM

Atari Adventure Synth

Hands down, my favorite Atari game when I was a kid was Adventure (2). The dragons looked like giant ducks. Your avatar was just a square and a bat wreaked chaos by stealing your objects.

In the ongoing research for my new Machine Data Dreams project, beginning here at Signal Culture, I’ve been playing with the analog video and audio synths.

Yesterday afternoon, I explored the town of Owego. I ran across a used DVD, CD & electronics store and bought an Atari Flashback Console for $25. I didn’t even know these existed.

blog_pic

I can plug it directly into their video synth system. After futzing around with the various patch cables, I came up with this 5-minute composition, which shows me playing the game. The audio sounds like marching with dirty noise levels.

Also, here is the latest 3D model from my code, which now has a true 3D axis for data-plotting.

Time is one axis, video signal is another, audio signal is the third.
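
As a simplified sketch of that mapping (placeholder sample values, and Python rather than my OpenFrameworks code), the two channels can be written out as a point cloud with time on one axis:

def write_obj_points(video, audio, path="signal_cloud.obj", time_scale=0.01):
    """Write (time, video, audio) triplets as OBJ vertices."""
    with open(path, "w") as f:
        for i, (v, a) in enumerate(zip(video, audio)):
            f.write(f"v {i * time_scale} {v / 255.0} {a / 255.0}\n")

video_channel = [10, 200, 180, 40, 220, 15]    # made-up 8-bit samples
audio_channel = [128, 140, 90, 130, 200, 60]
write_obj_points(video_channel, audio_channel)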

Screen Shot 2015-07-31 at 9.26.05 PM

And a crude frequency plot.

Screen Shot 2015-08-01 at 3.03.24 PM

 

Bad Data: SF Evictions and Airbnb

The inevitable conversation about evictions comes up at every San Francisco party…art organizations closing, friends getting evicted…the city is changing. It has become a boring topic, yet it is absolutely, completely 100% real.

For the Bad Data series — 12 data-visualizations depicting socially-polarized, scientifically dubious and morally ambiguous datasets, each etched onto an aluminum honeycomb panel — I am featuring two works, 18 Years of Evictions in San Francisco and 2015 AirBnb Listings, for exactly this reason. These two etchings are the centerpieces of the show.

evictions_airbnb

This is the reality of San Francisco: it is changing, and the data is 'bad' — not in the sense of inaccurate, but rather in the deeper sense of cultural malaise.

By the way, the reception for the “Bad Data” show is this Friday (July 24, 2015) at A Simple Collective, and the show runs through August 1st.

The Anti-Eviction Mapping Project has done a great job of aggregating data on this discouraging topic, hand-cleaning it and producing interactive maps that animate over time. They’re even using the Stamen map tiles, which are the same ones that I used for my Water Works project.

Screen Shot 2015-07-23 at 4.52.36 PM

When I embarked on the Bad Data series, I reached out to the organization and they assisted me with their datasets. My art colleagues may not know this, but I'm an old-time activist in San Francisco. That background helped me in getting the datasets, and I know that the story of evictions is not new, though it has certainly never been on this scale.

In 2001, I worked in a now-defunct video activist group called Sleeping Giant, which worked on short videos in the era when Final Cut Pro made video-editing affordable and when anyone with a DV camera could make their own videos. We edited our work, sold DVDs and had local screenings, stirring up the activist community and telling stories from the point-of-view of people on the ground. Sure, now we have Twitter and social media, but at the time, this was a huge deal in breaking apart the top-down structures of media dissemination.

Here is No Nos Vamos, a hastily-edited video about evictions in San Francisco. Yes, this was 14 years ago.

I’ve since moved away from video documentary work and towards making artwork: sculpture, performance, video and more. The video-activist work and documentary video in general felt overly confining as a creative tool.

My current artistic focus is to transform datasets using custom software code into physical objects. I’ve been working with the amazing fabrication machines at Autodesk’s Pier 9 facility to make work that was not previously possible.

This dataset (also provided through the SF Rent Board) includes all the no-fault evictions in San Francisco. I got my computer geek on…well, I do try to use my programming powers for non-profit work and artwork.

I mapped the data into vector shapes using the open-source C++ toolkit OpenFrameworks, writing code which transformed the ~9300 data points into plottable shapes that I could open in Illustrator. I did some work tweaking the strokes and styles.
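
A compact sketch of that step (in Python rather than the OpenFrameworks C++ used for the piece; the coordinates are assumed to be already projected into page space in millimeters) might write each eviction record as a small circle in an SVG file that Illustrator can open:

def write_svg(points, path="evictions.svg", size=508, r=0.6):
    """Write one small circle per data point into a size x size mm SVG."""
    with open(path, "w") as f:
        f.write(f'<svg xmlns="http://www.w3.org/2000/svg" width="{size}mm" '
                f'height="{size}mm" viewBox="0 0 {size} {size}">\n')
        for x, y in points:
            f.write(f'  <circle cx="{x}" cy="{y}" r="{r}" fill="none" '
                    f'stroke="black" stroke-width="0.2"/>\n')
        f.write("</svg>\n")

write_svg([(100.0, 120.5), (301.2, 88.7)])   # two placeholder records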

sf_evictions_20x20

This is what the etching looks like from above, once I ran it through the waterjet. There were a lot of settings and tests to get to this point, but the final results were beautiful.

waterjet-overhead

The material is a 3/4″ honeycomb aluminum. I tuned the pressure of the waterjet to pierce through the top layer, but not the bottom layer. However, the water has to go somewhere. The collisions against the honeycomb produce unpredictable results.

…just like the evictions themselves. We don’t know the full effect of displacement, but can only guess as the city is rapidly becoming less diverse. The result is below, a 20″ x 20″ etching.

Bad Data: 18 Years of San Francisco Evictions

baddata_sfevictions

The Airbnb debate is a little less clear-cut. Yes, I do use Airbnb. It is incredibly convenient. I save money while traveling and also see neighborhoods I’d otherwise miss. However, the organization and its effect on city economies is a contentious one.

For example, there is the hotel tax in San Francisco, which, after 3 years, Airbnb finally consented to paying — 14% to the city of San Francisco. Note: this was after they already had a successful business.

There also seems to be a long-term effect on rent. Folks, and I've met several who do this, are renting out places as tenants on Airbnb. Some don't actually live in their apartments any longer. The effect is to take a unit off the rental market and turn it into a vacation rental. Some argue that this also skirts rent-control law in the first place, which was designed as a compromise solution between landlords and tenants.

There are potential zoning issues, as well…a myriad of issues around Airbnb.

BAD DATA: 2015 AIRBNB LISTINGS, etching file

airbnb_sf

In any case, the location of the Airbnb rentals (self-reported, not a complete list) certainly fit the premise of the Bad Data series. It’s an amazing dataset. Thanks to darkanddifficult.com for this data source.

BAD DATA: 2015 Airbnb Listings

baddata_airbnb

EquityBot goes live!

During my time at Impakt as an artist-in-residence, I have been working on a new project called EquityBot, which is an online commission from Impakt. It fits well into the Soft Machines theme of the festival: where machines integrate with the soft, emotional world.

EquityBot exists entirely as a networked art or “net art” project, meaning that it lives in the “cloud” and has no physical form. For those of you who are Twitter users, you can follow on Twitter: @equitybot

01_large

What is EquityBot? Many people have asked me that question.

EquityBot is a stock-trading algorithm that “invests” in emotions such as anger, joy, disgust and amazement. It relies on a classification system of twenty-four emotions, developed by psychologist and scholar, Robert Plutchik.

Plutchik-wheel.svg

How it works
During stock market hours, EquityBot continually tracks worldwide emotions on Twitter to gauge how people are feeling. In the simple data-visualization below, which is generated automatically by EquityBot, the larger circles indicate the more prominent emotions that people are Tweeting about.

At this point in time, just 1 hour after the stock market opened on October 28th, people were expressing emotions of disgust, interest and fear more prominently than others. During the course of the day, the emotions contained in Tweets continually shift in response to world events and many other unknown factors.

twitter_emotions

EquityBot then uses various statistical correlation equations to find pattern matches between the changes in emotions on Twitter and fluctuations in stock prices. The details are thorny; I'll skip the boring stuff. My time did involve a lot of work with scatterplots, which looked something like this.

correlation

Once EquityBot sees a viable pattern, for example that "Google" is consistently correlated to "anger" and that anger is a trending emotion on Twitter, EquityBot will issue a BUY order on the stock.

Conversely, if Google is correlated to anger, and the Tweets about anger are rapidly going down, EquityBot will issue a SELL order on the stock.
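
A toy version of that decision rule (not the production code; the stock, emotion and counts are placeholders) looks roughly like this:

def trend(counts, window=3):
    """Positive if the recent average exceeds the earlier average."""
    return sum(counts[-window:]) / window - sum(counts[:window]) / window

correlations = {"GOOG": "anger"}                           # stock -> correlated emotion
emotion_counts = {"anger": [90, 100, 110, 150, 170, 190]}  # hourly tweet counts

for stock, emotion in correlations.items():
    t = trend(emotion_counts[emotion])
    if t > 0:
        print(f"BUY {stock} (rising {emotion})")
    elif t < 0:
        print(f"SELL {stock} (falling {emotion})")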

EquityBot runs a simulated investment account, seeded with $100,000 of imaginary money.

In my first few days of testing, EquityBot “lost” nearly $2000. This is why I’m not using real money!

Disclaimer: EquityBot is not a licensed financial advisor, so please don't follow its stock investment patterns.

account

The project treats human feelings as tradable commodities. It will track how "profitable" different emotions are over the course of months. As a social commentary, I propose a future scenario in which just about anything can be traded, including that which is ultimately human: the very emotions that separate us from a machine.

If a computer cannot be emotional, at the very least it can broker trades of emotions on a stock exchange.

affect_performance

As a networked artwork, EquityBot generates these simple data visualizations autonomously (they will get better, I promise).

It’s Twitter account (@equitybot) serves as a performance vehicle, where the artwork “lives”. Also, all of these visualizations are interactive and on the EquityBot website: equitybot.org.

I don’t know if there is a correlation between emotions in Tweets and stock prices. No one does. I am working with the hypothesis that there is some sort of pattern involved. We will see over time. The project goes “live” on October 29th, 2014, which is the day of the opening of the Impakt Festival and I will let the first experiment run for 3 months to see what happens.

Feedback is always appreciated, you can find me, Scott Kildall, here at: @kildall.

 

Soft Machines and Deception

The Impakt Festival officially begins next Wednesday, but in the weeks prior to the event, Impakt has been hosting numerous talks, dinners and also a weekly “Movie Club,” which has been a social anchor for my time in Utrecht.

10437517_643169085789022_7756476391981345316_n

Every Tuesday, after a pizza dinner and drinks, an expert in the field of new media introduces a relatively recent film about machine intelligence, prompting questions that frame larger issues of human-machine relations in the films. An American audience might be impatient with a 20-minute talk before a movie, but in the Netherlands, the audience has been engaged. Afterwards, many linger in conversations about the very theme of the festival: Soft Machines.

1625471_643169265789004_3958937439824009299_n

The films have included I, Robot, Transcendence, Her and the documentary Game Over: Kasparov and the Machine. They vary in quality, but with the introduction of the concepts ahead of time, even Transcendence, a thoroughly lackluster film, engrossed me.

The underlying question that we end up debating is: can machines be intelligent? This seems to be a simple yes or no question, which cleaves any group into either a technophilic pro-Singularity or curmudgeonly Luddite camp. It’s a binary trap, like the Star Trek debates between Spock and Bones. The question is far more compelling and complex.

The Turing test is often cited as the starting point for this question. For those of you who are unfamiliar with this thought experiment, it was developed by the British mathematician and computer scientist Alan Turing in a 1950 paper that asked the simple question: "Can machines think?"

The test goes like this: suppose you have someone at a computer terminal who is conversing with an entity by typing text conversations back and forth, what we now regularly do with instant messaging. The entity on the other terminal is either a computer or a human, the identity of which is unknown to the computer user. The user can have a conversation and ask questions. If he or she cannot ascertain “human or machine” after about 5 minutes, then the machine passes the Turing test. It responds as if a human would and can effectively “think”.

turing_model

In 1990, the thought experiment became a reality with the Loebner Prize. Every year, various chatbots — algorithms which converse via text with a computer user — compete to try to fool humans in a setup that replicates this exact test. Some algorithms have come close, but to date, no computer has ever successfully won the prize.

eliza2

The story goes that Alan Turing was inspired by a popular party game of the era called the "Imitation Game", where a questioner would ask an interlocutor various questions. This intermediary would then relay the questions to a hidden person, who would answer via handwritten notes. The job of the questioner was to try to determine the gender of the unknown person. The hidden person would provide ambiguous answers. A question of "what is your favorite shade of lipstick" could be answered by "It depends on how I feel". The answer in this case is a dodge, as a 1950s man certainly wouldn't know the names of lipstick shades.

Both the Turing test and the Imitation Game hover around the act of deception. This technique, widely deployed in predator-prey relationships in nature, is ingrained in our biological systems. In the Loebner Prize competitions, there have even been instances where the human and the computer will try to play with the judges, making statements like: "Sorry I am answering this slowly, I am running low on RAM".

It may sound odd, but the computer doesn't really know deception. Humans do. Every day we work with subtle cues of movement around social circles, flirtation with one another, exclusion from and inclusion into a group, and so on. These often rely on shades of deception: we say what we don't really mean and have other agendas than our stated goals. Politicians, business executives and others who occupy high rungs of social power know these techniques well. However, we all use them.

The artificial intelligence software that powers chatbots has evolved rapidly over the years. Natural language processing (NLP) is widely used in various software industries. I had an informative lunch the other day in Amsterdam with a colleague of mine, Bruno Jakic at AI Applied, who I met through the Affect Lab. Among other things, he is in the business of sentiment analysis, which helps, for example, determine if a large mass of tweets indicates a positive or negative emotion. Bruno shared his methodology and working systems with me.

State-of-the-art sentiment analysis algorithms are generally effective, operating in the 75-85% range for identification of a "good" or "bad" feeling in a chunk of text such as a Tweet. Human consensus is in a similar range. Apparently, a group of people cannot fully agree on how "good" or "bad" various Twitter messages are, so machines are coming close to being as effective as humans on a general scale.

The NLP algorithms deploy brute-force methods, crunching through millions of sentences using human-designed "classifiers" — rules that help determine how a sentence reads. For example, an emoticon like a frown-face almost always indicates a bad feeling.
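
A toy classifier in the spirit of that example (real pipelines use thousands of weighted features; these few rules are purely illustrative) can be written in a handful of lines:

NEGATIVE = {":(", ":-(", ";("}
POSITIVE = {":)", ":-)", ":D"}

def naive_sentiment(text):
    """Label a text 'good' or 'bad' based only on emoticon rules."""
    tokens = text.split()
    if any(t in NEGATIVE for t in tokens):
        return "bad"
    if any(t in POSITIVE for t in tokens):
        return "good"
    return "unknown"

print(naive_sentiment("locked out of my own house :("))   # -> bad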

frown

Computers can figure this out because machine perception is millions of times faster than human perception. A machine can run through examples, rules and more, but it acts on logic alone. If NLP software code generally works, where specifically does it fail?

Bruno pointed out that machines are generally incapable of figuring out if someone is being sarcastic. Humans immediately sense this by intuitive reasoning. We know, for example, that getting locked out of your own house is bad. So if you write about it as a good thing, the contradiction is obviously sarcastic. The context is what our "intuition", or emotional brain, understands. It builds upon shared knowledge that we gather over many years.

sarcasm

The Movie Club films also tackle this issue of machine deception. At a critical moment in the film, Sonny, the main robot character in I, Robot, deceives the “bad” AI software that is attacking the humans by pretending to hold a gun to one of the main “good” characters. It  then winks to Will Smith (the protagonist) to let him know that he is tricking the evil AI machine. Sonny and Will Smith then cooperate, Hollywood style with guns blazing. Of course, they prevail in the end.

sony-wink

Sonny possesses a sophisticated Theory of Mind: an understanding of its own mental state as well as that of the other robots and Will Smith. It takes initiative and pretends to be on the side of the evil AI computer by taking an aggressive action. Earlier in the film, Sonny learned what winking signifies. It knows that the AI doesn't understand this, so the wink will be understood by Will Smith and not by the evil AI.

In Game Over: Kasparov and the Machine, which recasts the narrative of the Deep Blue vs. Kasparov chess matches, the Theory of Mind of the computer resurfaces. We know that Deep Blue won the match, a series of six games in 1997. It is the infamous Game 2 which obsessed Kasparov. The computer played aggressively and more like a human than Kasparov had expected.

At move 45, Kasparov resigned, convinced that Deep Blue had outfoxed him that day. Deep Blue had responded in the best possible way to Kasparov's feints earlier in the game. Chess experts later discovered that Kasparov could have easily forced an honorable draw instead of resigning the game.

The computer appeared to have made a simple error. Kasparov was baffled and obsessed. How could the algorithm have failed on a simple move when it was so thoroughly strategic earlier in the game? It didn't make sense.

Kasparov felt like he was tricked into resigning. What he didn't consider was that when the algorithm didn't have enough time — since tournament chess games are run against a clock — to find the best-ranked move, it would choose randomly from a set of moves…much like a human would do in similar circumstances. The decision we humans make at this point is emotional. Inadvertently, the machine deceived Kasparov.

KASPAROV

I'm convinced that the ability to act deceptively is one necessary factor for machines to be "intelligent". Otherwise, they are simply code-crunchers. But there are other aspects as well, which I'm discovering and exploring during the Impakt Festival.

I will continue this line of thought on machine intelligence in future blog posts. I welcome any thoughts and comments on machine intelligence and deception. You can find me on Twitter: @kildall.


Data-Visualizing + Tweeting Sentiments

It's been a busy couple of weeks working on the EquityBot project, which will be ready for the upcoming Impakt Festival. Well, at least a functional prototype of my ongoing research project will be online for public consumption.

The good news is that the Twitter stream is now live. You can follow EquityBot here.

EquityBot now tweets images of data-visualizations on its own and is fully autonomous. I'm constantly surprised by, and a bit nervous about, its Tweets.

exstasy_sentiment

At the end of last week, I put together a basic data visualization using D3, which is a powerful Javascript data-visualization tool.

Using code from Jim Vallandingham, in just one evening I created dynamically-generated bubble maps of Twitter sentiments as they arrive from EquityBot's own sentiment analysis engine.

I mapped the colors directly from the Plutchik wheel of emotions, which is why they are still a little wonky: for example, the emotion of Grief is unreadable. This will be fixed.

I did some screen captures and put them on my Facebook and Twitter feeds. I soon discovered that people were far more interested in images of the data visualizations than in just text describing the emotions.

I was faced with a geeky problem: how to get my Twitterbot to generate images of the data visualizations using D3, a front-end Javascript client? I figured it out eventually, after stepping into a few rabbit holes.

Screen Shot 2014-10-21 at 11.31.09 AM

I ended up using PhantomJS, the Selenium web driver and my own Python management code to solve the problem. The biggest hurdle was getting Google webfonts to render properly. Trust me, you don't want to know the details.
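
Stripped of the management code, the approach boils down to something like the sketch below. The URL and output path are placeholders, and note that the PhantomJS driver has since been deprecated in newer Selenium releases.

import time
from selenium import webdriver

driver = webdriver.PhantomJS()
driver.set_window_size(1024, 768)
driver.get("http://localhost:8000/sentiment_bubbles.html")  # placeholder page
time.sleep(2)                     # give D3 and the webfonts time to render
driver.save_screenshot("sentiment_bubbles.png")
driver.quit()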

Screen Shot 2014-10-21 at 11.31.29 AM

 

But I’m happy with the results. EquityBot will now move to other Tweetable data-visualizations such as its own simulated bank account, stock-correlations and sentiments-stock pairings.

Blueprint for EquityBot

For my latest project, EquityBot, I’ve been researching, building and writing code during my 2 month residency at Impakt Works in Utrecht (Netherlands).

EquityBot is going through its final testing cycles before a public announcement on Twitter. For those of you who are Bot fans, I'll go ahead and slip you EquityBot's Twitter feed: https://twitter.com/equitybot

The initial code-work has involved configuration of a back-end server that does many things, including “capturing” Twitter sentiments, tracking fluctuations in the stock market and running correlation algorithms.

I know, I know, it sounds boring. Often it is. After all, the result of many hours of work: a series of well-formatted JSON files. Blah.

But it’s like building city infrastructure: now that I have the EquityBot Server more or less working, it’s been incredibly reliable, cheap and customizable. It can act as a Twitterbot, a data server and a data visualization engine using D3.

This type of programming is yet another skill in my Creative Coding arsenal. It consists mostly of Python code that lives on a Linode server, a low-cost alternative to options like HostGator or GoDaddy, which incur high monthly costs. And there's a geeky sense of satisfaction in creating a well-oiled software engine.

The EquityBot Server looks like a jumble of Python and PHP scripts. I cannot possibly explain it in excruciating detail, nor would anyone in their right mind want to wade through the technical details.

Instead, I wrote up a blueprint for this project.

ebot_server_diagram_v1

For those of you who are familiar with my art projects, this style of blueprint may look familiar. I adapted this design from my 2049 Series, which are laser-etched and painted blueprints of imaginary devices. I made these while an artist-in-residence at Recology San Francisco in 2011.

sniffer-blue

Water Works Final Report

Overview
Water Works is a project that I created for the Creative Code Fellowship in the Summer of 2014 with the combined support of Stamen Design, Autodesk and Gray Area.

Water Works is a 3D data visualization and mapping of the water infrastructure of San Francisco. The project is a relational investigation: I have been playing the role of a "Water Detective, Data Miner" and sifting through the web for water data. The results of this 3-month investigation are three large-scale 3D-printed sculptures, each paired with an interactive web map.

The final website lives here: http://www.waterworks.io/

sewer

Stamen Design is a small design studio that creates sophisticated mapping and data-visualization projects for the web. Combined with the amazing physical fabrication space at Pier 9 at Autodesk, this was a perfect combination of collaborative players for my own focus: writing algorithms that transform datasets into 3D sculptures and installations. I split my time between the two organizations and both were amazing, creative environments.

Gray Area provided the project guidance and coursework: 12 hours a week of Creative Code Immersive classes in topics ranging from Arduino to Node.js. About half of the classes were review for me, e.g. OpenFrameworks, Processing, Arduino, but Javascript, Node and more were completely new.

This report is heavy on images, partially because I want to document the entire process of how I created these 3D mapping-visualizations. As far as I know, I’m the first person who has undertaken this creative process: from mining city data to 3D-printing the infrastructure, which is geo-located on a physical map.

My directive from the start of the Water Works project was to somehow make visible what is invisible. This simple message is one that I learned while I was working as a New Media Exhibit Developer at the Exploratorium (2012-2013). It also aligns with the work that Stamen Design creates and so I was pleased to be working with this organization.

Starting Point
Underneath our feet is an urban circulatory system that delivers water to our households, removes it from our toilets, provides a reliable supply for firefighting, and ultimately purifies it and directs it into the bay and ocean. Most of us don't think about this amazing system because we don't have to — it simply works.

Like many others, I’m concerned about the California drought, which many climatologists think will persist for the next decade. I am also a committed urban-dweller and want to see the city I live in improve its infrastructure as it serves an expanding population. Finally, I undertook this project in order to celebrate infrastructure and to help make others aware of the benefits of city government.

drought

On a more personal note, I am fascinated by urban architecture. As I walk through the city, I constantly notice the markings on manholes, the various sign posts and the different types of fire hydrants.

cistern_manhole

About a year ago, when I was working at the Exploratorium, I had several in-depth conversations with employees at the Department of Public Works about the possibility of mapping the sewer system. We discussed possibilities of producing a sewer map for the museum. For various reasons, the maps never came to fruition, but the data still rattled around my brain. All of the pipe and manhole data still existed. It was waiting to be mapped.

Three Water Systems of San Francisco
When I was awarded this Creative Code Fellowship in June of this year, I knew very little about the San Francisco water system. I soon learned that the city has three separate sets of pipes that comprise the water infrastructure of San Francisco.

(1) Potable Water System — this is our drinking water, which comes from Hetch Hetchy. Some fire hydrants use this system.

(2) Sewer System — San Francisco has a combined stormwater and wastewater system, which is nearly entirely gravity-fed. The water gets treated at one of the wastewater treatment plants. San Francisco is the only coastal California city with a combined system.

(3) Auxiliary Water Supply System (AWSS) — this is a separate system just for emergency firefighting. It was built in the years immediately following the 1906 earthquake, when many of the water mains collapsed and most of the city proper was destroyed by fires. It is fed from the Twin Peaks Reservoir. San Francisco is the only city in the US that has such a system.

water_treatment

Follow the Data, Find the Story
From my previous work on Data Crystals, I learned that you have to work with the data you can actually get, not the data you want. In the first month of the Water Works project, this involved constant research and culling.

I worked with various tables of sewer data that the DPW provided to me. I discovered that the city had about 30,000 nodes (underground chambers with manholes) with 30,000 connections (pipes). This was an incredible dataset and it needed a lot of pruning, cleaning and other work, which I soon discovered was a daunting task.

Lesson #1: Contrary to popular belief, data is never clean.

What else was available? It was hard to say at first. I sent emails to the SFPUC asking for the locations of the drinking water pipes — data just like what I had for the sewer system. I thought this would be incredible to represent. I approached the project with a certain naivety.

Of course, I shouldn't have been surprised that this would be a security concern, but in no uncertain terms I received a resounding no from the SFPUC. This made sense, but it left me with only one dataset.

Given that there were three water systems, it made sense to create three 3D-printed visualizations, one from each system. If not the pipes, what would I use?

During one of my late-night research evenings, I found a good story: the San Francisco underground cisterns. According to various blogs, there are about 170 of these, and they are usually marked by a brick circle. What is underneath?

cistern_circle

In the 1850s, after a series of Great Fires in San Francisco tore through the city, 23 cisterns* were built. These smaller cisterns were all in the city proper, at that time between Telegraph Hill and Rincon Hill. They weren’t connected to any other pipes and the fire department intended to use them in case the water mains were broken, as a backup water supply.

They languished for decades. Many people thought they should be removed, especially after incidents like the 1868 Cistern Gas Explosion.

However, after the 1906 Earthquake, fires once again decimated the city. Many water mains broke and the neglected cisterns helped save portions of the city.

Afterward, the city passed a $5,200,000 bond and began building the AWSS in 1908. This included the construction of many new cisterns and the rehabilitation of other, neglected ones. Most of the new cisterns could hold 75,000 gallons of water. The largest one is underneath the Civic Center and has a capacity of 243,000 gallons.

The original ones, presumably rebuilt, hold much less, anywhere from 15,000 to 50,000 gallons.

* from the various reports I’ve read, this number varies.

old-cisternsmap

I searched for a map of all the cisterns, which proved difficult to find. There was no online map anywhere. I read that since these were part of the AWSS, they were refilled by the fire department. I soon began searching for fire department data and found this set of intersections, along with the volume of each cistern. The source was the SFFD Water Supplies Manual.

cisterdata

The story of the San Francisco Cisterns was to be my first of three stories in this project.

Autodesk also runs Instructables, a DIY, how-to-make-things website. One of the Instructables details the mapping process, so if you want details, have a look at this Instructable.

To make this conversion happen, I wrote code in Python which called the Google Maps API to convert the intersections into lat/longs as well as to get elevation data. When I had asked people how to do this, I received many GitHub links. Most of them were buggy or poorly documented. I ended up writing mine from scratch.

Lesson #2: Because GitHub is both a backup system for source code and open source sharing project, many GitHub projects are confusing or useless.

That being said, here is my GitHub repo: SF Geocoder, which does this conversion. Caveat emptor.
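
The gist of the conversion, simplified and not the SF Geocoder repo itself, looks like this. It assumes the requests package and uses the public Google Geocoding and Elevation web services; API_KEY is a placeholder.

import requests

API_KEY = "YOUR_KEY_HERE"  # placeholder

def geocode(intersection):
    """Turn a street intersection into a (lat, lng) pair."""
    r = requests.get("https://maps.googleapis.com/maps/api/geocode/json",
                     params={"address": intersection + ", San Francisco, CA",
                             "key": API_KEY}).json()
    loc = r["results"][0]["geometry"]["location"]
    return loc["lat"], loc["lng"]

def elevation(lat, lng):
    """Look up the elevation in meters for a coordinate."""
    r = requests.get("https://maps.googleapis.com/maps/api/elevation/json",
                     params={"locations": f"{lat},{lng}", "key": API_KEY}).json()
    return r["results"][0]["elevation"]

lat, lng = geocode("Market St and Castro St")
print(lat, lng, elevation(lat, lng))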

Mapping the San Francisco Sewers
This was my second “story” with the Water Works project, which is simply to somehow represent the complex system that is underneath us. The details of the sewers are staggering. With approximately 30,000 manholes and 30,000 pipes that connect them, how do you represent or even begin mapping this?

And what was the story after all? It doesn't quite have the unique character of the cisterns. But it does portray a complex system. Even the DPW hadn't mapped this out in 3D space. I don't know if any city ever has. This was the compelling aspect: making the physical model itself from the large dataset.

Building a 3D Modeling System
In addition to looking for data and sifting through the sewer data that I had, I spent the first few weeks building up a codebase in OpenFrameworks.

The only other possibility was Rhino + Grasshopper, a software package I don’t know and which isn’t even an Autodesk product. Though it can handle algorithmic model-building, several colleagues were dubious that it could handle my large, custom dataset.

So, I built my own. After several days of work, I mapped out the nodes and pipes as you see below. I represented the nodes as cubes and pipes as cylinders — at least for the onscreen data visualization.

sewer-mapping

This is a closeup of the San Francisco bay waterfront. You can see some isolated nodes and pipes — not connected to the network. This is one example of where the data wasn’t clean. Since this is engineering data, there are all sorts of anomalies like virtual nodes, run-offs and more.

My code was fast and efficient since it was in C++. More importantly, I wrote custom STL exporters, which let my workflow go directly to a 3D printer without passing through other 3D packages to clean up the data. This took a lot of time to build, but once I got it working, it saved me hours of frustration later in the project.
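
For anyone curious what “writing your own STL exporter” involves, here is a toy Python sketch of the ASCII STL format; my real exporter lived in the C++ codebase.

```python
# A toy ASCII STL writer: the STL format is just a list of triangular facets,
# each with a normal and three vertices.
import math

def facet_normal(a, b, c):
    # cross product of two edge vectors, normalized
    ux, uy, uz = (b[i] - a[i] for i in range(3))
    vx, vy, vz = (c[i] - a[i] for i in range(3))
    nx, ny, nz = uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx
    length = math.sqrt(nx * nx + ny * ny + nz * nz) or 1.0
    return nx / length, ny / length, nz / length

def write_ascii_stl(path, triangles, name="sewerworks"):
    """triangles: list of (v0, v1, v2), each vertex an (x, y, z) tuple."""
    with open(path, "w") as f:
        f.write(f"solid {name}\n")
        for a, b, c in triangles:
            nx, ny, nz = facet_normal(a, b, c)
            f.write(f"  facet normal {nx} {ny} {nz}\n    outer loop\n")
            for x, y, z in (a, b, c):
                f.write(f"      vertex {x} {y} {z}\n")
            f.write("    endloop\n  endfacet\n")
        f.write(f"endsolid {name}\n")

# One triangle, just to show the format:
write_ascii_stl("test.stl", [((0, 0, 0), (1, 0, 0), (0, 1, 0))])
```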

seweremapping2

I also mapped out the Cisterns in 3D space using the same code. The Cisterns are disconnected in reality, but as a 3D print they need to be one cohesive structure. I modified the ofxDelaunay add-on (thanks GitHub) to create cylindrical supports that link the cisterns together.

What you see here is an “editor”, where I could change the thickness of the supports, remove unnecessary ones and edit the individual cistern models to put holes in certain ones.

I also scaled the Cisterns according to their volume. The pre-1906 ones tend to be small, while the largest one, at Civic Center, holds about 243,000 gallons, over three times the size of the standard post-earthquake 75,000-gallon cisterns.
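
For the curious, the same triangulate-and-connect idea can be sketched in Python with scipy.spatial.Delaunay; the sample points and the cube-root volume scaling below are illustrative choices, not my exact method.

```python
# Triangulate cistern locations, keep the triangle edges as candidate
# cylindrical supports, and scale each cistern by volume.
import numpy as np
from scipy.spatial import Delaunay

# (x, y, gallons) -- made-up sample points
cisterns = np.array([
    [10.0, 12.0, 75000],
    [14.5, 11.0, 75000],
    [12.0, 16.0, 243000],
    [18.0, 15.0, 15000],
])

tri = Delaunay(cisterns[:, :2])            # triangulate on the 2D positions
edges = set()
for a, b, c in tri.simplices:              # each simplex is a triangle
    for i, j in ((a, b), (b, c), (c, a)):
        edges.add((min(i, j), max(i, j)))  # de-duplicate shared edges

# scale radius with the cube root of volume so big tanks read as bigger models
radii = np.cbrt(cisterns[:, 2])
radii = radii / radii.max()                # normalize to the largest cistern

print(f"{len(edges)} support cylinders between {len(cisterns)} cisterns")
print("relative radii:", np.round(radii, 2))
```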

OF-cisterns-nomap

Story #3: Imaginary Drinking Hydrants
In the same document that had the locations of all of the San Francisco Cisterns, I also found this gem: 67 emergency drinking hydrants for public use in a city-wide disaster.

Whoa, I thought, how interesting…

drinking_hydrants

I dug deeper and scouted out the intersections in person. I took some photos of the Emergency Drinking Hydrants. They have blue drops painted on them. You can even see them on Street View.

I found online news articles from several years ago, which discussed this program, introduced in 2006, also known as the Blue Drop Hydrant program.


blue_drop_man.jpg

And I generated a web map using JavaScript and Leaflet.

imaginary-drnkinghydrants

I then published a link to the map on my Twitter feed. It generated a lot of excitement and was retweeted by many people.

twitt.jpg

SFist, a local San Francisco news blog, ended up covering it. I was excited. I thought I was doing a good public service.

However, there was a backlash…of sorts. It turns out that the program had been discontinued by the SFPUC. The organization did some quick damage control on its Facebook page and also contacted SFist.

The writer of the article then issued a correction stating that the program had been discontinued, along with a press statement from the SFPUC.

press2.jpg

He also had this quote, which was a bit of a jab at me. “It had sounded like designer Scott Kildall, who had been mapping the hydrants, had done a fair amount of research, but apparently not.”

In my defense, I re-researched the emergency drinking hydrants. Nowhere did it say that the program was discontinued. So, apparently the SFPUC quietly shuffled it out.

But later, I found that my map birthed a larger discussion. The SFPUC had this response, also printed later on SFist.

The key quote by Emergency Planning Director Mary Ellen Carroll is:

“When it comes to sheltering after a emergency, we don’t tell people ahead of time, ‘This is where you’ll need to go to find shelter after an earthquake’ because there’s no way to know if that shelter will still be there.

It makes sense that central gathering locations could be a bad idea. Imagine a gas leak or something similar at one of these locations. So a water distribution plan would have to be improvised according to the disaster at hand.

We do know from various news articles and from my own photographs that there was not only a map, but physical blue drops painted on the hydrants, in addition to a large publicity campaign. The program supposedly cost 1 million dollars, so that would have been an expensive map.

The SFPUC never pulled the old maps from its website, nor did it inform the public that the blue drop hydrants had been discontinued.

I blame it on general human miscommunication. And after visiting the SFPUC offices towards the end of my Water Works project, I’m entirely convinced that this is a progressive organization with smart people. They’re doing solid work.

But I had to rethink my mapping project, since these hydrants no longer existed.

When faced with adverse circumstances, at least in the area of mapping and art, you must be flexible. There’s always a solution. This one almost rhymes with Emergency — Imaginary.

Instead of hydrants for emergency drinking water, I ask the question: could we have a city where we could get tap water from these hydrants at any time? What if the water were recycled water?

They could have a faucet handle on them, so you could fill up your bottle when you get thirsty. More importantly, these hydrants could be a public service.

It’s probably impractical in the short term, but I love the idea of reusing the water lines for drinking lines — and having free drinking water in the public commons.

So, I rebranded this map and designed a hydrant with a drinking faucet attached to it. This would be the base form used for the maps.

Rebranded as Imaginary Drinking Hydrants

Creating Mini Models
With this data-visualization and mapping project, I wanted to strike a balance between aesthetics and legibility. With the datasets I now had and the C++ code I had written, I could geolocate cisterns, hydrants and sewer lines.

These would be connected by support structures in the case of cisterns and hydrants and pipe data for the sewers.

I decided that the actual data points would be miniature models, which I designed in Fusion 360 with the help of Autodesk guru, Taylor Stein. The first one I created was the Cistern model.

cisterns-fusion360

I went through several iterations to come up with this simple model. The design challenge was to find a form that looked like it could be an underground tank without bringing up other associations. In this case, without the three rectangular stubby pieces, it looks like a tortilla holder.

After a day of design and 3D print tests, I settled on this one.

cistern-model

And here you can see the outputs of the cisterns and the hydrants in MeshLab.

meshlab-cisterns

Here is the underside of the hydrant structure, where you can see the holes in the hydrants, which I use later for creating the final sculpture. These are drill holes for mounting the final prints on wood.

meshlab-hydrants-underneath

The manhole chamber design was the hardest one to figure out. This one is more iconographic than representational. Without some sort of symmetry, the look of the underground chamber didn’t resonate. I also wanted to provide a manhole cover on top of the structure. The flat bottom distinguishes it from the pipes.

manhole

Mapping and Legibility

stamen

One of my favorite aspects of being at Stamen was that four days a week, they provided lunch for us. We all ate lunch together. This was a good chunk of unstructured time to talk about mapping, music, personal life, whatever.

We solidified bonds — so often shared lunch is overlooked in organizations. In addition to informal discussion of the project, we also had a few creative brainstorm sessions, where I would present the progress of the project and get feedback from several people at Stamen. Folks from Autodesk and Gray Area also joined the discussion.

I hadn’t considered situating these pieces on a map before, but the group suggested integrating one, and the idea was quickly birthed that I should geolocate the prints on top of a map. This was a brilliant direction for the project.

OF-imaginaryhydrants-map

Stamen provided me with a high-resolution map that I could laser-etch, which came later, after the 3D printing. Now, with this direction for the project, I started making the actual 3D prints.

map-for-etching

Mega-prints with lots of cleaning
After all the mapping, arduous data-smoothing and tests upon structural tests, I was finally ready to spool off the large-scale 3D prints. Each print was approximately the size of the Objet 500 print bed: 20″ x 16″, making these huge. A big thanks to Autodesk for sponsoring the work and providing the machines.

Each print took between 40 and 50 hours of machine time, so I sent these out as weekend-long jobs. Time and resources were limited, so this was a huge endeavor.

cisterns-buildtime

I was worried that the prints would fail, but I got lucky in each case. The prints are a combination of resin materials: VeroClear and VeroWhite for the Cisterns and Hydrants, and mixes of VeroWhite and VeroBlack for the Sewers.

support-cisterns-far

When the prints come off the print bed, they are encased in a support material, which I first scraped off by hand and then removed with a high-pressure water sprayer.
cleaning-cistern

It took hours upon hours to get from this.

sewerworks

To this: a fully cleaned version of the Sewer print. This 3D print is of a section of the city: the Embarcadero area, which includes the Pier 9 facility where Autodesk is located.

For the Sewer Works print, the manhole chambers and pipes are scaled to the sizes in the data tables. I increased the elevation about 3 times to capture the hilly terrain of San Francisco. What you see here is an aerial view, as if you were in a helicopter flying from Oakland to San Francisco. The diagonal is Market Street, ending at the Ferry Building. On the right side, towards the back of the print, is Telegraph Hill. There are large pipes and chambers along the Embarcadero. Smaller ones comprise the sewer system in the hilly areas.
sewerworks-3d

Map-Etching and Final Fabrication
I’ll just summarize the final fabrication — this blog post is already very long. For more details, you can read this Instructable on how I did the fabrication work.

Using cherry wood, which I planed, jointed and glued together, I laser-etched these maps, which came out beautifully.

I chose wood both for its beautiful finish and because the material references the wooden Victorian and Edwardian houses that define the landscape of San Francisco. The laser-etching burns away the wood, like the fires after the 1906 Earthquake, which spawned the AWSS water system.

_MG_7318

The map above is the waterfront area for the Sewer Works print and the one below is the full map of the city that I used as the base for the San Francisco Cisterns and the Imaginary Drinking Hydrants sculptures.

_MG_7316

The last stages of the woodwork involved traditional fabrication, which I did at the Autodesk facilities at Pier 9.

_MG_7314

I drilled out the holes for mounting the final 3D prints on the wood bases and then mounted them on 1/16″ stainless rods, such that they float about 1/2″ above the wood map.
_MG_7330

And the final stage involved manually fitting the prints onto the rods.

_MG_7335

Final Results
Here are the three prints, mounted on the wood-etched maps.

Below is the Imaginary Drinking Hydrants print. This was the most delicate of the 3D prints.

06_large

These are the San Francisco Cisterns, which are concentrated in the older parts of San Francisco. They are nearly absent from the western part of the city, which became densely populated well after the 1906 Earthquake.

02_large

This is the Sewer Works print. The map is not as visible because of the density of the network. The pipes are a light gray and the manhole chambers a medium gray. The map does capture the extensive network of manmade piers along the waterfront.

03_large

The Website: San Francisco Cisterns and Imaginary Drinking Hydrants
The website for this project is waterworks.io. It has three interactive web maps, one for each of the three water systems.

The aforementioned Instructable, Mapping San Francisco Cisterns, details how I made these. The summary is that I did a lot of data-wrangling, often using Python to transform the data into GeoJSON files, a web-mappable format.
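
As an example of that wrangling, here is a hedged sketch of the intersection-list-to-GeoJSON step. The CSV column names are hypothetical, but the output structure, a FeatureCollection of Point features with longitude listed first, is the GeoJSON format that Leaflet consumes.

```python
# Convert a CSV of cisterns (hypothetical columns: intersection, lat, lng, gallons)
# into a GeoJSON FeatureCollection that a web map can load directly.
# Note: GeoJSON coordinates are [longitude, latitude], in that order.
import csv
import json

def cisterns_to_geojson(csv_path, out_path):
    features = []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            features.append({
                "type": "Feature",
                "geometry": {
                    "type": "Point",
                    "coordinates": [float(row["lng"]), float(row["lat"])],
                },
                "properties": {
                    "intersection": row["intersection"],
                    "gallons": int(row["gallons"]),
                },
            })
    with open(out_path, "w") as f:
        json.dump({"type": "FeatureCollection", "features": features}, f, indent=2)

cisterns_to_geojson("cisterns.csv", "cisterns.geojson")
```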

The Stamen designer-technicians were invaluable in pointing me toward Leaflet, an easy-to-use mapping library. I struggled with it for a while, as I was a complete newbie to JavaScript, but eventually sorted out how to create maps and customize the interactive elements.

Fortunately, I also received help from the designers at Stamen on the graphics. I only have so many skills and graphic design is not one of them.

cisternsmapping

The Website: Life of Poo
Leaflet’s performance bogged down when I had more than about 1,500 markers, and the sewer system has about 28,000 nodes.

I spent a lot of energy on node-trimming, using a combination of Python and Java code, and winnowed the count down to about 1,500. The consolidated node list was based on distance and used various techniques to map the smaller set of nodes in a cohesive way.
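
Here is a rough sketch of one distance-based approach, grid snapping; my actual trimming mixed several techniques, so treat this as illustrative rather than the exact algorithm.

```python
# Snap each sewer node to a coarse grid, keep a single representative per
# cell, and remap the pipes to the surviving nodes.
def consolidate(nodes, pipes, cell_size=200.0):
    """nodes: {node_id: (x, y)}, pipes: [(node_a, node_b)], cell_size in map units."""
    cell_rep = {}     # grid cell -> representative node id
    remap = {}        # original node id -> representative node id
    for node_id, (x, y) in nodes.items():
        cell = (int(x // cell_size), int(y // cell_size))
        rep = cell_rep.setdefault(cell, node_id)
        remap[node_id] = rep

    trimmed_nodes = {rep: nodes[rep] for rep in set(remap.values())}
    trimmed_pipes = {
        (remap[a], remap[b]) for a, b in pipes if remap[a] != remap[b]
    }
    return trimmed_nodes, sorted(trimmed_pipes)

# Increasing cell_size shrinks the node count, e.g. from ~28,000 toward ~1,500.
nodes = {1: (10.0, 20.0), 2: (30.0, 45.0), 3: (1200.0, 900.0)}
pipes = [(1, 2), (2, 3)]
print(consolidate(nodes, pipes, cell_size=200.0))
```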

lifeofpoo

In the hours just before presenting the project, I finished Life of Poo: an interactive journey of toilet waste.

On the website, you can enter a San Francisco address or intersection, such as “Twin Peaks, SF” or “47th & Judah, SF”, into Life of Poo and then press Flush Toilet.

This will begin an animated poo journey down the sewer map and to the wastewater treatment plant.

Not all of the flushes work as you’d expect. There are still glitches and bugs in the code. If you type in “16th & Mission”, the poo just sits there.
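
Conceptually, each flush is a walk over the trimmed sewer graph: snap the geocoded address to the nearest node, then step along pipes until reaching the treatment plant. Here is a rough Python sketch of that idea, not the site’s actual JavaScript; the node ids and adjacency structure are made up. A start node with no path to the plant is one way a walk like this can stall.

```python
# A sketch of the flush-to-treatment-plant walk as a breadth-first search
# over the trimmed sewer graph.
from collections import deque
from math import hypot

def nearest_node(nodes, x, y):
    """nodes: {node_id: (x, y)} -- snap a geocoded address to the closest node."""
    return min(nodes, key=lambda n: hypot(nodes[n][0] - x, nodes[n][1] - y))

def poo_path(adjacency, start, treatment_plant):
    """adjacency: {node_id: [neighbor ids]}; returns the node path or None."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == treatment_plant:
            return path                      # animate the marker along this path
        for neighbor in adjacency.get(path[-1], []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None  # disconnected start node: the poo just sits there
```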

Why the bugs? I have some ideas (see below), but I really like the chaotic results, so I’ll keep them for now.

Lesson #3: Sometimes you should sacrifice accuracy.

Future Directions
I worked very, very hard on this project and I’m going to let it rest for a while. There’s still some work I’d like to do on it some day.

Cistern Map
I’d like to improve the Cistern Map, as I think it has cultural value. As far as I know, it’s the only one on the web. The locations come from intersection data and, while close, are not entirely correct; sometimes they are off by a block or so. I don’t think this affects the integrity of the 3D map, but it would be important to correct for the web version.

Life of Poo
I want to see how this interactive map plays out and how people respond to it in the next couple of months. The animated poo is universally funny, but it doesn’t behave “properly”. Sometimes it gets stuck. This was the last part of the Water Works project and one that I got working the night before the presentation.

I had to do a lot of node-trimming to make this work — Leaflet can only handle about 1,500 data points before it slows down too much, so I did a lot of trimming from a set of about 28,000. This could be one source of the inaccuracies.

I don’t take gravity into account in the flow calculations, which is why I think the poo behaves oddly. But maybe the map is more interesting this way. It is, after all, an animated poo emoji.

Infrastructure Fabrication
This is where the project gets very interesting. What I’ve been able to accomplish with the “Sewer Works” print is to show how the sewer pipes of San Francisco look as a physical manifestation. This is only the beginning of many possibilities. I’d be eager to develop this technology and modeling system further, taking the usual GIS maps and translating them into physical models.

Thanks for reading this far and I hope you enjoyed this project,
Scott Kildall


EquityBot: Capturing Emotions

In my ongoing research and development of EquityBot — a stock-trading bot* with a philanthropic personality, which is my residency project at Impakt Works — I’ve been researching various emotional models for humans.

The code I’m developing will try to make correlations between stock prices and group emotions on Twitter. It’s a daunting task and one where I’m not sure what the signal-to-noise ratio will be (see disclaimer). As an art experiment, I don’t know what will emerge from this, but it’s geeky and exciting.

In the last couple of weeks, I’ve been creating a rudimentary system that will just capture words. A more complex system would use sentiment analysis algorithms. My time and budget are limited, so phase 1 will be a simple implementation.

I’ve been looking for some sort of emotional classification system. There are several competing models (of course).

My favorite is the Plutchik Wheel of Emotions, which was developed in 1980. It has a symmetrical look to it and apparently is deployed in various AI systems.

Plutchik-wheel.svg

Other models such as the Lövheim cube of emotion are more recent and seem compelling at first. But it’s missing something critical: sadness or grief. Really? This is such a basic human emotion and when I saw it was absent, I tossed the cube model.

1280px-Lövheim_cube_of_emotion

Back to the Plutchik model…my “Twitter bucket” captures certain words from the color wheel above. I want enough words for a reasonable statistical correlation (about 2000 tweets/hour), but too many of one word will strain my little Linode server. For example, the word “happy” is a no-go since there are thousands of tweets with that word in them each minute.

Many people tweet about anger by just using the word “angry” or “anger”, so that’s an easy one. Same thing goes with boredom/boring/bored.

For other words, I need to go synonym-hunting. Take apprehension: the Twitter stream for this word is just a trickle, so I’ve mapped it to “worry” and “anxiety”, which show up more often in tweets. It’s not quite correct, but reasonably close.

The word “terror” has completely lost its original meaning and now only refers to political discourse. I’m still trying to figure out a good synonym map for terror: terrifying, terrify, terrible? None of these is quite right. There’s no good word left to represent that feeling of absolute fear.
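
For what it’s worth, the word-bucket itself is nothing fancy; here is a rough sketch of the idea, with illustrative synonym lists rather than my final ones.

```python
# Phase 1 of the "Twitter bucket": no sentiment analysis, just counting
# hand-picked words per emotion. The synonym lists are illustrative.
EMOTION_WORDS = {
    "anger":        ["anger", "angry"],
    "boredom":      ["boredom", "boring", "bored"],
    "apprehension": ["worry", "worried", "anxiety", "anxious"],
    "terror":       ["terrified", "terrifying"],   # still not quite right
    "sadness":      ["sad", "grief", "grieving"],
}

def classify(tweet_text):
    """Return the set of emotions whose watch-words appear in a tweet."""
    words = set(tweet_text.lower().split())
    return {
        emotion
        for emotion, keywords in EMOTION_WORDS.items()
        if words & set(keywords)
    }

print(classify("so anxious and bored waiting for this flight"))
# -> {'apprehension', 'boredom'}  (set order may vary)
```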

This gets tricky and I’m walking into the dark valley of linguistics. I am well-aware of the pitfalls.

Screen Shot 2014-10-01 at 3.18.33 PM

* Disclaimer:
EquityBot doesn’t actually trade stocks. It is an art project intended for illustrative purposes only. EquityBot is not a licensed financial advisor, and nothing it produces is, or should be regarded as, investment advice or a recommendation regarding any particular security or course of action.

Polycon in Berlin

This week I traveled to Berlin for Polycon. No…it’s not a convention on polyamory, but a project developed by my longtime friend, Michael Ang (aka Mang). Polygon Construction Kit (aka Polycon) is a software toolkit for converting 3D polygon models into physical objects.

IMG_0246

I wanted an excuse to visit Berlin, to hang out with Mang and to open up some possibilities for the physical data-visualization behind EquityBot, which I’m working on for my artist residency at Impakt Works and their upcoming festival.

I brought my recently-purchased Printrbot Simple Metal, which I had disassembled into this travel box.

IMG_0281

After less than 30 minutes, I had it reassembled and working. Victory! Here it is, printing one of the polygon connectors.

IMG_0248

How does Polycon work? Mang shared the details with me. You start with a simple 3D model from some sort of program. He uses SketchUp for creating physical models of his large-scale sculptures. I prefer OpenFrameworks, which is powerful and lets me easily manipulate shapes from data streams.

Here’s the simple screenshot in OpenFrameworks of two polyhedrons. I just wrote this the other day, so there’s no UI for it yet.

Screen Shot 2014-09-25 at 6.10.14 PM

And here is how it looks in MeshLab. It’s water-tight, meaning that it can be 3D-printed.

Screen Shot 2014-09-25 at 6.10.59 PM

My goal is to do larger-scale data visualizations than some of my previous works such as Data Crystals and Water Works. I imagine room-sized installations. I’ve had this idea for many months of using the 3D printer to create joinery from datasets and to skin the faces using various techniques, TBD.

How it works: Polycon loads a 3D model and, using Python scripts in FreeCAD, generates 3D joints that, along with wooden dowels, can be assembled into polygonal structures.

Screen Shot 2014-09-25 at 6.09.00 PM
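
To make the joint idea concrete: for each vertex of the model, you need the unit direction toward every connected neighbor, and those directions become the sockets that the dowels slide into. Mang’s scripts do the solid modeling in FreeCAD; the pure-Python sketch below only shows that geometry step, using a made-up tetrahedron.

```python
# Compute the dowel-socket directions for each vertex of a polygon model:
# one unit vector per edge leaving each vertex.
import numpy as np

# A rough tetrahedron: 4 vertices, 6 edges (pairs of vertex indices)
vertices = np.array([
    [0.0, 0.0, 0.0],
    [1.0, 0.0, 0.0],
    [0.5, 0.87, 0.0],
    [0.5, 0.29, 0.82],
])
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]

def socket_directions(vertices, edges):
    """Return {vertex index: [unit vectors toward each connected neighbor]}."""
    sockets = {i: [] for i in range(len(vertices))}
    for a, b in edges:
        direction = vertices[b] - vertices[a]
        unit = direction / np.linalg.norm(direction)
        sockets[a].append(unit)
        sockets[b].append(-unit)
    return sockets

for vertex, dirs in socket_directions(vertices, edges).items():
    print(f"joint {vertex}: {len(dirs)} sockets")   # 3 dowels meet at each corner
```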

The Printrbot makes adequate joinery, but it’s nowhere near as pretty as the Vero prints on the Objet 500 at Autodesk. It doesn’t matter that much, because my digital joinery will be hidden in the final structures.

IMG_0272

Mang guided me through the construction of my first Polycon structure. There’s a lot of cleanup work involved, such as drilling out the holes in each of the joints.

IMG_0274

It took a while to assemble the basic form. There are vertex-numbering improvements that we’ll both make to the software. Together, Mang and I brainstormed ideas for how to make the assembly go more quickly.

IMG_0259

After about 15 minutes, I got my first polygon assembled.

IMG_0265

It looks a lot like…the 3D model. I plan to be working on these forms over the next several months, so it felt great to have a successful first day.

IMG_0268

And here is a really nice image of one of Mang’s pieces: sculptures of mountains, made from memories of flying high in a glider. I like where he’s going with his artwork: making models based on nature, with ideas of recording these spaces and playing them back in various urban settings. You can check out Michael Ang’s work here on his website.
IMG_0278

A Starting Point: Distributed Capital

I’m doing more research on EquityBot — the project for my Impakt Works residency, which I just started a couple of days ago.

EquityBot is a stock-trading algorithm that explores the connections between collective emotions on social media and financial speculation. It will be presented at the Impakt Festival at the end of October.

It will also consist of a sculptural component (presented post-festival), which is the more experimental form.

Many of you are familiar with Paul Baran’s work on designing a distributed network, but many others may not be. Working at RAND on research for the U.S. Air Force, he determined that a centralized communications network would be vulnerable to attack and suggested that the United States use a distributed network instead.
baran

Interestingly, there is a widespread myth that the Internet, which derived from ARPANET, was designed to withstand a nuclear attack using this model. This isn’t the case; the architects of the Internet’s transmission protocols simply heard of RAND’s work and adapted it for packet use. Yet the myth persists.

On a side note, perhaps military technology could be useful for the public good. If only we could declassify the technology, like Baran did.

The distributed network reminds me of a 3D polygon mesh. I think this could be a good source for a 3D data-visualization: Distributed Capital. I’ll research this more in the future.

But EquityBot isn’t about networks in the formal sense; it is a project about constructing a predictive model of stock changes, based on the idea that Twitter sentiments correlate with fluctuations in stock prices.

Screen Shot 2014-09-17 at 6.08.23 AM
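
Concretely, a first pass at “correlation” could be as simple as a lagged Pearson correlation between an hourly emotion-word count and hourly stock returns. Here is a hedged pandas sketch; the file names, column names and one-hour lag are assumptions for illustration, not EquityBot’s actual model.

```python
# A toy first pass: does the hourly count of an emotion word lead hourly
# stock returns? (File names, columns and the 1-hour lag are assumptions.)
import pandas as pd

# hourly emotion counts and stock closes, indexed by timestamp
emotions = pd.read_csv("emotion_counts.csv", index_col=0, parse_dates=True)
prices = pd.read_csv("stock_closes.csv", index_col=0, parse_dates=True)

returns = prices["AAPL"].pct_change()        # hour-over-hour return
anger_lagged = emotions["anger"].shift(1)    # emotion leads price by 1 hour

aligned = pd.concat([anger_lagged, returns], axis=1).dropna()
corr = aligned.corr().iloc[0, 1]             # Pearson correlation
print(f"lagged correlation between 'anger' and AAPL returns: {corr:.3f}")
```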

Do I know there is a correlation? Not yet, but I think there is a good possibility. One of my reading sources, The Computational Beauty of Nature, sums up the value of simulated models in its introduction. The predictive model might fail in its results but it will likely reveal a greater truth in the economic system that it is trying to predict. Thus, knowing the uncertainty ahead of time will provide a sense of certainty. EquityBot may not “work” but then again, it may.

compbeautyofnature

My source of dissent is the excellent book The Signal and the Noise: Why So Many Predictions Fail — but Some Don’t by Nate Silver. After reading it last summer, I was convinced that any predictive analysis would simply be noise. I was disheartened and halted the EquityBot project (previously called Grantbot) for many months.

la-ca-nate-silver

However, now I’m not so sure. It seems likely that people’s moods would affect financial decisions, which in turn would affect stock prices. With studies such as this one by Vagelis Hristidis, which found some correlation between Twitter chatter and stock prices, I think there is something to this, which is why I’ve revisited the EquityBot project.

I’ll follow the Buddhist maxim with this project and embrace its uncertainty.

EquityBot @ Impakt

My exciting news is that this fall I will be an artist-in-residence at Impakt Works, which is in Utrecht, the Netherlands. The same organization puts on the Impakt Festival every year, which is a media arts festival that has been happening since 1988. My residency is from Sept 15-Nov 15 and coincides with the festival at the end of October.

Utrecht is a 30-minute train ride from Amsterdam and 45 minutes from Rotterdam. By all accounts, it is a small, beautiful canal city with medieval origins, and it hosts the largest university in the Netherlands.

Of course, I’m thrilled. This is my first European art residency and I’ll have a chance to reconnect with some friends who live in the region as well as make many new connections.

impakt; utrecht; www.impakt.nl

The project I’ll be working on is called EquityBot and will premiere at the Impakt Festival in late October as part of their online component. It will have a virtual presence like my Playing Duchamp artwork (a Turbulence commission) and my more recent project, Bot Collective, produced while an artist-in-residence at Autodesk.

Like many of my projects this year, this will involve heavy coding, data-visualization and a sculptural component.

equity_bot_logo

At this point, I’m in the research and pre-production phase. While configuring back-end server code, I’m also gathering reading materials about capital and algorithms for the upcoming plane rides, train rides and rainy Netherlands evenings.

Here is the project description:

EquityBot

EquityBot is a stock-trading algorithm that explores the connections between collective emotions on social media and financial speculation. Using custom algorithms Equitybot correlates group sentiments expressed on Twitter with fluctuations in related stocks, distilling trends in worldwide moods into financial predictions which it then issues through its own Twitter feed. By re-inserting its results into the same social media system it draws upon, Equitybot elaborates on the ways in which digital networks can enchain complex systems of affect and decision making to produce unpredictable and volatile feedback loops between human and non-human actors.

Currently, autonomous trading algorithms execute the large majority of stock trades. These analytic engines are normally sequestered inside private investment companies operating with billions of dollars. EquityBot reworks this system, imagining what it might be like if this technological attention were directed towards the public good instead. How would the transparent, public sharing of powerful financial tools affect the way the stock market works for the average investor?

kildall_bigdatadreams

I’m imagining a digital fabrication portion of EquityBot, which will be the more experimental part of the project and will involve 3D-printed joinery. I’ll be collaborating with my longtime friend and colleague, Michael Ang, on the technology — he’s already been developing a related polygon construction kit — as well as doing some idea-generation together.

“Mang” lives in Berlin, which is a relatively short train ride, so I’m planning to make a trip where we can work together in person and get inspired by some of the German architecture.

My new 3D printer — a Printrbot Simple Metal — will accompany me to Europe. This small, relatively portable machine produces decent quality results, at least for 3D joints, which will be hidden anyways.

printrbot