People I Want To Punch in the Face

“People I Want to Punch in the Face” is a book of blank pages sold at the Whitney (and apparently on Etsy as well).

In one of them, unbeknownst to the bookstore staff, assorted visitors filled in their choices.


Bad Data: SF Evictions and Airbnb

The inevitable conversation about evictions comes up at every San Francisco party…art organizations closing, friends getting evicted…the city is changing. It has become a boring topic, yet it is absolutely, completely 100% real.

For the Bad Data series — 12 data visualizations depicting socially polarized, scientifically dubious and morally ambiguous datasets, each etched onto an aluminum honeycomb panel — I am featuring two works, 18 Years of Evictions in San Francisco and 2015 Airbnb Listings, for exactly this reason. These two etchings are the centerpieces of the show.

evictions_airbnb

This is the reality of San Francisco: it is changing, and the data is ‘bad’ — not in the sense of inaccurate, but in the deeper sense of cultural malaise.

By the way, the reception for the “Bad Data” show is this Friday (July 24, 2015) at A Simple Collective, and the show runs through August 1st.

The Anti-Eviction Mapping Project has done a great job of aggregating data on this discouraging topic, hand-cleaning it and producing interactive maps that animate over time. They’re even using the Stamen map tiles, which are the same ones that I used for my Water Works project.

Screen Shot 2015-07-23 at 4.52.36 PM

When I embarked on the Bad Data series, I reached out to the organization and they assisted me with their datasets. My art colleagues may not know this, but I’m an old-time activist in San Francisco, which helped me in getting the data. I know the story of evictions is not new, but it has certainly never been on this scale.

In 2001, I worked in a now-defunct video activist group called Sleeping Giant, which worked on short videos in the era when Final Cut Pro made video-editing affordable and when anyone with a DV camera could make their own videos. We edited our work, sold DVDs and had local screenings, stirring up the activist community and telling stories from the point-of-view of people on the ground. Sure, now we have Twitter and social media, but at the time, this was a huge deal in breaking apart the top-down structures of media dissemination.

Here is No Nos Vamos, a hastily-edited video about evictions in San Francisco. Yes, this was 14 years ago.

I’ve since moved away from video documentary work and towards making artwork: sculpture, performance, video and more. The video-activist work and documentary video in general felt overly confining as a creative tool.

My current artistic focus is to transform datasets using custom software code into physical objects. I’ve been working with the amazing fabrication machines at Autodesk’s Pier 9 facility to make work that was not previously possible.

This dataset (also provided through the SF Rent Board) includes all the no-fault evictions in San Francisco. I got my computer geek on…well, I do try to use my programming powers for non-profit work and artwork.

I mapped the data into vector shapes using openFrameworks, the open-source C++ toolkit, and wrote code that transformed the ~9300 data points into plottable shapes, which I could open in Illustrator. I did some work tweaking the strokes and styles.
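As a rough illustration of that data-to-vector step, here is a minimal sketch in Processing (the actual project used openFrameworks in C++); the CSV file name, column order and bounding box are assumptions for illustration.

```java
// Hypothetical sketch: plot one vector mark per eviction record and save a
// PDF that can be opened in Illustrator. The real code used openFrameworks.
import processing.pdf.*;

void setup() {
  size(1440, 1440);   // 20 x 20 inches at 72 dpi
  noLoop();
}

void draw() {
  beginRecord(PDF, "evictions.pdf");   // vector output for Illustrator
  background(255);
  noFill();
  stroke(0);

  String[] rows = loadStrings("evictions.csv");   // assumed: "lon,lat" per line
  for (String row : rows) {
    String[] cols = split(row, ',');
    float lon = float(cols[0]);
    float lat = float(cols[1]);
    // map San Francisco's rough bounding box onto the square canvas
    float x = map(lon, -122.52, -122.35, 0, width);
    float y = map(lat, 37.83, 37.70, 0, height);
    ellipse(x, y, 3, 3);                // one small mark per eviction record
  }
  endRecord();
}
```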

sf_evictions_20x20

This is what the etching looks like from above, once I ran it through the water jet. There were a lot of settings and tests to get to this point, but the final results were beautiful.

waterjet-overhead

The material is 3/4″ aluminum honeycomb. I tuned the water jet’s high-pressure stream to pierce the top layer but not the bottom one. However, the water has to go somewhere, and its collisions against the honeycomb produce unpredictable results.

…just like the evictions themselves. We don’t know the full effect of displacement, but can only guess as the city is rapidly becoming less diverse. The result is below, a 20″ x 20″ etching.

Bad Data: 18 Years of San Francisco Evictions

baddata_sfevictions

The Airbnb debate is a little less clear-cut. Yes, I do use Airbnb. It is incredibly convenient. I save money while traveling and also see neighborhoods I’d otherwise miss. However, the company and its effect on city economies are contentious.

For example, there is San Francisco’s 14% hotel tax, which Airbnb finally consented to paying after 3 years. Note: this was after they already had a successful business.

There also seems to be a long-term effect on rent. Folks, and I’ve met several who do this, are renting out places as tenants on Airbnb. Some don’t actually live in their apartments any longer. The effect is to take a unit off the rental market and turn it into a vacation rental. Some argue that this also skirts rent-control law in the first place, which was designed as a compromise between landlords and tenants.

There are potential zoning issues, as well…a myriad of issues around Airbnb.

BAD DATA: 2015 AIRBNB LISTINGS, etching file

airbnb_sf

In any case, the locations of the Airbnb rentals (self-reported, not a complete list) certainly fit the premise of the Bad Data series. It’s an amazing dataset. Thanks to darkanddifficult.com for this data source.

BAD DATA: 2015 Airbnb Listings

baddata_airbnb

Selling Bad Data

The reception for my solo show “Bad Data”, featuring the Bad Data series is this Friday (July 24, 2015) at A Simple Collective.

Date: July 24th, 2015
Time: 7-9pm
Where: ASC Projects, 2830 20th Street (btw Bryant and York), Suite 105, San Francisco

The question I had when pricing these works was: how do you sell Bad Data? The material costs were relatively low. The labor time was high. And the data sets were (mostly) public.

We came up with this price list, subject to change.

///  Water-jet etched aluminum honeycomb:

baddata_sfevictions
18 Years of San Francisco Evictions, 2015 | 20 x 20 inches | $1,200
Data source: The Anti-Eviction Mapping Project and the SF Rent Board


baddata_airbnb
2015 AirBnB Listings in San Francisco, 2015 | 20 x 20 inches | $1,200
Data source: darkanddifficult.com


baddata_hauntedlocations
Worldwide Haunted Locations, 2015 | 24 x 12 inches | $650
Data source: Wikipedia


baddata_ufosightings

Worldwide UFO Sightings, 2015 | 24 x 12 inches | $650
Data source: National UFO Reporting Center (NUFORC)


baddata_missouriabortionalternatives

Missouri Abortion Alternatives, 2015 | 12 x 12 inches | $150
Data source: data.gov (U.S. Government)


baddata_socalstarbucks

Southern California Starbucks, 2015 | 12 x 8 inches | $80
Data source: https://github.com/ali-ce


baddata_usprisons

U.S. Prisons, 2015 | 18 x 10 inches | $475
Data source: Prison Policy Initiative prisonpolicy.org (via Josh Begley’s GitHub page)


///  Water-jet etched aluminum honeycomb with anodization:

baddata_denvermarijuana

Albuquerque Meth Labs, 2015 | 18 x 12 inches | $475
Data source: http://www.metromapper.org


baddata_usmassshootings

U.S. Mass Shootings (1982-2012), 2015 | 18 x 10 inches | $475
Data source: Mother Jones


baddata_blacklistedips-banner

Blacklisted IPs, 2015 | 20 x 8 ½  inches | $360
Data source: Suricata SSL Blacklist


baddata_databreaches

Internet Data Breaches, 2015 | 20 x 8 ½ inches | $360
Data source: http://www.informationisbeautiful.net

Bad Data, Internet Breaches, Blacklisted IPs

In 1989, I read Neuromancer for the first time. The thing that fascinated me the most was not the concept of “cyberspace” that Gibson introduced. Rather it was the physical description of virtual data. The oft-quoted line is:

“The matrix has its roots in primitive arcade games. … Cyberspace. A consensual hallucination experienced daily by billions of legitimate operators, in every nation, by children being taught mathematical concepts. … A graphic representation of data abstracted from banks of every computer in the human system. Unthinkable complexity. Lines of light ranged in the nonspace of the mind, clusters and constellations of data. Like city lights, receding.”

What was this graphic representation of data? It struck me at first and has stuck with me ever since. I could only imagine what it might look like. This concept of physicalizing virtual data later led to my Data Crystals project. Thank you, Mr. Gibson.

dc_sfart_v1

In Neuromancer, the protagonist Case is a freelance “hacker”. The book was published well before Anonymous, back in the days when KILOBAUD was the equivalent of Spectre for the BBS world.

At the time, I thought that there would be no way that corporations would put their data in a central place that anyone with a computer and a dial-up connection (and, later T1, DSL, etc) could access. This would be incredibly stupid.

And then, the Internet happened, albeit more slowly than people remember. Now hacking and data breaches are commonplace.

My “Bad Data” series — waterjet etchings of ‘bad’ datasets onto aluminum honeycomb panels — captures two aspects of internet hacking: Internet data breaches and blacklisted IPs.

In these examples, ‘bad’ has a two-layered meaning. The abrogation of accepted norms of Internet behavior is widely considered a legal, though not always a moral, crime. The data is also ‘bad’ in the sense that it is incomplete: data breaches are usually not advertised by the entities that get breached. That would be poor publicity.

For the Bad Data series, I worked not necessarily with the data I wanted, but rather with the data I could get. From Information Is Beautiful, I found this dataset of Internet data breaches.

Screen Shot 2015-07-12 at 8.22.04 PM

What did I discover? …that Washington DC is the leader in breached information. I suspect it’s mostly because the U.S. government is the biggest target rather than because of lax government security. The runner-up is New York City, the center of American finance. Other notable cities are San Francisco, Tehran and Seoul. San Francisco makes sense — the city is home to many internet companies. And Tehran is the target of Western internet attacks, government or otherwise. But Seoul? They claim to be targeted by North Korea. However, as we found out with the Sony Pictures Entertainment hack, North Korea is an easy scapegoat.

BAD DATA: INTERNET DATA BREACHES (BELOW)

baddata_databreaches

Conversely, there are many lists of banned IPs. The one I worked with is the Suricata SSL Blacklist. This may not be the best source, as there are thousands of IP Blacklists, but it is one that is publicly available and reasonably complete. As I’ve learned, you have to work with the data you can get, not necessarily the data you want.

I ran both of these etched panels through an anodization process, which created a filmy residue on the surface. I’m especially pleased with how the Blacklisted IPs panel came out.

Bad Data: BLACKLISTED IPs (below)

baddata_blacklistedips

Genetic Portraits and Microscope Experiments

I recently finished a new artwork — called Genetic Portraits — which is a series of microscope photographs of laser-etched glass that data-visualize a person’s genetic traits.

I specifically developed this work as an experimental piece for the Bearing Witness: Surveillance in the Drone Age show. I wanted to look at an extreme example of how we have freely surrendered our own personal data for corporate use. In this case, 23andMe provides a (paid) extensive genetic sequencing package. Many people, including myself, have sent in saliva samples to the company, which it then processes. From their website, you can get a variety of information, including the projected likelihood that you might be prone to specific diseases based on your genetic traits.

Following my line of inquiry with other projects such as Data Crystals and Water Works, where I wrote algorithms that transformed datasets into physical objects, this project processes an individual’s genetic sequence to generate vector files, which I later use to laser-etch onto microscope slides. The full project details are here.

gp_scott_may11

Concept + Material
I began my experiment months earlier, before the project was solidified, by examining the effect of laser-etching on glass underneath a microscope. This stemmed from conversations with some colleagues about the effect of laser-cutting materials. When I looked at the results underneath a microscope, I saw amazing things: an erratic universe accentuated by curved lines. Even with the same file, each etching is unique. The glass cracks in different ways. Digital fabrication techniques still result in distinct analog effects.

blog-IMG_4106

When the curators of the show, Hanna Regev and Matt McKinley, invited me to submit work on the topic of surveillance, I considered how to leverage various experiments of mine, and came back to this one, which would be a solid combination of material and concept: genetic data etched onto microscope slides and then shown at a macro scale as 20” x 15” digital prints.

Surrendering our Data
I had so many questions about my genetic data. Is the research being shared? Do we have ownership of this data? Does 23andMe even ask for user consent? As many articles point out, the answers are exactly what we fear. Their user agreement states that “authorized personnel of 23andMe” can use the data for research. This official-sounding text simply means that 23andMe decides who gets access to the genetic data I submitted. 23andMe is not unique: other gene-sequencing companies have similar provisions, as the article suggests.

Some proponents suggest that 23andMe is helping the research front, while still making money. It’s capitalism at work. This article in Scientific American sums up the privacy concerns. Your data becomes a marketing tool, and people like me have handed a valuable dataset to a corporation, which can then sell us products based on the very data we have provided. I completed the circle and I even paid for it.

However, what concerns me even more than 23andMe selling or using the data — after all, I did provide my genetic data, fully aware of its potential use — is the statistical accuracy of genetic data. Some studies have reported a Eurocentric bias to the data, and the FDA has also battled with 23andMe regarding the health data they provide. The majority of the data (with the exception of Bloom’s Syndrome) simply wasn’t predictive enough. Too many people had false positives with the DNA testing, which not only causes worry and stress but could lead to customers taking pre-emptive measures such as getting a mastectomy if they mistakenly believe they are genetically predisposed to breast cancer.

A deeper look at the 23andMe site shows a variety of charts that make it appear like you might be susceptible (or immune) to certain traits. For example, I have lower-than-average odds of having “Restless Leg Syndrome“, which is probably the only neurological disorder that makes most people laugh when hearing about it. My genetic odds of having it are simply listed as a percentage.

Our brains aren’t very good with probabilistic models, so we tend to inflate and deflate statistics. Hence, one of the many problems with false positives.

And, as I later discovered, from an empirical standpoint, my own genetic data strayed far from my actual personality. Our DNA simply does not correspond closely enough to reality.

Screen Shot 2015-06-16 at 11.06.44 AM

Data Acquisition and Mapping
From the 23andMe site, you can download your raw genetic data. The resulting many-megabyte file is full of rsid data and the actual allele sequences.

Screen Shot 2015-06-15 at 10.37.08 AM

Isolating useful information from this was tricky. I cross-referenced some of the rsids used for common traits from 23andMe with the SNP database. At first I wanted to map ALL of the genetic data. But, the dataset was complex — too much so for this short experiment and straightforward artwork.

Instead, I worked with some specific indicators that correlate to physiological traits such as lactose tolerance, sprinter-based athleticism, norovirus resistance, pain sensitivity, the “math” gene, cilantro aversion — 15 in total. I avoided genes that might correlate to various general medical conditions like Alzheimer’s and metabolism.

For each trait I cross-referenced the SNP database with 23andMe data to make sure the allele values aligned properly. This was arduous at best.
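For readers curious what that extraction step looks like, here is a minimal Processing sketch of pulling a few rsids out of a 23andMe-style raw export; the file name and the specific rsids are placeholders, not the exact markers used in the artwork.

```java
// Hypothetical sketch: scan the raw export (tab-separated rsid, chromosome,
// position, genotype, with '#' comment lines) and keep only the rsids we want.
import java.util.HashMap;

String[] wanted = { "rs4988235", "rs1815739" };   // placeholder trait markers
HashMap<String, String> genotypes = new HashMap<String, String>();

void setup() {
  String[] lines = loadStrings("genome_raw.txt"); // assumed file name
  for (String line : lines) {
    if (line.startsWith("#")) continue;           // skip header comments
    String[] cols = split(line, '\t');
    for (String rsid : wanted) {
      if (cols[0].equals(rsid)) {
        genotypes.put(rsid, cols[3]);             // allele pair, e.g. "AG"
      }
    }
  }
  println(genotypes);
}
```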

There was also a limit on physical space for etching the slide, so having more than 24 marks or etchings on one plate would be chaotic. Through days of experimentation, I found that 12-18 curved lines would make for compelling microscope photography.

To map the data onto the slide, I modified Golan Levin’s decades-old Yellowtail Processing sketch, which I had been using as a program to generate curved lines on my test slides. I found that he had developed an elegant data-storage mechanism that captured gestures. From the isolated rsids, I then wrote code that assigned weighted numbers to allele values (i.e. AA = 1, AG = 2, GG = 3, depending on the rsid).

gp_illustrator

Based on the rsid numbers themselves, my code generated (x, y) anchor points and curves with the allele values changing the shape of each curve. I spent some time tweaking the algorithm and moving the anchor points. Eventually, my algorithm produced this kind of result, based on the rsids.

genome_scott_notated
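Sketching those two steps in Processing, under stated assumptions: the weighting table (AA = 1, AG = 2, GG = 3) follows the description above, while the way the rsid digits seed the anchor points is illustrative rather than the exact scheme used for the etchings.

```java
// Hypothetical mapping: allele pair -> weight, rsid digits -> repeatable curve.
float alleleWeight(String genotype) {
  if (genotype.equals("AA")) return 1;
  if (genotype.equals("AG") || genotype.equals("GA")) return 2;
  if (genotype.equals("GG")) return 3;
  return 2;   // fallback for other allele pairs
}

void drawTraitCurve(String rsid, String genotype) {
  int seed = Integer.parseInt(rsid.substring(2));  // digits after "rs"
  randomSeed(seed);                                // same rsid -> same curve
  float w = alleleWeight(genotype);

  float x1 = random(width), y1 = random(height);
  float x2 = random(width), y2 = random(height);
  // the allele weight bends the control points, changing each curve's shape
  bezier(x1, y1,
         x1 + 40 * w, y1 - 40 * w,
         x2 - 40 * w, y2 + 40 * w,
         x2, y2);
}
```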

The question I always get asked about my data-translation projects is about legibility. How can you infer results from the artwork? It’s a silly question, like asking a Kindle engineer to analyze a Shakespeare play. A designer of data visualization will try to tell a story using data and visual imagery.

My research and work focuses on deep experimentation with the formal properties of sculpture — or physical forms — based on data. I want to push the boundaries of what art can look like, continuing the lineage of algorithmically-generated work by artists such as Sol LeWitt, Sonya Rapoport and Casey Reas.

Is it legible? Slightly so. Does it produce interesting results? I hope so.

gp_slide_image

But, with this project, I’ve learned so much about genetic data — and even more about the inaccuracies involved. It’s still amazing to talk about the science that I’ve learned in the process of art-making.

Each of my 5 samples looks a little bit different. This is the mapping of actual genetic traits of my own sample and that of one other volunteer named “Nancy”.

genome_scott_notated

Genetic Traits for Scott (ABOVE)
Genetic Traits for Nancy (BELOW)

genome_scott_notated

We both share a number of genetic traits such as the “empathy” gene and curly hair. The latter seems correct — both of us have remarkably straight hair. I’m not sure about the empathy part. Neither one of us is lactose intolerant (also true in reality).

But the test-accuracy breaks down on several specific points. Nancy and I do have several differences including athletic predisposition. I have the “sprinter” gene, which means that I should be great at fast-running. I also do not have the math gene. Neither one of these is at all true.

I’m much more suited to endurance sports such as long-distance cycling and my math skills are easily in the 99th percentile. From my own anecdotal standpoint, except for well-trodden genetics like eye color, cilantro aversion and curly hair, the 23andMe results often fail.

The genetic data simply doesn’t seem to support the physical results. DNA is complex. We know this: it is non-predictive. Our genotype results in different phenotypes, and the environmental factors are too complex for us to understand with current technology.

Back to the point about legibility. My artwork is deliberately non-legible based on the fact that the genetic data isn’t predictive. Other mapping projects such as Water Works are much more readable.

I’m not sure where this experiment will go. I’ve been happy with the results of the portraits, but I’d like to pursue this further, perhaps with scientists who would be interested in collaborating around the genetic data.

FOUR FINAL SLIDE ETCHINGS  (BELOW)

gp_allison_may11

 

gp_michele_may11 gp_nancy_may11 gp_scott_may11

Dérive in Paris

The first day after arriving in Paris, we embarked on a dérive — the French word for a “drift” — an unplanned journey (usually) through an urban space. The idea is to immerse yourself in the moment, the now of a city. No maps, no mobile phones, no direction; just walk and make choices on where to go based on your senses: the smells, sights and sounds of a city. This experiment would hopefully give us some sort of authentic, subjective experience, devoid of the usual centralized modes of organization.

I did this once before, in Berlin, while reading Rebecca Solnit’s A Field Guide to Getting Lost. That time was by bicycle and I spent the first day meandering through the city with no direction. Every couple of hours, I’d stop for a cup of coffee or a snack and read Solnit’s book, which covered themes of mental and emotional wandering. It was profound. I noticed odd things, mostly architectural.

solnit_gettinglost

My recommendation is to do this when you first arrive in an unfamiliar city, after getting a night’s sleep but before you’ve done anything else. At this point, your body is still jet-lagged. Daily patterns have yet to be formed. Memories are unestablished. The brain is at its most receptive state.

IMG_1178

We started here, near where we were staying. All I knew was that the 6th Arrondissement was on the Left Bank. I’ve since become familiar with the shell-like ordering of the city’s districts.

We picked the direction that we most “liked”, based on whatever looked best down the street.

IMG_1179

When you’re not trying to get somewhere or having a conversation about something, you notice funny things, like tons of push-scooters locked with cheap cable locks everywhere.

IMG_1183

Or custom-painted tiles like these. Of course, these are “touristy”, but the walk pushed these labels out of my mind.

IMG_1184

I wanted to document the dérive, but didn’t want to be in a documentation state of mind, so I just snapped photos without much consideration for what I was shooting.

IMG_1185

The space-for-women was inviting, but also seemed to be closed. It was some sort of library.

IMG_1187

We never would have found this old store on Yelp, but it was incredible. Lots of old science and medical devices and posters were inside! The dérive soon meant that we could go inside shops, and here is where my expectations of some sort of 1950s Paris that Guy Debord lived in quickly got dashed on the rocks. There were tons of distracting shops and restaurants everywhere. I guess that was the case 60 years ago as well, but I’m sure capitalist advertising techniques have advanced significantly since his time.

IMG_1190

We found some contemporary art galleries, too.

IMG_1192

Though the Jesus spinning on the turntable didn’t “work” for me.

IMG_1193

With two people, the dérive meant compromising. Sometimes I wanted to walk on one side of the street and Victoria would walk on the other. And when we made a decision, we had to pick one person’s “way” if we disagreed. I would have been curious to see where my own choices would have left me.

IMG_1194

Sure, you notice all sorts of details.

IMG_1195

And signs in French, mostly about parking rules.

IMG_1196

Interesting chimneys on buildings.

IMG_1198

You’re not supposed to stop to do errands, but we had to get some coffee capsules for the espresso machine in our room. And then I noticed the shrink-wrapped cheese.

IMG_1201

Wide boulevards with complex intersections. Surprisingly little traffic noise and congestion for a major city.

IMG_1202

Street signs and greenery.

IMG_1203

Plaques with names of historical figures and where they once lived.

IMG_1204

The smell of dog shit everywhere. Cigarettes, lots of cigarette smoke. I still hate getting the exhale of smoke in my face.

IMG_1206

Many apartment buildings with exactly the same window dressing on them. Why do only the 2nd-story windows have planters on them?

IMG_1207

Everywhere, ads for various services, including “Tantra Massage” on drain pipes.

IMG_1209

A giant old wooden door with intricate carvings.

IMG_1210

An old church interspersed amongst the apartment buildings.

IMG_1211

Odd urban compositions.

IMG_1213

A time portal to the year 1858.

IMG_1214

Bubble windows.

IMG_1215

Ah, the iron work.

IMG_1217

Gold leafing shop. Isn’t it dangerous to leave this in the window for potential thievery?

IMG_1218

Real estate ads everywhere. Prices are comparable to San Francisco.

IMG_1219

French flags outside what looks like government buildings.

IMG_1223

Lots of small dogs and apparently it’s okay to bring them into the restaurant with you.

IMG_1222

Sign for a movie theater…or something else.

IMG_1226

The most amazing air vent I’ve ever seen.

IMG_1227

Reserving your parking spot with trash.

IMG_1228

The stop sign figurine is fatter than the walk sign figurine.

IMG_1229

Goats in a park.

IMG_1230

One cannot escape the Eiffel Tower as a point of orientation.

IMG_1231

Bodily functions rule in the end. The toilets are free, but the lines are long.

IMG_1232

Make Art, Not Landfill

This Thursday (June 8, 2015) will be the opening of Make Art, Not Landfill, which marks the 25th Anniversary of the Recology Artists in Residence program. If you are in San Francisco, you should go to the show.

I first heard about the program in the late 1990s. In 2010, I saw the 20th Anniversary show, and later that year, applied and was accepted. I started my residency in February 2011. During this time, I made a series called “2049” — where I played the role of a prospector from the year 2049, who was mining the dump for resources to construct “Imaginary Devices” to help me survive.

skl_051811_050

These included items such as the Sniffer, the 2049 Hotline, the Universal Mailbox, Reality Simulator and Infinite Power. Each one was accompanied by a blueprint with imaginary symbols on it.

skl_051711_053_eq

Using these scavenged items, I built a complex narrative around some sort of future collapse. The work was odd, funny and touched veins of consumption for many people. Dorothy Santos did a writeup for Asterisk Magazine on the 2049 Series, which captured some of the feelings evoked by the sculptures, paintings and videos.

skl_051711_047_eq

Part of the deal with being an artist-in-residence at The Dump is that they get to keep one of your artworks. And exhibitions like this are exactly the reason why. The good folks at Recology put on shows, featuring work from their program. The artwork that they elected to retain was the Universal Mailbox (below), which will be in tomorrow’s show.

I constructed the Universal Mailbox from a discarded UPS keypad, scrap wood, a found satellite dish and dryer hose. I found the paint at the dump as well.

skl_051811_018

I used a similar technique for the 2049 Hotline, and during the opening, friends of mine played the role of “emissaries from the year 2049”, who would talk to exhibit-goers on the phone. Their only directive was to stay in character — they had to be from the future, but the environment they imagined could be anything they wanted.

skl_051711_003

The artwork later traveled to the New York Hall of Science for their Regeneration show (walkthrough below).

https://vimeo.com/52361064

This was a one-way mission for many of my sculptures, as they were fragile to begin with and 4 months at an Interactive Science Museum decimated the work. I knew this would happen. I always viewed the sculptures as temporary. I was even able to save some money on shipping costs. The artwork, after all, came from the dump!

skl_051811_001_prs

The blueprints survived, as well as rebuilt versions of the Universal Mailbox and the 2049 Hotline, which I will continue to exhibit. The 2049 project and my 4 months at the dump were a lesson in attachment to material things, which flow from hands to hands and eventually to landfill and, hopefully, sometimes, to art.

Water Works, NPR and Imagination

I recently achieved one of my life goals. I was on NPR!

The article, “Artists In Residence Give High-Tech Projects A Human Touch” discusses my Water Works* project as well as artwork by Laura Devendorf, and more generally, the artist-in-residence program at Autodesk.

sewer-works

“Water Works” 3D-printed Sewer Map in 3D printer at Autodesk

The production quality and caliber of the reporting are high. It’s NPR, after all. But what makes this piece important is that it talks about the value of artists, because they are the ones who infuse imagination into culture. The reporter, Laura Sydell, did a fantastic job of condensing this thought into a 6-minute radio program.

Arts funding has been cut out of many government programs, at least in the United States. And education curricula are increasingly teaching engineering and technology over the humanities. But without the fine arts and teaching actual creativity (and not just startup strategies), how can we, as a society, be truly creative?

Well, that’s what this article suggests. And specifically, that corporations such as Autodesk, will benefit from having artists in their facilities.

Perhaps one problem is that “imagination” is not quantifiable. We have the ability to measure so much: financial impact, number of clicks, test scores and more, but creativity and imagination, not so much. These are — at least to date — aspects of our culture that we cannot track on our phones or run web analytics on.

So, embracing imagination means embracing uncertainty, which is an existential problem that technology will have to cope with along the way.

waterwork_in_lobby

“Water Works” installed in the Autodesk lobby

At the end of the article, the reporter talks about Xerox Parc of the 1970s, which had a thriving artist-in-residence program. Early computer technology was filled with imagination, which is why this time was ripe with technology and excitement.

This is close to my heart. My father, Gary Kildall, was a key computer scientist back in the 1970s. His passions when he was in school were mathematics and art. By the time I was a kid, he was no longer drawing or working in the wood shop, but instead was designing computer architectures that defined the personal computer. He passed away in 1994, but I often wish he could see the kind of work I’m doing with art + technology now.

gary-kildall

Gary Kildall on television, examining computer hardware, circa 1981

* Water Works was part of Creative Code Fellowship in 2014 with support from Gray Area, Stamen Design and Autodesk.

EEG Dinner Party @ SXSW

I’m experimenting with a new model for sustainable art practice: leveraging the intellectual property from my technology-infused artworks into lucrative contracts. And why not? Artists are creative engines and deserve to be compensated.

Teaching is how many of my ilk get their income and every professor I’ve talked to about the university-academia track constantly moans about the silo-like environment, the petty politics, the drudgery of the adjunct lifestyle and the low pay. They are overworked and burdened by administration. No thanks.

The other option is full-time work. Recently (2012-13), I was on full-time staff at the Exploratorium as a New Media Exhibit Developer. I love the people, the DIY shop environment and the mission of the organization. It was here that I fully re-engaged with my software coding practice and learned some of the basics of data visualization. But ultimately, a full-time job meant that I wasn’t making my own artwork. My creative spirit was dying. I couldn’t let this happen, so when my fixed-term position ended, I decided not to pursue full-time employment. I now work with the Explo on selected part-time projects.

This year, I started an LLC, and in January I landed my first contract job: the software coding, technical design and visitor interaction for a project called “EEG Dinner Party”, part of a larger installation for General Electric called the “GE BBQ Research Center”, presented at SXSW in Austin.

ge-bbq-research-center
The folks I directly worked with were Sheet Metal Alchemist (Lara and Sean, who run the company, below) — they are a fantastic company that builds custom-fabrication solutions. They helped General Electric produce this interactive experience for SXSW, which featured a giant BBQ smoker with sensors and the EEG Dinner Party, the portion I was working on.

eegdinnerparty-17

My “intellectual property” was my artwork, After Thought (2009), which I made while an artist-in-residence at Eyebeam Art + Technology Center in New York. This is a portable personality testing kit using EEG brainwaves and flashcards, where I generate a personal video that expresses your “true” personality. I dressed in a lab coat and directed viewers in a short, 5-10 minute experiment with technology and EEG testing.

afterthought_main-1024x683

When the folks at Sheet Metal Alchemist (SMA) contacted me about doing the EEG work, I was confident that I could transform the ideas behind this project, an interactive experience, into one that would work for SXSW and General Electric.

From the get-go, I knew this wasn’t my art project and that I wouldn’t have my usual creative control. For one, General Electric had a very specific message: “Your Brain on BBQ”, and the entire SXSW site was designed as a research lab of sorts. It was a promotional and branding engine for GE, who provided free meat and beer for the event.

The work that SMA did was just a portion, albeit the attractor (the smoker) and the high-tech demo (EEG) of what was going on, but GE also had videos, displays in vitrines, speakers, DJs and a mix-your-own-BBQ-sauce station. New Creatures were the ones who put the entire event together (check out the video at the end of this post).

The irony: I don’t even eat meat.

eegdinnerparty-20

I trod the delicate balance of doing client work combined with my own artistic and technical designs. While I don’t work for GE, nor is their message mine, we were all in it for a temporary goal: to produce a successful event. The end result was an odd compromise of social messaging, technology and visitor experience, which ended up being a very successful installation at the event.

The concept was that we would conduct a series of “dinner parties” (actually held during the day) where two tables of four people each would sit down and eat a 5-minute “meal”. We would track their brainwaves in real time and generate a graph showing a composite index of what they were experiencing.

All of the event staff dressed in lab coats. Here I am with the two-monitor display, wearing the EEG headset we chose: the Muse headset, which, after a lot of research, beat the pants off its competitors Neurosky and Emotiv for comfort and its developer API.

eegdinnerparty-19

The technical setup took a while to figure out, but I finally settled upon this system, which was very stable. Each of the eight headsets was paired to a cheap Android tablet. The tablet then streamed the EEG data to two separate Processing applications, one for each table, via Open Sound Control (OSC).

eegdinnerparty-18

The tablet software that I wrote was based on some of the Android sample code from Muse and would show useful bits of information like the battery life and connection status for the 4 headset sensors. Also check out the “Touching Forehead” value. This simple on/off was invaluable and would let us know if the headset was actually on someone’s head. This way, I could run tables of just 1 person or all 4 people at a time.
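On the receiving side, here is a minimal sketch of how a Processing application can listen for those OSC messages with the oscP5 library; the port number and address patterns below are assumptions for illustration, not the exact ones used at the event.

```java
// Hypothetical OSC receiver: one slot per headset at a table.
import oscP5.*;
import netP5.*;

OscP5 osc;
float[] alphaByHeadset = new float[4];

void setup() {
  size(800, 400);
  osc = new OscP5(this, 5001);   // assumed port the tablets stream to
}

void draw() {
  background(255);               // the real app graphed the composite index here
}

void oscEvent(OscMessage msg) {
  // assumed pattern, e.g. "/table_a/1/alpha" -> headset 1, alpha band value
  String[] parts = split(msg.addrPattern(), '/');
  int headset = int(parts[2]) - 1;
  if (parts[3].equals("alpha")) {
    alphaByHeadset[headset] = msg.get(0).floatValue();
  }
}
```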

eegdinnerparty-13

Each headset was assigned a separate graph color and icon. My software then graphed the real-time composite brainwave index over the course of 5 minutes. The EEG signals are alpha, beta, delta, gamma and theta waves. But showing all of these would be way too much information, so I produced a composite value of all 5, weighting certain waves such as beta and theta (stress and meditation) more than others such as alpha (sleep).
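As a sketch, one way to collapse the five band powers into a single index looks like this; the specific weights are illustrative, not the values used at the event.

```java
// Weighted composite of the five EEG bands, emphasizing beta (stress) and
// theta (meditation) over alpha (sleep); the weights are placeholder values.
float compositeIndex(float alpha, float beta, float delta,
                     float gamma, float theta) {
  float weighted = 0.10 * alpha
                 + 0.35 * beta
                 + 0.10 * delta
                 + 0.10 * gamma
                 + 0.35 * theta;
  return constrain(weighted, 0, 1);   // keep the graphed value between 0 and 1
}
```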

eegdinnerparty-6

We ran the installation for 3 days. We soon had an efficient setup for registration and social media. You would make a reservation ahead of time and a greeter would fill in the spots for empty tables.

eegdinnerparty-12

The next lab technician would have everyone digitally sign consent forms and ask for personal information such as name and Twitter handle.

eegdinnerparty-10

We soon had a reasonably-sized line.

eegdinnerparty-15

My job was to make sure the technology worked flawlessly. I would clean headsets, check the tablets and do any troubleshooting as necessary. Fortunately, the installation went off very smoothly. We had just one headset stop working on the 2nd day, and on the 3rd day a drunk guest knocked one of the tablets off the table, shattering it. Of course, we had backups.

After fitting all the guests with the headsets and making sure the connections worked, I’d pass them off to Sean, who talked about EEG signals and answered questions about what the installation was all about. After about 5 minutes, we had people sitting at tables.

eegdinnerparty-9

Then, they got served. Food, that is.

eegdinnerparty-1

Here is a piece of sausage from the smoker, some coleslaw and a bit of banana cream pudding.
eegdinnerparty-5

As folks ate, they watched their brainwaves graph in real-time.

eegdinnerparty-8

Each headset was marked with the corresponding color on the graph. One dot was for Table A and two for Table B.

eegdinnerparty-3

The guests got a kick out of it, that’s for sure.

eegdinnerparty-7

And while they consumed food, the photographer shot closeups of people eating.

eegdinnerparty-14

If you chose to be at the EEG Dinner Party, you certainly had to have no fear of the media.

eegdinnerparty-16

Then, the social media team would do a hand-tracing of the graph and send out an animated image via Twitter, like these.

Screen Shot 2015-03-18 at 11.07.56 PM
Screen Shot 2015-03-18 at 11.06.58 PM
Screen Shot 2015-03-18 at 11.07.33 PM
Of course, they ended up getting retweeted.

Screen Shot 2015-03-18 at 11.07.11 PM

And we had some celebrities! Here is Questlove.
questlove

My concluding thoughts on artwork-as-IP: it’s solid paid work. My billable rate is at least 3 times higher than any non-profit work that I do, which translates to a more sustainable art practice. My coding skills got sharper — this was my first Android application. I didn’t feel like I had to dial in the fine creativity and was more of a tech lead on the project. So, overall a success, and I’m hoping I can do some future paid gigs with my technology-based artwork.

 

*As promised, here is the Hot Wheels Double Dare project, produced by New Creatures.

Panned by 7×7!

“a massive orgy of sugar cubes”…When my artwork gets denigrated like this, I almost always laugh.

My skin isn’t extra-thick, but after the Wikipedia Art project, where I got called a “troll” by Jimmy Wales (in the days before ‘trolling’ was common parlance), I always find humor in the insult.

Press, 7x7


In this case, it is my Data Crystals project, which has been called “data popcorn” by my friends. Orgiastic sugar cubes? I’ll take it.

Press, 7x7 (2)

Producing Art via 3D printing

Let’s not get too excited until the reviews come out, but it’s always nice to receive some advance press coverage.

Screen Shot 2015-03-30 at 10.04.17 PM

For this upcoming show at the Peninsula Museum of Art in Burlingame, I will be presenting my Data Crystals artwork. These have been written about extensively in the press, but not yet shown in an exhibition. That’s how it works sometimes.

datacrystals_med_shot

Exhibition Details:

What: “3D Printing: The Radical Shift”
When: April 26 through June 28
Hours: 11 a.m. to 5 p.m. Wednesdays through Sundays
Opening reception: 1-2 p.m. (members only), 2 p.m. – 4 p.m. (general public) April 26
Where: Peninsula Museum of Art, 1777 California Drive, Burlingame

Artist Talk @ Plug-in

Tonight, Victoria Scott and I gave a solid talk at Plug-In Gallery in Winnipeg, with support from Erika Lincoln and the Winnipeg Arts Council.

Here I am with an old friend, Ken Gregory: artist, hardware hacker and kinetic sculptor of many decades. It was great to see him again after nearly 5 years.

kenandscott
I co-presented with Victoria, who showed some of her own work as well as some of our collaborative work. We also introduced our ReFILL workshop, which starts tomorrow (!).

vic
Ken’s artwork is much better than his photography skills. Here, I am partially cut off. Hey, this happens sometimes. I’ll publish it anyhow.

Otherwise, the talk went great. We got a “Winnipeg reception”, which meant that folks seemed very interested — no cell phone distractions — but at the same time, hardly any questions, either. The feedback was that folks were “reserved”. Ah, welcome to Canada where people are, well…perhaps more genuine.

scott_pic

I <3 Classroom Artist Talks

Here’s my dirty secret. If you pay me a small stipend, I will come to your class and talk about my artwork. It’s one of my favorite things to do.

Last week, it was Jenny Odell’s class at the San Francisco Art Institute: Probing Social Networks. Her work is smart and I’ve been a fan, so perhaps it’s the case of the mutual admiration society. The two of us finally met in person at an opening at Recology San Francisco, where I was once an artist-in-residence (2011) and where she will soon spend some time digging through trash.

IMG_0867

My “playlist” covered more of the internet-art projects with some discussion of imaginary objects and virtual data:

No Matter (2008)
Second Front (2006-)
Wikipedia Art (2009)
Tweets in Space (2012)
Playing Duchamp (2009)
Data Crystals (2014)
Water Works (2014)
EquityBot (2014)

The classroom talks are relatively easy to do. Very little prep is required since I’ve spoken about all these projects oodles of times. I do these talks mostly because I remember so many of the artists who came through my MFA grad program, and each and every one of them helped me develop my art practice. I want to return the favor.

With a high-level class like this, you always get some good questions. The one project that the students seemed most engaged by was EquityBot, which was both surprising, since it’s a stock-investment algorithm, and inspiring, since it’s my latest project.

IMG_0866

Water Works, Google Translated

My Water Works data-visualization was just featured in MetaTrend Journal (“Big Datification”, Volume 63, March 2015). It’s a subscription model, so you can’t read the article, plus it’s in Korean, which means I definitely can’t read it.

Screen Shot 2015-03-05 at 8.39.53 AM

 

I did get some partial text emailed to me from the organization and ran it through Google Translate, which gave me this paragraph:

Water Works project is implemented as a map to visualize 3D printing coming drainage and sewer systems of San Francisco . This is a project of visual artist Scott Kjeldahl data . San Francisco 170 water tanks visualize dozen water tank location (San Francisco Cisterns), 3 million , and visualize data points sewers activity (Sewer Works) and was made ??up of 67 of the most efficient virtual hydrant (Imaginary Drinking Hydrants) Map . Pipes, hydrants , circulation and the supply of urban waterways flow through the location and construction of a sewage treatment plant can see at a glance.

I like it! Once again Google Translate impresses with the odd results and the mangling of phrases.

ReFILL Workshop in Winnipeg

On March 27th & 28th (2015), Victoria Scott and I will be conducting a workshop in Winnipeg around the “libricide” in Canada’s DFO libraries. The full article on their closures is here.

Screen Shot 2015-02-26 at 8.30.29 AM

Here’s the description:
On March 27th & 28th, 2015, San Francisco-based artists Victoria Scott and Scott Kildall will be leading a 2-day, hands-on workshop to physically re-imagine and re-materialize some of the lost titles of the Freshwater Institute Library. We will discuss, imagine, draw, map and construct while listening to soothing water sounds and watching water-related videos. We will also discuss methodologies of data visualization and create a map which tracks the migration of these materials from publicly-funded resource into private hands and landfill.

Our project blog will always tell more!

Death and Language

This Thursday at 6pm at Root Division, I will be part of an evening of conversation and performance.

The short talk I’ll be giving will be called Death and Language.

In 1972, my father, Gary Kildall, wrote the first high-level computer language for Intel’s microprocessors. This language, called PL/M, was instrumental in the development of the personal computer and is now extinct. At around the same time, the last fluent speaker of the Tillamook language also died, thus extinguishing this natural language. What survives of the Tillamook language are audio recordings taken from 1965-1972. With digital preservation techniques as the backdrop, I will entertain questions regarding the death of both natural and machine languages.

EndangeredLanguagePanel

Pier 9 Artist Profile

The good folks at Pier 9, Autodesk just released this video profile of me and my Water Works project. I’m especially happy with Charlie Nordstrom’s excellent videography work, and I even got the chance to help with the editing of the video itself.

Yes, in a previous life I used to edit video documentaries with the now-defunct Sleeping Giant Video and the IndyMedia Center.

But now, I’m more interested in algorithms, data and sculpture.

Wikipedia and the Politics of Openness by Nathaniel Tkacz

I first met Nathaniel Tkacz in India and then later in Amsterdam for a series of the Wikipedia CPOV (Critical Point of View) conferences. At these two events, my colleague Nathaniel Stern and I presented a talk, which later became a paper on our Wikipedia Art project.

Congratulations to Nathaniel Tkacz. He has just released his book, Wikipedia and the Politics of Openness, which is covered in this Times Higher Education article by Karen Shook.

Screen Shot 2015-01-05 at 8.01.20 PM


Talk at David Baker Architects

Yesterday, I gave a brief artist talk at David Baker Architects, which is a local San Francisco architecture firm with numerous sustainability and innovation design awards. Here I am with David Baker himself, who is sporting a stylish scarf. I want.

me_and_baker

It was a casual lunchtime talk with about 15 or 20 people in attendance. An important part of my art practice is talking to organizations that both work outside of the art world and are doing amazing work. I want to share ideas and discuss compelling art ideas with a larger audience.

lunch_audience

Here I am, showing my Data Crystals work and explaining the clustering algorithms at work. I later talked about mapping the water infrastructure with my Water Works project.

From this architecture firm, I got positive responses about data and design, along with in-depth knowledge about urban infrastructure. I hope to continue the code-to-3D-print work with these projects. More proposals are in the works.

scott_gesture

 

Human Brain Project @ Impakt Festival

I spent my time at the five-day long Impakt Festival watching screenings, listening to talks, interacting with artworks and making plenty of connections with both new and old friends. I’m still digesting the deluge of aesthetic approaches, subjective responses and formal interpretations of the theme of the festival, “Soft Machines: Where the Optimized Human Meets Artificial Empathy”.

imapkt

It’s impossible to summarize everything I’ve seen. While there were a few duds, like any festival, the majority of what I experienced was high-caliber work. Topping my “best of” list were the “Algorithmic Theater” talk by Annie Dorsen, the Omer Fast film “5000 Feet is the Best”, the Hohokum video game by Richard Hogg and a captivating talk on the Human Brain Project.

vid-game

For the sake of brevity, I’m going to cover just the presentation on the Human Brain Project (HBP). Even though this is a science project, what impressed me were the similarities in methodology to many art projects. HBP has a simple directive: to map the human brain. However, the process is highly experimental and the results are uncertain.

HBP is largely EU-funded and was awarded to a consortium of researchers through a competition with 26 different organizations. The total funding over the course of the 10-year project is about 1 billion Euros, which is a hefty price tag for a research project. The eventual goal, likely well after the 10-year period, will be to actualize a simulated human brain on a computer — an impossibly ambitious project given the state of technology in 2014.

I arrived skeptical, well aware that technology projects often make empty promises when predicting the future. Marc-Oliver Gewaltig, who is one of the scientists on HBP, presented the analogy of 15th-century mapmaking. In 1492, Martin Behaim collected as many known maps of the world as he could, then produced the Erdapfel, a map of the known world at the time. He knew that the work was incomplete. There were plenty of known places but also many uncertain geographical areas as well. The Erdapfel didn’t even include any of the Americas, since it was created before the return of Columbus from his first voyage. But the impressive part was that the Erdapfel was a paradigm shift, which synthesized all geographical knowledge into a single system. This map would then be a stepping stone for future maps.

Carte_behaim

According to Gewaltig, the mission of the HBP will follow a similar trajectory and aggregate known brain research into a unified, but flawed, model. He fully recognizes that the directive of the project, a fully working synthetic human brain, is impossible at this point. The computing power isn’t available yet, nor will it likely be there in 10 years.

The human brain is filled with neurons and synapses. The interconnections are everywhere with very little empty space in a brain. Because of this complexity, the HBP project is beginning by trying to simulate a mouse brain, which is within technology’s grasp in the next 10 years.

brain-map

The rough process is to analyze physical slices of a mouse brain rather than chemical and electrical signals. From this information, they can construct a 3D model of a mouse brain itself using advanced software. For those of you who are familiar with 3D modeling, can you imagine the polygon count?

Gewaltig also made a distinction in their approach from science-fiction style speculation. When thinking about artificial intelligence, we often think of high-level cognitive functions: reasoning, memory and emotional intelligence. But, the brain also handles numerous non-cognitive functions: regulating muscles, breathing, hormones, etc. For this reason, HBP is creating a physical model of a mouse, where it will eventually interact with a simulated world. Without a body, you cannot have a simulated brain, despite what many films about AI suggest.

virtualmouse

While I still have doubts about the efficacy of the Human Brain Project, I left impressed. The goal is not a successful simulated brain but instead to experiment and push the boundaries of the technology as much as possible. Computing power will catch up some day, and this project will help push future research in the proper direction. The results will be open data available to other scientists. Is that something we can really argue against?


Impakt Festival: Opening Night

The Impakt Festival officially kicked off this Wednesday evening, and the first event was the exhibition opening at Foto Dok, curated by Alexander Benenson.

The works in the show circled around the theme of Soft Machines, which Impakt describes as “Where the Optimized Human Meets Artificial Empathy”.

Of the many powerful works in the show, my favorite was the 22-minute video, “Hyper Links or it Didn’t Happen,” by Cécile B. Evans. A failed CGI rendering of Philip Seymour Hoffman narrates fragmented stories of connection, exile and death. At one point, we see an “invisible woman” who lives on a beach and whose lover stays with her, after quitting a well-paying job. The video intercuts moments of odd narration by a Hoffman-AI. Spam bots and other digital entities surface and disappear. None of it makes complete sense, yet it somehow works and is absolutely riveting.

pseymore

After the exhibition opening, the crowd moved to Theater Kikker, where Michael Bell-Smith presented a talk/performance titled “99 Computer Jokes”. He spared the audience by telling us only one actual computer joke. Instead, he embarked on a discursive journey, covering topics of humor, glitch, skeuomorphs, repurposing technology and much more. Bell-Smith spoke with a voice of detached authority and made lateral connections to ideas from a multitude of places and spaces.

michael

In the first section of his talk, he described how successful art needs to have a certain amount of information — not too much, not too little. He cited the words of arts curator Anthony Huberman:

“In art, what matters is curiosity, which in many ways is the currency of art. Whether we understand an artwork or not, what helps it succeed is the persistence with which it makes us curious. Art sparks and maintains curiosities, thereby enlivening imaginations, jumpstarting critical and independent thinking, creating departures from the familiar, the conventional, the known. An artwork creates a horizon: its viewer perceives it but remains necessarily distant from it. The aesthetic experience is always one of speculation, approximation and departure. It is located in the distance that exists between art and life.”

In the present time, where faith in technology has vastly overshadowed faith in art, these words are hyper-relevant. The Evans video accomplishes this, resting in the valley between the known and the uncertain. We recognize Hoffman and he is present, but in a semi-understandable, mutated form. We know that the real Philip Seymour Hoffman is dead. His ascension into a virtual space is fragmented and impure. The video suggests that traversing the membrane from the real into the screen space will forever distort the original. It triggers the imagination. It sticks with us in a way that stories do not.

What Bell-Smith alludes to in his talk is that the idea of combining the human and the machine won’t work…as expected. He sidesteps any firm conclusions. His performance is like the artwork that Huberman describes: it never reaches resolution and opens up a space for curiosity.

Later, he displayed slides of Photoshop disasters, a sort of “Where’s Waldo” of Photoshop errata. Microseconds after viewing the advertisement below, we know something is off. The image triggers an uncanny response. A moment later, we can name the problem: the model has only one leg. Primal perception precedes a categorical response. Finally, everyone laughs together at the idiosyncrasy that someone let into the public sphere.

leg

After Bell-Smith’s talk we had a chance for eating-and-drinking. Hats off to the Impakt organization. I know I’m biased since I’m an artist-in-residence at Impakt during the festival itself, but they certainly know how to make everyone feel warm and cozy.
gala

Next up was the keynote speaker, Bruce Sterling, who is a science fiction writer and cultural commentator. He boldly took the stage without a laptop, and so the audience had no slides or videos to bolster his arguments. He assumed the role of naysayer, deconstructing the very theme of the festival: Where the Optimized Human Meets Artificial Empathy. Defining the terms “cognition” (human) vs “computation” (machine), he took the stance that the merging of the two was a categorical error in thinking. His example: birds can fly and drones can fly, but this doesn’t mean that drones can lay eggs. My mind raced, thinking that someday drone aircraft might reproduce. Would that be inconceivable?

Sterling tackled the notion of the Optimized Human with an analogy to Dostoyevsky's Crime and Punishment. For those of you who don't recall your required high school reading, the main character of the book is Raskolnikov, who is both brilliant and desperate for money. He carefully plans and then kills a morally bankrupt pawnbroker for her cash. The philosophical question that Dostoyevsky poses is the idea of a superhuman: select individuals who are exempt from the prescribed moral and legal code. Could the murder of a terrible person be a justifiable act? And would the person fit to judge this be someone so exceptionally bright that they have essentially left the rest of humanity behind?

In the book, the problem is that the social order gets disrupted. Raskolnikov's action introduces a deadly, unpredictable element into his city. With uncertainty about the law and who executes it, no one feels safe. At the conclusion of the novel, Raskolnikov ends up in exile, in a sort of moral purgatory.

The very notion of the "optimized human" has similar problems. If select people are somehow "upgraded" through cybernetics, gene therapies and other technological enhancements, what happens to the social order? Sterling spoke about marketing, but I see the greater problem as one of leveraged inequality. If a minority of improved humans have integrated themselves with some sort of techno-futuristic advantage, our society rapidly escalates the classic problem of the digital divide. The reality is that this has already started happening. The future is here.

bruce

Bruce Sterling concluded with the point that we need to pay attention to how technology is leveraged. His example, Apple's Siri system — albeit not a strong case of Artificial Empathy — is owned by a company with specific interests. When asked for the nearest gas station or a recipe for grilled chicken, Siri "happily" responds. If you ask her how to remove the DRM encoding on a song in your iTunes library, Siri will be helpless. While I disagreed with a number of Sterling's points, what I do know is that I would hope for a non-predictive future for my Artificial Empathy machines.

The Impakt Festival continues through the weekend with the full schedule here.


EquityBot goes live!

During my time at Impakt as an artist-in-residence, I have been working on a new project called EquityBot, which is an online commission from Impakt. It fits well into the Soft Machines theme of the festival: where machines integrate with the soft, emotional world.

EquityBot exists entirely as a networked art or "net art" project, meaning that it lives in the "cloud" and has no physical form. For those of you who are Twitter users, you can follow it on Twitter: @equitybot

01_large

What is EquityBot? Many people have asked me that question.

EquityBot is a stock-trading algorithm that “invests” in emotions such as anger, joy, disgust and amazement. It relies on a classification system of twenty-four emotions, developed by psychologist and scholar, Robert Plutchik.

Plutchik-wheel.svg

How it works
During stock market hours, EquityBot continually tracks worldwide emotions on Twitter to gauge how people are feeling. In the simple data-visualization below, which is generated automatically by EquityBot, the larger circles indicate the more prominent emotions that people are Tweeting about.

At this point in time, just 1 hour after the stock market opened on October 28th, people were expressing emotions of disgust, interest and fear more prominently than others. During the course of the day, the emotions contained in Tweets continually shift in response to world events and many other unknown factors.

twitter_emotions

EquityBot then uses various statistical correlation techniques to find patterns that match changes in emotions on Twitter to fluctuations in stock prices. The details are thorny, so I'll skip the boring stuff. My time did involve a lot of work with scatterplots, which looked something like this.

correlation

Once EquityBot sees a viable pattern — for example, that "Google" is consistently correlated to "anger" and that anger is a trending emotion on Twitter — EquityBot will issue a BUY order on the stock.

Conversely, if Google is correlated to anger and the Tweets about anger are rapidly declining, EquityBot will issue a SELL order on the stock.
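To make the correlate-then-trade logic concrete, here is a minimal sketch in Python. It assumes aligned hourly emotion counts and stock prices have already been collected; the function name, thresholds and sample numbers are my own invented placeholders, not EquityBot's actual code.

```python
# A minimal sketch of the correlate-then-trade idea, not EquityBot's actual code.
# Assumes we already have aligned hourly series: emotion word counts and stock prices.
import numpy as np

def decide_trade(emotion_counts, stock_prices, corr_threshold=0.6, trend_window=6):
    """Return 'BUY', 'SELL' or 'HOLD' for one (emotion, stock) pairing."""
    emotions = np.asarray(emotion_counts, dtype=float)
    prices = np.asarray(stock_prices, dtype=float)

    # Correlate changes rather than raw levels, to avoid spurious trends.
    corr = np.corrcoef(np.diff(emotions), np.diff(prices))[0, 1]
    if np.isnan(corr) or abs(corr) < corr_threshold:
        return "HOLD"  # no viable pattern

    # Is the emotion trending up or down over the last few samples?
    recent = emotions[-trend_window:]
    trend = recent[-1] - recent[0]

    if trend > 0:
        return "BUY"   # correlated emotion is rising
    if trend < 0:
        return "SELL"  # correlated emotion is falling
    return "HOLD"

# Hypothetical usage with made-up numbers:
anger = [120, 135, 150, 170, 160, 180, 210, 230]
goog = [560.1, 561.0, 562.2, 563.5, 563.0, 564.1, 565.9, 567.2]
print(decide_trade(anger, goog))
```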

EquityBot runs a simulated investment account, seeded with $100,000 of imaginary money.

In my first few days of testing, EquityBot “lost” nearly $2000. This is why I’m not using real money!

Disclaimer: EquityBot is not a licensed financial advisor, so please don't follow its stock investment patterns.

account

The project treats human feelings as tradable commodities. It will track how "profitable" different emotions are over the course of months. As a social commentary, I propose a future scenario in which just about anything can be traded, including that which is ultimately human: the very emotions that separate us from a machine.

If a computer cannot be emotional, at the very least it can broker trades of emotions on a stock exchange.

affect_performance

As a networked artwork, EquityBot generates these simple data visualizations autonomously (they will get better, I promise).

Its Twitter account (@equitybot) serves as a performance vehicle, where the artwork "lives". All of these visualizations are also interactive on the EquityBot website: equitybot.org.

I don't know if there is a correlation between emotions in Tweets and stock prices. No one does. I am working with the hypothesis that there is some sort of pattern involved. We will see over time. The project goes "live" on October 29th, 2014, the day of the opening of the Impakt Festival, and I will let the first experiment run for 3 months to see what happens.

Feedback is always appreciated, you can find me, Scott Kildall, here at: @kildall.

 

Soft Machines and Deception

The Impakt Festival officially begins next Wednesday, but in the weeks prior to the event, Impakt has been hosting numerous talks, dinners and also a weekly “Movie Club,” which has been a social anchor for my time in Utrecht.

10437517_643169085789022_7756476391981345316_n

Every Tuesday, after a pizza dinner and drinks, an expert in the field of new media introduces a relatively recent film about machine intelligence, prompting questions that frame the larger issues of human-machine relations in the films. An American audience might be impatient with a 20-minute talk before a movie, but in the Netherlands, the audience has been engaged. Afterwards, many linger in conversations about the very theme of the festival: Soft Machines.

1625471_643169265789004_3958937439824009299_n

The films have included I, Robot, Transcendence, Her and the documentary Game Over: Kasparov and the Machine. They vary in quality, but with the introduction of the concepts ahead of time, even Transcendence, a thoroughly lackluster film, engrossed me.

The underlying question that we end up debating is: can machines be intelligent? This seems to be a simple yes or no question, which cleaves any group into either a technophilic pro-Singularity or curmudgeonly Luddite camp. It’s a binary trap, like the Star Trek debates between Spock and Bones. The question is far more compelling and complex.

The Turing test is often cited as the starting point for this question. For those of you who are unfamiliar with this thought experiment, it was developed by the British mathematician and computer scientist Alan Turing in a 1950 paper that asked the simple question: "Can machines think?"

The test goes like this: suppose you have someone at a computer terminal who is conversing with an entity by typing text back and forth, as we now regularly do with instant messaging. The entity at the other terminal is either a computer or a human, the identity of which is unknown to the computer user. The user can have a conversation and ask questions. If he or she cannot ascertain "human or machine" after about 5 minutes, then the machine passes the Turing test. It responds as a human would and can effectively "think".

turing_model

In 1990, the thought experiment became a reality with the Loebner Prize. Every year, various chatbots — algorithms which converse via text with a computer user — compete to try to fool humans in a setup that replicates this exact test. Some algorithms have come close, but to date, no computer has ever successfully won the prize.

eliza2

The story goes that Alan Turing was inspired by a popular party game of the era called the "Imitation Game", where a questioner would ask an interlocutor various questions. This intermediary would then relay the questions to a hidden person, who would answer via handwritten notes. The job of the questioner was to determine the gender of the unknown person. The hidden person would provide ambiguous answers. A question of "what is your favorite shade of lipstick" could be answered with "It depends on how I feel". The answer in this case is a dodge, as a 1950s man certainly wouldn't know the names of lipstick shades.

Both the Turing test and the Imitation Game hover around the act of deception. This technique, widely deployed in predator-prey relationships in nature, is ingrained in our biological systems. In the Loebner Prize competitions, there have even been instances where the humans and computers try to play with the judges, making statements like: "Sorry I am answering this slowly, I am running low on RAM".

It may sound odd, but the computer doesn't really know deception. Humans do. Every day we work with subtle cues of movement around social circles, flirtation with one another, exclusion from and inclusion into a group, and so on. These often rely on shades of deception: we say what we don't really mean and have agendas other than our stated goals. Politicians, business executives and others who occupy high rungs of social power know these techniques well. However, we all use them.

The artificial intelligence software that powers chatbots has evolved rapidly over the years. Natural language processing (NLP) is widely used across software industries. I had an informative lunch the other day in Amsterdam with a colleague of mine, Bruno Jakic of AI Applied, whom I met through the Affect Lab. Among other things, he is in the business of sentiment analysis, which helps, for example, determine whether a large mass of tweets indicates a positive or negative emotion. Bruno shared his methodology and working systems with me.

State-of-the-art sentiment analysis algorithms are generally effective, operating in the 75-85% range for identifying a "good" or "bad" feeling in a chunk of text such as a Tweet. Human consensus is in a similar range: apparently, a group of people cannot fully agree on how "good" or "bad" various Twitter messages are, so machines are coming close to being as effective as humans on a general scale.

The NLP algorithms deploy brute-force methods, crunching through millions of sentences using human-designed "classifiers" — rules that help determine what a sentence conveys. For example, an emoticon like a frown-face almost always indicates a bad feeling.

frown
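To give a feel for what such a hand-built classifier rule looks like, here is a minimal rule-based sketch; the marker lists and scoring are hypothetical illustrations, not Bruno's system or any production NLP library.

```python
# Minimal rule-based sentiment sketch (hypothetical rules, for illustration only).
NEGATIVE_MARKERS = {":(", ":-(", "awful", "terrible", "locked out", "worst"}
POSITIVE_MARKERS = {":)", ":-)", "great", "love", "awesome", "best"}

def crude_sentiment(text):
    """Return 'good', 'bad' or 'neutral' based on simple surface rules."""
    t = text.lower()
    score = 0
    score += sum(1 for m in POSITIVE_MARKERS if m in t)
    score -= sum(1 for m in NEGATIVE_MARKERS if m in t)
    if score > 0:
        return "good"
    if score < 0:
        return "bad"
    return "neutral"

print(crude_sentiment("Just got locked out of my own house :("))   # -> "bad"
print(crude_sentiment("Great, locked out of my own house again"))  # -> "neutral": the sarcasm slips past the rules
```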

Computers can figure this out because machine perception is millions of times faster than human perception. A machine can run through examples, rules and more, but it acts on logic alone. If NLP code generally works, where specifically does it fail?

Bruno pointed out that machines are generally incapable of figuring out whether someone is being sarcastic. Humans sense this immediately by intuitive reasoning. We know, for example, that getting locked out of your own house is bad, so if you write about it as if it were a good thing, the contradiction reads as obvious sarcasm. The context is what our "intuition" — our emotional brain — understands. It builds upon shared knowledge that we gather over many years.

sarcasm

The Movie Club films also tackle this issue of machine deception. At a critical moment in I, Robot, Sonny, the main robot character, deceives the "bad" AI software that is attacking the humans by pretending to hold a gun to one of the main "good" characters. It then winks at Will Smith (the protagonist) to let him know that he is tricking the evil AI machine. Sonny and Will Smith then cooperate, Hollywood-style, with guns blazing. Of course, they prevail in the end.

sony-wink

Sonny possesses a sophisticated Theory of Mind: an understanding of its own mental state as well as that of the other robots and Will Smith. It takes initiative and pretends to be on the side of the evil AI computer by taking an aggressive action. Earlier in the film, Sonny learned what winking signifies. It knows that the AI doesn't understand this, so the wink will be understood by Will Smith and not by the evil AI.

In Game Over: Kasparov and the Machine, which recasts the narrative of the Deep Blue vs. Kasparov chess matches, the Theory of Mind of the computer resurfaces. We know that Deep Blue won the match, a series of six games in 1997. But it is the infamous Game 2 that obsessed Kasparov. The computer played aggressively and more like a human than Kasparov had expected.

At move 45, Kasparov resigned, convinced that Deep Blue had outfoxed him that day. Deep Blue had responded in the best possible way to Kasparov's feints earlier in the game. Chess experts later discovered that Kasparov could have easily forced an honorable draw instead of resigning the game.

The computer appeared to have made a simple error. Kasparov was baffled and obsessed. How could the algorithm have failed on a simple move when it was so thoroughly strategic earlier in the game? It didn't make sense.

Kasparov felt like he was tricked into resigning. What he didn't consider was that when the algorithm didn't have enough time — since tournament chess games are run against a clock — to find the best-ranked move, it would choose randomly from a set of moves…much like a human would do in similar circumstances. The decision we humans make at that point is an emotional one. Inadvertently, the machine deceived Kasparov.

KASPAROV

I'm convinced that the ability to act deceptively is one necessary condition for machines to be "intelligent". Otherwise, they are simply code-crunchers. But there are other aspects as well, which I'm discovering and exploring during the Impakt Festival.

I will continue this line of thought on machine intelligence in future blog posts. I welcome any thoughts and comments on machine intelligence and deception. You can find me on Twitter: @kildall.


Data-Visualizing + Tweeting Sentiments

It's been a busy couple of weeks working on the EquityBot project, which will be ready for the upcoming Impakt Festival. Well, at least a functional prototype of this ongoing research project will be online for public consumption.

The good news is that the Twitter stream is now live. You can follow EquityBot here.

EquityBot now tweets images of data-visualizations on its own and is fully autonomous. I'm constantly surprised by, and a bit nervous about, its Tweets.

exstasy_sentiment

At the end of last week, I put together a basic data visualization using D3, which is a powerful Javascript data-visualization library.

Using code from Jim Vallandingham, I created, in just one evening, dynamically-generated bubble maps of Twitter sentiments as they arrive from EquityBot's own sentiment analysis engine.

I mapped the colors directly from the Plutchik wheel of emotions, which is why they are still a little wonky — the emotion of Grief, for instance, is unreadable. This will be fixed.

I did some screen captures and put them on my Facebook and Twitter feeds. I soon discovered that people were far more interested in images of the data visualizations than in plain text describing the emotions.

I was faced with a geeky problem: how do I get my Twitterbot to generate images of data visualizations built with D3, a front-end Javascript library? I figured it out eventually, after stepping into a few rabbit holes.

Screen Shot 2014-10-21 at 11.31.09 AM

I ended up using PhantomJS, the Selenium web driver and my own Python management code to solve the problem. The biggest hurdle was getting Google webfonts to render properly. Trust me, you don't want to know the details.
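For the curious, the core of that approach looks roughly like this sketch: drive PhantomJS through Selenium, load the page that runs the D3 code, wait, and save a screenshot. The URL, file names and wait time are placeholders; this is not my exact management code.

```python
# Minimal sketch: render a D3 page headlessly and save it as a PNG for tweeting.
# Assumes PhantomJS and the selenium package are installed; URL and paths are placeholders.
from selenium import webdriver
import time

driver = webdriver.PhantomJS()                    # headless WebKit browser
driver.set_window_size(1024, 768)                 # size of the captured image
driver.get("http://localhost:8000/bubbles.html")  # page that runs the D3 code

time.sleep(3)                                     # crude wait for D3 transitions and webfonts
driver.save_screenshot("sentiment_bubbles.png")   # image ready for posting to Twitter
driver.quit()
```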

Screen Shot 2014-10-21 at 11.31.29 AM

 

But I’m happy with the results. EquityBot will now move to other Tweetable data-visualizations such as its own simulated bank account, stock-correlations and sentiments-stock pairings.


Blueprint for EquityBot

For my latest project, EquityBot, I’ve been researching, building and writing code during my 2 month residency at Impakt Works in Utrecht (Netherlands).

EquityBot is going through its final testing cycles before a public announcement on Twitter. For those of you who are Bot fans, I'll go ahead and slip you EquityBot's Twitter feed: https://twitter.com/equitybot

The initial code-work has involved configuration of a back-end server that does many things, including “capturing” Twitter sentiments, tracking fluctuations in the stock market and running correlation algorithms.

I know, I know, it sounds boring. Often it is. After all, the result of many hours of work: a series of well-formatted JSON files. Blah.

But it’s like building city infrastructure: now that I have the EquityBot Server more or less working, it’s been incredibly reliable, cheap and customizable. It can act as a Twitterbot, a data server and a data visualization engine using D3.

This type of programming is yet another skill in my Creative Coding arsenal. It consists mostly of Python code that lives on a Linode server, a low-cost alternative to options like HostGator or GoDaddy, which incur higher monthly costs. And there's a geeky sense of satisfaction in creating a well-oiled software engine.

The EquityBot Server looks like a jumble of Python and PHP scripts. I cannot possibly explain it in excruciating detail, nor would anyone in their right mind want to wade through the technicalities.

Instead, I wrote up a blueprint for this project.

ebot_server_diagram_v1

For those of you who are familiar with my art projects, this style of blueprint may look familiar. I adapted the design from my 2049 Series, which are laser-etched and painted blueprints of imaginary devices. I made those while an artist-in-residence at Recology San Francisco in 2011.

sniffer-blue

Water Works Final Report

Overview
Water Works is a project that I created for the Creative Code Fellowship in the Summer of 2014 with the combined support of Stamen Design, Autodesk and Gray Area.

Water Works is a 3D data visualization and mapping of the water infrastructure of San Francisco. The project is a relational investigation: I have been playing the role of a "Water Detective, Data Miner" and sifting through the web for water data. The results of this 3-month investigation are three large-scale 3D-printed sculptures, each paired with an interactive web map.

The final website lives here: http://www.waterworks.io/

sewer

Stamen Design is a small design studio that creates sophisticated mapping and data-visualization projects for the web. Combined with the amazing physical fabrication space at Pier 9 at Autodesk, this was a perfect combination of collaborative players for my own focus: writing algorithms that transform datasets into 3D sculptures and installations. I split my time between the two organizations and both were amazing, creative environments.

Gray Area provided the project guidance and coursework: 12 hours a week of Creative Code Immersive classes in topics ranging from Arduino to Node.js. About half of the classes were review for me, e.g. OpenFrameworks, Processing, Arduino, but Javascript, Node and more were completely new.

This report is heavy on images, partially because I want to document the entire process of how I created these 3D mapping-visualizations. As far as I know, I’m the first person who has undertaken this creative process: from mining city data to 3D-printing the infrastructure, which is geo-located on a physical map.

My directive from the start of the Water Works project was to somehow make visible what is invisible. This simple message is one that I learned while I was working as a New Media Exhibit Developer at the Exploratorium (2012-2013). It also aligns with the work that Stamen Design creates and so I was pleased to be working with this organization.

Starting Point
Underneath our feet is an urban circulatory system that delivers water to our households, removes it from our toilets, provides a reliable supply for firefighting and ultimately purifies the water and directs it into the bay and ocean. Most of us don't think about this amazing system because we don't have to — it simply works.

Like many others, I’m concerned about the California drought, which many climatologists think will persist for the next decade. I am also a committed urban-dweller and want to see the city I live in improve its infrastructure as it serves an expanding population. Finally, I undertook this project in order to celebrate infrastructure and to help make others aware of the benefits of city government.

drought

On a more personal note, I am fascinated by urban architecture. As I walk through the city, I constantly notice the markings on manholes, the various sign posts and the different types of fire hydrants.

cistern_manhole

About a year ago, while I was working at the Exploratorium, I had several in-depth conversations with employees at the Department of Public Works about the possibility of mapping the sewer system. We discussed possibilities of producing a sewer map for the museum. For various reasons, the maps never came to fruition, but the data still rattled around my brain. All of the pipe and manhole data still existed. It was waiting to be mapped.

Three Water Systems of San Francisco
When I was awarded this Creative Code Fellowship in June of this year, I knew very little about the San Francisco water system. I soon learned that the city has three separate sets of pipes that comprise the water infrastructure of San Francisco.

(1) Potable Water System — this is our drinking water, which comes from Hetch Hetchy. Some fire hydrants use this system.

(2) Sewer System — San Francisco has a combined stormwater and wastewater system, which is nearly entirely gravity-fed. The water gets treated at one of the wastewater treatment plants. San Francisco is the only coastal California city with a combined system.

(3) Auxiliary Water Supply System (AWSS) — this is a separate system just for emergency firefighting. It was built in the years immediately following the 1906 Earthquake, when many of the water mains collapsed and most of the city proper was destroyed by fires. It is fed from the Twin Peaks Reservoir. San Francisco is the only city in the US that has such a system.

water_treatment

Follow the Data, Find the Story
From my previous work on Data Crystals, I learned that you have to work with the data you can actually get, not the data you want. In the first month of the Water Works project, this involved constant research and culling.

I worked with various tables of sewer data that the DPW provided to me. I discovered that the city had about 30,000 nodes (underground chambers with manholes) with 30,000 connections (pipes). This was an incredible dataset and it needed a lot of pruning, cleaning and other work, which I soon discovered was a daunting task.

Lesson #1: Contrary to popular belief, data is never clean.

What else was available? It was hard to say at first. I sent emails to the SFPUC asking for the locations of the drinking water pipes — just like what I had for the sewer data. I thought this would be incredible to represent. I approached the project with a certain naivety.

Of course, I shouldn't have been surprised that this would be a security concern, but in no uncertain terms I received a resounding no from the SFPUC. This made sense, but it left me with only one dataset.

Given that there were three water systems, it made sense to create three 3D-printed visualizations, one for each system. If not the pipes, what would I use?

During one of my late nights of research, I found a good story: the San Francisco underground cisterns. According to various blogs, there are about 170 of these, usually marked by a brick circle in the street. What is underneath?

cistern_circle

In the 1850s, after a series of Great Fires tore through San Francisco, 23 cisterns* were built. These smaller cisterns were all in the city proper, at that time between Telegraph Hill and Rincon Hill. They weren't connected to any other pipes; the fire department intended to use them as a backup water supply in case the water mains broke.

They languished for decades. Many people thought they should be removed, especially after incidents like the 1868 Cistern Gas Explosion.

However, after the 1906 Earthquake, fires once again decimated the city. Many water mains broke and the neglected cisterns helped save portions of the city.

Afterward, the city passed a $5,200,000 bond and began building the AWSS in 1908. This included the construction of many new cisterns and the rehabilitation of other, neglected ones. Most of the new cisterns could hold 75,000 gallons of water. The largest one is underneath the Civic Center and has a capacity of 243,000 gallons.

The original ones, presumably rebuilt, hold much less, anywhere from 15,000 to 50,000 gallons.

* from the various reports I’ve read, this number varies.

old-cisternsmap

I searched for a map of all the cisterns, which proved difficult to find. There was no online map anywhere. I read that since these were part of the AWSS, they were refilled by the fire department. I soon began searching for fire department data and found a set of intersections, along with the volume of each cistern. The source was the SFFD Water Supplies Manual.

cisterdata

The story of the San Francisco Cisterns was to be my first of three stories in this project.

Autodesk also runs Instructables, a DIY, how-to-make-things website. One of the Instructables details the mapping process, so if you want details, have a look at this Instructable.

To make this conversion happen, I wrote Python code that called the Google Maps API to convert the intersections into lat/longs, as well as to get elevation data. When I had asked people how to do this, I received many GitHub links. Most of them were buggy or poorly documented. I ended up writing mine from scratch.

Lesson #2: Because GitHub is both a backup system for source code and an open-source sharing platform, many GitHub projects are confusing or useless.

That being said, here is my GitHub repo: SF Geocoder, which does this conversion. Caveat emptor.
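To give a sense of what the conversion involves, here is a minimal sketch of geocoding one intersection and fetching its elevation with the Google Maps web APIs via the requests library. It is not the SF Geocoder code itself, and the API key and intersection string are placeholders.

```python
# Minimal sketch: intersection string -> lat/long -> elevation (not the SF Geocoder code).
import requests

API_KEY = "YOUR_GOOGLE_API_KEY"  # placeholder

def geocode(intersection):
    """Turn an intersection like '47th Ave & Judah St, San Francisco, CA' into (lat, lng)."""
    r = requests.get("https://maps.googleapis.com/maps/api/geocode/json",
                     params={"address": intersection, "key": API_KEY})
    loc = r.json()["results"][0]["geometry"]["location"]
    return loc["lat"], loc["lng"]

def elevation(lat, lng):
    """Fetch the elevation in meters for a lat/long pair."""
    r = requests.get("https://maps.googleapis.com/maps/api/elevation/json",
                     params={"locations": f"{lat},{lng}", "key": API_KEY})
    return r.json()["results"][0]["elevation"]

lat, lng = geocode("47th Ave & Judah St, San Francisco, CA")
print(lat, lng, elevation(lat, lng))
```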

Mapping the San Francisco Sewers
This was my second “story” with the Water Works project, which is simply to somehow represent the complex system that is underneath us. The details of the sewers are staggering. With approximately 30,000 manholes and 30,000 pipes that connect them, how do you represent or even begin mapping this?

And what was the story, after all? It doesn't quite have the unique character of the cisterns, but it does portray a complex system. Even the DPW hadn't mapped this out in 3D space; I don't know if any city ever has. This was the compelling aspect: making the physical model itself from the large dataset.

Building a 3D Modeling System
In addition to looking for data and sifting through the sewer data that I had, I spent the first few weeks building up a codebase in OpenFrameworks.

The only other possibility was using Rhino + Grasshopper, a software package I don't know, and not even an Autodesk product. Though it can handle algorithmic model-building, several colleagues were dubious that it could handle my large, custom dataset.

So, I built my own. After several days of work, I mapped out the nodes and pipes as you see below. I represented the nodes as cubes and pipes as cylinders — at least for the onscreen data visualization.

sewer-mapping

This is a closeup of the San Francisco bay waterfront. You can see some isolated nodes and pipes — not connected to the network. This is one example of where the data wasn’t clean. Since this is engineering data, there are all sorts of anomalies like virtual nodes, run-offs and more.

My code was fast and efficient since it was written in C++. More importantly, I wrote custom STL exporters, which let my workflow go directly to a 3D printer without having to pass through other 3D packages to clean up the data. This took a lot of time, but once I got it working, it saved me hours of frustration later in the project.
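My actual exporter is C++ inside OpenFrameworks, but the idea of writing geometry straight to STL can be sketched in a few lines of Python; the box-per-node simplification and all names below are mine, for illustration only.

```python
# Illustrative sketch of a direct-to-STL export: each sewer node becomes a small box.
# My real exporter is C++ inside OpenFrameworks; this only shows the idea in Python.

def _normal(a, b, c):
    ux, uy, uz = (b[0]-a[0], b[1]-a[1], b[2]-a[2])
    vx, vy, vz = (c[0]-a[0], c[1]-a[1], c[2]-a[2])
    return (uy*vz - uz*vy, uz*vx - ux*vz, ux*vy - uy*vx)

def box_triangles(cx, cy, cz, s):
    """Return the 12 triangles of an axis-aligned box of size s centered at (cx, cy, cz)."""
    h = s / 2.0
    x0, x1, y0, y1, z0, z1 = cx-h, cx+h, cy-h, cy+h, cz-h, cz+h
    v = [(x0,y0,z0), (x1,y0,z0), (x1,y1,z0), (x0,y1,z0),
         (x0,y0,z1), (x1,y0,z1), (x1,y1,z1), (x0,y1,z1)]
    faces = [(0,2,1), (0,3,2),   # bottom
             (4,5,6), (4,6,7),   # top
             (0,1,5), (0,5,4),   # front
             (3,7,6), (3,6,2),   # back
             (0,4,7), (0,7,3),   # left
             (1,2,6), (1,6,5)]   # right
    return [(v[i], v[j], v[k]) for i, j, k in faces]

def write_ascii_stl(path, triangles, name="sewerworks"):
    with open(path, "w") as f:
        f.write(f"solid {name}\n")
        for a, b, c in triangles:
            n = _normal(a, b, c)
            f.write(f"  facet normal {n[0]} {n[1]} {n[2]}\n    outer loop\n")
            for x, y, z in (a, b, c):
                f.write(f"      vertex {x} {y} {z}\n")
            f.write("    endloop\n  endfacet\n")
        f.write(f"endsolid {name}\n")

# Hypothetical usage: three nodes, each exported as a 2-unit box.
nodes = [(0.0, 0.0, 0.0), (10.0, 4.0, 1.5), (20.0, 9.0, 3.0)]
tris = [t for (x, y, z) in nodes for t in box_triangles(x, y, z, 2.0)]
write_ascii_stl("nodes.stl", tris)
```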

seweremapping2

I also mapped out the cisterns in 3D space using the same code. The cisterns are disconnected in reality, but as a 3D print they need to be one cohesive structure. I modified the ofxDelaunay add-on (thanks GitHub) to create cylindrical supports that link the cisterns together.
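In spirit, the support-generation step works something like this sketch, which runs scipy's Delaunay triangulation over the cistern locations and treats each unique triangle edge as a candidate strut. The real version is C++ with ofxDelaunay, and the coordinates here are invented.

```python
# Sketch: derive candidate support struts from a Delaunay triangulation of cistern points.
# The real implementation uses ofxDelaunay in C++; the coordinates below are invented.
import numpy as np
from scipy.spatial import Delaunay

points = np.array([[0.0, 0.0], [4.0, 1.0], [2.5, 3.5], [6.0, 4.0], [1.0, 5.0]])
tri = Delaunay(points)

edges = set()
for a, b, c in tri.simplices:               # each simplex is a triangle of point indices
    for i, j in ((a, b), (b, c), (c, a)):
        edges.add(tuple(sorted((i, j))))    # de-duplicate edges shared by two triangles

for i, j in sorted(edges):
    length = np.linalg.norm(points[i] - points[j])
    print(f"strut between cistern {i} and {j}, length {length:.2f}")
```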

What you see here is an “editor”, where I could change the thickness of the supports, remove unnecessary ones and edit the individual cisterns models to put holes in certain ones.

I also scaled the cisterns according to their volume. The pre-1906 ones tend to be small, while the largest one, at Civic Center, holds about 243,000 gallons, over 3 times the size of the standard post-earthquake 75,000-gallon cisterns.

OF-cisterns-nomap

Story #3: Imaginary Drinking Hydrants
In the same document that had the locations of all of the San Francisco Cisterns, I also found this gem: 67 emergency drinking hydrants for public use in a city-wide disaster.

Whoa, I thought, how interesting…

drinking_hydrants

I dug deeper and scouted out the intersections in person. I took some photos of the Emergency Drinking Hydrants. They have blue drops painted on them. You can even see them on Street View.

I found online news articles from several years ago discussing this program, introduced in 2006 and also known as the Blue Drop Hydrant program.

What is the blue drop hydrant program?

blue_drop_man.jpg

And, I generated a web map, using Javascript and Leaflet.

imaginary-drnkinghydrants

I then published a link to the map on my Twitter feed. It generated a lot of excitement and was retweeted by many sources.

twitt.jpg

The SFist — a local San Francisco news blog — ended up covering it. I was excited. I thought I was doing a good public service.

However, there was a backlash…of sorts. It turns out that the program had been discontinued by the SFPUC. The organization did some quick publicity-control on their Facebook page and also contacted the SFist.

The writer of the article then issued a correction stating that the program had been discontinued, along with a press statement from the SFPUC.

press2.jpg

He also had this quote, which was a bit of a jab at me: "It had sounded like designer Scott Kildall, who had been mapping the hydrants, had done a fair amount of research, but apparently not."

In my defense, I re-researched the emergency drinking hydrants. Nowhere did it say that the program had been discontinued. Apparently, the SFPUC quietly shuffled it out.

But later, I found that my map birthed a larger discussion. The SFPUC had this response, also printed later on SFist.
But then, a good public response

The key quote by Emergency Planning Director Mary Ellen Carroll is:

“When it comes to sheltering after an emergency, we don’t tell people ahead of time, ‘This is where you’ll need to go to find shelter after an earthquake’ because there’s no way to know if that shelter will still be there.”

It makes sense that central gathering locations could be a bad idea. Imagine a gas leak or something similar at one of these locations. So a water distribution plan would have to be improvised according to the disaster at hand.

We do know from various news articles and from my own photographs that there was not only a map, but physical blue drops painted on the hydrants, in addition to a large publicity campaign. The program supposedly cost 1 million dollars, so that would have been an expensive map.

The SFPUC never pulled the old maps from their website, nor did they inform the public that the blue drop hydrants were discontinued.

I blame it on general human miscommunication. And after visiting the SFPUC offices towards the end of my Water Works project, I’m entirely convinced that this is a progressive organization with smart people. They’re doing solid work.

But I had to rethink my mapping project, since these hydrants no longer existed.

When faced with adverse circumstances, at least in the area of mapping and art, you must be flexible. There's always a solution. This one almost rhymes with Emergency: Imaginary.

Instead of hydrants for emergency drinking water, I ask the question: could we have a city where we could get tap water from these hydrants at any time? What if the water were recycled water?

They could have a faucet handle on them, so you could fill up your bottle when you get thirsty. More importantly, these hydrants could be a public service.

It’s probably impractical in the short term, but I love the idea of reusing the water lines for drinking lines — and having free drinking water in the public commons.

So, I rebranded this map and designed a hydrant with a drinking faucet attached to it. This would be the base form used for the maps.

Picture of Rebrand as Imaginary Drinking Hydrants

Creating Mini Models
I wanted to strike a balance in this data-visualization and mapping project between aesthetics and legibility. With the datasets I now had and the C++ code that I wrote, I could geolocate cisterns, hydrants and sewer lines.

These would be connected by support structures in the case of the cisterns and hydrants, and by the pipe data in the case of the sewers.

I decided that the actual data points would be miniature models, which I designed in Fusion 360 with the help of Autodesk guru, Taylor Stein. The first one I created was the Cistern model.

cisterns-fusion360

I went through several iterations to come up with this simple model. The design challenge was to come up with a form that looked like it could be an underground tank, but didn't bring up other associations. In this case, without the three rectangular stubby pieces, it looks like a tortilla holder.

After a day of design and 3D print tests, I settled on this one.

cistern-model

And here you can see the outputs of the cisterns and the hydrants in MeshLab.

meshlab-cisterns

Here is the underside of the hydrant structure, where you can see the holes in the hydrants, which I use later for creating the final sculpture. These are drill holes for mounting the final prints on wood.

meshlab-hydrants-underneath

The manhole chamber design was the hardest one to figure out. This one is more iconographic than representational. Without some sort of symmetry, the look of the underground chamber didn't resonate. I also wanted to provide a manhole cover on top of the structure. The flat bottom distinguishes it from the pipes.

manhole

Mapping and Legibility

stamen

One of my favorite aspects of being at Stamen was that four days a week, they provided lunch for us. We all ate lunch together. This was a good chunk of unstructured time to talk about mapping, music, personal life, whatever.

We solidified bonds — shared lunch is so often overlooked in organizations. In addition to informal discussion of the project, we also had a few creative brainstorm sessions, where I would present the progress of the project and get feedback from several people at Stamen. Folks from Autodesk and Gray Area also joined the discussion.

I hadn't considered situating the prints on a map before, but they suggested integrating one of some sort. The idea quickly took shape: I should geolocate the 3D prints on top of a map. This was a brilliant direction for the project.

OF-imaginaryhydrants-map

Stamen provided me with a high-resolution map that I could laser-etch, which came later, after the 3D printing. Now, with this direction for the project, I started making the actual 3D prints.

map-for-etching

Mega-prints with lots of cleaning
After all the mapping, arduous data-smoothing and tests upon structural tests, I was finally ready to spool off the large-scale 3D prints. Each print was approximately the size of the Objet500 print bed: 20″ x 16″, making these huge. A big thanks to Autodesk for sponsoring the work and providing the machines.

Each print took between 40 and 50 hours of machine time, so I sent these out as weekend-long jobs. Time and resources were limited, so this was a huge endeavor.

cisterns-buildtime

I was worried that the prints would fail, but I got lucky in each case. The prints are a combination of resin materials: VeroClear and VeroWhite for the Cisterns and Hydrants, and mixes of VeroWhite and VeroBlack for the Sewers.

support-cisterns-far

When the prints come off the print bed, they are encased in a support material which I first scraped off and then used a high-pressure water system to spray the rest off.
cleaning-cistern

It took hours upon hours to get from this.

sewerworks

To this: a fully cleaned version of the Sewer print. This 3D print is of a section of the city: the Embarcadero area, which includes the Pier 9 facility where Autodesk is located.

For the Sewer Works print, the manhole chambers and pipes are scaled to the sizes in the data tables. I increased the elevation about 3 times to capture the hilly terrain of San Francisco. What you see here is an aerial view, as if you were in a helicopter flying from Oakland to San Francisco. The diagonal is Market Street, ending at the Ferry Building. On the right side, towards the back of the print, is Telegraph Hill. There are large pipes and chambers along the Embarcadero. Smaller ones comprise the sewer system in the hilly areas.
sewerworks-3d

Map-Etching and Final Fabrication
I'll just summarize the final fabrication — this blog post is already very long. For more details, you can read this Instructable on how I did the fabrication work.

Using cherry wood, which I planed, jointed and glued together, I laser-etched these maps, and they came out beautifully.

I chose wood both for its beautiful finish and because the material references the wooden Victorian and Edwardian houses that define the landscape of San Francisco. The laser-etching burns away the wood, like the fires after the 1906 Earthquake, which spawned the AWSS water system.

_MG_7318

The map above is the waterfront area for the Sewer Works print, and the one below is the full map of the city that I used as the base for the San Francisco Cisterns and the Imaginary Drinking Hydrants sculptures.

_MG_7316

The last stages of the woodwork involved traditional fabrication, which I did at the Autodesk facilities at Pier 9.

_MG_7314

I drilled out the holes for mounting the final 3D prints on the wood bases and then mounted them on 1/16″ stainless rods, such that they float about 1/2″ above the wood map.

_MG_7330

The final stage involved manually fitting the prints onto the rods.

_MG_7335

Final Results
Here are the three prints, mounted on the wood-etched maps.

Below is the Imaginary Drinking Hydrants. This was the most delicate of the 3D prints.

06_large

These are the San Francisco Cisterns, which are concentrated in the older parts of San Francisco. They are nearly absent from the western part of the city, which became densely populated well after the 1906 Earthquake.

02_large

This is the Sewer Works print. The map is not as visible because of the density of the network. The pipes are a light gray and the manhole chambers a medium gray. The map does capture the extensive network of manmade piers along the waterfront.

03_large

The Website: San Francisco Cisterns and Imaginary Drinking Hydrants
The website for this project is waterworks.io. It has three interactive web maps, one for each of the three water systems.

The aforementioned Instructable, Mapping San Francisco Cisterns, details how I made these. The summary is that I did a lot of data-wrangling, often using Python to transform the data into GeoJSON files, a web-mappable format.

The Stamen designer-technicians were invaluable in directing me to Leaflet, an easy-to-use mapping library. I struggled with it for a while, as I was a complete newbie to Javascript, but eventually sorted out how to create maps and customize the interactive elements.

Fortunately, I also received help from the designers at Stamen on the graphics. I only have so many skills, and graphic design is not one of them.

cisternsmapping

The Website: Life of Poo
Leaflet's performance bogged down when I had more than about 1500 markers, and the sewer system has about 28,000 nodes.

I spent a lot of energy on node-trimming, using a combination of Python and Java code, and winnowed the count down to about 1500. The consolidated node list was based on distance and used various techniques to map a small set of nodes in a cohesive way.
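One simple distance-based approach — a sketch of the general idea rather than my exact trimming code — is to snap nodes to a coarse grid and keep a single representative per cell:

```python
# Sketch of distance-based node consolidation: snap nodes to a coarse lat/long grid
# and keep one representative per cell. Not my exact trimming code; the cell size is a guess.
from collections import defaultdict

def consolidate(nodes, cell_deg=0.005):
    """nodes: list of (node_id, lat, lng). Returns roughly one node per grid cell."""
    cells = defaultdict(list)
    for node_id, lat, lng in nodes:
        key = (round(lat / cell_deg), round(lng / cell_deg))
        cells[key].append((node_id, lat, lng))
    # Keep the first node in each cell as the representative.
    return [members[0] for members in cells.values()]

nodes = [("N1", 37.7749, -122.4194), ("N2", 37.7751, -122.4196), ("N3", 37.7793, -122.4312)]
print(consolidate(nodes))   # N1 and N2 collapse into one representative; N3 survives
```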

lifeofpoo

In the hours just before presenting the project, I finished Life of Poo: an interactive journey of toilet waste.

On the website, you can enter a San Francisco address or intersection such as "Twin Peaks, SF" or "47th & Judah, SF" into Life of Poo and then press Flush Toilet.

This will begin an animated poo journey down the sewer map and to the wastewater treatment plant.
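Under the hood, a flush amounts to finding a path through the sewer graph from the nearest node to a treatment plant node. Here is a minimal sketch of that idea using networkx; the node names are invented, and the real network has around 28,000 nodes.

```python
# Sketch of the flush-path idea: shortest path through the sewer graph to a treatment plant.
# Node names are invented; the real data has roughly 28,000 nodes and pipes.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("twin_peaks", "market_upper"),
    ("market_upper", "market_lower"),
    ("market_lower", "embarcadero"),
    ("embarcadero", "southeast_plant"),   # wastewater treatment plant
])

path = nx.shortest_path(G, source="twin_peaks", target="southeast_plant")
print(" -> ".join(path))   # the sequence of chambers the animated poo travels through
```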

Not all of the flushes work as you'd expect. There are still glitches and bugs in the code. If you type in "16th & Mission", the poo just sits there.

Why do I have these bugs? I have some ideas (see below), but I really like the chaotic results, so I will keep them for now.

Lesson #3: Sometimes you should sacrifice accuracy.

Future Directions
I worked very, very hard on this project and I'm going to let it rest for a while. There's still some work to do, which I would like to get to some day.

Cistern Map
I'd like to improve the Cistern Map, as I think it has cultural value. As far as I know, it's the only one on the web. The data is from the intersections and, while close, is not entirely correct. Sometimes the intersection data is off by a block or so. I don't think this affects the integrity of the 3D map, but it would be important to correct for the web portion.

Life of Poo
I want to see how this interactive map plays out and how people respond to it over the next couple of months. The animated poo is universally funny, but it doesn't behave "properly". Sometimes it gets stuck. This was the last part of the Water Works project and one that I got working the night before the presentation.

I had to do a lot of node-trimming to make this work — Leaflet can only handle about 1500 data points before it slows down too much, so I trimmed a set of about 28,000 nodes down to that size. This could be one source of the inaccuracies.

I don't take gravity into account in the flow calculations, which is why I think the poo behaves oddly. But maybe the map is more interesting this way. It is, after all, an animated poo emoji.

Infrastructure Fabrication
This is where the project gets very interesting. What I've been able to accomplish with the "Sewer Works" print is to show how the sewer pipes of San Francisco look as a physical manifestation. This is only the beginning of many possibilities. I'd be eager to develop this technology and modeling system further, taking standard GIS maps and translating them into physical models.

Thanks for reading this far and I hope you enjoyed this project,
Scott Kildall


EquityBot: Capturing Emotions

In my ongoing research and development of EquityBot — a stock-trading bot* with a philanthropic personality, which is my residency project at Impakt Works — I’ve been researching various emotional models for humans.

The code I'm developing will try to make correlations between stock prices and group emotions on Twitter. It's a daunting task and one where I'm not sure what the signal-to-noise ratio will be (see disclaimer). As an art experiment, I don't know what will emerge, but it's geeky and exciting.

In the last couple of weeks, I've been creating a rudimentary system that just captures words. A more complex system would use sentiment analysis algorithms. My time and budget are limited, so phase 1 will be a simple implementation.

I’ve been looking for some sort of emotional classification system. There are several competing models (of course).

My favorite is the Plutchik Wheel of Emotions, which was developed in 1980. It has a symmetrical look to it and apparently is deployed in various AI systems.

 

Plutchik-wheel.svg

Other models, such as the Lövheim cube of emotion, are more recent and seem compelling at first. But the cube is missing something critical: sadness or grief. Really? This is such a basic human emotion that when I saw it was absent, I tossed the cube model.

1280px-Lövheim_cube_of_emotion

Back to the Plutchik model…my "Twitter bucket" captures certain words from the color wheel above. I want enough words for a reasonable statistical correlation (about 2000 tweets/hour), but too many of one word will strain my little Linode server. For example, the word "happy" is a no-go, since there are thousands of Tweets with that word each minute.

Many people tweet about anger by just using the word “angry” or “anger”, so that’s an easy one. Same thing goes with boredom/boring/bored.

For other words, I need to go synonym-hunting — like "apprehension". The Twitter stream with this word is just a trickle, so I've mapped it to "worry" and "anxiety", which show up more often in tweets. It's not quite correct, but reasonably close.

The word "terror" has completely lost its original meaning and now only refers to political discourse. I'm still trying to figure out a good synonym-map for terror: terrifying, terrify, terrible? None is quite right. There's no good word to represent that feeling of absolute fear.

This gets tricky and I’m walking into the dark valley of linguistics. I am well-aware of the pitfalls.
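To make the word-capture idea concrete, here is a minimal sketch of the kind of synonym map and counter involved; the word lists are examples drawn from this post plus my own guesses, not EquityBot's actual tables.

```python
# Sketch of a word-capture bucket: map surface words in tweets to Plutchik-style emotions.
# The synonym lists are illustrative, not EquityBot's actual tables.
import re
from collections import Counter

SYNONYM_MAP = {
    "anger":        ["anger", "angry"],
    "boredom":      ["boredom", "boring", "bored"],
    "apprehension": ["worry", "worried", "anxiety", "anxious"],
    "terror":       ["terrifying", "terrified"],
}

def count_emotions(tweets):
    """Count how many tweets mention each emotion bucket at least once."""
    counts = Counter()
    for tweet in tweets:
        words = set(re.findall(r"[a-z']+", tweet.lower()))
        for emotion, synonyms in SYNONYM_MAP.items():
            if words & set(synonyms):
                counts[emotion] += 1
    return counts

sample = ["So angry about my commute today", "This meeting is boring", "worried about tomorrow"]
print(count_emotions(sample))   # Counter({'anger': 1, 'boredom': 1, 'apprehension': 1})
```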

Screen Shot 2014-10-01 at 3.18.33 PM

 

* Disclaimer:
EquityBot doesn't actually trade stocks. It is an art project intended for illustrative purposes only and is not intended as actual investment advice. EquityBot is not a licensed financial advisor. It is not, and should not be regarded as, investment advice or a recommendation regarding any particular security or course of action.

 

Polycon in Berlin

This week I traveled to Berlin for Polycon. No…it's not a convention on polyamory, but a project developed by my longtime friend, Michael Ang (aka Mang). The Polygon Construction Kit (aka Polycon) is a software toolkit for converting 3D polygon models into physical objects.

IMG_0246

I wanted an excuse to visit Berlin, to hang out with Mang and to open up some possibilities for the physical data-visualizations behind EquityBot, which I'm working on during my artist residency at Impakt Works and for their upcoming festival.

I brought my recently-purchased Printrbot Simple Metal, which I had disassembled into this travel box.

IMG_0281

After less than 30 minutes, I had it reassembled and working. Victory! Here it is, printing one of the polygon connectors.

IMG_0248

How does Polycon work? Mang shared the details with me. You start with a simple 3D model from some sort of program. He uses SketchUp for creating physical models of his large-scale sculptures. I prefer OpenFrameworks, which is powerful and lets me easily manipulate shapes from data streams.

Here’s the simple screenshot in OpenFrameworks of two polyhedrons. I just wrote this the other day, so there’s no UI for it yet.

Screen Shot 2014-09-25 at 6.10.14 PM

And here is how it looks in MeshLab. It's water-tight, meaning that it can be 3D-printed.

Screen Shot 2014-09-25 at 6.10.59 PM

My goal is to do larger-scale data visualizations than some of my previous works such as Data Crystals and Water Works. I imagine room-sized installations. I’ve had this idea for many months of using the 3D printer to create joinery from datasets and to skin the faces using various techniques, TBD.

How it works: Polycon loads a 3D model and, using Python scripts in FreeCAD, generates 3D joints that, along with wooden dowels, can be assembled into polygonal structures.

Screen Shot 2014-09-25 at 6.09.00 PM
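The core geometric step — working out which direction each dowel socket should point at a given vertex — can be sketched in a few lines of Python. This is my own simplification for illustration, not Polycon's actual FreeCAD scripts.

```python
# Sketch: for each vertex of a polygon mesh, compute the unit directions of its connected
# edges — the directions the dowel sockets of a printed joint would point.
# My own simplification for illustration; not Polycon's actual FreeCAD code.
import numpy as np

vertices = np.array([[0.0, 0.0, 0.0],
                     [1.0, 0.0, 0.0],
                     [0.5, 1.0, 0.0],
                     [0.5, 0.5, 1.0]])          # a small tetrahedron
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]

def socket_directions(vertex_index):
    """Unit vectors from one vertex toward each connected vertex."""
    dirs = []
    for a, b in edges:
        if vertex_index in (a, b):
            other = b if a == vertex_index else a
            v = vertices[other] - vertices[vertex_index]
            dirs.append(v / np.linalg.norm(v))
    return dirs

for d in socket_directions(0):
    print(np.round(d, 3))
```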

The Printrbot makes adequate joinery, but it's nowhere near as pretty as the Vero prints on the Objet500 at Autodesk. It doesn't matter that much, because my digital joinery will be hidden in the final structures.

IMG_0272

Mang guided me through the construction of my first Polycon structure. There's a lot of cleanup work involved, such as drilling out the holes in each of the joints.

IMG_0274

It took a while to assemble the basic form. There are vertex-numbering improvements that we'll both make to the software. Together, Mang and I brainstormed ideas on how to make the assembly go more quickly.

IMG_0259

After about 15 minutes, I got my first polygon assembled.

IMG_0265

It looks a lot like…the 3D model. I plan to be working on these forms over the next several months, so I felt great after a successful first day.

IMG_0268

And here is a really nice image of one of Mang's pieces — sculptures of mountains that he made from memories of flying high in a glider. I like where he's going with his artwork: making models based on nature, with ideas of recording these spaces and playing them back in various urban spaces. You can check out Michael Ang's work here on his website.
IMG_0278


A Starting Point: Distributed Capital

I'm doing more research on EquityBot — the project for my Impakt Works residency, which I started just a couple of days ago.

EquityBot is a stock-trading algorithm that explores the connections between collective emotions on social media and financial speculation. It will be presented at the Impakt Festival at the end of October.

It will also consist of a sculptural component (presented post-festival), which is the more experimental form.

Many of you are familiar with Paul Baran's work on designing a distributed network, but many others may not be. Working at RAND on research for the U.S. Air Force, he determined that a centralized communications network would be vulnerable to attack and suggested that the United States use a distributed network instead.
baran

Interestingly, there is a widespread myth that the Internet, derived from ARPANET, was designed to withstand a nuclear attack using this model. This isn't the case; rather, the architects of the internet transmission protocols heard of the RAND work and adapted it for packet switching. Yet the myth persists.

On a side note, perhaps military technology could be useful for the public good. If only we could declassify the technology, like Baran did.

The distributed network reminds me of a 3D polygon mesh. I think this could be a good source for a 3D data-visualization: Distributed Capital. I'll research this more in the future.

But EquityBot isn't about networks in the formal sense. It is a project about constructing a predictive model of stock changes based on the idea that Twitter sentiments correlate with fluctuations in stock prices.

Screen Shot 2014-09-17 at 6.08.23 AM

Do I know there is a correlation? Not yet, but I think there is a good possibility. One of my reading sources, The Computational Beauty of Nature, sums up the value of simulated models in its introduction: a predictive model might fail in its results, but it will likely reveal a greater truth about the system it is trying to predict. Thus, knowing the uncertainty ahead of time provides a sense of certainty. EquityBot may not "work" — but then again, it may.

compbeautyofnature

My source of dissent is the excellent book The Signal and the Noise: Why So Many Predictions Fail — but Some Don't by Nate Silver. After reading it last summer, I was convinced that any predictive analysis would simply be noise. I was disheartened and halted the EquityBot project (previously called Grantbot) for many months.

la-ca-nate-silver

However, now I'm not so sure. It seems likely that people's moods affect financial decisions, which in turn affect stock prices. With studies such as this one by Vagelis Hristidis, which found some correlation between Twitter chatter and stock prices, I think there is something to this, which is why I've revisited the EquityBot project.

I’ll follow the Buddhist maxim with this project and embrace its uncertainty.