Introducing Machine Data Dreams

Earlier this year, I received an Individual Artist Commission grant from the San Francisco Arts Commission for a new project called Machine Data Dreams.

I was notified months ago, but the project was on the back-burner until now, when I’m beginning some initial research and experiments at a residency called Signal Culture. I expect full immersion in the fall.

The project description
Machine Data Dreams will be a large-scale sculptural installation that maps the emerging sentience of machines (laptops, phones, appliances) into physical form. Using the language of machines — software program code — as linguistic data points, Scott Kildall will write custom algorithms that translate how computers perceive the world into physical representations that humans can experience.

The project’s narrative proposition is that machines are currently prosthetic extensions of ourselves and that, in the future, they will transcend into something sentient. Computer chips not only run our laptops and phones, but increasingly our automobiles, our houses, our appliances and more. They are ubiquitous and yet often silent. The key to understanding their perspective is to envision how machines view the world, in an act of synthetic synesthesia.

Scott will write software code that will perform linguistic analysis on machine syntax from embedded systems — human-programmable machines that range from complex, general-purpose devices (laptops and phones) to specific-use machines (refrigerators, elevators, etc.). Scott’s code will generate virtual 3D geometric monumental sculptures. More complex structures will reflect the higher-level machines, and simpler structures will be generated from lower-level devices. We are intrigued by the experimental nature of what the form will take — this is something that he will not be able to plan.

kildall_5

Machine Data Dreams will utilize 3D printing and laser-cutting, digital fabrication techniques that are changing how sculpture can be created — entirely from software algorithms. Simple, hidden electronics will control LED lights to imbue the artwork with a sense of consciousness. Plastic joints will be connected via aluminum dowels to form an armature of irregular polygons. The exterior panels will be clad in semi-translucent acrylic, adhered magnetically to the large-sized structures. The various installations can easily be disassembled and reassembled.

The project will build on my experiments with the Polycon Construction Kit by Michael Ang, where I’m doing some source-code collaboration. This will heat up in the fall.

PCK-small-mountain-768x1024

At Signal Culture, I have 1 week of residency time. It’s short and sweet. I get to play with devices such as the Wobbulator, originally built by Nam June Paik and video engineer Shuya Abe.

The folks at Signal Culture built their own from the original designs.

What am I doing here, with analog synths and other devices? Well, I’m working with a home-built Arduino data logger that captures raw analog video signals (I will later modify it for audio).

20150730_200511

I’ve optimized the code to capture about 3600 signals/second. The idea is to get a raw data feed of what a machine might be “saying”, or the electronic signature of a machine.
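To give a sense of the capture side, here is a minimal Python sketch of how a host computer could log that kind of serial feed, assuming the Arduino prints one raw reading per line. The port name, baud rate and use of pyserial are illustrative stand-ins, not the actual logger code.

```python
import csv
import time

import serial  # pyserial

# Assumptions: the Arduino streams one raw ADC reading per line over USB serial.
# The port name and baud rate below are placeholders for your own setup.
PORT = "/dev/tty.usbmodem1411"
BAUD = 115200

def log_signals(path="signal_log.csv", seconds=30):
    """Read raw analog samples from the serial port and timestamp them to CSV."""
    with serial.Serial(PORT, BAUD, timeout=1) as conn, open(path, "w") as out:
        writer = csv.writer(out)
        writer.writerow(["t", "value"])
        start = time.time()
        while time.time() - start < seconds:
            line = conn.readline().strip()
            if not line:
                continue
            try:
                writer.writerow([time.time() - start, int(line)])
            except ValueError:
                pass  # skip any garbled reading

if __name__ == "__main__":
    log_signals()
```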

20150730_150950

Does it work? Well, I hooked it up to a Commodore Amiga (yes, they have one).

I captured about 30 seconds of video and ran it through a crude version of my custom 3D data-generation software, which turns the signals into models. Here is what I got. Whoa…

It is definitely capturing something.

Screen Shot 2015-07-30 at 10.08.49 PM

It’s early research. The forms are flat 3D cube-plots. But they are also very promising.

Selling Bad Data

The reception for my solo show “Bad Data”, featuring the Bad Data series is this Friday (July 24, 2015) at A Simple Collective.

Date: July 24th, 2015
Time: 7-9pm
Where: ASC Projects, 2830 20th Street (btw Bryant and York), Suite 105, San Francisco

The question I had when pricing these works was: how do you sell Bad Data? The material costs were relatively low. The labor time was high. And the data sets were (mostly) public.

We came up with this price list, subject to change.

///  Water-jet etched aluminum honeycomb:

baddata_sfevictions
18 Years of San Francisco Evictions, 2015 | 20 x 20 inches | $1,200
Data source: The Anti-Eviction Mapping Project and the SF Rent Board


baddata_airbnb
2015 AirBnB Listings in San Francisco, 2015 | 20 x 20 inches | $1,200
Data source: darkanddifficult.com


baddata_hauntedlocations
Worldwide Haunted Locations, 2015 | 24 x 12 inches | $650
Data source: Wikipedia


baddata_ufosightings

Worldwide UFO Sightings, 2015 | 24 x 12 inches | $650
Data source: National UFO Reporting Center (NUFORC)


baddata_missouriabortionalternatives

Missouri Abortion Alternatives, 2015 | 12 x 12 inches | $150
Data source: data.gov (U.S. Government)


baddata_socalstarbucks

Southern California Starbucks, 2015 | 12 x 8 inches | $80
Data source: https://github.com/ali-ce


baddata_usprisons

U.S. Prisons, 2015 | 18 x 10 inches | $475
Data source: Prison Policy Initiative prisonpolicy.org (via Josh Begley’s GitHub page)


///  Water-jet etched aluminum honeycomb with anodization:

baddata_denvermarijuana

Albuquerque Meth Labs, 2015 | 18 x 12 inches | $475
Data source: http://www.metromapper.org


baddata_usmassshootings

U.S. Mass Shootings (1982-2012), 2015 | 18 x 10 inches | $475
Data source: Mother Jones


baddata_blacklistedips-banner

Blacklisted IPs, 2015 | 20 x 8 ½  inches | $360
Data source: Suricata SSL Blacklist


baddata_databreaches

Internet Data Breaches, 2015 | 20 x 8 ½ inches | $360
Data source: http://www.informationisbeautiful.net

Bad Data, Internet Breaches, Blacklisted IPs

In 1989, I read Neuromancer for the first time. The thing that fascinated me the most was not the concept of “cyberspace” that Gibson introduced. Rather it was the physical description of virtual data. The oft-quoted line is:

“The matrix has its roots in primitive arcade games. … Cyberspace. A consensual hallucination experienced daily by billions of legitimate operators, in every nation, by children being taught mathematical concepts. … A graphic representation of data abstracted from banks of every computer in the human system. Unthinkable complexity. Lines of light ranged in the nonspace of the mind, clusters and constellations of data. Like city lights, receding.”

What was this graphic representation of data? It struck me at first and has stuck with me ever since. I could only imagine what it could be. This concept of physicalizing virtual data later led to my Data Crystals project. Thank you, Mr. Gibson.

dc_sfart_v1

In Neuromancer, the protagonist Case is a freelance “hacker”. The book was published well before Anonymous, back in the days when KILOBAUD was the equivalent of Spectre for the BBS world.

At the time, I thought that there would be no way that corporations would put their data in a central place that anyone with a computer and a dial-up connection (and, later T1, DSL, etc) could access. This would be incredibly stupid.

And then, the Internet happened, albeit more slowly than people remember. Now hacking and data breaches are commonplace.

My “Bad Data” series — waterjet etchings of ‘bad’ datasets onto aluminum honeycomb panels — capture two aspects of internet hacking: Internet data breaches and Blacklisted IPs.

In these examples, ‘bad’ has a two-layered meaning. Abrogating the accepted norms of Internet behavior is widely considered a legal, though not always a moral, crime. The data is also ‘bad’ in the sense that it is incomplete. Data breaches are usually not advertised by the entities that get breached. That would be poor publicity.

For the Bad Data series, I worked not necessarily with the data I wanted, but rather with the data that I could get. From Information Is Beautiful, I found this dataset of Internet data breaches.

Screen Shot 2015-07-12 at 8.22.04 PM

What did I discover? …that Washington DC is the leader in breached information. I suspect this is mostly because the U.S. government is the biggest target, rather than because of lax government security. The runner-up is New York City, the center of American finance. Other notable cities are San Francisco, Tehran and Seoul. San Francisco makes sense — the city is home to many internet companies. So does Tehran, which is a target of Western internet attacks, government or otherwise. But Seoul? They claim to be targeted by North Korea. However, as we found out with the Sony Pictures Entertainment hack, North Korea is an easy scapegoat.

BAD DATA: INTERNET DATA BREACHES (BELOW)

baddata_databreaches

Conversely, there are many lists of banned IPs. The one I worked with is the Suricata SSL Blacklist. This may not be the best source, as there are thousands of IP Blacklists, but it is one that is publicly available and reasonably complete. As I’ve learned, you have to work with the data you can get, not necessarily the data you want.

I ran these two etched panels both through an anodization process, which further created a filmy residue on the surface. I’m especially pleased with how the Banned IPs panel came out.

Bad Data: BLACKLISTED IPs (below)

baddata_blacklistedips

Genetic Portraits and Microscope Experiments

I recently finished a new artwork — called Genetic Portraits — which is a series of microscope photographs of laser-etched glass that data-visualize a person’s genetic traits.

I specifically developed this work as an experimental piece for the Bearing Witness: Surveillance in the Drone Age show. I wanted to look at an extreme example of how we have freely surrendered our own personal data for corporate use. In this case, 23andMe provides an extensive (paid) genetic sequencing package. Many people, including myself, have sent in saliva samples to the company, which they then process. From their website, you can get a variety of information, including their projected likelihood that you might be prone to specific diseases based on your genetic traits.

Following my line of inquiry with other projects such as Data Crystals and Water Works, where I wrote algorithms that transformed datasets into physical objects, this project processes an individual’s genetic sequence to generate vector files, which I later use to laser-etch onto microscope slides. The full project details are here.

gp_scott_may11

Concept + Material
I began my experiment months earlier, before the project was solidified, by examining the effect of laser-etching on glass underneath a microscope. This stemmed from conversations with some colleagues about the effects of laser-cutting materials. When I looked at this underneath a microscope, I saw amazing results: an erratic universe accentuated by curved lines. Even with the same file, each etching is unique. The glass cracks in different ways. Digital fabrication techniques still result in distinct analog effects.

blog-IMG_4106

When the curators of the show, Hanna Regev and Matt McKinley, invited me to submit work on the topic of surveillance, I considered how to leverage various experiments of mine, and came back to this one, which would be a solid combination of material and concept: genetic data etched onto microscope slides and then shown at a macro scale: 20” x 15” digital prints.

Surrendering our Data
I had so many questions about my genetic data. Is the research being shared? Do we have ownership of this data? Does 23andMe even ask for user consent? As many articles point out, the answers are exactly what we fear. Their user agreement states that “authorized personnel of 23andMe” can use the data for research. This official-sounding text simply means that 23andMe decides who gets access to the genetic data I submitted. 23andMe is not unique: other gene-sequencing companies have similar provisions, as the article suggests.

Some proponents suggest that 23andMe is helping the research front while still making money. It’s capitalism at work. This article in Scientific American sums up the privacy concerns. Your data becomes a marketing tool, and people like me handed a valuable dataset to a corporation, which can then sell us products based on the very data we have provided. I completed the circle, and I even paid for it.

However, what concerns me even more than 23andMe selling or using the data — after all, I did provide my genetic data, fully aware of its potential use — is the statistical accuracy of genetic data. Some studies have reported a Eurocentric bias in the data, and the FDA has also battled with 23andMe over the health data they provide. The majority of the data (with the exception of Bloom’s Syndrome) simply wasn’t predictive enough. Too many people had false positives with the DNA testing, which not only causes worry and stress but could lead to customers taking pre-emptive measures such as getting a mastectomy if they mistakenly believe they are genetically predisposed to breast cancer.

A deeper look at the 23andMe site shows a variety of charts that make it appear as if you might be susceptible (or immune) to certain traits. For example, I have lower-than-average odds of having “Restless Leg Syndrome“, which is probably the only neurological disorder that makes most people laugh when hearing about it. My genetic odds of having it are simply listed as a percentage.

Our brains aren’t very good with probabilistic models, so we tend to inflate and deflate statistics. Hence, one of many problems of false positives.

And, as I later discovered, from an empirical standpoint, my own genetic data strayed far from my actual personality. Our DNA simply does not correspond closely enough to reality.

Screen Shot 2015-06-16 at 11.06.44 AM

Data Acquisition and Mapping
From the 23andMe site, you can download your raw genetic data. The resulting many-megabyte file is full of rsid data and the actual allele sequences.

Screen Shot 2015-06-15 at 10.37.08 AM

Isolating useful information from this was tricky. I cross-referenced some of the rsids used for common traits from 23andMe with the SNP database. At first I wanted to map ALL of the genetic data. But, the dataset was complex — too much so for this short experiment and straightforward artwork.

Instead, I worked with some specific indicators that correlate to physiological traits such as lactose tolerance, sprinter-based athleticism, norovirus resistances, pain sensitivity, the “math” gene, cilantro aversion — 15 in total. I avoided genes that might correlate to various general medical conditions like Alzheimer’s and metabolism.

For each trait I cross-referenced the SNP database with 23andMe data to make sure the allele values aligned properly. This was arduous at best.

There was also a limit on physical space for etching the slide, so having more than 24 marks or etchings on one plate would be chaotic. Through days of experimentation, I found that 12-18 curved lines would make for compelling microscope photography.

To map the data onto the slide, I modified Golan Levin’s Yellowtail Processing sketch, which I had been using as a program to generate curved lines onto my test slides. I found that he had developed an elegant data-storage mechanism that captures gestures. From the isolated rsids, I then wrote code that assigned weighted numbers to allele values (e.g. AA = 1, AG = 2, GG = 3, depending on the rsid).
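As a rough sketch of that weighting step, assuming the standard 23andMe raw export layout (tab-separated rsid, chromosome, position and genotype, with # comment lines), the parsing and weighting might look like this in Python. The rsids and weight tables here are placeholders, not the 15 traits I actually used.

```python
# Hypothetical rsid -> genotype-weight table; the real project cross-referenced
# 15 traits against the SNP database.
TRAIT_WEIGHTS = {
    "rs4988235": {"AA": 1, "AG": 2, "GG": 3},   # e.g. a lactose-tolerance marker
    "rs1815739": {"CC": 1, "CT": 2, "TT": 3},   # e.g. a "sprinter" marker
}

def load_genotypes(path):
    """Parse a 23andMe raw export: rsid, chromosome, position, genotype per line."""
    genotypes = {}
    with open(path) as f:
        for line in f:
            if line.startswith("#"):
                continue                      # skip header comments
            fields = line.split()
            if len(fields) == 4:
                rsid, _chrom, _pos, genotype = fields
                genotypes[rsid] = genotype
    return genotypes

def weigh_traits(genotypes):
    """Turn each selected trait into a small integer that later shapes a curve."""
    weights = {}
    for rsid, table in TRAIT_WEIGHTS.items():
        genotype = genotypes.get(rsid)
        if genotype in table:
            weights[rsid] = table[genotype]
    return weights
```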

gp_illustrator

Based on the rsid numbers themselves, my code generated (x, y) anchor points and curves with the allele values changing the shape of each curve. I spent some time tweaking the algorithm and moving the anchor points. Eventually, my algorithm produced this kind of result, based on the rsids.

genome_scott_notated

The question I always get asked about my data-translation projects concerns legibility: how can you infer results from the artwork? It’s a silly question, like asking a Kindle engineer to analyze a Shakespeare play. A designer of data visualization will try to tell a story using data and visual imagery.

My research and work focus on deep experimentation with the formal properties of sculpture — of physical forms — based on data. I want to push the boundaries of what art can look like, continuing the lineage of algorithmically-generated work by artists such as Sol LeWitt, Sonya Rapoport and Casey Reas.

Is it legible? Slightly so. Does it produce interesting results? I hope so.

gp_slide_image

But, with this project, I’ve learned so much about genetic data — and even more about the inaccuracies involved. It’s still amazing to talk about the science that I’ve learned in the process of art-making.

Each of my 5 samples looks a little bit different. This is the mapping of actual genetic traits of my own sample and that of one other volunteer named “Nancy”.

genome_scott_notated

Genetic Traits for Scott (ABOVE)
Genetic Traits for Nancy (BELOW)

genome_scott_notated

We both share a number of genetic traits such as the “empathy” gene and curly hair. The latter seems correct — both of us have remarkably straight hair. I’m not sure about the empathy part. Neither one of us is lactose intolerant (also true in reality).

But the test’s accuracy breaks down on several specific points. Nancy and I do have several differences, including athletic predisposition. I have the “sprinter” gene, which means that I should be great at fast running. I also do not have the math gene. Neither one of these is at all true.

I’m much more suited to endurance sports such as long-distance cycling and my math skills are easily in the 99th percentile. From my own anecdotal standpoint, except for well-trodden genetics like eye color, cilantro aversion and curly hair, the 23andMe results often fail.

The genetic data simply doesn’t seem to support the physical results. DNA is complex. We know this; it is non-predictive. Our genotype results in different phenotypes, and the environmental factors are too complex for us to understand with current technology.

Back to the point about legibility. My artwork is deliberately non-legible based on the fact that the genetic data isn’t predictive. Other mapping projects such as Water Works are much more readable.

I’m not sure where this experiment will go. I’ve been happy with the results of the portraits, but I’d like to pursue this further, perhaps with scientists who would be interested in collaborating around the genetic data.

FOUR FINAL SLIDE ETCHINGS  (BELOW)

gp_allison_may11


gp_michele_may11 gp_nancy_may11 gp_scott_may11

Pier 9 Artist Profile

The good folks at Pier 9, Autodesk just released this video profile of me and my Water Works project. I’m especially happy with Charlie Nordstrom’s excellent videography work, and I even got the chance to help with the editing of the video itself.

Yes, in a previous life I used to edit video documentaries with the now-defunct Sleeping Giant Video and the IndyMedia Center.

But now, I’m more interested in algorithms, data and sculpture.

Impakt Festival: Opening Night

The Impakt Festival officially kicked off this Wednesday evening, and the first event was the exhibition opening at Foto Dok, curated by Alexander Benenson.

The works in the show circled around the theme of Soft Machines, which Impakt describes as “Where the Optimized Human Meets Artificial Empathy”.

Of the many powerful works in the show, my favorite was the 22-minute video, “Hyper Links or it Didn’t Happen,” by Cécile B. Evans. A failed CGI rendering of Philip Seymour Hoffman narrates fragmented stories of connection, exile and death. At one point, we see an “invisible woman” who lives on a beach and whose lover stays with her, after quitting a well-paying job. The video intercuts moments of odd narration by a Hoffman-AI. Spam bots and other digital entities surface and disappear. None of it makes complete sense, yet it somehow works and is absolutely riveting.

pseymore

After the exhibition opening, the crowd moved to Theater Kikker, where Michael Bell-Smith presented a talk/performance titled “99 Computer Jokes”. He spared the audience, telling us only one actual computer joke. Instead, he embarked on a discursive journey, covering topics of humor, glitch, skeuomorphs, repurposing technology and much more. Bell-Smith spoke with a voice of detached authority and made lateral connections to ideas from a multitude of places and spaces.

michael

In the first section of his talk, he described how successful art needs to have a certain amount of information — not too much, not too little, citing the words of arts curator Anthony Huberman:

“In art, what matters is curiosity, which in many ways is the currency of art. Whether we understand an artwork or not, what helps it succeed is the persistence with which it makes us curious. Art sparks and maintains curiosities, thereby enlivening imaginations, jumpstarting critical and independent thinking, creating departures from the familiar, the conventional, the known. An artwork creates a horizon: its viewer perceives it but remains necessarily distant from it. The aesthetic experience is always one of speculation, approximation and departure. It is located in the distance that exists between art and life.”

In the present time, when faith in technology has vastly overshadowed faith in art, these words are hyper-relevant. The Evans video accomplishes this, resting in the valley between the known and the uncertain. We recognize Hoffman and he is present, but in a semi-understandable, mutated form. We know that the real Philip Seymour Hoffman is dead. His ascension into a virtual space is fragmented and impure. The video suggests that traversing the membrane from the real into the screen space will forever distort the original. It triggers the imagination. It sticks with us in a way that stories do not.

What Bell-Smith alludes to in his talk is that the idea of combining the human and the machine won’t work…as expected. He sidesteps any firm conclusions. His performance is like the artwork that Huberman describes: it never reaches resolution and opens up a space for curiosity.

Later he displayed slides of Photoshop disasters, a sort of “Where’s Waldo” of Photoshop errata. Microseconds after viewing the advertisement below, we know something is off. The image triggers an uncanny response. A moment later we can name the problem: the model has only one leg. Primal perception precedes a categorical response. Finally, everyone laughs together at the idiosyncrasy that someone let into the public sphere.

leg

After Bell-Smith’s talk we had a chance to eat and drink. Hats off to the Impakt organization. I know I’m biased since I’m an artist-in-residence at Impakt during the festival itself, but they certainly know how to make everyone feel warm and cozy.
gala

Next up was the keynote speaker, Bruce Sterling, who is a science fiction writer and cultural commentator. He boldly took the stage without a laptop, so the audience had no slides or videos to bolster his arguments. He assumed the role of naysayer, deconstructing the very theme of the festival: Where the Optimized Human Meets Artificial Empathy. Defining the terms “cognition” (human) vs. “computation” (machine), he took the stance that the merging of the two was a categorical error in thinking. His example: birds can fly and drones can fly, but this doesn’t mean that drones can lay eggs. My mind raced, thinking that someday drone aircraft might reproduce. Would that be inconceivable?

Sterling tackled the notion of the Optimized Human with an analogy to Dostoyevsky’s Crime and Punishment. For those of you who don’t recall your required high school reading, the main character of the book is Raskolnikov, who is both brilliant and desperate for money. He carefully plans and then kills a morally bankrupt pawnbroker for her cash. The philosophical question that Dostoyevsky proposes is the idea of a superhuman: select individuals who are exempt from the prescribed moral and legal code. Could the murder of a terrible person be a justifiable act? And would the person to judge this be someone who is exceptionally bright, essentially leaving the rest of humanity behind?

In the book, the problem is that the social order gets disrupted. Raskolnikov’s action introduces a deadly, unpredictable element into his village. With uncertainty about the law and who executes it, no one feels safe. At the conclusion of the novel, Raskolnikov ends up in exile, in a sort of moral purgatory.

The very notion of the “optimized human” has similar problems. If select people are somehow “upgraded” through cybernetics, gene therapies and other technological enhancements, what happens to the social order? Sterling spoke about marketing, but I see the greater problem as one of leveraged inequality. If a minority of improved humans have integrated themselves with some sort of techno-futuristic advantages, our society rapidly escalates the classic problem of the digital divide. The reality is that this has already started happening. The future is here.

bruce

Bruce Sterling concluded with the point that we need to pay attention to how technology is leveraged. His example was Apple’s Siri system, albeit not a strong case of Artificial Empathy, which is owned by a company with specific interests. When asked for the nearest gas station or a recipe for grilled chicken, Siri “happily” responds. If you ask her how to remove the DRM encoding on a song in your iTunes library, Siri will be helpless. While I disagreed with a number of Sterling’s points in his talk, what I do know is that I would hope for a non-predictive future for my Artificial Empathy machines.

The Impakt Festival continues through the weekend with the full schedule here.


EquityBot goes live!

During my time at Impakt as an artist-in-residence, I have been working on a new project called EquityBot, which is an online commission from Impakt. It fits well into the Soft Machines theme of the festival: where machines integrate with the soft, emotional world.

EquityBot exists entirely as a networked art or “net art” project, meaning that it lives in the “cloud” and has no physical form. For those of you who are Twitter users, you can follow it on Twitter: @equitybot

01_large

What is EquityBot? Many people have asked me that question.

EquityBot is a stock-trading algorithm that “invests” in emotions such as anger, joy, disgust and amazement. It relies on a classification system of twenty-four emotions developed by the psychologist and scholar Robert Plutchik.

Plutchik-wheel.svg

How it works
During stock market hours, EquityBot continually tracks worldwide emotions on Twitter to gauge how people are feeling. In the simple data-visualization below, which is generated automatically by EquityBot, the larger circles indicate the more prominent emotions that people are Tweeting about.

At this point in time, just 1 hour after the stock market opened on October 28th, people were expressing emotions of disgust, interest and fear more prominently than others. During the course of the day, the emotions contained in Tweets continually shift in response to world events and many other unknown factors.

twitter_emotions

EquityBot then uses various statistical correlation equations to find patterns matching the changes in emotions on Twitter to fluctuations in stock prices. The details are thorny, so I’ll skip the boring stuff. My time did involve a lot of work with scatterplots, which looked something like this.

correlation

Once EquityBot sees a viable pattern, for example that “Google” is consistently correlated to “anger” and that anger is a trending emotion on Twitter, EquityBot will issue a BUY order on the stock.

Conversely, if Google is correlated to anger, and the Tweets about anger are rapidly going down, EquityBot will issue a SELL order on the stock.
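Stripped of the thorny details, that decision rule can be sketched roughly like this in Python. The correlation measure, threshold and window here are illustrative stand-ins, not EquityBot’s actual parameters.

```python
from statistics import mean

def pearson(xs, ys):
    """Plain Pearson correlation between two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def trade_signal(emotion_counts, stock_prices, threshold=0.7):
    """Sketch of the rule: if the stock tracks the emotion and the emotion is
    trending up on Twitter, buy; if the emotion is dropping off, sell."""
    r = pearson(emotion_counts, stock_prices)
    if abs(r) < threshold:
        return "HOLD"
    return "BUY" if emotion_counts[-1] > mean(emotion_counts) else "SELL"

# Example: hourly "anger" tweet counts vs. hourly prices for one stock
print(trade_signal([120, 140, 180, 220, 260], [531.0, 529.5, 527.0, 524.2, 521.8]))
```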

EquityBot runs a simulated investment account, seeded with $100,000 of imaginary money.

In my first few days of testing, EquityBot “lost” nearly $2000. This is why I’m not using real money!

Disclaimer: EquityBot is not a licensed financial advisor, so please don’t follow its stock investment patterns.

account

The project treats human feelings as tradable commodities. It will track how “profitable” different emotions are over the course of months. As a social commentary, I propose a future scenario in which just about anything can be traded, including that which is ultimately human: the very emotions that separate us from a machine.

If a computer cannot be emotional, at the very least it can broker trades of emotions on a stock exchange.

affect_performance

As a networked artwork, EquityBot generates these simple data visualizations autonomously (they will get better, I promise).

Its Twitter account (@equitybot) serves as a performance vehicle, where the artwork “lives”. All of these visualizations are also interactive and on the EquityBot website: equitybot.org.

I don’t know if there is a correlation between emotions in Tweets and stock prices. No one does. I am working with the hypothesis that there is some sort of pattern involved. We will see over time. The project goes “live” on October 29th, 2014, the day of the opening of the Impakt Festival, and I will let the first experiment run for 3 months to see what happens.

Feedback is always appreciated, you can find me, Scott Kildall, here at: @kildall.


Data-Visualizing + Tweeting Sentiments

It’s been a busy couple of weeks working on the EquityBot project, which will be ready for the upcoming Impakt Festival. Well, at least a functional prototype of my ongoing research project will be online for public consumption.

The good news is that the Twitter stream is now live. You can follow EquityBot here.

EquityBot now tweets images of data visualizations on its own; it is autonomous. I’m constantly surprised, and a bit nervous, about its Tweets.

exstasy_sentiment

At the end of last week, I put together a basic data visualization using D3, which is a powerful Javascript data-visualization tool.

Using code from Jim Vallandingham, in just one evening I created dynamically-generated bubble maps of Twitter sentiments as they arrive from EquityBot’s own sentiment analysis engine.

I mapped the colors directly from the Plutchik wheel of emotions, which is why they are still a little wonky; for example, the emotion of Grief is unreadable. This will be fixed.

I did some screen captures and put them on my Facebook and Twitter feeds. I soon discovered that people were far more interested in images of the data visualizations than in plain text describing the emotions.

I was faced with a geeky problem: how do I get my Twitterbot to generate images of data visualizations built with D3, a front-end Javascript library? I figured it out eventually, after stepping into a few rabbit holes.

Screen Shot 2014-10-21 at 11.31.09 AM

I ended up using PhantomJS, the Selenium web driver and my own Python management code to solve the problem. The biggest hurdle was getting Google webfonts to render properly. Trust me, you don’t want to know the details.
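The render-to-image step boils down to something like this sketch, assuming PhantomJS is installed and on the PATH. The URL and the crude sleep-based wait are placeholders; note that newer Selenium releases have since dropped the PhantomJS driver, but it worked at the time.

```python
import time

from selenium import webdriver

def snapshot_visualization(url, out_path="emotions.png"):
    """Load a D3 page in headless PhantomJS and save a PNG of the rendered chart."""
    driver = webdriver.PhantomJS()           # requires the phantomjs binary on the PATH
    try:
        driver.set_window_size(1024, 768)
        driver.get(url)
        time.sleep(3)                        # crude wait for D3 transitions and webfonts
        driver.save_screenshot(out_path)
    finally:
        driver.quit()

# Hypothetical local page serving the bubble map
snapshot_visualization("http://localhost:8000/emotions.html")
```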

Screen Shot 2014-10-21 at 11.31.29 AM


But I’m happy with the results. EquityBot will now move to other Tweetable data-visualizations such as its own simulated bank account, stock-correlations and sentiments-stock pairings.

Blueprint for EquityBot

For my latest project, EquityBot, I’ve been researching, building and writing code during my 2 month residency at Impakt Works in Utrecht (Netherlands).

EquityBot is going through its final testing cycles before a public announcement on Twitter. For those of you who are Bot fans, I’ll go ahead and slip you EquityBot’s Twitter feed: https://twitter.com/equitybot

The initial code-work has involved configuration of a back-end server that does many things, including “capturing” Twitter sentiments, tracking fluctuations in the stock market and running correlation algorithms.

I know, I know, it sounds boring. Often it is. After all, the result of many hours of work: a series of well-formatted JSON files. Blah.

But it’s like building city infrastructure: now that I have the EquityBot Server more or less working, it’s been incredibly reliable, cheap and customizable. It can act as a Twitterbot, a data server and a data visualization engine using D3.

This type of programming is yet another skill in my Creative Coding arsenal. It consists mostly of Python code that lives on a Linode server, a low-cost alternative to options like HostGator or GoDaddy, which incur high monthly costs. And there’s a geeky sense of satisfaction in creating a well-oiled software engine.

The EquityBot Server looks like a jumble of Python and PHP scripts. I cannot possibly explain it in excruciating detail, nor would anyone in their right mind want to wade through the technical details.

Instead, I wrote up a blueprint for this project.

ebot_server_diagram_v1

For those of you who are familiar with my art projects, this style of blueprint may look familiar. I adapted this design from my 2049 Series, which are laser-etched and painted blueprints of imaginary devices. I made these while an artist-in-residence at Recology San Francisco in 2011.

sniffer-blue

Water Works Final Report

Overview
Water Works is a project that I created for the Creative Code Fellowship in the Summer of 2014 with the combined support of Stamen Design, Autodesk and Gray Area.

Water Works is a 3D data visualization and mapping of the water infrastructure of San Francisco. The project is a relational investigation: I have been playing the role of a “Water Detective, Data Miner” and sifting through the web for water data. The results of this 3-month investigation are three large-scale 3D-printed sculptures, each paired with an interactive web map.

The final website lives here: http://www.waterworks.io/

sewer

Stamen Design is a small design studio that creates sophisticated mapping and data-visualization projects for the web. Combined with the amazing physical fabrication space at Pier 9 at Autodesk, this was a perfect combination of collaborative players for my own focus: writing algorithms that transform datasets into 3D sculptures and installations. I split my time between the two organizations and both were amazing, creative environments.

Gray Area provided the project guidance and coursework: 12 hours a week of Creative Code Immersive classes in topics ranging from Arduino to Node.js. About half of the classes were review for me, e.g. OpenFrameworks, Processing, Arduino, but Javascript, Node and more were completely new.

This report is heavy on images, partially because I want to document the entire process of how I created these 3D mapping-visualizations. As far as I know, I’m the first person who has undertaken this creative process: from mining city data to 3D-printing the infrastructure, which is geo-located on a physical map.

My directive from the start of the Water Works project was to somehow make visible what is invisible. This simple message is one that I learned while I was working as a New Media Exhibit Developer at the Exploratorium (2012-2013). It also aligns with the work that Stamen Design creates and so I was pleased to be working with this organization.

Starting Point
Underneath our feet is an urban circulatory system that delivers water to our households, removes it from our toilets, provides a reliable supply for firefighting, and ultimately purifies it and directs it into the bay and ocean. Most of us don’t think about this amazing system because we don’t have to — it simply works.

Like many others, I’m concerned about the California drought, which many climatologists think will persist for the next decade. I am also a committed urban-dweller and want to see the city I live in improve its infrastructure as it serves an expanding population. Finally, I undertook this project in order to celebrate infrastructure and to help make others aware of the benefits of city government.

drought

On a more personal note, I am fascinated by urban architecture. As I walk through the city, I constantly notice the markings on manholes, the various sign posts and different types of fire hydrants.

cistern_manhole

About a year ago, while working at the Exploratorium, I had several in-depth conversations with employees at the Department of Public Works about the possibility of mapping the sewer system. We discussed possibilities of producing a sewer map for the museum. For various reasons, the maps never came to fruition, but the data still rattled around my brain. All of the pipe and manhole data still existed. It was waiting to be mapped.

Three Water Systems of San Francisco
When I was awarded this Creative Code Fellowship in June of this year, I didn’t know very much about the San Francisco water system. I soon learned that the city has three separate sets of pipes that comprise the water infrastructure of San Francisco.

(1) Potable Water System — this is our drinking water, which comes from Hetch Hetchy. Some fire hydrants use this.

(2) Sewer System — San Francisco has a combined stormwater and wastewater system, which is nearly entirely gravity-fed. The water gets treated at one of the wastewater treatment plants. San Francisco is the only coastal California city with a combined system.

(3) Auxiliary Water Supply System (AWSS) — this is a separate system just for emergency fire-fighting. It was built in the years immediately following the 1906 Earthquake, when many of the water mains collapsed and most of the city proper was destroyed by fires. It is fed from the Twin Peaks Reservoir. San Francisco is the only city in the US that has such a system.

water_treatment

Follow the Data, Find the Story
From my previous work on Data Crystals, I learned that you have to work with the data you can actually get, not the data you want. In the first month of the Water Works project, this involved constant research and culling.

I worked with various tables of sewer data that the DPW provided to me. I discovered that the city had about 30,000 nodes (underground chambers with manholes) with 30,000 connections (pipes). This was an incredible dataset and it needed a lot of pruning, cleaning and other work, which I soon discovered was a daunting task.

Lesson #1: Contrary to popular belief, data is never clean.

What else was available? It was hard to say at first. I sent emails to the SFPUC asking for the locations of the drinking water pipes — just like what I had for the sewer data. I thought this would be incredible to represent. I approached the project with a certain naivety.

Of course, I shouldn’t have been surprised that this would be a security concern, but in no uncertain terms I received a resounding no from the SFPUC. This made sense, but it left me with only one dataset.

Given that there were three water systems, it made sense to create three 3D-printed visualizations, one from each system. If not the pipes, what would I use?

During one of my late-night research evenings, I found a good story: the San Francisco underground cisterns. According to various blogs, there are about 170 of these, and they are usually marked by a brick circle. What is underneath?

cistern_circle

In the 1850s, after a series of Great Fires in San Francisco tore through the city, 23 cisterns* were built. These smaller cisterns were all in the city proper, at that time between Telegraph Hill and Rincon Hill. They weren’t connected to any other pipes and the fire department intended to use them in case the water mains were broken, as a backup water supply.

They languished for decades. Many people thought they should be removed, especially after incidents like the 1868 Cistern Gas Explosion.

However, after the 1906 Earthquake, fires once again decimated the city. Many water mains broke and the neglected cisterns helped save portions of the city.

Afterward, the city passed a $5,200,000 bond and began building the AWSS in 1908. This included the construction of many new cisterns and the rehabilitation of other, neglected ones. Most of the new cisterns could hold 75,000 gallons of water. The largest one is underneath the Civic Center and has a capacity of 243,000 gallons.

The original ones, presumably rebuilt, hold much less, anywhere from 15,000 to 50,000 gallons.

* from the various reports I’ve read, this number varies.

old-cisternsmap

I searched for a map of all the cisterns, which proved difficult to find. There was no online map anywhere. I read that since these were part of the AWSS, they were refilled by the fire department. I soon began searching for fire department data and found a set of intersections, along with the volume of each cistern. The source was the SFFD Water Supplies Manual.

cisterdata

The story of the San Francisco Cisterns was to be my first of three stories in this project.

Autodesk also runs Instructables, a DIY, how-to-make-things website. I wrote an Instructable that details the mapping process, so if you want the specifics, have a look there.

To make this conversion happen, I wrote code in Python that called the Google Maps API to convert the intersections into lat/longs, as well as to get elevation data. When I had asked people how to do this, I received many GitHub links. Most of them were buggy or poorly documented. I ended up writing mine from scratch.

Lesson #2: Because GitHub is both a backup system for source code and an open-source sharing platform, many GitHub projects are confusing or useless.

That being said, here is my GitHub repo: SF Geocoder, which does this conversion. Caveat emptor.
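The core of the conversion is just two HTTP calls per intersection, roughly along these lines. This is an illustration of the approach rather than the code in the SF Geocoder repo, and the API key is a placeholder.

```python
import requests

API_KEY = "YOUR_GOOGLE_API_KEY"   # placeholder

def geocode_intersection(intersection, city="San Francisco, CA"):
    """Turn e.g. 'Hyde St & Green St' into (lat, lng, elevation in meters)."""
    geo = requests.get(
        "https://maps.googleapis.com/maps/api/geocode/json",
        params={"address": "{}, {}".format(intersection, city), "key": API_KEY},
    ).json()
    location = geo["results"][0]["geometry"]["location"]
    lat, lng = location["lat"], location["lng"]

    elev = requests.get(
        "https://maps.googleapis.com/maps/api/elevation/json",
        params={"locations": "{},{}".format(lat, lng), "key": API_KEY},
    ).json()
    elevation = elev["results"][0]["elevation"]
    return lat, lng, elevation

print(geocode_intersection("Hyde St & Green St"))
```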

Mapping the San Francisco Sewers
This was my second “story” within the Water Works project: to somehow represent the complex system that is underneath us. The details of the sewers are staggering. With approximately 30,000 manholes and 30,000 pipes that connect them, how do you represent, or even begin mapping, this?

And what was the story after all? It doesn’t quite have the unique character of the cisterns. But it does portray a complex system. Even the DPW hadn’t mapped this out in 3D space. I don’t know if any city ever has. This was the compelling aspect: making the physical model itself from the large dataset.

Building a 3D Modeling System
In addition to looking for data and sifting through the sewer data that I had, I spent the first few weeks building up a codebase in OpenFrameworks.

The only other possibility was using Rhino + Grasshopper, which is a software package I don’t know and not even an Autodesk product. Though it can handle algorithmic model-building, several colleagues were dubious that it could handle my large, custom dataset.

So, I built my own. After several days of work, I mapped out the nodes and pipes as you see below. I represented the nodes as cubes and pipes as cylinders — at least for the onscreen data visualization.

sewer-mapping

This is a closeup of the San Francisco bay waterfront. You can see some isolated nodes and pipes — not connected to the network. This is one example of where the data wasn’t clean. Since this is engineering data, there are all sorts of anomalies like virtual nodes, run-offs and more.

My code was fast and efficient since it was in C++. More importantly, I wrote custom STL exporters which empowered my workflow to go directly to a 3D printer without having to go through other 3D packages to clean up the data. This took a lot of time, but once I got it working, it saved me hours of frustration later in the project.
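My exporters were C++ inside OpenFrameworks, but the underlying trick is small enough to sketch in Python: write the triangles straight out as an ASCII STL so the geometry goes from code to printer with no intermediate package. A real pipe or node is tessellated into many more facets than this single example.

```python
def write_ascii_stl(path, triangles, name="sewer"):
    """Write triangles [(v0, v1, v2), ...] (each vertex an (x, y, z) tuple)
    directly to an ASCII STL file, skipping any intermediate 3D package."""
    with open(path, "w") as f:
        f.write("solid {}\n".format(name))
        for v0, v1, v2 in triangles:
            # Facet normals can be left at zero; most slicers recompute them.
            f.write("  facet normal 0 0 0\n    outer loop\n")
            for x, y, z in (v0, v1, v2):
                f.write("      vertex {:.4f} {:.4f} {:.4f}\n".format(x, y, z))
            f.write("    endloop\n  endfacet\n")
        f.write("endsolid {}\n".format(name))

# A single facet, just to show the format:
write_ascii_stl("node.stl", [((0, 0, 0), (1, 0, 0), (0, 1, 0))])
```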

seweremapping2

I also mapped out the Cisterns in 3D space using the same code. The Cisterns are disconnected in reality, but as a 3D print they need to be one cohesive structure. I modified the ofxDelaunay add-on (thanks, GitHub) to create cylindrical supports that link the cisterns together.

What you see here is an “editor”, where I could change the thickness of the supports, remove unnecessary ones and edit the individual cistern models to put holes in certain ones.

I also scaled the Cisterns according to their volume. The pre-1906 ones tend to be small, while the largest one, at Civic Center, is about 243,000 gallons, which is over 3 times the size of the standard post-earthquake 75,000-gallon cisterns.

OF-cisterns-nomap

Story #3: Imaginary Drinking Hydrants
In the same document that had the locations of all of the San Francisco Cisterns, I also found this gem: 67 emergency drinking hydrants for public use in a city-wide disaster.

Whoa, I thought, how interesting…

drinking_hydrants

I dug deeper and scouted out the intersections in person. I took some photos of the Emergency Drinking Hydrants. They have blue drops painted on them. You can even see them on Street View.

I found online news articles from several years ago, which discussed this program, introduced in 2006, also known as the Blue Drop Hydrant program.


blue_drop_man.jpg

And, I generated a web map, using Javascript and Leaflet.

imaginary-drnkinghydrants

I then published a link to the map onto my Twitter feed. It generated a lot of excitement and was retweeted by many sources.

twitt.jpg

The SFist — a local San Francisco news blog — ended up covering it. I was excited. I thought I was doing a good public service.

However, there was a backlash…of sorts. It turns out that the program was discontinued by the SFPUC. The organization did some quick publicity-control on their Facebook page and also contacted the SFist.

The writer of the article then issued a correction noting that the program had been discontinued, along with a press statement from the SFPUC.

press2.jpg

He also had this quote, which was a bit of a jab at me. “It had sounded like designer Scott Kildall, who had been mapping the the hydrants, had done a fair amount of research, but apparently not.”

In my defense, I re-researched the emergency drinking hydrants. Nowhere did it say that the program was discontinued. So, apparently the SFPUC quietly shuffled it out.

But later, I found that my map birthed a larger discussion. The SFPUC had this response, also printed later on SFist.

The key quote by Emergency Planning Director Mary Ellen Carroll is:

“When it comes to sheltering after a emergency, we don’t tell people ahead of time, ‘This is where you’ll need to go to find shelter after an earthquake’ because there’s no way to know if that shelter will still be there.”

It makes sense that central gathering locations could be a bad idea. Imagine a gas leak or something similar at one of these locations. So a water distribution plan would have to be improvised according to the disaster.

We do know from various news articles and from my own photographs that there was not only a map, but physical blue drops painted on the hydrants, in addition to a large publicity campaign. The program supposedly cost 1 million dollars, so that would have been an expensive map.

The SFPUC never pulled the old maps from their website, nor did they inform the public that the blue drop hydrants were discontinued.

I blame it on general human miscommunication. And after visiting the SFPUC offices towards the end of my Water Works project, I’m entirely convinced that this is a progressive organization with smart people. They’re doing solid work.

But I had to rethink my mapping project, since these hydrants no longer existed.

When faced with adverse circumstances, at least in the area of mapping and art, you must be flexible. There’s always a solution. This one almost rhymes with Emergency — Imaginary.

Instead of hydrants for emergency drinking water, I ask the question: could we have a city where we could get tap water from these hydrants at any time? What if the water were recycled water?

They could have a faucet handle on them, so you could fill up your bottle when you get thirsty. More importantly, these hydrants could be a public service.

It’s probably impractical in the short term, but I love the idea of reusing the water lines for drinking lines — and having free drinking water in the public commons.

So, I rebranded this map and designed a hydrant with a drinking faucet attached to it. This would be the base form used for the maps.


Creating Mini Models
I wanted to strike a balance in this data-visualization and mapping project between aesthetics and legibility. With the data sets I now had and the C++ code that I wrote, I could geolocate cisterns, hydrants and sewer lines.

These would be connected by support structures in the case of the cisterns and hydrants, and by pipe data for the sewers.

I decided that the actual data points would be miniature models, which I designed in Fusion 360 with the help of Autodesk guru, Taylor Stein. The first one I created was the Cistern model.

cisterns-fusion360

I went through several iterations to come up with this simple model. The design challenge was to come up with a form that looked like it could be an underground tank, but that didn’t bring up other associations. In this case, without the three rectangular stubby pieces, it looks like a tortilla holder.

After a day of design and 3D print tests, I settled on this one.

cistern-model

And here you can see the outputs of the cisterns and the hydrants in MeshLab.

meshlab-cisterns

Here is the underside of the hydrant structure, where you can see the holes in the hydrants, which I use later for creating the final sculpture. These are drill holes for mounting the final prints on wood.

meshlab-hydrants-underneath

The manhole chamber design was the hardest one to figure out. This one is more iconographic than representational. Without some sort of symmetry, the look of the underground chamber didn’t resonate. I also wanted to provide a manhole cover on top of the structure. The flat bottom distinguishes it from the pipes.

manhole

Mapping and Legibility

stamen

One of my favorite aspects of being at Stamen was that four days a week, they provided lunch for us. We all ate lunch together. This was a good chunk of unstructured time to talk about mapping, music, personal life, whatever.

We solidified bonds — so often shared lunch is overlooked in organizations. In addition to informal discussion of the project, we also had a few creative brainstorm sessions, where I would present the progress of the project and get feedback from several people at Stamen. Folks from Autodesk and Gray Area also joined the discussion.

I hadn’t considered situating these on a map before, but they suggested integrating one. Quickly, the idea was born that I should geolocate the prints on top of a physical map. This was a brilliant direction for the project.

OF-imaginaryhydrants-map

Stamen provided me with a high-resolution map that I could laser-etch, which came later, after the 3D printing. Now, with this direction for the project, I started making the actual 3D prints.

map-for-etching

Mega-prints with lots of cleaning
After all the mapping, arduous data-smoothing and tests upon structural tests, I was finally ready to spool off the large-scale 3D prints. Each print was approximately the size of the Objet500 print bed: 20″ x 16″, making these huge. A big thanks to Autodesk for sponsoring the work and providing the machines.

Each print took between 40 and 50 hours of machine time, so I sent these out as weekend-long jobs. Time and resources were limited, so this was a huge endeavor.

cisterns-buildtime

I was worried that the prints would fail, but I got lucky in each case. The prints are a combination of resin materials: VeroClear and VeroWhite (for the Cisterns and Hydrants) and mixes of VeroWhite and VeroBlack for the Sewers.

support-cisterns-far

When the prints come off the print bed, they are encased in a support material, which I first scraped off; then I used a high-pressure water system to spray off the rest.
cleaning-cistern

It took hours upon hours to get from this.

sewerworks

To this: a fully cleaned version of the Sewer print. This 3D print is of a section of the city: the Embarcadero area, which includes the Pier 9 facility where Autodesk is located.

For the Sewer Works print, the manhole chambers and pipes are scaled to the sizes in the data tables. I increased the elevation about 3 times to capture the hilly terrain of San Francisco. What you see here is an aerial view, as if you were in a helicopter flying from Oakland to San Francisco. The diagonal is Market Street, ending at the Ferry Building. On the right side, towards the back of the print, is Telegraph Hill. There are large pipes and chambers along the Embarcadero. Smaller ones comprise the sewer system in the hilly areas.
sewerworks-3d

Map-Etching and Final Fabrication
I’ll just summarize the final fabrication — this blog post is already very long. For more details, you can read this Instructable on how I did the fabrication work.

Using cherry wood, which I planed, jointed and glued together, I laser-etched these maps, which came out beautifully.

I chose wood both because of its beautiful finish and because the material references the wooden Victorian and Edwardian houses that define the landscape of San Francisco. The laser-etching burns away the wood, like the fires after the 1906 Earthquake, which spawned the AWSS water system.

_MG_7318

The map above is the waterfront area for the Sewer Works print and the one below is the full map of the city that I used as the base for the San Francisco Cisterns and the Imaginary Drinking Hydrants sculptures.

_MG_7316

The last stages of the woodwork involved traditional fabrication, which I did at the Autodesk facilities at Pier 9.

_MG_7314

I drilled out the holes for mounting the final 3D prints on the wood bases and then mounted them on 1/16″ stainless rods, such that they float about 1/2″ above the wood map.

_MG_7330

And the final stage involved manually fitting the prints onto the rods.

_MG_7335

Final Results
Here are the three prints, mounted on the wood-etched maps.

Below are the Imaginary Drinking Hydrants. This was the most delicate of the 3D prints.

06_large

These are the San Francisco Cisterns, which are concentrated in the older parts of San Francisco. They are nearly absent from the western part of the city, which became densely populated well after the 1906 Earthquake.

02_large

This is the Sewer Works print. The map is not as visible because of the density of the network. The pipes are a light gray and the manhole chambers a medium gray. The map does capture the extensive network of manmade piers along the waterfront.

03_large

The Website: San Francisco Cisterns and Imaginary Drinking Hydrants
The website for this project is waterworks.io. It has three interactive web maps, one for each of the three water systems.

The aforementioned Instructable, Mapping San Francisco Cisterns, details how I made these. The summary is that I did a lot of data-wrangling, often using Python to transform the data into GeoJSON files, a web-mappable format.
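The conversion itself is small. Here is a sketch of the cistern case, assuming each record already carries a name, lat/lng and volume; the field names are illustrative, not the ones in my actual pipeline.

```python
import json

def cisterns_to_geojson(cisterns, path="cisterns.geojson"):
    """cisterns: list of dicts like {"name": ..., "lat": ..., "lng": ..., "gallons": ...}"""
    features = [
        {
            "type": "Feature",
            # GeoJSON coordinates are [longitude, latitude]
            "geometry": {"type": "Point", "coordinates": [c["lng"], c["lat"]]},
            "properties": {"name": c["name"], "gallons": c["gallons"]},
        }
        for c in cisterns
    ]
    with open(path, "w") as f:
        json.dump({"type": "FeatureCollection", "features": features}, f, indent=2)

cisterns_to_geojson([
    {"name": "Civic Center", "lat": 37.7793, "lng": -122.4193, "gallons": 243000},
])
```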

The Stamen designer-technicians were invaluable in directing me to the path of Leaflet, an easy-to-use mapping interface. I struggled with it for a while, as I was a complete newbie to Javascript, but eventually sorted out how to create maps and customize the interactive elements.

Fortunately, I also received help from the designers at Stamen on the graphics. I only have so many skills, and graphic design is not one of them.

cisternsmapping

The Website: Life of Poo
Leaflet’s performance bogged down when I had more than about 1,500 markers, and the sewer system has about 28,000.

I spent a lot of energy on node-trimming, using a combination of Python and Java code, and winnowed the count down to about 1,500. The consolidated node list was based on distance and used various techniques to map a small set of nodes in a cohesive way.
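The trimming was essentially clustering nearby nodes by distance. Here is a simplified sketch of the idea; the threshold is a stand-in, and the real pass also had to preserve pipe connectivity.

```python
def consolidate_nodes(nodes, min_dist=0.002):
    """Greedily keep a node only if it is at least min_dist (in degrees)
    away from every node already kept."""
    kept = []
    for lat, lng in nodes:
        if all((lat - klat) ** 2 + (lng - klng) ** 2 >= min_dist ** 2
               for klat, klng in kept):
            kept.append((lat, lng))
    return kept

# Roughly 28,000 raw manhole locations in, ~1,500 out (threshold tuned by eye).
```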

lifeofpoo

In the hours just before presenting the project, I finished Life of Poo: an interactive journey of toilet waste.

On the website, you can enter a San Francisco address or intersection, such as “Twin Peaks, SF” or “47th & Judah, SF”, into Life of Poo and then press Flush Toilet.

This will begin an animated poo journey down the sewer map and to the wastewater treatment plant.
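Under the hood, the journey is just a walk over the sewer graph. Here is a simplified sketch, assuming a dictionary of node-to-downstream-neighbor connections and ignoring pipe slope entirely (the node IDs are hypothetical).

```python
from collections import deque

def route_to_plant(graph, start, treatment_plants):
    """Breadth-first search from the flushed node to the nearest treatment plant.
    graph: {node_id: [downstream node_ids]}."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node in treatment_plants:
            return path                    # the animated poo follows this path
        for nxt in graph.get(node, []):
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None                            # disconnected node: the poo just sits there

print(route_to_plant({"A": ["B"], "B": ["C"]}, "A", {"C"}))
```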

Not all of the flushes work as you’d expect. There are still glitches and bugs in the code. If you type in “16th & Mission”, the poo just sits there.

Why do I have the bugs? I have some ideas (see below), but I really like the chaotic results, so I will keep them for now.

Lesson 3: Sometimes you should sacrifice accuracy.

Future Directions
I worked very, very hard on this project and I’m going to let it rest for a while. There’s still some work to do, which I would like to get to some day.

Cistern Map
I’d like to improve the Cistern Map as I think it has cultural value. As far as I know, it’s the only one on the web. The locations are derived from intersection data and, while close, are not entirely correct; sometimes a location is off by a block or so. I don’t think this affects the integrity of the 3D map, but it would be important to correct for the web portion.

Life of Poo
I want to see how this interactive map plays out and how people respond to it in the next couple of months. The animated poo is universally funny but it doesn’t behave “properly”. Sometimes it gets stuck. This was the last part of the Water Works project and one that I got working the night before the presentation.

I had to do a lot of node-trimming to make this work — Leaflet can only handle about 1500 data points before it slows down too much, so I did a lot of trimming from a set of about 28,000. This could be one source of the inaccuracies.

I don’t take gravity into account in the flow calculations, which I think is why the poo behaves oddly. But maybe the map is more interesting this way. It is, after all, an animated poo emoji.

Infrastructure Fabrication
This is where the project gets very interesting. What I’ve been able to accomplish with the “Sewer Works” print is to show how the sewer pipes of San Francisco look as a physical manifestation. This is only the beginning of many possibilities. I’d be eager to develop this technology and modeling system further, taking the usual GIS maps and translating them into physical models.

Thanks for reading this far and I hope you enjoyed this project,
Scott Kildall

EquityBot: Capturing Emotions

In my ongoing research and development of EquityBot — a stock-trading bot* with a philanthropic personality, which is my residency project at Impakt Works — I’ve been researching various emotional models for humans.

The code I’m developing will try to make correlations between stock prices and group emotions on Twitter. It’s a daunting task and one where I’m not sure what the signal-to-noise ratio will be (see disclaimer). As an art experiment, I don’t know what will emerge from this, but it’s geeky and exciting.

In the last couple weeks, I’ve been creating a rudimentary system that will just capture words. A more complex system would use sentiment analysis algorithms. My time and budget are limited, so phase 1 will be a simple implementation.

I’ve been looking for some sort of emotional classification system. There are several competing models (of course).

My favorite is the Plutchik Wheel of Emotions, which was developed in 1980. It has a symmetrical look to it and apparently is deployed in various AI systems.

 

Plutchik-wheel.svg

Other models, such as the Lövheim cube of emotion, are more recent and seem compelling at first. But the cube is missing something critical: sadness or grief. Really? This is such a basic human emotion, and when I saw it was absent, I tossed the cube model.

1280px-Lövheim_cube_of_emotion

Back to the Plutchik model…my “Twitter bucket” captures certain words from the color wheel above. I want enough words for a reasonable statistical correlation (about 2000 tweets/hour), but too many of one word will strain my little Linode server. For example, the word “happy” is a no-go since there are thousands of Tweets with that word each minute.

Many people tweet about anger by just using the word “angry” or “anger”, so that’s an easy one. Same thing goes with boredom/boring/bored.

For other words, like apprehension, I need to go synonym-hunting. The Twitter stream with this word is just a trickle, so I’ve mapped it to “worry” or “anxiety”, which show up more often in tweets. It’s not quite correct, but reasonably close.
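A minimal sketch of what this word-bucketing looks like in Python; the word lists and emotion labels here are illustrative stand-ins, not my actual mapping:

```python
# Illustrative buckets that map tweet keywords to Plutchik-style emotions.
EMOTION_WORDS = {
    "anger": ["anger", "angry", "furious"],
    "fear": ["worry", "anxiety", "apprehension"],
    "sadness": ["sad", "grief", "heartbroken"],
    "boredom": ["bored", "boring", "boredom"],
}

def classify(tweet_text):
    """Return the set of emotions whose keywords appear in the tweet."""
    words = tweet_text.lower().split()
    return {emotion for emotion, keywords in EMOTION_WORDS.items()
            if any(k in words for k in keywords)}

print(classify("so much worry about tomorrow and i am bored"))  # -> fear and boredom
```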

The word “terror” has completely lost its meaning and now only shows up in political discourse. I’m still trying to figure out a good synonym-map for terror: terrifying, terrify, terrible? It’s not quite right. There’s not a good word to represent that feeling of absolute fear.

This gets tricky and I’m walking into the dark valley of linguistics. I am well-aware of the pitfalls.

Screen Shot 2014-10-01 at 3.18.33 PM

 

* Disclaimer:
EquityBot doesn’t actually trade stocks. It is an art project intended for illustrative purposes only, and is not intended as actual investment advice. EquityBot is not a licensed financial advisor. It is not, and should not be regarded as, investment advice or a recommendation regarding any particular security or course of action.

 

A Starting Point: Distributed Capital

I’m doing more research on EquityBot — the project for my Impakt Works residency, which I just started a couple of days ago.

EquityBot is a stock-trading algorithm that explores the connections between collective emotions on social media and financial speculation. It will be presented at the Impakt Festival at the end of October.

It will also include a sculptural component (presented post-festival), which is the more experimental form.

Many of you are familiar with Paul Baran’s work on designing a distributed network, but many others may not be. Working at the RAND Corporation, he determined that a centralized communications network would be vulnerable to attack and suggested that the United States use a distributed network instead.
baran

Interestingly, there is a widespread myth that the Internet, derived from ARPANET, was designed to withstand a nuclear attack using this model. This isn’t the case; the architects of the internet transmission protocol simply heard of RAND’s work and adapted it for packet switching. Yet the myth persists.

On a side note, perhaps military technology could be useful for the public good. If only we could declassify the technology, like Baran did.

The distributed network reminds me of a 3D polygon mesh. I think this could be a good source for a 3D data-visualization: Distributed Capital. I’ll research this more in the future.

But EquityBot isn’t about networks in the formal sense; it is a project about constructing a predictive model of stock changes, based on the idea that Twitter sentiments correlate with fluctuations in stock prices.

Screen Shot 2014-09-17 at 6.08.23 AM
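To make the idea concrete, here is a minimal sketch of the kind of correlation test I have in mind, assuming I already have an hourly count of an emotion word and an aligned series of hourly stock prices (the file names are placeholders):

```python
import numpy as np

# Hypothetical inputs: one value per hour, aligned and equal in length.
sentiment = np.loadtxt("anger_counts.csv")
prices = np.loadtxt("stock_prices.csv")

returns = np.diff(prices) / prices[:-1]           # hourly returns
corr = np.corrcoef(sentiment[1:], returns)[0, 1]  # Pearson correlation
print("sentiment vs. returns correlation:", corr)
```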

Do I know there is a correlation? Not yet, but I think there is a good possibility. One of my reading sources, The Computational Beauty of Nature, sums up the value of simulated models in its introduction. The predictive model might fail in its results but it will likely reveal a greater truth in the economic system that it is trying to predict. Thus, knowing the uncertainty ahead of time will provide a sense of certainty. EquityBot may not “work” but then again, it may.

compbeautyofnature

My source of dissent is the excellent book, The Signal and The Noise: Why So Many Predictions Fail — but Some Don’t by Nate Silver. After reading it last summer, I was convinced that any predictive analysis would simply be noise. I was disheartened and halted the EquityBot project (previously called Grantbot) for many months.

la-ca-nate-silver

However, now I’m not so sure. It seems likely that people’s moods would affect financial decisions, which in turn would affect stock prices. With studies such as this one by Vagelis Hristidis, which found some correlation between Twitter chatter and stock prices, I think there is something to this, which is why I’ve revisited the EquityBot project.

I’ll follow the Buddhist maxim with this project and embrace its uncertainty.

 

Life of Poo

I’ve been blogging about my Water Works project all summer and after the Creative Code Gray Area presentation on September 10th, the project is done. Phew. Except for some of the residual documentation.

In the hours just before my presentation, I also managed to get Life of Poo working. What is it? Well, an interactive map of where your poo goes, based on the sewer data that I used for this project.

Huh? Try it.

Screen Shot 2014-09-16 at 6.42.06 AM

This is the final piece of my web-mapping portion of Water Works and uses Leaflet with animated markers, all in Javascript, which is a new coding tool in my arsenal (I know, late to the party). I learned the basics in the Gray Area Creative Code Immersive class, which was provided as part of the fellowship.

The folks at Stamen Design also helped out and their designer-technicians turned me onto Leaflet as I bumbled my way through Javascript.

How does it work?

On the Life of Poo section of the Water Works website, you enter an address (in San Francisco), such as “Twin Peaks, SF” or “47th & Judah, SF”, and then press Flush Toilet.

This will begin an animated poo journey down the sewer map and to the wastewater treatment plant.

Screen Shot 2014-09-16 at 6.50.17 AM

Not all of the flushes work as you’d expect. There are still glitches and bugs in the code. If you type in “16th & Mission”, the poo just sits there. Hmmm.

Why the bugs? I have some ideas (see below), but I really like the chaotic results, so I’ll keep them for now.

Screen Shot 2014-09-16 at 6.54.32 AM

 

I think the erratic behavior is happening because of a utility I wrote, which does some complex node-trimming and doesn’t take gravity into account in its flow calculations. The sewer data has about 30,000 valid data points, and Leaflet can only handle about 1500 or so before it takes forever to load and refresh.

The utility I wrote parses the node data tree and recursively prunes it to a more reasonable number, combining upstream and downstream nodes. In an overflow situation, technically speaking, there are nodes where waste might be directed away from the waste-water treatment plant.

However, my code isn’t smart enough to determine which are overflow pipes and which are pipes to the treatment plants, so the node-flow doesn’t work properly.
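For illustration, here is a stripped-down sketch of the downstream walk the animation relies on, assuming each node stores a single downstream neighbor. The real network has branches and overflow pipes, and my pruning ignores gravity, which is exactly where it goes wrong:

```python
def flush_path(start, downstream, max_steps=10000):
    """Follow downstream links from a starting node until a node has no
    outgoing pipe (ideally the treatment plant) or we give up."""
    path = [start]
    node = start
    for _ in range(max_steps):
        nxt = downstream.get(node)
        if nxt is None:   # dead end: plant, outfall, or a pruning bug
            break
        path.append(nxt)
        node = nxt
    return path

# Hypothetical toy network: A -> B -> C (treatment plant)
print(flush_path("A", {"A": "B", "B": "C"}))  # ['A', 'B', 'C']
```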

In case you’re still reading, here’s an illustration of a typical combined system that shows how the pipes might look. A sewer outfall doesn’t happen very often, but when your model ignores gravity, it sure will.

CombineWasteWaterOverflow

The 3D print of the sewer, the one that uses the exact same data set as Life of Poo, looks like this.

sewerworks_front sewerworks_top

EquityBot @ Impakt

My exciting news is that this fall I will be an artist-in-residence at Impakt Works, which is in Utrecht, the Netherlands. The same organization puts on the Impakt Festival every year, which is a media arts festival that has been happening since 1988. My residency is from Sept 15-Nov 15 and coincides with the festival at the end of October.

Utrecht is a 30-minute train ride from Amsterdam and 45 minutes from Rotterdam. By all accounts it is a small, beautiful canal city with medieval origins, and it hosts the largest university in the Netherlands.

Of course, I’m thrilled. This is my first European art residency and I’ll have a chance to reconnect with some friends who live in the region as well as make many new connections.

impakt; utrecht; www.impakt.nl

The project I’ll be working on is called EquityBot and will premiere at the Impakt Festival in late October as part of their online component. It will have a virtual presence like my Playing Duchamp artwork (a Turbulence commission) and my more recent project, Bot Collective, produced while an artist-in-residence at Autodesk.

Like many of my projects this year, this will involve heavy coding, data-visualization and a sculptural component.

equity_bot_logo

At this point, I’m in the research and pre-production phase. While configuring back-end server code, I’m also gathering reading materials about capital and algorithms for the upcoming plane rides, train rides and rainy Netherland evenings.

Here is the project description:

EquityBot

EquityBot is a stock-trading algorithm that explores the connections between collective emotions on social media and financial speculation. Using custom algorithms Equitybot correlates group sentiments expressed on Twitter with fluctuations in related stocks, distilling trends in worldwide moods into financial predictions which it then issues through its own Twitter feed. By re-inserting its results into the same social media system it draws upon, Equitybot elaborates on the ways in which digital networks can enchain complex systems of affect and decision making to produce unpredictable and volatile feedback loops between human and non-human actors.

Currently, autonomous trading algorithms comprise the large majority of stock trades. These analytic engines are normally sequestered by private investment companies operating with billions of dollars. EquityBot reworks this system, imagining what it might be like if this technological attention were directed towards the public good instead. How would the transparent, public sharing of powerful financial tools affect the way the stock market works for the average investor?

kildall_bigdatadreams

I’m imagining a digital fabrication portion of EquityBot, which will be the more experimental part of the project and will involve 3D-printed joinery. I’ll be collaborating with my longtime friend and colleague, Michael Ang, on the technology — he’s already been developing a related polygon construction kit — as well as doing some idea-generation together.

“Mang” lives in Berlin, which is a relatively short train ride, so I’m planning to make a trip where we can work together in person and get inspired by some of the German architecture.

My new 3D printer — a Printrbot Simple Metal — will accompany me to Europe. This small, relatively portable machine produces decent quality results, at least for 3D joints, which will be hidden anyway.

printrbot

WaterWorks: From Code to 3D Print

In my ongoing Water Works project — a Creative Code Fellowship with Stamen Design, Gray Area and Autodesk — I’ve been working for many, many hours on code and data structures.

The immediate results were a Map of the San Francisco Cisterns and a Map of the “Imaginary Drinking Hydrants”.

However, I am also making 3D prints — fabricated sculptures, which I map out in 3D space with custom software and then 3D print.

The process has been arduous and I’ve learned a lot. I’m not sure I’d do it this way again, since I ended up writing a lot of custom code to do things like triangle-winding for STL output, and much, much more.
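To show what “triangle-winding” means in practice, here is a rough Python sketch of writing one ASCII STL facet with right-hand-rule winding. My actual code is C++ inside OpenFrameworks; this is just the idea:

```python
import numpy as np

def facet_stl(v0, v1, v2):
    """One ASCII STL facet. Vertices are listed counter-clockwise as seen
    from outside the surface, so the cross product yields an outward normal
    (the right-hand rule)."""
    v0, v1, v2 = map(np.asarray, (v0, v1, v2))
    n = np.cross(v1 - v0, v2 - v0)
    n = n / np.linalg.norm(n)
    lines = ["  facet normal {:e} {:e} {:e}".format(*n), "    outer loop"]
    for v in (v0, v1, v2):
        lines.append("      vertex {:e} {:e} {:e}".format(*v))
    lines += ["    endloop", "  endfacet"]
    return "\n".join(lines)

print("solid demo\n" + facet_stl((0, 0, 0), (1, 0, 0), (0, 1, 0)) + "\nendsolid demo")
```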

Here is how it works. First, I create a model in Fusion 360 — an Autodesk application — which I’ve slowly been learning and have become fond of.

Screen Shot 2014-08-21 at 10.12.47 PM

From various open datasets, I map out the locations of the hydrants or the cisterns in X,Y space. You can check out this Instructable on mapping the cisterns and this blog post on mapping the hydrants for more info. Using OpenFrameworks — an open-source C++ toolkit — I map these out in 3D space, with the elevation on the Z-axis.

The hydrants and cisterns are both disconnected entities in 3D space. They’d fall apart as a 3D print, so I use Delaunay triangulation code to connect the nodes into a single 3D shape.
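My implementation is in C++, but the same technique is a few lines in Python with SciPy, shown here as a sketch of the idea rather than the project code:

```python
import numpy as np
from scipy.spatial import Delaunay

# Hypothetical hydrant positions: columns are x, y, elevation.
points = np.random.rand(50, 3)

tri = Delaunay(points)      # 3D Delaunay triangulation yields tetrahedra
print(tri.simplices.shape)  # (n_tetrahedra, 4) indices into `points`
# Each tetrahedron's triangular faces can then be written out as STL facets.
```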

Screen Shot 2014-08-21 at 10.07.59 PM

I designed my custom software to export a ready-to-print set of files in an STL format. My C++ code includes an editor which lets you do two things:

(1) specify which hydrants are “normal” hydrants and which ones have mounting holes in the bottom. The green ones have mounting holes, which are different STL files. I will insert 1/16″ stainless steel rods into the mounting holes and have the 3D prints “floating” on a piece of wood or some other material.

(2) my editor will also let you remove and strengthen each Delaunay triangulation node — the red one is the one currently connected. This is the final layout for the print, but you can imagine how criss-crossed and hectic the original one was.

Screen Shot 2014-08-21 at 10.08.44 PM

Here is an exported STL in Meshlab. You can see the mounting holes at the bottom of some of the hydrants.
Screen Shot 2014-08-21 at 10.20.13 PM

I ran many, many tests before the final 3D print.

imaginary_drinking_faucets

And finally, I set up the print over the weekend. Here is the print 50 hours later.
on_the_tray

It’s like I’m holding a birthday cake — I look so happy. This is at midnight last Sunday.

scott_holding_tray

The cleaning itself is super-arduous.

scott_cleaning

And after my initial round of cleaning, this is what I have.

hydrats_rough

And here are the cistern prints.

cisterns_3d

I haven’t yet mounted these prints, but this will come soon. There’s still loads of cleaning to do.

 

SFPUC says Emergency Drinking Hydrants Discontinued

Last week, I posted an online map of the 67 Emergency Drinking Water Hydrants in San Francisco. It was covered by SFist and got a lot of retweets and attention.

I felt a semblance of pride in being a “citizen-mapper” and helping the public in case of a dire emergency. I wondered why these maps weren’t more public. I had located the emergency hydrant data from a couple of different places, but nowhere very visible.

Apparently, these hydrants are not for emergency use after all. Who knew? Nowhere could I find a place that said they were discontinued.

Last Friday, the SFPUC contacted SFist and issued this statement (forwarded to me by the reporter, Jay Barmann):

————————————

The biggest concern [about getting emergency water from hydrants] is public health and safety. First of all, tapping into a hydrant is dangerous as many are high pressure and can easily cause injury. Some are very high pressure! Second, even the blue water drop hydrants from our old program in 2006 (no longer active) can be contaminated after an earthquake due to back flow, crossed lines, etc. We absolutely do not want the public trying to open these hydrants and they could become sick from drinking the water. They could also tap a non-potable hydrant and become sick if they drink water for fire-fighting use. After an earthquake, we have water quality experts who will assess the safety of hydrants and water from the hydrants before providing it to the public.

AND of course, no way should ANYONE be opening hydrants except SFFD and SFWD; if people are messing with hydrants, this could de-pressurize the system when SFFD needs the water pressure to fight fires, and also will be a further distraction for emergency workers to monitor.

We are in the process of updating our emergency water program… We are also going to be training NERT teams to help assess water after an emergency.

————————————

Uh-oh. Jay wrote: “It had sounded like designer Scott Kildall, who had been mapping the hydrants, had done a fair amount of research, but apparently not.”

Was I lazy or over-excited? I don’t think so. I re-scoured the web; nowhere did I find a reference to the Blue Drop Hydrant Program being discontinued.

My references were these two PDFs (links may be changed by municipal agencies after this post).

PDF Map on the SFPUC website

pdf_map

Water Supplies Manual from the San Francisco Fire Department 

supplies_manual

 

** I have some questions **
(1) Since nowhere on the web could I find a reference to this program being discontinued, why are these maps still online? Why didn’t the SFPUC make a public announcement that this program was being discontinued? It makes me look bad as a Water Detective and Data Miner, but more importantly, there may have been other people relying on these hydrants. Perhaps.

(2) Why are there still blue drops painted on some of these hydrants? Shouldn’t the SFPUC have repainted all of the blue drop hydrants white to signal that they are no longer in use?

(3) Why did our city spend 1 million dollars several years ago (2006) to set up these emergency hydrants in the first place when they weren’t maintainable? The SFPUC statement says: “even the blue water drop hydrants…can be contaminated after an earthquake due to back flow, crossed lines, etc.”

Did something change between 2006 and 2014? Wouldn’t these lines have always been susceptible to backflow, crossed lines, etc. when this program was initiated? 1 million bucks is a lot of money!

(4) Finally, the most pressing question: why don’t we have emergency drinking hydrants or some other centralized system?

I *love* the idea of people going to central spots in their neighborhood in case they don’t have access to drinking water. Yes, we should have emergency drinking water in our homes. But many people haven’t prepared. Or maybe your basement will collapse and your water will be unavailable. Or maybe you’ll be somewhere else: at work, at a restaurant, who knows?

Look, I’m a huge supporter of city government and want to celebrate the beautiful water infrastructure of San Francisco with my Water Works project, part of the Creative Code Fellowship with Stamen Design, Gray Area and Autodesk. The SFPUC does very good work. They are very drought-conscious and have great info on their website in general.

It’s unfortunate that these blue drop hydrants were discontinued.

It was a heartening tale of urban planning. I wish the SFPUC had contacted me directly instead of the person who wrote the article. I plan to update my map accordingly, perhaps stating that this is a historical map of sorts.

By the way, you can still see the blue drop hydrants on Street View:

blue_drop_man

And here’s the Facebook statement by SFPUC — hey, I’m glad they’re informing the public on this one!

wrench_blog_post

Mapping Emergency Drinking Water Hydrants

Did you know that San Francisco has 67 fire hydrants that are designed for emergency drinking water in case of an earthquake-scale disaster? Neither did I. That’s because just about no one knows about these hydrants.

While scouring the web for cistern locations — as part of my Water Works project*, which will map out the San Francisco water infrastructure and data-visualize the physical pipes and structures that keep the H2O moving in our city — I found this list.

I became curious.

67_drinkingfountains

I couldn’t find a map of these hydrants *anywhere* — except for an odd Foursquare map that linked to a defunct website.

I decided to map them myself, which was not terribly difficult to do.

Since Water Works is a project for the Creative Code Fellowship with Stamen Design, Gray Area and Autodesk, and I’m collaborating with Stamen, mapping is essential for this project. I used Leaflet and Javascript. It’s crude but it works — the map does show the locations of the hydrants (click on the image to launch the map).

The map will get better, but at least this will show you where the nearest emergency drinking hydrant is to your home.

map_link

Apparently, these emergency hydrants were developed in 2006 as part of a 1 million dollar program. These hydrants are tied to some of the most reliable drinking water mains.

Yesterday, I paid a visit to three hydrants in my neighborhood. They’re supposed to be marked with blue drops, but only 1 out of the 3 was properly marked.

Hydrant #46: 16th and Bryant, no blue drop

IMG_0022

Hydrant #53, Precita & Folsom, has a blue drop

IMG_0016

Hydrant #51, 23rd & Treat, no blue drop, with decorative sticker

IMG_0011

Editor’s note: I had previously talked about buying a fire hydrant wrench for a “just in case” scenario*. I’ve retracted this suggestion (by editing this blog entry).

I apologize for this suggestion: No, none of us should be opening hydrants, of course. And I’m not going to actually buy a hydrant wrench. Neither should you, unless you are SFFD, SFWD or otherwise authorized.

Oh yes, and I’m not the first to wonder about these hydrants. Check out this video from a few years ago.

* For the record, I never said that I would ever open a fire hydrant, just that I was planning to buy a fire hydrant wrench. One possible scenario is that I would hand my fire hydrant wrench to a qualified and authorized municipal employee, in case they were in need.

Modeling Cisterns

How do you construct a 3D model of something that lives underground and only exists in a handful of pictures taken from the interior? This was my task for the Cisterns of San Francisco last week.

The backstory: have you ever seen those brick circles in intersections and wondered what the heck they mean? I sure have.

It turns out that underneath each circle is an underground cistern. There are 170 or so* of them spread throughout the city. They’re part of the AWSS (Auxiliary Water Supply System) of San Francisco, a water system that exists entirely for emergency use.

The cisterns are just one aspect of my research for Water Works, which will map out the San Francisco water infrastructure and data-visualize the physical pipes and structures that keep the H2O moving in our city.

This project is part of my Creative Code Fellowship with Stamen Design, Gray Area and Autodesk.

Cistern_1505_MedRes

Many others have written about the cisterns: Atlas Obscura, Untapped Cities, Found SF, and the cisterns even have their own Wikipedia page, albeit one that needs some edits.

The original cisterns, about 35 or so, were built in the 1850s after a series of great fires ravaged the city; they were located in the Telegraph Hill to Rincon Hill area. In the next several decades they were largely unused, but the fire department filled them up with water for a “just in case” scenario.

Meanwhile, in the late 19th century, as San Francisco rapidly developed into a large city, it began building a pressurized hydrant-based fire system, which was seen by many as a more effective way to deliver water in case of a fire. Many thought of the cisterns as antiquated and unnecessary.

However, when the 1906 earthquake hit, the SFFD was soon overwhelmed by a fire that tore through the city. The water mains collapsed. The old cisterns were one of the few sources of reliable water.

After the earthquake, the city passed bonds to begin construction of the AWSS — the separate water system just for fire emergencies. In addition to special pipes and hydrants fed from dedicated reservoirs, the city constructed about 140 more underground cisterns.

Cisterns are nodes disconnected from the network, with no pipes; they are maintained by the fire department, which presumably fills them every year. I’ve heard that some are incredibly leaky and others are watertight.

What do they look like inside? This is the *only* picture I can find anywhere and is of a cistern in the midst of seismic upgrade work. This one was built in 1910 and holds 75,000 gallons of water, the standard size for the cisterns. They are HUGE. As you can surmise from this picture, the water is not for drinking.

cistern

(Photographer: Robin Scheswohl; Title: Auxiliary Water supply system upgrade, San Francisco, USA)

Since we can’t see the outside of an underground cistern, I can only imagine what it might look like. My first sketch looked something like this.

cistern_drawing

I approached Taylor Stein, Fusion 360 product evangelist at Autodesk, who helped me make my crude drawing come to life. I printed it out on one of the Autodesk 3D printers and lo and behold it looks like this: a double hamburger with a nipple on top. Arggh! Back to the virtual drawing board.

IMG_0010

I scoured the interwebs and found this reference photograph of an underground German cistern. It’s clearly smaller than the ones in San Francisco, but it looks like it would hold water. The form is unique and didn’t seem to connote something other than a vessel-that-holds-water.

800px-Unterirdische_Zisterne

Once again, Taylor helped me bang this one out — within 45 minutes, we had a workable model in Fusion 360. We made ours with slightly wider dimensions on the top cone. The lid looks like a manhole.

cistern_3d

Within a couple hours, I had some 3D prints ready. I printed out several sizes, scaling the height for various aesthetic tests.

cistern_models_printed

This was my favorite one. It vaguely looks like a cooking pot or a tortilla canister, but not *very* much. Those three rectangular ridges, parked at 120-degree angles, give it an unusual form.

IMG_0006

Now, it’s time to begin the more arduous project of mapping the cisterns themselves. And the tough part is still finishing the software that maps the cisterns into 3D space and exports them as an STL with some sort of binding support structure.

* I’ve only been able to locate 169 cisterns. Some reports state that there are 170, and others say 173 or 177.

Data Miner, Water Detective

This summer, I’m working on a Creative Code Fellowship with Stamen Design, Gray Area and Autodesk. The project is called Water Works, which will map and data-visualize the San Francisco water infrastructure using 3D-printing and the web.

Finding water data is harder than I thought. Like detective Gittes in the movie Chinatown, I’m poking my nose around and asking everyone about water. Instead of murder and slimy deals, I am scouring the internet and working with city government. I’ve spent many hours sleuthing and learning about the water system in our city.

chinatown-nicholsonanddunway

In San Francisco, where this story takes place, we have three primary water systems. Here’s an overview:

The Sewer System is owned and operated by the SFPUC. The DPW provides certain engineering services. This is a combined stormwater and wastewater system. Yup, that’s right, the water you flush down the toilet goes into the same pipes as the rainwater. Everything gets piped to a state-of-the-art wastewater treatment plant. Amazingly, the sewer pipes are fed almost entirely by gravity, taking advantage of the natural landscape of the city.

The Auxiliary Water Supply System (AWSS) was built in 1908, just after the 1906 San Francisco Earthquake. It is an entire water system that is dedicated solely to firefighting. 80% of the city was destroyed not by the earthquake itself, but by the fires that ravaged the city. The fires rampaged through the city mostly because the water mains collapsed. Just afterwards, the city began construction on this separate infrastructure for combatting future fires. It consists of reservoirs that feed an entire network of pipes to high-pressure fire hydrants and also includes approximately 170 underground cisterns at various intersections in the city. This incredible separate water system is unique to San Francisco.

The Potable Water System, a.k.a. drinking water, is the water we get from our faucets and showers. It comes from the Hetch Hetchy — a historic valley but also a reservoir and water system constructed from 1913-1938 to provide water to San Francisco. This history is well-documented, but what I know little about is how the actual drinking water gets piped into San Francisco homes. Also, San Francisco water is among the safest in the world, so you can drink directly from your tap.

Given all of this, where is the story? This is the question that I asked folks at Stamen, Autodesk and Gray Area during a hyper-productive brainstorming session last week. Here’s the whiteboard with the notes. The takeaways, as folks call them, are below, and here I’m going to get nitty-gritty into the process.

(whiteboard brainstorming session with Stamen)

stamen_brainstorm_full

(1) In my original proposal, I had envisioned a table-top version of the entire water infrastructure (pipes, cisterns, manhole chambers, reservoirs) as a large-scale sculpture, printed in panels. It was kindly pointed out to me by the Autodesk Creative Projects team that this is unfeasible. I quickly realized the truth of this: 3D prints are expensive, time-consuming to clean and fragile. Divide the sculptural part of the project into several small parts.

(2) People are interested in the sewer system. Someone said, “I want to know if you take a dump at Nob Hill, where does the poop go?” It’s universal. Everyone poops, even the Queen of England and even Batman. It’s funny, it’s gross, it’s entirely human. This could be accessible to everyone.

(3) Making visible the invisible or revealing what’s in plain sight. The cisterns in San Francisco are one example. Those brick circles that you see in various intersections are actually 75,000 gallon underground cisterns. Work on a couple of discrete urban mapping projects.

(4) Think about focusing on making a beautiful and informative 3D map / data-visualization of just 1 square mile of San Francisco infrastructure. Home in on one area of the city.

(5) Complex systems can be modeled virtually. Over the last couple weeks, I’ve been running code tests, talking to many people in city government and building out an entire water modeling system in C++ using OpenFrameworks. It’s been slow, deliberate and arduous. Balance the physical models with a complex virtual one.

I’m still not sure exactly where this project is heading, which is to be expected at this stage. For now, I’m mining data and acting as a detective. In the meantime, here is the trailer for Chinatown, which gives away the entire plot in 3 minutes.

 

Mapping Manholes

The last week has been a flurry of coding, as I’m quickly creating a crude but customized data-3D modeling application for Water Works — an art project for my Creative Code Fellowship with Stamen Design, Gray Area and Autodesk.

This project builds on my Data Crystals sculptures, which transform various public datasets algorithmically into 3D-printable art objects. For that artwork, I used Processing with the Modelbuilder libraries to generate STL files. It was a fairly easy coding solution, but I ran into performance issues along the way.

Processing tends to choke when managing 30,000 simple 3D cubes; my clustering algorithms took hours to run. Because the code isn’t compiled down to native machine code, it carries layers of inefficiency.

I bit the coding bullet and this week migrated my code to OpenFrameworks (an open source C++ environment). I’ve used OF before, but never with 3D work. There are still lots of gaps in the libraries, specifically the STL exporting, but I’ve had some initial success, woo-hoo!

Here are all the manholes, the technical term being “sewer nodes”, mapped into 3D space using GIS lat/lon and elevation coordinates. The clear indicator that this is San Francisco (and not Wisconsin, which this mapping vaguely resembles) is the swath of empty space that is Golden Gate Park.

What hooked me was that “a-ha” moment when the 3D points rendered properly on my screen. I was on a plane flight home from Seattle and involuntarily emitted an audible yelp. Check out the 3D mapping. There’s a density of nodes along Twin Peaks, and I accentuated the z-values to make San Francisco look even hillier and to better show the locations of the sewer chambers.
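For a sense of what that lat/lon-to-3D mapping looks like, here is a small sketch in Python (my actual code is C++ in OpenFrameworks, and the bounding box and exaggeration factor below are illustrative):

```python
def to_scene(lat, lon, elev, bounds, z_scale=3.0):
    """Normalize lat/lon into a 0..1 XY square and exaggerate elevation
    so the hills read clearly in the 3D model."""
    (lat_min, lat_max), (lon_min, lon_max) = bounds
    x = (lon - lon_min) / (lon_max - lon_min)
    y = (lat - lat_min) / (lat_max - lat_min)
    z = elev * z_scale
    return x, y, z

# Rough bounding box for San Francisco (illustrative values)
sf_bounds = ((37.70, 37.82), (-122.52, -122.35))
print(to_scene(37.7599, -122.4148, 20.0, sf_bounds))
```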

Sewer nodes are just the start. I don’t have the connecting pipes in there just yet, not to mention the cisterns and other goodies of the SF water infrastructure.

water_works_nodes_screen_shot

Of course, I want to 3D print this. By increasing the node size (the cubic dimensions of each manhole location), I was able to generate a cohesive and 3D-printable structure. This is the Meshlab export with my custom-modified STL export code. I never thought I’d get this deep into 3D coding, but now I know all sorts of details, like triangle winding and the right-hand rule for STL export.

3d_terrain_meshlab

And here is the 3D print of the San Francisco terrain, like the Data Crystals, with many intersecting cubes.

3d_terrain_better

It doesn’t have the aesthetic crispness of the Data Crystals project, but this is just a test print — very much a work-in-progress.
data_crystals

 

Creative Code Fellowship: Water Works Proposal

Along with 3 other new media artists and creative coding experts, I was recently selected to be a Creative Code Fellow for 2014 — a project pioneered by Gray Area (formerly referred to as GAFFTA and now in a new location in the Mission District).

Each of us is paired with a partnering studio, which provides a space and creative direction for our proposed project. The studio that I’m pleased to be working with is Stamen Design, a leader in the field of aesthetics, mapping and data-visualization.

I’ll also be continuing my residency work at Autodesk at Pier 9, which will be providing support for this project as well.

My proposed project is called “Water Works” — a 3D-printed data visualization of San Francisco’s water system infrastructure, along with some sort of web component.

grayarea-fellowship-home-page

 

Creative Code Fellowship Application Scott Kildall

Project Proposal (250 word limit)
My proposed project “Water Works” is a 3D data visualization of the complex network of pipes, aqueducts and cisterns that control the flow of water into our homes and out of our toilets. What lies beneath our feet is a unique combined wastewater system — where stormwater mixes with sewer lines and travels to a waste treatment plant, using gravitational energy from the San Francisco hills.

This dynamic flow is the circulatory system of the organism that is San Francisco. As we are impacted by climate change, which escalates drought and severe rainstorms, combined with population growth, how we obtain our water and dispose of it is critical to the lifeblood of this city.

Partnering with Autodesk, which will provide materials and shop support, I will write code, which will generate 3D prints from municipal GIS data. I imagine ghost-like underground 3D landscapes with thousands of threads of water — essentially flow data — interconnected to larger cisterns and aqueducts. The highly retinal work will invite viewers to explore the infrastructure the city provides. The end result might be panels that snap together on a tabletop for viewers to circumnavigate and explore.

The GIS data is available, though not online, from San Francisco, and I’ve already obtained cooperation from SFDPW about providing some of the infrastructure data necessary to realize this project.

While my focus will be on the physical portion of this project, I will also build an interactive web-based version from the 3D data, making this a hybrid screen-physical project.

Why are you interested in participating in this fellowship? (150 word limit)
The fellowship would give me the funding, visibility and opportunity of working under the umbrella of two progressive organizations: Gray Area and Stamen Design. I would expand my knowledge, serve the community and increase my artistic potential by working with members of these two groups, both of which have a progressive vision for art and design in my longtime home of San Francisco.

Specifically, I wish to further integrate 3D printing into the data visualization conversation. With the expertise of Stamen, I hope to evolve my visualization work at Autodesk. The 3D-printing technology makes possible what has hitherto been impossible to create and has enormous possibilities to materialize the imaginary.

Additionally some of the immersive classes (HTML5, Javascript, Node.js) will be helpful in solidifying my web-programming skills so that I can produce the screen-based portion of this proposal.

What experience makes this a good fit for you? (150 word limit)
I have deep experience in producing both screen-based and physical data visualizations. While at the Exploratorium, I worked on many such exhibits for a general audience.

One example is a touch-screen exhibit called “Seasons of Plankton”, which shows how plankton species in the Bay change over the year, reflecting a diverse ecosystem of microscopic organisms. I collaborated with scientists and visitor evaluators to determine the optimal way to tell this story. I performed all of the coding work and media production for this successful piece.

While at Autodesk, my focus has been creating 3D data visualizations with my custom code that transforms public data sets into “Data Crystals” (these are the submitted images). This exploration favors aesthetics over legibility. I hope to build upon this work and create physical forms that help people see the dynamics of a complex urban water system and invite curiosity through beauty.

 

@SelfiesBot: It’s Alive!!!

@SelfiesBot began tweeting last week and already the results have surprised me.

Selfies Bot is a portable sculpture that takes selfies and then tweets the images. With custom electronics and a long arm holding a camera that points back at itself, it is an art object that can travel to parks, the beach and different cities.

I quickly learned that people want to pose with it, even in my early versions with a cardboard head (used to prove that the software works).

Last week, in an evening of experimentation, I added a text component, where each Twitter pic gets accompanied by text that I scrape from Tweets with the #selfie hashtag.

This produces delightful results, like spinning a roulette wheel: you don’t know what the text will be until the Twitter website publishes the tweet. The text + image gives an entirely new dimension to the project. The textual element acts as a mirror into the phenomenon of the self-portrait, reflecting the larger culture of the #selfie.
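A minimal sketch of the roulette-wheel idea, assuming the #selfie tweets have already been scraped into a list (the scraping and photo-posting code is omitted, and the example texts are placeholders):

```python
import random

def pick_caption(selfie_tweets, max_len=100):
    """Pick a random scraped #selfie tweet to pair with the next photo,
    trimmed so the combined tweet stays within the length limit."""
    return random.choice(selfie_tweets)[:max_len]

print(pick_caption(["feeling brave today #selfie", "new haircut, who dis #selfie"]))
```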

Produced while an artist-in-residence at Autodesk.

aaron
mikkela

And this is the final version! Just done.

selfes_bot_very_good

This is the “robot hand” that holds the camera on a 2-foot long gooseneck arm.

robot_hand
yo

two_people

martin

 

First three Data Crystals

My first three Data Crystals are finished! I “mined” these from the San Francisco Open Data portal. My custom software culls through the data and clusters it into a 3D-printable form.

Each one uses a different clustering algorithm. All of these start with geo-located data (x, y), with either time or space on the z-axis.

Here they are! And I’d love to do more (though a lot of work was involved)

Incidents of Crime
This shows the crime incidents in San Francisco over a 3-month period, with over 35,000 data points (the crystal took about 5 hours to “mine”). Each incident is a single cube. Less serious crimes such as drug possession are represented as small cubes, and more severe crimes such as kidnapping are larger ones. It turns out that crime happens everywhere, which is why this is a densely-packed shape.
datacrystal_crime

 

Construction Permits
This shows the current development pipeline — the construction permits in San Francisco. Work that affects just a single unit appears as smaller cubes, and larger cubes correspond to larger developments. The upper left side of the crystal is the south side of the city — there is a lot of activity in the Mission and Excelsior districts, as you would expect. The arm on the upper right is West Portal. The nose towards the bottom is some skyscraper construction downtown.

dc_development

 

Civic Art Collection
This Data Crystal is generated from the San Francisco Civic Art Collection. Each cube is the same size, since it doesn’t feel right to make one art piece larger than another. The high top is City Hall, and the part extending below is some of the spaces downtown. The tail on the end is the artwork at San Francisco Airport.

datacrystal_sfart

 

Support material is beautiful

I finished three final prints of my Data Crystals project over the weekend. They look great and tomorrow I’m taking official documentation pictures.

This is what they look like in the support material, which is also beautiful in its ghostly, womb-like feel.

I’ve posted photos of these before, but I’m still stunned at how amazing they look.

IMG_1000 IMG_1003 IMG_1004 IMG_1006 IMG_1007 IMG_1009 IMG_1011 IMG_1012

3D Data Viz & SF Open Data

I’ve fallen a bit behind in my documentation and have a backlog of great stuff that I’ve been 3D-printing. These are a few of my early tests with my new project: Data Crystals. I am using various data sources, which I algorithmically transform into 3D sculptures.

The source for these is the San Francisco Open Data Portal — which provides datasets about all sorts of interesting things such as housing permit data, locations of parking meters and more.

My custom algorithms transform this data into 3D sculptures. Legibility is still an issue, but initial tests show the wonderful work that algorithms can do.

This is a transformation of San Francisco Crime Data. It turns out that crime happens everywhere, so the data is in a giant block.

crime_data

After running some crude data transformations, I “mined” this crystal: the location of San Francisco public art. Most public art is located in the downtown and city hall area. But there is a tail, which represents the San Francisco Airport.

sf_art

More experiments: this is a test, based on the SF public art, where I played with varying the size of the cubes (this would be a suggested value of the artwork, which I don’t have data for…yet). Now I have a 4th axis for the data. Plus, there is a distinct aesthetic appeal to stacking differently-sized blocks as opposed to uniform ones.

Stay tuned, there is more to come!

random_squares

Welcome to the Party: @lenenbot

Say hello to the latest Twitterbot from the Bot Collective: @lenenbot

vlad_john_lenen

Lenenbot* mixes up John Lennon and Vladimir Lenin quotes: the first half of one with the second half of the other.

Some of my favorites so far are:

Communism is everybody’s business.
It’s weird not to be able to run the country.
Revolution is love.

There are more surreal ones. There are about 600 different possibilities, all randomized. Subscribe to the Twitter account here.
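The splicing itself is simple. Here is a sketch in Python; the quote fragments below are placeholders rather than the bot’s actual source lists:

```python
import random

LENNON = ["Imagine all the people living life in peace.",
          "It's weird not to be weird."]
LENIN = ["Politics begin where the masses are.",
         "Give me four years to teach the children."]

def mashup():
    """Splice the first half of one figure's quote onto the second half of
    the other's, choosing the direction at random."""
    a, b = random.choice(LENNON), random.choice(LENIN)
    if random.random() < 0.5:
        a, b = b, a
    first = a.split()[:len(a.split()) // 2]
    second = b.split()[len(b.split()) // 2:]
    return " ".join(first + second)

print(mashup())
```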

 

 

* I chose the name “Lenen” to avoid confusion. Lenonbot and Lenninbot look like misspellings of Lennon and Lenin, respectively. Lenen is its own bot.

Fabrication Challenge — Faceted Forms

The fabrication challenge for some of my new sculptures is to devise a way to transform models in 3D screen-space into faceted, painted wood forms. The faceted look is something I first experimented with in papercraft sculptures for the No Matter (2008) project, a collaboration with Victoria Scott.

nm_yellow_submarine

I later expanded upon this idea with the 2049 Series sculptures such as the Universal Mailbox and the 2049 Hotline. I constructed these sculptures from found wood at the dump while an artist-in-residence at Recology SF.

The problem I had was getting the weird angles to be exact. I don’t have strong woodworking skills and ended up spending a lot of time with Bondo, fixing my mistakes. I’d like to be able to make these on the laser-cutter…no saws and no sanding…and have them look perfect. Stay tuned.

malbox phonebooth

 

The Art of 3D Printing

My new work on “Data Crystals” is featured in a new episode of Science in the City, produced by the Exploratorium. You can watch it here.

The behind-the-scenes production involved many emails and then a quick video shoot. Phoebe (the videographer) interviewed me in the conference room at Autodesk. We had about 25 minutes to shoot the interview portion of the video. She filled me in on her intentions for the piece and asked me to talk about a few general topics related to 3D printing.

phoebe_interviews

Fortunately, over the years, I’ve become very comfortable with my voice and image. She also did a great job of making me look smart. I explained my new “Data Crystals” project, which is in the research phase. I am looking at open data sets provided by the San Francisco Open Data Portal and mapping them as 3D sculptural objects. You can see me holding some of the 3D prints in the video.

 

 

 

Bling on the Water Jet

I just got trained on how to use the water jet tool at Autodesk and made this piece of bling as my sample project. The design came from my Grantbot project.

Anyone have some gold spray paint?

IMG_0902

Photography Imitates New Media

Usually “new media” pulls from other art disciplines: video, sculpture, photography and many more.

This morning, I saw this photographer’s (Alex John Beck) work on facial-symmetry portraits on one of my art news feeds. He shoots a subject, then mirrors each side of the face so that there are two symmetrical portraits: one from the left side and another from the right.

alexjohnbeckbothsidesof1

Wait a minute. I’m very familiar with this piece. There’s an exhibit at The Exploratorium (where I worked as a New Media Exhibit Developer from 2011-12) called “Two-Faced”, which does exactly that. It was created by my colleague, Bill Meyer.

YouTube videos are below (they have audio, beware), which show the piece in action.

No accusations of cribbing here, but the “new media version” provides a rich and interactive experience to thousands of visitors each year.