Data Crystals at EVA

I attended the EVA London conference this week and gave a demonstration of my Data Crystals project. Below is the formal abstract for the demonstration; writing it helped clarify some of my ideas about the Data Crystals project and the digital fabrication of physical sculptures and installations.

 

Embodied Data and Digital Fabrication: Demonstration with Code and Materials
by Scott Kildall

1. INTRODUCTION

Data has tangible consequences in the real world. Accordingly, physical data-visualizations have the potential to engage with the actual effects of the data itself. A data-generated sculpture or art installation is something that people can move around, through, or inside of. They experience the dimensionality of data with their own natural perceptual mechanisms. However, creating physical data-visualizations presents unique material challenges, since these objects exist in stasis rather than in a virtual space with a guided UX design. In this demonstration, I will present my recent research into producing sculptures from data, using my custom software code that creates files for digital fabrication machines.

2. WHAT DOES DATA LOOK LIKE?

The overarching question that guides my work is: what does data look like? Referencing architecture, my artwork such as Data Crystals (figure 2) executes code that maps, stacks and assembles data “bricks” to form unique digital artifacts. The forms of these objects are impossible to predict from the original data-mapping, and the clustering code produces different variations each time it runs.

Other sculptures remove material through intense kinetic energy. Bad Data (figure 3) and Strewn Fields (figure 1) both use the waterjet machine to gouge data into physical material using a high-pressure stream of water. The materials in this case — aluminum honeycomb panels and stone slabs — react in adverse ways, splintering and deforming under the violence of the machine.

2.1 Material Expression

Physical data-visualizations act on materials instead of pixels, so there is a dialogue between the data and its material expression. Data Crystals depicts municipal data of San Francisco and has an otherworldly, ghostly quality of stacked and intersecting cubes. The data is served from a web portal and is situated in the urban architecture, so the 3D-printed bricks are an appropriate form of expression.

Bad Data captures data that is “bad” in the shallow sense of the word, rendering datasets such as Internet Data Breaches, Worldwide UFO Sightings or Mass Shootings in the United States. The water from the machine gouges and ruptures aluminum honeycomb material in unpredictable ways, similar to the way data tears apart our social fabric. This material is emblematic of the modern era, as aluminum began to be mass-refined at the end of the 19th century. These datasets exemplify conflicts of our times such as science/heresy and digital security/infiltration.

2.2 Frozen in Time

Once created, these sculptures cannot be endlessly altered like screen-based data visualizations. This challenges the artwork to work with fixed data or to consider the effect of capturing a specific moment.

For example, Strewn Fields is a data-visualization of meteorite impact data. When a large asteroid enters the Earth's atmosphere, it does so at a high velocity of approximately 30,000 km/hour. Before impact, it breaks up into thousands of small fragments, which are meteorites. Usually they hit our planet in the ocean or at remote locations. The intense energy of the waterjet machine gouges the surface of each stone, mirroring the raw kinetic energy of a planetoid colliding with the surface of the Earth. The static etching captures the act of impact and survives as an antithetical gesture to the event itself. The actual remnants and debris (the meteorites) have been collected, sold and scattered; what remains is just a dataset, which I have translated into a physical form.

2.3 Formal Challenges to Sculpture

This sort of “data art” challenges the formal aspects of sculpture. Firstly, machine-generated artwork removes the artist's hand from the work, building upon the legacy of algorithmic artwork by Sol LeWitt and others. Execution of this work is conducted by the stepper motor rather than by the gestures of the artist.

Secondly, the input data are unknowable forms until they are actually rendered. The patterns are neither mathematical nor random, giving a certain quality of perceptual coherence to the work. Data Crystals: Crime Incidents has 30,000 data points. Using code-based clustering algorithms, it creates forms that have only recently become possible with the combination of digital fabrication and large amounts of data.
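A rough sketch of this kind of clustering, simplified for illustration (this is a minimal centroid-attraction model, not the actual Data Crystals code on GitHub, which uses its own rules): each data brick steps one grid unit toward the centroid of all bricks, and freezes once it touches another brick.

```cpp
#include <cstdlib>
#include <vector>

// A data "brick": one data point mapped onto an integer grid.
struct Brick { int x, y, z; };

// Face-adjacent or overlapping on the grid.
static bool touches(const Brick& a, const Brick& b) {
    return std::abs(a.x - b.x) + std::abs(a.y - b.y) + std::abs(a.z - b.z) <= 1;
}

// Repeatedly step each free brick one unit toward the centroid of all
// bricks; bricks that touch a neighbor are frozen into the crystal.
void cluster(std::vector<Brick>& bricks, int maxIters = 1000) {
    for (int iter = 0; iter < maxIters; ++iter) {
        long cx = 0, cy = 0, cz = 0;
        for (const Brick& b : bricks) { cx += b.x; cy += b.y; cz += b.z; }
        const long n = (long)bricks.size();
        cx /= n; cy /= n; cz /= n;

        bool moved = false;
        for (size_t i = 0; i < bricks.size(); ++i) {
            Brick& b = bricks[i];
            bool frozen = false;            // touching a neighbor: stay put
            for (size_t j = 0; j < bricks.size(); ++j)
                if (j != i && touches(b, bricks[j])) { frozen = true; break; }
            if (frozen) continue;

            int dx = (int)cx - b.x, dy = (int)cy - b.y, dz = (int)cz - b.z;
            // Step along the axis with the largest offset to the centroid.
            if (std::abs(dx) >= std::abs(dy) && std::abs(dx) >= std::abs(dz) && dx != 0) {
                b.x += (dx > 0) ? 1 : -1; moved = true;
            } else if (std::abs(dy) >= std::abs(dz) && dy != 0) {
                b.y += (dy > 0) ? 1 : -1; moved = true;
            } else if (dz != 0) {
                b.z += (dz > 0) ? 1 : -1; moved = true;
            }
        }
        if (!moved) break;   // every brick is frozen: the crystal has formed
    }
}
```

Even in this toy version, the final form depends on the starting positions and the order of freezing, which is why each run of the real code produces a different crystal.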

3. CODE

My sculpture-generation tools are custom-developed in C++ using openFrameworks, an open source toolkit. My code repositories are on GitHub: https://github.com/scottkildall. My software bypasses any conventional modeling package. It can handle very complex geometry and, more importantly, doesn't have the “look” that a program such as Rhino/Grasshopper generates.

3.1 Direct-to-Machine

My process of data-translation is optimized for specific machines. Data Crystals generates STL files, which most 3D printers can read. My code generates PostScript (.ps) files for the waterjet machine. The conversation with the machine itself is direct. During the production and iteration process, once I define the workflow, the refinements proceed quickly. The process is optimized, like the machine that creates the artwork.
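To show what “direct-to-machine” means, here is a minimal ASCII STL emitter for a single cube-shaped brick. This is an illustrative stand-in, not my production code, which builds full meshes in openFrameworks; the point is that an STL file is just text that any printer toolchain can read.

```cpp
#include <sstream>
#include <string>

// Emit one axis-aligned cube (corner at x,y,z, side s) as 12 triangles
// in ASCII STL. The vertex winding here is illustrative; many slicers
// recompute normals from the geometry, so we write zero normals.
std::string cubeToStl(float x, float y, float z, float s) {
    float c[8][3];                      // the 8 corners of the cube
    for (int i = 0; i < 8; ++i) {
        c[i][0] = x + ((i & 1) ? s : 0.0f);
        c[i][1] = y + ((i & 2) ? s : 0.0f);
        c[i][2] = z + ((i & 4) ? s : 0.0f);
    }
    // Two triangles per face, indexed into the corner array.
    static const int tri[12][3] = {
        {0,2,1},{1,2,3}, {4,5,6},{5,7,6},   // bottom, top
        {0,1,4},{1,5,4}, {2,6,3},{3,6,7},   // front, back
        {0,4,2},{2,4,6}, {1,3,5},{3,7,5}};  // left, right
    std::ostringstream out;
    out << "solid brick\n";
    for (const auto& t : tri) {
        out << "  facet normal 0 0 0\n    outer loop\n";
        for (int k = 0; k < 3; ++k)
            out << "      vertex " << c[t[k]][0] << ' '
                << c[t[k]][1] << ' ' << c[t[k]][2] << '\n';
        out << "    endloop\n  endfacet\n";
    }
    out << "endsolid brick\n";
    return out.str();
}
```

Writing the file format directly, rather than exporting through a modeling package, is what keeps the pipeline fast to iterate on.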

3.2 London Layering

In my demonstration, I will use various open data from London. I focus not on data that I want to acquire, but rather data that I can acquire. I will demonstrate a custom build of Data Crystals which shows multiple layers of municipal data, and I will run clustering algorithms to create several Data Crystals for the City of London.

 

Figure 1: Strewn Fields (2016)
by Scott Kildall
Waterjet-etched stone

Figure 2:
Data Crystals: Crime Incidents (2014)
by Scott Kildall
3D-print mounted on wood

Figure 3:
Bad Data: U.S. Mass Shootings (2015)
by Scott Kildall
Waterjet-etched aluminum honeycomb panel

Display at Your Own Risk by Owen Mundy

I get a lot of press for my artwork. These articles often gloss over the nuances, distilling a story to its essence.

Well-written academic articles about my artwork are what thrill me the most.

Such is the case with Owen Mundy's article, Display at Your Own Risk, which looks at 3D printing, copyright and photogrammetry in art.


The work he is referring to, in our case, is Chess with Mustaches, which is detailed here.


What Mundy homes in on is that our original Duchamp Chess Set is not like ‘ripping’ music from physical media to a computer, but rather a “hand” tracing from a set of photographs to create a 3D model. It is essentially a translation rather than a crude copy.

These are the sorts of comparisons and nuances that garner my appreciation.


 

 

Press for Chess with Mustaches: the response to the Duchamp Estate

Press coverage is like an improv performance. It's unpredictable and erratic; sometimes it works, and sometimes it falls on its face, usually through lack of press.

I've seen my work get butchered and my name dragged through the mud. I've been called a “would-be performance artist”, an “amateur cartographer” and even Cory Doctorow recently called me a “hobbyist”.

But as long as my name is spelled right, I’m happy.

We recently went public with our response to the Duchamp Estate and the Chess with Mustaches artwork.

We soon received coverage from three notable press sources: Hyperallergic, 3DPrint.com and The Atlantic, and this was soon followed up by Boing Boing and later 3ders.com, plus a mention in Fox News (scroll down) and then Tech Dirt.

These are arts blogs, 3D printing blogs, tech rags — and, well, The Atlantic, a well-read political and cultural news source — so there's a wide audience for this story.

The press has certainly reached the critical threshold for the work. The cat is out of the bag, after being inside for nearly a year…a frustrating process during which we kept silent about the cease-and-desist letter from the Duchamp Estate.

This is perhaps the hardest part of any potential legal conflict. You have to be quiet about it; otherwise it might imperil your legal position. The very act of saying anything might provoke a reaction from the other party.

But the outpouring of support has been amazing, both on a personal and a press level. Sure, some of the articles have overlooked certain aspects of the project.

And as always, #dontreadthecomments. But overall, it has been such a relief to be able to talk about the Duchamp Estate and the chess pieces, and to devise an appropriate artistic response.

 


What Happened to the Readymake: Duchamp Chess Pieces?

Over the last several months, we (Scott Kildall and Bryan Cera) have been contacted by many people asking the same question: What happened to the Readymake: Duchamp Chess Pieces?


The answer is that we ran into an unexpected copyright concern. The Marcel Duchamp Estate objected to the posting of our reconstructed 3D files on Thingiverse, claiming that our project was an infringement of French intellectual property law. Although the copyright claim never went to legal adjudication, we decided that it was in our best interests to remove the 3D-printable files from Thingiverse – both to avoid a legal conflict, and to respect the position of the estate.

For those of you who are unfamiliar with Readymake: Duchamp Chess Set by Scott Kildall and Bryan Cera, this was our original project description:

Readymake: Duchamp Chess Set is a 3D-printed chess set generated from an archival photograph of Marcel Duchamp’s own custom and hand-carved game. His original physical set no longer exists. We have resurrected the lost artifact by digitally recreating it, and then making the 3D files available for anyone to print.

We were inspired by Marcel Duchamp’s readymade — an ordinary manufactured object that the artist selected and modified for exhibition — the readymake brings the concept of the appropriated object to the realm of the internet, exploring the web’s potential to re-frame information and data, and their reciprocal relationships to matter and ideas. Readymakes transform photographs of objects lost in time into shared 3D digital spaces to provide new forms and meanings.

Just for the sake of clarity, what we call a “readymake” is a play on the phrase “readymade”. It is ready-to-make, since it can be physically generated by a 3D printer.

Our Readymake project was not to exist solely as the physical 3D prints that we made, but rather as the gesture of posting the 3D-printable files for anyone to download, as well as the initiation of a broader conversation around digital recreation in the context of artwork. We chose to reconstruct Duchamp’s chess set, specifically, for several reasons.

The chess set, originally created by Duchamp in 1917-18, was a material representation of his passion for the game. Our intention was not to create a derivative art work, but instead to re-contextualize an existing non-art object through a process of digital reconstruction as a separate art project.

What better subject matter to speak to this idea than a personal possession of the father of the Readymade, himself?  Given the artifact’s creation date, we believed it would be covered under U.S. Copyright Law. We’ll get back to that in a bit.


On April 21st, 2014, we published this project on our website and also uploaded the 3D (STL) files onto Thingiverse, a public online repository of free 3D-printable models.  We saw our gesture of posting the files not only as an extension of our art project, but also as an opportunity to introduce the conceptual works of Duchamp, specifically his Readymades, to a wider audience.


The project generated a lot of press. By encouraging discussion between art-oriented and technology-oriented audiences, it tapped into a vein of critical creative possibilities with 3D printing. And perhaps, with one of Marcel Duchamp’s personal belongings as the context, the very notions of object, ownership and authenticity were brought into question among these communities.

Unfortunately, the project also struck a nerve with the Duchamp Estate. On September 17th, 2014, we received a cease and desist letter from a lawyer representing the heirs of Marcel Duchamp. They were alleging intellectual property infringement on grounds that they held a copyright to the chess pieces under French law.

Gulp.


We assessed our options and talked to several lawyers. Yes, we talked to the Electronic Frontier Foundation…and others. We were publicly quiet about our options, as one needs to be with legal matters such as this. The case was complex, since jurisdiction was uncertain. Does French copyright law apply? Does U.S. law? We didn't know, but we had a number of conversations with legal experts.

Some of the facts, at least as we understand them:

1) Duchamp's chess pieces were created in 1917-1918. According to U.S. copyright law, works published before 1923 are in the realm of “expired copyright”.

2) The chess pieces themselves were created in 1917-1918 while Duchamp was in Argentina. He then brought the pieces back to France, where he worked to market them.

3) According to French copyright law, copyrighted works are protected for 70 years after the author's death.

4) Under French copyright law, you can be sued for damages and even serve jail time for copyright infringement.

5) The only known copy of the chess set is in a private collection. We were originally led to believe the set was ‘lost’, as it hasn't been seen publicly for decades.

6) For the Estate to pursue us legally, the most common method would be to get a judgment in French court, then get a judgment in a United States court to enforce it.

7) Legal jurisdiction is uncertain. As United States citizens, we are protected by U.S. copyright law. But since websites like Thingiverse are global, French copyright could apply.

Our decision to back off

Many people have told us to fight the Estate on this one. This, of course, is an obvious response. But our research indicated this would be a costly battle. We pursued pro-bono representation from a variety of sources, and while those we reached out to agreed it was an interesting case, each declined. We even considered starting a legal defense fund or crowdsourcing legal costs through a platform such as Kickstarter. However, deeper research showed us that people were far more interested in funding technology gadgets than legal battles.

Finally we ascertained, through various channels, that the Estate was quite serious. We wanted to avoid a serious legal conflict.

And so, without proper financial backing or pro-bono legal representation, we backed off — we pulled the files from Thingiverse. This was painful – it was incredible to see how excited people were to take part in our project, and when we deleted the Thingiverse entry and with it the comments and photo documentation shared by users, we did so with much regret. But we didn’t see any other option.

Initially, we really struggled to understand where the estate was coming from. Since part of the estate's task is to preserve Duchamp's legacy, we were surprised that our project was seen by them as anything other than a celebration, and in some ways a revitalization, of his ideas and artworks. Despite the strongly-worded legal letter, we heard that the heirs were quite reasonable.

The resolution was this: we contacted the estate directly. We explained our intention for the project: to honor the legacy of Duchamp, and notified them that we had pulled the STL files from online sources.

We were surprised by the amicable email response — written sans lawyers — directly from one of the heirs. Their reply highlighted an appreciation for our project, and an understanding of our artistic intent. It turns out that their concern was not that we were using the chess set design, but rather that the files – then publicly available — could be taken by others and exploited.

We understand the Estate's point of view – their duty, after all, is to preserve Duchamp's legacy. Outside of an art context, a manufacturer could easily take the files and mass-produce the set. Even though we released the files under a Creative Commons license stipulating that the chess set couldn't be used for commercial purposes, we understand the concern.

If we had chosen to stand our ground, we would have had various defenses at our disposal. One of them is that French law wouldn't have applied, since we were doing this from a U.S. server. But the rules around this are uncertain.

If we had been sued, we would have defended on two propositions: (1) our project would be protected under U.S. law; (2) not withstanding this, under U.S. law, we have a robust and widely-recognized defense under the nature of Fair Use.

We would have argued that our original Duchamp Chess Pieces added value to these objects, invoking Fair Use.

But, the failure of a legal system is that it is difficult to employ these defenses unless you have the teeth to fight. And teeth cost a lot of money.

Parody: Our resolution

We thought about how to recapture the intent of this project without inviting another copyright infringement claim from the Duchamp Estate, and realized that one aspect of the project would likely guarantee its status as commentary: parody.

Accordingly, we have created Chess with Mustaches, which is based on our original design but adds a mustache to each piece. The pieces no longer look like Duchamp's originals; instead, the set “improves” upon the original, with each piece adorned with a mustache.


The decorative mustache references vandalized work, including Duchamp’s own adornment of the Mona Lisa.


Coming out with this new piece is risky. We realize the Duchamp Estate could try to come back at us with a new cease-and-desist. However, we believe that this parody response and retitled artwork will be protected under U.S. Copyright Law (and perhaps under French law as well). We are willing to stand up for ourselves with the Chess with Mustaches.

Also for this reason, we decided not to upload the mustachioed-pieces to Thingiverse or any other downloadable websites. They were created as physical objects solely in the United States.


Final thoughts

3D printing opens up entire new possibilities of material production. With the availability of cheap production, the very issue of who owns intellectual property comes into play. We’ve seen this already with the endless reproductions on sites such as Thingiverse. Recently, Katy Perry’s lawyers demanded that a 3D Print of the Left Shark should be removed from Shapeways.

And in 2012, Golan Levin and Shawn Sims provided the Free Universal Construction Kit, a set of 3D-printable files for anyone to print connectors between Legos, Tinker Toys and many other construction kits for kids. Although they seem to have dodged legal battles, this was perhaps a narrow victory.

Our belief is that our project of reviving Duchamp's chess set is strong as both a conceptual and an artistic gesture. It is unfortunate that we had to essentially delete this project from the Internet. What copyright law has done in this case is squelch an otherwise compelling conversation about Duchamp's notion of the readymade in the context of 3D printing.

Will our original Duchamp Chess pieces, the cease-and-desist letter from the Duchamp Estate and our response of the Chess with Mustaches be another waypoint in this conversation?

We hope so.

And what would Marcel Duchamp have thought of our project? We can only guess.


Scott Kildall’s website is: www.kildall.com
Twitter: @kildall

Bryan Cera’s website is: http://bryancera.com
Twitter: @BryanJCera

Introducing Machine Data Dreams

Earlier this year, I received an Individual Artist Commission grant from the San Francisco Arts Commission for a new project called Machine Data Dreams.

I was notified months ago, but the project was on the back-burner until now, as I begin some initial research and experiments at a residency called Signal Culture. I expect full immersion in the fall.

The project description
Machine Data Dreams will be a large-scale sculptural installation that maps the emerging sentience of machines (laptops, phones, appliances) into physical form. Using the language of machines — software program code — as linguistic data points, Scott Kildall will write custom algorithms that translate how computers perceive the world into physical representations that humans can experience.

The project’s narrative proposition is that machines are currently prosthetic extensions of ourselves, and in the future, they will transcend into something sentient. Computer chips not only run our laptops and phones, but increasingly our automobiles, our houses, our appliances and more. They are ubiquitous and yet, often silent. The key to understanding their perspective of the world is to envision how machines view the world, in an act of synthetic synesthesia.

Scott will write software code that will perform linguistic analysis on machine syntax from embedded systems — human-programmable machines that range from complex, general purpose devices (laptops and phones) to specific-use machines (refrigerators, elevators, etc.). Scott's code will generate virtual 3D geometric monumental sculptures. More complex structures will reflect the higher-level machines, and simpler structures will be generated from lower-level devices. We are intrigued by the experimental nature of what the form will take — this is something that he will not be able to plan.
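One possible linguistic-analysis step might look like the sketch below: score a snippet of machine code by its token variety and nesting depth, then use the score to drive the complexity of the generated structure. This is a hypothetical metric for illustration, not the actual Machine Data Dreams algorithm.

```cpp
#include <algorithm>
#include <set>
#include <sstream>
#include <string>

// Hypothetical "linguistic" complexity score for a snippet of source
// code: count distinct whitespace-separated tokens (vocabulary) and
// the maximum nesting depth of brackets, then combine them. A higher
// score could map to a polyhedron with more faces.
int complexityScore(const std::string& code) {
    std::set<std::string> vocab;
    std::istringstream in(code);
    std::string tok;
    while (in >> tok) vocab.insert(tok);   // distinct tokens

    int depth = 0, maxDepth = 0;
    for (char ch : code) {
        if (ch == '{' || ch == '(') maxDepth = std::max(maxDepth, ++depth);
        if (ch == '}' || ch == ')') --depth;
    }
    return (int)vocab.size() + 10 * maxDepth;
}
```

Under a metric like this, the terse syntax of a refrigerator controller would score far lower than the source of a general-purpose operating system, which is the intuition behind higher-level machines producing more complex forms.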


Machine Data Dreams will utilize 3D printing and laser-cutting techniques, which are digital fabrication techniques that are changing how sculpture can be created — entirely from software algorithms. Simple and hidden electronics will control LED lights to imbue a sense of consciousness to the artwork. Plastic joints will be connected via aluminum dowels to form an armature of irregular polygons. The exterior panels will be clad by a semi-translucent acrylic, which will be adhered magnetically to the large-sized structures. Various installations can easily be disassembled and reassembled.

The project will build on my experiments with the Polycon Construction Kit by Michael Ang, where I'm doing some source-code collaboration. This will heat up in the fall.


At Signal Culture, I have 1 week of residency time. It’s short and sweet. I get to play with devices such as the Wobbulator, originally built by Nam June Paik and video engineer Shuya Abe.

The folks at Signal Culture built their own from the original designs.

What am I doing here, with analog synths and other devices? Well, I’m working with a home-built Arduino data logger that captures raw analog video signals (I will later modify it for audio).

I've optimized the code to capture about 3,600 signals per second. The idea is to get a raw data feed of what a machine might be “saying”, or the electronic signature of a machine.


Does it work? Well, I hooked it up to a Commodore Amiga (yes, they have one).

I captured about 30 seconds of video and ran it through a crude version of my custom 3D data-generation software, which makes models. Here is what I got. Whoa…

It is definitely capturing something.


It's early research. The forms are flat 3D cube-plots, but they are also very promising.

Water Works, NPR and Imagination

I recently achieved one of my life goals. I was on NPR!

The article, “Artists In Residence Give High-Tech Projects A Human Touch”, discusses my Water Works* project as well as artwork by Laura Devendorf, and more generally, the artist-in-residence program at Autodesk.


“Water Works” 3D-printed Sewer Map in 3D printer at Autodesk

The production quality and caliber of the reporting are high. It's NPR, after all. But what makes this piece important is that it talks about the value of artists, because they are the ones who infuse imagination into culture. The reporter, Laura Sydell, did a fantastic job of condensing this thought into a 6-minute radio program.

Arts funding has been cut from many government programs, at least in the United States. And education curricula increasingly teach engineering and technology over the humanities. But without the fine arts and the teaching of actual creativity (and not just startup strategies), how can we, as a society, be truly creative?

Well, that's what this article suggests. And specifically, that corporations such as Autodesk will benefit from having artists in their facilities.

Perhaps one problem is that “imagination” is not quantifiable. We have the ability to measure so much: financial impact, number of clicks, test scores and more, but creativity and imagination, not so much. These are — at least to date — aspects of our culture that we cannot track on our phones or run web analytics on.

So, embracing imagination means embracing uncertainty, which is an existential problem that technology will have to cope with along the way.


“Water Works” installed in the Autodesk lobby

At the end of the article, the reporter talks about Xerox PARC of the 1970s, which had a thriving artist-in-residence program. Early computer technology was filled with imagination, which is why that era was so ripe with technology and excitement.

This is close to my heart. My father, Gary Kildall, was a key computer scientist back in the 1970s. His passions when he was in school were mathematics and art. By the time I was a kid, he was no longer drawing or working in the wood shop; instead, he was designing computer architectures that helped define the personal computer. He passed away in 1994, but I often wish he could see the kind of work I'm doing with art + technology now.


Gary Kildall on television, examining computer hardware, circa 1981

* Water Works was part of Creative Code Fellowship in 2014 with support from Gray Area, Stamen Design and Autodesk.

Water Works Final Report

Overview
Water Works is a project that I created for the Creative Code Fellowship in the Summer of 2014 with the combined support of Stamen Design, Autodesk and Gray Area.

Water Works is a 3D data-visualization and mapping of the water infrastructure of San Francisco. The project is a relational investigation: I have been playing the role of a “Water Detective, Data Miner”, sifting through the web for water data. The results of this 3-month investigation are three large-scale 3D-printed sculptures, each paired with an interactive web map.

The final website lives here: http://www.waterworks.io/


Stamen Design is a small design studio that creates sophisticated mapping and data-visualization projects for the web. Combined with the amazing physical fabrication space at Pier 9 at Autodesk, this was a perfect combination of collaborative players for my own focus: writing algorithms that transform datasets into 3D sculptures and installations. I split my time between the two organizations and both were amazing, creative environments.

Gray Area provided the project guidance and coursework: 12 hours a week of Creative Code Immersive classes in topics ranging from Arduino to Node.js. About half of the classes were review for me (e.g. openFrameworks, Processing, Arduino), but JavaScript, Node and more were completely new.

This report is heavy on images, partially because I want to document the entire process of how I created these 3D mapping-visualizations. As far as I know, I’m the first person who has undertaken this creative process: from mining city data to 3D-printing the infrastructure, which is geo-located on a physical map.

My directive from the start of the Water Works project was to somehow make visible what is invisible. This simple message is one that I learned while I was working as a New Media Exhibit Developer at the Exploratorium (2012-2013). It also aligns with the work that Stamen Design creates and so I was pleased to be working with this organization.

Starting Point
Underneath our feet is an urban circulatory system that delivers water to our households, removes it from our toilets, provides a reliable supply for firefighting, and ultimately purifies it and directs it into the bay and ocean. Most of us don't think about this amazing system because we don't have to — it simply works.

Like many others, I’m concerned about the California drought, which many climatologists think will persist for the next decade. I am also a committed urban-dweller and want to see the city I live in improve its infrastructure as it serves an expanding population. Finally, I undertook this project in order to celebrate infrastructure and to help make others aware of the benefits of city government.


On a more personal note, I am fascinated by urban architecture. As I walk through the city, I constantly notice the markings on manholes, the various sign posts and the different types of fire hydrants.

About a year ago, while I was working at the Exploratorium, I had several in-depth conversations with employees at the Department of Public Works about the possibility of mapping the sewer system. We discussed producing a sewer map for the museum. For various reasons, the maps never came to fruition, but the data still rattled around my brain. All of the pipe and manhole data still existed. It was waiting to be mapped.

Three Water Systems of San Francisco
When I was awarded this Creative Code Fellowship in June this year, I knew very little about the San Francisco water system. I soon learned that the city has three separate sets of pipes that comprise the water infrastructure of San Francisco.

(1) Potable Water System — this is our drinking water, which comes from Hetch Hetchy. Some fire hydrants use this.

(2) Sewer System — San Francisco has a combined stormwater and wastewater system, which is nearly entirely gravity-fed. The water gets treated at one of the wastewater treatment plants. San Francisco is the only coastal California city with a combined system.

(3) Auxiliary Water Supply System (AWSS) — this is a separate system just for emergency firefighting. It was built in the years immediately following the 1906 Earthquake, when many of the water mains collapsed and most of the city proper was destroyed by fires. It is fed from the Twin Peaks Reservoir. San Francisco is the only city in the US with such a system.


Follow the Data, Find the Story
From my previous work on Data Crystals, I learned that you have to work with the data you can actually get, not the data you want. In the first month of the Water Works project, this involved constant research and culling.

I worked with various tables of sewer data that the DPW provided to me. I discovered that the city has about 30,000 nodes (underground chambers with manholes) with 30,000 connections (pipes). This was an incredible dataset, but it needed a lot of pruning, cleaning and other work, which I soon discovered was a daunting task.

Lesson #1: Contrary to popular belief, data is never clean.
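The pruning step can be sketched like this (the field names here are hypothetical; the actual DPW tables used their own schema): drop nodes that lack valid coordinates, then drop any pipe whose endpoints didn't survive.

```cpp
#include <map>
#include <vector>

// Hypothetical, simplified records for sewer data: a manhole node with
// geographic coordinates, and a pipe connecting two node IDs.
struct Node { int id; double lat, lon; };
struct Pipe { int fromId, toId; };

// Keep only nodes with non-zero coordinates, then keep only pipes whose
// both endpoints still exist. (Zero coordinates standing in for "missing"
// is an assumption; real tables flag missing values in their own ways.)
void prune(std::vector<Node>& nodes, std::vector<Pipe>& pipes) {
    std::map<int, bool> valid;
    std::vector<Node> keptNodes;
    for (const Node& n : nodes)
        if (n.lat != 0.0 && n.lon != 0.0) {
            keptNodes.push_back(n);
            valid[n.id] = true;
        }

    std::vector<Pipe> keptPipes;
    for (const Pipe& p : pipes)
        if (valid.count(p.fromId) && valid.count(p.toId))
            keptPipes.push_back(p);

    nodes.swap(keptNodes);
    pipes.swap(keptPipes);
}
```

With 30,000 nodes and 30,000 pipes, even a simple pass like this surfaces surprising amounts of orphaned and malformed data, which is the daunting part.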

What else was available? It was hard to say at first. I sent emails to the SFPUC asking for the locations of their drinking-water pipes, just like the data I had for the sewer system. I thought this would be incredible to represent. I approached the project with a certain naivety.

Of course, I shouldn’t have been surprised that this would be a security concern, and I received a resounding no from the SFPUC. This made sense, but it left me with only one dataset.

Given that there were three water systems, it made sense to create three 3D-printed visualizations, one from each set. But if not the pipes, what data would I use?

During one of my late-night research sessions, I found a good story: the San Francisco underground cisterns. According to various blogs, there are about 170 of these, and they are usually marked by a brick circle in the street. What is underneath?

cistern_circle

In the 1850s, after a series of Great Fires tore through San Francisco, 23 cisterns* were built. These smaller cisterns were all in the city proper, at that time between Telegraph Hill and Rincon Hill. They weren’t connected to any other pipes; the fire department intended them as a backup water supply in case the water mains broke.

They languished for decades. Many people thought they should be removed, especially after incidents like the 1868 Cistern Gas Explosion.

However, after the 1906 Earthquake, fires once again decimated the city. Many water mains broke and the neglected cisterns helped save portions of the city.

Afterward, the city passed a $5,200,000 bond and began building the AWSS in 1908. This included the construction of many new cisterns and the rehabilitation of other, neglected ones. Most of the new cisterns could hold 75,000 gallons of water. The largest one is underneath the Civic Center and has a capacity of 243,000 gallons.

The original ones, presumably rebuilt, hold much less, anywhere from 15,000 to 50,000 gallons.

* from the various reports I’ve read, this number varies.

old-cisternsmap

I searched for a map of all the cisterns, which proved difficult to find. There was no online map anywhere. I read that since the cisterns were part of the AWSS, they were refilled by the fire department. I began searching for fire department data and found a set of intersections, along with the volume of each cistern. The source was the SFFD Water Supplies Manual.

cisterdata

The story of the San Francisco Cisterns was to be my first of three stories in this project.

Autodesk also runs Instructables, a DIY, how-to-make-things website. One of my Instructables details the mapping process, so if you want the details, have a look at that Instructable.

To make this conversion happen, I wrote Python code that called the Google Maps API to convert the intersections into lat/longs and to fetch elevation data. When I asked people how to do this, I received many GitHub links. Most of them were buggy or poorly documented. I ended up writing mine from scratch.
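
As an illustration of that conversion step, here is a minimal Python sketch using the public Google Geocoding and Elevation web endpoints. The function names and the API-key placeholder are mine, not from the SF Geocoder repo, which surely differs:

```python
import json
import urllib.parse
import urllib.request

GEOCODE_URL = "https://maps.googleapis.com/maps/api/geocode/json"
ELEVATION_URL = "https://maps.googleapis.com/maps/api/elevation/json"

def geocode_url(intersection, api_key):
    """Build a Geocoding API request URL for e.g. '47th & Judah, SF'."""
    return GEOCODE_URL + "?" + urllib.parse.urlencode(
        {"address": intersection, "key": api_key})

def elevation_url(lat, lng, api_key):
    """Build an Elevation API request URL for a lat/long pair."""
    return ELEVATION_URL + "?" + urllib.parse.urlencode(
        {"locations": f"{lat},{lng}", "key": api_key})

def parse_latlng(response_text):
    """Pull (lat, lng) out of the first result in a Geocoding response."""
    result = json.loads(response_text)["results"][0]
    loc = result["geometry"]["location"]
    return loc["lat"], loc["lng"]

def fetch_latlng(intersection, api_key):
    """Network call: geocode one intersection (untested sketch)."""
    with urllib.request.urlopen(geocode_url(intersection, api_key)) as resp:
        return parse_latlng(resp.read().decode("utf-8"))
```

Batch-geocoding all ~170 cisterns is then just a loop over the intersection list, with some politeness about rate limits.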

Lesson #2: Because GitHub is both a backup system for source code and a platform for sharing open source, many GitHub projects are confusing or useless.

That being said, here is my GitHub repo, which does this conversion: SF Geocoder. Caveat emptor.

Mapping the San Francisco Sewers
This was my second “story” in the Water Works project: to somehow represent the complex system that is underneath us. The details of the sewers are staggering. With approximately 30,000 manholes and 30,000 pipes that connect them, how do you represent, or even begin mapping, this?

And what was the story, after all? It doesn’t quite have the unique character of the cisterns, but it does portray a complex system. Even the DPW hadn’t mapped it out in 3D space. I don’t know if any city ever has. This was the compelling aspect: making the physical model itself from the large dataset.

Building a 3D Modeling System
In addition to looking for data and sifting through the sewer data that I had, I spent the first few weeks building up a codebase in OpenFrameworks.

The only other possibility was Rhino + Grasshopper, a software package I don’t know, and not an Autodesk product. Though it can handle algorithmic model-building, several colleagues were dubious that it could handle my large, custom dataset.

So, I built my own. After several days of work, I mapped out the nodes and pipes as you see below. I represented the nodes as cubes and pipes as cylinders — at least for the onscreen data visualization.

sewer-mapping

This is a closeup of the San Francisco bay waterfront. You can see some isolated nodes and pipes — not connected to the network. This is one example of where the data wasn’t clean. Since this is engineering data, there are all sorts of anomalies like virtual nodes, run-offs and more.

My code was fast and efficient since it was written in C++. More importantly, I wrote custom STL exporters, which let my workflow go directly to a 3D printer without passing through other 3D packages to clean up the data. This took a lot of time, but once I got it working, it saved me hours of frustration later in the project.

seweremapping2

I also mapped out the cisterns in 3D space using the same code. The cisterns are disconnected in reality, but as a 3D print, they need to be one cohesive structure. I modified the ofxDelaunay add-on (thanks, GitHub) to create cylindrical supports that link the cisterns together.
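
The real supports came from that modified ofxDelaunay C++ add-on; as a simplified stand-in, this Python sketch links each cistern to its k nearest neighbors, producing the index pairs that would become cylindrical supports (with small k this doesn't guarantee a single connected structure, which is one reason hand-editing is still needed):

```python
def nearest_neighbor_supports(points, k=2):
    """Link each point (cistern) to its k nearest neighbors, returning a
    set of (i, j) index pairs with i < j; each pair becomes one support."""
    def dist2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    supports = set()
    for i, p in enumerate(points):
        ranked = sorted((dist2(p, q), j) for j, q in enumerate(points) if j != i)
        for _, j in ranked[:k]:
            supports.add((min(i, j), max(i, j)))
    return supports
```
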

What you see here is an “editor”, where I could change the thickness of the supports, remove unnecessary ones and edit the individual cistern models to put holes in certain ones.

I also scaled the cisterns according to their volume. The pre-1906 ones tend to be small, while the largest, at Civic Center, holds about 243,000 gallons, over three times the size of the standard post-earthquake 75,000-gallon cisterns.
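
The post doesn't spell out the scaling rule, but assuming printed volume should track real capacity, the linear scale factor is the cube root of the volume ratio; a 243,000-gallon cistern would then print only about 1.5 times as wide as a 75,000-gallon one:

```python
def linear_scale(gallons, base_gallons=75000.0):
    """Linear scale factor for a cistern model, relative to the standard
    75,000-gallon post-earthquake cistern. Volume grows with the cube of
    linear size, so the factor is a cube root (my assumption, not
    necessarily how the project scaled its models)."""
    return (gallons / base_gallons) ** (1.0 / 3.0)
```
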

OF-cisterns-nomap

Story #3: Imaginary Drinking Hydrants
In the same document that had the locations of all of the San Francisco Cisterns, I also found this gem: 67 emergency drinking hydrants for public use in a city-wide disaster.

Whoa, I thought, how interesting…

drinking_hydrants

I dug deeper and scouted out the intersections in person. I took some photos of the Emergency Drinking Hydrants. They have blue drops painted on them. You can even see them on Street View.

I found online news articles from several years ago, which discussed this program, introduced in 2006, also known as the Blue Drop Hydrant program.


And, I generated a web map, using Javascript and Leaflet.

imaginary-drnkinghydrants

I then published a link to the map onto my Twitter feed. It generated a lot of excitement and was retweeted by many sources.


The SFist, a local San Francisco news blog, ended up covering it. I was excited. I thought I was doing a good public service.

However, there was a backlash…of sorts. It turns out that the program had been discontinued by the SFPUC. The organization did some quick publicity-control on their Facebook page and also contacted the SFist.

The writer of the article then issued a correction, noting that the program had been discontinued, along with a press statement from the SFPUC.


He also had this quote, which was a bit of a jab at me. “It had sounded like designer Scott Kildall, who had been mapping the hydrants, had done a fair amount of research, but apparently not.”

In my defense, I re-researched the emergency drinking hydrants, and nowhere did it say the program was discontinued. Apparently the SFPUC quietly shuffled it out.

But later, I found that my map birthed a larger discussion. The SFPUC had this response, also printed later on SFist.

The key quote by Emergency Planning Director Mary Ellen Carroll is:

“When it comes to sheltering after an emergency, we don’t tell people ahead of time, ‘This is where you’ll need to go to find shelter after an earthquake’ because there’s no way to know if that shelter will still be there.”

It makes sense that central gathering locations could be a bad idea. Imagine a gas leak or something similar at one of these locations. So a water distribution plan would have to be improvised according to the disaster.

We do know from various news articles, and from my own photographs, that there was not only a map, but physical blue drops painted on the hydrants, in addition to a large publicity campaign. The program supposedly cost 1 million dollars, so that would have been an expensive map.

The SFPUC never pulled the old maps from their website, nor did they inform the public that the blue drop hydrants were discontinued.

I blame it on general human miscommunication. And after visiting the SFPUC offices towards the end of my Water Works project, I’m entirely convinced that this is a progressive organization with smart people. They’re doing solid work.

But I had to rethink my mapping project, since these hydrants no longer existed.

When faced with adverse circumstances, at least in the area of mapping and art, you must be flexible. There’s always a solution. This one almost rhymes with Emergency — Imaginary.

Instead of hydrants for emergency drinking water, I ask the question: could we have a city where we could get tap water from these hydrants at any time? What if the water were recycled water?

They could have a faucet handle on them, so you could fill up your bottle when you get thirsty. More importantly, these hydrants could be a public service.

It’s probably impractical in the short term, but I love the idea of reusing the water lines for drinking lines — and having free drinking water in the public commons.

So, I rebranded this map and designed these hydrants with a drinking faucet attached. This would be the base form used for the maps.


Creating Mini Models
With this data-visualization and mapping project, I wanted to strike a balance between aesthetics and legibility. With the datasets I now had and the C++ code that I wrote, I could geolocate cisterns, hydrants and sewer lines.

These would be connected by support structures, in the case of the cisterns and hydrants, and by the pipe data itself for the sewers.

I decided that the actual data points would be miniature models, which I designed in Fusion 360 with the help of Autodesk guru, Taylor Stein. The first one I created was the Cistern model.

cisterns-fusion360

I went through several iterations to come up with this simple model. The design challenge was to come up with a form that looked like it could be an underground tank, but didn’t bring up other associations. In this case, without the three rectangular stubby pieces, it looks like a tortilla holder.

After a day of design and 3D print tests, I settled on this one.

cistern-model

And here you can see the outputs of the cisterns and the hydrants in MeshLab.

meshlab-cisterns

Here is the underside of the hydrant structure, where you can see the holes in the hydrants, which I used later for creating the final sculpture. These are drill holes for mounting the final prints on wood.

meshlab-hydrants-underneath

The manhole chamber design was the hardest one to figure out. This one is more iconographic than representational. Without some sort of symmetry, the look of the underground chamber didn’t resonate. I also wanted to provide a manhole cover on top of the structure. The flat bottom distinguishes it from the pipes.

manhole

Mapping and Legibility

stamen

One of my favorite aspects about being at Stamen is that four days a week, they provided lunch for us. We all ate lunch together. This was a good chunk of unstructured time to talk about mapping, music, personal life, whatever.

We solidified bonds — shared lunch is so often overlooked in organizations. In addition to informal discussion of the project, we also had a few creative brainstorm sessions, where I would present the progress of the project and get feedback from several people at Stamen. Folks from Autodesk and Gray Area also joined the discussion.

I hadn’t considered situating these prints on a map before, but the group suggested integrating one: I should geolocate the models on top of a map. This was a brilliant direction for the project.

OF-imaginaryhydrants-map

Stamen provided me with a high-resolution map that I could laser-etch, which came later, after the 3D printing. Now, with this direction for the project, I started making the actual 3D prints.

map-for-etching

Mega-prints with lots of cleaning
After all the mapping, arduous data-smoothing, and tests upon structural tests, I was finally ready to spool off the large-scale 3D prints. Each print was approximately the size of the Objet 500 print bed: 20″ x 16″, making these huge. A big thanks to Autodesk for sponsoring the work and providing the machines.

Each print took between 40 and 50 hours of machine time, so I sent these out as weekend-long jobs. Time and resources were limited, so this was a huge endeavor.

cisterns-buildtime

I was worried that the prints would fail, but I got lucky in each case. The prints are combined resin materials: VeroClear and VeroWhite for the Cisterns and Hydrants, and mixes of VeroWhite and VeroBlack for the Sewers.

support-cisterns-far

When the prints come off the print bed, they are encased in a support material, which I first scraped off; then I used a high-pressure water system to spray off the rest.
cleaning-cistern

It took hours upon hours to get from this.

sewerworks

To this: a fully cleaned version of the Sewer print. This 3D print is of a section of the city: the Embarcadero area, which includes the Pier 9 facility where Autodesk is located.

For the Sewer Works print, the manhole chambers and pipes are scaled to the sizes in the data tables. I increased the elevation about 3 times to capture the hilly terrain of San Francisco. What you see here is an aerial view, as if you were in a helicopter flying from Oakland to San Francisco. The diagonal is Market Street, ending at the Ferry Building. On the right side, towards the back of the print, is Telegraph Hill. There are large pipes and chambers along the Embarcadero. Smaller ones comprise the sewer system in the hilly areas.
sewerworks-3d

Map-Etching and Final Fabrication
I’ll just summarize the final fabrication — this blog post is already very long. For more details, you can read this Instructable on how I did the fabrication work.

Using cherry wood, which I planed, jointed and glued together, I laser-etched these maps, which came out beautifully.

I chose wood both for its beautiful finish and because the material references the wooden Victorian and Edwardian houses that define the landscape of San Francisco. The laser-etching burns away the wood, like the fires after the 1906 Earthquake, which spawned the AWSS.


The map above is the waterfront area for the Sewer Works print, and the one below is the full map of the city that I used as the base for the San Francisco Cisterns and the Imaginary Drinking Hydrants sculptures.

The last stages of the woodwork involved traditional fabrication, which I did at the Autodesk facilities at Pier 9.

I drilled out the holes for mounting the final 3D prints on the wood bases and then mounted them on 1/16″ stainless rods, such that they float about 1/2″ above the wood map.
And the final stage involved manually fitting the prints onto the rods.

Final Results
Here are the three prints, mounted on the wood-etched maps.

Below is the Imaginary Drinking Hydrants print. This was the most delicate of the 3D prints.


These are the San Francisco Cisterns, which are concentrated in the older parts of San Francisco. They are nearly absent from the western part of the city, which became densely populated well after the 1906 Earthquake.

This is the Sewer Works print. The map is not as visible because of the density of the network. The pipes are a light gray and the manhole chambers a medium gray. The map does capture the extensive network of manmade piers along the waterfront.

The Website: San Francisco Cisterns and Imaginary Drinking Hydrants
The website for this project is waterworks.io. It has three interactive web maps, one for each of the three water systems.

The aforementioned Instructable, Mapping San Francisco Cisterns, details how I made these. The summary is that I did a lot of data-wrangling, often using Python to transform the data into GeoJSON files, a web-mappable format.
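
The gist of that wrangling step looks something like this in Python (the field names here are made up for illustration; the real tables differ):

```python
import json

def to_geojson(cisterns):
    """cisterns: iterable of dicts with (assumed) 'lng', 'lat' and
    'gallons' keys. Returns a GeoJSON FeatureCollection string that
    Leaflet can load directly."""
    features = [
        {
            "type": "Feature",
            # GeoJSON coordinate order is [longitude, latitude]
            "geometry": {"type": "Point",
                         "coordinates": [c["lng"], c["lat"]]},
            "properties": {"gallons": c["gallons"]},
        }
        for c in cisterns
    ]
    return json.dumps({"type": "FeatureCollection", "features": features})
```
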

The Stamen designer-technicians were invaluable in pointing me to Leaflet, an easy-to-use mapping interface. I struggled with it for a while, as I was a complete newbie to Javascript, but eventually sorted out how to create maps and customize the interactive elements.

Fortunately, I also received help from the designers at Stamen on the graphics. I only have so many skills, and graphic design is not one of them.

cisternsmapping

The Website: Life of Poo
Leaflet’s performance bogged down when I had more than about 1500 markers, and the sewer system has about 28,000.

I spent a lot of energy on node-trimming, using a combination of Python and Java code, and winnowed the count down to about 1500. The consolidated node list was based on distance and used various techniques to map a small set of nodes in a cohesive way.
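
As a hypothetical sketch of the distance-based part (the real utility was considerably more involved), one simple technique is greedy thinning: walk the node list and keep a node only if it isn't too close to anything already kept:

```python
def consolidate(nodes, threshold):
    """Greedy distance-based thinning: keep a node only if it is at least
    `threshold` away from every node already kept. Order-dependent, and
    far simpler than the project's actual consolidation."""
    kept = []
    t2 = threshold * threshold
    for x, y in nodes:
        # compare squared distances to avoid a sqrt per pair
        if all((x - kx) ** 2 + (y - ky) ** 2 >= t2 for kx, ky in kept):
            kept.append((x, y))
    return kept
```

Raising the threshold until fewer than ~1500 nodes survive is then a one-line loop.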

lifeofpoo

In the hours just before presenting the project, I finished Life of Poo: an interactive journey of toilet waste.

On the website, you can enter an address (in San Francisco), such as “Twin Peaks, SF” or “47th & Judah, SF”, and then press Flush Toilet.

This will begin an animated poo journey down the sewer map and to the wastewater treatment plant.

Not all of the flushes work as you’d expect. There are still glitches and bugs in the code. If you type in “16th & Mission”, the poo just sits there.
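
A plausible guess at the routing underneath the animation (my sketch, not the site's actual Javascript): a breadth-first search from the flush point to the treatment plant over the pipe graph. A node left disconnected by the trimming returns no path at all, which would look exactly like a poo that just sits there:

```python
from collections import deque

def flush_path(pipes, start, plant):
    """Breadth-first search through the pipe graph. pipes maps a node to
    its connected nodes. Returns the node sequence from the flush point
    to the treatment plant, or None when no route exists."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == plant:
            return path
        for nxt in pipes.get(path[-1], []):
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None  # the poo just sits there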

Why do I have these bugs? I have some ideas (see below), but I really like the chaotic results, so I’ll keep them for now.

Lesson #3: Sometimes you should sacrifice accuracy.

Future Directions
I worked very, very hard on this project and I’m going to let it rest for a while. There’s still some work that I would like to do some day.

Cistern Map
I’d like to improve the Cistern Map, as I think it has cultural value. As far as I know, it’s the only one on the web. The data comes from intersections and, while close, is not entirely correct. Sometimes the intersection data is off by a block or so. I don’t think this affects the integrity of the 3D map, but it would be important to correct for the web portion.

Life of Poo
I want to see how this interactive map plays out and how people respond to it in the next couple of months. The animated poo is universally funny, but it doesn’t behave “properly”. Sometimes it gets stuck. This was the last part of the Water Works project and one that I got working the night before the presentation.

I had to do a lot of node-trimming to make this work — Leaflet can only handle about 1500 data points before it slows down too much, so I did a lot of trimming from a set of about 28,000. This could be one source of the inaccuracies.

I don’t take gravity into account in the flow calculations, which is why I think the poo behaves oddly. But maybe the map is more interesting this way. It is, after all, an animated poo emoji.

Infrastructure Fabrication
This is where the project gets very interesting. What I’ve been able to accomplish with the “Sewer Works” print is to show how the sewer pipes of San Francisco look as a physical manifestation. This is only the beginning of many possibilities. I’d be eager to develop this technology and modeling system further, taking the usual GIS maps and translating them into physical models.

Thanks for reading this far and I hope you enjoyed this project,
Scott Kildall

Polycon in Berlin

This week I traveled to Berlin for Polycon. No…it’s not a convention on polyamory, but a project developed by my longtime friend, Michael Ang (aka Mang). Polygon Construction Kit (aka Polycon) is a software toolkit for converting 3D polygon models into physical objects.

I wanted an excuse to visit Berlin, to hang out with Mang and to open up some possibilities for physical data-visualization behind EquityBot, which I’m working on for my artist residency at Impakt Works and their upcoming festival.

I brought my recently-purchased Printrbot Simple Metal, which I had disassembled into this travel box.

After less than 30 minutes, I had it reassembled and working. Victory! Here it is, printing one of the polygon connectors.

How does Polycon work? Mang shared the details with me. You start with a simple 3D model from some sort of program. He uses SketchUp for creating physical models of his large-scale sculptures. I prefer OpenFrameworks, which is powerful and will let me easily manipulate shapes from data streams.

Here’s the simple screenshot in OpenFrameworks of two polyhedrons. I just wrote this the other day, so there’s no UI for it yet.


And here is how it looks in MeshLab. It’s water-tight, meaning that it can be 3D-printed.

My goal is to do larger-scale data visualizations than some of my previous works such as Data Crystals and Water Works. I imagine room-sized installations. I’ve had this idea for many months of using the 3D printer to create joinery from datasets and to skin the faces using various techniques, TBD.

How it works: Polycon loads a 3D model and, using Python scripts in FreeCAD, generates 3D joints that, along with wooden dowels, can be assembled into polygonal structures.
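
The joint generation itself lives in Mang's FreeCAD scripts, but the geometric heart of it is knowing the direction of every edge leaving a vertex, since each direction becomes one dowel socket in the printed joint. A hypothetical Python sketch of just that piece:

```python
def joint_directions(vertices, edges, vi):
    """vertices: list of (x, y, z); edges: (i, j) index pairs; vi: the
    vertex index to build a joint for. Returns unit vectors along each
    edge leaving vertex vi; each vector is the axis of one dowel socket."""
    dirs = []
    for i, j in edges:
        if vi in (i, j):
            other = j if vi == i else i
            d = [vertices[other][a] - vertices[vi][a] for a in range(3)]
            length = sum(c * c for c in d) ** 0.5
            dirs.append(tuple(c / length for c in d))
    return dirs
```

A joint solid would then be a small hub with one cylindrical socket swept along each of these directions.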

The Printrbot makes adequate joinery, but it’s nowhere near as pretty as the Vero prints on the Objet 500 at Autodesk. It doesn’t matter that much, because my digital joinery will be hidden in the final structures.

Mang guided me through the construction of my first Polycon structure. There’s a lot of cleanup work involved, such as drilling out the holes in each of the joints. It took a while to assemble the basic form. There are vertex-numbering improvements that we’ll both make to the software. Together, Mang and I brainstormed ideas as to how to make the assembly go more quickly. After about 15 minutes, I got my first polygon assembled.

It looks a lot like…the 3D model. I plan to be working on these forms in the next several months, so I felt great after a successful first day.

And here is a really nice image of one of Mang’s pieces — sculptures of mountains that he created from memories while flying high in a glider. I like where he’s going with his artwork: making models based on nature, with ideas of recording these spaces and playing them back in various urban spaces. You can check out Michael Ang’s work on his website.

Life of Poo

I’ve been blogging about my Water Works project all summer and after the Creative Code Gray Area presentation on September 10th, the project is done. Phew. Except for some of the residual documentation.

In the hours just before my presentation, I also managed to get Life of Poo working. What is it? An interactive map of where your poo goes, based on the sewer data that I used for this project.

Huh? Try it.


This is the final piece of my web-mapping portion of Water Works and uses Leaflet with animated markers, all in Javascript, which is a new coding tool in my arsenal (I know, late to the party). I learned the basics in the Gray Area Creative Code Immersive class, which was provided as part of the fellowship.

The folks at Stamen Design also helped out and their designer-technicians turned me onto Leaflet as I bumbled my way through Javascript.

How does it work?

On the Life of Poo section of the Water Works website, you enter an address (in San Francisco), such as “Twin Peaks, SF” or “47th & Judah, SF”, and then press Flush Toilet.

This will begin an animated poo journey down the sewer map and to the wastewater treatment plant.

Not all of the flushes work as you’d expect. There are still glitches and bugs in the code. If you type in “16th & Mission”, the poo just sits there. Hmmm.

Why do I have these bugs? I have some ideas (see below), but I really like the chaotic results, so I’ll keep them for now.


I think the erratic behavior is happening because of a utility I wrote, which does some complex node-trimming and doesn’t take gravity into account in its flow diagrams. The sewer data has about 30,000 valid data points, and Leaflet can only handle about 1500 or so without taking forever to load and refresh.

The utility parses the node data tree and recursively prunes it to a more reasonable number, combining upstream and downstream nodes. In an overflow situation, technically speaking, there are nodes where waste might be directed away from the wastewater treatment plant.

However, my code isn’t smart enough to determine which are overflow pipes and which are pipes to the treatment plants, so the node-flow doesn’t work properly.

In case you’re still reading, here’s an illustration of a typical combined system, showing how the pipes might look. The sewer outfall doesn’t happen very often, but when your model ignores gravity, it sure will.

CombineWasteWaterOverflow

The 3D print of the sewer, the one that uses the exact same dataset as Life of Poo, looks like this.

sewerworks_front sewerworks_top

EquityBot @ Impakt

My exciting news is that this fall I will be an artist-in-residence at Impakt Works in Utrecht, the Netherlands. The same organization puts on the annual Impakt Festival, a media arts festival that has been happening since 1988. My residency runs from Sept 15 to Nov 15 and coincides with the festival at the end of October.

Utrecht is a 30-minute train ride from Amsterdam and 45 minutes from Rotterdam; by all accounts it is a small, beautiful canal city with medieval origins, and it hosts the largest university in the Netherlands.

Of course, I’m thrilled. This is my first European art residency and I’ll have a chance to reconnect with some friends who live in the region as well as make many new connections.

The project I’ll be working on is called EquityBot and will premiere at the Impakt Festival in late October as part of their online component. It will have a virtual presence like my Playing Duchamp artwork (a Turbulence commission) and my more recent project, Bot Collective, produced while an artist-in-residence at Autodesk.

Like many of my projects this year, this will involve heavy coding, data-visualization and a sculptural component.

equity_bot_logo

At this point, I’m in the research and pre-production phase. While configuring back-end server code, I’m also gathering reading materials about capital and algorithms for the upcoming plane rides, train rides and rainy Netherlands evenings.

Here is the project description:

EquityBot

EquityBot is a stock-trading algorithm that explores the connections between collective emotions on social media and financial speculation. Using custom algorithms Equitybot correlates group sentiments expressed on Twitter with fluctuations in related stocks, distilling trends in worldwide moods into financial predictions which it then issues through its own Twitter feed. By re-inserting its results into the same social media system it draws upon, Equitybot elaborates on the ways in which digital networks can enchain complex systems of affect and decision making to produce unpredictable and volatile feedback loops between human and non-human actors.

Currently, autonomous trading algorithms comprise the large majority of stock trades. These analytic engines are normally sequestered within private investment companies operating with billions of dollars. EquityBot reworks this system, imagining what it might be like if this technological attention were directed towards the public good instead. How would the transparent, public sharing of powerful financial tools affect the way the stock market works for the average investor?

kildall_bigdatadreams

I’m imagining a digital fabrication portion of EquityBot, which will be the more experimental part of the project and will involve 3D-printed joinery. I’ll be collaborating with my longtime friend and colleague, Michael Ang, on the technology — he’s already been developing a related polygon construction kit — as well as doing some idea-generation together.

“Mang” lives in Berlin, which is a relatively short train ride, so I’m planning to make a trip where we can work together in person and get inspired by some of the German architecture.

My new 3D printer — a Printrbot Simple Metal — will accompany me to Europe. This small, relatively portable machine produces decent-quality results, at least for 3D joints, which will be hidden anyway.

printrbot

WaterWorks: From Code to 3D Print

In my ongoing Water Works project — a Creative Code Fellowship with Stamen Design, Gray Area and Autodesk — I’ve been working for many, many hours on code and data structures.

The immediate results were a Map of the San Francisco Cisterns and a Map of the “Imaginary Drinking Hydrants”.

However, I am also making 3D prints: fabricated sculptures, which I lay out in 3D space using custom software and then 3D print.

The process has been arduous, and I’ve learned a lot. I’m not sure I’d do it this way again, since I ended up writing a lot of custom code to do things like triangle-winding for STL output and much, much more.

Here is how it works. First, I create a model in Fusion 360 — an Autodesk application — which I’ve slowly been learning and have become fond of.


From various open datasets, I map out the geolocations of the hydrants or the cisterns in X,Y space. You can check out this Instructable on mapping the cisterns and this blog post on mapping the hydrants for more info. Using OpenFrameworks — an open source toolset in C++ — I map these out in 3D space. The Z-axis is the elevation.
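
The exact projection isn't in the post, but a plausible sketch is a local equirectangular mapping around a reference point, with longitude scaled by the cosine of the latitude so east-west distances aren't stretched, and elevation (exaggerated) as Z. The function name and the z_scale default are my assumptions:

```python
import math

EARTH_RADIUS_M = 6371000.0

def to_xyz(lat, lng, elev_m, ref_lat, ref_lng, z_scale=3.0):
    """Local equirectangular projection around (ref_lat, ref_lng), with
    elevation exaggerated by z_scale as the Z axis (meters)."""
    x = math.radians(lng - ref_lng) * EARTH_RADIUS_M * math.cos(math.radians(ref_lat))
    y = math.radians(lat - ref_lat) * EARTH_RADIUS_M
    return x, y, elev_m * z_scale
```

Over an area the size of San Francisco, the distortion from this flat approximation is negligible for a sculpture.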

The hydrants and cisterns are both disconnected entities in 3D space. They’d fall apart in a 3D print, so I use Delaunay triangulation code to connect the nodes into a single 3D shape.

I designed my custom software to export a ready-to-print set of files in STL format. My C++ code includes an editor which lets you do two things:

(1) specify which hydrants are “normal” hydrants and which ones have mounting holes in the bottom. The green ones have mounting holes, which are different STL files. I will insert 1/16″ stainless steel rod into the mounting holes and have the 3D prints “floating” on a piece of wood or some other material.

(2) my editor will also let you remove and strengthen each Delaunay triangulation node — the red one is the one currently selected. This is the final layout for the print, but you can imagine how criss-crossed and hectic the original one was.


Here is an exported STL in MeshLab. You can see the mounting holes at the bottom of some of the hydrants.

I ran many, many tests before the final 3D print.

imaginary_drinking_faucets

And finally, I set up the print over the weekend. Here is the print, 50 hours later.
on_the_tray

It’s like I’m holding a birthday cake — I look so happy. This is at midnight last Sunday.

scott_holding_tray

The cleaning itself is super-arduous.

scott_cleaning

And after my initial round of cleaning, this is what I have.

hydrats_rough

And here are the cistern prints.

cisterns_3d

I haven’t yet mounted these prints, but this will come soon. There’s still loads of cleaning to do.

 

Modeling Cisterns

How do you construct a 3D model of something that lives underground and only exists in a handful of pictures taken from the interior? This was my task for the Cisterns of San Francisco last week.

The backstory: have you ever seen those brick circles in intersections and wondered what the heck they mean? I sure have.

It turns out that underneath each circle is an underground cistern. There are 170 or so* of them spread throughout the city. They’re part of the AWSS (Auxiliary Water Supply System) of San Francisco, a water system that exists entirely for emergency use.

The cisterns are just one aspect of my research for Water Works, which will map out the San Francisco water infrastructure and data-visualize the physical pipes and structures that keep the H2O moving in our city.

This project is part of my Creative Code Fellowship with Stamen Design, Gray Area and Autodesk.

Cistern_1505_MedRes

Many others have written about the cisterns: Atlas Obscura, Untapped Cities, Found SF, and the cisterns even have their own Wikipedia page, albeit one that needs some edits.

The original cisterns, about 35 or so, were built in the 1850s in the Telegraph Hill to Rincon Hill area, after a series of great fires ravaged the city. In the next several decades they went largely unused, but the fire department kept them filled with water for a “just in case” scenario.

Meanwhile, in the late 19th century, as San Francisco rapidly developed into a large city, it began building a pressurized hydrant-based fire system, which many saw as a more effective way to deliver water in case of a fire. Many thought of the cisterns as antiquated and unnecessary.

However, when the 1906 earthquake hit, the SFFD was soon overwhelmed by a fire that tore through the city. The water mains collapsed. The old cisterns were one of the few sources of reliable water.

After the earthquake, the city passed bonds to begin construction of the AWSS — the separate water system just for fire emergencies. In addition to special pipes and hydrants fed from dedicated reservoirs, the city constructed about 140 more underground cisterns.

The cisterns are nodes disconnected from the network, with no pipes feeding them, and are maintained by the fire department, which presumably fills them every year. I’ve heard that some are incredibly leaky and others are watertight.

What do they look like inside? This is the *only* picture I can find anywhere, and it shows a cistern in the midst of seismic upgrade work. This one was built in 1910 and holds 75,000 gallons of water, the standard size for the cisterns. They are HUGE. As you can surmise from this picture, the water is not for drinking.

cistern

(Photographer: Robin Scheswohl; Title: Auxiliary Water supply system upgrade, San Francisco, USA)

Since we can’t see the outside of an underground cistern, I can only imagine what it might look like. My first sketch looked something like this.

cistern_drawing

I approached Taylor Stein, Fusion 360 product evangelist at Autodesk, who helped me make my crude drawing come to life. I printed it out on one of the Autodesk 3D printers and lo and behold, it looks like this: a double hamburger with a nipple on top. Arggh! Back to the virtual drawing board.

IMG_0010

I scoured the interwebs and found this reference photograph of an underground German cistern. It’s clearly smaller than the ones in San Francisco, but it looks like it would hold water. The form is unique and didn’t seem to connote anything other than a vessel-that-holds-water.

800px-Unterirdische_Zisterne

Once again, Taylor helped me bang this one out — within 45 minutes, we had a workable model in Fusion 360. We made ours with slightly wider dimensions on the top cone. The lid looks like a manhole cover.

cistern_3d

Within a couple of hours, I had some 3D prints ready. I printed out several sizes, scaling the height for various aesthetic tests.

cistern_models_printed

This was my favorite one. It vaguely looks like a cooking pot or a tortilla canister, but not *very* much. Those three rectangular ridges, parked at 120-degree angles, give it an unusual form.

IMG_0006

Now, it’s time to begin the more arduous project of mapping the cisterns themselves. And the tough part is still finishing the software that maps the cisterns into 3D space and exports them as an STL with some sort of binding support structure.

* I’ve only been able to locate 169 cisterns. Some reports state that there are 170; others say 173 or 177.

Mapping Manholes

The last week has been a flurry of coding, as I’m quickly creating a crude but customized data-3D modeling application for Water Works — an art project for my Creative Code Fellowship with Stamen Design, Gray Area and Autodesk.

This project builds on my Data Crystals sculptures, which algorithmically transform various public datasets into 3D-printable art objects. For that artwork, I used Processing with the Modelbuilder library to generate STL files. It was a fairly easy coding solution, but I ran into performance issues along the way.

Processing tends to choke when managing 30,000 simple 3D cubes, and my clustering algorithms took hours to run. Because it isn’t compiled into native machine code, it carries layers of inefficiency.

I bit the coding bullet and this week migrated my code to OpenFrameworks (an open source C++ environment). I’ve used OF before, but never with 3D work. There are still lots of gaps in the libraries, specifically the STL exporting, but I’ve had some initial success, woo-hoo!
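
The posts don’t describe the clustering algorithm in detail, so this is only a guessed sketch of its flavor: one iteration of a simple gravity-style step that pulls every cube toward the centroid of the set.

```cpp
#include <vector>

struct Cube { float x, y, z, size; };

// One clustering iteration: each cube steps a fraction of the way toward
// the centroid. In a real clustering pass, collision tests would stop
// cubes once they touch, packing the shape together.
void clusterStep(std::vector<Cube>& cubes, float rate) {
    if (cubes.empty()) return;
    float cx = 0, cy = 0, cz = 0;
    for (const Cube& c : cubes) { cx += c.x; cy += c.y; cz += c.z; }
    cx /= cubes.size(); cy /= cubes.size(); cz /= cubes.size();
    for (Cube& c : cubes) {
        c.x += (cx - c.x) * rate;
        c.y += (cy - c.y) * rate;
        c.z += (cz - c.z) * rate;
    }
}
```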

Here are all the manholes — the technical term is “sewer nodes” — mapped into 3D space using GIS lat/lon and elevation coordinates. The clear indicator that this is San Francisco (and not Wisconsin, which this mapping vaguely resembles) is the swath of empty space that is Golden Gate Park.

What hooked me was that “a-ha” moment when the 3D points rendered properly on my screen. I was on a plane flight home from Seattle and involuntarily emitted an audible yelp. Check out the 3D mapping: there’s a density of nodes along Twin Peaks, and I accentuated the z-values to make San Francisco look even more hilly and to better show the locations of the sewer chambers.

Sewer nodes are just the start. I don’t have the connecting pipes in there just yet, not to mention the cisterns and other goodies of the SF water infrastructure.

water_works_nodes_screen_shot

Of course, I want to 3D print this. By increasing the node size — the cubic dimensions of each manhole location — I was able to generate a cohesive, 3D-printable structure. This is the Meshlab export with my custom-modified STL export code. I never thought I’d get this deep into 3D coding, but now I know all sorts of details, like triangle winding and the right-hand rule for STL export.

3d_terrain_meshlab

And here is the 3D print of the San Francisco terrain, like the Data Crystals, with many intersecting cubes.

3d_terrain_better

It doesn’t have the aesthetic crispness of the Data Crystals project, but this is just a test print — very much a work-in-progress.
data_crystals
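
For anyone curious about the winding detail mentioned above: with counter-clockwise vertex order (viewed from outside the solid), the right-hand rule gives the outward facet normal as the normalized cross product. A minimal sketch of how an ASCII STL exporter might emit one facet — the function names are illustrative, not the project’s actual code:

```cpp
#include <cmath>
#include <cstdio>

struct Vec3 { double x, y, z; };

// Outward facet normal via the right-hand rule: normalize (v1-v0) x (v2-v0)
// for counter-clockwise vertex winding.
Vec3 facetNormal(const Vec3& v0, const Vec3& v1, const Vec3& v2) {
    Vec3 a{v1.x - v0.x, v1.y - v0.y, v1.z - v0.z};
    Vec3 b{v2.x - v0.x, v2.y - v0.y, v2.z - v0.z};
    Vec3 n{a.y * b.z - a.z * b.y,
           a.z * b.x - a.x * b.z,
           a.x * b.y - a.y * b.x};
    double len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    return {n.x / len, n.y / len, n.z / len};
}

// One facet record, as an ASCII STL exporter would emit it.
void writeFacet(std::FILE* f, const Vec3& v0, const Vec3& v1, const Vec3& v2) {
    Vec3 n = facetNormal(v0, v1, v2);
    std::fprintf(f, "facet normal %f %f %f\n  outer loop\n", n.x, n.y, n.z);
    std::fprintf(f, "    vertex %f %f %f\n", v0.x, v0.y, v0.z);
    std::fprintf(f, "    vertex %f %f %f\n", v1.x, v1.y, v1.z);
    std::fprintf(f, "    vertex %f %f %f\n", v2.x, v2.y, v2.z);
    std::fprintf(f, "  endloop\nendfacet\n");
}
```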

 

Creative Code Fellowship: Water Works Proposal

Along with 3 other new media artists and creative coding experts, I was recently selected to be a Creative Code Fellow for 2014 — a program pioneered by Gray Area (formerly known as GAFFTA and now in a new location in the Mission District).

Each of us is paired with a partnering studio, which provides a space and creative direction for our proposed project. The studio that I’m pleased to be working with is Stamen Design, a leader in the field of aesthetics, mapping and data-visualization.

I’ll also be continuing my residency work at Autodesk at Pier 9, which will be providing support for this project as well.

My proposed project is called “Water Works” — a 3D-printed data visualization of San Francisco’s water system infrastructure, along with some sort of web component.

grayarea-fellowship-home-page

 

Creative Code Fellowship Application Scott Kildall

Project Proposal (250 word limit)
My proposed project “Water Works” is a 3D data visualization of the complex network of pipes, aqueducts and cisterns that control the flow of water into our homes and out of our toilets. What lies beneath our feet is a unique combined wastewater system — where stormwater mixes with sewer lines and travels to a waste treatment plant, using gravitational energy from the San Francisco hills.

This dynamic flow is the circulatory system of the organism that is San Francisco. As we are impacted by climate change, which escalates drought and severe rainstorms, combined with population growth, how we obtain our water and dispose of it is critical to the lifeblood of this city.

Partnering with Autodesk, which will provide materials and shop support, I will write code, which will generate 3D prints from municipal GIS data. I imagine ghost-like underground 3D landscapes with thousands of threads of water — essentially flow data — interconnected to larger cisterns and aqueducts. The highly retinal work will invite viewers to explore the infrastructure the city provides. The end result might be panels that snap together on a tabletop for viewers to circumnavigate and explore.

The GIS data is available from San Francisco, though not online, and I’ve already obtained cooperation from SFDPW about providing some of the infrastructure data necessary to realize this project.

While my focus will be on the physical portion of this project, I will also build an interactive web-based version from the 3D data, making this a hybrid screen-physical project.

Why are you interested in participating in this fellowship? (150 word limit)
The fellowship would give me the funding, visibility and opportunity of working under the umbrella of two progressive organizations: Gray Area and Stamen Design. I would expand my knowledge, serve the community and increase my artistic potential by working with members of these two groups, both of which have a progressive vision for art and design in my longtime home of San Francisco.

Specifically, I wish to further integrate 3D printing into the data visualization conversation. With the expertise of Stamen, I hope to evolve my visualization work at Autodesk. The 3D-printing technology makes possible what has hitherto been impossible to create and has enormous possibilities to materialize the imaginary.

Additionally some of the immersive classes (HTML5, Javascript, Node.js) will be helpful in solidifying my web-programming skills so that I can produce the screen-based portion of this proposal.

What experience makes this a good fit for you? (150 word limit)
I have deep experience in producing both screen-based and physical data visualizations. While at the Exploratorium, I worked on many such exhibits for a general audience.

One example is a touch-screen exhibit called “Seasons of Plankton”, which shows how plankton species in the Bay change over the year, reflecting a diverse ecosystem of microscopic organisms. I collaborated with scientists and visitor evaluators to determine the optimal way to tell this story. I performed all of the coding work and media production for this successful piece.

While at Autodesk, my focus has been creating 3D data visualizations with my custom code that transforms public data sets into “Data Crystals” (these are the submitted images). This exploration favors aesthetics over legibility. I hope to build upon this work and create physical forms, which help people see the dynamics of a complex urban water system to invite curiosity through beauty.

 

World Data Crystals

I just finished three more Data Crystals, produced during my residency at Autodesk. This set of three are data visualizations of world datasets.

This first one captures the population of every city in the world. After some internet sleuthing, I found a comprehensive .csv file of all of the cities by lat/long and their populations, and I worked on mapping the 30,000 or so data points into 3D space.

I rewrote my Data Crystal Generation program to translate the lat/long values into a sphere of world data points. I had to rotate the cubes to make them appear tangential to the globe. This forced me to re-learn high school trig functions, argh!
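
Those trig functions boil down to the standard spherical conversion; here is a sketch, with an illustrative radius:

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// Standard spherical-to-Cartesian conversion for placing a data point on
// the globe.
Vec3 latLonToSphere(double latDeg, double lonDeg, double radius) {
    const double kPi = 3.14159265358979323846;
    double lat = latDeg * kPi / 180.0;
    double lon = lonDeg * kPi / 180.0;
    return { radius * std::cos(lat) * std::cos(lon),
             radius * std::cos(lat) * std::sin(lon),
             radius * std::sin(lat) };
}
// To sit tangent to the globe, each cube is then rotated so its local
// up-axis points along this position vector (yaw from lon, pitch from lat).
```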

world_dc

What I like about the way this looks is that the negative space invites the viewer into the 3D mapping. The Sahara Desert is empty, just like the Atlantic Ocean. Italy has no negative space. There are no national boundaries or geographical features, just cubes and cities.

I sized each city by area, so that the bigger cities are represented as larger cubes. Here is the largest city in the world: Tokyo.

world_tokyo

This is the clustering algorithm in action. Running it in realtime in Processing takes several hours; this is roughly what the video would look like if I were using C++ instead of Java.

I’m happy with the clustered Data Crystal. The hole in the middle of it is the result of the gap in data created by the Pacific Ocean.

world_pop_crystal

The next Data Crystal maps all of the world’s airports. I learned that the United States has about 20,000 airports — most of them small, unpaved runways. I still don’t know why.

Here is a closeup of the US, askew with Florida in the upper-left corner.

us_closeup

I performed similar clustering functions and ended up with this Data Crystal, which vaguely resembles an airplane.

world_airports_data_crystal

The last dataset, which is not pictured because my camera ran out of batteries (and my charger was at home), represents all of the nuclear detonations in the world.

I’ll have better pictures of these crystals in the next week or so. Stay tuned.

 

First three Data Crystals

My first three Data Crystals are finished! I “mined” these from the San Francisco Open Data portal. My custom software culls through the data and clusters it into a 3D-printable form.

Each one involves different clustering algorithms. All of them start with geo-located data (x,y), with either time or space on the z-axis.

Here they are! And I’d love to do more (though a lot of work was involved).

Incidents of Crime
This shows the crime incidents in San Francisco over a 3-month period, with over 35,000 data points (the crystal took about 5 hours to “mine”). Each incident is a single cube. Less serious crimes such as drug possession are represented as small cubes, and more severe crimes such as kidnapping are larger ones. It turns out that crime happens everywhere, which is why this is a densely-packed shape.
datacrystal_crime
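
A sketch of the severity-to-size idea described above; the category names and weights here are invented stand-ins for illustration, not the dataset’s real values:

```cpp
#include <map>
#include <string>

// Hypothetical severity-to-cube-size lookup. Unknown categories fall back
// to the smallest cube.
float cubeSizeForCrime(const std::string& category) {
    static const std::map<std::string, float> severity = {
        {"DRUG/NARCOTIC", 1.0f},
        {"LARCENY/THEFT", 1.5f},
        {"ASSAULT", 2.5f},
        {"KIDNAPPING", 4.0f},
    };
    auto it = severity.find(category);
    return it != severity.end() ? it->second : 1.0f;
}
```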

 

Construction Permits
This shows the current development pipeline — the construction permits in San Francisco. Work that affects just a single unit appears as smaller cubes, and larger cubes correspond to larger developments. The upper left side of the crystal is the south side of the city — there is a lot of activity in the Mission and Excelsior districts, as you would expect. The arm on the upper right is West Portal. The nose towards the bottom is some skyscraper construction downtown.

dc_development

 

Civic Art Collection
This Data Crystal is generated from the San Francisco Civic Art Collection. Each cube is the same size, since it doesn’t feel right to make one art piece larger than another. The high top is City Hall, and the part extending below is some of the spaces downtown. The tail on the end is the artwork at San Francisco Airport.

datacrystal_sfart

 

Support material is beautiful

I finished three final prints of my Data Crystals project over the weekend. They look great and tomorrow I’m taking official documentation pictures.

These are what they look like in the support material, which is also beautiful in its ghostly, womb-like feel.

I’ve posted photos of these before, but I’m still stunned at how amazing they look.

IMG_1000 IMG_1003 IMG_1004 IMG_1006 IMG_1007 IMG_1009 IMG_1011 IMG_1012

EEG Data Crystals

I’ve had the Neurosky Mindwave headset in a box for over a year and just dove into it, as part of my ongoing Data Crystals research at Autodesk. The device is the technology backbone behind the project: EEG AR with John Craig Freeman (still working on funding).

The headset fits comfortably. Its space-age retro look is aesthetically pleasing, though I’d cover up the logo in a final art project. The gray arm rests on your forehead and reads your EEG levels, translating them into several values. The most useful are “attention” and “meditation”, which are calculations derived from a few different brainwave patterns.

eeg_headest

I’ve written custom software in Java, using the Processing libraries and ModelBuilder, to generate 3D models in real-time from the headset. But after copious user-testing, I found that the effective sample rate of the headset is 1 sample/second.* Ugh.

This isn’t the first time I’ve used the Neurosky headset. In 2010, I developed an art piece, a portable personality kit called “After Thought”. That piece, however, relied on slow activity and was more like a tarot card reading, where the headset readings were secondary to the performance.

The general idea for the Data Crystals is to translate data into 3D prints. I’ve worked with data from San Francisco’s Data Portal. However, the idea of generating realtime 3D models from biometric data is hard to resist.

This is one of my first crystals — just a small sample of 200 readings. The black jagged squares represent “attention” and the white cubes correspond to “meditation”.

IMG_0963

Back to the sample rate… a real-time reading of 600 samples would take 10 minutes. Still, it’s great to be able to work in real-time, so I imagine a dark room and a beanbag chair where you think about your day and then generate the prints.
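
The arithmetic is simple: at an effective 1 sample/second, a session of n readings spans n seconds, so 600 readings makes a 10-minute recording. A sketch of that bookkeeping (the struct is illustrative, not the project’s code):

```cpp
// Hypothetical session counter for the 1 sample/second headset readings.
struct Session {
    int samples = 0;
    void addReading(int attention, int meditation) {
        (void)attention; (void)meditation;  // values would feed the 3D model
        ++samples;
    }
    // One sample per second, so duration in minutes is samples / 60.
    double minutes() const { return samples / 60.0; }
};
```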

Here’s what the software looks like. This is a video of my own EEG readings (recorded then replayed back at a faster rate).

And another view of the 3D print sample:

IMG_0965

What I like about this 3D print is the mixing of the two digital materials, where the black triangles intersect with the white squares. I still have quite a bit of refinement work to do on this piece.

Now, the challenge is what kind of environment to create for a 10-minute “3D Recording Session”. Many colleagues immediately suggest sexual arousal and drugs, which is funny, but which I want to avoid. One thing I learned at the Exploratorium was how to appeal to a wide audience, i.e. a more family-friendly one. This way, you can talk to anyone about the work you’re doing, instead of just a select audience.

Some thoughts: just after crossing the finish line in an extreme mountain bike race, right after waking up in the morning, drinking a pot of coffee (our workplace drug-of-choice) or soaking in the hot tub!

IMG_0966

* The website advertises a “512Hz sampling rate – 1Hz eSense calculation rate.” Various blog posts indicate that the raw values often get repeated, meaning that the effective rate is super-slow.

 

3D Data Viz & SF Open Data

I’ve fallen a bit behind in my documentation and have a backlog of great stuff that I’ve been 3D-printing. These are a few of my early tests with my new project: Data Crystals. I am using various data sources and algorithmically transforming the data into 3D sculptures.

The source for these is the San Francisco Open Data Portal — which provides datasets about all sorts of interesting things such as housing permit data, locations of parking meters and more.

My custom algorithms transform this data into 3D sculptures. Legibility is still an issue, but initial tests show the wonderful work that algorithms can do.

This is a transformation of San Francisco Crime Data. It turns out that crime happens everywhere, so the data is in a giant block.

crime_data

After running some crude data transformations, I “mined” this crystal: the location of San Francisco public art. Most public art is located in the downtown and city hall area. But there is a tail, which represents the San Francisco Airport.

sf_art

More experiments: this is a test, based on the SF public art data, where I played with varying the size of the cubes (size would represent a suggested value for each artwork, which I don’t have data for… yet). Now I have a 4th axis for the data. Plus, there is a distinct aesthetic appeal to stacking differently-sized blocks as opposed to uniform ones.

Stay tuned, there is more to come!

random_squares

Materiality in 3D Prints

I’m resuming some of the 3D printing work this week for my ongoing 3D data visualization research (a.k.a. Data Crystals). Here are four small tests in the “embryonic” state.

IMG_0930

Step 1 in the cleaning process is the arduous task of picking away the support material with dental tools.

IMG_0932

I have four “crystals” — two constructed from a translucent resin material and two from a more rubbery black material.

IMG_0933

And the finished product! The Tango Black (that’s the material) is below. I’m not so happy with how this feels: soft and bendy.

IMG_0934

And the Vero Clear — which has an aesthetic appeal to it: a hard resin that resembles ice. Remember the ICE (Intrusion Countermeasure Electronics) in Neuromancer? This is one source of inspiration.
IMG_0937

The Art of 3D Printing

My new work on “Data Crystals” is featured in a new episode of Science in the City, produced by the Exploratorium. You can watch it here.

The behind-the-scenes production involved many emails and then a quick video shoot. Phoebe (the videographer) interviewed me in the conference room at Autodesk. We had about 25 minutes to shoot the interview portion of the video. She filled me in on her intentions for the piece and asked me to talk about a few general topics related to 3D printing.

phoebe_interviews

Fortunately, over the years, I’ve become very comfortable with my voice and image. She also did a great job of making me look smart. I explained my new “Data Crystals” project, which is in the research phase. I am looking at open data sets provided by San Francisco Open Data Portal and mapping them as 3D sculptural objects. You can see me holding some of the 3D prints on the video.

 

 

 

3D printing: before and after

This is hot off the 3D printing press. Last night, I sent out one of my “data crystals” to the 3D printer and in the morning, I got this beautiful print.

data_crystals_before

After about an hour of cleaning off the support material with dental tools and a high-pressure water jet, I got the result below. It looks great for an early experiment! I feel like a modern-day data miner.

data_crystals_after