Flagscape: Data-visualizing Global Economic Exchange in Virtual Reality

Overview

Scott Kildall is conducting research into data-navigation techniques in virtual reality with a project called Flagscape, which constructs a surreal world of economic exchange between nations, based on United Nations data.

The work deploys “data bodies,” which represent exports such as metal ores and fossil fuels that move through space and convey the complexities of economic relations. Viewers move through the procedurally-generated datascape rather than acting upon the data elements, inverting the common paradigm of legible and controlled data access.

Economic exchange in VR

Details

The code combines data from several databases at runtime, including population, carbon emissions per capita, military personnel per capita and a United Nations database on resource extraction. All of these are combined to construct the Flagscape data bodies, each of which represents a single datum linked to a specific country.
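
To make that combination step concrete, here is a minimal sketch in plain C++ of merging several country-keyed tables into a flat list of data bodies. All names here (DataBody, buildDataBodies) are hypothetical illustrations, not Flagscape's actual code.

```cpp
// Minimal sketch of merging per-country datasets into "data bodies".
// Illustrative only; names and structure are assumptions.
#include <map>
#include <string>
#include <vector>

struct DataBody {
    std::string country;   // e.g. "BRA"
    std::string category;  // e.g. "population", "co2_per_capita"
    double value;          // the single datum this body represents
};

// Combine several country-keyed tables into one flat list of data bodies.
std::vector<DataBody> buildDataBodies(
        const std::map<std::string, std::map<std::string, double>>& tables) {
    std::vector<DataBody> bodies;
    for (const auto& [category, column] : tables)
        for (const auto& [country, value] : column)
            bodies.push_back({country, category, value});
    return bodies;
}
```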

The only stationary data body is a population model for each country, which scales to each country's relative population and resembles a 3D person, formed by a revolve around a central axis. The code positions these forms at their appropriate 3D world locations, such that China and India — the largest two population bodies — act as waypoints, as their forms dwarf all others.

Population bodies of India and China

A moshed flag skins every data body, acting as a glitched representation that subverts its own national identity. Underneath the flag is a complex set of relations of exchange that exceeds nationhood. For example, resource-extraction machines are made in one country and then purchased by another to extract the very resources that make those machines.

Brazil flag, moshed

Flagscape reminds us that our borders are imaginary. In this idealized 3D space, there are no delineations of territory, only lines that guide trade between countries, forms magically gliding along an invisible path. What the database cannot tell us is how exactly the complex power relations move resources from one nation to another. Meanwhile, carbon emissions — the only untethered data bodies in Flagscape, since they affect the entire planet — spin out of control into the distance, only to be endlessly respawned.

Carbon emissions by Canada and Australia

The primary acoustic element triggers when you navigate close to a population body. That country’s national anthem plays, filling your ears with a wash of drums, horns and militaristic melodies that flow into a state of sameness.
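
The mechanism behind such a trigger can be small. Below is a hedged sketch of a per-frame proximity check in C++; the trigger radius and the commented-out start/stop calls are placeholder assumptions, not the project's audio code.

```cpp
// Hedged sketch: play a country's anthem when the viewer nears its
// population body. Radius and callbacks are illustrative guesses.
#include <cmath>

struct Vec3 { float x, y, z; };

float distance(const Vec3& a, const Vec3& b) {
    return std::sqrt((a.x - b.x) * (a.x - b.x) +
                     (a.y - b.y) * (a.y - b.y) +
                     (a.z - b.z) * (a.z - b.z));
}

// Called once per frame with the viewer's position.
void updateAnthem(const Vec3& viewer, const Vec3& body, bool& playing) {
    const float kTriggerRadius = 10.0f;  // hypothetical world units
    bool isNear = distance(viewer, body) < kTriggerRadius;
    if (isNear && !playing)  { /* startAnthem(); */ playing = true;  }
    if (!isNear && playing)  { /* stopAnthem();  */ playing = false; }
}
```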

Initial Inspiration

The project is inspired by early notions of cyberspace described by writers such as William Gibson, where virtual reality is a space of infinity and abstraction. In Neuromancer, published in 1984, he describes cyberspace as:

“Unthinkable complexity. Lines of light ranged in the nonspace of the mind, clusters and constellations of data. Like city lights, receding…”

Neuromancer

While this text entices, most VR content recreates physical spaces, such as the British Museum with the same artwork, floor tiles and walls as the real building, or it builds militarized spaces in which “you” are a set of hands that trigger weapons as you walk through combat mazes. At some level, this is a consequence of the linear thinking embedded in our fast-paced capitalist economy, arcing towards functionality but ignoring artistic possibilities. This research project acts as an antidote to these constrained environments.

OverkillVR, a virtual reality game

Initial conversations around virtual datascapes with Ruth Gibson and Bruno Martelli led to my invitation to join the Reality Remix project, funded by an AHRC grant under their Next Generation Immersive Experiences call. My role is a “collaborator” (aka artist) who is creating their own project under these auspices.

Spatialization and Materializing Data

Unlike the 2D screen, which has a flatness and everyday familiarity, VR offers full spatialization and a new form of non-materiality, which Flagscape fully plays with. One concept that I have been working with is that since data has physical consequences, it should exist as a “real” object. This project will expand this idea but will also blur sensorial experiences, tricking the visitor into a boundary zone of the non-material.

At the same time, Flagscape is its own form of landscape, creating an entire universe of possibility. It refers to traditions of depicting landscapes as art objects as well as iconic Earthworks pieces such as Spiral Jetty, where the Earth itself acts as a canvas. However, this type of datascape will be effectively infinite, like the boundaries of the imagination.

Spiral Jetty

Finally, Flagscape continues the stream of instruction-based work by artists such as Sol LeWitt, where an algorithm rather than the artist creates the work. Here, it accomplishes a few things: taking the artist's hand away from creating the form itself, while also recognizing the power of artificial intelligence to assist in creating new forms of artwork.

Alternate Conception of Space in Virtual Reality

VR offers many unique forms of interaction, perception and immersion, but one aspect that defines it is the alternate sense of space. Similar to the religious spaces before the dominance of science, as described by Margaret Wertheim in The Pearly Gates of Cyberspace, this “other” space has the potential to create a set of rules that transport us to a unique imagination space.

As technology progresses and culture responds, the linearity of engineering-thinking often confines creativity rather than enhances it. Capitalist spaces get replicated and modified to adapt to the technology, validating McLuhan’s predictions of instantaneous, group-like thinking. The swipe gestures we use on our phones get encoded in muscle memory. We slyly refer to Wikipedia as the “wonder-killer”. The flying car is often cited as the most desirable future invention.

Flying car from Blade Runner

At stake with technological progress is imagination itself. Will the content of the spaces that get opened up with new technologies be ones that enhance our creativity or dull it? Who has access to technology-inspired culture? How can we use, enhance and subvert online distribution channels? These are just some of the questions and conversations that this project will ask — in the context of virtual space.

I see VR in a similar place as Video Art was in the 1970s, which thrived with access to affordable camcorders. However, VR, and this specific project, have the ability to easily disseminate into homes and public spaces through various app stores. Ultimately, with this project I hope to direct conversations around access and imagination with art and technology.

Marshall McLuhan with many telephones

Work-in-progress Presentation

Our Reality Remix group will be presenting its research, proof-of-concepts and prototypes at two venues in London on July 27th and July 28th, 2018 at Ravensbourne and Siobhan Davies Studios. Both free events are open to the public.

Bibliography
Gibson, W. (1993). Neuromancer. London: HarperCollins Science Fiction & Fantasy.
McLuhan, M. (1967). The Medium is the Massage: An Inventory of Effects. New York: Bantam Books.
Wertheim, M. (2010). The Pearly Gates of Cyberspace. New York: Norton.

Sonaqua goes to Biocultura

Last month (yes, blogging can be slow), I traveled to Santa Fe with the support of Andrea Polli and taught a workshop on my Sonaqua project.

The basic idea of Sonaqua is to sonify — create sounds — based on water quality. As modules, these are Arduino-based and designed for a single user to make a sound. I'm actively teaching workshops on these and have open-sourced the software and made the hardware plans available.

Interested in a Sonaqua workshop? Then contact me.

My Sonaqua installation creates orchestral arrangements of water samples based on electrical conductivity. Here’s a link to the video that explains the installation, which I did in Bangkok this June.

Back to New Mexico… In the early part of the week, I taught a workshop on the Sonaqua circuit at one of Andrea's classes at UNM, creating single-player modules for each student. We collected water samples and played each one separately. The students were fun and set up this small example of water samples with progressive frequencies, almost like a scale.

The lower the pitch, the more polluted* the water sample and so higher-pitched samples might correspond to filtered drinking water.
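
For the curious, that mapping can be expressed in a few lines of Arduino code. This is a minimal sketch of the idea only: the pin numbers, value ranges and probe circuit are my assumptions, not the actual Sonaqua schematic (the real plans are open-sourced separately).

```cpp
// Minimal Arduino-style sketch of the conductivity-to-pitch idea: higher
// conductivity (more dissolved solids) maps to a lower tone.
// Pins, ranges and the probe circuit are assumptions.
const int kProbePin   = A0;  // analog reading across the water sample
const int kSpeakerPin = 8;   // piezo or speaker through a resistor

void setup() {
    pinMode(kSpeakerPin, OUTPUT);
}

void loop() {
    int reading = analogRead(kProbePin);          // 0..1023, rises with conductivity
    // Invert the mapping: conductive (polluted) water -> low pitch.
    int freq = map(reading, 0, 1023, 2000, 100);  // Hz, arbitrary musical range
    tone(kSpeakerPin, freq);
    delay(50);
}
```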

Later in the week, I traveled to Biocultura in Santa Fe, which is a space that Andrea co-runs. Here, I installed the orchestral arrangement of the work, based on 12 water samples in New Mexico. She had a whole set of beakers and scientific-looking vessels, so I used what we had on hand and installed it on a shelf behind the presentation.

A physical map (hard to find!) of the sites where I took water samples.

And a close-up shot of one of the water samples + speakers. If you look closely, you can see an LED inside the water sample.

My face is obscured by the backlit screen. I presented my research with Sonaqua, as well as several other projects around water that evening to the Biocultura audience.

And afterwards, the attendees checked out the installation while I answered questions.

Data Crystals at EVA

I just finished attending the EVA London conference this week and did a demonstration of my Data Crystals project. This is the formal abstract for the demonstration, and writing it helped clear up some of my ideas about the Data Crystals project and the digital fabrication of physical sculptures and installations.


Embodied Data and Digital Fabrication: Demonstration with Code and Materials
by Scott Kildall

1. INTRODUCTION

Data has tangible consequences in the real world. Accordingly, physical data-visualizations have the potential to engage with the actual effects of the data itself. A data-generated sculpture or art installation is something that people can move around, through or inside of. They experience the dimensionality of data with their own natural perceptual mechanisms. However, creating physical data visualizations presents unique material challenges, since these objects exist in stasis rather than in a virtual space with a guided UX design. In this demonstration, I will present my recent research into producing sculptures from data using my custom software code that creates files for digital fabrication machines.

2. WHAT DOES DATA LOOK LIKE?

The overarching question that guides my work is: what does data look like? Referencing architecture, my artwork such as Data Crystals (figure 2) executes code that maps, stacks and assembles data “bricks” to form unique digital artifacts. The forms of these objects are impossible to predict from the original data-mapping, and the clustering code will produce different variations each time it runs.

Other sculptures remove material through intense kinetic energy. Bad Data (figure 3) and Strewn Fields (figure 1) both use the waterjet machine to gouge data into physical material using a high-pressure stream of water. The materials in this case — aluminum honeycomb panels and stone slabs — react in adverse ways, splintering and deforming under the violence of the machine.

2.1 Material Expression

Physical data-visualizations act on materials instead of pixels, and so there is a dialogue between the data and its material expression. Data Crystals depicts municipal data of San Francisco and has an otherworldly, ghostly quality of stacked and intersecting cubes. The data gets served from a web portal and is situated in the urban architecture, and so the 3D-printed bricks are an appropriate form of expression.

Bad Data captures data that is “bad” in the shallow sense of the word, rendering datasets such as Internet Data Breaches, Worldwide UFO Sightings or Mass Shootings in the United States. The water from the machine gouges and ruptures aluminum honeycomb material in unpredictable ways, similar to the way data tears apart our social fabric. This material is emblematic of the modern era, as aluminum began to be mass-refined at the end of the 19th century. These datasets exemplify conflicts of our times such as science/heresy and digital security/infiltration.

2.2 Frozen in Time

Once created, these sculptures cannot be endlessly altered like screen-based data visualizations. This challenges the artwork to work with fixed data or to consider the effect of capturing a specific moment.

For example, Strewn Fields is a data-visualization of meteorite impact data. When a large asteroid enters the Earth's atmosphere, it does so at a high velocity of approximately 30,000 km/hour. Before impact, it breaks up into thousands of small fragments, which are meteorites. Usually they hit our planet in the ocean or at remote locations. The intense energy of the waterjet machine gouges the surface of each stone, mirroring the raw kinetic energy of a planetoid colliding with the surface of the Earth. The static etching captures the act of impact, and survives as an antithetical gesture to the event itself. The actual remnants and debris (the meteorites) have been collected, sold and scattered, and what remains is just a dataset, which I have translated into a physical form.

2.3 Formal Challenges to Sculpture

This sort of “data art” challenges the formal aspects of sculpture. Firstly, machine-generated artwork removes the artist's hand from the work, building upon the legacy of algorithmic artwork by Sol LeWitt and others. Execution of this work is conducted by the stepper motor rather than by gestures of the artist.

Secondly, the input data are unknowable forms until they are actually rendered. The patterns are neither mathematical nor random, giving a certain quality of perceptual coherence to the work. Data Crystals: Crime Incidents has 30,000 data points. Using code-based clustering algorithms, it creates forms only recently possible with the combination of digital fabrication and large amounts of data.
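
As a sketch of what one such clustering pass might look like, the fragment below pulls every data brick a small step toward the shared centroid each iteration; a real version would also stop bricks when they collide. This is one plausible approach, not the actual Data Crystals algorithm.

```cpp
// One plausible clustering step in the spirit described above.
// Illustrative sketch only, not the production algorithm.
#include <vector>

struct Brick { float x, y, z; };

// One iteration: pull every brick slightly toward the group centroid.
void clusterStep(std::vector<Brick>& bricks, float step = 0.01f) {
    if (bricks.empty()) return;
    float cx = 0, cy = 0, cz = 0;
    for (const Brick& b : bricks) { cx += b.x; cy += b.y; cz += b.z; }
    cx /= bricks.size(); cy /= bricks.size(); cz /= bricks.size();

    // A real version would freeze each brick once it collides with another,
    // which is what lets identical data settle into different forms per run.
    for (Brick& b : bricks) {
        b.x += (cx - b.x) * step;
        b.y += (cy - b.y) * step;
        b.z += (cz - b.z) * step;
    }
}
```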

3. CODE

My sculpture-generation tools are custom-developed in C++ using openFrameworks, an open-source toolkit. My code repositories are on GitHub: https://github.com/scottkildall. My own software bypasses any conventional modeling package. It can handle very complex geometry and, more importantly, doesn't have the “look” that a program such as Rhino/Grasshopper generates.

3.1 Direct-to-Machine

My process of data-translation is optimized for specific machines. Data Crystals generates STL files, which most 3D printers can read. My code generates PostScript (.ps) files for the waterjet machine. The conversation with the machine itself is direct. During the production and iteration process, once I define the workflow, the refinements proceed quickly. It is optimized, like the machine that creates the artwork.
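
As an illustration of how direct this file generation can be, here is a minimal C++ sketch that writes triangles straight into an ASCII STL file. The single demo triangle and all names are placeholders, not the production code.

```cpp
// Minimal sketch of direct-to-machine output: geometry written straight
// to an ASCII STL file. Real code would emit one facet per mesh triangle.
#include <fstream>

struct V3 { float x, y, z; };

void writeFacet(std::ofstream& out, V3 a, V3 b, V3 c) {
    out << "  facet normal 0 0 0\n    outer loop\n";
    for (V3 v : {a, b, c})
        out << "      vertex " << v.x << " " << v.y << " " << v.z << "\n";
    out << "    endloop\n  endfacet\n";
}

int main() {
    std::ofstream out("crystal.stl");
    out << "solid crystal\n";
    writeFacet(out, {0,0,0}, {1,0,0}, {0,1,0});  // one triangle as a demo
    out << "endsolid crystal\n";
}
```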

3.2 London Layering

In my demonstration, I will use various open data from London. I focus not on data that I want to acquire, but rather, data that I can acquire. I will demonstrate a custom build of Data Crystals which shows multiple layers of municipal data, and I will run clustering algorithms to create several Data Crystals for the City of London.


Figure 1: Strewn Fields (2016)
by Scott Kildall
Waterjet-etched stone

Figure 2:
Data Crystals: Crime Incidents (2014)
by Scott Kildall
3D-print mounted on wood

Figure 3:
Bad Data: U.S. Mass Shootings (2015)
by Scott Kildall
Waterjet-etched aluminum honeycomb panel

GPS Tracks

I am building water quality sensors which will capture geolocated data. This was my first test with this technology. This is part of my ongoing research at the Santa Fe Water Rights residency (March-April) and for the American Arts Incubator program in Thailand (May-June).

This GPS data-logging shield from Adafruit arrived yesterday and after a couple of hours of code-wrestling, I was able to capture the latitude and longitude to a CSV data file.
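
For reference, the logging loop can be as small as the sketch below, which uses the Adafruit_GPS and SD libraries the shield is designed around. The pin choices and serial wiring are assumptions (boards differ), and the shield's own example sketches are the authoritative starting point.

```cpp
// Hedged sketch of a lat/long-to-CSV logging loop with the Adafruit_GPS
// and SD libraries. Pin choices and wiring are assumptions.
#include <Adafruit_GPS.h>
#include <SD.h>

Adafruit_GPS GPS(&Serial1);      // GPS on a hardware serial port (board-dependent)
const int kChipSelect = 10;      // typical SD chip-select pin on the shield

void setup() {
    GPS.begin(9600);
    SD.begin(kChipSelect);
}

void loop() {
    GPS.read();                              // feed the NMEA parser
    if (GPS.newNMEAreceived() && GPS.parse(GPS.lastNMEA()) && GPS.fix) {
        File log = SD.open("gpslog.csv", FILE_WRITE);
        if (log) {
            log.print(GPS.latitudeDegrees, 6);   // decimal degrees
            log.print(",");
            log.println(GPS.longitudeDegrees, 6);
            log.close();
        }
    }
}
```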

This is me walking from my studio at SFAI to the bedroom. The GPS signal at this range (100m) fluctuates greatly, but I like the odd compositional results. I did the plotting in openFrameworks, my tool-of-choice for displaying data that will later be transformed into sculptural results.

The second one is me driving in the car for a distance of about 2km. The tracks are much smoother. If you look closely, you can see where I stopped at the various traffic lights.

Now, GPS tracking alone isn’t super-compelling, and there are many mapping apps that will do this for you. But as soon as I can attach water sensor data to latitude/longitude, then it can transform into something much more interesting as the data will become multi-dimensional.

Machine Data Dreams @ Black & White Projects

This week, I opened a solo show called Machine Data Dreams at Black & White Projects. This was the culmination of several months of work, where I created three new series of works reflecting themes of data-mapping, machines and mortality.

The opening reception is Saturday, November 5th from 7-9pm. Full info on the event is here.

Two of the artworks are from my artist residency with SETI, and the third was supported by a San Francisco Arts Commission grant.

All of the artwork uses custom algorithms to translate datasets into physical form, which is an ongoing exploration that I’ve been focusing on in the last few years.

Each set of artwork deserves more detail but I’ll stick with a short summary of each.

Fresh from the waterjet, Strewn Fields visualizes meteorite impact data at four different locations on Earth.

Strewn Fields: Almahata Sitta

As an artist-in-residence with SETI, I worked with planetary scientist Peter Jenniskens to produce these four sculptural etchings into stone.

When an asteroid enters the Earth's atmosphere, it does so at high velocity — approximately 30,000 km/hour. Before impact, it breaks into thousands of small fragments — meteorites — which spread over areas as large as 30 km. Usually the space debris falls into the ocean or hits remote locations where scientists can't collect the fragments.

And, only recently have scientists been able to use GPS technology to geolocate hundreds of meteorites, which they also weigh as they gather them. The spread patterns of data are called “Strewn Fields”.

Dr. Jenniskens is not only one of the world's experts on meteorites but led the famous 2008 TC3 fragment recovery in Sudan of the Almahata Sitta impact.

With four datasets that he both provided and helped me decipher, I used the high-pressure waterjet machine at Autodesk’s Pier 9 Creative Workshops, where I work as an affiliate artist and also on their shop staff, to create four different sculptures.

Strewn Fields: Sutter's Mill

The violence of the waterjet machine gouges the surface of each stone, mirroring the raw kinetic energy of a planetoid colliding with the surface of the Earth. My static etchings capture the act of impact, and survive as an antithetical gesture to the event itself. The actual remnants and debris — the meteorites themselves — have been collected, sold and scattered and what remains is just a dataset, which I have translated into a physical form.

A related work, Machine Data Dreams, is a series of data-etched memorials to the camcorder, a consumer device which birthed video art by making video production accessible to artists.

Machine Data Dreams: PixelVision

This project was supported by a San Francisco Individual Arts Commission grant. I did the data-collection itself during an intense week-long residency at Signal Culture, which has many iconic and working camcorders from 1969 to the present.

Sony Videorecorder (1969)
PixelVision Camera (1987)

During the residency, I built a custom Arduino data-logger which captured the raw electronic video signals, bypassing any computer or digital-signal-processing software. With custom software that I wrote, I transformed these into signals that I could then etch onto 2D surfaces. I paired each etching with its source video in the show itself.
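
As a rough idea of what such a logger might look like, here is a minimal Arduino sketch that samples a conditioned analog signal and streams values over serial. It is a guess at the approach, not the actual logger: the pin, baud rate and signal conditioning are assumptions, and an Arduino's ADC heavily undersamples a video signal, yielding a texture of the signal rather than frames.

```cpp
// Rough sketch of sampling a raw video voltage on an analog pin and
// streaming values over serial for downstream processing. Assumptions:
// the signal is conditioned to 0-5V; the ADC undersamples real video.
const int kSignalPin = A0;

void setup() {
    Serial.begin(115200);
}

void loop() {
    int sample = analogRead(kSignalPin);  // 10-bit reading of the signal level
    Serial.println(sample);               // one value per line, logged downstream
}
```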

Machine Data Dreams: Sony Videorecorder

Celebrity Asteroid Journeys is the last of the three artworks and is also a project from the SETI Artist in Residency program, though it is decidedly more light-hearted than Strewn Fields.

Celebrity Asteroid Journeys charts imaginary travels from one asteroid to another. There are about 700,000 known asteroids, with charted orbits. A small number of these have been named after celebrities.

Working with asteroid orbital data from JPL and estimated spaceship velocities, I charted 5 journeys between different sets of asteroids.

My software code ran calculations over two centuries (2100–2300) to figure out the best path between four celebrities. I then transposed the 3D data into 2D space to make silkscreens with the dates of each stop.

Celebrity Asteroid Journey: Make Believe Land Mashup

This was my first silkscreened artwork, which was a messy antidote to the precise cutting of the machine tools at Autodesk.

All of these artworks depict the ephemeral nature of the physical body in one form or another. Machine Data Dreams is a clear memorial itself, a physical artifact of the cameras that once were cutting-edge technology.

With Celebrity Asteroid Journeys, the timescale is unreachable. None of us will ever visit these asteroids. And the named asteroids are memorials themselves to celebrities (stars) that are now dead or, in the relative sense of the word, soon will be no longer with us.

Finally, Strewn Fields captures the potential for an apocalyptic event from above. Although these asteroids made merely minor impacts, it is nevertheless the reality that an extinction-level event could wipe out the human species with a large rock from space. This ominous threat of death reminds us that our own species is just a blip in Earth's history of life.


Waterjet Etching Tests

For the last several weeks, I have been conducting experiments with etching on the waterjet — a digital fabrication machine that emits a 55,000 psi stream of water, usually used for precision cutting. The site for this activity is Autodesk Pier 9 Creative Workshops. I continue to have access to their amazing fabrication machines, where I work part-time as one of their Shop Staff.

My recent artwork focuses on writing software code that transforms datasets into sculptures and installations, essentially physical data-visualizations. One of my new projects is called Strewn Fields, which is part of my work as an artist-in-residence with the SETI Institute. I am collaborating with SETI research scientist Peter Jenniskens, a leading expert on meteor showers and meteorite impacts. My artwork will be a series of data-visualizations of meteorite impacts at four different sites around the globe.

While the waterjet is normally used for cutting stiff materials like thick steel, it can also etch by using lower water pressure, so that it does not pierce the material. OMAX — the company that makes the waterjet that we use at Pier 9 — does provide a simple etching software package called Intelli-ETCH. The problem is that it will etch the entire surface of the material. This is appropriate for some artwork, such as my Bad Data series, where I wanted to simulate raster lines.

Meth Labs in Albuquerque (Data source: http://www.metromapper.org)

The technique that I apply to my artistic practice is to write custom software that generates specific files for digital fabrication machines: laser-cutters, 3D printers, the waterjet and CNC machines. The look-and-feel is unique, unlike that of the conventional tools that artists often work with.

For meteorite impacts, I first map data like the pattern below (this is from a 2008 asteroid impact). For these impacts, it doesn’t make sense to etch the entire surface of my material, but rather, just pockets, simulating how a meteorite might hit the earth.

Strewn field data mapping (2008 asteroid impact)

I could go the route of working with a CAM package and generating paths that work with the OMAX Waterjet. Fusion 360 even offers a pathway to this. However, I am dealing with four different datasets, each with 400-600 data points. It just doesn't make sense to go from a 2D mapping into a 3D package, generate 3D tool paths, and then go back to (essentially) a 2D profiling machine.

So, I worked on generating my own tool paths using openFrameworks, which outputs simple vector shapes based on the size of the data. For the tool paths, I settled on using spirals rather than left-to-right traverses, which spend too much time on the outside of the material and blow it out. The spirals produce very pleasing results.
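
Here is a sketch of how such an inward-winding spiral can be generated as a simple polyline in C++. The parameter names and scaling are illustrative, not the production toolpath code.

```cpp
// Sketch of generating an inward-winding spiral toolpath as a polyline,
// which the waterjet then traces. Parameters are illustrative.
#include <cmath>
#include <vector>

struct Pt { float x, y; };

// Starts at the outer radius and winds toward the center, which (as the
// tests below found) etches more cleanly than starting from the inside.
std::vector<Pt> spiralPath(Pt center, float outerRadius, float spacing) {
    std::vector<Pt> path;
    const float kStep = 0.05f;                      // radians per segment
    float theta = 0;
    float r = outerRadius;
    while (r > 0) {
        path.push_back({center.x + r * std::cos(theta),
                        center.y + r * std::sin(theta)});
        theta += kStep;
        r -= spacing * kStep / (2 * 3.14159265f);   // tighten one 'spacing' per turn
    }
    return path;
}
```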

My first tests were on some stainless steel scrap and you can see the results here, with the jagged areas where the water eats away at the material, which is the desired effect. I also found that you have to start the etching from the outside of the spiral and then wind towards the inside. If you start from the inside and go out, you get a nipple, like on the middle right of this test, where the waterjet has to essentially “warm up”. I'm still getting the center divots, but am working to solve this problem.

This was a promising test, as the non-pocketed surface doesn’t get etched at all and the etching is relatively quick.


I showed this test to other people and received many raised eyebrows of curiosity. I became more diligent in my test samples and produced this etch sample with 8 spirals, with an interior path ranging from 2mm to 9mm, to test on a variety of materials.

Spiral tool paths

I was excited about this material, an acrylic composite that I had left over from a landscape project. It is 1/2″ thick with green on one side and a semi-translucent white on the other. However, as you can see, the waterjet is too powerful and ends up shattering the edges, which is less than desirable.


And then I began to survey various stone samples, scavenging some material from Building Resources, which had an assortment of unnamed, cheap tiles and other samples.

Forgive me…I wish I hadn’t sat in the back row of “Rocks for Jocks” in college. Who knew that a couple decades later, I would actually need some knowledge of geology to make artwork?

I began with some harder stone — standard countertop stuff like marble and granite. I liked seeing how the spiral breaks down along the way. But, there is clearly not enough contrast. It just doesn’t look that good.



I’m not sure what stone this is, but like the marble, it’s a harder stone and doesn’t have much of an aesthetic appeal. The honed look makes it still feel like a countertop.


I quickly learned that thinner tile samples would be hard to dial in. Working with 1/4″ material like this often results in blowing out the center.


But, I was getting somewhere. These patterns started resembling an impact of sorts and certainly express the immense kinetic energy of the waterjet machine, akin to the kinetic energy of a meteorite impact.

White tile, detail

This engineered brick was one of my favorite results from this initial test. You can see the detail on the aggregate inside.


And I got some weird results. This material, whatever it is, is simply too delicate, kind of like a pumice.


This is a cement compound of some flavor, and for a day I even thought about pouring my own forms, but that's too much work, even for me.



I think these two are travertine tile samples and I wish I had more information on them, but alas, that’s what you get when you are looking through the lot. These are in the not-too-hard and not-too-soft zone, just where I want them to be.




I followed up these tests by hitting up several stoneyards and tiling places along the Peninsula (south of San Francisco). This basalt-like material is one of my favorite results, but is probably too porous for accuracy. Still, the fissures that it opens up in the pockets are amazing. Perhaps if I could tame the waterjet further, this would work.

Basalt, detail

This rockface/sandstone didn't fare so well. The various layers shattered, producing unusable results.

Discolored slate

Likewise, this flagstone was a total fail.

Flagstone, shattered

The non-honed quartzite gets very close to what I want, starting to look more like a data-etching. I just need to find one that isn’t so thick. This one will be too heavy to work with.

Quartzite, close-up

Although this color doesn’t do much for me, I do like the results of this limestone.


Here is a paver that I got, but I can't remember which kind it is. Better notes next time! Anyhow, it clearly is too weak for the waterjet.


This is a slate. Nice results!


And a few more, with mixed results.


And if you are a geologist and have some corrections or additions, feel free to contact me.

Cistern Mapping Project Reportback

On October 11th, 2015, 18 volunteer bike and mapping aficionados gathered at my place to work on the Cistern Mapping Project — an endeavor to physically document the 170 (or so) cisterns in San Francisco. There exists no comprehensive map of these unique underground vessels. The resulting map is here.

I personally became fascinated by them when working on my Water Works project*, which mapped the water infrastructure of San Francisco.

The history of the cisterns is unique, and notably incomplete.


The cisterns are part of the AWSS (Auxiliary Water Supply System) of San Francisco, a water system that exists entirely for emergency use and is separate from the potable drinking water supply and the sewer system.

In the 1850s, after a series of Great Fires in San Francisco tore through the city, about 23 cisterns were built. These smaller cisterns were all in the city proper, at that time between Telegraph Hill and Rincon Hill. They weren’t connected to any other pipes and the fire department intended to use them in case the water mains were broken, as a backup water supply.

They languished for decades. Many people thought they should be removed, especially after incidents like the 1868 Cistern Gas Explosion.

However, after the 1906 Earthquake, fires once again decimated the city. Many water mains broke and the neglected cisterns helped save portions of the city. Afterward, the city passed a $5,200,000 bond and began building the AWSS in 1908. This included the construction of many new cisterns and the rehabilitation of other, neglected ones. Most of the new cisterns could hold 75,000 gallons of water. The largest one is underneath the Civic Center and has a capacity of 243,000 gallons.

The original ones, presumably rebuilt, hold much less, anywhere from 15,000 to 50,000 gallons.

Cistern at 22nd and Dolores

Armed with a series of intersections of potential cistern locations, the plan was to bike to each intersection and get the exact latitude and longitude and a photograph of each of the cistern markers — either the circular bricks or the manholes themselves.

We had 18 volunteers, which is a huge turnout for a beautiful Sunday morning. I provided coffee and bagels, and soon folks from my different communities (the bike team, the Exploratorium and other friends) were chatting with one another.

Cistern Prep Meeting 3

One way to thank my lovely volunteers was to provide gifts. What I made for everyone was a series of Moleskine notebooks with vinyl stickers of the cisterns and bikes. I was originally planning to laser-etch them, but found out that they were on the “forbidden materials” list at the Creative Workshops at Autodesk Pier 9, where I made them. Luckily, I always have a Plan B, and so I made vinyl stickers instead.

Cistern Prep Meeting 4
Cistern Prep Meeting 1

Here I am, in desperate need of a haircut, greeting everyone and explaining the process. I grouped the cisterns into 10 different sets of about 10-20 each. This covered most of them, and then we paired off riders in groups of 2 to map out the best route for their ride.

Cistern Prep Meeting 2

Some of the riders were friends beforehand and others became friends during the course of riding together. Here, you can see two riders figuring out the ideal route for their morning. Some folks were smart and brought paper maps, too!

Cistern Prep Meeting 7

Here are the bike-mappers just before embarking on their day-of-mapping. Great smiles all around!


Cistern Group 1

I would have preferred to ride, but instead was busy arranging the spreadsheet and verifying locations. Ah, admin work.

How did we do this? Simple: each team used a GPS app and emailed me the coordinates of the cistern marker, along with a photo of the cistern: the bricks, manhole or fire hydrant. I would coordinate via email and confirm that I got the right info and slowly fill out the spreadsheet. It was a busy few hours.


The hills were steep, but fortunately we had a secret weapon: some riders from the Superpro Racing team! Here is Chris Ryan crushing the hills in Pacific Heights.


One reason that we traveled in pairs is that documentation can be dangerous. Sometimes we had to put folks on the edge or actually in the street so they could get some great documentation.


So, how many cisterns did we map? The end result was 127 cisterns, which is about 75% of them, all in one day. We missed a few, and then there were a series of outliers, such as ones in Glen Park, Outer Sunset and Bayview, that we didn't quite make.

And the resulting map is here. There are still some glitches, but what I like about it is that you can now see the different intersections for each cistern. These have not been documented before, so it’s exciting to see most of them on the map.

Cistern web map

What did we discover?

Most of the cisterns are not actually marked with brick circles and just have a manhole that says “Cisterns” or even just “AWSS” on it.


What I really enjoyed, especially being in the “backroom” was how the cyclists captured the beautiful parts of the city in the background of the photos, such as the cable car tracks.


Also, the green-capped fire hydrants are usually nearby. These are the ones the SF Fire Department uses to fill up the cisterns.

Green hydrant

A few were almost like their own art installations, with beautiful brickwork.

Cistern118 24th Noe

The ones in the Sunset and Richmond districts are newer and are actually marked by brick rectangles.

Cistern135 46th Geary

Thanks to the AMAZING volunteers on this day.

* Water Works is supported by a Creative Code Fellowship through Stamen Design, Autodesk and Gray Area.

EquityBot got clobbered

Just after the Dow Jones dropped 1000 points on Aug 24th (yesterday), I checked out how EquityBot was doing: an annual rate of return worse than -50%.


Crazy! Of course, this is like taking the tangent of any curve and making a projection. A day later, EquityBot is at -32%.


Still not good, but if you had invested yesterday, you would be much richer today.

I’m not that much of a gambler, so I’m glad that EquityBot is just a simulated (for now) bank account.

Bad Data: SF Evictions and Airbnb

The inevitable conversation about evictions happens at every San Francisco party… art organizations closing, friends getting evicted… the city is changing. It has become a boring topic, yet it is absolutely, completely, 100% real.

For the Bad Data series — 12 data-visualizations depicting socially-polarized, scientifically dubious and morally ambiguous datasets, each etched onto an aluminum honeycomb panel — I am featuring two works for exactly this reason: 18 Years of Evictions in San Francisco and 2015 AirBnb Listings. These two etchings are the centerpieces of the show.


This is the reality of San Francisco: it is changing, and the data is ‘bad’ — not in the sense of inaccurate, but rather in the deeper sense of cultural malaise.

By the way, the reception for the “Bad Data” show is this Friday (July 24, 2015) at A Simple Collective, and the show runs through August 1st.

The Anti-Eviction Mapping Project has done a great job of aggregating data on this discouraging topic, hand-cleaning it and producing interactive maps that animate over time. They’re even using the Stamen map tiles, which are the same ones that I used for my Water Works project.


When I embarked on the Bad Data series, I reached out to the organization and they assisted me with their data sets. My art colleagues may not know this, but I'm an old-time activist in San Francisco. This helped me with getting the datasets, for I know that the story of evictions is not new, though it has never been at this scale.

In 2001, I worked in a now-defunct video activist group called Sleeping Giant, which worked on short videos in the era when Final Cut Pro made video-editing affordable and when anyone with a DV camera could make their own videos. We edited our work, sold DVDs and had local screenings, stirring up the activist community and telling stories from the point-of-view of people on the ground. Sure, now we have Twitter and social media, but at the time, this was a huge deal in breaking apart the top-down structures of media dissemination.

Here is No Nos Vamos, a hastily-edited video about evictions in San Francisco. Yes, this was 14 years ago.

I’ve since moved away from video documentary work and towards making artwork: sculpture, performance, video and more. The video-activist work and documentary video in general felt overly confining as a creative tool.

My current artistic focus is to transform datasets using custom software code into physical objects. I’ve been working with the amazing fabrication machines at Autodesk’s Pier 9 facility to make work that was not previously possible.

This dataset (also provided through the SF Rent Board) includes all the no-fault evictions in San Francisco. I got my computer geek on… well, I do try to use my programming powers for non-profit work and artwork.

I mapped the data into vector shapes using openFrameworks, the open-source C++ toolkit, and wrote code which transformed the ~9300 data points into plottable shapes, which I could open in Illustrator. I did some work tweaking the strokes and styles.
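
The translation step itself can be quite compact. Below is a minimal C++ sketch that writes one small circle per data point into an SVG file that Illustrator can open; SVG is used here for brevity, and the coordinates are assumed to be already mapped, so this is an illustration rather than the actual eviction-piece code.

```cpp
// Minimal sketch: data points out to a vector file Illustrator can open.
// Coordinates are assumed pre-mapped; sizes and names are illustrative.
#include <fstream>
#include <vector>

struct Point { float x, y; };

void writeSvg(const std::vector<Point>& pts, const char* path) {
    std::ofstream out(path);
    out << "<svg xmlns='http://www.w3.org/2000/svg' width='2000' height='2000'>\n";
    for (const Point& p : pts)   // one small circle per eviction record
        out << "  <circle cx='" << p.x << "' cy='" << p.y
            << "' r='2' fill='none' stroke='black'/>\n";
    out << "</svg>\n";
}
```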


This is what the etching looks like from above, once I ran it through the waterjet. There were a lot of settings and tests to get to this point, but the final results were beautiful.


The material is a 3/4″ honeycomb aluminum. I tuned the high-pressure stream from the waterjet to pierce through the top layer, but not the bottom layer. However, the water has to go somewhere. The collisions against the honeycomb produce unpredictable results.

…just like the evictions themselves. We don’t know the full effect of displacement, but can only guess as the city is rapidly becoming less diverse. The result is below, a 20″ x 20″ etching.

Bad Data: 18 Years of San Francisco Evictions


The Airbnb debate is a little less clear-cut. Yes, I do use Airbnb. It is incredibly convenient. I save money while traveling and also see neighborhoods I'd otherwise miss. However, the organization and its effect on city economies are contentious.

For example, there is the hotel tax in San Francisco, which, after 3 years, they finally consented to paying — 14% to the city of San Francisco. Note: this is after they had a successful business.

There also seems to be a long-term effect on rent. Folks, and I've met several who do this, are renting out places as tenants on Airbnb. Some don't actually live in their apartments any longer. The effect is to take a unit off the rental market and mark it as a vacation rental. Some argue that this also skirts rent-control law, which was designed as a compromise solution between landlords and tenants.

There are potential zoning issues, as well…a myriad of issues around Airbnb.

BAD DATA: 2015 AIRBNB LISTINGS, etching file


In any case, the location of the Airbnb rentals (self-reported, not a complete list) certainly fit the premise of the Bad Data series. It’s an amazing dataset. Thanks to darkanddifficult.com for this data source.

BAD DATA: 2015 Airbnb Listings


Selling Bad Data

The reception for my solo show “Bad Data”, featuring the Bad Data series is this Friday (July 24, 2015) at A Simple Collective.

Date: July 24th, 2015
Time: 7-9pm
Where: ASC Projects, 2830 20th Street (btw Bryant and York), Suite 105, San Francisco

The question I had when pricing these works was: how do you sell Bad Data? The material costs were relatively low. The labor time was high. And the data sets were (mostly) public.

We came up with this price list, subject to change.

///  Water-jet etched aluminum honeycomb:

18 Years of San Francisco Evictions, 2015 | 20 x 20 inches | $1,200
Data source: The Anti-Eviction Mapping Project and the SF Rent Board


2015 AirBnB Listings in San Francisco, 2015 | 20 x 20 inches | $1,200
Data source: darkanddifficult.com


Worldwide Haunted Locations, 2015 | 24 x 12 inches | $650
Data source: Wikipedia



Worldwide UFO Sightings, 2015 | 24 x 12 inches | $650
Data source: National UFO Reporting Center (NUFORC)



Missouri Abortion Alternatives, 2015 | 12 x 12 inches | $150
Data source: data.gov (U.S. Government)



Southern California Starbucks, 2015 | 12 x 8 inches | $80
Data source: https://github.com/ali-ce



U.S. Prisons, 2015 | 18 x 10 inches | $475
Data source: Prison Policy Initiative prisonpolicy.org (via Josh Begley’s GitHub page)


///  Water-jet etched aluminum honeycomb with anodization:


Albuquerque Meth Labs, 2015 | 18 x 12 inches | $475
Data source: http://www.metromapper.org



U.S. Mass Shootings (1982-2012), 2015 | 18 x 10 inches | $475
Data source: Mother Jones



Blacklisted IPs, 2015 | 20 x 8 ½ inches | $360
Data source: Suricata SSL Blacklist



Internet Data Breaches, 2015 | 20 x 8 ½ inches | $360
Data source: http://www.informationisbeautiful.net

Bad Data, Internet Breaches, Blacklisted IPs

In 1989, I read Neuromancer for the first time. The thing that fascinated me the most was not the concept of “cyberspace” that Gibson introduced. Rather it was the physical description of virtual data. The oft-quoted line is:

“The matrix has its roots in primitive arcade games. … Cyberspace. A consensual hallucination experienced daily by billions of legitimate operators, in every nation, by children being taught mathematical concepts. … A graphic representation of data abstracted from banks of every computer in the human system. Unthinkable complexity. Lines of light ranged in the nonspace of the mind, clusters and constellations of data. Like city lights, receding.”

What was this graphic representation of data? It struck me at first and has stuck with me ever since. I could only imagine what it could be. This concept of physicalizing virtual data later led to my Data Crystals project. Thank you, Mr. Gibson.


In Neuromancer, the protagonist Case is a freelance “hacker”. The book was published well before Anonymous, back in the days when KILOBAUD was the equivalent of Spectre for the BBS world.

At the time, I thought that there would be no way that corporations would put their data in a central place that anyone with a computer and a dial-up connection (and, later, T1, DSL, etc.) could access. This would be incredibly stupid.

And then, the Internet happened, albeit more slowly than people remember. Now hacking and data breaches are commonplace.

My “Bad Data” series — waterjet etchings of ‘bad’ datasets onto aluminum honeycomb panels — capture two aspects of internet hacking: Internet data breaches and Blacklisted IPs.

In these examples, ‘bad’ has a two-layered meaning. Abrogating accepted norms of Internet behavior is widely considered a legal crime, though not always a moral one. The data is also ‘bad’ in the sense that it is incomplete. Data breaches are usually not advertised by the entities that get breached. That would be poor publicity.

For the Bad Data series, I worked with not necessarily the data I wanted, but rather the data that I could get. From Information Is Beautiful, I found this dataset of Internet data breaches.


What did I discover? …that Washington DC is the leader in breached information. I suspect it's mostly because the U.S. government is the biggest target, rather than lax government security. The runner-up is New York City, the center of American finance. Other notable cities are San Francisco, Tehran and Seoul. San Francisco makes sense — the city is home to many internet companies. And Tehran is the target of Western internet attacks, government or otherwise. But Seoul? They claim to be targeted by North Korea. However, as we found out with the Sony Pictures Entertainment hack, North Korea is an easy scapegoat.

BAD DATA: INTERNET DATA BREACHES (BELOW)


Conversely, there are many lists of banned IPs. The one I worked with is the Suricata SSL Blacklist. This may not be the best source, as there are thousands of IP blacklists, but it is one that is publicly available and reasonably complete. As I've learned, you have to work with the data you can get, not necessarily the data you want.

I ran both of these etched panels through an anodization process, which further created a filmy residue on the surface. I'm especially pleased with how the Blacklisted IPs panel came out.

Bad Data: BLACKLISTED IPs (below)


Genetic Portraits and Microscope Experiments

I recently finished a new artwork — called Genetic Portraits — which is a series of microscope photographs of laser-etched glass that data-visualize a person’s genetic traits.

I specifically developed this work as an experimental piece for the Bearing Witness: Surveillance in the Drone Age show. I wanted to look at an extreme example of how we have freely surrendered our own personal data for corporate use. In this case, 23andMe provides a (paid) extensive genetic sequencing package. Many people, including myself, have sent in saliva samples to the company, which they then process. From their website, you can get a variety of information, including their projected likelihood that you might be prone to specific diseases based on your genetic traits.

Following my line of inquiry with other projects such as Data Crystals and Water Works, where I wrote algorithms that transformed datasets into physical objects, this project processes an individual's genetic sequence to generate vector files, which I later use to laser-etch onto microscope slides. The full project details are here.


Concept + Material
I began my experiment months earlier, before the project was solidified, by examining the effect of laser-etching on glass underneath a microscope. This stemmed from conversations with some colleagues about the effect of laser-cutting materials. When I looked at this underneath a microscope, I saw amazing results: an erratic universe accentuated by curved lines. Even with the same file, each etching is unique. The glass cracks in different ways. Digital fabrication techniques still result in distinct analog effects.

When the curators of the show, Hanna Regev and Matt McKinley, invited me to submit work on the topic of surveillance, I considered how to leverage various experiments of mine, and came back to this one, which would be a solid combination of material and concept: genetic data etched onto microscope slides and then shown at a macro scale: 20” x 15” digital prints.

Surrendering our Data
I had so many questions about my genetic data. Is the research being shared? Do we have ownership of this data? Does 23andMe even ask for user consent? As many articles point out, the answers are exactly what we fear. Their user agreement states that “authorized personnel of 23andMe” can use the data for research. This official-sounding text simply means that 23andMe decides who gets access to the genetic data I submitted. 23andMe is not unique: other gene-sequencing companies have similar provisions, as the article suggests.

Some proponents suggest that 23andMe is helping the research front, while still making money. It's capitalism at work. This article in Scientific American sums up the privacy concerns. Your data becomes a marketing tool, and people like me handed a valuable dataset to a corporation, which can then sell us products based on the very data we have provided. I completed the circle, and I even paid for it.

However, what concerns me even more than 23andMe selling or using the data — after all, I did provide my genetic data, fully aware of its potential use — is the statistical accuracy of genetic data. Some studies have reported a Eurocentric bias to the data, and the FDA has also battled with 23andMe regarding the health data they provide. The majority of the data (with the exception of Bloom's Syndrome) simply wasn't predictive enough. Too many people had false positives with the DNA testing, which not only causes worry and stress but could lead to customers taking pre-emptive measures, such as getting a mastectomy, if they mistakenly believe they are genetically predisposed to breast cancer.

A deeper look at the 23andMe site shows a variety of charts that make it appear as if you might be susceptible (or immune) to certain traits. For example, I have lower-than-average odds of having “Restless Leg Syndrome“, which is probably the only neurological disorder that makes most people laugh when hearing about it. My genetic odds of having it are simply listed as a percentage.

Our brains aren’t very good with probabilistic models, so we tend to inflate and deflate statistics. Hence, one of many problems of false positives.

And, as I later discovered, from an empirical standpoint, my own genetic data strayed far from my actual personality. Our DNA simply does not correspond closely enough to reality.


Data Acquisition and Mapping
From the 23andMe site, you can download your raw genetic data. The resulting many-megabyte file is full of rsid data and the actual allele sequences.


Isolating useful information from this was tricky. I cross-referenced some of the rsids used for common traits from 23andMe with the SNP database. At first I wanted to map ALL of the genetic data. But, the dataset was complex — too much so for this short experiment and straightforward artwork.

Instead, I worked with some specific indicators that correlate to physiological traits such as lactose tolerance, sprinter-based athleticism, norovirus resistances, pain sensitivity, the “math” gene, cilantro aversion — 15 in total. I avoided genes that might correlate to various general medical conditions like Alzheimer’s and metabolism.

For each trait I cross-referenced the SNP database with 23andMe data to make sure the allele values aligned properly. This was arduous at best.

There was also a limit on physical space for etching the slide, so having more than 24 marks or etchings on one plate would be chaotic. Through days of experimentation, I found that 12-18 curved lines would make for compelling microscope photography.

To map the data onto the slide, I modified Golan Levin’s decades-old Yellowtail Processing sketch, which I had been using as a program to generate curved lines onto my test slides. I found that he had developed an elegant data-storage mechanism that captured gestures. From the isolated rsids, I then wrote code which gave weighted numbers to allele values (i.e. AA = 1, AG = 2, GG = 3, depending on the rsid).


Based on the rsid numbers themselves, my code generated (x, y) anchor points and curves with the allele values changing the shape of each curve. I spent some time tweaking the algorithm and moving the anchor points. Eventually, my algorithm produced this kind of result, based on the rsids.
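
A condensed sketch of that weighting-and-anchoring idea, in plain C++ rather than Processing, might look like the following. The weights and coordinate scheme are illustrative; the real mapping depends on each rsid.

```cpp
// Sketch: allele pairs become small integers, which then perturb the
// anchor points of a curve. Values and scaling are illustrative.
#include <map>
#include <string>

struct Anchor { float x, y; };

int alleleWeight(const std::string& alleles) {
    // e.g. AA = 1, AG = 2, GG = 3 (per-rsid tables would refine this)
    static const std::map<std::string, int> weights =
        {{"AA", 1}, {"AG", 2}, {"GG", 3}};
    auto it = weights.find(alleles);
    return it != weights.end() ? it->second : 0;
}

// Derive an anchor point from an rsid's numeric part and its allele weight.
Anchor anchorFor(long rsidNumber, const std::string& alleles) {
    float x = (rsidNumber % 1000) / 1000.0f;   // pseudo-position from the id
    float y = alleleWeight(alleles) * 0.25f;   // allele value bends the curve
    return {x, y};
}
```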


The question I always get asked about my data-translation projects is about legibility. How can you infer results from the artwork? It's a silly question, like asking a Kindle engineer to analyze a Shakespeare play. A designer of data-visualization will try to tell a story using data and visual imagery.

My research and work focus on deep experimentation with the formal properties of sculpture — or physical forms — based on data. I want to push the boundaries of what art can look like, continuing the lineage of algorithmically-generated work by artists such as Sol LeWitt, Sonya Rapoport and Casey Reas.

Is it legible? Slightly so. Does it produce interesting results? I hope so.


But, with this project, I’ve learned so much about genetic data — and even more about the inaccuracies involved. It’s still amazing to talk about the science that I’ve learned in the process of art-making.

Each of my 5 samples looks a little bit different. This is the mapping of actual genetic traits of my own sample and that of one other volunteer named “Nancy”.


Genetic Traits for Scott (above)
Genetic Traits for Nancy (below)

We both share a number of genetic traits such as the “empathy” gene and curly hair. The latter seems correct — both of our hair is remarkably straight. I'm not sure about the empathy part. Neither one of us is lactose intolerant (also true in reality).

But the test-accuracy breaks down on several specific points. Nancy and I do have several differences including athletic predisposition. I have the “sprinter” gene, which means that I should be great at fast-running. I also do not have the math gene. Neither one of these is at all true.

I’m much more suited to endurance sports such as long-distance cycling and my math skills are easily in the 99th percentile. From my own anecdotal standpoint, except for well-trodden genetics like eye color, cilantro aversion and curly hair, the 23andMe results often fail.

The genetic data simply doesn't seem to support the physical results. DNA is complex. We know this; it is non-predictive. Our genotype results in different phenotypes, and the environmental factors are too complex for us to understand with current technology.

Back to the point about legibility. My artwork is deliberately non-legible based on the fact that the genetic data isn’t predictive. Other mapping projects such as Water Works are much more readable.

I'm not sure where this experiment will go. I've been happy with the results of the portraits, but I'd like to pursue this further, perhaps with scientists who would be interested in collaborating around the genetic data.

FOUR FINAL SLIDE ETCHINGS (BELOW)




EquityBot goes live!

During my time at Impakt as an artist-in-residence, I have been working on a new project called EquityBot, which is an online commission from Impakt. It fits well into the Soft Machines theme of the festival, where machines integrate with the soft, emotional world.

EquityBot exists entirely as a networked art or “net art” project, meaning that it lives in the “cloud” and has no physical form. For those of you who are Twitter users, you can follow it on Twitter: @equitybot


What is EquityBot? Many people have asked me that question.

EquityBot is a stock-trading algorithm that “invests” in emotions such as anger, joy, disgust and amazement. It relies on a classification system of twenty-four emotions, developed by psychologist and scholar Robert Plutchik.


How it works
During stock market hours, EquityBot continually tracks worldwide emotions on Twitter to gauge how people are feeling. In the simple data-visualization below, which is generated automatically by EquityBot, the larger circles indicate the more prominent emotions that people are Tweeting about.

At this point in time, just 1 hour after the stock market opened on October 28th, people were expressing emotions of disgust, interest and fear more prominently than others. During the course of the day, the emotions contained in Tweets continually shift in response to world events and many other unknown factors.

twitter_emotions

EquityBot then uses various statistical correlation equations to find patterns matching changes in emotions on Twitter to fluctuations in stock prices. The details are thorny, so I’ll skip the boring stuff. My time did involve a lot of work with scatterplots, which looked something like this.

correlation

Once EquityBot sees a viable pattern, for example that “Google” is consistently correlated to “anger” and that anger is a trending emotion on Twitter, EquityBot will issue a BUY order on the stock.

Conversely, if Google is correlated to anger, and the Tweets about anger are rapidly going down, EquityBot will issue a SELL order on the stock.
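For the curious, here is a minimal Python sketch of that buy/sell rule, assuming two aligned hourly series (Tweet counts for one emotion, prices for one stock). The function name and threshold are illustrative, not EquityBot’s actual internals, and the real statistics were messier than a single Pearson correlation:

```python
import statistics

def trade_signal(emotion_counts, stock_prices, threshold=0.6):
    # Pearson correlation between the emotion's hourly Tweet volume
    # and the stock's hourly prices (statistics.correlation, Python 3.10+).
    r = statistics.correlation(emotion_counts, stock_prices)
    if r < threshold:
        return "HOLD"  # no viable pattern between this emotion and stock
    # The emotion tracks the stock, so trade on the emotion's trend:
    # rising Tweet volume -> BUY, falling Tweet volume -> SELL.
    return "BUY" if emotion_counts[-1] > emotion_counts[-2] else "SELL"
```

So a stock that consistently tracks “anger” gets a BUY while the anger Tweets trend upward, and a SELL once they fall off.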

EquityBot runs a simulated investment account, seeded with $100,000 of imaginary money.

In my first few days of testing, EquityBot “lost” nearly $2000. This is why I’m not using real money!

Disclaimer: EquityBot is not a licensed financial advisor, so please don’t follow its stock investment patterns.

account

The project treats human feelings as tradable commodities. It will track how “profitable” different emotions are over the course of months. As a social commentary, I propose a future scenario in which just about anything can be traded, including that which is ultimately human: the very emotions that separate us from a machine.

If a computer cannot be emotional, at the very least it can broker trades of emotions on a stock exchange.

affect_performance

As a networked artwork, EquityBot generates these simple data visualizations autonomously (they will get better, I promise).

Its Twitter account (@equitybot) serves as a performance vehicle, where the artwork “lives”. Also, all of these visualizations are interactive on the EquityBot website: equitybot.org.

I don’t know if there is a correlation between emotions in Tweets and stock prices. No one does. I am working with the hypothesis that there is some sort of pattern involved. We will see over time. The project goes “live” on October 29th, 2014, the day of the opening of the Impakt Festival, and I will let the first experiment run for 3 months to see what happens.

Feedback is always appreciated, you can find me, Scott Kildall, here at: @kildall.

 

Data-Visualizing + Tweeting Sentiments

It’s been a busy couple of weeks working on the EquityBot project, which will be ready for the upcoming Impakt Festival. Well, at least a functional prototype of my ongoing research project will be online for public consumption.

The good news is that the Twitter stream is now live. You can follow EquityBot here.

EquityBot now tweets images of data-visualizations on its own; it is autonomous. I’m constantly surprised, and made a bit nervous, by its Tweets.

exstasy_sentiment

At the end of last week, I put together a basic data visualization using D3, which is a powerful Javascript data-visualization tool.

Using code from Jim Vallandingham, in just one evening I created dynamically-generated bubble maps of Twitter sentiments as they arrive from EquityBot’s own sentiment-analysis engine.

I mapped the colors directly from the Plutchik wheel of emotions, which is why they are still a little wonky; for example, the emotion of Grief is unreadable. This will be fixed.

I did some screen captures and put them on my Facebook and Twitter feeds. I soon discovered that people were far more interested in images of the data visualizations than in text describing the emotions.

I was faced with a geeky problem: how do I get my Twitterbot to generate images of data visualizations built in D3, a front-end Javascript library? I figured it out eventually, after stepping into a few rabbit holes.

Screen Shot 2014-10-21 at 11.31.09 AM

I ended up using PhantomJS, the Selenium web driver and my own Python management code to solve the problem. The biggest hurdle was getting Google webfonts to render properly. Trust me, you don’t want to know the details.
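The core of the trick is small, though. Here is a stripped-down sketch of the capture step, assuming a local page that renders the D3 bubble map (the URL and wait time are placeholders; my production code wraps this in more management logic):

```python
import time
from selenium import webdriver

# Drive the headless PhantomJS browser to the page that renders the
# D3 visualization, give it a moment to draw (and load webfonts),
# then save the rendered result as a PNG for tweeting.
driver = webdriver.PhantomJS()  # PhantomJS binary must be on the PATH
driver.set_window_size(1024, 768)
driver.get("http://localhost:8000/sentiment.html")  # hypothetical page
time.sleep(3)  # crude wait for D3 transitions and font loading
driver.save_screenshot("sentiment_viz.png")
driver.quit()
```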

Screen Shot 2014-10-21 at 11.31.29 AM

 

But I’m happy with the results. EquityBot will now move to other Tweetable data-visualizations such as its own simulated bank account, stock-correlations and sentiments-stock pairings.

Water Works Final Report

Overview
Water Works is a project that I created for the Creative Code Fellowship in the Summer of 2014 with the combined support of Stamen Design, Autodesk and Gray Area.

Water Works is a 3D data visualization and mapping of the water infrastructure of San Francisco. The project is a relational investigation: I have been playing the role of a “Water Detective, Data Miner” and sifting through the web for water data. My results from this 3-month investigation are three large-scale 3D-printed sculptures, each paired with an interactive web map.

The final website lives here: http://www.waterworks.io/

sewer

Stamen Design is a small design studio that creates sophisticated mapping and data-visualization projects for the web. Combined with the amazing physical fabrication space at Pier 9 at Autodesk, this was a perfect combination of collaborative players for my own focus: writing algorithms that transform datasets into 3D sculptures and installations. I split my time between the two organizations and both were amazing, creative environments.

Gray Area provided the project guidance and coursework: 12 hours a week of Creative Code Immersive classes in topics ranging from Arduino to Node.js. About half of the classes were review for me, e.g. OpenFrameworks, Processing, Arduino, but Javascript, Node and more were completely new.

This report is heavy on images, partially because I want to document the entire process of how I created these 3D mapping-visualizations. As far as I know, I’m the first person who has undertaken this creative process: from mining city data to 3D-printing the infrastructure, which is geo-located on a physical map.

My directive from the start of the Water Works project was to somehow make visible what is invisible. This simple message is one that I learned while I was working as a New Media Exhibit Developer at the Exploratorium (2012-2013). It also aligns with the work that Stamen Design creates and so I was pleased to be working with this organization.

Starting Point
Underneath our feet is an urban circulatory system that delivers water to our households, removes it from our toilets, delivers a reliable supply for firefighting and ultimately purifies it and directs it into the bay and ocean. Most of us don’t think about this amazing system because we don’t have to — it simply works.

Like many others, I’m concerned about the California drought, which many climatologists think will persist for the next decade. I am also a committed urban-dweller and want to see the city I live in improve its infrastructure as it serves an expanding population. Finally, I undertook this project in order to celebrate infrastructure and to help make others aware of the benefits of city government.

drought

On a more personal note, I am fascinated by urban architecture. As I walk through the city, I constantly notice the markings on manholes, the various sign posts and the different types of fire hydrants.

cistern_manhole

About a year ago, while I was working at the Exploratorium, I had several in-depth conversations with employees at the Department of Public Works about the possibility of mapping the sewer system. We discussed possibilities of producing a sewer map for the museum. For various reasons, the maps never came to fruition, but the data still rattled around my brain. All of the pipe and manhole data still existed. It was waiting to be mapped.

Three Water Systems of San Francisco
When I was awarded this Creative Code Fellowship in June this year, I didn’t know very much about the San Francisco water system. I soon learned that the city has three separate sets of pipes that comprise the water infrastructure of San Francisco.

(1) Potable Water System — this is our drinking water, which comes from Hetch Hetchy. Some fire hydrants use this system.

(2) Sewer System — San Francisco has a combined stormwater and wastewater system, which is nearly entirely gravity-fed. The water gets treated at one of the wastewater treatment plants. San Francisco is the only coastal California city with a combined system.

(3) Auxiliary Water Supply System (AWSS) — this is a separate system just for emergency fire-fighting. It was built in the years immediately following the 1906 Earthquake, when many of the water mains collapsed and most of the city proper was destroyed by fires. It is fed from the Twin Peaks Reservoir. San Francisco is the only city in the US that has such a system.

water_treatment

Follow the Data, Find the Story
From my previous work on Data Crystals, I learned that you have to work with the data you can actually get, not the data you want. In the first month of the Water Works project, this involved constant research and culling.

I worked with various tables of sewer data that the DPW provided to me. I discovered that the city had about 30,000 nodes (underground chambers with manholes) with 30,000 connections (pipes). This was an incredible dataset and it needed a lot of pruning, cleaning and other work, which I soon discovered was a daunting task.

Lesson #1: Contrary to popular belief, data is never clean.

What else was available? It was hard to say at first. I sent emails to the SFPUC asking for the locations of the drinking water lines — just like the data I had for the sewers. I thought this would be incredible to represent. I approached the project with a certain naivety.

Of course, I shouldn’t have been surprised that this would be a security concern, but I received a resounding no from the SFPUC, in no uncertain terms. This made sense, but it left me with only one dataset.

Given that there were three water systems, it made sense to create three 3D-printed visualizations, one from each set. If not the pipes, what would I use?

During one of my late-night research evenings, I found a good story: the San Francisco Underground Cisterns. According to various blogs, there are about 170 of these, and they are usually marked by a brick circle. What is underneath?

cistern_circle

In the 1850s, after a series of Great Fires tore through San Francisco, 23 cisterns* were built. These smaller cisterns were all in the city proper, at that time between Telegraph Hill and Rincon Hill. They weren’t connected to any other pipes, and the fire department intended to use them as a backup water supply in case the water mains were broken.

They languished for decades. Many people thought they should be removed, especially after incidents like the 1868 Cistern Gas Explosion.

However, after the 1906 Earthquake, fires once again decimated the city. Many water mains broke and the neglected cisterns helped save portions of the city.

Afterward, the city passed a $5,200,000 bond and began building the AWSS in 1908. This included the construction of many new cisterns and the rehabilitation of other, neglected ones. Most of the new cisterns could hold 75,000 gallons of water. The largest one is underneath the Civic Center and has a capacity of 243,000 gallons.

The original ones, presumably rebuilt, hold much less, anywhere from 15,000 to 50,000 gallons.

* from the various reports I’ve read, this number varies.

old-cisternsmap

I searched for a map of all the cisterns, which proved difficult to find. There was no online map anywhere. I read that since these were part of the AWSS, they were refilled by the fire department. I soon began searching for fire department data and found a set of intersections, along with the volume of each cistern. The source was the SFFD Water Supplies Manual.

cisterdata

The story of the San Francisco Cisterns was to be my first of three stories in this project.

Autodesk also runs Instructables, a DIY, how-to-make-things website. One of my Instructables covers the mapping process, so if you want the specifics, have a look there.

To make this conversion happen, I wrote Python code which called the Google Maps API to convert the intersections into lat/longs, as well as to get elevation data. When I had asked people how to do this, I received many GitHub links. Most of them were buggy or poorly documented. I ended up writing mine from scratch.
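The conversion itself boils down to two web requests per intersection. Here is a minimal sketch against Google’s public Geocoding and Elevation APIs (the response fields are theirs; the function name is illustrative and error handling is omitted):

```python
import requests

GEOCODE_URL = "https://maps.googleapis.com/maps/api/geocode/json"
ELEVATION_URL = "https://maps.googleapis.com/maps/api/elevation/json"

def locate(intersection, api_key):
    # e.g. intersection = "Precita Ave & Folsom St, San Francisco, CA"
    geo = requests.get(GEOCODE_URL, params={
        "address": intersection, "key": api_key}).json()
    loc = geo["results"][0]["geometry"]["location"]  # {"lat": .., "lng": ..}
    elev = requests.get(ELEVATION_URL, params={
        "locations": "%f,%f" % (loc["lat"], loc["lng"]),
        "key": api_key}).json()
    return loc["lat"], loc["lng"], elev["results"][0]["elevation"]
```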

Lesson #2: Because GitHub is both a backup system for source code and an open-source sharing platform, many GitHub projects are confusing or useless.

That being said, here is my GitHub repo: SF Geocoder, which does this conversion. Caveat emptor.

Mapping the San Francisco Sewers
This was my second “story” with the Water Works project, which is simply to somehow represent the complex system that is underneath us. The details of the sewers are staggering. With approximately 30,000 manholes and 30,000 pipes that connect them, how do you represent or even begin mapping this?

And what was the story after all? It doesn’t quite have the unique character of the cisterns. But it does portray a complex system. Even the DPW hadn’t mapped this out in 3D space. I don’t know if any city ever has. This was the compelling aspect: making the physical model itself from the large dataset.

Building a 3D Modeling System
In addition to looking for data and sifting through the sewer data that I had, I spent the first few weeks building up a codebase in OpenFrameworks.

The only other possibility was using Rhino + Grasshopper, a software package that I don’t know and that isn’t even an Autodesk product. Though it can handle algorithmic model-building, several colleagues were dubious that it could handle my large, custom dataset.

So, I built my own. After several days of work, I mapped out the nodes and pipes as you see below. I represented the nodes as cubes and pipes as cylinders — at least for the onscreen data visualization.

sewer-mapping

This is a closeup of the San Francisco bay waterfront. You can see some isolated nodes and pipes — not connected to the network. This is one example of where the data wasn’t clean. Since this is engineering data, there are all sorts of anomalies like virtual nodes, run-offs and more.

My code was fast and efficient since it was in C++. More importantly, I wrote custom STL exporters, which let my workflow go directly to a 3D printer without having to pass through other 3D packages to clean up the data. This took a lot of time, but once I got it working, it saved me hours of frustration later in the project.
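The fussiest part of a hand-rolled STL exporter is keeping the triangle winding consistent so every face points outward. My exporter is C++ inside OpenFrameworks; this Python sketch just shows the idea for one ASCII-STL facet (a full file wraps these in solid/endsolid lines):

```python
import numpy as np

def write_facet(f, v0, v1, v2):
    # Vertices are 3-element numpy arrays. Listing them counter-clockwise,
    # as seen from outside the surface, keeps the winding consistent;
    # the facet normal then follows from the right-hand rule.
    n = np.cross(v1 - v0, v2 - v0)
    n = n / np.linalg.norm(n)
    f.write("facet normal %f %f %f\n  outer loop\n" % tuple(n))
    for v in (v0, v1, v2):
        f.write("    vertex %f %f %f\n" % tuple(v))
    f.write("  endloop\nendfacet\n")
```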

seweremapping2

I also mapped out the Cisterns in 3D space using the same code. The Cisterns are disconnected in reality, but as a 3D print, they need to be one cohesive structure. I modified the ofxDelaunay add-on (thanks, GitHub) to create cylindrical supports that link the cisterns together.

What you see here is an “editor”, where I could change the thickness of the supports, remove unnecessary ones and edit the individual cistern models to put holes in certain ones.

I also scaled the Cisterns according to their volume. The pre-1906 ones tend to be small, while the largest one, at Civic Center, holds about 243,000 gallons, which is over 3 times the size of the standard post-earthquake 75,000-gallon cisterns.

OF-cisterns-nomap

Story #3: Imaginary Drinking Hydrants
In the same document that had the locations of all of the San Francisco Cisterns, I also found this gem: 67 emergency drinking hydrants for public use in a city-wide disaster.

Whoa, I thought, how interesting…

drinking_hydrants

I dug deeper and scouted out the intersections in person. I took some photos of the Emergency Drinking Hydrants. They have blue drops painted on them. You can even see them on Street View.

I found online news articles from several years ago discussing this program, which was introduced in 2006 and also known as the Blue Drop Hydrant program.

What is the Blue Drop Hydrant program?

blue_drop_man.jpg

And, I generated a web map, using Javascript and Leaflet.

imaginary-drnkinghydrants

I then published a link to the map on my Twitter feed. It generated a lot of excitement and was retweeted by many sources.

twitt.jpg

The SFist, a local San Francisco news blog, ended up covering it. I was excited. I thought I was doing a good public service.

However, there was a backlash… of sorts. It turns out that the program had been discontinued by the SFPUC. The organization did some quick publicity control on their Facebook page and also contacted the SFist.

The writer of the article then issued an update stating that this program had been discontinued, along with a press statement from the SFPUC.

press2.jpg

He also had this quote, which was a bit of a jab at me: “It had sounded like designer Scott Kildall, who had been mapping the hydrants, had done a fair amount of research, but apparently not.”

In my defense, I re-researched the emergency drinking hydrants. Nowhere did it say that the program had been discontinued. So, apparently, the SFPUC quietly shuffled it out.

But later, I found that my map birthed a larger discussion. The SFPUC had this response, also printed later on SFist.
But then, a good public response

The key quote by Emergency Planning Director Mary Ellen Carroll is:

“When it comes to sheltering after an emergency, we don’t tell people ahead of time, ‘This is where you’ll need to go to find shelter after an earthquake,’ because there’s no way to know if that shelter will still be there.

It makes sense that central gathering locations could be a bad idea. Imagine a gas leak or something similar at one of these locations. A water distribution plan would have to be improvised according to the disaster.

We do know from various news articles, and from my own photographs, that there was not only a map, but physical blue drops painted on the hydrants, in addition to a large publicity campaign. The program supposedly cost 1 million dollars, so that would have been an expensive map.

The SFPUC never pulled the old maps from their website, nor did they inform the public that the blue drop hydrants were discontinued.

I blame it on general human miscommunication. And after visiting the SFPUC offices towards the end of my Water Works project, I’m entirely convinced that this is a progressive organization with smart people. They’re doing solid work.

But I had to rethink my mapping project, since these hydrants no longer existed.

When faced with adverse circumstances, at least in the area of mapping and art, you must be flexible. There’s always a solution. This one almost rhymes with Emergency — Imaginary.

Instead of hydrants for emergency drinking water, I ask the question: could we have a city where we could get tap water from these hydrants at any time? What if the water were recycled water?

They could have a faucet handle on them, so you could fill up your bottle when you get thirsty. More importantly, these hydrants could be a public service.

It’s probably impractical in the short term, but I love the idea of reusing the water lines for drinking lines — and having free drinking water in the public commons.

So, I rebranded the map and designed a hydrant with a drinking faucet attached to it. This would be the base form used for the maps.

Rebrand as Imaginary Drinking Hydrants

Creating Mini Models
I wanted to strike a balance in this data-visualization and mapping project between aesthetics and legibility. With the datasets I now had and the C++ code that I had written, I could geolocate cisterns, hydrants and sewer lines.

These would be connected by support structures in the case of cisterns and hydrants and pipe data for the sewers.

I decided that the actual data points would be miniature models, which I designed in Fusion 360 with the help of Autodesk guru, Taylor Stein. The first one I created was the Cistern model.

cisterns-fusion360

I went through several iterations to come up with this simple model. The design challenge was to come up with a form that looked like it could be an underground tank, but didn’t bring up other associations. In this case, without the three rectangular stubby pieces, it looks like a tortilla holder.

After a day of design and 3D print tests, I settled on this one.

cistern-model

And here you can see the outputs of the cisterns and the hydrants in MeshLab.

meshlab-cisterns

Here is the underside of the hydrant structure, where you can see the holes in the hydrants, which I use later for creating the final sculpture. These are drill holes for mounting the final prints on wood.

meshlab-hydrants-underneath

The manhole chamber design was the hardest one to figure out. This one is more iconographic than representational. Without some sort of symmetry, the look of the underground chamber didn’t resonate. I also wanted to provide a manhole cover on top of the structure. The flat bottom distinguishes it from the pipes.

manhole

Mapping and Legibility

stamen

One of my favorite aspects of being at Stamen was that four days a week, they provided lunch for us. We all ate lunch together. This was a good chunk of unstructured time to talk about mapping, music, personal life, whatever.

We solidified bonds — shared lunch is so often overlooked in organizations. In addition to informal discussion of the project, we also had a few creative brainstorm sessions, where I would present the progress of the project and get feedback from several people at Stamen. Folks from Autodesk and Gray Area also joined the discussion.

I hadn’t considered situating these pieces on a map before, but the group suggested integrating a map of some sort: I should geolocate the 3D prints on top of a physical map. This was a brilliant direction for the project.

OF-imaginaryhydrants-map

Stamen provided me with a high-resolution map that I could laser-etch, which came later, after the 3D printing. Now, with this direction for the project, I started making the actual 3D prints.

map-for-etching

Mega-prints with lots of cleaning
After all the mapping, arduous data-smoothing and tests upon structural tests, I was finally ready to spool off the large-scale 3D prints. Each print was approximately the size of the Objet 500 print bed: 20″ x 16″, making these huge. A big thanks to Autodesk for sponsoring the work and providing the machines.

Each print took between 40 and 50 hours of machine time, so I sent these out as weekend-long jobs. Time and resources were limited, so this was a huge endeavor.

cisterns-buildtime

I was worried that the prints would fail, but I got lucky in each case. The prints are a combination of resin materials: VeroClear and VeroWhite (for the Cisterns and Hydrants) and mixes of VeroWhite and VeroBlack for the Sewers.

support-cisterns-far

When the prints come off the print bed, they are encased in a support material which I first scraped off and then used a high-pressure water system to spray the rest off.
cleaning-cistern

It took hours upon hours to get from this.

sewerworks

To this: a fully cleaned version of the Sewer print. This 3D print is of a section of the city: the Embarcadero area, which includes the Pier 9 facility where Autodesk is located.

For the Sewer Works print, the manhole chambers and pipes are scaled to the sizes in the data tables. I increased the elevation about 3 times to capture the hilly terrain of San Francisco. What you see here is an aerial view, as if you were in a helicopter flying from Oakland to San Francisco. The diagonal is Market Street, ending at the Ferry Building. On the right side, towards the back of the print, is Telegraph Hill. There are large pipes and chambers along the Embarcadero. Smaller ones comprise the sewer system in the hilly areas.
sewerworks-3d

Map-Etching and Final Fabrication
I’ll just summarize the final fabrication — this blog post is already very long. For more details, you can read this Instructable on how I did the fabrication work.

Using cherry wood, which I planed, jointed and glued together, I laser-etched these maps, which came out beautifully.

I chose wood both for its beautiful finish and because the material references the wooden Victorian and Edwardian houses that define the landscape of San Francisco. The laser-etching burns away the wood, like the fires after the 1906 Earthquake, which spawned the AWSS water system.

_MG_7318

The map above is the waterfront area for the Sewer Works print and the one below is the full map of the city that I used as the base for the San Francisco Cisterns and the Imaginary Drinking Hydrants sculptures.

_MG_7316

The last stages of the woodwork involved traditional fabrication, which I did at the Autodesk facilities at Pier 9.

_MG_7314

I drilled out the holes for mounting the final 3D prints on the wood bases and then mounted them on 1/16″ stainless rods, such that they float about 1/2″ above the wood map.

_MG_7330

And the final stage involved manually fitting the prints onto the rods.

_MG_7335

Final Results
Here are the three prints, mounted on the wood-etched maps.

Below is the Imaginary Drinking Hydrants print. This was the most delicate of the 3D prints.

06_large

These are the San Francisco Cisterns, which are concentrated in the older parts of San Francisco. They are nearly absent from the western part of the city, which became densely populated well after the 1906 Earthquake.

02_large

This is the Sewer Works print. The map is not as visible because of the density of the network. The pipes are a light gray and the manhole chambers a medium gray. The map does capture the extensive network of manmade piers along the waterfront.

03_large

The Website: San Francisco Cisterns and Imaginary Drinking Hydrants
The website for this project is waterworks.io. It has three interactive web maps, one for each of the three water systems.

The aforementioned Instructable, Mapping San Francisco Cisterns, details how I made these. The summary is that I did a lot of data-wrangling, often using Python to transform the data into GeoJSON files, a web-mappable format.
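The wrangling step is mundane but worth showing. Here is a minimal sketch of turning a list of geocoded cisterns into a GeoJSON FeatureCollection that Leaflet can load directly (the property names are illustrative, not the exact schema I used):

```python
import json

def to_geojson(cisterns, path="cisterns.geojson"):
    # GeoJSON coordinates are [longitude, latitude], in that order.
    features = [{
        "type": "Feature",
        "geometry": {"type": "Point",
                     "coordinates": [c["lng"], c["lat"]]},
        "properties": {"intersection": c["intersection"],
                       "gallons": c["gallons"]},
    } for c in cisterns]
    with open(path, "w") as f:
        json.dump({"type": "FeatureCollection", "features": features}, f)
```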

The Stamen designer-technicians were invaluable in directing me to the path of Leaflet, an easy-to-use mapping interface. I struggled with it for a while, as I was a complete newbie to Javascript, but eventually sorted out how to create maps and customize the interactive elements.

Fortunately, I also received help from the designers at Stamen on the graphics. I only have so many skills, and graphic design is not one of them.

cisternsmapping

The Website: Life of Poo
Leaflet’s performance bogged down when I had more than about 1500 markers, and the sewer system has about 28,000 nodes.

I spent a lot of energy on node-trimming, using a combination of Python and Java code, and winnowed the count down to about 1500. The consolidated node list was based on distance and used various techniques to map a small set of nodes in a cohesive way.
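For a flavor of what distance-based consolidation looks like, here is a greedy sketch (illustrative, not my exact utility): keep the first node seen in each neighborhood, fold every node within a tolerance radius into that survivor, and re-point the pipes accordingly:

```python
def consolidate(nodes, pipes, tolerance=50.0):
    # nodes: {node_id: (x, y)}; pipes: iterable of (node_id, node_id) pairs.
    kept, remap = [], {}
    for nid, (x, y) in nodes.items():
        for kid in kept:
            kx, ky = nodes[kid]
            if (x - kx) ** 2 + (y - ky) ** 2 <= tolerance ** 2:
                remap[nid] = kid  # fold this node into a nearby survivor
                break
        else:
            kept.append(nid)
            remap[nid] = nid
    # Re-point pipes at the surviving nodes and drop self-loops.
    trimmed = {(remap[a], remap[b]) for a, b in pipes if remap[a] != remap[b]}
    return kept, trimmed
```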

lifeofpoo

In the hours just before presenting the project, I finished Life of Poo: an interactive journey of toilet waste.

On the website, you can enter an address (in San Francisco) such as “Twin Peaks, SF” or “47th & Judah, SF” and then press Flush Toilet.

This will begin an animated poo journey down the sewer map and to the wastewater treatment plant.

Not all of the flushes work as you’d expect. There are still glitches and bugs in the code. If you type in “16th & Mission”, the poo just sits there.

Why do I have the bugs? I have some ideas (see below), but I really like the chaotic results, so I will keep them for now.

Lesson #3: Sometimes you should sacrifice accuracy.

Future Directions
I worked very, very hard on this project and I’m going to let it rest for a while. There’s still some work to do in the future, which I would like to get to someday.

Cistern Map
I’d like to improve the Cistern Map, as I think it has cultural value. As far as I know, it’s the only one on the web. The data is from the intersections and, while close, is not entirely correct. Sometimes the intersection data is off by a block or so. I don’t think this affects the integrity of the 3D map, but it would be important to correct for the web portion.

Life of Poo
I want to see how this interactive map plays out and see how people respond to it in the next couple of months. The animated poo is universally funny, but it doesn’t behave “properly”. Sometimes it gets stuck. This was the last part of the Water Works project and one that I got working the night before the presentation.

I had to do a lot of node-trimming to make this work — Leaflet can only handle about 1500 data points before it slows down too much, so I did a lot of trimming from a set of about 28,000. This could be one source of the inaccuracies.

I don’t take gravity into account in the flow calculations, which is why I think the poo behaves oddly. But maybe the map is more interesting this way. It is, after all, an animated poo emoji.

Infrastructure Fabrication
This is where the project gets very interesting. What I’ve been able to accomplish with the “Sewer Works” print is to show how the sewer pipes of San Francisco look as a physical manifestation. This is only the beginning of many possibilities. I’d be eager to develop this technology and modeling system further, taking the usual GIS maps and translating them into physical models.

Thanks for reading this far and I hope you enjoyed this project,
Scott Kildall

EquityBot: Capturing Emotions

In my ongoing research and development of EquityBot — a stock-trading bot* with a philanthropic personality, which is my residency project at Impakt Works — I’ve been researching various emotional models for humans.

The code I’m developing will try to make correlations between stock prices and group emotions on Twitter. It’s a daunting task, and one where I’m not sure what the signal-to-noise ratio will be (see disclaimer). As an art experiment, I don’t know what will emerge from this, but it’s geeky and exciting.

In the last couple of weeks, I’ve been creating a rudimentary system that will just capture words. A more complex system would use sentiment analysis algorithms. My time and budget are limited, so phase 1 will be a simple implementation.

I’ve been looking for some sort of emotional classification system. There are several competing models (of course).

My favorite is the Plutchik Wheel of Emotions, which was developed in 1980. It has a symmetrical look to it and apparently is deployed in various AI systems.

 

Plutchik-wheel.svg

Other models, such as the Lövheim cube of emotion, are more recent and seem compelling at first. But the cube is missing something critical: sadness or grief. Really? This is such a basic human emotion, and when I saw it was absent, I tossed the cube model.

1280px-Lövheim_cube_of_emotion

Back to the Plutchik model… my “Twitter bucket” captures certain words from the color wheel above. I want enough words for a reasonable statistical correlation (about 2000 tweets/hour), but too many of one word will strain my little Linode server. For example, the word “happy” is a no-go, since there are thousands of Tweets with that word in them each minute.

Many people tweet about anger by just using the word “angry” or “anger”, so that’s an easy one. Same thing goes with boredom/boring/bored.

For other words, I need to go synonym-hunting; take apprehension. The Twitter stream with this word is just a trickle, so I’ve mapped it to “worry” or “anxiety”, which show up more often in tweets. It’s not quite correct, but reasonably close.

The word “terror” has completely lost its meaning and now only refers to political discourse. I’m still trying to figure out a good synonym-map for terror: terrifying, terrify, terrible? None of them is quite right. There’s not a good word to represent that feeling of absolute fear.
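In code, the phase-1 word-bucket approach is nothing fancier than a keyword table with synonym padding for the low-traffic emotions. A minimal sketch (the word lists are illustrative, not my production set):

```python
# Each emotion gets a small bucket of trigger words; rare emotions
# are padded with near-synonyms so the stream isn't just a trickle.
EMOTION_WORDS = {
    "anger":        ["anger", "angry"],
    "boredom":      ["boredom", "boring", "bored"],
    "apprehension": ["worry", "worried", "anxiety", "anxious"],
    "terror":       ["terrifying", "terrify"],  # still not quite right
}

def classify(tweet_text):
    # Return every emotion whose trigger words appear in the tweet.
    text = tweet_text.lower()
    return [emotion for emotion, words in EMOTION_WORDS.items()
            if any(word in text for word in words)]
```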

This gets tricky and I’m walking into the dark valley of linguistics. I am well-aware of the pitfalls.

Screen Shot 2014-10-01 at 3.18.33 PM

 

* Disclaimer:
EquityBot doesn’t actually trade stocks. It is an art project intended for illustrative purposes only, and is not intended as actual investment advice. EquityBot is not a licensed financial advisor. It is not, and should not be regarded as, investment advice or a recommendation regarding any particular security or course of action.

 

Polycon in Berlin

This week I traveled to Berlin for Polycon. No… it’s not a convention on polyamory, but a project developed by my longtime friend, Michael Ang (aka Mang). Polygon Construction Kit (aka Polycon) is a software toolkit for converting 3D polygon models into physical objects.

IMG_0246

I wanted an excuse to visit Berlin, to hang out with Mang and to open up some possibilities for the physical data-visualization behind EquityBot, which I’m working on for my artist residency at Impakt Works and their upcoming festival.

I brought my recently-purchased Printrbot Simple Metal, which I had disassembled into this travel box.

IMG_0281

After less than 30 minutes, I had it reassembled and working. Victory! Here it is, printing one of the polygon connectors.

IMG_0248

How does Polycon work? Mang shared the details with me. You start with a simple 3D model from some sort of program. He uses SketchUp for creating physical models of his large-scale sculptures. I prefer OpenFrameworks, which is powerful and will let me easily manipulate shapes from data streams.

Here’s the simple screenshot in OpenFrameworks of two polyhedrons. I just wrote this the other day, so there’s no UI for it yet.

Screen Shot 2014-09-25 at 6.10.14 PM

And here is how it looks in MeshLab. It’s water-tight, meaning that it can be 3D-printed.

Screen Shot 2014-09-25 at 6.10.59 PM

My goal is to do larger-scale data visualizations than some of my previous works such as Data Crystals and Water Works. I imagine room-sized installations. I’ve had this idea for many months of using the 3D printer to create joinery from datasets and to skin the faces using various techniques, TBD.

How it works: Polycon loads a 3D model and, using Python scripts in FreeCAD, generates 3D joints that, along with wooden dowels, can be assembled into polygonal structures.

Screen Shot 2014-09-25 at 6.09.00 PM

The Printrbot makes adequate joinery, but it’s nowhere near as pretty as the Vero prints on the Objet 500 at Autodesk. It doesn’t matter that much, because my digital joinery will be hidden in the final structures.

IMG_0272

Mang guided me through the construction of my first Polycon structure. There’s a lot of cleanup work involved, such as drilling out the holes in each of the joints.

IMG_0274

It took a while to assemble the basic form. There are vertex-numbering improvements that we’ll both make to the software. Together, Mang and I brainstormed ideas as to how to make the assembly go more quickly.

IMG_0259

After about 15 minutes, I got my first polygon assembled.

IMG_0265

It looks a lot like… the 3D model. I plan to be working on these forms over the next several months, so I felt great after a successful first day.

IMG_0268

And here is a really nice image of one of Mang’s pieces — sculptures of mountains that he created from memories of flying high in a glider. I like where he’s going with his artwork: making models based on nature, with ideas of recording these spaces and playing them back in various urban spaces. You can check out Michael Ang’s work here on his website.
IMG_0278

A Starting Point: Distributed Capital

I’m doing more research on EquityBot — the project for my Impakt Works residency, which I just started a couple of days ago.

EquityBot is a stock-trading algorithm that explores the connections between collective emotions on social media and financial speculation. It will be presented at the Impakt Festival at the end of October.

It will also consist of a sculptural component (presented post-festival), which is the more experimental form.

Many of you are familiar with Paul Baran’s work on designing a distributed network, but many others may not be. Working at the RAND Corporation, he determined that a centralized communications network would be vulnerable to attack, and suggested that the United States use a distributed network instead.
baran

Interestingly, there is a widespread myth that the Internet, derived from ARPANET, was designed to withstand a nuclear attack using this model. This isn’t the case; the architects of the internet transmission protocol simply heard of RAND’s work and adapted it for packet use. Yet the myth persists.

On a side note, perhaps military technology could be useful for the public good. If only we could declassify the technology, like Baran did.

The distributed network reminds me of a 3D polygon mesh. I think this could be a good source of 3D data-visualization: Distributed Capital. I’ll research this more in the future.

But EquityBot isn’t about networks in the formal sense. It is a project about constructing a predictive model of stock changes, based on the idea that Twitter sentiments correlate with fluctuations in stock prices.

Screen Shot 2014-09-17 at 6.08.23 AM

Do I know there is a correlation? Not yet, but I think there is a good possibility. One of my reading sources, The Computational Beauty of Nature, sums up the value of simulated models in its introduction. The predictive model might fail in its results but it will likely reveal a greater truth in the economic system that it is trying to predict. Thus, knowing the uncertainty ahead of time will provide a sense of certainty. EquityBot may not “work” but then again, it may.

compbeautyofnature

My source of dissent is the excellent book The Signal and the Noise: Why So Many Predictions Fail — but Some Don’t by Nate Silver. After reading it last summer, I was convinced that any predictive analysis would simply be noise. I was disheartened and halted the EquityBot project (previously called Grantbot) for many months.

la-ca-nate-silver

However, now I’m not so sure. It seems likely that people’s moods would affect financial decisions, which in turn would affect stock prices. With studies such as this one by Vagelis Hristidis, which found some correlation between Twitter chatter and stock prices, I think there is something to this, which is why I’ve revisited the EquityBot project.

I’ll follow the Buddhist maxim with this project and embrace its uncertainty.

 

Life of Poo

I’ve been blogging about my Water Works project all summer and after the Creative Code Gray Area presentation on September 10th, the project is done. Phew. Except for some of the residual documentation.

In the hours just before I finished my presentation, I also managed to get Life of Poo working. What is it? Well, an interactive map of where your poo goes based on the sewer data that I used for this project.

Huh? Try it.

Screen Shot 2014-09-16 at 6.42.06 AM

This is the final piece of my web-mapping portion of Water Works and uses Leaflet with animated markers, all in Javascript, which is a new coding tool in my arsenal (I know, late to the party). I learned the basics in the Gray Area Creative Code Immersive class, which was provided as part of the fellowship.

The folks at Stamen Design also helped out and their designer-technicians turned me onto Leaflet as I bumbled my way through Javascript.

How does it work?

On the Life of Poo section of the Water Works website, you enter an address (in San Francisco) such as “Twin Peaks, SF” or “47th & Judah, SF” and then press Flush Toilet.

This will begin an animated poo journey down the sewer map and to the wastewater treatment plant.

Screen Shot 2014-09-16 at 6.50.17 AM

Not all of the flushes work as you’d expect. There are still glitches and bugs in the code. If you type in “16th & Mission”, the poo just sits there. Hmmm.

Why do I have the bugs? I have some ideas (see below), but I really like the chaotic results, so I will keep them for now.

Screen Shot 2014-09-16 at 6.54.32 AM

 

I think the erratic behavior is happening because of a utility I wrote, which does some complex node-trimming and doesn’t take into account gravity in its flow diagrams. The sewer data has about 30,000 valid data points and Leaflet can only handle about 1500 or so without it taking forever to load and refresh.

The utility I wrote parses the node data tree and recursively prunes it to a more reasonable number, combining upstream and downstream nodes. In an overflow situation, technically speaking, there are nodes where waste might be directed away from the waste-water treatment plant.

However, my code isn’t smart enough to determine which are overflow pipes and which are pipes to the treatment plants, so the node-flow doesn’t work properly.
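A gravity-aware fix would route each flush downhill through the network. Here is a sketch of what that might look like, assuming each node carries an elevation (purely illustrative; my actual flow code doesn’t do this yet):

```python
def downstream_path(start, neighbors, elevation, max_hops=500):
    # neighbors: {node: [connected nodes]}; elevation: {node: meters}.
    # At each chamber, flow to the lowest-elevation connected node; stop
    # at a dead end or a local basin (where the poo "just sits there").
    path, current = [start], start
    for _ in range(max_hops):
        candidates = neighbors.get(current, [])
        if not candidates:
            break
        nxt = min(candidates, key=elevation.get)
        if elevation[nxt] >= elevation[current]:
            break
        path.append(nxt)
        current = nxt
    return path
```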

In case you’re still reading, here’s an illustration of a typical combined system, which shows how the pipes might look. Sewer outfall doesn’t happen very often, but when your model ignores gravity, it sure will.

CombineWasteWaterOverflow

The 3D print of the sewer, the one that uses the exact same dataset as Life of Poo, looks like this.

sewerworks_front sewerworks_top

EquityBot @ Impakt

My exciting news is that this fall I will be an artist-in-residence at Impakt Works, which is in Utrecht, the Netherlands. The same organization puts on the Impakt Festival every year, which is a media arts festival that has been happening since 1988. My residency is from Sept 15-Nov 15 and coincides with the festival at the end of October.

Utrecht is a 30 minute train ride from Amsterdam and 45 minutes from Rotterdam and by all accounts is a small, beautiful canal city with medieval origins and also hosts the largest university in the Netherlands.

Of course, I’m thrilled. This is my first European art residency and I’ll have a chance to reconnect with some friends who live in the region as well as make many new connections.

impakt; utrecht; www.impakt.nl

The project I’ll be working on is called EquityBot and will premiere at the Impakt Festival in late October as part of their online component. It will have a virtual presence like my Playing Duchamp artwork (a Turbulence commission) and my more recent project, Bot Collective, produced while an artist-in-residence at Autodesk.

Like many of my projects this year, this will involve heavy coding, data-visualization and a sculptural component.

equity_bot_logo

At this point, I’m in the research and pre-production phase. While configuring back-end server code, I’m also gathering reading materials about capital and algorithms for the upcoming plane rides, train rides and rainy Netherland evenings.

Here is the project description:

EquityBot

EquityBot is a stock-trading algorithm that explores the connections between collective emotions on social media and financial speculation. Using custom algorithms Equitybot correlates group sentiments expressed on Twitter with fluctuations in related stocks, distilling trends in worldwide moods into financial predictions which it then issues through its own Twitter feed. By re-inserting its results into the same social media system it draws upon, Equitybot elaborates on the ways in which digital networks can enchain complex systems of affect and decision making to produce unpredictable and volatile feedback loops between human and non-human actors.

Currently, autonomous trading algorithms comprise the large majority of stock trades. These analytic engines are normally sequestered by private investment companies operating with billions of dollars. EquityBot reworks this system, imagining what it might be like if this technological attention were directed towards the public good instead. How would the transparent, public sharing of powerful financial tools affect the way the stock market works for the average investor?

kildall_bigdatadreams

I’m imagining a digital fabrication portion of EquityBot, which will be the more experimental part of the project and will involve 3D-printed joinery. I’ll be collaborating with my longtime friend and colleague, Michael Ang, on the technology — he’s already been developing a related polygon construction kit — as well as doing some idea-generation together.

“Mang” lives in Berlin, which is a relatively short train ride, so I’m planning to make a trip where we can work together in person and get inspired by some of the German architecture.

My new 3D printer — a Printrbot Simple Metal — will accompany me to Europe. This small, relatively portable machine produces decent-quality results, at least for 3D joints, which will be hidden anyway.

printrbot

WaterWorks: From Code to 3D Print

In my ongoing Water Works project — a Creative Code Fellowship with Stamen Design, Gray Area and Autodesk — I’ve been working for many, many hours on code and data structures.

The immediate results were a Map of the San Francisco Cisterns and a Map of the “Imaginary Drinking Hydrants”.

However, I am also making 3D prints — fabricated sculptures, which I map out in 3D space and then 3D print.

The process has been arduous. I’ve learned a lot. I’m not sure I’d do it this way again, since I had to end up writing a lot of custom code to do things like triangle-winding for STL output and much, much more.

Here is how it works. First, I create a model in Fusion 360 — an Autodesk application — which I’ve slowly been learning and have become fond of.

Screen Shot 2014-08-21 at 10.12.47 PM

From various open datasets, I map out the geolocations of the hydrants or the cisterns in X,Y space. You can check out this Instructable on the mapping of the cisterns and this blog post on the mapping of the hydrants for more info. Using OpenFrameworks — an open source toolset in C++ — I map these out in 3D space. The Z-axis is the elevation.

The hydrants or cisterns are both disconnected entities in 3D space. They’d fall apart when trying to make a 3D print, so I use Delaunay triangulation code to connect the nodes as a 3D shape.
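If you want to play with the triangulation idea outside of OpenFrameworks, here is a small Python sketch where scipy stands in for the ofxDelaunay add-on I actually used: triangulate the 2D layout, then treat each unique triangle edge as a cylindrical strut:

```python
import numpy as np
from scipy.spatial import Delaunay

def strut_edges(points_xy):
    # points_xy: list of (x, y) node positions.
    tri = Delaunay(np.asarray(points_xy))
    edges = set()
    for a, b, c in tri.simplices:
        # Keep each triangle edge once, regardless of direction.
        edges.update(tuple(sorted(e)) for e in ((a, b), (b, c), (a, c)))
    return edges  # pairs of point indices to join with cylinders
```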

Screen Shot 2014-08-21 at 10.07.59 PM

I designed my custom software to export a ready-to-print set of files in STL format. My C++ code includes an editor which lets you do two things:

(1) specify which hydrants are “normal” hydrants and which ones have mounting holes in the bottom. The green ones have mounting holes, which are different STL files. I will insert 1/16″ stainless steel rod into the mounting holes and have the 3D prints “floating” on a piece of wood or some other material.

(2) my editor will also let you remove and strengthen each Delaunay triangulation node — the red one is the one currently connected. This is the final layout for the print, but you can imagine how criss-crossed and hectic the original one was.

Screen Shot 2014-08-21 at 10.08.44 PM

Here is an exported STL in Meshlab. You can see the mounting holes at the bottom of some of the hydrants.
Screen Shot 2014-08-21 at 10.20.13 PM

I ran many, many tests before the final 3D print.

imaginary_drinking_faucets

And finally, I set up the print over the weekend. Here is the print 50 hours later.
on_the_tray

It’s like I’m holding a birthday cake — I look so happy. This is at midnight last Sunday.

scott_holding_tray

The cleaning itself is super-arduous.

scott_cleaning

And after my initial round of cleaning, this is what I have.

hydrats_rough

And here are the cistern prints.

cisterns_3d

I haven’t yet mounted these prints, but this will come soon. There’s still loads of cleaning to do.

 

Mapping Emergency Drinking Water Hydrants

Did you know that San Francisco has 67 fire hydrants that are designed for emergency drinking water in case of an earthquake-scale disaster? Neither did I. That’s because just about no one knows about these hydrants.

While scouring the web for Cistern locations — as part of my Water Works Project*, which will map out the San Francisco water infrastructure and data-visualize the physical pipes and structures that keep the H2O moving in our city — I found this list.

I became curious.

67_drinkingfountains

I couldn’t find a map of these hydrants *anywhere* — except for an odd Foursquare map that linked to a defunct website.

I decided to map them myself, which was not terribly difficult to do.

Since Water Works is a project for the Creative Code Fellowship with Stamen Design, Gray Area and Autodesk, and I’m collaborating with Stamen, mapping is essential for this project. I used Leaflet and Javascript. It’s crude, but it works — the map does show the locations of the hydrants (click on the image to launch the map).

The map will get better, but at least this will show you where the nearest emergency drinking hydrant is to your home.

map_link

Apparently, these emergency hydrants were developed in 2006 as part of a 1-million-dollar program. These hydrants are tied to some of the most reliable drinking water mains.

Yesterday, I paid a visit to three hydrants in my neighborhood. They’re supposed to be marked with blue drops, but only 1 out of the 3 was properly marked.

Hydrant #46: 16th and Bryant, no blue drop

IMG_0022

Hydrant #53: Precita & Folsom, has a blue drop

IMG_0016

Hydrant #51: 23rd & Treat, no blue drop, with decorative sticker

IMG_0011

Editor’s note: I had previously talked about buying a fire hydrant wrench for a “just in case” scenario*. I’ve retracted this suggestion (by editing this blog entry).

I apologize for this suggestion: No, none of us should be opening hydrants, of course. And I’m not going to actually buy a hydrant wrench. Neither should you, unless you are SFFD, SFWD or otherwise authorized.

Oh yes, and I’m not the first to wonder about these hydrants. Check out this video from a few years ago.

* For the record, I never said that I would ever open a fire hydrant, just that I was planning to buy a fire hydrant wrench. One possible scenario is that I would hand my fire hydrant wrench to a qualified and authorized municipal employee, in case they were in need.

Modeling Cisterns

How do you construct a 3D model of something that lives underground and only exists in a handful of pictures taken from the interior? This was my task for the Cisterns of San Francisco last week.

The backstory: have you ever seen those brick circles in intersections and wondered what the heck they mean? I sure have.

It turns out that underneath each circle is an underground cistern. There are 170 or so* of them spread throughout the city. They’re part of the AWSS (Auxiliary Water Supply System) of San Francisco, a water system that exists entirely for emergency use.

The cisterns are just one aspect of my research for Water Works, which will map out the San Francisco water infrastructure and data-visualize the physical pipes and structures that keep the H2O moving in our city.

This project is part of my Creative Code Fellowship with Stamen Design, Gray Area and Autodesk.

Cistern_1505_MedRes

Many others have written about the cisterns: Atlas Obscura, Untapped Cities, Found SF, and the cisterns even have their own Wikipedia page, albeit one that needs some edits.

The original cisterns, about 35 or so, were built in the 1850s, after a series of great fires ravaged the city, located in the Telegraph Hill to Rincon Hill area. In the next several decades they were largely unused, but the fire department filled them up with water for a “just in case” scenario.

Meanwhile, in the late 19th century, as San Francisco rapidly developed into a large city, it began building a pressurized hydrant-based fire system, which was seen by many as a more effective way to deliver water in case of a fire. Many thought of the cisterns as antiquated and unnecessary.

However, when the 1906 earthquake hit, the SFFD was soon overwhelmed by a fire that tore through the city. The water mains collapsed. The old cisterns were one of the few sources of reliable water.

After the earthquake, the city passed bonds to begin construction of the AWSS — the separate water system just for fire emergencies. In addition to special pipes and hydrants fed from dedicated reservoirs, the city constructed about 140 more underground cisterns.

Cisterns are nodes disconnected from the network, with no pipes, and are maintained by the fire department, which presumably fills them every year. I’ve heard that some are incredibly leaky and others are watertight.

What do they look like inside? This is the *only* picture I can find anywhere, and it is of a cistern in the midst of seismic upgrade work. This one was built in 1910 and holds 75,000 gallons of water, the standard size for the cisterns. They are HUGE. As you can surmise from this picture, the water is not for drinking.

cistern

(Photographer: Robin Scheswohl; Title: Auxiliary Water supply system upgrade, San Francisco, USA)

Since we can’t see the outside of an underground cistern, I can only imagine what it might look like. My first sketch looked something like this.

cistern_drawing

I approached Taylor Stein, Fusion 360 product evangelist at Autodesk, who helped me make my crude drawing come to life. I printed it out on one of the Autodesk 3D printers, and lo and behold, it looks like this: a double hamburger with a nipple on top. Arggh! Back to the virtual drawing board.

IMG_0010

I scoured the interwebs and found this reference photograph of an underground German cistern. It’s clearly smaller than the ones in San Francisco, but it looks like it would hold water. The form is unique and didn’t seem to connote something other than a vessel-that-holds-water.

800px-Unterirdische_Zisterne

Once again, Taylor helped me bang this one out — within 45 minutes, we had a workable model in Fusion 360. We made ours with slightly wider dimensions on the top cone. The lid looks like a manhole.

cistern_3d

Within a couple of hours, I had some 3D prints ready. I printed out several sizes, scaling the height for various aesthetic tests.

cistern_models_printed

This was my favorite one. It vaguely looks like a cooking pot or a tortilla canister, but not *very* much. Those three rectangular ridges, parked at 120-degree angles, give it an unusual form.

IMG_0006

Now, it’s time to begin the more arduous project of mapping the cisterns themselves. And the tough part is still finishing the software that maps the cisterns into 3D space and exports them as an STL with some sort of binding support structure.

* I’ve only been able to locate 169 cisterns. Some reports state that there are 170; others say 173 or 177.

Data Miner, Water Detective

This summer, I’m working on a Creative Code Fellowship with Stamen Design, Gray Area and Autodesk. The project is called Water Works, which will map and data-visualize the San Francisco water infrastructure using 3D-printing and the web.

Finding water data is harder than I thought. Like detective Gittes in the movie Chinatown, I’m poking my nose around and asking everyone about water. Instead of murder and slimy deals, I am scouring the internet and working with city government. I’ve spent many hours sleuthing and learning about the water system in our city.

chinatown-nicholsonanddunway

In San Francisco, where this story takes place, we have three primary water systems. Here’s an overview:

The Sewer System is owned and operated by the SFPUC. The DPW provides certain engineering services. This is a combined stormwater and wastewater system. Yup, that’s right: the water you flush down the toilet goes into the same pipes as the rainwater. Everything gets piped to a state-of-the-art wastewater treatment plant. Amazingly, the sewer pipes are fed almost entirely by gravity, taking advantage of the natural landscape of the city.

The Auxiliary Water Supply System (AWSS) was built in 1908, just after the 1906 San Francisco Earthquake. It is an entire water system that is dedicated solely to firefighting. 80% of the city was destroyed not by the earthquake itself, but by the fires that ravaged the city, mostly because the water mains collapsed. Just afterwards, the city began construction on this separate infrastructure for combatting future fires. It consists of reservoirs that feed an entire network of pipes to high-pressure fire hydrants, and it also includes approximately 170 underground cisterns at various intersections in the city. This incredible separate water system is unique to San Francisco.

The Potable Water System, a.k.a. drinking water, is the water we get from our faucets and showers. It comes from Hetch Hetchy — a historic valley, but also a reservoir and water system constructed from 1913 to 1938 to provide water to San Francisco. This history is well-documented, but what I know little about is how the actual drinking water gets piped into San Francisco homes. Also, San Francisco’s water is amongst the safest in the world, so you can drink directly from your tap.

Given all of this, where is the story? This is the question that I asked folks at Stamen, Autodesk and Gray Area during a hyper-productive brainstorming session last week. Here’s the whiteboard with the notes. The takeaways, as folks call them, are below, and here I’m going to get into the nitty-gritty of process.

(whiteboard brainstorming session with Stamen)

stamen_brainstorm_full

(1) In my original proposal, I had envisioned a table-top version of the entire water infrastructure (pipes, cisterns, manhole chambers, reservoirs) as a large-scale sculpture, printed in panels. It was kindly pointed out to me by the Autodesk Creative Projects team that this is unfeasible. I quickly realized the truth of this: 3D prints are expensive, time-consuming to clean and fragile. Divide the sculptural part of the project into several small parts.

(2) People are interested in the sewer system. Someone said, “I want to know if you take a dump at Nob Hill, where does the poop go?” It’s universal. Everyone poops, even the Queen of England and even Batman. It’s funny, it’s gross, it’s entirely human. This could be accessible to everyone.

(3) Making visible the invisible, or revealing what’s in plain sight. The cisterns in San Francisco are one example. Those brick circles that you see in various intersections are actually 75,000-gallon underground cisterns. Work on a couple of discrete urban mapping projects.

(4) Think about focusing on making a beautiful and informative 3D map / data-visualization of just one square mile of San Francisco infrastructure. Hone in on one area of the city.

(5) Complex systems can be modeled virtually. Over the last couple of weeks, I’ve been running code tests, talking to many people in city government and building out an entire water modeling system in C++ using OpenFrameworks. It’s been slow, deliberate and arduous. Balance the physical models with a complex virtual one.
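For a sense of what that modeling layer involves, here is a minimal sketch of how such a system might be structured. The names and fields are hypothetical, not the actual Water Works code:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical data model for a virtual water system -- a sketch under
// assumptions, not the actual Water Works code. Each node is a manhole
// or sewer chamber read from the GIS data; pipes connect nodes by index.
struct SewerNode {
    double lat;        // latitude, in degrees
    double lon;        // longitude, in degrees
    double elevation;  // elevation above sea level, in meters
};

struct Pipe {
    std::size_t fromNode;  // index into WaterSystem::nodes
    std::size_t toNode;
    double diameter;       // in meters
};

struct WaterSystem {
    std::vector<SewerNode> nodes;
    std::vector<Pipe> pipes;
};
```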

I’m still not sure exactly where this project is heading, which is to be expected at this stage. For now, I’m mining data and acting as a detective. In the meantime, here is the trailer for Chinatown, which gives away the entire plot in 3 minutes.

 

Mapping Manholes

The last week has been a flurry of coding, as I’m quickly creating a crude but customized data-3D modeling application for Water Works — an art project for my Creative Code Fellowship with Stamen Design, Gray Area and Autodesk.

This project builds on my Data Crystals sculptures, which transform various public datasets algorithmically into 3D-printable art objects. For that artwork, I used Processing with the Modelbuilder libraries to generate STL files. It was a fairly easy coding solution, but I ran into performance issues along the way.

Processing tends to choke when managing 30,000 simple 3D cubes, and my clustering algorithms took hours to run. Because it runs on the Java Virtual Machine rather than compiling down to native machine code, it carries layers of inefficiency.

I bit the coding bullet and this week migrated my code to OpenFrameworks (an open source C++ environment). I’ve used OF before, but never with 3D work. There are still lots of gaps in the libraries, specifically the STL exporting, but I’ve had some initial success, woo-hoo!

Here are all the manholes (the technical term being “sewer nodes”), mapped into 3D space using GIS lat/lon and elevation coordinates. The clear indicator that this is San Francisco, and not Wisconsin (which this mapping vaguely resembles), is the swath of empty space that is Golden Gate Park.

What hooked me was that “a-ha” moment when the 3D points rendered properly on my screen. I was on a plane flight home from Seattle and involuntarily emitted an audible yelp. Check out the 3D mapping: there’s a density of nodes along Twin Peaks, and I accentuated the z-values to make San Francisco look even more hilly and to better show the locations of the sewer chambers.
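As a rough illustration of that mapping step (my reconstruction, not the project’s actual code): over an area as small as San Francisco, lat/lon can be treated as planar x/y, with the elevation multiplied by an exaggeration factor to bring out the hills.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Illustrative constants -- an equirectangular approximation is fine
// over a city-sized area: a degree of longitude shrinks by cos(latitude).
const double PI = 3.14159265358979323846;
const double ORIGIN_LAT = 37.7749;          // roughly the center of SF
const double ORIGIN_LON = -122.4194;
const double METERS_PER_DEGREE = 111320.0;  // meters per degree of latitude
const float  Z_EXAGGERATION = 3.0f;         // arbitrary "extra hilly" factor

// Map a GIS coordinate into local 3D space, with exaggerated elevation.
Vec3 gisToWorld(double lat, double lon, double elevationMeters) {
    double cosLat = std::cos(ORIGIN_LAT * PI / 180.0);
    Vec3 v;
    v.x = static_cast<float>((lon - ORIGIN_LON) * METERS_PER_DEGREE * cosLat);
    v.y = static_cast<float>((lat - ORIGIN_LAT) * METERS_PER_DEGREE);
    v.z = static_cast<float>(elevationMeters) * Z_EXAGGERATION;
    return v;
}
```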

Sewer nodes are just the start. I don’t have the connecting pipes in there just yet, not to mention the cisterns and other goodies of the SF water infrastructure.

water_works_nodes_screen_shot

Of course, I want to 3D print this. By increasing the node size (the cubic dimensions of each manhole location), I was able to generate a cohesive and 3D-printable structure. This is the MeshLab export with my custom-modified STL export code. I never thought I’d get this deep into 3D coding, but now I know all sorts of details, like triangle winding and the right-hand rule for STL export.

3d_terrain_meshlab
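For a flavor of those details, here is a simplified sketch of an ASCII STL facet writer (an illustration, not my actual exporter): wind each triangle’s vertices counter-clockwise as seen from outside the model, and the right-hand rule makes the computed normal point outward.

```cpp
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

// Write one triangle to an ASCII STL file -- a simplified sketch, not
// the actual exporter. If v0 -> v1 -> v2 wind counter-clockwise when
// viewed from outside the model, the right-hand rule makes the normal
// (v1 - v0) x (v2 - v0) point outward, which is what STL expects.
void writeFacet(std::FILE* f, Vec3 v0, Vec3 v1, Vec3 v2) {
    Vec3 a = { v1.x - v0.x, v1.y - v0.y, v1.z - v0.z };
    Vec3 b = { v2.x - v0.x, v2.y - v0.y, v2.z - v0.z };
    Vec3 n = { a.y * b.z - a.z * b.y,   // cross product a x b
               a.z * b.x - a.x * b.z,
               a.x * b.y - a.y * b.x };
    float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    if (len > 0.0f) { n.x /= len; n.y /= len; n.z /= len; }  // unit normal
    std::fprintf(f, "  facet normal %e %e %e\n", n.x, n.y, n.z);
    std::fprintf(f, "    outer loop\n");
    std::fprintf(f, "      vertex %e %e %e\n", v0.x, v0.y, v0.z);
    std::fprintf(f, "      vertex %e %e %e\n", v1.x, v1.y, v1.z);
    std::fprintf(f, "      vertex %e %e %e\n", v2.x, v2.y, v2.z);
    std::fprintf(f, "    endloop\n");
    std::fprintf(f, "  endfacet\n");
}
// A complete file wraps all facets in "solid name" ... "endsolid name",
// with twelve triangles (two per face) for every cube.
```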

And here is the 3D print of the San Francisco terrain, like the Data Crystals, with many intersecting cubes.

3d_terrain_better

It doesn’t have the aesthetic crispness of the Data Crystals project, but this is just a test print — very much a work-in-progress.
data_crystals

 

Creative Code Fellowship: Water Works Proposal

Along with 3 other new media artists and creative coding experts, I was recently selected to be a Creative Code Fellow for 2014 — a project pioneered by Gray Area (formerly referred to as GAFFTA and now in a new location in the Mission District).

Each of us is paired with a partnering studio, which provides a space and creative direction for our proposed project. The studio that I’m pleased to be working with is Stamen Design, a leader in the field of aesthetics, mapping and data-visualization.

I’ll also be continuing my residency work at Autodesk at Pier 9, which will be providing support for this project as well.

My proposed project is called “Water Works” — a 3D-printed data visualization of San Francisco’s water system infrastructure, along with some sort of web component.

grayarea-fellowship-home-page

 

Creative Code Fellowship Application Scott Kildall

Project Proposal (250 word limit)
My proposed project “Water Works” is a 3D data visualization of the complex network of pipes, aqueducts and cisterns that control the flow of water into our homes and out of our toilets. What lies beneath our feet is a unique combined wastewater system — where stormwater mixes with sewer lines and travels to a waste treatment plant, using gravitational energy from the San Francisco hills.

This dynamic flow is the circulatory system of the organism that is San Francisco. As we are impacted by climate change, which escalates drought and severe rainstorms, combined with population growth, how we obtain our water and dispose of it is critical to the lifeblood of this city.

Partnering with Autodesk, which will provide materials and shop support, I will write code that generates 3D prints from municipal GIS data. I imagine ghost-like underground 3D landscapes with thousands of threads of water — essentially flow data — interconnected to larger cisterns and aqueducts. The highly retinal work will invite viewers to explore the infrastructure the city provides. The end result might be panels that snap together on a tabletop for viewers to circumnavigate and explore.

The GIS data is available from San Francisco, though not online, and I’ve already obtained cooperation from SFDPW to provide some of the infrastructure data necessary to realize this project.

While my focus will be on the physical portion of this project, I will also build an interactive web-based version from the 3D data, making this a hybrid screen-physical project.

Why are you interested in participating in this fellowship? (150 word limit)
The fellowship would give me the funding, visibility and opportunity of working under the auspices of two progressive organizations: Gray Area and Stamen Design. I would expand my knowledge, serve the community and increase my artistic potential by working with members of these two groups, both of which have a progressive vision for art and design in my longtime home of San Francisco.

Specifically, I wish to further integrate 3D printing into the data visualization conversation. With the expertise of Stamen, I hope to evolve my visualization work at Autodesk. 3D-printing technology makes it possible to create what has hitherto been impossible and has enormous potential to materialize the imaginary.

Additionally, some of the immersive classes (HTML5, Javascript, Node.js) will be helpful in solidifying my web-programming skills so that I can produce the screen-based portion of this proposal.

What experience makes this a good fit for you? (150 word limit)
I have deep experience in producing both screen-based and physical data visualizations. While at the Exploratorium, I worked on many such exhibits for a general audience.

One example is a touch-screen exhibit called “Seasons of Plankton”, which shows how plankton species in the Bay change over the year, reflecting a diverse ecosystem of microscopic organisms. I collaborated with scientists and visitor evaluators to determine the optimal way to tell this story. I performed all of the coding work and media production for this successful piece.

While at Autodesk, my focus has been creating 3D data visualizations with custom code that transforms public data sets into “Data Crystals” (these are the submitted images). This exploration favors aesthetics over legibility. I hope to build upon this work and create physical forms that help people see the dynamics of a complex urban water system, inviting curiosity through beauty.

 

World Data Crystals

I just finished three more Data Crystals, produced during my residency at Autodesk. These three are data visualizations of world datasets.

This first one captures the population of every city in the world. After some internet sleuthing, I found a comprehensive .csv file of all of the cities by lat/long with their populations, and I worked on mapping the 30,000 or so data points into 3D space.

I rewrote my Data Crystal Generation program to translate the lat/long values into a sphere of world data points. I had to rotate the cubes to make them appear tangential to the globe. This forced me to re-learn high school trig functions, argh!
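For reference, the trig in question is the standard spherical-coordinate conversion. A minimal sketch, assuming latitude and longitude in degrees (my reconstruction, not the actual Data Crystals code):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

const double PI = 3.14159265358979323846;

// Convert latitude/longitude (in degrees) to a point on a sphere of a
// given radius -- the high-school trig that had to be dusted off.
Vec3 latLonToSphere(double latDeg, double lonDeg, double radius) {
    double lat = latDeg * PI / 180.0;
    double lon = lonDeg * PI / 180.0;
    Vec3 v;
    v.x = static_cast<float>(radius * std::cos(lat) * std::cos(lon));
    v.y = static_cast<float>(radius * std::cos(lat) * std::sin(lon));
    v.z = static_cast<float>(radius * std::sin(lat));
    return v;
}

// To make a cube sit tangent to the globe, rotate it so that its local
// "up" axis aligns with the sphere normal at that point -- which is
// simply the position vector divided by the radius.
```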

world_dc

What I like about the way this looks is that the negative space invites the viewer into the 3D mapping. The Sahara Desert is empty, just like the Atlantic Ocean. Italy has no negative space. There are no national boundaries or geographical features, just cubes and cities.

I sized each city by area, so that the bigger cities are represented as larger cubes. Here is the largest city in the world, Tokyo:

world_tokyo

This is the clustering algorithm in action. Running it in realtime in Processing takes several hours; this video is what it might look like if I were using C++ instead of Java.

I’m happy with the clustered Data Crystal. The hole in the middle of it is a result of the gap in the data created by the Pacific Ocean.

world_pop_crystal

The next Data Crystal maps all of the world’s airports. I learned that the United States has about 20,000 airports; most of these are small, unpaved runways. I still don’t know why.

Here is a closeup of the US, askew with Florida in the upper-left corner.

us_closeup

I performed similar clustering functions and ended up with this Data Crystal, which vaguely resembles an airplane.

world_airports_data_crystal

The last dataset, which is not pictured because my camera ran out of batteries and my charger was at home, represents all of the nuclear detonations in the world.

I’ll have better pictures of these crystals in the next week or so. Stay tuned.

 

Crime Classifications in San Francisco

Below is a list of the crime classifications, extracted from the crime reports on the San Francisco Open Data Portal (a sketch of how such a list can be extracted follows the list). This is part of my “data mining” work with the 3D-printed Data Crystals.

ARSON
ASSAULT
BAD CHECKS
BRIBERY
BURGLARY
DISORDERLY CONDUCT
DRIVING UNDER THE INFLUENCE
DRUG/NARCOTIC
DRUNKENNESS
EMBEZZLEMENT
EXTORTION
FAMILY OFFENSES
FORGERY/COUNTERFEITING
FRAUD
GAMBLING
KIDNAPPING
LARCENY/THEFT
LIQUOR LAWS
LOITERING
MISSING PERSON
NON-CRIMINAL
OTHER OFFENSES
PORNOGRAPHY/OBSCENE MAT
PROSTITUTION
RECOVERED VEHICLE
ROBBERY
RUNAWAY
SEX OFFENSES, FORCIBLE
SEX OFFENSES, NON FORCIBLE
STOLEN PROPERTY
SUICIDE
SUSPICIOUS OCC
TRESPASS
VANDALISM
VEHICLE THEFT
WARRANTS
WEAPON LAWS
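
For the curious, extracting a list like this is a small data-mining exercise. Here is a hypothetical sketch that collects the unique values of one CSV column; the filename and column index are illustrative, so check them against the portal’s actual export:

```cpp
#include <cstddef>
#include <fstream>
#include <iostream>
#include <set>
#include <sstream>
#include <string>

// Hypothetical sketch: collect the unique values of one CSV column
// (e.g. the crime "Category" field from the SF Open Data export).
// Assumes a simple comma-separated file with no quoted commas.
std::set<std::string> uniqueColumnValues(const std::string& path,
                                         std::size_t column) {
    std::set<std::string> values;
    std::ifstream in(path);
    std::string line;
    std::getline(in, line);  // skip the header row
    while (std::getline(in, line)) {
        std::stringstream row(line);
        std::string cell;
        for (std::size_t i = 0; std::getline(row, cell, ','); ++i) {
            if (i == column) { values.insert(cell); break; }
        }
    }
    return values;
}

int main() {
    // The filename and column index of "Category" are illustrative.
    for (const auto& v : uniqueColumnValues("sfpd_incidents.csv", 1))
        std::cout << v << "\n";
}
```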

3D Data Viz & SF Open Data

I’ve fallen a bit behind in my documentation and have a backlog of great stuff that I’ve been 3D-printing. These are a few of my early tests with my new project: Data Crystals. I am using various data sources, which I algorithmically transform into 3D sculptures.

The source for these is the San Francisco Open Data Portal — which provides datasets about all sorts of interesting things such as housing permit data, locations of parking meters and more.

My custom algorithms transform this data into 3D sculptures. Legibility is still an issue, but initial tests show the wonderful work that algorithms can do.

This is a transformation of San Francisco Crime Data. It turns out that crime happens everywhere, so the data is in a giant block.

crime_data

After running some crude data transformations, I “mined” this crystal: the locations of San Francisco public art. Most public art is located in the downtown and city hall area, but there is a tail, which represents the San Francisco Airport.

sf_art

More experiments: this is a test, based on the SF public art, where I played with varying the size of the cubes (this could encode a value for each artwork, which I don’t have data for…yet). Now I have a 4th axis for the data. Plus, there is a distinct aesthetic appeal to stacking differently-sized blocks as opposed to uniform ones.
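In code, that fourth axis can be as simple as normalizing a value into a range of cube sizes. A hypothetical sketch; the size range and function name are illustrative:

```cpp
#include <algorithm>

// Map a data value (e.g. an artwork's value) onto a cube edge length,
// so magnitude becomes a fourth visual axis alongside x/y/z position.
// The size range is an aesthetic choice; these numbers are illustrative.
float valueToCubeSize(float value, float minValue, float maxValue) {
    const float MIN_SIZE = 1.0f;  // millimeters in the final print
    const float MAX_SIZE = 6.0f;
    if (maxValue <= minValue) return MIN_SIZE;
    float t = (value - minValue) / (maxValue - minValue);
    t = std::clamp(t, 0.0f, 1.0f);
    return MIN_SIZE + t * (MAX_SIZE - MIN_SIZE);
}
```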

Stay tuned, there is more to come!

random_squares