
WaterWorks: From Code to 3D Print

In my ongoing Water Works project — a Creative Code Fellowship with Stamen Design, Gray Area and Autodesk — I’ve been working for many, many hours on code and data structures.

The immediate results were a Map of the San Francisco Cisterns and a Map of the “Imaginary Drinking Hydrants”.

However, I am also making 3D prints — fabricated sculptures, which I map out in 3D space using custom software and then 3D print.

The process has been arduous. I’ve learned a lot. I’m not sure I’d do it this way again, since I ended up writing a lot of custom code to do things like triangle winding for STL output and much, much more.

Here is how it works. First, I create a model in Fusion 360 — an Autodesk application — which I’ve slowly been learning and have become fond of.

Screen Shot 2014-08-21 at 10.12.47 PM

From various open datasets, I map out the geolocations of the hydrants or the cisterns in X,Y space. You can check out this Instructable on mapping the cisterns and this blog post on mapping the hydrants for more info. Using OpenFrameworks — an open source C++ toolkit — I map these out in 3D space. The Z-axis is the elevation.
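For the curious, the mapping itself is simple. Here is a minimal, standalone C++ sketch rather than the project’s actual code (the reference point and the hydrant records are stand-ins): it projects lat/lon onto a local X/Y plane and carries the elevation through as Z.

```cpp
#include <array>
#include <cmath>
#include <vector>

struct Node3D { double x, y, z; };

// Project lat/lon/elevation into local X/Y/Z coordinates (meters), using a
// simple equirectangular projection around a reference point. Good enough at
// city scale; the Z-axis is simply the elevation.
Node3D geoToLocal(double latDeg, double lonDeg, double elevationM,
                  double refLatDeg, double refLonDeg) {
    const double kEarthRadius = 6378137.0;  // meters
    const double kDegToRad    = 3.14159265358979323846 / 180.0;
    double x = (lonDeg - refLonDeg) * kDegToRad * kEarthRadius
               * std::cos(refLatDeg * kDegToRad);               // east-west
    double y = (latDeg - refLatDeg) * kDegToRad * kEarthRadius; // north-south
    return { x, y, elevationM };
}

int main() {
    // Hypothetical hydrant records: latitude, longitude, elevation (meters).
    std::vector<std::array<double, 3>> hydrants = {
        { 37.7749, -122.4194, 16.0 },
        { 37.7599, -122.4148, 70.0 },
    };
    std::vector<Node3D> nodes;
    for (const auto& h : hydrants)
        nodes.push_back(geoToLocal(h[0], h[1], h[2], 37.7749, -122.4194));
    return 0;
}
```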

The hydrants and cisterns are both disconnected entities in 3D space. They’d fall apart when trying to make a 3D print, so I use Delaunay triangulation code to connect the nodes into a 3D shape.
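I won’t reproduce the real triangulation code here, but the idea can be sketched with a deliberately naive version, assuming the triangulation runs over the X/Y positions while each node keeps its true elevation. A real implementation would use an incremental algorithm or a library; this brute-force sketch just illustrates the circumcircle test that defines a Delaunay triangle.

```cpp
#include <cmath>
#include <utility>
#include <vector>

struct Pt  { double x, y, z; };   // z = elevation, ignored by the 2D test
struct Tri { int a, b, c; };      // indices into the point list

// Signed area test: > 0 means a, b, c are ordered counter-clockwise.
double orient(const Pt& a, const Pt& b, const Pt& c) {
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

// True if d lies inside the circumcircle of triangle (a, b, c).
bool inCircumcircle(Pt a, Pt b, Pt c, const Pt& d) {
    if (orient(a, b, c) < 0) std::swap(b, c);    // force counter-clockwise
    double ax = a.x - d.x, ay = a.y - d.y;
    double bx = b.x - d.x, by = b.y - d.y;
    double cx = c.x - d.x, cy = c.y - d.y;
    return (ax * ax + ay * ay) * (bx * cy - cx * by)
         - (bx * bx + by * by) * (ax * cy - cx * ay)
         + (cx * cx + cy * cy) * (ax * by - bx * ay) > 0;
}

// Brute-force Delaunay over the XY plane: keep every triangle whose
// circumcircle contains no other point. O(n^4), illustration only.
std::vector<Tri> delaunay2D(const std::vector<Pt>& pts) {
    std::vector<Tri> tris;
    int n = (int)pts.size();
    for (int i = 0; i < n; ++i)
        for (int j = i + 1; j < n; ++j)
            for (int k = j + 1; k < n; ++k) {
                if (std::fabs(orient(pts[i], pts[j], pts[k])) < 1e-12) continue;
                bool empty = true;
                for (int m = 0; m < n && empty; ++m)
                    if (m != i && m != j && m != k &&
                        inCircumcircle(pts[i], pts[j], pts[k], pts[m]))
                        empty = false;
                if (empty) tris.push_back({ i, j, k });
            }
    return tris;  // each triangle becomes a face, using the points' real z
}
```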

Screen Shot 2014-08-21 at 10.07.59 PM

I designed my custom software to export a ready-to-print set of files in an STL format. My C++ code includes an editor which lets you do two things:

(1) specify which hydrants are “normal” hydrants and which ones have mounting holes in the bottom. The green ones have mounting holes and get exported as different STL files. I will insert 1/16″ stainless steel rods into the mounting holes and have the 3D prints “floating” on a piece of wood or some other material.

(2) remove or strengthen each Delaunay triangulation node — the red one is the one currently selected. This is the final layout for the print, but you can imagine how criss-crossed and hectic the original one was.

Screen Shot 2014-08-21 at 10.08.44 PM

Here is an exported STL in Meshlab. You can see the mounting holes at the bottom of some of the hydrants.
Screen Shot 2014-08-21 at 10.20.13 PM

I ran many, many tests before the final 3D print.

imaginary_drinking_faucets

And finally, I set up the print over the weekend. Here is the print, 50 hours later.
on_the_tray

It’s like I’m holding a birthday cake — I look so happy. This is at midnight last Sunday.

scott_holding_tray

The cleaning itself is super-arduous.

scott_cleaning

And after my initial round of cleaning, this is what I have.

hydrats_rough

And here are the cistern prints.

cisterns_3d

I haven’t yet mounted these prints, but this will come soon. There’s still loads of cleaning to do.

 

Modeling Cisterns

How do you construct a 3D model of something that lives underground and only exists in a handful of pictures taken from the interior? This was my task for the Cisterns of San Francisco last week.

The backstory: have you ever seen those brick circles in intersections and wondered what the heck they mean? I sure have.

It turns out that underneath each circle is an underground cistern. There are 170 or so* of them spread throughout the city. They’re part of the AWSS (Auxiliary Water Supply System) of San Francisco, a water system that exists entirely for emergency use.

The cisterns are just one aspect of my research for Water Works, which will map out the San Francisco water infrastructure and data-visualize the physical pipes and structures that keep the H2O moving in our city.

This project is part of my Creative Code Fellowship with Stamen Design, Gray Area and Autodesk.

Cistern_1505_MedRes

Many others have written about the cisterns: Atlas Obscura, Untapped Cities, Found SF, and the cisterns even have their own Wikipedia page, albeit one that needs some edits.

The original cisterns, about 35 or so, were built in the 1850s after a series of great fires ravaged the city; they are clustered in the area from Telegraph Hill to Rincon Hill. In the next several decades they were largely unused, but the fire department filled them up with water for a “just in case” scenario.

Meanwhile, in the late 19th century, as San Francisco rapidly developed into a large city, it began building a pressurized hydrant-based fire system, which was seen by many as a more effective way to deliver water in case of a fire. Many thought of the cisterns as antiquated and unnecessary.

However, when the 1906 earthquake hit, the SFFD was soon overwhelmed by a fire that tore through the city. The water mains collapsed. The old cisterns were one of the few sources of reliable water.

After the earthquake, the city passed bonds to begin construction of the AWSS — the separate water system just for fire emergencies. In addition to special pipes and hydrants fed from dedicated reservoirs, the city constructed about 140 more underground cisterns.

The cisterns are nodes disconnected from the network, with no pipes, and are maintained by the fire department, which presumably fills them every year. I’ve heard that some are incredibly leaky and others are watertight.

What do they look like inside? This is the *only* picture I can find anywhere, and it shows a cistern in the midst of seismic upgrade work. This one was built in 1910 and holds 75,000 gallons of water, the standard size for the cisterns. They are HUGE. As you can surmise from this picture, the water is not for drinking.

cistern

(Photographer: Robin Scheswohl; Title: Auxiliary Water supply system upgrade, San Francisco, USA)

Since we can’t see the outside of an underground cistern, I can only imagine what it might look like. My first sketch looked something like this.

cistern_drawing

I approached Taylor Stein, Fusion 360 product evangelist at Autodesk, who helped me make my crude drawing come to life. I printed it out on one of the Autodesk 3D printers and lo and behold, it looks like this: a double hamburger with a nipple on top. Arggh! Back to the virtual drawing board.

IMG_0010

I scoured the interwebs and found this reference photograph of an underground German cistern. It’s clearly smaller than the ones in San Francisco, but it looks like it would hold water. The form is unique and didn’t seem to connote anything other than a vessel-that-holds-water.

800px-Unterirdische_Zisterne

Once again, Taylor helped me bang this one out — within 45 minutes, we had a workable model in Fusion 360. We made ours with slightly wider dimensions on the top cone. The lid looks like a manhole.

cistern_3d

Within a couple of hours, I had some 3D prints ready. I printed out several sizes, scaling the height for various aesthetic tests.

cistern_models_printed

This was my favorite one. It vaguely looks like a cooking pot or a tortilla canister, but not *very* much. Those three rectangular ridges, parked at 120-degree angles, give it an unusual form.

IMG_0006

Now, it’s time to begin the more arduous project of mapping the cisterns themselves. And the tough part is still finishing the software that maps the cisterns into 3D space and exports them as an STL with some sort of binding support structure.

* I’ve only been able to locate 169 cisterns. Some reports state that there are 170, and others say 173 or 177.

Mapping Manholes

The last week has been a flurry of coding, as I’m quickly creating a crude but customized data-3D modeling application for Water Works — an art project for my Creative Code Fellowship with Stamen Design, Gray Area and Autodesk.

This project builds on my Data Crystals sculptures, which algorithmically transform various public datasets into 3D-printable art objects. For that artwork, I used Processing with the Modelbuilder libraries to generate STL files. It was a fairly easy coding solution, but I ran into performance issues along the way.

Processing tends to choke when managing 30,000 simple 3D cubes. My clustering algorithms took hours to run. Because it isn’t compiled into machine code and is instead interpreted, it has layers of inefficiency.

I bit the coding bullet and this week migrated my code to OpenFrameworks (an open source C++ environment). I’ve used OF before, but never with 3D work. There are still lots of gaps in the libraries, specifically the STL exporting, but I’ve had some initial success, woo-hoo!

Here are all the manholes (the technical term is “sewer nodes”), mapped into 3D space using GIS lat/lon and elevation coordinates. The clear indicator that this is San Francisco, and not Wisconsin (which this mapping vaguely resembles), is the swath of empty space that is Golden Gate Park.

What hooked me was the “a-ha” moment when the 3D points rendered properly on my screen. I was on a plane flight home from Seattle and involuntarily emitted an audible yelp. Check out the 3D mapping: there’s a density of nodes along Twin Peaks, and I accentuated the z-values to make San Francisco look even more hilly and to make the locations of the sewer chambers easier to read.
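The exaggeration itself is trivial: a multiplier applied to the elevation before each node is placed. The factor below is an arbitrary example value, not the one I actually used.

```cpp
// Exaggerate the terrain relief: scale the elevation (Z) before placing each
// sewer-node cube. A factor of 1.0 is true scale; larger values make the
// hills read more clearly at 3D-print scale.
struct Node3D { double x, y, z; };

Node3D placeSewerNode(double x, double y, double elevation,
                      double zExaggeration = 3.0) {  // example value only
    return { x, y, elevation * zExaggeration };
}
```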

Sewer nodes are just the start. I don’t have the connecting pipes in there just yet, not to mention the cisterns and other goodies of the SF water infrastructure.

water_works_nodes_screen_shot

Of course, I want to 3D print this. By increasing the node size — the cubic dimensions of each manhole location — I was able to generate a cohesive, 3D-printable structure. This is the Meshlab export with my custom-modified STL export code. I never thought I’d get this deep into 3D coding, but now I know all sorts of details, like triangle winding and the right-hand rule for STL export (there’s a sketch of that at the end of this post).

3d_terrain_meshlab

And here is the 3D print of the San Francisco terrain, like the Data Crystals, with many intersecting cubes.

3d_terrain_better

It doesn’t have the aesthetic crispness of the Data Crystals project, but this is just a test print — very much a work-in-progress.
data_crystals
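About that triangle winding and right-hand rule: here is the gist of an ASCII STL writer, a stripped-down sketch rather than my actual exporter. The facet normal is the cross product of two edges, and it only points outward if each triangle’s vertices are wound counter-clockwise when seen from outside the solid; get the winding backwards and slicers see an inside-out surface.

```cpp
#include <cmath>
#include <fstream>
#include <string>
#include <vector>

struct Vec3 { double x, y, z; };
// Vertices are wound counter-clockwise when viewed from outside the solid.
struct Triangle { Vec3 v0, v1, v2; };

Vec3 sub(const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
Vec3 cross(const Vec3& a, const Vec3& b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}
Vec3 normalize(const Vec3& v) {
    double len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return len > 0.0 ? Vec3{ v.x / len, v.y / len, v.z / len } : Vec3{ 0, 0, 0 };
}

// Write an ASCII STL. The right-hand rule: with counter-clockwise winding,
// cross(v1 - v0, v2 - v0) points out of the solid, so the stored facet normal
// agrees with the vertex order and the mesh reads as a closed surface.
void writeAsciiStl(const std::string& path, const std::vector<Triangle>& tris) {
    std::ofstream out(path);
    out << "solid waterworks\n";
    for (const Triangle& t : tris) {
        Vec3 n = normalize(cross(sub(t.v1, t.v0), sub(t.v2, t.v0)));
        out << "  facet normal " << n.x << " " << n.y << " " << n.z << "\n"
            << "    outer loop\n"
            << "      vertex " << t.v0.x << " " << t.v0.y << " " << t.v0.z << "\n"
            << "      vertex " << t.v1.x << " " << t.v1.y << " " << t.v1.z << "\n"
            << "      vertex " << t.v2.x << " " << t.v2.y << " " << t.v2.z << "\n"
            << "    endloop\n"
            << "  endfacet\n";
    }
    out << "endsolid waterworks\n";
}
```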

 

Creative Code Fellowship: Water Works Proposal

Along with 3 other new media artists and creative coding experts, I was recently selected to be a Creative Code Fellow for 2014 — a project pioneered by Gray Area (formerly referred to as GAFFTA and now in a new location in the Mission District).

Each of us is paired with a partnering studio, which provides a space and creative direction for our proposed project. The studio that I’m pleased to be working with is Stamen Design, a leader in the field of aesthetics, mapping and data-visualization.

I’ll also be continuing my residency work at Autodesk at Pier 9, which will be providing support for this project as well.

My proposed project is called “Water Works” — a 3D-printed data visualization of San Francisco’s water system infrastructure, along with some sort of web component.

grayarea-fellowship-home-page

 

Creative Code Fellowship Application: Scott Kildall

Project Proposal (250 word limit)
My proposed project “Water Works” is a 3D data visualization of the complex network of pipes, aqueducts and cisterns that control the flow of water into our homes and out of our toilets. What lies beneath our feet is a unique combined wastewater system — where stormwater mixes with sewer lines and travels to a waste treatment plant, using gravitational energy from the San Francisco hills.

This dynamic flow is the circulatory system of the organism that is San Francisco. As climate change escalates drought and severe rainstorms, and as the population grows, how we obtain our water and dispose of it is critical to the lifeblood of this city.

Partnering with Autodesk, which will provide materials and shop support, I will write code that generates 3D prints from municipal GIS data. I imagine ghost-like underground 3D landscapes with thousands of threads of water — essentially flow data — interconnected to larger cisterns and aqueducts. The highly retinal work will invite viewers to explore the infrastructure the city provides. The end result might be panels that snap together on a tabletop for viewers to circumnavigate and explore.

The GIS data is available from San Francisco, though not online, and I’ve already obtained cooperation from SFDPW to provide some of the infrastructure data necessary to realize this project.

While my focus will be on the physical portion of this project, I will also build an interactive web-based version from the 3D data, making this a hybrid screen-physical project.

Why are you interested in participating in this fellowship? (150 word limit)
The fellowship would give me the funding, visibility and opportunity of working under the umbrella of two progressive organizations: Gray Area and Stamen Design. I would expand my knowledge, serve the community and increase my artistic potential by working with members of these two groups, both of which have a progressive vision for art and design in my longtime home of San Francisco.

Specifically, I wish to further integrate 3D printing into the data visualization conversation. With the expertise of Stamen, I hope to evolve my visualization work at Autodesk. 3D-printing technology makes it possible to create what was hitherto impossible and has enormous potential to materialize the imaginary.

Additionally, some of the immersive classes (HTML5, JavaScript, Node.js) will be helpful in solidifying my web-programming skills so that I can produce the screen-based portion of this proposal.

What experience makes this a good fit for you? (150 word limit)
I have deep experience in producing both screen-based and physical data visualizations. While at the Exploratorium, I worked on many such exhibits for a general audience.

One example is a touch-screen exhibit called “Seasons of Plankton”, which shows how plankton species in the Bay change over the year, reflecting a diverse ecosystem of microscopic organisms. I collaborated with scientists and visitor evaluators to determine the optimal way to tell this story. I performed all of the coding work and media production for this successful piece.

While at Autodesk, my focus has been creating 3D data visualizations with my custom code that transforms public data sets into “Data Crystals” (these are the submitted images). This exploration favors aesthetics over legibility. I hope to build upon this work and create physical forms that help people see the dynamics of a complex urban water system and invite curiosity through beauty.

 

World Data Crystals

I just finished three more Data Crystals, produced during my residency at Autodesk. These three are data visualizations of world datasets.

This first one captures the population of all the cities in the world. After some internet sleuthing, I found a comprehensive .csv file of all of the cities, with their lat/long coordinates and populations, and I worked on mapping the 30,000 or so data points into 3D space.

I rewrote my Data Crystal Generation program to translate the lat/long values into a sphere of world data points. I had to rotate the cubes to make them appear tangential to the globe. This forced me to re-learn high school trig functions, argh!
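For the record, the trig boils down to a spherical-to-Cartesian conversion plus two rotations that tip each cube so it sits tangent to the globe. Here is a stripped-down sketch; the axis conventions are one reasonable choice, not necessarily the ones in my generator.

```cpp
#include <cmath>

const double kDegToRad = 3.14159265358979323846 / 180.0;

struct CubePlacement {
    double x, y, z;           // position on the globe surface
    double rotZdeg, rotYdeg;  // rotations that lay the cube tangent to the sphere
};

// Place a city cube on a sphere of radius R, with Y as the polar ("up") axis.
// Rotating the cube about the world Z axis by (lat - 90 degrees), then about
// the world Y axis by lon, carries its local +Y axis onto the outward radial
// direction, so the cube sits flat against the globe.
CubePlacement placeCity(double latDeg, double lonDeg, double R) {
    double lat = latDeg * kDegToRad;
    double lon = lonDeg * kDegToRad;
    CubePlacement c;
    c.x = R * std::cos(lat) * std::cos(lon);
    c.y = R * std::sin(lat);
    c.z = -R * std::cos(lat) * std::sin(lon);
    c.rotZdeg = latDeg - 90.0;  // tilt from the pole down to the latitude
    c.rotYdeg = lonDeg;         // spin around the polar axis to the longitude
    return c;
}
```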

world_dc

What I like about the way this looks is that the negative space invites the viewer into the 3D mapping. The Sahara Desert is empty, just like the Atlantic Ocean. Italy has no negative space. There are no national boundaries or geographical features, just cubes and cities.

I sized each city by area, so that the bigger cities are represented as larger cubes. Here is the largest city in the world, Tokyo.

world_tokyo

This is the clustering algorithm in action. Running it in realtime in Processing takes several hours. This is what the video would look like if I were using C++ instead of Java.
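The actual clustering code is more involved than I can show here, but a toy illustration of one way such a step can work (nudging every cube toward the centroid until it bumps into a neighbor) also shows why anything naive crawls with 30,000 cubes: every nudge checks for overlap against every other cube.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Cube { double x, y, z, size; };

// Axis-aligned overlap test between two cubes.
bool overlaps(const Cube& a, const Cube& b) {
    double d = (a.size + b.size) * 0.5;
    return std::fabs(a.x - b.x) < d &&
           std::fabs(a.y - b.y) < d &&
           std::fabs(a.z - b.z) < d;
}

// One pass of a toy clustering step: nudge each cube toward the centroid
// unless the move would collide with another cube. The overlap testing is
// O(n^2) per pass, which is why a naive version grinds for hours.
void clusterStep(std::vector<Cube>& cubes, double stepSize) {
    if (cubes.empty()) return;
    double cx = 0, cy = 0, cz = 0;
    for (const Cube& c : cubes) { cx += c.x; cy += c.y; cz += c.z; }
    cx /= cubes.size(); cy /= cubes.size(); cz /= cubes.size();

    for (std::size_t i = 0; i < cubes.size(); ++i) {
        Cube moved = cubes[i];
        double dx = cx - moved.x, dy = cy - moved.y, dz = cz - moved.z;
        double len = std::sqrt(dx * dx + dy * dy + dz * dz);
        if (len < 1e-9) continue;
        moved.x += dx / len * stepSize;
        moved.y += dy / len * stepSize;
        moved.z += dz / len * stepSize;

        bool collides = false;
        for (std::size_t j = 0; j < cubes.size() && !collides; ++j)
            if (j != i && overlaps(moved, cubes[j])) collides = true;
        if (!collides) cubes[i] = moved;  // otherwise the cube stays packed
    }
}
```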

I’m happy with the clustered Data Crystal. The hole in the middle of it is the result of the gap in the data created by the Pacific Ocean.

world_pop_crystal

The next Data Crystal maps all of the world's airports. I learned that the United States has about 20,000 airports; most of these are small, unpaved runways. I still don’t know why.

Here is a closeup of the US, askew, with Florida in the upper-left corner.

us_closeup

I performed similar clustering functions and ended up with this Data Crystal, which vaguely resembles an airplane.

world_airports_data_crystal

The last dataset, which is not pictured because my camera ran out of batteries and my charger was at home, represents all of the nuclear detonations in the world.

I’ll have better pictures of these crystals in the next week or so. Stay tuned.

 

First three Data Crystals

My first three Data Crystals are finished! I “mined” these from the San Francisco Open Data portal. My custom software culls through the data and clusters it into a 3D-printable form.

Each one involves different clustering algorithms. All of these start with geo-located data (x, y), with either time or space on the z-axis.

Here they are! And I’d love to do more (though a lot of work was involved).

Incidents of Crime
This shows the crime incidents in San Francisco over a 3-month period, with over 35,000 data points (the crystal took about 5 hours to “mine”). Each incident is a single cube. Less serious crimes such as drug possession are represented as small cubes, and more severe crimes such as kidnapping are larger ones. It turns out that crime happens everywhere, which is why this is a densely-packed shape.
datacrystal_crime
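The size mapping behind this is nothing fancy: each incident category gets a weight, and the weight sets the cube’s edge length. Something along these lines, where the categories and sizes are illustrative rather than the exact values from the piece:

```cpp
#include <map>
#include <string>

// Map a crime category to a cube edge length (in print units).
// The categories and sizes below are illustrative stand-ins.
double cubeSizeForCrime(const std::string& category) {
    static const std::map<std::string, double> severity = {
        { "DRUG/NARCOTIC", 1.0 },   // minor incidents: small cubes
        { "VEHICLE THEFT", 1.5 },
        { "ASSAULT",       2.0 },
        { "KIDNAPPING",    3.0 },   // severe incidents: large cubes
    };
    auto it = severity.find(category);
    return it != severity.end() ? it->second : 1.0;  // default size
}
```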

 

Construction Permits
This shows the current development pipeline — the construction permits in San Francisco. Work that affects just a single unit gets a smaller cube, and larger cubes correspond to larger developments. The upper left side of the crystal is the south side of the city — there is a lot of activity in the Mission and Excelsior districts, as you would expect. The arm on the upper right is West Portal. The nose towards the bottom is some skyscraper construction downtown.

dc_development

 

Civic Art Collection
This Data Crystal is generated from the San Francisco Civic Art Collection. Each cube is the same size, since it doesn’t feel right to make one art piece larger than another. The high top is City Hall, and the part extending below is some of the spaces downtown. The tail on the end is the artwork at San Francisco Airport.

datacrystal_sfart

 

EEG Data Crystals

I’ve had the Neurosky Mindwave headset in a box for over a year and just dove into it, as part of my ongoing Data Crystals research at Autodesk. The device is the technology backbone behind the project: EEG AR with John Craig Freeman (still working on funding).

The headset fits comfortably. Its space-age retro look is aesthetically pleasing, though I’d cover up the logo in a final art project. The gray arm rests on your forehead and reads your EEG levels, translating them into several values. The most useful are “attention” and “meditation”, which are calculations derived from a few different brainwave patterns.

eeg_headest

I’ve written custom software in Java, using the Processing libraries and ModelBuilder to generate 3D models in real-time from the headset. But after copious user-testing, I found out that the effective sample rate of the headset was 1 sample/second.* Ugh.

This isn’t the first time I’ve used the Neurosky set. In 2010, I developed an art piece, a portable personality kit called “After Thought”. That piece, however, relied on slow activity and was more like a tarot card reading, where the headset readings were secondary to the performance.

The general idea for the Data Crystals is to translate data into 3D prints. I’ve worked with data from San Francisco’s Data Portal. However, the idea of generating realtime 3D models from biometric data is hard to resist.

This is one of my first crystals — just a small sample of 200 readings. The black jagged squares represent “attention” and the white cubes correspond to “meditation”.

IMG_0963

Back to the sample rate…a real-time reading of 600 samples would take 10 minutes. Still, it’s great to be able to do real-time, so I imagine a dark room and a beanbag chair where you think about your day and then generate the prints.

Here’s what the software looks like. This is a video of my own EEG readings (recorded then replayed back at a faster rate).

And another view of the 3D print sample:

IMG_0965

What I like about this 3D print is the mixing of the two digital materials, where the black triangles intersect with the white squares. I still have quite a bit of refinement work to do on this piece.

Now, the challenge is what kind of environment to create for a 10-minute “3D Recording Session”. Many colleagues immediately suggest sexual arousal and drugs, which is funny, but something I want to avoid. One thing I learned at the Exploratorium was how to appeal to a wide audience, i.e. a more family-friendly one. This way, you can talk to anyone about the work you’re doing instead of a select audience.

Some thoughts: just after crossing the finish line in an extreme mountain bike race, right after waking up in the morning, after drinking a pot of coffee (our workplace drug-of-choice), or while soaking in the hot tub!

IMG_0966

* The website  advertises a “512Hz sampling rate – 1Hz eSense calculation rate.” Various blog posts indicate that the raw values often get repeated, meaning that the effective rate is super-slow.

 

3D Data Viz & SF Open Data

I’ve fallen a bit behind in my documentation and have a backlog of great stuff that I’ve been 3D-printing. These are a few of my early tests with my new project: Data Crystals. I am using various data sources, which I algorithmically transform into 3D sculptures.

The source for these is the San Francisco Open Data Portal — which provides datasets about all sorts of interesting things such as housing permit data, locations of parking meters and more.

My custom algorithms transform this data into 3D sculptures. Legibility is still an issue, but initial tests show the wonderful work that algorithms can do.

This is a transformation of San Francisco Crime Data. It turns out that crime happens everywhere, so the data is in a giant block.

crime_data

After running some crude data transformations, I “mined” this crystal: the location of San Francisco public art. Most public art is located in the downtown and city hall area. But there is a tail, which represents the San Francisco Airport.

sf_art

More experiments: this is a test, based on the SF public art, where I played with varying the size of the cubes (this could encode a suggested value for each artwork, which I don’t have data for…yet). Now I have a 4th axis for the data. Plus, there is a distinct aesthetic appeal to stacking differently-sized blocks as opposed to uniform ones.

Stay tuned, there is more to come!

random_squares

Materiality in 3D Prints

I’m resuming some of the 3D printing work this week for my ongoing 3D data visualization research (a.k.a. Data Crystals). Here are four small tests in the “embryonic” state.

IMG_0930

Step 1 in the cleaning process is the arduous task of using dental tools to pick away the support material.

IMG_0932

I have four “crystals” — two constructed from a translucent resin material and two from a more rubbery black material.

IMG_0933

And the finished product! The Tango Black (that’s the material) is below. I’m not so happy with how this feels: soft and bendy.

IMG_0934

And the Vero Clear — which has an aesthetic appeal to it, and is a hard resin that resembles ice. Remember the ICE (Intrusion Countermeasure Electronics) in Neuromancer…this is one source of inspiration.
IMG_0937