In the hours just before I finished my presentation, I also managed to get Life of Poo working. What is it? Well, an interactive map of where your poo goes based on the sewer data that I used for this project.
This is the final piece of the web-mapping portion of Water Works and uses Leaflet with animated markers, all in JavaScript, which is a new coding tool in my arsenal (I know, late to the party). I learned the basics in the Gray Area Creative Code Immersive class, which was provided as part of the fellowship.
The folks at Stamen Design also helped out and their designer-technicians turned me onto Leaflet as I bumbled my way through Javascript.
Typing in an address will begin an animated poo journey down the sewer map and to the wastewater treatment plant.
Not all of the flushes work as you’d expect. There are still glitches and bugs in the code. If you type in “16th & Mission”, the poo just sits there. Hmmm.
Why the bugs? I have some ideas (see below), but I really like the chaotic results, so I’ll keep them for now.
I think the erratic behavior is happening because of a utility I wrote, which does some complex node-trimming and doesn’t take gravity into account in its flow diagrams. The sewer data has about 30,000 valid data points, but Leaflet can only handle about 1,500 or so before it takes forever to load and refresh.
The utility I wrote parses the node data tree and recursively prunes it to a more reasonable number, combining upstream and downstream nodes. In an overflow situation, technically speaking, there are nodes where waste might be directed away from the waste-water treatment plant.
However, my code isn’t smart enough to determine which are overflow pipes and which are pipes to the treatment plants, so the node-flow doesn’t work properly.
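For the curious, the pruning idea can be sketched like this. This is a minimal stand-in, not the actual utility (whose data structures aren’t shown here): it assumes the sewer is a tree of nodes and collapses any run of pass-through nodes (one child each) down to the first junction or endpoint, which is how you get from ~30,000 nodes toward a count Leaflet can handle.

```cpp
#include <vector>

// A sewer node: position plus indices of downstream children.
struct Node {
    double x, y, z;
    std::vector<int> children;
};

// Recursively prune the tree: any run of nodes with exactly one
// child is collapsed into a single edge. Returns the index of the
// first "interesting" descendant (a junction or a leaf) reachable
// from `idx`, and rewrites the surviving children in place.
int collapseChain(std::vector<Node>& nodes, int idx) {
    while (nodes[idx].children.size() == 1)
        idx = nodes[idx].children[0];   // skip pass-through nodes
    for (int& c : nodes[idx].children)
        c = collapseChain(nodes, c);    // recurse into real branches
    return idx;
}
```

Note that this only follows child pointers, so it is just as gravity-blind as the real utility: it has no idea whether an edge carries flow toward the treatment plant or toward an overflow outfall.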
In case you’re still reading, here’s an illustration of a typical combined system that shows how the pipes might look. Sewer outfall doesn’t happen very often, but when your model ignores gravity, it sure will.
The 3D print of the sewer, which uses the exact same data set as Life of Poo, looks like this.
Life of Poo (Scott Kildall, 2014-09-16)
My exciting news is that this fall I will be an artist-in-residence at Impakt Works, which is in Utrecht, the Netherlands. The same organization puts on the Impakt Festival every year, which is a media arts festival that has been happening since 1988. My residency is from Sept 15-Nov 15 and coincides with the festival at the end of October.
Utrecht is a 30-minute train ride from Amsterdam and 45 minutes from Rotterdam. By all accounts it is a small, beautiful canal city with medieval origins, and it hosts the largest university in the Netherlands.
Of course, I’m thrilled. This is my first European art residency and I’ll have a chance to reconnect with some friends who live in the region as well as make many new connections.
The project I’ll be working on is called EquityBot and will premiere at the Impakt Festival in late October as part of their online component. It will have a virtual presence like my Playing Duchamp artwork (a Turbulence commission) and my more recent project, Bot Collective, produced while an artist-in-residence at Autodesk.
Like many of my projects this year, this will involve heavy coding, data-visualization and a sculptural component.
At this point, I’m in the research and pre-production phase. While configuring back-end server code, I’m also gathering reading materials about capital and algorithms for the upcoming plane rides, train rides and rainy Netherland evenings.
Here is the project description:
EquityBot
EquityBot is a stock-trading algorithm that explores the connections between collective emotions on social media and financial speculation. Using custom algorithms, EquityBot correlates group sentiments expressed on Twitter with fluctuations in related stocks, distilling trends in worldwide moods into financial predictions which it then issues through its own Twitter feed. By re-inserting its results into the same social media system it draws upon, EquityBot elaborates on the ways in which digital networks can enchain complex systems of affect and decision-making to produce unpredictable and volatile feedback loops between human and non-human actors.
Currently, autonomous trading algorithms comprise the large majority of stock trades. These analytic engines are normally sequestered by private investment companies operating with billions of dollars. EquityBot reworks this system, imagining what it might be like if this technological attention was directed towards the public good instead. How would the transparent, public sharing of powerful financial tools affect the way the stock market works for the average investor?
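EquityBot’s actual algorithms aren’t public, but the core correlation step can be sketched with something as simple as a Pearson coefficient between a daily sentiment series and a series of stock returns. This is an illustrative stand-in, not the project’s code:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Pearson correlation between two equal-length series, e.g. daily
// sentiment scores and daily stock returns. Returns 0 for empty,
// mismatched, or constant series.
double pearson(const std::vector<double>& a, const std::vector<double>& b) {
    const std::size_t n = a.size();
    if (n == 0 || b.size() != n) return 0.0;
    double ma = 0, mb = 0;
    for (std::size_t i = 0; i < n; ++i) { ma += a[i]; mb += b[i]; }
    ma /= n; mb /= n;
    double cov = 0, va = 0, vb = 0;   // covariance and variances
    for (std::size_t i = 0; i < n; ++i) {
        cov += (a[i] - ma) * (b[i] - mb);
        va  += (a[i] - ma) * (a[i] - ma);
        vb  += (b[i] - mb) * (b[i] - mb);
    }
    if (va == 0 || vb == 0) return 0.0;
    return cov / std::sqrt(va * vb);
}
```

A real system would also lag the sentiment series against the returns, test for significance, and so on; this only shows the basic correlation machinery.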
I’m imagining a digital fabrication portion of EquityBot, which will be the more experimental part of the project and will involve 3D-printed joinery. I’ll be collaborating with my longtime friend and colleague, Michael Ang on the technology — he’s already been developing a related polygon construction kit — as well as doing some idea-generation together.
“Mang” lives in Berlin, which is a relatively short train ride, so I’m planning to make a trip where we can work together in person and get inspired by some of the German architecture.
My new 3D printer — a Printrbot Simple Metal — will accompany me to Europe. This small, relatively portable machine produces decent-quality results, at least for 3D joints, which will be hidden anyway.
However, I am also making fabricated sculptures: 3D prints of the hydrants and cisterns, which I map out in 3D space using custom code and then 3D print.
The process has been arduous. I’ve learned a lot. I’m not sure I’d do it this way again, since I ended up writing a lot of custom code to do things like triangle-winding for STL output and much, much more.
Here is how it works. First, I create a model in Fusion 360 — an Autodesk application — which I’ve slowly been learning and have become fond of.
The hydrants and cisterns are disconnected entities in 3D space. They’d fall apart as a 3D print, so I use Delaunay triangulation code to connect the nodes into a single 3D shape.
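A full 3D Delaunay triangulation is beyond a short sketch, so here is a simpler stand-in for the connecting step: link each point to its k nearest neighbours. This is not the Delaunay code the project uses, just a minimal way to show how disconnected hydrants become one connected skeleton:

```cpp
#include <algorithm>
#include <utility>
#include <vector>

struct P { double x, y, z; };

// Link each point to its k nearest neighbours (a crude stand-in for
// Delaunay triangulation). Returns deduplicated index pairs (i, j)
// with i < j; brute-force O(n^2 log n), fine for a few hundred nodes.
std::vector<std::pair<int,int>> nearestLinks(const std::vector<P>& pts, int k) {
    std::vector<std::pair<int,int>> edges;
    for (int i = 0; i < (int)pts.size(); ++i) {
        std::vector<std::pair<double,int>> d;   // (squared distance, index)
        for (int j = 0; j < (int)pts.size(); ++j) {
            if (j == i) continue;
            double dx = pts[i].x - pts[j].x, dy = pts[i].y - pts[j].y,
                   dz = pts[i].z - pts[j].z;
            d.push_back({dx*dx + dy*dy + dz*dz, j});
        }
        std::sort(d.begin(), d.end());
        for (int n = 0; n < k && n < (int)d.size(); ++n)
            edges.push_back({std::min(i, d[n].second),
                             std::max(i, d[n].second)});
    }
    std::sort(edges.begin(), edges.end());
    edges.erase(std::unique(edges.begin(), edges.end()), edges.end());
    return edges;
}
```

Each resulting edge would then become a printable strut between two hydrant nodes.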
I designed my custom software to export a ready-to-print set of files in an STL format. My C++ code includes an editor which lets you do two things:
(1) specify which hydrants are “normal” hydrants and which ones have mounting holes in the bottom. The green ones have mounting holes, which are different STL files. I will insert 1/16″ stainless steel rod into the mounting holes and have the 3D prints “floating” on a piece of wood or some other material.
(2) remove and strengthen each Delaunay triangulation node — the red one is the one currently connected. This is the final layout for the print, but you can imagine how criss-crossed and hectic the original one was.
Here is an exported STL in Meshlab. You can see the mounting holes at the bottom of some of the hydrants.
I ran many, many tests before the final 3D print.
And finally, I set up the print over the weekend. Here is the print 50 hours later.
It’s like I’m holding a birthday cake — I look so happy. This is at midnight last Sunday.
The cleaning itself is super-arduous.
And after my initial round of cleaning, this is what I have.
And here are the cistern prints.
I haven’t yet mounted these prints, but this will come soon. There’s still loads of cleaning to do.
WaterWorks: From Code to 3D Print (Scott Kildall, 2014-08-22)
I felt a semblance of pride in being a “citizen-mapper” and helping the public in case of a dire emergency. I wondered why these maps weren’t more public. I had located the emergency hydrant data from a couple of different places, but nowhere very visible.
Apparently, these hydrants are not for emergency use after all. Who knew? Nowhere could I find a place that said they were discontinued.
Last Friday, the SFPUC contacted SFist and issued this statement (forwarded to me by the reporter, Jay Barmann):
————————————
The biggest concern [about getting emergency water from hydrants] is public health and safety. First of all, tapping into a hydrant is dangerous as many are high pressure and can easily cause injury. Some are very high pressure! Second, even the blue water drop hydrants from our old program in 2006 (no longer active) can be contaminated after an earthquake due to back flow, crossed lines, etc. We absolutely do not want the public trying to open these hydrants and they could become sick from drinking the water. They could also tap a non-potable hydrant and become sick if they drink water for fire-fighting use. After an earthquake, we have water quality experts who will assess the safety of hydrants and water from the hydrants before providing it to the public.
AND of course, no way should ANYONE be opening hydrants except SFFD and SFWD; if people are messing with hydrants, this could de-pressurize the system when SFFD needs the water pressure to fight fires, and also will be a further distraction for emergency workers to monitor.
We are in the process of updating our emergency water program… We are also going to be training NERT teams to help assess water after an emergency.
————————————
Uh-oh. Jay wrote: “It had sounded like designer Scott Kildall, who had been mapping the hydrants, had done a fair amount of research, but apparently not.”
Was I lazy or over-excited? I don’t think so. I re-scoured the web; nowhere did I find a reference to the Blue Drop Hydrant Program being discontinued.
My references were these two PDFs (links may be changed by municipal agencies after this post).
** I have some questions **
(1) Since nowhere on the web could I find a reference to this program being discontinued, why are these maps still online? Why didn’t the SFPUC make a public announcement that this program was being discontinued? It makes me look bad as a Water Detective and Data Miner, but more importantly, there may have been other people relying on these hydrants. Perhaps.
(2) Why are there still blue drops painted on some of these hydrants? Shouldn’t the SFPUC have repainted all of the blue drop hydrants white to signal that they are no longer in use?
(3) Why did our city spend 1 million dollars several years ago (2006) to set up these emergency hydrants in the first place when they weren’t maintainable? The SFPUC statement says: “even the blue water drop hydrants…can be contaminated after an earthquake due to back flow, crossed lines, etc.”
Did something change between 2006 and 2014? Wouldn’t these lines have always been susceptible to backflow, crossed lines, etc. when this program was initiated? 1 million bucks is a lot of money!
(4) Finally, and the most pressing question: why don’t we have emergency drinking hydrants or some other centralized system?
I *love* the idea of people going to central spots in their neighborhood in case they don’t have access to drinking water. Yes, we should have emergency drinking water in our homes. But many people haven’t prepared. Or maybe your basement will collapse and your water will be unavailable. Or maybe you’ll be somewhere else: at work, at a restaurant, who knows?
Look, I’m a huge supporter of city government and want to celebrate the beautiful water infrastructure of San Francisco with my Water Works project, part of the Creative Code Fellowship with Stamen Design, Gray Area and Autodesk. The SFPUC does very good work. They are very drought-conscious and have great info on their website in general.
It’s unfortunate that these blue drop hydrants were discontinued.
It was a disheartening tale of urban planning. I wish the SFPUC had contacted me directly instead of the person who wrote the article. I’ll plan to update my map accordingly, perhaps stating that this is a historical map of sorts.
By the way, you can still see the blue drop hydrants on Street View:
And here’s the Facebook statement by SFPUC — hey, I’m glad they’re informing the public on this one!
Did you know that San Francisco has 67 fire hydrants that are designed for emergency drinking water in case of an earthquake-scale disaster? Neither did I. That’s because just about no one knows about these hydrants.
While scouring the web for Cistern locations — as part of my Water Works Project*, which will map out the San Francisco water infrastructure and data-visualize the physical pipes and structures that keep the H2O moving in our city — I found this list.
I became curious.
I couldn’t find a map of these hydrants *anywhere* — except for an odd Foursquare map that linked to a defunct website.
I decided to map them myself, which was not terribly difficult to do.
Since Water Works is a project for the Creative Code Fellowship with Stamen Design, Gray Area and Autodesk, and I’m collaborating with Stamen, mapping is essential for this project. I used Leaflet and JavaScript. It’s crude but it works — the map does show the locations of the hydrants (click on the image to launch the map).
The map will get better, but at least this will show you where the nearest emergency drinking hydrant is to your home.
Yesterday, I paid a visit to three hydrants in my neighborhood. They’re supposed to be marked with blue drops, but only 1 out of the 3 was properly marked.
Hydrant #46: 16th and Bryant, no blue drop
Hydrant #53, Precita & Folsom, has a blue drop
Hydrant #51, 23rd & Treat, no blue drop, with decorative sticker
Editors note: I had previously talked about buying a fire hydrant wrench for a “just in case” scenario*. I’ve retracted this suggestion (by editing this blog entry).
I apologize for this suggestion: No, none of us should be opening hydrants, of course. And I’m not going to actually buy a hydrant wrench. Neither should you, unless you are SFFD, SFWD or otherwise authorized.
Oh yes, and I’m not the first to wonder about these hydrants. Check out this video from a few years ago.
* For the record, I never said that I would ever open a fire hydrant, just that I was planning to buy a fire hydrant wrench. One possible scenario is that I would hand my fire hydrant wrench to a qualified and authorized municipal employee, in case they were in need.
Mapping Emergency Drinking Water Hydrants (Scott Kildall, 2014-07-28)
How do you construct a 3D model of something that lives underground and only exists in a handful of pictures taken from the interior? This was my task for the Cisterns of San Francisco last week.
The backstory: have you ever seen those brick circles in intersections and wondered what the heck they mean? I sure have.
It turns out that underneath each circle is an underground cistern. There are 170 or so* of them spread throughout the city. They’re part of the AWSS (Auxiliary Water Supply System) of San Francisco, a water system that exists entirely for emergency use.
The cisterns are just one aspect of my research for Water Works, which will map out the San Francisco water infrastructure and data-visualize the physical pipes and structures that keep the H2O moving in our city.
The original cisterns, about 35 or so, were built in the 1850s in the Telegraph Hill to Rincon Hill area, after a series of great fires ravaged the city. In the next several decades they were largely unused, but the fire department kept them filled with water for a “just in case” scenario.
Meanwhile, in the late 19th century, as San Francisco rapidly developed into a large city, it began building a pressurized hydrant-based fire system, which was seen by many as a more effective way to deliver water in case of a fire. Many thought of the cisterns as antiquated and unnecessary.
However, when the 1906 earthquake hit, the SFFD was soon overwhelmed by a fire that tore through the city. The water mains collapsed. The old cisterns were one of the few sources of reliable water.
After the earthquake, the city passed bonds to begin construction of the AWSS — the separate water system just for fire emergencies. In addition to special pipes and high-pressure hydrants fed from dedicated reservoirs, the city constructed about 140 more underground cisterns.
The cisterns are nodes disconnected from the network, with no pipes feeding them, and are maintained by the fire department, which presumably fills them every year. I’ve heard that some are incredibly leaky and others are watertight.
What do they look like inside? This is the *only* picture I can find anywhere, and it shows a cistern in the midst of seismic upgrade work. This one was built in 1910 and holds 75,000 gallons of water, the standard size for the cisterns. They are HUGE. As you can surmise from this picture, the water is not for drinking. (Photographer: Robin Scheswohl; Title: Auxiliary Water supply system upgrade, San Francisco, USA)
Since we can’t see the outside of an underground cistern, I can only imagine what it might look like. My first sketch looked something like this.
I approached Taylor Stein, Fusion 360 product evangelist at Autodesk, who helped me make my crude drawing come to life. I printed it out on one of the Autodesk 3D printers and lo and behold it looks like this: a double hamburger with a nipple on top. Arggh! Back to the virtual drawing board.

I scoured the interwebs and found this reference photograph of an underground German cistern. It’s clearly smaller than the ones in San Francisco, but it looks like it would hold water. The form is unique and didn’t seem to connote something other than a vessel-that-holds-water.

Once again, Taylor helped me bang this one out — within 45 minutes, we had a workable model in Fusion 360. We made ours with slightly wider dimensions on the top cone. The lid looks like a manhole.
Within a couple hours, I had some 3D prints ready. I printed out several sizes, scaling the height for various aesthetic tests.
This was my favorite one. It vaguely looks like a cooking pot or a tortilla canister, but not *very* much. Those three rectangular ridges, parked at 120-degree angles, give it an unusual form.
Now, it’s time to begin the more arduous project of mapping the cisterns themselves. And the tough part is still finishing the software that maps the cisterns into 3D space and exports them as an STL with some sort of binding support structure.
* I’ve only been able to locate 169 cisterns. Some reports state that there are 170; others say 173 or 177.
Finding water data is harder than I thought. Like detective Gittes in the movie Chinatown, I’m poking my nose around and asking everyone about water. Instead of murder and slimy deals, I am scouring the internet and working with city government. I’ve spent many hours sleuthing and learning about the water system in our city.
In San Francisco, where this story takes place, we have three primary water systems. Here’s an overview:
The Sewer System is owned and operated by the SFPUC. The DPW provides certain engineering services. This is a combined stormwater and wastewater system. Yup, that’s right, the water you flush down the toilet goes into the same pipes as the rainwater. Everything gets piped to a state-of-the-art wastewater treatment plant. Amazingly, the sewer pipes are fed almost entirely by gravity, taking advantage of the natural landscape of the city.
The Auxiliary Water Supply System (AWSS) was built in 1908, just after the 1906 San Francisco Earthquake. It is an entire water system that is dedicated solely to firefighting. 80% of the city was destroyed not by the earthquake itself, but by the fires that ravaged the city. The fires rampaged through the city mostly because the water mains collapsed. Just afterwards, the city began construction on this separate infrastructure for combatting future fires. It consists of reservoirs that feed an entire network of pipes to high-pressure fire hydrants and also includes approximately 170 underground cisterns at various intersections in the city. This incredible separate water system is unique to San Francisco.
The Potable Water System, a.k.a. drinking water, is the water we get from our faucets and showers. It comes from the Hetch Hetchy — a historic valley but also a reservoir and water system constructed from 1913-1938 to provide water to San Francisco. This history is well-documented, but what I know little about is how the actual drinking water gets piped into San Francisco homes. Also, San Francisco water is among the safest in the world, so you can drink directly from your tap.
Given all of this, where is the story? This is the question that I asked folks at Stamen, Autodesk and Gray Area during a hyper-productive brainstorming session last week. Here’s the whiteboard with the notes. The takeaways, as folks call them, are below, and here I’m going to get nitty-gritty into process.
(whiteboard brainstorming session with Stamen)
(1) In my original proposal, I had envisioned a table-top version of the entire water infrastructure: pipes, cisterns, manhole chambers, reservoirs as a large-scale sculpture, printed in panels. It was kindly pointed out to me by the Autodesk Creative Projects team that this is unfeasible. I quickly realized the truth of this: 3D prints are expensive, time-consuming to clean and fragile. Divide the sculptural part of the project into several small parts.
(2) People are interested in the sewer system. Someone said, “I want to know if you take a dump at Nob Hill, where does the poop go?” It’s universal. Everyone poops, even the Queen of England and even Batman. It’s funny, it’s gross, it’s entirely human. This could be accessible to everyone.
(3) Making visible the invisible or revealing what’s in plain sight. The cisterns in San Francisco are one example. Those brick circles that you see in various intersections are actually 75,000 gallon underground cisterns. Work on a couple of discrete urban mapping projects.
(4) Think about focusing on making a beautiful and informative 3D map / data-visualization of just 1 square mile of San Francisco infrastructure. Home in on one area of the city.
(5) Complex systems can be modeled virtually. Over the last couple weeks, I’ve been running code tests, talking to many people in city government and building out an entire water modeling system in C++ using OpenFrameworks. It’s been slow, deliberate and arduous. Balance the physical models with a complex virtual one.
I’m still not sure exactly where this project is heading, which is to be expected at this stage. For now, I’m mining data and acting as a detective. In the meantime, here is the trailer for Chinatown, which gives away the entire plot in 3 minutes.
Data Miner, Water Detective (Scott Kildall, 2014-07-17)
This project builds on my Data Crystals sculptures, which transform various public datasets algorithmically into 3D-printable art objects. For this artwork, I used Processing with the Modelbuilder libraries to generate STL files. It was a fairly easy coding solution, but I ran into performance issues along the way.
Processing tends to choke at managing 30,000 simple 3D cubes. My clustering algorithms took hours to run. Because it isn’t compiled into machine code and is instead interpreted, it has layers of inefficiency.
I bit the coding bullet and this week migrated my code to OpenFrameworks (an open source C++ environment). I’ve used OF before, but never with 3D work. There are still lots of gaps in the libraries, specifically the STL exporting, but I’ve had some initial success, woo-hoo!
Here are all the manholes, the technical term being “sewer nodes”, mapped into 3D space using GIS lat/lon and elevation coordinates. The clear indicator that this is San Francisco, and not Wisconsin (which this mapping vaguely resembles), is the swath of empty space that is Golden Gate Park.
What hooked me was that “a-ha” moment where 3D points rendered properly on my screen. I was on a plane flight home from Seattle and involuntarily emitted an audible yelp. Check out the 3D mapping. There’s a density of nodes along the Twin Peaks, and I accentuated the z-values to make San Francisco look even more hilly and to understand the location of the sewer chambers even better.
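The mapping step can be sketched roughly like this: a simple flat projection into local model space, with the z-values multiplied up to accentuate the hills. The reference point and exaggeration factor below are illustrative assumptions, not the values actually used for the print.

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// Map a GIS point (lat/lon in degrees, elevation in meters) into
// local model space: an equirectangular projection around a
// reference point, with the elevation exaggerated by zScale so the
// terrain reads more strongly in the 3D view.
Vec3 gisToModel(double lat, double lon, double elev,
                double refLat, double refLon, double zScale) {
    const double kPi = 3.14159265358979323846;
    const double kMetersPerDegLat = 111320.0;  // rough meters per degree
    double x = (lon - refLon) * kMetersPerDegLat *
               std::cos(refLat * kPi / 180.0);  // shrink with latitude
    double y = (lat - refLat) * kMetersPerDegLat;
    return {x, y, elev * zScale};
}
```

For a city-sized extent this flat projection is plenty accurate; each resulting point becomes the center of a node cube.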
Sewer nodes are just the start. I don’t have the connecting pipes in there just yet, not to mention the cisterns and other goodies of the SF water infrastructure.
Of course, I want to 3D print this. By increasing the node size, the cubic dimensions of each manhole location, I was able to generate a cohesive and 3D-printable structure. This is the Meshlab export with my custom-modified STL export code. I never thought I’d get this deep into 3D coding, but now I know all sorts of details, like triangle winding and the right-hand rule for STL export.

And here is the 3D print of the San Francisco terrain, like the Data Crystals, with many intersecting cubes.
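The right-hand-rule detail looks like this in practice: an STL facet lists its vertices counter-clockwise as seen from outside the solid, and the outward normal then falls out of the cross product. A minimal sketch (not the custom export code itself):

```cpp
#include <cmath>

struct V3 { double x, y, z; };

// STL facets list vertices counter-clockwise when viewed from
// outside; the right-hand rule then gives the outward normal as the
// normalized cross product (b - a) x (c - a). Getting the winding
// wrong is what produces inside-out meshes.
V3 facetNormal(V3 a, V3 b, V3 c) {
    V3 u{b.x - a.x, b.y - a.y, b.z - a.z};
    V3 v{c.x - a.x, c.y - a.y, c.z - a.z};
    V3 n{u.y * v.z - u.z * v.y,
         u.z * v.x - u.x * v.z,
         u.x * v.y - u.y * v.x};
    double len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    if (len > 0) { n.x /= len; n.y /= len; n.z /= len; }
    return n;
}
```

An ASCII STL writer emits this normal on the `facet normal` line ahead of the three vertices of each triangle.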
It doesn’t have the aesthetic crispness of the Data Crystals project, but this is just a test print — very much a work-in-progress.
Along with 3 other new media artists and creative coding experts, I was recently selected to be a Creative Code Fellow for 2014 — a project pioneered by Gray Area (formerly referred to as GAFFTA and now in a new location in the Mission District).
Each of us is paired with a partnering studio, which provides a space and creative direction for our proposed project. The studio that I’m pleased to be working with is Stamen Design, a leader in the field of aesthetics, mapping and data-visualization.
I’ll be also continuing my residency work at Autodesk at Pier 9, which will be providing support for this project as well.
My proposed project is called “Water Works” — a 3D-printed data visualization of San Francisco’s water system infrastructure, along with some sort of web component.
Creative Code Fellowship Application Scott Kildall
Project Proposal (250 word limit)
My proposed project “Water Works” is a 3D data visualization of the complex network of pipes, aqueducts and cisterns that control the flow of water into our homes and out of our toilets. What lies beneath our feet is a unique combined wastewater system — where stormwater mixes with sewer lines and travels to a waste treatment plant, using gravitational energy from the San Francisco hills.
This dynamic flow is the circulatory system of the organism that is San Francisco. As we are impacted by climate change, which escalates drought and severe rainstorms, combined with population growth, how we obtain our water and dispose of it is critical to the lifeblood of this city.
Partnering with Autodesk, which will provide materials and shop support, I will write code, which will generate 3D prints from municipal GIS data. I imagine ghost-like underground 3D landscapes with thousands of threads of water — essentially flow data — interconnected to larger cisterns and aqueducts. The highly retinal work will invite viewers to explore the infrastructure the city provides. The end result might be panels that snap together on a tabletop for viewers to circumnavigate and explore.
The GIS data is available, though not online, from San Francisco and already I’ve obtained cooperation from SFDPW about providing some infrastructure data necessary to realize this project.
While my focus will be on the physical portion of this project, I will also build an interactive web-based version from the 3D data, making this a hybrid screen-physical project.
Why are you interested in participating in this fellowship? (150 word limit)
The fellowship would give me the funding, visibility and opportunity of working under the auspices of two progressive organizations: Gray Area and Stamen Design. I would expand my knowledge, serve the community and increase my artistic potential by working with members of these two groups, both of which have a progressive vision for art and design in my longtime home of San Francisco.
Specifically, I wish to further integrate 3D printing into the data visualization conversation. With the expertise of Stamen, I hope to evolve my visualization work at Autodesk. The 3D-printing technology makes possible what has hitherto been impossible to create and has enormous possibilities to materialize the imaginary.
Additionally, some of the immersive classes (HTML5, JavaScript, Node.js) will be helpful in solidifying my web-programming skills so that I can produce the screen-based portion of this proposal.
What experience makes this a good fit for you? (150 word limit)
I have deep experience in producing both screen-based and physical data visualizations. While at the Exploratorium, I worked on many such exhibits for a general audience.
One example is a touch-screen exhibit called “Seasons of Plankton”, which shows how plankton species in the Bay change over the year, reflecting a diverse ecosystem of microscopic organisms. I collaborated with scientists and visitor evaluators to determine the optimal way to tell this story. I performed all of the coding work and media production for this successful piece.
While at Autodesk, my focus has been creating 3D data visualizations with my custom code that transforms public data sets into “Data Crystals” (these are the submitted images). This exploration favors aesthetics over legibility. I hope to build upon this work and create physical forms, which help people see the dynamics of a complex urban water system to invite curiosity through beauty.
Creative Code Fellowship: Water Works Proposal (Scott Kildall, 2014-07-03)
@SelfiesBot began tweeting last week and already the results have surprised me.
Selfies Bot is a portable sculpture which takes selfies and then tweets the images. With custom electronics and a long arm that holds a camera that points at itself, it is a portable art object that can travel to parks, the beach and to different cities.
I quickly learned that people want to pose with it, even in my early versions with a cardboard head (used to prove that the software works).
Last week, in an evening of experimentation, I added a text component, where each Twitter pic gets accompanied by text that I scrape from Tweets with the #selfie hashtag.
This produces delightful results, like spinning a roulette wheel: you don’t know what the text will be until the Twitter website publishes the tweet. The text + image gives an entirely new dimension to the project. The textual element acts as a mirror into the phenomenon of the self-portrait, reflecting the larger culture of the #selfie.
This first one captures the population of all the cities in the world. After some internet sleuthing, I found a comprehensive .csv file of all of the cities by lat/long and their population, and I worked on mapping the 30,000 or so data points into 3D space.
I rewrote my Data Crystal Generation program to translate the lat/long values into a sphere of world data points. I had to rotate the cubes to make them appear tangential to the globe. This forced me to re-learn high school trig functions, argh!
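The trig in question is the standard spherical-to-Cartesian conversion; here is a minimal sketch (the cube rotation that makes each cube sit tangential to the globe builds on these same angles):

```cpp
#include <cmath>

struct Pt { double x, y, z; };

// Place a city on a globe: latitude and longitude in degrees,
// radius in arbitrary model units. Standard spherical-to-Cartesian
// conversion -- the high-school trig the cube rotation also uses.
Pt latLonToSphere(double latDeg, double lonDeg, double r) {
    const double kPi = 3.14159265358979323846;
    double lat = latDeg * kPi / 180.0;
    double lon = lonDeg * kPi / 180.0;
    return {r * std::cos(lat) * std::cos(lon),
            r * std::cos(lat) * std::sin(lon),
            r * std::sin(lat)};
}
```

Each returned point becomes a cube center; scaling the cube edge by the city’s area then makes the bigger cities read as larger blocks.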
What I like about the way this looks is that the negative space invites the viewer into the 3D mapping. The Sahara Desert is empty, just like the Atlantic Ocean. Italy has no negative space. There are no national boundaries or geographical features, just cubes and cities.
I sized each city by area, so that the bigger cities are represented as larger cubes. Here is the largest city in the world, Tokyo
This is the clustering algorithm in action. Running it realtime in Processing takes several hours. This is what the video would look like if I were using C++ instead of Java.
I’m happy with the clustered Data Crystal. The hole in the middle of it is a result of the gap in data created by the Pacific Ocean.
The next Data Crystal maps all of the world’s airports. I learned that the United States has about 20,000 airports. Most of these are small, unpaved runways. I still don’t know why.
Here is a closeup of the US, askew with Florida in the upper-left corner.
I performed similar clustering functions and ended up with this Data Crystal, which vaguely resembles an airplane.
The last dataset, which is not pictured because my camera ran out of batteries and my charger was at home, represents all of the nuclear detonations in the world.
I’ll have better pictures of these crystals in the next week or so. Stay tuned.
Scott Kildall | 2014-05-01 | World Data Crystals
This plaque in Pacific Grove, California, is the IEEE Milestone honoring my dad’s computer work in the 1970s. He was a true inventor and laid the foundation for the personal computer architecture that we now take for granted.
Gary Kildall’s is the 139th IEEE Milestone. These awards honor the key historical achievements in electrical and electronic engineering that have changed the world, and include the invention of the battery by Volta, Marconi’s work with wireless telegraphy, and the invention of the transistor.
More pictures plus a short write-up of the ceremony can be found here: http://bit.ly/1io2wFH
The dedication event was emotional and powerful, with several of my father’s close colleagues from decades ago gathered to recount his contributions. I knew most of the stories and his work, but there were several aspects of his methodology that I had never heard before.
For example, I learned that my dad was not only a software programmer, but a systems architect, and would spend days diagramming data structures and logic trees on sheets of paper, using a door blank on sawhorses as his work table.
After fastidious corrections, and days poring over the designs, he would embark on programming binges to code what he had designed. And the final program would often work flawlessly on the first run.
With a PhD from the University of Washington, lots of hard work, incredible focus on long-term solutions, plus extraordinary talent, Gary created a vision of how to bring the personal computer to the desks of millions of users, and shared his enthusiasm with just about everyone he met.
My dad turned his passion into two key products: CP/M (the operating system), and BIOS (the firmware interface that lets different hardware devices talk to the same operating system). From this combination, people could, for the first time, load the same operating system onto any home computer.
The IEEE and David Laws from the Computer History Museum did a tremendous job of pulling in an amazing contingent of computer industry pioneers from the early days of personal computing to commemorate this occasion.
At the dedication, my sister Kristin and I had a chance to reconnect with many former Digital Research employees, and I think everyone felt a sense of happiness, relief, catharsis, and dare I say, closure for my dad’s work, which has often been overlooked by the popular press since his premature death in 1994, right in the middle of his career.
My mother, Dorothy McEwen, ran Digital Research as its business manager, to complement my dad the inventor. Together they changed computer history. It was here in Pacific Grove, in 1974, that Gary Kildall loaded CP/M from a tape drive onto a floppy disk and booted it up for the first time: the birth of the personal computer.
If you find yourself in Pacific Grove, take a visit to 801 Lighthouse Avenue, Digital Research headquarters in the 1970s, and you can see this milestone for yourself.
Scott Kildall | 2014-04-27 | IEEE Milestone for my dad, Gary Kildall
What I’m designing is a PCB that goes to your Raspberry Pi cobbler breakout with some basic components: switches, LEDs, and potentiometers. I’m getting some great help from one of the 123D Circuits team members. It’s going to build on some of my Raspberry Pi Instructables, as well as be a critical component in my upcoming Bot Collective project.
Here’s the preliminary circuit diagram…see anything wrong?
Fritzing, the other viable competitor — as much as an open source program can be considered so — has a snappier grid system and is faster. It is after all, a desktop application and doesn’t have the odd performance issues in a browser app.
However, 123D Circuits has a community. Bah, a community, why is this important? (see below)
What won me over to 123D Circuits…besides the fact that I know some of the people who work on the product: the MCP3008 chip. I need this chip for the Raspberry Pi ADC do-all circuit that I’m building.
123D Circuits has it. Fritzing doesn’t. That’s because someone out there made the chip and now I’m using it. 123D FTW.
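For reference, talking to the MCP3008 from a Raspberry Pi boils down to a 3-byte SPI exchange. Here is a sketch of the framing, following the datasheet's single-ended mode (in practice a library such as spidev handles the actual bus transfer; this only shows the byte layout):

```python
def mcp3008_command(channel):
    """Build the 3-byte SPI transfer for a single-ended read of `channel` (0-7):
    a start bit, then the SGL/DIFF bit and channel number, then a padding byte."""
    assert 0 <= channel <= 7
    return [0x01, (0x08 | channel) << 4, 0x00]

def mcp3008_decode(response):
    """Extract the 10-bit ADC value (0-1023) from the 3 bytes clocked back."""
    return ((response[1] & 0x03) << 8) | response[2]
```

With spidev, the usage would be roughly `mcp3008_decode(spi.xfer2(mcp3008_command(0)))` to read channel 0.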
Scott Kildall | 2014-04-11 | Getting into 123D Circuits
My first three Data Crystals are finished! I “mined” these from the San Francisco Open Data portal. My custom software culls through the data and clusters it into a 3D-printable form.
Each one involves different clustering algorithms. All of these start with geo-located data (x,y) with either time/space on the z-axis.
Here they are! And I’d love to do more (though a lot of work was involved)
Incidents of Crime: This shows the crime incidents in San Francisco over a 3-month period, with over 35,000 data points (the crystal took about 5 hours to “mine”). Each incident is a single cube. Less serious crimes such as drug possession are represented as small cubes, and more severe crimes such as kidnapping as larger ones. It turns out that crime happens everywhere, which is why this is a densely-packed shape.
Construction Permits: This shows the current development pipeline, i.e. the construction permits in San Francisco. Work that affects just a single unit appears as smaller cubes, and larger cubes correspond to larger developments. The upper left side of the crystal is the south side of the city — there is a lot of activity in the Mission and Excelsior districts, as you would expect. The arm on the upper right is West Portal. The nose towards the bottom is some skyscraper construction downtown.
Civic Art Collection: This Data Crystal is generated from the San Francisco Civic Art Collection. Each cube is the same size, since it doesn’t feel right to make one art piece larger than another. The high top is City Hall, and the part extending below is some of the spaces downtown. The tail on the end is the artwork at San Francisco Airport.
Scott Kildall | 2014-04-01 | First three Data Crystals
I finished three final prints of my Data Crystals project over the weekend. They look great and tomorrow I’m taking official documentation pictures.
These are what they look like in the support material, which is also beautiful in its ghostly, womb-like feel.
I’ve posted photos of these before, but I’m still stunned at how amazing they look.
Scott Kildall | 2014-04-01 | Support material is beautiful
Below is a list of the crime classifications, extracted from the crime reports from the San Francisco Open Data Portal. This is part of my “data mining” work with the 3D-printed Data Crystals.
ARSON
ASSAULT
BAD CHECKS
BRIBERY
BURGLARY
DISORDERLY CONDUCT
DRIVING UNDER THE INFLUENCE
DRUG/NARCOTIC
DRUNKENNESS
EMBEZZLEMENT
EXTORTION
FAMILY OFFENSES
FORGERY/COUNTERFEITING
FRAUD
GAMBLING
KIDNAPPING
LARCENY/THEFT
LIQUOR LAWS
LOITERING
MISSING PERSON
NON-CRIMINAL
OTHER OFFENSES
PORNOGRAPHY/OBSCENE MAT
PROSTITUTION
RECOVERED VEHICLE
ROBBERY
RUNAWAY
SEX OFFENSES, FORCIBLE
SEX OFFENSES, NON FORCIBLE
STOLEN PROPERTY
SUICIDE
SUSPICIOUS OCC
TRESPASS
VANDALISM
VEHICLE THEFT
WARRANTS
WEAPON LAWS
Scott Kildall | 2014-03-22 | Crime Classifications in San Francisco
I’ve had the Neurosky Mindwave headset in a box for over a year and just dove into it, as part of my ongoing Data Crystals research at Autodesk. The device is the technology backbone behind the project: EEG AR with John Craig Freeman (still working on funding).
The headset fits comfortably. Its space-age retro look is aesthetically pleasing, though I’d cover up the logo in a final art project. The gray arm rests on your forehead and reads your EEG levels, translating them into several values. The most useful are “attention” and “meditation”, which are calculations derived from a few different brainwave patterns.
I’ve written custom software in Java, using the Processing libraries and ModelBuilder to generate 3D models in real-time from the headset. But after copious user-testing, I found out that the effective sample rate of the headset was 1 sample/second.* Ugh.
This isn’t the first time I’ve used the Neurosky set. In 2010, I developed an art piece, a portable personality kit called “After Thought”. That piece, however, relied on slow activity and was more like a tarot card reading, where the headset readings were secondary to the performance.
The general idea for the Data Crystals is to translate data into 3D prints. I’ve worked with data from San Francisco’s Open Data Portal. However, the idea of generating realtime 3D models from biometric data is hard to resist.
This is one of my first crystals — just a small sample of 200 readings. The black jagged squares represent “attention” and the white cubes correspond to “meditation”.
Back to the sample rate…a real-time reading of 600 samples would take 10 minutes. Still, it’s great to be able to do real-time, so I imagine a dark room and a beanbag chair where you think about your day and then generate the prints.
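The model-generation loop then reduces to something like the sketch below, assuming one (attention, meditation) pair per second. The function and field names are illustrative, not from the actual Processing code:

```python
def readings_to_cubes(readings, spacing=1.0):
    """Turn a sequence of (attention, meditation) readings (each 0-100)
    into simple cube placements along a time axis: two cubes per sample,
    sized by signal strength."""
    cubes = []
    for t, (attention, meditation) in enumerate(readings):
        # one black "attention" cube and one white "meditation" cube per second
        cubes.append({"t": t * spacing, "kind": "attention", "size": attention / 100.0})
        cubes.append({"t": t * spacing, "kind": "meditation", "size": meditation / 100.0})
    return cubes
```

At 1 sample/second, a 10-minute session yields 600 readings, i.e. 1200 cubes, which is a comfortable size for a print.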
Here’s what the software looks like. This is a video of my own EEG readings (recorded then replayed back at a faster rate).
And another view of the 3D print sample:
What I like about this 3D print is the mixing of the two digital materials, where the black triangles intersect with the white squares. I still have quite a bit of refinement work to do on this piece.
Now, the challenge is what kind of environment to create for a 10-minute “3D Recording Session”. Many colleagues immediately suggest sexual arousal and drugs, which is funny, but which I want to avoid. One thing I learned at the Exploratorium was how to appeal to a wide audience, i.e. a more family-friendly one. This way, you can talk to anyone about the work you’re doing instead of a select audience.
Some thoughts: just after crossing the line in an extreme mountain bike race, right after waking up in the morning, after drinking a pot of coffee (our workplace drug-of-choice), or while soaking in the hot tub!
* The website advertises a “512Hz sampling rate – 1Hz eSense calculation rate.” Various blog posts indicate that the raw values often get repeated, meaning that the effective rate is super-slow.
Scott Kildall | 2014-03-20 | EEG Data Crystals
While monkeying around with the Raspberry Pi and the camera and the GPIO, I took this selfie. I guess the camera was upside down!
The Raspberry Pi is pretty great overall. The real bugaboo is the wifi and networking capabilities. You still have to specify these settings manually.
But the cost, only $40! I have 4 of them running now, all doing different tasks. Perfect for my upcoming Bot Collective project (lots and lots of Twitterbots)
7/10 stars.
Scott Kildall | 2014-03-19 | Accidental Raspberry Pi Selfie
I’m still recovering from my broken collarbone (surgery was on Wednesday). Today I’m definitely feeling ouchy and tender. I get pretty wiped out walking around outside with the jostling movement, so have been staying home a lot.
To keep myself busy, I’ve been working on a backlog of Instructables for my residency at Autodesk.
As resident artists at Autodesk, we are supposed to write many Instructables. Often, the temptation is to make your projects and then write the how-to guides in haste.
Since I broke my collarbone, I really can’t make anything physical, but I can type one-handed. Besides the daily naps and the doctors’ appointments, and slowly doing one-handed chores like sorting laundry, I have to keep my mind active (I’m still too vulnerable to go outside on my own).
Here is a new one: an Introduction to Git and GitHub. I originally found this source-control system to be weird and confusing, but now I’m 100% down with it. Feel free to add comments on the guide, as I’m a relative Git/GitHub nOOb and also have a thick skin for scathing Linux criticism.
And here is my post-surgery selfie from yesterday, where they put the pins in my collarbone. The doctors told me it went well. All I know is that I woke up feeling groggy with extra bandages on my shoulder. That’s how easy it is these days.
I had a bicycle accident on Sunday during a group ride (no cars were involved) and I smacked the pavement hard enough to break my collarbone. Ouch!
The upshot is no fabrication work for at least 4 weeks. This will change my time as a resident artist at Autodesk, as I was in the middle of an intense period of time there. I’m not sure just yet how this will play out.
Everyone has been telling me to rest up, but I have a hard time sitting still. I expect to be doing some research, reading and a bit of one-handed coding + blogging, plus plenty of sleeping. Fortunately, it was my left collarbone and I’m a righty. It is already easier than the other way around — I broke my right collarbone 4 years ago and having a clumsy one hand is so much harder.
A shot of morphine in the ER put a smile on my face. Now, I’m trying to stay in good spirits without the drugs!
Scott Kildall | 2014-03-11 | No fabrication work for a while
I’ve fallen a bit behind in my documentation and have a backlog of great stuff that I’ve been 3D-printing. These are a few of my early tests with my new project: Data Crystals. I am using various data sources, which I algorithmically transform into 3D sculptures.
The source for these is the San Francisco Open Data Portal — which provides datasets about all sorts of interesting things such as housing permit data, locations of parking meters and more.
My custom algorithms transform this data into 3D sculptures. Legibility is still an issue, but initial tests show the wonderful work that algorithms can do.
This is a transformation of San Francisco Crime Data. It turns out that crime happens everywhere, so the data is in a giant block.
After running some crude data transformations, I “mined” this crystal: the location of San Francisco public art. Most public art is located in the downtown and city hall area. But there is a tail, which represents the San Francisco Airport.
More experiments: this is a test, based on the SF public art, where I played with varying the size of the cubes (this would be a suggested value of artwork, which I don’t have data for…yet). Now, I have a 4th axis for the data. Plus, there is a distinct aesthetic appeal of stacking differently-sized blocks as opposed to uniform ones.
Stay tuned, there is more to come!
Scott Kildall | 2014-03-06 | 3D Data Viz & SF Open Data
I arrived at 9am and introduced myself to Casey Reas, co-founder of Processing, who was leading the hackathon and a super-nice guy. When I was working as a New Media Exhibit Developer at the Exploratorium (2012-13), Processing was the primary tool we used for building installations. Thanks Casey!
I arrived alone and expected a bunch of nerdy 20-somethings. Instead, I ran into some old friends, including Karen Marcelo, who has been generously running dorkbot for 15+ years and has an SRL email address. (coolPoints *= coolPoints)
And, I shouldn’t have been surprised, but Eric Socolofsky, whom I worked directly with at the Exploratorium was also present. He is a heavy-hitter in terms of code and data-viz and taught me how to get the Processing libs running in Java, which makes hacking much much easier.
I sat down at a table with Karen and invited Eric over. Also sitting with us were Jesse Day, a graduate student in Learning, Design and Technology at Stanford and Kristin Henry, artist and computer scientist. The 5 of us were soon to become a team — Team JEKKS…get it?
The folks from GAFFTA (Josette Melchor), swissnex and BCNM took turns presenting slides about possibilities for data canvas projects for 30 minutes. This was followed by another 30 minutes of questions from a curious crowd of 60 people, which meant a lot to ingest.
The night before, we were given a dataset in a .csv format. I’d recommend never, ever looking at datasets just before going to sleep. I dreamt of strings, ints and timestamps.
The data included four Market Street locations, which tracked people, cars, trucks and buses for every minute of time. There was a lot of material there. How did they track this? Answer: air quality sensors. That’s right, small dips in various emissions could give us minute-by-minute extrapolations of what kind of traffic was happening at each location. This is an amazing model — though I still wonder about its accuracy.
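Wrangling a dataset like that is mostly aggregation. Here's a sketch of the kind of per-location totals we were pulling out, with hypothetical column names since the exact schema isn't reproduced here:

```python
import csv
import io

def totals_by_location(csv_text):
    """Sum per-minute counts for each location.
    Assumes columns: timestamp, location, people, cars, trucks, buses."""
    totals = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        loc = totals.setdefault(row["location"],
                                {"people": 0, "cars": 0, "trucks": 0, "buses": 0})
        for key in loc:
            loc[key] += int(row[key])
    return totals
```

Kristin's actual transcoding was done in Python as well, though against the real dataset rather than this toy schema.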
This was a competition and as such, we would be judged on three criteria. Audience Engagement: Would a general audience be attracted to the installation? Would they stop and watch/interact?
Legibility of Data: Can people understand the data and make sense of the specifics?
Actionability: Are people spurred to action, presumably to change their mode of transport to reduce emissions?
At 10:30, we started. I don’t have any pictures of us working. They’re pretty much exactly what you’d imagine — a bunch of dorks huddled around a table with laptops.
After introducing ourselves and talking about our individual strengths, it was apparent we had a strong group of thinkers. We tossed around various ideas for about 30 minutes and then decided to do individual experiments for about an hour.
We decided to focus our data investigation on time rather than location. The 4 locations would somehow be on the same timeline for visitors to see. Kristin dove into Python and began transcoding the data sets into a more usable format. She translated them into graphics.
I played around with a hand-drawn aesthetic, tracing over a map of the downtown area by hand and drawing individual points, angling for something a little more low-tech. I also knew that Eric would devise something precise, neat and clean, so left him with the hard-viz duties.
Karen worked on her own to come up with some circular representations in Processing. As with everyone in a hackathon, people work with the strong toolsets they already have.
Jesse was the only one of us who didn’t start coding right away. Smart man. He was also the one with the conceptual breakthrough, and began coloring bars on the vehicles themselves to represent emissions.
We huddled and decided to focus on representing the emissions as a series of colors. We settled on representing particulates, VOC (body odor), CO, CO2 and EMF (phones, electricity), not sure at the time if they were actually being tracked by the sensors.
More coding. Eric and I tapped into our collective exhibition design/art experience and talked through a compelling interaction model. The two things that people universally enjoy are to see themselves and to control timelines. Everyone liked the idea of “seeing yourself” as particulate emissions.
We all hashed out an idea of a 2-monitor installation and consulted with Casey about whether this was permissible (answer = yes). The first would be a real-time data visualization of the various stations. The other monitor would be a mirror which — get this — would do live video-tracking and map graphics of buses, cars, trucks and people onto corresponding moving bits in the background. Additionally, you could see yourself in the background.
Since it was a hackathon-style proposal, it doesn’t have to actually work. Beauty, eh?
2:30pm. 4 hours to make it happen. The rules were: laptops closed at 6:30 and then we all present as a group.
Jesse did the design work. We argued about colors: “too 70s”, “too saturated”, etc. Eric worked on the arduous task of getting the data into a legible data visualization. I worked on the animation, which involved no data translation.
I reused animation code that I’ve used in the Player Two rotoscoping project and for the Tweets in Space video installation. The next few hours were fast-n-furious and not especially “fun”. Eric was down to the wire with the data translation into graphics. At 5:30, I was busy making animated bus, car and truck exhaust farts, which made us all laugh. At 6:30 we were done.
We had two visualizations to show the crowd. Eric’s came out perfectly and was precise and legible. I was thankful that I roped him into our team. (note: video sped up by 4x).
The animation I wrote supplemented the visualization well. It was scrappy and funny, and we knew it would make people in the audience laugh.
Neither Karen nor Kristin was able to make it to our presentation, so only the boys were represented in the pictures.
We were due up towards the end and so had a chance to watch the others before us. Almost everyone else had slide shows (oops!). There were so many ideas, both crazy and conventional, floating around. I can’t remember all of them — it’s like reading a book of short stories where you can only recall a handful.
I did notice a few things: a lot of the younger folks had a design-approach to making the visualizations, starting with well-illustrated concept slides. A few didn’t have any code and just the slides (to their credit, I think the Processing environment wasn’t familiar to everyone). One group made a website with a top level domain (!), one worked in the Unity game engine, there were many web-based implementations, one piece was a sound-art piece (low points for legibility, but high for artistic merit) and one had a zombie game. Some presentations were muddled and others were clear.
We gave a solid presentation, led by Jesse, which we called “Particulate Matters” (ba-dum-bum). We started with the “hard” data visualization and ended with the animation, which got a lot of laughs. I felt solid about our work.
The judging took a while. Fortunately, they provided beer! The results were in and we got 2nd place (woo-hoo!) out of about 14 teams. 1st place deserved it — a clean concept, which included accumulated particle emissions with Processing code showing emission-shapes dropping from the sky and accumulating on piles on the ground. The shapes matched the data. Nice work.
We got lots of chocolate as our prize. Yummy!
It turns out that Karen is the geekiest of all of us and in the days after the hackathon, improved her Processing sketch to come up with this cool-looking visualization.
Scott Kildall | 2014-02-27 | Urban Data Challenge
This single purchase seems to have glitched my Amazon preferences. As a straight, white male, I now get recommendations that contradict my “personality profile”. Check these out:
Onto the text itself: I found myself fascinated by Rodriguez’s textual interactions and queer latina identity, especially since her world of net.interaction happened in a pre-Facebook world with IRC chat rooms (really not that long ago…)
My favorite passage in the book is this one:
Digital discourses, those virtual exchanges we glimpse on the Net, are textual performances: fleeting, transient, ephemeral, already past. Like the text of a play, they leave a trace to which meaning can be assigned, but these traces are haunted by the absence that was the performance itself, its reception, and its emotive power. To write about these online performances already alters their significance; a shift in temporal and spatial context produces a shift in meaning.
I remember the textual performances (as Second Front) we did in Second Life such as “Breaking News” (also not that long ago). The “playbook” for this performance was simply: we go into the Reuters headquarters and use the chat window to shout headlines such as: BREAKING NEWS: AVATARS IN REUTERS NEED ATTENTION!
But now, the performance only exists in writing, and absurd documentation videos like this:
I’m resuming some of the 3D printing work this week for my ongoing 3D data visualization research (a.k.a. Data Crystals). Here are four small tests in the “embryonic” state.
Step 1 in cleaning is the arduous process of using dental tools to pick away the support material.
I have four “crystals” — two constructed from a translucent resin material and two from a more rubbery black material.
And the finished product! The Tango Black (that’s the material) below. I’m not so happy with how this feels: soft and bendy.
And the Vero Clear — which has an aesthetic appeal to it, and is a hard resin that resembles ice. Remember the ICE (Intrusion Countermeasure Electronics) in Neuromancer? This is one source of inspiration.
Scott Kildall | 2014-02-21 | Materiality in 3D Prints
* I chose the name “Lenen” to avoid confusion. Lenonbot and Lenninbot look like misspellings of Lennon and Lenin, respectively. Lenen is its own bot.
Scott Kildall | 2014-02-20 | Welcome to the Party: @lenenbot
I’ve been working on a Digital Fabrication Technique for building precise 3D-faceted forms. I ended up making an armature, which is close to a good solution, but still has too much play in the joints.
One of the other resident artists at Autodesk suggested a solution where I make wooden squares to solidify the joints in the armature. I cut out a variety of squares, each with a slightly different width and height, to account for the kerf of the laser-cutter. I also laser-etched them with their measurements. You can see here where I cut out a groove in the bottom of the armature, 1/8″ deep. The square fits nicely in there. I found that 25/1000″ seems to be the right amount of compensation.
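The kerf math itself is trivial; the trick is finding the right constant. A sketch, using the 25/1000″ figure that worked for me (function name and the symmetric split of the compensation are my own simplification):

```python
def kerf_compensated(target_width, target_height, kerf=0.025):
    """Oversize an outline so that after the laser burns away `kerf` inches
    of material across the cut path, the part comes out at the target size."""
    return (target_width + kerf, target_height + kerf)
```

In practice you iterate: cut a few test squares at slightly different compensations, etch each with its measurements (as above), and keep the one that fits snugly.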
I also added squares for the top joints. Using the brad nailer, I adhered the bottom squares to the armature.
Then the top squares, and then the bottom panel of the structure. I built up the structure quickly. The precision of the armature made it easy to align the wood-paneled faces.
This is what it looks like before I put the last panel on.
After the “Digital Fabrication Fail” based on my self-defined Fabrication Challenge, I’ve gotten closer to a more precise solution. After an evening of frustration, while riding my bike home, I realized that an armature for the 3D sculptures would be the solution.
I designed a quick-and-dirty armature in Sketchup (I know, I know…) and exported the faces to Illustrator with an SVG exporter. I then laser-cut the armature pieces and put them together.
I made a few mistakes at first, but after a few tries got these three pieces to easily fit together.
However, even with accounting for the kerf, there is still a lot of play in the structure. You can’t see it in the images, but I can easily wiggle the pieces back and forth.
If I model the tolerances too tightly, then I can’t slide the inner portions of the armature together. It is certainly an improvement, but I’m looking for something that has more precision and is still easy to assemble.
I’m working on some simple tests for my Faceted Forms Fabrication Challenge. I started with this model, which has 10 faces and is relatively simple.
Then, I laser-cut these pieces from a 1/8″ sheet of wood.
And, I also cut out these joints.
Then, using the brad nail gun and glue, I began with the base and built up the structure using the joints for support. The first level, with the rectangular base, went well. However, when I started assembling the trapezoid sections, I quickly ran into problems. The nail gun pushed the joint blocks away from the wood, and it was difficult to align the joint pieces correctly. I had to redo sections over again. Although this photo doesn’t entirely capture the first-try failure, you can see the nail holes everywhere and also the gap between the joints. I threw in the towel pretty quickly and went home to sleep on the project, and hopefully I will come up with a better solution.
The fabrication challenge for some of my new sculptures is to devise a way to transform models in 3D screen-space into faceted, painted wood forms. The faceted look is something I first experimented with in papercraft sculptures for the No Matter (2008) project, a collaboration with Victoria Scott.
The problem was getting the weird angles to be exact. I don’t have strong woodworking skills and ended up spending a lot of time with Bondo, fixing my mistakes. I’d like to be able to make these on the laser-cutter: no saws, no sanding, and have them look perfect. Stay tuned.
Scott Kildall | 2014-02-15 | Fabrication Challenge — Faceted Forms
Life of Poo
by Scott Kildall

I’ve been blogging about my Water Works project all summer and after the Creative Code Gray Area presentation on September 10th, the project is done. Phew. Except for some of the residual documentation.
In the hours just before I finished my presentation, I also managed to get Life of Poo working. What is it? Well, an interactive map of where your poo goes based on the sewer data that I used for this project.
Huh? Try it.
This is the final piece of my web-mapping portion of Water Works and uses Leaflet with animated markers, all in Javascript, which is a new coding tool in my arsenal (I know, late to the party). I learned the basics in the Gray Area Creative Code Immersive class, which was provided as part of the fellowship.
The folks at Stamen Design also helped out and their designer-technicians turned me onto Leaflet as I bumbled my way through Javascript.
How does it work?
On the Life of Poo section of the Water Works website, you enter an address (in San Francisco) such as “Twin Peaks, SF” or “47th & Judah, SF” and then press Flush Toilet.
This will begin an animated poo journey down the sewer map and to the wastewater treatment plant.
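The site does the routing in JavaScript with Leaflet, but the underlying walk is just following downstream links until there's nowhere left to go. A Python sketch of the idea (node names hypothetical, with a loop guard the real code would also want):

```python
def poo_journey(start, downstream):
    """Follow downstream links from a start node until reaching a node with
    no outflow (ideally the treatment plant), guarding against cycles."""
    path = [start]
    seen = {start}
    while path[-1] in downstream and downstream[path[-1]] not in seen:
        nxt = downstream[path[-1]]
        path.append(nxt)
        seen.add(nxt)
    return path
```

The animated marker then just hops along the returned path. A node whose downstream link is missing or wrong is exactly where the poo "just sits there".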
Not all of the flushes work as you’d expect. There are still glitches and bugs in the code. If you type in “16th & Mission”, the poo just sits there. Hmmm. Why do I have these bugs? I have some ideas (see below) but I really like the chaotic results, so I will keep them for now.
I think the erratic behavior comes from a utility I wrote, which does some complex node-trimming and doesn’t take gravity into account in its flow diagrams. The sewer data has about 30,000 valid data points, but Leaflet can only handle about 1,500 or so before loading and refreshing take forever.
The utility parses the node data tree and recursively prunes it down to a more reasonable number, combining upstream and downstream nodes. In an overflow situation, technically speaking, there are nodes where waste might be directed away from the wastewater treatment plant.
However, my code isn’t smart enough to determine which are overflow pipes and which are pipes to the treatment plants, so the node-flow doesn’t work properly.
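The core of the pruning pass is simple to sketch: recursively collapse chains of pass-through nodes so that long runs of pipe become a single hop, shrinking 30,000 nodes toward something Leaflet can draw. This is a simplified illustration of the idea, not the actual utility, and like the real one it knows nothing about gravity or which branch is an overflow pipe:

```javascript
// A node is { id, children: [...] }. After pruning each subtree,
// any child that is itself a pass-through (exactly one child of
// its own) is replaced by its single child, collapsing the chain.
function prune(node) {
  node.children = node.children
    .map(prune)
    .map(child => (child.children.length === 1 ? child.children[0] : child));
  return node;
}
```

Running prune on a chain a → b → c → d leaves just a → d, with the intermediate pass-through nodes merged away.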
In case you’re still reading, here’s an illustration of a typical combined sewer system that shows how the pipes might look. Sewer outfall doesn’t happen very often, but when your model ignores gravity, it sure will.
The 3D print of the sewer, the one that uses the exact same dataset as Life of Poo, looks like this.
EquityBot @ Impakt
/by Scott Kildall
My exciting news is that this fall I will be an artist-in-residence at Impakt Works in Utrecht, the Netherlands. The same organization puts on the annual Impakt Festival, a media arts festival that has been running since 1988. My residency is from Sept 15 to Nov 15 and coincides with the festival at the end of October.
Utrecht is a 30 minute train ride from Amsterdam and 45 minutes from Rotterdam and by all accounts is a small, beautiful canal city with medieval origins and also hosts the largest university in the Netherlands.
Of course, I’m thrilled. This is my first European art residency and I’ll have a chance to reconnect with some friends who live in the region as well as make many new connections.
Like many of my projects this year, this will involve heavy coding, data-visualization and a sculptural component.
At this point, I’m in the research and pre-production phase. While configuring back-end server code, I’m also gathering reading materials about capital and algorithms for the upcoming plane rides, train rides and rainy Dutch evenings.
Here is the project description:
EquityBot
EquityBot is a stock-trading algorithm that explores the connections between collective emotions on social media and financial speculation. Using custom algorithms, EquityBot correlates group sentiments expressed on Twitter with fluctuations in related stocks, distilling trends in worldwide moods into financial predictions which it then issues through its own Twitter feed. By re-inserting its results into the same social media system it draws upon, EquityBot elaborates on the ways in which digital networks can enchain complex systems of affect and decision-making to produce unpredictable and volatile feedback loops between human and non-human actors.
Currently, autonomous trading algorithms execute the large majority of stock trades. These analytic engines are normally sequestered by private investment companies operating with billions of dollars. EquityBot reworks this system, imagining what it might be like if this technological attention were directed toward the public good instead. How would the transparent, public sharing of powerful financial tools affect the way the stock market works for the average investor?
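The correlation step described above can be as simple as a Pearson coefficient between a sentiment series and a price series. Here is a hypothetical sketch, my illustration rather than EquityBot’s actual code:

```javascript
// Pearson correlation between two equal-length series, e.g. daily
// aggregate Twitter sentiment and a stock's daily percent change.
// Returns a value in [-1, 1]; values near +/-1 suggest the two
// series move together (or in opposition).
function pearson(xs, ys) {
  const n = xs.length;
  const mean = a => a.reduce((s, v) => s + v, 0) / n;
  const mx = mean(xs), my = mean(ys);
  let num = 0, dx = 0, dy = 0;
  for (let i = 0; i < n; i++) {
    num += (xs[i] - mx) * (ys[i] - my);
    dx += (xs[i] - mx) ** 2;
    dy += (ys[i] - my) ** 2;
  }
  return num / Math.sqrt(dx * dy);
}
```

A bot like this would compute the coefficient for each (sentiment topic, ticker) pair and tweet predictions for the strongest correlations.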
“Mang” lives in Berlin, which is a relatively short train ride, so I’m planning to make a trip where we can work together in person and get inspired by some of the German architecture.
My new 3D printer — a Printrbot Simple Metal — will accompany me to Europe. This small, relatively portable machine produces decent quality results, at least for 3D joints, which will be hidden anyways.
WaterWorks: From Code to 3D Print
/by Scott Kildall
In my ongoing Water Works project — a Creative Code Fellowship with Stamen Design, Gray Area and Autodesk — I’ve been working for many, many hours on code and data structures.
The immediate results were a Map of the San Francisco Cisterns and a Map of the “Imaginary Drinking Hydrants”.
However, I am also making 3D prints — fabricated sculptures, which I lay out in 3D space using OpenFrameworks and then 3D print.
The process has been arduous and I’ve learned a lot. I’m not sure I’d do it this way again, since I ended up having to write a lot of custom code to do things like triangle-winding for STL output and much, much more.
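For the curious, the triangle-winding chore comes down to this: ASCII STL expects each facet’s vertices wound counter-clockwise as seen from outside the solid, with a normal that agrees by the right-hand rule. A sketch in JavaScript for illustration (the actual tooling was custom C++):

```javascript
// Unit facet normal from three [x, y, z] vertices, right-hand rule:
// counter-clockwise winding (seen from outside) yields the
// outward-facing normal that STL expects.
function facetNormal(a, b, c) {
  const u = [b[0] - a[0], b[1] - a[1], b[2] - a[2]];
  const v = [c[0] - a[0], c[1] - a[1], c[2] - a[2]];
  const n = [
    u[1] * v[2] - u[2] * v[1],
    u[2] * v[0] - u[0] * v[2],
    u[0] * v[1] - u[1] * v[0],
  ];
  const len = Math.hypot(n[0], n[1], n[2]);
  return n.map(x => x / len);
}

// One facet record in the ASCII STL format.
function facetToStl(a, b, c) {
  const n = facetNormal(a, b, c);
  return [
    `facet normal ${n.join(' ')}`,
    '  outer loop',
    ...[a, b, c].map(v => `    vertex ${v.join(' ')}`),
    '  endloop',
    'endfacet',
  ].join('\n');
}
```

If a triangle is wound the wrong way, swapping any two of its vertices flips the normal, which is essentially what the custom winding code had to get right for every facet.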
Here is how it works. First, I create a model in Fusion 360 — an Autodesk application — which I’ve slowly been learning and have become fond of.
From various open datasets, I map out the geolocations of the hydrants or the cisterns in X,Y space. You can check out this Instructable on mapping the cisterns and this blog post on mapping the hydrants for more info. Using OpenFrameworks, an open-source toolkit in C++, I map these out in 3D space. The Z-axis is the elevation.
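The X,Y,Z mapping itself is just a linear rescale of lat/lon into the model’s footprint, with elevation (optionally exaggerated) as Z. A sketch in JavaScript for illustration (the project’s code is C++/OpenFrameworks, and the bounds and scale parameters here are hypothetical):

```javascript
// Linearly rescale a geolocated point into model space.
// bounds: { minLat, maxLat, minLon, maxLon } of the dataset;
// size: edge length of the model footprint; zScale exaggerates
// elevation so the hills read clearly in the print.
function geoToModel(lat, lon, elev, bounds, size, zScale) {
  const x = ((lon - bounds.minLon) / (bounds.maxLon - bounds.minLon)) * size;
  const y = ((lat - bounds.minLat) / (bounds.maxLat - bounds.minLat)) * size;
  return [x, y, elev * zScale];
}
```

Every hydrant or cistern record runs through this once, producing the 3D point cloud that the triangulation step then connects.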
The hydrants and cisterns are both disconnected entities in 3D space; they’d fall apart in a 3D print, so I use Delaunay triangulation code to connect the nodes into a single 3D shape.
My custom editor lets me: (1) specify which hydrants are “normal” hydrants and which ones have mounting holes in the bottom. The green ones have mounting holes, which become different STL files. I will insert 1/16″ stainless steel rod into the mounting holes and have the 3D prints “floating” above a piece of wood or some other material.
(2) my editor also lets you remove and strengthen each Delaunay triangulation node — the red one is the one currently selected. This is the final layout for the print, but you can imagine how criss-crossed and hectic the original one was.
Here is an exported STL in Meshlab. You can see the mounting holes at the bottom of some of the hydrants.

I ran many, many tests before the final 3D print.
And finally, I set up the print over the weekend. Here is the print 50 hours later.

It’s like I’m holding a birthday cake — I look so happy. This is at midnight last Sunday.
The cleaning itself is super-arduous.
And after my initial round of cleaning, this is what I have.
And here are the cistern prints.
I haven’t yet mounted these prints, but this will come soon. There’s still loads of cleaning to do.
SFPUC says Emergency Drinking Hydrants Discontinued
/by Scott Kildall
Last week, I posted an online map of the 67 Emergency Drinking Water Hydrants in San Francisco. It was covered by SFist and got a lot of retweets and attention.
I felt a semblance of pride in being a “citizen-mapper” and helping the public in case of a dire emergency. I wondered why these maps weren’t more public. I had located the emergency hydrant data from a couple of different places, but nowhere very visible.
Apparently, these hydrants are not for emergency use after all. Who knew? Nowhere could I find a place that said they were discontinued.
Last Friday, the SFPUC contacted SFist and issued this statement (forwarded to me by the reporter, Jay Barmann):
————————————
The biggest concern [about getting emergency water from hydrants] is public health and safety. First of all, tapping into a hydrant is dangerous as many are high pressure and can easily cause injury. Some are very high pressure! Second, even the blue water drop hydrants from our old program in 2006 (no longer active) can be contaminated after an earthquake due to back flow, crossed lines, etc. We absolutely do not want the public trying to open these hydrants and they could become sick from drinking the water. They could also tap a non-potable hydrant and become sick if they drink water for fire-fighting use. After an earthquake, we have water quality experts who will assess the safety of hydrants and water from the hydrants before providing it to the public.
AND of course, no way should ANYONE be opening hydrants except SFFD and SFWD; if people are messing with hydrants, this could de-pressurize the system when SFFD needs the water pressure to fight fires, and also will be a further distraction for emergency workers to monitor.
We are in the process of updating our emergency water program… We are also going to be training NERT teams to help assess water after an emergency.
————————————
Uh-oh. Jay wrote: “It had sounded like designer Scott Kildall, who had been mapping the hydrants, had done a fair amount of research, but apparently not.”
Was I lazy or over-excited? I don’t think so. I re-scoured the web; nowhere did I find a reference to the Blue Drop Hydrant Program being discontinued.
My references were these two PDFs (links may be changed by municipal agencies after this post):
PDF Map on the SFPUC website
Water Supplies Manual from the San Francisco Fire Department
** I have some questions **
(1) Since nowhere on the web could I find a reference to this program being discontinued, why are these maps still online? Why didn’t the SFPUC make a public announcement that this program was being discontinued? It makes me look bad as a Water Detective and Data Miner, but more importantly, there may have been other people relying on these hydrants. Perhaps.
(2) Why are there still blue drops painted on some of these hydrants? Shouldn’t the SFPUC have repainted all of the blue drop hydrants white to signal that they are no longer in use?
(3) Why did our city spend 1 million dollars several years ago (2006) to set up these emergency hydrants in the first place when they weren’t maintainable? The SFPUC statement says: “even the blue water drop hydrants…can be contaminated after an earthquake due to back flow, crossed lines, etc.”
Did something change between 2006 and 2014? Wouldn’t these lines have always been susceptible to backflow, crossed lines, etc. when this program was initiated? 1 million bucks is a lot of money!
(4) Finally, and the most pressing question: why don’t we have emergency drinking hydrants or some other centralized system?
I *love* the idea of people going to central spots in their neighborhood in case they don’t have access to drinking water. Yes, we should have emergency drinking water in our homes. But many people haven’t prepared. Or maybe your basement will collapse and your water will be unavailable. Or maybe you’ll be somewhere else: at work, at a restaurant, who knows?
Look, I’m a huge supporter of city government and want to celebrate the beautiful water infrastructure of San Francisco with my Water Works project, part of the Creative Code Fellowship with Stamen Design, Gray Area and Autodesk. The SFPUC does very good work. They are very drought-conscious and have great info on their website in general.
It’s unfortunate that these blue drop hydrants were discontinued.
It was a disheartening tale of urban planning. I wish the SFPUC had contacted me directly instead of the person who wrote the article. I’ll plan to update my map accordingly, perhaps stating that this is a historical map of sorts.
By the way, you can still see the blue drop hydrants on Street View:
And here’s the Facebook statement by SFPUC — hey, I’m glad they’re informing the public on this one!
Mapping Emergency Drinking Water Hydrants
/by Scott Kildall
Did you know that San Francisco has 67 fire hydrants that are designed for emergency drinking water in case of an earthquake-scale disaster? Neither did I. That’s because just about no one knows about these hydrants.
While scouring the web for cistern locations — as part of my Water Works project, which will map out the San Francisco water infrastructure and data-visualize the physical pipes and structures that keep the H2O moving in our city — I found this list.
I became curious.
I couldn’t find a map of these hydrants *anywhere* — except for an odd Foursquare map that linked to a defunct website.
I decided to map them myself, which was not terribly difficult to do.
Since Water Works is a project for the Creative Code Fellowship with Stamen Design, Gray Area and Autodesk and I’m collaborating with Stamen, mapping is essential for this project. I used Leaflet and Javascript. It’s crude but it works — the map does show the locations of the hydrants (click on the image to launch the map).
The map will get better, but at least this will show you where the nearest emergency drinking hydrant is to your home.
Apparently, these emergency hydrants were developed in 2006 as part of a 1 million dollar program. These hydrants are tied to some of the most reliable drinking water mains.
Yesterday, I paid a visit to three hydrants in my neighborhood. They’re supposed to be marked with blue drops, but only 1 out of the 3 was properly marked.
Hydrant #46: 16th and Bryant, no blue drop
Hydrant #53, Precita & Folsom, has a blue drop
Hydrant #51, 23rd & Treat, no blue drop, with decorative sticker
Editors note: I had previously talked about buying a fire hydrant wrench for a “just in case” scenario*. I’ve retracted this suggestion (by editing this blog entry).
I apologize for this suggestion: No, none of us should be opening hydrants, of course. And I’m not going to actually buy a hydrant wrench. Neither should you, unless you are SFFD, SFWD or otherwise authorized.
Oh yes, and I’m not the first to wonder about these hydrants. Check out this video from a few years ago.
* For the record, I never said that I would ever open a fire hydrant, just that I was planning to buy a fire hydrant wrench. One possible scenario is that I would hand my fire hydrant wrench to a qualified and authorized municipal employee, in case they were in need.
Modeling Cisterns
/by Scott Kildall
How do you construct a 3D model of something that lives underground and only exists in a handful of pictures taken from the interior? This was my task for the Cisterns of San Francisco last week.
The backstory: have you ever seen those brick circles in intersections and wondered what the heck they mean? I sure have.
It turns out that underneath each circle is an underground cistern. There are 170 or so* of them spread throughout the city. They’re part of the AWSS (Auxiliary Water Supply System) of San Francisco, a water system that exists entirely for emergency use.
The cisterns are just one aspect of my research for Water Works, which will map out the San Francisco water infrastructure and data-visualize the physical pipes and structures that keep the H2O moving in our city.
This project is part of my Creative Code Fellowship with Stamen Design, Gray Area and Autodesk.
Many others have written about the cisterns: Atlas Obscura, Untapped Cities, Found SF, and the cisterns even have their own Wikipedia page, albeit one that needs some edits.
The original cisterns, about 35 or so, located in the area between Telegraph Hill and Rincon Hill, were built in the 1850s after a series of great fires ravaged the city. In the next several decades they were largely unused, but the fire department kept them filled with water for a “just in case” scenario.
Meanwhile, in the late 19th century, as San Francisco rapidly developed into a large city, it began building a pressurized hydrant-based fire system, which was seen by many as a more effective way to deliver water in case of a fire. Many thought of the cisterns as antiquated and unnecessary.
However, when the 1906 earthquake hit, the SFFD was soon overwhelmed by a fire that tore through the city. The water mains collapsed. The old cisterns were one of the few sources of reliable water.
After the earthquake, the city passed bonds to begin construction of the AWSS — the separate water system just for fire emergencies. In addition to special pipes and hydrants fed from dedicated reservoirs, the city constructed about 140 more underground cisterns.
The cisterns are nodes disconnected from the network, with no pipes, and are maintained by the fire department, which presumably fills them every year. I’ve heard that some are incredibly leaky and others are watertight.
What do they look like inside? This is the *only* picture I can find anywhere and is of a cistern in the midst of seismic upgrade work. This one was built in 1910 and holds 75,000 gallons of water, the standard size for the cisterns. They are HUGE. As you can surmise from this picture, the water is not for drinking.
(Photographer: Robin Scheswohl; Title: Auxiliary Water supply system upgrade, San Francisco, USA)
Since we can’t see the outside of an underground cistern, I can only imagine what it might look like. My first sketch looked something like this.
Within a couple hours, I had some 3D prints ready. I printed out several sizes, scaling the height for various aesthetic tests.
This was my favorite one. It vaguely looks like a cooking pot or a tortilla canister, but not *very* much. Those three rectangular ridges, parked at 120-degree angles, give it an unusual form.
Now, it’s time to begin the more arduous project of mapping the cisterns themselves. And the tough part is still finishing the software that maps the cisterns into 3D space and exports them as an STL with some sort of binding support structure.
* I’ve only been able to locate 169 cisterns. Some reports state that there are 170; others say 173 or 177.
Data Miner, Water Detective
/by Scott Kildall
This summer, I’m working on a Creative Code Fellowship with Stamen Design, Gray Area and Autodesk. The project is called Water Works, which will map and data-visualize the San Francisco water infrastructure using 3D printing and the web.
Finding water data is harder than I thought. Like detective Gittes in the movie Chinatown, I’m poking my nose around and asking everyone about water. Instead of murder and slimy deals, I am scouring the internet and working with city government. I’ve spent many hours sleuthing and learning about the water system in our city.
In San Francisco, where this story takes place, we have three primary water systems. Here’s an overview:
The Sewer System is owned and operated by the SFPUC; the DPW provides certain engineering services. This is a combined stormwater and wastewater system. Yup, that’s right: the water you flush down the toilet goes into the same pipes as the rainwater. Everything gets piped to a state-of-the-art wastewater treatment plant. Amazingly, the sewer pipes are fed almost entirely by gravity, taking advantage of the natural landscape of the city.
The Auxiliary Water Supply System (AWSS) was built in 1908, just after the 1906 San Francisco Earthquake. It is an entire water system dedicated solely to firefighting. 80% of the city was destroyed not by the earthquake itself, but by the fires that ravaged the city — rampaging mostly because the water mains collapsed. Just afterwards, the city began construction on this separate infrastructure for combatting future fires. It consists of reservoirs that feed an entire network of pipes to high-pressure fire hydrants, and it also includes approximately 170 underground cisterns at various intersections in the city. This incredible separate water system is unique to San Francisco.
The Potable Water System, a.k.a. drinking water, is the water we get from our faucets and showers. It comes from Hetch Hetchy — a historic valley, but also a reservoir and water system constructed from 1913-1938 to provide water to San Francisco. This history is well-documented, but what I know little about is how the actual drinking water gets piped into San Francisco homes. Also, San Francisco water is among the safest in the world, so you can drink directly from your tap.
Given all of this, where is the story? This is the question that I asked folks at Stamen, Autodesk and Gray Area during a hyper-productive brainstorming session last week. Here’s the whiteboard with the notes. The takeaways, as folks call them, are below, and here I’m going to get nitty-gritty into process.
(whiteboard brainstorming session with Stamen)
(1) In my original proposal, I had envisioned a table-top version of the entire water infrastructure — pipes, cisterns, manhole chambers and reservoirs — as a large-scale sculpture, printed in panels. It was kindly pointed out to me by the Autodesk Creative Projects team that this is unfeasible. I quickly realized the truth of this: 3D prints are expensive, time-consuming to clean and fragile. Divide the sculptural part of the project into several small parts.
(2) People are interested in the sewer system. Someone said, “I want to know if you take a dump at Nob Hill, where does the poop go?” It’s universal. Everyone poops, even the Queen of England and even Batman. It’s funny, it’s gross, it’s entirely human. This could be accessible to everyone.
(3) Making visible the invisible or revealing what’s in plain sight. The cisterns in San Francisco are one example. Those brick circles that you see in various intersections are actually 75,000 gallon underground cisterns. Work on a couple of discrete urban mapping projects.
(4) Think about focusing on making a beautiful and informative 3D map / data-visualization of just 1 square mile of San Francisco infrastructure. Hone in on one area of the city.
(5) Complex systems can be modeled virtually. Over the last couple weeks, I’ve been running code tests, talking to many people in city government and building out an entire water modeling system in C++ using OpenFrameworks. It’s been slow, deliberate and arduous. Balance the physical models with a complex virtual one.
I’m still not sure exactly where this project is heading, which is to be expected at this stage. For now, I’m mining data and acting as a detective. In the meantime, here is the trailer for Chinatown, which gives away the entire plot in 3 minutes.
Mapping Manholes
/by Scott Kildall
The last week has been a flurry of coding, as I’m quickly creating a crude but customized data-to-3D-modeling application for Water Works — an art project for my Creative Code Fellowship with Stamen Design, Gray Area and Autodesk.
This project builds on my Data Crystals sculptures, which algorithmically transform various public datasets into 3D-printable art objects. For that artwork, I used Processing with the Modelbuilder libraries to generate STL files. It was a fairly easy coding solution, but I ran into performance issues along the way.
Processing tends to choke when managing 30,000 simple 3D cubes: my clustering algorithms took hours to run. Because it isn’t compiled into machine code and is instead interpreted, it has layers of inefficiency.
I bit the coding bullet and this week migrated my code to OpenFrameworks (an open source C++ environment). I’ve used OF before, but never with 3D work. There are still lots of gaps in the libraries, specifically the STL exporting, but I’ve had some initial success, woo-hoo!
Here are all the manholes, the technical term being “sewer nodes”, mapped into 3D space using GIS lat/lon and elevation coordinates. The clear indicator that this is San Francisco, and not Wisconsin (which this mapping vaguely resembles), is the swath of empty space that is Golden Gate Park.
What hooked me was the “a-ha” moment when the 3D points rendered properly on my screen. I was on a plane flight home from Seattle and involuntarily emitted an audible yelp. Check out the 3D mapping. There’s a density of nodes along Twin Peaks, and I accentuated the z-values to make San Francisco look even more hilly and to make the locations of the sewer chambers easier to read.
Sewer nodes are just the start. I don’t have the connecting pipes in there just yet, not to mention the cisterns and other goodies of the SF water infrastructure.
Creative Code Fellowship: Water Works Proposal
/by Scott Kildall
Along with 3 other new media artists and creative coding experts, I was recently selected to be a Creative Code Fellow for 2014 — a program pioneered by Gray Area (formerly referred to as GAFFTA and now in a new location in the Mission District).
Each of us is paired with a partnering studio, which provides a space and creative direction for our proposed project. The studio that I’m pleased to be working with is Stamen Design, a leader in the field of aesthetics, mapping and data-visualization.
I’ll be also continuing my residency work at Autodesk at Pier 9, which will be providing support for this project as well.
My proposed project is called “Water Works” — a 3D-printed data visualization of San Francisco’s water system infrastructure, along with some sort of web component.
Creative Code Fellowship Application Scott Kildall
Project Proposal (250 word limit)
My proposed project “Water Works” is a 3D data visualization of the complex network of pipes, aqueducts and cisterns that control the flow of water into our homes and out of our toilets. What lies beneath our feet is a unique combined wastewater system — where stormwater mixes with sewer lines and travels to a waste treatment plant, using gravitational energy from the San Francisco hills.
This dynamic flow is the circulatory system of the organism that is San Francisco. As we are impacted by climate change, which escalates drought and severe rainstorms, combined with population growth, how we obtain our water and dispose of it is critical to the lifeblood of this city.
Partnering with Autodesk, which will provide materials and shop support, I will write code, which will generate 3D prints from municipal GIS data. I imagine ghost-like underground 3D landscapes with thousands of threads of water — essentially flow data — interconnected to larger cisterns and aqueducts. The highly retinal work will invite viewers to explore the infrastructure the city provides. The end result might be panels that snap together on a tabletop for viewers to circumnavigate and explore.
The GIS data is available, though not online, from San Francisco and already I’ve obtained cooperation from SFDPW about providing some infrastructure data necessary to realize this project.
While my focus will be on the physical portion of this project, I will also build an interactive web-based version from the 3D data, making this a hybrid screen-physical project.
Why are you interested in participating in this fellowship? (150 word limit)
The fellowship would give me the funding, visibility and opportunity of working under the umbrella of two progressive organizations: Gray Area and Stamen Design. I would expand my knowledge, serve the community and increase my artistic potential by working with members of these two groups, both of which have a progressive vision for art and design in my longtime home of San Francisco.
Specifically, I wish to further integrate 3D printing into the data visualization conversation. With the expertise of Stamen, I hope to evolve my visualization work at Autodesk. The 3D-printing technology makes possible what has hitherto been impossible to create and has enormous possibilities to materialize the imaginary.
Additionally some of the immersive classes (HTML5, Javascript, Node.js) will be helpful in solidifying my web-programming skills so that I can produce the screen-based portion of this proposal.
What experience makes this a good fit for you? (150 word limit)
I have deep experience in producing both screen-based and physical data visualizations. While at the Exploratorium, I worked on many such exhibits for a general audience.
One example is a touch-screen exhibit called “Seasons of Plankton”, which shows how plankton species in the Bay change over the year, reflecting a diverse ecosystem of microscopic organisms. I collaborated with scientists and visitor evaluators to determine the optimal way to tell this story. I performed all of the coding work and media production for this successful piece.
While at Autodesk, my focus has been creating 3D data visualizations with my custom code that transforms public data sets into “Data Crystals” (these are the submitted images). This exploration favors aesthetics over legibility. I hope to build upon this work and create physical forms, which help people see the dynamics of a complex urban water system to invite curiosity through beauty.
@SelfiesBot: It’s Alive!!!
/by Scott Kildall
@SelfiesBot began tweeting last week and already the results have surprised me.
Selfies Bot is a portable sculpture that takes selfies and then tweets the images. With custom electronics and a long arm holding a camera that points back at itself, it is a portable art object that can travel to parks, the beach and different cities.
I quickly learned that people want to pose with it, even in my early versions with a cardboard head (used to prove that the software works).
Last week, in an evening of experimentation, I added a text component, where each Twitter pic gets accompanied by text that I scrape from tweets with the #selfie hashtag.
This produces delightful results, like spinning a roulette wheel: you don’t know what the text will be until the Twitter website publishes the tweet. The text + image gives an entirely new dimension to the project. The textual element acts as a mirror on the phenomenon of the self-portrait, reflecting the larger culture of the #selfie.
Produced while an artist-in-residence at Autodesk.
And this is the final version! Just done.
This is the “robot hand” that holds the camera on a 2-foot long gooseneck arm.
World Data Crystals
/by Scott Kildall
I just finished three more Data Crystals, produced during my residency at Autodesk. This set of three are data visualizations of world datasets.
This first one captures the populations of all the cities in the world. After some internet sleuthing, I found a comprehensive .csv file of all of the cities with their lat/long coordinates and populations, and I worked on mapping the 30,000 or so data points into 3D space.
I rewrote my Data Crystal generation program to translate the lat/long values into a sphere of world data points. I had to rotate the cubes to make them appear tangential to the globe, which forced me to re-learn high school trig functions, argh!
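For the record, the trig in question is the standard spherical-to-Cartesian conversion. A sketch (JavaScript here for illustration; the original was Processing/Java):

```javascript
// Convert latitude/longitude in degrees to a point on a sphere
// of radius r centered at the origin. Each city's cube is then
// rotated so its local "up" axis follows this radial direction,
// making it sit tangent to the globe.
function latLonToXYZ(latDeg, lonDeg, r) {
  const lat = (latDeg * Math.PI) / 180;
  const lon = (lonDeg * Math.PI) / 180;
  return [
    r * Math.cos(lat) * Math.cos(lon),
    r * Math.cos(lat) * Math.sin(lon),
    r * Math.sin(lat),
  ];
}
```

For example, latitude 90 lands a city at the north pole of the data globe, and latitude/longitude (0, 0) lands it on the equator along the x-axis.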
I sized each city by area, so that the bigger cities are represented as larger cubes. Here is the largest city in the world, Tokyo.
This is the clustering algorithm in action. Running it in real time in Processing takes several hours; this video is what it would look like if I were using C++ instead of Java.
I’m happy with the clustered Data Crystal. The hole in the middle of it is the result of the gap in data created by the Pacific Ocean.
The next Data Crystal maps all of the world’s airports. I learned that the United States has about 20,000 airports; most of these are small, unpaved runways. I still don’t know why.
Here is a closeup of the US, askew with Florida in the upper-left corner.
I performed similar clustering functions and ended up with this Data Crystal, which vaguely resembles an airplane.
I’ll have better pictures of these crystals in the next week or so. Stay tuned.
IEEE Milestone for my dad, Gary Kildall
/by Scott Kildall
This plaque in Pacific Grove, California, is the IEEE Milestone honoring my dad’s computer work in the 1970s. He was a true inventor and laid the foundation for the personal computer architecture that we now take for granted.
Gary Kildall’s is the 139th IEEE Milestone. These awards honor the key historical achievements in electrical and electronic engineering that have changed the world, and include the invention of the battery by Volta, Marconi’s work with the telegraph, and the invention of the transistor.
More pictures plus a short write-up of the ceremony can be found here: http://bit.ly/1io2wFH
The dedication event was emotional and powerful, with several of my father’s close colleagues from decades ago gathered to recount his contributions. I knew most of the stories and his work, but there were several aspects of his methodology that I had never heard before.
For example, I learned that my dad was not only a software programmer, but a systems architect, and would spend days diagramming data structures and logic trees on sheets of paper, using a door blank on sawhorses as his work table.
After fastidious corrections, and days poring over the designs, he would embark on programming binges to code what he had designed. And the final program would often work flawlessly on the first run.
With a PhD from the University of Washington, lots of hard work, incredible focus on long-term solutions, plus extraordinary talent, Gary created a vision of how to bring the personal computer to the desks of millions of users, and shared his enthusiasm with just about everyone he met.
My dad turned his passion into two key products: CP/M (the operating system), and BIOS (the firmware interface that lets different hardware devices talk to the same operating system). From this combination, people could, for the first time, load the same operating system onto any home computer.
The IEEE and David Laws from the Computer History Museum did a tremendous job of pulling in an amazing contingent of computer industry pioneers from the early days of personal computing to commemorate this occasion.
At the dedication, my sister Kristin and I had a chance to reconnect with many former Digital Research employees, and I think everyone felt a sense of happiness, relief, catharsis, and dare I say, closure for my dad’s work, which has often been overlooked by the popular press since his premature death in 1994, right in the middle of his career.
My mother, Dorothy McEwen, ran Digital Research as its business manager, to complement my dad the inventor. Together they changed computer history. It was here in Pacific Grove, in 1974, that Gary Kildall loaded CP/M from a tape drive onto a floppy disk and booted it up for the first time: the birth of the personal computer.
If you find yourself in Pacific Grove, take a visit to 801 Lighthouse Avenue, Digital Research headquarters in the 1970s, and you can see this milestone for yourself.
Getting into 123D Circuits
/by Scott Kildall

I’m a convert to 123D Circuits, and not just because I’m an Autodesk shill (full disclosure: I’m in the residency program), but because it has the shared component library that anyone can tap into.
What I’m designing is a PCB that goes to your Raspberry Pi cobbler breakout with some basic components: switches, LEDs, and potentiometers. I’m getting some great help from one of the 123D Circuits team members. It’s going to build on some of my Raspberry Pi Instructables, as well as be a critical component in my upcoming Bot Collective project.
Here’s the preliminary circuit diagram…see anything wrong?
Fritzing, the other viable competitor — as much as an open source program can be considered so — has a snappier grid system and is faster. It is, after all, a desktop application and doesn’t have the odd performance issues of a browser app.
However, 123D Circuits has a community. Bah, a community, why is this important? (see below)
123D Circuits has the chip I need; Fritzing doesn’t. That’s because someone out there made the chip and now I’m using it. 123D FTW.
First three Data Crystals
/by Scott Kildall

My first three Data Crystals are finished! I “mined” these from the San Francisco Open Data portal. My custom software culls through the data and clusters it into a 3D-printable form.
Each one uses a different clustering algorithm. All of them start with geo-located data (x, y), with either time or space on the z-axis.
Here they are! And I’d love to do more (though a lot of work was involved).
Incidents of Crime

This shows the crime incidents in San Francisco over a 3-month period, with over 35,000 data points (the crystal took about 5 hours to “mine”). Each incident is a single cube. Less serious crimes, such as drug possession, are represented as small cubes, and more severe crimes, such as kidnapping, as larger ones. It turns out that crime happens everywhere, which is why this is a densely-packed shape.
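The severity-to-size mapping can be sketched like this. The category names come from the SFPD classification list, but the severity ranks and sizes below are purely illustrative — the actual scale I used isn’t published here.

```python
# Hypothetical severity ranks for a handful of SFPD categories;
# the real project's ranking is not reproduced in this post.
SEVERITY = {
    "DRUG/NARCOTIC": 1,
    "LARCENY/THEFT": 2,
    "BURGLARY": 3,
    "ASSAULT": 4,
    "ROBBERY": 5,
    "KIDNAPPING": 6,
}

def cube_edge(category, base=1.0, step=0.5):
    """Map a crime category to a cube edge length.
    Unknown categories fall back to the smallest cube."""
    rank = SEVERITY.get(category, 1)
    return base + step * (rank - 1)
```

So a drug-possession incident becomes a unit cube while a kidnapping becomes a noticeably larger one, which is what gives the crystal its mixed-size texture.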
Construction Permits
This shows the current development pipeline — the construction permits in San Francisco. Work that affects just a single unit is a smaller cube, and larger cubes correspond to larger developments. The upper left side of the crystal is the south side of the city — there is a lot of activity in the Mission and Excelsior districts, as you would expect. The arm on the upper right is West Portal. The nose towards the bottom is some skyscraper construction downtown.
Civic Art Collection
This Data Crystal is generated from the San Francisco Civic Art Collection. Each cube is the same size, since it doesn’t feel right to make one art piece larger than another. The high top is City Hall, and the part extending below is some of the spaces downtown. The tail on the end is the artwork at San Francisco Airport.
Support material is beautiful
/by Scott Kildall

I finished three final prints of my Data Crystals project over the weekend. They look great and tomorrow I’m taking official documentation pictures.
These are what they look like in the support material, which is also beautiful in its ghostly, womb-like feel.
I’ve posted photos of these before, but I’m still stunned at how amazing they look.
Crime Classifications in San Francisco
/by Scott Kildall

Below is a list of the crime classifications, extracted from the crime reports from the San Francisco Open Data Portal. This is part of my “data mining” work with the 3D-printed Data Crystals.
ARSON
ASSAULT
BAD CHECKS
BRIBERY
BURGLARY
DISORDERLY CONDUCT
DRIVING UNDER THE INFLUENCE
DRUG/NARCOTIC
DRUNKENNESS
EMBEZZLEMENT
EXTORTION
FAMILY OFFENSES
FORGERY/COUNTERFEITING
FRAUD
GAMBLING
KIDNAPPING
LARCENY/THEFT
LIQUOR LAWS
LOITERING
MISSING PERSON
NON-CRIMINAL
OTHER OFFENSES
PORNOGRAPHY/OBSCENE MAT
PROSTITUTION
RECOVERED VEHICLE
ROBBERY
RUNAWAY
SEX OFFENSES, FORCIBLE
SEX OFFENSES, NON FORCIBLE
STOLEN PROPERTY
SUICIDE
SUSPICIOUS OCC
TRESPASS
VANDALISM
VEHICLE THEFT
WARRANTS
WEAPON LAWS
EEG Data Crystals
/by Scott Kildall

I’ve had the Neurosky Mindwave headset in a box for over a year and just dove into it, as part of my ongoing Data Crystals research at Autodesk. The device is the technology backbone behind the project: EEG AR with John Craig Freeman (still working on funding).
The headset fits comfortably. Its space-age retro look is aesthetically pleasing, though I’d cover up the logo in a final art project. The gray arm rests on your forehead and reads your EEG levels, translating them into several values. The most useful are “attention” and “meditation”, which are calculations derived from a few different brainwave patterns.
This isn’t the first time I’ve used the Neurosky set. In 2010, I developed an art piece, a portable personality kit called “After Thought”. That piece, however, relied on slow activity and was more like a tarot card reading, where the headset readings were secondary to the performance.
The general idea for the Data Crystals is to translate data into 3D prints. I’ve worked with data from San Francisco’s Data Portal. However, the idea of generating realtime 3D models from biometric data is hard to resist.
This is one of my first crystals — just a small sample of 200 readings. The black jagged squares represent “attention” and the white cubes correspond to “meditation”.
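The translation from headset readings to printable shapes can be sketched like this — a simplified Python stand-in for the actual generator, where the shape records and field names are illustrative. Neurosky’s “attention” and “meditation” eSense values each range from 0 to 100.

```python
import random

def readings_to_shapes(readings):
    """Turn a stream of (attention, meditation) eSense readings
    (each 0-100) into a list of shape records for a 3D print.
    Attention becomes a black jagged square, meditation a white cube;
    the value scales the shape and the sample index sets the height."""
    shapes = []
    for z, (attention, meditation) in enumerate(readings):
        shapes.append({"kind": "jagged", "color": "black",
                       "size": attention / 100.0, "z": z})
        shapes.append({"kind": "cube", "color": "white",
                       "size": meditation / 100.0, "z": z})
    return shapes

# a fake 200-sample session (at the ~1 Hz eSense rate, a few minutes)
random.seed(7)
session = [(random.randint(0, 100), random.randint(0, 100))
           for _ in range(200)]
shapes = readings_to_shapes(session)
```

Stacking the two shape streams along the same z-axis is what produces the intersecting black and white materials in the print.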
Back to the sample rate…a real-time reading of 600 samples would take 10 minutes. Still, it’s great to be able to do real-time, so I imagine a dark room and a beanbag chair where you think about your day and then generate the prints.
Here’s what the software looks like. This is a video of my own EEG readings (recorded then replayed back at a faster rate).
And another view of the 3D print sample:
What I like about this 3D print is the mixing of the two digital materials, where the black triangles intersect with the white squares. I still have quite a bit of refinement work to do on this piece.
Now, the challenge is what kind of environment to create for a 10-minute “3D Recording Session”. Many colleagues immediately suggest sexual arousal and drugs, which is funny, but something I want to avoid. One thing I learned at the Exploratorium was how to appeal to a wide audience, i.e. a more family-friendly one. This way, you can talk to anyone about the work you’re doing instead of a select audience.
Some thoughts: just after crossing the finish line in an extreme mountain bike race, right after waking up in the morning, after drinking a pot of coffee (our workplace drug-of-choice) or while soaking in the hot tub!
* The website advertises a “512Hz sampling rate – 1Hz eSense calculation rate.” Various blog posts indicate that the raw values often get repeated, meaning that the effective rate is super-slow.
Accidental Raspberry Pi Selfie
/by Scott Kildall

While monkeying around with the Raspberry Pi, the camera and the GPIO, I took this selfie. I guess the camera was upside down!
The Raspberry Pi is pretty great overall. The real bugaboo is the wifi and networking capabilities. You still have to specify these settings manually.
But the cost: only $40! I have 4 of them running now, all doing different tasks. Perfect for my upcoming Bot Collective project (lots and lots of Twitterbots).
7/10 stars.
Ultimate Raspberry Pi Configuration Guide
/by Scott Kildall

I’m still recovering from my broken collarbone (surgery was on Wednesday). Today I’m definitely feeling ouchy and tender. I get pretty wiped out walking around outside with the jostling movement, so have been staying home a lot.
To keep myself busy, I’ve been working on a backlog of Instructables for my residency at Autodesk.
This one is called the Ultimate Raspberry Pi Configuration Guide — it took a long time to write!
Even with two-hands and full mobility, it would have been arduous.
My GitHub Instructable (while convalescing)
/by Scott Kildall

As resident artists at Autodesk, we are supposed to write many Instructables. Often, the temptation is to make your projects and then write the how-to guides in haste.
Since I broke my collarbone, I really can’t make anything physical, but I can type one-handed. Besides the daily naps, the doctors’ appointments and slowly doing one-handed chores like sorting laundry, I have to keep my mind active (I’m still too vulnerable to go outside on my own).
Here is a new one: an Introduction to Git and GitHub. I originally found this source-control system weird and confusing, but now I’m 100% down with it. Feel free to add comments on the guide, as I’m a relative Git/GitHub n00b and also have a thick skin for scathing Linux criticism.
Full Instructable here:
http://www.instructables.com/id/Introduction-to-GitHub/
And here is my post-surgery selfie from yesterday, where they put the pins in my collarbone. The doctors told me it went well. All I know is that I woke up feeling groggy with extra bandages on my shoulder. That’s how easy it is these days.
No fabrication work for a while
/by Scott Kildall

I had a bicycle accident on Sunday during a group ride (no cars were involved) and I smacked the pavement hard enough to break my collarbone. Ouch!
The upshot is no fabrication work for at least 4 weeks. This will change my time as a resident artist at Autodesk, as I was in the middle of an intense period of time there. I’m not sure just yet how this will play out.
Everyone has been telling me to rest up, but I have a hard time sitting still. I expect to be doing some research, reading and a bit of one-handed coding + blogging, plus plenty of sleeping. Fortunately, it was my left collarbone and I’m a righty. It is already easier than the other way around — I broke my right collarbone 4 years ago and having a clumsy one hand is so much harder.
3D Data Viz & SF Open Data
/by Scott Kildall

I’ve fallen a bit behind in my documentation and have a backlog of great stuff that I’ve been 3D-printing. These are a few of my early tests with my new project: Data Crystals. I am using various data sources, which I algorithmically transform into 3D sculptures.
The source for these is the San Francisco Open Data Portal — which provides datasets about all sorts of interesting things such as housing permit data, locations of parking meters and more.
My custom algorithms transform this data into 3D sculptures. Legibility is still an issue, but initial tests show the wonderful work that algorithms can do.
This is a transformation of San Francisco Crime Data. It turns out that crime happens everywhere, so the data is in a giant block.
After running some crude data transformations, I “mined” this crystal: the location of San Francisco public art. Most public art is located in the downtown and city hall area. But there is a tail, which represents the San Francisco Airport.
More experiments: this is a test, based on the SF public art, where I played with varying the size of the cubes (this would be a suggested value of artwork, which I don’t have data for…yet). Now, I have a 4th axis for the data. Plus, there is a distinct aesthetic appeal of stacking differently-sized blocks as opposed to uniform ones.
Stay tuned, there is more to come!
Urban Data Challenge
/by Scott Kildall

Last Saturday was my first-ever hackathon — The Urban Data Challenge, sponsored by GAFFTA, swissnex, the Berkeley Center for New Media and Rebar.
I arrived at 9am and introduced myself to Casey Reas, co-founder of Processing, who was leading the hackathon and a super-nice guy. When I was working as a New Media Exhibit Developer at the Exploratorium (2012-13), Processing was the primary tool we used for building installations. Thanks Casey!
I arrived alone and expected a bunch of nerdy 20-somethings. Instead, I ran into some old friends, including Karen Marcelo, who has been generously running dorkbot for 15+ years and has an SRL email address. (coolPoints *= coolPoints)
And, I shouldn’t have been surprised, but Eric Socolofsky, whom I worked directly with at the Exploratorium was also present. He is a heavy-hitter in terms of code and data-viz and taught me how to get the Processing libs running in Java, which makes hacking much much easier.
I sat down at a table with Karen and invited Eric over. Also sitting with us were Jesse Day, a graduate student in Learning, Design and Technology at Stanford and Kristin Henry, artist and computer scientist. The 5 of us were soon to become a team — Team JEKKS…get it?
The folks from GAFFTA (Josette Melchor), swissnex and BCNM took turns presenting slides about possibilities for data canvas projects for 30 minutes. This was followed by another 30 minutes of questions from a curious crowd of 60 people, which meant a lot to ingest.
The night before, we were given a dataset in a .csv format. I’d recommend never, ever looking at datasets just before going to sleep. I dreamt of strings, ints and timestamps.
The data included four Market Street locations, which tracked people, cars, trucks and buses minute by minute. There was a lot of material there. How did they track this? Answer: air quality sensors. That’s right, small dips in various emissions could give us minute-by-minute extrapolations of what kind of traffic was happening at each place. This is an amazing model — though I still wonder about its accuracy.
Audience Engagement: Would a general audience be attracted to installation? Would they stop and watch/interact?
Legibility of Data: Can people understand the data and make sense of the specifics?
Actionability: Are people spurred to action, presumably to change their mode of transport to reduce emissions?
At 10:30, we started. I don’t have any pictures of us working. They’re pretty much exactly what you’d imagine — a bunch of dorks huddled around a table with laptops.
After introducing ourselves and talking about our individual strengths, it was apparent we had a strong group of thinkers. We tossed around various ideas for about 30 minutes and then decided to do individual experiments for about an hour.
We decided to focus our data investigation on time rather than location. The 4 locations would somehow be on the same timeline for visitors to see. Kristin dove into Python and began transcoding the data sets into a more usable format. She translated them into graphics.
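A transcoding pass like Kristin’s can be sketched in Python. The column names and values below are hypothetical — the real dataset’s schema isn’t reproduced in this post — but the shape of the job is the same: regroup per-minute rows into one time series per location.

```python
import csv
import io
from collections import defaultdict

# Hypothetical rows standing in for the real Market Street dataset,
# whose actual column names may differ.
RAW = """timestamp,location,pedestrians,cars,trucks,buses
2013-03-01 09:00,Market@5th,42,12,3,2
2013-03-01 09:00,Market@9th,17,20,1,1
2013-03-01 09:01,Market@5th,39,14,2,2
"""

def by_location(csv_text):
    """Regroup per-minute rows into one time series per location,
    coercing the count columns to ints along the way."""
    series = defaultdict(list)
    for row in csv.DictReader(io.StringIO(csv_text)):
        series[row["location"]].append(
            (row["timestamp"], int(row["pedestrians"]), int(row["cars"]),
             int(row["trucks"]), int(row["buses"])))
    return dict(series)

series = by_location(RAW)
```

Once the rows are grouped this way, each location’s series can be dropped straight onto a shared timeline for the visualization.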
I played around with a hand-drawn aesthetic, tracing over a map of the downtown area by hand and drawing individual points, angling for something a little more low-tech. I also knew that Eric would devise something precise, neat and clean, so left him with the hard-viz duties.
Karen worked on her own to come up with some circular representations in Processing. As with everyone in a hackathon, people work with the strong toolsets they already have.
Jesse was the only one of us who didn’t start coding right away. Smart man. He was also the one with the conceptual breakthrough, and began coloring bars on the vehicles themselves to represent emissions.
We huddled and decided to focus on representing the emissions as a series of colors. We settled on representing particulates, VOC (body odor), CO, CO2 and EMF (phones, electricity), unsure at the time whether they were actually being tracked by the sensors.
More coding. Eric and I tapped into our collective exhibition design/art design experience and talked through a compelling interaction model. The two things that people universally enjoy are seeing themselves and controlling timelines. Everyone liked the idea of “seeing yourself” as particulate emissions.
We all hashed out an idea for a 2-monitor installation and consulted with Casey about whether this was permissible (answer = yes). The first monitor would be a real-time data visualization of the various stations. The other would be a mirror which — get this — would do live video-tracking and map graphics of buses, cars, trucks and people onto the corresponding moving bits in the background. Additionally, you could see yourself in the background.
Since it was a hackathon-style proposal, it didn’t have to actually work. Beauty, eh?
2:30pm. 4 hours to make it happen. The rules were: laptops closed at 6:30 and then we all present as a group.
Jesse did the design work. We argued about colors: “too 70s”, “too saturated”, etc. Eric worked on the arduous task of getting the data into a legible data visualization. I worked on the animation, which involved no data translation.

I reused animation code that I’ve used in the Player Two rotoscoping project and for the Tweets in Space video installation. The next few hours were fast-n-furious and not especially “fun”. Eric was down to the wire with the data translation into graphics. At 5:30, I was busy making animated bus, car and truck exhaust farts, which made us all laugh. At 6:30 we were done.
We had two visualizations to show the crowd. Eric’s came out perfectly and was precise and legible. I was thankful that I roped him into our team. (note: video sped up by 4x).
The animation I wrote supplemented the visualization well. It was scrappy and funny, and we knew it would make people in the audience laugh.
Neither Karen nor Kristin was able to make it to our presentation, so only the boys were represented in the pictures.
We were due up towards the end and so had a chance to watch the others before us. Almost everyone else had slide shows (oops!). There were so many ideas, both crazy and conventional, floating around. I can’t remember all of them — it’s like reading a book of short stories where you can only recall a handful.
I did notice a few things: a lot of the younger folks had a design approach to making the visualizations, starting with well-illustrated concept slides. A few didn’t have any code, just the slides (to their credit, I think the Processing environment wasn’t familiar to everyone). One group made a website with a top-level domain (!), one worked in the Unity game engine, there were many web-based implementations, one was a sound-art piece (low points for legibility, but high for artistic merit) and one had a zombie game. Some presentations were muddled and others were clear.
We gave a solid presentation, led by Jesse, which we called “Particulate Matters” (ba-dum-bum). We started with the “hard” data visualization and ended with the animation, which got a lot of laughs. I felt solid about our work.
The judging took a while. Fortunately, they provided beer!
The results were in and we got 2nd place (woo-hoo!) out of about 14 teams. 1st place deserved it — a clean concept, which included accumulated particle emissions with Processing code showing emission-shapes dropping from the sky and accumulating on piles on the ground. The shapes matched the data. Nice work.
We got lots of chocolate as our prize. Yummy!
Amazon Preferences & Queer Latinidad
/by Scott Kildall

I just finished reading “Queer Latinidad” by Juana Rodriguez, which I downloaded for the Kindle (perfect medium for theories of electronic discourse).
This single purchase seems to have glitched my Amazon preferences. As a straight, white male, I now get recommendations that contradict my “personality profile”. Check these out:
Onto the text itself: I found myself fascinated by Rodriguez’s textual interactions and queer latina identity, especially since her world of net.interaction happened in a pre-Facebook world with IRC chat rooms (really not that long ago…)
My favorite passage in the book is this one:
I remember the textual performances (as Second Front) we did in Second Life such as “Breaking News” (also not that long ago). The “playbook” for this performance was simply: we go into the Reuters headquarters and use the chat window to shout headlines such as: BREAKING NEWS: AVATARS IN REUTERS NEED ATTENTION!
But now, the performance only exists in writing, and absurd documentation videos like this:
Materiality in 3D Prints
/by Scott Kildall

I’m resuming some of the 3D printing work this week for my ongoing 3D data visualization research (a.k.a. Data Crystals). Here are four small tests in the “embryonic” state.
I have four “crystals” — two constructed from a translucent resin material and two from a more rubbery black material.
And the finished product! The Tango Black (that’s the material) below. I’m not so happy with how this feels: soft and bendy.
And the Vero Clear — which has an aesthetic appeal to it, and is a hard resin that resembles ice. Remember the ICE (Intrusion Countermeasure Electronics) in Neuromancer… this is one source of inspiration.

Welcome to the Party: @lenenbot
/by Scott Kildall

Say hello to the latest Twitterbot from the Bot Collective: @lenenbot
Lenenbot* mixes up John Lennon and Vladimir Lenin quotes: the first half of one with the second half of the other.
Some of my favorites so far are:
There are more surreal ones, too. There are about 600 different possibilities, all randomized. Subscribe to the Twitter account here.
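The mash-up logic can be sketched in a few lines of Python. The quote lists here are tiny and illustrative — the real bot draws on enough material for its roughly 600 combinations.

```python
import random

# Tiny illustrative quote lists; the real bot uses many more.
LENNON = [
    "Life is what happens when you're busy making other plans.",
    "All you need is love.",
]
LENIN = [
    "There are decades where nothing happens.",
    "A lie told often enough becomes the truth.",
]

def halves(quote):
    """Split a quote roughly in the middle, on a word boundary."""
    words = quote.split()
    mid = len(words) // 2
    return " ".join(words[:mid]), " ".join(words[mid:])

def lenenbot(rng=random):
    """First half of one man's quote, second half of the other's,
    with the order of the two men randomized."""
    a, b = rng.choice(LENNON), rng.choice(LENIN)
    if rng.random() < 0.5:
        a, b = b, a
    return halves(a)[0] + " " + halves(b)[1]

tweet = lenenbot()
```

With n Lennon quotes and m Lenin quotes, splitting each in half and pairing in either order yields up to 2 × n × m distinct mash-ups.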
* I chose the name “Lenen” to avoid confusion. Lenonbot and Lenninbot look like misspellings of Lennon and Lenin, respectively. Lenen is its own bot.
Digital Fabrication Success
/by Scott Kildall

I’ve been working on a Digital Fabrication Technique for building precise 3D-faceted forms. I ended up making an armature, which is close to a good solution, but still has too much play in the joints.
Then the top squares and then the bottom panel of the structure.
I built up the structure quickly. The precision of the armature made it easy to align the wood-paneled faces.
It looks just like the model!
Digital Fabrication — Better
/by Scott Kildall

After the “Digital Fabrication Fail” based on my self-defined Fabrication Challenge, I’ve gotten closer to a more precise solution. After an evening of frustration, while riding my bike home, I realized that an armature for the 3D sculptures would be the solution.
This is a bit tedious design-wise since I’d have to custom-design the armature for every 3D form. However, it would work — I remembered the Gift Horse Project and the armature that we built for this.
I made a few mistakes at first, but after a few tries got these three pieces to easily fit together.
If I model the tolerances too tightly, then I can’t slide the inner portions of the armature together.
It is certainly an improvement, but I’m looking for something that has more precision and is still easy to assemble.
Digital Fabrication Fail #1
/by Scott Kildall

I’m working on some simple tests for my Faceted Forms Fabrication Challenge. I started with this model, which has 10 faces and is relatively simple.
Fabrication Challenge — Faceted Forms
/by Scott Kildall

The fabrication challenge for some of my new sculptures is to devise a way to transform models in 3D screen-space into faceted, painted wood forms. The faceted look is something I first experimented with in papercraft sculptures for the No Matter (2008) project, a collaboration with Victoria Scott.
I later expanded upon this idea with the 2049 Series sculptures such as the Universal Mailbox and the 2049 Hotline. I constructed these sculptures from found wood at the dump while an artist-in-residence at Recology SF.
The problem I had was getting the weird angles to be exact. I don’t have strong woodworking skills and ended up spending a lot of time with Bondo, fixing my mistakes. I’d like to be able to make these on the laser-cutter, with no saws and no sanding, and have them look perfect. Stay tuned.