Urban Data Challenge

Last Saturday was my first-ever hackathon — The Urban Data Challenge, sponsored by GAFFTA, swissnex, the Berkeley Center for New Media and Rebar.

I arrived at 9am and introduced myself to Casey Reas, co-founder of Processing, who was leading the hackathon and a super-nice guy. When I was working as a New Media Exhibit Developer at the Exploratorium (2012-13), Processing was the primary tool we used for building installations. Thanks Casey!

I arrived alone and expected a bunch of nerdy 20-somethings. Instead, I ran into some old friends, including Karen Marcelo, who has been generously running dorkbot for 15+ years and has an SRL email address. (coolPoints *= coolPoints)

And, I shouldn’t have been surprised, but Eric Socolofsky, with whom I worked directly at the Exploratorium, was also present. He is a heavy-hitter in terms of code and data-viz and taught me how to get the Processing libs running in Java, which makes hacking much, much easier.

I sat down at a table with Karen and invited Eric over. Also sitting with us were Jesse Day, a graduate student in Learning, Design and Technology at Stanford and Kristin Henry, artist and computer scientist. The 5 of us were soon to become a team — Team JEKKS…get it?

The folks from GAFFTA (Josette Melchor), swissnex and BCNM took turns presenting slides about possibilities for data canvas projects for 30 minutes. This was followed by another 30 minutes of questions from a curious crowd of 60 people, which meant there was a lot to ingest.

The night before, we were given a dataset in .csv format. I’d recommend never, ever looking at datasets just before going to sleep. I dreamt of strings, ints and timestamps.

The data covered four Market Street locations, tracking people, cars, trucks and buses for every minute of time. There was a lot of material there. How did they track this? Answer: air quality sensors. That’s right, small dips in various emissions could be extrapolated into minute-by-minute estimates of what kind of traffic was happening at each location. This is an amazing model — though I still wonder about its accuracy.

This was a competition and, as such, we would be judged on three criteria:
Audience Engagement: Would a general audience be attracted to the installation? Would they stop and watch/interact?

Legibility of Data: Can people understand the data and make sense of the specifics?

Actionability: Are people spurred to action, presumably to change their mode of transport to reduce emissions?

At 10:30, we started. I don’t have any pictures of us working. They’re pretty much exactly what you’d imagine — a bunch of dorks huddled around a table with laptops.

After introducing ourselves and talking about our individual strengths, it was apparent we had a strong group of thinkers. We tossed around various ideas for about 30 minutes and then decided to do individual experiments for about an hour.

We decided to focus our data investigation on time rather than location. The 4 locations would somehow be on the same timeline for visitors to see. Kristin dove into Python and began transcoding the data sets into a more usable format, then translated them into graphics.
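I don’t have Kristin’s actual script, but the transcode step might have looked something like this sketch: pivoting the per-location rows onto a single timeline. The column names and values here are hypothetical, since I don’t recall the exact CSV layout.

```python
import csv
import io
from collections import defaultdict

# Hypothetical sample of the Market Street sensor data -- the real
# columns and values differed; this just illustrates the reshape.
RAW = """timestamp,location,people,cars,trucks,buses
2013-02-22 09:00,Market@5th,12,30,2,1
2013-02-22 09:00,Market@9th,8,22,1,2
2013-02-22 09:01,Market@5th,15,28,3,1
2013-02-22 09:01,Market@9th,9,25,0,2
"""

def by_timestamp(raw_csv):
    """Pivot per-location rows onto a single timeline:
    {timestamp: {location: {traffic_type: count}}}"""
    timeline = defaultdict(dict)
    for row in csv.DictReader(io.StringIO(raw_csv)):
        ts, loc = row.pop("timestamp"), row.pop("location")
        timeline[ts][loc] = {k: int(v) for k, v in row.items()}
    return dict(timeline)

timeline = by_timestamp(RAW)
print(timeline["2013-02-22 09:00"]["Market@5th"]["cars"])  # 30
```

With the data keyed by timestamp, all four locations line up on the same timeline, which is exactly what our visualizations needed.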

I played around with a hand-drawn aesthetic, tracing over a map of the downtown area by hand and drawing individual points, angling for something a little more low-tech. I also knew that Eric would devise something precise, neat and clean, so left him with the hard-viz duties.


Karen worked on her own to come up with some circular representations in Processing. As with everyone in a hackathon, people work with the strong toolsets they already have.


Jesse was the only one of us who didn’t start coding right away. Smart man. He was also the one with the conceptual breakthrough, and began coloring bars on the vehicles themselves to represent emissions.

street_view_before street_view_with_graph

We huddled and decided to focus on representing the emissions as a series of colors. We settled on representing particulates, VOC (body odor), CO, CO2 and EMF (phones, electricity), though we weren’t sure at the time whether they were actually being tracked by the sensors.

More coding. Eric and I tapped into our collective exhibition design/art design experience and talked through a compelling interaction model. The two things that people universally enjoy are seeing themselves and controlling timelines. Everyone liked the idea of “seeing yourself” as particulate emissions.

We all hashed out an idea for a 2-monitor installation and consulted with Casey about whether this was permissible (answer = yes). The first monitor would be a real-time data visualization of the various stations. The other would be a mirror which — get this — would do live video-tracking and map graphics of buses, cars, trucks and people onto the corresponding moving bits in the background. Additionally, you could see yourself in the background.

Since it was a hackathon-style proposal, it didn’t have to actually work. Beauty, eh?

2:30pm. 4 hours to make it happen. The rules were: laptops closed at 6:30 and then we all present as a group.

Jesse did the design work. We argued about colors: “too 70s”, “too saturated”, etc. Eric worked on the arduous task of getting the data into a legible data visualization. I worked on the animation, which involved no data translation.

I reused animation code that I’ve used in the Player Two rotoscoping project and for the Tweets in Space video installation. The next few hours were fast-n-furious and not especially “fun”. Eric was down to the wire with the data translation into graphics. At 5:30, I was busy making animated bus, car and truck exhaust farts, which made us all laugh. At 6:30 we were done.

We had two visualizations to show the crowd. Eric’s came out perfectly and was precise and legible. I was thankful that I roped him into our team. (note: video sped up by 4x).

The animation I wrote supplemented the visualization well. It was scrappy and funny, and we knew it would make people in the audience laugh.

Neither Karen nor Kristin was able to make it to our presentation, so only the boys were represented in the pictures.

We were due up towards the end, so we had a chance to watch the others before us. Almost everyone else had slide shows (oops!). There were so many ideas, both crazy and conventional, floating around. I can’t remember all of them — it’s like reading a book of short stories where you can only recall a handful.

I did notice a few things: a lot of the younger folks had a design approach to making the visualizations, starting with well-illustrated concept slides. A few didn’t have any code, just slides (to their credit, I think the Processing environment wasn’t familiar to everyone). One group made a website with a top-level domain (!), one worked in the Unity game engine, there were many web-based implementations, one piece was a sound-art piece (low points for legibility, but high for artistic merit) and one had a zombie game. Some presentations were muddled and others were clear.

We gave a solid presentation, led by Jesse, which we called “Particulate Matters” (ba-dum-bum). We started with the “hard” data visualization and ended with the animation, which got a lot of laughs. I felt solid about our work.


The judging took a while. Fortunately, they provided beer!
The results were in and we got 2nd place (woo-hoo!) out of about 14 teams. 1st place deserved it — a clean concept, with Processing code showing emission-shapes dropping from the sky and accumulating in piles on the ground. The shapes matched the data. Nice work.

We got lots of chocolate as our prize. Yummy!

It turns out that Karen is the geekiest of all of us; in the days after the hackathon, she improved her Processing sketch to come up with this cool-looking visualization.

Amazon Preferences & Queer Latinidad

I just finished reading “Queer Latinidad” by Juana Rodriguez, which I downloaded for the Kindle (perfect medium for theories of electronic discourse).

This single purchase seems to have glitched my Amazon preferences. As a straight, white male, I now get recommendations that contradict my “personality profile”. Check these out:


Onto the text itself: I found myself fascinated by Rodriguez’s textual interactions and queer latina identity, especially since her world of net.interaction happened in a pre-Facebook world with IRC chat rooms (really not that long ago…)

My favorite passage in the book is this one:

Digital discourses, those virtual exchanges we glimpse on the Net, are textual performances: fleeting, transient, ephemeral, already past. Like the text of a play, they leave a trace to which meaning can be assigned, but these traces are haunted by the absence that was the performance itself, its reception, and its emotive power. To write about these online performances already alters their significance; a shift in temporal and spatial context produces a shift in meaning.

I remember the textual performances (as Second Front) we did in Second Life such as “Breaking News” (also not that long ago). The “playbook” for this performance was simply: we go into the Reuters headquarters and use the chat window to shout headlines such as: BREAKING NEWS: AVATARS IN REUTERS NEED ATTENTION!

But now, the performance only exists in writing, and absurd documentation videos like this:



Materiality in 3D Prints

I’m resuming some of the 3D printing work this week for my ongoing 3D data visualization research (a.k.a. Data Crystals). Here are four small tests in the “embryonic” state.

Step 1 in the cleaning process is the arduous task of using dental tools to pick away the support material.


I have four “crystals” — two constructed from a translucent resin material and two from a more rubbery black material.

And the finished product! The Tango Black (that’s the material) is below. I’m not so happy with how this feels: soft and bendy.

And the Vero Clear — which has an aesthetic appeal to it, and is a hard resin that resembles ice. Remember the ICE (Intrusion Countermeasure Electronics) in Neuromancer? This is one source of inspiration.

Welcome to the Party: @lenenbot

Say hello to the latest Twitterbot from the Bot Collective: @lenenbot


Lenenbot* mixes up John Lennon and Vladimir Lenin quotes: the first half of one with the second half of the other.

Some of my favorites so far are:

Communism is everybody’s business.
It’s weird not to be able to run the country.
Revolution is love.

There are more surreal ones, too. There are about 600 different possibilities, all randomized. Subscribe to the Twitter account here.
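The mixing logic is simple enough to sketch. This isn’t the bot’s actual code, and the two quote lists here are tiny stand-ins (the real pool yields those ~600 combinations), but it shows the half-and-half splice:

```python
import random

# Stand-in quote lists -- the real bot draws from a larger pool.
LENNON = [
    "Living is easy with eyes closed.",
    "You may say I'm a dreamer but I'm not the only one.",
]
LENIN = [
    "Communism is Soviet power plus electrification.",
    "A lie told often enough becomes the truth.",
]

def split_halves(quote):
    """Split a quote roughly in the middle, on a word boundary."""
    words = quote.split()
    mid = len(words) // 2
    return " ".join(words[:mid]), " ".join(words[mid:])

def lenen_quote(rng=random):
    """First half of one author's quote + second half of the other's."""
    lennon = rng.choice(LENNON)
    lenin = rng.choice(LENIN)
    if rng.random() < 0.5:
        return split_halves(lennon)[0] + " " + split_halves(lenin)[1]
    return split_halves(lenin)[0] + " " + split_halves(lennon)[1]

print(lenen_quote())
```

Randomizing both the quote selection and the splice direction is what makes the combination count multiply out so quickly.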



* I chose the name “Lenen” to avoid confusion. Lenonbot and Lenninbot look like misspellings of Lennon and Lenin, respectively. Lenen is its own bot.

Digital Fabrication Success

I’ve been working on a Digital Fabrication Technique for building precise 3D-faceted forms. I ended up making an armature, which is close to a good solution, but still has too much play in the joints.

One of the other resident artists at Autodesk suggested a solution: make wooden squares to solidify the joints in the armature. I cut out a variety of squares, each with a slightly different width and height to account for the kerf of the laser-cutter, and laser-etched them with their measurements.

You can see here where I cut a groove in the bottom of the armature, 1/8″ deep. The square fits nicely in there. I found that 25/1000″ seems to be the right amount of compensation.
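Generating that sweep of test squares is a one-liner kind of job. A sketch of the idea, assuming a hypothetical 1″ nominal square and a compensation range in thousandths of an inch (only the 25/1000″ value comes from my actual test):

```python
# Sweep test-square sizes around a nominal dimension to find the right
# kerf compensation. The 1.0" nominal and the step range here are
# illustrative; 25/1000" is the value that worked for me.
NOMINAL = 1.0  # inches

def kerf_test_squares(nominal, start=0.015, stop=0.035, step=0.005):
    """Return (label, side_length) pairs, oversizing each square by
    the kerf compensation so it fits snugly after the cut."""
    sizes = []
    comp = start
    while comp <= stop + 1e-9:  # tolerance for float accumulation
        side = nominal + comp
        sizes.append((f'+{comp*1000:.0f}/1000"', round(side, 3)))
        comp += step
    return sizes

for label, side in kerf_test_squares(NOMINAL):
    print(label, side)
```

Etching each square with its label (as I did with the measurements) means you can read the winning compensation straight off whichever test piece fits best.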

I also added squares for the top joints.
Using the brad nailer, I adhered the bottom squares to the armature.


Then the top squares, and then the bottom panel of the structure. I built up the structure quickly. The precision of the armature made it easy to align the wood-paneled faces.

This is what it looks like before I put the last panel on.

And done! No glue or anything. Easy assembly.


It looks just like the model!


Digital Fabrication — Better

After the “Digital Fabrication Fail” based on my self-defined Fabrication Challenge, I’ve gotten closer to a more precise solution. After an evening of frustration, I realized while riding my bike home that an armature for the 3D sculptures would be the solution.

This is a bit tedious design-wise, since I’d have to custom-design the armature for every 3D form. However, it would work — I remembered the Gift Horse Project and the armature we built for it.

I designed a quick-and-dirty armature in Sketchup (I know, I know…) and exported the faces to Illustrator with an SVG exporter. I then laser-cut the armature pieces and put them together.


I made a few mistakes at first, but after a few tries got these three pieces to fit together easily.

However, even accounting for the kerf, there is still a lot of play in the structure. You can’t see it in the images, but I can easily wiggle the pieces back and forth.

If I model the tolerances too tightly, then I can’t slide the inner portions of the armature together.
It is certainly an improvement, but I’m looking for something with more precision that is still easy to assemble.



Digital Fabrication Fail #1

I’m working on some simple tests for my Faceted Forms Fabrication Challenge. I started with this model, which has 10 faces and is relatively simple.

Then, I laser-cut these pieces from a 1/8″ sheet of wood.

And I also cut out these joints.

Then, using the brad nail gun and glue, I began with the base and built up the structure, using the joints for support.

The first level, with the rectangular base, went well.

However, when I started assembling the trapezoid sections, I quickly ran into problems. The nail gun pushed the joint blocks away from the wood, and it was difficult to align the joint pieces correctly. I had to redo sections again and again. Although this photo doesn’t entirely capture the first-try failure, you can see the nail holes everywhere and also the gaps between the joints. I threw in the towel pretty quickly and went home to sleep on the project, hoping to come up with a better solution.


Fabrication Challenge — Faceted Forms

The fabrication challenge for some of my new sculptures is to devise a way to transform models in 3D screen-space into faceted, painted wood forms. The faceted look is something I first experimented with in the papercraft sculptures of the No Matter (2008) project, a collaboration with Victoria Scott.


I later expanded upon this idea with the 2049 Series sculptures such as the Universal Mailbox and the 2049 Hotline. I constructed these sculptures from found wood at the dump while an artist-in-residence at Recology SF.

The problem was getting the weird angles to be exact. I don’t have strong woodworking skills and ended up spending a lot of time with Bondo, fixing my mistakes. I’d like to be able to make these on the laser-cutter: no saws, no sanding, and have them look perfect. Stay tuned.



The Art of 3D Printing

My new work on “Data Crystals” is featured in a new episode of Science in the City, produced by the Exploratorium. You can watch it here.

The behind-the-scenes production involved many emails and then a quick video shoot. Phoebe (the videographer) interviewed me in the conference room at Autodesk. We had about 25 minutes to shoot the interview portion of the video. She filled me in on her intentions for the piece and asked me to talk about a few general topics related to 3D printing.


Fortunately, over the years, I’ve become very comfortable with my voice and image. She also did a great job of making me look smart. I explained my new “Data Crystals” project, which is in the research phase: I am looking at open data sets provided by the San Francisco Open Data Portal and mapping them as 3D sculptural objects. You can see me holding some of the 3D prints in the video.




Lasercut Measuring Tool

I like helping people and making simple tools to share. This measures plywood sheets, fits in your pocket and can be laser-cut in 3 minutes. The Instructable is here (Illustrator file included).