I’ve fallen a bit behind in my documentation and have a backlog of great stuff that I’ve been 3D-printing. These are a few of my early tests with my new project: Data Crystals. I take various data sources and algorithmically transform them into 3D sculptures.
The source for these is the San Francisco Open Data Portal — which provides datasets about all sorts of interesting things such as housing permit data, locations of parking meters and more.
My custom algorithms transform this data into 3D sculptures. Legibility is still an issue, but initial tests show the wonderful work that algorithms can do.
This is a transformation of San Francisco Crime Data. It turns out that crime happens everywhere, so the data is in a giant block.
After running some crude data transformations, I “mined” this crystal: the locations of San Francisco public art. Most public art is located in the downtown and city hall area. But there is a tail, which represents the San Francisco Airport.
More experiments: this is a test, based on the SF public art, where I played with varying the size of the cubes (this would be a suggested value of artwork, which I don’t have data for…yet). Now, I have a 4th axis for the data. Plus, there is a distinct aesthetic appeal of stacking differently-sized blocks as opposed to uniform ones.
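The gist of these experiments can be sketched in a few lines of Python. This is not my actual generation code — the field names (`lat`, `lon`, `value`) and the scaling constants are hypothetical stand-ins for whatever a given SF Open Data dataset provides — but it shows the core idea: position comes from geography, and cube size carries a fourth data axis.

```python
# Minimal sketch: map geolocated records to cube positions and sizes.
# Field names (lat, lon, value) are hypothetical placeholders for the
# columns in a real SF Open Data dataset.

def normalize(values):
    """Scale a list of numbers into the 0..1 range."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0
    return [(v - lo) / span for v in values]

def records_to_cubes(records, scale=100.0, min_size=1.0, max_size=5.0):
    """Turn (lat, lon, value) records into (x, y, size) cube specs.

    x/y come from the geographic position; the cube's edge length is
    a fourth data axis driven by the record's value.
    """
    xs = normalize([r["lon"] for r in records])
    ys = normalize([r["lat"] for r in records])
    vs = normalize([r["value"] for r in records])
    return [
        (x * scale, y * scale, min_size + v * (max_size - min_size))
        for x, y, v in zip(xs, ys, vs)
    ]

# Example: three fake public-art locations (values invented)
art = [
    {"lat": 37.7793, "lon": -122.4193, "value": 10.0},  # Civic Center
    {"lat": 37.7955, "lon": -122.3937, "value": 50.0},  # downtown
    {"lat": 37.6213, "lon": -122.3790, "value": 25.0},  # SFO (the "tail")
]
cubes = records_to_cubes(art)
```

From here, each `(x, y, size)` triple becomes a cube in the 3D model that eventually gets printed.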
I arrived at 9am and introduced myself to Casey Reas, co-founder of Processing, who was leading the hackathon and a super-nice guy. When I was working as a New Media Exhibit Developer at the Exploratorium (2012-13), Processing was the primary tool we used for building installations. Thanks Casey!
I arrived alone and expected a bunch of nerdy 20-somethings. Instead, I ran into some old friends, including Karen Marcelo, who has been generously running dorkbot for 15+ years and has an SRL email address. (coolPoints *= coolPoints)
I sat down at a table with Karen and invited Eric over. Also sitting with us were Jesse Day, a graduate student in Learning, Design and Technology at Stanford and Kristin Henry, artist and computer scientist. The 5 of us were soon to become a team — Team JEKKS…get it?
The folks from GAFFTA (Josette Melchor), swissnex and BCNM took turns presenting slides about possibilities for data canvas projects for 30 minutes. This was followed by another 30 minutes of questions from a curious crowd of 60 people — a lot to ingest.
The night before, we were given a dataset in a .csv format. I’d recommend never, ever looking at datasets just before going to sleep. I dreamt of strings, ints and timestamps.
The data included four Market Street locations, which tracked people, cars, trucks and buses for every minute. There was a lot of material there. How did they track this? Answer: air quality sensors. That’s right, small dips in various emissions could be extrapolated into minute-by-minute estimates of what kind of traffic was happening at each place. This is an amazing model — though I still wonder about its accuracy.
This was a competition and, as such, we would be judged on three criteria: Audience Engagement: Would a general audience be attracted to the installation? Would they stop and watch/interact?
Legibility of Data: Can people understand the data and make sense of the specifics?
Actionability: Are people spurred to action, presumably to change their mode of transport to reduce emissions?
At 10:30, we started. I don’t have any pictures of us working. They’re pretty much exactly what you’d imagine — a bunch of dorks huddled around a table with laptops.
After introducing ourselves and talking about our individual strengths, it was apparent we had a strong group of thinkers. We tossed around various ideas for about 30 minutes and then decided to do individual experiments for about an hour.
We decided to focus our data investigation on time rather than location. The 4 locations would somehow be on the same timeline for visitors to see. Kristin dove into Python and began transcoding the data sets into a more usable format. She translated them into graphics.
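I don’t have Kristin’s actual script, but the transcoding step looked roughly like this sketch: pivot the per-minute rows from the four stations onto one shared timeline so every location can be compared minute by minute. The column names and sample rows here are invented — the real .csv used its own schema.

```python
import csv
from collections import defaultdict
from io import StringIO

# Hypothetical sample of the hackathon CSV -- the real file had its own
# column names; the reshaping idea is the same.
RAW = """timestamp,location,people,cars,trucks,buses
2014-01-18T10:00,market_1,12,30,2,1
2014-01-18T10:00,market_2,8,22,1,0
2014-01-18T10:01,market_1,15,28,3,1
2014-01-18T10:01,market_2,9,25,0,2
"""

def pivot_by_minute(csv_text):
    """Pivot per-minute rows into {timestamp: {location: counts}}.

    Puts all stations on one shared timeline, so a visualization can
    scrub through time and show every location at once.
    """
    timeline = defaultdict(dict)
    for row in csv.DictReader(StringIO(csv_text)):
        counts = {k: int(row[k]) for k in ("people", "cars", "trucks", "buses")}
        timeline[row["timestamp"]][row["location"]] = counts
    return dict(timeline)

timeline = pivot_by_minute(RAW)
```

Once the data is in this shape, drawing one frame per timestamp is straightforward.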
I played around with a hand-drawn aesthetic, tracing over a map of the downtown area by hand and drawing individual points, angling for something a little more low-tech. I also knew that Eric would devise something precise, neat and clean, so left him with the hard-viz duties.
Karen worked on her own to come up with some circular representations in Processing. As with everyone in a hackathon, people work with the strong toolsets they already have.
Jesse was the only one of us who didn’t start coding right away. Smart man. He was also the one with the conceptual breakthrough, and began coloring bars on the vehicles themselves to represent emissions.
We huddled and decided to focus on representing the emissions as a series of colors. We settled on representing particulates, VOC (body odor), CO, CO2 and EMF (phones, electricity), not sure at the time if they were actually being tracked by the sensors.
More coding. Eric and I tapped into our collective exhibition design/art experience and talked through a compelling interaction model. The two things that people universally enjoy are seeing themselves and controlling timelines. Everyone liked the idea of “seeing yourself” as particulate emissions.
We all hashed out an idea of a 2-monitor installation and consulted with Casey about whether this was permissible (answer = yes). The first would be a real-time data visualization of the various stations. The other monitor would be a mirror which — get this — would do live video-tracking, mapping graphics of buses, cars, trucks and people onto corresponding moving bits in the background. You could also see yourself in the background.
Since it was a hackathon-style proposal, it didn’t have to actually work. Beauty, eh?
2:30pm. 4 hours to make it happen. The rules were: laptops closed at 6:30 and then we all present as a group.
Jesse did the design work. We argued about colors: “too 70s”, “too saturated”, etc. Eric worked on the arduous task of getting the data into a legible data visualization. I worked on the animation, which involved no data translation.
I reused animation code that I’ve used in the Player Two rotoscoping project and for the Tweets in Space video installation. The next few hours were fast-n-furious and not especially “fun”. Eric was down to the wire with the data translation into graphics. At 5:30, I was busy making animated bus, car and truck exhaust farts, which made us all laugh. At 6:30 we were done.
We had two visualizations to show the crowd. Eric’s came out perfectly and was precise and legible. I was thankful that I roped him into our team. (note: video sped up by 4x).
The animation I wrote supplemented the visualization well. It was scrappy and funny, and we knew it would make people in the audience laugh.
Neither Karen nor Kristin was able to make it to our presentation, so only the boys were represented in the pictures.
We were due up towards the end and so had a chance to watch the others before us. Almost everyone else had slide shows (oops!). There were so many ideas floating around, both crazy and conventional. I can’t remember all of them — it’s like reading a book of short stories where you can only recall a handful.
I did notice a few things: a lot of the younger folks took a design approach to their visualizations, starting with well-illustrated concept slides. A few had no code at all, just slides (to their credit, I think the Processing environment wasn’t familiar to everyone). One group made a website with a top-level domain (!), one worked in the Unity game engine, there were many web-based implementations, one was a sound-art piece (low points for legibility, but high for artistic merit) and one had a zombie game. Some presentations were muddled and others were clear.
We gave a solid presentation, led by Jesse, which we called “Particulate Matters” (ba-dum-bum). We started with the “hard” data visualization and ended with the animation, which got a lot of laughs. I felt solid about our work.
The judging took a while. Fortunately, they provided beer! The results were in and we got 2nd place (woo-hoo!) out of about 14 teams. 1st place deserved it — a clean concept, which included accumulated particle emissions with Processing code showing emission-shapes dropping from the sky and accumulating on piles on the ground. The shapes matched the data. Nice work.
We got lots of chocolate as our prize. Yummy!
It turns out that Karen is the geekiest of all of us and in the days after the hackathon, improved her Processing sketch to come up with this cool-looking visualization.
This single purchase seems to have glitched my Amazon preferences. As a straight, white male, I now get recommendations that contradict my “personality profile”. Check these out:
Onto the text itself: I found myself fascinated by Rodriguez’s textual interactions and queer Latina identity, especially since her world of net.interaction happened in a pre-Facebook world of IRC chat rooms (really not that long ago…)
My favorite passage in the book is this one:
Digital discourses, those virtual exchanges we glimpse on the Net, are textual performances: fleeting, transient, ephemeral, already past. Like the text of a play, they leave a trace to which meaning can be assigned, but these traces are haunted by the absence that was the performance itself, its reception, and its emotive power. To write about these online performances already alters their significance; a shift in temporal and spatial context produces a shift in meaning.
I remember the textual performances (as Second Front) we did in Second Life such as “Breaking News” (also not that long ago). The “playbook” for this performance was simply: we go into the Reuters headquarters and use the chat window to shout headlines such as: BREAKING NEWS: AVATARS IN REUTERS NEED ATTENTION!
But now, the performance only exists in writing, and absurd documentation videos like this:
One of the other resident artists at Autodesk suggested a solution: make wooden squares to solidify the joints in the armature. I cut out a variety of squares, each with a slightly different width and height, to account for the kerf of the laser-cutter. I also laser-etched them with their measurements. You can see here where I cut out a groove in the bottom of the armature, 1/8″ deep. The square fits nicely in there. I found that 25/1000″ seems to be the right amount of compensation.
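The kerf test itself is easy to script. This sketch generates the run of test squares — the nominal size, the step between squares, and the count are my hypothetical parameters, not measurements from the actual project; only the 25/1000″ result comes from the post. The laser burns away material, so each drawn square is slightly larger than its target size.

```python
# Sketch of a kerf-compensation test: generate a run of test squares
# around a nominal dimension, stepping the compensation in 5/1000"
# increments. NOMINAL, STEP, and COUNT are assumed values for
# illustration; only the 25/1000" result is from the actual test.

NOMINAL = 1.0   # desired finished edge length, in inches (assumed)
STEP = 0.005    # 5/1000" between test squares (assumed)
COUNT = 9       # number of squares, spanning 0 to 40/1000" (assumed)

def kerf_test_squares(nominal=NOMINAL, step=STEP, count=COUNT):
    """Return (compensation, drawn_size) pairs for a run of test squares.

    Each square is drawn oversized by its compensation amount; after the
    laser removes its kerf, the best-fitting square reveals the right
    compensation value.
    """
    return [(i * step, nominal + i * step) for i in range(count)]

squares = kerf_test_squares()
# In my test, 25/1000" of compensation (index 5 here) was the best fit.
best = squares[5]
```

Etching each square with its compensation value (as described above) makes it trivial to read off the winner after test-fitting.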
I also added squares for the top joints. Using the brad nailer, I adhered the bottom squares to the armature.
Then the top squares, and then the bottom panel of the structure. I built up the structure quickly. The precision of the armature made it easy to align the wood-paneled faces.
This is what it looks like before I put the last panel on.
After the “Digital Fabrication Fail” based on my self-defined Fabrication Challenge, I’ve gotten closer to a more precise solution. After an evening of frustration, while riding my bike home, I realized that an armature for the 3D sculptures would be the solution.
I designed a quick-and-dirty armature in Sketchup (I know, I know…) and exported the faces to Illustrator with an SVG exporter. I then laser-cut the armature pieces and put them together.
I made a few mistakes at first, but after a few tries got these three pieces to easily fit together.
However, even with accounting for the kerf, there is still a lot of play in the structure. You can’t see it in the images, but I can easily wiggle the pieces back and forth.
If I model the tolerances too tightly, then I can’t slide the inner portions of the armature together. It is certainly an improvement, but I’m looking for something that has more precision and is still easy to assemble.
Then, I laser-cut these pieces from a 1/8″ sheet of wood.
And, I also cut out these joints.
Then, using the brad nail gun and glue, I began with the base and built up the structure, using the joints for support. The first level, with the rectangular base, went well. However, when I started assembling the trapezoid sections, I quickly ran into problems. The nail gun pushed the joint blocks away from the wood, and it was difficult to align the joint pieces correctly. I had to redo sections. Although this photo doesn’t entirely capture the first-try failure, you can see the nail holes everywhere and also the gap between the joints. I threw in the towel pretty quickly and went home to sleep on the project, hoping to come up with a better solution.
The fabrication challenge for some of my new sculptures is to devise a way to transform models in 3D screen-space into faceted, painted wood forms. The faceted look is something I first experimented with using papercraft sculptures in the No Matter (2008) project, a collaboration with Victoria Scott.
The problem I had was getting the weird angles to be exact. I don’t have strong woodworking skills and ended up spending a lot of time with Bondo, fixing my mistakes. I’d like to be able to make these on the laser-cutter: no saws, no sanding, and have them look perfect. Stay tuned.
The behind-the-scenes production involved many emails and then a quick video shoot. Phoebe (the videographer) interviewed me in the conference room at Autodesk. We had about 25 minutes to shoot the interview portion of the video. She filled me in on her intentions for the piece and asked me to talk about a few general topics related to 3D printing.
Fortunately, over the years, I’ve become very comfortable with my voice and image. She also did a great job of making me look smart. I explained my new “Data Crystals” project, which is in the research phase. I am looking at open data sets provided by San Francisco Open Data Portal and mapping them as 3D sculptural objects. You can see me holding some of the 3D prints on the video.