This GPS data-logging shield from Adafruit arrived yesterday and after a couple of hours of code-wrestling, I was able to capture the latitude and longitude to a CSV data file.
This is me walking from my studio at SFAI to my bedroom. The GPS signal at this range (about 100m) fluctuates greatly, but I like the odd compositional results. I did the plotting in OpenFrameworks, my tool of choice for displaying data that will later be transformed into sculptural results.
The second one is me driving in the car for a distance of about 2km. The tracks are much smoother. If you look closely, you can see where I stopped at the various traffic lights.
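For the curious, here's roughly what that plotting step looks like, sketched in Python with matplotlib rather than my actual OpenFrameworks code. The file and column names ("gps_log.csv", "lat", "lon") are assumptions, not the shield's actual log format:

```python
# A minimal sketch: plot a logged GPS track from a CSV file.
# Column names ("lat", "lon") are assumptions; the real log format may differ.
import csv
import matplotlib.pyplot as plt

lats, lons = [], []
with open("gps_log.csv") as f:
    for row in csv.DictReader(f):
        lats.append(float(row["lat"]))
        lons.append(float(row["lon"]))

plt.plot(lons, lats, marker=".", linewidth=0.5)  # x = longitude, y = latitude
plt.axis("equal")  # keep the track's proportions roughly true
plt.title("GPS track")
plt.show()
```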
Now, GPS tracking alone isn’t super-compelling, and there are many mapping apps that will do this for you. But as soon as I can attach water-sensor readings to each latitude/longitude point, it can transform into something much more interesting: the data becomes multi-dimensional.
EquityBot exists entirely as a networked art or “net art” project, meaning that it lives in the “cloud” and has no physical form. For those of you who are Twitter users, you can follow it on Twitter: @equitybot
What is EquityBot? Many people have asked me that question.
EquityBot is a stock-trading algorithm that “invests” in emotions such as anger, joy, disgust and amazement. It relies on a classification system of twenty-four emotions developed by the psychologist and scholar Robert Plutchik.
How it works
During stock market hours, EquityBot continually tracks worldwide emotions on Twitter to gauge how people are feeling. In the simple data visualization below, which is generated automatically by EquityBot, the larger circles indicate the more prominent emotions that people are Tweeting about.
At this point in time, just 1 hour after the stock market opened on October 28th, people were expressing emotions of disgust, interest and fear more prominently than others. During the course of the day, the emotions contained in Tweets continually shift in response to world events and many other unknown factors.
EquityBot then uses various statistical correlation equations to match patterns in the changes in emotions on Twitter against fluctuations in stock prices. The details are thorny, so I’ll skip the boring stuff. I did spend a lot of time with scatterplots, which looked something like this.
Once EquityBot sees a viable pattern, for example that “Google” is consistently correlated to “anger” and that anger is a trending emotion on Twitter, EquityBot will issue a BUY order on the stock.
Conversely, if Google is correlated to anger, and the Tweets about anger are rapidly going down, EquityBot will issue a SELL order on the stock.
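For the technically curious, here's a minimal sketch of that decision logic in Python. The Pearson formula as the correlation measure, the 0.7 threshold and the toy input series are all illustrative assumptions, not EquityBot's actual parameters:

```python
# Illustrative sketch of EquityBot's decision logic: correlate an emotion's
# tweet volume with a stock's prices, then trade on the emotion's trend.
# The threshold and input series are assumptions, not the real values.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def decide(emotion_counts, stock_prices, threshold=0.7):
    """Both arguments are time-aligned lists of floats."""
    r = pearson(emotion_counts, stock_prices)
    if abs(r) < threshold:
        return "HOLD"  # no viable pattern between this emotion and this stock
    emotion_rising = emotion_counts[-1] > emotion_counts[0]
    # Positive correlation + rising emotion implies the stock should rise.
    return "BUY" if (r > 0) == emotion_rising else "SELL"

# Anger tweets rising, and the stock historically rose with them: BUY.
print(decide([120, 180, 260, 400], [535.0, 538.2, 544.9, 551.3]))
```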
EquityBot runs a simulated investment account, seeded with $100,000 of imaginary money.
In my first few days of testing, EquityBot “lost” nearly $2000. This is why I’m not using real money!
Disclaimer: EquityBot is not a licensed financial advisor, so please don’t follow its stock investment patterns.
The project treats human feelings as tradable commodities. It will track how “profitable” different emotions are over the course of months. As a social commentary, I propose a future scenario in which just about anything can be traded, including that which is ultimately human: the very emotions that separate us from machines.
If a computer cannot be emotional, at the very least it can broker trades of emotions on a stock exchange.
As a networked artwork, EquityBot generates these simple data visualizations autonomously (they will get better, I promise).
Its Twitter account (@equitybot) serves as a performance vehicle, where the artwork “lives”. All of these visualizations are also interactive on the EquityBot website: equitybot.org.
I don’t know if there is a correlation between emotions in Tweets and stock prices. No one does. I am working with the hypothesis that there is some sort of pattern involved. We will see over time. The project goes “live” on October 29th, 2014, which is the day of the opening of the Impakt Festival and I will let the first experiment run for 3 months to see what happens.
Feedback is always appreciated, you can find me, Scott Kildall, here at: @kildall.
It’s been a busy couple of weeks working on the EquityBot project, which will be ready for the upcoming Impakt Festival. Well, at least a functional prototype of this ongoing research project will be online for public consumption.
EquityBot is now autonomous and tweets images of data visualizations on its own. I’m constantly surprised, and a bit nervous, about what it Tweets.
Using code from Jim Vallandingham, in just one evening I created dynamically generated bubble maps of Twitter sentiments as they arrive from EquityBot’s own sentiment-analysis engine.
I mapped the colors directly from the Plutchik wheel of emotions, which is why they are still a little wonky (the emotion of Grief, for instance, is unreadable). This will be fixed.
I did some screen captures and put them on my Facebook and Twitter feeds. I soon discovered that people were far more interested in images of the data visualizations than in plain text describing the emotions.
I ended up using PhantomJS, the Selenium web driver and my own Python management code to solve the problem. The biggest hurdle was getting Google webfonts to render properly. Trust me, you don’t want to know the details.
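Stripped of the management code and error handling, the capture step boils down to something like this. It's a sketch against the 2014-era Selenium bindings; the URL and the wait time are placeholders:

```python
# Sketch of the capture step: render the D3 page headlessly with PhantomJS
# and save it as a PNG to tweet. URL and wait time are placeholders.
import time
from selenium import webdriver

driver = webdriver.PhantomJS()            # PhantomJS binary must be on the PATH
driver.set_window_size(1024, 768)
driver.get("http://equitybot.org/visualization")   # placeholder URL
time.sleep(3)                             # crude wait for D3 and webfonts to render
driver.save_screenshot("sentiment_bubbles.png")
driver.quit()
```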
But I’m happy with the results. EquityBot will now move to other Tweetable data-visualizations such as its own simulated bank account, stock-correlations and sentiments-stock pairings.
For my latest project, EquityBot, I’ve been researching, building and writing code during my two-month residency at Impakt Works in Utrecht (Netherlands).
EquityBot is going through its final testing cycles before a public announcement on Twitter. For those of you who are Bot fans, I’ll go ahead and slip you EquityBot’s Twitter feed: https://twitter.com/equitybot
The initial code-work has involved configuration of a back-end server that does many things, including “capturing” Twitter sentiments, tracking fluctuations in the stock market and running correlation algorithms.
I know, I know, it sounds boring. Often it is. After all, the result of many hours of work is a series of well-formatted JSON files. Blah.
But it’s like building city infrastructure: now that I have the EquityBot Server more or less working, it’s been incredibly reliable, cheap and customizable. It can act as a Twitterbot, a data server and a data visualization engine using D3.
This type of programming is yet another skill in my Creative Coding arsenal. It consists mostly of Python code that lives on a Linode server, a low-cost alternative to hosts like HostGator or GoDaddy, which incur high monthly costs. And there’s a geeky sense of satisfaction in creating a well-oiled software engine.
The EquityBot Server looks like a jumble of Python and PHP scripts. I cannot possibly explain it in excruciating detail, nor would anyone in their right mind want to wade through it all.
Instead, I wrote up a blueprint for this project.
For those of you who are familiar with my art projects, this style of blueprint may look familiar. I adapted this design from my 2049 Series, which are laser-etched and painted blueprints of imaginary devices. I made these while an artist-in-residence at Recology San Francisco in 2011.
In the hours just before I finished my presentation, I also managed to get Life of Poo working. What is it? Well, an interactive map of where your poo goes based on the sewer data that I used for this project.
Type in a San Francisco address, and this will begin an animated poo journey down the sewer map to the wastewater treatment plant.
Not all of the flushes work as you’d expect. There are still glitches and bugs in the code. If you type in “16th & Mission”, the poo just sits there. Hmmm.
Why the bugs? I have some ideas (see below), but I really like the chaotic results, so I’ll keep them for now.
I think the erratic behavior comes from a utility I wrote, which does some complex node-trimming but doesn’t take gravity into account in its flow diagrams. The sewer data has about 30,000 valid data points, and Leaflet can only handle about 1500 or so before it takes forever to load and refresh.
The utility parses the node data tree and recursively prunes it to a more reasonable number, combining upstream and downstream nodes. In an overflow situation, technically speaking, there are nodes where waste might be directed away from the wastewater treatment plant.
However, my code isn’t smart enough to determine which are overflow pipes and which are pipes to the treatment plants, so the node-flow doesn’t work properly.
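For the curious, the pruning idea works roughly like this. It's a simplified Python sketch under assumed data structures; the real utility also merges node geometry and flow direction:

```python
# Simplified sketch of the pruning idea: repeatedly splice out "pass-through"
# nodes (one pipe in, one pipe out) until the network is small enough for
# Leaflet. Data structures are assumptions about the sewer graph's format.

def prune(nodes, downstream, target=1500):
    """nodes: set of node ids; downstream: dict mapping node id -> list of ids."""
    upstream = {}
    for n, outs in downstream.items():
        for o in outs:
            upstream.setdefault(o, []).append(n)

    changed = True
    while len(nodes) > target and changed:
        changed = False
        for n in list(nodes):
            ins, outs = upstream.get(n, []), downstream.get(n, [])
            if len(ins) == 1 and len(outs) == 1 and ins[0] != outs[0]:
                a, b = ins[0], outs[0]
                # splice: a -> n -> b becomes a -> b
                downstream[a] = [b if x == n else x for x in downstream[a]]
                upstream[b] = [a if x == n else x for x in upstream[b]]
                nodes.discard(n)
                downstream.pop(n, None)
                upstream.pop(n, None)
                changed = True
    return nodes, downstream
```

Nothing in this sketch knows which pipes are overflow outfalls and which lead to the treatment plant, which is exactly the bug described above.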
In case you’re still reading, here’s an illustration of a typical combined system, which shows how the pipes might look. Sewer outfall doesn’t happen very often, but when your model ignores gravity, it sure will.
The 3D print of the sewer, the one that uses the exact same data set as Life of Poo, looks like this.
My exciting news is that this fall I will be an artist-in-residence at Impakt Works, which is in Utrecht, the Netherlands. The same organization puts on the Impakt Festival every year, which is a media arts festival that has been happening since 1988. My residency is from Sept 15-Nov 15 and coincides with the festival at the end of October.
Utrecht is a 30-minute train ride from Amsterdam and 45 minutes from Rotterdam. By all accounts it is a small, beautiful canal city with medieval origins, and it hosts the largest university in the Netherlands.
Of course, I’m thrilled. This is my first European art residency and I’ll have a chance to reconnect with some friends who live in the region as well as make many new connections.
Like many of my projects this year, this will involve heavy coding, data-visualization and a sculptural component.
At this point, I’m in the research and pre-production phase. While configuring back-end server code, I’m also gathering reading materials about capital and algorithms for the upcoming plane rides, train rides and rainy Netherland evenings.
Here is the project description:
EquityBot is a stock-trading algorithm that explores the connections between collective emotions on social media and financial speculation. Using custom algorithms Equitybot correlates group sentiments expressed on Twitter with fluctuations in related stocks, distilling trends in worldwide moods into financial predictions which it then issues through its own Twitter feed. By re-inserting its results into the same social media system it draws upon, Equitybot elaborates on the ways in which digital networks can enchain complex systems of affect and decision making to produce unpredictable and volatile feedback loops between human and non-human actors.
Currently, autonomous trading algorithms comprise the large majority of stock trades. These analytic engines are normally sequestered by private investment companies operating with billions of dollars. EquityBot reworks this system, imagining what it might be like if this technological attention were directed toward the public good instead. How would the transparent, public sharing of powerful financial tools affect the way the stock market works for the average investor?
I’m imagining a digital fabrication portion of EquityBot, which will be the more experimental part of the project and will involve 3D-printed joinery. I’ll be collaborating with my longtime friend and colleague, Michael Ang on the technology — he’s already been developing a related polygon construction kit — as well as doing some idea-generation together.
“Mang” lives in Berlin, which is a relatively short train ride, so I’m planning to make a trip where we can work together in person and get inspired by some of the German architecture.
My new 3D printer, a Printrbot Simple Metal, will accompany me to Europe. This small, relatively portable machine produces decent-quality results, at least for 3D joints, which will be hidden anyway.
Finding water data is harder than I thought. Like detective Gittes in the movie Chinatown, I’m poking my nose around and asking everyone about water. Instead of murder and slimy deals, I am scouring the internet and working with city government. I’ve spent many hours sleuthing and learning about the water system in our city.
In San Francisco, where this story takes place, we have three primary water systems. Here’s an overview:
The Sewer System is owned and operated by the SFPUC; the DPW provides certain engineering services. This is a combined stormwater and wastewater system. Yup, that’s right: the water you flush down the toilet goes into the same pipes as the rainwater. Everything gets piped to a state-of-the-art wastewater treatment plant. Amazingly, the sewer pipes are fed almost entirely by gravity, taking advantage of the natural landscape of the city.
The Auxiliary Water Supply System (AWSS) was built in 1908, just after the 1906 San Francisco Earthquake. It is an entire water system dedicated solely to firefighting. 80% of the city was destroyed not by the earthquake itself but by the fires that ravaged it, largely because the water mains collapsed. Just afterwards, the city began construction on this separate infrastructure for combating future fires. It consists of reservoirs that feed an entire network of pipes to high-pressure fire hydrants, and it also includes approximately 170 underground cisterns at various intersections in the city. This incredible separate water system is unique to San Francisco.
The Potable Water System, a.k.a. drinking water, is the water we get from our faucets and showers. It comes from Hetch Hetchy, a historic valley that is also a reservoir and water system constructed from 1913 to 1938 to provide water to San Francisco. This history is well-documented, but what I know little about is how the actual drinking water gets piped into San Francisco homes. Also, San Francisco’s water is among the safest in the world, so you can drink directly from your tap.
Given all of this, where is the story? This is the question that I asked folks at Stamen, Autodesk and Gray Area during a hyper-productive brainstorming session last week. Here’s the whiteboard with the notes. The takeaways, as folks call them, are below, and here I’m going to get nitty-gritty into process.
(whiteboard brainstorming session with Stamen)
(1) In my original proposal, I had envisioned a table-top version of the entire water infrastructure: pipes, cisterns, manhole chambers and reservoirs as a large-scale sculpture, printed in panels. The Autodesk Creative Projects team kindly pointed out that this is unfeasible, and I quickly realized the truth of it: 3D prints are expensive, time-consuming to clean and fragile. Divide the sculptural part of the project into several small parts.
(2) People are interested in the sewer system. Someone said, “I want to know if you take a dump at Nob Hill, where does the poop go?” It’s universal. Everyone poops, even the Queen of England and even Batman. It’s funny, it’s gross, it’s entirely human. This could be accessible to everyone.
(4) Think about focusing on making a beautiful and informative 3D map / data visualization of just one square mile of San Francisco infrastructure. Hone in on one area of the city.
(5) Complex systems can be modeled virtually. Over the last couple of weeks, I’ve been running code tests, talking to many people in city government and building out an entire water modeling system in C++ using OpenFrameworks. It’s been slow, deliberate and arduous. Balance the physical models with a complex virtual one.
I’m still not sure exactly where this project is heading, which is to be expected at this stage. For now, I’m mining data and acting as a detective. In the meantime, here is the trailer for Chinatown, which gives away the entire plot in 3 minutes.
This project builds on my Data Crystals sculptures, which transform various public datasets algorithmically into 3D-printable art objects. For that artwork, I used Processing with the Modelbuilder libraries to generate STL files. It was a fairly easy coding solution, but I ran into performance issues along the way.
Processing tends to choke when managing 30,000 simple 3D cubes; my clustering algorithms took hours to run. Because it runs on the Java Virtual Machine rather than compiling to native machine code, it carries layers of inefficiency.
I bit the coding bullet and this week migrated my code to OpenFrameworks (an open source C++ environment). I’ve used OF before, but never with 3D work. There are still lots of gaps in the libraries, specifically around STL exporting, but I’ve had some initial success, woo-hoo!
Here are all the manholes, the technical term being “sewer nodes”, mapped into 3D space using GIS lat/lon and elevation coordinates. The clear indicator that this is San Francisco, and not Wisconsin (which this mapping vaguely resembles), is the swath of empty space that is Golden Gate Park.
What hooked me was that “a-ha” moment when the 3D points rendered properly on my screen. I was on a plane flight home from Seattle and involuntarily emitted an audible yelp. Check out the 3D mapping. There’s a density of nodes along Twin Peaks, and I accentuated the z-values to make San Francisco look even more hilly and to better show the locations of the sewer chambers.
Sewer nodes are just the start. I don’t have the connecting pipes in there just yet, not to mention the cisterns and other goodies of the SF water infrastructure.
Of course, I want to 3D print this. By increasing the node size (the cubic dimensions of each manhole location), I was able to generate a cohesive, 3D-printable structure. This is the Meshlab export with my custom-modified STL export code. I never thought I’d get this deep into 3D coding, but now I know all sorts of details, like triangle winding and the right-hand rule for STL export. And here is the 3D print of the San Francisco terrain, like the Data Crystals, with many intersecting cubes.
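To give a flavor of that low-level detail, here's what writing one triangle of an ASCII STL file looks like, sketched in Python (my actual exporter is custom C++ inside OpenFrameworks, so treat this as an illustration of the format, not my code):

```python
# Illustrative sketch of ASCII STL output. Vertices are wound counter-clockwise
# when viewed from outside, so by the right-hand rule the facet normal points
# outward. My real exporter is C++ inside OpenFrameworks.

def write_facet(f, v0, v1, v2):
    e1 = [b - a for a, b in zip(v0, v1)]   # edge v0 -> v1
    e2 = [b - a for a, b in zip(v0, v2)]   # edge v0 -> v2
    # normal = e1 x e2, following the winding order (right-hand rule)
    n = (e1[1] * e2[2] - e1[2] * e2[1],
         e1[2] * e2[0] - e1[0] * e2[2],
         e1[0] * e2[1] - e1[1] * e2[0])
    length = (n[0] ** 2 + n[1] ** 2 + n[2] ** 2) ** 0.5 or 1.0
    n = tuple(c / length for c in n)       # STL expects a unit normal
    f.write("facet normal %f %f %f\n  outer loop\n" % n)
    for v in (v0, v1, v2):
        f.write("    vertex %f %f %f\n" % tuple(v))
    f.write("  endloop\nendfacet\n")

with open("test.stl", "w") as f:
    f.write("solid sketch\n")
    write_facet(f, (0, 0, 0), (1, 0, 0), (0, 1, 0))  # CCW from +z: normal (0,0,1)
    f.write("endsolid sketch\n")
```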
It doesn’t have the aesthetic crispness of the Data Crystals project, but this is just a test print — very much a work-in-progress.
Along with 3 other new media artists and creative coding experts, I was recently selected to be a Creative Code Fellow for 2014 — a project pioneered by Gray Area (formerly referred to as GAFFTA and now in a new location in the Mission District).
Each of us is paired with a partnering studio, which provides a space and creative direction for our proposed project. The studio that I’m pleased to be working with is Stamen Design, a leader in the field of aesthetics, mapping and data-visualization.
I’ll be also continuing my residency work at Autodesk at Pier 9, which will be providing support for this project as well.
My proposed project is called “Water Works” — a 3D-printed data visualization of San Francisco’s water system infrastructure, along with some sort of web component.
Creative Code Fellowship Application: Scott Kildall
Project Proposal (250 word limit)
My proposed project “Water Works” is a 3D data visualization of the complex network of pipes, aqueducts and cisterns that control the flow of water into our homes and out of our toilets. What lies beneath our feet is a unique combined wastewater system — where stormwater mixes with sewer lines and travels to a waste treatment plant, using gravitational energy from the San Francisco hills.
This dynamic flow is the circulatory system of the organism that is San Francisco. As we are impacted by climate change, which escalates drought and severe rainstorms, combined with population growth, how we obtain our water and dispose of it is critical to the lifeblood of this city.
Partnering with Autodesk, which will provide materials and shop support, I will write code, which will generate 3D prints from municipal GIS data. I imagine ghost-like underground 3D landscapes with thousands of threads of water — essentially flow data — interconnected to larger cisterns and aqueducts. The highly retinal work will invite viewers to explore the infrastructure the city provides. The end result might be panels that snap together on a tabletop for viewers to circumnavigate and explore.
The GIS data is available from San Francisco, though not online, and I’ve already obtained cooperation from SFDPW, which will provide some of the infrastructure data necessary to realize this project.
While my focus will be on the physical portion of this project, I will also build an interactive web-based version from the 3D data, making this a hybrid screen-physical project.
Why are you interested in participating in this fellowship? (150 word limit)
The fellowship would give me the funding, visibility and opportunity to work under the auspices of two progressive organizations: Gray Area and Stamen Design. I would expand my knowledge, serve the community and increase my artistic potential by working with members of these two groups, both of which have a progressive vision for art and design in my longtime home of San Francisco.
Specifically, I wish to further integrate 3D printing into the data visualization conversation. With the expertise of Stamen, I hope to evolve my visualization work at Autodesk. The 3D-printing technology makes possible what has hitherto been impossible to create and has enormous possibilities to materialize the imaginary.
What experience makes this a good fit for you? (150 word limit)
I have deep experience in producing both screen-based and physical data visualizations. While at the Exploratorium, I worked on many such exhibits for a general audience.
One example is a touch-screen exhibit called “Seasons of Plankton”, which shows how plankton species in the Bay change over the year, reflecting a diverse ecosystem of microscopic organisms. I collaborated with scientists and visitor evaluators to determine the optimal way to tell this story. I performed all of the coding work and media production for this successful piece.
While at Autodesk, my focus has been creating 3D data visualizations with my custom code that transforms public data sets into “Data Crystals” (these are the submitted images). This exploration favors aesthetics over legibility. I hope to build upon this work and create physical forms that help people see the dynamics of a complex urban water system and invite curiosity through beauty.
@SelfiesBot began tweeting last week and already the results have surprised me.
Selfies Bot is a portable sculpture that takes selfies and then tweets the images. With custom electronics and a long arm holding a camera pointed back at itself, it is an art object that can travel to parks, the beach and different cities.
I quickly learned that people want to pose with it, even in my early versions with a cardboard head (used to prove that the software works).
Last week, in an evening of experimentation, I added a text component, where each Twitter pic gets accompanied by text that I scrape from Tweets with the #selfie hashtag.
This produces delightful results, like spinning a roulette wheel: you don’t know what the text will be until the Twitter website publishes the tweet. The text + image gives an entirely new dimension to the project. The textual element acts as a mirror into the phenomenon of the self-portrait, reflecting the larger culture of the #selfie.
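The scraping step boils down to something like this. It's a sketch using the 2014-era tweepy library; the credentials are placeholders, and the real bot does more filtering than this:

```python
# Sketch of the caption scrape with the 2014-era tweepy API. Credentials are
# placeholders, and the real bot filters more aggressively (links, mentions).
import random
import tweepy

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth)

candidates = [t.text for t in api.search(q="#selfie", count=100)
              if not t.text.startswith("RT")]   # skip retweets
caption = random.choice(candidates)             # the roulette-wheel spin
print(caption)
```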
As resident artists at Autodesk, we are supposed to write many Instructables. Often, the temptation is to make your projects and then write the how-to guides in haste.
Since I broke my collarbone, I really can’t make anything physical, but I can type one-handed. Besides the daily naps, the doctors’ appointments and slowly doing one-handed chores like sorting laundry, I have to keep my mind active (I’m still too vulnerable to go outside on my own).
Here is a new one: an Introduction to Git and GitHub. I originally found this source-control system to be weird and confusing, but now I’m 100% down with it. Feel free to add comments on the guide, as I’m a relative Git/GitHub nOOb and also have a thick skin for scathing Linux criticism.
And here is my post-surgery selfie from yesterday, when they put the pins in my collarbone. The doctors told me it went well. All I know is that I woke up feeling groggy with extra bandages on my shoulder. That’s how easy it is these days.
I’ve fallen a bit behind in my documentation and have a backlog of great stuff that I’ve been 3D-printing. These are a few of my early tests with my new project: Data Crystals. I am using various data sources, which I algorithmically transform into 3D sculptures.
The source for these is the San Francisco Open Data Portal — which provides datasets about all sorts of interesting things such as housing permit data, locations of parking meters and more.
My custom algorithms transform this data into 3D sculptures. Legibility is still an issue, but initial tests show the wonderful work that algorithms can do.
This is a transformation of San Francisco Crime Data. It turns out that crime happens everywhere, so the data is in a giant block.
After running some crude data transformations, I “mined” this crystal: the location of San Francisco public art. Most public art is located in the downtown and city hall area. But there is a tail, which represents the San Francisco Airport.
More experiments: this is a test, based on the SF public art, where I played with varying the size of the cubes (this would be a suggested value of artwork, which I don’t have data for…yet). Now, I have a 4th axis for the data. Plus, there is a distinct aesthetic appeal of stacking differently-sized blocks as opposed to uniform ones.
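The size mapping itself is simple. Here it is as a Python sketch (my real code is Processing with Modelbuilder, and the value ranges here are illustrative):

```python
# Sketch of the fourth axis: linearly map a data value onto a cube's edge
# length. The real code is Processing + Modelbuilder; ranges are illustrative.
def cube_size(value, lo, hi, min_size=1.0, max_size=4.0):
    t = (value - lo) / (hi - lo) if hi > lo else 0.5
    return min_size + t * (max_size - min_size)

print(cube_size(250000, 0, 1000000))  # a $250k artwork -> edge length 1.75
```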
Stay tuned, there is more to come!
I arrived at 9am and introduced myself to Casey Reas, co-founder of Processing, who was leading the hackathon and a super-nice guy. When I was working as a New Media Exhibit Developer at the Exploratorium (2012-13), Processing was the primary tool we used for building installations. Thanks Casey!
I arrived alone and expected a bunch of nerdy 20-somethings. Instead, I ran into some old friends, including Karen Marcelo, who has been generously running dorkbot for 15+ years and has an SRL email address. (coolPoints *= coolPoints)
I sat down at a table with Karen and invited Eric over. Also sitting with us were Jesse Day, a graduate student in Learning, Design and Technology at Stanford and Kristin Henry, artist and computer scientist. The 5 of us were soon to become a team — Team JEKKS…get it?
The folks from GAFFTA (Josette Melchor), swissnex and BCNM took turns presenting slides about possibilities for data canvas projects for 30 minutes. This was followed by another 30 minutes of questions from a curious crowd of 60 people, which meant a lot to ingest.
The night before, we were given a dataset in a .csv format. I’d recommend never, ever looking at datasets just before going to sleep. I dreamt of strings, ints and timestamps.
The data included four Market Street locations, which tracked people, cars, trucks and buses for every minute of time. There was a lot of material there. How did they track this? Answer: air quality sensors. That’s right, small dips and spikes in various emissions could be extrapolated into minute-by-minute estimates of what kind of traffic was happening at each place. This is an amazing model, though I still wonder about its accuracy.
This was a competition and, as such, we would be judged on three criteria. Audience Engagement: Would a general audience be attracted to the installation? Would they stop and watch/interact?
Legibility of Data: Can people understand the data and make sense of the specifics?
Actionability: Are people spurred to action, presumably to change their mode of transport to reduce emissions?
At 10:30, we started. I don’t have any pictures of us working. They’re pretty much exactly what you’d imagine — a bunch of dorks huddled around a table with laptops.
After introducing ourselves and talking about our individual strengths, it was apparent we had a strong group of thinkers. We tossed around various ideas for about 30 minutes and then decided to do individual experiments for about an hour.
We decided to focus our data investigation on time rather than location. The 4 locations would somehow be on the same timeline for visitors to see. Kristin dove into Python and began transcoding the data sets into a more usable format. She translated them into graphics.
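I didn't keep Kristin's scripts, but that kind of transcoding step looks roughly like this in Python (the file and column names are guesses at the dataset's format, not the actual schema):

```python
# Sketch of the transcoding step: collapse the per-minute sensor CSV into
# hourly vehicle counts per location. File and column names are assumptions.
import csv
from collections import defaultdict

totals = defaultdict(lambda: defaultdict(int))   # location -> hour -> count
with open("market_street.csv") as f:
    for row in csv.DictReader(f):
        hour = row["timestamp"][:13]             # e.g. "2014-02-22T14"
        for kind in ("people", "cars", "trucks", "buses"):
            totals[row["location"]][hour] += int(row[kind])

for loc, by_hour in sorted(totals.items()):
    for hour in sorted(by_hour):
        print(loc, hour, by_hour[hour])
```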
I played around with a hand-drawn aesthetic, tracing over a map of the downtown area by hand and drawing individual points, angling for something a little more low-tech. I also knew that Eric would devise something precise, neat and clean, so left him with the hard-viz duties.
Karen worked on her own to come up with some circular representations in Processing. As with everyone in a hackathon, people work with the strong toolsets they already have.
Jesse was the only one of us who didn’t start coding right away. Smart man. He was also the one with the conceptual breakthrough, and began coloring bars on the vehicles themselves to represent emissions.
We huddled and decided to focus on representing the emissions as a series of colors. We settled on representing particulates, VOC (body odor), CO, CO2 and EMF (phones, electricity), though we weren’t sure at the time whether these were actually being tracked by the sensors.
More coding. Eric and I tapped into our collective exhibition design/art design experience and talked through a compelling interaction model. The two things that people universally enjoy are seeing themselves and controlling timelines. Everyone liked the idea of “seeing yourself” as particulate emissions.
We all hashed out an idea for a 2-monitor installation and consulted with Casey about whether this was permissible (answer = yes). The first monitor would be a real-time data visualization of the various stations. The other would be a mirror that (get this) would do live video-tracking and map graphics of buses, cars, trucks and people onto the corresponding moving bits in the background. Additionally, you could see yourself in the background.
Since it was a hackathon-style proposal, it didn’t have to actually work. Beauty, eh?
2:30pm. 4 hours to make it happen. The rules were: laptops closed at 6:30 and then we all present as a group.
Jesse did the design work. We argued about colors: “too 70s”, “too saturated”, etc. Eric worked on the arduous task of getting the data into a legible data visualization. I worked on the animation, which involved no data translation.
I reused animation code that I’ve used in the Player Two rotoscoping project and for the Tweets in Space video installation. The next few hours were fast-n-furious and not especially “fun”. Eric was down to the wire with the data translation into graphics. At 5:30, I was busy making animated bus, car and truck exhaust farts, which made us all laugh. At 6:30 we were done.
We had two visualizations to show the crowd. Eric’s came out perfectly and was precise and legible. I was thankful that I roped him into our team. (note: video sped up by 4x).
The animation I wrote supplemented the visualization well. It was scrappy and funny, and we knew it would make people in the audience laugh.
Neither Karen nor Kristin was able to make it for our presentation, so only the boys were represented in the pictures.
We were due up towards the end and so had a chance to watch the others before us. Almost everyone else had slide shows (oops!). There were so many ideas floating around, both crazy and conventional. I can’t remember all of them; it’s like reading a book of short stories where you can only recall a handful.
I did notice a few things: a lot of the younger folks took a design approach to making the visualizations, starting with well-illustrated concept slides. A few didn’t have any code, just the slides (to their credit, I think the Processing environment wasn’t familiar to everyone). One group made a website with a top-level domain (!), one worked in the Unity game engine, there were many web-based implementations, one was a sound-art piece (low points for legibility, but high for artistic merit) and one had a zombie game. Some presentations were muddled and others were clear.
We gave a solid presentation, led by Jesse, which we called “Particulate Matters” (ba-dum-bum). We started with the “hard” data visualization and ended with the animation, which got a lot of laughs. I felt solid about our work.
The judging took a while. Fortunately, they provided beer! The results came in and we got 2nd place (woo-hoo!) out of about 14 teams. 1st place deserved it: a clean concept of accumulated particle emissions, with Processing code showing emission-shapes dropping from the sky and piling up on the ground. The shapes matched the data. Nice work.
We got lots of chocolate as our prize. Yummy!
It turns out that Karen is the geekiest of all of us and in the days after the hackathon, improved her Processing sketch to come up with this cool-looking visualization.