From Creative Coding

GPS Tracks

I am building water quality sensors which will capture geolocated data. This was my first test with this technology. This is part of my ongoing research at the Santa Fe Water Rights residency (March-April) and for the American Arts Incubator program in Thailand (May-June).

This GPS data-logging shield from Adafruit arrived yesterday and after a couple of hours of code-wrestling, I was able to capture the latitude and longitude to a CSV data file.
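The logging itself happens on the Arduino shield, but once the CSV file is off the SD card, pulling the track back out for plotting is straightforward. Here's a rough Python sketch; the column layout (latitude first, then longitude) is an assumption, not the shield's exact format:

```python
# Hypothetical reader for the logged GPS track; adjust the column
# indices to match the actual CSV layout written by the shield.
import csv

def read_track(path):
    points = []
    with open(path, newline="") as f:
        for row in csv.reader(f):
            try:
                lat, lon = float(row[0]), float(row[1])
            except (ValueError, IndexError):
                continue  # skip headers and incomplete GPS fixes
            points.append((lat, lon))
    return points

track = read_track("gps_log.csv")
print(len(track), "points logged")
```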

This is me walking from my studio at SFAI to the bedroom. The GPS signal at this range (100m) fluctuates greatly, but I like the odd compositional results. I did the plotting in OpenFrameworks, my tool-of-choice for displaying data that will be later transformed into sculptural results.

The second one is me driving in the car for a distance of about 2km. The tracks are much smoother. If you look closely, you can see where I stopped at the various traffic lights.

Now, GPS tracking alone isn’t super-compelling, and there are many mapping apps that will do this for you. But as soon as I can attach water sensor data to latitude/longitude, then it can transform into something much more interesting as the data will become multi-dimensional.

EquityBot goes live!

During my time at Impakt as an artist-in-residence, I have been working on a new project called EquityBot, which is an online commission from Impakt. It fits well into the Soft Machines theme of the festival: where machines integrate with the soft, emotional world.

EquityBot exists entirely as a networked art or “net art” project, meaning that it lives in the “cloud” and has no physical form. For those of you who are Twitter users, you can follow it on Twitter: @equitybot


What is EquityBot? Many people have asked me that question.

EquityBot is a stock-trading algorithm that “invests” in emotions such as anger, joy, disgust and amazement. It relies on a classification system of twenty-four emotions developed by the psychologist and scholar Robert Plutchik.

(Plutchik’s wheel of emotions)

How it works
During stock market hours, EquityBot continually tracks worldwide emotions on Twitter to gauge how people are feeling. In the simple data-visualization below, which is generated automatically by EquityBot, the larger circles indicate the more prominent emotions that people are Tweeting about.

At this point in time, just 1 hour after the stock market opened on October 28th, people were expressing emotions of disgust, interest and fear more prominently than others. During the course of the day, the emotions contained in Tweets continually shift in response to world events and many other unknown factors.
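The gist of the tallying step looks something like this (a simplified Python sketch; the keyword lists are placeholders, not the classifier EquityBot actually uses):

```python
# Match tweet text against keyword lists for each Plutchik emotion
# and count hits; circle sizes in the visualization follow these counts.
from collections import Counter

EMOTION_KEYWORDS = {
    "anger":   ["angry", "furious", "outraged"],
    "joy":     ["happy", "joyful", "delighted"],
    "disgust": ["disgusted", "gross", "revolting"],
    "fear":    ["afraid", "scared", "terrified"],
}

def tally_emotions(tweets):
    counts = Counter()
    for text in tweets:
        lowered = text.lower()
        for emotion, words in EMOTION_KEYWORDS.items():
            if any(word in lowered for word in words):
                counts[emotion] += 1
    return counts

print(tally_emotions(["So outraged right now", "this is gross", "delighted!"]))
```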

EquityBot then uses various statistical correlation equations to find patterns matching changes in emotions on Twitter to fluctuations in stock prices. The details are thorny, so I’ll skip the boring stuff, but much of my time involved working with scatterplots, which looked something like this.

Once EquityBot sees a viable pattern, for example that “Google” is consistently correlated with “anger” and that anger is a trending emotion on Twitter, EquityBot will issue a BUY order on the stock.

Conversely, if Google is correlated to anger, and the Tweets about anger are rapidly going down, EquityBot will issue a SELL order on the stock.
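A toy version of that decision rule, just to make the logic concrete (the correlation threshold and trend test here are placeholders, not EquityBot’s actual parameters):

```python
# Sketch of the buy/sell rule: correlate an emotion's hourly tweet counts
# with a stock's price changes, then trade on the emotion's trend.
from statistics import correlation  # Python 3.10+

def decide(emotion_counts, price_changes, threshold=0.5):
    corr = correlation(emotion_counts, price_changes)
    if corr < threshold:
        return "HOLD"          # no viable pattern between emotion and stock
    trend = emotion_counts[-1] - emotion_counts[0]
    if trend > 0:
        return "BUY"           # correlated stock + emotion trending up
    if trend < 0:
        return "SELL"          # correlated stock + emotion trending down
    return "HOLD"

anger = [12, 15, 20, 26, 31]        # tweets per hour mentioning anger
goog  = [0.1, 0.3, 0.2, 0.5, 0.6]   # hourly percent change in the stock
print(decide(anger, goog))          # -> "BUY" in this toy example
```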

EquityBot runs a simulated investment account, seeded with $100,000 of imaginary money.

In my first few days of testing, EquityBot “lost” nearly $2000. This is why I’m not using real money!

Disclaimer: EquityBot is not a licensed financial advisor, so please don’t follow its stock investment patterns.

The project treats human feelings as tradable commodities. It will track how “profitable” different emotions are over the course of months. As a social commentary, I propose a future scenario in which just about anything can be traded, including that which is ultimately human: the very emotions that separate us from a machine.

If a computer cannot be emotional, at the very least it can broker trades of emotions on a stock exchange.

As a networked artwork, EquityBot generates these simple data visualizations autonomously (they will get better, I promise).

Its Twitter account (@equitybot) serves as a performance vehicle, where the artwork “lives”. Also, all of these visualizations are interactive and available on the EquityBot website: equitybot.org.

I don’t know if there is a correlation between emotions in Tweets and stock prices. No one does. I am working with the hypothesis that there is some sort of pattern involved. We will see over time. The project goes “live” on October 29th, 2014, which is the day of the opening of the Impakt Festival, and I will let the first experiment run for 3 months to see what happens.

Feedback is always appreciated, you can find me, Scott Kildall, here at: @kildall.

 

Data-Visualizing + Tweeting Sentiments

It’s been a busy couple of weeks working on the EquityBot project, which will be ready for the upcoming Impakt Festival. Well, at least a functional prototype of my ongoing research project will be online for public consumption.

The good news is that the Twitter stream is now live. You can follow EquityBot here.

EquityBot now tweets images of data visualizations on its own, fully autonomously. I’m constantly surprised, and a bit nervous, by its Tweets.

At the end of last week, I put together a basic data visualization using D3, which is a powerful Javascript data-visualization tool.

Using code from Jim Vallandingham, in just one evening I created dynamically generated bubble maps of Twitter sentiments as they arrive from EquityBot’s own sentiment analysis engine.

I mapped the colors directly from the Plutchik wheel of emotions, which is why they are still a little wonky; for example, the emotion of Grief is unreadable. This will be fixed.

I did some screen captures and put them on my Facebook and Twitter feeds. I soon discovered that people were far more interested in images of the data visualizations than in text merely describing the emotions.

I was faced with a geeky problem: how to get my Twitterbot to generate images of the data visualizations produced by D3, a front-end Javascript library? I figured it out eventually, after stepping into a few rabbit holes.


I ended up using PhantomJS, the Selenium web driver and my own Python management code to solve the problem. The biggest hurdle was getting Google webfonts to render properly. Trust me, you don’t want to know the details.
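The capture step, minus the webfont pain, boils down to something like this (a sketch assuming an older Selenium release that still ships the PhantomJS driver; the URL and sleep are placeholders):

```python
# Drive headless PhantomJS, load the D3 page, give the SVG and webfonts a
# moment to render, then grab a PNG that the Twitterbot can post.
import time
from selenium import webdriver

driver = webdriver.PhantomJS()        # requires the phantomjs binary on PATH
driver.set_window_size(1024, 768)
driver.get("http://localhost:8000/emotions.html")   # hypothetical local D3 page
time.sleep(3)                         # crude wait for fonts and SVG to settle
driver.save_screenshot("emotions.png")
driver.quit()
```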


 

But I’m happy with the results. EquityBot will now move on to other Tweetable data visualizations, such as its own simulated bank account, stock correlations and sentiment-stock pairings.

Blueprint for EquityBot

For my latest project, EquityBot, I’ve been researching, building and writing code during my 2 month residency at Impakt Works in Utrecht (Netherlands).

EquityBot is going through its final testing cycles before a public announcement on Twitter. For those of you who are Bot fans, I’ll go ahead and slip you EquityBot’s Twitter feed: https://twitter.com/equitybot

The initial code-work has involved configuration of a back-end server that does many things, including “capturing” Twitter sentiments, tracking fluctuations in the stock market and running correlation algorithms.

I know, I know, it sounds boring. Often it is. After all, the result of many hours of work: a series of well-formatted JSON files. Blah.

But it’s like building city infrastructure: now that I have the EquityBot Server more or less working, it’s been incredibly reliable, cheap and customizable. It can act as a Twitterbot, a data server and a data visualization engine using D3.

This type of programming is yet another skill in my Creative Coding arsenal. It consists mostly of Python code that lives on a Linode server, a low-cost alternative to options like HostGator or GoDaddy, which incur high monthly costs. And there’s a geeky sense of satisfaction in creating a well-oiled software engine.

The EquityBot Server looks like a jumble of Python and PHP scripts. I cannot possibly explain it in excruciating detail, nor would anyone in their right mind want to wade through the technical details.

Instead, I wrote up a blueprint for this project.

For those of you who are familiar with my art projects, this style of blueprint may look familiar. I adapted this design from my 2049 Series, which are laser-etched and painted blueprints of imaginary devices. I made these while an artist-in-residence at Recology San Francisco in 2011.


Life of Poo

I’ve been blogging about my Water Works project all summer and after the Creative Code Gray Area presentation on September 10th, the project is done. Phew. Except for some of the residual documentation.

In the hours just before I finished my presentation, I also managed to get Life of Poo working. What is it? Well, it’s an interactive map of where your poo goes, based on the sewer data that I used for this project.

Huh? Try it.


This is the final piece of my web-mapping portion of Water Works and uses Leaflet with animated markers, all in Javascript, which is a new coding tool in my arsenal (I know, late to the party). I learned the basics in the Gray Area Creative Code Immersive class, which was provided as part of the fellowship.

The folks at Stamen Design also helped out and their designer-technicians turned me onto Leaflet as I bumbled my way through Javascript.

How does it work?

On the Life of Poo section of the Water Works website, you enter an address (in San Francisco) such as “Twin Peaks, SF” or “47th & Judah, SF” and then press Flush Toilet.

This will begin an animated poo journey down the sewer map and to the wastewater treatment plant.
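Under the hood, the idea is simple: geocode the address, find the nearest sewer node, then walk the downstream links until you reach the treatment plant, and hand that path to Leaflet for the marker animation. A rough Python sketch with a hypothetical node structure:

```python
# nodes: {node_id: {"lat": ..., "lon": ..., "downstream": node_id or None}}
import math

def nearest_node(lat, lon, nodes):
    # crude nearest-neighbor lookup on the geocoded start point
    return min(nodes, key=lambda n: math.hypot(nodes[n]["lat"] - lat,
                                               nodes[n]["lon"] - lon))

def poo_journey(lat, lon, nodes):
    path = [nearest_node(lat, lon, nodes)]
    seen = set(path)
    nxt = nodes[path[-1]]["downstream"]
    while nxt is not None and nxt not in seen:
        path.append(nxt)
        seen.add(nxt)
        nxt = nodes[nxt]["downstream"]
    return path  # ordered node ids, fed to the Leaflet marker animation
```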

Not all of the flushes work as you’d expect. There are still glitches and bugs in the code. If you type in “16th & Mission”, the poo just sits there. Hmmm.

Why do I have these bugs? I have some ideas (see below), but I really like the chaotic results, so I’ll keep them for now.


 

I think the erratic behavior is happening because of a utility I wrote, which does some complex node-trimming and doesn’t take into account gravity in its flow diagrams. The sewer data has about 30,000 valid data points and Leaflet can only handle about 1500 or so without it taking forever to load and refresh.

The utility I wrote parses the node data tree and recursively prunes it to a more reasonable number, combining upstream and downstream nodes. In an overflow situation, technically speaking, there are nodes where waste might be directed away from the waste-water treatment plant.

However, my code isn’t smart enough to determine which are overflow pipes and which are pipes to the treatment plants, so the node-flow doesn’t work properly.
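For the curious, the pruning idea is roughly this (a simplified Python sketch with a hypothetical data structure; the real utility also merges node attributes and, as noted, ignores gravity and overflow pipes):

```python
# Repeatedly collapse "pass-through" nodes (one pipe in, one pipe out) into
# their downstream neighbor until the graph is small enough for Leaflet.
def prune(nodes, target=1500):
    # nodes: {node_id: {"upstream": [ids], "downstream": id or None}}
    changed = True
    while changed and len(nodes) > target:
        changed = False
        for nid, node in list(nodes.items()):
            down = node["downstream"]
            if down is not None and len(node["upstream"]) == 1:
                up = node["upstream"][0]
                # splice this node out: connect its parent straight to its child
                nodes[up]["downstream"] = down
                nodes[down]["upstream"] = [up if u == nid else u
                                           for u in nodes[down]["upstream"]]
                del nodes[nid]
                changed = True
                if len(nodes) <= target:
                    break
    return nodes
```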

In case you’re still reading, here’s an illustration of a typical combined system that shows how the pipes might look. The sewer outfall doesn’t happen very often, but when your model ignores gravity, it sure will.


The 3D print of the sewer, the one that uses the exact same data set as Life of Poo, looks like this.

(front and top views of the sewer 3D print)

EquityBot @ Impakt

My exciting news is that this fall I will be an artist-in-residence at Impakt Works, which is in Utrecht, the Netherlands. The same organization puts on the Impakt Festival, a media arts festival that has been happening every year since 1988. My residency runs from Sept 15-Nov 15 and coincides with the festival at the end of October.

Utrecht is a 30 minute train ride from Amsterdam and 45 minutes from Rotterdam and by all accounts is a small, beautiful canal city with medieval origins and also hosts the largest university in the Netherlands.

Of course, I’m thrilled. This is my first European art residency and I’ll have a chance to reconnect with some friends who live in the region as well as make many new connections.

The project I’ll be working on is called EquityBot and will premiere at the Impakt Festival in late October as part of their online component. It will have a virtual presence like my Playing Duchamp artwork (a Turbulence commission) and my more recent project, Bot Collective, produced while an artist-in-residence at Autodesk.

Like many of my projects this year, this will involve heavy coding, data-visualization and a sculptural component.


At this point, I’m in the research and pre-production phase. While configuring back-end server code, I’m also gathering reading materials about capital and algorithms for the upcoming plane rides, train rides and rainy Netherland evenings.

Here is the project description:

EquityBot

EquityBot is a stock-trading algorithm that explores the connections between collective emotions on social media and financial speculation. Using custom algorithms Equitybot correlates group sentiments expressed on Twitter with fluctuations in related stocks, distilling trends in worldwide moods into financial predictions which it then issues through its own Twitter feed. By re-inserting its results into the same social media system it draws upon, Equitybot elaborates on the ways in which digital networks can enchain complex systems of affect and decision making to produce unpredictable and volatile feedback loops between human and non-human actors.

Currently, autonomous trading algorithms comprise the large majority of stock trades. These analytic engines are normally sequestered by private investment companies operating with billions of dollars. EquityBot reworks this system, imagining what it might be like if this technological attention was directed towards the public good instead. How would the transparent, public sharing of powerful financial tools affect the way the stock market works for the average investor?

I’m imagining a digital fabrication portion of EquityBot, which will be the more experimental part of the project and will involve 3D-printed joinery. I’ll be collaborating with my longtime friend and colleague Michael Ang on the technology — he’s already been developing a related polygon construction kit — as well as doing some idea-generation together.

“Mang” lives in Berlin, which is a relatively short train ride, so I’m planning to make a trip where we can work together in person and get inspired by some of the German architecture.

My new 3D printer — a Printrbot Simple Metal — will accompany me to Europe. This small, relatively portable machine produces decent quality results, at least for 3D joints, which will be hidden anyways.


Data Miner, Water Detective

This summer, I’m working on a Creative Code Fellowship with Stamen Design, Gray Area and Autodesk. The project is called Water Works, which will map and data-visualize the San Francisco water infrastructure using 3D-printing and the web.

Finding water data is harder than I thought. Like detective Gittes in the movie Chinatown, I’m poking my nose around and asking everyone about water. Instead of murder and slimy deals, I am scouring the internet and working with city government. I’ve spent many hours sleuthing and learning about the water system in our city.

(Jack Nicholson and Faye Dunaway in Chinatown)

In San Francisco, where this story takes place, we have three primary water systems. Here’s an overview:

The Sewer System is owned and operated by the SFPUC. The DPW provides certain engineering services. This is a combined stormwater and wastewater system. Yup, that’s right: the water you flush down the toilet goes into the same pipes as the rainwater. Everything gets piped to a state-of-the-art wastewater treatment plant. Amazingly, the sewer pipes are fed almost entirely by gravity, taking advantage of the natural landscape of the city.

The Auxiliary Water Supply System (AWSS) was built in 1908, just after the 1906 San Francisco Earthquake. It is an entire water system dedicated solely to firefighting. 80% of the city was destroyed not by the earthquake itself, but by the fires that ravaged the city. The fires rampaged through the city mostly because the water mains collapsed. Just afterwards, the city began construction on this separate infrastructure for combating future fires. It consists of reservoirs that feed an entire network of pipes to high-pressure fire hydrants and also includes approximately 170 underground cisterns at various intersections in the city. This incredible separate water system is unique to San Francisco.

The Potable Water System, a.k.a. drinking water, is the water we get from our faucets and showers. It comes from the Hetch Hetchy — a historic valley but also a reservoir and water system constructed from 1913-1938 to provide water to San Francisco. This history is well-documented, but what I know little about is how the actual drinking water gets piped into San Francisco homes. Also, San Francisco’s water is among the safest in the world, so you can drink directly from your tap.

Given all of this, where is the story? This is the question that I asked folks at Stamen, Autodesk and Gray Area during a hyper-productive brainstorming session last week. Here’s the whiteboard with the notes. The takeaways, as folks call them, are below, and here I’m going to get into the nitty-gritty of process.

(whiteboard brainstorming session with Stamen)


(1) In my original proposal, I had envisioned a table-top version of the entire water infrastructure (pipes, cisterns, manhole chambers, reservoirs) as a large-scale sculpture, printed in panels. It was kindly pointed out to me by the Autodesk Creative Projects team that this is unfeasible. I quickly realized the truth of this: 3D prints are expensive, time-consuming to clean and fragile. Divide the sculptural part of the project into several small parts.

(2) People are interested in the sewer system. Someone said, “I want to know if you take a dump at Nob Hill, where does the poop go?” It’s universal. Everyone poops, even the Queen of England and even Batman. It’s funny, it’s gross, it’s entirely human. This could be accessible to everyone.

(3) Making visible the invisible or revealing what’s in plain sight. The cisterns in San Francisco are one example. Those brick circles that you see in various intersections are actually 75,000 gallon underground cisterns. Work on a couple of discrete urban mapping projects.

(4) Think about focusing on making a beautiful and informative 3D map / data-visualization of just 1 square mile of San Francisco infrastructure. Hone in on one area of the city.

(5) Complex systems can be modeled virtually. Over the last couple of weeks, I’ve been running code tests, talking to many people in city government and building out an entire water modeling system in C++ using OpenFrameworks. It’s been slow, deliberate and arduous. Balance the physical models with a complex virtual one.

I’m still not sure exactly where this project is heading, which is to be expected at this stage. For now, I’m mining data and acting as a detective. In the meantime, here is the trailer for Chinatown, which gives away the entire plot in 3 minutes.

 

Mapping Manholes

The last week has been a flurry of coding, as I’m quickly creating a crude but customized data-3D modeling application for Water Works — an art project for my Creative Code Fellowship with Stamen Design, Gray Area and Autodesk.

This project builds on my Data Crystals sculptures, which transform various public datasets algorithmically into 3D-printable art objects. For that artwork, I used Processing with the Modelbuilder library to generate STL files. It was a fairly easy coding solution, but I ran into performance issues along the way.

Processing tends to choke up when managing 30,000 simple 3D cubes, and my clustering algorithms took hours to run. Because it isn’t compiled into machine code and is instead interpreted, it has layers of inefficiency.

I bit the coding bullet and this week migrated my code to OpenFrameworks (an open source C++ environment). I’ve used OF before, but never with 3D work. There are still lots of gaps in the libraries, specifically the STL exporting, but I’ve had some initial success, woo-hoo!

Here are all the manholes, the technical term being “sewer nodes”, mapped into 3D space using GIS lat/lon and elevation coordinates. The clear indicator that this is San Francisco, and not Wisconsin (which this mapping vaguely resembles), is the swath of empty space that is Golden Gate Park.

What hooked me was that “a-ha” moment where 3D points rendered properly on my screen. I was on a plane flight home from Seattle and involuntarily emitted an audible yelp. Check out the 3D mapping. There’s a density of nodes along the Twin Peaks, and I accentuated the z-values to make San Francisco look even more hilly and to understand the location of the sewer chambers even better.
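The mapping itself is not much more than a scale-and-offset (a stripped-down Python sketch; the scale factors are placeholders, not the values in my OpenFrameworks code):

```python
# Normalize lat/lon into a local x/y range and exaggerate the elevation
# so the hills read clearly in the plot and in the eventual print.
def map_nodes(nodes, xy_scale=1000.0, z_exaggeration=3.0):
    # nodes: list of {"lat": ..., "lon": ..., "elevation": ...}
    lat0 = min(n["lat"] for n in nodes)
    lon0 = min(n["lon"] for n in nodes)
    mapped = []
    for n in nodes:
        x = (n["lon"] - lon0) * xy_scale
        y = (n["lat"] - lat0) * xy_scale
        z = n["elevation"] * z_exaggeration
        mapped.append((x, y, z))
    return mapped
```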

Sewer nodes are just the start. I don’t have the connecting pipes in there just yet, not to mention the cisterns and other goodies of the SF water infrastructure.

Of course, I want to 3D print this. By increasing the node size (the cubic dimensions of each manhole location), I was able to generate a cohesive, 3D-printable structure. This is the Meshlab export with my custom-modified STL export code. I never thought I’d get this deep into 3D coding, but now I know all sorts of details, like triangular winding and the right-hand rule for STL export.

And here is the 3D print of the San Francisco terrain, like the Data Crystals, with many intersecting cubes.

It doesn’t have the aesthetic crispness of the Data Crystals project, but this is just a test print — very much a work-in-progress.
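Since I mentioned the STL export: the winding detail boils down to listing each triangle’s vertices counter-clockwise when viewed from outside, so the right-hand rule gives an outward-facing normal. A bare-bones sketch of ASCII STL output (just the top face of one cube, and not my actual export code):

```python
# Write one facet: an outward normal plus three vertices in
# counter-clockwise order when seen from outside the solid.
def facet(f, normal, v1, v2, v3):
    f.write("  facet normal %f %f %f\n" % normal)
    f.write("    outer loop\n")
    for v in (v1, v2, v3):
        f.write("      vertex %f %f %f\n" % v)
    f.write("    endloop\n  endfacet\n")

def write_top_face(f, x, y, z, s=1.0):
    # +Z face of a cube of side s at (x, y, z), split into two triangles
    a, b, c, d = (x, y, z + s), (x + s, y, z + s), (x + s, y + s, z + s), (x, y + s, z + s)
    facet(f, (0.0, 0.0, 1.0), a, b, c)   # counter-clockwise seen from above
    facet(f, (0.0, 0.0, 1.0), a, c, d)

with open("node.stl", "w") as f:
    f.write("solid sewer_nodes\n")
    write_top_face(f, 0.0, 0.0, 0.0)
    f.write("endsolid sewer_nodes\n")
```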

 

Creative Code Fellowship: Water Works Proposal

Along with 3 other new media artists and creative coding experts, I was recently selected to be a Creative Code Fellow for 2014 — a project pioneered by Gray Area (formerly referred to as GAFFTA and now in a new location in the Mission District).

Each of us is paired with a partnering studio, which provides a space and creative direction for our proposed project. The studio that I’m pleased to be working with is Stamen Design, a leader in the field of aesthetics, mapping and data-visualization.

I’ll also be continuing my residency work at Autodesk at Pier 9, which will be providing support for this project as well.

My proposed project is called “Water Works” — a 3D-printed data visualization of San Francisco’s water system infrastructure, along with some sort of web component.


 

Creative Code Fellowship Application Scott Kildall

Project Proposal (250 word limit)
My proposed project “Water Works” is a 3D data visualization of the complex network of pipes, aqueducts and cisterns that control the flow of water into our homes and out of our toilets. What lies beneath our feet is a unique combined wastewater system — where stormwater mixes with sewer lines and travels to a waste treatment plant, using gravitational energy from the San Francisco hills.

This dynamic flow is the circulatory system of the organism that is San Francisco. As we are impacted by climate change, which escalates drought and severe rainstorms, combined with population growth, how we obtain our water and dispose of it is critical to the lifeblood of this city.

Partnering with Autodesk, which will provide materials and shop support, I will write code, which will generate 3D prints from municipal GIS data. I imagine ghost-like underground 3D landscapes with thousands of threads of water — essentially flow data — interconnected to larger cisterns and aqueducts. The highly retinal work will invite viewers to explore the infrastructure the city provides. The end result might be panels that snap together on a tabletop for viewers to circumnavigate and explore.

The GIS data is available, though not online, from San Francisco, and I’ve already obtained cooperation from SFDPW about providing some of the infrastructure data necessary to realize this project.

While my focus will be on the physical portion of this project, I will also build an interactive web-based version from the 3D data, making this a hybrid screen-physical project.

Why are you interested in participating in this fellowship? (150 word limit)
The fellowship would give me the funding, visibility and opportunity of working under the umbrella of two progressive organizations: Gray Area and Stamen Design. I would expand my knowledge, serve the community and increase my artistic potential by working with members of these two groups, both of which have a progressive vision for art and design in my longtime home of San Francisco.

Specifically, I wish to further integrate 3D printing into the data visualization conversation. With the expertise of Stamen, I hope to evolve my visualization work at Autodesk. The 3D-printing technology makes possible what has hitherto been impossible to create and has enormous possibilities to materialize the imaginary.

Additionally some of the immersive classes (HTML5, Javascript, Node.js) will be helpful in solidifying my web-programming skills so that I can produce the screen-based portion of this proposal.

What experience makes this a good fit for you? (150 word limit)
I have deep experience in producing both screen-based and physical data visualizations. While at the Exploratorium, I worked on many such exhibits for a general audience.

One example is a touch-screen exhibit called “Seasons of Plankton”, which shows how plankton species in the Bay change over the year, reflecting a diverse ecosystem of microscopic organisms. I collaborated with scientists and visitor evaluators to determine the optimal way to tell this story. I performed all of the coding work and media production for this successful piece.

While at Autodesk, my focus has been creating 3D data visualizations with my custom code that transforms public data sets into “Data Crystals” (these are the submitted images). This exploration favors aesthetics over legibility. I hope to build upon this work and create physical forms that help people see the dynamics of a complex urban water system and invite curiosity through beauty.

 

@SelfiesBot: It’s Alive!!!

@SelfiesBot began tweeting last week and already the results have surprised me.

Selfies Bot is a portable sculpture that takes selfies and then tweets the images. With custom electronics and a long arm holding a camera that points back at itself, it is a portable art object that can travel to parks, the beach and different cities.

I quickly learned that people want to pose with it, even in my early versions with a cardboard head (used to prove that the software works).

Last week, in an evening of experimentation, I added a text component, where each Twitter pic gets accompanied by text that I scrape from Tweets with the #selfie hashtag.

This produces delightful results, like spinning a roulette wheel: you don’t know what the text will be until the Twitter website publishes the tweet. The text + image gives an entirely new dimension to the project. The textual element acts as a mirror into the phenomenon of the self-portrait, reflecting the larger culture of the #selfie.
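The scraping-and-pairing step is compact; a condensed sketch, assuming Tweepy with v1.1 API access (method names vary across Tweepy versions, and the credentials and filename here are placeholders):

```python
# Grab recent #selfie tweets, pick one at random, and pair its text
# with the freshly captured photo from the bot's camera.
import random
import tweepy

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth)

candidates = [t.text for t in api.search_tweets(q="#selfie", count=50)]
caption = random.choice(candidates)

media = api.media_upload("selfie.jpg")   # hypothetical path to the captured photo
api.update_status(status=caption, media_ids=[media.media_id])
```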

Produced while an artist-in-residence at Autodesk.


And this is the final version! Just done.


This is the “robot hand” that holds the camera on a 2-foot long gooseneck arm.


