
EquityBot World Tour

Art projects are like birthing little kids. You have grand aspirations but never know how they’re going to turn out. And no matter what, you love them.


It’s been a busy year for EquityBot. I didn’t expect at all last year that my stock-trading algorithm Twitterbot would resonate so well with curators, thinkers and a general audience. I’ve been very pleased with how well this “child” of mine has been doing.

This year, from August to December, it has been exhibited in 5 different venues, in 4 countries. They include MemFest 2015 (Bilbao), ISEA 2015 (Vancouver), MoneyLab 2: Economies of Dissent (Amsterdam) and Bay Area Digitalists (San Francisco).

Of course, it helps the narrative that EquityBot is doing incredibly well, with a return rate (as of December 4th) of 19.5%. I don’t have the exact figures, but the S&P for this time period, according to my calculations, is in the neighborhood of -1.3%.


 

The challenge with this networked art piece is how to display it. I settled on making a short video, with the assistance of a close friend, Mark Woloschuk. This does a great job of explaining how the project works.

And, accompanying it is a visual display of vinyl stickers, printed on the vinyl sticker machine at the Creative Workshops at Autodesk Pier 9, where I once had a residency and now work (part-time).

(EquityBot installation view)

 

(installation view from the Columbus show)

Soft Machines and Deception

The Impakt Festival officially begins next Wednesday, but in the weeks prior to the event, Impakt has been hosting numerous talks, dinners and also a weekly “Movie Club,” which has been a social anchor for my time in Utrecht.

Every Tuesday, after a pizza dinner and drinks, an expert in the field of new media introduces a relatively recent film about machine intelligence, prompting questions that frame larger issues of human-machine relations in the films. An American audience might be impatient about a 20-minute talk before a movie, but in the Netherlands, the audience has been engaged. Afterwards, many linger in conversations about the very theme of the festival: Soft Machines.


The films have included I, Robot, Transcendence, Her and the documentary Game Over: Kasparov and the Machine. They vary in quality, but with the concepts introduced ahead of time, even Transcendence, a thoroughly lackluster film, engrossed me.

The underlying question that we end up debating is: can machines be intelligent? This seems like a simple yes-or-no question, one which cleaves any group into either a technophilic pro-Singularity camp or a curmudgeonly Luddite one. It’s a binary trap, like the Star Trek debates between Spock and Bones. The question is far more compelling and complex than that.

The Turing test is often cited as the starting point for this question. For those of you who are unfamiliar with this thought experiment, it was developed by the British mathematician and computer scientist Alan Turing in a 1950 paper that asked the simple question: “Can machines think?”

The test goes like this: suppose someone at a computer terminal is conversing with an entity by typing text back and forth, much as we now regularly do with instant messaging. The entity at the other terminal is either a computer or a human, and its identity is unknown to the user. The user can have a conversation and ask questions. If he or she cannot ascertain “human or machine” after about 5 minutes, then the machine passes the Turing test: it responds as a human would and can effectively “think”.

(diagram of the Turing test)

In 1990, the thought experiment became a reality with the Loebner Prize. Every year, various chatbots — algorithms which converse via text with a computer user — compete to try to fool human judges in a setup that replicates this exact test. Some algorithms have come close, but to date, no program has ever won the grand prize.

(screenshot of the ELIZA chatbot)

The story goes that Alan Turing was inspired by a popular party game of the era called the “Imitation Game,” in which a questioner would ask an interlocutor various questions. This intermediary would then relay the questions to a hidden person, who would answer via handwritten notes. The job of the questioner was to try to determine the gender of the unknown person. The hidden person would provide ambiguous answers. A question like “what is your favorite shade of lipstick?” could be answered with “It depends on how I feel”. The answer in this case is a dodge, as a 1950s man certainly wouldn’t know the names of lipstick shades.

Both the Turing test and the Imitation Game hover around the act of deception. This technique, widely deployed in predator-prey relationships in nature, is ingrained in our biological systems. In the Loebner Prize competitions, there have even been instances where both the humans and the computers try to play with the judges, making statements like: “Sorry I am answering this slowly, I am running low on RAM”.

It may sound odd, but the computer doesn’t really know deception. Humans do. Every day we work with subtle cues of movement around social circles, flirtation with one another, exclusion from and inclusion into a group, and so on. These often rely on shades of deception: we say what we don’t really mean and have other agendas than our stated goals. Politicians, business executives and others who occupy high rungs of social power know these techniques well. However, we all use them.

The artificial intelligence software that powers chatbots has evolved rapidly over the years. Natural language processing (NLP) is widely used in various software industries. I had an informative lunch the other day in Amsterdam with a colleague of mine, Bruno Jakic at AI Applied, who I met through the Affect Lab. Among other things, he is in the business of sentiment analysis, which helps, for example, determine if a large mass of tweets indicates a positive or negative emotion. Bruno shared his methodology and working systems with me.

State-of-the-art sentiment analysis algorithms are generally effective, operating in the 75-85% range for identification of a “good” or “bad” feeling in a chunk of text such as a Tweet. Human consensus is in a similar range. Apparently, a group of people cannot fully agree on how “good” or “bad” various Twitter messages are, so machines are coming close to being as effective as humans on a general scale.

The NLP algorithms deploy brute-force methods, crunching through millions of sentences using human-designed “classifiers” — rules that help determine how a sentence reads. For example, an emoticon like a frown-face almost always indicates a bad feeling.
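To make the classifier idea concrete, here is a minimal rule-based scorer in Python. It is only an illustrative sketch, not Bruno’s system or EquityBot’s actual engine; the emoticon patterns, word list and weights are invented for the example.

    import re

    # Tiny hand-designed "classifiers": a few weighted keywords and emoticons.
    # All phrases and weights here are made up for illustration.
    POSITIVE = {"love": 1.0, "great": 1.0, "happy": 0.8}
    NEGATIVE = {"hate": -1.0, "awful": -1.0, "locked out": -0.8}

    def score(text):
        text = text.lower()
        total = 0.0
        total += 0.9 * len(re.findall(r"[:;]-?\)", text))  # smiley faces
        total -= 0.9 * len(re.findall(r":-?\(", text))      # frown faces almost always mean "bad"
        for phrase, weight in {**POSITIVE, **NEGATIVE}.items():
            if phrase in text:
                total += weight
        return "good" if total > 0 else "bad"

    print(score("I love this :)"))                        # -> good
    print(score("Got locked out of my house again :("))   # -> bad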


Computers can figure this out because machine perception is millions of times faster than human perception. A machine can run through examples, rules and more, but it acts on logic alone. If NLP software generally works, where specifically does it fail?

Bruno pointed out that machines are generally incapable of figuring out if someone is being sarcastic. Humans immediately sense this by intuitive reasoning. We know, for example, that getting locked out of your own house is bad. So if you write about it as if it were a good thing, the contradiction is obviously sarcasm. The context is what our “intuition,” or emotional brain, understands. It builds upon shared knowledge that we gather over many years.


The Movie Club films also tackle this issue of machine deception. At a critical moment in I, Robot, Sonny, the main robot character, deceives the “bad” AI software that is attacking the humans by pretending to hold a gun to one of the main “good” characters. It then winks at Will Smith (the protagonist) to let him know that he is tricking the evil AI machine. Sonny and Will Smith then cooperate, Hollywood style, with guns blazing. Of course, they prevail in the end.

(Sonny’s wink in I, Robot)

Sonny possesses a sophisticated Theory of Mind: an understanding of its own mental state as well as that of the other robots and of Will Smith. It takes initiative and pretends to be on the side of the evil AI computer by taking an aggressive action. Earlier in the film, Sonny learned what winking signifies. It knows that the evil AI doesn’t understand this, and so the wink will be understood by Will Smith and not by the evil AI.

In Game Over: Kasparov and the Machine, which recasts the narrative of the Deep Blue vs. Kasparov chess matches, the computer’s Theory of Mind resurfaces. We know that Deep Blue won the match, a series of six games played in 1997. But it is the infamous Game 2 that obsessed Kasparov. The computer played aggressively and more like a human than Kasparov had expected.

At move 45, Kasparov resigned, convinced that Deep Blue had outfoxed him that day. Deep Blue had responded in the best possible way to Kasparov’s feints earlier in the game. Chess experts later discovered that Kasparov could easily have forced an honorable draw instead of resigning the game.

The computer appeared to have made a simple error. Kasparov was baffled and obsessed. How could the algorithm have failed on a simple move when it had been so thoroughly strategic earlier in the game? It didn’t make sense.

Kasparov felt like he was tricked into resigning. What he didn’t consider was that when the algorithm didn’t have enough time — since tournament chess games are run against a clock — to find the best-ranked move, it would choose randomly from a set of moves…much like a human would do in similar circumstances. The decision we humans make at that point is emotional. Inadvertently, the machine deceived Kasparov.
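In code terms, the behavior described above might look something like the toy Python sketch below. This is not Deep Blue’s actual logic; the function names and the fallback rule are illustrative only.

    import random
    import time

    def choose_move(legal_moves, evaluate, deadline):
        """Pick the best-ranked move, but if the clock runs out mid-search,
        fall back to a random legal move -- the 'human-like' choice that
        can baffle an opponent expecting pure calculation."""
        scored = []
        for move in legal_moves:
            if time.time() > deadline:
                return random.choice(legal_moves)
            scored.append((evaluate(move), move))
        return max(scored, key=lambda pair: pair[0])[1]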

I’m convinced that the ability to act deceptively is one necessary condition for machines to be “intelligent”. Otherwise, they are simply code-crunchers. But there are other aspects as well, which I’m discovering and exploring during the Impakt Festival.

I will continue this line of thought on machine intelligence in future blog posts. I welcome any thoughts and comments on machine intelligence and deception. You can find me on Twitter: @kildall.


Data-Visualizing + Tweeting Sentiments

It’s been a busy couple of weeks working on the EquityBot project, which will be ready for the upcoming Impakt Festival. Well, at least a functional prototype of my ongoing research project will be online for public consumption.

The good news is that the Twitter stream is now live. You can follow EquityBot here.

EquityBot now tweets images of data-visualizations on its own; it is fully autonomous. I’m constantly surprised by, and a bit nervous about, its Tweets.

At the end of last week, I put together a basic data visualization using D3, which is a powerful Javascript data-visualization tool.

Using code from Jim Vallandingham, in just one evening I created dynamically-generated bubble maps of Twitter sentiments as they arrive from EquityBot’s own sentiment analysis engine.

I mapped the colors directly from the Plutchik wheel of emotions, which is why they are still a little wonky; for example, the color for the emotion of Grief is unreadable. That will be fixed.

I did some screen captures and put them on my Facebook and Twitter feeds. I soon discovered that people were far more interested in images of the data visualizations than in plain text describing the emotions.

I was faced with a geeky problem: how do I get my Twitterbot to generate images of data visualizations made with D3, a front-end Javascript library? I figured it out eventually, after stepping into a few rabbit holes.


I ended up using PhantomJS, the Selenium web driver and my own Python management code to solve the problem. The biggest hurdle was getting Google webfonts to render properly. Trust me, you don’t want to know the details.
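For the curious, the capture step boils down to something like the sketch below. It assumes a local page (chart.html here, a placeholder name) that renders the D3 bubble map, and that the PhantomJS binary is installed; the real management code handles fonts, queuing and tweeting on top of this.

    import time
    from selenium import webdriver

    def capture_chart(url, out_path, width=1024, height=768):
        # Drive a headless PhantomJS browser, let D3 (and webfonts) render,
        # then save the page as a PNG that the Twitterbot can post.
        driver = webdriver.PhantomJS()
        driver.set_window_size(width, height)
        driver.get(url)
        time.sleep(2)  # crude wait for the visualization to finish drawing
        driver.save_screenshot(out_path)
        driver.quit()

    capture_chart("file:///path/to/chart.html", "sentiment_bubbles.png")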


 

But I’m happy with the results. EquityBot will now move on to other Tweetable data-visualizations, such as its own simulated bank account, stock correlations and sentiment-stock pairings.

Blueprint for EquityBot

For my latest project, EquityBot, I’ve been researching, building and writing code during my two-month residency at Impakt Works in Utrecht (Netherlands).

EquityBot is going through its final testing cycles before a public announcement on Twitter. For those of you who are Bot fans, I’ll go ahead and slip you EquityBot’s Twitter feed: https://twitter.com/equitybot

The initial code-work has involved configuration of a back-end server that does many things, including “capturing” Twitter sentiments, tracking fluctuations in the stock market and running correlation algorithms.

I know, I know, it sounds boring. Often it is. After all, the result of many hours of work: a series of well-formatted JSON files. Blah.
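To give a flavor of what ends up in those JSON files, here is a toy sketch of one correlation step: pairing a sentiment time series with a stock’s daily returns. The emotion values, ticker symbol and numbers are invented for illustration; the real server works from live Twitter and market data.

    import json
    import numpy as np

    def correlate(sentiment_scores, closing_prices):
        # Pearson correlation between daily sentiment and next-day returns.
        prices = np.array(closing_prices)
        returns = np.diff(prices) / prices[:-1]
        return float(np.corrcoef(sentiment_scores[:-1], returns)[0, 1])

    anger = [0.2, 0.5, 0.4, 0.7, 0.3]             # made-up daily "anger" intensity
    prices = [100.0, 101.5, 100.8, 103.2, 102.0]  # made-up daily closing prices
    result = {"emotion": "anger", "symbol": "XYZ",
              "correlation": correlate(anger, prices)}
    print(json.dumps(result, indent=2))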

But it’s like building city infrastructure: now that I have the EquityBot Server more or less working, it’s been incredibly reliable, cheap and customizable. It can act as a Twitterbot, a data server and a data visualization engine using D3.

This type of programming is yet another skill in my Creative Coding arsenal. It consists mostly of Python code that lives on a Linode server, a low-cost alternative to options like HostGator or GoDaddy, which incur high monthly costs. And there’s a geeky sense of satisfaction in creating a well-oiled software engine.

The EquityBot Server looks like a jumble of Python and PHP scripts. I cannot possibly explain it in excruciating detail, nor would anyone in their right mind want to wade through the technical minutiae.

Instead, I wrote up a blueprint for this project.

(EquityBot server blueprint diagram)

For those of you who are familiar with my art projects, this style of blueprint may look familiar. I adapted this design from my 2049 Series, which are laser-etched and painted blueprints of imaginary devices. I made these while an artist-in-residence at Recology San Francisco in 2011.

(blueprint from the 2049 Series)

Data Miner, Water Detective

This summer, I’m working on a Creative Code Fellowship with Stamen Design, Gray Area and Autodesk. The project is called Water Works, which will map and data-visualize the San Francisco water infrastructure using 3D-printing and the web.

Finding water data is harder than I thought. Like detective Gittes in the movie Chinatown, I’m poking my nose around and asking everyone about water. Instead of murder and slimy deals, I am scouring the internet and working with city government. I’ve spent many hours sleuthing and learning about the water system in our city.

(Jack Nicholson and Faye Dunaway in Chinatown)

In San Francisco, where this story takes place, we have three primary water systems. Here’s an overview:

The Sewer System is owned and operated by the SFPUC. The DPW provides certain engineering services. This is a combined stormwater and wastewater system. Yup, that’s right: the water you flush down the toilet goes into the same pipes as the rainwater. Everything gets piped to a state-of-the-art wastewater treatment plant. Amazingly, the sewer pipes are fed almost entirely by gravity, taking advantage of the natural landscape of the city.

The Auxiliary Water Supply System (AWSS) was built in 1908, just after the 1906 San Francisco Earthquake. It is an entire water system that is dedicated solely to firefighting. 80% of the city was destroyed not by the earthquake itself, but by the fires that ravaged the city. The fires rampaged through the city mostly because the water mains collapsed. Just afterwards, the city began construction on this separate infrastructure for combatting future fires. It consists of reservoirs that feed an entire network of pipes to high-pressure fire hydrants, and it also includes approximately 170 underground cisterns at various intersections in the city. This incredible separate water system is unique to San Francisco.

The Potable Water System, a.k.a. drinking water, is the water we get from our faucets and showers. It comes from the Hetch Hetchy — a historic valley, but also a reservoir and water system constructed from 1913-1938 to provide water to San Francisco. This history is well-documented, but what I know little about is how the actual drinking water gets piped into San Francisco homes. Also, San Francisco’s water is among the safest in the world, so you can drink directly from your tap.

Given all of this, where is the story? This is the question that I asked folks at Stamen, Autodesk and Gray Area during a hyper-productive brainstorming session last week. Here’s the whiteboard with the notes. The takeaways, as folks call them, are below, and here I’m going to get into the nitty-gritty of process.

(whiteboard brainstorming session with Stamen)


(1) In my original proposal, I had envisioned a table-top version of the entire water infrastructure (pipes, cisterns, manhole chambers, reservoirs) as a large-scale sculpture, printed in panels. It was kindly pointed out to me by the Autodesk Creative Projects team that this is unfeasible. I quickly realized the truth of this: 3D prints are expensive, time-consuming to clean and fragile. Divide the sculptural part of the project into several small parts.

(2) People are interested in the sewer system. Someone said, “I want to know if you take a dump at Nob Hill, where does the poop go?” It’s universal. Everyone poops, even the Queen of England and even Batman. It’s funny, it’s gross, it’s entirely human. This could be accessible to everyone.

(3) Making visible the invisible, or revealing what’s in plain sight. The cisterns in San Francisco are one example. Those brick circles that you see in various intersections are actually 75,000-gallon underground cisterns. Work on a couple of discrete urban mapping projects.

(4) Think about focusing on making a beautiful and informative 3D map / data-visualization of just 1 square mile of San Francisco infrastructure. Home in on one area of the city.

(5) Complex systems can be modeled virtually. Over the last couple of weeks, I’ve been running code tests, talking to many people in city government and building out an entire water modeling system in C++ using openFrameworks. It’s been slow, deliberate and arduous. Balance the physical models with a complex virtual one.
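As a toy illustration of item (5): the heart of such a model is treating the network as a directed graph in which flow follows gravity downhill. The sketch below is in Python rather than the C++/openFrameworks the project actually uses, and the node names and elevations are invented.

    # Pipes as edges of a directed graph; water flows toward lower elevations.
    elevations = {"NobHill": 90.0, "Mission": 20.0, "TreatmentPlant": 0.0}
    pipes = [("NobHill", "Mission"), ("Mission", "TreatmentPlant")]

    def trace_flow(start):
        """Follow gravity downhill from a node to wherever the water ends up."""
        path = [start]
        current = start
        while True:
            downhill = [b for a, b in pipes
                        if a == current and elevations[b] < elevations[current]]
            if not downhill:
                return path
            current = min(downhill, key=lambda n: elevations[n])  # steepest drop
            path.append(current)

    print(trace_flow("NobHill"))  # ['NobHill', 'Mission', 'TreatmentPlant']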

I’m still not sure exactly where this project is heading, which is to be expected at this stage. For now, I’m mining data and acting as a detective. In the meantime, here is the trailer for Chinatown, which gives away the entire plot in 3 minutes.

 

@SelfiesBot: It’s Alive!!!

@SelfiesBot began tweeting last week and already the results have surprised me.

Selfies Bot is a portable sculpture that takes selfies and then tweets the images. With custom electronics and a long arm holding a camera that points back at itself, it is a portable art object that can travel to parks, the beach and different cities.

I quickly learned that people want to pose with it, even in my early versions with a cardboard head (used to prove that the software works).

Last week, in an evening of experimentation, I added a text component: each Twitter pic gets accompanied by text that I scrape from Tweets with the #selfie hashtag.

This produces delightful results, like spinning a roulette wheel: you don’t know what the text will be until the Twitter website publishes the tweet. The text + image gives an entirely new dimension to the project. The textual element acts as a mirror into the phenomenon of the self-portrait, reflecting the larger culture of the #selfie.
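For the technically curious, the pairing step looks roughly like the sketch below, written against the tweepy library as it existed at the time. The keys, file name and helper names are placeholders, not the project’s actual code.

    import random
    import tweepy

    # Placeholder credentials; a real bot would load these from a config file.
    auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
    auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
    api = tweepy.API(auth)

    def random_selfie_text():
        """Grab the text of a random recent tweet tagged #selfie."""
        results = api.search(q="#selfie", count=50)
        return random.choice(results).text

    def tweet_selfie(photo_path):
        """Post the camera's photo with scraped #selfie text as the caption."""
        api.update_with_media(photo_path, status=random_selfie_text())

    tweet_selfie("latest_selfie.jpg")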

Produced while an artist-in-residence at Autodesk.


And this is the final version! Just done.

(photo of the finished SelfiesBot)

This is the “robot hand” that holds the camera on a 2-foot long gooseneck arm.

(the robot hand and gooseneck arm)



 

3D Data Viz & SF Open Data

I’ve fallen a bit behind in my documentation and have a backlog of great stuff that I’ve been 3D-printing. These are a few of my early tests for my new project, Data Crystals. I am using various data sources and algorithmically transforming the data into 3D sculptures.

The source for these is the San Francisco Open Data Portal — which provides datasets about all sorts of interesting things such as housing permit data, locations of parking meters and more.

My custom algorithms transform this data into 3D sculptures. Legibility is still an issue, but initial tests show the wonderful work that algorithms can do.
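As a rough sketch of the idea, and not the project’s actual algorithm: each record in a dataset becomes a cube position, and the cubes are pulled toward their centroid so they fuse into a single printable “crystal.” The file and field names below are hypothetical.

    import csv

    def load_points(path):
        # one 3D point per record; the lat/lon column names are assumptions
        with open(path) as f:
            return [(float(r["lon"]), float(r["lat"]), 0.0) for r in csv.DictReader(f)]

    def contract(points, factor=0.5):
        """Move every point part-way toward the centroid to tighten the cluster."""
        n = len(points)
        cx = sum(p[0] for p in points) / n
        cy = sum(p[1] for p in points) / n
        cz = sum(p[2] for p in points) / n
        return [(cx + (x - cx) * factor,
                 cy + (y - cy) * factor,
                 cz + (z - cz) * factor) for x, y, z in points]

    # each contracted point would then become a small cube in the final mesh
    cubes = contract(load_points("sf_crime_incidents.csv"))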

This is a transformation of San Francisco Crime Data. It turns out that crime happens everywhere, so the data is in a giant block.

(Data Crystal generated from San Francisco crime data)

After running some crude data transformations, I “mined” this crystal: the locations of San Francisco public art. Most public art is located in the downtown and city hall areas. But there is a tail, which represents the San Francisco Airport.

(Data Crystal generated from San Francisco public art locations)

More experiments: this is a test, based on the SF public art data, where I played with varying the size of the cubes (the size would represent a suggested value for each artwork, which I don’t have data for…yet). Now I have a fourth axis for the data. Plus, there is a distinct aesthetic appeal to stacking differently-sized blocks as opposed to uniform ones.

Stay tuned, there is more to come!

Cracking the Code

After several days of brainstorming about generating 3D models using simple coding tools, I started diving into Processing* using Marius Watz’s Modelbuilder library (which is incredible). This is what I have going so far. Super-excited about the possibilities!

Version 2 with “clustering” algorithm

* Technically speaking, I’m using the Processing libs with Eclipse, which makes development far easier. This Instructable that I wrote shows you how to migrate your Processing projects to Eclipse.