Impakt Festival: Opening Night

The Impakt Festival officially kicked off this Wednesday evening, and the first event was the exhibition opening at Foto Dok, curated by Alexander Benenson.


The works in the show circled around the theme of Soft Machines, which Impakt describes as “Where the Optimized Human Meets Artificial Empathy”.

Of the many powerful works in the show, my favorite was the 22-minute video, “Hyper Links or it Didn’t Happen,” by Cécile B. Evans. A failed CGI rendering of Philip Seymour Hoffman narrates fragmented stories of connection, exile and death. At one point, we see an “invisible woman” who lives on a beach and whose lover has quit a well-paying job to stay with her. The video intercuts moments of odd narration by the Hoffman-AI. Spam bots and other digital entities surface and disappear. None of it makes complete sense, yet it somehow works and is absolutely riveting.


After the exhibition opening, the crowd moved to Theater Kikker, where Michael Bell-Smith presented a talk/performance titled “99 Computer Jokes”. He spared the audience, telling us only one actual computer joke. Instead, he embarked on a discursive journey, covering humor, glitch, skeuomorphs, repurposing technology and much more. Bell-Smith spoke with a voice of detached authority and made lateral connections to ideas from a multitude of places and spaces.

In the first section of his talk, he argued that successful art needs to carry a certain amount of information: not too much, not too little. He cited the words of arts curator Anthony Huberman:

“In art, what matters is curiosity, which in many ways is the currency of art. Whether we understand an artwork or not, what helps it succeed is the persistence with which it makes us curious. Art sparks and maintains curiosities, thereby enlivening imaginations, jumpstarting critical and independent thinking, creating departures from the familiar, the conventional, the known. An artwork creates a horizon: its viewer perceives it but remains necessarily distant from it. The aesthetic experience is always one of speculation, approximation and departure. It is located in the distance that exists between art and life.”

At a time when faith in technology has vastly overshadowed faith in art, these words are hyper-relevant. The Evans video accomplishes this, resting in the valley between the known and the uncertain. We recognize Hoffman and he is present, but in a semi-understandable, mutated form. We know that the real Philip Seymour Hoffman is dead. His ascension into a virtual space is fragmented and impure. The video suggests that traversing the membrane from the real into the screen space will forever distort the original. It triggers the imagination. It sticks with us in a way that straightforward stories do not.

What Bell-Smith alludes to in his talk is that combining the human and the machine won’t work…as expected. He sidesteps any firm conclusions. His performance is like the artwork that Huberman describes: it never reaches resolution and opens up a space for curiosity.

Later he displayed slides of Photoshop disasters, a sort of “Where’s Waldo” of Photoshop errata. Microseconds after viewing one of these advertisements, we know something is off. The image triggers an uncanny response. A moment later we can name the problem: the model has only one leg. Primal perception precedes a categorical response. Finally, everyone laughs together at the idiosyncrasy that someone let into the public sphere.


After Bell-Smith’s talk we had a chance to eat and drink. Hats off to the Impakt organization. I know I’m biased, since I’m an artist-in-residence at Impakt during the festival itself, but they certainly know how to make everyone feel warm and cozy.
Next up was the keynote speaker, Bruce Sterling, a science fiction writer and cultural commentator. He boldly took the stage without a laptop, so the audience had no slides or videos to bolster his arguments. He assumed the role of naysayer, deconstructing the very theme of the festival: Where the Optimized Human Meets Artificial Empathy. Defining the terms “cognition” (human) versus “computation” (machine), he took the stance that merging the two is a categorical error in thinking. His example: birds can fly and drones can fly, but this doesn’t mean that drones can lay eggs. My mind raced, thinking that someday drone aircraft might reproduce. Would that be inconceivable?

Sterling tackled the notion of the Optimized Human with an analogy to Dostoyevsky’s Crime and Punishment. For those of you who don’t recall your required high school reading, the main character of the book is Raskolnikov, who is both brilliant and desperate for money. He carefully plans and then kills a morally bankrupt pawnbroker for her cash. The philosophical question that Dostoyevsky proposes is the idea of a superhuman: select individuals who are exempt from the prescribed moral and legal code. Could the murder of a terrible person be a justifiable act? And could the person to judge this be someone who is exceptionally bright, someone who has essentially left the rest of humanity behind?

In the book, the problem is that the social order gets disrupted. Raskolnikov’s act introduces a deadly, unpredictable element into his community. With uncertainty about the law and who executes it, no one feels safe. At the conclusion of the novel, Raskolnikov ends up in exile, in a sort of moral purgatory.

The very notion of the “optimized human” has similar problems. If select people are somehow “upgraded” through cybernetics, gene therapies and other technological enhancements, what happens to the social order? Sterling spoke about marketing, but I see the greater problem as one of leveraged inequality. If a minority of improved humans integrate themselves with some sort of techno-futuristic advantage, our society rapidly escalates the classic problem of the digital divide. The reality is that this has already started happening. The future is here.

Bruce Sterling concluded with the point that we need to pay attention to how technology is leveraged. His example, Apple’s Siri system, albeit not a strong case of Artificial Empathy, is owned by a company with specific interests. When asked for the nearest gas station or a recipe for grilled chicken, Siri “happily” responds. If you ask her how to remove the DRM encoding on a song in your iTunes library, Siri will be helpless. While I disagreed with a number of Sterling’s points, what I do know is that I hope for a non-predictive future for my Artificial Empathy machines.

The Impakt Festival continues through the weekend; the full schedule is on the Impakt website.




Soft Machines and Deception

The Impakt Festival officially begins next Wednesday, but in the weeks prior to the event, Impakt has been hosting numerous talks, dinners and also a weekly “Movie Club,” which has been a social anchor for my time in Utrecht.

Every Tuesday, after a pizza dinner and drinks, an expert in the field of new media introduces a relatively recent film about machine intelligence, prompting questions that frame the larger issues of human-machine relations in the films. An American audience might be impatient with a 20-minute talk before a movie, but here in the Netherlands the audience has been engaged. Afterwards, many linger in conversations about the very theme of the festival: Soft Machines.


The films have included I, Robot, Transcendence, Her and the documentary Game Over: Kasparov and the Machine. They vary in quality, but with the concepts introduced ahead of time, even Transcendence, a thoroughly lackluster film, engrossed me.

The underlying question that we end up debating is: can machines be intelligent? This seems to be a simple yes-or-no question, one that cleaves any group into either a technophilic pro-Singularity camp or a curmudgeonly Luddite one. It’s a binary trap, like the Star Trek debates between Spock and Bones. The question is far more compelling and complex than that.

The Turing test is often cited as the starting point for this question. For those of you who are unfamiliar with this thought experiment, it was developed by the British mathematician and computer scientist Alan Turing in a 1950 paper that asked the simple question: “Can machines think?”

The test goes like this: suppose someone at a computer terminal is conversing with an entity by typing text back and forth, much as we now regularly do with instant messaging. The entity at the other terminal is either a computer or a human; which one is unknown to the user. The user can hold a conversation and ask questions. If he or she cannot ascertain “human or machine” after about five minutes, then the machine passes the Turing test: it responds as a human would and can effectively “think”.


In 1990, the thought experiment became a reality with the Loebner Prize. Every year, various chatbots — algorithms that converse via text with a computer user — compete to fool human judges in a setup that replicates this exact test. Some algorithms have come close, but to date, no computer has won the grand prize.
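Chatbots of the kind entered in the Loebner Prize trace back to simple keyword-and-reflection programs such as Joseph Weizenbaum’s ELIZA from the 1960s. As a rough illustration of that technique (a toy sketch of my own, not any actual contestant’s code), the core trick fits in a few lines of Python:

```python
import random

# Pronoun swaps so the bot can echo the user's words back at them.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "you": "I", "your": "my"}

# Keyword rules: if a keyword appears, answer with a canned (or templated) reply.
RULES = {
    "because": ["Is that the real reason?", "What other reasons come to mind?"],
    "computer": ["Do computers worry you?", "Why do you mention computers?"],
    "i feel": ["Why do you feel %s?", "How long have you felt %s?"],
}

def reflect(text):
    """Flip first/second person so the reply reads naturally."""
    return " ".join(REFLECTIONS.get(w, w) for w in text.lower().split())

def respond(message):
    msg = message.lower()
    for keyword, answers in RULES.items():
        if keyword in msg:
            rest = msg.split(keyword, 1)[1].strip().rstrip(".?!")
            reply = random.choice(answers)
            return reply % reflect(rest) if "%s" in reply else reply
    # No keyword matched: deflect -- a dodge, much like in the party game.
    return "Please, go on."

print(respond("I feel my computer understands me"))
```

The program has no model of meaning at all; it survives a conversation by deflection, which is exactly why these systems sit so close to the theme of deception.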


The story goes that Alan Turing was inspired by a popular party game of the era called the “Imitation Game,” in which a questioner would put various questions to an interlocutor. This intermediary would relay the questions to a hidden person, who would answer via handwritten notes. The questioner’s job was to determine the gender of the unknown person, who would provide deliberately ambiguous answers. A question like “What is your favorite shade of lipstick?” might be answered with “It depends on how I feel”. The answer in this case is a dodge, since a 1950s man certainly wouldn’t know the names of lipstick shades.

Both the Turing test and the Imitation Game hover around the act of deception. This technique, widely deployed in predator-prey relationships in nature, is ingrained in our biological systems. In the Loebner Prize competitions, there have even been instances where the human or the computer will try to play with the judges, making statements like: “Sorry I am answering this slowly, I am running low on RAM”.

It may sound odd, but the computer doesn’t really know deception. Humans do. Every day we work with subtle cues: movement around social circles, flirtation with one another, exclusion from and inclusion into groups, and so on. These often rely on shades of deception: we say what we don’t really mean and have agendas other than our stated goals. Politicians, business executives and others who occupy the high rungs of social power know these techniques well. However, we all use them.

The artificial intelligence software that powers chatbots has evolved rapidly over the years. Natural language processing (NLP) is widely used across software industries. I had an informative lunch the other day in Amsterdam with a colleague of mine, Bruno Jakic of AI Applied, whom I met through the Affect Lab. Among other things, he is in the business of sentiment analysis, which helps determine, for example, whether a large mass of tweets indicates a positive or negative emotion. Bruno shared his methodology and working systems with me.

State-of-the-art sentiment analysis algorithms are generally effective, operating in the 75-85% range for identifying a “good” or “bad” feeling in a chunk of text such as a tweet. Human consensus is in a similar range: apparently, a group of people cannot agree on how “good” or “bad” various Twitter messages are, so machines are coming close to being as effective as humans, at least on a general scale.

The NLP algorithms deploy brute-force methods, crunching through millions of sentences using human-designed “classifiers” — rules that help determine how a sentence reads. For example, an emoticon like a frown face almost always indicates a bad feeling.
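As a toy illustration of what such a hand-designed classifier might look like (my own sketch, far cruder than any production NLP pipeline, with invented word lists), consider a scorer that counts sentiment-laden words and emoticons:

```python
# Toy rule-based sentiment scorer -- illustrative only. Real systems
# combine thousands of learned features, not two hand-picked word lists.
POSITIVE = {"love", "great", "happy", "awesome", ":)", ":-)"}
NEGATIVE = {"hate", "awful", "sad", "terrible", ":(", ":-("}

def sentiment(tweet):
    """Label a tweet 'good', 'bad' or 'neutral' by counting cue words."""
    words = tweet.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "good"
    if score < 0:
        return "bad"
    return "neutral"

print(sentiment("I love this festival :)"))    # -> good
print(sentiment("terrible weather today :("))  # -> bad
```

Notice that `sentiment("great i am locked out of my house :)")` comes back “good” here: the emoticon and the word “great” win, context loses, which is precisely the sarcasm problem.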


Computers can figure this out because machine perception is millions of times faster than human perception. A machine can run through examples, rules and more, but it acts on logic alone. If NLP software generally works, where specifically does it fail?

Bruno pointed out that machines are generally incapable of figuring out when someone is being sarcastic. Humans sense this immediately through intuitive reasoning. We know, for example, that getting locked out of your own house is bad. So if you write about it as though it were a good thing, you are obviously being sarcastic. The context is what our “intuition” — our emotional brain — understands. It builds upon shared knowledge that we gather over many years.


The Movie Club films also tackle this issue of machine deception. At a critical moment in I, Robot, Sonny, the main robot character, deceives the “bad” AI that is attacking the humans by pretending to hold a gun to one of the main “good” characters. It then winks at Will Smith (the protagonist) to let him know that it is tricking the evil AI. Sonny and Will Smith then cooperate, Hollywood-style, with guns blazing. Of course, they prevail in the end.


Sonny possesses a sophisticated Theory of Mind: an understanding of its own mental state as well as those of the other robots and of Will Smith. It takes the initiative, pretending to side with the evil AI by taking an aggressive action. Earlier in the film, Sonny learned what winking signifies. It knows that the evil AI doesn’t understand the gesture, so the wink will be understood by Will Smith and not by the AI.

In Game Over: Kasparov and the Machine, which recasts the narrative of the Deep Blue vs. Kasparov chess matches, the Theory of Mind of the computer resurfaces. We know that Deep Blue won the match, a series of six games in 1997. But it was the infamous Game 2 that obsessed Kasparov. The computer played aggressively, and more like a human than Kasparov had expected.

At move 45, Kasparov resigned, convinced that Deep Blue had outfoxed him that day. Deep Blue had responded in the best possible way to Kasparov’s feints earlier in the game. Chess experts later discovered that Kasparov could easily have forced an honorable draw instead of resigning the game.

The computer appeared to have made a simple error. Kasparov was baffled and obsessed. How could the algorithm have failed on a simple move when it had been so thoroughly strategic earlier in the game? It didn’t make sense.

Kasparov felt he had been tricked into resigning. What he didn’t consider was that when the algorithm didn’t have enough time — tournament chess is played against a clock — to find the best-ranked move, it would choose randomly from a set of candidate moves, much like a human would in similar circumstances. The decision we humans make at that point is an emotional one. Inadvertently, the machine deceived Kasparov.
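As I understand that explanation, the fallback amounts to: rank moves for as long as the clock allows, and pick arbitrarily if time expires before the ranking finishes. A hypothetical sketch of that logic (my own illustration, not Deep Blue’s actual code):

```python
import random
import time

def choose_move(legal_moves, evaluate, deadline):
    """Return the best-scored move, or a random legal one if time runs out.

    `evaluate` is a (possibly slow) scoring function; `deadline` is an
    absolute time.time() value. Hypothetical sketch of a time-pressure
    fallback, not any real engine's code.
    """
    best_move, best_score = None, float("-inf")
    for move in legal_moves:
        if time.time() >= deadline:
            # Out of time: an arbitrary choice among the legal options.
            # To an opponent this can read as deep strategy -- or a blunder.
            return random.choice(legal_moves)
        score = evaluate(move)
        if score > best_score:
            best_move, best_score = move, score
    return best_move

# With ample time, the top-scoring move is found deterministically:
moves = ["e4", "d4", "Nf3"]
scores = {"e4": 0.3, "d4": 0.5, "Nf3": 0.1}
print(choose_move(moves, scores.get, time.time() + 1.0))  # -> d4
```

The point is that the randomness is invisible from the outside: the opponent sees only a move, and supplies the intention themselves.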

I’m convinced that the ability to act deceptively is one necessary factor for machines to be “intelligent”. Otherwise, they are simply code-crunchers. But there are other aspects as well, which I’m discovering and exploring during the Impakt Festival.

I will continue this line of thought on machine intelligence in future blog posts. I welcome any thoughts and comments on machine intelligence and deception. You can find me on Twitter: @kildall.









EquityBot @ Impakt

My exciting news is that this fall I will be an artist-in-residence at Impakt Works in Utrecht, the Netherlands. The same organization puts on the annual Impakt Festival, a media arts festival that has been running since 1988. My residency lasts from September 15 to November 15 and coincides with the festival at the end of October.

Utrecht is a 30-minute train ride from Amsterdam and 45 minutes from Rotterdam, and by all accounts is a small, beautiful canal city with medieval origins; it also hosts the largest university in the Netherlands.

Of course, I’m thrilled. This is my first European art residency and I’ll have a chance to reconnect with some friends who live in the region as well as make many new connections.

The project I’ll be working on is called EquityBot; it will premiere at the Impakt Festival in late October as part of the festival’s online component. It will have a virtual presence, like my Playing Duchamp artwork (a Turbulence commission) and my more recent project, Bot Collective, produced while I was an artist-in-residence at Autodesk.

Like many of my projects this year, this one will involve heavy coding, data visualization and a sculptural component.


At this point, I’m in the research and pre-production phase. While configuring back-end server code, I’m also gathering reading materials about capital and algorithms for the upcoming plane rides, train rides and rainy Netherlands evenings.

Here is the project description:


EquityBot is a stock-trading algorithm that explores the connections between collective emotions on social media and financial speculation. Using custom algorithms, EquityBot correlates group sentiments expressed on Twitter with fluctuations in related stocks, distilling trends in worldwide moods into financial predictions which it then issues through its own Twitter feed. By re-inserting its results into the same social media system it draws upon, EquityBot elaborates on the ways in which digital networks can enchain complex systems of affect and decision making to produce unpredictable and volatile feedback loops between human and non-human actors.

Currently, autonomous trading algorithms comprise the large majority of stock trades. These analytic engines are normally sequestered by private investment companies operating with billions of dollars. EquityBot reworks this system, imagining what it might be like if this technological attention were directed toward the public good instead. How would the transparent, public sharing of powerful financial tools affect the way the stock market works for the average investor?
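Mechanically, the correlation step the project description mentions could be as simple as comparing a daily sentiment series against daily stock returns. Here is a rough sketch with invented numbers (my assumption about the approach, not actual EquityBot code):

```python
# Hypothetical sketch: correlate a daily Twitter-sentiment score with daily
# stock returns using a plain Pearson correlation. All numbers are invented.
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# One invented week: aggregate "joy" scores from tweets vs. the daily
# return of some stock the bot is watching.
joy_scores    = [0.2, 0.5, 0.1, 0.7, 0.4]
stock_returns = [0.1, 0.4, 0.0, 0.6, 0.3]

r = pearson(joy_scores, stock_returns)
if r > 0.8:
    print(f"correlation {r:.2f}: the bot might tweet a buy prediction")
```

Of course, a correlation this clean never appears in real market data, which is part of what makes feeding the predictions back into the same social network such a volatile loop.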

I’m imagining a digital-fabrication portion of EquityBot, which will be the more experimental part of the project and will involve 3D-printed joinery. I’ll be collaborating with my longtime friend and colleague Michael Ang on the technology — he’s already been developing a related polygon construction kit — as well as doing some idea generation together.

“Mang” lives in Berlin, a relatively short train ride away, so I’m planning a trip where we can work together in person and get inspired by some of the German architecture.

My new 3D printer — a Printrbot Simple Metal — will accompany me to Europe. This small, relatively portable machine produces decent-quality results, at least for 3D-printed joints, which will be hidden anyway.