Water Works, Google Translated

My Water Works data-visualization was just featured in MetaTrend Journal (“Big Datification”, Volume 63, March 2015). It’s a subscription model, so you can’t read the article, plus it’s in Korean, which means I definitely can’t read it.


I did get some partial text emailed to me from the organization and ran it through Google Translate, which gave me this paragraph:

Water Works project is implemented as a map to visualize 3D printing coming drainage and sewer systems of San Francisco . This is a project of visual artist Scott Kjeldahl data . San Francisco 170 water tanks visualize dozen water tank location (San Francisco Cisterns), 3 million , and visualize data points sewers activity (Sewer Works) and was made up of 67 of the most efficient virtual hydrant (Imaginary Drinking Hydrants) Map . Pipes, hydrants , circulation and the supply of urban waterways flow through the location and construction of a sewage treatment plant can see at a glance.

I like it! Once again Google Translate impresses with the odd results and the mangling of phrases.

ReFILL Workshop in Winnipeg

On March 27th & 28th (2015), Victoria Scott and I will be conducting a workshop in Winnipeg around the “libricide” in Canada’s DFO libraries. The full article on their closures is here.


Here’s the description:
On March 27th & 28th, 2015, San Francisco-based artists Victoria Scott and Scott Kildall will be leading a 2-day, hands-on workshop to physically re-imagine and re-materialize some of the lost titles of the Freshwater Institute Library. We will discuss, imagine, draw, map and construct while listening to soothing water sounds and watching water-related videos. We will also discuss methodologies of data visualization and create a map which tracks the migration of these materials from a publicly-funded resource into private hands and landfill.

Our project blog will always tell more!

Death and Language

This Thursday at 6pm at Root Division, I will be part of an evening of conversation and performance.

The short talk I’ll be giving will be called Death and Language.

In 1972, my father, Gary Kildall, wrote the first high-level computer language for Intel’s microprocessors. This language, called PL/M, was instrumental in the development of the personal computer and is now extinct. At around the same time, the last fluent speaker of the Tillamook language also died, thus extinguishing this natural language. What survives of the Tillamook language are audio recordings taken from 1965-1972. With digital preservation techniques as the backdrop, I will entertain questions regarding the death of both natural and machine languages.


Pier 9 Artist Profile

The good folks at Pier 9, Autodesk just released this video-profile of me and my Water Works project. I’m especially happy with Charlie Nordstrom’s excellent videography work, and I even got the chance to help with the editing of the video itself.

Yes, in a previous life I used to edit video documentaries with the now-defunct Sleeping Giant Video and the IndyMedia Center.

But now, I’m more interested in algorithms, data and sculpture.

Wikipedia and the Politics of Openness by Nathaniel Tkacz

I first met Nathaniel Tkacz in India and then later in Amsterdam for a series of Wikipedia CPOV (Critical Point of View) conferences. At these two events, my colleague Nathaniel Stern and I presented a talk, which later became a paper on our Wikipedia Art project.

Congratulations to Nathaniel Tkacz. He has just released his book, Wikipedia and the Politics of Openness, which is covered in this Times Higher Education article by Karen Shook.


Talk at David Baker Architects

Yesterday, I gave a brief artist talk at David Baker Architects, a local San Francisco architecture firm with numerous sustainability and innovation design awards. Here I am with David Baker himself, who is sporting a stylish scarf. I want.

It was a casual lunchtime talk with about 15 or 20 people in attendance. An important part of my art practice is talking to organizations that both work outside of the art world and are doing amazing work. I want to share ideas and discuss compelling art ideas with a larger audience. Here I am, showing my Data Crystals work and explaining the clustering algorithms at work. I later talked about mapping the water infrastructure with my Water Works project.

From this architecture firm, I got positive responses about data and design, along with in-depth knowledge about urban infrastructure. I hope to continue the code-to-3D-print work with these projects. More proposals are in the works.


Human Brain Project @ Impakt Festival

I spent my time at the five-day long Impakt Festival watching screenings, listening to talks, interacting with artworks and making plenty of connections with both new and old friends. I’m still digesting the deluge of aesthetic approaches, subjective responses and formal interpretations of the theme of the festival, “Soft Machines: Where the Optimized Human Meets Artificial Empathy”.

It’s impossible to summarize everything I’ve seen. While there were a few duds, like any festival, the majority of what I experienced was high-caliber work. Topping my “best of” list were the “Algorithmic Theater” talk by Annie Dorsen, the Omer Fast film “5000 Feet is the Best”, the Hohokum video game by Richard Hogg and a captivating talk on the Human Brain Project.


For the sake of brevity, I’m going to cover just the presentation on the Human Brain Project (HBP). Even though this is a science project, what impressed me were its similarities in methodology to many art projects. HBP has a simple directive: to map the human brain. However, the process is highly experimental and the results are uncertain.

HBP is largely EU-funded and was awarded to a consortium of researchers in a competition among 26 different organizations. The total funding over the course of the 10-year project is about 1 billion Euros, which is a hefty price tag for a research project. The eventual goal, likely well after the 10-year period, will be to actualize a simulated human brain on a computer — an impossibly ambitious project given the state of technology in 2014.

I arrived skeptical, well aware that technology projects often make empty promises when predicting the future. Marc-Oliver Gewaltig, one of the scientists on HBP, presented the analogy of 15th-century mapmaking. In 1492, Martin Behaim collected as many known maps of the world as he could, then produced the Erdapfel, a map of the known world at the time. He knew that the work was incomplete. There were plenty of known places, but also many uncertain geographical areas. The Erdapfel didn’t even include any of the Americas, since it was created before Columbus returned from his first voyage. But the impressive part was that the Erdapfel was a paradigm shift, synthesizing all geographical knowledge into a single system. This map would then be a stepping stone for future maps.

According to Gewaltig, the mission of the HBP will follow a similar trajectory and aggregate known brain research into a unified, but flawed, model. He fully recognizes that the directive of the project, a fully working synthetic human brain, is impossible at this point. The computing power isn’t available yet, nor will it likely be there in 10 years.

The human brain is filled with neurons and synapses. The interconnections are everywhere with very little empty space in a brain. Because of this complexity, the HBP project is beginning by trying to simulate a mouse brain, which is within technology’s grasp in the next 10 years.

The rough process is to analyze physical slices of a mouse brain rather than chemical and electrical signals. From this information, they can construct a 3D model of the mouse brain itself using advanced software. For those of you who are familiar with 3D modeling, can you imagine the polygon count?

Gewaltig also distinguished their approach from science-fiction-style speculation. When thinking about artificial intelligence, we often think of high-level cognitive functions: reasoning, memory and emotional intelligence. But the brain also handles numerous non-cognitive functions: regulating muscles, breathing, hormones, etc. For this reason, HBP is creating a physical model of a mouse, which will eventually interact with a simulated world. Without a body, you cannot have a simulated brain, despite what many films about AI suggest.

While I still have doubts about the efficacy of the Human Brain Project, I left impressed. The goal is not a successful simulated brain but instead to experiment and push the boundaries of the technology as much as possible. Computing power will catch up some day, and this project will help push future research in the proper direction. The results will be open data, available to other scientists. Is that something we can really argue against?


Impakt Festival: Opening Night

The Impakt Festival officially kicked off this Wednesday evening, and the first event was the exhibition opening at Foto Dok, curated by Alexander Benenson.


The works in the show circled around the theme of Soft Machines, which Impakt describes as “Where the Optimized Human Meets Artificial Empathy”.

Of the many powerful works in the show, my favorite was the 22-minute video “Hyperlinks or It Didn’t Happen” by Cécile B. Evans. A failed CGI rendering of Philip Seymour Hoffman narrates fragmented stories of connection, exile and death. At one point, we see an “invisible woman” who lives on a beach and whose lover stays with her after quitting a well-paying job. The video intercuts moments of odd narration by a Hoffman-AI. Spam bots and other digital entities surface and disappear. None of it makes complete sense, yet it somehow works and is absolutely riveting.


After the exhibition opening, the crowd moved to Theater Kikker, where Michael Bell-Smith presented a talk/performance titled “99 Computer Jokes”. He spared the audience by telling us only one actual computer joke. Instead, he embarked on a discursive journey, covering topics of humor, glitch, skeuomorphs, repurposing technology and much more. Bell-Smith spoke with a voice of detached authority and made lateral connections to ideas from a multitude of places and spaces.

In the first section of his talk, he argued that successful art needs to contain a certain amount of information — not too much, not too little — citing the words of arts curator Anthony Huberman:

“In art, what matters is curiosity, which in many ways is the currency of art. Whether we understand an artwork or not, what helps it succeed is the persistence with which it makes us curious. Art sparks and maintains curiosities, thereby enlivening imaginations, jumpstarting critical and independent thinking, creating departures from the familiar, the conventional, the known. An artwork creates a horizon: its viewer perceives it but remains necessarily distant from it. The aesthetic experience is always one of speculation, approximation and departure. It is located in the distance that exists between art and life.”

In the present, when faith in technology has vastly overshadowed faith in art, these words are hyper-relevant. The Evans video accomplishes this, resting in the valley between the known and the uncertain. We recognize Hoffman and he is present, but in a semi-understandable, mutated form. We know that the real Philip Seymour Hoffman is dead. His ascension into a virtual space is fragmented and impure. The video suggests that traversing the membrane from the real into the screen space will forever distort the original. It triggers the imagination. It sticks with us in a way that stories do not.

What Bell-Smith alludes to in his talk is that the idea of combining the human and the machine won’t work…as expected. He sidesteps any firm conclusions. His performance is like the artwork that Huberman describes: it never reaches resolution and opens up a space for curiosity.

Later he displayed slides of Photoshop disasters, a sort of “Where’s Waldo” of Photoshop errata. Microseconds after viewing the advertisement below, we know something is off. The image triggers an uncanny response. A moment later, we can name the problem: the model has only one leg. Primal perception precedes a categorical response. Finally, everyone laughs together at the idiosyncrasy that someone let into the public sphere.


After Bell-Smith’s talk we had a chance to eat and drink. Hats off to the Impakt organization. I know I’m biased since I’m an artist-in-residence at Impakt during the festival itself, but they certainly know how to make everyone feel warm and cozy.
Next up was the keynote speaker, Bruce Sterling, who is a science fiction writer and cultural commentator. He boldly took the stage without a laptop, so the audience had no slides or videos to bolster his arguments. He assumed the role of naysayer, deconstructing the very theme of the festival: Where the Optimized Human Meets Artificial Empathy. Defining the terms “cognition” (human) vs. “computation” (machine), he took the stance that the merging of the two was a categorical error in thinking. His example: birds can fly and drones can fly, but this doesn’t mean that drones can lay eggs. My mind raced, thinking that someday drone aircraft might reproduce. Would that be inconceivable?

Sterling tackled the notion of the Optimized Human with an analogy to Dostoyevsky’s Crime and Punishment. For those of you who don’t recall your required high school reading, the main character of the book is Raskolnikov, who is both brilliant and desperate for money. He carefully plans and then kills a morally bankrupt pawnbroker for her cash. The philosophical question that Dostoyevsky proposes is the idea of a superhuman: select individuals who are exempt from the prescribed moral and legal code. Could the murder of a terrible person be a justifiable act? And could the one to judge this be someone who is exceptionally bright, essentially leaving the rest of humanity behind?

In the book, the problem is that the social order gets disrupted. Raskolnikov’s action introduces a deadly, unpredictable element into his village. With uncertainty about the law and who executes it, no one feels safe. At the conclusion of the novel, Raskolnikov ends up in exile, in a sort of moral purgatory.

The very notion of the “optimized human” has similar problems. If select people are somehow “upgraded” through cybernetics, gene therapies and other technological enhancements, what happens to the social order? Sterling spoke about marketing, but I see the greater problem as one of leveraged inequality. If a minority of improved humans have integrated themselves with some sort of techno-futuristic advantages, our society rapidly escalates the classic problem of the digital divide. The reality is that this has already started happening. The future is here.

Bruce Sterling concluded with the point that we need to pay attention to how technology is leveraged. His example, Apple’s Siri system, albeit not a strong case of Artificial Empathy, is owned by a company with specific interests. When asked for the nearest gas station or a recipe for grilled chicken, Siri “happily” responds. If you ask her how to remove the DRM encoding on a song in your iTunes library, Siri will be helpless. While I disagreed with a number of Sterling’s points in his talk, what I do know is that I would hope for a non-predictive future for my Artificial Empathy machines.

The Impakt Festival continues through the weekend with the full schedule here.


EquityBot goes live!

During my time at Impakt as an artist-in-residence, I have been working on a new project called EquityBot, which is an online commission from Impakt. It fits well into the Soft Machines theme of the festival: where machines integrate with the soft, emotional world.

EquityBot exists entirely as a networked art or “net art” project, meaning that it lives in the “cloud” and has no physical form. For those of you who are Twitter users, you can follow it on Twitter: @equitybot


What is EquityBot? Many people have asked me that question.

EquityBot is a stock-trading algorithm that “invests” in emotions such as anger, joy, disgust and amazement. It relies on a classification system of twenty-four emotions, developed by the psychologist and scholar Robert Plutchik.


How it works
During stock market hours, EquityBot continually tracks worldwide emotions on Twitter to gauge how people are feeling. In the simple data-visualization below, which is generated automatically by EquityBot, the larger circles indicate the more prominent emotions that people are Tweeting about.

At this point in time, just 1 hour after the stock market opened on October 28th, people were expressing emotions of disgust, interest and fear more prominently than others. During the course of the day, the emotions contained in Tweets continually shift in response to world events and many other unknown factors.
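In spirit, the tracking step boils down to counting emotion signals in batches of tweets. Here is a toy sketch of that idea; the keyword lists and function names are my own illustrative stand-ins, not EquityBot’s actual code:

```python
from collections import Counter

# Hypothetical keyword lists for a few of Plutchik's emotions;
# a real classifier would be far richer than bare keyword matching.
EMOTION_KEYWORDS = {
    "anger": {"angry", "furious", "outraged"},
    "joy": {"happy", "joyful", "delighted"},
    "fear": {"afraid", "scared", "terrified"},
    "disgust": {"disgusted", "gross", "revolting"},
}

def count_emotions(tweets):
    """Tally how many tweets in a batch express each emotion."""
    counts = Counter()
    for tweet in tweets:
        words = set(tweet.lower().split())
        for emotion, keywords in EMOTION_KEYWORDS.items():
            if words & keywords:  # any keyword hit counts the tweet once
                counts[emotion] += 1
    return counts

tweets = [
    "I am so angry about the news today",
    "feeling happy and delighted with this release",
    "that video was gross",
]
print(count_emotions(tweets))  # anger, joy and disgust each appear once
```

The sizes of the circles in the visualization above would then just be these counts, sampled every so often during market hours.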

EquityBot then uses various statistical correlation equations to match changes in emotions on Twitter to fluctuations in stock prices. The details are thorny, so I’ll skip the boring stuff. My time did involve a lot of work with scatterplots, which looked something like this.

Once EquityBot sees a viable pattern, for example that “Google” is consistently correlated to “anger” and that anger is a trending emotion on Twitter, EquityBot will issue a BUY order on the stock.

Conversely, if Google is correlated to anger, and the Tweets about anger are rapidly going down, EquityBot will issue a SELL order on the stock.
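The two rules above can be sketched as a small decision function. This is a toy version with made-up numbers and an arbitrary correlation threshold, not the production algorithm:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def trade_signal(emotion_series, price_series, emotion_trend):
    """BUY if the emotion correlates with the stock and is trending up,
    SELL if it correlates but is trending down, otherwise HOLD."""
    if abs(pearson(emotion_series, price_series)) < 0.7:  # arbitrary cutoff
        return "HOLD"
    return "BUY" if emotion_trend > 0 else "SELL"

anger = [10, 14, 18, 25, 31]                 # hourly counts of anger tweets
goog = [555.0, 557.2, 560.1, 563.9, 566.5]   # hypothetical GOOG prices
print(trade_signal(anger, goog, emotion_trend=+1))  # BUY
```

With the trend flipped negative, the same correlated pair yields a SELL; an uncorrelated pair yields a HOLD.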

EquityBot runs a simulated investment account, seeded with $100,000 of imaginary money.

In my first few days of testing, EquityBot “lost” nearly $2000. This is why I’m not using real money!

Disclaimer: EquityBot is not a licensed financial advisor, so please don’t follow its stock investment patterns.

The project treats human feelings as tradable commodities. It will track how “profitable” different emotions are over the course of months. As social commentary, I propose a future scenario in which just about anything can be traded, including that which is ultimately human: the very emotions that separate us from a machine.

If a computer cannot be emotional, at the very least it can broker trades of emotions on a stock exchange.

As a networked artwork, EquityBot generates these simple data visualizations autonomously (they will get better, I promise).

Its Twitter account (@equitybot) serves as a performance vehicle, where the artwork “lives”. Also, all of these visualizations are interactive and available on the EquityBot website: equitybot.org.

I don’t know if there is a correlation between emotions in Tweets and stock prices. No one does. I am working with the hypothesis that there is some sort of pattern involved. We will see over time. The project goes “live” on October 29th, 2014, the day of the opening of the Impakt Festival, and I will let the first experiment run for three months to see what happens.

Feedback is always appreciated; you can find me, Scott Kildall, here: @kildall.


Soft Machines and Deception

The Impakt Festival officially begins next Wednesday, but in the weeks prior to the event, Impakt has been hosting numerous talks, dinners and also a weekly “Movie Club,” which has been a social anchor for my time in Utrecht.

Every Tuesday, after a pizza dinner and drinks, an expert in the field of new media introduces a relatively recent film about machine intelligence, prompting questions that frame the larger issues of human-machine relations in the films. An American audience might be impatient with a 20-minute talk before a movie, but in the Netherlands, the audience has been engaged. Afterwards, many linger in conversations about the very theme of the festival: Soft Machines.


The films have included I, Robot, Transcendence, Her and the documentary Game Over: Kasparov and the Machine. They vary in quality, but with the introduction of the concepts ahead of time, even Transcendence, a thoroughly lackluster film, engrossed me.

The underlying question that we end up debating is: can machines be intelligent? This seems to be a simple yes or no question, which cleaves any group into either a technophilic pro-Singularity or curmudgeonly Luddite camp. It’s a binary trap, like the Star Trek debates between Spock and Bones. The question is far more compelling and complex.

The Turing test is often cited as the starting point for this question. For those of you who are unfamiliar with this thought experiment, it was developed by the British mathematician and computer scientist Alan Turing in a 1950 paper that asked the simple question: “Can machines think?”

The test goes like this: suppose you have someone at a computer terminal who is conversing with an entity by typing text back and forth, as we now regularly do with instant messaging. The entity on the other terminal is either a computer or a human, the identity of which is unknown to the computer user. The user can have a conversation and ask questions. If he or she cannot ascertain “human or machine” after about 5 minutes, then the machine passes the Turing test. It responds as a human would and can effectively “think”.


In 1990, the thought experiment became a reality with the Loebner Prize. Every year, various chatbots — algorithms which converse via text with a computer user — compete to try to fool humans in a setup that replicates this exact test. Some algorithms have come close, but to date, no computer has ever claimed the grand prize.


The story goes that Alan Turing was inspired by a popular party game of the era called the “Imitation Game”, in which a questioner would ask an interlocutor various questions. This intermediary would then relay the questions to a hidden person, who would answer via handwritten notes. The job of the questioner was to determine the gender of the unknown person, and the hidden person would provide ambiguous answers. A question of “what is your favorite shade of lipstick” could be answered with “It depends on how I feel”. The answer in this case is a dodge, as a 1950s man certainly wouldn’t know the names of lipstick shades.

Both the Turing test and the Imitation Game hover around the act of deception. This technique, widely deployed in predator-prey relationships in nature, is ingrained in our biological systems. In the Loebner Prize competitions, there have even been instances where both humans and computers try to play with the judges, making statements like: “Sorry I am answering this slowly, I am running low on RAM”.

It may sound odd, but the computer doesn’t really know deception. Humans do. Every day we work with subtle cues: movement around social circles, flirtation with one another, exclusion from and inclusion into a group and so on. These often rely on shades of deception: we say what we don’t really mean and have other agendas than our stated goals. Politicians, business executives and others who occupy high rungs of social power know these techniques well. However, we all use them.

The artificial intelligence software that powers chatbots has evolved rapidly over the years. Natural language processing (NLP) is widely used in various software industries. I had an informative lunch the other day in Amsterdam with a colleague of mine, Bruno Jakic of AI Applied, whom I met through the Affect Lab. Among other things, he is in the business of sentiment analysis, which helps, for example, determine whether a large mass of tweets indicates a positive or negative emotion. Bruno shared his methodology and working systems with me.

State-of-the-art sentiment analysis algorithms are generally effective, operating in the 75-85% range for identifying a “good” or “bad” feeling in a chunk of text such as a Tweet. Human consensus is in a similar range. Apparently, a group of people cannot agree on how “good” or “bad” various Twitter messages are, so machines are coming close to being as effective as humans on a general scale.

The NLP algorithms deploy brute-force methods, crunching through millions of sentences using human-designed “classifiers” — rules that help determine how a sentence reads. For example, an emoticon like a frown-face almost always indicates a bad feeling.
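A toy classifier in this spirit fits in a few lines. The marker lists here are illustrative, not drawn from any real production system:

```python
# Tiny rule-based sentiment classifier: each marker hit casts a vote.
NEGATIVE_MARKERS = {":(", ":-(", "terrible", "awful", "hate"}
POSITIVE_MARKERS = {":)", ":-)", "great", "love", "wonderful"}

def classify(text):
    """Label a text 'good', 'bad' or 'neutral' by counting marker votes."""
    tokens = text.lower().split()
    score = sum(t in POSITIVE_MARKERS for t in tokens) \
          - sum(t in NEGATIVE_MARKERS for t in tokens)
    if score > 0:
        return "good"
    if score < 0:
        return "bad"
    return "neutral"

print(classify("I love this :)"))         # good
print(classify("awful service :("))       # bad
print(classify("the train leaves at 9"))  # neutral
```

Real systems learn thousands of such weighted rules from labeled examples rather than hand-coding a few, but the frown-face rule above really is the kind of signal they pick up.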


Computers can figure this out because machine perception is millions of times faster than human perception. A machine can run through examples, rules and more, but it acts on logic alone. If NLP software generally works, where specifically does it fail?

Bruno pointed out that machines are generally incapable of figuring out if someone is being sarcastic. Humans immediately sense this by intuitive reasoning. We know, for example, that getting locked out of your own house is bad. So if someone writes about it as though it were a good thing, the statement is obviously sarcastic. The context is what our “intuition” — our emotional brain — understands. It builds upon shared knowledge that we gather over many years.


The Movie Club films also tackle this issue of machine deception. At a critical moment in I, Robot, Sonny, the main robot character, deceives the “bad” AI software that is attacking the humans by pretending to hold a gun to one of the main “good” characters. It then winks to Will Smith (the protagonist) to let him know that he is tricking the evil AI machine. Sonny and Will Smith then cooperate, Hollywood-style, with guns blazing. Of course, they prevail in the end.


Sonny possesses a sophisticated Theory of Mind: an understanding of its own mental state as well as that of the other robots and Will Smith. It takes initiative and pretends to be on the side of the evil AI computer by taking an aggressive action. Earlier in the film, Sonny learned what winking signifies. It knows that the AI doesn’t understand this, so the wink will be understood by Will Smith and not by the evil AI.

In Game Over: Kasparov and the Machine, which recasts the narrative of the Deep Blue vs. Kasparov chess matches, the Theory of Mind of the computer resurfaces. We know that Deep Blue won the match, a series of six games in 1997. But it is the infamous Game 2 that obsessed Kasparov. The computer played aggressively and more like a human than Kasparov had expected.

At move 45, Kasparov resigned, convinced that Deep Blue had outfoxed him that day. Deep Blue had responded in the best possible way to Kasparov’s feints earlier in the game. Chess experts later discovered that Kasparov could have easily forced an honorable draw instead of resigning the match.

The computer appeared to have made a simple error. Kasparov was baffled and obsessed. How could the algorithm have failed on a simple move when it was so thoroughly strategic earlier in the game? It didn’t make sense.

Kasparov felt like he was tricked into resigning. What he didn’t consider was that when the algorithm didn’t have enough time — since tournament chess games are run against a clock — to find the best-ranked move, it would choose randomly from a set of moves, much like a human would in similar circumstances. The decision we humans make at that point is emotional. Inadvertently, the machine deceived Kasparov.

I’m convinced that the ability to act deceptively is one necessary factor for machines to be “intelligent”. Otherwise, they are simply code-crunchers. But there are other aspects as well, which I’m discovering and exploring during the Impakt Festival.

I will continue this line of thought on machine intelligence in future blog posts. I welcome any thoughts and comments on machine intelligence and deception. You can find me on Twitter: @kildall.