BOOM! WaterWorks

My Water Works project recently got coverage in BOOM: A Journal of California and I couldn’t be more pleased.

Screen Shot 2015-08-25 at 9.06.28 AM

A few months ago, I was contacted by the editorial staff to write about the 3D printed maps and data-visualization for Water Works.

What most impressed me is the context for this publication, which is a conversation about California, in their own words: “to create a lively conversation about the vital social, cultural, and political issues of our times, in California and the world beyond.”

So, while my Water Works project is an artwork, it is having the desired effect of sparking a dialogue outside of the usual art world.

EquityBot got clobbered

Just after the Dow Jones dropped 1000 points on Aug 24th (yesterday), I checked how EquityBot was doing: an annualized rate of return below -50%.

Screen Shot 2015-08-24 at 11.20.25 PM

Crazy! Of course, this is like taking the tangent of any curve and making a projection. A day later, EquityBot is at -32%.

Screen Shot 2015-08-25 at 8.57.06 AM

Still not good, but if you had invested yesterday, you would be much richer today.

I’m not that much of a gambler, so I’m glad that EquityBot is just a simulated (for now) bank account.

EquityBot Goes to ISEA

EquityBot will be presented at this year’s International Symposium on Electronic Art in Vancouver. The theme is Disruption. You can always follow EquityBot here: @equitybot.

EquityBot is an automated stock-trading algorithm that uses emotions on Twitter as the basis for investments in a simulated bank account.

This art project poses the question: can an artist create a stock-trading algorithm that will outperform professionally managed accounts?

The original EquityBot, which I will call version 1, launched on October 28th via the Impakt organization, which supported the project last fall during an artist residency.

I intended for it to run for 6 months and then assess its performance. I ended up letting it run a little longer (more on this later).

About a month ago, I revamped EquityBot. The new version is doing *great*, with an annual rate of return of 10.86%. Most of this is due to some early investments in Google, whose stock price has been performing fantastically.

equitybot-isea-8emotions-1086percent

How does EquityBot work? During stock market hours, EquityBot scrapes Twitter to determine the frequency of eight basic human emotions: anger, fear, joy, disgust, anticipation, trust, surprise and sadness.

equitybot-8emotions
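The post doesn’t include EquityBot’s actual scraping or classification code, so here is a minimal sketch of the general idea in C++: a lexicon-based counter that tallies how many tweets mention at least one keyword per emotion. The keyword lists and function names are placeholders, not the project’s real lexicon.

```cpp
#include <map>
#include <string>
#include <vector>

// Emotion name -> list of keywords that signal it (placeholder lists).
using Lexicon = std::map<std::string, std::vector<std::string>>;

// Count, for each emotion, how many tweets mention at least one keyword.
std::map<std::string, int> countEmotions(const std::vector<std::string>& tweets,
                                         const Lexicon& lexicon) {
    std::map<std::string, int> counts;
    for (const auto& tweet : tweets) {
        for (const auto& entry : lexicon) {
            for (const auto& keyword : entry.second) {
                if (tweet.find(keyword) != std::string::npos) {
                    counts[entry.first]++;  // count each tweet once per emotion
                    break;
                }
            }
        }
    }
    return counts;
}

// Example lexicon entries for two of the eight emotions:
//   {"joy",  {"happy", "thrilled", "delighted"}}
//   {"fear", {"afraid", "scared", "terrified"}}
```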

The software captures fluctuations in the number of tweets containing these emotions and correlates them with changes in stock prices. When an emotion is trending upwards, EquityBot selects a stock that follows a similar trajectory. It deems this a “correlated investment” and buys the stock.

equitybot_correlation_graph
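The correlation code itself isn’t published in the post, but the core step might look something like this sketch: compute a Pearson correlation between an emotion’s tweet-frequency series and each candidate stock’s price series, then pick the stock that moves most similarly. Everything here (function names, the selection rule) is an assumption for illustration.

```cpp
#include <cmath>
#include <vector>

// Pearson correlation between two equal-length time series.
double pearson(const std::vector<double>& x, const std::vector<double>& y) {
    const size_t n = x.size();
    if (n == 0 || y.size() != n) return 0.0;
    double mx = 0.0, my = 0.0;
    for (size_t i = 0; i < n; ++i) { mx += x[i]; my += y[i]; }
    mx /= n; my /= n;
    double sxy = 0.0, sxx = 0.0, syy = 0.0;
    for (size_t i = 0; i < n; ++i) {
        sxy += (x[i] - mx) * (y[i] - my);
        sxx += (x[i] - mx) * (x[i] - mx);
        syy += (y[i] - my) * (y[i] - my);
    }
    return (sxx == 0.0 || syy == 0.0) ? 0.0 : sxy / std::sqrt(sxx * syy);
}

// Return the index of the stock whose price series tracks the emotion's
// tweet-frequency series most closely (the "correlated investment").
int bestCorrelatedStock(const std::vector<double>& emotionFreq,
                        const std::vector<std::vector<double>>& stockPrices) {
    int best = -1;
    double bestR = -2.0;  // correlations live in [-1, 1]
    for (size_t s = 0; s < stockPrices.size(); ++s) {
        const double r = pearson(emotionFreq, stockPrices[s]);
        if (r > bestR) { bestR = r; best = static_cast<int>(s); }
    }
    return best;
}
```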

The ISEA version of EquityBot will run for another 6 months or so. In version 1, I tracked 24 different emotions, all based on the Plutchik wheel.

Plutchik-wheel.svg_1

 

The problem I found was that this was too many emotions to track. Statistically speaking, there were too few tweets for many of the emotions for the correlation code to function properly.

The only change with the ISEA version (what I will call v1.1) is that it now tracks eight emotions instead of 24.

popular-emotions

How did v1 of EquityBot perform? It came out of the gates super-strong, hitting a high point of 20.21%. Wowza. These are also some earlier data-visualizations, which have since improved slightly.
equitybot-nov26-2021percent

But 1 month later, by December 15th, EquityBot had dipped down to -4.58%. Yikes. These are the vicissitudes of the market over a short time-span.

equitybot-dec15-minus-458percent

 

By January 21st 2015, EquityBot was almost back to even at -0.96%.

 

equitybot-jan21-minus096percent

Then by February 4th, 2015, EquityBot was back at a respectable 5.85%.

equitybot-feb4-585percent

And on March 1st, it was doing quite well at 7.36%.

equitybot-march1-736percent

I let the experiment run until June 11th. The date was arbitrary, but -9.15% was the end result. This was pretty terrible.

equitybot-jun11-minus915percent

Which emotions performed the “best”? The labels aren’t on this graph, but the ones doing well were Trust and Terror. The worst was Rage (extreme Anger).

equitybot-investing-results-jun11

 

How do other managed accounts perform? According to their various websites, these are the numbers I found.

Janus (Growth & Income): 7.35%
Fidelity (VIP Growth & Income): 4.70%
Franklin (Large Cap Equity): 0.46%
American Funds (The Income Fund of America): -1.23%
Vanguard (Growth and Income): 4.03%

This would put EquityBot v1.0 dead last. Good thing this was a simulated bank account.

I’m hoping that v1.1 will do better. Eight emotions. Let’s see how it goes.

 

Machine Data Dreams: Barbie Video Girl Cam

One of the cameras they have here at the Signal Culture residency is the Barbie Video Girl cam, a camera embedded inside a Barbie doll, produced in 2010.

The device was discontinued, most notably after the FBI accidentally leaked a warning about possible predatory misuses of the camera, which is patently ridiculous.

The interface is awkward. The camera can’t be remotely activated. It’s troublesome to get the files off the device. The resolution is poor, but the quality is mesmerizing.

 

barbie_disassembly_1

The real perversion is the way you have to change the batteries for the camera, by pulling down Barbie’s pants and then opening up her leg with a screwdriver.

b-diss

I can only imagine kids wondering if the idealized female form is some sort of robot.

barbie_disassembly_3

The footage it takes is great. I first brought it to the local antique store, where I shot some of the many dolls for sale.

 

 

And, of course, I had to hit up the machines at Signal Culture to do a live analog remix using the Wobbulator and Jones Colorizer.

In the evening, as dusk approached, I took Barbie to the Evergreen Cemetery in Owego, which has gravestones dating from the 1850s and is still an active burial ground.

Here, Barbie contemplated her own mortality.

barbie_cemetery barbie_close_cross barbie_good barbie_gravestone_1 barbie_headstore barbie_warren

barbie_cemetery_mother

It was disconcerting for a grown man to be holding a Barbie doll with an outstretched arm to capture this footage, but I was pretty happy with the results.

I made this short edit.

And remixed with the Wobbulator. I decided to make a melodic harmony (life), with digital noise (death) in a move to mirror the cemetery — a site of transition between the living and the dead.

How does this look in my Machine Data Dreams software?

You can see the waveform here — the 2nd channel is run through the Critter & Guitari Video Scope.

Screen Shot 2015-08-04 at 10.43.09 AM

And the 3D model looks promising, though once again, I will work on these post-residency.

Screen Shot 2015-08-04 at 10.45.04 AM

Machine Data Dreams: Critter & Guitari Video Scope

Not to be confused with Deleuze and Guattari, this is a company that makes various hardware music synths.

For my new project, Machine Data Dreams, I’m looking at how machines might “think”, starting with the amazing analog video machines at Signal Culture.

signal_culture-fullsetup

This morning, I successfully stabilized my Arduino data logger. This captures the raw video signal from any device with RCA outputs and stores values at a sampling rate of ~3600 Hz.

It obviously misses a lot of samples, but that’s the point: a machine-to-machine listener, bypassing any standard digitizing software.

data_logger
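The logger’s firmware isn’t shown in the post, but the basic loop of such a device might look like this Arduino-style sketch: read the composite signal on an analog pin and stream raw values out over serial. The pin, baud rate, and output format are all assumptions.

```cpp
// Minimal sketch of a raw-signal logger (pin and baud rate are assumptions).
const int VIDEO_PIN = A0;  // composite video from the RCA jack's center pin

void setup() {
    Serial.begin(115200);
}

void loop() {
    // analogRead() takes on the order of 100 microseconds on a stock
    // Arduino, so with serial overhead a single channel ends up sampling
    // in the low-kHz range -- the same ballpark as the ~3600 Hz above.
    int sample = analogRead(VIDEO_PIN);  // 10-bit value, 0-1023
    Serial.println(sample);
}
```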

For my first data-logging experiment, I decided to focus on this device, the Critter & Guitari Video Scope, which takes audio and converts it to a video waveform.

critterguitari_3 critterguitari_2 critterguitari_1

Using the synths, I patched and modulated various waveforms. I had never worked with this kind of system until a few days ago, so I’m new to the concept of control voltages.

audio_sythn

This is the 15-minute composition that I made for the data-logger.

Critter & Guitari Videoscope Composition (below)

And the captured output, in my custom OpenFrameworks software.

 

Screen Shot 2015-08-02 at 10.56.15 PM

The 3D model is very preliminary at this point, but I am getting some solid waveform output into a 3D shape. I’ll be developing this in the next few months. But since I only have a week at Signal Culture, I’ll tackle the 3D-shape generation later.

Screen Shot 2015-08-02 at 11.02.25 PM

My data logger can handle 2 channels of video, so I’m experimenting with outputting the video signal as sound and then running it back through the C&G Videoscope.

This is the Amiga Harmonizer output, which looks great by itself. The video signal played back as audio, however, comes out sounding like noise, as expected.

But the waveforms are compelling. There is a solid band at the bottom, which is the horizontal sync pulse. This is the signature of any composite (NTSC) device.

2000px-Composite_Video.svg

 

So, every device I log should have this signal at the bottom, which you can see below.

Screen Shot 2015-08-02 at 10.58.12 PM
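Since sync tips sit at the lowest voltage level of a composite signal, one quick sanity check on a batch of logged samples is to measure what fraction of them fall into that bottom band. A small sketch of the idea, with an assumed ADC threshold:

```cpp
#include <vector>

// Fraction of samples in the low "sync band" of a composite signal.
// The threshold is an assumption; it depends on the logger's ADC scaling.
double syncFraction(const std::vector<int>& samples, int threshold = 100) {
    if (samples.empty()) return 0.0;
    int below = 0;
    for (int s : samples) {
        if (s < threshold) ++below;
    }
    return static_cast<double>(below) / samples.size();
}
```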

Once again, the 3D forms I’ve generated in OpenFrameworks and then opened up in Meshlab are just to show that I’m capturing some sort of raw waveform data.

Screen Shot 2015-08-02 at 11.00.14 PM

Atari Adventure Synth

Hands down, my favorite Atari game when I was a kid was Adventure (2). The dragons looked like giant ducks, your avatar was just a square, and a bat wreaked chaos by stealing your objects.

In the ongoing research for my new Machine Data Dreams project, beginning here at Signal Culture, I’ve been playing with the analog video and audio synths.

Yesterday afternoon, I explored the town of Owego. I ran across a used DVD, CD & electronics store and bought an Atari Flashback Console for $25. I didn’t even know these existed.

blog_pic

I can plug it directly into their video synth system. After futzing around with the various patch cables, I came up with this 5-minute composition, which shows me playing the game. The audio sounds like marching with dirty noise levels.

Also, here is the latest 3D model from my code, which now has a true 3D axis for data-plotting.

Time is one axis, video signal is another, audio signal is the third.
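My actual plotting code isn’t in this post, but in OpenFrameworks terms the mapping might look like this sketch: one vertex per sample, with time on one axis and the two signals on the others (scaling factors are arbitrary).

```cpp
#include <algorithm>
#include <vector>
#include "ofMain.h"

// Build a line-strip mesh: x = time, y = video level, z = audio level.
ofMesh plotSamples(const std::vector<int>& video, const std::vector<int>& audio) {
    ofMesh mesh;
    mesh.setMode(OF_PRIMITIVE_LINE_STRIP);
    const size_t n = std::min(video.size(), audio.size());
    for (size_t i = 0; i < n; ++i) {
        const float x = i * 0.1f;          // time axis
        const float y = video[i] * 0.05f;  // video-signal axis
        const float z = audio[i] * 0.05f;  // audio-signal axis
        mesh.addVertex(ofVec3f(x, y, z));
    }
    return mesh;
}
```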

Screen Shot 2015-07-31 at 9.26.05 PM

And a crude frequency plot.

Screen Shot 2015-08-01 at 3.03.24 PM

 

Van Gogh Wobbulator

On the first full day of the residency at Signal Culture, I played around with the video and audio synthesizers. It’s a new world for me.

While my focus is on the Machine Data Dreams project, I also want to play with what they have and get familiar with the amazing analog equipment.

I started with this 2-minute video, which I shot earlier this summer at the Musee d’Orsay. I had to document the odd spectacle: visitor after visitor taking photos of this famous Van Gogh self-portrait…despite the fact that you can get a higher-quality version online.

I ran this through a few patches and into the Wobbulator, which affects the electronic signal on the CRT itself.

20150730_200415

 

 

20150730_152742

Ewa Justka, the toolmaker-in-residence here, who is building her own audio synthesizer, spruced up the accompanying audio. I captured a 20-minute sample.

ewa-blog-trash

What I love about the result is that the repetitive 2-minute video takes on its own life, as the two of us tweaked knobs, made live patches and laughed a lot.

Introducing Machine Data Dreams

Earlier this year, I received an Individual Artist Commission grant from the San Francisco Arts Commission for a new project called Machine Data Dreams.

I was notified months ago, but the project was on the back-burner until now, as I begin some initial research and experiments at a residency called Signal Culture. I expect full immersion in the fall.

The project description
Machine Data Dreams will be a large-scale sculptural installation that maps the emerging sentience of machines (laptops, phones, appliances) into physical form. Using the language of machines — software program code — as linguistic data points, Scott Kildall will write custom algorithms that translate how computers perceive the world into physical representations that humans can experience.

The project’s narrative proposition is that machines are currently prosthetic extensions of ourselves, and in the future, they will transcend into something sentient. Computer chips not only run our laptops and phones, but increasingly our automobiles, our houses, our appliances and more. They are ubiquitous and yet, often silent. The key to understanding their perspective of the world is to envision how machines view the world, in an act of synthetic synesthesia.

Scott will write software code that will perform linguistic analysis on machine syntax from embedded systems — human-programmable machines that range from complex, general purpose devices (laptops and phones) to specific-use machines (refrigerators, elevators, etc.). Scott’s code will generate virtual 3D geometric monumental sculptures. More complex structures will reflect the higher-level machines and simpler structures will be generated from lower-level devices. We are intrigued by the experimental nature of what the form will take — this is something that he will not be able to plan.

kildall_5

Machine Data Dreams will utilize 3D printing and laser-cutting techniques, which are digital fabrication techniques that are changing how sculpture can be created — entirely from software algorithms. Simple and hidden electronics will control LED lights to imbue a sense of consciousness to the artwork. Plastic joints will be connected via aluminum dowels to form an armature of irregular polygons. The exterior panels will be clad by a semi-translucent acrylic, which will be adhered magnetically to the large-sized structures. Various installations can easily be disassembled and reassembled.

The project will build on my experiments with the Polycon Construction Kit by Michael Ang, where I’m doing some source-code collaboration. This will heat up in the fall.

PCK-small-mountain-768x1024

At Signal Culture, I have 1 week of residency time. It’s short and sweet. I get to play with devices such as the Wobbulator, originally built by Nam June Paik and video engineer Shuya Abe.

The folks at Signal Culture built their own from the original designs.

What am I doing here, with analog synths and other devices? Well, I’m working with a home-built Arduino data logger that captures raw analog video signals (I will later modify it for audio).

20150730_200511

I’ve optimized the code to capture about 3600 signals/second. The idea is to get a raw data feed of what a machine might be “saying”, or the electronic signature of a machine.

20150730_150950

Does it work? Well, I hooked it up to a Commodore Amiga (yes, they have one).

I captured about 30 seconds of video and ran it through a crude version of my custom 3D data-generation software, which makes models. Here is what I got. Whoa…

It is definitely capturing something.

Screen Shot 2015-07-30 at 10.08.49 PM

It’s early research. The forms are flat 3D cube-plots, but also very promising.

Bad Data: SF Evictions and Airbnb

The conversation about evictions is inevitable at every San Francisco party…art organizations closing, friends getting evicted…the city is changing. It has become a boring topic, yet it is absolutely, completely 100% real.

For the Bad Data series — 12 data-visualizations depicting socially-polarized, scientifically dubious and morally ambiguous datasets, each etched onto an aluminum honeycomb panel — I am featuring two works, 18 Years of Evictions in San Francisco and 2015 Airbnb Listings, for exactly this reason. These two etchings are the centerpieces of the show.

evictions_airbnb

This is the reality of San Francisco: it is changing, and the data is ‘bad’ — not in the sense of inaccurate, but rather in the deeper sense of cultural malaise.

By the way, the reception for the “Bad Data” show is this Friday (July 24, 2015) at A Simple Collective, and the show runs through August 1st.

The Anti-Eviction Mapping Project has done a great job of aggregating data on this discouraging topic, hand-cleaning it and producing interactive maps that animate over time. They’re even using the Stamen map tiles, which are the same ones that I used for my Water Works project.

Screen Shot 2015-07-23 at 4.52.36 PM

When I embarked on the Bad Data series, I reached out to the organization and they assisted me with their datasets. My art colleagues may not know this, but I’m an old-time activist in San Francisco. This helped me in getting the datasets, for I know that the story of evictions is not new, though it has never been at this scale.

In 2001, I worked in a now-defunct video-activist group called Sleeping Giant, which made short videos in the era when Final Cut Pro made video-editing affordable and anyone with a DV camera could make their own videos. We edited our work, sold DVDs and held local screenings, stirring up the activist community and telling stories from the point of view of people on the ground. Sure, now we have Twitter and social media, but at the time, this was a huge deal in breaking apart the top-down structures of media dissemination.

Here is No Nos Vamos, a hastily-edited video about evictions in San Francisco. Yes, this was 14 years ago.

I’ve since moved away from video documentary work and towards making artwork: sculpture, performance, video and more. The video-activist work and documentary video in general felt overly confining as a creative tool.

My current artistic focus is to transform datasets using custom software code into physical objects. I’ve been working with the amazing fabrication machines at Autodesk’s Pier 9 facility to make work that was not previously possible.

This dataset (also provided through the SF Rent Board) includes all the no-fault evictions in San Francisco. I got my computer geek on…well, I do try to use my programming powers for non-profit work and artwork.

I mapped the data into vector shapes using the open-source C++ toolkit OpenFrameworks, writing code that transformed the ~9300 data points into plottable shapes I could open in Illustrator. There, I did some work tweaking the strokes and styles.

sf_evictions_20x20
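The transform code isn’t published here, but the core step might look like this rough sketch: project each eviction’s lat/lon with a simple equirectangular mapping and write the points out as an SVG that Illustrator can open. The bounding box, sizes, and styling are assumptions.

```cpp
#include <fstream>
#include <string>
#include <vector>

struct Eviction { double lat, lon; };

// Write eviction points as SVG circles (equirectangular projection).
void writeSvg(const std::vector<Eviction>& points, const std::string& path) {
    // Rough bounding box around San Francisco (an assumption).
    const double minLon = -122.52, maxLon = -122.35;
    const double minLat = 37.70,  maxLat = 37.82;
    const double w = 2000.0, h = 2000.0;

    std::ofstream svg(path);
    svg << "<svg xmlns='http://www.w3.org/2000/svg' width='" << w
        << "' height='" << h << "'>\n";
    for (const auto& p : points) {
        const double x = (p.lon - minLon) / (maxLon - minLon) * w;
        const double y = (maxLat - p.lat) / (maxLat - minLat) * h;  // flip y
        svg << "  <circle cx='" << x << "' cy='" << y
            << "' r='3' fill='none' stroke='black'/>\n";
    }
    svg << "</svg>\n";
}
```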

This is what the etching looks like from above, once I ran it through the water jet. There were a lot of settings and tests to get to this point, but the final results were beautiful.

waterjet-overhead

The material is 3/4″ aluminum honeycomb. I tuned the water-jet’s high pressure to pierce through the top layer, but not the bottom layer. However, the water has to go somewhere: its collisions against the honeycomb produce unpredictable results.

…just like the evictions themselves. We don’t know the full effect of displacement, but can only guess as the city is rapidly becoming less diverse. The result is below, a 20″ x 20″ etching.

Bad Data: 18 Years of San Francisco Evictions

baddata_sfevictions

The Airbnb debate is a little less clear-cut. Yes, I do use Airbnb. It is incredibly convenient. I save money while traveling and also see neighborhoods I’d otherwise miss. However, the organization and its effect on city economies is a contentious one.

For example, there is the hotel tax, which after 3 years Airbnb finally consented to paying — 14% to the city of San Francisco. Note: this was after they already had a successful business.

There also seems to be a long-term effect on rent. Folks (and I’ve met several who do this) are renting out their places as tenants on Airbnb. Some don’t actually live in their apartments any longer. The effect is to take a unit off the rental market and turn it into a vacation rental. Some argue that this also skirts rent-control law in the first place, which was designed as a compromise between landlords and tenants.

There are potential zoning issues as well…a myriad of issues around Airbnb.

BAD DATA: 2015 AIRBNB LISTINGS, etching file

airbnb_sf

In any case, the locations of the Airbnb rentals (self-reported, not a complete list) certainly fit the premise of the Bad Data series. It’s an amazing dataset. Thanks to darkanddifficult.com for this data source.

BAD DATA: 2015 Airbnb Listings

baddata_airbnb