Letting It Go To Voicemail is about communication anxiety. It’s about the stress that builds up when we think about our inbox or our voicemail. It’s about the overwhelming crush of communication that comes our way each week, and how it impacts us mentally.
Letting It Go To Voicemail is hyper-minimal in construction, consisting of only a single oscillator and an algorithm for generating a buffer. The algorithm was suggested to me by Larry Polansky. He calls it The Longest Melody in the World, and it generates noise that is something like a probabilistic drunkard's walk. This piece simply sweeps the probability parameter from low to high, forwarding the resulting buffer to the oscillator and drawing it on a polar coordinate system.
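Polansky's actual algorithm isn't spelled out here, so the following is only a sketch of the general idea in plain Java: a walk that usually keeps moving in its current direction, but changes direction at random with some probability p. The class name, step size and edge-reflection behavior are all my own assumptions, not details of the piece.

```java
import java.util.Random;

public class DrunkardBuffer {
    // Fill a buffer with a probabilistic drunkard's walk. At each sample the
    // value moves by a small fixed step; with probability p the direction is
    // re-chosen at random, otherwise the previous direction is kept.
    // Sweeping p from low to high moves the output from smooth toward noise.
    static float[] walk(int length, float p, long seed) {
        Random rng = new Random(seed);
        float[] buffer = new float[length];
        float value = 0.0f;
        float step = 0.01f;
        int direction = 1;
        for (int i = 0; i < length; i++) {
            if (rng.nextFloat() < p) {
                direction = rng.nextBoolean() ? 1 : -1; // random turn
            }
            value += direction * step;
            // reflect at the edges so the signal stays in [-1, 1]
            if (value > 1.0f)  { value = 1.0f;  direction = -1; }
            if (value < -1.0f) { value = -1.0f; direction = 1; }
            buffer[i] = value;
        }
        return buffer;
    }

    public static void main(String[] args) {
        // low p: a slowly wandering waveform; high p: something close to noise
        float[] smooth = walk(44100, 0.01f, 42L);
        float[] noisy  = walk(44100, 0.9f, 42L);
        System.out.println(smooth.length + " samples generated");
    }
}
```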
Rendering video in Processing is easy: the MovieMaker class can write QuickTime video files directly from a sketch. Unfortunately, Processing doesn't supply tools for rendering audio alongside the MovieMaker class, so rendering the output from a multimedia program can be a real headache.
I’ve spent a lot of time working on this problem over the last few months. I tried screen-capture software, but even the professional screen-capture apps aren’t suited to the task: they cause glitches in the audio, drop frames, and slow down the sketch itself. I also tried rendering using external hardware, but the only affordable device for capturing VGA output averages a mediocre 10 frames per second, and the frame rate is unacceptably inconsistent.
So the solution had to come from code, and in the end it’s pretty simple. Admittedly, it still slows down your sketch, but if you lower the resolution, you can get acceptable, synchronized audio and video which can be combined in any video editor.
Synchronizing MovieMaker Based on the Audio Stream
The solution is to render video frames based on the position in the audio output buffer. Simply monitor the position in the audio stream, and render a video frame every so many samples.
There are three basic code changes that are necessary to get this working. First, calculate the number of audio samples that will occur per frame of video. For this to work, the frame rate must be relatively low; 12 frames per second works well for me (3675 samples per frame at 44.1kHz).
int MovieFrameRate = 12;
float AudioSamplesPerFrame = 44100.0f / (float)MovieFrameRate;
Second, set up a Beads RecordToSample object, so that you can monitor how many samples of audio have been processed.

AudioFormat af = new AudioFormat(44100.0f, 16, 1, true, true);
outputSample = new Sample(af, 44100);
rts = new RecordToSample(ac, outputSample, RecordToSample.Mode.INFINITE);
Finally, call this subroutine in your draw function, and make sure to finalize the audio and the video when the program ends.
// this routine adds video frames based on how much audio has been processed
void addVideoFrames() {
  // if we have enough audio to do so, then add a frame to the video
  if( rts.getNumFramesRecorded() > MovieFrameCount * AudioSamplesPerFrame ) {
    // we may have to add multiple frames
    float AudioSamples = rts.getNumFramesRecorded() - (MovieFrameCount * AudioSamplesPerFrame);
    while( AudioSamples > AudioSamplesPerFrame ) {
      mm.addFrame(); // mm is the sketch's MovieMaker instance
      MovieFrameCount++;
      AudioSamples -= AudioSamplesPerFrame;
    }
  }
}
After your program completes, you just need to stitch the audio and video together using any old video editor at your disposal.
Here’s an example sketch rendered using this method.
“Cannot Connect” is a problem for both computers and people. When dealing with technology, we receive this message when we try to use something new. For people, this can be a problem in every sort of relationship.
The keyboard is a tool that people use every day to try to connect with other people. Through blogs, tweets, prose and poetry, we try to engage other humans through our work at the keyboard.
In this piece, the performer attempts to connect to both the computer and the audience through the keyboard. The software presents a randomized electronic instrument each time it is started. It selects from a palette of samples, synthesizers and signal processing effects. The performer must feel out the new performance environment and use it to connect to the audience by typing free association verse. 
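The post doesn't include the selection code, so purely as an illustration, a randomized instrument could be built by drawing from palettes at startup. Every name below is a placeholder of my own, not the piece's actual palette.

```java
import java.util.Random;

public class RandomInstrument {
    // Hypothetical palettes -- the piece's actual samples, synthesizers and
    // effects aren't listed in the post; these names are placeholders.
    static final String[] SYNTHS  = { "sine", "square", "fm", "granular" };
    static final String[] EFFECTS = { "delay", "reverb", "bitcrush", "none" };

    final String synth;
    final String effect;

    // Each run draws a fresh combination from the palettes, so the
    // performer never knows the instrument in advance.
    RandomInstrument(Random rng) {
        synth  = SYNTHS[rng.nextInt(SYNTHS.length)];
        effect = EFFECTS[rng.nextInt(EFFECTS.length)];
    }

    public static void main(String[] args) {
        RandomInstrument inst = new RandomInstrument(new Random());
        System.out.println("tonight's instrument: " + inst.synth + " + " + inst.effect);
    }
}
```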
This is my latest work with Processing, Bead and NextText.
Over the past year, I’ve had the pleasure of discovering Oliver Bown’s wonderful sound art library, Beads. Beads is a library for creating and analyzing audio in Processing or Java, and it is head and shoulders above the other sound libraries that are available for Processing. From the ground up, Beads is made for musicians and sound artists. It takes ideas popularized by CSound, Max and other popular sound art environments, and intuitively wraps them in the comfort of the Processing programming language.
The book covers all of the standard sound-art topics in a straightforward tutorial style. Each chapter addresses a basic topic, then demonstrates it in code. Topics covered include Additive Synthesis, Frequency Modulation, Sampling, Granular Synthesis, Filters, Compression, Input/Output, MIDI, Analysis and everything else an artist may need to bring their Processing sketches to life.
It’s true that these topics are well covered in other environments and in other places. There is a plethora of sound art platforms these days; I love Pure Data, Max, SuperCollider and even Tassman and Reaktor. But there are a million people out there making visual art in Processing who don’t have a good way of exploring multimedia in the environment in which they’re comfortable. This tutorial is aimed at Processing programmers who think that sound art is a bridge too far.