
Category: Programming

Disconnected, an Album of Algorithmic Sound Collages from the Web

I’m pleased to announce the release of Disconnected, an album of algorithmic sound collages generated by pulling sounds from the web.

I prefer to call this album semi-algorithmic because some of the music is purely software-generated, while other pieces are a collaboration between the software and myself. Tracks four and six are purely algorithmic, while the other tracks are a mix of software-generated material and more traditionally composed material.


Cover

The software used in the sound collage pieces (1, 3, 4, 6) was inspired by Melissa Schilling’s Small World Network Model of Cognitive Insight. Her theory essentially says that moments of cognitive insight, or creativity, occur whenever a connection is made between previously distantly related ideas. In graph theory, these types of connections are called bridges, and they have the effect of bringing entire neighborhoods of ideas closer together.

I applied Schilling’s theory to sounds from freesound.org. My software searches for neighborhoods of sounds that are related by aural similarity and stores them in a graph of sounds. These sounds are then connected with more distant sounds via lexical connections from wordnik.com. These lexical connections are bridges, or moments of creativity. This process is detailed in the paper Composing with All Sound Using the FreeSound and Wordnik APIs.
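
To make the graph structure concrete, here is a minimal sketch in Java of the kind of sound graph described above. All the class and method names here are mine, invented for illustration; the actual system is the one described in the paper.


import java.util.*;

// Sketch of the sound graph described above: vertices are sounds, edges are
// either aural-similarity links within a neighborhood or lexical bridges
// between distant neighborhoods. Names are hypothetical, for illustration.
class SoundGraph {
  private final Map<String, Set<String>> edges = new HashMap<>();

  private void connect(String a, String b) {
    edges.computeIfAbsent(a, k -> new HashSet<>()).add(b);
    edges.computeIfAbsent(b, k -> new HashSet<>()).add(a);
  }

  // sounds judged aurally similar form a tightly knit neighborhood
  void addSimilarityEdge(String soundA, String soundB) { connect(soundA, soundB); }

  // a lexical bridge links two otherwise distant neighborhoods:
  // the graph-theoretic analogue of a moment of creative insight
  void addLexicalBridge(String soundA, String soundB) { connect(soundA, soundB); }

  Set<String> neighbors(String sound) {
    return edges.getOrDefault(sound, Collections.emptySet());
  }
}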

Finally, these sound graphs must be activated to generate sound collages. I used a modified boids algorithm to allow a swarm to move over the sound graph. Sounds were triggered whenever the population on a vertex surpassed a threshold.
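
As a rough illustration of this activation step, here is a small Java sketch that substitutes a plain random walk for the modified boids rules: agents wander the graph, and a vertex fires whenever its population passes a threshold. Everything here (names, numbers, the toy graph) is invented for illustration.


import java.util.*;

// Toy activation of a sound graph: a swarm of agents walks the graph and a
// sound is triggered whenever a vertex's population passes a threshold.
// (The real software uses a modified boids algorithm; this substitutes a
// plain random walk to keep the sketch short.)
public class SwarmActivation {
  static final int THRESHOLD = 5; // population needed to trigger a sound

  public static void main(String[] args) {
    // toy adjacency map: each sound lists its neighbors in the graph
    Map<String, List<String>> graph = new HashMap<>();
    graph.put("rain",    Arrays.asList("thunder", "static"));
    graph.put("thunder", Arrays.asList("rain", "drum"));
    graph.put("static",  Arrays.asList("rain", "drum"));
    graph.put("drum",    Arrays.asList("thunder", "static"));

    Random rng = new Random();
    List<String> vertices = new ArrayList<>(graph.keySet());

    // drop 20 agents onto random vertices
    List<String> agents = new ArrayList<>();
    for (int i = 0; i < 20; i++)
      agents.add(vertices.get(rng.nextInt(vertices.size())));

    for (int step = 0; step < 10; step++) {
      // each agent moves to a random neighbor
      agents.replaceAll(v -> {
        List<String> nbrs = graph.get(v);
        return nbrs.get(rng.nextInt(nbrs.size()));
      });

      // count the population on each vertex; crowded vertices fire
      Map<String, Integer> population = new HashMap<>();
      for (String v : agents) population.merge(v, 1, Integer::sum);
      for (Map.Entry<String, Integer> e : population.entrySet())
        if (e.getValue() >= THRESHOLD)
          System.out.println("step " + step + ": trigger sound " + e.getKey());
    }
  }
}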

Disconnected is available for download from Xylem Records.



The Sound of Sorting

In this video, a YouTube user sonifies several common sorting algorithms.

This particular audibilization is just one of many ways to generate sound from running sorting algorithms. Here on every comparison of two numbers (elements) I play (mixing) sine waves with frequencies modulated by values of these numbers. There are quite a few parameters that may drastically change the resulting sound – I just chose parameters that imo felt best. [1]
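
To make the technique concrete, here is a minimal sketch in Java (my own toy version, not the video author’s code): a bubble sort in which every comparison appends a short sine blip whose frequency is scaled from the compared element’s value, with the result written out as a WAV file.


import javax.sound.sampled.*;
import java.io.*;

// Toy sonification of a bubble sort: each comparison appends a 30 ms sine
// blip whose frequency is mapped from the compared element's value.
// (Illustrative only; all parameters are arbitrary, not the video author's.)
public class SortSonification {
  static final float RATE = 44100f;
  static ByteArrayOutputStream pcm = new ByteArrayOutputStream();

  static void blip(int value, int maxValue) {
    double freq = 200 + 1800.0 * value / maxValue; // map value to 200-2000 Hz
    int samples = (int) (RATE * 0.03);             // 30 ms per comparison
    for (int i = 0; i < samples; i++) {
      short s = (short) (Math.sin(2 * Math.PI * freq * i / RATE) * 8000);
      pcm.write(s & 0xFF);        // little-endian 16-bit mono
      pcm.write((s >> 8) & 0xFF);
    }
  }

  public static void main(String[] args) throws IOException {
    int[] a = {5, 1, 4, 2, 8, 3, 7, 6};
    for (int i = 0; i < a.length; i++)
      for (int j = 0; j < a.length - i - 1; j++) {
        blip(a[j], 8); // sonify the comparison
        if (a[j] > a[j + 1]) { int t = a[j]; a[j] = a[j + 1]; a[j + 1] = t; }
      }
    AudioFormat fmt = new AudioFormat(RATE, 16, 1, true, false);
    byte[] data = pcm.toByteArray();
    AudioInputStream ais = new AudioInputStream(
        new ByteArrayInputStream(data), fmt, data.length / 2);
    AudioSystem.write(ais, AudioFileFormat.Type.WAVE, new File("sort.wav"));
  }
}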



Interview with Viznut, Creator of Minimal Art Platform IBNIZ

Recently, this video of IBNIZ in action made its way around the electronic music blogosphere. IBNIZ is an extremely enigmatic art platform. It’s not necessarily easy, or pretty, or even broadly useful. IBNIZ is uniquely targeted at a small coterie of artists with specific aesthetic goals. I had the opportunity to ask Viznut, the creator of IBNIZ, about the language, and his views were so intriguing that I am publishing them unedited here.

Evan Merz: In your blog post, you say that IBNIZ is an old-school platform. How was it inspired by older technology? How do you feel that IBNIZ relates to old-school software or old-school art practices?

Viznut: I don’t actually call it an oldschool platform — those words tend to have strict historical connotations to a lot of people — but a platform that has the kind of simplicity, concreteness and freedom that can also be found in old computers and makes them fun to hack with. You use a small instruction set to manipulate numerical values that consist of tangible bits. Nothing is arbitrary, undefined or illegal: every bit has a purpose, every combination of instructions has a specific, potentially useful outcome. These platform-level properties are what shaped a lot of the oldschool home computer culture, including the core artistic practices of the demoscene.

While IBNIZ shares these properties with actual “oldschool platforms”, it uses totally different design choices to achieve them. As IBNIZ also tries to break new ground by having code compactness as a leading design goal, I would rather call it “experimental” than “oldschool”.

The main artistic practice IBNIZ has been designed to serve is the “sub-256-byte” demo art that aims at producing increasingly impressive visual or even audiovisual programs under very tight program size limitations. The kind of attitude that gave birth to this practice has been a very prominent part of computer hacker subcultures since the very beginning — consider the display hacks of the 1950s and 1960s, for example — so I guess it could very well be called “oldschool”. However, I would avoid binding it to any specific time period, as it is always possible to approach any computing system in this way. The issue here is just that modern mainstream platforms that hide their bits and bytes under numerous abstraction layers do not encourage the kind of “bit-bending challenges” that IBNIZ or classic computers do.

When talking about the kind of computer art that is prominent on old platforms and small program/data sizes, I prefer to use the term “Computationally Minimal Art” as it eliminates the need for a timeline. IBNIZ concentrates on the program-size aspect of CMA while being considerably less minimal in the machine-spec department, allowing a decent number of pixels, millions of simultaneous colors and a lot of processing power.


IBNIZ Screenshot

EM: Do you think that growing up with personal computers in the 80s and the 90s has made you a different artist than you would be if you were born in 2000? Specifically, do the old methods give you a different perspective on computer art? Do you think that younger artists would benefit by exploring older methods and environments?

V: I believe that being influenced by eight-bit computers at a formative age has made me assign some kind of archetypal roles to bits, pixel patterns and synthesized waveforms. These are the primitives that define computer art for me at the most fundamental level. If I had been born twenty years later, I would perhaps have ended up embracing polygons instead of pixels and formal lines of code instead of concrete bits and bytes.

Of course, it might also have been possible that if the computers of my childhood had been too complex and unpredictable, I might not have become interested in them at all, at least on a very deep level. The computer would have remained a mere tool for me instead of material, a platform for fixed applications instead of a platform for code-level experimentation. This is what concerns me a lot at times. Many of the kind of minds that became computer virtuosos in the eighties would find themselves completely lost if they were introduced to computers today. This is why I find it important to create and advocate the kind of virtual toys and cultural forms that make the “oldschool path” more accessible and interesting.

How much a computer artist can benefit from experimenting with the kind of “bit-twiddling” typical to oldschool platforms depends a lot on his or her psychological characteristics, I guess. I would say that at least those people who show any symptoms of “hacker mentality”, including a kind of desire to completely understand and control a limited set of building blocks and to explore their potentials, should definitely try it out.

I have noticed that many of the younger demoscene artists have an interest in platforms that had already fallen out of fashion by the time they were born. I think this is very understandable: if you are able to grasp the fundamentals of a code-based artform that embraces technical excellence and experimentation, you also have the potential to appreciate any computing platform as artistic material by its unique inherent restrictions and characteristics, regardless of its age, cultural context or whether it is considered “oldschool” or “newschool”.

EM: You have a very interesting code aesthetic. Is IBNIZ designed, in part, to show how code can be beautiful? Do you think that beautiful code can be appreciated along with a beautiful work of art? Does showing the code change the perception of the art?

V: IBNIZ has been mainly designed to produce maximal results from a minimal number of characters. So, the only principle for what the code is supposed to look like is “small is beautiful”. When designing the language with this principle in mind, however, I thought it might help a lot if I aimed at some level of implicit elegance. In order to attain this elegance, I’ve taken a lot of influence from FORTH, which I regard as a particularly beautiful programming language as it combines a Lisp-like purity and simplicity with an Assembly-like concreteness and straightforwardness.

I’m not sure how showing the code could change the perception of an artwork, but as a demoscener I know that by knowing the size of the program and something about the platform it is written for, it is possible to appreciate the code even without actually seeing it. If a 64-byte-long program produces a 3D-rotating, Phong-shaded torus, any demoscener will be ready to praise the quality, beauty and impressiveness of the code just by knowing that such an achievement must necessarily involve very well-thought-out code and math. So, in extreme size-coding, it all boils down to code length: the most beautiful code for any given task is the shortest possible code for the task, period. However, I do believe that even when sticking to pure size-optimization, the process produces a kind of emergent beauty, a lot like how simplified mathematical formulas tend to give more pleasing impressions than their non-simplified forms.

When trying to show off the inherent beauty of IBNIZ code, I don’t think the current “line-noise format” with one ASCII character per instruction serves this purpose very well. The beauty could perhaps be much better grasped by, for instance, a visualization of the abstract stack flow.


IBNIZ Screenshot

EM: What artists inspired you to build IBNIZ? Did any particular artist, whether part of the demoscene or not, spur this project? Are there any coders who inspired you?

V: The path that eventually led to IBNIZ originally started from a technical idea inspired by the continuing progress of 4-kilobyte demoscene productions: once the maximum code density of x86 machine code has been reached in a 4-kilobyte demo, would it still be possible to increase the code density even further by putting in a custom bytecode interpreter? This led to a lot of experimentation with different virtual machine concepts that would allow for a maximum code density with a minimum overhead for simple effects. I don’t really know of any specific artists who had worked on anything similar, but I would say that the general attitude and mindset that the demoscene culture in general had cultivated in me has influenced many of the design choices.

The IBNIZ project had been dormant for a couple of years before I finally finished the design and implementation. A major motivator for the revival was a 23-byte Commodore 64 demo, “Wallflower” by 4mat of Ate Bit, that was groundbreaking by producing several minutes of interesting structured glitches from a couple of simple bitshift operations. This inspired me to revive the project and to do some musical experiments with very short C programs. As this unexpectedly grew into a collective movement called “bytebeat” which also gave birth to several different interactive experimentation tools, I really had to finish IBNIZ. How IBNIZ eventually came out was somewhat affected by the bytebeat movement, especially the Flash-based on-line experimentation tool by Paul Hayes — I wouldn’t have emphasized the interactive editor so much without this contribution.

EM: For people who want to use IBNIZ, what tricks can you share? What little snippets of code are particularly effective or useful?

V: This would require a very long answer, as the relevant tricks depend quite a lot on whether the user is working on video or audio, the level of determinism involved and what kind of results are aimed at. I am working on a “full-scale” IBNIZ reference guide that describes every opcode with their intended purposes and also gives examples and useful “nonintended uses” for each of them.

Many people who experiment with IBNIZ just try out different combinations of opcodes without actually knowing what they are doing. For those who prefer this method, I would recommend combining basic arithmetic and stack manipulation with the stack-pick opcode ‘)’ that often produces interesting feedback effects. However, I think even random experimenters can benefit from obtaining some level of understanding of how the VM actually works.

For those who want to experiment with audio, I would recommend looking into the material available on the “bytebeat” formulas, including a couple of blog posts and a technical paper by me. When translating these formulas into the RPN syntax used by IBNIZ, remember that ‘w’ yields the value of the ‘t’ variable in the audio context. IBNIZ also has a different sample rate and number format so the formulas will often need adjustments to their constants.
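
(A quick aside for readers who haven’t run into bytebeat: the classic setup is a single C expression of a sample counter t, truncated to eight bits and played at 8 kHz. As a minimal illustration, the same idea in Java might look like the sketch below; the expression used is an arbitrary simple example of mine, not one of the published bytebeat formulas.)


import javax.sound.sampled.*;

// Minimal bytebeat player: evaluate one expression of the sample counter t
// per 8-bit sample at 8 kHz. The expression below is an arbitrary example.
public class Bytebeat {
  public static void main(String[] args) throws LineUnavailableException {
    AudioFormat fmt = new AudioFormat(8000f, 8, 1, false, false); // 8 kHz, unsigned 8-bit, mono
    SourceDataLine line = AudioSystem.getSourceDataLine(fmt);
    line.open(fmt);
    line.start();
    byte[] buf = new byte[4096];
    for (int t = 0; t < 8000 * 30; ) {   // about 30 seconds of audio
      for (int i = 0; i < buf.length; i++, t++)
        buf[i] = (byte) (t * (t >> 8));  // the bytebeat expression
      line.write(buf, 0, buf.length);
    }
    line.drain();
    line.close();
  }
}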

EM: Could you share your favorite program that you’ve written so far with IBNIZ?

V: We are still at a rather early stage in IBNIZ culture, and I’m sure that a lot of new “favorite programs” will pop up once the regular IBNIZ demo competitions start taking place. Right now, the most advanced programs available for IBNIZ are fractal renderers. Here’s a Mandelbrot zoomer by myself — very slow in the current public version of IBNIZ but a lot faster in the upcoming JIT-enabled version:

vArs1ldv*vv*0!1-1!0dFX4X1)Lv*vv*-vv2**0@+x1@+4X1)Lv*vv*+4x->?Lpp0:ppRpRE.5*;

There’s also the random-program approach. Sometimes just a couple of characters are enough for an interesting result. I have tested all the possible programs from zero to three characters in length in order to find some nice ones, and my favorite among them is probably ‘d)r’, which produces a long and varying sequence of audiovisual glitches.

EM: What are your plans for IBNIZ? Are you working on a new version? What changes are in the pipeline? Do you plan on compromising on some aspects of it, or will it stay hardcore and old-school?

V: IBNIZ is still in a somewhat moving state and that’s why a new version comes up from time to time. Right now I am concentrating on a JIT compiler in order to give the VM implementation a much needed performance boost. The abstract VM will not undergo any major changes, however. A couple of new opcodes will probably be added, but otherwise it is already pretty much fixed. Once we reach version 2.0, there will be no further changes to the VM definition.

A near-future plan is to start regular IBNIZ demo competitions in order to advocate the platform and to inspire the discovery of new tricks and techniques. I think IBNIZ also has potential as a livecoding platform, and this series of competitions will support the development of livecoding skills as well.

EM: Is there anything else people should know about IBNIZ or demo art in general? Where can we go to see more demo art by you or other people?

V: The demoscene is a multi-faceted subculture, so there is a lot of different demo art, from very technical to very non-technical and from very constrained to very non-constrained. The size classes of 256 bytes or less are among the most technical, constrained and hardcore genres of demo art, another example being the demos running on very old and limited platforms. I have created most of my most acclaimed work for the unexpanded VIC-20 but have recently started to expand my sphere of technical creativity into highly constrained “non-8-bit” works as well.

Currently, the leading community website and production database for the demoscene is Pouet.net, which allows for searching pieces of demo art by size class and platform. Although demos are executable programs by definition, many of them can also be watched as video captures that are available on YouTube and other video websites. My own work is released under the group label “PWP” and you can find it by doing a Pouet.net search for “pwp” or a YouTube search for “viznut” or “pwp”.

How to Render Synchronous Audio and Video in Processing using Beads

Rendering video in Processing is easy: the MovieMaker class makes it simple to render QuickTime video files from a Processing sketch. Unfortunately, Processing doesn’t supply tools for rendering audio alongside MovieMaker. Hence, rendering the output from a multimedia program can really be a headache.

I’ve spent a lot of time working on this problem in the last few months. I tried screen-capture software, but even the professional screen capture apps aren’t suited to the task. They cause glitches in the audio, drop frames and slow down the sketch itself. I also tried rendering using external hardware. Unfortunately, the only affordable device for capturing VGA output averages a mediocre 10 frames per second, and the frame rate is unacceptably inconsistent.

So the solution had to come from code, and in the end, the solution is pretty simple. Admittedly, this solution still slows down your sketch, but if you lower the resolution, you can get acceptable, synchronized audio and video which can be combined in any video editor.

Synchronizing MovieMaker Based on the Audio Stream

The solution is to render video frames based on the position in the audio output buffer. Simply monitor the position in the audio stream, and render a video frame every so many samples.

There are three basic code changes necessary to get this working. First, calculate the number of audio samples that will occur per frame of video. For this to work, the frame rate must be relatively low; 12 frames per second works well for me.


int MovieFrameRate = 12; // target video frame rate
float AudioSamplesPerFrame = 44100.0f / (float)MovieFrameRate; // 3675 samples per frame at 44.1kHz

Then set up your audio recording objects as detailed in my free ebook: Sonifying Processing: The Beads Tutorial.


// 44.1kHz, 16-bit, mono, signed, big-endian
AudioFormat af = new AudioFormat(44100.0f, 16, 1, true, true);
outputSample = new Sample(af, 44100);
// record the audio output into outputSample, growing it indefinitely
rts = new RecordToSample(ac, outputSample, RecordToSample.Mode.INFINITE);

Finally, call this subroutine in your draw function, and make sure to finalize the audio and the video when the program ends.


// this routine adds video frames based on how much audio has been processed
void SyncVideoAndAudio()
{
  // if we have enough audio to do so, then add a frame to the video
  if( rts.getNumFramesRecorded() > MovieFrameCount * AudioSamplesPerFrame )
  {
    // we may have to add multiple frames
    float AudioSamples = rts.getNumFramesRecorded() - (MovieFrameCount * AudioSamplesPerFrame);
    while( AudioSamples > AudioSamplesPerFrame )
    {
      mm.addFrame();
      MovieFrameCount++;
      AudioSamples -= AudioSamplesPerFrame;
    }
  }
}

After your program completes, you just need to stitch the audio and video together using any old video editor at your disposal.

Here’s an example sketch rendered using this method.



And here is the source code for that sketch: Video_Audio_Sync_Test_03

I hope this saves you some time and money!

IBNIZ – Multimedia Coding Environment



As demonstrated by the video, IBNIZ (Ideally Bare Numeric Impression giZmo) is a virtual machine and a programming language that generates video and audio from very short strings of code. Technically, it is a two-stack machine somewhat similar to Forth, but with the major exception that the stack is cyclical and also used as an output buffer. Also, as every IBNIZ program is implicitly inside a loop that pushes a set of loop variables on the stack on every cycle, even an empty program outputs something (i.e. a changing gradient as video and a constant sawtooth wave as audio). [1]

Download IBNIZ at http://pelulamu.net/ibniz/

Phil Burk’s Look Back Melody Algorithm

First of all, I probably shouldn’t attribute this algorithm to Phil Burk. I imagine that many people have implemented a version of this algorithm. It’s a simple, almost fundamental musical algorithm, but he is the first person who brought it to my attention, so for the time being, I will call it Phil Burk’s Look Back Algorithm.

In pseudocode, the algorithm looks like this:


1. Generate a handful of random note events (pitch, duration, velocity)
2. For each successive note in the piece, notes[i] = notes[i - delay] + transposition
3. Occasionally insert a random note event
Where notes is an array of note events, notes[i] represents the current event, delay represents how far to look back, and transposition is a transformation of the previous notes.
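
Here is a compact sketch of that loop in Java, generating pitches only. The delay range and the transposition set are arbitrary choices of mine; the Eclipse project linked at the end of this post handles full note events and MIDI output.


import java.util.Random;

// Compact sketch of the Look Back idea: seed a few random pitches, then
// derive each new pitch from an earlier one plus a transposition,
// occasionally injecting a fresh random pitch. (Pitch-only; the delay and
// transposition values here are arbitrary choices.)
public class LookBackMelody {
  public static void main(String[] args) {
    Random rng = new Random();
    int length = 64;
    int[] pitches = new int[length];

    // 1. seed a handful of random notes (MIDI pitches 60-72)
    int seed = 8;
    for (int i = 0; i < seed; i++)
      pitches[i] = 60 + rng.nextInt(13);

    // 2. look back: notes[i] = notes[i - delay] + transposition
    int[] transpositions = {0, 2, -2, 5, -5, 7, -7}; // diatonic-ish steps
    for (int i = seed; i < length; i++) {
      if (rng.nextInt(10) == 0) {
        // 3. occasionally insert a fresh random note
        pitches[i] = 60 + rng.nextInt(13);
      } else {
        int delay = 1 + rng.nextInt(seed); // how far to look back
        int t = transpositions[rng.nextInt(transpositions.length)];
        pitches[i] = pitches[i - delay] + t;
      }
    }

    for (int p : pitches) System.out.print(p + " ");
  }
}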

Phil brought up this algorithm in reference to a hyper-simplistic fugue generator. Essentially all it does is repeat sections of music that have already been generated. It pulls subsets of earlier note events and subtly transforms them. It infinitely noodles around on whatever random note events are generated in the first place.

The algorithm is remarkably effective for its simplicity. It is an elegant way of generating really coherent melodies. Here is a simple melody I generated using this algorithm: LookBackOutput.mid

And here is a simple Eclipse project that implements the Look Back algorithm in Java and outputs MIDI files.