
Category: Multimedia

unnamed soundsculpture by Daniel Franke

The basic idea of the project is built upon the notion of creating a moving sculpture from the recorded motion data of a real person. For our work we asked a dancer to visualize a musical piece (Kreukeltape by Machinefabriek) as closely as possible through the movements of her body. She was recorded by three depth cameras (Kinects), whose overlapping images were later merged into a three-dimensional volume (a 3D point cloud), so we were able to use the collected data throughout the rest of the process. The three-dimensional image gave us completely free control of the virtual camera, with no limitations on perspective. The camera also reacts to the sound, supporting the performer's physical imitation of the musical piece. It moves through a noise field, where a simple modification of the random seed can consistently create new versions of the video, each offering a different composition of the recorded performance. The multi-dimensionality of the sound sculpture is already contained in every movement of the dancer, as the recorded data allows any imaginable perspective.
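The seed-driven camera is the part that is easiest to picture in code. Below is a minimal Processing-style sketch of the general idea only, not the studio's actual pipeline: the cloud array is a hypothetical stand-in for the recorded Kinect data, and the noise-to-camera mapping is a guess at how a noise field might steer the view.

int SEED = 42; // a different seed gives a different camera path, i.e. a new "version"
PVector[] cloud = new PVector[22000]; // hypothetical stand-in for the recorded point cloud

void setup() {
  size(1280, 720, P3D);
  noiseSeed(SEED);
  for (int i = 0; i < cloud.length; i++) {
    // placeholder data in place of the merged Kinect recording
    cloud[i] = new PVector(random(-200, 200), random(-200, 200), random(-200, 200));
  }
}

void draw() {
  background(0);
  float t = frameCount * 0.005;
  // sample the noise field for a smoothly wandering camera position
  camera(lerp(-400, 400, noise(t)),       // eye x
         lerp(-400, 400, noise(t + 100)), // eye y
         lerp(200, 800, noise(t + 200)),  // eye z
         0, 0, 0,                         // always look at the cloud's center
         0, 1, 0);                        // up vector
  stroke(255);
  for (PVector p : cloud) {
    point(p.x, p.y, p.z);
  }
}

Re-running the sketch with a different SEED replays the same recorded data under a new camera path, which is the "new versions from a new random seed" idea described above.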

The body, constant and indefinite at the same time, already bursts the space with its mere physicality, creating a first distinction between the self and its environment. Only the body's movements create a reference to the otherwise invisible space, much like the dots that bounce on the ground to give it a physical dimension. Thus, the sound-dance constellation in the video does not only simulate a purely virtual space. The complex dynamics of the body's movements are also strongly self-referential. With the complex, quasi-static, inconsistent forms the body is painting, a new reality space emerges whose simulated aesthetics go far beyond numerical codes.

Similar to painting, a single point appears to be still very abstract, but the more points are connected to each other, the more complex and concrete the image seems. The more perfect and complex the alternative worlds we project (Vilém Flusser) and the closer together their point elements, the more tangible they become. A digital body, consisting of 22 000 points, thus seems so real that it comes to life again. [1]



via @olliebown

Intubation by LRM Performance

LRM Performance is an interdisciplinary collective – a group, or company if you prefer – that seeks to breach the borders between art disciplines. Created by composer David Aladro-Vico and plastic artist Berta Delgado, the collective has a variable lineup that includes performers from mixed disciplines. Their works are live visual, auditive and movement creations, usually non-narrative or abstract. [1]



Fieldwork by Christopher Burns

Fieldwork is a software environment for improvised performance with electronic sound and animation. Two musicians’ sounding performances are fed into the system, and analyzed for pitch, rhythm, and timbral change. When the software recognizes a sharp contrast in one performer’s textures or gestures, it reflects this change by transforming the sound of the other musician’s performance. As a result of this process, the musicians are not only responding to one another as in conventional improvisation, but they are also able to directly modify their duo partner’s sound by interacting with the software. Fieldwork emphasizes rapid, glitchy, and polyrhythmic distortions of the musician’s performances, and establishes unpredictable feedback processes that encourage unexpected improvisational relationships between the performers and computer.
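The cross-adaptive idea at the heart of this is easy to sketch. The following is not Burns's code, just an illustration of the general technique under simplified assumptions: a jump in short-term loudness stands in for the pitch/rhythm/timbre analysis, and hard clipping stands in for the sound transformations.

// Hypothetical sketch: a sharp contrast detected in performer A's signal
// triggers a transformation of performer B's signal.
public class CrossAdaptive {

  // root-mean-square loudness of one block of samples
  static double rms(float[] block) {
    double sum = 0;
    for (float s : block) sum += s * s;
    return Math.sqrt(sum / block.length);
  }

  // process one block from each performer; prevRmsA is A's loudness last block
  static float[] process(float[] a, float[] b, double prevRmsA, double threshold) {
    double change = Math.abs(rms(a) - prevRmsA);
    if (change > threshold) {              // sharp contrast in A's texture
      float[] out = new float[b.length];
      for (int i = 0; i < b.length; i++) { // distort B (hard clipping stand-in)
        out[i] = Math.max(-0.3f, Math.min(0.3f, b[i] * 4f));
      }
      return out;
    }
    return b;                              // otherwise leave B untouched
  }
}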

Performed by Amanda Schoofs (voice) and Christopher Burns (guitar) on October 27, 2011, as part of the Unruly Music festival, a co-production of UW-Milwaukee’s Peck School of the Arts and the Marcus Center for the Performing Arts. [1]



Interview with Viznut, Creator of Minimal Art Platform IBNIZ

Recently, this video of IBNIZ in action made its way around the electronic music blogosphere. IBNIZ is an extremely enigmatic art platform. It’s not necessarily easy, or pretty, or even broadly useful. IBNIZ is uniquely targeted at a small coterie of artists with specific aesthetic goals. I had the opportunity to ask Viznut, the creator of IBNIZ, about the language, and his views were so intriguing that I am publishing them unedited here.

Evan Merz: In your blog post, you say that IBNIZ is an old-school platform. How was it inspired by older technology? How do you feel that IBNIZ relates to old-school software or old-school art practices?

Viznut: I don’t actually call it an oldschool platform — those words tend to have strict historical connotations to a lot of people — but a platform that has the kind of simplicity, concreteness and freedom that can also be found in old computers and makes them fun to hack with. You use a small instruction set to manipulate numerical values that consist of tangible bits. Nothing is arbitrary, undefined or illegal: every bit has a purpose, every combination of instructions has a specific, potentially useful outcome. These platform-level properties are what shaped a lot of the oldschool home computer culture, including the core artistic practices of the demoscene.

While IBNIZ shares these properties with actual “oldschool platforms”, it arrives at them through totally different design choices. As IBNIZ also tries to break new ground by having code compactness as a leading design goal, I would rather call it “experimental” than “oldschool”.

The main artistic practice IBNIZ has been designed to serve is the “sub-256-byte” demo art that aims at producing increasingly impressive visual or even audiovisual programs under very tight program size limitations. The kind of attitude that gave birth to this practice has been a very prominent part of computer hacker subcultures since the very beginning — consider the display hacks of the 1950s and 1960s, for example — so I guess it could very well be called “oldschool”. However, I would avoid binding it to any specific time period, as it is always possible to approach any computing system in this way. The issue here is just that modern mainstream platforms that hide their bits and bytes under numerous abstraction layers do not encourage the kind of “bit-bending challenges” that IBNIZ or classic computers do.

When talking about the kind of computer art that is prominent on old platforms and small program/data sizes, I prefer to use the term “Computationally Minimal Art” as it eliminates the need for a timeline. IBNIZ concentrates on the program-size aspect of CMA while being considerably less minimal in the machine-spec department, allowing a decent number of pixels, millions of simultaneous colors and a lot of processing power.


IBNIZ Screenshot

EM: Do you think that growing up with personal computers in the 80s and the 90s has made you a different artist than you would be if you were born in 2000? Specifically, do the old methods give you a different perspective on computer art? Do you think that younger artists would benefit by exploring older methods and environments?

V: I believe that being influenced by eight-bit computers at a formative age has made me assign some kind of archetypal roles to bits, pixel patterns and synthesized waveforms. These are the primitives that define computer art for me at the most fundamental level. If I had been born twenty years later, I would perhaps have ended up embracing polygons instead of pixels and formal lines of code instead of concrete bits and bytes.

Of course, it might also have been possible that if the computers of my childhood had been too complex and unpredictable, I might not have become interested in them at all, at least not on a very deep level. The computer would have remained a mere tool for me instead of a material, a platform for fixed applications instead of a platform for code-level experimentation. This is what concerns me a lot at times. Many of the kind of minds that became computer virtuosos in the eighties would find themselves completely lost if they were introduced to computers today. This is why I find it important to create and advocate the kind of virtual toys and cultural forms that make the “oldschool path” more accessible and interesting.

How much a computer artist can benefit from experimenting with the kind of “bit-twiddling” typical of oldschool platforms depends a lot on his or her psychological characteristics, I guess. I would say that at least those people who show any symptoms of “hacker mentality”, including a kind of desire to completely understand and control a limited set of building blocks and to explore their potential, should definitely try it out.

I have noticed that many of the younger demoscene artists have an interest in platforms that had already fallen out of fashion by the time they were born. I think this is very understandable: if you are able to grasp the fundamentals of a code-based artform that embraces technical excellence and experimentation, you also have the potential to appreciate any computing platform as artistic material for its unique inherent restrictions and characteristics, regardless of its age, cultural context, or whether it is considered “oldschool” or “newschool”.

EM: You have a very interesting code aesthetic. Is IBNIZ designed, in part, to show how code can be beautiful? Do you think that beautiful code can be appreciated along with a beautiful work of art? Does showing the code change the perception of the art?

V: IBNIZ has been mainly designed to produce maximal results from a minimal number of characters. So, the only principle for what the code is supposed to look like is “small is beautiful”. When designing the language with this principle in mind, however, I thought it might help a lot if I aimed at some level of implicit elegance. In order to attain this elegance, I’ve taken a lot of influence from FORTH which I regard as a particularly beautiful programming language as it combines a Lisp-like purity and simplicity with an Assembly-like concreteness and straightforwardness.

I’m not sure how showing the code could change the perception of an artwork, but as a demoscener I know that by knowing the size of the program and something about the platform it is written for, it is possible to appreciate the code even without actually seeing it. If a 64-byte-long program produces a 3D-rotating, Phong-shaded torus, any demoscener will be ready to praise the quality, beauty and impressiveness of the code just by knowing that such an achievement must necessarily involve very well-thought-out code and math. So, in extreme size-coding, it all boils down to code length: the most beautiful code for any given task is the shortest possible code for the task, period. However, I do believe that even when sticking to pure size-optimization, the process produces a kind of emergent beauty, much as simplified mathematical formulas tend to give more pleasing impressions than their non-simplified forms.

When trying to show off the inherent beauty of IBNIZ code, I don’t think the current “line-noise format” with one ASCII character per instruction serves this purpose very well. The beauty could perhaps be much better grasped by, for instance, a visualization of the abstract stack flow.


IBNIZ Screenshot

EM: What artists inspired you to build IBNIZ? Did any particular artist, whether part of the demoscene or not, spur this project? Are there any coders who inspired you?

V: The path that eventually led to IBNIZ originally started from a technical idea inspired by the continuing progress of 4-kilobyte demoscene productions: once the maximum code density of x86 machine code has been reached in a 4-kilobyte demo, would it still be possible to increase the code density even further by putting in a custom bytecode interpreter? This led to a lot of experimentation with different virtual machine concepts that would allow for maximum code density with minimum overhead for simple effects. I don’t really know of any specific artists who had worked on anything similar, but I would say that the general attitude and mindset that demoscene culture in general had cultivated in me has influenced many of the design choices.

The IBNIZ project had been dormant for a couple of years before I finally finished the design and implementation. A major motivator for the revival was a 23-byte Commodore 64 demo, “Wallflower” by 4mat of Ate Bit, that was groundbreaking by producing several minutes of interesting structured glitches from a couple of simple bitshift operations. This inspired me to revive the project and to do some musical experiments with very short C programs. As this unexpectedly grew into a collective movement called “bytebeat” which also gave birth to several different interactive experimentation tools, I really had to finish IBNIZ. How IBNIZ eventually came out was somewhat affected by the bytebeat movement, especially the Flash-based on-line experimentation tool by Paul Hayes — I wouldn’t have emphasized the interactive editor so much without this contribution.

EM: For people who want to use IBNIZ, what tricks can you share? What little snippets of code are particularly effective or useful?

V: This would require a very long answer, as the relevant tricks depend quite a lot on whether the user is working on video or audio, the level of determinism involved and what kind of results are aimed at. I am working on a “full-scale” IBNIZ reference guide that describes each opcode with its intended purpose and also gives examples and useful “nonintended uses” for each of them.

Many people who experiment with IBNIZ just try out different combinations of opcodes without actually knowing what they are doing. For those who prefer this method, I would recommend combining basic arithmetic and stack manipulation with the stack-pick opcode ‘)’, which often produces interesting feedback effects. However, I think even random experimenters can benefit from obtaining some level of understanding of how the VM actually works.

For those who want to experiment with audio, I would recommend looking into the material available on the “bytebeat” formulas, including a couple of blog posts and a technical paper by me. When translating these formulas into the RPN syntax used by IBNIZ, remember that ‘w’ yields the value of the ‘t’ variable in the audio context. IBNIZ also has a different sample rate and number format so the formulas will often need adjustments to their constants.
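An editorial aside for readers who haven't met bytebeat: a formula over a running sample counter t is evaluated once per sample, and the low eight bits become the audio output. Here is a minimal player in Java using t*(42&t>>10), one of the widely circulated formulas from those experiments; the playback scaffolding is my own sketch, not part of IBNIZ or Viznut's tools.

import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.SourceDataLine;

// Minimal bytebeat player: evaluate a formula of t once per sample and
// send the low eight bits to the sound card. Runs until interrupted.
public class Bytebeat {
  public static void main(String[] args) throws Exception {
    AudioFormat fmt = new AudioFormat(8000f, 8, 1, false, false); // 8 kHz, 8-bit unsigned mono
    SourceDataLine line = AudioSystem.getSourceDataLine(fmt);
    line.open(fmt);
    line.start();
    byte[] buf = new byte[8000];
    int t = 0;
    while (true) {
      for (int i = 0; i < buf.length; i++, t++) {
        buf[i] = (byte) (t * (42 & (t >> 10))); // a widely circulated formula
      }
      line.write(buf, 0, buf.length);
    }
  }
}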

EM: Could you share your favorite program that you’ve written so far with IBNIZ?

V: We are still at a rather early stage in IBNIZ culture, and I’m sure that a lot of new “favorite programs” will pop up once the regular IBNIZ demo competitions start taking place. Right now, the most advanced programs available for IBNIZ are fractal renderers. Here’s a Mandelbrot zoomer by myself — very slow in the current public version of IBNIZ but a lot faster in the upcoming JIT-enabled version:

vArs1ldv*vv*0!1-1!0dFX4X1)Lv*vv*-vv2**0@+x1@+4X1)Lv*vv*+4x->?Lpp0:ppRpRE.5*;

There’s also the random-program approach. Sometimes just a couple of characters are enough for an interesting result. I have tested all the possible programs from zero to three characters in length in order to find some nice ones, and my favorite among them is probably d)r, which produces a long and varying sequence of audiovisual glitches.

EM: What are your plans for IBNIZ? Are you working on a new version? What changes are in the pipeline? Do you plan on compromising on some aspects of it, or will it stay hardcore and old-school?

V: IBNIZ is still something of a moving target, which is why a new version comes out from time to time. Right now I am concentrating on a JIT compiler in order to give the VM implementation a much-needed performance boost. The abstract VM will not undergo any major changes, however. A couple of new opcodes will probably be added, but otherwise it is already pretty much fixed. Once we reach version 2.0, there will be no further changes to the VM definition.

A near-future plan is to start regular IBNIZ demo competitions in order to advocate the platform and to inspire the discovery of new tricks and techniques. I think IBNIZ also has potential as a livecoding platform, and this series of competitions will support the development of livecoding skills as well.

EM: Is there anything else people should know about IBNIZ or demo art in general? Where can we go to see more demo art by you or other people?

V: The demoscene is a multi-faceted subculture, so there is a lot of different demo art, from very technical to very non-technical and from very constrained to very non-constrained. The size classes of 256 bytes or less are among the most technical, constrained and hardcore genres of demo art, another example being the demos running on very old and limited platforms. I have created most of my most acclaimed work for the unexpanded VIC-20 but have recently started to expand my sphere of technical creativity into highly constrained “non-8-bit” works as well.

Currently, the leading community website and production database for the demoscene is Pouet.net, which allows searching for pieces of demo art by size class and platform. Although demos are executable programs by definition, many of them can also be watched as video captures on YouTube and other video sites. My own work is released under the group label “PWP”, and you can find it by doing a Pouet.net search for “pwp” or a YouTube search for “viznut” or “pwp”.

Letting It Go To Voicemail by Evan X. Merz

Letting It Go To Voicemail is about communication anxiety. It’s about the stress that builds up when we think about our inbox or our voicemail. It’s about the overwhelming crush of communication that comes our way each week, and how it impacts us mentally.

Letting It Go To Voicemail is hyper-minimal in construction, consisting of only a single oscillator and an algorithm for generating a buffer. The algorithm was suggested to me by Larry Polansky. He calls it The Longest Melody in the World, and it generates noise that is something like a probabilistic drunkard’s walk. This piece simply sweeps the probability parameter from low to high, forwarding the resulting buffer to the oscillator, and drawing it on a polar coordinate system.
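As a rough sketch of the mechanism, with the caveat that this is a guess at the spirit of the algorithm rather than Polansky's actual formulation: the walk steps only with probability p, so sweeping p from low to high moves the buffer from near-stillness to dense noise. In Processing-style code:

// Hypothetical sketch (not Polansky's actual algorithm): with probability p
// the walk takes a small random step, otherwise it holds its last value.
float[] walkBuffer(int n, float p) {
  float[] buf = new float[n];
  float v = 0;
  for (int i = 0; i < n; i++) {
    if (random(1) < p) v += random(-0.1, 0.1);
    buf[i] = constrain(v, -1, 1);
  }
  return buf;
}

// drawing the buffer on a polar coordinate system, as the piece describes
void drawPolar(float[] buf) {
  translate(width / 2, height / 2);
  for (int i = 0; i < buf.length; i++) {
    float theta = TWO_PI * i / buf.length; // angle around the circle
    float r = 100 + 80 * buf[i];           // radius follows the buffer
    point(r * cos(theta), r * sin(theta));
  }
}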

The live generative version of this piece can be viewed as an applet at http://www.computermusicblog.com/Letting_It_Go_To_Voicemail/



How to Render Synchronous Audio and Video in Processing using Beads

Rendering video in Processing is easy. The MovieMaker class makes it incredibly easy to render Quicktime video files from a Processing sketch. Unfortunately, Processing doesn’t supply tools for rendering audio using the MovieMaker class. Hence, rendering the output from a multimedia program can really be a headache.

I’ve spent a lot of time working on this problem in the last few months. I tried screen-capture software, but even the professional screen capture apps aren’t suited to the task. They cause glitches in the audio, drop frames and slow down the sketch itself. I also tried rendering using external hardware. Unfortunately, the only affordable device for capturing VGA output averages a mediocre 10 frames per second, and the frame rate is unacceptably inconsistent.

So the solution had to come from code, and in the end, the solution is pretty simple. Admittedly, this solution still slows down your sketch, but if you lower the resolution, you can get acceptable, synchronized audio and video which can be combined in any video editor.

Synchronizing MovieMaker Based on the Audio Stream

The solution is to render video frames based on the position in the audio output buffer. Simply monitor the position in the audio stream, and render a video frame every so many samples.

There are three basic code changes that are necessary to get this working. First, calculate the number of audio samples that will occur per frame of video. For this to work, the frame rate must be relatively low. 12 works well for me.


int MovieFrameRate = 12; // a relatively low frame rate keeps the sketch responsive
float AudioSamplesPerFrame = 44100.0f / (float)MovieFrameRate; // 3675 samples per frame at 44.1kHz

Then set up your audio recording objects as detailed in my free ebook: Sonifying Processing: The Beads Tutorial.


AudioFormat af = new AudioFormat(44100.0f, 16, 1, true, true); // 44.1kHz, 16-bit, mono, signed, big-endian
outputSample = new Sample(af, 44100); // the Sample that will hold the recorded audio
rts = new RecordToSample(ac, outputSample, RecordToSample.Mode.INFINITE); // record indefinitely

Finally, call this subroutine in your draw function, and make sure to finalize the audio and the video when the program ends.


// this routine adds video frames based on how much audio has been processed
void SyncVideoAndAudio()
{
  // if we have enough audio to do so, then add a frame to the video
  if( rts.getNumFramesRecorded() > MovieFrameCount * AudioSamplesPerFrame )
  {
    // we may have to add multiple frames
    float AudioSamples = rts.getNumFramesRecorded() - (MovieFrameCount * AudioSamplesPerFrame);
    while( AudioSamples > AudioSamplesPerFrame )
    {
      mm.addFrame(); // mm is the sketch's MovieMaker instance
      MovieFrameCount++;
      AudioSamples -= AudioSamplesPerFrame;
    }
  }
}
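The finalization mentioned above isn't shown in the snippet. Under the setup used here it might look roughly like the following; this is a sketch under assumptions (that mm is the MovieMaker instance and that the Beads classes expose pause and write as shown), so check the Beads documentation for the exact calls.

// hypothetical cleanup, called when the sketch is done rendering
void finishRecording() {
  rts.pause(true); // stop feeding audio into the Sample
  mm.finish();     // close and save the QuickTime movie
  try {
    // assumption: the Sample can write its contents to a WAV file like this
    outputSample.write(sketchPath("output.wav"));
  } catch (Exception e) {
    e.printStackTrace();
  }
}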

After your program completes, you just need to stitch the audio and video together using any old video editor at your disposal.

Here’s an example sketch rendered using this method.



And here is the source code for that sketch: Video_Audio_Sync_Test_03

I hope this saves you some time and money!

Maja Ratkje’s Voice

This is a preview of a video that is apparently being released in 2012. It is probably the most compelling “preview” that I have ever seen. The contrast of diegetic/non-diegetic sound with narrative/non-narrative film is really interesting. I can’t wait to see what the full production looks like.