Press "Enter" to skip to content


The Sound of Sorting

In this video, a YouTube user sonifies various sorting algorithms commonly used in computer programs.

This particular audibilization is just one of many ways to generate sound from running sorting algorithms. Here, on every comparison of two numbers (elements), I play (by mixing) sine waves with frequencies modulated by the values of those numbers. There are quite a few parameters that can drastically change the resulting sound – I just chose the parameters that, in my opinion, felt best. [1]
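
The description maps directly onto code: run a sorting algorithm and, on every comparison, mix short sine tones whose frequencies are scaled from the two compared values. Here is a minimal Python sketch of that idea; the frequency range, tone length, and choice of bubble sort are my own assumptions, not taken from the video.

    # On every comparison during a bubble sort, mix two short sine
    # tones whose frequencies are scaled from the compared values,
    # then write the result to a WAV file. All constants are arbitrary.
    import math
    import random
    import struct
    import wave

    RATE = 44100            # samples per second
    TONE = 0.02             # seconds of sound per comparison

    def comparison_tone(a, b, hi):
        # map each value onto a frequency, then mix the two sine waves
        fa = 120 + 1200 * a / hi
        fb = 120 + 1200 * b / hi
        return [0.2 * (math.sin(2 * math.pi * fa * i / RATE) +
                       math.sin(2 * math.pi * fb * i / RATE))
                for i in range(int(RATE * TONE))]

    def sonified_bubble_sort(data):
        audio, hi = [], max(data)
        for i in range(len(data)):
            for j in range(len(data) - 1 - i):
                audio += comparison_tone(data[j], data[j + 1], hi)
                if data[j] > data[j + 1]:
                    data[j], data[j + 1] = data[j + 1], data[j]
        return audio

    samples = sonified_bubble_sort(random.sample(range(1, 101), 100))
    with wave.open("sorting.wav", "w") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(RATE)
        w.writeframes(b"".join(struct.pack("<h", int(s * 32767))
                               for s in samples))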



Slow-Fi Generative Music Environment by Jason Soares

Slow-Fi is a generative self correcting audio/visual environment. Original concept and software by Jason Soares 2004. Modified in 2009 by Jason Soares & JFRE Coad. Download for Mac/PC. Slow-Fi EP release August 24th, 2010 on imputor? Records.

Once running, the emitter (pulsing circle) will launch hexagon shapes from itself. These hexagons will be assigned a random note and will move around randomly and intermittently. If a hexagon moves onto the emitter, it will kill that hexagon and launch two new hexagons in its place. There are three lines in the upper left corner which show the status of the system. The middle light grey line represents the current number of hexagons. The left and right dark grey lines are the randomly chosen maximum and minimum triggers for the emitter to react to. Once the number of hexagons reaches the maximum (left line), the emitter will start moving around the screen, bouncing off the walls at different random speeds and directions and killing off hexagons. It will do this until it reaches the minimum (right line). Then new amounts will be chosen and the process will start over. [1]
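
Those rules amount to a small self-regulating state machine. The following Python sketch restates that logic; the constants (note pool, movement probabilities, thresholds) are my own assumptions, not Soares's code, and the audio and graphics are omitted entirely.

    # A sketch of the Slow-Fi rules: hexagons wander, touching the
    # emitter kills one and spawns two, and crossing the maximum sends
    # the emitter hunting until the minimum is reached.
    import random

    NOTES = [60, 62, 64, 67, 69]          # assumed note pool (MIDI numbers)

    class Hexagon:
        def __init__(self):
            self.pos = [random.random(), random.random()]
            self.note = random.choice(NOTES)   # random note on creation
        def wander(self):
            if random.random() < 0.5:          # move intermittently
                self.pos[0] += random.uniform(-0.05, 0.05)
                self.pos[1] += random.uniform(-0.05, 0.05)

    class Emitter:
        def __init__(self):
            self.pos = [0.5, 0.5]
            self.hexes = []
            self.hunting = False
            self.new_limits()
        def new_limits(self):
            self.maximum = random.randint(20, 40)   # left status line
            self.minimum = random.randint(2, 10)    # right status line
        def touches(self, h):
            return (abs(h.pos[0] - self.pos[0]) < 0.05 and
                    abs(h.pos[1] - self.pos[1]) < 0.05)
        def step(self):
            for h in self.hexes:
                h.wander()
            if not self.hunting:
                if random.random() < 0.2:           # emitter launches hexagons
                    self.hexes.append(Hexagon())
                for h in list(self.hexes):          # hexagon on emitter: it dies,
                    if self.touches(h):             # two new ones are launched
                        self.hexes.remove(h)
                        self.hexes += [Hexagon(), Hexagon()]
                if len(self.hexes) >= self.maximum:
                    self.hunting = True             # start killing hexagons
            else:
                # crude stand-in for bouncing around the screen
                self.pos = [random.random(), random.random()]
                self.hexes = [h for h in self.hexes if not self.touches(h)]
                if len(self.hexes) <= self.minimum:
                    self.hunting = False
                    self.new_limits()               # choose new amounts, restart

    emitter = Emitter()
    for _ in range(1000):
        emitter.step()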



Squatouch by Alp Tugan

This prototype interface was specifically designed to be used by multiple users at once, an interesting application of multi-touch interfaces.



Human-computer interaction, initially based only on mouse-and-keyboard input and single-user interaction paradigms, now offers multi-user alternatives thanks to rapid improvements in technology. However, the technology providing users with these opportunities expects them to learn a new language. Multi-touch interfaces are among these new languages, through which more than one user can interact directly, using their hands, without a mouse or a keyboard.

In that context, the following project presents Squatouch, which carries human-computer interaction to a higher level by providing a tangible interface as an alternative to the traditional graphical user interface. [1]

[1] http://squatouch.alptugan.com/

Via C74 Projects

The O-Bow, Optical Bow Interface

In the latest issue of the Canadian Electroacoustic Community’s online journal, eContact, Dylan Menzies unveils the O-Bow. The O-Bow uses an optical flow sensor, like the one on the bottom of your mouse, to sense speed, direction and angle of motion.

The O-Bow is a bow controller consisting of an optical flow sensor mounted to measure the bow speed and horizontal angle with high resolution. The bow can be anything with a grained surface, such as a wooden stick.

Development of the O-Bow was prompted by the lack of robust and inexpensive bow controllers. Synthesized string instruments frequently appear in recordings, yet the quality of articulation is very limited for such expressive instruments. Bowing is a fairly easy skill to acquire, whereas fingering and vibrato are very difficult. Combining the keyboard with the bow allows a musician previously unskilled with string instruments to quickly produce much better articulation than with a keyboard alone. Controlling vibrato with bow angle or key pressure avoids the need to control vibrato directly.

From a less utilitarian viewpoint, bowing is a very natural and expressive mode of control. It deserves to be integrated better into the modern world of electronic sound, including that which is more removed from authentic strings.

So far the O-Bow has been used with a simple one-sample synthesiser, as shown in the following video. More sophisticated synthesis is being developed, including physical modelling. [1]
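
A sensor like the one described reports per-frame displacements (dx, dy), from which speed, direction, and angle follow directly. Below is a speculative Python sketch of one plausible mapping onto the controls mentioned above: speed drives loudness, sign gives bow direction, and angle drives vibrato depth. The scaling constants are my assumptions, not Menzies's actual mapping.

    # Turn raw optical-flow displacements into bowing controls. The
    # sensor model and all scaling constants are assumptions.
    import math

    def bow_to_controls(dx, dy):
        speed = math.hypot(dx, dy)            # bow speed from flow magnitude
        angle = math.atan2(dy, dx)            # horizontal bow angle
        amplitude = min(1.0, speed / 50.0)    # assume full level at 50 counts/frame
        direction = 1 if dx >= 0 else -1      # up-bow vs down-bow
        vibrato = min(1.0, abs(angle) / (math.pi / 4))  # tilt sets vibrato depth
        return amplitude, direction, vibrato

    # Each sensor frame, feed the controls to a synthesizer (e.g. via MIDI/OSC):
    amp, bow_dir, vib = bow_to_controls(dx=32, dy=5)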



[1] http://www.zenprobe.com/dylan/project/obow/index.html

Allele by Michael Zev Gordon

Today The Guardian brings us details on Michael Zev Gordon’s new piece, Allele. The piece uses the human genome as source material and debuts on July 9.

It’s been a delicate path to tread, and my approach has been shaped by seeing genes as simultaneously physical matter and things of extraordinary wonder. Humans share more than 99% of our genetic material. But every so often in any gene, at known points, or “polymorphisms”, tiny differences in genetic structure occur between groups of individuals. The different forms of the gene at these points are called alleles – and specific aspects of our individuality are influenced by particular allelic combinations. The scientific research has involved comparing certain alleles in musicians with those in non-musicians. The driving, expressive impulse for my piece has been to highlight these miraculous variants.

It took me time to get my head around the science involved. Things crystallised when I began to map a segment of common sequence leading up to my chosen polymorphism – A, C and G on to the same musical note-names; then T – “ti” in the doh-re-mi solfège system – on to B, and so on. Adding a supple rhythm, I arrived, to my surprise, at something that sounded quite like plainsong: it became the initial gesture of the piece.

Other, pragmatic factors were formative, too. We had to decide who the performers would be. It was a starting point for the project that I would use their specific DNA data in my work – we were drawn to the image of “singing one’s genes”. That led to a multipart choir, and, inevitably for me, the model of Thomas Tallis’s 40-voice motet, Spem in Alium. The common linguistic root of Alium and Allele – the other – was not lost on us either.
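
The note mapping Gordon describes is simple to state in code. Here is a minimal Python sketch of it; the sample sequence is an illustration of my own choosing, and the rhythm and actual polymorphism data are of course not reproduced.

    # A, C and G map onto the same note names; T ("ti") maps onto B.
    NOTE_FOR_BASE = {"A": "A", "C": "C", "G": "G", "T": "B"}

    def sequence_to_notes(dna):
        return [NOTE_FOR_BASE[base] for base in dna if base in NOTE_FOR_BASE]

    # An illustrative sequence, not from the piece:
    print(sequence_to_notes("ACGTTGCA"))   # ['A', 'C', 'G', 'B', 'B', 'G', 'C', 'A']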

Listening to Light

Eric Archer presents an interesting drive around New York City, where light is being translated into sound.

Here are some experimental recordings I’ve made with the Lumicon sound camera, which detects modulated light and transforms it to analog audio. I’m having a great deal of fun exploring the city with this device. It’s like eavesdropping on a world of sounds that were never intended to be heard.
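
The principle behind a device like this is straightforward, even though Archer's circuit is analog: light flicker in the audio band, once the steady brightness is removed, can be played back directly as sound. A speculative Python sketch of that idea, assuming a stream of light-sensor samples:

    # Treat fast light-intensity samples as audio: subtract the steady
    # (DC) level and what remains is the modulation itself. The 120 Hz
    # test signal stands in for mains-powered lighting flicker.
    import math

    RATE = 44100

    def light_to_audio(intensity):
        dc = sum(intensity) / len(intensity)
        return [s - dc for s in intensity]    # keep only the flicker

    flicker = [0.5 + 0.1 * math.sin(2 * math.pi * 120 * i / RATE)
               for i in range(RATE)]
    audio = light_to_audio(flicker)           # one second of 120 Hz hum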



Via MAKE

Tectonic: Earthquake Music by Micah Frank

Last month, we discovered Milton Garces’ Volcano Music, and now Micah Frank brings us another geological music project: Tectonic. This one processes XML feeds of earthquake data in real time to generate a sonic quake landscape.

Tectonic is a sound sculpture created in real time by earthquakes as they occur across the globe. A tightly integrated system between Max/MSP, Google Earth and Ableton Live processes a stream of real-time data that is translated into synthesis and sample playback parameters.

When an earthquake occurs, seismic data is relayed to the system, sound is produced and Google Earth immediately flies to the coordinates of the latest earthquake giving us a visual representation of the newest developments. As multiple earthquakes occur daily, the sculpture builds, enmeshing itself in a complex soundscape of textures and tones – every second, different from the last and never repeating the same stage twice. [1]
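
In outline, the pipeline is: poll a quake feed, map each new event's parameters (magnitude, depth, location) onto synthesis parameters, and hand them to the synth. A rough Python sketch of that outline follows; the feed URL, feed format, and parameter mapping are all my assumptions, not Frank's Max/MSP patch.

    # Poll an earthquake feed and map each new event onto synthesis
    # parameters. The URL, JSON layout, and mapping are hypothetical.
    import json
    import time
    import urllib.request

    FEED = "https://example.com/quakes.json"   # hypothetical feed

    def quake_to_params(magnitude, depth_km):
        # louder and lower for bigger quakes, darker timbre for deeper ones
        return {
            "amp": min(1.0, magnitude / 9.0),
            "pitch_hz": 440.0 / (1.0 + magnitude),
            "cutoff_hz": max(200.0, 8000.0 - 10.0 * depth_km),
        }

    seen = set()
    while True:
        with urllib.request.urlopen(FEED) as f:
            for quake in json.load(f)["quakes"]:
                if quake["id"] not in seen:
                    seen.add(quake["id"])
                    params = quake_to_params(quake["magnitude"],
                                             quake["depth_km"])
                    print(params)              # send to the synth via OSC/MIDI
        time.sleep(60)                         # poll once a minute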





[1] http://micahfrank.com/tagged/tectonic (accessed 6/11/10)

Via Twitter

Lum by Alfred Duarte

Alfred Duarte brings us another beautiful “wiimote instrument.” This one is built using Sony’s gesture controller, but the basic gestural control remains the same. In fact, this particular instrument bears a striking resemblance to gestural instruments from the 1980s, in the way that the distance between the two controllers acts as a sound parameter. Specifically, it resembles Buchla’s Lightning MIDI Controller and Max Mathews’ Radio Baton. Duarte calls his instrument LUM, and the video is a pretty demonstration of its capabilities.



Via CDM.