
Tag: interface

Laurie Spiegel’s Music Mouse

While reading Elizabeth Hinkle-Turner’s book on women in electroacoustic music, I was reminded of Laurie Spiegel’s Music Mouse program, a mouse-driven generative music interface that Spiegel created back in 1985. Spiegel used it on a number of her own pieces. Music Mouse works by mapping the X and Y axes of the mouse position to diatonically constrained pitches and harmonies.

Here’s a demo from YouTube. The fun stuff starts at 31 seconds.
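The core X/Y-to-diatonic-pitch idea can be sketched in a few lines. This is a toy illustration of the concept, not Spiegel’s actual implementation; the scale, range, and function names are my own assumptions.

```python
# Illustrative sketch: quantize a horizontal mouse position to the
# nearest note of a diatonic (C-major) scale, as Music Mouse-style
# interfaces do. Constants and names here are assumptions.

C_MAJOR_STEPS = [0, 2, 4, 5, 7, 9, 11]  # semitone offsets within one octave

def mouse_to_midi(x, width, low_note=48, octaves=3):
    """Map an x coordinate in [0, width) to a diatonic MIDI note."""
    degrees = len(C_MAJOR_STEPS) * octaves
    degree = min(int(x / width * degrees), degrees - 1)
    octave, step = divmod(degree, len(C_MAJOR_STEPS))
    return low_note + 12 * octave + C_MAJOR_STEPS[step]
```

The same quantization applied to the Y axis (against chord tones rather than a scale) gives the harmonic constraint the post describes.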

All Hail the Dawn by Alexander Dupuis

An interactive audiovisual feedback loop forms the basis of All Hail the Dawn. The instrument contains two simple light-sensitive oscillators. A crude spectral analysis in Max/MSP is used to filter the oscillators as well as looped buffers recorded from the instrument. A matrix of the spectral analysis, interactively altered in Jitter using audio data, is projected back onto the instrument and performer as a series of shifting patterns. This setup allows both the graphics and sound to drive each other, creating an evolving audiovisual relationship sensitive to slight changes in position, sound or processing. [1]
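The mutual-driving structure described above can be illustrated with a toy feedback loop: brightness sets an oscillator’s pitch, and the oscillator’s output is fed back as the next brightness. This is only a schematic of the idea, not Dupuis’s Max/MSP/Jitter patch; every constant here is made up.

```python
import math

def feedback_step(brightness, phase, dt=0.01):
    """One tick of a toy audio-visual feedback loop:
    light -> pitch, then rectified sound -> light (smoothed)."""
    freq = 100.0 + 400.0 * brightness          # brighter image, higher pitch
    phase = (phase + freq * dt) % 1.0
    sample = math.sin(2 * math.pi * phase)     # oscillator output
    brightness = 0.9 * brightness + 0.1 * abs(sample)  # sound drives the image
    return brightness, phase, sample
```

Because each side is a smoothed function of the other, the loop is sensitive to small perturbations without blowing up, which is the behaviour the description emphasizes.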

Graph Layout Music by Abram Hindle

Graph layouts use spring (force-directed) layout algorithms. Springs and other physical systems have fundamentally musical properties, such as decay and oscillation.
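The decay and oscillation mentioned here are just the behaviour of an underdamped spring; a minimal sketch (parameter names and defaults are mine):

```python
import math

def spring_displacement(t, freq_hz=2.0, damping=0.5, x0=1.0):
    """Underdamped spring: an oscillation whose amplitude decays
    exponentially -- in musical terms, a pitched tone with an envelope."""
    omega = 2 * math.pi * freq_hz
    return x0 * math.exp(-damping * t) * math.cos(omega * t)
```

A spring-layout graph is many of these coupled together, so sampling node positions every few frames yields exactly this kind of decaying oscillatory material.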

I took the graph drawer found here and stole the state of the graph every couple of frames:

The instrument I used in the background is an additive string instrument I modified from here:

This is a web-based UI with the audio rendered by Csound:

The source code for the whole shebang is at

Slow-Fi Generative Music Environment by Jason Soares

Slow-Fi is a generative, self-correcting audio/visual environment. Original concept and software by Jason Soares, 2004. Modified in 2009 by Jason Soares & JFRE Coad. Download for Mac/PC. Slow-Fi EP release August 24th, 2010 on imputor? Records.

Once running, the emitter (pulsing circle) will launch hexagon shapes from itself. These hexagons will be assigned a random note and will move around randomly and intermittently. If a hexagon moves onto the emitter, the emitter will kill that hexagon and launch two new hexagons in its place. There are three lines in the upper left corner which show the status of the system. The middle light grey line represents the current number of hexagons. The left and right dark grey lines are the randomly chosen maximum and minimum triggers for the emitter to react to. Once the number of hexagons reaches the maximum (left line), the emitter will start moving around the screen, bouncing off the walls at different random speeds and directions and killing off hexagons. It will do this until it reaches the minimum (right line). Then new amounts will be chosen and the process will start over. [1]
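The population dynamics described above can be sketched as a tiny state machine. The thresholds and the collision probability below are assumptions for illustration, not values from the actual Slow-Fi software.

```python
import random

def slowfi_step(n_hex, hunting, max_trig, min_trig, rng=random):
    """One illustrative tick of the Slow-Fi rules: hexagons multiply
    on contact with the emitter until the maximum trigger is hit,
    then the emitter hunts them down to the minimum trigger."""
    if not hunting:
        # A hexagon that wanders onto the emitter dies and two launch:
        if n_hex > 0 and rng.random() < 0.3:   # assumed collision chance
            n_hex += 1                         # net change: -1 + 2
        if n_hex >= max_trig:
            hunting = True                     # emitter starts sweeping
    else:
        n_hex -= 1                             # moving emitter kills hexagons
        if n_hex <= min_trig:
            hunting = False                    # pick new thresholds, restart
            max_trig = rng.randint(20, 40)
            min_trig = rng.randint(1, 10)
    return n_hex, hunting, max_trig, min_trig
```

Alternating growth and culling phases like this keeps the note population oscillating between the two trigger lines rather than settling or exploding.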

Squatouch by Alp Tugan

This prototype interface was specifically designed to be used by multiple users at once, which is an interesting implication of multi-touch interfaces.

Human-computer interaction was initially built on mouse-and-keyboard, single-user paradigms; thanks to rapid improvements in technology, it now offers multi-user alternatives. However, the technology providing these opportunities expects the user to learn a new language. Multitouch interfaces are among these new languages, letting more than one user interact directly with their hands, without a mouse or a keyboard.

In that context, the following project presents Squatouch, which carries human-computer interaction to a higher level by providing a tangible interface as an alternative to the traditional graphical user interface. [1]


Via C74 Projects

The O-Bow, Optical Bow Interface

In the latest issue of the Canadian Electroacoustic Community’s online journal, eContact, Dylan Menzies unveils the O-Bow. The O-Bow uses an optical flow sensor, like the one on the bottom of your mouse, to sense speed, direction and angle of motion.

The O-Bow is a bow controller consisting of an optical flow sensor mounted to measure the bow speed and horizontal angle with high resolution. The bow can be anything with a grained surface, such as a wooden stick.

Development of the O-Bow was prompted by the lack of robust, inexpensive bow controllers. Synthesized string instruments frequently appear in recordings, yet the quality of articulation is very limited for such expressive instruments. Bowing is a fairly easy skill to acquire, whereas fingering and vibrato are very difficult. Combining the keyboard with the bow allows a musician previously unskilled with string instruments to quickly produce much better articulation than using a keyboard alone. Controlling vibrato with bow angle or key pressure avoids the need to control vibrato directly.
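A control mapping along the lines described (bow speed to loudness, bow angle or key pressure to vibrato) might look like the sketch below. The ranges and scaling constants are my assumptions, not values from Menzies’ design.

```python
def obow_controls(speed, angle_deg, key_pressure=0.0):
    """Toy mapping from optical-flow sensor readings to synth parameters:
    speed (m/s) -> amplitude, |angle| in degrees or key pressure -> vibrato.
    All constants are illustrative assumptions."""
    amplitude = max(0.0, min(1.0, speed / 0.5))            # 0.5 m/s = full volume
    vibrato = min(max(abs(angle_deg) / 45.0, key_pressure), 1.0)
    return amplitude, vibrato
```

Deriving vibrato from angle or pressure this way is what lets the player leave pitch to the keyboard, as the article notes.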

From a less utilitarian viewpoint, bowing is a very natural and expressive mode of control. It deserves to be integrated better into the modern world of electronic sound, including that which is more removed from authentic strings.

So far the O-Bow has been used with a simple one-sample synthesiser, as shown in the following video. More sophisticated synthesis is being developed, including physical modelling. [1]