Press "Enter" to skip to content

Month: April 2011

The Bends Collective

The Bends Collective was born out of a 10-week project in early 2011 as part of the Sound Design MSc at The University of Edinburgh.

The project focussed on circuit bending and hardware hacking. The group learned how to modify electronic devices ranging from cheap kids’ toys to a Nintendo Entertainment System. With their army of circuit-bent instruments, they performed a concert in front of a live audience in March 2011.

In this video, you will see the army at work! Many of the devices were hacked in several different ways, producing wild sound textures far richer than their original output. All were fitted with output jacks so they could be played through amplifiers or loudspeakers, and the instruments were fed into a Max/MSP patch that panned the audio quadraphonically around four loudspeakers surrounding the audience.
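For the technically curious, the core of such a patch is equal-power panning between adjacent speakers. Here is a minimal sketch of the idea in Python/NumPy (not the group's actual Max/MSP patch, which hasn't been published; the speaker layout and function names are my own assumptions):

```python
import numpy as np

def quad_pan(mono, angle):
    """Equal-power pan of a mono signal around four speakers placed at
    0, 90, 180, and 270 degrees around the listener.

    mono  : 1-D array of samples
    angle : array of the same length, source azimuth in radians
    Returns an (n, 4) array, one column per speaker.
    """
    speaker_az = np.array([0.0, 0.5, 1.0, 1.5]) * np.pi
    out = np.zeros((len(mono), 4))
    for i, az in enumerate(speaker_az):
        # Angular distance from source to this speaker, folded into [0, pi]
        d = np.abs((angle - az + np.pi) % (2 * np.pi) - np.pi)
        # Cosine gain: non-zero only for the two speakers nearest the source,
        # and the squared gains of those two always sum to 1 (constant power)
        out[:, i] = mono * np.cos(np.clip(d, 0.0, np.pi / 2))
    return out

# Example: a 440 Hz tone circling the audience once per second
sr = 44100
t = np.arange(sr) / sr
tone = 0.5 * np.sin(2 * np.pi * 440 * t)
quad = quad_pan(tone, 2 * np.pi * t)  # shape (44100, 4)
```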



Goodbye Max Mathews!

In 1957 … the first computer-generated sounds were heard at Bell Telephone Laboratories (or Bell Labs, as it was called) in Murray Hill, New Jersey. Max Mathews had joined the acoustic research department at Bell Labs to develop computer equipment to study telephones. With the aim of using listening tests to judge the quality of the sound, he had made a converter to put sound into a computer and a converter to get it back out again, which, according to Mathews, “turned out to be a very successful way to do research in telephony.” Mathews went further:

“It was immediately apparent that once we could get sound out of a computer, we could write programs to play music on the computer. That interested me a great deal. The computer was an unlimited instrument, and every sound that could be heard could be made this way. And the other thing was that I liked music. I had played the violin for a long time …”

John Pierce, also at Bell Labs, lent crucial support to the music project. As he explains, “I was executive director of the communication sciences division when Max used the computer to produce musical sounds. I was fascinated.”

In 1957, Mathews finished Music I, his first sound-generating computer program. The first music produced with Music I was a seventeen-second composition by Newman Guttman, a linguist and acoustician at Bell Labs. The composition, called In the Silver Scale, used a scale slightly different from the diatonic scale so as to have better-controlled chords. Mathews said, “It was terrible.” Pierce said, “To me, it sounded awful.” But, as Mathews continues, “It was the first.” In fact, the chords were never heard, because Music I was a single-voiced program. As Mathews recalls, “The program was also terrible: it had only one voice, one waveform, a triangular wave, no attack, no decay, and the only expressive parameters you could control were pitch, loudness, and duration.”

Music I was the first in a series of sound-generating computer programs collectively referred to as the Music-N series. Music II followed in 1958, improved to four voices and arbitrary waveforms, and introduced the concept of the wavetable oscillator. Music III, which followed in 1960, was, as Mathews puts it, “when things really came together.” Music III introduced the concept of modularity, or unit generators, so that one could put together “orchestras” of “instruments.” It added further possibilities for shaping sounds, and it introduced the concept of a “score,” in which notes are listed in the order of their starting times, each note associated with a timbre, loudness, pitch, and duration. [1]
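The wavetable oscillator that Music II introduced is still the workhorse of digital synthesis, and it is simple enough to sketch. The fragment below is my own illustration in Python/NumPy, not Music-N code: one cycle of a waveform is stored in a table, and pitch is set by how fast a read pointer steps through it.

```python
import numpy as np

SR = 44100        # sample rate in Hz
TABLE_SIZE = 512  # samples in one stored cycle

# One cycle of a triangle wave (the only waveform Music I offered)
table = np.concatenate([
    np.linspace(-1.0, 1.0, TABLE_SIZE // 2, endpoint=False),
    np.linspace(1.0, -1.0, TABLE_SIZE // 2, endpoint=False),
])

def wavetable_osc(freq, dur):
    """Step through the table at a rate proportional to freq.

    Each output sample advances the read position by
    freq * TABLE_SIZE / SR table entries (truncating lookup).
    """
    n = int(SR * dur)
    phase = (np.arange(n) * freq * TABLE_SIZE / SR) % TABLE_SIZE
    return table[phase.astype(int)]

samples = wavetable_osc(440.0, 1.0)  # one second of a 440 Hz triangle tone
```

A unit-generator “instrument” in Music III terms is then just a small graph of such modules: an oscillator feeding an envelope feeding an output.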

The computer music research at Bell Labs and other institutions provided the backdrop to the first round of creative musical work with computers. From the beginning, John Pierce and Max Mathews had been eager to make contact with musicians, and in 1961 Pierce hired composer James Tenney to come and work at Bell Labs.

Tenney worked at Bell from 1961 to 1964 and completed several compositions during that period. His first was Analog #1: Noise Study, finished in 1961 and inspired by the random noise patterns he heard in the Holland Tunnel on his daily commute between Manhattan and New Jersey. His interest in randomness at that time included using the computer to make musical decisions as well as to generate sound. In Dialogue (1963), Tenney used various stochastic methods to determine the sequencing of sounds.

Tenney continued to develop his stochastic ideas in Phases (For Edgard Varèse) (1963), in which different types of sounds are statistically combined. His techniques resulted in sounds with continually changing textures, similar to a fabric made up of a variety of materials in various shapes and colors.

In 1963, Mathews published an influential article on computer music titled “The Digital Computer as a Musical Instrument” in Science. Jean-Claude Risset, at the time a physics graduate student in France, read the article and became so excited by the potential of computer music that he decided to write his thesis based on research he planned to do at Bell Labs. Risset came to Bell in 1964, began research in timbre, returned to France in 1965, and came back to Bell in 1967. He completed Computer Suite from Little Boy in 1968 and Mutations in 1969. Both compositions contain sounds that could not have been produced by anything but a computer.

Meanwhile, at Stanford University in 1963, John Chowning also came across Max Mathews’s Science article and became inspired to study computer science. Chowning visited Bell Labs in the summer of 1964 and left with the punched cards for Music IV. He subsequently established, with David Poole, a laboratory for computer music at Stanford. The lab would eventually become the Center for Computer Research in Music and Acoustics (CCRMA), a major center for computer music research. Chowning later went on to develop frequency modulation (FM) as a method for generating sound. His approach to FM, in fact, was licensed by Yamaha in 1974 and was the basis of sound production in many Yamaha synthesizers through the 1980s.
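Chowning’s technique is compact enough to state exactly: the phase of a carrier sine wave is modulated by a second sine wave at audio rate, y(t) = A sin(2π f_c t + I sin(2π f_m t)), which scatters energy into sidebands at f_c ± k·f_m. A minimal sketch (my own illustration, with arbitrary parameter values):

```python
import numpy as np

def fm_tone(fc, fm, index, dur, sr=44100):
    """Two-oscillator FM: sin(2*pi*fc*t + index * sin(2*pi*fm*t)).

    fc    : carrier frequency in Hz
    fm    : modulator frequency in Hz
    index : modulation index I; larger values push energy into more
            sidebands at fc +/- k*fm, brightening the timbre
    """
    t = np.arange(int(sr * dur)) / sr
    return np.sin(2 * np.pi * fc * t + index * np.sin(2 * np.pi * fm * t))

# A 1:1 carrier-to-modulator ratio yields a harmonic, brass-like spectrum;
# an irrational ratio yields the inharmonic, bell-like sounds FM is famous for
brass = fm_tone(440.0, 440.0, 5.0, 1.0)
bell = fm_tone(440.0, 440.0 * 1.4142, 8.0, 1.0)
```

The carrier-to-modulator ratio controls whether the sidebands line up harmonically, and the index controls brightness, which is why two numbers can sweep a sound from brass-like to bell-like.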

Chowning’s early compositions Sabelithe (1971) and Turenas (1972) both simulated sounds moving in space. In Stria (1977), Chowning used the Golden Section to determine the spectra of the sounds. The results were otherworldly: magical, strange, icy, and unlike anything that one could imagine coming from an acoustic instrument. [2]
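For the curious: the Golden Section is the ratio φ = (1 + √5)/2 ≈ 1.618, and accounts of Stria describe pitch and spectral ratios built from powers of φ rather than from the octave’s powers of 2. The fragment below only gestures at that idea and is not Chowning’s actual algorithm:

```python
PHI = (1 + 5 ** 0.5) / 2  # the golden ratio, ~1.618

# "Pseudo-octaves" spaced by powers of phi instead of powers of 2
# (an illustration of the idea only; the base frequency is arbitrary)
base = 100.0  # Hz
freqs = [base * PHI ** k for k in range(-2, 3)]
```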



1. Chadabe, Joel. Electric Sound: The Past and Promise of Electronic Music. Prentice Hall, 1997, p. 108.
2. Chadabe, Joel. “The Electronic Century Part III: Computers and Analog Synthesizers.” Electronic Musician 16, no. 4 (April 2000).

TVESTROY

Presented on three screens and a bank of CRT televisions, Tvestroy is an experiment with the links between image and sound materials, exploiting their full “drasticality.” Generated from the same source, the sounds and images are not merely in sync; they emerge concurrently. The sound IS the image. Hypnotic and heady, this work, described as “electrovideoacoustic” by its two creators, engulfs the audience in an uncompromising environment of geometric abstractions and de-structured rhythmic phrases. [1]
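One literal, and admittedly speculative, way to read “the sound IS the image” (the duo have not published their tools, so this is only a thought experiment) is to render a single sample buffer both as audio and as pixel intensities:

```python
import numpy as np

# One source buffer drives both senses (illustration only)
sr = 44100
t = np.arange(sr) / sr
buf = np.sin(2 * np.pi * 110 * t) * np.sign(np.sin(2 * np.pi * 3 * t))

audio = buf                                       # played as-is over speakers
pixels = ((buf[:128 * 128] + 1.0) * 127.5).astype(np.uint8)
frame = pixels.reshape(128, 128)                  # the same samples, shown as
                                                  # an 8-bit grayscale frame
```

Any edit to the buffer then changes what is heard and what is seen at the same instant, which is one sense in which image and sound can “emerge concurrently.”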



Interview with Sword & Sworcery Ambient Game Creators

Over at CDM, there’s a great interview with the creators of Superbrothers: Sword & Sworcery EP, an ambient music game for the iPad. The game combines minimalist gameplay with tightly integrated sound and music. It draws heavily on nostalgia, employing sounds and visuals that are very reminiscent of the original Legend of Zelda and Castlevania games.

Jim: I captured all of the music either on a PlayStation using MTV’s Music Generator and/or [Apple] GarageBand. For example, on the song ‘Lone Star,’ I drummed a beat onto a cassette four-track, burned that onto a CD, placed the CD into the PlayStation, sampled and looped in MTV Music Generator, and then built a song around it using that software. THEN I brought it into GarageBand and added more layers and effects. I also used a [Casio] SK-1 peppered throughout. In terms of plug-ins and soft synths, I used a lot of the Arturia stuff, [Native Instruments] Kontakt, [XLN Audio] Addictive Drums, [Toontrack] Superior Drummer, and a [Universal Audio] UAD-2 card loaded with a bunch of their processing plug-ins. [1]



Sound Activated Augmented Reality Sculptures for iPhone

Konstruct is an investigation into Generative Art in an Augmented Reality environment. It is a sound-reactive AR experience for the iPhone that allows the user to create a virtual sculpture by speaking, whistling, or blowing into the device’s microphone. A variety of 3D shapes, colour palettes, and settings can be combined to build an endless collection of structures. Compositions can be saved to the device’s image gallery.

Konstruct is a free app available on iPhone 3GS and 4 running iOS 4+. A version for the iPad 2 is planned for the coming months.

Konstruct site – apps.augmatic.co.uk/konstruct
More info – jamesalliban.wordpress.com/2011/03/30/konstruct-ar-iphone-app/ [1]



Via Make

Night Fragments (2011) by Lindsay Vickery

Performance by Caitlin Cassidy (mezzo-soprano) and Decibel: Cat Hope – flute/alto flute, Lindsay Vickery – clarinet/bass clarinet, Tristen Parr – cello, Stuart James – keyboards, and Malcolm Riddoch – Max/MSP. Photos by Lisa Businovski. Live recording from the back of the hall at the Perth Institute of Contemporary Arts, remixed by Malcolm Riddoch.