
Tag: generative

The Melting Sun by Seiya Matsumiya

The Melting Sun is an ambient composition in the Bohlen-Pierce scale, whose tonality, timbre, volume, and timing are determined algorithmically from a video of the sunset.

The sounds heard can be separated into two groups: the drones and the melodies. Both groups feature three different Csound instruments that each correspond to various types of Red, Green, or Blue values extracted from the video. These data, combined with the data gathered from the position of the sun, control various parameters of the composition. Some of the data mapping choices are arbitrary, and some are obvious (e.g. the overall brightness controls the cutoff frequency of the global filter for the drones).

The composition is in the Moll II mode of the Bohlen-Pierce scale. The note numbers used for the drones and the melodies are predetermined, but the base frequency of the scale is not. In fact, the base frequency, or the tonality of the composition, shifts continuously throughout the piece with the sun’s position, but the process is too slow to be perceptible—just like the movement of the sun itself. The three melodic instruments actually play the same long loop of notes, but at different timings and also in different tritaves. The timing itself changes continuously, and as the sun sinks lower in the sky and appears to gain speed, the notes are played more frequently. The composition currently uses previously recorded video material, but in the future it will allow the use of a live video feed of the sunset. [1]
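The post doesn't include the piece's actual code, but the scale itself is well defined: the Bohlen-Pierce scale divides the tritave (a 3:1 frequency ratio) into 13 equal steps, rather than dividing the octave into 12. A minimal sketch of that tuning, with a hypothetical linear drift of the base frequency standing in for the sun-position mapping:

```python
# Equal-tempered Bohlen-Pierce frequencies -- an illustrative sketch,
# not the composition's own Csound code. BP divides the tritave (3:1)
# into 13 equal steps, so each step is a ratio of 3**(1/13).

def bp_frequency(base_hz, step, tritave=0):
    """Frequency of a Bohlen-Pierce scale degree.

    base_hz: base frequency of the scale (the piece shifts this slowly
             with the sun's position); step: degree 0-12 within a tritave.
    """
    return base_hz * (3 ** (step / 13)) * (3 ** tritave)

def drifting_base(start_hz, end_hz, progress):
    """Hypothetical base-frequency drift as the sunset progresses (0..1)."""
    return start_hz + (end_hz - start_hz) * progress

print(round(bp_frequency(220.0, 0), 2))   # 220.0
print(round(bp_frequency(220.0, 13), 2))  # 660.0: one tritave (3:1) up
```

Because the drift is spread over the whole sunset, consecutive renderings of any one note differ by far less than a scale step, which is why the shift is imperceptible in the moment.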


Shanghai Traces by Ben Houge

Here’s a 5-minute excerpt of a real-time video piece I presented as part of the Make Over show at OV Gallery in Shanghai, January 23-March 13, 2010.

The show was a response to the dramatic beautification campaign that has overrun Shanghai in anticipation of hosting the World Expo this year. The falling objects in the video are the wares of street vendors who are being forced from the city center during the Expo.

The piece was originally presented as a silent video, but in rendering a linear version to post on-line, I added a soundtrack in which recordings of interviews with Shanghai street vendors are algorithmically chopped up, layered, and delayed, very similar to what’s going on in the video.

The audio and video were both generated in Max/MSP/Jitter, using simple non-linear deployment methods I’ve been using for years in my videogame work, to ensure constant variation. I think the medium of real-time, generative video is well suited to commenting on a city’s continual cycle of reinvention. [1]
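The piece itself was built in Max/MSP/Jitter, but the chop/layer/delay idea behind the soundtrack is easy to sketch in pseudocode form. The durations, layer count, and ranges below are invented for illustration; only the general scheme (random slices, layered with random onset delays, so no two renderings repeat) comes from the description above:

```python
# Sketch of algorithmically chopping, layering, and delaying a recording.
# All numeric ranges here are assumptions, not values from the piece.
import random

def schedule_grains(source_len_s, n_layers=4,
                    grain_s=(0.2, 1.5), delay_s=(0.0, 3.0)):
    """Return (start, duration, onset_delay) triples describing random
    slices of a source recording, layered and offset in time."""
    grains = []
    for _ in range(n_layers):
        dur = random.uniform(*grain_s)
        start = random.uniform(0.0, source_len_s - dur)
        delay = random.uniform(*delay_s)
        grains.append((start, dur, delay))
    return grains

# e.g. four overlapping slices from a 60-second interview recording:
layers = schedule_grains(60.0)
```

Re-running the scheduler yields a different layering every time, which is the "constant variation" property the non-linear deployment is after.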

Find more of Ben’s work, including an interesting entry about how one of his pieces recently got banned, at his blog Aesthetic Cartography.


Ben Carey’s GPS and Accelerometer Controlled Granulator

Ben Carey sends word of his latest project in Max. Apparently, he is building a Max patch that granulates an audio file based on GPS data and accelerometer data from an iPhone.

GPS data controlling the mix between stereo outputs of a 4-buffer polyphonic sampler;
Sampled iPhone accelerometer data controlling movement through the four sound files;
Some other algorithms (offscreen) controlling and triggering other aspects of the sampler and effects processing…
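The original is a Max patch, but the two sensor mappings listed above can be sketched in a few lines. Everything here (the bounding box, the 0–2 g range, the function names) is an assumption for illustration, not Carey's implementation:

```python
# Sketch of GPS -> stereo mix and accelerometer -> buffer selection.

def gps_to_pan(lon, lon_min, lon_max):
    """Map longitude within a bounding box to a 0..1 stereo pan position."""
    t = (lon - lon_min) / (lon_max - lon_min)
    return min(max(t, 0.0), 1.0)  # clamp outside the box

def accel_to_buffer(x, y, z, n_buffers=4):
    """Map accelerometer magnitude (in g) to one of n sample buffers,
    assuming magnitudes of interest fall roughly in 0..2 g."""
    mag = (x * x + y * y + z * z) ** 0.5
    t = min(mag / 2.0, 0.999)
    return int(t * n_buffers)

pan = gps_to_pan(151.21, 151.20, 151.22)  # midpoint of the box: centered
buf = accel_to_buffer(0.0, 0.0, 1.0)      # phone at rest reads about 1 g
```

The interesting performance question is the same one the patch has to answer: how to smooth jittery sensor readings so the mix moves musically rather than nervously.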

I can’t wait to see this in performance.


Black Allegheny, Swarm Generated Music

Black Allegheny is one of the first albums made up entirely of swarm-generated music. The album was created using a swarm-controlled sampler called Becoming, which was programmed by the composer.

Imperceptible Time by Evan X. Merz

Becoming is an algorithmic composition program written in Java that builds upon some of John Cage’s frequently employed compositional processes. Cage often used the idea of a “gamut” in his compositions. A gamut could be a collection of musical fragments, or a collection of sounds, or a collection of instruments. Often, he would arrange the gamut visually on a graph, then use that graph to piece together the final output of a piece. Early in his career, he often used a set of rules or equations to determine how the output would relate to the graph. Around 1949, during the composition of the piano concerto, he began using chance to decide how music would be assembled from the graph and gamut.

In Becoming, I directly borrow Cage’s gamut and graph concepts; however, the software assembles music using concepts from the AI subfield of swarm intelligence. I place a number of agents on the graph and, rather than dictating their motions from a top-down, rule-based approach, the music grows in a bottom-up fashion based on local decisions made by each agent. Each agent has preferences that determine its movement around the graph. These values dictate how likely the agent is to move toward food, how likely the agent is to move toward the swarm, and how likely the agent is to avoid the predator.
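The bottom-up movement described above can be sketched as a weighted random walk. Becoming is written in Java and its actual scoring is not documented here; the graph layout, distance metric, and scoring function below are assumptions chosen to illustrate the three preferences (food, swarm, predator avoidance):

```python
# Sketch of preference-weighted agent movement on a graph of fragments.
import random

def step(neighbors, food, swarm_center, predator, prefs):
    """Pick an agent's next graph node from its neighboring nodes.

    prefs: (w_food, w_swarm, w_avoid) -- how strongly this agent seeks
    food, seeks the swarm, and avoids the predator. Nodes are (x, y)
    grid positions; the fragment at the chosen node is what gets played.
    """
    w_food, w_swarm, w_avoid = prefs

    def dist(a, b):
        return abs(a[0] - b[0]) + abs(a[1] - b[1])  # Manhattan distance

    def score(node):
        s = w_food / (1 + dist(node, food))          # nearer food is better
        s += w_swarm / (1 + dist(node, swarm_center))  # cohesion
        s += w_avoid * dist(node, predator)          # farther from predator
        return s

    weights = [score(n) for n in neighbors]
    return random.choices(neighbors, weights=weights)[0]
```

Because each agent only scores its immediate neighbors, global musical form emerges from local decisions rather than from a top-down plan, which is the point of the swarm approach.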

Yes, this is my new album! Thanks for reading and listening!

On CDM, with a great comments thread
On Make Online
Swarm Sampler On MatrixSynth
On Noise for Airports (a great intellectual music blog!)
On Califaudio

The Heart Chamber Orchestra

The Heart Chamber Orchestra explores the use of heartbeats as a source of musical data. Many people have tried similar things in the past, but it’s rare to see an ensemble of this size working from this sort of biometric data.

The musicians are equipped with ECG (electrocardiogram) sensors. A computer monitors and analyzes the state of these 12 hearts in real time. The acquired information is used to compose a musical score with the aid of computer software. It is a living score dependent on the state of the hearts.

While the musicians are playing, their heartbeats influence and change the composition and vice versa. The musicians and the electronic composition are linked via the hearts in a circular motion, a feedback structure. The emerging music evolves entirely during the performance.

The resulting music is the expression of this process and of an organism forming itself from the circular interplay of the individual musicians and the machine.
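The orchestra's actual analysis and mapping software is not described in the passage above; as one plausible sketch of the idea, ensemble-level statistics over the twelve heart rates could drive score parameters, closing the feedback loop as the resulting music in turn changes the players' heart rates:

```python
# Hypothetical mapping from 12 musicians' heart rates to score parameters.
# The statistics and ranges here are assumptions, not the HCO's system.

def score_parameters(bpm_readings):
    """Derive a tempo and a texture density from the ensemble's heart rates."""
    mean_bpm = sum(bpm_readings) / len(bpm_readings)
    spread = max(bpm_readings) - min(bpm_readings)
    tempo = mean_bpm                    # the ensemble's mean pulse sets the pulse
    density = min(spread / 40.0, 1.0)   # divergent hearts -> denser texture
    return tempo, density

tempo, density = score_parameters(
    [62, 70, 68, 75, 80, 66, 72, 71, 69, 74, 77, 64])
```

Run in real time, such a mapping is what makes the score "living": the parameters are recomputed continuously from the ECG stream rather than fixed in advance.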

via Everyday Listening

With the Blurred Vision of a Newborn

This piece unites Cage’s conception of graph music with ideas from the field of swarm intelligence. The software uses a graph of notated musical fragments to generate a score in real time for live performance. It does this by allowing a swarm of virtual insects to crawl over the graph, choosing new fragments with each move.