In 1957 … the first computer-generated sounds were heard at Bell Telephone Laboratories (or Bell Labs, as it was called) in Murray Hill, New Jersey. Max Mathews had joined the acoustic research department at Bell Labs to develop computer equipment to study telephones. With the aim of using listening tests to judge the quality of the sound, he had made a converter to put sound into a computer and a converter to get it back out again, which according to Mathews, “turned out to be a very successful way to do research in telephony.” Mathews went further:
“It was immediately apparent that once we could get sound out of a computer, we could write programs to play music on the computer. That interested me a great deal. The computer was an unlimited instrument, and every sound that could be heard could be made this way. And the other thing was that I liked music. I had played the violin for a long time …”
John Pierce, also at Bell Labs, lent crucial support to the music project. As he explains, “I was executive director of the communication sciences division when Max used the computer to produce musical sounds; I was fascinated.”
In 1957, Mathews finished Music I, his first sound-generating computer program. The first music produced with Music I was a seventeen-second composition by Newman Guttman, a linguist and acoustician at Bell Labs. The composition, called In the Silver Scale, used a scale slightly different from the diatonic scale so as to have better controlled chords. Mathews said, “It was terrible.” Pierce said, “To me, it sounded awful.” But, as Mathews continues, “It was the first.” In fact, the chords were never heard because Music I was a single-voiced program. As Mathews recalls, “The program was also terrible: it had only one voice, one waveform, a triangular wave, no attack, no decay, and the only expressive parameters you could control were pitch, loudness, and duration.”
Music I was the first in a series of sound-generating computer programs collectively referred to as the Music-N series. Music II followed in 1958; it supported four voices and arbitrary waveforms, and it introduced the concept of the wavetable oscillator. Music III, which followed in 1960, was, as Mathews puts it, “when things really came together.” Music III introduced the concept of modularity, or unit generators, so that one could put together “orchestras” of “instruments.” It introduced additional possibilities for shaping sounds, and it introduced the concept of a “score,” in which notes could be listed in the order of their starting times, with each note assigned a timbre, loudness, pitch, and duration.
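The wavetable oscillator that Music II introduced remains a standard technique: one cycle of a waveform is stored in a lookup table, and the table is read at a speed proportional to the desired frequency. The sketch below is a minimal modern illustration of that idea in Python, not a reconstruction of Mathews's actual code; the function names and the use of a sine table are assumptions for the example.

```python
import math

def make_wavetable(size=512):
    """Store one cycle of a sine wave as a lookup table."""
    return [math.sin(2 * math.pi * i / size) for i in range(size)]

def wavetable_oscillator(table, freq, sample_rate, num_samples):
    """Read through the table at a rate proportional to the desired pitch."""
    size = len(table)
    phase = 0.0
    increment = freq * size / sample_rate  # table steps per output sample
    out = []
    for _ in range(num_samples):
        out.append(table[int(phase) % size])  # wrap around the table
        phase += increment
    return out

samples = wavetable_oscillator(make_wavetable(), freq=440.0,
                               sample_rate=44100, num_samples=100)
```

Because any waveform can be placed in the table, a single oscillator design yields arbitrarily many timbres, which is precisely the flexibility Music II added over Music I's fixed triangular wave.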
The computer music research at Bell Labs and other institutions provided the backdrop to the first round of creative musical work with computers. From the beginning, John Pierce and Max Mathews had been eager to make contact with musicians, and in 1961 Pierce hired composer James Tenney to come and work at Bell Labs.
Tenney worked at Bell from 1961 to 1964 and completed several compositions during that period. His first was Analog #1: Noise Study, finished in 1961 and inspired by the random noise patterns he heard in the Holland Tunnel on his daily commute between Manhattan and New Jersey. His interest in randomness at that time included using the computer to make musical decisions as well as to generate sound. In Dialogue (1963), Tenney used various stochastic methods to determine the sequencing of sounds.
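Stochastic sequencing of the kind Tenney explored means letting weighted chance decide which sound comes next. The following sketch is purely illustrative, in the spirit of such methods rather than a reconstruction of Tenney's procedures; the sound categories and weights are invented for the example.

```python
import random

# Hypothetical sound categories and their selection probabilities.
sound_types = ["noise band", "tone", "silence"]
weights = [0.5, 0.3, 0.2]

def stochastic_sequence(length):
    """Choose each successive sound at random, biased by the weights."""
    return [random.choices(sound_types, weights=weights)[0]
            for _ in range(length)]

random.seed(1)  # fixed seed so the example is repeatable
sequence = stochastic_sequence(8)
```

Each run with a different seed yields a different ordering, while the weights keep the overall statistical character of the music constant, which is the essence of the approach.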
Tenney continued to develop his stochastic ideas in Phases (for Edgard Varèse) (1963), in which different types of sounds are statistically combined. His techniques resulted in sounds with continually changing textures, similar to a fabric made up of a variety of materials in various shapes and colors.
In 1963, Mathews published an influential article on computer music titled “The Digital Computer as a Musical Instrument” in Science. Jean-Claude Risset, at the time a physics graduate student in France, read the article and became so excited by the potential of computer music that he decided to write his thesis based on research he planned to do at Bell Labs. Risset came to Bell in 1964, began research in timbre, returned to France in 1965, and came back to Bell in 1967. He completed Computer Suite from Little Boy in 1968 and Mutations in 1969. Both compositions contain sounds that could not have been produced by anything but a computer.
Meanwhile, at Stanford University in 1963, John Chowning also came across Max Mathews’s Science article and became inspired to study computer science. Chowning visited Bell Labs in the summer of 1964 and left with the punched cards for Music IV. He subsequently established, with David Poole, a laboratory for computer music at Stanford. The lab would eventually become the Center for Computer Research in Music and Acoustics (CCRMA), a major center for computer music research. Chowning later went on to develop frequency modulation (FM) as a method for generating sound. His approach to FM, in fact, was licensed by Yamaha in 1974 and was the basis of sound production in many Yamaha synthesizers through the 1980s.
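In Chowning's FM technique, the phase of a carrier sine wave is modulated by a second sine wave at audio rate; the modulation index controls how rich the resulting spectrum is. The sketch below shows the basic simple-FM formula, sin(2πfct + I·sin(2πfmt)), as a plain Python function; the parameter values are arbitrary examples, not settings from Chowning's work or Yamaha's hardware.

```python
import math

def fm_tone(carrier_freq, mod_freq, index, sample_rate, duration):
    """Generate a simple FM tone: the carrier's phase is pushed back and
    forth by a modulating sine; index sets the modulation depth."""
    n = int(sample_rate * duration)
    out = []
    for i in range(n):
        t = i / sample_rate
        out.append(math.sin(2 * math.pi * carrier_freq * t
                            + index * math.sin(2 * math.pi * mod_freq * t)))
    return out

tone = fm_tone(carrier_freq=440.0, mod_freq=220.0, index=3.0,
               sample_rate=44100, duration=0.01)
```

The appeal of FM for 1970s hardware was economy: two oscillators and one index parameter produce complex, evolving spectra that would otherwise require many additive-synthesis oscillators.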
Chowning’s early compositions Sabelithe (1971) and Turenas (1972) both simulated sounds moving in space. In Stria (1977), Chowning used the Golden Section to determine the spectra of the sounds. The results were otherworldly: magical, strange, icy, and unlike anything that one could imagine coming from an acoustic instrument.
1. Chadabe, Joel. Electric Sound, 1997, p. 108.
2. Chadabe, Joel. “The Electronic Century Part III: Computers and Analog Synthesizers.” Electronic Musician, Vol. 16, Issue 4 (April 2000).