
Showing papers in "Computer Music Journal in 1986"


Journal ArticleDOI
TL;DR: This article attempts to explain the operation of the phase vocoder in terms accessible to musicians, relying heavily on the familiar concepts of sine waves, filters, and additive synthesis, and employing a minimum of mathematics.
Abstract: For composers interested in the modification of natural sounds, the phase vocoder is a digital signal processing technique of potentially great significance. By itself, the phase vocoder can perform very high fidelity time-scale modification or pitch transposition of a wide variety of sounds. In conjunction with a standard software synthesis program, the phase vocoder can provide the composer with arbitrary control of individual harmonics. But use of the phase vocoder to date has been limited primarily to experts in digital signal processing. Consequently, its musical potential has remained largely untapped. In this article, I attempt to explain the operation of the phase vocoder in terms accessible to musicians. I rely heavily on the familiar concepts of sine waves, filters, and additive synthesis, and I employ a minimum of mathematics. My hope is that this tutorial will lay the groundwork for widespread use of the phase vocoder, both as a tool for sound analysis and modification, and as a catalyst for continued musical exploration.
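
As a rough illustration of the time-scale modification the abstract describes, the sketch below stretches a sound without changing its pitch by taking short-time Fourier transforms, rescaling the frame-to-frame phase advance, and resynthesizing with a longer hop. It is a minimal sketch, not the author's implementation; the use of NumPy and the particular window and hop sizes are assumptions chosen for clarity.

import numpy as np

def pvoc_stretch(x, stretch=2.0, n_fft=1024, hop_a=256):
    """Time-stretch signal x by `stretch` while preserving pitch (illustrative sketch)."""
    hop_s = int(round(hop_a * stretch))              # synthesis hop = analysis hop * stretch
    win = np.hanning(n_fft)
    starts = np.arange(0, len(x) - n_fft, hop_a)     # analysis frame positions
    bins = np.arange(n_fft // 2 + 1)
    expected = 2 * np.pi * bins * hop_a / n_fft      # nominal phase advance per analysis hop
    out = np.zeros(len(starts) * hop_s + n_fft)
    phase_acc = np.zeros(n_fft // 2 + 1)             # running synthesis phase per bin
    prev_phase = np.zeros(n_fft // 2 + 1)
    for i, s in enumerate(starts):
        spec = np.fft.rfft(win * x[s:s + n_fft])
        mag, phase = np.abs(spec), np.angle(spec)
        # deviation of the measured phase advance from the nominal bin frequency
        delta = phase - prev_phase - expected
        delta = (delta + np.pi) % (2 * np.pi) - np.pi        # wrap to [-pi, pi)
        phase_acc += (expected + delta) * (hop_s / hop_a)    # rescale advance to synthesis hop
        prev_phase = phase
        frame = np.fft.irfft(mag * np.exp(1j * phase_acc))
        out[i * hop_s : i * hop_s + n_fft] += win * frame    # overlap-add resynthesis
    return out

# Example: one second of A440 becomes roughly two seconds at the same pitch.
x = np.sin(2 * np.pi * 440 * np.arange(44100) / 44100)
y = pvoc_stretch(x, stretch=2.0)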

352 citations





Journal ArticleDOI
TL;DR: This introduction to magnetic recording explains the basics of magnetism and their application to recording technology.

63 citations



Journal ArticleDOI
TL;DR: This paper presents an alternative approach to realtime control that enables the programmer to express the real-time response of a system in a declarative fashion rather than an imperative or procedural one.
Abstract: In the past, real-time control via digital computer has been achieved more through ad hoc techniques than through a formal theory. Languages for real-time control have emphasized concurrency, access to hardware input/output (I/O) devices, interrupts, and mechanisms for scheduling tasks, rather than taking a high-level, problem-oriented approach in which implementation details are hidden. In this paper, we present an alternative approach to real-time control that enables the programmer to express the real-time response of a system in a declarative fashion rather than an imperative or procedural one. Examples of traditional, sequential languages for real-time control include Modula (Wirth 1977a; 1977b; 1982), Ada (DOD 1980), CSP (Hoare 1978), and OCCAM (May 1983). These languages all provide support for concurrency through multiple sequential threads of control. Programmers must work hard to make sure their processes execute the right instructions at the appropriate times, and real-time control is regarded as the most difficult form of programming (Glass 1980). In contrast, our approach (Dannenberg 1984; 1986) is based on a nonsequential model in which behavior in the time domain is specified explicitly. This model describes possible system responses to real-time conditions and provides a means for manipulating and composing responses. The programming language Arctic is based on the nonsequential model and was designed for use in real-time computer music programs. It should be emphasized that our efforts have concentrated on the development of a notation for specifying desired real-time behavior; any implementation only approximates the desired behavior. (Arctic: A Functional Language for Real-Time Systems)
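
The abstract's contrast between imperative scheduling and declarative time-domain specification can be suggested with a small sketch (not Arctic itself; the names Signal, ramp, stretch, delay, and par below are invented for illustration). A behavior is simply a value defined over time, and composite responses are built by transforming and combining behaviors rather than by sequencing instructions.

from typing import Callable

Signal = Callable[[float], float]        # a behavior: a value as a function of time (seconds)

def ramp(dur: float) -> Signal:
    """Rise linearly from 0 to 1 over `dur` seconds, then hold at 1."""
    return lambda t: min(max(t / dur, 0.0), 1.0)

def stretch(b: Signal, factor: float) -> Signal:
    """The same behavior, unfolding `factor` times more slowly."""
    return lambda t: b(t / factor)

def delay(b: Signal, dt: float) -> Signal:
    """The same behavior, starting `dt` seconds later."""
    return lambda t: b(t - dt)

def par(*bs: Signal) -> Signal:
    """Simultaneous behaviors, combined here by summing their values."""
    return lambda t: sum(b(t) for b in bs)

# A composite real-time response declared once, then evaluated at whatever
# instants the system needs, rather than coded as an explicit schedule.
response = par(stretch(ramp(1.0), 2.0), delay(ramp(1.0), 0.5))
print([round(response(t), 2) for t in (0.0, 0.5, 1.0, 2.0)])   # [0.0, 0.25, 1.0, 2.0]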

32 citations


Journal ArticleDOI
TL;DR: The Centre Acanthes extended its usual session during the months of July and August by travelling to the New Mozarteum in Salzburg, Austria, and the European Center in Delphi, Greece; the three sessions were dedicated wholly to the works, theories, and ideas of Iannis Xenakis, thus encompassing instrumental music as well as computer and electronic music.
Abstract: No doubt one of Europe's major events in contemporary music in 1985 was the seven-week gathering of composers, instrumentalists, and students from all over the world at the Centre Acanthes in honor of Iannis Xenakis. The Paris-based Centre Acanthes, a nonprofit organization founded 10 years ago, moves to Aix-en-Provence in the south of France once a year to host a summer session with composers of our time, who in former years have included Stockhausen, Berio, Ligeti, and Xenakis himself. Last year, the center was chosen by its sponsors (the European Parliament and the French Government, among others) to take part in the European Year of Music. During the months of July and August, it extended its usual session, traditionally based at the Milhaud Conservatory in Aix, by travelling to the New Mozarteum in Salzburg, Austria (Fig. 1) (upon the invitation of Rolf Liebermann, whom some may also remember for his compositions for mechanical instruments during the 1950s), and finally to the European Center in Delphi, Greece. The three sessions, which could be attended individually or successively, averaged approximately 70 students. The sessions were dedicated wholly to the works, theories, and ideas of Iannis Xenakis, thus encompassing instrumental music as well as computer and electronic music. Topics were discussed in seminars held by a staff of approximately 15 (including Xenakis) of the foremost European musicians in contemporary music. On the instrumental side were Elizabeth Chojnacka (harpsichord), Sylvio Gualda (percussion), James Wood (choir), Claude Helffer (piano), and the Arditti String Quartet. On the electronic side, the sessions focused on the UPIC system. (The UPIC System: A User's Report)

29 citations






Journal ArticleDOI
TL;DR: It is difficult to rigorously compare and characterize melodic patterns by ear alone, since the listening process is subject to the limitations and artifacts of both memory and perception, as well as to individual variations in listeners' ability to localize and describe melodic features.
Abstract: Various techniques of music visualization, music transcription, melody storage, and melody matching have been proposed (Mitroo, Herman, and Badler 1979; Dillon and Hunter 1982). However, none of these methods has had as its primary focus the mathematical characterization of melodic patterns using an interactive graphics system with a wide variety of controlling parameters. It is difficult to rigorously compare and characterize melodic patterns by ear alone, since the listening process is subject to the limitations and artifacts of both memory and perception, as well as to individual variations in listeners' ability to localize and describe melodic features. This problem is the primary motivation for the system described here.
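
As a hint of what a mathematical characterization of melodic patterns might look like, the sketch below reduces a melody (given as MIDI pitch numbers) to its interval sequence and up/down contour so that two patterns can be compared numerically rather than by ear. The representation and the distance measure are illustrative assumptions, not the authors' system.

def intervals(pitches):
    """Successive pitch differences, in semitones."""
    return [b - a for a, b in zip(pitches, pitches[1:])]

def contour(pitches):
    """+1 for an upward step, -1 for downward, 0 for a repeated note."""
    return [(i > 0) - (i < 0) for i in intervals(pitches)]

def contour_distance(a, b):
    """Fraction of positions at which two equal-length melodies' contours disagree."""
    ca, cb = contour(a), contour(b)
    if len(ca) != len(cb):
        raise ValueError("melodies must have the same number of notes")
    return sum(x != y for x, y in zip(ca, cb)) / len(ca)

theme   = [60, 62, 64, 62, 67]   # C D E D G
variant = [60, 62, 65, 62, 69]   # different intervals, same shape
print(intervals(theme), contour(theme))    # [2, 2, -2, 5] [1, 1, -1, 1]
print(contour_distance(theme, variant))    # 0.0: identical contour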


Journal ArticleDOI
TL;DR: The main purpose of this research has been to study and to experiment with a synthesis technique by which one can dynamically control time-varying spectra using a small number of parameters.
Abstract: The main purpose of our research has been to study and to experiment with a synthesis technique by which we can dynamically control time-varying spectra using a small number of parameters. We have studied and experimented with the technique introduced by Mitsuhashi (1982), and we have implemented it on a digital signal processor (a Digital Music Systems DMX-1000) for real-time synthesis and on a Fairlight CMI Series IIX (controlled via a DEC VAX-11/750) for interactive experimentation and composition. The technique is based on the sampling of a two-variable function along a particular orbit; the orbit is calculated by expressions that represent time-dependent relations of the two variables. We can control the dynamic development of spectra by varying time-dependent and/or time-independent parameters within the orbit expressions. The waveforms we obtain, as will be seen later, can be either periodic or aperiodic. As this technique gives us many new possibilities for synthesis control, we think it will be useful to carry out systematic experimentation in order to identify suitable criteria for varying functions and orbital parameters.
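
The orbit idea can be sketched in a few lines: a fixed two-variable function (a "terrain") is sampled along a parametric orbit whose expressions depend on time, and slow changes in the orbit parameters produce a slowly evolving spectrum. The particular terrain and orbit below are illustrative choices, not those used by the authors.

import numpy as np

def terrain(x, y):
    """A fixed two-variable function to be scanned by the orbit."""
    return np.sin(2 * np.pi * x) * np.cos(2 * np.pi * y)

def orbit_synthesis(dur=1.0, sr=44100, f0=220.0, drift=0.3):
    """Sample the terrain along an elliptical orbit whose radii drift over time."""
    t = np.arange(int(dur * sr)) / sr
    rx = 0.5 + drift * t / dur           # time-dependent orbit parameters
    ry = 0.5 - drift * t / dur
    x = rx * np.cos(2 * np.pi * f0 * t)  # orbit expressions in the two variables
    y = ry * np.sin(2 * np.pi * f0 * t)
    return terrain(x, y)                 # the output waveform

signal = orbit_synthesis()               # one second of audio at 44.1 kHz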




Journal ArticleDOI
TL;DR: The attempt to satisfy these musical demands with a single general-purpose synthesis language fails not only because such programs cannot meet the increasing needs of today's composers, but also because they sacrifice efficiency and power for breadth and generality.
Abstract: As our knowledge of sound synthesis grows, it becomes increasingly apparent that no single synthesis strategy can create the wide range of musical timbres desired by composers. Similarly, as we gain experience creating compositional interfaces to synthesis programs, it becomes clear that no single musical input language or user interface can adequately accommodate a wide range of compositional styles and intentions. Thus, the attempt to satisfy these musical demands with a single general-purpose synthesis language fails not only because such programs cannot meet the increasing needs of today's composers, but also because they sacrifice efficiency and power for breadth and generality. The need for new strategies becomes obvious when one considers that notions about synthesis and compositional interfaces keep changing year by year and that a great deal of software is constantly discarded.


Journal ArticleDOI
TL;DR: Author(s): Curtis Roads, Marc Battier, Clarence Barlow, John Bischoff, Herbert Brün, Joel Chadabe, Conrad Cummings, Giuseppe Englert, David Jaffe, Stephan Kaske, Otto Laske, Jean-Claude Risset, David Rosenboom, Kaija Saariaho, Horacio Vaggione
Abstract: Author(s): Curtis Roads, Marc Battier, Clarence Barlow, John Bischoff, Herbert Brün, Joel Chadabe, Conrad Cummings, Giuseppe Englert, David Jaffe, Stephan Kaske, Otto Laske, Jean-Claude Risset, David Rosenboom, Kaija Saariaho, Horacio Vaggione. Source: Computer Music Journal, Vol. 10, No. 1 (Spring 1986), pp. 40-63. Published by The MIT Press. Stable URL: http://www.jstor.org/stable/3680297

Journal ArticleDOI
TL;DR: It is insufficient that a computer-music composer should think only in terms of the mean rate of vibrato of the oboelike sound in the next six seconds, without addressing the basic appeals of music.
Abstract: People and machines neither hear nor make the same music. Machines measure and generate exact amplitudes, frequencies, time durations, and spectra. However, people hear and create music with subjective qualities such as rhythmic pulses, prettiness, illusions of the ear, imitations of sounds, models of nonmusical reality, freely rendered mathematical relationships, familiarity, archetypal symbols, and the stimulation of senses other than the auditory. We agree with Minsky's remark, "... music should make more sense once seen through listeners' minds" (Minsky 1981). A reconciliation between machine and human purposes seems in order, to assure the promise of growing intimacy between machines and people. It is insufficient that a computer-music composer should think only in terms of the mean rate of vibrato of the oboelike sound in the next six seconds, without addressing the basic appeals of music. Leon Kirchner wrote: "One of the naive assumptions, in the construction of computer music, for instance, is that if one programs the parameters (duration, density, pitch, etc., etc.) music should result" (Ramey n.d.). It is insufficient for composition and synthesis software to permit specification of only the notes and not of the musical impression to be made on the listener. It is difficult to coax the irresistible object, music, from the immovable force that technically, rather than humanly, programmed computers represent. Nevertheless, composers persevere in an effort to program computers to make music the way they do. Laurie Spiegel reports on using the Groove program by Max Mathews and F. R. Moore: "... I wrote complex algorithms (in Fortran) to process the data from these devices [keyboard, knobs, pad, etc.] and derive from it much more complex music than I actually played ... incorporating a set of rules for melodic evolution" (Spiegel 1980). Writing about his composition program PHRASE, Hiller writes ... (Aesthetic Appeal in Computer Music)

Journal ArticleDOI
TL;DR: In working with standard digital sound synthesis algorithms, the author has sought methods that permit him to derive from a particular timbre clearly related but distinct variants of that timbre.
Abstract: In working with standard digital sound synthesis algorithms, I have often arrived at a point in a composition where I felt the need to develop a specific timbre as a means of emphasizing or prolonging a structurally important moment in the piece. To this end I have sought methods that permit me to derive from a particular timbre clearly related but distinct variants of that timbre. These methods involve ...