Graphical Groove: Memorium for a Visual Music System

http://retiary.org/ls/writings/vampire.html

Published in Organised Sound 3(3): 187-191, 1998. Cambridge University Press.

Graphical Groove: Memorium for a Visual Music System by Laurie Spiegel, August, 1998

Abstract:

Once upon a time there was a computer music system called GROOVE (Generating Realtime Operations On Voltage-controlled Equipment, Bell Telephone Laboratories, Murray Hill, New Jersey), which outputted in the realm of sound, and was a wonderful and still-unique tool for the composition thereof. Once upon a time a then-young composer who was using GROOVE for music got the harebrained idea that if she made a few minor changes here and there she could use it to compose images as well. This she did in 1974-6, and though the untimely demise of the system, owing to massive hardware changes in this system's home lab, prevented creation of much documentation in the form of aesthetic works of its output, the system did function sufficiently well to make some description worthwhile. While it is true that the mid-1960s DDP-224 computer on which GROOVE became a VAMPIRE (Video And Music Program for Interactive Realtime Exploration/Experimentation) was a massive room-sized computer, it has by now long been eclipsed in power by the constantly improving home computer. It is worth describing the concepts involved in part because there are by now many small computers capable of emulating its musical methods. Besides, I had a deep personal relationship with that computer, and wish to commemorate it. Here then follows the tale of Graphical GROOVE, a.k.a. the VAMPIRE.

What was GROOVE, anyway?

Before going on to its visual applications, it may help to visualize the GROOVE system in its original form, that of a hybrid (digital-analogue) computer music system, as developed by Max Mathews, Dick Moore and colleagues. The principle was both simple and general. A number of input devices (knobs, pushbuttons, a small organ keyboard, a 3D joystick, an alphanumeric keyboard, a card reader, and several console and toggle switches) and a number of output devices (14 digital-to-analog converters used for control voltages, 36 computer-controlled relays, a thermal printer, and 2 washing-machine-sized one-megabyte hard disks) were connected to a room-sized 24-bit DDP-224 computer programmable by its users in FORTRAN IV and DAP 24-bit assembly language. Also accessible (as subroutines residing in FORTRAN IV libraries) were what might be called "soft" or "virtual" input devices (random number generators, attack-decay interpolators, and a sophisticated periodic function generator) and output devices (storage buffers, including arrays for logical switches and data of different types).

Each user would write their own "user programs" for their own purposes, to specify interconnections between inputs and outputs including the above mentioned. Since these connections could be complex transfer functions consisting of any kind of process the user could code up, this made the system ideal for the development of what we called "intelligent instruments". (These are essentially musical instruments for which the ratio of the amount of information the music system generates to the amount the musician plays, per unit of time, is greater than one to one.) It also made the system ideal for the exploration of compositional algorithms.
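
The generated-to-played ratio that defines an "intelligent instrument" can be illustrated with a toy sketch in modern Python (nothing here comes from the original FORTRAN IV code; the names and intervals are invented): a single played event is expanded by the user program into several generated events, so the ratio exceeds one to one.

```python
# Hypothetical sketch of an "intelligent instrument" patch in the GROOVE
# sense: the player supplies one input event and the user program expands
# it into several output events, so generated/played information > 1:1.

def arpeggiate(root, intervals=(0, 4, 7, 12)):
    """Expand a single played pitch into a chord arpeggio."""
    return [root + i for i in intervals]

played = [60]                       # one event from the musician
generated = arpeggiate(played[0])   # four events from the instrument
ratio = len(generated) / len(played)
```

Any user-coded transfer function with this shape, however complex, fits the same pattern: the musician supplies a sparse stream of decisions and the program elaborates them.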

Aside from the creative freedoms and temptations inherent in the responsibility of each user to program their own "patch," GROOVE had another cataclysmically important characteristic as a music system. It viewed everything as (perceptually) continuous functions of time. It did not think in terms of notes or other such "events". Instead, it required such entities to be programmed as processes or sampled as curves of change over time. The sampling rate was 100 hertz, which meant that all analog oscillator, etc. parameters were updated fast enough to sound as though continuously changing to us mere organisms. (This was sampling done on the level of control parameters, not audio sampling.)

During each sample, the user program would be looped through once, all inputs referenced would be read, all programmed computations made, and the output channelled to DACs and disk files which could be edited later. An editing program would be just another interactive user program which each of us would write for a specific kind of modification of a particular work, and the data being edited would be the stored time functions on disk instead of the live data coming into the computer from the various input devices. Editing programs could use all of the same devices (knobs, periodic function generators, et cetera) that might be used in recording a first pass, and editing was often a realtime performance process in itself.
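
The per-sample cycle described above can be sketched in modern Python (not the original FORTRAN IV; every name here is illustrative): once per tick, 100 times a second on the real system, the user program reads its inputs, applies whatever transfer functions its author coded, and channels the results both to the DACs and to stored time functions on disk.

```python
# A minimal sketch of GROOVE's per-sample loop, assuming hypothetical
# read_inputs and transfer callables supplied by the user program.

def run_user_program(read_inputs, transfer, n_samples):
    recorded = []                   # stands in for disk-stored time functions
    dac_out = []                    # stands in for the control-voltage DACs
    for t in range(n_samples):
        knobs = read_inputs(t)      # read every referenced input device
        values = transfer(knobs)    # user-coded mapping, arbitrary complexity
        dac_out.append(values)      # channel to the DACs...
        recorded.append(values)     # ...and to disk for later editing
    return dac_out, recorded

# e.g. a trivial patch doubling a ramp input:
dac, rec = run_user_program(lambda t: t, lambda k: k * 2, 5)
```

An editing program fits the same shape: the identical loop runs, but `read_inputs` pulls from a stored disk function rather than a live knob, which is why editing could itself be a realtime performance.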

The centerpiece of the design was the bank of 200 functions of time. All data was stored as series of numbers that had no specific association with any parameter of sound or of musical composition except what a user program might give it by connecting these numbers to a relay or DAC (digital to analogue convertor). The system allowed the composition of functions of time in the abstract.

The importance of being able to approach all parameters of sound, of composition, or of performance as perceptually continuous functions of time cannot be over stressed during this current period when music seems everywhere to be digitally described as entities called "notes", and in which there are generally conceived to be differing necessary rates of change for different musical parameters. In our modern post-MIDI world, pitch is seen as changing at a rate of once per note whereas amplitude is updated "continuously" at higher resolution (faster rate). GROOVE embodied a concept space shared with the old truly modular analog synthesizers of the 1960s, on which any pattern of temporal change could be applied to any parameter, and in which sound can really be treated as a multidimensional continuous phenomenon. GROOVE added to this the ability to use time functions computationally without direct connection to any input or output variable.

The software was able to handle several times as many of these simultaneous time functions as we had hardware DACs to use them on (200 functions for 14 DACs). We therefore had many spare functions available to use for variables of any level of abstraction we might want, from recording actual knob or switch settings as we improvised or interpreted stored music, to global and profound compositional parameters such as probabilities, densities or entropy curves.

Each of us used these time functions in our own ways. In fact each of us freely used the entire system in entirely our own way because we each had an entire copy of it to ourself, with full source code. We could each change anything we wanted.

Temptation:

We often enjoyed just playing around with the system. The rate at which the computer ran through the user program loop, reading its inputs and writing to its outputs, was controlled by an external analog oscillator in this early hybrid system. So of course, we tried plugging in a voltage controlled oscillator so that we could compose a time function which would create tempo changes by changing the sampling frequency of the computer itself. At one point Emmanuel Ghent had the computer control the speed of a variable speed reel to reel tape recorder so that he could specifically compose pitch changes with the oscillations of a bank of fixed frequency resonant filters he had built. In general, there were a lot of interconnections between the digital and analog domains and we played with them quite a bit.
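
The variable sample-clock trick can be sketched as resampling: if the computer's own clock is driven by a composed tempo function, playback is equivalent to reading a stored function at a time-varying rate. A hypothetical modern Python illustration (the names and the tempo functions are invented):

```python
# Reading a stored time function under a composed tempo curve: each tick
# advances the read position by the current tempo, so a faster tempo
# sweeps through the stored material sooner.

def play_with_tempo(stored, tempo):
    """Read `stored` at a rate given by `tempo` (samples per tick)."""
    out, pos = [], 0.0
    while pos < len(stored):
        out.append(stored[int(pos)])
        pos += tempo(int(pos))      # faster tempo -> skip ahead faster
    return out

# doubling the clock rate plays the same material in half as many ticks:
fast = play_with_tempo(list(range(10)), lambda p: 2)
```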

This was often difficult because the analog audio lab and the digital computer hardware were in separate labs at a cumbersome distance from each other, connected by several hundred yards of trunk cables. We all made many trips back and forth between the analog and digital ends of GROOVE to calibrate DAC output voltages or to change the configuration of the multicolored spaghetti. (A typical patch consisted of hundreds of cables on a removable patch matrix board that each user could slide into a card rack full of audio modules too miscellaneous to describe here, so that each user could pursue a hardware configuration unlike anyone else's.)

During such trips down that long long hall between the analog and digital labs, when not impatiently obsessed with an embarrassing desire for roller skates to shorten the long walk (in that era before roller skates were ok for grownups too), I began stopping to look through a glass window in a door to another computer room along the way. Strange abstract shapes could usually be seen evolving on a video monitor, growing and evolving, week after week, month after month. Eventually I got to know Dr. Kenneth Knowlton, the computer graphics pioneer and master of evolutionary algorithms, and we began to work together on various projects. After learning some graphics coding there, I became intrigued with the idea of trying to make musical structure visible and embarked on the strange mission of bringing GROOVE's compositional capabilities to bear on the frame buffer output, particularly the ideas of time functions, transfer functions, and interconnectible software modules.

This was back in the early 1970s, before digital synthesis of sound could be done in realtime (computed at the speed that we hear it). In that era, apart from a hybrid system such as GROOVE, computer music could only be done noninteractively, by entering defining information into a computer, waiting for sounds to be computed, then retrieving, recording and listening to them later. Ken, collaborating with film maker Lillian Schwartz, was working in a similarly nonrealtime way, running image generation software all night, using a program that would compute a single frame of film, then open and close the lens of a computer-controlled film camera to expose the film, and then advance it.

I reasoned that just as GROOVE's computer control of analog modules had made interaction with relatively complex logic systems a realtime process, permitting realtime interactive computer control of musical materials for the first time, realtime interactive computer graphics should be possible as well by similar means. Instead of recording the image on film frame by frame, I should be able to code myself a visual musical instrument that would let me play and compose image pieces by recording the control data as time functions and playing back the time functions as visual compositions.

The idea of getting GROOVE running on this second computer in a different lab down the hall where it would output to a video monitor instead of to banks of equipment in an analog audio lab did not come to me all at once. Initially, I merely succumbed to the irresistible temptation of glowing color and texture and movement and light, but for the next several years, I spent probably as much time working on this visual music system as on audible music, and realtime interaction with video images felt like playing music to me. The desire to compose music visually was an inevitable craving.

RTV (Real Time Video):

What later became VAMPIRE started relatively simply, as a program called RTV (Realtime Video). That, in turn, started even more simply, as a mere drawing program for creating still images. Using a routine that Ken Knowlton gave me which permitted me to address the "frame buffer" (I believe this was just a dedicated area of memory in computer CORE) and a Rand Tablet, I wrote myself a "drawing" program (similar to what we now call "paint" programs, but in 1974 there was no terminology for this yet), and greatly enjoyed doing a long ongoing series of computer drawings, evolving and changing the way the drawing program elaborated an image from my motions over the Rand Tablet as time went on.

Ken and I also worked out an elaborate initialization routine for an array of 64 definable, storable bitmapped textures which could be used as "brushes", letters of the alphabet, or whatever, and which made use of a box with 10 columns of 12 pushbuttons, each representing a bit that could be on or off, functioning as a means of entering these patterns. After consulting some of my old hand weaving books, I made a large deck of Hollerith cards, and shuffled them different ways to be able to easily enter batches of patterns via the computer's card reader. (Ken did some truly amazing things with that 10 by 12 button box that are beyond the scope of this writing but nonetheless worth mentioning, such as projecting completely customizable virtual control surfaces for telephone-related jobs onto a half-silvered mirror above it. But that's another story.)
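
Stamping one of these bitmapped textures into the frame buffer under a selectable logical operation (write, and, or, xor, as the knob list below the drawing controls offered) can be sketched in modern Python; the buffers and names here are invented stand-ins for CORE memory, not the original code.

```python
# Combine a small bitmapped "brush" into a frame buffer with a logical op.
OPS = {
    "write": lambda dst, src: src,
    "and":   lambda dst, src: dst & src,
    "or":    lambda dst, src: dst | src,
    "xor":   lambda dst, src: dst ^ src,
}

def stamp(frame, brush, x, y, op="write"):
    """Merge `brush` (2-D list of bits) into `frame` at offset (x, y)."""
    f = OPS[op]
    for row, bits in enumerate(brush):
        for col, bit in enumerate(bits):
            frame[y + row][x + col] = f(frame[y + row][x + col], bit)

frame = [[0] * 8 for _ in range(8)]
brush = [[1, 1], [1, 1]]
stamp(frame, brush, 2, 2, "or")
```

Note that stamping the same brush twice with "xor" restores the pixels beneath it, which is exactly the trick used for the visible cursor listed among the pushbutton functions.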

As a composer of music, I soon found that I enjoyed playing the drawing parameters in real time like a musical instrument. I could move around in an image and change the size, color, texture and other parameters in real time as I drew it, using knobs and switches just like those on the GROOVE music computer down the hall. I would draw with one hand while manipulating the various visual parameters with my other hand using the 3D joystick, switches, push buttons and knobs.

The movements of the object I dragged around the screen felt melodic, and I realized that I wasn't satisfied with just one "melodic" line. In audible music I had loved counterpoint best, so I wrote in another realtime interactive device to play. It was a square box of 16 pushbuttons for standard musical contrapuntal options. By now it was possible to interact with quite a number of visible variables in realtime.

This was before "menu driven" human interface systems came into fashion. Even had it not been, however, I've always preferred random access parallel (equally reachable-for) controls to any kind of hierarchical or modal way of organizing such a group of controls. The interfaces of traditional acoustic musical instruments are generally of random access parallel design. It may be more hardware intensive, but spontaneously grabbable controls are better for the music and art.

The simultaneous parallel inputs I had written into the system at this point, before interfacing it with the GROOVE music system, when it was still just an unrecordable room sized live performance visual instrument, were as follows:

Rand tablet:

x and y location currently being drawn

Foot pedal:

enable or disable drawing (writing to the display)

Knobs:

Logical operation (write, and, or, xor)

Vertical size
Horizontal size
Color number 1 through 8 for foreground
Color number of background

Global color parameters

Color definition mode 1 (parametric control)

Saturation
Value
Hue
Resolution of hue spectrum
3-D joystick for path through color space of color indices 1 through 8

Color definition mode 2 (3-D joystick axes)

x = amplitude of green
y = amplitude of red
z = amplitude of blue

Push buttons - contrapuntal options:

Single line (single sequence of time-sequenced x-y locations, as drawn)

Contrary motions

Reflection on x axis added
Reflection on y axis added
Reflection on both axes
Sine-cosine relationship of 2 points moving

Parallel motions

Single line
Two parallel lines (for all above)
Three parallel lines

Oblique motions

Freeze x on reflection of drawn path
Freeze y on reflection of drawn path

Other pushbuttons

Clear screen to selected color
Visible cursor by continually xor-ing twice

10x12-button pushbutton box (used to define, edit and select bitmapped textures)

Definition mode (non-realtime)
Selection mode (textures 1 through 64, in realtime)
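
The contrapuntal pushbutton options above can be sketched as simple point transforms: reflections of the drawn path give contrary motion, fixed offsets give parallel motion. A hypothetical modern Python illustration (the screen size and offsets are invented):

```python
# One drawn (x, y) point fanned out into a small contrapuntal texture.
W, H = 256, 256                               # hypothetical screen size

def contrary_x(x, y):  return (W - 1 - x, y)  # mirror x (across vertical axis)
def contrary_y(x, y):  return (x, H - 1 - y)  # mirror y (across horizontal axis)
def contrary_xy(x, y): return (W - 1 - x, H - 1 - y)  # both reflections
def parallel(x, y, dx=16, dy=16): return (x + dx, y + dy)

def voices(x, y):
    """The drawn point plus three derived contrapuntal voices."""
    return [(x, y), contrary_x(x, y), contrary_y(x, y), parallel(x, y)]
```

The oblique-motion options correspond to freezing one coordinate of a reflected voice while the other continues to follow the drawn path.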


VAMPIRE (the Video And Music Program for Interactive Realtime Exploration/Experimentation):

With that many parameters to control in real time, I had arrived at the same difficult stage in visual improvisation at which I had found myself needing to switch over from improvising to composing in audible music several years earlier. The capabilities available to me had gotten to be more than I could sensitively and intelligently control in realtime in one pass to anywhere near the limits of what I felt was their aesthetic potential.

Concurrently, I had become increasingly interested in the use of algorithms and powerful evolutionary parameters in sonic composing, and in the idea of organic or other visual growth processes algorithmically described and controlled with realtime interactive input. The prospect of composing temporal structures that could be stored, replayed, edited, added to ("overdubbed" or "multitracked"), refined, and realized in either audio or video output modalities, based on a single set of processes or composed functions, made an interface of the drawing system with GROOVE's compositional and function-oriented software an almost inevitable and irresistible path to take. It would be possible to compose a single set of functions of time that could be manifest in the human sensory world interchangeably as amplitudes, pitches, stereo sound placements, et cetera, or as image size, location, color, or texture (et cetera), or (conceivably, ultimately) in both sensory modalities at once.
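
That modality-independence can be sketched as one stored time function pushed through different scalings, one toward an audio parameter and one toward a visual one; the ranges and names below are invented for illustration, in modern Python.

```python
# One abstract composed function realized in two sensory modalities.

def scale(value, lo, hi):
    """Map a stored function value in [0, 1] into a target range."""
    return lo + value * (hi - lo)

function = [t / 99 for t in range(100)]      # one abstract time function

as_pitch = [scale(v, 220.0, 880.0) for v in function]  # audio realization (Hz)
as_x_pos = [scale(v, 0, 255) for v in function]        # video realization (px)
```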

There are fewer parameters of sound to deal with than there are for images. In a hybrid system such as GROOVE, which used fixed waveform analog oscillators and computer controlled analog filters and voltage controlled oscillators, each "voice" may have frequency, amplitude, filter cutoff, and possibly filter Q, reverb mixture, or stereo location. A visual "voice" may have x, y, and possibly z axis locations, size in each of these dimensions, color, texture, hue, saturation, value (or other color parameters), plus logical operation on screen contents (write, and, or, exclusive or), and in the case of a recognizable entity, scaling and rotation variables (for solid objects roll, pitch and yaw) in two or three dimensions. (I did not deal with transformations of solid objects in this relatively primitive realtime digital visual instrument and composing system.)

In essence, what this system ultimately provided, for the short time that it ran before its untimely demise, was an instrument for composing abstract patterns of change over time by recording human input into a computer via an array of devices, the interpretation and use of each of which could be programmed, and the data from which could be stored, replayed, reinterpreted and reused. The set of time functions created could be further altered by any transformation one wished to program and then used to control any parameter of image or of sound (when transferred back to GROOVE's audio-interfaced computer by computer tape or disk). Unfortunately, due to the requirement of separate computers in separate rooms at the Labs, it was not physically possible to use a single set of recorded (and/or computed) time functions to control both image and sound simultaneously, though in principle this would have been possible.

Like any other vampire, this one consistently got most of its nourishment out of me in the middle of the night, especially just before dawn. It did so from 1974 through 1979, at which time its CORE was dismantled, which was the digital equivalent of having a stake driven through its art.

--------------------------------------------------------------------------------
Copyright ©1998 Laurie Spiegel. All rights reserved.
