Exciting "breakthroughs" ------- Recent news and views
in electronic music ------------- on science and art
[Typographical errors and omissions corrected 2020.0725 by Joseph Monzo]
The "psychological moment" for electronic music has arrived. By this I mean all the phases of electronic music: theremins, electronic organs, and other instruments played by hand; new techniques for re-recording sounds and altering their quality, such as the montage method called musique concrete; automatically-played instruments; and the use of electronic computers to produce and control musical sounds.
For instance: on the Story Line radio program over Station KNX, here in Los Angeles, on 19 December 1962, they played brief recordings that showed how much progress has been made in the last few years, getting computers to behave like musical instruments. They started with a rather crude rendition of "Silent Night" in a clarinet-like timbre, and full of keying clicks and thumps. Later makes and models of computers performed better and better. First the thumps were all gone, then vibrato was added, then some expression was put in. Finally an IBM 7090 not only played three mandolin-like accompaniment parts with proper expressive shadings, but this machine SANG "A Bicycle Built For Two" in a bass voice with good dynamic range and only the slightest "electrical accent."
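That progression--first removing the thumps, then adding vibrato and expression--can be sketched in a few lines. The following is a modern illustration in Python; the sample rate, vibrato depth, and attack time are arbitrary values of my own, not those of any machine mentioned above.

```python
import math

def tone(freq_hz, dur_s, rate=8000, vibrato_hz=6.0, vibrato_cents=30.0,
         attack_s=0.01):
    """Render one note as a list of floats in [-1, 1].

    vibrato_* and attack_s stand in for the kind of 'expression'
    the article describes being added to early computer performances.
    """
    samples = []
    phase = 0.0
    n = int(dur_s * rate)
    for i in range(n):
        t = i / rate
        # vibrato: a slow wobble of the pitch, measured in cents
        cents = vibrato_cents * math.sin(2 * math.pi * vibrato_hz * t)
        f = freq_hz * 2 ** (cents / 1200)
        phase += 2 * math.pi * f / rate
        # a short fade-in envelope removes the keying clicks and thumps
        env = min(1.0, t / attack_s)
        samples.append(env * math.sin(phase))
    return samples

note = tone(440.0, 0.25)
```

The attack envelope is what removes the "keying clicks": an abrupt start contains energy at all frequencies, while a short fade-in does not.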
--It is important to realize that this was accomplished on machines never intended for musical purposes, but merely for prosaic, non-artistic tasks like adding up the weekly payroll or doing the mathematical drudgery for a roomful of engineers.
On 30 December 1962, the Los Angeles Herald-Examiner ran an article on the Thereminvox as part of the Sunday Supplement. Oddly enough, this article failed to mention that theremins are now being made again commercially, or that quite a number of people are playing such instruments that they have built themselves. However, the article did say that theremin music is used more and more in films and on TV programs.
As for electronic organs, people are beginning to forget what a genuine pipe organ sounds like. By way of compensation, organ music is no longer the exclusive property of church or auditorium, but is coming into the home. Although there are far too many simplified, stripped-down instruments being made for "one-finger artists," the amazing advances in miniaturization now being made in electronics will one day give us electronic organs with full resources but smaller and handier than the present "chord organs" for beginners. Even before that happens, the organ may well assume the piano's pre-eminence as the chief home instrument. Already it is competing seriously for first place. Prospective organists have a wide choice of makes, models, tone-qualities, furniture styling, and prices.
A little more on the subject of organs later on, but this is an appropriate place to tell you that the first portion, one keyboard's worth, of an electronic organ was built during the last four months at 1280 Exposition Boulevard, Los Angeles, and you can see and hear it by appointment. The completed portion, and other portions as they are built after a while, will incorporate certain unusual features not practical for the mass-produced commercial instruments. This instrument will eventually have a larger audience, though, as it will be used to make recordings.
***
On Sunday, 20 January, 1963, the Los Angeles Times broke from the beaten path and published, in the weekly "Calendar" section, a whole page on the subject of electronic music. Entitled "Beethoven, Bach,...Now the Computer," the article by the music critic Albert Goldberg presented the timely observations of Dr. Simon Ramo, who is experienced in music as well as electronics. If you possibly can, go to your library and look up this article. If you can't, here is a summary: it began with an announcement of a panel discussion. Though I was not able to attend myself, I have been promised a report on it, and will no doubt be able to give you more information at some later time.
As a composer, all my critical instincts were aroused by the next topic of the article: as implied in the title, electronic computers (in this case, the general-purpose digital computer, such as the machines mentioned on the preceding page) have now been developed to the point where they can simulate (highbrow talk for "pretend to be," "behave almost as if they were") a wide variety of things and processes that seem at first to be quite unrelated to mathematics or arithmetic problems or the antics of office adding machines.
For instance, Wolfgang Amadeus Mozart. It is now possible to feed all of Mozart's symphonies, sonatas, and other works into a computer, along with coded programming instructions representing the rules of music in Mozart's time, and after the machine has performed a thorough analysis of Mozart's style--how he wrote his melodies, how he harmonized them, how he orchestrated, etc.--it is then possible for the computer to shuffle its stored data around and within a matter of hours to flood the musical market with jillions of operas, symphonies, chamber works, and so on, all so closely imitating Mozart that even the expert might not be able to tell the difference. Indeed, more music than all the musicians could find time to play, or the audience time to listen to!
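At bottom, the "analyze, then shuffle" procedure is a statistical table of what follows what. Here is a toy sketch in Python; the two-piece corpus and the note names are invented for illustration, and a real analysis of harmony and orchestration would need far richer tables than this melodic one.

```python
import random
from collections import defaultdict

def learn_style(pieces):
    """Tally which note tends to follow which, across a corpus of
    note-name sequences. This is the 'thorough analysis' step,
    reduced to melodic successions only."""
    table = defaultdict(list)
    for piece in pieces:
        for a, b in zip(piece, piece[1:]):
            table[a].append(b)
    return table

def imitate(table, start, length, seed=0):
    """Shuffle the stored data around into a new melody
    'in the style of' the corpus."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        choices = table.get(out[-1])
        if not choices:
            break
        out.append(rng.choice(choices))
    return out

corpus = [["C", "E", "G", "E", "C"], ["C", "D", "E", "D", "C"]]
table = learn_style(corpus)
melody = imitate(table, "C", 8)
```

Given enough stored works, the imitating function will indeed pour out new sequences indefinitely; whether they would fool an expert is another matter.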
This is not some science-fiction writer's dream; it is feasible today. The $1,500,000 cost of large computers and the equally fantastic rentals and hourly operating charges are the only real obstacles to the mass-production of Beethoven, Bach, and other "standard" composers in this manner. Now that a recording company has come out with such a feat of legerdemain as those "electronically reprocessed," imitation-stereophonic Toscanini recordings (and in this case, elevating the conductor, the interpreter, above the composers involved), it would be most naive to suppose that anybody would be restrained by ethical principles from issuing "Beethoven's Thirty-Third Symphony" or "Mozart's Sonata for Violin and Piano, K 29,387."
Fortunately, the article does not stop at this point, but goes on to take up the use of computers for aiding contemporary composers.
There is as much routine drudgery in musical composition (and probably more) as there is in science and engineering. The proportion of inspiration to perspiration (Edison put it at 1% vs. 99%) must be about the same for the composer or the poet as it is for the architect or electronic-circuit designer. As I said in 1957 (see the reprinted leaflet I may enclose with this bulletin), the computer may be used as a tool to avoid most of this perspiration. A composer can have his own style statistically analyzed by a computer, and thus bring to a fully conscious level many of the facets of his creative activity which have been intuitive or unconscious; then he can elect to change his style if he wishes, and no longer be the unwitting slave of his own formulas.
(Maybe I could digress here and "conjugate" one of Bertrand Russell's famous "irregular verbs": "I am consistent; you are the slave of your own formulas; he is a worn-out hack.")
Be that as it may, the human memory is most capricious, and a composer's mind, quite like that of the author or scientist, runs in grooves; so the composer will repeat his favorite patterns too often, and forget to use other tone-patterns he may have stored in his memory, but cannot recall when they are most needed. Computers and information-retrieval devices are quite impartial about "remembering," and thus the composer making use of these new aids can choose from a much wider range of possible combinations than one who has to rely on what occurs to him at any given moment.
The artist still must choose from what the machine offers him. It is the often-encountered case of "a good servant but a bad master" again: yet is it likely that any able composer would willingly become the servant of a machine, however expensive and elaborate?
The final section of the article deals with the possibilities of generating new kinds of musical tones by computer. One example given is that of white noise, selectively filtered down until it becomes a tone. (But this idea is hardly new: there are patents on it from some years back.) Or one may proceed cautiously, first dissecting familiar sounds and studying them, then going on from there, taking something only slightly different, and incorporating this into the language of music. So ends the article.
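How noise becomes a tone is easy to show: a sharply resonant filter passes only the frequencies near its center, so broadband hiss comes out nearly periodic. Here is a sketch in Python, using the standard two-pole resonator; the sample rate, center frequency, and pole radius are illustrative values of my own.

```python
import math
import random

def resonate(noise, rate, freq_hz, r=0.995):
    """Two-pole resonator: passes a narrow band around freq_hz,
    turning broadband noise into a near-tone. The pole radius r < 1
    sets the bandwidth; closer to 1 means narrower and purer."""
    w = 2 * math.pi * freq_hz / rate
    a1 = 2 * r * math.cos(w)
    a2 = -r * r
    y1 = y2 = 0.0
    out = []
    for x in noise:
        y = x + a1 * y1 + a2 * y2
        y2, y1 = y1, y
        out.append(y)
    return out

rng = random.Random(1)
rate = 8000
noise = [rng.uniform(-1, 1) for _ in range(rate)]  # one second of hiss
tone_ish = resonate(noise, rate, 440.0)
```

Raw noise has almost no correlation from one sample to the next; after the resonator, adjacent samples are strongly correlated--the statistical signature of a tone.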
I didn't succeed in keeping my opinions out of the review, and I hope you will excuse me for this, while I go on to give a few more opinions.
Computers of the kind discussed in the article are so costly that while the Man-to-the-Moon program or the Rocket-to-Mars project, or the Government's budget department, can afford them easily, there certainly isn't a composer, alive or dead, whose exchequer would take care of even a moderate amount of computer rental time. Then is the promise implied in the article pointless and merely tantalizing?
Not necessarily. In the first place, drafting a general-purpose machine, with many non-musical features and many abilities and facilities not required for simulating musical instruments or doing a composer's specialized chores, is both inefficient and uneconomic. It is not merely a question of the computer hardware tied up in working at a task for which it was not really designed--it is also a matter of what computer people now call "software"--the programs and coded instructions and preliminary work that highly-trained people have to do before these machines can pretend to be a composer's assistant, or an orchestra--such clerical work requires special skill and costs money.
Let us say that a composer wants an orchestral effect, but a conventional orchestra won't do. It doesn't have the tone-quality needed, the parts to be written for it are much too difficult for even professional musicians, or the music is of such a nature that the composer doesn't want any other people, any interpreters, to stand between him and his audience. So he tries out a computer.
Unless he is a trained programmer himself--and I haven't heard of any such composers so far--he cannot use the machine himself. Programmers, technicians, and others have to take his written musical notation and re-encode it into punched cards, punched paper tape, magnetic tape, or other standard computer code (not designed with music in mind, remember) and put it on the machine, and "debug" it if it doesn't produce the desired results. This is expensive, tedious work, and more important, it re-introduces the very interpreters and "performers" the composer thought he was eliminating. He is worse off than before! Now non-musical persons stand between him and his audience, and so do coding methods and input and output devices not designed to handle musical data efficiently.
The atmosphere of the laboratory or office or business concern where the computer is housed is not apt to be inspirational to a composer. And if he must write the music out first in ordinary notation, he would need unusual self-discipline to refrain from using the piano in his studio while composing. But if he does this, he is not writing 20th-Century music, for the 19th century is built right into the piano. If someone else has to put the music into computer code, he loses the important privilege of making immediate corrections. Moreover, corrections will cost $15 or more apiece, and so they won't be made. Thus he still has the problem of not getting his music played the way he intended.
Maybe some composers can be talked into putting up with such a set of circumstances. But not me!
In my opinion, the solution is to design a smaller machine, so it won't cost as much, it won't take up so much space, and it can be right in the composer's own home and available at 2:30 in the morning if a fit of inspiration strikes him. It will embody only a few computer principles; it will be automatically-played, but far more refined than the old-fashioned player-piano; it will be intermediate in tone and character between the orchestra and the organ; and it must be designed to take full advantage of the capabilities of the latest electronic equipment, rather than being designed by some timid, backward-looking engineer to be an imitation organ, imitation piano, or to reproduce faithfully all the quirks and foibles of the obsolescent, ill-mated, and cantankerous instruments found in our orchestras and bands. Anyone who wishes to continue writing under the onerous restrictions of 18th- and 19th-century musical equipment is welcome to do so, of course; this is supposed to be a free country: but such antiquarians do not have the right to demand that new instruments and aids to composers be built from 1863 or 1763 plans!
If this proposal for a composer's instrument sounds too ambitious, remember that it can be begun with very modest preliminary models. It should be possible to start with a small instrument and keep adding to it as funds and sponsorship and time and facilities permit. In the meantime, the price and the rental of computers and more sophisticated devices will be coming down, and the day may come when something composed on a small instrument can be reprocessed on a larger apparatus if this would improve it. One can plan ahead, and design against obsolescence.
***
Almost everyone in the musical world seems to ignore very carefully the facts that: 1) what a composer writes is largely determined by the instruments he plays and the instruments he is allowed to hear; 2) a composer will avoid writing certain passages, because of the ordinary musical notation, which makes certain things easy to write and others, just as good, difficult or impossible to write.
For example: [music-notation example not reproduced]
It is just as easy to play one of the above chords on a piano or organ as it is to play the other, but the second example is rarely written because it is hard to write and read. (These examples after Percy Scholes, Oxford Companion to Music.)
More important still, music is not played the way it is written!
Musical orthography is unphonetic--not as bad as English spelling, but certainly as bad as French spelling. Inventors have been busy as bees down through the centuries putting out new musical notations, and I'll bet dollars to doughnuts there still is about one new system a month right now.
When player-piano or orchestrion rolls are made from written music, unbelievably drastic changes have to be made in the note-values, extra rests have to be inserted, and octave duplications may also be put in. The solution, then, is not in inventing a new notation, because whatever faults of the ordinary system it may correct, it will introduce just as many of its own, and ignore pressing problems. Even the graphing method doesn't necessarily tell the truth about what is to be heard when the piece is properly performed. The only efficient way is to write down or perforate or punch the instructions to the automatic musical instrument directly on the roll or other medium that the machine will use when it sounds the tones called for.
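The kind of mechanical adjustment a roll-cutter must make can be illustrated in miniature. The sketch below (Python; the event format and the two-tick gap are my own invented conventions, not any roll-maker's standard) shortens a re-struck note so the mechanism has time to release and strike again--one of the "drastic changes" mentioned above.

```python
def to_roll(notes, ticks_per_beat=24, min_gap_ticks=2):
    """Convert (pitch, start_beat, dur_beats) events into punch
    instructions (pitch, on_tick, off_tick). A note that re-strikes
    the same pitch is delayed slightly so the hole does not merge
    with the previous one."""
    punches = []
    for pitch, start, dur in notes:
        on = round(start * ticks_per_beat)
        off = round((start + dur) * ticks_per_beat)
        # insert a rest before a re-struck note at the same pitch
        for p2, _, off2 in punches:
            if p2 == pitch and on == off2:
                on += min_gap_ticks
        punches.append((pitch, on, off))
    return punches

# two successive quarter-notes on the same pitch
roll = to_roll([("C4", 0.0, 1.0), ("C4", 1.0, 1.0)])
```

Written notation shows two identical quarter-notes; the roll must show a hole, a gap, and a second hole--exactly the sort of divergence between what is written and what is played.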
The effort now being put into developing an attachment to computers that will read or write the conventional musical notation is liable to tighten the already oppressive grip that notation has on us.
While it may be conceded that such devices would be handy for re-working, cataloguing, and indexing older music, and for republication of existing music (Hiller & Isaacson's book, Experimental Music, enumerates these applications), there is grave danger that if we build an obsolete procedure into the newest music-making equipment, we shall never get it out.
Don't misunderstand my attitude: I never advocated the invention of new instruments to perform old music, and I never will. Let us keep our 19th-century instruments and orchestras and concert halls for performing the music written in and near that century, because that music was deliberately designed to be so played and heard. Nay, let us go beyond that and support the revival of 18th-century and earlier instruments for performing music written then as its composers wanted it to be heard. But why should today's and tomorrow's composers have to express themselves through means that are quite literally out of tune with our times?
Let me interrupt myself here for a moment and summarize a recent news item:
In the mail the other day I received a photocopy of a clipping from some unidentified magazine, bearing a picture of a man operating a strange-looking machine, vaguely resembling certain apparatus around a printing plant or perhaps a textile establishment. According to the story, headed "Cosmic Music," one Yevgenii Murzin, somewhere in Russia, has devised a composer's instrument that produces sounds when photocells "see through" transparent lines made when the composer scratches an opaque coating off a glass sheet. The instrument is called the ANS, after Scriabin's initials. The last sentence in this article says that these ANS instruments are in commercial production.
...Well, this is not the first time the Russians have claimed they invented something! And, like so many of their other claims to priority, this one is full of holes like a Swiss cheese. It so happens that I have made several patent searches in the electronic music and allied fields. The first one I made was in 1938, and I have brought it up to date several times, and have made a point of keeping up with the latest inventions in music and sound equipment. Our American inventors have taken out a number of patents on instruments similar to the ANS, and anyone curious enough need only visit the public library to discover who had this idea first.
Why, then, do such exciting ideas remain buried in archives until some other nation develops the idea and tries to claim to be first? Is our competition with the USSR to be only in the field of space science and weaponry? Have we no pride in our national culture? True, we train, and train superbly, our pianists and violinists and singers and orchestras and bands; our popular music is a widely-exported article too. But when it comes to composers--the people without whom there could be no music at all--they are quite forgotten.
Our national policy seems to be: worship living performers and dead composers. Prefer our own product in any field to the Russian one, with but a single exception: composers! More remarkable still, no one in the USA press or publicity or radio-TV enterprises dares call attention to this injustice.
As a result, many people are studying for concert careers of the type they could have had a hundred years ago, but are not likely to enjoy at the present time. Nineteenth-century music was mostly conceived in terms of concert halls, and it is well known that as the years from 1800 to 1900 went by, orchestras grew larger and larger. The music grew richer and heavier and lusher in texture. Piano tones started out thin and weak, but grew heavier and fuller as well as louder during the nineteenth century. At the beginning of the twentieth century, it seemed as though this "beefing-up" process would and could go on indefinitely. Such an attitude has become a tradition, and teachers of music cling to it in spite of the actual facts.
Enter the phonograph. In the "talking machine's" early days, Ambrose Bierce could define it this way in the "Devil's Dictionary": "PHONOGRAPH--An irritating little toy that restores life to dead noises." It suffered almost total eclipse in the early days of radio and while talking movies were going strong. But in the last decade, with the advent of better amplifiers, more and better loudspeakers, tape recording, LP discs, stereo, and now transistors, the picture, or rather the sound, has greatly changed. (Although TV has had its part too, in getting people to stay home from the movies.)
When the whole classical repertoire is on discs, and soon will be on stereo tapes; when the modern repertoire is also well represented, and the better-known pieces are available in different versions; when the "hi-fi" set places this at one's fingertips right at home, with tone-quality getting better all the time, why should one journey across town to a concert hall, only to be served up the tritest and most hackneyed portion of the standard repertoire? --Unless, of course, one's interest is not really in the music itself, but rather in the "star" conductor, the virtuoso performer, or the "society" aspects of the concertgoing crowd.
If any music-club presidents or music critics read this, they will no doubt disagree with me, but I really don't have to argue, the point is so terribly obvious. Wishing won't make the facts go away! Ignoring them won't either. And fund-raisers for the local Symphony Society in your city will wish that I and others like me would hold our tongues.
What does this mean for the future of electronic music? Why do I bring up this subject now? Because composers must come to terms with the new situation. They may or may not like it; I didn't pray for it to happen, but I accept the fact that it has. Composers have been writing for big orchestras and large choruses and 9-foot concert grand pianos and 3- or 4-manual pipe organs, all in big halls, auditoriums, or churches, for performances before 500 or 2500 persons.
How do you squeeze all this volume of sound into the average home living-room, or apartment, or the inside of an automobile? And even if the hi-fi-hound buys an 80-watt stereo amplifier and two enclosures each with 15-inch woofers, and maybe an extra "phantom third-channel" speaker, so that he can bring the Symphony Auditorium right into his living-room, he finds it too big for his home--and in an apartment house someone is bound to complain!
The situation is complicated by another generally-ignored fact: 95% of all the music the average person hears during the average day has gone through mazes of electronic equipment:
Background music equipment is installed in offices, stores, factories, waiting rooms--just about everywhere there is no jukebox. Since background music is "to be heard but not listened to," in the phrase coined by one of the biggest distributors, it is not necessarily reproduced in a "high fidelity" manner. The dynamic range is deliberately restricted, and so is the frequency range, so that one hears a subdued murmur.
If Mr. and Mrs. John Q. Public ever actually get to the concert hall or some place where "live" music is played, they will start criticizing it in terms of the only music they really have been exposed to: REproduced music, hi or lo fi. Musicians may complain of the change in public taste all they want, but that will not turn the clock back to pre-phonographic days.
Composers ought to realize that instead of bewailing this state of affairs, they ought to get busy and take advantage of it. A tape recorder will play the composer's music back to him as other people will hear it, rather than letting him keep his illusions about how his music sounds to other persons.
Then, instead of writing for performances in large auditoriums, they should write for the home living-rooms, and the corners and alcoves of apartments, where most of their listeners will hear their music.
If they are concerned about distortions, such as losing the high or low end, or both ends, of the frequency range, they can compensate for this in advance: use the conventional instruments in different ways, instruct the recording engineers to make this compensation, or use electronic musical instruments with this compensation built in, or have such compensation built into or programmed into automatic instruments for composing. The composer doesn't need to be an engineer himself to determine the right amount of compensation: all he has to do is listen to some of the recordings in his own studio. The point I am making here is that nobody need complain about the fact that even the best electronic equipment changes the quality of the sounds passing through it. Simply start with sounds of such a quality that even if they don't sound exactly as one wants them, the equipment through which they pass will change them into the desired quality.
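The arithmetic of such compensation is simply reciprocal: boost each frequency beforehand by the amount the equipment will lose, up to some practical limit. Here is a toy model in Python; the treble roll-off and the boost cap are invented numbers, not measurements of any actual equipment.

```python
def channel_gain(freq_hz, cutoff_hz=3000.0):
    """Toy model of a lossy playback chain: treble rolls off above
    cutoff_hz like a one-pole low-pass. Entirely illustrative."""
    return 1.0 / (1.0 + (freq_hz / cutoff_hz) ** 2) ** 0.5

def compensation_gain(freq_hz, cutoff_hz=3000.0, max_boost=4.0):
    """Boost applied *before* the chain so that boost x loss ~ 1.
    max_boost caps the correction, as any practical equalizer must."""
    return min(max_boost, 1.0 / channel_gain(freq_hz, cutoff_hz))

# net response after compensation, at a few frequencies
net = {f: compensation_gain(f) * channel_gain(f)
       for f in (100.0, 1000.0, 3000.0, 8000.0)}
```

Within the capped region the net response is flat; beyond it the treble still falls off, which is why the composer should also listen to the recordings in his own studio rather than trust the arithmetic alone.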
This, of course, is not the hi-fi perfectionist's approach. The perfectionist, or purist, as he is sometimes called, wants to secure the last ounce of "realism," and money is no object. Often he starts collecting sounds and exaggerated effects without regard for their value as music. Unfortunately, because he has money to spend, he is catered to. He is so intent upon reproducing all the defects and idiosyncrasies of ordinary musical instruments--pipe-organ wheezing and wind-noise, pops and clicks of wind instruments like the oboe and English horn, breathy sounds accompanying flute notes, the "rapping" accompanying high notes on the piano, scratching sounds accompanying the bowing of violins, and maybe even the crowd noises at a concert--as I said, he is so intent on the incidentals and the means of reproduction that he loses sight of the end, which (or so I thought) was the best possible communication of the composer's ideas to the listener. The composer is ignored, forgotten, while the incidental noises of performance are given as much importance as the music they distract from. And it is so easy to overdo this "perfection" business, whereupon it descends to exaggeration and vulgarity.
For example, take the newest wrinkle in hi-fi circles, artificial reverberation, "canned echo." There are cases where one can get some illusion of a large hall in one's living-room by adding some artificial echoing, but when overdone it is just plain ludicrous.
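"Canned echo" in its simplest form is a feedback delay line: a little of the signal is stored, fed back at reduced strength, and mixed in again. A sketch in Python; the delay, feedback, and mix values are arbitrary illustrations, and keeping the last two small is precisely what keeps the effect from becoming ludicrous.

```python
def add_echo(dry, delay_samples, feedback=0.4, mix=0.3):
    """Feedback delay line: each sample is re-injected delay_samples
    later at reduced level, producing a train of decaying echoes."""
    assert 0.0 <= feedback < 1.0  # feedback >= 1 would grow without limit
    buf = [0.0] * delay_samples
    out = []
    for i, x in enumerate(dry):
        echoed = buf[i % delay_samples]
        out.append(x + mix * echoed)
        buf[i % delay_samples] = x + feedback * echoed
    return out

# a single click, echoed every 25 samples
click = [1.0] + [0.0] * 99
wet = add_echo(click, delay_samples=25)
```

Each repetition returns weaker than the last; overdoing it means raising the feedback until the echoes pile up into a blur.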
Another consequence of the attempt to squeeze auditoriums into alcoves is that fortissimo passages are "turned down" and pianissimo passages "turned up"--one runs to the volume control and usually undoes most of the crescendo and diminuendo effects written into the music. A soft tone amplified is not the same thing as a loud tone, and a loud tone smothered is not the same thing as a tone played softly in the first place. When a single player or a small chamber ensemble plays in a big auditorium, and the amplifying system in the hall is turned on, again we have distortion of the composer's intentions.
Electronic musical instruments, and automatic ones as well, can be designed to overcome this difficulty. They are not to be thought of as imitation pianos, imitation organs, imitation horns, imitation violins, and so on; for if the designers and engineers restrict themselves to this idea of imitating, they either display their poverty of creative imagination, or they are trying to destroy what creative imagination they happen to possess. No. We have to start boldly "from scratch" and re-design the entire communication channel from composer's mind to listener's ear as a TOTAL SYSTEM. We have a chance unparalleled in history to remove most of the obstacles between composer and audience.
When the music is performed by orchestras or ensembles, the situation can soon be improved when the new instruments are designed to fit in with the other links in the communication chain between composer and listeners, rather than fighting with them and with each other, as is now the case. Having played in orchestras, in both the wind and string sections, I do not feel that I am making any ivory-tower recommendations at second hand.
Nor am I asking that all new music be automatically performed. As a composer, I have the right to say that there are different kinds of music -- and some of these are best interpreted by performers from a broad, general outline the composer prepares, leaving the details to the individual choice of the musician or musicians who are going to play; but there are other kinds of music where the composer takes full responsibility and should set out a precise, detailed, careful specification of every tone and nuance of expression--in which case an impartial, automatic performance is required.
Automatic or not, computers or none, live or recorded, auditorium or home, individual musician or ensemble...in any of these sets of conditions electronic instruments can add to the expressiveness and effectiveness of the music...but only if the entire communication channel is re-designed as a total system.
There has been recent progress in other fields besides electronics. Other fields that have some bearing on the future of music are: communications engineering, information theory, structural linguistics, logic and mathematics, applied psychology, the study of creativity, data processing--you name it!
When I say, re-design as a total system, I mean that these other fields have progressed so much that if their new discoveries and new approaches are applied to the problem along with electronics, AND if each element in the chain between composer and listener be designed with full awareness of the other links in the chain, so that they work together instead of undoing one another's contributions to the total effect, we can have another Renaissance--for these new approaches are being applied in some of the other arts and sciences. Why not in music also?
You already know what a chaotic situation has resulted from the use of amplifiers, radio channels, microphones, and phonographs to bring music into our living-rooms that was not composed to be heard there, and when the instruments used to play the music were not designed to work with one another, and certainly not designed to work with and through microphones, recording machines, amplifiers, and loudspeakers. How could we expect anything else? Rather one should be grateful the situation isn't worse than it is.
Merely inserting electronic computers into this chain, whether to help out with the composing or the arranging, or the performance, will not necessarily improve matters--because the computer was not designed to be a communications aid between composer and listener. It could make matters worse instead of better.
Designing new musical instruments as mere imitations of the old ones would be just as silly--and some of the instruments now on the market prove this--all you have to do is listen! The most brilliant engineering and gadgetry is of no avail if the true function of the instrument is forgotten. Do I have to repeat what that function is? Good composing is logical. Musical form is logical. Ever since the Ancient Greeks, people have applied arithmetic and some other mathematics to music; but just lately we have a more powerful combination of symbolic logic and mathematics, and when composers study and apply this in their work, modern music will not sound so chaotic as it does today. Arithmetic alone is not enough, and that is why previous music theory may not seem to have borne much fruit. Another reason is that many present-day musical instruments are obliged from mechanical considerations to use only a limited number of notes (the 12-tone tempered scale) out of the much larger number of pitches the average ear can distinguish. This means that most applications of arithmetic to music can never be played in their correct forms on ordinary instruments, and inevitable disappointment results. The theories are blamed for what is not their fault.
Again, a visual representation of music, such as a page of staff-notation, a mathematical style graph, or a page of solfa symbols, can never be anything else but a representation: it cannot possibly be the music itself, for it must always be "seen and not heard."
One can write something that looks new and different, but it doesn't sound different. This causes the average listener to be quite impatient with the older musical theories. The new logical approach to everything, when applied in music, should make the music-as-heard more coherent, more orderly.
We have been living through an Age of Specialization. It got to the point where plumbers weren't supposed to ever hammer nails into boards, and carpenters were supposed to call in an electrician every time they had to have a burnt-out light bulb replaced. Not only were there foot doctors and eye doctors and bone specialists, but these fields were on the point of breaking up into smaller specialties, perhaps toes and fingers and eyelids, so that the poor patient would get treated for each individual ailment, but never as a whole man in one integrated, interrelated body.
Happily the pendulum is now swinging the other way, toward an Era of Generalists--people who try to view the overall situation and see the component parts not merely by themselves, but how they interact with one another and all go to make up the whole. Accordingly, the total systems concept is taking hold. Parts that must work together should be designed together, or at least with frequent consultation and collaboration. If the composer for the new instruments refuses to collaborate with the other people designing these instruments, he will lose a tremendous opportunity to start afresh and express today's ideas through today's communication channels. Since some engineers and inventors already have designed new instruments on the basis of some of the music of the past, neither knowing nor caring what is being composed today, the unfortunate results of this can be heard in all-too-irritating actual examples.
The consequences of grafting the phonograph and reproduction industry onto a music world not designed for any such thing to happen, I have already told you about. High fidelity to what? To whom? A one-way process when communication should be a two-way street. Background music for people and by people with no musical background. How can anyone talk so much about "fidelity" and display so little social responsibility?
Thus I don't have to justify the generalistic approach or the total systems idea--they speak for themselves. More than that: research in one field helps research in another. Companies now are "diversifying"--making new products seemingly unrelated to their original businesses. As those news items pointed out, the data-processors, the computer makers, the communications people, the mathematicians, suddenly are finding musical possibilities as by-products of their non-musical enterprises. Conversely, musical research should have by-product applications in other fields. I have already proved this to my satisfaction, and enough other persons agree with me. Sources outside of the musical world may well provide backing and assistance for musical research, when this point can be proved to them.
What this means to me is that I don't have to justify or excuse myself for being a composer and electronic music consultant. I don't have to apologize for the art of music, or try to prove that it is useful for therapy or that "transfer of training" occurs when people are musically trained. Instead, I can point to the continually growing acceptance of the generalistic attitude. Since music is something human beings do, it must be related to the other human activities; it is part of the overall pattern of human endeavor, and as such just as important as any other activity, and just as relevant to the "full life."
Furthermore, music is a form of audible communication and any and all phases of it have thus become the proper business of that field. Art and science cannot be walled off into separate compartments, even though something of the sort has been attempted for the last 200 years or so. The old bromide about "the great master composers didn't go to the university to get degrees in acoustics or physics or mathematics" misses the point entirely: the masters followed intuitively, on an unconscious level, principles that now can be studied fully consciously. They did so within a social and cultural framework that differs radically from conditions in this country today--read your history book. This difference of our present cultural milieu from theirs is increasing year by year. And the rate of change is itself increasing! The difference between 1950 and 1963 is greater than the difference between 1900 and 1913.
The late Charles Wakefield Cadman, my instructor in composition, used to tell me how much of his time was wasted by people who insisted on writing opera libretti and poems for ballads when it was no longer profitable to set such words to music. They would get angry and argue with him for not being eager to work on their poems right away, and turn a deaf ear when he patiently tried to tell them that the musical world had changed from what it was in 1870 or 1890. He didn't ask for this change, but accepted it when it came. That was 25 or 30 years ago. How much more things have changed since then!
Not long ago I was told that they have a saying in the aerospace industry: "If it already is flying, it's obsolete!" That is, progress has become so unbelievably rapid that they think of, and are able to start building, something new before the first product is through being tested. Many other fields are now progressing almost that fast. It could be music's turn next.
At any rate, we cannot afford to build obsolete concepts into any of the new instruments, and we cannot go on teaching music as though the twentieth century had never come...Not in a world where today's hard realities are so fantastic that science-fiction is threatened with obsolescence.
The new ideas I have been speaking of have given rise to equally new words, and to uses of the old words in extremely unfamiliar ways. I apologize here and now if I have let too many such new words and usages slip through my typewriter without explaining them. Yet that very semantic problem should show you how really new this combination of circumstances is--who ever thought, just a few years ago, that melody and harmony could come out of adding machines, or that iron rust (the active coating on magnetic recording tape) could preserve sounds more faithfully than a shellac disc?
Even though they are coming to realize they must work together to produce a total music communication system all the way from composer to listener, musicians, sound technicians, electronic engineers, acousticians, and physicists talk mutually unintelligible "dialects." "Modulation," "key," and many other words do not mean the same to these persons, and the hi-fi enthusiasts have confused the semantic picture still further by mixing several kinds of jargon together, while creating some terms of their own. In addressing this bulletin to all these kinds of people at once, I risk obscurity--but I am in a terrific hurry to get some ideas over to you while they are still news, and thus haven't any time to lay this aside and compile a two- or maybe three-way dictionary. Let's see: what would I call it? Engineerian to musicianese and musicianese to engineerian? Even if I had written the dictionary ten years ago when I first had the idea, it would be hopelessly outdated by now. Still, if I find a real go-getter collaborator and the assurance of the book's quick publication, I would consider collaborating on it.
Meanwhile, I mean by "electronic music consultant" that I am available, as of now, to facilitate communication between musicians and electronic experts, when or if they are willing to compensate me adequately. I would like to see to it that the wrong instruments were not invented for the wrong reasons, and that composers did not fail to take advantage of all the new resources now available to them. Since the "breakthrough" for electronic music has now begun, as evidenced by the news items I have included here, there will be need for persons like me, who have tried to familiarize themselves with both the electronic and musical fields.
This is why I have gotten out this bulletin in a hurry, rather than taking any time to polish it. It is this long because, like the busy preacher delivering the long sermon, I did not have the time to make it shorter. If it achieves its object, I will be so busy that I won't have time to write another bulletin six months from now, nor to review this one.
Also, I haven't the time or energy to write individual letters to all the people who should be contacted soon--certainly not the time to include in each letter only what that particular person might be interested in. Therefore I decided to put down here everything at all pertinent that wasn't well covered in my earlier articles and leaflets.
The semantic difficulties I just mentioned extend to the definition of the term "electronic music" itself--which should be no surprise. To be fair and impartial, I should put in a word about musique concrete and the various tape-splicing and re-recording methods.
So long as recording was limited to inflexible, unalterable, and non-reusable discs, and to the very expensive sound-on-film method used in motion picture studios, only the very wealthy could experiment with re-recording or "dubbing" of sounds. With the tape recorder, and the vast improvement in tape recorders during the last few years, re-recording and splicing techniques may well turn out to be the most practical approach to electronic music for those of us who are not well endowed financially.
The recent little book, Electronic Music and Musique Concrete, by F.C. Judd, A. Inst. E. (London, England), is a very apt introduction to this field, for anyone who might like to try it.
The most obvious use of tape recorders is to play back the recording at a different speed, putting everything up or down one or more octaves in pitch. This takes the finger-dexterity obstacle away, so that almost anybody can become a superspeedy virtuoso in a delightfully make-believe sort of way.
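The speed-change trick above has a simple arithmetical core: doubling the playback speed doubles every frequency, raising the whole recording exactly one octave. A minimal sketch, treating a recording as a list of equally spaced samples (the function names and the 8000-per-second rate are my own illustration, not anything from this bulletin):

```python
import math

def sine_wave(freq_hz, seconds, rate=8000):
    """Generate a pure tone as a list of samples."""
    n = int(seconds * rate)
    return [math.sin(2 * math.pi * freq_hz * t / rate) for t in range(n)]

def play_at_double_speed(samples):
    """Skipping every other sample halves the duration and doubles
    every frequency when replayed at the original rate: one octave up."""
    return samples[::2]

tone = sine_wave(440, 1.0)            # the A at 440 cycles per second
sped_up = play_at_double_speed(tone)  # sounds at 880: the A an octave higher
```

Playing at half speed (repeating each sample) would likewise drop everything one octave.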
The next most obvious technique is to put "sound on sound," adding a new part each time the previous parts are re-recorded. Popular music fans have been familiar with this technique applied to conventional instruments and voices, for some time. It can also be applied to electronically-produced tones and thus enable a person with only one or two electronic instruments to find out how a whole orchestra of them would sound. (Needless to add, I have made a few such tapes.)
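In sampled form, the "sound on sound" layering just described is nothing more than adding the parts together, element by element. A sketch under that assumption (the function name and the integer sample values are mine):

```python
def mix(*parts):
    """Sum several sample lists into one recording, as each re-recording
    pass layers a new part over what is already on the tape."""
    length = min(len(p) for p in parts)
    return [sum(p[i] for p in parts) for i in range(length)]

# Toy "parts" as short lists of integer sample heights:
melody  = [500, 300, -200, 100]
harmony = [100, -100, 200, 0]
layered = mix(melody, harmony)   # -> [600, 200, 0, 100]
```

Each additional pass simply adds another list into the sum, which is why one player with one instrument can build up a whole "orchestra."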
Sounds can be reversed, a piano tone played backward becoming quite weird in effect. The initial hammer-blow can be cut out of piano tone, or a hammer-blow spliced onto a violin tone...many other fantastic possibilities abound.
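In sampled form, the reversing and splicing operations just mentioned are almost embarrassingly simple; a sketch (the function names and the cut point are my own illustration):

```python
def play_backward(samples):
    """A tone played in reverse: the attack becomes the release."""
    return list(reversed(samples))

def splice(attack_sound, body_sound, cut):
    """Graft the first `cut` samples of one sound onto the remainder of
    another, as in splicing a hammer-blow onto a violin tone."""
    return attack_sound[:cut] + body_sound[cut:]

piano  = [900] * 200   # stand-in for a piano tone (hammer attack first)
violin = [100] * 200   # stand-in for a sustained violin tone
hybrid = splice(piano, violin, cut=50)  # piano attack, violin body
```

The real work with tape is of course physical cutting and joining; the point is only that the result is a simple rearrangement of the recorded samples.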
Electrical devices called "wave filters" or simply "filters" can remove the lower frequencies (high-pass), remove the high frequencies (low-pass), remove the middle frequencies (band-reject) or remove both low and high frequencies so that only a narrow or medium slice of the tonal spectrum remains (band-pass). These filters can radically change tone-qualities, and when band-pass filters are applied to noises, even make tones or almost-tones out of such noises. (By the way, filters are the means used in most electronic organs to derive most of their different tone-qualities, especially to add a few drops or spoonfuls of oboe flavoring or horn flavoring or essence of clarinet to the tones.)
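The filtering principle can be sketched in sampled form. A first-order low-pass simply smooths the signal, blending each input sample with the running output (the electrical analogue is a resistor and capacitor); subtracting its output from the input leaves exactly the highs it removed, giving a matching high-pass, and cascading the two gives a band-pass. This is a sketch of the principle only, with my own names and coefficient:

```python
def low_pass(samples, alpha=0.1):
    """Smooth the signal: rapid wiggles (high frequencies) average away,
    slow changes (low frequencies) pass through."""
    out, prev = [], 0.0
    for x in samples:
        prev = prev + alpha * (x - prev)
        out.append(prev)
    return out

def high_pass(samples, alpha=0.1):
    """What the low-pass removed is precisely the high-frequency part."""
    lows = low_pass(samples, alpha)
    return [x - l for x, l in zip(samples, lows)]
```

A steady (zero-frequency) input passes through the low-pass nearly unchanged and is nearly cancelled by the high-pass, which is the defining behavior of each.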
Vibrato and tremolo can be added to tones that did not have them. Tones can be caused to combine and deeply influence each other by a device called a "ring modulator" (electronic term, having nothing to do with the musician's "circle of keys"). These alterations are inserted while recording or re-recording, and produce many sounds impossible of attainment on ordinary instruments; and in effect the tape recorders and accessories become instruments themselves. A person approaching these techniques with a really open mind, unprejudiced by conventional musical training, might well escape the cliches and mannerisms that stifle so many composers and arrangers (me, too!).
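The ring modulator's deep mutual influence comes from multiplying the two signals sample by sample: by the trigonometric identity sin A · sin B = [cos(A−B) − cos(A+B)]/2, two sine tones at frequencies f1 and f2 yield only the sum (f1+f2) and difference (f1−f2) frequencies, so neither original pitch survives. A sketch of that arithmetic (names and frequencies are my own illustration):

```python
import math

def ring_modulate(a, b):
    """Multiply two sampled signals point by point."""
    return [x * y for x, y in zip(a, b)]

rate = 8000
carrier = [math.sin(2 * math.pi * 440 * i / rate) for i in range(rate)]
program = [math.sin(2 * math.pi * 100 * i / rate) for i in range(rate)]
out = ring_modulate(carrier, program)
# `out` contains only 340-cycle and 540-cycle components:
# the difference (440 - 100) and the sum (440 + 100).
```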
By that I don't mean that musical training is a disadvantage, but that ordinary musical training is retrospective, and implants so many prejudices that one becomes conditioned to avoid the perfectly valid and fruitful possibilities of using electronic equipment. If you have been trained in the conventional manner, you must make intense and continued effort to use the new electronic techniques with a fresh viewpoint and an uncluttered mind. Otherwise there is no purpose in using them at all: why imitate, when the original things (the conventional instruments) are so ubiquitously available? In musicianese: a cello can produce tones surprisingly like those of the clarinet, through a certain trick bowing. But what orchestrator would ask for this trick, when real clarinets are almost always at hand? In engineerian: an oscilloscope will measure voltage, but this job is usually done much more conveniently, accurately, and inexpensively with an ordinary voltmeter. Oscilloscopes are valued for what they can do that a voltmeter cannot do. A cello is used because of all the sounds it can make that a clarinet cannot make. Likewise, the re-recording and tone-modifying techniques are valued for their unique abilities, not their abilities held in common with older techniques.
Some of the manipulations, as switching tone-modifying circuits in and out, splicing tape, starting and stopping recording machines, etc., are so strange and unfamiliar to musicians that it does require an effort of will to admit that these are musical instrument playing techniques just as much as bowing a violin or plucking harp strings or blowing a wind instrument. This again is something I would subsume under the "total system" idea--if the composer is going to express himself properly to his listeners (who will be removed in time and space) he must learn these new techniques himself, not entrust them to someone else trained only in recording and copying processes. This, of course, does not prevent a recording expert from studying how to compose music and using electronic equipment to produce the music, even though he does not care to learn the old-fashioned techniques of playing on conventional musical instruments.
I only hope that people who want to become composers this way, or computer programmers who want to learn how to compose on their machines, or those who want to take up direct sound-track writing without first learning an ordinary instrument, will be able to get the proper musical instruction without also getting hogtied by the rules and regulations and restrictions that belong to old-fashioned instruments and orchestras, rather than to the art of musical composition itself.
A clear distinction will have to be made between musical composition proper and the customs, habits, notations, tuning-systems, and instrument-construction methods which have, up until now, been interposed between the composer and the listener. One can be taught without the other; this is proved, on the converse side, by the existence of millions of musicians who are non-composers.
We now come to another approach to organized sound, one that some musicians are reluctant to call "music" at all. The practitioners of musique concrete use this term for it, and the label has become generally accepted. "Concrete" as opposed to "abstract" means that the raw material of musique concrete consists mainly of sounds picked up from one's contemporary living environment by a microphone. Preferably, these sounds are recorded without anyone else knowing a recording is being made--"candid camera" fashion, so to speak. Auto horns, babies crying, people talking, fire engines and sirens, locomotives, streetcars, birds singing, dogs barking, typewriters, cement-mixers--anything and everything that makes a noise, can be used for musique concrete. Sometimes phonograph records or radio programs are added to the source-material, making this a sort of second-hand music.
The sounds collected from the environment are subjected to electronic manipulations, to re-recording, splicing, speeding up or slowing down (sometimes this is done to put them more or less in tune with ordinary musical scales), and usually the sounds are well mixed together. Often the manipulations are so extensive that the original sounds cannot be recognized.
A good example of this technique was the "Symphony of the Birds" issued as a long-playing record a few years ago, where canaries were converted into bass instruments and crows speeded up so they became quite acceptable treble melodists.
Musique concrete has been frequently compared with photo-montage. It could be considered a logical development of theatre and movie sound-effects. It should not be too difficult a field to enter, though I would think anyone going in for it seriously would have to be patient and painstaking.
A favorite argument for musique concrete is that by taking "real sounds" from the environment, it is more truly related to the real world than musique abstraite (the ordinary kind) can be.
In answer to that, one might contend that today's environment is highly artificial, and too full of man-made noises for the average person to hear any of the sounds of nature through such a din. Also the average person on an average day hears so many artificial, abstract sounds (dial tone, busy signal, door-chimes, buzzers, whistles, gongs, sirens, various bells, alternating-current hum) with arbitrary meanings or none at all, that the distinction between "natural" and "artificial" sounds, and the distinction between "concrete" and "abstract" sounds, no longer have much validity.
However, musique concrete is foreshadowed in the conventional orchestra by thunder-effects, bird-calls, and such pieces as the Anvil Chorus, while the popular dance-band contains a whole battery of sound-imitators and a few items that make their own "natural" sounds. Would you call me uncharitable if I reminded you of the Futurist noise-making machines presented in Italy some 50 years ago by Russolo and Marinetti?
Musique concrete does teach us one important lesson: there is no hard-and-fast boundary line between tone and noise--between unpitched sound and sound with definite pitch. Many of the sounds of human speech contain equal portions of noise and tone.
One of the fascinating possibilities of electronic music is that we now can control this proportion of tone to noise, rather than uncritically accept the scratch of the violin bow, the hammer-blow of the piano, the wheezing of reed-organ and harmonica, the breath-sounds and key-clicks of the saxophone or clarinet. Reducing the noise element should make loud music less fatiguing.
Somebody (my guess is that it was a scientist with a twisted sense of humor) tried the experiment of "muting" a piano with thick layers of felt and rubber, so that the strings could not possibly produce any tones, and all one got from playing the instrument was various kinds of percussive noises. Even so, when well-known selections were rendered on this noisical instrument, they were readily recognized by the audience! So when I wax enthusiastic about the possibilities of being able to tame, systematize, and fully control noise as well as tone, I am not talking through my hat.
I mentioned this piano experiment because there has been considerable interest of late in "prepared pianos," where all manner of muting, damping, and weirdly assorted hardware is stuffed into a piano to make its tone become semi-noise. Does not this trend indicate that some people at least are growing tired of the monotony of piano tone, to the point where they will do almost anything to change it? We don't need any better argument than that to prove that it's time for electronic music.
Back there a few paragraphs, I mentioned the motion-picture technique of sound-on-film. As soon as this was put into general use, inventors became intrigued with the possibilities of photo-electric musical instruments and direct sound-track writing. The advantage of the film sound-track is that it is visible, and can be accurately edited and spliced, and all the resources of optical technology can be applied to modify the recorded sounds. This in addition to the electronic techniques we already told you about under the heading of tape recorders. Some of these possibilities have been exploited in the movies, but as the picture in this case is more important than the sound, the movie-makers naturally did not push the matter. Photoelectric organs have been attempted, but the enormous mechanical problems have kept them from the organ market, although I understand new makes and models are being introduced.
The sound-on-film process opens up further possibilities: designs may be drawn and photographed, enlarging, reducing, superimposing, and repeating them, and the resulting strip of film can be played as though it were a sound recording, thus generating new sounds. This, of course, is tedious, but the process could be semi-automated, perhaps by a typewriter-like machine, and if so, it would be practical for composers to use. The disadvantage of the photo-electric methods is that one must wait for the film to be developed. While magnetic sound-tracks are invisible, they can be played back immediately, which is most important when a composer is correcting his work as he goes along. Again, it is conceivable that tape sound-tracks could be produced without the usual recording of sounds, by some kind of machine which would magnetize the tape, and it is not hard to imagine some sort of oscilloscope-like device that would make the sound-tracks visible. There is a liquid that can be poured onto tape recordings to make them visible, but it is not too convenient for a composer to use.
Processes like these appeal to anyone with drafting and mechanical skills, so we may well expect someone to invent an improved machine for composers on this principle.
The success of animated cartoons over the years implied that similar techniques applied to sound and music should be equally practicable. The only questions I would raise are: is it too tedious and time-consuming? And is it possible to get as good results at less cost by some other method?
When much repetitive work is involved (as will be the case with direct sound-tracks unless somebody invents a real humdinger of a new machine), the computer people will be only too glad to tell you that this is another good application for computers.
Here we are, back on the subject of computers again! And you have to admit, there isn't much point in relieving the composer or arranger of one kind of drudgery (copying out parts, writing the full score, continually asking himself if this or that note or passage will be effective on this or that instrument), if we immediately saddle him with another kind of tedium and monotonous routine (drawing and/or photographing sound-tracks, assembling photographic tracks or pieces of magnetic tape into an intricate mosaic, splicing, operating or directing the operation of elaborate optical and electrical equipment). No money has been saved; no time has been gained. When a composer is inspired, the slightest delay becomes irksome, and good ideas will be lost forever if not immediately set down.
This last sentence is what prosaic scientists and practical, hard-headed businessmen and technicians will ignore. More unfortunate still, many musical conservatories and colleges with music departments forget all about inspiration, and indeed they deliberately train all the enthusiasm, inspiration, and originality out of their composition and arranging students. One much-publicized system of composition actually proclaims that no inspiration, talent, or intuition is necessary to compose by this system. If anyone sincerely believes this, why doesn't he let a computer do all his composing for him?
The use of computers for "composing" music is fully described in Hiller & Isaacson's book, Experimental Music, available in most public libraries or at your nearest bookstore.
***
Random numbers (numbers determined only by pure chance) are generated by the computer. An encoded version of certain rules and rudiments of music has been stored in the computer beforehand. The process is similar to a card game, such as whist, seven-up, five hundred, or hearts. The cards are shuffled, cut, and dealt; and the players are obliged to follow suit, and one suit is trumps. Similarly, the random numbers generated by the computer may not "become" notes in the finished composition unless they obey the music rules stored in the computer's "memory." Since these rules were deduced by human beings from compositions previously written by other human beings, and other persons decide when the computer shall be turned on and turned off, and which rules are to be stored in the machine, I am forced to put the expression "composed by computer" in quotation marks. It would be more correct to call this composition of the third order.
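The card-game analogy above can be sketched in miniature: random candidates are generated freely, but a candidate only "becomes" a note if it obeys the stored rules. The two toy rules below (stay in the C-major scale; move by no more than a fourth) are my own illustration, not Hiller & Isaacson's actual rule set:

```python
import random

C_MAJOR = {0, 2, 4, 5, 7, 9, 11}   # pitch classes of the C-major scale

def obeys_rules(melody, candidate):
    """The stored 'rudiments of music' that a random number must pass."""
    if candidate % 12 not in C_MAJOR:               # rule 1: diatonic only
        return False
    if melody and abs(candidate - melody[-1]) > 5:  # rule 2: no wide leaps
        return False
    return True

def compose(length, seed=None):
    """Generate random notes; keep only those the rules admit."""
    rng = random.Random(seed)
    melody = []
    while len(melody) < length:
        candidate = rng.randrange(60, 73)   # one octave, middle C upward
        if obeys_rules(melody, candidate):
            melody.append(candidate)
    return melody
```

Since the rules, the range, and the stopping point are all chosen by people, the scare-quotes around "composed by computer" seem fully earned.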
In the compositions discussed in Hiller & Isaacson's book, the first order would be represented by hundreds of composers who lived 100 to 400 years ago. The second order would be represented by the music-textbook writers and teachers (e.g., Prout, Goetschius, et al.), while the third order would be represented by the computer, the people who built it and designed it, and the other people who program it, maintain it, and operate it.
Let's give Hiller and Isaacson the credit, though, for saying that "it would be unethical to use computers to turn out 'standard average Beethoven' on a mass-production basis"--this is a refreshingly different attitude from that of the experts who want to permute and simulate Mozart (see page 2). My own opinion must, alas, be a bit cynical: I am afraid that it will be done, despite any and all protests about how unethical it is.
Indeed, after I had begun writing this bulletin, and too late to insert it in its proper place, a society gathering for a music club was held somewhere around Los Angeles, and examples of simulated Mozart were sandwiched between examples of authentic Mozart, and the audience was dared to tell the difference. Raised eyebrows, anyone?
There is a danger that this sort of simulation-by-computer may lead many people to believe that the "rules of music" are forever determined and immutable, like the laws of physics. (Actually, even the physicists are the first to admit that they do not know all the laws, and that they may not have them set down and interpreted fully and correctly.)
While the laws and principles of acoustics lie behind musical practice and instrument-construction principles, and will determine the construction and design of new electronic music devices, the so-called "rules of music" are not thus determined by the structure of the physical universe. Instead, they are deduced from what composers have done, and they do change with time, even more than the rules of games. Game rules rest, in the final analysis, upon agreement among the players. If whist can evolve through auction bridge to contract bridge, surely music can change its rules just as easily.
And just as scores of different games (and more scores of kinds of solitaire) can be played with the same deck of cards, while there are other games requiring special decks, the "rules of music" need not be uniform from one composer to another or even from one composition to another. Since music is not played the way it is written, even by the most conservative and conscientious performers, these rules are more honored in the breach than in the observance, anyway.
In their book, Hiller & Isaacson describe how they programmed a computer at the University of Illinois to change the rules as it went along, so that one composition (a string quartet) exemplified everything from complete anarchy to strict counterpoint.
In another part of the book, they predict that composers will become interested in second- and higher-order composition as they define these terms: with the exhaustion of the elemental combinations of notes, the composer will rearrange entire measures, phrases, harmonic schemes, or passages; he will even take snippets from here and there in existing music and recombine them into a new overall pattern (something like musique concrete, in a way). Sorry, but I am skeptical about this. There are ways of expanding the basic elemental resources: new scales, new tuning systems, new tone-qualities, etc.
If being an expert at knowing and applying all the rules were what really counted, then Ebenezer Prout, Karl Czerny, M. Hanon and their ilk would be greater composers than Beethoven, Schubert and Schumann. Czerny symphonies and Prout fugues and such would crowd Bach, Brahms, Debussy, Wagner and Tchaikovski off the concert programs. Since nothing of the kind has happened, it is almost superfluous for me to reassure other contemporary composers that they need not fear the coming avalanche of computer-composed music.
The only composers who need fear computers are the "competent mediocrities"--those who have the skill, technique, and the training, but not the "fire" of inspiration. Must I shed crocodile tears for them?
Since I am not a "popular songwriter" and have no intention of composing for dance-halls, night-clubs, and cocktail bars, and I feel no particular affinity with the movements, either high-society, middlebrow, or beatnik, which seek to make jazz serious and/or respectable, I cannot speak for such enthusiasts. Will the electronic computer affect them? Probably so, because there will be more experiments in generating popular tunes on computers, like the Klein "datatron" undertaking referred to in my 1957 leaflet. It is even conceivable that someone will marry the computer to the jukebox and come up with a weird and noisy hybrid. That is, a machine that would play each record a little differently each time it was repeated.
Already there are robot accompanists and rhythm devices, but I don't have sufficient data on them to more than mention them now. From the description in the ads, this could be a big factor in the popular-music field.
A while back, I was discussing sound-tracks, film and magnetic, and the possibility of composing and arranging machines using optical or magnetic sound-track assembly methods. Would computers figure here? (Please excuse pun.)
There is a class of computers called analog computers, which deal with continuously-varying phenomena such as the wiggly curves representing sound-waves. So far as I know, not much musical use has been made of analog computers, although many electronic music inventions would definitely suggest that they be used for this purpose.
The trend in the last few years seems to be toward digital computers, even for analog work. That is, we can take samples of a wiggly curve, such as the waveform presented by an oscilloscope when a violin tone is fed into it, measure arbitrary points close together along the curve, and express the height of these points as numbers. This is all done automatically by a device called an analog-to-digital converter. (Non-technical persons will please realize that I can't make this any easier to understand without garbling what I have to say.)
If you had graphs in algebra class at school, you have one way of looking at this idea. If you have examined mosaic work and embroidery of the "sampler" type, you have another good analogy. Or a musician might attempt to represent the howling of the wind by ascending and descending chromatic scales on the piano, even though the piano cannot slide siren-like up and down in pitch, but must proceed by units of semitones and ignore the intermediate shadings of pitch. Hoping you get the idea by now, I go on: a digital computer (the kind mentioned in the news stories at the beginning of this bulletin) can represent wiggly curves, and thus any musical instrument waveform or even a combination of such waveforms, such as a phonograph record groove or film soundtrack or tape recording of anything from a single instrument or voice to a full orchestra--the digital computer with enough accessories and attachments can represent sounds as strings of numbers, and operate on these numbers just as well as it can with bank balances or telephone bills or government statistics. The numbers can be converted into sounds by a formidable apparatus called a digital-to-analog converter. Theoretically, you could listen to a table of logarithms or the cash-register tape from your neighborhood supermarket, or a page from the Census records. I don't know as you'd want to, but it certainly couldn't be any more bizarre than what the modern artists are painting and sculpting these days. Here is another kind of musique concrete for the taking!
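The analog-to-digital idea above amounts to two steps: measure the curve at evenly spaced instants, then round each measured height to the nearest of a fixed set of steps. A sketch under those assumptions (the function names are mine, and the choice of 256 steps is an arbitrary illustration, not a figure from this bulletin):

```python
import math

def sample_and_quantize(curve, n_samples, levels=256):
    """curve: a function of time on [0, 1) returning heights in [-1, 1].
    Returns the waveform as a string of integers, as an
    analog-to-digital converter would."""
    digits = []
    for k in range(n_samples):
        height = curve(k / n_samples)                   # sample the curve
        step = round((height + 1) / 2 * (levels - 1))   # round to a step
        digits.append(step)
    return digits

# One cycle of a sine wave becomes a short string of numbers:
numbers = sample_and_quantize(lambda t: math.sin(2 * math.pi * t), 8)
```

The digital-to-analog direction simply maps each integer back to a height, recovering the curve to within half a quantizing step.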
Thus it is possible for computers to do many things with musical waveforms that would not even be conceivable with the original sound-tracks. Not all these manipulations will make sense, of course, but some of them will--and add valuable new items to the musical vocabulary. Waveforms can also be analyzed and synthesized. They can be correlated with each other and with anything else you care to put into the computer. How much this is going to cost, how long it will take to get worthwhile results, and how practical it will be to build smaller machines musicians can use themselves in their own work, only time will tell.
What I am trying to make clear here is that there is work enough to keep many, many people busy for a long time to come. There is no end in sight--or hearing.
How, you may be wondering, is the person with ordinary musical training going to take advantage of all these potentialities? Perhaps by a new wrinkle in computer technology called "pseudo-codes." I am not yet an authority on this subject, having read only a few magazine articles about it, and that in a hurry. But from what I have learned about it, it is possible to train someone in a relatively short time to write strings of English words and numerals, and have some sort of automatic machine take these fairly-understandable words and convert them into coded form, which the computer can use. Theoretically, then, the composer should be able to use computers without having to become a mathematician. I need more data, and have written letters asking for it, before I can form a definite opinion on the matter, but I wanted to let you in on the latest dope.
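The "pseudo-code" idea might look something like the following sketch (the command words and numeric codes are invented here for illustration; no actual system of the period is being quoted):

```python
# Fairly-understandable words and numerals, translated automatically
# into the coded form a machine could use. OPCODES is invented.
OPCODES = {"PLAY": 1, "REST": 2, "STOP": 9}

def assemble(lines):
    """Turn each line of pseudo-code into a tuple of numeric codes."""
    program = []
    for line in lines:
        word, *args = line.split()
        program.append((OPCODES[word], *map(int, args)))
    return program

# "PLAY <frequency> <milliseconds>" -- the composer writes words,
# the machine receives numbers:
source = ["PLAY 440 500", "REST 250", "PLAY 495 500", "STOP"]
print(assemble(source))  # [(1, 440, 500), (2, 250), (1, 495, 500), (9,)]
```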
Undoubtedly there are still other possibilities I haven't heard about yet. "Expect the unexpected," as one of my acquaintances is fond of saying. It makes me somwhat uneasy to realize how soon this bulletin is going to get out of date. I can only hope it will still be news by the time you read it through.
We have been talking about machines and electronic devices and instruments musical and otherwise, and about mechanical and other techniques.
Now it's time to say a little about certain engineering and scientific concepts that have musical implications. At least I believe they do; the promoters of these ideas were not musicians.
One of these concepts is information, as treated by the new study called "information theory." I have a pamphlet on this subject, shorn of its mathematical armor-plate, so will not duplicate here what I said there. But let's see what information theory and the mathematical theory of communication have to do with music.
Information theory defines information (I paraphrase) as that part of the message as received that one cannot guess correctly in advance--the unpredictable part of the message, in other words. For example, if the concert program or the record label says "Bach," we already know that we are going to hear some contrapuntal music, not a tin-pan-alley solo-and-accompaniment sort of thing with block chords thrummed on a guitar sotto voce.
If the program tells us that a symphony is going to be played, we already know that it will be a sonata set for orchestra, and that certain themes we hear are going to be repeated in various easily-penetrated disguises. If we have heard four or five compositions by Debussy, we will likely say to ourselves, "that must be Debussy" when another of his compositions, just a few measures of it, has been played to us.
Music is highly redundant, in information-theory terminology. It is customary to repeat musical themes (motives) often during a composition, so that only a limited amount of new information is presented in any one piece. Now, is this necessarily so? Might a composer decide not to be redundant at all, or to keep the percentage of repeated material down below some arbitrary figure? In the case of music performed by a soloist, how much information does the soloist add, that the composer didn't put in the piece? Obviously, if a new soloist interprets an old, hackneyed composition, all the information comes from the soloist, because we have heard the composer's message many times before.
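Redundancy can actually be measured. Here is a sketch using Shannon's entropy formula (the note-sequences are made up for illustration):

```python
import math
from collections import Counter

def entropy_per_note(sequence):
    """Shannon entropy in bits per symbol: the average unpredictability
    of each note, judged from the overall note frequencies alone."""
    counts = Counter(sequence)
    total = len(sequence)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

repetitive = "CCGGAAG" * 8       # one little theme, repeated over and over
varied = "CDEFGABcdefgab" * 4    # many different tones, evenly distributed

print(entropy_per_note(repetitive))  # low: little new information per note
print(entropy_per_note(varied))      # higher: more information per note
```

A piece that repeated one note forever would carry zero bits per note; the more evenly a composer spreads the material, the higher the figure climbs.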
Another pair of words used in information theory is signal and noise. Signal is the wanted message; noise is anything at all that is in the message-as-received that wasn't in the message-as-sent. An out-of-tune piano or a musician who didn't keep time would both introduce noise, in this sense, even though neither would be noise in the common sense of the word. Even something negative, the absence of accent in a player-piano performance, for instance, would be noise in the information theory sense.
Another idea is that of coding. Language, spoken or written, is a code for one's thoughts, which the listener or reader must decode, and he can't if he never learned the language used. Now is music a code in this sense? Perhaps; we say a new piece "makes sense" or that [it] is "incoherent." If so, wouldn't it profit us a great deal to take a fresh viewpoint, a brand-new attitude, and investigate and study music in these new terms: signal/noise, information, redundancy, code (and others I haven't space to mention here)?
If we are to write something better than "background music;" if we are to demand each listener's attention--then, it seems to me, information theory and the idea of communication have much to offer us. Also for listeners: in music-appreciation classes, for example.
Then in communication engineering there is a theory of feedback. (I have almost enough notes and lecture transcriptions from a lecture I delivered in 1955 to write a book about feedback.) Feedback has always been relevant to music, even if it wasn't called by that name till recently. For instance: a singer hears his or her own voice, and corrects each tone while it is being sung. The sounds produced by the vocal cords are fed back by way of the ears and the auditory nerves to the brain, which then compares the tone-now-being-sung with the "standard"-tone-as-remembered (what the tone "ought to be"), and then sends signals down to the larynx, diaphragm, mouth, etc. to alter the tone-now-being-sung till it coincides with the remembered image. I use robot or mechanistic language here because this feedback process becomes subconscious, automatic.
A violinist does his automatic tuning in a very similar manner, even though it has to be consciously learned at first. In an orchestra, where many players and the conductor are involved, the process is more complex, and includes visual cues, but it is still a feedback network operating.
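The singer's loop can be modeled in a few lines (the gain and the number of correction cycles are invented figures; the point is only the compare-and-correct cycle):

```python
def sing_with_feedback(target, start, gain=0.5, steps=8):
    """Toy model of the feedback loop: the ear reports the discrepancy
    between the tone-now-being-sung and the remembered standard, and a
    correction signal nudges the larynx toward the target."""
    pitch = start
    history = [pitch]
    for _ in range(steps):
        error = target - pitch   # what the ear hears as "off"
        pitch += gain * error    # partial correction each cycle
        history.append(pitch)
    return history

trace = sing_with_feedback(target=440.0, start=430.0)
print(trace)  # each value closer to 440 than the last
```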
Applause is feedback.
Theoretically, there should be a closed feedback loop including the composer, the listener, and everybody else and everything else interposed between them. That is, the composer should be able to learn, from audience reaction, whether he has or has not communicated anything to the audience, and alter his future compositions accordingly. (In my lecture, I used a diagram illustrating this. I will include this diagram with at least some copies of this bulletin now, and will furnish a copy on request to those whom I have overlooked.) [Note from Monzo: Unfortunately this diagram is not available.]
In our present chaotic, overconservative, backward-looking, not-engineered, but rather trial-and-error situation in the musical worlds, feedback from listener to composer is discouraged, if not impossible--because the composer happens to have been dead for a century or more. I don't have to argue for improvement. Stating the facts is enough.
This is why I said so much about the total systems idea earlier in this bulletin. Let's give the listener a voice by providing feedback, and let's make the composers feel that their labors have not been in vain by seeing to it that they receive feedback.
Feedback, whether inside human bodies or in automatic machinery or in electronic circuitry or in the social situation, may be defined as built-in self-control. I am tempted to go on, but had better reserve further comments for a future book.
A rapidly developing scientific field today is structural linguistics. Until fairly recently, all languages have been studied as though they were thoroughly dead. Whether Latin, Sanskrit, Modern French, or Modern English, the grammarians treated languages in a dryasdust, boring way, as though they were inanimate fossils in some musty museum. Do you recognize the way the musical pedagogues have treated music? I hope so.
Some scientists today have decided to treat languages as living, as human affairs spoken by real people, as something changing and growing right now. This new attitude is revolutionizing language teaching, disclosing many hidden facts about English, French, Spanish, and so on, and is leading to the invention of machines for translating from one language into another, and even to "language engineering"--deliberately creating new artificial languages or supplements to languages for various purposes. (All this is interesting to me, and I may become involved in such research.)
If this approach is taken to the study of music, it will bear fruit, I am sure. There is a correlation between the structure of languages and the structure of music. Both address themselves to the human ear, so there ought to be a correlation. If there were no relation between the two, song would be difficult or impossible; but as it is, setting words to music seems a very natural and suitable thing to do.
Just like single speech-sounds, which have no particular meaning by themselves, musical tones do not gain real meaning until they are organized into higher units, such as rhythmic figures, motives, themes, measures, phrases, and on up to complete compositions. These building-blocks of music can be matched by the building-blocks of language, such as words, sentences, phrases, clauses, paragraphs, etc. Then grammar and syntax can be compared with musical form.
So long as musical analysis was dry and boring, and grammar and language study were torture for schoolboys, these correlations or cross-correspondences between music and language remained mere curiosities, without much practical value. But the new attitude of structural linguistics and of modern language experts--they go out among people who are talking and collect actual samples of what people say to each other, rather than setting up elaborate grammatical charts and schemes in their ivory-tower laboratories; they describe language as it is, rather than as they think it ought to be--this new attitude and its happy results suggest to me, at least, that back-and-forth consultation between musical experts and language experts should produce exciting breakthroughs in both fields. (See bottom page 12 and top page 13.)
For example: in the June 1962 Scientific American, Victor Yngve wrote an article with diagrams in color, about language structure and how it can be applied to translation by machine. (Computers again!) Let's take just one idea from the article: a sentence in a written language can be very long, because the reader can go back over it if he loses his way. (I suppose I should ask your indulgence here because my sentences are so long.)
In spoken languages, however, sentences cannot be more than so long, or more than so complex in their structure, because the listener has to carry the beginning of the sentence in his memory so that he can follow the pattern the speaker is weaving out of spoken words. Now, I ask, doesn't this idea have applications to music? If a composer writes too intricate a passage, or goes too long at a time without breathing-spells, the listeners can't remember enough of the beginning of the passage to make sense out of its latter parts. But music teachers and other composers looking at the written score can look back and maybe follow the composer's train of thought--and then wonder why it is not so intelligible when played and listened to.
Already I have plans to apply the ideas behind language-translating machines to the construction and design of composers' machines, and vice versa. Anyone interested please write.
Memory plays an extremely important role in music, just as it does in language. If anything, there is too much memorizing done in music schools. New music is rejected because it is too hard to memorize and play perfectly without score in public. When composers write for ordinary instruments, they depend on the performers' memory-power as well as their own. For example[Note from Monzo: Image unavailable.]
Translated into scientific terms, this would go: the musical notation as written calls for sounding a 495-cycles-per-second tone for an arbitrary length of time, and then in the same bow-stroke sounding a 782.2-c.p.s. tone for three times as long as the first tone was sounded. (Frequencies given here are for Pythagorean intonation based on A = 440 c.p.s., this being the usual scale employed by unaccompanied violinists.) [Note from Monzo: The musical notation could thus display a quarter-note (crochet) B on the middle line of the treble-staff, with ratio 9:8 above A-440, followed by a dotted-half-note (dotted minim) G on the space above the staff, with the ratio 16:9 above A-440, with a slur between the two notes.]
In the second example, the usual actual performance, the violinist sounds the 495-c.p.s. tone for somewhat less time than the arbitrary unit, then glides continuously upward in frequency (at reduced amplitude, usually) to a nominal frequency of 586.6 [Note from Monzo: the E in the top treble space, with ratio 4:3 above A-440], then makes an extremely brief pause (indicated by the dagger in the musical notation example, right after the stemless note denoting the end-point of the upward glide), then sounds a 782.2-c.p.s. tone for the notated duration, or slightly less, and, soon after beginning to sound this note, frequency-modulates it at a rate of 6 c.p.s. and with a deviation of +/- 8 c.p.s. or more. [Note from Monzo: the logarithmic size of the vibrato is thus approximately +/- 17.7 cents.]
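The arithmetic above is easily checked (a sketch in the Python language; the ratios are those given in the text):

```python
import math

A4 = 440.0  # reference pitch, cycles per second

b4 = A4 * 9 / 8    # the written B: 495 c.p.s.
e5 = A4 * 4 / 3    # end-point of the upward glide: about 586.7 c.p.s.
g5 = A4 * 16 / 9   # the written G: about 782.2 c.p.s.

def cents(deviation, center):
    """Logarithmic size of a frequency deviation, in cents
    (1200 cents to the octave)."""
    return 1200 * math.log2((center + deviation) / center)

print(b4, round(e5, 1), round(g5, 1))
# The +/- 8 c.p.s. vibrato around the G comes out near 17.7 cents --
# a shade under going up, a shade over coming down:
print(round(cents(8, g5), 1), round(-cents(-8, g5), 1))
```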
The upward glide, for which the time is borrowed from that of the written notes, the tiny pause, and the vibrato (frequency-modulation, in engineerian), are all executed according to instructions stored in the performer's memory, not the composer's. Violin and cello instruction books contain vague instructions to this effect, but the finer points of such deviations from written music are supplied by the music teachers.
This is just one small example of what added information the computer or the automatic musical instrument must have, if the performance is to sound natural and expressive, rather than wooden and mechanical. Either the composer will have to supply this information to the machine, or the information will have to be stored in the machine's "memory."
Note that a small amount of information of this kind is actually built into a piano or pipe-organ. In these instruments, some of the details of every performance have actually been pre-determined by the builders: you might say that the builders are the unseen (but heard!) accompanists of every pianist and organist. If you like to talk psychologist's language, this information is in the form of engrams preserved in the piano's or organ's "subconscious;" if you prefer to talk computerese, the information is "wired into" the permanent memory unit of the instrument, or the instrument is pre-programmed.
Read that over: it is extremely important. This will explain the crude and somehow-incomplete effects produced on some electronic organs and electronic pianos. The makers don't know any better, or they cannot afford at the present time to build such refinements in realistically-priced instruments.
You can see how far we have come: memory was a term used by psychologists, or as a vague notion in popular speech; now it has been extended in scope till it includes such diverse things as computers storing electrical impulses that represent information, information invisibly captured on magnetic tapes, pictures or meaningful dots preserved on photographic film, instructions for operation punched into cards or paper tapes, and on a grosser scale, the program of an automatic washing machine built into it by assembling the proper pieces of metal together; and also the tone-quality and other characteristics built into a piano or organ, as just explained.
Nor are those all the added resources we gain by expanding the meaning and application of a concept such as "memory." In research laboratories, in many countries, but especially in our own country, they are building machines capable of learning. That is, the machine "remembers" its "experiences" and the next time it encounters a similar problem, operates more efficiently because of what it has learned. The possibilities of such machines for musical purposes are considerable. We can put such machines to work studying Beethoven and pretending to write more Beethoven compositions, or we can put such machines to the much more worthwhile and certainly more ethical task of aiding composers, performers, and music students and teachers in ways too numerous to describe here--the choice is up to us.
Another concept greatly expanded by mathematicians and scientists lately, one which is already transforming military and economic thinking, is game theory. Musicians have always been fond of games; more than one writer has intuitively perceived the affinities between, say, music and chess; Mozart invented a scheme for composing music by playing dice; there is sometimes as much physical exertion and skill involved in 70 persons performing a symphony as there is on an athletic team; and from my own experience I can reaffirm that composing a fugue is a game-like problem. Game theory would thus seem to have musical applications. Yet another similarity: music and games are both patterns in time.
There is a growing interest in all manner of patterns and structures: mathematicians express this by such terms as topology, while psychologists have already begun to apply Gestalt-theory to music. And structural linguistics, which I mentioned a while ago, has its mathematical side.
Even the investigation of creativity itself begins to have a more systematic and scientific basis. As a result of my creativity meetings and writings (see p. 13) I can assert this quite confidently. With the growing realization on both sides of the fundamental similarity between artistic and scientific creativity, the long, sad estrangement between artists and scientists is breaking down, and we are returning to the Renaissance appreciation of the "whole man," the balanced, complete individual to whom nothing human can be alien.
Society has been suffering from "hardening of the categories"--but with this new adoption and expansion of concepts into many fields other than their original fields, we won't have this problem much longer.
I have had to consider the various machines one at a time, and the various concepts and ideas separately, and in doing so it may have given you the impression that all these were unrelated alternatives. Perhaps it seemed that there was a question of choosing one or the other machine, method, or idea.
But that isn't the case. The revolutionary changes going on in every field, and now happening to music, are the result of a synergistic action of machines and ideas together--"not just one, but a combination of ingredients" as the TV announcer's commercial puts it. In computerese, the "software" is just as important as the "hardware," and without new codes and programming methods and new ways of looking at everything, the most expensive machine would not accomplish much.
We don't have an exclusive choice of information theory or feedback or the expanded concept of memory or ---, but rather we have the opportunity of using all these ideas at once and applying this extremely powerful combination to future musical composition, performance, recording, teaching, and learning. We are not faced with a monstrous robot bugaboo, such as some people fear, but rather by the intelligent application of new ideas such as I described to you on the last few pages, we will get our artistic intentions through the machines safe and unharmed.
If you are still worried about "mechanization," don't be: after all, how much individuality of the player, how much of personality, can get through that maze of wooden machinery called a piano action? Take a look at this complicated mechanism, before you try to tell me that pianos are "natural" and electronic instruments are "artificial"!
If it were not for the artistic aspect of the "builder's unseen accompaniment to piano and pipe-organ tone" that I mentioned, with so little of the performer's individuality getting through the machinery of such instruments, the end-result would be aesthetically intolerable. But should the composer and the performer have to depend on the instrument-builders, just because they have been doing so for over two centuries?
Most electronic organs (and these are the chief example of electronic music that the average person gets to hear today) are designed with the latest aids, such as mechanical and electrical engineering, but the aesthetic side is mistakenly limited to copying features of pipe organs without any attempt to inquire whether such features are appropriate to newly-composed music or the modern rooms in modern homes and buildings where these organs will be installed, rather than the churches and auditoriums of two or more centuries ago. Nor are such organs designed in the light of modern communication between composer and listener. If it is not practicable to give the mass-produced electronic organ of today the individuality that pipe organs had (no two pipe organs are ever exactly alike), then there is at least a moral obligation to give the organist some way or other to put a small measure of individuality into the instrument he is using.
Fortunately, if such latitude has not been built into the electronic instrument, it is possible to have the instrument "modified" without sending it back to the factory. I predict that this will become a good business in itself, in the not-too-distant future.
That is, to solve a problem such as this one of putting more individuality into an electronic organ, it is not necessary to re-design the whole thing from the ground up, anymore than a contractor would have to tear the whole house down and build a whole new house just to remodel the kitchen.
Another important thing to remember is that there are many sub-assemblies or pre-engineered "building-blocks" and "kits" and "components" on the market, so that many new ideas in electronic music can be tried out at nominal cost and without having to wait months or years for every individual part of the new apparatus to be designed, at great trouble and expense, just for one purpose.
These components or building-blocks don't even have to be originally intended to go into musical instruments or sound systems. That new word serendipity--finding just what you need when you are not looking for it--certainly applies here! Conversely, advances in electronic music will yield by-products of the research that can be profitably applied in other fields outside music.
What we have just said provides an answer to the "won't it cost too much?" objection considered on pages 2, 3, 4 and elsewhere. The results we will get when the new machines, the new techniques, the new ideas, and aesthetic principles and the new creative approach are all applied together, will be worth the cost. I had to take you on this long, rambling journey to show you an adequately rounded view of all facets of the problem.
Electronic music draws upon many other fields, and it has a great deal to give to other fields. Thus it is not just another narrow specialty, but [an] integral part of the Contemporary Cultural Renaissance.
I want to play an active part in this renaissance, and I am inviting you and other people you may know to play your parts in it also: that is why I have written this bulletin in such a hurry. You may recommend people to me who should receive a copy of this bulletin, or a few sample pages from it, or who ought to have the price lists of my other writings.
I shall be grateful if you will broadcast the news that--
1) electronic music is here now;
2) I compose music for electronic as well as conventional instruments;
3) I have invented new instruments;
4) I have built electronic musical instruments;
5) I am ready to collaborate in the design of equipment, instruments, techniques, and other factors in the electronic musical system;
6) I want to help people in the musical, electronic, and allied fields understand one another;
7) most of my side interests are relevant to audible communications;
8) my approach to the problems and projects entrusted to me will be at once artistic and scientific, aesthetic and technical, with no undue bias.
The last few months (end of 1965 and beginning of 1966) have shown a remarkable increase in electronic music activity, as well as a rapid growth of publicity for the subject--news items and discussions of it appearing in the most unexpected places--to the point where this bulletin has to be reissued, with some additions to bring it up to date.
COMPUTER MUSIC -- Specially noteworthy is the advance in computer music--my pessimism and caution were overdone, perhaps, in 1963. An excellent survey of these recent developments can be found in the article "Research in Music With Electronics" in Science, 8 October 1965 issue, pp. 161-169. In the last few years, and still going on, there has been a dramatic expansion in "computer software" as alluded to on the middle of p. 22, this Bulletin. The chasm that seemed to separate the mathematically-untrained person (such as the composer or arranger) from the ability to control and use a computer, has been narrowed till now for all practical purposes it is being bridged.
Nor is this all: with the speeding-up of the average computer's insides, and the improvement of devices to get data into and out of it, "real-time" operation becomes more practical. That is, instead of having to queue up and wait one's turn, the answers can be provided in a jiffy. This development, in turn, makes time-sharing and remote control possible: a large, expensive machine can be used by ten or even a hundred persons simultaneously (well, not really simultaneously; the users are switched to in rapid sequence), and the information put into and taken out of the machine can be transmitted over telephone lines and teletypewriter circuits, so that the users of one computer can now be in different buildings or even different cities. Thus the cost of computer time will come down even more rapidly than the cost of the machines themselves.
My misgivings (see pages 4 & 5) about having to compose in a business office, and about the interposition between composer and computer of musically-untrained programmers and technicians, are thus greatly relieved. The new "programming languages," "codes," "compilers" and other aids will meet the musically-trained person more than halfway. One method will be the use of words, letters, and numbers to make the writing of music compatible with a typewriter keyboard. To me, this seems to hold the most promise.
Another method would be a special device that could read ordinary musical notation. This is being developed. My personal opinion is that it ties us back to earlier centuries just at the time we need to be free of them. Of course, for those of you who are "allergic" to typewriters, this may have to be the way. I am particularly concerned about this drawback: as I point out on p. 26 and elsewhere, musical notation does not and cannot say all the important things about how a given passage is to sound. Much is left out of the notation and supplied by the performers. There is an unwritten gentlemen's agreement that the composer shall never specify certain details, such as pitch and rhythmic deviations, presence and degree of vibrato, timbre-nuances of a given instrument, pitch-glides between many notes, the proportion of noise to tone, variations in attack, and still other important factors. So long as one never has ideas beyond the 19th-century tradition, this agreement serves quite a useful purpose. It saved millions of hours of composers' time. It flattered the egos of performers, conductors, and music teachers; it still does! But it will not work for a new musical idiom where all these hitherto-fixed and standardized items of music-as-heard have now become variable.
Take for instance the concept of a klangfarbenmelodie, i.e., the sounding of successive notes with different qualities rather than different pitches. For centuries this has been possible, but rarely was it tried. Why not? Because there was no satisfactory notation for it. Now, to a computer or to many kinds of electronic musical instruments, both manual and automatic, a timbre-melody would be just as easy as a pitch-melody--but (and this is a big but) if the software, the special musical codings, or the notation or keyboard facilities offered the composer are too much influenced by 19th-century tradition, timbre-melody will become difficult, or next to impossible, or awkward in some way.
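A timbre-melody of the kind just described can be sketched in a few lines: successive notes all on the same pitch, differing only in their harmonic recipes (the recipes and all figures here are invented for illustration; any synthesis method would serve):

```python
import math

RATE = 8000  # samples per second

def note(freq, weights, dur=0.25):
    """One note as a list of samples: the fundamental plus weighted
    harmonics. Changing the weights changes timbre, not pitch."""
    return [sum(w * math.sin(2 * math.pi * freq * (h + 1) * i / RATE)
                for h, w in enumerate(weights))
            for i in range(int(dur * RATE))]

recipes = [
    [1.0],                        # pure sine
    [1.0, 0.5, 0.25],             # bright, string-like
    [1.0, 0.0, 0.33, 0.0, 0.2],   # odd harmonics, clarinet-like
    [0.5, 1.0, 0.3],              # strong second harmonic
]
timbre_melody = []
for recipe in recipes:
    timbre_melody.extend(note(440.0, recipe))
print(len(timbre_melody))  # four quarter-second notes at 8000 samples/sec
```

To a machine, stepping through the recipes is no harder than stepping through a scale of pitches--which is exactly the point.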
Much the same considerations obtain for rhythms not easily written in conventional notation, for scales and tuning-systems not congenial to the regular notation, for certain noise effects, and so on.
On pp. 5 & 26 I allude to the graphing method of notation. Instead of notes on a staff, one draws lines on cross-section paper, usually with the vertical dimension representing pitch and the horizontal dimension representing time. (A little more on this later.) Equipment already exists for input and output that would make this type of notation usable by a computer, both for reading and writing. This type of notation will become increasingly important to composers from now on, but it isn't perfect and it can't do everything, although it really does overcome some of the drawbacks of conventional notation. It still leaves things out: dynamics and timbre, especially. It is foreshadowed in the medieval neumes for singers, in the patterns of pins on the drum of a barrel-organ or music-box, then in the player-piano roll's perforations. (This last is close kin to the Jacquard Loom, usually considered to be the ancestor of the computer's punched cards--DON'T FOLD, STAPLE, BEND, PUNCH OR MUTILATE).
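Graph notation is already machine-readable data. A sketch (the tuple format is invented here for illustration):

```python
# Each entry: (start-time in seconds, duration, pitch in c.p.s.) --
# the horizontal and vertical dimensions of the cross-section paper.
score = [(0.0, 0.5, 440.0), (0.5, 0.25, 495.0), (0.75, 0.75, 586.7)]

def pitch_at(score, t):
    """What frequency is sounding at time t, or None during silence."""
    for start, dur, freq in score:
        if start <= t < start + dur:
            return freq
    return None

print(pitch_at(score, 0.6))  # 495.0
print(pitch_at(score, 2.0))  # None -- the piece is over
```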
Before going on, I should tell about a computer-music demonstration attended too late to write up in the first edition of this Bulletin: Prof. Gerald Strang played tapes of computer music that other composers had produced by several different methods, then described in detail how he learned the Bell Telephone Laboratories/IBM 7090 method for composing music and having a computer perform it. He drew up his preliminary sketches in a graphic notation, with an extra graph for dynamics, and then coded it in a system involving the FORTRAN computer language, then the information was transferred to IBM punch cards, then fed into the machine. At the time he made the first composition, it took some 10 minutes of computer time for one minute of performance, but this drawback has since been overcome.
The array of punch cards on the table at the lecture auditorium was formidable--decks and decks of them! This obstacle has also been overcome recently--the process is now far less tedious.
Prof. Strang's composition contained a most impressive thunderclap, such as no ordinary orchestra could have made. There were other sounds that promised an entirely new vocabulary, yet some of the traditional tones, as clarinet for instance, were there also.
Other composers' electronic works that he played on tape were startling also for such effects as an old Model T Ford starting up, a voice saying "ahhh" for the doctor, and eerie sounds halfway between music and speech. Once we are free of the traditional instruments, the music/speech fence can be crossed readily--or straddled. (To go back 50 years or more, the busy signal in your telephone receiver: is it speech or music? I know it's annoying, but that isn't what I mean here: it has some of the characteristics of each, when you think about it.)
The "wah-wah" mutes for brass instruments, beloved by the jazz musicians, are also relevant here. The converse of this, making voices sound like instruments, is also found: the mirliton or kazoo, for instance. Thus the precedent for exploring and developing the borderland between music and speech is there already; and as noted on page 24, music and linguistic research may well assist each other. Now I have further confirmation of this in L. Hiller and J. Beauchamp's article in Science, "Research in Music with Electronics," already mentioned (8 October 1965).
I have to state this another way: we have entered upon a period of readjustment and digesting of new resources, new freedoms, and new problems. It is now possible to take several new viewpoints, to regard the musical composition situation from several new angles, and to manipulate the sound-structures in ways undreamt of before. One is tempted to compare this situation with the new viewpoints of the piano which were introduced by Schoenberg, Bartok, Debussy, and others, but this is dangerous oversimplification--the kind of new viewpoint I mean here represents a much more drastic and genuine revolution than the transition from key-system to atonality did.
As a case in point, take the automatic instrument known as the RCA Music Synthesizer, invented by Drs. Olson and Belar at the RCA Laboratories in Princeton, N.J., and the Victor recording LM-1922 issued around 1955 and probably still available. Here is an instrument for which fantastic claims were made: it would be able to imitate any instrument, to produce any musical sound. (Since it has been somewhat redesigned and improved, my criticism of a 10-year-old record must not be taken as applying in toto to the present instrument nor to further improvements that may, for all I know, be in the offing.)
The Synthesizer has a peculiar timbre that runs through nearly all the tones on Record LM-1922, and through much of the new record, Columbia ML 5966, as well. Those of you who ever heard the Hammond Novachord (which made its debut around 1938) will have some idea of what I mean here. I do not quarrel with the timbre as such; it is as musically useful as trombone or saxophone or whatever. What bothers me is that both in the Novachord and in this Synthesizer the piano was taken as a Heaven-Sent Divinely-Inspired Model For All Eternity of what the Ideal Musical Instrument Should Be. The piano's 12-tone scale was taken over without even inquiring why the piano is so tuned; percussive tones were accepted as the norm from which sustained tones "deviate;" without really meaning to do so consciously, certain characteristics of the piano and other conventional instruments, along with their restrictions, were built into this Synthesizer that was to be so revolutionary and pace-setting. When you consider how much just one Synthesizer costs, you are appalled at such poverty of imagination. Still, it exemplifies a truly great idea: a special-purpose machine primarily for music--whereas computer music proper is the application of a general-purpose, non-musical machine.
We can be optimistic that, with the creation of the Columbia-Princeton Electronic Music Studio a few years ago in New York City, and the use of an improved Synthesizer there by a number of composers, this idea of special-purpose musical machines will be brought to a more creative level. Particularly so, now that there will be intensive competition between this idea and that of musical uses of the general-purpose digital computer. It is too early to predict which will ultimately be the better approach costwise--the first of anything always is very expensive. The economies effected by mass production, however, are on the side of the general-purpose computer and other devices to be used with it.
The Columbia-Princeton Studio is described in Radio-Electronics magazine for June, 1965 (it is the cover feature of that issue) and, believe it or not, in Vogue for February 1, 1966 in an illustrated take on the same subject; it is obvious that its time has come. But here is a third article, which appeared in the Saturday Evening Post for January 18, 1964, "Music for Machines" by Lewis Lapham, dealing with this same studio, and with some aspects of computer music as well. I was sufficiently impressed by this article that I wrote a commentary entitled Art-O-Mation? which I have just reprinted without change, and will bind with future copies of this Bulletin.
Since 1964 I have received information from Messrs. Tillman Schafer, Tom Marshall, and others, indicating that there are many different approaches to the computer and automatic music problem being pursued independently and simultaneously.
As implied in this bulletin and in "Art-O-Mation," my own inclination is toward some kind of special-purpose automatic instrument small enough for the composer's own home--thus my plans are complementary to what others are doing in this field, and there won't be any useless duplication of effort. Rather, these endeavors will fit together in a really helpful manner.
XENHARMONY. This is a term I have coined from Greek roots, as is customary. The idea is that new harmonies will seem "strange" at first (Greek xenos). I can't just say "quartertones" because that is only a very small part of what is now possible to explore musically; even "microtones" focusses attention on small intervals and seems inappropriate for scales with just a few more tones than 12. Also, there are a number of tuning systems that can at least be demonstrated by retuning a 12-tone instrument. Also, there are noises, gliding tones, quasi-speech effects, and so on, which ought to be included under one term with new tuning-systems and scales.
Until quite recently, there was a formidable economic obstacle to experimenting with 'oddball' tunings. It involved either building a new kind of instrument that was inevitably expensive, or sacrificing an instrument by converting it, after which it couldn't be used for ordinary music. There is an extensive literature on proposed new tunings, and on the revival of ancient tunings such as the Greek enharmonic genus. But the Almighty Dollar kept shouting "No!!" and I do mean both exclamation-points. Our familiar 12-tone equally-tempered scale was "frozen" into the musical situation by the piano, which requires keys, action-parts, hammers, strings, etc. of sufficient size and strength to withstand the mechanical forces involved in building and playing that instrument. The Bach "48" had only a little to do with it; the story is much more complicated than that. Detailed discussion really belongs in a new publication of mine to be called the Xenharmonic Quarterly, and perhaps in specialized monographs or a book, later on. Also, there is some mention of this in my 1965 monograph, Shall We Improve the Piano?
The subject of other tuning-systems than the 12-tone-equal, though, does cut across other topics, and is quite relevant here. Computer music should be limited only by what the human ear can accept and use; not by the mechanical limitations of older instruments, such as the piano, banjo, accordion, zither, pipe organ, etc. And electronic organs and other instruments played by hand also do not have many mechanical limitations on their tuning. As for the electric Hawaiian steel guitar and the electronic Thereminvox, these instruments do not have enough pitch-restrictions! The player must always be careful not to slide around all over the place and have his music degenerate into formless shapeless blah. Yet the undeniable fascination these scale-less instruments have for many people could be a token of rebellion against the rigidity of the 12-tone scale.
This isn't the proper place for me to set out a complete appraisal of the 12-tone scale--if it was so desirable and fruitful in 1790 or 1850, why say it is outworn in 1966? Is it really threadbare and exhausted as of 1966? I will have to save details for another treatise. In this bulletin, we are interested in electronic music and instruments, therefore in the way the 12-tone equal temperament affects them. In 1963 I had not yet done certain work that occupied me during 1965; that is why I said little about tuning systems up to Page 29.
It is extremely unfortunate that hardly any composers or musicians or even music-theory and acoustics students have ever read electronic musical instrument patent specifications. Admittedly, these documents are written in a crabbed abstruse technical jargon that baffles even patent attorneys themselves. The Government (U.S.A. or other) asks for a complete disclosure of how the new invention works, but naturally the inventor is reticent about his valuable information. To this add the specialized vocabularies of the law, of engineering, of music, and of physics, and you have truly planned, organized confusion!
Methodical chaos. Language whose purpose is to conceal thought. Publications which are not really published--theoretically anyone can buy them at a ridiculously low figure, but they certainly aren't advertised. However, if you did read a hundred or so of these descriptions of electronic music inventions, you would find, over and over and over again, that considerable effort had been expended to make a mechanism or circuit that wanted to produce perfectly tuned (just) intervals follow a tempered scale with deliberately distorted ratios between the tones. That which is so obvious in the fretting of a guitar or mandolin, and to which the strings of a piano and the pipes of an organ readily accommodate themselves--a scale where most intervals are somewhat out of tune with each other--many mechanisms and electronic circuits actually fight against, so they have to be forced into the tempered scale some way.
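The discrepancy these circuits fight against is easy to state numerically. Here is a hedged sketch in Python (merely an illustration of the arithmetic, not anyone's circuit), comparing the perfectly tuned (just) ratios with their 12-tone tempered substitutes, measured in cents--1200 cents to the octave:

```python
import math

def cents(ratio):
    """Size of a frequency ratio in cents (1200 cents = one octave)."""
    return 1200 * math.log2(ratio)

# Perfectly tuned (just) intervals vs. their 12-tone equal-tempered stand-ins.
just = {"fifth": 3/2, "major third": 5/4, "minor third": 6/5}
tempered_steps = {"fifth": 7, "major third": 4, "minor third": 3}

for name, ratio in just.items():
    j = cents(ratio)
    t = tempered_steps[name] * 100      # each tempered semitone = 100 cents
    print(f"{name}: just {j:.1f} c, tempered {t} c, error {t - j:+.1f} c")
```

The fifth is mistuned by only about 2 cents, but the thirds are off by some 14 to 16 cents--exactly the "deliberately distorted ratios" the patent specifications labor to impose.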
Nor is there anything in the programming of a computer to produce music, that makes the 12-tone tempered scale any more convenient or expedient than hundreds of other possible scales. There is no economic consideration to make non-12-tone scales more expensive, as might be the case with many conventional instruments. There are no fingering or note-reading problems, no commitment to, nor investment in, the standard routine. With this opportunity to make a fresh start, why not try it to see where it leads?
Already this has been done, to a small extent. Even the "atonal" or "serial" form of composition can profit by using a deliberately non-harmonic scale--such systems as 9, 13, and 21 tones have been tried. It is possible to switch systems during the same piece--this is a new kind of "modulation" or could generate a new kind of "dissonant counterpoint." Then there are the theoretically infinite resources of the just or untempered scale, known for centuries but fenced off from the composer by keyboard problems, the difficulty of keeping hundreds of pipes or strings in tune accurately, and notation problems.
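As an illustration of how little the computer cares which system it is given, here is a sketch in Python that tabulates an equal division of the octave for any number of tones; 13 or 19 are exactly as easy as 12. (The starting pitch and the systems chosen below are merely my own examples.)

```python
import math

def edo_scale(divisions, base=440.0, octaves=1):
    """Frequencies of an equal division of the octave, starting at `base`."""
    step = 2 ** (1 / divisions)          # the one constant ratio between tones
    return [base * step ** k for k in range(divisions * octaves + 1)]

# To a computer, 13 or 19 tones per octave are no harder than 12.
for n in (12, 13, 19):
    scale = edo_scale(n)
    print(n, [round(f, 1) for f in scale[:4]], "...", round(scale[-1], 1))
```

Every system arrives at exactly the double frequency after one octave; only the size of the step differs, and nothing in the program makes 12 privileged.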
One point must be made right here: no piano tuner, even the most expert, tunes the 12-tone equal temperament accurately. It's only a working approximation. The temperament is further tempered by the fact that a piano's strings are out of tune with themselves! Nor do they stay in tune very long--how could they, with 150 to 170 lb. tension on each string, the many wooden parts involved, and variations in the room's temperature and humidity? Much the same goes for pipe-organs--pitch changes with temperature and the weather; and there are other considerations. Wind instruments in the orchestra warm up during a concert, and so on. Until recently, a precise tuning of the ordinary system was impossible outside the laboratory. Now, what happens when such precision is really attained, as it can be with electronic equipment? Why, it sounds dead. Insipid, no "life" or "verve" to it.
So the instrument-makers and the computer musicians have to put some "dirt" in it. Vibrato, celeste effect, random deviations, reverberation, other gimmicks. Just think of it: once a perfect tuning of the 12-tone system is attained, we shrink from the consequences. We imitate the imperfections of ordinary instruments. What a waste of effort, when we could just as well be doing something interesting, new, and thrilling! And here is the big difference: up till now, it was almost all trial-and-error guesswork--but now we can know what we are doing, and control all the factors that had to be left to chance in the good old days.
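By way of illustration only, here is a Python sketch of the kind of "dirt" described: a sine tone given a slight vibrato and a small random mistuning, both expressed in cents. The particular rates and depths are my own assumptions, chosen for the example, not anyone's published figures.

```python
import math, random

SAMPLE_RATE = 8000                       # samples per second, for the sketch

def tone(freq, seconds, vibrato_hz=5.0, vibrato_cents=8.0, drift_cents=3.0):
    """A sine tone with slight vibrato and a small random mistuning --
    the 'dirt' that keeps a dead-accurate tuning from sounding dead."""
    # One fixed random detuning per note, within +/- drift_cents.
    drift = 2 ** (random.uniform(-drift_cents, drift_cents) / 1200)
    samples = []
    phase = 0.0
    for i in range(int(seconds * SAMPLE_RATE)):
        t = i / SAMPLE_RATE
        # Periodic wobble of the pitch, +/- vibrato_cents at vibrato_hz.
        wobble = 2 ** (vibrato_cents * math.sin(2*math.pi*vibrato_hz*t) / 1200)
        phase += 2 * math.pi * freq * drift * wobble / SAMPLE_RATE
        samples.append(math.sin(phase))
    return samples

s = tone(440.0, 0.1)
print(len(s))  # 800 samples for a tenth of a second
```

The point of the text stands out in the code: the "imperfections" are now explicit parameters under the composer's deliberate control, not accidents of wood and wire.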
The violinist plays certain tones extra-sharp for effect; well, now we can do this deliberately instead of unconsciously, and add variety by a conscious use of nuances. Just because electronic music will be played through machines does not mean it has to sound mechanical. Any more than poetry sounds mechanical because it is tape-recorded, or looks mechanical because it is typewritten and printed. There has been far too much propaganda about the "dehumanizing" influence of the machines. Human beings design them, human beings use them, and the machines are not built for other machines to enjoy. For too many centuries, the composer has stood at a disadvantage compared with other artists like the painter, the sculptor, or even the artisan/craftsman. Now the composer can communicate directly with his listeners--but only if he has mastered the new electronic communication channel, and only if he takes advantage of its manifold possibilities.
I predict that the use of new scales and the revival of ancient scales which were not practicable on conventional keyboard instruments will become one of the chief reasons for using electronic musical instruments, synthesizers, computers, and other such equipment. It ties in directly with the ubiquitous hi-fi set in the modern home, so that the average person in the average living-room will hear what the composer intended, and no matter how intricate or difficult, he will still hear it properly.
I am confident of all this because I have actually done it myself. During the last year and a half, I have tuned my specially-built electronic organ to many new scales, and composed and improvised in all of them, as well as the ordinary 12-tone system. The results speak for themselves: no arguments or long-winded descriptions are necessary. I have only begun to scratch the surface. Much remains to keep me and many other people busy the rest of our lives. My only misgivings are that people who read the score without hearing it cannot realize what new tuning-systems can do; and verbal descriptions like this one cannot convey the new experience either.
Not only will new music progress further because of new scales and their resources; older music can be played in many of these scales, shedding new light on it.
We shouldn't leave this subject without mentioning an affair called the mel scale. A mel is a unit of pitch-difference based on the ability of the ear to discriminate small rises or falls in pitch, without regard to the harmonic basis. Pitch-discriminating ability for simple tones (smooth, flute-like) is better in the middle of the scale than at either end; indeed, before the bottom of the range of hearing is reached, pitch-discrimination fails, and something of the sort occurs at the upper limit of the hearing range. Thus a scale of equal pitch-discriminating ability will be crowded in the middle and spread out at both ends. It will have no harmonic correlation. A piano keyboard, however, spaces intervals equally all through 7 1/4 octaves; and the distance between adjacent semitones on a double bass is actually much further at the low end of the range, as compared with the way they are squeezed closely together on the violin fingerboard. This mel scale may intrigue composers.
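For the curious reader, a later and widely used approximation of the mel scale can be sketched in Python. It shows at once that equal harmonic intervals (octaves, here) do not span equal numbers of mels--the harmonic correlation is gone, just as stated above. (This particular formula is a modern convention, not the original experimenters' table.)

```python
import math

def hz_to_mel(f):
    """A common modern approximation of the mel (pitch-difference) scale."""
    return 2595 * math.log10(1 + f / 700)

# Successive octaves of A, and their positions on the mel scale:
for f in (110, 220, 440, 880, 1760, 3520):
    print(f, "cps ->", round(hz_to_mel(f), 1), "mels")
```

Each octave on the right of the list spans more mels than the one before it, so a keyboard laid out in equal mels would be spaced quite differently from the piano's.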
New Electronic Instruments for Manual Playing
Right on page 1 of this Bulletin, I stated that miniaturization of electronic devices would lead to portable organs. I am happy to say that this prediction has come true before I expected that it would. Compact transistorized portable organs are already on the market, under various names, such as "combo" organ, with at least 4 manufacturers in the field and no doubt many more to enter the competition.
Miniaturization not only leads to true portability, but reduces the power consumption, the heating up of the instrument, and also permits operating some instruments from rechargeable batteries. Then they can be taken anywhere, just like violins or flutes, and thus a hurdle that has been keeping electronic music back is finally gone for good. You are no longer forced to find a light socket to plug your instrument into.
With accelerated improvements and innovation (i.e., electronic devices changing almost as rapidly as women's fashions) the stagnation in the musical-instrument setup may be broken. It's risky to predict how much and how soon--this is a matter of economics as much as anything else, and also is tied up with the fact that popular music is in some ways more conservative than serious music.
Perhaps an article that appeared in TIME magazine last year (pp. 90-92, September 24, 1965 issue) "Age and the Patchwork," may indicate the new trends. According to this article, both classical concerts and rock-and-roll recordings are doctored--such items as synthetic reverberation (canned echoes), superposed recordings, editing out of all wrong notes, patching in a right note played after the piece or portion of a piece is completed, drastic alterations in the solo-accompaniment balance, trick pickups of individual instruments or voices to change their timbres, sometimes filters to emphasize or de-emphasize frequency-bands, deletion of noises, and so on.
The result is something that could not possibly be duplicated in a "live" performance without electronic equipment. It has gotten to the point where live audiences are keenly disappointed when their favorite rock-and-roll or folk singers do not sound as good as the records! In self-defense of their "image," many of the performers have had to start carting around one or two vanloads of electronic gizmos. With the development of stereo, the tendency to exaggerate it is irresistible--and so is the temptation to fake stereo out of monaural recording--witness the highly-publicized fake stereo of Toscanini recordings.
Electronic organs and amplifying guitars now come with "reverb" amplifiers to put that "auditorium effect" into your living-room or a small dance-hall.
What this means is that performers on conventional instruments have begun to emulate the electronic instruments even though they won't admit it and will make denial after denial, denunciation after denunciation.
With ubiquitous background music and the impact of radio, TV, and records combined upon the average household, people are unconsciously being conditioned to these 'electronic' effects. From this changed attitude, the step to acceptance of electronic music in the home--organs, amplified guitars, new instruments about to come onto the market--is certainly not a big one--not any more.
But it has had to await miniaturization--our apartments and homes are more compact than those of a century ago: the piano has become too big for the average living-room, and a scaled-down "spinet" piano does not have the proper tone-quality. So long as the electronic organ occupied the floor space of a small piano, it could not make real headway as home instrument--but now, suddenly, the story is very different.
In earlier centuries, there was much home music-making--in the days of the lute and the madrigals and the viols, people would form impromptu chamber ensembles at home or meeting-house, rather than trying to be carbon copies in reduced size of symphony orchestras or bands, as has been the custom the last hundred years or so. One pleasant byproduct of hi-fi records has been the introduction of many people to this kind of home-sized music for a few players in a relatively small room, as was practiced in the 16th and 17th centuries.
The late Arnold Schoenberg once made a prediction (we found this quoted in an article by Peter Yates) that one day there would be miniature electronic instruments with something like a typewriter keyboard, that would be quite compact and portable; and people could carry those to one another's homes and perform in small groups with a wide variety of effects. I happen to know that the technology of today (not some distant tomorrow) is fully equal to the project, so we may yet see electronic instruments and even computer-like machines bringing music back into the home and re-humanizing it, not de-humanizing it.
This Supplement is a good place to acknowledge the cooperation and invaluable assistance many people have given me in turning up unusual information and calling magazine articles to my attention. I have cited most of my sources in the text itself, but do not give a Bibliography just now because it would soon become obsolete, and when it is time to issue such a bibliography, it would be more useful as a separate publication. In the meantime, ask your librarian about articles on electronic music and related subjects in the current magazines.
Despite all the 19th-century opinions to the contrary, Art and Science were never really divorced. Electronic music is an integral part of the new cultural renaissance, and offers enrichment and opportunity for you and you and you.