Music Technology

Definition, Devices, Activities, and Concepts

Definition

If you want to be an audio engineer, a producer, a composer, or any kind of music technologist, you must have a working understanding of what follows.

We’ll begin by considering the definition of music technology, which—like many definitions—is continually shifting. New equipment, software, and techniques are always being devised, so the concept is changing slowly over time. Fifty thousand years ago, some cave-dwelling genius began banging sticks and rocks together with musical intent.[1] This person would’ve been on the forefront of music technology—the definition of which would’ve been something like a set of sticks and rocks that can be used to bang out thunder beats. Just think of how astonishing the first stick-and-rock beatmaster must have seemed to early modern humans. Many were probably tempted to worship him or her.

Nowadays, newfangled devices and techniques for producing music emerge almost daily. To get a sense of this, consider that the hard-copy version of Sweetwater.com’s music-equipment catalog is the size of a small phone book. And it will likely keep growing larger. In the future, the concept of music technology will need to incorporate things like artificial intelligence and computer-brain interfaces—innovations that seem set to alter it significantly. The 2050 edition of the Sweetwater catalog will probably carry items that defy our current understanding of the field.

With this bit of caution about the malleability of definitions in place, let’s examine what’s meant and what’s not meant by the concept of music technology. Like any good scholar, I began my research for this material by going to Wikipedia. The definition I found there was long and unwieldy, so here is my paraphrased, much abridged, and uber-concise version:

Music technology is any device, activity, or concept that supports the production of music.

I’ve organized the rest of this blog post around this sentence. Thanks in advance for reading, and have a good time learning about music technology.

Devices

Professional audio equipment is probably the most conspicuous subset of music technology. Most music performances feature a backdrop of loudspeakers, microphones, amplifiers, effects processors, monitors, mixing boards, DAWs, and so on.

We can organize our understanding of music technology devices by classifying them into three categories: (1) transducers, (2) electronic instruments, and (3) computer-based devices. We’ll look at each one in turn, but we’ll begin with transducers because they are the most critical of the three.  

Transducers

A transducer is an electronic device that turns one form of energy into another.

A microphone is one type of transducer because it turns sound-wave energy into electrical energy, and a loudspeaker is the opposite type because it turns electrical energy back into sound-wave energy. They have a yin-and-yang style relationship that usually works perfectly but has the potential to fail spectacularly. An example of the relationship working is a pleasantly amplified musical performance; an example of the relationship failing is a dementedly loud feedback squeal. We’ve all experienced this at a show or a coffeehouse performance: Some joker puts a microphone in front of a loudspeaker and a stentorian shriek of feedback blasts everyone’s cochlea into oblivion. The problem of feedback aside, microphones and loudspeakers are everywhere in the professional-audio world. But there are other transducers to consider—like pickups—which we’ll look at next.

Microphones and loudspeakers are both transducers. A microphone takes acoustic energy—compressions and rarefactions of air molecules set in motion by a vibrating source—and converts it into an analogous flow of electrons (electricity). In other words, a microphone generates electricity from sound. A loudspeaker does the reverse: it takes electricity in the form of an audio signal and converts it back into acoustic sound energy.

A pickup is another kind of transducer. It comes in two varieties: (1) piezo and (2) coil.

A piezo pickup operates via piezoelectricity, which describes the voltage generated by crystalline substances—like quartz—when they are put under pressure or otherwise subjected to mechanical stress. Even the slightest stress on a piezo will create voltage. Consequently, if such a substance is placed under the bridge of a stringed instrument, the pressure differences generated by the strings will alter the voltage output of the crystal, thereby creating an analog audio signal.
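
If you want to put rough numbers on that idea, here’s a back-of-the-envelope sketch in Python. The piezoelectric coefficient for quartz is a standard textbook figure, but the element’s capacitance and the string forces are assumptions chosen purely for illustration.

```python
# Rough piezoelectric back-of-the-envelope: the charge generated is proportional
# to the applied force, and the voltage is that charge divided by the element's
# capacitance. The capacitance and forces below are illustrative assumptions.
d33 = 2.3e-12          # piezoelectric coefficient of quartz, coulombs per newton
capacitance = 100e-12  # assumed element capacitance, farads (100 pF)

def piezo_voltage(force_newtons: float) -> float:
    charge = d33 * force_newtons          # coulombs
    return charge / capacitance           # volts

# A string pressing harder on the bridge (more force) yields more voltage.
for force in (0.1, 1.0, 10.0):
    print(f"{force:>5} N -> {piezo_voltage(force) * 1000:.3f} mV")
```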

Remember, an analog audio signal comprises jolts and squirts of electricity that are perfectly analogous to twangs and pings of sound.

If the sound’s intensity increases, the electricity’s voltage also increases. The shape of the sound wave and the flow of electrons will have the same outline. They are analogous to each other; this is what’s meant by the concept of analog audio. This kind of signal is continuous, with variations represented all the way down to individual electrons. Imagine the perturbation of one atmospheric molecule by a sound, and now imagine the analogous perturbation of one electron by a transducer. It’s not exactly like this, but I think you get the point—one causes the other in near-perfect synchronicity.
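
Here’s a minimal Python sketch of that analogy: a sound-pressure wave and the voltage a microphone would produce from it have exactly the same shape, just different scales. The microphone sensitivity figure is an assumed, illustrative value, not any particular mic’s spec.

```python
import numpy as np

# A 440 Hz tone sampled for a few milliseconds.
sample_rate = 48_000                       # samples per second
t = np.arange(0, 0.01, 1 / sample_rate)    # 10 ms of time points

# Sound pressure of the tone, in pascals (1 Pa is roughly a loud voice).
sound_pressure = 1.0 * np.sin(2 * np.pi * 440 * t)

# A hypothetical microphone sensitivity: 10 millivolts of output per pascal.
mic_sensitivity = 0.010                    # volts per pascal
voltage = mic_sensitivity * sound_pressure

# The electrical signal has exactly the same shape as the sound wave,
# just scaled -- that is what "analog" means.
print(np.allclose(voltage / mic_sensitivity, sound_pressure))  # True
```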

Okay, enough about electrons and molecules. Let’s look at the other style of pickup—the coil pickup.

A coil pickup consists of a length of copper wire wrapped in a loop around a magnet. Such a device, if placed near the metal strings of an electric guitar, will convert the strings’ vibrational energy into electricity. The principle used is electromagnetic induction, which describes a phenomenon in which voltage is generated when copper is moved through a magnetic field. The same principle is at work inside some microphones.
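
For the curious, here’s a hedged numerical sketch of that principle (Faraday’s law of induction): the voltage induced in the coil is proportional to how quickly the magnetic flux through it changes. The number of turns and the flux values are rough assumptions, not the specs of any real pickup.

```python
import numpy as np

# Faraday's law sketch: the voltage induced in a coil is proportional to how
# fast the magnetic flux through it changes. A vibrating steel string wiggles
# that flux at the string's frequency. All numbers here are illustrative.
sample_rate = 48_000
t = np.arange(0, 0.01, 1 / sample_rate)

turns = 5000                                             # assumed number of coil windings
flux = 1e-6 * (1 + 0.05 * np.sin(2 * np.pi * 110 * t))   # webers, wobbled by a low A string

# EMF = -N * dPhi/dt  (approximated numerically)
emf = -turns * np.gradient(flux, 1 / sample_rate)

print(f"peak induced voltage: {np.max(np.abs(emf)) * 1000:.1f} mV")
```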

A coil pickup consists of a length of copper wire wrapped tightly around a set of magnets. Such an apparatus, if placed in proximity to the nickel or steel strings of an electric guitar or bass, will convert the string vibrations into an analog audio signal—another example of a transducer converting one form of energy (string sound) into another (electricity). (Jump, Brian. “Gibson Explorer Coil Pickup.” 2020, JPEG.)

So far, we’ve considered three transducers: microphones, loudspeakers, and pickups. Let’s look at one more—the tonewheel.

A tonewheel consists of an electric motor and a set of rotating disks. Each disk is a gear with a specific set of teeth or bumps that, when set spinning, causes a specific frequency (note) to be generated. The frequency depends on the disk’s rotation speed and its number of bumps. An apparatus called a pickup (covered above) is placed near the rotating disk; consequently, the sound generated by the tonewheel is converted into a flow of electrons (electricity). The Hammond organ is the prototypical tonewheel instrument. Within its innards resides a tone generator with nearly one hundred tonewheels (Dairiki par. 3).
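
The arithmetic behind a tonewheel fits in a few lines of Python: the pitch you get is simply the number of bumps on the wheel multiplied by how many times the wheel spins each second. The gear values below are illustrative, not the Hammond’s actual ratios.

```python
# A tonewheel's pitch is (bumps on the wheel) x (revolutions per second).
def tonewheel_frequency(teeth: int, revolutions_per_second: float) -> float:
    return teeth * revolutions_per_second

# Illustrative example: a wheel with 16 teeth spinning at 27.5 revolutions
# per second produces 440 Hz -- concert A.
print(tonewheel_frequency(16, 27.5))  # 440.0
```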

Okay, now that we’ve seen the ways that musical sounds are turned into audio signals and back again, let’s inspect some of the more common electronic instruments.

Electronic Instruments

There are two categories of electronic instruments: (1) electromechanical and (2) purely electronic. Let’s inspect the difference.

Electromechanical instruments employ moving parts and nuts-and-bolts machinery to generate their electronic properties. Purely electronic instruments, on the other hand, have no moving parts and generate their electronic properties through circuit boards.

Basically, electromechanical instruments are machines and purely electronic instruments are computers. This is a little too simplified because the oldest specimens of purely electronic musical instruments weren’t computers. However, they did operate via circuit boards and had no moving parts.

Two examples of electromechanical technology are (1) the Hammond organ, which, you’ll recall from above, uses electrically motorized tonewheels to make sound, and (2) the electric guitar, which uses magnetically activated pickups to make sound. The Hammond has spinning tonewheels, and the guitar has vibrating strings.

An electric guitar is an example of an electromechanical instrument. It operates via transducer technology. The electric guitar’s transducer is a coil pickup, which converts vibrating string movements into an analog audio signal. A coil pickup functions via electromagnetic induction, which is based on a peculiar reality of the universe—if you pass a conductor (like copper) through a magnetic field, you will generate a voltage. Some microphones (dynamics) operate on this same principle. (Photo by Méline Waxx on Pexels.com)
The Hammond organ employs additive synthesis, which is a combinatoric sound technique that creates different timbres by adding various sine waves together. This electronic maneuver is accomplished by a large set of tonewheels, which are housed in the organ’s tone generator. Additive synthesis also lends its name to the common music-technology device called the synthesizer. (Photo by Tim Gouw on Pexels.com)

Now, onto the type of instrument without moving parts—the purely electronic category. These instruments can be sub-categorized into two variants: one kind that generates its electronic properties using vacuum tubes, and another that uses solid-state semiconductors. The difference is one of circuit boards vs. logic boards and is a little beyond the scope of this blog post. A good way to make sense of the two kinds is to consider the electronic instrument known as the Theremin, because it exists in both tube and solid-state varieties. The original version, patented in 1928, used vacuum-tube circuitry, and the modern version, which came out in the 1990s, uses semiconductors and logic boards. Despite this difference in construction, the two kinds sound the same.

Both versions of the Theremin are played by waving one’s hands at various distances from two metal antennas. The player does not touch either antenna—just waves in their proximity. The Theremin uses an electronic component called an oscillator, which can produce high-pitched whines and sonic fluctuations that ebb, flow, dive, and soar. The oscillator is the Theremin’s analog to the Hammond’s tonewheel—it’s what’s creating the sound. The difference is, unlike a tonewheel, an oscillator creates its sound without physically moving.

One of the Theremin’s antennas controls frequency (pitch), and the other controls amplitude (volume). The electronic signal generated by a Theremin is sent to an amplifier and routed to a loudspeaker to be broadcast as sound. So, basically, you plug a Theremin into an amp—just like an electric guitar. In my opinion, the Theremin sounds like a science-fiction-soundtrack generator.
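
Here’s a toy Python model of that control scheme—hand distance from one antenna mapped to pitch, distance from the other mapped to volume. The mapping curves are pure assumptions; a real Theremin derives its tone by heterodyning two radio-frequency oscillators, which this sketch doesn’t attempt to model.

```python
import numpy as np

# Toy Theremin: one hand distance sets pitch, the other sets volume.
# The mapping formulas below are invented for illustration only.
def theremin_tone(pitch_hand_cm, volume_hand_cm, duration=0.5, sr=48_000):
    frequency = 200 + 2000 / max(pitch_hand_cm, 1)   # closer hand -> higher pitch
    amplitude = min(volume_hand_cm / 30, 1.0)        # closer hand -> quieter
    t = np.arange(0, duration, 1 / sr)
    return amplitude * np.sin(2 * np.pi * frequency * t)

tone = theremin_tone(pitch_hand_cm=5, volume_hand_cm=25)
print(tone.shape, tone.max())
```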

Check out this video below of Clara Rockmore (1911–1998), a Russian-American thereminist who performed during the instrument’s earliest years. In this author’s view, she is the best Theremin player in human history. Just listen to how she grabs notes out of thin air—and with perfect intonation!

Now on to another purely electronic musical instrument—the synthesizer. This device generates its sound via additive synthesis; hence, the name—synthesizer. Additive synthesis results from combining sounds from two or more sound generators. You take one sound, add it to another sound, and you get a third sound—that’s additive synthesis. The Hammond organ uses the same principle when it combines sounds from two or more tonewheels. Unlike the Hammond, synthesizers use analog oscillators—like a Theremin. Oscillators are electronic components that generate pitch via undulating electrical current. Unlike tonewheels, oscillators also permit subtractive synthesis, which is the application of filters to the sound coming from the oscillator to remove harmonics and other overtones.
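
Here’s a minimal Python sketch of both ideas: additive synthesis as a sum of sine waves (a fundamental plus a few harmonics), and subtractive synthesis as a crude low-pass filter that darkens a harmonically rich tone. The amplitudes and the filter are illustrative stand-ins, not any particular synthesizer’s design.

```python
import numpy as np

sr = 48_000
t = np.arange(0, 1.0, 1 / sr)

# Additive synthesis: sum a fundamental and a few harmonics (sine waves),
# much like pulling out drawbars on a Hammond.
fundamental = 220  # Hz
tone = sum(amp * np.sin(2 * np.pi * fundamental * n * t)
           for n, amp in [(1, 1.0), (2, 0.5), (3, 0.33), (4, 0.25)])

# Subtractive synthesis (very roughly): start with a harmonically rich wave
# and filter some of its overtones away. A crude one-pole low-pass stands in
# for a synthesizer's filter section.
def one_pole_lowpass(signal, alpha=0.05):
    out = np.zeros_like(signal)
    for i in range(1, len(signal)):
        out[i] = out[i - 1] + alpha * (signal[i] - out[i - 1])
    return out

darker_tone = one_pole_lowpass(tone)
print(tone.shape, darker_tone.shape)
```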

The sound of a synthesizer is an oft-heard component of both pop and classical variants of electronic music. (Just think of the soundtrack to A Clockwork Orange to get the sound of a synthesizer playing in your head.) Both additive and subtractive synthesis are achieved via these analog oscillators. The most famous analog synthesizer is the Moog, which came out in the mid-1960s and continued developing through the late 1970s (“Synthesizer” par. 1; “Moog synthesizer” par. 1).

Analog and FM synthesizers have significantly altered the world’s musical landscape. (Photo by Gustavo Fring on Pexels.com)

In addition to analog synths like the Moog, there exists another category of synthesizer that lives primarily in the computer realm: the FM synthesizer (the FM stands for frequency modulation). Although FM synthesis can be performed with analog equipment, it is most often performed by digital equipment, because the analog version tends to be unstable. The primary difference between oscillator-based synthesis and FM synthesis is that FM synthesis uses a device called a modulator to create and alter its pitch and timbre (“Frequency modulation synthesis” par. 1).
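
Here’s the classic two-operator FM recipe in a few lines of Python: a modulator oscillator wobbles the phase of a carrier oscillator, and the modulation index sets how bright the result sounds. All the parameter values are arbitrary illustrations.

```python
import numpy as np

# Two-operator FM: a modulator oscillator modulates the phase of a carrier.
# The modulation index controls how harmonically rich (bright) the tone is.
sr = 48_000
t = np.arange(0, 1.0, 1 / sr)

carrier_freq = 440.0      # Hz, the pitch you hear
modulator_freq = 220.0    # Hz, the operator doing the modulating
index = 3.0               # modulation depth -> timbre

fm_tone = np.sin(2 * np.pi * carrier_freq * t
                 + index * np.sin(2 * np.pi * modulator_freq * t))
print(fm_tone.shape)
```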

Perhaps the most famous FM synthesizer is the Yamaha DX7, which came out in 1983. Just imagine any 1980s pop music in your mind’s ear and you’ll be hearing an FM synthesizer.

Computer-Based Devices

Since the 1970s, much of the innovation in the realm of music technology has come by way of computer-based devices. Below are some details about three prominent examples: (1) MIDI, (2) digital audio workstations, and (3) artificial intelligence.

The word MIDI is an acronym that stands for “musical instrument digital interface.” MIDI itself is a computer protocol that allows the interconnection of electronic instruments and computer interfaces. It enables you to hook your keyboard up to your computer—provided your keyboard is a MIDI controller, which simply means it has a MIDI output that allows a cable to be connected between the two. This connection allows your controller and your computer to talk to one another in the MIDI language.

Sound (audio) is not captured via the MIDI language. Instead, a MIDI controller transmits specific information about pitch, duration, sustain, intensity, etc. The point of transmitting this information is to capture the nuance of a player’s musical action. The most common parcels of MIDI data include notes, duration, velocity, and patch. Duration refers to when a note turns on and off, velocity refers to how loudly or softly it is struck, and patch refers to which sound from a digital keyboard (or some other sound-generating device) is to be used.
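
To make that concrete, here’s a small Python sketch that builds raw MIDI messages by hand. A note-on is just three bytes—status, note number, velocity—and the matching note-off (plus the time between the two) gives you the duration; a program-change message selects the patch.

```python
# Raw MIDI messages are just a few bytes:
#   note-on:  status (0x90 + channel), note number (0-127), velocity (0-127)
#   note-off: status (0x80 + channel), note number, release velocity
#   program change (patch select): status (0xC0 + channel), patch number
def note_on(note: int, velocity: int, channel: int = 0) -> bytes:
    return bytes([0x90 | channel, note & 0x7F, velocity & 0x7F])

def note_off(note: int, channel: int = 0) -> bytes:
    return bytes([0x80 | channel, note & 0x7F, 0])

def program_change(patch: int, channel: int = 0) -> bytes:
    return bytes([0xC0 | channel, patch & 0x7F])

# Middle C (note 60), struck fairly hard (velocity 100), on channel 1:
print(note_on(60, 100).hex())   # '903c64'
```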

MIDI becomes sound when played by a digital keyboard or virtual instrument, both of which rely on samples for their sound production. A sample is a short snippet of pre-recorded audio that can be activated by keyboard, drum pad, or another MIDI controller. Imagine a sound recorded by a microphone and saved on a computer that can be activated via the MIDI language. This is what’s happening between your computer and your MIDI controller.

A MIDI controller is a device that is used to send musical information to a computer-based sound generator or to a synthesizer. MIDI controllers often take the form of a piano keyboard, but drum pads, track pads, and MIDI-enabled guitars are also common. (Photo by Wendy Wei on Pexels.com)

A digital audio workstation, or DAW, is a computer-based music-production system. It usually entails the combination of hardware, software, and electronic instruments. Although DAWs of the past were housed in electronic instruments—like digital keyboards—nowadays the term DAW is almost always used to describe software-based platforms like ProTools, Cubase, and Studio One.

Most modern-music productions are generated using digital audio workstations and some combination of audio, MIDI, and samples. Indeed, DAWs facilitate audio recording, sequencing, scorewriting, mixing, mastering, and just about any music-production activity one can imagine. It’s safe to say that the digital audio workstation represents the heart and soul of modern music technology.

Nearly all of today’s music is produced using a digital audio workstation (DAW). This type of software can work with audio and MIDI and can be used to multitrack record, sequence, and scorewrite. (Photo by Pixabay on Pexels.com)

Digital audio workstations and MIDI have drastically changed our musical universe. However, artificial intelligence seems poised to bring about an even greater change. The first inklings of this are already seeping into our lives, particularly via music-streaming services like Spotify, Google Play, and Pandora.

When you choose a song on Spotify (or one of the other platforms), you’ll notice that the program begins feeding you music in the same vein. If you choose Ray Charles, say, you may be invited to listen to Ike Turner next. It’s almost like Spotify has a brain. A computer system that simulates intelligence—like the ability to predict a person’s music-listening preferences—is known as artificial intelligence (AI).
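
Here’s a toy Python sketch of the idea behind such recommendations: describe each artist with a handful of numeric features and suggest whoever’s feature vector points in the most similar direction. The artists’ feature numbers are invented purely for illustration—real services use vastly richer data and far more sophisticated models.

```python
import numpy as np

# A toy version of "Spotify has a brain": each artist is described by a few
# made-up numeric features, and we recommend the artist whose feature vector
# points in the most similar direction (cosine similarity).
artists = {
    #                  [blues, gospel, rock and roll, classical]
    "Ray Charles":     np.array([0.9, 0.8, 0.6, 0.1]),
    "Ike Turner":      np.array([0.8, 0.3, 0.9, 0.1]),
    "Clara Rockmore":  np.array([0.1, 0.1, 0.0, 0.9]),
}

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

listened_to = "Ray Charles"
scores = {name: cosine_similarity(artists[listened_to], vec)
          for name, vec in artists.items() if name != listened_to}
print(max(scores, key=scores.get))   # 'Ike Turner'
```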

Artificial intelligence lies beneath much of the modern internet, especially in the realms of business and advertising. When you surf the web, it may feel like your search engine is watching you—that’s because it is. AI is keeping tabs on us and attempting to guide our spending decisions.

A computer system that can simulate intelligent behaviors has many potential uses in the realm of music. For example, an artificially intelligent program can be used to compose and perform. Indeed, innovative companies are emerging that are devoted to AI-driven music composition. A short list includes IBM Watson, Google Magenta, and Amper Music.

The company called Aiva (Artificial Intelligence Virtual Artist) is also researching AI-driven composition. Check out the video below to see what Aiva’s artificially intelligent rock band sounds like:

Activities

The field of music technology is permeated by activities used to produce, present, and broadcast music. If you are intrigued by this field, then you might want to select one of these activities for your life’s focus. If you devote yourself fully, then you may be on your way to a happy and contented life in the arts.

Sequencing

Sequencing is the art and craft of arranging prerecorded samples along a timeline for playback. It is an extremely common form of music technology. If you are making beats with a DAW, the chances are rather good that you are sequencing.

Handwritten musical notation can be thought of as a sort of sequence: the dots (note heads) on a page are played in order by a performer. With electronic sequencing, sounds are organized along a timeline and played back by a machine or a computer. Both techniques are linear and based on time. The difference is that—instead of note heads and bar lines—sequencers use graph-paper-like notation (see the picture below).

The first sequencers were primitive devices that played rigid patterns of notes or beats using a grid of 16 buttons (steps), with each step representing 1/16th of a musical measure. Groups of measures—each with its own pattern—could be compiled to form longer compositions.
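
Here’s a minimal Python sketch of that kind of 16-step grid: each row is an instrument, each column is a sixteenth note, and a 1 means “hit.” The pattern and tempo are arbitrary examples.

```python
# A minimal 16-step drum sequencer: each row is an instrument, each column is
# one sixteenth-note of a 4/4 bar. 1 means "hit", 0 means "rest".
pattern = {
    "kick":  [1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0],
    "snare": [0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0],
    "hat":   [1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0],
}

bpm = 120
seconds_per_step = 60 / bpm / 4   # a sixteenth note at 120 BPM = 0.125 s

for step in range(16):
    hits = [name for name, row in pattern.items() if row[step]]
    print(f"step {step + 1:>2} ({step * seconds_per_step:.3f}s): {', '.join(hits) or '-'}")
```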

Nowadays, sequencing is usually performed with a digital audio workstation (DAW), like Reason, ProTools, or Cubase. This is what a typical sequence looks like:

Pictured here is a musical sequence that’s been programmed into a digital audio workstation. The rectangular icons represent pitch content, and their lengths represent duration. The relative height and position of each rectangle can be compared to the piano keyboard at left to help producers recognize which note they are dealing with. Adjustments for volume, modulation, aftertouch, etc. can also be applied to a sequence like this one. (Jump, Brian. “Bass Line Sequence.” 2020, JPEG.)

A specialized type of sequencing is scorewriting, which is the use of a computer program to produce professional-grade musical scores. Scorewriters like Finale and Sibelius—two of the most common brands of notation software—are specialized sequencers that employ traditional musical notation in their interfaces. If you are interested in composing or arranging, then it’d be wise to learn one of these programs.

Multitrack Recording

Multitrack recording is a method of capturing sound that allows multiple performances to be combined to make one cohesive sound. Multitrack recording, or multitracking for short, entails recording different audio channels onto separate tracks.

Here’s how it’s done:

The first step is to use microphones (and other transducers) to convert musical sound vibrations into electricity. Next, the electronic audio is captured using an analog or digital multitrack recorder. Last, the captured music is finalized as a single audio file called a master track.

Mixing is the process of balancing the individual audio tracks stored on a multitrack recorder; mastering is any further adjustment made to the finished master track.
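
Here’s a bare-bones Python sketch of those two steps: each track gets its own fader level (mixing), the balanced tracks are summed into a two-channel master, and a final overall gain stands in—very loosely—for mastering. The tracks are plain sine waves, purely for illustration.

```python
import numpy as np

# Bare-bones mixing and "mastering": per-track fader levels, a summed stereo
# master, and one final overall gain adjustment. All values are illustrative.
sr = 48_000
t = np.arange(0, 1.0, 1 / sr)

tracks = {
    "bass":   np.sin(2 * np.pi * 55 * t),
    "guitar": np.sin(2 * np.pi * 220 * t),
    "vocal":  np.sin(2 * np.pi * 440 * t),
}
faders = {"bass": 0.8, "guitar": 0.5, "vocal": 0.9}    # mix decisions

mono_mix = sum(faders[name] * audio for name, audio in tracks.items())
master = np.stack([mono_mix, mono_mix])                # two-channel (stereo) master

master *= 0.4                                          # crude stand-in for mastering gain
print(master.shape)                                    # (2, 48000)
```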

The master is usually a computer file containing audio in stereo. This means that most master tracks are two-track audio files. They are meant for commercial consumption. Spotify, Soundcloud, Apple Music, Amazon Music, and every other song aggregator on Earth host stereo audio files that have been produced through mixing and mastering.

There is much overlap between multitrack recording and sequencing; the two activities are routinely combined when making music. Most modern productions are combinations of microphone-captured sound, sampled sound, and synthesized sound.

Many of the techniques used in multitrack recording are also used to produce live sound, which I’ll touch on next.

Live-Sound Production

Live-sound production is the use of professional audio devices like microphones and loudspeakers to broadcast amplified music through a public address system (PA system).

This area of music technology is seeing a resurgence today, owing to changes in the music industry—particularly changes to the value of recorded music. Since the advent of MP3s and file sharing, the music industry has been irrevocably changed; indeed, it’s safe to say the old record-selling business model has been destroyed. Music is essentially free today, and there is no longer a viable route for mid-level bands to succeed by making records. Consequently, the only way that most musicians can make money today is to go on tour and play live shows. This means that there are more opportunities for live-sound engineers than ever.

The basic craft entails using the above-mentioned devices and activities to give the performing musicians the sound they need on stage and to give the audience the sound they paid for. The process involves transducers, synthesizers, digital audio workstations, and all the rest.

Just as there are many music-technology devices I left out above, there are many music-technology activities that I must leave out here. Let’s move on to the final area of the definition: music technology concepts.

Concepts

For our purposes here, we can think of a concept as an idea that initiates an activity or spurs the invention of a device. For example, the concept of amplification fostered the invention of electronic instruments and the activity of radio broadcasting.

Fascinating concepts abound in the realm of music technology. Common ones include sound recording, public address, sequencing, effects, sound-wave editing, scorewriting, equalization, limiting, sampling, overdubbing, autotune, quantization, amplification, ring modulation, and so on. First came the concept, then came the activity or device. An invention needs an idea, right? Those ideas are what I’m referring to when I refer to music-technology concepts.

Consider the concept of overdubbing, which is the recording of one sound onto another. It’s sometimes called dubbing for short. Overdubbing is commonplace in today’s music industry. Indeed, since the 1950s, producers and engineers have been using this technique to create mainstream pop music. Sound engineers often lay down a rhythm track—bass, drums, and so on—before overdubbing vocals and other lead instruments. This is what accounts for the near-perfect musical presentations offered by modern recordings.

Overdubbing is easily understood today, even by the uninitiated. You could probably explain overdubbing to your grandma if you had to. But two hundred years ago, one would’ve been considered a genius of the highest order for thinking of, and articulating, the concept of overdubbing. Picture your favorite historical figure from two hundred years ago; then picture explaining to them the concepts of electricity, loudspeakers, audio recording, format evolution, and—finally—overdubbing. Now imagine the technological chain of consequences that would need to be explained to you if you suddenly found yourself two hundred years in the future. What analogs to overdubbing might there be?

In modern times, the sheer amount of technology that pervades our lives is astonishing. Given that humanity seems set to continue advancing technologically, there will surely be music technologies that we cannot presently imagine. For example, there are many signs that artificial intelligence will significantly alter the course of music technology. Who knows what AI will do to our musical minds in the future? And who’s to say what composition, performance, and broadcast might be like after twenty years of AI’s influence?

Conceptually, here’s what’s happening in that video: An audio engineer has used the position of galaxies to inform his or her decisions about pitch, duration, and timbre.

What previously unrealized music-technology concept might you imagine?

Works Cited

Berube, Margery S, editor. The American Heritage Guide to Contemporary Usage and Style. Boston, Houghton Mifflin Co, 2005.

Dairiki, Geoffrey T. “Types of Tone Generators.” HammondWiki – Types of Tone Generators, http://www.dairiki.org/HammondWiki/TypesOfToneGenerators.

“Frequency modulation synthesis.” The Free Encyclopedia. Wikimedia Foundation, Inc. 1 August 2020.

Manning, Peter. Electronic and Computer Music. New York: Oxford U Press, 2013.

“Moog synthesizer.” The Free Encyclopedia. Wikimedia Foundation, Inc. 1 August 2020.

“Music Technology.” The Free Encyclopedia. Wikimedia Foundation, Inc. 19 June 2016.

“Roland TR-808.” The Free Encyclopedia. Wikimedia Foundation, Inc. 28 May 2017.

“Stradivarius.” The Free Encyclopedia. Wikimedia Foundation, Inc. 28 May 2017.


Notes

[1] The human species, Homo sapiens, has been around in its anatomically modern state for about 200,000 years, according to anthropologists. The oldest known skeletal specimen is roughly 196,000 years old. About 50,000 years ago, humans began worshiping gods, burying their dead, and—presumably—making music. Anthropologists call such people behaviorally modern humans.
