TCRM 19 outlined various effect types and parameters, including delay, reverb, chorus, flange, phase, modulation, emulation, pitch-shifting, and more. Now, let's discuss how effects are created, interfaced, and controlled, with plenty of audio examples of how they are commonly (and sometimes not so commonly) used.
Before getting into the usage of specific effects, it is a good idea to look at the technology used in their creation and implementation. The current state of effects processing is full of dichotomies: analog vs. digital, hardware vs. software, host-based vs. dedicated DSP chips, real-time vs. non-real-time, and destructive vs. non-destructive. In addition, effects can be added either by being inserted on a channel directly or accessed via auxiliary sends and returns.
Analog versus digital
There are fantastic effects available in both the analog and digital realms. Each has its own particular sound. Digital, however, does tend to give a lot more bang for the buck and generally offers a greater range of options.
With good cabling, connections, and converters, systems can be designed around both technologies without sacrificing too much in the way of audio quality. This is especially true in 24-bit audio environments. Higher sample rates, at 88.2 kHz or more, also help.
Hardware versus software
Put simply: hardware effects units offer high and consistent sound quality, with little or no latency, and great reliability, but each device is capable of only a limited number of simultaneous effects paths (1 or 2 being most common, 4 or 8 on pricier boxes). In addition, they also have a fixed set of algorithms and can be quite costly.
Software plug-ins are infinitely expandable and upgradable, can be run many times simultaneously on the same DAW, and offer a lot of routing flexibility for the money, but they are also subject to CPU power limitations, higher latencies, and can crash at awkward times.
Host-based (native) versus dedicated DSP chips
DAW plug-ins require a lot of blazing-fast number crunching to create audio effects in real-time. This can be done in two ways: host-based (native), and dedicated DSP (Digital Signal Processing) chips. Native processing uses the power of the computer's CPU to create effects. The number of available effects is determined by the computational prowess of the host computer. For the latest batch of speedy computers, that can be a fairly significant amount of signal processing muscle. Still, it can only stretch so far. In addition, this power must be shared with other computer functions, such as running the operating system, background applications, updating the display, and accepting input from the keyboard, mouse, etc.
To allow greater and more consistent DSP power, additional processors can be added to the system. This means the computer's CPU can dedicate itself to general computer tasks and leave specialized audio processing to the DSP chips.
Systems that use these chips, which are usually added on PCI cards or in FireWire boxes, generally have greater track counts and effects processing power, along with lower latency (more on this below).
Non-destructive vs. destructive vs. semi-destructive
Most DAWs allow audio editing and effects processing to be accomplished in one of two basic ways: destructively or nondestructively. A destructive process is one in which the soundfile stored on the disk is rewritten to include the desired function. Generally, destructive processing does not happen in real-time during playback, as the edit or effect must be processed and written to disk before being accessible to the mix. Afterwards, however, no special processing is necessary as the DAW simply reads back the new audio file. Therefore, destructive edits place less load on the CPU or DSP chips.
Destructive editing should be considered permanent regardless of the possible numerous levels of "undo" the DAW may offer. It is true that "undo" can often fix dire mistakes or allow you to reconsider a mixing decision. It is also certain, however, that habitual reliance on this function will eventually fail you… usually at the most inopportune time! Whenever your "undos" work the way you want them to, consider it a miraculous blessing, not a foregone conclusion.
To make destructive editing a bit less risky, some DAWs will create a new file reflecting the desired changes, while leaving the original file unaltered. I will call this “semi-destructive” to make a distinction between this and the riskier file-overwriting method. At the same time, it should be noted that any edit which creates a new file is still basically destructive as there is a change made to the file structure. In fact, the greatest drawback to semi-destructive editing is the creation of multiple large audio files for each track. This eats up precious hard-drive space very quickly.
Nondestructive processes create descriptions of how to modify the audio and then perform these tasks during playback to make the edit/effect audible. These functions do not change the actual audio files on disk. They are considered real-time because they are accomplished as the tune is being played or recorded. Nondestructive editing is the most flexible of the three processing methods because it allows multiple precise changes to be made (and unmade) quickly and without taking up any significant hard-drive space. The undo command can still be handy, but is not necessary as all non-destructive edits can be undone manually.
The price of power
As always, you don't get something so good for nothing: real-time effects are highly processor intensive. There's a limit to how many your CPU or DSP chips can handle. This ceiling can be raised in any of three ways:
* Additional processing power - upgrade your CPU or add more DSP chips.
* Streamline your operating system – remove unnecessary drivers, programs, peripherals, and external connections; simplify graphics; don’t run background applications.
* Increase the RAM buffer - many DAWs have assignable buffer sizes. This is the amount of RAM dedicated to temporary data storage to facilitate audio processing. While increasing the buffer may increase track count and the number of plug-ins available simultaneously in a session, it will also increase the time it takes for processing to occur. This delay is yet another example of our old nemesis, latency.
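To put a number on that buffer delay, here's the arithmetic as a quick Python sketch. The function name is my own, and a 44.1 kHz session is assumed; the relationship itself (buffer length divided by sample rate) holds for any system:

```python
def buffer_latency_ms(buffer_samples, sample_rate_hz):
    """Time (in milliseconds) one buffer of audio represents."""
    return 1000.0 * buffer_samples / sample_rate_hz

# A 256-sample buffer at 44.1 kHz adds roughly 5.8 ms per pass:
print(round(buffer_latency_ms(256, 44100), 1))    # 5.8

# Quadrupling the buffer quadruples the wait:
print(round(buffer_latency_ms(1024, 44100), 1))   # 23.2
```

This is why lowering the buffer is the main knob for reducing latency while tracking, at the cost of processing headroom.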
Latency is the time difference between input and output in a digital audio system. It occurs in software, converters and hard drives. It can cause phase problems, unwanted filtering, or even obvious musical timing discrepancies (as if many musicians didn’t already have enough challenges in this area). In non-destructive editing, each effect used creates additional latency. This is due to the fact that each requires significant number-crunching. Since multiple effects on a single track are usually accomplished sequentially, the delay time is cumulative.
The various DAWs on the market these days take different approaches to addressing latency. Some delay all audio to equal the highest channel latency. Some slide audio regions back to match a track's delay. Others make an overall delay user-definable by adjusting the buffer size. One program I tested a couple years back delayed all audio by a set 50ms, regardless of effects or signal path. An overdub situation relying on the internal mixer would therefore entail a delay of around a 32nd note at 150bpm, which is certainly enough to destroy a musician's sense of time and make recording this way very difficult. To offset this, the recorded signal must be monitored before the DAW software. This means that internal effects cannot be added to real-time overdubs during recording.
Each of these solutions presents its own unique problems. Read the manual and test your system to determine what is really going on regarding latency issues. While the effects of latency can be minimized and addressed in various fashions, latency itself can only be reduced by reducing the buffer and/or increasing processing speeds.
Note: how quickly and efficiently a computer's CPU can perform these audio DSP tasks is not merely a matter of the published processor speed. Real-world performance actually depends on a large number of interrelating and complex issues. The most fundamental of these are: the number of processors (or cores), use of multithreading, bit-depth, caching, amount of RAM, bus speed, software application design, operating system, drivers, and exact chipset used. In addition, actual track counts and instances of effects plug-ins available are also influenced by the audio sample rate and bit depth, the number of channels, and the complexity of the actual effects algorithms.
Due to the development of multiple competing DAWs and operating systems, a number of different plug-in formats are currently in use. Most of these work with many more programs than those from the companies that originally developed the particular format, but take nothing for granted – before ordering additional third-party effects for your system, be sure they are in a format your DAW can read.
Some of the most popular formats (and their origins) are MAS (MOTU), TDM and RTAS (Avid/Digidesign), VST (Steinberg), Audio Units (Apple), and DirectX (Microsoft). Of these, VST, TDM, and RTAS have emerged as the three most prevalent plug-in formats, with VST having the greatest cross-platform functionality.
In addition, there are now a few ways to route audio between multiple active audio applications. Of these, ReWire (developed by Propellerhead Software, makers of Reason, and supported by many different DAWs) offers the most power and flexibility. In real-time, a track can be sent from a ReWire "master" program to a second one, designated the "client." The client adds an effect and the audio is sent back to the original master program. This can be used to allow Pro Tools to incorporate a DirectX effect, for example.
Generally, effects are added to DAW tracks in one of three ways: as destructive edits, as channel inserts, or by way of an aux send/return. Destructive edits reduce the required processing power, take less RAM, and reduce latency. On the other hand, they can be somewhat risky (fully destructive) or be disk-hogs (semi-destructive). Destructive edits are also quite time consuming to use (both fully and semi).
When an effect is inserted directly on a channel strip, it is often accomplished as a real-time (non-destructive) process, depending on the DAW and preference settings. In this case, it is much easier to make adjustments to effects parameters. Again, the drawback is that real-time effects are processor-intensive and latency-inducing. It should be noted that effects inserted directly on a specific mix channel only affect the audio on that one channel. If you want reverb on all of the drums, it must be individually inserted on each channel, taking up massive amounts of processing power. There must be a better way… a way to share resources….
Enter the aux send and return (sometimes called fx channel). Since auxiliary sends can be used to bus audio from multiple channels simultaneously, they are built for sharing. For example, by assigning the input of an fx channel (aux return) to aux 1, a single reverb inserted there can be addressed by all drum tracks at the same time by bringing up the send 1 levels on drum mix channels. This greatly reduces the processing load required for the real-time processing of so many channels. The trade-off here is that each part of the kit will be subject to the exact same reverb effect. Only the amount of each can be assigned separately with the channel aux send controls. If you require distinct effects, you’re back to channel inserts or destructive edits.
Reverb is an essential part of mixing music. It adds depth and acoustic context. Most studio recordings are made using close-miked techniques to capture a clear sound and minimize external noises and bleed, as well as to increase signal-to-noise ratios. This, along with the use of electronically generated or sampled sounds, tends to remove the sense of a performer’s acoustic context. Though mixes without reverb can be very clear, they tend to come across as two-dimensional and unnatural. In fact, human perception of the particular timbre of some instruments requires an acoustic context. Many sounds, especially those with very short envelopes, become indecipherable when recorded in an anechoic chamber.
Reverb plug-ins (or other hardware units) may include multiple reverb types. Some, such as plates and springs, are meant to emulate older, artificial reverb technologies. Others are intended to create more realistic emulations of physical acoustic spaces. Adding a plate reverb to drums or vocals can lend a classic sound to the mix. Spring reverbs are common on classic tube amplifiers for guitars and bass as well as on some older analog synthesizers. They are great on these instruments, but don’t be afraid to try them in other contexts, such as percussion or vocals.
With reverb, the most common mistakes are to use either too little or too much in the mix. As stated above, too little can make a recording seem synthetic, and without context. The mix can take on an almost disturbing sense of closeness (after all, your head would need to be mere inches away from all of the instruments to actually hear them that way). If reverb is used too heavily, however, clarity is lost as well as effective stereo (or surround) imaging. The mix will sound like mush; a sense of bad mic technique in a crummy live venue is unavoidable. The use of reverb is a balancing act.
Another cool effect, especially on snare, electric guitars, and vocals, is the gated reverb. This technique uses a gate on the output of the reverb that is set to silence the sound when the reverb tail falls below a specified level. By adjusting the threshold and release time of the gate, different distinctive truncations of the reverb decay can be created (see audio files 18 through 20 from TCRM 18 for some great examples of this). Another cool bonus to this technique is that the sound of the individual instrument can take on that “larger than life” quality, but clarity in the mix is still maintained. If your reverb does not include a gated setting (though many do) simply insert a separate gate manually after the reverb.
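If you're curious what the gate itself is doing to the reverb tail, here's a bare-bones Python sketch. This is a deliberately simplified stand-in: a real gate adds attack, hold, and release smoothing, all omitted here for brevity:

```python
def hard_gate(samples, threshold):
    """Zero out any sample whose magnitude falls below the threshold.
    (Real gates smooth the transition with attack/hold/release times.)"""
    return [s if abs(s) >= threshold else 0.0 for s in samples]

tail = [0.9, 0.5, 0.3, 0.15, 0.07, 0.03]   # a decaying reverb tail
print(hard_gate(tail, 0.1))                 # [0.9, 0.5, 0.3, 0.15, 0.0, 0.0]
```

Raising the threshold or shortening the release truncates the decay earlier, which is exactly where the different "gated" flavors come from.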
While we're talking about cool reverb tricks, another one to try is the reverse reverb (sometimes called preverb). This is where the reverb of a sound actually precedes the sound itself. The effect is often mysterious in quality and adds a quick crescendo to the instrument's attack. A few manufacturers have created some interesting look-ahead type reverbs to attempt to do this automatically, but the old-school method never fails…. First reverse the track in question, then insert a reverb. Create a new audio file with the reverb printed to it (either by bouncing, soloed mixdown, or by using a destructive reverb). Reverse the new file. The original audio is now back in the right direction, but the reverb tail magically comes first. Freaky.
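The old-school recipe above can be sketched in a few lines of Python. The "reverb" here is just a crude chain of decaying echoes standing in for the real thing, but the reverse-process-reverse trick works the same way:

```python
def toy_reverb(x, delay=3, feedback=0.5, repeats=4):
    """A crude stand-in for a reverb: a few decaying echoes after each sample."""
    out = x + [0.0] * (delay * repeats)
    for i, s in enumerate(x):
        gain = 1.0
        for r in range(1, repeats + 1):
            gain *= feedback
            out[i + r * delay] += s * gain
    return out

def reverse_reverb(x):
    """Old-school preverb: reverse, add reverb, reverse again."""
    backwards = x[::-1]
    wet = toy_reverb(backwards)
    return wet[::-1]

hit = [0.0] * 8 + [1.0]          # a single late transient
print(reverse_reverb(hit))        # the echoes now build up *before* the hit
```

The printed output shows the decaying echoes flipped into a rising crescendo that lands on the original transient, which is exactly the "mysterious" swell you hear.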
A single-tap delay with a long delay time and moderate feedback level creates a classic echo effect. By adding further taps to the delay (basically further discrete delay lines) interesting rhythmic patterns occur. Matching the delay times to a tune’s tempo and metrical structure lets a delay “ride the beat”. The tried-and-true “Ping-Pong” ensues if the various delays are panned to different stereo locations (usually alternating left/right).
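Matching delay times to a tune's tempo is simple arithmetic: a whole note lasts 4 × 60000 / bpm milliseconds, and every other value is a fraction of that. Here's a small Python sketch (the function name is my own):

```python
def note_delay_ms(bpm, fraction_of_whole):
    """Delay time in ms for a note value, expressed as a fraction of a whole note
    (quarter = 1/4, eighth = 1/8, dotted eighth = 3/16, etc.), assuming 4/4."""
    whole_note_ms = 4 * 60000.0 / bpm
    return whole_note_ms * fraction_of_whole

# At 120 bpm, a dotted-eighth delay (a classic for guitar echoes):
print(note_delay_ms(120, 3/16))   # 375.0

# And the 32nd note at 150 bpm mentioned earlier:
print(note_delay_ms(150, 1/32))   # 50.0
```

Setting multiple taps to different note values from the same tempo is what lets a multi-tap delay "ride the beat."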
Delays can be used as the foundation of a number of other effects. When a signal is split, hard-panned left/right, and then delayed slightly on one side (2-15ms), a type of stereo exciter will be created. When the delay is just a little longer (20-40ms) a thicker, double-tracked sound occurs (the beginnings of a chorus effect). If the two sides are panned to the same stereo location, the audio will be subjected to an obvious comb-filter function. Then, if the delay time is slowly swept up and down… voila, a flanger.
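The comb-filter behavior mentioned above is easy to demonstrate numerically. In this Python sketch, a short delay is mixed back with the dry signal; a sine wave whose half-period exactly equals the delay time lands in a notch and cancels itself almost perfectly:

```python
import math

def comb(x, delay_samples):
    """Mix a signal with a delayed copy of itself: a static comb filter."""
    return [s + (x[i - delay_samples] if i >= delay_samples else 0.0)
            for i, s in enumerate(x)]

sr, d = 48000, 24                       # 24-sample delay -> first notch at 1 kHz
notch_freq = sr / (2 * d)
x = [math.sin(2 * math.pi * notch_freq * n / sr) for n in range(480)]
y = comb(x, d)
print(max(abs(s) for s in y[d:]))       # essentially zero once the delay line fills
```

Sweep `delay_samples` slowly up and down and the notches slide through the spectrum: that's the flanger.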
The venerable reverb is really a collection of many delays. A sense of acoustic space can also be created by using a few delays. Though tricky to create, due to relative and finicky parameter settings, a simple delay-based space can offer greater presence in the mix. It also limits the interference and muddiness caused by using too much reverb.
The purest of the commonly known modulation effects are tremolo and vibrato. A tremolo is a cyclical rise and fall in the volume of a sound. Generally, a low-frequency sine or triangle wave is used to control the level of an amplifier. Tremolo is a common addition to guitar amps and can be heard (glaringly) on tracks by R.E.M. and Green Day.
Vibrato uses a similar process, but cyclically modifies the frequency (tuning/pitch) of a sound. Singers do this acoustically by pulsing the muscles at the back of their throat. Guitarists do it by wobbling their fingers on the strings. It can also be done by varying the playback speed of a tape, changing the sample rate of a digital source, or sweeping the frequency of a dedicated pitch-shifting effect processor (see below). When a small amount is used, subtle tonal nuance can be added to snare, drum overheads or guitar tracks. Similarly, subtle (and super low frequency) tremolo can be used to automatically add interest back to a track that is overly compressed, or otherwise too dynamically even.
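Tremolo is simple enough to sketch directly: multiply the signal by a slow sine-shaped gain. Here's a minimal Python version (parameter names are mine; real units add wave-shape choices and smoother depth controls):

```python
import math

def tremolo(x, sr, rate_hz=5.0, depth=0.5):
    """Amplitude modulation by a low-frequency sine: gain swings
    between 1.0 and (1.0 - depth)."""
    return [s * (1.0 - depth * 0.5 * (1.0 + math.sin(2 * math.pi * rate_hz * n / sr)))
            for n, s in enumerate(x)]

sr = 1000
x = [1.0] * sr                               # one second of a constant test signal
y = tremolo(x, sr, rate_hz=5.0, depth=0.5)
print(round(min(y), 2), round(max(y), 2))    # 0.5 1.0
```

Vibrato replaces the gain modulation with a cyclic variation in playback pitch, but the LFO idea is identical.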
Since tremolo and vibrato effects are both really low-frequency modulation synthesis, much crazier sounds can be achieved by bringing the modulation rate (speed) up above the 20-40Hz range. Interesting new tones (called sidebands) are created when the modulation frequency is brought into the audible spectrum. To hear an example of this, check out the excerpt from “Passage to…” off of my Sonic Ninjutsu CD at http://www.cdbaby.com/cd/jshirley. First a saxophone, and then a clarinet are haunted by strange secondary lines courtesy of modulation synthesis.
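For simple sinusoidal amplitude modulation, the new sideband tones sit at the sum and difference of the carrier and modulator frequencies (frequency modulation, by contrast, generates a whole series of sidebands). A one-liner makes the arithmetic concrete:

```python
def am_sidebands(carrier_hz, mod_hz):
    """Sideband frequencies produced by sinusoidal amplitude modulation."""
    return (carrier_hz - mod_hz, carrier_hz + mod_hz)

# Modulating a 440 Hz tone at an audible 100 Hz puts energy at 340 and 540 Hz:
print(am_sidebands(440.0, 100.0))   # (340.0, 540.0)
```

Note that at a 5 Hz modulation rate the "sidebands" sit only 5 Hz to either side of the carrier, which the ear hears as tremolo rather than as separate tones; the strange new timbres appear only once the rate climbs into the audible range.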
Chorus effects (which can utilize a more complicated form of modulation) are used to thicken vocals and/or gloss over a singer's pitch problems (when not too horrid). Also use them to make any instrument seem as though it is layered or that there were several performers playing the same line. The electric bass can sometimes benefit from a chorus effect, whereas layering would create a big, sloppy disaster.
A simpler, but somewhat related, effect, the flanger, is a popular electric guitar effect (beloved by the Smashing Pumpkins). Flangers are not only cool guitar effects, however, but have been used on snare, drum overheads, or the entire drum kit (wow… early '80s flashback!). Subtler use can also be interesting on vocals.
Another weird example from my CD, this time of pitch-shifting, comes from the track "Bauble". Here, all samples are of a cracked and out-of-tune autoharp (even the low drones and bass stabs).
Pitch shifters can be used to make voices sound low and "demonic" or high and like the chipmunks. They are even better suited, however, to creating interesting harmonies and/or fixing intonation. On DAWs, pitch-shifting plug-ins can be automated and intervals changed so that all kinds of harmonies can be generated, and not just in parallel motion. Similarly, many can sense the intonation (tuning) of an incoming signal and automatically correct it by shifting the signal the proper amount to match a reference. These days such effects are absolutely essential; many artists have come to rely on them all too heavily.
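Under the hood, equal-tempered pitch shifts are just frequency ratios: every semitone multiplies the frequency by the twelfth root of two, and fine tuning is measured in cents (hundredths of a semitone). A quick Python sketch:

```python
def semitones_to_ratio(semitones):
    """Frequency (or resampling-speed) ratio for an equal-tempered shift."""
    return 2.0 ** (semitones / 12.0)

def cents_to_ratio(cents):
    """Same idea for fine tuning: 100 cents = 1 semitone."""
    return 2.0 ** (cents / 1200.0)

print(round(semitones_to_ratio(12), 3))   # 2.0  (an octave doubles the frequency)
print(round(semitones_to_ratio(7), 4))    # 1.4983  (a perfect fifth up)
```

An intonation corrector essentially measures how many cents the incoming note is off from the reference scale and applies the inverse ratio.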
As noted in TCRM 19, intonation effects are now also commonly used to create robotic or “computerized” vocal effects, especially in hip-hop and pop music.
All kinds of distortions (including tube, tape and amp emulations) are now available on DAWs. Some are much more gain-dependent than others. Low levels may exhibit very little distortion, while high ones may become extremely distorted. In situations where this is not wanted, the only choices are to try another type of distortion or to compress the signal before adding the effect. For all distortions, EQ before and after the effect is often necessary to control the timbre and amount of the effect.
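A tanh waveshaper is one common way to model this gain-dependence (this is an illustrative textbook model, not any particular plug-in's algorithm): quiet signals pass through almost linearly, while loud ones get squashed and pick up harmonics.

```python
import math

def soft_clip(s, drive=4.0):
    """tanh waveshaping: roughly linear for quiet samples, flattened for loud ones.
    Dividing by drive keeps unity gain for small signals."""
    return math.tanh(drive * s) / drive

print(round(soft_clip(0.05), 3))   # ~0.049: a quiet sample passes nearly unchanged
print(round(soft_clip(0.9), 3))    # ~0.25: a loud sample is heavily flattened
```

Raising `drive` makes the knee harsher, which is why compressing the signal before the effect (evening out its level) also evens out the amount of distortion.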
While guitars and bass are an obvious place for distortion effects, especially when a DI is used, other instruments can benefit as well. A small amount of distortion on vocals can help them leap out of a mix: add more, and you’re entering Industrial or Death-Metal territory. Cool snare sounds can also be sculpted with these effects. Kick drums work as well, though this is a little more fussy because of the way the various different distortion types treat low-frequency content; some roll it off while others accentuate the low-mids.
Effects are all about interest and contrast. Each lends a distinctive color and sonic interest to draw the listener’s ear and aural attention. They are a truly necessary part of popular music forms, but their use must be both balanced and clear. Using the same effects on each instrument will sound excessive and unimaginative. Clarity between instruments will likely be lost. Mixes without any effects at all can be very bland. If an entire album is mixed sans effects… it will be a snoozer for sure. So please don’t forget to experiment with effects and be creative, but keep a close eye (ear) on their over-use (it’s easy to overdo it).
Enjoy the included soundfiles, demonstrating various uses of effects. Stay tuned for next time, when TCRM 21 will delve into methods of recording, treating and mixing vocals.
John Shirley is a recording engineer, composer, programmer and producer. He’s also a Professor in the Sound Recording Technology program at the University of Massachusetts Lowell and chairman of their music department. You can check out some of his more extreme uses of effects processing on his Sonic Ninjutsu CD at http://www.cdbaby.com/cd/jshirley.
Supplemental Media Examples
The following excerpts demonstrate various effects by way of a jazz combo. First, the basic mix: TCRM20_1.wav
Now, let’s hear what happens when we add a bit of distortion to the trumpet. TCRM20_2.wav
How about some flange on the trumpet? TCRM20_3.wav
A phaser on the trumpet: TCRM20_4.wav
Now, let's try a phaser on just the drum kit: TCRM20_5.wav
Different types of reverb make for various timbres and spatial feels. Here a hall is used on the trumpet: TCRM20_6.wav
Now a spring reverb on the trumpet: TCRM20_7.wav
How about trying the reversed reverb trick: TCRM20_8.wav
Now, let’s play around with effects on the bass. First, an auto-wah: TCRM20_9.wav
Next, a flanger is used on the bass: TCRM20_10.wav
Now, let’s try a few different distortion types. First, an amp simulator: TCRM20_11.wav
Now, a more aggressive distortion: TCRM20_12.wav
Finally, a rectifier: TCRM20_13.wav
Moving on to a different tune, one that also accompanied previous articles, let's try playing with the guitar. Here's a tremolo effect: TCRM20_14.wav
Now a demonstration of a vibrato on the guitar: TCRM20_15.wav
A slower vibrato gives a different feel: TCRM20_16.wav
Now that slower vibrato is mixed back with a bit of the original, dry guitar track: TCRM20_17.wav
Finally, the guitar is subjected to a vibrato effect with a very fast modulation (56Hz). This is now really FM synthesis: TCRM20_18.wav
The drum kit is a fun place to try various reverbs and delay effects. First, let’s hear the kit dry: TCRM20_19.wav
Now with a medium hall: TCRM20_20.wav
If that same hall is programmed without pre-delay: TCRM20_21.wav
In the 70's and 80's many people experimented with flange and phase effects on the kit. Here's a phaser: TCRM20_22.wav
Sculpting the snare independently of the kit is quite common and can yield great results. Here’s the snare dry: TCRM20_23.wav
Now with some spring reverb: TCRM20_24.wav
The snare with a plate reverb: TCRM20_25.wav
Distortion on the snare: TCRM20_26.wav
Now let’s try vocals. First dry: TCRM20_27.wav
A dedicated chorus effect on the vocals: TCRM20_29.wav
And let’s hear how the reverse reverb trick sounds here...: TCRM20_30.wav
Tuning effects are a major part of our audio vocabulary now. Here’s a vocal line without any extra tuning: TCRM20_31.wav
Now, AutoTune is used to fix some of the tuning issues: TCRM20_32.wav
Next, a rapper without AutoTune: TCRM20_33.wav
Now the line is tuned manually using the draw function: TCRM20_34.wav
Here’s what the line is like just putting the effect in Auto Mode: TCRM20_35.wav
Now using both techniques in combination: TCRM20_36.wav
Special thanks to audio engineers Michael Testa, Connor Smith and Bernie Mack for supplying tracks to play with and create these demos. Special thanks as well to all of the musicians and composers for the use of their talents and materials….
The jazz combo, recorded by Bernie Mack of Flashpoint Academy in Chicago, is: Markus Rutz - trumpet, Charles Heath - Drums, Dennis Luxio - Acoustic Bass
They are regulars at ANDY'S JAZZ CLUB in Chicago. http://www.youtube.com/watch?v=PjgzlAnYpvg
Thanks again to The Bay State for the use of the raw tracks to this demo recording of “Liars.” All material used by permission; all copyrights reserved. Demo recorded by Michael Testa.
The Bay State is Tom Tash, Drew Hooke, Susanne Gerry and Evan James.
Check out their music on iTunes (including the commercial release of “Liars” or visit them on facebook at: http://www.facebook.com/thebaystate
Thanks to Connor Smith of Under the Piano Productions and artist BSJR for the vocal line. myspace.com/bjsrmusic
And again to Connor and the rapper KnuERA. knuera.blogspot.com