This installment of TCRM builds on the previous discussion of microphone types, preamps, signal flow, converters, and levels; here we explore various techniques for mic choice and placement.
To do this well, you must take into account the microphones, the room acoustics, the management of appropriate levels, and the psyches of the performers themselves – a tall order, but necessary for the high-quality recordings you desire.
Picking the right mic for the job
Last month’s issue outlined the workings and features of the various microphone types, including polar patterns, transducer types, proximity effect, transient response, impedance, and frequency response. Together with the interactions of the cabling and preamp, these factors combine to lend a distinctive character to each microphone.
To choose the right microphone for the job at hand you must know your microphones, instruments, and acoustic space well. The most important factor of all… is knowing exactly what the sound you want is! You should hear the desired tone in your head before you choose and place your mics.
Initial mic choice decisions come down to basic compatibility issues. First, the microphone must be able to handle the SPLs (Sound Pressure Levels) the source will be pumping out. Then the particular sonic characteristics of the instrument, and your desired “mind’s eye” sound, must be considered.
If the attack of the instrument is sharp and fast (and you wish to retain that sound), a mic with a fast transient response is called for. A small diaphragm condenser would be a good choice. If the attack is not fast, or you wish to minimize it, you may choose a large diaphragm condenser or dynamic.
Similarly, the frequency content of the sound should be complemented by the frequency response of the mic. For cymbals, a mic whose frequency response is flat (or even accentuates the high frequency sound) up to 20kHz is usually called for. An instrument like a fingerpicked upright bass, however, does not require frequencies that high. It does call for a mic with greater low-end capabilities than the cymbals do.
Then there’s the polar pattern. Directional mics help isolate a particular instrument (or instruments) in a room with numerous sound sources. They can also be used to control the amount of reverb or acoustic reflections the mic captures. Omnidirectional microphones are used to bring out the sense of acoustic space. (See TCRM #8 for more details)
Before you finesse a mic’s placement, consider where you’ll place the performers according to practicalities like sight and feel. Both of these influence the comfort and performance of the musicians and should not be overlooked. A tentative, uncertain, or uptight performance is generally not worth recording, no matter how good your mics and your miking technique are.
To perform well, musicians need to be able to see and interact with each other easily, as well as with the recording engineer and producer (where feasible). In addition, they should be situated in a spot in the room where they will be comfortable. Is the room too cold or too warm? Is the lighting too stark or too dim? Is the air fresh or stale?
Acoustics are the next concern. Different rooms, and areas of a single room, will accentuate different frequencies and decay times, imparting a different sound and feel to the performance and recording.
Get to know your recording environment by talking, clapping, singing, and playing in various places throughout your tracking room(s). Keep the acoustic signatures you hear in mind when deciding where to place performers. Again, it is necessary to know ahead of time what type of sound is desired before making decisions on placement.
Mic placement should be considered only after the performers have been placed in the most comfortable positions within their desired acoustic contexts.
Finally, keep in mind the comfort and normal movements of the performers when placing mics. The sound of a mic being bumped can ruin a recording. But when a mic is knocked over or dropped it can also ruin the mic - not to mention your operating budget, and your day.
Close miking yields an “in-yer-face” sound with the advantage of maximizing both the recording level and the signal-to-noise ratio. But close miking removes the acoustic context of the performance. It can skew the balance of an instrument’s sonic signature, generating a very unnatural-sounding recording. This is due to the fact that people rarely experience instruments by placing their ears four inches away, or less!
Such a short miking distance can capture sound that does not give an accurate picture of the sound in its entirety, sound that is simply too localized to a specific spot of the instrument. This is further emphasized by the lack of room sound or the acoustic environment the instrument was in. If a microphone is placed further away, a more balanced representation of the instrument’s acoustic attributes can be created, as well as a sense of the environment in which the performance occurred (concert hall, small room, temple, outdoors, etc....)
So miking from up close sounds less natural, while miking from further away sounds more natural. Unfortunately, there is a diametrically opposed tradeoff. As a microphone is moved away from a sound source, less and less energy is concentrated on the diaphragm. To make up for this loss in amplitude, greater amplification is required. This “cranking of the gain” tends to accentuate all kinds of noise, as well as unwanted ambient and environmental sounds.
The inverse of this is also true; as a microphone is moved toward a sound source, the sound intensity it receives increases with the inverse square of the distance (e.g. twice as close means four times the intensity, a gain of about 6 dB). The resulting need for less gain, along with an increased ratio of source to environmental sound, makes for a much cleaner recording. In many situations a compromise position must be found, keeping in mind both musical and technical factors.
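The inverse-square relationship above translates into a simple decibel formula. Here is a minimal sketch (the function name is my own, and it assumes a point source in a free field, which real rooms only approximate):

```python
import math

def level_change_db(d_ref, d_new):
    """Relative level change (in dB) when a mic moves from distance
    d_ref to d_new from a point source, per the inverse-square law."""
    return 20 * math.log10(d_ref / d_new)

print(round(level_change_db(0.4, 0.2), 1))   # halving the distance: +6.0 dB
print(round(level_change_db(0.2, 0.4), 1))   # doubling the distance: -6.0 dB
```

In a real room, reflections and absorption mean the measured change is usually somewhat less than the free-field prediction, but the 6-dB-per-doubling rule is a useful mental benchmark.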
There are many situations in which microphones are placed very close to instruments and artificial reverbs and delays are added later. In most musical styles the close-miked sounds of snare drums, trumpets, vocals, and guitar amplifiers have all become commonplace and almost expected. Compare the trumpet sounds in recordings of horn sections like Tower of Power, The Memphis Horns or Earth Wind and Fire with the sound of the trumpets in a classical symphony and you’ll get a sense of the differences in miking techniques.
Also, remember that directional microphones tend to emphasize low frequencies more as the mic is placed closer. This is known as proximity effect. Equalization (either in the software, on the interface or preamp, or in the high-pass filter on the microphone) can be used to offset this if necessary.
Mirror, mirror on the wall… (Reflections)
When a musician makes sound, it travels to the microphone through both direct and indirect paths. For instance, the sound of a singer travels straight to the microphone, but also bounces off any nearby wall, floor or ceiling and then to the mic. Because the reflected sound must travel a longer distance to reach the microphone, it takes longer to be picked up. These types of sonic reflections (bouncing off a single surface before reaching the mic) are called first-order reflections and can have a tremendous impact on the sound captured by the mic. The interference between the two wavefronts arriving at different times causes comb filtering, similar to what happens with such time lags occurring in the electronic domain by delay or latency (as discussed in TCRM #4).
Comb filters are a normal part of life. After all, our ears perceive sound arriving from multiple directions, with resulting delays etc., and this helps us in the accurate perception of sound and our environment. While microphones do tend to emphasize their effects, completely removing such interference is not only extremely difficult, but can produce recordings without acoustic context or sense of space. Microphones and the recording process can easily make them so extreme, however, that they sound more like special effects than natural phenomena.
Fortunately, there are many techniques available to bring the interference caused by first-order reflections under control. Directional microphones can be placed so that the most sensitive angle faces the instrument while the least sensitive side faces the direction of a strong reflection. Blankets or gobos (movable dividers with highly reflective and/or absorptive sides) can be placed in between the mic and the offending reflective surfaces. One can also lessen the effect of these reflections by placing the microphone further away from the reflective surface or moving into a larger space. Larger, more acoustically complex spaces will tend to de-emphasize first order reflections, especially if the microphones are placed away from walls and other large reflective surfaces.
Interference can also occur between two (or more) microphones when they pick up sound traveling different distances from a single source.
This causes differences in arrival times between the direct sounds, between direct and reflected sound, or even between different reflections. While some people are mindful of this phenomenon when recording a single instrument with two mics, they tend to forget it also happens when two single-miked instruments are recorded in the same room. The interference is especially evident when the recorded tracks are bussed or panned together.
The same basic methods mentioned in the above section are also used to avoid these comb-filters. Gobos or blankets can help keep direct sound and early reflections from traveling from one instrument to the other’s microphone. Angling directional mics can also work well.
Using more space to separate the performers and microphones can cut down on the noticeable filtering. This method actually relies on the fact that so many peaks and dips are created across the entire frequency range that a few random comb functions tend to cancel each other out.
The three-to-one rule
Many people may already be familiar with the “three-to-one” rule governing the distance between two microphones picking up the same source. This “rule” states that the second microphone should be at least three times as far away from the sound source as that source’s primary mic. While this may be a nice rule-of-thumb, giving people a sense they have done their job right, it is misleading. Let me be clear; following the 3-to-1 rule does not avoid comb filtering. In fact, even when strictly following this “rule”, there can be significant interference issues.
Let’s check it out quickly… with math (yes… I said the bad, evil M-word!)
Actually, the frequencies of both constructive and destructive interference can be found using a very simple formula. First, simply divide the speed of sound (344 meters per second) by twice the difference in path lengths traveled by the sound (in meters). All odd whole number multiples of this number will interfere destructively. All even whole number multiples of this number will interfere constructively.
For example, if the first microphone is 0.1 meters (about 4 inches) from the front of the guitar, and the second is 0.3 meters (about a foot), the sound must travel an extra 0.2 meters (8 inches) to get to the second mic. Divide 344 by twice this difference (0.4 m) to get the base number from which all other interferences can be derived.
344 / 0.4 = 860 Hz
Multiply by 1, 3, 5, 7, 9, 11… to get frequencies of destructive interference.
Multiply by 2, 4, 6, 8, 10, 12… to get frequencies of constructive interference.
Therefore, the first 5 frequency dips (in Hertz) will be:
860, 2580, 4300, 6020, 7740
And the first five frequency peaks:
1720, 3440, 5160, 6880, 8600
Note that while this example follows the 3-to-1 rule, it still creates a noticeable comb filter.
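The arithmetic above is easy to automate. This short sketch (the function name is my own) implements the base-frequency formula from the text and reproduces the guitar example’s dips and peaks:

```python
def comb_frequencies(path_diff_m, count=5, speed_of_sound=344.0):
    """Return the first `count` destructive (dip) and constructive (peak)
    interference frequencies, in Hz, for a given difference in path
    length (in meters), using the base-frequency formula above."""
    base = speed_of_sound / (2 * path_diff_m)               # 860 Hz for 0.2 m
    dips = [base * n for n in range(1, 2 * count, 2)]       # odd multiples
    peaks = [base * n for n in range(2, 2 * count + 1, 2)]  # even multiples
    return dips, peaks

dips, peaks = comb_frequencies(0.2)   # the guitar example: 0.1 m vs. 0.3 m
print([round(f) for f in dips])       # [860, 2580, 4300, 6020, 7740]
print([round(f) for f in peaks])      # [1720, 3440, 5160, 6880, 8600]
```

Try a few other path differences and you can watch the comb slide down in frequency as the difference grows, exactly as described in the next paragraph.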
By adding some simple geometry to this formula, it can be used to calculate the basic comb filtering effects between two or more speakers, microphones, or sound sources (including direct versus reflected sounds). The formula tells us that the greater the difference in path lengths, the lower in frequency (and the more tightly packed) the comb function becomes. The opposite is also true; as the difference in path length shrinks, the frequencies of interference rise and spread further apart. Eventually, there are few or no interference frequencies within our hearing range.
So, let’s simplify this and restate it as a more general, yet factual, set of guidelines:
- When two mics are picking up sound from a single source, they should be placed either as close together or as far apart as is feasible, to minimize comb filtering. In addition, barriers and microphone polar patterns can be used to reduce the effects of interference.
Remember, this can include either two mics being used for a single instrument or the open mics being used to track multiple instruments in the same room. (Specialized stereo and surround miking techniques will be discussed further in a later installment of this series.)
Before leaving this discussion, it should be noted that real-world acoustics are rarely as simple as this calculation would suggest. It does a good job in determining the most basic factors involved in acoustic comb filtering, but does not consider complicating real-world factors such as phase, absorption, transmission loss, diffusion, and microphone parameters. In this case, the true complexity of the real world actually tends to work in favor of the recording engineer. This is because the additional variables tend to lessen the effects of these natural filtering functions.
Time and phase alignment
Fortunately, DAW technology has made it relatively easy to time-align the tracks of two or more microphones capturing a single instrument. By zooming in on the waveforms of your regions you can compare the time offset of each. By highlighting between the same points on opposing waveforms you can measure the time difference. By dragging or nudging one to match the other you can realign them and reduce obvious comb filtering (see included pictures and audio files.)
In some cases, tracks may be out of phase with one another. In two similar, time-aligned waveforms, this is where one waveform is traveling upwards while the other travels downward (like a horizontal mirror image). You can address this either by phase-inverting the mic preamp before recording (see TCRM#5) or by selecting phase invert (often represented by the symbol ø) for that region or channel in your DAW.
When more than one sound source is picked up by multiple microphones, time/phase aligning becomes much more complicated. Generally, only one source (instrument) can be time aligned across two or more tracks. The others will usually remain out of alignment. The choice of which elements to align (or whether to align them at all) must be informed by your own production aesthetic and your ears…. What sounds good to you?
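Zooming in and nudging by eye, as described above, can also be approximated numerically: the offset between two takes of the same source shows up as the peak of their cross-correlation. This is a rough sketch (the function name and toy signals are my own, not from the article):

```python
import numpy as np

def find_offset_samples(track_a, track_b):
    """Estimate how many samples track_b lags track_a, via the peak of
    their cross-correlation. A positive result means track_b should be
    nudged earlier by that many samples to line up with track_a."""
    corr = np.correlate(track_a, track_b, mode="full")
    return (len(track_b) - 1) - int(np.argmax(corr))

# Toy example: a short burst and a copy delayed by 5 samples.
a = np.zeros(1000)
a[100:110] = np.hanning(10)
b = np.roll(a, 5)                  # b lags a by 5 samples
print(find_offset_samples(a, b))   # 5
```

With real tracks the correlation peak is broader and noisier than in this toy case, which is why the final call on whether (and what) to align still belongs to your ears.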
Mic placement and instrumental acoustics
In addition to room acoustics, proximity effect, comb filtering, and signal-to-noise ratios, mic placement must consider the particular nature of the instrument to be recorded. Different instruments and instrumental families (strings, winds, brass, percussion, etc.) have acoustical and performance properties specific to them. These include such things as directionality of sound, body resonances, method of excitation, SPL, sympathetic vibrations, formants, envelope, and basic spectra.
Look out for… sonic radiation!
Before deciding where to stick that mic, it’s important to consider where the majority of sound is coming from and in what direction it is being projected. A trumpet, for example, projects mainly straight out from the bell. By contrast, the exact point and angle of projection of the clarinet change based on what note is being played; this is because the sound radiates from the various holes along the instrument, and exactly which holes are the source of this acoustic energy is determined by the note being played. The same is true for all members of the woodwind family, including saxophones, oboe, bassoon, flutes, recorders, and English horn.
Members of the brass family use slides or valves to change the resonating length of the instrument and so do not radiate sound from any open holes in the body. The sound is projected from the bell (flare) at the end of the instrument instead. For these reasons, close miking a trumpet at the bell works much more successfully than doing the same thing with a clarinet. The brass family of instruments includes trumpet, trombone, tuba, bugle, euphonium, and flugelhorn.
The French horn, technically also a brass instrument, is trickier to mike – its wide bell points towards the rear of the player’s position and doesn’t project as focused a sound as the other brass instruments. French horns sit apart from the brass section in the orchestra, and musically they’re treated as a bridge between the woodwinds and brass. Audiences almost always hear French horns as somewhat diffuse and distant, their sound reflecting off the back wall of the stage, so close-miking them can make them sound almost unrecognizable.
Note that on all instruments that use breath or moving air as their means of generating acoustic vibrations, this air is expelled through various parts of the instrument. Be careful not to let it blow directly on the diaphragm of the microphone, or rumble will appear on your recorded tracks. Windscreens, pop filters, rumble filters, and careful placement can all be used to combat this problem. Brass, winds, voice, organ, and bagpipes are just some of the offending instruments.
On some instruments it is also important to keep the whole of the instrument in mind when deciding where to place microphones. This is especially true when recording larger instruments or when using a close-miking technique. For instance, a piano resonates from so many strings, of various lengths, that it can have over 20 square feet of resonating area across the strings and soundboard. Placing a microphone too close tends to pick up notes and tonal resonances particular to that one spot much more strongly. Other notes and subtleties of tone will be downplayed. For this reason, multi-mic or more distant single mic setups are often employed when recording the piano.
No… formants! These are the resonant characteristics of instruments and the human voice that do not change with pitch. They are most often determined by the physical materials, structure, and dimensions of an instrument. Your voice still sounds like your voice, whether you sing a low note or a high one, because the formants that give your voice its distinctive character – throat width, bone structure, mouth shape, etc. – are what they are.
The acoustic and classical guitars are great examples of this. The length of the strings is changed to play different pitches, but for every note the basic makeup of the guitar remains the same. The type of woods used, thickness of the face and back, dimensions, bracing, volume of air inside, sound hole diameter, method of excitation (pick versus fingers versus coins…) and string composition all remain the same. Each of these components resonates at a fixed set of very specific frequencies.
Placing a directional microphone right in front of the soundhole will accentuate the lower resonances of the interior air cavity and the sound of picking, but capture less of the face, bridge, and neck vibrations. Moving the mic back a bit and placing it over the bridge will capture a more balanced blend of the bridge, face, soundhole, and strings. Leaving the mic in this location but angling it more towards the soundhole or neck will lend more energy to those resonances as well as to the strings.
Hi-hats as well as drum heads (and bodies) are also great examples of the influence of formants. Close-miking a hi-hat can yield startlingly different results if the mic position is changed by mere centimeters. Even when struck in the same spot, with the same sticks, and in the same manner, the tone can vary from gong-like to brittle to cardboard-dull. In this case, it is because the physical shape and composition of the cymbal make it vibrate only in certain linear and circular patterns. These patterns, known as Chladni patterns, create active and inactive areas on the cymbal (called antinodes and nodes respectively). When a mic is placed very close to the surface of the cymbal it picks up the particular vibrations of the active spots in close proximity. If it is directly over a larger nodal point, the energy at that pattern’s frequency will seem weak.
Drumheads also form distinctive Chladni patterns, which are influenced by tuning. The formant frequencies of the body (shell) also have a large influence on the overall tone of the drum. Wide-ranging tonal variation is possible by moving the microphone across or around the head as well as behind or within the shell.
Once the microphone has been selected and placed, it’s time to fire it up! Since aspects of level and gain were already covered extensively in TCRM #6, I’ll just give a few helpful pointers on the practical basics here.
Start with phantom power turned off, the pad switched in, and both the trim and channel faders all the way down. Any auxiliary sends or busses which are pre-fader should also be turned down along with effects returns, master fader and monitor outputs. This will ensure no microphones, speakers, preamps, or eardrums are wounded in the process of plugging in microphones.
Once the mics are set up and you’re ready to record, turn on phantom for all mics that need it and have the performers play a section of music that represents the loudest they will probably play during the session. Slowly turn up the gain knobs (aka. trim) on your preamps until the highest levels register around –18 to –12 dBFS on your recording software. This is a good compromise setting, allowing some headroom just in case the performers get louder (which almost always happens), but is a strong enough level to maximize quality and minimize noise. When recording in 24 bits, -26 to –20 dBFS may be an even better range as it allows greater headroom, but still delivers an extremely low noise floor.
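If you want to check where a recorded peak actually sits relative to the suggested range, the dBFS math is straightforward. A minimal sketch (the function name and test signal are my own, assuming floating-point samples with full scale at 1.0):

```python
import math

def peak_dbfs(samples):
    """Peak level of a float signal (full scale = 1.0) in dBFS."""
    peak = max(abs(s) for s in samples)
    return 20 * math.log10(peak)

# A 440 Hz sine peaking at a quarter of full scale sits right around
# -12 dBFS -- the top of the suggested tracking range.
sig = [0.25 * math.sin(2 * math.pi * 440 * n / 44100) for n in range(44100)]
print(round(peak_dbfs(sig), 1))   # -12.0
```

Note that each halving of peak amplitude costs about 6 dB, so a signal peaking at 0.125 of full scale would land near –18 dBFS, the bottom of the suggested range.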
If you reach the final 1/4 of the gain available from the trim knob (when it’s turned almost all the way to the right), turn it back down, release the pad and then begin working the trim pot back up to the suggested range.
Once all the levels seem appropriate, make a sample recording. Listen back to each recorded track for any sign of noise, hum, or distortion. If found, deal with each of these problems immediately, in a systematic manner, and one at a time:
- Hum/buzz – These are usually caused by ground loop issues. Lifting or isolating grounds may do the trick. Connecting all power cords into a single outlet sometimes also helps (See TCRM #7). Hum can also be caused by interference from lighting, dimmers, computer monitors, screens, cell phones (even when nobody is talking – turn them off!) or electrical transformers. Removing these things, and/or keeping cables and mics away from them, may solve the problem. Lines are especially sensitive to interference if unbalanced or improperly shielded. Avoid cable/connector adapters wherever possible. (See TCRM #6)
- Noise or Hiss – Usually caused by interference in unbalanced cabling or improperly shielded balanced cable. If it’s unbalanced… balance it! Use a DI box or line converter and then be sure the shield goes to ground correctly! (Again, TCRM #6) Noise or hiss can also be introduced if levels get too low in one part of the signal chain and are then cranked somewhere else to make up for it.
- Distortion – Added distortion is most often a level problem. The signal may be too hot for the preamp or converter inputs. Also make sure there is no clipping on any outputs. Check for appropriate amp and speaker input levels. Poor quality, old, or corroded cabling and connections can also cause this. Avoid adapters! Distortion may also be caused if the microphone itself overloads due to an SPL from the performer that is just too loud. If the mic has a pad, this may solve the problem. If not, the mic should be moved further away from the source or replaced by one with a higher SPL rating. I suppose, if all else fails, the musician could actually turn down his axe? (See TCRM #3, #6 and #8)
The red light
If levels and sound quality all check out, it’s time to begin recording the first take. Watch the levels as you record. If they start to creep up, you may need to bring the gain back a bit. It is best to do this between takes.
Well, that’s it for now. Remember, while it’s true that mic placement and mic choice have underlying technical considerations, both are meaningless without artistic and sonic vision. TCRM #10 will discuss ideas on miking specific instruments including guitars, bass, brass and winds.
John Shirley is a recording engineer, composer, programmer and producer. He’s also on faculty in the Sound Recording Technology Program at the University of Massachusetts Lowell. Check out his wacky electronic music CD, Sonic Ninjutsu, at http://www.cycling74.com/c74music/009.
Here are some recordings of an electric guitar to demonstrate the interference caused by reflections and multiple mics. First, a guitar lick recorded with a Neumann TLM103 at 4-feet from the amp cabinet.
Now, that same setup but with a close reflective surface (plexiglass window gobo) causing interference (see TCRM9_pic1.jpg)
For further comparison here's the same scenario, but using a different mic (the AT4041). Again, without the barrier:
... and with the barrier added back:
Now let's hear what happens when two spaced mics are used on a single instrument. First, here's the guitar using an SM57 straight on the speaker grill.
Simultaneously, the guitar is recorded with another SM57 about 20 inches back from the front of the grill.
When the two recorded tracks are combined, the comb filtering is quite severe.
By zooming in on the waveforms in a DAW, the tracks can be time-aligned by moving one region to match the other (see pictures). Here's what the time-aligned version sounds like:
Now let's try the same things with a sax. First, the sax is recorded using AT4041s at 24 and 31 inches (TCRM9_9.wav and TCRM9_10.wav respectively).
The difference of 7 inches between the mics and the sax means that the second mic receives the same pressure wave as the first one some .54 milliseconds later. While this does not make a big difference in the separate recordings, when the tracks are combined the comb-filtering created is quite obvious. (see TCRM9_pic2.jpg)
In a DAW the 31-inch track is moved earlier in time by .00054 seconds (.54 milliseconds), making the tracks time-aligned so that the comb filtering is diminished.
Here's another variation on the sax example. Now, the further mic is moved to 39 inches out, making the difference between the two mics and the sax 15 inches. (see the third picture)
Here's what happens when they are mixed together:
This is corrected by nudging the later track earlier in time by 1.11 milliseconds (97 samples at 88.2kHz, as seen in picture 5):
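As a sanity check, the nudge amounts in these sax examples can be derived directly from the path difference. This sketch (the function name is my own) assumes a nominal speed of sound of 344 m/s; the article’s 97-sample figure was measured in the DAW, and small discrepancies from the nominal math are normal, since the real speed of sound shifts with temperature:

```python
def nudge_for_path_diff(extra_m, sample_rate=88200, c=344.0):
    """Delay in milliseconds and in (rounded) samples for an extra
    sound-travel path of `extra_m` meters at the given sample rate."""
    seconds = extra_m / c
    return seconds * 1000, round(seconds * sample_rate)

# The second sax example: a 15-inch (0.381 m) path difference.
ms, samples = nudge_for_path_diff(0.381)
print(round(ms, 2), samples)   # 1.11 ms, 98 samples (the DAW measured 97)
```

The same function applied to the first example’s 7-inch (0.178 m) difference gives roughly half a millisecond, in line with the .54 ms quoted above.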
If the waveforms are time-aligned but out of phase, however, the combined sound is still compromised. (See pictures 7 through 10 for visual examples as well)