

Problems of Multiple Drum Miking
What to look out for when using lots of mics on drums...
By Michael Schulze

Most of us record drums, guitars, and the like with multiple microphones. We place a mic up close for presence, and another mic a few feet away for air. We mike up a drum kit with as many mics as we can beg, borrow, or steal. We mix the signals from these mics together to achieve the perfect balance, agonizing over subtle fader movements. But....

Devious gremlins lurk way down in our lower bits where we sometimes don’t see them. These gremlins use the very fabric of the space/time continuum in a conspiracy against us. They bounce our pristine signals against each other in a quantum game of pocket billiards. Once they finish with their mischievous tricks they scamper away, and our spheres, once aligned, are now scattered. Our image is smeared, a sonic veil has been drawn, and the groove is no longer in that pocket where it once nestled so snugly.

How to fight back? Time alignment will cast out the gremlins. True, this topic has been touched on in these pages, even recently, but follow me a few more steps down the path of true alignment and discover the inner peace that it brings.

Phase 1: Indoctrination

Any disciple must receive dogma before the inner teachings are revealed, and only through diligence will true enlightenment be achieved. So first comes a basic review of the way of things in our imperfect realm. Read and comprehend the fundamentals of sound and then I will take you to the next level.

A sound wave travels, or propagates, through a substance, or medium, at a speed that is dependent on the density of that substance. A sound wave propagates through a solid in a different way than it propagates through a gas, but in general, it moves faster through a dense medium, like stone, than it does through a sparse medium, like the atmosphere. This is why in Western movies you see the bandits pressing their ears to the railroad tracks to hear the still distant train long before it arrives—the sound travels much more quickly through the steel tracks than through the air. (This is also why “in space, no one can hear you scream”—there’s no medium to carry the sound.)

The speed of sound in air is approximately 1130 feet per second at sea level (at higher altitude, like here in Denver, Colorado, about one mile up, sound travels a bit more slowly because the atmosphere is less dense). This means that, at sea level, a sound wave covers 1 foot in about 0.0009 seconds, which rounds off to the handy figure of one foot per millisecond (0.001 second).
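That rule of thumb is easy to check with a quick calculation. Here is a minimal sketch in Python, using the 1130 feet-per-second sea-level figure from above (the function name is just for illustration):

```python
# Rough travel time per foot of distance, using the sea-level
# figure of 1130 feet per second quoted in the text.
SPEED_OF_SOUND_FT_PER_S = 1130.0

def delay_ms(distance_ft, speed=SPEED_OF_SOUND_FT_PER_S):
    """Time in milliseconds for sound to travel distance_ft feet."""
    return distance_ft / speed * 1000.0

print(round(delay_ms(1.0), 2))  # one foot: just under a millisecond
```

So "one foot per millisecond" is a slight overestimate, but close enough for studio work.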

So, Mr. Spock, why should I care? Bear with me, through just a little bit more of my simplified physics, and then I’ll deliver the goods.

A sound wave travels through the air by means of a series of high and low pressure zones called compressions (high pressure) and rarefactions (low pressure). These are caused by the vibration of a sound source, like a guitar string or a speaker cone. As the vibrating sound source moves outward it pushes the air molecules together, which makes them move away from the source for an instant. The molecules are thus squeezed, or compressed, into a space already occupied by other molecules. This forms an area of higher than normal pressure called a compression. As the vibrating source moves back away from the molecules it leaves a more empty space of lower than normal pressure called a rarefaction. The molecules quickly spring back toward the rarefaction and back to their original place, but this series of compressions and rarefactions continues to move away from the source at 1130 feet per second—and that is what makes up a sound wave. We represent these compressions and rarefactions by drawing squiggly lines like the ones below (as in an oscilloscope display).

These particular lines represent (roughly) a sine wave, which is a pure tone with no harmonics that, as such, does not actually occur in nature, but helps us demonstrate an important phenomenon. A lower frequency or musical pitch will have squiggles that are spaced farther apart than a higher frequency or pitch.

Phase 2: The Inner Teachings

In the studio we often record a single sound source with multiple microphones. For instance, you may record a drum kit with four microphones: kick drum, snare drum, and two overhead mics. If you listen to the overheads by themselves you will probably hear plenty of snare drum along with everything else. When you mix the drums you combine the signal from the snare drum with the signal from the overheads. That presents a problem.

When the drummer hits the snare, the sound reaches the snare mic first and then must travel about three more feet before it reaches the overheads. So when you listen to your drum mix you hear the snare first from the snare mic, then again about 3 milliseconds later from the overheads; 3 milliseconds is roughly the time it takes the sound wave to travel from the snare mic to the overheads. We refer to this disparity as time arrival difference.
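In a DAW that gap shows up as a sample offset between tracks. A quick sketch of the conversion, assuming the 1130 ft/s figure from earlier and a 44.1 kHz session (both values are assumptions for illustration):

```python
# Convert a mic-spacing distance into a sample offset.
SPEED_FT_PER_S = 1130.0   # sea-level speed of sound, from the text
SAMPLE_RATE = 44100       # assumed session sample rate

def arrival_difference_samples(distance_ft):
    """Sample offset corresponding to a given extra mic distance."""
    seconds = distance_ft / SPEED_FT_PER_S
    return round(seconds * SAMPLE_RATE)

# Snare mic to overheads, roughly three feet:
print(arrival_difference_samples(3.0))  # about 117 samples (~2.7 ms)
```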

If this time arrival difference is more than 10 or 20 milliseconds, you will distinctly hear the two sounds, like a very quick echo. When the difference is less than this, the perceived effect is a noticeable change in tone or timbre, and perhaps even a shift in the stereo image. Here’s why.

Imagine you are recording a sound with two microphones. One mic is a few inches away from the sound source and the other is three feet away, to add some room sound and open things up. Because the mics are at different distances from the sound source, they will pick up the sound at different times. In the example on page 32 the signals from the two mics just happen to end up in opposite polarity from each other.

If these two signals are combined when you mix your audio they cancel each other out because one signal is high when the other is low, and vice versa. The resulting signal is silence!
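You can demonstrate that worst case numerically. The sketch below builds a tone and its polarity-inverted twin and sums them; the 1 kHz frequency and 44.1 kHz sample rate are arbitrary choices for the demonstration:

```python
import math

# Two equal sine waves in opposite polarity sum to silence.
SAMPLE_RATE = 44100
FREQ = 1000.0

def tone(n, invert=False):
    """Generate n samples of a sine tone, optionally polarity-inverted."""
    sign = -1.0 if invert else 1.0
    return [sign * math.sin(2 * math.pi * FREQ * i / SAMPLE_RATE)
            for i in range(n)]

close_mic = tone(441)                  # the direct signal
far_mic = tone(441, invert=True)       # same signal, opposite polarity
mixed = [a + b for a, b in zip(close_mic, far_mic)]
print(max(abs(s) for s in mixed))      # effectively zero: total cancellation
```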

In the real world, the complex sounds made by musical instruments are made up of many frequencies. A snare drum has a lower thump from the resonance of the heads and the shell, and a higher frequency crack from the snare springs underneath. If you mix the mic that is right on top of the snare with the overheads three feet away, you end up with a situation where some frequencies end up in common polarity and some end up in opposite polarity. Some frequency components will reinforce and some will cancel out. This means the sound of the snare will change, probably for the worse!
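Which frequencies get hurt follows directly from the delay: any frequency whose half period (or an odd multiple of it) matches the delay arrives in opposite polarity and cancels. A sketch of the first few cancelled frequencies for a given mic spacing, again assuming 1130 ft/s:

```python
# First few "notch" frequencies for two mics spaced distance_ft apart.
# A frequency cancels when the delay equals an odd number of half periods.
SPEED_FT_PER_S = 1130.0

def notch_frequencies(distance_ft, count=4):
    """Return the first `count` cancelled frequencies in Hz."""
    delay = distance_ft / SPEED_FT_PER_S          # delay in seconds
    return [round((2 * k + 1) / (2 * delay)) for k in range(count)]

print(notch_frequencies(3.0))  # roughly [188, 565, 942, 1318] Hz
```

Notice the notches march up the spectrum at even spacing, which is why this effect is often called comb filtering.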

In the ancient bygone days of analog recording there was not much one could do about this. The snare mic sounded great all by itself, and the snare sounded pretty good in the overheads, but when the two signals were combined the snare would lose some of its punch, or perhaps it would sound dull. So engineers would boost certain frequencies with equalization, or perhaps reverse the polarity of the snare mic.

These techniques helped, but nowadays even the cheapest DAW software can eliminate this problem completely and lend new impact to drums, guitars, and just about anything else you record! Drums especially will have greater impact and clarity without the need for so much compression and equalization, and the stereo image of the drum kit will take on a greater sense of depth. In short, the whole thing will sound more real. Once you have this impact and depth you can use EQ and compression to enhance the sound rather than compensate for unfortunate acoustical phenomena.
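Under the hood, what the DAW does is simple: it nudges the early-arriving track later (or the late one earlier) by the measured sample offset so both copies of the hit line up. A toy sketch of the idea; the function name and values are illustrative assumptions, not any particular DAW's feature:

```python
# Minimal sketch of time alignment: delay the close mic's track by
# padding silence at the front so it lines up with the overheads.
def align(track, offset_samples):
    """Delay a track (a list of samples) by offset_samples of silence."""
    return [0.0] * offset_samples + track

snare_mic = [0.0, 1.0, 0.5, 0.2]   # toy waveform of a snare hit
aligned = align(snare_mic, 3)       # delay the close mic by 3 samples
print(aligned)  # [0.0, 0.0, 0.0, 0.0, 1.0, 0.5, 0.2]
```

In practice you would measure the offset by eye (zooming in on the snare transient in both tracks) or with a cross-correlation tool, then apply it as a track delay or a manual region nudge.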

Talk back to Michael Schulze at

