Say you have a computer with an audio card that offers a digital input. A friend has a hard disk recorder with a digital output, and thinks it might be worthwhile to edit some of his tracks with your software. He brings his recorder to your studio, you set the two boxes next to each other, and...
1. ...you stare glumly at the connectors on the back of his recorder and on your sound card. They both say “S/PDIF” on them, but one is an RCA jack—and there’s only one, even though you want to move two channels of audio!—while the other one just blinks at you with a little red LED in a square hole. Huh?
2. ...his recorder has something like a Mac SCSI connector on the back of it that he claims works just fine hooked up to his mixer, and you have a sneaking suspicion that even if you had something with a connector like that, using one of your ordinary SCSI cables might not be a good idea. What now?
3. ...both units have matching connector types and you have the right cables to hook them up, and after the transfer’s done you bring up a stereo signal on your computer screen and he says, “Where are the other six tracks?” What other six tracks?
All these scenarios can be avoided with a little forethought and some knowledge of what digital data formats and cable types might appear in a small studio.
What’s in the bits
Digital audio, once you’ve converted an analog signal into ones and zeroes, can be sent just as is, or with added information mixed into the signal. Such added data are called subcodes, and may tell the receiving device things like the sampling rate, the track number, whether or not copy protection has been set, or whether or not a sample is known to be valid. (This is just a set of possibilities; subcodes have lots of other uses depending on the equipment involved.)
Subcodes and data words (a word is a single sample, written out as bits; it can be any number of bits in length, but most commonly it will have 16, 20, or 24 bits) are organized into larger packets of information. Each data word may have a subcode that comes right along with it (for instance, a sync signal or a bit pattern that says, “This next word is on the left channel of a stereo signal”), or some subcodes may be placed at the start or end of a block of data words.
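To make the idea concrete, here is a much-simplified Python sketch of a sample and a few subcode flags traveling together in one 32-bit word. The bit positions and field names here are invented for illustration; the real IEC958 subframe has its own fixed layout of preamble, validity, user, channel-status, and parity bits.

```python
def pack_subframe(sample, is_left, valid=True):
    """Pack a 24-bit signed sample plus a few subcode flags into one
    32-bit word. Illustrative only -- not the real IEC958 bit layout."""
    assert -(1 << 23) <= sample < (1 << 23)
    word = (sample & 0xFFFFFF) << 8          # 24 audio bits
    word |= 0b01 if is_left else 0b10        # stand-in for the channel preamble
    word |= (0 if valid else 1) << 2         # validity-flag subcode
    word |= (bin(word).count("1") & 1) << 3  # parity bit makes the count of 1s even
    return word

# One stereo frame = a left subframe followed by a right subframe:
frame = [pack_subframe(1000, is_left=True), pack_subframe(-1000, is_left=False)]
```

The point is simply that every sample arrives wrapped in housekeeping bits, which is why the receiving gear can tell left from right and good samples from bad ones.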
One thing you might not know about digital audio is that the rate at which bits move down the cable is much faster than the sample rate of the audio itself; a conventional stereo signal at a 44.1 kHz sample rate still clocks down the wire at around 6 MHz. This has two effects: one is that there’s plenty of room for subcode data in and around the audio samples themselves, and the other is that digital audio has different cabling needs than analog audio, to prevent corrupting this very-high-frequency data stream.
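The arithmetic behind that figure is worth a quick look. Here is a back-of-the-envelope Python calculation, assuming the common IEC958 framing of 32 bits per subframe; the doubling at the end reflects biphase-mark coding, which can toggle the line twice in every bit cell.

```python
sample_rate = 44_100     # samples per second, per channel
bits_per_subframe = 32   # audio word plus sync and subcode bits
channels = 2

data_rate = sample_rate * bits_per_subframe * channels
print(data_rate)         # 2822400 bits per second

# Biphase-mark coding can put two transitions in every bit cell, so the
# signal on the wire toggles at up to twice the data rate:
line_rate = data_rate * 2
print(line_rate)         # 5644800 -- roughly the 6 MHz quoted above
```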
There are two stereo digital audio formats in wide use today: AES/EBU and S/PDIF. AES/EBU is named after the Audio Engineering Society and the European Broadcasting Union, who formalized the standard, and S/PDIF stands for Sony/Philips Digital Interface Format, after the two companies that created the Compact Disc (and the technologies surrounding its need for digital signal transmission, which would later also apply to Digital Audio Tape).
To audio designers, these two formats are considered parts of one larger standard called IEC958 (since renumbered IEC 60958); they use different cables but carry very similar data types. S/PDIF, as the more common “consumer” standard intended for the earliest CD players and consumer DAT recorders, has subcodes for copy protection, emphasis, and other consumer-oriented information, whereas AES/EBU does not.
The way the data words are organized, digital audio from one format can often be understood without problems by a machine looking for the other (although there’s always the chance that status bits and other subcodes will be misinterpreted, so compatibility isn’t guaranteed). The real differences between the standards lie not so much in the data as in the cabling that carries it.
AES/EBU signals are run over balanced three-conductor cable with a 110-ohm impedance; a single cable carries a stereo signal, one per input and one per output. The standard connectors are XLRs as you’d use for microphones, but regular mic cables won’t have the right impedance to safely carry AES/EBU data over longer runs; you will run into jitter problems and may have trouble with clicks and pops.
S/PDIF is usually sent over one of two types of connections—coaxial and optical. Coaxial cable is the same shielded wire used to move video signals; it has a 75 ohm impedance and uses unbalanced RCA connectors for hookup. In a pinch you can use regular RCA audio cables to run S/PDIF over very short distances, but the potential for problems with impedance mismatches is significant enough to make dedicated video lines (which you can get at Radio Shack or anyplace else that sells video equipment) a very good idea. As with AES/EBU, a single S/PDIF cable carries an interleaved stereo signal, so an audio device that supports coaxial S/PDIF will have one RCA for input and one for output (often color-coded orange to differentiate them from the red and white analog audio RCA jacks).
Optical S/PDIF uses an LED (light-emitting diode) to send data down a fiber optic cable. This has the advantage of avoiding impedance and grounding problems, at the cost of cabling that’s a little more expensive and fragile than coax. There is no single standardized jack and connector for optical S/PDIF. The most common connector type is the square Toslink connector (the same one used for ADAT optical), but there are others, most notably the ultra-compact mini-optical jacks used for hookups to miniature digital audio devices like portable DAT recorders and MiniDisc decks.
Because optical and coaxial S/PDIF are identical in data format, the solution to our problem scenario #1 above is quite simple: buy a box that converts coaxial S/PDIF to optical and back. Many manufacturers make them, and they’re affordable enough to keep in a drawer until needed.
Multitrack formats: ADAT and TDIF
If we want to move multichannel data, our two main options are ADAT optical (developed by Alesis for their ADAT digital tape machines and widely adopted by the audio industry) and TDIF-1, often abbreviated TDIF (TASCAM Digital InterFace, developed by TASCAM for their DA-series digital tape machines and also widely adopted by the audio industry). As a very general rule, ADAT interfaces outnumber TDIF primarily because the TDIF standard came along a bit after ADAT, but TDIF’s proliferation among modular digital multitrack tape machines has led to its being supported on many digital mixers, interfaces, etc.
There are other multichannel formats out there like MOTU’s AudioWire and Roland’s R-BUS, but these tend to be restricted to products and systems from their makers. The mLAN standard, a FireWire-based networking system developed by Yamaha and now being adopted by a variety of manufacturers, will be covered in a future article, as it goes well beyond the simple transfer of audio.
ADAT is an optical standard, using the now-common Toslink optical connectors and fiber-optic cable. It’s a unidirectional standard (one cable for 8 channels out, one for 8 channels in). To assure stable sync between devices using the standard, ADAT optical data can carry its own sync information or use an external sync signal. This can be a general format like BNC word clock, or a special sync signal called ADAT Sync, which is carried on a 9-pin cable. Most dedicated ADAT cards will have a DB-9 connector for sync (since it can also be used for transport control and track arming), but it’s not a requirement for use of the ADAT optical standard.
TDIF differs from ADAT in several ways. It’s a bidirectional standard: only one cable is needed to carry eight channels of audio both to and from each box in a hookup pair. Thus a complete 24-channel TDIF hookup between, say, a TASCAM MX-2424 hard disk recorder and the DM-24 mixer will require a total of three cables. (See the DM-24 rear panel picture on page 16 of this magazine.) It’s an electrical standard rather than an optical one, with the TDIF cable being a multiwire snake with DB-25 connectors at each end. TDIF carries timing data as part of the basic hookup, although TDIF devices can often take a separate word clock if they need to follow a master timing signal.
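The cable counts fall out of simple arithmetic. Here is a quick Python sketch (the function name is ours, not any standard’s) comparing the two approaches:

```python
import math

def cables_needed(channels, per_cable=8, bidirectional=True):
    """Cables required to carry `channels` of audio both to and from a box."""
    runs = math.ceil(channels / per_cable)
    # A bidirectional cable (TDIF) handles both directions in one run;
    # a unidirectional one (ADAT optical) needs a second run coming back.
    return runs if bidirectional else runs * 2

print(cables_needed(24))                        # TDIF: 3 cables
print(cables_needed(24, bidirectional=False))   # ADAT optical: 6 cables
```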
Note that TDIF’s DB-25 connectors look like those on older Macintosh SCSI hookups, but the pin connections and impedances are quite different. As feared in our scenario #2 above, if you tried to use a SCSI cable for TDIF hookups, it would fail spectacularly. There are converter boxes that will change ADAT to TDIF and back; they’re handy in studios that need to handle both hookup types on a regular basis.
One of the nice things about ADAT optical connections is that they can often be used to send optical S/PDIF instead of ADAT. In fact, as we noted above, the Toslink optical connector is the nearly universal format for optical S/PDIF cabling. Manufacturers of audio gear save money by providing only one set of optical hookups, but users need to know what data type is being sent if they hope to make any sense of it—S/PDIF is not automatically interpreted as just 2 channels of an 8-channel ADAT hookup (problem #3 on our list!). Most computer interface cards that offer this option will have a place to select S/PDIF or ADAT in the control console software that comes with the card.
High-resolution solutions: with a name like S/MUX...
Note that all of these digital audio standards top out at 48 kHz. Yet these are the same sorts of cables that are used to hook up today’s crop of 96 kHz (and higher) interfaces. What’s going on?
This is a subject for another article all to itself, which you’ll read here when things have solidified. The basic idea is pretty simple. As our requirements for higher sampling rates have increased beyond the abilities of the current standards to carry the data, manufacturers have figured out ways to “trick” the existing cable standards into giving us the resolution we desire. The most common way to do this is by bit-splitting: splitting up, say, a 24-bit/96 kHz signal into two 24-bit/48 kHz signals, and sending them down two cables at a time.
How is this done, precisely? That depends entirely on the manufacturer of the gear in question. One buzzword that gets used a lot is S/MUX (Sample MUltipleXing), a standard developed by Sonorus, which splits up audio between cables sample by sample rather than bit by bit. But that’s only one of several competing bit-splitting (or sample-splitting) methods; Apogee’s DA-16 converter, for instance, has its own bit-splitting method in addition to S/MUX support. Standards differ from box to box, they are generally not compatible, and as yet there is no overarching spec to assure that high-definition audio gear can share audio reliably over standard transmission lines.
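The general idea of sample multiplexing is easy to sketch in Python. This shows the concept only, not any vendor’s exact wire format: even-numbered samples of the 96 kHz stream go down one 48 kHz link, odd-numbered samples down the other, and the receiver re-interleaves them.

```python
def smux_split(samples_96k):
    """Distribute one high-rate stream across two half-rate links."""
    return samples_96k[0::2], samples_96k[1::2]

def smux_join(link_a, link_b):
    """Re-interleave the two links back into one high-rate stream."""
    out = []
    for a, b in zip(link_a, link_b):
        out.extend((a, b))
    return out

stream = [10, 11, 12, 13, 14, 15]
a, b = smux_split(stream)      # a carries [10, 12, 14], b carries [11, 13, 15]
assert smux_join(a, b) == stream
```

Each link runs at an ordinary 48 kHz, so existing cables and connectors carry it happily; only the boxes at each end know the two streams belong together.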
For now, just be aware that 96 kHz (and above) audio uses existing specs by bending the rules a bit, and not everyone bends the rules the same way.
Mike Metlay is the Associate Editor of Recording Magazine.