# dbx 700 digital audio processor

As ALF would say, “There’s more than one way to cook a cat.” We have been so overwhelmed by linear Pulse Code Modulation (PCM) recording that we forget there are other ways to go from analog to digital.

One of them is *delta modulation*. The Greek delta (which in its uppercase form looks like an equilateral triangle) is the mathematical symbol for the *difference* between two quantities; consequently, in delta modulation, we do not record the *absolute* value of a signal sample, but the *difference* between successive samples.
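If you prefer code to Greek letters, the whole idea fits in a few lines of Python. A minimal sketch (the sample values are invented for illustration):

```python
# Delta-encode: keep the first sample, then only successive differences.
def delta_encode(samples):
    return [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]

# Delta-decode: a running sum rebuilds the absolute values.
def delta_decode(deltas):
    out, total = [], 0
    for d in deltas:
        total += d
        out.append(total)
    return out

samples = [0, 3, 5, 4, 4, 7]    # arbitrary example values
deltas = delta_encode(samples)   # [0, 3, 2, -1, 0, 3]
restored = delta_decode(deltas)  # recovers the original samples
```

Note that the decoder never sees an absolute value after the first sample; it reconstructs them all by summing differences.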

Delta modulation is not new. It has been used for years as a simple way to reduce the bandwidth required to transmit TV signals. Poke your nose against the cathode ray tube and you will see that a horizontal line looks a lot like the previous or next line. The picture content doesn’t change much from line to line, so we don’t need much information to describe the difference between one line and the next. If we transmit only the difference information, there is a great reduction in the required bandwidth. (Major changes, which require *full* information to be transmitted, are rare and do not significantly increase the bandwidth requirement.)

The same principle can be applied to ordinary PCM. The difference between two samples can never be as large as the absolute maximum level of the program material: sounds do not jump 96 dB in 1/50,000th of a second! So our 16 bits, which would normally cover the entire dynamic range of the signal, can be applied to the much narrower range of sample differences. If we design a system assuming that the difference between one sample and the next never exceeds 1% of the peak signal level (a conservative estimate), we gain a factor of 100 in resolution! (Not bad.) Or we could use fewer bits, for a resolution comparable to that of conventional PCM. An additional advantage, besides the gain in resolution (or reduction in bandwidth), is that you no longer have to worry about the absolute level of the signal. When we “run out of numbers” in conventional PCM, the signal is clipped, producing an unpleasant-sounding error. But with a delta-modulated PCM system, the analogous situation produces slew-rate limiting (the recorded difference between samples is not as big as the real difference), a far less offensive distortion.
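That 1% figure is easy to sanity-check: for a sine wave of frequency f sampled at rate fs, the largest sample-to-sample step is about 2πf/fs times the peak. A quick Python check, with an assumed 1 kHz tone and the 644 kHz clock rate the Model 700 uses:

```python
import math

fs = 644_000   # sampling rate, Hz (the dbx 700's clock rate)
f = 1_000      # test tone, Hz (an assumed value for illustration)
peak = 1.0     # full-scale amplitude

# The biggest step between successive samples of a sine occurs near the
# zero crossing, where the waveform's slope is steepest.
wave = [peak * math.sin(2 * math.pi * f * i / fs) for i in range(fs // f + 1)]
max_step = max(abs(b - a) for a, b in zip(wave, wave[1:]))

approx = 2 * math.pi * f / fs  # small-angle estimate of that same step
```

At these rates the largest step is just under 1% of peak; a higher-frequency tone or a slower clock would, of course, make it bigger.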

The problem with such a system, however, is that it requires pretty hairy hardware. Not only do we have to sample the signal as precisely as in ordinary PCM, but we also have to calculate a *very* precise difference between successive samples. PCM hardware is quite complex as it is. Why should we give ourselves all this trouble for a slight improvement?

There is a way out of this dilemma. Suppose we can systematically estimate the value of the next sample. By systematic, I mean that the guess is not random: it follows a strict set of rules, so the same set of initial conditions always yields the same estimate. Both the encoder and the decoder obey these rules. Therefore, we only have to convey the difference between the *predicted* signal value and its actual value. The decoder can make the same prediction itself and then apply the difference signal as a correction.
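In code, the shared-rules idea looks like this. A minimal Python sketch, using the simplest possible predictor (repeat the last reconstructed value); the real dbx predictor is surely more sophisticated:

```python
# Shared prediction rule: guess that the next sample repeats the last
# reconstructed value. Encoder and decoder both follow it exactly,
# so only the prediction error needs to be transmitted.
def predict(history):
    return history[-1] if history else 0

def encode(samples):
    history, errors = [], []
    for s in samples:
        guess = predict(history)
        errors.append(s - guess)             # only the correction is sent
        history.append(guess + (s - guess))  # reconstruct as the decoder will
    return errors

def decode(errors):
    history = []
    for e in errors:
        history.append(predict(history) + e)
    return history

samples = [2, 4, 5, 5, 3]  # arbitrary example values
errors = encode(samples)    # [2, 2, 1, 0, -2]
```

Because both ends apply the identical rule, `decode(encode(samples))` returns the original samples exactly.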

Such a system is called (surprise!) *predictive delta modulation*. PDM creates an estimated model of the signal, the way a painter makes a quick sketch on the canvas before filling in the details, then transmits a code that describes whether the estimate is larger or smaller than the actual value of the next sample. If the sampling is fast enough (greater than about 500 kHz), there will usually not be much difference between one sample and the next. The difference between the sample and the estimate will then be so small that we can accurately describe it with a code of only one *bit*!

**PDM explained**

The basic PDM circuit (taken from the dbx manual) is shown in fig. 1. It *looks* complicated, but it’s really very simple. There are three sections, which I will explain one at a time.

Let’s start by saying “Hello!” to our old friend, the capacitor. We can charge a capacitor by applying a voltage to it. The capacitor’s charge (in coulombs; see footnote 1) is found by multiplying the applied voltage (in volts) by the capacitance (in farads). Or:

Q = CV (I know you’ve seen this before!)

It also works the other way around. If we deposit a quantity of charge Q on a capacitor, the capacitor’s voltage will increase by

V = Q / C

Note that the change in voltage is determined only by the capacitance and the change in charge. A given amount of charge added (or subtracted) will increase (or decrease) the capacitor voltage by exactly the same amount, *regardless* of the *total* amount of charge already on the capacitor. Got it? Good.

Now look at the right part of the diagram. The triangle represents a high-gain amplifier. A capacitor is connected from the output to the input. This configuration is called an *integrator*, because it adds up (integrates) the charge pumped into (or removed from) its input.

The exact way the integrator works its magic is too complicated to go into here. (I would need to explain operational-amplifier circuits, an article in itself.) But here’s the important part. The injected charge is transferred to the capacitor, and the output of the amplifier is the same as the capacitor voltage (given by Q/C). For example, if the capacitor were 2 µF and we pumped in 0.5 µC, the output voltage would increase by 0.5/2.0, or 0.25 volts. Likewise, if we removed 0.1 µC, the voltage would drop by 0.1/2, or 0.05 volts. The integrator, of course, adds up all these small deposits and withdrawals. Therefore, the instantaneous output of the integrator is simply the running *net* charge divided by the value of the capacitor. Simple, *isn’t it*?
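The bookkeeping is worth making concrete: the integrator’s output is just the net charge divided by C. A quick Python check using the same numbers (2 µF, 0.5 µC in, then 0.1 µC out):

```python
C = 2.0e-6  # 2 µF, the capacitor value from the example

def integrator_output(deposits, C):
    # The integrator's output voltage is the net (summed) charge divided by C.
    return sum(deposits) / C

v1 = integrator_output([0.5e-6], C)           # pump in 0.5 µC: 0.25 V
v2 = integrator_output([0.5e-6, -0.1e-6], C)  # then remove 0.1 µC: 0.05 V lower
```

The second result, 0.2 V, is exactly the first result minus the 0.05 V drop described above: deposits and withdrawals simply accumulate.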

Where does the charge come from? From those two small circles marked Ipos and Ineg. These are charge pumps. One pushes charge in, the other pulls it out. Like the two sides of Alice’s mushroom, one increases the total capacitor charge, the other decreases it. When either is on, it inserts (or removes) a precisely defined amount of charge.

As you should have figured out by now, it is the integrator voltage that models the input signal. By pumping charge in or out, the encoder tries to match the integrator’s voltage to the input. If the integrator voltage is lower than the input, charge is pumped in. If the integrator voltage is greater than the input, charge is removed. But how does the encoder know whether to add or subtract charge?

Easy. It uses a comparator. (It’s the triangle on the left.) A comparator is simply a *differential amplifier*. That is, it subtracts one input from the other and amplifies the difference.

Suppose the differential amplifier has a gain of 1 sagan (one billion times). If the difference between its two inputs is 1 billionth of a volt, the output will be 1 volt. Of course, a billionth of a volt is awfully small. (Random circuit noise is much larger!) 10 microvolts is a more likely difference. 10µV times 1 sagan corresponds to 10,000 volts. How do you get 10,000 volts from an amp that is running on an 18 volt power supply?

We don’t. The output of an amplifier is limited to the supply voltage (footnote 2). The amp simply does its best to meet the 10,000V requirement. The result, in engineering jargon, is that the amp “hits the rails.” That is, the output goes to the supply voltage (or ground), as this is the highest (lowest) voltage possible. *Which* way it jumps depends on the polarity of the difference between the inputs. If it is positive, the amplifier swings to the positive rail, and *vice versa*. It is *very* unlikely that the output of the integrator will ever be close enough to the input to produce a bounded output (that is, one that sits stably between the rails). Therefore, the comparator will constantly jump back and forth, up and down, depending on the relative polarity of the input signal and the integrator’s output.

Of course, we still haven’t explained how this twitching voltage selects Ipos or Ineg. That’s done by the little gadget in the middle. It’s called a *flip-flop*. As you can guess from the name, this is a circuit whose output can take one of two states: *high* or *low*. High and low can be any two voltages we like; the important thing is that the output of the flip-flop *must* be one of those two voltages.

There are several types of flip-flops. The one shown here is a D type (“D” stands for “data”). The flip-flop has a special data input: when the flip-flop is clocked, its output jumps to the same logic level (high or low) as the data input. The clock signal is simply a constant frequency (in the Model 700 it is 644 kHz). Whenever the clock swings positive (once per cycle), our D flip-flop is triggered and the data at the input is transferred to the output. The data, in this case, is the output of the comparator. Therefore, each time the flip-flop is clocked, its output switches to match the current output state of the comparator.

The little dotted line in the diagram is meant to suggest that this logic level selects between Ipos and Ineg. Indeed, that is exactly what happens. The comparator “decides” whether the output of the integrator is greater or less than the input signal. The flip-flop is set to this logic state, which in turn determines whether we inject or remove charge. And so on, as long as there is an input. (By the way, it’s this train of logical highs and lows that constitutes our digitization of the input.) That’s it! See how simple it is?
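The whole loop of fig. 1 (comparator, clocked flip-flop, charge pumps, integrator) can be simulated behaviorally in a few lines of Python. This is only a sketch under idealized assumptions: a fixed charge-pump step size and a perfect integrator, with none of the real circuit’s refinements:

```python
import math

def pdm_encode(signal, step):
    """Comparator + clocked flip-flop + charge pump: one decision per clock."""
    integrator, bits = 0.0, []
    for x in signal:
        bit = 1 if x > integrator else 0      # comparator: is the input higher?
        bits.append(bit)                      # flip-flop latches the decision
        integrator += step if bit else -step  # Ipos or Ineg fires
    return bits

def pdm_decode(bits, step):
    """The decoder runs the identical integrator on the received bit stream."""
    integrator, out = 0.0, []
    for bit in bits:
        integrator += step if bit else -step
        out.append(integrator)
    return out

fs, f, step = 644_000, 1_000, 0.02  # clock, tone, and an assumed step size
signal = [math.sin(2 * math.pi * f * i / fs) for i in range(2000)]
bits = pdm_encode(signal, step)          # the one-bit-per-clock digitization
rebuilt = pdm_decode(bits, step)
err = max(abs(a - b) for a, b in zip(signal, rebuilt))
```

With the clock this fast, the per-sample slope of the tone never exceeds the pump step, so the integrator stays within a small fraction of full scale of the input: the train of ones and zeros really does carry the waveform.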

I can already hear the objections from the Peanut Gallery. “If all you are transmitting is the difference between the actual and estimated signal values, how can you ever recover the *absolute* signal level? Isn’t that what you want to get back?”

Good question. Yes, it is the absolute value we want. Imagine this unlikely situation. There is no input. Then suddenly a very big sine wave comes along. The comparator notices that there is, like, wow, a really *big* difference between the integrator and input voltages. So it doles out one of its small dollops of charge, and 1/644,000th of a second later, compares again. Oops! It’s still behind, so it dumps in a little more charge, and so on. Will it *ever* catch up?

Technically, no. What happens is that the input signal *lets up*. The sine wave eventually reaches its maximum level and then falls back. At some point in its fall, the input voltage drops below the output of the integrator. At that point, the integrator voltage and the input are not far apart. Everything works out, with the estimated value close to the absolute value.
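The catch-up behavior is easy to see in a toy simulation. Here is the same tracking loop chasing a sudden jump (a step instead of a sine wave, just to keep the numbers obvious; the 0.01 step size is an assumption):

```python
def track(signal, step):
    # The loop can move only one fixed step per clock tick:
    # that is the slew-rate limit described in the text.
    integrator, out = 0.0, []
    for x in signal:
        integrator += step if x > integrator else -step
        out.append(integrator)
    return out

step = 0.01
signal = [0.0] * 10 + [0.8] * 200  # a sudden jump the loop cannot follow at once
model = track(signal, step)

lag_err = abs(signal[20] - model[20])    # shortly after the jump: far behind
final_err = abs(signal[-1] - model[-1])  # input stopped moving: caught up
```

Right after the jump the model is hopelessly behind; but once the input stops rising, the integrator climbs the rest of the way and then just twitches within one step of the true level.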

Footnote 1: A coulomb is the charge of about 6.24 × 10^18 electrons. It’s named after Melvin Coulomb, the French music-hall comic who discovered how easy it was to build up a huge static charge by shuffling across the carpet. Mel is tragically deceased, the victim of a jealous husband whose wife’s *behind* he (Mel) had zapped once too often. In accordance with French case law, the husband was acquitted.

Footnote 2: We assume that there is no output transformer to step it up.