Talk:Zero-order hold

The T factor in Zero-order hold

hi Dastew,

two things: please ask questions like the one you had at Zero-order hold on the talk page, not in the article itself. if you want to know why there is this different factor of T, it is discussed quite a bit in Talk:Nyquist–Shannon sampling theorem and also on the USENET group comp.dsp. the basic reason is that nearly all textbooks (Pohlmann: Principles of Digital Audio is a notable exception) sample with the Dirac comb without this leading factor of T; that causes each image to be scaled by 1/T, which requires the reconstruction LPF to have a passband gain of T to get the original x(t) out. here we include the leading T factor and the reconstruction LPF has a passband gain of 1. now this does not make any difference regarding the function of the ZOH, but the textbook convention creates confusion with that gain factor, confusion that we do not have here in WP. this was discussed quite extensively at the sampling theorem talk page. anyway, welcome to Wikipedia. r b-j 18:09, 24 January 2007 (UTC)

btw, you can respond here if you want, no need to go to my talk page. r b-j 18:14, 24 January 2007 (UTC)

Well this is wrong!!! Matlab does not agree with you. Octave does not agree with you.
and I don't agree with you. I have built and used different programs that are based on the ZOH without the T in the denominator and they work!!! Therefore this is wrong!
Doug Stewart B.E.Sc. P.Eng.
it is not wrong. (and try not to underestimate other editors here - there are some pretty smart cookies here.) and MATLAB and Octave have nothing to do with it (they both work with dimensionless numbers and T is not dimensionless). you need to think about what it means to have a hypothetical filter (the brickwall LPF used in reconstruction) with a dimensionful passband gain of T. how many dB is T? does the number of dB gain that corresponds to a gain of T depend on what units you use to express T? should it?
consider that you have a filter where the species of animal coming out is the same species going in (voltage-in, voltage-out or dimensionless-in, dimensionless-out). now ask yourself what is the dimension of the impulse response of such a filter? i respect your P.Eng. but you might have something that you could learn here. r b-j 18:11, 25 January 2007 (UTC)
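
A tiny Python sketch of the "how many dB is T?" point, with an arbitrary 1 ms sample period chosen purely for illustration:

 import math

 # A gain equal to the sample period T is dimensionful, so its dB value
 # changes with the unit of time even though the physical filter is the same.
 T_in_seconds = 0.001
 T_in_milliseconds = 1.0
 print(20 * math.log10(T_in_seconds))       # -60.0 "dB"
 print(20 * math.log10(T_in_milliseconds))  #   0.0 "dB"
 # A passband gain of 1 (the convention used in the article) is dimensionless,
 # so its 0 dB value does not depend on the unit of time.
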
Would you consider putting both versions on the page? And explain that your version has to have this extra T because of other problems related to how you are using it, but not really part of the ZOH?
I just took a look around the web and looked at about 30 places that disagree with you.
I found zero that agreed with you.
Given that every resource that I can find disagrees with you (I could list them all here but why bother), could you explain to me why:
Why do " MATLAB and Octave have nothing to do with it"?
Why are all these people wrong and you are the only one that knows the truth?
Doug


anything can be considered. a similar note used to be in the Nyquist–Shannon sampling theorem article; it was taken out here and was discussed a little here (there was some objection to including this note about scaling: Talk:Nyquist–Shannon sampling theorem#Simple_Solution [1] [2] [3]).
so what has happened is that we wanted consistency between the sampling theorem article and the zero- and first-order hold articles, since the holds simply ask the question "what LTI system can you put in place of the reconstruction LPF, without changing the baseband gain, that will convert the ideally sampled signal into the piecewise constant (or piecewise linear) function that represents the output of a DAC?" if you're going to be gain-neutral about this, your reconstruction LPF has to have a passband gain of 1 instead of the gain of T that is the convention in most, not all, textbooks (and that T factor is pushed ahead into the ideally sampled signal). then when you ask the question above, the answer is a filter with dimensionless gain and, in fact, a gain of 1 (0 dB) at DC (and close to it for most of the passband). it's not messed up by any factor of T. there was pretty much consensus to just take out this "Note about Scaling" and scale this the logical way even if it was not the convention in most textbooks. r b-j 01:08, 26 January 2007 (UTC)
What you have done is asked the wrong question. What you should be answering is:
What is the LTI system that will produce the "stair step" results of an A/D followed by a D/A?
This is all in the time domain. But what you are asking for is a particular result in the frequency domain. We have other methods for that (bilinear, matched pole-zero, etc.).
The only system that produces the correct step response (in the time domain) is the ZOH, and now you want to change it so it won't even work there. When I build a system that has to meet frequency-domain criteria, I do not use the ZOH.
You could put the two systems on the page and state what each is good for.

The T scaling question

User:Dastew wrote (by the way, Doug, when you use the talk page, put four squiggles (~~~~) at the end and it will expand into a signature with working link and date):

H_{\mathrm{ZOH}}(s)\, = \mathcal{L} \{ h_{\mathrm{ZOH}}(t) \} \,= \frac{1 - e^{-sT}}{sT} \
Every text book that I have shows the ZOH as:
H_{\mathrm{ZOH}}(s)\, = \mathcal{L} \{ h_{\mathrm{ZOH}}(t) \} \,= \frac{1 - e^{-sT}}{s} \
So why the difference?

Here's the deal. Either filter can work, depending on how you scale the impulse that you put into it. Consider the limit as s goes to zero, so you can see the DC gain of the two filters. The one in the article has a DC gain of 1. The one you prefer, which is indeed common in the literature, has a DC gain of T. Why?

We chose to use a filter with DC gain of 1 because it's nicer, and as Robert says, doesn't depend on what units you measure time in. The choice relates back to what you use as sampling impulses. The unity-gain-at-DC filter is correct for the case where the sampled signal, viewed as a stream of impulses, has the same average value or DC component (or any low frequency component for that matter) as the underlying bandlimited signal that the samples correspond to. This again is a very sensible condition. It means that if the analog bandlimited signal value is 1, you want to represent the sample with an impulse of area T, because you get a new one every T. The impulse you use to get this is T·δ(t − nT), which can also be written as δ(t/T − n). This latter form makes a lot more sense to me, because it compresses the time scale instead of multiplying the amplitude scale of the delta, but it confuses some people who don't see that they are equivalent.

If you use the more common sampling impulse, just δ(t − nT), the mean value of your impulse train, and the filter needed to model the ZOH, now have time factors in them, and so depend on the units with which you measure T. As you say, most textbooks put up with that; but not all. It doesn't hurt to use the cleaner formulation here, and as Robert points out it was well debated in the other article and this is where we settled out.

Your experience with Matlab and Octave is relevant to understanding this, too, because there you are using discrete sample values, not impulse trains, and you can't implement the specified Laplace-domain ZOH filter directly, so you have to model or approximate it somehow. If you think of the sample values as the mean values over the sampling interval T of the sample impulse T·δ(t − nT), then things will work out. But if you think of the discrete numbers as the integrals of the sample impulses, things will not work out, and you'll be off by a factor of T. The only conflict here is in how you need to interpret what you're doing to be consistent with one formulation or the other. You could add a section about that once you understand it.

Dicklyon 23:04, 26 January 2007 (UTC)
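A minimal NumPy sketch of the two interpretations described above, approximating the impulse train and the unity-DC-gain filter on a fine time grid (values and names are illustrative only):

 import numpy as np

 T = 0.01                                 # sample period (seconds), arbitrary
 dt = T / 100                             # fine grid standing in for continuous time
 t = np.arange(5000) * dt                 # 50 sample intervals, 100 grid points each
 x = np.cos(2 * np.pi * 5.0 * t)          # a 5 Hz test signal
 n_idx = np.arange(0, t.size, 100)        # grid indices of the sample instants nT
 samples = x[n_idx]                       # the discrete values x(nT)

 h = np.full(100, 1.0 / T)                # ZOH impulse response: height 1/T over [0, T), DC gain 1

 def reconstruct(impulse_areas):
     # impulse train convolved with h; each impulse of a given area is
     # approximated by a single grid point of height area/dt
     xs = np.zeros_like(t)
     xs[n_idx] = impulse_areas / dt
     return np.convolve(xs, h)[:t.size] * dt

 y_ok = reconstruct(T * samples)    # impulses T*x(nT)*delta(t-nT): staircase equals x(nT)
 y_off = reconstruct(samples)       # impulses   x(nT)*delta(t-nT): off by a factor of T
 print(np.allclose(y_ok[n_idx + 50], samples))       # True (mid-interval values)
 print(np.allclose(y_off[n_idx + 50], samples / T))  # True: wrong scale, as described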


But is H_{\mathrm{ZOH}}(0) = 1 \ really worth the 1/T gain here?:
h_{\mathrm{ZOH}}(t)\,= \begin{cases} \frac{1}{T} & \mbox{if } 0 \le t < T  \\ 0           & \mbox{otherwise} \end{cases} \
Can't all the arguments for dimensionless frequency response also be used to argue for dimensionless impulse-response?
nooooo. that's precisely not the case! in a continuous-time LTI system, what must the dimension of the impulse response be if the system has the same species of animal guzzouta as guzzinta? think of what is the dimension of the dirac delta or dirac comb if you accept the bonehead, neanderthal understanding of either? then think of an LTI system that is a simple "wire". r b-j 18:30, 29 January 2007 (UTC)
Robert, if you invite comments you shouldn't then thank Bob by using words like "bonehead". Better to point out coolly what's wrong with his logic, which is this: an impulse response of height 1 and width T can not really be said to be non-dimensional. Make it height 1/T and width T, then it will have an integral of 1, or DC gain of 1, and hence will qualify as a smoothing filter. If you don't normalize the height that way, then the filter will have a gain of T for any signal in the passband, which will depend on the units of measurement of T. Dicklyon 18:47, 29 January 2007 (UTC)
I do get that, but it's not the only side of the story.
If we sample signal x(t) = \cos(\omega t)\, at rate 1/T, we get x[n] = \cos(\omega nT)\,. Similarly, if we sample it at 8 times that rate, we expect to get y[n] = \cos(\omega nT/8)\,. Therefore y[8m] = x[m]\,, not x[m]/8\,.
And if we use a good interpolator to upsample \cos(\omega nT)\, by 8, we expect the output to resemble \cos(\omega nT/8)\,. That is commonly done by inserting 7 zero-valued samples in between each pair of original samples, and filtering the whole sequence with a lowpass filter. The better the filter, the better the fidelity. A rect() function, of width T (which is 8 samples at the new rate) is a lowpass filter, but not a very good one... resulting in the staircase effect. But the output will most closely resemble \cos(\omega nT/8)\, when the amplitude of the rect() is 1, not 1/8. I think this is why the 1/T factor is not in common usage.
--Bob K 05:17, 30 January 2007 (UTC)
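
A small NumPy check of the upsample-by-8 description above (the test signal and lengths are arbitrary):

 import numpy as np

 T = 1.0
 omega = 2 * np.pi * 0.03 / T           # a low frequency relative to the original rate
 x = np.cos(omega * np.arange(32) * T)  # original samples cos(omega n T)

 up = np.zeros(8 * x.size)
 up[::8] = x                            # insert 7 zero-valued samples between originals

 rect = np.ones(8)                      # amplitude-1 rect, 8 taps wide at the new rate
 y = np.convolve(up, rect)[:up.size]    # crude lowpass filtering -> staircase

 print(np.allclose(y[::8], x))          # True: y[8m] == x[m], with no 1/8 factor
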
"bonehead" and "neanderthal" is not at all directed toward anyone but myself and any other advocate that the distribution only definition and usage of the dirac delta is not particularly useful for our use as engineers and to describe physically what might be going on. the "bonehead" and "neanderthal" use of the dirac delta (which is my use of the dirac delta) is what the they call the "nascent" delta functions in the Dirac delta article. this is in contrast to the Cromagnon (LutzL?) understanding of the dirac as a distribution which is much more nuanced, but it might be more difficult to describe what the dimension of the dependent variable (the "y-axis" variable) is, since it's not really a function to begin with. i certainly didn't expect that i was insulting anyone but just qualifying what i mean when i say what the "dimension of the impulse function" is. r b-j 21:01, 29 January 2007 (UTC)
OK, I couldn't quite tell who/what it referred to, but it didn't feel right. Got it now. Dicklyon 21:59, 29 January 2007 (UTC)
I think so, and if most of the books prefer the latter, I think we need to reconsider our position. What I suggest is this modification:
"... the Zero-order hold is the hypothetical filter or LTI system that converts an ideally sampled signal:
x(t) \sum_{n=-\infty}^{\infty} \delta(t - nT)=\sum_{n=-\infty}^{\infty} x(nT)\cdot \delta(t - nT) \
to a piecewise constant signal:
x_{\mathrm{ZOH}}(t)\,= \sum_{n=-\infty}^{\infty} x(nT)\cdot \mathrm{rect} \left(\frac{t - nT}{T}-\frac{1}{2} \right) \   "


I.e., our x_s(t)\, notation is not really helpful here. It's just unnecessary baggage.
--Bob K 18:10, 29 January 2007 (UTC)


Complete re-write

I have probably just wasted a bunch of time, but here is what I think the article needs to say:


Figure 1. Impulse response of the zero-order hold, h_ZOH(t).
Figure 2. Piecewise constant signal x_ZOH(t).
Figure 3. A modulated Dirac comb, x_s(t).

The Zero-order hold (ZOH) is a mathematical model of the practical reconstruction of sampled signals done by conventional digital-to-analog converters (DACs). When a signal, x(t), is sampled at intervals of length T, we are left with just the discrete sequence x(nT), for integer values of n. A zero-order hold reconstructs the following continuous-time waveform from the samples:

x_{\mathrm{ZOH}}(t)\,= \sum_{n=-\infty}^{\infty} x(nT)\cdot \mathrm{rect} \left(\frac{t-T/2 -nT}{T} \right) \
where \mathrm{rect}() \ is the rectangular function.


\mathrm{rect} \left(\frac{t-T/2}{T} \right) is depicted in Figure 1, and x_{\mathrm{ZOH}}(t)\, is the piecewise constant function depicted in Figure 2.

In other words, most conventional DACs output a voltage proportional to the discrete sample value and hold that voltage constant for the duration of the sampling interval and then change that voltage rapidly to the value corresponding to the next discrete sample value.

The rect() function performs a crude, 1-sample interpolation to span the gap between the original signal samples. At the other extreme of sophistication is the impractical Whittaker–Shannon interpolation formula:

x_{\mathrm{sinc}}(t) = \sum_{n=-\infty}^{\infty} x(nT) \cdot \mathrm{sinc} \left( \frac{t -nT}{T} \right) \

Since each sinc() function is infinitely long, every interpolated value is a result of infinitely many samples.


The operations indicated in these formulas can be modelled mathematically as a filtering operation.
The input to the filter is a series of impulse functions, called a Dirac comb, modulated by the sample values:

x_s(t)\, = A \sum_{n=-\infty}^{\infty} x(nT)\cdot \delta(t - nT) \
= x(t) \ A \sum_{n=-\infty}^{\infty} \delta(t - nT) \

which is depicted in Figure 3.   "A\," represents an arbitrary constant that may be chosen to balance out the filter gain, so that the reconstructed signal has the same amplitude as x(t).

The filter is defined by its impulse response.   For a zero-order hold, the impulse response is:

h_{\mathrm{ZOH}}(t)\,=  \frac{1}{A}\cdot \mathrm{rect} \left(\frac{t -T/2}{T} \right)\

where A=1\, is a popular choice in the literature.


The effective frequency response is the continuous Fourier transform of the impulse response:

H_{\mathrm{ZOH}}(f)\, = \mathcal{F} \{ h_{\mathrm{ZOH}}(t) \} \,= \frac{1}{A}\cdot \frac{1 - e^{-i 2 \pi fT}}{i 2 \pi f} = \frac{T}{A}\cdot e^{-i \pi fT} \cdot \mathrm{sinc}(fT) \
We note that H_{\mathrm{ZOH}}(0)\, = \frac{T}{A}\,, which may make the choice A=T\, more appealing (than A=1) for some applications.

Compared to the "ideal" (Whittaker-Shannon) frequency response, H_{\mathrm{ZOH}}(f)\, has a roll-off of gain at the higher frequencies (a 3.9224 dB loss at the Nyquist frequency).   Above the Nyquist frequency, the roll-off continues, whereas the "ideal" response is a brick-wall cutoff.   x_s(t)\, is not bandlimited, so if x(t)\, is bandlimited, then the subsequent leakage of higher frequencies represents a difference between x_{\mathrm{ZOH}}(t)\, and x(t)\,.

  • Similarly, if x(t)\, is not bandlimited, then x_{\mathrm{sinc}}(t)\, is also an imperfect reconstruction.


And finally, the Laplace transform transfer function of the ZOH is found from H_{\mathrm{ZOH}}(f)\, by substituting s for i2\pi f\,:

H_{\mathrm{ZOH}}(s)\, = \mathcal{L} \{ h_{\mathrm{ZOH}}(t) \} \,= \frac{1}{A}\cdot \frac{1 - e^{-sT}}{s} \


--Bob K 23:11, 30 January 2007 (UTC)
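
A brief NumPy spot-check of the frequency-response expressions in this draft, with arbitrary test values of T and A:

 import numpy as np

 T = 1e-3
 f = np.array([1e-9, 1 / (4 * T), 1 / (2 * T)])  # ~DC, half Nyquist, Nyquist
 for A in (1.0, T):
     H = (1 / A) * (1 - np.exp(-2j * np.pi * f * T)) / (2j * np.pi * f)
     H_closed = (T / A) * np.exp(-1j * np.pi * f * T) * np.sinc(f * T)
     print(np.allclose(H, H_closed))             # True: the two forms agree

 # the roll-off relative to DC does not depend on A (or on T):
 print(round(20 * np.log10(np.sinc(0.5)), 4))    # -3.9224 dB at the Nyquist frequency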

Comment on Bob K's rewrite

Not bad, Bob. I like what you're trying to do here. However, I'm not convinced that it will be more intelligible, or that it will survive the WP:OR hammer. You generalize the choice between the two cases we're arguing over, which is novel, and is useful perhaps, once you understand it, but it's still very perplexing to a newbie to see the arbitrary A introduced. And then later when you say why A=T might be preferred, some more words are probably needed, to explain that T/A is the DC gain, and that by being one it makes a smoothing filter, which could also be applied to the underlying original bandlimited continuous-time signal to get the net system response, etc. Dicklyon 04:30, 31 January 2007 (UTC)

Thanks for the support. I was actually hoping that you would take over the A=T elaboration, so it comes out to your satisfaction. Feel free to edit my text.
Regarding "underlying original bandlimited continuous-time signal", please note that I have avoided that assumption. In fact the original x(t) could be the staircase-type function, x_{\mathrm{ZOH}}(t)\,, for all we know. The ZOH is a real-world operation, and real-world signals are not bandlimited (== infinite duration). The Whittaker-Shannon_interpolation_formula can be applied to any sequence. The only stipulation is that in order for it to reproduce the original input exactly, there can't be any aliasing. But I have not claimed that it reproduces the original input, just as the ZOH does not generally do that. I have only claimed that they are both interpolation formulas.
--Bob K 05:33, 31 January 2007 (UTC)
Right, I get that. What I meant though is that since there IS always a unique underlying bandlimited signal, even if that's not where the samples came from, the ZOH filter can be imagined applying to that signal. It's a bit messy though, since the filter is then not time-invariant, but it gives you a way to think about the rolloff of high frequencies when you think of the ZOH as a sort of smoothing filter. I'll have to think more about how/whether that works out in detail. Maybe I'm just grasping for a good excuse for making the DC gain be 1.
Rbj, what's your take on the proposed re-write? Dicklyon 05:47, 31 January 2007 (UTC)


okay, lemme start with a little reminder of some of the recent history. i am well aware of the convention in most of the textbooks and also have a bias toward consistency with the prevailing wisdom (which would normally be what is in the textbooks and other well-established literature) in Wikipedia. that is essentially what WP:NOR is about. my purpose is not to add my own OR, but to have consistency between articles of related topics here and also to not perpetuate a bona-fide mistake in convention. not all conventions are equally valuable (the 0-origin vs. 1-origin is another example), but here we are not dealing with an existing language or product where we simply have to put up with the convention, good or bad, as it exists. here, we can choose to not perpetuate a mistaken convention. now, in the Nyquist-Shannon sampling theorem i was originally thinking of sticking with the common convention and fixing it with that "Note about scaling" so that this article could be done correctly. in one sense, where you put that gain of T doesn't matter so much as long as you have it in there somewhere. but here it is different. it is not about where the T goes, but whether it goes in or not.
now, when you contemplate the "effect" or more specifically the "net effect" of doing "something", you need to "compare apples to apples". otherwise the difference in comparing apples to oranges may not be entirely due to the "something" but there may be some component of the difference that is due to the difference between apples and oranges.
when one asks "what is the effect (on frequency response) of the inclusion of the zero-order hold?" what you are comparing is
x(t) = \sum_{n=-\infty}^{\infty} x[n] \cdot \mathrm{sinc} \left( \frac{t-nT}{T} \right) \
to
x_{\mathrm{ZOH}}(t)\,= \sum_{n=-\infty}^{\infty} x[n] \cdot \mathrm{rect} \left(\frac{t-nT-T/2}{T} \right) \ .
both of these signals are modeled as the output of an LTI that is driven by the sampled signal x_s(t) that has been expressed in 2 different forms (or 3 if you include BobK's A way).
now it doesn't matter whether or not you use
x_\mathrm{s}(t)\, = x(t) \cdot \left( \sum_{n=-\infty}^{\infty} \delta(t-nT) \right) \
= \sum_{n=-\infty}^{\infty} x(t) \cdot \delta(t-nT) \
= \sum_{n=-\infty}^{\infty} x(nT) \cdot \delta(t-nT) \
= \sum_{n=-\infty}^{\infty} x[n] \cdot \delta(t-nT) \


OK, for the record your "first case" is my A=1\, case. --Bob K 12:54, 5 February 2007 (UTC)


and a reconstruction brick-wall filter with passband gain of T ...
... or use
x_\mathrm{s}(t)\, = x(t) \cdot \left( T \sum_{n=-\infty}^{\infty} \delta(t-nT) \right) \
= T \sum_{n=-\infty}^{\infty} x(t) \cdot \delta(t-nT) \
= T \sum_{n=-\infty}^{\infty} x(nT) \cdot \delta(t-nT) \
= T \sum_{n=-\infty}^{\infty} x[n] \cdot \delta(t-nT) \


And your 2nd case is my A=T\,. --Bob K 12:54, 5 February 2007 (UTC)


and a reconstruction brick-wall filter with a passband gain of 1. but whichever you use, you must be consistent and use the same brick-wall reconstruction filter in the case of using the ZOH as when you leave out the ZOH to determine the net effect of the ZOH. in either case, for frequencies below Nyquist (for frequencies above Nyquist, there is a problem of dividing by zero, if your brickwall filter is perfect) the complex transfer function from x(t) to x_ZOH(t) is
H_\mathrm{ZOH}(f) = e^{-j \pi f T} \mathrm{sinc}(f T) \
in the first case you have a brickwall filter with passband gain of T, and a filter that takes you from
x_\mathrm{s}(t) = \sum_{n=-\infty}^{\infty} x[n] \cdot \delta(t-nT) \
to
x_{\mathrm{ZOH}}(t)\,= \sum_{n=-\infty}^{\infty} x[n] \cdot \frac{1}{T} \mathrm{rect} \left(\frac{t-nT-T/2}{T} \right) \


Wrong.   When A=1,   h_{\mathrm{zoh}}(t)\,= \mathrm{rect} \left(\frac{t -T/2}{T} \right)\   and takes you from   x_\mathrm{s}(t) = \sum_{n=-\infty}^{\infty} x[n] \cdot \delta(t-nT) \   to   x_{\mathrm{zoh}}(t)\ \stackrel{\mathrm{def}}{=}\ \sum_{n=-\infty}^{\infty} x[n] \cdot  \mathrm{rect} \left(\frac{t-nT-T/2}{T} \right) \.

--Bob K 12:54, 5 February 2007 (UTC)


and that eventually followed by the brickwall with passband gain of T.


???   No, x_{\mathrm{zoh}}(t)\, does not subsequently go through any brickwall filter. --Bob K 12:54, 5 February 2007 (UTC)


the T and 1/T kill each other off and this essentially becomes the same as the second case which is the brickwall filter with passband gain of 1 and a filter that takes you from
x_\mathrm{s}(t) = T \sum_{n=-\infty}^{\infty} x[n] \cdot \delta(t-nT) \
to
x_{\mathrm{ZOH}}(t)\,= \sum_{n=-\infty}^{\infty} x[n] \cdot \mathrm{rect} \left(\frac{t-nT-T/2}{T} \right) \.
Now what's happening in the first case is that they are replacing the brickwall filter of passband gain of T (which gives you the unscaled x(t) back) with a ZOH that has that gain absorbed in the transfer function:
H_\mathrm{ZOH}(f) = T e^{-j \pi f T} \mathrm{sinc}(f T) \
or, if you replace j2πf (= jω) with s,
H_\mathrm{ZOH}(s) = T \frac{1 - e^{-s T}}{s T} = \frac{1 - e^{-s T}}{s} \
but if you took out the ZOH, you wouldn't replace it with a gain of 1, you would replace it again with a gain of T.


This conclusion is based on the previous errors that I have pointed out. Whatever it is you are trying to say might also be incorrect. --Bob K 12:54, 5 February 2007 (UTC)


the net effect of having the ZOH is still the component of H(s) that multiplies the T. O&S recognized this when they put in for their compensation filter one with a transfer function of
\tilde{H}_d \left( e^{j \omega} \right) =  \begin{cases}    \frac{\omega/2}{\sin(\omega/2)}, & \quad 0 \le \omega \le \omega_p , \\   0,                               & \quad \omega_s \le \omega \le \pi   \end{cases}
This is around pp. 484-485, section 7.7.2 in my (c) 1989 edition of Discrete-Time Signal Processing.
in conclusion, it doesn't matter which way you go with the sampling theorem (except i'll add that there is a serious pedagogical problem with the common convention that leads naive students into asking the question "How do I design an LPF with a gain of T?"), but the net consequence of the hold action of the DAC, the "Zero-order hold", is a dimensionless frequency response that is 0 dB at DC and something around -4 dB at Nyquist. moreover, how can you even compute the dB gain (or loss) when there is this dimensionful factor of T stuck in there?? it is nonsensical.
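
A quick NumPy check that an O&S-style compensation of the form above is the exact reciprocal of the ZOH droop in the passband (frequencies chosen arbitrarily):

 import numpy as np

 T = 1.0
 f = np.linspace(0.01, 0.45, 9) / T       # passband frequencies below Nyquist
 omega = 2 * np.pi * f * T                # discrete-time radian frequency
 droop = np.sinc(f * T)                   # ZOH roll-off relative to the brickwall
 comp = (omega / 2) / np.sin(omega / 2)   # compensation magnitude, as in the formula above
 print(np.allclose(droop * comp, 1.0))    # True: the cascade is flat (0 dB) below Nyquist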

Here is your quote from above:

when one asks "what is the effect (on frequency response) of the inclusion of the zero-order hold?" what you are comparing is
x(t) = \sum_{n=-\infty}^{\infty} x[n] \cdot \mathrm{sinc} \left( \frac{t-nT}{T} \right) \
to
x_{\mathrm{zoh}}(t)\,= \sum_{n=-\infty}^{\infty} x[n] \cdot \mathrm{rect} \left(\frac{t-nT-T/2}{T} \right) \ .


That comparison is unitless and totally out of our control. Set x[n] = \delta[n]\,, look up a couple of Fourier transforms, and that's it. It has nothing to do with which value of A\, you prefer. A\, has nothing to do with it.

--Bob K 12:54, 5 February 2007 (UTC)


this net effect of the ZOH is something real and is not arbitrary. to just say "we'll toss in a fudge factor of T or 1/T or A or whatever" is neither accurate nor elucidating. it is actually a consequence of the less than ideal convention they use in the first place. O&S recognized this and, at least, they got their compensation correct (because they compared their ZOH to the gain of T, not to 1) but others fail to do this correctly. i don't see how it is useful for Wikipedia to reinforce an actual quantitative mistake (in the transfer function) because of a common (but poor IMO) choice of convention. the convention is arbitrary as long as you keep your ducks lined up, but the effect of the hold action of a DAC is not. the gain factor is not arbitrary. at DC you have 0 dB loss and around Nyquist, it's about -4 dB loss and there is no dependence on a T factor that is not even a dimensionless factor to begin with.


If the ZOH input is the discrete x[n] sequence, then there is no debate about its "net effect" or its reality. But if the input is the modulated Dirac comb, which exists only in our imaginations, then the ZOH is not "real" either. And its individual effect is not the "net effect" of the model.

--Bob K 15:16, 5 February 2007 (UTC)


i'm going home to sleep now. i've had a long, long day. -- r b-j 03:57, 1 February 2007 (UTC)
Yes, but what do you think of Bob's rewrite? Should we try to use it, mutatis mutandis? Dicklyon 04:53, 1 February 2007 (UTC)


I can't agree with you anymore, Rbj. What the DAC actually does is not at issue. That is why I moved it to the top of the re-write. Your student who wants to build a DAC can stop reading right there. If instead, he tries to implement the theoretical model, generating a Dirac comb will be just as problematic as "designing an LPF with a gain of T". The two problems cannot physically be separated.

You say the hold action of the DAC, the "Zero-order hold", is a dimensionless frequency response that is 0 dB at DC and something around -4 dB at Nyquist.   But what is 0 dB?   It is a ratio of 1.   It's the ratio of   |X_{\mathrm{zoh}}(f)|\,   to   |X_{\mathrm{sinc}}(f)|\,.   Those interpolations are both defined only in terms of the samples, x(nT), and T.   It has nothing to do with the value of A\, that we are quibbling about.

--Bob K 15:34, 1 February 2007 (UTC)

Bob, as was our pattern before, i am not sure you and i are communicating effectively (i, at least, am confused - i fully agree, AFAICT, with the second paragraph above and cannot see how that jibes with the first). i made a different notation than you (replacing x_sinc(t) with just x(t))... r b-j 16:04, 1 February 2007 (UTC)
Yes, you are quite right to do that, because x(t) need not be bandlimited. I mention it here and now because your subsequent mistake is to talk only about the transfer function between x_s(t)\, [instead of x(t)\,] and x_\mathrm{zoh}(t)\,.
--Bob K 14:01, 2 February 2007 (UTC)
but in every other way, i stand by every statement made above, which AFAICT is fully compatible with your second paragraph in your current response here. i cannot fully decode your first paragraph (except for the second sentence, which i disagree with), and i do not understand what it is you're disagreeing with. r b-j 16:04, 1 February 2007 (UTC)
Rbj, can you please point out what sentences and paragraphs you are referring to that you agree with and can't parse and don't agree with? I can't figure it out. Is it Bob's short paragraphs immediately above, where the second sentence, that you disagree with, is "What the DAC actually does is not at issue."? If so, please explain what you mean by disagreeing with that, as it's pretty fundamental to discussing any of the rest that we agree on what a DAC does. Or is it something else about how Bob showed that the choice of scaling is not just between two choices, but is a continuum with a free factor A? Dicklyon 04:18, 2 February 2007 (UTC)
i just don't think there is anything more i can say but to just repeat the point ad nauseam. one more repetition, vomit if you like: when there is a quarter in the offering plate and i take it out and plop down a dollar bill, is it accurate to say "I just put down a dollar!"? it is, in one sense, but it certainly would not be accurate to say "I just contributed a dollar."
in one case, where
x_\mathrm{s}(t) = T \sum_{n=-\infty}^{\infty} x[n] \cdot \delta(t-nT) \,
the hypothetical brickwall reconstruction filter
H(f) = \mathrm{rect}(f T) \
is replaced by
H_\mathrm{ZOH}(f) = e^{-j \pi f T} \mathrm{sinc}(f T) \.
in the other case, where
x_\mathrm{s}(t) = \sum_{n=-\infty}^{\infty} x[n] \cdot \delta(t-nT) \,
the hypothetical brickwall reconstruction filter
H(f) = T \mathrm{rect}(f T) \
is replaced by
H_\mathrm{ZOH}(f) = T e^{-j \pi f T} \mathrm{sinc}(f T) \.
what is the Δ(dB) as a function of f in each case? r b-j 04:57, 2 February 2007 (UTC)

As you ignored twice yesterday, (below), the transfer function from x(t)\, [not x_s(t)\,] to x_\mathrm{zoh}(t)\, is:

\frac{A}{T}\cdot H_\mathrm{zoh}(f)= \frac{A}{T}\cdot \frac{T}{A}\cdot e^{-i \pi fT} \cdot \mathrm{sinc}(fT)= e^{-i \pi fT} \cdot \mathrm{sinc}(fT)\,,

regardless of which A\, you prefer.

--Bob K 13:40, 2 February 2007 (UTC)

i am crapped out arguing this and i do regret bringing it up. not that i expected a rubber stamp, but we shouldn't be reinforcing the misconception created by the not-so-good convention. it is only a convention, but not all conventions are equally useful.
if you can get access to the Journal of the Audio Engineering Society i can point you to where this weakness in convention actually made a widely respected author (Barry Blesser: The Digitization of Audio, 1979) make this very same mistake (and, nearly a decade later, where i corrected it in the Journal, of which i am now a reviewer BTW). we can do either convention in Nyquist-Shannon sampling theorem, but if we do the common convention, there needs to be a "Note about scaling", and doing it the way we did makes for a more concise presentation and consistency with this and with First-order hold so as to not get the wrong answer. but, because it was not the common convention, i was surprised (pleasantly) that you and BobK so quickly adopted it, and it was BobK who put it in the article (not me) and you endorsed it in the talk page explicitly. (which, again, was quite surprising and pleasantly so.)
but now, because it isn't the common convention, some chickens are coming home to roost. and this effect of ZOH is not just a convention. it is a real thing with an unambiguous comparison of with ZOH to without ZOH. O&S do this right (even though they use the common convention) but any textbook that says that the effect of ZOH is
H_\mathrm{ZOH}(f) = T e^{-j \pi f T} \mathrm{sinc}(f T) \.
is wrong. they neglected to subtract the quarter from the dollar. O&S do it right. Pohlmann Principles of Digital Audio also does it right, but he uses the scaling convention we use here so the presentation is a little bit easier. (so User:Dastew is not accurate saying that no textbooks define the ZOH this way.) r b-j 04:57, 2 February 2007 (UTC)
Rbj, I don't think we need the rant. It's clear why you want the gain of 1 at DC, so you can measure the droop in dB relative to a flat reconstruction. You can still do that with any factor if you compare the two (sinc versus flat) with same gain, but if you want to simplify the comparison, then the DC gain of 1 is better. We are all in agreement. But instead of the rant, you could answer my question about what part of Bob's statement you don't agree with, so I can try to help.
As the article stands, the lead is misleading, in my opinion, and Bob's lead works much better. So let's focus on that instead. Presently, it says, among other things "A mathematical model such as the ZOH (or possibly the first-order hold) is necessary because, in the sampling and reconstruction theorem...". Now, to me, this doesn't make sense, since the ZOH can well be discussed without reference to the sampling theorem. It has nothing to do with sampling, only with reconstruction, and its reconstruction isn't what's called for in the sampling theorem. Sure, you can compare it, but that comparison is not what makes understanding a DAC's ZOH necessary. So the "is necessary because" assertion is just wrong. ZOH should stand on its own. The ZOH should be completely described in the time domain, and in terms of the sample values, with no deltas, before it is then further modeled as a filter applied to an impulse train. That way it can be understood with simple math; the frequency-domain characterization is gravy, for people who want to know more. In terms of connecting with the sampling theorem you can, as I did, assume a unique underlying bandlimited signal (the same as what you get by sinc reconstruction); but Bob points out, correctly, that you don't need to assume that to discuss the ZOH. So, how much of that can you agree with? Dicklyon 07:23, 2 February 2007 (UTC)
I just noticed that even Bob uses sampling to get x(nT), which is not even necessary for defining and discussing ZOH; in many cases, the discrete-time signals x[n] are synthetic, not samples of any x(t). Might as well just work in terms of those to start. Then your first equation would be:
x_{\mathrm{ZOH}}(t)\,= \sum_{n=-\infty}^{\infty} x[n]\cdot \mathrm{rect} \left(\frac{t-T/2 -nT}{T} \right) \
No infinite pulses, no scale factors, no assumed underlying x(t), just a simple definition of what a ZOH does to a sequence of values. Dicklyon 07:33, 2 February 2007 (UTC)
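
A minimal NumPy sketch of this sequence-only view, with arbitrary example values:

 import numpy as np

 x = np.array([0.0, 1.0, 0.5, -0.25, 2.0])  # an arbitrary sequence of values
 K = 10                                     # grid points per hold interval
 x_zoh = np.repeat(x, K)                    # the piecewise-constant (staircase) output
 print(np.allclose(x_zoh[::K], x))          # True: each value is simply held
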
Same problem I saw in my own logic, where I said "Maybe I'm just grasping for a good excuse for making the DC gain be 1." A gain of 1 at DC is "nice", but this is just a model, where the internal scaling of impulses and filter gain are irrelevant in the end (Bob's A). As for consistency with another article, that's overrated, too. It's "nice", but each article can stand alone and derive its own results in its own way and still be OK.
As I said, Bob's writeup that tries to make it clear why there's a free choice of A is an interesting step, but, sort of like the "note on scaling", it puts more distractions into the derivation than you really need. Bob, is there any problem with the article as it is? Maybe the problem is just how the whole thing is framed in the lead: "A mathematical model such as the ZOH (or possibly the first-order hold) is necessary because, in the sampling and reconstruction theorem,..." What if instead we explain what a zero-order hold IS, and then get into the model later. What it IS is a thing that holds a sample value constant for an interval T, to make a continuous-time signal from a discrete set of samples. How you analyze its effect is then a subsequent question. Bob, I now see that your rewrite took pretty much this approach, which I failed to appreciate before with the A distraction. How about another rewrite? Dicklyon 15:57, 1 February 2007 (UTC)
there is no "free choice" with A. we have choices about what the Vref is applied to the ADC and DAC, and assuming those Vref's are the same (that the ADC and DAC scale between voltage and the converted numbers identically), that this A is T or 1/T or however Bob put it. it is not arbitrary. you guys need to work this out step-by-step. r b-j 16:04, 1 February 2007 (UTC)

A\, is a free choice, because x_s(t)\, is something that we construct (in our imaginations) from the samples. We can construct anything we want. No matter which gain you prefer for your rect() filter, I can construct an x_s(t)\, that will produce the same amplitude as x(t)\,.

--Bob K 16:21, 1 February 2007 (UTC)

using your notation, the gain from   |X_{\mathrm{sinc}}(f)|\,   to   |X_{\mathrm{zoh}}(f)|\,   is not a free choice, is deterministic, and has no A in it. and that is what the effect of the ZOH is about. r b-j 16:43, 1 February 2007 (UTC)


Exactly. A\, does not matter. It is arbitrary. --Bob K 16:59, 1 February 2007 (UTC)

no, Bob. A isn't in there because it is fixed or set to T. you see T instead. r b-j 19:17, 1 February 2007 (UTC)
You are free to pick A=T, while most of the rest of the world picks A=1. And you will have to construct your x_s(t)\, with a different gain than the rest of the world. But that's OK.
--Bob K 17:05, 1 February 2007 (UTC)
no, Bob. assuming the reference voltage or full-scale voltage is the same in the ADC and DAC, when you go from x(t) to x_zoh(t), the frequency response is:
H_\mathrm{ZOH}(f) = e^{-j \pi f T} \mathrm{sinc}(f T) \
there is no other choice of A. (there is no A.) you can choose another scaling if you want, but it needs to be dimensionless, and if it isn't 1 your output (at low frequencies) is not the same size as your input. this is math, not art, not politics, not personal preference. you've done nothing to refute any of the mathematical statements in the article or in my response above. and, indeed, you cannot. the math (or "maths" if you're on the other side of the pond) is clear and unambiguous. if your ADC and DAC are set up so that when 5 volts DC goes in, then 5 volts DC comes out, the frequency response of the ZOH is the equation above. no A factor. and that is a consequence of a net transfer function of
H_\mathrm{ZOH}(s) = \frac{1 - e^{-s T}}{s T} \.
no A factor. r b-j 19:17, 1 February 2007 (UTC)


x(t)\, is not the input to your h_\mathrm{zoh}(t)\, filter. The input is x_s(t)\,. The frequency response you are talking about includes the gain of the filter and the gain of the Dirac comb. So if I hadn't done anything before to mathematically refute your statements, I have now.

--Bob K 19:36, 1 February 2007 (UTC)

no, ignoring them is not refuting them. to repeat:
Now what's happening in the first case is that they are replacing the brickwall filter of passband gain of T (which gives you the unscaled x(t) back) with a ZOH that has that gain absorbed in the transfer function:
H_\mathrm{ZOH}(f) = T e^{-j \pi f T} \mathrm{sinc}(f T) \
or, if you replace j2πf (= jω) with s,
H_\mathrm{ZOH}(s) = T \frac{1 - e^{-s T}}{s T} = \frac{1 - e^{-s T}}{s} \
but if you took out the ZOH, you wouldn't replace it with a gain of 1, you would replace it again with a gain of T. the net effect of having the ZOH is still the component of H(s) that multiplies the T.
you gotta be comparing apples to apples to meaningfully discuss the net effect of anything. and you are not. r b-j 19:42, 1 February 2007 (UTC)


You are the one who is ignoring. So I repeat:

H_\mathrm{zoh}(f)\, is only the transfer function from x_s(t)\, to x_\mathrm{zoh}(t)\,.

\mathcal{F}\{x_s(t)\} = \frac{A}{T}\sum_{k = -\infty}^{\infty} X(f - {k f_s})

So the transfer function from x(t)\, to x_\mathrm{zoh}(t)\, is   \frac{A}{T}\cdot H_\mathrm{zoh}(f)\,, where:

H_{\mathrm{zoh}}(f)\, = \frac{1}{A}\cdot \frac{1 - e^{-i 2 \pi fT}}{i 2 \pi f} = \frac{T}{A}\cdot e^{-i \pi fT} \cdot \mathrm{sinc}(fT) \

The A's cancel out.   So A\, is arbitrary and does not matter.

--Bob K 20:39, 1 February 2007 (UTC)
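
A numeric NumPy spot-check of this cancellation, with arbitrary T and test frequencies:

 import numpy as np

 T = 1e-3
 f = np.linspace(10.0, 450.0, 8)           # a few frequencies below Nyquist (500 Hz)

 def net_response(A):
     H_zoh = (1 / A) * (1 - np.exp(-2j * np.pi * f * T)) / (2j * np.pi * f)
     return (A / T) * H_zoh                # transfer function from x(t) to x_zoh(t)

 print(np.allclose(net_response(1.0), net_response(T)))   # True: A drops out
 print(np.allclose(net_response(1.0),
                   np.exp(-1j * np.pi * f * T) * np.sinc(f * T)))  # True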

Synthesized our ideas

I think the rework I just did synthesizes our various concerns. Let me know if I screwed up anything. I included something like the old note on scaling (in which the freedom to scale is left as merely implicit), to address the concern that started all this, referenced Pohlmann so it's clear that it's not OR, and made the simple description of the ZOH before the more elaborate frequency-domain method that needs the impulses in the model. In all cases I used x[n] as the samples, and mentioned an underlying original signal only near the end with respect to comparing with the Whittaker–Shannon interp. I based it mostly on Rbj's text that was there, so may have missed some of Bob's points. Please rework as you see fit, as it will be easier to see where we differ from how we write it than from our discussions, I think. Dicklyon 07:36, 3 February 2007 (UTC)

It's a big improvement. It could be better in several ways:
1. Why switcheroo from \mathrm{rect} \left(\frac{t-T/2 -nT}{T} \right)\,   and   \mathrm{rect} \left(\frac{t-T/2}{T} \right)\,   to   \mathrm{rect} \left(\frac{t - nT}{T}-\frac{1}{2} \right) \,   and   \mathrm{rect} \left(\frac{t}{T}-\frac{1}{2} \right)\,   ?
That's just because I picked up one form from you and one from Rbj. Either is OK by me. Dicklyon 17:01, 3 February 2007 (UTC)
I prefer the first way. The mixture is preferable to not showing it at all, but I don't think the mixture adds any value. --Bob K 15:09, 4 February 2007 (UTC)
I'm OK with that. But I'm going to leave it alone for now, and let you or Rbj or someone else choose how to fix it. I suspect he'll have other things he might want to tweak anyway. Dicklyon 17:48, 4 February 2007 (UTC)
2. Incongruous that you say "resulting in a low-pass filter model with a DC gain of T, and hence dependent on the units of measurement of time", but you have nothing to say about h_{\mathrm{zoh}}(t)\,=  1/T\,. Either explain why the 1/T is preferable (which I believe will sound pretty lame) or just don't mention it (because frankly, IT DOES NOT MATTER).
I don't understand. The 1/T is to get the DC gain of 1. What more is needed? Dicklyon 17:01, 3 February 2007 (UTC)
You are implementing a filter whose impulse response has units of 1/T "and hence dependent on the units of measurement of time". Why is that any less remarkable than the DC gain? The instantaneous gain is what the implementer actually cares about. The DC gain is just a result of the implementation. We appear to be making a value judgment without justification, and we can't even claim "convention".
--Bob K 15:09, 4 February 2007 (UTC)
Bob, for any LTI filter where the species of animal coming out is the same as going in, the dimension of the impulse response is 1/time. it's completely appropriate that such an impulse response be dimensionful in that manner and not correct for it to be dimensionless. r b-j 05:14, 6 February 2007 (UTC)
I don't know what instantaneous gain of a filter is. DC gain of unity is the convention I've always used for lowpass filters, perhaps because it was inherited from analog RLC filters. You'll find it in lots of books on digital filters, too. Dicklyon 17:48, 4 February 2007 (UTC)
it's not just the DC gain. it is the gain where x(nT) rect((t-nT)/T) has the same value as x(t) when t=nT. that is a unique and non-arbitrary scaling. and it makes a difference if such a digital filter with an ADC and conventional DAC is placed in a loop (as it would be in a servo-control system). this constant gain is not arbitrary. it is a design parameter that is something to have control over and a design parameter that manifests a measurable and behavioral difference in a system with both analog and digital subsystems. that's why this misplaced T can lead to trouble if you don't line your ducks up. r b-j 05:14, 6 February 2007 (UTC)
Actually, the answer I have been seeking (but I didn't know it) is that the convolution integral contains a dt\, factor, which has units of time. So the units of h(t) are indeed the reciprocal of time. --Bob K 11:23, 5 February 2007 (UTC)
3. Most importantly, point out that H_{\mathrm{zoh}}(f)\, is only   \frac{X_{\mathrm{zoh}}(f)}{X_s(f)}\, .
Again, it's not clear to me why it's so important to explain that, if we've already said that we've picked x[n] and x_ZOH to have the same mean value.
That is not the gain that Rbj intends to be unitless. It's the one below, as indicated by his 16:04, 1 February 2007 entry, where he refers to x(t)\,, not x_s(t)\,. And in fact it is unitless, regardless of which convention we use. So there is nothing to argue about. The convention is irrelevant.
It's also important as a purely practical matter, because in the vast majority of cases the discrete sequence does represent sampling. We do our readers an injustice to overlook that reality. Leave that to the math journals.
--Bob K 15:09, 4 February 2007 (UTC)
I didn't interpret Rbj's comments that way. But no matter, I said it the way that makes sense to me. Dicklyon 17:48, 4 February 2007 (UTC)
dunno for sure that anybody is getting it the way i meant. i thought so last June. i'll wait a little longer for the dust to settle. (and i'm more WPreoccupied by a cultural battle at Marriage at the moment.) r b-j 05:14, 6 February 2007 (UTC)

Rbj also says:

when one asks "what is the effect (on frequency response) of the inclusion of the zero-order hold?" what you are comparing is
x(t) = \sum_{n=-\infty}^{\infty} x[n] \cdot \mathrm{sinc} \left( \frac{t-nT}{T} \right) \
to
x_{\mathrm{zoh}}(t)\,= \sum_{n=-\infty}^{\infty} x[n] \cdot \mathrm{rect} \left(\frac{t-nT-T/2}{T} \right) \ .

With x[n] = \delta[n]\,, the frequency responses are:

\mathcal{F} \left \{\mathrm{sinc}(t/T)\right \} = T\cdot \mathrm{rect}(Tf)\,
\mathcal{F} \left \{\mathrm{rect} \left(\frac{t-T/2}{T} \right)\right \} = T\cdot e^{-i \pi fT} \mathrm{sinc}(fT)\,


And the relative response (or "comparison" in Rbj's words) of the ZOH, for |f| < Nyquist, is just:   e^{-i \pi fT} \mathrm{sinc}(fT)\,.  

Maybe another way of framing the controversy between Rbj and Doug is whether or not the T\cdot e^{-i \pi fT} \mathrm{sinc}(fT)\, should be divided by the T\cdot \mathrm{rect}(Tf)\,.


Regardless of that, I would like some comment on the discussion about "net effect" (15:16, 5 February 2007 (UTC)). For your convenience, I will copy/paste what I think is the key point:

If the ZOH input is the discrete x[n] sequence, then there is no debate about its "net effect" or its reality.   But if the input is the modulated Dirac comb, which exists only in our imaginations, then the ZOH is not "real" either.   And its individual effect is not the "net effect" of the model.

The model contains two components, each with potential gain. And since we can make them cancel, internally, the gain does not change the "net effect". Looking at just one component in a vacuum is what's causing all the debate.

  • And besides that, I think my initial instinct was right. We don't need any Dirac combs to talk about the ZOH and to analyze its frequency response. Without the comb, all we have is the "net effect", and I think we can all agree on that. No comb, no debate.

--Bob K 11:30, 6 February 2007 (UTC)

If there is an underlying signal, x(t), the transfer function   \frac{X_{\mathrm{zoh}}(f)}{X(f)}\,
is just   e^{-i \pi fT} \mathrm{sinc}(fT)\,   regardless of which "convention" you choose, because it includes the gain associated with the transform from x(t)\, to x_s(t)\,.
--Bob K 16:15, 3 February 2007 (UTC)
Agreed, but then I'd have to introduce the spectrum of the underlying signal; I didn't really want to go there. But you could. Dicklyon 17:01, 3 February 2007 (UTC)
I got weary of being reverted. I had to decide not to care so much. --Bob K 15:09, 4 February 2007 (UTC)
BTW, without x(t)\, and X(f)\,, I have a hard time even caring about H_{\mathrm{zoh}}(f)\,. But maybe that's just me.
--Bob K 16:34, 4 February 2007 (UTC)
It's still the deviation from a spectrally-flat construction of continuous-time signal from discrete samples, whether there's an original waveform or not. And if there is an original waveform, who is to say that it was bandlimited? The ZOH can reconstruct a sampled signal exactly if the original signal was the output of a ZOH at the same sample rate. The "special" property of the bandlimited x(t) with respect to the sampling theorem shouldn't have so much to do with how we describe the ZOH. Dicklyon 17:48, 4 February 2007 (UTC)
At the risk of total confusion, let me point out that you might mean an all-pass filter (i.e. no interpolation at all), rather than the brick wall (W-S) interpolator. And that reintroduces the Dirac comb in a whole new way:
x(t) = \sum_{n=-\infty}^{\infty} x[n] \cdot \mathrm{sinc} \left( \frac{t-nT}{T} \right) \
becomes:
x(t)\, = \sum_{n=-\infty}^{\infty} x[n] \cdot \delta \left( \frac{t-nT}{T} \right) \
=T\cdot \sum_{n=-\infty}^{\infty} x[n] \cdot \delta (t-nT) \
And clearly the T-factor is consistent with the other interpolators.
With x[n] = \delta[n]\,, the frequency response is:   \mathcal{F} \{T\cdot \delta(t)\} = T\,
So again, the diff between you and Doug is that he is talking about an unnormalized (absolute?) response and you are talking about a ratio of two different responses to one input:
\frac{T\cdot e^{-i \pi fT} \mathrm{sinc}(fT)}{T} = e^{-i \pi fT} \mathrm{sinc}(fT)\,
But having a ratio (which can be converted to dB) does not make you right and him wrong, as Rbj would suggest. If Doug wants a dB plot, he can normalize by H(0) or by max(H(f)). It's done all the time. Doug is talking about apples and you are talking about oranges. So everybody just needs to define their terms clearly and acknowledge the other points of view and conventions. The users can take it from there and do whatever they need to do.
--Bob K 13:26, 6 February 2007 (UTC)
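
A short NumPy illustration that normalizing either convention by its DC value gives the same dB curve (values are illustrative only):

 import numpy as np

 T = 1e-3
 f = np.linspace(1e-6, 1 / (2 * T), 200)

 H_textbook = T * np.abs(np.sinc(f * T))   # "absolute" response, DC gain T
 H_unity = np.abs(np.sinc(f * T))          # unity-DC-gain convention

 dB_textbook = 20 * np.log10(H_textbook / H_textbook[0])
 dB_unity = 20 * np.log10(H_unity / H_unity[0])
 print(np.allclose(dB_textbook, dB_unity)) # True: same dB plot either way
 print(round(float(dB_unity[-1]), 4))      # about -3.9224 at Nyquist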

Another approach

Another approach that would come out simpler, I think, would be to work the problem first assuming that time is measured in samples. That is, the sample rate is one and T is one sample. After you get the equations, explain that measuring time with respect to seconds just scales the time and frequency axes. Much cleaner, no room for arguing about scale factors. Lines like the ones below avoid all the mess about where to put the T.

A zero-order hold reconstructs the following continuous-time waveform from a sample sequence x[n], assuming unit sample rate (that is, continuous time t is measured in units of samples):

x_{\mathrm{ZOH}}(t)\,= \sum_{n=-\infty}^{\infty} x[n]\cdot \mathrm{rect} (t-1/2 -n) \
where \mathrm{rect}() \ is the rectangular function.

Since it shouldn't matter what units time is measured in, there's no incentive to consider constants other than what you get by scaling time and frequency axes.

Shall I go ahead and try it this way? Dicklyon 19:36, 3 February 2007 (UTC)

My advice is no, because "the mess with T" is important to the people who have to implement things. If we can't figure it out, what chance do they have? But I have figured it out, so others should benefit from that. Isn't that why we are here?
--Bob K 15:47, 4 February 2007 (UTC)
OK, I'll leave it as is. Someone else can have a turn and we'll see where it goes. Dicklyon 17:50, 4 February 2007 (UTC)
I feel I should say one more time, in words instead of formulas, why the mess with T is not a mess at all. The rect() filter does not operate directly on the x[n] sequence. Its only use is to operate (in our imaginations) on x_s(t)\,, which only exists in our imaginations. We don't get to choose the x[n], but we do get to choose the filter and we do get to construct x_s(t)\,. Since we control both gain factors, it does not matter which convention we choose. As long as the factors cancel out, the result is the same. And if they don't cancel out, then there is a non-unity gain. But the relative sinc() response is probably what most people care about.
--Bob K 16:08, 4 February 2007 (UTC)
Of course, I agree, it's not a mess at all, just a slight confusion of some people in interpreting the equations and why they don't always look like the ones someone else derived. Dicklyon 17:50, 4 February 2007 (UTC)


Back to basics

Gentlemen, I am not sure that "back to" is a correct description. (Did we ever actually go there?)

But please consider these "sample sequences":

a[n] = x(nT)\,

and:

b[n] = \frac{1}{T} \int_{nT}^{nT+T} x(t) \, dt

and:

c[n] \, = \int_{nT}^{nT+T} x(t)\, dt
= T\cdot b[n]\,

To reconstruct x(t), from these sequences, the "rectangular interpolation" formulas are:

x_a(t) = \sum_{n=-\infty}^{\infty} a[n]\cdot \mathrm{rect} \left(\frac{t-nT-T/2}{T} \right)\,
x_b(t) = \sum_{n=-\infty}^{\infty} b[n]\cdot \mathrm{rect} \left(\frac{t-nT-T/2}{T} \right)\,
x_c(t) \, = \frac{1}{T}\cdot \sum_{n=-\infty}^{\infty} c[n]\cdot \mathrm{rect} \left(\frac{t-nT-T/2}{T} \right) \,
= x_b(t)\,

The frequency responses:

H_b(f) = \frac{X_b(f)}{X(f)} \,

and

H_c(f) = \frac{X_c(f)}{X(f)} \,

are clearly identical and equal to:

e^{-i \pi fT} \cdot \mathrm{sinc}(fT) = \mathcal{F}\left\{  \, x_b(t) \bigg|_{x(t) = \delta(t)} \right\}\,


When one looks at just the reconstruction formulas, without regard to any sampling process, case "c" is suddenly different than cases "a" and "b".   b[n] = c[n] = \delta[n]\, respectively produce responses of:

\mathcal{F}\left\{  \, x_b(t) \bigg|_{b[n] = \delta[n]} \right\}= T\cdot e^{-i \pi fT} \cdot \mathrm{sinc}(fT)\,     (cases "b" and "a")
\mathcal{F}\left\{  \, x_c(t) \bigg|_{c[n] = \delta[n]} \right\}= e^{-i \pi fT} \cdot \mathrm{sinc}(fT)\,

Indeed, there may not even be a sampling process to look at, in which case there is no x(t)\, to compare with x_b(t)\,, and the "gain" of T is not a gain at all. In that case it's only the relative responses at different values of f\, that actually matters.

And if there is an x(t) and a process to sample it, then the "gain" of T in cases "b" and "a" is the rightful consequence of disregarding the sampling process. It represents an incomplete or partial result, not a net effect. Doug, and apparently most of the "world", are satisfied with that. They make no assumptions whatsoever about the sampling process, and therefore claim no particular "net effect" or "DC gain". It is unknown to them, and they do not care. They can talk about ZOH without arguing over sampling models and gains.

I think we should respect that, but we need not stop there. We are free to point out that actual signal reconstruction also requires something more... specifically accounting for both the "DC gain" of the reconstruction and the gain of the sampling process. But we are bucking convention to call that "net effect" a zero-order hold. And we are also way out-of-line to insist that the two gains must be equal (unity). Reconstruction does not require that. So what does?

--Bob K 20:02, 9 February 2007 (UTC)
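
A NumPy sketch of the three sampling cases above on a discretized grid (the grid and test signal are arbitrary):

 import numpy as np

 T = 0.1
 dt = T / 100
 t = np.arange(2000) * dt                  # 20 sampling intervals, 100 grid points each
 x = np.sin(2 * np.pi * 0.7 * t) + 2.0     # a smooth test signal on a fine grid

 blocks = x.reshape(-1, 100)               # one row per interval [nT, nT + T)
 a = blocks[:, 0]                          # a[n] = x(nT)
 b = blocks.mean(axis=1)                   # b[n] ~ (1/T) * integral of x over the interval
 c = blocks.sum(axis=1) * dt               # c[n] ~ integral of x over the interval

 print(np.allclose(c, T * b))                        # c[n] = T * b[n]
 x_b = np.repeat(b, 100)                             # rect interpolation of b[n]
 x_c = (1 / T) * np.repeat(c, 100)                   # case "c" needs the extra 1/T
 print(np.allclose(x_b, x_c))                        # True: identical staircases
 print(np.max(np.abs(np.repeat(a, 100) - x)) < 0.5)  # case "a" tracks x(t) coarsely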

One has to stop somewhere. We are free to pick a gain for some "conventional" way of looking at things, and the way I like is that the average value is maintained, as it is in your (a) and (b) sampling methods. Why not stop there? Or, if you want to introduce more free gains in the sampling and in the intermediate results of the reconstruction, what would you write that's useful enough to justify the added complexity? Dicklyon 22:58, 9 February 2007 (UTC)


I am not proposing an arbitrary gain just for generality. That was just to make basically the same point I made here with b[·] and c[·]. But I think it was so general that it became a distraction, or maybe an excuse to zone out.

What I am proposing, is that we follow convention and define ZOH and its "frequency response" in terms of just the rectangular interpolation formula (cases "a" and "b"), independent of sampling. That means you are not necessarily trying to reconstruct the original signal level, which is certainly the case a lot more often than not. Then the concept of "DC gain" is irrelevant. Only the comparative gain of each frequency vs the others is important.

But I am also proposing that we subsequently introduce the context of sampling and reconstruction, where concepts like gain and 0 dB actually make sense. And we point out the difference between a relative "frequency response":

\mathcal{F}\left\{ \, x_b(t) \bigg|_{b[n] = \delta[n]} \right\} = T\cdot e^{-i \pi fT} \cdot \mathrm{sinc}(fT)\,

and an actual transfer function:

H_b(f) = \frac{X_b(f)}{X(f)} = e^{-i \pi fT} \cdot \mathrm{sinc}(fT)\,

--Bob K 00:31, 10 February 2007 (UTC)


Not following you, Bob. How do you define ZOH in terms of the sampling formulas a and b? Those aren't interpolation formulas, right? Dicklyon 00:42, 10 February 2007 (UTC)


I did not say "sampling formulas a and b". I said "rectangular interpolation formula (cases a and b)". I.e:

x_a(t) = \sum_{n=-\infty}^{\infty} a[n]\cdot \mathrm{rect} \left(\frac{t-nT-T/2}{T} \right)\,
x_b(t) = \sum_{n=-\infty}^{\infty} b[n]\cdot \mathrm{rect} \left(\frac{t-nT-T/2}{T} \right)\,

which are identical formulas, if you don't know anything about how the discrete sequences were created. That's Doug's point, I think.
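
A tiny sketch of that formula (Python/NumPy; the sequence a[n] and T are arbitrary example values): the sum of shifted rect() pulses simply holds each a[n] constant over [nT, nT + T), regardless of where the sequence came from.

import numpy as np

def hold(a, T, t):
    """x(t) = sum_n a[n] * rect((t - n*T - T/2)/T), for 0 <= t < len(a)*T."""
    n = np.floor(np.asarray(t) / T).astype(int)   # index of the pulse covering each t
    return np.asarray(a)[n]

a = [0.3, -1.2, 0.7, 2.0]     # any discrete sequence, whatever its origin
T = 0.5
t = np.linspace(0.0, len(a) * T, 9, endpoint=False)
print(hold(a, T, t))          # piecewise constant: a[n] is held over [n*T, n*T + T)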

no, Bob. if Doug says that the transfer function of the ZOH is
H_\mathrm{ZOH}(s)\, = \mathcal{L} \{ h_\mathrm{ZOH}(t) \} \,= \frac{1 - e^{-sT}}{s} \
then Doug is saying
x_\mathrm{ZOH}(t) = \sum_{n=-\infty}^{\infty} (T \ x[n]) \cdot \mathrm{rect} \left(\frac{t-nT-T/2}{T} \right)\,
unless you are saying that x[n] is not the same as x(nT) and xZOH(nT). but if you insist that x[n] is the same as x(nT) and xZOH(nT) then you cannot also say that the system that takes you from x[n] to xZOH(t) is
H_\mathrm{ZOH}(s)\, = \mathcal{L} \{ h_{\mathrm{ZOH}}(t) \} \,= \frac{1 - e^{-sT}}{s} \ .
it's off by a dimensionful scaling factor. and that is off, not just a different convention. r b-j
Most of the world defines the ZOH just in terms of what happens to the a[n] sequence, i.e. what it becomes, rather than its history.
OK, I see I misinterpreted which equations you were referring to; I didn't quite understand why you started with what are apparently sampling equations. I don't think Doug has a point; he just kicked this off by noting that the equations in his books differ from what we use here, and he hasn't said anything since, has he? Dicklyon 17:49, 10 February 2007 (UTC)

We can and probably should add our own sampling/reconstruction perspective as an afterthought. I don't know whether to call it a bigger picture or a smaller picture. Depends on where you sit, I suppose. There we can derive the 0 dB DC gain "net effect" (as Rbj calls it). And it does not take a Dirac comb and rect() filter to do it. Those imaginary concoctions are a matter of choice, and they are inseparable. No comb, no rect() filter. The gains that we assign to them are arbitrary as long as the "net effect" is unity (0 dB). One way to do that is to make them both unity. That is fine. What's apparently not fine is the assertion that our 0 dB "net effect" is a ZOH.

--Bob K 12:13, 10 February 2007 (UTC)

So show us the way. Dicklyon 17:49, 10 February 2007 (UTC)
i don't get it at all, and i do not think that the problem lies with my inability to comprehend the topic. this is not like a Matched filter, where you are designing or analyzing something that is scaling-independent and any scaling fits the theory. it's more like a Wiener filter, where the mathematics give you an estimate of a parameter scaled correctly. the theory doesn't have an unknown constant of integration that allows independence in scaling, like the solution to
\int { \frac{d x(t)}{x(t)} } = \alpha \ .
unlike this, you're just not free to pick whatever scaling you want when discussing the transfer function from x[n] to
x_\mathrm{ZOH}(t) = \sum_{n=-\infty}^{\infty} x[n] \cdot \mathrm{rect} \left(\frac{t-nT-T/2}{T} \right)\,
the only transfer function that is correct is
H_\mathrm{ZOH}(s)\,= \frac{1 - e^{-sT}}{sT} \ .
the interest in scaling this so that H_{\mathrm{ZOH}}(0) \,= 1 is not the point. if the scaling of x[n] to xZOH(t) (so that x[n] = x(nT) = xZOH(nT)) were such that it naturally came out in the math that H_{\mathrm{ZOH}}(0) \, \ne 1, that wouldn't change any qualitative or quantitative point in the argument. it's not about H_{\mathrm{ZOH}}(0) \,= 1 \ . it's about x[n] = x(nT) = xZOH(nT). r b-j 02:00, 11 February 2007 (UTC)
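One way to see the units issue concretely, as a sketch (Python/NumPy; a 1 ms sample period is an assumed example, expressed first in seconds and then in milliseconds): near DC, (1 − e^{−sT})/(sT) tends to 1 regardless of how T is expressed, while (1 − e^{−sT})/s tends to T, a number that changes with the chosen time unit.

import numpy as np

def H_unity(s, T):  return (1 - np.exp(-s * T)) / (s * T)   # the convention argued for here
def H_texts(s, T):  return (1 - np.exp(-s * T)) / s         # common textbook form

for T, unit in [(1e-3, "seconds"), (1.0, "milliseconds")]:  # the same physical period
    s = 1j * 2 * np.pi * 1e-6 / T        # the same physical frequency, very near DC
    print(unit, abs(H_unity(s, T)), abs(H_texts(s, T)))
# H_unity is ~1 (0 dB) in both unit systems; H_texts is ~0.001 or ~1, depending only on the unit.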
Robert, I'm not following you here. There is no transfer function from x[n] to xZOH(t), so I don't know what you're referring to when you say "there's only one transfer function that is correct". The continuous-time transfer function that is correct depends on how you choose to go from discrete samples to impulses; the free gain is in how that intermediate result (impulses) relates to the samples. And as Bob points out more recently, there's ANOTHER free gain if you don't require the reconstructed x to have x(nT) equal to x[n], which you might not if x[n] came from some sampling formula that didn't have that property, such as integrating an original x(t) over some intervals. We can pin these gains down by various conventions, but I'm not sure what you're trying to say about some stronger constraint. Dicklyon 05:05, 11 February 2007 (UTC)


okay, i adjusted some notation above, because i was getting a little sloppy. Dick, the issue is representing the quantitative value of the samples x[n], dimension and all, to be the same as that of the signal being sampled, x(t). now we know that in the computer or DSP, the samples are represented as dimensionless discrete numbers that are proportional to, and quantized from, the sampled signal at the sampling instants. that is x(nT). (Bob K likes that notation better and, at least in the past, preferred not to use the x[n] notation.) now, this VREF and quantization error issue is a separate issue, and it should be. so instead of saying that (say) VREF is such that one LSB of x[n] is 1 mV of x(t), or that x[n] × 1 mV = x(nT), let's just say that x[n] = x(nT) and understand the value of the sample in the context of the scaling that the A/D did. x[n] and x(t) have the same dimension, right? if x(t) is in volts at some particular time t=nT, then x[n] is in volts too, and not only that, x[n] is the same number of volts as x(nT). so when you look at that number in computer or DSP memory, you know what voltage (or whatever physical quantity) it represents.
now, so we do not introduce extraneous and meaningless scaling factors, let's say that the D/A undoes this sampling by converting the x[n] samples back to the same kind of animal that x(t) is. the output of this D/A is what we are calling xZOH(t), and since it is dimensionally the same as x(t), it can be compared to x(t) and, in particular, to the values of x(t) when t=nT. now that relationship is
x_\mathrm{ZOH}(t) = \sum_{n=-\infty}^{\infty} x[n] \cdot \mathrm{rect} \left(\frac{t-nT-T/2}{T} \right) \
that is a well-defined, unique, and meaningful relationship. to say that
x_\mathrm{ZOH}(t) = \sum_{n=-\infty}^{\infty} (A \ x[n]) \cdot \mathrm{rect} \left(\frac{t-nT-T/2}{T} \right) \
for any A other than A=1 is nothing more than pulling a scaling factor (dimensionless or not) out of our butt. it is another issue: an issue of different VREF for the A/D and D/A, or even of the D/A not outputting the same kind of physical quantity (dimensioned differently) as what goes into the A/D. when talking of sampling and reconstruction (and the first half means little without the second), all of this scaling crap is a non sequitur. it is not germane to the conversation.

Sheesh. Rbj, that's not the A that Bob proposed. If you put A in the two places he suggested, it drops out. It is the free variable that lets one see the relationship between two correct approaches that follow different conventions. You either still can't see it, or are being difficult in ignoring it. Dicklyon 22:36, 11 February 2007 (UTC)

no, but i think it is equivalent to his A/T or its reciprocal. so call it "B" or something else (i'm too lazy to change it everywhere). i shouldn't have used the same letter but i've used it before as a nondescript scaling constant and it was familiar to me. r b-j 02:45, 12 February 2007 (UTC)
now, if you accept that (the premise that the DSP or whatever is doing nothing but passing the input x[n] to the output and the D/A is undoing the conversion of physical quantity to number that the A/D did, with no extraneous scaling), and if you also accept that we're not obscuring the concepts by saying that x[n] is anything different (dimensionally or quantitatively) from x(nT), then there is an unambiguous and natural scaling for xZOH(t). it is the one where A is the dimensionless 1. in fact, consider that x(t) is properly bandlimited, blah, blah... and that it is a random process with zero DC and non-zero power (think of it as the ideal reconstruction of a good pseudo-random x[n] where each x[n] is an R.V. with zero mean and some non-zero variance). now if:
x_\mathrm{ZOH}(t) = \sum_{n=-\infty}^{\infty} (A \ x[n]) \cdot \mathrm{rect} \left(\frac{t-nT-T/2}{T} \right) \
where x[n] = x(nT).
now create an error signal which is the difference xZOH(t) − x(t), square and LPF that error signal (or apply whatever decent Lp norm to it), and adjust A so that the mean square error is minimized. what value of A do you get? (i will bet: the dimensionless 1.)
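A sketch of that experiment (Python/NumPy; the test signal is an assumed, heavily oversampled bandlimited x(t) built from a few random-phase tones well below Nyquist): fit A by least squares and see where it lands. For a signal like this, the best A does come out close to the dimensionless 1.

import numpy as np

rng = np.random.default_rng(0)
T, dt, L = 1.0, 0.01, 2000.0
t = np.arange(0.0, L, dt)

freqs  = rng.uniform(0.01, 0.05, size=8)          # all well below Nyquist = 0.5/T
phases = rng.uniform(0, 2 * np.pi, size=8)
x      = sum(np.cos(2 * np.pi * f * t + p) for f, p in zip(freqs, phases))
x_zoh  = sum(np.cos(2 * np.pi * f * np.floor(t / T) * T + p)    # hold of x(nT), with A = 1
             for f, p in zip(freqs, phases))

A_best = np.dot(x, x_zoh) / np.dot(x_zoh, x_zoh)  # minimizes mean((A*x_zoh - x)**2)
print(A_best)                                     # close to 1 for this oversampled x(t)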
so now, here is what we get: we do have a relationship between the spectrums of x[n] and x(t). it is this DTFT thing, and if you keep your dimensional ducks in line, that relationship is:
T \ X_\mathrm{Z} \left(e^{j 2 \pi f T} \right) = \sum_{k=-\infty}^{\infty} X \left(j 2 \pi (f - k/T) \right) \
where
X_\mathrm{Z}(z) \ \stackrel{\mathrm{def}}{=}\  \sum_{n=-\infty}^{\infty} x[n] \, z^{-n} \
and
X(s)  \ \stackrel{\mathrm{def}}{=}\  \int_{-\infty}^{\infty} x(t) \, e^{-s t} \, dt \
note that although x[n] and x(t) are the same species of animal, XZ(z) and X(s) are not. that is because integration w.r.t. time tosses in a dimensionful time factor, whereas a discrete summation of terms does not.
and, again if x(t) is properly bandlimited, blah, blah..., then for frequencies between -Nyquist and +Nyquist, there is a meaningful frequency response between x[n] and x(t) and it is the constant T or 1/T (whatever direction you're going). and the relationship of spectra is:
T \ X_\mathrm{Z} \left(e^{j 2 \pi f T} \right) =  X \left(j 2 \pi f \right) \
for -1/(2T) < f < +1/(2T).
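A numerical check of that baseband relation (Python/NumPy; the test signal x(t) = sinc(Bt)² with B = 0.2 and T = 2 is an assumed example, bandlimited to |f| < B < 1/(2T) so nothing aliases; its continuous-time transform is the triangle (1/B)(1 − |f|/B)):

import numpy as np

B, T = 0.2, 2.0
n = np.arange(-20000, 20001)                 # truncated DTFT sum; sinc^2 decays like 1/n^2
x_n = np.sinc(B * n * T) ** 2                # x[n] = x(nT)

for f in [0.0, 0.05, 0.1, 0.15]:             # frequencies inside (-1/(2T), +1/(2T))
    lhs = T * np.sum(x_n * np.exp(-2j * np.pi * f * n * T))   # T * X_Z(e^{j 2 pi f T})
    rhs = (1.0 / B) * max(0.0, 1.0 - abs(f) / B)              # X(j 2 pi f)
    print(f, lhs.real, rhs)                  # agree; drop the leading T and they differ by a factor of 2 here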
with this Nyquist criterion met, there is a relationship between x(t) and the properly scaled xZOH(t), one where we pulled no scaling factor out of our butt and where xZOH(t) most nearly equals x(t); when you crank the sampling frequency up to infinity, they become identical, not just proportional. so now crank Fs down to some finite value and you see that there is a relationship between the spectra of x(t) and xZOH(t). it is the unambiguous
X\left(j 2 \pi f \right) \mathrm{sinc}(fT) e^{-j \pi f T} =  X_\mathrm{ZOH} \left(j 2 \pi f \right) \
for -1/(2T) < f < +1/(2T).
the transfer function between the like animals is
H_\mathrm{ZOH} \left(j 2 \pi f \right) = \mathrm{sinc}(fT) e^{-j \pi f T} \
or
H_\mathrm{ZOH}(s) = \frac{1 - e^{-sT}}{sT} \ .


now, you or Bob or Doug or whomever may point out that the mapping of spectrums of x[n] to xZOH(t) is
T \mathrm{sinc}(fT) e^{-j \pi f T} \ X_\mathrm{Z} \left(e^{j 2 \pi f T} \right) =  X_\mathrm{ZOH} \left(j 2 \pi f \right) \
for -1/(2T) < f < +1/(2T).
and that ostensible frequency response of
T \mathrm{sinc}(fT) e^{-j \pi f T} \
is the same as
\frac{1 - e^{-sT}}{s} \
when s = j2πf. but i continue to point out that, without the ZOH, there already had to have been a factor of T between the two spectra (because even though dimensionally x[n] and x(t) are the same species of animal, their spectra are not dimensionally the same, because of the extra dimensional factor you pick up in the continuous-time Fourier Transform, an integral, that you don't get in the DTFT, a discrete summation). the ZOH is another block, an additional block, and its unambiguous transfer function is the net difference, a frequency response factor of
H_\mathrm{ZOH}(j 2 \pi f) = \mathrm{sinc}(fT) e^{-j \pi f T} \
which corresponds to a transfer function of
H_\mathrm{ZOH}(s) = \frac{1 - e^{-sT}}{sT} \ .
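The identity behind those last two lines is easy to spot-check numerically (Python/NumPy; T = 0.125 is an arbitrary example value): with s = j2πf, (1 − e^{−sT})/(sT) equals sinc(fT)·e^{−jπfT}, and the textbook form (1 − e^{−sT})/s is the same thing multiplied by T.

import numpy as np

T = 0.125
f = np.array([0.3, 1.0, 2.5, 7.9])
s = 2j * np.pi * f

unity_gain = (1 - np.exp(-s * T)) / (s * T)
textbook   = (1 - np.exp(-s * T)) / s
closed     = np.sinc(f * T) * np.exp(-1j * np.pi * f * T)

print(np.allclose(unity_gain, closed))       # True
print(np.allclose(textbook, T * closed))     # True: the two conventions differ by exactly T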
guys, i'm pooped out on this. i can't do this anymore. can you just read this over again, or think about it, or do the math? i know, we all know, that we're just repeating the same points over and over again. i know that the math, the dimensional bookkeeping, and the attribution of what effect to which cause that i am doing are legit, and nothing any of you guys are saying is able to negate that. i cannot understand why we keep going through weird tortured logic with extraneous scaling factors that eventually just come out in the wash, to accomplish what? to help some engineer or student screw up and put a
\frac{1 - e^{-sT}}{s}
transfer function in a control loop (where scaling will have a salient effect) and find out that the stability of the control loop depends on what units of time they happen to be using? what is gained by crapping up the concepts or language with all this extraneous factor stuff? r b-j 07:10, 11 February 2007 (UTC)


Phew! I'll be brief. For me it's much simpler, because all I really know about ZOH is that apparently the closest thing we have to a "convention" is the frequency response:

T\cdot e^{-i \pi fT} \cdot \mathrm{sinc}(fT)\,

Its unit is time, so obviously it is not a transfer function, at least not between two continuous-time domains. It turns out that what that means is that it does not include the part of the transfer function attributable to sampling. The student who screws up does so because he does not understand that convention, or does not know how to complete the transfer function. But we can show him. (Or we can buck tradition and go with the Pohlmann/r b-j convention.) Whether the formula above has any practical use in its own right, or is just a partial result waiting to be completed, I don't know. IMO, it's not unlike the DTFT formula:

X(\omega) = \sum_{n=-\infty}^{\infty} x[n] \,e^{-i \omega n}\,

which leaves plenty of work to be done before you can relate it to sampling... the "real world" as we see it.

--Bob K 14:03, 11 February 2007 (UTC)
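
A small sketch of that point about the DTFT (Python/NumPy; the sequence and the sample periods are arbitrary examples): the sum over n never sees T, so the same X(ω) describes very different physical frequencies depending on the sample period supplied afterwards.

import numpy as np

x = np.array([1.0, 2.0, 0.5, -1.0, 0.25])        # some sequence x[n]

def dtft(x, omega):                              # X(omega) = sum_n x[n] e^{-i omega n}
    n = np.arange(len(x))
    return np.sum(x * np.exp(-1j * omega * n))

omega = 0.4 * np.pi
print(dtft(x, omega))                            # a pure number; no T appears anywhere
for T in [1e-3, 1e-6]:                           # only now does a physical frequency emerge
    print(T, "->", omega / (2 * np.pi * T), "Hz")    # f = omega / (2*pi*T)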


[edit] T, f, ω, s, z, and even i & j

BTW, to try to keep from an explosion of different transform definitions, in this thread i was using only the Z transform and Laplace transform definitions:
X_\mathrm{Z}(z) \ \stackrel{\mathrm{def}}{=}\  \sum_{n=-\infty}^{\infty} x[n] \, z^{-n} \
and
X(s)  \ \stackrel{\mathrm{def}}{=}\  \int_{-\infty}^{\infty} x(t) \, e^{-s t} \, dt \
and defining the DTFT and continuous Fourier transform in terms of those, where
z = e^{j 2 \pi f T} = e^{j \omega} \
and
s = j 2 \pi f = j \Omega \
If you are worried about students trying to apply these things to real control loops with real sample rates and real frequencies, then keep f and T around in plain sight, and carefully deal with the resultant annoyances. --Bob K 04:01, 12 February 2007 (UTC)
I dunno, but it's even scarier to me to do away with the conventional electrical engineering notation of j^2 = -1 and revert to the more original and proper notation of using i for the imaginary unit, but maybe we should chuck that convention. It is also not so good and was not necessary in the beginning, but EE textbooks still use the j notation. r b-j 20:16, 11 February 2007 (UTC)
j\, is my preference for that reason. I'm just used to it. But I can live with i\,. I think that battle is neither small enough to win nor big enough to matter. --Bob K 04:01, 12 February 2007 (UTC)