Talk:LTI system theory


Example

Need to put an example for the discrete time case to make things a little clearer.

  • I say this page focuses too much on continuous time (CT). The concept of LTI should be explained independently of whether time is continuous or discrete. For example, it is difficult to link from a page that discusses digital signal processing algorithms to this LTI page, because it only talks about CT signal processing. Also, LSI (linear shift invariance), which means essentially the same thing, should be mentioned. Faust o 20:25, 23 January 2006 (UTC)
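Since the thread asks for a discrete-time example, here is a minimal sketch (my own, not from the article) using a 3-point moving average, one of the simplest LTI/LSI systems. All names (`moving_average`, `x1`, `x2`) are hypothetical:

```python
# A 3-point moving average: y[n] = (x[n] + x[n-1] + x[n-2]) / 3,
# with x[n] assumed zero for n < 0.
def moving_average(x):
    padded = [0.0, 0.0] + list(x)
    return [(padded[n] + padded[n + 1] + padded[n + 2]) / 3 for n in range(len(x))]

x1 = [1.0, 2.0, 3.0, 4.0]
x2 = [0.0, -1.0, 1.0, 0.0]

# Linearity: H{a*x1 + b*x2} == a*H{x1} + b*H{x2}
a, b = 2.0, -3.0
lhs = moving_average([a * u + b * v for u, v in zip(x1, x2)])
rhs = [a * u + b * v for u, v in zip(moving_average(x1), moving_average(x2))]
linear_ok = all(abs(p - q) < 1e-12 for p, q in zip(lhs, rhs))

# Shift invariance: delaying the input delays the output by the same amount.
out_of_shifted = moving_average([0.0] + x1)   # input delayed by one sample
shifted_output = [0.0] + moving_average(x1)   # output delayed by one sample
shift_ok = all(abs(p - q) < 1e-12 for p, q in zip(out_of_shifted, shifted_output))
```

Both properties together are what make the system LTI; either one alone is not enough.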

i think there is a lot more that can be done to make this clearer

but i'm glad the article is there. this should be written so that it can be understood by someone who doesn't already understand it. to begin with, i think there needs to be a better introduction to what \mathbb{H} is. what are the fundamental properties of this LTI operator \mathbb{H}? first, what exactly does it mean for \mathbb{H} \{ x_1(t) + x_2(t) \} to be linear (the additive superposition property), and then what does it mean for \mathbb{H} \{ x(t-\tau) \} to be time-invariant? then from that, derive the more general superposition property, then introduce the dirac delta impulse as an input and define the output \mathbb{H} \{ \delta(t) \} to be the impulse response. since the article is about LTI systems, there is no need to introduce h(t_1,t_2). all that does is obfuscate.

Mark, i hope you don't mind if i whack at this a bit in the near future. i gotta figure out how to draw a png image and upload it. r b-j 04:53, 28 Apr 2005 (UTC)

I disagree. I think h(t1,t2) should be there, as it shows how much simpler things become when translation invariance is imposed. I think you can represent any linear system with \int_{-\infty}^\infty h(t_1,t_2) x(t_2) d t_2, but I'm not 100% sure. Certainly any nice system. - Jpkotta 06:40, 12 February 2006 (UTC)
you're correct when you say that "you can represent any linear system with \int_{-\infty}^\infty h(t_1,t_2) x(t_2) d t_2", but that is more general than LTI and unnecessarily complicated. that expression includes Linear, time-variant systems also. perhaps the article should be just "Linear systems" and deal with both time-variant and time-invariant. r b-j 07:25, 14 February 2006 (UTC)
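A discrete analogue of the point above, as a sketch in my own made-up notation: a general linear system is y[n1] = sum over n2 of h[n1, n2] x[n2], and when the kernel depends only on the difference n1 - n2 (time invariance), this collapses to a convolution. The impulse response `g` here is arbitrary:

```python
# A time-invariant kernel h[n1, n2] = g[n1 - n2] makes the general
# linear-system sum identical to a convolution.
N = 6
g = {-1: 0.0, 0: 1.0, 1: 0.5, 2: 0.25}   # hypothetical impulse response g[k]

def g_at(k):
    return g.get(k, 0.0)

h = [[g_at(n1 - n2) for n2 in range(N)] for n1 in range(N)]

x = [1.0, -2.0, 0.0, 3.0, 0.5, 0.0]

# General linear-system form: y[n1] = sum_n2 h[n1, n2] * x[n2]
y_general = [sum(h[n1][n2] * x[n2] for n2 in range(N)) for n1 in range(N)]

# Convolution form: y[n] = sum_k g[k] * x[n - k]
y_conv = [sum(g_at(k) * x[n - k] for k in range(-1, 3) if 0 <= n - k < N)
          for n in range(N)]

same = all(abs(p - q) < 1e-12 for p, q in zip(y_general, y_conv))
```

Note the matrix `h` is constant along its diagonals (Toeplitz), which is exactly what time invariance means in the discrete setting.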


Representation

It appears to be assumed that the LTI system can be represented by a convolution. However, in Zemanian's book on distributions, a result due to Schwartz and its proof are presented. The result has to do with sufficient conditions under which an LTI transformation can be represented by a convolution. I guess that continuity of the LTI transformation is one of the conditions. The result appears to be quite deep. Some other proofs in the literature may not be real proofs. This result is not as simple as one might think.

Yaacov

I deleted the word "integral" from my comment. The convolution of distributions has a definition that does not appear to rely on integration.

Yaacov

I added: "Some other proofs in the literature may not be real proofs. This result is not as simple as one might think."

Yaacov

Notation

Is the bb font  \mathbb{H} at all common for an operator? I've never seen that before, and I think bb font should be reserved for sets like the reals and complexes. A clumsy but informative notation that one of my professors uses is this:
 \mathcal{L}_{t_o}\{x(t_i)\}_{t_i} ,
where \mathcal{L} is the operator, t_o is the output variable, and t_i is the input variable. To say that a system is TI,
 \mathcal{L}_{t}\{x(t + \tau)\}_{t} = \mathcal{L}_{t+\tau}\{x(t)\}_{t}.
I'm not sure if it's a good idea to use it here though... -- Jpkotta 06:49, 12 February 2006 (UTC)

Discrete time

Should discrete time be folded in with continuous time, or should there be two halves of the article?

By folded, I mean:

  • basics
    • what it means to be LTI in C.T.
    • what it means to be LTI in D.T.
  • transforms
    • Laplace
    • z

By two halves, I mean

  • C.T.
    • what it means to be LTI
    • Laplace
  • D.T.
    • what it means to be LTI
    • z transform

I vote for the two halves option, because then it would be easier to split into two articles in the future. -- Jpkotta 06:46, 12 February 2006 (UTC)

I made a big update to the article, and most of it was to add a "mirror image" of the CT stuff for DT. There is a bit more to go, but it's almost done. -- Jpkotta 22:25, 21 April 2006 (UTC)

Comparison with Green function

This page has the equation

y(t_1) = \int_{-\infty}^{\infty} h(t_1, t_2) \, x(t_2) \, d t_2

which looks an awful lot like the application of a Green function

 u(x) = \int_0^\ell f(s) g(x,s) \, ds

however, this page doesn't even mention Green functions. Can someone explain when the two approaches can be applied? (My hunch right now is that Green functions can be used for linear systems that are not necessarily time-invariant.) —Ben FrantzDale 03:24, 17 November 2006 (UTC)

Yes, a Green's function is essentially an impulse response. Different fields have developed different terms for these things. But the Green's function is also more general, as you note, than is needed for time-invariant systems; as is that h(t1,t2) integral. Dicklyon 06:14, 17 November 2006 (UTC)

Discrete example confusion

The first example starts out describing the delay operator, then describes the difference operator. 203.173.167.211 23:03, 3 February 2007 (UTC)

I fixed it. And changed z to inverse z for the delay operator. I'm not sure where that came from or whether I've left some discrepancy. Dicklyon 01:30, 4 February 2007 (UTC)
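For reference, a small sketch (my own, not from the article) of the two discrete-time operators the thread distinguishes: the unit delay z^{-1}, which maps x[n] to x[n-1], and the first difference x[n] - x[n-1]:

```python
def delay(x):
    # unit delay z^-1 acting on a finite signal, assuming x[n] = 0 for n < 0
    return [0.0] + list(x[:-1])

def first_difference(x):
    # y[n] = x[n] - x[n-1]
    return [u - v for u, v in zip(x, delay(x))]

x = [1.0, 4.0, 9.0, 16.0]
delayed = delay(x)                  # [0.0, 1.0, 4.0, 9.0]
differenced = first_difference(x)   # [1.0, 3.0, 5.0, 7.0]
```

Both operators are LTI, but they are clearly different systems, hence the confusion when one is described and the other demonstrated.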

Problem with linearity

Generally, it is not true for a linear operator L (i.e. one satisfying L(a x_1 + b x_2) = a L(x_1) + b L(x_2)) that

L\left(\sum_n a_nx_n \right) = \sum_n a_n L(x_n)

over arbitrary index sets (i.e. infinite sums). This is used heavily in LTI analysis.

The result does not follow from induction. So why should it be true for linear systems? I think linearity itself is not strong enough a condition to warrant the infinite-sum result. Are there deeper maths behind systems analysis that provide this result? (For example, restriction of linear systems to duals of certain maps is a sufficiently strong condition to imply this result.) 18.243.2.126 (talk) 01:42, 13 February 2008 (UTC)

Are you saying it's not true? Or that you don't know how to prove it? Do you have a counter-example? Dicklyon (talk) 05:07, 13 February 2008 (UTC)
It is plainly not true, if linearity is the only condition being imposed (the constant signal x[n] = 1 is linearly independent from the unit impulses and all their shifts -- while this is not the case if you allow infinite sums); I have yet to construct a viable time-invariant counterexample; time-invariant functionals tend to be a lot more restrictive. For example, any time invariant system which only outputs constant signals is identically the zero system. 18.243.2.126 (talk) 00:21, 19 February 2008 (UTC)
I'm not following you. What is the concept of "linearly independent" and how does it relate to the question at hand? Dicklyon (talk) 06:10, 19 February 2008 (UTC)
The terminology comes from linear algebra (the wiki article explains it better than I can in a short paragraph). Note that the set of real-valued signals forms a real vector space. 18.243.2.126 (talk) 19:28, 20 February 2008 (UTC)
I understand about linear algebra and vector spaces, but there's nothing in this article, nor in linear algebra about this concept you've brought up, so tell us why you think it's relevant. Dicklyon (talk) 20:05, 20 February 2008 (UTC)
I see that linear independence says "In linear algebra, a family of vectors is linearly independent if none of them can be written as a linear combination of finitely many other vectors in the collection." This renders your above statement "while this is not the case if you allow infinite sums" somewhat meaningless. So I still don't get your point. Dicklyon (talk) 20:10, 20 February 2008 (UTC)
In what way does it "not follow from induction"? Oli Filth(talk) 12:02, 13 February 2008 (UTC)
Induction can prove it for all finite subsequences of the infinite index sequences, without bound, but not for the infinite sequence itself. Dicklyon (talk) 16:24, 13 February 2008 (UTC)
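To make the induction point concrete, here is a finite-superposition check (my own illustration, with an arbitrary made-up operator `L`): for any *finite* index set, linearity alone gives L(sum a_n x_n) = sum a_n L(x_n). Nothing in this sketch says anything about infinite sums, which is exactly the open question above.

```python
def L(x):
    # some linear operator: here a scaled delay, y[n] = 2 * x[n-1]
    return [0.0] + [2.0 * v for v in x[:-1]]

signals = [[1.0, 0.0, 0.0, 0.0],
           [0.0, 1.0, 0.0, 0.0],
           [0.0, 0.0, 1.0, 0.0]]
coeffs = [3.0, -1.0, 0.5]

# L applied to the finite linear combination...
combined = [sum(a * s[n] for a, s in zip(coeffs, signals)) for n in range(4)]
lhs = L(combined)

# ...versus the linear combination of L applied to each signal.
rhs = [sum(a * L(s)[n] for a, s in zip(coeffs, signals)) for n in range(4)]

finite_superposition_holds = lhs == rhs
```

Extending this to a countably infinite sum requires some continuity assumption on L (limits must pass through the operator), which is the extra hypothesis the thread is asking about.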

New footnote

I've removed the recently-added footnote, because I'm not sure what it says is relevant. The explanation of the delay operator is purely an example of "it's easier to write", as by substitution, z = e^{sT}. The differentiation explanation is irrelevant, because when using z, we're in discrete time, and so would never differentiate w.r.t. continuous time. Oli Filth(talk) 21:28, 10 April 2008 (UTC)

"any input"?

Is this nonsense?:

"Again using the sifting property of the δ(t), we can write any input as a superposition of deltas:
x(t) = \int_{-\infty}^\infty x(\tau) \delta(t-\tau) \,d\tau " —Preceding unsigned comment added by Bob K (talkcontribs) 00:52, 11 June 2008
Other than special cases which one would need to resort to Lebesgue measures to describe, I don't think it's nonsense. What specifically are you questioning, the maths itself, or the use of the qualifier "any"? Oli Filth(talk) 08:05, 11 June 2008 (UTC)


It sounds like we are saying that delta functions are a basis set (like sinusoids) for representing signals.
Anyhow, I think the "proof" was redundant at best. At worst, it was circular logic, because it uses this formula:
 (h * \delta) (t)  = \int_{-\infty}^{\infty} h(t - \tau) \, \delta (\tau) \, d \tau = h(t),
which is a special case of this one:

y(t_1) = \int_{-\infty}^{\infty} h(t_1 - t_2) \, x(t_2) \, d t_2 \qquad (Eq.1)
to derive this one:
\mathcal{H} x(t) = \int_{-\infty}^\infty x(\tau) h(t-\tau) \,d\tau
which is the same as Eq.1.
--Bob K (talk) 13:24, 11 June 2008 (UTC)
In the discrete case, the delayed delta signals certainly are a basis set. I'm not sure about the continuous case. However, I agree that the proof was somewhat circular. Oli Filth(talk) 15:45, 11 June 2008 (UTC)
Well, besides the fact that neither of us has ever heard of using Dirac deltas as a basis set for all continuous signals, how does one make the leap from that dubious statement to this?:
x(t) = \int_{-\infty}^\infty x(\tau) \delta(t-\tau) \,d\tau
I believe it is just nonsense.
--Bob K (talk) 18:04, 11 June 2008 (UTC)
We know the following is true (it's the response of system x(t) to an impulse):
\ x(t) = \int_{-\infty}^\infty \delta(\tau) x(t - \tau) \,d\tau
and we also know that convolution is commutative. Oli Filth(talk) 18:24, 11 June 2008 (UTC)
Those two statements of yours have nothing to do with the assertion that Dirac deltas are a basis set for all continuous signals and therefore nothing to do with my question.
--Bob K (talk) 23:24, 11 June 2008 (UTC)
You said "besides that fact" above. I'm not defending the material (I didn't write it); just playing devil's advocate... I feel perhaps we are talking at cross purposes! Oli Filth(talk) 23:30, 11 June 2008 (UTC)
Out of context, the integral formula is fine. But I quoted the entire statement and asked if it makes any sense. (You said it does.) Even if you accept the idea of Dirac deltas as basis functions [I don't], how does the integral formula follow from that? The whole thing just seems silly.
--Bob K (talk) 00:17, 12 June 2008 (UTC)
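For what it's worth, the discrete-time version of the sifting identity is uncontroversial, since the Kronecker delta and its shifts genuinely form a basis for finite-length signals: x[n] = sum over k of x[k] δ[n-k]. A quick sketch (my own; the continuous Dirac-delta case is the one being contested above):

```python
def kron_delta(n):
    # Kronecker delta: 1 at n == 0, else 0
    return 1.0 if n == 0 else 0.0

x = [2.0, -1.0, 0.5, 4.0]
N = len(x)

# Reconstruct x[n] as a weighted sum of shifted deltas.
reconstructed = [sum(x[k] * kron_delta(n - k) for k in range(N)) for n in range(N)]
sifting_holds = reconstructed == x
```

In continuous time the analogous statement needs distribution theory to be made rigorous, which is presumably why the wording in the article reads as hand-waving.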