Talk:Phasor (sine waves)
Phasors are used for a lot more than just AC circuit analysis. I think the exact definition may actually differ depending on the field in question, such as signal processing, where phasors most definitely do have a frequency. This is probably the best general definition of this mathematical tool: [1]
- I wrote this page to fill a link before I had a particularly solid understanding of phasors myself. At some point I need to rewrite this article, but that's quite a bit of work. Plugwash 13:52, 17 Apr 2005 (UTC)
Phasor multiplication
Is this ever used in regular contexts besides power calculations? I imagine it is. - Omegatron 21:23, August 15, 2005 (UTC)
- Yeah, now that I think of it, I'm pretty sure I remember rotating phasor diagrams in communications classes for modulation stuff. - Omegatron 21:27, August 15, 2005 (UTC)
Here is the rule: A₁∠θ₁ · A₂∠θ₂ = A₁A₂∠(θ₁ + θ₂).
Doesn't that imply: A₁cos(ωt + θ₁) · A₂cos(ωt + θ₂) = A₁A₂cos(ωt + θ₁ + θ₂),
which is not true. What am I missing?
--Bob K 09:38, 21 August 2007 (UTC)
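For concreteness, here is a minimal numeric check of that implication (a Python sketch; the amplitudes, phases, and frequency are arbitrary values chosen for illustration, not anything from the sources above):

```python
import numpy as np

# Does A1*cos(wt + th1) * A2*cos(wt + th2) equal A1*A2*cos(wt + th1 + th2)?
A1, th1 = 2.0, 0.3
A2, th2 = 1.5, 1.1
w = 2 * np.pi * 60                  # any nonzero frequency will do
t = np.linspace(0, 1 / 60, 1000)    # one period

lhs = A1 * np.cos(w * t + th1) * A2 * np.cos(w * t + th2)
rhs = A1 * A2 * np.cos(w * t + th1 + th2)

print(np.allclose(lhs, rhs))  # False: the time-domain products differ
# Product-to-sum identity: cos(a)cos(b) = [cos(a+b) + cos(a-b)] / 2, so the
# left side has a component at 2w plus a constant offset, not one sinusoid at w.
```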
- Bob, consider that a phasor is just the amplitude of the frequency domain representation of the sinusoid. Multiplying two phasors is essentially multiplication in the frequency domain which results in convolution, not multiplication, in the time domain. That's why we say that 'the product (or ratio) of two phasors is not itself a phasor'. Alfred Centauri 13:01, 21 August 2007 (UTC)
Thanks, but the same source provides this definition: A∠θ ≡ A·cos(ωt + θ), which simply means to me that A∠θ is a shorthand notation. The "equals" sign should mean we can directly substitute the full mathematical notation for the shorthand:
- A₁∠θ₁ is just a shorthand for A₁cos(ωt + θ₁).
Similarly,
- A₁∠θ₁ · A₂∠θ₂ = A₁A₂∠(θ₁ + θ₂) means A₁cos(ωt + θ₁) · A₂cos(ωt + θ₂) = A₁A₂cos(ωt + θ₁ + θ₂).
I'm just doing the obvious application of the definition. If you still disagree, I would appreciate it if someone would show a trigonometric example (without phasors) of multiplication, and then illustrate how phasors simplify the math and arrive at the same answer, as the article implies, or did before I removed this statement:
- "Noting that a trigonometric function can be represented as the real component of a complex quantity, it is efficacious to perform the required mathematical operations upon the complex quantity and, at the very end, take its real component to produce the desired answer."
FWIW, what we seem to be talking about here is the analytic representation of a signal, A·e^j(ωt + θ). The analytic representations of cos(ωt + θ₁) and cos(ωt + θ₂) are e^j(ωt + θ₁) and e^j(ωt + θ₂). And the product of the analytic signals is e^j(2ωt + θ₁ + θ₂), which represents the real signal cos(2ωt + θ₁ + θ₂), which is not the product of cos(ωt + θ₁) and cos(ωt + θ₂). Therefore we have to be careful not to mislead people (and ourselves) about the multiplication of analytic signals. And I think the same goes for phasors. We really need an example of the multiplication property in action. When and why would someone want to use it?
--Bob K 14:44, 21 August 2007 (UTC)
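A sketch of that 'FWIW' point in Python (unit amplitudes assumed; the phases and frequency are arbitrary choices):

```python
import numpy as np

w = 2 * np.pi * 5
th1, th2 = 0.4, 1.2
t = np.linspace(0, 1, 2000)

za = np.exp(1j * (w * t + th1))   # analytic representation of cos(wt + th1)
zb = np.exp(1j * (w * t + th2))   # analytic representation of cos(wt + th2)

product_real = np.real(za * zb)                    # = cos(2wt + th1 + th2)
true_product = np.cos(w * t + th1) * np.cos(w * t + th2)

print(np.allclose(product_real, true_product))                   # False
print(np.allclose(product_real, np.cos(2 * w * t + th1 + th2)))  # True
```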
- Yeah, I just took a look at the source link and it's quite confused. As you are probably aware, using phasors in AC circuit analysis amounts to 'pretending' that the excitations are of the form V·e^j(ωt + φ) (which is consistent with your 'FWIW' statements). Under this pretense, the ratio of the voltage and current associated with a circuit element is a constant complex number (no time dependence):
- v(t)/i(t) = [V·e^j(ωt + φ_v)] / [I·e^j(ωt + φ_i)] = (V/I)·e^j(φ_v − φ_i)
- But, this is just the impedance of the circuit element and is the result we get by taking the ratio of the phasor voltage and current associated with the circuit element. Clearly, the impedance is not associated with a time function and thus is not a phasor.
- The product of the voltage and the complex conjugate current associated with a circuit element is also a constant complex number:
- v(t)·i*(t) = V·e^j(ωt + φ_v) · I·e^−j(ωt + φ_i) = V·I·e^j(φ_v − φ_i)
- But, this is just the complex power associated with the circuit element and is the result we get by multiplying the phasor voltage and conjugate phasor current associated with a circuit element. And, as with impedance, the complex power is not associated with a time function and thus is not a phasor. Alfred Centauri 15:22, 21 August 2007 (UTC)
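Both results are easy to reproduce numerically. A minimal sketch (the phasor values are arbitrary choices for illustration):

```python
import numpy as np

V = 10 * np.exp(1j * np.deg2rad(30))   # rms voltage phasor: 10 V at 30 degrees
I = 2 * np.exp(1j * np.deg2rad(-15))   # rms current phasor: 2 A at -15 degrees

Z = V / I            # impedance: ratio of two phasors, a constant complex number
S = V * np.conj(I)   # complex power: phasor voltage times conjugate phasor current

print(abs(Z), np.rad2deg(np.angle(Z)))  # 5.0 ohms at 45 degrees
print(S.real, S.imag)                   # average power P (W), reactive power Q (var)
```

Neither Z nor S carries any time dependence, which is exactly the sense in which neither is a phasor.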
If http://en.wikibooks.org/wiki/Circuit_Theory/Phasor_Arithmetic is "confused", then I am inclined to remove the link to it, because I don't know how to fix it. But that link was intended to clarify this statement:
After reading what you said, I think what's missing is a statement that multiplying a phasor by a complex impedance yields another phasor. But multiplication of two phasors (or squaring one) does not produce another phasor.
While we're on a roll, what do you think of this excerpt:
- "...the complex potential in such fields as electromagnetic theory, where—instead of manipulating a real quantity, u—it is often more convenient to derive its harmonic conjugate, v, and then operate upon the complex quantity u + jv, again recovering the real component of the complex "result" as the last stage of computation to generate the true result."
- Is it useful/helpful?
- Does it contribute to the understanding of "phasor"?
- Does the term "operate on" need clarification? E.g., is multiplication restricted to just one "complex potential" and something passive?
--Bob K 16:34, 21 August 2007 (UTC)
- "Is it useful/helpful?" IMHO, No.
- "Does it contribute to the understanding of "phasor"? I think I answered that already ;<)
- "Does the term "operate on" need clarification?" Not after you've deleted that material.
- You know, the quote above regarding complex math isn't necessary either. I understand that everybody wants to contribute something to Wikipedia but, as a result, there's a lot of excess verbosity in articles where a wikilink would suffice. What do you think? Alfred Centauri 21:21, 21 August 2007 (UTC)
Good idea! I also moved the trig stuff to a more appropriate article.
--Bob K 21:54, 21 August 2007 (UTC)
DC is a sinusoid of 0 frequency
I removed this statement from the Circuit Laws section because it isn't quite correct and isn't needed anyhow to justify the use of phasors. The problem is that phasors are complex numbers in general. DC circuits do not have complex voltages or currents. So, while phasors generalize DC circuit analysis to AC circuits, we can't really go back the other way unless we want to admit complex DC sources. Alfred Centauri 01:32, 27 February 2006 (UTC)
- Well yes, but the impedance of an inductor and a capacitor go to zero and infinity, respectively, at DC. So provided there are no complex DC sources in the circuit, there will be no complex voltages or currents in the circuit. Plugwash 19:40, 28 March 2006 (UTC)
But my point is precisely that equating DC with a zero frequency sinusoid, as you have done above, is not quite correct. Consider the following AC circuit:
A voltage source of 1 + j0 Vrms in series with a 1 ohm resistor and a current source of 0 − j1 Arms. The average power associated with the resistor is 1 W and is independent of frequency, right? But wait; recall that the time domain voltage source and current source functions are given by:
v_s(t) = √2·cos(ωt) V and i_s(t) = √2·cos(ωt − π/2) A
Setting the frequency to zero we get:
v_s(t) = √2 V and i_s(t) = √2·cos(−π/2) = 0 A
With a 'DC' current of 0 A, the power associated with the resistor is 0 W, but this result conflicts with the result above. Clearly, in the context of AC circuit analysis, it is not quite correct to say that DC is just zero frequency. Alfred Centauri 22:25, 28 March 2006 (UTC)
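A quick numeric version of this example (a sketch; in the series loop the resistor current is fixed by the current source):

```python
import numpy as np

def avg_resistor_power(w, R=1.0):
    """Average power in the 1 ohm resistor; i(t) = sqrt(2)*sin(w*t) is the
    time-domain form of the current-source phasor 0 - j1 Arms."""
    if w == 0:
        t = np.linspace(0, 1, 10000)              # any window: i(t) is constant
    else:
        t = np.linspace(0, 2 * np.pi / w, 10000)  # one full period
    i = np.sqrt(2) * np.sin(w * t)
    return np.mean(i ** 2) * R

print(avg_resistor_power(2 * np.pi * 60))  # ~1.0 W, matching the phasor result
print(avg_resistor_power(0))               # 0.0 W when 'DC' means plugging in w = 0
```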
Here's something else to consider. The rms voltage of a DC source is simply the DC voltage value. The rms voltage of an AC voltage source is the peak voltage over the square root of 2. Since this result is independent of frequency, it seems reasonable to believe that the rms voltage of a zero frequency cosine is equal to the rms value for any non-zero frequency cosine. However, if we insert a frequency of zero into a cosine, the result is the constant peak value, not the constant rms value. Once again, it does not appear to be entirely correct to say that a DC source is a sinusoidal source of zero frequency. Alfred Centauri 23:36, 28 March 2006 (UTC)
- Ahh yes, but you introduced a complex source. If all DC sources are real, then all DC voltages and currents must also be real and the power calculations work fine. Plugwash 01:33, 29 March 2006 (UTC)
Not true! Look at the expression for i_s(t). That is a real source, my friend, regardless of frequency. It is the phasor representation of this source that is complex. Further, look at the 2nd example I give. No complex sources there, right? Alfred Centauri 02:16, 29 March 2006 (UTC)
- Phasor analysis cannot be used for power calculations at all, since the basis of phasor notation is a magnitude and angle for a single cosine at a particular frequency. You just can't multiply two phasors together and get a phasor out - that's why AC power analysis is trickier. To get the rms power, you can multiply the phasor representations of the rms values together to get a complex number that gives the real and imaginary power, but then the rms value of a zero-frequency current at π/2 will be 0... DukeEGR93 23:50, 6 November 2006 (UTC)
Phasor analysis can and is used for power calculations. Although your statement is correct that the product of two phasors is not a phasor, this fact does not imply your assertion. After all, the fact that impedance, being the ratio of two phasors, is not a phasor does not make impedance any less useful a concept. Similarly, the complex power given by the product of the voltage and conjugate current (rms) phasors, while not being a phasor, is nonetheless a valid and useful quantity whose real part is the time average power (not the 'rms' power!) and whose imaginary part is the peak reactive power.
Your statement that "the rms value of a zero-frequency current at π/2 will be 0" is not even wrong. There is no such thing as an rms value at π/2. The rms value of a unit amplitude sinusoid - regardless of frequency or phase - is 1/√2. Alfred Centauri 00:29, 7 November 2006 (UTC)
- Well, we will have to disagree with that latter one since the rms of sin(0t + π/2) is certainly not 1/√2 - in the singular case of DC, the phase is important in determining the rms value. For the former, perhaps just as you are saying cos(ωt + θ) is not a sinusoid if ω = 0, I would say that it is not phasor analysis being used to compute power (or find impedance values, for that matter) but rather phasors used to find complex numbers that are not in and of themselves phasors. DukeEGR93 04:05, 7 November 2006 (UTC)
There's no room for disagreement here. By definition, the rms value of a sinusoid is:
X_rms = sqrt( (1/T) ∫₀ᵀ A²cos²(ωt + φ) dt ) = A/√2, where T = 2π/ω is the period.
Note that this result holds in the limit as the period T goes to infinity. Thus, your assertion that the phase is important in determining the rms value is obviously false - even in the case where the frequency is zero (infinite period). If this isn't clear to you yet, then think about the time average power delivered to a resistor by a sinusoidal voltage source with an arbitrarily large period (T = 15 billion years, for example) compared to that delivered by a true DC voltage source. Alfred Centauri 05:50, 7 November 2006 (UTC)
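The definition is easy to test numerically. A sketch (the function name and sample counts are arbitrary choices):

```python
import numpy as np

def rms_over_period(w, phase, A=1.0):
    """RMS of A*cos(w*t + phase) over one full period T = 2*pi/w."""
    T = 2 * np.pi / w
    t = np.linspace(0, T, 100000)
    x = A * np.cos(w * t + phase)
    return np.sqrt(np.mean(x ** 2))

for w in (1.0, 1e-3, 1e-6):                  # the period grows without bound
    print(w, rms_over_period(w, phase=1.0))  # ~0.70711 = 1/sqrt(2) every time
```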
- You have presumed incorrectly. I do indeed claim that the rms value of either sin(0t) or cos(0t) is 1/√2. This is clearly so from the definition of the rms value of a sinusoidal function of time I gave above. The fact that you choose to ignore a valid mathematical result is troubling enough, but then you proceed to compound your error by equating cos(0t + θ) with cos(θ). Don't you see? The former is a function of time but the latter is not. If you do not see the difference, then integrate both expressions with respect to time and see what happens. Finally, your assertion that the rms value of a zero frequency sinusoidal function of time depends on the phase violates a fundamental principle - that absolute phase is not physically meaningful. To claim otherwise is equivalent to claiming that a choice of zero time has physical meaning.
- Look, let's say you have a sinusoidal voltage source connected to a resistor. You claim that v(t) = cos(ωt + φ) V. However, I claim that v(t') = cos(ωt') V where t' = t + φ/ω. That is, my choice of zero time differs from your choice of zero time. Nonetheless, we both calculate the same rms voltage across the resistor. This is as it should be because the choice of zero time (or equivalently, the phase) is arbitrary, so we should calculate the same average power delivered to the resistor. Note that I have not placed any constraint on the frequency here. In fact, according to the principle that the choice of zero time has no physical meaning, this result should hold for the case of zero frequency. However, according to your claim, in the case of zero frequency, we will calculate different average powers! Which one is correct? Which choice of time zero is correct? Surely you can see that this is an absurd result! Alfred Centauri 22:09, 7 November 2006 (UTC)
- So, what is ? I believe you are banking too much on your definition when this is clearly a singular case worthy of a singular definition. Your transformation of variables has a major issue in that you are transforming a finite, bounded space into an unbounded, infinite space. Though, in all this, I very much appreciate the prompts to really think about these things. One of my colleagues in the Math department, when posed with the question, immediately answered "I'm not a statistician," which I thought was a bit of a cop-out. :) DukeEGR93 01:29, 8 November 2006 (UTC)
- Phasors have a MAGNITUDE and a PHASE. The two are independent of each other; in this case the MAGNITUDE is actually an RMS value, but whether you divide by root(2) or not, the magnitude of the phasor is independent of phase. Remember, a phasor is always over a COMPLETE cycle, so it doesn't matter where in time you feel like labeling the start of that cycle. A complete cycle of a 0 frequency phasor in fact never ends - you just integrate on into infinity. The function does not converge... so the above definition of the RMS value of a 0 frequency phasor is undefined: infinity over infinity. However, it can be shown that with DC you can choose ANYTHING besides two zeroes for your limits of integration and you just get the magnitude of the DC signal. In reality, the RMS value of a DC signal is just the magnitude of the DC signal. Just substitute 0 for T in the above equation. Or think about it graphically. A flat line at y=2 has a square of y=4. No matter where you start or stop integrating (as long as you eventually stop, i.e. you don't go to either infinity), you have a rectangle of area 4T. This is then divided by T in the equation, so you have the square root of 4... which is 2.
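The last point of the comment above checks out numerically for any finite window (a minimal sketch; the window length is an arbitrary choice):

```python
import numpy as np

T = 3.7                          # any finite integration window
t = np.linspace(0, T, 10000)
x = 2.0 * np.ones_like(t)        # DC signal of magnitude 2
print(np.sqrt(np.mean(x ** 2)))  # 2.0: the rms of a DC signal is its magnitude
```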
- Reply to anonymous. (1) The magnitude of a phasor can be the peak value of the associated sinusoid. It is not always an RMS value. (2) The statement "a phasor is always over a COMPLETE cycle" doesn't even rise to the level of being wrong. A phasor is a constant complex number - it has no time dependence so the notion of a cycle is meaningless in the context of a phasor. (3) Regarding your 'argument' that the RMS value of 0 frequency phasor is undefined: a phasor doesn't have a frequency. (4) Infinity over infinity is not undefined - it is indeterminate. (5) Does the Fourier integral of cos(t) converge? Alfred Centauri 01:41, 17 May 2007 (UTC)
- Somehow all this reminded me of a quote, "There's no such thing as chaos, only really long transients." Struck through the rms in front of power above. Long day... How about this - the sinusoidal representation of Acos(0t + θ) is really (Acos(θ))(cos(0t)), such that its phasor notation would be Acos(θ)∠0. That solves the average power problem, so long as you only use the rms version of Acos(θ)∠0, in that the result will be purely real, since phase angles will always be zero for DC quantities under the above representation. Then again, there's no such thing as DC, only really long periods... DukeEGR93 04:59, 7 November 2006 (UTC)
While cos(0t + θ) = cos(θ) is true for finite t, it does not hold if t goes to infinity (0·∞ can be any number). However, to take the rms value of a zero frequency sinusoid, we must in fact integrate over all time. It is for this reason that the rms value of the zero frequency sinusoid on the left is 1/√2 while the rms value of the zero frequency sinusoid on the right is |cos(θ)|. Alfred Centauri 06:16, 7 November 2006 (UTC)
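The whole disagreement comes down to the order of two limits, which a short sketch makes visible (θ = π/2 chosen to match the example above):

```python
import numpy as np

theta = np.pi / 2

# Route 1: keep w > 0, average over a full period, then let w shrink toward 0.
for w in (1.0, 0.01, 1e-4):
    T = 2 * np.pi / w
    t = np.linspace(0, T, 100000)
    print(w, np.sqrt(np.mean(np.cos(w * t + theta) ** 2)))  # ~0.7071 for every w

# Route 2: substitute w = 0 first, then take the rms of the resulting constant.
t = np.linspace(0, 1, 1000)
print(np.sqrt(np.mean(np.cos(0 * t + theta) ** 2)))  # |cos(theta)| = 0 (to rounding)
```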
Different from the concept in physics?
Well, I think that phasors in electronics are an application of those concepts...
Gabriel
Electronics phasors don't behave like vectors, and they don't obey the rules of vectors studied in physics (statics & dynamics).
Nauman —Preceding unsigned comment added by 202.83.173.10 (talk) 11:55, 24 October 2007 (UTC)
Nauman is right. Electronics phasors don't behave the same way as physics phasors; they are different. For reference, see Fundamentals of Electric Circuits by Sergio Franco, chapter 10, AC Response. —Preceding unsigned comment added by 202.83.164.243 (talk) 14:53, 11 November 2007 (UTC)
Transients & Phasor analysis
I removed the following text from the intro:
(Important note: The phasor approach is for "steady state" calculations involving sinusoidal waves. It cannot be used when transients are involved. For calculations involving transients, the Laplace transform approach is often used instead.)
I don't believe that this statement is necessary even if it were true but, the fact is, phasors can be used to analyze transients. The bottom line is that the complex coefficients of a Fourier series or of the inverse Fourier integral are phasors.
It is usually stated that phasor analysis assumes sinusoidal steady state at one frequency and this is true as far as it goes. However, it is quite straightforward to extend phasor analysis to a circuit with multiple frequency sinusoidal excitation. When one does this, it is clear this extension is nothing other than frequency domain analysis. In other words, phasor analysis is frequency domain analysis at a fixed frequency. Alfred Centauri 23:22, 27 April 2007 (UTC)
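A sketch of that extension (the component values and frequencies are arbitrary choices): each sinusoid in the excitation gets its own phasor, and the steady-state response is the superposition of the per-frequency results.

```python
import numpy as np

# Steady-state output of an RC low-pass driven by two sinusoids at once,
# handled one frequency at a time with phasors.
R, C = 1e3, 1e-6
components = [(1.0, 2 * np.pi * 100, 0.0),    # (amplitude, omega, phase)
              (0.5, 2 * np.pi * 1000, 0.7)]

t = np.linspace(0, 0.05, 50000)
vout = np.zeros_like(t)
for A, w, ph in components:
    H = 1 / (1 + 1j * w * R * C)     # voltage divider: Zc / (R + Zc)
    Vout = A * np.exp(1j * ph) * H   # output phasor at this frequency
    vout += np.abs(Vout) * np.cos(w * t + np.angle(Vout))

print(np.sqrt(np.mean(vout ** 2)))   # rms of the combined steady-state output
```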
Merge with Phasor (physics)
This article (Phasor (electronics)) describes essentially the same thing as Phasor (physics). I believe there is no reason to maintain two different articles. The main difference between them is that this article describes the concept of phasors from the viewpoint of an engineer who uses it in a specific domain, while the other article is more general, but lacks some details that this one has. —The preceding unsigned comment was added by 129.177.44.96 (talk) 13:32, 2 May 2007 (UTC).
I agree they describe the same thing. The physics article takes a vector approach, while the electronics article is based on complex numbers. The electronics article is far more practical, while the physics article is far more theoretical, and IMO, less useful. Concatenating the electronics article to the physics article as is would probably be a good idea. Neither article is very long, though the electronics article could do without the power engineering section - it doesn't add much to the concept of phasors.
All I can say is that the Phasor (electronics) page helped me pass ECE 203. The more general physics page wouldn't have helped nearly as much.
- Merging does not mean removing information. The techniques discussed in Phasor (electronics) do not only apply to electronics. A more general context would be appropriate. —The preceding unsigned comment was added by 129.177.44.134 (talk) 15:11, 10 May 2007 (UTC).
THEY ARE THE SAME KNOWLEDGE. THEY ARE THE SAME MATERIAL. THEY ARE SYNONYMOUS. To make people happy, just put the same concepts in both places.
Would it be okay to call "Phasor (Physics)" simply "Phasor" and rename "Phasor (electronics)" to "Application of phasors in electronics" or something of the like? All redundant material introducing abstract phasors could be deleted from the latter, and it could be considered as building on the former. —1st year EE student —Preceding unsigned comment added by 128.54.192.216 (talk) 16:07, 27 September 2007 (UTC)
- I agree with the above suggestion. We can call Phasor (Physics) simply Phasor and rename the electronics article to Applications of Phasors in Electronics. We would cut down on redundancy, and make both articles mesh together better. xC | ☎ 22:16, 1 November 2007 (UTC)
Definitely merge them. Mpassman (talk) 18:18, 17 November 2007 (UTC)
The Role of Linearity
I like the section on Phasor arithmetic and I suggest noting that the role of linearity is also an important component of the technique. If, for example, the differential equation in the example were not linear then phasors would be for naught. gaussmarkov 0:28, 6 September 2007 (UTC)
- Done. Alfred Centauri 00:52, 6 September 2007 (UTC)
Importance of the properties of the Re{} operator
I think it should be better explained why the Re{} operator is usually dropped before some complicated algebra and then reapplied at the end. e.g. how the differential operator can be moved inside the Re{} operator. -Roger 23:14, 1 November 2007 (UTC)
- Isn't the explanation simply the orthogonality of even and odd functions? Alfred Centauri 02:46, 2 November 2007 (UTC)
Orthogonality (or something a little more elusive) is the reason for wanting to do operations in the complex domain, as I will try to explain. Linearity, loosely defined as operations that affect the Re and Im components independently, is a characteristic of certain mathematical operations that may be moved inside the Re operator without changing the net effect. Such operations include differentiation, integration, time-delay, multiplication by a (real-valued) scalar, and addition. No orthogonality is required to do these things.
If we limited our phasor arithmetic to those kinds of operations, the Im part we chose would not matter, and there would be no benefit at all. The benefit comes from choosing a waveform orthogonal to the Re component, such that:
z(t) = cos(ωt + θ) + j·sin(ωt + θ) = e^j(ωt + θ),
which has the useful property:
z(t − τ) = e^−jωτ · z(t),
whereas:
cos(ω(t − τ) + θ) is not, in general, a real scalar multiple of cos(ωt + θ).
We see that in the complex domain, a time-delay can be represented more simply by just the product with a complex scalar (which is why impedances are complex).
- Multiplication of 2 phasors, such as by a mixer or a square-law rectifier, is not linear. Phasors may look like scalars, but one has to remember that they are actually a shorthand notation for a time-variant waveform.
So it appears that the motivation for working in the complex domain is to simplify the effect of time-delays / phase-shifts caused by linear devices with "memory".
--Bob K 15:27, 12 November 2007 (UTC)
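A numeric sketch of the time-delay property (the frequency, phase, and delay are arbitrary values):

```python
import numpy as np

w, theta, tau = 2 * np.pi * 50, 0.3, 1e-3
t = np.linspace(0, 0.1, 10000)

z = np.exp(1j * (w * t + theta))               # complex-domain signal
delayed = np.exp(1j * (w * (t - tau) + theta))
scaled = np.exp(-1j * w * tau) * z             # same delay as one complex scalar

print(np.allclose(delayed, scaled))  # True: delay = multiplication by exp(-j*w*tau)
# No real scalar does this for the real part alone: cos(w*(t - tau) + theta)
# is generally not a real multiple of cos(w*t + theta).
```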
problem with the new introduction
The word "phasor" has two meanings. One meaning includes the ejωt factor, and the other excludes it. Until recently these were relegated to different articles, one called "Phasor (physics)" and the other called "Phasor (engineering)" (I think). But there was fairly strong sentiment to merge the articles, which resulted in this one. The new introduction appears to be heading back to the "Phasor (engineering)" article.
--Bob K (talk) 17:29, 17 December 2007 (UTC)
- My bias is showing; but the opening paragraph should follow the formula of giving at least a concise definition of the subject. The introduction isn't new, it's a rehash of a paragraph from a month or so ago. Let's rewrite the opening so that it allows both definitions. If the meanings are really disparate we need two articles, though I don't think we ever had "Phasor (engineering)". --Wtshymanski (talk) 17:43, 17 December 2007 (UTC)
OK, I looked back. It was "Phasor (electronics)". I'm sure we can come up with a suitable introduction. It really is the same concept in both disciplines. But the physicists are less inclined to solve circuit equations.
--Bob K (talk) 23:13, 17 December 2007 (UTC)
Phasor diagrams
I think that there should be a separate article dedicated to phasor diagrams - describing how they are made and what they mean.
--Čikić Dragan (talk) 17:06, 21 May 2008 (UTC)