In physics and engineering, a phasor (a portmanteau of phase vector[1][2]) is a complex number representing a sinusoidal function whose amplitude A and initial phase θ are time-invariant and whose angular frequency ω is fixed.
It is related to a more general concept called analytic representation,[3] which decomposes a sinusoid into the product of a complex constant and a factor depending on time and frequency.
Phasor representation allows the analyst to represent the amplitude and phase of the signal using a single complex number.[6] An important additional feature of the phasor transform is that differentiation and integration of sinusoidal signals (having constant amplitude, period and phase) correspond to simple algebraic operations on the phasors; the phasor transform thus allows the analysis (calculation) of the AC steady state of RLC circuits by solving simple algebraic equations (albeit with complex coefficients) in the phasor domain instead of solving differential equations (with real coefficients) in the time domain.[8][9][a] The originator of the phasor transform was Charles Proteus Steinmetz, working at General Electric in the late 19th century.[12] Glossing over some mathematical details, the phasor transform can also be seen as a particular case of the Laplace transform (limited to a single frequency), which, in contrast to phasor representation, can be used to (simultaneously) derive the transient response of an RLC circuit.[9][11] However, the Laplace transform is mathematically more difficult to apply, and the effort may be unjustified if only steady-state analysis is required.
Multiplication and division of complex numbers become straightforward in phasor notation. Given the phasors $A_1 \angle \theta_1$ and $A_2 \angle \theta_2$, the following is true:[14]
$$A_1 \angle \theta_1 \cdot A_2 \angle \theta_2 = A_1 A_2 \angle (\theta_1 + \theta_2),$$
$$\frac{A_1 \angle \theta_1}{A_2 \angle \theta_2} = \frac{A_1}{A_2} \angle (\theta_1 - \theta_2).$$
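As a quick numerical check of these two rules, here is a minimal Python sketch using the standard cmath module; the phasor values 2∠30° and 4∠45° are arbitrary illustrations:

```python
import cmath
import math

def phasor(magnitude, angle_deg):
    """Build a complex number from polar form (angle in degrees)."""
    return cmath.rect(magnitude, math.radians(angle_deg))

A1 = phasor(2.0, 30.0)   # 2 ∠ 30°
A2 = phasor(4.0, 45.0)   # 4 ∠ 45°

prod = A1 * A2           # expect 8 ∠ 75°
quot = A1 / A2           # expect 0.5 ∠ -15°

print(abs(prod), math.degrees(cmath.phase(prod)))   # 8.0, 75.0
print(abs(quot), math.degrees(cmath.phase(quot)))   # 0.5, -15.0
```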
A real-valued sinusoid with constant amplitude, frequency, and phase has the form
$$A \cos(\omega t + \theta),$$
where only the parameter $t$ is time-variant.
The inclusion of an imaginary component,
$$i\, A \sin(\omega t + \theta),$$
gives it, in accordance with Euler's formula, the factoring property described in the lead paragraph:
$$A \cos(\omega t + \theta) + i\, A \sin(\omega t + \theta) = A e^{i(\omega t + \theta)} = A e^{i\theta} \cdot e^{i\omega t},$$
whose real part is the original sinusoid. The time-invariant complex constant $A e^{i\theta}$ is the phasor.
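A minimal Python sketch of this factoring property (the amplitude, phase and 50 Hz frequency are arbitrary illustrative values): the real part of the complex representation reproduces the original sinusoid at every instant.

```python
import cmath
import math

A, theta, omega = 1.5, math.radians(60), 2 * math.pi * 50   # amplitude, phase, 50 Hz

phasor = A * cmath.exp(1j * theta)        # the time-invariant complex constant A·e^{iθ}

for t in (0.0, 0.001, 0.002, 0.003):
    analytic = phasor * cmath.exp(1j * omega * t)    # A·e^{iθ}·e^{iωt}
    original = A * math.cos(omega * t + theta)       # the real-valued sinusoid
    assert math.isclose(analytic.real, original, abs_tol=1e-12)
```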
Multiplication of the phasor $A e^{i\theta} e^{i\omega t}$ by a complex constant $B e^{i\phi}$ produces another phasor. That means its only effect is to change the amplitude and phase of the underlying sinusoid:
$$\operatorname{Re}\{(A e^{i\theta} \cdot B e^{i\phi})\, e^{i\omega t}\} = A B \cos(\omega t + \theta + \phi).$$
The sum of sinusoids with the same frequency is another sinusoid with that frequency, so the sum of phasors is another phasor:
$$A_1 \cos(\omega t + \theta_1) + A_2 \cos(\omega t + \theta_2) = A_3 \cos(\omega t + \theta_3),$$
where
$$A_3^2 = (A_1 \cos\theta_1 + A_2 \cos\theta_2)^2 + (A_1 \sin\theta_1 + A_2 \sin\theta_2)^2,$$
$$\theta_3 = \arctan\!\left(\frac{A_1 \sin\theta_1 + A_2 \sin\theta_2}{A_1 \cos\theta_1 + A_2 \cos\theta_2}\right),$$
or, via the law of cosines on the complex plane (or the trigonometric identity for angle differences):
$$A_3^2 = A_1^2 + A_2^2 - 2 A_1 A_2 \cos(180^\circ - \Delta\theta) = A_1^2 + A_2^2 + 2 A_1 A_2 \cos\Delta\theta,$$
where $\Delta\theta = \theta_1 - \theta_2$. A key point is that $A_3$ and $\theta_3$ do not depend on $\omega$ or $t$, which is what makes phasor notation possible.
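This frequency- and time-independence is easy to check numerically. A short Python sketch (the amplitudes, phases and 60 Hz frequency are arbitrary illustrations):

```python
import cmath
import math

A1, th1 = 3.0, math.radians(20)
A2, th2 = 5.0, math.radians(-70)

# Add the phasors, then read off amplitude and phase of the sum.
p3 = A1 * cmath.exp(1j * th1) + A2 * cmath.exp(1j * th2)
A3, th3 = abs(p3), cmath.phase(p3)

# The same A3 also follows from the law-of-cosines form given above.
A3_loc = math.sqrt(A1**2 + A2**2 + 2 * A1 * A2 * math.cos(th1 - th2))
assert math.isclose(A3, A3_loc)

# A3 and th3 reproduce the time-domain sum at any frequency and time.
omega, t = 2 * math.pi * 60, 0.0042
lhs = A1 * math.cos(omega * t + th1) + A2 * math.cos(omega * t + th2)
rhs = A3 * math.cos(omega * t + th3)
assert math.isclose(lhs, rhs, abs_tol=1e-9)
```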
The time and frequency dependence can be suppressed and re-inserted into the outcome as long as the only operations used in between are ones that produce another phasor.
In physics, this sort of addition occurs when sinusoids interfere with each other, constructively or destructively.
The static vector concept provides useful insight into questions like this: "What phase difference would be required between three identical sinusoids for perfect cancellation?"
Clearly, the shape which satisfies these conditions is an equilateral triangle, so the angle between each phasor and the next is 120° (2π⁄3 radians), or one third of a wavelength, λ⁄3. This means that in the case of many sources, destructive interference happens when the first and last wave differ by 360 degrees, a full wavelength.
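The three-sinusoid case above can be verified directly with phasors; a minimal Python sketch:

```python
import cmath

# Three identical unit phasors spaced 120° (2π/3 rad) apart sum to zero,
# so the corresponding sinusoids cancel completely at every instant.
total = sum(cmath.exp(1j * 2 * cmath.pi * k / 3) for k in range(3))
print(abs(total))   # ~1e-16, i.e. zero to machine precision
```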
If the projection of its rotating tip is transferred to a graph at successive angular intervals in time, a sinusoidal waveform is traced out, starting at the left at time zero. The time axis of the waveform then represents the angle, in degrees or radians, through which the phasor has moved. So we can say that a phasor represents a scaled voltage or current value of a rotating vector that is "frozen" at some point in time (t); in our example above, this is at an angle of 30°.
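A brief Python sketch of this picture (the 10 V peak value and the 50 Hz rotation rate are assumptions for illustration): sampling the projection of the rotating tip at successive instants gives samples of the sinusoid, while the frozen complex value itself carries only the amplitude and the 30° phase.

```python
import cmath
import math

Vm = 10.0                                        # peak value of the rotating vector
phasor = Vm * cmath.exp(1j * math.radians(30))   # the vector "frozen" at 30°
omega = 2 * math.pi * 50                         # assumed 50 Hz rotation

# Projecting the rotating tip onto the real axis at successive instants
# traces out the sinusoid Vm·cos(ωt + 30°).
for n in range(5):
    t = n * 0.001
    v = (phasor * cmath.exp(1j * omega * t)).real
    print(f"t = {t:.3f} s  v = {v:+.3f}")
```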
Therefore, in phasor representation, the time derivative of a sinusoid becomes just multiplication by the constant $i\omega$; similarly, integration becomes division by $i\omega$, and the time-dependent factor $e^{i\omega t}$ is unaffected.
When we solve a linear differential equation with phasor arithmetic, we are merely factoring $e^{i\omega t}$ out of all terms of the equation and reinserting it into the answer.
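As a hedged sketch of this procedure in Python (the series RC circuit and its component values are assumptions, not taken from the text): the circuit equation $RC\,\frac{dv_C}{dt} + v_C = v_S$ becomes, once $e^{i\omega t}$ is factored out, the algebraic relation $(1 + i\omega RC)\,V_C = V_S$.

```python
import cmath
import math

R, C = 1_000.0, 1e-6                 # assumed series RC low-pass: 1 kΩ, 1 µF
omega = 2 * math.pi * 500            # assumed 500 Hz source
Vs = 10.0 * cmath.exp(1j * 0.0)      # source phasor: 10 V peak, zero phase

# Differentiation became multiplication by iω, so the ODE
# RC·dvC/dt + vC = vS is now just (1 + iωRC)·VC = VS.
Vc = Vs / (1 + 1j * omega * R * C)

print(abs(Vc), math.degrees(cmath.phase(Vc)))   # output amplitude and phase

# Re-insert e^{iωt} to recover the time-domain steady-state response.
t = 0.0003
print((Vc * cmath.exp(1j * omega * t)).real)
```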
We can also define the complex power S = P + jQ and the apparent power, which is the magnitude of S. The power law for an AC circuit expressed in phasors is then S = VI* (where I* is the complex conjugate of I, and the magnitudes of the voltage and current phasors V and I are the RMS values of the voltage and current, respectively).
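A minimal numerical sketch of the S = VI* relation in Python (the 230 V and 5 A RMS phasors and the 30° lag are made-up values):

```python
import cmath
import math

# RMS phasors: 230 V at 0°, 5 A lagging the voltage by 30°.
V = 230.0 * cmath.exp(1j * 0.0)
I = 5.0 * cmath.exp(1j * math.radians(-30))

S = V * I.conjugate()      # complex power S = P + jQ
P = S.real                 # real (active) power, W
Q = S.imag                 # reactive power, var
apparent = abs(S)          # apparent power |S|, VA

print(P, Q, apparent)      # ≈ 995.9 W, 575.0 var, 1150.0 VA
```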
Multiple-frequency linear AC circuits and AC circuits with different waveforms can be analyzed to find voltages and currents by transforming all waveforms to sine wave components (using Fourier series) with magnitude and phase, then analyzing each frequency separately, as allowed by the superposition theorem.
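A short Python sketch of this per-frequency procedure (the square-wave source, the RC low-pass filter, and all component values are assumptions for illustration): each Fourier harmonic of the source is handled as its own phasor, and the time-domain responses are summed by superposition.

```python
import cmath
import math

R, C = 1_000.0, 1e-6            # assumed RC low-pass filter: 1 kΩ, 1 µF
f0, Vpk = 50.0, 10.0            # assumed 50 Hz square-wave source, 10 V peak

def vc(t, n_harmonics=19):
    """Steady-state capacitor voltage: one phasor division per harmonic, then sum."""
    total = 0.0
    for k in range(1, n_harmonics + 1, 2):        # odd harmonics of the square wave
        # Fourier component of the source: (4·Vpk/(π·k))·sin(k·ω0·t), i.e. a
        # phasor of magnitude 4·Vpk/(π·k) at -90° (sin is cos shifted by -90°).
        Vk = (4 * Vpk / (math.pi * k)) * cmath.exp(-1j * math.pi / 2)
        omega = 2 * math.pi * k * f0
        Vck = Vk / (1 + 1j * omega * R * C)       # phasor analysis at this frequency
        total += (Vck * cmath.exp(1j * omega * t)).real
    return total                                  # superposition of all harmonics

print(vc(0.004))
```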
In the analysis of three-phase AC power systems, a set of phasors is usually defined as the three complex cube roots of unity, graphically represented as unit magnitudes at angles of 0, 120 and 240 degrees.
This approach greatly simplifies the work required in electrical calculations of voltage drop, power flow, and short-circuit currents.
In the context of power systems analysis, the phase angle is often given in degrees, and the magnitude in RMS value rather than the peak amplitude of the sinusoid.
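A brief Python sketch of this convention (the 230 V phase magnitude is an arbitrary illustration): the three cube roots of unity generate a balanced three-phase set, whose phasor sum is zero.

```python
import cmath
import math

# The three complex cube roots of unity: 1∠0°, 1∠120°, 1∠240°.
a = cmath.exp(1j * 2 * cmath.pi / 3)
roots = [1, a, a**2]

# A balanced three-phase set: 230 V RMS per phase (magnitude in RMS,
# angle in degrees, per the power-system convention noted above).
Va = 230.0 * cmath.exp(1j * 0.0)
phases = [Va * r for r in roots]

for V in phases:
    # 240° is reported as -120° by cmath.phase; the two are equivalent.
    print(f"{abs(V):.1f} V ∠ {math.degrees(cmath.phase(V)):+.1f}°")

# The phasors of a balanced set sum to zero (no neutral current).
print(abs(sum(phases)))   # ~0
```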
In the context of analog modulation, the carrier is written as
$$A \cos(2\pi f_0 t + \theta) = \operatorname{Re}\{ A e^{i\theta} \cdot e^{i 2\pi f_0 t} \},$$
where the term in brackets is viewed as a rotating vector in the complex plane.
In the case of narrowband frequency or phase modulation, the vector sum of the modulating phasors is shifted 90° from the carrier phase.
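This 90° relation can be checked directly in the carrier's rotating frame. A hedged Python sketch for single-tone modulation (the modulation index and tone frequency are made-up values, and the narrowband approximation is assumed): the two AM sideband phasors sum to a purely real, in-phase quantity, whereas the narrowband FM/PM sideband phasors sum to a purely imaginary one, shifted 90° from the carrier.

```python
import cmath
import math

fm, m = 1_000.0, 0.2          # single-tone modulation: frequency and (small) index
t = 0.37e-3                   # an arbitrary instant

wm = 2 * math.pi * fm

# Sideband (modulation) phasors measured in the carrier's rotating frame.
am_sum = (m / 2) * cmath.exp(1j * wm * t) + (m / 2) * cmath.exp(-1j * wm * t)
fm_sum = (m / 2) * cmath.exp(1j * wm * t) - (m / 2) * cmath.exp(-1j * wm * t)

print(am_sum)   # purely real: in phase with the carrier (amplitude change only)
print(fm_sum)   # purely imaginary: shifted 90° from the carrier (phase change)
```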