Taylor rule
From Wikipedia, the free encyclopedia
The Taylor rule is a modern monetary policy rule proposed by economist John B. Taylor that stipulates how much the Federal Reserve should change interest rates in response to divergences of real GDP from potential GDP and divergences of actual inflation from a target rate of inflation.
it = πt + r*t + aπ ( πt – π*t ) + ay ( yt – ȳt )

Where it is the target federal funds rate, πt is the rate of inflation as measured by the GDP deflator, π*t is the desired rate of inflation, r*t is the assumed equilibrium real interest rate, yt is the logarithm of real GDP, ȳt is the logarithm of potential output as determined by a linear trend, and aπ and ay are positive response coefficients; Taylor proposed setting both to 0.5 (Taylor, 1993).
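As a minimal sketch, the rule can be computed directly from these definitions. The function below uses Taylor's illustrative values (response coefficients of 0.5 and an equilibrium real rate and inflation target of 2%); the function name and the percentage-point convention for the output gap are choices made here for illustration, not part of the rule itself.

```python
def taylor_rate(inflation, output_gap, r_star=2.0, pi_star=2.0,
                a_pi=0.5, a_y=0.5):
    """Target nominal interest rate per the Taylor rule, in percent.

    inflation  -- current inflation rate (GDP deflator), percent
    output_gap -- percentage deviation of real GDP from potential,
                  i.e. 100 * (y_t - ybar_t)
    Defaults are the illustrative values from Taylor (1993).
    """
    return inflation + r_star + a_pi * (inflation - pi_star) + a_y * output_gap

# Inflation 1 point above target and output 1% above potential:
print(taylor_rate(3.0, 1.0))  # -> 6.0
```

With inflation at 3% and output 1% above potential, each gap term contributes 0.5 points on top of the 5% neutral level (3% inflation plus the 2% equilibrium real rate), giving a 6% target rate.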
Interpretation
The rule "recommends" a relatively high interest rate (that is, a "tight" monetary policy) when inflation is above its target or when the economy is above its full employment level, and a relatively low interest rate ("easy" monetary policy) in the opposite situations. Sometimes these goals are in conflict: inflation may be above its target while the economy is below full employment (as in the case of stagflation). In such situations, the rule provides guidance to policy makers on how to balance these competing considerations in setting an appropriate level for the interest rate.
Although the Fed does not explicitly follow the rule, analysis shows that the rule describes fairly accurately how monetary policy actually has been conducted during the past decade under Alan Greenspan. This fact has been cited by many economists inside and outside of the Fed as a reason that inflation has remained under control and that the US economy has been relatively stable over that period.