Monetary Economics
Members and guests of the NBER's Program on Monetary Economics met in Cambridge on November 17. Program Director Ben S. Bernanke, also of Princeton University, organized the program and chose the papers for discussion, which are summarized below.
Hsieh and Romer consider the $1 billion expansionary open market operation undertaken in the spring of 1932 as a crucial case study of the link between monetary expansion and expectations of devaluation. They use data on forward exchange rates to measure expectations of devaluation during this episode but find little evidence that the large monetary expansion led investors to believe that the United States would devalue. The financial press and the records of the Federal Reserve System also show little evidence of expectations of devaluation or fear of a speculative attack. The authors find that a flawed model of the effects of monetary policy and conflict among the 12 Federal Reserve banks, rather than concern about the gold standard, led the Fed to suspend the expansionary policy in the summer of 1932.
A number of studies have used quarterly data to estimate monetary policy rules or reaction functions. These rules seem to imply very slow adjustment of the policy interest rate: only about 20 percent of the gap between the current rate and its target is closed each quarter. The conventional wisdom is that this gradual adjustment reflects policy inertia or interest rate smoothing behavior by central banks. However, Rudebusch notes that such slow quarterly adjustment implies predictable future variation in the policy rate at horizons of several quarters. In contrast, evidence from the term structure of interest rates suggests that financial markets contain no such information about future changes in the policy rate. Rudebusch offers an alternative interpretation: the large coefficients on the lagged interest rate in estimated policy rules may instead reflect persistent special factors that cause the central bank to deviate from the policy rule in unpredictable ways.
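To see why slow partial adjustment implies predictable rate changes, consider a rule of the form i_t = rho*i_{t-1} + (1 - rho)*i*_t with rho = 0.8 at a quarterly frequency: only 20 percent of the gap between the current rate and its target is closed each quarter, so much of the remaining gap shows up as predictable rate changes over the next several quarters. The sketch below is a minimal illustration of that point, not Rudebusch's estimation; the AR(1) target path and all parameter values are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
T, rho = 20_000, 0.8          # rho = 0.8: only ~20 percent of the gap is closed per quarter

# Illustrative AR(1) stand-in for the rule's recommended (target) interest rate
i_star = np.zeros(T)
for t in range(1, T):
    i_star[t] = 0.9 * i_star[t - 1] + rng.normal(scale=0.25)

# Partial-adjustment ("interest rate smoothing") policy rule
i = np.zeros(T)
for t in range(1, T):
    i[t] = rho * i[t - 1] + (1 - rho) * i_star[t]

# How predictable are future changes in the policy rate from today's target-rate gap?
for h in (1, 2, 4):
    gap = i_star[1:T - h] - i[1:T - h]           # gap known at time t
    future_change = i[1 + h:] - i[1:T - h]       # change in the rate over the next h quarters
    r2 = np.corrcoef(gap, future_change)[0, 1] ** 2
    print(f"h = {h} quarters ahead: R^2 of future rate change on today's gap = {r2:.2f}")
```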
Svensson and Woodford derive and interpret the optimal weights on indicators in models with partial information about the state of the economy and forward-looking variables, for equilibria under discretion and under commitment. They examine an example of optimal monetary policy with partially observable potential output and a forward-looking indicator. The optimal response to the optimal estimate of potential output displays certainty equivalence, while the optimal response to the imperfect observation of output depends on the noise in that observation.
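The certainty-equivalence point can be illustrated with a stylized one-shot signal-extraction example (not the Svensson-Woodford forward-looking model; the response coefficient and variances below are assumptions chosen for illustration): the response coefficient on the optimal estimate of potential output is the same whatever the observation noise, while the implied response to the raw noisy observation is that coefficient scaled by the signal-extraction gain, which shrinks as the noise grows.

```python
import numpy as np

rng = np.random.default_rng(1)
f = 1.5            # illustrative optimal response to the *estimate* of potential output
sigma_pot = 1.0    # std. dev. of the unobserved potential output

for sigma_noise in (0.1, 0.5, 1.0, 2.0):
    potential = rng.normal(scale=sigma_pot, size=100_000)
    observed = potential + rng.normal(scale=sigma_noise, size=100_000)

    # Optimal (least-squares) estimate of potential output from the noisy observation
    kappa = sigma_pot**2 / (sigma_pot**2 + sigma_noise**2)   # signal-extraction gain
    estimate = kappa * observed

    # Certainty equivalence: the coefficient on the *estimate* is f regardless of noise;
    # the implied coefficient on the raw *observation* is f * kappa, which falls with noise.
    policy = f * estimate
    coef_on_observation = np.polyfit(observed, policy, 1)[0]
    print(f"obs. noise sd = {sigma_noise:.1f}: response to estimate = {f:.2f}, "
          f"implied response to observation = {coef_on_observation:.2f}")
```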
The U.S. economy has experienced a dramatic decline in the volatility of both inflation and output since the early 1980s. Kahn, McConnell, and Perez-Quiros examine two competing explanations for this decline. The first is the popular view that improved Fed policy since the late 1970s is chiefly responsible. The second holds that improvements in information technology have reduced the volatility of aggregate output, primarily through their effects on inventory behavior. The authors model the joint determination of output, inflation, and policy in an optimizing framework and argue that the technology story plays the primary role in explaining the relative stability of the last two decades.
While staggered price setting models are increasingly popular in macroeconomics, recent empirical studies question their ability to explain short-run inflation dynamics. Jadresic shows that a staggered price setting model that allows for a flexible distribution of price durations can replicate the persistence of inflation found in the data. The model also can explain the empirical regularity that, although inflation surprises are followed by a period of slow output growth, booms in output growth are followed by a period of high inflation. The distribution of price durations that yields these results, estimated from aggregate data on prices and other variables, is consistent with the microeconomic evidence suggesting that the duration of prices and wages is about a year on average, but that there is a great deal of heterogeneity across individual prices and wages.
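The role of the duration distribution can be seen in a stylized, mechanical sketch (this is not Jadresic's estimated model): suppose a permanent 1 percent nominal shock hits at quarter zero, contracts are uniformly staggered, and each firm adjusts fully the first time its price comes up for renewal. The path of the aggregate price level, and hence of inflation, then depends only on the assumed distribution of price durations; a heterogeneous mix averaging about a year stretches the adjustment out over more quarters than a single common one-year duration.

```python
import numpy as np

quarters = np.arange(12)   # quarters after a permanent 1% nominal shock at t = 0

def price_level_path(durations, weights):
    """Share of the shock passed into the aggregate price level each quarter, assuming
    uniformly staggered contracts and full adjustment at each renewal (illustrative only)."""
    p = np.zeros(len(quarters))
    for d, w in zip(durations, weights):
        # by quarter t, (t+1)/d of the duration-d firms have had a chance to reset
        p += w * np.minimum((quarters + 1) / d, 1.0)
    return p

# Homogeneous one-year (4-quarter) contracts vs. a heterogeneous mix averaging about a year
p_homog = price_level_path([4], [1.0])
p_mixed = price_level_path([1, 2, 4, 8], [0.2, 0.2, 0.3, 0.3])

for label, p in [("homogeneous 4-quarter", p_homog), ("mixed durations", p_mixed)]:
    inflation = np.diff(np.concatenate(([0.0], p)))
    print(label, "quarterly inflation:", np.round(inflation, 3))
```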
Tornell presents an alternative expectation formation mechanism that helps to rationalize well-known asset pricing anomalies, such as the predictability of excess returns, excess volatility, and the equity premium puzzle. As with rational expectations (RE), the expectation formation mechanism that Tornell considers is based on a rigorous optimization algorithm that does not presume misperceptions; it simply departs from some of the implicit assumptions that underlie RE. The new element is that uncertainty cannot be modeled via probability distributions. Tornell considers an asset pricing model in which uncertainty is represented by unknown disturbance sequences, as in the H-infinity control literature. Agents must filter the "persistent" and "transitory" components of a sequence of observations in order to make consumption and portfolio decisions. Tornell finds that H-infinity forecasts are more sensitive to news than RE forecasts and that equilibrium prices exhibit the anomalies mentioned above.
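The greater sensitivity of H-infinity forecasts to news can be illustrated with a minimal scalar filtering comparison. The sketch below uses the standard game-theoretic (H-infinity) filter recursion for a persistent-plus-transitory observation, rather than Tornell's specific model, and the persistence, variance, and robustness parameters are illustrative assumptions: setting the robustness parameter to zero recovers the Kalman (RE) filter, while a positive value yields a larger gain, that is, forecasts that react more strongly to each new observation.

```python
# Scalar signal extraction: y_t = x_t + v_t, with persistent component x_{t+1} = a*x_t + w_t
a, Q, R = 0.95, 1.0, 4.0   # illustrative persistence and variances of w_t and v_t
theta = 0.15               # H-infinity robustness parameter (theta = 0 gives the Kalman filter)

def steady_state_gain(theta):
    """Steady-state gain of the (a priori) game-theoretic / H-infinity filter."""
    P = 1.0
    for _ in range(5_000):
        denom = 1.0 - theta * P + P / R   # must stay positive for the chosen theta
        K = (P / denom) / R               # gain on the observation innovation
        P = a**2 * P / denom + Q
    return K

print("Kalman (RE) gain :", round(steady_state_gain(0.0), 3))
print("H-infinity gain  :", round(steady_state_gain(theta), 3))
```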