# generated by /homes/nber/adrepec/bin/nbrred running on mysql0
Template-Type: ReDIF-Paper 1.0
Title: Error Components in Grouped Data: Why It's Never Worth Weighting
Author-Name: William T. Dickens
Note: LS
Number: 0043
Creation-Date: 1985-02
Order-URL: http://www.nber.org/papers/t0043
File-URL: http://www.nber.org/papers/t0043.pdf
File-Format: application/pdf
Publication-Status: published as Review of Economics and Statistics, "Error Components in Grouped Data: Is it Ever Worth Weighting?" vol LXXII, no. 2, May 1990, pp. 328-333
Abstract: When estimating linear models using grouped data, researchers typically weight each observation by the group size. Under the assumption that the regression errors for the underlying micro data have expected values of zero, are independent, and are homoscedastic, this procedure produces best linear unbiased estimates. This note argues that for most applications in economics the assumption that errors are independent within groups is inappropriate. Since grouping is commonly done on the basis of common observed characteristics, it is inappropriate to assume that there are no unobserved characteristics in common. If group members have unobserved characteristics in common, individual errors will be correlated. If errors are correlated within groups and group sizes are large, then heteroscedasticity may be relatively unimportant, and weighting by group size may exacerbate heteroscedasticity rather than eliminate it. Two examples presented here suggest that this may be the effect of weighting in most non-experimental applications. In many situations unweighted ordinary least squares may be a preferred alternative. For those cases where it is not, a maximum likelihood estimator and an asymptotically efficient two-step generalized least squares estimator are proposed. An extension of the two-step estimator for grouped binary data is also presented.
Handle: RePEc:nbr:nberte:0043
Template-Type: ReDIF-Paper 1.0
Title: Asset Pricing Theories
Author-Name: Michael Rothschild
Author-Person: pro48
Note: ME
Number: 0044
Creation-Date: 1985-03
Order-URL: http://www.nber.org/papers/t0044
File-URL: http://www.nber.org/papers/t0044.pdf
File-Format: application/pdf
Publication-Status: published as Heller, Walter P., Ross M. Starr and David Starrett (eds.) Uncertainty, Information and Communication. Essays in Honor of Kenneth J. Arrow, Vol. III. New York: Cambridge University Press, 1986.
Abstract: This article compares two leading models of asset pricing: the capital asset pricing model (CAPM) and the arbitrage pricing theory (APT). I argue that while the APT is compatible with the data available for testing theories of asset pricing, the CAPM is not. In reaching this conclusion, emphasis is placed on the distinction between the unconditional (relatively incomplete) information which econometricians must use to estimate asset pricing models and the conditional (complete) information which investors use in making the portfolio decisions that determine asset prices. Empirical work to date suggests that it is unlikely that the APT will produce a simple equation which explains differences in risk premia well with a few parameters. If the CAPM were correct, it would provide such an equation.
Handle: RePEc:nbr:nberte:0044
Template-Type: ReDIF-Paper 1.0
Title: Testing the Random Walk Hypothesis: Power versus Frequency of Observation
Author-Name: Robert J. Shiller
Author-Person: psh69
Author-Name: Pierre Perron
Author-Person: ppe32
Note: ME
Number: 0045
Creation-Date: 1985-04
Order-URL: http://www.nber.org/papers/t0045
File-URL: http://www.nber.org/papers/t0045.pdf
File-Format: application/pdf
Publication-Status: published as Perron, Pierre and Robert J. Shiller. "Testing the Random Walk Hypothesis: Power Versus Frequency of Observation," Economics Letters, Vol. 18, 1985, pp. 381-386.
Abstract: Power functions of tests of the random walk hypothesis versus stationary first order autoregressive alternatives are tabulated for samples of fixed span but various frequencies of observation.
Handle: RePEc:nbr:nberte:0045
Template-Type: ReDIF-Paper 1.0
Title: Is There Chronic Excess Supply of Labor? Designing a Statistical Test
Author-Name: Richard E. Quandt
Author-Name: Harvey S. Rosen
Author-Person: pro55
Note: LS
Number: 0046
Creation-Date: 1985-04
Order-URL: http://www.nber.org/papers/t0046
File-URL: http://www.nber.org/papers/t0046.pdf
File-Format: application/pdf
Publication-Status: published as Quandt, Richard E. and Harvey S. Rosen, "Is There Chronic Excess Supply of Labor? Designing a Statistical Test," Economics Letters, Vol. 19, pp. 193-197, 1985.
Abstract: In this paper we present and implement a statistical test of the hypothesis that the labor market has chronic excess supply. The procedure is to estimate a disequilibrium labor market model, and construct a test statistic based on the unconditional probability that there is excess supply each period. We find that the data reject the hypothesis of chronic excess supply. Hence, one cannot assume that all observations lie on the demand curve.
Handle: RePEc:nbr:nberte:0046
Template-Type: ReDIF-Paper 1.0
Title: Technical Progress in U.S. Manufacturing Sectors, 1948-1973: An Application of Lie Groups
Author-Name: Ryuzo Sato
Author-Name: Thomas M. Mitchell
Number: 0047
Creation-Date: 1985-05
Order-URL: http://www.nber.org/papers/t0047
File-URL: http://www.nber.org/papers/t0047.pdf
File-Format: application/pdf
Abstract: The purpose of this paper is to apply the theory of Lie transformation groups, as developed by the first author, to derive a testable model of production and technical change. The econometric model is then applied to data derived by F. Gollop and D. Jorgenson for U.S. manufacturing industries for the years 1948-1973. This is the first empirical work in economics to incorporate the theory of Lie transformation groups, so the results are not only new but also interesting. Using Zellner's seemingly unrelated regression equations method of generalized least squares produces an estimate of a model for the 21-industry system with a high degree of explanatory power: the system's weighted R2 is 0.9675, and all coefficients are statistically significant at the 5% level (on the basis of t-tests). While the "form" of technical change in a given industry of the model is probably new, it is easily characterized within the Lie group structure, and the system estimate is statistically significant.
Handle: RePEc:nbr:nberte:0047
Template-Type: ReDIF-Paper 1.0
Title: Alternative Nonnested Specification Tests of Time Series Investment Models
Author-Name: Ben S. Bernanke
Author-Person: pbe55
Author-Name: Henning Bohn
Author-Person: pbo25
Author-Name: Peter C. Reiss
Note: EFG
Number: 0049
Creation-Date: 1985-06
Order-URL: http://www.nber.org/papers/t0049
File-URL: http://www.nber.org/papers/t0049.pdf
File-Format: application/pdf
Publication-Status: published as Bernanke, Bohn, and Reiss. "Alternative Nonnested Specification Tests of Time Series Investment Models," in Journal of Econometrics, Vol. 37, No. 2, March 1988, pp. 293-326.
Abstract: This paper develops and compares nonnested hypothesis tests for linear regression models with first-order serially correlated errors. It extends the nonnested testing procedures of Pesaran, Fisher and McAleer, and Davidson and MacKinnon, and compares their performance on four conventional models of aggregate investment demand using quarterly U.S. investment data from 1951:I to 1983:IV. The data and the nonnested hypothesis tests initially indicate that no model is correctly specified, and that the tests are occasionally intransitive in their assessments. Before rejecting these conventional models of investment demand, we investigate the small sample properties of these different nonnested test procedures through a series of Monte Carlo studies. These investigations demonstrate that when there is significant serial correlation, there are systematic finite sample biases in the nominal size and power of these test statistics. The direction of the bias is toward rejection of the null model, although it varies considerably by the type of test and estimation technique. After adjusting our critical values for this finite sample bias, we conclude that the accelerator model of equipment investment cannot be rejected in favor of any of the other alternatives.
Handle: RePEc:nbr:nberte:0049
Template-Type: ReDIF-Paper 1.0
Title: Do We Reject Too Often? Small Sample Properties of Tests of Rational Expectations Models
Author-Name: N. Gregory Mankiw
Author-Name: Matthew D. Shapiro
Author-Person: psh144
Note: EFG
Number: 0051
Creation-Date: 1985-10
Order-URL: http://www.nber.org/papers/t0051
File-URL: http://www.nber.org/papers/t0051.pdf
File-Format: application/pdf
Publication-Status: published as Mankiw, N. Gregory and Matthew D. Shapiro. "Do We Reject Too Often? Small Sample Properties of Tests of Rational Expectations Models," Economics Letters, Vol. 20, pp. 139-145, 1986.
Abstract: We examine the small sample properties of tests of rational expectations models. We show, using Monte Carlo experiments, that the asymptotic distribution of test statistics can be extremely misleading when the time series examined are highly autoregressive. In particular, a practitioner relying on the asymptotic distribution will reject true models too frequently. We also show that this problem is especially severe with detrended data. We present correct small sample critical values for our canonical problem.
Handle: RePEc:nbr:nberte:0051
Template-Type: ReDIF-Paper 1.0
Title: A Fiscal Theory of Hyperdeflations? Some Surprising Monetarist Arithmetic
Author-Name: Willem H. Buiter
Author-Person: pbu137
Note: ME
Number: 0052
Creation-Date: 1985-11
Order-URL: http://www.nber.org/papers/t0052
File-URL: http://www.nber.org/papers/t0052.pdf
File-Format: application/pdf
Publication-Status: published as Buiter, Willem H. "A Fiscal Theory of Hyperdeflations? Some Surprising Monetarist Arithmetic," Oxford Economic Papers, Vol. 39, No. 3. (1987)
Abstract: The note mines an unsuspected lode in the Sargent-Wallace "Unpleasant Monetarist Arithmetic" deposit. While that model is shown to be incapable of generating hyperinflations as a result of large monetized public sector deficits, it can generate hyperdeflations, or, perhaps more accurately, the first stages of an unsustainable process of hyperdeflation. The drawing of policy conclusions is left as an exercise for the reader.
Handle: RePEc:nbr:nberte:0052
Template-Type: ReDIF-Paper 1.0
Title: Microeconomic Approaches to the Theory of International Comparisons
Author-Name: W.E. Diewert
Author-Person: pdi117
Note: PR
Number: 0053
Creation-Date: 1985-12
Order-URL: http://www.nber.org/papers/t0053
File-URL: http://www.nber.org/papers/t0053.pdf
File-Format: application/pdf
Abstract: The paper considers alternative approaches to providing consistent multilateral indexes of real output, real input, real consumption or productivity across many regions, countries or industries at one point in time. The recommended approaches are based on aggregating up various bilateral indexes which in turn are based on the economic theory of index numbers, either in the producer or consumer theory context. In order to distinguish between various competing multilateral approaches, an axiomatic or test approach to multilateral comparisons is developed. This test approach indicates that the Geary-Khamis and Van Yzeren approaches to multilateral output comparisons are dominated by the (new) own share and the Elteto-Koves-Szulc methods.
Handle: RePEc:nbr:nberte:0053