Working Papers

 

Archives: WP version of published papers

 

  • "Posterior Means and Precisions of the Coefficients in Linear Models with Highly Collinear Regressors", by M. Hashem Pesaran and Ron P. Smith. November 2017

    Abstract: When there is exact collinearity between regressors, their individual coefficients are not identified, but given an informative prior their Bayesian posterior means are well defined. The case of high but not exact collinearity is more complicated but similar results follow. Just as exact collinearity causes non-identification of the parameters, high collinearity can be viewed as weak identification of the parameters, which we represent, in line with the weak instrument literature, by the correlation matrix being of full rank for a finite sample size T, but converging to a rank deficient matrix as T goes to infinity. This paper examines the asymptotic behaviour of the posterior mean and precision of the parameters of a linear regression model for both the cases of exactly and highly collinear regressors. We show that in both cases the posterior mean remains sensitive to the choice of prior means even when the sample size is large, and that the precision rises at a slower rate than the sample size. In the highly collinear case, the posterior means converge to normally distributed random variables whose mean and variance depend on the priors for coefficients and precision. The distribution degenerates to fixed points for either exact collinearity or strong identification. The analysis also suggests a diagnostic statistic for the highly collinear case, which is illustrated with an empirical example.
    JEL Classifications: C11, C18
    Key Words: Bayesian identification, multicollinear regressions, weakly identified regression coefficients, highly collinear regressors.
    Full Text: http://www.econ.cam.ac.uk/emeritus/mhp1/wp17/PS_high_collinearity_7_November_2017.pdf
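
    Code sketch: in the normal linear model with known error precision, the posterior mean discussed above has the closed form (H0 + tau X'X)^{-1}(H0 mu0 + tau X'y). A minimal Python illustration of its sensitivity to the prior mean under near-collinearity (the two-regressor design and all numbers are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 10_000
x1 = rng.standard_normal(T)
x2 = x1 + 0.01 * rng.standard_normal(T)   # highly collinear with x1
X = np.column_stack([x1, x2])
y = X @ np.array([1.0, 1.0]) + rng.standard_normal(T)

tau = 1.0                                 # error precision, assumed known
H0 = np.eye(2)                            # prior precision of the coefficients
for mu0 in (np.zeros(2), np.array([2.0, 0.0])):
    H_post = H0 + tau * (X.T @ X)         # posterior precision
    b_post = np.linalg.solve(H_post, H0 @ mu0 + tau * (X.T @ y))
    print(mu0, "->", b_post.round(3), "sum:", b_post.sum().round(3))
# The individual posterior means move with the prior mean even at T = 10,000,
# while the well-identified combination b1 + b2 stays close to 2.
```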
     

  • "A Bias-Corrected Method of Moments Approach to Estimation of Dynamic Short-T Panels", by Alexander Chudik and M. Hashem Pesaran, CESifo WP no. 6688. October 2017

    Abstract: This paper contributes to the GMM literature by introducing the idea of self-instrumenting target variables instead of searching for instruments that are uncorrelated with the errors, in cases where the correlation between the target variables and the errors can be derived. The advantage of the proposed approach lies in the fact that, by construction, the instruments have maximum correlation with the target variables, and the weak instrument problem is thus avoided. The proposed approach can be applied to the estimation of a variety of models, such as spatial and dynamic panel data models. In this paper we focus on the latter and consider both univariate and multivariate panel data models with short time dimension. Simple Bias-corrected Method of Moments (BMM) estimators are proposed and shown to be consistent and asymptotically normal, under very general conditions on the initialization of the processes, individual-specific effects, and error variances allowing for heteroscedasticity over time as well as cross-sectionally. Monte Carlo evidence documents BMM's good small sample performance across different experimental designs and sample sizes, including in the case of experiments where the system GMM estimators are inconsistent. We also find that the proposed estimator does not suffer from size distortions and has satisfactory power performance as compared with other estimators.
    JEL Classifications: C12, C13, C23.
    Key Words: Short-T Dynamic Panels, GMM, Weak Instrument Problem, Quadratic Moment Conditions, Panel VARs, Monte Carlo Evidence.
    Full Text: http://www.econ.cam.ac.uk/emeritus/mhp1/wp17/CP_BMM_2017_Sept20wp.pdf
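
    Code sketch: a heuristic rendering of the self-instrumenting idea for the simplest case, a stationary panel AR(1) with homoskedastic errors (the paper's BMM estimators cover far more general settings). In first differences, E[dy_{i,t-1} du_it] = -sigma^2 and E[du_it^2] = 2 sigma^2, and eliminating sigma^2 leaves a quadratic in rho whose population roots are rho and 1:

```python
import numpy as np

rng = np.random.default_rng(1)
N, T, rho0 = 2000, 6, 0.5
alpha = rng.standard_normal(N)            # fixed effects
y = np.zeros((N, T + 51))
for t in range(1, T + 51):                # simulate with a long burn-in
    y[:, t] = alpha * (1 - rho0) + rho0 * y[:, t - 1] + rng.standard_normal(N)
dy = np.diff(y[:, -T:], axis=1)           # first differences, N x (T-1)

a = np.mean(dy[:, 1:] * dy[:, :-1])       # sample analogue of E[dy_t dy_{t-1}]
b = np.mean(dy[:, :-1] ** 2)              # E[dy_{t-1}^2]
c = np.mean(dy[:, 1:] ** 2)               # E[dy_t^2]
# The bias-corrected moments reduce to b*rho^2 - 2(a+b)*rho + (c+2a) = 0,
# with roots rho and 1; the smaller root is the consistent estimate.
rho_bmm = ((a + b) - np.sqrt((a + b) ** 2 - b * (c + 2 * a))) / b
print(rho_bmm)                            # close to rho0 = 0.5
```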
     

  • "Testing for Alpha in Linear Factor Pricing Models with a Large Number of Securities", by M. Hashem Pesaran and Takashi Yamagata, March 2017

    Abstract: This paper proposes a novel test of zero pricing errors for the linear factor pricing model when the number of securities, N, can be large relative to the time dimension, T, of the return series. The test is based on Student t tests of individual securities and has a number of advantages over the existing standardised Wald type tests. It allows for non-Gaussianity and general forms of weakly cross correlated errors. It does not require estimation of an invertible error covariance matrix, it is much faster to implement, and is valid even if N is much larger than T. Monte Carlo evidence shows that the proposed test performs remarkably well even when T = 60 and N = 5,000. The test is applied to monthly returns on securities in the S&P 500 at the end of each month in real time, using rolling windows of size 60. Statistically significant evidence against the Sharpe-Lintner CAPM and the Fama-French three factor models is found mainly during the recent financial crisis. We also find a significant negative correlation between twelve-month moving averages of the test's p-values and the excess returns of long/short equity strategies (relative to the return on the S&P 500) over the period November 1994 to June 2015, suggesting that abnormal profits are earned during episodes of market inefficiencies.
    JEL Classifications: C12, C15, C23, G11, G12
    Key Words: CAPM, Testing for alpha, Weak and spatial error cross-sectional dependence, S&P 500 securities, Long/short equity strategy.
    Full Text: http://www.econ.cam.ac.uk/people-files/emeritus/mhp1/wp17/PY_LFPM_11_March_2017_Paper.pdf
    Supplement: http://www.econ.cam.ac.uk/people-files/emeritus/mhp1/wp17/PY_LFPM_11_March_2017_Supplement.pdf
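
    Code sketch: the core of a test of this kind is a standardised sum of squared alpha t-ratios across securities. The version below (function name and simulated data are illustrative) omits the correction for weak error cross-correlation that the paper develops, so it is valid only under cross-sectionally independent errors:

```python
import numpy as np
from scipy import stats

def alpha_test(R, F):
    """R: T x N excess returns, F: T x m factor returns."""
    T, N = R.shape
    X = np.column_stack([np.ones(T), F])  # intercept (alpha) plus factors
    v = T - F.shape[1] - 1                # residual degrees of freedom
    XtX_inv = np.linalg.inv(X.T @ X)
    B = XtX_inv @ X.T @ R                 # OLS coefficients, security by security
    U = R - X @ B
    s2 = (U ** 2).sum(axis=0) / v
    t2 = B[0] ** 2 / (s2 * XtX_inv[0, 0]) # squared t-ratios of the alphas
    mean_t2 = v / (v - 2)                 # moments of t_v squared, i.e. F(1, v)
    var_t2 = 2 * v**2 * (v - 1) / ((v - 2) ** 2 * (v - 4))
    J = (t2 - mean_t2).sum() / np.sqrt(N * var_t2)
    return J, 2 * stats.norm.sf(abs(J))   # ~ N(0,1) under zero alphas

# Example with T = 60 and N = 500, one factor, all alphas zero.
rng = np.random.default_rng(2)
T, N = 60, 500
F = rng.standard_normal((T, 1))
R = F @ rng.standard_normal((1, N)) + rng.standard_normal((T, N))
print(alpha_test(R, F))
```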
     

  • "Double-question Survey Measures for the Analysis of Financial Bubbles and Crashes", by M. Hashem Pesarann and Ida Johnsson, December 2016, revised June 2017

    Abstract: This paper proposes a new double-question survey whereby an individual is presented with two sets of questions; one on beliefs about current asset values and another on price expectations. A theoretical asset pricing model with heterogeneous agents is advanced, and the existence of a negative relationship between price expectations and asset valuations is established and then tested using survey results on equity, gold and house prices. Leading indicators of bubbles and crashes are proposed and their potential value is illustrated in the context of a dynamic panel regression of realized house price changes across a number of key MSAs in the US.
    JEL Classifications: C83, D84, G12, G14.
    Key Words: Price expectations, bubbles and crashes, house prices, belief valuations.
    Full Text: http://www.econ.cam.ac.uk/emeritus/mhp1/wp17/PJ-Double-Question-Survey-main-Paper-June-2017.pdf
    Supplement: http://www.econ.cam.ac.uk/emeritus/mhp1/wp17/PJ-Double-Question-Survey-Supplement-June-2017.pdf
    Data: http://www.econ.cam.ac.uk/people-files/emeritus/mhp1/wp17/Double_Q_survey_data_Aug_2012-Jan_2013.zip
    Replication: http://www.econ.cam.ac.uk/people-files/emeritus/mhp1/wp17/Double_Q_Survey_Replication.zip
     

  • "Econometric Analysis of Production Networks with Dominant Units", by M. Hashem Pesarann and Cynthia Fan Yang, October 2016, revised August 2017

    Abstract: This paper considers production and price networks with unobserved common factors, and derives an exact expression for the rate at which aggregate fluctuations vary with the dimension of the network. It introduces the notions of strongly and weakly dominant and non-dominant units, and shows that at most a finite number of units in the network can be strongly dominant. The pervasiveness of a network is measured by the degree of dominance of the most pervasive unit in the network, and is shown to be equivalent to the inverse of the shape parameter of the power law fitted to the network outdegrees. New cross-section and panel extremum estimators for the degree of dominance of individual units in the network are proposed and their asymptotic properties investigated. Using Monte Carlo techniques, the proposed estimator is shown to have satisfactory small sample properties. An empirical application to US input-output tables spanning the period 1972 to 2007 is provided, which suggests that no sector in the US economy is strongly dominant. The most dominant sector turns out to be wholesale trade, with an estimated degree of dominance ranging from 0.72 to 0.82 over the years 1972-2007.
    JEL Classifications: C12, C13, C23, C67, E32
    Key Words: Aggregate fluctuations, strongly and weakly dominant units, spatial models, outdegrees, degree of pervasiveness, power law, input-output tables, US economy.
    Full Text: http://www.econ.cam.ac.uk/emeritus/mhp1/wp17/Main-paper-PY-Production-network-4-August-2017.pdf
    Supplement: http://www.econ.cam.ac.uk/emeritus/mhp1/wp17/Online-supplement-PY-Production-network-4-August-2017.pdf
    Readme: http://www.econ.cam.ac.uk/emeritus/mhp1/wp17/Readme-PY-Production-network-4-August-2017.pdf
    Data: http://www.econ.cam.ac.uk/emeritus/mhp1/wp17/Data-PY-Production-network-4-August-2017.zip
    Codes: http://www.econ.cam.ac.uk/emeritus/mhp1/wp17/Codes-PY-Production-network-4-August-2017.zip
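
    Code sketch: the abstract links the degree of dominance to the inverse of the shape parameter of a power law fitted to the network outdegrees. A simple stand-in for the paper's extremum estimators is the log-rank regression of Gabaix and Ibragimov (2011) on the upper tail (the tail fraction and all names are illustrative choices):

```python
import numpy as np

def degree_of_dominance(outdegrees, tail_frac=0.10):
    d = np.sort(np.asarray(outdegrees, dtype=float))[::-1]
    k = max(int(tail_frac * d.size), 10)  # number of tail observations used
    ranks = np.arange(1, k + 1) - 0.5     # Gabaix-Ibragimov rank shift
    # ln(rank - 1/2) = const - beta * ln(outdegree) in the power-law tail
    beta = -np.polyfit(np.log(d[:k]), np.log(ranks), 1)[0]
    return 1.0 / beta                     # degree of dominance

# Synthetic Pareto outdegrees with shape 1.3, so delta should be near 0.77.
rng = np.random.default_rng(3)
print(degree_of_dominance(rng.pareto(1.3, size=5000) + 1.0))
```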
     

  • "Half-Panel Jackknife Fixed Effects Estimation of Panels with Weakly Exogenous Regressors, by Alexander Chudik, M. Hashem Pesarann and Jui-Chung Yang, SSRN Working Paper No. 281, September 2016

    Abstract: This paper considers estimation and inference in fixed effects (FE) linear panel regression models with lagged dependent variables and/or other weakly exogenous (or predetermined) regressors when N (the cross section dimension) is large relative to T (the time series dimension). The paper first derives a general formula for the bias of the FE estimator which is a generalization of the Nickell type bias derived in the literature for pure dynamic panel data models. It shows that in the presence of weakly exogenous regressors, inference based on the FE estimator will result in size distortions unless N/T is sufficiently small. To deal with the bias and size distortion of the FE estimator when N is large relative to T, the use of the half-panel jackknife FE estimator is proposed and its asymptotic distribution is derived. It is shown that the bias of the proposed estimator is of order T^-2, and for valid inference it is only required that N/T^3 → 0, as N, T → ∞ jointly. Extensions to panel data models with time effects (TE), for balanced as well as unbalanced panels, are also provided. The theoretical results are illustrated with Monte Carlo evidence. It is shown that the FE estimator can suffer from large size distortions when N > T, with the proposed estimator showing little size distortion. The use of the half-panel jackknife FE-TE estimator is illustrated with two empirical applications from the literature.
    JEL Classifications: C32, E17, E32, F44, F47, O51, Q43.
    Key Words: Panel Data Models, Weakly Exogenous Regressors, Lagged Dependent Variable, Fixed Effects, Time Effects, Unbalanced Panels, Half-Panel Jackknife, Bias Correction
    Full Text: http://www.econ.cam.ac.uk/people-files/emeritus/mhp1/wp16/CPY_jackknifeFE_13-Sep-2016.pdf
    Supplement: http://www.econ.cam.ac.uk/people-files/emeritus/mhp1/wp16/CPY_jackknifeFE_supplement_12-Sep-2016.pdf
    Codes and Data: http://www.econ.cam.ac.uk/people-files/emeritus/mhp1/wp16/Matlab-Codes-and-Data-for-Chudik-Pesaran-and-Yang-(2016).rar
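
    Code sketch: the half-panel jackknife itself is a one-line correction, twice the full-sample FE estimate minus the average of the FE estimates from the two half panels, which removes the O(1/T) Nickell-type bias. A minimal version for a balanced panel with even T (no time effects):

```python
import numpy as np

def fe_est(y, X):
    """Within (FE) estimator; y is N x T, X is N x T x k."""
    yd = y - y.mean(axis=1, keepdims=True)
    Xd = X - X.mean(axis=1, keepdims=True)
    Xf = Xd.reshape(-1, X.shape[2])
    return np.linalg.solve(Xf.T @ Xf, Xf.T @ yd.reshape(-1))

def half_panel_jackknife(y, X):
    h = y.shape[1] // 2
    return 2.0 * fe_est(y, X) - 0.5 * (fe_est(y[:, :h], X[:, :h])
                                       + fe_est(y[:, h:], X[:, h:]))

# Example: AR(1) panel with fixed effects, N large relative to T.
rng = np.random.default_rng(4)
N, T, rho = 1000, 20, 0.6
alpha = rng.standard_normal(N)
y = np.zeros((N, T + 1))
for t in range(1, T + 1):
    y[:, t] = alpha * (1 - rho) + rho * y[:, t - 1] + rng.standard_normal(N)
X = y[:, :-1][:, :, None]                 # lagged y as the only regressor
print(fe_est(y[:, 1:], X))                # biased downwards (Nickell bias)
print(half_panel_jackknife(y[:, 1:], X))  # much closer to rho = 0.6
```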
     

  • "A One-Covariate at a Time, Multiple Testing Approach to Variable Selection in High-Dimensional Linear Regression Models", by Alexander Chudik, George Kapetanios and M. Hashem Pesaran, February 2016, revised November 2016

    Abstract: Model specification and selection are recurring themes in econometric analysis. Both topics become considerably more complicated in the case of large-dimensional data sets where the set of specification possibilities can become quite large. In the context of linear regression models, penalised regression has become the de facto benchmark technique used to trade off parsimony and fit when the number of possible covariates is large, often much larger than the number of available observations. However, issues such as the choice of a penalty function and tuning parameters associated with the use of penalised regressions remain contentious. In this paper, we provide an alternative approach that considers the statistical significance of the individual covariates one at a time, whilst taking full account of the multiple testing nature of the inferential problem involved. We refer to the proposed method as the One Covariate at a Time Multiple Testing (OCMT) procedure. OCMT provides an alternative to penalised regression methods: it is based on statistical inference and is therefore easier to interpret and to relate to classical statistical analysis; it allows working under more general assumptions; it is faster; and it performs well in small samples for almost all of the different sets of experiments considered in this paper. We provide extensive theoretical and Monte Carlo results in support of adding the proposed OCMT model selection procedure to the toolbox of applied researchers. The usefulness of OCMT is also illustrated by an empirical application to forecasting U.S. output growth and inflation.
    JEL Classifications: C52, C55
    Key Words: One covariate at a time, multiple testing, model selection, high dimensionality, penalised regressions, boosting, Monte Carlo experiments.
    Full Text: http://www.econ.cam.ac.uk/people-files/emeritus/mhp1/wp16/ChudikKapetaniosPesaran_14Nov2016.pdf
    Supplement 1: http://www.econ.cam.ac.uk/people-files/emeritus/mhp1/wp16/Supplement_Theory_ChudikKapetaniosPesaran_10Nov2016.pdf
    Supplement 2: http://www.econ.cam.ac.uk/people-files/emeritus/mhp1/wp16/Supplement_MC_ChudikKapetaniosPesaran_10Nov2016.pdf
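
    Code sketch: the first stage of an OCMT-style selection, with each covariate tested one at a time against a critical value that grows with the number of candidates n, so the multiplicity of tests is accounted for (the exponent delta and all names are illustrative; the paper's procedure adds further stages on the variables not selected initially):

```python
import numpy as np
from scipy import stats

def ocmt_first_stage(y, X, p=0.05, delta=1.0):
    T, n = X.shape
    c = stats.norm.ppf(1 - p / (2 * n ** delta))  # multiple-testing critical value
    selected = []
    for j in range(n):                            # one covariate at a time
        Z = np.column_stack([np.ones(T), X[:, j]])
        b = np.linalg.lstsq(Z, y, rcond=None)[0]
        u = y - Z @ b
        s2 = (u @ u) / (T - 2)
        se = np.sqrt(s2 * np.linalg.inv(Z.T @ Z)[1, 1])
        if abs(b[1] / se) > c:
            selected.append(j)
    return selected

# Example: 3 signal variables among 200 candidates, T = 100 observations.
rng = np.random.default_rng(5)
T, n = 100, 200
X = rng.standard_normal((T, n))
y = X[:, :3] @ np.array([1.0, -1.0, 0.5]) + rng.standard_normal(T)
print(ocmt_first_stage(y, X))                     # typically [0, 1, 2]
```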

     

  • "Quasi Maximum Likelihood Estimation of Spatial Models with Heterogeneous Coefficients", by Michele Aquaro, Natalia Bailey and M. Hashem Pesaran, June 2015

    Abstract: This paper considers spatial autoregressive panel data models and extends their analysis to the case where the spatial coefficients differ across the spatial units. It derives conditions under which the spatial coefficients are identified and develops a quasi maximum likelihood (QML) estimation procedure. Under certain regularity conditions, it is shown that the QML estimators of individual spatial coefficients are consistent and asymptotically normally distributed when both the time and cross section dimensions of the panel are large. It derives the asymptotic covariance matrix of the QML estimators allowing for the possibility of non-Gaussian error processes. Small sample properties of the proposed estimators are investigated by Monte Carlo simulations for Gaussian and non-Gaussian errors, and with spatial weight matrices of differing degrees of sparseness. The simulation results are in line with the paper's key theoretical findings and show that the QML estimators have satisfactory small sample properties for panels with moderate time dimensions, irrespective of the number of cross section units in the panel, under certain sparsity conditions on the spatial weight matrix.
    JEL Classifications: C21, C23
    Key Words: Spatial panel data models, heterogeneous spatial lag coefficients, identification, quasi maximum likelihood (QML) estimators, non-Gaussian errors.
    Full Text: http://www.econ.cam.ac.uk/people-files/emeritus/mhp1/wp15/ABP_June_19_2015.pdf
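
    Code sketch: for the pure heterogeneous spatial autoregression y_t = diag(psi) W y_t + eps_t (the paper also allows exogenous regressors), the unit-specific error variances can be concentrated out and the QML estimates of the N spatial coefficients obtained by maximising the concentrated log-likelihood; a minimal, illustrative implementation:

```python
import numpy as np
from scipy.optimize import minimize

def hsar_qml(Y, W):
    """Y: N x T observations, W: N x N spatial weight matrix."""
    N, T = Y.shape

    def neg_loglik(psi):
        S = np.eye(N) - psi[:, None] * W          # S(psi) = I - diag(psi) W
        sign, logdet = np.linalg.slogdet(S)
        if sign <= 0:
            return np.inf                         # outside the feasible region
        s2 = ((S @ Y) ** 2).mean(axis=1)          # concentrated error variances
        return -(T * logdet - 0.5 * T * np.log(s2).sum())

    res = minimize(neg_loglik, np.zeros(N), method="L-BFGS-B",
                   bounds=[(-0.95, 0.95)] * N)
    return res.x

# Example: circular two-neighbour weight matrix, heterogeneous coefficients.
rng = np.random.default_rng(6)
N, T = 25, 200
W = np.zeros((N, N))
for i in range(N):
    W[i, (i - 1) % N] = W[i, (i + 1) % N] = 0.5
psi0 = rng.uniform(0.2, 0.6, N)
Y = np.linalg.inv(np.eye(N) - psi0[:, None] * W) @ rng.standard_normal((N, T))
print(np.abs(hsar_qml(Y, W) - psi0).max())        # small estimation error
```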

     

  • "A Multiple Testing Approach to the Regularisation of Large Sample Correlation Matrices", by Natalia Bailey, M. Hashem Pesaran and L. Vanessa Smith, CAFE Research Paper No. 14.05, May 2014, revised September 2016

    Abstract: This paper proposes a regularisation method for the estimation of large covariance matrices that uses insights from the multiple testing (MT) literature. The approach tests the statistical significance of individual pair-wise correlations and sets to zero those elements that are not statistically significant, taking account of the multiple testing nature of the problem. The effective p-values of the tests are set as a decreasing function of N (the cross section dimension), the rate of which is governed by the maximum degree of dependence of the underlying observations when their pair-wise correlation is zero, and the relative expansion rates of N and T (the time dimension). In this respect, the method specifies the appropriate thresholding parameter to be used under Gaussian and non-Gaussian settings. The MT estimator of the sample correlation matrix is shown to be consistent in the spectral and Frobenius norms, and in terms of support recovery, so long as the true covariance matrix is sparse. The performance of the proposed MT estimator is compared to a number of other estimators in the literature using Monte Carlo experiments. It is shown that the MT estimator performs well and tends to outperform the other estimators, particularly when N is larger than T.
    JEL Classifications: C13, C58.
    Key Words: High-dimensional data, Multiple testing, Non-Gaussian observations, Sparsity, Thresholding, Shrinkage.
    Full Text: http://www.econ.cam.ac.uk/people-files/emeritus/mhp1/wp16/BPS_14_September_2016.pdf
    Supplementary Material: http://www.econ.cam.ac.uk/people-files/emeritus/mhp1/wp16/BPS_14_September_2016_Supplement.pdf
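
    Code sketch: the heart of the MT estimator is elementwise thresholding of the sample correlation matrix at a critical value whose effective p-value shrinks with the number of tests; the rate p/N^delta below is an illustrative stand-in for the function of N, T and the dependence degree that the paper derives:

```python
import numpy as np
from scipy import stats

def mt_correlation(X, p=0.05, delta=2.0):
    """X: T x N data matrix; returns the regularised correlation matrix."""
    T, N = X.shape
    R = np.corrcoef(X, rowvar=False)
    c = stats.norm.ppf(1 - p / (2 * N ** delta))  # critical value for sqrt(T)*rho
    R_mt = np.where(np.sqrt(T) * np.abs(R) >= c, R, 0.0)
    np.fill_diagonal(R_mt, 1.0)
    return R_mt

# Example: sparse truth in which only neighbouring series are correlated (0.5).
rng = np.random.default_rng(7)
T, N = 200, 100
Z = rng.standard_normal((T, N + 1))
X = (Z[:, :-1] + Z[:, 1:]) / np.sqrt(2)
R_mt = mt_correlation(X)
print((np.abs(R_mt) > 0).sum() - N)               # ~198 true off-diagonal links survive
```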

     

  • "Transformed Maximum Likelihood Estimation of Short Dynamic Panel Data Models with Interactive Effects", by Kazuhiko Hayakawa, M. Hashem Pesaran and L. Vanessa Smith, CAFE Research Paper No. 14.06, May 2014

    Abstract: This paper proposes the transformed maximum likelihood estimator for short dynamic panel data models with interactive fixed effects, and provides an extension of Hsiao et al. (2002) that allows for a multifactor error structure. This is an important extension since it retains the advantages of the transformed likelihood approach, whilst at the same time allowing for observed factors (fixed or random). Small sample results obtained from Monte Carlo simulations show that the transformed ML estimator performs well in finite samples and outperforms the GMM estimators proposed in the literature in almost all cases considered.
    JEL Classifications: C12, C13, C23.
    Key Words: short T dynamic panels, transformed maximum likelihood, multi-factor error structure, interactive fixed effects.
    Full Text: http://www.econ.cam.ac.uk/people-files/emeritus/mhp1/wp14/HPS_May14.pdf

     

  • "Uncertainty and Economic Activity: A Global Perspective", by Ambrogio Cesa-Bianchi, M. Hashem Pesaran and Alessandro Rebucci, March 2014

    Abstract: The 2007-2008 global financial crisis and the subsequent anemic recovery have rekindled academic interest in quantifying the impact of uncertainty on macroeconomic dynamics, based on the premise that uncertainty causes economic activity to slow down and contract. In this paper, we study the interrelation between financial market volatility and economic activity assuming that both variables are driven by the same set of unobserved common factors. We further assume that these common factors affect volatility and economic activity with a time lag of at least a quarter. Under these assumptions, we show analytically that volatility is forward looking and that the output equation of a typical VAR estimated in the literature is mis-specified, as least squares estimates of this equation are inconsistent. Empirically, we document a statistically significant and economically sizable impact of future output growth on current volatility, and no effect of volatility shocks on business cycles, over and above those driven by the common factors. We interpret this evidence as suggesting that volatility is a symptom rather than a cause of economic instability.
    JEL Classifications: E44, F44, G15.
    Key Words: Uncertainty, Realized volatility, GVAR, Great Recession, Identification, Business Cycle, Common Factors.
    Full Text: http://www.econ.cam.ac.uk/people-files/emeritus/mhp1/wp14/Volatility_21March2014.pdf

     

  • "Business Cycle Effects of Credit Shocks in a DSGE Model with Firm Defaults", by M. Hashem Pesaran and TengTeng Xu, CWPE Working paper. No. 1159, CESifo Working Paper No. 3609, IZA Discussion Paper No. 6027, October 2011, revised April 2016

    Abstract: This paper proposes a new theoretical framework for the analysis of the relationship between credit shocks, firm defaults and volatility. The key feature of the modelling approach is to allow for the possibility of default in equilibrium. The model is then used to study the impact of credit shocks on business cycle dynamics. It is assumed that firms are identical ex ante but differ ex post due to different realizations of firm-specific technology shocks, possibly leading to default by some firms. The implications of firm defaults for the balance sheets of households and banks and their subsequent impacts on business fluctuations are investigated within a dynamic stochastic general equilibrium framework. Results from a calibrated version of the model suggest that, in the steady state, a firm's default probability rises with its leverage ratio and the level of uncertainty in the economy. A positive credit shock, defined as a rise in the loan-to-deposit ratio, increases output, consumption, hours and productivity, and reduces the spread between loan and deposit rates. Interestingly, the effects of the credit shock tend to be highly persistent, even without price rigidities and habit persistence in consumption behavior.
    JEL Classifications: E32, E44, E50, G21.
    Key Words: Firm Defaults; Credit Shocks; Financial Intermediation; Interest Rate Spread; Uncertainty.
    Full Text: http://www.econ.cam.ac.uk/people-files/emeritus/mhp1/wp16/MacroCredit_PesaranXu_April-2016.pdf
    Supplement: http://www.econ.cam.ac.uk/people-files/emeritus/mhp1/wp11/MacroCredit_5Oct2011_Supplement.pdf

     

  • "Optimality and Diversifiability of Mean Variance and Arbitrage Pricing Portfolios", by M. Hashem Pesaran, and Paolo Zaffaroni, CESifo Working Papers No. 2857, October, 2009

    Abstract: This paper investigates the limit properties of mean-variance (mv) and arbitrage pricing (ap) trading strategies using a general dynamic factor model, as the number of assets diverges to infinity. It extends the results obtained in the literature for the exact pricing case to two other cases: asymptotic no-arbitrage and the unconstrained pricing scenarios. The paper characterizes the asymptotic behaviour of the portfolio weights and establishes that in the non-exact pricing cases the ap and mv portfolio weights are asymptotically equivalent and, moreover, functionally independent of the factors' conditional moments. By implication, the paper sheds light on a number of issues of interest such as the prevalence of short-selling, the number of dominant factors and the granularity property of the portfolio weights.
    JEL Classifications: C32, C52, C53, G11.
    Key Words: Large Portfolios, Factor Models, Mean-Variance Portfolio, Arbitrage Pricing, Market (Beta) Neutrality, Well Diversification.
    Full Text: http://www.econ.cam.ac.uk/people-files/emeritus/mhp1/wp09/pz_port_17_October_09.pdf
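
    Code sketch: a small numerical illustration of the granularity property established in the paper, using mv weights proportional to Sigma^{-1} mu under an exact one-factor model (all parameter values are illustrative): the largest absolute weight shrinks as the number of assets N grows, so no single asset dominates the portfolio.

```python
import numpy as np

rng = np.random.default_rng(8)
for N in (50, 200, 800):
    beta = 0.5 + rng.random(N)            # factor loadings
    mu = 0.02 * beta                      # expected excess returns (exact pricing)
    Sigma = 0.04 * np.outer(beta, beta) + 0.10 * np.eye(N)
    w = np.linalg.solve(Sigma, mu)        # mean-variance direction Sigma^{-1} mu
    w /= w.sum()                          # normalise weights to sum to one
    print(N, np.abs(w).max().round(4))    # max weight falls roughly like 1/N
```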