
Working Papers

 

Archives: WP version of published papers

 

  • "Variable Selection and Forecasting in High Dimensional Linear Regressions with Structural Breaks", by Alexander Chudik, M. Hashem Pesaran and Mahrad Sharifvaghefi, July 2020.

    Abstract: This paper is concerned with the problem of variable selection and forecasting in the presence of parameter instability. A number of approaches have been proposed for forecasting in the presence of breaks, including the use of rolling windows or exponential down-weighting. However, these studies start with a given model specification and do not consider the problem of variable selection. It is clear that, in the absence of breaks, researchers should weigh the observations equally at both the variable selection and forecasting stages. In this study, we investigate whether or not we should use weighted observations at the variable selection stage in the presence of structural breaks, particularly when the number of potential covariates is large. Amongst the extant variable selection approaches we focus on the recently developed One Covariate at a time Multiple Testing (OCMT) method, which allows a natural distinction between the selection and forecasting stages, and provide theoretical justification for using the full (not down-weighted) sample in the selection stage of OCMT and down-weighting of observations only at the forecasting stage (if needed). The benefits of the proposed method are illustrated by empirical applications to forecasting output growth and stock market returns.
    JEL Classifications: C22, C52, C53, C55
    Key Words: Time-varying parameters, structural breaks, high-dimensionality, multiple testing, variable selection, one covariate at a time multiple testing (OCMT), forecasting
    Full Text: http://www.econ.cam.ac.uk/people-files/emeritus/mhp1/wp20/CPS_OCMT_Break_Forecatsing_07_23_2020.pdf
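    The split between an equally weighted selection stage and a down-weighted forecasting stage can be sketched numerically. The hard-thresholding rule below is only a simple stand-in for OCMT, and all data, the discount factor `lam`, and the critical value are illustrative assumptions, not the paper's specification:

```python
# Sketch: select variables on the FULL sample, down-weight only when forecasting.
import numpy as np

rng = np.random.default_rng(0)
T, k = 200, 10
X = rng.standard_normal((T, k))
y = 1.5 * X[:, 0] - 1.0 * X[:, 1] + rng.standard_normal(T)  # only x0, x1 matter

# Stage 1: variable selection on the full, equally weighted sample.
# Marginal t-tests, one covariate at a time (a stand-in for OCMT's
# multiple-testing rule, not the actual OCMT critical values).
selected = []
for j in range(k):
    b = X[:, j] @ y / (X[:, j] @ X[:, j])
    resid = y - b * X[:, j]
    se = np.sqrt(resid @ resid / (T - 1) / (X[:, j] @ X[:, j]))
    if abs(b / se) > 2.58:              # illustrative critical value
        selected.append(j)

# Stage 2: forecasting regression with exponential down-weighting.
lam = 0.98                               # discount factor (assumption)
w = lam ** np.arange(T - 1, -1, -1)      # most recent observation gets weight 1
Xs = X[:, selected]
W = np.diag(w)
beta = np.linalg.solve(Xs.T @ W @ Xs, Xs.T @ W @ y)
forecast = Xs[-1] @ beta                 # illustrative fitted value
```

    The point of the two stages is that down-weighting at the selection stage would throw away information useful for detecting relevant covariates, while down-weighting at the estimation stage guards against breaks.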

     

  • "Voluntary and Mandatory Social Distancing: Evidence on COVID-19 Exposure Rates from Chinese Provinces and Selected Countries", by Alexander Chudik, M. Hashem Pesaran and Alessandro Rebucci, CESifo Working Paper No. tbc, April 2020.

    Abstract: This paper considers a modification of the standard Susceptible-Infected-Recovered (SIR) model of epidemics that allows for different degrees of compulsory as well as voluntary social distancing. It is shown that the fraction of the population that self-isolates varies with the perceived probability of contracting the disease. Implications of social distancing for both the epidemic and recession curves are investigated, and their trade-off is simulated under a number of different social distancing and economic participation scenarios. We show that mandating social distancing is very effective at flattening the epidemic curve, but is costly in terms of employment loss. However, if targeted towards individuals most likely to spread the infection, the employment loss can be somewhat reduced. We also show that voluntary self-isolation driven by individuals' perceived risk of becoming infected kicks in only towards the peak of the epidemic and has little or no impact on flattening the aggregate epidemic curve. Using available statistics and correcting for measurement errors, we estimate the rate of exposure to COVID-19 for 21 Chinese provinces and a selected number of countries. The exposure rates are generally small, but vary considerably between Hubei and other Chinese provinces as well as across countries. Strikingly, the exposure rate in Hubei province is around 40 times larger than the rates for other Chinese provinces, with the exposure rates for some European countries being 3-5 times larger than that of Hubei (the epicenter of the epidemic). The paper also provides country-specific estimates of the recovery rate, showing it to be about 21 days (a week longer than the 14 days typically assumed), and relatively homogeneous across Chinese provinces and for a selected number of countries.
    JEL Classifications: D0, F6, C4, I120, E7
    Key Words: COVID-19, SIR model, epidemics, exposed population, measurement error, social distancing, self-isolation, employment loss.
    Full Text: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3576703
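    The epidemic-curve mechanics the paper builds on can be illustrated with a minimal discrete-time SIR simulation, where a single multiplier `d` scales transmission to mimic social distancing. The parameter values (and the single-multiplier treatment of distancing) are illustrative assumptions, not the paper's calibration:

```python
# Minimal SIR with a social-distancing scale factor d (illustrative only).
import numpy as np

def simulate_sir(beta=0.25, gamma=1 / 21, d=1.0, days=400, i0=1e-4):
    """d in (0, 1]: fraction of normal contacts kept (d < 1 means distancing)."""
    s, i, r = 1.0 - i0, i0, 0.0
    path = []
    for _ in range(days):
        new_inf = d * beta * s * i       # distancing scales new infections
        new_rec = gamma * i              # gamma = 1/21: 21-day recovery period
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        path.append(i)
    return np.array(path)

no_distancing = simulate_sir(d=1.0)
distancing = simulate_sir(d=0.6)
# Mandated distancing flattens the curve: the peak infected share is lower.
```

    The recovery parameter is set to 1/21 following the paper's estimate of roughly 21 days rather than the commonly assumed 14.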

     

  • "Measurement of Factor Strength: Theory and Practice", by Natalia Bailey, George Kapetanios and M. Hashem Pesaran, CESifo Working Paper No. tbc, February 2020.

    Abstract: This paper proposes an estimator of factor strength and establishes its consistency and asymptotic distribution. The proposed estimator is based on the number of statistically significant factor loadings, taking account of the multiple testing problem. We focus on the case where the factors are observed, which is of primary interest in many applications in macroeconomics and finance. We also consider using cross section averages as a proxy in the case of unobserved common factors. We face a fundamental factor identification issue when there is more than one unobserved common factor. We investigate the small sample properties of the proposed estimator by means of Monte Carlo experiments under a variety of scenarios. In general, we find that the estimator, and the associated inference, perform well. The test is conservative under the null hypothesis but nevertheless has excellent power properties, especially when the factor strength is sufficiently high. Application of the proposed estimation strategy to factor models of asset returns shows that out of 146 factors recently considered in the finance literature, only the market factor is truly strong, while all other factors are at best semi-strong, with their strength varying considerably over time. Similarly, we only find evidence of semi-strong factors in an updated version of the Stock and Watson (2012) macroeconomic dataset.
    JEL Classifications: C38, E20, G20
    Key Words: Factor models, factor strength, measures of pervasiveness, cross-sectional dependence, market factor.
    Full Text: http://www.econ.cam.ac.uk/people-files/emeritus/mhp1/wp20/Factor_strength_25_Feb_2020.pdf
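    One way to read the strength measure numerically: if D_n out of n units have significant loadings on a factor, so that D_n ≈ n^alpha, then alpha can be recovered as ln(D_n)/ln(n). The sketch below uses a plain |t| > c cut-off; the paper's estimator additionally adjusts the critical value for multiple testing, and the data-generating process here is an illustrative assumption:

```python
# Rough sketch: estimate factor strength by counting significant loadings.
import numpy as np

rng = np.random.default_rng(1)
n, T, alpha_true = 500, 240, 0.8
f = rng.standard_normal(T)                       # observed factor
n_strong = int(n ** alpha_true)                  # units with non-zero loadings
loadings = np.zeros(n)
loadings[:n_strong] = 1.0
R = np.outer(f, loadings) + rng.standard_normal((T, n))  # panel of returns

# t-statistics of estimated loadings, unit by unit
b = f @ R / (f @ f)
resid = R - np.outer(f, b)
se = np.sqrt((resid ** 2).sum(axis=0) / (T - 1) / (f @ f))
D_n = int((np.abs(b / se) > 2.58).sum())         # number of significant loadings
alpha_hat = np.log(max(D_n, 1)) / np.log(n)      # D_n ~ n^alpha  =>  alpha_hat
```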

     

  • "The Role of Factor Strength and Pricing Errors for Estimation and Inference in Asset Pricing Models", by M. Hashem Pesaran and Ron P. Smith, CESifo Working Paper No. tbc, October 2019.

    Abstract: In this paper we are concerned with the role of factor strength and pricing errors in asset pricing models, and their implications for identification and estimation of risk premia. We establish an explicit relationship between the pricing errors and the presence of weak factors that are correlated with the stochastic discount factor. We introduce a measure of factor strength, and distinguish between observed and unobserved factors. We show that unobserved factors matter for pricing if they are correlated with the discount factor, and relate the strength of the weak factors to the strength (pervasiveness) of non-zero pricing errors. We then show that, even when the factor loadings are known, the risk premium of a factor can be consistently estimated only if the factor is strong and the pricing errors are weak. Similar results hold when factor loadings are estimated, irrespective of whether individual returns or portfolio returns are used. We derive distributional results for two-pass estimators of risk premia, allowing for non-zero pricing errors. We show that for inference on risk premia the pricing errors must be sufficiently weak. We consider both the case where n (the number of securities) is large and T (the number of time periods) is short, and the case where both n and T are large. Large n is required for consistent estimation of risk premia, whereas the choice of short T is intended to reduce the possibility of time variations in the factor loadings. We provide monthly rolling estimates of the factor strengths for the three Fama-French factors over the period 1989-2018.
    JEL Classifications: C38, G12
    Key Words: Arbitrage Pricing Theory, APT, factor strength, identification of risk premia, two-pass regressions, Fama-French factors.
    Full Text: http://www.econ.cam.ac.uk/people-files/emeritus/mhp1/wp19/Factor-Strength-and-APT-October-16-2019.pdf
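    The two-pass estimator the paper analyses can be sketched on simulated data: pass 1 estimates betas security-by-security from time-series regressions, pass 2 regresses average returns on those betas to recover the risk premium. The data here have one strong observed factor and zero pricing errors; all names and parameter values are illustrative assumptions:

```python
# Two-pass (Fama-MacBeth style) risk premium estimation on simulated returns.
import numpy as np

rng = np.random.default_rng(2)
n, T = 300, 120
lam = 0.5                                     # true risk premium
f = rng.standard_normal(T)                    # factor shocks
beta = rng.uniform(0.5, 1.5, n)               # strong factor: all betas non-zero
R = np.outer(f + lam, beta) + rng.standard_normal((T, n))

# Pass 1: time-series OLS of each security's return on the factor
F = np.column_stack([np.ones(T), f])
coef = np.linalg.lstsq(F, R, rcond=None)[0]   # rows: intercept, beta_hat
beta_hat = coef[1]

# Pass 2: cross-sectional OLS of average returns on estimated betas
X = np.column_stack([np.ones(n), beta_hat])
lam_hat = np.linalg.lstsq(X, R.mean(axis=0), rcond=None)[0][1]
```

    With a weak factor (most betas near zero) the cross-sectional regressor loses variation and `lam_hat` is no longer consistently estimated, which is the identification failure the paper formalises.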

     

  • "Long-Term Macroeconomic Effects of Climate Change: A Cross-Country Analysis", by Matthew E. Kahn, Kamiar Mohaddes, Ryan N. C. Ng, M. Hashem Pesaran, Mehdi Raissi and Jui-Chung Yang, CESifo Working Paper No. 7738, July 2019.

    Abstract: We study the long-term impact of climate change on economic activity across countries, using a stochastic growth model where labour productivity is affected by country-specific climate variables, defined as deviations of temperature and precipitation from their historical norms. Using a panel data set of 174 countries over the years 1960 to 2014, we find that per-capita real output growth is adversely affected by persistent changes in the temperature above or below its historical norm, but we do not obtain any statistically significant effects for changes in precipitation. Our counterfactual analysis suggests that a persistent increase in average global temperature by 0.04℃ per year, in the absence of mitigation policies, reduces world real GDP per capita by 7.22 percent by 2100. On the other hand, abiding by the Paris Agreement, thereby limiting the temperature increase to 0.01℃ per annum, reduces the loss substantially to 1.07 percent. These effects vary significantly across countries. We also provide supplementary evidence using data on a sample of 48 U.S. states between 1963 and 2016, and show that climate change has a long-lasting adverse impact on real output in various states and economic sectors, and on labour productivity and employment.
    JEL Classifications: C33, O40, O44, O51, Q51, Q54
    Key Words: Climate change, economic growth, adaptation, counterfactual analysis.
    Full Text: http://www.econ.cam.ac.uk/people-files/emeritus/mhp1/wp19/Climate Growth_190701.pdf
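    The climate regressor described above, the deviation of temperature from its historical norm, can be constructed as follows. The trailing 30-year window and the toy temperature series are illustrative assumptions:

```python
# Deviation of temperature from a trailing 30-year historical norm (sketch).
import numpy as np

rng = np.random.default_rng(3)
years = np.arange(1960, 2015)
# Toy series: mild warming trend plus noise (illustrative, not real data)
temp = 14.0 + 0.02 * (years - 1960) + 0.3 * rng.standard_normal(len(years))

m = 30                                   # years in the historical norm
dev = np.full(len(years), np.nan)
for t in range(m, len(years)):
    norm = temp[t - m:t].mean()          # norm uses only past observations
    dev[t] = temp[t] - norm              # deviation from historical norm
# dev[t] would then enter the growth regression as the climate variable.
```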

     

  • "Short T Dynamic Panel Data Models with Individual, Time and Interactive Effects", by Kazuhiko Hayakawa, M. Hashem Pesaran and L. Vanessa Smith, September 2018, revised February 2020

    Abstract: This paper proposes a quasi maximum likelihood (QML) estimator for short T dynamic fixed effects panel data models allowing for interactive effects through a multi-factor error structure. The proposed estimator is robust to the heterogeneity of the initial values and common unobserved effects, whilst at the same time allowing for standard fixed and time effects. It is applicable to both stationary and unit root cases. Order conditions for identification of the number of interactive effects are established, and conditions are derived under which the parameters are almost surely locally identified. It is shown that global identification is possible only when the model does not contain lagged dependent variables. The QML estimator is proven to be consistent and asymptotically normally distributed. A sequential multiple testing likelihood ratio procedure is also proposed for estimating the number of factors, and is shown to be consistent. Finite sample results obtained from Monte Carlo simulations show that the proposed procedure for determining the number of factors performs very well, and the QML estimator has small bias and RMSE, and correct empirical size in most settings. The practical use of the QML approach is illustrated by means of two empirical applications from the literature on cross-county crime rates and cross-country growth regressions.
    JEL Classifications: C12, C13, C23.
    Key Words: short T dynamic panels, unobserved common factors, quasi maximum likelihood, interactive effects, multiple testing, sequential likelihood ratio tests, crime rate, growth regressions.
    Full Text: http://www.econ.cam.ac.uk/people-files/emeritus/mhp1/wp20/HPS_11Feb20.pdf
    SSRN Link: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3268434

     

  • "Land Use Regulations, Migration and Rising House Price Dispersion in the U.S.", by Wukuang Cun and M. Hashem Pesaran. April 2018, revised October 2018

    Abstract: This paper develops and solves a dynamic spatial equilibrium model of regional housing markets in which house prices are jointly determined with location-to-location migration flows. Agents optimize period-by-period and decide whether to remain where they are or migrate to a new location at the start of each period. The agent's optimal location choice and the resultant migration process is shown to be Markovian, with the transition probabilities across all location pairs given as non-linear functions of wage and housing cost differentials, which are time varying and endogenously determined. On the supply side, in each location the construction firms build new houses by combining land and residential structures, with housing supplies endogenously responding to migration flows. The model can be viewed as an example of a dynamic network where regional housing markets interact with each other via migration flows that function as a source of spatial spill-overs. It is shown that the deterministic version of the model has a unique equilibrium and a unique balanced growth path. We estimate the state-level supplies of new residential land from the model using housing market and urban land acreage data. These estimates are shown to be significantly negatively correlated with the Wharton Residential Land Use Regulatory Index. The model can simultaneously account for the rise in house price dispersion and interstate migration in the U.S. Counterfactual simulations suggest that reducing either land supply differentials or migration costs could significantly lower house price dispersion.
    JEL Classifications: E0, R23, R31
    Key Words: House price dispersion, endogenous location choice, interstate migration, land-use restriction, spatial equilibrium.
    Full Text: http://www.econ.cam.ac.uk/people-files/emeritus/mhp1/wp18/PC_Housing_Paper_2018_10_09.pdf
    SSRN Working Paper Link: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3162399
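    A toy version of the Markovian migration process described above: transition probabilities across locations as a logit function of wage and housing-cost differentials net of a moving cost. The functional form, parameter values and cost matrix are illustrative assumptions, not the paper's specification:

```python
# Row-stochastic migration transition matrix from location fundamentals.
import numpy as np

wages = np.array([1.0, 1.2, 0.9])        # location wages (illustrative)
hcost = np.array([0.5, 0.9, 0.4])        # location housing costs
move_cost = 0.3 * (1 - np.eye(3))        # staying is free, moving costs 0.3

# Payoff of ending up in location j for an agent currently in location i
v = wages[None, :] - hcost[None, :] - move_cost
# Logit choice probabilities: each row i gives P(move i -> j)
P = np.exp(v) / np.exp(v).sum(axis=1, keepdims=True)
```

    Because wages and housing costs are endogenous in the model, `P` would be recomputed each period, which is what makes the migration network dynamic.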

  • "A Bias-Corrected Method of Moments Approach to Estimation of Dynamic Short-T Panels", by Alexander Chudik and M. Hashem Pesaran, CESifo Working Paper No. 6688, October 2017

    Abstract: This paper contributes to the GMM literature by introducing the idea of self-instrumenting target variables instead of searching for instruments that are uncorrelated with the errors, in cases where the correlation between the target variables and the errors can be derived. The advantage of the proposed approach lies in the fact that, by construction, the instruments have maximum correlation with the target variables and the weak instrument problem is thus avoided. The proposed approach can be applied to the estimation of a variety of models, such as spatial and dynamic panel data models. In this paper we focus on the latter and consider both univariate and multivariate panel data models with short time dimension. Simple Bias-corrected Method of Moments (BMM) estimators are proposed and shown to be consistent and asymptotically normal, under very general conditions on the initialization of the processes, individual-specific effects, and error variances, allowing for heteroscedasticity over time as well as cross-sectionally. Monte Carlo evidence documents BMM's good small sample performance across different experimental designs and sample sizes, including in the case of experiments where the system GMM estimators are inconsistent. We also find that the proposed estimator does not suffer from size distortions and has satisfactory power performance as compared to other estimators.
    JEL Classifications: C12, C13, C23.
    Key Words: Short-T Dynamic Panels, GMM, Weak Instrument Problem, Quadratic Moment Conditions, Panel VARs, Monte Carlo Evidence.
    Full Text: http://www.econ.cam.ac.uk/emeritus/mhp1/wp17/CP_BMM_2017_Sept20wp.pdf
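    The derivable correlation that BMM exploits can be checked by simulation: in the first-differenced panel AR(1) model, the target variable Δy(i,t-1) is correlated with the error Δu(i,t), and under the simple assumptions below that correlation equals exactly -σ², so it can be corrected for rather than instrumented away. The DGP parameters and initialization are illustrative:

```python
# Monte Carlo check of E[dy_{i,t-1} * du_{i,t}] = -sigma^2 in a panel AR(1).
import numpy as np

rng = np.random.default_rng(4)
N, T, phi, sig2 = 20000, 6, 0.5, 1.0
alpha = rng.standard_normal(N)                 # individual fixed effects
u = np.sqrt(sig2) * rng.standard_normal((N, T))
y = np.zeros((N, T))
y[:, 0] = alpha + rng.standard_normal(N)       # simple initialization
for t in range(1, T):
    y[:, t] = (1 - phi) * alpha + phi * y[:, t - 1] + u[:, t]

dy = np.diff(y, axis=1)                        # first differences of y
du = np.diff(u, axis=1)                        # first differences of u
# Sample analogue of E[dy_{i,t-1} * du_{i,t}]; should be close to -sig2
m = (dy[:, :-1] * du[:, 1:]).mean()
```

    Since this moment is known in closed form, the BMM idea is to use the lagged difference as its own instrument and subtract the derived bias, rather than search for external instruments.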
     

  • "Testing for Alpha in Linear Factor Pricing Models with a Large Number of Securities", by M. Hashem Pesaran and Takashi Yamagata, March 2017, revised January 2018

    Abstract: This paper considers tests of zero pricing errors for the linear factor pricing model when the number of securities, N, can be large relative to the time dimension, T, of the return series. We focus on a class of tests that are based on Student t tests of individual securities, which have a number of advantages over the existing standardised Wald type tests, and propose a test procedure that allows for non-Gaussianity and general forms of weakly cross correlated errors. It does not require estimation of an invertible error covariance matrix, it is much faster to implement, and it is valid even if N is much larger than T. Monte Carlo evidence shows that the proposed test performs remarkably well even when T = 60 and N = 5,000. The test is applied to monthly returns on securities in the S&P 500 at the end of each month in real time, using rolling windows of size 60. Statistically significant evidence against the Sharpe-Lintner CAPM and Fama-French three factor models is found mainly during the recent financial crisis. We also find a significant negative correlation between the twelve-month moving average of the p-values of the test and excess returns of long/short equity strategies (relative to the return on the S&P 500) over the period November 1994 to June 2015, suggesting that abnormal profits are earned during episodes of market inefficiencies.
    JEL Classifications: C12, C15, C23, G11, G12
    Key Words: CAPM, Testing for alpha, Weak and spatial error cross-sectional dependence, S&P 500 securities, Long/short equity strategy.
    Full Text: http://www.econ.cam.ac.uk/people-files/emeritus/mhp1/wp18/PY_LFPM_30_Jan_2018.pdf
    Supplement: http://www.econ.cam.ac.uk/people-files/emeritus/mhp1/wp17/PY_LFPM_11_March_2017_Supplement.pdf
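    The building block of the test described above is the individual Student t statistic for each security's alpha from a time-series regression. The sketch below computes those t statistics on simulated data with zero true alphas; the pooling of the t statistics into the panel test statistic, and its correction for cross-sectional error correlation, is omitted, and all parameter choices are illustrative:

```python
# Per-security alpha t-statistics from time-series CAPM regressions (sketch).
import numpy as np

rng = np.random.default_rng(5)
N, T = 500, 60                            # many securities, short window
f = rng.standard_normal(T)                # market excess return (simulated)
beta = rng.uniform(0.5, 1.5, N)
R = np.outer(f, beta) + rng.standard_normal((T, N))   # zero true alphas

X = np.column_stack([np.ones(T), f])      # intercept picks up alpha
coef, *_ = np.linalg.lstsq(X, R, rcond=None)
resid = R - X @ coef
s2 = (resid ** 2).sum(axis=0) / (T - 2)   # per-security error variances
se_alpha = np.sqrt(s2 * np.linalg.inv(X.T @ X)[0, 0])
t_alpha = coef[0] / se_alpha              # one t-statistic per security
```

    Note that nothing here requires inverting an N x N error covariance matrix, which is what makes the approach feasible when N greatly exceeds T.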
     

  • "Business Cycle Effects of Credit Shocks in a DSGE Model with Firm Defaults", by M. Hashem Pesaran and TengTeng Xu, CWPE Working Paper No. 1159, CESifo Working Paper No. 3609, IZA Discussion Paper No. 6027, October 2011, revised April 2016

    Abstract: This paper proposes a new theoretical framework for the analysis of the relationship between credit shocks, firm defaults and volatility. The key feature of the modelling approach is to allow for the possibility of default in equilibrium. The model is then used to study the impact of credit shocks on business cycle dynamics. It is assumed that firms are identical ex ante but differ ex post due to different realizations of firm-specific technology shocks, possibly leading to default by some firms. The implications of firm defaults for the balance sheets of households and banks and their subsequent impacts on business fluctuations are investigated within a dynamic stochastic general equilibrium framework. Results from a calibrated version of the model suggest that, in the steady state, a firm's default probability rises with its leverage ratio and the level of uncertainty in the economy. A positive credit shock, defined as a rise in the loan-to-deposit ratio, increases output, consumption, hours and productivity, and reduces the spread between loan and deposit rates. Interestingly, the effects of the credit shock tend to be highly persistent, even without price rigidities and habit persistence in consumption behavior.
    JEL Classifications: E32, E44, E50, G21.
    Key Words: Firm Defaults; Credit Shocks; Financial Intermediation; Interest Rate Spread; Uncertainty.
    Full Text: http://www.econ.cam.ac.uk/people-files/emeritus/mhp1/wp16/MacroCredit_PesaranXu_April-2016.pdf
    Supplement: http://www.econ.cam.ac.uk/people-files/emeritus/mhp1/wp11/MacroCredit_5Oct2011_Supplement.pdf

     

  • "Optimality and Diversifiability of Mean Variance and Arbitrage Pricing Portfolios", by M. Hashem Pesaran and Paolo Zaffaroni, CESifo Working Paper No. 2857, October 2009

    Abstract: This paper investigates the limit properties of mean-variance (mv) and arbitrage pricing (ap) trading strategies using a general dynamic factor model, as the number of assets diverges to infinity. It extends the results obtained in the literature for the exact pricing case to two other cases: asymptotic no-arbitrage and the unconstrained pricing scenario. The paper characterizes the asymptotic behaviour of the portfolio weights and establishes that in the non-exact pricing cases the ap and mv portfolio weights are asymptotically equivalent and, moreover, functionally independent of the factors' conditional moments. By implication, the paper sheds light on a number of issues of interest such as the prevalence of short-selling, the number of dominant factors and the granularity property of the portfolio weights.
    JEL Classifications: C32, C52, C53, G11.
    Key Words: Large Portfolios, Factor Models, Mean-Variance Portfolio, Arbitrage Pricing, Market (Beta) Neutrality, Well Diversification.
    Full Text: http://www.econ.cam.ac.uk/people-files/emeritus/mhp1/wp09/pz_port_17_October_09.pdf
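    The mean-variance weights whose limit behaviour the paper studies, w proportional to inv(Sigma) @ mu, can be sketched in an exact one-factor model, along with the granularity property mentioned above (individual weights shrinking as the number of assets grows). The factor structure and parameter values are illustrative assumptions:

```python
# Mean-variance portfolio weights in a one-factor model, and granularity.
import numpy as np

def mv_weights(n, seed=0):
    rng = np.random.default_rng(seed)
    beta = rng.uniform(0.5, 1.5, n)
    mu = 0.05 * beta                                       # expected excess returns
    Sigma = 0.04 * np.outer(beta, beta) + 0.1 * np.eye(n)  # factor + idiosyncratic
    w = np.linalg.solve(Sigma, mu)                         # w proportional to inv(Sigma) @ mu
    return w / w.sum()                                     # normalise to sum to one

w_small, w_large = mv_weights(50), mv_weights(500)
# Granularity: the largest individual weight shrinks as n grows.
```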