Presenter: Peter Phillips (Yale)
Paper: "Weak IV and Curiosa for Richard"
Abstract:
We review some exact finite sample theory and expansions that deliver weak instrument asymptotics in structural equation estimation and reveal some curiosa in the limit theory of dynamic panel estimation.
Presenter: Brendan McCabe (Liverpool)
Paper: "The Role of the Support in Semi Parametric Hypothesis Testing: the Case of Testing Independence in Count Data", (with D. Harris)
Abstract:
This talk looks at some issues that arise when testing low counts for independence in a semiparametric framework. When treating distributions nonparametrically, little or no attention is typically given to the nature of the support on which probabilities are defined. When looking at low counts it is perfectly feasible that the data are generated by, say, a binomial distribution where both parameters are unknown. In this case the support is finite with an unknown upper bound. It turns out that classical likelihood-based tests, derived under the assumption of an infinite support, have a size of either zero or one should the support be finite. The reason for this phenomenon is that, with this type of restriction, the support of the observations depends on the value of the parameter under test, and the score does not have zero mean. The effective score test, however, does not suffer from this defect. Remarkably, the effective score, when standardised by $T^{1/2}$, is asymptotically normal and is found to have power against local alternatives that shrink to the null at the rate $T^{-1}$.
Presenter: Paulo Parente (Exeter)
Paper: "Dynamic Vector Mode Regression", (with G.C.R. Kemp and J.M.C. Santos Silva)
Abstract:
We study the semiparametric estimation of the conditional mode of a random vector that has a continuous conditional joint density with a well-defined global mode. A full-system estimator is proposed and its asymptotic properties are studied, allowing for possibly dependent data. We specifically consider the estimation of vector autoregressive conditional mode models and of systems of linear simultaneous equations defined by mode restrictions.
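The basic building block here, estimating a mode by maximising a smoothed density, can be sketched in a few lines (a toy univariate illustration only, not the authors' full-system estimator; the mixture data and grid are illustrative assumptions):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
# Skewed two-component mixture; the global mode sits at the dominant peak near 0
x = np.concatenate([rng.normal(0.0, 1.0, 14000), rng.normal(4.0, 1.0, 6000)])

# Estimate the mode as the argmax of a kernel density estimate over a grid
kde = gaussian_kde(x)
grid = np.linspace(-3.0, 7.0, 1000)
mode_hat = grid[np.argmax(kde(grid))]
print(mode_hat)  # close to 0, the mean of the dominant component
```

The conditional version replaces the marginal density above with a smoothed objective in the regression parameters, which is what makes dependent-data asymptotics the interesting part.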
Presenter: Whitney Newey (MIT)
Paper: "Individual Heterogeneity and Demand Analysis", (with Jerry A. Hausman)
Abstract:
Individual heterogeneity is an important source of variation in demand. R-squareds are often estimated to be quite low, so accounting correctly for unobserved variation is potentially important. We consider general heterogeneous demand with smooth preferences, where budget sets are statistically independent of preferences. This environment is a smooth version of the revealed stochastic preference setting of McFadden (2005). We find that the dimension of heterogeneity and the individual demand functions are not identified. We also find that the exact consumer surplus of a price change, averaged across individuals, is not identified, motivating bounds analysis of consumer surplus. We show how such bounds can be computed by expanding around a conditional quantile estimate that is a demand function. We apply the results to gasoline demand and find tight bounds in this application.
Presenter: Kees Jan van Garderen (Amsterdam)
Paper: "Bimodal p*-Based Confidence Intervals" (with Fallaw Sowell)
Abstract:
Barndorff-Nielsen's celebrated p* formula and variations thereof have, amongst their various attractions, the ability to approximate bimodal distributions. In this paper we show that in general this requires an important adjustment to the basic formula that is easy to implement. We partition the sample space and show that certain regions of the sample space imply zero density for the MLE, rather than the positive density that a straight application of the formula would suggest. We subsequently show how this can be used to construct appropriate confidence intervals that have correct coverage probabilities conditionally as well as unconditionally.
Presenter: Yuichi Kitamura (Yale)
Paper: "Empirical Likelihood and Measurement Errors", (with Taisuke Otsu)
Abstract:
This paper develops an empirical likelihood-based estimation procedure in the presence of measurement errors. The new method is based on a smoothed version of empirical likelihood that incorporates deconvolution techniques, and is thus termed the Deconvolution Empirical Likelihood (DEL) estimator. The DEL estimator is easy to compute. Under sufficient conditions it is $\sqrt{n}$-consistent and asymptotically normal. Moreover, it retains desirable finite sample properties of empirical likelihood, such as invariance with respect to the formulation of moments. A weighted version of DEL is also proposed. The weighting scheme is useful in obtaining sufficiently fast convergence rates, and it is potentially useful in practice as well.
Presenter: Martin Weale (QMUL and Bank of England)
Paper: "New Keynesian Pricing Behaviour: an Analysis of Micro Data", (joint with James Cloyne, Lena Koerber and
Tomasz Wieladek)
Abstract:
A survey conducted by the Confederation of British Industry collects a range of data, including firms' responses to questions about price increases in the previous twelve months and expected price increases in the coming twelve months. We use these data to estimate a new Keynesian pricing equation in which price changes depend on expected price changes and nominal prices relative to marginal costs. Unit wage costs provide a measure of costs which satisfies the restrictions imposed by price homogeneity, but is most appropriate when firms produce with constant returns to scale. We find firms' behaviour consistent with constant returns to scale, and a coefficient on expected price changes in the pricing equation entirely consistent with new Keynesian theory.
Presenter: Nicky Grant (Manchester)
Paper: "Efficient Estimation & Inference from GMM in Linear Models with Singular Moment Variance"
Abstract:
Empirically, (almost) singular variance is frequently encountered in dynamic panel and large simultaneous equation settings, though the general asymptotic properties of efficient GMM in this case are unknown. This paper considers efficient GMM estimation under strong identification from moment functions with (asymptotically) singular variance at the true parameter. The two-step GMM estimator in this case is shown to converge at rate root-n to a non-standard limit distribution, with convergence at rate n in certain directions. The paper provides evidence and discussion on why the assumption on both the form and rank of the moment variance should be made as an identification condition, with its implications for efficient inference inextricably linked to the well-known first-order identification condition. Many examples of (almost) singular variance are provided, along with discussion of the implications for inference of some ad hoc methods of removing singularities commonly employed in applied research. Results in this paper show that these methods (including those removing only linearly independent combinations of moments with (asymptotically) zero population variance eigenvalues) lead to a change in the asymptotic distribution of efficient GMM, with an increased asymptotic variance and a slower rate of convergence in certain directions. A simulation study demonstrates the key results of the paper.
Presenter: Jean-Marc Robin (Sciences Po)
Paper: "Nonparametric Spectral-based Estimation of Latent Structures"
Abstract:
We present a constructive identification proof of $p$-linear decompositions of $q$-way arrays. The analysis is based on the joint spectral decomposition of a set of matrices. It has applications in the analysis of a variety of latent-structure models, such as $q$-variate mixtures of $p$ distributions. As such, our results provide a constructive alternative to Allman, Matias and Rhodes (2009). The identification argument suggests a joint approximate-diagonalization estimator that is easy to implement and whose asymptotic properties we derive. We illustrate the usefulness of our approach by applying it to nonparametrically estimate multivariate finite-mixture models and hidden Markov models.
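The spectral identification idea can be seen in a minimal two-slice example (a stylised numerical sketch with made-up matrices and a plain eigendecomposition, not the paper's joint approximate-diagonalization estimator): two matrices sharing the same factors pin down those factors as eigenvectors.

```python
import numpy as np

rng = np.random.default_rng(1)
p = 3
F = rng.uniform(0.5, 1.5, size=(p, p))   # latent component matrix to recover
G = rng.uniform(0.5, 1.5, size=(p, p))
d1, d2 = np.array([1.0, 2.0, 3.0]), np.array([3.0, 1.0, 2.0])  # distinct ratios

# Two "slices" sharing the same factors F and G
M1 = F @ np.diag(d1) @ G.T
M2 = F @ np.diag(d2) @ G.T

# M1 M2^{-1} = F diag(d1/d2) F^{-1}: its eigenvectors are the columns of F
_, vecs = np.linalg.eig(M1 @ np.linalg.inv(M2))

# Compare with F up to column scale, sign and permutation
Fn = F / np.linalg.norm(F, axis=0)
Vn = np.real(vecs) / np.linalg.norm(np.real(vecs), axis=0)
Vn = Vn * np.sign(Vn[0])                 # fix signs (F's columns are positive)
errs = [min(np.linalg.norm(Fn[:, i] - Vn[:, j]) for j in range(p))
        for i in range(p)]
print(max(errs))  # numerically zero: F is recovered up to scale/permutation
```

With noisy sample analogues of many slices, no single eigendecomposition is exact, which is why a joint approximate diagonalization over the whole set of matrices is used instead.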
Presenter: Vitaliy Oryshchenko (Nuffield College, Oxford)
Paper: "Indirect Maximum Entropy Bandwidth for the Kernel Estimator of a Distribution Function"
Abstract:
In parametric models, the equivalence between minimisation of the Kullback-Leibler (KL) divergence between the model and the true distribution and minimisation of the (reverse) KL divergence between the distribution of the probability integral transforms (PITs) and the uniform distribution can be exploited to define a class of estimators alternative to maximum likelihood. Such estimators remain valid in situations where the likelihood does not exist, in which case they maximise the Shannon entropy of the PITs. Similar arguments carry over to kernel estimation of the cumulative distribution function (CDF). It is shown that PITs defined as leave-one-out kernel estimates of the CDF at the sample values are distributed on a certain permutohedron, and the maximum entropy distribution is characterised. The proposed bandwidth indirectly maximises the entropy of the PITs. The connection with cross-validation procedures is described.
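The constraint behind the permutohedron result is easy to check numerically: for a symmetric kernel, the leave-one-out PITs always sum to exactly n/2, whatever the data or bandwidth (a quick sketch assuming a Gaussian kernel; the sample and bandwidth below are arbitrary):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
n, h = 50, 0.4
x = rng.standard_normal(n)

# Leave-one-out kernel CDF estimates at the sample points:
# u_i = (n-1)^{-1} * sum_{j != i} Phi((x_i - x_j) / h)
P = norm.cdf((x[:, None] - x[None, :]) / h)
np.fill_diagonal(P, 0.0)
u = P.sum(axis=1) / (n - 1)

# Pairing (i, j) with (j, i) and using Phi(t) + Phi(-t) = 1 forces sum(u) = n/2
print(u.sum())  # n/2 = 25, up to rounding
```

This hyperplane restriction is why the PITs cannot be uniform on the cube, and the maximum entropy distribution must be characterised on the permutohedron instead.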
Presenter: Andrew Chesher (UCL)
Paper: "New Directions for IV"
Abstract:
In key papers published in the 1980s, Richard Smith pioneered the microeconometric analysis of models involving discrete endogenous variables. These models, at the research frontier at the time, were complete and point identifying. I will review recent work showing how incomplete, partially identifying, instrumental variable (IV) models can be deployed to deliver more robust inference in such settings.
