Finally. This journey started in June 2020 at the North American Productivity Workshop (NAPW) in Miami, Florida, USA (held virtually due to COVID-19). Some papers there applied quantile methods to stochastic frontier analysis (SFA), and I commented publicly that something just didn't feel in place: not technical matters, but conceptual issues. I started thinking and writing about it, and one year later, at the next (again virtual) NAPW in June 2021, I presented "Quantile regression in Stochastic Frontier Analysis: some fundamental considerations," where I was rather pessimistic about the applicability of quantile methods in SFA. The workshop organizer, Chris Parmeter, was also writing a piece on the topic, having his own concerns about whether quantile regression and SFA can coexist. We initially discussed putting together a "symposium", a small collection of papers on the subject in a journal, but we ended up writing a monograph together instead, which has just been published. It is not just a review: it contains new (valid) tools, an empirical application, and many open issues for further research.

The abstract goes:

Quantile regression has become one of the standard tools of econometrics. We examine its compatibility with the special goals of stochastic frontier analysis. We document several conflicts between quantile regression and stochastic frontier analysis. From there we review what has been done up to now, we propose ways to overcome the conflicts that exist, and we develop new tools to do applied efficiency analysis using quantile methods in the context of stochastic frontier models. The work includes an empirical illustration to reify the issues and methods discussed, and catalogs the many open issues and topics for future research.

Here is the table of contents:

1 Introduction
I Where We Are
2 The Relation Between Conditional Quantiles and the Regression Function
3 Basics of Quantile Regression: The Independence Case
4 Where Quantile Regression and Stochastic Frontier Analysis Clash
5 Reconciling Quantile Regression with Stochastic Frontier Models
6 Likelihood-Based Quantile Estimation

II What We Can Do
7 The Corrected Q-Estimator
8 Quantile-Dependent Efficiency
9 From the Composite Error Term to Inefficiency: A Fundamental Result
10 Quantile Estimation and Inference with Dependence
11 An Empirical Application

III For the Road
12 Challenges Ahead
13 Summary and Concluding Remarks

The full text is at (pay-walled).

Click here to download the front matter, chapters 1, 12, and 13, and the references.


This has been accepted and published in Empirical Economics: Papadopoulos, A. (2021). "Trade liberalization and growth: a quantile moderator for Hoyos' (2021) replication study of Estevadeordal and Taylor (2013)." Empirical Economics.

Estevadeordal and Taylor (2013) is unique in that the authors dug deeply into raw data to construct indicators of tariff reductions in the 1990s for several countries. Hoyos (2021) is a "critical replication" study that challenges their main finding (that tariff reduction accelerated growth in the 1990s), on the grounds that the empirical exercise is "non-robust".

My paper indeed acts as a moderator in this debate: it shows that Hoyos' non-robustness results are valid, but that they do not invalidate the overall conclusion of Estevadeordal and Taylor. To see why, one should apply quantile regression, which is what I do.

The abstract reads:

We examine whether Hoyos’ (Empir Econ, 2021) critical replication of Estevadeordal and Taylor (Rev Econ Stat 95(5):1669–1690, 2013) that dealt with trade liberalization and growth, does provide, as the author claims, clear evidence that the estimation results and the conclusions of the original should be discarded, the first as “nonrobust,” the second as relying on the former. We find that robustness is indeed an issue. We correct for it using quantile regression and we obtain results that explain both papers and support a modified argument that tariff reduction in the 1990’s contributed to growth acceleration, when other determinants of growth were conducive to such acceleration.

Eleven months later, the chapter has just been released online.

Contributed chapter to the Handbook of Production Economics, vol. 2, Springer, 45-1. Online: July 16, 2021.

ABSTRACT: We review econometric studies that attempt to estimate the effects of management on production, be it on output, productivity, or efficiency. We group the studies mainly by a methodological criterion: whether they treat management as a latent variable, proxy it by some other variable(s), or attempt to construct a direct measure of management and use it as a regressor in an econometric model. A large part of the literature uses data from small-scale agriculture, while in recent years national surveys have started to collect data related to management and management practices from various industries more systematically. Rather than being mentioned in telegraphic references, most of the studies presented are given a somewhat detailed summary, so that the reader can acquire a good sense of the methodological choices made, the estimation techniques adopted, and the results obtained on the effects of management.

Download here.

This paper has just appeared on the website of the Journal of Productivity Analysis. It is jointly written with Chris F. Parmeter and Subal Kumbhakar.

Abstract: The two-tier stochastic frontier model has seen widespread application across a range of social science domains. It is particularly useful in examining bilateral exchanges where unobserved side-specific information exists on both sides of the transaction. These buyer- and seller-specific informational aspects offer opportunities to extract surplus from the other side of the market, in combination with uneven relative bargaining power. Currently, this model is hindered by the fact that identification and estimation rely on the potentially restrictive assumption that these factors are statistically independent. We present three different models for empirical application that allow for varying degrees of dependence across these latent informational/bargaining factors.

Here is a presentation I recently gave at the (virtual) North American Productivity Workshop XII (vNAPW-XII). Quantile regression has seen a large number of applications in econometrics, especially in treatment-effects models, because it can also capture the "indirect" effects of the regressors on the dependent variable, along the quantiles of the latter.

But does the trick work in Stochastic Frontier Analysis (SFA)? My conclusion is that it doesn't. Fundamentally, this is because in treatment-effects models we consider equilibrium relations and care about the total effect of the regressors on the dependent variable. In SFA, by contrast, we consider frontier relations, and we want to keep the effects of the regressors on the "deterministic frontier" clearly separate from any other indirect effect they may have.

Moreover, the defining property of the estimator used in quantile regression (the Q-estimator) is not harmless in SFA models; in fact, it is devastating, and we need distributional assumptions to mend the damage. This does not mean that we cannot use the "quantile approach" in SFA at all, only that we should expect different, and perhaps, after all is said and done, fewer benefits: currently it appears that the value added of quantile regression is smaller in SFA than in treatment-effects models.
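To make the Q-estimator's defining property concrete, here is a minimal sketch of quantile regression in the textbook independence case (the data-generating values, intercept 1 and slope 2, are my own illustration): the estimator minimizes the Koenker-Bassett check (pinball) loss, and under independence every conditional-quantile line shares the regression slope, with only the intercept shifting by the error quantile.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 500
x = rng.uniform(0.0, 2.0, n)
y = 1.0 + 2.0 * x + rng.normal(size=n)   # error independent of x

def check_loss(beta, tau):
    """Koenker-Bassett check (pinball) loss for the tau-th quantile."""
    r = y - (beta[0] + beta[1] * x)
    return np.sum(r * (tau - (r < 0)))

# Under independence, all conditional-quantile lines have the same
# slope as the regression function; only the intercept moves.
for tau in (0.25, 0.5, 0.75):
    fit = minimize(check_loss, x0=np.zeros(2), args=(tau,), method="Nelder-Mead")
    print(f"tau={tau}: intercept={fit.x[0]:.2f}, slope={fit.x[1]:.2f}")
```

It is exactly this mechanical property that becomes problematic once the composed SFA error term enters the picture.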

Download the presentation and think for yourself. Hopefully it will eventually appear as a paper somewhere. The paper will contain more material than the presentation, and its current abstract goes like this:

We review fundamental properties of quantile regression and we examine the degree to which they are compatible with the special statistical and economic characteristics of stochastic frontier models, and with the goals of stochastic frontier analysis. We find that the scope and focus of quantile regression changes when applied to stochastic frontier models, compared to the conventional regression setup. We show that a “corrected” quantile estimator can be used to estimate the quantile probability of the deterministic frontier, given a distributional assumption. We examine the quantile estimator in the benchmark case of independence between the regressors and the error term, but also when “predictors of inefficiency” enter the model, in which case we obtain a non-linear median stochastic frontier regression model, where the deterministic frontier can be estimated without distributional assumptions. We show how quantiles can be used to obtain information on individual levels of inefficiency, and also as a basis for specification tests for the distributional assumptions.

  • Papadopoulos, A. (2021). "Accounting for endogeneity in regression models using Copulas: A step-by-step guide for empirical studies." Journal of Econometric Methods. Download the pre-print, incl. the on-line supplement.
  • Abstract: We provide a detailed presentation and guide for the use of Copulas in order to account for endogeneity in linear regression models without the need for instrumental variables. We start by developing the model from first principles of likelihood inference, and then focus on the Gaussian Copula. We discuss its merits and propose diagnostics to assess its validity. We analyze in detail, and provide solutions to, the various issues that may arise when applying the method in empirical work. We treat the cases of both continuous and discrete endogenous regressors. We present simulation evidence for the performance of the proposed model in finite samples, and we illustrate its application by a short empirical study. A supplementary file contains additional simulations and another empirical illustration.
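For readers curious about the mechanics, here is a minimal sketch of the Gaussian-copula correction in its simplest, continuous-regressor form (the data-generating process and numbers are mine, purely for illustration; the paper covers the full treatment and diagnostics): the endogenous regressor is mapped through its empirical CDF and then the inverse standard-normal CDF, and the resulting "generated regressor" is added to the OLS regression as a control.

```python
import numpy as np
from scipy.stats import norm, rankdata

rng = np.random.default_rng(1)
n = 2000
omega = rng.normal(size=n)                       # unobserved confounder
p = np.exp(0.5 * omega + rng.normal(size=n))     # endogenous, non-normal regressor
y = 1.0 + 2.0 * p + omega + rng.normal(size=n)   # true slope on p is 2

# Gaussian-copula control: push the empirical CDF of p through the
# inverse standard-normal CDF to get the generated regressor p_star.
u = rankdata(p) / (n + 1)
p_star = norm.ppf(u)

X = np.column_stack([np.ones(n), p, p_star])
beta = np.linalg.lstsq(X, y, rcond=None)[0]

naive = np.linalg.lstsq(np.column_stack([np.ones(n), p]), y, rcond=None)[0]
print(f"naive OLS slope: {naive[1]:.2f}  copula-corrected slope: {beta[1]:.2f}")
```

With this design the naive OLS slope is biased upward, while the copula-corrected slope recovers the true value, without any instrument.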

    This has just been accepted by the European Journal of Operational Research, and the author's accepted version has been uploaded here. It was written together with Christopher Parmeter of the University of Miami.

    There are some nice theoretical results relating skewness and excess kurtosis for the composite distributions used in Stochastic Frontier Analysis, but to me the main contribution is a specification test that uses only OLS residuals and appears to be the most powerful such test to date. With it, one can test the error specification right after an OLS regression, and only then code the maximum likelihood estimator.

    Abstract. The distributional specifications for the composite regression error term most often used in stochastic frontier analysis are inherently bounded as regards their skewness and excess kurtosis coefficients. We derive general expressions for the skewness and excess kurtosis of the composed error term in the stochastic frontier model based on the ratio of standard deviations of the two separate error components, as well as theoretical ranges for the most popular empirical specifications. While these simple expressions can be used directly to assess the credibility of an assumed distributional pair, they are likely to over-reject. Therefore, we develop a formal test based on the implied ratio of standard deviations for the skewness and the kurtosis. This test is shown to have impressive power compared with other tests of the specification of the composed error term. We deploy this test on a range of well-known datasets that have been used across the efficiency community. For many of them we find that the classic distribution assumptions cannot be rejected.

    UPDATE: The paper went online on February 2, 2021.

    The paper “Stochastic frontier models using the Generalized Exponential distribution” has just been approved for publication in the Journal of Productivity Analysis.

    Abstract: We present a new, single-parameter distributional specification for the one-sided error components in single-tier and two-tier stochastic frontier models. The distribution has its mode away from zero, and can represent cases where the most likely outcome is non-zero inefficiency. We present the necessary formulas for estimating production, cost and two-tier stochastic frontier models in logarithmic form. We pay particular attention to the use of the conditional mode as a predictor of individual inefficiency. We use simulations to assess the performance of existing models when the data include an inefficiency term with non-zero mode, and we also contrast the conditional mode to the conditional expectation as measures of individual (in)efficiency.

    Download the pre-print here.

    This survey has just been published in the collection Parmeter, C. F., & Sickles, R. C. (Eds.) (2020), Advances in Efficiency and Productivity Analysis, Springer. Naturally, it is based on my PhD, and it is a comprehensive survey of the state of the art of the two-tier stochastic frontier framework, covering theoretical foundations, estimation tools, and the large variety of applications this modeling framework has been used for. Indicatively, it has been used to measure the impact of informational asymmetry in wage negotiations, in the housing market, and in the health-services market; the impact of asymmetric bargaining power in international donor-recipient relationships but also in tourist shopping; and the effects of “optimism” and “pessimism” in self-reported quality of life. And many more situations, economic and not-so-economic.

    Wherever we can conceive of opposing latent forces operating on the outcome, this model can be applied. This is why my pet name for it is the “noisy tug-of-war” model: “noisy” because there is also a noise component in the composed error specification.
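A toy simulation makes the tug-of-war structure concrete (the parameters are my own illustration; exponential one-sided components are one common specification in this literature):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10_000

# "Noisy tug-of-war": two opposing one-sided latent forces plus noise.
v = rng.normal(0.0, 0.2, n)     # two-sided noise
w = rng.exponential(0.3, n)     # pulls the outcome up (e.g. one side's surplus)
u = rng.exponential(0.5, n)     # pulls the outcome down (the other side's surplus)
x = rng.uniform(1.0, 2.0, n)

y = 1.0 + 0.5 * x + v + w - u   # observed outcome

# The net pull of the two latent forces:
print(f"mean of w - u: {np.mean(w - u):.3f}")   # close to 0.3 - 0.5 = -0.2
```

The econometric task is then the reverse: given only y and x, recover the parameters of both one-sided forces and of the noise.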



    This paper is a joint effort with Prof. Mike Tsionas. It has just been accepted for publication in Econometric Reviews. It proposes a genuinely new least-squares method that reduces the variance of the estimator in linear regression, and it is very easy to implement.

    ABSTRACT. In pursuit of efficiency, we propose a new way to construct least squares estimators, as the minimizers of an augmented objective function that takes explicitly into account the variability of the error term and the resulting uncertainty, as well as the possible existence of heteroskedasticity. We initially derive an infeasible estimator which we then approximate using Ordinary Least Squares (OLS) residuals from a first-step regression to obtain the feasible “HOLS” estimator. This estimator has negligible bias, is consistent and outperforms OLS in terms of finite-sample Mean Squared Error, but also in terms of asymptotic efficiency, under all skedastic scenarios, including homoskedasticity. Analogous efficiency gains are obtained for the case of Instrumental Variables estimation. Theoretical results are accompanied by simulations that support them.

    Download the pre-print and the on-line Appendix.