## About

Born in 1968, Athens, Greece. Fascinated with Economics all my life. In November 2018 I completed and was awarded a PhD in Economics from the Department of Economics of the Athens University of Economics and Business. The PhD thesis is a monograph on the two-tier stochastic frontier model, and includes many new theoretical results and econometric applications, as well as two new economic models: one on bilateral Nash bargaining, and one on the effects of management in production. I generally keep busy writing research papers, alone or with co-authors, and I continue to work in the private sector as a Finance & Administration executive and consultant, as I have done since 1990. You can contact me at the e-mail address papadopalex(at)aueb.gr

### Comments
1. Sophie says:

Hi, I’m Sophie1998 from Cross Validated. I would greatly appreciate it if you could offer your help with my thesis. I’m a 2nd-year master’s student at ULB (Université libre de Bruxelles).
Best regards

• yufei says:

Regarding “Distribution of likelihood ratio in a test on the unknown variance of a normal sample”: I think there is a problem with your answer.

2. yufei says:

In your answer to “Distribution of likelihood ratio in a test on the unknown variance of a normal sample”:

[https://math.stackexchange.com/questions/634914/distribution-of-likelihood-ratio-in-a-test-on-the-unknown-variance-of-a-normal-s/635371?noredirect=1#comment1339933_635371]

I think there is a mistake: the $z_i$ are not independent; in fact, $\sum_{i=1}^n z_i^2 \sim \chi^2_{n-1}$, a chi-squared distribution with $n-1$ degrees of freedom.

• Alecos Papadopoulos says:

Thanks for contacting me. Let me check.

• Alecos Papadopoulos says:

I don’t see why the $z_i$’s are not independent. They are standardized variables (standardized by the true parameter values) of an i.i.d. sample.
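The disagreement is easy to probe numerically. Below is a minimal Monte Carlo sketch (not from the original thread; the parameter values are arbitrary choices): standardizing by the true mean makes the $z_i$ i.i.d. standard normal, so $\sum_i z_i^2$ behaves like $\chi^2_n$, whereas standardizing by the sample mean introduces dependence and the sum behaves like $\chi^2_{n-1}$.

```python
import numpy as np

# Monte Carlo sketch: how the z_i are standardized matters.
# (Illustrative values; mu, sigma, n, reps are arbitrary choices.)
rng = np.random.default_rng(0)
mu, sigma, n, reps = 5.0, 2.0, 10, 200_000
x = rng.normal(mu, sigma, size=(reps, n))

# Standardized by the TRUE mean: the z_i are i.i.d. N(0,1),
# so sum z_i^2 ~ chi^2_n (mean n).
q_true = (((x - mu) / sigma) ** 2).sum(axis=1)

# Standardized by the SAMPLE mean: the z_i are correlated,
# and sum z_i^2 ~ chi^2_{n-1} (mean n-1).
q_samp = (((x - x.mean(axis=1, keepdims=True)) / sigma) ** 2).sum(axis=1)

print(q_true.mean())  # close to n = 10
print(q_samp.mean())  # close to n - 1 = 9
```

So both statements in the exchange are compatible: which chi-squared law applies depends on whether the standardization uses the true or the estimated mean.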

3. jouneau says:

Hi, I am reposting a message from Stack Exchange. It seems to me that your statement, that OLS always asymptotically identifies the derivative of the conditional expectation $E[Y|X=x]$ at the point $x=E[X]$, falls short of explaining why $E[XR_1(X)]=0$ (where $R_1(\cdot)$ is the first-order remainder of the Taylor expansion of the conditional expectation function). You mention that you have a full submitted paper. Could you please send me a version of it?
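For readers following along, a sketch of the argument under discussion (my notation, not the original post’s): write $m(x):=E[Y\mid X=x]$ and $\mu:=E[X]$, and Taylor-expand $m$ around $\mu$.

```latex
% Sketch of the claim under discussion: m(x) = E[Y|X=x], mu = E[X].
\begin{align}
m(X) &= m(\mu) + m'(\mu)(X-\mu) + R_1(X), \\
\beta_{OLS} &= \frac{\operatorname{Cov}(Y,X)}{\operatorname{Var}(X)}
             = \frac{\operatorname{Cov}(m(X),X)}{\operatorname{Var}(X)}
             = m'(\mu) + \frac{\operatorname{Cov}(R_1(X),X)}{\operatorname{Var}(X)}.
\end{align}
```

So the claim that the OLS slope converges to $m'(\mu)$ holds exactly when $\operatorname{Cov}(R_1(X),X)=0$, which is the condition being questioned here.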

• Alecos Papadopoulos says:

Hi, thanks for the interest. The paper is in the revision stage so there is no version to be sent out at the moment. I will look into the post again and see whether I can improve it.

• jouneau says:

I’ll be glad to see the paper, but to be perfectly clear, I don’t think such a strong result can be true. Take $X$ uniformly distributed on $[0,1]$, and let $\epsilon$ be zero-mean and independent of $X$. Let $\alpha>1$. If $Y=X^{\alpha}+\epsilon$ then $E[Y|X]=X^{\alpha}$. Hence the derivative of the conditional expectation function at $E[X]=1/2$ is $\alpha \times 2^{1-\alpha}$.

Now the population OLS slope is $\operatorname{Cov}[X^{\alpha},X]/\operatorname{Var}[X]$. The denominator is $1/12$ and the numerator is $E[X^{\alpha+1}]-E[X^{\alpha}]/2=\frac{\alpha}{2(\alpha+1)(\alpha+2)}$, so the slope is $\frac{6\alpha}{(\alpha+1)(\alpha+2)}$, which in general differs from the derivative above.
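A quick numerical check of this counterexample (a sketch; $\alpha=3$ is an arbitrary illustrative choice): compute the population OLS slope from the exact moments of $X \sim U[0,1]$, namely $E[X^k]=1/(k+1)$, and compare it with the derivative of $E[Y|X=x]=x^{\alpha}$ at $x=E[X]=1/2$.

```python
# Counterexample check: X ~ U[0,1], Y = X^alpha + eps (eps zero-mean, indep. of X).
# Population OLS slope = Cov(X^alpha, X) / Var(X), from exact uniform moments
# E[X^k] = 1/(k+1); derivative of x^alpha at E[X] = 1/2 is alpha*(1/2)^(alpha-1).
alpha = 3.0
cov = 1.0 / (alpha + 2.0) - 0.5 / (alpha + 1.0)  # E[X^(a+1)] - E[X]*E[X^a]
var = 1.0 / 12.0                                 # variance of U[0,1]
slope = cov / var                                # equals 6*alpha/((alpha+1)*(alpha+2))
deriv = alpha * 0.5 ** (alpha - 1.0)

print(slope, deriv)  # 0.9 vs 0.75: the two quantities differ for alpha = 3
```

(Interestingly, for $\alpha=2$ the two quantities coincide at $1$, so whether the discrepancy appears depends on $\alpha$.)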

• Alecos Papadopoulos says:

I am beginning to have doubts myself; I am now digging deeper. For the moment, I will flag the CV post as being of “contested validity”.

• jouneau says:

Fine. I recommend this reference: Bera (1984), “The Use of Linear Approximation to Nonlinear Regression Analysis”, Sankhyā, 285–290. I think the result provided by Bera is close to the best that can be achieved using the technique you sketched in the initial post, since the bound on the remainder of the Taylor expansion used in this paper can be attained.

• Alecos Papadopoulos says:

Thanks for the reference. In case you missed it, another relevant one is White, H. (1980), “Using least squares to approximate unknown regression functions”, International Economic Review, 21(1), 149–170.

• jouneau says:

Actually, Bera’s paper stands as a critique of White’s paper. Looking at some particular examples, White claimed that the difference between the a.s. limit of the OLS estimator and the derivative of the conditional expectation function at the average value can be as large as desired. Bera shows that it can be bounded (by the Taylor–Lagrange inequality applied to the remainder function), provided the second derivative is (a.s.) bounded over the range of $X$. (By the way, since we deal with a.s. limits, you can interchange the difference of the limits and the limit of the difference.)

• Alecos Papadopoulos says:

Yes, it gets more interesting by the hour.