TY - JOUR
T1 - On plug-in estimation of long memory models
AU - Lieberman, Offer
PY - 2005/4
Y1 - 2005/4
AB - We consider the Gaussian ARFIMA(j, d, l) model, with spectral density f_θ(λ), θ ∈ R^p, λ ∈ (-π, π), d ∈ (0, 1/2), and an unknown mean μ ∈ R. For this class of models, the n^(-1)-normalized information matrix of the full parameter vector, (μ, θ), is asymptotically degenerate. To estimate θ, Dahlhaus (1989, Annals of Statistics 17, 1749-1766) suggested using the maximizer of the plug-in log-likelihood, L_n(θ, μ̃_n), where μ̃_n is any n^((1-2d)/2)-consistent estimator of μ. The resulting estimator is a plug-in maximum likelihood estimator (PMLE). This estimator is consistent, asymptotically normal, and efficient, but in finite samples it has some serious drawbacks. Primarily, none of the Bartlett identities associated with L_n(θ, μ̃_n) are satisfied for fixed n. Cheung and Diebold (1994, Journal of Econometrics 62, 301-316) conducted a Monte Carlo simulation study and reported that the bias of the PMLE is about three to four times the bias of the regular maximum likelihood estimator (MLE). In this paper, we derive asymptotic expansions for the PMLE and show that its second-order bias is contaminated by an additional term, which does not exist in regular cases. This term arises because of the failure of the first Bartlett identity to hold and seems to explain Cheung and Diebold's simulation results. We derive similar expansions for the Whittle MLE, which is another estimator that tacitly uses the plug-in principle. An application to the ARFIMA(0, d, 0) model shows that the additional bias terms are considerable.
UR - http://www.scopus.com/inward/record.url?scp=17444429246&partnerID=8YFLogxK
U2 - 10.1017/S0266466605050231
DO - 10.1017/S0266466605050231
M3 - Article
AN - SCOPUS:17444429246
SN - 0266-4666
VL - 21
SP - 431
EP - 454
JO - Econometric Theory
JF - Econometric Theory
IS - 2
ER -