[R] Different results in the unit root test. Why?

angchanyy squidyau at mail.goo.ne.jp
Wed Oct 24 16:42:18 CEST 2007


Situation:
I generated a series of 1000 observations of i.i.d. random error and fed it
into several different unit root tests. The tests gave me different results.
The test statistics I got are listed below (a rough sketch of the calls
follows the list):

adf.test (tseries):   -10.2214  (lag = 9)
ur.df (urca):         -21.8978
ur.sp (urca):         -27.68
pp.test (tseries):    -972.3343 (truncation lag = 7)
ur.pp (urca):         -973.2409
ur.kpss (urca):       0.1867
kpss.test (tseries):  0.1867    (truncation lag = 7)
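
For reference, here is a minimal sketch of the kind of calls I mean, using
default arguments throughout (the seed is arbitrary and only for
illustration; my actual run may have used different settings):

library(tseries)
library(urca)

set.seed(123)            # arbitrary seed, for illustration only
x <- rnorm(1000)         # 1000 i.i.d. random errors

# tseries tests (defaults)
adf.test(x)              # augmented Dickey-Fuller
pp.test(x)               # Phillips-Perron
kpss.test(x)             # KPSS

# urca tests (defaults)
summary(ur.df(x))        # augmented Dickey-Fuller
summary(ur.pp(x))        # Phillips-Perron
summary(ur.kpss(x))      # KPSS
summary(ur.sp(x))        # Schmidt-Phillips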

Questions:
1. Why are there different test statistics? Even tests with the same name,
e.g. the Phillips-Perron test (pp.test and ur.pp), give different test
statistics.
2. Isn't the Phillips-Perron test based on the Dickey-Fuller distribution
tables? How can the value be so negative (-9xx)?
3. What is the truncation lag? Is it the same as the lag terms?
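
For what it is worth, the two numbers in parentheses above seem to come from
different formulas. The expressions below are only my guess, reverse-engineered
from the reported values of 9 and 7 for a sample of size 1000, not taken from
the package sources:

n <- 1000

# reproduces the "lag = 9" reported by adf.test
trunc((n - 1)^(1/3))

# reproduces the "truncation lag = 7" reported by pp.test and kpss.test
trunc(4 * (n / 100)^0.25)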