5 Autocorrelated Errors
In Chapter 4 we discussed a situation in which one of the classical regularity assumptions, homogeneity of the error variances, does not hold. Another violation of the classical assumptions occurs when the errors are autocorrelated. This situation typically arises when the data are collected over time, in which case one might expect time series methods to apply. When data collected over time follow a structural curve, however, we use nonlinear regression; in the time series approach, the nonlinear structure of the data is instead treated as a trend and removed by filtering. In this chapter we assume that the errors of a nonlinear regression model are correlated, and we attempt to find estimates using both classical and robust methods. The theory, and also the computational tools provided in the nlr package, are based on the methods developed by Riazoshams et al. (2010).
5.1 Introduction
Statistics practitioners often ignore the underlying assumptions when analyzing real data and employ the ordinary least squares (OLS) method to estimate the parameters of a nonlinear model. To make reliable inferences about the parameters of a model, the underlying assumptions, especially the assumption that the errors are independent, must be satisfied. In real situations, however, we may encounter dependent error terms, which can produce autocorrelated errors.
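To see how dependence among the error terms shows up in practice, the following standalone Python sketch simulates a nonlinear model with AR(1) errors, fits it by OLS, and inspects the lag-1 autocorrelation of the residuals. The exponential model, the AR(1) coefficient 0.8, and the use of SciPy's `curve_fit` are illustrative assumptions, not taken from the text (the book's own tools are in the R package nlr).

```python
import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b):
    # Illustrative exponential growth curve (assumed, not from the text)
    return a * np.exp(b * x)

rng = np.random.default_rng(1)
n = 200
x = np.linspace(0.0, 5.0, n)

# AR(1) errors: e[t] = 0.8 * e[t-1] + white noise
e = np.zeros(n)
for t in range(1, n):
    e[t] = 0.8 * e[t - 1] + rng.normal(scale=0.3)
y = model(x, 2.0, 0.3) + e

# OLS fit that simply ignores the dependence among the errors
theta, _ = curve_fit(model, x, y, p0=(1.0, 0.1))
res = y - model(x, *theta)

# Lag-1 autocorrelation of the residuals: clearly nonzero,
# flagging that the independence assumption is violated
rho_hat = np.corrcoef(res[:-1], res[1:])[0, 1]
print(rho_hat)
```

The OLS point estimates can still be reasonable here; what breaks down is the inference, since standard errors computed under independence are no longer valid.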
A two‐stage estimator (known as the classical two‐stage, or CTS estimator) has been developed to remedy ...
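The excerpt is cut off here, but the two-stage idea it introduces can be sketched: fit the model by OLS, estimate the autocorrelation of the errors from the residuals, then refit using that estimate. Below is a minimal Python sketch under an assumed AR(1) error structure, using Cochrane-Orcutt-style quasi-differencing in the second stage; the model, data, and all names are illustrative, and this is not the nlr implementation of the CTS estimator.

```python
import numpy as np
from scipy.optimize import curve_fit, least_squares

def f(x, a, b):
    # Illustrative exponential growth curve (assumed, not from the text)
    return a * np.exp(b * x)

# Simulated data with AR(1) errors, e[t] = 0.8 * e[t-1] + white noise
rng = np.random.default_rng(0)
n = 200
x = np.linspace(0.0, 5.0, n)
e = np.zeros(n)
for t in range(1, n):
    e[t] = 0.8 * e[t - 1] + rng.normal(scale=0.3)
y = f(x, 2.0, 0.3) + e

# Stage 1: ordinary least squares, ignoring the correlation,
# then estimate the AR(1) coefficient from the residuals
theta1, _ = curve_fit(f, x, y, p0=(1.0, 0.1))
res = y - f(x, *theta1)
rho = np.corrcoef(res[:-1], res[1:])[0, 1]

# Stage 2: refit on quasi-differenced data. For a nonlinear model the
# transformed relation is y[t] - rho*y[t-1] = f(x[t]) - rho*f(x[t-1]) + u[t],
# where u[t] is (approximately) uncorrelated white noise.
def transformed_residuals(theta):
    fx = f(x, *theta)
    return (y[1:] - rho * y[:-1]) - (fx[1:] - rho * fx[:-1])

theta2 = least_squares(transformed_residuals, theta1).x
print(theta2, rho)
```

Because both stages here use least squares, the sketch inherits OLS's sensitivity to outliers; the robust variants discussed in this chapter replace these stages with outlier-resistant estimators.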