# Chapter Fourteen: Nonlinear Regression

The multiple linear regression and analysis of variance models predict the dependent variable as a weighted sum of the independent variables; the weights in this sum are the regression coefficients. In the simplest case, the dependent variable changes in proportion to each of the independent variables, but many nonlinear relationships can be described by transforming one or more of the variables (for example, by taking a logarithm or a power of one or more of the variables), so long as the dependent variable remains a weighted sum of the independent variables. Such models are *linear in the regression parameters.* Because the models are linear in the regression parameters, the problem of finding the set of parameter estimates (coefficients) that minimizes the sum of squared residuals between the observed and predicted values of the dependent variable reduces to solving a set of simultaneous linear algebraic equations in the unknown regression coefficients. Each such problem has a unique solution, and the coefficients can be computed for any linear model from any set of data. Moreover, because the coefficients are themselves weighted sums (more precisely, linear combinations) of the observations, statistical theory shows that the sampling distributions of the regression coefficients follow the *t* or *z* distributions, so we can conduct hypothesis tests and compute confidence intervals for these coefficients and for the regression model as a whole. Despite the restriction that the model be linear in the regression parameters, these techniques are powerful tools for describing complex biological, clinical, economic, social, and physical systems and for gaining insight into the important variables that define those systems.
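To illustrate the point about linearity in the parameters, the following sketch (using hypothetical data and arbitrary true coefficient values chosen for the example) fits a model that is nonlinear in one independent variable, through a logarithmic transform, yet still linear in the parameters, so the least-squares estimates come from solving the normal equations directly:

```python
import numpy as np

# Hypothetical data: y depends linearly on x1 and on log(x2).
# The true coefficients (2.0, 1.5, -3.0) are arbitrary example values.
rng = np.random.default_rng(0)
x1 = rng.uniform(1.0, 10.0, 50)
x2 = rng.uniform(1.0, 10.0, 50)
y = 2.0 + 1.5 * x1 - 3.0 * np.log(x2) + rng.normal(0.0, 0.1, 50)

# Design matrix: although the model is nonlinear in x2 (via the log
# transform), it is still linear in the regression parameters.
X = np.column_stack([np.ones_like(x1), x1, np.log(x2)])

# The least-squares estimates solve the normal equations X'Xb = X'y,
# a set of simultaneous linear algebraic equations with a unique
# solution whenever X has full column rank.
b = np.linalg.solve(X.T @ X, X.T @ y)
print(b)  # estimates close to the true values 2.0, 1.5, -3.0
```

In practice one would use a routine such as `np.linalg.lstsq`, which is numerically more stable than forming the normal equations explicitly, but the calculation is the same in principle: a direct linear solve, with no iteration or starting guesses required.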

There are, however, many instances in which it is not desirable—or even possible—to describe a system of interest with a linear regression model. We have already encountered two such examples: logistic regression to predict qualitative dependent variables (Chapter 12) and Cox proportional hazards models to predict when events will occur (Chapter 13).

In addition, predictive theories are commonplace in the physical sciences and are beginning to appear in the life and health sciences. Rather than being based on a purely empirical description of the process under study, such theories are derived from fundamental assumptions about the underlying structure of a problem (often embodied in one or more differential equations) and manipulated to produce theoretical relationships between observable variables of interest. For example, in studies of the distribution of drugs within the body—pharmacokinetics—one often wishes to describe or predict the concentration of drugs over time after giving an injection or a pill. If one assumes that the drug is distributed in and exchanged between several compartments within the body, the resulting concentration of the drug in the blood will be described by a sum of exponential functions of time, in which the associated coefficients and exponents depend on the characteristics of the compartments and ...