Iteratively Reweighted Least Squares (IRLS)
- Iterative method that fits models by repeating weighted least squares updates.
- At each iteration the algorithm recomputes weights based on error magnitudes and updates model parameters accordingly.
- Often used to reduce the influence of outliers and, per the source, can converge faster and more reliably than ordinary least squares.
Definition
Iteratively reweighted least squares (IRLS) is an algorithm used to solve optimization problems that involve fitting a model to data by minimizing the sum of squared errors. It is an iterative method that repeats a series of steps until it arrives at a satisfactory solution.
Explanation
IRLS fits a model by performing repeated weighted least squares fits. The process typically follows these steps, as described in the source:
- Initialize model parameters (for a simple linear example, the line y = mx + b is used).
- Compute predicted values using the current parameters.
- Compute the errors as the differences between observed and predicted values.
- Compute a weight for each error based on its magnitude (the source's example weight is the reciprocal of the error; in practice this usually means the reciprocal of the absolute error, w = 1/|e|).
- Update the model parameters using weighted least squares with those weights.
- Repeat the cycle until the sum of squared errors is minimized or a convergence criterion is met.
The source emphasizes the key idea that errors are weighted differently at each iteration, though its descriptions are not fully consistent: some passages say larger errors are given greater influence, while others say outliers are given less weight. In standard robust uses of IRLS, larger errors receive smaller weights so that outliers have less influence; in either formulation, the method centers on changing the weights at each iteration to shape the parameter updates.
The simple linear model used in the explanation is y = mx + b, with an example initialization of m = 1 and b = 0. A minimal code sketch of the loop follows.
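Here is a minimal Python sketch under the source's setup: the line y = mx + b, initialization m = 1 and b = 0, and weights taken as the reciprocal of the error magnitude. The name irls_line, the fixed iteration count, and the eps floor (which guards against division by zero) are illustrative choices, not details from the source.

```python
import numpy as np

def irls_line(x, y, n_iter=50, eps=1e-8):
    """Fit y ~ m*x + b by iteratively reweighted least squares.

    Sketch only: weights w_i = 1 / max(|e_i|, eps) downweight large
    errors, which corresponds to minimizing the sum of absolute errors.
    """
    A = np.column_stack([x, np.ones_like(x)])  # design matrix [x, 1]
    m, b = 1.0, 0.0                            # initialization from the source
    for _ in range(n_iter):
        e = y - (m * x + b)                    # errors under current parameters
        w = 1.0 / np.maximum(np.abs(e), eps)   # reciprocal-of-error weights
        # Weighted least squares update: solve (A^T W A) p = A^T W y.
        W = np.diag(w)
        m, b = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
    return m, b
```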
Examples
Simple linear regression example
- Model: y = mx + b
- Initialize parameters to m = 1 and b = 0.
- Compute predicted y values, then errors (observed − predicted).
- Compute weights for each error (the source example uses the reciprocal of the error as a possible weight).
- Perform a weighted least squares update of m and b using those weights.
- Repeat until the sum of squared errors is minimized or a convergence criterion is met; the snippet below runs this walk-through end to end.
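A short run of the walk-through, assuming the irls_line sketch from the Explanation section; the data points are invented for illustration and lie near the line y = 2x + 1.

```python
import numpy as np

# Invented points near y = 2x + 1 (mild noise, no outliers).
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])

m, b = irls_line(x, y)              # sketch defined in the Explanation section
print(f"m = {m:.2f}, b = {b:.2f}")  # expect roughly m = 2, b = 1
```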
Example with outliers
- If the dataset contains outliers, ordinary least squares treats all errors equally and outliers can have a large influence on parameter estimates.
- Using IRLS, the algorithm weights errors differently across iterations; in the source example this results in outliers being assigned smaller weights and thus less influence, producing a more robust fitted model. The comparison sketch below illustrates the difference.
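A hedged comparison under the same assumptions: an ordinary least squares fit via numpy.polyfit next to the irls_line sketch, on invented data with one gross outlier.

```python
import numpy as np

# Points on y = 2x + 1, except the last one, which is a gross outlier.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.0, 3.0, 5.0, 7.0, 30.0])

# Ordinary least squares weights every error equally,
# so the outlier drags the fitted line toward itself.
m_ols, b_ols = np.polyfit(x, y, 1)

# IRLS assigns the outlier's large error a small weight.
m_irls, b_irls = irls_line(x, y)

print(f"OLS : m = {m_ols:.2f}, b = {b_ols:.2f}")    # pulled toward the outlier
print(f"IRLS: m = {m_irls:.2f}, b = {b_irls:.2f}")  # close to m = 2, b = 1
```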
Use cases
- Dealing with datasets that contain outliers, where reducing the influence of those outliers on fitted parameters is desirable.
Notes or pitfalls
- The source notes that the (ordinary) method of least squares can be computationally expensive and may not always converge, and presents IRLS as a more efficient and more reliable alternative in some settings.
- The specific choice of weight function (for example, the reciprocal of the error in the source example) affects behavior and robustness; the sketch below contrasts two common choices.
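To illustrate that pitfall, this sketch contrasts the source-style reciprocal weight with the Huber weight, a standard robust choice that is not from the source; k = 1.345 is the conventional tuning constant for Huber weights.

```python
import numpy as np

def weight_reciprocal(e, eps=1e-8):
    # Source-style weight: reciprocal of the error magnitude.
    # Near-zero errors get very large weights, so the fit chases
    # points it already matches almost exactly (an L1-like criterion).
    return 1.0 / np.maximum(np.abs(e), eps)

def weight_huber(e, k=1.345):
    # Huber weight, min(1, k/|e|): full weight for errors below the
    # threshold k, tapered influence only for large errors.
    return np.minimum(1.0, k / np.maximum(np.abs(e), 1e-8))
```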
Related terms
- Method of least squares
- Weighted least squares
- Outliers
- Robust regression