README.md (+6 -6)
```diff
@@ -61,7 +61,7 @@ julia>] add RobustModels#main;
 
 ## Usage
 
-The prefered way of performing robust regression is by calling the `rlm` function:
+The preferred way of performing robust regression is by calling the `rlm` function:
 
 `m = rlm(X, y, MEstimator{TukeyLoss}(); initial_scale=:mad)`
```
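For context, here is a minimal sketch of the call this hunk documents, fit on synthetic data with a few gross outliers. The data generation and the `coef` accessor (from the usual StatsAPI interface) are illustrative assumptions, not part of the diff:

```julia
# Illustrative sketch (not part of the diff): a robust fit on contaminated data.
using RobustModels, Random

Random.seed!(1)
X = hcat(ones(200), randn(200))           # design matrix with an intercept column
y = X * [2.0, 0.5] .+ 0.1 .* randn(200)   # clean linear response
y[1:10] .+= 20.0                          # inject a few gross outliers

# The call shown in the hunk: Tukey M-estimator, scale initialized by MAD.
m = rlm(X, y, MEstimator{TukeyLoss}(); initial_scale=:mad)
coef(m)  # should stay close to [2.0, 0.5] despite the outliers
```

With a bounded loss like Tukey's, the outlying points receive near-zero weight, so the estimates stay close to the true coefficients where ordinary least squares would be pulled away.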
```diff
@@ -156,7 +156,7 @@ Several loss functions are implemented:
 -`CauchyLoss`: `ρ(r) = log(1+(r/c)²)`, non-convex estimator, that also corresponds to a Student's-t distribution (with fixed degree of freedom). It suppresses outliers more strongly but it is not sure to converge.
 -`GemanLoss`: `ρ(r) = ½ (r/c)²/(1 + (r/c)²)`, non-convex and bounded estimator, it suppresses outliers more strongly.
 -`WelschLoss`: `ρ(r) = ½ (1 - exp(-(r/c)²))`, non-convex and bounded estimator, it suppresses outliers more strongly.
--`TukeyLoss`: `ρ(r) = if r<c; ⅙(1 - (1-(r/c)²)³) else ⅙ end`, non-convex and bounded estimator, it suppresses outliers more strongly and it is the prefered estimator for most cases.
+-`TukeyLoss`: `ρ(r) = if r<c; ⅙(1 - (1-(r/c)²)³) else ⅙ end`, non-convex and bounded estimator, it suppresses outliers more strongly and it is the preferred estimator for most cases.
 -`YohaiZamarLoss`: `ρ(r)` is quadratic for `r/c < 2/3` and is bounded to 1; non-convex estimator, it is optimized to have the lowest bias for a given efficiency.
 
 The value of the tuning constants `c` are optimized for each estimator so the M-estimators have a high efficiency of 0.95. However, these estimators have a low breakdown point.
```
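Since only the Tukey bullet changes here, a direct Julia transcription of a few of the quoted `ρ` formulas may help check the notation. These are reading aids written from the formulas above (with `|r|` in the Tukey branch condition, since `ρ` depends on the absolute residual), not the package's implementation:

```julia
# Transcriptions of ρ formulas quoted in the hunk; c is the tuning constant
# that the package fixes to reach 95% efficiency (see the last context line).
ρ_cauchy(r, c) = log(1 + (r / c)^2)
ρ_welsch(r, c) = (1 - exp(-(r / c)^2)) / 2
ρ_tukey(r, c)  = abs(r) < c ? (1 - (1 - (r / c)^2)^3) / 6 : 1 / 6

ρ_tukey(10.0, 4.685)  # ≈ 1/6: large residuals are capped, hence "bounded"
```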
```diff
@@ -184,9 +184,9 @@ the two loss functions should be the same but with different tuning constants.
 ### MQuantile-estimators
 
 Using an asymmetric variant of the `L1Estimator`, quantile regression is performed
-(although the `QuantileRegression` solver should be prefered because it gives an exact solution).
-Identically, with an M-estimator using an asymetric version of the loss function,
-a generalization of quantiles is obtained. For instance, using an asymetric`L2Loss` results in _Expectile Regression_.
+(although the `QuantileRegression` solver should be preferred because it gives an exact solution).
+Identically, with an M-estimator using an asymmetric version of the loss function,
+a generalization of quantiles is obtained. For instance, using an asymmetric`L2Loss` results in _Expectile Regression_.
 
 ### Robust Ridge regression
```
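To make the two alternatives concrete, here is a hedged sketch of both fits described above. The `quantreg` entry point and the `GeneralizedQuantileEstimator` type are assumptions about the package's exported API (neither appears in this diff), and `X`, `y` are as in the earlier sketch:

```julia
τ = 0.8

# Exact quantile regression, the preferred route per the text above.
# `quantreg` is an assumed front-end to the QuantileRegression solver.
m_quant = quantreg(X, y; quantile=τ)

# Asymmetric L2 loss yields expectile regression, a generalized quantile.
# `GeneralizedQuantileEstimator` is an assumed name for the MQuantile estimator.
m_expect = rlm(X, y, GeneralizedQuantileEstimator{L2Loss}(τ))
```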
```diff
@@ -215,7 +215,7 @@ This package derives from the [RobustLeastSquares](https://github.com/FugroRoame
 package for the initial implementation, especially for the Conjugate Gradient
 solver and the definition of the M-Estimator functions.
 
-Credits to the developpers of the [GLM](https://github.com/JuliaStats/GLM.jl)
+Credits to the developers of the [GLM](https://github.com/JuliaStats/GLM.jl)
 and [MixedModels](https://github.com/JuliaStats/MixedModels.jl) packages
 for implementing the Iteratively Reweighted Least Square algorithm.
```
docs/src/manual.md (+1 -1)
```diff
@@ -32,7 +32,7 @@ Supported loss functions are:
 -[`CauchyLoss`](@ref): `ρ(r) = log(1+(r/c)²)`, non-convex estimator, that also corresponds to a Student's-t distribution (with fixed degree of freedom). It suppresses outliers more strongly but it is not sure to converge.
 -[`GemanLoss`](@ref): `ρ(r) = ½ (r/c)²/(1 + (r/c)²)`, non-convex and bounded estimator, it suppresses outliers more strongly.
 -[`WelschLoss`](@ref): `ρ(r) = ½ (1 - exp(-(r/c)²))`, non-convex and bounded estimator, it suppresses outliers more strongly.
--[`TukeyLoss`](@ref): `ρ(r) = if r<c; ⅙(1 - (1-(r/c)²)³) else ⅙ end`, non-convex and bounded estimator, it suppresses outliers more strongly and it is the prefered estimator for most cases.
+-[`TukeyLoss`](@ref): `ρ(r) = if r<c; ⅙(1 - (1-(r/c)²)³) else ⅙ end`, non-convex and bounded estimator, it suppresses outliers more strongly and it is the preferred estimator for most cases.
 -[`YohaiZamarLoss`](@ref): `ρ(r)` is quadratic for `r/c < 2/3` and is bounded to 1; non-convex estimator, it is optimized to have the lowest bias for a given efficiency.
 
 An estimator is constructed from an estimator type and a loss, e.g. `MEstimator{TukeyLoss}()`.
```
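The final context line pins down the construction pattern; a short sketch of it follows, reusing `X` and `y` from the README example above:

```julia
using RobustModels

est_tukey  = MEstimator{TukeyLoss}()   # preferred default per the manual
est_cauchy = MEstimator{CauchyLoss}()  # non-convex; convergence not guaranteed

# Any such estimator plugs into the same fitting call shown in the README:
m1 = rlm(X, y, est_tukey;  initial_scale=:mad)
m2 = rlm(X, y, est_cauchy; initial_scale=:mad)
```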