
Commit 7bb9ead

committed: run typos

1 parent 6a247a0 commit 7bb9ead

File tree

4 files changed: +10 −10 lines changed

README.md

Lines changed: 6 additions & 6 deletions
```diff
@@ -61,7 +61,7 @@ julia>] add RobustModels#main;
 
 ## Usage
 
-The prefered way of performing robust regression is by calling the `rlm` function:
+The preferred way of performing robust regression is by calling the `rlm` function:
 
 `m = rlm(X, y, MEstimator{TukeyLoss}(); initial_scale=:mad)`
 
```
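For context on what the `rlm` call in the hunk above computes, an M-estimate with a Tukey loss can be mimicked by a plain IRLS loop. This is a hedged sketch in Python/NumPy, not the package's implementation: the tuning constant `c = 4.685` is the usual 95%-efficiency value for the Tukey bisquare, and the MAD-based residual scale mirrors the `initial_scale=:mad` option.

```python
import numpy as np

def tukey_weight(r, c=4.685):
    # Tukey bisquare weight w(r) = (1 - (r/c)^2)^2 for |r| < c, else 0.
    # c = 4.685 is the usual 95%-efficiency tuning constant.
    return np.where(np.abs(r) < c, (1.0 - (r / c) ** 2) ** 2, 0.0)

def irls_tukey(X, y, n_iter=50):
    # Start from OLS; rescale residuals by the normalized MAD each step,
    # mirroring the `initial_scale=:mad` idea from the README.
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(n_iter):
        res = y - X @ beta
        scale = np.median(np.abs(res - np.median(res))) / 0.6745
        scale = max(scale, 1e-12)
        sw = np.sqrt(tukey_weight(res / scale))
        # Weighted least squares: scale rows by sqrt(w).
        beta = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
    return beta
```

On data with gross outliers in `y`, this loop drives the outliers' weights to zero and recovers coefficients close to the clean fit, where OLS would be pulled away.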
```diff
@@ -156,7 +156,7 @@ Several loss functions are implemented:
 - `CauchyLoss`: `ρ(r) = log(1+(r/c)²)`, non-convex estimator, that also corresponds to a Student's-t distribution (with fixed degree of freedom). It suppresses outliers more strongly but it is not sure to converge.
 - `GemanLoss`: `ρ(r) = ½ (r/c)²/(1 + (r/c)²)`, non-convex and bounded estimator, it suppresses outliers more strongly.
 - `WelschLoss`: `ρ(r) = ½ (1 - exp(-(r/c)²))`, non-convex and bounded estimator, it suppresses outliers more strongly.
-- `TukeyLoss`: `ρ(r) = if r<c; ⅙(1 - (1-(r/c)²)³) else ⅙ end`, non-convex and bounded estimator, it suppresses outliers more strongly and it is the prefered estimator for most cases.
+- `TukeyLoss`: `ρ(r) = if r<c; ⅙(1 - (1-(r/c)²)³) else ⅙ end`, non-convex and bounded estimator, it suppresses outliers more strongly and it is the preferred estimator for most cases.
 - `YohaiZamarLoss`: `ρ(r)` is quadratic for `r/c < 2/3` and is bounded to 1; non-convex estimator, it is optimized to have the lowest bias for a given efficiency.
 
 The value of the tuning constants `c` are optimized for each estimator so the M-estimators have a high efficiency of 0.95. However, these estimators have a low breakdown point.
```
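The boundedness of the Tukey loss in the hunk above (the ⅙ plateau) is what lets it reject gross outliers entirely: a single extreme residual contributes at most ⅙, whereas an L2 loss grows without bound. A small Python sketch mirroring the README's normalized formula (hypothetical, with `|r|` in place of `r` in the condition):

```python
import numpy as np

def tukey_rho(r, c=4.685):
    # Normalized Tukey loss: rises from 0 and saturates at 1/6 for |r| >= c.
    r = np.asarray(r, dtype=float)
    inside = (1.0 - (1.0 - (r / c) ** 2) ** 3) / 6.0
    return np.where(np.abs(r) < c, inside, 1.0 / 6.0)

# However extreme the residual, its contribution is capped at 1/6,
# so one gross outlier cannot dominate the objective.
capped = tukey_rho([0.0, 4.685, 100.0])
```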
```diff
@@ -184,9 +184,9 @@ the two loss functions should be the same but with different tuning constants.
 ### MQuantile-estimators
 
 Using an asymmetric variant of the `L1Estimator`, quantile regression is performed
-(although the `QuantileRegression` solver should be prefered because it gives an exact solution).
-Identically, with an M-estimator using an asymetric version of the loss function,
-a generalization of quantiles is obtained. For instance, using an asymetric `L2Loss` results in _Expectile Regression_.
+(although the `QuantileRegression` solver should be preferred because it gives an exact solution).
+Identically, with an M-estimator using an asymmetric version of the loss function,
+a generalization of quantiles is obtained. For instance, using an asymmetric `L2Loss` results in _Expectile Regression_.
 
 ### Robust Ridge regression
 
```
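To make the expectile idea in the hunk above concrete: an asymmetric `L2Loss` weights positive and negative residuals by `τ` and `1 − τ`. A minimal location-only sketch (hypothetical Python, not the package's solver) computes a τ-expectile as the fixed point of that asymmetric least-squares problem:

```python
import numpy as np

def expectile(y, tau, n_iter=200):
    # Fixed-point iteration for the tau-expectile: minimizes
    # sum(|tau - 1[r < 0]| * r^2) over a single location parameter m.
    y = np.asarray(y, dtype=float)
    m = y.mean()
    for _ in range(n_iter):
        w = np.where(y >= m, tau, 1.0 - tau)  # asymmetric L2 weights
        m = np.sum(w * y) / np.sum(w)
    return m
```

For `tau = 0.5` this reduces to the ordinary mean; larger `tau` moves the estimate upward, in analogy with quantiles under an asymmetric L1 loss.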
```diff
@@ -215,7 +215,7 @@ This package derives from the [RobustLeastSquares](https://github.com/FugroRoame
 package for the initial implementation, especially for the Conjugate Gradient
 solver and the definition of the M-Estimator functions.
 
-Credits to the developpers of the [GLM](https://github.com/JuliaStats/GLM.jl)
+Credits to the developers of the [GLM](https://github.com/JuliaStats/GLM.jl)
 and [MixedModels](https://github.com/JuliaStats/MixedModels.jl) packages
 for implementing the Iteratively Reweighted Least Square algorithm.
 
```
docs/src/manual.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -32,7 +32,7 @@ Supported loss functions are:
 - [`CauchyLoss`](@ref): `ρ(r) = log(1+(r/c)²)`, non-convex estimator, that also corresponds to a Student's-t distribution (with fixed degree of freedom). It suppresses outliers more strongly but it is not sure to converge.
 - [`GemanLoss`](@ref): `ρ(r) = ½ (r/c)²/(1 + (r/c)²)`, non-convex and bounded estimator, it suppresses outliers more strongly.
 - [`WelschLoss`](@ref): `ρ(r) = ½ (1 - exp(-(r/c)²))`, non-convex and bounded estimator, it suppresses outliers more strongly.
-- [`TukeyLoss`](@ref): `ρ(r) = if r<c; ⅙(1 - (1-(r/c)²)³) else ⅙ end`, non-convex and bounded estimator, it suppresses outliers more strongly and it is the prefered estimator for most cases.
+- [`TukeyLoss`](@ref): `ρ(r) = if r<c; ⅙(1 - (1-(r/c)²)³) else ⅙ end`, non-convex and bounded estimator, it suppresses outliers more strongly and it is the preferred estimator for most cases.
 - [`YohaiZamarLoss`](@ref): `ρ(r)` is quadratic for `r/c < 2/3` and is bounded to 1; non-convex estimator, it is optimized to have the lowest bias for a given efficiency.
 
 An estimator is constructed from an estimator type and a loss, e.g. `MEstimator{TukeyLoss}()`.
```

src/estimators.jl

Lines changed: 1 addition & 1 deletion
```diff
@@ -224,7 +224,7 @@ function scale_estimate(est::E, res; kwargs...) where {E<:MEstimator}
     return scale_estimate(est.loss, res; kwargs...)
 end
 
-"`L1Estimator` is a shorthand name for `MEstimator{L1Loss}`. Using exact QuantileRegression should be prefered."
+"`L1Estimator` is a shorthand name for `MEstimator{L1Loss}`. Using exact QuantileRegression should be preferred."
 const L1Estimator = MEstimator{L1Loss}
 
 "`L2Estimator` is a shorthand name for `MEstimator{L2Loss}`, the non-robust OLS."
```

src/losses.jl

Lines changed: 2 additions & 2 deletions
```diff
@@ -439,7 +439,7 @@ estimator_high_efficiency_constant(::Type{CauchyLoss}) = 2.385
 estimator_high_breakdown_point_constant(::Type{CauchyLoss}) = 1.468
 
 """
-The non-convex Geman-McClure for strong supression of outliers and does not guarantee a unique solution.
+The non-convex Geman-McClure for strong suppression of outliers and does not guarantee a unique solution.
 For the S-Estimator, it is equivalent to the Cauchy loss.
 ψ(r) = r / (1 + r^2)^2
 """
@@ -469,7 +469,7 @@ estimator_high_breakdown_point_constant(::Type{GemanLoss}) = 0.61200
 
 
 """
-The non-convex Welsch for strong supression of outliers and does not guarantee a unique solution
+The non-convex Welsch for strong suppression of outliers and does not guarantee a unique solution
 ψ(r) = r * exp(-r^2)
 """
 struct WelschLoss <: BoundedLossFunction
```
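The docstrings in the hunks above give ψ, the derivative of ρ, in normalized units (tuning constant absorbed). As a quick sanity check, the README's ρ formulas for the Geman-McClure and Welsch losses do differentiate to the stated ψ functions; a hedged numerical sketch using central finite differences:

```python
import math

def rho_welsch(r):
    # Normalized Welsch loss: 1/2 (1 - exp(-r^2))
    return 0.5 * (1.0 - math.exp(-(r ** 2)))

def psi_welsch(r):
    # Derivative stated in the docstring: psi(r) = r * exp(-r^2)
    return r * math.exp(-(r ** 2))

def rho_geman(r):
    # Normalized Geman-McClure loss: 1/2 r^2 / (1 + r^2)
    return 0.5 * r ** 2 / (1.0 + r ** 2)

def psi_geman(r):
    # Derivative stated in the docstring: psi(r) = r / (1 + r^2)^2
    return r / (1.0 + r ** 2) ** 2

# Central finite differences confirm psi = drho/dr for both losses.
h = 1e-6
for r in (0.0, 0.5, 1.0, 2.5):
    for rho, psi in ((rho_welsch, psi_welsch), (rho_geman, psi_geman)):
        num = (rho(r + h) - rho(r - h)) / (2 * h)
        assert abs(num - psi(r)) < 1e-8
```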

0 commit comments
