2 changes: 2 additions & 0 deletions Project.toml
@@ -8,6 +8,7 @@ DataFrames = "a93c6f00-e57d-5684-b7b6-d8193f3e46c0"
DelimitedFiles = "8bb1440f-4735-579b-a4ab-409b98df4dab"
Distributions = "31c24e10-a181-5473-b8eb-7969acd0382f"
FiniteDiff = "6a86dc24-6348-571c-b903-95158fe2bd41"
InteractiveUtils = "b77e0a4c-d291-57a0-90e8-8db25a27a240"
LazyArtifacts = "4af54fe1-eca0-43a8-85a7-787d91b784e3"
LineSearches = "d3d80556-e9d4-5f37-9878-2ab0fcc64255"
LinearAlgebra = "37e2e46d-f89d-539d-b4ee-838fcccc9c8e"
@@ -30,6 +31,7 @@ StenoGraphs = "0.2 - 0.3, 0.4.1 - 0.5"
DataFrames = "1"
Distributions = "0.25"
FiniteDiff = "2"
InteractiveUtils = "1.11.0"
LineSearches = "7"
NLSolversBase = "7"
NLopt = "0.6, 1"
39 changes: 14 additions & 25 deletions docs/src/tutorials/backends/nlopt.md
@@ -1,47 +1,36 @@
# Using NLopt.jl

[`SemOptimizerNLopt`](@ref) implements the connection to `NLopt.jl`.
It is only available if the `NLopt` package is loaded alongside `StructuralEquationModels.jl` in the running Julia session.
It takes a bunch of arguments:
When [`NLopt.jl`](https://github.com/jump-dev/NLopt.jl) is loaded in the running Julia session,
it can be used by the [`SemOptimizer`](@ref) by specifying `engine = :NLopt`
(see [NLopt-specific options](@ref `SemOptimizer(Val(:NLopt))`)).
Among other things, `NLopt` enables constrained optimization of SEMs, which is
explained in the [Constrained optimization](@ref) section.
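
For example, selecting the engine at construction time could look like this (a minimal sketch that only builds the optimizer with its default settings):

```julia
using StructuralEquationModels, NLopt

# a minimal sketch: an NLopt-backed optimizer with default settings
nlopt_optimizer = SemOptimizer(engine = :NLopt)
```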


```julia
• algorithm: optimization algorithm

• options::Dict{Symbol, Any}: options for the optimization algorithm

• local_algorithm: local optimization algorithm

• local_options::Dict{Symbol, Any}: options for the local optimization algorithm

• equality_constraints::Vector{NLoptConstraint}: vector of equality constraints

• inequality_constraints::Vector{NLoptConstraint}: vector of inequality constraints
```
Constraints are explained in the section on [Constrained optimization](@ref).

The defaults are LBFGS as the optimization algorithm and the standard options from `NLopt.jl`.
We can choose something different:
We can override the default *NLopt* algorithm (LBFGS) and instead use
the *augmented Lagrangian* method with LBFGS as the *local* optimization algorithm,
stop at a maximum of 200 evaluations, and use a relative tolerance of
the objective value of `1e-6` as the stopping criterion for the local algorithm:

```julia
using NLopt

my_optimizer = SemOptimizerNLopt(;
my_optimizer = SemOptimizer(;
engine = :NLopt,
algorithm = :AUGLAG,
options = Dict(:maxeval => 200),
local_algorithm = :LD_LBFGS,
local_options = Dict(:ftol_rel => 1e-6)
)
```

This uses an augmented lagrangian method with LBFGS as the local optimization algorithm, stops at a maximum of 200 evaluations and uses a relative tolerance of the objective value of `1e-6` as the stopping criterion for the local algorithm.

To see how to use the optimizer to actually fit a model now, check out the [Model fitting](@ref) section.
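
A minimal sketch of that step, assuming a `model` has already been constructed as in the other tutorials:

```julia
# pass the configured optimizer together with the model to `fit`
model_fit = fit(my_optimizer, model)
```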

In the NLopt docs, you can find explanations about the different [algorithms](https://nlopt.readthedocs.io/en/latest/NLopt_Algorithms/) and a [tutorial](https://nlopt.readthedocs.io/en/latest/NLopt_Introduction/) that also explains the different options.
In the *NLopt* docs, you can find details about the [optimization algorithms](https://nlopt.readthedocs.io/en/latest/NLopt_Algorithms/),
and the [tutorial](https://nlopt.readthedocs.io/en/latest/NLopt_Introduction/) that demonstrates how to tweak their behavior.

To choose an algorithm, just pass its name without the 'NLOPT\_' prefix (for example, 'NLOPT\_LD\_SLSQP' can be used by passing `algorithm = :LD_SLSQP`).
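
For example (a short sketch reusing the naming convention just described):

```julia
# NLOPT_LD_SLSQP is selected by passing :LD_SLSQP
slsqp_optimizer = SemOptimizer(
    engine = :NLopt,
    algorithm = :LD_SLSQP,
)
```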

The README of the [julia package](https://github.com/JuliaOpt/NLopt.jl) may also be helpful, and provides a list of options:
The README of [*NLopt.jl*](https://github.com/JuliaOpt/NLopt.jl) may also be helpful, and provides a list of options:
- `algorithm`
- `stopval`
20 changes: 10 additions & 10 deletions docs/src/tutorials/backends/optim.md
@@ -1,23 +1,23 @@
# Using Optim.jl

[`SemOptimizerOptim`](@ref) implements the connection to `Optim.jl`.
It takes two arguments, `algorithm` and `options`.
The defaults are LBFGS as the optimization algorithm and the standard options from `Optim.jl`.
We can load the `Optim` and `LineSearches` packages to choose something different:
[Optim.jl](https://github.com/JuliaNLSolvers/Optim.jl) is the default optimization engine of *SEM.jl*;
see [`SemOptimizer(Val(:Optim))`](@ref) for a full list of its parameters.
It defaults to the LBFGS algorithm, but we can load the `Optim` and `LineSearches` packages
and specify BFGS (note: not L-BFGS) with a back-tracking linesearch and a Hager-Zhang initial step length guess:

```julia
using Optim, LineSearches

my_optimizer = SemOptimizerOptim(
my_optimizer = SemOptimizer(
algorithm = BFGS(
linesearch = BackTracking(order=3),
linesearch = BackTracking(order=3),
alphaguess = InitialHagerZhang()
),
options = Optim.Options(show_trace = true)
)
),
options = Optim.Options(show_trace = true)
)
```

This optimizer will use BFGS (!not L-BFGS) with a back tracking linesearch and a certain initial step length guess. Also, the trace of the optimization will be printed to the console.
Note that we used `options` to print the optimization progress to the console.
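
`Optim.Options` also accepts further stopping and tracing controls; a sketch with illustrative (not recommended) values:

```julia
my_optimizer = SemOptimizer(
    algorithm = BFGS(
        linesearch = BackTracking(order = 3),
        alphaguess = InitialHagerZhang(),
    ),
    options = Optim.Options(
        show_trace = true,   # print progress to the console
        iterations = 1_000,  # cap the number of iterations
        g_tol      = 1e-8,   # gradient-norm convergence tolerance
    ),
)
```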

To see how to use the optimizer to actually fit a model now, check out the [Model fitting](@ref) section.

62 changes: 31 additions & 31 deletions docs/src/tutorials/constraints/constraints.md
@@ -1,10 +1,15 @@
# Constrained optimization

## Using the NLopt backend
*SEM.jl* supports fitting models with additional constraints imposed on the parameters.

## Using the NLopt engine

*NLopt.jl* is one of the *SEM.jl* optimization engines that support constrained optimization.
In the example below, we show how to specify constraints for a SEM model when using *NLopt*.

### Define an example model

Let's revisit our model from [A first model](@ref):
Let's revisit our model from [A first model](@ref) and fit it first without constraints:

```@example constraints
using StructuralEquationModels
@@ -57,39 +62,40 @@ details(partable)

### Define the constraints

Let's introduce some constraints:
Let's introduce some constraints (they are not based on any real properties of the underlying study and serve only as an example):
1. **Equality constraint**: The covariances `y3 ↔ y7` and `y8 ↔ y4` should sum up to `1`.
2. **Inequality constraint**: The difference between the loadings `dem60 → y2` and `dem60 → y3` should be smaller than `0.1`
3. **Bound constraint**: The directed effect from `ind60 → dem65` should be smaller than `0.5`

(Of course those constaints only serve an illustratory purpose.)

We first need to get the indices of the respective parameters that are invoved in the constraints.
We can look up their labels in the output above, and retrieve their indices as
Since *NLopt* does not have access to the SEM parameter names, its constraints are defined on the vector of all SEM parameters.
We have to look up the indices of the parameters involved in the constraints to construct the respective functions.

```@example constraints
parind = param_indices(model)
parind[:y3y7] # 29
```

The bound constraint is easy to specify: Just give a vector of upper or lower bounds that contains the bound for each parameter. In our example, only the parameter labeled `:λₗ` has an upper bound, and the number of total parameters is `n_par(model) = 31`, so we define
The bound constraint is easy to specify: just give a vector of upper or lower bounds with one entry per parameter.
In our example, only the parameter labeled `:λₗ` has an upper bound, and the total number of parameters is `n_par(model) = 31`, so

```@example constraints
upper_bounds = fill(Inf, 31)
upper_bounds[parind[:λₗ]] = 0.5
```

The equailty and inequality constraints have to be reformulated to be of the form `x = 0` or `x ≤ 0`:
1. `y3 ↔ y7 + y8 ↔ y4 - 1 = 0`
2. `dem60 → y2 - dem60 → y3 - 0.1 ≤ 0`
The equality and inequality constraints have to be reformulated in the `f(θ) = 0` or `g(θ) ≤ 0` form,
where `θ` is the vector of SEM parameters:
1. `f(θ) = 0`, where `f(θ) = y3 ↔ y7 + y8 ↔ y4 - 1`
2. `g(θ) ≤ 0`, where `g(θ) = dem60 → y2 - dem60 → y3 - 0.1`

Now they can be defined as functions of the parameter vector:
If the optimization algorithm needs gradients, it will pass a `gradient` vector of the same length as the parameter vector,
and the constraint function has to calculate the gradient in-place.

```@example constraints
parind[:y3y7] # 29
parind[:y8y4] # 30
# θ[29] + θ[30] - 1 = 0.0
function eq_constraint(θ, gradient)
function f(θ, gradient)
if length(gradient) > 0
gradient .= 0.0
gradient[29] = 1.0
@@ -101,7 +107,7 @@ end
parind[:λ₂] # 3
parind[:λ₃] # 4
# θ[3] - θ[4] - 0.1 ≤ 0
function ineq_constraint(θ, gradient)
function g(θ, gradient)
if length(gradient) > 0
gradient .= 0.0
gradient[3] = 1.0
@@ -111,49 +117,43 @@ function ineq_constraint(θ, gradient)
end
```

If the algorithm needs gradients at an iteration, it will pass the vector `gradient` that is of the same size as the parameters.
With `if length(gradient) > 0` we check if the algorithm needs gradients, and if it does, we fill the `gradient` vector with the gradients
of the constraint w.r.t. the parameters.

In NLopt, vector-valued constraints are also possible, but we refer to the documentation for that.
In *NLopt*, vector-valued constraints are also possible, but we refer to the documentation for that.
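
Purely for illustration, a vector-valued inequality constraint in NLopt's calling convention might look like the sketch below: it requires `θ[3] ≤ 1` and `θ[4] ≤ 1` (an arbitrary pair of bounds, not part of the example above), and `grad[j, i]` holds the partial derivative of constraint `i` with respect to parameter `j`. How such a constraint is registered with the optimizer is not shown here; see the NLopt documentation.

```julia
# sketch: a two-component inequality constraint c(θ) .≤ 0
function vec_ineq_constraint(result, θ, grad)
    if length(grad) > 0
        grad .= 0.0
        grad[3, 1] = 1.0   # ∂c₁/∂θ₃
        grad[4, 2] = 1.0   # ∂c₂/∂θ₄
    end
    result[1] = θ[3] - 1.0
    result[2] = θ[4] - 1.0
    return nothing
end
```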

### Fit the model

We now have everything together to specify and fit our model. First, we specify our optimizer backend as
Now we can construct the `SemOptimizer` that will use the *NLopt* engine for constrained optimization.

```@example constraints
using NLopt

constrained_optimizer = SemOptimizerNLopt(
constrained_optimizer = SemOptimizer(
engine = :NLopt,
algorithm = :AUGLAG,
options = Dict(:upper_bounds => upper_bounds, :xtol_abs => 1e-4),
local_algorithm = :LD_LBFGS,
equality_constraints = NLoptConstraint(;f = eq_constraint, tol = 1e-8),
inequality_constraints = NLoptConstraint(;f = ineq_constraint, tol = 1e-8),
equality_constraints = (f => 1e-8),
inequality_constraints = (g => 1e-8),
)
```

As you see, the equality constraints and inequality constraints are passed as keyword arguments, and the bounds are passed as options for the (outer) optimization algorithm.
As you see, the equality and inequality constraints are passed as keyword arguments, and the bounds are passed as options for the (outer) optimization algorithm.
Additionally, for equality and inequality constraints, a feasibility tolerance can be specified that controls whether a solution is accepted even if it violates the constraints by a small amount.
Especially for equality constraints, it is recommended to allow for a small positive tolerance.
In this example, we set both tolerances to `1e-8`.

!!! warning "Convergence criteria"
We have often observed that the default convergence criteria in NLopt lead to non-convergence flags.
Indeed, this example does not converge with the default criteria.
As you see above, we used a realively liberal absolute tolerance in the optimization parameters of 1e-4.
As you see above, we used a relatively liberal absolute tolerance in the optimization parameters of 1e-4.
This should not be a problem in most cases, as the sampling variance in (almost all) structural equation models
should lead to uncertainty in the parameter estimates that is orders of magnitude larger.
We nonetheless recommend choosing a convergence criterion with care (i.e. w.r.t. the scale of your parameters),
inspecting the solutions for plausibility, and comparing them to unconstrained solutions.
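
A sketch of passing different stopping criteria through `options` — the names follow NLopt's conventions (`xtol_rel`, `ftol_rel`, `maxeval`) and the values are illustrative only:

```julia
my_nlopt_optimizer = SemOptimizer(
    engine = :NLopt,
    algorithm = :AUGLAG,
    local_algorithm = :LD_LBFGS,
    options = Dict(
        :xtol_rel => 1e-6,   # relative tolerance on the parameters
        :ftol_rel => 1e-10,  # relative tolerance on the objective
        :maxeval  => 1000,   # cap on objective evaluations
    ),
)
```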

```@example constraints
model_constrained = Sem(
specification = partable,
data = data
)
We now have everything to fit our model under constraints:

model_fit_constrained = fit(constrained_optimizer, model_constrained)
```@example constraints
model_fit_constrained = fit(constrained_optimizer, model)
```

As you can see, the optimizer converged (`:XTOL_REACHED`) and investigating the solution yields
67 changes: 29 additions & 38 deletions docs/src/tutorials/regularization/regularization.md
@@ -5,7 +5,9 @@
For ridge regularization, you can simply use `SemRidge` as an additional loss function
(for example, a model with the loss functions `SemML` and `SemRidge` corresponds to ridge-regularized maximum likelihood estimation).

For lasso, elastic net and (far) beyond, you can load the `ProximalAlgorithms.jl` and `ProximalOperators.jl` packages alongside `StructuralEquationModels`:
You can define lasso, elastic net and other forms of regularization using [`ProximalOperators.jl`](https://github.com/JuliaFirstOrder/ProximalOperators.jl)
and optimize the SEM with so-called *proximal optimization* algorithms from [`ProximalAlgorithms.jl`](https://github.com/JuliaFirstOrder/ProximalAlgorithms.jl).

```@setup reg
using StructuralEquationModels, ProximalAlgorithms, ProximalOperators
@@ -19,24 +21,22 @@ Pkg.add("ProximalOperators")
using StructuralEquationModels, ProximalAlgorithms, ProximalOperators
```

## `SemOptimizerProximal`
## Proximal optimization

To estimate regularized models, we provide a "building block" for the optimizer part, called `SemOptimizerProximal`.
It connects our package to the [`ProximalAlgorithms.jl`](https://github.com/JuliaFirstOrder/ProximalAlgorithms.jl) optimization backend, providing so-called proximal optimization algorithms.
Those can handle, amongst other things, various forms of regularization.

It can be used as
With the *ProximalAlgorithms* package loaded, it is now possible to use the `:Proximal` optimization engine
in `SemOptimizer` for estimating regularized models.
```julia
SemOptimizerProximal(
SemOptimizer(;
engine = :Proximal,
algorithm = ProximalAlgorithms.PANOC(),
operator_g,
operator_h = nothing
)
```

The proximal operator (aka the regularization function) can be passed as `operator_g`.
The available Algorithms are listed [here](https://juliafirstorder.github.io/ProximalAlgorithms.jl/stable/guide/implemented_algorithms/).
The *proximal operator* (aka the *regularization function*) is passed as `operator_g`; see the [available operators](https://juliafirstorder.github.io/ProximalOperators.jl/stable/functions/).
The `algorithm` can be any of the [available algorithms](https://juliafirstorder.github.io/ProximalAlgorithms.jl/stable/guide/implemented_algorithms/).
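
The algorithm object can typically be configured when it is constructed; the keyword names below (`maxit`, `tol`) are assumptions about the `ProximalAlgorithms.PANOC` constructor, so verify them against the *ProximalAlgorithms.jl* documentation. The resulting object can then be passed via `algorithm = custom_algorithm`.

```julia
# assumed keywords: iteration cap and convergence tolerance
custom_algorithm = ProximalAlgorithms.PANOC(maxit = 500, tol = 1e-6)
```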

## First example - lasso

@@ -84,7 +84,7 @@ model = Sem(
We labeled the covariances between the items because we want to regularize those:

```@example reg
ind = getindex.(
cov_inds = getindex.(
Ref(param_indices(model)),
[:cov_15, :cov_24, :cov_26, :cov_37, :cov_48, :cov_68])
```
@@ -96,30 +96,24 @@ The lasso penalty is defined as
\sum \lambda_i \lvert \theta_i \rvert
```

From the previously linked [documentation](https://juliafirstorder.github.io/ProximalOperators.jl/stable/functions/#ProximalOperators.NormL1), we find that lasso regularization is named `NormL1` in the `ProximalOperators` package, and that we can pass an array of hyperparameters (`λ`) to control the amount of regularization for each parameter. To regularize only the observed item covariances, we define `λ` as
In `ProximalOperators.jl`, lasso regularization is represented by the [`NormL1`](https://juliafirstorder.github.io/ProximalOperators.jl/stable/functions/#ProximalOperators.NormL1) operator. It allows controlling the amount of
regularization individually for each SEM model parameter via a vector of hyperparameters (`λ`).
To regularize only the observed item covariances, we define `λ` as

```@example reg
λ = zeros(31); λ[ind] .= 0.02
```

and use `SemOptimizerProximal`.
λ = zeros(31); λ[cov_inds] .= 0.02

```@example reg
optimizer_lasso = SemOptimizerProximal(
optimizer_lasso = SemOptimizer(
engine = :Proximal,
operator_g = NormL1(λ)
)

model_lasso = Sem(
specification = partable,
data = data
)
```

Let's fit the regularized model

```@example reg

fit_lasso = fit(optimizer_lasso, model_lasso)
fit_lasso = fit(optimizer_lasso, model)
```

and compare the solution to the unregularized estimates:
@@ -134,34 +128,31 @@ update_partable!(partable, :estimate_lasso, fit_lasso, solution(fit_lasso))
details(partable)
```

Instead of explicitely defining a `SemOptimizerProximal` object, you can also pass `engine = :Proximal` and additional keyword arguments to `fit`:
Instead of explicitly defining a `SemOptimizer` object, you can also pass `engine = :Proximal`
and additional keyword arguments directly to the `fit` function:

```@example reg
sem_fit = fit(model; engine = :Proximal, operator_g = NormL1(λ))
fit_lasso2 = fit(model; engine = :Proximal, operator_g = NormL1(λ))
```
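
The same keyword form works with other operators, e.g. an elastic-net penalty. The sketch below assumes `ProximalOperators.jl` exports an `ElasticNet(μ, λ)` operator (roughly `μ‖θ‖₁ + (λ/2)‖θ‖²`) — check its documentation for the exact parameterization; the hyperparameter values are purely illustrative and penalize *all* parameters:

```julia
fit_enet = fit(model; engine = :Proximal, operator_g = ElasticNet(0.02, 0.01))
```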

## Second example - mixed l1 and l0 regularization

You can choose to penalize different parameters with different types of regularization functions.
Let's use the lasso again on the covariances, but additionally penalyze the error variances of the observed items via l0 regularization.
Let's use the *lasso* (*l1*) again on the covariances, but additionally penalize the error variances of the observed items via *l0* regularization.

The l0 penalty is defined as
The *l0* penalty is defined as
```math
\lambda \mathrm{nnz}(\theta)
l_0 = \lambda \mathrm{nnz}(\theta)
```

To define a sup of separable proximal operators (i.e. no parameter is penalized twice),
we can use [`SlicedSeparableSum`](https://juliafirstorder.github.io/ProximalOperators.jl/stable/calculus/#ProximalOperators.SlicedSeparableSum) from the `ProximalOperators` package:
Since we apply *l1* and *l0* to disjoint sets of parameters, this regularization can be represented as
a sum of *separable proximal operators* (i.e. no parameter is penalized twice),
implemented by the [`SlicedSeparableSum`](https://juliafirstorder.github.io/ProximalOperators.jl/stable/calculus/#ProximalOperators.SlicedSeparableSum) operator:

```@example reg
prox_operator = SlicedSeparableSum((NormL0(20.0), NormL1(0.02), NormL0(0.0)), ([ind], [9:11], [vcat(1:8, 12:25)]))

model_mixed = Sem(
specification = partable,
data = data,
)
l0_and_l1_reg = SlicedSeparableSum((NormL0(20.0), NormL1(0.02), NormL0(0.0)), ([cov_inds], [9:11], [vcat(1:8, 12:25)]))

fit_mixed = fit(model_mixed; engine = :Proximal, operator_g = prox_operator)
fit_mixed = fit(model; engine = :Proximal, operator_g = l0_and_l1_reg)
```

Let's again compare the different results: