Merged
Changes from all commits (32 commits):
e1b4bc3 minus2ll(): cleanup method signatures (alyst, Feb 2, 2025)
1071533 fix chi2 (alyst, Jun 13, 2024)
1dd08dd fix RMSEA (alyst, Mar 22, 2025)
6cb3753 p_values(): use ccdf() (May 28, 2024)
bb3df99 RAM: don't need to copy (I-A) (Mar 23, 2024)
a5a553e EM: move code refs to docstring (Apr 10, 2024)
affd75f fix batch_sym_inv_updates() ws (alyst, Dec 22, 2024)
1e60b16 RAMSymbolic: rename _func to _eval! (alyst, Dec 24, 2024)
3ef44c5 md ws fixes (alyst, Mar 22, 2025)
ef5a22e test/Proximal: move usings to the central file (alyst, Dec 24, 2024)
d712b40 tests: move usings in the top file (alyst, Dec 24, 2024)
b1af16f params_array: fix formatting (Jan 21, 2026)
ec6e12f simulation: fix formatting (Jan 21, 2026)
d1f534a checks: fix formatting (Jan 21, 2026)
970d89e bootstrap: fix formatting (Jan 21, 2026)
12a43c3 types: fix formatting (Jan 21, 2026)
9f1d958 StatsAPI: fix formatting (Jan 21, 2026)
e145b58 documentation.jl: fix formatting (Jan 21, 2026)
dd13ccb EnsParTable: fix formatting (Jan 21, 2026)
6500be7 ParTable: fix formatting (Jan 21, 2026)
db8d7b9 generic: fix formatting (Jan 21, 2026)
88a1a13 missing: fix formatting (Jan 21, 2026)
6127b37 abstract: fix formatting (Jan 21, 2026)
4b450b0 optim: fix ws (Jan 21, 2026)
5fd64c4 test/model: fix ws (Jan 21, 2026)
f1aaf4c test/proximal/l0: fix ws (Jan 21, 2026)
1195157 test/multigroup: fix ws (Jan 21, 2026)
56a82a5 test/StatsAPI: fix ws (Jan 21, 2026)
535b797 tests/unit_test_iactive: fix ws (Jan 21, 2026)
ae6fb71 tests/unit_tests: fix formatting (Jan 21, 2026)
488a58f fix formattng (Jan 22, 2026)
0cb546e NLopt: fixup formatting (Jan 27, 2026)
4 changes: 2 additions & 2 deletions docs/src/tutorials/concept.md
@@ -50,8 +50,8 @@ Available loss functions are
- [`SemRidge`](@ref): ridge regularization

## The optimizer part aka `SemOptimizer`
The optimizer part of a model connects to the numerical optimization backend used to fit the model.
It can be used to control options like the optimization algorithm, linesearch, stopping criteria, etc.
There are currently three available backends: [`SemOptimizerOptim`](@ref) connecting to the [Optim.jl](https://github.com/JuliaNLSolvers/Optim.jl) backend, [`SemOptimizerNLopt`](@ref) connecting to the [NLopt.jl](https://github.com/JuliaOpt/NLopt.jl) backend, and [`SemOptimizerProximal`](@ref) connecting to [ProximalAlgorithms.jl](https://github.com/JuliaFirstOrder/ProximalAlgorithms.jl).
For more information about the available options see also the tutorials about [Using Optim.jl](@ref) and [Using NLopt.jl](@ref), as well as [Constrained optimization](@ref) and [Regularization](@ref).
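
A hedged sketch of what such a configuration might look like (the algorithm and option names follow Optim.jl conventions and are assumptions here, not taken from this diff):

```julia
using Optim # assumed to be loaded alongside StructuralEquationModels

# Configure the Optim.jl backend: L-BFGS with custom stopping criteria.
my_optimizer = SemOptimizerOptim(
    algorithm = LBFGS(),
    options = Optim.Options(; f_reltol = 1e-10, x_abstol = 1.5e-8),
)
```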

18 changes: 9 additions & 9 deletions docs/src/tutorials/constraints/constraints.md
@@ -38,7 +38,7 @@ end

partable = ParameterTable(
graph,
latent_vars = latent_vars,
observed_vars = observed_vars)

data = example_data("political_democracy")
@@ -64,7 +64,7 @@ Let's introduce some constraints:

(Of course, those constraints only serve an illustrative purpose.)

We first need to get the indices of the respective parameters that are involved in the constraints.
We can look up their labels in the output above, and retrieve their indices as

```@example constraints
@@ -112,7 +112,7 @@ end
```

If the algorithm needs gradients at an iteration, it will pass the vector `gradient` that is of the same size as the parameters.
With `if length(gradient) > 0` we check if the algorithm needs gradients, and if it does, we fill the `gradient` vector with the gradients
of the constraint w.r.t. the parameters.
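
To make this concrete, here is a hedged sketch of such a constraint function (the parameter index `31` and the bound `0.6` are illustrative assumptions):

```julia
# Inequality constraint c(θ) = θ[31] - 0.6 ≤ 0 in NLopt's callback form.
function my_constraint(θ::Vector, gradient::Vector)
    if length(gradient) > 0   # the algorithm requested a gradient
        fill!(gradient, 0.0)  # c depends only on θ[31] ...
        gradient[31] = 1.0    # ... so ∂c/∂θ[31] = 1 and all other entries are 0
    end
    return θ[31] - 0.6        # the constraint is satisfied iff this is ≤ 0
end
```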

In NLopt, vector-valued constraints are also possible, but we refer to the documentation for that.
@@ -134,15 +134,15 @@ constrained_optimizer = SemOptimizerNLopt(
```

As you see, the equality constraints and inequality constraints are passed as keyword arguments, and the bounds are passed as options for the (outer) optimization algorithm.
Additionally, for equality and inequality constraints, a feasibility tolerance can be specified that controls if a solution can be accepted, even if it violates the constraints by a small amount.
Especially for equality constraints, it is recommended to allow for a small positive tolerance.
In this example, we set both tolerances to `1e-8`.
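
A hedged sketch of how this might look (the keyword and option names are assumptions based on the surrounding text; the constraint functions are the hypothetical ones from above):

```julia
# Augmented Lagrangian outer algorithm with an L-BFGS local optimizer;
# both constraints get a feasibility tolerance of 1e-8, and the optimization
# parameters a (liberal) absolute tolerance of 1e-4.
constrained_optimizer = SemOptimizerNLopt(
    algorithm = :AUGLAG,
    local_algorithm = :LD_LBFGS,
    options = Dict(:xtol_abs => 1e-4),
    equality_constraints = (f = my_eq_constraint, tol = 1e-8),
    inequality_constraints = (f = my_constraint, tol = 1e-8),
)
```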

!!! warning "Convergence criteria"
We have often observed that the default convergence criteria in NLopt lead to non-convergence flags.
Indeed, this example does not converge with the default criteria.
As you see above, we used a relatively liberal absolute tolerance in the optimization parameters of 1e-4.
This should not be a problem in most cases, as the sampling variance in (almost all) structural equation models
should lead to uncertainty in the parameter estimates that is orders of magnitude larger.
We nonetheless recommend choosing a convergence criterion with care (i.e. w.r.t. the scale of your parameters),
inspecting the solutions for plausibility, and comparing them to unconstrained solutions.
@@ -162,14 +162,14 @@ As you can see, the optimizer converged (`:XTOL_REACHED`) and investigating the
update_partable!(
partable,
:estimate_constr,
model_fit_constrained,
solution(model_fit_constrained),
)

details(partable)
```

As we can see, the constrained solution is very close to the original solution (compare the columns estimate and estimate_constr), with the difference that the constrained parameters fulfill their constraints.
As all parameters are estimated simultaneously, it is expected that some unconstrained parameters are also affected (e.g., the constraint on `dem60 → y2` leads to a higher estimate of the residual variance `y2 ↔ y2`).

## Using the Optim.jl backend
2 changes: 1 addition & 1 deletion docs/src/tutorials/construction/build_by_parts.md
@@ -40,7 +40,7 @@ end

partable = ParameterTable(
graph,
latent_vars = lat_vars,
observed_vars = obs_vars)
```

20 changes: 10 additions & 10 deletions docs/src/tutorials/fitting/fitting.md
@@ -7,19 +7,19 @@ model_fit = fit(model)

# output

Fitted Structural Equation Model
===============================================
--------------------- Model -------------------

Structural Equation Model
- Loss Functions
SemML
- Fields
observed: SemObservedData
implied: RAM
optimizer: SemOptimizerOptim

------------- Optimization result -------------

* Status: success

4 changes: 2 additions & 2 deletions docs/src/tutorials/meanstructure.md
@@ -40,7 +40,7 @@ end

partable = ParameterTable(
graph,
latent_vars = latent_vars,
observed_vars = observed_vars)
```

@@ -78,7 +78,7 @@ end

partable = ParameterTable(
graph,
latent_vars = latent_vars,
observed_vars = observed_vars)
```

14 changes: 7 additions & 7 deletions docs/src/tutorials/regularization/regularization.md
@@ -2,7 +2,7 @@

## Setup

For ridge regularization, you can simply use `SemRidge` as an additional loss function
(for example, a model with the loss functions `SemML` and `SemRidge` corresponds to ridge-regularized maximum likelihood estimation).

For lasso, elastic net and (far) beyond, you can load the `ProximalAlgorithms.jl` and `ProximalOperators.jl` packages alongside `StructuralEquationModels`:
@@ -22,7 +22,7 @@ using StructuralEquationModels, ProximalAlgorithms, ProximalOperators
## `SemOptimizerProximal`

To estimate regularized models, we provide a "building block" for the optimizer part, called `SemOptimizerProximal`.
It connects our package to the [`ProximalAlgorithms.jl`](https://github.com/JuliaFirstOrder/ProximalAlgorithms.jl) optimization backend, providing so-called proximal optimization algorithms.
Those can handle, amongst other things, various forms of regularization.

It can be used as
@@ -32,7 +32,7 @@ SemOptimizerProximal(
algorithm = ProximalAlgorithms.PANOC(),
operator_g,
operator_h = nothing
)
```

The proximal operator (aka the regularization function) can be passed as `operator_g`.
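
For illustration, a hedged sketch of two such operators (the penalty weight and index groups are assumptions):

```julia
λ = 0.02

# Lasso (L1) penalty on all parameters:
op_all = NormL1(λ)

# Penalty on a subset of the parameters only (index groups are made up here):
ind = [3, 5, 8]            # parameters to penalize
rest = setdiff(1:31, ind)  # all remaining parameter indices
op_subset = SlicedSeparableSum((NormL1(λ), NormL0(0.0)), (ind, rest))
```
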
@@ -69,7 +69,7 @@ end

partable = ParameterTable(
graph,
latent_vars = latent_vars,
observed_vars = observed_vars
)

@@ -85,7 +85,7 @@ We labeled the covariances between the items because we want to regularize those

```@example reg
ind = getindex.(
Ref(param_indices(model)),
[:cov_15, :cov_24, :cov_26, :cov_37, :cov_48, :cov_68])
```

Expand All @@ -107,7 +107,7 @@ and use `SemOptimizerProximal`.
```@example reg
optimizer_lasso = SemOptimizerProximal(
operator_g = NormL1(λ)
)

model_lasso = Sem(
specification = partable,
@@ -158,7 +158,7 @@ prox_operator = SlicedSeparableSum((NormL0(20.0), NormL1(0.02), NormL0(0.0)), ([

model_mixed = Sem(
specification = partable,
data = data,
)

fit_mixed = fit(model_mixed; engine = :Proximal, operator_g = prox_operator)
12 changes: 6 additions & 6 deletions docs/src/tutorials/specification/graph_interface.md
@@ -1,6 +1,6 @@
# Graph interface

## Workflow
As discussed before, when using the graph interface, you can specify your model as a graph

```julia
@@ -17,7 +17,7 @@ lat_vars = ...

partable = ParameterTable(
graph,
latent_vars = lat_vars,
observed_vars = obs_vars)

model = Sem(
@@ -32,7 +32,7 @@ In general, there are two different types of parameters: **directed** and **indi
We allow multiple variables on both sides of an arrow, for example `x → [y z]` or `[a b] → [c d]`. The latter specifies element-wise edges; that is, it is the same as `a → c; b → d`. If you want edges corresponding to the cross-product, we have the double-lined arrow `[a b] ⇒ [c d]`, corresponding to `a → c; a → d; b → c; b → d`. The undirected arrows ↔ (element-wise) and ⇔ (cross-product) behave the same way.
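
A hedged sketch contrasting the two arrow types (the variable names are placeholders):

```julia
graph = @StenoGraph begin
    [a b] → [c d] # element-wise: a → c; b → d
    [a b] ⇒ [c d] # cross-product: a → c; a → d; b → c; b → d
end
```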

!!! note "Unicode symbols in julia"
The `→` symbol is a unicode symbol allowed in julia (among many others; see this [list](https://docs.julialang.org/en/v1/manual/unicode-input/)). You can enter it in the julia REPL or the vscode IDE by typing `\to` followed by hitting `tab`. Similarly,
- `←` = `\leftarrow`,
- `↔` = `\leftrightarrow`,
- `⇒` = `\Rightarrow`,
@@ -54,7 +54,7 @@ graph = @StenoGraph begin
ξ₃ ↔ fixed(1.0)*ξ₃
end
```
would
- fix the directed effects from `ξ₁` to `x1` and from `ξ₂` to `x2` to `1`
- leave the directed effect from `ξ₃` to `x7` free but instead restrict the variance of `ξ₃` to `1`
- give the effect from `ξ₁` to `x3` the label `:a` (which can be convenient later if you want to retrieve information from your model about that specific parameter)
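
A hedged reconstruction of the kind of graph this list describes (only the `ξ₃ ↔ fixed(1.0)*ξ₃` line is shown above; the remaining lines are assumptions):

```julia
graph = @StenoGraph begin
    ξ₁ → fixed(1.0)*x1 + label(:a)*x3 # x1 loading fixed to 1; x3 loading labeled :a
    ξ₂ → fixed(1.0)*x2
    ξ₃ → x7                           # loading left free ...
    ξ₃ ↔ fixed(1.0)*ξ₃                # ... variance of ξ₃ fixed to 1 instead
end
```
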
@@ -66,7 +66,7 @@ As you saw above and in the [A first model](@ref) example, the graph object need
```julia
partable = ParameterTable(
graph,
latent_vars = lat_vars,
observed_vars = obs_vars)
```

@@ -85,7 +85,7 @@ The variable names (`:x1`) have to be symbols, the syntax `:something` creates a
_(lat_vars) ⇔ _(lat_vars)
end
```
creates undirected effects corresponding to
1. the variances of all observed variables and
2. the variances plus covariances of all latent variables.
So if you want to work with a subset of variables, simply specify a vector of symbols `somevars = [...]`, and inside the graph specification, refer to them as `_(somevars)`.
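
A hedged sketch (variable names assumed):

```julia
somevars = [:x1, :x2]

graph = @StenoGraph begin
    _(somevars) ↔ _(somevars) # element-wise: the variances of x1 and x2 only
end
```
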
2 changes: 1 addition & 1 deletion docs/src/tutorials/specification/parameter_table.md
@@ -5,5 +5,5 @@ As lavaan also uses parameter tables to store model specifications, we are worki

## Convert from and to RAMMatrices

To convert a RAMMatrices object to a ParameterTable, simply use `partable = ParameterTable(rammatrices)`.
To convert an object of type `ParameterTable` to RAMMatrices, you can use `ram_matrices = RAMMatrices(partable)`.
26 changes: 13 additions & 13 deletions docs/src/tutorials/specification/ram_matrices.md
@@ -1,6 +1,6 @@
# RAMMatrices interface

Models can also be specified by an object of type `RAMMatrices`.
The RAM (reticular action model) specification corresponds to three matrices: the `A` matrix containing all directed parameters, the `S` matrix containing all undirected parameters, and the `F` matrix filtering out latent variables from the model implied covariance.

The model implied covariance matrix for the observed variables of a SEM is then computed as
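
```math
\Sigma = F(I-A)^{-1}S\left((I-A)^{-1}\right)^TF^T
```
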
@@ -56,9 +56,9 @@ A =[0 0 0 0 0 0 0 0 0 0 0 1.0 0 0
θ = Symbol.(:θ, 1:31)

spec = RAMMatrices(;
A = A,
S = S,
F = F,
param_labels = θ,
vars = [:x1, :x2, :x3, :y1, :y2, :y3, :y4, :y5, :y6, :y7, :y8, :ind60, :dem60, :dem65]
)
@@ -71,9 +71,9 @@ model = Sem(

Let's look at this step by step:

First, we specify the `A`, `S` and `F`-Matrices.
For a free parameter, we write a `Symbol` like `:θ1` (or any other symbol we like) to the corresponding place in the respective matrix; to constrain parameters to be equal, we just use the same `Symbol` in the respective entries.
To fix a parameter (as in the `A`-Matrix above), we just write down the number we want to fix it to.
All other entries are 0.
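
As a hedged toy illustration (a minimal example, not the model from this tutorial):

```julia
# One latent variable ξ with two indicators; rows/columns are x1, x2, ξ.
# The x1 loading is fixed to 1.0, the x2 loading is a free parameter :λ2.
A = [0 0 1.0
     0 0 :λ2
     0 0 0]
```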

Second, we specify a vector of symbols containing our parameters:
@@ -82,14 +82,14 @@
θ = Symbol.(:θ, 1:31)
```

Third, we construct an object of type `RAMMatrices`, passing our matrices and parameters, as well as the column names of our matrices.
Those are quite important, as they will be used to rearrange your data to match it to your `RAMMatrices` specification.

```julia
spec = RAMMatrices(;
A = A,
S = S,
F = F,
param_labels = θ,
vars = [:x1, :x2, :x3, :y1, :y2, :y3, :y4, :y5, :y6, :y7, :y8, :ind60, :dem60, :dem65]
)
@@ -109,7 +109,7 @@ According to the RAM, model implied mean values of the observed variables are co
```math
\mu = F(I-A)^{-1}M
```
where `M` is a vector of mean parameters. To estimate the means of the observed variables in our example (and set the latent means to `0`), we would specify the model just as before but add

```julia
...
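# A hedged sketch of the elided lines (the mean-parameter labels are assumptions;
# the three trailing zeros fix the latent means of ind60, dem60 and dem65):
M = [:mx1, :mx2, :mx3, :my1, :my2, :my3, :my4, :my5, :my6, :my7, :my8, 0.0, 0.0, 0.0]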
@@ -128,5 +128,5 @@ spec = RAMMatrices(;

## Convert from and to ParameterTables

To convert a RAMMatrices object to a ParameterTable, simply use `partable = ParameterTable(ram_matrices)`.
To convert an object of type `ParameterTable` to RAMMatrices, you can use `ram_matrices = RAMMatrices(partable)`.
2 changes: 1 addition & 1 deletion ext/SEMNLOptExt/NLopt.jl
@@ -19,7 +19,7 @@ function SemOptimizerNLopt(;
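# Wrap a single constraint into a 1-element vector, leaving iterable collections as-is;
# note that a NamedTuple (one constraint given as e.g. `(f = ..., tol = ...)`) counts
# as a single constraint, not as a collection.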
applicable(iterate, equality_constraints) && !isa(equality_constraints, NamedTuple) ||
(equality_constraints = [equality_constraints])
applicable(iterate, inequality_constraints) &&
!isa(inequality_constraints, NamedTuple) ||
(inequality_constraints = [inequality_constraints])
return SemOptimizerNLopt(
algorithm,
6 changes: 3 additions & 3 deletions src/additional_functions/helper.jl
@@ -21,22 +21,22 @@ function batch_inv!(fun, model)
end

# computes A*S*B -> C, where ind gives the entries of S that are 1
function sparse_outer_mul!(C, A, B, ind)
fill!(C, 0.0)
for i in 1:length(ind)
BLAS.ger!(1.0, A[:, ind[i][1]], B[ind[i][2], :], C)
end
end
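
A hedged usage sketch of the method above (matrices and index pairs are made up):

```julia
# C := A*S*B, where S is an implicit 3×3 matrix with ones at (1,2) and (3,1).
A = rand(4, 3); B = rand(3, 5); C = zeros(4, 5)
ind = [(1, 2), (3, 1)]
sparse_outer_mul!(C, A, B, ind)

# Dense check of the same computation:
S = zeros(3, 3)
for (i, j) in ind
    S[i, j] = 1.0
end
@assert C ≈ A * S * B
```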

# computes A*∇m, where ind gives the entries of ∇m that are 1
function sparse_outer_mul!(C, A, ind)
fill!(C, 0.0)
@views C .= sum(A[:, ind], dims = 2)
return C
end

# computes A*S*B -> C, where ind gives the entries of S that are 1
function sparse_outer_mul!(C, A, B::Vector, ind)
fill!(C, 0.0)
@views @inbounds for i in 1:length(ind)
C .+= B[ind[i][2]] .* A[:, ind[i][1]]
10 changes: 3 additions & 7 deletions src/additional_functions/params_array.jl
@@ -199,11 +199,7 @@ materialize!(
kwargs...,
) = materialize!(parent(dest), src, params; kwargs...)

function sparse_materialize(::Type{T}, arr::ParamsMatrix, params::AbstractVector) where {T}
nparams(arr) == length(params) || throw(
DimensionMismatch(
"Number of values ($(length(params))) does not match the number of parameter ($(nparams(arr)))",
@@ -251,8 +251,8 @@ sparse_gradient(arr::ParamsArray{T}) where {T} = sparse_gradient(T, arr)

# range of parameters that are referenced in the matrix
function params_range(arr::ParamsArray; allow_gaps::Bool = false)
first_i = findfirst(i -> arr.param_ptr[i+1] > arr.param_ptr[i], 1:(nparams(arr)-1))
last_i = findlast(i -> arr.param_ptr[i+1] > arr.param_ptr[i], 1:(nparams(arr)-1))

if !allow_gaps && !isnothing(first_i) && !isnothing(last_i)
for i in first_i:last_i