Aim. This vignette shows how to fit a penalized multiple-group factor analysis model with the alasso penalty using the routines in the penfa package. The penalties automatically encourage sparsity in the factor structures and invariance across groups in the loadings and intercepts.
Data. For illustration purposes, we use the cross-cultural data set ccdata, which contains the standardized ratings of 12 items concerning organizational citizenship behavior. Employees from different countries were asked to rate their attitudes towards helping other employees and giving suggestions for improved work conditions. The items are thought to measure two latent factors: help, defined by the first seven items (h1 to h7), and voice, represented by the last five items (v1 to v5). See ?ccdata for details.
This data set is a standardized version of the one in the ccpsyc package, and only considers employees from Lebanon and Taiwan (i.e., "LEB", "TAIW"). The country of origin is the group variable for the multiple-group analysis. This vignette is meant as a demo of the capabilities of penfa; please refer to Fischer et al. (2019) and Fischer and Karl (2019) for a description and analysis of these data.
Let us load and inspect ccdata.
library(penfa)
data(ccdata)
summary(ccdata)
## country h1 h2 h3 h4
## Length:767 Min. :-2.62004 Min. :-2.9034 Min. :-2.63082 Min. :-3.0441
## Class :character 1st Qu.:-0.69516 1st Qu.:-0.2163 1st Qu.:-0.70356 1st Qu.:-0.2720
## Mode :character Median :-0.05354 Median : 0.4554 Median :-0.06114 Median : 0.4211
## Mean : 0.00000 Mean : 0.0000 Mean : 0.00000 Mean : 0.0000
## 3rd Qu.: 0.58809 3rd Qu.: 0.4554 3rd Qu.: 0.58128 3rd Qu.: 0.4211
## Max. : 1.22971 Max. : 1.1272 Max. : 1.22370 Max. : 1.1141
## h5 h6 h7 v1 v2
## Min. :-2.9105 Min. :-2.9541 Min. :-2.8364 Min. :-2.627694 Min. :-2.674430
## 1st Qu.:-0.8662 1st Qu.:-0.9092 1st Qu.:-0.7860 1st Qu.:-0.660770 1st Qu.:-0.671219
## Median : 0.4966 Median : 0.4541 Median :-0.1025 Median :-0.005129 Median :-0.003482
## Mean : 0.0000 Mean : 0.0000 Mean : 0.0000 Mean : 0.000000 Mean : 0.000000
## 3rd Qu.: 0.4966 3rd Qu.: 0.4541 3rd Qu.: 0.5810 3rd Qu.: 0.650512 3rd Qu.: 0.664255
## Max. : 1.1781 Max. : 1.1358 Max. : 1.2645 Max. : 1.306154 Max. : 1.331992
## v3 v4 v5
## Min. :-2.65214 Min. :-2.65722 Min. :-2.51971
## 1st Qu.:-0.68800 1st Qu.:-0.68041 1st Qu.:-0.61127
## Median :-0.03329 Median :-0.02148 Median : 0.02488
## Mean : 0.00000 Mean : 0.00000 Mean : 0.00000
## 3rd Qu.: 0.62142 3rd Qu.: 0.63746 3rd Qu.: 0.66103
## Max. : 1.27613 Max. : 1.29639 Max. : 1.29718
The syntax is more elaborate than the one in vignette("automatic-tuning-selection") due to the additional specification of the mean structure. Following the rationale in Geminiani et al. (2021), we only specify the minimum number of identification constraints. The metric of the factors is set through the 'marker-variable' approach, with the markers being h1 for help and v1 for voice: the primary loadings of the markers are fixed to 1, and their intercepts to 0. To avoid rotational freedom, the secondary loadings of the markers are fixed to zero. Intercepts for multiple variables can be introduced by listing the variables of interest on the left-hand side, separated by '+' signs, followed by the tilde operator ~ and the number 1 on the right-hand side. By default, factor means are fixed to zero. Provided that identification restrictions are applied, users can force the estimation of any model parameter through a pre-multiplication by NA (i.e., 'help ~ NA*1' and 'voice ~ NA*1'). Alternatively, factor means can be estimated by setting int.lv.free = TRUE in the penfa function call. If the variable appearing in an intercept formula is observed, the formula specifies the intercept term for that item; if the variable is latent (i.e., a factor), the formula specifies a factor mean. By default, unique variances are automatically added to the model, and the factors are allowed to correlate.
syntax.mg = 'help =~ 1*h1 + h2 + h3 + h4 + h5 + h6 + h7 + 0*v1 + v2 + v3 + v4 + v5
voice =~ 0*h1 + start(0)*h2 + start(0)*h3 + h4 + h5 + h6 + h7 + 1*v1 + v2 + v3 + v4 + v5
h2 + h3 + h4 + h5 + h6 + h7 + v2 + v3 + v4 + v5 ~ 1
help ~ NA*1
voice ~ NA*1 '
Users can take advantage of the start() function to provide informative starting values for some model parameters and hence facilitate the estimation process. In the syntax above, we set the starting values of two secondary loadings to zero. These specifications can be modified by altering the syntax (see ?penfa for details on how to write the model syntax).
As discussed in vignette("automatic-tuning-selection"), we first fit an unpenalized model to obtain the maximum likelihood estimates to be used as weights for the alasso. In the group argument, we indicate the name of the group variable in the data set, which is the country of origin of the employees. We also set pen.shrink = "none" and strategy = "fixed".
mle.fitMG <- penfa(## factor model
model = syntax.mg,
data = ccdata,
group = "country",
## (no) penalization
pen.shrink = "none",
pen.diff = "none",
eta = list(shrink = c("lambda" = 0), diff = c("none" = 0)),
strategy = "fixed",
verbose = FALSE)
The estimated parameters can be extracted via the coef method. We collect them in the mle.weightsMG vector, which will be used when fitting the penalized multiple-group model with the alasso penalty.
mle.weightsMG <- coef(mle.fitMG)
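A quick peek at the weights vector clarifies what the alasso receives: a named vector of maximum likelihood estimates, one per model parameter.

```r
## First few maximum likelihood estimates, named by parameter
## (these serve as adaptive weights for the alasso penalty)
head(mle.weightsMG)
```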
We can now estimate a penalized multiple-group model with the alasso penalty. Following the rationale in Geminiani et al. (2021), we specify the following penalties:

- Penalty 1: a shrinkage penalty on the factor loadings of each group, to encourage sparsity;
- Penalty 2: a penalty on the pairwise group differences of the factor loadings, to encourage loading invariance;
- Penalty 3: a penalty on the pairwise group differences of the intercepts, to encourage intercept invariance.

This results in a total of three tuning parameters, whose optimal values can be quickly and efficiently estimated through the automatic tuning procedure. The argument pen.shrink specifies the penalty function for Penalty 1 (shrinkage), whereas pen.diff specifies the penalty function for Penalties 2 and 3 (shrinkage of pairwise group differences). The names "lambda" and "tau" in the eta argument clarify that the shrinkage penalty is applied to the loadings, and the invariance penalty to both loadings and intercepts. The numeric values in eta are the starting values for each tuning parameter. By default, the alasso penalty is computed with exponent equal to 1, and the automatic procedure uses an influence factor of 4. Users desiring sparser solutions are encouraged to try higher values for these quantities through the corresponding arguments (a.alasso, gamma) in the penfa call.
alasso.fitMG <- penfa(## factor model
model = syntax.mg,
data = ccdata,
group = "country",
int.lv.free = TRUE,
## penalization
pen.shrink = "alasso",
pen.diff = "alasso",
eta = list(shrink = c("lambda" = 0.01),
diff = c("lambda" = 0.1, "tau" = 0.01)),
## automatic procedure
strategy = "auto",
gamma = 4,
## alasso
weights = mle.weightsMG,
verbose = TRUE)
##
## Automatic procedure:
## Iteration 1 : 0.00215674 827.66749181 0.00300201
## Iteration 2 : 0.00196781 29643.22874071 0.00772724
##
## Largest absolute gradient value: 62.07895697
## Fisher information matrix is positive definite
## Eigenvalue range: [53.00926, 8.675704e+12]
## Trust region iterations: 21
## Factor solution: admissible
## Effective degrees of freedom: 56.75804
alasso.fitMG
## penfa 0.1.1 reached convergence
##
## Number of observations per group:
## LEB 402
## TAIW 365
##
## Estimator PMLE
## Optimization method trust-region
## Information fisher
## Strategy auto
## Number of iterations (total) 211
## Number of two-steps (automatic) 2
## Effective degrees of freedom 56.758
##
## Penalty functions:
## Sparsity alasso
## Invariance alasso
##
##
From the summary of the fitted object we can notice that the automatic tuning procedure required just a couple of iterations to converge. The optimal tuning parameters are 0.00196781 for Penalty 1, 29643.229 for Penalty 2, and 0.00772724 for Penalty 3.
summary(alasso.fitMG)
## penfa 0.1.1 reached convergence
##
## Number of observations per group:
## LEB 402
## TAIW 365
## Number of groups 2
## Number of observed variables 12
## Number of latent factors 2
##
## Estimator PMLE
## Optimization method trust-region
## Information fisher
## Strategy auto
## Number of iterations (total) 211
## Number of two-steps (automatic) 2
## Influence factor 4
## Number of parameters:
## Free 34
## Penalized 60
## Effective degrees of freedom 56.758
## GIC 17096.153
## GBIC 17359.652
##
## Penalty functions:
## Sparsity alasso
## Invariance alasso
##
## Additional tuning parameter
## alasso 1
##
## Optimal tuning parameters:
## Sparsity
## - Factor loadings 0.002
## Invariance
## - Factor loadings 29643.229
## - Intercepts 0.008
##
##
## Parameter Estimates:
##
##
## Group 1 [LEB]:
##
## Latent Variables:
## Type Estimate Std.Err 2.5% 97.5%
## help =~
## h1 fixed 1.000 1.000 1.000
## h2 pen 1.103 0.039 1.027 1.180
## h3 pen 0.996 0.040 0.917 1.076
## h4 pen 1.196 0.050 1.099 1.294
## h5 pen 1.106 0.038 1.031 1.181
## h6 pen 1.084 0.051 0.984 1.185
## h7 pen 0.742 0.065 0.614 0.870
## v1 fixed 0.000 0.000 0.000
## v2 pen 0.000
## v3 pen 0.000
## v4 pen 0.000
## v5 pen -0.000
## voice =~
## h1 fixed 0.000 0.000 0.000
## h2 pen -0.000
## h3 pen 0.000
## h4 pen -0.038
## h5 pen 0.000
## h6 pen 0.044
## h7 pen 0.336 0.056 0.225 0.446
## v1 fixed 1.000 1.000 1.000
## v2 pen 1.015 0.028 0.960 1.070
## v3 pen 0.977 0.029 0.920 1.034
## v4 pen 0.977 0.029 0.920 1.034
## v5 pen 0.932 0.030 0.873 0.991
##
## Covariances:
## Type Estimate Std.Err 2.5% 97.5%
## help ~~
## voice free 0.759 0.062 0.637 0.882
##
## Intercepts:
## Type Estimate Std.Err 2.5% 97.5%
## .h2 pen -0.012 0.031 -0.073 0.049
## .h3 pen -0.001 0.032 -0.062 0.061
## .h4 pen -0.058 0.033 -0.122 0.007
## .h5 pen 0.024 0.031 -0.037 0.084
## .h6 pen 0.003 0.029 -0.055 0.060
## .h7 pen -0.004 0.025 -0.054 0.045
## .v2 pen -0.022 0.026 -0.072 0.029
## .v3 pen -0.001 0.025 -0.050 0.048
## .v4 pen -0.001 0.025 -0.050 0.048
## .v5 pen 0.004 0.026 -0.047 0.055
## help free 0.092 0.049 -0.005 0.189
## voice free 0.148 0.051 0.048 0.248
## .h1 fixed 0.000 0.000 0.000
## .v1 fixed 0.000 0.000 0.000
##
## Variances:
## Type Estimate Std.Err 2.5% 97.5%
## .h1 free 0.382 0.029 0.326 0.438
## .h2 free 0.213 0.017 0.180 0.247
## .h3 free 0.344 0.026 0.293 0.395
## .h4 free 0.126 0.012 0.103 0.150
## .h5 free 0.162 0.014 0.135 0.189
## .h6 free 0.164 0.014 0.137 0.191
## .h7 free 0.210 0.016 0.178 0.241
## .v1 free 0.176 0.016 0.145 0.208
## .v2 free 0.207 0.018 0.171 0.242
## .v3 free 0.250 0.021 0.209 0.290
## .v4 free 0.259 0.021 0.217 0.300
## .v5 free 0.290 0.023 0.245 0.335
## help free 0.753 0.069 0.617 0.889
## voice free 0.911 0.074 0.766 1.056
##
##
## Group 2 [TAIW]:
##
## Latent Variables:
## Type Estimate Std.Err 2.5% 97.5%
## help =~
## h1 fixed 1.000 1.000 1.000
## h2 pen 1.103 0.039 1.027 1.180
## h3 pen 0.996 0.040 0.917 1.076
## h4 pen 1.196 0.050 1.099 1.294
## h5 pen 1.106 0.038 1.031 1.181
## h6 pen 1.084 0.051 0.984 1.185
## h7 pen 0.742 0.065 0.614 0.870
## v1 fixed 0.000 0.000 0.000
## v2 pen 0.000
## v3 pen 0.000
## v4 pen 0.000
## v5 pen -0.000
## voice =~
## h1 fixed 0.000 0.000 0.000
## h2 pen -0.000
## h3 pen 0.000
## h4 pen -0.038
## h5 pen 0.000
## h6 pen 0.044
## h7 pen 0.336 0.056 0.225 0.446
## v1 fixed 1.000 1.000 1.000
## v2 pen 1.015 0.028 0.960 1.070
## v3 pen 0.977 0.029 0.920 1.034
## v4 pen 0.977 0.029 0.920 1.034
## v5 pen 0.932 0.030 0.873 0.991
##
## Covariances:
## Type Estimate Std.Err 2.5% 97.5%
## help ~~
## voice free 0.428 0.039 0.351 0.505
##
## Intercepts:
## Type Estimate Std.Err 2.5% 97.5%
## .h2 pen 0.008 0.032 -0.054 0.070
## .h3 pen -0.001 0.032 -0.062 0.061
## .h4 pen 0.058 0.036 -0.013 0.129
## .h5 pen -0.012 0.032 -0.075 0.051
## .h6 pen 0.003 0.029 -0.055 0.060
## .h7 pen -0.004 0.025 -0.054 0.046
## .v2 pen 0.022 0.027 -0.030 0.075
## .v3 pen -0.001 0.025 -0.050 0.048
## .v4 pen -0.001 0.025 -0.050 0.048
## .v5 pen 0.002 0.026 -0.050 0.053
## help free -0.101 0.043 -0.186 -0.017
## voice free -0.161 0.045 -0.249 -0.074
## .h1 fixed 0.000 0.000 0.000
## .v1 fixed 0.000 0.000 0.000
##
## Variances:
## Type Estimate Std.Err 2.5% 97.5%
## .h1 free 0.407 0.033 0.343 0.471
## .h2 free 0.267 0.023 0.222 0.312
## .h3 free 0.411 0.033 0.346 0.476
## .h4 free 0.240 0.022 0.197 0.283
## .h5 free 0.303 0.026 0.253 0.354
## .h6 free 0.237 0.021 0.196 0.278
## .h7 free 0.321 0.026 0.270 0.371
## .v1 free 0.318 0.027 0.265 0.372
## .v2 free 0.208 0.020 0.169 0.247
## .v3 free 0.272 0.024 0.225 0.319
## .v4 free 0.261 0.023 0.215 0.306
## .v5 free 0.361 0.030 0.302 0.419
## help free 0.461 0.045 0.373 0.549
## voice free 0.568 0.051 0.469 0.668
The estimated parameters can be inspected in matrix form through the penfaOut function. The which argument allows users to extract the elements of interest. The resulting loading matrices are sparse and identical across groups (as implied by the very large value of the second tuning parameter, which penalizes group differences in the loadings).
penfaOut(alasso.fitMG, which = c("lambda", "tau"))
## $lambda
## help voice
## h1 1.000 0.000
## h2 1.103 0.000
## h3 0.996 0.000
## h4 1.196 -0.038
## h5 1.106 0.000
## h6 1.084 0.044
## h7 0.742 0.336
## v1 0.000 1.000
## v2 0.000 1.015
## v3 0.000 0.977
## v4 0.000 0.977
## v5 0.000 0.932
##
## $tau
## intercept
## h1 0.000
## h2 -0.012
## h3 -0.001
## h4 -0.058
## h5 0.024
## h6 0.003
## h7 -0.004
## v1 0.000
## v2 -0.022
## v3 -0.001
## v4 -0.001
## v5 0.004
##
## $lambda
## help voice
## h1 1.000 0.000
## h2 1.103 0.000
## h3 0.996 0.000
## h4 1.196 -0.038
## h5 1.106 0.000
## h6 1.084 0.044
## h7 0.742 0.336
## v1 0.000 1.000
## v2 0.000 1.015
## v3 0.000 0.977
## v4 0.000 0.977
## v5 0.000 0.932
##
## $tau
## intercept
## h1 0.000
## h2 0.008
## h3 -0.001
## h4 0.058
## h5 -0.012
## h6 0.003
## h7 -0.004
## v1 0.000
## v2 0.022
## v3 -0.001
## v4 -0.001
## v5 0.002
The number of effective degrees of freedom (edf) of this penalized model is 56.76, a fractional number given by the sum of the contributions from the edf of each model parameter. Each edf quantifies the impact of the three penalties on the corresponding parameter. Parameters unaffected by the penalization have an edf equal to 1.
alasso.fitMG@Inference$edf.single
## help=~h2 help=~h3 help=~h4 help=~h5 help=~h6
## 0.694 0.684 0.740 0.756 0.702
## help=~h7 help=~v2 help=~v3 help=~v4 help=~v5
## 0.676 0.001 0.001 0.000 0.001
## voice=~h2 voice=~h3 voice=~h4 voice=~h5 voice=~h6
## 0.000 0.000 0.168 0.000 0.219
## voice=~h7 voice=~v2 voice=~v3 voice=~v4 voice=~v5
## 0.584 0.679 0.681 0.668 0.698
## h2~1 h3~1 h4~1 h5~1 h6~1
## 0.627 0.564 0.875 0.711 0.594
## h7~1 v2~1 v3~1 v4~1 v5~1
## 0.617 0.689 0.569 0.554 0.594
## help~1 voice~1 h1~~h1 h2~~h2 h3~~h3
## 1.000 1.000 1.000 1.000 1.000
## h4~~h4 h5~~h5 h6~~h6 h7~~h7 v1~~v1
## 1.000 1.000 1.000 1.000 1.000
## v2~~v2 v3~~v3 v4~~v4 v5~~v5 help~~help
## 1.000 1.000 1.000 1.000 1.000
## voice~~voice help~~voice help=~h2.g2 help=~h3.g2 help=~h4.g2
## 1.000 1.000 0.303 0.311 0.255
## help=~h5.g2 help=~h6.g2 help=~h7.g2 help=~v2.g2 help=~v3.g2
## 0.241 0.291 0.299 0.001 0.001
## help=~v4.g2 help=~v5.g2 voice=~h2.g2 voice=~h3.g2 voice=~h4.g2
## 0.000 0.000 0.000 0.000 0.086
## voice=~h5.g2 voice=~h6.g2 voice=~h7.g2 voice=~v2.g2 voice=~v3.g2
## 0.000 0.138 0.335 0.318 0.316
## voice=~v4.g2 voice=~v5.g2 h2~1.g2 h3~1.g2 h4~1.g2
## 0.329 0.299 0.505 0.436 0.780
## h5~1.g2 h6~1.g2 h7~1.g2 v2~1.g2 v3~1.g2
## 0.479 0.406 0.384 0.604 0.431
## v4~1.g2 v5~1.g2 help~1.g2 voice~1.g2 h1~~h1.g2
## 0.446 0.417 1.000 1.000 1.000
## h2~~h2.g2 h3~~h3.g2 h4~~h4.g2 h5~~h5.g2 h6~~h6.g2
## 1.000 1.000 1.000 1.000 1.000
## h7~~h7.g2 v1~~v1.g2 v2~~v2.g2 v3~~v3.g2 v4~~v4.g2
## 1.000 1.000 1.000 1.000 1.000
## v5~~v5.g2 help~~help.g2 voice~~voice.g2 help~~voice.g2
## 1.000 1.000 1.000 1.000
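As a consistency check (a sketch, assuming the fitted object from above), the per-parameter edf should sum to the total effective degrees of freedom reported in the model summary:

```r
## Per-parameter edf add up to the total effective degrees of freedom
sum(alasso.fitMG@Inference$edf.single)
## approximately 56.758 (up to rounding in the printed values)
```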
The group-specific implied moments (the covariance matrix and the mean vector) can be found via the fitted method.
implied <- fitted(alasso.fitMG)
implied
## $LEB
## $LEB$cov
## h1 h2 h3 h4 h5 h6 h7 v1 v2 v3 v4 v5
## h1 1.135
## h2 0.831 1.130
## h3 0.750 0.828 1.092
## h4 0.872 0.963 0.869 1.137
## h5 0.833 0.919 0.830 0.965 1.083
## h6 0.850 0.938 0.847 0.985 0.940 1.124
## h7 0.814 0.898 0.811 0.941 0.900 0.920 1.105
## v1 0.759 0.838 0.756 0.874 0.840 0.863 0.869 1.088
## v2 0.771 0.850 0.768 0.887 0.852 0.876 0.882 0.925 1.145
## v3 0.742 0.819 0.739 0.854 0.820 0.844 0.849 0.890 0.904 1.120
## v4 0.742 0.818 0.739 0.854 0.820 0.844 0.849 0.890 0.904 0.870 1.128
## v5 0.708 0.781 0.705 0.815 0.783 0.805 0.810 0.849 0.862 0.830 0.830 1.082
##
## $LEB$mean
## h1 h2 h3 h4 h5 h6 h7 v1 v2 v3 v4 v5
## 0.092 0.090 0.091 0.047 0.126 0.109 0.114 0.148 0.128 0.144 0.144 0.142
##
##
## $TAIW
## $TAIW$cov
## h1 h2 h3 h4 h5 h6 h7 v1 v2 v3 v4 v5
## h1 0.868
## h2 0.509 0.828
## h3 0.459 0.507 0.869
## h4 0.535 0.591 0.533 0.862
## h5 0.510 0.562 0.508 0.592 0.867
## h6 0.519 0.572 0.517 0.602 0.574 0.821
## h7 0.486 0.536 0.484 0.562 0.537 0.549 0.851
## v1 0.428 0.472 0.427 0.491 0.473 0.489 0.508 0.887
## v2 0.435 0.480 0.433 0.498 0.481 0.497 0.516 0.577 0.794
## v3 0.418 0.462 0.417 0.480 0.463 0.478 0.497 0.555 0.564 0.815
## v4 0.418 0.462 0.417 0.479 0.462 0.478 0.497 0.555 0.564 0.543 0.803
## v5 0.399 0.440 0.398 0.457 0.441 0.456 0.474 0.530 0.538 0.518 0.518 0.855
##
## $TAIW$mean
## h1 h2 h3 h4 h5 h6 h7 v1 v2 v3 v4 v5
## -0.101 -0.103 -0.101 -0.057 -0.124 -0.114 -0.133 -0.161 -0.142 -0.159 -0.158 -0.149
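Since the items are standardized in the pooled sample, one can informally gauge model fit by comparing the implied moments with the observed group-specific moments. A minimal sketch (assuming the implied object from above):

```r
## Observed data for the Lebanese group (dropping the country column)
leb <- ccdata[ccdata$country == "LEB", -1]

## Discrepancy between observed and model-implied means
round(colMeans(leb) - implied$LEB$mean, 3)

## Discrepancy between observed and model-implied covariances
round(cov(leb) - implied$LEB$cov, 2)
```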
The complete penalty matrix is stored in alasso.fitMG@Penalize@Sh.info$S.h, but it can be easily extracted via the penmat function. This matrix is the sum of the penalty matrices for Penalty 1 (sparsity.penmat), Penalty 2 (loadinvariance.penmat), and Penalty 3 (intinvariance.penmat). Unique variances, factor (co)variances and factor means were not affected by the penalization, so their entries in the penalty matrices are equal to zero.
full.penmat <- penmat(alasso.fitMG)
The above penalty matrix is the sum of the following three individual penalty matrices.
This penalty matrix shrinks the small factor loadings of each group to zero. Apart from the entries corresponding to the group loadings, all the remaining entries of this penalty matrix are equal to zero.
sparsity.penmat <- penmat(alasso.fitMG, type = "shrink", which = "lambda")
This penalty matrix shrinks the pairwise group differences of the factor loadings to zero.
loadinvariance.penmat <- penmat(alasso.fitMG, type = "diff", which = "lambda")
This penalty matrix shrinks the pairwise group differences of the intercepts to zero.
intinvariance.penmat <- penmat(alasso.fitMG, type = "diff", which = "tau")
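As a quick consistency check (assuming penmat returns plain numeric matrices), the three matrices should add up to the full penalty matrix extracted above:

```r
## The full penalty matrix equals the sum of the three individual ones
all.equal(full.penmat,
          sparsity.penmat + loadinvariance.penmat + intinvariance.penmat)
```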
See vignette("plotting-penalty-matrix") for details on how to produce interactive plots of the above penalty matrices.
Lastly, group-specific factor scores can be computed via the penfaPredict function. The argument assemble = TRUE assembles the factor scores from each group into a single data frame with a group column.
fscoresMG <- penfaPredict(alasso.fitMG, assemble = TRUE)
head(fscoresMG)
## help voice country
## 1 -0.33313708 0.1084453 LEB
## 2 -0.04529539 -0.3208210 LEB
## 3 0.45706110 0.6149927 LEB
## 4 0.10698654 0.5475679 LEB
## 5 0.29863211 0.7514275 LEB
## 6 -0.50629431 -0.4367660 LEB
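The assembled data frame makes it straightforward to compare the factor score distributions across countries, for instance with base-R boxplots:

```r
## Side-by-side distributions of the factor scores by country
op <- par(mfrow = c(1, 2))
boxplot(help  ~ country, data = fscoresMG, main = "help")
boxplot(voice ~ country, data = fscoresMG, main = "voice")
par(op)
```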
sessionInfo()
## R version 4.1.0 (2021-05-18)
## Platform: x86_64-apple-darwin17.0 (64-bit)
## Running under: macOS Catalina 10.15.7
##
## Matrix products: default
## BLAS: /Library/Frameworks/R.framework/Versions/4.1/Resources/lib/libRblas.dylib
## LAPACK: /Library/Frameworks/R.framework/Versions/4.1/Resources/lib/libRlapack.dylib
##
## locale:
## [1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
##
## attached base packages:
## [1] stats graphics grDevices utils datasets methods base
##
## other attached packages:
## [1] penfa_0.1.1
##
## loaded via a namespace (and not attached):
## [1] VineCopula_2.4.2 magic_1.5-9 sfsmisc_1.1-11
## [4] VGAM_1.1-5 splines_4.1.0 tmvnsim_1.0-2
## [7] distr_2.8.0 stats4_4.1.0 yaml_2.2.1
## [10] numDeriv_2016.8-1.1 pillar_1.6.1 lattice_0.20-44
## [13] startupmsg_0.9.6 glue_1.4.2 digest_0.6.27
## [16] colorspace_2.0-2 htmltools_0.5.1.1 Matrix_1.3-3
## [19] survey_4.1-1 psych_2.1.6 pcaPP_1.9-74
## [22] pkgconfig_2.0.3 ismev_1.42 purrr_0.3.4
## [25] GJRM_0.2-4 mvtnorm_1.1-2 scales_1.1.1
## [28] copula_1.0-1 tibble_3.1.2 ADGofTest_0.3
## [31] gmp_0.6-2 mgcv_1.8-35 generics_0.1.0
## [34] ggplot2_3.3.5 ellipsis_0.3.2 cachem_1.0.5
## [37] distrEx_2.8.0 Rmpfr_0.8-4 mnormt_2.0.2
## [40] survival_3.2-11 magrittr_2.0.1 crayon_1.4.1
## [43] memoise_2.0.0 evaluate_0.14 fs_1.5.0
## [46] fansi_0.5.0 nlme_3.1-152 MASS_7.3-54
## [49] gsl_2.1-6 textshaping_0.3.5 tools_4.1.0
## [52] mitools_2.4 lifecycle_1.0.0 pspline_1.0-18
## [55] matrixStats_0.59.0 stringr_1.4.0 trust_0.1-8
## [58] munsell_0.5.0 stabledist_0.7-1 scam_1.2-11
## [61] gamlss.dist_5.3-2 compiler_4.1.0 pkgdown_1.6.1
## [64] evd_2.3-3 systemfonts_1.0.2 rlang_0.4.11
## [67] grid_4.1.0 trustOptim_0.8.6.2 rmarkdown_2.9
## [70] gtable_0.3.0 abind_1.4-5 DBI_1.1.1
## [73] R6_2.5.0 knitr_1.33 dplyr_1.0.7
## [76] fastmap_1.1.0 utf8_1.2.1 rprojroot_2.0.2
## [79] ragg_1.1.3 desc_1.3.0 matrixcalc_1.0-4
## [82] stringi_1.7.3 parallel_4.1.0 Rcpp_1.0.7
## [85] vctrs_0.3.8 tidyselect_1.1.1 xfun_0.24
Fischer, R., Ferreira, M. C., Van Meurs, N. et al. (2019). “Does Organizational Formalization Facilitate Voice and Helping Organizational Citizenship Behaviors? It Depends on (National) Uncertainty Norms.” Journal of International Business Studies, 50(1), 125-134. https://doi.org/10.1057/s41267-017-0132-6
Fischer, R., & Karl, J. A. (2019). “A Primer to (Cross-Cultural) Multi-Group Invariance Testing Possibilities in R.” Frontiers in psychology, 10, 1507. https://doi.org/10.3389/fpsyg.2019.01507
Geminiani, E., Marra, G., & Moustaki, I. (2021). “Single- and Multiple-Group Penalized Factor Analysis: A Trust-Region Algorithm Approach with Integrated Automatic Multiple Tuning Parameter Selection.” Psychometrika, 86(1), 65-95. https://doi.org/10.1007/s11336-021-09751-8