View source: R/backwardCompatibility.R
DEPRECATED! USE THE FUNCTION fitGSMVAR INSTEAD!
fitGMVAR
estimates a GMVAR model in two phases:
in the first phase it uses a genetic algorithm to find starting values for a gradient-based
variable metric algorithm, which it then uses to finalize the estimation in the second phase.
Parallel computing is utilized to perform multiple estimation rounds in parallel.
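The usage details of the call itself were lost in rendering; the sketch below shows a typical call under stated assumptions. Since fitGMVAR is deprecated, the call goes through fitGSMVAR; the argument names (p, M, ncalls, ncores, seeds) follow this page, while the gmvarkit package and its gdpdef example dataset are assumptions.

```r
# Sketch only: fitGMVAR is deprecated in favor of fitGSMVAR.
# `gdpdef` is assumed to be an example dataset shipped with gmvarkit.
library(gmvarkit)

fit <- fitGSMVAR(gdpdef, p = 2, M = 2,     # AR order 2, two mixture components
                 ncalls = 16, ncores = 2,  # 16 estimation rounds on 2 cores
                 seeds = 1:16)             # seeds for reproducible rounds
fit$params  # estimated parameter vector
```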
data 
a matrix or class 
p 
a positive integer specifying the autoregressive order of the model. 
M 

conditional 
a logical argument specifying whether the conditional or exact log-likelihood function
parametrization 

constraints 
a size (Mpd^2 x q) constraint matrix C specifying general linear constraints
on the autoregressive parameters. We consider constraints of the form
(φ_{1},...,φ_{M}) = C ψ,
where φ_{m} = (vec(A_{m,1}),...,vec(A_{m,p})) (pd^2 x 1), m=1,...,M,
contains the coefficient matrices and ψ (q x 1) contains the related parameters.
For example, to restrict the AR parameters to be the same for all regimes, set C =
[ 
same_means 
Restrict the mean parameters of some regimes to be the same? Provide a list of numeric vectors
such that each numeric vector contains the regimes that should share the common mean parameters. For instance, if

structural_pars 
If
See Virolainen (2020) for the conditions required to identify the shocks and for the B-matrix as well (it is W times a time-varying diagonal matrix with positive diagonal entries).
ncalls 
the number of estimation rounds that should be performed. 
ncores 
the number of CPU cores to be used in parallel computing.
maxit 
the maximum number of iterations in the variable metric algorithm. 
seeds 
a length 
print_res 
should summaries of estimation results be printed? 
... 
additional settings passed to the function 
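As a concrete illustration of the constraints argument described above, the following base-R sketch builds a C matrix that restricts the AR matrices to be identical across regimes by stacking identity matrices; the dimensions follow the (Mpd^2 x q) convention of this page, and the commented-out fit call assumes gmvarkit.

```r
# Restrict the AR parameters to be the same in both regimes of an M = 2 model:
p <- 1; d <- 2; M <- 2
q <- p * d^2                      # number of free AR parameters in psi
C_mat <- rbind(diag(q), diag(q))  # (M*p*d^2 x q): phi_1 = phi_2 = psi
dim(C_mat)                        # 8 x 4
# fit <- fitGSMVAR(data, p = p, M = M, constraints = C_mat)  # assumes gmvarkit
```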
If you wish to estimate a structural model without overidentifying constraints that is statistically identified,
specify your W matrix in structural_pars
so that it contains the sign constraints in a single row
(e.g. a row of ones) and leave the other elements as NA.
This way, the genetic algorithm works best.
The ordering and signs of the columns of the W matrix can be changed afterwards with the functions
reorder_W_columns and swap_W_signs.
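A sketch of a W pattern of the recommended kind for a bivariate model: sign constraints only in a single row, all other entries left free as NA. The values are hypothetical, and the list format for structural_pars is an assumption about gmvarkit's interface.

```r
# d = 2: constrain the signs of the first row of W to be positive,
# leave the remaining elements free (NA):
W <- matrix(c(1, 1,
              NA, NA), nrow = 2, byrow = TRUE)
structural_pars <- list(W = W)  # assumed list format; see the package docs
```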
Because of the complexity and high multimodality of the log-likelihood function, it is not certain that the estimation algorithm will end up at the global maximum point. It is expected that most of the estimation rounds will end up at some local maximum or saddle point instead. Therefore, a (sometimes large) number of estimation rounds is required for reliable results. Because of the nature of the model, the estimation may fail, especially when the number of mixture components is chosen too large. With two regimes and a couple hundred observations in a two-dimensional time series, 50 rounds is usually enough. Several hundred estimation rounds often suffice for reliably fitting two-regime models to 3- or 4-dimensional time series. With more than two regimes and more than a couple hundred observations, thousands of estimation rounds (or more) are often required to obtain reliable results.
The estimation process is computationally heavy and may take a considerably long time for large models with
a large number of observations. If the iteration limit maxit
in the variable metric algorithm is reached,
one can continue the estimation by iterating more with the function iterate_more.
Alternatively, one may
use the found estimates as starting values for the genetic algorithm and employ another round of estimation
(see ?GAfit
for how to set up an initial population with the dot parameters).
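Continuing a run that hit the iteration limit might look like the following sketch, assuming gmvarkit's iterate_more function and an existing fitted model object fit; the argument name maxit follows this page.

```r
# Continue estimation of a model whose variable metric run hit maxit
# (assumes gmvarkit and a previously fitted model object `fit`):
fit2 <- iterate_more(fit, maxit = 1000)
```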
If the estimation algorithm fails to create an initial population for the genetic algorithm, it usually helps to scale the individual series so that the AR coefficients (of a VAR model) will be relatively small, preferably less than one. Even if one is able to create an initial population, it is preferable to scale the series so that most of the AR coefficients are not very large, as the estimation algorithm works better with relatively small AR coefficients. If needed, another package can be used to fit linear VARs to the series to see which scaling of the series results in relatively small AR coefficients.
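The scaling advice above can be carried out in base R, for example by dividing each series by its sample standard deviation (one possible scaling among many; the simulated data are purely illustrative).

```r
# Scale each column of a multivariate series to unit sample standard
# deviation so that fitted AR coefficients tend to be relatively small:
set.seed(1)
data <- matrix(rnorm(200, sd = c(10, 0.1)), ncol = 2, byrow = TRUE)
scaled <- scale(data, center = FALSE, scale = apply(data, 2, sd))
apply(scaled, 2, sd)  # both columns now have sd 1
```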
The code of the genetic algorithm is mostly based on the description by Dorsey and Mayer (1995) but it includes some extra features that were found useful for this particular estimation problem. For instance, the genetic algorithm uses a slightly modified version of the individually adaptive crossover and mutation rates described by Patnaik and Srinivas (1994) and employs (50%) fitness inheritance discussed by Smith, Dike and Stegmann (1995).
The gradient based variable metric algorithm used in the second phase is implemented with function optim
from the package stats
.
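The variable metric algorithm corresponds to optim's "BFGS" method; the toy illustration below minimizes a simple quadratic (not the model's actual log-likelihood) to show how the second phase refines a starting point.

```r
# Minimize a simple quadratic with the variable metric (BFGS) algorithm,
# mirroring how the second estimation phase polishes starting values:
obj <- function(x) sum((x - c(1, 2))^2)
res <- optim(par = c(0, 0), fn = obj, method = "BFGS",
             control = list(maxit = 500))
res$par  # close to c(1, 2)
```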
Note that the structural models are even more difficult to estimate than the reduced form models due to
the different parametrization of the covariance matrices, so a larger number of estimation rounds should be considered.
Also, be aware that if the lambda parameters are constrained in any other way than by restricting some of them to be
identical, the parameter "lambda_scale" of the genetic algorithm (see ?GAfit
) needs to be carefully adjusted accordingly.
When estimating a structural model that imposes overidentifying constraints on a time series with d>3,
it is highly recommended to create an initial population based on the estimates of a statistically identified model
(when M=2). This is because obtaining the ML estimate reliably for such a structural model currently seems
difficult in many applications.
Finally, if the function fails to calculate approximate standard errors and the parameter estimates are near the border
of the parameter space, it might help to use a smaller numerical tolerance for the stationarity and positive
definiteness conditions. The numerical tolerance of an existing model can be changed with the function
update_numtols.
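Adjusting the tolerances might look like the sketch below, assuming gmvarkit's update_numtols; the argument names stat_tol and posdef_tol are assumptions, so check ?update_numtols for the actual interface.

```r
# Tighten the numerical tolerances of an existing fitted model `fit`
# (argument names are assumptions; see ?update_numtols in gmvarkit):
fit_adj <- update_numtols(fit, stat_tol = 1e-5, posdef_tol = 1e-9)
```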
Returns an object of class 'gsmvar'
defining the estimated (reduced form or structural) GMVAR, StMVAR, or G-StMVAR model.
Multivariate quantile residuals (Kalliovirta and Saikkonen 2010) are also computed and included in the returned object.
In addition, the returned object contains the estimates and log-likelihood values from all the estimation rounds performed.
The estimated parameter vector can be obtained at gsmvar$params
(and corresponding approximate standard errors
at gsmvar$std_errors
). See ?GSMVAR
for the form of the parameter vector, if needed.
Note that the first autocovariance/correlation matrix in $uncond_moments
is for lag zero,
the second one for lag one, etc.
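Putting the Value section together, accessing the components of a fitted model might look like the following sketch, assuming a fitted gsmvar object fit; the component names follow this page.

```r
# Components of the returned 'gsmvar' object (names as documented above):
fit$params          # estimated parameter vector
fit$std_errors      # corresponding approximate standard errors
fit$uncond_moments  # unconditional moments; first autocovariance is lag zero
```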
Dorsey R. E. and Mayer W. J. 1995. Genetic algorithms for estimation problems with multiple optima, nondifferentiability, and other irregular features. Journal of Business & Economic Statistics, 13, 53-66.
Kalliovirta L., Meitz M. and Saikkonen P. 2016. Gaussian mixture vector autoregression. Journal of Econometrics, 192, 485-498.
Patnaik L.M. and Srinivas M. 1994. Adaptive Probabilities of Crossover and Mutation in Genetic Algorithms. Transactions on Systems, Man and Cybernetics, 24, 656-667.
Smith R.E., Dike B.A. and Stegmann S.A. 1995. Fitness inheritance in genetic algorithms. Proceedings of the 1995 ACM Symposium on Applied Computing, 345-350.
Virolainen S. 2020. Structural Gaussian mixture vector autoregressive model. Unpublished working paper, available as arXiv:2007.04713.
Virolainen S. 2021. Gaussian and Student's t mixture vector autoregressive model. Unpublished working paper, available as arXiv:2109.13648.