Please use this identifier to cite or link to this item:
DC Field: Value
dc.contributor.advisor: Conigliani, Caterina
dc.contributor.author: Stolfi, Paola
dc.description.abstract: Model-based statistical inference primarily deals with parameter estimation. Under the usual assumption that data are generated from a fully specified model belonging to a given family of distributions Fϑ indexed by a parameter ϑ ∈ Θ ⊂ R^p, inference on the true unknown parameter ϑ0 can easily be performed by maximum likelihood. However, in some pathological situations the maximum likelihood estimator (MLE) is difficult to compute, either because of the model complexity or because the probability density function is not analytically available. For example, the computation of the log-likelihood may involve numerical approximations or integrations that severely deteriorate the quality of the resulting estimates. Moreover, as the dimension of the parameter space increases, computing the likelihood, or maximising it in a reasonable amount of time, becomes even more prohibitive. In all those circumstances the researcher must resort to alternative solutions. Several approaches have been developed, such as the method of moments, the method of simulated moments, the method of simulated maximum likelihood, and the indirect inference (II) method. Despite their appealing characteristic of only requiring the ability to simulate from the specified DGP, some of those methods suffer from serious drawbacks. Furthermore, none of those approaches effectively deals with the curse of dimensionality, i.e., the situation where the number of parameters grows quadratically or exponentially with the dimension of the problem. Indeed, correctly identifying the sparsity pattern becomes crucial because it reduces the number of parameters to be estimated. These considerations motivate the use of sparse estimators that automatically shrink some parameters to zero, such as, for example, the off-diagonal elements of the scale matrix.
Several works on sparse estimation of the covariance matrix are available in the literature; most of them are related to graphical models, where the precision matrix, i.e., the inverse of the covariance matrix, represents the conditional dependence structure of the graph. Those estimators are obtained by penalising a Gaussian log-likelihood. In this work we handle the lack of a model likelihood, or of valid moment conditions, together with the curse of dimensionality within a high-dimensional non-Gaussian framework. Specifically, our approach penalises the objective function of simulation-based inferential procedures such as the II method of Gouriéroux et al. (1993) and the Method of Simulated Quantiles (MSQ) of Dominicy and Veredas (2013). The II method replaces the maximum likelihood estimator of the model parameters with a quasi-maximum likelihood estimator that relies on an alternative auxiliary model, and then exploits simulations from the original model to correct for inconsistency. The MSQ instead estimates parameters by minimising a quadratic distance between a vector of quantile-based summary statistics calculated on the available sample of observations and those calculated on synthetic observations generated from the model. The first contribution is related to the MSQ, a simulation-based extension of the quantile-matching method (QM). Specifically, we extend the MSQ to deal with multivariate data, originating the multivariate method of simulated quantiles (MMSQ). The extension of the MSQ to multivariate data is not trivial because it requires a concept of multivariate quantile, which is not unique given the lack of a natural ordering in R^n for n > 1. Indeed, only very recently has the literature on multivariate quantiles proliferated.
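The quantile-matching idea behind the MSQ can be illustrated with a minimal sketch. The example below is purely illustrative and is not the thesis's estimator: it assumes a Gaussian DGP with unknown location and scale, an identity weighting matrix, and a crude grid search in place of a proper optimiser; all function and variable names are hypothetical.

```python
import random

def sample_quantiles(x, probs=(0.25, 0.5, 0.75)):
    """Empirical quantiles used as the vector of summary statistics."""
    xs = sorted(x)
    n = len(xs)
    return [xs[min(int(p * n), n - 1)] for p in probs]

def msq_distance(theta, obs_q, n_sim=5000):
    """Quadratic distance between observed and simulated quantiles.

    theta = (mu, sigma) of the assumed Gaussian DGP.  Re-seeding gives
    common random numbers across evaluations, as is customary in
    simulation-based estimation; the weighting matrix is the identity here.
    """
    mu, sigma = theta
    rng = random.Random(0)
    sim_q = sample_quantiles([rng.gauss(mu, sigma) for _ in range(n_sim)])
    return sum((o - s) ** 2 for o, s in zip(obs_q, sim_q))

# Toy usage: recover (mu, sigma) = (1, 2) by matching three quantiles.
rng = random.Random(42)
obs_q = sample_quantiles([rng.gauss(1.0, 2.0) for _ in range(2000)])
grid = [(m / 10, s / 10) for m in range(0, 21) for s in range(10, 31)]
mu_hat, sigma_hat = min(grid, key=lambda th: msq_distance(th, obs_q))
```

The same template applies whenever the model can be simulated but its likelihood cannot be evaluated; only the simulator and the summary statistics change.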
The MMSQ relies on the definition of projectional quantile of Hallin et al. (2010b) and Kong and Mizera (2012), which is a particular case of directional quantile. This definition is particularly appealing since it reduces the dimension of the problem from R^n to R by projecting the data towards given directions. Moreover, projectional quantiles incorporate information on the covariance between the projected variables, which is crucial in order to relax the assumption of independence between variables. An important methodological contribution of this thesis concerns the choice of the relevant directions along which to project the data in order to summarise the information on any given parameter of interest. Although including more directions can convey more information about the parameters, it comes at the cost of a larger number of expensive quantile evaluations. Of course, the number of quantile functions is unavoidably related to the dimension of the observables and strictly depends upon the specific distribution considered. We provide a general solution for Elliptical distributions and for those Skew-Elliptical distributions that are closed under linear combinations. We also establish consistency and asymptotic normality of the proposed MMSQ estimator under weak conditions on the underlying DGP. The conditions for consistency and asymptotic normality of the MMSQ are similar to those imposed by Dominicy and Veredas (2013), with minor changes due to the employed projectional quantiles. For the distributions considered in our illustrative examples, full details on how to calculate all the quantities involved in the asymptotic variance-covariance matrix are provided. The asymptotic variance-covariance matrix of the MMSQ estimator is helpful to derive its efficient version, the E-MMSQ. We then introduce an important methodological contribution that deals with the curse of dimensionality.
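A minimal sketch may clarify the projection step: a bivariate point cloud is projected onto a chosen direction, and the multivariate problem collapses to a univariate quantile. The code below is a toy illustration with hypothetical names, using a crude empirical quantile rather than the formal definition of Hallin et al. (2010b).

```python
import math
import random

def projectional_quantile(data, direction, tau):
    """Empirical tau-quantile of 2-D data projected onto a direction.

    Projecting the sample onto a unit vector reduces the problem from
    R^n to R, which is the device the MMSQ builds on.
    """
    norm = math.hypot(direction[0], direction[1])
    u = (direction[0] / norm, direction[1] / norm)
    proj = sorted(x * u[0] + y * u[1] for x, y in data)
    return proj[min(int(tau * len(proj)), len(proj) - 1)]

rng = random.Random(1)
cloud = [(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(4000)]
med_x = projectional_quantile(cloud, (1.0, 0.0), 0.50)   # marginal median
q_diag = projectional_quantile(cloud, (1.0, 1.0), 0.75)  # diagonal direction
```

Quantiles along non-axis directions, such as `q_diag` above, are the ones that pick up covariance information between the components.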
Specifically, the objective function of the MMSQ is penalised by adding a SCAD ℓ1-penalisation term that shrinks to zero the off-diagonal elements of either the scale matrix of the postulated distribution or its Cholesky factor. To this aim we propose two different algorithms: the first updates the scale matrix directly, and is similar to that proposed by Bien and Tibshirani (2011), while the second works on the Cholesky factor. We extend the asymptotic theory to accommodate sparse estimators, and we prove that the resulting sparse-MMSQ estimator enjoys the oracle properties of Fan and Li (2001) under mild regularity conditions. Moreover, since the chosen penalty is concave, it is necessary to construct a dedicated algorithm to solve the optimisation problem; this is another main contribution of the thesis. We also develop a method to choose the tuning parameters of the ℓ1-penalisation. The sparse-MMSQ estimator cannot easily be applied to estimate the parameters of conditional models, such as multivariate regression models (SURE) or vector autoregressive models (VAR), because the choice of the quantile-based summary statistics is not obvious in those circumstances. In those cases we rely on another simulation-based method, the Indirect Inference procedure introduced by Gouriéroux et al. (1993). Indeed, another important contribution of this work concerns the Indirect Inference estimator. Specifically, we introduce the penalised Indirect Inference method (S-II), obtained by adding a SCAD ℓ1-penalisation to the objective function of the II estimator. The proposed estimator can be effectively used to estimate high-dimensional conditional models. More importantly, the method can be used to provide a sparse estimator of both the regression parameters and the scale (or precision) matrices.
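For concreteness, the SCAD penalty of Fan and Li (2001) can be written down directly; the piecewise form below uses their suggested a = 3.7, and the way it is attached to a vector of off-diagonal entries is only a schematic stand-in for the actual sparse-MMSQ objective.

```python
def scad_penalty(theta, lam, a=3.7):
    """SCAD penalty of Fan and Li (2001): linear near zero (like the
    lasso), a quadratic blend, then constant so that large parameters
    are not over-shrunk."""
    t = abs(theta)
    if t <= lam:
        return lam * t
    if t <= a * lam:
        return -(t * t - 2.0 * a * lam * t + lam * lam) / (2.0 * (a - 1.0))
    return (a + 1.0) * lam * lam / 2.0

def penalised_objective(fit, offdiag, lam):
    """Schematic sparse objective: a goodness-of-fit term plus SCAD on
    the off-diagonal elements of the scale matrix (or Cholesky factor)."""
    return fit + sum(scad_penalty(t, lam) for t in offdiag)
```

Because the penalty is constant beyond a·λ, large entries incur only a fixed cost, which is what makes the oracle property attainable; the same non-convexity is also why a dedicated optimisation algorithm is needed.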
We develop the asymptotic theory and show that, under mild conditions on the data generating process, the S-II estimator enjoys the oracle property. We also provide a fast algorithm to deal with the non-concave optimisation and with high-dimensional matrices, and we discuss how to choose the tuning parameters. The proposed methods can be effectively used to make inference on the parameters of large-dimensional distributions such as, for example, the Stable, Elliptical Stable, Skew-Elliptical Stable, Copula, Multivariate Gamma and Tempered Stable distributions. Among those, the Stable distribution allows for infinite variance, skewness and heavy tails that exhibit power decay, allowing extreme events to carry higher probability mass than under a Gaussian model. Univariate Stable laws have been studied in many branches of science and their theoretical properties have been deeply investigated from multiple perspectives; therefore many tools are now available for estimation and inference on the parameters, for evaluating the cumulative distribution or quantile functions, and for fast simulation. The Stable distribution plays an interesting role in modelling multivariate data. Its heavy tails and its closedness under summation make it appealing in the financial context. Nevertheless, multivariate Stable laws pose several challenges that go far beyond the lack of a closed-form expression for the density. Although general expressions for the multivariate density have been provided by Abdul-Hamid and Nolan (1998), Byczkowski et al. (1993) and Matsui and Takemura (2009), their computation is still not feasible in dimensions larger than two. In this work we consider the Elliptical Stable distribution and the Skew-Elliptical distributions previously introduced by Branco and Dey (2001) as interesting applications of the MMSQ and of the S-II.
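As an illustration of the simulation tools mentioned above, a univariate α-stable draw can be generated with the Chambers-Mallows-Stuck method. The sketch below covers only the symmetric case (β = 0) in the standard parametrisation, where α = 1 recovers the Cauchy law and α = 2 a Gaussian with variance 2.

```python
import math
import random

def symmetric_stable(rng, alpha):
    """One draw from a standard symmetric alpha-stable law via the
    Chambers-Mallows-Stuck method (beta = 0 case only)."""
    u = rng.uniform(-math.pi / 2, math.pi / 2)
    w = rng.expovariate(1.0)
    if abs(alpha - 1.0) < 1e-12:
        return math.tan(u)  # alpha = 1: the Cauchy distribution
    factor = math.sin(alpha * u) / math.cos(u) ** (1.0 / alpha)
    return factor * (math.cos((1.0 - alpha) * u) / w) ** ((1.0 - alpha) / alpha)

rng = random.Random(3)
cauchy = sorted(symmetric_stable(rng, 1.0) for _ in range(20000))
gauss2 = [symmetric_stable(rng, 2.0) for _ in range(20000)]
```

The general asymmetric case adds a shift and scale that depend on β; the symmetric version above is enough to generate the elliptical building blocks used in the examples.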
As regards applications to real data, we consider the well-known portfolio optimisation problem, where the performance of the ESD distributions is investigated. Portfolio optimisation has a long tradition in finance since the seminal paper of Markowitz (1952), which introduced the mean-variance (MVO) approach. The MVO approach relies on quite restrictive conditions on the underlying DGP, which are relaxed by assuming multivariate Stable distributions. Furthermore, the assumption of elliptically contoured joint returns has important implications for the risk measures that can be considered in the portfolio optimisation problem. The usual consequence of the elliptical assumption is that Value-at-Risk and variance are coherent risk measures; see Artzner et al. (1999). However, since Stable distributions do not have a finite second moment, we consider a portfolio allocation problem where the expected return is traded off against higher Value-at-Risk profiles that make the investment less attractive.
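The return-versus-VaR trade-off described above can be made concrete with a Monte-Carlo toy. It is only a sketch: Student-t draws stand in for Stable returns (which would need a dedicated simulator), the two assets are hypothetical, and VaR is read off the empirical quantile.

```python
import random

def student_t(rng, df):
    """Heavy-tailed draw: standard Student-t with df degrees of freedom."""
    z = rng.gauss(0.0, 1.0)
    v = rng.gammavariate(df / 2.0, 2.0)  # chi-square with df d.o.f.
    return z / (v / df) ** 0.5

def value_at_risk(returns, alpha=0.05):
    """Empirical VaR: the loss threshold exceeded with probability alpha."""
    return -sorted(returns)[int(alpha * len(returns))]

rng = random.Random(7)
# Hypothetical assets: low mean / light tails vs higher mean / heavy tails.
safe = [0.02 + 0.5 * student_t(rng, 30) for _ in range(20000)]
risky = [0.05 + 1.0 * student_t(rng, 3) for _ in range(20000)]

profiles = []
for w in (0.0, 0.5, 1.0):  # weight on the heavy-tailed asset
    port = [(1 - w) * s + w * r for s, r in zip(safe, risky)]
    profiles.append((sum(port) / len(port), value_at_risk(port)))
```

Moving along w trades a higher expected return against a higher VaR, which is the frontier considered here in place of the mean-variance one.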
dc.publisher: Università degli studi Roma Tre
dc.type: Doctoral Thesis
dc.subject.miur: Settori Disciplinari MIUR::Scienze economiche e statistiche::STATISTICA ECONOMICA
dc.subject.isicrui: Categorie ISI-CRUI::Scienze economiche e statistiche
dc.subject.anagraferoma3: Scienze economiche e statistiche
dc.description.romatrecurrent: Dipartimento di Economia
item.fulltext: With Fulltext
Appears in Collections: Dipartimento di Economia
T - Tesi di dottorato
Files in This Item:
File: PhDThesis_StolfiPaola_def.pdf (1.95 MB, Adobe PDF)
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.