
📄 mcmcmixfactanal.rd

Source code for Markov chain Monte Carlo (MCMC) simulation in R.
\name{MCMCmixfactanal}
\alias{MCMCmixfactanal}
\title{Markov Chain Monte Carlo for Mixed Data Factor Analysis Model}
\description{
  This function generates a sample from the posterior distribution of a
  mixed data (both continuous and ordinal) factor analysis model. Normal
  priors are assumed on the factor loadings and factor scores, improper
  uniform priors are assumed on the cutpoints, and inverse gamma priors
  are assumed for the error variances (uniquenesses). The user supplies
  data and parameters for the prior distributions, and a sample from the
  posterior distribution is returned as an mcmc object, which can be
  subsequently analyzed with functions provided in the coda package.
}
\usage{
MCMCmixfactanal(x, factors, lambda.constraints=list(),
                data=parent.frame(), burnin = 1000, mcmc = 20000,
                thin=1, tune=NA, verbose = 0, seed = NA,
                lambda.start = NA, psi.start=NA,
                l0=0, L0=0, a0=0.001, b0=0.001,
                store.lambda=TRUE, store.scores=FALSE,
                std.mean=TRUE, std.var=TRUE, ... )
}
\arguments{
  \item{x}{A one-sided formula containing the manifest variables.
    Ordinal (including dichotomous) variables must be coded as ordered
    factors. Each level of these ordered factors must be present in the
    data passed to the function. NOTE: data input is different in
    \code{MCMCmixfactanal} than in either \code{MCMCfactanal} or
    \code{MCMCordfactanal}.}

  \item{factors}{The number of factors to be fitted.}

  \item{lambda.constraints}{List of lists specifying possible equality
    or simple inequality constraints on the factor loadings.
    A typical entry in the list has one of three forms:
    \code{varname=list(d,c)}, which will constrain the dth loading for
    the variable named varname to be equal to c;
    \code{varname=list(d,"+")}, which will constrain the dth loading
    for the variable named varname to be positive; and
    \code{varname=list(d,"-")}, which will constrain the dth loading
    for the variable named varname to be negative. If x is a matrix
    without column names, default names of ``V1'', ``V2'', etc. will be
    used. Note that, unlike \code{MCMCfactanal}, the
    \eqn{\Lambda}{Lambda} matrix used here has \code{factors}+1
    columns. The first column of \eqn{\Lambda}{Lambda} corresponds to
    negative item difficulty parameters for ordinal manifest variables
    and mean parameters for continuous manifest variables, and should
    generally not be constrained directly by the user.}

  \item{data}{A data frame.}

  \item{burnin}{The number of burn-in iterations for the sampler.}

  \item{mcmc}{The number of iterations for the sampler.}

  \item{thin}{The thinning interval used in the simulation. The number
    of iterations must be divisible by this value.}

  \item{tune}{The tuning parameter for the Metropolis-Hastings
    sampling. Can be either a scalar or a \eqn{k}{k}-vector (where
    \eqn{k}{k} is the number of manifest variables). \code{tune} must
    be strictly positive.}

  \item{verbose}{A switch which determines whether or not the progress
    of the sampler is printed to the screen. If \code{verbose} is
    greater than 0, the iteration number and the Metropolis-Hastings
    acceptance rate are printed to the screen every \code{verbose}th
    iteration.}

  \item{seed}{The seed for the random number generator. If NA, the
    Mersenne Twister generator is used with default seed 12345; if an
    integer is passed it is used to seed the Mersenne Twister.
    The user can also pass a list of length two to use the L'Ecuyer
    random number generator, which is suitable for parallel
    computation. The first element of the list is the L'Ecuyer seed,
    which is a vector of length six or NA (if NA a default seed of
    \code{rep(12345,6)} is used). The second element of the list is a
    positive substream number. See the MCMCpack specification for more
    details.}

  \item{lambda.start}{Starting values for the factor loading matrix
    Lambda. If \code{lambda.start} is set to a scalar, the starting
    value for all unconstrained loadings will be set to that scalar. If
    \code{lambda.start} is a matrix of the same dimensions as Lambda,
    then the \code{lambda.start} matrix is used as the starting values
    (except for equality-constrained elements). If \code{lambda.start}
    is set to \code{NA} (the default), then starting values for
    unconstrained elements in the first column of Lambda are based on
    the observed response pattern, the remaining unconstrained elements
    of Lambda are set to 0, and starting values for inequality
    constrained elements are set to either 1.0 or -1.0 depending on the
    nature of the constraints.}

  \item{psi.start}{Starting values for the error variance (uniqueness)
    matrix. If \code{psi.start} is set to a scalar, then the starting
    value for all diagonal elements of \code{Psi} that represent error
    variances for continuous variables are set to this value. If
    \code{psi.start} is a \eqn{k}{k}-vector (where \eqn{k}{k} is the
    number of manifest variables), then the starting value of
    \code{Psi} has \code{psi.start} on the main diagonal, with the
    exception that entries corresponding to error variances for ordinal
    variables are set to 1. If \code{psi.start} is set to \code{NA}
    (the default), the starting values of all the continuous variable
    uniquenesses are set to 0.5.
    Error variances for ordinal response variables are always
    constrained (regardless of the value of \code{psi.start}) to have
    an error variance of 1 in order to achieve identification.}

  \item{l0}{The means of the independent Normal prior on the factor
    loadings. Can be either a scalar or a matrix with the same
    dimensions as \code{Lambda}.}

  \item{L0}{The precisions (inverse variances) of the independent
    Normal prior on the factor loadings. Can be either a scalar or a
    matrix with the same dimensions as \code{Lambda}.}

  \item{a0}{Controls the shape of the inverse Gamma prior on the
    uniquenesses. The actual shape parameter is set to \code{a0/2}. Can
    be either a scalar or a \eqn{k}{k}-vector.}

  \item{b0}{Controls the scale of the inverse Gamma prior on the
    uniquenesses. The actual scale parameter is set to \code{b0/2}. Can
    be either a scalar or a \eqn{k}{k}-vector.}

  \item{store.lambda}{A switch that determines whether or not to store
    the factor loadings for posterior analysis. By default, the factor
    loadings are all stored.}

  \item{store.scores}{A switch that determines whether or not to store
    the factor scores for posterior analysis. \emph{NOTE: This takes an
    enormous amount of memory, so should only be used if the chain is
    thinned heavily, or for applications with a small number of
    observations}. By default, the factor scores are not stored.}

  \item{std.mean}{If \code{TRUE} (the default), the continuous manifest
    variables are rescaled to have zero mean.}

  \item{std.var}{If \code{TRUE} (the default), the continuous manifest
    variables are rescaled to have unit variance.}

  \item{...}{Further arguments to be passed.}
}
\value{
  An mcmc object that contains the posterior sample.
  This object can be summarized by functions provided by the coda
  package.
}
\details{
  The model takes the following form. Let \eqn{i=1,\ldots,N}{i=1,...,N}
  index observations and \eqn{j=1,\ldots,K}{j=1,...,K} index response
  variables within an observation. An observed variable
  \eqn{x_{ij}}{x_ij} can be either ordinal with a total of
  \eqn{C_j}{C_j} categories or continuous. The distribution of
  \eqn{X}{X} is governed by a \eqn{N \times K}{N by K} matrix of latent
  variables \eqn{X^*}{Xstar} and a series of cutpoints
  \eqn{\gamma}{gamma}. \eqn{X^*}{Xstar} is assumed to be generated
  according to:

  \deqn{x^*_i = \Lambda \phi_i + \epsilon_i}{xstar_i = Lambda phi_i +
    epsilon_i}
  \deqn{\epsilon_i \sim \mathcal{N}(0,\Psi)}{epsilon_i ~ N(0, Psi)}

  where \eqn{x^*_i}{xstar_i} is the \eqn{k}{k}-vector of latent
  variables specific to observation \eqn{i}{i}, \eqn{\Lambda}{Lambda}
  is the \eqn{k \times d}{k by d} matrix of factor loadings, and
  \eqn{\phi_i}{phi_i} is the \eqn{d}{d}-vector of latent factor scores.
  It is assumed that the first element of \eqn{\phi_i}{phi_i} is equal
  to 1 for all \eqn{i}{i}.

  If the \eqn{j}{j}th variable is ordinal, the probability that it
  takes the value \eqn{c}{c} in observation \eqn{i}{i} is:

  \deqn{
    \pi_{ijc} = \Phi(\gamma_{jc} - \Lambda'_j\phi_i) -
    \Phi(\gamma_{j(c-1)} - \Lambda'_j\phi_i)
  }{
    pi_ijc = pnorm(gamma_jc - Lambda'_j phi_i) -
    pnorm(gamma_j(c-1) - Lambda'_j phi_i)
  }

  If the \eqn{j}{j}th variable is continuous, it is assumed that
  \eqn{x^*_{ij} = x_{ij}}{xstar_ij = x_ij} for all \eqn{i}{i}.

  The implementation used here assumes independent conjugate priors
  for each element of \eqn{\Lambda}{Lambda} and each
  \eqn{\phi_i}{phi_i}.
  More specifically we assume:

  \deqn{\Lambda_{ij} \sim \mathcal{N}(l_{0_{ij}}, L_{0_{ij}}^{-1}),
    i=1,\ldots,k, j=1,\ldots,d}{Lambda_ij ~ N(l0_ij, L0_ij^-1),
    i=1,...,k, j=1,...,d}

  \deqn{\phi_{i(2:d)} \sim \mathcal{N}(0, I),
    i=1,\ldots,n}{phi_i(2:d) ~ N(0, I), i=1,...,n}

  \code{MCMCmixfactanal} simulates from the posterior distribution
  using a Metropolis-Hastings within Gibbs sampling algorithm. The
  algorithm employed is based on work by Cowles (1996). Note that the
  first element of \eqn{\phi_i}{phi_i} is a 1. As a result, the first
  column of \eqn{\Lambda}{Lambda} can be interpreted as negative item
  difficulty parameters. Further, the first element
  \eqn{\gamma_1}{gamma_1} is normalized to zero, and thus not returned
  in the mcmc object.

  The simulation proper is done in compiled C++ code to maximize
  efficiency. Please consult the coda documentation for a comprehensive
  list of functions that can be used to analyze the posterior sample.

  As is the case with all measurement models, make sure that you have
  plenty of free memory, especially when storing the scores.
}
\references{
  M. K. Cowles. 1996. ``Accelerating Monte Carlo Markov Chain
  Convergence for Cumulative-link Generalized Linear Models.''
  \emph{Statistics and Computing}. 6: 101-110.

  Valen E. Johnson and James H. Albert. 1999. \emph{Ordinal Data
  Modeling}. Springer: New York.

  Daniel Pemstein, Kevin M. Quinn, and Andrew D. Martin. 2007.
  \emph{Scythe Statistical Library 1.0}. \url{http://scythe.wustl.edu}.

  Martyn Plummer, Nicky Best, Kate Cowles, and Karen Vines. 2002.
  \emph{Output Analysis and Diagnostics for MCMC (CODA)}.
  \url{http://www-fis.iarc.fr/coda/}.

  Kevin M. Quinn. 2004. ``Bayesian Factor Analysis for Mixed Ordinal
  and Continuous Responses.'' \emph{Political Analysis}. 12: 338-353.
}
\examples{
\dontrun{
data(PErisk)
post <- MCMCmixfactanal(~courts+barb2+prsexp2+prscorr2+gdpw2,
                        factors=1, data=PErisk,
                        lambda.constraints = list(courts=list(2,"-")),
                        burnin=5000, mcmc=1000000, thin=50,
                        verbose=500, L0=.25, store.lambda=TRUE,
                        store.scores=TRUE, tune=1.2)
plot(post)
summary(post)

library(MASS)
data(Cars93)
attach(Cars93)
new.cars <- data.frame(Price, MPG.city, MPG.highway,
                       Cylinders, EngineSize, Horsepower,
                       RPM, Length, Wheelbase, Width, Weight, Origin)
rownames(new.cars) <- paste(Manufacturer, Model)
detach(Cars93)

# drop obs 57 (Mazda RX-7) b/c it has a rotary engine
new.cars <- new.cars[-57,]
# drop 3 cylinder cars
new.cars <- new.cars[new.cars$Cylinders!=3,]
# drop 5 cylinder cars
new.cars <- new.cars[new.cars$Cylinders!=5,]

new.cars$log.Price <- log(new.cars$Price)
new.cars$log.MPG.city <- log(new.cars$MPG.city)
new.cars$log.MPG.highway <- log(new.cars$MPG.highway)
new.cars$log.EngineSize <- log(new.cars$EngineSize)
new.cars$log.Horsepower <- log(new.cars$Horsepower)

new.cars$Cylinders <- ordered(new.cars$Cylinders)
new.cars$Origin    <- ordered(new.cars$Origin)

post <- MCMCmixfactanal(~log.Price+log.MPG.city+
                        log.MPG.highway+Cylinders+log.EngineSize+
                        log.Horsepower+RPM+Length+
                        Wheelbase+Width+Weight+Origin, data=new.cars,
                        lambda.constraints=list(log.Horsepower=list(2,"+"),
                                                log.Horsepower=c(3,0),
                                                Weight=list(3,"+")),
                        factors=2,
                        burnin=5000, mcmc=500000, thin=100, verbose=500,
                        L0=.25, tune=3.0)
plot(post)
summary(post)
}
}
\keyword{models}
\seealso{\code{\link[coda]{plot.mcmc}}, \code{\link[coda]{summary.mcmc}},
  \code{\link[stats]{factanal}}, \code{\link[MCMCpack]{MCMCfactanal}},
  \code{\link[MCMCpack]{MCMCordfactanal}},
  \code{\link[MCMCpack]{MCMCirt1d}}, \code{\link[MCMCpack]{MCMCirtKd}}}
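As a quick numerical check of the ordinal response probability in the details section, the cell probability pi_ijc can be computed directly with `pnorm`. This is an illustrative sketch only: the cutpoints, loadings, and factor scores below are made-up values for a single hypothetical ordinal item with three categories, not output from `MCMCmixfactanal` (note gamma_1 is fixed at 0, as the documentation describes).

```r
# Hypothetical quantities for one ordinal item j with C_j = 3 categories.
# Cutpoints bracketed by -Inf and Inf; gamma_1 normalized to 0.
gamma    <- c(-Inf, 0, 1.2, Inf)
lambda_j <- c(-0.3, 0.8)  # row j of Lambda; first entry is the negative item difficulty
phi_i    <- c(1, 0.5)     # factor scores; first element is fixed at 1

# Linear predictor Lambda'_j phi_i
eta <- sum(lambda_j * phi_i)

# pi_ijc = pnorm(gamma_c - eta) - pnorm(gamma_{c-1} - eta), for c = 1..3
probs <- pnorm(gamma[-1] - eta) - pnorm(gamma[-length(gamma)] - eta)

print(probs)
print(sum(probs))  # the category probabilities sum to 1
```

Because the cutpoints partition the real line, the probabilities always sum to one regardless of the (hypothetical) parameter values chosen.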
