
var.jack.Rd

%% $Id: var.jack.Rd 142 2007-10-01 13:10:25Z bhm $
\encoding{latin1}
\name{var.jack}
\alias{var.jack}
\title{Jackknife Variance Estimates of Regression Coefficients}
\description{
  Calculates jackknife variance or covariance estimates of regression
  coefficients.
}
\usage{
var.jack(object, ncomp = object$ncomp, covariance = FALSE, use.mean = TRUE)
}
\arguments{
  \item{object}{an \code{mvr} object.  A cross-validated model fitted
    with \code{jackknife = TRUE}.}
  \item{ncomp}{the number of components to use for estimating the
    (co)variances.}
  \item{covariance}{logical.  If \code{TRUE}, covariances are
    calculated; otherwise only variances.  The default is \code{FALSE}.}
  \item{use.mean}{logical.  If \code{TRUE} (default), the mean
    coefficients are used when estimating the (co)variances; otherwise
    the coefficients from a model fitted to the entire data set.  See
    Details.}
}
\details{
  The original (Tukey) jackknife variance estimator is defined as
  \eqn{(g-1)/g \sum_{i=1}^g (\tilde\beta_{-i} - \bar\beta)^2}, where
  \eqn{g} is the number of segments, \eqn{\tilde\beta_{-i}} is the
  estimated coefficient when segment \eqn{i} is left out (called the
  jackknife replicates), and \eqn{\bar\beta} is the mean of the
  \eqn{\tilde\beta_{-i}}.  The most common case is the delete-one
  jackknife, with \eqn{g = n}, the number of observations.  This is
  the definition \code{var.jack} uses by default.

  However, Martens and Martens (2000) defined the estimator as
  \eqn{(g-1)/g \sum_{i=1}^g (\tilde\beta_{-i} - \hat\beta)^2}, where
  \eqn{\hat\beta} is the coefficient estimate from the entire data
  set; i.e., they use the original fitted coefficients instead of the
  mean of the jackknife replicates.  Most (all?) other jackknife
  implementations for PLSR use this estimator.  \code{var.jack} can be
  made to use this definition with \code{use.mean = FALSE}.  In
  practice, the difference should be small if the number of
  observations is sufficiently large.
  Note, however, that all theoretical results about the jackknife
  refer to the `proper' definition.  (Also note that this option might
  disappear in a future version.)
}
\value{
  If \code{covariance} is \code{FALSE}, a \eqn{p\times q \times c}
  array of variance estimates, where \eqn{p} is the number of
  predictors, \eqn{q} is the number of responses, and \eqn{c} is the
  number of components.  If \code{covariance} is \code{TRUE}, a
  \eqn{pq\times pq \times c} array of variance-covariance estimates.
}
\section{Warning}{
  Note that the Tukey jackknife variance estimator is not unbiased for
  the variance of regression coefficients (Hinkley 1977).  The bias
  depends on the \eqn{X} matrix.  For ordinary least squares
  regression (OLSR), the bias can be calculated, and depends on the
  number of observations \eqn{n} and the number of parameters \eqn{k}
  in the model.  For the common case of an orthogonal design matrix
  with \eqn{\pm 1}{+/- 1} levels, the delete-one jackknife estimate
  equals \eqn{(n-1)/(n-k)} times the classical variance estimate of
  the regression coefficients in OLSR.
}
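A minimal usage sketch of the workflow described above. It assumes the pls package, whose `plsr` fitting function and bundled `yarn` data set are not part of this help page and are used here only for illustration; the key point is that the model must be cross-validated with `jackknife = TRUE` before `var.jack` can be called.

```r
## Sketch only: 'plsr' and the 'yarn' data set are assumed from the
## pls package; names outside this help page may differ.
library(pls)
data(yarn)

## Cross-validated PLSR fit; jackknife = TRUE stores the coefficient
## replicates that var.jack needs.
mod <- plsr(density ~ NIR, ncomp = 6, data = yarn,
            validation = "LOO", jackknife = TRUE)

## Tukey jackknife variances (default: centred on the mean replicate)
v.tukey <- var.jack(mod, ncomp = 6)

## Martens & Martens (2000) variant: centred on the full-data fit
v.mm <- var.jack(mod, ncomp = 6, use.mean = FALSE)

## With covariance = FALSE (the default) the result is a
## p x q x c array: one variance per coefficient per component choice.
dim(v.tukey)
```

With a sufficiently large number of observations, `v.tukey` and `v.mm` should differ only slightly, as the Details section notes.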
