elnet.fit.Rd
Solves the weighted least squares (WLS) problem for a single lambda value. Internal function that users should not call directly.
Input matrix, of dimension nobs x nvars; each row is an
observation vector. If it is a sparse matrix, it is assumed to be unstandardized.
In that case it should carry attributes xm and xs, where xm(j) and xs(j)
are the centering and scaling factors for variable j respectively.
If it is not a sparse matrix, it is assumed that any standardization needed
has already been done.
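As an illustration only (not the package's own preprocessing), attributes of this kind could be attached to a sparse input along the following lines; w here is a hypothetical vector of observation weights summing to 1.

library(Matrix)
x <- rsparsematrix(100, 5, density = 0.2)
w <- rep(1 / 100, 100)               # hypothetical observation weights, summing to 1
xm <- colSums(x * w)                 # weighted column means
xs <- sqrt(colSums(x^2 * w) - xm^2)  # weighted column standard deviations
attr(x, "xm") <- xm
attr(x, "xs") <- xs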
Quantitative response variable.
Observation weights. elnet.fit
does NOT standardize
these weights.
A single value for the lambda
hyperparameter.
The elasticnet mixing parameter, with \(0 \le \alpha \le 1\).
The penalty is defined as $$\frac{1-\alpha}{2}\|\beta\|_2^2+\alpha\|\beta\|_1.$$
alpha=1
is the lasso penalty, and alpha=0
the ridge penalty.
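As a plain R illustration (not code from the package), the penalty above can be evaluated directly:

elnet_penalty <- function(beta, alpha) {
  (1 - alpha) / 2 * sum(beta^2) + alpha * sum(abs(beta))
}
elnet_penalty(c(0.5, -1, 2), alpha = 1)  # lasso: sum of |beta_j|
elnet_penalty(c(0.5, -1, 2), alpha = 0)  # ridge: half the squared l2 norm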
Should intercept be fitted (default=TRUE) or set to zero (FALSE)?
Convergence threshold for coordinate descent. Each inner
coordinate-descent loop continues until the maximum change in the objective
after any coefficient update is less than thresh times the null deviance.
Default value is 1e-7.
Maximum number of passes over the data; default is 10^5.
(If a warm start object is provided, the number of passes the warm start object
performed is included.)
Separate penalty factors can be applied to each coefficient; each factor is a
number that multiplies lambda to allow differential shrinkage. Can be 0 for
some variables, which implies no shrinkage and that the variable is always
included in the model. Default is 1 for all variables (and implicitly infinity
for variables listed in exclude). Note: the penalty factors are internally
rescaled to sum to nvars.
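A minimal sketch of this rescaling (illustrative names, not the package's internal code):

pf <- c(1, 1, 0, 2)                 # one factor per variable; 0 means no shrinkage
nvars <- length(pf)
pf_rescaled <- pf * nvars / sum(pf)
sum(pf_rescaled)                    # equals nvars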
Indices of variables to be excluded from the model. Default is none. Equivalent to an infinite penalty factor.
Vector of lower limits for each coefficient; default -Inf. Each of these must
be non-positive. Can be presented as a single value (which will then be
replicated), else a vector of length nvars.
Vector of upper limits for each coefficient; default Inf. See lower.limits.
Either a glmnetfit
object or a list (with names beta
and a0
containing coefficients and intercept respectively) which can
be used as a warm start. Default is NULL
, indicating no warm start.
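For example, a warm start could be supplied as a plain list (an illustrative sketch; the dimensions and values here are arbitrary):

nvars <- 5
warm <- list(beta = matrix(0, nrow = nvars, ncol = 1),  # coefficient vector
             a0 = 0)                                    # intercept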
For internal use only.
Was elnet.fit() called from glmnet.fit()? Default is FALSE. This has
implications for computation of the penalty factors.
Return the warm start object? Default is FALSE.
An object with class "glmnetfit" and "glmnet". The list returned has
the same keys as that of a glmnet
object, except that it might have an
additional warm_fit
key.
Intercept value.
An nvars x 1
matrix of coefficients, stored in sparse matrix
format.
The number of nonzero coefficients.
Dimension of coefficient matrix.
Lambda value used.
The fraction of (null) deviance explained. The deviance calculations incorporate weights if present in the model. The deviance is defined to be 2*(loglike_sat - loglike), where loglike_sat is the log-likelihood for the saturated model (a model with a free parameter per observation). Hence dev.ratio=1-dev/nulldev.
Null deviance (per observation). This is defined to be 2*(loglike_sat - loglike(Null)). The null model refers to the intercept model.
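For the Gaussian/WLS case these quantities reduce to weighted sums of squares. A rough sketch of the relationship (illustrative data, not the internal computation, and ignoring the per-observation scaling):

y  <- c(1.2, 0.7, 2.5, 1.9)
w  <- rep(0.25, 4)
mu <- c(1.0, 0.9, 2.3, 2.0)                        # fitted values from some model
dev       <- sum(w * (y - mu)^2)                   # deviance of the fitted model
nulldev   <- sum(w * (y - weighted.mean(y, w))^2)  # deviance of the intercept-only model
dev.ratio <- 1 - dev / nulldev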
Total passes over the data.
Error flag, for warnings and errors (largely for internal debugging).
Always FALSE, since offsets do not appear in the WLS problem. Included for compatibility with glmnet output.
The call that produced this object.
Number of observations.
If save.fit=TRUE
, output of C++ routine, used for
warm starts. For internal use only.
WARNING: Users should not call elnet.fit
directly. Higher-level functions
in this package call elnet.fit
as a subroutine. If a warm start object
is provided, some of the other arguments in the function may be overridden.
elnet.fit
is essentially a wrapper around a C++ subroutine which
minimizes
$$\frac{1}{2} \sum_i w_i (y_i - X_i^T \beta)^2 + \lambda \sum_j \gamma_j \left[\frac{1-\alpha}{2} \beta_j^2 + \alpha |\beta_j|\right],$$
over \(\beta\), where \(\gamma_j\) is the relative penalty factor on the
jth variable. If intercept = TRUE
, then the term in the first sum is
\(w_i (y_i - \beta_0 - X_i^T \beta)^2\), and we are minimizing over both
\(\beta_0\) and \(\beta\).
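For reference, the objective above can be written out in plain R (an illustrative evaluation only; the actual minimization is performed by the compiled routine):

wls_objective <- function(beta0, beta, x, y, weights, lambda, alpha, penalty.factor) {
  r <- y - beta0 - as.vector(x %*% beta)
  0.5 * sum(weights * r^2) +
    lambda * sum(penalty.factor * ((1 - alpha) / 2 * beta^2 + alpha * abs(beta)))
}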
None of the inputs are standardized except for penalty.factor, which is
rescaled internally so that its entries sum to nvars.