nlminb {stats}                                              R Documentation
Optimization using PORT routines

Description:

     Unconstrained and box-constrained optimization using PORT routines.
Usage:

     nlminb(start, objective, gradient = NULL, hessian = NULL,
            scale = 1, control = list(), lower = -Inf, upper = Inf, ...)
Arguments:

   start      numeric vector, initial values for the parameters to be
              optimized.

   objective  function to be minimized.  Must return a scalar value
              (possibly NA/Inf).  The first argument to 'objective' is
              the vector of parameters to be optimized, whose initial
              values are supplied through 'start'.  Further arguments
              (fixed during the course of the optimization) to
              'objective' may be specified as well (see '...').

   gradient   optional function that takes the same arguments as
              'objective' and evaluates the gradient of 'objective' at
              its first argument.  Must return a vector as long as
              'start'.

   hessian    optional function that takes the same arguments as
              'objective' and evaluates the Hessian of 'objective' at
              its first argument.  Must return a square matrix of order
              'length(start)'.  Only the lower triangle is used.

   scale      vector of scaling values for the parameters; see the PORT
              documentation (or leave alone).

   control    a list of control parameters.  See below for details.

   lower, upper
              vectors of lower and upper bounds, replicated to be as
              long as 'start'.  If unspecified, all parameters are
              assumed to be unconstrained.

   ...        further arguments to be supplied to 'objective'.
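As a quick illustration of how fixed data is forwarded through '...' to the objective (a minimal sketch; the function 'f' and its 'target' argument are invented for this example):

```r
## f takes the parameter vector first; 'target' stays fixed during
## the optimization and is passed via '...' in the nlminb() call
f <- function(x, target) sum((x - target)^2)

fit <- nlminb(start = c(0, 0), objective = f, target = c(3, -1))
fit$par  # close to c(3, -1)
```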
Value:

     A list with components:

   par          the best set of parameters found.

   objective    the value of 'objective' corresponding to 'par'.

   convergence  an integer code; 0 indicates successful convergence.

   message      a character string giving any additional information
                returned by the optimizer, or NULL.  For details, see
                the PORT documentation.

   iterations   number of iterations performed.

   evaluations  number of objective function and gradient function
                evaluations.
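A small sketch of inspecting these components (the quadratic objective below is invented for illustration; its minimum is known to be at c(1, 2, 3)):

```r
## simple quadratic with known minimum at c(1, 2, 3)
quad <- function(x) sum((x - c(1, 2, 3))^2)
res <- nlminb(rep(0, 3), quad)

res$par          # best parameters found, close to c(1, 2, 3)
res$objective    # minimized value, near 0
res$convergence  # 0 on successful convergence
res$evaluations  # counts of objective and gradient evaluations
```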
Control parameters:

     Possible names in the 'control' list and their default values are:

   eval.max   maximum number of evaluations of the objective function
              allowed.

   iter.max   maximum number of iterations allowed.

   trace      the value of the objective function and the parameters is
              printed every trace'th iteration; defaults to 0 (no trace
              information is printed).

   abs.tol    absolute tolerance; defaults to 1e-20.

   rel.tol    relative tolerance; defaults to 1e-10.

   x.tol      X tolerance; defaults to 1.5e-8.

   step.min   minimum step size; defaults to 2.2e-14.
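For example, the 'control' list can be used to cap the iteration count and tighten the relative tolerance (a sketch; the objective function here is invented):

```r
## minimum at c(2, -1); stop after at most 50 iterations and
## require a tighter relative tolerance than the default
f <- function(x) (x[1] - 2)^2 + (x[2] + 1)^2
res <- nlminb(c(10, -10), f,
              control = list(iter.max = 50, rel.tol = 1e-12))
res$par  # approximately c(2, -1)
```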
Author(s):

     R port: Douglas Bates and Deepayan Sarkar.

References:

     http://netlib.bell-labs.com/netlib/port/
See Also:

     'optimize' for one-dimensional minimization and 'constrOptim' for
     constrained optimization.
Examples:

     x <- rnbinom(100, mu = 10, size = 10)
     hdev <- function(par)
         -sum(dnbinom(x, mu = par[1], size = par[2], log = TRUE))
     nlminb(c(9, 12), hdev)
     nlminb(c(20, 20), hdev, lower = 0, upper = Inf)
     nlminb(c(20, 20), hdev, lower = 0.001, upper = Inf)

     ## slightly modified from the S-PLUS help page for nlminb
     # this example minimizes a sum of squares with known solution y
     sumsq <- function(x, y) sum((x - y)^2)
     y <- rep(1, 5)
     x0 <- rnorm(length(y))
     nlminb(start = x0, objective = sumsq, y = y)

     # now use bounds with a y that has some components outside the bounds
     y <- c(0, 2, 0, -2, 0)
     nlminb(start = x0, objective = sumsq, lower = -1, upper = 1, y = y)

     # try using the gradient
     sumsq.g <- function(x, y) 2 * (x - y)
     nlminb(start = x0, objective = sumsq, gradient = sumsq.g,
            lower = -1, upper = 1, y = y)

     # now use the hessian, too
     sumsq.h <- function(x, y) diag(2, nrow = length(x))
     nlminb(start = x0, objective = sumsq, gradient = sumsq.g,
            hessian = sumsq.h, lower = -1, upper = 1, y = y)

     ## Rest lifted from optim help page
     fr <- function(x) {   ## Rosenbrock Banana function
         x1 <- x[1]
         x2 <- x[2]
         100 * (x2 - x1 * x1)^2 + (1 - x1)^2
     }
     grr <- function(x) {  ## Gradient of 'fr'
         x1 <- x[1]
         x2 <- x[2]
         c(-400 * x1 * (x2 - x1 * x1) - 2 * (1 - x1),
           200 * (x2 - x1 * x1))
     }
     nlminb(c(-1.2, 1), fr)
     nlminb(c(-1.2, 1), fr, grr)

     flb <- function(x) {
         p <- length(x)
         sum(c(1, rep(4, p - 1)) * (x - c(1, x[-p])^2)^2)
     }
     ## 25-dimensional box constrained
     ## par[24] is *not* at boundary
     nlminb(rep(3, 25), flb, lower = rep(2, 25), upper = rep(4, 25))