[R] The smallest enclosing ball problem

Berend Hasselman bhh at xs4all.nl
Sun Nov 17 07:28:53 CET 2013


Forgot to forward my answer to R-help.

Berend


On 16-11-2013, at 13:11, Hans W.Borchers <hwborchers at googlemail.com> wrote:

> I wanted to solve the following geometric optimization problem, sometimes 
> called the "enclosing ball problem":
> 
>   Given a set P = {p_1, ..., p_n} of n points in R^d, find a point p_0 such 
>   that max ||p_i - p_0|| is minimized.
> 
> A known algorithm to solve this as a Quadratic Programming task is as follows:
> 
>   Define matrix C = (p_1, ..., p_n), i.e. coordinates of points in columns,
>   and minimize
>                   x' C' C x - \sum (p_i' p_i) x_i
>   subject to
>                   \sum x_i = 1, x_i >= 0 .
> 
> Let x0 = (x0_1, ..., x0_n) be an optimal solution, then the linear combination
> 
>   p0 = \sum x0_i p_i
> 
> is the center of the smallest enclosing ball, and the negative value of the 
> objective function at x0 is the squared radius of the ball.
> 
> As an example, find the smallest ball enclosing the points (1,0,0), (0,1,0), 
> and (0.5, 0.5, 0.1) in 3-dim. space. We would expect the center of the ball 
> to be at (0.5, 0.5, 0) and the radius to be 1/sqrt(2).
> 
>   C <- matrix( c(1, 0, 0.5,
>                  0, 1, 0.5,
>                  0, 0, 0.1), ncol = 3, byrow = TRUE)
> 
>   # We need to define D = 2 C'C, d = (p_i' p_i), A = c(1,1,1), and b = 1
>   D <- 2 * t(C) %*% C
>   d <- apply(C^2, 2, sum)
>   A <- matrix(1, nrow = 1, ncol = 3)
>   b <- 1
> 
>   # Now solve with solve.QP() in package quadprog ...
>   require(quadprog)
>   sol1 <- solve.QP(D, d, t(A), b, meq = 1)
> 
>   # ... and check the result
>   sum(sol1$solution)                  # 1
>   p0 <- c(C %*% sol1$solution)        # 0.50  0.50 -2.45
>   r0 <- sqrt(-sol1$value)             # 2.55
> 
>   # distance of all points to the center
>   sqrt(colSums((C - p0)^2))           # 2.55 2.55 2.55
> 
> As a result we get a ball such that all points lie on the boundary.
> The same happens for 100 points in 100-dim. space (to keep D positive 
> definite, n <= d is required).
> That is a very nice, even astounding result, but not what I had hoped for!
> 


At the risk of making a complete fool of myself.

Why the restriction:  \sum x_i = 1, x_i >= 0 ?

Isn’t just  x_i >= 0 sufficient?

Change your code to this:

A <- diag(3)
b <- rep(0,3)
sol2 <- solve.QP(D, d, A, b, meq = 0)
sol2

resulting in this:

$solution
[1] 0.5 0.5 0.0
$value
[1] -0.5
$unconstrained.solution
[1]  12.75  12.75 -24.50
$iterations
[1] 2 0
$Lagrangian
[1] 0.00 0.00 0.49
$iact
[1] 3

And $solution seems to be what you want.

And:

p0 <- c(C %*% sol2$solution)
r0 <- sqrt(-sol2$value)

# distance of all points to the center
sqrt(colSums((C - p0)^2))

gives the same results as LowRankQP for the last expression.
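For completeness, here is a sketch that keeps both constraint sets together -- the equality \sum x_i = 1 from the original formulation plus the x_i >= 0 bounds. On this example it reproduces the expected center and radius (I have only checked it on this example):

```r
# Smallest enclosing ball via quadprog, with BOTH constraint sets:
# sum(x) == 1 (one equality) and x_i >= 0 (n inequalities).
library(quadprog)

C <- matrix(c(1, 0, 0.5,
              0, 1, 0.5,
              0, 0, 0.1), ncol = 3, byrow = TRUE)

D <- 2 * t(C) %*% C            # quadratic term: x' C'C x
d <- colSums(C^2)              # linear term: p_i' p_i

# Columns of Amat are constraints: first the row of ones for the
# equality sum(x) == 1, then the identity for x_i >= 0.
Amat <- cbind(rep(1, 3), diag(3))
bvec <- c(1, rep(0, 3))

sol <- solve.QP(D, d, Amat, bvec, meq = 1)

p0 <- c(C %*% sol$solution)    # center: 0.5 0.5 0.0
r0 <- sqrt(-sol$value)         # radius: 0.7071068 = 1/sqrt(2)
```

With the nonnegativity bounds included, solve.QP() finds the solution on the boundary of the simplex instead of the unconstrained stationary point.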

Berend



> Compare this with another quadratic programming solver in R, for example 
> LowRankQP() in the package of the same name.
> 
>   require(LowRankQP)
>   sol2 <- LowRankQP(D, -d, A, b, u = rep(3, 3), method="LU")
> 
>   p2 <- c(C %*% sol2$alpha)   # 5.000000e-01 5.000000e-01 1.032019e-12
>   sqrt(colSums((C - p2)^2))   # 0.7071068 0.7071068 0.1000000
> 
> The correct result, and we get correct solutions in higher dimensions, too. 
> LowRankQP works also if we apply it with n > d, e.g., 100 points in 2- 
> or 3-dimensional space, when matrix D is not positive definite -- solve.QP( ) 
> does not work in these cases.
> 
> *** What do I do wrong in calling solve.QP()? ***
> 
> My apologies for this overly long request. I am sending it through the weekend
> hoping that someone may have a bit of time to look at it more closely.
> 
> Hans Werner
> 
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.


