[R] Multi-objective optimization

Bert Gunter gunter.berton at gene.com
Thu Oct 18 16:56:58 CEST 2007


I haven't followed this thread very closely, but it sounds like it may be
related to the somewhat arcane idea of "Desirability functions" for multiple
responses (mostly in the experimental design context, as I recall). There
were some papers on this in TECHNOMETRICS a couple of decades ago. The
JMP(R) software package from SAS has an implementation of this. Anyway, you
can search on this and see if it's relevant or not. 
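
For readers unfamiliar with the idea, below is a minimal sketch of a desirability-function scalarization in base R, applied to the two objectives from Ravi's example quoted later in the thread. The linear [-10, 0] desirability scale and the plain geometric-mean combination are illustrative assumptions, not details from Bert's post or from JMP.

## f and g from Ravi's example quoted further down in the thread
f <- function(x, y) -(x - y)^2
g <- function(x, y) -(x - 2)^2 - (y - x - 1)^2

## Map a response onto a [0, 1] desirability scale (larger response = more
## desirable); the [-10, 0] range is an arbitrary illustrative choice
d <- function(value, worst = -10, best = 0)
  pmin(pmax((value - worst) / (best - worst), 0), 1)

## Overall desirability = geometric mean of the individual desirabilities
overall <- function(xy) sqrt(d(f(xy[1], xy[2])) * d(g(xy[1], xy[2])))

## optim() minimizes, so negate the overall desirability to maximize it
optim(c(0, 0), function(xy) -overall(xy))$par

Maximizing the combined desirability returns a single compromise point even though f and g cannot be maximized simultaneously.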


Bert Gunter
Genentech Nonclinical Statistics


-----Original Message-----
From: r-help-bounces at r-project.org [mailto:r-help-bounces at r-project.org] On
Behalf Of Paul Smith
Sent: Thursday, October 18, 2007 3:10 AM
To: r-help
Subject: Re: [R] Multi-objective optimization

> > On 10/17/07, Ravi Varadhan <rvaradhan at jhmi.edu> wrote:
> >
> >> What if maximizing f(x,y) and maximizing g(x,y) are incompatible
> >> objectives?
> >>
> >> Modifying Duncan's example slightly, what if:
> >>
> >> f(x,y) = -(x-y)^2 and
> >> g(x,y) = -(x-2)^2-(y-x-1)^2?
> >>
> >> Here:
> >> (1) maximizing f => x = y
> >> (2) maximizing g => y = x + 1
> >> (1) and (2) => x = x + 1 => no solution!
> >>
> >> In order for a solution to necessarily exist, one needs to define a scalar
> >> function that strikes a compromise between f and g.
> >>
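
As a rough illustration of the scalar compromise just quoted, one can maximize a weighted sum of f and g: for any fixed weight strictly between 0 and 1 this has a well-defined maximizer. The equal weighting below is an arbitrary choice, not something taken from Ravi's post.

f <- function(x, y) -(x - y)^2
g <- function(x, y) -(x - 2)^2 - (y - x - 1)^2

## Compromise objective w*f + (1-w)*g, negated because optim() minimizes
w <- 0.5
compromise <- function(xy) -(w * f(xy[1], xy[2]) + (1 - w) * g(xy[1], xy[2]))
optim(c(0, 0), compromise)$par
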
> >
> > But suppose one is sure that there is no incompatibility: how can
> > R find the solution? For instance, can R find the solution for Duncan's
> > example?
> >
> I'm pretty sure there is nothing in base R designed to solve it, and I
> don't know of any packages that do it:  but there are more than 1000
> packages on CRAN, so I could easily have missed one.
>
> I'd suggest looking for literature that describes methods for
> numerically solving such systems, and then do a search on CRAN for
> packages that mention those methods.
>
> And if that fails, you could try putting something together by writing
> functions that call optimize() for the one dimensional optimization
> problem of finding x to optimize f given y, and finding y to optimize g
> given x, then use optim to find (x,y) minimizing the distance between
> these two solutions.  For example, something like this
> might work
>
> ## Duncan's two objectives
> f <- function(x, y) -(x-y)^2
> g <- function(x, y) -(x-2)^2 - (y-1)^2
>
> ## f1(y): the x that maximizes f for a given y; g1(x): the y that maximizes g for a given x
> f1 <- function(y) optimize(function(x) f(x, y), c(-10, 10), maximum=TRUE)$maximum
> g1 <- function(x) optimize(function(y) g(x, y), c(-10, 10), maximum=TRUE)$maximum
>
> ## squared distance between the two partial solutions (f1(y), y) and (x, g1(x))
> sqdist <- function(xy) {
>   x <- xy[1]
>   y <- xy[2]
>   pt1 <- cbind(f1(y), y)
>   pt2 <- cbind(x, g1(x))
>   sum((pt1-pt2)^2)
> }
>
> ## minimize that distance over (x, y)
> optim(c(0,0), sqdist)
>
> but I wouldn't trust it on a real problem: the results of f1 and g1 will
> probably not be very smooth, and that will likely mess up optim if you
> use a fast method (not Nelder-Mead, as I did).  But give it a try.
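
For what it's worth, assuming the quoted definitions above have been run, on Duncan's example the fixed point that this sketch searches for sits near (1, 1): there x maximizes f for the given y, y maximizes g for the given x, and sqdist is essentially zero (the exact digits depend on optim's convergence).

res <- optim(c(0, 0), sqdist)
res$par          # approximately c(1, 1)
f1(res$par[2])   # about 1: the best x for a given y is x = y
g1(res$par[1])   # about 1: the best y is 1 for any x, since g does not couple x and y
sqdist(res$par)  # essentially 0 at the fixed point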

Thanks, Duncan. Any suggestions about which keywords I should use to
search for related literature?

Paul

______________________________________________
R-help at r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


