Built-in parallelization in optimization functions

Žiga Povalej
Dear All,

I am trying to solve a non-linear optimization problem in parallel. Is there an optimization function with a built-in parallel option (similar to 'fmincon' in Matlab)? I have checked the built-in optimization functions and the optim package, but there seems to be no function with such an option.

If there is no such optimization function, I am planning to implement it myself, since it would be very helpful to have a tool to speed up large, time-consuming non-linear optimization problems.

Any help or comment is appreciated.

Best regards,

Žiga Povalej

Re: Built-in parallelization in optimization functions

Olaf Till
On Thu, Nov 14, 2013 at 12:47:29PM +0100, Žiga Povalej wrote:
> Dear All,
>
> I am trying to solve a non-linear optimization problem in parallel. Does there exist an optimization function with a built-in parallel option (similar to 'fmincon' in Matlab)? I have checked the built-in optimization functions and the package optim, but there seems to be no function with such option.
>
> If there is no such optimization function, I am planning to implement it on my own, since it would be very helpful to have a tool to speed-up large time-consuming non-linear optimization problems.

Hi Žiga,

it seems I never put anything like that into the optim package, and
I don't think ready-made parallelism is available at the moment. It
should not be hard to implement, though (at least for parallelism on a
single multi-processor machine).

One could change the default gradient/jacobian function of nonlin_min
or nonlin_residmin & co. and of the classical leasqr. If you can wait
a bit more than a week, I can do it myself (I can't do it during the
next week). Indeed this should be better than doing it yourself, since
I'd like to review the code anyway before inclusion.

If you can't wait that long, you can provide your own gradient/jacobian
function to nonlin_min, nonlin_residmin & co., or leasqr, one which
computes the finite-difference gradient/jacobian in parallel (consider
using parcellfun from the general package).
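
As a rough sketch (the function and variable names below are placeholders,
not part of the optim package, and the "dfdp" setting shown at the end is
only how I recall the documented way to pass a user Jacobian to
nonlin_residmin -- please check the help text):

## Forward-difference Jacobian whose columns are computed in parallel.
## 'model' is the residual/model function, 'p' the current parameters,
## 'nproc' the number of local processes to use.
function J = parallel_jacobian (model, p, nproc)
  h = 1e-6;                # finite-difference step, tune to your problem
  f0 = model (p);          # model values at the current parameters
  np = numel (p);
  pcell = cell (1, np);    # one perturbed parameter vector per column
  for k = 1:np
    pk = p;
    pk(k) += h;
    pcell{k} = pk;
  endfor
  ## evaluate the model at all perturbed points in parallel
  fcell = parcellfun (nproc, model, pcell, "UniformOutput", false);
  J = zeros (numel (f0), np);
  for k = 1:np
    J(:, k) = (fcell{k}(:) - f0(:)) / h;
  endfor
endfunction

## Hypothetical usage with nonlin_residmin:
## settings = optimset ("dfdp", @(p) parallel_jacobian (@my_model, p, 4));
## [p_res, resid, cvg] = nonlin_residmin (@my_model, p_init, settings);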

Of course, better than parallel finite differences is providing an
explicit gradient/jacobian function, if possible.

Olaf


Re: Built-in parallelization in optimization functions

Olaf Till
On Mon, Nov 18, 2013 at 02:55:05PM +0100, Žiga Povalej wrote:
> I would like to thank you for the answer and the willingness to
> implement such an option in a nonlinear optimization function. It is
> not a problem to wait for another week or so. Please inform me when
> the parallel approach is ready.

Done, for nonlin_residmin, nonlin_curvefit, and nonlin_min. This
introduces a dependency on the parallel package. Though the changes are
in the repository, there is no new optim package release yet; I'd like
to wait with that until the imminent new Octave release. (If you have an
urgent need for this functionality, e.g. a problem with time-consuming
model or objective functions, and can't install directly from the
repository, say so. Then we can either make a preliminary optim package
or explain how to install from the repository.)
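
For later reference, a minimal usage sketch (the setting name
"parallel_local" and its value semantics are assumptions here -- check
the help texts of the released package; my_model and p_init are
placeholders):

## assumed setting enabling parallel gradient/Jacobian computation,
## here with 4 local processes
settings = optimset ("parallel_local", 4);
[p_res, resid, cvg] = nonlin_residmin (@my_model, p_init, settings);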

Olaf


help installing functions from repository (was: Re: Built-in parallelization in optimization functions)

Olaf Till
On Mon, Dec 02, 2013 at 10:44:31AM +0100, Žiga Povalej wrote:
> I would like to thank you for the implementation of the parallel
> approach for the nonlinear optimization.
>
> Could you please explain how one can install only one function from
> the repository? If it helps, I am using Linux Ubuntu 13.04.

Hello Žiga,

it is better to keep the mailing list in CC, since installation issues
are something others may be more familiar with than I am.

Installing single functions may be more effort than installing a whole
package.

In any case, first get the package directory from the repository. (You
must have mercurial installed.) To do this, in the shell, 'cd' to the
directory under which you want to put the package directory, then:

umask 022
hg clone http://hg.code.sf.net/p/octave/optim optim

will create a subdirectory 'optim' with the package files from the
current version in the repository.

(For other packages than 'optim' there may be additional steps
necessary here.)

To use the respective functions, you must have the Octave packages
struct (>= 1.0.10) and parallel (>= 2.0.5) installed. To install the
whole package, you must additionally have the Octave package
miscellaneous (>= 1.0.10) installed.
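
If these are not installed yet, one way to get them (assuming your
Octave has network access and the Octave Forge versions are recent
enough to satisfy the minimum versions above) is, from within Octave:

> pkg install -forge struct
> pkg install -forge parallel
> pkg install -forge miscellaneous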

To install the whole optim package, you may first have to uninstall the
respective package provided by the package system of your operating
system. Then, still from your current directory in the shell:

tar -czf optim-1.2.2.tar.gz optim/
su

and as root, start Octave and from Octave:

> pkg install -auto optim-1.2.2.tar.gz

Without installing the whole optim package, you could create a directory
somewhere and put it in Octave's path, or change Octave's current
working directory to it. Into this directory, you could put the
following functions (enabled to compute in parallel, or indirectly
needed) from the package's 'inst' subdirectory:

cpiv_bard.m, curvefit_stat.m, gjp.m, nonlin_curvefit.m, nonlin_min.m,
nonlin_residmin.m, residmin_stat.m, and the whole subdirectory
'private'.
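
For example (the directory name below is just a placeholder):

## put the directory holding the copied files (and its 'private'
## subdirectory) on Octave's load path
addpath ("/home/user/optim-parallel-funcs");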

Olaf


Re: help installing functions from repository (was: Re: Built-in parallelization in optimization functions)

mikepatton
Has any progress been made with parallelizing the optim package? I'm using the function samin for my problem, and if it could be parallelized, it would increase the speed tremendously.

Re: help installing functions from repository (was: Re: Built-in parallelization in optimization functions)

Olaf Till
On Thu, Jun 19, 2014 at 10:54:22AM -0700, mikepatton wrote:
> Has any progress been made with parallelizing the optim package? I'm using
> the function samin for my problem, and if it could be parallelized, it would
> increase the speed tremendously.

I'm not aware of attempts to parallelize samin.cc.

Some parallelization has been done for backends of the new frontends of
the optim package. Since optim-1.3.1, this parallelization includes a
backend for simulated annealing, whose algorithm is, however, different
from that of samin.cc. In principle, a similar parallelization would be
possible for samin.cc.

Due to the nature of the simulated annealing algorithms I'm aware of,
the benefit of parallelization is limited and depends on the rate of
acceptance of steps. (The parallel results must be ordered, and only
those up to and including the first accepted one can be used; the others
have to be discarded.) As long as many steps are accepted,
parallelization will only slow things down, even if the objective
function is computationally expensive.
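
To make the point about ordering concrete, here is a conceptual sketch
only (not samin.cc and not the optim backend's actual code; all names
are placeholders):

## One "parallel block" of an otherwise sequential simulated annealing
## run: N candidate steps from the current point are evaluated in
## parallel, then scanned in their original order, and only the first
## accepted one is kept, since later candidates would have had to be
## generated from the new point.
function [current, fcur] = sa_parallel_block (objective, current, fcur, T, width, N, nproc)
  cand = cell (1, N);
  for k = 1:N            # propose N candidates around the current point
    cand{k} = current + width .* (2 * rand (size (current)) - 1);
  endfor
  ## evaluate all candidates in parallel -- the only source of speedup
  fcand = parcellfun (nproc, objective, cand);
  for k = 1:N            # scan in order, Metropolis test against 'current'
    if fcand(k) < fcur || rand () < exp ((fcur - fcand(k)) / T)
      current = cand{k};
      fcur = fcand(k);
      break;             # remaining parallel results are discarded
    endif
  endfor
endfunction

If the first candidate is accepted most of the time, the other N-1
parallel evaluations are wasted, which is why the benefit shrinks as the
acceptance rate grows.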

Do you have a reference for a simulated annealing algorithm in which
parallelization can be more effective?

Olaf


Re: help installing functions from repository (was: Re: Built-in parallelization in optimization functions)

Michael Creel
I have played around in the past with some ideas for using a battery of SA minimizers. If I recall correctly, this gave some improvement in getting to the neighborhood of the global minimum during the early iterations, but convergence in the later part was no faster than with the public version. Genetic algorithms should be easier to parallelize, but I'm not sure about their performance relative to SA.

An obvious way to do some parallelization in optim would be to compute numeric gradients and/or Hessians in parallel, and possibly step-size calculations. That would be a generic, general-purpose improvement.

Re: help installing functions from repository (was: Re: Built-in parallelization in optimization functions)

Olaf Till
Hi Michael,

On Mon, Jun 23, 2014 at 04:53:09AM -0700, Michael Creel wrote:
> I have played around in the past with some ideas for using a battery of SA
> minimizers. If I recall correctly, this gave some improvements in getting to
> the neighborhood of the global minimum, during the early part of the
> iterations, but convergence in the later part was no faster than the public
> version.

maybe you can specify the approach in more detail ...? But surely the
resulting algorithm won't really be simulated annealing anymore ...

> An obvious way to do some parallelization in optim would be to do numeric
> gradients and/or Hessians in parallel. That would be a generic, general
> purpose improvement.

For gradients, this is already done (in the native numeric gradient
functions of the new frontends). As for Hessians, algorithms needing a
Hessian usually seem to use a BFGS approximation ...

Olaf


Re: help installing functions from repository (was: Re: Built-in parallelization in optimization functions)

Michael Creel
That's right, it's not real simulated annealing. I experimented with it to see if it could reduce optimization time for the sort of problems I work with. My conclusion was that it was not a good idea. That's why I suggested working on the other areas. I'm glad to hear that that has been done.