Data layout and operator overloading in octave.


Viral Shah
Hello world,

I am just beginning to look into Octave, and had a few questions
about the current state of things. My goal is to implement, or at
least test the feasibility of implementing, a very limited subset of
the functionality of the Matlab*P project in Octave.

I have a few questions, and would appreciate any feedback and pointers:

How easy is it to change the layout of data objects (pardon me if
my terminology is not the best) in Octave, to support parallel
objects? I am beginning to look at the sparse matrix implementation
in octave-forge to get an idea.

Is operator overloading possible without modifying the Octave sources?
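(For reference: recent versions of Octave do support operator overloading for user-defined types via @class directories on the load path, with no changes to the Octave sources. A minimal sketch, using a hypothetical "parallel array" wrapper type `parr` — all names here are made up for illustration:

```octave
# File @parr/parr.m -- constructor for the hypothetical parr type
function p = parr (data)
  s.data = data;
  p = class (s, "parr");
endfunction

# File @parr/plus.m -- overloads a + b when either operand is a parr
function r = plus (a, b)
  r = parr (a.data + b.data);
endfunction
```

With these two files in place, an expression like `parr (x) + parr (y)` dispatches to `@parr/plus.m` instead of the built-in addition.)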

What is the current state of the sparse matrix implementation in Octave?

There was a recent discussion about MPI support in Octave. The sources
seem to have a --with-mpi option to configure, but there doesn't seem
to be MPI support in the codebase. Is there a chance that any near-future
version of Octave will be released with MPI support? Otherwise, what would
be the best way to get MPI support into Octave?


Fate is not without a sense of irony.


Parallel Command in Octave

J.D. Cole
Hi all,
   I've been working on distributed support in Octave and have
gotten some proof-of-concept code working, if anyone wants to check it
out. Some of the main issues it addresses are:

1. High level command for simultaneous evaluation of scripts

For example:

----------- snip --------------
# A simple 2-D bandpass filter

img = loadimage ('default.img'); # Octave's built-in baby face

kernel1 = ones (3)/9;
kernel2 = ones (5)/25;

parallelsection # this line may be omitted
  result1 = conv2 (img, kernel1, 'same');
  result2 = conv2 (img, kernel2, 'same');

combine = result1 - result2;

imagesc (combine);
---------- snip -----------------

Each "parallelsection" is potentially executed on a remote node.

2. Octave binary separation. One concern voiced in an earlier discussion
of adding MPI support to Octave was that we didn't want people to have
to recompile Octave if they wanted MPI support. I have added an
interface layer to the Octave source which can be dynamically linked to
what I call a PIL (Parallel Interface Library) to provide MPI support (or
your parallel interface of choice). If no PIL is specified at Octave
startup, parallel commands, such as the one shown above, are executed
locally, in a serial fashion, yielding the same numerical results as if
they were executed on multiple processors.

3. A Distributed Abstraction Layer. For those of you who have been
following the discussions on implementing better plotting commands in
Octave, you'll know that everyone has a favorite package, but the
general feeling is that Octave should have a uniform interface, with
some capacity to access "special" features of individual packages.
Distributed support is not much different: (1) everyone has their
favorite implementation (MPI, PVM, etc.), and (2) each of those packages
has similarities and unique features. IMHO it would be wrong to tie
Octave to one of these. The Parallel Interface Library potentially
solves this problem by allowing users to implement an interface which
lets Octave execute the code shown above and/or additionally
install implementation-specific APIs.
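The serial-fallback behavior described in point 2 can be sketched roughly as follows. This is purely an illustration of the dispatch idea, not the actual PIL API; `pil_submit` and `parallel_eval` are hypothetical names:

```octave
# Hypothetical dispatcher: hand a cell array of statement strings to the
# loaded PIL backend if one is present, otherwise evaluate them serially.
function parallel_eval (stmts)
  if (exist ("pil_submit"))        # hypothetical PIL entry point
    pil_submit (stmts);            # let the parallel backend distribute them
  else
    for i = 1:numel (stmts)
      eval (stmts{i});             # serial fallback: same results, one node
    endfor
  endif
endfunction
```

Either way the user's script is unchanged; only the presence or absence of a PIL at startup decides where the statements run.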

You can grab my current work here:

Don't worry, if you don't currently have a parallel setup, you can still
play around with the parallel command stuff.

Can't wait to get some feedback.

J.D. Cole
Transient Research


Re: Parallel Command in Octave

J.D. Cole
Sorry folks, I'm having some website problems, try

for the time being


J.D. Cole
Transient Research