Parallel and MPI with default branch


Re: Parallel and MPI with default branch

kingcrimson
Hi,

I think I messed up yesterday: it is not the linker but the compiler that is throwing the error message.

It is a bit of a mess to reconstruct, as you are doing a parallel build on 4 cores, but the exact error sequence
seems to be the following:

> On 18 Feb 2018, at 17:56, Sebastian Schöps <[hidden email]> wrote:
>
> /usr/local/Cellar/octave/HEAD-90bd5649983c_1/bin/mkoctfile-4.3.0+ --verbose -I/usr/local/Cellar/open-mpi/3.0.0_2/include -c MPI_Send.cc
> clang++ -std=gnu++11 -c -I/usr/X11/include -fPIC -I/usr/local/Cellar/octave/HEAD-90bd5649983c_1/include/octave-4.3.0+/octave/.. -I/usr/local/Cellar/octave/HEAD-90bd5649983c_1/include/octave-4.3.0+/octave -I/usr/local/Cellar/octave/HEAD-90bd5649983c_1/include  -D_THREAD_SAFE -pthread -I/Library/Java/JavaVirtualMachines/jdk-9.jdk/Contents/Home/include -I/Library/Java/JavaVirtualMachines/jdk-9.jdk/Contents/Home/include/darwin   -I/usr/local/Cellar/open-mpi/3.0.0_2/include  MPI_Send.cc -o MPI_Send.o

> MPI_Send.cc:90:26: error: no viable overloaded '='
>                  retval = info;
>                  ~~~~~~ ^ ~~~~
> /usr/local/Cellar/octave/HEAD-90bd5649983c_1/include/octave-4.3.0+/octave/ov.h:359:17: note: candidate function not viable: no
>      known conversion from 'Array<int>' to 'const octave_value' for 1st argument
>  octave_value& operator = (const octave_value& a)

Compare the above with the following commands that I see on my system (which complete without error):

/opt/octave/4.3.0+/bin/mkoctfile-4.3.0+ --verbose -I/opt/local/include/openmpi-mp -c MPI_Iprobe.cc
/usr/bin/clang++ -c -I/opt/local/include -I/opt/X11/include -fPIC -I/opt/octave/4.3.0+/include/octave-4.3.0+/octave/.. -I/opt/octave/4.3.0+/include/octave-4.3.0+/octave -I/opt/octave/4.3.0+/include  -D_THREAD_SAFE -pthread -std=gnu++11   -I/opt/local/include/openmpi-mp  MPI_Iprobe.cc -o MPI_Iprobe.o


The two look equivalent to me; do you see any relevant difference?

If not, the difference might be in the exact Octave revision we are using.
What does

  __octave_config_info__ hg_id

return for you? For me it returns

  >> __octave_config_info__ hg_id
  ans = 3da6c628873a

which refers to the following changeset:
 
  http://hg.savannah.gnu.org/hgweb/octave/rev/3da6c628873a

Hope this helps,
c.

-----------------------------------------
Join us March 12-15 at CERN near Geneva
Switzerland for OctConf 2018.  More info:
https://wiki.octave.org/OctConf_2018
-----------------------------------------

Re: Parallel and MPI with default branch

Sebastian Schöps
Hey Carlo,

> I think I messed up yesterday: it is not the linker but the compiler that is throwing the error message.
> ...
> which refers to the following changeset:
> http://hg.savannah.gnu.org/hgweb/octave/rev/3da6c628873a

I have a theory: I think Octave's 64-bit indexing is now enabled by default. Did you disable it on your machine? It seems that some int types are messed up in the MPI package and that would explain the errors that I see...

If I change "Array<int> info (dim_vector (dest.numel (), 1));"
to "int8NDArray info (dim_vector (dest.numel (), 1));" in MPI_Send.cc, and "octave_idx_type num;" (which is long long in my case) to "int num;", then I can compile.

Seb.




Re: Parallel and MPI with default branch

kingcrimson
Hi,


> On 19 Feb 2018, at 15:53, Sebastian Schöps <[hidden email]> wrote:
>
> I have a theory: I think Octave's 64-bit indexing is now enabled by default. Did you disable it on your machine?

Indeed I am configuring Octave with "--disable-64".

> It seems that some int types are messed up in the MPI package and that would explain the errors that I see...
>
> If I change  "Array<int> info (dim_vector (dest.numel (), 1));"
> to "int8NDArray info (dim_vector (dest.numel (), 1));" in MPI_Send.cc and "octave_idx_type num;" (which is long long in my case) to "int num;" then I can compile.

Do you see "octave_idx_type num;" in MPI_Send.cc?
I have only this definition for "num", on line 79 of MPI_Send.cc:

 int num = s.length ();

> Seb.

BTW, is your build of Octave done with "--enable-64"?
Does this still require building all dependencies with 64-bit integers? The procedure is explained here:

http://wiki.octave.org/Enable_large_arrays:_Build_octave_such_that_it_can_use_arrays_larger_than_2Gb.

What version of BLAS/LAPACK are you linking to? What other libraries do you need to compile with custom options?

c.




Re: Parallel and MPI with default branch

Sebastian Schöps
Dear Carlo,

>> I have a theory: I think Octave's 64-bit indexing is now enabled by default. Did you disable it on your machine?
> Indeed I am configuring Octave with "--disable-64".
>> If I change  "Array<int> info (dim_vector (dest.numel (), 1));"
>> to "int8NDArray info (dim_vector (dest.numel (), 1));" in MPI_Send.cc and "octave_idx_type num;" (which is long long in my case) to "int num;" then I can compile.
> Do you see "octave_idx_type num;" in MPI_Send.cc ?
> I have only this definition for "num" on line 79 of MPI_Send.cc:
> int num = s.length ();

Sorry "octave_idx_type num;" is in MPI_Recv.cc. I had to fix two files.

> BTW, is your build of Octave done with "--enable-64" ?
> Does this still require building all dependencies with 64 integers as explained here:
> http://wiki.octave.org/Enable_large_arrays:_Build_octave_such_that_it_can_use_arrays_larger_than_2Gb.
> ?
> What version of BLAS/LAPACK are you linking to? What other libraries do you need to compile with custom options?

I do not use any particular switches and I do not use special BLAS libraries. It works with plain OpenBLAS or Apple's Accelerate. Octave detects the situation correctly during configure and enables 64-bit support only for indexing, not for BLAS. Configure says:

64-bit array dims and indexing:       yes
64-bit BLAS array dims and indexing:  no

Best
Sebastian


Re: Parallel and MPI with default branch

kingcrimson


On 19 Feb 2018, at 18:20, Sebastian Schöps <[hidden email]> wrote:

Dear Carlo,

>> I have a theory: I think Octave's 64-bit indexing is now enabled by default. Did you disable it on your machine?
> Indeed I am configuring Octave with "--disable-64".
>> If I change  "Array<int> info (dim_vector (dest.numel (), 1));"
>> to "int8NDArray info (dim_vector (dest.numel (), 1));" in MPI_Send.cc and "octave_idx_type num;" (which is long long in my case) to "int num;" then I can compile.
> Do you see "octave_idx_type num;" in MPI_Send.cc ?
> I have only this definition for "num" on line 79 of MPI_Send.cc:
> int num = s.length ();

Sorry "octave_idx_type num;" is in MPI_Recv.cc. I had to fix two files.

OK, this change is correct then; I am not sure about int8NDArray, though.


> BTW, is your build of Octave done with "--enable-64" ?
> Does this still require building all dependencies with 64 integers as explained here:
> http://wiki.octave.org/Enable_large_arrays:_Build_octave_such_that_it_can_use_arrays_larger_than_2Gb.
> ?
> What version of BLAS/LAPACK are you linking to? What other libraries do you need to compile with custom options?

I do not use any particular switches and I do not use special BLAS libraries. It works with plain OpenBLAS or Apple's Accelerate. Octave detects the situation correctly during configure and enables 64-bit support only for indexing, not for BLAS. Configure says:

64-bit array dims and indexing:       yes
64-bit BLAS array dims and indexing:  no

Best
Sebastian

Well, I'll also switch to --enable-64 then.
Does this mean the wiki is outdated?

c.



Re: Parallel and MPI with default branch

Sebastian Schöps
Dear Carlo

> Sorry "octave_idx_type num;" is in MPI_Recv.cc. I had to fix two files.
>
> Ok this change is correct then, not sure about int8NDArray though.

Well, int8NDArray was just a wild guess :) Probably one of (u)int(8|16|32|64)NDArray is correct?! I did not dig deep enough to make an educated decision.

> I do not use any particular switches and I do not special BLAS libraries. It works with plain openBLAS or Apple's Accelerate. Octave detects during configure the situation correctly and enables 64bit only for indexing but not for BLAS. Configure says
>
> 64-bit array dims and indexing:       yes
> 64-bit BLAS array dims and indexing:  no
>
> Best
> Sebastian
>
> Well I'll also switch to --enable-64 then.
> Does this mean the wiki is outdated?

I assume yes; maybe there is a thread in the forum? I did not find much documentation on this and I was wary in the beginning. However, all tests pass, and I gained confidence over time :)

Sebastian


Re: Parallel and MPI with default branch

Olaf Till-2
In reply to this post by Sebastian Schöps
On Sun, Feb 18, 2018 at 10:20:43PM +0100, Sebastian Schöps wrote:

> Hi Olaf,
> Sorry, I was too quick; I didn't use my most recent Octave. I get an error when using default (90bd5649983c). Maybe I missed something? I compiled Octave with your patch from Savannah, checked out your sf-repository, and created the package via "make dist". Then installation fails with:
> pserver.cc:1148:7: error: use of undeclared identifier 'octave_child_list'; did you mean 'octave_value_list'?
>       octave_child_list::insert (pid, pserver_child_event_handler);
>       ^~~~~~~~~~~~~~~~~
>       octave_value_list
> /usr/local/Cellar/octave/HEAD-90bd5649983c_1/include/octave-4.3.0+/octave/../octave/ov-struct.h:41:7: note: 'octave_value_list' declared here
> class octave_value_list;
>       ^
> pserver.cc:1148:26: error: no member named 'insert' in 'octave_value_list'
>       octave_child_list::insert (pid, pserver_child_event_handler);
>       ~~~~~~~~~~~~~~~~~~~^
I've no time now for testing, but have pushed something that should
fix the error.

The warnings will be fixed by and by.

Olaf

--
public key id EAFE0591, e.g. on x-hkp://pool.sks-keyservers.net



Re: Parallel and MPI with default branch

Sebastian Schöps
> I've no time now for testing, but have pushed something that should
> fix the error.
>
> The warnings will be fixed by and by.

Great! Compilation and simple examples work on my machine with macOS and Octave 3da6c628873a. I will play with more advanced stuff tomorrow. Thanks!

Best,
Sebastian


Re: Parallel and MPI with default branch

kingcrimson
In reply to this post by Sebastian Schöps


> On 19 Feb 2018, at 18:59, Sebastian Schöps <[hidden email]> wrote:
>
> Dear Carlo
>
>> Sorry "octave_idx_type num;" is in MPI_Recv.cc. I had to fix two files.
>>
>> Ok this change is correct then, not sure about int8NDArray though.
>
> Well, int8NDArray was just a wild guess :) Probably one of (u)int(8|16|32|64)NDArray is correct?! I did not dig deep enough to make an educated decision.
>
>> I do not use any particular switches and I do not use special BLAS libraries. It works with plain OpenBLAS or Apple's Accelerate. Octave detects the situation correctly during configure and enables 64-bit support only for indexing, not for BLAS. Configure says:
>>
>> 64-bit array dims and indexing:       yes
>> 64-bit BLAS array dims and indexing:  no
>>
>> Best
>> Sebastian
>>
>> Well I'll also switch to --enable-64 then.
>> Does this mean the wiki is outdated?
>
> I assume yes, maybe there is a thread in the forum? I did not find much documentation on this and I was scared in the beginning. However, all tests pass and I got confident over time :)
>
> Sebastian

OK, I built an Octave version with --enable-64, then installed and tested this version of the mpi package:

  https://gitserver.mate.polimi.it/redmine/attachments/download/66/mpi-2.2.0.tar.gz

which seems to work for me.

Thanks for testing,
c.







Re: Parallel and MPI with default branch

Sebastian Schöps
> OK, I built an octave version with --enable-64 and installed and tested this version of the mpi package:
>
>  https://gitserver.mate.polimi.it/redmine/attachments/download/66/mpi-2.2.0.tar.gz
>
> which seems to work for me.
>
> Thanks for testing,

Great! I confirm that it works for me, too.

Thanks,
Sebastian




Re: Parallel and MPI with default branch

marco atzeri-2
In reply to this post by kingcrimson
On 1/22/2018 2:47 PM, [hidden email] wrote:

>
>
> MPI is no longer maintained on octave-forge.
> The version available on bitbucket, which you should be able to download from here:
>
> https://bitbucket.org/cdf1/octave-mpi/downloads/
>
> worked correctly for me on the default development branch as of the beginning of 2018.
> If recent changes to Octave have broken the package, please let me know so I can fix it.
>
> c.
>

Is the page still valid?
Bitbucket replies "Access denied".

Regards
Marco



Re: Parallel and MPI with default branch

kingcrimson
The page is valid but I messed up permissions. I'll fix that asap.
c.