[Rd] Rmpi, openMPI editions.

Paul Johnson pauljohn32 at gmail.com
Wed Jul 5 17:46:48 CEST 2017


Here is what I've learned about OpenMPI and Rmpi during the past 2
weeks. Please tell me if you think I'm incorrect.

I don't know enough computer science to fully understand the dangers
of forks and data corruption when OpenMPI uses Infiniband. Perhaps one
of you can tell me.

1. Rmpi will compile with OpenMPI >= 2.0, but it is not fully
compatible. The Rmpi author has written to me directly that he is
working on revisions that will make the two compatible.  One symptom
of the problem we find is that stopCluster() does not work: it hangs
the session. The only way to shut down the cluster is mpi.quit(),
which terminates the R session entirely.
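For concreteness, here is a sketch of the kind of minimal session in
which we see the hang. It is not our production script: the worker
count is arbitrary, and it assumes the snow-style MPI cluster
interface (parallel::makeCluster(type = "MPI"), which also needs the
snow package) on top of Rmpi.

```r
## Sketch of the symptom under OpenMPI >= 2.0.  Assumes Rmpi (and
## snow, for type = "MPI") are installed against the system OpenMPI;
## the worker count of 2 is arbitrary.
library(Rmpi)

cl <- parallel::makeCluster(2, type = "MPI")

## Ordinary cluster work runs fine...
parallel::clusterCall(cl, function() Sys.info()[["nodename"]])

## ...but this is where the session hangs under OpenMPI >= 2.0:
parallel::stopCluster(cl)

## The only way to shut the workers down is mpi.quit(), which also
## terminates the R session itself.
mpi.quit()
```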

2. Rmpi will compile and run with OpenMPI < 2.0.

However, on systems that have Infiniband interconnects and the openib
libraries, there will be warnings about threads and forks, as well as
a danger of data corruption.  The OpenMPI warning is triggered by R
functions as innocuous as sessionInfo().

Here is a session that shows the warning.

$ R

R version 3.4.0 (2017-04-21) -- "You Stupid Darkness"
Copyright (C) 2017 The R Foundation for Statistical Computing
Platform: x86_64-pc-linux-gnu (64-bit)

R is free software and comes with ABSOLUTELY NO WARRANTY.
You are welcome to redistribute it under certain conditions.
Type 'license()' or 'licence()' for distribution details.

R is a collaborative project with many contributors.
Type 'contributors()' for more information and
'citation()' on how to cite R or R packages in publications.

Type 'demo()' for some demos, 'help()' for on-line help, or
'help.start()' for an HTML browser interface to help.
Type 'q()' to quit R.

Microsoft R Open 3.4.0
The enhanced R distribution from Microsoft
Microsoft packages Copyright (C) 2017 Microsoft Corporation

Using the Intel MKL for parallel mathematical computing(using 1 cores).

Default CRAN mirror snapshot taken on 2017-05-01.
See: https://mran.microsoft.com/.

[Previously saved workspace restored]

> library(Rmpi)
> sessionInfo()
--------------------------------------------------------------------------
An MPI process has executed an operation involving a call to the
"fork()" system call to create a child process.  Open MPI is currently
operating in a condition that could result in memory corruption or
other system errors; your MPI job may hang, crash, or produce silent
data corruption.  The use of fork() (or system() or other calls that
create child processes) is strongly discouraged.

The process that invoked fork was:

  Local host:          n410 (PID 34456)
  MPI_COMM_WORLD rank: 0

If you are *absolutely sure* that your application will successfully
and correctly survive a call to fork(), you may disable this warning
by setting the mpi_warn_on_fork MCA parameter to 0.
--------------------------------------------------------------------------
R version 3.4.0 (2017-04-21)
Platform: x86_64-pc-linux-gnu (64-bit)
Running under: Red Hat Enterprise Linux Server release 6.4 (Santiago)

Matrix products: default
BLAS: /panfs/pfs.local/software/install/MRO/3.4.0/microsoft-r/3.4/lib64/R/lib/libRblas.so
LAPACK: /panfs/pfs.local/software/install/MRO/3.4.0/microsoft-r/3.4/lib64/R/lib/libRlapack.so

locale:
[1] C

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base

other attached packages:
[1] Rmpi_0.6-6           RevoUtilsMath_10.0.0

loaded via a namespace (and not attached):
[1] compiler_3.4.0   RevoUtils_10.0.4 parallel_3.4.0
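As the banner says, the warning itself (though not the underlying
risk) can be silenced by setting the mpi_warn_on_fork MCA parameter to
0. Any MCA parameter can also be supplied as an OMPI_MCA_-prefixed
environment variable, so a sketch would be:

```shell
# Silence Open MPI's fork warning (the underlying risk remains!).
# Any MCA parameter can be set via an OMPI_MCA_<param> environment
# variable, read by the Open MPI runtime at startup.
export OMPI_MCA_mpi_warn_on_fork=0

# Equivalently, on the mpirun command line:
#   mpirun --mca mpi_warn_on_fork 0 ...
```

I would not actually recommend this, for the reasons below; it only
hides the message.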


I do not know how dangerous forks might be, but if you go read this
message, it appears they can cause data corruption, and this has been
known since 2010:

https://www.mail-archive.com/devel@lists.open-mpi.org/msg08785.html

It is beyond my understanding to say whether garden-variety R users
will trigger these problems. I do know the R parallel documentation
warns against system calls and forks, possibly for the same reason. Do
R functions that touch the disk--dir.create(), list.files()--make
system calls that would fall into the dangerous fork category? I'm not
stating that as fact; I'm asking if that's right.

Anyway, my "better safe than sorry" instinct leads to this conclusion:
TURN OFF INFINIBAND SUPPORT IN OpenMPI. This is the policy we adopted
in 2010, but I had forgotten it until a new cluster network came
online.  With newly installed OpenMPI, I ran into the same old problem.

This can be done per user account, by creating
~/.openmpi/mca-params.conf (or system-wide, in
etc/openmpi-mca-params.conf under the OpenMPI install folder) with
this line.

btl = ^openib

That prevents OpenMPI from using the Infiniband transport layer.
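As a concrete sketch, the per-user version is just two commands (the
path is Open MPI's standard per-user location):

```shell
# Create Open MPI's per-user MCA parameter file and disable the
# openib (Infiniband) byte-transfer layer.
mkdir -p "$HOME/.openmpi"
printf 'btl = ^openib\n' >> "$HOME/.openmpi/mca-params.conf"
```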

One can tell whether an Infiniband device is detected with the shell
program "ompi_info" provided by OpenMPI. Look for the btl stanza: the
output from ompi_info includes lines like these if you have Infiniband.

   MCA btl: ofud (MCA v2.0, API v2.0, Component v1.6.5)
   MCA btl: openib (MCA v2.0, API v2.0, Component v1.6.5)
   MCA btl: self (MCA v2.0, API v2.0, Component v1.6.5)
   MCA btl: sm (MCA v2.0, API v2.0, Component v1.6.5)
   MCA btl: tcp (MCA v2.0, API v2.0, Component v1.6.5)

And like this after changing either ~/.openmpi/mca-params.conf or
etc/openmpi-mca-params.conf:

   MCA btl: ofud (MCA v2.0, API v2.0, Component v1.6.5)
   MCA btl: self (MCA v2.0, API v2.0, Component v1.6.5)
   MCA btl: sm (MCA v2.0, API v2.0, Component v1.6.5)
   MCA btl: tcp (MCA v2.0, API v2.0, Component v1.6.5)

I believe it is worth mentioning that, if some of your compute nodes
have Infiniband and some do not, then OpenMPI jobs will crash if they
try to combine nodes connected by Ethernet with nodes connected by
Infiniband.  That is another reason to tell OpenMPI not to try to use
Infiniband at all.

pj

On Mon, Jun 19, 2017 at 2:34 PM, Paul Johnson <pauljohn32 at gmail.com> wrote:
> Greetings.
>
> I see a warning message while compiling OpenMPI and would appreciate
> it if you tell me what it means.
>
> This warning happens with any OpenMPI > 1.6.5.  Even before starting a
> cluster, just calling sessionInfo() triggers the warning.
>
> I'm pasting in the message from R-3.3.2 (this is MRO).
>
> Do the R parallel package cluster functions violate the warnings described here?
>
>> library("Rmpi")
>> sessionInfo()
> --------------------------------------------------------------------------
> An MPI process has executed an operation involving a call to the
> "fork()" system call to create a child process.  Open MPI is currently
> operating in a condition that could result in memory corruption or
> other system errors; your MPI job may hang, crash, or produce silent
> data corruption.  The use of fork() (or system() or other calls that
> create child processes) is strongly discouraged.
>
> The process that invoked fork was:
>
>   Local host:          n401 (PID 114242)
>   MPI_COMM_WORLD rank: 0
>
> If you are *absolutely sure* that your application will successfully
> and correctly survive a call to fork(), you may disable this warning
> by setting the mpi_warn_on_fork MCA parameter to 0.
> --------------------------------------------------------------------------
> R version 3.3.2 (2016-10-31)
> Platform: x86_64-pc-linux-gnu (64-bit)
> Running under: Red Hat Enterprise Linux Server release 6.4 (Santiago)
>
> locale:
> [1] C
>
> attached base packages:
> [1] stats     graphics  grDevices utils     datasets  methods   base
>
> other attached packages:
> [1] Rmpi_0.6-6           RevoUtilsMath_10.0.0
>
> loaded via a namespace (and not attached):
> [1] RevoUtils_10.0.2 parallel_3.3.2   tools_3.3.2
>>
>
> What I think this means is that we need to never run any multicore
> functions and we need to be very careful that MKL or such does not
> launch threads.  Is that right? Is it worse than that?
>
> Why am I chasing this one today?
>
> I've been on an adventure compiling R in a RedHat 6 cluster again. The
> cluster admins here like the Microsoft R, and they had both 3.3 and
> 3.4 installed. However, we found some packaging flaws in 3.4 and so
> that MRO was removed. I'm interested in building R-3.4, but it is a
> pretty big job on the old RedHat.  I want to get this correct.
>
> I've run into a problem with OpenMPI that I'd forgotten about. If
> OpenMPI >= 2, then Rmpi will compile, but jobs hang in
> stopCluster().  With OpenMPI-1.6.5, we get a clean build and no
> warnings, and clusters do start and stop cleanly.  With newer 1.x
> editions of OpenMPI, such as 1.10 or 1.12 (I suspect any version
> > 1.6.5), Rmpi generates an intimidating warning, but the cluster
> will stop when asked.
>
>
> --
> Paul E. Johnson   http://pj.freefaculty.org
> Director, Center for Research Methods and Data Analysis http://crmda.ku.edu
>
> To write to me directly, please address me at pauljohn at ku.edu.



-- 
Paul E. Johnson   http://pj.freefaculty.org
Director, Center for Research Methods and Data Analysis http://crmda.ku.edu

To write to me directly, please address me at pauljohn at ku.edu.



More information about the R-devel mailing list