Rmpi

Description

Rmpi provides an interface (wrapper) to the MPI APIs from within R. It also provides an interactive R slave environment, in which a master R process controls a set of slave R processes.
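By way of illustration, a minimal interactive Rmpi session might look like the sketch below (the slave count of 2 is arbitrary; mpi.remote.exec() runs a command on every slave and returns the results to the master):

library(Rmpi)

# Spawn two R slave processes under the master
mpi.spawn.Rslaves(nslaves = 2)

# Run a command on every slave; each reports its rank
mpi.remote.exec(paste("I am slave", mpi.comm.rank(), "of", mpi.comm.size()))

# Shut the slaves down and leave MPI cleanly
mpi.close.Rslaves()
mpi.quit()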

Links

  • Visit the CRAN Rmpi home page.
  • Visit the Rmpi home page at The University of Western Ontario.

Using Rmpi

To use Rmpi you must load the following modules:

module load languages/R-2.9.1
module load ofed/openmpi/gcc/64/1.3.2

Please remember that the above commands must be included in your .bashrc file or your job submission script.

Check:

Running which R from the command line should return the following: /usr/local/languages/R-2.9.1/bin/R
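You can also check from inside R that the Rmpi package loads against the MPI libraries; a quick sketch (note that mpi.universe.size() may report 1 when R is started outside an mpirun environment):

library(Rmpi)        # fails here if R or Open MPI is set up incorrectly
mpi.universe.size()  # number of slots the MPI runtime knows about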

Job submission script

Below is an example job submission script for running Rmpi across 8 cores.

Rmpi works by creating one master R process and n slave processes. Since each compute node on BlueCrystal Phase 2 contains 8 cores, the script requests all 8 cores using the ppn=8 switch, then creates an MPI machinefile containing 7 host entries: one for each slave, with the master occupying the eighth core. For example, if the job is allocated a node called node101, the machinefile will contain seven lines, each reading node101.

To use the script, simply cut and paste it into your working directory on BlueCrystal, edit the marked lines, and submit it to the queue using the qsub command.

cut here ----------------------------

#!/bin/bash 
#
# Tell PBS to use 1 node and 8 processes per node -----------------------
#PBS -l nodes=1:ppn=8
#PBS -q testq

# Set the working directory for the job --------------------------------- 
# 1. Edit this to point to the working directory

export RUNDIR="/gpfs/cluster/isys/iszcjw/Software/beate-ncdf4" 

# Name of application --------------------------------------------------- 

export APPLICATION="R --slave CMD BATCH "

# Any required run flags/input files etc. -------------------------------
# 2. Edit this to include the required input file

export RUNFLAGS="task_pull.R "


# DO NOT CHANGE ANYTHING BELOW THIS LINE --------------------------------

# Change into the working directory -------------------------------------

cd $RUNDIR

# Generate the list of nodes the code will run on -----------------------

export node=$(cat $PBS_NODEFILE | uniq)

export confile=host.$PBS_JOBID

# Write one host entry per slave: 7 slaves + 1 master = 8 cores
for i in {1..7} ; do
   echo ${node} >> $confile
done

# Execute the code ------------------------------------------------------

mpirun -np 1 -machinefile $confile $APPLICATION $RUNFLAGS

cut here ----------------------------
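The input file task_pull.R is not reproduced on this page. As a rough guide only, a script following the task-pull pattern from the Rmpi documentation might look like the sketch below; the sqrt() work and the task list are placeholders for your own computation.

library(Rmpi)

# Spawn 7 slaves: together with the master this fills the 8 cores
# requested via ppn=8 in the submission script
mpi.spawn.Rslaves(nslaves = 7)

# Function run on each slave: ask the master for work until told to stop.
# Slave -> master tags: 1 = ready for work, 2 = result.
# Master -> slave tags: 1 = a task,         2 = stop.
slave_fn <- function() {
    done <- FALSE
    while (!done) {
        mpi.send.Robj(0, dest = 0, tag = 1)            # signal "ready"
        task <- mpi.recv.Robj(mpi.any.source(), mpi.any.tag())
        tag  <- mpi.get.sourcetag()[2]
        if (tag == 1) {
            mpi.send.Robj(sqrt(task), dest = 0, tag = 2)  # placeholder work
        } else {
            done <- TRUE
        }
    }
}

# Ship the function to the slaves and set them running
mpi.bcast.Robj2slave(slave_fn)
mpi.bcast.cmd(slave_fn())

tasks   <- as.list(1:100)        # placeholder task list
results <- list()
running <- mpi.comm.size() - 1   # number of slaves still working

while (running > 0) {
    msg <- mpi.recv.Robj(mpi.any.source(), mpi.any.tag())
    st  <- mpi.get.sourcetag()   # c(source, tag)
    if (st[2] == 1) {            # a slave is ready for work
        if (length(tasks) > 0) {
            mpi.send.Robj(tasks[[1]], dest = st[1], tag = 1)
            tasks[[1]] <- NULL
        } else {
            mpi.send.Robj(0, dest = st[1], tag = 2)  # no work left: stop
            running <- running - 1
        }
    } else {                     # tag 2: a result came back
        results[[length(results) + 1]] <- msg
    }
}

mpi.close.Rslaves()
mpi.quit(save = "no")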
