Compiling the MPIs
This page provides information on compiling your code with the Message-Passing Interface (MPI) libraries described below.
Click the appropriate link in the table for your combination of compiler, language, and MPI.
IMPORTANT: In general, these solutions free users and administrators from extraneous configuration. In particular, there is:
- No command-line login setup: rsh, ssh, ssh key generation
- No shared storage: NFS, AFS, etc.
- No static files: host files, machine files, etc.
This is part of our effort to make the cluster as easy and reliable as possible for its users.
If you do not see your compiler listed, the procedure for other compilers is likely to be similar to these instructions.
MacMPI
MacMPI, an Open Transport/Carbon-based implementation of an MPI subset, is available from
the AppleSeed Development web site.
It is also included in the Cluster SDK.
Both C and Fortran versions are available with corresponding header files. An introduction to MPI, some basic examples of MPI programming,
and links to references on MPI are available on the site as well.
Although MacMPI implements only a subset of MPI-1, Dauger Research, Inc., recommends it because of its long history of reliability on the Mac OS,
its flexibility across different compilation environments, and its helpful visualization tools.
Code examples and makefiles using MacMPI_X and MacMPI_S are also included in the Cluster SDK. Introduced in 2006, MacMPI_XUB makes it possible
to compile Universal Applications that can run on mixed Intel and PowerPC Macs; the Parallel Fractal GPL Universal Application is a code example
of this approach.
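The compile commands in the sections below assume you already have a main program to build (called test.f or test.c in the examples). For reference, here is a minimal sketch of such a program in C; it uses only basic MPI-1 calls that MacMPI's subset provides, and the file name test.c is simply the placeholder used throughout this page, not a file from the Cluster SDK:

/* test.c - minimal MPI example used as a placeholder in the compile
   commands on this page (a sketch, not part of the MacMPI distribution). */
#include <stdio.h>
#include "mpi.h"   /* the mpi.h that accompanies MacMPI_X.c */

int main(int argc, char *argv[]) {
    int rank, size;

    MPI_Init(&argc, &argv);                 /* start up the MPI environment */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this node's ID (0 to size-1) */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of nodes        */

    printf("Hello from node %d of %d\n", rank, size);

    MPI_Finalize();                         /* shut down MPI before exiting */
    return 0;
}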
Compiling MacMPI_X.f with the Absoft Pro Fortran v7.0 - 8.2 compilers in Terminal on OS X
Download MacMPI_X.f and mpif.h and place them in the same directory as your Fortran source. Compile MacMPI_X using:
f77 -O -c MacMPI_X.f
which generates MacMPI_X.o. To compile your code, you will need to link with MacMPI_X.o and the Carbon libraries.
For example, depending on whether you are using Fortran 77 or Fortran 90/95 to compile test.f, you would use commands like these:
f77 -O test.f MacMPI_X.o /System/Library/Frameworks/Carbon.framework/Carbon
or:
f95 -O test.f MacMPI_X.o /System/Library/Frameworks/Carbon.framework/Carbon
These commands create a Mach-O executable, which Pooch should recognize.
By default, the Absoft compiler on OS X creates executables with a 512 kB stack.
If your application needs more, you may wish to use the -s flag or the limit command.
Compiling MacMPI_X.f with the Absoft Pro Fortran v7.0 compiler on OS 9
You will need to download MacMPI_X.f and mpif.h into the directory where your source is located. Compile MacMPI_X using:
f77 -O -c MacMPI_X.f
which generates an object file named MacMPI_X.f.o. To compile your main code, test.f in this example,
in Fortran 77 or Fortran 90, use commands like these:
f77 -O test.f MacMPI_X.f.o
f90 -O test.f MacMPI_X.f.o
This version of the Absoft compiler on OS 9 creates a Carbon application by default, which should be executable on both OS 9 and OS X.
Pooch should be able to recognize and launch this application.
The NAG Fortran 95 compiler
Be sure to install the OS X Developer Tools, including gcc, before using NAG.
To compile and run your own Fortran source code, first download
MacMPI_X.c, the C version of the MPI library, the include files mpi.h and mpif.h, and
MacMPIf77.c, a wrapper code that bridges Fortran and MacMPI_X.c,
into the same directory as your Fortran code.
You will need to create object code versions of MacMPI_X and MacMPIf77 using gcc:
gcc -c MacMPI_X.c MacMPIf77.c -I /Developer/Headers/FlatCarbon
You can then create an executable using your Fortran code and the NAG compiler.
If test.f is the main program, link with MacMPI_X and Carbon using the following one-line command:
f95 -O -framework carbon test.f MacMPIf77.o MacMPI_X.o
This should create a Mach-O executable that launches in parallel using Pooch.
The IBM xlf Fortran 95 compiler
Be sure to install the OS X Developer Tools, including gcc, before installing xlf.
After installing xlf, we recommend that you remove xlf's dynamic libraries; otherwise, the
resulting executables will require those files to be present on other systems in order to run:
sudo rm /opt/ibmcmp/lib/*.dylib
To compile and run your own Fortran source code, first download
MacMPI_X.c, the C version of the MPI library, the include files mpi.h and mpif.h, and
MacMPIf77.c, a wrapper code that bridges Fortran and MacMPI_X.c,
into the same directory as your Fortran code.
You will need to create object code versions of MacMPI_X and MacMPIf77 using gcc:
gcc -c MacMPI_X.c MacMPIf77.c -I /Developer/Headers/FlatCarbon
You can then create an executable using your Fortran code and the xlf compiler.
If test.f is the main program, link with MacMPI_X and Carbon using the following one-line command:
xlf -O -qextname test.f MacMPIf77.o MacMPI_X.o /System/Library/Frameworks/Carbon.framework/Carbon
This should create a Mach-O executable that launches in parallel using Pooch.
The Intel ifort Fortran compiler
Be sure to install the OS X Developer Tools, including gcc, before installing Intel's compilers.
After installing ifort, we recommend that you remove ifort's dynamic libraries; otherwise, the
resulting executables will require those files to be present on other systems in order to run:
sudo rm /opt/intel/fc/9.1.014/lib/*.dylib
Note: replace "9.1.014" with the version number of your compiler if you have a later release.
To compile and run your own Fortran source code, first download
MacMPI_S.c, the C version of the MPI library, the include files mpi.h and mpif.h, and
MacMPIf77.c, a wrapper code that bridges Fortran and MacMPI_S.c,
into the same directory as your Fortran code.
You will need to create object code versions of MacMPI_S and MacMPIf77 using gcc:
gcc -c MacMPI_S.c MacMPIf77.c -I /Developer/Headers/FlatCarbon
You can then create an executable using your Fortran code and the ifort compiler.
If test.f is the main program, link with MacMPI_S and Carbon using the following one-line command:
ifort -O test.f MacMPIf77.o MacMPI_S.o /System/Library/Frameworks/Carbon.framework/Carbon
For free-form Fortran, add the -free flag.
This should create a Mach-O executable that launches in parallel on a cluster using Pooch.
The GNU Fortran 77 compiler (a.k.a. g77)
The g77 compiler is available via
Fink. Fink automatically
adapts and installs many open-source programs available on other Unix systems.
g77 takes some time to compile and install (many minutes), so be patient.
Be sure to install the OS X Developer Tools, including gcc, before using Fink.
To compile and run your own Fortran 77 source code, first download
MacMPI_X.c, the C version of the MPI library, the include files mpi.h and mpif.h, and
MacMPIg77.c, a wrapper code that bridges g77 and MacMPI_X.c,
into the same directory as your Fortran 77 code.
You will need to create object code versions of MacMPI_X and MacMPIg77 using gcc:
gcc -c MacMPI_X.c MacMPIg77.c -I /Developer/Headers/FlatCarbon
You can then create an executable using your Fortran 77 code and the g77 compiler.
Because g77 only recognizes Unix-style line breaks,
be sure you convert mpif.h and your Fortran 77 code into Unix-style text files
(e.g., using BBEdit),
rather than files that use Macintosh-style line breaks. If test.f is the main program, link
with MacMPI_X and Carbon using the following one-line command:
g77 test.f MacMPIg77.o MacMPI_X.o /System/Library/Frameworks/Carbon.framework/Carbon
This should create a Mach-O executable that launches in parallel using Pooch.
Compiling MacMPI_X.c with Metrowerks CodeWarrior Pro 6 for both OS 9 and X
There are two major options for using MacMPI_X.c in CodeWarrior:
- using the Standard C Console window; and
- creating a Macintosh application.
In most cases where you are porting an ANSI C-compliant code from another platform, you would probably want to use the Standard C Console.
A Macintosh application (the AltiVec Fractal Carbon demo and Parallel Fractal Demo are examples) would need to know how to organize menus,
windows, and so forth.
1. Standard Console C
If your C source code is standard ANSI C (i.e., this code is multiplatform and uses ANSI C calls like fprintf and scanf), you should start
creating your project by selecting the Mac OS C Stationery category. Under the Standard Console > Carbon category, select the
"Std C Console Carbon" stationery. Add your source as you normally would.
Then add MacMPI_X.c.
Be sure mpi.h is in your
source directory as well.
A few settings should be adjusted for the best behavior of your executable. In the Target Settings Panel named "PPC Target",
edit the 'SIZE' Flags to use localAndRemoteHLEvents. In the same panel, be sure to set the Preferred Heap Size so that your code
will have enough available memory. In addition, your code will need to set certain run-time console flags. Before your main() code,
add:
#include <SIOUX.h>
and at the top of your main() code add these two lines:
SIOUXSettings.asktosaveonclose=0;
SIOUXSettings.autocloseonquit=1;
This makes the app quit when it falls out of main(); otherwise, the apps will wait for user input before quitting,
tying up the remote machines indefinitely.
In addition, if you don't already have one, you may wish to add a call to printf prior to calling MPI_Init().
It appears to be necessary for proper initialization of the app's runtime environment; without it, a crash may occur.
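Putting the pieces above together, the top of main() in a Std C Console Carbon project might look roughly like the following sketch; the SIOUX settings and the early printf come from the advice above, while everything else is illustrative:

#include <stdio.h>
#include <SIOUX.h>
#include "mpi.h"

int main(int argc, char *argv[]) {
    /* Console behavior recommended above: quit without waiting for input. */
    SIOUXSettings.asktosaveonclose = 0;
    SIOUXSettings.autocloseonquit = 1;

    /* An early printf appears to be needed to set up the runtime. */
    printf("Starting up...\n");

    MPI_Init(&argc, &argv);

    /* ... the rest of your parallel code ... */

    MPI_Finalize();
    return 0;   /* falling out of main() lets remote copies quit cleanly */
}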
2. Mac OS Toolbox C
If your C source code is a Macintosh C application (i.e., your C calls Macintosh Toolbox routines exclusively),
then create your project using the Mac OS C Stationery category. Under the Mac OS Toolbox category, select the
"MacOS Toolbox Carbon" stationery. Add your source as you normally would, then add MacMPI_X.c as well.
Since MacMPI relies on ANSI C routines, confirm that the stationery included CodeWarrior's ANSI libraries in your project.
Again, in the Target Settings Panel named "PPC Target", edit the 'SIZE' Flags to use localAndRemoteHLEvents.
And, when you write your code, be sure to have your remote apps (node ID > 0) quit without direct human interaction.
It would also be helpful to have your app's event loop respond correctly to Quit AppleEvents, so Pooch can kill them remotely, if necessary.
Also, you may wish to set the monitor flag of MacMPI_X to 0, which will turn off the MacMPI status window.
If you find your event loop and windows conflicting with MacMPI, this change may help.
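To illustrate the Quit AppleEvent advice above, here is a rough sketch of how a Carbon application might install such a handler; the gDone flag and the InstallQuitHandler() helper are illustrative names, not part of MacMPI or the CodeWarrior stationery:

#include <Carbon.h>   /* Carbon umbrella header (Universal Interfaces) */

static Boolean gDone = false;   /* assumed flag that your event loop checks */

/* Quit AppleEvent handler: lets Pooch (or the system) terminate the app
   remotely by asking your event loop to exit. The refcon parameter type
   may differ slightly depending on your AppleEvents.h version. */
static pascal OSErr HandleQuitAE(const AppleEvent *event, AppleEvent *reply,
                                 long refCon)
{
    gDone = true;   /* your main event loop should exit when gDone is true */
    return noErr;
}

/* Call once at startup, before entering your event loop. */
static void InstallQuitHandler(void)
{
    AEInstallEventHandler(kCoreEventClass, kAEQuitApplication,
                          NewAEEventHandlerUPP(HandleQuitAE), 0, false);
}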
Compiling MacMPI_X.c with the GNU cc compiler through Terminal on OS X
In order to use cc, you must install the Mac OS X Developer Tools from the Dev Tools CD.
This CD should have accompanied your OS X User Install CD but is also available for download from the Apple Developer web site.
Be sure to download MacMPI_X.c and mpi.h into the directory where your source is located.
To compile your application (test.c in this example) from the Terminal window, compile it together with MacMPI_X.c,
linking against the Carbon framework and pointing the compiler at the Carbon include files, using this one-line command:
cc test.c MacMPI_X.c /System/Library/Frameworks/Carbon.framework/Carbon -I /Developer/Headers/FlatCarbon
This command creates a Mach-O executable, which Pooch should recognize and be able to launch.
The IBM xlc C/C++/C99 compiler
Be sure to install the OS X Developer Tools, including gcc, before installing xlc.
To compile and run your own C source code, first download
MacMPI_X.c, the C version of the MPI library, and the include file mpi.h
into the same directory as your C code.
You will need to create object code versions of MacMPI_X using gcc:
gcc -c MacMPI_X.c -I /Developer/Headers/FlatCarbon
You can then create an executable using your C code and the xlc compiler.
If test.c is the main program, link with MacMPI_X and Carbon using the following one-line command:
xlc -O test.c MacMPI_X.o /System/Library/Frameworks/Carbon.framework/Carbon
This should create a Mach-O executable that launches in parallel using Pooch.
Compiling MacMPI_X.c in a Cocoa application on OS X
Create a Cocoa application using Project Builder on OS X.
Download MacMPI_X.c and mpi.h into your source directory.
Include MacMPI_X.c and the Carbon framework in your project.
Because MacMPI_X expects to read the nodelist_ip file using fopen, and this file is generally placed in the same
directory as the Cocoa application bundle, it is necessary to set the working directory to the directory containing
the bundle very early in the code (before calling MPI_Init). To do this, add the
following code (thanks to Steve Hayman of Apple Canada) to the beginning of main() in main.m:
NSString *path = [[NSBundle mainBundle] bundlePath];
char cpath[1024];
[path getCString:cpath];
{char *lastSlash, *tp;
for(lastSlash=tp=cpath; tp=strchr(tp, '/'); lastSlash=tp++) ;
lastSlash[1]=0; //specifies the parent of the bundle directory
}
chdir(cpath);
When your code calls MPI_Init, MacMPI should then be able to locate the node list information.
This executable should be recognized and launched by version 1.2 and later of Pooch.
mpich
mpich is an open source implementation of the MPI standard, and it is used extensively on Linux-based clusters.
By 2002, it was ported to Mac OS X. The following information was derived from the mpich documentation.
In order for mpich to work with Pooch, some minor modifications to mpich had to be made.
Under open-source license, the version of mpich modified for Pooch by Dauger Research, Inc., is available from the Dauger Research web site at:
http://daugerresearch.com/pooch/mpich/
The above link also describes how to configure, install, and use mpich with Pooch.
Please note that using this modified mpich with Pooch DOES NOT require:
- NFS or Network File System - a mechanism for many computers to have identical access to the same file and directory structure over a network
- rsh or ssh - the ability to remotely log into machines at a command-line level
- the machines.xxx file - a file containing a static list of host names
- mpich's mpirun - mpich's default mechanism for launching jobs, which normally needs the above features
- all nodes to have access to the mpich libraries
This combination of mpich and Pooch eliminates these traditional requirements.
MPI/Pro
MPI/Pro is a commercial implementation of the MPI standard developed and released by
MPI Software Technology, Inc.
MPI/Pro is a trademark of MSTI. Some of the people there have been involved with the MPI standard since its beginning.
They provide a commercial impetus to the performance and quality of their MPI implementation.
To install MPI/Pro, you may use its installer package. No special modification or configuration is needed for
MPI/Pro to work with Pooch. Also, configuring inetd.conf or .rhosts files is not necessary when using MPI/Pro with Pooch.
Once you have finished installing MPI/Pro, you may compile your C code using the following command:
cc -o test.out test.c -lmpipro -lpthread -lm
Pooch should recognize the resulting executable. IMPORTANT: Be sure to select "MPI/Pro" from the Job Options
pop-up on the Job Window. Pooch should then be able to launch your executable.
(Thanks to Bobby Hunter of MSTI for his insight into MPI/Pro.)
Using NFS, rsh, inetd.conf files, .rhosts files, or mpirun is not necessary.
mpich-gm for Myrinet hardware
mpich-gm is a version of ANL's mpich modified to use Myricom's Myrinet hardware interface.
The Mac OS X version of mpich-gm is available from
Myricom, Inc.
Myrinet is a trademark of Myricom. No modification of their distribution was required.
After installing the gm libraries and drivers on all nodes with Myrinet hardware,
you may use the following configure line to install mpich-gm:
./configure --with-device=ch_gm -prefix=/dir/for/gm --enable-sharedlib
then the usual "make" and "make install" commands. Also, configuring host files is not necessary when using mpich-gm with Pooch.
Be sure that all dylibs (dynamic libraries) created by this mpich are deleted. With those dylibs removed,
you need to install mpich-gm only on the machine you use to compile your code. Once you have finished installing mpich, you
may compile your C code using its mpicc command:
mpicc -o test.out test.c
Pooch should recognize the resulting executable. IMPORTANT: Be sure to select "mpich-gm" from the Job Type option in the Job Window.
Pooch should then be able to launch your executable on a Mac cluster connected using Myrinet hardware.
Many thanks go to Prof. John Huelsenbeck of UCSD for his support and help. As before, using shared storage (NFS, etc.), ssh, rsh,
inetd.conf files, .rhosts files, or mpirun is not necessary.
LAM/MPI
LAM/MPI is an open-source MPI implementation created and supported at the Pervasive Technology Labs at Indiana University.
The original Mac OS X version of LAM is available from the
LAM/MPI web site.
With the help of Dr. Jeff Squyres there, we have produced a modified version of LAM that operates with Pooch, available here:
http://daugerresearch.com/pooch/lam/
To install this LAM, you may use the following configure line:
./configure --prefix=/usr/local/lamPooch --with-boot=pooch
If you don't have a Fortran compiler installed, you will need to add --without-fc.
After that, use the usual "make" and "make install" commands. This places the LAM binaries
in /usr/local/lamPooch, where Pooch can find them. Configuring host files or other static data is not
necessary when using this LAM/MPI with Pooch. Because LAM's run-time environment (RTE) executables are needed
for LAM to run, you will need to repeat the above LAM installation process on every node of your cluster.
Once you have finished installing LAM, you may compile your C code using its mpicc command:
/usr/local/lamPooch/bin/mpicc -o test.out test.c
or, for Fortran, using its mpif77 command:
/usr/local/lamPooch/bin/mpif77 -o test.out test.f
Pooch should recognize the resulting executable. IMPORTANT: Be sure to select "LAM/MPI" from the Job Type option in the Job Window.
Pooch should then be able to launch your executable on a Mac cluster whose nodes have the above LAM executables installed.
As always, using shared storage (NFS, etc.), ssh, rsh, inetd.conf files, .rhosts files, or mpirun is not required.
MPJ Express
MPJ Express is an
MPI-like Java implementation providing a way to perform calculations in Java on a cluster.
Thanks to Dr. Mark Baker.
Once you have finished installing MPJ, add the following to your ~/.bash_profile file:
export MPJ_HOME=/Users/yourusername/Documents/mpj
export PATH=$PATH:$MPJ_HOME/bin
Then you may compile your Java code using the javac command:
javac -cp .:$MPJ_HOME/lib/mpj.jar test.java
or create a jar file by first creating a file named 'manifest' containing:
Manifest-Version: 1.0
Main-Class: World
Class-Path: mpj.jar
then using the jar command:
jar -cfm test.jar manifest test.class
Pooch v1.7.5 or later should recognize the resulting .jar or .class executable. IMPORTANT: Be sure to select "MPJ Express"
from the Job Type option in the Job Window.
Pooch should then be able to launch your executable on a Mac cluster whose nodes
have MPJ Express installed.
As always, using shared storage (NFS, etc.), ssh, rsh, inetd.conf files, .rhosts files, or mpirun is not required.
See the MPJ Readme for updates.
Open MPI
Open MPI is an
open source MPI implementation that is developed and maintained by a consortium of academic, research, and industry partners.
Mac OS X 10.5 "Leopard" comes with Open MPI built in.
When Pooch v1.7.6 runs for the first time on Leopard, it asks permission to install
modules into /usr/lib/openmpi/ that bridge Pooch and Open MPI and are required
for Pooch to use Open MPI. These were developed with
the help of Dr. Jeff Squyres.
Once you have finished installing these modules, you may compile your C code using Open MPI's
mpicc command:
mpicc -o test test.c
Pooch should recognize the resulting executable. IMPORTANT: Be sure to select "Open MPI"
from the Job Type option in the Job Window.
Pooch should then be able to launch your executable on a Mac cluster whose nodes
are running Leopard.
As always, using shared storage (NFS, etc.), ssh, rsh, inetd.conf files, .rhosts files, or mpirun is not required.
Other Resources
For further details, please see the Appendix of the Pooch Manual,
read the AppleSeed Development Page,
or examine the Cluster SDK.