MPI reference material

Contents

  • MPICH2 user doc pdf
  • mpi-20.pdf specification
  • mpi-20.ps specification
  • MPI manual WEB pages
  • MPI-Talk.ps
  • examples.tar
  • mar13.ps
  • mpi-spec.ps old
  • mpi1.html
  • mpi2.html
  • mpi3.html
  • mpi_changes.html
  • mpi_ezstart.html
  • mpi_top10.html
  • slides.ps
  • clustcmd
    For the moment, jobs run on bluegrit.cs.umbc.edu on blade1..32.
    The detailed sequence of commands is in clustcmd and may change;
    a sketch of a typical sequence follows.
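
    A minimal sketch of such a sequence, assuming an MPICH2 installation
    that uses the MPD process manager; the commands and options in the
    actual clustcmd file may differ:

        mpdboot -n 32 -f nodes            # start MPD daemons on the blades listed in 'nodes'
        mpicc -o roll_call roll_call.c    # compile with the MPI wrapper compiler
        mpiexec -n 32 ./roll_call         # run 32 processes across the ring
        mpdallexit                        # shut the MPD ring down when finished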
    
    
    Sample code with output demonstrating basic MPI communication
    
    Makefile for the following source files
    nodes  you need a file with your cluster's node names
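
    A sketch of the kind of Makefile described above, assuming the MPI
    wrapper compiler is invoked as mpicc (recipe lines must start with a
    tab); the actual Makefile may differ:

        CC = mpicc
        PROGS = roll_call bcast scat schedule

        all: $(PROGS)

        %: %.c
        	$(CC) -o $@ $<

        clean:
        	rm -f $(PROGS)

    The nodes file is plain text with one hostname per line, for example
    blade1 through blade32.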
    
    roll_call.c sends and receives a message from every process
    roll_call.out output from running roll_call.c
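
    A minimal sketch of this kind of roll call, assuming rank 0 collects a
    reply from every other rank with MPI_Send/MPI_Recv; the real
    roll_call.c may differ in detail:

        #include <stdio.h>
        #include <mpi.h>

        int main(int argc, char *argv[])
        {
            int rank, size, i, msg;
            MPI_Status status;

            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            if (rank == 0) {
                printf("rank 0 of %d calling the roll\n", size);
                for (i = 1; i < size; i++) {    /* receive one reply from each rank */
                    MPI_Recv(&msg, 1, MPI_INT, i, 0, MPI_COMM_WORLD, &status);
                    printf("rank %d answered\n", msg);
                }
            } else {
                MPI_Send(&rank, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);  /* report in to rank 0 */
            }

            MPI_Finalize();
            return 0;
        }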
    
    bcast.c Broadcast a message to every process
    bcast.out output from running bcast.c
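
    A minimal MPI_Bcast sketch, assuming rank 0 broadcasts a single
    integer to all processes; the real bcast.c may differ:

        #include <stdio.h>
        #include <mpi.h>

        int main(int argc, char *argv[])
        {
            int rank, value = 0;

            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);

            if (rank == 0)
                value = 42;                    /* only the root has the data initially */
            MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);
            printf("rank %d received value %d\n", rank, value);

            MPI_Finalize();
            return 0;
        }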
    
    scat.c Scatter data to and Gather results from every process
    scat.out output from running scat.c
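
    A minimal Scatter/Gather sketch, assuming rank 0 scatters one integer
    to each process, every process modifies its piece, and rank 0 gathers
    the results; the real scat.c may differ:

        #include <stdio.h>
        #include <stdlib.h>
        #include <mpi.h>

        int main(int argc, char *argv[])
        {
            int rank, size, i, mine;
            int *sendbuf = NULL, *recvbuf = NULL;

            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            if (rank == 0) {                   /* root sets up one value per rank */
                sendbuf = malloc(size * sizeof(int));
                recvbuf = malloc(size * sizeof(int));
                for (i = 0; i < size; i++)
                    sendbuf[i] = 10 * i;
            }

            MPI_Scatter(sendbuf, 1, MPI_INT, &mine, 1, MPI_INT, 0, MPI_COMM_WORLD);
            mine = mine + rank;                /* each rank works on its own piece */
            MPI_Gather(&mine, 1, MPI_INT, recvbuf, 1, MPI_INT, 0, MPI_COMM_WORLD);

            if (rank == 0) {
                for (i = 0; i < size; i++)
                    printf("result from rank %d: %d\n", i, recvbuf[i]);
                free(sendbuf);
                free(recvbuf);
            }

            MPI_Finalize();
            return 0;
        }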
    
    schedule.c finds the processors where your processes are running
    schedule.out output from running schedule.c 
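
    A minimal sketch of reporting where each process runs, assuming
    MPI_Get_processor_name returns the blade's hostname; the real
    schedule.c may differ:

        #include <stdio.h>
        #include <mpi.h>

        int main(int argc, char *argv[])
        {
            int rank, len;
            char name[MPI_MAX_PROCESSOR_NAME];

            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Get_processor_name(name, &len);   /* hostname of the node running this rank */
            printf("rank %d is running on %s\n", rank, name);
            MPI_Finalize();
            return 0;
        }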
    
    Note: The following may be specific to the Bluegrit cluster.
    (Use it if the  -machinefile nodes  option does not work for you.)
    
    Now, in order to get schedule to run on many nodes, create a 'job' file.
    Edit the file 'job' to use your own userid and file names; a sketch of
    such a file follows.
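
    A sketch of what such a 'job' file might look like, assuming a
    PBS/Torque-style qsub; the job name, resource request, working
    directory, and program name are placeholders to change for your own
    account:

        #!/bin/bash
        #PBS -N schedule
        #PBS -l nodes=32:ppn=2        # 32 nodes, 2 processes per node
        #PBS -j oe                    # merge stdout and stderr
        cd $PBS_O_WORKDIR             # run from the directory the job was submitted in
        mpiexec -n 64 ./schedule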
    
    Submit your job on the management node using the command    qsub job
    The output now shows two processes on each of the 32 nodes (dual-core PPCs).
    scheduleq.out output from running   qsub job
    
    
    The Bluegrit Cluster now has an additional 12 IBM Cell BE nodes.
    Check for the latest information at bluegrit.cs.umbc.edu
    
    


    Last updated 8/23/08