Software on Discovery Cluster

A complete list of Discovery Cluster software is available here. Users with Discovery Cluster accounts can also view this list after logging in, as described below. Note that a handful of standard packages (for example gnuplot 4.2, ghostscript 8.70, LaTeX, and emacs 23.1.1) are available cluster-wide by default for all users on all nodes.

Discovery Cluster uses “modules” to control the software available to users. The order in which modules are loaded is important, so pay special attention to how you load them. While you can load modules manually one at a time after you log in, it is better to use either your .bashrc or a bash shell script; both methods are discussed below.

To view the software modules available on Discovery Cluster, use “module avail”. [screenshot: representative “module avail” output]

For information on a particular module, including the dependent modules and any other requirements it needs, use “module whatis <name_of_module>”. [screenshot: example “module whatis” output]

1) Using .bashrc: This is recommended if you do not have many versions of the same program to run but consistently use one set of software, all built with the same compiler. For example, if you are using R 3.0.1 compiled with the Intel compilers, your .bashrc would look as shown below. You can comment out any modules that you do not need in your .bashrc. Use the “module whatis <name_of_module>” command to find the modules you will need. Once you have saved the .bashrc file in your home directory (the directory you are in when you log in – /home/<your_user-id>), log out and log in again. “module list” will show you whether the modules loaded correctly. This is also shown below.

[screenshot: example .bashrc with “module load” entries, and “module list” output]
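The screenshot is not reproduced here. As a sketch, a .bashrc for the R 3.0.1 / Intel example above might contain module lines like the following; the exact module names are assumptions, so check “module whatis” on the cluster for the real names and dependency order:

```shell
# ~/.bashrc fragment (hypothetical module names -- verify with "module whatis")
module load intel-2013-compilers   # compiler suite the software was built with
module load R-3.0.1-intel          # assumed name of the Intel-built R module
```

After logging in again, “module list” should show both modules.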

2) Using bash script files: This is recommended if you have many different versions of software to run, compiled with different compilers. In that case your .bashrc would be empty (comment out all “module load” or “source” entries). Now let’s say that from one terminal you want to run Gromacs 4.6.3 in double precision for one run and in single precision for another. The former is compiled with the GNU compilers and the latter with the Intel compilers. Typically there are two modules that you will use to construct two shell script files: “gromacs-4.6.3-single-intel” and “gromacs-4.6.3”. “module whatis <name_of_module>” will show you the dependent modules each of these needs and any other prerequisites.

For running “gromacs-4.6.3-single-intel” create a file called gromacs-4.6.3-single-intel.sh. The file will look like:

#!/bin/sh
module load intel-2013-compilers
module load gnu-4.4-compilers
module load fftw-3.3.3-single
module load platform-mpi
module load gromacs-4.6.3-single-intel
source /shared/apps/gromacs/gromacs-4.6.3/INSTALL/single-intel/bin/GMXRC.bash

Now you can run:

>>source gromacs-4.6.3-single-intel.sh

Then type “module list” and you should see the requisite modules. You can now submit jobs from this terminal for Gromacs 4.6.3 compiled in single precision with the Intel compilers.

Similarly if you want to use the double precision version of Gromacs 4.6.3 compiled using GNU compilers create a file called gromacs-4.6.3.sh. The file will look like:

#!/bin/sh
module load gnu-4.4-compilers
module load fftw-3.3.3
module load platform-mpi
module load gromacs-4.6.3
source /shared/apps/gromacs/gromacs-4.6.3/INSTALL/bin/GMXRC.bash

Now you can run:

>>source gromacs-4.6.3.sh

Then type “module list” and you should see the requisite modules. You can now submit jobs from this terminal for Gromacs 4.6.3 compiled in double precision with the GNU compilers.
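After sourcing one of these files, “module list” is the authoritative check. As an informal alternative, the environment-modules system records the loaded modules in the colon-separated LOADEDMODULES environment variable, so a small shell helper (hypothetical, not part of the cluster tooling) can verify a terminal’s setup before you submit jobs:

```shell
# check_modules: report any module named on the command line that is not
# present in $LOADEDMODULES (the colon-separated list maintained by the
# environment-modules system). Returns non-zero if anything is missing.
check_modules() {
    missing=0
    for m in "$@"; do
        case ":$LOADEDMODULES:" in
            *":$m:"*) ;;                             # found -- module is loaded
            *) echo "missing module: $m"; missing=1 ;;
        esac
    done
    return "$missing"
}

# Example: verify the double-precision GNU setup from gromacs-4.6.3.sh
check_modules gnu-4.4-compilers fftw-3.3.3 platform-mpi gromacs-4.6.3 \
    || echo "source gromacs-4.6.3.sh before submitting"
```

This only inspects the environment of the current terminal; it does not load anything, so the module scripts above remain the way to set a terminal up.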

In this way you can have many terminals open, where each terminal is configured for a specific set of software to run under LSF. If you log into an Interactive Node, then to run interactively on a node provided by LSF, source the appropriate file after login. More details on the different interactive queues available, and on how to obtain and use a node interactively, are in the sections “Queues on Discovery Cluster” and “Submitting Jobs on Discovery Cluster”.