The Cray XT4/XT5 system is optimized for massively parallel applications. Serial production runs must be executed on CSC platforms other than Louhi.
On Louhi, normal users have access to two kinds of nodes: login nodes and compute nodes. Here a login node refers to a node type, not to the particular node the user first logged in to. Login nodes run the Linux operating system and kernel, and provide the usual system services needed for program development and for preparing production runs. All CPU- and memory-intensive applications are executed on the compute nodes, which run the Compute Node Linux (CNL) operating system (previously: Catamount) and a modified Linux kernel (previously: the Quintessential kernel, Qk). Parallel MPI programs compiled for the CNL compute nodes will not work on the login nodes.
All user applications are submitted to the compute nodes through the batch system and the aprun command, with the sole exception of TotalView debugging sessions, which are run on the small number of compute nodes reserved for interactive use; see TotalView Debugger.
The most important directory that the compute nodes see is the work directory ($WRKDIR). It resides on the fast Lustre file system and should always be used for batch jobs. It is recommended both to launch your batch jobs from $WRKDIR and to cd to a directory under $WRKDIR before the aprun command in your batch job script.
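For example, a minimal job script following this recommendation could look like the sketch below. Only the -l mppwidth option is taken from this guide; the walltime request, the directory name my_case, and the executable name my_app are illustrative placeholders, not fixed names.

    #!/bin/bash
    #PBS -l mppwidth=128
    #PBS -l walltime=00:30:00

    cd $WRKDIR/my_case       # run from the fast Lustre work directory
    aprun -n 128 ./my_app    # only this executable runs on the compute nodes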
Running a simple parallel job
How to write a batch job description (batch job script) and submit a batch job is described in the chapter Batch jobs and the batch system. Here we give only a short overview of what happens after a job is submitted with the qsub command.
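With the sketch script above saved as, say, job_script.sh (a placeholder name), the submission is simply:

    qsub job_script.sh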
First, the batch job description is provided to the qsub command in a user-written text file. After parsing the options given on the command line and in the job description, qsub moves the job into the queuing state. When the requested resources become available, qsub reserves them for the job and connects to the least loaded login node, hereafter called the remote node. The shell commands in the job description are then executed on the remote node.
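While the job waits in the queue, its state can be followed with the standard PBS status command, for example:

    qstat -u $USER    # Q = queued, R = running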
There are several details to note here:
- The initial directory in which the commands of the job description are executed is $HOME. The directory in which the qsub command was originally given is available on the remote node via the shell environment variable $PBS_O_WORKDIR.
- Only the executables given to the aprun command are executed on the CNL compute nodes; the other commands in the script are run on a login node.
- The aprun size argument, given with the flag -n, defines the number of processor cores for the parallel job. By default, all cores in the nodes of louhi.csc.fi are used. This default behaviour can be altered with the -N command line option of aprun, but it should be used only when absolutely necessary (see the sketch after this list).
- The qsub option -l mppwidth=128 refers to the number of requested cores (previously: -l size=128 referred to the number of compute nodes).
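As a sketch of the points above (the executable name my_app and the core counts are again placeholders):

    cd $PBS_O_WORKDIR            # return to the directory where qsub was issued
    aprun -n 128 ./my_app        # 128 cores, using all cores of each node (the default)
    aprun -n 64 -N 2 ./my_app    # alternative: 64 cores in total, only 2 per node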
The complete descriptions of the qsub and aprun commands and their options are given in the man pages. More detailed instructions are provided in the chapter Batch jobs and the batch system, and batch script examples in the section Parallel batch jobs.