Here are some simple instructions to get your first test case running.
First, connect to the cluster, then create a case directory under your home directory and change into it:
mkdir BWR
cd BWR
Copy the job file template and the BWR example input to the newly created directory:
cp /shared/tutorials/serpent/template.sh ~/BWR/BWR_2D.sh
cp /shared/tutorials/serpent/BWR_2D.sss ~/BWR
Next, edit the job file BWR_2D.sh (an example is shown below) and follow the instructions inside the job script. The script changes to the submit directory when it is executed. When you are ready, press Ctrl+X and choose to save and overwrite the file.
nano BWR_2D.sh
Then submit the job to the queue. If you get errors, check the std.err and std.out files, correct whatever caused the errors, and submit the job again.
sbatch BWR_2D.sh
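While the job is in the queue you can follow it with Slurm's own commands. A minimal sketch; here <jobid> stands for the job number printed by sbatch, which also appears in the output and error file names because of the %j in the job script:

squeue -u $USER
less BWR_2D.std.out<jobid>
less BWR_2D.std.err<jobid>
scancel <jobid>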
Below is an example of the run script needed for Slurm.
#!/bin/bash -l
###
### BWR_2D example case
###
## name of your job
#SBATCH -J BWR_2D
## system error message output file
#SBATCH -e BWR_2D.std.err%j
## system message output file
#SBATCH -o BWR_2D.std.out%j
## send mail after job is finished
#SBATCH --mail-type=end
#SBATCH --mail-user=<username>@lut.fi
## a per-process (soft) memory limit
## limit is specified in MB
## example: 1 GB is 1000
#SBATCH --mem-per-cpu=900
## how long a job takes, wallclock time d-hh:mm:ss
#SBATCH -t 01:00:00
## the number of processes (tasks)
## number of nodes
#SBATCH -N 1
## number of tasks (amount of MPI processes)
#SBATCH --ntasks-per-node=2
## maximum number of tasks per socket (this should be equal to 1)
#SBATCH --ntasks-per-socket=1
## threads per process (task)
## (amount of OpenMP threads per MPI process)
#SBATCH --cpus-per-task=8
## name of queue
#SBATCH -p phase2

## load modules
module load serpent/openmpi-4.0/2.1.30

## change directory
cd $SLURM_SUBMIT_DIR

## run my MPI executable
srun sss2 -omp $SLURM_CPUS_PER_TASK BWR_2D.sss
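In this example, one node runs 2 MPI tasks with 8 OpenMP threads each, so the job uses 16 cores in total. As a rough sketch of how the same layout scales, the resource lines below would request two such nodes (4 MPI tasks, 32 cores overall); the srun line does not need to change, since srun reads the task layout from the allocation. The exact node, core, and memory limits depend on the cluster, so check what the phase2 partition allows before raising the request:

## two nodes, same per-node layout as above (assumed to fit the partition limits)
#SBATCH -N 2
#SBATCH --ntasks-per-node=2
#SBATCH --ntasks-per-socket=1
#SBATCH --cpus-per-task=8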