===== Reserve resources =====
  
First it is good to check with sinfo whether there are free nodes for interactive use. Then use srun and the normal SLURM options to reserve resources.
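
For example (partition names vary per cluster; draco below is only an illustration):

<code bash>
# List all partitions and node states; nodes marked "idle" are free
sinfo
# Limit the output to a single partition, e.g. an assumed partition named draco
sinfo -p draco
</code>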
  
The command to reserve resources depends on which kind of job is going to be started (adapt the following examples to your needs):
  
  - If you use MPI parallelization (typical HPC calculation code):<code bash>
srun -p <slurm partition> -n <number of cores> -t <maxtime> --x11=first --pty $SHELL
</code>
  - If you use only threaded software (for example compilation):<code bash>
srun -p <slurm partition> -N 1 -n 1 -c <number of cores> -t <maxtime> --x11=first --pty $SHELL
</code>
  - If you use MPI+OpenMP hybrid parallelization (a concrete sketch follows this list):<code bash>
srun -p <slurm partition> -N <number of nodes> --ntasks-per-node=<number of processes per node> -c <number of cores per process> -t <maxtime> --x11=first --pty $SHELL
</code>
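
A minimal sketch of the hybrid form (the partition name draco and all sizes are assumed values; on a 4-core node, 2 MPI tasks x 2 OpenMP threads fill each node):

<code bash>
# Assumed example: 2 nodes, 2 MPI tasks per node, 2 OpenMP threads per task, 1 hour
srun -p draco -N 2 --ntasks-per-node=2 -c 2 -t 1:00:00 --x11=first --pty $SHELL
</code>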
  
  
Example (a node from the draco partition with all 4 cores for 6 hours, to be used with MPI parallelization):
  
<code bash>
srun -p draco -n 4 -t 6:00:00 --x11=first --pty $SHELL
</code>

When the resources have been granted, srun prints a confirmation:
  srun: job <jobid> has been allocated resources
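
Once the shell opens on the compute node, you can confirm what was actually granted; a minimal check using standard SLURM environment variables:

<code bash>
# Run inside the interactive shell on the compute node
echo $SLURM_JOB_ID          # job id, matches the one printed by srun
echo $SLURM_NTASKS          # number of tasks reserved (4 in the example above)
echo $SLURM_CPUS_ON_NODE    # CPU cores available on this node
</code>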
  
Now you can start the actual interactive use; see the separate instructions on the software pages. **Next, check carefully that the software is actually able to use all of the reserved resources.** Threaded software might use only a single CPU core if the resources are not reserved correctly. Just open another terminal window, connect to the node you are using, and check with the top or htop command.
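
A minimal sketch of that check, assuming your cluster allows SSH to compute nodes and that squeue shows your job on a node called node001 (a placeholder name):

<code bash>
# Find the node your interactive job is running on
squeue -u $USER
# From another terminal window, connect to that node (placeholder name)
ssh node001
# Show only your own processes; a threaded program using all 4 reserved cores
# should appear as a single process at roughly 400 % CPU
top -u $USER
</code>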
 