CFX

Tutorial

A walk-through tutorial for running a case on Celaeno using CFX

This tutorial only describes the procedure for running a case with CFX on Celaeno. It is therefore assumed that the user is familiar with basic shell commands.

The first thing you need is the MyCase.def file of your case, which is automatically saved (on your local computer) when you run the CFX solver. Here we use StaticMixer.def, which is provided with the CFX tutorials.
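If you are running your own case rather than the tutorial example, first transfer the .def file from your local computer to Celaeno, for example with scp. A minimal sketch; the login address celaeno.lut.fi below is a placeholder, so use the cluster's actual hostname:

scp MyCase.def <UserName>@celaeno.lut.fi:/home/<UserName>/<MyWorkDir>/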

1. Copy the file StaticMixer.def from Ansys CFX installation directory to your (work) directory on Celaeno.

cp /shared/apps/ansys_inc/v150/CFX/examples/StaticMixer.def /home/<UserName>/<MyWorkDir>

2. Prepare a batch job file according to the requirements of your case. On any cluster based on the SLURM batch job system, such as Celaeno, a batch job file must be submitted to place your job in the queue. A calculation managed by the SLURM resource manager starts when there is enough room in the cluster, and when it finishes the next calculation in the queue starts. The process is therefore fully automated, and you cannot run interactive calculations. The following script can be used for our case.

The example is set to run with 4 cores, 1000 MiB per core, a 10-hour time limit and the latest installed version of CFX (no version explicitly specified).

StaticMixer.sh
#!/bin/bash -l
 
## name of your job
#SBATCH -J StaticMixer
 
## system error message output file
#SBATCH -e StaticMixer_%j.err
 
## system message output file
#SBATCH -o StaticMixer_%j.out
 
## send mail after job is finished
#SBATCH --mail-type=end
#SBATCH --mail-user=<username>@lut.fi
 
## a per-process memory limit in MiB
#SBATCH --mem-per-cpu=1000
 
## how long a job takes, wallclock time d-hh:mm:ss
#SBATCH -t 10:00:00
 
## number of nodes
#SBATCH -N 1
 
## number of cores
#SBATCH -n 4
 
## name of queue, phase1 or phase2
#SBATCH -p phase2
 
## licenses required
## if you need more than 4 cores in the calculation you need to add aa_r_hpc licenses
#SBATCH --licenses=aa_r_cfd:1
 
## load modules
module load cfx
 
## the environment variable SLURM_GTIDS has to be unset
unset SLURM_GTIDS
 
## change directory
cd /home/<username>/<myworkdir>
 
## run CFX
cfx5solve -def StaticMixer.def -part 4 -part-mode metis-kway -par-local -start-method 'Platform MPI Distributed Parallel'

3. Save the above script with the .sh extension (MyJobFile.sh) and store it in the same directory where you copied StaticMixer.def. Remember to use an appropriate text editor (not Microsoft Word), such as nano, if you need to edit it.

4. Log in to your account using your command-line interface and go to the directory where you previously stored MyJobFile.sh. Then type the command

sbatch MyJobFile.sh

5. You should have the results in less than 10 minutes.
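While the job is queued or running, you can monitor it with the standard SLURM commands (generic SLURM usage, not specific to Celaeno):

squeue -u <username>    # list your queued and running jobs
scancel <jobid>         # cancel a job if needed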

6. If you need to run your case from an initial solution, you have to define the Execution Control on the tab of the same name in the CFX-Pre toolbar. Remember to copy the .res file into the same directory on Celaeno and specify its path accordingly in the .def file, for instance /home/<UserName>/<MyWorkDir>.
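The initial values can also be passed to the solver on the command line instead. A minimal sketch, assuming the -ini-file option of cfx5solve (check cfx5solve -help for your version; MyCase_001.res is a hypothetical results file name):

cfx5solve -def MyCase.def -ini-file MyCase_001.res -part 4 -par-local -start-method 'Platform MPI Distributed Parallel'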

NOTE: The number of Ansys licenses is limited (20 altogether at LUT), so please don't run your case with a large number of cores. Moreover, it is not unusual for the speed to decrease when the number of cores is increased beyond a certain point, so try to find the optimal number before running your case for a long duration. A rule of thumb is that doubling the number of cores should increase the computational speed by a factor of roughly 1.5; otherwise increasing the number of cores is not efficient.
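For example, if a run takes 10 hours on 4 cores, it should finish in roughly 10/1.5 ≈ 6.7 hours on 8 cores for the scaling to be worthwhile; if it only drops to, say, 9 hours, stay with 4 cores. (The timings here are illustrative, not measured on Celaeno.)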

Example job scripts

Single node parallel

singlenodejobname.sh
#!/bin/bash -l
 
## name of your job
#SBATCH -J <jobname>
 
## system error message output file
#SBATCH -e <jobname>_%j.err
 
## system message output file
#SBATCH -o <jobname>_%j.out
 
## send mail after job is finished
#SBATCH --mail-type=end
#SBATCH --mail-user=<username>@lut.fi
 
## a per-process memory limit in MiB
#SBATCH --mem-per-cpu=<megabytes_per_CPU>
 
## how long a job takes, wallclock time d-hh:mm:ss
#SBATCH -t <days>-<hours>:<minutes>:<seconds>
 
## run the job on exactly one node (minimum-maximum number of nodes)
#SBATCH -N 1-1
 
## number of cores
#SBATCH -n <cores>
 
## name of queue, phase1 or phase2
#SBATCH -p <queue>
 
## licenses required
## if you need more than 4 cores in the calculation you need to add aa_r_hpc licenses
#SBATCH --licenses=aa_r_cfd:1,aa_r_hpc:<number_of_HPC_licenses> 
 
## load modules
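## (you can list the installed versions with 'module avail cfx' on a login node)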
module load cfx/<version>
 
## change directory
cd /home/<username>/<myworkdir>
 
## run CFX
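## NOTE: CFX_RESTARTRUN below is assumed to be a user-set variable holding any
## restart options (e.g. an initial-values file); leave it empty for a fresh run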
cfx5solve -batch -double -def StaticMixer.def ${CFX_RESTARTRUN} -part $SLURM_NTASKS -part-mode metis-kway -par-local -start-method 'Platform MPI Distributed Parallel'

Multi node parallel

multinodejobname.sh
#!/bin/bash -l
 
## name of your job
#SBATCH -J <jobname>
 
## system error message output file
#SBATCH -e <jobname>_%j.err
 
## system message output file
#SBATCH -o <jobname>_%j.out
 
## send mail after job is finished
#SBATCH --mail-type=end
#SBATCH --mail-user=<username>@lut.fi
 
## a per-process memory limit in MiB
#SBATCH --mem-per-cpu=<megabytes_per_CPU>
 
## how long a job takes, wallclock time d-hh:mm:ss
#SBATCH -t <days>-<hours>:<minutes>:<seconds>
 
## number of cores
#SBATCH -n <cores>
 
## name of queue, phase1 or phase2
#SBATCH -p <queue>
 
## licenses required
## if you need more than 4 cores in the calculation you need to add aa_r_hpc licenses
#SBATCH --licenses=aa_r_cfd:1,aa_r_hpc:<number_of_HPC_licenses> 
 
## load modules
module load cfx/<version>
 
## use ssh instead of rsh
export CFX5RSH=ssh
 
## change directory
cd /home/<username>/<myworkdir>
 
## create list of hosts in calculation
srun hostname -s > hostlist.$SLURM_JOB_ID
 
## format the host list for cfx
cfxhostlist=$(tr '\n' ',' < hostlist.$SLURM_JOB_ID)
 
## run the partitioner and solver
cfx5solve -batch -double -par -par-dist "$cfxhostlist" -part $SLURM_NTASKS -part-mode metis-kway -def StaticMixer.def -start-method "Platform MPI Distributed Parallel"
 
## cleanup
rm hostlist.$SLURM_JOB_ID
 