OpenFOAM

General information

OpenFOAM is an open-source CFD solver. Several versions of OpenFOAM are already installed as modules on all clusters.

Usage and getting started

Check the available resources and tips on the network drive (\\win.lut.fi\shares\LES\ENTE\common\CFD_development\softwares\Openfoam), though the internet likely offers more and better resources.

Running on clusters

First, open a fresh connection to your cluster (e.g., Celaeno) via PuTTY by connecting to celaeno.lut.fi. You will then be in your home directory on the cluster (e.g., /home/xyz). Different versions of OpenFOAM (e.g., OF17x, OF231, etc.) are installed on the clusters for general use, so choose the one you need. The steps to get started with OpenFOAM are below:

  • First load the module by typing module load openfoam/openmpi-1.8.4-gcc/gcc-4.9.2/2.3.1
  • Check the run directory by typing $FOAM_RUN. If you get an error saying that the folder does not exist, create the folder(s) named in the error message and test again (see the example session after this list).
  • Place your cases in their own folders under the run directory and run them with SLURM.
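
A minimal example session is sketched below, using the module name from the list above and the damBreak tutorial referenced in the job script further down; the exact tutorial path may differ between OpenFOAM versions.

## load the OpenFOAM environment
module load openfoam/openmpi-1.8.4-gcc/gcc-4.9.2/2.3.1

## $FOAM_RUN is the user run directory defined by the OpenFOAM environment
echo $FOAM_RUN

## create the run directory if it does not exist yet
mkdir -p $FOAM_RUN

## copy a tutorial case into the run directory and enter it
cp -r $FOAM_TUTORIALS/multiphase/interFoam/laminar/damBreak $FOAM_RUN/
cd $FOAM_RUN/damBreak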

Running with SLURM

You can run your case by modifying the example script below.

#!/bin/bash
###
### job script example with 4 cores on exactly 1 node
### parallel computation of damBreak case with finer mesh
###
 
## name of your job
#SBATCH -J foamjobname
 
## system error message output file
## leave %j as it is; it will be replaced by the job ID number
#SBATCH -e foamjobname.%j.std.err
 
## system message output file
#SBATCH -o foamjobname.%j.std.out
 
## send mail after job is finished
#SBATCH --mail-type=end
#SBATCH --mail-user=<lut_user_name>@lut.fi
 
## memory limit per allocated CPU core
## try to put this limit as low as reasonably achievable
## if it is too low the calculation will fail, if too high resources are wasted
## limit is specified in MB
## example: 1 GB is 1000
#SBATCH --mem-per-cpu=1000
 
## how long a job takes, wallclock time d-hh:mm:ss
## here 1 hour is used
#SBATCH -t 0-01:00:00
 
## number of nodes (if necessary)
## -N 1 (job run on exactly one node)
## -N <minnodes:maxnodes>
#SBATCH -N 1
 
## number of cores
#SBATCH -n 4
 
## name of the partition (queue)
#SBATCH -p phase_name
 
## load necessary environment modules 
module load openfoam/openmpi-1.8.4-gcc/gcc-4.9.2/2.3.1
 
## change directory to your calculation directory
## note that the case has been already initialized and decomposed before
cd /home/<username>/openfoam/openmpi-1.8.4-gcc/gcc-4.9.2/2.3.1/run/tutorials/multiphase/interFoam/laminar/damBreakFine/
 
## run my MPI executable
srun --mpi=pmi2 interFoam -parallel
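
The script above assumes the case has already been meshed, initialized, and decomposed on the login node. A minimal sketch of that preparation and of submitting the job is given below; the script file name foamjob.sh is an assumption, and system/decomposeParDict must request the same number of subdomains as -n in the script (here 4).

## prepare the damBreakFine case in its case directory on the login node
blockMesh        ## generate the mesh
setFields        ## set the initial water column of the damBreak case
decomposePar     ## split the case into the subdomains defined in system/decomposeParDict

## submit the job script (saved here as foamjob.sh, name is an assumption) and monitor it
sbatch foamjob.sh
squeue -u <lut_user_name>

## after the job has finished, reassemble the decomposed results
reconstructPar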
 