We assume the canonical OpenFOAM directory structure, which includes the following key folders:

- `time`: Contains files for particular fields, such as initial values and boundary conditions that you must specify. For example, initial conditions at t=0 are stored in the `0` directory.
- `constant`: Contains files that describe the objects in the simulation and the physical properties being modeled.
- `system`: Contains files that describe the simulation, including solvers, numerical parameters, and output files.

Yes, absolutely! For greater flexibility, you can manually define and run each command step by step, giving you full control over your simulation.
Here's an example of how to set commands manually:
```python
commands_single_machine = [
    "runApplication surfaceFeatures",
    "runApplication blockMesh",
    "runApplication decomposePar -copyZero",
    "runParallel snappyHexMesh -overwrite",
    "runParallel potentialFoam",
    "runParallel simpleFoam",
    "runApplication reconstructParMesh -constant",
    "runApplication reconstructPar -latestTime",
]

task = openfoam.run(
    input_dir=input_dir,
    commands=commands_single_machine,
    on=cloud_machine)
```
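Note that the `runParallel` steps in the command list rely on the domain decomposition defined in your case's `system/decomposeParDict`. A minimal sketch of that file (the subdomain count and method here are illustrative, not prescribed by this guide):

```
numberOfSubdomains 4;

method scotch;
```

The number of subdomains must match the number of processes you intend to run the parallel steps with.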
Running OpenFOAM commands in parallel involves configuring your parallel execution settings and then running commands either via a shell script or individually.
You can run OpenFOAM commands in parallel within a script by either:

- `runParallel <command>`
- `mpirun -np 4 <command>`

If you're running commands manually (not via a script), you can still execute them in parallel by either:

- using `runParallel <command>`
- appending the `-parallel` flag directly to the command, for example: `simpleFoam -parallel`

When a command includes the `-parallel` flag, it is automatically recognized as parallel execution.
The following two commands are equivalent:
```shell
runParallel simpleFoam
simpleFoam -parallel
```
Internally, both run as:
```shell
mpirun -np <num_processes> simpleFoam -parallel
```
Before running your simulation, set the parallel settings in your machine configuration. For example:
```python
cloud_machine = inductiva.resources.MachineGroup(
    provider="GCP",
    machine_type="c2d-standard-112",
    np=5,                      # Number of processes (default: max threads)
    use_hwthread_cpus=False,   # Whether to use hyperthreading (default: True)
    mpi_version="4.1.6",       # MPI version (default: 4.1.6)
    spot=True,
)
```
With this configuration, a command such as `simpleFoam -parallel` will be executed as:

```shell
mpirun -np 5 simpleFoam -parallel
```

This runs the simulation using 5 processes with hyperthreading disabled (so no `--use-hwthread-cpus` flag is passed) and MPI version 4.1.6.
Note: These parallel settings (number of processes, hyperthreading, and MPI version) apply only when using the `-parallel` flag. If you use `runParallel`, OpenFOAM manages parallelism internally and ignores these settings.
If your simulation runs successfully on your local machine but fails on Inductiva, it’s likely due to a version mismatch.
Make sure you're using the same software version and distribution (e.g., ESI or Foundation) on both your machine and Inductiva. If the version you need isn’t available on Inductiva yet, contact us — we’ll be happy to add it as soon as possible.
This usually means your Allrun script is not properly reporting errors.
Inductiva checks whether a simulation succeeded by looking at the script’s
exit code. A non-zero code means failure. However, in your case, a command
like runApplication blockMesh might be failing, but the script continues to
execute, often ending with a command like echo "Simulation Complete!",
which returns a success code (0).
Because the final command succeeds, the whole script appears successful, even though a key step failed.
Add this line to the top of your Allrun script:
```shell
set -e
```
This will make the script stop immediately if any command fails, and it will correctly return a non-zero exit code. That way, Inductiva can detect the failure and show it as such.
There are not enough slots available even though my machine has enough resources?

Before jumping to solutions, it's important to understand how your machine's resources are structured.
Take, for example, a machine type like c2d-highcpu-16, which provides 16 virtual CPUs (vCPUs). The key word here is virtual — while the machine can run 16 threads in parallel, it's backed by only 8 physical cores. This distinction matters.
The behavior of `runParallel` in OpenFOAM depends on the version you're using. Some versions fail with the following error:

```
There are not enough slots available
```

This happens because `runParallel` internally calls `mpirun` and lets OpenFOAM decide how many cores to use. In some versions, OpenFOAM restricts execution to physical cores only.
If you want to fully utilize all your vCPUs regardless of how OpenFOAM detects resources, you can manually invoke `mpirun` with the `--use-hwthread-cpus` flag. For example:

```shell
mpirun --use-hwthread-cpus -np 16 simpleFoam -parallel
```
This explicitly instructs mpirun to include hyperthreaded (virtual) CPUs, allowing your simulation to run across all 16 vCPUs.
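On a Linux machine you can inspect the logical vs. physical split yourself; `nproc` and `lscpu` are standard utilities (the figures in the comments are what you would expect on a c2d-highcpu-16, not guaranteed values):

```shell
nproc                               # logical CPUs (vCPUs), e.g. 16 on c2d-highcpu-16
lscpu | grep 'Thread(s) per core'   # 2 means each physical core backs two vCPUs
```

If `mpirun` reports "not enough slots" for a process count at or below the `nproc` figure, the hyperthread flag is usually the missing piece.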