You can finally simulate as much as you wish you could, not just as much as you can.
gprMax supports parallelism through MPI and OpenMP. For a deeper dive into how parallelism works in gprMax, refer to the official documentation.
In this tutorial, you’ll configure and run gprMax simulations sequentially and with MPI, learning how to cut runtimes from hours to minutes with Inductiva.
For this demonstration, we’ll use the B-scan with a bowtie antenna model from the gprMax example cases. The setup includes a metal cylinder with a diameter of 20 mm buried in a dielectric half-space with a relative permittivity of 6, and an antenna similar to the GSSI 1.5 GHz antenna.
For a B-scan, the antenna must be repositioned for each A-scan (trace). In this case, the B-scan covers a distance of 270 mm with traces every 5 mm, resulting in 54 separate model runs.
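As a quick sanity check, the trace count follows directly from the scan length and step quoted above:

scan_length_mm = 270   # total B-scan distance
step_mm = 5            # spacing between consecutive A-scans
print(scan_length_mm // step_mm)   # 54 -> the "-n 54" used in the commands below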
Download the required files here and save them to a folder named b-scan-case.
First, let's run the 54 models sequentially. This means the simulation will process the A-scans one after another: model 1, then model 2, and so on.
You can do this using the following command:
python -m gprMax cylinder_Bscan_GSSI_1500.in -n 54
Here, -n specifies the number of runs.
Each run produces a separate output file. To merge them into a single result file, run:
python -m tools.outputfiles_merge cylinder_Bscan_GSSI_1500.in
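The merged result is a standard HDF5 file, so you can inspect it with h5py. Below is a minimal sketch, assuming the merge step wrote cylinder_Bscan_GSSI_1500_merged.out and that the receiver data sits under the usual rxs/rx1 group with an Ez component (adjust these names if your output differs):

import h5py
import matplotlib.pyplot as plt

# Open the merged B-scan and pull out one field component per trace.
with h5py.File("cylinder_Bscan_GSSI_1500_merged.out", "r") as f:
    dt = f.attrs["dt"]            # simulation time step (s)
    data = f["rxs/rx1/Ez"][:]     # shape: (iterations, number of traces)

print(f"{data.shape[1]} traces, {data.shape[0]} samples each, dt = {dt:.2e} s")

# Plot the B-scan: each column is one A-scan along the 270 mm traverse.
plt.imshow(data, cmap="gray", aspect="auto")
plt.xlabel("Trace number")
plt.ylabel("Time iteration")
plt.show()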
The required Python script to run this case sequentially on Inductiva is shown below:
import inductiva

# Instantiate machine group
cloud_machine = inductiva.resources.MachineGroup(
    machine_type="c2d-highcpu-16",
    provider="GCP",
    spot=True)

input_dir = "/Path/to/b-scan-case"

# Initialize the Simulator
gprmax = inductiva.simulators.GprMax(version="3.1.7")

commands_sequential = [
    "python -m gprMax cylinder_Bscan_GSSI_1500.in -n 54",
    "python -m tools.outputfiles_merge cylinder_Bscan_GSSI_1500.in"
]

# Start sequential simulation
task_sequential = gprmax.run(
    input_dir=input_dir,
    commands=commands_sequential,
    on=cloud_machine,
    n_vcpus=16)

# Wait for the simulations to finish
task_sequential.wait()
cloud_machine.terminate()
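Once the task finishes, you can pull the results, including the merged .out file, back to your local machine. A minimal sketch using the Inductiva task API (the default download location may vary slightly between client versions):

# Download all simulation outputs for this task.
# By default they are saved under a local inductiva_output/<task-id>/ folder.
task_sequential.download_outputs()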
Running these 54 simulations sequentially took approximately 1 hour and 55 minutes.
Next, we will explore MPI-based parallel execution, which will significantly speed up this case.
gprMax supports MPI, allowing each of the 54 runs to be executed in parallel. This requires a machine with enough vCPUs to support all the runs. Hence, we'll be running the case on a c2d-highcpu-112, which has 112 vCPUs, providing ample resources for all 54 simulations to run concurrently.
⚠️ Note on vCPUs and Hyperthreading: In most cloud environments (e.g., Google Cloud), a vCPU represents a single thread rather than a full physical core. By default, Google Cloud VMs provide 2 vCPUs per physical core, so a c2d-highcpu-112 machine with 112 vCPUs typically has 56 physical cores with hyperthreading enabled.
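If you want to confirm how many logical CPUs a VM exposes, a quick standard-library check (for example, added as an extra command in the scripts below) is enough:

import os

# Number of logical CPUs (vCPUs / hardware threads) visible to the OS.
# On a c2d-highcpu-112 this reports 112 (56 physical cores x 2 threads per core).
print(os.cpu_count())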
Wrap the Python command with mpirun as follows:
mpirun -n 55 python -m gprMax cylinder_Bscan_GSSI_1500.in -n 54 --mpi-no-spawn
- -n 55 sets the number of MPI processes, which should be one more than the number of model runs (54 + 1) to account for the master process.
- --mpi-no-spawn is the MPI mode recommended in the gprMax documentation.

Here's the Python script to run the case with MPI:
import inductiva

# Instantiate machine group
cloud_machine = inductiva.resources.MachineGroup(
    machine_type="c2d-highcpu-112",
    provider="GCP",
    spot=True)

input_dir = "/Path/to/b-scan-case"

# Initialize the Simulator
gprmax = inductiva.simulators.GprMax(version="3.1.7")

commands_mpi = [
    "mpirun -n 55 python -m gprMax cylinder_Bscan_GSSI_1500.in -n 54 --mpi-no-spawn",
    "python -m tools.outputfiles_merge cylinder_Bscan_GSSI_1500.in"
]

# Start MPI simulation
task_mpi = gprmax.run(
    input_dir=input_dir,
    commands=commands_mpi,
    on=cloud_machine,
    n_vcpus=112)

# Wait for the simulations to finish
task_mpi.wait()
cloud_machine.terminate()
This MPI-based simulation significantly reduces the runtime compared to sequential execution, taking approximately 23 minutes.
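To check the runtime and estimated cost of your own run directly from the Python client, the task summary helper is handy (a sketch; the exact output depends on the Inductiva client version):

# Print execution details for the MPI task: duration, machine type and estimated cost.
task_mpi.print_summary()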
The following table summarizes the performance and cost of sequential versus MPI-based execution for the B-scan with a bowtie antenna model case on Inductiva:
| Processing Type | Machine Type | Total Time | Estimated Cost (USD) |
|---|---|---|---|
| Sequential | c2d-highcpu-16 | 1h, 55 min | 0.15 |
| MPI | c2d-highcpu-112 | 23 min | 0.21 |
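Putting those numbers side by side:

# Speedup and cost ratio implied by the table above.
sequential_min, mpi_min = 115, 23            # 1 h 55 min vs. 23 min
sequential_cost, mpi_cost = 0.15, 0.21       # estimated cost in USD
print(f"Speedup:    {sequential_min / mpi_min:.1f}x")        # ~5.0x
print(f"Cost ratio: {mpi_cost / sequential_cost:.1f}x")      # ~1.4x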
Running the simulations on Inductiva with MPI-based parallel execution drastically reduces the runtime compared to sequential processing, from nearly 2 hours to about 23 minutes, roughly a 5× speedup. Although the estimated cost for the larger machine is slightly higher, the time savings are substantial, making MPI a highly efficient option for large-scale gprMax simulations.