Run the Galveston Island Beach and Dune Simulation

Learn how to run a real-world XBeach simulation on Inductiva.AI and scale it in the cloud.

This tutorial walks you through running a high-fidelity XBeach simulation using the Inductiva API, based on a real-world dataset that requires significant computational resources.

We will run and scale the Galveston Island use case from the GRIIDC repository, a research platform maintained by Texas A&M University-Corpus Christi’s Harte Research Institute for Gulf of Mexico Studies.

Prerequisites

  1. Download the dataset titled "XBeach model setup and results for beach and dune enhancement scenarios on Galveston Island, Texas", available here.
  2. Navigate to Files >> XBeach_Model_Runs >> Beach_Nourish_Only >> Input and download all the files in this directory.
  3. Save the downloaded files in a local folder named Beach_Nourish_Only, maintaining this directory structure:
    ls -las Beach_Nourish_Only
    total 130976
        0 drwxr-xr-x   12 paulobarbosa  staff       384 Nov  6 10:15 .
        0 drwx------@ 124 paulobarbosa  staff      3968 Nov  6 10:14 ..
        8 -rw-r--r--@   1 paulobarbosa  staff      3069 Nov  6 10:14 README.txt
    26184 -rw-r--r--@   1 paulobarbosa  staff  13404906 Nov  6 10:14 bed.dep
    26184 -rw-r--r--@   1 paulobarbosa  staff  13404906 Nov  6 10:14 bedfricfile.txt
       16 -rw-r--r--@   1 paulobarbosa  staff      5324 Nov  6 10:13 jonswap3.txt
    26184 -rw-r--r--@   1 paulobarbosa  staff  13404906 Nov  6 10:14 nebed.dep
        8 -rw-r--r--@   1 paulobarbosa  staff      2296 Nov  6 10:15 params.txt
       16 -rw-r--r--@   1 paulobarbosa  staff      4850 Nov  6 10:14 tide.txt
    26184 -rw-r--r--@   1 paulobarbosa  staff  13404906 Nov  6 10:14 x.grd
        8 -rw-r--r--@   1 paulobarbosa  staff       635 Nov  6 10:14 xbeach.slurm
    26184 -rw-r--r--@   1 paulobarbosa  staff  13404906 Nov  6 10:14 y.grd
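
Before launching anything, you can optionally confirm that every required input file made it into the folder. The short Python check below is not part of the dataset or the Inductiva API; it simply looks for the file names shown in the listing above (adjust the folder path if you saved the files elsewhere).

from pathlib import Path

# Local folder holding the downloaded GRIIDC input files
input_dir = Path("Beach_Nourish_Only")

# File names taken from the directory listing above
expected = [
    "params.txt", "bed.dep", "bedfricfile.txt", "jonswap3.txt",
    "nebed.dep", "tide.txt", "x.grd", "y.grd",
]

missing = [name for name in expected if not (input_dir / name).exists()]
print("Missing files:", missing if missing else "none")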

Adjust Simulation Parameters

To reduce simulation time, update the params.txt file with the following changes:

  • Add single_dir = 0 just after the header (required for XBeach v10+).
  • Set tstop to 34560 to shorten the simulation duration.
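
If you prefer to apply these two changes programmatically, the sketch below patches params.txt in place. It is only a convenience helper (not part of the Inductiva API) and assumes the usual key = value layout of an XBeach parameter file; adjust the insertion point if your file's header spans several lines.

from pathlib import Path

params_path = Path("Beach_Nourish_Only/params.txt")
lines = params_path.read_text().splitlines()

# Shorten the simulated duration by overriding tstop
lines = [
    "tstop        = 34560" if line.strip().startswith("tstop") else line
    for line in lines
]

# Add single_dir = 0 right after the first line (assumed to be the header)
lines.insert(1, "single_dir   = 0")

params_path.write_text("\n".join(lines) + "\n")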

Run Your Simulation

Below is the script required to run this simulation using the Inductiva API.

In this example, we use a c2d-highcpu-56 cloud machine featuring 56 virtual CPUs (vCPUs) and a 20 GB data disk.

import inductiva

# Allocate cloud machine on Google Cloud Platform
cloud_machine = inductiva.resources.MachineGroup(
    provider="GCP",
    machine_type="c2d-highcpu-56",
    data_disk_gb=20,
    spot=True)

# Initialize the Simulator
xbeach = inductiva.simulators.XBeach(version="1.24")

# Run simulation
task = xbeach.run(
    input_dir="Beach_Nourish_Only",
    on=cloud_machine)

# Wait for the simulation to finish and release the machine
task.wait()
cloud_machine.terminate()

# Download the results and print a summary of the task
task.download_outputs()
task.print_summary()

Note: Setting spot=True enables the use of spot machines, which are available at substantial discounts. However, your simulation may be interrupted if the cloud provider reclaims the machine.
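
If an interruption would be costly for your workflow, you can request a regular on-demand machine instead by flipping that same flag; everything else stays unchanged. This is simply the non-spot variant of the allocation above, at a higher hourly price:

# On-demand (non-spot) allocation: pricier, but not reclaimable by the provider
cloud_machine = inductiva.resources.MachineGroup(
    provider="GCP",
    machine_type="c2d-highcpu-56",
    data_disk_gb=20,
    spot=False)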

When the simulation is complete, we terminate the machine, download the results and print a summary of the simulation as shown below.

Task status: Success

Timeline:
    Waiting for Input         at 29/06, 18:19:10      8.781 s
    In Queue                  at 29/06, 18:19:19      42.799 s
    Preparing to Compute      at 29/06, 18:20:02      4.636 s
    In Progress               at 29/06, 18:20:06      4598.671 s
        └> 4598.451 s      /opt/openmpi/4.1.6/bin/mpirun --use-hwthread-cpus xbeach params.txt
    Finalizing                at 29/06, 19:36:45      6.671 s
    Success                   at 29/06, 19:36:52

Data:
    Size of zipped output:    398.16 MB
    Size of unzipped output:  668.30 MB
    Number of output files:   29

Total estimated cost (US$): 0.41 US$
    Estimated computation cost (US$): 0.40 US$
    Task orchestration fee (US$): 0.010 US$

Note: A per-run orchestration fee (0.010 US$) applies to tasks run from 01 Dec 2025, in addition to the computation costs.
Learn more about costs at: https://inductiva.ai/guides/basics/how-much-does-it-cost

As shown in the "In Progress" line of the timeline, which corresponds to the actual execution of the simulation, the core computation took approximately 1 hour and 17 minutes.

Scaling Your Simulation

Upgrading to More Powerful Machines

One of Inductiva’s key advantages is how easily you can scale your simulations to larger, more powerful machines with minimal code changes. Scaling up simply requires updating the machine_type parameter when allocating your cloud machine.

You can:

  • Increase the number of vCPUs,
  • Upgrade to next-generation cloud machines,
  • Or do both.

Explore the full range of available machines here.

For example, running the simulation on a machine with more vCPUs, such as the c2d-highcpu-112, reduces runtime from 1 hour and 17 minutes to approximately 47 minutes, with a modest cost increase to US$0.48.
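
In code, that upgrade is a one-argument change to the allocation shown earlier; nothing else in the script needs to be touched:

# Same allocation as before, only the machine type changes
cloud_machine = inductiva.resources.MachineGroup(
    provider="GCP",
    machine_type="c2d-highcpu-112",  # was "c2d-highcpu-56"
    data_disk_gb=20,
    spot=True)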

The latest-generation c4d instances improve performance further, even with fewer vCPUs than the comparable c2d machines (48 vs. 56 and 96 vs. 112 vCPUs). Why? c4d processors are significantly faster per core, making them ideal when time-to-solution is critical. The trade-off is a higher price per vCPU.

Below is a comparison showing the effect of scaling this simulation:

Machine Type       vCPUs   Execution Time   Estimated Cost (USD)
c2d-highcpu-56     56      1h, 17 min       0.40
c2d-highcpu-112    112     47 min           0.48
c4d-highcpu-48     48      1h, 9 min        0.97
c4d-highcpu-96     96      44 min           1.23

Hyperthreading Considerations

By default, Google Cloud machines have hyperthreading enabled, meaning each physical CPU core runs two hardware threads.

While hyperthreading can improve throughput in general workloads, in HPC simulations with many threads (30+), it can actually reduce performance due to bandwidth and cache contention.

To disable hyperthreading and use only physical cores, configure the machine group with threads_per_core=1:

cloud_machine = inductiva.resources.MachineGroup(
    provider="GCP",
    machine_type="c2d-highcpu-56",
    threads_per_core=1,
    spot=True)

Below are the results of the same simulations with hyperthreading disabled (1 thread per core):

Machine Type       Threads (active vCPUs)   Execution Time   Estimated Cost (USD)
c2d-highcpu-56     28                       1h, 19 min       0.39
c2d-highcpu-112    56                       45 min           0.40
c4d-highcpu-96     48                       37 min           1.02
c4d-highcpu-192    96                       26 min           1.43

Disabling hyperthreading improves performance in most of these configurations, even though only half as many threads are active. The c4d cores still outperform the older c2d machines per core, confirming that faster processors make a measurable difference.

Key Takeaways

  • Scaling is easy: simply change the machine type; the rest of your code stays the same.
  • c4d machines are faster per core, even with fewer vCPUs, making them ideal when speed is critical, though at a higher cost.
  • Hyperthreading can slow memory-bound simulations; disabling it often improves performance.
  • Choose wisely: use c4d when runtime matters most, and c2d when cost per vCPU is the priority.

Inductiva gives you all the flexibility of modern cloud HPC without the headaches — faster results, effortless scaling, and minimal code changes.

It’s that simple! 🚀