Run your simulations only when you need to. No licenses, no subscription traps.
This tutorial will show you how to run COAWST simulations using the Inductiva API.
Unlike many simulators, COAWST must be compiled for each specific configuration. This means that you cannot use a single pre-compiled version across simulations - COAWST must be compiled using the appropriate settings tailored to your use case.
To simplify this process, you'll need to include a few extra files in your simulation inputs, alongside your standard configuration and data files:
- `build_coawst.sh`: the script used to compile COAWST for your specific configuration.

If you are using files already provided in the COAWST repository, you don't need to upload them along with your input files. Instead, simply configure your build script to reference the path where Inductiva exposes those files. Alternatively, you can copy the necessary files from the exposed COAWST directory into your input files (more on this in the following section).
For each simulation, the COAWST directory will be available at:
📂 /workdir/output/artifacts/__COAWST
You are free to access and use any files within this directory.
Additionally, all input files will be located at:
📂 /workdir/output/artifacts/
Please keep this in mind when working with absolute paths.
When running COAWST, you can specify init_commands - a set of commands executed
before compilation. These are useful for copying the necessary files from the COAWST
directory to your working directory.
For example:
init_commands = [
# Copy LANDUSE.TBL for the simulation to ".".
# Note that "." points to /workdir/output/artifacts/
"cp /workdir/output/artifacts/__COAWST/LANDUSE.TBL ."
]
# Run simulation
task = coawst.run(
input_dir="/Path/to/input_files",
sim_config_filename="sim_config.in",
build_coawst_script="build_coawst.sh",
init_commands=init_commands, # Pre-compilation commands
n_vcpus=360,
use_hwthread=True,
on=cloud_machine
)
We will cover the JOE_TC/DiffGrid use case from the COAWST GitHub repository, which illustrates how to set up and execute models using different grid resolutions within COAWST.
Download the following files:
- the `build_coawst.sh` script from this link
- `namelist.input`
- `wrfbdy_d01`
- `wrfinput_d01`

Create a folder named JOE_TC_DiffGrid and place all files inside. Your folder structure should look like this:
-rwxr-xr-x@ 1 paulobarbosa staff 3085 Feb 27 10:47 INPUT_JOE_TC_COARSE
-rwxr-xr-x@ 1 paulobarbosa staff 15528 Feb 27 10:56 build_coawst.sh
-rwxr-xr-x@ 1 paulobarbosa staff 7266 Feb 28 07:12 coupling_joe_tc.in
-rwxr-xr-x@ 1 paulobarbosa staff 2978 Nov 11 20:45 joe_tc.h
-rwxr-xr-x@ 1 paulobarbosa staff 116775 Nov 11 20:45 joe_tc_coarse_bathy.bot
-rwxr-xr-x@ 1 paulobarbosa staff 1671068 Nov 11 20:45 joe_tc_coarse_grd.nc
-rwxr-xr-x@ 1 paulobarbosa staff 195000 Nov 11 20:45 joe_tc_coarse_grid_coord.grd
-rwxr-xr-x@ 1 paulobarbosa staff 4763160 Nov 11 20:45 joe_tc_coarse_ocean_init.nc
-rwxr-xr-x@ 1 paulobarbosa staff 5132 Feb 28 07:19 namelist.input
-rwxr-xr-x@ 1 paulobarbosa staff 167380 Feb 28 07:13 ocean_joe_tc_coarse.in
-rwxr-xr-x@ 1 paulobarbosa staff 25403104 Nov 11 20:45 scrip_joe_tc_diffgrid.nc
-rwxr-xr-x@ 1 paulobarbosa staff 46541404 Nov 11 20:45 wrfbdy_d01
-rwxr-xr-x@ 1 paulobarbosa staff 70658632 Nov 11 20:45 wrfinput_d01
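Before uploading anything, it can be worth a quick sanity check that your folder matches this listing. A minimal sketch, with the file names taken from the listing above (adjust for your own case):

```python
from pathlib import Path

# Hypothetical pre-flight check: confirm all expected inputs are present
# before submitting the task. The file list mirrors the listing above.
EXPECTED_FILES = [
    "INPUT_JOE_TC_COARSE",
    "build_coawst.sh",
    "coupling_joe_tc.in",
    "joe_tc.h",
    "joe_tc_coarse_bathy.bot",
    "joe_tc_coarse_grd.nc",
    "joe_tc_coarse_grid_coord.grd",
    "joe_tc_coarse_ocean_init.nc",
    "namelist.input",
    "ocean_joe_tc_coarse.in",
    "scrip_joe_tc_diffgrid.nc",
    "wrfbdy_d01",
    "wrfinput_d01",
]

input_dir = Path("JOE_TC_DiffGrid")
missing = [name for name in EXPECTED_FILES if not (input_dir / name).exists()]
if missing:
    raise FileNotFoundError(f"Missing input files: {missing}")
print("All input files present.")
```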
In this section, you will update the input files (build_coawst.sh and .in files) to ensure they reference the correct paths.
In `build_coawst.sh`, make the following changes:
- Set `COAWST_APPLICATION` to match your header file (`joe_tc.h`), capitalized and without the file extension:
  `export COAWST_APPLICATION=JOE_TC`
- Set `MY_ROOT_DIR` to:
  `export MY_ROOT_DIR=/workdir/output/artifacts/__COAWST`
- Set `which_MPI` to `openmpi`:
  `export which_MPI=openmpi`
- Make `MY_HEADER_DIR` and `MY_ANALYTICAL_DIR` point to the correct location where `joe_tc.h` is stored:
  `export MY_HEADER_DIR=/workdir/output/artifacts`
  `export MY_ANALYTICAL_DIR=/workdir/output/artifacts`
These are all the necessary modifications to the script. Once updated, your build_coawst.sh will be properly
configured for the compilation process.
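If you prefer to script these edits instead of making them by hand, here is a minimal sketch using only the Python standard library. `patch_build_script` is a hypothetical helper written for this tutorial, not part of the Inductiva API:

```python
import re
from pathlib import Path

def patch_build_script(path: str) -> None:
    """Rewrite the export lines listed above in build_coawst.sh."""
    replacements = {
        "COAWST_APPLICATION": "JOE_TC",
        "MY_ROOT_DIR": "/workdir/output/artifacts/__COAWST",
        "which_MPI": "openmpi",
        "MY_HEADER_DIR": "/workdir/output/artifacts",
        "MY_ANALYTICAL_DIR": "/workdir/output/artifacts",
    }
    text = Path(path).read_text()
    for var, value in replacements.items():
        # Replace the whole "export VAR=..." line, leaving the rest intact.
        text = re.sub(rf"^(\s*export\s+{var})=.*$", rf"\1={value}",
                      text, flags=re.MULTILINE)
    Path(path).write_text(text)

patch_build_script("JOE_TC_DiffGrid/build_coawst.sh")
```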
Next, you need to update your `.in` simulation files to fix the paths inherited from the COAWST repository.
Typically, this involves modifying references from Projects/JOE_TC/DiffGrid/file.txt to just file.txt.
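The steps below make these edits file by file; if you'd rather strip that prefix in bulk first, here is a minimal sketch (illustrative only; some values, such as VARNAME in the ocean file, must instead point into the exposed __COAWST tree, so review the results):

```python
from pathlib import Path

# Hypothetical bulk rewrite: strip the repository prefix so file references
# resolve relative to the flat input directory.
PREFIX = "Projects/JOE_TC/DiffGrid/"
input_dir = Path("JOE_TC_DiffGrid")

for name in ["coupling_joe_tc.in", "ocean_joe_tc_coarse.in",
             "INPUT_JOE_TC_COARSE"]:
    path = input_dir / name
    path.write_text(path.read_text().replace(PREFIX, ""))
```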
Start by updating the coupling_joe_tc.in file:
- Set `WAV_name = INPUT_JOE_TC_COARSE`
- Set `OCN_name = ocean_joe_tc_coarse.in`
- Set `SCRIP_COAWST_NAME = scrip_joe_tc_diffgrid.nc`
- Set `W2ONAME == wav1_to_ocn1_weights.nc`
- Set `W2ANAME == wav1_to_atm1_weights.nc`
- Set `A2ONAME == atm1_to_ocn1_weights.nc`
- Set `A2WNAME == atm1_to_wav1_weights.nc`
- Set `O2ANAME == ocn1_to_atm1_weights.nc`
- Set `O2WNAME == ocn1_to_wav1_weights.nc`

Next, update the wave model file INPUT_JOE_TC_COARSE:
Change
`READGRID COORDINATES 1 'Projects/JOE_TC/DiffGrid/joe_tc_coarse_grid_coord.grd' 4 0 0 FREE`
to
`READGRID COORDINATES 1 'joe_tc_coarse_grid_coord.grd' 4 0 0 FREE`
and change
`READINP BOTTOM 1 'Projects/JOE_TC/DiffGrid/joe_tc_coarse_bathy.bot' 4 0 FREE`
to
`READINP BOTTOM 1 'joe_tc_coarse_bathy.bot' 4 0 FREE`

Lastly, update the ocean model file ocean_joe_tc_coarse.in:
- Set `VARNAME = /workdir/output/artifacts/__COAWST/ROMS/External/varinfo.dat`
- Set `GRDNAME == joe_tc_coarse_grd.nc`
- Set `ININAME == joe_tc_coarse_ocean_init.nc`

You're now ready to send your simulation to the Cloud!
Here is the code required to run a COAWST simulation using the Inductiva API:
"""COAWST Simulation."""
import inductiva
# Allocate a machine on Google Cloud Platform
cloud_machine = inductiva.resources.MachineGroup(
provider="GCP",
machine_type="c2-standard-4",
spot=True
)
# Initialize the simulator
coawst = inductiva.simulators.COAWST(
version="3.8")
# Run simulation
task = coawst.run(
input_dir="/Path/to/JOE_TC_DiffGrid",
sim_config_filename="coupling_joe_tc.in",
build_coawst_script="build_coawst.sh",
n_vcpus=3,
on=cloud_machine
)
# Wait for the simulation to finish and download the results
task.wait()
cloud_machine.terminate()
task.download_outputs()
task.print_summary()
Note: Setting `spot=True` enables the use of spot machines, which are available at substantial discounts. However, your simulation may be interrupted if the cloud provider reclaims the machine.
In this example, we're using a relatively small cloud machine (c2-standard-4), which is equipped with 4 virtual CPUs.
COAWST requires a precise core allocation for its simulations, meaning the number of CPUs must exactly match the simulation's configuration.
In this case, the input files indicate that the simulation should run on 3 cores (n_vcpus=3) - no more, no less.
This configuration is defined in the coupling_joe_tc.in file:
! Number of parallel nodes assigned to each model in the coupled system.
! Their sum must be equal to the total number of processors.
NnodesATM = 1 ! atmospheric model
NnodesWAV = 1 ! wave model
NnodesOCN = 1 ! ocean model
NnodesHYD = 0 ! hydrology model
Each component of the simulation is assigned a specific number of cores. While we can increase this number later, we'll keep it as is for now.
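Since the core counts must match exactly, it can help to validate the sum before submitting. A minimal sketch, assuming the `Nnodes*` lines follow the format shown above:

```python
import re
from pathlib import Path

def total_nnodes(coupling_file: str) -> int:
    """Sum all Nnodes* assignments found in a COAWST coupling .in file."""
    text = Path(coupling_file).read_text()
    values = re.findall(r"^\s*Nnodes\w+\s*=\s*(\d+)", text, flags=re.MULTILINE)
    return sum(int(v) for v in values)

# The run requires n_vcpus to match the coupling file exactly.
n_vcpus = 3
total = total_nnodes("JOE_TC_DiffGrid/coupling_joe_tc.in")
assert total == n_vcpus, f"coupling file requests {total} cores, not {n_vcpus}"
```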
Once the simulation is complete, we terminate the machine, download the results and print a summary of the simulation as shown below.
inductiva tasks info 6wt3dp49uhy45y708x848eu2y
Task status: Success
Timeline:
Waiting for Input at 27/02, 19:32:00 7.963 s
In Queue at 27/02, 19:32:08 15.545 s
Preparing to Compute at 27/02, 19:32:24 11.374 s
In Progress at 27/02, 19:32:35 36630.294 s
├> 12.082 s cp -r /opt/COAWST /workdir/output/artifacts/__COAWST
├> 1.063 s create_all_sim_links
├> 1335.318 s bash build_coawst.sh
├> 35278.638 s /opt/openmpi/4.1.6/bin/mpirun --use-hwthread-cpus --np 3 coawstM coupling_joe_tc.in
├> 1.234 s rm -r __COAWST
└> 1.065 s clean_all_sim_links
Finalizing at 28/02, 05:43:05 132.651 s
Success at 28/02, 05:45:18
Data:
Size of zipped output: 4.45 GB
Size of unzipped output: 4.79 GB
Number of output files: 43
Total estimated cost (US$): 0.72 US$
Estimated computation cost (US$): 0.71 US$
Task orchestration fee (US$): 0.010 US$
Note: A per-run orchestration fee (0.010 US$) applies to tasks run from 01 Dec 2025, in addition to the computation costs.
Learn more about costs at: https://inductiva.ai/guides/how-it-works/basics/how-much-does-it-cost
The simulation details might seem complex initially, but let's focus on the In Progress stage, as this is the part specific to your simulation.
All other steps are common to every simulation run on Inductiva.
The "In Progress" step lists the commands executed during your simulation, along with their durations. Below is a breakdown of the key ones:
- `cp -r /opt/COAWST /workdir/output/artifacts/__COAWST`: copies the COAWST directory into your working directory, where it will be compiled.
- `create_all_sim_links`: COAWST requires certain files (e.g., CAMtr_volume_mixing_ratio, LANDUSE.TBL) to be available in the working directory. By creating symbolic links to them, we avoid the need to send all those files with the input. If you include your own LANDUSE.TBL file in the input directory, it will be used for the simulation instead of the default version from the COAWST folder.
- `bash build_coawst.sh`: compiles COAWST. Remember that your working directory is /workdir/output/artifacts, and COAWST is located at /workdir/output/artifacts/__COAWST.
- `coawstM coupling_joe_tc.in`: runs the simulation itself, launched via mpirun as shown in the timeline above.
- `rm -r __COAWST`: removes the copied COAWST directory once the run finishes.
- `clean_all_sim_links`: removes the symbolic links created by the create_all_sim_links command.

These steps ensure that your COAWST simulation runs efficiently and is well-managed in the cloud. 🚀
Based on the execution times, the compilation took 1,335 seconds (around 22 minutes), while the simulation itself ran for 35,278 seconds (approximately 9 hours and 47 minutes).
Note that compilation time varies with the COAWST configuration chosen, while the simulation runtime is primarily determined by the computational resources allocated, including the number of virtual CPUs.
In this section, we'll explore strategies to scale up your simulation, in order to reduce the simulation time.
As mentioned earlier, the number of virtual CPUs used for your simulation must exactly match the configuration specified in the input files. Therefore, to scale up the simulation, you'll need to modify the following three files:
1. `coupling_joe_tc.in`:
   - `NnodesATM`: number of virtual CPUs assigned to the atmospheric model.
   - `NnodesWAV`: number of virtual CPUs assigned to the wave model.
   - `NnodesOCN`: number of virtual CPUs assigned to the ocean model.
   The sum NnodesATM + NnodesWAV + NnodesOCN must equal the n_vcpus passed in the Python script.
2. `ocean_joe_tc_coarse.in`:
   - `NtileI`: I-direction partition.
   - `NtileJ`: J-direction partition.
   NtileI * NtileJ must equal NnodesOCN (as defined in the coupling file).
3. `namelist.input`:
   - `nproc_x` and `nproc_y`: set nproc_x * nproc_y equal to NnodesATM to take full advantage of the virtual cores assigned to the atmospheric model.

Below are the simulations we ran, with their respective results:
| Machine Type | vCPUs | Execution Time | Estimated Cost (USD) |
|---|---|---|---|
| c2-standard-4 | 4 | 9h, 47 min | 0.71 |
| c2-standard-60 | 60 | 1h | 1.37 |
For these simulations, we divided the number of virtual CPUs available on each machine equally among the three models (1, 20, and 36 virtual CPUs, respectively).
We used the following values for Ntile and nproc: (1 1), (4 5), and (6 6), respectively.
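If you want to reproduce this equal split on other machine sizes, here is a rough sketch that divides the vCPUs among the three models and picks a near-square tiling for each share. It is illustrative only, since an equal split is not necessarily optimal for every case:

```python
import math

def plan_partition(n_vcpus: int) -> dict:
    """Split vCPUs equally across the three models and factor each share."""
    per_model = n_vcpus // 3          # e.g. 60 vCPUs -> 20 per model
    # Most "square" factor pair with ntile_i * ntile_j == per_model;
    # the same pair can serve as nproc_x/nproc_y for the atmospheric model.
    ntile_i = max(d for d in range(1, math.isqrt(per_model) + 1)
                  if per_model % d == 0)
    ntile_j = per_model // ntile_i
    return {
        "NnodesATM": per_model, "NnodesWAV": per_model, "NnodesOCN": per_model,
        "NtileI": ntile_i, "NtileJ": ntile_j,
        "n_vcpus": 3 * per_model,     # the value to pass to coawst.run()
    }

print(plan_partition(60))  # NtileI=4, NtileJ=5, as used for c2-standard-60
```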
We’ve covered the key steps for setting up and running a COAWST simulation using the Inductiva API. We also explored the necessary modifications to input files for compiling and running COAWST.
By following this guide, you now have a clearer understanding of how to configure and efficiently run COAWST simulations on Inductiva's platform.