Embracing Uncertainty in Fisheries Science with IPMA - Portuguese Institute for the Ocean and Atmosphere

By The Inductiva Team and Rui Coelho (IPMA, Principal Investigator)

July 17, 2025

Tags: Fisheries Science, Uncertainty in Fisheries Models, Stock Assessment Models, High-Performance Computing for Science, HPC in Fisheries Research, Cloud Computing for Scientific Simulations, Ensemble Modeling in Fisheries, Shortfin Mako Shark Assessment, Inductiva AI, Sustainable Fisheries Management

At Inductiva.AI, we believe that the best science happens through collaboration—and that powerful computing can help tackle some of the world’s most complex challenges. One area where this is particularly true is fisheries science, where uncertainty is a constant companion.

In this edition of Collaborative Insights, we’re proud to share a project developed with Rui Coelho, Principal Investigator at IPMA (Portuguese Institute for the Ocean and Atmosphere). Rui and his team used Inductiva’s cloud-based HPC platform to dramatically accelerate their work in stock assessment modeling for the South Atlantic shortfin mako shark—a species whose conservation depends on rigorous science and timely insights.

Below, Rui shares the story of their research in his own words:

Embracing Uncertainty in Fisheries Science

All science must live with uncertainty, and fisheries science is no exception. Despite advances in data collection and modeling, we still face significant unknowns in the biological and observational parameters that are used in stock assessment models. Yet, we must still build models that represent the underlying population dynamics as accurately as possible, so that fisheries managers can act to ensure the sustainable use of our ocean resources.

One robust way to account for these uncertainties is by developing model grid ensembles—that is, many models, each reflecting a different plausible combination of input values within scientifically acceptable bounds. While conceptually “simple,” this approach presents a major computational challenge: running hundreds or thousands of models in parallel requires substantial processing power, memory, and infrastructure for managing, analyzing, and summarizing the resulting data.
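The grid idea itself is easy to express in code: pick a few axes of uncertainty, enumerate every plausible combination, and treat each combination as one model to run. A minimal Python sketch of that enumeration step (the parameter names and values here are illustrative placeholders, not the ones used in the actual assessment):

```python
from itertools import product

# Illustrative axes of uncertainty (names and values are hypothetical)
growth_rates = [0.03, 0.05, 0.07]     # intrinsic population growth rate r
carrying_caps = [50_000, 100_000]     # carrying capacity K (tonnes)
catchabilities = [0.8, 1.0, 1.2]      # relative catchability multiplier

# Each combination of inputs defines one model in the grid ensemble
grid = [
    {"r": r, "K": K, "q": q}
    for r, K, q in product(growth_rates, carrying_caps, catchabilities)
]

print(len(grid))  # 3 x 2 x 3 = 18 model configurations
```

The combinatorics explain the computational challenge: every extra axis multiplies the number of models, so a grid over several uncertain parameters quickly reaches hundreds or thousands of runs.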

In this project, we applied the JABBA stock assessment model to assess the status of the South Atlantic shortfin mako shark (Isurus oxyrinchus). This stock is managed internationally by ICCAT (International Commission for the Conservation of Atlantic Tunas). As top predators, pelagic sharks have life histories that generally make them more vulnerable to overexploitation, highlighting the need for particularly robust stock assessments and scientific advice. At the same time, pelagic sharks are often considered data-limited or data-moderate species, so significant uncertainties in their biology and fisheries data still exist and need to be considered during the stock assessments.

We started with a more classical approach, using a simple grid of 4 base case models that can be easily run on any modern computer. While those 4 base models already reflect some of the key uncertainties known for this species, we then moved forward and developed a larger ensemble of hundreds of models, each with key biological parameters randomly sampled from scientifically plausible distributions. The goal was to simulate a broad range of possible realities for this stock, providing a more robust basis for the scientific advice.
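The sampling step described above can be sketched as follows. This is a hedged illustration in Python (JABBA itself is an R package), and the distributions and parameter names are stand-ins for the species-specific priors used in the real assessment:

```python
import random

random.seed(7)  # make the sketch reproducible

def sample_parameters():
    """Draw one plausible parameter set (distributions are illustrative)."""
    return {
        "r": random.lognormvariate(-2.5, 0.3),  # growth rate, strictly > 0
        "K": random.uniform(40_000, 120_000),   # carrying capacity (tonnes)
        "psi": random.betavariate(20, 5),       # initial depletion, in (0, 1)
    }

# One parameter set per ensemble member: 500 "possible realities"
ensemble = [sample_parameters() for _ in range(500)]
```

Choosing distributions whose support matches the biology (lognormal for a strictly positive growth rate, beta for a ratio bounded between 0 and 1) keeps every sampled model within scientifically plausible bounds by construction.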

To make this possible, we partnered with Inductiva.AI, using their cloud-based High-Performance Computing (HPC) platform.

  • Phase 1: We generated input values and launched 500 model runs in parallel using simpler virtual machines (c2-standard-8, each with 8 vCPUs and 32 GB RAM). Running the full 500-model ensemble took only 24 minutes, generated over 12.88 GB of outputs, and had an estimated cost of just $2.29 USD. For comparison, the same computation on a standard modern laptop would take over 16 hours, and that is without accounting for possible memory bottlenecks and the risk of failure.
  • Phase 2: We consolidated and analyzed the ensemble model outputs using a single more powerful machine (c2-standard-60, with 60 vCPUs and 240 GB RAM). This step, which included diagnostics, convergence checks, result summarization, and model ensemble averaging, was completed in just over 10 minutes. Performing this task locally on a laptop would have been impractical, if not impossible, given the very large data volume and memory requirements.
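The Phase 2 summarization can be sketched in a few lines: collect one headline quantity per run and report an ensemble median with an interval, so the spread across the grid carries the uncertainty into the advice. In this hedged Python illustration the run results are placeholder numbers, not the actual outputs of the 500 JABBA runs; B/Bmsy (biomass relative to the level producing maximum sustainable yield) is a standard stock-status ratio that surplus-production models like JABBA report:

```python
import statistics

# Placeholder per-run results: terminal-year B/Bmsy from each ensemble member
# (synthetic values for illustration; the real ones come from the model runs)
run_results = [1.0 + 0.002 * i for i in range(500)]

# Ensemble summary: median and an approximate 90% interval across the grid
ordered = sorted(run_results)
median = statistics.median(ordered)
lo = ordered[int(0.05 * len(ordered))]
hi = ordered[int(0.95 * len(ordered)) - 1]

print(f"B/Bmsy median={median:.2f}, 90% interval=({lo:.2f}, {hi:.2f})")
```

Reporting an interval rather than a single point estimate is the payoff of the ensemble approach: managers see not just the most likely stock status but how wide the plausible range is.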

I’ll end this text the same way I started it: all science must live with uncertainty. But rather than ignore it or oversimplify it, we should embrace it and ensure that it’s reflected in our models. Using platforms like Inductiva allows us to do exactly that, in a much simpler, faster, and more cost-effective way than has ever been possible before.

Figure 1: Stock trajectories and main results from the large model ensemble grid, run using Inductiva cloud-based High-Performance Computing (HPC), simulating multiple possible realities for the South Atlantic stock of shortfin mako shark.

Inductiva.AI Perspective

We’re thrilled to see how Rui and the IPMA team leveraged Inductiva’s platform to transform a computationally daunting challenge into a streamlined, affordable workflow. Running hundreds of complex simulations in under half an hour—and analyzing gigabytes of results in mere minutes—demonstrates the power of high-performance computing when it’s accessible and easy to use.

This collaboration perfectly embodies what we strive for at Inductiva: making advanced simulations faster, more cost-effective, and more accessible to scientists tackling real-world problems.

We’re honored to support projects like this and look forward to many more collaborations where computing power fuels scientific insight.

Interested in learning how Inductiva can support your research or industry project? Get in touch with our team. 
