
Inductiva API v0.12: Benchmarking and Beyond

The Inductiva Team


December 25, 2024

Tags: Inductiva API, API Release, Inductiva API v0.12 release features, How to choose the best cloud machine for simulations, Cloud cost-saving tools for scientists and engineers, API for Scientific Simulations

We’re excited to announce the release of v0.12 of the Inductiva API!

As we wrap up 2024, it’s hard not to feel a sense of pride when we look back at what our team has accomplished this year. Our first release of the year, v0.4, launched back in February. Since then, we’ve rolled out 16 updates across 9 versions, each packed with meaningful improvements and new features designed to support our awesome users: scientists and engineers tackling large-scale simulations.

Today, we’re thrilled to announce a feature that truly sets us apart in the market: Inductiva’s Benchmarking functionality is now part of the API! This powerful tool, originally developed for our internal use, is now available to all our users, adding even more value to your workflows.

Find the Best Machines for Your Workload

So, what is the Benchmarking functionality?

In short, it allows you to test a short sample simulation across dozens of available machine configurations, helping you make an informed decision about which option best suits your needs in terms of performance and cost.

As you’ve likely experienced, finding the “best machine for you” isn’t always straightforward. Cloud providers like Google Cloud offer a wide range of machine families, each with countless vCPU and RAM configuration options. Understanding what they all mean can quickly feel overwhelming.

Even if you manage to get a handle on the differences between the various hardware options, how do you figure out which configuration is actually the best for your specific simulation needs? With your own simulation software, time constraints, and budget to consider, navigating this sea of cloud machines can feel like an impossible task.

This is what we call the Allocation Problem, and it’s a tough one. It’s not just like finding a needle in a haystack (at least with a needle, you know what you’re looking for). It’s more like solving a jigsaw puzzle where the pieces keep changing shape as you try to fit them together.

But why should you even care? Why not just pick a machine that’s “good enough”? After all, don’t Google and AWS already offer great machines at reasonable prices?

The answer is: yes, they do. But here’s the catch: without testing, you could easily end up paying 10 times more than you need to, running your simulation on a machine that is far more expensive than a cheaper option with comparable performance.

You can’t truly know which machine is best for your needs upfront. The actual performance depends on a mix of factors, including the machine itself, the software you’re using, and how your simulations are configured. In other words, the only way to figure out if a machine is the right fit for you is to test it with your own simulation.

This is exactly what our new Benchmarking functionality is designed to do. It offers an easy, systematic way to run a sample simulation on multiple cloud machines, gather performance and cost metrics, and determine which option works best for your needs.
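To make the cost/performance trade-off concrete, here is a minimal, self-contained Python sketch of the comparison a benchmark run lets you make. The machine types, hourly prices, and runtimes below are illustrative placeholders, not real Inductiva benchmark results or Google Cloud prices:

```python
# Illustrative only: machine types, prices, and runtimes are made up.
from dataclasses import dataclass


@dataclass
class BenchmarkResult:
    machine_type: str
    price_per_hour: float  # USD/hour (illustrative)
    runtime_hours: float   # measured on the sample simulation

    @property
    def cost(self) -> float:
        """Cost of one sample run on this machine."""
        return self.price_per_hour * self.runtime_hours


results = [
    BenchmarkResult("c2-standard-4", 0.24, 3.0),
    BenchmarkResult("c2-standard-16", 0.95, 1.0),
    BenchmarkResult("c3-highcpu-44", 2.05, 0.5),
]

# Cheapest machine overall, and cheapest among those meeting a deadline.
cheapest = min(results, key=lambda r: r.cost)
under_1h = min((r for r in results if r.runtime_hours <= 1.0),
               key=lambda r: r.cost)

print(f"cheapest overall: {cheapest.machine_type} (${cheapest.cost:.2f}/run)")
print(f"cheapest under 1 h: {under_1h.machine_type} (${under_1h.cost:.2f}/run)")
```

Note that the answer changes with the constraint: the cheapest machine per run is not the one you’d pick under a deadline, which is exactly why guessing fails and measuring wins.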

With Inductiva’s Benchmarking functionality, you no longer need to guess: you’ll measure. Over time, you’ll develop a clear intuition about which machines are the best fit for your simulations and which ones to avoid altogether.

And you’ll save a lot of money on cloud resources—big time!

To learn more about Benchmarking and how to run your own, check out our new tutorial: Quick Recipe to Run a Benchmark.

What Else is New in v0.12?

In addition to Benchmarking, v0.12 introduces several smaller but important features designed to enhance your experience.

First, our console—the web UI for managing simulation tasks, cloud resources, and output files—has received a series of usability improvements, making it even more intuitive and efficient.

Another exciting addition is the ability to directly use the outputs of previous simulations as inputs for new ones. This means you can start new simulations based on previous results without needing to download and re-upload large files. It significantly speeds up chained simulations and eliminates the hassle of relying on your local machine as an intermediate step.
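The chaining pattern is easiest to see in code. The sketch below is a self-contained mock, not the real Inductiva client: the `Task` and `Simulator` classes and the `input_ref` parameter are illustrative stand-ins showing the idea of passing a previous task’s remote output straight into the next task:

```python
# Mock sketch of output-to-input chaining. These classes are
# illustrative stand-ins, NOT the real Inductiva API.
from dataclasses import dataclass


@dataclass
class Task:
    task_id: str

    @property
    def output(self) -> str:
        # A remote reference to this task's output files:
        # nothing is downloaded to the local machine.
        return f"remote://tasks/{self.task_id}/output"


class Simulator:
    def run(self, input_ref: str) -> Task:
        # In the mock, just record where the input came from.
        task = Task(task_id=f"task-after-{input_ref.split('/')[-2]}")
        task.input_ref = input_ref
        return task


sim = Simulator()
first = Task(task_id="abc123")            # a previously finished simulation
second = sim.run(input_ref=first.output)  # chained: no download/re-upload

print(second.input_ref)
```

The key point is the last call: the new task receives a remote reference, so large result files never round-trip through your local machine between steps of a chained simulation.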

What’s Next?

Plenty! We have exciting plans for 2025, starting with a game-changing feature that truly sets us apart. Soon, we’ll be enabling users to run Inductiva directly on their local resources. That’s right—you’ll be able to run simulations without incurring any cloud costs.

Zero costs?

Yes, you heard that right. Zero cloud costs.

Starting in 2025, users will be able to plug in their own resources to Inductiva and use the API to run simulations on those resources just as easily as they do on the cloud. No complicated setup, no extra hassle.

This feature allows users who have invested in their own infrastructure to strike the perfect balance: run simulations locally at zero cost, and scale up to the cloud when needed. It’s a game-changer, and no other provider offers this level of flexibility on behalf of the user.

Very soon, you’ll be able to do it all with Inductiva.

And that’s not all: there’s much more to come in 2025.

Stay tuned!
