Inductiva API v0.12: Benchmarking and Beyond

The Inductiva Team

December 25, 2024

Tags:

Inductiva API, API Release, Inductiva API v0.12 release features, How to choose the best cloud machine for simulations, Cloud cost-saving tools for scientists and engineers, API for Scientific Simulations

We’re excited to announce the release of v0.12 of the Inductiva API!

As we wrap up 2024, it’s hard not to feel a sense of pride when we look back at what our team has accomplished this year. Our first release of the year, v0.4, launched back in February. Since then, we’ve rolled out 16 updates across 9 versions, each packed with meaningful improvements and new features designed to support our awesome users: scientists and engineers tackling large-scale simulations.

Today, we’re thrilled to announce a feature that truly sets us apart in the market: Inductiva’s Benchmarking functionality is now part of the API! This powerful tool, originally developed for our internal use, is now available to all our users, adding even more value to your workflows.

Find the Best Machines for Your Workload

So, what is the Benchmarking functionality?

In short, it allows you to test a short sample simulation across dozens of available machine configurations, helping you make an informed decision about which option best suits your needs in terms of performance and cost.

As you’ve likely experienced, finding the “best machine for you” isn’t always straightforward. Cloud providers like Google Cloud offer a wide range of machine families, each with countless vCPU and RAM configuration options. Understanding what they all mean can quickly feel overwhelming.

Even if you manage to get a handle on the differences between the various hardware options, how do you figure out which configuration is actually the best for your specific simulation needs? With your own simulation software, time constraints, and budget to consider, navigating this sea of cloud machines can feel like an impossible task.

This is what we call the Allocation Problem, and it’s a tough one. It’s not just like finding a needle in a haystack (at least with a needle, you know what you’re looking for). It’s more like solving a jigsaw puzzle where the pieces keep changing shape as you try to fit them together.

But why should you even care? Why not just pick a machine that’s “good enough”? After all, don’t Google and AWS already offer great machines at reasonable prices?

The answer is: yes, they do. But here’s the catch: you could easily end up spending 10 times more than you need to.

In other words, you might be paying 10 times more for a simulation that would run just as well, and just as fast, on a far less expensive machine.

You can’t truly know which machine is best for your needs upfront. The actual performance depends on a mix of factors, including the machine itself, the software you’re using, and how your simulations are configured. In other words, the only way to figure out if a machine is the right fit for you is to test it with your own simulation.

This is exactly what our new Benchmarking functionality is designed to do. It offers an easy, systematic way to run a sample simulation on multiple cloud machines, gather performance and cost metrics, and determine which option works best for your needs.

With Inductiva’s Benchmarking functionality, you no longer need to guess: you’ll measure. Over time, you’ll develop a clear intuition about which machines are the best fit for your simulations and which ones to avoid altogether.

And you’ll save a lot of money on cloud resources—big time!

To learn more about Benchmarking and how to run your own, check out our new tutorial: Quick Recipe to Run a Benchmark.
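
To give you a flavor of what this looks like in code, here’s a minimal sketch using the Python client. Treat it as illustrative only: the exact names of the Benchmark class and its methods, and the SWASH example, are assumptions on our part; the tutorial above is the authoritative recipe.

```python
import inductiva

# Define a benchmark that runs the same sample SWASH simulation on
# several candidate machine types. NOTE: the Benchmark class and the
# method/argument names below are illustrative assumptions; see the
# "Quick Recipe to Run a Benchmark" tutorial for the definitive API.
benchmark = inductiva.benchmarks.Benchmark(name="swash-machine-comparison")

# Parameters shared by every run: the simulator and the sample input files.
benchmark.set_default(
    simulator=inductiva.simulators.SWASH(),
    input_dir="path/to/sample-simulation",
    sim_config_filename="input.sws",
)

# One run per candidate machine type.
for machine_type in ("c2-standard-4", "c2d-standard-8", "c3-standard-8"):
    benchmark.add_run(
        on=inductiva.resources.MachineGroup(machine_type=machine_type))

# Launch all runs (repeated to smooth out variance), wait for them to
# finish, and export the performance and cost metrics for comparison.
benchmark.run(num_repeats=2)
benchmark.wait()
benchmark.export(fmt="csv", filename="swash-benchmark.csv")
benchmark.terminate()
```

The exported table is what turns guessing into measuring: you can rank the candidate machines by runtime, by cost, or by whatever trade-off between the two matters for your project.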

What Else is New in v0.12?

In addition to Benchmarking, v0.12 introduces several smaller but important features designed to enhance your experience.

First, our console—the web UI for managing simulation tasks, cloud resources, and output files—has received a series of usability improvements, making it even more intuitive and efficient.

Another exciting addition is the ability to directly use the outputs of previous simulations as inputs for new ones. This means you can start new simulations based on previous results without needing to download and re-upload large files. It significantly speeds up chained simulations and eliminates the hassle of relying on your local machine as an intermediate step.
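
As a rough sketch of what a chained workflow could look like with the Python client (the `remote_assets` parameter and the use of the first task’s ID to reference its outputs are assumptions on our part; check the API reference for the exact call):

```python
import inductiva

machine_group = inductiva.resources.MachineGroup(machine_type="c2-standard-8")
machine_group.start()

swash = inductiva.simulators.SWASH()

# Stage 1: run the upstream simulation as usual.
first_task = swash.run(
    input_dir="stage-1-inputs",
    sim_config_filename="input.sws",
    on=machine_group,
)
first_task.wait()

# Stage 2: reference the outputs of the first task directly as inputs for
# the next run, with no download/re-upload through the local machine.
# NOTE: "remote_assets" and the way the previous outputs are referenced
# here are illustrative assumptions, not the definitive API.
second_task = swash.run(
    input_dir="stage-2-inputs",
    sim_config_filename="input.sws",
    remote_assets=[first_task.id],
    on=machine_group,
)
second_task.wait()

machine_group.terminate()
```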

What’s Next?

Plenty! We have exciting plans for 2025, starting with a game-changing feature that truly sets us apart. Soon, we’ll be enabling users to run Inductiva directly on their local resources. That’s right—you’ll be able to run simulations without incurring any cloud costs.

Zero costs?

Yes, you heard that right. Zero cloud costs.

Starting in 2025, users will be able to plug in their own resources to Inductiva and use the API to run simulations on those resources just as easily as they do on the cloud. No complicated setup, no extra hassle.

This feature allows users who have invested in their own infrastructure to strike the perfect balance: run simulations locally at zero cost, and scale up to the cloud when needed. It’s a game-changer, and no other provider offers this level of flexibility to its users.

Very soon, you’ll be able to do it all with Inductiva.

And that’s not all: there’s much more to come in 2025.

Stay tuned!
