Unveiling v0.13: Local Meets Cloud and More

The Inductiva Team

January 27, 2025

Tags: Bring Your Own Hardware (BYOH) for HPC, OpenFOAM v2412 and v12 API integration, Cost transparency in scientific computing, Inductiva API v0.13 release, API for Scientific Simulations

We’re thrilled to announce the release of Inductiva API v0.13! This update isn’t just another step forward; it’s a big leap. Alongside many improvements across the platform, v0.13 introduces an exciting new feature that we believe will be a game-changer for our users. Once again, Inductiva sets itself apart in the world of High-Performance Computing (HPC) for numerical simulations.

Bring Your Own Hardware (BYOH)

Starting with v0.13, you now have the flexibility to run simulations not only in the cloud but also on your own hardware, all while using the same Python interface you already know and love. This new flexibility means you can choose the setup that works best for you without sacrificing any of Inductiva’s powerful features. Plus, everything remains fully compatible with the Inductiva Web Console, giving you the same seamless experience for managing and monitoring your tasks.

The best part? It’s as simple as changing one line of code. Just update the line where you define your machine group, and you’re good to go.
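Since the exact client calls depend on your setup, here is a minimal, illustrative sketch of the idea, using stand-in names (`CLOUD_CONFIG`, `LOCAL_CONFIG`, `select_machine_group`, `run_simulation` are all hypothetical, not the actual Inductiva API):

```python
# Hypothetical sketch of the "one-line" switch between cloud and BYOH.
# All names below are illustrative stand-ins for exposition only.

CLOUD_CONFIG = {"provider": "cloud", "machine_type": "c2-standard-16"}
LOCAL_CONFIG = {"provider": "local"}  # a Task Runner on your own hardware

def select_machine_group(use_local: bool) -> dict:
    """The single line that changes; the rest of the script stays the same."""
    return LOCAL_CONFIG if use_local else CLOUD_CONFIG

def run_simulation(config: dict, input_dir: str) -> str:
    # Stand-in for submitting a task: compute cost drops to zero on
    # your own hardware (illustrative hourly rate for the cloud case).
    cost = 0.0 if config["provider"] == "local" else 1.25
    return f"running {input_dir} on {config['provider']} (compute ${cost:.2f}/h)"
```

Everything downstream of the machine-group choice (submitting tasks, monitoring them in the Web Console, downloading outputs) is unaffected by which branch you pick.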

This is what we call Bring Your Own Hardware (BYOH). We believe this feature will not only accelerate your R&D work but also help you optimize your compute costs. When you run your simulations on your own hardware, your cloud compute costs for those tasks drop to zero.

Nothing. Zip. Nada. Zilch. Goose egg.

Seriously. That’s it. No tricks, no fine print, just complete control over your compute costs.

Naturally, if you choose to store data in the cloud, there may be some associated costs. However, these are typically a small fraction of the overall expenses compared to running your simulations entirely in the cloud.

Switch between local and cloud resources depending on your time constraints and scalability needs. With Inductiva, you can easily redirect computations to your own hardware for quick test runs or simulation refinements. When it’s time to bring out the big guns for longer, resource-heavy tasks, redirect Inductiva to a massive cloud machine, built to crush larger workloads with speed and efficiency.

Behind the Scenes: Meet Task Runner

How does this even work?

It’s all thanks to Task Runner, the core computational engine of Inductiva, which we’re now open-sourcing. Task Runner has always been working behind the scenes, managing every computation you request. Each time you start a cloud machine and send your simulation, a freshly installed Task Runner takes over. It handles everything end-to-end: downloading the necessary software, executing commands, managing outputs, and storing results in the right place. Along the way, it tracks errors and provides real-time feedback to keep you informed.
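Conceptually, the Task Runner's lifecycle can be pictured like this (a simplified sketch for exposition; the names and task format are illustrative, not the open-sourced implementation):

```python
# Illustrative Task Runner lifecycle: set up the simulator, execute the
# requested commands, and store the outputs, logging each step along the way.
# The task dict layout and simulator name below are hypothetical.

def run_task(task: dict) -> dict:
    log = []
    log.append(f"install {task['simulator']}")          # download the software
    for cmd in task["commands"]:                        # execute commands in order
        log.append(f"exec {cmd}")
    log.append(f"upload outputs to {task['output']}")   # store results
    return {"status": "success", "log": log}            # real-time feedback

result = run_task({
    "simulator": "openfoam-esi-v2412",
    "commands": ["blockMesh", "simpleFoam"],
    "output": "storage://my-bucket/task-123",
})
```

Because this whole loop is self-contained, the same engine can run on a freshly started cloud machine or on a box under your desk.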

What our team has achieved over the past few weeks is a remarkable engineering milestone that brings significant benefits to our users. We’ve transformed the Task Runner into an autonomous, easy-to-install component that you can now start locally, all while maintaining the same familiar interface you’ve been using for cloud simulations, making the transition seamless and hassle-free.

You can install as many Task Runners as you need to take full advantage of your hardware resources. Whether you’re tapping into GPUs or running MPI jobs (required by certain simulators), Task Runner offers optimized options for both. Once Inductiva’s Task Runner is installed and configured, there’s no need to manually install any of the simulators we provide. Everything works seamlessly out of the box, so you can hit the ground running.

Forget about version compatibility headaches between local and cloud runs. If your simulation works with your local Task Runner, it will work on the cloud too. All you need to do is change a single line of code.

Why wait? Gather the machines you have, fire them up, and build your very own Inductiva cluster, fully in sync with our cloud infrastructure. Big things are coming as we push the Bring Your Own Hardware (BYOH) concept even further. Stay tuned, because the best is yet to come!

What Else is New with v0.13?

With every release, we focus on making Inductiva even more user-friendly and transparent, especially when it comes to understanding and managing resource costs. v0.13 is no exception, bringing thoughtful updates to our Web Console to enhance usability.

Revamped MachineGroups Section

Most notably, we revamped the MachineGroups section in the left-hand panel of the Console, organizing resources into three distinct sub-areas for better clarity and improved readability. These newly structured sub-areas are:

  • Active: Displays the MachineGroups currently running.
  • Terminated: Lists previous instances along with their corresponding costs.
  • Instance Types: Showcases the available computational resources you can use.
Screenshot: MachineGroups section displaying active, terminated, and available instances.

Users can now use the Console to directly inspect configuration parameters and other key details for both active and terminated MachineGroups.

New Cost Breakdown Screen

We also added a new Cost Breakdown screen in the Account section, where you can review past spending with a detailed monthly breakdown of Compute and Storage costs.

Screenshot: Cost Breakdown screen in the Account section showing a detailed monthly summary of Compute and Storage costs.

All costs and available user credits now update in near real-time, providing better clarity and control over spending. Additionally, MachineGroups automatically stop when user credits reach zero, preventing negative balances.

DualSPHysics Just Got Simpler, OpenFOAM More Powerful

In this release, we’ve made two major upgrades to our simulation capabilities. First, we’ve added support for the latest versions of both OpenFOAM distributions. Users can now run v2412 of the OpenFOAM ESI distribution and v12 of the OpenFOAM Foundation distribution, alongside previously available versions.

Second, we’ve streamlined the experience of running DualSPHysics simulations on Inductiva. Previously, users had to explicitly list each DualSPHysics command required for their simulation. Now, the API supports the same shell scripts used to orchestrate local simulations. This means you can transition your DualSPHysics workflows to Inductiva with little to no changes to your existing setup.
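The shift is essentially from an explicit command list to the script you already have. As a toy sketch (the command names and parsing below are illustrative, not Inductiva's actual mechanism):

```python
# Illustrative only: previously each DualSPHysics command was listed
# explicitly in the API call; now the same shell script that drives a
# local run can be submitted as-is. The script contents are hypothetical.

SHELL_SCRIPT = """\
# run.sh - the same script you use locally
gencase case_def case
dualsphysics case case_out
"""

def commands_from_script(script: str) -> list[str]:
    """Toy sketch: recover the command sequence from a shell script."""
    return [
        line.strip()
        for line in script.splitlines()
        if line.strip() and not line.strip().startswith("#")
    ]
```

In practice this means an existing local DualSPHysics workflow can move to Inductiva without rewriting its orchestration logic.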

What’s Next?

Our journey is far from over. We remain committed to democratizing access to High-Performance Computing and empowering scientists and engineers to accelerate their R&D efforts.

No complicated setups, no extra steps, just computing without limits.

One of the most exciting features we’re working on is the ability to use Inductiva to connect with existing HPC infrastructures, such as those at universities or large government agencies. We believe that both traditional HPC and cloud computing are powerful options for large-scale scientific computing, but their fundamental differences often make it challenging for users to integrate them effectively.

Our goal is for Inductiva to become the “gold standard” for orchestrating jobs across these two computational infrastructures, providing users with a seamless way to combine their strengths. We’ve made significant progress toward this vision and will soon share the results of our first experiments.

Working with remote computational resources comes with usability challenges. Users often struggle to monitor what’s happening during remote computations and to access the data being produced in real time. Traditionally, this would require direct access to the remote machine, such as connecting via SSH—a solution we believe unnecessarily complicates modern computing for most use cases.

To address this, we’ve been actively enhancing the monitoring capabilities of our platform. By improving the visibility of every step in the computational process, we ensure that all relevant operational data is accessible via the API and Web Console, eliminating the need for direct machine access.

Additionally, we are working on a feature that will soon allow simulators to stream data directly to your local machine. This means you’ll be able to inspect simulation outputs as if they were being generated locally.

This brings us closer to realizing our dream of providing every user with an “infinite laptop” experience, where your laptop becomes your passport to limitless compute and storage power.


Don’t Forget to Upgrade

Ensure you’re using the latest version by running:

pip install --upgrade inductiva

A big thank you to our team and collaborators for making this release possible.

Happy simulating!
