# Flow #4: Simulating Fluid Dynamics

(with a Graph Network Simulator)

**Pedro Ferro Pereira** | Main author

**David Carvalho** | Co-author | Editor

**Ivan Pombo** | Reviewer

**Fábio Cruz** | Technical Reviewer

Our epic journey carries on. We’re on our way to building our own **Machine Learning** model that can predict the **dynamics** of a fluid in a simple (yet rich) physical scenario, as *computed* by a **simulator** based on **Smoothed Particle Hydrodynamics (SPH)**.

The groundwork has been done. We now understand how to model the **physics** underlying the evolution of a fluid (Flow #1) and how these dynamics can be simulated using **SPH** (Flow #2). In Flow #3, we found the right structures to encode (and later process) the simulated flow data in **graphs**.

In this post, we delve into the architecture behind the Deep Learning model we will use — an **Encoder-Processor-Decoder (E-P-D) model**. As you will see, assembling this beast is best done in *Block Land*.

So, let’s flow!


As we saw previously, *graphs* are data structures that comprise **nodes** and **edges** linking them.
**Features** can be assigned at the **node**, **edge** and **global** levels:

Fig. 1: A graph is a collection of nodes, linked by edges. Features are stored at the **node**, **edge** and **global** levels. Credits: David Carvalho / Inductiva

Such features are stored differently. Global features are stored in the **global features vector \(\mathbf{u}\)** while the other features are represented by matrices.

All *node feature vectors* \(\mathbf{x}_i\) (*edge feature vectors* \(\mathbf{e}_{ij}\)) are stored in the **node feature matrix \(\mathbf{X}\)** ( **edge feature matrix \(\mathbf{E}\)**). The connectivity between nodes is encoded in the **adjacency matrix \(\mathbf{A}\)**.

With these objects, we can represent the graph as \(\mathcal{G} = (\mathbf{X}, \mathbf{E}, \mathbf{A}, \mathbf{u})\).
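To make this concrete, here is a minimal sketch (in Python/NumPy) of how such a graph could be stored. All names and sizes are illustrative, and we use a sender/receiver edge list as a common stand-in for the adjacency matrix \(\mathbf{A}\):

```python
import numpy as np

# A toy graph G = (X, E, A, u): 4 nodes with 3 features each,
# 5 directed edges with 2 features each, and a 6-dimensional global vector.
X = np.random.rand(4, 3)   # node feature matrix X (one row per node)
E = np.random.rand(5, 2)   # edge feature matrix E (one row per edge)

# Connectivity: edge k points from senders[k] to receivers[k].
# This edge list encodes the same information as the adjacency matrix A.
senders = np.array([0, 1, 2, 3, 0])
receivers = np.array([1, 2, 3, 0, 2])

u = np.random.rand(6)      # global feature vector u

graph = {"X": X, "E": E, "senders": senders, "receivers": receivers, "u": u}
```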

*Great* — we know how to instantiate a graph. But we now need to understand how to **manipulate** and **process** this type of data structure.

Time to get *blocky*.

## The Blocky World

… of Graph Networks

The seminal work behind **Graph Neural Networks (GNNs)** by Scarselli et al. [1] tackles this challenge by *extending* the existing concepts in Neural Networks (NNs) to the realm of graphs.

An all-encompassing framework that *generalises* these ideas to functions other than NNs led to the development of **Graph Networks (GNs)**, championed by Battaglia et al. [2].

Unsurprisingly, we will use GNs in the architecture of our DL model.

### Graph Networks

… block by block

So, what are these networks about?

The basic unit of computation in GNs is the **GN block**, a *module* that takes as input a graph \(\mathcal{G}\) and outputs a graph \(\mathcal{G}'\) with the **same** structure but **different** node, edge and global features.

To do this, **update functions** \(\varphi\) modify features at the *node*, *edge* and *global* levels. Additionally, **aggregation functions** \(\rho\) allow for overall information to be aggregated over an appropriate *pool* of elements.

Technically, how are the computations in the GN block carried out?

In order to allow all levels to interact with each other, three **sub-blocks** *sequentially* apply these functions. When all sub-blocks are used, a GN block looks like this:

Fig. 2: A *Graph Network* block. The input is a graph \(\mathcal{G}\). Through computations at the *edge*, *node* and *global* blocks, all features are transformed, resulting in a structurally similar graph \(\mathcal{G}'\) with *updated* features. Credits: Pedro Pereira / Inductiva

**Edge Block** - *firstly, we update all edge features*:

The features \(\mathbf{e}_{ij}\) of an edge connecting node \(i\) to node \(j\) (with respective features \(\mathbf{x}_i\) and \(\mathbf{x}_j\)) are recomputed by the **update function \(\varphi^e\)** to \(\mathbf{e}_{ij}' = \varphi^e(\mathbf{e}_{ij}, \mathbf{x}_i, \mathbf{x}_j, \mathbf{u})\).

**Node Block** - *then, node features are recomputed given the new edge features*:

The set of newly-updated edge features \(\mathcal{E}'_j = \{ \mathbf{e}'_{ij} \ \ \forall i \in \mathcal{N}_j \}\) belonging to the neighbourhood \(\mathcal{N}_j\) of node \(j\) is fed to the **aggregation function \(\rho^{e \rightarrow x}\)**, yielding \(\bar{\mathbf{e}}'_j = \rho^{e \rightarrow x}(\mathcal{E}'_j)\). With it, the node features \(\mathbf{x}_j\) are *updated* by applying the **update function \(\varphi^x\)** as \(\mathbf{x}'_j = \varphi^x(\mathbf{x}_j, \bar{\mathbf{e}}'_j, \mathbf{u})\).

**Global Block** - *finally, with both edge and node features updated, global features are now updated*:

The newly-updated node and edge feature matrices \(\mathbf{X}'\) and \(\mathbf{E}'\) are respectively fed to the **aggregation functions \(\rho^{x \rightarrow u}\) and \(\rho^{e \rightarrow u}\)**, so that features are aggregated into the single representations \(\bar{\mathbf{x}}' = \rho^{x \rightarrow u}(\mathbf{X}')\) and \(\bar{\mathbf{e}}' = \rho^{e \rightarrow u}(\mathbf{E}')\). The global features \(\mathbf{u}\) are finally recomputed with the **update function \(\varphi^u\)** as \(\mathbf{u}' = \varphi^u(\mathbf{u}, \bar{\mathbf{e}}', \bar{\mathbf{x}}')\).

*Et voilà*: all features have been **updated** — and the output graph \(\mathcal{G}' = (\mathbf{X}', \mathbf{E}', \mathbf{A}, \mathbf{u}')\) is ready to serve.

Fig. 3: The three GN computation blocks. The element being updated is in blue. The elements required for the update are opaque, while the ones not required are transparent. The element in blue is also required for its own update. Credits: Pedro Pereira / Inductiva

Granted — this was heavy. But have a look at Fig. 3 — the main idea behind it is quite intuitive!

In GNs, information about the features is *incrementally* **passed** across the graph. In order to update **all** graph features, we *first* update the **edge** features, *then* update the **node** features given those new edge features and *finally* update the **global** features given the new edge and node features.
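In code, one full GN block boils down to a short function. The sketch below (plain NumPy, reusing the `graph` dictionary from the earlier snippet) is our own illustration of the scheme in [2], not a reference implementation: the `phi_*` update functions and `rho_*` aggregations are passed in as arguments, and every node is assumed to have at least one incoming edge.

```python
import numpy as np

def mean_agg(rows):
    # A permutation-invariant aggregation: the mean over a set of vectors.
    return rows.mean(axis=0)

def gn_block(graph, phi_e, phi_x, phi_u,
             rho_ex=mean_agg, rho_xu=mean_agg, rho_eu=mean_agg):
    """One GN block: edge update -> node update -> global update."""
    X, E, u = graph["X"], graph["E"], graph["u"]
    s, r = graph["senders"], graph["receivers"]

    # Edge block: e'_ij = phi_e(e_ij, x_i, x_j, u) for every edge.
    E_new = np.stack([phi_e(E[k], X[s[k]], X[r[k]], u) for k in range(len(E))])

    # Node block: aggregate each node's incoming updated edges,
    # then x'_j = phi_x(x_j, e_bar_j, u).
    X_new = np.stack([phi_x(X[j], rho_ex(E_new[r == j]), u)
                      for j in range(len(X))])

    # Global block: u' = phi_u(u, e_bar, x_bar).
    u_new = phi_u(u, rho_eu(E_new), rho_xu(X_new))

    return {**graph, "X": X_new, "E": E_new, "u": u_new}
```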

The aggregation functions \(\rho^{e \rightarrow x}\), \(\rho^{x \rightarrow u}\) and \(\rho^{e \rightarrow u}\) can be any *permutation-invariant* function — e.g. a sum, average or max — to make sure **no** element is *favoured* as information is propagated.
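This property is easy to check numerically: shuffling the set of vectors being aggregated leaves the result untouched.

```python
import numpy as np

edges = np.array([[1., 0.], [0., 2.], [3., 1.]])
shuffled = edges[[2, 0, 1]]  # same edge features, different order

# Sum, mean and max all ignore the ordering of their inputs.
assert np.allclose(edges.sum(axis=0), shuffled.sum(axis=0))
assert np.allclose(edges.mean(axis=0), shuffled.mean(axis=0))
assert np.allclose(edges.max(axis=0), shuffled.max(axis=0))
```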

When the *update functions* \(\varphi^x\), \(\varphi^e\) and \(\varphi^u\) are trainable NNs, e.g. **Multi-layer Perceptrons (MLPs)**, the block becomes a **Graph Neural Network** — and this is what we will use!
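As a sketch of what that looks like, the update functions from the block above can be instantiated as small MLPs, here in PyTorch and with hypothetical feature sizes. Each \(\varphi\) simply consumes the concatenation of its inputs:

```python
import torch.nn as nn

F_E, F_X, F_U = 2, 3, 6   # edge, node and global feature sizes (placeholders)

def mlp(in_dim, out_dim, hidden=64):
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                         nn.Linear(hidden, out_dim))

# phi_e sees (e_ij, x_i, x_j, u); phi_x sees (x_j, e_bar_j, u);
# phi_u sees (u, e_bar, x_bar), all concatenated along the feature axis.
phi_e = mlp(F_E + 2 * F_X + F_U, F_E)
phi_x = mlp(F_X + F_E + F_U, F_X)
phi_u = mlp(F_U + F_E + F_X, F_U)
```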

So, we now need to *assemble* our full model *GN block by GN block*.

It’s time to play Lego in *Block Land*.

## Deep Learning with

… an Encoder-Processor-Decoder

The architecture we will now describe and later use in the Deep Learning model is based on the **Encoder-Processor-Decoder (E-P-D)** model proposed by Sanchez-Gonzalez et al. [4]. A great read!

Let’s process what an E-P-D model is, *block-by-block*.

### The *E* in the *E-P-D* model

… the Encoder

Start with an input graph \(\mathcal{G} = (\mathbf{A}, \mathbf{X}, \mathbf{E})\).
Before we do anything sophisticated, we want to find a **latent**, more expressive representation of the features to propagate through the rest of the model.

This is what the **encoder** does. For this, it simply *updates* node and edge features **independently** with respective functions \(\varphi^x\) and \(\varphi^e\). The output is the latent graph \(\mathcal{G}'_0 = (\mathbf{A},\mathbf{X}'_0, \mathbf{E}'_0)\).

As a GN, the encoder looks like this:

Fig. 4: The encoder as a GN block. Each node and edge is updated *independently*, without sharing information with other nodes or edges. Credits: Pedro Pereira / Inductiva
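A possible encoder, sketched in PyTorch (sizes and names are placeholders, not the exact configuration from [4]), just runs two independent MLPs:

```python
import torch.nn as nn

class Encoder(nn.Module):
    """Embeds node and edge features into a latent space, independently."""
    def __init__(self, f_x, f_e, latent=128):
        super().__init__()
        self.node_mlp = nn.Sequential(nn.Linear(f_x, latent), nn.ReLU(),
                                      nn.Linear(latent, latent))
        self.edge_mlp = nn.Sequential(nn.Linear(f_e, latent), nn.ReLU(),
                                      nn.Linear(latent, latent))

    def forward(self, X, E):
        # No message passing: each node and edge is mapped on its own.
        return self.node_mlp(X), self.edge_mlp(E)
```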

### The *P* in the *E-P-D* model

… the Processor

It’s time to compute the effect of the edges on the nodes. This is done by the **processor**, which updates the node features given the edge features in \(M\) sequential steps. Simply put, these **message-passing** steps allow the nodes to interact with each other by sharing information with their neighbours.

Each of these steps happens inside a single GN block, all with the **same** architecture. The processor is then a stack of \(M\) GN blocks operating **sequentially** at the node level on an **encoded** latent graph \(\mathcal{G}'_0 = (\mathbf{A}, \mathbf{X}'_0, \mathbf{E}'_0)\). By the end, a final latent representation \(\mathcal{G}'_M = (\mathbf{A}, \mathbf{X}'_M, \mathbf{E}'_M)\) is output.

The single **message-passing** process is represented by this GN block:

Fig. 5: A single Message-passing step in the processor as a GN block, from graph \(\mathcal{G}_{m-1}\) to graph \(\mathcal{G}_{m}\). The “messages” are encoded by the edges, which are first updated. Then, they are *aggregated*, allowing the nodes to be updated. Credits: Pedro Pereira / Inductiva
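A single message-passing step could be sketched as below (PyTorch; our own simplification of the block in [4], with `senders`/`receivers` as integer index tensors). The residual connections mentioned in [5] are the two element-wise additions on the return line:

```python
import torch
import torch.nn as nn

class MessagePassingBlock(nn.Module):
    """One processor step: update edges, aggregate them per receiving node,
    then update nodes. Global features are not used here."""
    def __init__(self, latent=128):
        super().__init__()
        self.edge_mlp = nn.Sequential(nn.Linear(3 * latent, latent), nn.ReLU(),
                                      nn.Linear(latent, latent))
        self.node_mlp = nn.Sequential(nn.Linear(2 * latent, latent), nn.ReLU(),
                                      nn.Linear(latent, latent))

    def forward(self, X, E, senders, receivers):
        # Each "message" sees its own features plus both endpoint nodes.
        E_new = self.edge_mlp(torch.cat([E, X[senders], X[receivers]], dim=-1))
        # Permutation-invariant aggregation: sum the messages per receiver.
        agg = torch.zeros_like(X).index_add_(0, receivers, E_new)
        X_new = self.node_mlp(torch.cat([X, agg], dim=-1))
        return X + X_new, E + E_new   # residual connections [5, 6]
```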

### The *D* in the *E-P-D* model

… the decoder

Finally, after the \(M\) message-passing steps have been computed by the *processor*, the updated graph \(\mathcal{G}'_M = (\mathbf{A}, \mathbf{X}'_M, \mathbf{E}'_M)\) is used by the **decoder** to extract meaningful information. The output is any valuable per-node feature \(\mathbf{y}_i = \varphi^x(\mathbf{x}'_{M,i})\), where \(\mathbf{x}'_{M,i}\) is the final latent feature vector of node \(i\).

As a GN, the decoder looks like this:

Fig. 6: The Decoder as a GN block. It simply performs a node update \(\varphi^x\) to extract the desired output. Credits: Pedro Pereira / Inductiva
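The corresponding decoder sketch is even simpler: one MLP applied to each node's final latent vector (again, illustrative sizes only).

```python
import torch.nn as nn

class Decoder(nn.Module):
    """Maps each node's final latent vector to the target quantity,
    e.g. a 3D acceleration."""
    def __init__(self, latent=128, out_dim=3):
        super().__init__()
        self.node_mlp = nn.Sequential(nn.Linear(latent, latent), nn.ReLU(),
                                      nn.Linear(latent, out_dim))

    def forward(self, X_M):
        return self.node_mlp(X_M)   # one output vector y_i per node
```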

### The Full

(E-P-D) Model

We can now assemble all of our GN blocks. Our **Encoder-Processor-Decoder model** can be thought of as the *multi-block* Graph Network:

Fig. 7: The Encoder-Processor-Decoder model. The Encoder encodes an input graph \(\mathcal{G}\) into a latent representation \(\mathcal{G}'_0\). The Processor performs \(M\) steps of message-passing (MP), returning a final latent representation \(\mathcal{G}'_M\). The Decoder decodes the relevant information of each node and returns the desired output \(\mathbf{Y}\). Credits: Pedro Pereira / Inductiva
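Putting the three sketches above together, the whole model is just a composition (hyperparameters such as `latent` and `M` are placeholders):

```python
import torch.nn as nn

class EPD(nn.Module):
    """Encoder -> M message-passing steps -> Decoder, assembled from the
    Encoder, MessagePassingBlock and Decoder sketches above."""
    def __init__(self, f_x, f_e, latent=128, M=10, out_dim=3):
        super().__init__()
        self.encoder = Encoder(f_x, f_e, latent)
        self.processor = nn.ModuleList(
            [MessagePassingBlock(latent) for _ in range(M)])
        self.decoder = Decoder(latent, out_dim)

    def forward(self, X, E, senders, receivers):
        X, E = self.encoder(X, E)
        for block in self.processor:
            X, E = block(X, E, senders, receivers)
        return self.decoder(X)   # per-node predictions Y
```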

### A Graph Network Simulator

… for Fluid Dynamics

Remember what we’re after? *Indeed*: *we want to predict the acceleration \(\mathbf{a}_i\) for all particles \(i\) at a given timeframe \(t\)*.
For this, we will input the timeframe graph \(\mathcal{G}^t\) and obtain the target \(\mathbf{y} = \mathbf{a}^t\).

But wait: once this model has been fully trained, we can predict the particle accelerations **iteratively**, for any timeframe \(t\).
*Phenomenal* — the E-P-D model can now be used as a **simulator**!

But how exactly?

We have detailed in Flow #3 how a graph \(\mathcal{G}^t\) and its features can be constructed for a single time frame \(t\). Now, we feed this graph to the E-P-D model to predict \(\mathbf{a}^t\).

We loop in time and integrate with an Euler scheme, which updates particle velocities and positions as:

\[\mathbf{v}^{t+1} = \mathbf{v}^t + \Delta t \, \mathbf{a}^t, \qquad \mathbf{r}^{t+1} = \mathbf{r}^{t} + \Delta t \, \mathbf{v}^{t+1}.\]

The complete particle **trajectories** can then be output. The updated positions and velocities are not only returned but also fed into the GNS **again** in order to predict the next time frame — *it’s play time!*
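In pseudo-Python, the whole rollout loop is only a few lines. Here `model_predict` stands for the trained E-P-D model and `build_graph` for the graph construction from Flow #3; both names are placeholders of our own:

```python
import numpy as np

def rollout(model_predict, build_graph, r0, v0, dt, steps):
    """Iterate the GNS: build a graph from the current state, predict the
    accelerations, then integrate them with the Euler scheme above."""
    r, v = r0.copy(), v0.copy()
    trajectory = [r.copy()]
    for _ in range(steps):
        a = model_predict(build_graph(r, v))   # a^t from the E-P-D model
        v = v + dt * a                         # v^{t+1} = v^t + dt a^t
        r = r + dt * v                         # r^{t+1} = r^t + dt v^{t+1}
        trajectory.append(r.copy())
    return np.stack(trajectory)                # shape: (steps + 1, N, dim)
```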

Fig. 8: The Graph Network Simulator. At time \(t\) we know all features to construct the graph \(\mathcal{G}\) — the positions \(\mathbf{r}^{t}\) and the \(C\) previous velocities \(\mathbf{v}^{t}\). \(\mathcal{G}\) is fed to the E-P-D model, which outputs \(\mathbf{a}^t\). With that information, the position and velocity of the **next** timeframe can be computed and fed again to the model. Credits: Pedro Pereira / Inductiva

So there you have it. This **Graph Network Simulator (GNS)** is exactly what we will use to *deep learn* the dynamics of fluids!

### The Power

… of GNs

Thinking back, it is not surprising that *Graph Networks (GNs)* are up to the task. They provide extremely **powerful** and **versatile** ways to operate on graphs and beautifully capture the underlying principles of Classical Dynamics.

Why, you ask?

For one, they support **arbitrary** function and feature representations. Features can be *more* than the usual real-valued vectors, e.g. images.
Also, the structure of a GN block is **highly** configurable. In Fig. 2, notice how the arrows can be rearranged, added or deleted depending on the task at hand.

Like the E-P-D model we will be using, GN blocks may be composed into **multi-block** architectures — allowing for more *sophisticated* DL models which can capture very complex nonlinearities.

### In the

next post…

Next up: it’s time to *actually* use these models.

In the next post, we will be diving deeper into how the E-P-D model can be trained and see how the Graph Network Simulator based on it fares.

For this, we will benchmark the performance of the Graph Network Simulator when trained on SPH simulation data obtained in a classic scenario — that of a *dam break*.

### 🌊🌊🌊 **Until then, flow away!** 🌊🌊🌊

## References

& Remarks

[1] F. Scarselli, M. Gori, A. C. Tsoi, M. Hagenbuchner, and G. Monfardini, *“The graph neural network model,”* IEEE Transactions on Neural Networks, vol. 20, no. 1, pp. 61–80, 2008.

[2] P. W. Battaglia, J. B. Hamrick, V. Bapst, A. Sanchez-Gonzalez, V. Zambaldi, M. Malinowski,
A. Tacchetti, D. Raposo, A. Santoro, R. Faulkner et al., *“Relational inductive biases, deep learning,
and graph networks,”* arXiv preprint arXiv:1806.01261, 2018.

[3] J. Bender and D. Koschier, *“Divergence-free smoothed particle hydrodynamics”* in Proceedings of
the 14th ACM SIGGRAPH/Eurographics symposium on computer animation, 2015, pp. 147–155.

[4] A. Sanchez-Gonzalez, J. Godwin, T. Pfaff, R. Ying, J. Leskovec, and P. Battaglia, “Learning to simulate complex physics with graph networks,” in International Conference on Machine Learning. PMLR, 2020, pp. 8459–8468.

[5] In addition to the message-passing blocks, *residual connections* [6] are added to node and edge features between each message-passing block in the processor. These connections perform element-wise addition from input to output.

[6] K. He, X. Zhang, S. Ren, and J. Sun, *“Deep residual learning for image recognition”* in Proceedings
of the IEEE conference on computer vision and pattern recognition, 2016, pp. 770–778.
