Physics-informed neural networks (PINNs),[1] also referred to as Theory-Trained Neural Networks (TTNs),[2] are a type of universal function approximators that can embed the knowledge of any physical laws that govern a given data set into the learning process; such laws can typically be described by partial differential equations (PDEs).
Low data availability for some biological and engineering problems limits the robustness of conventional machine learning models used for these applications.
[1] The prior knowledge of general physical laws acts in the training of neural networks (NNs) as a regularization agent that limits the space of admissible solutions, increasing the generalizability of the function approximation.
The solution of the Navier–Stokes equations with appropriate initial and boundary conditions allows the quantification of flow dynamics in a precisely defined geometry.
In this setting, these governing equations must be solved while accounting for prior assumptions, linearization, and adequate time and space discretization.
In general, deep neural networks can approximate any high-dimensional function provided that sufficient training data are supplied.
[5] Potentially, an accurate solution of partial differential equations can be found without knowing the boundary conditions.
[6] Therefore, with some knowledge about the physical characteristics of the problem and some form of training data (even sparse and incomplete), a PINN may be used to find a high-fidelity solution.
PINNs can address a wide range of problems in computational science, and they represent a pioneering technology leading to the development of new classes of numerical solvers for PDEs.
[7] Notably, a trained PINN can be used to predict values on simulation grids of different resolutions without needing to be retrained.
Given noisy measurements of a generic dynamic system described by a nonlinear partial differential equation of the form $u_t + N[u; \lambda] = 0$, $x \in \Omega$, $t \in [0, T]$, where $N[\cdot; \lambda]$ is a nonlinear operator parameterized by $\lambda$ and $\Omega$ is the spatial domain, PINNs can be designed to solve two classes of problems: the data-driven solution of PDEs and the data-driven discovery of PDEs.[1] The data-driven solution of a PDE computes the hidden state $u(t, x)$ of the system given boundary data and/or measurements, with the model parameters $\lambda$ fixed.
By defining the residual $f := u_t + N[u]$ and approximating $u(t, x)$ with a deep neural network, the network can be trained by minimizing a loss with two parts: a data-fidelity term that fits the available measurements of $u$, and a term penalizing $f$ at collocation points. This second term requires the structured information represented by the partial differential equation to be satisfied during the training process.
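For illustration, the following is a minimal sketch of this composite loss in PyTorch, assuming the 1-D viscous Burgers' equation $u_t + u u_x - \nu u_{xx} = 0$ as the governing PDE; the network architecture, point counts, and placeholder measurements are illustrative assumptions, not a reference implementation.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Fully connected network approximating u(t, x); architecture is illustrative.
model = nn.Sequential(
    nn.Linear(2, 32), nn.Tanh(),
    nn.Linear(32, 32), nn.Tanh(),
    nn.Linear(32, 1),
)

nu = 0.01 / torch.pi  # assumed viscosity for the example Burgers' equation

def pde_residual(t, x):
    """Residual f := u_t + u*u_x - nu*u_xx via automatic differentiation."""
    t = t.requires_grad_(True)
    x = x.requires_grad_(True)
    u = model(torch.cat([t, x], dim=1))
    u_t = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    u_x = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x, x, torch.ones_like(u_x), create_graph=True)[0]
    return u_t + u * u_x - nu * u_xx

# Hypothetical sparse, noisy measurements (t_d, x_d, u_d).
t_d, x_d = torch.rand(50, 1), torch.rand(50, 1) * 2 - 1
u_d = torch.zeros(50, 1)  # placeholder values; real data would go here

# Collocation points where only the PDE residual is enforced.
t_c, x_c = torch.rand(1000, 1), torch.rand(1000, 1) * 2 - 1

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(5000):
    opt.zero_grad()
    loss_data = ((model(torch.cat([t_d, x_d], dim=1)) - u_d) ** 2).mean()
    loss_pde = (pde_residual(t_c, x_c) ** 2).mean()  # the "second term"
    loss = loss_data + loss_pde
    loss.backward()
    opt.step()
```

Because the trained network is mesh-free, it can afterwards be evaluated at arbitrary (t, x) coordinates, which is what allows prediction on simulation grids of different resolutions without retraining.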
[12][13][14][15] PINNs are unable to approximate PDEs that exhibit strong nonlinearity or sharp gradients, which commonly occur in practical fluid flow problems.
By deploying extremely lightweight PINNs, each capable of approximating strong nonlinearity locally, on discrete subdomains, such PDEs can be solved over much larger domains, which substantially increases accuracy and decreases the computational load as well.
Compared to a single PINN, the XPINN method has greater representation and parallelization capacity because it deploys multiple neural networks, one in each smaller subdomain.
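As a rough sketch of the decomposition idea (not the full XPINN algorithm), two independent subnetworks can be trained on adjacent subdomains and stitched together through an interface term; the toy equation $u'(x) = \cos(x)$ with $u(0) = 0$, the subdomain split, and all sizes below are assumptions for illustration.

```python
import torch
import torch.nn as nn

def make_net():
    # one small, independent subnetwork per subdomain (sizes are illustrative)
    return nn.Sequential(nn.Linear(1, 16), nn.Tanh(), nn.Linear(16, 1))

net_a, net_b = make_net(), make_net()   # subdomain A: [0, 0.5], subdomain B: [0.5, 1]

def residual(net, x):
    """Residual of the assumed toy ODE u'(x) = cos(x)."""
    x = x.requires_grad_(True)
    u = net(x)
    u_x = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    return u_x - torch.cos(x)

x_a = torch.rand(200, 1) * 0.5          # collocation points in subdomain A
x_b = torch.rand(200, 1) * 0.5 + 0.5    # collocation points in subdomain B
x_if = torch.full((1, 1), 0.5)          # shared interface point
x_bc = torch.zeros(1, 1)                # boundary point, assuming u(0) = 0

opt = torch.optim.Adam(list(net_a.parameters()) + list(net_b.parameters()), lr=1e-3)
for step in range(3000):
    opt.zero_grad()
    loss = (residual(net_a, x_a) ** 2).mean() + (residual(net_b, x_b) ** 2).mean()
    loss = loss + (net_a(x_bc) ** 2).mean()                   # boundary condition
    loss = loss + ((net_a(x_if) - net_b(x_if)) ** 2).mean()   # interface continuity
    loss.backward()
    opt.step()
```

Because each subnetwork only ever sees its own collocation points, the subdomain losses can be evaluated on separate devices, which is the source of the parallelization capacity mentioned above.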
Rigorous bounds on the errors resulting from the approximation of nonlinear PDEs (the incompressible Navier–Stokes equations) with PINNs and XPINNs have been proved.
[15] However, the DPINN approach dispenses with residual (flux) matching at the domain interfaces, as it hardly seems to improve the optimization.
Having competing objectives during training can lead to unbalanced gradients when gradient-based techniques are used, which often causes PINNs to struggle to accurately learn the underlying DE solution.
This drawback is overcome by using functional interpolation techniques such as the constrained expression of the Theory of Functional Connections (TFC), in the Deep-TFC[20] framework, which reduces the solution search space of constrained problems to the subspace of neural networks that analytically satisfy the constraints.
[22] X-TFC improves the accuracy and performance of regular PINNs, and its robustness and reliability have been demonstrated for stiff problems, optimal control, aerospace, and rarefied gas dynamics applications.
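A minimal sketch of such a constrained expression, assuming Dirichlet conditions $u(0) = a$ and $u(1) = b$ on the unit interval (the network size and boundary values below are illustrative):

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
a, b = 0.0, 1.0   # assumed Dirichlet boundary values u(0) = a, u(1) = b

def u(x):
    """TFC-style constrained expression: u(0) = a and u(1) = b hold exactly
    for any network weights, so no boundary-loss term is needed."""
    g = net(x)
    g0 = net(torch.zeros(1, 1))
    g1 = net(torch.ones(1, 1))
    return g + (1 - x) * (a - g0) + x * (b - g1)

# Sanity check: the constraints are met even before any training.
print(u(torch.zeros(1, 1)).item(), u(torch.ones(1, 1)).item())  # 0.0 1.0
```

By construction the boundary terms drop out of the loss, and only the differential-equation residual remains to be minimized, which avoids the competing objectives described above.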
Regular PINNs obtain the solution only for a single geometry, so the network must be retrained for every new geometry; this limitation imposes high computational costs, specifically for a comprehensive investigation of geometric parameters in industrial designs.
PointNet was originally designed by the research group of Leonidas J. Guibas for deep learning on 3D point clouds for object classification and segmentation.
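The construction below is a minimal sketch of the PointNet idea as it might be used to condition a field prediction on a variable geometry; the class names, the two-dimensional setting, and the (u, v, p) output head are assumptions for illustration, not the published physics-informed PointNet architecture.

```python
import torch
import torch.nn as nn

class PointNetEncoder(nn.Module):
    """Shared per-point MLP followed by symmetric max-pooling: the core
    PointNet construction yielding a permutation-invariant geometry feature."""
    def __init__(self, in_dim=2, feat_dim=64):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(in_dim, 64), nn.Tanh(),
                                 nn.Linear(64, feat_dim))

    def forward(self, pts):               # pts: (num_points, in_dim) cloud of one geometry
        feats = self.mlp(pts)             # per-point features (shared weights)
        return feats.max(dim=0).values    # global descriptor, invariant to point order

# Hypothetical field head: concatenating query coordinates with the geometry
# feature lets one trained model predict fields on unseen geometries.
encoder = PointNetEncoder()
head = nn.Sequential(nn.Linear(2 + 64, 64), nn.Tanh(), nn.Linear(64, 3))  # e.g. (u, v, p)

cloud = torch.rand(500, 2)                       # boundary points describing a geometry
query = torch.rand(10, 2)                        # locations where fields are requested
g = encoder(cloud).expand(query.shape[0], -1)    # broadcast geometry feature
fields = head(torch.cat([query, g], dim=1))      # predicted flow variables
```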
[26][28] Physics-informed neural networks (PINNs) have proven particularly effective in solving inverse problems within differential equations,[29] demonstrating their applicability across science, engineering, and economics.
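As a sketch of the inverse-problem setting, an unknown PDE coefficient can simply be declared trainable and optimized jointly with the network; the assumed heat equation $u_t = \nu\, u_{xx}$, the log-parameterization of $\nu$, and the placeholder data below are illustrative.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1))
log_nu = torch.zeros(1, requires_grad=True)  # unknown diffusivity, learned jointly
                                             # (log-parameterized to stay positive)

def residual(t, x):
    """Residual of the assumed heat equation u_t = nu * u_xx with unknown nu."""
    t, x = t.requires_grad_(True), x.requires_grad_(True)
    u = model(torch.cat([t, x], dim=1))
    u_t = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    u_x = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x, x, torch.ones_like(u_x), create_graph=True)[0]
    return u_t - torch.exp(log_nu) * u_xx

t_d, x_d, u_d = torch.rand(40, 1), torch.rand(40, 1), torch.zeros(40, 1)  # placeholder data
t_c, x_c = torch.rand(500, 1), torch.rand(500, 1)                         # collocation points

opt = torch.optim.Adam(list(model.parameters()) + [log_nu], lr=1e-3)
for step in range(3000):
    opt.zero_grad()
    loss = ((model(torch.cat([t_d, x_d], dim=1)) - u_d) ** 2).mean() \
         + (residual(t_c, x_c) ** 2).mean()
    loss.backward()
    opt.step()
```

Minimizing the combined loss recovers both the hidden state u(t, x) and the coefficient nu that best explains the measurements.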
By leveraging the powerful function approximation capabilities of deep neural networks, deep BSDE addresses the computational challenges faced by traditional numerical methods like finite difference methods or Monte Carlo simulations, which struggle with the curse of dimensionality.
Deep BSDE methods use neural networks to approximate solutions of high-dimensional partial differential equations (PDEs), effectively reducing the computational burden.
Additionally, integrating physics-informed neural networks (PINNs) into the deep BSDE framework enhances its capability by embedding the underlying physical laws into the network architecture, ensuring that solutions adhere to the governing stochastic differential equations and resulting in more accurate and reliable solutions.
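The following is a minimal sketch of the deep BSDE recursion under strong simplifying assumptions (zero driver, constant unit diffusion, a fixed starting point $x_0 = 0$, and an arbitrary terminal condition $g(x) = \lVert x \rVert^2$); the dimension, step counts, and network sizes are illustrative.

```python
import torch
import torch.nn as nn

d, T, N, batch = 10, 1.0, 20, 256         # dimension, horizon, time steps (assumed)
dt = T / N
sigma = 1.0                                # constant diffusion (an assumption)

g = lambda x: (x ** 2).sum(dim=1, keepdim=True)   # assumed terminal condition
f = lambda y: torch.zeros_like(y)                 # assumed zero driver for simplicity

y0 = torch.zeros(1, requires_grad=True)           # u(0, x0), learned directly
z_nets = nn.ModuleList(                           # one gradient network per time step
    nn.Sequential(nn.Linear(d, 32), nn.Tanh(), nn.Linear(32, d)) for _ in range(N))

opt = torch.optim.Adam([y0] + list(z_nets.parameters()), lr=1e-3)
for step in range(2000):
    opt.zero_grad()
    x = torch.zeros(batch, d)                     # forward SDE started at x0 = 0
    y = y0.expand(batch, 1)
    for n in range(N):
        dw = torch.randn(batch, d) * dt ** 0.5    # Brownian increments
        z = z_nets[n](x)                          # stands in for sigma^T grad u(t_n, x)
        y = y - f(y) * dt + (z * dw).sum(dim=1, keepdim=True)  # Euler step of the BSDE
        x = x + sigma * dw                        # Euler step of the forward SDE
    loss = ((y - g(x)) ** 2).mean()               # match the terminal condition
    loss.backward()
    opt.step()
```

After training, y0 approximates the PDE solution u(0, x0), while the per-step networks play the role of its gradient along the simulated paths.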
BINNs introduce two key adaptations to the typical PINN framework: (i) the mechanistic terms of the governing PDE are replaced by neural networks, and (ii) the loss function is modified to include a constraint term that incorporates domain knowledge and helps enforce biologically plausible behavior.
For (i), this adaptation has the advantage of relaxing the need to specify the governing differential equation a priori, either explicitly or by using a library of candidate terms.
Additionally, this approach circumvents the potential issue of misspecifying regularization terms in stricter theory-informed cases.
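As an illustration of adaptation (i), a mechanistic diffusivity term can be replaced by a small network; the assumed governing form $u_t = (D(u)\,u_x)_x$ and the Softplus positivity constraint below stand in for the kind of domain knowledge a BINN loss might enforce.

```python
import torch
import torch.nn as nn

u_net = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1))   # surrogate for u(t, x)
D_net = nn.Sequential(nn.Linear(1, 16), nn.Tanh(), nn.Linear(16, 1),
                      nn.Softplus())                                    # learned diffusivity D(u) >= 0

def residual(t, x):
    """Residual of the assumed form u_t = (D(u) u_x)_x, where the mechanistic
    diffusivity term is replaced by the neural network D_net."""
    t, x = t.requires_grad_(True), x.requires_grad_(True)
    u = u_net(torch.cat([t, x], dim=1))
    u_t = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    u_x = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    flux = D_net(u) * u_x
    flux_x = torch.autograd.grad(flux, x, torch.ones_like(flux), create_graph=True)[0]
    return u_t - flux_x

# The loss would combine a data-fit term on measurements of u with the mean
# squared residual, plus constraint terms penalizing biologically implausible
# values of D, as described above.
```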