
Physics-Informed Neural Networks: When Data Meets Physical Laws

A comprehensive introduction to PINNs — neural networks that learn to satisfy physical laws. We cover the core theory, implementation from scratch, and real-world applications in fluid dynamics.

Tags: PINNs, Scientific ML, PDEs, Physics, Deep Learning, Tutorial

The Problem with Pure Data-Driven Models

Standard neural networks are extraordinary interpolators. Given enough data, they can approximate virtually any function. But in scientific computing, we rarely have enough data — and when we do, it may be noisy, sparse, or biased.

More fundamentally: physical systems obey laws. The Navier-Stokes equations govern fluid flow whether we measure it or not. Ignoring these constraints forces a neural network to re-discover them empirically — wastefully, and often imperfectly.

Physics-Informed Neural Networks (PINNs), introduced by Raissi, Perdikaris, and Karniadakis in 2019, solve this by embedding physical laws directly into the training objective.

The Core Idea

A PINN approximates the solution to a PDE by training a neural network to satisfy:

  1. Boundary conditions — the solution's values on the domain boundary
  2. Initial conditions — the solution's state at time zero
  3. The governing PDE — the physical law (computed via automatic differentiation)

The total loss combines these three constraints with an optional data-fitting term for direct measurements, each weighted separately:

L_total = λ_data · L_data + λ_bc · L_bc + λ_ic · L_ic + λ_pde · L_pde

The PDE residual term is the key innovation. For a general PDE N[u] = f, the residual loss is the mean squared residual evaluated at randomly sampled collocation points x_r, t_r in the interior of the domain — checking that the network actually satisfies the governing equation.
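Concretely, with N_r collocation points the residual loss is

L_pde = (1/N_r) · Σ_{i=1}^{N_r} |N[u_θ](x_r^i, t_r^i) − f(x_r^i, t_r^i)|²

where u_θ is the network and every derivative inside N[u_θ] comes from automatic differentiation rather than finite differences.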

Implementation: Burgers' Equation

Let's implement a PINN for Burgers' equation — a canonical nonlinear PDE:

∂u/∂t + u · ∂u/∂x = ν · ∂²u/∂x²

where ν is the viscosity coefficient.

import torch
import torch.nn as nn

class PINN(nn.Module):
    def __init__(self, hidden_dim=64, num_layers=4):
        super().__init__()
        layers = [nn.Linear(2, hidden_dim), nn.Tanh()]
        for _ in range(num_layers - 1):
            layers += [nn.Linear(hidden_dim, hidden_dim), nn.Tanh()]
        layers.append(nn.Linear(hidden_dim, 1))
        self.net = nn.Sequential(*layers)

    def forward(self, x, t):
        # Concatenate spatial and temporal inputs
        inputs = torch.cat([x, t], dim=1)
        return self.net(inputs)


def burgers_residual(model, x, t, nu=0.01):
    """Compute the Burgers' residual u_t + u*u_x - nu*u_xx via autograd.

    x and t should be (N, 1) leaf tensors; requires_grad is switched on here
    so the solution can be differentiated with respect to its inputs.
    """
    x.requires_grad_(True)
    t.requires_grad_(True)

    u = model(x, t)

    # First-order derivatives
    u_t = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    u_x = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]

    # Second-order spatial derivative
    u_xx = torch.autograd.grad(u_x, x, torch.ones_like(u_x), create_graph=True)[0]

    # Burgers' residual: u_t + u*u_x - nu*u_xx = 0
    residual = u_t + u * u_x - nu * u_xx
    return residual


def train_pinn(model, x_ic, u_ic, x_bc, u_bc, x_r, t_r, epochs=10000):
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    for epoch in range(epochs):
        optimizer.zero_grad()

        # Initial condition loss
        u_pred_ic = model(x_ic, torch.zeros_like(x_ic))
        loss_ic = torch.mean((u_pred_ic - u_ic) ** 2)

        # Boundary condition loss: for time-independent Dirichlet BCs the
        # constraint holds at every t, so we sample random times (assumes t in [0, 1])
        u_pred_bc = model(x_bc, torch.rand_like(x_bc))
        loss_bc = torch.mean((u_pred_bc - u_bc) ** 2)

        # PDE residual loss
        residual = burgers_residual(model, x_r, t_r)
        loss_pde = torch.mean(residual ** 2)

        # Weighted total loss (no data term: this forward problem is fully
        # specified by its IC/BC; the 0.1 on the residual is a hand-tuned weight)
        loss = loss_ic + loss_bc + 0.1 * loss_pde

        loss.backward()
        optimizer.step()

        if epoch % 1000 == 0:
            print(f"Epoch {epoch}: Loss = {loss.item():.6f}")

    return model
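To make this concrete, here is a minimal setup for the standard Burgers benchmark: initial condition u(x, 0) = −sin(πx) on x ∈ [−1, 1] with u(±1, t) = 0. The point counts and domain bounds are illustrative choices for this sketch, not values from the original experiments:

# Illustrative training setup (assumed sizes and domain)
torch.manual_seed(0)

# Initial condition: u(x, 0) = -sin(pi * x) on x in [-1, 1]
x_ic = torch.linspace(-1, 1, 200).reshape(-1, 1)
u_ic = -torch.sin(torch.pi * x_ic)

# Boundary condition: u(-1, t) = u(1, t) = 0
x_bc = torch.tensor([[-1.0], [1.0]]).repeat(100, 1)
u_bc = torch.zeros_like(x_bc)

# Collocation points sampled uniformly over the interior
x_r = torch.rand(2000, 1) * 2 - 1   # x in [-1, 1]
t_r = torch.rand(2000, 1)           # t in [0, 1]

model = PINN()
model = train_pinn(model, x_ic, u_ic, x_bc, u_bc, x_r, t_r)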

Adaptive Collocation Sampling

A critical but often overlooked aspect: where you sample residual points matters enormously. Uniform random sampling wastes compute in smooth regions and undersamples sharp features.

Our adaptive approach (from our JCP 2024 paper) dynamically refines sampling in high-residual regions:

def adaptive_resample(model, x_candidates, t_candidates, k=0.5):
    """Resample collocation points based on PDE residual magnitude."""
    # The residual needs autograd for u_t, u_x, u_xx, so it cannot be
    # computed under torch.no_grad(); we detach afterwards instead
    residuals = torch.abs(burgers_residual(model, x_candidates, t_candidates)).detach()

    # Importance sampling — higher residual → higher probability
    # (small epsilon keeps the distribution valid if all residuals vanish)
    probs = residuals.squeeze() ** k + 1e-8
    probs = probs / probs.sum()

    indices = torch.multinomial(probs, num_samples=len(x_candidates), replacement=True)
    # Detach so the returned points are fresh leaf tensors for later autograd
    return x_candidates[indices].detach(), t_candidates[indices].detach()

This simple change cut our training epochs by roughly 3× on turbulent flow problems.
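One way to wire resampling into training is to periodically draw a fresh candidate pool and keep the high-residual points. The following variant of train_pinn is a sketch under assumed hyperparameters; the resampling interval and pool size are illustrative choices, not values from the paper:

def train_pinn_adaptive(model, x_ic, u_ic, x_bc, u_bc, x_r, t_r,
                        epochs=10000, resample_every=500, pool_size=5000):
    """train_pinn variant that periodically refocuses collocation points.

    resample_every and pool_size are illustrative defaults, not tuned values.
    """
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    for epoch in range(epochs):
        if epoch > 0 and epoch % resample_every == 0:
            # Fresh uniform candidates over x in [-1, 1], t in [0, 1]
            x_cand = torch.rand(pool_size, 1) * 2 - 1
            t_cand = torch.rand(pool_size, 1)
            x_r, t_r = adaptive_resample(model, x_cand, t_cand)

        optimizer.zero_grad()
        loss_ic = torch.mean((model(x_ic, torch.zeros_like(x_ic)) - u_ic) ** 2)
        loss_bc = torch.mean((model(x_bc, torch.rand_like(x_bc)) - u_bc) ** 2)
        loss_pde = torch.mean(burgers_residual(model, x_r, t_r) ** 2)

        loss = loss_ic + loss_bc + 0.1 * loss_pde
        loss.backward()
        optimizer.step()

    return model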

When to Use PINNs

PINNs are not always the right tool. Use them when:

  • Data is scarce but the governing equations are known
  • Inverse problems — inferring PDE parameters from observations (see the sketch after this list)
  • Irregular domains — where mesh generation is costly
  • Rapid repeated evaluation — once trained, inference is nearly instant
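For the inverse-problem case, the only structural change is making the unknown PDE parameter learnable and adding a data-fitting term. A minimal sketch for recovering ν in Burgers' equation, assuming observation tensors x_obs, t_obs, u_obs and collocation points x_r, t_r (the names, initial guess, and weights are illustrative):

# Inverse problem sketch: jointly fit the solution and the viscosity nu.
# x_obs, t_obs, u_obs are assumed (N, 1) measurement tensors (illustrative).
log_nu = torch.nn.Parameter(torch.log(torch.tensor(0.1)))  # initial guess nu = 0.1

model = PINN()
optimizer = torch.optim.Adam(list(model.parameters()) + [log_nu], lr=1e-3)

for epoch in range(10000):
    optimizer.zero_grad()
    nu = torch.exp(log_nu)  # parameterize log(nu) to keep nu positive

    # Data loss: match the network to the observations
    loss_data = torch.mean((model(x_obs, t_obs) - u_obs) ** 2)

    # PDE residual with the current viscosity estimate
    loss_pde = torch.mean(burgers_residual(model, x_r, t_r, nu=nu) ** 2)

    loss = loss_data + 0.1 * loss_pde
    loss.backward()
    optimizer.step()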

Avoid PINNs when:

  • You have abundant, clean data and no strong physics prior
  • The PDE is stiff or has sharp discontinuities (specialized methods exist)
  • Classical numerical solvers are already fast enough

Conclusion

PINNs represent a paradigm shift: physical laws are no longer just a check applied after the fact; they shape the training objective itself. The result is models that generalize better with less data, respect known physical laws, and can solve forward and inverse problems in a unified framework.

In my next post, I'll show how we extended PINNs to turbulent flow regimes using transformer architectures and attention-guided collocation — achieving accuracy competitive with full CFD simulations at 100× the speed.


Working on a physics simulation challenge? I consult on Physics-ML implementations — let's talk.