Fractional Brownian motion, heavy tails, and coalescence: a random walk approach
Author(s)
Marsh, Joshua
Abstract
Random walks are ubiquitous: they are of interest in their own right and as the driving randomness controlling other processes. Vanilla random walks with independent, finite-variance increments have Brownian motion as their scaling limit, but more exotic random walks can instead converge to fractional Brownian motion, whose increments are dependent. Nonetheless, such dependent-increment walks can be generated from ancillary random walks which do have independent increments, provided the ancillary increments follow certain heavy-tailed distributions with infinite variance.
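For reference, fractional Brownian motion $B_H$ with Hurst parameter $H \in (0,1)$ is the centered Gaussian process with covariance

$$\mathbb E\big[B_H(s)\,B_H(t)\big] = \tfrac{1}{2}\big(s^{2H} + t^{2H} - |t-s|^{2H}\big), \qquad s, t \ge 0,$$

so its increments over disjoint intervals are positively correlated for $H > 1/2$, negatively correlated for $H < 1/2$, and independent only at $H = 1/2$, where $B_{1/2}$ is standard Brownian motion.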
In this thesis, we extend work that generates fractional Brownian motion from one-dimensional random walks to higher dimensions, producing a multi-dimensional family of fractional Brownian motions. Key to the construction is the coalescence of transient random walks: the Green's function of a random walk describes how the coalescence probability decays as the walks' starting points move further apart. The self-similarity of random sums in the domain of attraction of a stable law, combined with a local limit theorem for quantitative bounds, gives another perspective on the large-scale coalescence behavior.
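As a standard point of reference (not specific to this thesis), for a transient random walk $(S_n)$ on $\mathbb Z^d$, $d \ge 3$, with centered, finite-variance steps, the Green's function decays polynomially:

$$G(0,x) = \sum_{n \ge 0} \mathbb P_0(S_n = x) \sim \frac{c_d}{|x|^{d-2}} \quad \text{as } |x| \to \infty.$$

Heuristically, the probability that two such independent walks ever coalesce decays at a comparable polynomial rate in the distance between their starting points, since their difference is itself a random walk that must hit the origin.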
In another chapter, we investigate a percolation-style model in which each vertex of $\mathbb Z^d$ is connected to its $k$ nearest neighbors under translation-invariant edge-weight distributions. When the edge weights are independent and identically distributed, coalescing branching random walks arise naturally in the analysis of infinite connected components.
Using these coalescing random walks, we show that for $k \geq d+1$ there is almost surely a unique infinite connected component, and that this component contains every vertex. Along the way, we resolve a question about the recurrence of coalescing branching random walks with branching factor 2 on the lattice $\mathbb Z^d$: almost surely, the origin is occupied infinitely often.
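For intuition, a minimal toy simulation of a coalescing branching random walk with branching factor 2 on $\mathbb Z^d$ might look as follows; the nearest-neighbor steps, the set-based coalescence rule, and the 200-step horizon are illustrative assumptions, not the thesis's exact model.

    import random

    def coalescing_brw_step(occupied, d):
        """One step of the dynamics: every occupied site spawns 2 particles,
        each particle moves to a uniformly random lattice neighbor, and
        particles landing on the same site coalesce into one (hence a set)."""
        new_sites = set()
        for site in occupied:
            for _ in range(2):  # branching factor 2
                axis = random.randrange(d)
                child = list(site)
                child[axis] += random.choice((-1, 1))
                new_sites.add(tuple(child))
        return new_sites

    # Start with one particle at the origin of Z^3 and count how often
    # the origin is reoccupied over a finite horizon.
    d, origin = 3, (0, 0, 0)
    occupied, hits = {origin}, 0
    for _ in range(200):
        occupied = coalescing_brw_step(occupied, d)
        hits += origin in occupied
    print(f"origin occupied in {hits} of 200 steps; cloud size {len(occupied)}")

In the recurrence statement above, "the origin is occupied infinitely often" corresponds to this counter growing without bound as the horizon increases.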
Finally, we step away from random walks and turn to physics-informed neural networks (PINNs). Neural networks are parametric functions with strong approximation capabilities, and a technique called automatic differentiation can compute arbitrary derivatives of a neural network. A PDE can be viewed as a constraint on the derivatives of a function; if we minimize the degree to which this constraint is violated, we expect the network to converge to a solution of the PDE.
For a PDE defined on a domain $\Omega$, the PINN approach uses automatic differentiation to evaluate the required derivatives of a neural network at points $x \in \Omega$ and minimizes the PDE residual at those points. However, the choice of evaluation points has a significant impact on the convergence speed of training and on the quality of the final trained network.
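As a minimal sketch of this residual-minimization idea, the following JAX code shows the core computation; the toy equation $u''(x) = -\sin x$, the network size, and all names here are illustrative assumptions rather than the thesis's setup, and a full PINN loss would also include boundary terms.

    import jax
    import jax.numpy as jnp

    def init_params(key, widths=(1, 32, 32, 1)):
        # Small fully connected network u_theta : R -> R.
        params = []
        for n_in, n_out in zip(widths[:-1], widths[1:]):
            key, sub = jax.random.split(key)
            params.append((jax.random.normal(sub, (n_in, n_out)) / jnp.sqrt(n_in),
                           jnp.zeros(n_out)))
        return params

    def u(params, x):
        h = jnp.atleast_1d(x)
        for W, b in params[:-1]:
            h = jnp.tanh(h @ W + b)
        W, b = params[-1]
        return (h @ W + b)[0]

    def residual(params, x):
        # Automatic differentiation gives u'' exactly -- no finite differences.
        u_xx = jax.grad(jax.grad(u, argnums=1), argnums=1)(params, x)
        return u_xx + jnp.sin(x)  # residual of u'' = -sin(x)

    def pinn_loss(params, xs):
        return jnp.mean(jax.vmap(lambda x: residual(params, x) ** 2)(xs))

    params = init_params(jax.random.PRNGKey(0))
    xs = jnp.linspace(0.0, jnp.pi, 64)  # collocation points in Omega = (0, pi)
    print(pinn_loss(params, xs))        # quantity a PINN optimizer would minimize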
We present an adversarial method for selecting these points. In contrast to many established methods, it requires little computational overhead. Numerically, we demonstrate its effectiveness in high dimensions, where classical numerical methods often struggle.
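Continuing the sketch above, one plausible low-overhead instantiation of adversarial point selection, offered purely as an illustration and not necessarily the method developed in the thesis, is a few steps of gradient ascent on the squared residual with respect to the collocation points themselves:

    def adversarial_points(params, xs, lr=0.05, steps=5, lo=0.0, hi=jnp.pi):
        # Gradient ASCENT on the squared residual pushes each collocation
        # point toward regions where the PDE is currently violated most.
        ascent = jax.vmap(jax.grad(lambda x: residual(params, x) ** 2))
        for _ in range(steps):
            xs = jnp.clip(xs + lr * ascent(xs), lo, hi)  # stay inside the domain
        return xs

    xs_hard = adversarial_points(params, xs)
    print(pinn_loss(params, xs), pinn_loss(params, xs_hard))  # second is typically larger

The overhead is a handful of extra gradient evaluations per refresh of the point set, which is cheap relative to a full training step.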
Date
2025-04-29
Resource Type
Text
Resource Subtype
Dissertation