NonlinearSolve.jl: Efficient rootfinding and solving of algebraic equations in Julia
12-01, 11:35–12:05 (Europe/Amsterdam), Auditorium

Many problems can be reduced to solving f(x)=0, perhaps more than you think! Solving a stiff differential equation? Finding out where the ball hits the ground? Solving an inverse problem to find the parameters that fit a model? In this talk we'll showcase how SciML's NonlinearSolve.jl is a general system for solving nonlinear equations and demonstrate its ability to handle these kinds of problems with high stability and performance.


Solving nonlinear systems of equations can be surprisingly complex. In this talk we'll go into detail on the different aspects of NonlinearSolve.jl for efficiently and robustly solving such equations. We'll start by detailing the specialized methods for 1-dimensional interval nonlinear problems, known in the language of the package as an IntervalNonlinearProblem. While the most common method for solving a bracketed problem f(x)=0 on a fixed interval x in [x_min, x_max] is the bisection method, we will discuss how many more advanced methods exist, such as Falsi and ITP, which can accelerate convergence while giving the same numerical guarantees. We will show how, for these types of problems, NonlinearSolve.jl is around 28x faster than MATLAB 2022a's fzero and around 8x faster than Roots.jl.
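As a minimal sketch of the interval interface (assuming NonlinearSolve.jl is installed; the function `f` and the parameter value are illustrative choices, not from the talk):

```julia
# Find the positive root of f(x) = x^2 - p with p = 2 on the bracket [1, 2],
# using the ITP bracketing method. Bisection() or Falsi() could be swapped in.
using NonlinearSolve

f(u, p) = u^2 - p                                   # root at u = sqrt(2)
prob = IntervalNonlinearProblem(f, (1.0, 2.0), 2.0) # (f, bracket, parameters)
sol = solve(prob, ITP())

sol.u  # ≈ sqrt(2)
```

Because every bracketing method here maintains an interval guaranteed to contain the root, swapping ITP for Bisection changes the speed of convergence but not the correctness guarantee.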

Next we will discuss the standard NonlinearProblem, i.e. f(x)=0 where an initial guess x_0 is known and no bracket is given, usually because the system is higher dimensional (i.e. x is an array). Here NonlinearSolve.jl has a wide variety of algorithms spanning the smallest to the largest systems. For the smallest cases, NonlinearSolve.jl provides the SimpleNonlinearSolve.jl solvers, a set of low-dependency solvers designed to be very efficient for small systems. Being fully non-allocating, these methods can specialize on the system size via static arrays and eliminate essentially all overhead. Toward the other end, NonlinearSolve.jl's native NewtonRaphson and TrustRegion methods are designed for larger systems, supporting features like sparse automatic differentiation and integration with LinearSolve.jl for fully customizable preconditioned Krylov linear solvers. NonlinearSolve.jl also has interfaces to other Julia packages, from Sundials.jl (KINSOL) and MINPACK.jl to NLsolve.jl, giving users the most complete set of methods for benchmarking and a full set of options.
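A sketch of the common problem-definition interface across this solver range (assuming NonlinearSolve.jl and StaticArrays.jl are installed; the 2-dimensional system is an illustrative example, not from the talk):

```julia
# A small bracket-free system solved from an initial guess. The same
# NonlinearProblem can be handed to a low-overhead small-system solver or to
# a robust large-system method; only the algorithm argument changes.
using NonlinearSolve, StaticArrays

f(u, p) = SA[u[1]^2 - p, u[1] + u[2]]    # roots: u1 = ±sqrt(p), u2 = -u1
u0 = SA[1.0, 1.0]                        # static array: non-allocating fast path
prob = NonlinearProblem(f, u0, 2.0)

sol_small = solve(prob, SimpleNewtonRaphson())  # SimpleNonlinearSolve.jl solver
sol_large = solve(prob, TrustRegion())          # native robust method
```

For genuinely large sparse systems, the same `solve` call would additionally take Jacobian sparsity and LinearSolve.jl linear-solver options; those are omitted here to keep the sketch minimal.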

The last nonlinear problem type is the SteadyStateProblem, defined as finding a steady state of a dynamical system u' = f(u). While this can be treated the same as solving 0 = f(u), we will discuss how the numerical properties of such systems can differ, and how NonlinearSolve.jl provides a set of sophisticated methods for choosing between dynamical and rootfinding approaches to such equations.
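The two approaches can be sketched side by side (assuming NonlinearSolve.jl, SteadyStateDiffEq.jl, and OrdinaryDiffEq.jl are installed; the scalar system u' = p - u is an illustrative example):

```julia
# The steady state of u' = p - u, reached either by rootfinding on 0 = f(u)
# or by evolving the dynamical system forward until it settles.
using NonlinearSolve, SteadyStateDiffEq, OrdinaryDiffEq

f(u, p, t) = p .- u                        # steady state at u = p
prob = SteadyStateProblem(f, [0.0], 2.0)

sol_root = solve(prob, SSRootfind())        # treat as the rootfind 0 = f(u)
sol_dyn  = solve(prob, DynamicSS(Tsit5()))  # integrate until u' ≈ 0
```

Rootfinding is typically faster when a good initial guess exists, while the dynamical approach inherits the stability of forward time integration; the trade-off between them is one of the numerical properties the talk examines.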

Finally we will discuss the SciML integration. We will explain why direct automatic differentiation through a nonlinear solver's iterations is inefficient, and how NonlinearSolve.jl's special AD overloads make both forward- and reverse-mode AD more efficient than the naïve approach. Demonstrations of NonlinearSolve.jl in an AD context, in particular with DeepEquilibriumNetworks.jl, will illustrate its performance on large-scale inverse problems and with neural network integration. If time allows, usage with GPUs will also be demonstrated.
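A small sketch of what differentiating through a solve looks like from the user's side (assuming NonlinearSolve.jl and ForwardDiff.jl are installed; the scalar problem is illustrative):

```julia
# Differentiate the root x(p) of x^2 - p = 0 with respect to p. Rather than
# differentiating every Newton iteration, NonlinearSolve.jl's AD overloads
# apply the implicit function theorem at the converged solution.
using NonlinearSolve, ForwardDiff

root(p) = solve(NonlinearProblem((u, q) -> u^2 - q, 1.0, p), NewtonRaphson()).u

ForwardDiff.derivative(root, 2.0)  # ≈ 1 / (2 * sqrt(2)), i.e. d(sqrt(p))/dp
```

For x(p) = sqrt(p), the implicit function theorem gives dx/dp = 1/(2 sqrt(p)), so the user-facing code stays a plain function call while the sensitivity is computed from a single linear solve at the root.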

Together, these pieces will leave the audience with an understanding of why NonlinearSolve.jl is becoming the standard tool for solving nonlinear systems in the Julia programming language.

Dr. Rackauckas is a Research Affiliate and Co-PI of the Julia Lab at the Massachusetts Institute of Technology, VP of Modeling and Simulation at JuliaHub and Creator / Lead Developer of JuliaSim. He's also the Director of Scientific Research at Pumas-AI and Creator / Lead Developer of Pumas, and Lead Developer of the SciML Open Source Software Organization.

Dr. Rackauckas's research and software focus on Scientific Machine Learning (SciML): the integration of domain models with artificial intelligence techniques like machine learning. By combining structured scientific (differential equation) models with the unstructured data-driven models of machine learning, simulators can be accelerated and science can better approximate the true systems, all while enjoying the robustness and explainability of mechanistic dynamical models.
