Pricing Complexity: Tensor Networks Tackle Multi-Asset Options

Author: Denis Avetisyan


A novel application of quantized tensor trains offers a scalable solution for accurately pricing options on multiple underlying assets, a longstanding challenge in computational finance.

This paper demonstrates that quantized tensor trains enable efficient, full-grid solutions to the Black-Scholes PDE for multi-asset options, overcoming the curse of dimensionality.

Pricing multi-asset options presents a long-standing computational challenge due to the exponential growth of complexity with each added underlying asset. This paper, ‘Full grid solution for multi-asset options pricing with tensor networks’, introduces a novel approach leveraging quantized tensor trains (QTT) to overcome this dimensionality curse. By representing the Black-Scholes PDE in a compressed, tractable form, we demonstrate the ability to compute full-grid prices and Greeks for options on baskets and other complex payoffs in dimensions previously inaccessible to traditional solvers. Could this method pave the way for real-time, high-dimensional risk management and more accurate valuation of complex derivatives?


The Inevitable Imperfection of Price

The precise valuation of financial derivatives, most notably options, forms a cornerstone of modern finance, directly impacting both risk mitigation and informed investment decisions. These instruments, contracts whose value is derived from an underlying asset, are used extensively to hedge against potential losses or to speculate on future price movements. An accurate price reflects the fair exchange of risk between parties and prevents misallocation of capital. Consequently, mispricing can lead to significant financial instability, as evidenced by past market events. Sophisticated risk management strategies rely heavily on derivative pricing models to quantify exposure and establish appropriate hedging positions, while investors utilize these valuations to identify potentially profitable opportunities and ensure returns align with the level of risk undertaken.

The Black-Scholes partial differential equation (PDE) revolutionized financial modeling by offering a theoretical framework for determining the fair price of derivative contracts. However, its elegant analytical solution, a closed-form formula for option pricing, rests on simplifying assumptions: it holds only for European-style options, exercisable solely at maturity, and requires constant volatility, constant interest rates, and no dividends. When these conditions are relaxed, as is invariably the case for more complex instruments such as American options, options on dividend-paying assets, or exotic derivatives, the analytical tractability of the Black-Scholes PDE vanishes. Practitioners and researchers must then turn to numerical methods to approximate solutions, effectively transforming the PDE into a computational problem. These methods, while powerful, introduce their own complexities and computational costs, highlighting the limits of the original Black-Scholes framework when confronted with the realities of modern financial markets.
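
For orientation, the closed-form result under those idealized assumptions fits in a few lines of code. The sketch below (plain Python, with illustrative parameter values) prices a European call and serves as the benchmark that the numerical schemes discussed later must reproduce.

```python
from math import log, sqrt, exp
from statistics import NormalDist

def bs_call_price(S, K, T, r, sigma):
    """Closed-form Black-Scholes price of a European call.

    Assumes constant volatility and interest rate, no dividends,
    and exercise only at maturity T (in years).
    """
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    N = NormalDist().cdf
    return S * N(d1) - K * exp(-r * T) * N(d2)

# Illustrative parameters: at-the-money call, 1 year, 20% vol, 5% rate.
print(bs_call_price(S=100, K=100, T=1.0, r=0.05, sigma=0.2))  # ~10.45
```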

Valuation of financial derivatives grows markedly more challenging as complexity increases, particularly when options depend on the interplay of multiple underlying assets. Analytical solutions such as the Black-Scholes formula work well for a single asset, but grid-based solvers quickly become computationally impractical beyond roughly three correlated assets. Consequently, practitioners rely on robust numerical techniques, such as Monte Carlo simulation, finite difference methods, and lattice-based models, to approximate option prices in higher dimensions. These methods discretize time and/or the asset space, transforming the continuous pricing problem into a solvable, albeit computationally intensive, one. The accuracy of these numerical valuations is paramount, requiring careful attention to discretization schemes, convergence properties, and the efficient handling of high-dimensional integrals to mitigate the ‘curse of dimensionality’ inherent in multi-asset derivatives.
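
The scale of the problem is easy to make concrete: a full grid with a fixed resolution per asset grows exponentially with the number of assets. A few lines suffice to show it (the per-asset resolution of 100 points is an assumed, arbitrary choice):

```python
# How a naive full price grid grows with the number of underlying assets.
n = 100  # grid points per asset dimension (an assumed resolution)
for d in range(1, 7):
    print(f"{d} asset(s): {n**d:>18,} grid points")
# Six assets already require 10^12 values, far beyond what a dense solver
# can store -- precisely the regime that compressed (QTT) formats target.
```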

The Illusion of Simultaneity

Traditional time-stepping algorithms, commonly employing the finite difference method for solving partial differential equations, operate by iteratively calculating the solution at discrete time intervals. This sequential process begins with an initial condition and progresses forward in time, with each time step dependent on the results of the previous one. While conceptually straightforward, this approach can become computationally expensive, particularly for problems requiring a large number of time steps or high accuracy. The computational cost scales approximately linearly with the number of time steps needed to reach the desired maturity date of the financial instrument being priced, and the necessary grid resolution in the underlying asset’s price dimension. This limitation restricts the practical application of these methods to relatively simple problems or those with coarse time discretizations.
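
To make the sequential nature concrete, here is a minimal explicit finite-difference sketch for a single-asset European call; the grid resolution, boundary treatment, and parameters are illustrative choices, not taken from the paper. Note how each time level is computed strictly from the previous one.

```python
import numpy as np

# Explicit time-stepping for the 1D Black-Scholes PDE (European call).
S_max, K, T, r, sigma = 300.0, 100.0, 1.0, 0.05, 0.2
n_S, n_t = 300, 20000                       # resolutions chosen for stability
S = np.linspace(0.0, S_max, n_S + 1)
dS, dt = S[1] - S[0], T / n_t

V = np.maximum(S - K, 0.0)                  # terminal payoff at maturity
for m in range(n_t):                        # march backwards from maturity
    dV  = (V[2:] - V[:-2]) / (2 * dS)                # dV/dS
    d2V = (V[2:] - 2 * V[1:-1] + V[:-2]) / dS**2     # d2V/dS2
    V[1:-1] += dt * (0.5 * sigma**2 * S[1:-1]**2 * d2V
                     + r * S[1:-1] * dV - r * V[1:-1])
    tau = (m + 1) * dt                      # time to maturity already covered
    V[0], V[-1] = 0.0, S_max - K * np.exp(-r * tau)  # boundary values

print(V[np.searchsorted(S, 100.0)])         # close to the ~10.45 closed form
```

Each pass through the loop is one time step, and for an explicit scheme like this one the stability constraint ties the time step to the spatial resolution, compounding the cost growth described above.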

The space-time formulation fundamentally differs from traditional time-stepping methods by conceptualizing time as a spatial dimension. Instead of sequentially calculating the solution at each time step, this approach allows for the simultaneous solution of the pricing problem across the entire time domain. This is achieved by discretizing both the spatial dimensions of the underlying asset(s) and the temporal dimension, creating a multi-dimensional grid representing the entire time-space domain. The resulting system of equations can then be solved using techniques applicable to multi-dimensional problems, offering the potential for significant computational efficiency compared to iterative, sequential time-stepping algorithms.
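
As a toy illustration of "solving all time steps at once", the sketch below assembles a single sparse linear system over the entire space-time grid for a one-asset call, using implicit Euler blocks coupled by a shift matrix. It is only meant to expose the structure; the paper's contribution lies in representing and solving such systems in compressed QTT form rather than as explicit sparse matrices. Grid sizes, boundary handling, and parameters are assumptions made for this example.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

K, T, r, sigma, S_max = 100.0, 1.0, 0.05, 0.2, 300.0
n_S, n_t = 149, 200                                 # interior nodes, time steps
S = np.linspace(0.0, S_max, n_S + 2)[1:-1]          # interior spatial grid
dS, dt = S_max / (n_S + 1), T / n_t

# Spatial Black-Scholes operator L (zero Dirichlet boundaries; acceptable here
# because S_max lies far from the strike).
main = -sigma**2 * S**2 / dS**2 - r
up   = 0.5 * sigma**2 * S[:-1]**2 / dS**2 + 0.5 * r * S[:-1] / dS
down = 0.5 * sigma**2 * S[1:]**2 / dS**2 - 0.5 * r * S[1:] / dS
L = sp.diags([down, main, up], offsets=[-1, 0, 1])

I_s, I_t = sp.identity(n_S), sp.identity(n_t)
N = sp.diags([np.ones(n_t - 1)], offsets=[-1])      # couples step k to step k-1

# One linear system whose unknowns are ALL time slices simultaneously.
A = sp.kron(I_t, I_s - dt * L) - sp.kron(N, I_s)
b = np.zeros(n_S * n_t)
b[:n_S] = np.maximum(S - K, 0.0)                    # payoff enters the first block
V = spsolve(A.tocsc(), b).reshape(n_t, n_S)

print(V[-1, np.argmin(np.abs(S - 100.0))])          # value today at S = 100,
                                                    # close to the ~10.45 closed form
```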

The space-time formulation necessitates constructing a full grid across both the temporal and spatial dimensions, effectively discretizing the entire time-space domain to facilitate simultaneous solution of the pricing problem. This contrasts with sequential time-stepping methods and allows for the valuation of options with complex dependencies, specifically those reliant on up to five underlying assets. Classical methods, such as binomial or trinomial trees, encounter significant computational constraints and reduced accuracy when dealing with options dependent on more than a few assets, limitations that the full-grid approach circumvents by directly addressing the problem in a higher-dimensional space.
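
The ingredient that makes such full grids representable at all is the quantized tensor-train format: a vector of 2^k grid values is reshaped into a k-way array of size 2 × 2 × … × 2 and factorized into a chain of small cores. The sketch below is a generic, textbook TT-SVD in plain NumPy, not the paper's solver, applied to a call-style payoff sampled on 1,024 points.

```python
import numpy as np

def tt_svd(tensor, tol=1e-10):
    """Generic TT-SVD: factorize a d-way array into tensor-train cores."""
    shape = tensor.shape
    cores, rank = [], 1
    mat = tensor.reshape(1, -1)
    for n in shape[:-1]:
        mat = mat.reshape(rank * n, -1)
        U, s, Vt = np.linalg.svd(mat, full_matrices=False)
        keep = max(1, int(np.sum(s > tol * s[0])))   # drop negligible singular values
        cores.append(U[:, :keep].reshape(rank, n, keep))
        mat = s[:keep, None] * Vt[:keep]             # carry the remainder forward
        rank = keep
    cores.append(mat.reshape(rank, shape[-1], 1))
    return cores

# Quantize: 2**10 = 1024 grid values viewed as a 10-way array of 2s, then compress.
x = np.linspace(0.0, 1.0, 2**10)
payoff = np.maximum(x - 0.5, 0.0)                    # call-style payoff on [0, 1]
cores = tt_svd(payoff.reshape([2] * 10))
print([c.shape[2] for c in cores])                   # bond dimensions stay small
print(sum(c.size for c in cores), "stored numbers instead of", payoff.size)
```

At this tiny size the saving is modest, but for payoff-like functions such as this one the core sizes stay essentially flat as the grid is refined, which is what keeps full space-time grids tractable in higher dimensions.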

The Ghost in the Exercise

Unlike European options, which can only be exercised at expiration, American options grant the holder the right to exercise the option at any point during the option’s life. This early exercise feature introduces significant computational challenges in valuation because determining the optimal exercise strategy requires evaluating the continuation value – the expected benefit of holding the option for a further period – at each possible exercise time. The possibility of early exercise fundamentally alters the boundary conditions used in option pricing models and necessitates iterative numerical techniques to solve for the option’s fair value, as a closed-form solution is generally unavailable.

The allowance of early exercise in American options introduces a significant computational challenge because the optimal exercise strategy is not known a priori. Unlike European options, where the holder simply waits until expiration, an American option holder must continually evaluate whether immediate exercise yields a greater payoff than continued holding. Consequently, valuation requires iterative methods such as dynamic programming, which systematically solves for the option value at each possible time step and underlying asset price, working backwards from expiration. These methods discretize the time and state space (underlying asset price) and solve for the option value at each grid point, considering both holding the option and exercising it immediately, thus determining the optimal strategy at each step. Alternative iterative methods, including binomial and trinomial trees, also address this challenge by recursively calculating the option value at each node, considering exercise at each step.
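
As a concrete and entirely standard instance of that backward dynamic program, the sketch below prices an American put on a Cox-Ross-Rubinstein binomial tree; parameters are illustrative and the method is the classical one, not the paper's QTT scheme.

```python
import numpy as np

def american_put_crr(S0, K, T, r, sigma, n_steps):
    """American put via backward induction on a CRR binomial tree."""
    dt = T / n_steps
    u = np.exp(sigma * np.sqrt(dt))            # up factor
    d = 1.0 / u                                # down factor
    p = (np.exp(r * dt) - d) / (u - d)         # risk-neutral up probability
    disc = np.exp(-r * dt)

    j = np.arange(n_steps + 1)                 # number of up moves
    V = np.maximum(K - S0 * u**j * d**(n_steps - j), 0.0)   # payoff at maturity

    for m in range(n_steps - 1, -1, -1):       # step back through the tree
        j = np.arange(m + 1)
        S = S0 * u**j * d**(m - j)
        continuation = disc * (p * V[1:m + 2] + (1 - p) * V[:m + 1])
        V = np.maximum(continuation, K - S)    # exercise early when it pays more
    return V[0]

print(american_put_crr(S0=100, K=100, T=1.0, r=0.05, sigma=0.2, n_steps=500))
# Exceeds the ~5.57 value of the otherwise identical European put;
# the difference is the early-exercise premium.
```

The same hold-or-exercise comparison must be encoded at every node of a PDE grid as well, which is why the American feature complicates the space-time formulation discussed next.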

Valuation of American options, complicated by the possibility of early exercise, is achieved by applying the Black-Scholes partial differential equation (PDE) within a space-time framework. This approach allows option prices to be modeled as they evolve over time and across different asset price levels. Numerical methods applied to this formulation demonstrate a mean squared error (MSE) below 10⁻³, corresponding to approximate pricing errors of 1-2%. The computational complexity of this valuation scheme is empirically determined to be O(cdχ²Aχ³b⁴), where c, d, χ, A, and b are parameters related to the discretization scheme, the volatility, and the number of time steps.

The pursuit of a ‘full-grid solution’ for multi-asset option pricing, as detailed within, echoes a fundamental truth about complex systems. One anticipates eventual limitations, even in meticulously constructed models. As Niels Bohr observed, “The opposite of trivial is not obvious.” This article’s exploration of quantized tensor trains isn’t about achieving a flawless, static valuation; it’s about embracing a methodology capable of adaptation. The inherent dimensionality of financial markets necessitates a framework that acknowledges, rather than resists, the inevitable emergence of imperfections. A system that never breaks is, indeed, a dead one; this work demonstrates a willingness to accept and navigate the inherent complexities, allowing for growth and refinement within the system itself.

The Looming Complexity

The demonstrated capacity to generate full-grid solutions for multi-asset options via quantized tensor trains offers, predictably, a new surface for old dependencies. The reduction in dimensionality, while significant, does not diminish the fundamental truth: each added asset, each layer of stochasticity, is a new vector of potential failure. This is not a solution, but a deferral. The system expands in its capacity to model complexity, yet simultaneously tightens the knot of its inevitable brittleness.

Future work will undoubtedly focus on scaling these techniques – more assets, more exotic payoffs, faster computation. But the core limitation remains architectural. The pursuit of ‘full-grid’ implies a faith in complete knowledge, a static representation of a dynamic reality. Every node in the tensor network is a point of potential divergence, every quantization a subtle distortion. The system doesn’t become more robust; it becomes a more elaborate mechanism for translating initial conditions into eventual collapse.

The true challenge lies not in conquering dimensionality, but in accepting its inherent unpredictability. Perhaps the next step isn’t finer grids, but methods for gracefully degrading performance, for embracing controlled approximation as a necessary condition of survival. The goal should not be to prevent failure, but to anticipate it, and to build systems capable of failing… elegantly.


Original article: https://arxiv.org/pdf/2601.00009.pdf

Contact the author: https://www.linkedin.com/in/avetisyan/
