Presentation Abstracts
A Transparent Alternative to Neural Networks with an Application to Predicting Volatility
Mark Kritzman, Windham Capital Management and David Turkington, State Street Associates
Many prediction problems in finance involve complicated relationships that lie beyond the grasp of linear regression analysis. Machine learning models, such as neural networks, address this challenge with their ability to capture highly complex dynamics, but they are notoriously opaque and difficult to understand. We show that an alternative model-free prediction method, called relevance-based prediction, can capture many of the same essential nonlinearities as complex machine learning models, but it also provides transparency into how each observation contributes to each prediction. We explore the fundamental connections between these two approaches and compare their practical efficacy for predicting market volatility.
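The abstract itself gives no formulas. As a minimal sketch of the idea, the code below implements a Mahalanobis-based relevance measure of the kind described in the authors' related work on relevance-based prediction: each observation's relevance to the query is its similarity to the query plus the informativeness of both the observation and the query, and the prediction shifts the average outcome by relevance-weighted outcome deviations. Function and variable names are ours, and the exact specification here is an illustrative assumption, not necessarily the talk's.

```python
import numpy as np

def relevance_prediction(X, y, x_t):
    """Predict the outcome for query x_t as the average observed outcome
    plus relevance-weighted deviations of each observed outcome.

    Relevance of observation i = similarity to the query (negative half
    squared Mahalanobis distance) + informativeness of the observation and
    of the query (half squared Mahalanobis distance from the sample mean).
    """
    X, y = np.asarray(X, float), np.asarray(y, float)
    n = len(y)
    x_bar = X.mean(axis=0)
    omega_inv = np.linalg.inv(np.cov(X, rowvar=False))  # inverse sample covariance

    def maha(a, b):
        d = a - b
        return d @ omega_inv @ d

    sim = -0.5 * np.array([maha(x_i, x_t) for x_i in X])
    info = 0.5 * np.array([maha(x_i, x_bar) for x_i in X]) + 0.5 * maha(x_t, x_bar)
    r = sim + info  # relevance of each observation to the query

    # Each observation contributes transparently through its relevance weight.
    return y.mean() + r @ (y - y.mean()) / (n - 1)
```

With all observations included, this weighting reproduces the linear regression prediction exactly; the nonlinearity comes from restricting the weighted sum to the most relevant observations, which this sketch omits for brevity.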
A Meta Reinforcement Learning Approach to Goals-Based Wealth Management
Harshad Khadilkar and Sukrit Mittal, Franklin Templeton
We develop a reinforcement learning approach (denoted MetaRL) that is pre-trained on thousands of goals-based wealth management (GBWM) problems, a process known as “curriculum training.” The meta model contains two policy functions, one for portfolio selection and one for goal-taking decisions, and handles a wide range of scenarios purely in inference mode, without requiring separate training and optimization for each new investor problem; this makes it fast to deploy. Even in inference mode, MetaRL delivers expected utility that is 98% of the optimum from exact individual solutions obtained via dynamic programming. Further, the MetaRL model can solve problems with larger state spaces, where dynamic programming becomes computationally infeasible. In future work, the meta model could also serve as an initial policy to be further optimized for individual problem instances.
The Virtue of Complexity
Bryan Kelly, Yale School of Management / AQR
Much of the extant literature predicts market returns with “simple” models that use only a few parameters. Contrary to conventional wisdom, we theoretically prove that simple models severely understate return predictability compared to “complex” models in which the number of parameters exceeds the number of observations. We empirically document the virtue of complexity in U.S. equity market return prediction. Our findings establish the rationale for modeling expected returns through machine learning.
We theoretically characterize the behavior of machine learning asset pricing models. We prove that expected out-of-sample model performance—in terms of SDF Sharpe ratio and test asset pricing errors—is improving in model parameterization (or “complexity”). Our empirical findings verify the theoretically predicted “virtue of complexity” in the cross-section of stock returns. Models with an extremely large number of factors (more than the number of training observations or base assets) outperform simpler alternatives by a large margin.
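As a hedged illustration of what a "complex" model means here, the sketch below fits P = 600 random Fourier features to N = 120 simulated observations with ridge shrinkage, using the dual formula so only an N-by-N system is solved. The data, parameter counts, and feature map are made-up assumptions for demonstration, not the paper's empirical design.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated setting: predict next-period returns from d signals with a
# nonlinear relation, using far more parameters than observations.
N, d, P = 120, 5, 600
S = rng.normal(size=(N, d))                                # predictor signals
y = 0.02 * np.tanh(S[:, 0]) + 0.04 * rng.normal(size=N)    # "returns"

# Random Fourier features lift the signals into P dimensions.
W = rng.normal(size=(P, d))
b = rng.uniform(0.0, 2.0 * np.pi, size=P)
X = np.sqrt(2.0 / P) * np.cos(S @ W.T + b)

def ridge_dual(X, y, z):
    """Ridge coefficients via the dual formula X'(XX' + zI)^{-1} y,
    which requires only an N x N solve -- cheap when P > N."""
    return X.T @ np.linalg.solve(X @ X.T + z * np.eye(len(y)), y)

# Shrinkage z trades off in-sample fit against coefficient size; as z -> 0
# the fit approaches the minimum-norm solution of the overparameterized model.
for z in (1e2, 1e-2, 1e-6):
    beta = ridge_dual(X, y, z)
    print(f"z={z:g}: in-sample MSE {np.mean((X @ beta - y) ** 2):.6f}")
```

The dual solve is algebraically identical to the primal ridge solution (X'X + zI)^{-1} X'y, so the overparameterized regime adds no numerical difficulty beyond a choice of shrinkage.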