Reinforcement Learning for Combining Search Methods in the Calibration of Economic ABMs

Chanda, Debmallya; Delli Gatti, Domenico
2023

Abstract

Calibrating agent-based models (ABMs) in economics and finance typically involves a derivative-free search in a very large parameter space. In this work, we benchmark a number of search methods in the calibration of a well-known macroeconomic ABM on real data, and further assess the performance of "mixed strategies" made by combining different methods. We find that methods based on random-forest surrogates are particularly efficient, and that combining search methods generally increases performance since the biases of any single method are mitigated. Building on these observations, we propose a reinforcement learning (RL) scheme to automatically select and combine search methods on-the-fly during a calibration run. The RL agent keeps exploiting a specific method only as long as it keeps performing well, but explores new strategies when that method reaches a performance plateau. The resulting RL search scheme outperforms any other method or method combination tested, and does not rely on any prior information or trial-and-error procedure.
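The exploit-until-plateau behaviour described in the abstract can be sketched as a bandit-style selector over search methods. The sketch below is purely illustrative and makes several assumptions not stated in the record: an epsilon-greedy selection rule, a toy quadratic loss in place of the ABM calibration objective, and two hypothetical search methods; none of these names or choices come from the paper.

```python
import random

random.seed(0)

# Hypothetical "search methods": each proposes a candidate parameter
# vector near the current best. These stand in for the paper's actual
# calibration search methods, which are not detailed in this record.
def random_search(best, scale=1.0):
    return [x + random.uniform(-scale, scale) for x in best]

def local_search(best, scale=0.1):
    return [x + random.uniform(-scale, scale) for x in best]

METHODS = {"random": random_search, "local": local_search}

def loss(theta):
    # Toy calibration objective: squared distance from an arbitrary
    # "true" parameter vector (a placeholder for the ABM-vs-data loss).
    target = [0.3, -1.2, 2.0]
    return sum((a - b) ** 2 for a, b in zip(theta, target))

def rl_calibrate(n_steps=300, epsilon=0.1, decay=0.8):
    best = [0.0, 0.0, 0.0]
    best_loss = loss(best)
    # Moving-average reward (recent improvement) per method.
    value = {name: 0.0 for name in METHODS}
    for _ in range(n_steps):
        # Epsilon-greedy: exploit the method with the highest recent
        # reward; occasionally explore a randomly chosen one.
        if random.random() < epsilon:
            name = random.choice(list(METHODS))
        else:
            name = max(value, key=value.get)
        candidate = METHODS[name](best)
        cand_loss = loss(candidate)
        reward = max(0.0, best_loss - cand_loss)  # improvement as reward
        # Exponential moving average: when a method plateaus its value
        # decays toward zero, so other methods get re-explored.
        value[name] = decay * value[name] + (1 - decay) * reward
        if cand_loss < best_loss:
            best, best_loss = candidate, cand_loss
    return best, best_loss

best, best_loss = rl_calibrate()
print(round(best_loss, 3))
```

The key design point the abstract highlights is the plateau mechanism: because a method is rewarded only for recent improvement, a previously successful method that stops improving loses its advantage automatically, without any hand-tuned switching schedule.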
2023
English
Proceedings of the 4th ACM International Conference on AI in Finance (ICAIF ’23)
4th ACM International Conference on AI in Finance, ICAIF 2023
New York
27 November 2023
29 November 2023
Association for Computing Machinery, Inc
Glielmo, A., Favorito, M., Chanda, D., Delli Gatti, D., "Reinforcement Learning for Combining Search Methods in the Calibration of Economic ABMs", in Proceedings of the 4th ACM International Conference on AI in Finance (ICAIF ’23), New York, 27-29 November 2023, Association for Computing Machinery, New York, NY, USA, 2023, pp. 305-313. DOI: 10.1145/3604237.3626889. Handle: https://hdl.handle.net/10807/281936
Files in this product:
There are no files associated with this product.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10807/281936