
AI in Empirical Finance: Can LLMs Tackle Complex Research Questions?
Dr. Benjamin Clapham
This project investigates the potential of Large Language Models (LLMs), such as ChatGPT, to address complex empirical research questions about financial markets. While LLMs have been evaluated on tasks with clearly defined solutions across various disciplines, their capacity to tackle complex, multi-step research questions remains underexplored. We evaluate the ability of LLMs to test six hypotheses concerning market efficiency, liquidity, and trading volume in the European derivatives market, comparing their outputs with those of 164 human research teams (Menkveld et al., 2024). By assessing LLM-generated code for hypothesis testing, verifying the accuracy of its output, and examining variation in the resulting statistical estimates, we find that LLMs can accurately address relatively straightforward research questions without expert input and produce results similar to those of human research teams. However, LLMs struggle with more complex research questions and with questions that require innovative solutions to overcome data limitations; in these cases, expert intervention is necessary. This study highlights both the potential and the limitations of LLMs in supporting researchers and non-experts in conducting complex analyses of financial markets.
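To illustrate the kind of comparison described in the abstract, the sketch below places a single LLM-generated estimate within a distribution of estimates reported by human research teams. It is a minimal, hypothetical example: the variable names (`team_estimates`, `llm_estimate`) and all numbers are placeholders, not the study's actual data or evaluation pipeline.

```python
import numpy as np

# Hypothetical point estimates (e.g., a hypothesis-test coefficient) reported by
# 164 human research teams -- illustrative values only, not the study's data.
rng = np.random.default_rng(42)
team_estimates = rng.normal(loc=-0.05, scale=0.02, size=164)

# Hypothetical estimate produced by running LLM-generated analysis code
# on the same derivatives-market sample.
llm_estimate = -0.048

# Locate the LLM estimate within the distribution of human-team estimates:
# its percentile rank and its distance (in standard deviations) from the median.
percentile = (team_estimates < llm_estimate).mean() * 100
z_score = (llm_estimate - np.median(team_estimates)) / team_estimates.std(ddof=1)

print(f"LLM estimate sits at the {percentile:.1f}th percentile of team estimates")
print(f"Distance from the median: {z_score:.2f} standard deviations")
```

Under this kind of comparison, an LLM estimate that falls well within the spread of human-team estimates would be indistinguishable from the ordinary variation across research teams documented by Menkveld et al. (2024).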