Authors
Escamilla Ambrosio Ponciano Jorge
Title Algorithmic Trading Using Double Deep Q-Networks and Sentiment Analysis
Type Journal
Sub-type CONACYT
Description Information
Abstract In this work, we explore the application of deep reinforcement learning (DRL) to algorithmic trading. While algorithmic trading focuses on using computer algorithms to automate a predefined trading strategy, here we train a Double Deep Q-Network (DDQN) agent to learn its own optimal trading policy, with the goal of maximising returns whilst managing risk. We extended our approach by augmenting the Markov Decision Process (MDP) states with sentiment analysis of financial statements, through which the agent achieved up to a 70% increase in cumulative reward over the testing period and an increase in the Calmar ratio from 0.9 to 1.3. The experimental results also showed that the DDQN agent's trading strategy consistently outperformed the benchmark set by the buy-and-hold strategy. Additionally, we investigated the impact of the length of the window of past market data that the agent considers when deciding on the best trading action to take. The results of this study validate DRL's ability to find effective trading solutions and its importance in studying the behaviour of agents in markets. This work provides future researchers with a foundation for developing more advanced and adaptive DRL-based trading systems. © 2024 by the authors.
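As a point of reference for two quantities the abstract mentions, the sketch below (not the authors' code) illustrates a Double DQN target, in which the online network selects the next action and the target network evaluates it, and the Calmar ratio, computed as annualised return divided by maximum drawdown. The three-action space, 30-step window, sentiment score, discount factor, and placeholder networks are illustrative assumptions.

    # Hypothetical sketch only; state = window of past market features + sentiment score.
    import numpy as np

    def ddqn_target(reward, next_state, q_online, q_target, gamma=0.99):
        """Double DQN target: online network picks the action, target network evaluates it."""
        best_action = int(np.argmax(q_online(next_state)))
        return reward + gamma * q_target(next_state)[best_action]

    def calmar_ratio(daily_returns, periods_per_year=252):
        """Annualised return divided by the maximum drawdown of the equity curve."""
        equity = np.cumprod(1.0 + np.asarray(daily_returns))
        annual_return = equity[-1] ** (periods_per_year / len(daily_returns)) - 1.0
        running_max = np.maximum.accumulate(equity)
        max_drawdown = np.max((running_max - equity) / running_max)
        return annual_return / max_drawdown if max_drawdown > 0 else np.inf

    # Toy usage with random placeholder data (assumed shapes, not the paper's setup).
    rng = np.random.default_rng(0)
    state = np.concatenate([rng.normal(size=30), [0.4]])   # 30-step window + sentiment
    q_online = lambda s: rng.normal(size=3)                 # stand-in for the online network
    q_target = lambda s: rng.normal(size=3)                 # stand-in for the target network
    print(ddqn_target(0.01, state, q_online, q_target))
    print(calmar_ratio(rng.normal(0.0005, 0.01, size=252)))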
Notes DOI 10.3390/info15080473
Place Basel
Country Switzerland
No. of pages Article number 473
Vol. / Chap. v. 15 no. 8
Start 2024-08-01
End
ISBN/ISSN