Optimizing Trading Strategies with Reinforcement Learning

This capstone investigates whether deep-RL agents can reliably outperform conventional factor-based strategies on U.S. equities when realistic frictions (bid–ask spread, turnover, market impact) are included.
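Since the frictions drive the research question, it helps to be explicit about how they are charged in a backtest. Below is a minimal, hypothetical per-trade cost sketch (half the quoted spread plus a linear market-impact term scaled by participation in average daily volume); the function name and coefficients are illustrative and not taken from the project code.

```python
def transaction_cost(trade_notional: float,
                     spread_bps: float = 5.0,
                     adv: float = 1e7,
                     impact_coef: float = 0.1) -> float:
    """Hypothetical per-trade friction model (illustrative coefficients).

    Charges half the quoted bid-ask spread on the traded notional plus a
    linear market-impact term that grows with the trade's share of average
    daily dollar volume (ADV).
    """
    notional = abs(trade_notional)
    half_spread_cost = notional * (spread_bps / 1e4) / 2.0
    participation = notional / adv            # fraction of ADV traded
    impact_cost = impact_coef * participation * notional
    return half_spread_cost + impact_cost
```

Turnover then enters the evaluation simply as the sum of these per-rebalance costs deducted from strategy PnL.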

The workflow is divided into three layers:


| Layer | Goal | Key Artefacts |
| --- | --- | --- |
| Exploratory & Feature Engineering | Clean market micro-structure data; engineer lagged / rolling factors (feature sketch below) | Data/, DataPipelines/ notebooks & scripts |
| Classical Baseline | Benchmark with Random Forest, SVR, and ARIMA forecasts (baseline sketch below) | ProjectCode/baselines/ |
| Reinforcement Learning | Train DQN / PPO agents inside a custom Gym environment that emits states (engineered features) and rewards (risk-adjusted PnL) (environment sketch below) | ProjectCode/rl_env/, ProjectCode/train.py, ProjectCode/evaluate.py |
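A minimal sketch of the first layer, assuming a daily bar DataFrame with hypothetical `close` and `volume` columns; the actual notebooks in Data/ and DataPipelines/ may use different fields and horizons.

```python
import pandas as pd

def engineer_features(df: pd.DataFrame) -> pd.DataFrame:
    """Build lagged and rolling factors from a daily bar DataFrame.

    Assumes 'close' and 'volume' columns; lags and windows are illustrative.
    """
    out = df.copy()
    out["ret_1d"] = out["close"].pct_change()
    # Lagged returns as simple momentum-style factors
    for lag in (1, 5, 21):
        out[f"ret_lag_{lag}"] = out["ret_1d"].shift(lag)
    # Rolling moments capture recent trend, realised volatility, and liquidity
    out["mom_21d"] = out["close"].pct_change(21)
    out["vol_21d"] = out["ret_1d"].rolling(21).std()
    out["dollar_vol_21d"] = (out["close"] * out["volume"]).rolling(21).mean()
    return out.dropna()
```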
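For the classical baseline layer, a hedged sketch of a walk-forward Random Forest forecast; ProjectCode/baselines/ may structure this differently, and the SVR and ARIMA baselines follow the same split discipline.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import TimeSeriesSplit

def walk_forward_rf(X: np.ndarray, y: np.ndarray, n_splits: int = 5):
    """Walk-forward evaluation of a Random Forest next-period return forecast.

    TimeSeriesSplit keeps each test fold strictly after its training fold,
    avoiding look-ahead bias. Hyper-parameters are illustrative.
    """
    preds, truth = [], []
    for train_idx, test_idx in TimeSeriesSplit(n_splits=n_splits).split(X):
        model = RandomForestRegressor(n_estimators=300, max_depth=6, random_state=0)
        model.fit(X[train_idx], y[train_idx])
        preds.append(model.predict(X[test_idx]))
        truth.append(y[test_idx])
    preds, truth = np.concatenate(preds), np.concatenate(truth)
    ic = np.corrcoef(preds, truth)[0, 1]   # out-of-sample information coefficient
    return preds, ic
```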
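For the reinforcement-learning layer, a minimal single-asset sketch of the custom environment and training call, assuming the Gymnasium API and Stable-Baselines3 for PPO. The reward shown is a simplified cost-adjusted PnL; the project's risk-adjusted reward and the actual state and action definitions in ProjectCode/rl_env/ may differ.

```python
import gymnasium as gym
import numpy as np
from gymnasium import spaces
from stable_baselines3 import PPO

class TradingEnv(gym.Env):
    """Single-asset environment: state = engineered features,
    action = target position in {-1, 0, +1}, reward = cost-adjusted PnL."""

    def __init__(self, features: np.ndarray, returns: np.ndarray, cost_bps: float = 5.0):
        super().__init__()
        self.features, self.returns, self.cost = features, returns, cost_bps / 1e4
        self.action_space = spaces.Discrete(3)  # 0 = short, 1 = flat, 2 = long
        self.observation_space = spaces.Box(
            -np.inf, np.inf, shape=(features.shape[1],), dtype=np.float32
        )

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.t, self.position = 0, 0.0
        return self.features[self.t].astype(np.float32), {}

    def step(self, action):
        target = float(action) - 1.0              # map {0, 1, 2} -> {-1, 0, +1}
        turnover = abs(target - self.position)    # frictions charged on position changes
        reward = target * self.returns[self.t] - self.cost * turnover
        self.position = target
        self.t += 1
        done = self.t >= len(self.returns) - 1
        return self.features[self.t].astype(np.float32), reward, done, False, {}

# Training sketch; `features` and `returns` are assumed to come from the feature layer above.
# env = TradingEnv(features, returns)
# model = PPO("MlpPolicy", env, verbose=1)
# model.learn(total_timesteps=200_000)
```

Swapping PPO for DQN is a one-line change in this stack, since the discrete action space is compatible with both algorithms.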
