Edit: Someone pointed out this might be a VC trap, which would explain the breathless writing about a bogus model with no actual results included.<p>This is a whopper of an ai-will-totally-take-over-trading nonsense paper; you'll become less informed about reality if you read it. I'm not going to cover everything, but to make sure nobody thinks some new GPT is going to give trading recommendations:<p>* It's not clear the group ever trained a model. If they have, there's no data about it. There's an infinitude of subtle traps you have to be aware of when training financial models.<p>* The proposed training and evaluation periods are remarkably short for the holding periods they suggest, even if they had included good test results.<p>* There's no information about the exact timing of the data feeds they're using, how they measure the price+time+cost of execution, how they think about market impact, etc.<p>* There's no mention of risk management aside from some vague risk-preference ideas the GPT might theoretically have.<p>Putting that aside, the authors hold a fundamental misconception. If you have some mega-network that can parse all sorts of financial information/statements/whatever and meaningfully tell you something about the future, you're not going to bolt on a ton of machinery for understanding written-language prompts so it can have a discussion with the user. The actually valuable thing is the predicted forward returns / target portfolio / whatever piece of information you're trying to extract.
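To illustrate one of those subtle traps: look-ahead bias. A minimal sketch on synthetic data (the feature and helper function here are hypothetical, not from the paper) showing how z-scoring a feature with full-sample statistics leaks future information into every training row, versus a point-in-time version that only uses data available at each moment:

```python
import numpy as np

rng = np.random.default_rng(0)
prices = 100 * np.cumprod(1 + rng.normal(0, 0.01, 500))
returns = np.diff(prices) / prices[:-1]

# Trap: z-scoring with the FULL-sample mean/std bakes future information
# into every historical row -- classic look-ahead bias.
leaky_feature = (returns - returns.mean()) / returns.std()

# Correct: compute statistics using only data available up to each point.
def point_in_time_zscore(x, min_obs=30):
    out = np.full_like(x, np.nan)
    for t in range(min_obs, len(x)):
        hist = x[:t]  # strictly past observations
        out[t] = (x[t] - hist.mean()) / hist.std()
    return out

clean_feature = point_in_time_zscore(returns)
```

A backtest trained on `leaky_feature` will look better than it has any right to, which is exactly why a paper with no training details can't be taken at face value.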
Where’s the beef? There’s no data, and no evaluation of how well the model does. They’re also limiting themselves to just the pandemic period, which is probably a highly unrepresentative stretch for the stock market.
As with any research / publication around trading and the stock market: if this strategy worked and provided an advantage, it would not be published free for all to read.<p>The one interesting idea here is using three different sets of agents to handle information at three levels of scope/timeframe. From my limited experience with trading, there are predictable behaviour patterns (not in the information itself, but in how people act on it), and these patterns tend to be self-similar across scales (not unlike fractals).<p>Otherwise, this is an unusually short paper, and light on data analysis / evidence.
I feel like 90% of <something>GPT projects are VC traps. Why would anyone discard models specifically designed for financial trading in favor of transformers that aren't suited to the job?
One thing I always assume about quantitative finance research is that by the time it’s published in a journal, some trading desk on Wall Street has already been using some version of the result for years. There is simply too much money to be made. Unfortunately for us, there is also incredible secrecy and siloing of these innovative results.
There is literally no content in this paper besides a rough idea of how something like this could work, with a metric ton of bullshit bingo mixed in. Man, I really hate the academic paper system.
Maybe it works great and is super advanced, but my experience so far with ChatGPT (even GPT-4) is that it sometimes makes basic mathematical errors.<p>You'll excuse me if I'm skeptical that an LLM is going to do a better job at figuring out the market than some of the most highly paid minds in the world, who have been at the problem for decades.
Monkeys throwing darts have often outperformed the index. What does that tell you? Evaluating trading algorithms for sustainable edge is a science.<p>The competitive, zero-sum nature of trading requires a deep, almost scientific bent of mind to think about it well.
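The dart-throwing-monkey result is easy to reproduce in a toy simulation. A sketch under loud assumptions (a synthetic universe with power-law market caps and an assumed small-size return premium, not real data): random equal-weighted picks tend to beat a cap-weighted index simply because equal weighting tilts toward smaller names:

```python
import numpy as np

rng = np.random.default_rng(42)
n_stocks, n_trials, n_picks = 500, 1000, 30

# Synthetic universe: power-law market caps, plus an ASSUMED small-size
# return premium (smaller stocks earn slightly more on average).
caps = rng.pareto(1.5, n_stocks) + 1
size_premium = 0.02 * (1 - caps / caps.max())
annual_returns = rng.normal(0.07 + size_premium, 0.20, (n_trials, n_stocks))

# Cap-weighted "index" return in each trial.
index_return = annual_returns @ (caps / caps.sum())

# Each monkey throws darts at 30 random stocks, equal-weighted.
wins = 0
for i in range(n_trials):
    darts = rng.choice(n_stocks, n_picks, replace=False)
    if annual_returns[i, darts].mean() > index_return[i]:
        wins += 1

print(f"monkey beat the cap-weighted index in {wins / n_trials:.0%} of trials")
```

The point is exactly the one above: outperforming an index in a sample tells you little about edge, because structural tilts (here, size) can do it for free. Separating tilt from skill is where the actual science lives.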