Edit: Someone pointed out this might be a VC trap, which would explain the breathless writing about a bogus model with no actual results included.<p>This is a whopper of an ai-will-totally-take-over-trading nonsense paper; you'll come away less informed about reality for having read it. I'm not going to cover everything, but to make sure nobody thinks some new gpt is about to hand out trading recommendations:<p>* It's not clear the group ever trained a model. If they did, no data about the training is given, and there's an infinitude of subtle traps you have to be aware of when training financial models.<p>* The proposed training and evaluation periods are remarkably short for the holding periods they suggest; even if they had reported good test results, windows that short wouldn't mean much.<p>* There's no information about the exact timing of the data feeds they're using, how they measure the price, time, and cost of execution, how they think about market impact, etc.<p>* There's no mention of risk management beyond some vague risk-preference ideas the gpt might theoretically have.<p>Putting all that aside, the authors hold a fundamental misconception. If you had some mega-network that could parse all sorts of financial information/statements/whatever and meaningfully tell you something about the future, you wouldn't bolt on a ton of natural-language plumbing so it can have a discussion with the user. The actually valuable thing is the predicted forward returns / target portfolio / whatever piece of information you're trying to get.
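<p>To illustrate the "subtle traps" point with a toy example (entirely made-up data, not anything from the paper): one classic bug is lookahead bias, where an off-by-one in indexing lets the "signal" see the very return it's supposed to predict. The leaky backtest looks like a money machine; the causal one shows the signal is worthless.<p>

```python
import random

random.seed(0)

# Toy daily returns: a pure noise series, so no real signal exists.
returns = [random.gauss(0, 0.01) for _ in range(1000)]

def backtest(signal_fn):
    """Total PnL from holding +1/-1 units per the signal on each day."""
    return sum(signal_fn(t) * returns[t] for t in range(1, len(returns)))

# Leaky signal: positions on day t using day t's own return --
# the kind of lookahead bug a single index shift introduces.
leaky = lambda t: 1 if returns[t] > 0 else -1

# Causal signal: only uses information available before day t.
causal = lambda t: 1 if returns[t - 1] > 0 else -1

print(backtest(leaky))   # large positive: spurious "profit"
print(backtest(causal))  # roughly zero, as it should be on noise
```

<p>The leaky version reports strong profits on data that is random by construction; real pipelines have dozens of sneakier versions of this (restated fundamentals, survivorship-filtered universes, timestamps recorded at publication rather than availability), which is why an evaluation section with no such details is a red flag.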