I am always a bit disappointed when only a few arguments are provided. Ken Griffin might have a good understanding of the implications of LLMs or he might not, but that is hard to tell here.

Law especially will be interesting to watch. Lawmakers have to encode their intentions into text. LLMs are good at detecting patterns in text, finding inconsistencies, and so on. On top of that, I would argue that we learn from the past to predict the future. Laws do not change as frequently as tech does. So LLMs might turn out to be excellent at understanding the law, at least written law, informed by all the cases they saw during training. I think law is in for a change similar to software.
> “Here’s the problem with large language models: they are built on the past, everything we do is about the future,” Griffin said.

So Ken sees books and backtesting trading strats as useless too?

I really expected more knowledgeable answers.