I think this is one of those places where it can help to remember that AI != LLM. I don't mean that as a criticism of the video or its title; for a lawyer expressing some deep and well-founded concerns in a video aimed at a general audience, I don't expect a deep dive into the nature of the exact AIs we have today.<p>Whether or not AI lawyers could hypothetically be useful in 10 years, today's LLMs are observably and obviously not appropriate. A legal AI needs to have a concrete idea of the sources it is citing. It needs some equivalent of a human pulling down a book and knowing (however you choose to determine the meaning of that word) <i>exactly</i> what the reference refers to. Modern LLMs merely create "plausible sounding" citations, which we've already seen play out with people using these tools in real legal filings.<p>But LLMs are not the be-all, end-all of "AI". Future AIs with different architectures and capabilities may reasonably be able to function as lawyers.<p>LLMs certainly cannot today, though, by the very nature of their architecture.
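<p>To make the "concrete idea of sources" point tangible, here is a minimal sketch of what a grounded citation step could look like (Python; CASE_LAW_DB, verify_citation, and the Smith v. Jones entry are hypothetical illustrative names, not any real legal database or API). The point is the lookup itself: a plain LLM has no equivalent of this step, so nothing stops it from emitting a citation that resolves to nowhere.

  # Hypothetical verified corpus keyed by exact citation string.
  CASE_LAW_DB = {
      "Smith v. Jones, 123 F.3d 456 (9th Cir. 1997)":
          "Full verified text of the opinion...",
  }

  def verify_citation(citation: str) -> str | None:
      # A citation is only usable if it resolves to a real,
      # retrievable source; "plausible sounding" is not enough.
      return CASE_LAW_DB.get(citation)

  model_citation = "Smith v. Jones, 123 F.3d 456 (9th Cir. 1997)"
  source_text = verify_citation(model_citation)
  if source_text is None:
      raise ValueError(f"Unverifiable citation: {model_citation!r}")

A retrieval-grounded system puts some version of this check between generation and output; a bare LLM generates the citation text the same way it generates every other token.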
The issue with GenAI is that it typically sounds correct, and presents its opinion in a way that seems authoritative, but might be wrong... and the only ways to <i>know</i> that it's wrong are 1. to try its suggestion and fail, or 2. to look at it with existing expertise.<p>I see this frequently when generating source code; it'll get me 80-90% of the way there, but I need to fix, massage, and adjust. That's fairly painless, since the code-test-code loop is already part of a normal development cycle.<p>When it's a matter of law, though? You don't really get those kinds of do-overs, and if the judge catches you citing a hallucination in your argument? I doubt that would go over well.
From my experience of hiring human lawyers in the UK, the process is kind of terrible and crying out to be taken over, at least in part, by AI.<p>That said, I'm not sure the "CALL (833) 3-MY-BIRD" guy is an unbiased guide to the value of AI legal tools.