科技回声
A tech news platform built with Next.js, providing global tech news and discussion.


© 2025 科技回声. All rights reserved.

Answering Legal Questions with LLMs

170 points · by hugodutka · about 1 year ago

27 comments

_akhe · about 1 year ago
I saw a RAG demo from a startup that lets you upload a patient's medical docs, then the doctor can ask it questions like:

> what's the patient's bp?

even questions about drugs, histories, interactions, etc. The AI keeps the patient's age and condition in mind in its responses, when recommending things, etc. It reminded me of a time I was at the ER for a rib injury and could see my doctor Wikipedia'ing stuff - couldn't believe they used so much Wikipedia to get their answers. This at least seems like an upgrade from that.

I can imagine the same thing with laws. Preload a city's, county's, etc. entire set of laws and, for a sentencing, upload a defendant's criminal history report, plea, and other info; then the DA/judge/whoever can ask questions of the AI legal advisor just like the doctor does with patient docs.

I mention this because RAG is perfect for these kinds of use cases, where you really can't afford hallucination - where you need the information to be grounded in specific cases, specific documents.

I used to think AI would replace doctors before nurses, and lawyers before court clerks - now I think it's the other way around. The doctor, the lawyer - like the software engineer - will simply be more powerful than ever and have lower overhead. The lower-down jobs will get eaten, never the knowledge work.
georgeecollins · about 1 year ago
I wonder if this "AI will replace your job" is like "AI will drive your car," in that once something can solve 95% of the problem, the general public assumes the last 5% will come very quickly.

Rodney Brooks used to point out that self-driving was perceived by the public as happening very quickly, when he could show early examples in Germany from the 1950s. We all know this kind of AI has been in development a long time and it keeps improving. But people may be overestimating what it can do in the next five years -- like they did with cars.
liampulles · about 1 year ago
The key point here is that the implementation combines an LLM summary with DIRECT REFERENCES to the source material: https://hotseatai.com/ans/does-the-development-and-deployment-of-q_2azysfQoPxFksVXKW8TYw2TGT79#legal-trace

That seems to me a sensible approach, because it gives lawyers the context to review the result easily (from my limited understanding).

I wonder whether much of what one would want couldn't be achieved by analyzing and storing the text embeddings of legal paragraphs in a vector database, and then finding the top N closest results given the embedding of a legal question. Then it's no longer a question of an LLM making stuff up, but more of a semantic search.
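The vector-search approach this comment describes can be sketched without any LLM at all. In the sketch below, the bag-of-words "embedding" is a toy stand-in for a real embedding model, and the in-memory list stands in for a vector database; the sample paragraphs are invented for illustration:

```python
from collections import Counter
import math

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real system would call an
    # embedding model and store the vectors in a vector database.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_n(question: str, paragraphs: list[str], n: int = 1) -> list[str]:
    # Rank stored paragraphs by similarity to the question embedding
    # and return the N closest, i.e. a pure semantic search.
    q = embed(question)
    return sorted(paragraphs, key=lambda p: cosine(q, embed(p)), reverse=True)[:n]

paragraphs = [
    "a data controller must report a personal data breach within 72 hours",
    "high-risk systems are listed in annex iii of the regulation",
]
print(top_n("how quickly must a breach be reported", paragraphs))
```

The retrieved paragraph can then be shown verbatim next to any generated summary, which is exactly what makes the result easy for a lawyer to verify.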
vouaobrasil · about 1 year ago
The next step after this is more complicated laws, because lawyers can now use LLMs; thus laws become even more opaque to ordinary folk, who will have to use LLMs to understand anything. It's an even more fragile system that will undoubtedly favour those who can wield the most powerful LLM: in other words, the rich and the corporations.

This is another example of technology making things temporarily easier, until the space is filled with an equal dose of complexity. It is Newton's third law for technological growth: if technology exerts a force to make life simpler, society will fill that void with an equal force in the opposite direction to make it even more complex.
avidiax · about 1 year ago
Is there perhaps a training data problem?

Even if the LLM were trained on the entire legal case-law corpus, legal cases are not structured in a way that an LLM can follow. They reference distant case law as the reason for a ruling, and they likely don't explain specifically how the presented evidence meets various bars. There are also cross-cutting legal concepts, like spoliation, that obviate the need for evidence or deductive reasoning in some areas.

I think a similar issue likely exists in highly technical areas like protocol standards. I don't think that an LLM, given 15,000 pages of 5G specifications, can tell you why a particular part of the spec says something, or, given an observed misbehavior of a system, which parts of the spec are likely violated.
MontagFTB · about 1 year ago
A tool like this should live in service to the legal profession. As with Copilot, without a human verifying, improving, and maintaining the work, it is risky (possibly negligent) to provide this service to end users.
kevingadd · about 1 year ago
Don't sections of regulations reference each other, and reference other regulations? This article says they only insert snippets of the section they believe to be directly relevant to the legal question. It seems to me that this automatically puts the bot in a position where it lacks the information it needs to construct an informed answer. Or are the laws in some regions drafted in a "stand-alone" way, where each section is fully independent and restates everything?

This feels like they've built an AI that justifies itself with shallow quotes instead of a deep understanding of what the law means in context.
bdw5204 · about 1 year ago
Using LLMs to understand laws seems like about as bad an idea as using them to write legal documents:

https://apnews.com/article/artificial-intelligence-chatgpt-fake-case-lawyers-d6ae9fa79d0542db9e1455397aef381c
efitz · about 1 year ago
This was an excellent article describing how they broke down a complex task that an LLM was bad at into a series of steps that the LLM could excel at. I think this approach is probably broadly applicable across law (and perhaps medicine).
sandworm101 · about 1 year ago
Don't be too worried about LLM arms races. Law is not as complicated as it seems on TV. Having access to a better LLM isn't going to somehow give you access to the correct incantation needed to dismiss a case. The vast majority of cases, like 99.99%, turn on completely understood legal issues. Everyone knows everything.
helpfulmandrill · about 1 year ago
Naively, I wonder if the tendency towards "plausible bullsh*t" could be a problem here: making very convincing legal arguments that rest on precedents that don't exist, etc.
anonylizard · about 1 year ago
GPT-4 also cannot solve full programming problems, and frequently makes large errors even with a small, focused context, as in GitHub Copilot Chat.

However, it is still extremely useful and productivity-enhancing when combined with the right workflow and UI. Programming is a large enough industry that Microsoft is building this out in VS Code. I don't think the legal industry has a similar tool.

Also, I think programmers are far more receptive to radical changes. They see the constant leaps in performance and are jumping in to use the AI tools, because they know what could be coming next with GPT-5. Lawyers are generally risk-averse and not prone to hype, so they are far less eager customers for these new tools.
w10-1 · about 1 year ago
Yes, law applies rules to facts.

No, connecting the facts and rules will not give you the answer.

Lawyers are only required when there are real legal issues: boundary cases, procedural defenses, countervailing leverage...

But sometimes legal heroes like Witkin drag through all the cases and statutes, identifying potential issues and condensing them into summaries. New lawyers use these as a starting point for their investigations.

So a law LLM first needs to be trained on Witkin to understand the language of issues, as well as the applicable law.

Then somehow the facts need to be loaded in a form recognizable as such (somewhat like a doctor translating "dizziness" to "postural hypotension" with some queries). That would be an interesting LLM application in its own right.

Putting those together in a domain-specific way would be a great business: target California divorce, Texas product-liability tort, etc.

Law firms changed from pipes to pyramids in the 1980s as firms expanded their use of associates (and started the whole competition for partnership). This could replace associates, but then you'd lose the competitiveness that disciplines associates (and reduce the buyers available for the partnership). Also, corporate clients nurture associates as potential replacements and redundant information sources, as a way of managing their dependence on external law firms. For LLMs to have a sizable impact on law, you'd need to sort out the transaction-cost economics of law firms, both internally and externally.
niemandhier · about 1 year ago
Legal reasoning is extremely interconnected, sometimes directly via cross-references between laws, sometimes indirectly via agreement in the field. This makes setting a sensible context difficult.

I believe it would be possible to teach an LLM to reason about law, but simple RAG will probably not work. Even the recursive summary trick outlined in the post is probably not enough; at least I couldn't make it work.
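The "recursive summary trick" mentioned here can be sketched as a divide-and-summarize loop. This is a minimal sketch of the general pattern, not the post's actual implementation; the `llm` argument is a placeholder for any text-completion callable (`str -> str`), and the stub below only demonstrates the control flow:

```python
def recursive_summary(text: str, llm, max_chars: int = 2000) -> str:
    # Short enough to fit the context window: summarize directly.
    if len(text) <= max_chars:
        return llm(f"Summarize:\n{text}")
    # Too long: split in half, summarize each half recursively,
    # then summarize the concatenated summaries.
    mid = len(text) // 2
    left = recursive_summary(text[:mid], llm, max_chars)
    right = recursive_summary(text[mid:], llm, max_chars)
    return llm(f"Combine these summaries:\n{left}\n\n{right}")

# Stub "LLM" that just truncates its prompt, to show the call shape.
stub = lambda prompt: prompt[:50]
print(recursive_summary("x" * 5000, stub, max_chars=2000))
```

The weakness the commenter points at is visible in the structure: each level of recursion loses detail, so cross-references between distant parts of the law can vanish before the final summary is produced.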
nocoiner · about 1 year ago
> We’ve learned that the combination of high latency, faulty reasoning, and limited document scope kills usage. No lawyer wants to expend effort to ask a detailed question, wait 10 minutes for an answer, wade through a 2-page-long response, and find that the AI made an error.

Nor does any lawyer want to have that same experience with a junior associate (except substitute "two hours" for "10 minutes"), yet here we are.
daft_pink · about 1 year ago
I would say that it's getting better at answering those questions. I have a list of difficult legal research questions that I worked on at work, and Gemini Pro and Claude Opus are definitely way better than 3, 3.5, and 4.

I believe it will eventually get there and give good advice.
tagersenim · about 1 year ago
Many laws, especially the GDPR, can only be interpreted in conjunction with a lot of guidelines (WP29, for example), interpretations by the local Data Protection Authority, decisions by local and European courts, etc.

Given all of this information, I think the bot will be able to formulate an answer. However, the bot first needs to know what information is needed.

If a lawyer has to feed the bot specific parts of all of these documents, they might as well write the answer themselves.
yieldcrv · about 1 year ago
It's 2024 and people are still just realizing that LLMs need subtasks, and that "you're prompting it wrong" is the answer to everything.

Maybe "prompt engineering" really is the killer job.
spdustin · about 1 year ago
I've always felt that a "smart" person isn't smart because they know everything, but because they know *how to find the answers*. Smart users of LLMs will use the output as an opportunity to learn how to think about their problem, and smart implementations of LLMs will guide the user to do so.

I'm not saying that every interaction must be Socratic, but that the LLM should neither be, nor present itself as, the answer.
jrm4 · about 1 year ago
Yup. As a lawyer and IT instructor, I think the "killer" application really is a "knowledgeable, literate, human-like personal librarian/intern."

When they can do the following, we'll really be getting somewhere:

"If I'm interpreting this correctly, most sources say XXXXXX. Does that sound right? If not, please help correct me."
ei625 · about 1 year ago
As with software developers, their value isn't just having technical knowledge.
anonu · about 1 year ago
> We preprocess the regulation so that when a call contains a reference to “Annex III,” we know which pages to put into the “junior lawyer’s” prompt. This is the LLM-based RAG I mentioned in the introduction.

Is this RAG, or just an iteration on more creative prompt engineering?
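The preprocessing step quoted above amounts to a lookup table from reference strings to page ranges. A minimal sketch of that idea, with a hypothetical page index (the annex names echo the quoted regulation, but the page numbers are invented for illustration):

```python
import re

# Hypothetical index built offline during preprocessing: the page
# range each annex occupies in the regulation document.
ANNEX_PAGES = {"Annex III": (41, 44), "Annex IV": (45, 47)}

def pages_for(chunk: str) -> set[int]:
    """Resolve 'Annex ...' cross-references in a text chunk to the
    concrete pages that should be added to the prompt."""
    pages: set[int] = set()
    for ref in re.findall(r"Annex [IVXLCDM]+", chunk):
        if ref in ANNEX_PAGES:
            first, last = ANNEX_PAGES[ref]
            pages.update(range(first, last + 1))
    return pages

print(sorted(pages_for("High-risk AI systems referred to in Annex III shall...")))
# → [41, 42, 43, 44]
```

Whether one calls this RAG or prompt engineering, the retrieval decision is made by a deterministic lookup rather than by embedding similarity, which is what the commenter's question turns on.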
RecycledEle · about 1 year ago
LLMs are Internet simulators. They will give you the answer the Internet thinks is a good answer. If you live in CA or NY, the legal advice might be passable. If you live in TX, the legal advice is horrible.

LLMs are biased because the Internet is biased.
cess11 · about 1 year ago
EU law is case-driven, and besides the text of the cases you also need to know the books interpreting them, the general legal principles that might be applicable, and the hermeneutic traditions.

These tools are clearly a long way from being able to compete with a human lawyer.
balphi · about 1 year ago
How are you using the regex to end the while loop? Are you detecting a specific substring, or is it something more complex?
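A common shape for the pattern this question asks about is to loop over LLM calls until the output matches a terminating regex. This is a guess at the general technique, not the article's actual implementation; the `FINAL ANSWER:` pattern and the `fake_llm` stub are both hypothetical:

```python
import re

# Hypothetical terminating pattern: stop once the model emits a line
# of the form "FINAL ANSWER: ...".
DONE = re.compile(r"^FINAL ANSWER:\s*(.+)$", re.MULTILINE)

def run_until_done(llm, prompt: str, max_steps: int = 10) -> str:
    transcript = prompt
    for _ in range(max_steps):
        out = llm(transcript)
        match = DONE.search(out)
        if match:                      # the regex ends the loop
            return match.group(1)
        transcript += "\n" + out       # otherwise, keep iterating
    raise RuntimeError("no final answer within the step budget")

# Stub model: "thinks" twice, then answers.
replies = iter(["considering Annex III...", "checking the criteria...",
                "FINAL ANSWER: yes, it is high-risk"])
fake_llm = lambda _prompt: next(replies)
print(run_until_done(fake_llm, "Is the system high-risk?"))
# → yes, it is high-risk
```

A simple substring check (`"FINAL ANSWER:" in out`) would often suffice; the regex version additionally captures the answer text and anchors the match to a line start, which avoids triggering on the phrase appearing mid-sentence.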
beeboobaa3 · about 1 year ago
No thank you, let's not.
2099miles · about 1 year ago
Unintuitive, LLM-only RAG?