What's kind of funny/telling about the current state of AI is that... if it really worked as incredibly as all the pumpers claim, couldn't you simply train it on all the relevant legal codes by jurisdiction?
But not really, it's mostly just predicting the next token.
More likely than not it would be stuck in a rat's nest of contradictory codes and rules.
The US Supreme Court ruling regarding Colorado leaving Trump off the ballot was a complete farce. Their explanation was convoluted and contradictory, and they decided to include answers to questions that weren't directly part of the case. What is an LLM supposed to do with that, and how can an LLM trained on our laws be expected to make use of it when courts can, and sometimes do, go against the rules as written?
When advising clients, i.e., not litigation, almost every relevant Supreme Court case is boiled down to a single sentence. Nuance isn't relevant to a client who is trying to avoid ever having to litigate anything. They don't want to be that close to any legal lines. So you wouldn't turn the AI loose on the judge's written decision, but rather on the boiled-down summaries written by a host of other professionals. Things like this:
The one sentence that matters is decided later though, right? The court doesn't write 10 pages and then point to a single sentence to listen to; which sentence matters is a matter of what the public and/or law enforcement key in on.
For future cases the full explanation still matters too, especially from the Supreme Court. People only remember 9 words from the Miranda decision, but the rest of the 10 pages is still case law that can absolutely be used to shape future cases.
For cases, yes. The pages matter to lawyers. But day to day, clients pay lawyers for the practical (short) answers on which they can build corporate policies.
Maybe I'm way off base here, but in my opinion bothering with lawyers is useless unless I'm worried about litigation. If I only care about corporate policy then I won't bother with legal counsel at all; at best I'd lean on HR, who can have more relevant insights related to company culture and change management.
No, I'm saying that even if you could keep all of our laws in your head at once, there are scenarios where you can't follow all of them.
I'm also saying that we have case law that contradicts itself and violates the rules of how the courts are supposed to work. Those examples, if included in training data, would confuse an LLM and likely lead to poor results.