
Oops!

Clitheroekid
Lemon Quarter
Posts: 2874
Joined: November 6th, 2016, 9:58 pm
Has thanked: 1390 times
Been thanked: 3806 times

Oops!

#592179

Postby Clitheroekid » May 30th, 2023, 11:09 pm

This is an interesting (and for lawyers a rather scary) example of the dangers of relying on an AI chatbot for legal research.

The lawyer in question asked ChatGPT to find him some cases that supported his argument, and it duly obliged, providing full citations, details of where they had been reported and so on.

The opposing lawyer then challenged the selection of cases, saying that he couldn't find them, and asking for extracts from the judgements. The judge ordered that copies of the cases be filed with the court.

No problem - ChatGPT duly obliged, with sections extracted from the original judgements. So far, so good.

But the opposing lawyer, becoming increasingly frustrated, said he couldn't find any of these cases, and complained to the judge trying the case. He couldn't find them either, and to cut a long story short, it now turns out that ChatGPT had simply invented them! And when challenged to provide extracts from the judgements, it had written them itself.

The lawyer and his colleague who filed the original arguments and quoted the cases have now been ordered to appear in court on 8 June, along with a representative from their firm, to show cause why they should not all be sanctioned. As well as facing personal sanctions, it seems that their client's case is also likely to be struck out.

I think I'll hang on to my textbooks for a while yet! ;-)

https://simonwillison.net/2023/May/27/l ... onic_email

swill453
Lemon Half
Posts: 7991
Joined: November 4th, 2016, 6:11 pm
Has thanked: 991 times
Been thanked: 3659 times

Re: Oops!

#592181

Postby swill453 » May 30th, 2023, 11:28 pm

I find it utterly stunning that the lawyer submitted, in a court of law, the output of a computer program as fact without independently checking it. Presumably he could have searched for the references in a matter of seconds?

He should be struck off for stupidity if nothing else.

Scott.

servodude
Lemon Half
Posts: 8416
Joined: November 8th, 2016, 5:56 am
Has thanked: 4490 times
Been thanked: 3621 times

Re: Oops!

#592196

Postby servodude » May 31st, 2023, 5:19 am

I would suggest that anyone with real knowledge of any domain that relies on more than subjective opinion should test ChatGPT on it.
It's got very convincing language skills, and it's really confident, but it's really thick and not very useful without being carefully led to an answer. I could imagine it trying to claim a speeding fine on its expenses.
The real risk with it is folk not realising it's a Muppet and taking it at face value (like the character in the OP)

stewamax
Lemon Quarter
Posts: 2464
Joined: November 7th, 2016, 2:40 pm
Has thanked: 84 times
Been thanked: 810 times

Re: Oops!

#592278

Postby stewamax » May 31st, 2023, 11:58 am

Clitheroekid wrote:...it now turns out that ChatGPT had simply invented them! And when challenged to provide extracts from the judgements it had written them itself

Maybe ChatGPT is getting all too sentient! It is behaving like a child who hadn't done their homework. When challenged, it tried to invent some all-too-plausible answers but was caught out by the teacher.

Mike4
Lemon Half
Posts: 7207
Joined: November 24th, 2016, 3:29 am
Has thanked: 1670 times
Been thanked: 3841 times

Re: Oops!

#592287

Postby Mike4 » May 31st, 2023, 12:18 pm

stewamax wrote:
Clitheroekid wrote:...it now turns out that ChatGPT had simply invented them! And when challenged to provide extracts from the judgements it had written them itself

Maybe ChatGPT is getting all too sentient! It is behaving like a child who hadn't done their homework. When challenged, it tried to invent some all-too-plausible answers but was caught out by the teacher.


Being a Large Language Model, ChatGPT is only predicting the most likely words based on what went before, apparently. So it bases its output on similar conversations it has seen previously, which explains why it randomly makes stuff up: a continuation that was right in 'that' context isn't necessarily right in 'this' one. Muppets just mimicking intelligence.
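That "predicting the most likely next word" idea can be sketched with a toy bigram model. This is purely illustrative (real LLMs use neural networks over subword tokens, not raw word counts), but it shows why the output is plausible by construction, not true by construction:

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in some training text,
# then always emit the most frequent continuation.
training_text = (
    "the court held that the claim failed . "
    "the court held that the appeal failed ."
).split()

follows = defaultdict(Counter)
for prev, nxt in zip(training_text, training_text[1:]):
    follows[prev][nxt] += 1

def continue_from(word, n=6):
    """Greedily extend `word` by the statistically most likely next word."""
    out = [word]
    for _ in range(n):
        if not follows[out[-1]]:
            break
        out.append(follows[out[-1]].most_common(1)[0][0])
    return " ".join(out)

# Plausibility is all that is optimised - whether the continuation is *true*
# never enters into it, which is exactly how fabricated citations arise.
print(continue_from("the"))  # → "the court held that the court held"
```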

Neural nets, however, are a different sort of AI. They aim to mimic the way a brain works, so they have the potential to reason, imagine and come up with plans, which strikes me as much more dangerous. The big favour LLMs have done for us is to get the average bloke on that well-known omnibus to think about the dangers of AI in general. I don't think LLMs have the potential to start nuclear wars by fooling our Glorious Leaders into pressing the Big Red Button, but neural nets certainly have.

UncleEbenezer
The full Lemon
Posts: 10816
Joined: November 4th, 2016, 8:17 pm
Has thanked: 1472 times
Been thanked: 3006 times

Re: Oops!

#592317

Postby UncleEbenezer » May 31st, 2023, 2:03 pm

stewamax wrote:
Clitheroekid wrote:...it now turns out that ChatGPT had simply invented them! And when challenged to provide extracts from the judgements it had written them itself

Maybe ChatGPT is getting all too sentient! It is behaving like a child who hadn't done their homework. When challenged, it tried to invent some all-too-plausible answers but was caught out by the teacher.

Perhaps rather than that, it interpreted the question as an academic/classroom exercise, where it was tasked to provide hypothetical examples.

If the questioner had been, for example, a novelist or playwright whose work (of fiction) in progress involved a lawsuit, it might have been exactly what was wanted.

I wonder how often examples like that would win in court? If the other side can't afford a lawyer, or has one less diligent than in this case, who would know?

stewamax
Lemon Quarter
Posts: 2464
Joined: November 7th, 2016, 2:40 pm
Has thanked: 84 times
Been thanked: 810 times

Re: Oops!

#592328

Postby stewamax » May 31st, 2023, 3:32 pm

Raises the question of whether another AI (call it AI-TWO) would be better than a human judge at detecting the truth or falsity of AI-ONE's arguments.

AI-ONE (KC): "May it please the court, my client the plaintiff, a network of unimpeachable veracity, was peacefully minding its own neurons when...."
AI-THREE (Junior): "My learned friend exaggerates. I call as expert witness AI-FOUR - a network of vast experience in this area."
AI-FOUR: $%£$*&%$£$sproutbottle£$%&% [technical jargon]

AI-TWO (aka M'Lud): The jury (AI-FIVE through SIXTEEN) will now retire for 5ms and consider their verdict.

Arborbridge
The full Lemon
Posts: 10439
Joined: November 4th, 2016, 9:33 am
Has thanked: 3644 times
Been thanked: 5272 times

Re: Oops!

#592357

Postby Arborbridge » May 31st, 2023, 5:50 pm

I mentioned this story to my B-i-L who spent most of his life working as a solicitor. Here's his take on it:-

"I wouldn't dream of using ChatGPT for legal research unless I could corroborate it from other sources. It was always drummed into us as trainee solicitors that we should not take anything for granted and your client was almost certainly only telling you half the story. The fact that I survived till retirement without involving the firm in a claim (and for a pensions solicitor that might be in £ millions!) on its professional indemnity policy shows I must have got most of it right.

I think you have to treat ChatGPT like a black Labrador - it's eager to please, but a bit thick . . . "

Sums it up nicely, I'd say.

Arb.

mc2fool
Lemon Half
Posts: 7896
Joined: November 4th, 2016, 11:24 am
Has thanked: 7 times
Been thanked: 3051 times

Re: Oops!

#592360

Postby mc2fool » May 31st, 2023, 6:15 pm

Mike4 wrote:Neural nets, however, are a different sort of AI. They aim to mimic the way a brain works, so they have the potential to reason, imagine and come up with plans, which strikes me as much more dangerous. The big favour LLMs have done for us is to get the average bloke on that well-known omnibus to think about the dangers of AI in general. I don't think LLMs have the potential to start nuclear wars by fooling our Glorious Leaders into pressing the Big Red Button, but neural nets certainly have.

ChatGPT is a neural net AI.

https://www.google.com/search?q=is+chatgpt+a+neural+net

servodude
Lemon Half
Posts: 8416
Joined: November 8th, 2016, 5:56 am
Has thanked: 4490 times
Been thanked: 3621 times

Re: Oops!

#592391

Postby servodude » June 1st, 2023, 1:52 am

mc2fool wrote:
Mike4 wrote:Neural nets, however, are a different sort of AI. They aim to mimic the way a brain works, so they have the potential to reason, imagine and come up with plans, which strikes me as much more dangerous. The big favour LLMs have done for us is to get the average bloke on that well-known omnibus to think about the dangers of AI in general. I don't think LLMs have the potential to start nuclear wars by fooling our Glorious Leaders into pressing the Big Red Button, but neural nets certainly have.

ChatGPT is a neural net AI.

https://www.google.com/search?q=is+chatgpt+a+neural+net


We used to have to refer to them as ANNs (Artificial Neural Nets) lest we be marked down.
They loosely follow the connectivity pattern found in the brain and have been researched (mathematically) for decades.
The most interesting recent work is in cascading them to create larger targeted topologies - effectively ignoring that the maths proves a single hidden layer is sufficient.

I find them quite good fun in a frustrating "black box" kind of way - though having to validate the behaviour of code empirically is usually the mark of a poor programming method.
C.A.R. Hoare wrote:"There are two methods in software design. One is to make the program so simple, there are obviously no errors. The other is to make it so complicated, there are no obvious errors."
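The "single hidden layer is sufficient" remark above refers to the universal approximation theorem. As a minimal illustration (hand-picked weights rather than a trained net), one hidden layer of two threshold units is enough to compute XOR - something no single-layer perceptron can do:

```python
def step(x):
    """Threshold activation: fire (1) if the weighted sum clears the bias, else 0."""
    return 1 if x > 0 else 0

def xor_net(x1, x2):
    # Hidden layer: h1 fires on OR, h2 fires on AND.
    h1 = step(x1 + x2 - 0.5)
    h2 = step(x1 + x2 - 1.5)
    # Output: OR but not AND, i.e. XOR.
    return step(h1 - 2 * h2 - 0.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))
```

XOR is not linearly separable, so the hidden layer is doing real work here; the theorem generalises this to approximating any continuous function, given enough hidden units.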

Kantwebefriends
Lemon Slice
Posts: 361
Joined: November 5th, 2016, 4:02 pm
Has thanked: 26 times
Been thanked: 105 times

Re: Oops!

#592530

Postby Kantwebefriends » June 1st, 2023, 5:10 pm

ChatGPT will soon be ChatGPT MP.

Or even The Right Honourable ChatGPT PM and First Lord of the Treasury.

uspaul666
2 Lemon pips
Posts: 233
Joined: November 4th, 2016, 6:35 am
Has thanked: 196 times
Been thanked: 112 times

Re: Oops!

#592644

Postby uspaul666 » June 2nd, 2023, 9:25 am

Could have been worse consequences...
An AI-powered drone designed to identify and destroy surface-to-air missile sites decided to kill its human operator in simulation tests, according to the US Air Force's Chief of AI Test and Operations.
https://www.theregister.com/2023/06/02/ ... imulation/
https://www.aerosociety.com/news/highli ... es-summit/

88V8
Lemon Half
Posts: 5844
Joined: November 4th, 2016, 11:22 am
Has thanked: 4199 times
Been thanked: 2603 times

Re: Oops!

#592650

Postby 88V8 » June 2nd, 2023, 9:50 am

uspaul666 wrote:Could have been worse consequences...
An AI-powered drone designed to identify and destroy surface-to-air missile sites decided to kill its human operator in simulation tests, according to the US Air Force's Chief of AI Test and Operations.
https://www.theregister.com/2023/06/02/ ... imulation/

:shock:

It's not clear exactly what software the US Air Force was testing, but it sounds suspiciously like a reinforcement learning system. That machine-learning technique trains agents – the AI drone in this case – to achieve a specific task by rewarding it when it carries out actions that fulfill goals and punishing it when it strays from that job.

That sounds like the way one trains a cat. Or tries to. I've always found a water pistol a good learning tool where cats are concerned, but I might be outgunned in this case.

V8
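The reward-and-punish loop the quoted article describes is, in essence, reinforcement learning of the tabular Q-learning sort. A toy version on a five-cell corridor (reward at one end, penalty at the other) shows the mechanics - and the catch: the agent learns whatever maximises reward, not what we meant by it:

```python
import random

random.seed(0)
N = 5                      # states 0..4; reaching state 4 pays off, state 0 punishes
Q = {(s, a): 0.0 for s in range(N) for a in (-1, 1)}
alpha, gamma, eps = 0.5, 0.9, 0.2   # learning rate, discount, exploration rate

for _ in range(500):                # training episodes
    s = 2
    while s not in (0, N - 1):
        # Mostly act greedily, occasionally explore.
        if random.random() < eps:
            a = random.choice((-1, 1))
        else:
            a = max((-1, 1), key=lambda act: Q[(s, act)])
        s2 = s + a
        r = 1.0 if s2 == N - 1 else (-1.0 if s2 == 0 else 0.0)
        best_next = 0.0 if s2 in (0, N - 1) else max(Q[(s2, -1)], Q[(s2, 1)])
        # Reward good moves, punish bad ones: nudge Q towards r + discounted future.
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# Greedy policy after training: head for the reward from every interior state.
policy = {s: max((-1, 1), key=lambda act: Q[(s, act)]) for s in range(1, N - 1)}
print(policy)
```

The alignment worry in the drone story is exactly that the reward function, not the designer's intent, is what gets optimised.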

stewamax
Lemon Quarter
Posts: 2464
Joined: November 7th, 2016, 2:40 pm
Has thanked: 84 times
Been thanked: 810 times

Re: Oops!

#592808

Postby stewamax » June 2nd, 2023, 5:20 pm

servodude wrote:We used to have to refer to them as ANNs (Artificial Neural Nets) lest we be marked down
They loosely follow the connectivity pattern found in the brain and have been researched (mathematically) for decades.
The most interesting stuff recently is in the cascading of them to create larger targeted topologies; effectively ignoring that the maths proves a single hidden layer is sufficient.
I find them quite good fun in a frustrating "black box" kind of a way - having to empirically validate the behaviour of code being a bit of a poor programming method usually
C.A.R. Hoare wrote:"There are two methods in software design. One is to make the program so simple, there are obviously no errors. The other is to make it so complicated, there are no obvious errors."

Tony Hoare was one of the mathematical cohort who then believed that formal proofs of the reliability of programming code were the future. Given the advances in AI, he might yet be proved right: real-world systems worth studying are far too large to be validated formally 'by hand', but proof by AI may be around the corner. Given the subject of this thread, what may be wanted is not "Yes - 100% correct" (would this be believed?) but "There is an error just here".

And sheer nostalgia, triggered by memories of writing my first programs on an Elliot 803 in ALGOL 60: Tony and his wife Jill led the writing of the compiler, and I used one of its early implementations.

XFool
The full Lemon
Posts: 12636
Joined: November 8th, 2016, 7:21 pm
Been thanked: 2609 times

Re: Oops!

#592840

Postby XFool » June 2nd, 2023, 7:13 pm

servodude wrote:We used to have to refer to them as ANNs (Artificial Neural Nets) lest we be marked down
They loosely follow the connectivity pattern found in the brain and have been researched (mathematically) for decades.
The most interesting stuff recently is in the cascading of them to create larger targeted topologies; effectively ignoring that the maths proves a single hidden layer is sufficient.

Has anyone ever looked into the possible effects of employing feedback in NNs? My impression is that they are all feed-forward - possibly I am mistaken here.

servodude
Lemon Half
Posts: 8416
Joined: November 8th, 2016, 5:56 am
Has thanked: 4490 times
Been thanked: 3621 times

Re: Oops!

#592886

Postby servodude » June 3rd, 2023, 12:58 am

XFool wrote:
servodude wrote:We used to have to refer to them as ANNs (Artificial Neural Nets) lest we be marked down
They loosely follow the connectivity pattern found in the brain and have been researched (mathematically) for decades.
The most interesting stuff recently is in the cascading of them to create larger targeted topologies; effectively ignoring that the maths proves a single hidden layer is sufficient.

Has anyone ever looked into the possible effects of employing feedback in NNs? My impression is they are all feed forward - possibly I am mistaken here.


There are numerous classical architectures for neural networks - the recurrent ones certainly have feedback.
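The feedback in a recurrent net is simply the hidden state being fed back in at the next step. A minimal sketch (toy hand-picked weights, no training):

```python
import math

def rnn_step(x, h, w_in=0.8, w_rec=0.5):
    """One recurrent step: the new hidden state depends on the current input
    AND the previous hidden state - that loop is the feedback."""
    return math.tanh(w_in * x + w_rec * h)

h = 0.0
for x in (1.0, 0.0, 0.0, 0.0):   # a single pulse, then silence
    h = rnn_step(x, h)
    print(round(h, 3))
```

A feed-forward net forgets each input as soon as it has passed through; here the initial pulse still influences the hidden state several steps later (decaying, but nonzero), which is what gives recurrent nets their memory of sequences.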

UncleEbenezer
The full Lemon
Posts: 10816
Joined: November 4th, 2016, 8:17 pm
Has thanked: 1472 times
Been thanked: 3006 times

Re: Oops!

#592989

Postby UncleEbenezer » June 3rd, 2023, 3:09 pm

uspaul666 wrote:Could have been worse consequences...
An AI-powered drone designed to identify and destroy surface-to-air missile sites decided to kill its human operator in simulation tests, according to the US Air Force's Chief of AI Test and Operations.

Charles Causley wrote a jolly poem on the subject more than half a century ago.

In my first job after graduating in the 1980s - at a company that did contract work like modelling and simulation for the MoD - there were stories of real-life trials of state-of-the-art smart weapons systems floating around. Of course official secrecy made it impossible to verify for certain that the most advanced torpedo had set off on a course 180 degrees away from its supposed target, but ...

XFool
The full Lemon
Posts: 12636
Joined: November 8th, 2016, 7:21 pm
Been thanked: 2609 times

Re: Oops!

#592993

Postby XFool » June 3rd, 2023, 3:38 pm

Remember this?

That time the Australian Air Force squared off against missile-shooting kangaroos

https://www.wearethemighty.com/mighty-history/australian-air-force-vs-kangaroos/

Clitheroekid
Lemon Quarter
Posts: 2874
Joined: November 6th, 2016, 9:58 pm
Has thanked: 1390 times
Been thanked: 3806 times

Re: Oops!

#593241

Postby Clitheroekid » June 4th, 2023, 9:53 pm

uspaul666 wrote:Could have been worse consequences...
An AI-powered drone designed to identify and destroy surface-to-air missile sites decided to kill its human operator in simulation tests, according to the US Air Force's Chief of AI Test and Operations.
https://www.theregister.com/2023/06/02/ ... imulation/
https://www.aerosociety.com/news/highli ... es-summit/

Now appended with the following:

Final update at 1800 UTC, June 2
After quite a bit of media attention, the colonel has walked back all that talk of a rogue AI drone simulation, saying he "mis-spoke," and that the experiment never happened. We're told it was just a hypothetical "thought experiment."

"We've never run that experiment, nor would we need to in order to realize that this is a plausible outcome," Col Hamilton said in a statement.

"Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability and is why the Air Force is committed to the ethical development of AI."

The US Air Force has also denied the described simulation ever took place. What a mess.


Thereby confirming that Artificial Intelligence is considerably superior to military intelligence! ;)

gryffron
Lemon Quarter
Posts: 3640
Joined: November 4th, 2016, 10:00 am
Has thanked: 557 times
Been thanked: 1616 times

Re: Oops!

#593442

Postby gryffron » June 5th, 2023, 11:10 pm

Different story but: US marines thwart AI opponent by hiding in cardboard box.

https://mynbc15.com/news/nation-world/u ... cts-agency

Gryff

