Lawyer led astray by ChatGPT apologises to court
An experienced lawyer who was misled by ChatGPT has apologised after citing several non-existent cases invented by the AI-powered tool in a legal filing.
US attorney Steven A. Schwartz, who has practised in New York for three decades, said he was “unaware of the possibility that [ChatGPT’s] content could be false” when he relied on it in his legal research, The New York Times reports.
The embarrassing mistake arose in a personal injury case brought against an airline. Lawyers for the plaintiff consulted ChatGPT to find case law to respond to the airline’s argument that the statute of limitations had expired.
The result was a brief, filed with the court by one of Mr Schwartz’s colleagues, that cited six cases which do not exist, alongside some which do.
Judge P. Kevin Castel, presiding over the case, said it was “unprecedented” to receive a submission containing “bogus judicial decisions, with bogus quotes and bogus internal citations”.
Peter LoDuca, the named lawyer in the case, has said he had “no reason to doubt the sincerity” of the research conducted by his colleague, Mr Schwartz.
Meanwhile, Mr Schwartz has provided screenshots of his “consultation” with ChatGPT, showing that he asked the chatbot to confirm the cases it mentioned were real, and that it said they were.
A hearing has been scheduled for Thursday 8 June to discuss possible sanctions against the lawyers.