ChatGPT: Lawyers say AI tricked them into citing bogus case law


NEW YORK –

Two apologetic lawyers, responding to an angry judge in Manhattan federal court on Thursday, blamed ChatGPT for tricking them into including fictitious legal research in a court filing.

Attorneys Steven A. Schwartz and Peter LoDuca are facing possible punishment over a filing in a lawsuit against an airline that included references to past court cases that Schwartz thought were real but were actually invented by the artificial intelligence-powered chatbot.

Schwartz explained that he used the groundbreaking program as he searched for legal precedents to support a client’s lawsuit against Colombian airline Avianca over an injury that occurred on a 2019 flight.

The chatbot, which has captivated the world by producing essay-like responses to user prompts, suggested several cases involving aviation mishaps that Schwartz had been unable to find through the usual methods used at his law firm.

The problem was that several of those cases weren’t real or involved airlines that don’t exist.

Schwartz told U.S. District Judge P. Kevin Castel that he was “operating on a misconception … that this website pulled these cases from a number of sources to which I did not have access.”

He said he had “failed miserably” to do follow-up research to make sure the citations were accurate.

“I didn’t understand that ChatGPT could create cases,” Schwartz said.

Microsoft has invested around $1 billion in OpenAI, the company behind ChatGPT.

Its success, demonstrating how artificial intelligence can change the way people work and learn, has generated fear among some. Hundreds of industry leaders signed a letter in May warning that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Judge Castel seemed both baffled and disturbed by the unusual occurrence, and disappointed that the attorneys did not act quickly to correct the bogus legal citations when they were first alerted to the problem by Avianca’s lawyers and the court. Avianca pointed out the bogus case law in a March filing.

The judge confronted Schwartz with one legal case invented by the computer program. It was initially described as a wrongful death case brought by a woman against an airline, only to morph into a legal claim about a man who missed a flight to New York and was forced to incur additional expenses.

“Can we agree that’s legal gibberish?” Castel asked.

Schwartz said he mistakenly thought the confusing presentation was due to excerpts drawn from different parts of the case.

When Castel finished his questioning, he asked Schwartz if he had anything else to say.

“I want to sincerely apologize,” Schwartz said.

He added that he had suffered personal and professional consequences as a result of the mistake and felt “shame, humiliation and profound regret.”

He said that he and the firm where he works, Levidow, Levidow & Oberman, have put safeguards in place to ensure nothing like this happens again.

LoDuca, another attorney involved in the case, said he trusted Schwartz and did not adequately review what he had compiled.

After the judge read aloud portions of one cited case to show how easy it was to see that it was “nonsense,” LoDuca said: “I never realized that this was a bogus case.”

He said the results “hurt me terribly.”

Ronald Minkoff, an attorney for the law firm, told the judge the filing “was the result of carelessness, not ill will” and should not lead to sanctions.

He said attorneys have historically struggled with technology, particularly new technology, “and it’s not getting easier.”

“Mr. Schwartz, who rarely does federal research, chose to use this new technology. He thought he was dealing with a standard search engine,” Minkoff said. “What he was doing was playing with live ammo.”

Daniel Shin, assistant professor and assistant director of research at the Center for Legal and Court Technology at William & Mary Law School, said he introduced the Avianca case during a conference last week that attracted dozens of participants in person and online from state and federal courts in the United States, including Manhattan federal court.

He said the topic caused shock and confusion at the conference.

“We’re talking about the Southern District of New York, the federal district that handles big cases, from 9/11 to all the major financial crimes,” Shin said. “This is the first documented case of potential professional misconduct by an attorney using generative AI.”

He said the case demonstrated that lawyers may not understand how ChatGPT works because it tends to hallucinate, describing fictional things in a way that sounds realistic but isn’t.

“It highlights the dangers of using promising AI technologies without knowing the risks,” Shin said.

The judge said he would rule on sanctions at a later date.
