
A group of lawyers received a $12,000 fine after a federal judge discovered they had used AI to create court documents containing false information. The judge described the AI-generated content as ‘hallucinated,’ meaning it presented inaccurate details as if they were facts.
Artificial intelligence is becoming more and more common in everyday life, with tools like ChatGPT, Grok, and Copilot being used by people all over the world. It’s also increasingly being adopted in workplaces, where it’s expected to help automate repetitive tasks and save time.
However, this technology has caused problems for many, particularly in the legal field. Several lawyers have filed documents with courts that were later proven false because the AI generated incorrect legal arguments and cited nonexistent precedents.
This has happened again in Kansas, where a federal judge issued significant fines after discovering that AI-generated material in court filings was inaccurate.
$12k fines handed out to lawyers over AI materials
According to Reuters, the judge fined the group of lawyers roughly $12,000 after they submitted completely fabricated legal cases, with the filings being described as 'hallucinated' material that presented nonexistent authorities as real.
The judge also issued a warning that applies to anyone relying on AI tools: any competent lawyer knows that using AI for legal research without verification is risky, and that signing off on a court filing without confirming its accuracy is improper. Attorneys remain responsible for everything they submit, regardless of how it was produced.
Sandeep Seth received the largest penalty, a $5,000 fine, which he described as a humbling experience. Kenneth Kula and Christopher Joe were each fined $3,000.
Another attorney, David Cooper, was fined $1,000 over citations in the court documents.

Judge Robinson noted the striking number of recent cases stemming from lawyers relying on unverified AI research, which frequently produces false or nonexistent legal precedents.
This is far from the only AI-related legal dispute making headlines: job seekers in Colorado are suing an AI company over a tool used to score their resumes, demanding to know what information the system holds about them and how it evaluates candidates.
2026-02-04 17:52