57 of 63: The Greg Lake Suspension and the End of the ‘Bad Lawyer’ Defense
Two weeks after Nebraska indefinitely suspended attorney Greg Lake for AI hallucinations and concealment, Sullivan & Cromwell apologized for the same pattern. The “bad lawyer” defense is gone. Eight questions every small firm needs to answer before the next filing.
Two weeks ago, the Nebraska Supreme Court indefinitely suspended attorney Greg Lake’s license to practice law.
Not fined. Not reprimanded. Not given a stern warning and sent back to his desk. Indefinitely suspended.
His offense: filing an appellate brief in which 57 of 63 citations were defective. Roughly 20 of them were completely fabricated, cases that never existed in any jurisdiction, generated by an AI tool that Lake apparently never verified.
But here is the part that should keep every small firm attorney awake tonight: the AI didn’t suspend Greg Lake. Greg Lake suspended Greg Lake.
The hallucinated citations were the trigger. The misrepresentation to the court about how the brief was created is what ended his career. He didn’t tell the court he used AI. When the errors surfaced, he didn’t come clean. He waited until two days before the suspension order to file an affidavit admitting what happened, calling it a “grave error of judgment.”
By then it was too late. The Nebraska Supreme Court wasn't punishing him for using AI. It was punishing him for the cascade of failures that followed: no verification, no supervision, no candor. Competence, diligence, and candor to the tribunal (Rules 1.1, 1.3, and 3.3), all gone in a single filing.
And his client, Jason Regan, is now facing $52,000 in opposing counsel’s fees. In a custody case. For his daughter. Those fees don’t get reversed because the lawyer got suspended. The governance failure rolled downhill to the person who could least afford it.
Before Sullivan & Cromwell, After Sullivan & Cromwell
For the past three years, every one of these stories had the same subtext: this only happens to bad lawyers. Sloppy lawyers. Young lawyers who don’t know better. Solo practitioners who cut corners because no one is watching.
That narrative died on April 23, 2026.
Sullivan & Cromwell, 900 attorneys, arguably the most prestigious corporate law firm in the world, sent an apology letter to a federal judge after opposing counsel caught 40-plus errors in an S&C court filing, including fabricated citations generated by AI. The co-head of S&C’s restructuring division wrote the letter. S&C’s public statement was telling: they have AI policies, but those policies “were not followed in the preparation of that particular document.”
Read that again. They have policies. The policies weren’t followed. On a filing reviewed by a partner at one of the most rigorous firms in legal practice.
We are now in an After Sullivan & Cromwell world. The “bad lawyer” defense is gone. If a firm with S&C’s resources, reputation, and internal controls can’t prevent this, then the problem isn’t bad lawyers. The problem is that most law firms, from nine-hundred-attorney institutions to solo practitioners, do not have operational governance infrastructure for AI use. They have tools. They may even have policies. What they don’t have is a system that ensures the policies are followed, the outputs are verified, the supervision is documented, and the client is protected when something goes wrong.
The Pattern No One Is Talking About
Every AI discipline case follows the same pattern. It is never just the hallucination.
- Greg Lake. Hallucinated citations + no verification + misrepresentation to the court. Indefinite suspension.
- Zachariah Crabill, Colorado. Hallucinated citations + discovered the errors before the hearing and said nothing + blamed a legal intern. Two-year suspension, 90 days served.
- Whiting v. City of Athens, Sixth Circuit. Hallucinated citations + continued submitting AI-generated filings after being warned. $30,000 in sanctions, case dismissed.
- Sullivan & Cromwell. Hallucinated citations + AI policies existed but weren’t followed + error caught by opposing counsel, not internal review. Public apology.
The AI generated the initial error. The lawyer's failure to verify compounded it. The failure to disclose or correct compounded it again. And the absence of any documented governance process (supervision protocols, verification workflows, client communication standards, incident response procedures) left no defense when the tribunal asked: what did you do to prevent this?
What The Bar Is Actually Saying
The legal profession has not been silent on this. ABA Formal Opinion 512, issued in 2024, states clearly that lawyers must ensure competent use of AI tools, maintain confidentiality of client data, supervise AI-assisted work product, and communicate with clients about AI use in their matters.
The Florida Bar issued Ethics Opinion 24-1 on the same themes. Multiple state bar associations have followed. The 11th Judicial Circuit (Miami-Dade) and the 17th Judicial Circuit (Broward) have issued mandatory AI disclosure orders with sanctions for noncompliance. A proposed amendment to the Florida Evidence Code would codify an AI-specific definition at Section 90.951(5).
The direction is unmistakable. The Bar is not asking whether lawyers should govern their AI use. It is telling them how, and building enforcement mechanisms for when they don’t.
But there is a gap between the Bar telling lawyers what their obligations are and lawyers knowing what to do about them on Monday morning. Opinion 512 says “ensure competent use.” It does not say how to build a verification workflow. It says “maintain confidentiality.” It does not say how to evaluate whether a specific AI tool’s data processing agreement actually protects client data. It says “supervise.” It does not say what a supervision protocol looks like for a four-attorney firm where the managing partner is also the only person reviewing AI-assisted work product.
That operational gap, between the obligation and the implementation, is where the next hundred Greg Lakes are going to come from.
The Question That Matters
The question for every small firm attorney reading this is not “am I using AI responsibly?” Almost everyone believes they are.
The question is this: if a tribunal asked me tomorrow to produce my AI governance framework, my verification protocols, my supervision documentation, my client disclosure records, my tool evaluation process, my training records, and my incident response plan, could I produce it?
If the answer is no, the question becomes: what are you going to do about it before the answer matters?
Eight Questions to Ask Before Your Next AI-Assisted Filing
Print this. Answer honestly. If you can’t answer “yes” to all eight, you have a governance gap.
- 1. Tool inventory. Can you name every AI tool anyone in your firm is using, including staff and paralegals? Not the tools you authorized. The tools actually in use.
- 2. Data confidentiality. For each tool on that list, do you know whether client data entered into the tool is used to train the model? Have you reviewed the data processing agreement yourself, or are you relying on the vendor’s marketing language?
- 3. Verification. Do you have a defined process for verifying AI-generated work product against primary sources before it goes into a filing, a letter, or a client communication? Is that process written down, or does it live in your head?
- 4. Supervision. If a paralegal or associate uses AI on a matter, how do you know? Is there a documentation step, or are you assuming you’ll catch it in review?
- 5. Client communication. Have you told your clients you use AI in their matters? Do you have a standard disclosure, or are you deciding case by case? If a client asked, could you show them your policy?
- 6. Training. Has everyone in your firm who touches AI received training on your expectations for its use? When was the last time? Is there a record?
- 7. Incident response. If you discovered tomorrow that a filing contained a hallucinated citation, what would you do in the first hour? Is that plan written down, or would you be figuring it out in real time?
- 8. Monitoring. When was the last time you reviewed whether your AI practices still match your obligations? Are you tracking changes in your state’s ethics opinions, court orders, and bar guidance on AI use?
Every “no” is a gap. Every gap is the space where the next Greg Lake, the next Zachariah Crabill, the next Sullivan & Cromwell apology letter comes from.
The good news: these are solvable problems. The bad news: they don’t solve themselves.
Attorney Review Disclosure: This article describes a general framework. Every firm’s implementation must be reviewed by licensed counsel for jurisdiction-specific ethics compliance. Rules cited are ABA Model Rules and Florida Rules of Professional Conduct; other jurisdictions may differ. Nothing in this article constitutes legal advice. The cases and regulatory developments cited in this article are current as of April 29, 2026.
If you can’t answer “yes” to all eight questions above, the JDAI AI Readiness Self-Assessment will identify your biggest governance gaps in three minutes.