The Same Legal AI Tool Can Be Both Prohibited and Authorized
Mike OSS landed with a thousand GitHub stars in 72 hours. The takes have mostly missed what makes it interesting.
Late last month, a former Latham & Watkins attorney named Will Chen released Mike OSS, an open-source legal AI platform, on GitHub. Chen reports it drew more than a thousand stars in its first 72 hours, and it has been the subject of substantial discussion in legal tech circles since. The pitch is simple. Mike covers the core functions of Harvey and Legora, is AGPL-3.0 licensed, is self-hostable, and is free beyond whatever your firm pays Anthropic or Google for API tokens.
The reaction has been predictable. Big-firm partners are asking what they have been paying enterprise vendors hundreds of thousands of dollars a year for. Small-firm attorneys see a path to feature parity at a price they can actually carry. Across legal tech commentary, the take has mostly been some version of “this changes everything.”
It might. But not for the reason most people are saying.
Mike OSS is the most interesting test case I have worked through this year because, when you run it through a serious tool-evaluation framework, the same tool produces wildly different governance outcomes depending on how the firm deploys it. The same product, in three different configurations, scores anywhere from Prohibited to Conditionally Authorized.
That kind of variance is rare. Most AI tools have a fixed risk profile. You evaluate Westlaw Precision once and it scores the same for every firm. Mike does not work that way. And small firms, the ones Chen explicitly built it for, are the least likely to know which configuration they are actually running.
Here is what the four-factor evaluation framework reveals.
Factor 1: Data Confidentiality
The first factor is also the only one that can stop an evaluation cold. It asks where firm data goes when the tool processes it. Mike OSS has three realistic deployment modes, and they do not score the same.
Hard Fail
The mikeoss.com demo site routes everything through infrastructure Chen controls. Chen has said publicly that the site is meant as a demo, not a production environment, and that self-hosting is the intended path for any real firm use. Uploading client documents to the demo would be a hard fail under any serious framework.
Marginal Pass
A self-hosted instance running against a standard Anthropic or Google consumer API key still sends prompt content and document text to the LLM provider under standard consumer terms. No zero-retention guarantee, no data processing agreement. And depending on whether the firm uses Mike's default Supabase backend or fully self-hosts the supporting stack, document storage may or may not actually sit on firm-controlled infrastructure.
Approaches Standard
A self-hosted instance running against a negotiated enterprise zero-retention API contract is meaningfully different. It approaches the standard most firms would expect from an enterprise legal AI vendor.
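The three modes above reduce to a simple decision rule. What follows is a hypothetical sketch of that rule, not code from the Mike OSS project; the field names and outcome labels are my own shorthand for the configurations described above.

```python
from dataclasses import dataclass

@dataclass
class Deployment:
    self_hosted: bool               # running on firm-controlled infrastructure?
    zero_retention_contract: bool   # negotiated enterprise zero-retention API terms?
    firm_controlled_storage: bool   # documents on firm infrastructure, not a default hosted backend?

def factor1_outcome(d: Deployment) -> str:
    """Score Factor 1 (data confidentiality) for one deployment configuration."""
    if not d.self_hosted:
        # Demo-site mode: client data transits third-party infrastructure.
        return "HARD FAIL"
    if d.zero_retention_contract and d.firm_controlled_storage:
        return "APPROACHES STANDARD"
    # Self-hosted, but consumer API terms or hosted document storage.
    return "MARGINAL PASS"
```

The point of writing it down this way is that a firm can answer three yes/no questions and know which row of the analysis it actually occupies.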
Three configurations of the same tool. Three different confidentiality scores. The recent United States v. Heppner decision out of the Southern District of New York held that routing communications through a public AI chatbot can waive attorney-client privilege. The configuration question is not academic.
Factor 2: Tool-Design Ethics Compliance
ABA Formal Opinion 512 is the operative national authority. It requires lawyers using AI tools to maintain reasonable understanding of the tool's capabilities, limitations, and data handling.
Mike's documentation is thin. The README is essentially a setup guide. There is no architecture documentation, no explanation of how the citation engine works under the hood, no published security audit. For an open-source project asking law firms to trust it with confidential documents, that is a gap.
The AGPL-3.0 license also creates a downstream obligation small firms rarely think about. If you modify Mike and let clients interact with the modified version over a network, say through a client-facing portal, the AGPL's network clause requires you to offer the source code of those modifications. That is a business-terms issue, not a confidentiality issue, but it matters.
Factor 3: Practice Area Fit
Mike is genuinely strong at document review, contract analysis, and tabular extraction with per-cell citations back to verbatim source quotes. Chen made some good design choices.
It does not do legal research. There is no Westlaw or Lexis integration, no fine-tuned legal model, no jurisdiction-specific reasoning, no statutory currency checking. For a personal injury or family law practice where legal research and statutory currency are core daily work, this gap is significant. For a transactional or diligence-heavy practice where contract review and structured extraction dominate, fit is solid.
The same tool. Different fit profiles depending on what the firm actually does.
Factor 4: Cost Relative to Firm Size
The headline price is zero. The total cost of ownership is not.
Self-hosting requires servers, deployment, security patching, and ongoing maintenance. A one-to-three attorney firm with no IT staff is going to need a contractor or managed hosting service to deploy Mike securely. Add LLM token costs, which scale with usage. The realistic monthly cost for a small firm running Mike well is somewhere in the hundreds, not zero.
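The arithmetic is worth making explicit. Every figure below is an illustrative assumption, not a quote or a benchmark, but the shape of the sum is the point: the line items exist whether or not the firm budgets for them.

```python
# Illustrative monthly total-cost-of-ownership for a small firm self-hosting.
# All figures are assumptions for illustration only.
managed_hosting = 150   # managed VPS or hosting service, per month
it_contractor   = 300   # fractional IT support: patching, backups, monitoring
llm_tokens      = 120   # API token spend; scales with usage

monthly_tco = managed_hosting + it_contractor + llm_tokens
print(monthly_tco)  # 570 -- hundreds per month, not zero
```

Swap in your own numbers; the conclusion that "free" software carries a three-figure monthly carry is robust to most reasonable substitutions.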
That is still well below Harvey or Legora pricing. But the firm that thinks "free" and skips the security and maintenance work has built a governance failure into the deployment from day one. The cheap path fails Factor 1. The compliant path costs real money. Either position is manageable. Confusing the two is not.
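The gating structure of the framework, where Factor 1 can stop the evaluation cold and the remaining factors condition the outcome, can also be sketched. This is my own hypothetical rendering of the logic described in this article; the outcome labels beyond Prohibited and Conditionally Authorized are mine.

```python
def evaluate(confidentiality: str, ethics_ok: bool, fit_ok: bool, cost_ok: bool) -> str:
    """Four-factor gate: Factor 1 is dispositive; Factors 2-4 condition the outcome."""
    if confidentiality == "HARD FAIL":
        return "PROHIBITED"  # evaluation stops here; no other factor can rescue it
    if confidentiality == "APPROACHES STANDARD" and all([ethics_ok, fit_ok, cost_ok]):
        return "CONDITIONALLY AUTHORIZED"
    # Marginal confidentiality or an unresolved factor: fixable, but not yet defensible.
    return "NEEDS REMEDIATION"
```

Run the same tool through this gate under three configurations and you get three different answers, which is exactly the variance described above.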
The Takeaway for Small Firms
Mike OSS is not Harvey. It is also not snake oil. It is a serious tool that, in the right configuration with the right enterprise API terms and the right IT support, may be a defensible choice for a small firm doing transactional or diligence work.
It is also a tool that, deployed naively against a personal API key on a laptop with no security model, fails confidentiality on day one and exposes the firm to exactly the kind of malpractice risk that the AI sanctions cases of the last two years have put on display.
The framework matters more than the tool. And the gap between “I read about Mike” and “I have a defensible governance position on Mike” is exactly where solo and small-firm AI governance lives right now.
If your firm is evaluating Mike, or any other AI tool, and you do not have a structured way to answer the four questions above with documented evidence, that is worth fixing. Before the next ethics audit. Before the next client question. Before the next motion to compel disclosure of AI use.
The four-factor framework is publicly available. Working through it on a real tool, in a real firm, with documentation that holds up under scrutiny, is what governance actually looks like.
This article is for informational purposes only and does not constitute legal advice. Consult qualified counsel for guidance specific to your situation.
JDAI helps law firms develop AI governance frameworks: from policy drafting and tool evaluation through attorney training and ongoing compliance support.
Take the AI Readiness Assessment
Schedule a Consultation