LinkSquares Blog

The Biggest Risk in 2026: AI ComplAIcency and How to Fight It

Written by Jonathan Greenblatt | Apr 02, 2026

Trapped in a driverless car blocking a firetruck...

Last year I was in San Francisco for a conference, and while there I was excited to take my first autonomous vehicle ride. So at 7am I got into the passenger side of a Waymo outside the conference hotel, headed to the University of San Francisco gym for an early morning swim (side note: the rooftop lap pool offers sweeping views of the SF skyline and is open to the public). As we drove down 4th Street, I was taking video of the experience for the folks back home who hadn’t experienced the future. Everything seemed fine until we came to a stoplight.

Out of nowhere, a firetruck came barreling down the road, and the Waymo…did nothing. It didn’t know how to handle the situation. There was no way for the firetruck to get around the car. The Waymo needed to pull over to the right but was at a loss to do so. The firetruck, appropriately, laid on the horn. And there I was in a car with no driver, sweating profusely, with seemingly no way to affect the situation. Fortunately, I noticed an emergency pull-over button, and the car compliantly pulled over, allowing the firetruck to pass. After catching my breath, I called support to report the situation and continued the last two-tenths of a mile to the gym. Afterward, I walked back to the hotel.

This autonomous vehicle ride was, for me, emblematic of where AI is today. It certainly has its place in making work higher quality, more efficient, and in some cases autonomous. But autonomous vehicles have a long way to go before they are ubiquitous, driving on highways, and completely self-controlled.

That doesn’t mean don’t use AI. It means put guardrails in place, test rigorously, understand where the technology works well and where it doesn’t, and use AI with your risk tolerance for errors in mind.

Testing

For legal professionals, the promise of AI is compelling: faster contract review, automated research, predictive analytics, and document generation. But as my Waymo experience illustrates, AI systems can fail in unexpected ways—and in legal practice, those failures can have serious consequences. The key is rigorous testing before deployment.

Start with controlled testing environments. Before using AI for client work, run it through scenarios with known outcomes. For contract analysis tools, feed them agreements you've already reviewed manually. For legal research AI, compare its results against traditional research methods. Document the AI's accuracy rate, false positives, and false negatives. Pay special attention to edge cases—the legal equivalent of a firetruck barreling down the street.
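
To make that concrete, here is a minimal sketch in Python of what scoring a contract-review tool against manually reviewed agreements might look like. The ai_review function is a hypothetical stand-in for your vendor’s API (the keyword check only keeps the example runnable), and the test cases are illustrative:

```python
# A minimal benchmarking sketch. ai_review() is a hypothetical stand-in
# for your vendor's contract-analysis API; the keyword check only keeps
# the example runnable end to end.

def ai_review(contract_text: str) -> bool:
    """Placeholder clause detector: replace with the real AI tool's call."""
    return "limitation of liability" in contract_text.lower()

# Each case pairs contract text with the attorney's ground-truth finding:
# did the agreement actually contain the clause of interest?
test_cases = [
    ("...Limitation of Liability. Neither party shall be liable...", True),
    ("...This NDA governs confidential information only...", False),
    # ...more manually reviewed agreements
]

tp = fp = tn = fn = 0
for text, attorney_found in test_cases:
    predicted = ai_review(text)
    if predicted and attorney_found:
        tp += 1
    elif predicted and not attorney_found:
        fp += 1  # AI flagged a clause that isn't there
    elif not predicted and attorney_found:
        fn += 1  # AI missed a clause the attorney found
    else:
        tn += 1

print(f"accuracy={(tp + tn) / len(test_cases):.0%} "
      f"false_positives={fp} false_negatives={fn}")
```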

Testing should be ongoing, not a one-time event. AI models can drift over time, and new use cases may reveal limitations. Establish a feedback loop where attorneys flag AI errors or unexpected outputs. Track these incidents to identify patterns. Is the AI consistently missing certain clause types? Does it struggle with particular jurisdictions or practice areas? This data informs both your guardrails and your decisions about where AI adds value versus where human judgment remains essential.
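
A feedback loop doesn’t need to be elaborate. A simple tally of flagged errors, sketched below with illustrative field names, is often enough to surface the patterns described above:

```python
# Sketch of the feedback loop: attorneys flag AI errors, and a simple
# tally shows where they cluster. Field names are illustrative.
from collections import Counter

flagged_errors = [
    {"clause_type": "indemnification", "jurisdiction": "DE"},
    {"clause_type": "indemnification", "jurisdiction": "NY"},
    {"clause_type": "auto-renewal",    "jurisdiction": "CA"},
]

by_clause = Counter(e["clause_type"] for e in flagged_errors)
by_state = Counter(e["jurisdiction"] for e in flagged_errors)
print("Errors by clause type:", by_clause.most_common())
print("Errors by jurisdiction:", by_state.most_common())
```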

Guardrails: Your Emergency Pull-Over Button

Just as I needed that emergency button in the Waymo, your AI implementation needs guardrails—mechanisms to catch errors, enable human intervention, and prevent AI from operating beyond its competence.

The first guardrail is human review. AI should augment attorney judgment, not replace it. For high-stakes work—litigation strategy, complex negotiations, novel legal questions—AI outputs should always be reviewed by experienced counsel. Consider implementing a tiered review system: junior attorneys review AI-assisted routine work, while senior attorneys review AI outputs for complex matters.

Second, establish clear boundaries for AI use. Define which tasks are appropriate for AI assistance and which require purely human analysis. Document these boundaries in your firm's policies. For example, AI might draft initial contract language, but final client advice on regulatory compliance requires attorney analysis. Create approval workflows that enforce these boundaries—technical guardrails that prevent AI from being used outside approved contexts.
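
One way such a technical guardrail might look in practice is a simple allowlist check before any AI task runs. The task names and policy below are hypothetical examples, not a prescribed taxonomy:

```python
# Hypothetical guardrail: an allowlist check that blocks AI use outside
# approved contexts. Task names and policy here are examples only.
APPROVED_AI_TASKS = {
    "draft_contract_language",
    "summarize_research",
    "categorize_discovery_docs",
}

def request_ai_assistance(task: str) -> None:
    if task not in APPROVED_AI_TASKS:
        raise PermissionError(
            f"'{task}' is outside approved AI contexts; route to an attorney."
        )
    print(f"AI assistance approved for: {task}")

request_ai_assistance("draft_contract_language")  # allowed
# request_ai_assistance("regulatory_compliance_advice")  # raises PermissionError
```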

Third, implement confidence thresholds and uncertainty flags. Many AI systems can indicate their confidence level in outputs. Set thresholds below which outputs automatically trigger human review. Train your team to recognize AI uncertainty signals—hedging language, multiple conflicting suggestions, or gaps in analysis. These are your AI's way of saying "I don't know how to handle this situation."
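
Here is a rough sketch of what threshold-based routing could look like. The 0.85 cutoff is an assumption you would calibrate against your own testing data, and the output format is purely illustrative:

```python
# Sketch of threshold-based routing. The 0.85 cutoff is an assumption
# to calibrate against your own testing; the output shape is illustrative.
REVIEW_THRESHOLD = 0.85

def route_output(ai_output: str, confidence: float) -> str:
    if confidence < REVIEW_THRESHOLD:
        return f"HUMAN REVIEW REQUIRED ({confidence:.2f}): {ai_output}"
    return f"Spot-check queue ({confidence:.2f}): {ai_output}"

print(route_output("Clause 7 caps liability at fees paid.", 0.92))
print(route_output("Governing law is unclear; possibly NY or DE.", 0.41))
```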

Finally, maintain audit trails. Document when AI was used, what outputs it generated, and what human review occurred. This serves multiple purposes: quality control, professional responsibility compliance, and continuous improvement of your AI systems. If an error occurs, you need to understand what happened and why.
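
An audit trail can start as simply as an append-only log with one record per AI use. The sketch below uses an illustrative schema; adapt the fields to your firm’s policies:

```python
# Sketch of an append-only audit trail, one JSON record per AI use.
# The schema is illustrative; adapt the fields to your firm's policies.
import datetime
import json

def log_ai_use(matter_id: str, task: str, output_summary: str,
               reviewer: str, path: str = "ai_audit_log.jsonl") -> None:
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "matter_id": matter_id,
        "task": task,
        "output_summary": output_summary,
        "human_reviewer": reviewer,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

log_ai_use("2026-0042", "contract_analysis",
           "Flagged 3 auto-renewal clauses", reviewer="jdoe")
```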

Risk Assessment: When to Defer to AI (and When Not To)

Not all legal tasks carry the same risk, and your approach to AI should reflect that reality. The question isn't whether to use AI—it's when to defer decision-making to AI versus when to keep humans firmly in control.

Start by categorizing tasks by risk level. Low-risk tasks might include initial document review, legal research summaries, or drafting routine correspondence. Medium-risk tasks could involve contract analysis, due diligence checklists, or discovery document categorization. High-risk tasks include litigation strategy, regulatory compliance advice, ethical decisions, and anything involving novel legal questions or significant client exposure.

For low-risk tasks, AI can operate with lighter human oversight—spot-checking rather than comprehensive review. These are your highway driving scenarios where AI performs reliably. Medium-risk tasks require structured human review of AI outputs before they're finalized or relied upon. High-risk tasks should never be fully deferred to AI. Here, AI serves as a research assistant or draft generator, but human judgment drives the decision-making.
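
The tiered model above lends itself to a simple lookup: each task maps to a risk tier, and each tier to a review requirement. The assignments below are examples drawn from this section, not a definitive taxonomy:

```python
# Sketch of the tiered-oversight model: task -> risk tier -> review
# requirement. Assignments are examples from this section, not a
# definitive taxonomy.
RISK_TIER = {
    "initial_document_review": "low",
    "contract_analysis": "medium",
    "litigation_strategy": "high",
}
OVERSIGHT = {
    "low": "spot-check a sample of AI outputs",
    "medium": "structured attorney review before reliance",
    "high": "AI assists only; an attorney owns the analysis",
}

for task, tier in RISK_TIER.items():
    print(f"{task}: {tier} risk -> {OVERSIGHT[tier]}")
```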

Consider also the reversibility of decisions. Can an error be easily corrected, or would it have lasting consequences? AI-generated meeting notes that can be revised? Lower risk. AI-drafted motion that gets filed with the court? Higher risk, requiring careful review. AI analysis that informs a client's major business decision? Highest risk—human attorneys must own that analysis.

Your risk tolerance should also account for your professional responsibilities. Model Rule 1.1 requires competent representation, including understanding the benefits and risks of relevant technology. Model Rule 5.3 requires reasonable efforts to ensure that nonlawyer assistants' conduct is compatible with professional obligations—and AI is the ultimate nonlawyer assistant. You can't defer your professional judgment to an algorithm any more than you could defer it to a paralegal without supervision.

The biggest risk in 2026 isn't AI itself—it's complacency about AI. It's assuming the technology is more capable than it is, skipping testing because it seems to work, or removing guardrails because they slow things down. My Waymo couldn't handle an emergency vehicle, but at least it had an emergency button. Make sure your AI implementations have the equivalent: robust testing, clear guardrails, and thoughtful risk assessment that keeps human judgment where it belongs—in control of the decisions that matter.