Half of Legal Teams Are Poised to Close the Agentic AI Visibility Gap, Icertis Survey Finds

New Research from 1,000+ In-House Legal Professionals Points to Governance, Oversight, Self-Auditing, and Contract Intelligence as Critical Capabilities

Nearly half (47 percent) of in-house legal teams say they would not detect an unauthorized or incorrect AI action until after it had occurred – sometimes days or weeks later – according to new research from Icertis. The AI-native contract intelligence leader surveyed more than 1,000 U.S. corporate legal practitioners to reveal a profession in transition: While nearly 60 percent of legal teams say they are prepared to govern AI agents, many currently lack the clarity and control to monitor autonomous systems.

Autonomous AI is rewriting the rules of the enterprise, but companies first need to build the operational infrastructure to govern this new wave of innovation. Legal teams leading from the front recognize three critical capabilities – governance frameworks that set boundaries; human oversight and accountability that keep people in command of high-stakes decisions; and self-auditing AI that monitors its own actions in real time. Contracts play an integral role as an intelligence layer for automation, providing AI agents with business relationship context to act on operational requirements for the organization.

The survey responses underscore a technology landscape that is poised for autonomy. Key findings include:

  • Autonomy is no longer theoretical. For nearly 10 percent of legal professionals, human review of AI activity is already the exception – raising questions about the guardrails governing those agents. The largest share (46 percent) primarily use AI as an assistive tool that does not act autonomously, and nearly 1 in 4 (23 percent) report that AI occasionally handles tasks autonomously, with humans in the loop.
  • AI is acting, and legal doesn’t always know. While 40 percent of legal teams are confident they have real-time visibility into their AI agents’ actions, an equal percentage say they would only catch a substantive legal error in AI output after the fact. Self-auditing AI, live monitoring, and reporting capabilities are the foundation for any agentic AI workflow.
  • Accuracy and trust remain obstacles. Only 26 percent of legal professionals are very confident that the AI their team uses is accurate enough for high-stakes decisions across the business. Nearly 50 percent say they must apply human judgment before trusting what AI produces. That’s why systems like contract intelligence must be designed for instantaneous visibility – so humans can validate AI results in time to act on them.
  • Legal AI is operating in silos. Nearly a quarter (23 percent) say their legal AI tools operate in full isolation from other systems. Connected AI systems, like contract intelligence that continuously audits and validates outputs against business and regulatory context, lay the foundation for greater autonomy, speed, and insight.
  • When AI makes mistakes, ownership varies. When asked who is accountable for errant AI, respondents were split on where responsibility lands: 23 percent pointed to the team deploying the agent – those responsible for selecting, configuring, and releasing it for use; 23 percent said the team that manages the agent – those overseeing its day-to-day operation and performance; and 22 percent said it was scenario dependent. Accountability varies across organizations because of different tools, policy structures, risk tolerances, and operating models. In any scenario, effective governance starts with effective policies, and contracts are where those policies become enforceable.

“The Icertis survey shows that the speed of AI innovation is outpacing the governance meant to oversee it – and legal is feeling this pressure on two fronts: with the use of AI agents in their own department, and through increasing usage by other functions,” said Bernadette Bulacan, Chief Evangelist, Icertis. “Contract intelligence enables legal teams to confidently use agents in their own processes while also ensuring agents across the enterprise have the context required to execute with accuracy, precision, and accountability.”

Icertis delivers an AI-native contract intelligence layer that connects agreements, data, and systems – giving legal teams the visibility and control they need to manage risk and drive autonomous action at scale. Powered by Vera and trained on 17 million contracts, Icertis delivers unmatched accuracy through deep contextual understanding of the relationships captured in every commercial agreement.

Read the full report to learn more.

About Icertis

Icertis is the AI-native contract intelligence company that turns enterprise strategy into faster execution at scale. Powered by Vera, the Icertis platform delivers an enterprise-wide contract intelligence layer that understands business and industry context – connecting agreements, data, and systems to drive the future of autonomous contracting.

Contacts