
AI is creating real opportunities for small and midsized law firms, including faster workflows, efficiency gains, sharper insights and innovative new capabilities. As clients become more sophisticated in their legal service requirements, the competitive upside is hard to ignore.

However, rushing to adopt AI can push firms into dangerous territory if they're not careful. Without a thoughtfully planned, robust governance strategy for the data that fuels these systems, even the most promising AI initiative can quickly become a minefield of unanticipated risk, turning what should be a competitive advantage into a serious liability.

AI without data governance is a non-starter

The first mistake happens when firms start collating all available data for AI model training without first asking: is this trustworthy, high-quality data? Has it been carefully curated and vetted to support reliable responses?

By simply uploading decades of documents without regard for relevance or quality, firms are quickly setting their AI initiatives up for failure. AI output is only as good as the data it draws upon. Firms that take an undisciplined approach to their data are at risk of creating a “rubbish in, rubbish out” scenario, where their legal professionals are provided with misleading, outdated or wholly inaccurate answers.

The other area to consider is the type of data being leveraged for AI. If this data contains confidential or personally identifiable information that is subject to strict retention and disposal requirements, firms risk incurring the wrath of the regulators. Including information that has been stored beyond what is legally permitted would put them at risk of penalties from the Solicitors Regulation Authority and the negative publicity that comes with it.

Unfortunately, the risks don’t stop there. What happens if the AI tool selected by the firm requires data to be extracted from the system of record (such as a document management system that has all the security and governance guardrails in place) and moved into a separate storage location to perform any required processing? Firms could end up with multiple copies of their data scattered across multiple locations without centralized visibility, which is a problem in itself.

If data is required to stay within a specific geography or jurisdiction, it must remain in place to avoid the risk of triggering a cross-border data transfer restricted by GDPR. Additionally, the EU AI Act requires firms to provide transparency over AI systems based on their level of risk, to improve accountability and allow users to engage with AI tools correctly.

All in all, it’s quite the conundrum. So, how can SME firms take advantage of the benefits that AI provides without exposing themselves to undue risk around their data?

Creating a secure and governed foundation

The solution starts with governance — a well-planned framework that recognizes the importance of good data quality before AI is used to automate legal workflows.

Achieving good data quality requires firms to understand the relevance of their information — does it contain outdated data or files that should have been destroyed years ago? — and take action to ensure data cleanliness. This is not a ‘one-off’ curation exercise. Good governance requires regular monitoring, with clear retention policies and disposition rules applied to all client content held by the firm.

Once data has been curated, it is important that any AI tools being evaluated work with data in place — ensuring that data never leaves the repository and the selected tool can process data where it resides, maintaining the foundational governance and protection offered by the system of record. Taken together, these steps give firms a stable foundation for whatever AI tools they choose to adopt next.

A clear answer to the data conundrum

Firms don’t need to fear AI — they just need a clear plan in place to manage the risks created by the intersection of AI and data. Successful use of this technology starts with the discipline that firms apply to the data that drives these tools. Build on governed, high-quality data and every AI initiative becomes safer and more effective. By extracting the real value of their data, firms can move from risk to results and better business outcomes.

This article was first published by Burlington Media in their LPM Risk Supplement in February 2026.


Manuel Sanchez is Information Security & Compliance Specialist at iManage with extensive professional experience in information security, governance, and compliance.
