AI Meets Security: Is Your Organization Ready?
The rise of artificial intelligence (AI) has brought both opportunities and challenges for the legal sphere. As legal organizations consider integrating AI into their operations, they need to first review their security practices and procedures to make sure they’re ready to safely adopt this powerful new technology.
What policies and restrictions are in place?
An assessment of whether the organization is truly prepared for AI should involve a top-down evaluation by the leadership team, focusing on how introducing Generative AI tools will impact their security posture and whether they have the guardrails in place to guide usage and prevent any serious missteps.
For example: does the organization have policies in place around usage of Gen AI tools, or plans to provide sanctioned tools from trusted vendors? Do employees understand the difference between using a free Gen AI tool versus a paid version of the same tool, as well as the risk of uploading company data into a free tool? (Hint: there's no such thing as "free." Free tools typically store user-provided data, including personal data, and may use it to train the underlying model, meaning fragments of that data can later surface in responses to other users. That potential for privacy violations should be a non-starter for legal organizations.)
In a notable incident during the early days of ChatGPT, Samsung employees inadvertently leaked sensitive company data by using the tool to assist with tasks. Workers reportedly entered confidential information, such as source code and internal meeting notes, into the tool. Because ChatGPT could retain user inputs to improve its model, this data risked becoming part of its training set and surfacing in responses to other users, raising significant security concerns.
This case highlights the risk of "shadow AI," where employees use AI tools for work purposes without the knowledge or oversight of IT.
Does data stay secure and governed at all times?
One common misconception is that chatbots and large language models (LLMs) are the same thing. In reality, the chatbot is just the interface; the LLM serves as the processing engine, and that LLM is typically hosted somewhere in the cloud. In other words, when employees paste content into a chatbot, data is being fed out "into the wild."
This loss of control over data is why many organizations are seeking to develop their own LLMs, so that their data doesn't have to travel outside of their secure environment.
This is similar in approach to the way iManage Work leverages AI across its platform and with products like Ask iManage and Insight+: the AI services work with the documents and files that are securely hosted within the DMS, so that sensitive data never leaves the system and doesn’t travel outside of the organization.
What does this look like in action? Take Ask iManage as an example. Ask iManage is an AI-powered assistant native to iManage Work, built to enhance how professionals work with documents, emails, and content. The latest release of Ask iManage introduces a guided actions interface, making it even easier for users to leverage the benefits of generative AI without needing to be experts in crafting prompts.
Guided actions available today include Overview, to quickly see the main points of content; Extract, to grab exact text and data points from documents; Summarize, to generate summaries for specific topics within content; and Analyze, to check if content meets certain requirements.
Crucially, all these actions can be carried out without sensitive and confidential data ever leaving the DMS repository. The data remains fully secure and governed at all times.
iManage is the preferred DMS across ILTA members: 61% of ILTA members select iManage as their preferred Document Management System, according to the 2024 ILTA Technology Survey. The ability to layer AI on top of the valuable content they already have gives legal professionals a seamless way to improve productivity and drive additional value from knowledge assets, all while protecting against security risks.
As an additional benefit, this base level of control over data helps organizations build the accountability needed to comply with new and emerging regulations like the EU AI Act, which assigns responsibility for the safe and ethical use of deployed AI systems. Building accountability from the start will prepare organizations for any future regulatory changes.
Explore AI and security at ILTA Evolve
At iManage, we believe the path to successful AI integration lies in assessing your organizational readiness, understanding the technology, and planning adoption in a way that doesn’t take on unnecessary risk, but instead unlocks strategic potential.
We will be at booth 13 at ILTA Evolve 2025 this April 27-30 in Myrtle Beach, South Carolina. Come by and visit us – we are excited to be there and discuss everything we’ve been up to with AI, Security & Governance, and Risk & Compliance. We’d love to help you safely embark on your own AI journey.
We will also be moderating and speaking at various educational sessions throughout the conference.
Looking forward to a great ILTA Evolve and a deep dive into the intersection of AI and security.
About the author
Manuel Sanchez
Manuel Sanchez is an Information Security & Compliance Specialist at iManage with extensive professional experience in information security, governance, and compliance.