ILTA Voices ILTACON EU - iManage video
00:00:03.040 — 00:00:38.280
All right. Welcome back to ILTA Voices. I'm Michael, here with Paul Walker and Laura Wenzel from iManage. How are you guys doing today? Doing great. I'm amazing. Thank you. We're super excited to have you, and I love talking about iManage. Let's hop right in. Uh, we see organizations seeking to adopt AI, and many are discovering that their content isn't AI-ready.
From your perspective at iManage, why is information architecture the critical first step, and why is it so often overlooked? And what does AI-ready content actually mean in practice?
00:00:39.920 — 00:05:10.760
Yeah. So Michael, I think, um, as we think about this question, it's critical to first answer: what is information architecture? It's not necessarily a term that means a lot to a lot of people. Really, information architecture is about the intelligence we have around data.
And that helps organizations and individuals both access that data and leverage it for different needs. Um, so it's about centralizing and governing that data, and opening up access to all the different tools and capabilities that you're going to try to leverage. Um, what's also important is that you have different ways of applying different levels of context to that data.
So you don't just view it through one lens. Different parts of the business have different lenses, and an individual user might have different lenses depending on the task they're trying to achieve at any particular time. The other massively critical thing about an information architecture is security.
It's also about understanding and governing the security, the auditability, the governance of that data: making sure that the right people have access to the right data and can only see the data they're allowed to see. Um, you know, once you've got that information architecture in place, that enables all sorts of possibilities.
And, you know, for iManage, that's what we call AI confidence: the ability to collect that knowledge and leverage it from all the different tools that you should seek to bring to it. And, you know, iManage customers really have a leg up here. They have a bit of an advantage, because they have been adding context to their content for years simply by filing their information.
Um, and, you know, that adds the client information, the matter type, the matter. Um, and so they really do have this advantage of having that additional context built into their everyday workflows. Um, the other area where iManage customers have a real competitive advantage is the fact that the platform is managed, meaning that, um, the disposition of what we like to call ROT,
so redundant, obsolete, or trivial content, is already in place. So it gives them the foundation for the information architecture that AI really needs, um, for the results to be successful, effective, and, frankly, relevant. I gotta say, I love the ROT acronym for describing the data that you don't need anymore.
I think that's great. So we see leaders of law firms and legal departments who are understandably cautious about AI given confidentiality obligations. How do you think companies should think about the security and governance frameworks needed to deploy AI confidently? And what role does the underlying document management system play in enforcing those guardrails?
So this is probably the area that I am, you know, most interested in and excited about. Um, we do a ton of research at iManage, both quantitative and qualitative, and this is an area where I know legal leaders are really struggling, both in terms of end-user expectations and client expectations. We know that there's a significant number of end users using AI with very few, um, guardrails.
Right. And that opens up an opportunity, obviously, to expose some content, right? Whether it is client-sensitive content or even firm-sensitive content. So, um, knowing what you need to do in terms of establishing those guardrails is really challenging, right? Because legal leaders don't want to stifle innovation, but at the same time they want to make sure they have a secure and governed approach, um, to when their end users are leveraging AI.
And then the other piece of this that we see having a significant influence is what I like to call the pull and push. Clients are really pushing their law firms to use AI. They're putting it in RFPs. They have high expectations, but then they are also restricting that use of AI. So, you know, legal leaders are put in a position where they have many conflicting requirements between their end users and their clients, and they really have to put that infrastructure in place to not only reduce the risk of exposure, but also capture the defensibility and the traceability of what AI is actually doing with their content.
00:05:12.440 — 00:07:30.280
And I'll add to that, Michael. At an event recently we had 100 leading CIOs from law firms, and one of the panelists on stage asked whether the audience was managing access to public AI services by restricting their end users, or via policy: so basically, having an AI policy distributed around the organization and trusting that the professionals within it would know the best thing to do.
I was actually quite surprised: about half the room said they were relying on policy, so there were no restrictions in place. They trust the end users to do what they do. Now, I trust end users, but I also know what people can be like in a bit of a panic when they need to react to things. And maybe I'm, you know, of a certain age.
I remember a period about a decade ago when services like Dropbox were very common in the market, and many, many organizations didn't lock down those systems. They were a way of sharing files very quickly and easily with your clients, etc., and we saw some pretty horrendous exposures of data in that generation.
And then organizations started restricting and locking down those repositories, because they were able to provide alternatives to the user base. Providing a safe alternative that matches the same capability, um, as long as that's viable, as long as it's there, is a great way of controlling that.
Now, fortunately, the AI ecosystem, um, has developed enough in the last 18 months that we don't have to move all the content into these kinds of third-party public services anymore. There are now ways and means by which customers can control their control plane, so to speak. They can keep the data within their environment, and then open up access to that data for the different AI tools they want to leverage.
So they no longer have to take the data out of the platform and put it into the AI tool to get the capability. They can keep the data in the platform and open up the access. And from a document management platform perspective, that's the ultimate guardrail: it's already leveraging trusted security boundaries that have been set in place for that organization and are relied upon as part of that organization.
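The pattern Paul describes, keeping documents inside the governed platform and letting AI tools request them only through existing access controls, can be sketched in a few lines. This is a toy illustration, not an iManage API; every name here (`DocumentStore`, `fetch_for_ai`, the matter path) is hypothetical:

```python
class AccessDenied(Exception):
    """Raised when an AI tool requests content the user may not see."""
    pass

class DocumentStore:
    """Toy stand-in for a governed document repository."""
    def __init__(self):
        self._docs = {}   # doc_id -> content
        self._acl = {}    # doc_id -> set of user ids allowed to read

    def add(self, doc_id, content, readers):
        self._docs[doc_id] = content
        self._acl[doc_id] = set(readers)

    def fetch_for_ai(self, doc_id, user):
        """Gateway an AI tool must call: content never leaves the platform
        unless the requesting user's existing ACL already allows the read."""
        if user not in self._acl.get(doc_id, set()):
            raise AccessDenied(f"{user} may not read {doc_id}")
        return self._docs[doc_id]

store = DocumentStore()
store.add("matter-42/brief.docx", "Draft brief text", readers={"alice"})
```

The point of the sketch is the direction of the call: rather than exporting data into an AI service, the AI comes to the platform and inherits the security boundary that is already trusted there.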
00:07:31.440 — 00:11:41.650
Now, one thing that never ceases to amaze me is how fast technology progresses. I mean, you mentioned Dropbox, and that was only ten years ago, and you mentioned AI and how far it's come in just the last 18 months. We're seeing a shift now from simple automation to more sophisticated AI agents that can reason and take action.
What's your view on how legal should approach this evolution, particularly around building agents that can interact with knowledge safely and effectively? Yeah, so we've definitely seen that evolution in the last six months. And, you know, I think this is the next big, for want of a better word, battleground: the agent builders.
You know, the LLMs have become pretty well established. They will continue to improve and get better in the background, but now we're able to turn those LLMs into things that can actually act. Up until now, it's all been about generative AI: asking a question and generating a response, then asking follow-up questions, etc., or generating an image or a piece of code, for example.
Now we're moving into the next period, which is enabling the answer to start the next steps, to feed the logic of what happens next. And we're seeing the LLMs being able to navigate across these flows. Um, my advice to customers: really start simple. Don't necessarily start with your most business-critical systems, your most confidential systems where the data is high risk.
And I see a number of customers starting by looking at their internal processes, things that aren't necessarily impacting clients, but that give their tech teams and some of the innovation teams a way to get their hands on the technology and start learning it. I'd also advise customers not to go too heavily into any one agent builder technology just yet.
I think the market is pretty diverse, and it's still diversifying. We saw OpenAI announce their agent builder and release it about three weeks ago. Salesforce put their hat in the ring. So we're going to see an increasing number of different agent builder technologies, and I would very much advise customers to avoid vendor lock-in at this point.
Try to keep their options open, evaluate the market constantly, and look at how they can use these on internal processes, maybe some lightweight external processes, and take parts of the workflow. You don't have to boil the ocean and do the entire workflow. You can start taking little chunks of the workflow, automating them, putting agents around them, testing, and validating.
And as you'll see from the core of this podcast, make sure the data is right there feeding into these agents. That's lesson 101. Yeah. I mean, just to reinforce those tenets, it's really critical that the context, the content, is clean and centralized and secure and accurate.
Right. Because once you have these autonomous agents actively acting on their own, there are opportunities to expose information, to make decisions that you would not make. And the other piece of this that I feel pretty passionately about, you know, is that, as Paul said, the market is evolving. Everything is evolving very quickly.
But let's not lose sight of the fact that for some of these workflows and some of these agents, it's going to be critical to have that human judgment, right? I know we refer to it as human-in-the-loop, but at the end of the day, for some of these scenarios, it'll be critical for organizations to know at what point they need to insert that human judgment.
So those would be my two pieces of advice in this scenario. One addition to that, Laura: also the governance of those flows, making sure you're capturing what's happening inside those flows into the records, uh, is another part of that vital chain. So I'll just add that in. Yep. Incredible advice and very much worth listening to.
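The two safeguards just discussed, auditing what agents do and inserting human judgment at the right points, can be combined in one small wrapper. This is a hedged sketch, not any real agent framework; `run_step`, `audit_log`, and the risk labels are all illustrative:

```python
import datetime

audit_log = []  # the "records" an agent flow should write into

def run_step(action, risk, execute, approve=lambda a: False):
    """Run one agent step, logging it for governance; defer high-risk
    steps until a human approver signs off."""
    entry = {
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "action": action,
        "risk": risk,
    }
    if risk == "high" and not approve(action):
        entry["status"] = "blocked: awaiting human approval"
        audit_log.append(entry)
        return None
    entry["status"] = "executed"
    audit_log.append(entry)
    return execute()

# A low-risk internal step runs automatically; a high-risk,
# client-facing step is held for human sign-off.
run_step("summarize internal meeting notes", "low", lambda: "summary...")
run_step("send filing to client", "high", lambda: "sent")
```

The design choice mirrors the advice in the conversation: every action leaves a trace regardless of outcome, and the human-in-the-loop check is a gate in the flow itself rather than a policy document.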
So I want to shift a little bit and talk about the MCP approach. Now, you've been very vocal about the Model Context Protocol, or MCP, and standardized approaches to AI integration. Why does this matter for law firms, and how should they be thinking about their technology architecture to avoid getting locked into proprietary AI silos?
00:11:42.970 — 00:13:59.970
So anyone who knows me or has met me recently knows that I could talk about this for probably 3 or 4 hours, but I'll try to summarize. Some people will say that MCP is just a fancy API. Um, I think it's very, very different from a fancy API. Yes, technically it's an intelligent wrapper around an API that enables an LLM, an AI engine, to effectively navigate that API.
But really, when you think about what it's able to do now, it's fundamentally different. It's lowered the technical bar to be able to take actions on platforms and interlink those platforms. This is really the enabler that makes agentic AI happen. It's the bit that enables an LLM that can think and reason to take actions in platforms that we already have in place and in situ today: platforms such as iManage Work, your practice management platforms, and, you know, research data in LexisNexis and Thomson Reuters.
It enables you to leverage all these platforms, and it means you're able to talk to them without any code, navigate through them, and take actions. I'll give you a very quick example, Michael, one I did earlier today. It's not a legal example, but I got a document from iManage Work and I said, read this document.
It's got a whole bunch of tasks inside it; it was some notes that we made over several days of iManage planning meetings. Read those documents and turn them into tasks, and add them to our task management application, which for iManage is iManage Tracker. The AI engine, Claude Enterprise, read through that document, identified all the actions, and turned them all into tasks in our task management system.
It set the dates, it set the priority, it set who each task was assigned to. And it did all that for me in less than five minutes. That's a job that, you know, would have taken me probably two hours to do manually, but I know that I probably just wouldn't have gotten to it. It's one of those jobs where it's not just that it takes two hours.
Where do I find two hours to actually do this job? It takes me a week, and then I'll forget some of the details. So that's a very quick, simple automation. I didn't have to learn any code to do it. I took a document from iManage, got all the tasks, and created them all with a simple prompt. Very, very powerful.
And that's really what MCP is about: the power that it brings to the normal user, without code or IT having to get involved in the middle.
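What makes the Tracker example above possible is MCP's core idea: a platform advertises each tool with a name, a natural-language description, and a JSON Schema for its inputs, so an LLM can discover the tool and call it in a structured way without bespoke integration code. The sketch below shows that shape; the `create_task` tool and its fields are hypothetical, loosely modeled on the task-management example, not an actual iManage or Anthropic API:

```python
# An MCP-style tool description: name + description + JSON Schema inputs.
create_task_tool = {
    "name": "create_task",
    "description": "Create a task in the firm's task management system.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "title":    {"type": "string"},
            "assignee": {"type": "string"},
            "due":      {"type": "string", "format": "date"},
            "priority": {"type": "string", "enum": ["low", "medium", "high"]},
        },
        "required": ["title"],
    },
}

tasks = []  # stand-in for the task management backend

def call_tool(name, arguments):
    """Dispatch a structured tool call, the way an MCP server routes
    a tools/call request, validating required fields first."""
    if name != create_task_tool["name"]:
        raise ValueError(f"unknown tool: {name}")
    missing = [f for f in create_task_tool["inputSchema"]["required"]
               if f not in arguments]
    if missing:
        raise ValueError(f"missing required fields: {missing}")
    tasks.append(arguments)
    return {"status": "created", "task_number": len(tasks)}

# Having read the meeting notes, the model emits calls like this one:
result = call_tool("create_task", {
    "title": "Circulate planning-meeting summary",
    "assignee": "paul",
    "due": "2025-11-14",
    "priority": "high",
})
```

Because the schema travels with the tool, any MCP-capable AI engine can use it, which is exactly the lock-in avoidance Paul argues for: the integration belongs to the platform, not to one AI vendor.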
00:14:01.170 — 00:15:04.250
And I think when you think about AI and the various use cases, this is fundamental, right? Because AI is more than just traditional tech. It is very much an extension of how people work. Um, I like to think of it as, you know, a new teammate. And as all of the different practice areas and organizations have different workflows, um, it'll be really critical that we have a protocol like MCP that keeps content secure and governed.
Um, that opens up the opportunity of that freedom and that choice, and doesn't instill this notion of vendor lock-in. I know that at one of the conferences I attended recently, panelists were discussing how, you know, the next big AI vendor may not even be in the market today. So MCP is really foundational and fundamental for us to be those partners for our customers, to make sure that we support them on their AI journey.
Um, so it's really critical.
00:15:05.410 — 00:15:13.210
Now, last question. If folks want to learn more about iManage, see what awesome stuff you guys are up to, or book a demo, where can they go?
00:15:15.010 — 00:15:23.010
I would just go to imanage.com, um, and that's where they can learn all about our entire platform and our portfolio, as well as book a demo.
00:15:24.290 — 00:15:32.970
That makes life easy, doesn't it? Yeah, this has been a great conversation. I can't thank you guys enough for joining me, and I look forward to seeing you both soon. Thank you, Mike.