Vincent is vLex’s answer to bringing AI into legal research, and it’s the most impressive implementation I’ve seen for that purpose. Damien Riehl was kind enough to give us a personalized demonstration (thanks for setting that up, Jenny!) and it was a real treat to be able to ask questions about it in real time. The best way to see this in action is to schedule a demo for yourself, but if you want to hear my hot takes about the platform, please keep reading.
Vincent is Really Cool
Interface
Many times when you engage with these models they feel like a complete black box. You put in some text, 🪄 presto-chango 🪄, and they spit something back that seems related to what you put in. Vincent instead offers a fairly controlled interface centered around what you typically need for real-world legal research. It doesn’t look like a “chatbot,” sandbox-type experience; it feels more like a tool that a professional would use.
You Can Tell Where It Gets the Information
This is huge because almost everything you need is on one page immediately. You ask it to draft a legal research memo and the cases are right there, just to the right of the memo. The relevant portions of the cases have been summarized and presented for you. A confidence score tells you how closely Vincent believes each source matches your request; everything below 70% is dropped. You can toggle among the available cases, regulations, statutes, and secondary materials. Everything that could require a deeper dive has a hyperlink. You can get a sense of what this looks like on vLex’s Vincent page: https://vlex.com/vincent-ai.
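To make that concrete, here is a minimal sketch of what a confidence-threshold filter like the one described above could look like. The 70% cutoff comes from the demo; everything else (the names, the structure, the scoring) is my assumption for illustration, not vLex’s actual code.

```python
from dataclasses import dataclass

# Hypothetical sketch of Vincent-style confidence filtering.
# The 0.70 threshold matches the behavior described in the demo;
# the names and structure are assumptions for illustration only.

@dataclass
class RetrievedSource:
    title: str
    doc_type: str      # "case", "statute", "regulation", or "secondary"
    summary: str       # the relevant portion, already summarized
    confidence: float  # 0.0-1.0: how close this is to the request
    url: str           # hyperlink for a deeper dive

def filter_sources(sources: list[RetrievedSource],
                   threshold: float = 0.70) -> list[RetrievedSource]:
    """Drop anything below the confidence cutoff, best matches first."""
    kept = [s for s in sources if s.confidence >= threshold]
    return sorted(kept, key=lambda s: s.confidence, reverse=True)
```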
Multi-Stage Prompting
vLex is probably best known for its deep archive of primary international materials, and Vincent uses this to great effect (fitting, since many of these NLP technologies started as translation tools). You can enter a natural-language question in English; Vincent will translate it, run the search in the home country’s language, and then provide you with both the original text (so you can translate it yourself) and an English (or whatever) translation. Sexy stuff for you FCIL researchers. This is also substantially more powerful than something that simply grinds through many iterations of similar keyword searches in other languages.
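As a rough, self-contained sketch, the flow Damien described might look something like the following. The stub functions stand in for vLex’s real translation and search machinery, which we obviously don’t have access to.

```python
# A toy sketch of the cross-lingual research flow described above.
# translate() and search_corpus() are stand-ins, not real vLex APIs.

def translate(text: str, source: str, target: str) -> str:
    # Stand-in: in Vincent this would be an LLM or MT translation step.
    return f"[{source}->{target}] {text}"

def search_corpus(query: str, jurisdiction: str) -> list[str]:
    # Stand-in: in Vincent this searches the jurisdiction's own corpus.
    return [f"Sample {jurisdiction} authority matching: {query}"]

def cross_lingual_search(question_en: str, jurisdiction: str, lang: str):
    # 1. Translate the English question into the home country's language.
    question_local = translate(question_en, source="en", target=lang)
    # 2. Run the search in that language against the local corpus.
    hits = search_corpus(question_local, jurisdiction=jurisdiction)
    # 3. Return each hit's original text plus an English translation.
    return [(hit, translate(hit, source=lang, target="en")) for hit in hits]
```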
It’s also a notable example of multi-stage prompting and retrieval in legal research. You can see that your question is fed through not one prompt but many chained prompts to produce high-quality, useful output. The tools for US caselaw work similarly: your query is turned into several different prompts that run off in different directions through the vLex database to retrieve information. Some prompts search through cases, statutes, regulations, and secondary materials to see what is useful; others summarize cases as they relate to your query; others find counterarguments; another evaluates everything for confidence on your specific subject; and a final prompt summarizes all of this information into a neat little report for you. In short, they’re making great use of the technology’s potential by deploying it in many different ways. The final product is a sort of fabricated, personalized secondary source created by running tons of prompts over the underlying primary materials. In fact, Damien calls this a “Me-tise” 😂 (apologies to Damien if I stole his punchline) and he foresees it becoming a powerful new tool for legal researchers. I’ve been bullish on the fabrication of secondary materials since I first saw what these things could do, so it was exciting to see a precursor of this in action.
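For the curious, here is a toy sketch of that fan-out/fan-in pattern: one query becomes several specialized prompts, and a final prompt synthesizes the intermediate results into the “memo.” The stage names and `ask_llm()` are hypothetical stand-ins based on my understanding of the demo, not Vincent’s actual pipeline.

```python
# A toy sketch of the multi-stage prompting described above: one query
# fans out into several specialized prompts, and a final prompt
# synthesizes the results. ask_llm() is a stand-in for a model call.

def ask_llm(prompt: str) -> str:
    # Stand-in for a call to Claude / Llama 2 / GPT-4.
    return f"<model response to: {prompt[:60]}...>"

def research_memo(query: str) -> str:
    # Fan out: each stage is its own prompt run over the database.
    stages = {
        "authorities": f"Find cases, statutes, and regulations relevant to: {query}",
        "summaries": f"Summarize each authority as it relates to: {query}",
        "counterarguments": f"Identify counterarguments to: {query}",
        "confidence": f"Score each authority's relevance to: {query}",
    }
    results = {name: ask_llm(prompt) for name, prompt in stages.items()}

    # Fan in: a final prompt turns the intermediate output into a neat
    # little report -- the "fabricated, personalized secondary source."
    synthesis_prompt = (
        f"Draft a research memo on '{query}' using these findings:\n"
        + "\n".join(f"{k}: {v}" for k, v in results.items())
    )
    return ask_llm(synthesis_prompt)
```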
Damien let us know that behind the scenes they use a combination of LLMs to achieve these results and cut costs where possible: Claude (Anthropic), Llama 2 (Meta), and GPT-4 (OpenAI). We met with him shortly after the OpenAI controversy, and he pointed out that they can swap models in vLex if necessary.
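That kind of swappability usually comes from a thin routing layer between the pipeline and the model providers. Here is a minimal sketch of the idea; the routing rules and the placeholder lambdas are my assumptions, not vLex’s implementation.

```python
# A minimal sketch of the model-swapping idea: route each task to
# whichever backend fits, and change providers without touching the
# rest of the pipeline. Routing rules here are assumptions.

from typing import Callable

MODELS: dict[str, Callable[[str], str]] = {
    # Each entry would wrap a real API client (Anthropic, Meta, OpenAI).
    "claude": lambda p: f"<claude: {p[:40]}>",
    "llama2": lambda p: f"<llama2: {p[:40]}>",
    "gpt4":   lambda p: f"<gpt4: {p[:40]}>",
}

def route(prompt: str, task: str) -> str:
    # Cheaper model for bulk summarization, stronger model for synthesis.
    # If one provider has a bad week, you just change this mapping.
    model = "llama2" if task == "summarize" else "gpt4"
    return MODELS[model](prompt)
```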
Secondary Materials and Market Share
We have all come to love and rely on specific secondary materials that exist in Westlaw and Lexis. vLex’s acquisition of Fastcase gave it a huge, fantastic database of primary US materials; the one pain point for people interested in switching from Westlaw/Lexis to Fastcase was the relative dearth of secondary materials. The features I saw last week in vLex may fill that need for some users, and it will be interesting to see whether people are lured away from their favorite practice guide or treatise published by Lexis or Thomson Reuters because a robot can now do some of that work of summarizing and analyzing vast quantities of primary law. It will also be interesting to see whether Lexis and Westlaw roll out these types of features, since they would be in direct competition with their own robust (and pricey) secondary materials offerings.
Before I get a slew of angry emails: I recognize that a traditional secondary source does much more than summarize cases, statutes, and regulations, but it does do some of that (and remember, we’re still in the infancy of this technology for legal research). If that is all the researcher needs, these tools could work as a replacement for some people (and they don’t rely on monthly updates; they do this on demand). That may allow some people to cut ties with Lexis and Westlaw in a way that shakes up the industry and disrupts the status quo. It could also be incredibly powerful for something like a 50-state survey, or even surveys across many different countries. Feel free to let me know what an ignoramus I am in the comments if I am missing something here.
Outstanding Questions
Price
I’ll dive right in with the question you all have: “Can we afford this thing?” Dunno, and it depends (super satisfying, I know). The difficulty is that these things are still very expensive to operate. The more sophisticated the model, the larger the database, the more complex the stages of prompting, and the more modalities involved (scanning documents, reading the screen, etc.), the more it costs them. Vendors are all trying to figure out a pricing structure that lets them 1) offer the product to the widest audience possible and 2) remain profitable. As we know, their primary source of revenue is the big firms, so the product is currently only available in a paid beta for select companies.
Damien and vLex are both refreshingly upfront and clear about this. No hand-waving or sales talk, which I think is why so many people in our industry look to people like Damien for information about these technologies as they are developed. Damien mentioned that they are taking the “democratize the law” call to action from Fastcase seriously and are looking for ways to make it affordable on the academic market.
Possible Future Options
This is all complete speculation on my part, but some sort of limited version of the platform seems like it could be reasonable for the academic market (like BLaw does with their dockets): limited uses per day, limited uses per account, a “lesser” account with limited features, etc. As the market stands today, academic law libraries have access to a limited version of Lexis AI, trial access to Casetext CoCounsel (unless you’re willing to pay), no access to Westlaw Copilot, no access to Harvey AI, and no access to vLex. I anticipate all of that will change as prices come down. The frustration, obviously, is that we want to be able to evaluate these tools so that we can teach them to students, in addition to using them ourselves so that we can benefit from the technology.
In conclusion, Vincent by vLex represents a significant step forward in AI-driven legal research. Its sophisticated multi-stage prompting, transparent sourcing, and potential for fabricating secondary materials make it a formidable tool. The future of Vincent and similar AI platforms in the academic and broader legal research community is certainly something to watch closely.