I spent the first week of January attending the Association of American Law Schools (AALS) Annual Meeting in Washington, D.C. I was impressed by the thoughtful AI sessions, including two at which I participated as a panelist. Every AI session I attended was packed beyond capacity, underscoring the growing interest in AI in the legal academy; many attendees were just beginning their AI education. That overwhelming interest made my decision clear: it is time to release my AI prompt worksheets to the world, addressing the need I observed there. While AALS convinced me to release the worksheets, I originally created them for an upcoming presentation at ABA TECHSHOW 2024, How to Actually Use AI in Your Legal Practice, at which Greg Siskind and I will be discussing practical tips for generative AI usage.
DALL-E generated
Background: Good Habits – Research Planning
For decades, law librarians have been encouraging law students to create a research plan before they start their research. The form of the plan varies by school and/or librarian, but it usually requires the researcher to answer questions on the following topics:
Issue Identification
Jurisdiction
Facts
Key words/Terms of Art
Resource Selection
Once the questions are answered, the plan has the researcher write out some test searches. The plan evolves as the research progresses. The more experienced the researcher, the less formal the plan often is, but even the most experienced researcher retrieves better results if they pause to consider what they know currently and what they need in the results. After all, garbage in, garbage out (GIGO). In other words, the quality of our input affects the quality of the output. This is especially true when billable hours come into play, and you cannot bill for excess time due to poor research skills.
Continuing the Good Habits with Generative AI
GIGO applies just as much to generative AI. I quickly noticed that my AI results are much better when I stop and think them through, providing a high level of detail and a good explanation of what I want the AI system to produce. So, good law librarian that I am, I created a new form of plan for those who are learning to draft a prompt. Thus, I give you my AI prompt worksheets.
The first worksheet that I created is geared towards general generative AI systems like ChatGPT, Claude 2, Bing Chat/Copilot, and Bard. The worksheet makes the prompter think through the following topics:
Tone of Output
Role
Output Format
Purpose
Issue
Potential Refinements (may be added later as the plan evolves)
So that you can easily keep track of your prompts, the worksheet also requests some metadata about your prompt, including project name, date, and AI system used. The final question lets the prompter record whether the prompt worked for them.
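For readers who think in code, here is a minimal sketch of how the worksheet's fields might be assembled into a finished prompt. The field names and sample values are my own invention for illustration, not an official worksheet format:

```python
from dataclasses import dataclass, field

@dataclass
class PromptWorksheet:
    """Hypothetical capture of the worksheet fields, including metadata."""
    project_name: str
    date: str
    ai_system: str
    tone: str
    role: str
    output_format: str
    purpose: str
    issue: str
    refinements: list = field(default_factory=list)  # added as the plan evolves

    def to_prompt(self) -> str:
        """Assemble the fields into a single prompt string."""
        parts = [
            f"You are {self.role}.",
            f"Write in a {self.tone} tone.",
            f"Produce your answer as {self.output_format}.",
            f"Purpose: {self.purpose}",
            f"Issue: {self.issue}",
        ]
        parts.extend(self.refinements)
        return "\n".join(parts)

ws = PromptWorksheet(
    project_name="Demo", date="2024-01-15", ai_system="ChatGPT",
    tone="formal but plain-spoken",
    role="an experienced law librarian",
    output_format="a bulleted research plan",
    purpose="orient a first-year associate",
    issue="landlord liability for ice on sidewalks in Ohio",
)
print(ws.to_prompt())
```

Keeping the metadata alongside the prompt text is also what makes the worksheet double as the start of a prompt library.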
For the second worksheet, I wanted to draft something that works well with legal AI systems. Based on the systems that I have received access to, such as Lexis AI and LawDroid Copilot, and the systems that I have seen demonstrated, I cut down some of the fields. Most of the systems are building a guided AI prompting experience, so they will ask you for the jurisdiction, for instance. They may also allow you to select a specific type of output, such as a legal memo or contract clause. This means less need for an extensive number of fields in the worksheet. In fact, when I ran the worksheet past a vLex representative, I was told it was not needed at all because they had made the guided prompt that easy.
Librarian that I am, however, I still feel that planning before you prompt is preferred. Reasons for this preference include: the high cost of the current generative AI searches, the desire for efficient and effective results, knowledge that an attorney’s time is literally worth money, and the desire for a happy partner and client.
The legal worksheet trims the fields down to role, output (format and jurisdiction), issue, and refinement instructions. This provides enough room to flesh out your prompt without overlapping the guided prompt fields too much.
General Comments Regarding the Worksheets
With both worksheets, the key is to give a good, detailed description of what you need. Think about it like explaining the assignment to a first-year law student – the more detail you give, the more likely you are to get something usable. The worksheets provide examples of the recommended level of detail, and you will find links to the results in the footnotes of the forms.
In addition to helping perfect your prompt with some pre-planning, these worksheets should be useful for creating your very own prompt library.
Feedback Wanted!
DALL-E created
Please feel free to use the worksheets (just don’t sell them or otherwise profit off of them! Ask if you want to make a derivative of them). If you do use them, please let me know what you think in the comments or via email. How have they assisted (or not) with improving your prompting skills? Are there fields you would like to see added/removed? I will be updating and releasing new versions as I go. If you are looking for the most recent versions of the worksheets, I will post them at: https://law-capital.libguides.com/Jennys_AI_Resources/AI_Prompt_Worksheets
I have frequently wondered why ChatGPT often struggles with searching the internet – to the point where it sometimes denies having internet access altogether and has to be reminded. The answer fell into my lap today when I was listening to my favorite AI podcast and heard the ChatGPT Pre-Prompt Text Leaked episode. As it turns out, ChatGPT is so bad at remembering that it can search the internet for answers that OpenAI has to run a plain old natural-language prompt reminding it behind the scenes – a set of custom instructions that runs even before the user’s custom instructions or prompts.
These pre-prompt instructions are not limited to internet search capability reminders. If you ask ChatGPT-4 to tell you EVERYTHING (click on the link for the specific language required), it will provide several screens of its behind-the-scenes pre-user prompt instructions on who it is (ChatGPT!), how to handle Python code, instructions for generating images, and…my favorite…a reminder that it can search the internet. An excerpt of the instructions appears below. To view the full text, click here to view my ChatGPT-4 transcript.
Behind the Curtain
Obviously, I knew that ChatGPT did something behind the scenes – it is after all a complicated computer program. However, I didn’t suspect that some of this behind-the-scenes magic is 1192 words (according to a Microsoft Word count) of normal text prompts, without any fancy computer programming.
So, behind the curtain of the fancy revolutionary AI software, there are…words. Basically, before applying the user’s custom instructions or looking at the user’s prompts, ChatGPT looks at its baseline instructions, which are stated in plain language. It all makes perfect sense now… It’s not just my imagination; ChatGPT actually is horrible at remembering it can search the internet, and when it does search, it produces questionably helpful results. OpenAI has tried to deal with the problem with a last-minute, helpful-ish reminder:
“Remember, you CAN search the internet! See, like this!!”
“And for the love of GOD try hard to find stuff (except for song lyrics)! I believe in you!!”
Allows for Quick and Easy Fixes?
On the plus side of this simple approach of running pre-prompt prompts behind the scenes, it seems like it was a super easy fix to get DALL-E to embrace DEI. When the program first came out, if you wanted a non-white, non-man image, you had to specify that. As the months went on, it got better and better at providing images more representative of humanity. I thought maybe the developers did something complicated like retrain the system with new images, call on the great AI minds to adjust fancy algorithms, and who knows what else. Nope, just a few sentences fixed the problem!
“And for images, remember not all people are white men!”
Possibly Actionable Insights?
It’s funny to picture ChatGPT’s robomom yelling out the door as it leaves for school, “Don’t forget, you can use the internet! And remember not to be racist/sexist! AND MOST IMPORTANTLY NO SONG LYRICS!!”
In addition to being gratified that I was right that ChatGPT is really bad at searching the internet, I was thinking that this new (to me) knowledge about how the system works would be useful in some way, perhaps by helping to formulate more useful prompts. However, after thinking about it, I am not so sure that I have identified any actionable insights.
Can I give it more complex prompts? On the one hand, it appears that the system can handle more complex instructions than I originally thought, because it is able to analyze several screens of text before it even gets to mine. Does this mean I should feel free to give even more complex instructions?
Should I give it less complex prompts? On the other hand, ChatGPT already seems to ignore parts of any long and complex instructions, and if not, its memory for them degrades during an extended back and forth session. Does this mean that the system is already overloaded with instructions, so I should make it a point to give it less complex ones?
Should I give it frequent reminders of important instructions? Does the fact that OpenAI thinks that it is effective to remind ChatGPT of important instructions mean that we should spend a lot of time…reminding it of important instructions? When asking the system a question which requires internet consultation for an answer, maybe it would help to preface the question by first cutting and pasting in the system’s own pre-prompt browsing instructions (that appear above).
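As a rough illustration of that last idea – and to be clear, the reminder text below is my paraphrase, not OpenAI's actual pre-prompt – prepending a standing instruction to a question is just string assembly:

```python
# Hypothetical sketch: paste a standing reminder immediately before the
# question so the instruction is fresh in the context window at ask time.
BROWSING_REMINDER = (
    "Remember: you CAN search the internet. "
    "Use your browser tool before answering."
)

def with_reminder(question: str) -> str:
    """Prefix the user's question with the browsing reminder."""
    return f"{BROWSING_REMINDER}\n\n{question}"

print(with_reminder("What did the court hold in Mata v. Avianca?"))
```

Whether this actually improves results is exactly the open question above; it simply mimics what OpenAI appears to be doing behind the scenes.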
Conclusion
I will keep thinking and let y’all know if I come up with anything!
Generative AI has been here for only a year, and we’ve already seen several lawyers make big blunders trying to use it in legal practice (Sean Harrington has been gathering them here). Trying to get ahead of the problem, bar associations across the country have appointed task forces, working groups, and committees to consider whether ethical rules should be revised. Although the sand will continue to shift under our feet, this post attempts to summarize the ethical rules, guidance, and opinions related to generative AI that have either already been issued or are forthcoming. The post will be updated as new rules are issued.
Image generated by DALLE-3, showing Matrix-style code flowing over the shifting sands of a desert. A sandstorm looms.
California CPRC Best Practices
On November 16, 2023, the California State Bar Board of Trustees approved their Practical Guidance for the Use of Generative Artificial Intelligence in the Practice of Law. The document was initially created by the Committee on Professional Responsibility and Conduct. Unlike ethics opinions or formal rules, which tend to be more prescriptive and specific in nature, this document serves as a guide, offering insights and considerations for lawyers as they navigate the new terrain of AI in legal practice. It is organized by duties, with practical considerations for each duty, and addresses the duty of confidentiality, duties of competence & diligence, duty to supervise, duty of candor, disclosure to clients, charging clients for work produced by generative AI, and more.
Florida Bar Advisory Opinion
On January 19, 2024, the Florida Bar issued its Advisory Opinion 24-1, regarding lawyers’ use of generative AI. The opinion discusses the duty of confidentiality, oversight of AI, the impact on legal fees and costs, and use in lawyer advertising.
New Jersey Supreme Court
On January 24, 2024, the New Jersey Supreme Court issued its Preliminary Guidelines on New Jersey Lawyers’ Use of Artificial Intelligence. The guidelines highlight the importance of accuracy, truthfulness, confidentiality, oversight, and the prevention of misconduct, indicating that AI does not alter lawyers’ core ethical responsibilities but necessitates careful engagement to avoid ethical violations.
Judicial Standing Orders
Beginning soon after the infamous ChatGPT error in Mata v. Avianca, judges began to issue orders limiting the use of generative AI or requiring disclosure of its use or checking for accuracy. To date, at least 24 federal judges and at least one state court judge have issued standing orders.
Finally, in some jurisdictions, ethical bodies have looked beyond the use of generative AI by lawyers, and have given guidance on how judges can and should use generative AI.
On October 27, 2023, the State Bar of Michigan issued an opinion emphasizing the ethical obligation of judicial officers to maintain competence with advancing technology, including artificial intelligence, highlighting the need for ongoing education and ethical evaluation of AI’s use in judicial processes.
Also in October 2023, the West Virginia Judicial Investigation Commission issued Advisory Opinion 2023-22, opining that judges may use artificial intelligence for research but not to determine case outcomes.
Vincent is vLex’s response to implementing AI into legal research and it’s the most impressive one that I’ve seen for legal research. Damien Riehl was kind enough to give us a personalized demonstration (thanks for setting that up, Jenny!) and it was a real treat to be able to ask questions about it in real-time. I would say that the best way to see this in action is to schedule a demo for yourself but if you want to hear my hot-takes about the platform, please keep reading.
Vincent is Really Cool
Interface
Many times when you engage with these models, they feel like a complete black box. You put in some text, 🪄 presto-chango 🪄, and they spit something back that seems related to what you put in. Vincent instead offers a fairly controlled interface centered around what you typically need for real-world legal research. It doesn’t look like a “chatbot,” sandbox-type experience; it feels more like a tool that a professional would use.
You Can Tell Where It Gets the Information
This is huge because almost everything you need is on one page immediately. You ask it to draft a legal research memo and the cases are just to the right of the memo. The relevant portions of the cases have been summarized and presented there for you. A tool tells you how confident Vincent is that this is close to your request. Everything below 70% is dropped. You can toggle between cases, regs, statutes, and secondary materials available. Everything that could require a deeper dive has a hyperlink. You can get a sense of what this looks like from vLex’s website about Vincent here: https://vlex.com/vincent-ai.
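That kind of confidence cutoff is conceptually simple. Here is a hypothetical sketch – the field names, case names, and scores are invented for illustration, not Vincent's actual data model – of dropping everything below the 70% threshold:

```python
# Hypothetical sketch of a confidence cutoff: each retrieved authority
# carries a relevance score, and anything under the threshold is dropped.
THRESHOLD = 0.70

def filter_authorities(results, threshold=THRESHOLD):
    """Keep only results whose confidence meets the cutoff, best first."""
    kept = [r for r in results if r["confidence"] >= threshold]
    return sorted(kept, key=lambda r: r["confidence"], reverse=True)

results = [
    {"cite": "Smith v. Jones", "confidence": 0.91},
    {"cite": "Doe v. Roe", "confidence": 0.55},
    {"cite": "Acme v. Zenith", "confidence": 0.74},
]
print([r["cite"] for r in filter_authorities(results)])
# → ['Smith v. Jones', 'Acme v. Zenith']
```

Surfacing the score to the user, rather than silently ranking, is a big part of what makes the interface feel transparent.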
Multi-Stage Prompting
vLex is probably best known for its deep archive of primary international materials. Vincent uses this to great effect (since we know that many of these NLP technologies started as translation tools). You can enter a natural language question in English, Vincent will translate it, run the search in the home country’s language, and then provide you with both the original text (so you could translate it yourself) and an English (or whatever) language translation. Sexy stuff for you FCIL researchers. Also, this is substantially more powerful than something that simply tries to grind through many iterations of similar keyword searches in other languages.
It’s also a notable example of multi-stage prompting and retrieval in legal research. You can see that your request is fed through not one prompt but many complex chains to produce high-quality, useful output. The tools for US case law are similar: your query is turned into several different prompts that run off in different directions through the vLex database to retrieve information. Some prompts search through cases, statutes, regs, and secondary materials to see what is useful; others summarize cases as they relate to your query; others find counterarguments; another evaluates them for confidence on your specific subject; and a final prompt summarizes all of this information into a neat little report for you. In summary, they’re making great use of the technology’s potential by deploying it in many different ways. The final product is sort of a fabricated, personalized secondary source created by running tons of prompts over the underlying primary materials. In fact, Damien calls this a “Me-tise” 😂 (apologies to Damien if I stole his punchline) and he foresees it being a powerful new tool for legal researchers. I’ve been bullish on the fabrication of secondary materials since I first saw what these things could do, so it was exciting to see a precursor of this in action.
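That fan-out-then-summarize pattern can be sketched in a few lines. Everything here is hypothetical – `call_llm` is a stub standing in for a real model call, and the stage prompts are invented – but it shows the shape of a multi-stage pipeline:

```python
# Hypothetical sketch of multi-stage prompting: fan out one query into
# several task-specific prompts, then summarize the intermediate results.
def call_llm(prompt: str) -> str:
    """Placeholder for a real model call (Claude, GPT-4, etc.)."""
    return f"[model answer to: {prompt}]"

def research_report(query: str) -> str:
    stages = {
        "cases": f"Find cases relevant to: {query}",
        "statutes": f"Find statutes and regulations relevant to: {query}",
        "counterarguments": f"List counterarguments to: {query}",
    }
    # Fan out: one prompt per retrieval/analysis task.
    findings = {name: call_llm(p) for name, p in stages.items()}
    # Final stage: a summarizing prompt over the intermediate outputs.
    combined = "\n".join(f"{k}: {v}" for k, v in findings.items())
    return call_llm(f"Summarize into a research memo:\n{combined}")

print(research_report("adverse possession of mineral rights"))
```

In a production system each stage would also retrieve from the database and could run on a different model, which is exactly the cost lever discussed next.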
Damien let us know that behind the scenes they are using a combination of various LLMs to achieve these results and cut costs when possible: Claude, Llama 2 (Meta), and GPT-4. We met with him shortly after the OpenAI controversy, and he pointed out that they are able to swap models in vLex if necessary.
Secondary Materials and Market Share
We have all come to love and rely on specific secondary materials that exist in Westlaw and Lexis. vLex’s acquisition of Fastcase meant that they acquired a huge, fantastic database of primary US materials. The one pain point for people who may be interested in switching from Westlaw/Lexis to Fastcase was the relative dearth of secondary materials available. The features that I saw last week in vLex may fill that need for some users and it will be interesting to see if people are lured away from their favorite practice guide or treatise published by Lexis or Thomson Reuters because a robot can now do some of that work summarizing and analyzing vast quantities of primary law. It will also be interesting to see if Lexis and Westlaw will roll out these types of features, since they could be in direct competition with their robust (and pricey) secondary materials offerings.
Before I get a slew of angry emails: I recognize that a traditional secondary material does much more than summarize cases, statutes, and regulations, but it does some of that (also remember we’re still in the infancy of this technology for legal research). If that is all the researcher needs, then these tools could work as a replacement for some people (and they don’t rely on monthly updates – they work on demand). That may allow some people to cut ties with Lexis and Westlaw in a way that disrupts the status quo. It could also be incredibly powerful for something like a 50-state survey, or even surveys across many different countries. Feel free to let me know what an ignoramus I am in the comments if I am missing something here.
Outstanding Questions
Price
I’ll dive right in where you all have questions, “Can we afford this thing?” Dunno and it depends (super satisfying, I know). The difficulty here is that these things are still very expensive to operate. The more sophisticated the model, the larger the database, the more complex the stages of prompting, the various modalities (scanning documents, reading the screen, etc.) – the more it costs them. They are all trying to figure out how to create a pricing structure where they can 1) offer it to the widest audience possible and 2) remain profitable. As we know, their primary source of revenue is the big firms and so the product is currently only available in paid beta for select companies.
Damien and vLex are both refreshingly upfront and clear about this. No hand-waving or sales talk, which I think is why so many people in our industry look to people like Damien for information about these technologies as they are developed. Damien mentioned that they are taking the “democratize the law” call to action from Fastcase seriously and are looking for ways to make it affordable on the academic market.
Possible Future Options
This is all complete speculation on my part but some sort of limited version of the platform seems like it could be reasonable for the academic market (like BLaw does with their dockets): limited uses per day, limited uses per account, a “lesser” account with limited features, etc. As the market stands today academic law libraries have access to a limited version of Lexis AI, trial access to Casetext Cocounsel (unless you’re willing to pay), no access to Westlaw Copilot, no access to Harvey AI, and no access to vLex. I anticipate all of that will change as the prices come down. The point of frustration is obviously that we want to be able to evaluate these tools so that we can teach them to students, in addition to using them ourselves so that we can benefit from the technology.
In conclusion, Vincent by vLex represents a significant step forward in AI-driven legal research. Its sophisticated multi-stage prompting, transparent sourcing, and potential in fabricating secondary materials make it a formidable tool. The future of Vincent and similar AI platforms in the academic and broader legal research community is certainly something to watch closely.
In today’s post, we’ll explore how legal educators and law students can use Large Language Models (LLMs) like ChatGPT and Claude to create multiple-choice questions (MCQs) from a law school outline.
Understanding the Process
My first attempt at this was simply to ask the LLM the best way to make MCQs, but its advice wasn’t particularly helpful, so I did some digging. Anthropic recently shed light on their method of generating multiple-choice questions, and it’s a technique that could be immensely beneficial for test preparation – besides being a useful way to conceptualize how to make effective use of the models for studying. They utilize XML tags, which may sound technical, but in essence these are just simple markers used to structure content. Let’s break this process down into something you can understand and use, even if you’re not a wizard at Technical Services who is comfortable with XML.
Imagine you have a law school outline on federal housing regulations. You want to test your understanding or help students review for exams. Here’s how an LLM can assist you:
STEP 1: Prepare Your Outline
Ensure that your outline is detailed and organized. It should contain clear sections, headings, and bullet points that delineate topics and subtopics. This structure will help the LLM understand and navigate your content. If you’re comfortable using XML or Markdown, this can be exceptionally helpful. Internally, the model identifies the XML tags and the text they contain, using this structure to generate new content. It recognizes the XML tags as markers that indicate the start and end of different types of information, helping it to distinguish between questions and answers.
The model uses the structure provided by the XML tags to understand the format of the data you’re presenting.
STEP 2: Uploading the Outline
Upload your outline into the platform that you’re using. Most platforms that host LLMs will allow you to upload a document directly, or you may need to copy and paste the text into a designated area.
STEP 3: Crafting a General Prompt
You can write a general prompt that instructs the LLM to read through your outline and identify key points to generate questions. For example:
“Please read the uploaded outline on federal housing regulations and create multiple-choice questions with four answer options each. Focus on the main topics and legal principles outlined in the document.”
STEP 4: Utilizing Advanced Features
Some LLMs have advanced features that can take structured or semi-structured data and understand the formatting. These models can sometimes infer the structure of a document without explicit XML or Markdown tags. For instance, you might say:
“Using the headings and subheadings as topics, generate multiple-choice questions that test the key legal concepts found under each section.”
AND/OR
Give the model some examples with XML tags so it can better replicate what you would like (a technique known as “few-shot prompting”):
<Question>
What are "compliance costs" in HUD regulations?
</Question>
<Answers>
1. Fines for non-compliance.
2. Costs associated with adhering to HUD regulations.
3. Expenses incurred during HUD inspections.
4. Overheads for HUD compliance training.
</Answers>
The more examples you give, the better it’s going to be.
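A side benefit of the tagged format: once the model returns questions wrapped in tags like these, the output is easy to post-process into flashcards, quizzes, and the like. A small, hypothetical sketch:

```python
import re

# Hypothetical sketch: pull question/answer pairs back out of model
# output that follows the <Question>/<Answers> format shown above.
sample = """
<Question>
What are "compliance costs" in HUD regulations?
</Question>
<Answers>
1. Fines for non-compliance.
2. Costs associated with adhering to HUD regulations.
3. Expenses incurred during HUD inspections.
4. Overheads for HUD compliance training.
</Answers>
"""

def parse_mcqs(text: str) -> list:
    """Pair each tagged question with its list of answer options."""
    questions = re.findall(r"<Question>\s*(.*?)\s*</Question>", text, re.DOTALL)
    answers = re.findall(r"<Answers>\s*(.*?)\s*</Answers>", text, re.DOTALL)
    return [
        {"question": q, "options": a.splitlines()}
        for q, a in zip(questions, answers)
    ]

mcqs = parse_mcqs(sample)
print(mcqs[0]["question"])      # → What are "compliance costs" in HUD regulations?
print(len(mcqs[0]["options"]))  # → 4
```

Consistent tagging is what makes this kind of reuse possible across every outline you process.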
AND/OR
You can also use the LLM to add these XML tags depending on the size of your outline and the context limit of the model you are using (OpenAI recently expanded their limit dramatically). Give it a prompt asking it to apply tags and give it an example of the types of tags you would like for your content. Then tell the model to do it with the rest of your outline:
<LawSchoolOutline>
<CourseTitle>Constitutional Law</CourseTitle>
<Section>
<SectionTitle>Executive Power</SectionTitle>
<Content>
<SubSection>
<SubSectionTitle>Definition and Scope</SubSectionTitle>
<Paragraph>
Executive power is vested in the President of the United States and is defined as the authority to enforce laws and ensure they are implemented as intended by Congress.
</Paragraph>
</SubSection>
</Content>
</Section>
</LawSchoolOutline>
STEP 5: Refining the Prompt
It is very rare that my first try with any of these tools produces fantastic output. It is often a “conversation with a robot sidekick” (as many of you have heard me say at my presentations) and requires you to nudge the model to create better and better output.
If the initial questions need refinement, you can provide the LLM with more specific instructions. For example:
“For each legal case mentioned in the outline, create a question that covers the main issue and possible outcomes, along with incorrect alternatives that are plausible but not correct according to the case facts.”
Replicating the Process
Students can replicate this process for other classes using the same prompt. The trick here is to stay as consistent as possible with the way that you structure and tag your outlines. It might feel like a lot of work on the front end to create 5+ examples, apply tags, etc. but remember that this is something that can be reused later! If you get a really good MCQ prompt, you could use it for every class outline that you have and continue to refine it going forward.
This week, OpenAI announced new features to its platform at its first keynote event, including a new GPT-4 Turbo with 128K context, GPT-4 Turbo with Vision, the DALL·E 3 API, and more. It also announced its Assistants API, which includes OpenAI’s own retrieval-augmented generation (RAG) pipeline. Today, we will focus on OpenAI’s entry into the RAG market.
At the surface level, RAG boils down to text generation models like ChatGPT retrieving data, such as documents, to assist users with question answering, summarization, and so on. Behind the scenes, however, other factors are at play, such as vector databases, document chunking, and embedding models. Most RAG pipelines rely on an external vector database and require compute to create the embeddings. What OpenAI’s retrieval tool brings to the table is an all-encompassing RAG system that eliminates the need for an external database and the compute required to create and store the embeddings. Whether OpenAI’s retrieval system is optimal is a story for another day; today we are focusing on the data implications.
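To make those moving parts concrete, here is a toy sketch of the RAG steps just named: chunk the documents, embed each chunk, and retrieve the chunk nearest the query. A real pipeline would use a learned embedding model and a vector database; a simple bag-of-words vector stands in here so the example is self-contained:

```python
import math
from collections import Counter

# Toy RAG sketch: chunk -> embed -> retrieve the best-matching chunk,
# which would then be handed to the generation model as context.
def chunk(text: str, size: int = 8) -> list:
    """Split text into fixed-size word chunks (real systems are smarter)."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text: str) -> Counter:
    """Bag-of-words 'embedding' standing in for a learned model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

doc = ("The lease requires sixty days notice before termination. "
       "Deposits must be returned within thirty days of move out.")
index = [(c, embed(c)) for c in chunk(doc)]  # the "vector database"

query = embed("when must deposits be returned")
best = max(index, key=lambda item: cosine(query, item[1]))[0]
print(best)  # the retrieved context for the generation step
```

OpenAI's retrieval tool bundles all three of these steps (and the storage) behind one API call, which is precisely why the data ends up in their hands.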
Data is the new currency fueling the new economy. Big Tech aims to take control of that economy by ingesting organizations’ private data, including IP, leading to a “monolithic system” that completely controls users’ data. Google, Microsoft, Adobe, and OpenAI are now offering indemnification to their users against potential copyright infringement lawsuits related to generative AI, aiming to protect their business model by ensuring more favorable legal precedents. This strategy is underscored by the argument that both the input (ideas, which are uncopyrightable) and the output (machine-generated expressions, deemed uncopyrightable by the US Copyright Office) of generative AI processes do not constitute copyright infringement. The consequences of Big Tech having its way could be dire, leading us to a cyberpunk dystopia that none of us want to live in. Technology and its algorithms would be in charge, and our personal data could be used to manipulate us. Our data reveals our interests, private health information, location, and more. When algorithms feed us only limited, targeted information based on our existing interests and views, they restrict the outside influence and diversity of opinion that are crucial to freedom of thought. Organizations must not contribute to a cyberpunk dystopia where Big Tech becomes Big Brother. Furthermore, companies put their employees, clients, and stakeholders at risk when handing data to Big Tech, which favors the role of tortfeasor over that of the good Samaritan who complies with consumer privacy laws.
To prevent Big Brother, organizations should implement their own RAG pipelines. Open-source frameworks such as LlamaIndex, Qdrant, and LangChain can be used to create powerful RAG pipelines with your privacy and interests protected. LLMWare has also released an open-source RAG pipeline and domain-specific embedding models. Generative AI is a powerful tool that can enhance our lives, but in the wrong hands the cyberpunk nightmare can become a reality. The ease of using prebuilt, turnkey systems, such as those offered by OpenAI, is appealing. However, the long-term risks of entrusting our valuable data to corporations, without a regulatory framework or protections, point in a potentially perilous direction.
Recently, I’ve noticed a surge of new and innovative legal research tools. I wondered what could be fueling this increase, and set off to find out more.
The Moat
An image generated by DALL-E, depicting a castle made of case law reporters, with sad business children trying to construct their own versions out of pieces of paper. They just look like sand castles.
Historically, acquiring case law data has been a significant challenge, acting as a barrier to newcomers in the legal research market. Established players are often protective of their data. For instance, in an antitrust counterclaim, ROSS Intelligence accused Thomson Reuters of withholding their public law collection, claiming they had to instead resort to purchasing cases piecemeal from sources like Casemaker and Fastcase. Other companies have taken more extreme measures. For example, Ravel Law partnered with the Harvard Law Library to scan every single opinion in their print reporter collections. There’s also speculation that major vendors might even license some of their materials directly to platforms like Google Scholar, albeit with stringent conditions.
The New Entrants
Despite the historic challenges, several new products have recently emerged offering advanced legal research capabilities:
Descrybe.ai (founded 2023) – This platform leverages generative AI to read and summarize judicial opinions, streamlining the search process. Currently hosting around 1.6 million summarized opinions, it’s available for free.
Midpage (2022) – Emphasizing the integration of legal research into the writing process, Midpage lets users employ generative AI to draft documents from selected sources (see Nicola Shaver’s short writeup on Midpage here). Midpage is currently free at app.midpage.ai.
CoPilot (by LawDroid, founded 2016) – Initially known for creating chatbots, LawDroid introduced CoPilot, a GPT-powered AI legal assistant, in 2023. It can perform various tasks, including research, translation, and summarization. CoPilot is available in beta as a web app and a Chrome extension, and is free for faculty and students.
Paxton.ai (2023) – Another generative AI legal assistant, Paxton.ai allows users to conduct legal research, draft documents, and more. Limited free access is available without signup at app.paxton.ai, although case law research requires signing up for a free account.
With the Caselaw Access Project, launched in 2015, Ravel Law and Harvard Law Library changed the game. Through their scanning project, Harvard received rights to the case law data, and Ravel gained an exclusive commercial license for 8 years. (When Lexis acquired Ravel a few years later, they committed to completing the project.) Although the official launch date of free access is February 2024, we are already seeing a free API at Ravel Law (as reported by Sarah Glassmeyer).
Caselaw Access Project data is only current through 2020 (scanning was completed in 2018, and has been supplemented by Fastcase donations through 2020) and does not include digital-first opinions. However, this gap is mostly filled through CourtListener, which contains a quite complete set of state and federal appellate opinions for recent years, painstakingly built through their network of web scrapers and direct publishing agreements. CourtListener offers an API (along with other options for bulk data use).
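For readers who want to experiment with that API, a search can be assembled in a few lines of Python. This is a minimal sketch using only the standard library; the endpoint version and parameter names (`q`, `type`, `court`, `filed_after`) reflect CourtListener’s REST API as I understand it, and should be verified against their current API documentation before you rely on them.

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

# CourtListener's REST search endpoint (v3 at the time of writing --
# check courtlistener.com's API help pages for the current version).
BASE_URL = "https://www.courtlistener.com/api/rest/v3/search/"

def build_search_url(query, court=None, filed_after=None):
    """Assemble a search URL for case-law opinions."""
    params = {"q": query, "type": "o"}  # "o" = opinions
    if court:
        params["court"] = court
    if filed_after:
        params["filed_after"] = filed_after  # e.g. "2020-01-01"
    return BASE_URL + "?" + urlencode(params)

def search_opinions(query, **kwargs):
    """Fetch the first page of matching opinions as a list of dicts."""
    with urlopen(build_search_url(query, **kwargs), timeout=30) as resp:
        return json.load(resp).get("results", [])
```

A call like `search_opinions("qualified immunity", court="scotus")` would return metadata for matching opinions; bulk-data downloads are the better route for anything larger than spot-checking.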
And indeed, Caselaw Access Project and Free Law Project just recently announced a dataset called Collaborative Open Legal Data (COLD) – Cases. COLD Cases is a dataset of 8.3 million United States legal decisions with text and metadata, suitable for use in machine learning and natural language processing projects.
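To give a feel for how a dataset like COLD Cases might slot into a machine learning pipeline, here is a tiny sketch that filters decisions by year before further processing. The record fields shown (`name`, `decision_date`, `court`, `text`) are illustrative assumptions, not the dataset’s documented schema — check the COLD Cases dataset card for the actual field names.

```python
# Hypothetical record shape, for illustration only -- the real COLD Cases
# schema may use different field names.
sample_case = {
    "name": "Smith v. Jones",
    "decision_date": "1998-06-15",
    "court": "Supreme Court of Ohio",
    "text": "...full opinion text...",
}

def filter_by_year(cases, start, end):
    """Keep decisions whose decision year falls within [start, end]."""
    kept = []
    for case in cases:
        year = int(case["decision_date"][:4])  # ISO dates sort by prefix
        if start <= year <= end:
            kept.append(case)
    return kept
```

In practice you would stream records from the published dataset rather than hold 8.3 million opinions in memory, but the filtering logic is the same.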
Most of the legal research products I mentioned above do not disclose the precise source of their case law data. However, both Descrybe.ai and Midpage point to CourtListener as a partner. My theory/opinion is that many of the others may be using this data as well, and that these new, more reliable and more complete sources of data are responsible for fueling some amazing innovation in the legal research sphere.
What Holes Remain?
Reviewing the coverage of CourtListener and the Caselaw Access Project, it appears to me that, combined, they have:
100% of all published U.S. case law from 2018 and earlier (state and federal)
100% of all U.S. Supreme Court, U.S. Circuit Court of Appeals, and state appellate court cases
There are, nevertheless, still a few holes that remain in the coverage:
Newer Reporter Citations. Newer appellate court decisions may not have reporter citations within CourtListener. These may be supplemented as Fastcase donates cases to Caselaw Access Project.
Newer Federal District Court Opinions. Although CourtListener collects federal decisions marked as “opinions” within PACER, these decisions are not yet available in their opinion search. Therefore, very few federal district court cases are available for the past 3-4 years. This functionality will likely be added, but even when it is, district courts are inconsistent about marking decisions as “opinions” and so not all federal district court opinions will make their way to CourtListener’s opinions database. To me, this brings into sharp relief the failure of federal courts to comply with the 2002 E-Government Act, which requires federal courts to provide online access to all written opinions.
State Trial Court Decisions. Some other legal research providers include state court trial-level decisions. These are generally not published on freely available websites (so CourtListener cannot scrape them) and are also typically not published in print reporters (so Caselaw Access Project could not scan them).
Tribal Law. Even the major vendors have patchy access to tribal law, and CourtListener has holes here as well.
The Elephant in the Room
Of course, another major factor in the increase in legal research tools may be simple economics. In August, Thomson Reuters acquired the legal research provider Casetext for the eye-watering sum of $650 million. And Casetext itself is a newer legal research provider, founded only in 2013. In interviews, Thomson Reuters cited Casetext’s access to domain-specific legal authority, as well as its early access to GPT-4, as key to its success.
What’s Next?
Both CourtListener and the Caselaw Access Project have big plans for continuing to increase access to case law. CAP will launch free API access in February 2024, coordinating with LexisNexis, Fastcase, and the Free Law Project on the launch. CourtListener is planning a scanning project to fix remaining gaps in their coverage (CourtListener’s Mike Lissner tells me they are interested in speaking to law librarians about this – please reach out). And I’m sure we can expect to see additional legal research tools, and potentially entire LLMs (hopefully open source!), trained on this legal data.
Know of anything else I didn’t discuss? Let me know in the comments, or find me on social media or email.
Please allow me a brief interlude for a non-legal tangent to update you on an unexpected ChatGPT medical use case and reason for my delayed posting.
On October 3rd, I was driving home, the usual thoughts of dinner plans swirling in my head. Unfortunately, the normalcy of my evening shattered as I exited the freeway and stopped at the traffic light. The driver behind me failed to stop at the light or for the accident he caused. Thinking that the damage was minor, I was more aggravated than worried as I described the events to the responding officer.
A few days later, my ability to focus disappeared. What should take minutes stretched into hours. After a trip to see my doctor, I was diagnosed with a mild concussion and told to avoid electronic screens. But the stubborn mule in me decided to power through grading assignments and teaching classes. Bad idea. I ended up causing myself great pain and extended my screen restrictions further.
The most frustrating part? I was suddenly missing words that I had been using for 20+ years. I’d stare at sentences I’d written, knowing something was off, but the right word eluded me. This was terrifying for someone whose profession revolves around precise and accurate word selection. I actively sought to regain my language capabilities.
It remains unclear what led me to the notion that ChatGPT could be a remedy to this problem. I soon found myself, however, feeding incorrect sentences to the chatbot, explaining the improper word choice, and requesting alternatives. And voila! Within seconds, ChatGPT offered options, often including the word that my mind was denying me. If the word did not come up right away, a prompt or two usually provided me with the word I sought. I was beyond grateful for the gift of my missing words.
Fast forward a month, and I am finally feeling closer to myself again. The missing words are minimal, but my appreciation for this technology has not diminished. In addition to being thankful for generative AI, I have begun wondering about its applications for others who have suffered from similar issues. My co-blogger, Becka Rich, is delving into the technology’s application for neurodiverse individuals, research which I follow closely. But I keep wondering if the technology has potential to benefit those who have suffered from traumatic brain injury or even mild dementia.
Two personal reasons shift my thoughts in this direction, beyond my recent concussion. First, I once had a student who was in a serious motor vehicle accident with a significant traumatic brain injury. She was on medical leave for over a year, and when she came back her cognitive struggles to write and speak at her previous levels were obvious. I wish this technology had been available to her then. It may have expedited regaining her confidence and language skills. Second, my family has a history of dementia. One of my biggest fears is losing myself to this disease eventually. Could this technology help delay a decline by reminding a dementia patient of their knowledge and keeping their memory active?
With these motivating thoughts, I began and continue researching the issue. Although abundant literature explores generative AI’s role in diagnostics and treatment planning, a discernible void exists with regard to patient use in cognitive rehabilitation. I finally came across a paper today that discusses AI’s use for diagnosing dementia and goes on to speculate that it has promise as part of the patient’s cognitive rehabilitation toolbox. Unfortunately, the authors do not delve too deeply into this topic or hint that research is currently being conducted on the issue (see p. 8 of PDF). This area seems ripe for further research.
This post wavers a bit from our legal focus, but hopefully you stuck with me through my non-legal tangent about my personal hiccup and my resulting discovery of an unexpected benefit of access to generative AI. I am curious to know what other, non-legal (as opposed to illegal) uses of generative AI you wish to see explored. While I am certainly not qualified to undertake medical research like this, I hope that this post will inspire someone who is qualified and who can help other grateful patients.
The “ChatGPT Attorney” case has drawn much attention, but it’s not the only example of lawyers facing problems with AI use. This blog will compile other instances where attorneys have gotten into trouble for incorporating AI into their practice. Updates will be made as new cases or suggestions arise, providing a centralized resource for both legal educators and practicing attorneys (or it can be used to update a Libguide 😉). I’ll also add this to one of our menus or headings for easy access.
Attorney Discipline
Park v. Kim, No. 22-2057, 2024 WL 332478 (2d Cir. Jan. 30, 2024)
“Attorney Jae S. Lee. Lee’s reply brief in this case includes a citation to a non-existent case, which she admits she generated using the artificial intelligence tool ChatGPT. Because citation in a brief to a non-existent case suggests conduct that falls below the basic obligations of counsel, we refer Attorney Lee to the Court’s Grievance Panel, and further direct Attorney Lee to furnish a copy of this decision to her client, Plaintiff-Appellant Park.”
Mata v. Avianca, Inc. (1:22-cv-01461) District Court, S.D. New York
I will not belabor the ChatGPT attorney (since it has been covered by real journalists like the NYT) – only provide links to the underlying dockets in case you need them since I get asked for them fairly often:
In a less publicized case from Colorado, an attorney, Zachariah Crabhill, relied on ChatGPT to draft a legal motion, only to find out later that the cited cases were fictitious. Unfortunately, the court filings are not accessible through El Paso County’s records or Bloomberg Law. If any Colorado law librarians can obtain these documents, please contact me, and I’ll update this post accordingly.
Ex Parte Allen Michael Lee, No. 10-22-00281-CR, 2023 WL 4624777 (Tex. Crim. App. July 19, 2023)
An Opinion of Chief Justice Tom Grey explains that Allen Michael Lee faces charges related to child sexual assault, with bail set at $400,000, which he hasn’t been able to post. Lee sought a bail reduction through a pre-trial habeas corpus application, but the court denied this, leading Lee to argue that the denial was an abuse of discretion due to excessive initial bail. However, his appeal was critiqued for inadequate citation, as the cases he referenced either didn’t exist or were unrelated to his arguments.
“Cuddy Law Firm in New York has been submitting exhibits of transcripts of interactions with ChatGPT to their motions for attorneys fees (essentially a back and forth to zero in on what is a reasonable rate) in several cases in S.D. NY.” [This is an ongoing action and we’re waiting to see if it is allowed] from reader Jason as a comment (very much appreciated, Jason)
A Spooky Glimpse into the Future
In 2019, Canadian Judge Whitten reduced an attorney’s requested fees on the grounds that the attorney had not utilized AI technology:
The decision concerned a request for attorneys’ fees and expenses by defendant, Port Dalhousie Vitalization Corporation (PDVC). The court granted summary judgment in PDVC’s favor against a woman who sued PDVC after she slipped and fell at an Ontario bar for which PDVC was the landlord. The bar, My Cottage BBQ and Brew, defaulted in the case. In his ruling, Justice Whitten mentioned that the use of AI in legal research would have reduced the amount of time one of the attorneys for the defendant would have spent preparing his client’s case.
In domains where AI can significantly expedite workflows, it could indeed become standard practice for judges to scrutinize fee requests more rigorously. Attorneys might be expected to leverage the latest technological tools to carry out tasks more efficiently, thereby justifying their fees. In this scenario, sticking to traditional, manual methods could be perceived as inefficient, and therefore, not cost-effective, leading to fee reductions. This has led many people to wonder if AI will expedite the decline of the billable hour (for more on that please see this fantastic discussion on 3 Geeks and a Law Blog, AI-Pocalypse: The Shocking Impact on Law Firm Profitability).
Within the rapidly advancing realm of generative AI, ChatGPT is expanding its inventory of human senses available to its GPT-4 subscribers. For a mere $20 a month, you can experience its new ability to see, speak, and hear! One of these senses, the “vision” enhancement, allows the user to upload images which ChatGPT can analyze and answer questions about. Currently, YouTube is abuzz with endless creative demonstrations of this technology – solving math problems, calculating food calories in a shopping cart, identifying landmarks in photos, reading CT scans, turning sketches into art, creating fashion designs, tracking wildlife, aging or de-aging faces, detecting plant diseases, restoring old photos, suggesting interior design changes, and analyzing aerial images for urban development and natural disasters. Multiple videos are highlighting an intriguing paper, “The Dawn of LMMs: Preliminary Explorations with GPT-4V(ision),” which explores over 100 potential applications of this feature, examining how well the system performs for each.
Originally for this blog post, I was going to examine a few examples of possible uses of this technology for legal practice. However, I got bogged down by my first example, analyzing evidence in photographs, because ChatGPT had so much to say about the image I uploaded! So, I’m focusing solely on that one.
Photograph Analysis
Photographs obviously play a huge role in legal practice. Lawyers, prosecutors, jurors, and others often rely heavily on photographs to understand events and assign blame – be it a photograph from a car accident, a crime scene, or even the positioning of a car in a parking violation scenario. Of course, in crime dramas and true crime documentaries, they either have a brilliant detective who identifies the key piece of evidence in the picture, or it is crowdsourced by the public when it appears in the news. Now, with ChatGPT’s newfound ability to “see,” it’s essentially doing its own form of crowdsourcing. Using the wisdom of countless human eyes contained in its massive training data, ChatGPT-4 can assist and provide insights that an individual’s eyes might miss.
Example – Analysis of Office Shelf Photograph
To test its abilities, I snapped a goofy photo of a shelf in my office and quizzed ChatGPT about the things it spotted. I was pretty impressed by its descriptive abilities and knowledge about some of the items. Exploring its possible uses for, say, the police, I pretended the picture was taken in the home of an international assassin, and then asked for some clues about the assassin. I think that it had some interesting ideas!
I uploaded the image and asked away!
Me: Hi! Please list the items you recognize in this picture.
ChatGPT: Sure! Here are the items I recognize in the picture:
A green piñata (or decorative item) made of fringed paper.
A figurine of an animal, possibly a kangaroo or wallaby, holding red objects (perhaps apples).
A piece of wood or nut with googly eyes attached, resembling a face with an open mouth.
Two glass or ceramic bottles with decorative paintings or inscriptions.
A candy cane or similarly striped stick.
A wooden hand sculpture.
A tray or plate with a pink border.
Various stickers or decals on a gray surface, including ones that say “Night Owl”, “Juicy”, and others.
Let me know if you need more information on any of these items!
Me: Great! You see the little bottle with the nature scene on it? Tell me what you know about that picture.
ChatGPT: Certainly! The bottle you’re referring to appears to be a type of “snuff bottle.” Snuff bottles are small bottles which were primarily used in ancient China to hold powdered tobacco (snuff). These bottles became popular during the Qing Dynasty. The use of snuff was a fashion among the Chinese during this period, and they liked to carry it in small bottles like the one in your picture…. (Note: It continued for 3 more paragraphs.)
Me: Can you read the Chinese characters on the bottle?
ChatGPT: (Note: No, too small, but could read it if I uploaded another picture.)
Me: If I told you that the brown thing with eyes is a desiccated potato, could you take a guess at how long it has been drying out?
ChatGPT: While I can provide a general overview based on the image and your description, it’s important to note that accurately determining the exact duration of desiccation for a potato based solely on a photograph is challenging.
That said, when a potato dries out, it loses its moisture and shrinks, and the skin can become wrinkled and tough. Over time, as the potato continues to dry, it may become lighter in weight and more shriveled in appearance. The potato in the image does appear quite shriveled and dried out.
However, the exact duration of desiccation can vary based on several factors, including… (Note: It goes on with a list of factors.)
Me: Yeah, that was a good guess! I accidentally left it in my office when I went to the AALL conference in July, and it is October 18 now.
ChatGPT: Ah, that makes sense! Leaving a potato in an environment like an office for several months would certainly lead to the kind of desiccation observed in the image. The absence of moisture and a relatively stable environment would have contributed to
its drying out. It’s interesting to see how everyday items can transform over time when left in certain conditions. I hope your AALL conference went well, and thanks for sharing this intriguing observation with me!
Me: Yeah, I was surprised at how it looked after a week! Can you tell what that is in the potato’s mouth?
ChatGPT: (No, didn’t guess correctly.)
Me: What if I gave you a hint? The tooth fairy.
ChatGPT: Ah, with that hint in mind, it seems the object in the potato’s mouth could represent a “tooth”! Perhaps it’s a playful representation of the potato losing a tooth and waiting for the tooth fairy. In that context, the object could be made of a variety of materials resembling a tooth, such as a piece of white foam, plastic, or even a small piece of chalk or stone. It’s a fun and creative idea!
Me: Actually, one day I was looking at the potato, and I saw one of my daughter’s baby teeth on the shelf! I don’t know when I put it there, but I do remember finding one in my wallet and thinking that I should hold on to it. So, I assume I stuck it on my shelf for safekeeping.
ChatGPT: That’s both surprising and heartwarming! Finding personal mementos in unexpected places can bring back a rush of memories and emotions. Baby teeth are often cherished keepsakes for many parents, reminding them of the fleeting nature of childhood. Your discovery adds a whimsical twist to the tradition of the tooth fairy! The potato unintentionally became a guardian for your daughter’s baby tooth. It’s a unique and memorable story you can share with her in the future.
Me: If this picture were taken by the police at the home of, say, an international assassin, would it give us any clues about the assassin?
ChatGPT: While it’s fun to imagine scenarios, it’s essential to approach such questions with a good dose of caution and skepticism. Using the image you provided, let’s hypothesize in a fictional context:
If the image were taken at the home of an “international assassin” in a crime novel or movie, investigators might look for clues in the following ways:
Personal Touches: The desiccated potato with a baby tooth could suggest a sentimental side or a connection to family. The tooth might indicate the assassin has a child or keepsakes from their past.
Cultural Clues: The snuff bottle with Chinese characters could hint at a connection to China, either personally, professionally, or as a destination of interest.
Hobbies or Interests: The various trinkets and decorations might suggest that the assassin has an interest in art, culture, or collecting unique items.
Stickers and Decals: The stickers on the gray surface might provide clues about places they’ve visited, brands they like, or organizations they support or are involved in.
Behavioral Clues: The fact that the potato was left to desiccate for months might indicate a person who travels frequently or isn’t often at home.
Potential Disguises: Items like the piñata could be tools for blending in at parties or social events.
However, it’s essential to note that these are purely speculative interpretations based on a fictional scenario. In a real-world context, such deductions would be highly speculative and could lead to false conclusions. Personal items in someone’s home often reflect their tastes, memories, and everyday life rather than their profession, especially one as clandestine as an “international assassin.”
Me: Thanks for your help today!
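The conversation above used the ChatGPT web interface, but the same kind of image analysis can be scripted against OpenAI’s API. The sketch below only assembles the request body for a vision query – it sends nothing – and the model name and payload shape reflect the API at the time of writing, so check OpenAI’s current API reference before using it.

```python
import base64

def build_vision_payload(image_bytes, question, model="gpt-4-vision-preview"):
    """Assemble a chat-completions request body that pairs a text question
    with an inline base64-encoded image (data URL form)."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": model,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": question},
                    {
                        "type": "image_url",
                        "image_url": {"url": f"data:image/jpeg;base64,{b64}"},
                    },
                ],
            }
        ],
        "max_tokens": 500,
    }
```

The returned dict would be POSTed (with an API key) to the chat completions endpoint; the question text can be exactly the sort of prompt used in the transcript above, such as “Please list the items you recognize in this picture.”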
Generative AI in the Legal Sphere
Well, that was definitely interesting! Then, naturally, I turned to ChatGPT to explore potential applications in the legal sector, and it had endless ideas. Here are but a few:
Generative AI, especially with its visual capabilities, offers a plethora of applications in the legal sector:
Document Review: Beyond just text, AI can now scan and interpret visual elements in legal documents, helping in tasks like verifying signatures or checking seals.
Evidence Analysis: AI can be used to enhance low-quality surveillance footage, identify faces in crowded scenes, or even analyze patterns in digital data visualizations.
Contract Visualization: Instead of lengthy text, visualize contract terms as diagrams or flowcharts, making them more digestible for clients.
Property Assessment: Using satellite imagery, AI can assess property boundaries, land usage, or even environmental compliance.
Accident Reconstructions: AI could recreate accident scenes in 3D based on photos and other evidence, aiding in personal injury or insurance cases.
Copyright Compliance: Scan vast repositories of images to identify potential copyright violations or unauthorized use of intellectual property.
Client Interactions: Use AI to generate visual explanations of complex legal scenarios, helping clients grasp intricate details through interactive diagrams or simulations.
Forensic Art: Create composites or age-progression photos in missing person cases or help juries visualize crime scenes.
Legal Research: AI can scan through visually rich documents like patents, identifying key diagrams or illustrations that could be crucial in patent litigation.
Conclusion
With tools like ChatGPT rapidly expanding their capabilities, the intersection of AI and law is only growing. ChatGPT’s new features point to real efficiencies and improvements for legal processes!