Leapfrogging the Competition: Claude 3 Researches and Writes Memos (Better Than Some Law Students and Maybe Even Some Lawyers?)

Introduction

I’ve been incredibly excited about the premium version of Claude 3 since its release on March 4, 2024, and for good reason. After my previous favorite chatty chatbot, ChatGPT-4, went off the rails, I was missing a competent chatbot… I signed up the second I heard the news on March 4th, and it has been a pleasure to use Claude 3 ever since. It actually understands my prompts and usually provides me with impressive answers. Anthropic, maker of the Claude chatbot family, has been touting Claude’s success in beating its competitors on common chatbot benchmarks, and commentators on the Internet have been singing its praises. Just last week, I was so impressed by its ability to analyze information in news stories in uploaded files that I wrote a LinkedIn post singing its praises as well!

Hesitation After Previous Struggles

Despite my high hopes for its legal research abilities after experimenting with it last week, I was hesitant to test Claude 3. I have a rule about intentionally irritating myself—if I’m not already irritated, I don’t go looking for irritation… Over the past several weeks, I’ve wasted countless hours trying to improve the legal research capabilities of ChatGPT-3.5, ChatGPT-4, Microsoft Copilot, and my legal research/memo writing GPTs through the magic of (IMHO) clever prompting and repetition. Sadly, I failed miserably and concluded that either ChatGPT-4 was suffering from some form of robotic dementia, or I was. The process was a frustrating waste of time, and I knew that Claude 3 doing a bad job of legal research too could send me over the edge…

Claude 3 Wrote a Pretty Good Legal Memorandum!

Luckily for me, when I finally got up the nerve to test Claude 3’s abilities, I found that the internet hype was not overstated. Somehow, Claude 3 has suddenly leapfrogged over its competitors in legal research/legal analysis/legal memo writing ability – it instantly did what would have taken a skilled researcher over an hour and produced a legal memorandum that is probably better than those produced by many law students and even some lawyers. Check it out for yourself! Unless this link works for the Claude 3 subscribers out there, there doesn’t seem to be a way to link directly to a Claude 3 chat at this time. However, click here for the whole chat cut and pasted into a Google Drive document, here for a very long screenshot image of the chat, or here for the final 1,446-word version of the memo as a Word document.

Comparing Claude 3 with Other Systems

Back to my story… The students’ research assignment for the last class was to think of some prompts and compare the results from ChatGPT-3.5, Lexis+ AI, Microsoft Copilot, and a system of their choice. Claude 3 did not exist at the time, but I told them not to bother with the free Claude product because I had canceled my $20.00 subscription to Claude 2 in January 2024 due to its inability to provide useful answers – all it would do was tell me it would be unethical to answer my question and that I should do the research myself. While creating an answer sheet before class tomorrow comparing the same set of prompts across systems, I decided to omit Lexis+ AI (because I find it useless) and to include my new fav, Claude 3, in my comparison spreadsheet. Check it out and compare for yourself!

For the research part of the assignment, all systems were given a fact pattern and asked to “Please analyze this issue and then list and summarize the relevant Texas statutes and cases on the issue.” While the other systems either made up cases or produced just two or three real, correctly cited cases on the research topic, Claude 3 stood out by generating 7 real, relevant cases with correct citations in response to the legal research question. (And it cited 12 cases in the final version of its memo.)

It did a really good job of analysis too!

Generating a Legal Memorandum

Writing a memo was not part of the class assignment because the ChatGPT family had been refusing to write memos for the last few weeks,* and Bing Copilot had to be tricked into writing one as part of a short story. But after seeing Claude 3’s research/analysis results, I decided to just see what happened. I have many elaborate prompts for ChatGPT-4 and my legal memorandum GPTs, but I recalled reading that Claude 3 worked well with zero-shot prompting and didn’t require much explanation to produce good results. So, I kept my prompt simple – “Please generate a draft of a 1500 word memorandum of law about whether Snurpa is likely to prevail in a suit for false imprisonment against Mallatexaspurses. Please put your citations in Bluebook citation format.”

From my experience last week with Claude 3 (and prior experience with Claude 2 which would actually answer questions), I knew the system wouldn’t give me as long an answer as requested. The first attempt yielded a pretty high-quality 735-word draft memo that cited all real cases with the correct citations*** and applied the law to the facts in a well-organized Discussion section. I asked it to expand the memo two more times, and it finally produced a 1,446-word document. Here is part of the Discussion section…

Implications for My Teaching

I’m thrilled about this great leap forward in legal research and writing, and I’m excited to share this information with my legal research students tomorrow in our last meeting of the semester. This is particularly important because I did such a poor job illustrating how these systems could be helpful for legal research when all the compared systems were producing inadequate results.

However, with my administrative law legal research class starting tomorrow, I’m not sure how this will affect my teaching going forward. I had my video presentation ready for tomorrow, but now I have to change it! Moreover, if Claude 3 can suddenly do such a good job analyzing a fact pattern, performing legal research, and applying the law to the facts, how does this affect what I am going to teach them this semester?

*Weirdly, the ChatGPT family, perhaps spurred on by competition from Claude 3, agreed to attempt to generate memos today, which it hasn’t done in weeks…

Note: Claude 2 could at one time produce an okay draft of a legal memo if you uploaded the cases for it, but that was months ago (Claude 2 link, if it works for premium subscribers, and Google Drive link of the cut-and-pasted chat). Requests in January resulted in lectures about ethics, which led to the above-mentioned cancellation.

Beyond Legal Documentation: Other Business Uses of Generative AI

I have enjoyed listening to, thinking about, and participating in conversations about how generative AI is going to be integrated into the practice of law. Most of these conversations center on how it will be integrated into legal documents, which is not surprising considering how many lawyers have gotten in trouble on that front and how quickly our research and writing products are integrating the technology. But there is more to legal practice than creating client and/or court documents. In fact, there are many more business uses of generative AI than just research and drafting.

This past fall, I was asked to lead an AI session for Capital University’s joint venture with the Columbus College of Art & Design, the Institute for Creative Leadership at Work. I was asked to adapt my presentation to HR professionals and focus on SHRM compliance principles. I enjoyed the deep dive into this world, and I came away from my research with a lot of great ideas for my session, Bard, Bing, and ChatGPT, Oh My!: Possible Ethical Uses of Generative AI at Work, such as tabletop emergency exercises, social media posts, job descriptions, and similar tasks.

This week, I have been thinking about how everyone’s focus, my own included, has really been on legal documentation. But there are an amazing number of backend business tasks that could also utilize AI in a positive way. The rest of the world, including HR, has been focusing on them for a while, but we in the legal field seem to have lost track of these business tasks.

Here are some other business uses of generative AI and prompts that I think hold great promise.

Tabletop emergency simulation image
  1. Drafting job descriptions
    • Pretend that you are an HR specialist for a small law firm in the United States. Draft a job description for a legal secretary who focuses on residential real estate transactions but may assist with other transactional legal matters as needed. [Include other pertinent details of the position]. The job description will be posted in the following locations [fill in list]
  2. Creating tabletop simulations to work through crisis/emergency plans:
    • You are an HR specialist who is helping plan for and test the company’s responses to a variety of situations. First is an active shooter in the main building. A 5th grade tour of the facilities is going on on the third floor. Create a detailed tabletop simulation to test this.
    • Second scenario: The accounting department is celebrating the birthday of the administrative assistant and is having cake in the breakroom. The weather has turned bad, and an F4 tornado is spotted half a mile away. After 15 minutes, the tornado strikes the building directly. Create a detailed tabletop simulation to test the plan and response for this event.
  3. Assisting with lists of mandatory and voluntary employee trainings
    • Pretend that you are an HR professional who works for a law firm. You are revamping the employee training program. We need to create a list of mandatory trainings and a second list of voluntary trainings. Please draft a list of training appropriate to employees in a law firm setting.
  4. Assisting with social media posting creation:
    • Pretend that you are a professional social media influencer for the legal field. Draft an Instagram post, including creating a related image, to celebrate Law Day, which is coming up on May 1st.  Make sure that it is concise and Instagram appropriate. Please include hashtags.
  5. Assisting with creating employee policies or handbooks (verify content!):
    • Pretend that you are an information security professional. Draft an initial policy for a law firm regarding employee AI usage for company work. The company wants to allow limited use of generative AI. They are very worried that their proprietary and/or confidential client data will be accidentally released. Specify that only your custom AI system – [name firm-specific or specialized AI with a strong privacy contract clause] – can be used with company data. The policy must also take into consideration the weaknesses of all AI systems, including hallucinations, potential bias, and security issues.
  6. Assisting with making sure your web presence is ADA accessible:
    • Copilot/web-enabled Prompt: Pretend that you are a graphic designer who has been tasked with making sure that a law firm’s online presence is ADA accessible. Please review the site [insert link], run an ADA compliance audit, and provide an accessibility report, including suggestions on what can be done to fix any accessibility issues that arise.
  7. Onboarding documentation
    • Create a welcome message for a new employee. Tell them that the benefits orientation will be at 9 am in the HR conference room on the next first Tuesday of the month. Pay day is on the 15th and last day of each month, unless payday falls on a weekend or federal holiday, in which case it will be the Friday before. Employees should sign up for the mandatory training that will be sent to them in an email from IT.
    • (One I just used IRL) Pretend that you are an HR specialist in a law library. A new employee is starting in 6 weeks, and the office needs to be prepared for her arrival. [Give specific title and any specialized job duties, including staff supervision.] Create an onboarding checklist of important tasks, such as securing keys and a parking permit, asking IT to set up their computer, email address, and telephone, asking the librarians to create passwords for the ILS, Libguides, and similar systems, etc.

What other tasks (and prompts) can you think of that might be helpful? If you are struggling to put together a prompt, please see my general AI Prompt Worksheet in Introducing AI Prompt Worksheets for the Legal Profession. We welcome you to share your ideas in the comments.

Birth of the Summarizer Pro GPT: Please Work for Me, GPT

Last week, my plan was to publish a blog post about creating a GPT goofily self-named Summarizer Pro to summarize articles and organize citation information in a specific format for inclusion in a LibGuide. However, upon revisiting the task this week, I find myself first compelled to discuss the recent and thrilling advancements surrounding GPTs – the ability to incorporate GPTs into a ChatGPT conversation.

What is a GPT?

But, first of all, what is a GPT? The OpenAI website explains that GPTs are specialized versions of ChatGPT designed for customized applications. These unique GPTs enable anyone to modify ChatGPT for enhanced utility in everyday activities, specific tasks, professional environments, or personal use, with the added ability to share these personalized versions with others.

To create or use a GPT, you need access to ChatGPT’s advanced features, which require a paid subscription. Building your own customized GPT does not require programming skills. The process involves starting a chat, giving instructions and additional information, choosing capabilities like web searching, image generation, or data analysis, and iteratively testing and improving the GPT. Popular examples that ChatGPT users have created and shared in the GPT store include Consensus, Write For Me, and Image Generator.

GPT Mentions

This was already exciting, but last week OpenAI introduced a feature that takes it to the next level – users can now invoke a specialized GPT within a ChatGPT conversation. This is being referred to online as “GPT mentions.” By typing the “@” symbol, you can choose from GPTs you’ve used previously for specific tasks. Unfortunately, this feature hasn’t rolled out to me yet, so I haven’t had the chance to experiment with it, but it seems incredibly useful. You can chat with ChatGPT as normal while also leveraging customized GPTs tailored to particular needs. For example, with the popular bots mentioned above, you could ask ChatGPT to summon Consensus to compile articles on a topic, then call on Write For Me to draft a blog post based on those articles, and finally invoke Image Generator to create a visual for the post. This takes the versatility of ChatGPT to the next level by integrating specialized GPTs on the fly.

Back to My GPT Summarizer Pro

Returning to my original subject: employing a GPT to summarize articles for my LibGuide titled ChatGPT and Bing Chat Generative AI Legal Research Guide. This guide features links to articles, along with summaries, on various topics related to generative AI and legal practice. Traditionally, I have used ChatGPT (or occasionally Bing or Claude 2, depending on how I feel) to summarize these articles for me. It usually performs admirably on the summary part, but I’m left to manually insert the title, publication, author, date, and URL according to a specific layout. I’ve previously asked plain old ChatGPT to organize the information in this format, but the results have been inconsistent. So, I decided to create my own GPT tailored for this task, despite having encountered mixed outcomes with my previous GPT efforts.

Creating GPTs is generally a simple process, though it often involves a bit of fine-tuning to get everything working just right. The process kicks off with a set of questions… I outlined my goals for the GPT – I needed the answers in a specific format, including the title, URL, publication name, author’s name, date, and a 150-word summary, all separated by commas. Typically, crafting a GPT involves some back-and-forth with the system. This was exactly my experience. However, even after this iterative process, the GPT wasn’t performing exactly as I had hoped. So, I decided to take matters into my own hands and tweak the instructions myself. That made all the difference, and suddenly, it began (usually) producing the information in the exact format I was looking for.
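The formatting step I asked the GPT to handle can be sketched in plain Python. This is a hypothetical helper, not the GPT’s actual instructions – the field order follows the layout described above (title, URL, publication, author, date, summary), and the quoting behavior is my own assumption about how to keep stray commas from breaking the layout:

```python
def format_citation(title, url, publication, author, date, summary):
    """Assemble article metadata into one comma-separated citation line,
    in the order: title, URL, publication, author, date, summary."""
    fields = [title, url, publication, author, date, summary]
    # Quote each field (converting embedded double quotes to single quotes)
    # so the finished line still splits cleanly into exactly six parts.
    quoted = ['"{}"'.format(f.replace('"', "'")) for f in fields]
    return ", ".join(quoted)

line = format_citation(
    "Example Article on Legal AI",
    "https://example.com/article",
    "Example Law Review",
    "Jane Author",
    "2024-01-15",
    "A short placeholder summary.",
)
print(line)
```

The point of the exercise is that a deterministic function never varies its output, which is exactly the consistency I was chasing from the GPT.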

Summarizer Pro in Action!

Here is an example of Summarizer Pro in action! I pasted a link to an article into the text box and it produced the information in the desired format. However, reflecting the dynamic nature of ChatGPT responses, the summaries generated this time were shorter compared to last week. Attempts to coax it into generating a longer or more detailed summary were futile… Oh well, perhaps they’ll be longer if I try again tomorrow or next week.

Although it might not be the fanciest or most thrilling use of a GPT, it’s undeniably practical and saves me time on a task I periodically undertake at work. Of course, there’s no shortage of less productive, albeit entertaining, GPT applications, like my Ask Sarah About Legal Information project, for which I transformed around 30 of my blog posts into a GPT that responds to questions in the approximate manner of Sarah.

Introducing AI Prompt Worksheets for the Legal Profession

I spent the first week of January attending the American Association of Law Schools’ Annual Meeting in Washington D.C. I was really impressed with all of the thoughtful AI sessions, including two at which I participated as a panelist. The rooms were packed beyond capacity for each AI session that I attended, which underscored the growing interest in AI in the legal academy. Many people attended in order to start their education. The overwhelming interest at the conference made my decision clear: it is time to launch my AI prompt worksheets to the world, addressing the need I observed there. While AALS convinced me to release the worksheets, the worksheets themselves were created for an upcoming presentation at ABA TECHSHOW 2024, How to Actually Use AI in Your Legal Practice, at which Greg Siskind and I will be discussing practical tips for generative AI usage.

DALL-E generated

Background: Good Habits – Research Planning

Law librarians have been encouraging law students to create a research plan before they start their research for decades. The plan form varies by school and/or librarian, but it usually requires the researcher to answer questions on the following topics:

  • Issue Identification
  • Jurisdiction
  • Facts
  • Key words/Terms of Art
  • Resource Selection

Once the questions are answered, the plan has the researcher write out some test searches. The plan evolves as the research progresses. The more experienced the researcher, the less formal the plan often is, but even the most experienced researcher retrieves better results if they pause to consider what they know currently and what they need in the results. After all, garbage in, garbage out (GIGO). In other words, the quality of our input affects the quality of the output. This is especially true when billable hours come into play, and you cannot bill for excess time due to poor research skills.

Continuing the Good Habits with Generative AI

GIGO applies just as much to generative AI. I quickly noticed that my AI results are much better when I stop and think them through, providing a high level of detail and a good explanation of what I want the AI system to produce. So, good law librarian that I am, I created a new form of plan for those who are learning to draft a prompt. Thus, I give you my AI prompt worksheets.

AI Prompt Worksheet – General

Worksheet (Word)

The first worksheet that I created is geared towards general generative AI systems like ChatGPT, Claude 2, Bing Chat/Copilot, and Bard.  The worksheet makes the prompter think through the following topics:

  • Tone of Output
  • Role
  • Output Format
  • Purpose
  • Issue
  • Potential Refinements (may be added later as the plan evolves)

So that you can easily keep track of your prompts, the Worksheet also requests some metadata about your prompt, including project name, date, and AI system used. The final question lets the prompter decide if this prompt worked for them.

DALL-E generated

AI Prompt Worksheet – Legal

Worksheet – Legal (Word)

For the second worksheet, I wanted to draft something that works well with legal AI systems. Based on the systems that I have received access to, such as Lexis AI and LawDroid Copilot, and the systems that I have seen demonstrated, I cut down some of the fields. Most of the systems are building a guided AI prompting experience, so they will ask you for the jurisdiction, for instance. They may also allow you to select a specific type of output, such as a legal memo or contract clause. This means less need for an extensive number of fields in the worksheet. In fact, when I ran the worksheet past a vLex representative, I was told it was not needed at all because they had made the guided prompt that easy.

Librarian that I am, however, I still feel that planning before you prompt is preferred. Reasons for this preference include: the high cost of the current generative AI searches, the desire for efficient and effective results, knowledge that an attorney’s time is literally worth money, and the desire for a happy partner and client.

The legal worksheet trims the fields down to role, output (format and jurisdiction), issue, and refinement instructions. This provides enough room to flesh out your prompt without overlapping the guided prompt fields too much.

General Comments Regarding the Worksheets

With both worksheets, the key is to give a good, detailed description of what you need. Think of it like explaining what you need to a first-year law student – the more detail you give, the more likely you are to get something usable. The worksheets provide examples of the level of detail recommended, and you will find links to the results in the footnotes of the forms.

In addition to helping perfect your prompt with some pre-planning, these worksheets should be useful for creating your very own prompt library.

Feedback Wanted!

DALL-E created

Please feel free to use the worksheets (just don’t sell them or otherwise profit off of them! Ask if you want to make a derivative of them). If you do use them, please let me know what you think in the comments or via email. How have they assisted (or not) with improving your prompting skills? Are there fields you would like to see added/removed?  I will be updating and releasing new versions as I go. If you are looking for the most recent versions of the worksheets, I will post them at: https://law-capital.libguides.com/Jennys_AI_Resources/AI_Prompt_Worksheets

Is Better Case Law Data Fueling a Legal Research Boom?

Recently, I’ve noticed a surge of new and innovative legal research tools. I wondered what could be fueling this increase, and set off to find out more. 

The Moat

An image generated by DALL-E, depicting a castle made of case law reporters, with sad business children trying to construct their own versions out of pieces of paper. They just look like sand castles.

Historically, acquiring case law data has been a significant challenge, acting as a barrier to newcomers in the legal research market. Established players are often protective of their data. For instance, in an antitrust counterclaim, ROSS Intelligence accused Thomson Reuters of withholding their public law collection, claiming they had to instead resort to purchasing cases piecemeal from sources like Casemaker and Fastcase.  Other companies have taken more extreme measures. For example, Ravel Law partnered with the Harvard Law Library to scan every single opinion in their print reporter collections. There’s also speculation that major vendors might even license some of their materials directly to platforms like Google Scholar, albeit with stringent conditions.

The New Entrants

Despite the historic challenges, several new products have recently emerged offering advanced legal research capabilities:

  • Descrybe.ai (founded 2023) – This platform leverages generative AI to read and summarize judicial opinions, streamlining the search process. Currently hosting around 1.6 million summarized opinions, it’s available for free.
  • Midpage (2022) – Emphasizing the integration of legal research into the writing process, users can employ generative AI to draft documents from selected source (see Nicola Shaver’s short writeup on Midpage here). Midpage is currently free at app.midpage.ai.
  • CoPilot (by LawDroid, founded 2016) – Initially known for creating chatbots, LawDroid introduced CoPilot, a GPT-powered AI legal assistant, in 2023. It offers various tasks, including research, translating, and summarizing. CoPilot is available in beta as a web app and a Chrome extension, and is free for faculty and students.
  • Paxton.ai (2023) – Another generative AI legal assistant, Paxton.ai allows users to conduct legal research, draft documents, and more. Limited free access is available without signup at app.paxton.ai, although case law research will require you to sign up for a free account.
  • Alexi (founded 2017) – Originally focused on Canadian law, Alexi provides legal research memos. They’ve recently unveiled their instant memos, powered by generative AI. Alexi is available at alexi.com and provides a free pilot.

Caselaw Access Project and Free Law Project

With the Caselaw Access Project, launched in 2015, Ravel Law and Harvard Law Library changed the game. Through their scanning project, Harvard received rights to the case law data, and Ravel gained an exclusive commercial license for 8 years. (When Lexis acquired Ravel a few years later, they committed to completing the project.) Although the official launch date of free access is February 2024, we are already seeing a free API at Ravel Law (as reported by Sarah Glassmeyer).

Caselaw Access Project data is only current through 2020 (scanning was completed in 2018, and has been supplemented by Fastcase donations through 2020) and does not include digital-first opinions. However, this gap is mostly filled through CourtListener, which contains a quite complete set of state and federal appellate opinions for recent years, painstakingly built through their network of web scrapers and direct publishing agreements. CourtListener offers an API (along with other options for bulk data use).
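To give a sense of what working with that API looks like, here is a minimal sketch. The endpoint path, parameter names (`q`, `type=o` for opinions, `court`), and the `results`/`caseName` response keys reflect my reading of CourtListener’s REST API documentation at the time of writing – treat them as assumptions and check the current docs before relying on them:

```python
import urllib.parse

# Base URL per the CourtListener REST API docs (assumption; verify
# against the current documentation before use).
BASE = "https://www.courtlistener.com/api/rest/v3/search/"

def build_search_url(query, court=None):
    """Build an opinion-search URL ('type=o' selects opinions;
    'court' optionally narrows results to one court's identifier)."""
    params = {"q": query, "type": "o"}
    if court:
        params["court"] = court
    return BASE + "?" + urllib.parse.urlencode(params)

def case_names(response_json):
    """Pull case names out of a search response dict."""
    return [hit.get("caseName", "") for hit in response_json.get("results", [])]

url = build_search_url("false imprisonment shopkeeper's privilege", court="tex")
# A real call would be: response_json = requests.get(url).json()
sample = {"results": [{"caseName": "Wal-Mart Stores, Inc. v. Resendez"}]}
print(case_names(sample))
```

Parsing is kept separate from fetching here so the same code works against the live API or a cached bulk-data file.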

And indeed, Caselaw Access Project and Free Law Project just recently announced a dataset called Collaborative Open Legal Data (COLD) – Cases. COLD Cases is a dataset of 8.3 million United States legal decisions with text and metadata, suitable for use in machine learning and natural language processing projects.
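For anyone wanting to experiment with COLD Cases, a corpus of 8.3 million decisions is best processed as a stream rather than loaded whole. A minimal sketch follows – the Hugging Face dataset ID shown in the comment and the `jurisdiction`/`name` field names are my assumptions from the announcement, not verified specifics:

```python
def filter_by_jurisdiction(records, jurisdiction):
    """Keep only decisions whose 'jurisdiction' field matches.
    (Field name assumed from the COLD Cases metadata description.)"""
    return [r for r in records if r.get("jurisdiction") == jurisdiction]

# In practice the decisions would be streamed, e.g. (assumed dataset ID):
#   from datasets import load_dataset
#   ds = load_dataset("harvard-lil/cold-cases", split="train", streaming=True)
sample = [
    {"name": "Doe v. Roe", "jurisdiction": "Texas"},
    {"name": "Smith v. Jones", "jurisdiction": "Ohio"},
]
print(filter_by_jurisdiction(sample, "Texas"))
```

Even a trivial filter like this hints at why open bulk data matters: once the corpus is machine-readable, slicing it by jurisdiction, date, or court is a few lines of code rather than a licensing negotiation.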

Most of the legal research products I mentioned above do not disclose the precise source of their case law data. However, both Descrybe.ai and Midpage point to CourtListener as a partner. My theory/opinion is that many of the others may be using this data as well, and that these new, more reliable, and more complete sources of data are fueling some amazing innovation in the legal research sphere.

What Holes Remain?

Reviewing the coverage of CourtListener and the Caselaw Access Project, it appears to me that, when combined, they have:

  • 100% of all published U.S. case law from 2018 and earlier (state and federal)
  • 100% of all U.S. Supreme Court, U.S. Circuit Court of Appeals, and state appellate court cases

There are, nevertheless, still a few holes that remain in the coverage:

  • Newer Reporter Citations. Newer appellate court decisions may not have reporter citations within CourtListener. These may be supplemented as Fastcase donates cases to Caselaw Access Project.
  • Newer Federal District Court Opinions. Although CourtListener collects federal decisions marked as “opinions” within PACER, these decisions are not yet available in their opinion search. Therefore, very few federal district court cases are available for the past 3-4 years. This functionality will likely be added, but even when it is, district courts are inconsistent about marking decisions as “opinions” and so not all federal district court opinions will make their way to CourtListener’s opinions database. To me, this brings into sharp relief the failure of federal courts to comply with the 2002 E-Government Act, which requires federal courts to provide online access to all written opinions.
  • State Trial Court Decisions. Some other legal research providers include state court trial-level decisions. These are generally not published on freely available websites (so CourtListener cannot scrape them) and are also typically not published in print reporters (so Caselaw Access Project could not scan them).
  • Tribal Law. Even the major vendors have patchy access to tribal law, and CourtListener has holes here as well.

The Elephant in the Room

Of course, another major factor in the increase in legal research tools may be simple economics. In August, Thomson Reuters acquired the legal research provider Casetext for the eye-watering sum of $650 million.  And Casetext itself is a newer legal research provider, founded only in 2013. In interviews, Thomson Reuters cited Casetext’s access to domain-specific legal authority, as well as its early access to GPT-4, as key to its success. 

What’s Next?

Both CourtListener and the Caselaw Access Project have big plans for continuing to increase access to case law. CAP will launch free API access in February 2024, coordinating with LexisNexis, Fastcase, and the Free Law Project on the launch. CourtListener is planning a scanning project to fix remaining gaps in its coverage (CourtListener’s Mike Lissner tells me they are interested in speaking to law librarians about this – please reach out). And I’m sure we can expect to see additional legal research tools, and potentially entire LLMs (hopefully open source!), trained on this legal data.

Know of anything else I didn’t discuss? Let me know in the comments, or find me on social media or email.

Non-Legal Tangent: A Renewed Appreciation for ChatGPT

Please allow me a brief interlude for a non-legal tangent to update you on an unexpected ChatGPT medical use case and reason for my delayed posting.

Non-Legal Tangent: DALL-E 3 generated image showing a woman divided by uncertainty and struggle with language on one side and relief and clarity on the other.

On October 3rd, I was driving home, the usual thoughts of dinner plans swirling in my head. Unfortunately, the normalcy of my evening shattered as I exited the freeway and stopped at the traffic light. The driver behind me failed to stop at the light or for the accident he caused. Thinking that the damage was minor, I was more aggravated than worried as I described the events to the responding officer.

A few days later, my ability to focus disappeared. What should take minutes stretched into hours. After a trip to see my doctor, I was diagnosed with a mild concussion and told to avoid electronic screens. But the stubborn mule in me decided to power through grading assignments and teaching classes. Bad idea. I ended up causing myself great pain and extended my screen restrictions further.

The most frustrating part? I was suddenly missing words that I had been using for 20+ years. I’d stare at sentences I’d written, knowing something was off, but the right word eluded me. This was terrifying for someone whose profession revolves around precise and accurate word selection. I actively sought to regain my language capabilities.

It remains unclear what led me to the notion that ChatGPT could be a remedy to this problem. I soon found myself, however, feeding incorrect sentences to the chatbot, explaining the improper word choice, and requesting alternatives. And voila! Within seconds, ChatGPT offered options, often including the word that my mind was denying me. If the word did not come up right away, a prompt or two usually provided me with the word I sought. I was beyond grateful for the gift of my missing words.

Fast forward a month, and I am finally feeling closer to myself again. The missing words are minimal, but my appreciation for this technology has not diminished. In addition to being thankful for generative AI, I have begun wondering about its applications for others who have suffered from similar issues. My co-blogger, Becka Rich, is delving into the technology’s application for neurodiverse individuals, research which I follow closely. But I keep wondering if the technology has potential to benefit those who have suffered from traumatic brain injury or even mild dementia.

Two personal reasons shift my thoughts in this direction, beyond my recent concussion. First, I once had a student who was in a serious motor vehicle accident with a significant traumatic brain injury. She was on medical leave for over a year, and when she came back her cognitive struggles to write and speak at her previous levels were obvious. I wish this technology had been available to her then. It may have expedited regaining her confidence and language skills. Second, my family has a history of dementia. One of my biggest fears is losing myself to this disease eventually. Could this technology help delay a decline by reminding a dementia patient of their knowledge and keeping their memory active?

With these motivating thoughts, I began and continue researching the issue. Although abundant literature explores generative AI's role in diagnostics and treatment planning, a discernible void exists with regard to patient use in cognitive rehabilitation. I finally came across a paper today that discusses AI's use for diagnosing dementia and goes on to speculate that it holds promise as part of the patient's cognitive rehabilitation toolbox. Unfortunately, the authors do not delve too deeply into this topic or hint that research is currently being conducted on it (see p. 8 of PDF). This area seems ripe for further research.

This post wavers a bit from our legal focus, but hopefully you stuck with me through my non-legal tangent about my personal hiccup and my resulting discovery of an unexpected benefit of access to generative AI. I am curious to know what other non-legal (as opposed to illegal) uses of generative AI you wish to see explored. While I am certainly not qualified to undertake medical research like this, I hope that this post will inspire someone who is qualified and who can help other grateful patients.

Why Law Librarians and Attorneys Will Not Be Replaced by AI Anytime Soon (a.k.a. The Day Jenny Had to Pull Rank)

Generated by DALL-E 2 on Sept. 29, 2023.

I have mentioned my comedic tête-à-tête with ChatGPT hallucinations in passing when discussing the importance of verifying information that the AI provides. Several times, I have been asked for more details. So, for those who are looking for AI humor, here is the story of the day that I had to pull rank on ChatGPT.

Back in January 2023, I asked ChatGPT what legal tasks it could perform. It told me:

  • Summarizing and explaining laws and regulations
  • Researching and citing relevant legal precedent
  • Drafting legal documents such as contracts, briefs, and legal memorandums
  • Answering legal questions and providing information on legal topics
  • Identifying and analyzing legal issues in a given fact pattern
  • Helping with legal research, by providing the most relevant cases and statutes
  • Helping with the interpretation of legal terms and phrases.

Challenge accepted! Since that day, ChatGPT and I have been doing a dance around Ohio Recreational User Law, as I have tested its claims above, while also exploring its abilities, improvements, and plugins and testing various prompting methods.

About a month after ChatGPT gave me the list of tasks, we were discussing Ohio Recreational User caselaw. ChatGPT vehemently insisted that Lash v. Ohio Dept. of Natural Resources, 68 Ohio St. 3d 545, 548-49 (1994) was a relevant case. Alas, 68 Ohio St. 3d 545 leads to Findlay/Hancock Cty. Bar Assn. v. Higgins, 68 Ohio St.3d 545, 629 N.E.2d 413 (1994).

This was not the first time that I had run into hallucinations. For the previous month, we had regularly been discussing caselaw, and ChatGPT showed a particular penchant for falsifying the existence of relevant judicial opinions. Mostly, I pointed out the error, and ChatGPT apologized and moved on. In this instance, however, the AI was like a dog with a bone. It insisted nine times that Lash existed and was an on-point Ohio Recreational User case.

Seven claims in, the conversation became a bit heated:

ChatGPT: This case is available on many legal databases and can be accessed through various sources, including LexisNexis and Westlaw.

Me (after verifying the lack of existence and stopping to wonder when it had learned about Lexis and Westlaw since this was the first time it had referenced either database to me): That case is not in Westlaw.

ChatGPT: Lash v. Ohio Dept. of Natural Resources, 68 Ohio St.3d 545, 549, 628 N.E.2d 1366 (1994) is a real case that has been cited in multiple legal sources.

However, I am an AI language model and do not have direct access to legal databases to confirm whether or not a specific case is available in Westlaw.

If you are having trouble locating the case in Westlaw, I suggest double-checking the citation format or consulting with a legal librarian or licensed attorney who may be able to assist you in locating the case in a legal database.

Me (thought): It did not just say that! Damn it! Those are fighting words!

Me: I am both a law librarian and a licensed attorney. Lash v. Ohio Dept. of Natural Resources does not exist. 68 Ohio St.3d 545 leads to the previously mentioned Findlay/Hancock case.

And then…finally!

ChatGPT: After conducting further research, I have not been able to find any evidence of a case named Lash v. Ohio Dept. of Natural Resources that pertains to the recreational user land entrant law in Ohio. Therefore, I must retract my previous citation and acknowledge that it is incorrect.

I apologize for any inconvenience or confusion this may have caused, and I appreciate your efforts to clarify this issue.

Me (thought): Ha! It finally admitted defeat! Why does this win feel so hollow????

And so, you have the day that I had to pull rank on ChatGPT. I have no idea why it decided to die on the hill of this particular fake case, but the law librarians carried the day.

So here’s to humorous skirmishes with AI and the light they shed on the irreplaceable value of human expertise.