Beyond Legal Documentation: Other Business Uses of Generative AI

I have been listening to, thinking about, and participating in conversations about how generative AI is going to be integrated into the practice of law. Most of these conversations center on how it will be integrated into legal documents, which is not surprising considering how many lawyers have gotten in trouble on this front and how quickly our research and writing products are integrating the technology. But there is more to legal practice than creating client and court documents. In fact, there are many more business uses of generative AI than just research and drafting.

This past fall, I was asked to lead an AI session for Capital University’s joint venture with the Columbus College of Art & Design, the Institute for Creative Leadership at Work. I was asked to adapt my presentation for HR professionals and focus on SHRM compliance principles. I enjoyed the deep dive into this world, and I came away from my research with a lot of great ideas for my session, Bard, Bing, and ChatGPT, Oh My!: Possible Ethical Uses of Generative AI at Work, such as tabletop emergency exercises, social media posts, job descriptions, and similar tasks.

This week, I have been thinking about how everyone’s focus, my own included, has really been on legal documentation. But there are an amazing number of back-end business tasks that could also utilize AI in a positive way. The rest of the world, including HR, has been focusing on them for a while, but those of us in legal seem to have lost track of these business tasks.

Here are some other business uses of generative AI and prompts that I think hold great promise.

  1. Drafting job descriptions
    • Pretend that you are an HR specialist for a small law firm in the United States. Draft a job description for a legal secretary who focuses on residential real estate transactions but may assist with other transactional legal matters as needed. [Include other pertinent details of the position]. The job description will be posted in the following locations [fill in list]
  2. Creating tabletop simulations to work through crisis/emergency plans:
    • You are an HR specialist who is helping plan for and test the company’s responses to a variety of situations. First is an active shooter in the main building while a fifth-grade tour of the facilities is underway on the third floor. Create a detailed tabletop simulation to test this.
    • Second scenario: The accounting department is celebrating the birthday of the administrative assistant and is having cake in the breakroom. The weather has turned bad, and an F4 tornado is spotted half a mile away. After 15 minutes, the tornado strikes the building directly. Create a detailed tabletop simulation to test the plan and response for this event.
  3. Assisting with lists of mandatory and voluntary employee trainings
    • Pretend that you are an HR professional who works for a law firm. You are revamping the employee training program. We need to create a list of mandatory trainings and a second list of voluntary trainings. Please draft both lists with trainings appropriate to employees in a law firm setting.
  4. Assisting with social media posting creation:
    • Pretend that you are a professional social media influencer for the legal field. Draft an Instagram post, including creating a related image, to celebrate Law Day, which is coming up on May 1st.  Make sure that it is concise and Instagram appropriate. Please include hashtags.
  5. Assisting with creating employee policies or handbooks (verify content!):
    • Pretend that you are an information security professional. Draft an initial policy for a law firm regarding employee AI usage for company work. The company wants to allow limited use of generative AI. They are very worried that their proprietary and/or confidential client data will be accidentally released. Specify that only your custom AI system – [name firm-specific or specialized AI with a strong privacy contract clause] – can be used with company data. The policy must also take into consideration the weaknesses of all AI systems, including hallucinations, potential bias, and security issues.
  6. Assisting with making sure your web presence is ADA accessible:
    • Copilot/web-enabled Prompt: Pretend that you are a graphic designer who has been tasked with making sure that a law firm’s online presence is ADA accessible. Please review the site [insert link], run an ADA compliance audit, and provide an accessibility report, including suggestions on what can be done to fix any accessibility issues that arise.
  7. Onboarding documentation
    • Create a welcome message for a new employee. Tell them that the benefits orientation will be at 9 am in the HR conference room on the first Tuesday of next month. Payday is on the 15th and the last day of each month; if payday falls on a weekend or federal holiday, it moves to the Friday before. Employees should sign up for the mandatory training that will be sent to them in an email from IT.
    • (One I just used IRL) Pretend that you are an HR specialist in a law library. A new employee is starting in 6 weeks, and the office needs to be prepared for her arrival. [Give specific title and any specialized job duties, including staff supervision.] Create an onboarding checklist of important tasks, such as securing keys and a parking permit, asking IT to set up their computer, email address, and telephone, asking the librarians to create passwords for the ILS, LibGuides, and similar systems, etc.

What other tasks (and prompts) can you think of that might be helpful? If you are struggling to put together a prompt, please see my general AI Prompt Worksheet in Introducing AI Prompt Worksheets for the Legal Profession. We welcome you to share your ideas in the comments.

Birth of the Summarizer Pro GPT: Please Work for Me, GPT

Last week, my plan was to publish a blog post about creating a GPT goofily self-named Summarizer Pro to summarize articles and organize citation information in a specific format for inclusion in a LibGuide. However, upon revisiting the task this week, I find myself first compelled to discuss the recent and thrilling advancements surrounding GPTs – the ability to incorporate GPTs into a ChatGPT conversation.

What is a GPT?

But, first of all, what is a GPT? The OpenAI website explains that GPTs are specialized versions of ChatGPT designed for customized applications. These unique GPTs enable anyone to modify ChatGPT for enhanced utility in everyday activities, specific tasks, professional environments, or personal use, with the added ability to share these personalized versions with others.

To create or use a GPT, you need access to ChatGPT’s advanced features, which require a paid subscription. Building your own customized GPT does not require programming skills. The process involves starting a chat, giving instructions and additional information, choosing capabilities like web searching, image generation, or data analysis, and iteratively testing and improving the GPT. Popular examples that ChatGPT users have created and shared in the GPT Store include Consensus, Write For Me, and Image Generator.

GPT Mentions

This was already exciting, but last week OpenAI introduced a feature that takes it to the next level: users can now invoke a specialized GPT within a ChatGPT conversation. This is being referred to online as “GPT mentions.” By typing the “@” symbol, you can choose from GPTs you’ve used previously for specific tasks. Unfortunately, this feature hasn’t rolled out to me yet, so I haven’t had the chance to experiment with it, but it seems incredibly useful. You can chat with ChatGPT as normal while also leveraging customized GPTs tailored to particular needs. For example, with the popular bots mentioned above, you could ask ChatGPT to summon Consensus to compile articles on a topic, then call on Write For Me to draft a blog post based on those articles, and finally invoke Image Generator to create a visual for the post. Integrating specialized GPTs on the fly takes the versatility of ChatGPT to a whole new level.

Back to My GPT Summarizer Pro

Returning to my original subject: employing a GPT to summarize articles for my LibGuide titled ChatGPT and Bing Chat Generative AI Legal Research Guide. This guide features links to articles along with summaries on various topics related to generative AI and legal practice. Traditionally, I have used ChatGPT (or occasionally Bing or Claude 2, depending on how I feel) to summarize these articles for me. It usually performs admirably on the summary part, but I’m left to manually insert the title, publication, author, date, and URL according to a specific layout. I’ve previously asked normal old ChatGPT to organize the information in this format, but the results have been inconsistent. So, I decided to create my own GPT tailored for this task, despite having encountered mixed outcomes with my previous GPT efforts.

Creating GPTs is generally a simple process, though it often involves a bit of fine-tuning to get everything working just right. The process kicks off with a set of questions… I outlined my goals for the GPT – I needed the answers in a specific format, including the title, URL, publication name, author’s name, date, and a 150-word summary, all separated by commas. Typically, crafting a GPT involves some back-and-forth with the system. This was exactly my experience. However, even after this iterative process, the GPT wasn’t performing exactly as I had hoped. So, I decided to take matters into my own hands and tweak the instructions myself. That made all the difference, and suddenly, it began (usually) producing the information in the exact format I was looking for.
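For anyone curious what that comma-separated layout looks like in practice, here is a small hypothetical sketch (all field values are invented placeholders; Python’s csv module is used only to illustrate the row format the GPT is instructed to produce):

```python
import csv
import io

# One article's metadata plus its summary, in the order the GPT is
# instructed to emit them: title, URL, publication, author, date, summary.
# Every value below is an invented placeholder.
row = [
    "Example Article Title",
    "https://example.com/article",
    "Example Publication",
    "Jane Author",
    "2024-02-01",
    "A roughly 150-word summary would go here.",
]

buf = io.StringIO()
# csv.writer handles quoting automatically if a field itself contains a comma
csv.writer(buf).writerow(row)
print(buf.getvalue().strip())
```

The advantage of a strict field order like this is that each row can be pasted (or parsed) straight into the LibGuide layout without rearranging anything by hand.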

Summarizer Pro in Action!

Here is an example of Summarizer Pro in action! I pasted a link to an article into the text box and it produced the information in the desired format. However, reflecting the dynamic nature of ChatGPT responses, the summaries generated this time were shorter compared to last week. Attempts to coax it into generating a longer or more detailed summary were futile… Oh well, perhaps they’ll be longer if I try again tomorrow or next week.

Although it might not be the fanciest or most thrilling use of a GPT, it’s undeniably practical and saves me time on a task I periodically undertake at work. Of course, there’s no shortage of less productive, albeit entertaining, GPT applications, like my Ask Sarah About Legal Information project. For this, I transformed around 30 of my blog posts into a GPT that responds to questions in the approximate manner of Sarah.

New Resources for Teaching with Legal AI and Keeping Up with the Latest Research

Today’s guest post comes from the University of Denver Sturm College of Law’s John Bliss. Professor Bliss has been kind enough to share some resources that he has crafted to help teach AI to lawyers and law students. In addition, he has launched a new blog which would likely be of interest to our audience so we are happy to host this cross-over event.

Teaching

I recently posted to SSRN a forthcoming article on Teaching Law in the Age of Generative AI, which draws from early experiments with AI-integrated law teaching, surveys of law students and faculty, and the vast new literature on teaching with generative AI across educational contexts. I outline a set of considerations to weigh when deciding how to incorporate this tech in the legal curriculum. And I suggest classroom exercises, assignments, and policies. See https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4682456.

Blog

I’m also organizing a blog that provides up-to-date analysis of research on AI’s legal capabilities. You can subscribe at http://ai-lawyering.blog. Let me know if you’re interested in contributing. The motivation for the blog is that legal AI is a fast-moving field. It is all too common that our discussions are based on outdated and inaccurate information. Empirical findings are often misconstrued in mass and social media. The blog aims to address this issue by reviewing the latest high-quality research, emphasizing empirical studies of AI capabilities as well as scholarship on the implications of this technology for lawyers and other stakeholders in the legal system.

New Program on Generative AI for Legal Practitioners

I’m working with a non-profit teaching lawyers and other members of the legal profession about generative AI: http://oak-academy.org. Just last week, we held our first session with a group of lawyers, law students, and academics. It seemed to go well!

I look forward to continuing conversations on these topics. Please feel free to reach out—jbliss@law.du.edu

Demystifying LLMs: Crafting Multiple Choice Questions from Law Outlines

In today’s post, we’ll explore how legal educators and law students can use Large Language Models (LLMs) like ChatGPT and Claude to create multiple-choice questions (MCQs) from a law school outline.

Understanding the Process

My first attempt was simply to ask the LLM the best way to make MCQs, but the feedback wasn’t particularly helpful, so I did some digging. Anthropic recently shed light on their method of generating multiple-choice questions, and it’s a technique that could be immensely beneficial for test preparation, besides being a useful way to conceptualize how to make effective use of the models for studying. They utilize XML tags, which may sound technical, but in essence these are just simple markers used to structure content. Let’s break down this process into something you can understand and use, even if you’re not a wizard at Technical Services who is comfortable with XML.

Imagine you have a law school outline on federal housing regulations. You want to test your understanding or help students review for exams. Here’s how an LLM can assist you:

STEP 1: Prepare Your Outline

Ensure that your outline is detailed and organized. It should contain clear sections, headings, and bullet points that delineate topics and subtopics. This structure will help the LLM understand and navigate your content. If you’re comfortable using XML or Markdown, this can be exceptionally helpful. Internally, the model identifies the XML tags and the text they contain, using this structure to generate new content. It recognizes the XML tags as markers that indicate the start and end of different types of information, helping it to distinguish between questions and answers.

The model uses the structure provided by the XML tags to understand the format of the data you’re presenting.

STEP 2: Uploading the Outline

Upload your outline into the platform that you’re using. Most platforms that host LLMs will allow you to upload a document directly, or you may need to copy and paste the text into a designated area.

STEP 3: Crafting a General Prompt

You can write a general prompt that instructs the LLM to read through your outline and identify key points to generate questions. For example:

“Please read the uploaded outline on federal housing regulations and create multiple-choice questions with four answer options each. Focus on the main topics and legal principles outlined in the document.”

STEP 4: Utilizing Advanced Features

Some LLMs have advanced features that can take structured or semi-structured data and understand the formatting. These models can sometimes infer the structure of a document without explicit XML or Markdown tags. For instance, you might say:

“Using the headings and subheadings as topics, generate multiple-choice questions that test the key legal concepts found under each section.”

AND/OR

Give the model some examples with XML tags so it can better replicate what you would like (“few-shot prompting”):

<Question>
What are "compliance costs" in HUD regulations?
</Question>
<Answers>
1. Fines for non-compliance.
2. Costs associated with adhering to HUD regulations.
3. Expenses incurred during HUD inspections.
4. Overheads for HUD compliance training.
</Answers>

The more examples you give, the better it’s going to be.

AND/OR

You can also use the LLM to add these XML tags depending on the size of your outline and the context limit of the model you are using (OpenAI recently expanded their limit dramatically). Give it a prompt asking it to apply tags and give it an example of the types of tags you would like for your content. Then tell the model to do it with the rest of your outline:

<LawSchoolOutline>
    <CourseTitle>Constitutional Law</CourseTitle>
    <Section>
        <SectionTitle>Executive Power</SectionTitle>
        <Content>
            <SubSection>
                <SubSectionTitle>Definition and Scope</SubSectionTitle>
                <Paragraph>
                    Executive power is vested in the President of the United States and is defined as the authority to enforce laws and ensure they are implemented as intended by Congress.
                </Paragraph>
            </SubSection>
        </Content>
    </Section>
</LawSchoolOutline>

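To see why consistent tags pay off, here is a minimal Python sketch (my own illustration, not Anthropic’s published method) that assembles a few-shot prompt from tagged examples and then parses tagged model output back into structured question-and-answer pairs:

```python
import xml.etree.ElementTree as ET

# An example MCQ in the tagged format shown above.
EXAMPLE = """\
<Question>
What are "compliance costs" in HUD regulations?
</Question>
<Answers>
1. Fines for non-compliance.
2. Costs associated with adhering to HUD regulations.
3. Expenses incurred during HUD inspections.
4. Overheads for HUD compliance training.
</Answers>"""

def build_prompt(outline_text, examples):
    """Assemble a few-shot prompt: instructions, tagged examples, then the outline."""
    parts = ["Generate multiple-choice questions in the following format:"]
    parts.extend(examples)
    parts.append("Outline:\n" + outline_text)
    return "\n\n".join(parts)

def parse_mcqs(response_text):
    """Parse tagged model output into (question, [answer lines]) pairs.

    The tags alone are not a complete XML document, so wrap them in a
    root element before handing them to ElementTree.
    """
    root = ET.fromstring("<root>" + response_text + "</root>")
    questions = [q.text.strip() for q in root.iter("Question")]
    answer_blocks = [
        [line.strip() for line in a.text.strip().splitlines()]
        for a in root.iter("Answers")
    ]
    return list(zip(questions, answer_blocks))

mcqs = parse_mcqs(EXAMPLE)
print(mcqs[0][0])       # the question text
print(len(mcqs[0][1]))  # number of answer options
```

Because the tags are consistent, the same parsing works on any response that follows the example format, which is part of what makes the up-front effort of writing tagged examples worthwhile.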
STEP 5: Refining the Prompt

It is very rare that my first try with any of these tools produces fantastic output. It is often a “conversation with a robot sidekick” (as many of you have heard me say at my presentations) and requires you to nudge the model to create better and better output.

If the initial questions need refinement, you can provide the LLM with more specific instructions. For example:

“For each legal case mentioned in the outline, create a question that covers the main issue and possible outcomes, along with incorrect alternatives that are plausible but not correct according to the case facts.”

Replicating the Process

Students can replicate this process for other classes using the same prompt. The trick here is to stay as consistent as possible with the way that you structure and tag your outlines. It might feel like a lot of work on the front end to create 5+ examples, apply tags, etc., but remember that this is something that can be reused later! If you get a really good MCQ prompt, you could use it for every class outline that you have and continue to refine it going forward.

Coding with ChatGPT: A Journey to Create A Dynamic Legal Research Aid

I haven’t quite gotten this whole ChatGPT thing. I’ve attended the webinars and the AALL sessions. I generally understand what it’s doing beneath the hood. But I haven’t been able to find a need in my life for ChatGPT to fill. The most relevant sessions for me were the AALS Technology Law Summer Webinar Series with Tracy Norton of Louisiana State University. She has real-world day-to-day examples of when she has been able to utilize ChatGPT, including creating a writing schedule and getting suggestions on professional development throughout a career. Those still just didn’t tip the balance for me.

A few weeks ago, I presented to one of our legal clinics and demonstrated a form that our Associate Director, Tara Mospan, created for crafting an efficient search query. At its heart, the form is a visual representation of how terms and connectors work with each other: five columns of five boxes, where each column holds variations of a term and connectors sit between the columns. For a drunk driving case, the term in the first box could be car, and below that we would put synonyms like vehicle or automobile. The second column could include drunk, inebriated, and intoxicated. And we would choose the connector between the columns, whether it be AND, w/p, w/s, or w/#. Then, we write out the whole search query at the bottom: (car OR vehicle OR automobile) w/s (drunk OR inebriated OR intoxicated).
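The worksheet’s combining rules are simple enough to sketch in a few lines of Python (my own illustration, not the student’s actual Excel macros or the form’s JavaScript): terms within a column are joined with OR and parenthesized only when a column holds more than one term, and a connector is emitted only between non-empty columns.

```python
def column_to_clause(terms):
    """OR the terms in one column; parenthesize only when there are several."""
    terms = [t for t in terms if t.strip()]
    if not terms:
        return ""
    if len(terms) == 1:
        return terms[0]
    return "(" + " OR ".join(terms) + ")"

def build_query(columns, connectors):
    """Join column clauses with connectors, skipping empty columns.

    columns    -- list of lists of terms (one inner list per worksheet column)
    connectors -- connectors between adjacent columns, e.g. "AND", "w/p", "w/s"
    """
    clauses = [column_to_clause(col) for col in columns]
    query = ""
    for i, clause in enumerate(clauses):
        if not clause:
            continue
        if query:
            # Use the connector immediately preceding this column.
            query += f" {connectors[i - 1]} "
        query += clause
    return query

query = build_query(
    [["car", "vehicle", "automobile"], ["drunk", "inebriated", "intoxicated"]],
    ["w/s"],
)
print(query)  # (car OR vehicle OR automobile) w/s (drunk OR inebriated OR intoxicated)
```

Swapping w/s for AND, w/p, or w/# covers the other connector options on the form.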

Created years ago by Tara Mospan, this worksheet is loved by ASU Law students who frequently request copies from the law librarians even years after they use it for Legal Research and Writing.

After the presentation, I offered a student some extra copies of the form. She declined: I had presented to her legal writing class the year before, and she was so taken with the form that she had recreated it in Excel. Not only that, she used macros to transform the entered terms into a final query. I was impressed and asked her to send me a copy. It was exactly as she had described, using basic commands to put the terms together, with OR between terms within a column, and drop-downs of connectors. She had taken our static form and transformed it into a dynamic utility.

An ASU Law student recreated the Crafting an Efficient Search PDF using Excel so that it had drop-downs.

Now I was inspired: What if I could combine the features of her Excel document with the clean layout of our PDF form? Finally, I saw a use for ChatGPT in my own life. I had read about how well ChatGPT does with programming and it seemed like the perfect application. It could help me create a fillable PDF, with nuanced JavaScript code to make it easy to use and visually appealing.

I went into ChatGPT and wrote out my initial command:

I am trying to create a fillable PDF. It will consist of five columns of text boxes, and each column will have five boxes. Search terms will be placed in the boxes, although not necessarily in every box. There will be a text box at the bottom where the terms from the boxes above will be combined into a string. When there are entries in multiple boxes in a column, I want the output to put a set of parentheses around the terms and the word OR between each term.

ChatGPT immediately gave me a list of steps, including the JavaScript code for the results box. I excitedly followed the directions to the letter, saved my document, and tested it out. I typed car into the first box and…nothing. It didn’t show up in the results box. I told ChatGPT the problem:

The code does not seem to be working. When I enter terms in the boxes, the text box at the bottom doesn’t display anything.

And this began our back and forth. The whole process took around four hours. I would explain what I wanted, it would provide code, and I would test it. When there were errors, I would note the errors and it would try again. A couple times, the fix to a minor error would start snowballing into a major error, and I would need to go back to the last working version and start over from there. It was a lot like having a programming expert working with you, if they had infinite patience but sometimes lacked basic understanding of what you were asking.

For many things, I had to go step-by-step to work through a problem. Take the connectors, for example. I initially just had AND between them as a placeholder. I asked it to replace the AND with a drop-down menu to choose the connector. The first implementation ended up replacing the OR between the synonyms instead of the connector between the columns. We went back and forth until the connector option worked between the first two columns of terms. Then we worked through the connector between columns two and three, and so on.

At times, it was slow going, but it was still much faster than learning enough JavaScript to program it myself. ChatGPT was also able to easily program minor changes that made the form much more attractive, like not having parentheses appear unless there are two terms in a column, and not displaying the connector unless there are terms entered on both sides of it. And I was able to add a “clear form” button at the end that cleared all of the boxes and reverted the connectors back to the AND option, with only one exchange with ChatGPT.

Overall, it was an excellent introduction to at least one function of AI. I started with a specific idea and ended up with a tangible product that functioned as I initially desired. It was a bit more labor intensive than the articles I’ve read led me to believe, but the end result works better than I ever would have imagined. And more than anything, it has gotten me to start thinking about other projects and possibilities to try with ChatGPT.

Why Law Librarians and Attorneys Will Not Be Replaced by AI Anytime Soon (a.k.a. The Day Jenny Had to Pull Rank)

Generated by DALL-E 2 on Sept. 29, 2023.

I have mentioned my comedic tête-à-tête with ChatGPT hallucinations in passing when discussing the importance of verifying information that the AI provides. Several times, I have been asked for more details. So, for those who are looking for AI humor, here is the story of the day that I had to pull rank on ChatGPT.

Back in January 2023, I asked ChatGPT what legal tasks it could perform. It told me:

  • Summarizing and explaining laws and regulations
  • Researching and citing relevant legal precedent
  • Drafting legal documents such as contracts, briefs, and legal memorandums
  • Answering legal questions and providing information on legal topics
  • Identifying and analyzing legal issues in a given fact pattern
  • Helping with legal research, by providing the most relevant cases and statutes
  • Helping with the interpretation of legal terms and phrases.

Challenge accepted! Since that day, ChatGPT and I have been doing a dance around Ohio Recreational User Law, as I have tested its claims above, while also exploring its abilities, improvements, and plugins and testing various prompting methods.

About a month after ChatGPT gave me the list of tasks, we were discussing Ohio Recreational User caselaw. ChatGPT vehemently insisted that Lash v. Ohio Dept. of Natural Resources, 68 Ohio St. 3d 545, 548-49 (1994) was a relevant case. Alas, 68 Ohio St. 3d 545 leads to Findlay/Hancock Cty. Bar Assn. v. Higgins, 68 Ohio St.3d 545, 629 N.E.2d 413 (1994).

This was not the first time that I had run into hallucinations. For the previous month, we had regularly been discussing caselaw, and ChatGPT showed a particular penchant for falsifying the existence of relevant judicial opinions. Mostly, I pointed out the error, and ChatGPT apologized and moved on. In this instance, however, the AI was like a dog with a bone. It insisted nine times that Lash existed and was an on-point Ohio Recreational User case.

Seven claims in, the conversation became a bit heated:

ChatGPT: This case is available on many legal databases and can be accessed through various sources, including LexisNexis and Westlaw.

Me (after verifying the lack of existence and stopping to wonder when it had learned about Lexis and Westlaw since this was the first time it had referenced either database to me): That case is not in Westlaw.

ChatGPT: Lash v. Ohio Dept. of Natural Resources, 68 Ohio St.3d 545, 549, 628 N.E.2d 1366 (1994) is a real case that has been cited in multiple legal sources.

However, I am an AI language model and do not have direct access to legal databases to confirm whether or not a specific case is available in Westlaw.

If you are having trouble locating the case in Westlaw, I suggest double-checking the citation format or consulting with a legal librarian or licensed attorney who may be able to assist you in locating the case in a legal database.

Me (thought): It did not just say that! Damn it! Those are fighting words!

Me: I am both a law librarian and a licensed attorney. Lash v. Ohio Dept. of Natural Resources does not exist. 68 Ohio St.3d 545 leads to the previously mentioned Findlay/Hancock case.

And then…finally!

ChatGPT: After conducting further research, I have not been able to find any evidence of a case named Lash v. Ohio Dept. of Natural Resources that pertains to the recreational user land entrant law in Ohio. Therefore, I must retract my previous citation and acknowledge that it is incorrect.

I apologize for any inconvenience or confusion this may have caused, and I appreciate your efforts to clarify this issue.

Me (thought): Ha! It finally admitted defeat! Why does this win feel so hollow????

And so, you have the day that I had to pull rank on ChatGPT. I have no idea why it decided to die on the hill of this particular fake case, but the law librarians carried the day.

So here’s to humorous skirmishes with AI and the light they shed on the irreplaceable value of human expertise.