On this episode, we speak with Nicole Bradick, CEO and founder of Theory & Principle, a legal technology design company. Nicole gives us an update on how their product, Map Engine, is being used by clients to track regulations and laws around the world. She also discusses how the legal industry is becoming more sophisticated in terms of user design and experience, and how this is changing the way law firms and legal tech companies approach product development.

Nicole’s passion for user design and experience is evident throughout the conversation, and she emphasizes how it can make or break a product’s success in the market. She notes that clients are becoming more knowledgeable about UX, and are able to identify and ask for better design. Additionally, law firms and legal tech companies are recognizing that better design is not just a nice-to-have, but a business imperative. Nicole is dedicated to educating the legal community on the importance of UX, and helping them integrate it into their product development process. She believes that legal technology should be built with the user in mind, and that this approach will lead to better outcomes, both for clients and for the industry as a whole.

Nicole sees immense value in starting T&P Studios because it allows her to bring her expertise in designing and launching products to clients who have great ideas but lack the resources to bring them to market. She describes a unique partnership with Simpson Thacher's pro bono team, which wanted a particular product to exist in the market but didn't want the long-term burden of owning software. With T&P Studios, they were able to co-develop the product and bring it to market, and Simpson Thacher now has its own version as well. This model of collaboration and revenue sharing allows T&P Studios to work with other law firms and organizations to build and launch products that solve real problems in the legal industry, without the upfront capital expense.

Listen on mobile platforms: Apple Podcasts | Spotify


Contact Us:

Twitter: @gebauerm, or @glambert
Voicemail: 713-487-7821
Email: geekinreviewpodcast@gmail.com
Music: Jerry David DeCicca

Transcript:

Continue Reading Revolutionizing Legal Technology Design with T&P Studios’ Nicole Bradick (TGIR Ep. 195)

In a special episode of The Geek in Review podcast, we wanted to play a recent episode of the Future Ready Business (FRB) Podcast. FRB is a podcast produced by Greg Lambert and hosted by Jackson Walker attorneys Art Cavazos and Erin Camp, focused on how new ideas, regulations, laws, and overall societal changes affect the way businesses operate.

In this episode, Art Cavazos and Erin Camp host Courtney White and William Nilson, attorneys from Jackson Walker’s Houston and Austin offices, and discuss the future of the fashion industry. The conversation touches on how the intersection of art and business has evolved, with topics such as sustainability, diversity, and social media influencers’ impact on the industry. The group also discusses the growing relationship between technology and fashion, including the role of artificial intelligence in streamlining production and enabling customization. Social media’s role in marketing and intellectual property concerns relating to the fashion industry round out the discussion. It is a great conversation, and we hope you enjoy it as much as we did.

Links:

Listen on mobile platforms: Apple Podcasts | Spotify

Contact Us:

Twitter: @gebauerm, or @glambert
Voicemail: 713-487-7821
Music: Jerry David DeCicca

Transcript

Continue Reading The Future of Fashion and the Law (TGIR Ep. 194)

[Ed. Note – Please welcome back Jessica de Perio Wittman & Kathleen (Katie) Brown as guest bloggers. – GL]

In case you didn’t know, the National Conference of Bar Examiners (NCBE) will release a brand-new version of the bar exam in 2026. The NCBE conducted a study in 2018 and asked practicing attorneys and newly licensed lawyers about basic technology tasks in their law practice. Attorneys said they expect proficiency in word processing, research platforms, electronic communication, desktop publishing, and document storage, including the cloud. This should not be a surprise, because D. Casey Flaherty has been talking about minimum tech expectations in the practice of law since 2012. His technology audit proved that many attorneys do not possess basic technology competency per Model Rule 1.1 and Comment 8. Over 10 years later, we are still talking about the importance of technology competency in the legal profession and highlighting ever-present shortcomings in basic technology skills. Flaherty himself stated that “lawyers in general are woefully deficient in using the software tools at their disposal – e.g., Word, Acrobat, Excel.”

Joseph Lawson, Law Library Director at the Harris County Robert W. Hainsworth Law Library, identified that a lack of time and training opportunities prevents solo and small firm practitioners from accessing legal technology. The 2019 American Bar Association Tech Report confirms Lawson’s hypothesis: only 28 percent of solos report the availability of technology training, while more than 95 percent of attorneys at large firms reported access to training.

Some may argue that law firms should not spend their time and money on offering basic technology training because the training should be offered in law school. We address how law schools provide technology training in our 2023 article, “Taking on the Ethical Obligation of Technology Competency in the Academy: An Empirical Analysis of Practice-Based Technology Training Today”. In our longitudinal study, we found that 670 technology courses were offered nationwide. Now, 670 courses may sound like a large number, but this number includes every e-discovery, cybersecurity, law office management, and law practice and technology course in the country. This results in an average of 3.38 technology courses at each of the ABA-accredited law schools. This statistic also includes the University of North Texas, which is currently the only ABA-accredited law school that mandates the completion of a Practice-Related Technology requirement for all J.D. candidates. To learn more about how law schools are attempting to address the disconnect in technology training, we encourage you to watch the recorded version of the University of St. Thomas Law Journal Fall 2022 Symposium, A Roadmap for Law School Modernity: Teaching Technology Competence (available at https://youtu.be/hILd5qJ1G4I).

Today, the “next big thing” in legal technology is ChatGPT and generative AI, and we recognize that, in contrast, it’s not sexy to talk about basic technology skills, or the fact that many attorneys still do not possess them. But we need to continue having these conversations about basic technology training and possessing the requisite skills for efficient legal practice. All attorneys should know how to:

  • Download forms from databases
  • Use formatting styles
  • Create tables of authority
  • Use Quick Parts and Autotext
  • Save Word documents as efile-ready PDFs, and
  • Set up shortcut keys to insert a section symbol.

Some believe that our law students were exposed to these basic skills because they grew up surrounded by technology.   Iantha Haight disproves the assumption of native technology competency in her article “Digital Natives, Techno Transplants: Framing Minimum Technology Standards for Law School Graduates”.  She claims that the term digital natives “lulls educators into thinking students need no additional training in technology to be prepared for the workforce.”  Even though we have started to dispel the myth of the digital native in the legal classroom, we must now deal with a new generation of law students who went to “Google School”.

What does it mean to be a Google School student?  These students were handed a Chromebook or an iPad with some (or all) of Google G Suite for Education.   Today, some colleges and universities have been using Google Workspace for Education (previously called G Suite for Education) for at least a decade.  In 2017, Google reported that about 15 million primary- and secondary-school students in the United States use Google Classroom.  By this time, Chromebooks accounted for 58 percent of mobile devices shipped to primary and secondary schools in the United States.   In 2019, Google reported that all eight Ivy League schools use G Suite for Education as a productivity tool of choice for their faculty, staff, and students.  For a discussion on how Google Schools are impacting the law school classroom, you can listen to this podcast: https://www.geeklawblog.com/2022/08/teaching-and-pressuring-law-professors-to-teach-technology-katie-brown-tgir-ep-171.html.

Law schools have the challenge of minimizing the use of Google products in the classroom because most law firms don’t allow employees to use Google apps on their work devices. Microsoft and Adobe productivity tools currently have a large footprint in the legal academy and the legal profession. As a result, there is a disconnect in technology knowledge and skill when you compare what students were accustomed to prior to law school and what they’ll be expected to know when they head into practice. If the NextGen bar exam is intended to simulate scenarios in modern-day practice, then the NCBE must also award points to test takers for successfully completing basic technology tasks that they would be expected to use in practice. The NCBE can ask test takers to:

  • Create documents with specific margins, page numbers, and styles, like the formats expected from local court rules
  • Create a table of authorities or a table of contents
  • Draft an email using mail merge skills
  • Convert a Word document into a PDF, and
  • Remove any metadata damaging to their client

We recognize that this is not a complete list, but it provides examples for how the NCBE could test basic technology skills that are expected in modern-day law practice. Only then can bar examiners determine whether test takers have the requisite knowledge and skills for entry-level practice.

Author Bio:

Jessica de Perio Wittman (UConn) and Kathleen (Katie) Brown (Charleston) have been friends since their law school days at Seattle University.  Although the two have lived in different states for the past 13 years and now serve as Law Library directors at their respective schools, they still manage to hold Zoom marathon writing sessions on a weekly basis.


Benjamin Alarie and Abdi Aidid are legal experts who are heavily involved in the development of legal technology. They are releasing a new book, The Legal Singularity: How Artificial Intelligence Can Make Law Radically Better, later this year.
Benjamin Alarie is a tax law professor at the University of Toronto and has been in the tax law profession since 2004. He became interested in the future of legal education and how artificial intelligence will affect the profession, which led him to co-found Blue J, a legal technology company in Toronto. On the other hand, Abdi Aidid practiced as a commercial litigator in New York before becoming the Vice President of Legal Research at Blue J. He led the team of lawyers and research analysts and helped develop AI-informed predictive tools, which predict how future courts are likely to rule on new legal situations. Abdi is now a full-time law professor at the University of Toronto, teaching subjects like torts and civil procedure.
Naming the book “The Legal Singularity” is a big claim by the authors, so we asked them to explain what they meant by it. According to Abdi Aidid, the legal singularity is the practical elimination of legal uncertainty and its impact on our institutions and society. It is a future state where the law is knowable in real time and on demand, and we can start doing things that we were not previously able to do because the law was either difficult to ascertain or we did not have a normative consensus around what the law ought to be. The concept of the legal singularity is related to the idea of a technological singularity, but it is not a totalizing event like the technological singularity. Instead, it is an equally socially important concept that focuses on how technological improvements affect the law and related institutions.
Alarie and Aidid suggest that the legal market needs to address bias in AI tools by keeping humans in the loop in arbitration and judicial contexts for a significant period of time. They believe that even as the legal singularity approaches and people begin to have confidence in algorithmic decision making, humans should still be involved in the process to audit machine-generated decisions. They argue that this is necessary because the law deals with deeply human questions, and there is more at stake than just ones and zeros. They believe that humans have to contribute to the legal system’s notions of mercy, fairness, empathy, and procedural justice. They also suggest that involving humans in the process helps to inform and refine the technology before disastrous consequences occur. Therefore, they emphasize the need for human review of machine judgments, which will lead to accelerated learning in the law. Furthermore, they highlight that the legal market needs to distinguish between the kinds of problems that are a reflection of unaddressed social problems and those that are new technological problems. They stress that the legal market is still collectively responsible for resolving these issues.

Listen on mobile platforms: Apple Podcasts | Spotify
Transcript

Continue Reading The Legal Singularity and the Future of Legal Research – Benjamin Alarie and Abdi Aidid (TGIR Ep. 193)

In this episode of The Geek in Review, hosts Marlene Gebauer and Greg Lambert interview M.C. Sungaila, an appellate attorney and the host of The Portia Project podcast. The podcast is geared towards highlighting women in traditional and non-traditional legal careers and is set to celebrate its 100th episode during Women’s History Month in March. M.C. Sungaila initially intended to highlight women appellate judges and justices in a book, but quickly realized that a podcast would be the best medium to capture the stories of these women. The podcast now includes women leaders across the industry and beyond, providing a career touchstone for law students and showcasing where women are leading inside and outside the legal profession.
The Portia Project podcast explores a range of courts, including state, federal, and magistrate courts, as well as the process of becoming a judge, and was a finalist for the California Legal Award for Innovation in Diversity and Inclusion. M.C. talks about partnerships with organizations like Girls Inc. to amplify their work. The podcast eventually expanded beyond the judiciary to include legal tech founders, legal design innovators, and others who are making an impact in the legal world. M.C. Sungaila encourages law students to explore these new career paths.
There is a common thread among the guests in that there is no straight path to success, and everyone has unique experiences and skills that lead to their success. M.C. emphasizes the importance of recognizing that success can be different for everyone, and there are many paths to success. She plans to continue focusing on women judges, especially appellate judges, and to include more unique journeys and different approaches to legal practice in the podcast. Additionally, she hopes to branch out beyond the legal industry to bring in guests from other disciplines to provide new thoughts and ideas for women in the law.
M.C. Sungaila discusses the disproportionate share (in a good way) of women on the Supreme Court benches in Michigan and Washington and the desire to diversify the courts in those states. She also talks about the lightning round questions she asks her guests and how they help her get to know them as people. M.C. shares her optimism for the future of women in the legal industry and the importance of being people centered. We ask M.C. about her motto, which she attributes to her mother’s notes to her throughout her career, such as “make this the best day ever” and “paint your canvas with your own brush.”
M.C. Sungaila’s Portia Project podcast is an excellent resource for law students and individuals interested in learning about the diverse career paths and approaches to legal practice for women in the legal industry.

Listen on mobile platforms: Apple Podcasts | Spotify
Contact Us:
Twitter: @gebauerm, or @glambert
Voicemail: 713-487-7821
Email: geekinreviewpodcast@gmail.com
Music: Jerry David DeCicca
Transcript

Continue Reading Breaking Barriers: The Portia Project’s MC Sungaila on the Unique Paths to Success for Women Lawyers and Judges (TGIR Ep. 192)

The promise of AI has been around for decades, but it is the last three months that have finally caused an awakening so forceful that even the legal industry understands it needs to be ready for the upcoming Age of AI. This week’s guest has worked toward that goal of integrating AI and other technologies into the practice of law for more than forty years. Johannes (Jan) Scholtes is Chief Data Scientist for IPRO – ZyLAB, and Extraordinary (Full) Professor of Text Mining at Maastricht University in The Netherlands. He joins us this week to discuss the need for lawyers and law firms to use these tools to enhance the power of the practice of law. And he warns that if the traditional legal providers, lawyers and firms, won’t step up, there are others who will step in to fill that void.
While GPT and other generative AI tools have finally begun to be true language tools, there is still a lot they simply cannot do. Scholtes says that there is plenty of legal work to be done, and in fact perhaps more work now that computers can do most of the heavy lifting and allow the lawyers to do the thinking and strategy.
Scholtes compares the relationship between the lawyer and the technology to that of a pilot and co-pilot: a relationship in which the co-pilot cannot be completely trusted but can be trained to assist through the process of vertical training. This means that a law firm needs to work with the AI to have it better understand how to process legal information. Having the technology alongside the lawyers provides a stronger legal representation than just the lawyers or the technology alone. In addition to reducing risk and improving outcomes, Scholtes also projects that Lawyer + AI means higher rates and better profitability, while the clients receive better results.
It is exciting to be at the beginning of this change in the way law is practiced. It is important, however, that law firms, lawyers, and legal professionals understand how to teach and control the technology, and that there needs to be transparency in how the tools work and make decisions. Scholtes’s recommendation is that if all you are offered is an AI black box, then you should simply walk away. That lack of trust will come back to bite you.
For more insights from Jan Scholtes, visit his blog, Legal Tech Bridge.

Listen on mobile platforms: Apple Podcasts | Spotify

Contact Us:

Twitter: @gebauerm, or @glambert
Voicemail: 713-487-7821
Email: geekinreviewpodcast@gmail.com
Music: Jerry David DeCicca

Transcript:

Continue Reading Johannes Scholtes: AI Is Finally Here. Now the Hard Work Begins for the Legal Industry (TGIR Ep. 191)

It is pretty apparent that we are in a super Hype Cycle when it comes to AI tools like ChatGPT, but for many of us in the legal profession, we’re not used to reaching this point of the cycle at the same time as the rest of the world. Because things are happening so fast, we wanted to bring in someone like Colin Lachance from Jurisage to talk about how they are integrating Generative AI tools into their products.
Greg was going down an AI rabbit hole on Twitter this week when Colin mentioned a project he was launching. Jurisage’s tool, MyJr (pronounced “My Junior”), is part of a joint venture between Jurisage and AltaML and is designed to change how researchers access information by allowing the AI tool to synthesize and read cases as the researchers search and analyze the information. Rather than opening up web browser tab after tab and scanning cited cases for relevant information, the idea behind MyJr is to have it quickly answer that information for you. If you need to know the relevant arguments from each side in Smith v. Jones, ask MyJr to pass that along to you. Ask it a plain language question, get a quick and plain language answer.
Lachance is working to use the GPT 3.5 tool to pass along cases and create what he calls “guardrails” around them, so that the prompt and the results limit themselves to the case itself. This protects the researcher from the AI “creating” the answer from all the non-relevant information it has collected in its large language model. Lachance has additional goals for using AI within Jurisage’s data, but he’s focused on tools like MyJr establishing trust with those using it for researching Canadian, and soon US, case law.
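To make the "guardrails" idea concrete, here is a minimal, purely illustrative sketch of how a prompt can be constrained to a single case's text; the function name, prompt wording, and case snippet are our assumptions for the example, not Jurisage's actual implementation.

```python
# Hypothetical sketch: restrict the model's answer to one case's text.
# Everything here (names, wording, case snippet) is invented for illustration.

def build_guardrail_prompt(case_text: str, question: str) -> str:
    """Build a prompt instructing the model to answer only from the supplied case."""
    return (
        "Answer the question using ONLY the case text below. "
        "If the answer is not in the text, say you cannot find it.\n\n"
        f"CASE TEXT:\n{case_text}\n\n"
        f"QUESTION: {question}"
    )

# The full prompt, not just the question, is what gets sent to the model,
# so the model has no reason to reach outside the quoted case.
prompt = build_guardrail_prompt(
    "Smith v. Jones: The court held the contract was unenforceable...",
    "What did the court hold?",
)
```

The design point is that the case text travels inside the prompt and the instructions forbid answering from anything else, which is one common way to reduce the model "creating" answers from unrelated training data.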
The MyJr product works as a browser extension and identifies Canadian and US case law citations on any web page. It delivers a preview of key details about the cited case, and a link to a free full-text version, in a popup when the user hovers over the citation. Clicking through to a “more insights” dashboard reveals additional detail as well as access to the upcoming “Chat with a case” feature (Feb 20th for Canadian cases, a month later for US). While the paid version of the dashboard won’t officially launch until late March, users can get unlimited pre-sale access today as well as secure a future 50% discount option for a one-time payment of $7.

Listen on mobile platforms: Apple Podcasts | Spotify
More information on Jurisage and MyJr can be found here:
Contact Us:
Twitter: @gebauerm, or @glambert
Voicemail: 713-487-7821
Email: geekinreviewpodcast@gmail.com
Music: Jerry David DeCicca
Transcript

Continue Reading Colin Lachance on Jurisage’s MyJr and How He’s Looking at AI to Assist in the Synthesis and Reading of Legal Cases (TGIR Ep. 190)

This week we have Damien Riehl, VP, Litigation Workflow and Analytics Content at Fastcase, and one of the drivers behind SALI (Standards Advancement for the Legal Industry). Damien is definitely a “big thinker” when it comes to the benefits of creating and using standards for the legal industry. SALI is a system of tagging legal information to allow for better filtering and analysis. It works like Amazon’s product tags, where a user can search for a specific area of law, such as patent law, and then choose between various services such as advice, registration, transactional, dispute, or bankruptcy services. The tags cover everything from the substance of law to the business of law, with over 13,000 tags in the latest version. SALI is being adopted by major legal information providers such as Thomson Reuters, Lexis, Bloomberg, NetDocuments, and iManage, with each provider using the same standardized identifiers for legal work. With this standardization, it will be possible to perform the same API query across different providers and receive consistent results. Imagine the potential of being able to ask one question that is understood by all your databases and external systems.
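The tag-and-filter idea described above can be sketched in a few lines. This is an illustrative toy, not SALI's actual schema: the tag strings, matter records, and function below are invented for the example (real SALI identifiers come from the SALI LMSS standard).

```python
# Toy sketch of SALI-style standardized tagging and filtering.
# Tag names and matter records are made up for illustration only.

matters = [
    {"id": "M-001", "tags": {"AreaOfLaw/Patent", "Service/Dispute"}},
    {"id": "M-002", "tags": {"AreaOfLaw/Patent", "Service/Registration"}},
    {"id": "M-003", "tags": {"AreaOfLaw/Bankruptcy", "Service/Advice"}},
]

def find_matters(records, required_tags):
    """Return IDs of matters carrying every requested standardized tag."""
    required = set(required_tags)
    return [m["id"] for m in records if required <= m["tags"]]

# Because the identifiers are standardized, the same query could in principle
# run against any provider that has adopted the shared tag set.
patent_disputes = find_matters(matters, ["AreaOfLaw/Patent", "Service/Dispute"])
```

The payoff of standardization is that the query, not just the data, becomes portable: the same tag combination should mean the same thing in every system that speaks the standard.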
In that same vein, we expand our discussion to include how Artificial Intelligence tools like Large Language Models (e.g., ChatGPT, Google Bard, Meta’s LLaMA) could assist legal professionals in their quest to find information, create documents, and help outline legal processes and practices.
Riehl proposed three ways of thinking about the work being done by these models, which are largely analogous to traditional methods. The first way is what Riehl refers to as a “bullshitter,” where a model generates information without providing citations. The second way is called a “searcher,” where a model generates a legal brief but does not provide citations, forcing the user to search for support. The third way is called a “researcher,” where the model finds relevant cases and statutes, extracts relevant propositions, and crafts a brief based on them.
Riehl believes that option three, being a researcher, is the most likely to win in the future, as it provides “ground truth” from the start. He cites Fastcase’s acquisition of Judicata as an example of how AI can be used to help with research by providing unique identifiers for every proposition and citation, enabling users to evaluate the credibility of the information. In conclusion, Riehl sees a future where AI is used to help researchers by providing a pick list of the most common propositions and citations, which can then be further evaluated by the researcher.
One thing is very clear, we are just at the beginning of a shift in how the legal industry processes information. Riehl’s one-two combination of SALI Standards combined with additional AI and human capabilities will create a divide amongst the bullshitters, the searchers, and the researchers.

Listen on mobile platforms: Apple Podcasts | Spotify
Contact Us:
Twitter: @gebauerm, or @glambert
Voicemail: 713-487-7821
Email: geekinreviewpodcast@gmail.com
Transcript

Continue Reading The Bullshitter, The Searcher, and The Researcher – Damien Riehl on the Dynamic Shift in How the Legal Profession Will Leverage Standards and Artificial Intelligence

“Whether you like it or not, everybody’s searching for us online. And everybody is looking at your LinkedIn profile, whether you’re on LinkedIn every day, or once a year, so you might as well make it work for you.” – Stefanie Marrone
Stefanie Marrone is an Outsource Marketer who advises legal professionals on improving their social media presence. Even legal professionals in large law firms can benefit from a strong social media presence because clients and potential clients relate to the individual more than they do the firm. Marrone’s experience in firms like Proskauer and MoFo helped shape her understanding of how important it is to have a strategy when it comes to branding. LinkedIn is her suggested primary platform for lawyers and legal professionals because that is the most likely platform where you’ll find your peers and clients.
One of the most effective forms of content, even on LinkedIn, is short-form video. In addition, list posts, infographics, carousel images, and finding ways to bring even firm posts to life helps draw attention to social media posts. For lawyers who have a marketing team, Stefanie suggests establishing a social media training program, especially for LinkedIn.
While we would all love to have some metric that identifies the return on investment of social media, it is not as easy as the number of likes a post receives. Success on social media is a combination of brand awareness, influence on decision making, and information dissemination. However, Marrone points out that many firms have thousands, or even tens of thousands of followers, and if the only engagement you are receiving is minimal, or from a few people, then it is clear that your social media strategy is not working.
Marrone also points out that lawyers and legal professionals should stick to one or two platforms and not spread themselves too thin. LinkedIn, YouTube, and Twitter are probably the safest bets, but it depends on the message you are trying to convey.

Listen on mobile platforms: Apple Podcasts | Spotify
To learn more from Stefanie, check out:
Contact Us:
Twitter: @gebauerm, or @glambert
Voicemail: 713-487-7821
Email: geekinreviewpodcast@gmail.com
Music: Jerry David DeCicca
Transcript

Continue Reading Successful Brand Awareness for Legal Professionals – Tips from Stefanie Marrone (TGIR Ep. 188)

[Note: Please welcome guest bloggers Jennifer Wondracek, Director of the Law Library, Professor of Legal Research & Writing at Capital University Law School, and Rebecca Rich, Assistant Dean for the Law Library and Technology Services, and Assistant Teaching Professor at Drexel University Thomas R. Kline School of Law. – GL]

AI content generation tools, such as ChatGPT, have been in the news a lot lately. It’s the new cool tool for doing everything from coding to graphic art to writing legal briefs. It was even, briefly, used for a robot lawyer that was going to argue in court. And Greg Lambert wrote about it a few weeks ago on this very blog in What a Law Librarian Does with AI Tools like ChatGPT – Organize and Summarize. This post continues Greg’s discussion on ChatGPT use.

AI content generation tools are also the new education bogeyman. A myriad of headlines have been written in the last two months about how ChatGPT is the death of the essay and multiple-choice exams. It’s the newest in a line of digital tools–starting with the Internet and Wikipedia–that students might use to cheat in legal education. But we think this is a bit of an overreaction. ChatGPT and other similar AI-generative content creation tools can be, and absolutely have been, misused; but we found that even with expert prompt creation and a high level of expertise, ChatGPT et al. are not yet capable of producing student work that is indistinguishable from real student work. Not all share this belief. Several professors at the University of Minnesota Law School ran some exams through ChatGPT, using a closed universe of cases provided to the program. ChatGPT earned an average grade of C+ across the four exams, which were graded blindly. But they noted a few important takeaways, including:

[W]hen ChatGPT’s essay questions were incorrect, they were dramatically incorrect, often garnering the worst scores in the class. Perhaps not surprisingly, this outcome was particularly likely when essay questions required students to assess or draw upon specific cases, theories, or doctrines that were covered in class.

And

In writing essays, ChatGPT displayed a strong grasp of basic legal rules and had consistently solid organization and composition. However, it struggled to identify relevant issues and often only superficially applied rules to facts as compared to real law students.

The authors of this blog post have also done some experiments with ChatGPT. Jenny was curious about the kind of legal work that ChatGPT thought it could perform. When asked what types of legal tasks it could do, ChatGPT listed seven options, ranging from summarizing laws to drafting legal documents. The option that caught Jenny’s eye was “Helping with legal research, by providing the most relevant cases and statutes.” Challenge accepted.

Using a current problem her Legal Research and Writing students were working with, Jenny asked ChatGPT “What are the most relevant cases and statutes to determine if someone is a recreational user land entrant under Ohio law?” A few seconds later, ChatGPT gave her two statutes and three cases with brief summaries of each. While it had the general premise correct that a landowner is not liable for injury to a recreational user, assuming all of the requirements are met, it provided incorrect definitions, and every statute and case cited was incorrect. It also disagreed with itself about the duty of care owed to the recreational user in another sentence. Neither statute provided led to R.C. 1533.18 or 1533.181, the Ohio statutes for this law. When asked for more citations for the three cases listed, Jenny received both regional reporter and Ohio citations that were readable, if not quite properly Bluebooked. Investigations into the cases determined that none of the three existed by name and each of the six reporter citations led to a unique case, none of which were remotely responsive to the question. In the end, ChatGPT gave Jenny a partially correct answer with two incorrect statutes, three made-up cases, and six incorrect cases. Not a good day for accurate legal research!

Becka experimented with the law-review-style research and policy paper prompts she uses for her Education Law and AI and the Law classes and had a similar experience. Even with prompts to write longer papers, ChatGPT produced short, generically written papers with no or minimal citation (often including made-up citations!) and no analysis. A five-page paper would have an average of two footnotes per page even when prompted to add more. Becka shared the results with one of her students, who commented that even she could tell this was an F paper. Becka also experimented with having ChatGPT create a class policy presentation. Again, even after several refining prompts, the presentation was, at best, a C- presentation.

Given the legal writing learning curve and low level of longer form writing experience of many of our students, along with their documented increased level of stress and reduced mental health, it is understandable that instructors are nonetheless concerned about the use of AI content generation tools.

As with plagiarism tools, there is now a profitable market for detecting AI-generated content using AI. There are currently at least two startups developing tools: AICheatCheck and CrossPlag, both of which have usable demos. GLTR was developed by a collaboration between MIT and Harvard researchers, and GPTZero by a Princeton University student (for more about these tools and a comparison of how they work, take a look at this RIPS-SIS post). Our friendly neighborhood plagiarism detection companies, Turnitin and Copyleaks, are also in the process of adding AI content detection to their products. OpenAI (ChatGPT’s company) has developed a tool to assist in detecting the use of its tool in writing and is in the process of adding watermarking to ChatGPT-generated content.

None of these detection tools are 100% effective, so it may also be helpful to consider adding ChatGPT detection options to your paper grading rubric.  Some options:

  • ChatGPT generated text is formulaic: it generally follows the five-paragraph essay structure, with a stereotypical topic sentence at the top of each paragraph.
  • Sentence length does not vary as much as human-generated text does.
  • ChatGPT generated text is light on analysis and applying facts to an issue.
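One of the rubric signals above, sentence-length variation, is simple enough to sketch in code. This is a deliberately crude illustration of a single statistic, not a working detector; the function name and example sentences are ours.

```python
# Rough sketch of one rubric signal: human writing tends to vary sentence
# length more than ChatGPT output does. A real detector needs far more
# than this single statistic, and none are 100% effective.
import re
import statistics

def sentence_length_spread(text: str) -> float:
    """Standard deviation of sentence lengths in words; 0.0 means no variation."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)

# Uniform, formulaic sentences vs. varied, human-sounding ones.
uniform = "The cat sat down. The dog ran fast. The bird flew away."
varied = "Stop. The defendant, having exhausted every appeal available, petitioned again. Why?"
```

Here `sentence_length_spread(uniform)` is lower than `sentence_length_spread(varied)`, which is the direction the heuristic predicts; in practice such a signal would only ever be one input among many on a grading rubric.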

Also remember that ChatGPT isn’t good at citation and doesn’t yet have any information from after 2021. Generating text that is well done and indistinguishable from human writing is a difficult enough problem that no one has solved it yet (though an Israeli start-up is trying).

Lastly, we recommend considering teaching students about ChatGPT rather than banning it.  There are so many AI-assisted drafting tools available for lawyers now that we’d be doing them a disservice otherwise (e.g. ClearBrief, Clawdia, and Docugami). The Sentient Syllabus Project has three great principles for doing so:

  1. AI cannot pass this class,
  2. AI contributions must be limited and true, and
  3. AI use should be open and documented.

On to the next experiment!