We have enjoyed law practice automation for more than half a century, from early keyword searches of cases manually entered into databases, to semi-automatic assembly of common contract clauses, to expert-seeded predictive coding for screening documents for production in discovery. Since November 2022, the availability of ChatGPT (initially based on GPT-3.5) and other large language models (LLMs), which generate human-sounding text by applying patterns found in billions of words of training text, has attracted tens of millions of users. These users include attorneys, several of whom have been called out by courts for filing papers containing machine-generated, “fake” case citations or arguments.
Some courts and bar organizations have adopted rules to address these and other inappropriate uses of generative artificial intelligence (GenAI). More considered rules are being developed as risks are identified. GenAI-proposed answers that pass human review as “good enough” may turn out to be wrong, reflecting biases and other inadequacies of LLM training data that are opaque to the user.
Legal ethics issues in law practice automation, such as confidentiality, have been amplified by GenAI through embedding and much-enhanced retrievability, neither of which was contemplated by the “proportionality” rules established just a few years ago. The availability through web services (on diverse terms) of foundation LLMs, often customized or “fine-tuned,” and trained on information “scraped” from public-facing sources to which creators and individuals may raise proprietary or privacy claims, leaves tool providers, attorneys, and clients with limited ability to identify, much less resolve, those issues.
Attorneys will learn their ethical responsibilities related to generative AI from practitioners who have been involved in policy development, including at the Board of Bar Overseers of the Massachusetts Supreme Judicial Court.
Course Content
3:30pm - 3:35pm - Welcome and Introduction
Stephen Y. Chow, Esq.,
Stephen Y. Chow, PC, Boston
3:35pm - 4:30pm - Introduction to AI Use by Attorneys
Stephen Y. Chow, Esq.,
Stephen Y. Chow, PC, Boston
4:30pm - 5:00pm - AI: Other Uses, Legislative Landscape
Stephen Y. Chow, Esq.,
Stephen Y. Chow, PC, Boston
Paula M. Bagger, Esq.,
Law Office of Paula M Bagger LLC, Boston
Speaker(s)
Chair
Stephen Y. Chow, Esq.,
Stephen Y. Chow, PC, Boston
Faculty
Warren E. Agin, Esq.,
Analytic Law LLC, Boston
Paula M. Bagger, Esq.,
Law Office of Paula M Bagger LLC, Boston