Mata v Avianca was not an outlier. Johnson v Dunn was not an outlier. Stanford research found large language models hallucinate factual claims in 15 to 20 percent of specialised responses, and Harvard Law School's analysis found 93 percent of AI-generated legal citations had serious errors. ChatGPT will confidently confirm fabricated cases when asked. This is not a tooling problem. It is a verification problem. Every lawyer using AI on client work needs a personal verification protocol in writing, and most do not have one.
Course objectives
Target audience
Practicing lawyers across litigation, corporate, transactional, and regulatory practice. Associates and partners are equally welcome, as are paralegals and legal assistants who draft or research.
Prerequisites
The Mata-to-Johnson trajectory. Walk through Mata v Avianca (2023, $5K fine, dismissal with prejudice) and Johnson v Dunn (July 2025, disqualification, bar referrals to multiple states). Pattern recognition: it is never the AI that is sanctioned. It is the lawyer who failed to verify.
How LLMs work, in plain language. Probabilistic text generation versus retrieval. Why citations are particularly prone to hallucination. Why a model cheerfully confirming its own output proves nothing. The 15 to 20 percent specialised hallucination rate, the 72 percent legal citation rate, and what they mean for your filings.
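The generation-versus-retrieval distinction in this module can be sketched in a few lines of Python. This is a toy illustration, not a real model: the word probabilities and the citation store below are invented for teaching purposes. The point it demonstrates is the one the course makes: sampling produces fluent text with no verification step, while retrieval checks a fixed store and can honestly answer "not found".

```python
import random

# Toy next-word distribution (invented for illustration): a language model
# assembles text by sampling each next word from learned probabilities, so a
# plausible-looking citation can be produced without consulting any source.
next_word_probs = {
    ("see", "Mata"): [("v.", 0.9), ("at", 0.1)],
    ("Mata", "v."): [("Avianca,", 0.7), ("Airways,", 0.3)],  # both "look" legal
}

def sample_next(context, rng):
    """Sample a next word from the toy distribution. Note: no truth check."""
    words, weights = zip(*next_word_probs[context])
    return rng.choices(words, weights=weights, k=1)[0]

# Retrieval, by contrast, looks up a fixed store and can say "not found".
real_citations = {"Mata v. Avianca, Inc., 678 F. Supp. 3d 443 (S.D.N.Y. 2023)"}

def retrieve(citation):
    return citation in real_citations  # True only if the citation exists

rng = random.Random(0)
print(sample_next(("Mata", "v."), rng))           # fluent, truth not guaranteed
print(retrieve("Smith v. Jones, 123 F.9th 1 (2099)"))  # prints False
```

Both branches of `sample_next` read as legitimate case names, which is exactly why a model "confirming" its own output proves nothing: confirmation is just more sampling.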
Ethics and the Model Rules. ABA Formal Opinion 512 (July 2024). Rule 1.1 competence. Rule 1.6 confidentiality (and what counts as a third-party disclosure when you paste a client document into ChatGPT). Rule 3.3 candor to the tribunal. Rule 5.1 and 5.3 supervisory responsibilities. Lab: a hypothetical assigned to participants who must identify which rules are at risk.
The verification protocol. A written, repeatable process every participant uses on every AI output. Citations verified in Westlaw or Lexis. Quotations checked in the original. Statutes confirmed in the controlling jurisdiction. Two-person review for filings. AI use logged. Lab: each participant writes their own verification protocol and the room reviews it.
Picking the right tool for the task. General-purpose AI (ChatGPT, Claude, Gemini) for brainstorming, summarisation of non-confidential material, plain-English drafting. Legal-specific AI (Westlaw Precision, Lexis+ AI, CoCounsel) for legal research and citation. Spellbook for contract drafting and review in Word. Never general-purpose AI for citations or controlling statutes. Decision table walked through with examples.
Confidentiality and privilege traps. What goes into the prompt may leave the firm depending on the tool's terms. Zero Data Retention agreements. SOC 2 Type II. The difference between an enterprise tier and a free tier in this respect. Lab: three real-world prompting scenarios and the participant must say what is acceptable, what needs ZDR, and what should never be sent at all.

Duration: 1 day
Price: 10 900 NOK
Language: English
Format: Classroom, virtual classroom, or in-company
What is AI for Lawyers Foundations?
AI for Lawyers Foundations is an introductory course that gives lawyers and legal professionals a practical and strategic understanding of how artificial intelligence can be used in legal work.
Who is the course for?
The course suits lawyers, legal advisers, compliance roles, and others who work with legal questions and want to understand how AI affects the field.
Do I need a technical background to participate?
No, the course requires no technical experience and is tailored to legal professionals.
Is the course practically oriented?
Yes, the course combines an introduction to AI with practical examples and scenarios relevant to legal work and document handling.
What will I learn that is useful in practice?
You will learn how AI can be used for analysis, document work, summaries, research, and streamlining legal processes, while gaining insight into risk, quality, and responsible use of AI.