Tort Law as a Tool for Mitigating Catastrophic Risk from Artificial Intelligence
Weil tackles a fundamental challenge in AI governance: traditional tort law breaks down when the primary risk is a catastrophe that leaves no victims alive to compensate and no courts to award damages. The paper’s core insight is that the expected liability from catastrophic scenarios must be “pulled forward” into recoverable damages in sub-catastrophic cases, loading the full expected disvalue of potential extinction onto the smaller harms that actually reach court (a stylized version of this calculation appears below). This requires radical doctrinal change: Weil proposes punitive damages without the traditional requirement of malice or recklessness, since even careful development creates catastrophic risk.

The paper systematically examines complementary legal mechanisms: treating advanced AI training and deployment as an abnormally dangerous activity subject to strict liability (lowering plaintiffs’ burden of proof), expanding foreseeability doctrine so that developers cannot escape liability by calling a statistically predictable risk unforeseeable, and reconsidering how tort law values human life so that existential stakes are better captured.

Weil also proposes legislative interventions: mandatory liability insurance for AI developers, which forces them to internalize risk through premiums; diverting punitive damages into a public AI safety fund; and pre-announcing these liability rules so they shape development incentives before any catastrophic case arises.

The paper is notably honest about tort law’s limitations: liability works only while developers remain solvent and subject to judgment, it cannot reach truly unforeseeable risks, and it may move too slowly relative to AI capability timelines. The result is sophisticated institutional design that takes AI catastrophic risk seriously while working within, and proposing realistic extensions to, existing legal frameworks. The work received positive reception from the AI safety community and media coverage from Vox, reflecting its policy relevance and accessibility to non-technical audiences.
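To make the “pulling forward” mechanism concrete, here is a minimal expected-value sketch of the kind of calculation the paper’s logic implies; the symbols $p$, $q$, $h$, $H$, and $D$ are illustrative assumptions, not notation from Weil’s paper. Suppose a development activity carries probability $p$ of an uncompensable catastrophe with social cost $H$, and probability $q$ of a compensable sub-catastrophic harm $h$ that reaches court. For the developer’s expected liability to equal the activity’s full expected social cost, the award $D$ in the litigated case must satisfy

\[
q \, D \;=\; q\,h + p\,H \quad\Longrightarrow\quad D \;=\; h + \frac{p}{q}\,H,
\]

so the punitive component $(p/q)\,H$ is precisely the expected catastrophic disvalue loaded onto each recoverable case. On the same illustrative assumptions, the arithmetic also shows how mandatory insurance internalizes risk: an actuarially fair premium would approach $q\,h + p\,H$ per unit of activity, rising directly with the catastrophic risk the developer creates.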