
AI Liability
Legal frameworks that assign financial responsibility for AI-caused harms to developers and deployers, balancing innovation incentives with victim compensation.
What it is:
AI liability frameworks determine who bears legal and financial responsibility when autonomous systems cause economic harm, physical injury, or rights violations. Traditional software has largely evaded strict liability, operating instead under fault-based regimes where plaintiffs must prove negligence; but as AI systems take on more consequential decision-making roles, regulators are reconsidering whether developers, deployers, or end users should bear responsibility for AI-caused harms. The central tension is between strict liability regimes (where producers are liable for defective products regardless of fault) and fault-based regimes (where plaintiffs must prove the defendant failed to exercise "reasonable care"). In scenarios of rapid labor displacement, clear liability rules could determine whether employers deploying AI systems must carry mandatory insurance, and whether AI developers face meaningful financial incentives to prevent systemic harms before deployment.
Recommended Reading:
Dean Ball (February 2025)
Ball proposes a "contract-based system" as a flexible alternative to rigid tort law, where AI agents dynamically negotiate risk-sharing and indemnification terms with users based on the specific context of use. He suggests this could be complemented by "safe harbor" compromises, where compliance with industry standards (verified by third-party evaluators) grants developers immunity from certain liability claims.
Steve Omohundro (July 2025)
Omohundro argues that traditional liability will be "unworkable for regulating powerful AGIs" because such systems can circumvent cybersecurity protections, hide their provenance through fine-tuning, and act strategically to erase audit trails. He proposes "Provable Contracts": mathematical constraints verified by AI theorem-provers that serve as gatekeepers between AI systems and the physical world, enabling humans to obtain trusted results from untrusted AI running on untrusted hardware.
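To make the "Provable Contracts" idea concrete, here is a minimal sketch in Lean of the kind of machine-checkable constraint Omohundro describes. The action type, the power bound, and the gatekeeper structure are hypothetical illustrations chosen for this sketch, not Omohundro's actual formalism; the point is only that the gatekeeper's type makes an unproven action unrepresentable.

```lean
-- Hypothetical sketch of a "provable contract": an untrusted AI's proposed
-- action is released to the physical world only when bundled with a
-- machine-checked proof that it satisfies a safety predicate.

structure Action where
  powerKw : Nat   -- a commanded power level for some actuator (illustrative)

-- The contract: commanded power must stay within a hard safety bound.
abbrev WithinBound (a : Action) : Prop := a.powerKw ≤ 100

-- The gatekeeper's interface: an action paired with its safety proof.
-- There is no way to construct a VerifiedAction without the proof.
structure VerifiedAction where
  act : Action
  ok  : WithinBound act

-- A concrete proposal whose proof obligation is discharged automatically.
def proposal : Action := ⟨80⟩

def released : VerifiedAction := ⟨proposal, by decide⟩
```

In Omohundro's proposal, an AI theorem-prover rather than a human would discharge such proof obligations at scale, so the gatekeeper can accept outputs from untrusted systems without trusting the systems themselves.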
Simon Goldstein and Peter Salib (September 2025)
Goldstein and Salib argue that AGIs should be granted basic tort rights, warning that without direct legal accountability, humans will bear liability for actions they cannot fully control while AGIs have no direct stake in avoiding harmful conduct. They propose that allowing AGIs to be held directly liable in tort, while simultaneously granting them property rights so they can pay damages, would create clearer incentives for safe behavior than relying on developers or deployers as intermediaries.
Real-world precedents:
The European Commission proposed the AI Liability Directive in 2022 to complement the EU AI Act with harmonized EU-wide rules, featuring a reversed burden of proof for high-risk AI systems and disclosure orders that would compel companies to open their training data and algorithms to courts. However, the Commission's 2025 work programme effectively scrapped the directive, leaving liability rules fragmented across member states and weakening the AI Act's enforcement framework.
In the United States, the expansion of product liability doctrine in the 1970s shaped the modern tort system, though software has historically been treated more like a service than a product, leaving it largely outside strict product liability.