Adaptive Enforcement with AI-Augmented Monitoring
We study a novel dynamic inspection game in which a regulator commits to a detection technology, and a regulated firm chooses whether to engage in harmful conduct and, if detected and sanctioned, whether to incur a redesign cost that generates a modified violation requiring renewed detection. In this adaptive environment, bounded sanctions give rise to three Markov perfect equilibria: full compliance, harm until recognition, and persistent redesign. We show that AI-augmented monitoring can shift the equilibrium from persistent redesign to harm until recognition, but only when regulatory investment crosses a regime-shifting threshold. Below this threshold, greater monitoring intensity may increase enforcement workload without reducing aggregate harm, as the regulator repeatedly detects adaptive violations while the firm continues to redesign. Thus, partial investments in AI monitoring can generate congestion rather than deterrence. When the firm can also adopt AI to reduce its redesign cost, the regulator’s deterrence threshold rises, reinforcing the strategic interaction between enforcement and evasion technologies. Moreover, congestion becomes particularly salient when AI-flagged violations require human review and regulatory review capacity is binding. In this case, the precision of AI triage—especially its false positive rate—matters as much as detection intensity. Enforcement effectiveness therefore depends not only on expanding detection, but also on allocating scarce human review resources efficiently.
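The three-regime structure described above can be illustrated with a stylized value comparison. This is a toy sketch, not the paper's actual model: the functional forms (stationary per-period values, geometric detection) and every parameter value (`DELTA`, `GAIN`, `SANCTION`, `REDESIGN_COST`) are hypothetical assumptions chosen only to exhibit the threshold behavior the abstract describes.

```python
# Stylized sketch of the three-regime structure: all functional forms and
# parameter values below are hypothetical illustrations, not the paper's model.

DELTA = 0.9          # discount factor (assumed)
GAIN = 1.0           # per-period gain from harmful conduct (assumed)
SANCTION = 2.0       # bounded sanction on detection (assumed)
REDESIGN_COST = 0.5  # cost of producing a modified violation (assumed)

def value_persistent_redesign(p: float) -> float:
    """Firm violates every period; on detection (probability p) it pays
    the sanction plus the redesign cost and continues with a new variant."""
    return (GAIN - p * (SANCTION + REDESIGN_COST)) / (1 - DELTA)

def value_harm_until_recognition(p: float) -> float:
    """Firm violates until first detected, pays the sanction once,
    then complies forever (continuation value 0)."""
    return (GAIN - p * SANCTION) / (1 - DELTA * (1 - p))

def equilibrium(p: float) -> str:
    """Firm's best response to monitoring intensity p: the regime with
    the highest value (full compliance is normalized to 0)."""
    candidates = {
        "full compliance": 0.0,
        "harm until recognition": value_harm_until_recognition(p),
        "persistent redesign": value_persistent_redesign(p),
    }
    return max(candidates, key=candidates.get)
```

Under these illustrative numbers, raising the detection probability from 0.2 to 0.45 shifts the firm's best response from persistent redesign to harm until recognition, and 0.6 induces full compliance, mirroring the regime-shifting threshold in the abstract: below the threshold, detections accumulate (congestion) while redesign continues.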
Ginger Zhe Jin, D. Daniel Sokol, and Liad Wagman, "Adaptive Enforcement with AI-Augmented Monitoring," NBER Working Paper 35010 (2026), https://doi.org/10.3386/w35010.