Rebooting the Singularity — Why you should still expect AI progress to be fast and furious


About This Event

The singularity hypothesis posits a period of rapid technological progress following the point at which AI systems become able to contribute to AI research. Recent philosophical criticisms of the singularity hypothesis offer a range of theoretical and empirical arguments against the possibility or likelihood of such a period of rapid progress. I explore two strategies for defending the singularity hypothesis from these criticisms. First, I distinguish between weak and strong versions of the singularity hypothesis and show that, while the weak version is nearly as worrisome as the strong version from the perspective of AI safety, the arguments for it are considerably more forceful and the objections to it are significantly less compelling. Second, I discuss empirical evidence that points to the plausibility of strong growth assumptions for progress in machine learning and develop a novel mathematical model of the conditions under which strong growth can be expected to occur. I conclude that the singularity hypothesis in both its weak and strong forms continues to demand serious attention in discussions of the future dynamics of growth in AI capabilities.
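As background for the growth claims above (and not as the talk's own model), a standard textbook way to formalize when recursive improvement yields a true finite-time singularity is the differential sketch below; the capability variable $I(t)$, the constant $c > 0$, and the returns parameter $\epsilon$ are illustrative assumptions, not quantities defined in the abstract.

\[
\frac{dI}{dt} = c\, I(t)^{\,1+\epsilon}, \qquad I(0) = I_0 > 0.
\]

Separating variables gives $I(t) = \bigl(I_0^{-\epsilon} - c\,\epsilon\, t\bigr)^{-1/\epsilon}$ for $\epsilon \neq 0$. With $\epsilon = 0$, growth is merely exponential; with $\epsilon > 0$, the solution diverges at the finite time $t^{*} = I_0^{-\epsilon}/(c\,\epsilon)$, which is the kind of strong, accelerating growth that the strong form of the singularity hypothesis concerns.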

Featured Guests

  • Cameron Domenico Kirk-Giannini, Associate Professor, Rutgers University

Co-sponsors

  • Campbell Public Affairs Institute

Feb. 25, 2026, 3:30 p.m. to 5 p.m.

204 Maxwell Hall

DH11: AI and Human Values


Audience: Open to the Public

Host: Syracuse University

Category: Lecture
