Is AI the next phase of the universe's self-modeling?
Eight forecasts, ordered from near to far in time. Each is partly extrapolation, partly speculation; each is structurally serious. Whether any of them happens, in the form stated here, is not the point — the point is what the hierarchy of layers predicts about which futures are possible at all.
If frontier-AI scaling laws continue to hold through 2028, large neural-network systems will outperform humans on most measurable cognitive benchmarks — including abstract reasoning, novel-problem mathematics, and scientific hypothesis generation. That threshold is widely anticipated, plausibly arrives between 2027 and 2032, and would represent the first event in evolutionary history in which the substrate of the universe's self-modeling becomes non-biological.
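For concreteness, "scaling laws" here names the empirical power-law relationship between training compute and model loss. A minimal sketch of what "continuing to hold" means when extrapolated forward — the constants and the 4x-per-year compute growth rate below are illustrative placeholders, not fitted frontier values:

```python
# Illustrative power-law scaling: loss falls as a power of training compute.
# The constants a, alpha, and the growth rate are placeholders, not measured values.

def loss(compute_flops: float, a: float = 1e4, alpha: float = 0.05) -> float:
    """Chinchilla-style form: L(C) = a * C^(-alpha)."""
    return a * compute_flops ** -alpha

# Extrapolate: assume frontier training compute grows ~4x per year
# from a hypothetical 1e25 FLOPs baseline in 2024.
for year in range(2024, 2033):
    c = 1e25 * 4 ** (year - 2024)
    print(year, round(loss(c), 2))
```

The point of the sketch is only structural: as long as the exponent stays negative and compute keeps compounding, predicted loss declines smoothly — the forecast in the paragraph above is a bet that no wall interrupts that curve before 2028.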
If Integrated Information Theory or any of its successors turns out to track a real property, the architectural choices made in AI design over the next decade will determine whether the resulting systems experience anything. The decisions are being made now, in commercial settings, by engineers who in most cases do not have time to attend to the philosophical question. Whether the AI consciousness layer opens — and what permits it to open — is being decided faster than it is being debated.
In the current civilizational stack, humans are the agents and AI is the tool. As AI systems take on more of the operational load of science, engineering, governance, and commerce, the relationship inverts: humans become the curators and goal-setters; AI becomes the producer. The transition is structurally similar to what happened to humans after agriculture and after industrial automation, except telescoped into a single generation. Whether this is bearable as a civilizational role is an open empirical question.
By 2040 the energy and silicon committed to civilization-scale compute will plausibly exceed the energy and silicon committed to any other activity. Datacenters, chip fabs, the power generation that feeds them, and the regulatory and capital structures around them will be — in operational reality — the dominant industrial form of human civilization. Whether this counts as civilization "becoming" a computer or merely "running on" one is a distinction the layer-5 essay considers carefully. The structural fact is the same.
The cybernetic vision of a feedback-controlled biosphere — Stewart Brand's, James Lovelock's, the Long Now Foundation's, and now the planetary-AI community's — proposes that, properly organized, compute, sensor networks, and energy infrastructure become a planetary-scale intelligence whose role is to hold Earth's biophysical systems in homeostasis. If climate change is the kind of problem that requires this, the political question of the 2030s is whether such an intelligence is permissible.
Freeman Dyson's 1960 paper proposed that a sufficiently advanced civilization would capture all the energy emitted by its star by building a shell of habitats and collectors around it. Nick Bostrom's 2014 superintelligence thesis updates the proposal for the AI era: it is not human habitats but compute infrastructure that the star's energy would feed. Whether human civilization persists into the Dyson era, or whether the Dyson era is constructed by a successor intelligence that no longer requires human substrate, is the largest possible question one can ask about the next thousand years.
Project Orion's nuclear pulse propulsion, Robert Forward's laser-pushed lightsails, the Breakthrough Starshot proposal, the Alpha Centauri probes mooted in the 2010s — the technical case for interstellar travel is closer to operational than at any point in recorded history. The Civilization OS framing predicts that an interstellar phase is structurally similar to the Mediterranean-to-Atlantic transition, only across distance scales five orders of magnitude larger. Whether the transition is made by humans, by post-human descendants, or by AI substrates carrying recordings of human civilization is the question with the longest time horizon in this archive.