What I’m reading (AI reads)

On Higher Powers

Imagine describing AI to an ancient human: "a superintelligent invisible being, designed by the body of all of humanity's recorded expression, that helps us be productive and less lonely, and guides us through work and personal life." Almost any person in almost any civilization would shrug and say "spirits, angels, devas, dybbuks, gods. Sure, no big deal." We have, in fact, 100,000 years of robust and time-tested systems for organizing our societies around the belief that there are powers higher than ours. These higher powers move among us, determine how we all should act and with whom we should be in communication; this belief is older than just about any principle we hold in common.

The Handoff

I think it is no coincidence that at the historical moment when humans progress to the point of not breeding because it is inconvenient, they invent a million virtual beings, a billion artificial minds, trillions of robots, and a zillion working agents. Think of this as a handoff: a shift from one regime, based on the biologically born, to another, based on the manufactured made. We are in transition from the world of the Born handing off to the world of the Made.

The purpose of handing the economy off to the synths is so that we can do the kinds of tasks that every human would wake up in the morning eager to do. There should not be any human doing a task they find a waste of their talent. If it is a job where productivity matters, a human should not be doing it. Productivity is for robots. Humans should be doing the jobs where inefficiency reigns – art, exploration, invention, innovation, small talk, adventure, companionship. All the productive chores should be handled by the billions of AIs we make.

The Big Take with Nadella

The DeepSeek episode highlights another, arguably more revealing part of Nadella’s thinking: AI is rapidly commoditizing, and this is a good thing for Microsoft. While everyone in Davos was focused on AI consumption, Nadella was contemplating the history of coal production. One of his favorite economic theories is the Jevons paradox, which posits that as a resource becomes more accessible and its usage more efficient, consumption increases. This happened with coal during the 18th and 19th centuries and more recently with plane travel, when plummeting operational costs and airfares helped create frequent flyers, new flight destinations and booming sales for airlines. Nadella believes a similar phenomenon will play out with AI.
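The mechanics behind the Jevons paradox can be made concrete with a toy constant-elasticity demand model (my illustration, not from the article; all numbers are made-up assumptions): when demand for a resource's output is sufficiently price-elastic, an efficiency gain that lowers the effective cost per unit of output ends up increasing total consumption of the resource.

```python
# Toy sketch of the Jevons paradox. Assumption: demand follows a
# constant-elasticity curve with elasticity > 1, so demand grows faster
# than efficiency does. None of these numbers come from the article.

def resource_consumption(efficiency, base_demand=100.0, elasticity=1.5):
    """Resource consumed to satisfy demand under a toy model.

    Effective price per unit of output falls as 1/efficiency, so demand
    scales as efficiency**elasticity; resource used = demand / efficiency.
    """
    demand = base_demand * efficiency ** elasticity
    return demand / efficiency

before = resource_consumption(efficiency=1.0)  # baseline: 100.0 units
after = resource_consumption(efficiency=2.0)   # engines become 2x as efficient

print(before, after)  # total consumption rises despite the efficiency gain
```

With elasticity below 1 the same model shows consumption falling as efficiency rises, which is why the paradox is an empirical claim about particular markets (coal, air travel, and, Nadella argues, AI) rather than a mathematical necessity.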

America Isn’t Ready For What AI Will Do to Jobs

Taken together, these statements are extraordinary: the owners of capital warning workers that the ice beneath them is about to crack—while continuing to stomp on it.

It’s as if we’re watching two versions of the same scene. In one, the ice holds, because it always has. In the other, a lot of people go under. The difference becomes clear only when the surface finally gives way—at which point the range of available options will have considerably narrowed.

The Bitter Lesson

One thing that should be learned from the bitter lesson is the great power of general purpose methods, of methods that continue to scale with increased computation even as the available computation becomes very great. The two methods that seem to scale arbitrarily in this way are search and learning.
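The "scales with computation" point can be sketched with the most knowledge-free method there is (my toy example, not from Sutton's essay): generic random search over an unknown objective, where spending more compute monotonically improves the best result found, with nothing about the problem built in.

```python
# Minimal sketch: a general-purpose method (random search) whose answer
# quality improves as the computation budget grows. The objective f is an
# arbitrary assumption standing in for any black-box problem.
import random

def random_search(f, budget, seed=0):
    """Return the best value of f found using `budget` random probes."""
    rng = random.Random(seed)
    return max(f(rng.uniform(-10, 10)) for _ in range(budget))

f = lambda x: -(x - 3.0) ** 2  # unknown to the method; maximum is 0 at x = 3

small = random_search(f, budget=10)
large = random_search(f, budget=10_000)
print(small, large)  # more computation yields a result at least as good
```

Because both runs share a seed, the larger budget probes a superset of the smaller run's points, so its result can only be equal or better; this is the trivial end of the spectrum whose serious instances are game-tree search and gradient-based learning.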

The second general point to be learned from the bitter lesson is that the actual contents of minds are tremendously, irredeemably complex; we should stop trying to find simple ways to think about the contents of minds, such as simple ways to think about space, objects, multiple agents, or symmetries. All these are part of the arbitrary, intrinsically-complex, outside world. They are not what should be built in, as their complexity is endless; instead we should build in only the meta-methods that can find and capture this arbitrary complexity. Essential to these methods is that they can find good approximations, but the search for them should be by our methods, not by us. We want AI agents that can discover like we can, not which contain what we have discovered. Building in our discoveries only makes it harder to see how the discovering process can be done.