Monday, February 23, 2026

Another View: What Will AI Do To The Economy?



We have heard this concern since the late eighteenth century. To put this in perspective, in 1970 we had a small handful of restaurants in Vancouver and most folks ate at home nearly every night. We also had no computers, and a phone call to South America cost you $15.00. That was over fifty years ago.

At the same time, 40 percent of the population farmed. Today, 80 percent of the population works in services, which includes a massive restaurant industry. My point is that AI does not do human service at all. What it does wonderfully is repetitive work that makes life more productive.

Yes, we will see radical change, but we still need young men out working with the cattle and young women working with the babies and children. None of this is AI at all.

Another View: What Will AI Do To The Economy?

February 20, 2026

https://www.technocracy.news/another-view-what-will-ai-do-to-the-economy/

While thoughtful, this article tries to mix old economic theory with old political theory to predict or suggest outcomes for the future. This is futile. Technocrats want to destroy our current political system and convert the economic system into Technocracy. Do you see the problem? ⁃ Patrick Wood, Editor.

People have been warning lately about how hard and how fast artificial intelligence is about to hit every profession, how it alters Man’s very status in the universe, and how it threatens utter catastrophe. I can’t claim greater insight on these topics, so I won’t elaborate on them. But I do think the long-term economics of AI hasn’t been well analyzed yet (unless someone I’m unaware of has done it), so I’d like to address this topic.

Right now, people seem split between four views:

1. Panic, epitomized by Bernie Sanders’s call for a moratorium on constructing data centers.

2. People like Rob Atkinson, who believe AI’s impact will be like any other technology’s: unpleasant for some, but overall raising productivity and therefore real wages.

3. People like Andrew Yang, who think that AI is shortly going to do all the work, so humans should be given free money to just sit back and enjoy it.

4. Consulting firms offering estimates of which jobs and which industries will feel the greatest impact.

These are either talking only about specifics, obsessing over one aspect of what AI is likely to do, or simply assuming that the future will resemble the past. (It may or it may not, but this should be established by argument, not just be assumed.) Nobody is trying to give an economic theory of the whole.

I believe the correct way to analyze the long-term impact of AI is as follows:

1. It will produce a radical shift in the relative scarcities of goods.

2. This will, in turn, produce a radical re-politicization of the social order.

The first dynamic derives from the fact that AI will make the cognitive aspects, broadly defined, of tasks now done by humans very cheap. By this, I mean its galloping ability to write code, answer phones, et cetera, closely followed by its ability to drive cars today, trucks soon, and so forth. So the part of any task, blue-collar or white-collar, that is done by a human brain can soon be done by a machine, and cheaper.

It follows that AI will generate an unevenly distributed negative cost shock throughout the economy. Most of its consequences in the intermediate term, i.e., until AI is fully deployed to the limits of its capability, will turn on how this tsunami rolls unevenly through the economy, affecting certain areas harder and sooner than others.

In the long run, the real key is that while brain-based operations, as a factor of production, will become cheap, certain other things won’t. For example, even if automobiles can be wholly built by robots that are themselves built by robots, cars will still be made out of raw materials like steel, plastic, and glass. Even if houses are built by robots, there is no good reason for lumber and cement to become cheaper (and if the zoning laws that determine the scarcity of land change, that’s a whole other issue).

Indeed, these other inputs are quite likely to become more expensive, definitely in relative but possibly even in absolute terms, precisely because the brain-substituting inputs have gotten so cheap. Right now, the housing industry cannot demand more lumber and land than the number of houses people buy, which is constrained by the price of houses, which is partly a function of the labor that goes into building them. If the labor portion of producing a house gets cheaper, then, other things being equal, the price of houses will go down, more houses will be demanded, and therefore more lumber and land will be demanded and their price will rise.
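
The chain of reasoning above can be written out as a toy calculation. All numbers here are hypothetical and the unit-elasticity assumption is purely illustrative; the point is only the direction of the effect:

```python
# Toy sketch (hypothetical numbers): cheaper construction labor lowers
# house prices, more houses are demanded, and demand for the fixed
# inputs (land, lumber) rises with them.

labor, materials, land = 150_000, 100_000, 100_000  # cost per house, before AI
price_before = labor + materials + land             # 350,000

labor_after = 15_000                                # robots slash the labor share
price_after = labor_after + materials + land        # 215,000

# Illustrative unit elasticity: quantity demanded scales inversely with price.
demand_ratio = price_before / price_after

# Every extra house needs land and lumber, so demand for those inputs
# rises by the same ratio even though nothing made them cheaper.
print(f"house price: {price_before:,} -> {price_after:,}")
print(f"houses (and land/lumber) demanded: x{demand_ratio:.2f}")
```

With roughly 63% more houses demanded, the unchanged supply of land and lumber gets bid up, which is exactly the "more expensive in relative and possibly absolute terms" outcome described above.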

Remember when TV sets were expensive and suburban real estate in major metro areas was not? Over the next 20 years or less, AI is going to produce a comparable but greater reshuffling of the relative prices of goods. Bottom line: land and raw materials are the two things AI cannot make cheaper. These will therefore become the key goods of contention, the things whose possession (or lack thereof) is the object of both global geopolitical conflicts and the key determinant of an individual’s living standard. (And there’s a third thing I’ll get to later, but I need to explain more of the argument before it will make sense.)

Granted, the above argument presumes:

1. That AI will continue to get cheaper and cheaper. I know of no reason why it shouldn’t, but this is still an empirical premise, not a conceptual truth.

2. That AI will become so ubiquitous (and possibly self-deploying) that the deadly serious contest going on right now to achieve the most advanced AI will end in an equilibrium of equal capabilities all round.

3. That some other factor, like energy, doesn’t turn out to be constrained in ways we do not now expect.

Note that we’re talking here about the ultimate end-point of AI’s development and deployment. Until we get there, competence at AI itself will be the biggest theatre of rivalry, and it would be very unwise to let the US fall behind. But if we accept the above givens, then land and raw materials (which economists often define as a single thing, as in “land, labor, and capital” because raw materials come from land) will be the key constraints on both national economic output and individual consumption.

So, paradoxically, the triumph of AI will lead to the revenge of the physical world.

The implication here is that AI is unlikely to usher in a world where everything is so cheap to produce that we can just give goods and services away for free. That, minus the mediating institution of a government check every month, is what basic income amounts to, in its AI-related version. Giving stuff away for free only makes sense, and is only economically sustainable, if things are free, and they won’t be. (Basic income is also problematic for other reasons that need not detain us here.)

The next interesting dynamic to note is that if everything whose major production cost is the work of human brains drops in price, then the relative price of everything else must go up. That’s just arithmetic. For the last 80 years or so, the opposite has been happening, due to so-called Baumol’s Cost Disease. Goods that could be produced with the rising non-AI automation of this period, mainly manufactures, became relatively cheaper. Simultaneously, non-offshorable labor-intensive services – everything from schoolteachers to doctors – got more expensive. This is all well documented and not that controversial: William Baumol framed the problem in terms of explaining a crush of rising costs, e.g., in government budgets, as a difference in the rate of inflation in goods vs. services.
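
The "just arithmetic" claim can be made concrete with two hypothetical prices: when one good's money price falls and the other's stays flat, the relative price of the unchanged good rises by definition.

```python
# Hypothetical prices: AI slashes the money price of a brain-intensive
# service while the money price of land stays flat.

p_service_before, p_land_before = 100, 100
p_service_after,  p_land_after  = 20, 100

# Relative price of land, measured in units of the service.
rel_before = p_land_before / p_service_before  # 1 acre "costs" 1 unit of service
rel_after  = p_land_after / p_service_after    # the same acre now costs 5 units

print(rel_before, rel_after)
```

Nothing about land changed; it became five times dearer in relative terms purely because the AI-exposed good got cheap.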

But Baumol could just as accurately have described this dynamic as “Baumol’s prosperity sharing.” Consider: The productivity of steelworkers in the US rose by a factor of eight from 1950 to 2020, while the productivity of barbers remained unchanged. But we didn’t see the wages of steelworkers increase by a factor of eight while those of barbers remained the same. Instead, with some variation, the wages of American workers as a whole rose during these years on the back of the economy’s overall increase in productivity. (Some argue that wages and productivity diverged, but the logic here holds even if wages didn’t rise quite as fast as productivity.)

The point here is that sector-specific productivity gains don’t go only to workers in the gaining sector. This has been a good thing for society, but, crucially, it has depended on the fact that huge numbers of humans were needed to perform cognitive tasks. This is why past technological change did not impoverish the workers it put out of a job. But when this fact disappears, humans no longer have the monopoly on cognitive capability that forced the world to keep paying them no matter how expensive they got. Baumol’s prosperity sharing, that is, only works when human brains are a bottleneck through which the economy’s output must pass.

If you take this bottleneck out of the picture, there’s no longer a reason for there to be a difference in the inflation rate between goods and services. Looked at one way, this will be a good thing, because services that people want will get cheaper. Medical services, for example, will probably stop gobbling up a larger and larger share of GDP. And good luck persuading a public sweating from the cost of health insurance to stop this: Anyone wanting to restrain deployment of AI is going to discover that people on the “customer” side of the counter will always prefer a cheap robot to an expensive human. Granted, this won’t be true if customers are willing to pay for the human touch, but then we’re talking about something robots can’t do, which is by definition an exception to this logic.

The reassuring fact here, among the many alarming ones, is that AI can’t actually destroy incomes faster than it lowers costs. For example, if we replace a technician who charges $50 per x-ray with a robot that charges $5 per x-ray, workers as a class lose $45 in income from this change. But they also face $45 less in medical bills. This is, of course, the essence of view #2 noted above, and for much of the transition period to full AI deployment, it will hold true. At the terminal equilibrium, this is simply the truism that the hell where robots take all the jobs is also the heaven where robots do all the work. (More on this later.) During the transition period, who ends up happy and who unhappy will depend on whether the cost shock hits their income or their expenses sooner.
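
The symmetry in the x-ray example is exact by construction, using the article's own numbers:

```python
# Every dollar of income AI destroys is a dollar of cost it removes:
# the wage no longer paid is the same dollar as the bill no longer charged.

human_price, robot_price = 50, 5          # dollars per x-ray

income_lost_per_xray = human_price - robot_price  # wages workers stop receiving
bills_saved_per_xray = human_price - robot_price  # fees customers stop paying

assert income_lost_per_xray == bills_saved_per_xray == 45
print(income_lost_per_xray, bills_saved_per_xray)
```

The aggregate identity holds at every price level; what the transition period scrambles is only *whose* income falls before *whose* bills do.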

Another important implication here is that AI will be very deflationary. Given that developed nations’ post-1930s Keynesian economic systems are based (unlike, say, the deflationary late 19th-century US) on low but non-zero inflation, this probably won’t result in actual deflation. What is more likely is the loose monetary policy that governments will be able to get away with, without triggering the inflation that is its normal consequence, because of this huge deflationary pressure. Stable prices with much of the economy deflating necessarily means prices rising elsewhere, so again, we have our radical shift in relative prices, with inflation piling up in whatever AI can’t touch.
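
A two-good sketch (hypothetical weights and rates) shows why stable overall prices plus deflation in the AI-exposed half of the basket forces inflation into the other half:

```python
# Suppose the price index weights the AI-exposed and AI-immune halves of
# the economy equally, and AI-exposed prices fall 6% a year.

weight_ai, weight_rest = 0.5, 0.5
ai_inflation = -0.06

# A stable overall price level means the weighted average is zero:
#   weight_ai * ai_inflation + weight_rest * rest_inflation = 0
rest_inflation = -(weight_ai * ai_inflation) / weight_rest

print(f"{rest_inflation:+.0%}")  # prints "+6%"
```

The 6% of inflation "piling up" in the AI-immune half is the mirror image of the deflation elsewhere, which is the same shift in relative prices described above, seen from the monetary side.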

Here’s where politics enters the picture, because we’ll also see a decoupling of two extremely important dynamics that have operated in parallel for a very long time, but have been taken so for granted that they are rarely even discussed:

1. The economic leverage (derived from society’s demand for their work) that people have to extract income from the economy.

2. The political leverage (derived from politicians’ demand for their vote) that people have to extract income from the economy.

A major reason the economies of democratic nations function at all is that the ability of people to extract income from the economy is, despite a political system that could hypothetically redistribute income by fiat, close to what the free market would give them. Granted, a) this dynamic is not perfect, b) disputing how some groups allegedly take more than their fair share is a major issue in our politics, and c) some classes of people (children, retirees) are not expected to earn their own way. But broadly speaking, it is true, and if it weren’t, the incentive systems of a market economy couldn’t function.

The problems start when dynamic #1 above sharply diminishes while #2 does not. If vast numbers of people start to lose income while retaining political leverage, they will inevitably try to get government to do something about it. Many of them will have convincing explanations for why they deserve interventions in their favor, and this will be complicated by the fact that some of them will even be right.

A giant stabilizing factor in our politics has always been the fact that whatever extreme or foolish political ideas people may entertain, 99% of the population still has to get up the next day, go to work, and do their jobs. It’s no accident that in times of mass unemployment, political upheaval is common. My point here is not so much the likelihood of mass unemployment as the fact that the economic leverage of a lawyer is going to converge on that of her secretary. Some may welcome this, because it implies a reduction in inequality, but there’s a problem. If we extrapolate this trend, what happens when the economic leverage of human labor approaches zero? The work of distributing income shifts entirely to the political system, as opposed to the economy. More and more people will be in the current situation of Social Security and welfare recipients, whose incomes are governmentally determined.

This is where radical politicization emerges. Right now, we’re used to having a social structure that is mostly determined by people’s occupations, starting with the fact that if I say, “social hierarchy,” you assume automatically that I mean the hierarchy of how important people’s jobs are, usually but not always measured by how much they earn doing them. Outside of situations like a full-blown Marxist revolution, this imposes a natural limit on how much any political intervention can change society. But if you take this economic backbone of our social structure away, then:

1. The scope of what people have good reason to fight over politically expands.

2. The economic stabilizer independent of political interventions weakens.

In the unlikely thought-experiment of everyone one day subsisting on UBI, who gets how much UBI will obviously be the major issue in politics, period.

It would seem to follow that, compared to the politicization we’ve seen since 2016, “You ain’t seen nothin’ yet.” This doesn’t necessarily mean we will experience extreme antagonism between left and right. Indeed, the above dynamics may well be potent enough to scramble existing left-right configurations of who votes for whom and what policies they want. But the days of being able to rely upon the fact that society has a natural, self-enforcing structure if government just enforces property rights and a few other things, will be gone.

We Americans have generally been able to assume that government should proactively intervene to shape social structure only when something goes wrong, i.e., when this default outcome is unsatisfactory in some way. Granted, what counts as “unsatisfactory,” and what government should do about it, are legitimately controversial, but there are not today, and have never been, significant players in American politics who believe that the structure of society should be determined by government per se. But we are increasingly going to be cornered into that result, simply because there will be no other source of social structure.

This can’t happen immediately, and there will be a thousand complications and countervailing factors, starting with the fact that human labor is unlikely to ever be completely eliminated. But this is still where we are eventually headed.

Between now and the terminal equilibrium, there will be all sorts of profound secondary effects. For example, what will happen to unions if the labor of their members becomes irrelevant? What will public opinion about immigration become when “come here and work” becomes meaningless? What will people go to college for, if not to learn well-compensated skills?

The above changes pose particular risks to American democracy. Ignoring many details of implementation, the basic social contract in the US has never been paternalistic, i.e., based on the idea that it is the function of government to take care of people. The assumption has always been that people should take care of themselves and that, as noted, government should step in when they can’t. And the backbone of this logic has simply been people earning income according to the market. Take this factor away, and you inevitably end up with a world in which the social contract has to be “government taking care of people,” or people will get nothing.

We Americans have no experience running that kind of government.

Another implication, which I alluded to earlier, is that in addition to land and raw materials, politics itself will become a binding constraint on the economy, in the sense of quality of governance. At any given level of AI capability, a society is going to be able to exploit it, and distribute its fruits to its population, effectively if it enjoys the governance quality of Japan or Switzerland. At the level of Russia or South Africa, I doubt it. I mean “governance” here in the sense of “implementing a regime that aims at the common good,” not just technical state capacity, though obviously the latter will be required. And if the importance of politics sharply increases, then the importance of civic virtue will necessarily increase. People will quite rationally care a lot more about who governs them.

What has all this to do with industrial policy, the subject of this Substack? Nothing and everything. The above facts will, over the next few decades, so massively condition whatever economy and political system U.S. industrial policy is implemented in, that not having a concept of what’s going to happen would be like a nation trying to formulate a foreign policy in 1950 without having a settled opinion on communism vs. capitalism. Most of America’s current industrial-policy needs have to do with the transition period to full AI deployment, but without an idea of the end game, we will be too distracted to formulate policy correctly. It will be important not to get blown off course.
