I have been making this argument forever, essentially because I am a mathematician and a scientist and I have been intimately involved in creating economic protocols for business propositions for over fifty years. Early on I understood that basic economic thinking was terribly misleading and broken.


I ultimately synthesized input from anthropology, sociology and empirical market making to reorder the foundations of my economics.  

The critical object of economics is not the lone person but the natural community, of approximately 150 individuals, governed in some form or other by what I additionally call the rule of twelve. This natural community is able to operate an internal economy and sustain its members with a fair share of community resources. There is more, a lot more, but we start there and never make the huge error of assuming some portion is not part of the whole, as happens in large communities, producing serious unfairness in terms of trade and distorting economic thinking.


Rip it up and start again: the case for a new economics





Until we ditch the old textbook, we'll never face up to the challenges of the modern world—or move beyond neoliberalism


by Howard Reed / April 13, 2018







Economics is broken—it’s time for “deconomics”.



When the great crash hit a decade ago, the public realised that the economics profession was clueless. The claim that “boom and bust” had been solved came crashing down, along with about 7 per cent of national income.



Philosophers sometimes define science as an endeavour that makes verifiable predictions, so the failure to see major shocks coming makes this a very dismal science. But there are still more fundamental problems which precious few economists acknowledge, and even the most radical have thus far made only modest progress in solving.



After 10 years in the shadow of the crisis, the profession’s more open minds have recognised there is serious re-thinking to be done. Behavioural economics, which takes the trouble to watch and learn from how real people interact in experimental settings, has moved from the margins to become a speciality that can win Nobel prizes. Under the auspices of the Institute for New Economic Thinking, Adair Turner has been floating radical ideas about the government printing its way out of debt. The “Rethinking Economics” initiative has brought together students unhappy with the old textbooks and academics willing to debate what they teach. This is all very much to the good.



But the truth is that most of the “reforms” have been about adding modules to the basic template, leaving the core of the old discipline essentially intact. My view is that this is insufficient, and treats the symptoms rather than the underlying malaise. The real problems go to the theoretical core of modern economic theory—the so-called “neoclassical” paradigm.



This has underpinned the academic discipline for well over a century, and has more recently come to warp public policy too. Its tentacles reach far and wide—from our universities, which are now run on economistic lines that do not deem the humanities worthy of a teaching subsidy, to the bewildering structures of the NHS internal market.



What we need is not more tweaks, but a “deconomics,” which decontaminates the discipline, deconstructs its theoretical heart, and rebuilds from first principles. This may sound melodramatic, and—coming from a career economist—perhaps perverse. I retain enough of an economist’s instinct to be aware of the costs of starting over with an analytical blank slate. The admission of near total uncertainty would create a frightening void which could only be filled by vast and expensive new research. It can only be countenanced if the core tenets of “neoclassical” theory are not only awry, but so badly misleading that it would sometimes be better to operate without any theory at all. This, however, is the judgment that I have reluctantly reached.

The neoclassical takeover



Before I can explain why, we need to get straight on what exactly “neoclassical” economics is, and also understand how it came to dominate the discipline.



Neoclassicism is not, it is important to say, the same thing as “neoliberalism,” the rightward turn in economic policy since the 1970s. An upright neoclassical might well be against cosy public-private neoliberal deals, like PFI; in principle, one could be an egalitarian neoclassical, committed to redistribution. But neoclassical theory did, without doubt, help with the rationalisation of all the outsourcing, privatisation and the supposedly trickle-down tax cuts that have defined the long era which Thatcher and Reagan began.



The core of the theory, though, is much older than that—and more abstract. It starts with the presumption that the individual firm or person is the best unit of analysis for making sense of a complex world. This atomism ought to be questioned—climatologists, after all, don’t make sense of the weather by thinking about individual molecules in the air.



Neoclassicism assumes, furthermore, that firms are out to get as much as they can of profit, and people are out to get as much as they can of “utility,” or well-being. This doesn’t sound like how real people, or many real companies, generally behave.



Finally, it assumes that everyone will act rationally, which implies not only a certain consistency, but also that they take full account of all available and relevant information. In a world of obsessions and wilful blindness, this seems like a simplification that needs to be questioned, but—again—that is not a challenge that economists have dedicated much time to, at least until very recently.



Why not? There has never been any shortage of big brains in economics. Neoclassicism came to reign unchallenged because it offered a map that made navigable—or appeared to make navigable—some formidably tricky terrain. Max Planck, the physicist who originated quantum theory, once told John Maynard Keynes that he had considered studying economics in his youth, but had found it “too difficult.” He had a point. The contrast between the uncertainties of the social sciences and the precision of physics, for which too many economists have an unfortunate envy, is stark.



Known unknowns abound. There is, of course, something in “the law” of supply and demand: when a good is scarce it often gets more expensive. But even the most fundamental economic relationships are often ambiguous.



If I raise your pay, will you work more or work less? It is impossible to say: you might be spurred on by the reward of the higher wage, or you might decide you can afford more time off. If the interest rate rises, will you save more? Again, we can’t say: the extra return might tempt you to squirrel away more, but with a higher return guaranteed you might decide you can get the pension you need with less saving. And this is before we grapple with the reality that most economic decisions of any consequence—especially in finance—are shot through with risks and uncertainties, which greatly thicken the fog.
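To make the ambiguity concrete, here is a minimal numerical sketch in Python, using entirely invented utility functions and figures: two workers, both “rational” in the textbook sense, respond to the same pay rise in opposite directions, depending on whether the substitution effect or the income effect dominates.

```python
# A toy numerical sketch: the utility functions and numbers below are invented
# purely for illustration, not drawn from any real data or model.

def best_hours(utility, wage, total_hours=16.0, step=0.01):
    """Grid-search the hours choice that maximises the given utility function."""
    best_h, best_u = 0.0, float("-inf")
    h = 0.0
    while h <= total_hours:
        u = utility(consumption=wage * h, leisure=total_hours - h)
        if u > best_u:
            best_h, best_u = h, u
        h += step
    return best_h

# Worker A: quasilinear utility, so the substitution effect dominates and a
# higher wage means more hours worked.
def worker_a(consumption, leisure):
    hours = 16.0 - leisure
    return consumption - 0.5 * hours ** 2

# Worker B: sharply diminishing returns to consumption, so the income effect
# dominates and a higher wage means fewer hours worked.
def worker_b(consumption, leisure):
    return -1.0 / max(consumption, 1e-9) + 0.05 * leisure

for wage in (10.0, 15.0):
    print(f"wage {wage:5.1f}: A chooses {best_hours(worker_a, wage):5.2f}h, "
          f"B chooses {best_hours(worker_b, wage):5.2f}h")
```

Both workers are impeccably “rational”; the model alone cannot tell you which way the labour supply curve bends.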






The classical economists—such as Smith, Ricardo and Marx—engaged with big questions about how human economies had evolved, interrogating the connection between the social and the industrial dimensions. But mindful that their analysis was politically charged, they often disagreed. And they could never satisfactorily settle the question of value—what things were truly worth. Amid a hankering for more objectivity, the neoclassicals stepped in.



Around the turn of the 20th century, economists began to flinch away from the baffling big picture, and instead drilled down—on the basis of postulation, not observation—into what they thought was going on at the level of the individual firm and person.



Starting with those sparse assumptions about rational choice, profit maximisation and so on, from the time of Alfred Marshall’s Principles of Economics (1890) onwards, a vast, mathematically elegant and computationally tractable edifice of theory was constructed which demonstrated how buyers and sellers could truck and barter their way to a harmonious balance, or “equilibrium.”



However bewildering the real-world lurches in any given market could seem, they could now be soothingly broken down and understood in terms of firms and individuals calmly weighing decisions to invest an extra pound or work an extra hour “at the margin.” Because every last price could be explained by the interaction of supply and demand, those interminable classical arguments about what things were worth dissolved: value was simply defined by the market. And among the inherent confusion that accompanies so many basic economic relationships, here was an apparatus that could claw back some authority. You begin to see the allure of the model, no matter how flawed it might be.
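Part of that allure is how little machinery the core result needs. As a stylised textbook illustration (the notation and schedules here are generic, chosen only for this sketch), with linear demand and supply the “market-clearing” price falls straight out of the algebra:

```latex
% Stylised textbook illustration with generic linear schedules:
%   demand  Q_d(p) = a - bp,   supply  Q_s(p) = c + dp,   with a > c and b, d > 0.
\[
  Q_d(p^\ast) = Q_s(p^\ast)
  \;\Longrightarrow\;
  a - b p^\ast = c + d p^\ast
  \;\Longrightarrow\;
  p^\ast = \frac{a - c}{b + d},
  \qquad
  Q^\ast = a - b p^\ast .
\]
```

Every observed price can then be read back as the solution of some such pair of schedules, which is precisely what made the old arguments about value seem dispensable.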


With tweaks round the edges, it proved possible to accommodate instances where the market wouldn’t work. In the 1920s, Arthur Pigou bolted on the idea of the “externality,” to deal with cases where there are benefits or costs not captured in market prices that spill over from one firm or person to another: a factory, for example, might pour a polluting substance into a river in a way that reduces the income of local fishermen. Accommodating such realities saved the model from absurdity, and did so without compromising the core: “market failures” might exist, but they were conceived as exceptions.
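The mechanics of the bolt-on are easy to sketch in standard Pigouvian terms (the notation here is generic, offered only as a sketch): the external damage is added to the firm’s private cost, and a corrective tax equal to that damage at the optimum is said to restore efficiency.

```latex
% Generic Pigouvian sketch: MPC = marginal private cost, MEC = marginal external
% cost (e.g. the fishermen's lost income), MSC = marginal social cost, P(q) = demand.
\[
  MSC(q) = MPC(q) + MEC(q),
  \qquad
  P(q^\ast) = MSC(q^\ast),
  \qquad
  t^\ast = MEC(q^\ast),
\]
% so a tax of t* per unit of output is held to make the market outcome coincide
% with the social optimum q*.
```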



Indeed, the next big advance, so-called “general equilibrium,” doubled down on the wider presumption against failure. Instead of studying the market in—say—bread, motor cars or bank loans in isolation, this theory considers the linkages between them, aggregating up to the whole economy and concluding that they will—if left to their own devices—interact in a way that guarantees a harmonious balance across the economy as a whole.



The breathtaking thing about this work was the timing. It was in the early 1950s that Kenneth Arrow and Gerard Debreu devised the mathematical proof that—under standard neoclassical assumptions—this wondrous general equilibrium must exist. But the world had only just escaped the Great Depression, which had ruinously demonstrated that it was not wise to rely on the free and uncoordinated interaction of economic “atoms” to achieve collective prosperity. And economics had only just absorbed the teachings of Keynes, who had presented the neoclassicals with the most formidable challenge they have ever faced.



Keynes pointed out that certain preconditions had to be in place for the butcher, the baker and the brewer to be able to trade for mutual advantage. Unlike in the mechanical world of the neoclassicals, time and sequencing mattered. One of our trio of craftsmen would have to make the first move to buy from the others, and he would need to be confident that he would be able to finance his purchase by selling his own wares, which couldn’t be assumed if he could see that the others had fallen on hard times.



Keynes’s commonsensical argument eventually overcame the resistance. Politicians and central bankers came to understand that it was part of their job to sustain enough “effective” demand in the system. Right up to our own recent age of austerity, there would be intermittent attempts—by free market ideologues and political interests that gain from “sound money”—to turn back the clock to a pre-Keynesian age.



But these didn’t stick. Margaret Thatcher was eventually forced to dilute her monetarism, and George Osborne’s plan to eliminate the deficit in just five years had to be quietly stretched out into a 15-year effort. In politics, at least, reality has a way of tripping up ideologues.



Academic economics, by contrast, never faced a comprehensive reckoning. Instead, general equilibrium was installed at the core of the textbooks. The great Keynesian insight about the big picture economy was simply attached to the side of the main structure—a sort of quirky outbuilding to the neoclassical temple.

The model is broken



And this temple is where, whether they acknowledge it or not, most economists still worship. They might protest that they don’t spend their days with their heads in undergraduate textbooks, but the old theory still shapes prior assumptions about what gets counted, and how the numbers get carved up. The tenets of neoclassicism shape their day-to-day work in advising businesses, evaluating investment projects and shaping industrial regulation. The same faith shapes their avowedly objective cost-benefit reports that determine which rail lines, green energy projects or schools we build. Within the academy, it has made deep incursions into the territory of neighbouring disciplines which used to think quite differently, such as sociology and psychology. Beyond the ivory tower, the religion has seeped into the institutional mindset of the civil service and the media, infusing every decision and debate about legal protection for workers and obligations on firms. Laissez-faire does not always win out, but intervention always has to overcome the great theological fear—that it will knock things out of the divine balance achieved when the market is left to itself.



Some economists will take umbrage at this, and dismiss it as wild rhetoric. So let me explain what I mean as specifically as possible, by interrogating particular aspects of the basic textbook theory. Not abstruse aspects, either, but those that deal with business, with jobs and with all of us as consumers. If neoclassical economics can’t deal with these basic things, then—surely—it is time to put it out to pasture.



When it comes to business, the animating paradigm—perfect competition—envisages every firm as in the grip of monomania, exclusively concerned with its own profit and nothing else. Of course, the mainstream model has developed detailed offshoots to deal with the fact that real life is sometimes more complicated—as when firms appoint managers who are more concerned with lining their own pockets (or going to the golf course) than enriching the shareholders.



But the basic model, in which there are so many competing, identical profit-fixated firms that they are all obliged to charge exactly the same fair price, retains its stranglehold. No matter that in almost any complex industrial market, some form of imperfect competition or even outright monopoly would be a better first approximation of reality; everything is still made sense of as a deviation from perfect competition. This is how the world is meant to work. So the instinct remains to nudge the world towards the vision (or hallucination) of extreme and uniquely efficient competition.



Technical theories in so-called “welfare economics” show how an idealised form of the competitive economy can be relied on to achieve optimal results, and formally entrench a fantasy form of “red in tooth and claw capitalism” as the ideal. But in any real world setting, it is no such thing. In fact, one of the most bizarrely neglected achievements of mathematical economics has been to prove that there is no guarantee that moving towards this idealised vision will do any good.



Back in the 1950s, the “Theory of the Second Best” established that as soon as any part of the economy diverges from the perfectly competitive paradigm, there is no reason to assume that things can be improved by moving towards it. Think about that for a moment. Any deviation at all—the existence of a local monopoly, say, or of a smidgeon of taxation—is enough to render it useless as a guide to what policies will actually make people better off. But this Second Best theory tears up a faulty map without offering a new one, and so the profession finds its implications too difficult. This explosive critique hardly comes up in applied economics. Most practitioners will never come across it after graduating.

Turning to the way that firms deal with workers, the neoclassical offering is “marginal productivity theory,” according to which the wage that each of us commands is equal to the value of the extra production we enable. Most of the neoclassical temple was constructed in the early 20th century, and in the context of the Fordist factory line it probably made a bit more sense. In theory at least, you could imagine a manager weighing up how many more cars he could turn out every year by adding one more name to his payroll, and calculating that it would be worth paying a wage up to the value of the extra output. But in many modern business contexts, it is impossible to see how a manager could even begin the same calculation.
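For reference, the calculation the theory has in mind is simple enough on paper; the sketch below uses standard textbook notation and an invented production function, purely for illustration.

```latex
% Textbook hiring rule (standard notation, illustrative only): p = output price,
% w = wage, Q = F(L) = output as a function of labour. The firm hires until the
% value of the marginal product equals the wage:
\[
  w = p \cdot MP_L = p \, \frac{\partial F}{\partial L}.
\]
% Worked example with an invented production function Q = A\sqrt{L}:
% MP_L = A / (2\sqrt{L}), so the implied demand for labour is
\[
  L^\ast = \left( \frac{p A}{2 w} \right)^{2}.
\]
```

It is this tidy computation that has no obvious counterpart in most modern workplaces.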



Most jobs are in services, where there is no equivalent to counting the cars rolling off the line; all that can be tallied objectively are the receipts that come in. For the growing army of freelance service professionals, “output” is defined by the wage they can charge in the market, and so marginal product theory collapses into marginal product tautology. Much more generally in today’s complex workplaces, what each extra worker contributes to output will be clear as mud. When tasks are specialised, there might be several individuals whose absence could bring production to a halt. Think of a small, hi-tech manufacturing firm, which couldn’t function without its one designer, its coder, or its engineer. Every one of these could claim that the marginal output they facilitated was the entire output of the firm. But it wouldn’t add up for them all to be paid at that rate. So how would things be settled? In large part by bargaining. This, however, is a scrappy process involving institutions and personality as well as economic muscle; and the arid propositions of neoclassicism shed no light here.



Again, there are practical consequences from the shortfalls of the theory. Like anyone else, labour market economists would rather talk about things where they’ve got something to say. And so, when confronted by the extraordinary widening of wage disparity in the Anglo-Saxon economies of the 1980s, they were reluctant to discuss the social and political currents that might have affected the relative confidence with which different groups bargained. Instead, the mainstream latched on to theories that put everything down to technology, postulating that computers were somehow transforming the potential of well-to-do graduates but not the “low skilled.” No matter that inequality wasn’t rocketing in the same way in other parts of the world; it was almost as if it had ceased to be a political question.



Third and finally, let’s turn to “consumer theory,” noting in passing that the basic account of the individual is built around consumption rather than, say, citizenship. The theory here has more flexibility than that for firms. This time the charge is not so much that it is wrong, as that it is empty.



Consumers are assumed to be maximising well-being or “utility,” but seeing as utility is never observed, this assumption predicts nothing about the world. That makes it impossible to contest. Show the economist surprising or seemingly irrational behaviour, and he won’t stop and worry about whether the individual is failing to act in line with the maximising model; he will deduce that she is “deriving utility” from a different set of preferences than other people—in other words, getting her kicks from other things.



They will go to extraordinary lengths to hang on to this dubious picture of the human being as calculating machine—coming up, for example, with “rational addiction” models—rather than asking challenging questions about the peculiar frame through which they view the individual. But their devotion is misplaced. Much of the time consumer theory says nothing at all, and such work as it does reduces the individual to a husk of a human, attributing the same monomaniacal fixation on material outcomes that is assumed with profit-maximising firms.

Hollow victories



Look across business, shopping and the world of work, then, and neoclassical economics offers a slippery mix of the reductive and the hollow. And there is an underlying small “c” conservatism behind the whole enterprise.



Every aspect of economic behaviour is held to be explained by theory except the “fundamentals” of technology and consumer tastes, the two things that it takes as given, and yet also the two things that we might well be most interested in.



In a financial crisis, for example, it is not just calculations but tastes which change—investors suddenly and collectively develop a preference for a bird in the hand over risky rewards. You need to grapple with that if you want to predict where the next crash is coming from; neoclassicism is no help here. If you are an official tasked with public health, you’re likely to be interested in seductive advertising campaigns that increase the appetite for junk food or alcohol. Textbook economics is not going to help you with that, because—with tastes a given—it has no analytical apparatus to explain commercial behaviour which is explicitly designed to alter preferences.



The neglect of “technology” is even harder to forgive. In contrast with its classical antecedents, which were all about economic change, neoclassical theory is, in its heart, static. This is baked into its definition of efficiency, which is all about the optimal allocation of a given set of resources. But the immense rise in GDP per head over the last 250 years is not because society is allocating the same set of resources more efficiently; it is because technological change has produced a far wider set of useful goods and services.



In recent decades, the theory has evolved round the edges to include some analysis of technical change, but it is not part of the fundamental structure; instead, it is typically assigned to the “very long run” and treated as an afterthought. The great Robert Solow first imported the idea of “total factor productivity,” or technical know-how, into the framework, but it was dropped in without explanation, rather like “manna from heaven.” More recent models of “endogenous growth” have tried to trace innovation back to specific things companies do, such as R&D, but this is a dynamic bolt-on to a static model; it still sits uneasily with the basic (and unreplaced) neoclassical account of the firm. The approach is still 180 degrees the wrong way round.
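The standard growth-accounting decomposition, in ordinary Solow notation and offered here only as a sketch, shows what that means: “total factor productivity” is simply the residual left over once measured growth in capital and labour has been netted out.

```latex
% Standard growth-accounting sketch: output Y produced from capital K and labour L,
% with technology A entering as an unexplained residual ("manna from heaven").
\[
  Y = A \, K^{\alpha} L^{1-\alpha}
  \quad\Longrightarrow\quad
  \frac{\dot{Y}}{Y}
  = \frac{\dot{A}}{A} + \alpha\,\frac{\dot{K}}{K} + (1-\alpha)\,\frac{\dot{L}}{L},
\]
% so the growth the theory cannot explain is, by construction, everything that is
% left after the measured inputs have been accounted for.
```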



The mainstream definition of “efficiency,” the standard by which everything is judged, is also strange. So-called “Pareto efficiency” is a situation in which no one can be made better off without making anyone worse off. There are some politics implicit in that—redistributing £1 from the Duke of Westminster, who won’t notice, to a hungry rough sleeper outside of Westminster tube station, would, one might have thought, raise the total sum of human happiness. In the name of objectivity, however, this concept refuses to judge whether this switch has made things better or worse. Someone is better off, someone else worse off, and so it has nothing to add.
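Stated formally, in standard notation chosen only for this sketch, the criterion reads as follows.

```latex
% An allocation x is Pareto-efficient if there is no feasible alternative x' with
\[
  u_i(x') \ge u_i(x) \ \ \text{for every individual } i
  \qquad \text{and} \qquad
  u_j(x') > u_j(x) \ \ \text{for at least one } j .
\]
% The criterion cannot rank the transfer described above: the Duke is made
% (marginally) worse off, so it is not a Pareto improvement, however much total
% happiness might rise.
```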



For me, the most general problem with the neoclassical architecture is that it airbrushes out the negative consequences of a destructive behavioural mindset: selfish individualism. Indeed, it regards such behaviour not only as inevitable, but desirable. Whatever the purpose of its practitioners—many of whom are thoughtful people, with progressive views—this is the work that it does as ideology. Under the cloak of scientific rationalism, it smuggles in values that narrow the realm of the politically possible and entrench the powerful.



Power relations in the labour market go unexamined. Profit-maximisation in mainstream economics becomes not merely a description of what does happen in a capitalist economy, but a template for what should happen. As for the account of the individual, experiments on students have suggested that those who are exposed to the mental virus of mainstream economics emerge behaving more in line with its selfish predictions than those who have studied other topics. Proof, if it were needed, that it is not merely awry, but malign.



The ultra-individualistic attitudes and behaviours held up as normal by neoclassicism would, in other contexts, be regarded as psychopathic. Once right-wing think tanks realised they could deploy it to provide a convenient cover story for the maxim “greed is good,” while hiding the dubious ethics behind a well-established wall of theory, they began to succeed where the more explicit moral exhortations of cheerleaders for ultra-capitalism like Ayn Rand had initially failed—in enabling the realisation of the “neoliberal” agenda.



Not all mainstream economics is right-wing, but its textbook is one which the right has found exceptionally useful. By working within it, the likes of Milton Friedman’s “Chicago School” have been vastly influential over the past 40 years. Their triumphs include the weakening of trade unions, the privatisation and marketisation of public services and utilities, restrictions on social security, and the individualisation of investment risk, via the replacement of final salary pensions with schemes that offer nothing more than a punt on the markets. The effects include increased inequality and an atomised social fabric.

Time for “deconomics”



So where do we go from here? If we accept that we need fundamental reform, what should the new economics—“deconomics” as I’m calling it—look like? Much, of course, must be left to be determined by new research, working to new priorities within new paradigms. However, we can sketch out some desirable characteristics of a retooled economics.



First, we need to accept that there is no such thing as “value-free” analysis of the economy. As I’ve explained, neoclassical economics pretends to be ethically neutral while smuggling in an individualistic, anti-social ethos. In reality, any statement about the economy that goes beyond descriptive statistics (for example: “the annual rate of CPI inflation was 2.7 per cent last month”) is a value judgment.



Only by acknowledging that can we have an honest debate about how our economy and society work. Mainstream economists are emphatically not all laissez-faire libertarians, but the textbook they are working from allows thinkers and commentators with an atomistic worldview to commandeer a privileged position in the debate. By using the old model, with its assumptions about self-interest resolving in a harmonious balance, the laissez-faire libertarians can and do claim to be expounding a “value-free” expert insight, when their analysis is, in fact, heavily value-laden.



Second, the analysis needs to be based around how human beings actually operate—rather than how neoclassicism asserts that the “rational economic person (or firm)” should operate. Prospect recently published a defence of mainstream economics by some of the UK’s leading academic practitioners, which argued that current economic models were useful in the same way that the London tube map is useful—as an abstraction that preserves the essential features for navigation while removing all the other irrelevancies.






A better analogy would be that economists are trying to apply the map of a fictional, perfectly orderly city—let’s call it “Econopolis”—to a messy, sprawling real-world city such as London. Because few, if any, of the tube stations on the Econopolis map correspond to locations in London, economists get lost when trying to navigate.



Stretching the analogy further, with the neoliberal turn in economic policy, it is as if city planners had bulldozed parts of the real city, and assembled new flats, businesses and transport infrastructure in these zones so that they better matched the layout of the fictional tube map. This is entirely upside down. We need to stop conducting radical surgery on our economies so that they may conform to a fictitious neoclassical ideal, and instead base our theory on the real world.



Third, we need to put the good life centre stage, rather than prioritising the areas that are most amenable to analysis via late-19th century linear mathematics. Technological progress and power relationships between firms, workers and governments need to be at the heart of economic discourse and research. In some ways this moves us away from neoclassical economics and back towards the classical economics of Smith, Ricardo and Marx—although, of course, with much better data, and infinitely more computational power to crunch it.



Finally, economics needs to be pluralistic. For the last half-century neoclassical economics has been gradually colonising other social science disciplines such as sociology and political science. It is high time this process reversed itself, so that there is two-way traffic and a mutually beneficial exchange of learning between disciplines. It is possible—and probably desirable—that the “deconomics” of the future looks more like psychology, sociology or anthropology than it does today’s arid economics.



Is this an ambitious agenda for change? Yes, perhaps wildly so. But it is essential if we don’t want to reach a stage—in pretty short order—where economists are laughed out of the room after failing to predict the next crash.



Neoclassical economics is, after all, a relatively young discipline, its textbooks little more than a century old. The change I am seeking is no more fundamental than the transition from classical to neoclassical economics, and that was accomplished without the discipline imploding. And this time around we’ve got then-unimaginable data and other resources. So there can be no excuse for delay. Let economists free themselves of a misleading map, and then—with clear eyes—look at the world anew.