December 18, 2018

Basic Income on the Res, Part 2

For nearly two decades, Duke Medical School professor Jane Costello has been studying the impact of casino money on the health and wellbeing of the North Carolina Cherokee tribe. For long, balanced articles about her work, see “What Happens When the Poor Receive a Stipend?” in The New York Times (2014) and “Free Money: The Surprising Effects Of A Basic Income Supplied By Government” in Wired Magazine (2017).

The NY Times article lists several encouraging results. Here are a few:

The number of Cherokee living below the poverty line had declined by half.

The frequency of behavioral problems declined by 40 percent, nearly reaching the risk of children who had never been poor.

Crimes committed by Cherokee youth declined.

On-time high school graduation rates improved.

The earlier the supplements arrived in a child’s life, the better that child’s mental health in early adulthood.

The money seemed to improve parenting quality.

Prof. Costello also noted neurological benefits, particularly brain development in the “hippocampus and amygdala, brain regions important for memory and emotional well-being.”

Randall Akee, an economist at UCLA and a collaborator with Prof. Costello, speculated about the impact of these findings on the cost of welfare benefits:

A cash infusion in childhood seemed to lower the risk of problems in adulthood. That suggests that poverty makes people unwell, and that meaningful intervention is relatively simple.

Bearing that in mind, [Prof. Akee] argues that the supplements actually save money in the long run. He calculates that 5 to 10 years after age 19, the savings incurred by the Cherokee income supplements surpass the initial costs — the payments to parents while the children were minors. That’s a conservative estimate, he says, based on reduced criminality, a reduced need for psychiatric care and savings gained from not repeating grades.

The Wired article tracks the experiences of “Skooter” McCoy, who left the Cherokee Reservation to play small college football the year the casino money distributions began, and of his son Spencer McCoy, who was born that same year. Skooter returned to the Reservation to coach football at the local high school and is now general manager of the Cherokee Boys Club, a nonprofit that provides day care, foster care, and other tribal services.

The casino money made it possible for him to support his young family, but the money his children will receive is potentially life-altering on a different scale.

‘If you’ve lived in a small rural community and never saw anybody leave, never saw anyone with a white-collar job or leading any organization, you always kind of keep your mindset right here,’ he says, forming a little circle with his hands in front of his face. ‘Our kids today? The kids at the high school?’ He throws his arms out wide. ‘They believe the sky’s the limit. It’s really changed the entire mindset of the community these past 20 years.’

The Cherokees’ program began with the same provision we saw last time in the Seneca tribe’s program — a one-time distribution at age 18 of the money set aside for minors — but the Cherokees later amended their law to call for payments in three stages: still not ideal, but a move toward sensibility. Skooter calls the coming-of-age payments “big money,” and has seen his share of abuse, but his son Spencer appears to be taking a different path:

When Spencer first got his ‘big money,’ he says, ‘I’d get online and I was looking for trucks and stuff, but I thought at the end of the day, it wasn’t really worth it.’ Aside from a used bass boat he bought to take out fishing, Spencer has stashed most of the money away in hopes of using it to start his own business one day.

After reviewing Prof. Costello’s work, the Wired article examines the use of UBI as a response to technological unemployment, concluding as follows:

The true impact of the money on the tribe may not really be known until Spencer’s generation, the first born after the casino opened, is grown up. For the techies backing basic income as a remedy to the slow-moving national crisis that is economic inequality, that may prove a tedious wait.

Still, if anything is to be learned from the Cherokee experiment, it’s this: To imagine that a basic income, or something like it, would suddenly satisfy the disillusioned, out-of-work Rust Belt worker is as wrong-headed as imagining it would do no good at all, or drive people to stop working.

There is a third possibility: that an infusion of cash into struggling households would lift up the youth in those households in all the subtle but still meaningful ways Costello has observed over the years, until finally, when they come of age, they are better prepared for the brave new world of work, whether the robots are coming or not.

We’ll look more at “the robots are coming” and Silicon Valley’s response to technological unemployment next time. Meanwhile, for related information, see this summary re: U.S. government benefits to Indian tribes, and see this article re: another current version of UBI — the Alaska oil money trust fund.

Kevin Rhodes studies and writes about economics in an effort to understand the world his kids are growing up in, which is also the world he’s growing old in. You might enjoy his latest LinkedIn Pulse article “The Fame Monster: Rockstars And Rockstar Entrepreneurs.”

Basic Income on the Res

Thomas Sowell has a platinum resume: Marine Corps war vet, bachelor’s Harvard, master’s Columbia, Ph.D. U of Chicago, professor at Cornell and UCLA, the Urban Institute and the Hoover Institution at Stanford, books, articles…. You get the point: when he talks economic and social policy, people listen.

The people at The Institute for Family Studies (IFS) were listening when they published a blog post earlier this year entitled “What We Can Learn From Native Americans About a Universal Basic Income.” The article describes the Seneca tribe’s practice of distributing casino money to its members, and focuses on the particularly disastrous provisions pertaining to the money for minors:

Half the money for children under 18 is given to their parents, and the other half is put into a trust. When a Seneca youth turns 18 and can show that he or she has graduated from high school or earned a GED, he or she receives a lump sum of $30,000. Those who don’t get a high-school degree have to wait until they’re 21 to receive the money.

Government officials and other members of the nation tell me that the best thing most young adults do with this money is to buy a new truck. These are kids who have never had very much before; so when someone hands them a huge check, they clearly don’t know what to do. Store owners report that young people will come in to buy candy, handing $50 or $100 without expecting any change. These young people seem to have no concept of saving or investing.

I used to practice estate planning, and need to point out that the Seneca approach to minor beneficiaries unfortunately borrows the worst kind of legislative drafting laziness from intestacy law, uniform gifts to minors acts, and similar laws involving minors and money. Their experience therefore has nothing to do with UBI specifically. Of course dropping a wad of cash on an unprepared 18- or 21-year-old is a dumb idea. Of course the kids “have no concept of saving or investing.” (Like the rest of us do.) Moving on, the article cites more disasters:

The money “is almost never saved for education.

“Despite a vast apparatus to help Seneca members set up businesses, almost no one starts one.

“Unless people are employed by the tribe (either through the casino or in tribal government), they are largely unemployed.

“Theft is also a problem. One official told me that they have had reports of elder abuse where children and grandchildren were stealing payments from older members of the tribe.

“The results of all this can be seen in the poverty rates for the Senecas, which have continued to rise. Their territory is divided into two reservations. As of 2011, the Allegany reservation poverty rate was 33.3 percent and the Cattaraugus reservation poverty rate was 64.9 percent, the highest in Cattaraugus County. During the first decade that the casino was operating, the poverty rate in Cattaraugus County, which includes part of the Seneca Territory, increased from 12.8 in 2000 to 18.7 in 2011.”

Finally, the article ends by citing Thomas Sowell:

Writing about the concept of a Universal Basic Income last year, Thomas Sowell summed up the situation: ‘The track record of divorcing personal rewards from personal contributions hardly justifies more of the same, even when it is in a more sophisticated form. Sophisticated social disaster is still disaster—and we already have too much of that.’

The Sowell article cited by the IFS blogger was “Is Personal Responsibility Obsolete?” (Investor’s Business Daily, June 6, 2016). It begins this way:

Among the many disturbing signs of our times are conservatives and libertarians of high intelligence and high principles who are advocating government programs that relieve people of the necessity of working to provide their own livelihoods.

Generations ago, both religious people and socialists were agreed on the proposition that ‘he who does not work, neither shall he eat.’ Both would come to the aid of those unable to work. But the idea that people who simply choose not to work should be supported by money taken from those who are working was rejected across the ideological spectrum.

And so we see the standard anti-UBI fightin’ words:

“divorcing personal reward from personal contributions”

“government programs that relieve people of the necessity of working to provide their own livelihoods”

“people who simply choose not to work”

“money taken from those who are working”

I confess, I can’t help but wonder what people who say those things think they would do with UBI money. Again moving along….

Other tribes also distribute casino money. The following is from “What Happens When the Poor Receive a Stipend?”, published by The New York Times as part of its series on economic inequality called “The Great Divide.”

Scientists interested in the link between poverty and mental health, however, often face a more fundamental problem: a relative dearth of experiments that test and compare potential interventions.

So when, in 1996, the Eastern Band of Cherokee Indians in North Carolina’s Great Smoky Mountains opened a casino, Jane Costello, an epidemiologist at Duke University Medical School, saw an opportunity. The tribe elected to distribute a proportion of the profits equally among its 8,000 members. Professor Costello wondered whether the extra money would change psychiatric outcomes among poor Cherokee families.

Same idea, different tribe. How’d they do? We’ll find out next time.

 


There’s No Such Thing as a Free Lunch — True or False?

Last time, we were introduced to the idea of a universal basic income (UBI). We can assume that the pros and cons have been thoroughly researched and reasonably analyzed, and that each side holds its position with utmost conviction.

We can also assume that none of that reasonableness and conviction will convert anyone from one side to the other, or win over the uncommitted. Reason doesn’t move us: we use it to justify what we already decided, based on what we believe. See “Why Facts Don’t Change Our Minds,” The New Yorker (February 2017) and “This Article Won’t Change Your Mind,” The Atlantic (March 2017).

History doesn’t guide us either — see “Why We Refuse to Learn From History” from Big Think and “Why Don’t We Learn From History?” from military historian Sir Basil Henry Liddell Hart. The latter contains conventional wisdom such as this:

The most instructive, indeed the only method of learning to bear with dignity the vicissitude of fortune, is to recall the catastrophes of others.

History is the best help, being a record of how things usually go wrong.

There are two roads to the reformation for mankind — one through misfortunes of their own, the other through the misfortunes of others; the former is the most unmistakable, the latter the less painful.

I would add that the only hope for humanity, now, is that my particular field of study, warfare, will become purely a subject of antiquarian interest. For with the advent of atomic weapons we have come either to the last page of war, at any rate on the major international scale we have known in the past, or to the last page of history.

That’s good advice, but it mostly goes unheeded. It seems we’d rather make our own mistakes.

If reasoned analysis and historical perspective don’t inform our responses to radically new ideas like UBI, then what does? Many things, but cultural belief is high on the list. Policy is rooted in culture, culture is rooted in shared beliefs, and beliefs are rooted in history. Cultural beliefs shape individual bias, and the whole belief system becomes sacred in the culture’s mythology. Try to subvert cultural beliefs, and the response is outrage and entrenchment.

All of which means that each of us probably had a quick true or false answer to the question in this week’s blog post title, and was ready to defend it with something that sounded reasonable. Our answer likely signals our kneejerk response to the idea of UBI. The “free lunch” — or, more accurately, “free money” — issue appears to be the UBI Great Divide: get to that point, and you’re either pro or con, and there’s no neutral option. (See this for more about where the “no free lunch” phrase came from.[1])

The Great Divide is what tanked President Nixon’s UBI legislation. The plan, which would have paid a family of four $1,600/year (equivalent to $10,428 today), was set to launch in the midst of an outpouring of political self-congratulation and media endorsement, only to be scuttled by a memo from a White House staffer describing the failure of a British UBI experiment 150 years earlier. UBI was, it seemed, a free lunch after all; its fate was thus sealed.

As it turns out, whether the experiment failed or not was lost in a 19th Century fog of cultural belief, so that opponents of the experiment pounced on a bogus report about its impact to justify passing the Poor Law Amendment Act of 1834 — which is what they wanted to do anyway. The new Poor Law was that era’s version of workfare, and was generated by the worst kind of scarcity mentality applied to the worst kind of scarcity. Besides creating the backdrop to Charles Dickens’ writing, the new Poor Law’s philosophical roots still support today’s welfare system:

The new Poor Law introduced perhaps the most heinous form of “public assistance” that the world has ever witnessed. Believing the workhouses to be the only effective remedy against sloth and depravity, the Royal Commission forced the poor into senseless slave labor, from breaking stones to walking on treadmills. . . .

For the whole history lesson, see “The Bizarre Tale Of President Nixon’s Basic Income Plan.”

And so we’re back to asking whether UBI is a free lunch or not. If it is, then it’s an affront to a culture that values self-sufficiency. If it isn’t, then it requires a vastly different cultural value system to support it. The former believes that doing something — “making a living” at a job — is how you earn your daily bread. The latter believes you’re entitled to sustenance if you are something: i.e., a citizen or member of the nation, state, city, or other institution or community providing the UBI. The former is about activity, the latter is about identity. This Wired article captures the distinction:

The idea [of UBI] is not exactly new—Thomas Paine proposed a form of basic income back in 1797—but in this country, aside from Social Security and Medicare, most government payouts are based on individual need rather than simply citizenship.

UBI is about “simply citizenship.” It requires a cultural belief that everybody in the group shares its prosperity. Cultural identity alone ensures basic sustenance — it’s a right, and that right makes Poor Laws and workfare obsolete.

The notion of cultural identity invites comparison between UBI and the “casino money” some Native American tribes pay their members. How’s that working? We’ll look at that next time.


[1] Yes, Milton Friedman did in fact say it, although he wasn’t the only one. And in a surprising twist, he has been criticized for advocating his own version of UBI.

 


Old Dog, Old Trick, New Showtime

Blockchain consultant and futurist Michael Spencer called it a conspiracy by the 0.01 percenters to enslave the rest of us for good.[1] A growing number of those 0.01 percenters have already supported it, but they’re not alone: this poll conducted shortly after the 2016 election showed that half of Americans supported it as well. A parade of think tanks (here’s one) and other professional skeptics (more than I can cite with hyperlinks in a single sentence) have given it a thorough vetting and mostly concluded yeah well maybe it’s worth a try.

What is “it”? This idea: give the poor what they lack — money. Ensure everyone a livable income while getting rid of the expensive and draconian welfare system. And just to be fair, go ahead and give everyone else money, too, even the billionaires.

The idea mostly goes by the name “universal basic income” (UBI). It’s rooted in the futuristic fear that technology will eventually put humans out of work. That’s not a new fear: UBI is “far from a new idea,” says Martin Ford, another Silicon Valley entrepreneur and a popular TED talker, in his New York Times Bestselling Rise of the Robots: Technology and the Threat of a Jobless Future.

In the context of the contemporary American political landscape . . . a guaranteed income is likely to be disparaged as “socialism” and a massive expansion of the welfare state. The idea’s historical origins, however, suggest something quite different. While a basic income has been embraced by economists and intellectuals on both sides of the political spectrum, the idea has been advocated especially forcefully by conservatives and libertarians.

Friedrich Hayek, who has become an iconic figure among today’s conservatives, was a strong proponent of the idea. In his three-volume work, Law, Legislation and Liberty, published between 1973 and 1979, Hayek suggested that a guaranteed income would be a legitimate government policy designed to provide against adversity, and that the need for this type of safety net is the direct result of the transition to a more open and mobile society where many individuals can no longer rely on traditional support systems:

There is, however, yet another class of common risks with regard to which the need for government action has until recently not been generally admitted. . . . The problem here is chiefly the fate of those who for various reasons cannot make their living in the market . . . that is, all people suffering from adverse conditions which may affect anyone and against which most individuals cannot alone make adequate protection but in which a society that has reached a certain level of wealth can afford to provide for all.

The possibility of massive technological unemployment was already foreseen back in the ’60s, when an “Ad Hoc Committee on the Triple Revolution” convened to study the topic and report to LBJ. The Committee included future joint Nobel laureates Friedrich Hayek and Swedish economist and sociologist Gunnar Myrdal.[2] Rise of the Robots describes the Committee’s findings:

“Cybernation” (or automation) would soon result in an economy where “potentially unlimited output can be achieved by systems of machines which will require little cooperation from human beings.” The result would be massive unemployment, soaring inequality, and, ultimately, falling demand for goods and services as consumers increasingly lacked the purchasing power necessary to continue driving economic growth.

The Ad Hoc Committee went on to propose a radical solution: the eventual implementation of a guaranteed minimum income made possible by the “economy of abundance” such widespread automation would create, and which would “take the place of the patchwork of welfare measures” that were then in place to address poverty.

The Triple Revolution report was released to the media and sent to President Johnson, the secretary of labor, and congressional leaders in March 1964. An accompanying cover letter warned ominously that if something akin to the report’s proposed solutions was not implemented, “the nation will be thrown into unprecedented economic and social disorder.” A front-page story with extensive quotations from the report appeared in the next day’s New York Times, and numerous other newspapers and magazines ran stories and editorials (most of which were critical), in some cases even printing the entire text of the report.

The Triple Revolution marked what was perhaps the crest of a wave of worry about the impact of automation that had arisen following World War II. The specter of mass joblessness as machines displaced workers had incited fear many times in the past — going all the way back to Britain’s Luddite uprising in 1812 — but in the 1950s and ’60s, the concern was especially acute and was articulated by some of the United States’ most prominent and intellectually capable individuals.

Four months after the Johnson administration received the Triple Revolution report, the president signed a bill creating the National Commission on Technology, Automation, and Economic Progress. In his remarks at the bill’s signing ceremony, Johnson said that “automation can be the ally of our prosperity if we will just look ahead, if we will understand what is to come, and if we will set our course wisely after proper planning for the future.” The newly formed Commission then . . . quickly faded into obscurity.

A few years later, Richard Nixon introduced UBI legislation that he called “the most significant piece of social legislation in our nation’s history.” That legislation also faded into obscurity — more on that another time.

Thus, UBI is an old idea responding to an old fear: how do we make a living if we can’t work for it? A half century after LBJ and Nixon, that fear is all too real, and lots of people think it might be time for the historical UBI solution to make its appearance.

But not everyone is jumping on the UBI bandwagon. The very thought that jobs might not be the source of our sustenance is the rallying cry of UBI’s most strident opponents.

More on UBI next time.


[1] Spencer followed with a similarly scathing assessment in this article.

[2] Myrdal’s study of race relations was influential in Brown v. Board of Education. He was also an architect of the Swedish social democratic welfare state. Hayek and Myrdal were jointly awarded the Nobel Prize in Economics in 1974.

Fireflies and Algorithms

We’ve been looking at workfare — the legislated link between jobs and the social safety net. An article published last week — “Fireflies And Algorithms — The Coming Explosion Of Companies”[1] — brought the specter of workfare to the legal profession.

Reading it, my life flashed before my eyes, beginning with one particular memory: me, a newly-hired associate, resplendent in my three-piece gray pinstripe suit, joining the 4:30 queue at the Secretary of State’s office, clutching hot-off-the-word-processor Articles of Incorporation and a firm check for the filing fee, fretting whether I’d get my copy time-stamped by closing time. We always had to file today, for reasons I don’t remember.

Entity choice and creation spanned transactional practice: corporate, securities, mergers and acquisitions, franchising, tax, intellectual property, real property, commercial leasing… The practice enjoyed its glory days when LLCs were invented, and when a raft of new entity hybrids followed… well, that was an embarrassment of riches.

It was a big deal to set up a new entity and get it just right — make sure the correct ABC acquired the correct XYZ, draw the whole thing up in x’s and o’s, and finance it with somebody else’s money. To do all that required strategic alliances with brokers, planners, agents, promoters, accountants, investment bankers, financiers… Important people initiated the process, and there was a sense of substantiality and permanence about it, with overtones of mahogany and leather, brandy and cigars. These were entities that would create and engage whole communities of real people doing real jobs to deliver real goods and services to real consumers. Dissolving an entity was an equally big deal, requiring somber evaluation and critical reluctance, not to mention more time-stamped paperwork.

“Fireflies and Algorithms” sweeps it all away — whoosh! just like that! — and describes its replacement: an inhuman world of here-and-gone entities created and dissolved without the intent of all those important people or all that help from all those people in the law and allied businesses. (How many jobs are we talking about, I wonder — tens, maybe hundreds of thousands?) The new entities will do to choice-of-entity practice what automated trading did to the stock market, as described in this UCLA Law Review article:

Modern finance is becoming an industry in which the main players are no longer entirely human. Instead, the key players are now cyborgs: part machine, part human. Modern finance is transforming into what this Article calls cyborg finance.

In that “cyborg finance” world,

[The “enhanced velocity” of automated, algorithmic trading] has shortened the timeline of finance from days to hours, to minutes, to seconds, to nanoseconds. The accelerated velocity means not only faster trade executions but also faster investment turnovers. “At the end of World War II, the average holding period for a stock was four years. By 2000, it was eight months. By 2008, it was two months. And by 2011 it was twenty-two seconds.”

“Fireflies and Algorithms” says the business entity world is in for the same dynamic, and therefore we can expect:

[W]hat we’re calling ‘firefly companies’ — the blink-and-you-miss-it scenario brought about by ultra-short-life companies, combined with registers that remove records once a company has been dissolved, meaning that effectively they are invisible.

Firefly companies are formed by algorithms, not by human initiative. Each is created for a single transaction — one contract, one sale, one span of ownership. They’re peer-reviewed, digitally secure, self-executing, self-policing, and trans-jurisdictional — all for free or minimal cost. And all of that is memorialized not in Secretary of State or SEC filings but on a blockchain.

“So what does all this mean?” the article asks:

How do we make sense of a world where companies — which are, remember, artificial legal constructs created out of thin air to have legal personality — can come into existence for brief periods of time, like fireflies in the night, perform or collaborate on an act, and then disappear? Where there are perhaps not 300 million companies, but 1 billion, or 10 billion?

Think about it. And then — if it hasn’t happened yet — watch your life flash before your eyes.

Or if not your life, at least your job. Consider, for example, a widely-cited 2013 study that predicted 47% of U.S. jobs could be lost to automation. Even if that prediction is only half true, that’s still a lot of jobs. And consider a recent LawGeex contest, in which artificial intelligence absolutely smoked an elite group of transactional lawyers:

In a landmark study, 20 top US corporate lawyers with decades of experience in corporate law and contract review were pitted against an AI. Their task was to spot issues in five Non-Disclosure Agreements (NDAs), which are a contractual basis for most business deals.

The study, carried out with leading legal academics and experts, saw the LawGeex AI achieve an average 94% accuracy rate, higher than the lawyers who achieved an average rate of 85%. It took the lawyers an average of 92 minutes to complete the NDA issue spotting, compared to 26 seconds for the LawGeex AI. The longest time taken by a lawyer to complete the test was 156 minutes, and the shortest time was 51 minutes.

These developments significantly expand the pool of people potentially needing help through bad times. Currently, that means workfare. But how can you have workfare if technology is wiping out jobs?

More on that next time.


[1] The article was published by OpenCorporates, which according to its website is “the world’s largest open database of the corporate world and winner of the Open Data Business Award.”

 


The Success Delusion

How did the social safety net turn into a poverty trap? It was a victim of the success of the job as an economic force.

Psychologists call it “the success delusion.” You do something and get a result you like, so you keep doing it, expecting more of the same. It keeps working until one day it doesn’t. Do you try something new? No, you double down — it worked before, surely it will work again. You keep doubling down until you’ve made a mess.

You’re a victim of your own success. If you could listen, hindsight would tell you that there was more to it than what you were doing, that a lot of what happened was you being in the right place at the right time. You might believe that or not, but what matters now is that the times have changed and you didn’t.

That’s what happened to social welfare. Forty years of post-WWII economic success positioned the steady job as the cornerstone of economic prosperity and upward mobility. Then, in the 80s and 90s, about the time the job was starting to lose its economic vitality, policy-makers doubled down on it: work had raised the welfare of the whole world since the days of the telegraph and railroad, and surely it was still the best route out of poverty. So now we had workfare instead of welfare, and, as we saw last time, social welfare became “a system of suspicion and shame.”

Standin’ in line marking time
Waiting for the welfare dime
‘Cause they can’t buy a job
The man in the silk suit hurries by
As he catches the poor old lady’s eyes
Just for fun he says, “Get a job.”

“The Way It Is”
Bruce Hornsby and the Range

Rutger Bregman sums it up this way:

We’re saddled with a welfare state from a bygone era when the breadwinners were still mostly men and people spent their whole lives working at the same company. The pension system and employment protection rules are still keyed to those fortunate to have a steady job, public assistance is rooted in the misconception that we can rely on the economy to generate enough jobs, and welfare benefits are often not a trampoline, but a trap.

Utopia for Realists (2017).

Guy Standing explains it this way:

The period from the nineteenth century to the 1970s saw what Karl Polanyi, in his famous 1944 book, dubbed “The Great Transformation.”

The essence of labourism was that labour rights — more correctly, entitlements — should be provided to those (mostly men) who performed labour and to their spouses and children.

Those in full-time jobs obtained rising real wages, a growing array of “contributory” non-wage benefits, and entitlements to social security for themselves and their family. As workers previously had little security, this was a progressive step.

Labourism promoted the view that the more labour people did, the more privileged they should be, and the less they did the less privileged they should be. The ultimate fetishism was Lenin’s dictate, enshrined in the Soviet constitution, that anybody who did not labour should not eat.

The labourist model frayed in the 1980’s, as labour markets became more flexible and increasing numbers of people moved from job to job and in and out of employment.

To defend labour-based welfare, social democratic governments turned to means testing, targeting benefits on those deemed the deserving poor.

The shift to means testing was fatal. As previous generations of social democrats had understood, benefits designed only for the poor are invariably poor benefits and stand to lose support among the rest of society.

Ironically, it was mainly social democratic parties that shifted policy towards workfare, requiring the unemployed to apply for non-existent or unsuitable jobs, or to do menial, dead-end jobs or phony training courses in return for increasingly meagre benefits.

Today, we are living in a Second Gilded Age — with one significant difference. In the first, which ended in the Great Crash of 1929, inequality grew sharply but wages on average rose as well. The Second Gilded Age has also involved growing inequality, but this time real wages on average have stagnated or fallen. Meanwhile, those relying on state benefits have fallen further behind, many pushed into homelessness, penury and dependence on inadequate private charity.

Since the 1980s, the share of income going to labour has shrunk, globally and in most countries of economic significance. . . . The labour share fell in the USA from 53 per cent in 1970 to 43.5 per cent in 2013. Most dramatically, it slid by over twenty percentage points in China and also dropped steeply in the rising industrial giant of South Korea.

Besides falling wages, there has been an increase in wage differentials and a less-documented decline in the share of people receiving non-wage benefits, such as occupational pensions, paid holidays, sick leave or medical coverage. Thus worker compensation, in terms of “social income,” has fallen by more than revealed by wages alone.

As a consequence of these developments, “in-work poverty” has rocketed. In some OECD countries [the Organisation for Economic Cooperation and Development has 34 industrialized members], including Britain, the USA, Spain and Poland, a majority of those in poverty live in households where at least one person has a job.

The mantra that “work is the best route out of poverty” is simply false.

The Corruption of Capitalism (2017).

Not only are jobs doing a poor job at social welfare — for employed and unemployed alike — but they are themselves an endangered species. More to come…

 

Kevin Rhodes studies and writes about economics in an effort to understand the world his kids are growing up in, which is also the world he’s growing old in. You might enjoy his latest LinkedIn Pulse article “The Fame Monster: Rockstars And Rockstar Entrepreneurs.”

Poverty Gets Personal

“In the sixties we waged a war on poverty and poverty won.” – Ronald Reagan

“Poverty is a ‘personality defect.’” – Margaret Thatcher

The Gipper was referring to LBJ and his Great Society, but he got it wrong: the Great Society failed to eliminate poverty because it never attacked poverty itself. Instead it took a more politically acceptable path focused on education and community involvement — not bad things, but there’s a difference. As for the Iron Lady, there’s actually some truth in what she said (we’ll look at that in a moment), but I suspect not in the way she meant it. She was more likely voicing the common attitude that the poor are intellectually impaired, morally flawed, prone to bad lifestyle choices, and criminally inclined, and therefore worthy of only the most grudging kind of help. That attitude, plus the Great Society’s reputed loss[1] in its War on Poverty, explains a lot about today’s prevailing approach to poverty relief.

Rutger Bregman tackles this tough subject in his book Utopia for Realists: And How We Can Get There (2017):

A world without poverty— it might be the oldest utopia around. But anybody who takes this dream seriously must inevitably face a few tough questions. Why are the poor more likely to commit crimes? Why are they more prone to obesity? Why do they use more alcohol and drugs? In short, why do the poor make so many dumb decisions?

He continues with more tough questions:

What if the poor aren’t actually able to help themselves? What if all the incentives, all the information and education are like water off a duck’s back? And what if all those well-meant nudges [toward self-help and away from government assistance] only make the situation worse?

He then profiles the work of Eldar Shafir, a psychologist at Princeton, and Sendhil Mullainathan, an economist at Harvard, who formulated a theory of poverty based on the concept of “scarcity mentality.” Their research shows that the chronic poor are really good at scrambling after short-term solutions, but tend to be inept at sustainable long-term thinking. It’s a matter of mental bandwidth: today’s urgency gets all the attention, leaving other matters to go begging (sometimes literally). In fact, their research estimates that poverty costs a person about 13-14 IQ points. In other words, living in a chronic state of poverty can eventually rewire the human brain to the point where clear thinking and prudent behavior are compromised.

Hence the grain of truth in Margaret Thatcher’s comment.

One problem with that attitude, though, is that it uses the terms “poor” and “poverty” interchangeably. But not everyone who’s poor is also impoverished. At the simplest level, the poor are poor because they lack money. But poverty goes further: it’s a chronic condition that generates a specific outlook and way of approaching life. When that condition is shared, it becomes a culture. You know it when you’re around poverty; you might not know it when you’re around the poor.

Government assistance programs don’t make that distinction. As a result, as Bregman states, social welfare has “devolved into a behemoth of control and humiliation.”

An army of social services workers is needed to guide people through the jungle of eligibility, application, approval, and recapture procedures. . . . The welfare state, which should foster people’s sense of security and pride, has degenerated into a system of suspicion and shame.

Is it really that bad? Try applying for food stamps sometime.

Our bank account was thin after a business failure and some health issues. Following the advice of family, my wife applied for food stamps. Her experience was everything Bregman describes. Case in point: after two mandatory daylong job search classes (how to write a resume, set up a LinkedIn page, use the internet to check out online job postings…), she had to prove her willingness to work by reporting for 8 hours per week of wall-washing duty at a church community center. She washed the same walls every week — the same walls that other people were also washing every week — the cleanest walls in Denver. Washing walls — pointlessly, needlessly, endlessly — to prove you’re not a slacker.

Help with the grocery bill was bittersweet for a couple months, then we opted out. It’s easy to intellectualize and debate about “all the information and education” and “the jungle of eligibility, application, approval, and recapture procedures.” It’s not so easy when they get personal. We were poor but not impoverished, and the system was just too demoralizing to continue. Maybe that was the point.

Plus, earning money reduces or eliminates benefits — a result which economist Guy Standing calculates is equivalent to the imposition of an 80% tax. The quandary is obvious: earn money or opt out of the system — either way, you pay the tax. Most people — even the cognitively impaired — wouldn’t agree to a deal like that.
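Standing’s 80% figure is just benefit-withdrawal arithmetic. Here is a minimal sketch of that arithmetic in Python, with invented numbers (the function names, the 80-cent withdrawal rate as a default, and the $500 benefit are mine for illustration, not an actual benefit schedule):

```python
# Toy model: if every extra dollar earned claws back 80 cents of
# benefits, the effective marginal "tax" on new earnings is 80%.

def net_gain(earned, withdrawal_rate=0.80, benefit=500.0):
    """Dollars actually kept from `earned` dollars of new income,
    after benefits are withdrawn at `withdrawal_rate` (capped at
    the total benefit)."""
    clawback = min(benefit, earned * withdrawal_rate)
    return earned - clawback

def effective_tax_rate(earned, **kwargs):
    """Share of new earnings lost to benefit withdrawal."""
    return 1 - net_gain(earned, **kwargs) / earned

print(effective_tax_rate(100))  # a $100 raise nets only $20
```

No income tax bracket comes close to that rate for low earners, which is the point: the withdrawal schedule itself behaves like a steep tax.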

How did “Brother, can you spare a dime?” turn into this? Curiously, the current welfare system derived from the same post-WWII economic surge that rewarded working people. We’ll look at how that happened next week.

This week’s post uses portions of a LinkedIn Pulse article I wrote last year about poverty, crime, and homelessness. Next week’s post will also tap that source. You might like to jump ahead and read the article: Why Don’t We Just Solve Some Problems For a Change?


[1] Not everyone agrees that we lost the War on Poverty. See this article that considers both sides.

 


Archeconomics

I made up the term “archeconomics.” I’m using “arch” in the sense of “first principles” — e.g., as in “archetype.” An “arch” is the larger version of the smaller expressions of itself — e.g., not just a villain but an arch-villain, not just an angel but an archangel. Life goes big when an arch-something is at work: experience expands beyond circumstance, meaning magnifies, significance is exaggerated.

Archeconomics is therefore the larger story behind economics.

I ended last week’s post by referring to the larger story behind the rentier economy. As usually happens when I’m on a research trail, several commentaries have appeared in my various feeds lately that look beyond the usual opinionated mash of current events and instead address over-arching ideas and issues. All of them deal in one way or another with the current status and possible future of the liberal worldview — an arch-topic if there ever was one.

The term “liberal” in this context doesn’t refer to political liberal vs. conservative, but rather to historical liberalism, which among other things gave us post-WWII neo-liberal economics. Mega-bestselling author Yuval Noah Harari describes this kind of liberalism in his latest book 21 Lessons for the 21st Century:

In Western political discourse the term “liberal” is sometimes used today in a much narrower sense, to denote those who support specific causes such as gay marriage, gun control, and abortion rights. Yet most so-called conservatives also embrace the broad liberal worldview.

The liberal story cherishes human liberty as its number one value. It argues that all authority ultimately stems from the free will of individual humans, as expressed in their feelings, desires, and choices. In politics, liberalism believes that the voter knows best. It therefore upholds democratic elections. In economics, liberalism maintains that the customer is always right. It therefore hails free-market principles. In personal matters, liberalism encourages people to listen to themselves, be true to themselves, and follow their hearts — as long as they do not infringe on the liberties of others. This personal freedom is enshrined in human rights.

If you read Harari’s books Sapiens and Homo Deus, you have a sense of what you’ll find in 21 Lessons, but I found it worth reading on its own terms. Two recent special magazine editions also take on the fate of liberalism: “Is Democracy Dying?” from The Atlantic and “A Manifesto for Renewing Liberalism” from The Economist. The titles speak for themselves, and both are offered by publications with nearly two centuries of liberal editorial perspectives.

Another historical liberal offering from a conservative political point of view is “How Trumpism Will Outlast Trump,” from Time Magazine. Here’s the article’s précis:

These intellectuals are committed to a new economic nationalism . . . They’re looking past Trump . . . to assert a fundamental truth: whatever you think of him, Donald Trump has shown a major failing in the way America’s political parties have been serving their constituents. The future of Trump’s revolution may depend on whether this young group can help fix the economy.

Finally, here’s a trio of offerings that invoke environmental economics — the impact of the global ecology on global economics being another archeconomics topic. The first is a scientific study published last week that predicted significant environmental degradation within a surprisingly short time. Second is an article about the study that wants to know “Why We Keep Ignoring Even the Most Dire Climate Change Warnings.” Third is last week’s announcement that the winner of this year’s Nobel Prize in Economics is an environmental economist.

Some or all of those titles should satisfy if you’re in the mood for some arch-reading.

Next time, we’ll return to plain old economics, with a look at how the low-income social stratum is faring in all the dust-up over rentiers and economic inequality, robotics and machine learning, and the sagging paycheck going to human labor.

 


The Rentier Economy: A Primer (Part 2)

My plan for this week’s post was to present further data about the extent of the rentier economy and then provide a digest of articles for further reading.

Turns out that wasn’t so easy. The data is there, but it’s mostly buried in categories like corporate capitalization, profits, and market concentration, and extracting it into blog-post-sized nuggets proved harder than expected.

Further, the data was generally only footnoted in a maelstrom of worldwide commentary. Economists and journalists treated it as a given, barely worthy of note, and were much more interested in revealing, analyzing, and debating what it means. The resulting discourse spans the globe — north to south, east to west, and all around the middle — and there is widespread agreement on the basics:

  • Economic thinking has traditionally focused on income from profits generated from the sale of goods and services produced by human labor. In this model, as profits rise, so do wages.
  • Beginning in the 1980s, globalization began moving production offshore in search of cheap labor.
  • Since the turn of the millennium, artificial intelligence and robotics have eliminated jobs in the developed world at a pace slowed only by the comparative costs of technology vs. human labor.
  • As a result, lower per unit costs of production have generated soaring profits while wages have stagnated in the developed world. I.e., the link between higher profits and higher wages no longer holds.

Let’s pause for a moment, because that point is huge. Erik Brynjolfsson, director of the MIT Center for Digital Business, and Andrew McAfee, principal research scientist at MIT, wrote about it in their widely cited book The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies (2014). The following is from a chapter-by-chapter digest written by an all-star cast of economists:

Perhaps the most damning piece of evidence, according to Brynjolfsson, is a chart that only an economist could love. In economics, productivity—the amount of economic value created for a given unit of input, such as an hour of labor—is a crucial indicator of growth and wealth creation. It is a measure of progress.

On the chart Brynjolfsson likes to show, separate lines represent productivity and total employment in the United States. For years after World War II, the two lines closely tracked each other, with increases in jobs corresponding to increases in productivity. The pattern is clear: as businesses generated more value from their workers, the country as a whole became richer, which fueled more economic activity and created even more jobs. Then, beginning in 2000, the lines diverge; productivity continues to rise robustly, but employment suddenly wilts. By 2011, a significant gap appears between the two lines, showing economic growth with no parallel increase in job creation. Brynjolfsson and McAfee call it the “great decoupling.” And Brynjolfsson says he is confident that technology is behind both the healthy growth in productivity and the weak growth in jobs.

Okay, point made. Let’s move on to the rest of the rentier story:

  • These trends have been under way for the past four decades, but they accelerated after the 2007–2009 recession. The result has been a shift to a new kind of job market characterized by part-time, on-demand, contractual freelance positions that pay less and don’t offer fringe benefits. Those who still hold conventional jobs with salaries and benefits are a dying breed, and probably don’t even realize it.
  • As production has shifted away from wage earners, profits have soared, resulting in a surplus of corporate cash. Low labor costs and technology have created a boom in corporate investment in patents and other rentable IP assets.
  • Rent-seeking behavior has been increasingly supported by government policy — such as the “regressive regulation” and other “legalized monopoly” dynamics we’ve been looking at in the past few weeks.
  • The combination of long-term wage stagnation and spiraling rentier profits has driven economic inequality to levels rivaled only by pre-revolutionary France, the Gilded Age of the Robber Barons, and the Roaring 20s.
  • Further, because the rentier economy depends on government policy, it is particularly susceptible to plutocracies, oligarchies, “crony-capitalism,” and other forms of corruption, leading to public mistrust in big business, government, and the social/economic elite.
  • These developments have put globalization on the defensive, resulting in reactionary politics such as populism, nationalism, authoritarianism, and trade protectionism.

As you see, my attempt to put some numbers to the terms “rent” and “rentier” led me straight into some neighborhoods I’ve been trying to stay out of in this series. Finding myself there reminded me of my first encounter with the rentier economy nine years ago, when of course I had no idea that’s what I’d run into. I was at a conference of entrepreneurs, writers, consultants, life coaches, and other optimistic types. We started by introducing ourselves from the microphone at the front of the room. Success story followed success story, then one guy blew up the room by telling how, back in the earliest days of the internet, he and Starbucks’ Howard Schultz spent $250K buying up domain names for the biggest corporations and brand names. Last year, he said, he made $76 million from selling or renting them back.

He was a rentier, and I was in the wrong room. When it was my turn at the mic, I opened my mouth and nothing came out. Welcome to the real world, my idealistic friend.

As it turns out, following the rentier pathway eventually leads us all the way through the opinionated commentary and current headlines to a much bigger worldwide issue. We’ll go there next time.

 


The Rentier Economy — Primer Part 1

As we saw last week, the original Monopoly game — then known as The Landlord’s Game — offered a choice of two different games, one played under “Prosperity” rules and the other under “Monopoly” rules. The post-WWII economic surge was a real-life Prosperity game: it generated a rising tide of economic benefit that floated all boats across all social classes. The surge peaked in the 1970’s, and since then the Monopoly rules have increasingly asserted themselves, resulting in, among other things, stagnant employee compensation (except for the top 10%) and rising returns to capital owners — the lion’s share paid in the form of rents. The latter reflects the rise of a “rentier economy.”

First, we need to define “rent”:

Economists use the term ‘rent’ in a special way. For them, rent refers . . . to the excess payment made to any factor of production (land, labor, or capital) due to scarcity.

The scarcity factor that gives rise to rents can be natural, as with the case of land.

But rents can also arise from artificial scarcity — in particular, government policies that confer special advantages on favored market participants.

The Captured Economy: How the Powerful Enrich Themselves, Slow Down Growth, and Increase Inequality, Brink Lindsey and Steven Teles (2017).

And “rentier”:

A rentier is someone who gains income from possession of assets, rather than from labour. A rentier corporation is a firm that gains much of its revenue from rental income rather than from production of goods and services, notably from financial assets or intellectual property. A rentier state has institutions and policies that favour the interests of rentiers. A rentier economy is one that receives a large share of income in the form of rent.

The Corruption of Capitalism: Why Rentiers Thrive and Work Does Not Pay, Guy Standing (2016).
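The two definitions above can be put in rough arithmetic. This is a toy illustration with invented numbers (the taxi-medallion scenario and both figures are hypothetical, drawn from neither book):

```python
# Economic rent, as the economists' definitions above use the term:
# the payment to a factor over and above what a competitive market
# would pay for the same factor.

def economic_rent(actual_payment, competitive_payment):
    """Excess payment attributable to scarcity, natural or artificial."""
    return max(0.0, actual_payment - competitive_payment)

# Hypothetical: a license-capped taxi medallion earns $90k/yr where
# open competition would earn $50k/yr; $40k of that income is rent,
# created by the artificial scarcity of the licensing cap.
print(economic_rent(90_000, 50_000))
```

A rentier, in these terms, is simply whoever collects that excess; a rentier economy is one where such excess is a large share of total income.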

Economists didn’t see the rentier economy coming. They especially didn’t foresee how government policy would create it. The following is from The Corruption of Capitalism:

John Maynard Keynes, the most influential economist of the mid-twentieth century, famously dismissed the rentier as the ‘functionless investor’ who gained income solely from ownership of capital, exploiting its ‘scarcity value.’ He concluded in his epochal General Theory that, as capitalism spread, it would mean the “euthanasia of the rentier,” and, consequently, the euthanasia of the cumulative oppressive power of the capitalist to exploit the scarcity value of capital:

“Whilst there may be intrinsic reasons for the scarcity of land, there are no intrinsic reasons for the scarcity of capital. . . . I see, therefore, the rentier aspect of capitalism as a transitional phase which will disappear when it has done its work.”

Keynes was mistaken because he did not foresee how the neoliberal framework built since the 1980’s would allow individuals and firms to generate ‘contrived scarcity’ of assets from which to gain rental income. Nor did he foresee how the modern ‘competitiveness’ agenda would give asset owners power to extract rental subsidies from the state.

Eighty years later, the rentier is anything but dead; rentiers have become the main beneficiaries of capitalism’s emerging income distribution system.

The old income distribution system that tied income to jobs has disintegrated.

And this is from The Captured Economy:

The last few decades have been a perplexing time in American economic life. Following a temporary spike during the Internet boom of the 1990’s, rates of economic growth have been exceptionally sluggish. At the same time, incomes at the very top have exploded while those further down have stagnated.

As a technical matter, rent is a morally neutral concept. . . . Nevertheless, the term ‘rent’ is most commonly used in a moralized sense to refer specifically to bad rents. In particular, the expression ‘rent-seeking’ refers to business activity that seeks to increase profits without creating anything of value through distortions to market processes, such as constraints on the entry of new firms.

Those advantages can also take the form of subsidies or rules that impose extra burdens on both existing and potential competitors. The rents enjoyed through government favoritism not only misallocate resources in the short term but they also discourage dynamism and growth over the long term. Their existence encourages an ongoing negative-sum scramble for more favors instead of innovation and the diffusion of good ideas.

Economists have had an explanation for the latter trend, which is that returns to skill have increased dramatically, largely because of globalization and information technology. There is clearly something to this explanation, but why should the more efficient operation of markets be accompanied by a decline in economic growth?

Our answer is that increasing returns to skill and other market-based drivers of rising inequality are only part of the story. Yes, in some ways the US economy has certainly grown more open to the free play of market forces during the course of the past few decades. But in other ways, economic returns are now determined much more by success in the political arena and less by the forces of market competition. By suppressing and distorting markets, the proliferation of regulatory rents has also led to less wealth for everyone.

To be continued.

 


The Landlord’s Game

“Buy land – they aren’t making it anymore.”
Attributed to Mark Twain

You know how Monopoly games never end? A group of academicians wanted to know why. Here’s an article about them, and here’s their write-up. Their conclusion? Statistically, a game of Monopoly played casually (without strategy) could in fact go on forever.
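The researchers’ “could go on forever” conclusion is easy to get a feel for with a toy Monte Carlo sketch. This is my own crude model, not their mathematics: as long as the bank’s $200-for-passing-Go inflow outpaces the random rents players swap with each other, bankruptcy almost never arrives.

```python
import random

def casual_game(turn_cap=2_000, seed=0):
    """Crude stand-in for strategy-free Monopoly: two players start
    with $1,500, swap random 'rents' of $0-$100 each turn (zero-sum
    between them), and collect $200 from the bank whenever they pass
    Go on a 40-space board. Returns the round a player went broke,
    or None if the game survives `turn_cap` rounds."""
    rng = random.Random(seed)
    cash, pos = [1500, 1500], [0, 0]
    for rnd in range(turn_cap):
        for p in (0, 1):
            roll = rng.randint(2, 12)      # two dice
            if pos[p] + roll >= 40:        # passed Go: bank pays $200
                cash[p] += 200
            pos[p] = (pos[p] + roll) % 40
            rent = rng.randint(0, 100)     # random rent to the other player
            cash[p] -= rent
            cash[1 - p] += rent
            if cash[p] < 0:
                return rnd
    return None

# Each player nets roughly +$36 per round from the bank while the
# rents cancel out on average, so casual games routinely outlast
# any reasonable turn cap.
print(all(casual_game(seed=s) is None for s in range(5)))
```

Real Monopoly ends because players buy property and build houses, concentrating the rent flow in one direction; take the strategy away and the positive cash drift keeps everyone afloat indefinitely.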

I once played a game that actually ended. I had a strategy: buy everything you land on, build houses and hotels as fast as possible, and always mortgage everything to the hilt to finance acquisition and expansion. I got down to my last five dollars before I bankrupted everybody else. It only took a couple hours. Okay, so the other players were my kids. Some example I am. Whatever economic lessons we might have gained from the experience, they certainly weren’t what the game’s creator had in mind.

While Andrew Carnegie and friends were getting rich building American infrastructure, industry, and institutions, American society was experiencing a clash between the new rich and those still living in poverty. In 1879, economist Henry George proposed a resolution in his book Progress and Poverty: An Inquiry into the Cause of Industrial Depressions and of Increase of Want with Increase of Wealth: The Remedy.

Travelling around America in the 1870s, George had witnessed persistent destitution amid growing wealth, and he believed it was largely the inequity of land ownership that bound these two forces — poverty and progress — together. So instead of following Twain by encouraging his fellow citizens to buy land, he called on the state to tax it. On what grounds? Because much of land’s value comes not from what is built on the plot but from nature’s gift of water or minerals that might lie beneath its surface, or from the communally created value of its surroundings: nearby roads and railways; a thriving economy, a safe neighborhood; good local schools and hospitals. And he argued that the tax receipts should be invested on behalf of all.

From “Monopoly Was Invented To Demonstrate The Evils Of Capitalism,” by new economist Kate Raworth.[1]

George’s book eventually reached the hands of Elizabeth Magie, the daughter of newspaperman James Magie and a social change rabble-rouser in her own right. Influenced by her father’s politics and Henry George’s vision, she created The Landlord’s Game in 1904 and gave it two sets of rules, intending for it to be an economic learning experience. Again quoting from Ms. Raworth’s article:

Under the ‘Prosperity’ set of rules, every player gained each time someone acquired a new property (designed to reflect George’s policy of taxing the value of land), and the game was won (by all!) when the player who had started out with the least money had doubled it. Under the ‘Monopolist’ set of rules, in contrast, players got ahead by acquiring properties and collecting rent from all those who were unfortunate enough to land there — and whoever managed to bankrupt the rest emerged as the sole winner (sound a little familiar?).

The purpose of the dual sets of rules, said Magie, was for players to experience a ‘practical demonstration of the present system of land grabbing with all its usual outcomes and consequences’ and hence to understand how different approaches to property ownership can lead to vastly different social outcomes.

The game was soon a hit among Left-wing intellectuals, on college campuses including the Wharton School, Harvard and Columbia, and also among Quaker communities, some of which modified the rules and redrew the board with street names from Atlantic City. Among the players of this Quaker adaptation was an unemployed man called Charles Darrow, who later sold such a modified version to the games company Parker Brothers as his own.

Once the game’s true origins came to light, Parker Brothers bought up Magie’s patent, but then re-launched the board game simply as Monopoly, and provided the eager public with just one set of rules: those that celebrate the triumph of one over all. Worse, they marketed it along with the claim that the game’s inventor was Darrow, who they said had dreamed it up in the 1930s, sold it to Parker Brothers, and become a millionaire. It was a rags-to-riches fabrication that ironically exemplified Monopoly’s implicit values: chase wealth and crush your opponents if you want to come out on top.
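Magie’s two rule sets can be sketched as win-condition checks. This is my own toy rendering for illustration, not her actual rules, and the player names and cash figures are invented:

```python
# 'Monopolist' vs. 'Prosperity' win conditions from Magie's dual rules.

def monopolist_winner(cash):
    """'Monopolist' rules: the sole winner is the last solvent player,
    or None while more than one player remains solvent."""
    solvent = [player for player, c in cash.items() if c > 0]
    return solvent[0] if len(solvent) == 1 else None

def prosperity_won(cash, starting_cash):
    """'Prosperity' rules: everyone wins once the player who started
    with the least money has doubled it."""
    poorest = min(starting_cash, key=starting_cash.get)
    return cash[poorest] >= 2 * starting_cash[poorest]

print(monopolist_winner({"Eliza": 120, "Charles": 0}))  # Eliza
print(prosperity_won({"Eliza": 450, "Charles": 900},
                     {"Eliza": 200, "Charles": 800}))   # True
```

The contrast is the whole lesson: one condition tests whether anyone else is left standing, the other tests whether the worst-off player has gained.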

“Chase wealth and crush your opponents” — that was my winning Monopoly strategy. It requires a shift away from the labor economy — selling things workers make or services they provide — to the rentier economy — owning assets you can charge other people to access and use. The scarcer the assets, the more you can charge. Scarcity can be natural, as is the case with land, or it can be artificial, the result of the kind of “regressive regulation” we looked at last time, that limits access to capital markets, protects intellectual property, bars entry to the professions, and concentrates high-end land development through zoning and land use restrictions.

Artificial scarcity can also be the result of cultural belief systems — such as those that underlie the kind of stuff that shows up in your LinkedIn and Facebook feeds: “7 Ways to Get Rich in Rental Real Estate” or “How to Create a Passive Income From Book Sales and Webinars.” In fact, it seems our brains are so habitually immersed in Monopoly thinking that proposals such as Henry George’s land tax — or its current equivalents such as superstar economist Thomas Piketty’s wealth tax, Harvard law and ethics professor Lawrence Lessig’s notion of a creative commons, or the widely studied and broadly endorsed idea of a “universal basic income” — are generally tossed off as hopelessly idealistic and out of touch.

More to come.


[1] Kate Raworth holds positions at both Oxford and Cambridge. We previously looked at her book Doughnut Economics: Seven Ways to Think Like a 21st-Century Economist (2017).

 


Monopoly: The Ultimate in Upward Mobility

The Horatio Alger rags-to-riches ideal was born in the Gilded Age of the Robber Barons. A century and a half later, it remains an enduring icon of the American Dream and still makes for inspiring stump speeches.

If only it were true.

Truth is, something more powerful than pluck fueled the Robber Barons, and continues to fuel today’s Meritocrats and Robber Nerds. Yes, things like ingenuity, vision, determination, and hard work have had a lot to do with it, both historically and currently, but the essential element for creating mega-companies (sometimes whole new industries) and staggering personal wealth has been none other than government policy, which by definition favors selected economic activities over others.

A trio of distinguished economics and political science professors[1] provide one of the more provocative summaries of this economic reality in their book Violence and Social Orders: A Conceptual Framework for Interpreting Recorded Human History (2009). Harvard psychologist Steven Pinker described it this way:[2]

The economists Douglass North, John Wallis, and Barry Weingast argue that the most natural way for states to function, both in history and in many parts of the world today, is for elites to agree not to plunder and kill each other, in exchange for which they are awarded a fief, franchise, charter, monopoly, turf, or patronage network that allows them to control some sector of the economy and live off the rents (in the economist’s sense of income extracted from exclusive access to a resource).

This practice is sometimes called the “Medici Cycle,” after the famous Florentines:

In Towards a Political Theory of the Firm, [Luigi Zingales of the University of Chicago Booth School of Business] theorizes that firms use their economic power to acquire political power. They then apply that political power to achieve greater economic gains, which in turn helps them acquire ever more political power. It’s a cycle Zingales likens to the Medici dynasty of 15th-century Florence, Italy. The Medicis leveraged their lending relationships with the Roman Catholic Church into considerable political influence in Renaissance Europe.[3]

As an example, consider how Andrew Carnegie made his money:

The competitive strategy of the steelmakers in 1875 was simple: Collude and fix prices. . . . Carnegie was invited to join the newly formed Bessemer Steel Association. The association was a cartel, and in the days before antitrust laws, completely legal. Rather than compete tooth and nail for every bit of railroad business, it made far more sense for the steelmakers to establish quotas to limit the total supply in the market. By agreement, each firm was to produce its quota and sell into the market at agreed-upon prices.[4]

The upside is that Medici Cycle government policies have supported all kinds of timely innovations and inventions, social and cultural trends, and quality of life improvements. The downside is what happens when monopolistic practices are allowed to go unchecked for too long. While researching this article, I came across several recent expressions of concern that this is happening on many levels in the current U.S. economy:

1) In their book The Captured Economy: How the Powerful Enrich Themselves, Slow Down Growth, and Increase Inequality (2017), Brink Lindsey and Steven M. Teles[5] describe their concern with “regressive regulation” — monopoly-perpetuating policies — especially these four types:

(1) subsidies for financial institutions that lead to too much risk-taking in both borrowing and lending;

(2) excessive monopoly privileges granted under copyright and patent law;

(3) the protection of incumbent service providers under occupational licensing; and

(4) artificial housing scarcity created by land-use regulations.

2) Nobel Laureate Joseph Stiglitz and the Roosevelt Institute issued a 2015 report that lists numerous government policies that support or deter monopoly. You can download the full report here or read a Business Insider article published earlier this month that serves as a sort of executive summary of the report and brings it up to date: Nobel Prize-Winning Economist Joseph Stiglitz Says The US Has A Major Monopoly Problem.

3) This recent article from the Institute for New Economic Thinking describes the related problem of “monopsony”:

Center stage at the Federal Reserve Bank of Kansas City’s annual symposium in Jackson Hole, Wyoming this August was a discussion of the repercussions of having a small number of companies dominating the labor markets where they hire workers — what economists call “monopsony.”

In a nutshell, the problem with monopsony is that, “When a small group of companies can dominate a labor market, wages—and workers—suffer.”

4) Finally, state-supported monopoly is also evident in the current “rentier economy,” which, as the Steven Pinker quote above indicates, is the result of government policy that grants “exclusive access to a resource.” This is another instance of “regressive regulation.”

We’ll be looking more at the rentier economy in the weeks to come. But first, next week we’ll look at a surprising twist in the original version of the Monopoly board game.


[1] Douglass C. North is co-recipient of the 1993 Nobel Memorial Prize in Economic Sciences. He is Spencer T. Olin Professor in Arts and Sciences at Washington University in St. Louis and Bartlett Burnap Senior Fellow at the Hoover Institution at Stanford University. Barry R. Weingast is Ward C. Krebs Family Professor in the Department of Political Science and a Senior Fellow at the Hoover Institution at Stanford University. John Joseph Wallis is Professor of Economics at the University of Maryland and a research associate at the National Bureau of Economic Research.

[2] As described in Enlightenment Now: The Case For Reason, Science, Humanism, and Progress, Steven Pinker (2018).

[3] From this post on the CFA Institute’s Enterprising Investor blog.

[4] From Americana: A 400-Year History of American Capitalism, Bhu Srinivasan (2017). I’m not the only one who didn’t learn about this in my American history class. See this interview with the author of Lies My Teacher Told Me: Everything Your American History Textbook Got Wrong.

[5] The authors make for one of the more unusual economic collaborations I’ve come across in my research. Both are associated with the Niskanen Center, a libertarian think tank, and Teles is also a political science professor at Johns Hopkins University. Brink Lindsey is a libertarian, so no surprise there, but Steven M. Teles is a liberal, and together they offer a mix of perspectives that provides heartening evidence that people of conflicting persuasions aren’t so polarized that they can’t talk to each other or agree about anything.
