February 22, 2019

Working With Passion [2]

I’m wild again
Beguiled again
A simpering, whimpering child again
Bewitched, bothered and bewildered am I.

From the Broadway show “Pal Joey,”
Rodgers and Hart

(Click here or on the image below to treat yourself to the silken sound of Ella Fitzgerald.)

The ManagementSpeak argument for working with passion is that disengagement is expensive and risky: it compromises products and services, generates client and customer dissatisfaction, stirs up co-worker resentment and mistrust, impairs leadership judgment, and exposes the firm and the people in it to ethical and legal hazards.

The Compassionate ManagementSpeak (if there is such a thing) argument is that disengagement wears down human beings: it makes us unhappy and unproductive at work, and sours the rest of life.

The Working With Passion Remedy is that we need to fall in love again — with work, to be sure, but if we do, we’ll probably also fall in love again with being alive. The company wins, and so do the people in it.

Trouble is, there’s a joker in the deck:  the part about love. Love involves an unpredictable mix of brain and body hormones that generate its familiar delights and dark sides. This article from Harvard[1] catalogues the hormonal progression from lust to love to long-term bonding:  testosterone, estrogen, vasopressin, dopamine, norepinephrine, serotonin, oxytocin. The dynamic blend of these volatile hormones accounts for both delight and disaster. Not only that, but falling in love can “turn off regions in our brain that regulate critical thinking, self-awareness, and rational behavior, including parts of the prefrontal cortex. In short, love makes us dumb.”

The Dark Side of Dumb includes addiction and bipolar disorder — both of which involve a condition known as “mania.” According to the Mayo Clinic, mania is characterized by:

  • Feeling abnormally upbeat, jumpy or wired.
  • Increased activity, energy or agitation.
  • Exaggerated sense of well-being and self-confidence (euphoria).
  • Decreased need for sleep.[2]

There’s more, but if we’re desperate, just that much makes us think, what’s not to like? It sure beats the usual drudgery. What have we got to lose?

A lot, actually. Passion turns us into high risk-takers at best, delusional risk-takers at worst. We go to a workshop (like the ones I used to lead), we take a vacation, go on a retreat, read a self-help bestseller… and we get a hormonal jolt of inspiration. It feels good — way better than business as usual. In fact, it feels so good that no amount of warning (I also gave plenty of those in my workshops) can deter us from taking the plunge.

I’ve done it myself:  I was in the grip of it when I made my bumbling exit out of law practice. I wrote a book about that experience, and here’s what I said about mania:

When we’re in [a state of mania], life has a heightened sense of meaning and purpose, serendipity and synchronicity rule the day, and everything in and around us is an amazing unified oneness – perfect, whole, and complete. It’s the place where auspicious connections are easily made, where imagination makes visions and dreams come true.

Neuroscientists locate that state of mind in the brain’s prefrontal cortex. That’s where the brain tells us all is well, where all of our perceptions come together into a meaningful whole, in a happy stew of the right hormones and chemicals in the right balance to make us feel really, really good.

Compare that to the opposite state of depression, where all is disjointed, fragmented, without meaning or purpose, where social bonds are severed and life is a random walk of disintegration, where the most basic life activities are burdensome, and fruitfulness is a pipedream.

But watch out, the neuroscientists tell us: you can have too much of a good thing. Get the wrong mix in your neurotransmitter soup, and your natural high can be replaced with delusion, hallucinations, paranoia, schizophrenia, obsessive-compulsive disorder, Tourette’s syndrome, addictions.

Not the prettiest list.

That’s why mania is plutonium for the human soul: powerful almost beyond measure, equally suited to creation or destruction, and tricky to control once we let it loose. But dark side or not, mania is why we dream big dreams, and the bigger they are, the more mania we need. If we want to make our dreams come true, we risk mania’s dark side.

“Mania is plutonium for the human soul.”

Love risks mania; so does working with passion. Both create, both destroy. That doesn’t mean don’t go there; it means keep your eyes open if you do. I don’t regret my personal Working With Passion Remedy, and you might not regret yours either.

But then again, you might. And now you’ve been warned.

You also hear about finding your calling or purpose in your work as a cure for the disengagement blues. We’ll talk about that next time.

I tell my mania story in Life Beyond Reason:  A Memoir of Mania. It’s available as a free download here, or you can get it inexpensively in print or digitally from Amazon here. I’m currently writing a sequel about how I’ve been learning to make mania safe and sustainable.


[1] “Love, Actually: The Science Behind Lust, Attraction, and Companionship,” Katherine Wu, Harvard Graduate School of Arts and Sciences website (Feb. 14, 2017).

[2] For more on mania, see also this article from Psych Central.

Work Less, Do More

Anybody else remember May Day baskets? You made a little basket, put dandelions or candy in it, left it at the door of the girl next door’s house, rang the doorbell and ran away. If she heard, she was obligated to chase you and give you a kiss if she caught you. (That never happened.)

Hey c’mon… winters were long in Minnesota…

On May Day 1926, Henry Ford gave his factory floor workers the ultimate May Day basket: the 40-hour work week, down from 48 hours. Ford’s office workers got their reduced workweek three months later.

Ford was progressive, and then some. Twelve years earlier, he’d given them another surprise: a raise from $2.34 per day all the way up to $5.00.[1] You had to love the man, and they did. Little wonder that productivity skyrocketed. Ford’s employees were working less, doing more, and now they could also afford to buy his cars — although only with prior approval from Ford’s Sociological Department, which looked after workers’ personal, home, family, and financial health.

We’ve been living with Ford’s 40-hour work week for 93 years now. Some people think maybe it’s time for an upgrade — they suggest a four-day work week.

This position is backed up by academic research. Multiple studies support the view that a shorter working week would make people happier and more productive, while OECD figures show that countries with a culture of long working hours often score poorly for productivity and GDP per hour worked.

Meanwhile, one company in New Zealand that trialed a four-day working week last year confirmed it would adopt the measure on a permanent basis.[2]

Academics who studied the trial reported lower stress levels, higher levels of job satisfaction and an improved sense of work-life balance. Critically, they also say workers were 20% more productive.

Three-day weekend, anyone?

From this article about a presentation on the four-day work week at the recent World Economic Forum conclave in Davos, Switzerland.


Another WEF article indicates that research reveals an inverse relationship between hours worked (units of input) and productivity (units of output). The extra day off per week raises employee morale, improves health and wellbeing, and yes, raises productivity. And although some jobs really need to be staffed more days per week, that’s readily addressed through job-sharing.

It seems intuitive, doesn’t it, that happier, better rested workers will do more, and probably do it better, in less time? Not everyone is so easily convinced — here’s a sample of articles that do their journalistic best to present both upsides and downsides, while barely concealing an overall thumbs up: Wired, Huffington Post, Stuff.

From what I can tell from a review of those articles and several others like them, the dividing line between pro and con seems to be how comfortable corporate managers and politicos are with the word “progressive.” The New Zealand Guardian Trust Company is the one that took the four-day plunge, and these days New Zealand is floating on a progressive tide — see these articles: Business Insider, Business Insider, The Independent.

Next time, we’ll start looking at some other common advice about how to improve the workplace, such as finding your true calling/vocation, getting a sense of meaning and purpose in your work, following your dreams, doing what you love, etc. Good advice? Bad advice? We’ll look into it.


[1] That was for the male workers; the female workers got the same raise two years later.

[2] These are the researchers who conducted the New Zealand pilot.

If you like Kevin Rhodes’s posts, you might enjoy his new Iconoclast.blog, which focuses on several themes that have appeared in this blog over the years, such as how belief creates culture and culture creates behavior, and why growth and change are difficult but doable. You can also follow Iconoclast.blog on Facebook.

The Lonely Worker

In four years, my law firm went from me and my laptop to $800,000 and climbing, and suddenly there were twelve of us in newly decked-out offices complete with $100,000 in telecommunications and electronics upgrades.

Obviously we’d hit a sweet spot, and we were having fun. We laughed a lot. We ate together, visited each other’s homes. We took firm ski days and watched the Rockies at Coors Field. We had crazy non-policies like “take as much vacation as you need to come to work refreshed.” We had the coolest Christmas event ever. And we did kick-ass legal work.

But then the numbers got bigger and I got serious. An accountant said our vacation policy was unsustainable — we needed one, in a real live employee manual. I wrote one but never had the heart to show it to anyone. We sat in meetings with consultants formulating heartless strategic plans we all ignored. We had an employee retreat that was just plain weird.

The worst thing I took seriously was myself. I totally blew the lesson basketball Hall-of-Famer and Orlando Magic founder Pat Williams put in the title of his book Humility: The Secret Ingredient of Success. Time and chance had favored us — I’d stumbled into doing the right thing in the right place at the right time. Work had often been a rollicking, happy social occasion. But then I decided I must have been responsible for it, and paved Paradise, put up a parking lot, and didn’t know what we had ‘til it was gone.

We’d been in our new offices one week. My wife and I had flown back the day before from a cushy five-day CLE at a resort in San Diego, and I was heading out to visit our new satellite office when the phone rang. It was the associate-soon-to-be-partner we’d put in charge. “There’s something going on you need to know about,” he said.

The date was September 11th. The second plane had just hit the second tower.

Our clients — mostly small businesses — got hammered in the mini-recession that followed. As a result, so did we. I sought advice from two Denver law firm icons. They were sympathetic — they’d done that, too — expanded too much too quickly and paid for it in a downturn. A couple other people said you have to let people go — I followed their advice and let one person go — a move I mourn to this day. That’s when I decided we’ll survive or go down, but we’re doing it together.

We limped along until January 2004, when the new leader of our major referral source called to say they were “moving in a new direction” and March 31st would be the date we were officially toast. For the next three months I wrote job recommendations, we gave people their furniture and computers, sold the rest, archived files…

When I went to the office on April 1st (April Fool’s Day), the place echoed. I’d never felt so lonely in my life. Rotten timing, victim of circumstance, happens to everyone… yeah maybe, but all I could think was I miss my friends.

We don’t usually associate loneliness with work. We ought to, says Emily Esfahani Smith in her book The Power of Meaning: Crafting a Life That Matters. She cites findings that 20% of people consider loneliness a “major source of unhappiness in their lives,” that a third of Americans 45 or older say they’re lonely, and that close relationships at work are a major source of meaning. Former Surgeon General Vivek Murthy agrees and then some:

There is good reason to be concerned about social connection in our current world. Loneliness is a growing health epidemic.

Today, over 40% of adults in America report feeling lonely, and research suggests that the real number may well be higher.

In the workplace, many employees — and half of CEOs — report feeling lonely in their roles. At work, loneliness reduces task performance, limits creativity, and impairs other aspects of executive function such as reasoning and decision making. For our health and our work, it is imperative that we address the loneliness epidemic quickly.

And even working at an office doesn’t guarantee meaningful connections: People sit in an office full of coworkers, even in open-plan workspaces, but everyone is staring at a computer or attending task-oriented meetings where opportunities to connect on a human level are scarce.

Happy hours, coffee breaks, and team-building exercises are designed to build connections between colleagues, but do they really help people develop deep relationships? On average, we spend more waking hours with our coworkers than we do with our families. But do they know what we really care about? Do they understand our values? Do they share in our triumphs and pains?

These aren’t just rhetorical questions; from a biological perspective, we evolved to be social creatures. Over thousands of years, the value of social connection has become baked into our nervous system such that the absence of such a protective force creates a stress state in the body.

“Work and the Loneliness Epidemic: Reducing Isolation at Work Is Good for Business,” Harvard Business Review (2017).

He offers these remedies:

  • Evaluate the current state of connections in your workplace.
  • Build understanding of high-quality relationships.
  • Make strengthening social connections a strategic priority in your organization.
  • Create opportunities to learn about your colleagues’ personal lives.

And, he might have added, you might want to rethink your stingy vacation policy.

For more, see Work Loneliness and Employee Performance, Academy of Management Proceedings (2011).

Total Work [2]: Asleep on the Subway

I saw it often during a visit to Seoul: people sacked out on the subway, on the bus, at coffee shops, on park benches… The practice is common all around Asia. The Japanese have a word for it: “inemuri.”

It is often translated as ‘sleeping on duty,’ but Brigitte Steger, a senior lecturer in Japanese studies at Downing College, Cambridge, who has written a book on the topic, says it would be more accurate to render it as ‘sleeping while present.’

“Napping in Public? In Japan, That’s a Sign of Diligence,” NY Times (Dec. 16, 2016).

Inemuri means it’s more polite to be present, even if you nod off. In the workplace, that means it’s better to sleep on the job than not show up. Besides, it gets you brownie points:

In most countries, sleeping on the job isn’t just frowned upon, it may get you fired… But in Japan, napping in the office is common and culturally accepted. And in fact, it is often seen as a subtle sign of diligence: You must be working yourself to exhaustion.

And of course working yourself to exhaustion is a good thing. Add the Asian practice of wee hours business drinking and you might also be napping on the pavement — another common sight.

Run a Google Images search on the topic and the sheer volume of visuals is striking — these are seriously tired people.[1] It’s easy to imagine the impact of that level of fatigue on job performance, let alone daily life. The cognitive impairment and other health risks of sleep deprivation are well documented.[2] It’s especially bad in the professions — lawyers and doctors are chief among the sleep-deprived.

There’s also a deeper, darker side of chronic, overworked exhaustion, as we saw in last week’s post:

“Off in corners, rumours would occasionally circulate about death or suicide from overwork, but such faintly sweet susurrus would rightly be regarded as no more than local manifestations of the spirit of total work, for some even as a praiseworthy way of taking work to its logical limit in ultimate sacrifice.”

“If Work Dominated Your Every Moment Would Life Be Worth Living?” Aeon Magazine (2018).

Wait a minute! It’s praiseworthy to work yourself to death?! Believe it. And it’s not just in Asia, it’s all around the world, as people everywhere make the steady march toward the state of total work.[3]

Stanford Professor Jeffrey Pfeffer recently wrote a book about workplace-induced ill health and death. The following is from a Stanford Business interview, “The Workplace is Killing People and Nobody Cares” (March 15, 2018).

Jeffrey Pfeffer has an ambitious aspiration for his latest book. “I want this to be the Silent Spring of workplace health,” says Pfeffer, a professor of organizational behavior at Stanford Graduate School of Business. “We are harming both company performance and individual well-being, and this needs to be the clarion call for us to stop. There is too much damage being done.”

This is from the book blurb:

In one survey, 61 percent of employees said that workplace stress had made them sick and 7 percent said they had actually been hospitalized. Job stress costs US employers more than $300 billion annually and may cause 120,000 excess deaths each year. In China, 1 million people a year may be dying from overwork. People are literally dying for a paycheck. And it needs to stop.

In this timely, provocative book, Jeffrey Pfeffer contends that many modern management commonalities such as long work hours, work-family conflict, and economic insecurity are toxic to employees—hurting engagement, increasing turnover, and destroying people’s physical and emotional health—and also inimical to company performance.

Jeffrey Pfeffer marshals a vast trove of evidence and numerous examples from all over the world to expose the infuriating truth about modern work life: even as organizations allow management practices that literally sicken and kill their employees, those policies do not enhance productivity or the bottom line, thereby creating a lose-lose situation.

The Japanese word for work-related death is karōshi, which Wikipedia says can be translated literally as “overwork death.” The comparable term in South Korea is “gwarosa.” Call it what you like, give it a special name or not — death by overwork is total work taken to its utmost.

We don’t like to think about it, talk about it, admit it. It’s not our problem. Let the pros handle it. We wouldn’t know what to do anyway.

Maybe it’s time we learned.


[1] See also “Death by Work: Japan’s Habits of Overwork Are Hard To Change,” The Economist (2018).

[2] For an introduction, see Wikipedia and Harvard Business Review.

[3] See, e.g., “Britain’s Joyless Jobs Market Can Be Bad For Your Health,” The Financial Times (Aug. 2017). See also “Dead For Dough: Death by Overwork Around the World,” The Straits Times (first published April 6, 2016, updated Oct 6, 2017).

He Works Hard (But Not Always for the Money)

University of London economist Guy Standing has championed universal basic income since the ’80s. In Basic Income: A Guide For the Open-Minded (2017), he tackles the argument that UBI is flawed because recipients don’t have to work for it.

A remarkable number of commentators and social scientists lose their common sense when it comes to talking or writing about work. While every age throughout history has drawn arbitrary distinctions between what counts as work and what does not, ours may be the most perverse.

Only in the twentieth century did most work that was not paid labour become non-work. Labour statistics persist in this travesty. ‘Work’ is counted only if it is for pay, in the marketplace.

For example, he says, it’s the same work to walk the dog whether you do it yourself or pay someone else to do it, but the former doesn’t count. If it did, it would add up to a lot:

In the U.K. — and it is similar in other countries — the unremunerated economy (caring for children and the elderly, housework, voluntary work in the community, and so on) is estimated to be worth well over half the size of the money economy.

Juha Järvinen, one of 2,000 Finns selected for a two-year UBI test, does work that counts and work that doesn’t; either way, he works hard:

In a speck of a village deep in the Finnish countryside, a man gets money for free. Each month, almost €560 [about $640] is dropped into his bank account, with no strings attached.

He’s a human lab rat in an experiment that could help to shape the future of the west.

Until this year . . . he was trapped in a “humiliating” system that gave him barely enough to feed himself . . . The Finnish [workfare system] was always on his case about job applications and training.

[He was in the same position as] an unemployed Finn called Christian [who] was caught carving and selling wooden guitar plectrums [picks]. It was more pastime than business, earning him a little more than €2,000 in a year. But the sum was not what angered the authorities, it was the thought that each plectrum had taken up time that could have been spent on official hoop-jumping.

Ideas flow out of Järvinen as easily as water from a tap, yet he could exercise none of his initiative for fear of arousing bureaucratic scrutiny.

So what accounted for his change? Certainly not the UBI money. In Finland, €560 is less than a fifth of average private-sector income. “You have to be a magician to survive on such money,” Järvinen says. Over and over, he baldly describes himself as ‘poor.’

Ask Järvinen what difference money for nothing has made to his life, and you are marched over to his workshop. Inside is film-making equipment, a blackboard on which is scrawled plans for an artists’ version of Airbnb, and an entire little room where he makes shaman drums that sell for up to €900. All this while helping to bring up six children.

All those free euros have driven him to work harder than ever.

Compare his situation to that of Florian Dou, one of France’s “yellow vest” protesters, who has no UBI safety net:

At the bare bottom of Florian Dou’s shopping cart at the discount supermarket, there was a packet of $6 sausages and not much else. . . . “My salary and my wife’s have been gone for 10 days,” he lamented.

How to survive those days between when the money runs out and when his paycheck arrives for his work as a warehouse handler has become a monthly challenge. The same is true for so many others in Guéret, a grim provincial town in south-central France.

In places like these, a quiet fear gnaws at households: What happens when the money runs out around the 20th? What do I put in the refrigerator with nothing left in the account and the electricity bill to pay? Which meal should I skip today? How do I tell my wife again there is no going out this weekend?

That last comment — “going out this weekend” — is a moralistic hot button among UBI foes. Again from Guy Standing:

More generally, there is a moralistic presumption that poor people, especially those receiving benefits, should not be spending money on anything but the bare essentials, denying themselves even the smallest ‘luxury’ that might make their lives less miserable. As Marx pointed out in 1844, ‘every luxury of the worker seems to be reprehensible, and everything that goes beyond the most abstract need seems a luxury.’

Standing also exposes a related presumption:

It is often claimed that giving cash to those in need is misguided because people will spend it on alcohol, cigarettes, and other ‘bads’ rather than on their children and essentials such as food, clothes, and heating.

Obviously, this is a thoroughly paternalistic line of attack. Where to draw a line between ‘good’ and ‘bad’? Why should a rich person have the freedom to buy and consume whatever the state bureaucracy deems a ‘bad,’ but not a poor person?

Good vs. bad, work that counts vs. work that doesn’t, necessities vs. luxuries… the UBI debate is littered with polarities and prejudices. Suppose the cultural pendulum swings all the way to a state of “total work” — what would that be like? We’ll look at that next time.

Kevin Rhodes studies and writes about economics in an effort to understand the world his kids are growing up in, which is also the world he’s growing old in. You might enjoy his latest LinkedIn Pulse article “The Fame Monster: Rockstars And Rockstar Entrepreneurs.”

Work and Money

He’s a gentleman with a family
A gentle man, living day to day
He’s a gentleman with pride, one may conclude
Sign reads, “Gentleman with a family will work for food.”

Manhattan Transfer, Gentleman With a Family

Norwegian Petter Amlie is an entrepreneur, technology consultant, and frequent contributor on Medium. Work runs our economy, he writes in a recent article, “but if future technology lets us keep our standard of living without it, why do we hold on to it?” It’s a good question — one of those obvious ones we don’t think to ask. Why would we insist on working for food — or the money we need to buy food — if we don’t have to?

As we’ve seen, the central objection to robotics, artificial intelligence, big data, marketing algorithms, machine learning, and universal basic income is that they threaten the link between work and money. That’s upsetting because we believe jobs are the only way to “make a living.” But what if a day comes — sooner than we’d like to think — when that’s no longer true?

Work comes naturally to us, but the link between work and money is artificial — the function of an economic/social contract that relies on jobs to support both the production and consumption sides of the supply/demand curve: we work to produce goods and services, we get paid for doing it, we use the money to buy goods and services from each other. If technology takes over the production jobs, we won’t get paid to produce things — then how are we supposed to buy them? Faced with that question, “the captains of industry and their fools on the hill” (Don Henley) generally talk jobs, jobs, jobs — or, in the absence of jobs, workfare.

John Maynard Keynes had a different idea back in 1930, just after the 1929 crash, when he predicted that technological progress would mostly end the need for jobs, so that we would work for pay maybe fifteen hours per week, leaving us free for nobler pursuits. He spoke in rapturous, Biblical terms:

I see us free, therefore, to return to some of the most sure and certain principles of religion and traditional virtue — that avarice is a vice, that the exaction of usury is a misdemeanor, and the love of money is detestable, that those who walk most truly in the paths of virtue and sane wisdom take least thought for the morrow. We shall once more value ends above means and prefer the good to the useful. We shall honour those who can teach us how to pluck the hour and the day virtuously and well, the delightful people who are capable of taking direct enjoyment in things, the lilies of the field who toil not neither do they spin.

But then, after a second world war tore the planet apart, jobs rebuilt it. We’ve lived with that reality so long that we readily pooh-pooh Keynes’s euphoric prophecy. Amlie suggests we open our minds to it:

Work and money are both systems we’ve invented that were right for their time, but there’s no reason to see them as universally unavoidable parts of society. They helped us build a strong global economy, but why would we battle to keep it that way, if societal and technological progress could help us change it?

We have a built-in defense mechanism when the status quo is challenged by ideas such as Universal Basic Income, shorter work weeks and even just basic flexibility at the workplace, often without considering why we have an urge to defend it.

You’re supposed to be here at eight, even if you’re tired. You’re supposed to sit here in an open landscape, even if the isolation of a home office can help you concentrate on challenging tasks. You have exactly X number of weeks to recharge your batteries every year, because that’s how it’s always been done.

While many organizations have made significant policy adjustments in the last two decades, we’re still clinging to the idea that we should form companies, they should have employees that are paid a monthly sum to be there at the same time every morning five days a week, even if this system is not making us very happy.

I do know that work is not something I necessarily want to hold on to, if I could sustain my standard of living without it, which may just be the case if robots of the future could supply us with all the productivity we could ever need. If every job we can conceive could be done better by a machine than a human, and the machines demand no pay, vacation or motivation to produce goods and services for mankind for all eternity, is it such a ridiculous thought to ask in such a society why we would need money?

We should be exploring eagerly how to meet these challenges and how they can improve the human existence, rather than fighting tooth and nail to sustain it without knowing why we want it that way.

The change is coming. Why not see it in a positive light, and work towards a future where waking up at 4 am to go to an office is not considered the peak of human achievement?

One gentleman with a family who’s been seeing change in a positive new light is Juha Järvinen, one of 2,000 Finns selected for a two-year UBI test that just ended. He’s no longer working hard for the money, but he is working harder than ever. We’ll meet him next time.

Social Contract

“Man is born free; and everywhere he is in chains.”
Jean-Jacques Rousseau, The Social Contract & Discourses

What do Fortnite, New Year’s Day, and the USA have in common?

They all exist because we believe they do.

Political theorists call this kind of communal belief a “social contract.” According to Rousseau, that’s the mechanism by which we trade individual liberty for community restraint. Similarly, Thomas Hobbes said this in Leviathan:

As long as men live without a common power to keep them all in awe, they are in the condition known as war, and it is a war of every man against every man.

When a man thinks that peace and self-defense require it, he should be willing (when others are too) to lay down his right to everything, and should be contented with as much liberty against other men as he would allow against himself.

In Fortnite terms, life is a battle royale: everybody against everybody else, with only one left standing. As Hobbes famously said, that makes life “solitary, poor, nasty, brutish, and short.” As a recent version put it, “For roughly 99% of the world’s history, 99% of humanity was poor, hungry, dirty, afraid, stupid, sick, and ugly.”[1] A social contract suggests we can do better.

Can we really create something out of nothing, by mere belief? Yes, of course — we do it all the time. My daughter can’t figure out why New Year’s Day is a holiday. “It’s just a day!” she says, unmoved by my explanation that it’s a holiday because everyone thinks it is. Same with Fortnite — as 125 million enthusiasts know, it’s not just an online game, it’s a worldwide reality. And same with the United States — the Colonies’ deal with England grew long on chains and short on freedom until the Founders declared a new sovereign nation into existence:

We hold these truths to be self-evident, that all Men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty, and the Pursuit of Happiness.

The new nation was conceived in liberty, but there would be limits. Once the Revolutionary War settled the issue of sovereign independence[2], the Founders articulated a new freedom/chains balance:

We the People of the United States, in Order to form a more perfect Union, establish Justice, insure domestic Tranquility, provide for the common defense, promote the general Welfare, and secure the Blessings of Liberty to ourselves and our Posterity, do ordain and establish this Constitution for the United States of America.

That social contract + nearly 250 years of history = the USA. We are a nation born of imagination and belief, continually redefined and updated since its founding through interpretations and amendments to the terms of our social contract.

Our economic system works the same way. Adam Smith’s capitalism survived the trip to the new world, produced astonishing quality of life improvements in the 19th and 20th Centuries, and then was recast into the neoliberal framework that powered the world’s recovery from WWII. That version of our economic social contract thrived for three decades, but began to falter in the face of several unforeseen developments:

  • the democratization of knowledge in the information age;
  • accelerated automation, mass production, and eventually robotics;
  • software that at first only did what it was told but later morphed into machine intelligence; and
  • globalization, which shrank the world, homogenized culture, opened international trade, and recast national borders.

Neoliberalism couldn’t keep up with these developments. Tensions grew until the year 2016 became a worldwide referendum on the social contracts of democracy and neoliberalism. New social contracts would have required a new freedom/chains balance. 2016’s response was, “Not on my watch.”

That’s the context into which universal basic income would now be introduced. For that to happen, the American Dream of independence and upward mobility fueled by working for a living must give way to a belief that basic sustenance — job or no job — is a human right so fundamental that it’s one of those “self-evident” truths. As we’ve seen, that radical belief is slowly changing the North Carolina Cherokee Reservation’s culture of poverty, and has caught the fancy of a growing list of techno-plutocrats. As Mark Zuckerberg said, “Now it’s our time to define a new social contract for our generation.” Law professor James Kwak makes the same point[3]:

We have the physical, financial, and human capital necessary for everyone in our country to enjoy a comfortable standard of living, and within a few generations the same should be true of the entire planet. And yet our social organization remains the same as it was in the Great Depression: some people work very hard and make more money than they will ever need, while many others are unable to find work and live in poverty.

Millions if not billions of people today hunger to live in a world that is more fair, more forgiving, and more humane than the one they were born into. Creating a new vision of society worthy of that collective yearning … is the first step toward building a better future for our children.

To be continued.


[1] Rutger Bregman, Utopia for Realists (2016).

[2] In Hobbes’ terms, social contracts end the battle royale. Ironically, they often also create war as ideals of one contract conflict with another’s.

[3] James Kwak, Economism (2017).

Silicon Valley: Problem or Solution?

 There is no more neutrality in the world.
You either have to be part of the solution,
or you’re going to be part of the problem.
Eldridge Cleaver

The high-tech high rollers build the robots, code the algorithms, and wire up the machine intelligence that threaten jobs. If they’re the problem, what’s their solution?

Elon Musk: Universal basic income is “going to be necessary” because “there will be fewer and fewer jobs that a robot cannot do better.”

Richard Branson: “A lot of exciting new innovations are going to be created, which will generate a lot of opportunities and a lot of wealth, but there is a real danger it could also reduce the amount of jobs. Basic income is going to be all the more important. If a lot more wealth is created by AI, the least that the country should be able to do is that a lot of that wealth that is created by AI goes back into making sure that everybody has a safety net.”

Mark Zuckerberg: “The greatest successes come from having the freedom to fail. Now it’s our time to define a new social contract for our generation. We should explore ideas like universal basic income to give everyone a cushion to try new things.”

Sam Altman: “Eliminating poverty is such a moral imperative and something that I believe in so strongly. There’s so much research about how bad poverty is. There’s so much research about the emotional and physical toll that it takes on people.” (Altman’s company Y Combinator is conducting its own UBI experiment in Oakland.)

Ideas like this get labelled “progressive,” meaning “ahead of their time,” which in turn means “over my dead body.” We saw a few posts back that Pres. Johnson’s visionary Triple Revolution Report and National Commission on Technology, Automation, and Economic Progress ended up in the dustbin of history. Another technology/jobs initiative had already landed there two decades earlier:

In 1949, at the request of the New York Times, Norbert Wiener, an internationally renowned mathematician at the Massachusetts Institute of Technology, wrote an article describing his vision for future computers and automation. Wiener had been a child prodigy who entered college at age eleven and completed his PhD when he was seventeen; he went on to establish the field of cybernetics and made substantial contributions in applied mathematics and to the foundations of computer science, robotics, and computer-controlled automation.

In his article — written just three years after the first true general purpose electronic computer was built at the University of Pennsylvania — Wiener argued that ‘if we can do anything in a clear and intelligible way, we can do it by machine’ and warned that this could ultimately lead to ‘an industrial revolution of unmitigated cruelty’ powered by machines capable of ‘reducing the economic value of the routine factory employee to a point at which he is not worth hiring at any price.’

Rise of the Robots: Technology and the Threat of a Jobless Future, Martin Ford

Wiener’s article was never published, and was only recently (in 2012) discovered in MIT’s archives. Outspoken technology commentator Douglas Rushkoff hopes UBI meets a similar end. In a recent Medium piece, he called UBI “Silicon Valley’s Latest Scam.”[1] His main critique? UBI doesn’t go far enough:

They will basically tell you that a Universal Basic Income is a great idea and more effective than any other method of combating technological unemployment, the death of the Middle Class and the automation of the future of work.

They don’t propose a solution to wealth inequality, they only show a way to prevent all out mass social unrest and chaos, something that would inconvenience the state and elite.

The bottom 60% of the economy, well what do you suppose is in store for us with the rise of robots, machine learning and automation . . . ?

California might get a lot of sunshine and easy access to VC, but they aren’t blessed with a lot of common sense. They don’t know the pain of rural America, much less the underclass or warped narrative primed by Facebook algorithms or the new media that’s dehumanized by advertising agents and propaganda hackers.

What if receiving a basic income is actually humiliating and is our money for opioids and alcohol, and not for hope that we can again join a labor force that’s decreasing while robots and AI do the jobs we once did?

The problem lies in the fact that there won’t be a whole lot of “new jobs” for the blue and white collar workers to adapt to once they sink and become part of the permanent unemployed via technological unemployment.

With housing rising in major urban centers, more folk living paycheck-to-paycheck, rising debt to income ratios and less discretionary spending, combined with many other factors, the idea of a UBI (about the same as a meagre pension) saving us, sounds pretty insulting and absurd to a lot of people.

Since when did capitalism care about the down trodden and the poor? If we are to believe that automation and robots really will steal our jobs in unprecedented numbers, we should call Basic Income for what it is, a way to curtail social unrest and a post-work ‘peasant uprising.’

Getting [UBI] just for being alive isn’t a privilege, it’s a death sentence. We are already seeing the toll of the death of the middle class in the opioid epidemic, in the rise of suicide, alcoholism and early death, all due in part to the stress of a declining quality of life since the great recession of 2008.

If UBI doesn’t go far enough, then what does? Mark Zuckerberg used the phrase “new social contract” in his quote above. More on that coming up.


[1] UBI advocacy group BIEN (Basic Income Earth Network) reported Rushkoff’s opinions in a recent newsletter, and described his alternative: Universal Basic Assets.

Basic Income on the Res, Part 2

For nearly two decades, Duke Medical School professor Jane Costello has been studying the impact of casino money on the health and wellbeing of the North Carolina Cherokee tribe. For long, balanced articles about her work, see “What Happens When the Poor Receive a Stipend?” The New York Times (2014) and “Free Money: The Surprising Effects of a Basic Income Supplied by Government,” Wired Magazine (2017).

The NY Times article lists several encouraging results. Here are a few:

  • The number of Cherokee living below the poverty line had declined by half.
  • The frequency of behavioral problems declined by 40 percent, nearly reaching the level of children who had never been poor.
  • Crimes committed by Cherokee youth declined.
  • On-time high school graduation rates improved.
  • The earlier the supplements arrived in a child’s life, the better that child’s mental health in early adulthood.
  • The money seemed to improve parenting quality.

Prof. Costello also noted neurological benefits, particularly brain development in the “hippocampus and amygdala, brain regions important for memory and emotional well-being.”

Randall Akee, an economist at UCLA and a collaborator with Prof. Costello, speculated about the impact of these findings on the cost of welfare benefits:

A cash infusion in childhood seemed to lower the risk of problems in adulthood. That suggests that poverty makes people unwell, and that meaningful intervention is relatively simple.

Bearing that in mind, [Prof. Akee] argues that the supplements actually save money in the long run. He calculates that 5 to 10 years after age 19, the savings incurred by the Cherokee income supplements surpass the initial costs — the payments to parents while the children were minors. That’s a conservative estimate, he says, based on reduced criminality, a reduced need for psychiatric care and savings gained from not repeating grades.

The Wired article tracks the experiences of “Skooter” McCoy, who left the Cherokee Reservation to play small college football the year the casino money distributions began, and of his son Spencer McCoy, who was born that same year. Skooter returned to the Reservation to coach football at the local high school and is now general manager of the Cherokee Boys Club, a nonprofit that provides day care, foster care, and other tribal services.

The casino money made it possible for him to support his young family, but the money his children will receive is potentially life-altering on a different scale.

‘If you’ve lived in a small rural community and never saw anybody leave, never saw anyone with a white-collar job or leading any organization, you always kind of keep your mindset right here,’ he says, forming a little circle with his hands in front of his face. ‘Our kids today? The kids at the high school?’ He throws his arms out wide. ‘They believe the sky’s the limit. It’s really changed the entire mindset of the community these past 20 years.’

The Cherokees’ experience began with the same provisions for a one-time distribution at age 18 of the money set aside for minors that we saw last time in the Seneca tribe’s program, but the Cherokees later amended their law to call for payments in three stages — still not ideal, but a move toward sensibility. Skooter calls the coming-of-age payments “big money,” and has seen his share of abuse, but his son Spencer appears to be taking a different path:

When Spencer first got his ‘big money,’ he says, ‘I’d get online and I was looking for trucks and stuff, but I thought at the end of the day, it wasn’t really worth it.’ Aside from a used bass boat he bought to take out fishing, Spencer has stashed most of the money away in hopes of using it to start his own business one day.

After reviewing Prof. Costello’s work, the Wired article examines the use of UBI as a response to technological unemployment, concluding as follows:

The true impact of the money on the tribe may not really be known until Spencer’s generation, the first born after the casino opened, is grown up. For the techies backing basic income as a remedy to the slow-moving national crisis that is economic inequality, that may prove a tedious wait.

Still, if anything is to be learned from the Cherokee experiment, it’s this: To imagine that a basic income, or something like it, would suddenly satisfy the disillusioned, out-of-work Rust Belt worker is as wrong-headed as imagining it would do no good at all, or drive people to stop working.

There is a third possibility: that an infusion of cash into struggling households would lift up the youth in those households in all the subtle but still meaningful ways Costello has observed over the years, until finally, when they come of age, they are better prepared for the brave new world of work, whether the robots are coming or not.

We’ll look more at “the robots are coming” and Silicon Valley’s response to technological unemployment next time. Meanwhile, for related information, see this summary re: U.S. government benefits to Indian tribes, and see this article re: another current version of UBI — the Alaska oil money trust fund.

There’s No Such Thing as a Free Lunch — True or False?

Last time, we were introduced to the idea of a universal basic income (UBI). We can assume that the pros and cons have been thoroughly researched and reasonably analyzed, and that each side holds its position with utmost conviction.

We can also assume that none of that reasonableness and conviction will convert anyone from one side to the other, or win over the uncommitted. Reason doesn’t move us: we use it to justify what we already decided, based on what we believe. See “Why Facts Don’t Change Our Minds,” The New Yorker (February 2017) and “This Article Won’t Change Your Mind,” The Atlantic (March 2017).

History doesn’t guide us either — see “Why We Refuse to Learn From History” from Big Think and Why Don’t We Learn From History? from military historian Sir Basil Henry Liddell Hart. The latter contains conventional wisdom such as this:

The most instructive, indeed the only method of learning to bear with dignity the vicissitude of fortune, is to recall the catastrophes of others.

History is the best help, being a record of how things usually go wrong.

There are two roads to the reformation for mankind— one through misfortunes of their own, the other through the misfortunes of others; the former is the most unmistakable, the latter the less painful.

I would add that the only hope for humanity, now, is that my particular field of study, warfare, will become purely a subject of antiquarian interest. For with the advent of atomic weapons we have come either to the last page of war, at any rate on the major international scale we have known in the past, or to the last page of history.

That seems like good advice, but it mostly goes unheeded. Apparently we’d rather make our own mistakes.

If reasoned analysis and historical perspective don’t inform our responses to radically new ideas like UBI, then what does? Many things, but cultural belief is high on the list. Policy is rooted in culture, culture is rooted in shared beliefs, and beliefs are rooted in history. Cultural beliefs shape individual bias, and the whole belief system becomes sacred in the culture’s mythology. Try to subvert cultural beliefs, and the response is outrage and entrenchment.

All of which means that each of us probably had a quick true-or-false answer to the question in this week’s blog post title, and was ready to defend it with something that sounded reasonable. Our answer likely signals our kneejerk response to the idea of UBI. The “free lunch” — or, more accurately, “free money” — issue appears to be the UBI Great Divide: get to that point, and you’re either pro or con, and there’s no neutral option. (See this for more about where the “no free lunch” phrase came from.[1])

The Great Divide is what tanked President Nixon’s UBI legislation. The plan, which would have paid a family of four $1,600 a year (equivalent to $10,428 today), was set to launch in the midst of an outpouring of political self-congratulation and media endorsement, only to be scuttled by a memo from a White House staffer describing the failure of a British UBI experiment 150 years earlier. UBI, the memo implied, was in fact a free lunch that had already failed once; its fate was thus sealed.

As it turns out, whether the experiment failed or not was lost in a 19th Century fog of cultural belief, so that opponents of the experiment pounced on a bogus report about its impact to justify passing the Poor Law Amendment Act of 1834 — which is what they wanted to do anyway. The new Poor Law was that era’s version of workfare, generated by the worst kind of scarcity mentality applied to the worst kind of scarcity. Besides creating the backdrop for Charles Dickens’ writing, the new Poor Law put down philosophical roots that still support today’s welfare system:

The new Poor Law introduced perhaps the most heinous form of “public assistance” that the world has ever witnessed. Believing the workhouses to be the only effective remedy against sloth and depravity, the Royal Commission forced the poor into senseless slave labor, from breaking stones to walking on treadmills. . . .

For the whole history lesson, see “The Bizarre Tale Of President Nixon’s Basic Income Plan.”

And so we’re back to asking whether UBI is a free lunch or not. If it is, then it’s an affront to a culture that values self-sufficiency. If it isn’t, then it requires a vastly different cultural value system to support it. The former believes that doing something — “making a living” at a job — is how you earn your daily bread. The latter believes you’re entitled to sustenance if you are something: i.e., a citizen or member of the nation, state, city, or other institution or community providing the UBI. The former is about activity, the latter is about identity. This Wired article captures the distinction:

The idea [of UBI] is not exactly new—Thomas Paine proposed a form of basic income back in 1797—but in this country, aside from Social Security and Medicare, most government payouts are based on individual need rather than simply citizenship.

UBI is about “simply citizenship.” It requires a cultural belief that everybody in the group shares its prosperity. Cultural identity alone ensures basic sustenance — it’s a right, and that right makes Poor Laws and workfare obsolete.

The notion of cultural identity invites comparison between UBI and the “casino money” some Native American tribes pay their members. How’s that working? We’ll look at that next time.


[1] Yes, Milton Friedman did in fact say it, although he wasn’t the only one. And in a surprising twist, he has been criticized for advocating his own version of UBI.

 

Old Dog, Old Trick, New Showtime

Blockchain consultant and futurist Michael Spencer called it a conspiracy by the 0.01 percenters to enslave the rest of us for good.[1] A growing number of those 0.01 percenters have already supported it, but they’re not alone: this poll conducted shortly after the 2016 election showed that half of Americans supported it as well. A parade of think tanks (here’s one) and other professional skeptics (more than I can cite with hyperlinks in a single sentence) have given it a thorough vetting and mostly concluded yeah well maybe it’s worth a try.

What is “it”? This idea: give the poor what they lack — money. Ensure everyone a livable income while getting rid of the expensive and draconian welfare system. And just to be fair, go ahead and give everyone else money, too, even the billionaires.

The idea mostly goes by the name “universal basic income” (UBI). It’s rooted in the futuristic fear that technology will eventually put humans out of work. That’s not a new fear: UBI is “far from a new idea,” says Martin Ford, another Silicon Valley entrepreneur and a popular TED talker, in his New York Times bestselling Rise of the Robots: Technology and the Threat of a Jobless Future.

In the context of the contemporary American political landscape . . . a guaranteed income is likely to be disparaged as “socialism” and a massive expansion of the welfare state. The idea’s historical origins, however, suggest something quite different. While a basic income has been embraced by economists and intellectuals on both sides of the political spectrum, the idea has been advocated especially forcefully by conservatives and libertarians.

Friedrich Hayek, who has become an iconic figure among today’s conservatives, was a strong proponent of the idea. In his three-volume work, Law, Legislation and Liberty, published between 1973 and 1979, Hayek suggested that a guaranteed income would be a legitimate government policy designed to provide against adversity, and that the need for this type of safety net is the direct result of the transition to a more open and mobile society where many individuals can no longer rely on traditional support systems:

There is, however, yet another class of common risks with regard to which the need for government action has until recently not been generally admitted. . . . The problem here is chiefly the fate of those who for various reasons cannot make their living in the market . . . that is, all people suffering from adverse conditions which may affect anyone and against which most individuals cannot alone make adequate protection but in which a society that has reached a certain level of wealth can afford to provide for all.

Back in the 60s, a self-organized group of prominent academics and activists calling itself the “Ad Hoc Committee on the Triple Revolution” foresaw the possibility of massive technological unemployment. Its members included the Swedish economist and sociologist Gunnar Myrdal, who would later share the Nobel Prize in Economics with Friedrich Hayek.[2] Rise of the Robots describes the Committee’s findings:

“Cybernation” (or automation) would soon result in an economy where “potentially unlimited output can be achieved by systems of machines which will require little cooperation from human beings.” The result would be massive unemployment, soaring inequality, and, ultimately, falling demand for goods and services as consumers increasingly lacked the purchasing power necessary to continue driving economic growth.

The Ad Hoc Committee went on to propose a radical solution: the eventual implementation of a guaranteed minimum income made possible by the “economy of abundance” such widespread automation would create, and which would “take the place of the patchwork of welfare measures” that were then in place to address poverty.

The Triple Revolution report was released to the media and sent to President Johnson, the secretary of labor, and congressional leaders in March 1964. An accompanying cover letter warned ominously that if something akin to the report’s proposed solutions was not implemented, “the nation will be thrown into unprecedented economic and social disorder.” A front-page story with extensive quotations from the report appeared in the next day’s New York Times, and numerous other newspapers and magazines ran stories and editorials (most of which were critical), in some cases even printing the entire text of the report.

The Triple Revolution marked what was perhaps the crest of a wave of worry about the impact of automation that had arisen following World War II. The specter of mass joblessness as machines displaced workers had incited fear many times in the past — going all the way back to Britain’s Luddite uprising in 1812 — but in the 1950s and ’60s, the concern was especially acute and was articulated by some of the United States’ most prominent and intellectually capable individuals.

Four months after the Johnson administration received the Triple Revolution report, the president signed a bill creating the National Commission on Technology, Automation, and Economic Progress. In his remarks at the bill’s signing ceremony, Johnson said that “automation can be the ally of our prosperity if we will just look ahead, if we will understand what is to come, and if we will set our course wisely after proper planning for the future.” The newly formed Commission then . . . quickly faded into obscurity.

A few years later, Richard Nixon introduced guaranteed-income legislation that he called “the most significant piece of social legislation in our nation’s history.” That legislation also faded into obscurity; more on that another time.

Thus, UBI is an old idea responding to an old fear: how do we make a living if we can’t work for it? A half century after LBJ and Nixon, that fear is all too real, and lots of people think it might finally be time to give the old UBI solution a try.

But not everyone is jumping on the UBI bandwagon. The very thought that jobs might not be the source of our sustenance is what rallies UBI’s most strident opponents.

More on UBI next time.


[1] Spencer followed with a similarly scathing assessment in this article.

[2] Myrdal’s study of race relations was influential in Brown v. Board of Education. He was also an architect of the Swedish social democratic welfare state. Hayek and Myrdal were jointly awarded the Nobel Prize in Economics in 1974.

The Success Delusion

How did the social safety net turn into a poverty trap? It fell victim to the job’s own success as an economic force.

Psychologists call it “the success delusion.” You do something and get a result you like, so you keep doing it, expecting more of the same. It keeps working until one day it doesn’t. Do you try something new? No, you double down — it worked before, surely it will work again. You keep doubling down until you’ve made a mess.

You’re a victim of your own success. If you were willing to listen, hindsight would tell you there was more to it than what you were doing: a lot of what happened was you being in the right place at the right time. Believe that or not, what matters now is that the times have changed and you haven’t.

That’s what happened to social welfare. Forty years of post-WWII economic success positioned the steady job as the cornerstone of economic prosperity and upward mobility. Then, in the 80s and 90s, about the time the job was starting to lose its economic vitality, policy-makers doubled down on it: work had raised the welfare of the whole world since the days of the telegraph and railroad, and surely it was still the best route out of poverty. So now we had workfare instead of welfare, and, as we saw last time, social welfare became “a system of suspicion and shame.”

Standin’ in line marking time
Waiting for the welfare dime
‘Cause they can’t buy a job
The man in the silk suit hurries by
As he catches the poor old lady’s eyes
Just for fun he says, “Get a job.”

“The Way It Is”
Bruce Hornsby and the Range

Rutger Bregman sums it up this way:

We’re saddled with a welfare state from a bygone era when the breadwinners were still mostly men and people spent their whole lives working at the same company. The pension system and employment protection rules are still keyed to those fortunate to have a steady job, public assistance is rooted in the misconception that we can rely on the economy to generate enough jobs, and welfare benefits are often not a trampoline, but a trap.

Utopia for Realists (2017).

Guy Standing explains it this way:

The period from the nineteenth century to the 1970s saw what Karl Polanyi, in his famous 1944 book, dubbed “The Great Transformation.”

The essence of labourism was that labour rights — more correctly, entitlements — should be provided to those (mostly men) who performed labour and to their spouses and children.

Those in full-time jobs obtained rising real wages, a growing array of “contributory” non-wage benefits, and entitlements to social security for themselves and their family. As workers previously had little security, this was a progressive step.

Labourism promoted the view that the more labour people did, the more privileged they should be, and the less they did the less privileged they should be. The ultimate fetishism was Lenin’s dictate, enshrined in the Soviet constitution, that anybody who did not labour should not eat.

The labourist model frayed in the 1980s, as labour markets became more flexible and increasing numbers of people moved from job to job and in and out of employment.

To defend labour-based welfare, social democratic governments turned to means testing, targeting benefits on those deemed the deserving poor.

The shift to means testing was fatal. As previous generations of social democrats had understood, benefits designed only for the poor are invariably poor benefits and stand to lose support among the rest of society.

Ironically, it was mainly social democratic parties that shifted policy towards workfare, requiring the unemployed to apply for non-existent or unsuitable jobs, or to do menial, dead-end jobs or phony training courses in return for increasingly meagre benefits.

Today, we are living in a Second Gilded Age — with one significant difference. In the first, which ended in the Great Crash of 1929, inequality grew sharply but wages on average rose as well. The Second Gilded Age has also involved growing inequality, but this time real wages on average have stagnated or fallen. Meanwhile, those relying on state benefits have fallen further behind, many pushed into homelessness, penury and dependence on inadequate private charity.

Since the 1980s, the share of income going to labour has shrunk, globally and in most countries of economic significance. . . . The labour share fell in the USA from 53 per cent in 1970 to 43.5 per cent in 2013. Most dramatically, it slid by over twenty percentage points in China and also dropped steeply in the rising industrial giant of South Korea.

Besides falling wages, there has been an increase in wage differentials and a less-documented decline in the share of people receiving non-wage benefits, such as occupational pensions, paid holidays, sick leave or medical coverage. Thus worker compensation, in terms of “social income,” has fallen by more than revealed by wages alone.

As a consequence of these developments, “in-work poverty” has rocketed. In some OECD countries [OECD: the Organisation for Economic Cooperation and Development, with 34 industrialized members], including Britain, the USA, Spain and Poland, a majority of those in poverty live in households where at least one person has a job.

The mantra that “work is the best route out of poverty” is simply false.

The Corruption of Capitalism (2017).
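Standing’s “social income” point is easy to miss, so here is a rough back-of-the-envelope sketch of the arithmetic. The labour-share figures (53 per cent in 1970, 43.5 per cent in 2013) are the ones quoted above; the benefit percentages are invented placeholders purely for illustration, not figures from his book.

```python
# Illustrative arithmetic only. The labour-share figures are the ones quoted above;
# the benefit mark-ups are invented placeholders, not data from Standing's book.

labour_share_1970 = 0.530   # US labour share of income, 1970
labour_share_2013 = 0.435   # US labour share of income, 2013

wage_only_drop = 1 - labour_share_2013 / labour_share_1970
print(f"Fall in the labour share alone: {wage_only_drop:.1%}")   # about 17.9%

# Fold in non-wage benefits (occupational pensions, paid holidays, sick leave,
# medical coverage). Suppose they once added 25% on top of wages and now add only 10%.
social_income_1970 = labour_share_1970 * (1 + 0.25)
social_income_2013 = labour_share_2013 * (1 + 0.10)

social_income_drop = 1 - social_income_2013 / social_income_1970
print(f"Fall in 'social income' (wages plus benefits): {social_income_drop:.1%}")  # about 27.8%
```

With those made-up benefit numbers, total compensation falls by roughly 28 per cent even though the labour share alone falls by roughly 18 per cent. That is the shape of Standing’s claim: once eroding benefits are counted, worker compensation has fallen by more than the wage numbers reveal.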

Not only are jobs doing a poor job at social welfare, for employed and unemployed alike, but jobs themselves are an endangered species. More to come…

 

Kevin Rhodes studies and writes about economics in an effort to understand the world his kids are growing up in, which is also the world he’s growing old in. You might enjoy his latest LinkedIn Pulse article “The Fame Monster: Rockstars And Rockstar Entrepreneurs.”