February 21, 2019

Working With Passion

Hmmm… love… passion… Happy Valentine’s Day!

Now back to work.

Is there really such a thing as loving your work/working with passion? Yes.

What does it mean, to work with passion? I don’t have a good definition, but you know when you’ve got it. 

And it certainly isn’t what ManagementSpeak calls “engagement.”

Google “work engagement,” and you get a truly stunning number and variety of results, many of which are monotonously unoriginal and insultingly obvious, and some of which are just plain scary. Consider this article from “OSH WIKI,” sponsored by the EU version of OSHA[1]:

Work engagement is defined as positive behaviour or a positive state of mind at work that leads to positive work-related outcomes. Employees with high levels of work engagement are energetic and dedicated to their work and immersed to their work.

We’ll ignore the redundancy and wayward preposition for a moment and notice all the strong adjectives: positive, energetic, dedicated, immersed. No issues there. Wikipedia adds a few more:

Work engagement is the ‘harnessing of organization members’ selves to their work roles: in engagement, people employ and express themselves physically, cognitively, emotionally and mentally during role performances. Three aspects of work motivation are cognitive, emotional and physical engagement.’[2]

Okay, got it:  when you’re engaged at work, you’re “physically, cognitively, emotionally and mentally” all there. Hard to argue with that. But then you also need to be “harnessed” to your “work role,” with the ultimate objectives of “role performance” and “work-related outcomes.”

Um, no thanks. I’m pretty sure I’m busy that night. The robots can handle it while I’m out.

Thus far, we only have descriptions of what it’s like when you are engaged. But how do you get there in the first place? That would seem to be where “passion” comes in. But where does that come from? Maybe we’ll find some clues in an article with a catchy title: “Is Your Colleague A Zombie Worker?”

They walk among us, dead-eyed, with heavy tread. They are the colleague sagging at the coffee machine, the project manager staring out of the window. Meet the zombie workforce: an army of employees who’re failing to find inspiration at work.

There are more of these ‘working dead’ than you might imagine. According to a recent study by Aon Hewitt, less than one-quarter of the world’s employees are classified as ‘highly’ engaged in their jobs, while only 39% admit to being ‘moderately’ so.

This leaves an awful lot of the 5 million people Aon surveyed ‘unengaged’, which the more gruesome-minded of us might take to mean ‘haunting office corridors like reanimated corpses’ where once they might have been valuable staff members, full of life and great ideas.


We all know people like that. We might be that ourselves:  according to the research, look left and look right, and two of you don’t have a pulse. The working dead can’t find the “inspiration at work” they need. Hence no passion.

How do we wake the dead?

I met the world of working dead lawyers right after the Great Recession of 2007-2008. In a stroke of exquisitely bad timing, I left my law practice to start a new venture at the start of 2007. The project bombed, and I was at loose ends. I attended a bar association career change/job search meeting where we did one of those speed-dating things where you meet everybody. It was an eye-opener. Here were all these amazing people — bright, personable, articulate, with wide interests and a desire to serve — but they didn’t see themselves that way. Instead, they saw themselves as victims, helpless, hopeless.

I raced home and sketched out a workshop to help them discover who they really were. I’d never done a workshop like that before, but the ideas poured in, and I wrote them down in a white heat. A couple hours later, I fired off a proposal to the bar association. Weeks later I got an email:  “How’d you like to do your program over lunch next Tuesday? We’ll provide the pizza.” They put a blurb in a monthly newsletter, and 40 people showed up. I’ll never forget standing in front and looking into 40 pairs of empty eyes. The lights were on but nobody was home — or in some cases, the lights weren’t even on, and apparently hadn’t been for a long time.

The workshop morphed into a traveling Continuing Legal Education road show. The promoter called it “Beyond Burnout: Find Your Passion in the Law,” but then quickly added “Or Out of the Law.” Best intentions aside, most attendees wanted out. Of the hundreds of heartfelt evaluations I collected, the following was by far in the minority:

I knew I was fairly happy in my career, but I took this CLE because it sounded more interesting than the traditional practice area CLEs. In working through the exercises, I met some amazing people and realized just how truly blessed I am to be currently working in a job that I love. This workshop got me excited to build my business to an even bigger level — it reignited the passion!

“Reignited” meant the writer had the passion, and knew it. I said earlier you know it if you’ve got it. Next time, we’ll talk about what that feels like — kinda like falling in love, actually.


[1] In its defense, OSH is in the business of making sure workers are engaged at least enough not to hurt themselves or others — a pretty low standard when it comes to passion. Here’s its mission:  “OSHwiki has been developed by EU-OSHA, to enable the sharing of occupational safety and health (OSH) knowledge, information and best practices, in order to support government, industry and employee organisations in ensuring safety and health at the workplace.”

[2] Quoting a 1990 Academy of Management Journal article.

If you like Kevin Rhodes’s posts, you might enjoy his new Iconoclast.blog, which focuses on several themes that have appeared in this blog over the years, such as how belief creates culture and culture creates behavior, and why growth and change are difficult but doable. You can also follow Iconoclast.blog on Facebook.

Work Less, Do More

Anybody else remember May Day baskets? You made a little basket, put dandelions or candy in it, left it at the door of the girl next door’s house, rang the doorbell and ran away. If she heard, she was obligated to chase you and give you a kiss if she caught you. (That never happened.)

Hey c’mon… winters were long in Minnesota…

On May Day 1926, Henry Ford gave his factory floor workers the ultimate May Day basket: the 40-hour work week, all the way down from 60 hours. Ford’s office workers got their reduced workweek three months later.

Ford was progressive, and then some. Twelve years earlier, he’d given them another surprise: a raise from $2.34 per day all the way up to $5.00.[1] You had to love the man, and they did. Little wonder that productivity skyrocketed. Ford’s employees were working less, doing more, and now they could also afford to buy his cars — although only with prior approval from Ford’s Sociological Department, which looked after workers’ personal, home, family, and financial health.

We’ve been living with Ford’s 40-hour work week for 93 years now. Some people think maybe it’s time for an upgrade — they suggest a four-day work week.

This position is backed up by academic research. Multiple studies support the view that a shorter working week would make people happier and more productive, while OECD figures show that countries with a culture of long working hours often score poorly for productivity and GDP per hour worked.

Meanwhile, one company in New Zealand that trialed a four-day working week last year confirmed it would adopt the measure on a permanent basis.[2]

Academics who studied the trial reported lower stress levels, higher levels of job satisfaction and an improved sense of work-life balance. Critically, they also say workers were 20% more productive.

Three-day weekend, anyone?

From this article about a presentation on the four-day work week at the recent World Economic Forum conclave in Davos, Switzerland.


Another WEF article indicates that research reveals an inverse relationship between hours worked (units of input) and productivity (units of output). The extra day off per week raises employee morale, improves health and wellbeing, and yes, raises productivity. And although some jobs really need to be staffed more days per week, that’s readily addressed through job-sharing.

It seems intuitive, doesn’t it, that happier, better rested workers will do more, and probably do it better, in less time? Not everyone is so easily convinced — here’s a sample of articles that do their journalistic best to present both upsides and downsides, while barely concealing an overall thumbs up: Wired, Huffington Post, Stuff.

From what I can tell from a review of those articles and several others like them, the dividing line between pro and con seems to be how comfortable corporate managers and politicos are with the word “progressive.” The New Zealand Guardian Trust Company is the one that took the four-day plunge, and these days New Zealand is floating on a progressive tide — see these articles: Business Insider, Business Insider, The Independent.

Next time, we’ll start looking at some other common advice about how to improve the workplace, such as finding your true calling/vocation, getting a sense of meaning and purpose in your work, following your dreams, doing what you love, etc. Good advice? Bad advice? We’ll look into it.


[1] That was for the male workers; the females got the same raise two years later.

[2] These are the researchers who conducted the New Zealand pilot.


The Lonely Worker

In four years, my law firm went from me and my laptop to $800,000 and climbing, and suddenly there were twelve of us in newly decked out offices complete with $100,000 in telecommunications and electronics upgrades.

Obviously we’d hit a sweet spot, and we were having fun. We laughed a lot. We ate together, visited each other’s homes. We took firm ski days and watched the Rockies at Coors Field. We had crazy non-policies like “take as much vacation as you need to come to work refreshed.” We had the coolest Christmas event ever. And we did kick-ass legal work.

But then the numbers got bigger and I got serious. An accountant said our vacation policy was unsustainable — we needed one, in a real live employee manual. I wrote one but never had the heart to show it to anyone. We sat in meetings with consultants formulating heartless strategic plans we all ignored. We had an employee retreat that was just plain weird.

The worst thing I took seriously was myself. I totally blew the lesson basketball Hall-of-Famer and Orlando Magic founder Pat Williams put in the title of his book Humility: The Secret Ingredient of Success. Time and chance had favored us — I’d stumbled into doing the right thing in the right place at the right time. Work had often been a rollicking, happy social occasion. But then I decided I must have been responsible for it, and paved Paradise, put up a parking lot, and didn’t know what we had ‘til it was gone.

We’d been in our new offices one week. My wife and I had flown back the day before from a cushy five-day CLE at a resort in San Diego, and I was heading out to visit our new satellite office when the phone rang. It was the associate-soon-to-be-partner we’d put in charge. “There’s something going on you need to know about,” he said.

The date was September 11th. The second plane had just hit the second tower.

Our clients — mostly small businesses — got hammered in the mini-recession that followed. As a result, so did we. I sought advice from two Denver law firm icons. They were sympathetic — they’d done that, too — expanded too much too quickly and paid for it in a downturn. A couple other people said you have to let people go — I followed their advice and let one person go — a move I mourn to this day. That’s when I decided we’ll survive or go down, but we’re doing it together.

We limped along until January 2004, when the new leader of our major referral source called to say they were “moving in a new direction” and March 31st would be the date we were officially toast. For the next three months I wrote job recommendations, we gave people their furniture and computers, sold the rest, archived files…

When I went to the office on April 1st (April Fool’s Day), the place echoed. I’d never felt so lonely in my life. Rotten timing, victim of circumstance, happens to everyone… yeah maybe, but all I could think was I miss my friends.

We don’t usually associate loneliness with work. We ought to, says Emily Esfahani Smith in her book The Power of Meaning: Crafting a Life That Matters. She cites findings that 20% consider loneliness a “major source of unhappiness in their lives,” that one-third of Americans 45 or older say they’re lonely, and that close relationships at work are a major source of meaning. Former Surgeon General Vivek Murthy agrees and then some:

There is good reason to be concerned about social connection in our current world. Loneliness is a growing health epidemic.

Today, over 40% of adults in America report feeling lonely, and research suggests that the real number may well be higher.

In the workplace, many employees — and half of CEOs — report feeling lonely in their roles. At work, loneliness reduces task performance, limits creativity, and impairs other aspects of executive function such as reasoning and decision making. For our health and our work, it is imperative that we address the loneliness epidemic quickly.

And even working at an office doesn’t guarantee meaningful connections: People sit in an office full of coworkers, even in open-plan workspaces, but everyone is staring at a computer or attending task-oriented meetings where opportunities to connect on a human level are scarce.

Happy hours, coffee breaks, and team-building exercises are designed to build connections between colleagues, but do they really help people develop deep relationships? On average, we spend more waking hours with our coworkers than we do with our families. But do they know what we really care about? Do they understand our values? Do they share in our triumphs and pains?

These aren’t just rhetorical questions; from a biological perspective, we evolved to be social creatures. Over thousands of years, the value of social connection has become baked into our nervous system such that the absence of such a protective force creates a stress state in the body.

“Work And The Loneliness Epidemic: Reducing Isolation At Work Is Good For Business,” Harvard Business Review (2017).

He offers these remedies:

  • Evaluate the current state of connections in your workplace.
  • Build understanding of high-quality relationships.
  • Make strengthening social connections a strategic priority in your organization.
  • Create opportunities to learn about your colleagues’ personal lives.

And, he might have added, you might want to rethink your stingy vacation policy.

For more, see Work Loneliness and Employee Performance, Academy of Management Proceedings (2011).


Total Work [2]: Asleep on the Subway

I saw it often during a visit to Seoul: people sacked out on the subway, on the bus, at coffee shops, on park benches… The practice is common all around Asia. The Japanese have a word for it: “inemuri.”

It is often translated as ‘sleeping on duty,’ but Brigitte Steger, a senior lecturer in Japanese studies at Downing College, Cambridge, who has written a book on the topic, says it would be more accurate to render it as ‘sleeping while present.’

“Napping in Public? In Japan, That’s a Sign of Diligence,” NY Times (Dec 16, 2016).

Inemuri means it’s more polite to be present, even if you nod off. In the workplace, that means it’s better to sleep on the job than not show up. Besides, it gets you brownie points:

In most countries, sleeping on the job isn’t just frowned upon, it may get you fired… But in Japan, napping in the office is common and culturally accepted. And in fact, it is often seen as a subtle sign of diligence: You must be working yourself to exhaustion.

And of course working yourself to exhaustion is a good thing. Add the Asian practice of wee hours business drinking and you might also be napping on the pavement — another common sight.

Run a Google Images search on the topic and the sheer volume of visuals is striking — these are seriously tired people.[1] It’s easy to imagine the impact of that level of fatigue on job performance, let alone daily life. The cognitive impairment and other health risks of sleep deprivation are well documented.[2] It’s especially bad in the professions — lawyers and doctors are chief among the sleep-deprived.

There’s also a deeper, darker side of chronic, overworked exhaustion, as we saw in last week’s post:

“Off in corners, rumours would occasionally circulate about death or suicide from overwork, but such faintly sweet susurrus would rightly be regarded as no more than local manifestations of the spirit of total work, for some even as a praiseworthy way of taking work to its logical limit in ultimate sacrifice.”

“If Work Dominated Your Every Moment Would Life be Worth Living?” Aeon Magazine (2018).

Wait a minute! It’s praiseworthy to work yourself to death?! Believe it. And it’s not just in Asia, it’s all around the world, as people everywhere make the steady march toward the state of total work.[3]

Stanford Professor Jeffrey Pfeffer recently wrote a book about workplace-induced ill health and death. The following is from a Stanford Business interview, “The Workplace is Killing People and Nobody Cares” (March 15, 2018).

Jeffrey Pfeffer has an ambitious aspiration for his latest book. “I want this to be the Silent Spring of workplace health,” says Pfeffer, a professor of organizational behavior at Stanford Graduate School of Business. “We are harming both company performance and individual well-being, and this needs to be the clarion call for us to stop. There is too much damage being done.”

This is from the book blurb:

In one survey, 61 percent of employees said that workplace stress had made them sick and 7 percent said they had actually been hospitalized. Job stress costs US employers more than $300 billion annually and may cause 120,000 excess deaths each year. In China, 1 million people a year may be dying from overwork. People are literally dying for a paycheck. And it needs to stop.

In this timely, provocative book, Jeffrey Pfeffer contends that many modern management commonalities such as long work hours, work-family conflict, and economic insecurity are toxic to employees—hurting engagement, increasing turnover, and destroying people’s physical and emotional health—and also inimical to company performance.

Jeffrey Pfeffer marshals a vast trove of evidence and numerous examples from all over the world to expose the infuriating truth about modern work life: even as organizations allow management practices that literally sicken and kill their employees, those policies do not enhance productivity or the bottom line, thereby creating a lose-lose situation.

The Japanese word for work-related death is karōshi, which Wikipedia says can be translated literally as “overwork death.” The comparable term in South Korea is “gwarosa.” Call it what you like, give it a special name or not — death by overwork is total work taken to its utmost.

We don’t like to think about it, talk about it, admit it. It’s not our problem. Let the pros handle it. We wouldn’t know what to do anyway.

Maybe it’s time we learned.


[1] See also “Death by Work: Japan’s Habits of Overwork Are Hard To Change,” The Economist (2018).

[2] For an introduction, see Wikipedia and Harvard Business Review.

[3] See, e.g., “Britain’s Joyless Jobs Market Can Be Bad For Your Health,” The Financial Times (Aug. 2017). See also “Dead For Dough: Death by Overwork Around the World,” The Straits Times (first published April 6, 2016, updated Oct 6, 2017).


Total Work

Andrew Taggart is an entrepreneur, “practical philosopher,” and prolific writer who works with creative leaders at the Banff Centre for Arts and Creativity and social entrepreneurs at Kaospilot in Denmark. In a recent article, he comments on the state of “total work,” a term coined by German philosopher Josef Pieper in his 1948 book Leisure: The Basis of Culture to describe the process by which society increasingly categorizes us as workers above all else. Like Pieper, Taggart believes human experience derails when work is the dominant cultural norm:

Imagine that work had taken over the world. It would be the centre around which the rest of life turned. Then all else would come to be subservient to work.

And how, in this world of total work, would people think and sound and act?

Everywhere they looked, they would see the pre-employed, employed, post-employed, underemployed and unemployed, and there would be no one uncounted in this census. Everywhere they would laud and love work, wishing each other the very best for a productive day, opening their eyes to tasks and closing them only to sleep.

Everywhere an ethos of hard work would be championed as the means by which success is to be achieved, laziness being deemed the gravest sin. Everywhere among content providers, knowledge brokers, collaboration architects and heads of new divisions would be heard ceaseless chatter about workflows and deltas, about plans and benchmarks, about scaling up, monetisation and growth.

[Work becomes total] when it is the centre around which all of human life turns; when everything else is put in its service; when leisure, festivity and play come to resemble and then become work; when there remains no further dimension to life beyond work; when humans fully believe that we were born only to work; and when other ways of life, existing before total work won out, disappear completely from cultural memory.

Crucially, the attitude of the total worker is not grasped best in cases of overwork, but rather in the everyday way in which he is single-mindedly focused on tasks to be completed, with productivity, effectiveness and efficiency to be enhanced. How? Through the modes of effective planning, skilful prioritising and timely delegation. The total worker, in brief, is a figure of ceaseless, tensed, busied activity.

Hmmm, sounds a lot like the practice of law… But it’s not just lawyers, it’s everywhere. For the movers and shakers it’s build, fund, scale, execute, maximize, prioritize, manage, lead. For the rest it’s be early, stay late, be nice to callers and customers, and get through all that email — there might be something important in there. And everywhere it’s build the platform, get the clicks, likes, and follows, join the meetups and podcasts, eat healthy, buy the Peloton and the Beemer, learn a new language, take the beach vacation, drink the microbrew, subscribe to the curated monthly clothing delivery… it all counts.

There’s nothing intrinsically “bad” in all of that. I do a lot of it myself. But when everything we do is organized around trading our time and energy for reward in the marketplace, we’re going to suffer, individually and as a culture:

To see how [total work] causes needless human suffering, consider the illuminating phenomenology of total work as it shows up in the daily awareness of two imaginary conversation partners. There is, to begin with, constant tension, an overarching sense of pressure associated with the thought that there’s something that needs to be done, always something I’m supposed to be doing right now. As the second conversation partner puts it, there is concomitantly the looming question: Is this the best use of my time? Time, an enemy, a scarcity, reveals the agent’s limited powers of action, the pain of harrying, unanswerable opportunity costs.

Together, thoughts of the not yet but supposed to be done, the should have been done already, the could be something more productive I should be doing, and the ever-awaiting next thing to do conspire as enemies to harass the agent who is, by default, always behind in the incomplete now. . . . One feels guilt whenever he is not as productive as possible. Guilt, in this case, is an expression of a failure to keep up or keep on top of things, with tasks overflowing because of presumed neglect or relative idleness.

The burdened character of total work, then, is defined by ceaseless, restless, agitated activity, anxiety about the future, a sense of life being overwhelming, nagging thoughts about missed opportunities, and guilt connected to the possibility of laziness.

In other words, total work is chronically stressful — a well-documented source of mental, physical, relational, and societal ill health. And the problem is, if we’re not already there, we’re alarmingly close:

This world [of total work], it turns out, is not a work of science fiction; it is unmistakably close to our own.

As a result:

Off in corners, rumours would occasionally circulate about death or suicide from overwork, but such faintly sweet susurrus[1] would rightly be regarded as no more than local manifestations of the spirit of total work, for some even as a praiseworthy way of taking work to its logical limit in ultimate sacrifice.

More on that coming up.


[1] I had to look up “susurrus.” It means “whispering, murmuring, or rustling.”

Kevin Rhodes studies and writes about economics in an effort to understand the world his kids are growing up in, which is also the world he’s growing old in. You might enjoy his latest LinkedIn Pulse article “The Fame Monster: Rockstars And Rockstar Entrepreneurs.”

Work and Money

He’s a gentleman with a family
A gentle man, living day to day
He’s a gentleman with pride, one may conclude
Sign reads, “Gentleman with a family will work for food.”

Manhattan Transfer, Gentleman With a Family

Norwegian Petter Amlie is an entrepreneur, technology consultant, and frequent contributor on Medium. Work runs our economy, he writes in a recent article, “but if future technology lets us keep our standard of living without it, why do we hold on to it?” It’s a good question — one of those obvious ones we don’t think to ask. Why would we insist on working for food — or the money we need to buy food — if we don’t have to?

As we’ve seen, at the center of the objections to robotics, artificial intelligence, big data, marketing algorithms, machine learning, and universal basic income is that they threaten the link between work and money. That’s upsetting because we believe jobs are the only way to “make a living.” But what if a day comes — sooner than we’d like to think — when that’s no longer true?

Work comes naturally to us, but the link between work and money is artificial — the function of an economic/social contract that relies on jobs to support both the production and consumption sides of the supply/demand curve: we work to produce goods and services, we get paid for doing it, we use the money to buy goods and services from each other. If technology takes over the production jobs, we won’t get paid to produce things — then how are we supposed to buy them? Faced with that question, “the captains of industry and their fools on the hill” (Don Henley) generally talk jobs, jobs, jobs — or, in the absence of jobs, workfare.

John Maynard Keynes had a different idea back in 1930, just after the Crash of 1929, when he predicted that technological progress would mostly end the need for jobs, so that we would work for pay maybe fifteen hours per week, leaving us free for nobler pursuits. He spoke in rapturous, Biblical terms:

I see us free, therefore, to return to some of the most sure and certain principles of religion and traditional virtue — that avarice is a vice, that the exaction of usury is a misdemeanor, and the love of money is detestable, that those who walk most truly in the paths of virtue and sane wisdom take least thought for the morrow. We shall once more value ends above means and prefer the good to the useful. We shall honour those who can teach us how to pluck the hour and the day virtuously and well, the delightful people who are capable of taking direct enjoyment in things, the lilies of the field who toil not neither do they spin.

But then, after a second world war tore the planet apart, jobs rebuilt it. We’ve lived with that reality so long that we readily pooh-pooh Keynes’s euphoric prophecy. Amlie suggests we open our minds to it:

Work and money are both systems we’ve invented that were right for their time, but there’s no reason to see them as universally unavoidable parts of society. They helped us build a strong global economy, but why would we battle to keep it that way, if societal and technological progress could help us change it?

We have a built-in defense mechanism when the status quo is challenged by ideas such as Universal Basic Income, shorter work weeks and even just basic flexibility at the workplace, often without considering why we have an urge to defend it.

You’re supposed to be here at eight, even if you’re tired. You’re supposed to sit here in an open landscape, even if the isolation of a home office can help you concentrate on challenging tasks. You have exactly X number of weeks to recharge your batteries every year, because that’s how it’s always been done.

While many organizations have made significant policy adjustments in the last two decades, we’re still clinging to the idea that we should form companies, they should have employees that are paid a monthly sum to be there at the same time every morning five days a week, even if this system is not making us very happy.

I do know that work is not something I necessarily want to hold on to, if I could sustain my standard of living without it, which may just be the case if robots of the future could supply us with all the productivity we could ever need. If every job we can conceive could be done better by a machine than a human, and the machines demand no pay, vacation or motivation to produce goods and services for mankind for all eternity, is it such a ridiculous thought to ask in such a society why we would need money?

We should be exploring eagerly how to meet these challenges and how they can improve the human existence, rather than fighting tooth and nail to sustain it without knowing why we want it that way.

The change is coming. Why not see it in a positive light, and work towards a future where waking up at 4 am to go to an office is not considered the peak of human achievement?

One gentleman with a family who’s been seeing change in a positive new light is Juha Järvinen, one of 2,000 Finns selected for a two-year UBI test that just ended. He’s no longer working hard for the money, but he is working harder than ever. We’ll meet him next time.


Social Contract

“Man is born free, and everywhere he is in chains.”
Jean-Jacques Rousseau, The Social Contract & Discourses

What do Fortnite, New Year’s Day, and the USA have in common?

They all exist because we believe they do.

Political theorists call this kind of communal belief a “social contract.” According to Rousseau, that’s the mechanism by which we trade individual liberty for community restraint. Similarly, Thomas Hobbes said this in Leviathan:

As long as men live without a common power to keep them all in awe, they are in the condition known as war, and it is a war of every man against every man.

When a man thinks that peace and self-defense require it, he should be willing (when others are too) to lay down his right to everything, and should be contented with as much liberty against other men as he would allow against himself.”

In Fortnite terms, life is a battle royale: everybody against everybody else, with only one left standing. As Hobbes famously said, that makes life “solitary, poor, nasty, brutish, and short.” As a recent version put it, “For roughly 99% of the world’s history, 99% of humanity was poor, hungry, dirty, afraid, stupid, sick, and ugly.”[1] A social contract suggests we can do better.

Can we really create something out of nothing, by mere belief? Yes, of course — we do it all the time. My daughter can’t figure out why New Year’s Day is a holiday. “It’s just a day!” she says, unmoved by my explanation that it’s a holiday because everyone thinks it is. Same with Fortnite — as 125 million enthusiasts know, it’s not just an online game, it’s a worldwide reality. And same with the United States — the Colonies’ deal with England grew long on chains and short on freedom until the Founders declared a new sovereign nation into existence:

We hold these truths to be self-evident, that all Men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty, and the Pursuit of Happiness.

The new nation was conceived in liberty, but there would be limits. Once the Revolutionary War settled the issue of sovereign independence[2], the Founders articulated a new freedom/chains balance:

We the People of the United States, in Order to form a more perfect Union, establish Justice, insure domestic Tranquility, provide for the common defense, promote the general Welfare, and secure the Blessings of Liberty to ourselves and our Posterity, do ordain and establish this Constitution for the United States of America.

That social contract + 250 years of history = the USA. We are a nation born of imagination and belief, continually redefined since our founding through interpretations of and amendments to the terms of our social contract.

Our economic system works the same way. Adam Smith’s capitalism survived the trip to the new world, produced astonishing quality of life improvements in the 19th and 20th Centuries, and then was recast into the neoliberal framework that powered the world’s recovery from WWII. That version of our economic social contract thrived for three decades, but began to falter in the face of several unforeseen developments:

  • the democratization of knowledge in the information age;
  • accelerated automation, mass production, and eventually robotics;
  • software that at first only did what it was told but later morphed into machine intelligence; and
  • globalization, which shrank the world, homogenized culture, opened international trade, and recast national borders.

Neoliberalism couldn’t keep up with these developments. Tensions grew until the year 2016 became a worldwide referendum on the social contracts of democracy and neoliberalism. New social contracts would have required a new freedom/chains balance. 2016’s response was, “Not on my watch.”

That’s the context into which universal basic income would now be introduced. For that to happen, the American Dream of independence and upward mobility fueled by working for a living must give way to a belief that basic sustenance — job or no job — is a human right so fundamental that it’s one of those “self-evident” truths. As we’ve seen, that radical belief is slowly changing the North Carolina Cherokee Reservation’s culture of poverty, and has caught the fancy of a growing list of techno-plutocrats. As Mark Zuckerberg said, “Now it’s our time to define a new social contract for our generation.” Law professor James Kwak makes the same point[3]:

We have the physical, financial, and human capital necessary for everyone in our country to enjoy a comfortable standard of living, and within a few generations the same should be true of the entire planet. And yet our social organization remains the same as it was in the Great Depression: some people work very hard and make more money than they will ever need, while many others are unable to find work and live in poverty.

Millions if not billions of people today hunger to live in a world that is more fair, more forgiving, and more humane than the one they were born into. Creating a new vision of society worthy of that collective yearning … is the first step toward building a better future for our children.”

To be continued.


[1] Rutger Bregman, Utopia for Realists (2016).

[2] In Hobbes’ terms, social contracts end the battle royale. Ironically, they often also create war as ideals of one contract conflict with another’s.

[3] James Kwak, Economism (2017).


Silicon Valley: Problem or Solution?

 There is no more neutrality in the world.
You either have to be part of the solution,
or you’re going to be part of the problem.
Eldridge Cleaver

The high-tech high rollers build the robots, code the algorithms, and wire up the machine intelligence that threaten jobs. If they’re the problem, what’s their solution?

Elon Musk: Universal basic income is “going to be necessary” because “there will be fewer and fewer jobs that a robot cannot do better.”

Richard Branson: “A lot of exciting new innovations are going to be created, which will generate a lot of opportunities and a lot of wealth, but there is a real danger it could also reduce the amount of jobs. Basic income is going to be all the more important. If a lot more wealth is created by AI, the least that the country should be able to do is that a lot of that wealth that is created by AI goes back into making sure that everybody has a safety net.”

Mark Zuckerberg: “The greatest successes come from having the freedom to fail. Now it’s our time to define a new social contract for our generation. We should explore ideas like universal basic income to give everyone a cushion to try new things.”

Sam Altman: “Eliminating poverty is such a moral imperative and something that I believe in so strongly. There’s so much research about how bad poverty is. There’s so much research about the emotional and physical toll that it takes on people.” (Altman’s company Y Combinator is conducting its own UBI experiment in Oakland.)

Ideas like this get labelled “progressive,” meaning “ahead of their time,” which in turn means “over my dead body.” We saw a few posts back that Pres. Johnson’s visionary Triple Revolution Report and National Commission on Technology, Automation, and Economic Progress ended up in the dustbin of history. Another technology/jobs initiative had already landed there two decades earlier:

In 1949, at the request of the New York Times, Norbert Wiener, an internationally renowned mathematician at the Massachusetts Institute of Technology, wrote an article describing his vision for future computers and automation. Wiener had been a child prodigy who entered college at age eleven and completed his PhD when he was seventeen; he went on to establish the field of cybernetics and made substantial contributions in applied mathematics and to the foundations of computer science, robotics, and computer-controlled automation.

In his article — written just three years after the first true general purpose electronic computer was built at the University of Pennsylvania — Wiener argued that ‘if we can do anything in a clear and intelligible way, we can do it by machine’ and warned that this could ultimately lead to ‘an industrial revolution of unmitigated cruelty’ powered by machines capable of ‘reducing the economic value of the routine factory employee to a point at which he is not worth hiring at any price.’

Rise of the Robots: Technology and the Threat of a Jobless Future, Martin Ford

Wiener’s article was never published, and was only recently (in 2012) discovered in MIT’s archives. Outspoken technology commentator Douglas Rushkoff hopes UBI meets a similar end. In a recent Medium piece, he called UBI “Silicon Valley’s Latest Scam.”[1] His main critique? UBI doesn’t go far enough:

They will basically tell you that a Universal Basic Income is a great idea and more effective than any other method of combating technological unemployment, the death of the Middle Class and the automation of the future of work.

They don’t propose a solution to wealth inequality, they only show a way to prevent all out mass social unrest and chaos, something that would inconvenience the state and elite.

The bottom 60% of the economy, well what do you suppose is in store for us with the rise of robots, machine learning and automation . . . ?

California might get a lot of sunshine and easy access to VC, but they aren’t blessed with a lot of common sense. They don’t know the pain of rural America, much less the underclass or warped narrative primed by Facebook algorithms or the new media that’s dehumanized by advertising agents and propaganda hackers.

What if receiving a basic income is actually humiliating and is our money for opioids and alcohol, and not for hope that we can again join a labor force that’s decreasing while robots and AI do the jobs we once did?

The problem lies in the fact that there won’t be a whole lot of “new jobs” for the blue and white collar workers to adapt to once they sink and become part of the permanent unemployed via technological unemployment.

With housing rising in major urban centers, more folk living paycheck-to-paycheck, rising debt to income ratios and less discretionary spending, combined with many other factors, the idea of a UBI (about the same as a meagre pension) saving us, sounds pretty insulting and absurd to a lot of people.

Since when did capitalism care about the down trodden and the poor? If we are to believe that automation and robots really will steal our jobs in unprecedented numbers, we should call Basic Income for what it is, a way to curtail social unrest and a post-work ‘peasant uprising.’

Getting [UBI] just for being alive isn’t a privilege, it’s a death sentence. We are already seeing the toll of the death of the middle class on the opioid epidemic, on the rise of suicide, alcoholism and early death all due to in part of the stress of a declining quality of life since the great recession of 2008.”

If UBI doesn’t go far enough, then what does? Mark Zuckerberg used the phrase “new social contract” in his quote above. More on that coming up.


[1] UBI advocacy group BIEN (Basic Income Earth Network) reported Rushkoff’s opinions in a recent newsletter, and described his alternative: Universal Basic Assets.


Basic Income on the Res, Part 2

For nearly two decades, Duke Medical School professor Jane Costello has been studying the impact of casino money on the health and wellbeing of the North Carolina Cherokee tribe. For long, balanced articles about her work, see “What Happens When the Poor Receive a Stipend?”, The New York Times (2014), and “Free Money: The Surprising Effects Of A Basic Income Supplied By Government,” Wired Magazine (2017).

The NY Times article lists several encouraging results. Here are a few:

The number of Cherokee living below the poverty line had declined by half.

The frequency of behavioral problems declined by 40 percent, nearly reaching the risk of children who had never been poor.

Crimes committed by Cherokee youth declined.

On-time high school graduation rates improved.

The earlier the supplements arrived in a child’s life, the better that child’s mental health in early adulthood.

The money seemed to improve parenting quality.

Prof. Costello also noted neurological benefits, particularly brain development in the “hippocampus and amygdala, brain regions important for memory and emotional well-being.”

Randall Akee, an economist at UCLA and a collaborator with Prof. Costello, speculated about the impact of these findings on the cost of welfare benefits:

A cash infusion in childhood seemed to lower the risk of problems in adulthood. That suggests that poverty makes people unwell, and that meaningful intervention is relatively simple.

Bearing that in mind, [Prof. Akee] argues that the supplements actually save money in the long run. He calculates that 5 to 10 years after age 19, the savings incurred by the Cherokee income supplements surpass the initial costs — the payments to parents while the children were minors. That’s a conservative estimate, he says, based on reduced criminality, a reduced need for psychiatric care and savings gained from not repeating grades.

The Wired article tracks the experiences of “Skooter” McCoy, who left the Cherokee Reservation to play small college football the year the casino money distributions began, and of his son Spencer McCoy, who was born that same year. Skooter returned to the Reservation to coach football at the local high school and is now general manager of the Cherokee Boys Club, a nonprofit that provides day care, foster care, and other tribal services.

The casino money made it possible for him to support his young family, but the money his children will receive is potentially life-altering on a different scale.

‘If you’ve lived in a small rural community and never saw anybody leave, never saw anyone with a white-collar job or leading any organization, you always kind of keep your mindset right here,’ he says, forming a little circle with his hands in front of his face. ‘Our kids today? The kids at the high school?’ He throws his arms out wide. ‘They believe the sky’s the limit. It’s really changed the entire mindset of the community these past 20 years.’

The Cherokees’ experience began with the same provision we saw last time in the Seneca tribe’s program: a one-time distribution, at age 18, of the money set aside for minors. The Cherokees later amended their law to pay the money out in three stages — still not ideal, but a move toward sensibility. Skooter calls the coming-of-age payments “big money,” and has seen his share of abuse, but his son Spencer appears to be taking a different path:

When Spencer first got his ‘big money,’ he says, ‘I’d get online and I was looking for trucks and stuff, but I thought at the end of the day, it wasn’t really worth it.’ Aside from a used bass boat he bought to take out fishing, Spencer has stashed most of the money away in hopes of using it to start his own business one day.

After reviewing Prof. Costello’s work, the Wired article examines the use of UBI as a response to technological unemployment, concluding as follows:

The true impact of the money on the tribe may not really be known until Spencer’s generation, the first born after the casino opened, is grown up. For the techies backing basic income as a remedy to the slow-moving national crisis that is economic inequality, that may prove a tedious wait.

Still, if anything is to be learned from the Cherokee experiment, it’s this: To imagine that a basic income, or something like it, would suddenly satisfy the disillusioned, out-of-work Rust Belt worker is as wrong-headed as imagining it would do no good at all, or drive people to stop working.

There is a third possibility: that an infusion of cash into struggling households would lift up the youth in those households in all the subtle but still meaningful ways Costello has observed over the years, until finally, when they come of age, they are better prepared for the brave new world of work, whether the robots are coming or not.

We’ll look more at “the robots are coming” and Silicon Valley’s response to technological unemployment next time. Meanwhile, for related information, see this summary re: U.S. government benefits to Indian tribes, and see this article re: another current version of UBI — the Alaska oil money trust fund.


Basic Income on the Res

Thomas Sowell has a platinum resume: Marine Corps war vet, bachelor’s from Harvard, master’s from Columbia, Ph.D. from the University of Chicago, professorships at Cornell and UCLA, the Urban Institute and the Hoover Institution at Stanford, books, articles…. You get the point: when he talks economic and social policy, people listen.

The people at The Institute for Family Studies (IFS) were listening when they published a blog post earlier this year entitled “What We Can Learn From Native Americans About a Universal Basic Income.” The article describes the Seneca tribe’s practice of distributing casino money to its members, and focuses on the particularly disastrous provisions pertaining to the money for minors:

Half the money for children under 18 is given to their parents, and the other half is put into a trust. When a Seneca youth turns 18 and can show that he or she has graduated from high school or earned a GED, he or she receives a lump sum of $30,000. Those who don’t get a high-school degree have to wait until they’re 21 to receive the money.

Government officials and other members of the nation tell me that the best thing most young adults do with this money is to buy a new truck. These are kids who have never had very much before; so when someone hands them a huge check, they clearly don’t know what to do. Store owners report that young people will come in to buy candy, handing $50 or $100 without expecting any change. These young people seem to have no concept of saving or investing.

I used to practice estate planning, and I need to point out that the Seneca approach to minor beneficiaries unfortunately borrows the worst kind of drafting laziness from intestacy law, uniform gifts to minors acts, and similar laws involving minors and money. Their experience therefore tells us nothing about UBI specifically. Of course dropping a wad of cash on an unprepared 18- or 21-year-old is a dumb idea. Of course the kids “have no concept of saving or investing.” (Like the rest of us do.) Moving on, the article cites more disasters:

The money “is almost never saved for education.

“Despite a vast apparatus to help Seneca members set up businesses, almost no one starts one.

“Unless people are employed by the tribe (either through the casino or in tribal government), they are largely unemployed.

“Theft is also a problem. One official told me that they have had reports of elder abuse where children and grandchildren were stealing payments from older members of the tribe.

“The results of all this can be seen in the poverty rates for the Senecas, which have continued to rise. Their territory is divided into two reservations. As of 2011, the Allegany reservation poverty rate was 33.3 percent and the Cattaraugus reservation poverty rate was 64.9 percent, the highest in Cattaraugus County. During the first decade that the casino was operating, the poverty rate in Cattaraugus County, which includes part of the Seneca Territory, increased from 12.8 in 2000 to 18.7 in 2011.”

Finally, the article ends by citing Thomas Sowell:

Writing about the concept of a Universal Basic Income last year, Thomas Sowell summed up the situation: ‘The track record of divorcing personal rewards from personal contributions hardly justifies more of the same, even when it is in a more sophisticated form. Sophisticated social disaster is still disaster—and we already have too much of that.’

The Sowell article cited by the IFS blogger was “Is Personal Responsibility Obsolete?” (Investor’s Business Daily, June 6, 2016). It begins this way:

Among the many disturbing signs of our times are conservatives and libertarians of high intelligence and high principles who are advocating government programs that relieve people of the necessity of working to provide their own livelihoods.

Generations ago, both religious people and socialists were agreed on the proposition that ‘he who does not work, neither shall he eat.’ Both would come to the aid of those unable to work. But the idea that people who simply choose not to work should be supported by money taken from those who are working was rejected across the ideological spectrum.

And so we see the standard anti-UBI fightin’ words:

“divorcing personal reward from personal contributions”

“government programs that relieve people of the necessity of working to provide their own livelihoods”

“people who simply choose not to work”

“money taken from those who are working”

I confess, I can’t help but wonder what people who say those things think they would do with UBI money. Again moving along….

Other tribes also distribute casino money. The following is from “What Happens When the Poor Receive a Stipend?”, published by The New York Times as part of its series on economic inequality called “The Great Divide.”

Scientists interested in the link between poverty and mental health, however, often face a more fundamental problem: a relative dearth of experiments that test and compare potential interventions.

So when, in 1996, the Eastern Band of Cherokee Indians in North Carolina’s Great Smoky Mountains opened a casino, Jane Costello, an epidemiologist at Duke University Medical School, saw an opportunity. The tribe elected to distribute a proportion of the profits equally among its 8,000 members. Professor Costello wondered whether the extra money would change psychiatric outcomes among poor Cherokee families.

Same idea, different tribe. How’d they do? We’ll find out next time.

 


There’s No Such Thing as a Free Lunch — True or False?

Last time, we were introduced to the idea of a universal basic income (UBI). We can assume that the pros and cons have been thoroughly researched and reasonably analyzed, and that each side holds its position with utmost conviction.

We can also assume that none of that reasonableness and conviction will convert anyone from one side to the other, or win over the uncommitted. Reason doesn’t move us: we use it to justify what we already decided, based on what we believe. See “Why Facts Don’t Change Our Minds,” The New Yorker (February 2017), and “This Article Won’t Change Your Mind,” The Atlantic (March 2017).

History doesn’t guide us either — see “Why We Refuse to Learn From History” from Big Think and “Why Don’t We Learn From History?” from military historian Sir Basil Henry Liddell Hart. The latter contains conventional wisdom such as this:

The most instructive, indeed the only method of learning to bear with dignity the vicissitude of fortune, is to recall the catastrophes of others.

History is the best help, being a record of how things usually go wrong.

There are two roads to the reformation for mankind— one through misfortunes of their own, the other through the misfortunes of others; the former is the most unmistakable, the latter the less painful.

I would add that the only hope for humanity, now, is that my particular field of study, warfare, will become purely a subject of antiquarian interest. For with the advent of atomic weapons we have come either to the last page of war, at any rate on the major international scale we have known in the past, or to the last page of history.

That seems like good advice, but it mostly goes unheeded. It seems we’d rather make our own mistakes.

If reasoned analysis and historical perspective don’t inform our responses to radically new ideas like UBI, then what does? Many things, but cultural belief is high on the list. Policy is rooted in culture, culture is rooted in shared beliefs, and beliefs are rooted in history. Cultural beliefs shape individual bias, and the whole belief system becomes sacred in the culture’s mythology. Try to subvert cultural beliefs, and the response is outrage and entrenchment.

All of which means that each of us probably had a quick true-or-false answer to the question in this week’s blog post title, and was ready to defend it with something that sounded reasonable. That answer likely signals our kneejerk response to the idea of UBI. The “free lunch” — or, more accurately, “free money” — issue appears to be the UBI Great Divide: get to that point, and you’re either pro or con; there’s no neutral option. (See this for more about where the “no free lunch” phrase came from.[1])

The Great Divide is what tanked President Nixon’s UBI legislation. The plan, which would have paid a family of four $1,600/year (equivalent to $10,428 today), was set to launch amid an outpouring of political self-congratulation and media endorsement, only to be scuttled by a memo from a White House staffer describing the failure of a British UBI experiment 150 years earlier. UBI, it seemed, was a free lunch after all, and a failed one at that; its fate was sealed.

As it turns out, whether the experiment failed or not was lost in a 19th Century fog of cultural belief, so that opponents of the experiment pounced on a bogus report about its impact to justify passing the Poor Law Amendment Act of 1834 — which is what they wanted to do anyway. The new Poor Law was that era’s version of workfare, and was generated by the worst kind of scarcity mentality applied to the worst kind of scarcity. Besides creating the backdrop to Charles Dickens’ writing, the new Poor Law’s philosophical roots still support today’s welfare system:

The new Poor Law introduced perhaps the most heinous form of “public assistance” that the world has ever witnessed. Believing the workhouses to be the only effective remedy against sloth and depravity, the Royal Commission forced the poor into senseless slave labor, from breaking stones to walking on treadmills. . . .

For the whole history lesson, see “The Bizarre Tale Of President Nixon’s Basic Income Plan.”

And so we’re back to asking whether UBI is a free lunch or not. If it is, then it’s an affront to a culture that values self-sufficiency. If it isn’t, then it requires a vastly different cultural value system to support it. The former believes that doing something — “making a living” at a job — is how you earn your daily bread. The latter believes you’re entitled to sustenance if you are something: i.e., a citizen or member of the nation, state, city, or other institution or community providing the UBI. The former is about activity; the latter is about identity. This Wired article captures the distinction:

The idea [of UBI] is not exactly new—Thomas Paine proposed a form of basic income back in 1797—but in this country, aside from Social Security and Medicare, most government payouts are based on individual need rather than simply citizenship.

UBI is about “simply citizenship.” It requires a cultural belief that everybody in the group shares its prosperity. Cultural identity alone ensures basic sustenance — it’s a right, and that right makes Poor Laws and workfare obsolete.

The notion of cultural identity invites comparison between UBI and the “casino money” some Native American tribes pay their members. How’s that working? We’ll look at that next time.


[1] Yes, Milton Friedman did in fact say it, although he wasn’t the only one. And in a surprising twist, he has been criticized for advocating his own version of UBI.

 


Fireflies and Algorithms

We’ve been looking at workfare — the legislated link between jobs and the social safety net. An article published last week — “Fireflies And Algorithms — The Coming Explosion Of Companies”[1] — brought the specter of workfare to the legal profession.

Reading it, my life flashed before my eyes, beginning with one particular memory: me, a newly-hired associate, resplendent in my three-piece gray pinstripe suit, joining the 4:30 queue at the Secretary of State’s office, clutching hot-off-the-word-processor Articles of Incorporation and a firm check for the filing fee, fretting whether I’d get my copy time-stamped by closing time. We always had to file today, for reasons I don’t remember.

Entity choice and creation spanned transactional practice: corporate, securities, mergers and acquisitions, franchising, tax, intellectual property, real property, commercial leasing… The practice enjoyed its glory days when LLCs were invented, and when a raft of new entity hybrids followed… well, that was an embarrassment of riches.

It was a big deal to set up a new entity and get it just right — make sure the correct ABC acquired the correct XYZ, draw the whole thing up in x’s and o’s, and finance it with somebody else’s money. To do all that required strategic alliances with brokers, planners, agents, promoters, accountants, investment bankers, financiers… Important people initiated the process, and there was a sense of substantiality and permanence about it, with overtones of mahogany and leather, brandy and cigars. These were entities that would create and engage whole communities of real people doing real jobs to deliver real goods and services to real consumers. Dissolving an entity was an equally big deal, requiring somber evaluation and critical reluctance, not to mention more time-stamped paperwork.

“Fireflies and Algorithms” sweeps it all away — whoosh! just like that! — and describes its replacement: an inhuman world of here-and-gone entities created and dissolved without the intent of all those important people or all that help from all those people in the law and allied businesses. (How many jobs are we talking about, I wonder — tens, maybe hundreds of thousands?) The new entities will do to choice-of-entity practice what automated trading did to the stock market, as described in this UCLA Law Review article:

Modern finance is becoming an industry in which the main players are no longer entirely human. Instead, the key players are now cyborgs: part machine, part human. Modern finance is transforming into what this Article calls cyborg finance.

In that “cyborg finance” world,

[The “enhanced velocity” of automated, algorithmic trading] has shortened the timeline of finance from days to hours, to minutes, to seconds, to nanoseconds. The accelerated velocity means not only faster trade executions but also faster investment turnovers. “At the end of World War II, the average holding period for a stock was four years. By 2000, it was eight months. By 2008, it was two months. And by 2011 it was twenty-two seconds.”

“Fireflies and Algorithms” says the business entity world is in for the same dynamic, and therefore we can expect:

[W]hat we’re calling ‘firefly companies’ — the blink-and-you-miss-it scenario brought about by ultra-short-life companies, combined with registers that remove records once a company has been dissolved, meaning that effectively they are invisible.

Firefly companies are formed by algorithms, not by human initiative. Each is created for a single transaction — one contract, one sale, one span of ownership. They’re peer-reviewed, digitally secure, self-executing, self-policing, and trans-jurisdictional — all for free or minimal cost. And all of that is memorialized not in Secretary of State or SEC filings but on a blockchain.
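
The lifecycle the article describes (formed by an algorithm, used for one transaction, dissolved, and recorded on a chain of hashes rather than in a government registry) can be sketched in a few lines. This is a toy illustration only: the `Ledger` class and `run_firefly` function are invented for the sketch, and a real deployment would use actual blockchain infrastructure rather than this simplified hash chain.

```python
import hashlib
import json

class Ledger:
    """A minimal hash-chained ledger, standing in for a real blockchain."""

    def __init__(self):
        self.blocks = []

    def record(self, event: dict) -> str:
        # Each block commits to the previous block's hash, so the
        # record of formation and dissolution can't be silently altered.
        prev_hash = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        payload = json.dumps({"event": event, "prev": prev_hash}, sort_keys=True)
        block_hash = hashlib.sha256(payload.encode()).hexdigest()
        self.blocks.append({"event": event, "prev": prev_hash, "hash": block_hash})
        return block_hash

def run_firefly(ledger: Ledger, transaction: dict) -> str:
    """Create an entity for a single transaction, execute it, dissolve it."""
    entity_id = hashlib.sha256(repr(transaction).encode()).hexdigest()[:12]
    ledger.record({"type": "formation", "entity": entity_id})
    ledger.record({"type": "execution", "entity": entity_id, "tx": transaction})
    ledger.record({"type": "dissolution", "entity": entity_id})
    return entity_id

ledger = Ledger()
run_firefly(ledger, {"sale": "one container of widgets", "price": 40_000})
print(len(ledger.blocks))  # 3 events: formation, execution, dissolution
```

The point of the sketch is the shape of the record: no articles of organization, no registered agent, no dissolution filing — just three timestampable events in an append-only chain, after which the entity is, as the article puts it, effectively invisible.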

“So what does all this mean?” the article asks:

How do we make sense of a world where companies — which are, remember, artificial legal constructs created out of thin air to have legal personality — can come into existence for brief periods of time, like fireflies in the night, perform or collaborate on an act, and then disappear? Where there are perhaps not 300 million companies, but 1 billion, or 10 billion?

Think about it. And then — if it hasn’t happened yet — watch your life flash before your eyes.

Or if not your life, at least your job. Consider, for example, a widely cited 2013 study that predicted 47% of U.S. jobs could be lost to automation. Even if that prediction is only half true, that’s still a lot of jobs. And consider a recent LawGeex contest, in which artificial intelligence absolutely smoked an elite group of transactional lawyers:

In a landmark study, 20 top US corporate lawyers with decades of experience in corporate law and contract review were pitted against an AI. Their task was to spot issues in five Non-Disclosure Agreements (NDAs), which are a contractual basis for most business deals.

The study, carried out with leading legal academics and experts, saw the LawGeex AI achieve an average 94% accuracy rate, higher than the lawyers who achieved an average rate of 85%. It took the lawyers an average of 92 minutes to complete the NDA issue spotting, compared to 26 seconds for the LawGeex AI. The longest time taken by a lawyer to complete the test was 156 minutes, and the shortest time was 51 minutes.
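
Taking the quoted averages at face value, the speed gap alone is easy to quantify:

```python
# LawGeex contest figures quoted above
lawyer_avg_minutes = 92   # average human time to review the five NDAs
ai_seconds = 26           # the AI's time for the same task

# The AI was roughly 200 times faster, and more accurate besides
speedup = lawyer_avg_minutes * 60 / ai_seconds
print(round(speedup))
```

And that factor of roughly 212 understates the gap, since the AI also beat the lawyers on accuracy, 94% to 85%.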

These developments significantly expand the pool of people potentially needing help through bad times. Currently, that means workfare. But how can you have workfare if technology is wiping out jobs?

More on that next time.


[1] The article was published by OpenCorporates, which according to its website is “the world’s largest open database of the corporate world and winner of the Open Data Business Award.”


Kevin Rhodes studies and writes about economics in an effort to understand the world his kids are growing up in, which is also the world he’s growing old in. You might enjoy his latest LinkedIn Pulse article “The Fame Monster: Rockstars And Rockstar Entrepreneurs.”