June 23, 2018

On the Third Hand, Continued

Will the machines take over the jobs?

In a recent TED talk, scholar, economist, author, and general wunderkind Daniel Susskind[1] says the question is distracting us from a much bigger and more important issue: how will we feed, clothe, and shelter ourselves if we no longer work for a living?

If we think of the economy as a pie, technological progress makes the pie bigger. Technological unemployment, if it does happen, in a strange way will be a symptom of that success — we will have solved one problem — how to make the pie bigger — but replaced it with another — how to make sure that everyone gets a slice. As other economists have noted, solving this problem won’t be easy.

Today, for most people, their job is their seat at the economic dinner table, and in a world with less work or even without work, it won’t be clear how they get their slice. This is the collective challenge that’s right in front of us — to figure out how this material prosperity generated by our economic system can be enjoyed by everyone in a world in which our traditional mechanism for slicing up the pie, the work that people do, withers away and perhaps disappears.

Guy Standing, another British economist, agrees with Susskind about this larger issue. The following excerpts are from his book The Corruption of Capitalism. He begins by quoting Nobel prizewinning economist Herbert Simon’s 1960 prediction:

Within the very near future — much less than twenty-five years — we shall have the technical capacity of substituting machines for any and all human functions in organisations.

And then he makes these comments:

You do not receive a Nobel Prize for Economics for being right all the time! Simon received his in 1978, when the number of people in jobs was at record levels. It is higher still today. Yet the internet-based technological revolution has reopened age-old visions of machine domination. Some are utopian, such as the post-capitalism of Paul Mason, imagining an era of free information and information sharing. Some are decidedly dystopian, where the robots — or rather their owners — are in control and mass joblessness is coupled with a “panopticon” state[2] subjecting the proles to intrusive surveillance, medicalized therapy and brain control. The pessimists paint a “world without work.” With every technological revolution there is a scare that machines will cause “technological unemployment”. This time the Jeremiahs seem a majority.

Whether or not they will do so in the future, the technologies have not yet produced mass unemployment . . . [but they] are contributing to inequality.

While technology is not necessarily destroying jobs, it is helping to destroy the old income distribution system.

The threat is technology-induced inequality, not technological unemployment.

Economic inequality and income distribution (sharing national wealth on a basis other than individual earned income) are two sides of the issue of economic fairness — always an inflammatory topic.

When I began my study of economics 15 months ago, I had never heard of economic inequality, and income distribution was something socialist countries did. Now I find both topics all over worldwide economic news and commentary and still mostly absent in U.S. public discourse (such as it is) outside of academic circles. On the whole, most policy-makers on both the left and right maintain their allegiance to the post-WWII Mont Pelerin neoliberal economic model, supported by a cultural and moral bias in favor of working for a living, and if the plutocrats take a bigger slice of pie while the welfare rug gets pulled on the working poor, well then so be it. If the new robotic and super-intelligent digital workers do in fact cause massive technological unemployment among the humans, we’ll all be reexamining these beliefs, big time.

I began this series months ago by asking whether money can buy happiness, citing the U.N.’s World Happiness Report. The 2018 Report was issued this week, and who should be on top but… Finland! And guess what — among other things, factors cited include low economic inequality and strong social support systems (i.e., a cultural value for non-job-based income distribution). National wealth was also a key factor, but it alone didn’t buy happiness: the USA, with far and away the strongest per capita GDP, had an overall ranking of 18th. For more, see this World Economic Forum article or this one from the South China Morning Post.

We’ll be looking further into all of this (and much more) in the weeks to come.


[1] If you’ve been following this column for a while and the name “Susskind” sounds familiar, it may be because a couple of years ago I blogged about the future and culture of the law, often citing the work of Richard Susskind, whose opus is pretty much the mother lode of crisp thinking about the law and technology. His equally brilliant son Daniel joined him in a book that extended that thinking to other professions, which that series also considered. (Those blogs were collected in Cyborg Lawyers.) Daniel received a doctorate in economics from Oxford University, was a Kennedy Scholar at Harvard, and is now a Fellow in Economics at Balliol College, Oxford. Previously, he worked as a policy adviser in the Prime Minister’s Strategy Unit and as a senior policy adviser in the Cabinet Office.

[2] The panopticon architectural structure was the brainchild of legal philosopher Jeremy Bentham. For an introduction to the origins of his idea and its application to the digital age, see this article in The Guardian.

 

Kevin Rhodes writes about individual growth and cultural change, drawing on insights from science, technology, disruptive innovation, entrepreneurship, neuroscience, psychology, and personal experience, including his own unique journey to wellness — dealing with primary progressive MS through an aggressive regime of exercise, diet, and mental conditioning.

Check out Kevin’s latest LinkedIn Pulse article: When We Move, We Can Achieve the Impossible.

On the Third Hand…

Will the machines take over the jobs? Ask a bunch of economists, and you’ll get opinions organized around competing ideologies, reflecting individual cognitive, emotional, and political biases. That’s been the experience of Martin Ford, entrepreneur, TED talker, and New York Times bestselling author of Rise of the Robots: Technology and the Threat of a Jobless Future:

In the field of economics the opinions all too often break cleanly along predefined political lines. Knowing the ideological predisposition of a particular economist is often a better predictor of what that individual is likely to say than anything contained in the data under examination. In other words, if you’re waiting for the economists to deliver some sort of definitive verdict on the impact that advancing technology is having on the economy, you may have a very long wait.[1]

In this Psychology Today article, Dr. Karl Albrecht[2] offers a neurological explanation for polarized thinking:

Recent research suggests that our brains may be pre-wired for dichotomized thinking. That’s a fancy name for thinking and perceiving in terms of two — and only two — opposing possibilities.

These research findings might help explain how and why the public discourse of our culture has become so polarized and rancorous, and how we might be able to replace it with a more intelligent conversation.

[O]ur brains can keep tabs on two tasks at a time, by sending each one to a different side of the brain. Apparently, we toggle back and forth, with one task being primary and the other on standby.

Add a third task, however, and one of the others has to drop off the to-do list.

Scans of brain activity during this task switching have led to the hypothesis that the brain actually likes handling things in pairs. Indeed, the brain itself is subdivided into two distinct half-brains, or hemispheres.

Curiously, part of our cranial craving for two-ness might be related to our own physiology: the human body is bilaterally symmetrical. Draw an imaginary center line down through the front of a person and you see a lot of parts (not all, of course), that come in pairs: two eyes, two ears, two nostrils, matching teeth on left and right sides, two shoulders, two arms, two hands, two nipples, two legs, two knees, and two feet. Inside you’ll find two of some things and one of others.

Some researchers are now extending this reasoning to suggest that the brain has a built-in tendency, when confronted by complex propositions, to selfishly reduce the set of choices to just two. Apparently it doesn’t like to work hard.

Considering how quickly we make our choices and set our opinions, it’s unlikely that all of the options will even be identified, never mind carefully considered.

“On the one hand this, on the other hand that,” we like to say. Lawyers perfect the art. Politics and the press also thrive on dichotomy:

Again, our common language encodes the effect of this anatomical self reference. “On the one hand, there is X. But on the other hand, we have Y.” Many people describe political views as being either “left” or “right.”

The popular press routinely constructs “news” stories around conflicts and differences between pairs of opposing people, factions, and ideologies. Bipolar conflict is the very essence of most of the news.

So, are robots and artificial intelligence going to trash the working world, or not?

Hmmm, there might be another option — several, actually. Dr. Albrecht urges us to find them:

Seek the “third hand” — and any other “hands” you can discover. Ask yourself, and others, “Are there other options to be considered?”

We’ll consider some third hand perspectives about the rise of the robots in the coming weeks.


[1] Martin Ford is also the consulting expert for Societe Generale’s new “Rise of the Robots” investment index, which focuses on companies that are “significant participants in the artificial intelligence and robotics revolution.”

[2] According to his website, Karl Albrecht is “an executive management consultant, futurist, lecturer, and author of more than 20 books on professional achievement, organizational performance, and business strategy. He is also a leading authority on cognitive styles and the development of advanced thinking skills. The Mensa Society honored him with its lifetime achievement award, for significant contributions by a member to the understanding of intelligence. Originally a physicist, and having served as a military intelligence officer and business executive, he now consults, lectures, and writes about whatever he thinks would be fun.”

 



Race Against the Machine, Continued

Rational choice theory is a cornerstone of conventional economic thinking. It states that:

Individuals always make prudent and logical decisions. These decisions provide people with the greatest benefit or satisfaction — given the choices available — and are also in their highest self-interest.

Presumably Stephen Hawking, Elon Musk, and Bill Gates had something like this in mind when they published an open letter in January 2015 urging that artificial intelligence R&D should focus “not only on making AI more capable, but also on maximizing the societal benefit.” To execute on this imperative, they urged an interdisciplinary collaboration among “economics, law and philosophy, computer security, formal methods and, of course, various branches of AI itself.” (Since its release, the letter has garnered another 8,000 signatures — you can sign it, too, if you like.)

The letter’s steady, rational four paragraphs praise how technology has benefited the human race, and anticipate more of the same in the future, but its reception and the authors’ comments in other contexts are not so measured. As a result, the letter has become a cheering section for those who think humanity is losing its race against the robots.

Consider, for example, the following from an Observer article:

“Success in creating AI would be the biggest event in human history,” wrote Stephen Hawking in an op-ed, which appeared in The Independent in 2014. “Unfortunately, it might also be the last, unless we learn how to avoid the risks.” Professor Hawking added in a 2014 interview with BBC, “humans, limited by slow biological evolution, couldn’t compete and would be superseded by A.I.”

Elon Musk called the prospect of artificial intelligence “our greatest existential threat” in a 2014 interview with MIT students at the AeroAstro Centennial Symposium. “I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish.” Mr. Musk cites his decision to invest in the Artificial Intelligence firm, DeepMind, as a means to “just keep an eye on what’s going on with artificial intelligence. I think there is potentially a dangerous outcome there.”

Microsoft co-founder Bill Gates has also expressed concerns about Artificial Intelligence. During a Q&A session on Reddit in January 2015, Mr. Gates said, “I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”

Or consider this Elon Musk comment in Vanity Fair:

In a startling public reproach to his friends and fellow techies, Musk warned that they could be creating the means of their own destruction. He told Bloomberg’s Ashlee Vance, the author of the biography Elon Musk, that he was afraid that his friend Larry Page, a co-founder of Google and now the C.E.O. of its parent company, Alphabet, could have perfectly good intentions but still “produce something evil by accident”—including, possibly, “a fleet of artificial intelligence-enhanced robots capable of destroying mankind.”

In other words, Hawking, Gates, and Musk aren’t just worried about machines taking over jobs, they’re worried about the end of the world — or at least the human race. This Washington Post op-ed piece thinks that might not be such a bad thing:

When a technology is so obviously dangerous — like nuclear energy or synthetic biology — humanity has an imperative to consider dystopian predictions of the future. But it also has an imperative to push on, to reach its full potential. While it’s scary, sure, that humans may no longer be the smartest life forms in the room a generation from now, should we really be that concerned? Seems like we’ve already done a pretty good job of finishing off the planet anyway. If anything, we should be welcoming our AI masters to arrive sooner rather than later.

Or consider this open letter written back to Hawking, Gates, and Musk, which basically says forget the fear mongering — it’s going to happen no matter what you think:

Progress is inevitable, even if it is reached by accident and happenstance. Even if we do not intend to, sentient AI is something that will inevitably be created, be it through the evolution of a learning AI, or as a byproduct of some research. No treaty or coalition can stop it, no matter what you think. I just pray you do not go from educated men to fear mongers when it happens.

As usual, we’re at an ideological impasse, with both sides responding not so much according to the pros and cons but according to their predispositions. This article suggests a way through the impasse:

At the beginning of this article, we asked if the pessimists or optimists would be right.

There is a third option, though: one where we move from building jobs around processes and tasks, a solution that is optimal for neither human nor machine, to building jobs around problems.

The article is long, well-researched, and… well, very rational. Too bad — conventional thinking aside — other research shows we rarely act from a rational outlook when it comes to jobs and the economy… or anything else for that matter.

More on that next time.

 

 



Race Against the Machine

For the past several years, two MIT big thinkers[1] have been the go-to authorities in the scramble to explain how robotics, artificial intelligence, and big data are revolutionizing the economy and the working world. Their two books were published four and six years ago — so yesterday in the world of technology — but they were remarkably prescient when written, and have not diminished in relevance. They are:

Race Against the Machine: How the Digital Revolution is Accelerating Innovation, Driving Productivity, and Irreversibly Transforming Employment and the Economy (2012)

The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies (2014)

Click here for a chapter-by-chapter digest of The Second Machine Age, written by an all-star cast of economic commentators. Among other things, they acknowledge the authors’ view that neoliberal capitalism has not fared well in its dealings with the technological juggernaut, but in the absence of a better alternative, we might as well continue to ride the horse in the direction it’s going.

While admitting that History (not human choice) is “littered with unintended . . . side effects of well-intentioned social and economic policies,” the authors cite Tim O’Reilly[2] in pushing forward with technology’s momentum rather than clinging to the past or present. They suggest that we should let the technologies do their work and just find ways to deal with it. They are “skeptical of efforts to come up with fundamental alternatives to capitalism.”

David Rotman, editor of the MIT Technology Review, cites The Second Machine Age extensively in an excellent, longer article, “How Technology is Destroying Jobs.” Although the article is packed with contrary analysis and opinion, the following excerpts emphasize what many might consider the shadowy side of the street (compared to the sunny side we looked at in the past couple of posts). I added the headings below to emphasize that many of the general economic themes we’ve been talking about also apply to the specific dynamics of the job market.

It used to be that economic growth — including wealth creation — also created more jobs. It doesn’t work that way any more. Perhaps the most damning piece of evidence, according to Brynjolfsson, is a chart that only an economist could love. In economics, productivity—the amount of economic value created for a given unit of input, such as an hour of labor—is a crucial indicator of growth and wealth creation. It is a measure of progress. On the chart Brynjolfsson likes to show, separate lines represent productivity and total employment in the United States.

For years after World War II, the two lines closely tracked each other, with increases in jobs corresponding to increases in productivity. The pattern is clear: as businesses generated more value from their workers, the country as a whole became richer, which fueled more economic activity and created even more jobs. Then, beginning in 2000, the lines diverge; productivity continues to rise robustly, but employment suddenly wilts. By 2011, a significant gap appears between the two lines, showing economic growth with no parallel increase in job creation. Brynjolfsson and McAfee call it the “great decoupling.” And Brynjolfsson says he is confident that technology is behind both the healthy growth in productivity and the weak growth in jobs.

A rising economic tide no longer floats all boats. The result is a skewed allocation of the rewards of growth away from jobs — i.e., economic inequality. The contention that automation and digital technologies are partly responsible for today’s lack of jobs has obviously touched a raw nerve for many worried about their own employment. But this is only one consequence of what ­Brynjolfsson and McAfee see as a broader trend. The rapid acceleration of technological progress, they say, has greatly widened the gap between economic winners and losers—the income inequalities that many economists have worried about for decades.

“[S]teadily rising productivity raised all boats for much of the 20th century,” [Brynjolfsson] says. “Many people, especially economists, jumped to the conclusion that was just the way the world worked. I used to say that if we took care of productivity, everything else would take care of itself; it was the single most important economic statistic. But that’s no longer true.” He adds, “It’s one of the dirty secrets of economics: technology progress does grow the economy and create wealth, but there is no economic law that says everyone will benefit.” In other words, in the race against the machine, some are likely to win while many others lose.

That robots, automation, and software can replace people might seem obvious to anyone who’s worked in automotive manufacturing or as a travel agent. But Brynjolfsson and McAfee’s claim is more troubling and controversial. They believe that rapid technological change has been destroying jobs faster than it is creating them, contributing to the stagnation of median income and the growth of inequality in the United States.

Meanwhile, technology is taking over the jobs that are left — blue collar, white collar, and even the professions. [I]mpressive advances in computer technology—from improved industrial robotics to automated translation services—are largely behind the sluggish employment growth of the last 10 to 15 years. Even more ominous for workers, the MIT academics foresee dismal prospects for many types of jobs as these powerful new technologies are increasingly adopted not only in manufacturing, clerical, and retail work but in professions such as law, financial services, education, and medicine.

Technologies like the Web, artificial intelligence, big data, and improved analytics—all made possible by the ever increasing availability of cheap computing power and storage capacity—are automating many routine tasks. Countless traditional white-collar jobs, such as many in the post office and in customer service, have disappeared.

New technologies are “encroaching into human skills in a way that is completely unprecedented,” McAfee says, and many middle-class jobs are right in the bull’s-eye; even relatively high-skill work in education, medicine, and law is affected.

We’ll spend more time on the shadowy side of the street next time.


[1] Erik Brynjolfsson is director of the MIT Center for Digital Business, and Andrew McAfee is a principal research scientist at MIT who studies how digital technologies are changing business, the economy, and society.

[2] According to his official bio on his website, Tim O’Reilly “is the founder and CEO of O’Reilly Media, Inc. His original business plan was simply ‘interesting work for interesting people,’ and that’s worked out pretty well. O’Reilly Media delivers online learning, publishes books, runs conferences, urges companies to create more value than they capture, and tries to change the world by spreading and amplifying the knowledge of innovators.”

 


Check out Kevin’s latest LinkedIn Pulse article: Meeting Goals the Olympic Way: Train + Transform.

Bright Sunshiny Day, Continued

Last time, we heard David Lee[1] express his conviction that, far from destroying human jobs, robotic technology will unleash human creativity on a wonderful new world of work. His perspective is so remarkably and refreshingly upbeat that I thought we’d let him continue where he left off last week:

I think it’s important to recognize that we brought this problem on ourselves. And it’s not just because, you know, we are the one building the robots. But even though most jobs left the factory decades ago, we still hold on to this factory mindset of standardization and de-skilling. We still define jobs around procedural tasks and then pay people for the number of hours that they perform these tasks. We’ve created narrow job definitions like cashier, loan processor or taxi driver and then asked people to form entire careers around these singular tasks.

These choices have left us with actually two dangerous side effects. The first is that these narrowly defined jobs will be the first to be displaced by robots, because single-task robots are just the easiest kinds to build. But second, we have accidentally made it so that millions of workers around the world have unbelievably boring working lives.

Let’s take the example of a call center agent. Over the last few decades, we brag about lower operating costs because we’ve taken most of the need for brainpower out of the person and put it into the system. For most of their day, they click on screens, they read scripts. They act more like machines than humans. And unfortunately, over the next few years, as our technology gets more advanced, they, along with people like clerks and bookkeepers, will see the vast majority of their work disappear.

To counteract this, we have to start creating new jobs that are less centered on the tasks that a person does and more focused on the skills that a person brings to work. For example, robots are great at repetitive and constrained work, but human beings have an amazing ability to bring together capability with creativity when faced with problems that we’ve never seen before.

We need to realistically think about the tasks that will be disappearing over the next few years and start planning for more meaningful, more valuable work that should replace it. We need to create environments where both human beings and robots thrive. I say, let’s give more work to the robots, and let’s start with the work that we absolutely hate doing. Here, robot, process this painfully idiotic report.

And for the human beings, we should follow the advice from Harry Davis at the University of Chicago. He says we have to make it so that people don’t leave too much of themselves in the trunk of their car. I mean, human beings are amazing on weekends. Think about the people that you know and what they do on Saturdays. They’re artists, carpenters, chefs and athletes. But on Monday, they’re back to being Junior HR Specialist and Systems Analyst 3.

You know, these narrow job titles not only sound boring, but they’re actually a subtle encouragement for people to make narrow and boring job contributions. But I’ve seen firsthand that when you invite people to be more, they can amaze us with how much more they can be.

[The key is] to turn dreams into a reality. And that dreaming is an important part of what separates us from machines. For now, our machines do not get frustrated, they do not get annoyed, and they certainly don’t imagine.

But we, as human beings — we feel pain, we get frustrated. And it’s when we’re most annoyed and most curious that we’re motivated to dig into a problem and create change. Our imaginations are the birthplace of new products, new services, and even new industries.

If we really want to robot-proof our jobs, we, as leaders, need to get out of the mindset of telling people what to do and instead start asking them what problems they’re inspired to solve and what talents they want to bring to work. Because when you can bring your Saturday self to work on Wednesdays, you’ll look forward to Mondays more, and those feelings that we have about Mondays are part of what makes us human.

We’ll give the other side equal time next week.


[1] David Lee is Vice President of Innovation and the Strategic Enterprise Fund for UPS.

 


Check out Kevin’s latest LinkedIn Pulse article: Leadership and Life Lessons From an Elite Athlete and a Dying Man.

Gonna Be a Bright, Bright, Sunshiny Day

We met Sebastian Thrun last time. He’s a bright guy with a sunshiny disposition: he’s not worried about robots and artificial intelligence taking over all the good jobs, even his own. Instead, he’s perfectly okay if technology eliminates most of what he does every day because he believes human ingenuity will fill the vacuum with something better. This is from his conversation with TED curator Chris Anderson:

If I look at my own job as a CEO, I would say 90 percent of my work is repetitive, I don’t enjoy it, I spend about four hours per day on stupid, repetitive email. And I’m burning to have something that helps me get rid of this. Why? Because I believe all of us are insanely creative . . . What this will empower is to turn this creativity into action.

We’ve unleashed this amazing creativity by de-slaving us from farming and later, of course, from factory work and have invented so many things. It’s going to be even better, in my opinion. And there’s going to be great side effects. One of the side effects will be that things like food and medical supply and education and shelter and transportation will all become much more affordable to all of us, not just the rich people.

Anderson sums it up this way:

So the jobs that are getting lost, in a way, even though it’s going to be painful, humans are capable of more than those jobs. This is the dream. The dream is that humans can rise to just a new level of empowerment and discovery. That’s the dream.

Another bright guy with a sunshiny disposition is David Lee, Vice President of Innovation and the Strategic Enterprise Fund for UPS. He, too, shares the dream that technology will turn human creativity loose on a whole new kind of working world. Here’s his TED talk:

Like Sebastian Thrun, he’s no Pollyanna: he understands that yes, technology threatens jobs:

There’s a lot of valid concern these days that our technology is getting so smart that we’ve put ourselves on the path to a jobless future. And I think the example of a self-driving car is actually the easiest one to see. So these are going to be fantastic for all kinds of different reasons. But did you know that “driver” is actually the most common job in 29 of the 50 US states? What’s going to happen to these jobs when we’re no longer driving our cars or cooking our food or even diagnosing our own diseases?

Well, a recent study from Forrester Research goes so far to predict that 25 million jobs might disappear over the next 10 years. To put that in perspective, that’s three times as many jobs lost in the aftermath of the financial crisis. And it’s not just blue-collar jobs that are at risk. On Wall Street and across Silicon Valley, we are seeing tremendous gains in the quality of analysis and decision-making because of machine learning. So even the smartest, highest-paid people will be affected by this change.

What’s clear is that no matter what your job is, at least some, if not all of your work, is going to be done by a robot or software in the next few years.

But that’s not the end of the story. Like Thrun, he believes that the rise of the robots will clear the way for unprecedented levels of human creativity — provided we move fast:

The good news is that we have faced down and recovered two mass extinctions of jobs before. From 1870 to 1970, the percent of American workers based on farms fell by 90 percent, and then again from 1950 to 2010, the percent of Americans working in factories fell by 75 percent. The challenge we face this time, however, is one of time. We had a hundred years to move from farms to factories, and then 60 years to fully build out a service economy.

The rate of change today suggests that we may only have 10 or 15 years to adjust, and if we don’t react fast enough, that means by the time today’s elementary-school students are college-aged, we could be living in a world that’s robotic, largely unemployed and stuck in kind of un-great depression.

But I don’t think it has to be this way. You see, I work in innovation, and part of my job is to shape how large companies apply new technologies. Certainly some of these technologies are even specifically designed to replace human workers. But I believe that if we start taking steps right now to change the nature of work, we can not only create environments where people love coming to work but also generate the innovation that we need to replace the millions of jobs that will be lost to technology.

I believe that the key to preventing our jobless future is to rediscover what makes us human, and to create a new generation of human-centered jobs that allow us to unlock the hidden talents and passions that we carry with us every day.

More from David Lee next time.

If all this bright sunshiny perspective made you think of that old tune, you might treat yourself to a listen. It’s short, you’ve got time.

And for a look at a current legal challenge to the “gig economy” across the pond, check out this Economist article from earlier this week.

 

Kevin Rhodes writes about individual growth and cultural change, drawing on insights from science, technology, disruptive innovation, entrepreneurship, neuroscience, psychology, and personal experience, including his own unique journey to wellness — dealing with primary progressive MS through an aggressive regime of exercise, diet, and mental conditioning.

Check out Kevin’s latest LinkedIn Pulse article: Leadership and Life Lessons From an Elite Athlete and a Dying Man.

Learning to Learn

“I didn’t know robots had advanced so far,” a reader remarked after last week’s post about how computers are displacing knowledge workers. What changed to make that happen? The machines learned how to learn.

This is from Artificial Intelligence Goes Bilingual—Without A Dictionary, Science Magazine, Nov. 28, 2017.

“Imagine that you give one person lots of Chinese books and lots of Arabic books—none of them overlapping—and the person has to learn to translate Chinese to Arabic. That seems impossible, right?” says . . . Mikel Artetxe, a computer scientist at the University of the Basque Country (UPV) in San Sebastián, Spain. “But we show that a computer can do that.”

Most machine learning—in which neural networks and other computer algorithms learn from experience—is “supervised.” A computer makes a guess, receives the right answer, and adjusts its process accordingly. That works well when teaching a computer to translate between, say, English and French, because many documents exist in both languages. It doesn’t work so well for rare languages, or for popular ones without many parallel texts.

[This learning technique is called] unsupervised machine learning. [A computer using this technique] constructs bilingual dictionaries without the aid of a human teacher telling them when their guesses are right.
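To make that “guess, receive the right answer, adjust” loop concrete, here’s a minimal sketch in Python. The one-weight model and the made-up target rule y = 3x are purely illustrative (real translation systems use huge neural networks), but the training loop has the same shape:

```python
# Minimal sketch of supervised learning: guess, compare to the right
# answer, adjust. The single weight w and the toy target rule y = 3x
# are invented for illustration.

def train(examples, lr=0.01, epochs=200):
    w = 0.0  # the model: a single adjustable weight
    for _ in range(epochs):
        for x, y in examples:
            guess = w * x          # the computer makes a guess
            error = guess - y      # it receives the right answer
            w -= lr * error * x    # and adjusts its process accordingly
    return w

examples = [(x, 3 * x) for x in range(1, 6)]  # labeled training data
w = train(examples)
print(round(w, 3))  # the learned weight settles near 3
```

Unsupervised learning, by contrast, gets no right answers at all; it has to find structure (like the alignment between two languages’ vocabularies) on its own.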

Hmm. . . . I could have used that last year, when my wife and I spent three months visiting our daughter in South Korea. The Korean language is ridiculously complex; I never got much past “good morning.”

Go matches were a standard offering on the gym TVs where I worked out. (Imagine two guys in black suits staring intently at a game board — not exactly a riveting workout visual.) Go is also ridiculously complex, and mysterious, too: the masters seem to make moves more intuitively than analytically. But the days of human Go supremacy are over. Google wizard and overall overachiever Sebastian Thrun[1] explains why in this conversation with TED Curator Chris Anderson:

Artificial intelligence and machine learning is about 60 years old and has not had a great day in its past until recently. And the reason is that today, we have reached a scale of computing and datasets that was necessary to make machines smart. The new thing now is that computers can find their own rules. So instead of an expert deciphering, step by step, a rule for every contingency, what you do now is you give the computer examples and have it infer its own rules.

A really good example is AlphaGo. Normally, in game playing, you would really write down all the rules, but in AlphaGo’s case, the system looked over a million games and was able to infer its own rules and then beat the world’s reigning Go champion. That is exciting, because it relieves the software engineer of the need of being super smart, and pushes the burden towards the data.

20 years ago the computers were as big as a cockroach brain. Now they are powerful enough to really emulate specialized human thinking. And then the computers take advantage of the fact that they can look at much more data than people can. AlphaGo looked at more than a million games. No human expert can ever study a million games. So as a result, the computer can find rules that even people can’t find.

Thrun made those comments in April 2017. AlphaGo’s championship reign was short-lived: it was unseated a mere six months later by a new cyber challenger that taught itself without reviewing all that data. This is from “AlphaGo Zero Shows Machines Can Become Superhuman Without Any Help,” MIT Technology Review, October 18, 2017.

AlphaGo wasn’t the best Go player on the planet for very long. A new version of the masterful AI program has emerged, and it’s a monster. In a head-to-head matchup, AlphaGo Zero defeated the original program by 100 games to none.

Whereas the original AlphaGo learned by ingesting data from hundreds of thousands of games played by human experts, AlphaGo Zero started with nothing but a blank board and the rules of the game. It learned simply by playing millions of games against itself, using what it learned in each game to improve.

The new program represents a step forward in the quest to build machines that are truly intelligent. That’s because machines will need to figure out solutions to difficult problems even when there isn’t a large amount of training data to learn from.

“The most striking thing is we don’t need any human data anymore,” says Demis Hassabis, CEO and cofounder of DeepMind [the creators of AlphaGo Zero].

“By not using human data or human expertise, we’ve actually removed the constraints of human knowledge,” says David Silver, the lead researcher at DeepMind and a professor at University College London. “It’s able to create knowledge for itself from first principles.”
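That “no human data, just the rules” recipe can be sketched at toy scale. In the Python miniature below, a program learns a tiny Nim variant (take 1 or 2 stones; whoever takes the last stone wins) purely by playing against itself and updating a table of position values after each game. Everything here — the game, the learning rate, the exploration rate — is a made-up miniature, not DeepMind’s actual algorithm:

```python
import random

# Toy self-play learner: given nothing but the rules of a tiny Nim
# variant, it plays itself and updates a value table after every game.
def self_play_train(pile=10, games=5000, lr=0.1, explore=0.2, seed=0):
    rng = random.Random(seed)
    # value[n]: estimated chance the player to move wins with n stones left
    value = {n: 0.5 for n in range(pile + 1)}
    value[0] = 0.0  # no stones left: the player to move has already lost
    for _ in range(games):
        n, history = pile, []
        while n > 0:
            moves = [m for m in (1, 2) if m <= n]
            if rng.random() < explore:
                m = rng.choice(moves)  # occasional random exploration
            else:
                # greedy: leave the opponent the worst-looking position
                m = min(moves, key=lambda mv: value[n - mv])
            history.append(n)
            n -= m
        # the player who just moved took the last stone and won; walk back
        # through the visited positions, flipping perspective at each ply
        result = 1.0
        for pos in reversed(history):
            value[pos] += lr * (result - value[pos])
            result = 1.0 - result
    return value

values = self_play_train()
# in this game, positions divisible by 3 are lost for the player to move;
# the learned values discover that pattern from the rules alone
```

DeepMind’s real system replaces the lookup table with a deep neural network and adds tree search, but the core loop is the same: play yourself, score the game, update, repeat.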

Did you catch that? “We’ve removed the constraints of human knowledge.” Wow. No wonder computers are elbowing all those knowledge workers out of the way.

What’s left for humans to do? We’ll hear from Sebastian Thrun and others on that topic next time.


[1] Sebastian Thrun’s TED bio describes him as “an educator, entrepreneur and troublemaker. After a long life as a professor at Stanford University, Thrun resigned from tenure to join Google. At Google, he founded Google X, home to self-driving cars and many other moonshot technologies. Thrun also founded Udacity, an online university with worldwide reach, and Kitty Hawk, a ‘flying car’ company. He has authored 11 books, 400 papers, holds 3 doctorates and has won numerous awards.”

 


Too True to be Too Funny

Did you see the Sprint Super Bowl ad (click the image), where a scientist gets laughed out of his lab by his impertinent artificially intelligent robots? It was funny, but in that groaning kind of way when humor is just a bit too true. Let’s break down the punchline: “My coworkers,” says the scientist, talking about the robots, “laughed at me.” He responds to the robotic peer pressure with the human feeling of shame, and changes his cell phone provider to conform.

Wow. Get used to it. It could happen to you. True, the robots’ sense of humor was pretty immature. He chastises them, “Guys, it wasn’t that funny.” But they’ll learn — that’s what artificial intelligence does — it learns, really fast. They’ll be doing sarcasm and irony soon — that is, when they’re not busy passing a university entrance exam, managing an investment portfolio, developing business strategy, practicing medicine, practicing law, writing up your news feeds… and generally doing all those other things everybody knew all along that robots surely would never be able to do.

Miami lawyer Luis Salazar used to think that way, until he met Ross. This is from a NY Times article from last March:

Skeptical at first, he tested Ross against himself. After 10 hours of searching online legal databases, he found a case whose facts nearly mirrored the one he was working on. Ross found that case almost instantly.

Ross is not a human. “He” never went to law school, never took a legal methods class, never learned to do research, never had a professor or partner critique his legal writing. “He” is machine intelligence. Not only did he find the clincher case in a fraction of the time Salazar did, he also did a nice job of writing up a legal memo:

Mr. Salazar has been particularly impressed by a legal memo service that Ross is developing. Type in a legal question and Ross replies a day later with a few paragraphs summarizing the answer and a two-page explanatory memo.

The results, he said, are indistinguishable from a memo written by a lawyer. ‘That blew me away,’ Mr. Salazar said. ‘It’s kind of scary. If it gets better, a lot of people could lose their jobs.’

Yes, scary — especially when you consider the cost of legal research: click here and enter “legal research” in the search field. Among other things, you’ll get an article about Ross and another about the cost of legal research. If Ross is that good, he could save a lot of firms a lot of money… and eliminate a lot of jobs along the way. (The Ross Intelligence website is worth a visit — there’s attorney Salazar on video, and an impressive banner of early adopting law firms, with a lot of names you’ll recognize.)

And speaking of things that were never supposed to happen, the NY Times article cites a McKinsey report estimating that, using technology then available, 23 percent of a lawyer’s work could be fully automated. Given the explosion of AI in the past year, we are already way beyond that percentage.

How are you going to compete with that? You’re not. Consider this story from a source we’ve visited several times already (the book Plutocrats by Chrystia Freeland):

In 2010, DLA Piper faced a court-imposed deadline of searching through 570,000 documents in one week. The firm . . . hired Clearwell, a Silicon Valley e-discovery company. Clearwell software did the job in two days. DLA Piper lawyers spent one day going through the results. After three days of work, the firm responded to the judge’s order with 3,070 documents. A decade ago, DLA Piper would have employed thirty associates full-time for six months to do that work.

Note the date: that happened eight years ago. Today, the whole thing would happen a lot faster, with much less human involvement.

I tried to get a robot to write this blog post, but didn’t succeed. Articoolo.com looked promising: “Stop wasting your time,” its website trumpets, “let us do the writing for you!” The company is obviously fully in tune with the freelance job market we’ve been talking about: “You no longer have to wait for someone on the other side of the world to write, proofread and send the content to you.” I tried a few topic entries, but the best it could do was admit that the article it had written wasn’t up to its standards, so sorry… But then, it’s only available in beta. Give it time to learn.

I also sent an inquiry to the people at Ross Intelligence, asking if Ross could write an article about itself. I never heard back — he’s probably too busy signing up more firms to hire him.

More on robots and artificial intelligence next time.

 


The Super Bowl of Economics: Capitalism vs. Technology

Technology is the odds-on favorite.

In the multi-author collection Does Capitalism Have a Future?, Randall Collins, Emeritus Professor of Sociology at the University of Pennsylvania, observes that capitalism is subject to a “long-term structural weakness,” namely “the technological displacement of labor by machines.”

Technology eliminating jobs is nothing new. From the end of the 18th Century through the end of the 20th, the Industrial Revolution swept a huge number of manual labor jobs into the dustbin of history. It didn’t happen instantly: at the turn of the 20th Century, 40% of the U.S. workforce still worked on the farm. A half century later, that figure was 16%. I grew up in rural Minnesota, where farm kids did chores before school, town kids baled hay for summer jobs, and everybody watched the weather and asked how the crops were doing. We didn’t know we were a vanishing species. In fact, “learning a trade” so you could “work with your hands” was still a moral and societal virtue. I chose carpentry. It was my first full-time job after I graduated with a liberal arts degree.

Another half century later, at the start of the 21st Century, less than 2% of the U.S. workforce was still on the farm. In my hometown, our GI fathers beat their swords into plowshares, then my generation moved to the city and melted the plows down into silicon. And now the technological revolution is doing the same thing to mental labor that the Industrial Revolution did to manual labor — only it’s doing it way faster, even though most of us aren’t aware that “knowledge workers” are a vanishing species. The following is from The Stupidity Paradox: The Power and Pitfalls of Functional Stupidity at Work:

1962… was the year the management thinker Peter Drucker was asked by The New York Times to write about what the economy would look like in 1980. One big change he foresaw was the rise of the new type of employee he called ‘knowledge workers.’

A few years ago, Stephen Sweet and Peter Meiksins decided they wanted to track the changing nature of work in the new knowledge intensive economy. These two US labour sociologists assembled large-scale statistical databases as well as research reports from hundreds of workplaces. What they found surprised them. A new economy full of knowledge workers was nowhere to be found.

The researchers summarized their unexpected finding this way: for every well-paid programmer working at a firm like Microsoft, there are three people flipping burgers at a restaurant like McDonald’s. It seems that in the ‘knowledge’ economy, low-level service jobs still dominate.

A report by the US Bureau of Labor Statistics painted an even bleaker picture. One third of the US workforce was made up of three occupational groups: office and administrative support, sales and related occupations, and food preparation and related work.

And now — guess what? — those non-knowledge workers flipping your burgers might not be human. This is from “Robots Will Transform Fast Food” in this month’s The Atlantic:

According to Michael Chui, a partner at the McKinsey Global Institute, many tasks in the food-service and accommodation industry are exactly the kind that are easily automated. Chui’s latest research estimates that 54 percent of the tasks workers perform in American restaurants and hotels could be automated using currently available technologies — making it the fourth-most-automatable sector in the U.S.

Robots have arrived in American restaurants and hotels for the same reasons they first arrived on factory floors. The cost of machines, even sophisticated ones, has fallen significantly in recent years, dropping 40 percent since 2005, according to the Boston Consulting Group.

‘We think we’ve hit the point where labor-wage rates are now making automation of those tasks make a lot more sense,’ Bob Wright, the chief operations officer of Wendy’s, said in a conference call with investors last February, referring to jobs that feature ‘repetitive production tasks.’

The international chain CaliBurger, for example, will soon install Flippy, a robot that can flip 150 burgers an hour.

That’s Flippy’s picture at the top of this post. Burger flippers are going the way of farmers — the Flippies of the world are busy eliminating one of the three main occupational groups in the U.S. And again, a lot of us aren’t aware this is going on.

Burger flipping may be particularly amenable to automation, but what about knowledge-based jobs that surely a robot couldn’t do — like, let’s say, writing this column, or managing a corporation, or even… practicing law?

More to come.

 


Brave New (Jobs) World

“The American work environment is rapidly changing.
For better or worse, the days of the conventional full-time job may be numbered.”

The above quote is from a December 5, 2016 Quartz article that reported the findings of economists Lawrence Katz (Harvard) and Alan Krueger (Princeton, former chairman of the White House Council of Economic Advisers) that 94% of all US jobs created between 2005 and 2015 were temporary, “alternative work” — with the biggest increases coming from freelancers, independent contractors, and contract employees (who work at a business but are paid by an outside firm).

These findings are consistent with what we looked at last time: how neoliberal economics has eroded institutional support for the conventional notion of working for a living, resulting in a more individuated approach to the job market. Aeon Magazine recently offered an essay on this topic: The Quitting Economy: When employees are treated as short-term assets, they reinvent themselves as marketable goods, always ready to quit. Here are some samples:

In the early 1990s, career advice in the United States changed. A new social philosophy, neoliberalism, was transforming society, including the nature of employment, and career counsellors and business writers had to respond. (Emphasis added.)

US economic intellectuals raced to implement the ultra-individualist ideals of Friedrich Hayek, Milton Friedman and other members of the Mont Pelerin Society…In doing so… they developed a metaphor — that every person should think of herself as a business, the CEO of Me, Inc. The metaphor took off, and has had profound implications for how workplaces are run, how people understand their jobs, and how they plan careers, which increasingly revolve around quitting.

The CEO of Me, Inc. is a job-quitter for a good reason — the business world has come to agree with Hayek that market value is the best measure of value. As a consequence, a career means a string of jobs at different companies. So workers respond in kind, thinking about how to shape their career in a world where you can expect so little from employers. In a society where market rules rule, the only way for an employee to know her value is to look for another job and, if she finds one, usually to quit.

I.e., tooting your own résumé horn is not so much about who you worked for as what you did while you were there. And once you’re finished, don’t get comfortable, get moving. (This recent Time/Money article offers help for creating your new mobility résumé.)

A couple years ago I blogged here about a new form of law firm entirely staffed by contract attorneys. A quick Google search revealed that the trend toward lawyer “alternative” staffing has been gaining momentum. For example:

This May 26, 2017 Above the Law article reported a robust market for more conventional associate openings and lateral partner hires, but included this caveat:

The one trend that we see continue to stick is the importance of the personal brand over the law firm brand, and that means that every attorney should really focus on how they differentiate themselves from the pack, regardless of where they hang their shingle.

Upwork offers “Freelance Lawyer Jobs.” “Looking to hire faster and more affordably?” their website asks. “Tackle your next Contract Law project with Upwork – the top freelancing website.”

Flexwork offers “Flexible & Telecommuting Attorney Jobs.”

Indeed posts “Remote Contract Attorney Jobs.”

And on it goes. Whether you’re hiring or looking to be hired, you do well to be schooled in the Brave New World of “alternative” jobs. For a further introduction, check out these articles on the “Gig Economy” from Investopedia and McKinsey. For more depth, see:

The Shift: The Future of Work is Already Here (2011), by Lynda Gratton, Professor of Management Practice at London Business School, where she directs the program “Human Resource Strategy in Transforming Companies.”

Down and Out in the New Economy: How People Find (or Don’t Find) Work Today (2017), by Indiana University anthropology professor Ilana Gershon — the author of the Aeon article quoted above.

Next time, we’ll begin looking at three major non-human players in the new job marketplace: artificial intelligence, big data, and robotics. They’re big, they’re bad, and they’re already elbowing their way into jobs long considered “safe.”

 

Kevin Rhodes left a successful long-term law practice to scratch a creative itch and lived to tell about it… barely. Since then, he has been on a mission to bring professional excellence and personal wellbeing to the people who learn, teach, and practice the law. He has also blogged extensively and written several books about his unique journey to wellness, including how he deals with primary progressive MS through an aggressive regime of exercise, diet, and mental conditioning.

Capitalism on the Fritz, Continued

Post-WWII neoliberal capitalism became a societal institution. Its most rudimentary unit was the concept of working for a living, which meant having a job. Jobs organized life, defined social identities, roles, and virtues, conferred status, and supported assumptions about how life worked. Those assumptions held as long as the post-war recovery roared ahead, reinforced by the common human error of assuming happy days weren’t just here again but would continue on indefinitely — especially since we could trace the free market’s roots back a couple hundred years.

But the recovery didn’t keep roaring on. Those days are over — as evidenced by the consensus list of capitalistic fritzes from Rethinking Capitalism we looked at last time. Neoliberal economics met its match when it ran up against modern megatrends such as globalization and disruptive technologies, and when it did, it relinquished its function as a social institution we can rely on. Hence the list of fritzes.

Economic sociologist Wolfgang Streeck[1] reviews essentially the same list in his book How Will Capitalism End? (2017), and concludes: “I suggest that all [of the developments on the list] may be aggregated into a diagnosis of multi-morbidity in which different disorders coexist and, more often than not, reinforce each other.” I.e., neoliberalism’s woes are greater than the sum of its microeconomic parts. Streeck characterizes the result as the “advanced decline of the capacity of capitalism as an economic regime to underwrite a stable society.”

Where does that leave us? Ryan Avent — senior editor and economic columnist for The Economist — says the following in his book The Wealth of Humans: Work, Power, and Status in the Twenty-First Century (2016):

The remarkable technological progress of the digital age is refracted through industrial institutions in ways that obscure what is causing what. New technologies do contain the potential to revolutionize society and the economy. New firms are appearing which promise to move society along this revolutionary path. And collateral damage, in the form of collapsing firms and sacked workers, is accumulating.

But the institutions we have available, and which have served us well these last two centuries, are working to take the capital and labour that has been made redundant and reuse it elsewhere. Workers, needing money to live, seek work, and accept pay cuts when they absolutely must. Lower wages make it attractive for firms to use workers at less productive tasks . . . [and reduce] the incentive to invest in labour-saving technology.

This process will not end without a dramatic and unexpected shift in the nature of technology, or in the nature of economic institutions.

As we’ll see in future posts, technology has already moved far enough along that any “dramatic and unexpected shift in the nature of technology” is unlikely to backtrack — instead, it is far more likely to accelerate the erosion of societal economic norms. As for a shift in “the nature of economic institutions,” there is no replacement economic system waiting in the wings. The result, says Streeck, is that we are entering an “age of entropy,” where we are likely to remain for the foreseeable future. He describes it as follows:

Social life in an age of entropy is by necessity individualistic… In the absence of collective institutions, social structures must be devised individually bottom-up, anticipating and accommodating top-down pressures from the markets. Social life consists of individuals building networks of private connections around themselves, as best they can with the means they happen to have at hand. Person-centred relation-making creates lateral social structures that are voluntary and contract-like, which makes them flexible but perishable, requiring continuous networking to keep them together and adjust them on a current basis to changing circumstances. An ideal tool for this are the new social media that produce social structures for individuals, substituting voluntary for obligatory forms of social relations, and networks of users for communities of citizens.

He’s speaking in general, sociological terms, but his description closely mirrors the realities of the kind of résumé creating, network building, and job seeking that dominate the current world of temporary, part-time, contract labor, which makes up the vast majority of new jobs created in this century. These new jobs are not the same jobs that characterized the former workplace model; working for a living has taken on a whole new meaning. Among other things, we now have what some are calling the “Gig Economy,” the “On-Demand Economy,” or even the “Quitting Economy.”

More on that next time.


[1] Of interest is this December 14, 2017 interview with Prof. Streeck entitled “Farewell, Neoliberalism” on his website.

 

Kevin Rhodes is on a mission to bring professional excellence and personal wellbeing to the people who learn, teach, and practice the law. His past blog posts for the CBA have been collected in two volumes — click the book covers for more information.

Capitalism on the Fritz[1]

In November 2008, as the global financial crash was gathering pace, the 82-year-old British monarch Queen Elizabeth visited the London School of Economics. She was there to open a new building, but she was more interested in the assembled academics. She asked them an innocent but pointed question. Given its extraordinary scale, how was it possible that no one saw it coming?

The Queen’s question went to the heart of two huge failures. Western capitalism came close to collapsing in 2007-2008 and has still not recovered. And the vast majority of economists had not understood what was happening.

That’s from the Introduction to Rethinking Capitalism (2016), edited by Michael Jacobs and Mariana Mazzucato.[2] The editors and authors review a catalogue of chronic economic “dysfunction” that they trace to policy-makers’ continued allegiance to neoliberal economic orthodoxy even as it has been breaking down over the past four decades.

Before we get to their dysfunction list, let’s give the other side equal time. First, consider an open letter from Warren Buffett published in Time last week. It begins this way:

“I have good news. First, most American children are going to live far better than their parents did. Second, large gains in the living standards of Americans will continue for many generations to come.”

Mr. Buffett acknowledges that “The market system . . . has also left many people hopelessly behind,” but assures us that “These devastating side effects can be ameliorated,” observing that “a rich family takes care of all its children, not just those with talents valued by the marketplace.” With this compassionate caveat, he is definitely bullish on America’s economy:

In the years of growth that certainly lie ahead, I have no doubt that America can both deliver riches to many and a decent life to all. We must not settle for less.

So, apparently, is our Congress. The new tax law is a virtual pledge of allegiance to the neoliberal economic model. Barring a significant pullback of the law (which seems unlikely), we now have eight years to watch how its assumptions play out.

And now, back to Rethinking Capitalism’s dysfunction list (which I’ve seen restated over and over in my research):

  • Productivity and wages no longer move in tandem — the latter lag behind the former.
  • This has been going on now for several decades,[3] during which inflation-adjusted living standards for the majority of households have been flat.
  • This is a problem because consumer spending accounts for over 70% of U.S. GDP. What hurts consumers hurts the whole economy.
  • What economic growth there has been is mostly the result of spending fueled by consumer and corporate debt. This is especially true of the post-Great Recession “recovery.”
  • Meanwhile, companies have been increasing production through increased automation — most recently through intelligent machines — which means getting more done with fewer employees.
  • That means the portion of marginal output attributable to human (wage-earner) effort is less, which causes consumer incomes to fall.
  • The job marketplace has responded with new dynamics, featuring a worldwide rise of “non-standard” work (temporary, part-time, and self-employed).[4]
  • Overall, there has been an increase in the number of lower-paid workers and a rise in persistent unemployment — especially among young people.
  • Adjusting to these new realities has left traditional wage-earners with feelings of meaninglessness and disempowerment, fueling populist backlash political movements.
  • In the meantime, economic inequality (both wealth and income) has grown to levels not seen since pre-revolution France, the days of the Robber Barons, and the Roaring 20s.
  • Economic inequality means that the shrinking share of compensation paid out in wages, salaries, bonuses, and benefits has been dramatically skewed toward the top of the earnings scale, with much less (both proportionately and absolutely) going to those at the middle and bottom. [5]
  • Increased wealth at the top doesn’t mean the top 20% spend enough more to offset the lost demand (spending) of the lower 80% of income earners; that gap has been filled mainly by consumer debt.
  • Instead, increased wealth at the top end is turned into “rentable” assets — e.g., real estate, intellectual property, and privatized holdings in what used to be the “commons” — which drives up both their value (cost) and the rents derived from them. This creates a “rentier” culture in which lower-income earners are increasingly stressed to meet rental rates, and ultimately are driven out of certain markets.
  • Inequality has also created a new working-class system, in which a large share of workers find themselves in precarious, uncertain, and unsustainable employment and earning circumstances.
  • Inequality has also resulted in limitations on economic opportunity and social mobility — e.g., there is a new kind of “glass floor/glass ceiling” below which the top 20% are unlikely to fall and the bottom 80% are unlikely to rise.
  • In the meantime, the social safety nets that developed during the post-WWII boom (as Buffett’s “rich family” took care of “all its children”) have been largely torn down since the advent of “workfare” in the ’80s and ’90s, leaving those at the bottom and middle more exposed than ever.

The editors of Rethinking Capitalism believe that “These failings are not temporary, they are structural.” That conclusion has led some to believe that people like Warren Buffett are seriously misguided in their continued faith in Western capitalism as a reliable societal institution.

More on that next time.


[1] I wondered where the expression “on the fritz” came from, and tried to find out. Surprisingly, no one seems to know.

[2] Michael Jacobs is an environmental economist and political theorist; at the time the book was published, he was a visiting professor at University College of London. Mariana Mazzucato is an economics professor at the University of Sussex.

[3] “In the US, real median household income was barely higher in 2014 than it had been in 1990, though GDP had increased by 78 per cent over the same period. Though beginning earlier in the US, this divergence of average incomes from overall economic growth has now become a feature of most advanced economies.” Rethinking Capitalism.

[4] These have accounted for “half the jobs created since the 1990s and 60 per cent since the 2008 crisis.” Rethinking Capitalism.

[5] “Meanwhile, those at the very top of the income distribution have done exceedingly well… In the US, the incomes of the richest 1 per cent rose by 142 per cent between 1980 and 2013 (from an average of $461,910, adjusted for inflation, to $1,119,315) and their share of national income doubled, from 10 to 20 per cent. In the first three years of the recovery after the 2008 crash, an extraordinary 91 per cent of the gains in income went to the richest one-hundredth of the population.” Rethinking Capitalism.


Kevin Rhodes left a successful long-term law practice to scratch a creative itch and lived to tell about it… barely. Since then, he has been on a mission to bring professional excellence and personal wellbeing to the people who learn, teach, and practice the law. He has also blogged extensively and written several books about his unique journey to wellness, including how he deals with primary progressive MS through an aggressive regimen of exercise, diet, and mental conditioning.