June 24, 2018

Reframing “The Economy”

We’ve seen that conventional thinking about “the economy” struggles to accommodate technologies such as machine learning, robotics, and artificial intelligence — which means it’s ripe for a big dose of reframing. Reframing is a problem-solving strategy that flips our usual ways of thinking so that blind spots are revealed, conundrums resolved, polarities synthesized, and barriers transformed into mere logistics.

The Santa Fe Institute is on the reframing case: Rolling Stone called it “a sort of Justice League of renegade geeks, where teams of scientists from disparate fields study the Big Questions.” W. Brian Arthur is one of those geeks. He’s also on board with PARC — a Xerox company in “the business of breakthroughs” — and has written two seminal books on complexity economics: Complexity and the Economy (2014) and The Nature of Technology: What It Is and How It Evolves (2009). Here’s his pitch for reframing “the economy”:

The standard way to define the economy — whether in dictionaries or economics textbooks — is as a “system of production and distribution and consumption” of goods and services. And we picture this system, “the economy,” as something that exists in itself, as a backdrop to the events and adjustments that occur within it. Seen this way, the economy becomes something like a gigantic container . . . , a huge machine with many modules or parts.

I want to look at the economy in a different way. The shift in thinking I am putting forward here is . . . like seeing the mind not as a container for its concepts and habitual thought processes but as something that emerges from these. Or seeing an ecology not as containing a collection of biological species, but as forming from its collection of species. So it is with the economy.

The economy is a set of activities and behaviors and flows of goods and services mediated by — draped over — its technologies: the set of arrangements and activities by which a society satisfies its needs. They include hospitals and surgical procedures. And markets and pricing systems. And trading arrangements, distribution systems, organizations, and businesses. And financial systems, banks, regulatory systems, and legal systems. All these are arrangements by which we fulfill our needs, all are means to fulfill human purposes.

George Zarkadakis is another Big Questions geek. He’s an artificial intelligence Ph.D. and engineer, and the author of In Our Own Image: Savior or Destroyer? The History and Future of Artificial Intelligence (2016). He describes his complexity economics reframe in a recent article, “The Economy Is More A Messy, Fractal Living Thing Than A Machine”:

Mainstream economics is built on the premise that the economy is a machine-like system operating at equilibrium. According to this idea, individual actors – such as companies, government departments and consumers – behave in a rational way. The system might experience shocks, but the result of all these minute decisions is that the economy eventually works its way back to a stable state.

Unfortunately, this naive approach prevents us from coming to terms with the profound consequences of machine learning, robotics and artificial intelligence.

Both political camps accept a version of the elegant premise of economic equilibrium, which inclines them to a deterministic, linear way of thinking. But why not look at the economy in terms of the messy complexity of natural systems, such as the fractal growth of living organisms or the frantic jive of atoms?

These frameworks are bigger than the sum of their parts, in that you can’t predict the behaviour of the whole by studying the step-by-step movement of each individual bit. The underlying rules might be simple, but what emerges is inherently dynamic, chaotic and somehow self-organising.

Complexity economics takes its cue from these systems, and creates computational models of artificial worlds in which the actors display a more symbiotic and changeable relationship to their environments. Seen in this light, the economy becomes a pattern of continuous motion, emerging from numerous interactions. The shape of the pattern influences the behaviour of the agents within it, which in turn influences the shape of the pattern, and so on.

There’s a stark contrast between the classical notion of equilibrium and the complex-systems perspective. The former assumes rational agents with near-perfect knowledge, while the latter recognises that agents are limited in various ways, and that their behaviour is contingent on the outcomes of their previous actions. Most significantly, complexity economics recognises that the system itself constantly changes and evolves – including when new technologies upend the rules of the game.
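
To make that abstract description concrete, here’s a minimal agent-based sketch in the spirit Zarkadakis describes — my own toy illustration, not a model from his article. A hundred “traders” each adjust a posted price toward the emerging market average, and the average in turn reflects their adjustments: the pattern shapes the agents, and the agents reshape the pattern.

```python
import random

random.seed(42)  # reproducible toy run

class Trader:
    """A boundedly rational agent: no perfect knowledge, just local adjustment."""
    def __init__(self):
        self.price = random.uniform(90, 110)  # idiosyncratic starting price

    def adapt(self, market_average):
        # Nudge toward the observed aggregate, plus noise -- no equilibrium assumed.
        self.price += 0.5 * (market_average - self.price) + random.gauss(0, 1)

traders = [Trader() for _ in range(100)]

for step in range(10):
    average = sum(t.price for t in traders) / len(traders)
    for t in traders:
        t.adapt(average)  # the pattern influences the agents...
    # ...and the agents' moves become the next step's pattern
    print(f"step {step}: average price {average:.2f}")
```

Even in this stripped-down form, the aggregate never locks into a fixed point the way an equilibrium model assumes — it keeps drifting as the agents chase it.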

That’s all pretty heady stuff, but what we’d really like to know is what complexity economics can tell us that conventional economics can’t.

We’ll look at that next time.

 

Kevin Rhodes writes about individual growth and cultural change, drawing on insights from science, technology, disruptive innovation, entrepreneurship, neuroscience, psychology, and personal experience, including his own unique journey to wellness — dealing with primary progressive MS through an aggressive regime of exercise, diet, and mental conditioning. Check out his latest LinkedIn Pulse article: “Rolling the Rock: Lessons From Sisyphus on Work, Working Out, and Life.”

What is “The Economy” Anyway?

Throughout this series, we’ve heard from numerous commentators who believe that conventional economic thinking isn’t keeping pace with the technological revolution, and that polarized ideological posturing is preventing the kind of open-minded discourse we need to reframe our thinking.

In this short TED talk, the author[1] of Americana: A 400-Year History of American Capitalism suggests that we unplug the ideological debate and instead adopt a less combative and more digital-friendly metaphor for how we talk about the economy:

Capitalism . . . is this either celebrated term or condemned term. It’s either revered or it’s reviled. And I’m here to argue that this is because capitalism, in the modern iteration, is largely misunderstood.

In my view, capitalism should not be thought of as an ideology, but instead should be thought of as an operating system.

When you think about it as an operating system, it devolves the language of ideology away from what traditional defenders of capitalism think.

The operating system metaphor shifts policy agendas away from ideology and instead invites us to consider the economy as something that needs to be continually updated:

As you have advances in hardware, you have advances in software. And the operating system needs to keep up. It needs to be patched, it needs to be updated, new releases have to happen. And all of these things have to happen symbiotically. The operating system needs to keep getting more and more advanced to keep up with innovation.

But what if the operating system has gotten too complex for the human mind to comprehend? This recent article from the Silicon Flatirons Center at the University of Colorado[2] observes that “Human ingenuity has created a world that the mind cannot master,” then asks, “Have we finally reached our limits?” The question telegraphs its answer: in many respects, yes we have. Consider, for example, the Traffic Alert and Collision Avoidance System (TCAS) that’s responsible for keeping us safe when we fly:

TCAS alerts pilots to potential hazards, and tells them how to respond by using a series of complicated rules. In fact, this set of rules — developed over decades — is so complex, perhaps only a handful of individuals alive even understand it anymore.

While the problem of avoiding collisions is itself a complex question, the system we’ve built to handle this problem has essentially become too complicated for us to understand, and even experts sometimes react with surprise to its behaviour. This escalating complexity points to a larger phenomenon in modern life. When the systems designed to save our lives are hard to grasp, we have reached a technological threshold that bears examining.

It’s one thing to recognise that technology continues to grow more complex, making the task of the experts who build and maintain our systems more complicated still, but it’s quite another to recognise that many of these systems are actually no longer completely understandable.

The article cites numerous other impossibly complex systems, including the law:

Even our legal systems have grown irreconcilably messy. The US Code, itself a kind of technology, is more than 22 million words long and contains more than 80,000 links within it, between one section and another. This vast legal network is profoundly complicated, the functionality of which no person could understand in its entirety.
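
The “network” framing is apt: cross-references turn a body of text into a directed graph, where reading one section can pull in a cascade of others. Here’s a small sketch of that idea — my own illustration with made-up section labels, not data from the article:

```python
from collections import defaultdict, deque

# Sections are nodes; "see section X" cross-references are directed edges.
# These particular references are hypothetical, purely for illustration.
references = [
    ("sec. 1", "sec. 63"), ("sec. 63", "sec. 161"),
    ("sec. 161", "sec. 162"), ("sec. 162", "sec. 1"),  # even a cycle
    ("sec. 1983", "sec. 1343"),
]

graph = defaultdict(list)
for src, dst in references:
    graph[src].append(dst)

def reachable(start):
    """All sections a diligent reader might have to consult from one entry point."""
    seen, queue = {start}, deque([start])
    while queue:
        for nxt in graph[queue.popleft()]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print(reachable("sec. 1"))  # one section drags in a whole web of others
```

Scale that up to 80,000 edges and the article’s point makes itself: no single reader can hold the reachable set in mind.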

In an earlier book[3], Steven Pinker, author of the recent optimistic bestseller Enlightenment Now (check back a couple of posts in this series), suggests that the human brain just isn’t equipped for the complexity required of modern life:

Maybe philosophical problems are hard not because they are divine or irreducible or workaday science problems, but because the mind of Homo sapiens lacks the cognitive equipment to solve them. We are organisms, not angels, and our minds are organs, not pipelines to the truth. Our minds evolved by natural selection to solve problems that were life-and-death matters to our ancestors, not to commune with correctness or to answer any question we are capable of asking.

In other words, we have our limits.

Imagine that.

So then… where do we turn for appropriately complex economic thinking? According to “complexity economics,” we turn to the source: the economy itself, understood not by reference to historical theory or newly updated metaphor, but on its own data-rich and machine-intelligent terms.

We’ll go there next time.


[1] According to his TED bio, Bhu Srinivasan “researches the intersection of capitalism and technological progress.”

[2] Samuel Arbesman is the author. The Center’s mission is to “propel the future of technology policy and innovation.”

[3] How the Mind Works, which Pinker wrote in 1997 when he was a professor of psychology and director of the Center for Cognitive Neuroscience at MIT.

 

Kevin Rhodes would create workplace utopia if he could. But since he doesn’t trust himself to do that, he writes this blog instead. Thanks for reading!

Utopia Already

“If you had to choose a moment in history to be born, and you did not know ahead of time who you would be—you didn’t know whether you were going to be born into a wealthy family or a poor family, what country you’d be born in, whether you were going to be a man or a woman—if you had to choose blindly what moment you’d want to be born you’d choose now.”

Pres. Barack Obama, 2016

It’s been a good month for optimists in my reading pile. Utopia is already here, they say, and we’ve got the facts to prove it.

Harvard Professor Steven Pinker is his own weather system. Bill Gates called Pinker’s latest book Enlightenment Now “My new favorite book of all time.”

Pinker begins cautiously: “The second half of the second decade of the third millennium would not seem to be an auspicious time to publish a book on the historical sweep of progress and its causes,” he says, and follows with a recitation of the bad-news sound bites and polarized blame-shifting we’ve (sadly) gotten used to. But then he throws down the optimist gauntlet: “In the pages that follow, I will show that this bleak assessment of the state of the world is wrong. And not just a little wrong — wrong, wrong, flat-earth wrong, couldn’t-be-more-wrong wrong.”

He makes his case in a string of data-laced chapters on progress, life expectancy, health, food and famine, wealth, inequality, the environment, war and peace, safety and security, terrorism, democracy, equal rights, knowledge and education, quality of life, happiness, and “existential” threats such as nuclear war. In each of them, he calls up the pessimistic party line and counters with his version of the rest of the story.

And then, just to make sure we’re getting the point, 322 pages of data and analysis into it, he plays a little mind game with us. First he offers an eight-paragraph summary of the prior chapters, then starts the next three paragraphs with the words “And yet,” followed by a catalogue of everything that’s still broken and in need of fixing. Despite those 322 prior pages and optimism’s 8–3 winning margin, the negativity feels oddly welcome. I found myself thinking, “Well finally, you’re admitting there’s a lot of mess we need to clean up.” But then Prof. Pinker reveals what just happened:

The facts in the last three paragraphs, of course, are the same as the ones in the first eight. I’ve simply read the numbers from the bad rather than the good end of the scales or subtracted the hopeful percentages from 100. My point in presenting the state of the world in these two ways is not to show that I can focus on the space in the glass as well as on the beverage. It’s to reiterate that progress is not utopia, and that there is room — indeed, an imperative — for us to strive to continue that progress.

Pinker acknowledges his debt to the work of Swedish physician, professor of global health, and TED all-star Hans Rosling and his recent bestselling book Factfulness. Prof. Rosling died last year, and the book begins with a poignant declaration: “This book is my last battle in my lifelong mission to fight devastating ignorance.” His daughter and son-in-law co-wrote the book and are carrying on his work — how’s that for commitment, passion, and family legacy?

The book leads us through ten of the most common mind games we play in our attempts to remain ignorant. It couldn’t be more timely or relevant to our age of “willful blindness,” “cognitive bias,” “echo chambers” and “epistemic bubbles.”

Finally, this week professional skeptic Michael Shermer weighed in on the positive side of the scale with his review of a new book by journalist Gregg Easterbrook — It’s Better Than It Looks. Shermer blasts out of the gate with “Though declinists in both parties may bemoan our miserable lives, Americans are healthier, wealthier, safer and living longer than ever.” He also begins his case with the Obama quote above, and adds another one:

As Obama explained to a German audience earlier that year: “We’re fortunate to be living in the most peaceful, most prosperous, most progressive era in human history,” adding “that it’s been decades since the last war between major powers. More people live in democracies. We’re wealthier and healthier and better educated, with a global economy that has lifted up more than a billion people from extreme poverty.”

A similar paean to progress begins last year’s blockbuster Homo Deus (another of Bill Gates’ favorite books of all time). The optimist case has been showing up elsewhere in my research, too. Who knows, maybe utopia isn’t such a bad idea after all. In fact, maybe it’s already here.

Now there’s a thought.

All this ferocious optimism has been bracing, to say the least — it’s been the best challenge yet to what was becoming a comfortably dour outlook on economic reality.

And just as I was beginning to despair of anyone anywhere at any time ever using data to make sense of things, I also ran into an alternative to utopian thinking that both Pinker and Shermer acknowledge. It’s called “protopia,” and we’ll look at it next time.

 

Kevin Rhodes would create workplace utopia if he could. But since he doesn’t trust himself to do that, he writes this blog instead. Thanks for reading!

Utopia for Realists, Continued

Like humor and satire, utopias throw open the windows of the mind.

Rutger Bregman

Continuing with Rutger Bregman’s analysis of utopian thinking that we began last week:

Let’s first distinguish between two forms of utopian thought. The first is the most familiar, the utopia of the blueprint. Instead of abstract ideals, blueprints consist of immutable rules that tolerate no discussion.

There is, however, another avenue of utopian thought, one that is all but forgotten. If the blueprint is a high-resolution photo, then this utopia is just a vague outline. It offers not solutions but guideposts. Instead of forcing us into a straitjacket, it inspires us to change. And it understands that, as Voltaire put it, the perfect is the enemy of the good. As one American philosopher has remarked, ‘any serious utopian thinker will be made uncomfortable by the very idea of the blueprint.’

It was in this spirit that the British philosopher Thomas More literally wrote the book on utopia (and coined the term). More understood that utopia is dangerous when taken too seriously. ‘One needs to believe passionately and also be able to see the absurdity of one’s own beliefs and laugh at them,’ observes philosopher and leading utopia expert Lyman Tower Sargent. Like humor and satire, utopias throw open the windows of the mind. And that’s vital. As people and societies get progressively older they become accustomed to the status quo, in which liberty can become a prison, and the truth can become lies. The modern creed — or worse, the belief that there’s nothing left to believe in — makes us blind to the shortsightedness and injustice that still surround us every day.

Thus the lines are drawn between utopian blueprints grounded in dogma vs. utopian ideals arising from sympathy and compassion. Both begin with good intentions, but the pull of entropy is stronger with the former — at least, so says Rutger Bregman, and he’s got good company in Sir Thomas More and others. Blueprints require compliance, and their purveyors are zealously ready to enforce it. Ideals, on the other hand, inspire creativity, and creativity requires acting in the face of uncertainty, living with imperfection, responding with resourcefulness and resilience when best intentions don’t play out, and a lot of just plain showing up and grinding it out. I have a personal bias for coloring outside the lines, but I must confess that my own attempts to promote utopian workplace ideals have given me pause.

For years, I led interactive workshops designed to help people creatively engage with their big ideas about work and wellbeing — variously tailored for CLE ethics credits or for general audiences. I realized recently that, reduced to their essence, they employed the kinds of ideals advocated by beatnik-era philosopher and metaphysician Alan Watts. (We met him several months ago — he’s the “What would you do if money were no object?” guy.)

The workshops generated hundreds of heartwarming “this was life-changing” testimonies, but I could never quite get over this nagging feeling that the participants mostly hadn’t achieved escape velocity, and come next Monday they would be back to the despair of “But everybody knows you can’t earn any money that way.”

I especially wondered about the lawyers, for whom “I hate my job but love my paycheck” was a recurrent theme. The post-WWII neoliberal economic tide floated the legal profession’s boat, too, but prosperity has done little for lawyer happiness and well-being. True, we’re seeing substantial quality-of-life change in the profession recently (which I’ve blogged about in the past), but most of it has been around the edges, while overall lawyers’ workplace reality remains a bulwark of what one writer calls the “over-culture” — the overweening force of culturally-accepted norms about how things are and should be — and the legal over-culture has stepped in line with the worldwide workplace trend of favoring wealth over a sense of meaning and value.

Alan Watts’ ideals were widely adopted by the burgeoning self-help industry, which also rode the neoliberal tide to prosperous heights. Self-help tends to be long on inspiration and short on grinding, and sustainable creative change requires large doses of both. I served up both in the workshops, but still wonder if they were just too… well, um… beatnik… for the legal profession. I’ll never know — the guy who promoted the workshops retired, and I quit doing them. If nothing else, writing this series has opened my eyes to how closely law practice mirrors worldwide economic and workplace dynamics. We’ll look more at that in the coming weeks.

 

Kevin Rhodes would create workplace utopia if he could. But since he doesn’t trust himself to do that, he writes this blog instead. Thanks for reading!

The Perils of Predicting

“We were promised flying cars, and instead what we got was 140 characters.”

Peter Thiel, PayPal co-founder[1]

Economic forecasts and policy solutions are based on predictions, and predicting is a perilous business.

I grew up in a small town in western Minnesota. Our family got the morning paper — the Minneapolis Tribune. Subscribers to the afternoon Star got their paper around 4:00. A friend’s dad was a lawyer — his family got both. In a childhood display of cognitive bias, I never could understand why anyone would want an afternoon paper. News was made the day before, so you could read about it the next morning, and that was that.

I remember one Tribune headline to this day: it predicted nuclear war in 10 years. That was 1961, when I was eight. The Cuban missile crisis was the following year, and for a while it looked like it wouldn’t take all ten years for the headline’s prediction to come true.

The Tribune helpfully ran designs and instructions for building your own fallout shelter. Our house had the perfect place for one: a root cellar off one side of the basement — easily the creepiest place in the house. You descended a couple steps down from the basement floor, through a stubby cinderblock hallway, past a door hanging on one hinge. Ahead of you was a bare light bulb swinging from the ceiling — it flickered, revealing decades of cobwebs and homeowner flotsam worthy of Miss Havisham. It was definitely a bomb shelter fixer-upper, but it was the right size, and as an added bonus it had a concrete slab over it — if you banged the ground above with a pipe it made a hollow sound.

I scoured the fallout shelter plans, but my dad said no. Someone else in town built one — the ventilation pipes stuck out of a room-size mound next to their house. People used to go by it on their Sunday drives. Meanwhile I ran my own personal version of the Doomsday Clock for the next ten years, until my 18th birthday came and went. So much for that headline.

I also remember a Sunday cartoon that predicted driverless cars. I found an article about it at Gizmodo:[2]

The article explains:

The period between 1958 and 1963 might be described as a Golden Age of American Futurism, if not the Golden Age of American Futurism. Bookended by the founding of NASA in 1958 and the end of The Jetsons in 1963, these few years were filled with some of the wildest techno-utopian dreams that American futurists had to offer. It also happens to be the exact timespan for the greatest futuristic comic strip to ever grace the Sunday funnies: Closer Than We Think.

Jetpacks, meal pills, flying cars — they were all there, beautifully illustrated by Arthur Radebaugh, a commercial artist based in Detroit best known for his work in the auto industry. Radebaugh would help influence countless Baby Boomers and shape their expectations for the future. The influence of Closer Than We Think can still be felt today.

Timing is Everything

Apparently timing is everything in the prediction business. The driverless car prediction was accurate, just way too early. The Tribune’s nuclear war prediction was inaccurate (and let’s hope not just because it was too early). Predictions from the hapless mythological prophetess Cassandra were never inaccurate or untimely: she was cursed by Apollo (who ran a highly successful prophecy business at Delphi) with the gift of always being right but never believed.

Now that would be frustrating.

As I said last week, predicting is as perilous as policy-making. An especially perilous version of both is utopian thinking. There’s been plenty of utopian economic thinking over the past couple of centuries, and today’s economists continue the grand tradition — to their peril, and potentially to ours. We’ll look at some economic utopian thinking (and the case for and against it) beginning next time.

 

Apparently timing is everything in country music, too. I’m not an aficionado, but I did come across this video while researching this post. The guy’s got a nice baritone.


[1] Peter Thiel needn’t despair about the lack of flying cars anymore: here’s a video re: a prototype from Sebastian Thrun and his company Kitty Hawk.

[2] The article is worth a look, if you like that sort of thing. So is this Smithsonian article on the Jetsons. And while we’re on the topic, check out this IEEE Spectrum article on a 1960 RCA initiative that had self-driving cars just around the corner, and this Atlantic article about an Electronic Age/Science Digest article that made the same prediction even earlier — in 1958.

 

Kevin Rhodes writes about individual growth and cultural change, drawing on insights from science, technology, disruptive innovation, entrepreneurship, neuroscience, psychology, and personal experience, including his own unique journey to wellness — dealing with primary progressive MS through an aggressive regime of exercise, diet, and mental conditioning.

Check out Kevin’s latest LinkedIn Pulse article: “When We Move, We Can Achieve the Impossible.”

Bus Riding Economists

Lord, I was born a ramblin’ man
Tryin’ to make a livin’ and doin’ the best I can[1]

A couple of economists took the same bus I did one day last week. We’ll call them “Home Boy” and “Ramblin’ Man.” They made each other’s acquaintance when Ramblin’ Man put his money in the fare box and didn’t get a transfer coupon. He was from out of town, he said, and didn’t know how to work it. Home Boy explained that you need to wait until the driver gets back from her break. Ramblin’ Man said he guessed the money was just gone, but the driver showed up about then and checked the meter — it showed he’d put the money in, so he got his transfer. Technology’s great, ain’t it?

Ramblin’ Man took the seat in front of me. Home Boy sat across the aisle. When the conversation turned to economics, I eavesdropped[2] shamelessly. Well, not exactly — they were talking pretty loud. Ramblin’ Man said he’d been riding the bus for two days to get to the VA. That gave them instant common ground: they were both Vietnam vets, and agreed they were lucky to get out alive.

Ramblin’ Man said when he got out he went traveling — hitchhike, railroad, bus, you name it. That was back in the 70’s, when a guy could go anywhere and get a job. Not no more. Now he lives in a small town up in northeast Montana. He likes it, but it’s a long way to the VA. Still, he knew if he could get here, there’d be a bus to take him right to it, and sure enough there was. That’s the trouble with those small towns, said Home Boy — nice and quiet, but not enough people to have any services. I’ll bet there’s no bus company up there, he chuckled. Not full of people like Minneapolis.

Minneapolis! Ramblin’ Man lit up at the mention of it. All them people, and no jobs. He was there in 2009, right after the bankers ruined the economy. Yeah, them and the politicians, Home Boy agreed. Shoulda put them all in jail. It’s those one-percenters. They got it fixed now so nobody makes any money but them. It’s like it was back when they were building the railroads and stuff. Now they’re doing it again. Nobody learns from history — they keep doing the same things over and over. They’re stuck in the past.

Except this time, it’s different, said Ramblin’ Man. It’s all that technology — takes away all the jobs. Back in ’09, he’d been in Minneapolis for three months, and his phone never rang once for a job offer. Not once. Never used to happen in the 70’s.

And then my stop came up, and my economic history lesson was over. My two bus riding economists had covered the same developments I’ve been studying for the past 15 months. My key takeaway? That “The Economy” is a lazy fiction — none of us really lives there. Instead, we live in the daily challenges of figuring out how to get the goods and services we need — maybe to thrive (if you’re one of them “one-percenters”), or maybe just to get by. The Economy isn’t some transcendent structure; it’s created one human transaction at a time — like when a guy hits the road to make sense of life after a war, picking up odd jobs along the way until eventually he settles in a peaceful little town in the American Outback. When we look at The Economy that way, we get a whole new take on it. That’s precisely what a new breed of cross-disciplinary economists is doing, and we’ll examine their outlook in the coming weeks.

In the meantime, I suspect that one of the reasons we don’t learn from history is that we don’t know it. In that regard, I recently read a marvelous economic history book that taught me a whole lot I never knew: Americana: A 400-Year History of American Capitalism (2017) by tech entrepreneur Bhu Srinivasan. Here’s the promo blurb:

“From the days of the Mayflower and the Virginia Company, America has been a place for people to dream, invent, build, tinker, and bet the farm in pursuit of a better life. Americana takes us on a four-hundred-year journey of this spirit of innovation and ambition through a series of Next Big Things — the inventions, techniques, and industries that drove American history forward: from the telegraph, the railroad, guns, radio, and banking to flight, suburbia, and sneakers, culminating with the Internet and mobile technology at the turn of the twenty-first century. The result is a thrilling alternative history of modern America that reframes events, trends, and people we thought we knew through the prism of the value that, for better or for worse, this nation holds dearest: capitalism. In a winning, accessible style, Bhu Srinivasan boldly takes on four centuries of American enterprise, revealing the unexpected connections that link them.”

This is American history as we never learned it, and the book is well worth every surprising page.


[1] From “Ramblin’ Man,” by the Allman Brothers. Here’s a 1970 live version. And here’s the studio version.

[2] If you wonder, as I did, where “eavesdrop” came from, here’s the Word Detective’s explanation.

 

Kevin Rhodes writes about individual growth and cultural change, drawing on insights from science, technology, disruptive innovation, entrepreneurship, neuroscience, psychology, and personal experience, including his own unique journey to wellness — dealing with primary progressive MS through an aggressive regime of exercise, diet, and mental conditioning.

Check out Kevin’s latest LinkedIn Pulse article: “When We Move, We Can Achieve the Impossible.”

On the Third Hand…

Will the machines take over the jobs? Ask a bunch of economists, and you’ll get opinions organized around competing ideologies, reflecting individual cognitive, emotional, and political biases. That’s been the experience of Martin Ford — entrepreneur, TED talker, and New York Times bestselling author of Rise of the Robots: Technology and the Threat of a Jobless Future:

In the field of economics the opinions all too often break cleanly along predefined political lines. Knowing the ideological predisposition of a particular economist is often a better predictor of what that individual is likely to say than anything contained in the data under examination. In other words, if you’re waiting for the economists to deliver some sort of definitive verdict on the impact that advancing technology is having on the economy, you may have a very long wait.[1]

In this Psychology Today article, Dr. Karl Albrecht[2] offers a neurological explanation for polarized thinking:

Recent research suggests that our brains may be pre-wired for dichotomized thinking. That’s a fancy name for thinking and perceiving in terms of two — and only two — opposing possibilities.

These research findings might help explain how and why the public discourse of our culture has become so polarized and rancorous, and how we might be able to replace it with a more intelligent conversation.

[O]ur brains can keep tabs on two tasks at a time, by sending each one to a different side of the brain. Apparently, we toggle back and forth, with one task being primary and the other on standby.

Add a third task, however, and one of the others has to drop off the to-do list.

Scans of brain activity during this task switching have led to the hypothesis that the brain actually likes handling things in pairs. Indeed, the brain itself is subdivided into two distinct half-brains, or hemispheres.

Curiously, part of our cranial craving for two-ness might be related to our own physiology: the human body is bilaterally symmetrical. Draw an imaginary center line down through the front of a person and you see a lot of parts (not all, of course), that come in pairs: two eyes, two ears, two nostrils, matching teeth on left and right sides, two shoulders, two arms, two hands, two nipples, two legs, two knees, and two feet. Inside you’ll find two of some things and one of others.

Some researchers are now extending this reasoning to suggest that the brain has a built-in tendency, when confronted by complex propositions, to selfishly reduce the set of choices to just two. Apparently it doesn’t like to work hard.

Considering how quickly we make our choices and set our opinions, it’s unlikely that all of the options will even be identified, never mind carefully considered.

“On the one hand this, on the other hand that,” we like to say. Lawyers perfect the art. Politics and the press also thrive on dichotomy:

Again, our common language encodes the effect of this anatomical self reference. “On the one hand, there is X. But on the other hand, we have Y.” Many people describe political views as being either “left” or “right.”

The popular press routinely constructs “news” stories around conflicts and differences between pairs of opposing people, factions, and ideologies. Bipolar conflict is the very essence of most of the news.

So, are robots and artificial intelligence going to trash the working world, or not?

Hmmm, there might be another option — several, actually. Dr. Albrecht urges us to find them:

Seek the “third hand” — and any other “hands” you can discover. Ask yourself, and others, “Are there other options to be considered?”

We’ll consider some third hand perspectives about the rise of the robots in the coming weeks.


[1] Martin Ford is also the consulting expert for Societe Generale’s new “Rise of the Robots” investment index, which focuses on companies that are “significant participants in the artificial intelligence and robotics revolution.”

[2] According to his website, Karl Albrecht “is an executive management consultant, futurist, lecturer, and author of more than 20 books on professional achievement, organizational performance, and business strategy. He is also a leading authority on cognitive styles and the development of advanced thinking skills. The Mensa Society honored him with its lifetime achievement award, for significant contributions by a member to the understanding of intelligence. Originally a physicist, and having served as a military intelligence officer and business executive, he now consults, lectures, and writes about whatever he thinks would be fun.”

 

Kevin Rhodes writes about individual growth and cultural change, drawing on insights from science, technology, disruptive innovation, entrepreneurship, neuroscience, psychology, and personal experience, including his own unique journey to wellness — dealing with primary progressive MS through an aggressive regime of exercise, diet, and mental conditioning.

Check out Kevin’s latest LinkedIn Pulse article: “When We Move, We Can Achieve the Impossible.”

Race Against the Machine, Continued

Rational choice theory is a cornerstone of conventional economic thinking. It states that:

Individuals always make prudent and logical decisions. These decisions provide people with the greatest benefit or satisfaction — given the choices available — and are also in their highest self-interest.
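
Stated as a procedure, the premise is almost disarmingly simple. Here’s a minimal sketch (mine, with made-up options and utility numbers — not from any economics text): the rational agent just takes the maximum.

```python
# Hypothetical options and utilities, purely for illustration.
options = {
    "work overtime": 120.0,
    "go home early": 80.0,
    "take a night class": 95.0,
}

def rational_choice(utilities):
    # The textbook assumptions: the agent knows every option, knows each
    # payoff exactly, and always maximizes.
    return max(utilities, key=utilities.get)

print(rational_choice(options))  # -> 'work overtime'
```

Everything interesting in the debate below is about what happens when those assumptions fail — limited knowledge, unstable preferences, and machines changing the option set faster than people can evaluate it.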

Presumably Stephen Hawking, Elon Musk, and Bill Gates had something like this in mind when they published an open letter in January 2015 urging that artificial intelligence R&D should focus “not only on making AI more capable, but also on maximizing the societal benefit.” To execute on this imperative, they urged an interdisciplinary collaboration among “economics, law and philosophy, computer security, formal methods and, of course, various branches of AI itself.” (Since its release, the letter has garnered another 8,000 signatures — you can sign it, too, if you like.)

The letter’s steady, rational four paragraphs praise how technology has benefited the human race, and anticipate more of the same in the future, but its reception and the authors’ comments in other contexts are not so measured. As a result, the letter has become a rallying point for those who think humanity is losing its race against the robots.

Consider, for example, the following from an Observer article:

“Success in creating AI would be the biggest event in human history,” wrote Stephen Hawking in an op-ed, which appeared in The Independent in 2014. “Unfortunately, it might also be the last, unless we learn how to avoid the risks.” Professor Hawking added in a 2014 interview with BBC, “humans, limited by slow biological evolution, couldn’t compete and would be superseded by A.I.”

Elon Musk called the prospect of artificial intelligence “our greatest existential threat” in a 2014 interview with MIT students at the AeroAstro Centennial Symposium. “I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish.” Mr. Musk cites his decision to invest in the Artificial Intelligence firm, DeepMind, as a means to “just keep an eye on what’s going on with artificial intelligence. I think there is potentially a dangerous outcome there.”

Microsoft co-founder Bill Gates has also expressed concerns about Artificial Intelligence. During a Q&A session on Reddit in January 2015, Mr. Gates said, “I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”

Or consider this Elon Musk comment in Vanity Fair:

In a startling public reproach to his friends and fellow techies, Musk warned that they could be creating the means of their own destruction. He told Bloomberg’s Ashlee Vance, the author of the biography Elon Musk, that he was afraid that his friend Larry Page, a co-founder of Google and now the C.E.O. of its parent company, Alphabet, could have perfectly good intentions but still “produce something evil by accident”—including, possibly, “a fleet of artificial intelligence-enhanced robots capable of destroying mankind.”

In other words, Hawking, Gates, and Musk aren’t just worried about machines taking over jobs, they’re worried about the end of the world — or at least the human race. This Washington Post op-ed piece thinks that might not be such a bad thing:

When a technology is so obviously dangerous — like nuclear energy or synthetic biology — humanity has an imperative to consider dystopian predictions of the future. But it also has an imperative to push on, to reach its full potential. While it’s scary, sure, that humans may no longer be the smartest life forms in the room a generation from now, should we really be that concerned? Seems like we’ve already done a pretty good job of finishing off the planet anyway. If anything, we should be welcoming our AI masters to arrive sooner rather than later.

Or consider this open letter written back to Hawking, Gates, and Musk, which basically says forget the fear mongering — it’s going to happen no matter what you think:

Progress is inevitable, even if it is reached by accident and happenstance. Even if we do not intend to, sentient AI is something that will inevitably be created, be it through the evolution of a learning AI, or as a byproduct of some research. No treaty or coalition can stop it, no matter what you think. I just pray you do not go from educated men to fear mongers when it happens.

As usual, we’re at an ideological impasse, with both sides responding not so much according to the pros and cons but according to their predispositions. This article suggests a way through the impasse:

At the beginning of this article, we asked if the pessimists or optimists would be right.

There is a third option, though: one where we move from building jobs around processes and tasks, a solution that is optimal for neither human nor machine, to building jobs around problems.

The article is long, well-researched, and… well, very rational. Too bad — conventional thinking aside — other research shows we rarely act from a rational outlook when it comes to jobs and the economy… or anything else for that matter.

More on that next time.

 

 

Kevin Rhodes writes about individual growth and cultural change, drawing on insights from science, technology, disruptive innovation, entrepreneurship, neuroscience, psychology, and personal experience, including his own unique journey to wellness — dealing with primary progressive MS through an aggressive regime of exercise, diet, and mental conditioning.

Check out Kevin’s latest LinkedIn Pulse article: “When We Move, We Can Achieve the Impossible.”

Race Against the Machine

For the past several years, two MIT big thinkers[1] have been the go-to authorities in the scramble to explain how robotics, artificial intelligence, and big data are revolutionizing the economy and the working world. Their two books were published four and six years ago — so yesterday in the world of technology — but they were remarkably prescient when written, and have not diminished in relevance. They are:

Race Against the Machine: How the Digital Revolution is Accelerating Innovation, Driving Productivity, and Irreversibly Transforming Employment and the Economy (2012)

The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies (2014)

Click here for a chapter-by-chapter digest of The Second Machine Age, written by an all-star cast of economic commentators. Among other things, they acknowledge the authors’ view that neoliberal capitalism has not fared well in its dealings with the technological juggernaut, but in the absence of a better alternative, we might as well continue to ride the horse in the direction it’s going.

While admitting that History (not human choice) is “littered with unintended . . . side effects of well-intentioned social and economic policies,” the authors cite Tim O’Reilly[2] in pushing forward with technology’s momentum rather than clinging to the past or present. They suggest that we should let the technologies do their work and just find ways to deal with it. They are “skeptical of efforts to come up with fundamental alternatives to capitalism.”

David Rotman, editor of the MIT Technology Review, cites The Second Machine Age extensively in an excellent, longer article, “How Technology is Destroying Jobs.” Although the article is packed with contrary analysis and opinion, the following excerpts emphasize what many might consider the shadowy side of the street (compared to the sunny side we looked at in the past couple of posts). I added the headings below to emphasize that many of the general economic themes we’ve been talking about also apply to the specific dynamics of the job market.

It used to be that economic growth — including wealth creation — also created more jobs. It doesn’t work that way anymore. Perhaps the most damning piece of evidence, according to Brynjolfsson, is a chart that only an economist could love. In economics, productivity—the amount of economic value created for a given unit of input, such as an hour of labor—is a crucial indicator of growth and wealth creation. It is a measure of progress. On the chart Brynjolfsson likes to show, separate lines represent productivity and total employment in the United States.

For years after World War II, the two lines closely tracked each other, with increases in jobs corresponding to increases in productivity. The pattern is clear: as businesses generated more value from their workers, the country as a whole became richer, which fueled more economic activity and created even more jobs. Then, beginning in 2000, the lines diverge; productivity continues to rise robustly, but employment suddenly wilts. By 2011, a significant gap appears between the two lines, showing economic growth with no parallel increase in job creation. Brynjolfsson and McAfee call it the “great decoupling.” And Brynjolfsson says he is confident that technology is behind both the healthy growth in productivity and the weak growth in jobs.

A rising economic tide no longer floats all boats. The result is a skewed allocation of the rewards of growth away from jobs — i.e., economic inequality. The contention that automation and digital technologies are partly responsible for today’s lack of jobs has obviously touched a raw nerve for many worried about their own employment. But this is only one consequence of what Brynjolfsson and McAfee see as a broader trend. The rapid acceleration of technological progress, they say, has greatly widened the gap between economic winners and losers—the income inequalities that many economists have worried about for decades.

“[S]teadily rising productivity raised all boats for much of the 20th century,” [Brynjolfsson] says. “Many people, especially economists, jumped to the conclusion that was just the way the world worked. I used to say that if we took care of productivity, everything else would take care of itself; it was the single most important economic statistic. But that’s no longer true.” He adds, “It’s one of the dirty secrets of economics: technology progress does grow the economy and create wealth, but there is no economic law that says everyone will benefit.” In other words, in the race against the machine, some are likely to win while many others lose.

That robots, automation, and software can replace people might seem obvious to anyone who’s worked in automotive manufacturing or as a travel agent. But Brynjolfsson and McAfee’s claim is more troubling and controversial. They believe that rapid technological change has been destroying jobs faster than it is creating them, contributing to the stagnation of median income and the growth of inequality in the United States.

Meanwhile, technology is taking over the jobs that are left — blue collar, white collar, and even the professions. [I]mpressive advances in computer technology—from improved industrial robotics to automated translation services—are largely behind the sluggish employment growth of the last 10 to 15 years. Even more ominous for workers, the MIT academics foresee dismal prospects for many types of jobs as these powerful new technologies are increasingly adopted not only in manufacturing, clerical, and retail work but in professions such as law, financial services, education, and medicine.

Technologies like the Web, artificial intelligence, big data, and improved analytics—all made possible by the ever increasing availability of cheap computing power and storage capacity—are automating many routine tasks. Countless traditional white-collar jobs, such as many in the post office and in customer service, have disappeared.

New technologies are “encroaching into human skills in a way that is completely unprecedented,” McAfee says, and many middle-class jobs are right in the bull’s-eye; even relatively high-skill work in education, medicine, and law is affected.

We’ll spend more time on the shadowy side of the street next time.


[1] Erik Brynjolfsson is director of the MIT Center for Digital Business, and Andrew McAfee is a principal research scientist at MIT who studies how digital technologies are changing business, the economy, and society.

[2] According to his official bio on his website, Tim O’Reilly “is the founder and CEO of O’Reilly Media, Inc. His original business plan was simply ‘interesting work for interesting people,’ and that’s worked out pretty well. O’Reilly Media delivers online learning, publishes books, runs conferences, urges companies to create more value than they capture, and tries to change the world by spreading and amplifying the knowledge of innovators.”

 

Kevin Rhodes writes about individual growth and cultural change, drawing on insights from science, technology, disruptive innovation, entrepreneurship, neuroscience, psychology, and personal experience, including his own unique journey to wellness — dealing with primary progressive MS through an aggressive regime of exercise, diet, and mental conditioning.

Check out Kevin’s latest LinkedIn Pulse article: “Meeting Goals the Olympic Way: Train + Transform.”

Bright Sunshiny Day, Continued

Last time, we heard David Lee[1] express his conviction that, far from destroying human jobs, robotic technology will unleash human creativity on a wonderful new world of work. His perspective is so remarkably and refreshingly upbeat that I thought we’d let him continue where he left off last week:

I think it’s important to recognize that we brought this problem on ourselves. And it’s not just because, you know, we are the one building the robots. But even though most jobs left the factory decades ago, we still hold on to this factory mindset of standardization and de-skilling. We still define jobs around procedural tasks and then pay people for the number of hours that they perform these tasks. We’ve created narrow job definitions like cashier, loan processor or taxi driver and then asked people to form entire careers around these singular tasks.

These choices have left us with actually two dangerous side effects. The first is that these narrowly defined jobs will be the first to be displaced by robots, because single-task robots are just the easiest kinds to build. But second, we have accidentally made it so that millions of workers around the world have unbelievably boring working lives.

Let’s take the example of a call center agent. Over the last few decades, we brag about lower operating costs because we’ve taken most of the need for brainpower out of the person and put it into the system. For most of their day, they click on screens, they read scripts. They act more like machines than humans. And unfortunately, over the next few years, as our technology gets more advanced, they, along with people like clerks and bookkeepers, will see the vast majority of their work disappear.

To counteract this, we have to start creating new jobs that are less centered on the tasks that a person does and more focused on the skills that a person brings to work. For example, robots are great at repetitive and constrained work, but human beings have an amazing ability to bring together capability with creativity when faced with problems that we’ve never seen before.

We need to realistically think about the tasks that will be disappearing over the next few years and start planning for more meaningful, more valuable work that should replace it. We need to create environments where both human beings and robots thrive. I say, let’s give more work to the robots, and let’s start with the work that we absolutely hate doing. Here, robot, process this painfully idiotic report.

And for the human beings, we should follow the advice from Harry Davis at the University of Chicago. He says we have to make it so that people don’t leave too much of themselves in the trunk of their car. I mean, human beings are amazing on weekends. Think about the people that you know and what they do on Saturdays. They’re artists, carpenters, chefs and athletes. But on Monday, they’re back to being Junior HR Specialist and Systems Analyst 3.

You know, these narrow job titles not only sound boring, but they’re actually a subtle encouragement for people to make narrow and boring job contributions. But I’ve seen firsthand that when you invite people to be more, they can amaze us with how much more they can be.

[The key is] to turn dreams into a reality. And that dreaming is an important part of what separates us from machines. For now, our machines do not get frustrated, they do not get annoyed, and they certainly don’t imagine.

But we, as human beings — we feel pain, we get frustrated. And it’s when we’re most annoyed and most curious that we’re motivated to dig into a problem and create change. Our imaginations are the birthplace of new products, new services, and even new industries.

If we really want to robot-proof our jobs, we, as leaders, need to get out of the mindset of telling people what to do and instead start asking them what problems they’re inspired to solve and what talents they want to bring to work. Because when you can bring your Saturday self to work on Wednesdays, you’ll look forward to Mondays more, and those feelings that we have about Mondays are part of what makes us human.

We’ll give the other side equal time next week.


[1] David Lee is Vice President of Innovation and the Strategic Enterprise Fund for UPS.

 

Kevin Rhodes writes about individual growth and cultural change, drawing on insights from science, technology, disruptive innovation, entrepreneurship, neuroscience, psychology, and personal experience, including his own unique journey to wellness — dealing with primary progressive MS through an aggressive regime of exercise, diet, and mental conditioning.

Check out Kevin’s latest LinkedIn Pulse article: Leadership and Life Lessons From an Elite Athlete and a Dying Man.

Learning to Learn

“I didn’t know robots had advanced so far,” a reader remarked after last week’s post about how computers are displacing knowledge workers. What changed to make that happen? The machines learned how to learn.

This is from “Artificial Intelligence Goes Bilingual—Without a Dictionary,” Science magazine, Nov. 28, 2017:

“Imagine that you give one person lots of Chinese books and lots of Arabic books—none of them overlapping—and the person has to learn to translate Chinese to Arabic. That seems impossible, right?” says . . . Mikel Artetxe, a computer scientist at the University of the Basque Country (UPV) in San Sebastián, Spain. “But we show that a computer can do that.”

Most machine learning—in which neural networks and other computer algorithms learn from experience—is “supervised.” A computer makes a guess, receives the right answer, and adjusts its process accordingly. That works well when teaching a computer to translate between, say, English and French, because many documents exist in both languages. It doesn’t work so well for rare languages, or for popular ones without many parallel texts.

[This learning technique is called] unsupervised machine learning. [A computer using this technique] constructs bilingual dictionaries without the aid of a human teacher telling them when their guesses are right.
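
The supervised loop the article describes — make a guess, receive the right answer, adjust — is easy to see in miniature. This toy sketch is my own illustration of that loop, not the researchers’ method: it learns the mapping y = 2x from labeled examples by nudging a single weight.

```python
# Labeled training pairs (x, y): the "parallel text" of this toy problem.
examples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, 8.0)]
weight, learning_rate = 0.0, 0.05

for epoch in range(50):
    for x, y_true in examples:
        y_guess = weight * x                 # the computer makes a guess
        error = y_true - y_guess             # ...receives the right answer
        weight += learning_rate * error * x  # ...and adjusts its process

print(f"learned weight: {weight:.3f}")  # converges near 2.0
```

The unsupervised setting Artetxe describes removes the second step: there is no y_true, no parallel text, so the system has to find structure the two languages share on its own.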

Hmm. . . . I could have used that last year, when my wife and I spent three months visiting our daughter in South Korea. The Korean language is ridiculously complex; I never got much past “good morning.”

Go matches were a standard offering on the gym TVs where I worked out. (Imagine two guys in black suits staring intently at a game board — not exactly a riveting workout visual.) Go is also ridiculously complex, and mysterious, too: the masters seem to make moves more intuitively than analytically. But the days of human Go supremacy are over. Google wizard and overall overachiever Sebastian Thrun[1] explains why in this conversation with TED Curator Chris Anderson:

Artificial intelligence and machine learning is about 60 years old and has not had a great day in its past until recently. And the reason is that today, we have reached a scale of computing and datasets that was necessary to make machines smart. The new thing now is that computers can find their own rules. So instead of an expert deciphering, step by step, a rule for every contingency, what you do now is you give the computer examples and have it infer its own rules.

A really good example is AlphaGo. Normally, in game playing, you would really write down all the rules, but in AlphaGo’s case, the system looked over a million games and was able to infer its own rules and then beat the world’s reigning Go champion. That is exciting, because it relieves the software engineer of the need of being super smart, and pushes the burden towards the data.

20 years ago the computers were as big as a cockroach brain. Now they are powerful enough to really emulate specialized human thinking. And then the computers take advantage of the fact that they can look at much more data than people can. AlphaGo looked at more than a million games. No human expert can ever study a million games. So as a result, the computer can find rules that even people can’t find.

Thrun made those comments in April 2017. AlphaGo’s championship reign was short-lived: it was unseated a mere six months later by a new cyber challenger that taught itself without reviewing all that data. This is from “AlphaGo Zero Shows Machines Can Become Superhuman Without Any Help,” MIT Technology Review, October 18, 2017.

AlphaGo wasn’t the best Go player on the planet for very long. A new version of the masterful AI program has emerged, and it’s a monster. In a head-to-head matchup, AlphaGo Zero defeated the original program by 100 games to none.

Whereas the original AlphaGo learned by ingesting data from hundreds of thousands of games played by human experts, AlphaGo Zero started with nothing but a blank board and the rules of the game. It learned simply by playing millions of games against itself, using what it learned in each game to improve.

The new program represents a step forward in the quest to build machines that are truly intelligent. That’s because machines will need to figure out solutions to difficult problems even when there isn’t a large amount of training data to learn from.

“The most striking thing is we don’t need any human data anymore,” says Demis Hassabis, CEO and cofounder of DeepMind [the creators of AlphaGo Zero].

“By not using human data or human expertise, we’ve actually removed the constraints of human knowledge,” says David Silver, the lead researcher at DeepMind and a professor at University College London. “It’s able to create knowledge for itself from first principles.”

Did you catch that? “We’ve removed the constraints of human knowledge.” Wow. No wonder computers are elbowing all those knowledge workers out of the way.
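To make the self-play idea concrete, here is a toy learner for the old matchstick game of Nim: 21 sticks, players alternate taking one, two, or three, and whoever takes the last stick wins. This is my own illustration, nowhere near AlphaGo Zero's scale or method, but the spirit is the same: it starts with nothing but the rules and improves purely by playing against itself.

    import random

    random.seed(0)
    Q = {}  # learned value of taking `move` sticks when `sticks` remain

    def best_move(sticks, explore=0.1):
        moves = [m for m in (1, 2, 3) if m <= sticks]
        if random.random() < explore:  # occasionally experiment
            return random.choice(moves)
        return max(moves, key=lambda m: Q.get((sticks, m), 0.0))

    for _ in range(50_000):  # games played against itself
        sticks, history = 21, []
        while sticks > 0:
            move = best_move(sticks)
            history.append((sticks, move))
            sticks -= move
        value = 1.0  # whoever moved last took the last stick and won
        for state, move in reversed(history):
            old = Q.get((state, move), 0.0)
            Q[(state, move)] = old + 0.1 * (value - old)
            value = -value  # players alternate, so flip winner and loser

    print(best_move(21, explore=0.0))  # perfect play takes 1, leaving 20

After fifty thousand games and roughly half a million moves, it discovers what game theorists already know: leave your opponent a multiple of four and you cannot lose. No human games, no human data, just the rules.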

What’s left for humans to do? We’ll hear from Sebastian Thrun and others on that topic next time.


[1] Sebastian Thrun’s TED bio describes him as “an educator, entrepreneur and troublemaker. After a long life as a professor at Stanford University, Thrun resigned from tenure to join Google. At Google, he founded Google X, home to self-driving cars and many other moonshot technologies. Thrun also founded Udacity, an online university with worldwide reach, and Kitty Hawk, a ‘flying car’ company. He has authored 11 books, 400 papers, holds 3 doctorates and has won numerous awards.”

 


Capitalism on the Fritz[1]

In November 2008, as the global financial crash was gathering pace, the 82-year-old British monarch Queen Elizabeth visited the London School of Economics. She was there to open a new building, but she was more interested in the assembled academics. She asked them an innocent but pointed question. Given its extraordinary scale, how was it possible that no one saw it coming?

The Queen’s question went to the heart of two huge failures. Western capitalism came close to collapsing in 2007-2008 and has still not recovered. And the vast majority of economists had not understood what was happening.

That’s from the Introduction to Rethinking Capitalism (2016), edited by Michael Jacobs and Mariana Mazzucato.[2] The editors and authors review a catalogue of chronic economic “dysfunction” that they trace to policy-makers’ continued allegiance to neoliberal economic orthodoxy even as it has been breaking down over the past four decades.

Before we get to their dysfunction list, let’s give the other side equal time. First, consider an open letter from Warren Buffett published in Time last week. It begins this way:

“I have good news. First, most American children are going to live far better than their parents did. Second, large gains in the living standards of Americans will continue for many generations to come.”

Mr. Buffett acknowledges that “The market system . . . has also left many people hopelessly behind,” but assures us that “These devastating side effects can be ameliorated,” observing that “a rich family takes care of all its children, not just those with talents valued by the marketplace.” With this compassionate caveat, he is definitely bullish on America’s economy:

In the years of growth that certainly lie ahead, I have no doubt that America can both deliver riches to many and a decent life to all. We must not settle for less.

So, apparently, is our Congress. The new tax law is a virtual pledge of allegiance to the neoliberal economic model. Barring a significant pullback of the law (which seems unlikely), we now have eight years to watch how its assumptions play out.

And now, back to Rethinking Capitalism’s dysfunction list (which I’ve seen restated over and over in my research):

  • Productivity and wages no longer move in tandem — the latter lag behind the former.
  • This has been going on for several decades,[3] during which living standards (adjusted for inflation) for the majority of households have been flat.
  • This is a problem because consumer spending accounts for over 70% of U.S. GDP. What hurts consumers hurts the whole economy.
  • What economic growth there has been is mostly the result of spending fueled by consumer and corporate debt. This is especially true of the post-Great Recession “recovery.”
  • Meanwhile, companies have been increasing production through increased automation — most recently through intelligent machines — which means getting more done with fewer employees.
  • That means a shrinking portion of marginal output is attributable to human (wage-earner) effort, which depresses consumer incomes.
  • The job marketplace has responded with new dynamics, featuring a worldwide rise of “non-standard” work (temporary, part-time, and self-employed).[4]
  • Overall, there has been an increase in the number of lower-paid workers and a rise in intractable unemployment — especially among young people.
  • Adjusting to these new realities has left traditional wage-earners with feelings of meaninglessness and disempowerment, fueling populist backlash movements.
  • In the meantime, economic inequality (both wealth and income) has grown to levels not seen since pre-revolutionary France, the days of the Robber Barons, and the Roaring Twenties.
  • Economic inequality means that the shrinking share of compensation paid out in wages, salaries, bonuses, and benefits has been dramatically skewed toward the top of the earnings scale, with much less (both proportionately and absolutely) going to those at the middle and bottom.[5]
  • Increased wealth at the top hasn’t produced enough additional consumer spending by the top 20% to offset the lost demand (spending) of the lower 80% of income earners, except to the extent that demand is propped up by consumer debt.
  • Instead, increased wealth at the top end is turned into “rentable” assets — e.g., real estate, intellectual property, and privatized holdings in what used to be the “commons” — which drives up both their value (cost) and the rents derived from them. This creates a “rentier” culture in which lower-income earners are increasingly stressed to meet rental rates, and ultimately are driven out of certain markets.
  • Inequality has also created a new working-class system in which a large share of workers face precarious, uncertain, or unsustainable employment and earnings.
  • Inequality has also limited economic opportunity and social mobility — e.g., there is a new kind of “glass floor” below which the top 20% are unlikely to fall, and a “glass ceiling” above which the bottom 80% are unlikely to rise.
  • In the meantime, the social safety nets that developed during the post-WWII boom (as Buffett’s “rich family” took care of “all its children”) have been largely torn down since the advent of “workfare” in the ’80s and ’90s, leaving those at the bottom and middle more exposed than ever.

The editors of Rethinking Capitalism believe that “These failings are not temporary, they are structural.” That conclusion has led some to believe that people like Warren Buffett are seriously misguided in their continued faith in Western capitalism as a reliable societal institution.

More on that next time.


[1] I wondered where the expression “on the fritz” came from, and tried to find out. Surprisingly, no one seems to know.

[2] Michael Jacobs is an environmental economist and political theorist; at the time the book was published, he was a visiting professor at University College London. Mariana Mazzucato is an economics professor at the University of Sussex.

[3] “In the US, real median household income was barely higher in 2014 than it had been in 1990, though GDP had increased by 78 percent over the same period. Though beginning earlier in the US, this divergence of average incomes from overall economic growth has now become a feature of most advanced economies.” Rethinking Capitalism.

[4] These have accounted for “half the jobs created since the 1990s and 60 per cent since the 2008 crisis.” Rethinking Capitalism.

[5] “Meanwhile, those at the very top of the income distribution have done exceedingly well… In the US, the incomes of the richest 1 percent rose by 142 per cent between 1980 and 2013 (from an average of $461,910, adjusted for inflation, to $1,119,315) and their share of national income doubled, from 10 to 20 per cent. In the first three years of the recovery after the 2008 crash, an extraordinary 91 per cent of the gains in income went to the richest one-hundredth of the population.” Rethinking Capitalism.

 

Kevin Rhodes left a successful long-term law practice to scratch a creative itch and lived to tell about it… barely. Since then, he has been on a mission to bring professional excellence and personal wellbeing to the people who learn, teach, and practice the law. He has also blogged extensively and written several books about his unique journey to wellness, including how he deals with primary progressive MS through an aggressive regime of exercise, diet, and mental conditioning.