September 22, 2018

Economics + Math = Science?

The human brain is wired to recognize patterns, which it then organizes into higher-level models and theories and beliefs, which in turn it uses to explain the past and present, and to predict the future. Models offer the consolation of rationality and understanding, which provide a sense of control. All of this is foundational to classical economic theory, which assumes we approach commerce equipped with an internal rational scale that weighs supply and demand, cost and benefit, and that we then act according to our assessment of what we give for what we get back. This assumption of an internal calculus has caused mathematical modeling to reign supreme in the practice of economics.

The trouble is, humans aren’t as innately calculating as classical economics would like to believe — so says David Graeber, professor of anthropology at the London School of Economics, in his new book Bullshit Jobs:

According to classical economic theory, homo oeconomicus, or “economic man” — that is, the model human being that lies behind every prediction made by the discipline — is assumed to be motivated by a calculus of costs and benefits.

All the mathematical equations by which economists bedazzle their clients, or the public, are founded on one simple assumption: that everyone, left to his own devices, will choose the course of action that provides the most of what he wants for the least expenditure of resources and effort.

It is the simplicity of the formula that makes the equations possible: if one were to admit that humans have complicated emotions, there would be too many factors to take into account, it would be impossible to weigh them, and predictions would not be made.

Therefore, while an economist will say that of course everyone is aware that human beings are not really selfish, calculating machines, assuming they are makes it possible to explain.

This is a reasonable statement as far as it goes. The problem is there are many dimensions of human life where the assumption clearly doesn’t hold — and some of them are precisely in the domain of what we like to call the economy.
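The calculus Graeber describes can be made concrete with a toy sketch — the options and numbers below are invented for illustration, and come from neither Graeber's book nor any real data:

```python
# Toy model of homo oeconomicus: choose whichever option yields the most
# benefit for the least cost. All option names and figures are invented.
options = {
    "take the job":     {"benefit": 50_000, "cost": 2_000},
    "start a business": {"benefit": 80_000, "cost": 45_000},
    "stay home":        {"benefit": 10_000, "cost": 0},
}

def net_utility(option):
    """The 'internal rational scale': what you get minus what you give."""
    return option["benefit"] - option["cost"]

best = max(options, key=lambda name: net_utility(options[name]))
print(best)  # → take the job (net 48,000)
```

The simplicity is the point: reduce every motive to a single number and the choice becomes a one-line maximization — which is exactly the reduction Graeber argues doesn't survive contact with actual human beings.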

Economics’ reliance on mathematics has been a topic of lively debate for a long time:

The trouble . . . is that measurement and mathematics do not guarantee the status of science – they guarantee only the semblance of science. When the presumptions or conclusions of a scientific theory are absurd or simply false, the theory ought to be questioned and, eventually, rejected. The discipline of economics, however, is presently so blinkered by the talismanic authority of mathematics that theories go overvalued and unchecked.

In 1886, an article in Science accused economics of misusing the language of the physical sciences to conceal “emptiness behind a breastwork of mathematical formulas.” More recently, Deirdre N. McCloskey’s The Rhetoric of Economics (1998) and Robert H. Nelson’s Economics as Religion (2001) both argued that mathematics in economic theory serves, in McCloskey’s words, primarily to deliver the message “Look at how very scientific I am.”

After the Great Recession, the failure of economic science to protect our economy was once again impossible to ignore. In 2009, the Nobel Laureate Paul Krugman tried to explain it in The New York Times with a version of the mathiness diagnosis. “As I see it,” he wrote, “the economics profession went astray because economists, as a group, mistook beauty, clad in impressive-looking mathematics, for truth.” Krugman named economists’ “desire . . . to show off their mathematical prowess” as the “central cause of the profession’s failure.”

The result is people . . . who trust the mathematical exactitude of theories without considering their performance – that is, who confuse math with science, rationality with reality.

There is no longer any excuse for making the same mistake with economic theory. For more than a century, the public has been warned, and the way forward is clear. It’s time to stop wasting our money and recognise the high priests for what they really are: gifted social scientists who excel at producing mathematical explanations of economies, but who fail, like astrologers before them, at prophecy.

“The New Astrology: By fetishising mathematical models, economists turned economics into a highly paid pseudoscience,” Aeon Magazine.

Economists may bristle at being compared to astrologers, but as we have seen, their skill at prediction seems about comparable.

In the coming weeks we’ll look at other models emerging from the digital revolution, consider what they can tell us that classical economic theory can’t, and how they are affecting the world of work.

 

Kevin Rhodes would create workplace utopia if he could. But since he doesn’t trust himself to do that, he writes this blog instead. Thanks for reading!

Protopia

Last week we heard professional skeptic Michael Shermer weigh in as an optimistic believer in progress (albeit guardedly — I mean, he is a skeptic after all) in his review of the new book It’s Better Than It Looks. That doesn’t mean he’s ready to stake a homestead claim on the Utopian frontier: the title of a recent article tells you what you need to know about where he stands on that subject: “Utopia Is A Dangerous Ideal: We Should Aim For Protopia.”[1]

He begins with a now-familiar litany of utopias that soured into dystopias in the 19th and 20th centuries. He then endorses the “protopian” alternative, quoting an oft-cited passage in which Kevin Kelly[2] coined the term.

Protopia is a state that is better today than yesterday, although it might be only a little better. Protopia is much much harder to visualize. Because a protopia contains as many new problems as new benefits, this complex interaction of working and broken is very hard to predict.

Doesn’t sound like much, but there’s more to it than first appears. Protopia is about incremental, sustainable progress — even amid the impatient onslaught of technology. Kelly’s optimism is ambitious — for a full dose of it, see his book The Inevitable: Understanding the 12 Technological Forces That Will Shape Our Future (2016). This is from the book blurb:

Much of what will happen in the next thirty years is inevitable, driven by technological trends that are already in motion. In this fascinating, provocative new book, Kevin Kelly provides an optimistic road map for the future, showing how the coming changes in our lives — from virtual reality in the home to an on-demand economy to artificial intelligence embedded in everything we manufacture — can be understood as the result of a few long-term, accelerating forces.

These larger forces will completely revolutionize the way we buy, work, learn, and communicate with each other. By understanding and embracing them, says Kelly, it will be easier for us to remain on top of the coming wave of changes and to arrange our day-to-day relationships with technology in ways that bring forth maximum benefits.

Kelly’s bright, hopeful book will be indispensable to anyone who seeks guidance on where their business, industry, or life is heading — what to invent, where to work, in what to invest, how to better reach customers, and what to begin to put into place — as this new world emerges.

Protopian thinking begins with Kelly’s “bright, hopeful” attitude of optimism about progress (again, remember the thinkers we heard from last week). To adopt both optimism and the protopian vision it produces, we’ll need to relinquish our willful cognitive blindness, our allegiance to inadequate old models and explanations, and our nostalgic urge to resist and retrench.

Either that, or we can just die off. Economist Paul Samuelson said this in a 1975 Newsweek column:

As the great Max Planck, himself the originator of the quantum theory in physics, has said, science makes progress funeral by funeral: the old are never converted by the new doctrines, they simply are replaced by a new generation.

Planck himself said it this way, in his Scientific Autobiography and Other Papers:

 A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.

Progress funeral by funeral[3]. . . . If that’s what it takes, that’s the way protopian progress will be made — in the smallest increments of “better today than yesterday” we will allow. But I somehow doubt progress will be that slow; I don’t think technology can wait.

Plus, if we insist on “not in my lifetime, you don’t,” we’ll miss out on a benefit we probably wouldn’t have seen coming: technology itself guiding us as we stumble our way forward through the benefits and problems of progress. There’s support for that idea in the emerging field of complexity economics — I’ve mentioned it before, and we’ll look more into it next time.


[1] The article is based on Shermer’s recent book Heavens on Earth: The Scientific Search for the Afterlife, Immortality, and Utopia.

[2] Kelly is a prolific TED talker; his talks reveal his optimistic protopian ideas. Here’s his bio.

[3] See the Quote Investigator’s history of these quotes.

 


Utopia for Realists, Continued

Like humor and satire, utopias throw open the windows of the mind.

Rutger Bregman

Continuing with Rutger Bregman’s analysis of utopian thinking that we began last week:

Let’s first distinguish between two forms of utopian thought. The first is the most familiar, the utopia of the blueprint. Instead of abstract ideals, blueprints consist of immutable rules that tolerate no discussion.

There is, however, another avenue of utopian thought, one that is all but forgotten. If the blueprint is a high-resolution photo, then this utopia is just a vague outline. It offers not solutions but guideposts. Instead of forcing us into a straitjacket, it inspires us to change. And it understands that, as Voltaire put it, the perfect is the enemy of the good. As one American philosopher has remarked, ‘any serious utopian thinker will be made uncomfortable by the very idea of the blueprint.’

It was in this spirit that the British philosopher Thomas More literally wrote the book on utopia (and coined the term). More understood that utopia is dangerous when taken too seriously. ‘One needs to be able to believe passionately and also be able to see the absurdity of one’s own beliefs and laugh at them,’ observes philosopher and leading utopia expert Lyman Tower Sargent. Like humor and satire, utopias throw open the windows of the mind. And that’s vital. As people and societies get progressively older they become accustomed to the status quo, in which liberty can become a prison, and the truth can become lies. The modern creed — or worse, the belief that there’s nothing left to believe in — makes us blind to the shortsightedness and injustice that still surround us every day.

Thus the lines are drawn between utopian blueprints grounded in dogma vs. utopian ideals arising from sympathy and compassion. Both begin with good intentions, but the pull of entropy is stronger with the former — at least, so says Rutger Bregman, and he’s got good company in Sir Thomas More and others. Blueprints require compliance, and their purveyors are zealously ready to enforce it. Ideals, on the other hand, inspire creativity, and creativity requires acting in the face of uncertainty, living with imperfection, responding with resourcefulness and resilience when best intentions don’t play out, and a lot of just plain showing up and grinding it out. I have a personal bias for coloring outside the lines, but I must confess that my own attempts to promote utopian workplace ideals have given me pause.

For years, I led interactive workshops designed to help people creatively engage with their big ideas about work and wellbeing — variously tailored for CLE ethics credits or for general audiences. I realized recently that, reduced to their essence, they employed the kinds of ideals advocated by beatnik-era philosopher and metaphysicist Alan Watts. (We met him several months ago — he’s the “What would you do if money were no object?” guy.)

The workshops generated hundreds of heartwarming “this was life-changing” testimonies, but I could never quite get over this nagging feeling that the participants mostly hadn’t achieved escape velocity, and come next Monday they would be back to the despair of “But everybody knows you can’t earn any money that way.”

I especially wondered about the lawyers, for whom “I hate my job but love my paycheck” was a recurrent theme. The post-WWII neoliberal economic tide floated the legal profession’s boat, too, but prosperity has done little for lawyer happiness and well-being. True, we’re seeing substantial quality-of-life changes in the profession recently (which I’ve blogged about in the past), but most have been around the edges, while overall lawyers’ workplace reality remains a bulwark of what one writer calls the “over-culture” — the overweening force of culturally accepted norms about how things are and should be — and the legal over-culture has stepped in line with the worldwide workplace trend of favoring wealth over a sense of meaning and value.

Alan Watts’ ideals were widely adopted by the burgeoning self-help industry, which also rode the neoliberal tide to prosperous heights. Self-help tends to be long on inspiration and short on grinding, and sustainable creative change requires large doses of both. I served up both in the workshops, but still wonder if they were just too … well, um … beatnik … for the law profession. I’ll never know — the guy who promoted the workshops retired, and I quit doing them. If nothing else, writing this series has opened my eyes to how closely law practice mirrors worldwide economic and workplace dynamics. We’ll look more at that in the coming weeks.

 


Utopia for Realists

Dutchman Rutger Bregman is a member of the Forbes 30 Under 30 Europe Class of 2017. He’s written four books on history, philosophy, and economics. In his book Utopia for Realists (2016), he recognizes the dangers of utopian thinking:

True, history is full of horrifying forms of utopianism — fascism, communism, Nazism — just as every religion has also spawned fanatical sects.

According to the cliché, dreams have a way of turning into nightmares. Utopias are a breeding ground for discord, violence, even genocide. Utopias ultimately become dystopias.

Having faced up to the dangers, however, he presses on:

Let’s start with a little history lesson: In the past, everything was worse. For roughly 99% of the world’s history, 99% of humanity was poor, hungry, dirty, afraid, stupid, sick, and ugly. As recently as the seventeenth century, the French philosopher Blaise Pascal (1623-62) described life as one giant vale of tears. “Humanity is great,” he wrote, “because it knows itself to be wretched.” In Britain, fellow philosopher Thomas Hobbes (1588-1679) concurred that human life was basically, “solitary, poor, nasty, brutish, and short.”

But in the last 200 years, all that has changed. In just a fraction of the time that our species has clocked on this planet, billions of us are suddenly rich, well nourished, clean, safe, smart, healthy, and occasionally even beautiful.[1]

Welcome, in other words, to the Land of Plenty. To the good life, where almost everyone is rich, safe, and healthy. Where there’s only one thing we lack: a reason to get out of bed in the morning. Because, after all, you can’t really improve on paradise. Back in 1989, the American philosopher Francis Fukuyama already noted that we had arrived in an era where life has been reduced to “economic calculation, the endless solving of technical problems, environmental concerns, and the satisfaction of sophisticated consumer demands.”[2]

Notching up our purchasing power another percentage point, or shaving a couple off our carbon emissions; perhaps a new gadget — that’s about the extent of our vision. We live in an era of wealth and overabundance, but how bleak it is. There is “neither art nor philosophy,” Fukuyama says. All that’s left is the “perpetual caretaking of the museum of human history.”

According to Oscar Wilde, upon reaching the Land of Plenty, we should once more fix our gaze on the farthest horizon and rehoist the sails. “Progress is the realization of utopias,” he wrote. But the farthest horizon remains blank. The Land of Plenty is shrouded in fog. Precisely when we should be shouldering the historic task of investing this rich, safe, and healthy existence with meaning, we’ve buried utopia instead.

In fact, most people in wealthy countries believe children will actually be worse off than their parents. According to the World Health Organization, depression has even become the biggest health problem among teens and will be the number-one cause of illness worldwide by 2030.[3]

It’s a vicious cycle. Never before have so many young people been seeing a psychiatrist. Never before have there been so many early career burnouts. And we’re popping antidepressants like never before. Time and again, we blame collective problems like unemployment, dissatisfaction, and depression on the individual. If success is a choice, so is failure. Lost your job? You should have worked harder. Sick? You must not be leading a healthy lifestyle. Unhappy? Take a pill.

No, the real crisis is that we can’t come up with anything better. We can’t imagine a better world than the one we’ve got. The real crisis of our times, of my generation, is not that we don’t have it good, or even that we might be worse off later on. “The best minds of my generation are thinking about how to make people click ads,” a former math whiz at Facebook recently lamented.[4]

After this assessment, Bregman shifts gears. “The widespread nostalgia, the yearning for a past that really never was,” he says, “suggest that we still have ideals, even if we have buried them alive.” From there, he distinguishes the kind of utopian thinking we do well to avoid from the kind we might dare to embrace. We’ll follow him into that discussion next time.


[1] For a detailed (1,000 pages total) history of this economic growth from general nastiness to the standard of living we enjoy now, I’ll refer you again to two books I plugged a couple weeks ago: Americana: A 400 Year History Of American Capitalism and The Rise and Fall of American Growth.

[2] See here and here for a sampling of updates/opinions providing a current assessment of Fukuyama’s 1989 article.

[3] World Health Organization, Health for the World’s Adolescents, June 2014. See this executive summary.

[4] “This Tech Bubble is Different,” Bloomberg Businessweek, April 14, 2011.

 


 

The Perils of Predicting

“We were promised flying cars, and instead what we got was 140 characters.”

Peter Thiel, PayPal co-founder[1]

Economic forecasts and policy solutions are based on predictions, and predicting is a perilous business.

I grew up in a small town in western Minnesota. Our family got the morning paper — the Minneapolis Tribune. The afternoon Star’s subscribers got their paper around 4:00. A friend’s dad was a lawyer — his family got both. In a childhood display of cognitive bias, I never could understand why anyone would want an afternoon paper. News was made the day before, so you could read about it the next morning, and that was that.

I remember one Tribune headline to this day: it predicted nuclear war in 10 years. That was 1961, when I was eight. The Cuban missile crisis was the following year, and for a while it looked like it wouldn’t take all ten years for the headline’s prediction to come true.

The Tribune helpfully ran designs and instructions for building your own fallout shelter. Our house had the perfect place for one: a root cellar off one side of the basement — easily the creepiest place in the house. You descended a couple steps down from the basement floor, through a stubby cinderblock hallway, past a door hanging on one hinge. Ahead of you was a bare light bulb swinging from the ceiling — it flickered, revealing decades of cobwebs and homeowner flotsam worthy of Miss Havisham. It was definitely a bomb shelter fixer-upper, but it was the right size, and as an added bonus it had a concrete slab over it — if you banged the ground above with a pipe it made a hollow sound.

I scoured the fallout shelter plans, but my dad said no. Someone else in town built one — the ventilation pipes stuck out of a room-size mound next to their house. People used to go by it on their Sunday drives. Meanwhile I ran my own personal version of the Doomsday Clock for the next ten years until my 18th birthday came and went. So much for that headline.

I also remember a Sunday cartoon that predicted driverless cars. I rediscovered it in this article from Gizmodo:[2]

The article explains:

The period between 1958 and 1963 might be described as a Golden Age of American Futurism, if not the Golden Age of American Futurism. Bookended by the founding of NASA in 1958 and the end of The Jetsons in 1963, these few years were filled with some of the wildest techno-utopian dreams that American futurists had to offer. It also happens to be the exact timespan for the greatest futuristic comic strip to ever grace the Sunday funnies: Closer Than We Think.

Jetpacks, meal pills, flying cars — they were all there, beautifully illustrated by Arthur Radebaugh, a commercial artist based in Detroit best known for his work in the auto industry. Radebaugh would help influence countless Baby Boomers and shape their expectations for the future. The influence of Closer Than We Think can still be felt today.

Timing is Everything

Apparently timing is everything in the prediction business. The driverless car prediction was accurate, just way too early. The Tribune’s nuclear war prediction was inaccurate (and let’s hope not just because it was too early). Predictions from the hapless mythological prophetess Cassandra were never inaccurate or untimely: she was cursed by Apollo (who ran a highly successful prophecy business at Delphi) with the gift of always being right but never believed.

Now that would be frustrating.

As I said last week, predicting is as perilous as policy-making. An especially perilous version of both is utopian thinking. There’s been plenty of utopian economic thinking the past couple centuries, and today’s economists continue the grand tradition — to their peril, and potentially to ours. We’ll look at some economic utopian thinking (and the case for and against it) beginning next time.

 

Apparently timing is everything in country music, too. I’m not an aficionado, but I did come across this video while researching this post. The guy’s got a nice baritone.


[1] Peter Thiel needn’t despair about the lack of flying cars anymore: here’s a video re: a prototype from Sebastian Thrun and his company Kitty Hawk.

[2] The article is worth a look, if you like that sort of thing. So is this Smithsonian article on the Jetsons. And while we’re on the topic, check out this IEEE Spectrum article on a 1960 RCA initiative that had self-driving cars just around the corner, and this Atlantic article about an Electronic Age/Science Digest article that made the same prediction even earlier — in 1958.

 

Kevin Rhodes writes about individual growth and cultural change, drawing on insights from science, technology, disruptive innovation, entrepreneurship, neuroscience, psychology, and personal experience, including his own unique journey to wellness — dealing with primary progressive MS through an aggressive regime of exercise, diet, and mental conditioning.

Check out Kevin’s latest LinkedIn Pulse article: “When We Move, We Can Achieve the Impossible.”

The Perils of Policy

Economics articles, books, and speeches usually end with policy recommendations. You can predict them in advance if you know the ideological bias of the source. Let’s look at three, for comparison.

First, this Brookings Institution piece — What happens if robots take the jobs? The impact of emerging technologies on employment and public policy — written a couple years back by Darrell M. West, vice president and director of Governance Studies and founding director of the Center for Technology Innovation at Brookings.

Second, this piece — Inequality isn’t inevitable. Here’s what we can do differently — published by the World Economic Forum and written last month by a seriously over-achieving 23-year-old globe-trotting Italian named Andrea Zorzetto.

Third, this piece — Mark My Words: This Political Event Will be Unlike Anything We’ve Seen in 50 Years — by Porter Stansberry, which showed up in my Facebook feed last month. Stansberry offers this bio: “You may not know me, but nearly 20 years ago, I started a financial research and education business called Stansberry Research. Today we have offices in the U.S., Hong Kong, and Singapore. We serve more than half a million paid customers in virtually every country (172 at last count). We have nearly 500 employees, including dozens of financial analysts, corporate attorneys, accountants, technology experts, former hedge fund managers, and even a medical doctor.”

The Brookings article is what you would expect: long, careful, reasoned. Energetic Mr. Zorzetto’s article is bright, upbeat, and generally impressive. Porter Stansberry’s missive is … well, we’ll just let it speak for itself. I chose these three because they all cite the same economic data and developments, but reach for different policy ideals. There’s plenty more where these came from. Read enough of them, and they start to organize themselves into multiple opinion categories which after numerous iterations all mush together into vague uncertainty.

There’s got to be a better way. Turns out there is: how about if we ask the economy itself what it’s up to? That’s what the emerging field of study called “complexity economics” does. Here’s a short explanation of it, published online by Exploring Economics, an “open source learning platform.” The word “complexity” in this context doesn’t mean “hard to figure out.” It’s a technical term borrowed from a systems theory approach that originated in science, mathematics, and statistics.

Complexity economics bypasses ideological bias and lets the raw data speak for itself. It’s amazing what you hear when you give data a voice — for example, an answer to the question we heard the Queen of England ask a few posts back, which a group of Cambridge economists couldn’t answer (neither could anyone else, for that matter): Why didn’t we see the 2007-2008 Recession coming? The economy had an answer; you just needed to know how to listen to it. (More on that coming up.)

What gives data its voice? Ironically, the very job-threatening technological trends we’ve been talking about in the past couple months:

Big Data + Artificial Intelligence + Brute Strength Computer Processing Power
= Complexity Economics

Which means — in a stroke of delicious irony — guess whose jobs are most threatened by this new approach to economics? You guessed it: the jobs currently held by ideologically-based economists making policy recommendations. For them, economics just became “the dismal science” in a whole new way.
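In practice, complexity economists typically replace the single rational agent of classical theory with simulations of many simple, interacting agents, and watch aggregate behavior emerge. Here’s a minimal sketch of that style of modeling — every rule and parameter below is invented for illustration, not a real model from the field:

```python
import random

random.seed(42)  # make the toy run reproducible

# Minimal agent-based sketch in the spirit of complexity economics:
# many simple agents whose spending depends on their neighbors'.
# No agent optimizes anything globally; aggregate behavior emerges
# from local interactions plus small random shocks.
N = 100                    # number of agents (arbitrary)
spending = [1.0] * N       # everyone starts with identical spending

def step(spending):
    nxt = []
    for i in range(N):
        # Each agent looks at its two neighbors on a ring.
        neighbors = (spending[i - 1] + spending[(i + 1) % N]) / 2
        # Half inertia, half imitation, plus a small random shock.
        nxt.append(0.5 * spending[i] + 0.5 * neighbors
                   + random.gauss(0, 0.05))
    return nxt

for _ in range(50):
    spending = step(spending)

total = sum(spending)
print(f"aggregate spending after 50 steps: {total:.2f}")
```

Even this toy shows the field’s core move: instead of assuming an equilibrium and deriving predictions from it, you simulate the interactions and let the aggregate pattern — booms, busts, clustering — tell you what the system is up to.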

Complex systems theory is as close to a Theory of Everything as I’ve seen. No kidding. We’ll be looking at it in more depth, but first… Explaining is one thing, but predicting is another. Policy-making invariably relies on the ability to predict outcomes, but predicting has its own perils. We’ll look at those next time. In the meantime, just for fun…

                           

If you click on the first image, you’ll go to the original silent movie melodrama series. A click on the second image takes you to Wikipedia re: the 1947 Hollywood technicolor remake. The original is from a period of huge economic growth and quality of life advancements. The movie came out at the beginning of equally powerful post-WWII economic growth. Which leads to another economic history book I can’t recommend highly enough, shown in the image on the left below. Like Americana, which I recommended a couple weeks ago, it’s well researched and readable. They’re both big, thick books, but together they offer a fascinating course on all the American history we never knew. (Click the images for more.)

                    

 


The Fatal Flaw

Several years ago I wrote a screenplay that did okay in a contest. I made a couple trips to Burbank to pitch it, got no sustained interest, and gave up on it. Recently, someone who actually knows what he’s doing encouraged me to revise and re-enter it. Among other things, he introduced me to Inside Story: The Power of the Transformational Arc, by Dara Marks (2007). The book describes what the author calls “the essential story element” — which, it turns out, is remarkably apt not just for film but for life in general, and particularly for talking about economics, technology, and the workplace.

No kidding.

What is it?

The Fatal Flaw.

This is from the book:

First, it’s important to recap or highlight the fundamental premise on which the fatal flaw is based:

  • Because change is essential for growth, it is a mandatory requirement for life.
  • If something isn’t growing and developing, it can only be headed toward decay and death.
  • There is no condition of stasis in nature. Nothing reaches a permanent position where neither growth nor diminishment is in play.

As essential as change is, most of us resist it, and cling rigidly to old survival systems because they are familiar and “seem” safer. In reality, if an old, obsolete survival system makes us feel alone, isolated, fearful, uninspired, unappreciated, and unloved, we will reason that it’s easier to cope with what we know than with what we haven’t yet experienced. As a result, most of us will fight to sustain destructive relationships, unchallenging jobs, unproductive work, harmful addictions, unhealthy environments, and immature behavior long after there is any sign of life or value to them.

This unyielding commitment to old, exhausted survival systems that have outlived their usefulness, and resistance to the rejuvenating energy of new, evolving levels of existence and consciousness is what I refer to as the fatal flaw of character:

The Fatal Flaw is a struggle within a character
to maintain a survival system
long after it has outlived its usefulness.

As it is with screenwriting, so it is with us as we’re reckoning with the wreckage of today’s collision among economics, technology, and the workplace. We’re like the character who must change or die to make the story work: our economic survival is at risk, and failure to adapt is fatal. Faced with that prospect, we can change our worldview, or we can wish we had. Trouble is, our struggle to embrace a new paradigm is as perilous as holding to an old one.

What’s more, we will also need to reckon with two peculiar dynamics of our time: “echo chambers” and “epistemic bubbles.” The following is from an Aeon Magazine article published earlier this week entitled “Escape The Echo Chamber”:

Something has gone wrong with the flow of information. It’s not just that different people are drawing subtly different conclusions from the same evidence. It seems like different intellectual communities no longer share basic foundational beliefs. Maybe nobody cares about the truth anymore, as some have started to worry. Maybe political allegiance has replaced basic reasoning skills. Maybe we’ve all become trapped in echo chambers of our own making – wrapping ourselves in an intellectually impenetrable layer of likeminded friends and web pages and social media feeds.

But there are two very different phenomena at play here, each of which subverts the flow of information in very distinct ways. Let’s call them echo chambers and epistemic bubbles. Both are social structures that systematically exclude sources of information. Both exaggerate their members’ confidence in their beliefs. But they work in entirely different ways, and they require very different modes of intervention. An epistemic bubble is when you don’t hear people from the other side. An echo chamber is what happens when you don’t trust people from the other side.

An echo chamber doesn’t destroy their members’ interest in the truth; it merely manipulates whom they trust and changes whom they accept as trustworthy sources and institutions.

Here’s a basic check: does a community’s belief system actively undermine the trustworthiness of any outsiders who don’t subscribe to its central dogmas? Then it’s probably an echo chamber.

That’s what we’re up against. We’ll plow fearlessly ahead next time.

 

Kevin Rhodes writes about individual growth and cultural change, drawing on insights from science, technology, disruptive innovation, entrepreneurship, neuroscience, psychology, and personal experience, including his own unique journey to wellness — dealing with primary progressive MS through an aggressive regime of exercise, diet, and mental conditioning.

Check out Kevin’s latest LinkedIn Pulse article: “When We Move, We Can Achieve the Impossible.”

On the Third Hand…

Will the machines take over the jobs? Ask a bunch of economists, and you’ll get opinions organized around competing ideologies, reflecting individual cognitive, emotional, and political biases. That’s been the experience of Martin Ford, entrepreneur, TED talker, and New York Times bestselling author of Rise of the Robots: Technology and the Threat of a Jobless Future:

In the field of economics the opinions all too often break cleanly along predefined political lines. Knowing the ideological predisposition of a particular economist is often a better predictor of what that individual is likely to say than anything contained in the data under examination. In other words, if you’re waiting for the economists to deliver some sort of definitive verdict on the impact that advancing technology is having on the economy, you may have a very long wait.[1]

In this Psychology Today article, Dr. Karl Albrecht[2] offers a neurological explanation for polarized thinking:

Recent research suggests that our brains may be pre-wired for dichotomized thinking. That’s a fancy name for thinking and perceiving in terms of two — and only two — opposing possibilities.

These research findings might help explain how and why the public discourse of our culture has become so polarized and rancorous, and how we might be able to replace it with a more intelligent conversation.

[O]ur brains can keep tabs on two tasks at a time, by sending each one to a different side of the brain. Apparently, we toggle back and forth, with one task being primary and the other on standby.

Add a third task, however, and one of the others has to drop off the to-do list.

Scans of brain activity during this task switching have led to the hypothesis that the brain actually likes handling things in pairs. Indeed, the brain itself is subdivided into two distinct half-brains, or hemispheres.

Curiously, part of our cranial craving for two-ness might be related to our own physiology: the human body is bilaterally symmetrical. Draw an imaginary center line down through the front of a person and you see a lot of parts (not all, of course), that come in pairs: two eyes, two ears, two nostrils, matching teeth on left and right sides, two shoulders, two arms, two hands, two nipples, two legs, two knees, and two feet. Inside you’ll find two of some things and one of others.

Some researchers are now extending this reasoning to suggest that the brain has a built-in tendency, when confronted by complex propositions, to selfishly reduce the set of choices to just two. Apparently it doesn’t like to work hard.

Considering how quickly we make our choices and set our opinions, it’s unlikely that all of the options will even be identified, never mind carefully considered.

“On the one hand this, on the other hand that,” we like to say. Lawyers perfect the art. Politics and the press also thrive on dichotomy:

Again, our common language encodes the effect of this anatomical self reference. “On the one hand, there is X. But on the other hand, we have Y.” Many people describe political views as being either “left” or “right.”

The popular press routinely constructs “news” stories around conflicts and differences between pairs of opposing people, factions, and ideologies. Bipolar conflict is the very essence of most of the news.

So, are robots and artificial intelligence going to trash the working world, or not?

Hmmm, there might be another option — several, actually. Dr. Albrecht urges us to find them:

Seek the “third hand” — and any other “hands” you can discover. Ask yourself, and others, “Are there other options to be considered?”

We’ll consider some third hand perspectives about the rise of the robots in the coming weeks.


[1] Martin Ford is also the consulting expert for Societe Generale’s new “Rise of the Robots” investment index, which focuses on companies that are “significant participants in the artificial intelligence and robotics revolution.”

[2] According to his website, Karl Albrecht is “an executive management consultant, futurist, lecturer, and author of more than 20 books on professional achievement, organizational performance, and business strategy. He is also a leading authority on cognitive styles and the development of advanced thinking skills. The Mensa Society honored him with its lifetime achievement award, for significant contributions by a member to the understanding of intelligence. Originally a physicist, and having served as a military intelligence officer and business executive, he now consults, lectures, and writes about whatever he thinks would be fun.”

 


Race Against the Machine, Continued

Rational choice theory is a cornerstone of conventional economic thinking. It states that:

Individuals always make prudent and logical decisions. These decisions provide people with the greatest benefit or satisfaction — given the choices available — and are also in their highest self-interest.
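Under that definition, the theory reduces every decision to a simple maximization: tally each option’s benefits and costs, and pick the one with the largest net benefit. A minimal sketch of that calculus — the options and the numbers below are entirely hypothetical, purely for illustration:

```python
# Rational choice theory in miniature: an agent picks whichever option
# maximizes net benefit (benefit minus cost). The options and numeric
# values here are hypothetical, chosen only to illustrate the calculus.

def rational_choice(options):
    """Return the option with the highest net benefit (benefit - cost)."""
    return max(options, key=lambda o: o["benefit"] - o["cost"])

choices = [
    {"name": "keep current job",       "benefit": 50, "cost": 30},  # net 20
    {"name": "retrain for new field",  "benefit": 80, "cost": 45},  # net 35
    {"name": "do nothing",             "benefit": 10, "cost": 5},   # net 5
]

best = rational_choice(choices)
print(best["name"])  # → retrain for new field
```

The simplicity is the point Graeber makes above: the model is tractable precisely because everything a person might care about must first be flattened into a single comparable number.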

Presumably Stephen Hawking, Elon Musk, and Bill Gates had something like this in mind when they published an open letter in January 2015 urging that artificial intelligence R&D should focus “not only on making AI more capable, but also on maximizing the societal benefit.” To execute on this imperative, they urged an interdisciplinary collaboration among “economics, law and philosophy, computer security, formal methods and, of course, various branches of AI itself.” (Since its release, the letter has garnered another 8,000 signatures — you can sign it, too, if you like.)

The letter’s steady, rational four paragraphs praise how technology has benefited the human race, and anticipate more of the same in the future, but its reception and the authors’ comments in other contexts are not so measured. As a result, the letter has become a cheering section for those who think humanity is losing its race against the robots.

Consider, for example, the following from an Observer article:

“Success in creating AI would be the biggest event in human history,” wrote Stephen Hawking in an op-ed, which appeared in The Independent in 2014. “Unfortunately, it might also be the last, unless we learn how to avoid the risks.” Professor Hawking added in a 2014 interview with BBC, “humans, limited by slow biological evolution, couldn’t compete and would be superseded by A.I.”

Elon Musk called the prospect of artificial intelligence “our greatest existential threat” in a 2014 interview with MIT students at the AeroAstro Centennial Symposium. “I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish.” Mr. Musk cites his decision to invest in the Artificial Intelligence firm, DeepMind, as a means to “just keep an eye on what’s going on with artificial intelligence. I think there is potentially a dangerous outcome there.”

Microsoft co-founder Bill Gates has also expressed concerns about Artificial Intelligence. During a Q&A session on Reddit in January 2015, Mr. Gates said, “I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”

Or consider this Elon Musk comment in Vanity Fair:

In a startling public reproach to his friends and fellow techies, Musk warned that they could be creating the means of their own destruction. He told Bloomberg’s Ashlee Vance, the author of the biography Elon Musk, that he was afraid that his friend Larry Page, a co-founder of Google and now the C.E.O. of its parent company, Alphabet, could have perfectly good intentions but still “produce something evil by accident”—including, possibly, “a fleet of artificial intelligence-enhanced robots capable of destroying mankind.”

In other words, Hawking, Gates, and Musk aren’t just worried about machines taking over jobs, they’re worried about the end of the world — or at least the human race. This Washington Post op-ed piece thinks that might not be such a bad thing:

When a technology is so obviously dangerous — like nuclear energy or synthetic biology — humanity has an imperative to consider dystopian predictions of the future. But it also has an imperative to push on, to reach its full potential. While it’s scary, sure, that humans may no longer be the smartest life forms in the room a generation from now, should we really be that concerned? Seems like we’ve already done a pretty good job of finishing off the planet anyway. If anything, we should be welcoming our AI masters to arrive sooner rather than later.

Or consider this open letter written back to Hawking, Gates, and Musk, which basically says forget the fear mongering — it’s going to happen no matter what you think:

Progress is inevitable, even if it is reached by accident and happenstance. Even if we do not intend to, sentient AI is something that will inevitably be created, be it through the evolution of a learning AI, or as a byproduct of some research. No treaty or coalition can stop it, no matter what you think. I just pray you do not go from educated men to fear mongers when it happens.

As usual, we’re at an ideological impasse, with both sides responding not so much according to the pros and cons but according to their predispositions. This article suggests a way through the impasse:

At the beginning of this article, we asked if the pessimists or optimists would be right.

There is a third option, though: one where we move from building jobs around processes and tasks, a solution that is optimal for neither human nor machine, to building jobs around problems.

The article is long, well-researched, and… well, very rational. Too bad — conventional thinking aside — other research shows we rarely act from a rational outlook when it comes to jobs and the economy… or anything else for that matter.

More on that next time.

 

 


Race Against the Machine

For the past several years, two MIT big thinkers[1] have been the go-to authorities in the scramble to explain how robotics, artificial intelligence, and big data are revolutionizing the economy and the working world. Their two books were published four and six years ago — so yesterday in the world of technology — but they were remarkably prescient when written, and have not diminished in relevance. They are:

Race Against the Machine: How the Digital Revolution is Accelerating Innovation, Driving Productivity, and Irreversibly Transforming Employment and the Economy (2012)

The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies (2014)

Click here for a chapter-by-chapter digest of The Second Machine Age, written by an all-star cast of economic commentators. Among other things, they acknowledge the authors’ view that neoliberal capitalism has not fared well in its dealings with the technological juggernaut, but in the absence of a better alternative, we might as well continue to ride the horse in the direction it’s going.

While admitting that History (not human choice) is “littered with unintended . . . side effects of well-intentioned social and economic policies,” the authors cite Tim O’Reilly[2] in pushing forward with technology’s momentum rather than clinging to the past or present. They suggest that we should let the technologies do their work and just find ways to deal with it. They are “skeptical of efforts to come up with fundamental alternatives to capitalism.”

David Rotman, editor of the MIT Technology Review, cites The Second Machine Age extensively in an excellent, longer article, “How Technology is Destroying Jobs.” Although the article is packed with contrary analysis and opinion, the following excerpts emphasize what many might consider the shadowy side of the street (compared to the sunny side we looked at in the past couple of posts). I added the headings below to emphasize that many of the general economic themes we’ve been talking about also apply to the specific dynamics of the job market.

It used to be that economic growth — including wealth creation — also created more jobs. It doesn’t work that way any more. Perhaps the most damning piece of evidence, according to Brynjolfsson, is a chart that only an economist could love. In economics, productivity—the amount of economic value created for a given unit of input, such as an hour of labor—is a crucial indicator of growth and wealth creation. It is a measure of progress. On the chart Brynjolfsson likes to show, separate lines represent productivity and total employment in the United States.

For years after World War II, the two lines closely tracked each other, with increases in jobs corresponding to increases in productivity. The pattern is clear: as businesses generated more value from their workers, the country as a whole became richer, which fueled more economic activity and created even more jobs. Then, beginning in 2000, the lines diverge; productivity continues to rise robustly, but employment suddenly wilts. By 2011, a significant gap appears between the two lines, showing economic growth with no parallel increase in job creation. Brynjolfsson and McAfee call it the “great decoupling.” And Brynjolfsson says he is confident that technology is behind both the healthy growth in productivity and the weak growth in jobs.

A rising economic tide no longer floats all boats. The result is a skewed allocation of the rewards of growth away from jobs — i.e., economic inequality. The contention that automation and digital technologies are partly responsible for today’s lack of jobs has obviously touched a raw nerve for many worried about their own employment. But this is only one consequence of what ­Brynjolfsson and McAfee see as a broader trend. The rapid acceleration of technological progress, they say, has greatly widened the gap between economic winners and losers—the income inequalities that many economists have worried about for decades.

“[S]teadily rising productivity raised all boats for much of the 20th century,” [Brynjolfsson] says. “Many people, especially economists, jumped to the conclusion that was just the way the world worked. I used to say that if we took care of productivity, everything else would take care of itself; it was the single most important economic statistic. But that’s no longer true.” He adds, “It’s one of the dirty secrets of economics: technology progress does grow the economy and create wealth, but there is no economic law that says everyone will benefit.” In other words, in the race against the machine, some are likely to win while many others lose.

That robots, automation, and software can replace people might seem obvious to anyone who’s worked in automotive manufacturing or as a travel agent. But Brynjolfsson and McAfee’s claim is more troubling and controversial. They believe that rapid technological change has been destroying jobs faster than it is creating them, contributing to the stagnation of median income and the growth of inequality in the United States.

Meanwhile, technology is taking over the jobs that are left— blue collar, white collar, and even the professions. [I]mpressive advances in computer technology—from improved industrial robotics to automated translation services—are largely behind the sluggish employment growth of the last 10 to 15 years. Even more ominous for workers, the MIT academics foresee dismal prospects for many types of jobs as these powerful new technologies are increasingly adopted not only in manufacturing, clerical, and retail work but in professions such as law, financial services, education, and medicine.

Technologies like the Web, artificial intelligence, big data, and improved analytics—all made possible by the ever increasing availability of cheap computing power and storage capacity—are automating many routine tasks. Countless traditional white-collar jobs, such as many in the post office and in customer service, have disappeared.

New technologies are “encroaching into human skills in a way that is completely unprecedented,” McAfee says, and many middle-class jobs are right in the bull’s-eye; even relatively high-skill work in education, medicine, and law is affected.

We’ll spend more time on the shadowy side of the street next time.


[1] Erik Brynjolfsson is director of the MIT Center for Digital Business, and Andrew McAfee is a principal research scientist at MIT who studies how digital technologies are changing business, the economy, and society.

[2] According to his official bio on his website, Tim O’Reilly “is the founder and CEO of O’Reilly Media, Inc. His original business plan was simply ‘interesting work for interesting people,’ and that’s worked out pretty well. O’Reilly Media delivers online learning, publishes books, runs conferences, urges companies to create more value than they capture, and tries to change the world by spreading and amplifying the knowledge of innovators.”

 


Check out Kevin’s latest LinkedIn Pulse article: Meeting Goals the Olympic Way: Train + Transform.

Bright Sunshiny Day, Continued

Last time, we heard David Lee[1] express his conviction that, far from destroying human jobs, robotic technology will unleash human creativity on a wonderful new world of work. His perspective is so remarkably and refreshingly upbeat that I thought we’d let him continue where he left off last week:

I think it’s important to recognize that we brought this problem on ourselves. And it’s not just because, you know, we are the one building the robots. But even though most jobs left the factory decades ago, we still hold on to this factory mindset of standardization and de-skilling. We still define jobs around procedural tasks and then pay people for the number of hours that they perform these tasks. We’ve created narrow job definitions like cashier, loan processor or taxi driver and then asked people to form entire careers around these singular tasks.

These choices have left us with actually two dangerous side effects. The first is that these narrowly defined jobs will be the first to be displaced by robots, because single-task robots are just the easiest kinds to build. But second, we have accidentally made it so that millions of workers around the world have unbelievably boring working lives.

Let’s take the example of a call center agent. Over the last few decades, we brag about lower operating costs because we’ve taken most of the need for brainpower out of the person and put it into the system. For most of their day, they click on screens, they read scripts. They act more like machines than humans. And unfortunately, over the next few years, as our technology gets more advanced, they, along with people like clerks and bookkeepers, will see the vast majority of their work disappear.

To counteract this, we have to start creating new jobs that are less centered on the tasks that a person does and more focused on the skills that a person brings to work. For example, robots are great at repetitive and constrained work, but human beings have an amazing ability to bring together capability with creativity when faced with problems that we’ve never seen before.

We need to realistically think about the tasks that will be disappearing over the next few years and start planning for more meaningful, more valuable work that should replace it. We need to create environments where both human beings and robots thrive. I say, let’s give more work to the robots, and let’s start with the work that we absolutely hate doing. Here, robot, process this painfully idiotic report.

And for the human beings, we should follow the advice from Harry Davis at the University of Chicago. He says we have to make it so that people don’t leave too much of themselves in the trunk of their car. I mean, human beings are amazing on weekends. Think about the people that you know and what they do on Saturdays. They’re artists, carpenters, chefs and athletes. But on Monday, they’re back to being Junior HR Specialist and Systems Analyst 3.

You know, these narrow job titles not only sound boring, but they’re actually a subtle encouragement for people to make narrow and boring job contributions. But I’ve seen firsthand that when you invite people to be more, they can amaze us with how much more they can be.

[The key is] to turn dreams into a reality. And that dreaming is an important part of what separates us from machines. For now, our machines do not get frustrated, they do not get annoyed, and they certainly don’t imagine.

But we, as human beings — we feel pain, we get frustrated. And it’s when we’re most annoyed and most curious that we’re motivated to dig into a problem and create change. Our imaginations are the birthplace of new products, new services, and even new industries.

If we really want to robot-proof our jobs, we, as leaders, need to get out of the mindset of telling people what to do and instead start asking them what problems they’re inspired to solve and what talents they want to bring to work. Because when you can bring your Saturday self to work on Wednesdays, you’ll look forward to Mondays more, and those feelings that we have about Mondays are part of what makes us human.

We’ll give the other side equal time next week.


[1] David Lee is Vice President of Innovation and the Strategic Enterprise Fund for UPS.

 


Check out Kevin’s latest LinkedIn Pulse article: Leadership and Life Lessons From an Elite Athlete and a Dying Man.

Gonna Be a Bright, Bright, Sunshiny Day

We met Sebastian Thrun last time. He’s a bright guy with a sunshiny disposition: he’s not worried about robots and artificial intelligence taking over all the good jobs, even his own. Instead, he’s perfectly okay if technology eliminates most of what he does every day because he believes human ingenuity will fill the vacuum with something better. This is from his conversation with TED curator Chris Anderson:

If I look at my own job as a CEO, I would say 90 percent of my work is repetitive, I don’t enjoy it, I spend about four hours per day on stupid, repetitive email. And I’m burning to have something that helps me get rid of this. Why? Because I believe all of us are insanely creative . . . What this will empower is to turn this creativity into action.

We’ve unleashed this amazing creativity by de-slaving us from farming and later, of course, from factory work and have invented so many things. It’s going to be even better, in my opinion. And there’s going to be great side effects. One of the side effects will be that things like food and medical supply and education and shelter and transportation will all become much more affordable to all of us, not just the rich people.

Anderson sums it up this way:

So the jobs that are getting lost, in a way, even though it’s going to be painful, humans are capable of more than those jobs. This is the dream. The dream is that humans can rise to just a new level of empowerment and discovery. That’s the dream.

Another bright guy with a sunshiny disposition is David Lee, Vice President of Innovation and the Strategic Enterprise Fund for UPS. He, too, shares the dream that technology will turn human creativity loose on a whole new kind of working world — a case he makes in his TED talk.

Like Sebastian Thrun, he’s no Pollyanna: he understands that yes, technology threatens jobs:

There’s a lot of valid concern these days that our technology is getting so smart that we’ve put ourselves on the path to a jobless future. And I think the example of a self-driving car is actually the easiest one to see. So these are going to be fantastic for all kinds of different reasons. But did you know that “driver” is actually the most common job in 29 of the 50 US states? What’s going to happen to these jobs when we’re no longer driving our cars or cooking our food or even diagnosing our own diseases?

Well, a recent study from Forrester Research goes so far as to predict that 25 million jobs might disappear over the next 10 years. To put that in perspective, that’s three times as many jobs lost in the aftermath of the financial crisis. And it’s not just blue-collar jobs that are at risk. On Wall Street and across Silicon Valley, we are seeing tremendous gains in the quality of analysis and decision-making because of machine learning. So even the smartest, highest-paid people will be affected by this change.

What’s clear is that no matter what your job is, at least some, if not all of your work, is going to be done by a robot or software in the next few years.

But that’s not the end of the story. Like Thrun, he believes that the rise of the robots will clear the way for unprecedented levels of human creativity — provided we move fast:

The good news is that we have faced down and recovered from two mass extinctions of jobs before. From 1870 to 1970, the percent of American workers based on farms fell by 90 percent, and then again from 1950 to 2010, the percent of Americans working in factories fell by 75 percent. The challenge we face this time, however, is one of time. We had a hundred years to move from farms to factories, and then 60 years to fully build out a service economy.

The rate of change today suggests that we may only have 10 or 15 years to adjust, and if we don’t react fast enough, that means by the time today’s elementary-school students are college-aged, we could be living in a world that’s robotic, largely unemployed and stuck in kind of un-great depression.

But I don’t think it has to be this way. You see, I work in innovation, and part of my job is to shape how large companies apply new technologies. Certainly some of these technologies are even specifically designed to replace human workers. But I believe that if we start taking steps right now to change the nature of work, we can not only create environments where people love coming to work but also generate the innovation that we need to replace the millions of jobs that will be lost to technology.

I believe that the key to preventing our jobless future is to rediscover what makes us human, and to create a new generation of human-centered jobs that allow us to unlock the hidden talents and passions that we carry with us every day.

More from David Lee next time.

If all this bright sunshiny perspective made you think of that old tune, you might treat yourself to a listen. It’s short, you’ve got time.

And for a look at a current legal challenge to the “gig economy” across the pond, check out this Economist article from earlier this week.

 
