July 22, 2018

Who Controls the World?

One fine autumn afternoon in Cincinnati I watched transfixed as a gigantic flock of migratory birds swarmed over the woods across the street. I didn’t know it then, but I was watching a “complex, self-organizing system” in action. Schools of fish, ant colonies, human brains — and even the financial industry — all exhibit this behavior. And so does “the economy.”

James B. Glattfelder holds a Ph.D. in complex systems from the Swiss Federal Institute of Technology. He began as a physicist, became a researcher at a Swiss hedge fund, and now does quantitative research at Olsen Ltd. in Zurich, a foreign exchange investment manager. He begins his TED Talk with two quotes about the Great Recession of 2007-2008:

When the crisis came, the serious limitations of existing economic and financial models immediately became apparent.

There is also a strong belief, which I share, that bad or over-simplistic and overconfident economics helped create the crisis.

Then he tells us where they came from:

You’ve probably all heard of similar criticism coming from people who are skeptical of capitalism. But this is different. This is coming from the heart of finance. The first quote is from Jean-Claude Trichet when he was governor of the European Central Bank. The second quote is from the head of the UK Financial Services Authority. Are these people implying that we don’t understand the economic systems that drive our modern societies?

That’s a rhetorical question, of course: yes they are, and no we don’t. As a result, nobody saw the Great Recession coming, with its carnage of layoffs and near-collapse of the global economy, or its “too big to fail” bailouts and the generous bonuses paid to its key players.

Glattfelder tackles what that was about, from a complex systems perspective. First, he dismisses two approaches we’ve already seen discredited.

Ideologies: “I really hope that this complexity perspective allows for some common ground to be found. It would be really great if it has the power to help end the gridlock created by conflicting ideas, which appears to be paralyzing our globalized world.  Ideas relating to finance, economics, politics, society, are very often tainted by people’s personal ideologies.  Reality is so complex, we need to move away from dogma.”

Mathematics: “You can think of physics as follows. You take a chunk of reality you want to understand and you translate it into mathematics. You encode it into equations. Then, predictions can be made and tested. But despite the success, physics has its limits. Complex systems are very hard to map into mathematical equations, so the usual physics approach doesn’t really work here.”

Then he lays out a couple key features of complex, self-organizing systems:

It turns out that what looks like complex behavior from the outside is actually the result of a few simple rules of interaction. This means you can forget about the equations and just start to understand the system by looking at the interactions.

And it gets even better, because most complex systems have this amazing property called emergence. This means that the system as a whole suddenly starts to show a behavior which cannot be understood or predicted by looking at the components. The whole is literally more than the sum of its parts.

Applying this to the financial industry, he describes how his firm studied the Great Recession by analyzing a database of controlling shareholder interests in 43,000 transnational corporations (TNCs). That analysis netted over 600,000 “nodes” of ownership, and over a million connections among them. Then came the revelation:

It turns out that the 737 top shareholders have the potential to collectively control 80 percent of the TNCs’ value. Now remember, we started out with 600,000 nodes, so these 737 top players make up a bit more than 0.1 percent. They’re mostly financial institutions in the US and the UK. And it gets even more extreme. There are 146 top players in the core, and they together have the potential to collectively control 40 percent of the TNCs’ value.

737 or 146 shareholders — “mostly financial institutions in the U.S. and the U.K.” — had the power to control 80% or 40% of the value of 43,000 multinational corporations. And those few hundred — for their own accounts and through the entities they controlled — bought securitized sub-prime mortgages until the market imploded and nearly brought down a global economy valued in the tens of trillions of dollars — giving a whole new meaning to the concept of financial leverage. In what might be the economic understatement of the 21st Century, Glattfelder concludes:

This high level of concentrated ownership means these elite owners possess an enormous amount of leverage over financial risk worldwide. The high degree of control you saw is very extreme by any standard. The high degree of interconnectivity of the top players in the core could pose a significant systemic risk to the global economy.
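Glattfelder's paper used a far more sophisticated network-control methodology than I can reproduce here, but a toy sketch in Python (using the networkx library; every owner name, stake, and value below is invented for illustration) gives a feel for the basic idea: rank shareholders by how much corporate value they can reach through chains of ownership.

```python
# Toy illustration of "potential control" in an ownership network.
# The owners, companies, stakes, and values are invented -- this is
# NOT the actual model from Glattfelder et al.
import networkx as nx

G = nx.DiGraph()
# edge (owner -> company) weighted by the fraction of shares held
edges = [
    ("FundA", "Bank1", 0.6), ("FundA", "Corp1", 0.3),
    ("FundB", "Bank1", 0.4), ("Bank1", "Corp1", 0.5),
    ("Bank1", "Corp2", 0.8), ("FundB", "Corp2", 0.2),
]
G.add_weighted_edges_from(edges)
operating_value = {"Corp1": 100.0, "Corp2": 50.0, "Bank1": 20.0}

def reachable_value(node, g, values):
    """Value a shareholder can potentially influence via chains of ownership."""
    total = 0.0
    for _, company, w in g.out_edges(node, data="weight"):
        total += w * (values.get(company, 0.0) + reachable_value(company, g, values))
    return total

# rank the top-level shareholders (nodes nobody else owns)
control = {n: reachable_value(n, G, operating_value)
           for n in G if G.in_degree(n) == 0}
for owner, val in sorted(control.items(), key=lambda kv: -kv[1]):
    print(f"{owner}: potential control over {val:.1f} of corporate value")
```

Scale that idea up to 600,000 nodes and a million ownership links, and you need exactly the kind of computing muscle Glattfelder describes.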

It took a lot of brute number-crunching computer power and some slick machine intelligence to generate all of that, but in the end there’s an innate simplicity to it all. He concludes:

[The TNC network of ownership is] an emergent property which depends on the rules of interaction in the system. We could easily reproduce [it] with a few simple rules.

The same is true of the mesmerizing flock of birds I watched that day; here’s a YouTube explanation of the three simple rules behind it[1].
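For the curious, the three rules are usually given as separation, alignment, and cohesion (the classic “boids” model). Here is a minimal sketch in Python; the flock size, neighborhood radius, and steering weights are arbitrary numbers chosen only for illustration.

```python
# Minimal "boids"-style flocking sketch: each bird steers by three simple
# local rules -- separation, alignment, cohesion. Parameters are arbitrary.
import random

N, STEPS, RADIUS = 30, 100, 10.0
birds = [{"pos": [random.uniform(0, 50), random.uniform(0, 50)],
          "vel": [random.uniform(-1, 1), random.uniform(-1, 1)]} for _ in range(N)]

def neighbors(b, flock):
    return [o for o in flock if o is not b and
            sum((o["pos"][i] - b["pos"][i]) ** 2 for i in (0, 1)) < RADIUS ** 2]

for _ in range(STEPS):
    for b in birds:
        near = neighbors(b, birds)
        if not near:
            continue
        for i in (0, 1):
            center = sum(o["pos"][i] for o in near) / len(near)   # flock center nearby
            avg_vel = sum(o["vel"][i] for o in near) / len(near)  # neighbors' heading
            push = sum(b["pos"][i] - o["pos"][i] for o in near)   # away from crowding
            b["vel"][i] += 0.01 * (center - b["pos"][i])   # rule 1: cohesion
            b["vel"][i] += 0.05 * (avg_vel - b["vel"][i])  # rule 2: alignment
            b["vel"][i] += 0.02 * push                     # rule 3: separation
    for b in birds:
        b["pos"][0] += b["vel"][0]
        b["pos"][1] += b["vel"][1]

print("sample final positions:", [[round(c, 1) for c in b["pos"]] for b in birds[:5]])
```

No individual bird knows anything about the shape of the flock; the swooping, coherent cloud is purely emergent.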


[1] What I saw was a “murmuration” of birds — see this YouTube video for an example. It is explained by a form of complex systems analysis known as “swarm behavior.”

 

Kevin Rhodes writes about individual growth and cultural change, drawing on insights from science, technology, disruptive innovation, entrepreneurship, neuroscience, psychology, and personal experience, including his own unique journey to wellness — dealing with primary progressive MS through an aggressive regime of exercise, diet, and mental conditioning. Check out his latest LinkedIn Pulse article: “Rolling the Rock: Lessons From Sisyphus on Work, Working Out, and Life.”

Economics + Math = Science?

The human brain is wired to recognize patterns, which it then organizes into higher level models and theories and beliefs, which in turn it uses to explain the past and present, and to predict the future. Models offer the consolation of rationality and understanding, which provide a sense of control. All of this is foundational to classical economic theory, which assumes we approach commerce equipped with an internal rational scale that weighs supply and demand, cost and benefit, and that we then act according to our assessment of what we give for what we get back. This assumption of an internal calculus has caused mathematical modeling to reign supreme in the practice of economics.

The trouble is, humans aren’t as innately calculating as classical economics would like to believe — so says David Graeber, professor of anthropology at the London School of Economics, in his new book Bullshit Jobs:

According to classical economic theory, homo oeconomicus, or “economic man” — that is, the model human being that lies behind every prediction made by the discipline — is assumed to be motivated by a calculus of costs and benefits.

All the mathematical equations by which economists bedazzle their clients, or the public, are founded on one simple assumption: that everyone, left to his own devices, will choose the course of action that provides the most of what he wants for the least expenditure of resources and effort.

It is the simplicity of the formula that makes the equations possible: if one were to admit that humans have complicated emotions, there would be too many factors to take into account, it would be impossible to weigh them, and no predictions could be made.

Therefore, an economist will say that while of course everyone is aware that human beings are not really selfish, calculating machines, assuming they are makes it possible to explain a great deal of what people actually do.

This is a reasonable statement as far as it goes. The problem is there are many dimensions of human life where the assumption clearly doesn’t hold — and some of them are precisely in the domain of what we like to call the economy.

Economics’ reliance on mathematics has been a topic of lively debate for a long time:

The trouble . . . is that measurement and mathematics do not guarantee the status of science – they guarantee only the semblance of science. When the presumptions or conclusions of a scientific theory are absurd or simply false, the theory ought to be questioned and, eventually, rejected. The discipline of economics, however, is presently so blinkered by the talismanic authority of mathematics that theories go overvalued and unchecked.

In 1886, an article in Science accused economics of misusing the language of the physical sciences to conceal “emptiness behind a breastwork of mathematical formulas.” More recently, Deirdre N. McCloskey’s The Rhetoric of Economics (1998) and Robert H. Nelson’s Economics as Religion (2001) both argued that mathematics in economic theory serves, in McCloskey’s words, primarily to deliver the message “Look at how very scientific I am.”

After the Great Recession, the failure of economic science to protect our economy was once again impossible to ignore. In 2009, the Nobel Laureate Paul Krugman tried to explain it in The New York Times with a version of the mathiness diagnosis. “As I see it,” he wrote, “the economics profession went astray because economists, as a group, mistook beauty, clad in impressive-looking mathematics, for truth.” Krugman named economists’ “desire . . . to show off their mathematical prowess” as the “central cause of the profession’s failure.”

The result is people . . . who trust the mathematical exactitude of theories without considering their performance – that is, who confuse math with science, rationality with reality.

There is no longer any excuse for making the same mistake with economic theory. For more than a century, the public has been warned, and the way forward is clear. It’s time to stop wasting our money and recognise the high priests for what they really are: gifted social scientists who excel at producing mathematical explanations of economies, but who fail, like astrologers before them, at prophecy.

“The New Astrology: By fetishising mathematical models, economists turned economics into a highly paid pseudoscience,” Aeon Magazine.

Economists may bristle at being compared to astrologers, but as we have seen, their skill at prediction seems about comparable.

In the coming weeks we’ll look at other models emerging from the digital revolution, consider what they can tell us that classical economic theory can’t, and examine how they are affecting the world of work.

 

Kevin Rhodes would create workplace utopia if he could. But since he doesn’t trust himself to do that, he writes this blog instead. Thanks for reading!

Protopia

Last week we heard professional skeptic Michael Shermer weigh in as an optimistic believer in progress (albeit guardedly — I mean, he is a skeptic after all) in his review of the new book It’s Better Than It Looks. That doesn’t mean he’s ready to stake a homestead claim on the Utopian frontier: the title of a recent article tells you what you need to know about where he stands on that subject: “Utopia Is A Dangerous Ideal: We Should Aim For Protopia.”[1]

He begins with a now-familiar litany of utopias that soured into dystopias in the 19th and 20th Centuries. He then endorses the “protopian” alternative, quoting an oft-cited passage in which Kevin Kelly[2] coined the term.

Protopia is a state that is better today than yesterday, although it might be only a little better. Protopia is much much harder to visualize. Because a protopia contains as many new problems as new benefits, this complex interaction of working and broken is very hard to predict.

Doesn’t sound like much, but there’s more to it than it appears. Protopia is about incremental, sustainable progress — even amid the impatient onslaught of technology. Kelly’s optimism is ambitious — for a full dose of it, see his book The Inevitable: Understanding the 12 Technological Forces That Will Shape Our Future (2016). This is from the book blurb:

Much of what will happen in the next thirty years is inevitable, driven by technological trends that are already in motion. In this fascinating, provocative new book, Kevin Kelly provides an optimistic road map for the future, showing how the coming changes in our lives — from virtual reality in the home to an on-demand economy to artificial intelligence embedded in everything we manufacture — can be understood as the result of a few long-term, accelerating forces.

These larger forces will completely revolutionize the way we buy, work, learn, and communicate with each other. By understanding and embracing them, says Kelly, it will be easier for us to remain on top of the coming wave of changes and to arrange our day-to-day relationships with technology in ways that bring forth maximum benefits.

Kelly’s bright, hopeful book will be indispensable to anyone who seeks guidance on where their business, industry, or life is heading — what to invent, where to work, in what to invest, how to better reach customers, and what to begin to put into place — as this new world emerges.

Protopian thinking begins with Kelly’s “bright, hopeful” attitude of optimism about progress (again, remember the thinkers we heard from last week). To adopt both optimism and the protopian vision it produces, we’ll need to relinquish our willful cognitive blindness, our allegiance to inadequate old models and explanations, and our nostalgic urge to resist and retrench.

Either that, or we can just die off. Economist Paul Samuelson said this in a 1975 Newsweek column:

As the great Max Planck, himself the originator of the quantum theory in physics, has said, science makes progress funeral by funeral: the old are never converted by the new doctrines, they simply are replaced by a new generation.

Planck himself said it this way, in his Scientific Autobiography and Other Papers:

 A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.

Progress funeral by funeral[3]. . . . If that’s what it takes, that’s the way protopian progress will be made — in the smallest increments of “better today than yesterday” we will allow. But I somehow doubt progress will be that slow; I don’t think technology can wait.

Plus, if we insist on “not in my lifetime, you don’t,” we’ll miss out on a benefit we probably wouldn’t have seen coming: technology itself guiding us as we stumble our way forward through the benefits and problems of progress. There’s support for that idea in the emerging field of complexity economics — I’ve mentioned it before, and we’ll look more into it next time.


[1] The article is based on Shermer’s recent book Heavens on Earth: The Scientific Search for the Afterlife, Immortality, and Utopia.

[2] Kelly is a prolific TED talker — revealing his optimistic protopian ideas. Here’s his bio.

[3] See the Quote Investigator’s history of these quotes.

 

Kevin Rhodes would create workplace utopia if he could. But since he doesn’t trust himself to do that, he writes this blog instead. Thanks for reading!

Utopia Already

“If you had to choose a moment in history to be born, and you did not know ahead of time who you would be—you didn’t know whether you were going to be born into a wealthy family or a poor family, what country you’d be born in, whether you were going to be a man or a woman—if you had to choose blindly what moment you’d want to be born you’d choose now.”

Pres. Barack Obama, 2016

It’s been a good month for optimists in my reading pile. Utopia is already here, they say, and we’ve got the facts to prove it.

Harvard Professor Steven Pinker is his own weather system. Bill Gates called Pinker’s latest book Enlightenment Now “My new favorite book of all time.”

Pinker begins cautiously: “The second half of the second decade of the third millennium would not seem to be an auspicious time to publish a book on the historical sweep of progress and its causes,” he says, and follows with a recitation of the bad news sound bites and polarized blame-shifting we’ve (sadly) gotten used to. But then he throws down the optimist gauntlet: “In the pages that follow, I will show that this bleak assessment of the state of the world is wrong. And not just a little wrong — wrong, wrong, flat-earth wrong, couldn’t-be-more-wrong wrong.”

He makes his case in a string of data-laced chapters on progress, life expectancy, health, food and famine, wealth, inequality, the environment, war and peace, safety and security, terrorism, democracy, equal rights, knowledge and education, quality of life, happiness, and “existential” threats such as nuclear war. In each of them, he calls up the pessimistic party line and counters with his version of the rest of the story.

And then, just to make sure we’re getting the point, 322 pages of data and analysis into it, he plays a little mind game with us. First he offers an eight-paragraph summary of the prior chapters, then starts the next three paragraphs with the words “And yet,” followed by a catalogue of everything that’s still broken and in need of fixing. Despite 322 prior pages and optimism’s 8-3 winning margin, the negativity feels oddly welcome. I found myself thinking, “Well finally, you’re admitting there’s a lot of mess we need to clean up.” But then Prof. Pinker reveals what just happened:

The facts in the last three paragraphs, of course, are the same as the ones in the first eight. I’ve simply read the numbers from the bad rather than the good end of the scales or subtracted the hopeful percentages from 100. My point in presenting the state of the world in these two ways is not to show that I can focus on the space in the glass as well as on the beverage. It’s to reiterate that progress is not utopia, and that there is room — indeed, an imperative — for us to strive to continue that progress.

Pinker acknowledges his debt to the work of Swedish physician, professor of global health, and TED all-star Hans Rosling and his recent bestselling book Factfulness. Prof. Rosling died last year, and the book begins with a poignant declaration: “This book is my last battle in my lifelong mission to fight devastating ignorance.” His daughter and son-in-law co-wrote the book and are carrying on his work — how’s that for commitment, passion, and family legacy?

The book leads us through ten of the most common mind games we play in our attempts to remain ignorant. It couldn’t be more timely or relevant to our age of “willful blindness,” “cognitive bias,” “echo chambers” and “epistemic bubbles.”

Finally, this week professional skeptic Michael Shermer weighed in on the positive side of the scale with his review of a new book by journalist Gregg Easterbrook — It’s Better Than It Looks. Shermer blasts out of the gate with “Though declinists in both parties may bemoan our miserable lives, Americans are healthier, wealthier, safer and living longer than ever.” He also begins his case with the Obama quote above, and adds another one:

As Obama explained to a German audience earlier that year: “We’re fortunate to be living in the most peaceful, most prosperous, most progressive era in human history,” adding “that it’s been decades since the last war between major powers. More people live in democracies. We’re wealthier and healthier and better educated, with a global economy that has lifted up more than a billion people from extreme poverty.”

A similar paean to progress begins last year’s blockbuster Homo Deus (another of Bill Gates’ favorite books of all time). The optimist case has been showing up elsewhere in my research, too. Who knows, maybe utopia isn’t such a bad idea after all. In fact, maybe it’s already here.

Now there’s a thought.

All this ferocious optimism has been bracing, to say the least — it’s been the best challenge yet to what was becoming a comfortably dour outlook on economic reality.

And just as I was beginning to despair of anyone anywhere at any time ever using data to make sense of things, I also ran into an alternative to utopian thinking that both Pinker and Shermer acknowledge. It’s called “protopia,” and we’ll look at it next time.

 

Kevin Rhodes would create workplace utopia if he could. But since he doesn’t trust himself to do that, he writes this blog instead. Thanks for reading!

Utopia for Realists

Dutchman Rutger Bregman is a member of the Forbes 30 Under 30 Europe Class of 2017. He’s written four books on history, philosophy, and economics. In his book Utopia for Realists (2016), he recognizes the dangers of utopian thinking:

True, history is full of horrifying forms of utopianism — fascism, communism, Nazism — just as every religion has also spawned fanatical sects.

According to the cliché, dreams have a way of turning into nightmares. Utopias are a breeding ground for discord, violence, even genocide. Utopias ultimately become dystopias.

Having faced up to the dangers, however, he presses on:

Let’s start with a little history lesson: In the past, everything was worse. For roughly 99% of the world’s history, 99% of humanity was poor, hungry, dirty, afraid, stupid, sick, and ugly. As recently as the seventeenth century, the French philosopher Blaise Pascal (1623-62) described life as one giant vale of tears. “Humanity is great,” he wrote, “because it knows itself to be wretched.” In Britain, fellow philosopher Thomas Hobbes (1588-1679) concurred that human life was basically, “solitary, poor, nasty, brutish, and short.”

But in the last 200 years, all that has changed. In just a fraction of the time that our species has clocked on this planet, billions of us are suddenly rich, well nourished, clean, safe, smart, healthy, and occasionally even beautiful.[1]

Welcome, in other words, to the Land of Plenty. To the good life, where almost everyone is rich, safe, and healthy. Where there’s only one thing we lack: a reason to get out of bed in the morning. Because, after all, you can’t really improve on paradise. Back in 1989, the American philosopher Francis Fukuyama already noted that we had arrived in an era where life has been reduced to “economic calculation, the endless solving of technical problems, environmental concerns, and the satisfaction of sophisticated consumer demands.”[2]

Notching up our purchasing power another percentage point, or shaving a couple off our carbon emissions; perhaps a new gadget — that’s about the extent of our vision. We live in an era of wealth and overabundance, but how bleak it is. There is “neither art nor philosophy,” Fukuyama says. All that’s left is the “perpetual caretaking of the museum of human history.”

According to Oscar Wilde, upon reaching the Land of Plenty, we should once more fix our gaze on the farthest horizon and rehoist the sails. “Progress is the realization of utopias,” he wrote. But the farthest horizon remains blank. The Land of Plenty is shrouded in fog. Precisely when we should be shouldering the historic task of investing this rich, safe, and healthy existence with meaning, we’ve buried utopia instead.

In fact, most people in wealthy countries believe children will actually be worse off than their parents. According to the World Health Organization, depression has even become the biggest health problem among teens and will be the number-one cause of illness worldwide by 2030.[3]

It’s a vicious cycle. Never before have so many young people been seeing a psychiatrist. Never before have there been so many early career burnouts. And we’re popping antidepressants like never before. Time and again, we blame collective problems like unemployment, dissatisfaction, and depression on the individual. If success is a choice, so is failure. Lost your job? You should have worked harder. Sick? You must not be leading a healthy lifestyle. Unhappy? Take a pill.

No, the real crisis is that we can’t come up with anything better. We can’t imagine a better world than the one we’ve got. The real crisis of our times, of my generation, is not that we don’t have it good, or even that we might be worse off later on. “The best minds of my generation are thinking about how to make people click ads,” a former math whiz at Facebook recently lamented.[4]

After this assessment, Bregman shifts gears. “The widespread nostalgia, the yearning for a past that really never was,” he says, “suggest that we still have ideals, even if we have buried them alive.” From there, he distinguishes the kind of utopian thinking we do well to avoid from the kind we might dare to embrace. We’ll follow him into that discussion next time.


[1] For a detailed (1,000 pages total) history of this economic growth from general nastiness to the standard of living we enjoy now, I’ll refer you again to two books I plugged a couple weeks ago: Americana: A 400 Year History Of American Capitalism and The Rise and Fall of American Growth.

[2] See here and here for a sampling of updates/opinions providing a current assessment of Fukuyama’s 1989 article.

[3] World Health Organization, Health for the World’s Adolescents, June 2014. See this executive summary.

[4] “This Tech Bubble is Different,” Bloomberg Businessweek, April 14, 2011.

 

Kevin Rhodes would create workplace utopia if he could. But since he doesn’t trust himself to do that, he writes this blog instead. Thanks for reading!

 

Utopia

“Practical men, who believe themselves to be quite exempt from any intellectual influences, are usually the slaves of some defunct economist. Madmen in authority, who hear voices in the air, are distilling their frenzy from some academic scribbler of a few years back.”

John Maynard Keynes

We met law professor and economics visionary James Kwak a few months ago. In his book Economism: Bad Economics and the Rise of Inequality (2017), he tells this well-known story about John Maynard Keynes:

In 1930, John Maynard Keynes argued that, thanks to technological progress, the ‘economic problem’ would be solved in about a century and people would only work fifteen hours per week — primarily to keep themselves occupied. When freed from the need to accumulate wealth, the human life would change profoundly.

This passage is from Keynes’ 1930 essay:

I see us free, therefore, to return to some of the most sure and certain principles of religion and traditional virtue—that avarice is a vice, that the exaction of usury is a misdemeanor, and the love of money is detestable, that those walk most truly in the paths of virtue and sane wisdom who take least thought for the morrow. We shall once more value ends above means and prefer the good to the useful. We shall honour those who can teach us how to pluck the hour and the day virtuously and well, the delightful people who are capable of taking direct enjoyment in things, the lilies of the field who toil not, neither do they spin.

The timing of Keynes’ essay is fascinating: he wrote it just after the stock market crash of 1929, as the Great Depression was rolling out. Today, it seems as though his prediction was more than out of time, it was just plain wrong. Plus, it was undeniably utopian — which for most of us usually translates to something like, “Yeah, don’t I wish, but that’s never going to happen.” Someone says “utopia,” and we automatically hear “dystopia,” which is where utopias usually end up, “reproduc[ing] many of the same tyrannies that people were trying to escape: egoism, power struggles, envy, mistrust and fear.” “Utopia, Inc.,” Aeon Magazine.

It’s just another day in paradise 
As you stumble to your bed 
You’d give anything to silence 
Those voices ringing in your head 
You thought you could find happiness 
Just over that green hill 
You thought you would be satisfied 
But you never will —

The Eagles

To be fair, the post-WWII surge truly was a worldwide feast of economic utopia, served up mostly by the Mont Pelerin Society and other champions of neoliberal ideology. If they didn’t create the precise utopia Keynes envisioned, that’s because even the best ideas can grow out of time: a growing international body of data, analysis, and commentary indicates that continued unexamined allegiance to neoliberalism is rapidly turning postwar economic utopia into its opposite.

But what if we actually could, if not create utopia, then at least root out some persistent strains of dystopia — things like poverty, lack of access to meaningful work, and a grossly unequal income distribution? Kwak isn’t alone in thinking we could do just that, but to get there from here will require more than a new ideology to bump neoliberalism aside. Instead, we need an entirely new economic narrative, based on a new understanding of how the world works:

Almost a century [after Keynes made his prediction], we have the physical, financial, and human capital necessary for everyone in our country to enjoy a comfortable standard of living, and within a few generations the same should be true of the entire planet. And yet our social organization remains the same as it was in the Great Depression: some people work very hard and make more money than they will ever need, while many others are unable to find work and live in poverty.

Real change will not be achieved by mastering the details of marginal costs and marginal benefits, but by constructing a new, controlling narrative about how the world works.

Rooting out the persistent strains of economic dystopia in our midst will require a whole new way of thinking — maybe even some utopian thinking. If we’re going to go there, we’ll need to keep our wits about us. More on that next time.

Kevin Rhodes would create workplace utopia if he could. But since he doesn’t trust himself to do that, he writes this blog instead. Thanks for reading!

 

The Perils of Policy

Economics articles, books, and speeches usually end with policy recommendations. You can predict them in advance if you know the ideological bias of the source. Let’s look at three, for comparison.

First, this Brookings Institution piece — What happens if robots take the jobs? The impact of emerging technologies on employment and public policy — written a couple of years back by Darrell M. West, vice president and director of Governance Studies and founding director of the Center for Technology Innovation at Brookings.

Second, this piece — Inequality isn’t inevitable. Here’s what we can do differently — published by the World Economic Forum and written last month by a seriously over-achieving 23-year-old globe-trotting Italian named Andrea Zorzetto.

Third, this piece — Mark My Words: This Political Event Will be Unlike Anything We’ve Seen in 50 Years — by Porter Stansberry, which showed up in my Facebook feed last month. Stansberry offers this bio: “You may not know me, but nearly 20 years ago, I started a financial research and education business called Stansberry Research. Today we have offices in the U.S., Hong Kong, and Singapore. We serve more than half a million paid customers in virtually every country (172 at last count). We have nearly 500 employees, including dozens of financial analysts, corporate attorneys, accountants, technology experts, former hedge fund managers, and even a medical doctor.”

The Brookings article is what you would expect: long, careful, reasoned. Energetic Mr. Zorzetto’s article is bright, upbeat, and generally impressive. Porter Stansberry’s missive is … well, we’ll just let it speak for itself. I chose these three because they all cite the same economic data and developments, but reach very different policy conclusions. There’s plenty more where these came from. Read enough of them, and they start to organize themselves into multiple opinion categories, which after numerous iterations all mush together into vague uncertainty.

There’s got to be a better way. Turns out there is: how about if we ask the economy itself what it’s up to? That’s what the emerging field of study called “complexity economics” does. Here’s a short explanation of it, published online by Exploring Economics, an “open source learning platform.” The word “complexity” in this context doesn’t mean “hard to figure out.” It’s a technical term borrowed from a systems theory approach that originated in science, mathematics, and statistics.

Complexity economics bypasses ideological bias and lets the raw data speak for itself. It’s amazing what you hear when you give data a voice — for example, an answer to the question we heard the Queen of England ask a few posts back, which a group of Cambridge economists couldn’t answer (neither could anyone else, for that matter): Why didn’t we see the 2007-2008 Recession coming? The economy had an answer; you just need to know how to listen to it. (More on that coming up.)

What gives data its voice? Ironically, the very job-threatening technological trends we’ve been talking about in the past couple months:

Big Data + Artificial Intelligence + Brute Strength Computer Processing Power
= Complexity Economics
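To make that formula a little more concrete, here’s a bare-bones agent-based sketch in Python of the sort of bottom-up simulation complexity economists run (at vastly larger scale, and with real data). Every agent, rule, and number below is invented purely for illustration.

```python
# Bare-bones agent-based market sketch: heterogeneous traders follow simple
# rules, and the "market price" emerges from their interactions rather than
# from a closed-form equation. All numbers are invented for illustration.
import random

random.seed(1)
N_AGENTS, STEPS = 200, 50
price = 100.0
history = [price]

# Each agent has its own noisy estimate of "fair value" and a simple rule:
# buy if the market looks cheap to it, sell if it looks expensive.
agents = [{"fair_value": random.gauss(100, 10)} for _ in range(N_AGENTS)]

for _ in range(STEPS):
    buyers = sum(1 for a in agents if price < a["fair_value"])
    sellers = N_AGENTS - buyers
    # Excess demand nudges the price up; excess supply nudges it down.
    price *= 1 + 0.001 * (buyers - sellers)
    # Agents adapt: estimates drift toward the price they observe, plus noise.
    for a in agents:
        a["fair_value"] += 0.05 * (price - a["fair_value"]) + random.gauss(0, 0.5)
    history.append(price)

print("price path (sampled):", [round(p, 2) for p in history[::10]])
```

The point is that the price path isn’t written into any equation; it emerges from the interactions, which is exactly the shift in perspective complexity economics asks for.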

Which means — in a stroke of delicious irony — guess whose jobs are most threatened by this new approach to economics? You guessed it: the jobs currently held by economists making ideology-based policy recommendations. For them, economics just became “the dismal science” in a whole new way.

Complex systems theory is as close to a Theory of Everything as I’ve seen. No kidding. We’ll be looking at it in more depth, but first… Explaining is one thing, but predicting is another. Policy-making invariably relies on the ability to predict outcomes, but predicting has its own perils. We’ll look at those next time. In the meantime, just for fun…

[Images: links to the original silent-film melodrama serial and the 1947 remake]

If you click on the first image, you’ll go to the original silent movie melodrama series. A click on the second image takes you to Wikipedia re: the 1947 Hollywood technicolor remake. The original is from a period of huge economic growth and quality of life advancements. The movie came out at the beginning of equally powerful post-WWII economic growth. Which leads to another economic history book I can’t recommend highly enough, shown in the image on the left below. Like Americana, which I recommended a couple weeks ago, it’s well researched and readable. They’re both big, thick books, but together they offer a fascinating course on all the American history we never knew. (Click the images for more.)

[Images: the two recommended economic history books]

 

Kevin Rhodes writes about individual growth and cultural change, drawing on insights from science, technology, disruptive innovation, entrepreneurship, neuroscience, psychology, and personal experience, including his own unique journey to wellness — dealing with primary progressive MS through an aggressive regime of exercise, diet, and mental conditioning.

Check out Kevin’s latest LinkedIn Pulse article: “When We Move, We Can Achieve the Impossible.”

The Fatal Flaw

Several years ago I wrote a screenplay that did okay in a contest. I made a couple trips to Burbank to pitch it, got no sustained interest, and gave up on it. Recently, someone who actually knows what he’s doing encouraged me to revise and re-enter it. Among other things, he introduced me to Inside Story: The Power of the Transformational Arc, by Dara Marks (2007). The book describes what the author calls “the essential story element” — which, it turns out, is remarkably apt not just for film but for life in general, and particularly for talking about economics, technology, and the workplace.

No kidding.

What is it?

The Fatal Flaw.

This is from the book:

First, it’s important to recap or highlight the fundamental premise on which the fatal flaw is based:

  • Because change is essential for growth, it is a mandatory requirement for life.
  • If something isn’t growing and developing, it can only be headed toward decay and death.
  • There is no condition of stasis in nature. Nothing reaches a permanent position where neither growth nor diminishment is in play.

As essential as change is, most of us resist it, and cling rigidly to old survival systems because they are familiar and “seem” safer. In reality, if an old, obsolete survival system makes us feel alone, isolated, fearful, uninspired, unappreciated, and unloved, we will reason that it’s easier to cope with what we know than with what we haven’t yet experienced. As a result, most of us will fight to sustain destructive relationships, unchallenging jobs, unproductive work, harmful addictions, unhealthy environments, and immature behavior long after there is any sign of life or value to them.

This unyielding commitment to old, exhausted survival systems that have outlived their usefulness, and resistance to the rejuvenating energy of new, evolving levels of existence and consciousness is what I refer to as the fatal flaw of character:

The Fatal Flaw is a struggle within a character
to maintain a survival system
long after it has outlived its usefulness.

As it is with screenwriting, so it is with us as we’re reckoning with the wreckage of today’s collision among economics, technology, and the workplace. We’re like the character who must change or die to make the story work: our economic survival is at risk, and failure to adapt is fatal. Faced with that prospect, we can change our worldview, or we can wish we had. Trouble is, our struggle to embrace a new paradigm is as perilous as holding to an old one.

What’s more, we will also need to reckon with two peculiar dynamics of our time: “echo chambers” and “epistemic bubbles.” The following is from an Aeon Magazine article published earlier this week entitled “Escape The Echo Chamber”:

Something has gone wrong with the flow of information. It’s not just that different people are drawing subtly different conclusions from the same evidence. It seems like different intellectual communities no longer share basic foundational beliefs. Maybe nobody cares about the truth anymore, as some have started to worry. Maybe political allegiance has replaced basic reasoning skills. Maybe we’ve all become trapped in echo chambers of our own making – wrapping ourselves in an intellectually impenetrable layer of likeminded friends and web pages and social media feeds.

But there are two very different phenomena at play here, each of which subvert the flow of information in very distinct ways. Let’s call them echo chambers and epistemic bubbles. Both are social structures that systematically exclude sources of information. Both exaggerate their members’ confidence in their beliefs. But they work in entirely different ways, and they require very different modes of intervention. An epistemic bubble is when you don’t hear people from the other side. An echo chamber is what happens when you don’t trust people from the other side.

An echo chamber doesn’t destroy their members’ interest in the truth; it merely manipulates whom they trust and changes whom they accept as trustworthy sources and institutions.

Here’s a basic check: does a community’s belief system actively undermine the trustworthiness of any outsiders who don’t subscribe to its central dogmas? Then it’s probably an echo chamber.

That’s what we’re up against. We’ll plow fearlessly ahead next time.

 

Kevin Rhodes writes about individual growth and cultural change, drawing on insights from science, technology, disruptive innovation, entrepreneurship, neuroscience, psychology, and personal experience, including his own unique journey to wellness — dealing with primary progressive MS through an aggressive regime of exercise, diet, and mental conditioning.

Check out Kevin’s latest LinkedIn Pulse article: “When We Move, We Can Achieve the Impossible.”

Bus Riding Economists

Lord, I was born a ramblin’ man
Tryin’ to make a livin’ and doin’ the best I can[1]

A couple of economists took the same bus I did one day last week. We’ll call them “Home Boy” and “Ramblin’ Man.” They struck up an acquaintance when Ramblin’ Man put his money in the fare box and didn’t get a transfer coupon. He was from out of town, he said, and didn’t know how to work it. Home Boy explained that you need to wait until the driver gets back from her break. Ramblin’ Man said he guessed the money was just gone, but the driver showed up about then and checked the meter — it showed he’d put the money in, so he got his transfer. Technology’s great, ain’t it?

Ramblin’ Man took the seat in front of me. Home Boy sat across the aisle. When the conversation turned to economics, I eavesdropped[2] shamelessly. Well not exactly — they were talking pretty loud. Ramblin’ Man said he’d been riding the bus for two days to get to the VA. That gave them instant common ground:  they were both Vietnam vets, and agreed they were lucky to get out alive.

Ramblin’ Man said when he got out he went traveling — hitchhike, railroad, bus, you name it. That was back in the 70’s, when a guy could go anywhere and get a job. Not no more. Now he lives in a small town up in northeast Montana. He likes it, but it’s a long way to the VA; still, he knew if he could get here, there’d be a bus to take him right to it, and sure enough there was. That’s the trouble with those small towns, said Home Boy — nice and quiet, but not enough people to have any services. I’ll bet there’s no bus company up there, he chuckled. Not full of people like Minneapolis.

Minneapolis! Ramblin’ Man lit up at the mention of it. All them people, and no jobs. He was there in 2009, right after the bankers ruined the economy. Yeah, them and the politicians, Home Boy agreed. Shoulda put them all in jail. It’s those one-percenters. They got it fixed now so nobody makes any money but them. It’s like it was back when they were building the railroads and stuff. Now they’re doing it again. Nobody learns from history — they keep doing the same things over and over. They’re stuck in the past.

Except this time, it’s different, said Ramblin’ Man. It’s all that technology — takes away all the jobs. Back in 09, he’d been in Minneapolis for three months, and his phone never rang once for a job offer. Not once. Never used to happen in the 70’s.

And then my stop came up, and my economic history lesson was over. My two bus riding economists had covered the same developments I’ve been studying for the past 15 months. My key takeaway? That “The Economy” is a lazy fiction — none of us really lives there. Instead, we live in the daily challenges of figuring out how to get the goods and services we need — maybe to thrive (if you’re one of them “one-percenters”), or maybe just to get by. The Economy isn’t some transcendent structure, it’s created one human transaction at a time — like when a guy hits the road to make sense of life after a war, picking up odd jobs along the way until eventually he settles in a peaceful little town in the American Outback. When we look at The Economy that way, we get a whole new take on it. That’s precisely what a new breed of cross-disciplinary economists is doing, and we’ll examine their outlook in the coming weeks.

In the meantime, I suspect that one of the reasons we don’t learn from history is that we don’t know it. In that regard, I recently read a marvelous economic history book that taught me a whole lot I never knew:  Americana: A 400-Year History of American Capitalism (2017)  by tech entrepreneur Bhu Srinivasan. Here’s the promo blurb:

“From the days of the Mayflower and the Virginia Company, America has been a place for people to dream, invent, build, tinker, and bet the farm in pursuit of a better life. Americana takes us on a four-hundred-year journey of this spirit of innovation and ambition through a series of Next Big Things — the inventions, techniques, and industries that drove American history forward: from the telegraph, the railroad, guns, radio, and banking to flight, suburbia, and sneakers, culminating with the Internet and mobile technology at the turn of the twenty-first century. The result is a thrilling alternative history of modern America that reframes events, trends, and people we thought we knew through the prism of the value that, for better or for worse, this nation holds dearest: capitalism. In a winning, accessible style, Bhu Srinivasan boldly takes on four centuries of American enterprise, revealing the unexpected connections that link them.”

This is American history as we never learned it, and the book is well worth every surprising page.


[1] From “Ramblin’ Man,” by the Allman Brothers. Here’s a 1970 live version. And here’s the studio version.

[2] If you wonder, as I did, where “eavesdrop” came from, here’s the Word Detective’s explanation.

 

Kevin Rhodes writes about individual growth and cultural change, drawing on insights from science, technology, disruptive innovation, entrepreneurship, neuroscience, psychology, and personal experience, including his own unique journey to wellness — dealing with primary progressive MS through an aggressive regime of exercise, diet, and mental conditioning.

Check out Kevin’s latest LinkedIn Pulse article: “When We Move, We Can Achieve the Impossible.”

On the Third Hand, Continued

Will the machines take over the jobs?

In a recent TED talk, scholar, economist, author, and general wunderkind Daniel Susskind[1] says the question is distracting us from a much bigger and more important issue: how will we feed, clothe, and shelter ourselves if we no longer work for a living?

If we think of the economy as a pie, technological progress makes the pie bigger. Technological unemployment, if it does happen, in a strange way will be a symptom of that success — we will have solved one problem — how to make the pie bigger — but replaced it with another — how to make sure that everyone gets a slice. As other economists have noted, solving this problem won’t be easy.

Today, for most people, their job is their seat at the economic dinner table, and in a world with less work or even without work, it won’t be clear how they get their slice. This is the collective challenge that’s right in front of us — to figure out how this material prosperity generated by our economic system can be enjoyed by everyone in a world in which our traditional mechanism for slicing up the pie, the work that people do, withers away and perhaps disappears.

Guy Standing, another British economist, agrees with Susskind about this larger issue. The following excerpts are from his book The Corruption of Capitalism. He begins by quoting Nobel prizewinning economist Herbert Simon’s 1960 prediction:

Within the very near future — much less than twenty-five years — we shall have the technical capacity of substituting machines for any and all human functions in organisations.

And then he makes these comments:

You do not receive a Nobel Prize for Economics for being right all the time! Simon received his in 1978, when the number of people in jobs was at record levels. It is higher still today. Yet the internet-based technological revolution has reopened age-old visions of machine domination. Some are utopian, such as the post-capitalism of Paul Mason, imagining an era of free information and information sharing. Some are decidedly dystopian, where the robots — or rather their owners — are in control and mass joblessness is coupled with a “panopticon” state[2] subjecting the proles to intrusive surveillance, medicalized therapy and brain control. The pessimists paint a “world without work.” With every technological revolution there is a scare that machines will cause “technological unemployment”. This time the Jeremiahs seem a majority.

Whether or not they will do so in the future, the technologies have not yet produced mass unemployment . . . [but they] are contributing to inequality.

While technology is not necessarily destroying jobs, it is helping to destroy the old income distribution system.

The threat is technology-induced inequality, not technological unemployment.

Economic inequality and income distribution (sharing national wealth on a basis other than individual earned income) are two sides of the issue of economic fairness — always an inflammatory topic.

When I began my study of economics 15 months ago, I had never heard of economic inequality, and income distribution was something socialist countries did. Now I find both topics all over worldwide economic news and commentary, yet still mostly absent from U.S. public discourse (such as it is) outside of academic circles. On the whole, most policy-makers on both the left and the right maintain their allegiance to the post-WWII Mont Pelerin neoliberal economic model, supported by a cultural and moral bias in favor of working for a living; if the plutocrats take a bigger slice of the pie while the welfare rug gets pulled out from under the working poor, well then, so be it. If the new robotic and super-intelligent digital workers do in fact cause massive technological unemployment among humans, we’ll all be reexamining these beliefs, big time.

I began this series months ago by asking whether money can buy happiness, citing the U.N.’s World Happiness Report. The 2018 Report was issued this week, and who should be on top but… Finland! And guess what — among other things, the factors cited include low economic inequality and strong social support systems (i.e., a cultural value for non-job-based income distribution). National wealth was also a key factor, but it alone didn’t buy happiness: the USA, despite one of the highest per capita GDPs in the world, ranked 18th overall. For more, see this World Economic Forum article or this one from the South China Morning Post.

We’ll be looking further into all of this (and much more) in the weeks to come.


[1] If you’ve been following this column for a while and the name “Susskind” sounds familiar, that’s because a couple of years ago I blogged about the future and culture of the law, often citing the work of Richard Susskind, whose opus is pretty much the mother lode of crisp thinking about the law and technology. His equally brilliant son Daniel joined him in a book that also addressed other professions, which that series also considered. (Those blogs were collected in Cyborg Lawyers.) Daniel received a doctorate in economics from Oxford University, was a Kennedy Scholar at Harvard, and is now a Fellow in Economics at Balliol College, Oxford. Previously, he worked as a policy adviser in the Prime Minister’s Strategy Unit and as a senior policy adviser in the Cabinet Office.

[2] The panopticon architectural structure was the brainchild of legal philosopher Jeremy Bentham. For an introduction to the origins of his idea and its application to the digital age, see this article in The Guardian.

 

Kevin Rhodes writes about individual growth and cultural change, drawing on insights from science, technology, disruptive innovation, entrepreneurship, neuroscience, psychology, and personal experience, including his own unique journey to wellness — dealing with primary progressive MS through an aggressive regime of exercise, diet, and mental conditioning.

Check out Kevin’s latest LinkedIn Pulse article: “When We Move, We Can Achieve the Impossible.”

Race Against the Machine, Continued

Rational choice theory is a cornerstone of conventional economic thinking. It states that:

Individuals always make prudent and logical decisions. These decisions provide people with the greatest benefit or satisfaction — given the choices available — and are also in their highest self-interest.
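Reduced to code, the assumption is almost embarrassingly simple. Here is a minimal sketch in Python, with options and payoffs invented purely for illustration.

```python
# The homo oeconomicus assumption in one line: given a set of options,
# pick the one with the greatest benefit net of cost. The options and
# numbers below are invented purely for illustration.
options = {
    "work_overtime":   {"benefit": 300, "cost": 180},  # net 120
    "take_second_job": {"benefit": 450, "cost": 400},  # net 50
    "stay_home":       {"benefit": 100, "cost": 0},    # net 100
}

def rational_choice(opts):
    """Classical 'economic man': pick whatever maximizes benefit minus cost."""
    return max(opts, key=lambda k: opts[k]["benefit"] - opts[k]["cost"])

print(rational_choice(options))  # -> work_overtime
```

The critiques quoted earlier in this series boil down to the observation that real people rarely reduce to a one-line max() over costs and benefits.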

Presumably Stephen Hawking, Elon Musk, and Bill Gates had something like this in mind when they published an open letter in January 2015 urging that artificial intelligence R&D should focus “not only on making AI more capable, but also on maximizing the societal benefit.” To execute on this imperative, they urged an interdisciplinary collaboration among “economics, law and philosophy, computer security, formal methods and, of course, various branches of AI itself.” (Since its release, the letter has garnered another 8,000 signatures — you can sign it, too, if you like.)

The letter’s steady, rational four paragraphs praise how technology has benefited the human race, and anticipate more of the same in the future, but its reception and the authors’ comments in other contexts are not so measured. As a result, the letter has become a cheering section for those who think humanity is losing its race against the robots.

Consider, for example, the following from an Observer article:

“Success in creating AI would be the biggest event in human history,” wrote Stephen Hawking in an op-ed, which appeared in The Independent in 2014. “Unfortunately, it might also be the last, unless we learn how to avoid the risks.” Professor Hawking added in a 2014 interview with BBC, “humans, limited by slow biological evolution, couldn’t compete and would be superseded by A.I.”

Elon Musk called the prospect of artificial intelligence “our greatest existential threat” in a 2014 interview with MIT students at the AeroAstro Centennial Symposium. “I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish.” Mr. Musk cites his decision to invest in the Artificial Intelligence firm, DeepMind, as a means to “just keep an eye on what’s going on with artificial intelligence. I think there is potentially a dangerous outcome there.”

Microsoft co-founder Bill Gates has also expressed concerns about Artificial Intelligence. During a Q&A session on Reddit in January 2015, Mr. Gates said, “I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”

Or consider this Elon Musk comment in Vanity Fair:

In a startling public reproach to his friends and fellow techies, Musk warned that they could be creating the means of their own destruction. He told Bloomberg’s Ashlee Vance, the author of the biography Elon Musk, that he was afraid that his friend Larry Page, a co-founder of Google and now the C.E.O. of its parent company, Alphabet, could have perfectly good intentions but still “produce something evil by accident”—including, possibly, “a fleet of artificial intelligence-enhanced robots capable of destroying mankind.”

In other words, Hawking, Gates, and Musk aren’t just worried about machines taking over jobs, they’re worried about the end of the world — or at least the human race. This Washington Post op-ed piece thinks that might not be such a bad thing:

When a technology is so obviously dangerous — like nuclear energy or synthetic biology — humanity has an imperative to consider dystopian predictions of the future. But it also has an imperative to push on, to reach its full potential. While it’s scary, sure, that humans may no longer be the smartest life forms in the room a generation from now, should we really be that concerned? Seems like we’ve already done a pretty good job of finishing off the planet anyway. If anything, we should be welcoming our AI masters to arrive sooner rather than later.

Or consider this open letter written back to Hawking, Gates, and Musk, which basically says forget the fear mongering — it’s going to happen no matter what you think:

Progress is inevitable, even if it is reached by accident and happenstance. Even if we do not intend to, sentient AI is something that will inevitably be created, be it through the evolution of a learning AI, or as a byproduct of some research. No treaty or coalition can stop it, no matter what you think. I just pray you do not go from educated men to fear mongers when it happens.

As usual, we’re at an ideological impasse, with both sides responding not so much according to the pros and cons but according to their predispositions. This article suggests a way through the impasse:

At the beginning of this article, we asked if the pessimists or optimists would be right.

There is a third option, though: one where we move from building jobs around processes and tasks, a solution that is optimal for neither human nor machine, to building jobs around problems.

The article is long, well-researched, and… well, very rational. Too bad — conventional thinking aside — other research shows we rarely act from a rational outlook when it comes to jobs and the economy… or anything else for that matter.

More on that next time.

 

 

Kevin Rhodes writes about individual growth and cultural change, drawing on insights from science, technology, disruptive innovation, entrepreneurship, neuroscience, psychology, and personal experience, including his own unique journey to wellness — dealing with primary progressive MS through an aggressive regime of exercise, diet, and mental conditioning.

Check out Kevin’s latest LinkedIn Pulse article: “When We Move, We Can Achieve the Impossible.”

Gonna Be a Bright, Bright, Sunshiny Day

We met Sebastian Thrun last time. He’s a bright guy with a sunshiny disposition: he’s not worried about robots and artificial intelligence taking over all the good jobs, even his own. Instead, he’s perfectly okay if technology eliminates most of what he does every day because he believes human ingenuity will fill the vacuum with something better. This is from his conversation with TED curator Chris Anderson:

If I look at my own job as a CEO, I would say 90 percent of my work is repetitive, I don’t enjoy it, I spend about four hours per day on stupid, repetitive email. And I’m burning to have something that helps me get rid of this. Why? Because I believe all of us are insanely creative . . . What this will empower is to turn this creativity into action.

We’ve unleashed this amazing creativity by de-slaving us from farming and later, of course, from factory work and have invented so many things. It’s going to be even better, in my opinion. And there’s going to be great side effects. One of the side effects will be that things like food and medical supply and education and shelter and transportation will all become much more affordable to all of us, not just the rich people.

Anderson sums it up this way:

So the jobs that are getting lost, in a way, even though it’s going to be painful, humans are capable of more than those jobs. This is the dream. The dream is that humans can rise to just a new level of empowerment and discovery. That’s the dream.

Another bright guy with a sunshiny disposition is David Lee, Vice President of Innovation and the Strategic Enterprise Fund for UPS. He, too, shares the dream that technology will turn human creativity loose on a whole new kind of working world. Here’s his TED talk (click the image):

Like Sebastian Thrun, he’s no Pollyanna: he understands that yes, technology threatens jobs:

There’s a lot of valid concern these days that our technology is getting so smart that we’ve put ourselves on the path to a jobless future. And I think the example of a self-driving car is actually the easiest one to see. So these are going to be fantastic for all kinds of different reasons. But did you know that “driver” is actually the most common job in 29 of the 50 US states? What’s going to happen to these jobs when we’re no longer driving our cars or cooking our food or even diagnosing our own diseases?

Well, a recent study from Forrester Research goes so far as to predict that 25 million jobs might disappear over the next 10 years. To put that in perspective, that’s three times as many jobs lost in the aftermath of the financial crisis. And it’s not just blue-collar jobs that are at risk. On Wall Street and across Silicon Valley, we are seeing tremendous gains in the quality of analysis and decision-making because of machine learning. So even the smartest, highest-paid people will be affected by this change.

What’s clear is that no matter what your job is, at least some, if not all of your work, is going to be done by a robot or software in the next few years.

But that’s not the end of the story. Like Thrun, he believes that the rise of the robots will clear the way for unprecedented levels of human creativity — provided we move fast:

The good news is that we have faced down and recovered from two mass extinctions of jobs before. From 1870 to 1970, the percent of American workers on farms fell by 90 percent, and then again from 1950 to 2010, the percent of Americans working in factories fell by 75 percent. The challenge we face this time, however, is one of time. We had a hundred years to move from farms to factories, and then 60 years to fully build out a service economy.

The rate of change today suggests that we may only have 10 or 15 years to adjust, and if we don’t react fast enough, that means by the time today’s elementary-school students are college-aged, we could be living in a world that’s robotic, largely unemployed and stuck in kind of un-great depression.

But I don’t think it has to be this way. You see, I work in innovation, and part of my job is to shape how large companies apply new technologies. Certainly some of these technologies are even specifically designed to replace human workers. But I believe that if we start taking steps right now to change the nature of work, we can not only create environments where people love coming to work but also generate the innovation that we need to replace the millions of jobs that will be lost to technology.

I believe that the key to preventing our jobless future is to rediscover what makes us human, and to create a new generation of human-centered jobs that allow us to unlock the hidden talents and passions that we carry with us every day.

More from David Lee next time.

If all this bright sunshiny perspective made you think of that old tune, you might treat yourself to a listen. It’s short, you’ve got time.

And for a look at a current legal challenge to the “gig economy” across the pond, check out this Economist article from earlier this week.

 

Kevin Rhodes writes about individual growth and cultural change, drawing on insights from science, technology, disruptive innovation, entrepreneurship, neuroscience, psychology, and personal experience, including his own unique journey to wellness — dealing with primary progressive MS through an aggressive regime of exercise, diet, and mental conditioning.

Check out Kevin’s latest LinkedIn Pulse article: Leadership and Life Lessons From an Elite Athlete and a Dying Man.