June 23, 2018

Reframing “The Economy”

We’ve seen that conventional thinking about “the economy” struggles to accommodate technologies such as machine learning, robotics, and artificial intelligence — which means it’s ripe for a big dose of reframing. Reframing is a problem-solving strategy that flips our usual ways of thinking so that blind spots are revealed, conundrums resolved, polarities synthesized, and barriers reduced to mere logistics.

The Santa Fe Institute is on the reframing case: Rolling Stone called it “a sort of Justice League of renegade geeks, where teams of scientists from disparate fields study the Big Questions.” W. Brian Arthur is one of those geeks. He’s also on board with PARC — a Xerox company in “the business of breakthroughs” — and has written two seminal books on complexity economics: Complexity and the Economy (2014) and The Nature of Technology: What It Is and How It Evolves (2009). Here’s his pitch for reframing “the economy”:

The standard way to define the economy — whether in dictionaries or economics textbooks — is as a “system of production and distribution and consumption” of goods and services. And we picture this system, “the economy,” as something that exists in itself, as a backdrop to the events and adjustments that occur within it. Seen this way, the economy becomes something like a gigantic container . . . , a huge machine with many modules or parts.

I want to look at the economy in a different way. The shift in thinking I am putting forward here is . . . like seeing the mind not as a container for its concepts and habitual thought processes but as something that emerges from these. Or seeing an ecology not as containing a collection of biological species, but as forming from its collection of species. So it is with the economy.

The economy is a set of activities and behaviors and flows of goods and services mediated by — draped over — its technologies: the set of arrangements and activities by which a society satisfies its needs. They include hospitals and surgical procedures. And markets and pricing systems. And trading arrangements, distribution systems, organizations, and businesses. And financial systems, banks, regulatory systems, and legal systems. All these are arrangements by which we fulfill our needs, all are means to fulfill human purposes.

George Zarkadakis is another Big Questions geek. He’s an artificial intelligence Ph.D. and engineer, and the author of In Our Own Image: Savior or Destroyer? The History and Future of Artificial Intelligence (2016). He describes his complexity economics reframe in a recent article “The Economy Is More A Messy, Fractal Living Thing Than A Machine”:

Mainstream economics is built on the premise that the economy is a machine-like system operating at equilibrium. According to this idea, individual actors – such as companies, government departments and consumers – behave in a rational way. The system might experience shocks, but the result of all these minute decisions is that the economy eventually works its way back to a stable state.

Unfortunately, this naive approach prevents us from coming to terms with the profound consequences of machine learning, robotics and artificial intelligence.

Both political camps accept a version of the elegant premise of economic equilibrium, which inclines them to a deterministic, linear way of thinking. But why not look at the economy in terms of the messy complexity of natural systems, such as the fractal growth of living organisms or the frantic jive of atoms?

These frameworks are bigger than the sum of their parts, in that you can’t predict the behaviour of the whole by studying the step-by-step movement of each individual bit. The underlying rules might be simple, but what emerges is inherently dynamic, chaotic and somehow self-organising.

Complexity economics takes its cue from these systems, and creates computational models of artificial worlds in which the actors display a more symbiotic and changeable relationship to their environments. Seen in this light, the economy becomes a pattern of continuous motion, emerging from numerous interactions. The shape of the pattern influences the behaviour of the agents within it, which in turn influences the shape of the pattern, and so on.

There’s a stark contrast between the classical notion of equilibrium and the complex-systems perspective. The former assumes rational agents with near-perfect knowledge, while the latter recognises that agents are limited in various ways, and that their behaviour is contingent on the outcomes of their previous actions. Most significantly, complexity economics recognises that the system itself constantly changes and evolves – including when new technologies upend the rules of the game.
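To make the feedback loop Zarkadakis describes a little more concrete, here is a minimal sketch (in Python) of the kind of agent-based model complexity economists build. Everything in it is invented for illustration: the traders, their memory spans, and the price rule are my own assumptions, not anyone’s actual research model. Each agent acts on limited, backward-looking information; the price that emerges from all those actions becomes the pattern, and the pattern feeds back into what every agent does next.

```python
import random

# A toy agent-based market: each trader forms an expectation from recent
# prices (bounded, backward-looking knowledge), acts on it, and the price
# that emerges from everyone's actions feeds back into the next round.
# All parameters here are invented purely for illustration.

class Trader:
    def __init__(self):
        self.memory = random.randint(2, 10)      # how far back this agent looks
        self.bias = random.uniform(-0.02, 0.02)  # idiosyncratic optimism/pessimism

    def decide(self, prices):
        recent = prices[-self.memory:]
        trend = (recent[-1] - recent[0]) / recent[0]
        return 1 if trend + self.bias > 0 else -1  # 1 = buy, -1 = sell

def simulate(n_agents=200, steps=300):
    traders = [Trader() for _ in range(n_agents)]
    prices = [100.0] * 10                        # seed price history
    for _ in range(steps):
        net_demand = sum(t.decide(prices) for t in traders)
        # No equilibrium is imposed: the price simply moves with excess demand.
        prices.append(prices[-1] * (1 + 0.001 * net_demand / n_agents))
    return prices

if __name__ == "__main__":
    history = simulate()
    print(f"price after {len(history) - 10} rounds: {history[-1]:.2f}")
```

Run it a few times and the price path comes out different each time; small changes in the agents’ makeup produce different emergent patterns, which is roughly Zarkadakis’s point about dynamic, self-organising systems.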

That’s all pretty heady stuff, but what we’d really like to know is what complexity economics can tell us that conventional economics can’t.

We’ll look at that next time.

 

Kevin Rhodes writes about individual growth and cultural change, drawing on insights from science, technology, disruptive innovation, entrepreneurship, neuroscience, psychology, and personal experience, including his own unique journey to wellness — dealing with primary progressive MS through an aggressive regime of exercise, diet, and mental conditioning. Check out his latest LinkedIn Pulse article: “Rolling the Rock: Lessons From Sisyphus on Work, Working Out, and Life.”

What is “The Economy” Anyway?

Throughout this series, we’ve heard from numerous commentators who believe that conventional economic thinking isn’t keeping pace with the technological revolution, and that polarized ideological posturing is preventing the kind of open-minded discourse we need to reframe our thinking.

In this short TED talk, the author[1] of Americana: A Four Hundred Year History of American Capitalism suggests that we unplug the ideological debate and instead adopt a less combative and more digital-friendly metaphor for how we talk about the economy:

Capitalism . . . is this either celebrated term or condemned term. It’s either revered or it’s reviled. And I’m here to argue that this is because capitalism, in the modern iteration, is largely misunderstood.

In my view, capitalism should not be thought of as an ideology, but instead should be thought of as an operating system.

When you think about it as an operating system, it devolves the language of ideology away from what traditional defenders of capitalism think.

The operating system metaphor shifts policy agendas away from ideology and instead invites us to consider the economy as something that needs to be continually updated:

As you have advances in hardware, you have advances in software. And the operating system needs to keep up. It needs to be patched, it needs to be updated, new releases have to happen. And all of these things have to happen symbiotically. The operating system needs to keep getting more and more advanced to keep up with innovation.

But what if the operating system has gotten too complex for the human mind to comprehend? This recent article from the Silicon Flatirons Center at the University of Colorado[2] observes that “Human ingenuity has created a world that the mind cannot master,” then asks, “Have we finally reached our limits?” The question telegraphs its answer: In many respects, yes we have. Consider, for example, the Traffic Alert and Collision Avoidance System (TCAS) that’s responsible for keeping us safe when we fly:

TCAS alerts pilots to potential hazards, and tells them how to respond by using a series of complicated rules. In fact, this set of rules — developed over decades — is so complex, perhaps only a handful of individuals alive even understand it anymore.

While the problem of avoiding collisions is itself a complex question, the system we’ve built to handle this problem has essentially become too complicated for us to understand, and even experts sometimes react with surprise to its behaviour. This escalating complexity points to a larger phenomenon in modern life. When the systems designed to save our lives are hard to grasp, we have reached a technological threshold that bears examining.

It’s one thing to recognise that technology continues to grow more complex, making the task of the experts who build and maintain our systems more complicated still, but it’s quite another to recognise that many of these systems are actually no longer completely understandable.

The article cites numerous other impossibly complex systems, including the law:

Even our legal systems have grown irreconcilably messy. The US Code, itself a kind of technology, is more than 22 million words long and contains more than 80,000 links within it, between one section and another. This vast legal network is profoundly complicated, the functionality of which no person could understand in its entirety.

In an earlier book[3], Steven Pinker, author of the recent optimistic bestseller Enlightenment Now (discussed a couple of posts back in this series), suggests that the human brain just isn’t equipped for the complexity required of modern life:

Maybe philosophical problems are hard not because they are divine or irreducible or meaningless or workaday science, but because the mind of Homo sapiens lacks the cognitive equipment to solve them. We are organisms, not angels, and our minds are organs, not pipelines to the truth. Our minds evolved by natural selection to solve problems that were life-and-death matters to our ancestors, not to commune with correctness or to answer any question we are capable of asking.

In other words, we have our limits.

Imagine that.

So then… where do we turn for appropriately complex economic thinking? According to “complexity economics,” we turn to the source: the economy itself, understood not by reference to historical theory or newly updated metaphor, but on its own data-rich and machine-intelligent terms.

We’ll go there next time.


[1] According to his TED bio, Bhu Srinivasan “researches the intersection of capitalism and technological progress.”

[2] Samuel Arbesman is the author. The Center’s mission is to “propel the future of technology policy and innovation.”

[3] How the Mind Works, which Pinker wrote in 1997 when he was a professor of psychology and director of the Center for Cognitive Neuroscience at MIT.

 

Kevin Rhodes would create workplace utopia if he could. But since he doesn’t trust himself to do that, he writes this blog instead. Thanks for reading!

Economics + Math = Science?

The human brain is wired to recognize patterns, which it then organizes into higher level models and theories and beliefs, which in turn it uses to explain the past and present, and to predict the future. Models offer the consolation of rationality and understanding, which provide a sense of control. All of this is foundational to classical economic theory, which assumes we approach commerce equipped with an internal rational scale that weighs supply and demand, cost and benefit, and that we then act according to our assessment of what we give for what we get back. This assumption of an internal calculus has caused mathematical modeling to reign supreme in the practice of economics.

The trouble is, humans aren’t as innately calculating as classical economics would like to believe — so says David Graeber, professor of anthropology at the London School of Economics, in his new book Bullshit Jobs:

According to classical economic theory, homo oeconomicus, or “economic man” — that is, the model human being that lies behind every prediction made by the discipline — is assumed to be motivated by a calculus of costs and benefits.

All the mathematical equations by which economists bedazzle their clients, or the public, are founded on one simple assumption: that everyone, left to his own devices, will choose the course of action that provides the most of what he wants for the least expenditure of resources and effort.

It is the simplicity of the formula that makes the equations possible: if one were to admit that humans have complicated emotions, there would be too many factors to take into account, it would be impossible to weigh them, and predictions could not be made.

Therefore, an economist will say, while of course everyone is aware that human beings are not really selfish, calculating machines, assuming they are makes it possible to explain a very large proportion of what they do.

This is a reasonable statement as far as it goes. The problem is there are many dimensions of human life where the assumption clearly doesn’t hold — and some of them are precisely in the domain of what we like to call the economy.
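Just to spell out how simple that simplifying assumption really is, here is a toy sketch in Python — my own illustration, with invented options and numbers, not anything from Graeber’s book. Homo oeconomicus scores every available option by benefit minus cost and picks the winner. That is the whole calculus, which is exactly why the math stays tractable, and why complicated emotions have to be left out of it.

```python
# Toy homo oeconomicus: score each option by benefit minus cost and always
# pick the maximum. The options and numbers are invented for illustration.

options = {
    "take the overtime shift": {"benefit": 300, "cost": 120},
    "go home and rest":        {"benefit": 150, "cost": 0},
    "help a friend move":      {"benefit": 80,  "cost": 60},
}

def net_payoff(option):
    return options[option]["benefit"] - options[option]["cost"]

# The "rational" choice is simply the one with the highest net payoff;
# loyalty, fatigue, and guilt never enter the calculation.
choice = max(options, key=net_payoff)
print(choice, "->", net_payoff(choice))
```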

Economics’ reliance on mathematics has been a topic of lively debate for a long time:

The trouble . . . is that measurement and mathematics do not guarantee the status of science – they guarantee only the semblance of science. When the presumptions or conclusions of a scientific theory are absurd or simply false, the theory ought to be questioned and, eventually, rejected. The discipline of economics, however, is presently so blinkered by the talismanic authority of mathematics that theories go overvalued and unchecked.

In 1886, an article in Science accused economics of misusing the language of the physical sciences to conceal “emptiness behind a breastwork of mathematical formulas.” More recently, Deirdre N. McCloskey’s The Rhetoric of Economics (1998) and Robert H. Nelson’s Economics as Religion (2001) both argued that mathematics in economic theory serves, in McCloskey’s words, primarily to deliver the message “Look at how very scientific I am.”

After the Great Recession, the failure of economic science to protect our economy was once again impossible to ignore. In 2009, the Nobel Laureate Paul Krugman tried to explain it in The New York Times with a version of the mathiness diagnosis. “As I see it,” he wrote, “the economics profession went astray because economists, as a group, mistook beauty, clad in impressive-looking mathematics, for truth.” Krugman named economists’ “desire . . . to show off their mathematical prowess” as the “central cause of the profession’s failure.”

The result is people . . . who trust the mathematical exactitude of theories without considering their performance – that is, who confuse math with science, rationality with reality.

There is no longer any excuse for making the same mistake with economic theory. For more than a century, the public has been warned, and the way forward is clear. It’s time to stop wasting our money and recognise the high priests for what they really are: gifted social scientists who excel at producing mathematical explanations of economies, but who fail, like astrologers before them, at prophecy.

“The New Astrology: By fetishising mathematical models, economists turned economics into a highly paid pseudoscience,” Aeon Magazine.

Economists may bristle at being compared to astrologers, but as we have seen, their skill at prediction seems about comparable.

In the coming weeks we’ll look at other models emerging from the digital revolution, consider what they can tell us that classical economic theory can’t, and how they are affecting the world of work.

 

Kevin Rhodes would create workplace utopia if he could. But since he doesn’t trust himself to do that, he writes this blog instead. Thanks for reading!

Protopia

Last week we heard professional skeptic Michael Shermer weigh in as an optimistic believer in progress (albeit guardedly — I mean, he is a skeptic after all) in his review of the new book It’s Better Than It Looks. That doesn’t mean he’s ready to stake a homestead claim on the Utopian frontier: the title of a recent article tells you what you need to know about where he stands on that subject: “Utopia Is A Dangerous Ideal: We Should Aim For Protopia.”[1]

He begins with a now-familiar litany of utopias that soured into dystopias in the 19th and 20th Centuries. He then endorses the “protopian” alternative, quoting an oft-cited passage in which Kevin Kelly[2] coined the term.

Protopia is a state that is better today than yesterday, although it might be only a little better. Protopia is much much harder to visualize. Because a protopia contains as many new problems as new benefits, this complex interaction of working and broken is very hard to predict.

Doesn’t sound like much, but there’s more to it than first appears. Protopia is about incremental, sustainable progress — even amid the impatient onslaught of technology. Kelly’s optimism is ambitious — for a full dose of it, see his book The Inevitable: Understanding the 12 Technological Forces That Will Shape Our Future (2016). This is from the book blurb:

Much of what will happen in the next thirty years is inevitable, driven by technological trends that are already in motion. In this fascinating, provocative new book, Kevin Kelly provides an optimistic road map for the future, showing how the coming changes in our lives — from virtual reality in the home to an on-demand economy to artificial intelligence embedded in everything we manufacture — can be understood as the result of a few long-term, accelerating forces.

These larger forces will completely revolutionize the way we buy, work, learn, and communicate with each other. By understanding and embracing them, says Kelly, it will be easier for us to remain on top of the coming wave of changes and to arrange our day-to-day relationships with technology in ways that bring forth maximum benefits.

Kelly’s bright, hopeful book will be indispensable to anyone who seeks guidance on where their business, industry, or life is heading — what to invent, where to work, in what to invest, how to better reach customers, and what to begin to put into place — as this new world emerges.

Protopian thinking begins with Kelly’s “bright, hopeful” attitude of optimism about progress (again, remember the thinkers we heard from last week). To adopt both optimism and the protopian vision it produces, we’ll need to relinquish our willful cognitive blindness, our allegiance to inadequate old models and explanations, and our nostalgic urge to resist and retrench.

Either that, or we can just die off. Economist Paul Samuelson said this in a 1975 Newsweek column:

As the great Max Planck, himself the originator of the quantum theory in physics, has said, science makes progress funeral by funeral: the old are never converted by the new doctrines, they simply are replaced by a new generation.

Planck himself said it this way, in his Scientific Autobiography and Other Papers:

 A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.

Progress funeral by funeral[3]. . . . If that’s what it takes, that’s the way protopian progress will be made — in the smallest increments of “better today than yesterday” we will allow. But I somehow doubt progress will be that slow; I don’t think technology can wait.

Plus, if we insist on “not in my lifetime, you don’t,” we’ll miss out on a benefit we probably wouldn’t have seen coming: technology itself guiding us as we stumble our way forward through the benefits and problems of progress. There’s support for that idea in the emerging field of complexity economics — I’ve mentioned it before, and we’ll look more into it next time.


[1] The article is based on Shermer’s recent book Heavens on Earth: The Scientific Search for the Afterlife, Immortality, and Utopia.

[2] Kelly is a prolific TED talker whose talks reveal his optimistic protopian ideas. Here’s his bio.

[3] See the Quote Investigator’s history of these quotes.

 

Kevin Rhodes would create workplace utopia if he could. But since he doesn’t trust himself to do that, he writes this blog instead. Thanks for reading!

Utopia for Realists, Continued

Like humor and satire, utopias throw open the windows of the mind.

Rutger Bregman

Continuing with Rutger Bregman’s analysis of utopian thinking that we began last week:

Let’s first distinguish between two forms of utopian thought. The first is the most familiar, the utopia of the blueprint. Instead of abstract ideals, blueprints consist of immutable rules that tolerate no discussion.

There is, however, another avenue of utopian thought, one that is all but forgotten. If the blueprint is a high-resolution photo, then this utopia is just a vague outline. It offers not solutions but guideposts. Instead of forcing us into a straitjacket, it inspires us to change. And it understands that, as Voltaire put it, the perfect is the enemy of the good. As one American philosopher has remarked, ‘any serious utopian thinker will be made uncomfortable by the very idea of the blueprint.’

It was in this spirit that the British philosopher Thomas More literally wrote the book on utopia (and coined the term). More understood that utopia is dangerous when taken too seriously. ‘One needs to believe passionately and also be able to see the absurdity of one’s own beliefs and laugh at them,’ observes philosopher and leading utopia expert Lyman Tower Sargent. Like humor and satire, utopias throw open the windows of the mind. And that’s vital. As people and societies get progressively older they become accustomed to the status quo, in which liberty can become a prison, and the truth can become lies. The modern creed — or worse, the belief that there’s nothing left to believe in — makes us blind to the shortsightedness and injustice that still surround us every day.

Thus the lines are drawn between utopian blueprints grounded in dogma vs. utopian ideals arising from sympathy and compassion. Both begin with good intentions, but the pull of entropy is stronger with the former — at least, so says Rutger Bregman, and he’s got good company in Sir Thomas More and others. Blueprints require compliance, and their purveyors are zealously ready to enforce it. Ideals, on the other hand, inspire creativity, and creativity requires acting in the face of uncertainty, living with imperfection, responding with resourcefulness and resilience when best intentions don’t play out, and a lot of just plain showing up and grinding it out. I have a personal bias for coloring outside the lines, but I must confess that my own attempts to promote utopian workplace ideals have given me pause.

For years, I led interactive workshops designed to help people creatively engage with their big ideas about work and wellbeing — variously tailored for CLE ethics credits or for general audiences. I realized recently that, reduced to their essence, they employed the kinds of ideals advocated by beatnik-era philosopher and metaphysician Alan Watts. (We met him several months ago — he’s the “What would you do if money were no object?” guy.)

The workshops generated hundreds of heartwarming “this was life-changing” testimonies, but I could never quite get over this nagging feeling that the participants mostly hadn’t achieved escape velocity, and come next Monday they would be back to the despair of “But everybody knows you can’t earn any money that way.”

I especially wondered about the lawyers, for whom “I hate my job but love my paycheck” was a recurrent theme. The post-WWII neoliberal economic tide floated the legal profession’s boat, too, but prosperity has done little for lawyer happiness and well-being. True, we’re seeing substantial quality-of-life changes in the profession recently (which I’ve blogged about in the past), but most have been around the edges, while overall lawyers’ workplace reality remains a bulwark of what one writer calls the “over-culture” — the overweening force of culturally accepted norms about how things are and should be — and the legal over-culture has fallen in line with the worldwide workplace trend of favoring wealth over a sense of meaning and value.

Alan Watts’ ideals were widely adopted by the burgeoning self-help industry, which also rode the neoliberal tide to prosperous heights. Self-help tends to be long on inspiration and short on grinding, and sustainable creative change requires large doses of both. I served up both in the workshops, but still wonder if they were just too… well, um… beatnik… for the legal profession. I’ll never know — the guy who promoted the workshops retired, and I quit doing them. If nothing else, writing this series has opened my eyes to how closely law practice mirrors worldwide economic and workplace dynamics. We’ll look more at that in the coming weeks.

 

Kevin Rhodes would create workplace utopia if he could. But since he doesn’t trust himself to do that, he writes this blog instead. Thanks for reading!

Utopia for Realists

Dutchman Rutger Bregman is a member of the Forbes 30 Under 30 Europe Class of 2017. He’s written four books on history, philosophy, and economics. In his book Utopia for Realists (2016), he recognizes the dangers of utopian thinking:

True, history is full of horrifying forms of utopianism — fascism, communism, Nazism — just as every religion has also spawned fanatical sects.

According to the cliché, dreams have a way of turning into nightmares. Utopias are a breeding ground for discord, violence, even genocide. Utopias ultimately become dystopias.

Having faced up to the dangers, however, he presses on:

Let’s start with a little history lesson: In the past, everything was worse. For roughly 99% of the world’s history, 99% of humanity was poor, hungry, dirty, afraid, stupid, sick, and ugly. As recently as the seventeenth century, the French philosopher Blaise Pascal (1623-62) described life as one giant vale of tears. “Humanity is great,” he wrote, “because it knows itself to be wretched.” In Britain, fellow philosopher Thomas Hobbes (1588-1679) concurred that human life was basically, “solitary, poor, nasty, brutish, and short.”

But in the last 200 years, all that has changed. In just a fraction of the time that our species has clocked on this planet, billions of us are suddenly rich, well nourished, clean, safe, smart, healthy, and occasionally even beautiful.[1]

Welcome, in other words, to the Land of Plenty. To the good life, where almost everyone is rich, safe, and healthy. Where there’s only one thing we lack: a reason to get out of bed in the morning. Because, after all, you can’t really improve on paradise. Back in 1989, the American philosopher Francis Fukuyama already noted that we had arrived in an era where life has been reduced to “economic calculation, the endless solving of technical problems, environmental concerns, and the satisfaction of sophisticated consumer demands.”[2]

Notching up our purchasing power another percentage point, or shaving a couple off our carbon emissions; perhaps a new gadget — that’s about the extent of our vision. We live in an era of wealth and overabundance, but how bleak it is. There is “neither art nor philosophy,” Fukuyama says. All that’s left is the “perpetual caretaking of the museum of human history.”

According to Oscar Wilde, upon reaching the Land of Plenty, we should once more fix our gaze on the farthest horizon and rehoist the sails. “Progress is the realization of utopias,” he wrote. But the farthest horizon remains blank. The Land of Plenty is shrouded in fog. Precisely when we should be shouldering the historic task of investing this rich, safe, and healthy existence with meaning, we’ve buried utopia instead.

In fact, most people in wealthy countries believe children will actually be worse off than their parents. According to the World Health Organization, depression has even become the biggest health problem among teens and will be the number-one cause of illness worldwide by 2030.[3]

It’s a vicious cycle. Never before have so many young people been seeing a psychiatrist. Never before have there been so many early career burnouts. And we’re popping antidepressants like never before. Time and again, we blame collective problems like unemployment, dissatisfaction, and depression on the individual. If success is a choice, so is failure. Lost your job? You should have worked harder. Sick? You must not be leading a healthy lifestyle. Unhappy? Take a pill.

No, the real crisis is that we can’t come up with anything better. We can’t imagine a better world than the one we’ve got. The real crisis of our times, of my generation, is not that we don’t have it good, or even that we might be worse off later on. “The best minds of my generation are thinking about how to make people click ads,” a former math whiz at Facebook recently lamented.[4]

After this assessment, Bregman shifts gears. “The widespread nostalgia, the yearning for a past that really never was,” he says, “suggest that we still have ideals, even if we have buried them alive.” From there, he distinguishes the kind of utopian thinking we do well to avoid from the kind we might dare to embrace. We’ll follow him into that discussion next time.


[1] For a detailed (1,000 pages total) history of this economic growth from general nastiness to the standard of living we enjoy now, I’ll refer you again to two books I plugged a couple weeks ago: Americana: A 400 Year History Of American Capitalism and The Rise and Fall of American Growth.

[2] See here and here for a sampling of updates/opinions providing a current assessment of Fukuyama’s 1989 article.

[3] World Health Organization, Health for the World’s Adolescents, June 2014. See this executive summary.

[4] “This Tech Bubble is Different,” Bloomberg Businessweek, April 14, 2011.

 

Kevin Rhodes would create workplace utopia if he could. But since he doesn’t trust himself to do that, he writes this blog instead. Thanks for reading!

 

The Perils of Predicting

“We were promised flying cars, and instead what we got was 140 characters.”

Peter Thiel, PayPal co-founder[1]

Economic forecasts and policy solutions are based on predictions, and predicting is a perilous business.

I grew up in a small town in western Minnesota. Our family got the morning paper — the Minneapolis Tribune. Subscribers to the afternoon paper, the Minneapolis Star, got theirs around 4:00. A friend’s dad was a lawyer — his family got both. In a childhood display of cognitive bias, I never could understand why anyone would want an afternoon paper. News was made the day before, so you could read about it the next morning, and that was that.

I remember one Tribune headline to this day: it predicted nuclear war in 10 years. That was 1961, when I was eight. The Cuban missile crisis was the following year, and for a while it looked like it wouldn’t take all ten years for the headline’s prediction to come true.

The Tribune helpfully ran designs and instructions for building your own fallout shelter. Our house had the perfect place for one: a root cellar off one side of the basement — easily the creepiest place in the house. You descended a couple steps down from the basement floor, through a stubby cinderblock hallway, past a door hanging on one hinge. Ahead of you was a bare light bulb swinging from the ceiling — it flickered, revealing decades of cobwebs and homeowner flotsam worthy of Miss Havisham. It was definitely a bomb shelter fixer-upper, but it was the right size, and as an added bonus it had a concrete slab over it — if you banged the ground above with a pipe it made a hollow sound.

I scoured the fallout shelter plans, but my dad said no. Someone else in town built one — the ventilation pipes stuck out of a room-size mound next to their house. People used to go by it on their Sunday drives. Meanwhile I ran my own personal version of the Doomsday Clock for the next ten years until my 18th birthday came and went. So much for that headline.

I also remember a Sunday cartoon that predicted driverless cars. I found an article about it on Gizmodo.[2]

The article explains:

The period between 1958 and 1963 might be described as a Golden Age of American Futurism, if not the Golden Age of American Futurism. Bookended by the founding of NASA in 1958 and the end of The Jetsons in 1963, these few years were filled with some of the wildest techno-utopian dreams that American futurists had to offer. It also happens to be the exact timespan for the greatest futuristic comic strip to ever grace the Sunday funnies: Closer Than We Think.

Jetpacks, meal pills, flying cars — they were all there, beautifully illustrated by Arthur Radebaugh, a commercial artist based in Detroit best known for his work in the auto industry. Radebaugh would help influence countless Baby Boomers and shape their expectations for the future. The influence of Closer Than We Think can still be felt today.

Timing is Everything

Apparently timing is everything in the prediction business. The driverless car prediction was accurate, just way too early. The Tribune’s nuclear war prediction was inaccurate (and let’s hope not just because it was too early). Predictions from the hapless mythological prophetess Cassandra were never inaccurate or untimely: she was cursed by Apollo (who ran a highly successful prophecy business at Delphi) with the gift of always being right but never believed.

Now that would be frustrating.

As I said last week, predicting is as perilous as policy-making. An especially perilous version of both is utopian thinking. There’s been plenty of utopian economic thinking the past couple centuries, and today’s economists continue the grand tradition — to their peril, and potentially to ours. We’ll look at some economic utopian thinking (and the case for and against it) beginning next time.

 

Apparently timing is everything in country music, too. I’m not an aficionado, but I did come across this video while researching this post. The guy’s got a nice baritone.


[1] Peter Thiel needn’t despair about the lack of flying cars anymore: here’s a video re: a prototype from Sebastian Thrun and his company Kitty Hawk.

[2] The article is worth a look, if you like that sort of thing. So is this Smithsonian article on the Jetsons. And while we’re on the topic, check out this IEEE Spectrum article on a 1960 RCA initiative that had self-driving cars just around the corner, and this Atlantic article about an Electronic Age/Science Digest article that made the same prediction even earlier — in 1958.

 

Kevin Rhodes writes about individual growth and cultural change, drawing on insights from science, technology, disruptive innovation, entrepreneurship, neuroscience, psychology, and personal experience, including his own unique journey to wellness — dealing with primary progressive MS through an aggressive regime of exercise, diet, and mental conditioning.

Check out Kevin’s latest LinkedIn Pulse article: “When We Move, We Can Achieve the Impossible.”

The Perils of Policy

Economics articles, books, and speeches usually end with policy recommendations. You can predict them in advance if you know the ideological bias of the source. Let’s look at three, for comparison.

First, this Brookings Institution piece — What happens if robots take the jobs? The impact of emerging technologies on employment and public policy — written a couple years back by Darrell M. West, vice president and director of Governance Studies and founding director of the Center for Technology Innovation at the Institution.

Second, this piece — Inequality isn’t inevitable. Here’s what we can do differently — published by the World Economic Forum and written last month by a seriously over-achieving 23-year-old globe-trotting Italian named Andrea Zorzetto.

Third, this piece — Mark My Words: This Political Event Will be Unlike Anything We’ve Seen in 50 Years — by Porter Stansberry, which showed up in my Facebook feed last month. Stansberry offers this bio: “You may not know me, but nearly 20 years ago, I started a financial research and education business called Stansberry Research. Today we have offices in the U.S., Hong Kong, and Singapore. We serve more than half a million paid customers in virtually every country (172 at last count). We have nearly 500 employees, including dozens of financial analysts, corporate attorneys, accountants, technology experts, former hedge fund managers, and even a medical doctor.”

The Brookings article is what you would expect: long, careful, reasoned. Energetic Mr. Zorzetto’s article is bright, upbeat, and generally impressive. Porter Stansberry’s missive is … well, we’ll just let it speak for itself. I chose these three because they all cite the same economic data and developments, but reach for different policy ideals. There’s plenty more where these came from. Read enough of them, and they start to organize themselves into multiple opinion categories which after numerous iterations all mush together into vague uncertainty.

There’s got to be a better way. Turns out there is: how about if we ask the economy itself what it’s up to? That’s what the emerging field of study called “complexity economics” does. Here’s a short explanation of it, published online by Exploring Economics, an “open source learning platform.” The word “complexity” in this context doesn’t mean “hard to figure out.” It’s a technical term borrowed from a systems theory approach that originated in science, mathematics, and statistics.

Complexity economics bypasses ideological bias and lets the raw data speak for itself. It’s amazing what you hear when you give data a voice — for example, an answer to the question we heard the Queen of England ask a few posts back, which a group of Cambridge economists couldn’t answer (neither could anyone else, for that matter): Why didn’t we see the 2007-2008 Recession coming? The economy had an answer; you just need to know how to listen to it. (More on that coming up.)

What gives data its voice? Ironically, the very job-threatening technological trends we’ve been talking about in the past couple months:

Big Data + Artificial Intelligence + Brute Strength Computer Processing Power
= Complexity Economics

Which means — in a stroke of delicious irony — guess whose jobs are most threatened by this new approach to economics? You guessed it: the jobs currently held by ideologically-based economists making policy recommendations. For them, economics just became “the dismal science” in a whole new way.

Complex systems theory is as close to a Theory of Everything as I’ve seen. No kidding. We’ll be looking at it in more depth, but first… Explaining is one thing, but predicting is another. Policy-making invariably relies on the ability to predict outcomes, but predicting has its own perils. We’ll look at those next time. In the meantime, just for fun…


If you click on the first image, you’ll go to the original silent movie melodrama series. A click on the second image takes you to Wikipedia re: the 1947 Hollywood technicolor remake. The original is from a period of huge economic growth and quality of life advancements. The movie came out at the beginning of equally powerful post-WWII economic growth. Which leads to another economic history book I can’t recommend highly enough, shown in the image on the left below. Like Americana, which I recommended a couple weeks ago, it’s well researched and readable. They’re both big, thick books, but together they offer a fascinating course on all the American history we never knew. (Click the images for more.)


 

Kevin Rhodes writes about individual growth and cultural change, drawing on insights from science, technology, disruptive innovation, entrepreneurship, neuroscience, psychology, and personal experience, including his own unique journey to wellness — dealing with primary progressive MS through an aggressive regime of exercise, diet, and mental conditioning.

Check out Kevin’s latest LinkedIn Pulse article: “When We Move, We Can Achieve the Impossible.”

The Fatal Flaw

Several years ago I wrote a screenplay that did okay in a contest. I made a couple trips to Burbank to pitch it, got no sustained interest, and gave up on it. Recently, someone who actually knows what he’s doing encouraged me to revise and re-enter it. Among other things, he introduced me to Inside Story: The Power of the Transformational Arc, by Dara Marks (2007). The book describes what the author calls “the essential story element” — which, it turns out, is remarkably apt not just for film but for life in general, and particularly for talking about economics, technology, and the workplace.

No kidding.

What is it?

The Fatal Flaw.

This is from the book:

First, it’s important to recap or highlight the fundamental premise on which the fatal flaw is based:

  • Because change is essential for growth, it is a mandatory requirement for life.
  • If something isn’t growing and developing, it can only be headed toward decay and death.
  • There is no condition of stasis in nature. Nothing reaches a permanent position where neither growth nor diminishment is in play.

As essential as change is, most of us resist it, and cling rigidly to old survival systems because they are familiar and “seem” safer. In reality, if an old, obsolete survival system makes us feel alone, isolated, fearful, uninspired, unappreciated, and unloved, we will reason that it’s easier to cope with what we know than with what we haven’t yet experienced. As a result, most of us will fight to sustain destructive relationships, unchallenging jobs, unproductive work, harmful addictions, unhealthy environments, and immature behavior long after there is any sign of life or value to them.

This unyielding commitment to old, exhausted survival systems that have outlived their usefulness, and resistance to the rejuvenating energy of new, evolving levels of existence and consciousness is what I refer to as the fatal flaw of character:

The Fatal Flaw is a struggle within a character
to maintain a survival system
long after it has outlived its usefulness.

As it is with screenwriting, so it is with us as we’re reckoning with the wreckage of today’s collision among economics, technology, and the workplace. We’re like the character who must change or die to make the story work: our economic survival is at risk, and failure to adapt is fatal. Faced with that prospect, we can change our worldview, or we can wish we had. Trouble is, our struggle to embrace a new paradigm is as perilous as holding to an old one.

What’s more, we will also need to reckon with two peculiar dynamics of our time: “echo chambers” and “epistemic bubbles.” The following is from an Aeon Magazine article published earlier this week entitled “Escape The Echo Chamber”:

Something has gone wrong with the flow of information. It’s not just that different people are drawing subtly different conclusions from the same evidence. It seems like different intellectual communities no longer share basic foundational beliefs. Maybe nobody cares about the truth anymore, as some have started to worry. Maybe political allegiance has replaced basic reasoning skills. Maybe we’ve all become trapped in echo chambers of our own making – wrapping ourselves in an intellectually impenetrable layer of likeminded friends and web pages and social media feeds.

But there are two very different phenomena at play here, each of which subvert the flow of information in very distinct ways. Let’s call them echo chambers and epistemic bubbles. Both are social structures that systematically exclude sources of information. Both exaggerate their members’ confidence in their beliefs. But they work in entirely different ways, and they require very different modes of intervention. An epistemic bubble is when you don’t hear people from the other side. An echo chamber is what happens when you don’t trust people from the other side.

An echo chamber doesn’t destroy their members’ interest in the truth; it merely manipulates whom they trust and changes whom they accept as trustworthy sources and institutions.

Here’s a basic check: does a community’s belief system actively undermine the trustworthiness of any outsiders who don’t subscribe to its central dogmas? Then it’s probably an echo chamber.

That’s what we’re up against. We’ll plow fearlessly ahead next time.

 

Kevin Rhodes writes about individual growth and cultural change, drawing on insights from science, technology, disruptive innovation, entrepreneurship, neuroscience, psychology, and personal experience, including his own unique journey to wellness — dealing with primary progressive MS through an aggressive regime of exercise, diet, and mental conditioning.

Check out Kevin’s latest LinkedIn Pulse article: “When We Move, We Can Achieve the Impossible.”

On the Third Hand…

Will the machines take over the jobs? Ask a bunch of economists, and you’ll get opinions organized around competing ideologies, reflecting individual cognitive, emotional, and political biases. That’s been the experience of Martin Ford, entrepreneur, TED talker, and New York Times bestselling author of Rise of the Robots: Technology and the Threat of a Jobless Future:

In the field of economics the opinions all too often break cleanly along predefined political lines. Knowing the ideological predisposition of a particular economist is often a better predictor of what that individual is likely to say than anything contained in the data under examination. In other words, if you’re waiting for the economists to deliver some sort of definitive verdict on the impact that advancing technology is having on the economy, you may have a very long wait.[1]

In this Psychology Today article, Dr. Karl Albrecht[2] offers a neurological explanation for polarized thinking:

Recent research suggests that our brains may be pre-wired for dichotomized thinking. That’s a fancy name for thinking and perceiving in terms of two — and only two — opposing possibilities.

These research findings might help explain how and why the public discourse of our culture has become so polarized and rancorous, and how we might be able to replace it with a more intelligent conversation.

[O]ur brains can keep tabs on two tasks at a time, by sending each one to a different side of the brain. Apparently, we toggle back and forth, with one task being primary and the other on standby.

Add a third task, however, and one of the others has to drop off the to-do list.

Scans of brain activity during this task switching have led to the hypothesis that the brain actually likes handling things in pairs. Indeed, the brain itself is subdivided into two distinct half-brains, or hemispheres.

Curiously, part of our cranial craving for two-ness might be related to our own physiology: the human body is bilaterally symmetrical. Draw an imaginary center line down through the front of a person and you see a lot of parts (not all, of course), that come in pairs: two eyes, two ears, two nostrils, matching teeth on left and right sides, two shoulders, two arms, two hands, two nipples, two legs, two knees, and two feet. Inside you’ll find two of some things and one of others.

Some researchers are now extending this reasoning to suggest that the brain has a built-in tendency, when confronted by complex propositions, to selfishly reduce the set of choices to just two. Apparently it doesn’t like to work hard.

Considering how quickly we make our choices and set our opinions, it’s unlikely that all of the options will even be identified, never mind carefully considered.

“On the one hand this, on the other hand that,” we like to say. Lawyers perfect the art. Politics and the press also thrive on dichotomy:

Again, our common language encodes the effect of this anatomical self reference. “On the one hand, there is X. But on the other hand, we have Y.” Many people describe political views as being either “left” or “right.”

The popular press routinely constructs “news” stories around conflicts and differences between pairs of opposing people, factions, and ideologies. Bipolar conflict is the very essence of most of the news.

So, are robots and artificial intelligence going to trash the working world, or not?

Hmmm, there might be another option — several, actually. Dr. Albrecht urges us to find them:

Seek the “third hand” — and any other “hands” you can discover. Ask yourself, and others, “Are there other options to be considered?”

We’ll consider some third hand perspectives about the rise of the robots in the coming weeks.


[1] Martin Ford is also the consulting expert for Societe Generale’s new “Rise of the Robots” investment index, which focuses on companies that are “significant participants in the artificial intelligence and robotics revolution.”

[2] According to his website, Karl Albrecht “is an executive management consultant, futurist, lecturer, and author of more than 20 books on professional achievement, organizational performance, and business strategy. He is also a leading authority on cognitive styles and the development of advanced thinking skills. The Mensa Society honored him with its lifetime achievement award, for significant contributions by a member to the understanding of intelligence. Originally a physicist, and having served as a military intelligence officer and business executive, he now consults, lectures, and writes about whatever he thinks would be fun.”

 

Kevin Rhodes writes about individual growth and cultural change, drawing on insights from science, technology, disruptive innovation, entrepreneurship, neuroscience, psychology, and personal experience, including his own unique journey to wellness — dealing with primary progressive MS through an aggressive regime of exercise, diet, and mental conditioning.

Check out Kevin’s latest LinkedIn Pulse article: “When We Move, We Can Achieve the Impossible.”

Race Against the Machine, Continued

Rational choice theory is a cornerstone of conventional economic thinking. It states that:

Individuals always make prudent and logical decisions. These decisions provide people with the greatest benefit or satisfaction — given the choices available — and are also in their highest self-interest.

Presumably Stephen Hawking, Elon Musk, and Bill Gates had something like this in mind when they published an open letter in January 2015 urging that artificial intelligence R&D should focus “not only on making AI more capable, but also on maximizing the societal benefit.” To execute on this imperative, they urged an interdisciplinary collaboration among “economics, law and philosophy; computer security, formal methods and, of course, various branches of AI itself.” (Since its release, the letter has garnered another 8,000 signatures — you can sign it, too, if you like.)

The letter’s steady, rational four paragraphs praise how technology has benefited the human race, and anticipate more of the same in the future, but its reception and the authors’ comments in other contexts are not so measured. As a result, the letter has become a rallying point for those who think humanity is losing its race against the robots.

Consider, for example, the following from an Observer article:

“Success in creating AI would be the biggest event in human history,” wrote Stephen Hawking in an op-ed, which appeared in The Independent in 2014. “Unfortunately, it might also be the last, unless we learn how to avoid the risks.” Professor Hawking added in a 2014 interview with BBC, “humans, limited by slow biological evolution, couldn’t compete and would be superseded by A.I.”

Elon Musk called the prospect of artificial intelligence “our greatest existential threat” in a 2014 interview with MIT students at the AeroAstro Centennial Symposium. “I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish.” Mr. Musk cites his decision to invest in the Artificial Intelligence firm, DeepMind, as a means to “just keep an eye on what’s going on with artificial intelligence. I think there is potentially a dangerous outcome there.”

Microsoft co-founder Bill Gates has also expressed concerns about Artificial Intelligence. During a Q&A session on Reddit in January 2015, Mr. Gates said, “I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”

Or consider this Elon Musk comment in Vanity Fair:

In a startling public reproach to his friends and fellow techies, Musk warned that they could be creating the means of their own destruction. He told Bloomberg’s Ashlee Vance, the author of the biography Elon Musk, that he was afraid that his friend Larry Page, a co-founder of Google and now the C.E.O. of its parent company, Alphabet, could have perfectly good intentions but still “produce something evil by accident”—including, possibly, “a fleet of artificial intelligence-enhanced robots capable of destroying mankind.”

In other words, Hawking, Gates, and Musk aren’t just worried about machines taking over jobs, they’re worried about the end of the world — or at least the human race. This Washington Post op-ed piece thinks that might not be such a bad thing:

When a technology is so obviously dangerous — like nuclear energy or synthetic biology — humanity has an imperative to consider dystopian predictions of the future. But it also has an imperative to push on, to reach its full potential. While it’s scary, sure, that humans may no longer be the smartest life forms in the room a generation from now, should we really be that concerned? Seems like we’ve already done a pretty good job of finishing off the planet anyway. If anything, we should be welcoming our AI masters to arrive sooner rather than later.

Or consider this open letter written back to Hawking, Gates, and Musk, which basically says forget the fear mongering — it’s going to happen no matter what you think:

Progress is inevitable, even if it is reached by accident and happenstance. Even if we do not intend to, sentient AI is something that will inevitably be created, be it through the evolution of a learning AI, or as a byproduct of some research. No treaty or coalition can stop it, no matter what you think. I just pray you do not go from educated men to fear mongers when it happens.

As usual, we’re at an ideological impasse, with both sides responding not so much according to the pros and cons but according to their predispositions. This article suggests a way through the impasse:

At the beginning of this article, we asked if the pessimists or optimists would be right.

There is a third option, though: one where we move from building jobs around processes and tasks, a solution that is optimal for neither human nor machine, to building jobs around problems.

The article is long, well-researched, and… well, very rational. Too bad — conventional thinking aside — other research shows we rarely act from a rational outlook when it comes to jobs and the economy… or anything else for that matter.

More on that next time.

 

 

Kevin Rhodes writes about individual growth and cultural change, drawing on insights from science, technology, disruptive innovation, entrepreneurship, neuroscience, psychology, and personal experience, including his own unique journey to wellness — dealing with primary progressive MS through an aggressive regime of exercise, diet, and mental conditioning.

Check out Kevin’s latest LinkedIn Pulse article: “When We Move, We Can Achieve the Impossible.”

Race Against the Machine

For the past several years, two MIT big thinkers[1] have been the go-to authorities in the scramble to explain how robotics, artificial intelligence, and big data are revolutionizing the economy and the working world. Their two books were published four and six years ago — so yesterday in the world of technology — but they were remarkably prescient when written, and have not diminished in relevance. They are:

Race Against the Machine: How the Digital Revolution is Accelerating Innovation, Driving Productivity, and Irreversibly Transforming Employment and the Economy (2012)

The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies (2014)

Click here for a chapter-by-chapter digest of The Second Machine Age, written by an all-star cast of economic commentators. Among other things, they acknowledge the authors’ view that neoliberal capitalism has not fared well in its dealings with the technological juggernaut, but in the absence of a better alternative, we might as well continue to ride the horse in the direction it’s going.

While admitting that History (not human choice) is “littered with unintended . . . side effects of well-intentioned social and economic policies,” the authors cite Tim O’Reilly[2] in pushing forward with technology’s momentum rather than clinging to the past or present. They suggest that we should let the technologies do their work and just find ways to deal with it. They are “skeptical of efforts to come up with fundamental alternatives to capitalism.”

David Rotman, editor of the MIT Technology Review, cites The Second Machine Age extensively in an excellent, longer article, “How Technology is Destroying Jobs.” Although the article is packed with contrary analysis and opinion, the following excerpts emphasize what many might consider the shadowy side of the street (compared to the sunny side we looked at in the past couple posts). I added the headings below to emphasize that many of the general economic themes we’ve been talking about also apply to the specific dynamics of the job market.

It used to be that economic growth — including wealth creation — also created more jobs. It doesn’t work that way any more. Perhaps the most damning piece of evidence, according to Brynjolfsson, is a chart that only an economist could love. In economics, productivity—the amount of economic value created for a given unit of input, such as an hour of labor—is a crucial indicator of growth and wealth creation. It is a measure of progress. On the chart Brynjolfsson likes to show, separate lines represent productivity and total employment in the United States.

For years after World War II, the two lines closely tracked each other, with increases in jobs corresponding to increases in productivity. The pattern is clear: as businesses generated more value from their workers, the country as a whole became richer, which fueled more economic activity and created even more jobs. Then, beginning in 2000, the lines diverge; productivity continues to rise robustly, but employment suddenly wilts. By 2011, a significant gap appears between the two lines, showing economic growth with no parallel increase in job creation. Brynjolfsson and McAfee call it the “great decoupling.” And Brynjolfsson says he is confident that technology is behind both the healthy growth in productivity and the weak growth in jobs.

A rising economic tide no longer floats all boats. The result is a skewed allocation of the rewards of growth away from jobs — i.e., economic inequality. The contention that automation and digital technologies are partly responsible for today’s lack of jobs has obviously touched a raw nerve for many worried about their own employment. But this is only one consequence of what Brynjolfsson and McAfee see as a broader trend. The rapid acceleration of technological progress, they say, has greatly widened the gap between economic winners and losers—the income inequalities that many economists have worried about for decades.

“[S]teadily rising productivity raised all boats for much of the 20th century,” [Brynjolfsson] says. “Many people, especially economists, jumped to the conclusion that was just the way the world worked. I used to say that if we took care of productivity, everything else would take care of itself; it was the single most important economic statistic. But that’s no longer true.” He adds, “It’s one of the dirty secrets of economics: technology progress does grow the economy and create wealth, but there is no economic law that says everyone will benefit.” In other words, in the race against the machine, some are likely to win while many others lose.

That robots, automation, and software can replace people might seem obvious to anyone who’s worked in automotive manufacturing or as a travel agent. But Brynjolfsson and McAfee’s claim is more troubling and controversial. They believe that rapid technological change has been destroying jobs faster than it is creating them, contributing to the stagnation of median income and the growth of inequality in the United States.

Meanwhile, technology is taking over the jobs that are left — blue collar, white collar, and even the professions. [I]mpressive advances in computer technology—from improved industrial robotics to automated translation services—are largely behind the sluggish employment growth of the last 10 to 15 years. Even more ominous for workers, the MIT academics foresee dismal prospects for many types of jobs as these powerful new technologies are increasingly adopted not only in manufacturing, clerical, and retail work but in professions such as law, financial services, education, and medicine.

Technologies like the Web, artificial intelligence, big data, and improved analytics—all made possible by the ever increasing availability of cheap computing power and storage capacity—are automating many routine tasks. Countless traditional white-collar jobs, such as many in the post office and in customer service, have disappeared.

New technologies are “encroaching into human skills in a way that is completely unprecedented,” McAfee says, and many middle-class jobs are right in the bull’s-eye; even relatively high-skill work in education, medicine, and law is affected.

We’ll spend more time on the shadowy side of the street next time.


[1] Erik Brynjolfsson is director of the MIT Center for Digital Business, and Andrew McAfee is a principal research scientist at MIT who studies how digital technologies are changing business, the economy, and society.

[2] According to his official bio on his website, Tim O’Reilly “is the founder and CEO of O’Reilly Media, Inc. His original business plan was simply ‘interesting work for interesting people,’ and that’s worked out pretty well. O’Reilly Media delivers online learning, publishes books, runs conferences, urges companies to create more value than they capture, and tries to change the world by spreading and amplifying the knowledge of innovators.”

 

Kevin Rhodes writes about individual growth and cultural change, drawing on insights from science, technology, disruptive innovation, entrepreneurship, neuroscience, psychology, and personal experience, including his own unique journey to wellness — dealing with primary progressive MS through an aggressive regime of exercise, diet, and mental conditioning.

Check out Kevin’s latest LinkedIn Pulse article: “Meeting Goals the Olympic Way: Train + Transform.”