May 27, 2018

Utopia Already

“If you had to choose a moment in history to be born, and you did not know ahead of time who you would be—you didn’t know whether you were going to be born into a wealthy family or a poor family, what country you’d be born in, whether you were going to be a man or a woman—if you had to choose blindly what moment you’d want to be born you’d choose now.”

Pres. Barack Obama, 2016

It’s been a good month for optimists in my reading pile. Utopia is already here, they say, and we’ve got the facts to prove it.

Harvard Professor Steven Pinker is his own weather system. Bill Gates called Pinker’s latest book Enlightenment Now “My new favorite book of all time.”

Pinker begins cautiously: “The second half of the second decade of the third millennium would not seem to be an auspicious time to publish a book on the historical sweep of progress and its causes,” he says, and follows with a recitation of the bad news sound bites and polarized blame-shifting we’ve (sadly) gotten used to. But then he throws down the optimist gauntlet: “In the pages that follow, I will show that this bleak assessment of the state of the world is wrong. And not just a little wrong — wrong, wrong, flat-earth wrong, couldn’t-be-more-wrong wrong.”

He makes his case in a string of data-laced chapters on progress, life expectancy, health, food and famine, wealth, inequality, the environment, war and peace, safety and security, terrorism, democracy, equal rights, knowledge and education, quality of life, happiness, and “existential” threats such as nuclear war. In each of them, he calls up the pessimistic party line and counters with his version of the rest of the story.

And then, just to make sure we’re getting the point, 322 pages of data and analysis into it, he plays a little mind game with us. First he offers an eight-paragraph summary of the prior chapters, then starts the next three paragraphs with the words “And yet,” followed by a catalogue of everything that’s still broken and in need of fixing. Despite those 322 prior pages and optimism’s 8-3 winning margin, the negativity feels oddly welcome. I found myself thinking, “Well, finally, you’re admitting there’s a lot of mess we need to clean up.” But then Prof. Pinker reveals what just happened:

The facts in the last three paragraphs, of course, are the same as the ones in the first eight. I’ve simply read the numbers from the bad rather than the good end of the scales or subtracted the hopeful percentages from 100. My point in presenting the state of the world in these two ways is not to show that I can focus on the space in the glass as well as on the beverage. It’s to reiterate that progress is not utopia, and that there is room — indeed, an imperative — for us to strive to continue that progress.

Pinker acknowledges his debt to the work of Swedish physician, professor of global health, and TED all-star Hans Rosling and his recent bestselling book Factfulness. Prof. Rosling died last year, and the book begins with a poignant declaration: “This book is my last battle in my lifelong mission to fight devastating ignorance.” His daughter and son-in-law co-wrote the book and are carrying on his work — how’s that for commitment, passion, and family legacy?

The book leads us through ten of the most common mind games we play in our attempts to remain ignorant. It couldn’t be more timely or relevant to our age of “willful blindness,” “cognitive bias,” “echo chambers” and “epistemic bubbles.”

Finally, this week professional skeptic Michael Shermer weighed in on the positive side of the scale with his review of a new book by journalist Gregg Easterbrook — It’s Better Than It Looks. Shermer blasts out of the gate with “Though declinists in both parties may bemoan our miserable lives, Americans are healthier, wealthier, safer and living longer than ever.” He also begins his case with the Obama quote above, and adds another one:

As Obama explained to a German audience earlier that year: “We’re fortunate to be living in the most peaceful, most prosperous, most progressive era in human history,” adding “that it’s been decades since the last war between major powers. More people live in democracies. We’re wealthier and healthier and better educated, with a global economy that has lifted up more than a billion people from extreme poverty.”

A similar paean to progress begins last year’s blockbuster Homo Deus (another of Bill Gates’ favorite books of all time). The optimist case has been showing up elsewhere in my research, too. Who knows, maybe utopia isn’t such a bad idea after all. In fact, maybe it’s already here.

Now there’s a thought.

All this ferocious optimism has been bracing, to say the least — it’s been the best challenge yet to what was becoming a comfortably dour outlook on economic reality.

And just as I was beginning to despair of anyone anywhere at any time ever using data to make sense of things, I also ran into an alternative to utopian thinking that both Pinker and Shermer acknowledge. It’s called “protopia,” and we’ll look at it next time.


Kevin Rhodes would create workplace utopia if he could. But since he doesn’t trust himself to do that, he writes this blog instead. Thanks for reading!

Utopia for Realists, Continued

Like humor and satire, utopias throw open the windows of the mind.

Rutger Bregman

Continuing with Rutger Bregman’s analysis of utopian thinking that we began last week:

Let’s first distinguish between two forms of utopian thought. The first is the most familiar, the utopia of the blueprint. Instead of abstract ideals, blueprints consist of immutable rules that tolerate no discussion.

There is, however, another avenue of utopian thought, one that is all but forgotten. If the blueprint is a high-resolution photo, then this utopia is just a vague outline. It offers not solutions but guideposts. Instead of forcing us into a straitjacket, it inspires us to change. And it understands that, as Voltaire put it, the perfect is the enemy of the good. As one American philosopher has remarked, ‘any serious utopian thinker will be made uncomfortable by the very idea of the blueprint.’

It was in this spirit that the British philosopher Thomas More literally wrote the book on utopia (and coined the term). More understood that utopia is dangerous when taken too seriously. ‘One needs to believe passionately and also be able to see the absurdity of one’s own beliefs and laugh at them,’ observes philosopher and leading utopia expert Lyman Tower Sargent. Like humor and satire, utopias throw open the windows of the mind. And that’s vital. As people and societies get progressively older they become accustomed to the status quo, in which liberty can become a prison, and the truth can become lies. The modern creed — or worse, the belief that there’s nothing left to believe in — makes us blind to the shortsightedness and injustice that still surround us every day.

Thus the lines are drawn between utopian blueprints grounded in dogma vs. utopian ideals arising from sympathy and compassion. Both begin with good intentions, but the pull of entropy is stronger with the former — at least, so says Rutger Bregman, and he’s got good company in Sir Thomas More and others. Blueprints require compliance, and their purveyors are zealously ready to enforce it. Ideals, on the other hand, inspire creativity, and creativity requires acting in the face of uncertainty, living with imperfection, responding with resourcefulness and resilience when best intentions don’t play out, and a lot of just plain showing up and grinding it out. I have a personal bias for coloring outside the lines, but I must confess that my own attempts to promote utopian workplace ideals have given me pause.

For years, I led interactive workshops designed to help people creatively engage with their big ideas about work and wellbeing — variously tailored for CLE ethics credits or for general audiences. I realized recently that, reduced to their essence, they employed the kinds of ideals advocated by beatnik-era philosopher and metaphysician Alan Watts. (We met him several months ago — he’s the “What would you do if money were no object?” guy.)

The workshops generated hundreds of heartwarming “this was life-changing” testimonies, but I could never quite get over this nagging feeling that the participants mostly hadn’t achieved escape velocity, and come next Monday they would be back to the despair of “But everybody knows you can’t earn any money that way.”

I especially wondered about the lawyers, for whom “I hate my job but love my paycheck” was a recurrent theme. The post-WWII neoliberal economic tide floated the legal profession’s boat, too, but prosperity has done little for lawyer happiness and well-being. True, we’re seeing substantial quality-of-life changes in the profession recently (which I’ve blogged about in the past), but most have been around the edges. Overall, lawyers’ workplace reality remains a bulwark of what one writer calls the “over-culture” — the overweening force of culturally accepted norms about how things are and should be — and the legal over-culture has fallen in line with the worldwide workplace trend of favoring wealth over a sense of meaning and value.

Alan Watts’ ideals were widely adopted by the burgeoning self-help industry, which also rode the neoliberal tide to prosperous heights. Self-help tends to be long on inspiration and short on grinding, and sustainable creative change requires large doses of both. I served up both in the workshops, but still wonder if they were just too… well, um… beatnik… for the law profession. I’ll never know — the guy who promoted the workshops retired, and I quit doing them. If nothing else, writing this series has opened my eyes to how closely law practice mirrors worldwide economic and workplace dynamics. We’ll look more at that in the coming weeks.




“Practical men who believe themselves to be quite exempt from any intellectual influence, are usually the slaves of some defunct economist. Madmen in authority, who hear voices in the air, are distilling their frenzy from some academic scribbler of a few years back.”

John Maynard Keynes

We met law professor and economics visionary James Kwak a few months ago. In his book Economism: Bad Economics and the Rise of Inequality (2017), he tells this well-known story about John Maynard Keynes:

In 1930, John Maynard Keynes argued that, thanks to technological progress, the ‘economic problem’ would be solved in about a century and people would only work fifteen hours per week — primarily to keep themselves occupied. When freed from the need to accumulate wealth, the human life would change profoundly.

This passage is from Keynes’ 1930 essay:

I see us free, therefore, to return to some of the most sure and certain principles of religion and traditional virtue—that avarice is a vice, that the exaction of usury is a misdemeanor, and the love of money is detestable, that those who walk most truly in the paths of virtue and sane wisdom are those who take least thought for the morrow. We shall once more value ends above means and prefer the good to the useful. We shall honour those who can teach us how to pluck the hour and the day virtuously and well, the delightful people who are capable of taking direct enjoyment in things, the lilies of the field who toil not, neither do they spin.

The timing of Keynes’ essay is fascinating: he wrote it just after the 1929 crash, as the Great Depression was rolling out. Today, it seems as though his prediction was more than out of time; it was just plain wrong. Plus, it was undeniably utopian — which for most of us usually translates to something like, “Yeah, don’t I wish, but that’s never going to happen.” Someone says “utopia,” and we automatically hear “dystopia,” which is where utopias usually end up, “reproduc[ing] many of the same tyrannies that people were trying to escape: egoism, power struggles, envy, mistrust and fear.” “Utopia, Inc.,” Aeon Magazine.

It’s just another day in paradise 
As you stumble to your bed 
You’d give anything to silence 
Those voices ringing in your head 
You thought you could find happiness 
Just over that green hill 
You thought you would be satisfied 
But you never will

The Eagles

To be fair, the post-WWII surge truly was a worldwide feast of economic utopia, served up mostly by the Mont Pelerin Society and other champions of neoliberal ideology. If they didn’t create the precise utopia Keynes envisioned, that’s because even the best ideas can grow out of time: a growing international body of data, analysis, and commentary indicates that continued unexamined allegiance to neoliberalism is rapidly turning postwar economic utopia into its opposite.

But what if we actually could, if not create utopia, then at least root out some persistent strains of dystopia — things like poverty, lack of access to meaningful work, and extreme income inequality? Kwak isn’t alone in thinking we could do just that, but getting there from here will require more than a new ideology to bump neoliberalism aside. Instead, we need an entirely new economic narrative, based on a new understanding of how the world works:

Almost a century [after Keynes made his prediction], we have the physical, financial, and human capital necessary for everyone in our country to enjoy a comfortable standard of living, and within a few generations the same should be true of the entire planet. And yet our social organization remains the same as it was in the Great Depression: some people work very hard and make more money than they will ever need, while many others are unable to find work and live in poverty.

Real change will not be achieved by mastering the details of marginal costs and marginal benefits, but by constructing a new, controlling narrative about how the world works.

Rooting out the persistent strains of economic dystopia in our midst will require a whole new way of thinking — maybe even some utopian thinking. If we’re going to go there, we’ll need to keep our wits about us. More on that next time.



The Perils of Policy

Economics articles, books, and speeches usually end with policy recommendations. You can predict them in advance if you know the ideological bias of the source. Let’s look at three, for comparison.

First, this Brookings Institution piece — What happens if robots take the jobs? The impact of emerging technologies on employment and public policy — written a couple of years back by Darrell M. West, vice president and director of Governance Studies and founding director of the Center for Technology Innovation at Brookings.

Second, this piece — Inequality isn’t inevitable. Here’s what we can do differently — published by the World Economic Forum and written last month by a seriously over-achieving 23-year-old globe-trotting Italian named Andrea Zorzetto.

Third, this piece — Mark My Words: This Political Event Will be Unlike Anything We’ve Seen in 50 Years — by Porter Stansberry, which showed up in my Facebook feed last month. Stansberry offers this bio: “You may not know me, but nearly 20 years ago, I started a financial research and education business called Stansberry Research. Today we have offices in the U.S., Hong Kong, and Singapore. We serve more than half a million paid customers in virtually every country (172 at last count). We have nearly 500 employees, including dozens of financial analysts, corporate attorneys, accountants, technology experts, former hedge fund managers, and even a medical doctor.”

The Brookings article is what you would expect: long, careful, reasoned. Energetic Mr. Zorzetto’s article is bright, upbeat, and generally impressive. Porter Stansberry’s missive is… well, we’ll just let it speak for itself. I chose these three because they all cite the same economic data and developments but arrive at different policy recommendations. There’s plenty more where these came from. Read enough of them, and they start to organize themselves into multiple opinion categories, which after numerous iterations all mush together into vague uncertainty.

There’s got to be a better way. Turns out there is: how about if we ask the economy itself what it’s up to? That’s what the emerging field of study called “complexity economics” does. Here’s a short explanation of it, published online by Exploring Economics, an “open source learning platform.” The word “complexity” in this context doesn’t mean “hard to figure out.” It’s a technical term borrowed from a systems theory approach that originated in science, mathematics, and statistics.

Complexity economics bypasses ideological bias and lets the raw data speak for itself. It’s amazing what you hear when you give data a voice — for example, an answer to the question we heard the Queen of England ask a few posts back, which a group of Cambridge economists couldn’t answer (neither could anyone else, for that matter): why didn’t we see the 2007-2008 recession coming? The economy had an answer; you just needed to know how to listen. (More on that coming up.)

What gives data its voice? Ironically, the very job-threatening technological trends we’ve been talking about in the past couple months:

Big Data + Artificial Intelligence + Brute Strength Computer Processing Power
= Complexity Economics

Which means — in a stroke of delicious irony — guess whose jobs are most threatened by this new approach to economics? You guessed it: the jobs currently held by ideologically-based economists making policy recommendations. For them, economics just became “the dismal science” in a whole new way.
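The flavor of this systems approach can be suggested with a toy agent-based simulation — my own illustrative sketch, not any particular economist’s model. A thousand agents start with identical wealth and trade by perfectly symmetric coin flips; stark inequality emerges anyway, from nothing but the system’s own dynamics:

```python
import random

def simulate(n_agents=1000, n_rounds=50000, seed=42):
    """Toy exchange economy (the 'yard-sale' model from econophysics).

    Each round, two random agents stake a random share of the poorer
    one's wealth on a fair coin flip. The micro-rules are perfectly
    symmetric, yet heavy wealth concentration emerges at the macro level.
    """
    random.seed(seed)
    wealth = [100.0] * n_agents              # everyone starts equal
    for _ in range(n_rounds):
        a, b = random.sample(range(n_agents), 2)
        stake = random.random() * min(wealth[a], wealth[b])
        if random.random() < 0.5:            # fair coin flip
            wealth[a] += stake
            wealth[b] -= stake
        else:
            wealth[a] -= stake
            wealth[b] += stake
    return wealth

wealth = simulate()
wealth.sort(reverse=True)
# Share of total wealth held by the richest 1% of agents
top_1pct_share = sum(wealth[:10]) / sum(wealth)
print(f"Top 1% of agents hold {top_1pct_share:.0%} of total wealth")
```

The point isn’t the particular numbers; it’s emergence — a macro pattern (inequality) that appears nowhere in the micro rules, which is exactly the kind of behavior complexity economists look for in real economic data.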

Complex systems theory is as close to a Theory of Everything as I’ve seen. No kidding. We’ll be looking at it in more depth, but first… Explaining is one thing, but predicting is another. Policy-making invariably relies on the ability to predict outcomes, but predicting has its own perils. We’ll look at those next time. In the meantime, just for fun…


If you click on the first image, you’ll go to the original silent movie melodrama series. A click on the second image takes you to Wikipedia re: the 1947 Hollywood technicolor remake. The original is from a period of huge economic growth and quality of life advancements. The movie came out at the beginning of equally powerful post-WWII economic growth. Which leads to another economic history book I can’t recommend highly enough, shown in the image on the left below. Like Americana, which I recommended a couple weeks ago, it’s well researched and readable. They’re both big, thick books, but together they offer a fascinating course on all the American history we never knew. (Click the images for more.)



Kevin Rhodes writes about individual growth and cultural change, drawing on insights from science, technology, disruptive innovation, entrepreneurship, neuroscience, psychology, and personal experience, including his own unique journey to wellness — dealing with primary progressive MS through an aggressive regime of exercise, diet, and mental conditioning.

Check out Kevin’s latest LinkedIn Pulse article: “When We Move, We Can Achieve the Impossible.”

Bus Riding Economists

Lord, I was born a ramblin’ man
Tryin’ to make a livin’ and doin’ the best I can[1]

A couple of economists took the same bus I did one day last week. We’ll call them “Home Boy” and “Ramblin’ Man.” They got acquainted when Ramblin’ Man put his money in the fare box and didn’t get a transfer coupon. He was from out of town, he said, and didn’t know how to work it. Home Boy explained that you need to wait until the driver gets back from her break. Ramblin’ Man said he guessed the money was just gone, but the driver showed up about then and checked the meter — it showed he’d put the money in, so he got his transfer. Technology’s great, ain’t it?

Ramblin’ Man took the seat in front of me. Home Boy sat across the aisle. When the conversation turned to economics, I eavesdropped[2] shamelessly. Well, not exactly — they were talking pretty loud. Ramblin’ Man said he’d been riding the bus for two days to get to the VA. That gave them instant common ground: they were both Vietnam vets, and agreed they were lucky to get out alive.

Ramblin’ Man said when he got out he went traveling — hitchhike, railroad, bus, you name it. That was back in the ’70s, when a guy could go anywhere and get a job. Not no more. Now he lives in a small town up in northeast Montana. He likes it, but it’s a long way from the VA. Still, he knew if he could get here, there’d be a bus to take him right to it, and sure enough there was. That’s the trouble with those small towns, said Home Boy — nice and quiet, but not enough people to have any services. I’ll bet there’s no bus company up there, he chuckled. Not full of people like Minneapolis.

Minneapolis! Ramblin’ Man lit up at the mention of it. All them people, and no jobs. He was there in 2009, right after the bankers ruined the economy. Yeah, them and the politicians, Home Boy agreed. Shoulda put them all in jail. It’s those one-percenters. They got it fixed now so nobody makes any money but them. It’s like it was back when they were building the railroads and stuff. Now they’re doing it again. Nobody learns from history — they keep doing the same things over and over. They’re stuck in the past.

Except this time, it’s different, said Ramblin’ Man. It’s all that technology — takes away all the jobs. Back in ’09, he’d been in Minneapolis for three months, and his phone never rang once with a job offer. Not once. Never used to happen in the ’70s.

And then my stop came up, and my economic history lesson was over. My two bus riding economists had covered the same developments I’ve been studying for the past 15 months. My key takeaway? That “The Economy” is a lazy fiction — none of us really lives there. Instead, we live in the daily challenges of figuring out how to get the goods and services we need — maybe to thrive (if you’re one of them “one-percenters”), or maybe just to get by. The Economy isn’t some transcendent structure; it’s created one human transaction at a time — like when a guy hits the road to make sense of life after a war, picking up odd jobs along the way until eventually he settles in a peaceful little town in the American Outback. When we look at The Economy that way, we get a whole new take on it. That’s precisely what a new breed of cross-disciplinary economists is doing, and we’ll examine their outlook in the coming weeks.

In the meantime, I suspect that one of the reasons we don’t learn from history is that we don’t know it. In that regard, I recently read a marvelous economic history book that taught me a whole lot I never knew: Americana: A 400-Year History of American Capitalism (2017) by tech entrepreneur Bhu Srinivasan. Here’s the promo blurb:

“From the days of the Mayflower and the Virginia Company, America has been a place for people to dream, invent, build, tinker, and bet the farm in pursuit of a better life. Americana takes us on a four-hundred-year journey of this spirit of innovation and ambition through a series of Next Big Things — the inventions, techniques, and industries that drove American history forward: from the telegraph, the railroad, guns, radio, and banking to flight, suburbia, and sneakers, culminating with the Internet and mobile technology at the turn of the twenty-first century. The result is a thrilling alternative history of modern America that reframes events, trends, and people we thought we knew through the prism of the value that, for better or for worse, this nation holds dearest: capitalism. In a winning, accessible style, Bhu Srinivasan boldly takes on four centuries of American enterprise, revealing the unexpected connections that link them.”

This is American history as we never learned it, and the book is well worth every surprising page.

[1] From “Ramblin’ Man,” by the Allman Brothers. Here’s a 1970 live version. And here’s the studio version.

[2] If you wonder, as I did, where “eavesdrop” came from, here’s the Word Detective’s explanation.



On the Third Hand, Continued

Will the machines take over the jobs?

In a recent TED talk, scholar, economist, author, and general wunderkind Daniel Susskind[1] says the question is distracting us from a much bigger and more important issue: how will we feed, clothe, and shelter ourselves if we no longer work for a living?

If we think of the economy as a pie, technological progress makes the pie bigger. Technological unemployment, if it does happen, in a strange way will be a symptom of that success — we will have solved one problem — how to make the pie bigger — but replaced it with another — how to make sure that everyone gets a slice. As other economists have noted, solving this problem won’t be easy.

Today, for most people, their job is their seat at the economic dinner table, and in a world with less work or even without work, it won’t be clear how they get their slice. This is the collective challenge that’s right in front of us — to figure out how this material prosperity generated by our economic system can be enjoyed by everyone in a world in which our traditional mechanism for slicing up the pie, the work that people do, withers away and perhaps disappears.

Guy Standing, another British economist, agrees with Susskind about this larger issue. The following excerpts are from his book The Corruption of Capitalism. He begins by quoting Nobel prizewinning economist Herbert Simon’s 1960 prediction:

Within the very near future — much less than twenty-five years — we shall have the technical capacity of substituting machines for any and all human functions in organisations.

And then he makes these comments:

You do not receive a Nobel Prize for Economics for being right all the time! Simon received his in 1978, when the number of people in jobs was at record levels. It is higher still today. Yet the internet-based technological revolution has reopened age-old visions of machine domination. Some are utopian, such as the post-capitalism of Paul Mason, imagining an era of free information and information sharing. Some are decidedly dystopian, where the robots — or rather their owners — are in control and mass joblessness is coupled with a “panopticon” state[2] subjecting the proles to intrusive surveillance, medicalized therapy and brain control. The pessimists paint a “world without work.” With every technological revolution there is a scare that machines will cause “technological unemployment”. This time the Jeremiahs seem a majority.

Whether or not they will do so in the future, the technologies have not yet produced mass unemployment . . . [but they] are contributing to inequality.

While technology is not necessarily destroying jobs, it is helping to destroy the old income distribution system.

The threat is technology-induced inequality, not technological unemployment.

Economic inequality and income distribution (sharing national wealth on a basis other than individual earned income) are two sides of the issue of economic fairness — always an inflammatory topic.

When I began my study of economics 15 months ago, I had never heard of economic inequality, and income distribution was something socialist countries did. Now I find both topics all over worldwide economic news and commentary, and still mostly absent from U.S. public discourse (such as it is) outside of academic circles. Policy-makers on both the left and the right mostly maintain their allegiance to the post-WWII Mont Pelerin neoliberal economic model, supported by a cultural and moral bias in favor of working for a living, and if the plutocrats take a bigger slice of pie while the welfare rug gets pulled on the working poor, well then so be it. If the new robotic and super-intelligent digital workers do in fact cause massive technological unemployment among the humans, we’ll all be reexamining these beliefs, big time.

I began this series months ago by asking whether money can buy happiness, citing the U.N.’s World Happiness Report. The 2018 Report was issued this week, and who should be on top but… Finland! And guess what — among other things, the factors cited include low economic inequality and strong social support systems (i.e., a cultural value for non-job-based income distribution). National wealth was also a key factor, but it alone didn’t buy happiness: the USA, with one of the world’s highest per capita GDPs, ranked 18th overall. For more, see this World Economic Forum article or this one from the South China Morning Post.

We’ll be looking further into all of this (and much more) in the weeks to come.

[1] If you’ve been following this column for a while and the name “Susskind” sounds familiar: a couple of years ago, I blogged about the future and culture of the law, often citing the work of Richard Susskind, whose opus is pretty much the mother lode of crisp thinking about law and technology. His equally brilliant son Daniel joined him to co-write a book extending that analysis to other professions, which that series also considered. (Those blogs were collected in Cyborg Lawyers.) Daniel received a doctorate in economics from Oxford University, was a Kennedy Scholar at Harvard, and is now a Fellow in Economics at Balliol College, Oxford. Previously, he worked as a policy adviser in the Prime Minister’s Strategy Unit and as a senior policy adviser in the Cabinet Office.

[2] The panopticon architectural structure was the brainchild of legal philosopher Jeremy Bentham. For an introduction to the origins of his idea and its application to the digital age, see this article in The Guardian.



On the Third Hand…

Will the machines take over the jobs? Ask a bunch of economists, and you’ll get opinions organized around competing ideologies, reflecting individual cognitive, emotional, and political biases. That’s been the experience of Martin Ford, entrepreneur, TED talker, and New York Times bestselling author of Rise of the Robots: Technology and the Threat of a Jobless Future:

In the field of economics the opinions all too often break cleanly along predefined political lines. Knowing the ideological predisposition of a particular economist is often a better predictor of what that individual is likely to say than anything contained in the data under examination. In other words, if you’re waiting for the economists to deliver some sort of definitive verdict on the impact that advancing technology is having on the economy, you may have a very long wait.[1]

In this Psychology Today article, Dr. Karl Albrecht[2] offers a neurological explanation for polarized thinking:

Recent research suggests that our brains may be pre-wired for dichotomized thinking. That’s a fancy name for thinking and perceiving in terms of two — and only two — opposing possibilities.

These research findings might help explain how and why the public discourse of our culture has become so polarized and rancorous, and how we might be able to replace it with a more intelligent conversation.

[O]ur brains can keep tabs on two tasks at a time, by sending each one to a different side of the brain. Apparently, we toggle back and forth, with one task being primary and the other on standby.

Add a third task, however, and one of the others has to drop off the to-do list.

Scans of brain activity during this task switching have led to the hypothesis that the brain actually likes handling things in pairs. Indeed, the brain itself is subdivided into two distinct half-brains, or hemispheres.

Curiously, part of our cranial craving for two-ness might be related to our own physiology: the human body is bilaterally symmetrical. Draw an imaginary center line down through the front of a person and you see a lot of parts (not all, of course) that come in pairs: two eyes, two ears, two nostrils, matching teeth on left and right sides, two shoulders, two arms, two hands, two nipples, two legs, two knees, and two feet. Inside you’ll find two of some things and one of others.

Some researchers are now extending this reasoning to suggest that the brain has a built-in tendency, when confronted by complex propositions, to selfishly reduce the set of choices to just two. Apparently it doesn’t like to work hard.

Considering how quickly we make our choices and set our opinions, it’s unlikely that all of the options will even be identified, never mind carefully considered.

“On the one hand this, on the other hand that,” we like to say. Lawyers perfect the art. Politics and the press also thrive on dichotomy:

Again, our common language encodes the effect of this anatomical self reference. “On the one hand, there is X. But on the other hand, we have Y.” Many people describe political views as being either “left” or “right.”

The popular press routinely constructs “news” stories around conflicts and differences between pairs of opposing people, factions, and ideologies. Bipolar conflict is the very essence of most of the news.

So, are robots and artificial intelligence going to trash the working world, or not?

Hmmm, there might be another option — several, actually. Dr. Albrecht urges us to find them:

Seek the “third hand” — and any other “hands” you can discover. Ask yourself, and others, “Are there other options to be considered?”

We’ll consider some third hand perspectives about the rise of the robots in the coming weeks.

[1] Martin Ford is also the consulting expert for Societe Generale’s new “Rise of the Robots” investment index, which focuses on companies that are “significant participants in the artificial intelligence and robotics revolution.”

[2] According to his website, Karl Albrecht is “an executive management consultant, futurist, lecturer, and author of more than 20 books on professional achievement, organizational performance, and business strategy. He is also a leading authority on cognitive styles and the development of advanced thinking skills. The Mensa Society honored him with its lifetime achievement award, for significant contributions by a member to the understanding of intelligence. Originally a physicist, and having served as a military intelligence officer and business executive, he now consults, lectures, and writes about whatever he thinks would be fun.”



Check out Kevin’s latest LinkedIn Pulse article: “When We Move, We Can Achieve the Impossible.”

Race Against the Machine, Continued

Rational choice theory is a cornerstone of conventional economic thinking. It states that:

Individuals always make prudent and logical decisions. These decisions provide people with the greatest benefit or satisfaction — given the choices available — and are also in their highest self-interest.

Presumably Stephen Hawking, Elon Musk, and Bill Gates had something like this in mind when they published an open letter in January 2015 urging that artificial intelligence R&D should focus “not only on making AI more capable, but also on maximizing the societal benefit.” To execute on this imperative, they urged an interdisciplinary collaboration among “economics, law and philosophy, computer security, formal methods and, of course, various branches of AI itself.” (Since its release, the letter has garnered another 8,000 signatures — you can sign it, too, if you like.)

The letter’s steady, rational four paragraphs praise how technology has benefited the human race, and anticipate more of the same in the future, but its reception and the authors’ comments in other contexts are not so measured. As a result, the letter has become a cheering section for those who think humanity is losing its race against the robots.

Consider, for example, the following from an Observer article:

“Success in creating AI would be the biggest event in human history,” wrote Stephen Hawking in an op-ed, which appeared in The Independent in 2014. “Unfortunately, it might also be the last, unless we learn how to avoid the risks.” Professor Hawking added in a 2014 interview with the BBC, “humans, limited by slow biological evolution, couldn’t compete and would be superseded by A.I.”

Elon Musk called the prospect of artificial intelligence “our greatest existential threat” in a 2014 interview with MIT students at the AeroAstro Centennial Symposium. “I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish.” Mr. Musk cites his decision to invest in the Artificial Intelligence firm, DeepMind, as a means to “just keep an eye on what’s going on with artificial intelligence. I think there is potentially a dangerous outcome there.”

Microsoft co-founder Bill Gates has also expressed concerns about Artificial Intelligence. During a Q&A session on Reddit in January 2015, Mr. Gates said, “I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”

Or consider this Elon Musk comment in Vanity Fair:

In a startling public reproach to his friends and fellow techies, Musk warned that they could be creating the means of their own destruction. He told Bloomberg’s Ashlee Vance, the author of the biography Elon Musk, that he was afraid that his friend Larry Page, a co-founder of Google and now the C.E.O. of its parent company, Alphabet, could have perfectly good intentions but still “produce something evil by accident”—including, possibly, “a fleet of artificial intelligence-enhanced robots capable of destroying mankind.”

In other words, Hawking, Gates, and Musk aren’t just worried about machines taking over jobs, they’re worried about the end of the world — or at least the human race. This Washington Post op-ed piece thinks that might not be such a bad thing:

When a technology is so obviously dangerous — like nuclear energy or synthetic biology — humanity has an imperative to consider dystopian predictions of the future. But it also has an imperative to push on, to reach its full potential. While it’s scary, sure, that humans may no longer be the smartest life forms in the room a generation from now, should we really be that concerned? Seems like we’ve already done a pretty good job of finishing off the planet anyway. If anything, we should be welcoming our AI masters to arrive sooner rather than later.

Or consider this open letter written back to Hawking, Gates, and Musk, which basically says forget the fear mongering — it’s going to happen no matter what you think:

Progress is inevitable, even if it is reached by accident and happenstance. Even if we do not intend to, sentient AI is something that will inevitably be created, be it through the evolution of a learning AI, or as a byproduct of some research. No treaty or coalition can stop it, no matter what you think. I just pray you do not go from educated men to fear mongers when it happens.

As usual, we’re at an ideological impasse, with both sides responding not so much according to the pros and cons but according to their predispositions. This article suggests a way through the impasse:

At the beginning of this article, we asked if the pessimists or optimists would be right.

There is a third option, though: one where we move from building jobs around processes and tasks, a solution that is optimal for neither human nor machine, to building jobs around problems.

The article is long, well-researched, and… well, very rational. Too bad that, conventional thinking aside, other research shows we rarely act from a rational outlook when it comes to jobs and the economy… or anything else, for that matter.

More on that next time.




Check out Kevin’s latest LinkedIn Pulse article: “When We Move, We Can Achieve the Impossible.”

Race Against the Machine

For the past several years, two MIT big thinkers[1] have been the go-to authorities in the scramble to explain how robotics, artificial intelligence, and big data are revolutionizing the economy and the working world. Their two books were published four and six years ago — so yesterday in the world of technology — but they were remarkably prescient when written, and have not diminished in relevance. They are:

Race Against the Machine: How the Digital Revolution is Accelerating Innovation, Driving Productivity, and Irreversibly Transforming Employment and the Economy (2012)

The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies (2014)

Click here for a chapter-by-chapter digest of The Second Machine Age, written by an all-star cast of economic commentators. Among other things, they acknowledge the authors’ view that neoliberal capitalism has not fared well in its dealings with the technological juggernaut, but in the absence of a better alternative, we might as well continue to ride the horse in the direction it’s going.

While admitting that History (not human choice) is “littered with unintended . . . side effects of well-intentioned social and economic policies,” the authors cite Tim O’Reilly[2] in pushing forward with technology’s momentum rather than clinging to the past or present. They suggest that we should let the technologies do their work and just find ways to deal with it. They are “skeptical of efforts to come up with fundamental alternatives to capitalism.”

David Rotman, editor of the MIT Technology Review, cites The Second Machine Age extensively in an excellent, longer article, “How Technology is Destroying Jobs.” Although the article is packed with contrary analysis and opinion, the following excerpts emphasize what many might consider the shadowy side of the street (compared to the sunny side we looked at in the past couple posts). I added the headings below to emphasize that many of the general economic themes we’ve been talking about also apply to the specific dynamics of the job market.

It used to be that economic growth — including wealth creation — also created more jobs. It doesn’t work that way any more. Perhaps the most damning piece of evidence, according to Brynjolfsson, is a chart that only an economist could love. In economics, productivity—the amount of economic value created for a given unit of input, such as an hour of labor—is a crucial indicator of growth and wealth creation. It is a measure of progress. On the chart Brynjolfsson likes to show, separate lines represent productivity and total employment in the United States.

For years after World War II, the two lines closely tracked each other, with increases in jobs corresponding to increases in productivity. The pattern is clear: as businesses generated more value from their workers, the country as a whole became richer, which fueled more economic activity and created even more jobs. Then, beginning in 2000, the lines diverge; productivity continues to rise robustly, but employment suddenly wilts. By 2011, a significant gap appears between the two lines, showing economic growth with no parallel increase in job creation. Brynjolfsson and McAfee call it the “great decoupling.” And Brynjolfsson says he is confident that technology is behind both the healthy growth in productivity and the weak growth in jobs.

A rising economic tide no longer floats all boats. The result is a skewed allocation of the rewards of growth away from jobs — i.e., economic inequality. The contention that automation and digital technologies are partly responsible for today’s lack of jobs has obviously touched a raw nerve for many worried about their own employment. But this is only one consequence of what ­Brynjolfsson and McAfee see as a broader trend. The rapid acceleration of technological progress, they say, has greatly widened the gap between economic winners and losers—the income inequalities that many economists have worried about for decades.

“[S]teadily rising productivity raised all boats for much of the 20th century,” [Brynjolfsson] says. “Many people, especially economists, jumped to the conclusion that was just the way the world worked. I used to say that if we took care of productivity, everything else would take care of itself; it was the single most important economic statistic. But that’s no longer true.” He adds, “It’s one of the dirty secrets of economics: technology progress does grow the economy and create wealth, but there is no economic law that says everyone will benefit.” In other words, in the race against the machine, some are likely to win while many others lose.

That robots, automation, and software can replace people might seem obvious to anyone who’s worked in automotive manufacturing or as a travel agent. But Brynjolfsson and McAfee’s claim is more troubling and controversial. They believe that rapid technological change has been destroying jobs faster than it is creating them, contributing to the stagnation of median income and the growth of inequality in the United States.

Meanwhile, technology is taking over the jobs that are left — blue collar, white collar, and even the professions. [I]mpressive advances in computer technology—from improved industrial robotics to automated translation services—are largely behind the sluggish employment growth of the last 10 to 15 years. Even more ominous for workers, the MIT academics foresee dismal prospects for many types of jobs as these powerful new technologies are increasingly adopted not only in manufacturing, clerical, and retail work but in professions such as law, financial services, education, and medicine.

Technologies like the Web, artificial intelligence, big data, and improved analytics—all made possible by the ever increasing availability of cheap computing power and storage capacity—are automating many routine tasks. Countless traditional white-collar jobs, such as many in the post office and in customer service, have disappeared.

New technologies are “encroaching into human skills in a way that is completely unprecedented,” McAfee says, and many middle-class jobs are right in the bull’s-eye; even relatively high-skill work in education, medicine, and law is affected.

We’ll spend more time on the shadowy side of the street next time.

[1] Erik Brynjolfsson is director of the MIT Center for Digital Business, and Andrew McAfee is a principal research scientist at MIT who studies how digital technologies are changing business, the economy, and society.

[2] According to his official bio on his website, Tim O’Reilly “is the founder and CEO of O’Reilly Media, Inc. His original business plan was simply ‘interesting work for interesting people,’ and that’s worked out pretty well. O’Reilly Media delivers online learning, publishes books, runs conferences, urges companies to create more value than they capture, and tries to change the world by spreading and amplifying the knowledge of innovators.”



Check out Kevin’s latest LinkedIn Pulse article: Meeting Goals the Olympic Way: Train + Transform.

Bright Sunshiny Day, Continued

Last time, we heard David Lee[1] express his conviction that, far from destroying human jobs, robotic technology will unleash human creativity on a wonderful new world of work. His perspective is so remarkably and refreshingly upbeat that I thought we’d let him continue where he left off last week:

I think it’s important to recognize that we brought this problem on ourselves. And it’s not just because, you know, we are the one building the robots. But even though most jobs left the factory decades ago, we still hold on to this factory mindset of standardization and de-skilling. We still define jobs around procedural tasks and then pay people for the number of hours that they perform these tasks. We’ve created narrow job definitions like cashier, loan processor or taxi driver and then asked people to form entire careers around these singular tasks.

These choices have left us with actually two dangerous side effects. The first is that these narrowly defined jobs will be the first to be displaced by robots, because single-task robots are just the easiest kinds to build. But second, we have accidentally made it so that millions of workers around the world have unbelievably boring working lives.

Let’s take the example of a call center agent. Over the last few decades, we brag about lower operating costs because we’ve taken most of the need for brainpower out of the person and put it into the system. For most of their day, they click on screens, they read scripts. They act more like machines than humans. And unfortunately, over the next few years, as our technology gets more advanced, they, along with people like clerks and bookkeepers, will see the vast majority of their work disappear.

To counteract this, we have to start creating new jobs that are less centered on the tasks that a person does and more focused on the skills that a person brings to work. For example, robots are great at repetitive and constrained work, but human beings have an amazing ability to bring together capability with creativity when faced with problems that we’ve never seen before.

We need to realistically think about the tasks that will be disappearing over the next few years and start planning for more meaningful, more valuable work that should replace it. We need to create environments where both human beings and robots thrive. I say, let’s give more work to the robots, and let’s start with the work that we absolutely hate doing. Here, robot, process this painfully idiotic report.

And for the human beings, we should follow the advice from Harry Davis at the University of Chicago. He says we have to make it so that people don’t leave too much of themselves in the trunk of their car. I mean, human beings are amazing on weekends. Think about the people that you know and what they do on Saturdays. They’re artists, carpenters, chefs and athletes. But on Monday, they’re back to being Junior HR Specialist and Systems Analyst 3.

You know, these narrow job titles not only sound boring, but they’re actually a subtle encouragement for people to make narrow and boring job contributions. But I’ve seen firsthand that when you invite people to be more, they can amaze us with how much more they can be.

[The key is] to turn dreams into a reality. And that dreaming is an important part of what separates us from machines. For now, our machines do not get frustrated, they do not get annoyed, and they certainly don’t imagine.

But we, as human beings — we feel pain, we get frustrated. And it’s when we’re most annoyed and most curious that we’re motivated to dig into a problem and create change. Our imaginations are the birthplace of new products, new services, and even new industries.

If we really want to robot-proof our jobs, we, as leaders, need to get out of the mindset of telling people what to do and instead start asking them what problems they’re inspired to solve and what talents they want to bring to work. Because when you can bring your Saturday self to work on Wednesdays, you’ll look forward to Mondays more, and those feelings that we have about Mondays are part of what makes us human.

We’ll give the other side equal time next week.

[1] David Lee is Vice President of Innovation and the Strategic Enterprise Fund for UPS.



Check out Kevin’s latest LinkedIn Pulse article: Leadership and Life Lessons From an Elite Athlete and a Dying Man.

Gonna Be a Bright, Bright, Sunshiny Day

We met Sebastian Thrun last time. He’s a bright guy with a sunshiny disposition: he’s not worried about robots and artificial intelligence taking over all the good jobs, even his own. Instead, he’s perfectly okay if technology eliminates most of what he does every day because he believes human ingenuity will fill the vacuum with something better. This is from his conversation with TED curator Chris Anderson:

If I look at my own job as a CEO, I would say 90 percent of my work is repetitive, I don’t enjoy it, I spend about four hours per day on stupid, repetitive email. And I’m burning to have something that helps me get rid of this. Why? Because I believe all of us are insanely creative . . . What this will empower is to turn this creativity into action.

We’ve unleashed this amazing creativity by de-slaving us from farming and later, of course, from factory work and have invented so many things. It’s going to be even better, in my opinion. And there’s going to be great side effects. One of the side effects will be that things like food and medical supply and education and shelter and transportation will all become much more affordable to all of us, not just the rich people.

Anderson sums it up this way:

So the jobs that are getting lost, in a way, even though it’s going to be painful, humans are capable of more than those jobs. This is the dream. The dream is that humans can rise to just a new level of empowerment and discovery. That’s the dream.

Another bright guy with a sunshiny disposition is David Lee, Vice President of Innovation and the Strategic Enterprise Fund for UPS. He, too, shares the dream that technology will turn human creativity loose on a whole new kind of working world. Here’s his TED talk (click the image):

Like Sebastian Thrun, he’s no Pollyanna: he understands that yes, technology threatens jobs:

There’s a lot of valid concern these days that our technology is getting so smart that we’ve put ourselves on the path to a jobless future. And I think the example of a self-driving car is actually the easiest one to see. So these are going to be fantastic for all kinds of different reasons. But did you know that “driver” is actually the most common job in 29 of the 50 US states? What’s going to happen to these jobs when we’re no longer driving our cars or cooking our food or even diagnosing our own diseases?

Well, a recent study from Forrester Research goes so far as to predict that 25 million jobs might disappear over the next 10 years. To put that in perspective, that’s three times as many jobs lost in the aftermath of the financial crisis. And it’s not just blue-collar jobs that are at risk. On Wall Street and across Silicon Valley, we are seeing tremendous gains in the quality of analysis and decision-making because of machine learning. So even the smartest, highest-paid people will be affected by this change.

What’s clear is that no matter what your job is, at least some, if not all of your work, is going to be done by a robot or software in the next few years.

But that’s not the end of the story. Like Thrun, he believes that the rise of the robots will clear the way for unprecedented levels of human creativity — provided we move fast:

The good news is that we have faced down and recovered from two mass extinctions of jobs before. From 1870 to 1970, the percent of American workers based on farms fell by 90 percent, and then again from 1950 to 2010, the percent of Americans working in factories fell by 75 percent. The challenge we face this time, however, is one of time. We had a hundred years to move from farms to factories, and then 60 years to fully build out a service economy.

The rate of change today suggests that we may only have 10 or 15 years to adjust, and if we don’t react fast enough, that means by the time today’s elementary-school students are college-aged, we could be living in a world that’s robotic, largely unemployed and stuck in kind of un-great depression.

But I don’t think it has to be this way. You see, I work in innovation, and part of my job is to shape how large companies apply new technologies. Certainly some of these technologies are even specifically designed to replace human workers. But I believe that if we start taking steps right now to change the nature of work, we can not only create environments where people love coming to work but also generate the innovation that we need to replace the millions of jobs that will be lost to technology.

I believe that the key to preventing our jobless future is to rediscover what makes us human, and to create a new generation of human-centered jobs that allow us to unlock the hidden talents and passions that we carry with us every day.

More from David Lee next time.

If all this bright sunshiny perspective made you think of that old tune, you might treat yourself to a listen. It’s short, you’ve got time.

And for a look at a current legal challenge to the “gig economy” across the pond, check out this Economist article from earlier this week.



Check out Kevin’s latest LinkedIn Pulse article: Leadership and Life Lessons From an Elite Athlete and a Dying Man.

Learning to Learn

“I didn’t know robots had advanced so far,” a reader remarked after last week’s post about how computers are displacing knowledge workers. What changed to make that happen? The machines learned how to learn.

This is from “Artificial Intelligence Goes Bilingual—Without A Dictionary,” Science Magazine, Nov. 28, 2017.

“Imagine that you give one person lots of Chinese books and lots of Arabic books—none of them overlapping—and the person has to learn to translate Chinese to Arabic. That seems impossible, right?” says . . . Mikel Artetxe, a computer scientist at the University of the Basque Country (UPV) in San Sebastián, Spain. “But we show that a computer can do that.”

Most machine learning—in which neural networks and other computer algorithms learn from experience—is “supervised.” A computer makes a guess, receives the right answer, and adjusts its process accordingly. That works well when teaching a computer to translate between, say, English and French, because many documents exist in both languages. It doesn’t work so well for rare languages, or for popular ones without many parallel texts.

[This learning technique is called] unsupervised machine learning. [A computer using this technique] constructs bilingual dictionaries without the aid of a human teacher telling them when their guesses are right.
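For the code-curious, the supervised loop described above (guess, receive the right answer, adjust) fits in a few lines. Here’s a toy sketch, with invented data that has nothing to do with the translation systems in the article, in which the program learns a hidden multiplier purely from examples:

```python
def train_supervised(pairs, steps=1000, lr=0.01):
    """Learn w so that the prediction w * x approximates y, from (x, y) examples."""
    w = 0.0
    for _ in range(steps):
        for x, y in pairs:
            guess = w * x         # the computer makes a guess
            error = guess - y     # ...receives the right answer
            w -= lr * error * x   # ...and adjusts its process accordingly
    return w

# Training data secretly generated by y = 3x; the learner is never told the rule.
examples = [(1, 3), (2, 6), (3, 9), (4, 12)]
w = train_supervised(examples)
print(round(w, 3))  # converges toward 3.0
```

Unsupervised learning, by contrast, gets no `y` column at all; the program has to find structure in the raw data on its own, which is what makes the dictionary-free translation result so striking.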

Hmm. . . . I could have used that last year, when my wife and I spent three months visiting our daughter in South Korea. The Korean language is ridiculously complex; I never got much past “good morning.”

Go matches were a standard offering on the gym TVs where I worked out. (Imagine two guys in black suits staring intently at a game board — not exactly a riveting workout visual.) Go is also ridiculously complex, and mysterious, too: the masters seem to make moves more intuitively than analytically. But the days of human Go supremacy are over. Google wizard and overall overachiever Sebastian Thrun[1] explains why in this conversation with TED Curator Chris Anderson:

Artificial intelligence and machine learning is about 60 years old and has not had a great day in its past until recently. And the reason is that today, we have reached a scale of computing and datasets that was necessary to make machines smart. The new thing now is that computers can find their own rules. So instead of an expert deciphering, step by step, a rule for every contingency, what you do now is you give the computer examples and have it infer its own rules.

A really good example is AlphaGo. Normally, in game playing, you would really write down all the rules, but in AlphaGo’s case, the system looked over a million games and was able to infer its own rules and then beat the world’s reigning Go champion. That is exciting, because it relieves the software engineer of the need of being super smart, and pushes the burden towards the data.

20 years ago the computers were as big as a cockroach brain. Now they are powerful enough to really emulate specialized human thinking. And then the computers take advantage of the fact that they can look at much more data than people can. AlphaGo looked at more than a million games. No human expert can ever study a million games. So as a result, the computer can find rules that even people can’t find.
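Thrun’s point, that we give the computer examples and let it infer its own rules, can be illustrated with something far humbler than AlphaGo. In this sketch (the data is invented for illustration), no one writes the classification rule; the program recovers it from labeled examples:

```python
def infer_threshold_rule(examples):
    """Given (value, label) pairs, find the threshold t that best explains them."""
    best_t, best_correct = None, -1
    for t in sorted(v for v, _ in examples):
        # Score the candidate rule "label is True when value >= t"
        correct = sum((v >= t) == label for v, label in examples)
        if correct > best_correct:
            best_t, best_correct = t, correct
    return best_t

# Hidden rule: label is True when value >= 5. The program sees only the data.
data = [(1, False), (3, False), (4, False), (5, True), (7, True), (9, True)]
print(infer_threshold_rule(data))  # recovers 5
```

The expert never dictates the rule, step by step, for every contingency; the data carries the burden instead, which is Thrun’s point in miniature.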

Thrun made those comments in April 2017. AlphaGo’s championship reign was short-lived: it was unseated a mere six months later by a new cyber challenger that taught itself without reviewing all that data. This is from “AlphaGo Zero Shows Machines Can Become Superhuman Without Any Help,” MIT Technology Review, October 18, 2017.

AlphaGo wasn’t the best Go player on the planet for very long. A new version of the masterful AI program has emerged, and it’s a monster. In a head-to-head matchup, AlphaGo Zero defeated the original program by 100 games to none.

Whereas the original AlphaGo learned by ingesting data from hundreds of thousands of games played by human experts, AlphaGo Zero started with nothing but a blank board and the rules of the game. It learned simply by playing millions of games against itself, using what it learned in each game to improve.

The new program represents a step forward in the quest to build machines that are truly intelligent. That’s because machines will need to figure out solutions to difficult problems even when there isn’t a large amount of training data to learn from.

“The most striking thing is we don’t need any human data anymore,” says Demis Hassabis, CEO and cofounder of DeepMind [the creators of AlphaGo Zero].

“By not using human data or human expertise, we’ve actually removed the constraints of human knowledge,” says David Silver, the lead researcher at DeepMind and a professor at University College London. “It’s able to create knowledge for itself from first principles.”

Did you catch that? “We’ve removed the constraints of human knowledge.” Wow. No wonder computers are elbowing all those knowledge workers out of the way.
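To get a feel for what learning from “nothing but a blank board and the rules” means, here’s a drastically simplified self-play sketch. It plays a trivial stone-taking game, not Go, and everything in it (the game, the update rule, the numbers) is invented for illustration; the point is only the loop itself: the program starts knowing nothing but the rules, plays itself, and discovers which positions are winning.

```python
import random

def learn_by_self_play(pile_size=12, games=20000, seed=0):
    """Players alternate taking 1 or 2 stones; whoever takes the last stone wins."""
    rng = random.Random(seed)
    # value[n]: estimated chance that the player to move wins with n stones left
    value = {n: 0.5 for n in range(1, pile_size + 1)}
    for _ in range(games):
        n, history = pile_size, []
        while n > 0:
            moves = [m for m in (1, 2) if m <= n]
            if rng.random() < 0.2:   # explore: try a random move
                m = rng.choice(moves)
            else:                    # exploit: leave the opponent worst off
                m = max(moves, key=lambda mv: 1.0 if n - mv == 0 else 1 - value[n - mv])
            history.append(n)
            n -= m
        # The player who took the last stone won; credit each position accordingly.
        winner_to_move = True
        for pos in reversed(history):
            target = 1.0 if winner_to_move else 0.0
            value[pos] += 0.05 * (target - value[pos])
            winner_to_move = not winner_to_move
    return value

value = learn_by_self_play()
# Game theory says piles divisible by 3 are losses for the player to move;
# the self-taught values should reflect that.
print(value[3] < 0.5 < value[4])
```

AlphaGo Zero’s actual method (deep neural networks plus Monte Carlo tree search) is vastly more sophisticated, but the core loop is the same: play yourself, score the outcome, update, repeat, with no human data in sight.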

What’s left for humans to do? We’ll hear from Sebastian Thrun and others on that topic next time.

[1] Sebastian Thrun’s TED bio describes him as “an educator, entrepreneur and troublemaker. After a long life as a professor at Stanford University, Thrun resigned from tenure to join Google. At Google, he founded Google X, home to self-driving cars and many other moonshot technologies. Thrun also founded Udacity, an online university with worldwide reach, and Kitty Hawk, a ‘flying car’ company. He has authored 11 books, 400 papers, holds 3 doctorates and has won numerous awards.”



Check out Kevin’s latest LinkedIn Pulse article: Leadership and Life Lessons From an Elite Athlete and a Dying Man.