September 22, 2018

The Perils of Predicting

“We were promised flying cars, and instead what we got was 140 characters.”

Peter Thiel, PayPal co-founder[1]

Economic forecasts and policy solutions are based on predictions, and predicting is a perilous business.

I grew up in a small town in western Minnesota. Our family got the morning paper — the Minneapolis Tribune. The Star’s subscribers got their paper around 4:00. A friend’s dad was a lawyer — his family got both. In a childhood display of cognitive bias, I never could understand why anyone would want an afternoon paper. News was made the day before, so you could read about it the next morning, and that was that.

I remember one Tribune headline to this day: it predicted nuclear war within ten years. That was 1961, when I was eight. The Cuban missile crisis was the following year, and for a while it looked like it wouldn’t take all ten years for the headline’s prediction to come true.

The Tribune helpfully ran designs and instructions for building your own fallout shelter. Our house had the perfect place for one: a root cellar off one side of the basement — easily the creepiest place in the house. You descended a couple steps down from the basement floor, through a stubby cinderblock hallway, past a door hanging on one hinge. Ahead of you was a bare light bulb swinging from the ceiling — it flickered, revealing decades of cobwebs and homeowner flotsam worthy of Miss Havisham. It was definitely a bomb shelter fixer-upper, but it was the right size, and as an added bonus it had a concrete slab over it — if you banged the ground above with a pipe it made a hollow sound.

I scoured the fallout shelter plans, but my dad said no. Someone else in town built one — the ventilation pipes stuck out of a room-size mound next to their house. People used to go by it on their Sunday drives. Meanwhile I ran my own personal version of the Doomsday Clock for the next ten years until my 18th birthday came and went. So much for that headline.

I also remember a Sunday cartoon that predicted driverless cars. I found it in this article from Gizmodo:[2]

The article explains:

The period between 1958 and 1963 might be described as a Golden Age of American Futurism, if not the Golden Age of American Futurism. Bookended by the founding of NASA in 1958 and the end of The Jetsons in 1963, these few years were filled with some of the wildest techno-utopian dreams that American futurists had to offer. It also happens to be the exact timespan for the greatest futuristic comic strip to ever grace the Sunday funnies: Closer Than We Think.

Jetpacks, meal pills, flying cars — they were all there, beautifully illustrated by Arthur Radebaugh, a commercial artist based in Detroit best known for his work in the auto industry. Radebaugh would help influence countless Baby Boomers and shape their expectations for the future. The influence of Closer Than We Think can still be felt today.

Timing is Everything

Apparently timing is everything in the prediction business. The driverless car prediction was accurate, just way too early. The Tribune’s nuclear war prediction was inaccurate (and let’s hope not just because it was too early). Predictions from the hapless mythological prophetess Cassandra were never inaccurate or untimely: she was cursed by Apollo (who ran a highly successful prophecy business at Delphi) with the gift of always being right but never believed.

Now that would be frustrating.

As I said last week, predicting is as perilous as policy-making. An especially perilous version of both is utopian thinking. There’s been plenty of utopian economic thinking the past couple centuries, and today’s economists continue the grand tradition — to their peril, and potentially to ours. We’ll look at some economic utopian thinking (and the case for and against it) beginning next time.

 

Apparently timing is everything in country music, too. I’m not an aficionado, but I did come across this video while researching this post. The guy’s got a nice baritone.


[1] Peter Thiel needn’t despair about the lack of flying cars anymore: here’s a video of a prototype from Sebastian Thrun and his company Kitty Hawk.

[2] The article is worth a look, if you like that sort of thing. So is this Smithsonian article on the Jetsons. And while we’re on the topic, check out this IEEE Spectrum article on a 1960 RCA initiative that had self-driving cars just around the corner, and this Atlantic article about an Electronic Age/Science Digest article that made the same prediction even earlier — in 1958.

 

Kevin Rhodes writes about individual growth and cultural change, drawing on insights from science, technology, disruptive innovation, entrepreneurship, neuroscience, psychology, and personal experience, including his own unique journey to wellness — dealing with primary progressive MS through an aggressive regime of exercise, diet, and mental conditioning.

Check out Kevin’s latest LinkedIn Pulse article: “When We Move, We Can Achieve the Impossible.”

Bus Riding Economists

Lord, I was born a ramblin’ man
Tryin’ to make a livin’ and doin’ the best I can[1]

A couple economists took the same bus I did one day last week. We’ll call them “Home Boy” and “Ramblin’ Man.” They got acquainted when Ramblin’ Man put his money in the fare box and didn’t get a transfer coupon. He was from out of town, he said, and didn’t know how to work it. Home Boy explained that you need to wait until the driver gets back from her break. Ramblin’ Man said he guessed the money was just gone, but the driver showed up about then and checked the meter — it showed he’d put the money in, so he got his transfer. Technology’s great, ain’t it?

Ramblin’ Man took the seat in front of me. Home Boy sat across the aisle. When the conversation turned to economics, I eavesdropped[2] shamelessly. Well, not exactly — they were talking pretty loud. Ramblin’ Man said he’d been riding the bus for two days to get to the VA. That gave them instant common ground: they were both Vietnam vets, and agreed they were lucky to get out alive.

Ramblin’ Man said when he got out he went traveling — hitchhike, railroad, bus, you name it. That was back in the 70’s, when a guy could go anywhere and get a job. Not no more. Now he lives in a small town up in northeast Montana. He likes it, but it’s a long way to the VA. Still, he knew if he could get here, there’d be a bus to take him right to it, and sure enough there was. That’s the trouble with those small towns, said Home Boy — nice and quiet, but not enough people to have any services. I’ll bet there’s no bus company up there, he chuckled. Not full of people like Minneapolis.

Minneapolis! Ramblin’ Man lit up at the mention of it. All them people, and no jobs. He was there in 2009, right after the bankers ruined the economy. Yeah, them and the politicians, Home Boy agreed. Shoulda put them all in jail. It’s those one-percenters. They got it fixed now so nobody makes any money but them. It’s like it was back when they were building the railroads and stuff. Now they’re doing it again. Nobody learns from history — they keep doing the same things over and over. They’re stuck in the past.

Except this time, it’s different, said Ramblin’ Man. It’s all that technology — takes away all the jobs. Back in ’09, he’d been in Minneapolis for three months, and his phone never rang once for a job offer. Not once. Never used to happen in the 70’s.

And then my stop came up, and my economic history lesson was over. My two bus riding economists had covered the same developments I’ve been studying for the past 15 months. My key takeaway? That “The Economy” is a lazy fiction — none of us really lives there. Instead, we live in the daily challenges of figuring out how to get the goods and services we need — maybe to thrive (if you’re one of them “one-percenters”), or maybe just to get by. The Economy isn’t some transcendent structure, it’s created one human transaction at a time — like when a guy hits the road to make sense of life after a war, picking up odd jobs along the way until eventually he settles in a peaceful little town in the American Outback. When we look at The Economy that way, we get a whole new take on it. That’s precisely what a new breed of cross-disciplinary economists are doing, and we’ll examine their outlook in the coming weeks.

In the meantime, I suspect that one of the reasons we don’t learn from history is that we don’t know it. In that regard, I recently read a marvelous economic history book that taught me a whole lot I never knew: Americana: A 400-Year History of American Capitalism (2017) by tech entrepreneur Bhu Srinivasan. Here’s the promo blurb:

“From the days of the Mayflower and the Virginia Company, America has been a place for people to dream, invent, build, tinker, and bet the farm in pursuit of a better life. Americana takes us on a four-hundred-year journey of this spirit of innovation and ambition through a series of Next Big Things — the inventions, techniques, and industries that drove American history forward: from the telegraph, the railroad, guns, radio, and banking to flight, suburbia, and sneakers, culminating with the Internet and mobile technology at the turn of the twenty-first century. The result is a thrilling alternative history of modern America that reframes events, trends, and people we thought we knew through the prism of the value that, for better or for worse, this nation holds dearest: capitalism. In a winning, accessible style, Bhu Srinivasan boldly takes on four centuries of American enterprise, revealing the unexpected connections that link them.”

This is American history as we never learned it, and the book is well worth every surprising page.


[1] From “Ramblin’ Man,” by the Allman Brothers. Here’s a 1970 live version. And here’s the studio version.

[2] If you wonder, as I did, where “eavesdrop” came from, here’s the Word Detective’s explanation.

 

Kevin Rhodes writes about individual growth and cultural change, drawing on insights from science, technology, disruptive innovation, entrepreneurship, neuroscience, psychology, and personal experience, including his own unique journey to wellness — dealing with primary progressive MS through an aggressive regime of exercise, diet, and mental conditioning.

Check out Kevin’s latest LinkedIn Pulse article: “When We Move, We Can Achieve the Impossible.”

On the Third Hand…

Will the machines take over the jobs? Ask a bunch of economists, and you’ll get opinions organized around competing ideologies, reflecting individual cognitive, emotional, and political biases. That’s been the experience of Martin Ford, entrepreneur, TED talker, and New York Times bestselling author of Rise of the Robots: Technology and the Threat of a Jobless Future:

In the field of economics the opinions all too often break cleanly along predefined political lines. Knowing the ideological predisposition of a particular economist is often a better predictor of what that individual is likely to say than anything contained in the data under examination. In other words, if you’re waiting for the economists to deliver some sort of definitive verdict on the impact that advancing technology is having on the economy, you may have a very long wait.[1]

In this Psychology Today article, Dr. Karl Albrecht[2] offers a neurological explanation for polarized thinking:

Recent research suggests that our brains may be pre-wired for dichotomized thinking. That’s a fancy name for thinking and perceiving in terms of two — and only two — opposing possibilities.

These research findings might help explain how and why the public discourse of our culture has become so polarized and rancorous, and how we might be able to replace it with a more intelligent conversation.

[O]ur brains can keep tabs on two tasks at a time, by sending each one to a different side of the brain. Apparently, we toggle back and forth, with one task being primary and the other on standby.

Add a third task, however, and one of the others has to drop off the to-do list.

Scans of brain activity during this task switching have led to the hypothesis that the brain actually likes handling things in pairs. Indeed, the brain itself is subdivided into two distinct half-brains, or hemispheres.

Curiously, part of our cranial craving for two-ness might be related to our own physiology: the human body is bilaterally symmetrical. Draw an imaginary center line down through the front of a person and you see a lot of parts (not all, of course), that come in pairs: two eyes, two ears, two nostrils, matching teeth on left and right sides, two shoulders, two arms, two hands, two nipples, two legs, two knees, and two feet. Inside you’ll find two of some things and one of others.

Some researchers are now extending this reasoning to suggest that the brain has a built-in tendency, when confronted by complex propositions, to selfishly reduce the set of choices to just two. Apparently it doesn’t like to work hard.

Considering how quickly we make our choices and set our opinions, it’s unlikely that all of the options will even be identified, never mind carefully considered.

“On the one hand this, on the other hand that,” we like to say. Lawyers perfect the art. Politics and the press also thrive on dichotomy:

Again, our common language encodes the effect of this anatomical self reference. “On the one hand, there is X. But on the other hand, we have Y.” Many people describe political views as being either “left” or “right.”

The popular press routinely constructs “news” stories around conflicts and differences between pairs of opposing people, factions, and ideologies. Bipolar conflict is the very essence of most of the news.

So, are robots and artificial intelligence going to trash the working world, or not?

Hmmm, there might be another option — several, actually. Dr. Albrecht urges us to find them:

Seek the “third hand” — and any other “hands” you can discover. Ask yourself, and others, “Are there other options to be considered?”

We’ll consider some third hand perspectives about the rise of the robots in the coming weeks.


[1] Martin Ford is also the consulting expert for Societe Generale’s new “Rise of the Robots” investment index, which focuses on companies that are “significant participants in the artificial intelligence and robotics revolution.”

[2] According to his website, Karl Albrecht “is an executive management consultant, futurist, lecturer, and author of more than 20 books on professional achievement, organizational performance, and business strategy. He is also a leading authority on cognitive styles and the development of advanced thinking skills. The Mensa Society honored him with its lifetime achievement award, for significant contributions by a member to the understanding of intelligence. Originally a physicist, and having served as a military intelligence officer and business executive, he now consults, lectures, and writes about whatever he thinks would be fun.”

 

Kevin Rhodes writes about individual growth and cultural change, drawing on insights from science, technology, disruptive innovation, entrepreneurship, neuroscience, psychology, and personal experience, including his own unique journey to wellness — dealing with primary progressive MS through an aggressive regime of exercise, diet, and mental conditioning.

Check out Kevin’s latest LinkedIn Pulse article: “When We Move, We Can Achieve the Impossible.”

Race Against the Machine, Continued

Rational choice theory is a cornerstone of conventional economic thinking. It states that:

Individuals always make prudent and logical decisions. These decisions provide people with the greatest benefit or satisfaction — given the choices available — and are also in their highest self-interest.

Presumably Stephen Hawking, Elon Musk, and Bill Gates had something like this in mind when they published an open letter in January 2015 urging that artificial intelligence R&D should focus “not only on making AI more capable, but also on maximizing the societal benefit.” To execute on this imperative, they urged an interdisciplinary collaboration among “economics, law and philosophy, computer security, formal methods and, of course, various branches of AI itself.” (Since its release, the letter has garnered another 8,000 signatures — you can sign it, too, if you like.)

The letter’s steady, rational four paragraphs praise how technology has benefited the human race, and anticipate more of the same in the future, but its reception and the authors’ comments in other contexts are not so measured. As a result, the letter has become a cheering section for those who think humanity is losing its race against the robots.

Consider, for example, the following from an Observer article:

“Success in creating AI would be the biggest event in human history,” wrote Stephen Hawking in an op-ed, which appeared in The Independent in 2014. “Unfortunately, it might also be the last, unless we learn how to avoid the risks.” Professor Hawking added in a 2014 interview with BBC, “humans, limited by slow biological evolution, couldn’t compete and would be superseded by A.I.”

Elon Musk called the prospect of artificial intelligence “our greatest existential threat” in a 2014 interview with MIT students at the AeroAstro Centennial Symposium. “I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish.” Mr. Musk cites his decision to invest in the Artificial Intelligence firm, DeepMind, as a means to “just keep an eye on what’s going on with artificial intelligence. I think there is potentially a dangerous outcome there.”

Microsoft co-founder Bill Gates has also expressed concerns about Artificial Intelligence. During a Q&A session on Reddit in January 2015, Mr. Gates said, “I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”

Or consider this Elon Musk comment in Vanity Fair:

In a startling public reproach to his friends and fellow techies, Musk warned that they could be creating the means of their own destruction. He told Bloomberg’s Ashlee Vance, the author of the biography Elon Musk, that he was afraid that his friend Larry Page, a co-founder of Google and now the C.E.O. of its parent company, Alphabet, could have perfectly good intentions but still “produce something evil by accident”—including, possibly, “a fleet of artificial intelligence-enhanced robots capable of destroying mankind.”

In other words, Hawking, Gates, and Musk aren’t just worried about machines taking over jobs, they’re worried about the end of the world — or at least the human race. This Washington Post op-ed piece thinks that might not be such a bad thing:

When a technology is so obviously dangerous — like nuclear energy or synthetic biology — humanity has an imperative to consider dystopian predictions of the future. But it also has an imperative to push on, to reach its full potential. While it’s scary, sure, that humans may no longer be the smartest life forms in the room a generation from now, should we really be that concerned? Seems like we’ve already done a pretty good job of finishing off the planet anyway. If anything, we should be welcoming our AI masters to arrive sooner rather than later.

Or consider this open letter written back to Hawking, Gates, and Musk, which basically says forget the fear mongering — it’s going to happen no matter what you think:

Progress is inevitable, even if it is reached by accident and happenstance. Even if we do not intend to, sentient AI is something that will inevitably be created, be it through the evolution of a learning AI, or as a byproduct of some research. No treaty or coalition can stop it, no matter what you think. I just pray you do not go from educated men to fear mongers when it happens.

As usual, we’re at an ideological impasse, with both sides responding not so much according to the pros and cons but according to their predispositions. This article suggests a way through the impasse:

At the beginning of this article, we asked if the pessimists or optimists would be right.

There is a third option, though: one where we move from building jobs around processes and tasks, a solution that is optimal for neither human nor machine, to building jobs around problems.

The article is long, well-researched, and… well, very rational. Too bad — conventional thinking aside — other research shows we rarely act from a rational outlook when it comes to jobs and the economy… or anything else for that matter.

More on that next time.

 

 

Kevin Rhodes writes about individual growth and cultural change, drawing on insights from science, technology, disruptive innovation, entrepreneurship, neuroscience, psychology, and personal experience, including his own unique journey to wellness — dealing with primary progressive MS through an aggressive regime of exercise, diet, and mental conditioning.

Check out Kevin’s latest LinkedIn Pulse article: “When We Move, We Can Achieve the Impossible.”

Race Against the Machine

For the past several years, two MIT big thinkers[1] have been the go-to authorities in the scramble to explain how robotics, artificial intelligence, and big data are revolutionizing the economy and the working world. Their two books were published four and six years ago — so yesterday in the world of technology — but they were remarkably prescient when written, and have not diminished in relevance. They are:

Race Against the Machine: How the Digital Revolution is Accelerating Innovation, Driving Productivity, and Irreversibly Transforming Employment and the Economy (2012)

The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies (2014)

Click here for a chapter-by-chapter digest of The Second Machine Age, written by an all-star cast of economic commentators. Among other things, they acknowledge the authors’ view that neoliberal capitalism has not fared well in its dealings with the technological juggernaut, but in the absence of a better alternative, we might as well continue to ride the horse in the direction it’s going.

While admitting that History (not human choice) is “littered with unintended . . . side effects of well-intentioned social and economic policies,” the authors cite Tim O’Reilly[2] in pushing forward with technology’s momentum rather than clinging to the past or present. They suggest that we should let the technologies do their work and just find ways to deal with it. They are “skeptical of efforts to come up with fundamental alternatives to capitalism.”

David Rotman, editor of the MIT Technology Review, cites The Second Machine Age extensively in an excellent, longer article, “How Technology is Destroying Jobs.” Although the article is packed with contrary analysis and opinion, the following excerpts emphasize what many might consider the shadowy side of the street (compared to the sunny side we looked at in the past couple posts). I added the headings below to emphasize that many of the general economic themes we’ve been talking about also apply to the specific dynamics of the job market.

It used to be that economic growth — including wealth creation — also created more jobs. It doesn’t work that way any more. Perhaps the most damning piece of evidence, according to Brynjolfsson, is a chart that only an economist could love. In economics, productivity—the amount of economic value created for a given unit of input, such as an hour of labor—is a crucial indicator of growth and wealth creation. It is a measure of progress. On the chart Brynjolfsson likes to show, separate lines represent productivity and total employment in the United States.

For years after World War II, the two lines closely tracked each other, with increases in jobs corresponding to increases in productivity. The pattern is clear: as businesses generated more value from their workers, the country as a whole became richer, which fueled more economic activity and created even more jobs. Then, beginning in 2000, the lines diverge; productivity continues to rise robustly, but employment suddenly wilts. By 2011, a significant gap appears between the two lines, showing economic growth with no parallel increase in job creation. Brynjolfsson and McAfee call it the “great decoupling.” And Brynjolfsson says he is confident that technology is behind both the healthy growth in productivity and the weak growth in jobs.

A rising economic tide no longer floats all boats. The result is a skewed allocation of the rewards of growth away from jobs — i.e., economic inequality. The contention that automation and digital technologies are partly responsible for today’s lack of jobs has obviously touched a raw nerve for many worried about their own employment. But this is only one consequence of what Brynjolfsson and McAfee see as a broader trend. The rapid acceleration of technological progress, they say, has greatly widened the gap between economic winners and losers—the income inequalities that many economists have worried about for decades.

“[S]teadily rising productivity raised all boats for much of the 20th century,” [Brynjolfsson] says. “Many people, especially economists, jumped to the conclusion that was just the way the world worked. I used to say that if we took care of productivity, everything else would take care of itself; it was the single most important economic statistic. But that’s no longer true.” He adds, “It’s one of the dirty secrets of economics: technology progress does grow the economy and create wealth, but there is no economic law that says everyone will benefit.” In other words, in the race against the machine, some are likely to win while many others lose.

That robots, automation, and software can replace people might seem obvious to anyone who’s worked in automotive manufacturing or as a travel agent. But Brynjolfsson and McAfee’s claim is more troubling and controversial. They believe that rapid technological change has been destroying jobs faster than it is creating them, contributing to the stagnation of median income and the growth of inequality in the United States.

Meanwhile, technology is taking over the jobs that are left — blue collar, white collar, and even the professions. [I]mpressive advances in computer technology—from improved industrial robotics to automated translation services—are largely behind the sluggish employment growth of the last 10 to 15 years. Even more ominous for workers, the MIT academics foresee dismal prospects for many types of jobs as these powerful new technologies are increasingly adopted not only in manufacturing, clerical, and retail work but in professions such as law, financial services, education, and medicine.

Technologies like the Web, artificial intelligence, big data, and improved analytics—all made possible by the ever increasing availability of cheap computing power and storage capacity—are automating many routine tasks. Countless traditional white-collar jobs, such as many in the post office and in customer service, have disappeared.

New technologies are “encroaching into human skills in a way that is completely unprecedented,” McAfee says, and many middle-class jobs are right in the bull’s-eye; even relatively high-skill work in education, medicine, and law is affected.

We’ll spend more time on the shadowy side of the street next time.


[1] Erik Brynjolfsson is director of the MIT Center for Digital Business, and Andrew McAfee is a principal research scientist at MIT who studies how digital technologies are changing business, the economy, and society.

[2] According to his official bio on his website, Tim O’Reilly “is the founder and CEO of O’Reilly Media, Inc. His original business plan was simply ‘interesting work for interesting people,’ and that’s worked out pretty well. O’Reilly Media delivers online learning, publishes books, runs conferences, urges companies to create more value than they capture, and tries to change the world by spreading and amplifying the knowledge of innovators.”

 

Kevin Rhodes writes about individual growth and cultural change, drawing on insights from science, technology, disruptive innovation, entrepreneurship, neuroscience, psychology, and personal experience, including his own unique journey to wellness — dealing with primary progressive MS through an aggressive regime of exercise, diet, and mental conditioning.

Check out Kevin’s latest LinkedIn Pulse article: “Meeting Goals the Olympic Way: Train + Transform.”

Bright Sunshiny Day, Continued

Last time, we heard David Lee[1] express his conviction that, far from destroying human jobs, robotic technology will unleash human creativity on a wonderful new world of work. His perspective is so remarkably and refreshingly upbeat that I thought we’d let him continue where he left off last week:

I think it’s important to recognize that we brought this problem on ourselves. And it’s not just because, you know, we are the one building the robots. But even though most jobs left the factory decades ago, we still hold on to this factory mindset of standardization and de-skilling. We still define jobs around procedural tasks and then pay people for the number of hours that they perform these tasks. We’ve created narrow job definitions like cashier, loan processor or taxi driver and then asked people to form entire careers around these singular tasks.

These choices have left us with actually two dangerous side effects. The first is that these narrowly defined jobs will be the first to be displaced by robots, because single-task robots are just the easiest kinds to build. But second, we have accidentally made it so that millions of workers around the world have unbelievably boring working lives.

Let’s take the example of a call center agent. Over the last few decades, we brag about lower operating costs because we’ve taken most of the need for brainpower out of the person and put it into the system. For most of their day, they click on screens, they read scripts. They act more like machines than humans. And unfortunately, over the next few years, as our technology gets more advanced, they, along with people like clerks and bookkeepers, will see the vast majority of their work disappear.

To counteract this, we have to start creating new jobs that are less centered on the tasks that a person does and more focused on the skills that a person brings to work. For example, robots are great at repetitive and constrained work, but human beings have an amazing ability to bring together capability with creativity when faced with problems that we’ve never seen before.

We need to realistically think about the tasks that will be disappearing over the next few years and start planning for more meaningful, more valuable work that should replace it. We need to create environments where both human beings and robots thrive. I say, let’s give more work to the robots, and let’s start with the work that we absolutely hate doing. Here, robot, process this painfully idiotic report.

And for the human beings, we should follow the advice from Harry Davis at the University of Chicago. He says we have to make it so that people don’t leave too much of themselves in the trunk of their car. I mean, human beings are amazing on weekends. Think about the people that you know and what they do on Saturdays. They’re artists, carpenters, chefs and athletes. But on Monday, they’re back to being Junior HR Specialist and Systems Analyst 3.

You know, these narrow job titles not only sound boring, but they’re actually a subtle encouragement for people to make narrow and boring job contributions. But I’ve seen firsthand that when you invite people to be more, they can amaze us with how much more they can be.

[The key is] to turn dreams into a reality. And that dreaming is an important part of what separates us from machines. For now, our machines do not get frustrated, they do not get annoyed, and they certainly don’t imagine.

But we, as human beings — we feel pain, we get frustrated. And it’s when we’re most annoyed and most curious that we’re motivated to dig into a problem and create change. Our imaginations are the birthplace of new products, new services, and even new industries.

If we really want to robot-proof our jobs, we, as leaders, need to get out of the mindset of telling people what to do and instead start asking them what problems they’re inspired to solve and what talents they want to bring to work. Because when you can bring your Saturday self to work on Wednesdays, you’ll look forward to Mondays more, and those feelings that we have about Mondays are part of what makes us human.

We’ll give the other side equal time next week.


[1] David Lee is Vice President of Innovation and the Strategic Enterprise Fund for UPS.

 

Kevin Rhodes writes about individual growth and cultural change, drawing on insights from science, technology, disruptive innovation, entrepreneurship, neuroscience, psychology, and personal experience, including his own unique journey to wellness — dealing with primary progressive MS through an aggressive regime of exercise, diet, and mental conditioning.

Check out Kevin’s latest LinkedIn Pulse article: Leadership and Life Lessons From an Elite Athlete and a Dying Man.

Learning to Learn

“I didn’t know robots had advanced so far,” a reader remarked after last week’s post about how computers are displacing knowledge workers. What changed to make that happen? The machines learned how to learn.

This is from “Artificial Intelligence Goes Bilingual—Without A Dictionary,” Science Magazine, Nov. 28, 2017.

“Imagine that you give one person lots of Chinese books and lots of Arabic books—none of them overlapping—and the person has to learn to translate Chinese to Arabic. That seems impossible, right?” says . . . Mikel Artetxe, a computer scientist at the University of the Basque Country (UPV) in San Sebastián, Spain. “But we show that a computer can do that.”

Most machine learning—in which neural networks and other computer algorithms learn from experience—is “supervised.” A computer makes a guess, receives the right answer, and adjusts its process accordingly. That works well when teaching a computer to translate between, say, English and French, because many documents exist in both languages. It doesn’t work so well for rare languages, or for popular ones without many parallel texts.

[This learning technique is called] unsupervised machine learning. [A computer using this technique] constructs bilingual dictionaries without the aid of a human teacher telling [it] when [its] guesses are right.
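If you’re curious what that guess-check-adjust loop looks like in practice, here’s a toy sketch of my own (not from the article, and nothing like a real translation system): a few lines of Python that learn a simple arithmetic rule from labeled examples.

# A toy sketch of the 'supervised' loop described above: the program guesses,
# is shown the right answer, and adjusts its two internal numbers accordingly.
# Here it learns the hidden rule y = 2x + 1 from labeled examples.

examples = [(x, 2 * x + 1) for x in range(-10, 11)]   # (input, correct answer)

w, b = 0.0, 0.0          # the model's adjustable parameters
learning_rate = 0.005

for epoch in range(1000):
    for x, answer in examples:
        guess = w * x + b                # make a guess
        error = guess - answer           # compare it with the right answer
        w -= learning_rate * error * x   # adjust the process accordingly
        b -= learning_rate * error

print("learned rule: y = %.2f x + %.2f" % (w, b))   # ends up close to y = 2x + 1

Swap the arithmetic rule for millions of sentence pairs, and the two numbers for millions of neural-network weights, and you have the general shape of the supervised translation systems the article describes.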

Hmm. . . . I could have used that last year, when my wife and I spent three months visiting our daughter in South Korea. The Korean language is ridiculously complex; I never got much past “good morning.”

Go matches were a standard offering on the gym TVs where I worked out. (Imagine two guys in black suits staring intently at a game board — not exactly a riveting workout visual.) Go is also ridiculously complex, and mysterious, too: the masters seem to make moves more intuitively than analytically. But the days of human Go supremacy are over. Google wizard and overall overachiever Sebastian Thrun[1] explains why in this conversation with TED Curator Chris Anderson:

Artificial intelligence and machine learning is about 60 years old and has not had a great day in its past until recently. And the reason is that today, we have reached a scale of computing and datasets that was necessary to make machines smart. The new thing now is that computers can find their own rules. So instead of an expert deciphering, step by step, a rule for every contingency, what you do now is you give the computer examples and have it infer its own rules.

A really good example is AlphaGo. Normally, in game playing, you would really write down all the rules, but in AlphaGo’s case, the system looked over a million games and was able to infer its own rules and then beat the world’s residing Go champion. That is exciting, because it relieves the software engineer of the need of being super smart, and pushes the burden towards the data.

20 years ago the computers were as big as a cockroach brain. Now they are powerful enough to really emulate specialized human thinking. And then the computers take advantage of the fact that they can look at much more data than people can. AlphaGo looked at more than a million games. No human expert can ever study a million games. So as a result, the computer can find rules that even people can’t find.

Thrun made those comments in April 2017. AlphaGo’s championship reign was short-lived: it was unseated a mere six months later by a new cyber challenger that taught itself without reviewing all that data. This is from “AlphaGo Zero Shows Machines Can Become Superhuman Without Any Help,” MIT Technology Review, October 18, 2017.

AlphaGo wasn’t the best Go player on the planet for very long. A new version of the masterful AI program has emerged, and it’s a monster. In a head-to-head matchup, AlphaGo Zero defeated the original program by 100 games to none.

Whereas the original AlphaGo learned by ingesting data from hundreds of thousands of games played by human experts, AlphaGo Zero started with nothing but a blank board and the rules of the game. It learned simply by playing millions of games against itself, using what it learned in each game to improve.

The new program represents a step forward in the quest to build machines that are truly intelligent. That’s because machines will need to figure out solutions to difficult problems even when there isn’t a large amount of training data to learn from.

“The most striking thing is we don’t need any human data anymore,” says Demis Hassabis, CEO and cofounder of DeepMind [the creators of AlphaGo Zero].

“By not using human data or human expertise, we’ve actually removed the constraints of human knowledge,” says David Silver, the lead researcher at DeepMind and a professor at University College London. “It’s able to create knowledge for itself from first principles.”

Did you catch that? “We’ve removed the constraints of human knowledge.” Wow. No wonder computers are elbowing all those knowledge workers out of the way.
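Just to make that idea concrete, here’s a toy sketch of my own of what learning from nothing but the rules and self-play can look like. It’s nothing like AlphaGo Zero’s actual neural networks and tree search; it’s a few lines of Python that learn a trivial counting game (“race to 10”) purely by playing against itself and remembering which moves led to wins.

import random
from collections import defaultdict

# Toy self-play learner (my illustration, not DeepMind's method). The game:
# players alternately add 1 or 2 to a running total, and whoever lands
# exactly on 10 wins. The program gets no example games, only the rules.

Q = defaultdict(float)        # Q[(total, move)]: learned value of playing 'move' at 'total'
alpha, epsilon = 0.5, 0.1     # learning rate and exploration rate

def legal_moves(total):
    return [m for m in (1, 2) if total + m <= 10]

def choose(total):
    moves = legal_moves(total)
    if random.random() < epsilon:                    # explore occasionally
        return random.choice(moves)
    return max(moves, key=lambda m: Q[(total, m)])   # otherwise play the best-known move

for _ in range(50000):                               # 50,000 games against itself
    total, history = 0, []
    while total < 10:
        move = choose(total)
        history.append((total, move))
        total += move
    # The player who made the last move won; credit each side's moves accordingly.
    for i, (state, move) in enumerate(reversed(history)):
        outcome = 1.0 if i % 2 == 0 else -1.0
        Q[(state, move)] += alpha * (outcome - Q[(state, move)])

# With enough self-play, the greedy policy should steer the total to 1, 4, or 7,
# which is the optimal strategy for this little game.
print({total: max(legal_moves(total), key=lambda m: Q[(total, m)]) for total in range(9)})

Scale that lookup table up to a deep neural network and add a search procedure on top, and you have the rough outline of what AlphaGo Zero does with Go.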

What’s left for humans to do? We’ll hear from Sebastian Thrun and others on that topic next time.


[1] Sebastian Thrun’s TED bio describes him as “an educator, entrepreneur and troublemaker. After a long life as a professor at Stanford University, Thrun resigned from tenure to join Google. At Google, he founded Google X, home to self-driving cars and many other moonshot technologies. Thrun also founded Udacity, an online university with worldwide reach, and Kitty Hawk, a ‘flying car’ company. He has authored 11 books, 400 papers, holds 3 doctorates and has won numerous awards.”

 

Kevin Rhodes writes about individual growth and cultural change, drawing on insights from science, technology, disruptive innovation, entrepreneurship, neuroscience, psychology, and personal experience, including his own unique journey to wellness — dealing with primary progressive MS through an aggressive regime of exercise, diet, and mental conditioning.

Check out Kevin’s latest LinkedIn Pulse article: Leadership and Life Lessons From an Elite Athlete and a Dying Man.

Capitalism on the Fritz[1]

In November 2008, as the global financial crash was gathering pace, the 82-year-old British monarch Queen Elizabeth visited the London School of Economics. She was there to open a new building, but she was more interested in the assembled academics. She asked them an innocent but pointed question. Given its extraordinary scale, how was it possible that no one saw it coming?

The Queen’s question went to the heart of two huge failures. Western capitalism came close to collapsing in 2007-2008 and has still not recovered. And the vast majority of economists had not understood what was happening.

That’s from the Introduction to Rethinking Capitalism (2016), edited by Michael Jacobs and Mariana Mazzucato.[2] The editors and authors review a catalogue of chronic economic “dysfunction” that they trace to policy-makers’ continued allegiance to neoliberal economic orthodoxy even as it has been breaking down over the past four decades.

Before we get to their dysfunction list, let’s give the other side equal time. First, consider an open letter from Warren Buffett published in Time last week. It begins this way:

“I have good news. First, most American children are going to live far better than their parents did. Second, large gains in the living standards of Americans will continue for many generations to come.”

Mr. Buffett acknowledges that “The market system . . . has also left many people hopelessly behind,” but assures us that “These devastating side effects can be ameliorated,” observing that “a rich family takes care of all its children, not just those with talents valued by the marketplace.” With this compassionate caveat, he is definitely bullish on America’s economy:

In the years of growth that certainly lie ahead, I have no doubt that America can both deliver riches to many and a decent life to all. We must not settle for less.

So, apparently, is our Congress. The new tax law is a virtual pledge of allegiance to the neoliberal economic model. Barring a significant pullback of the law (which seems unlikely), we now have eight years to watch how its assumptions play out.

And now, back to Rethinking Capitalism’s dysfunction list (which I’ve seen restated over and over in my research):

  • Production and wages no longer move in tandem — the latter lag behind the former.
  • This has been going on now for several decades,[3] during which living standards (adjusted for inflation) for the majority of households have been flat.
  • This is a problem because consumer spending accounts for over 70% of U.S. GDP. What hurts consumers hurts the whole economy.
  • What economic growth there has been is mostly the result of spending fueled by consumer and corporate debt. This is especially true of the post-Great Recession “recovery.”
  • Meanwhile, companies have been increasing production through increased automation — most recently through intelligent machines — which means getting more done with fewer employees.
  • That means the portion of marginal output attributable to human (wage-earner) effort is less, which causes consumer incomes to fall.
  • The job marketplace has responded with new dynamics, featuring a worldwide rise of “non-standard” work (temporary, part-time, and self-employed).[4]
  • Overall, there has been an increase in the number of lower-paid workers and a rise in intransigent unemployment — especially among young people.
  • Adjusting to these new realities has left traditional wage-earners with feelings of meaninglessness and disempowerment, fueling populist backlash political movements.
  • In the meantime, economic inequality (both wealth and income) has grown to levels not seen since pre-revolution France, the days of the Robber Barons, and the Roaring 20s.
  • Economic inequality means that the shrinking share of compensation paid out in wages, salaries, bonuses, and benefits has been dramatically skewed toward the top of the earnings scale, with much less (both proportionately and absolutely) going to those at the middle and bottom. [5]
  • Increased wealth at the top doesn’t translate into enough additional consumer spending by the top 20% to offset the lost demand (spending) of the lower 80% of income earners, except to the extent that demand is propped up by consumer debt.
  • Instead, increased wealth at the top end is turned into “rentable” assets — e.g., real estate, intellectual property, and privatized holdings in what used to be the “commons” — which both drives up their value (cost) and the rent derived from them. This creates a “rentier” culture in which lower income earners are increasingly stressed to meet rental rates, and ultimately are driven out of certain markets.
  • Inequality has also created a new working class system, in which a large share of workers are in precarious/uncertain/unsustainable employment and earning circumstances.
  • Inequality has also resulted in limitations on economic opportunity and social mobility — e.g., there is a new kind of “glass floor/glass ceiling” below which the top 20% are unlikely to fall and the bottom 80% are unlikely to rise.
  • In the meantime, the social safety nets that developed during the post-WWII boom (as Buffett’s “rich family” took care of “all its children”) have been largely torn down since the advent of “workfare” in the 80’s and 90’s, leaving those at the bottom and middle more exposed than ever.

The editors of Rethinking Capitalism believe that “These failings are not temporary, they are structural.” That conclusion has led some to believe that people like Warren Buffett are seriously misguided in their continued faith in Western capitalism as a reliable societal institution.

More on that next time.


[1] I wondered where the expression “on the fritz” came from, and tried to find out. Surprisingly, no one seems to know.

[2] Michael Jacobs is an environmental economist and political theorist; at the time the book was published, he was a visiting professor at University College London. Mariana Mazzucato is an economics professor at the University of Sussex.

[3] “In the US, real median household income was barely higher in 2014 than it had been in 1990, though GDP had increased by 78 percent over the same period. Though beginning earlier in the US, this divergence of average incomes from overall economic growth has now become a feature of most advanced economies.” Rethinking Capitalism.

[4] These have accounted for “half the jobs created since the 1990s and 60 per cent since the 2008 crisis.” Rethinking Capitalism.

[5] “Meanwhile, those at the very top of the income distribution have done exceedingly well… In the US, the incomes of the richest 1 percent rose by 142 per cent between 1980 and 2013 (from an average of $461,910, adjusted for inflation, to $1,119,315) and their share of national income doubled, from 10 to 20 per cent. In the first three years of the recovery after the 2008 crash, an extraordinary 91 per cent of the gains in income went to the richest one-hundredth of the population.” Rethinking Capitalism.

 

Kevin Rhodes left a successful long-term law practice to scratch a creative itch and lived to tell about it… barely. Since then, he has been on a mission to bring professional excellence and personal wellbeing to the people who learn, teach, and practice the law. He has also blogged extensively and written several books about his unique journey to wellness, including how he deals with primary progressive MS through an aggressive regime of exercise, diet, and mental conditioning.

The Stupidity Paradox

Every day I ride a bus that has a row of seats up front that are folded up, with a sign next to them:

NOTICE
Seats Not in Service
The bus manufacturer has determined
that these seats not be used.

I’ve seen that sign for over a year. Never really thought about it. But recently I wondered: you don’t suppose both those seats and the sign were installed in the factory? It could happen — cheaper than a recall maybe. If so, it would be right in line with this week’s topic: a kind of on-the-job behavior that professors and business consultants Mats Alvesson and André Spicer[1] call The Stupidity Paradox.

Their book by that name began when they were sharing a drink after a conference and found themselves wondering, “Why was it that organisations which employed so many smart people could foster so much stupidity?” They concluded that the cause is “functional stupidity” — a workplace mindset implicitly endorsed because it works.

“We realized something: smart organisations and the smart people who work in them often do stupid things because they work — at least in the short term. By avoiding careful thinking, people are able to simply get on with their job. Asking too many questions is likely to upset others — and to distract yourself. Not thinking frees you up to fit in and get along. Sometimes it makes sense to be stupid.”

In fact, stupidity works so well it can turn into firm culture:

Far from being “knowledge intensive,” many of our most well-known chief organisations have become engines of stupidity. We have frequently seen otherwise smart people stop thinking and start doing stupid things. They stop asking questions. They give no reasons for their decisions. They pay no heed to what their actions cause. Instead of complex thought we get flimsy jargon, aggressive assertions or expert tunnel vision. Reflection, careful analysis and independent reflection decay. Idiotic ideas and practices are accepted as quite sane. People may harbour doubts, but their suspicions are cut short. What’s more, they are rewarded for it. The upshot is a lack of thought has entered the modus operandi of most organisations of today.

In other words, it pays to be stupid on the job: you get things done, satisfy expectations, don’t stand out from the crowd, aren’t labelled a troublemaker. We learned all of that in middle school; we learn it again on the job.

We learn from management:

A central, but often unacknowledged, aspect of making a corporate culture work is what we call stupidity management. Here managers actively encourage employees not to think too much. If they do happen to think, it is best not to voice what emerges. Employees are encouraged to stick within clearcut parameters. Managers use subtle and not so subtle means to prod them not to ask too many tough questions, not to reflect too deeply on their assumptions, and not to consider the broader purpose of their work. Employees are nudged to just get on with the task. They are to think on the bright side, stay upbeat and push doubts and negative thoughts aside.

And then we school ourselves:

Self-stupifying starts to happen when we censor our own internal conversations. As we go through our working day, we constantly try to give some sense to our often chaotic experiences. We do this by engaging in what some scholars call “internal reflexivity.” This is the constant stream of discussion that we have with ourselves. When self-stupidification takes over, we stop asking ourselves questions. Negative or contradictory lines of thinking are avoided. As a result, we start to feel aligned with the thoughtlessness we find around us. It is hard to be someone who thinks in an organization that shuns it.

Back to the seats on my bus… A “manufacturer” is a fiction, like “corporation” is a fiction: both act through humans. Which means that somewhere there’s an employee at a bus manufacturer whose job is to build those seats. Someone else installs them. Someone else puts up the sign. And lots of other people design, requisition, select, negotiate, buy, ship, pack and unpack, file, approve, invoice, pay bills, keep ledgers, maintain software, write memos, confer with legal, hold meetings, and make decisions. All so that the “manufacturer” — i.e., the sum total of all those people doing their jobs — can tell me not to sit there.

Functional stupidity is as common as traffic on your commute. We’ll look more into it next time.


[1] Mats Alvesson is Professor of Business Administration at the University of Lund, Sweden, University of Queensland, and Cass Business School, City University of London. André Spicer is Professor of Organisational Behaviour at Cass Business School, City University of London.

 

Kevin Rhodes is on a mission to bring professional excellence and personal wellbeing to the people who learn, teach, and practice the law. His past blog posts for the CBA have been collected in two volumes — click the book covers for more information.

Could Be Worse

Meaningless work is not inevitable, but we’re often prevented from taking remedial action because our thinking has become corrupted with feelings of powerlessness. As Studs Terkel said in his book Working:

You know, “power corrupts, and absolute power corrupts absolutely.”
It’s the same with powerlessness.
Absolute powerlessness corrupts absolutely.

If we believe there’s something patriotic, virtuous, even sacred about the way we have always viewed working for a living, then if we feel despair about our jobs it must be a personal problem, a character flaw. We ought to put up, shut up, and get cracking. The shame associated with that kind of judgment is absolutely disempowering. As long as we hold onto it, we’ll stay stuck in workplace despair and meaning malaise — a state of mind poet Richard Cecil captures in “Internal Exile,” collected in Twenty First Century Blues (2004):

Although most people I know were condemned
Years ago by Judge Necessity
To life in condos near a freeway exit
Convenient to their twice-a-day commutes
Through traffic jams to jobs that they dislike
They didn’t bury their heads in their hands
And cry “oh, no!” when sentence was pronounced:
Forty years accounting in Duluth!
Or Tenure at Southwest Missouri State!
Instead, they mumbled, not bad. It could be worse,
When the bailiff, Fate, led them away
To Personnel to fill out payroll forms
And have their smiling ID photos snapped.

And that’s what they still mumble every morning
Just before their snooze alarms go off
When Fluffy nuzzles them out of their dreams
Of making out with movie stars on beaches.
They rise at five a.m. and feed their cats
And drive to work and work and drive back home
And feed their cats and eat and fall asleep
While watching Evening News’s fresh disasters —
Blown-up bodies littering a desert
Fought over for the last three thousand years,
And smashed-to-pieces million-dollar houses
built on islands swept by hurricanes.

It’s soothing to watch news about the places
Where people literally will die to live
When you live someplace with no attractions —
Mountains, coastline, history—like here,
Where none aspire to live, though many do.
“A great place to work, with no distractions”
Is how my interviewer first described it
Nineteen years ago, when he hired me.
And, though he moved the day that he retired
To his dream house in the uplands with a vista,
He wasn’t lying—working’s better here
And easier than trying to have fun.

Is that the way it is where you’re stuck, too?

Good question. How would you answer it?

True, one of the factors behind job wretchedness is internal exile: we’re estranged from what we really want out of our work, or we’ve given up on ever having it, and so we settle for could be worse. But there’s more to it than that. There are external factors at work, too — global winds of change propelling people who want to work with passion in directions they never thought they’d be going.

There are krakens out there in the deep. One of them is something two business writers call the “Stupidity Paradox”: a prevalent workplace model that — like the bureaucracies we looked at last week — encourages obeisance to rules (we might say “best practices”) at the cost of independent thinking.

We’ll look at the Stupidity Paradox next time.

 

Kevin Rhodes left a successful long-term law practice to scratch a creative itch and lived to tell about it… barely. Since then, he has been on a mission to bring professional excellence and personal wellbeing to the people who learn, teach, and practice the law. He has also blogged extensively and written several books about his unique journey to wellness, including how he deals with primary progressive MS through an aggressive regime of exercise, diet, and mental conditioning.

Professional Paradigms New and Old (Part 7): Traumatic Transformation, and What Do You Do When Your Paradigm is Done Shifting?

Professional paradigm shifts require transformation not just for the profession’s culture, but for the individuals in it.

In their book Wired to Create: Unraveling the Mysteries of the Creative Mind, authors Scott Barry Kaufman and Carolyn Gregoire identify several ways individual paradigm-shifting transformation gets started. One is inspiration, which they say comes in three stages:

The first stage is that unsolicited moment when we feel inspired, “by a role model, teacher, experience, or subject matter.”

“Next comes transcendent awakening — a moment of clarity and an awareness of new possibilities.

“Which leads to the third hallmark feature of inspiration: a striving to transmit, express, or actualize a new idea, insight, or vision.” (Emphasis in original.)

Individual paradigm shifts are also prompted by traumatic life events, resulting in what psychologists call “posttraumatic growth.” Again from Wired to Create:

After a traumatic event, such as a serious illness or loss of a loved one, individuals intensely process the event—they’re constantly thinking about what happened, and usually with strong emotional reactions.

[T]his kind of repetitive thinking is a critical step toward thriving in the wake of a challenge… we’re working hard to make sense of it and to find a place for it in our lives that still allows us to have a strong sense of meaning and purpose.

I have personal experience with both inspiration and trauma. As I wrote a couple weeks ago, “I have a personal, real-time, vested interest in change because I’ve been on a steep personal transformation learning curve for nearly a decade — for all sorts of reasons I’ve written about in my books, my personal blog, and sometimes in this column.” Learning, writing, and conducting workshops about the psychological and neurological dynamics of transformation has been my way of being proactive about something I’ve come to call “traumatic transformation.”

In fact, I just finished a new book that completes my decade-long intensive on personal transformation. As always, I’ve learned a lot writing it, but the most startling discovery is that paradigm shifts don’t go on forever: a time actually comes when the new fully replaces the old. Now that I’ve finished it, I can see that writing the book was in part a way for me to bring closure to my years of personal paradigm shifting.

That being the case, I’ve decided that it’s time for me to set aside my transformation journey and let its lessons play out for awhile. Which is why, after today’s post, I’m going to take an indefinite vacation from writing this column. At this point, I have no fresh thoughts to add to what I’ve been writing about for the past several years. Instead of repeating myself, I want to take a break and see if anything new comes up. If so, I’ll come back and share it.

In the meantime, my endless thanks to the Colorado Bar Association and CBA-CLE and to my fabulous editor Susan Hoyt for letting me trot out my research and theories and personal revelations in this forum. And equally many thanks to those of you who’ve read and thought about and sometimes even taken some of these ideas to heart and put them into practice.

On the wall above the desk where I write, I have a dry-mounted copy of the very last Sunday Calvin and Hobbes comic strip, which I cut out of the newspaper the morning it ran. (Speaking of paradigm shifts, remember newspapers?) There’s a fresh snow, and our two heroes hop on their sled and go bouncing down a hill as Calvin exults, “It’s a magical world, Hobbes ol’ buddy… Let’s go exploring!”

I suspect Calvin and Hobbes are still out there, exploring. I plan to join them.

You?

Apocalypse: Life On The Other Side Of Over was just published yesterday. It’s a free download from the publisher, like my other books. Or click on this link or the book cover for details.

And if we don’t run into each other out there exploring, feel free to email me.


Professional Paradigms New and Old (Part 6): Law Beyond Blame

(At the end of last week’s post, I promised a follow-up this week. We’ll get to that next week. In the meantime, the following was just too pertinent to pass up.)

In several posts over the past couple years, we’ve looked at how technology acts as a disruptive innovator, shifting paradigms in the legal profession. I recently came across another disruptor: the biology of the brain. Its implications reach much further than, let’s say, Rocket Lawyer.

David Eagleman is his own weather system. Here’s his website — talk about creds. His short bio is “a neuroscientist at Baylor College of Medicine, where he directs the Laboratory for Perception and Action, and the Initiative on Neuroscience and the Law.” The latter’s website posts news about “neulaw,” and includes CLE offerings. Among other things, neulaw tackles a bastion of legal theory: the notion of culpability.

Eagleman’s book Incognito: The Secret Lives of the Brain contains a long chapter entitled “Why Blameworthiness Is The Wrong Question.” It begins with the story of Charles Whitman, who climbed a tower at the University of Texas in August 1966 and started shooting, leaving 13 people dead and 38 wounded before being killed himself. He left a suicide note that included the following:

“I do not understand myself these days. I am supposed to be an average reasonable and intelligent young man. However, lately (I cannot recall when it started) I have been a victim of many unusual and irrational thoughts… If my life insurance policy is valid please pay off my debts… donate the rest to a mental health foundation. Maybe research can prevent further tragedies of this type.”

Whitman’s brain was examined and a tumor was found in the sector that regulates fear and aggression. Psychologists have known since the late 1800s that impairment in this area results in violence and social disturbance. Against this backdrop, Eagleman opens his discussion of blameworthiness with some good questions:

Does this discovery of Whitman’s brain tumor modify your feelings about his senseless murdering? If Whitman had survived that day, would it adjust the sentencing you would consider appropriate for him? Does the tumor change the degree to which you consider it “his fault”?

On the other hand, wouldn’t it be dangerous to conclude that people with a tumor are somehow free of guilt, or that they should be let off the hook for their crimes?

The man on the tower with the mass in his brain gets us right into the heart of the question of blameworthiness. To put it in the legal argot: was he culpable?

The law has accommodated impaired states of mind for a long time, but Eagleman’s analysis takes the issue much further, all the way to the core issue of free will, as currently understood not by moral and ethical theorists but by brain science. Incognito is an extended examination of just how much brain activity occurs beneath the level of conscious detection, in both “normal” and impaired persons. Consider these excerpts:

[T]he legal system rests on the assumption that we do have free will — and we are judged based on this perceived freedom.

As far as the legal system sees it, humans . . . use conscious deliberation when deciding how to act. We make our own decisions.

Historically, clinicians and lawyers have agreed on an intuitive distinction between neurological disorders (“brain problems”) and psychiatric disorders (“mind problems”).

The more we discover about the circuitry of the brain, the more the answers . . . move toward the details of the biology. The shift from blame to science reflects our modern understanding that our perceptions and behaviors are controlled by inaccessible [neurological] subroutines that can be easily perturbed.

[A] slight change in the balance of brain chemicals can cause large changes in behavior. The behavior of the patient cannot be separated from his biology.

Think about that for a moment — as a lawyer, and as a human being. The idea that our biology controls our behavior — not our state of mind or conscious decision-making — is repugnant not only to the law, but to our everyday perceptions of free will and responsibility. Tamper with free will, and a whole lot of paradigms — not just legal notions of culpability — come crashing down.

Eagleman’s discussion of these issues in Incognito is detailed and thoughtful, and far too extensive to convey in this short blog post. If you’re intrigued, I recommend it highly.

Kevin Rhodes has been a lawyer for over 30 years. Drawing on insights gathered from science, technology, disruptive innovation, entrepreneurship, neuroscience, and psychology, and also from his personal experiences as a practicing lawyer and a “life athlete,” he’s on a mission to bring wellbeing to the people who learn, teach, and practice the law.