What comes after smartphones?

With all the press and the inescapable ads for new iPhones, Samsung Galaxy, Google Pixel and other snazzy devices, it’s hard to think of the smartphone as a transitional technology.

But it is.

Here are three recent indicators:

Apple and Facebook share a hypothesis that life contains moments when lugging a smartphone is a drag. The Apple Watch commercials feature active people running with just the Watch and wireless earbuds. (I’m not sure why VR is less alluring with a smartphone, unless one plans to be naked, and therefore pocketless, in real life while visiting virtual life.)

You might be wondering about that third indicator. How does the death of non-internet-connected iPods suggest that smartphones — the technology that replaced the iPod — are going away?

What happened to the iPod will happen to the iPhone.

Once smartphones took off after 2007, Apple cannily realized that this new wave of devices was going to absorb the customer base for listening to digital music from the iPod. Who wants to carry around a smartphone and an mp3 player when the smartphone can play mp3s just fine and sounds the same?

What both iPod and iPhone owners care about is listening to music, not the device. If anybody was going to cannibalize Apple’s iPod customers, the company thought, then it should be Apple.

As I look at technology and behavior trends, one of my axioms is that verbs are more important than nouns.

People want to take pictures, and most people prefer the fastest and easiest option for doing so. Devoted photographers still use single-lens reflex cameras — either film or digital — but (as the Kodak company learned to its dismay) most people don’t want the hassle and expense of getting film developed, so instead they just whip out their phones. In our latest Surveying the Digital Future survey, for example, we found that 89 percent of Americans take pictures with their mobile phones.

It’s important to focus our analytical attention on the activity — taking pictures — rather than the devices people use to do the activity, because behavior is liquid and can be poured from one container into another.

None of the actions people perform with smartphones are limited to smartphones, and that means that the smartphone won’t be with us forever.

What will this post-smartphone future look like?

Computing power is increasing, as is the ubiquity of wifi and other over-the-air internet connections. Cloud computing, in which the heavy lifting of computation happens online instead of on the device itself, means that smaller and smaller devices will have greater and greater processing power.

There’s a common cliché that today’s smartphone is more powerful than the computer that guided Apollo 11 to the moon. In a few short years, a device the size of a pea will connect to processing power a thousand times greater than today’s smartphone.

So, instead of smartphones in our pockets or purses as our single, do-everything devices, we’ll have Personal Area Networks (PANs): clusters of devices worn on different parts of our bodies or hovering nearby.

Instead of the glass-and-metal rectangle of today’s smartphone, we might have the computer guts of our PANs in the shape of a silver dollar, or distributed across a series of beads worn as a necklace.

Both in the data from our Future of Transportation project and in watching the uptake for Amazon’s Alexa, Apple’s Siri and the Google Assistant, we see voice interfaces rising in popularity, so it’s likely that the main PAN input will be our voices.

For output, the PAN will deliver information both via the voice of a digital assistant (“turn left here, Brad”) and via Augmented Reality (AR) glasses like the rumored-to-be-forthcoming Magic Leap technology. Eventually, these will evolve into contact lenses.

If we need to type, we’ll have a virtual keyboard projected onto our AR vision, and we’ll type on any flat surface, the way we type on touch interfaces today. Likewise, we might wear barely-there connected gloves for input. Or we might carry around a small stylus for sketching in AR or VR, or even a fancy pen that works on real paper as well as virtual paper.

The cutting-edge health sensors in the latest Apple Watch will seem Flintstonian in comparison to the distributed sensors in clothing as well as implanted in our bodies, continually sharing health information with our CPUs.

What stands in the way of this post-smartphone future?

Two things are standing in the way of the brave new world of PANs, one technological and one cultural.

The technological obstacle is battery life. Nobody wants to plug in a dozen or more devices (CPU, glasses, stylus, shoes and socks, underwear, pants, shirt, hat…) every night at bedtime, so battery technology will need to improve and the power-consumption demands of the devices will need to become more efficient.

Electric vehicle manufacturers like Tesla are paving the way for better batteries for cars, and eventually that technology will shrink and trickle down to micro devices.

On the cultural side, if you’re wearing a screen on your face and the processing power is in a silver dollar in your pocket, then how do you take a selfie?

While some people make fun of selfie-obsessed youth (not that young people have any monopoly on either narcissism or the ongoing high-tech curation of it through selfies), as my friend Jill Walker Rettberg compellingly argued in her book Seeing Ourselves Through Technology, selfies are an important emergent genre of self-expression — one that is here to stay.

I predict that many of us will carry a selfie-specialized, lightweight, thin, credit-card-sized screen that will have both a powerful camera and a high-definition display. If you look at the new Google Clips camera announced last week and imagine it even smaller, more powerful and with a display, then you’ll see what I mean.

With increased battery life, some of us will also have selfie drones that will take off and orbit us whenever we simply think about taking a selfie, since we’ll have small sensors affixed to or implanted in our skulls paying attention to how our brain waves change when we’re thinking about particular things.

Focus on content, not containers

The death of the smartphone is hard to imagine today. But when the iPod debuted in 2001, it was hard to imagine that it would be displaced just six years later by the arrival of the iPhone.

The moral of this story is not that we’ll all someday soon be even more wired up and connected than we are today (although we will).

Instead, the important take-away idea is that the smartphone (a noun) is a container for a series of activities (verbs), and that the container is distinct from the content.

Don’t mistake the glass for the wine.*

[Cross-posted on the Center for the Digital Future site and elsewhere.]

* For a sci-fi, near-future dystopian version of some of these interactive technologies, you might enjoy my 2011 novel, Redcrosse.

Car ownership is changing, not dying (yet)

On Monday, Business Insider published an article with the headline, “Uber and Lyft could destroy car ownership in major cities.” It’s a provocative headline, but it misrepresents the carefully worded findings of a recent study by researchers at the University of Michigan, Texas A&M and Columbia.

The study took shrewd advantage of a “natural experiment” that happened when Uber and Lyft, protesting new municipal legislation, stopped operating in Austin, Texas, in May of 2016. A few months later, the study authors surveyed a representative sample of Austin residents who had formerly used Lyft and Uber to see how their transportation habits had changed.

The most interesting findings from the study were that after Uber and Lyft drove out of town, 1) only 3% of respondents switched to public transportation (the technical term for this is “bad news”), and 2) respondents who switched back to using a personal vehicle were 23% more likely to make additional trips than when they’d used Lyft and Uber, increasing congestion for everybody else.

The study authors were careful not to extrapolate beyond the Austin city limits, so the Business Insider headline is overblown in its end-of-days rhetoric. It reminds me of the “Bring Out Your Dead” scene in Monty Python and the Holy Grail, where a plague victim isn’t quite dead, but that situation is inconvenient for the person carrying him to a wagon full of corpses.

It’s not only fans of Lyft and Uber who overstate the impact of these services.

In an HBR interview, Renault-Nissan CEO Carlos Ghosn — when asked about Uber and other such services cutting into car buying — replied, “I’m not worried. By our estimates, the industry sold 85 million cars worldwide in 2016 and is moving towards 87 million this year, both industry records.”

That is a nonsensical response: it’s like being confronted with a giant asteroid hurtling towards the Earth and replying, “but it’s so sunny outside!”

What’s really changing about transportation

In our work at the Center’s Future of Transportation project, we see a two-stage revolution in transportation that is just beginning.

In the first stage, what we call “Get-a-Ride Services” (or GARS) like Uber, Lyft, Car2Go, Zipcar and others make it thinkable for Americans to give up their own cars, but the move from just thinking about it to actually giving up a car is going to take time.

It’s a good news/bad news/more good news scenario.

We asked a representative sample of all Americans if they’d consider not having their own cars: 80% of respondents said no. That’s good news for car manufacturers: only 20% of Americans would even consider letting go of the steering wheel.

The bad news is that when we zoomed in on people who use GARS either frequently or sometimes, that 20% consideration doubled to 40%, so use of GARS creates immense flexibility in how Americans think about transportation.

Then there’s the additional good news: only 16% of Americans use GARS frequently (2%) or sometimes (14%); 17% use them once in a while; 67% never use them. (I discuss this at greater length in this column about liquid behavior.)

Car manufacturers, in other words, don’t have to worry about massive car-buying declines in 2018, but I wouldn’t be optimistic about 2020. We see a slow erosion in car buying, but more importantly we see change within the cars being purchased.

The people who choose to own cars will have more specialized needs (more on this below), and this means that manufacturers will need to customize their vehicles to a greater extent than they do today. That’s grim news for mass-scale production, where, for example, Toyota sells a few million Camrys that are all pretty much the same.

On the other hand, new production technologies — like the adjustable drive train from Faraday Future — will make this customization cheaper for manufacturers. The last stage of production for your next car might happen at the dealership, via a gigantic 3D printer.

The second stage of the transportation revolution is all about self-driving cars, and you can’t find a better overview of why driverless cars will change everything than in this column by Center founder Jeffrey Cole.

Self-driving cars are no longer the stuff of science fiction. This week the U.S. House of Representatives will vote on “a sweeping proposal to speed the deployment of self-driving cars without human controls and bar states from blocking autonomous vehicles, congressional aides said,” according to Reuters.

But even if this legislation magically passed from House to Senate to the president’s desk and received approval in 24 hours, it would still be years before self-driving cars are everywhere. As science fiction author William Gibson famously quipped in 1993, “the future is already here — it’s just not evenly distributed.”

Tomorrow’s car buyer

The national — even global — fascination with self-driving cars is understandable, but it’s also a distraction from important changes in transportation, the first stage of the revolution, that will hit home a lot sooner.

To see this, let’s zoom in on one chart from our forthcoming Future of Transportation report. We asked people who used to have a car but had given it up this question, “Do you miss anything about having access to a car?” Here are the top five answers:

The most interesting answer is the fourth: 31% of respondents miss being able to keep their stuff in a car. The flip side of this, of course, is that 69% of people don’t give a hoot about using a personal car like a high school locker.

This suggests that for the vast majority of people there is no specific, concrete reason to own a car. “Convenience” is vague, and most people will trade convenience for cash much of the time. Independence, the fun of driving, and not having to rent a car for a long trip are similarly vague.

But being able to keep things in a car is concrete, and from that we can draw some tentative conclusions about who will own cars in the future.

Parents of very young children — babies these days need approximately a ton of plastic crap that poor Mom and Dad have to lug around — will find it inconvenient to have to install a car seat every time they drive somewhere. Likewise, parents with more than two children won’t want to play Uber-Roulette and risk having to squeeze five-plus bodies into four seats in the inevitable Prius.

Anybody who works out of a car — gardener, plumber, contractor, surveyor, electrician, or locksmith — will need a dedicated vehicle. Sporty people who need a lot of equipment — skiers, surfers, kayakers, campers — or bikers who want a rack on their car to drive to the nice places to ride will want a dedicated vehicle.

But for the rest? The people who just need to move their bodies from place to place carrying a backpack or briefcase?

Most of those people will probably buy another car when the time comes; the big question is whether they’ll buy another car a few years after that. The answer is only “maybe” because — for the first time in a century — they no longer have to own a car to get around.

[Cross-posted on the Center site and elsewhere.]

Liquid Behavior

Anybody who has tried to lose weight, quit smoking, or train for a marathon knows that creating a new behavior or getting rid of an old one can be very, very challenging.

But it’s not hard to pour a behavior from one container into another, and this has implications for anybody trying to launch a new product or service.

Here’s an example: the Center’s Future of Transportation Project turned up a trio of numbers — 86, 80 and 60 — that tell an exciting story about how Americans’ opinions about car ownership are changing. We asked our respondents — a statistically representative snapshot of the U.S. population — if they would give up driving altogether. Eighty-six percent said they would not.

That seems definitive, but it’s not.

We changed the question and asked if Americans would give up owning a car — that is, they’d retain the ability to drive but wouldn’t own or lease a car. That 86% dropped to 80%; or, to look at it from the other direction, 14% consideration rose to 20%. That’s not a big difference, and there’s still a vast supermajority of people who would not give up their cars.

But then the story changes.

Instead of looking at our entire population, we focused on the people who use what we call “get a ride services” (GARS) like Lyft, Uber, Getaround, Zipcar or Car2Go, either frequently or sometimes. Only two percent of our respondents use these services frequently, while 14% use them sometimes (84% use them rarely or never — which many find surprising given how often Uber is in the press).

Sixteen percent is a relatively small slice of the population, but the impact of GARS on people’s transportation views is profound. The 80% of people who would never give up owning a car drops to 60%. Or, to reverse the picture, the 20% consideration for no longer owning a car among the general population doubles to 40% among the GARS-using population!

With an ousted CEO, a sexist bro culture, and aggressive takeover moves from SoftBank in Japan, Uber has more than its fair share of problems right now, but that’s Uber the company, Uber the noun.

Uber may not last as a company (and I’ll have more to say on this topic in a future column), but uber the verb (as in, “I’ll uber there after my lunch meeting”) isn’t going anywhere.

In other words, it takes surprisingly little to make giving up car ownership thinkable: all you have to do is try GARS sometimes, and you suddenly see the hassle and expense of car ownership in a stark new light.

This is bad news for car manufacturers, and particularly for the people marketing new cars, because if you look at any recent car ad, the thrust of the message is “buy this car.” But the argument the manufacturers need to make first is “buy a car,” because they can no longer take for granted that Americans know they want to own a car.

Even before we put the survey into the field, I was surprised when more than one of my suburban neighbors speculated that there might come a time when they could reduce the number of cars they have and rely on Uber (or a similar service) to fill in the gaps — this in a neighborhood where the nearest bus stop is a mile away.

Focus on Verbs, not Nouns

This isn’t a column about transportation: it’s about how little it takes to move a behavior, to pour it from one container into another like pouring orange juice from a bottle into a glass.

Previously, I’ve written about how smartphones absorbed the functions of cameras, email, notebooks, calendars, and MP3 players to become the everything-Swiss-Army-Knife devices that we can’t be without. We can extend this list to include flashlights, videogame devices, social lives, banks, Zippo lighters, and more. But in this week’s column, let’s flip this phenomenon and look at it from the other direction.

What the GARS data show is that people don’t want to own things per se: they want to achieve their goals — getting around — and they’ll choose a tool — a car — to accomplish that goal, particularly if people commonly associate that tool with the goal in question. But if there’s another tool that’s easier or cheaper and achieves the same goal, then people will migrate their behavior to the new tool as soon as they understand that they have the option.

This is a big deal, because companies often focus on their product features and their competitors rather than on their customers’ needs, and that can make companies blind to new competitors that come from different angles to help customers achieve their goals faster, cheaper, or both.

This notion of liquid behavior connects to classic business thinking. In “Marketing Myopia,” a famous 1960 Harvard Business Review article, Theodore Levitt wrote that companies need to ask themselves, “What business are you really in?”

Using railroads as a key example, Levitt argued that the railroads stopped growing because they presumed that they were in the railroad business rather than the transportation business. In other words, they focused on the noun (trains) rather than the verb (transportation). In Levitt’s view, companies that saw themselves as being in transportation would have extended from trains into trucks and airplanes; trains weren’t going to disappear, but they were no longer the whole business.

More recently, business professor and innovation theorist Clayton Christensen has argued (in the book Competing Against Luck) that companies need to ask their customers, “What job did you hire that product to do?” and iterate product development accordingly. This moves the Levitt question from the corporate level to the individual level. Christensen’s focus on what he calls “Jobs Theory” helpfully refocuses attention on the actions people want to perform rather than the tools that other people have used previously.

Liquid behavior is different from both the Levitt and Christensen questions because it presumes that today’s products and services will go away but that the actions people perform with those products and services will stick around. Only serious photographers now buy single-lens reflex cameras; most people just use their phones to snap pictures. The market for typewriters is vastly smaller than it was forty years ago because most people use word processing programs on their computers to “type” things up. Travelers who want to make their own breakfast now have the option of choosing Airbnb over a traditional hotel.

For a new product or service to succeed, it’s easier to pour an old behavior into a new shape than to create something entirely new. Facebook is a terrific example of this: the service skyrocketed after it allowed its users to share photos. People had already been sharing photos since before the Polaroid, but Facebook made it easy to pour that photo sharing into a new virtual container. Early Facebookers didn’t automatically understand poking or throwing sheep (if you’re old enough, you just got hit by a wave of nostalgia), but photo-sharing was a no-brainer.

The big takeaway here is that incumbent companies are always more vulnerable than they think they are if they delude themselves into thinking that people are loyal to the brands and to the particular products that they use today to achieve their goals. Apple is vulnerable. Google is vulnerable. Facebook is vulnerable. Walmart is vulnerable. Amazon is vulnerable, and so on.

People aren’t loyal. People are busy and often don’t have the mental energy to make a change (this is different than laziness). The chance to save time and money can nudge people to give something new a try, particularly if the new thing doesn’t require a steep learning curve. That’s liquid behavior.

To survive and thrive, companies need to focus on verbs instead of nouns, on behavior instead of brands or products.

[Cross-posted at the Center for the Digital Future website.]

Smart Phones and Drained Brains

As we use our mobile phones to do more and more things, we are paradoxically able to accomplish less— even when the phones are face down and turned off.

My last column explored how smart glasses (“heads-up displays” or “HUDs”) will increase the amount of digital information we look at, with the ironic twist that these same devices will erode our shared experience of reality. But we don’t need to look to a future technology to see how challenging it is to pay attention to what’s around us. We already carry a dislocating technology around in our pockets: our phones.

I’m deliberate when I say “dislocating” rather than “distracting,” because we’re not necessarily distracted: often we’re fiercely focused on our phones, but we’re dislocated because unless we’re taking pictures or videos we’re not engaged with our immediate physical environments. Distraction is a subset of dislocation.

The charts below show the many ways we use our phones, as described in the newest version of the Center’s longitudinal “Surveying the Digital Future” report (it comes out next month):

As the report will observe, texting (93%) has edged out talking (92%) as the most common use of a mobile phone because texting increased six percent year over year while talking stayed flat.

It’s easy to get sucked into data on the individual functions (for example, 67% of people take videos with their phones, a nine percent increase), but doing so misses the big picture: with the exception of talking, Americans have increased their use of every mobile phone function over four years (2012 to 2016).

Phones and the Future of Focus

As with all technologies, increased mobile phone use has both an upside and a downside.

On the positive side, we’re more connected to our loved ones and the information we want than ever before. We get news of the world instantly and store our important information — from shopping lists to medical documents to the label of that pinot grigio we liked so much at that restaurant that one time — in our phones and online where we can always get to it. (I’m the king of productivity apps and can no longer imagine life without Evernote.) With games and apps and email and social media, mobile phones have engineered boredom out of our lives because there is always something fun to do.

But on the negative side, we use our phones more often to do more things, and that time and attention have to come from somewhere — they come from our engagement with the physical reality around us, including the people we are with who increasingly feel ignored unless they too have their noses in their smart phones. If we’re playing Candy Crush waiting in the supermarket checkout line, then we’re not chatting with the cashier or the other people in line who might have something interesting to say. While it sucks to be bored, boredom leads to daydreaming, and most of the great ideas in human history started with a daydream.

Brain Drain

First we’re dislocated, then we’re distracted. In other words, when we finally want to focus on the world around us, it’s getting harder to do so because of our mobile phone use. This is the finding of an important study that came out in the Journal of the Association for Consumer Research in April.

The article — “Brain Drain: The Mere Presence of One’s Own Smartphone Reduces Available Cognitive Capacity” by Adrian F. Ward, Kristen Duke, Ayelet Gneezy and Maarten W. Bos — usefully distinguishes between the things we think about (the orientation of our attention) and how much energy we have to think about those things (the allocation of our attention).

Mobile phones, the authors find, suck attentional energy away from non-phone-based activities, and since we have a limited amount of attention to spend, we’re less capable when we have a task at hand and in front of us.

What’s startling about the study is that mobile phone distraction does not just happen when our phones are on, beeping and flashing and vibrating for our attention. Our mobile phones reduce our ability to function even when the phones are turned off and face down on the table or desk where we’re working. As the authors observe, trying to increase your focus using “intuitive ‘fixes’ such as placing one’s phone face down or turning it off are likely futile.”

Performance gets slightly better if the phone is out of sight in a pocket or bag. Performance substantially increases only when the mobile phone is in another room, entirely out of sight and somewhat out of mind. And the more dependent you are on your mobile phone, the more your focus blurs when your phone is in sight or nearby.

It gets worse: the data show convincingly that our ability to perform erodes if our phones are nearby, but we do not recognize that degradation of performance:

Across conditions, a majority of participants indicated that the location of their phones during the experiment did not affect their performance (“not at all”; 75.9%) and “neither helped nor hurt [their] performance” (85.6%). This contrast between perceived influence and actual performance suggests that participants failed to anticipate or acknowledge the cognitive consequences associated with the mere presence of their phones.

In other words, we think that we can handle the distraction that comes with our phones being around, but we can’t. In this regard, mobile phones are a bit like drunk driving or texting while driving: we think we can do it without consequence, but we often don’t realize we’re impaired until it’s too late. (Psychology Today has a nice summary of the study findings.)

Implications: Budgeting Attention

We have a limited amount of attention: this is why a common metaphor for directing our attention towards someone or something is “to pay attention.” Attention is like a currency that we can budget or hoard, but we tend not to do so. Instead, we are attention spendthrifts, throwing our cognitive capacity at all the tasty tidbits that come out of our screens.

The problem with the “pay attention” metaphor is that it obscures something important: our attention can disappear without our having made a conscious decision to pay. For example, when we have notifications enabled on our laptops, tablets, and mobile phones — especially the latter — those bleeps and flashes and buzzes are attention taxes that we don’t realize we’re paying.

What the “Brain Drain” study shows is that even if we have our phones turned off and face down, we’re still paying an attention tax that acts like hidden fees on credit cards.

Brain Drain is different from Information Overload because with Brain Drain there is no information, just the potential for information. Likewise, Brain Drain is different from FOMO (Fear of Missing Out), because Brain Drain happens even when we aren’t fretting about what might be going on somewhere else.

The paradox of mobile phones is that as we use them to do more and more things, it becomes harder and harder to do any one thing. Always using our everything devices means that we’re often nowhere in particular, and in order to be somewhere we have to make a pre-emptive, conscious decision to put the everything device into an entirely different room.

That’s hard to do.

[Cross-posted on the Center for the Digital Future website.]

The Fall and Rise of the Visual Internet

I’m pleased to announce that my role with the Center for the Digital Future at USC Annenberg has expanded, and I’m now the Chief Strategy Officer. This column is cross-posted from the Center’s website, and is the first of many regular pieces from me and my colleagues. And now, onto the column… 

Bennett and I have been friends since we were eight. Over a recent late-night dessert we compared notes about how thinly spread we each felt across work, family and life. Bennett then shared an insight from a counselor he sees: “Y’know how in Kung-Fu movies the hero stands in the center and all the villains gather into a circle around him and take turns attacking him one by one? Life isn’t like that.”

Neither is technology.

Technologies don’t take turns arriving in our lives. Instead, they’re locked in a Darwinian struggle to clutch and hold onto a niche in our lives. Sometimes it’s a head-to-head struggle, like VHS versus Betamax, where the differences are slight and one technology wins because of marketing and luck. Sometimes different trends slam into each other and that collision creates a new thing — like the way that mobile phones ate digital cameras, email, notebooks, calendars, music collections, powerful microprocessors, decent battery life and the web to become smartphones.

A new collision is gaining velocity with the emergence of digital assistants and heads-up displays. Both new technologies are changing how users interact with information, particularly visual information. As these technologies give users new ways to behave, those behavior changes will pressurize the business models and financial health of digital media companies, particularly ad-supported companies.

Voice-Interfaces Reduce Visual Interaction

Even though newer Echo devices have screens and touch interfaces, the most compelling use case for Amazon’s Alexa, Apple’s Siri in the HomePod, and the Google Assistant in Google Home is eyes-free and hands-free.

For example, I often use my Echo device when I’m doing the dishes to catch up on the day’s events by asking, “Alexa, what’s in the news?” Or, if I’m about to wade deep into thought at my desk and don’t want to miss a conference call starting an hour later I’ll ask Alexa to “set a timer for 55 minutes.”

I’m a failure at voice-driven commerce because I have yet to ask Alexa to buy anything from Amazon, but I have used IFTTT (the “If This, Then That” service that connects different devices and services) to connect Alexa to my to-do list so that I can add something just by speaking, which spares me from dropping everything to grab my phone or (gasp!) a pen and paper.

Alexa’s answers are pleasantly clutter-free. If I use my desktop computer to search Amazon for the latest John Grisham novel, then along with a prominent link to Camino Island, Amazon serves up a results page with 24 distracting other things that I can buy, as well as hundreds of other links. With Alexa, I just get Camino Island. (With commodity products, unless you specify a brand, Amazon will send you its generic house brand: CPG advertisers beware!)

Right now, most queries to smartphone-based digital assistants result in a list of results that I have to look at, switching my attention from ears to eyes, but as these rudimentary artificial intelligences get better my need to look at a screen will decline. Today, if I say, “Hey Siri, where’s a Peet’s coffee near me?” the AI will tell me the address and ask me if I want to call or get directions. If I choose “directions,” then I have to look at my phone. In a short amount of time, Siri will seamlessly transition to Apple Maps and speak turn-by-turn directions, so I won’t have to look away from the road.

The challenge the rise of voice interfaces poses for ad-supported digital companies is that those companies make their money from propinquity — from the background clutter that is near the thing I’m looking at or searching for but that isn’t the thing I’m looking at or searching for.

Google, Facebook, the New York Times, AOL (excuse me, “Oath”), Reddit, Tumblr, Bing, LinkedIn, and others make much of their money from banners, pop-up ads, search results and other things we see but often don’t consciously notice: that is, online display advertising.

Amazon’s Alexa can already read news stories aloud in a smooth, easy-to-follow voice. It won’t be long until all the digital assistants can do so, and can navigate from article to article, site to site without users having to look at anything.

We can listen to only one thing at a time, so there aren’t background ads for Siri, Alexa and their ilk. Moreover, despite decades of conditioning to accept interruptive ads in radio, it’ll be game over the moment Alexa or Siri or Google Assistant says, “I’ll answer your question, but first please listen to this message from our friends at GlaxoSmithKline.”

The most powerful ad blocker turns out to be a switch from eyes to ears as the primary sense for media interaction. As voice-interface digital assistants grow in popularity and capability, the volume of visual inventory for these businesses will erode.

This erosion follows the decline in visual inventory that already happened as users moved most of their computing time to the smaller screens of mobile devices with less visual geography and therefore less room for ads.

In a recent Recode Decode interview, marketing professor and L2 founder Scott Galloway observed, “advertising has become a tax that the poor and the technologically illiterate pay.”

Since wealthier people will have voice-activated digital assistants first, that means that the people more exposed to visual advertising will have less disposable income and will therefore be less desirable targets for many advertisers. This creates more pressure on the display-ad-based media economy.

On the other hand, remember the Kung Fu movie quip? There’s another technology making changes in the visual internet at the same time.

Smart Glasses Increase Visual Interaction

Smart glasses are, simply, computer screens that you wear over your eyes. In contrast with voice-interfaces that are already popular in phones and with speakers, smart glasses haven’t become a big hit because they’re expensive, battery life is limited, and many people get nervous around other people wearing cameras on their faces all the time. (Early Google Glass enthusiasts were sometimes dubbed “glassholes.”)

Some pundits think that just because Google Glass didn’t sweep the nation it means that all smart glasses are doomed to failure. But just as Apple’s failed Newton (1993) presaged the iPhone 14 years later (2007), Google Glass is merely an early prototype for a future technology hit.

Smart glasses come in a spectrum that gets more immersive: augmented reality puts relevant information in your peripheral vision (Google Glass), mixed reality overlays information onto your location that you can manipulate (Microsoft’s HoloLens, with Pokemon Go as a phone-based version), and virtual reality absorbs you into a 360 degree environment that has little relationship to wherever your body happens to be (Facebook’s Oculus Rift, HTC Vive). The overarching category is “Heads-Up Display” or HUD.

What’s important about HUDs is that they increase the amount of digital information in the user’s visual field: not just the visual inventory for ads (like in this clip from the film “Minority Report”), but for everything.

Wherever you’re reading this column — on a computer, tablet, phone or paper printout — please stop for a moment and pay attention to your peripheral vision. I’m sitting at my desk as I write this. To my left is a window leading to the sunny outdoors. On my desk to the right are a scanner and a coffee cup. Papers lie all over the desk below the monitor, and there are post-it reminders and pictures on the wall behind the monitor. It’s a typical work environment.

If I were wearing a HUD, then all of that peripheral territory would be fair game for digital information pasted over the real world. That might be a good thing: I could have a “focus” setting on my HUD that grays out everything in my visual field that isn’t part of the window where I’m typing or the scattered paper notes about what I’m writing. If I needed to search for a piece of information on Google I might call a virtual monitor into existence next to my actual monitor and run the search without having to hide the text I’m writing. This is the good news version.

In the bad news version, ads, helpful suggestions, notifications, reminders and much more colonize the majority of my visual field: I think about those moments when my smart phone seems to explode with notifications, and then I imagine expanding that chaos to everything I can see. In some instances this might be a maddening cacophony, but others might be more subtle, exposing me to messages in the background at a high but not-irritating frequency in order to make the product more salient. (“I’m thirsty: I’ll have a Coke. Wait, I don’t drink soft drinks… how’d that happen?”) This isn’t as creepy as it sounds, like the old Vance Packard “subliminal advertising” bugaboo; it’s just advertising. Salience results from repetition.

Regardless of what fills the digital visual field, an explosion of visual inventory will be a smorgasbord of yummies for ad-supported media companies.

But there’s a twist.

Filters and the Decline of Shared Reality

Just sitting at my desk as I work is an overly-simplistic use case for wearing a HUD: the real differences in all their complexity come into focus once I leave my office to wander the world.

With Heads-Up Display, every surface becomes a possible screen for interactive information. That’s the output. Since the primary input channel will still be my voice, there’s a disparity between the thin amount of input I give and the explosion of output I receive. This is the digital assistant and HUD collision I mentioned earlier.

Walking in a supermarket, the labels on different products might be different for me than for the person pushing his cart down the aisle a few yards away. The supermarket might generate individualized coupons in real time that would float over the products in question and beckon. If my HUD integrated with my digital assistant, then I might be able to say, “Hey Siri, what can I make for dinner?” and have Siri show me what’s in the fridge and the pantry so that I can buy whatever else I need.

Smart glasses won’t just stick information on top of the reality on the other side of the lenses, they will also filter that reality in different ways.

We can see how this will work by looking at the technologies we already use. For example, businesses will compete to put hyper-customized articles, videos, and ads in front of you, similar to how ads pop up on your Facebook page today. But these articles and ads will be everywhere you look, rather than contained on your laptop or phone. This is algorithmic filtering based on your past behavior.

Likewise, your digital assistant will insert helpful information into your visual field (such as the name of the person you’re talking with that you can’t remember) that you either ask for or that it anticipates you might find useful. The Google app on many smart phones already does versions of this, like reminding you to leave for the airport so that you aren’t late for your flight.

Finally, you’ll be able to add your own filters by hand, changing people’s appearances or names in real time. If you’ve given one of your smart phone callers an individual ring tone, changed the name of a contact to something else (“What a Babe” or “Don’t Answer Him”), or watched a teenager put a dog nose or kitty ears on top of a photo in Snapchat, then you’ve already seen primitive versions of this in action.

An unintended consequence of this visual explosion is the decline of shared reality. We already spend much of our time avoiding the world around us in favor of the tastier, easier world inside our smart phones. But even if the latest meme coming out of Instagram is the funniest thing we’ve ever seen, the majority of what surrounds us is still analog, still the flesh and blood world untouched by digital information.

That changes with HUDs.

In the near future where HUDs are common, you and I might stand side by side on the same street corner looking at the same hodgepodge of people, cars, buildings and signs — but seeing different things because we have idiosyncratic, real-time filters. Each of us will be standing on the same corner but living inside what Eli Pariser calls “filter bubbles” that have ballooned out to surround our entire worlds.

Common knowledge at this point becomes rare because a big part of common knowledge is its social component. In the words of Michael Suk-Young Chwe from his book Rational Ritual, a society’s integration is the result of coordinated activities built on a set of shared information and messages.

For a society to function, Chwe writes, “Knowledge of the message is not enough; what is also required is knowledge of others’ knowledge, knowledge of others’ knowledge of others’ knowledge, and so on — that is, ‘common knowledge.’”

It has been challenging enough in our shared analog reality to achieve things like consensus in politics or word-of-mouth awareness in business. As we each move into new, idiosyncratically personalized environments where we don’t know what other people know, we’ll need to work harder to hear other voices than our own, to connect with each other as friends, family members, customers and citizens.

That may be a tall order.

CES 2017 for Brands: a Skeptical Review

Most years at CES you can spot me leading tours, and most years after the show is over I sit down to ponder what I made of it all, what the pundits got right and what they missed.

While in past years I’ve given presentations on these things, this year I wrote it up for my friends at The Ascendant Network; it was private to their online group until today.

You can find the PDF here.

My 2016 in Books

This is the third year that I’ve kept a running list of every book that I’ve completed for the first time and then shared that list here as the first thing I write on either the last day of the old year or the first of the new.

You can see the 2015 list here and the 2014 list here, and as always I want to thank my friend David Daniel for the inspiration to do this.

A lot of folks in my line of work spend the waning moments of one year gazing out with predictions about the months ahead, and I’ll be doing plenty of that soon — most publicly at CES where I’ll be leading tours next week. However, I’m not only a futurist, I’m also a historian — a “futuristorian” — and so I look back as well as forward.

Looking back on what I read and when I read it helps me to track each year’s intellectual journey similar to how looking back at old emails or social media posts or journal entries can help me to pinpoint what I was thinking, when and often where.  This year, one change from previous years is that I read more physical books than e-books.

So much of the recent news and social media torrent has been about how 2016 was a crappy year (John Oliver did a great job starting this meme).  I prefer to think of it as a profound challenge, and amidst the challenges I read many wonderful books that I’m pleased to share.  One new feature: at the end I’ll list a few of the books I have on deck for the first part of 2017.

For folks who just want the list without the thoughts after reading, here’s the short version:

  1. Polanyi, Michael. The Tacit Dimension.  
  2. Bach, Rachel. Honor’s Knight. 
  3. Edgerton, David. The Shock of the Old: Technology and Global History Since 1900.
  4. Bujold, Lois McMaster. Gentleman Jole and the Red Queen (Vorkosigan series).  
  5. Grant, Adam. Originals: How Non-Conformists Move the World. 
  6. Bear, Elizabeth. Karen Memory.  
  7. Dunstall, S.K. Alliance: A Linesman Novel.
  8. Nisbett, Richard E. Mindware: Tools for Smart Thinking.
  9. Rushkoff, Douglas. Throwing Rocks at the Google Bus: How Growth Became the Enemy of Prosperity.
  10. Connelly, Michael. Echo Park: A Harry Bosch Novel. 
  11. Connelly, Michael. The Crossing: a Bosch Novel.
  12. Sacks, Oliver. Gratitude.  
  13. Thaler, Richard H. & Cass R. Sunstein. Nudge: Improving Decisions About Health, Wealth, and Happiness. (Revised & Expanded Edition.)
  14. Case, Steve. The Third Wave: an Entrepreneur’s Vision of the Future.
  15. Wallace, David Foster. This Is Water: Some Thoughts, Delivered on a Significant Occasion, about Living a Compassionate Life.
  16. Bujold, Lois McMaster. Penric & the Shaman.
  17. Sennett, Richard. The Craftsman. 
  18. Riordan, Rick. Blood of Olympus.
  19. Levitin, Daniel J. The Organized Mind. 
  20. Lee, Sharon & Steve Miller. Alliance of Equals (Liaden Universe.) 
  21. Riordan, Rick. The Trials of Apollo: Book One, The Hidden Oracle.
  22. Vance, J.D. Hillbilly Elegy: A Memoir of a Family and Culture in Crisis.  
  23. Shafer, David. Whiskey, Tango, Foxtrot: A Novel.
  24. Chiang, Ted. Stories of Your Life and Others.
  25. Bujold, Lois McMaster. Penric’s Mission. 
  26. Connelly, Michael. The Wrong Side of Goodbye: a Bosch Novel.
  27. Schwab, Klaus. The Fourth Industrial Revolution.
  28. Krasny, Michael. Let There be Laughter: A Treasury of Great Jewish Humor and What it all Means. More Than 100 of the Funniest Jewish Jokes of all Time.
  29. Levitin, Daniel J. A Field Guide to Lies: Critical Thinking in the Information Age.
  30. Lewis, Michael. The Undoing Project: a Friendship that Changed Our Minds.
  31. Sawyer, Robert J. Hominids: Volume One of The Neanderthal Parallax.
  32. Gibbs, Stuart. Spy Ski School. 
  33. Perzanowski, Aaron and Jason Schultz. The End of Ownership: Personal Property in the Digital Economy.

I read fewer books in 2016 than the whopping 56 of 2015, and less fiction than usual, which shows how busy my head has been with work and other matters.

Here’s the longer version with thoughts, occasional snarky remarks and analysis:

1.  Polanyi, Michael.  The Tacit Dimension.  Finished January 1, 2016.

Polanyi was a mid-twentieth-century polymath, and this brief 1966 book came out of a series of lectures at Yale in 1962. The first two lectures are the most useful for folks in 2017.

In “Tacit Knowing,” Polanyi talks about how we can know more than we can say, and discusses a number of psychological experiments where subjects use information that they understand tacitly but cannot explain explicitly when asked to do so.  

Tacit Knowledge is an important idea for our digital era where more and more things can be tagged, identified, tracked and known. We have ever more that we can say but not necessarily an equally speedy increase in what we can know… and vice versa.

In the second Terry lecture, “Emergence,” Polanyi extends the concept of the leap to show how concepts emerge out of hierarchies, where the emerged form cannot be anticipated from the lower form, as a set of grammatical rules cannot anticipate poetry. In this thinking, Polanyi anticipates by decades recent discussions about moving up and down technological stacks and how innovation builds on platforms.

2. Bach, Rachel.  Honor’s Knight. Finished January 3, 2016.

Sequel to Fortune’s Pawn, which I read in 2014.  This would be entirely conventional space opera if the protagonist were male, but because Deviana “Devi” Morris is a woman, it’s more interesting.  Like many middle volumes of trilogies (e.g. “The Empire Strikes Back”), this one is pure action and stops right in the middle, tantalizing the reader to go read the 3rd, which I might do if it’s in my local library, as this one was.

3. Edgerton, David.  The Shock of the Old: Technology and Global History Since 1900.  Finished January 25, 2016.  (While on a plane to a conference in Cincinnati.)

This fascinating book challenges innovation-centric thinking by exploring the impact of technologies in use rather than when new devices and services first come over the horizon.  “Our technological museums, with their emphasis on first design, tend to miss out on the extraordinary life stories of the objects they have” (38).

Edgerton focuses on maintenance as well as invention, highlighting how our attitudes toward technology today differ from prior eras: “In the 1920s a Ford Model T buyer ‘never regarded his purchase as a complete finished product. When you bought a Ford you figured you had a start — a vibrant, spirited framework to which could be screwed an almost limitless assortment of decorative and functional hardware’” (97). This is both similar to and different from today’s smart phones with limitless apps and customization opportunities, but few people get under the hoods of their phones and computers. In this, Edgerton’s argument reminds me of Jonathan Zittrain’s arguments about generativity in The Future of the Internet and How to Stop It.

4. Bujold, Lois McMaster. Gentleman Jole and the Red Queen (Vorkosigan series). Finished February 3, 2016.  

Anything by Bujold is a cause for celebration, and a new entry in the Vorkosigan series can provoke a Snoopy-like happy dance. Bujold is my favorite living science fiction writer, and this series is magnificent and sublime. No spoilers: ping me if you want a hint about where to jump onto this terrific ride.

5.  Grant, Adam. Originals: How Non-Conformists Move the World.  Finished 2/19/16.

I loved Grant’s first book, Give and Take, and so I snapped this one up the moment I saw it.  As in the first book, Grant is a stellar writer who could have a second life writing fiction. In Originals, his insights about how things as seemingly trivial as birth order determine choices, risk aversion and achievements later in life can cut to the bone in a spooky way. This book combines canny analysis with practical, applicable ideas: I wish it had a companion volume or app that would make some of the thinking more easily deployable… something I also think about books #8 and #29.

6. Bear, Elizabeth. Karen Memory.  Finished 3/5/16.

Pacific Northwest Steampunk.  This was genuinely interesting sci-fi that left me wanting to read more by Bear, and I hope for more from the protagonist, a prostitute with an appetite for adventure in an alternate-universe Seattle.

7. Dunstall, S.K. Alliance: A Linesman Novel. Finished March 25, 2016.

This is the second book in the Linesman series: not as good as the first because it was quieter after the sprezzatura world-building of #1.  I remembered the first well but not perfectly, which inhibited my pleasure a bit while reading the second. I look forward to reading #3, which came out when I wasn’t looking in November.

8. Nisbett, Richard E. Mindware: Tools for Smart Thinking. Finished March 26, 2016.

Brilliant and useful and worth a second read as it ties into a bunch of other reading and thinking over the past couple of years.  Many psychologists perform experiments about how flawed we are as thinkers, how irrelevant facets of a story can influence our decisions, and how bad we are at making the distinction between something that is plausible (a good story) and something that is probable (likely).  Nisbett’s “mindware” consists of rules and tools to help us think more effectively, or at least be aware of biases as we muddle through our lives.

9. Rushkoff, Douglas. Throwing Rocks at the Google Bus: How Growth Became the Enemy of Prosperity. Finished April 8, 2016.

Fascinating and important: Doug Rushkoff is one of those courageous thinkers who tackles foundational presumptions to shake our thinking into new shapes.  The foundational idea that he tackles at the core of Throwing Rocks is that corporations need to grow in order to survive. This hasn’t always been the case (nor have corporations), and the practical consequence of adopting grow-or-die as the operating system for companies is that individual liberty and prosperity become subservient to the health of corporations.

Speaking of companies like Uber, AirBNB, Spotify and others, Rushkoff observes: “As private companies induce us to become sharers, we contribute our own cars, creativity, and couches to a sharing economy that is more extractive than it is circulatory. Our investments of time, place, and materials are exploited by those who have invested money and actually own the platforms” (218). You won’t think about Uber and the like in the same way after reading this book.

10. Connelly, Michael.  Echo Park: A Harry Bosch Novel. Finished April 16, 2016.

Devoured in less than 24 hours, on a quick trip to L.A. to give a dinner keynote. This is the 12th Harry Bosch novel, published in 2006.  I haven’t been a completionist with this series, and the last one I read was The Burning Room at the end of 2014.  That leaves me with The Crossing as the most-recently published that I haven’t read…

11. Connelly, Michael. The Crossing: a Bosch Novel. Finished April 21, 2016.

The Harry Bosch novel published in November of 2015, which I got out of the library Tuesday afternoon and inhaled. This one also features Mickey Haller from the Lincoln Lawyer series, although we see the story from Bosch’s POV.  Connelly is amazing with both character and plot, hence the momentum. He’s also daring with having his main character change over time (like Lois McMaster Bujold with Miles Vorkosigan), unlike Doyle’s Sherlock Holmes, who is a bag of tics and a narrative function. In this book, Bosch is facing his life after a second retirement from the LAPD and what to do with himself (also as an about-to-be empty nester).

Reading these two Bosch books back to back also inspired me to watch the terrific Amazon Prime TV adaptation, Bosch.

12. Sacks, Oliver. Gratitude.  Finished 17 May, 2016.  

A little book with big ideas and emotions: it collects Sacks’ last four essays, originally published in The New York Times, written between the time he got his terminal cancer diagnosis and his death.  He was such a beautiful writer, and I’m pleased that my wife Kathi gave me this inspiring little collection for the 2015 holidays.

13. Thaler, Richard H. & Cass R. Sunstein. Nudge: Improving Decisions About Health, Wealth, and Happiness. (Revised and Expanded Edition.) Finished May 22, 2016.

I’ve been meaning to read this book for years, and that ambition was heightened when I read Thaler’s Misbehaving memoir last year.  A brilliant book about the social context of and need for deep thinking about default assumptions for all manner of situations, as well as how to change them via gentle “nudges” rather than strong mandates. The seeming paradox of “libertarian paternalism” takes an entire book to unwind, and although it is sometimes hard going, the read repays the effort.  The authors’ notion of “choice architecture” is profoundly useful, and I’ll be thinking about it a lot… in particular in the context of the connected experiences at the heart of my current work.

14. Case, Steve. The Third Wave: an Entrepreneur’s Vision of the Future. Finished 31 May, 2016.

This book was frustrating to read but has stuck with me in the months since I finished it.

Here’s a transcription of a notecard I scribbled on 5/26/16: “Generally book is platitudinous —> a good keyword to describe most biz books. Not enough concrete examples when he talks about the future, although plenty of the war stories.”

The word “platitudinous” is one happy result of a so-so book. Case’s Third Wave is a powerpoint with elephantiasis — the reverse of Al Gore’s “An Inconvenient Truth,” which was a book masquerading as an endless powerpoint presentation.

The other happy result of reading Case is my idea that most business books are Thneeds, from Dr. Seuss’s The Lorax… an idea that probably needs no teasing out. In Case’s case (ha), the “thneed” is the concept of “the third wave” itself, borrowed from Toffler and transmogrified into IoT.  

Two of Case’s most-useful insights are 1) that the need for infrastructure makes the third wave more similar to the internet’s first wave than to the social/mobile second wave, and 2) that third wave companies will need to partner more effectively with governments in order to succeed… unlike, say, Uber which has grown by ignoring government regulations until they get in trouble.

15. Wallace, David Foster. This Is Water: Some Thoughts, Delivered on a Significant Occasion, about Living a Compassionate Life. Finished 31 May, 2016.

An exquisite little book, which is the slightly revised text of DFW’s famous 2005 Kenyon College commencement address. I think I read the text online once before, and/or heard the audio, but I’ve been thinking about this talk lately and ordered the hard copy on Amazon in order to get it into my head more actively.  The highest compliment that I can give this book is that I’ll have to read it again soon in order to make sure that I’m paying attention to what it asks me to attend to.  

The reason I’ve been thinking about This is Water is that the threads of some other books I’ve been consuming have been more complicated versions of this, like Douglas Rushkoff’s operating systems in Throwing Rocks.

16.  Bujold, Lois McMaster. Penric & the Shaman. Finished June 28, 2016.

A brilliant novella in Bujold’s “Five Gods” universe that I bought, downloaded and inhaled in one day despite my hopes of stretching out the experience. This is fantasy rather than science fiction and the sequel to last year’s Penric’s Demon.

17.  Sennett, Richard. The Craftsman. Finished July 2, 2016.

I picked this up after my friend John Willshire talked eloquently about Sennett’s work in a presentation I admired. I couldn’t get through it the first time I tried, but the second time I got hooked and wound up filling 18 notecards with enthusiastic observations and covering the book with marginalia and underlines.

Sennett is brilliant and insightful, and his work resonates with my thinking about connected experiences, and particularly how using tools expands both our capabilities and our individual senses, leading us into new thinking: “We want to understand how tools can more generically engage us in large intuitive leaps into the unknown” (209).

18. Riordan, Rick. Blood of Olympus. Finished July 26, 2016.

This is the final book in the second Percy Jackson series, which I read because my son wanted me to do so. Although Riordan is always good, the second series was not nearly as good as the first: there are too many characters with a dizzying, constantly-switching POV.

19. Levitin, Daniel J. The Organized Mind: Thinking Straight in the Age of Information Overload. Finished July 26, 2016.

This is a rare exception to my “first reads only” rule, but over the summer I realized that I hadn’t retained as much of this brilliant book the first time I read it last year… perhaps because I read it on the iPad. So I bought a paperback copy and dove back in, pencil in hand and a pile of notecards on the table beside me. The book rewarded a second read just as much as a first, which is rare.

The next phase of the digital revolution is going to put immense pressure on our notions of environments, place, transcending the limits of our bodies and more: Levitin’s book has informed my thinking about these matters. Moreover, his insights about how we all have a limited amount of decision-making energy each day have changed how I approach allowing many forms of stimulation into my life, particularly in the mornings.

20. Lee, Sharon & Steve Miller. Alliance of Equals. (Liaden Universe.) Finished August 14, 2016.

The latest in the Liaden series. I had to reread the prior (Dragon in Exile), as I didn’t remember it well.  As with the last Percy Jackson book, this suffers from an oversupply of plots, POVs and characters, but at least in this case the three main plots all share a theme of transition, becoming and arrival.

21. Riordan, Rick. The Trials of Apollo: Book One, The Hidden Oracle. Finished August 22, 2016.

This is the first book in the next series in Riordan’s Greek myth universe, and the difference between Riordan writing multiple points of view and confining himself to one is profound. Here, Apollo annoys Zeus to such an extent that the Thunder God banishes Apollo to live as a mortal on Earth. The fun of the book is that Apollo is an arrogant prick, which isn’t a huge surprise after being worshipped as a deity for millennia. The narrative engine is virtually identical to that of the 1960s Marvel comic adaptation of Thor, only played for laughs.

22.  Vance, J.D. Hillbilly Elegy: A Memoir of a Family and Culture in Crisis.  Finished September 6, 2016.

This book has been a sensation, particularly around the vexing question of why middle-class whites voted Republican in such overwhelming numbers in the last election, despite what seemed to Democrats to be an act entirely against the self-interest of those same voters.

I want to point out just two things: first, Vance never mentions President-Elect Trump by name or by direct inference, so the application to the election is more interpretive than some reviewers suggest. Second, the political application of this story is a distraction from a terrific read and a modern memoir of class that reminds me of classics like Manchild in the Promised Land by Claude Brown and Hunger of Memory by Richard Rodriguez.

23.  Shafer, David. Whiskey, Tango, Foxtrot: A Novel. Finished October 30, 2016.

My friend Ari Popper recommended this agreeable novel, which is not to be confused with the similarly-titled book that was the source for the Tina Fey movie.

Shafer’s book has a weird “this is the overture: where’s the symphony?” quality. It almost feels like a prequel written long after a novel — the start of a beloved multi-volume action series — that exists to explain to loyal readers where the protagonists originally came from… sort of like if J.K. Rowling wrote the James and Lily Potter story, where the whole point of those two characters was to get killed by Voldemort in a way that empowered Harry.

There was an interesting container-versus-content aspect to this for me: the story kept getting more compelling, but I noticed (because I have a paper copy rather than reading on the iPad) that even though the plot was ramping up I was running out of pages. “How can he resolve all this in the short amount of time he has left?” I think that if I’d been whizzing along reading it on the iPad — and didn’t look at the “you have read X%” banner on the bottom — then I’d have felt quite cheated when I slammed into that ending.  I remember this sort of thing happening to me with an Elmore Leonard novel, one of the first things I ever read on a Kindle, where the story ended abruptly even though I had a bunch of % left to read, which was because there was a free preview of another novel at the end. At least with the Shafer book I could see it coming.

24. Chiang, Ted. Stories of Your Life and Others. Finished November 5, 2016.

There aren’t that many people who make me think, “Gosh, why don’t they write MORE?” Chiang’s astonishing collection of sci-fi short stories did just that, and I’m grateful to my friend Mike Parker for recommending the book.

The title story has now been turned into the film Arrival, which I want to see based on the source material. The most personally compelling story for me was “Liking What You See: A Documentary,” about lookism.

25. Bujold, Lois McMaster. Penric’s Mission. Finished November 12, 2016.

Third novella in Bujold’s newest fantasy series (see #16, above), delightful.

26. Connelly, Michael. The Wrong Side of Goodbye: a Bosch Novel. Finished November 13, 2016.

More Connelly: he’s just so good. I started the iBooks sample on the elliptical machine at the gym, went home, bought it, and then inhaled most of it that same night, finishing it the following morning. No mystery lover should miss these.

27.  Schwab, Klaus. The Fourth Industrial Revolution. Finished November 16, 2016.

At a conference this fall my friend Tim Murphy recommended this book. It’s an interesting and mercifully brief “behold the future!” volume written by the founder and head of the World Economic Forum: it provoked a long set of index card notes.

The book can be frustrating in its dearth of evocative examples, which makes it hard to see imaginatively what Schwab is talking about, but reading it catalyzed a great deal of my own thinking, and perhaps led indirectly to this recent piece.

28. Krasny, Michael. Let There Be Laughter: A Treasury of Great Jewish Humor and What It All Means: More Than 100 of the Funniest Jewish Jokes of All Time. Finished November 26, 2016.

This is a breezy read by the host of KQED’s “Forum.” In an ungenerous mood I told my parents — who lent me the book — that the commentary is so shallow that it aspires to be fatuous, which was unfair but only a little. Krasny spends so much time on self-aggrandizing anecdotes about the celebrities he knows and has interviewed that it can be annoying.  

But he does have some good jokes.

I’ve long bemoaned that 1980s and 1990s identity politics and political correctness essentially killed joke-telling as a social lubricant. For my father and grandfather, jokes were professional currency. I can only think of three friends and colleagues with whom I trade jokes, and I’m sufficiently antsy about this topic that I won’t name them. You know who you are, guys.

29.  Levitin, Daniel J. A Field Guide to Lies: Critical Thinking in the Information Age. Finished November 29, 2016.  

A fantastic read, one that inspired many, many notecards. More practical than his previous, brilliant book The Organized Mind (#19) and akin to Nisbett’s Mindware (#8), Levitin provides the reader with tools to evaluate information critically, not to be taken in by poor arguments, and to understand that “we didn’t evolve brains with a sufficient understanding of what randomness looks like” (163). This is a friendlier, more useful version of the work of Nassim Nicholas Taleb of Black Swan fame.

Reading books like Levitin’s can make me despair of ever thinking clearly myself, but I can at least take comfort in always making progress.

30. Lewis, Michael. The Undoing Project: a Friendship that Changed Our Minds. Finished December 10, 2016.

Magnificent. As with the Vance book (#22), this is a huge bestseller so I don’t know how much I can add. I’ve read the works of its main protagonists — Daniel Kahneman and Amos Tversky — and other behavioral economists with fascination for years. What Lewis does is to make their ideas come alive in a powerful platonic love story between two geniuses.

You can watch a long, insightful conversation between Lewis and Adam Grant (a.k.a. #5) here.

31. Sawyer, Robert J. Hominids: Volume One of The Neanderthal Parallax. Finished December 15, 2016.

A delightful, thoughtful and well-structured sci-fi novel about a parallel universe where Neanderthals survived and Homo sapiens died off. Then a Neanderthal physicist accidentally drops through a portal into our universe in modern-day Canada. The book was published in 2002 and therefore written during the internet’s early phases. I wonder how the story would have come out differently if written a decade later, when the internet and the smart phone were established. One key difference between our world and the Neanderthal counterpart is that every Neanderthal has a “Companion” grafted into his or her inner forearm that is like an advanced iPhone with a smarter version of Siri. The gap between the Neanderthal world and ours, in other words, has shrunk in the years since the book first came out. I just started reading Humans, the second volume: so far, so good.

32. Gibbs, Stuart. Spy Ski School. Finished December 20, 2016.

Fourth in the hilarious series that my son reads and urges me to read immediately after he does. Gibbs channels the minds and concerns of middle-schoolers with a James Bondian overlay that is delightful and funny. With its speedy plot and engaging characters, I can’t believe this series hasn’t been optioned for television. (Note: there was a 2008 movie called “Spy School” that is entirely unrelated.)

I also admire how closely Gibbs engages with his young readers on his website.

And finally for 2016…

33. Perzanowski, Aaron and Jason Schultz. The End of Ownership: Personal Property in the Digital Economy. Finished December 29, 2016.

In this wide-ranging yet powerfully-focused book, two law professors explore the issues surrounding our cultural move from owning copies (of movies, CDs, books) to EaaS (Everything as a Service) alternatives like Netflix, Spotify and licensed ebooks (versus physical copies that are our property).

We are trading a lot for the convenience and wider selection of digital goods over physical ones, and after reading this (surprisingly brief) book I’m more aware of the tradeoffs than I was before. The ramifications are widespread, from the death of secondary markets (e.g., you can’t sell the Netflix videos you’re done with on eBay) to attempts to block generic alternatives to manufacturer brands (printer ink, Keurig cups) and beyond… with added implications coming in a world of self-driving cars and 3D printers everywhere. This book makes a remarkable, if inadvertent, bookend with Edgerton’s Shock of the Old, which I read last January (#3 on this list).

Thanks for reading! I’d love your comments, critiques and suggestions for further reading.

Here’s a sneak preview of books already on my desk to read or complete in 2017, in alphabetical order by author rather than ordered by likelihood of reading:

  • Samuel Arbesman, Overcomplicated: Technology at the Limits of Comprehension
  • Dan Ariely, Payoff: The Hidden Logic That Shapes Our Motivations
  • Harry Collins, Tacit & Explicit Knowledge
  • Jon Fine, Your Band Sucks: What I Saw at Indie Rock’s Failed Revolution (But Can No Longer Hear)
  • Thomas L. Friedman, Thank You for Being Late: An Optimist’s Guide to Thriving in the Age of Accelerations
  • Jonathan Haidt, The Righteous Mind: Why Good People Are Divided by Politics and Religion
  • Tim Harford, Messy: The Power of Disorder to Transform our Lives
  • Joi Ito and Jeff Howe, Whiplash: How to Survive our Faster Future
  • Jane Jacobs, The Death and Life of Great American Cities
  • Steven Johnson, Wonderland: How Play Made the Modern World
  • Cal Newport, Deep Work: Rules for Focused Success in a Distracted World
  • Robert Sawyer, Humans: Volume Two of the Neanderthal Parallax
  • Pat Shipman, The Invaders: How Humans and Their Dogs Drove Neanderthals to Extinction
  • Cecily Sommers, Think Like a Futurist: Know What Changes, What Doesn’t, and What’s Next
  • Amy Webb, The Signals are Talking: Why Today’s Fringe is Tomorrow’s Mainstream
  • Tim Wu, The Attention Merchants: The Epic Scramble to Get Inside Our Heads

The attentive will see some clear themes extended from both this year’s list and those of previous years. It will be interesting to see how many of these make it onto the 2017 list one year from now.

Happy New Year!

Amazon’s Robot Bodega

Don’t miss this important piece from The Verge, which gives great context around this promotional video about Amazon Go, the robot bodega:

As The Verge captures, there are no cashiers and no checkout lines: you grab what you want and just go.

At the moment, there are human stockers at the Amazon Go beta in Seattle, but that seems temporary as Amazon historically has used robots wherever it can to reduce costs and add efficiency.

Neither the video nor The Verge engages with whether Amazon Go follows Amazon.com’s normal practice of being intensely price competitive. In other words, are the prices at Amazon Go the same as, cheaper than, or more expensive than those at the human-powered bodega on the corner? Answer: probably cheaper.

This is important because, if I’m right about the prices, that skews everything about Amazon Go as an experiment. Shoppers are highly price sensitive in many (but not all) contexts, particularly with CPG (consumer packaged goods).

If Amazon Go is cheaper than the local bodega or 7-Eleven, then this is bad news for those local shops.

Mom-and-pop markets will now have big-box Walmart, with its supply chain strong-arm tactics, chomping away at their existence from one side and Amazon Go eating its way in from the other.

More importantly, the human-to-human interactions that characterize most grocery shopping today at the cash register evaporate in the face of this frictionless but also emotionless interaction. (This reminds me of Erica Jong’s zipless fuck from Fear of Flying.)

All things being equal, we tend to go with the cheaper option. However, at issue here is what we count among all the things.

What aspects of the shopping experience are in our consideration sets when we go to the store?

For high-consideration items, perhaps we want a skilled salesperson to guide us, in concert with the research we do on our phones in the store or on our computers before we leave the house.

But for low-consideration items like groceries, it’s easy to discount the identity-forming energy that comes with going to the same store regularly, seeing the same people in the checkout line, chatting with the fellow customers — some of whom are friends — that you happen to meet.

This sort of identity-formation isn’t the kind we seek deliberately, like a college or a job or a neighborhood where the houses are the right size and the schools are good: it’s the kind that happens as a matter of course after we make those bigger choices.

Until things like Amazon Go show up on our mental horizons.

By virtue of its very presence in our cognitive landscape, Amazon Go requires us to make a conscious decision to go to the human-powered bodega in order to be part of a community… this is a new decision, one that requires us to spend part of our limited daily allotment of decision-making energy.

When it comes to buying books, I frequently vote with my dollar and spend a little bit more at independent bookstores like Powell’s to keep them in existence, although I’m also a frequent orderer at Amazon.

But when I’m in a hurry to grab lunch or need milk before I collect my kids at school, will I think through how that purchase will keep my local market in business?

I’d like to say yes, but I might be lying.

First Thoughts on Amazon’s Echo and Alexa

Based in large part on my friend Jeff Minsky’s enthusiastic endorsement, I bought the Amazon Echo, the device that comes with a voice-activated, Siri-like AI digital helper named Alexa. “This is a no-brainer,” Jeff said. “If nothing else it’s a terrific wireless speaker for under $200, and it does so much more.”

I unboxed the Echo on Wednesday, downloaded the iPhone app, plugged it in and had it running in minutes. Jeff is right about the speaker: it sounds great and fills even my (no pun intended) echo-filled living room.

Here are my first thoughts about the Echo and Alexa, its successes, missed opportunities, and where I see it going.

Surprise and Delight: The Beatles

As I did the morning dishes, my first request was, “Alexa, play The Beatles.”  Seconds later I heard “Long, Long, Long” from The White Album.  Wow!

Over the course of the next few minutes of puttering and tidying, I heard a news briefing from NPR, skipped through a bunch of other music, and also discovered that every CD or MP3 I’d purchased in my 18 years of Amazon.com membership was available to Alexa… and this is quite a bit.  I didn’t even realize that I had a music library outside of the Spotify-like, free-with-Amazon Prime music service.

Then — more surprise, more delight — a query of mine revealed that Alexa also has access to all my Kindle books… and she has a nice reading voice. Within moments, Alexa was reading Michael Weinberg’s It Will Be Awesome if They Don’t Screw It Up, a book about 3D printing and IP law. I’ve experienced earlier algorithmic reading of text, and it was no fun: robotic voices with no cadence and a ton of mispronounced words. By contrast, Alexa is smooth and winning. Does she measure up against a professional actor performing a text? By no means. But for simple “what’s this book about?” curiosity she’s fine… it’s a Herb Simon satisficing exercise rather than a premium audiobook optimization.

Alexa is an example of what Jeff Minsky calls the “Ambient Internet.”

Even more so than with the always-in-your-pocket smart phone, having the hands-free Echo on the kitchen counter reduces the friction between my desire to search for something and performing the search. This is even lower friction than my experience with the Apple Watch, because with the Watch I have to raise my wrist to my face with my left hand and push a button with my right to wake it up and start it listening… although Siri on my iPhone does wake up to “Hey, Siri.”

Not having to move a muscle and still being able to search for something is powerful, even seductive.

Last night, for example, my 10-year-old son wanted to go to a barbecue joint for dinner, so I asked Alexa where to find the best ribs in Portland. Alexa immediately recommended Reo’s Ribs, which is zesty and tangy (it used to be our neighborhood joint before it moved about seven miles farther away). Alexa couldn’t manage to suggest other options (although, to be fair, I only asked her for the best ribs and, like Siri, she has a limited ability to understand follow-up questions), so I trotted to my computer and Google to find a closer joint.

Dinner was delicious. 

There’s a downside to the Ambient Internet: it accelerates my technology-induced quasi-Attention Deficit Disorder. Having a bunch of devices that can deliver tasty, Doritos-like nibbles of information doesn’t help me buckle down and focus on the pressing tasks at hand.

Likewise, having a cybernetic pal eager to tell me about things happening in the world while I do dishes gets in the way of either mindful attention to task (even if the task is dishes) or mindless day-dreaming that often sparks a creative insight. 

Many smart folks have written extensively about how frictionless 24/7/365 connectivity makes us reactive and superficial rather than proactive and deep (see Nicholas Carr’s book The Shallows for one good example), and I don’t need to make that argument again here, except to note that adding another connected device, Echo, throws more cognitive Doritos into my info-diet rather than veggies.

Brief Digression on Cyber-Security: I’m not, in this post, addressing either the Big Brother Question (Amazon is always listening, tracking everything, and using that information to take over the world or maybe just help the NSA) or the Skynet/Robot Apocalypse Question (Alexa will fuse with Siri, find Ultron, and then decide to eliminate the infestation of humans ruining this perfectly good planet). End of Digression.

But what about the shopping? I have yet to ask Alexa to add something to my Amazon list, but I have no doubt that Alexa’s frictionlessness will aid purchases… and if Alexa continues to live in the kitchen, that will probably hurt my local supermarket: I’ll wind up ordering dishwashing soap through Amazon simply by asking for it when I’m running out, rather than having to pull out my phone to add it to the shopping list.

Missing: a battery. It’s only by using a device that you see the presumptions its creators had while designing it. With Echo, the designers presumed that once you picked a spot for the device, it would stay in that spot. This is a problem as I work to figure out where Alexa should live in my house, since every time I unplug the device it powers down, then takes a minute-plus to reboot when I plug it into a socket in another room. I find that reboot time vexing… and it doesn’t fit with the general frictionlessness of Echo.

A backup battery that would keep the Echo going for a half hour would be useful.

But the lack of a battery also suggests that Amazon has more pervasive ambitions for Alexa. From the start, I asked myself, “Why both Echo and Alexa?” Why not just call the device Alexa?

The reason, I think, is that Alexa has a bigger future footprint than the Echo.

The Echo is just a speaker, but pretty soon users will find themselves chatting with Alexa on their phones, computers and tablets when shopping on Amazon. And judging from the Ford SYNC integration with Echo that I saw at this year’s CES, Amazon also wants us to chat with Alexa while we drive. “Alexa, please add Tide to my shopping list,” I’ll be able to say from behind the wheel.

While Alexa is great, she is no “Jarvis” from Iron Man and The Avengers.  One disappointing thing about Alexa is her half-hearted connectivity to other services.  Yes, you can play your Spotify tunes out of the Alexa speaker, but in order to do so you have to stream Spotify from your phone (or some other device) into Alexa.  So you can’t ask Alexa for a particular playlist or to search Spotify for that rare Bill Evans tune.

The Achilles’ heel of most connected technologies is having the mobile phone as a tether: this is as true of Echo as it is of the Apple Watch.

Finally, another designer presumption that a few days of use has turned up: Echo was designed for somebody who lives alone, or at least for a single user.  Here’s a revelatory snippet from the Amazon page on Echo and Alexa:

Alexa—the brain behind Echo—is built in the cloud, so it is always getting smarter. The more you use Echo, the more it adapts to your speech patterns, vocabulary, and personal preferences. And because Echo is always connected, updates are delivered automatically.

This engineering mindset parallels that of the smart phone: one user per device.

But I live with three other humans and a dog.  While the dog’s shopping needs don’t require an AI helper, my son — who is entranced with Alexa — has trouble making himself understood because Alexa has adapted to my voice. 

This is a pervasive problem with shared services.  Since my entire family uses my Amazon Prime account, Prime thinks I suffer from multiple personality disorder since the suggested purchases run across four idiosyncratic people’s interests.  I always know when, for example, my teenaged daughter has been makeup shopping on Amazon because the Amazon banner ads that stalk me across the web shift.

My Netflix recommendations are even worse since we started using the service before the company enabled different profiles.  I have no interest in “Cupcake Wars,” but Netflix disagrees.

In a future post, I’ll write more about Echo/Alexa’s potential impact on business, shopping and branding.

[Cross-posted on Medium.]

Don’t Miss Adam Grant’s new book “Originals”

Of the many compliments that I can give to Adam Grant’s remarkable new book Originals: How Non-Conformists Move the World, a rare one is that I will have to read it again soon. Grant is an unusual social scientist in that he’s also a terrific writer, a gem-cutter of anecdotes whose real-life stories illuminate his points with a breezy, swallow-it-in-a-gulp momentum, so that I found myself racing through the book with a smile on my face. I didn’t even take notes! That doesn’t happen. So I’m going to read it again, slower, pencil in hand.
In the meantime, my first tour through Originals haunts my waking life, an insightful shadow nodding in at unexpected moments — as a professional, a thinker and a parent.
For example, when an academic friend told me she was trying to salvage as much as she could from her recent articles to put into a book she needs to write for tenure, I replied, “Don’t do that. You are prolific and have tons of ideas: only chase the ones that still excite you.” That’s lifted straight from Grant, who talks about genius as a surprisingly quantitative endeavor: it’s not that creative masters have better ideas than the rest of us; instead, they have a much greater number of ideas, so the odds go up that some of those ideas are terrific.
One of Grant’s opening anecdotes explores a non-causal correlation between success in a call center and an employee’s decision to change the default web browser on her or his computer. If the employee switched away from Internet Explorer to Firefox or Chrome (this isn’t hot-off-the-presses data, I think), then that switch demonstrated a kind of “how can I make this better?” mindset that led to higher job performance. I’ve thought about my own default choices repeatedly since then, noticing how sometimes I work around the technology when it’s too much bother to make the technology serve me. The pile of remote controls near the entertainment center in my living room is one example: I haven’t bothered to research, buy and program one universal remote.
Grant’s notion of strategic procrastination has also proved actionable faster than I might have predicted.  I’ve often been a pressure-cooker worker, mulling things over for a long simmering period before rolling up my sleeves.  Grant has persuaded me, though, that getting started first and then taking a mulling break at the halfway point leads to higher quality outcomes, and I’ve used this to my advantage — and the advantage of the work — on a research project that is taking up most of my time.
Originals isn’t perfect, but it’s always provocative. Another phenomenon that Grant explores is the correlation between birth order and creativity, with younger children — particularly the youngest of many children — often becoming more successful as ground-breaking creatives because they inhabit a different social niche in their families than rule-making parents and rule-abiding oldest children (of which I am one). Grant’s birth order argument focuses so much on the nuclear family that I wonder if it’s too Western, too settled, too suburban. My mother, for example, grew up in a close, hodgepodge, overlapping community of immigrant parents, grandparents, aunts, uncles and oodles of cousins. Her closest peer group was her cousins, with whom she roamed her city neighborhood unsupervised. The cousins, with whom she is still close decades later, influenced her as much as if not more than her sister, eight years her senior and a more distant presence in her childhood than, say, my 14-year-old daughter is in my 10-year-old son’s day-to-day in our little suburb. Still, Grant’s birth order research has made me rethink some of my own parenting choices with my older child.
Perhaps my only real complaint with Originals is that I want some additional product that will help me to apply its powerful insights in my everyday life.  As I gobbled up the book, I wanted something like a deck of playing cards with distilled versions of the chapters that I might rifle through to help sharpen my thinking… something like the Oblique Strategies or Story Cubes.
I was a big fan of Grant’s first book, Give and Take, and Originals is just as good if not better.  It was a pleasure to read the first time, and I’m eager to dive in once again… perhaps I’ll make my own deck of helpful playing cards using my friend John Willshire’s product, the Artefact Cards.