Why is “me too” happening now?

It’s challenging to go onto Facebook and Twitter right now and face the ever-swelling river of “me too” posts from women sharing their horrible stories of sexual harassment. It’s good that these posts are happening, good that it’s challenging. Part of what I find challenging is that I don’t know how to respond other than to bear witness.

The spark that started “me too” is Harvey Weinstein’s despicable, sexually predatory behavior — as reported by both The New York Times and The New Yorker. It’s a good thing that this has come to light and that the entertainment industry is exiling him.

And it’s shocking that he got away with it for decades.

Actually, it’s not shocking at all, which is the real problem.

What I don’t understand — what I find curious — is why “me too” is happening now.

Please don’t get me wrong: it’s terrible that — near as I can tell — every woman I know has been sexually assaulted — and it’s courageous and admirable that they are sharing these terrible experiences with the world.

What I’m wondering is why “me too” didn’t happen, say, after the Bill Cosby stories came out. I grew up in Southern California in the 1970s and 1980s, and the child of a celebrity once mentioned that Cosby was a known philanderer, but I never heard stories of him drugging women and raping them. Hannibal Buress started talking about Cosby as a rapist onstage in 2014 — and it’s fucked up that it took a man talking about it for this to become a thing — and after that women started to come forward to share their horrible Cosby experiences.

But the Cosby stories did not create “me too,” where women all over the world are sharing their stories of sexual harassment by men who aren’t famous.

Nor did the Access Hollywood, Donald Trump, “pussy-grabbing” story — the story that shockingly failed to derail his candidacy — create “me too.”

Perhaps the Cosby stories seemed too bizarre. Although countless women have been drugged or plied with alcohol and then raped, maybe the scenario of the most famous screen dad in the world slipping roofies into the drinks of young actresses didn’t resemble the experiences of other women enough to create “me too.”

In contrast, maybe Harvey Weinstein’s behavior, although profoundly weird, sounded like the experiences most other women have had with a lot of other men, making “me too” less of a leap.

Maybe the rapid succession of Cosby, Trump, Ailes and O’Reilly stories made it possible for women to create “me too” once the Weinstein story broke.

It’s good that “me too” is happening.  Why is it happening now?

Shortly after the “pussy-grabbing” story, Eugene Wei posted a remarkable piece called “The Age of Distributed Truth,” in which he talks about Cosby, Justin Caldbeck, Trump and Susan Fowler’s post about the toxic bro culture at Uber. Wei then talks about Michael Suk-Young Chwe’s book “Rational Ritual” and Chwe’s notion of “common knowledge”:

Knowledge of the message is not enough; what is also required is knowledge of others’ knowledge, knowledge of others’ knowledge of others’ knowledge, and so on — that is, “common knowledge.”

By this logic, after the New York Times article — followed quickly by the New Yorker article — it was impossible not to know that others knew about Weinstein, which made “me too” possible.

But that still doesn’t explain why it was the Weinstein story that provoked “me too.”

I don’t have an answer, and my question is far from the most important question about “me too.”

If you have an answer, please share it.

What comes after smartphones?

With all the press and the inescapable ads for new iPhones, Samsung Galaxy, Google Pixel and other snazzy devices, it’s hard to think of the smartphone as a transitional technology.

But it is.

Here are three recent indicators:

Apple and Facebook share a hypothesis that life contains moments when lugging a smartphone is a drag. The Apple Watch commercials feature active people running with just the Watch and wireless ear buds, and Facebook is making a similar bet with VR headsets that work without a smartphone. (I’m not sure why VR is less alluring with a smartphone, unless one plans to be naked and therefore pocketless in real life while visiting virtual life.) The third indicator is Apple’s decision to kill off its non-internet-connected iPods.

You might be wondering about that third indicator. How does the death of non-internet-connected iPods suggest that smartphones — the technology that replaced the iPod — are going away?

What happened to the iPod will happen to the iPhone.

Once smartphones took off after 2007, Apple cannily realized that this new wave of devices was going to absorb the iPod’s customer base for listening to digital music. Who wants to carry around a smartphone and an MP3 player when the smartphone can play MP3s just fine and sounds the same?

What both iPod and iPhone owners care about is listening to music, not the device. If anybody was going to cannibalize Apple’s iPod customers, the company thought, then it should be Apple.

As I look at technology and behavior trends, one of my axioms is that verbs are more important than nouns.

People want to take pictures, and most people prefer the fastest and easiest option for doing so. Devoted photographers still use single-lens reflex cameras — either film or digital — but (as the Kodak company learned to its dismay) most people don’t want the hassle and expense of getting film developed, so instead they just whip out their phones. In our latest Surveying the Digital Future survey, for example, we found that 89 percent of Americans take pictures with their mobile phones.

It’s important to focus our analytical attention on the activity — taking pictures — rather than the device people use to do the activity, because behavior is liquid and can be poured from one container into another.

None of the actions people perform with smartphones are limited to smartphones, and that means that the smartphone won’t be with us forever.

What will this post-smartphone future look like?

Computing power is increasing, as is the ubiquity of wifi and other over-the-air internet connections. Cloud computing, where the heavy lifting of computation happens on remote servers instead of on the local device, means that smaller and smaller devices will have greater and greater processing power.

There’s a common cliché that today’s smartphone is more powerful than the computer that landed Apollo 11 on the moon. In a few short years, a device the size of a pea will connect to processing power a thousand times greater than that of today’s smartphone.

So, instead of smartphones in our pockets or purses as our single, do-everything devices, we’ll have Personal Area Networks (PANs)– clusters of devices worn on different parts of our bodies or hovering nearby.

Instead of the glass-and-metal rectangle of today’s smartphone, we might have the computer guts of our PANs in the shape of a silver dollar, or distributed across a series of beads worn as a necklace.

Both in the data from our Future of Transportation project and in watching the uptake for Amazon’s Alexa, Apple’s Siri and the Google Assistant, we see voice interfaces rising in popularity, so it’s likely that the main PAN input will be our voices.

For output, the PAN will deliver information both via the voice of the digital assistant (“turn left here, Brad”) and via Augmented Reality (AR) glasses like the rumored-to-be-forthcoming Magic Leap technology. Eventually, these will evolve into contact lenses.

If we need to type, we’ll have a virtual keyboard projected onto our AR vision, and we’ll type on any flat surface– the way we type on touch interfaces today. Likewise, we might wear barely-there connected gloves for input. Or, we might carry around a small stylus for sketching in AR or VR, or even a fancy pen that works on real paper as well as virtual paper.

The cutting-edge health sensors in the latest Apple Watch will seem Flintstonian in comparison to the distributed sensors in clothing as well as implanted in our bodies, continually sharing health information with our CPUs.

What stands in the way of this post-smartphone future?

Two things are standing in the way of the brave new world of PANs, one technological and one cultural.

The technological obstacle is battery life. Nobody wants to plug in a dozen or more devices (CPU, glasses, stylus, shoes and socks, underwear, pants, shirt, hat…) every night at bedtime, so battery technology will need to improve and the power-consumption demands of the devices will need to become more efficient.

Electric vehicle manufacturers like Tesla are paving the way for better batteries for cars, and eventually that technology will shrink and trickle down to micro devices.

On the cultural side, if you’re wearing a screen on your face and the processing power is in a silver dollar in your pocket, then how do you take a selfie?

While some people make fun of selfie-obsessed youth (not that young people have any monopoly on either narcissism or the ongoing high-tech curation of it through selfies), as my friend Jill Walker Rettberg compellingly argued in her book Seeing Ourselves Through Technology, selfies are an important emergent genre of self-expression — one that is here to stay.

I predict that many of us will carry a selfie-specialized, lightweight, thin, credit-card-sized screen that will have both a powerful camera and high-definition resolution. If you look at the new Google Clips camera announced last week and imagine it even smaller, more powerful and with a display, then you’ll see what I mean.

With increased battery life, some of us will also have selfie drones that will take off and orbit us whenever we simply think about taking a selfie, since we’ll have small sensors affixed to or implanted in our skulls paying attention to how our brain waves change when we’re thinking about particular things.

Focus on content, not containers

The death of the smartphone is hard to imagine today.  But when the iPod debuted in 2001, it was hard to imagine that it would be displaced just six years later with the arrival of the iPhone.

The moral of this story is not that we’ll all someday soon be even more wired up and connected than we are today (although we will).

Instead, the important take-away idea is that the smartphone (a noun) is a container for a series of activities (verbs), and that the container is distinct from the content.

Don’t mistake the glass for the wine.*

[Cross-posted on the Center for the Digital Future site and elsewhere.]

* For a sci-fi, near-future dystopian version of some of these interactive technologies, you might enjoy my 2011 novel, Redcrosse.

Brief Rant about Email: Change the Topic, Change the Subject Line

Look, you’re busy. I know you’re busy. I’m busy too.  We’re all busy.

But there’s one inviolable rule of email communications — the prime directive, the Federation’s highest law — and it’s simple.

If you change the topic of an email thread, then you have to change the subject line too.

That is, you have to do this if you want people to read your emails.

Just moments ago, an email correspondent sent a reminder to everybody on an email thread that a component of a project is due today. That part is good.

The bad part is that my correspondent sent this reminder by replying to an email request for a meeting that happened YESTERDAY.

That is mind-numbingly stupid.

At least half the time, email threads continue long after anything that looks like a Brad-shaped action item has receded in the rear-view mirror, so the likelihood that I will read them promptly, if at all, is slim. I’m not alone in this.

More importantly, if there’s a new request about an unrelated or adjacent piece of business buried in a reply to a reply to a reply on an old thread that started a while back about something entirely different, then you have only yourself to blame if your request goes unanswered.

Yes, I’m talking to you.

Email is a scourge and a tool. You can make it more tool-like by using it intelligently.

Rant over.

Car ownership is changing, not dying (yet)

On Monday, Business Insider published an article with the headline, “Uber and Lyft could destroy car ownership in major cities.” It’s a provocative headline, but it misrepresents the carefully worded findings of a recent study by researchers at the University of Michigan, Texas A&M and Columbia.

The study took shrewd advantage of a “natural experiment” that happened when Uber and Lyft, protesting new municipal legislation, stopped operating in Austin, Texas, in May of 2016. A few months later, the study authors surveyed a representative sample of Austin residents who had formerly used Lyft and Uber to see how their transportation habits had changed.

The most interesting findings from the study were that after Uber and Lyft drove out of town, 1) only 3% of respondents switched to public transportation (the technical term for this is “bad news”), and 2) that respondents who switched back to using a personal vehicle were 23% more likely to make more trips than when they’d used Lyft and Uber, increasing congestion for everybody else.

The study authors were careful not to extrapolate beyond the Austin city limits, so the Business Insider headline is overblown in its end-of-days rhetoric. It reminds me of the “Bring Out Your Dead” scene in Monty Python and the Holy Grail where a plague victim isn’t quite dead, but that situation is inconvenient for the person carrying him to a wagon full of corpses:

It’s not only fans of Lyft and Uber who overstate the impact of these services.

In an HBR interview, Renault-Nissan CEO Carlos Ghosn — when asked about Uber and other such services cutting into car buying — replied, “I’m not worried. By our estimates, the industry sold 85 million cars worldwide in 2016 and is moving towards 87 million this year– both industry records.”

That is a nonsensical response: it’s like being confronted with a giant asteroid hurtling towards the Earth and replying, “but it’s so sunny outside!”

What’s really changing about transportation

In our work on the Center’s Future of Transportation project, we see a two-stage revolution in transportation that is just beginning.

In the first stage, what we call “Get-a-Ride Services” (or GARS) like Uber, Lyft, Car2Go, Zipcar and others make it thinkable for Americans to give up their own cars, but the move from just thinking about it to actually giving up a car is going to take time.

It’s a good news/bad news/more good news scenario.

We asked a representative sample of all Americans if they’d consider not having their own cars: 80% of respondents said no. That’s good news for car manufacturers– only 20% of Americans would even consider letting go of the steering wheel.

The bad news is that when we zoomed in on people who use GARS either frequently or sometimes, that 20% consideration doubled to 40%– so use of GARS creates immense flexibility in how Americans think about transportation.

Then there’s the additional good news: only 16% of Americans use GARS frequently (2%) or sometimes (14%); 17% use them once in a while; 67% never use them. (I discuss this at greater length in this column about liquid behavior.)

Car manufacturers, in other words, don’t have to worry about massive car-buying declines in 2018, but I wouldn’t be optimistic about 2020. We see a slow erosion in car buying, but more importantly we see change within the cars being purchased.

The people who choose to own cars will have more specialized needs (more on this below), and this means that manufacturers will need to customize their vehicles to a greater extent than they do today. That’s grim news for mass-scale production, where, for example, Toyota sells a few million Camrys that are all pretty much the same.

On the other hand, new production technologies — like the adjustable drive train from Faraday Future — will make this customization cheaper for manufacturers. The last stage of production for your next car might happen at the dealership, via a gigantic 3D printer.

The second stage of the transportation revolution is all about self-driving cars, and you can’t find a better overview of why driverless cars will change everything than in this column by Center founder Jeffrey Cole.

Self-driving cars are no longer the stuff of science fiction. This week the U.S. House of Representatives will vote on “a sweeping proposal to speed the deployment of self-driving cars without human controls and bar states from blocking autonomous vehicles, congressional aides said,” according to Reuters.

But even if this legislation magically passed from House to Senate to the president’s desk and received approval in 24 hours, it would still be years before self-driving cars are everywhere. As science fiction author William Gibson famously quipped in 1993, “the future is already here: it’s just not very evenly distributed.”

Tomorrow’s car buyer

The national — even global — fascination with self-driving cars is understandable, but it’s also a distraction from important changes in transportation, the first stage of the revolution, that will hit home a lot sooner.

To see this, let’s zoom in on one chart from our forthcoming Future of Transportation report. We asked people who used to have a car but had given it up this question, “Do you miss anything about having access to a car?” Here are the top five answers:

The most interesting answer is the fourth: 31% of respondents miss being able to keep their stuff in a car. The flip side of this, of course, is that 69% of people don’t give a hoot about using a personal car like a high school locker.

This suggests that for the vast majority of people there is no specific, concrete reason to own a car. “Convenience” is vague, and most people will trade convenience for cash much of the time. Independence, the fun of driving, and not having to rent a car to go on a long drive are similarly vague.

But being able to keep things in a car is concrete, and from that we can draw some tentative conclusions about who will own cars in the future.

Parents of very young children — babies these days need approximately a ton of plastic crap that poor Mom and Dad have to lug around — will find it inconvenient to have to install a car seat every time they drive somewhere. Likewise, parents with more than two children won’t want to play Uber-Roulette and risk having to squeeze five-plus bodies into four seats in the inevitable Prius.

Anybody who works out of a car — gardener, plumber, contractor, surveyor, electrician, or locksmith — will need a dedicated vehicle. Sporty people who need a lot of equipment — skiers, surfers, kayakers, campers — or bikers who want a rack on their car to drive to the nice places to ride will want a dedicated vehicle.

But for the rest? The people who just need to move their bodies from place to place carrying a backpack or briefcase?

Most of those people will probably buy another car when the time comes: the big question is whether they will buy another car a few years after that. The answer is only “maybe” because — for the first time in a century — they no longer have to own a car to get around.

[Cross-posted on the Center site and elsewhere.]

Open Letter to Twitter CEO Jack Dorsey: Please Cancel the President’s Accounts

Dear Jack Dorsey,

Please cancel U.S. President Donald J. Trump’s Twitter accounts– both the official @POTUS one and @RealDonaldTrump.

Twitter does not have to persist in giving the president a platform where he lies in verifiable ways that responsible media outlets — real news — have detailed time and again.

Twitter does not have to enable the president to say hurtful things, things that violate Twitter’s own rules against abusive behavior.

After all, according to the page to which I linked above, “Twitter reserves the right to immediately terminate your account without further notice in the event that, in its judgment, you violate these Rules or the Terms of Service.”

Even if you and the Twitter legal team were to scrutinize both the rules and the Terms of Service and conclude that you cannot under the current rules terminate the president’s account, then that should not prove a barrier. On your website it states, “Please note that we may need to change these rules from time to time and reserve the right to do so. The most current version will always be available at twitter.com/rules.”

If you need to, please change the rules.

I’m sure you can come up with something logical and defensible.

In doing this, you’d not only be acting as a patriot, but you’d also be joining the other powerful CEOs who have stepped away from the president’s various councils and advisory groups because they find his behavior repugnant and un-American.

Please stop enabling the president’s repugnant behavior.

Consider what happened Wednesday, when a dozen of your peers — these same CEOs — decided to resign en masse from their advisory roles on White House councils:

Before they could make a statement announcing their decision, however, Mr. Trump spoke. He had caught wind of their planned defection and wanted to have the last word. Taking to Twitter, he wrote: “Rather than putting pressure on the businesspeople of the Manufacturing Council & Strategy & Policy Forum, I am ending both. Thank you all!” (New York Times.)

Twitter, the company you lead, allowed the president to try to prevent the CEOs from making an effective statement.

The president uses Twitter to lie, to hurt people, to shame people, to subvert the freedom of the press and in doing so he is making this country a lesser place than it should be.

While you cannot make the president an honest man or a decent president, you could make it harder for him to do his job badly.

Please, Mr. Dorsey, cancel the president’s Twitter accounts.

Sincerely,

Brad Berens (@bradberens)

Liquid Behavior

Anybody who has tried to lose weight, quit smoking, or train for a marathon knows that creating a new behavior or getting rid of an old one can be very, very challenging.

But it’s not hard to pour a behavior from one container into another, and this has implications for anybody trying to launch a new product or service. Here’s an example: the Center’s Future of Transportation Project turned up a trio of numbers — 86, 80 and 60 — that tell an exciting story about how Americans’ opinions about car ownership are changing. We asked our respondents — a statistically representative snapshot of the U.S. population — if they would give up driving altogether. Eighty-six percent said they would not.

That seems definitive, but it’s not.

We changed the question and asked if Americans would give up owning a car– that is, they’d retain the ability to drive but wouldn’t own or lease a car. That 86% dropped to 80%, or to look at it from the other direction, 14% consideration rose to 20%. That’s not a big difference, and there’s still a vast supermajority of people who would not give up their cars.

But then the story changes.

Instead of looking at our entire population, we focused on the people who use what we call “get a ride services” (GARS) like Lyft, Uber, Getaround, Zipcar or Car2Go, either frequently or sometimes. Only two percent of our respondents use these services frequently, while 14% use them sometimes (84% use them rarely or never– which many find surprising given how often Uber is in the press).

Sixteen percent is a relatively small slice of the population, but the impact of GARS on people’s transportation views is profound. The 80% of people who would never give up owning a car drops to 60%. Or, to reverse the picture, the 20% consideration for no longer owning a car among the general population doubles to 40% among the GARS-using population!
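To make that doubling concrete, here is a back-of-the-envelope sketch in Python using only the percentages quoted above. It assumes that “GARS users” means the 16% who use the services frequently or sometimes, and that the overall 20% consideration figure blends users and non-users; the survey’s exact base definitions may differ, so treat this as illustrative arithmetic rather than a new finding.

```python
# Back-of-the-envelope arithmetic from the survey figures quoted above.
# Assumption: "GARS users" = the 16% who use the services frequently or sometimes.

gars_share = 0.02 + 0.14           # frequent + sometimes users, as a share of all respondents
overall_consideration = 0.20       # would consider giving up car ownership (all respondents)
gars_consideration = 0.40          # same question, GARS users only

# If the overall 20% blends both groups, the implied rate among everyone else is:
non_gars_consideration = (overall_consideration
                          - gars_share * gars_consideration) / (1 - gars_share)

print(f"GARS users:    {gars_consideration:.0%} would consider giving up ownership")
print(f"Everyone else: {non_gars_consideration:.1%} (implied)")
# -> roughly 16%, i.e. by this rough math, people who already use get-a-ride
#    services are about two and a half times as likely as everyone else to
#    entertain the idea of life without a car.
```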

With an ousted CEO, a sexist bro culture, and aggressive takeover moves from SoftBank in Japan, Uber has more than its fair share of problems right now, but that’s Uber the company, Uber the noun.

Uber may not last as a company (and I’ll have more to say on this topic in a future column), but uber the verb (as in, “I’ll uber there after my lunch meeting”) isn’t going anywhere.

In other words, it takes surprisingly little to make giving up car ownership thinkable: all you have to do is try GARS sometimes and you suddenly see the hassle and expense of car ownership in a stark new light. This is bad news for car manufacturers, and particularly for the people marketing new cars, because if you look at any recent car ad the thrust of the message is “buy this car.” But the argument that the manufacturers need to be making first is “buy a car” because they can no longer take for granted that Americans know they want to own a car.

Even before we put the survey into the field, I was surprised when more than one of my suburban neighbors speculated that there might come a time when they could reduce the number of cars they have and rely on Uber (or a similar service) to fill in the gaps — this in a neighborhood where the nearest bus stop is a mile away.

Focus on Verbs, not Nouns

This isn’t a column about transportation: it’s about how little it takes to move a behavior, to pour it from one container into another like pouring orange juice from a bottle into a glass.

Previously, I’ve written about how smart phones absorbed the functions of cameras, email, notebooks, calendars, and MP3 players to become the everything-Swiss-Army-Knife devices that we can’t be without. We can extend this list to include flashlights, videogame devices, social lives, banks, Zippo lighters, and more. But in this week’s column, let’s flip this phenomenon and look at it from the other direction.

What the GARS data show is that people don’t want to own things per se; they want to achieve their goals — getting around — and they’ll choose a tool — a car — to accomplish that goal, particularly if people commonly associate that tool with the goal in question. But if there’s another tool that’s easier or cheaper and achieves the same goal, then people will migrate their behavior to the new tool as soon as they understand that they have the option.

This is a big deal, because companies often focus on their product features and their competitors rather than on their customers’ needs, and that can make companies blind to new competitors that come from different angles to help customers achieve their goals faster, cheaper, or both.

This notion of liquid behavior connects to classic business thinking. In “Marketing Myopia,” a famous 1960 Harvard Business Review article, Theodore Levitt wrote that companies need to ask themselves, “What business are you really in?”

Using railroads as a key example, Levitt argued that the railroads stopped growing because they presumed that they were in the railroad business rather than the transportation business. In other words, they focused on the noun (trains) rather than the verb (transportation). In Levitt’s view, companies that understood themselves as transportation businesses would have extended from trains into trucks and airplanes; the trains themselves weren’t going to disappear.

More recently, business professor and innovation theorist Clayton Christensen has argued (in the book Competing Against Luck) that companies need to ask their customers, “What job did you hire that product to do?” and iterate product development accordingly. This moves the Levitt question from the corporate level to the individual level. Christensen’s focus on what he calls “Jobs Theory” helpfully refocuses attention on the actions people want to perform rather than the tools that other people have used previously.

Liquid behavior is different from both the Levitt and Christensen questions because it presumes that today’s products and services will go away but that the actions people perform with those products and services will stick around. Only serious photographers now buy single-lens reflex cameras; most people just use their phones to snap pictures. The market for typewriters is vastly smaller than it was forty years ago because most people use word processing programs on their computers to “type” things up. Travelers who want to make their own breakfast now have the option of choosing Airbnb over a traditional hotel.

For a new product or service to succeed, it’s easier to pour an old behavior into a new shape than to create something entirely new. Facebook is a terrific example of this: the service skyrocketed after it allowed its users to share photos. People had already been sharing photos since before the Polaroid, but Facebook made it easy to pour that photo sharing into a new virtual container. Early Facebookers didn’t automatically understand poking or throwing sheep (if you’re old enough, you just got hit by a wave of nostalgia), but photo-sharing was a no-brainer.

The big takeaway here is that incumbent companies are always more vulnerable than they think they are if they delude themselves into thinking that people are loyal to the brands and to the particular products that they use today to achieve their goals. Apple is vulnerable. Google is vulnerable. Facebook is vulnerable. Walmart is vulnerable. Amazon is vulnerable, and so on.

People aren’t loyal. People are busy and often don’t have the mental energy to make a change (this is different than laziness). The chance to save time and money can nudge people to give something new a try, particularly if the new thing doesn’t require a steep learning curve. That’s liquid behavior.

To survive and thrive, companies need to focus on verbs instead of nouns, on behavior instead of brands or products.

[Cross-posted at the Center for the Digital Future website.]

Smart Phones and Drained Brains

As we use our mobile phones to do more and more things, we are paradoxically able to accomplish less — even when the phones are face down and turned off.

My last column explored how smart glasses (“heads up display” or “HUDs”) will increase the amount of digital information we look at, with the ironic twist that these same devices will erode our shared experience of reality. But we don’t need to look to a future technology to see how challenging it is to pay attention to what’s around us. We already carry a dislocating technology around in our pockets: our phones.

I’m deliberate when I say “dislocating” rather than “distracting,” because we’re not necessarily distracted: often we’re fiercely focused on our phones, but we’re dislocated because unless we’re taking pictures or videos we’re not engaged with our immediate physical environments. Distraction is a subset of dislocation.

The charts below show the many ways we use our phones, as described in the newest version of the Center’s longitudinal “Surveying the Digital Future” report (it comes out next month):

As the report will observe, texting (93%) has edged out talking (92%) as the most common use of a mobile phone because texting increased six percent year over year while talking stayed flat.

It’s easy to get sucked into data on the individual functions (for example, 67% of people take videos with their phones, a nine percent increase), but doing so misses the big picture: with the exception of talking, Americans have increased their use of every mobile phone function over four years (2012 to 2016).

Phones and the Future of Focus

As with all technologies, increased mobile phone use has both a plus side and a downside.

On the positive side, we’re more connected to our loved ones and the information we want than ever before. We get news of the world instantly and store our important information — from shopping lists to medical documents to that pinot grigio we liked so much in that restaurant that time that we took a picture of the label — in our phones and online where we can always get to it. (I’m the king of productivity apps and can no longer imagine life without Evernote.) With games and apps and email and social media, mobile phones have engineered boredom out of our lives because there is always something fun to do.

But on the negative side, we use our phones more often to do more things, and that time and attention have to come from somewhere — they come from our engagement with the physical reality around us, including the people we are with, who increasingly feel ignored unless they too have their noses in their smart phones. If we’re playing Candy Crush waiting in the supermarket checkout line, then we’re not chatting with the cashier or the other people in line who might have something interesting to say. While it sucks to be bored, boredom leads to daydreaming, and most of the great ideas in human history started with a daydream.

Brain Drain

First we’re dislocated, then we’re distracted. In other words, when we finally want to focus on the world around us, it’s getting harder to do so because of our mobile phone use. This is the finding of an important study that came out in the Journal of the Association for Consumer Research in April.

The article — “Brain Drain: The Mere Presence of One’s Own Smartphone Reduces Available Cognitive Capacity” by Adrian F. Ward, Kristen Duke, Ayelet Gneezy and Maarten W. Bos — usefully distinguishes between the things we think about (the orientation of our attention) and how much energy we have to think about those things (the allocation of our attention).

Mobile phones, the authors find, suck attentional energy away from non-phone-based activities, and since we have a limited amount of attention to spend, we’re less capable when we have a task at hand and in front of us.

What’s startling about the study is that mobile phone distraction does not just happen when our phones are on, beeping and flashing and vibrating for our attention. Our mobile phones reduce our ability to function even when the phones are turned off and face down on the table or desk where we’re working. As the authors observe, trying to increase your focus using “intuitive ‘fixes’ such as placing one’s phone face down or turning it off are likely futile.”

Performance gets slightly better if the phone is out of sight in a pocket or bag. Performance substantially increases only when the mobile phone is in another room, entirely out of sight and somewhat out of mind. And the more dependent you are on your mobile phone, the more your focus blurs when your phone is in sight or nearby.

It gets worse: the data shows convincingly that our ability to perform erodes if our phones are nearby, but we do not recognize that degradation of performance:

Across conditions, a majority of participants indicated that the location of their phones during the experiment did not affect their performance (“not at all”; 75.9%) and “neither helped nor hurt [their] performance” (85.6%). This contrast between perceived influence and actual performance suggests that participants failed to anticipate or acknowledge the cognitive consequences associated with the mere presence of their phones.

In other words, we think that we can handle the distraction that comes with our phones being around, but we can’t. In this regard, mobile phones are a bit like drunk driving or texting while driving: we think we can do it without consequence, but often we aren’t aware when we’re impaired and not able to function until it’s too late. (Psychology Today has a nice summary of the study findings.)

Implications: Budgeting Attention

We have a limited amount of attention: this is why a common metaphor for directing our attention towards someone or something is “to pay attention.” Attention is like a currency that we can budget or hoard, but we tend not to do so. Instead, we are attention spendthrifts, throwing our cognitive capacity at all the tasty tidbits that come out of our screens.

The problem with the “pay attention” metaphor is that it obscures something important: our attention can disappear without our having made a conscious decision to pay. For example, when we have notifications enabled on our laptops, tablets, and mobile phones — especially the latter — those bleeps and flashes and buzzes are attention taxes that we don’t realize we’re paying.

What the “Brain Drain” study shows is that even if we have our phones turned off and face down, we’re still paying an attention tax that acts like hidden fees on credit cards.

Brain Drain is different than Information Overload because with Brain Drain there is no information: just the potential for information. Likewise, Brain Drain is different from FOMO (Fear of Missing Out), because Brain Drain happens even when we aren’t fretting about what might be going on somewhere else.

The paradox of mobile phones is that as we use them to do more and more things, it becomes harder and harder to do any one thing. Always using our everything devices means that we’re often nowhere in particular, and in order to be somewhere we have to make a pre-emptive, conscious decision to put the everything device into an entirely different room.

That’s hard to do.

[Cross-posted on the Center for the Digital Future website.]

The Fall and Rise of the Visual Internet

I’m pleased to announce that my role with the Center for the Digital Future at USC Annenberg has expanded, and I’m now the Chief Strategy Officer. This column is cross-posted from the Center’s website, and is the first of many regular pieces from me and my colleagues. And now, onto the column… 

Bennett and I have been friends since we were eight. Over a recent late-night dessert we compared notes about how thinly spread we each felt across work, family and life. Bennett then shared an insight from a counselor he sees: “Y’know how in Kung-Fu movies the hero stands in the center and all the villains gather into a circle around him and take turns attacking him one by one? Life isn’t like that.”

Neither is technology.

Technologies don’t take turns arriving in our lives. Instead, they’re locked in a Darwinian struggle to clutch and hold onto a niche in our lives. Sometimes it’s a head-to-head struggle, like VHS versus Betamax, where the differences are slight and one technology wins because of marketing and luck. Sometimes different trends slam into each other and that collision creates a new thing — like the way that mobile phones ate digital cameras, email, notebooks, calendars, music collections, powerful microprocessors, decent battery life and the web to become smart phones.

A new collision is gaining velocity with the emergence of digital assistants and heads-up display. Both new technologies are changing how users interact with information, particularly visual information. As these technologies give users new ways to behave, those behavior changes will put pressure on the business models and financial health of digital media companies, particularly ad-supported companies.

Voice Interfaces Reduce Visual Interaction

Even though newer Echo devices have screens and touch interfaces, the most compelling use case for Amazon’s Alexa, Apple’s Siri in the HomePod, and the Google Assistant in Google Home is eyes-free and hands-free.

For example, I often use my Echo device when I’m doing the dishes to catch up on the day’s events by asking, “Alexa, what’s in the news?” Or, if I’m about to wade deep into thought at my desk and don’t want to miss a conference call starting an hour later I’ll ask Alexa to “set a timer for 55 minutes.”

I’m a failure at voice-driven commerce because I have yet to ask Alexa to buy anything from Amazon, but I have used IFTTT (the “If This, Then That” service that connects different devices and services) to connect Alexa to my to-do list so that I can add something just by speaking, which spares me from dropping everything to grab my phone or (gasp!) a pen and paper.
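For readers curious about the plumbing, here is a minimal sketch of what the receiving end of that kind of applet could look like if, instead of a ready-made to-do integration, you pointed IFTTT’s Webhooks action at a small self-hosted endpoint. The route, the port, and the “item” field are hypothetical choices for illustration only; the actual Alexa-to-IFTTT connection I describe above is configured entirely in IFTTT’s interface, with no code required.

```python
# Minimal sketch: a self-hosted endpoint that an IFTTT Webhooks action could POST to
# whenever Alexa adds an item to a to-do list. The /todo route and the "item" field
# are hypothetical; the real applet just needs to send a matching JSON body.
from flask import Flask, request

app = Flask(__name__)

@app.route("/todo", methods=["POST"])
def add_todo():
    payload = request.get_json(silent=True) or {}
    item = payload.get("item", "").strip()
    if not item:
        return {"status": "ignored", "reason": "empty item"}, 400
    # Append the spoken item to a plain-text to-do list.
    with open("todo.txt", "a", encoding="utf-8") as f:
        f.write(item + "\n")
    return {"status": "added", "item": item}, 200

if __name__ == "__main__":
    app.run(port=5000)
```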

Alexa’s answers are pleasantly clutter-free. If I use my desktop computer to search Amazon for the latest John Grisham novel, then along with a prominent link to Camino Island, Amazon serves up a results page with 24 distracting other things that I can buy, as well as hundreds of other links. With Alexa, I just get Camino Island. (With commodity products, unless you specify a brand, Amazon will send you its generic house brand: CPG advertisers beware!)

Right now, most queries to smartphone-based digital assistants result in a list of results that I have to look at, switching my attention from ears to eyes, but as these rudimentary artificial intelligences get better my need to look at a screen will decline. Today, if I say, “Hey Siri, where’s a Peet’s Coffee near me?” the AI will tell me the address and ask me if I want to call or get directions. If I choose “directions,” then I have to look at my phone. Before long, Siri will seamlessly transition to Apple Maps and speak turn-by-turn directions, so I won’t have to look away from the road.

The challenge the rise of voice interfaces poses for ad-supported digital companies is that those companies make their money from propinquity — from the background clutter that is near the thing I’m looking at or searching for but that isn’t the thing I’m looking at or searching for.

Google, Facebook, the New York Times, AOL (excuse me, “Oath”), Reddit, Tumblr, Bing, LinkedIn, and others make much of their money from banners, pop-up ads, search results and other things we see but often don’t consciously notice: that is, online display advertising.

Amazon’s Alexa can already read news stories aloud in a smooth, easy-to-follow voice. It won’t be long until all the digital assistants can do so, and can navigate from article to article, site to site without users having to look at anything.

We can listen to only one thing at a time, so there aren’t background ads for Siri, Alexa and their ilk. Moreover, despite decades of conditioning to accept interruptive ads in radio, it’ll be game over the moment Alexa or Siri or Google Assistant says, “I’ll answer your question, but first please listen to this message from our friends at GlaxoSmithKline.”

The most powerful ad blocker turns out to be a switch from eyes to ears as the primary sense for media interaction. As voice-interface digital assistants grow in popularity and capability, the volume of visual inventory for these businesses will erode.

This erosion follows the decline in visual inventory that already happened as users moved most of their computing time to the smaller screens of mobile devices with less visual geography and therefore less room for ads.

In a recent Recode Decode interview, marketing professor and L2 founder Scott Galloway observed, “advertising has become a tax that the poor and the technologically illiterate pay.”

Since wealthier people will have voice-activated digital assistants first, that means that the people more exposed to visual advertising will have less disposable income and will therefore be less desirable targets for many advertisers. This creates more pressure on the display-ad-based media economy.

On the other hand, remember the Kung Fu movie quip? There’s another technology making changes in the visual internet at the same time.

Smart Glasses Increase Visual Interaction

Smart glasses are, simply, computer screens that you wear over your eyes. In contrast with voice interfaces that are already popular in phones and with speakers, smart glasses haven’t become a big hit because they’re expensive, battery life is limited, and many people get nervous around other people wearing cameras on their faces all the time. (Early Google Glass enthusiasts were sometimes dubbed “glassholes.”)

Some pundits think that just because Google Glass didn’t sweep the nation it means that all smart glasses are doomed to failure. But just as Apple’s failed Newton (1993) presaged the iPhone 14 years later (2007), Google Glass is merely an early prototype for a future technology hit.

Smart glasses come in a spectrum that gets more immersive: augmented reality puts relevant information in your peripheral vision (Google Glass), mixed reality overlays information onto your surroundings that you can manipulate (Microsoft’s HoloLens, with Pokémon Go as a phone-based version), and virtual reality absorbs you into a 360-degree environment that has little relationship to wherever your body happens to be (Facebook’s Oculus Rift, HTC Vive). The overarching category is “Heads-Up Display” or HUD.

What’s important about HUDs is that they increase the amount of digital information in the user’s visual field: not just the visual inventory for ads (like in this clip from the film, “Minority Report“), but for everything.

Wherever you’re reading this column — on a computer, tablet, phone or paper printout — please stop for a moment and pay attention to your peripheral vision. I’m sitting at my desk as I write this. To my left is a window leading to the sunny outdoors. On my desk to the right are a scanner and a coffee cup. Papers lie all over the desk below the monitor, and there are post-it reminders and pictures on the wall behind the monitor. It’s a typical work environment.

If I were wearing a HUD, then all of that peripheral territory would be fair game for digital information pasted over the real world. That might be a good thing: I could have a “focus” setting on my HUD that grays out everything in my visual field that isn’t part of the window where I’m typing or the scattered paper notes about what I’m writing. If I needed to search for a piece of information on Google I might call a virtual monitor into existence next to my actual monitor and run the search without having to hide the text I’m writing. This is the good news version.

In the bad news version, ads, helpful suggestions, notifications, reminders and much more colonize the majority of my visual field: I think about those moments when my smart phone seems to explode with notifications, and then I imagine expanding that chaos to everything I can see. In some instances this might be a maddening cacophony, but others might be more subtle, exposing me to messages in the background at a high but not-irritating frequency in order to make the product more salient. (“I’m thirsty: I’ll have a Coke. Wait, I don’t drink soft drinks… how’d that happen?”) This isn’t as creepy as it sounds, and it isn’t the old Vance Packard “subliminal advertising” bugaboo; it’s just advertising. Salience results from repetition.

Regardless of what fills the digital visual field, an explosion of visual inventory will be a smorgasbord of yummies for ad-supported media companies.

But there’s a twist.

Filters and the Decline of Shared Reality

Just sitting at my desk as I work is an overly simplistic use case for wearing a HUD: the real differences in all their complexity come into focus once I leave my office to wander the world.

With Heads-Up Display, every surface becomes a possible screen for interactive information. That’s the output. Since the primary input channel will still be my voice, there’s a disparity between the thin amount of input I give and the explosion of output I receive. This is the digital assistant and HUD collision I mentioned earlier.

Walking in a supermarket, the labels on different products might be different for me than for the person pushing his cart down the aisle a few yards away. The supermarket might generate individualized coupons in real time that would float over the products in question and beckon. If my HUD integrated with my digital assistant, then I might be able to say, “Hey Siri, what can I make for dinner?” and have Siri show me what’s in the fridge and the pantry so that I can buy whatever else I need.

Smart glasses won’t just stick information on top of the reality on the other side of the lenses, they will also filter that reality in different ways.

We can see how this will work by looking at the technologies we already use. For example, businesses will compete to put hyper-customized articles, videos, and ads in front of you, similar to how ads pop up on your Facebook page today. But these articles and ads will be everywhere you look, rather than contained on your laptop or phone. This is algorithmic filtering based on your past behavior.
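As a toy illustration of that kind of algorithmic filtering, here is a short Python sketch that ranks candidate items by how well they match a profile built from past behavior. The profile, the candidate items, and the bag-of-words scoring are all invented for this example; real recommendation systems are far more sophisticated, but the basic shape (score everything against your history, show the winners first) is the same.

```python
# Toy behavioral filter: rank candidate items by overlap with an interest profile
# built from past behavior. All of the data here is invented for illustration.
from collections import Counter

past_clicks = ["running shoes", "trail running", "espresso", "running shoes", "gps watch"]
profile = Counter(word for phrase in past_clicks for word in phrase.split())

candidates = [
    "new trail running shoes on sale",
    "celebrity gossip roundup",
    "gps watch review",
    "espresso machine deals",
]

def score(item: str) -> int:
    # Sum the profile weight of every word in the candidate item.
    return sum(profile[word] for word in item.split())

# Highest-scoring items are what this viewer's filtered world would foreground.
for item in sorted(candidates, key=score, reverse=True):
    print(f"{score(item):2d}  {item}")
```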

Likewise, your digital assistant will insert helpful information into your visual field (such as the name of the person you’re talking with that you can’t remember) that you either ask for or that it anticipates you might find useful. The Google app on many smart phones already does versions of this, like reminding you to leave for the airport so that you aren’t late for your flight.

Finally, you’ll be able to add your own filters by hand, changing people’s appearances or names in real-time. If you’ve given one of your smart phone callers an individual ring tone, changed the name of a contact to something else (“What a Babe” or “Don’t Answer Him”), or watched a teenager put a dog nose or kitty ears on top of a photo in Snapchat, then you’ve already seen primitive versions of this in action.

An unintended consequence of this visual explosion is the decline of shared reality. We already spend much of our time avoiding the world around us in favor of the tastier, easier world inside our smart phones. But even if the latest meme coming out of Instagram is the funniest thing we’ve ever seen, the majority of what surrounds us is still analog, still the flesh and blood world untouched by digital information.

That changes with HUDs.

In the near future where HUDs are common, you and I might stand side by side on the same street corner looking at the same hodgepodge of people, cars, buildings and signs — but seeing different things because we have idiosyncratic, real-time filters. Each of us will be standing on the same corner but living inside what Eli Pariser calls “filter bubbles” that have ballooned out to surround our entire worlds.

Common knowledge at this point becomes rare because a big part of common knowledge is its social component. In the words of Michael Suk-Young Chwe from his book Rational Ritual, a society’s integration is the result of coordinated activities built on a set of shared information and messages.

For a society to function, Chwe writes, “Knowledge of the message is not enough; what is also required is knowledge of others’ knowledge, knowledge of others’ knowledge of others’ knowledge, and so on — that is, ‘common knowledge.’”

It has been challenging enough in our shared analog reality to achieve things like consensus in politics or word-of-mouth awareness in business. As we each move into new, idiosyncratically personalized environments where we don’t know what other people know, we’ll need to work harder to hear other voices than our own, to connect with each other as friends, family members, customers and citizens.

That may be a tall order.

The New Skype for iPhone app SUUUUCKS

Oh the frustration!

I stupidly updated the Skype app on my iPhone 6, and now I’m trapped in a half-baked, “I want to be like Snapchat” social media hellhole where I can’t do the basic productivity things I used to be able to do on Skype effortlessly.

Ability to see which of my contacts are active online? Gone.

Ability to change my status to “invisible” so that I don’t get pinged when I don’t want to but can still see who is out there? Gone.

Ability to set my status to “Do Not Disturb” when I am on a call and want to focus on it? Gone.

Do I want to share “highlights of my day” with my Skype contacts? No!  

It’s a freakin’ productivity app.

What was Microsoft THINKING? 

Make the pain go away, Skype.

I just want this to be over.

Maybe I’ll start using WhatsApp.

Sigh.

WTF: How Quickly Will Reid Hoffman and Mark Pincus’ New Political Platform Get Hacked?

I had mixed emotions as I read yesterday’s Recode story by Tony Romm about how LinkedIn founder Reid Hoffman and Zynga founder Mark Pincus are creating a new political platform called “Win the Future” (shortened amusingly to “WTF”).

On one hand, I agree with so much of what they want to achieve: the two WTF founders “want to force Democrats to rewire their philosophical core, from their agenda to the way they choose candidates in elections — the stuff of politics, they said, that had been out of reach for most voters long before Donald Trump became president.”

That sounds great! Maybe, just maybe, the DNC will start to include White Working Class voters in its platform in a way that makes sense to those voters– and if you haven’t read Joan C. Williams’ brilliant book on this topic then stop everything and go buy it right now.

But on the other hand, the Win the Future methodology has me crying “WTF?”

Think of WTF as equal parts platform and movement. Its new website will put political topics up for a vote — and the most resonant ideas will form the basis of the organization’s orthodoxy. To start, the group will query supporters on two campaigns: Whether or not they believe engineering degrees should be free to all Americans, and if they oppose lawmakers who don’t call for Trump’s immediate impeachment.

Participants can submit their own proposals for platform planks — and if they win enough support, primarily through likes and retweets on Twitter, they’ll become part of WTF’s political DNA, too. Meanwhile, WTF plans to raise money in a bid to turn its most popular policy positions into billboard ads that will appear near airports serving Washington, D.C., ensuring that “members of Congress see it,” Pincus said.

I immediately thought of what happened during the brief life of Tay, Microsoft’s AI that lived on Twitter, when a mass of mischievous Twitter users overwhelmed Tay with racist, sexist and political tweets and corrupted the AI in less than a day.

And it’s not just AIs that can be flamed, trolled and subverted by participants with either mischief or genuine hate on their minds. Just look at the vituperative comments below any online newspaper article, especially if it’s about politics, or what happens when you post something political on your Facebook timeline and that old friend of yours — who one day moved to the other side of the political spectrum when you weren’t paying attention — regurgitates talking points from her or his favorite extreme political website while not engaging directly with whatever you were saying. (If you found yourself offended by that last sentence, dear reader, please look back and notice that I didn’t identify any particular party: this is a projection test; did you fail?)

Reasoned discourse is at a premium these days.

One way to pre-emptively fight the trolls who are a-comin’ would be to make WTF a verified online platform where users not only use their real names (a la Facebook and LinkedIn) but also get reviewed by the user base with five stars or thumbs-up/thumbs-down (a la eBay and Yelp). However, that sort of crowd-based policing also has its limits, as anybody who has ever tried to get a factual error on Wikipedia corrected will attest. An army of enthusiastic volunteers has a scale that dwarfs a small cluster of paid professionals, but that doesn’t necessarily lead to accuracy or fairness.

I’m also worried about how frictionless the WTF platform seems to be from the sparse details in the Recode piece. Voting about some issue on a website with the click of a mouse or on a smart phone with the swipe of a finger doesn’t require much commitment, whereas real political change does.

Democracy, in other words, is messy, expensive and people-driven. Algorithms can help but not replace lots of humans working together.

Hoffman and Pincus are a couple of brilliant guys, so I have hope that they’re way ahead of me on this.

Eventually, if we’re lucky, crying WTF will mean something quite different.