Why is “me too” happening now?

It’s challenging to go onto Facebook and Twitter right now and face the ever-swelling river of “me too” posts from women sharing their horrible stories of sexual harassment. It’s good that these posts are happening, good that it’s challenging. Part of what I find challenging is that I don’t know how to respond other than to bear witness.

The spark that started “me too” is Harvey Weinstein’s despicable, sexually predatory behavior — as reported by both The New York Times and The New Yorker. It’s a good thing that this has come to light and that the entertainment industry is exiling him.

And it’s shocking that he got away with it for decades.

Actually, it’s not shocking at all, which is the real problem.

What I don’t understand — what I find curious — is why “me too” is happening now.

Please don’t get me wrong: it’s terrible that — near as I can tell — every woman I know has been sexually assaulted — and it’s courageous and admirable that they are sharing these terrible experiences with the world.

What I’m wondering is why “me too” didn’t happen, say, after the Bill Cosby stories came out. I grew up in Southern California in the 1970s and 1980s, and the child of a celebrity once mentioned that Cosby was a known philanderer, but I never heard stories of him drugging women and raping them. Hannibal Buress started talking about Cosby as a rapist onstage in 2014 — and it’s fucked up that it took a man talking about it for this to become a thing — and after that women started to come forward to share their horrible Cosby experiences.

But the Cosby stories did not create “me too,” where women all over the world are sharing their stories of sexual harassment by men who aren’t famous.

Nor did the Access Hollywood, Donald Trump, “pussy-grabbing” story — the story that shockingly failed to derail his candidacy — create “me too.”

Perhaps the Cosby stories seemed too bizarre. Although countless women have been drugged or plied with alcohol and then raped, maybe the scenario of the most famous screen dad in the world slipping roofies into the drinks of young actresses didn’t resemble the experiences of other women enough to create “me too.”

In contrast, maybe Harvey Weinstein’s behavior, although profoundly weird, sounded like the experiences most other women have had with a lot of other men, making “me too” less of a leap.

Maybe the rapid succession of Cosby, Trump, Ailes and O’Reilly stories made it possible for women to create “me too” once the Weinstein story broke.

It’s good that “me too” is happening. Why is it happening now?

Shortly after the “pussy-grabbing” story, Eugene Wei posted a remarkable piece called “The Age of Distributed Truth,” in which he talks about Cosby, Justin Caldbeck, Trump and Susan Fowler’s post about the toxic bro culture at Uber. Wei then talks about Michael Suk-Young Chwe’s book “Rational Ritual” and Chwe’s notion of “Common Knowledge”:

Knowledge of the message is not enough; what is also required is knowledge of others’ knowledge, knowledge of others’ knowledge of others’ knowledge, and so on — that is, “common knowledge.”

By this logic, after the New York Times article — followed quickly by the New Yorker article — it was impossible not to know that others knew about Weinstein, which made “me too” possible.

But that still doesn’t explain why it was the Weinstein story that provoked “me too.”

I don’t have an answer, and my question is far from the most important question about “me too.”

If you have an answer, please share it.

What comes after smartphones?

With all the press and the inescapable ads for new iPhones, Samsung Galaxys, Google Pixels and other snazzy devices, it’s hard to think of the smartphone as a transitional technology.

But it is.

Here are three recent indicators:

Apple and Facebook share a hypothesis that life contains moments when lugging a smartphone is a drag: the Apple Watch commercials feature active people running with just the Watch and wireless earbuds, and Facebook is pushing standalone VR headsets that need no smartphone at all. (I’m not sure why VR is less alluring with a smartphone, unless one plans to be naked and therefore pocketless in real life while visiting virtual life.) The third indicator is Apple’s discontinuation of the iPod nano and shuffle, its last iPods that could not connect to the internet.

You might be wondering about that third indicator. How does the death of non-internet-connected iPods suggest that smartphones — the technology that replaced the iPod — are going away?

What happened to the iPod will happen to the iPhone.

Once smartphones took off after 2007, Apple cannily realized that this new wave of devices would absorb the iPod’s customer base for listening to digital music. Who wants to carry around both a smartphone and an mp3 player when the smartphone plays mp3s just fine and sounds the same?

What both iPod and iPhone owners care about is listening to music, not the device. If anybody was going to cannibalize Apple’s iPod customers, the company thought, then it should be Apple.

As I look at technology and behavior trends, one of my axioms is that verbs are more important than nouns.

People want to take pictures, and most people prefer the fastest and easiest option for doing so. Devoted photographers still use single-lens reflex cameras — either film or digital — but (as the Kodak company learned to its dismay) most people don’t want the hassle and expense of getting film developed, so instead they just whip out their phones. In our latest Surveying the Digital Future survey, for example, we found that 89 percent of Americans take pictures with their mobile phones.

It’s important to focus our analytical attention on the activity — taking pictures — rather than the device people use to do the activity, because behavior is liquid and can be poured from one container into another.

None of the actions people perform with smartphones are limited to smartphones, and that means that the smartphone won’t be with us forever.

What will this post-smartphone future look like?

Computing power is increasing, as is the ubiquity of wifi and other over-the-air internet connections. Cloud computing, where the heavy lifting of computation happens online instead of on a local computer, means that smaller and smaller devices will have greater and greater processing power.

There’s a common cliché that today’s smartphone is more powerful than the computer that landed Apollo 11 on the moon. In a few short years, a device the size of a pea will connect to processing power a thousand times greater than today’s smartphone.

So, instead of smartphones in our pockets or purses as our single, do-everything devices, we’ll have Personal Area Networks (PANs): clusters of devices worn on different parts of our bodies or hovering nearby.

Instead of the glass-and-metal rectangle of today’s smartphone, we might have the computer guts of our PANs in the shape of a silver dollar, or distributed across a series of beads worn as a necklace.

Both in the data from our Future of Transportation project and in watching the uptake for Amazon’s Alexa, Apple’s Siri and the Google Assistant, we see voice interfaces rising in popularity, so it’s likely that the main PAN input will be our voices.

For output, we will receive information from our PANs both via the voice of the digital assistant (“turn left here, Brad”) and via Augmented Reality (AR) glasses like the rumored-to-be-forthcoming Magic Leap technology. Eventually, these will evolve into contact lenses.

If we need to type, we’ll have a virtual keyboard projected into our AR vision, and we’ll type on any flat surface, the way we type on touch interfaces today. Likewise, we might wear barely-there connected gloves for input. Or we might carry around a small stylus for sketching in AR or VR, or even a fancy pen that works on real paper as well as virtual paper.

The cutting-edge health sensors in the latest Apple Watch will seem Flintstonian in comparison to the distributed sensors in clothing as well as implanted in our bodies, continually sharing health information with our CPUs.

What stands in the way of this post-smartphone future?

Two things are standing in the way of the brave new world of PANs, one technological and one cultural.

The technological obstacle is battery life. Nobody wants to plug in a dozen or more devices (CPU, glasses, stylus, shoes and socks, underwear, pants, shirt, hat…) every night at bedtime, so battery technology will need to improve and the power-consumption demands of the devices will need to become more efficient.

Electric vehicle manufacturers like Tesla are paving the way for better batteries for cars, and eventually that technology will shrink and trickle down to micro devices.

On the cultural side, if you’re wearing a screen on your face and the processing power is in a silver dollar in your pocket, then how do you take a selfie?

While some people make fun of selfie-obsessed youth (not that young people have any monopoly on either narcissism or the ongoing high-tech curation of it through selfies), as my friend Jill Walker Rettberg compellingly argued in her book Seeing Ourselves Through Technology, selfies are an important emergent genre of self-expression — one that is here to stay.

I predict that many of us will carry a selfie-specialized, lightweight, thin, credit-card-sized screen with both a powerful camera and a high-definition display. If you look at the new Google Clips camera announced last week and imagine it even smaller, more powerful and with a display, then you’ll see what I mean.

With increased battery life, some of us will also have selfie drones that take off and orbit us whenever we simply think about taking a selfie: small sensors affixed to or implanted in our skulls will notice how our brain waves change when we’re thinking about particular things.

Focus on content, not containers

The death of the smartphone is hard to imagine today. But when the iPod debuted in 2001, it was hard to imagine that it would be displaced just six years later by the arrival of the iPhone.

The moral of this story is not that we’ll all someday soon be even more wired up and connected than we are today (although we will).

Instead, the important take-away idea is that the smartphone (a noun) is a container for a series of activities (verbs), and that the container is distinct from the content.

Don’t mistake the glass for the wine.*

[Cross-posted on the Center for the Digital Future site and elsewhere.]

* For a sci-fi, near-future dystopian version of some of these interactive technologies, you might enjoy my 2011 novel, Redcrosse.

Car ownership is changing, not dying (yet)

On Monday, Business Insider published an article with the headline, “Uber and Lyft could destroy car ownership in major cities.” It’s a provocative headline, but it misrepresents the carefully worded findings of a recent study by researchers at the University of Michigan, Texas A&M and Columbia.

The study took shrewd advantage of a “natural experiment” that happened when Uber and Lyft, protesting new municipal legislation, stopped operating in Austin, Texas, in May of 2016. A few months later, the study authors surveyed a representative sample of Austin residents who had formerly used Lyft and Uber to see how their transportation habits had changed.

The most interesting findings from the study were that after Uber and Lyft drove out of town, 1) only 3% of respondents switched to public transportation (the technical term for this is “bad news”), and 2) respondents who switched back to using a personal vehicle were 23% more likely to make more trips than when they’d used Lyft and Uber, increasing congestion for everybody else.

The study authors were careful not to extrapolate beyond the Austin city limits, so the Business Insider headline is overblown in its end-of-days rhetoric. It reminds me of the “Bring Out Your Dead” scene in Monty Python and the Holy Grail, where a plague victim isn’t quite dead, but that situation is inconvenient for the person carrying him to a wagon full of corpses.

It’s not only fans of Lyft and Uber who overstate the impact of these services.

In an HBR interview, Renault-Nissan CEO Carlos Ghosn — when asked about Uber and other such services cutting into car buying — replied, “I’m not worried. By our estimates, the industry sold 85 million cars worldwide in 2016 and is moving towards 87 million this year — both industry records.”

That is a nonsensical response: it’s like being confronted with a giant asteroid hurtling towards the Earth and replying, “but it’s so sunny outside!”

What’s really changing about transportation

In our work at the Center’s Future of Transportation project, we see a two-stage revolution in transportation that is just beginning.

In the first stage, what we call “Get-a-Ride Services” (or GARS) like Uber, Lyft, Car2Go, Zipcar and others make it thinkable for Americans to give up their own cars, but the move from just thinking about it to actually giving up a car is going to take time.

It’s a good news/bad news/more good news scenario.

We asked a representative sample of all Americans if they’d consider not having their own cars: 80% of respondents said no. That’s good news for car manufacturers: only 20% of Americans would even consider letting go of the steering wheel.

The bad news is that when we zoomed in on people who use GARS either frequently or sometimes, that 20% consideration doubled to 40%, so use of GARS creates an immense flexibility in how Americans think about transportation.

Then there’s the additional good news: only 16% of Americans use GARS frequently (2%) or sometimes (14%); 17% use them once in a while; 67% never use them. (I discuss this at greater length in this column about liquid behavior.)

Car manufacturers, in other words, don’t have to worry about massive car-buying declines in 2018, but I wouldn’t be optimistic about 2020. We see a slow erosion in car buying, but more importantly we see change within the cars being purchased.

The people who choose to own cars will have more specialized needs (more on this below), and this means that manufacturers will need to customize their vehicles to a greater extent than they do today. That’s grim news for mass-scale production, where, for example, Toyota sells a few million Camrys that are all pretty much the same.

On the other hand, new production technologies — like the adjustable drivetrain from Faraday Future — will make this customization cheaper for manufacturers. The last stage of production for your next car might happen at the dealership, via a gigantic 3D printer.

The second stage of the transportation revolution is all about self-driving cars, and you can’t find a better overview of why driverless cars will change everything than in this column by Center founder Jeffrey Cole.

Self-driving cars are no longer the stuff of science fiction. This week the U.S. House of Representatives will vote on “a sweeping proposal to speed the deployment of self-driving cars without human controls and bar states from blocking autonomous vehicles, congressional aides said,” according to Reuters.

But even if this legislation magically passed from House to Senate to the president’s desk and received approval in 24 hours, it would still be years before self-driving cars are everywhere. As science fiction author William Gibson famously quipped in 1993, “The future is already here — it’s just not evenly distributed.”

Tomorrow’s car buyer

The national — even global — fascination with self-driving cars is understandable, but it’s also a distraction from important changes in transportation, the first stage of the revolution, that will hit home a lot sooner.

To see this, let’s zoom in on one chart from our forthcoming Future of Transportation report. We asked people who used to have a car but had given it up this question, “Do you miss anything about having access to a car?” Here are the top five answers:

The most interesting answer is the fourth: 31% of respondents miss being able to keep their stuff in a car. The flip side of this, of course, is that 69% of people don’t give a hoot about using a personal car like a high school locker.

This suggests that for the vast majority of people there is no specific, concrete reason to own a car. “Convenience” is vague, and most people will trade convenience for cash much of the time. Independence, the fun of driving, and not having to rent a car for a long drive are similarly vague.

But being able to keep things in a car is concrete, and from that we can draw some tentative conclusions about who will own cars in the future.

Parents of very young children — babies these days need approximately a ton of plastic crap that poor Mom and Dad have to lug around — will find it inconvenient to have to install a car seat every time they drive somewhere. Likewise, parents with more than two children won’t want to play Uber-Roulette and risk having to squeeze five-plus bodies into four seats in the inevitable Prius.

Anybody who works out of a car — gardener, plumber, contractor, surveyor, electrician, or locksmith — will need a dedicated vehicle. Sporty people who need a lot of equipment — skiers, surfers, kayakers, campers — or bikers who want a rack on their car to drive to the nice places to ride will want a dedicated vehicle.

But for the rest? The people who just need to move their bodies from place to place carrying a backpack or briefcase?

Most of those people will probably buy another car when the time comes: the big question is whether they’ll buy another car a few years after that. The answer is only “maybe” because — for the first time in a century — they no longer have to own a car to get around.

[Cross-posted on the Center site and elsewhere.]

Open Letter to Twitter CEO Jack Dorsey: Please Cancel the President’s Accounts

Dear Jack Dorsey,

Please cancel U.S. President Donald J. Trump’s Twitter accounts: both the official @POTUS one and @RealDonaldTrump.

Twitter does not have to persist in giving the president a platform where he lies in verifiable ways that responsible media outlets — real news — have detailed time and again.

Twitter does not have to enable the president to say hurtful things, things that violate Twitter’s own rules against abusive behavior.

After all, according to the page to which I linked above, “Twitter reserves the right to immediately terminate your account without further notice in the event that, in its judgment, you violate these Rules or the Terms of Service.”

Even if you and the Twitter legal team were to scrutinize both the rules and the Terms of Service and conclude that you cannot under the current rules terminate the president’s account, then that should not prove a barrier. On your website it states, “Please note that we may need to change these rules from time to time and reserve the right to do so. The most current version will always be available at twitter.com/rules.”

If you need to, please change the rules.

I’m sure you can come up with something logical and defensible.

In doing this, you’d not only be acting as a patriot, but you’d also be joining the other powerful CEOs who have stepped away from the president’s various councils and advisory groups because they find his behavior repugnant and un-American.

Please stop enabling the president’s repugnant behavior.

On Wednesday, a dozen of your peers — these same CEOs — decided to resign en masse from their advisory roles on White House councils:

Before they could make a statement announcing their decision, however, Mr. Trump spoke. He had caught wind of their planned defection and wanted to have the last word. Taking to Twitter, he wrote: “Rather than putting pressure on the businesspeople of the Manufacturing Council & Strategy & Policy Forum, I am ending both. Thank you all!” (New York Times.)

Twitter, the company you lead, allowed the president to try to prevent the CEOs from making an effective statement.

The president uses Twitter to lie, to hurt people, to shame people, to subvert the freedom of the press and in doing so he is making this country a lesser place than it should be.

While you cannot make the president an honest man or a decent president, you could make it harder for him to do his job badly.

Please, Mr. Dorsey, cancel the president’s Twitter accounts.

Sincerely,

Brad Berens (@bradberens)

The Fall and Rise of the Visual Internet

I’m pleased to announce that my role with the Center for the Digital Future at USC Annenberg has expanded, and I’m now the Chief Strategy Officer. This column is cross-posted from the Center’s website, and is the first of many regular pieces from me and my colleagues. And now, onto the column… 

Bennett and I have been friends since we were eight. Over a recent late-night dessert we compared notes about how thinly spread we each felt across work, family and life. Bennett then shared an insight from a counselor he sees: “Y’know how in Kung-Fu movies the hero stands in the center and all the villains gather into a circle around him and take turns attacking him one by one? Life isn’t like that.”

Neither is technology.

Technologies don’t take turns arriving in our lives. Instead, they’re locked in a Darwinian struggle to clutch and hold onto a niche in our lives. Sometimes it’s a head-to-head struggle, like VHS versus Betamax, where the differences are slight and one technology wins because of marketing and luck. Sometimes different trends slam into each other, and that collision creates a new thing — like the way that mobile phones ate digital cameras, notebooks, calendars, music collections, powerful microprocessors, decent battery life, email and the web to become smartphones.

A new collision is gaining velocity with the emergence of digital assistants and heads-up displays. Both technologies are changing how users interact with information, particularly visual information. As these technologies give users new ways to behave, those behavior changes will put pressure on the business models and financial health of digital media companies, particularly ad-supported companies.

Voice Interfaces Reduce Visual Interaction

Even though newer Echo devices have screens and touch interfaces, the most compelling use case for Amazon’s Alexa, Apple’s Siri in the HomePod, and the Google Assistant in Google Home is eyes-free and hands-free interaction.

For example, I often use my Echo device when I’m doing the dishes to catch up on the day’s events by asking, “Alexa, what’s in the news?” Or, if I’m about to wade deep into thought at my desk and don’t want to miss a conference call starting an hour later I’ll ask Alexa to “set a timer for 55 minutes.”

I’m a failure at voice-driven commerce because I have yet to ask Alexa to buy anything from Amazon, but I have used IFTTT (the “If This, Then That” service that connects different devices and services) to connect Alexa to my to-do list so that I can add something just by speaking, which spares me from dropping everything to grab my phone or (gasp!) a pen and paper.

Alexa’s answers are pleasantly clutter-free. If I use my desktop computer to search Amazon for the latest John Grisham novel, then along with a prominent link to Camino Island, Amazon serves up a results page with 24 distracting other things that I can buy, as well as hundreds of other links. With Alexa, I just get Camino Island. (With commodity products, unless you specify a brand, Amazon will send you its generic house brand: CPG advertisers beware!)

Right now, most queries to smartphone-based digital assistants result in a list of results that I have to look at, switching my attention from ears to eyes, but as these rudimentary artificial intelligences get better, my need to look at a screen will decline. Today, if I say, “Hey Siri, where’s a Peet’s coffee near me?” the AI will tell me the address and ask me if I want to call or get directions. If I choose “directions,” then I have to look at my phone. Before long, Siri will seamlessly transition to Apple Maps and speak turn-by-turn directions, so I won’t have to look away from the road.

The challenge the rise of voice interfaces poses for ad-supported digital companies is that those companies make their money from propinquity — from the background clutter that is near the thing I’m looking at or searching for but that isn’t the thing I’m looking at or searching for.

Google, Facebook, the New York Times, AOL (excuse me, “Oath”), Reddit, Tumblr, Bing, LinkedIn, and others make much of their money from banners, pop-up ads, search results and other things we see but often don’t consciously notice: that is, online display advertising.

Amazon’s Alexa can already read news stories aloud in a smooth, easy-to-follow voice. It won’t be long until all the digital assistants can do so, and can navigate from article to article, site to site without users having to look at anything.

We can listen to only one thing at a time, so there aren’t background ads for Siri, Alexa and their ilk. Moreover, despite decades of conditioning to accept interruptive ads in radio, it’ll be game over the moment Alexa or Siri or Google Assistant says, “I’ll answer your question, but first please listen to this message from our friends at GlaxoSmithKline.”

The most powerful ad blocker turns out to be a switch from eyes to ears as the primary sense for media interaction. As voice-interface digital assistants grow in popularity and capability, the volume of visual inventory for these businesses will erode.

This erosion follows the decline in visual inventory that already happened as users moved most of their computing time to the smaller screens of mobile devices with less visual geography and therefore less room for ads.

In a recent Recode Decode interview, marketing professor and L2 founder Scott Galloway observed, “advertising has become a tax that the poor and the technologically illiterate pay.”

Since wealthier people will have voice-activated digital assistants first, that means that the people more exposed to visual advertising will have less disposable income and will therefore be less desirable targets for many advertisers. This creates more pressure on the display-ad-based media economy.

On the other hand, remember the Kung Fu movie quip? There’s another technology making changes in the visual internet at the same time.

Smart Glasses Increase Visual Interaction

Smart glasses are, simply, computer screens that you wear over your eyes. In contrast with voice interfaces, which are already popular in phones and speakers, smart glasses haven’t become a big hit because they’re expensive, battery life is limited, and many people get nervous around other people wearing cameras on their faces all the time. (Early Google Glass enthusiasts were sometimes dubbed “glassholes.”)

Some pundits think that just because Google Glass didn’t sweep the nation it means that all smart glasses are doomed to failure. But just as Apple’s failed Newton (1993) presaged the iPhone 14 years later (2007), Google Glass is merely an early prototype for a future technology hit.

Smart glasses come in a spectrum of increasing immersion: augmented reality puts relevant information in your peripheral vision (Google Glass); mixed reality overlays information onto your location that you can manipulate (Microsoft’s HoloLens, with Pokémon Go as a phone-based version); and virtual reality absorbs you into a 360-degree environment that has little relationship to wherever your body happens to be (Facebook’s Oculus Rift, HTC Vive). The overarching category is “Heads-Up Display” or HUD.

What’s important about HUDs is that they increase the amount of digital information in the user’s visual field: not just the visual inventory for ads (like in this clip from the film “Minority Report”), but for everything.

Wherever you’re reading this column — on a computer, tablet, phone or paper printout — please stop for a moment and pay attention to your peripheral vision. I’m sitting at my desk as I write this. To my left is a window leading to the sunny outdoors. On my desk to the right are a scanner and a coffee cup. Papers lie all over the desk below the monitor, and there are post-it reminders and pictures on the wall behind the monitor. It’s a typical work environment.

If I were wearing a HUD, then all of that peripheral territory would be fair game for digital information pasted over the real world. That might be a good thing: I could have a “focus” setting on my HUD that grays out everything in my visual field that isn’t part of the window where I’m typing or the scattered paper notes about what I’m writing. If I needed to search for a piece of information on Google I might call a virtual monitor into existence next to my actual monitor and run the search without having to hide the text I’m writing. This is the good news version.

In the bad news version, ads, helpful suggestions, notifications, reminders and much more colonize the majority of my visual field: I think about those moments when my smartphone seems to explode with notifications, and then I imagine expanding that chaos to everything I can see. In some instances this might be a maddening cacophony, but others might be more subtle, exposing me to messages in the background at a high but not-irritating frequency in order to make the product more salient. (“I’m thirsty: I’ll have a Coke. Wait, I don’t drink soft drinks… how’d that happen?”) This isn’t as creepy as it sounds, like the old Vance Packard “subliminal advertising” bugaboo; it’s just advertising. Salience results from repetition.

Regardless of what fills the digital visual field, an explosion of visual inventory will be a smorgasbord of yummies for ad-supported media companies.

But there’s a twist.

Filters and the Decline of Shared Reality

Just sitting at my desk as I work is an overly simplistic use case for wearing a HUD: the real differences in all their complexity come into focus once I leave my office to wander the world.

With Heads-Up Display, every surface becomes a possible screen for interactive information. That’s the output. Since the primary input channel will still be my voice, there’s a disparity between the thin amount of input I give and the explosion of output I receive. This is the digital assistant and HUD collision I mentioned earlier.

Walking in a supermarket, the labels on different products might be different for me than for the person pushing his cart down the aisle a few yards away. The supermarket might generate individualized coupons in real time that would float over the products in question and beckon. If my HUD integrated with my digital assistant, then I might be able to say, “Hey Siri, what can I make for dinner?” and have Siri show me what’s in the fridge and the pantry so that I can buy whatever else I need.

Smart glasses won’t just stick information on top of the reality on the other side of the lenses; they will also filter that reality in different ways.

We can see how this will work by looking at the technologies we already use. For example, businesses will compete to put hyper-customized articles, videos, and ads in front of you, similar to how ads pop up on your Facebook page today. But these articles and ads will be everywhere you look, rather than contained on your laptop or phone. This is algorithmic filtering based on your past behavior.

Likewise, your digital assistant will insert helpful information into your visual field (such as the name of the person you’re talking with, which you can’t remember) that you either ask for or that it anticipates you might find useful. The Google app on many smartphones already does versions of this, like reminding you to leave for the airport so that you aren’t late for your flight.

Finally, you’ll be able to add your own filters by hand, changing people’s appearances or names in real time. If you’ve given one of your smartphone callers an individual ring tone, changed the name of a contact to something else (“What a Babe” or “Don’t Answer Him”), or watched a teenager put a dog nose or kitty ears on top of a photo in Snapchat, then you’ve already seen primitive versions of this in action.

An unintended consequence of this visual explosion is the decline of shared reality. We already spend much of our time avoiding the world around us in favor of the tastier, easier world inside our smart phones. But even if the latest meme coming out of Instagram is the funniest thing we’ve ever seen, the majority of what surrounds us is still analog, still the flesh and blood world untouched by digital information.

That changes with HUDs.

In the near future where HUDs are common, you and I might stand side by side on the same street corner looking at the same hodgepodge of people, cars, buildings and signs — but seeing different things because we have idiosyncratic, real-time filters. Each of us will be standing on the same corner but living inside what Eli Pariser calls “filter bubbles” that have ballooned out to surround our entire worlds.

Common knowledge at this point becomes rare because a big part of common knowledge is its social component. In the words of Michael Suk-Young Chwe from his book Rational Ritual, a society’s integration is the result of coordinated activities built on a set of shared information and messages.

For a society to function, Chwe writes, “Knowledge of the message is not enough; what is also required is knowledge of others’ knowledge, knowledge of others’ knowledge of others’ knowledge, and so on — that is, ‘common knowledge.’”

It has been challenging enough in our shared analog reality to achieve things like consensus in politics or word-of-mouth awareness in business. As we each move into new, idiosyncratically personalized environments where we don’t know what other people know, we’ll need to work harder to hear voices other than our own, to connect with each other as friends, family members, customers and citizens.

That may be a tall order.

David Brooks Calls for a Third Party

I thought I was as done with the election as a boy can be, but despite a Coyote-plummeting-off-the-cliff decline of interest in the news, I noticed David Brooks’s remarkable column from election day, “Let’s Not Do This Again,” in which he resignedly calls for a third party to break the D.C. deadlock.

Here’s a relevant excerpt:

There has to be a compassionate globalist party, one that embraces free trade while looking after those who suffer from trade; that embraces continued skilled immigration while listening to those hurt by immigration; that embraces widening ethnic diversity while understanding that diversity can weaken social trust.

This was sufficiently akin to my own early-October call for bringing back the Whigs that it startled me: I admire Brooks but often disagree with him.

And this is yet another moment when, at least in part, I disagree with Brooks. The party he is describing (and his whole column is worth a read) is the Democratic Party.

Where I agree with Brooks is that the current two-party system is irredeemably and irrevocably broken.

Side Note: For anybody who is still confused by how middle-class, non-coastal, non-college-educated white Americans could so unequivocally vote for a New York billionaire narcissist with no intention of making their lives better, you should click directly to Amazon (or better yet head to a local bookstore if your town still has one) and buy JD Vance’s Hillbilly Elegy: A Memoir of a Family and Culture in Crisis. It’s an amazing read — I ignored everything the day I inhaled it — and it explains the psychology of the Trump voter… even though it never mentions Trump and was written when his candidacy was still a joke to most people.

A Modest Proposal: Bring Back the Whigs, or… R.I.P. GOP

Today, in a remarkable interview on NPR’s “Morning Edition,” Florida-based, long-time Republican strategist and lobbyist Mac Stipanovich conceded that Hillary Clinton will win the presidency — and that he himself will vote for her because “I loathe Donald Trump with the passion that I usually reserve for snakes.”

The interview is worth listening to in full, but I wanted to highlight two key passages. The first is when Stipanovich argued that in the coming 2018 and 2020 election cycles…

This thing is going to shake out one way or another. Either real conservative Republicans — men and women of conscience and enough sense to come in out of the rain — will regain control of the party, or they will leave the party. In many ways I think the election process itself will take care of this. One of the things we’re going to learn here is that you can’t be crazy and win a large constituency general election.

A couple more of those lessons in statewide senate races in ’18, governors’ races in ’18 where people who embrace Trump go down to defeat because of it, and I think you’ll start seeing that Republican candidates in primaries will be more moderate and get closer to the center right so that they have some chance of winning.

What will be the cure for this is the actual outcomes on Election Day, not the BS on social media.

NPR interviewer Renee Montagne then shrewdly asks Stipanovich if the Republican party can afford to lose the sizable population of Trump supporters, to which he replies:

I don’t know that we’ll lose them. Hopefully, there’ll be some re-education, but if we have to lose them then lose them we must. What Trump stands for is wrong. It’s bad for America. It’s bad for the party. And if we have to wander in the wilderness for a decade until we can get a party that stands for the right things and can make a contribution to the future of America, then we need to wander.

I was taken by Stipanovich’s biblical reference to when Moses and the Hebrews wandered in the desert for a generation before the Hebrews entered the Promised Land — without Moses, who died just before that happy moment.

For all his pessimism about the current election, Stipanovich is an optimist, since he thinks the GOP can fix itself in 10 years rather than the 40 it took the Hebrews.

But the real power of the biblical allusion lies in an unanswered question: who is Moses in this analogy? Who in the GOP will retire, die or otherwise vamoose before the party swings back to the center, as Stipanovich predicts?

I think the answer is that there is no Moses for today’s Republican party.

Don’t get me wrong: I’m a life-long liberal Democrat, and the prospect of a severely weakened GOP does not fill me with dismay.

But I don’t recognize Trump supporters as classic Republicans, by which I mean fiscal conservatives who want to limit the size of government and who work in a productive tension with Democrats who want to expand government services to all Americans.

Those fiscal conservatives have no home in today’s GOP, where total obstructionists like Mitch McConnell and gutless weenies like Paul Ryan stand for nothing other than their own will to power.  The basket of deplorables who support Trump — and I thought that was a mild characterization by Secretary Clinton — and the fundamentalist Christians who want to destroy the separation of church and state built into the U.S. Constitution do not live in the same world as many of the classic Republicans I know and respect.

And this is different than what’s going on with the Democrats, as evidenced simply by the fact that Bernie Sanders is actively campaigning for Hillary Clinton — there is enough mutual respect and philosophical alignment between Sanders and Clinton that they can work together, which cannot be said of Trump’s competitors for the GOP nomination.

So I disagree with Stipanovich: it’s not time for the entire Republican party to wander in the wilderness for 10 to 40 years. Instead, it’s time to create a new home for fiscal conservatives (who may or may not be social liberals): a smaller but rational tent where concepts like evidence, truth, principle and patriotism can build bridges across parties rather than walls around them.

I suggest the name “The New Whig Party,” or NWP. The old Whigs were pro-business, pro-market, constitutional conservatives and against tyranny.

Perhaps a New Whig Party can help move the country forward rather than in circles.

We used to have Reagan Democrats, but I can’t imagine Trump Democrats. I can, however, see an NWP making choices difficult for centrist Democrats.

And that’s not a bad thing.

Final Note: I moderate comments on this blog. Flame wars and trolls need not apply.

SHORT: Don’t Miss REDEF Original on Truth in Advertising

From the “too long for a tweet” department:

I just finished Adam Wray’s powerful Fashion REDEFined original article “With Great Power: Seth Matlins on how Advertising can Shift Culture for the Better.”

It’s about Seth Matlins’ efforts to change how advertisements featuring too-skinny and Photoshopped models body-shame girls and women (men too, by the way).

Here’s a useful excerpt from Matlins:

This practice, these ads, cause and contribute to an array of mental health issues, emotional health issues, and physical health issues that include stress, anxiety, depression, self-harm, self-hate. At the most extreme end they contribute to eating disorders, which in turn contribute to the death of more people than any other known mental illness, at least domestically. What we know from the data is that as kids grow up, the more of these ads they see, the less they like themselves.

What we know is 53% of 13-year-old girls are unhappy with their bodies. By the time they’re 17, 53% becomes 78%, so roughly a 50% increase. When they’re adults, 91% of women will not like themselves, will not like something about their bodies. Women on average have 13 thoughts of self-hate every single day. We know that these ads, and ads like these, have a causal and contributory effect because of pleas from the American Medical Association, the National Institutes of Health, the Eating Disorder Coalition, and tens of thousands of doctors, mental and physical, educators, psychologists, health care providers, to say nothing of the governments of France, Israel, and Australia, who have urged advertisers to act on the links between what we consider deceptive and false ad practices and negative health consequences. And yet to date, by and large, and certainly at scale, nobody has.

I wish that the numbers in the second paragraph were stunning or surprising, but they aren’t. What they are, however, is infuriating.

My one critique of the article — and the reason for this short post — is that blame for this sort of body shaming doesn’t only lie with advertisers and marketers.

The entertainment industry also propagates unrealistic body images for females and males alike, and let’s not forget all the magazines and websites featuring photoshopped bodies on covers and internal pages.

It’s not just the ads.

As the father of a 15-year-old girl and an 11-year-old boy (a teen and a tween), I’m hyper-conscious of these images, but aside from trying (often vainly) to restrict their media access there’s only so much my wife and I can do.

So I celebrate Matlins’ efforts.

You don’t have to be a parent to find this article compelling, but if you ARE a parent, particularly to a teen girl, then this is required reading, folks.  It’ll be on the final.

Along these lines, high up on my “to read this summer” list is Nancy Jo Sales’ American Girls: Social Media and the Secret Lives of Teenagers, although I’ll confess that I’m a bit afraid to read it, as I think I’ll feel the way I felt after seeing Schindler’s List for the first time.

Don’t Miss Adam Grant’s new book “Originals”

Of the many compliments that I can give to Adam Grant’s remarkable new book Originals: How Non-Conformists Move the World, a rare one is that I will have to read it again soon.  Grant is an unusual social scientist in that he’s also a terrific writer, a gem-cutting anecdote selector of real-life stories that illuminate his points with a breezy, swallow-it-in-a-gulp momentum so I found myself racing through the book with a smile on my face.  I didn’t even take notes!  That doesn’t happen.  So, I’m going to read it again, slower, pencil in hand.
In the meantime my first tour through Originals haunts my waking life, an insightful shadow nodding in at unexpected moments— as a professional, a thinker and as a parent.
For example, when an academic friend told me she was trying to salvage as much as she could from her recent articles to put into a book she needs to write for tenure, I replied, “Don’t do that. You are prolific and have tons of ideas: only chase the ones that still excite you.”  That’s lifted straight from Grant, who talks about genius as a surprisingly quantitative endeavor: it’s not that creative masters have better ideas than the rest of us; instead, they have a much greater number of ideas, so the odds go up that some of those ideas are terrific.
One of Grant’s opening anecdotes explores a non-causal correlation between success in a call center and an employee’s decision to change the default web browser on her or his computer.  If the employee switched away from Internet Explorer to Firefox or Chrome (this isn’t hot-off-the-presses data, I think), then that switch demonstrated a kind of “how can I make this better?” mindset that led to higher job performance.  I’ve thought about my own default choices repeatedly since then, noticing how sometimes I work around the technology when it’s too much bother to make the technology serve me.  Looking at the pile of remote controls near the entertainment center in my living room is one example: I haven’t bothered to research, buy and program one universal remote.
Grant’s notion of strategic procrastination has also proved actionable faster than I might have predicted.  I’ve often been a pressure-cooker worker, mulling things over for a long simmering period before rolling up my sleeves.  Grant has persuaded me, though, that getting started first and then taking a mulling break at the halfway point leads to higher quality outcomes, and I’ve used this to my advantage — and the advantage of the work — on a research project that is taking up most of my time.
Originals isn’t perfect but it’s always provocative.  Another phenomenon that Grant explores is the correlation between birth order and creativity, with younger children — particularly the youngest of many children — often becoming more successful as ground-breaking creatives because they inhabit a different social niche in their families than rule-making parents and rule-abiding oldest children (of which I am one).  Grant’s birth order argument focuses so much on the nuclear family that I wonder if it’s too Western, too settled, too suburban.  My mother, for example, grew up in a close, hodgepodge, overlapping community of immigrant parents, grandparents, aunts, uncles and oodles of cousins.  Her closest peer group was her cousins, with whom she roamed her city neighborhood unsupervised.  The cousins, with whom she is still close decades later, influenced her as much if not more than her sister, eight years her senior and a more distant presence in her childhood than, say, the presence of my 14-year-old daughter in my 10-year-old son’s day-to-day in our little suburb.  Still, Grant’s birth order research has made me rethink some of my own parenting choices with my older child.
Perhaps my only real complaint with Originals is that I want some additional product that will help me to apply its powerful insights in my everyday life.  As I gobbled up the book, I wanted something like a deck of playing cards with distilled versions of the chapters that I might rifle through to help sharpen my thinking… something like the Oblique Strategies or Story Cubes.
I was a big fan of Grant’s first book, Give and Take, and Originals is just as good if not better.  It was a pleasure to read the first time, and I’m eager to dive in once again… perhaps I’ll make my own deck of helpful playing cards using my friend John Willshire’s product, the Artefact Cards.

The FOMO Myth

In my last post I wrote about how Facebook’s business need to have more people doing more things on its platform more of the time is in tension with how human satisfaction works.

In today’s post, I’m going to dig a little deeper into the satisfaction math (for those of you with a “Math, ewww” reflex, it’s just fractions, man, chill) and then use that to argue that there’s really no such thing as FOMO or “Fear of Missing Out” for most people when it comes to social media.

Here again for your convenience is the whiteboard chart sketching out my sense of how the Facebook satisfaction index works:

[whiteboard chart: satisfaction vs. number of connections]

I’m less concerned with where the hump is on the horizontal axis (50 connections, 150, 200, 500) than with the shape and trajectory: as you have more and more connections, your overall satisfaction with any single interaction moment on Facebook (or any other social networking service) approaches zero.

Most people’s response to this is to jump onto an accelerating hamster wheel, checking in more and more often and hoping for that dopamine rush of “she did THAT? cool!” but not getting it because the odds get worse and worse.

This is because most people, myself included, aren’t interesting most of the time. 

As a rule of thumb, let’s follow Theodore Sturgeon’s Law, which argues that 90% of all human effort is crap and that you spend your whole life looking for that decent 10%.*

By this logic, your Facebook friends will post something interesting about 10% of the time — though for the people you love this is a comedic exaggeration, because a lot of the time we don’t love people because they are interesting: they are interesting because we love them.

Now let’s say you have 150 Facebook friends, which is both close to the average number of Facebook connections and also happens to be psychologist Robin Dunbar’s number (the number of people with whom you can reasonably maintain relationships).

Next, let’s say you glance at Facebook once per day and see only one thing that a connection has posted with attendant comments. (BTW, I just opened Facebook full screen on my desktop computer and, to my mild surprise, I only see one complete post.)

If we combo-platter Sturgeon’s law with Dunbar’s number then the odds aren’t great that you’ll find the post interesting: 10% of 1/150, or a 1/1,500 chance.

Wait, let’s be generous, because we all find different things worthy of our attention at different moments (we are wide, we contain multitudes), and let’s say that in general you’ll find a post interesting for one of several reasons:

The poster says or shares something genuinely interesting

You haven’t connected with the poster in a while

The poster says or shares something funny

You think the poster is hot so you’ll be interested in what she or he says regardless of content due to ulterior motives

You just connected with the poster on Facebook (or Twitter, et cetera) recently, so anything she or he says will be novel and therefore interesting

So that’s now a five-fold increase in the ways that we can find a single post interesting, but the odds still aren’t great: 5/1,500, which reduces down to 1/300.

That’s just one post: if you keep on scrolling and take in 30 posts, which you can do in a minute or so, then you’re at 30/300 or a one-in-ten chance that you’ll find something interesting.  (These still ain’t great odds, by the way: a 90% chance of failure.) 

At this point, cognitive dissonance comes into play and you change your metrics rather than convict yourself of wasting time, deciding to find something not-terribly-interesting kinda-sorta interesting after all.

Remember, though, that I’m deriving this satisfaction index from a base of 150 friends: as your number of connections increases — and remember that Facebook has to grow your number of connections to grow its business — to 1,500 (close to my number, social media slut that I am) then your odds of finding something interesting in 30 posts goes down to 1/100 or a 99% failure rate.

Multiply this across Twitter, Instagram, Google+, LinkedIn, Vine, Tumblr and every other social networking service and you have a fraction with an ever-expanding denominator and a numerator that can never catch up.

Or, to translate this into less-fractional lingo, even if you spent all day, every day on social media the days aren’t getting longer but your social network is getting larger, so the likelihood of your finding social media interactions to be satisfying inexorably decreases over time.**
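For the fraction-averse, the whole back-of-envelope model above fits in a few lines of Python. This is just a replay of my own loose odds-juggling (the constant names and the function name are mine, and this is a sketch, not rigorous probability):

```python
from fractions import Fraction

STURGEON = Fraction(1, 10)  # Sturgeon's Law: ~10% of posts are interesting
REASONS = 5                 # the five ways a single post can interest you

def interesting_odds(friends, posts_scanned):
    """Back-of-envelope odds of finding something interesting:
    (interest rate) x (1 / number of friends) x (reasons) x (posts scanned)."""
    per_post = STURGEON * Fraction(1, friends) * REASONS
    return per_post * posts_scanned

# 150 friends, one glanced-at post: 1/300
assert interesting_odds(150, 1) == Fraction(1, 300)
# 150 friends, a 30-post scroll: 1/10, i.e. a 90% chance of nothing interesting
assert interesting_odds(150, 30) == Fraction(1, 10)
# 1,500 friends, the same scroll: 1/100, a 99% failure rate
assert interesting_odds(1500, 30) == Fraction(1, 100)
```

Note how the denominator grows with the friend count while nothing in the numerator grows with it: more connections mechanically means worse odds per glance.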

This is different than FOMO.  Sure, pathological fear of missing out exists: people who check the mailbox seventeen times per day, who can never put their smart phones down for fear of missing an email, who pop up at the water cooler to listen to a conversation. 

But with social media it’s not FOMO, it’s DROP: Diminishing Returns On Platform.

Most importantly, there’s a conspiracy-theory-paranoiac interpretation of how people talk about FOMO when it comes to social media: if you attribute checking Facebook too much to FOMO, then it’s a problem with the user, not with Facebook.  The user needs to develop more discipline and stop checking Facebook. 

As I discussed in my last post, this pernicious argument is similar to how Coca-Cola — which needs to have the 50% of the population that drinks soda drink more soda to have business growth — dodges the question of whether it is partly responsible for the U.S. obesity epidemic by saying that people just need to exercise more.

Facebook could easily create better filters for its users: make a Dunbar filter of 150 that the home display defaults to, and let users toss people into that filter and remove them from it easily later.  This is what Path was trying to do, but there’s no business model in it for a startup like Path.  Given Facebook’s dominance in social media, it could and should value user satisfaction more than it does.

Right now, though, the only ways to increase your satisfaction with Facebook are either to reduce your number of friends or to reduce your time on platform.

* The Third Millennial Berens Corollary to Sturgeon’s Law is that only 1/10 of 1% is truly excellent but that our signal to noise ratio makes it almost impossible to find excellence.

** This line of thinking is similar to the opportunity costs that Barry Schwartz discusses in his excellent 2004 book “The Paradox of Choice.”