Car ownership is changing, not dying (yet)

On Monday, Business Insider published an article with the headline, “Uber and Lyft could destroy car ownership in major cities.” It’s a provocative headline, but it misrepresents the carefully worded findings of a recent study by researchers at the University of Michigan, Texas A&M and Columbia.

The study took shrewd advantage of a “natural experiment” that happened when Uber and Lyft, protesting new municipal legislation, stopped operating in Austin, Texas, in May of 2016. A few months later, the study authors surveyed a representative sample of Austin residents who had formerly used Lyft and Uber to see how their transportation habits had changed.

The most interesting findings from the study were that after Uber and Lyft drove out of town, 1) only 3% of respondents switched to public transportation (the technical term for this is “bad news”), and 2) respondents who switched back to using a personal vehicle were 23% more likely to make more trips than when they’d used Lyft and Uber, increasing congestion for everybody else.

The study authors were careful not to extrapolate beyond the Austin city limits, so the Business Insider headline is overblown in its end-of-days rhetoric. It reminds me of the “Bring Out Your Dead” scene in Monty Python and the Holy Grail, where a plague victim isn’t quite dead, but that situation is inconvenient for the person carrying him to a wagon full of corpses.

It’s not only fans of Lyft and Uber who overstate the impact of these services.

In an HBR interview, Renault-Nissan CEO Carlos Ghosn — when asked about Uber and other such services cutting into car buying — replied, “I’m not worried. By our estimates, the industry sold 85 million cars worldwide in 2016 and is moving towards 87 million this year — both industry records.”

That is a nonsensical response: it’s like being confronted with a giant asteroid hurtling towards the Earth and replying, “but it’s so sunny outside!”

What’s really changing about transportation

In our work at the Center’s Future of Transportation project, we see a two-stage revolution in transportation that is just beginning.

In the first stage, what we call “Get-a-Ride Services” (or GARS) like Uber, Lyft, Car2Go, Zipcar and others make it thinkable for Americans to give up their own cars, but the move from merely thinking about it to actually giving up a car is going to take time.

It’s a good news/bad news/more good news scenario.

We asked a representative sample of all Americans if they’d consider not having their own cars: 80% of respondents said no. That’s good news for car manufacturers: only 20% of Americans will let go of the steering wheel.

The bad news is that when we zoomed in on people who use GARS either frequently or sometimes, that 20% consideration doubled to 40%, so use of GARS creates immense flexibility in how Americans think about transportation.

Then there’s the additional good news: only 16% of Americans use GARS frequently (2%) or sometimes (14%); 17% use them once in a while; 67% never use them. (I discuss this at greater length in this column about liquid behavior.)

Car manufacturers, in other words, don’t have to worry about massive car-buying declines in 2018, but I wouldn’t be optimistic about 2020. We see a slow erosion in car buying, but more importantly we see change within the cars being purchased.

The people who choose to own cars will have more specialized needs (more on this below), and this means that manufacturers will need to customize their vehicles to a greater extent than they do today. That’s grim for mass scale where, for example, Toyota sells a few million Camrys that are all pretty much the same.

On the other hand, new production technologies — like the adjustable drivetrain from Faraday Future — will make this customization cheaper for manufacturers. The last stage of production for your next car might happen at the dealership, via a gigantic 3D printer.

The second stage of the transportation revolution is all about self-driving cars, and you can’t find a better overview of why driverless cars will change everything than in this column by Center founder Jeffrey Cole.

Self-driving cars are no longer the stuff of science fiction. This week the U.S. House of Representatives will vote on “a sweeping proposal to speed the deployment of self-driving cars without human controls and bar states from blocking autonomous vehicles, congressional aides said,” according to Reuters.

But even if this legislation magically passed from House to Senate to the president’s desk and received approval in 24 hours, it would still be years before self-driving cars are everywhere. As science fiction author William Gibson famously quipped in 1993, “the future is already here: it’s just not evenly distributed.”

Tomorrow’s car buyer

The national — even global — fascination with self-driving cars is understandable, but it’s also a distraction from important changes in transportation, the first stage of the revolution, that will hit home a lot sooner.

To see this, let’s zoom in on one chart from our forthcoming Future of Transportation report. We asked people who used to have a car but had given it up this question, “Do you miss anything about having access to a car?” Here are the top five answers:

The most interesting answer is the fourth: 31% of respondents miss being able to keep their stuff in a car. The flip side of this, of course, is that 69% of people don’t give a hoot about using a personal car like a high school locker.

This suggests that for the vast majority of people there is no specific, concrete reason to own a car. “Convenience” is vague, and most people will trade convenience for cash much of the time. Independence, the fun of driving, and not having to rent a car for a long trip are similarly vague.

But being able to keep things in a car is concrete, and from that we can draw some tentative conclusions about who will own cars in the future.

Parents of very young children — babies these days need approximately a ton of plastic crap that poor Mom and Dad have to lug around — will find it inconvenient to have to install a car seat every time they drive somewhere. Likewise, parents with more than two children won’t want to play Uber-Roulette and risk having to squeeze five-plus bodies into four seats in the inevitable Prius.

Anybody who works out of a car — gardener, plumber, contractor, surveyor, electrician, or locksmith — will need a dedicated vehicle. Sporty people who need a lot of equipment — skiers, surfers, kayakers, campers — or bikers who want a rack on their car to drive to the nice places to ride will want a dedicated vehicle.

But for the rest? The people who just need to move their bodies from place to place carrying a backpack or briefcase?

Most of those people will probably buy another car when the time comes; the big question is whether they’ll buy another car a few years after that. The answer is only “maybe” because — for the first time in a century — they no longer have to own a car to get around.

[Cross-posted on the Center site and elsewhere.]

Open Letter to Twitter CEO Jack Dorsey: Please Cancel the President’s Accounts

Dear Jack Dorsey,

Please cancel U.S. President Donald J. Trump’s Twitter accounts: both the official @POTUS one and @RealDonaldTrump.

Twitter does not have to persist in giving the president a platform where he lies in verifiable ways that responsible media outlets — real news — have detailed time and again.

Twitter does not have to enable the president to say hurtful things, things that violate Twitter’s own rules against abusive behavior.

After all, according to the page to which I linked above, “Twitter reserves the right to immediately terminate your account without further notice in the event that, in its judgment, you violate these Rules or the Terms of Service.”

Even if you and the Twitter legal team were to scrutinize both the rules and the Terms of Service and conclude that you cannot under the current rules terminate the president’s account, then that should not prove a barrier. On your website it states, “Please note that we may need to change these rules from time to time and reserve the right to do so. The most current version will always be available at twitter.com/rules.”

If you need to, please change the rules.

I’m sure you can come up with something logical and defensible.

In doing this, you’d not only be acting as a patriot, but you’d also be joining the other powerful CEOs who have stepped away from the president’s various councils and advisory groups because they find his behavior repugnant and un-American.

Please stop enabling the president’s repugnant behavior.

On Wednesday, a dozen of your peers — these same CEOs — decided to resign en masse from their advisory roles on White House councils:

Before they could make a statement announcing their decision, however, Mr. Trump spoke. He had caught wind of their planned defection and wanted to have the last word. Taking to Twitter, he wrote: “Rather than putting pressure on the businesspeople of the Manufacturing Council & Strategy & Policy Forum, I am ending both. Thank you all!” (New York Times.)

Twitter, the company you lead, allowed the president to try to prevent the CEOs from making an effective statement.

The president uses Twitter to lie, to hurt people, to shame people, to subvert the freedom of the press and in doing so he is making this country a lesser place than it should be.

While you cannot make the president an honest man or a decent president, you could make it harder for him to do his job badly.

Please, Mr. Dorsey, cancel the president’s Twitter accounts.

Sincerely,

Brad Berens (@bradberens)

The Fall and Rise of the Visual Internet

I’m pleased to announce that my role with the Center for the Digital Future at USC Annenberg has expanded, and I’m now the Chief Strategy Officer. This column is cross-posted from the Center’s website, and is the first of many regular pieces from me and my colleagues. And now, on to the column…

Bennett and I have been friends since we were eight. Over a recent late-night dessert we compared notes about how thinly spread we each felt across work, family and life. Bennett then shared an insight from a counselor he sees: “Y’know how in Kung-Fu movies the hero stands in the center and all the villains gather into a circle around him and take turns attacking him one by one? Life isn’t like that.”

Neither is technology.

Technologies don’t take turns arriving in our lives. Instead, they’re locked in a Darwinian struggle to clutch and hold onto a niche in our lives. Sometimes it’s a head-to-head struggle, like VHS versus Betamax, where the differences are slight and one technology wins because of marketing and luck. Sometimes different trends slam into each other and that collision creates a new thing — like the way that mobile phones ate digital cameras, email, notebooks, calendars, music collections and the web, then added powerful microprocessors and decent battery life, to become smart phones.

A new collision is gaining velocity with the emergence of digital assistants and heads-up display. Both new technologies are changing how users interact with information, particularly visual information. As these technologies give users new ways to behave, those behavior changes will pressurize the business models and financial health of digital media companies, particularly ad-supported companies.

Voice-Interfaces Reduce Visual Interaction

Even though newer Echo devices have screens and touch interfaces, the most compelling use case for Amazon’s Alexa, Apple’s Siri in the HomePod, and the Google Assistant in Google Home is eyes-free and hands-free interaction.

For example, I often use my Echo device when I’m doing the dishes to catch up on the day’s events by asking, “Alexa, what’s in the news?” Or, if I’m about to wade deep into thought at my desk and don’t want to miss a conference call starting an hour later I’ll ask Alexa to “set a timer for 55 minutes.”

I’m a failure at voice-driven commerce because I have yet to ask Alexa to buy anything from Amazon, but I have used IFTTT (the “If This, Then That” service that connects different devices and services) to connect Alexa to my to-do list so that I can add an item just by speaking, which spares me from dropping everything to grab my phone or (gasp!) a pen and paper.
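My own hookup runs through IFTTT’s applet editor — no code required — but for the technically curious, IFTTT also exposes a simple Webhooks trigger URL that applets can listen for. Here’s a minimal Python sketch of firing such a trigger; the event name “add_todo” and the key are hypothetical placeholders, and this is an illustrative variant rather than my exact Alexa setup:

```python
# Sketch: firing an IFTTT Webhooks trigger that an applet could use to
# append an item to a to-do list. The event name and key are placeholders;
# a real key comes from your IFTTT Webhooks service settings.
import json
import urllib.request


def build_trigger_url(event: str, key: str) -> str:
    """IFTTT Webhooks trigger endpoint for a named event."""
    return f"https://maker.ifttt.com/trigger/{event}/with/key/{key}"


def add_todo(item: str, key: str, event: str = "add_todo") -> None:
    """POST the to-do item as value1, which the applet can read."""
    payload = json.dumps({"value1": item}).encode("utf-8")
    req = urllib.request.Request(
        build_trigger_url(event, key),
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)  # fires the applet


# Example (requires a real key):
# add_todo("buy milk", key="YOUR_IFTTT_KEY")
```

The point is how thin the input channel is: one spoken sentence (or one small HTTP POST) on the way in, with all the heavy lifting happening on the service side.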

Alexa’s answers are pleasantly clutter-free. If I use my desktop computer to search Amazon for the latest John Grisham novel, then along with a prominent link to Camino Island, Amazon serves up a results page with 24 distracting other things that I can buy, as well as hundreds of other links. With Alexa, I just get Camino Island. (With commodity products, unless you specify a brand, Amazon will send you its generic house brand: CPG advertisers beware!)

Right now, most queries to smartphone-based digital assistants result in a list of results that I have to look at, switching my attention from ears to eyes, but as these rudimentary artificial intelligences get better my need to look at a screen will decline. Today, if I say, “Hey Siri, where’s a Peet’s coffee near me?” the AI will tell me the address and ask me if I want to call or get directions. If I choose “directions,” then I have to look at my phone. In a short amount of time, Siri will seamlessly transition to Apple Maps and speak turn-by-turn directions, so I won’t have to look away from the road.

The challenge the rise of voice interfaces poses for ad-supported digital companies is that those companies make their money from propinquity — from the background clutter that is near the thing I’m looking at or searching for but that isn’t the thing I’m looking at or searching for.

Google, Facebook, the New York Times, AOL (excuse me, “Oath”), Reddit, Tumblr, Bing, LinkedIn, and others make much of their money from banners, pop-up ads, search results and other things we see but often don’t consciously notice: that is, online display advertising.

Amazon’s Alexa can already read news stories aloud in a smooth, easy-to-follow voice. It won’t be long until all the digital assistants can do so, and can navigate from article to article, site to site without users having to look at anything.

We can listen to only one thing at a time, so there aren’t background ads for Siri, Alexa and their ilk. Moreover, despite decades of conditioning to accept interruptive ads on radio, it’ll be game over the moment Alexa or Siri or Google Assistant says, “I’ll answer your question, but first please listen to this message from our friends at GlaxoSmithKline.”

The most powerful ad blocker turns out to be a switch from eyes to ears as the primary sense for media interaction. As voice-interface digital assistants grow in popularity and capability, the volume of visual inventory for these businesses will erode.

This erosion follows the decline in visual inventory that already happened as users moved most of their computing time to the smaller screens of mobile devices with less visual geography and therefore less room for ads.

In a recent Recode Decode interview, marketing professor and L2 founder Scott Galloway observed, “advertising has become a tax that the poor and the technologically illiterate pay.”

Since wealthier people will have voice-activated digital assistants first, that means that the people more exposed to visual advertising will have less disposable income and will therefore be less desirable targets for many advertisers. This creates more pressure on the display-ad-based media economy.

On the other hand, remember the Kung Fu movie quip? There’s another technology making changes in the visual internet at the same time.

Smart Glasses Increase Visual Interaction

Smart glasses are, simply, computer screens that you wear over your eyes. In contrast with voice-interfaces that are already popular in phones and with speakers, smart glasses haven’t become a big hit because they’re expensive, battery life is limited, and many people get nervous around other people wearing cameras on their faces all the time. (Early Google Glass enthusiasts were sometimes dubbed “glassholes.”)

Some pundits think that because Google Glass didn’t sweep the nation, all smart glasses are doomed to failure. But just as Apple’s failed Newton (1993) presaged the iPhone 14 years later (2007), Google Glass is merely an early prototype for a future technology hit.

Smart glasses come on a spectrum of increasing immersion: augmented reality puts relevant information in your peripheral vision (Google Glass), mixed reality overlays information onto your location that you can manipulate (Microsoft’s HoloLens, with Pokémon Go as a phone-based version), and virtual reality absorbs you into a 360-degree environment that has little relationship to wherever your body happens to be (Facebook’s Oculus Rift, HTC Vive). The overarching category is “Heads-Up Display” or HUD.

What’s important about HUDs is that they increase the amount of digital information in the user’s visual field: not just the visual inventory for ads (like in this clip from the film “Minority Report”), but for everything.

Wherever you’re reading this column — on a computer, tablet, phone or paper printout — please stop for a moment and pay attention to your peripheral vision. I’m sitting at my desk as I write this. To my left is a window leading to the sunny outdoors. On my desk to the right are a scanner and a coffee cup. Papers lie all over the desk below the monitor, and there are post-it reminders and pictures on the wall behind the monitor. It’s a typical work environment.

If I were wearing a HUD, then all of that peripheral territory would be fair game for digital information pasted over the real world. That might be a good thing: I could have a “focus” setting on my HUD that grays out everything in my visual field that isn’t part of the window where I’m typing or the scattered paper notes about what I’m writing. If I needed to search for a piece of information on Google I might call a virtual monitor into existence next to my actual monitor and run the search without having to hide the text I’m writing. This is the good news version.

In the bad news version, ads, helpful suggestions, notifications, reminders and much more colonize the majority of my visual field: I think about those moments when my smart phone seems to explode with notifications, and then I imagine expanding that chaos to everything I can see. In some instances this might be a maddening cacophony, but others might be more subtle, exposing me to messages in the background at a high but not irritating frequency in order to make the product more salient. (“I’m thirsty: I’ll have a Coke. Wait, I don’t drink soft drinks… how’d that happen?”) This isn’t as creepy as it sounds, like the old Vance Packard “subliminal advertising” bugaboo; it’s just advertising. Salience results from repetition.

Regardless of what fills the digital visual field, an explosion of visual inventory will be a smorgasbord of yummies for ad-supported media companies.

But there’s a twist.

Filters and the Decline of Shared Reality

Just sitting at my desk as I work is an overly simplistic use case for wearing a HUD: the real differences in all their complexity come into focus once I leave my office to wander the world.

With Heads-Up Display, every surface becomes a possible screen for interactive information. That’s the output. Since the primary input channel will still be my voice, there’s a disparity between the thin amount of input I give and the explosion of output I receive. This is the digital assistant and HUD collision I mentioned earlier.

Walking in a supermarket, the labels on different products might be different for me than for the person pushing his cart down the aisle a few yards away. The supermarket might generate individualized coupons in real time that would float over the products in question and beckon. If my HUD integrated with my digital assistant, then I might be able to say, “Hey Siri, what can I make for dinner?” and have Siri show me what’s in the fridge and the pantry so that I can buy whatever else I need.

Smart glasses won’t just stick information on top of the reality on the other side of the lenses, they will also filter that reality in different ways.

We can see how this will work by looking at the technologies we already use. For example, businesses will compete to put hyper-customized articles, videos, and ads in front of you, similar to how ads pop up on your Facebook page today. But these articles and ads will be everywhere you look, rather than contained on your laptop or phone. This is algorithmic filtering based on your past behavior.

Likewise, your digital assistant will insert helpful information into your visual field (such as the name of the person you’re talking with that you can’t remember) that you either ask for or that it anticipates you might find useful. The Google app on many smart phones already does versions of this, like reminding you to leave for the airport so that you aren’t late for your flight.

Finally, you’ll be able to add your own filters by hand, changing people’s appearances or names in real time. If you’ve given one of your smart phone callers an individual ring tone, changed the name of a contact to something else (“What a Babe” or “Don’t Answer Him”), or watched a teenager put a dog nose or kitty ears on top of a photo in Snapchat, then you’ve already seen primitive versions of this in action.

An unintended consequence of this visual explosion is the decline of shared reality. We already spend much of our time avoiding the world around us in favor of the tastier, easier world inside our smart phones. But even if the latest meme coming out of Instagram is the funniest thing we’ve ever seen, the majority of what surrounds us is still analog, still the flesh and blood world untouched by digital information.

That changes with HUDs.

In the near future where HUDs are common, you and I might stand side by side on the same street corner looking at the same hodgepodge of people, cars, buildings and signs — but seeing different things because we have idiosyncratic, real-time filters. Each of us will be standing on the same corner but living inside what Eli Pariser calls “filter bubbles” that have ballooned out to surround our entire worlds.

Common knowledge at this point becomes rare because a big part of common knowledge is its social component. In the words of Michael Suk-Young Chwe from his book Rational Ritual, a society’s integration is the result of coordinated activities built on a set of shared information and messages.

For a society to function, Chwe writes, “Knowledge of the message is not enough; what is also required is knowledge of others’ knowledge, knowledge of others’ knowledge of others’ knowledge, and so on — that is, ‘common knowledge.’”

It has been challenging enough in our shared analog reality to achieve things like consensus in politics or word-of-mouth awareness in business. As we each move into new, idiosyncratically personalized environments where we don’t know what other people know, we’ll need to work harder to hear voices other than our own, to connect with each other as friends, family members, customers and citizens.

That may be a tall order.

David Brooks Calls for a Third Party

I thought I was as done with the election as a boy can be, but despite a Coyote-plummeting-off-the-cliff decline of interest in the news, I noticed David Brooks’s remarkable Election Day column, “Let’s Not Do This Again,” in which he resignedly calls for a third party to break the D.C. deadlock.

Here’s a relevant excerpt:

There has to be a compassionate globalist party, one that embraces free trade while looking after those who suffer from trade; that embraces continued skilled immigration while listening to those hurt by immigration; that embraces widening ethnic diversity while understanding that diversity can weaken social trust.

This was sufficiently akin to my own early-October call for bringing back the Whigs that it startled me: I admire Brooks but often disagree with him.

And this is yet another moment when, at least in part, I disagree with Brooks. The party he is describing (and his whole column is worth a read) is the Democratic Party.

Where I agree with Brooks is that the current two-party system is irredeemably and irrevocably broken.

Side Note: If you’re still confused by how middle-class, non-coastal, non-college-educated white Americans could so unequivocally vote for a New York billionaire narcissist with no intention of making their lives better, then you should click directly to Amazon (or, better yet, head to a local bookstore if your town still has one) and buy J.D. Vance’s Hillbilly Elegy: A Memoir of a Family and Culture in Crisis. It’s an amazing read — I ignored everything the day I inhaled it — and it explains the psychology of the Trump voter… even though it never mentions Trump and was written when his candidacy was still a joke to most people.

A Modest Proposal: Bring Back the Whigs, or… R.I.P. GOP

Today, in a remarkable interview on NPR’s “Morning Edition,” Florida-based, long-time Republican strategist and lobbyist Mac Stipanovich conceded that Hillary Clinton will win the presidency — and that he himself will vote for her because “I loathe Donald Trump with the passion that I usually reserve for snakes.”

The interview is worth listening to in full, but I wanted to highlight two key passages. The first is when Stipanovich argued that in the coming 2018 and 2020 election cycles…

This thing is going to shake out one way or another. Either real conservative Republicans — men and women of conscience and enough sense to come in out of the rain — will regain control of the party, or they will leave the party. In many ways I think the election process itself will take care of this. One of the things we’re going to learn here is that you can’t be crazy and win a large constituency general election.

A couple more of those lessons in statewide Senate races in ’18 and governors’ races in ’18, where people who embrace Trump go down to defeat because of it, and I think you’ll start seeing that Republican candidates in primaries will be more moderate and get closer to the center right so that they have some chance of winning.

What will be the cure for this is the actual outcomes on Election Day, not the BS on social media.

NPR interviewer Renee Montagne then shrewdly asks Stipanovich if the Republican party can afford to lose the sizable population of Trump supporters, to which he replies:

I don’t know that we’ll lose them. Hopefully, there’ll be some re-education, but if we have to lose them then lose them we must. What Trump stands for is wrong. It’s bad for America. It’s bad for the party. And if we have to wander in the wilderness for a decade until we can get a party that stands for the right things and can make a contribution to the future of America, then we need to wander.

I was taken by Stipanovich’s biblical reference to when Moses and the Hebrews wandered in the desert for a generation before the Hebrews entered the Promised Land — without Moses, who died just before that happy moment.

For all his pessimism about the current election, Stipanovich is an optimist, since he thinks the GOP can fix itself in 10 years rather than the 40 it took the Hebrews.

But the real power of the biblical allusion lies in an unanswered question: who is Moses in this analogy? Who in the GOP will retire, die or otherwise vamoose before the party swings back to the center, as Stipanovich predicts?

I think the answer is that there is no Moses for today’s Republican party.

Don’t get me wrong: I’m a life-long liberal Democrat, and the prospect of a severely weakened GOP does not fill me with dismay.

But I don’t recognize Trump supporters as classic Republicans: that is, fiscal conservatives who want to limit the size of government and who work in a productive tension with Democrats who want to expand government services to all Americans.

Those fiscal conservatives have no home in today’s GOP, where total obstructionists like Mitch McConnell and gutless weenies like Paul Ryan stand for nothing other than their own will to power. The basket of deplorables who support Trump — and I thought that was a mild characterization by Secretary Clinton — and the fundamentalist Christians who want to destroy the separation of church and state built into the U.S. Constitution do not live in the same world as many of the classic Republicans I know and respect.

And this is different from what’s going on with the Democrats, as evidenced simply by the fact that Bernie Sanders is actively campaigning for Hillary Clinton: there is enough mutual respect and philosophical alignment between Sanders and Clinton that they can work together, which cannot be said of Trump’s competitors for the GOP nomination.

So I disagree with Stipanovich: it’s not time for the entire Republican party to wander in the wilderness for 10 to 40 years. Instead, it’s time for fiscal conservatives (who may or may not be social liberals) to assemble under a smaller but rational tent, where concepts like evidence, truth, principle and patriotism can build bridges across parties rather than walls around them.

I suggest the name “The New Whig Party,” or NWP. The old Whigs were pro-business, pro-market, constitutional conservatives and against tyranny.

Perhaps a New Whig Party can help move the country forward rather than in circles.

We used to have Reagan Democrats, but I can’t imagine Trump Democrats. I can, however, see an NWP making choices difficult for centrist Democrats.

And that’s not a bad thing.

Final Note: I moderate comments on this blog. Flame wars and trolls need not apply.

SHORT: Don’t Miss REDEF Original on Truth in Advertising

From the “too long for a tweet” department:

I just finished Adam Wray’s powerful Fashion REDEFined original article “With Great Power: Seth Matlins on how Advertising can Shift Culture for the Better.”

It’s about Seth Matlins’ efforts to change how advertisements featuring too-skinny and Photoshopped models body-shame girls and women (men too, by the way).

Here’s a useful excerpt from Matlins:

This practice, these ads, cause and contribute to an array of mental health issues, emotional health issues, and physical health issues that include stress, anxiety, depression, self-harm, self-hate. At the most extreme end they contribute to eating disorders, which in turn contribute to the death of more people than any other known mental illness, at least domestically. What we know from the data is that as kids grow up, the more of these ads they see, the less they like themselves.

What we know is 53% of 13-year-old girls are unhappy with their bodies. By the time they’re 17, 53% becomes 78%, so roughly a 50% increase. When they’re adults, 91% of women will not like themselves, will not like something about their bodies. Women on average have 13 thoughts of self-hate every single day. We know that these ads, and ads like these, have a causal and contributory effect because of pleas from the American Medical Association, the National Institute of Health, the Eating Disorder Coalition, and tens of thousands of doctors, mental and physical, educators, psychologists, health care providers, to say nothing of the governments of France, Israel, and Australia, who have urged advertisers to act on the links between what we consider deceptive and false ad practices and negative health consequences. And yet to date, by and large, and certainly at scale, nobody has.

I wish that the numbers in the second paragraph were stunning or surprising, but they aren’t. What they are, however, is infuriating.

My one critique of the article — and the reason for this short post — is that blame for this sort of body shaming doesn’t only lie with advertisers and marketers.

The entertainment industry also propagates unrealistic body images for females and males alike, and let’s not forget all the magazines and websites featuring photoshopped bodies on covers and internal pages.

It’s not just the ads.

As the father of a 15-year-old girl and an 11-year-old boy (a teen and a tween), I’m hyper-conscious of these images, but aside from trying (often vainly) to restrict their media access there’s only so much my wife and I can do.

So I celebrate Matlins’ efforts.

You don’t have to be a parent to find this article compelling, but if you ARE a parent, particularly to a teen girl, then this is required reading, folks.  It’ll be on the final.

Along these lines, high up on my “to read this summer” list is Nancy Jo Sales’ American Girls: Social Media and the Secret Lives of Teenagers, although I’ll confess that I’m a bit afraid to read it, as I think I’ll feel the way I felt after seeing Schindler’s List for the first time.

Don’t Miss Adam Grant’s new book “Originals”

Of the many compliments that I can give to Adam Grant’s remarkable new book Originals: How Non-Conformists Move the World, a rare one is that I will have to read it again soon.  Grant is an unusual social scientist in that he’s also a terrific writer, a gem-cutting anecdote selector of real-life stories that illuminate his points with a breezy, swallow-it-in-a-gulp momentum so I found myself racing through the book with a smile on my face.  I didn’t even take notes!  That doesn’t happen.  So, I’m going to read it again, slower, pencil in hand.
In the meantime my first tour through Originals haunts my waking life, an insightful shadow nodding in at unexpected moments— as a professional, a thinker and as a parent.
For example, when an academic friend told me she was trying to salvage as much as she could from her recent articles to put into a book she needs to write for tenure, I replied, “Don’t do that. You are prolific and have tons of ideas: only chase the ones that still excite you.”  That’s lifted straight from Grant, who talks about genius as a surprisingly quantitative endeavor: it’s not that creative masters have better ideas than the rest of us, it’s that they have a much greater number of ideas, so the odds go up that some of those ideas are terrific.
One of Grant’s opening anecdotes explores a non-causal correlation between success in a call center and an employee’s decision to change the default web browser on her or his computer.  If the employee switched away from Internet Explorer to Firefox or Chrome (this isn’t hot-off-the-presses data, I think), then that switch demonstrated a kind of “how can I make this better?” mindset that led to higher job performance.  I’ve thought about my own default choices repeatedly since then, noticing how sometimes I work around the technology when it’s too much bother to make the technology serve me.  Looking at the pile of remote controls near the entertainment center in my living room is one example: I haven’t bothered to research, buy and program one universal remote.
Grant’s notion of strategic procrastination has also proved actionable faster than I might have predicted.  I’ve often been a pressure-cooker worker, mulling things over for a long simmering period before rolling up my sleeves.  Grant has persuaded me, though, that getting started first and then taking a mulling break at the halfway point leads to higher quality outcomes, and I’ve used this to my advantage — and the advantage of the work — on a research project that is taking up most of my time.
Originals isn’t perfect but it’s always provocative.  Another phenomenon that Grant explores is the correlation between birth order and creativity, with younger children — particularly the youngest of many children — often becoming more successful as ground-breaking creatives because they inhabit a different social niche in their families than rule-making parents and rule-abiding oldest children (of which I am one).  Grant’s birth order argument focuses so much on the nuclear family that I wonder if it’s too Western, too settled, too suburban.  My mother, for example, grew up in a close, hodgepodge, overlapping community of immigrant parents, grandparents, aunts, uncles and oodles of cousins.  Her closest peer group was her cousins, with whom she roamed her city neighborhood unsupervised.  The cousins, with whom she is still close decades later, influenced her as much if not more than her sister, eight years her senior and a more distant presence in her childhood than, say, the presence of my 14-year-old daughter in my 10-year-old son’s day-to-day in our little suburb.  Still, Grant’s birth order research has made me rethink some of my own parenting choices with my older child.
Perhaps my only real complaint with Originals is that I want some additional product that will help me to apply its powerful insights in my everyday life.  As I gobbled up the book, I wanted something like a deck of playing cards with distilled versions of the chapters that I might rifle through to help sharpen my thinking… something like the Oblique Strategies or Story Cubes.
I was a big fan of Grant’s first book, Give and Take, and Originals is just as good if not better.  It was a pleasure to read the first time, and I’m eager to dive in once again… perhaps I’ll make my own deck of helpful playing cards using my friend John Willshire’s product, the Artefact Cards.

The FOMO Myth

In my last post I wrote about how Facebook’s business need to have more people doing more things on its platform more of the time is in tension with how human satisfaction works.

In today’s post, I’m going to dig a little deeper into the satisfaction math (for those of you with a “Math, ewww” reflex, it’s just fractions, man, chill) and then use that to argue that there’s really no such thing as FOMO or “Fear of Missing Out” for most people when it comes to social media.

Here again for your convenience is the whiteboard chart sketching out my sense of how the Facebook satisfaction index works:

chart

I’m less concerned with where the hump falls on the horizontal axis (50 connections, 150, 200, 500) than with the shape and trajectory of the curve: as you accumulate more and more connections, your overall satisfaction with any single interaction moment on Facebook (or any other social networking service) approaches zero.

Most people’s response to this is to jump onto an accelerating hamster wheel, checking in more and more often, hoping for that dopamine rush of “she did THAT? cool!” but not getting it because the odds get worse and worse.

This is because most people, myself included, aren’t interesting most of the time. 

As a rule of thumb, let’s follow Theodore Sturgeon’s Law which argues that 90% of all human effort is crap, and you spend your whole life looking for that decent 10%.*

By this logic, your Facebook friends will post something interesting about 10% of the time— though for some people you love, even that figure is a comedic exaggeration, because a lot of the time we don’t love people because they are interesting: they are interesting because we love them.

Now let’s say you have 150 Facebook friends, which is both close to the average number of Facebook connections and also happens to be psychologist Robin Dunbar’s number (the number of people with whom you can reasonably maintain relationships).

Next, let’s say you glance at Facebook once per day and see only one thing that a connection has posted with attendant comments. (BTW, I just opened Facebook full screen on my desktop computer and, to my mild surprise, I only see one complete post.)

If we combo-platter Sturgeon’s law with Dunbar’s number then the odds aren’t great that you’ll find the post interesting: 10% of 1/150, or a 1/1,500 chance.

Wait, let’s be generous because we all find different things worthy of our attention at different moments (we are wide, we contain multitudes), and let’s say that in general you’ll find a post interesting for one of several reasons:

The poster says or shares something genuinely interesting

You haven’t connected with the poster in a while

The poster says or shares something funny

You think the poster is hot so you’ll be interested in what she or he says regardless of content due to ulterior motives

You just connected with the poster on Facebook (or Twitter, et cetera) recently, so anything she or he says will be novel and therefore interesting

So that’s now a five-fold increase in the ways that we can find a single post interesting, but the odds still aren’t great: 5/1500 which reduces down to 1/300. 

That’s just one post: if you keep on scrolling and take in 30 posts, which you can do in a minute or so, then you’re at 30/300 or a one-in-ten chance that you’ll find something interesting.  (These still ain’t great odds, by the way: a 90% chance of failure.) 

At this point, cognitive dissonance comes into play and you change your metrics rather than convict yourself of wasting time, deciding to find something not-terribly-interesting kinda-sorta interesting after all.

Remember, though, that I’m deriving this satisfaction index from a base of 150 friends: as your number of connections increases — and remember that Facebook has to grow your number of connections to grow its business — to 1,500 (close to my number, social media slut that I am) then your odds of finding something interesting in 30 posts goes down to 1/100 or a 99% failure rate.

Multiply this across Twitter, Instagram, Google+, LinkedIn, Vine, Tumblr and every other social networking service and you have a fraction with an ever-expanding denominator and a numerator that can never catch up.

Or, to translate this into less-fractional lingo, even if you spent all day, every day on social media the days aren’t getting longer but your social network is getting larger, so the likelihood of your finding social media interactions to be satisfying inexorably decreases over time.**
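For readers who want to poke at the arithmetic themselves, here is a minimal sketch of the post’s back-of-envelope satisfaction math. It uses only numbers stated above — Sturgeon’s 10%, the five reasons a post might appeal to you, and a 30-post scroll — and it follows the post’s simple multiplicative logic rather than a formal probability model:

```python
from fractions import Fraction

def odds_single_post(friends, reasons=5, sturgeon=Fraction(1, 10)):
    """Chance that one glanced-at post is interesting, per the post's logic:
    10% (Sturgeon) of 1/friends, multiplied by the number of reasons
    a post might grab you."""
    return sturgeon * Fraction(1, friends) * reasons

def odds_in_scroll(friends, posts_seen=30):
    """Chance of finding something interesting in a scroll of posts,
    using the post's simple additive arithmetic."""
    return odds_single_post(friends) * posts_seen

print(odds_in_scroll(150))   # the one-in-ten chance at 150 friends
print(odds_in_scroll(1500))  # the one-in-a-hundred chance at 1,500 friends
```

Running it reproduces the numbers above: 1/10 at 150 friends, 1/100 at 1,500 — the odds shrink in direct proportion as the friend count grows.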

This is different from FOMO.  Sure, pathological fear of missing out exists: people who check the mailbox seventeen times per day, who can never put their smart phones down for fear of missing an email, who pop up at the water cooler to listen to a conversation.

But with social media it’s not FOMO, it’s DROP: Diminishing Returns On Platform.

Most importantly, there’s a conspiracy-theory-paranoiac interpretation of how people talk about FOMO when it comes to social media: if you attribute checking Facebook too much to FOMO, then it’s a problem with the user, not with Facebook.  The user needs to develop more discipline and stop checking Facebook. 

As I discussed in my last post, this pernicious argument is similar to how Coca-Cola — which needs to have the 50% of the population that drinks soda drink more soda to have business growth — dodges the question of whether it is partly responsible for the U.S. obesity epidemic by saying that people just need to exercise more.

Facebook could create better filters for its users with ease: make a Dunbar filter of 150 that the home display defaults to, and let users toss people into that filter and remove them easily later.  This is what Path was trying to do, but there’s no business model in it for a startup like Path.  With Facebook’s dominance in social media, it could and should value user satisfaction more than it does.

Right now, though, the only ways to increase your satisfaction with Facebook are either to reduce your number of friends or to reduce your time on platform.

* The Third Millennial Berens Corollary to Sturgeon’s Law is that only 1/10 of 1% is truly excellent but that our signal to noise ratio makes it almost impossible to find excellence.

** This line of thinking is similar to the opportunity costs that Barry Schwartz discusses in his excellent 2004 book “The Paradox of Choice.”

The Girl in the Spider’s Web isn’t terrible, isn’t great

Over the weekend I zoomed through the new David Lagercrantz novel, The Girl in the Spider’s Web, which is the not-written-by-Stieg-Larsson sequel to the Millennium Trilogy that started with The Girl with the Dragon Tattoo.

I’ll start with some thoughts about the book itself — so you have your spoiler alert — but I’ll wind up this post with some thoughts about the aesthetics of ephemera and vice versa.

About the novel: It’s a good gulp-it-down novel, quickly plotted and dark in similar ways to the Larsson books (although not nearly as dark as Larsson’s third, which sucked the light out of the room where I was reading it).

But the book feels unnecessary. After the riveting revelations about Salander’s childhood in Larsson’s third book, The Girl who Kicked the Hornet’s Nest, there’s not much left to say about Lisbeth Salander’s past, and any changes to the character in service of a future would risk betraying the readers who want more of the same. This is a terrible trap for a novelist.

Lagercrantz couldn’t escape the trap, so he has reduced Salander to a series of narrative functions rather like what happened to Sherlock Holmes in the Holmes stories written by others after Conan Doyle’s death (and there are thousands). In most of these stories, Holmes is a pastiche of narrative-advancing tricks (he deduces that Watson has been to the horse races from a bit of straw on Watson’s shoe, causing gullible Watson always to be astounded yet again) rather than a character who interests the reader in his own right. With the exception of Nicholas Meyer’s The Seven-Per-Cent Solution, talking about Holmes as a character is like talking about Batman’s utility belt as a character— it’s not all that useful.

In the post-Larsson world of the Lagercrantz book, Salander is an angry superhero, superhacker, protector of innocents who bursts onto the scene regularly, makes things happen, and then disappears.

The Girl in the Spider’s Web is a misleading title for this book, since Salander is never caught, never motionless, never the prey despite being hunted— she is the predator.

I don’t regret reading the book — despite my sense that it serves the publisher’s greed rather than the readers’ need — but I probably won’t read the next one, and I’m sure there will be a next one.

The aesthetics of ephemera: Perhaps more importantly, I don’t regret reading the book last weekend— my satisfaction index will never be higher than just a few days after its August 27th release date. The longer I wait, the more information from the world will trickle in to spoil my fun.

This isn’t just true of The Girl in the Spider’s Web, of course. The reason that a movie’s lifetime economic success usually is a function of its opening weekend is that the water cooler conversation about a movie is at its frothiest after opening weekend. 

I love to see movies (particularly popcorn movies) opening weekend — although I rarely get to do so — because that’s the moment of maximum potential for having that explosive moment of connection in my own head to other movies and works, and it’s also the moment of maximum potential for having fun discussions with other people about the movie and its broader context.

But the longer I wait to see a movie, the more likely I’ll hear something about it that will diminish that connection-making pleasure for me. I’m not talking about classic “the girl’s really a guy!” plot spoilers, although those suck. Instead, I’m talking about those trying-to-be-helpful hints that come from people who’ve already seen the movie. “I’m not going to tell you anything, but you have to stay all the way to the end of the credits: it’s really cool!”

This is a horrible thing to say to somebody going to a movie you’ve already seen since it means that the viewer will detach from the climax of the movie early, in order to focus on the extra coming after the end.

The ephemera of aesthetics: We don’t have good language to talk about this phenomenon, the very short half-life of the water cooler effect on how we experience culture.

We’re good at talking about the work itself, the creation of the work, the background and previous efforts of the creators of the work.

But we’re bad at talking about how we are a moving point in time relative to the work, and how satisfaction decays with some works but deepens with others.

For example, I’ve been a fan of Lois McMaster Bujold’s Vorkosigan series for about 20 years now, and they merit re-reading. I see new things in the characters, the plot, and her writing when I revisit the books. Although Bujold’s books are masterfully plotted, I can’t reduce my satisfaction with her books to the plot, and this is good.

Lagercrantz’s book is entirely about the plot: at the end of the story all the energy has been released from the plot, a bunch of the characters are either dead or narratively exhausted, and Salander will need to be released into a new situation to exercise her narrative function.

Some sorts of aesthetic experience, then, are fragile in Nassim Nicholas Taleb’s notion of fragility and antifragility.

Plot is fragile. Character is not inherently, but for a character to be antifragile that character must exceed the needs of the plot in which the character is embedded.

Ironically, inside the world of The Girl in the Spider’s Web Lisbeth Salander is indestructible: nothing stops her. Meanwhile, for this reader the experience of reading about Salander’s latest adventure is soap bubble ephemeral.

Pop.

[Cross posted with Medium.]

Stewart, Cosby, Williams: Tough Times for U.S. Comedy

Take heed, sirrah, the whip.
   King Lear to his Fool

Jon Stewart’s farewell episode of The Daily Show last night proved joyful rather than sad as dozens of people whose careers took root and bloomed under Stewart’s watch turned up to celebrate and — despite his resistance — to thank him.

For the under-30 crowd, last night was their May 22, 1992: Johnny Carson’s last episode of The Tonight Show.  Unlike Carson, Stewart has no plans to disappear from public life; more dissimilar still, Stewart is universally reported to be a great guy rather than a jerk.

No reasonable person can fault Stewart for wanting to do something new after 17 brilliant years, but it’s a stabbing loss to nightly political commentary and to comedy.

Funny people abound in U.S. comedy — and I’ve now reached my tautology quotient for the day — but in different ways we’ve lost three icons in the last year, Stewart the most recent.

Bill Cosby was the second: like Stewart, Cosby is alive, but since Hannibal Buress put the spotlight on Cosby’s history of sexual assault last fall, all the joy Cosby had brought to us over the decades tastes sour.  Don’t get me wrong: Buress was right to do it, and it’s a shame on us all that nobody took the women’s allegations seriously until a man said it.

And I mourn the loss of the joy.  For most of my life, Cosby’s voice hasn’t been far from my inner ear.  Just this morning I found myself thinking about an early routine called “Roland and the Roller Coaster,” but then frowned as all the stories of his assaults on women rolled into my mind. 

I’ve heard stories of Cosby’s infidelity since I was in high school.  One of the dubious privileges of growing up in L.A. is knowing a lot of celebrities and their kids.  I was in a play with the kid of a famous woman who knew Cosby well.  I don’t know how it came up — I must have been merrily quoting a Cosby routine — but the kid said, “you know he cheats on his wife all the time, right?”  I don’t remember having an intelligent response beyond, “oh.”  Even then, infidelity was something that struck me as being an issue among the people directly involved rather than the public’s business. 

I remained a Cosby fan, and his observations intertwined with those of George Carlin as a running commentary in my head.

Now when I hear Cosby’s voice in my head I change the mental channel with a flinch.

It’s the second time that I’ve found myself dancing across the minefield of my own responses to Cosby: the first was after the mysterious 1997 murder of his son Ennis just a couple of miles from where I grew up.  After that, I couldn’t listen to any of Cosby’s routines about his kids, and particularly his son, without sadness. 

But I still listened. 

Not anymore.

Next week brings the one-year anniversary of the third and most grievous loss, the suicide of Robin Williams.

A friend stumbled across LIFE magazine’s tribute issue to Williams at a garage sale and bought it for me, as she knew I was a huge fan.  I’ll read it on Tuesday, on the anniversary of his death, but I haven’t been able to open it yet.

I had the privilege of seeing the incandescent Robin Williams perform live onstage three times and saw or listened to him numberless other times.  The speed and depth and genius of his wit will never leave me.  His 2001 appearance on Inside the Actors Studio with James Lipton was the most astonishing display of mental gymnastics that I’ve ever seen.

Darkness always lives in comedy, and when the light is that bright the simple math of it says that shadows must go deep.  I wish I could have done something for him, even though we never met.  I understand this but I still can’t accept it: the funniest man in the world killed himself.

Dustin Hoffman captured the unfathomable, unacceptable, incomprehensible nature of Williams’ suicide in an unguarded moment during an onstage interview with Alec Baldwin that later became a June episode of Baldwin’s wonderful Here’s the Thing podcast.  Hoffman was talking about Lenny Bruce, and how Bruce didn’t prepare set material.  The only other person Hoffman could think of who was like Bruce was Robin Williams.  As he said the name, Hoffman broke down in a sob that hit him like a lightning bolt from a clear blue sky, and it took him several seconds to collect himself.  I cried too.

Good luck, Jon Stewart, and thanks. 

Bill Cosby, I wish you were as good a man as you are a funny man, although that’s a tall order.

Robin Williams, rest in peace.  You deserve it.

[Cross-posted on Medium.]