The End of Filter Failure?

How soon will technology start working for users rather than big tech companies when it comes to information overload?

Last time, I shared a piece of microfiction (1,000 words or fewer), a short science fiction story called “Fleeing the Emerald City,” about Calvin, a man who uses advanced filtering technology to lose weight but doesn’t much enjoy the experience. I also recorded an audio version of the story.

This time, I’ll dig into how realistic the story world is or isn’t. You don’t have to read or listen to “Fleeing the Emerald City” (although it ain’t bad) to understand this week’s piece, but fair warning: Ahoy! Thar Be Spoilers Ahead!

Let’s dig in.

Image created by DALL-E.

In the story, Calvin subscribes to a service called Nudgetekk that provides smartglasses with connected earbuds to make it impossible for Calvin to see or hear about unhealthy foods. In Calvin’s local supermarket, the smartglasses connect to the store’s inventory so that the Nudgetekk AI knows which aisle has the Oreos and steers Calvin down another one. If Calvin blunders down an aisle containing sugary or fatty temptation, the smartglasses blur the packages. If an ad for Ben & Jerry’s Chunky Monkey blares over the supermarket loudspeakers, the Nudgetekk earbuds filter it out.

How realistic is this? Not very at the moment, but quite realistic in the middle-distance future—say, five to ten years out.

Ad filtering is not a new idea. In his 1985 novel Contact, Carl Sagan had an aside about a service called Adnix that would simply mute the TV whenever a commercial started. Today, many people use ad blockers (Ghostery, Privacy Badger, AdBlock) to suppress online ads. But those blockers work in controlled and predictable environments (web browsers, smartphones) rather than in the real world.

The technical challenge is overlaying filtering technology onto the real world around us in real time. We’ve had versions of augmented reality for years now (Google Glass was the earliest famous example), but today’s smartglasses (also called heads-up displays, or HUDs) add information to the environment around you rather than subtract it. If you use Snapchat, then you have probably played with the overlays, like the one that stuck a possum on top of my head.

The most useful overlay example is GPS navigation: instead of looking away from the road to see a map on the dashboard, a HUD puts the route on top of the road itself, with a big arrow pointing at the freeway exit you need to take. (Without, I hope, the “this one, you idiot!” commentary.)

Digression: In this piece, I’m ignoring other well-known challenges with HUD like battery life (nobody wants to stop everything to plug in their glasses five times per day) and the nausea problem that bedeviled early VR. End of digression.

To filter out visual information in real time—while Calvin is walking down the cookie aisle in the supermarket—the smartglasses would first need to recognize that information. That involves both a) machine vision (computers seeing and understanding things) and b) Edge AI (computation happening on a device rather than in the cloud). Even if Calvin’s supermarket had ultrafast wifi, which is unlikely, upstream data is typically slower than downstream, so without AI built into the smartglasses themselves, the blurring couldn’t happen in time for Calvin to avoid the temptation of Double Dark Chocolate Milanos. (Is anyone else getting hungry?) He’d see the package, and only then would it blur—a latency problem.
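To make the latency point concrete, here is a minimal sketch of what Edge AI filtering looks like in code. The `detect` stub is a hypothetical stand-in for a real on-device vision model; the rest is a standard OpenCV camera loop, and nothing waits on a network round trip:

```python
import cv2

def detect(frame):
    # Stand-in for an on-device vision model; a real system would run a
    # quantized object detector here. Returns one fake bounding box.
    h, w = frame.shape[:2]
    return [(w // 4, h // 4, w // 2, h // 2)]

def blur_regions(frame, boxes, strength=51):
    # Gaussian-blur each flagged region in place (kernel size must be odd).
    for (x, y, w, h) in boxes:
        roi = frame[y:y+h, x:x+w]
        frame[y:y+h, x:x+w] = cv2.GaussianBlur(roi, (strength, strength), 0)
    return frame

cap = cv2.VideoCapture(0)  # the glasses' camera; here, the default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Everything happens locally: no cloud round trip, no latency spike.
    cv2.imshow("filtered", blur_regions(frame, detect(frame)))
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```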

Also, I cheated. The supermarket example sounds complex: it requires a store that knows the layout of its own inventory (the Amazon Go stores do this to an extent) and can share that information with another AI-powered device (the smartglasses) in real time. But even if we dismiss all the bandwidth and Edge AI challenges, this is still a much simpler filtering problem than wandering through a store that doesn’t have digitally tracked inventory.
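Here is a toy version of that handshake, with invented names rather than any real store API: the store publishes which items sit in which aisle, and the glasses simply route around anything on the wearer’s blocklist.

```python
# Toy sketch of the store-to-glasses handshake the story assumes. All names
# are invented; a real system would be a live API, not a hardcoded dict.

BLOCKLIST = {"cookies", "candy", "ice cream"}

store_layout = {                 # what a digitally tracked store might publish
    "aisle 1": {"produce", "bread"},
    "aisle 2": {"cookies", "candy"},
    "aisle 3": {"pasta", "rice"},
    "aisle 4": {"ice cream", "frozen vegetables"},
}

def safe_aisles(layout, blocklist):
    """Aisles with nothing on the blocklist: the route the AI steers you down."""
    return [aisle for aisle, items in layout.items() if not items & blocklist]

print(safe_aisles(store_layout, BLOCKLIST))   # ['aisle 1', 'aisle 3']
```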

One useful analogy is Shazam, the “what is that song?” technology for the hopelessly distractible. When you just gotta know who did that intriguing metal cover of “Eleanor Rigby” (Godhead) and grab your smartphone to stab the Shazam button, the program doesn’t recognize the song the way a human does. Instead, it captures a sample recording and then rapidly compares that sample against everything in its database. It’s a brute-force computational exercise that identifies a recording rather than a song.
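A drastically simplified sketch of that brute-force idea (Shazam’s real pipeline hashes constellations of spectrogram peaks; this keeps only the skeleton): reduce each recording to a crude fingerprint, then slide the sample’s fingerprint along every entry in the database and count matches.

```python
import numpy as np

def fingerprint(signal, window=1024):
    """Crude spectral fingerprint: the loudest frequency bin in each window."""
    frames = signal[: len(signal) // window * window].reshape(-1, window)
    return tuple(np.abs(np.fft.rfft(frames, axis=1)).argmax(axis=1))

def tone(freq, seconds, rate=8000):
    """A pure sine tone standing in for a recording."""
    t = np.arange(int(rate * seconds)) / rate
    return np.sin(2 * np.pi * freq * t)

# A pretend catalog of known recordings, each reduced to its fingerprint.
database = {
    "song_a": fingerprint(tone(440, 2)),
    "song_b": fingerprint(tone(660, 2)),
}

sample = fingerprint(tone(660, 1))   # a short clip captured "in the wild"

def score(db_fp, sample_fp):
    """Brute force: slide the sample along a known fingerprint, count hits."""
    return sum(db_fp[i:i + len(sample_fp)] == sample_fp
               for i in range(len(db_fp) - len(sample_fp) + 1))

print(max(database, key=lambda name: score(database[name], sample)))  # song_b
```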

In contrast, Google’s newer “Search a song” technology lets you sing or hum a melody into the Google smartphone app, which then tells you what the song might be and links to different recordings. It’s amazing. This is machine learning versus brute computation. To my surprise, “Search a song” failed to identify the famous opening bars of Beethoven’s Fifth Symphony but had no trouble figuring out when I was humming “One Week” by Barenaked Ladies.
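Google’s production system uses learned embeddings, so the following is only a hand-rolled stand-in for the intuition: compare the shape of a melody rather than its absolute pitches, and humming in the wrong key still matches.

```python
import numpy as np

def contour(pitches):
    """Key-invariant melodic contour: the pitch sequence minus its own mean."""
    p = np.asarray(pitches, dtype=float)
    return p - p.mean()

def distance(a, b):
    """How far apart two equal-length contours are (0.0 means same tune)."""
    return float(np.linalg.norm(contour(a) - contour(b)))

# MIDI note numbers for the opening of "Twinkle, Twinkle" in two keys.
melody_in_c = [60, 60, 67, 67, 69, 69, 67]
melody_in_g = [67, 67, 74, 74, 76, 76, 74]   # same tune, seven semitones up

print(distance(melody_in_c, melody_in_g))    # 0.0: different keys, same melody
```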

In the story, the Nudgetekk glasses are like Shazam rather than Google’s “Search a song.” It would be a much bigger technology hurdle for an AI to filter out unhealthy food stimuli in the wild… today.

But what about five years from now? Let’s say the computational power of AI doubles each year (a conservative estimate by many accounts). Five doublings means that in five years AI will be 32x (2^5) more powerful than today’s already magic-seeming programs. By that point, the ability of smartglasses to filter out images and sounds in the wild will be much stronger, if not seamless.
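The arithmetic there is just compound doubling:

```python
# Doubling every year for five years multiplies capability by 2**5 = 32.
for year in range(1, 6):
    print(f"year {year}: {2 ** year}x")   # 2x, 4x, 8x, 16x, 32x
```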

Why does any of this matter? In a famous Web 2.0 keynote 15 years ago (remember when we talked about “Web 2.0” like it was a thing?), Clay Shirky argued that we digital humans weren’t really suffering from information overload. Instead, we were experiencing filter failure. Shirky was updating the polymath Herbert Simon, who wrote back in 1971 that “a wealth of information creates a poverty of attention.”

Since the start of the digital revolution and particularly since the explosion of social media, the users (that’s us) have been bringing squirt guns to an attention war where the tech platforms have nuclear weapons.

Now, for the first time in decades, technology is trending towards empowering users with smart filters that will help them achieve their goals—instead of watching hours disappear into TikTok because of a deliberately addictive algorithm. That’s the good news.

The bad news is that AI-powered smart filters might also empower users to stay even more firmly inside their bubbles. If you think we’re polarized now, just imagine what it will be like when we can entirely avoid opinions that don’t match our own. We won’t even have to change the channel!

Every new technology is a double-edged sword.

Note: to get articles like this one—plus a whole lot more—directly in your inbox, please subscribe to my free weekly newsletter!

