Beyond Digital Ethics

Extremist Suggestions

Earlier this week, Zeynep Tufekci appeared on Ezra Klein’s podcast. If you don’t know Tufekci, you should: she’s one of my favorite academic thinkers on the intersection of technology and society.

During the interview, Tufekci discussed her investigation of YouTube’s autoplay recommendation algorithm. She noticed that YouTube tends to push users toward increasingly extreme content.

If you start with a mainstream conservative video, for example, and let YouTube’s autoplay feature keep loading your next video, it doesn’t take long until you’re watching white supremacists.

Similarly, if you start with a mainstream liberal video, it doesn’t take long until you’re mired in a swamp of wild government and health conspiracies.

Tufekci is understandably concerned about this state of affairs. But what’s the solution? She offers a suggestion that has become increasingly popular in recent years:

“We owe it to ourselves to [ask], how do we design our systems so they help us be our better selves, [rather] than constantly tempting us with things that, if we sat down and were asked about, would probably say ‘that’s not what we want.’”

This represents a standard response from the growing digital ethics movement, which believes that if we better train engineers about the ethical impact of their technology design choices, we can usher in an age in which our relationship with these tools is more humane and positive.

A Pragmatic Alternative

I agree that digital ethics is an important area of inquiry, perhaps one of the more exciting topics within modern philosophical thought.

But I don’t share the movement’s optimism that more awareness will influence the operation of major attention economy conglomerates such as YouTube. The algorithm that drives YouTube’s autoplay toward extremes does so not because it’s evil, but because it was tasked with maximizing user engagement, which in turn maximizes revenue, the primary objective of a publicly traded corporation.
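To make that incentive concrete, here is a toy sketch, not YouTube’s actual system (whose details are proprietary), but a hypothetical recommender that simply picks whichever candidate video maximizes predicted watch time. Nothing in that objective penalizes extremity, so if fringe content happens to hold attention longer, the loop drifts toward it on its own:

```python
# Toy sketch of an engagement-maximizing autoplay step (illustrative only,
# not YouTube's actual algorithm). The objective is predicted watch time;
# extremity never enters the decision, so it is free to drift upward.

from dataclasses import dataclass


@dataclass
class Video:
    title: str
    predicted_watch_minutes: float  # the engagement proxy being optimized
    extremity: float                # 0.0 = mainstream, 1.0 = fringe (for illustration)


def next_autoplay(candidates: list[Video]) -> Video:
    # Greedy engagement-maximizing choice.
    return max(candidates, key=lambda v: v.predicted_watch_minutes)


candidates = [
    Video("Mainstream news clip", predicted_watch_minutes=4.0, extremity=0.1),
    Video("Provocative commentary", predicted_watch_minutes=9.0, extremity=0.6),
    Video("Conspiracy deep-dive", predicted_watch_minutes=14.0, extremity=0.9),
]

print(next_autoplay(candidates).title)  # -> "Conspiracy deep-dive"
```

The names and numbers here are invented; the point is only that a revenue-aligned objective, followed faithfully, produces exactly the drift Tufekci describes.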

It’s hard to imagine companies of this size voluntarily reducing revenue in response to a new brand of ethics. It’s unclear, given their fiduciary responsibility to their shareholders, if they’re even allowed to do so.

By contrast, I’ve long supported a focus on culture over corporations. Instead of quixotically trying to convince some of the most valuable business enterprises in the history of the world to act against their own interests, we should convince individuals to adopt a much more skeptical and minimalist approach to the digital junk these companies peddle.

We don’t need to convince YouTube to artificially constrain the effectiveness of its autoplay algorithm; we should instead convince users of the life-draining inanity of idly browsing YouTube.

I’m not alone in holding this position.

Consider Tristan Harris, who, to quote his website, spent three years “as a Google Design Ethicist developing a framework for how technology should ‘ethically’ steer the thoughts and actions of billions of people from screens.”

After realizing that Google actually had very little interest in making their technology more ethical, he quit to start a non-profit that eventually became the Center for Humane Technology.

I’m both a fan and close observer of Harris, so I’ve been intrigued to watch his focus shift increasingly away from promoting better digital ethics and toward other ways of defending against the worst excesses of the attention economy.

The current website for his center, for example, now emphasizes political pressure, cultural change (the approach I promote), and smartphone manufacturers, who don’t directly profit from exploiting user attention and might therefore be persuaded to introduce more bulwarks against cognitive incursions.

I appreciate this pragmatism and think it hints at a better technological future. We need to harness the discomfort we increasingly feel toward the current crop of tech giants and redirect it toward an honest examination of our own behavior.

