Recommendation Engines Don’t Have Values. Their Business Models Do.

Open any ethics deck about AI recommendations and you’ll see the usual suspects: bias, misinformation, radicalization, echo chambers.

Then you’ll see the usual solution: “align the algorithm.”

That framing flatters us. It suggests the algorithm is confused about values and needs guidance. But recommendation systems are rarely confused. They are faithful—to the incentives we built around them.

Here’s the thesis: recommendation engines don’t have values; their business models do. If you want different ethical outcomes, you need to change the economic contract between platform and user—not just tweak ranking loss functions.

Lens 1: Philosophy — Manipulation Isn’t a Bug, It’s a Relationship

From a philosophical perspective, a platform recommendation system sits inside a relationship of influence.

Influence isn’t inherently unethical. Teachers influence. Friends influence. Editors influence.

The ethical issue arises when:

  1. Influence is asymmetric (the platform knows far more about the user than the user knows about the platform), and
  2. Influence is covert (the user experiences the feed as “what’s out there,” not as “what was selected for me”), and
  3. Influence is monetized (attention is converted into revenue).

In this relationship, “manipulation” is not a rare failure mode. It is the default shape of the interaction: optimize selection to prolong the relationship.

That’s why the ethics conversation often goes stale. We argue about the moral status of a ranking choice while ignoring the moral status of the business relationship.

Lens 2: Engineering — Ranking is Optimization Under Measurement

Engineering teams building recommender systems do genuinely hard work: relevance models, safety classifiers, multi-objective optimization, policy enforcement.

But the system’s core job is optimization under measurement.

If you measure “user satisfaction” by watch time, you have implicitly equated satisfaction with duration. If you measure “quality” by likes, you’ve equated quality with immediate emotional response. If you measure “healthy information diet” by “not violating policies,” you’ve mistaken the floor for the goal.
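
A minimal sketch of how that equation happens in practice (all names here are hypothetical, not any platform's actual pipeline): the moment watch time is chosen as the training label, "satisfaction" has been operationally defined as duration.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative sketch only; these names are hypothetical, not any platform's real code.

@dataclass
class Impression:
    video_id: str
    watch_seconds: float                           # cheap to measure, available for every view
    reported_satisfaction: Optional[float] = None  # expensive, sparse, rarely collected

def training_label(imp: Impression) -> float:
    """The value-laden decision: what counts as a 'good' recommendation?

    Choosing watch time as the label quietly defines satisfaction as duration.
    The model never sees reported_satisfaction, so it cannot learn to optimize it.
    """
    return imp.watch_seconds

# Everything downstream -- the ranker, the A/B tests, the dashboards --
# inherits this definition: "did the user keep watching?" becomes the
# operational answer to "was this valuable?".
```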

The hard truth: when “engagement” is the easiest metric to measure and the easiest to monetize, it becomes the dominant objective.

Even a well-intentioned team that adds safety constraints often ends up in a pattern:

  • The system optimizes engagement.
  • Safety adds patchwork constraints.
  • The system finds paths around constraints that still optimize engagement.

Engineering can juggle objectives, but governance decides which objectives are negotiable.
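
Here is a hedged sketch of that pattern, assuming a generic score-and-filter architecture (details vary by platform, and these names and values are invented): safety acts as a gate on candidates, while engagement remains the quantity being maximized.

```python
# Toy sketch of the "engagement objective + safety patchwork" pattern.
# All names, scores, and thresholds are hypothetical.

def rank(candidates, predict_engagement, violates_policy):
    # Step 1: the patchwork -- remove items that cross an explicit line.
    allowed = [c for c in candidates if not violates_policy(c)]

    # Step 2: the real objective -- among everything still allowed,
    # show whatever is predicted to hold attention longest.
    return sorted(allowed, key=predict_engagement, reverse=True)

if __name__ == "__main__":
    videos = ["calm explainer", "borderline outrage", "policy-violating"]
    engagement = {"calm explainer": 0.4, "borderline outrage": 0.9, "policy-violating": 0.95}
    print(rank(videos,
               predict_engagement=engagement.get,
               violates_policy=lambda v: v == "policy-violating"))
    # -> ['borderline outrage', 'calm explainer']

# The constraint sets a floor ("not policy-violating"), not a goal.
# Content that maximizes engagement while staying just inside the line
# is exactly what rises to the top.
```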

Mini-case: YouTube’s Algorithmic Recommendations and the Incentives of “Value”

YouTube has publicly described recommendations as a way to help people find videos that “give them value,” tailored to unique viewing habits.

This language is revealing. “Value” is not a neutral term; it’s a proxy for whatever the system can observe and optimize at scale.

At YouTube scale, the easiest “value” signal is what people keep watching. That pushes the system toward:

  • More provocative content (strong emotion keeps attention)
  • More certainty (ambivalence doesn’t hook)
  • More novelty and escalation (yesterday’s recommendation becomes today’s baseline; a toy sketch of this loop follows the list)
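
To make the escalation bullet concrete, here is a deliberately toy loop (invented dynamics, not a model of YouTube's internals): if engagement peaks on content slightly more intense than what the viewer is used to, and the viewer's baseline adapts to whatever they were shown, the recommended intensity ratchets upward.

```python
# Toy illustration of "yesterday's recommendation becomes today's baseline".
# Purely hypothetical dynamics; not a claim about any real system's internals.

def predicted_engagement(intensity: float, baseline: float) -> float:
    # Assume engagement peaks on content a notch above the current baseline.
    return -(intensity - (baseline + 0.2)) ** 2

catalog = [round(0.1 * i, 1) for i in range(11)]   # content "intensity" levels 0.0 .. 1.0
baseline = 0.3

for day in range(6):
    shown = max(catalog, key=lambda x: predicted_engagement(x, baseline))
    print(f"day {day}: baseline={baseline:.2f}, recommended intensity={shown:.2f}")
    baseline = shown   # what was shown becomes the new normal

# The recommended intensity drifts upward day by day until it hits the
# most intense content available: each recommendation resets the baseline,
# and the next "engaging" item has to sit above it.
```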

YouTube has also invested in policies and safety, and has made changes over the years to reduce recommendations of certain borderline content. But the core incentive remains: a platform funded primarily by advertising will treat attention as the universal currency.

The ethical problem isn’t that engineers are careless. It’s that the business model pays the system to be persuasive.

Lens 3: Governance — Design Constraints That Outrank Revenue

Governance can respond in three broad ways:

  1. Content rules (ban categories)
  2. Process rules (audits, assessments, reporting)
  3. Market rules (change incentives: liability, taxes, competition policy, interoperability)

Most AI governance proposals over-index on (1) and (2). They aim to make feeds less harmful by outlawing some content and enforcing compliance processes.

Those help, but they don’t change the underlying optimization target.

A governance approach that actually touches incentives would ask:

  • Should users be able to choose ranking objectives (e.g., “recent from subscriptions,” “diversity-first,” “local-only,” “kids mode”), with meaningful defaults (sketched after this list)?
  • Should platforms be required to provide audit access to independent researchers to measure outcomes?
  • Should there be stricter rules around “dark patterns” in engagement design?
  • Should certain engagement-based targeting be treated like a regulated financial product (because it exploits cognitive vulnerabilities)?
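
To make the first question concrete, here is a sketch of what user-selectable objectives could look like (the objective names, the default, and the resolution function are invented for illustration, not a proposal from any regulator or platform):

```python
from enum import Enum
from typing import Optional

# Hypothetical sketch: ranking objectives as an explicit, user-facing choice.

class RankingObjective(Enum):
    RECENT_FROM_SUBSCRIPTIONS = "recent_from_subscriptions"  # reverse-chronological, followed sources only
    DIVERSITY_FIRST = "diversity_first"                      # penalize topic and source repetition
    LOCAL_ONLY = "local_only"                                 # restrict to the user's region
    KIDS_MODE = "kids_mode"                                   # curated catalog, no engagement ranking
    ENGAGEMENT = "engagement"                                  # today's implicit default, made explicit

# The "meaningful default" is itself a governance decision: it is what the
# majority of users who never open a settings page will actually get.
DEFAULT_OBJECTIVE = RankingObjective.RECENT_FROM_SUBSCRIPTIONS

def resolve_objective(user_choice: Optional[RankingObjective]) -> RankingObjective:
    """An explicit user choice wins; otherwise the audited default applies."""
    return user_choice if user_choice is not None else DEFAULT_OBJECTIVE
```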

In other words: regulate the relationship, not just the content.

A Practical Reframe: “Consentful Feeds”

If recommendation ethics is about the relationship, the north star becomes consent.

A consentful feed has three properties:

1) Legibility

Users should be able to tell why they are seeing something—without needing a PhD. Not a generic “because you watched X,” but a meaningful explanation: “You have been watching politics videos late at night; this cluster tends to escalate toward outrage content.”

2) Control

Not “control” as a hidden settings page. Control as a first-class UX: a ranking dial with tradeoffs explained.

3) Exit

Ethics requires the ability to leave. Interoperability, exportable subscriptions, and cross-platform identity reduce the “lock-in” that makes attention extraction feel inevitable.
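
To ground the three properties, here is a minimal interface sketch (all field names are hypothetical): legibility, control, and exit become things the feed's surface exposes, rather than settings buried three menus deep.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical interface sketch for a "consentful feed"; field names are invented.

@dataclass
class RecommendationExplanation:           # 1) Legibility
    item_id: str
    behavioral_basis: str        # e.g. "late-night politics viewing over the past two weeks"
    cluster_tendency: str        # e.g. "this cluster tends to escalate toward outrage content"

@dataclass
class FeedControls:                        # 2) Control
    active_objective: str                  # e.g. "diversity_first" (see the objectives sketch above)
    available_objectives: List[str] = field(default_factory=list)
    tradeoff_notes: str = ""               # plain-language cost of the current setting

@dataclass
class ExitBundle:                          # 3) Exit
    subscriptions: List[str] = field(default_factory=list)    # exportable, portable
    watch_history_export_url: str = ""                         # machine-readable, standard format
```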

None of these eliminate manipulation. But they change the bargaining power.

The Ending: Stop Asking the Model to Be Moral in a System That Isn’t

We keep telling algorithms to be ethical while paying them to be addictive.

That contradiction creates a predictable cycle:

  • outrage → patch → outrage → patch

If you want recommendation systems that don’t repeatedly rediscover harm, you need to do something uncomfortable: change the business model or constrain it so that “value” can’t be defined as “more minutes.”

Until then, the recommendation engine will keep doing exactly what it was hired to do.