
What the flight to RedNote says about attention-driven social media feeds

Image by George Pagan III from Unsplash.

February 25, 2025

As the U.S. moved to ban TikTok, millions of users migrated to RedNote and similar platforms, drawn by attention-driven feeds that reinforce biases, omit neutral information, and create digital echo chambers, writes Thomas Plant.

Just days before the U.S. government would ban TikTok over alleged Chinese government ties, RedNote—a Chinese-owned app with mostly Mandarin content and no major marketing push—unexpectedly shot to the top of the U.S. App Store. This massive digital migration occurred because RedNote offered the path of least resistance for fleeing users: its TikTok-like, attention-driven algorithm let users start scrolling immediately without rebuilding their follower lists. Their feeds are automatically curated based on what they interact with—pulling content from anywhere on the app. While convenient, these feeds skew perception and warp worldviews by removing user choice and appealing to users’ biases, surfacing emotionally charged content and omitting content that may be neutral, informative, or corrective—something particularly dangerous in a fraught period of U.S. politics.

And it’s not just RedNote and TikTok. While RedNote was the most noteworthy destination, TikTok “refugees” quietly fell back on similar offerings like Instagram Reels and YouTube Shorts. All of these platforms have copied TikTok’s attention-driven feeds, meaning that a user’s biases will follow them to whatever platform they migrate.

To put this incident in context, social media migrations have happened before. Platforms like Gab, Parler, and Truth Social emerged after Trump’s post–January 6 deplatforming; Bluesky gained traction after Musk’s Twitter purchase. But those gains were blips compared to the initial rise of TikTok and the RedNote incident, which were driven not by ideology or protest but by an unmatched ability to capture attention.

While seemingly unremarkable now, TikTok pioneered a revolutionary departure from traditional social media. In the traditional approach, users follow friends and creators and see their posts displayed chronologically. TikTok’s ‘For You Page’ introduced an approach where users don’t need to follow anyone. The algorithm automatically curates their feeds, suggesting content based on what users linger on or engage with and refining its recommendations over time in an infinitely scrolling feed. These feeds deliver content designed to capture a user’s attention—no matter who posted it or when. In doing so, attention-driven algorithms have made digital migration effortless. TikTok became the fastest-growing social media app from 2021 to 2023, and its success forced nearly all other platforms (e.g., Instagram, YouTube, Facebook) to copy its formula to avoid hemorrhaging users.
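To make that mechanism concrete, consider the minimal sketch below of how an engagement-based recommender can work. Every name, signal, and weight in it is a hypothetical simplification for explanation; TikTok’s actual system is proprietary and vastly more complex.

```python
from collections import defaultdict

class ToyForYouFeed:
    """A drastically simplified engagement-based recommender (illustrative only)."""

    def __init__(self):
        # Learned affinity per content topic; every topic starts neutral at 0.0.
        self.affinity = defaultdict(float)

    def record_interaction(self, topic, watch_seconds, liked=False):
        # Lingering and engaging both raise a topic's affinity.
        # The feed learns from attention, not from a follow graph.
        # The 0.1 weight is an arbitrary illustrative choice.
        self.affinity[topic] += 0.1 * watch_seconds + (1.0 if liked else 0.0)

    def rank(self, candidates):
        # Rank every candidate on the platform by learned affinity,
        # regardless of who posted it or when.
        return sorted(candidates, key=lambda v: self.affinity[v["topic"]], reverse=True)

feed = ToyForYouFeed()
feed.record_interaction("politics", watch_seconds=45, liked=True)
feed.record_interaction("cooking", watch_seconds=3)

videos = [{"id": 1, "topic": "cooking"}, {"id": 2, "topic": "politics"}]
print(feed.rank(videos))  # the politics video now ranks first
```

Note what is absent from the sketch: no follower list, no chronology, no notion of source. That absence is precisely what made hopping from TikTok to RedNote frictionless.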

What’s wrong with attention-driven feeds?

Attention-driven feeds are problematic because they automatically align what a user sees with that user’s personal preferences, biases, and behaviors.

These feeds resemble “echo chambers”—a concept popular a decade ago that researchers have since largely debunked. Back then, echo chambers described social media feeds that brainwashed users by drowning them in identical opinions. But researchers found that people sought out the content they wanted to see by curating their own feeds, while encountering—but ignoring—diverse perspectives. Echo chambers were more myth than reality.

Today, attention-based feeds may be creating real echo chambers—not just by controlling what users see, but more importantly by controlling what they don’t see.

First, attention-based feeds bury neutral or merely informative content. Instead, they favor emotionally engaging content because it better captures users’ attention—either by confirming a user’s beliefs or by reinforcing them through framing opposing perspectives to stoke anger or resentment. And there are endless opportunities to do so. The vast amount of content on social media means that algorithms can always find something highly tailored to any user’s specific array of biases, interests, or grievances.
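As a thought experiment, consider how a ranking score that rewards predicted emotional engagement treats a calm, factual post. The field names and weights below are invented for illustration, but the dynamic they produce—neutral content losing every head-to-head comparison—is the burial effect just described.

```python
def engagement_score(post, user_biases):
    # Emotional arousal and bias match are rewarded; neutrality adds
    # nothing. The 2.0 and 1.5 weights are invented for illustration.
    bias_match = sum(1 for tag in post["tags"] if tag in user_biases)
    return 2.0 * post["emotional_arousal"] + 1.5 * bias_match

posts = [
    {"id": "fact-check", "emotional_arousal": 0.1, "tags": ["neutral"]},
    {"id": "outrage-clip", "emotional_arousal": 0.9, "tags": ["grievance_x"]},
]
user_biases = {"grievance_x"}

ranked = sorted(posts, key=lambda p: engagement_score(p, user_biases), reverse=True)
print([p["id"] for p in ranked])  # the outrage clip surfaces first, every time
```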

Second, these feeds omit context. The brevity of short-form videos makes it easier to mislead users by presenting clips out of context (also known as “clip chimping”). In an infinite feed that can pull content from any user, tracking down original sources is difficult—users cannot return to a post they saw in the past without knowing exactly who posted it. As a result, users are often left unaware of what part of the story was left out. Instead, they have to trust that their feeds—and the videos within them—present a comprehensive picture of reality.

Efforts to combat mis/disinformation are dangerously behind if they do not recognize and adapt to this reality. For example, while Meta’s rollback of fact-checking in favor of community notes is alarming, fact-checking alone cannot solve the problem of algorithmic omission in attention-based feeds. The problem requires a different solution set, in which two strategies are essential:

  1. Revamp Media Literacy. Social media users must be made aware of how infinite-scroll algorithms work and how they shape (and limit) their understanding of current events. Teaching about omission is crucial because it involves revealing what people would otherwise not see—content they need to see. The goal is to explain the seemingly paradoxical idea that information can be factual yet misleading.
  2. Capture Attention with Counter-Messaging. In an infinite-scroll world, counter-messages and corrections must be compelling enough that the algorithms of those who were misled surface them in their feeds. Simply correcting the misinformation isn’t enough. A good starting point is speaking directly to those being misled and framing the message in a way that resonates with their concerns.

Regardless of TikTok’s future in the United States, the attention-driven infinite scroll that users flock to is here to stay. Rather than deny it, we must innovate alongside it, acknowledging how it warps worldviews and then targeting this new dynamic directly.

About Thomas Plant:
Thomas Plant is an Associate Product Manager at Accrete AI and co-founder of William & Mary’s DisinfoLab, the nation’s first undergraduate disinformation research lab.
The views presented in this article are the author’s own and do not necessarily represent the views of any other organization.