The study, from researchers at the University of Washington, Stanford University and Northeastern University, also indicates that it may one day be possible to let users take control of their social media algorithms.
“Social media algorithms direct our attention and influence our moods and attitudes, but until now, only platforms had the power to change their algorithms’ design and study their effects,” said co-lead author Martin Saveski, a UW assistant professor in the Information School. “Our tool gives that ability to external researchers.”
“Previous studies intervened at the level of the users or platform features — demoting content from users with similar political views, or switching to a chronological feed, for example. But we built on recent advances in AI to develop a more nuanced intervention that reranks content that is likely to polarize,” Saveski said.
For this study, published in Science, the team drew on previous sociology research identifying categories of antidemocratic attitudes and partisan animosity that can be threats to democracy. In addition to advocating for extreme measures against the opposing party, these attitudes include statements that show rejection of any bipartisan cooperation, skepticism of facts that favor the other party's views, and a willingness to forgo democratic principles to help the favored party.
The team created a web extension tool coupled with an artificial intelligence large language model that scans posts for these types of antidemocratic and extreme negative partisan sentiments. The tool then reorders posts on the user’s X feed in a matter of seconds.
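The reranking idea described above can be sketched in a few lines. The function and marker names below are hypothetical stand-ins (the study's released code and its LLM prompt will differ); `classify_animosity` is a placeholder for the real LLM call that scores each post for antidemocratic or extreme partisan sentiment:

```python
# Minimal sketch of score-and-rerank feed ordering (hypothetical names;
# not the study's actual implementation).

def classify_animosity(post_text: str) -> float:
    """Placeholder for an LLM call that returns a 0..1 score for
    antidemocratic / extreme partisan-animosity content."""
    hostile_markers = ("traitors", "destroy the other party", "rigged")
    hits = sum(marker in post_text.lower() for marker in hostile_markers)
    return min(1.0, hits / len(hostile_markers))

def rerank_feed(posts, downrank=True, threshold=0.5):
    """Reorder a feed so flagged posts appear lower (downrank) or
    higher (uprank). No post is removed, matching the study's design."""
    scored = [(classify_animosity(p), i, p) for i, p in enumerate(posts)]
    flagged = [t for t in scored if t[0] >= threshold]
    clean = [t for t in scored if t[0] < threshold]
    ordered = clean + flagged if downrank else flagged + clean
    return [p for _, _, p in ordered]

feed = [
    "The other side are traitors who want a rigged election.",
    "City council approves new bike lanes downtown.",
    "Great game last night!",
]
print(rerank_feed(feed))
```

Note that both groups keep their original relative order, so the intervention only shifts where the flagged content sits in the stream rather than filtering it out.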
Then, in separate experiments run over seven days, the researchers had groups of participants view their feeds with this type of content downranked or upranked and compared their reactions to a control group. No posts were removed; the more incendiary political posts simply appeared lower or higher in participants' content streams.
The impact on polarization was clear.
“When the participants were exposed to less of this content, they felt warmer toward the people of the opposing party,” said co-lead author Tiziano Piccardi, an assistant professor at Johns Hopkins University. “When they were exposed to more, they felt colder.”
The researchers are now looking into other interventions using a similar method, including ones that aim to improve mental health. The team has also made the code of the current tool available, so other researchers and developers can use it to create their own ranking systems independent of a social media platform’s algorithm.
“In this work, we focused on affective polarization, but our framework can be applied to improve other outcomes, including well-being, mental health and civic engagement,” Saveski said. “We hope that other researchers will use our tool to explore the vast design space of potential feed algorithms and articulate alternative visions of how social media platforms could operate.”
Citation #
- The study "Reranking partisan animosity in algorithmic social media feeds alters affective polarization" was published in the journal Science. Authors: Tiziano Piccardi, Martin Saveski, Chenyan Jia, Jeffrey Hancock, Jeanne L. Tsai, and Michael S. Bernstein.
Image #
Many thanks to Anna kropekk_pl for her image on Pixabay, which we used here.
Contact [Notaspampeanas](mailto:notaspampeanas@gmail.com)