Twitter is deploying new features to curb 2020 election disinformation


Twitter is deploying new features on Thursday that it says will keep pace with disinformation and influence operations targeting the 2020 election.

A new policy on “synthetic and manipulated media” aims to flag and provide greater context for content that the platform believes has been “significantly and deceptively altered or fabricated.”

Starting Thursday, when users scroll through posts, they may begin seeing Twitter’s new labeling system — a blue exclamation point and the words “manipulated media” underneath a video, photo or other media that the platform believes to have been tampered with or deceptively shared.

This could include deepfakes (high-tech videos that depict events that never happened) or “cheapfakes” made with low-tech editing, like speeding up a video or slowing it down.

Twitter’s head of site integrity, Yoel Roth, told NPR that moderators will be watching for two things:

“We’re looking for evidence that the video or image or audio have been significantly altered in a way that changes their meaning,” Roth said. “In the event that we find evidence that the media was significantly modified, the next question we ask ourselves is, is it being shared on Twitter in a way that is deceptive or misleading?”

If the media has been modified to the extent that it could “impact public safety or cause serious harm,” then Twitter says it will remove the content entirely.

Twitter says this could include threats to the physical safety of a person or group, or to an individual’s ability to exercise human rights, such as participating in elections.

When users tap on a newly labeled post, Twitter will provide “expert context” explaining why the content isn’t trustworthy.

If someone tries to retweet or “like” the content, they’ll receive a message asking if they really want to amplify an item that is likely to mislead others.

Twitter says it may also reduce the visibility of the misleading content.

Evolution of interference

Twitter was caught “flat-footed” by the active measures that targeted the U.S. election in 2016, Roth said. That year, influence specialists used fake accounts across social media to spread disinformation and amplify discord.

That work never really stopped, national security officials have said, and officials warned ahead of Super Tuesday’s primaries that it continues at a comparatively low but steady state.

Roth told NPR that while Twitter has not traced specific tweets about the 2020 campaign back to Russia, it is trying to apply what it learned from the last presidential race, including about the use of fake personae.

While this tactic has remained a part of Russia’s toolkit, the playbook also has continued to expand.

In 2018, Russia didn’t just interfere in the election; influence-mongers also tried to make it look like there was more interference than there actually was.

“We saw activity that we believe to have been connected with the Russian Internet Research Agency that was specifically targeting journalists in an attempt to convince them that there had been large scale activity on the platform that didn’t actually happen,” Roth said.

This time around, rather than creating its own messaging, Russia and other foreign actors are amplifying the voices of real Americans, Roth said. By re-sharing extreme but authentic content, they’re able to manipulate the platform without introducing any additional misinformation.

“I think in 2020, we’re facing a particularly divisive political moment here in the United States and attempts to capitalize on those divisions among Americans seem to be where malicious actors are heading,” Roth said.

The social network has studied fake accounts and coordinated manipulation efforts, and it has put its new policies to the test during elections in the European Union and India.

Roth says Twitter has also built a community of experts, including community moderators and partners in government and academia. Another priority is transparency.
