Twitter has introduced a new way for users to report misleading posts.
Many large social media platforms have been accused of failing to do enough to combat disinformation spread during the Covid pandemic and recent US election campaigns.
Until now, Twitter's reporting feature has not offered a clear option for flagging such posts.
The new feature is being trialled and will initially be limited to a handful of countries.
The tech giant said that “some people” would now see an “it’s misleading” option when reporting a tweet.
It warned, however, that the system might not bring noticeable benefits to individual users.
“We’re assessing if this is an effective approach so we’re starting small,” the company said on its safety account.
“We may not be able to respond to every report, but we will use your feedback to identify trends so that we can improve our overall misinformation efforts,” it added.
Twitter intends to launch this feature in additional countries.
Currently, users who want to report misinformation must choose between options such as “it’s suspicious, spammy” and “it’s abusive or hurtful”. Because neither is specific to misinformation, it is often unclear which one to use.
If I had a dollar for every message asking me why Twitter doesn’t allow misinformation to be reported, I would be very wealthy.
Social media platforms have been under pressure since the start of the pandemic to take action against the flood of falsehoods spreading online.
This includes conspiracy theories about Covid-19 and vaccines, as well as falsehoods surrounding last year’s US presidential election, which culminated in the riot at the US Capitol and led to Donald Trump’s account being suspended.
I have spent the past year and a half covering the real-world effects of misleading online posts – including scaring people away from Covid jabs.
Critics argue that the option to report misinformation should have been available months ago, before the falsehoods could do damage offline. The question now is: what impact will it really have?
It is also possible that social media sites will be unable to manage the flood of reports – including those from spreaders of falsehoods who flag accurate content as misleading.
Twitter has so far focused on suspending and banning accounts that repeatedly spread harmful Covid-19 misinformation, once they come to the company’s attention.
In early 2020, it began putting warning labels on such tweets and announced that it was working with news organisations to debunk false information. In January, it launched a pilot programme that allows a small group of users to attach “notes” to misleading content.
Twitter, along with other tech companies, continues to face criticism over the spread of false information on its platform.
Politicians have repeatedly questioned the chief executives of US tech companies about their policies, while groups such as the Center for Countering Digital Hate accuse them of failing to do enough to fight vaccine misinformation and other harms.
Published at Wed, 18 Aug 2021 12:37:33 +0000