Twitter on Tuesday announced a plan to curb trolls and abusive behavior on its social media platform, with the goal of making Twitter a safer place. It outlined the following objectives:
Prevent the creation of new abusive accounts;
Make search safer; and
Collapse potentially abusive or low-quality tweets.
Twitter also pledged to persist in its anti-abuse efforts, saying it would keep rolling out product changes, some more visible than others, and update users on its progress every step of the way. "Twitter is more vulnerable than other social media, because people expect it to be their link to the world, and not just their friends," said Jim McGregor, a principal analyst at Tirias Research.
"People use it for news and for access to quick gossip," he told TechNewsWorld. "Its open-ended structure makes it an easier target for abuse."
To address these problems, Twitter has come up with the following measures:
Latest Offensive
Twitter will identify the owners of accounts it has suspended permanently and block them from creating new accounts, in order to stop repeat offenders from setting up multiple fake accounts.
Safe Search
This involves filtering tweets that contain potentially sensitive content, as well as tweets from blocked and muted accounts, from search results. However, users would have other ways to search for and access those tweets. Under the new system, potentially abusive and low-quality replies will be collapsed, although they will be available if users want to seek them out.
Protection or Cybergagging?
Determining what constitutes cyber harassment, or any other kind of inappropriate behavior on Twitter, is a subjective undertaking, noted Michael Jude, a program manager at Stratecast/Frost.
"As soon as you introduce subjectivity into regulating Twitter, it loses its appeal," he told TechNewsWorld. "One person's freedom of speech is another person's microaggression. Twitter's best bet is to say, 'Abandon all hope ye who enter here.'"
It can be challenging to determine what constitutes an abusive word or a troll, because the content of a conversation is contextual, McGregor added. Friends might couch statements in terms that would be considered inappropriate if relayed to a stranger, he pointed out. "For example, I could tweet the word 's**t' to a friend in response to something he'd said or a news item we were discussing, and it would be all right."
Using artificial intelligence to filter out potentially offending tweets isn't going to resolve the issue, because "AI systems have to learn like humans do, and no AI solution will really work unless you have a finite number of inputs," McGregor pointed out.
The new measures will take effect in the coming months.