
Scientific American 60-Second Science: AI Learns to Talk Back to Bigots

2020-08-17 | Source: 和谐英语

This is Scientific American's 60-second Science, I'm Christopher Intagliata.
Social media platforms like Facebook use a combination of artificial intelligence and human moderators to scout out and eliminate hate speech. But now researchers have developed a new AI tool that wouldn't just scrub hate speech but would actually craft responses to it, like this: "The language used is highly offensive. All ethnicities and social groups deserve tolerance."
"And this type of intervention response can hopefully short-circuit the hate cycles that we often get in these types of forums."
Anna Bethke, a data scientist at Intel. The idea, she says, is to fight hate speech with more speech—an approach advocated by the ACLU and the U.N. High Commissioner for Human Rights.
So with her colleagues at U.C. Santa Barbara, Bethke got access to more than 5,000 conversations from the site Reddit and nearly 12,000 more from Gab—a social media site where many users banned by Twitter tend to resurface.

The researchers had real people craft sample responses to the hate speech in those Reddit and Gab conversations. Then they let natural-language-processing algorithms learn from the real human responses and craft their own, such as: "I don't think using words that are sexist in nature contribute to a productive conversation."
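The training setup described here, learning to generate interventions from human-written examples, can be sketched as fine-tuning a small sequence-to-sequence model on pairs of hateful posts and the responses people wrote to them. The snippet below is a minimal illustration rather than the study's actual pipeline; the t5-small checkpoint, the "respond:" prefix, and the interventions.csv file are assumptions made for the example.

```python
# Minimal sketch: fine-tune a small seq2seq model to map hateful posts to
# counter-speech responses. Assumes a hypothetical CSV "interventions.csv"
# with columns "post" and "response"; the study's own models and data differ.
import csv

import torch
from torch.utils.data import DataLoader
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

# Load (hateful post, human-written response) pairs.
with open("interventions.csv", newline="", encoding="utf-8") as f:
    pairs = [(row["post"], row["response"]) for row in csv.DictReader(f)]

def collate(batch):
    posts, responses = zip(*batch)
    inputs = tokenizer(["respond: " + p for p in posts],
                       padding=True, truncation=True, return_tensors="pt")
    labels = tokenizer(list(responses),
                       padding=True, truncation=True, return_tensors="pt").input_ids
    labels[labels == tokenizer.pad_token_id] = -100  # ignore padding in the loss
    inputs["labels"] = labels
    return inputs

loader = DataLoader(pairs, batch_size=8, shuffle=True, collate_fn=collate)
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)

model.train()
for epoch in range(3):
    for batch in loader:
        loss = model(**batch).loss  # cross-entropy over the response tokens
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

# Generate an intervention for a new post.
model.eval()
prompt = tokenizer("respond: <some offensive post>", return_tensors="pt")
out = model.generate(**prompt, max_new_tokens=40)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```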
Which sounds pretty good. But the machines also spit out slightly head-scratching responses like this one: "This is not allowed and un time to treat people by their skin color."
And when the scientists asked human reviewers to blindly choose between human responses and machine responses—well, most of the time, the humans won. The team posted the results on the preprint site arXiv and will present them next month in Hong Kong at the Conference on Empirical Methods in Natural Language Processing.
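The blind side-by-side comparison the reviewers performed can be tallied in a few lines: each rater sees two responses to the same post without knowing which one was machine-generated, and the votes are counted per source. The pairs and the stand-in rater below are placeholders, not the paper's data or protocol.

```python
# Minimal sketch of a blind pairwise preference tally: raters see two responses
# to the same post in random order, unaware of which is machine-generated.
import random
from collections import Counter

# Placeholder (machine, human) response pairs for the same hateful posts.
pairs = [
    {"machine": "All ethnicities and social groups deserve tolerance.",
     "human": "Please drop the slurs; they add nothing to the discussion."},
    # ... more pairs ...
]

def blind_presentation(pair):
    """Return the two responses in random order, keeping their true labels."""
    options = list(pair.items())  # [("machine", text), ("human", text)]
    random.shuffle(options)
    return options

def rate(options):
    """Stand-in for a human rater's pick; a real study collects human votes."""
    return random.choice(options)[0]

votes = Counter(rate(blind_presentation(p)) for p in pairs)
total = sum(votes.values())
for source in ("human", "machine"):
    print(f"{source} preferred: {votes[source]}/{total}")
```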
Ultimately, Bethke says, the idea is to spark more conversation.
"And not just to have this discussion between a person and a bot but to start to elicit the conversations within the communities themselves—between the people that might be being harmful and those they're potentially harming."
In other words, to bring back good ol' civil discourse?
"Oh! I don't know if I'd go that far. But it sort of sounds like that's what I just proposed, huh?"
Thanks for listening for Scientific American's 60-second Science. I'm Christopher Intagliata.