Despite Efforts by Google and Facebook, Fake News Keeps Appearing
The social media website Facebook and the search engine Google are struggling to ensure that what they present as news is correct.
The huge internet companies have been fighting “fake news” for nearly one year.
But their efforts have not been as successful as many people would like.
The internet services are designed to provide news and information that interests users. Now, people at the companies are finding it is not easy to make sure that this information is truthful.
This is especially the case when so-called internet “trolls” use these websites. Trolls are people who try to make others angry with their posts. And people with harmful plans continuously work to avoid the new controls that the companies put in place to stop them.
Immediately after the mass shooting in Las Vegas in early October, Facebook launched its “Crisis Response” page for the event. The page was meant to share helpful information about the attack. But it also shared a false news story that wrongly identified the gunman and claimed he was politically “far left” and mentally ill.
Google’s “Top Stories” search results shared a similar story from the website 4chan. It also wrongly identified the gunman.
The site 4chan is an imageboard website, where users mainly post images and the most recent postings appear first. It lets users post things without revealing their identity.
One day after the attack, searches for “Las Vegas shooting” on the video-sharing website YouTube, which is owned by Google, showed one video as the fifth result. The video presented a conspiracy theory claiming that several people were involved in the attack.
The stories were not true. Police identified the gunman as Stephen Paddock from the American state of Nevada. The reason why he shot at people at the music festival on October 1 is still not known. His attack left 58 people dead and hundreds wounded.
Facebook and Google quickly removed the stories and changed the way stories appeared on their websites. These changes tried to give greater importance to sources of information with stronger knowledge of the events.
But the companies still have a lot of work to do.
Why do these mostly computer-operated services keep failing to separate truth from lies? One reason is that most internet-based services bring greater attention to posts that get and keep users interested.
This is exactly what a lot of fake news is designed to do.
David Carroll is a professor of media design at the Parsons School of Design in New York City. He told the Associated Press that Facebook and Google get caught without warning because their computer programs “look for signs of popularity.”
That problem is much bigger soon after a disaster, when facts are still unclear and the demand for information is very high.
Mandy Jenkins told the Associated Press that people with harmful plans make use of this problem. Jenkins is the head of news at the news and social media research agency Storyful.
“They know how the sites work,” she said. “They know how the media works.”
Users of 4chan’s “Politically Incorrect” page often discuss “how to deploy fake news strategies,” Dan Leibson told the Associated Press. Leibson is the vice president of search at the internet advertising company Local SEO Guide.
Hours after the deadly attack in Las Vegas, people were talking about it online and promoting their theories. Leibson said there were people discussing how to get and keep reader interest all night.
More and more people around the world have been separating themselves from others because of their political beliefs. This has made the question of what makes a news source credible a point of disagreement.
Many reporters with the largest and most popular media companies often express opinions about the credibility of different publications. They base their opinions on the history of the publications and how strongly they present facts.
But that is a much more complex issue for services seeking to appeal to millions of people like Facebook and Google. This is especially true given how popular many false news sources are among different political groups.
Gateway Pundit, for example, is a website that supports President Donald Trump. It published the false Las Vegas story that Facebook shared. Its reporters also have received an invitation to White House press briefings. And it has more than 620,000 fans on its Facebook page.
Earlier this month, Facebook said it is “working to fix the issue” that led it to share false reports about the Las Vegas shooting. However, it did not say exactly what it planned to do.
The company has been taking steps to fight fake news since last December. It now includes fact reviews from outside organizations. It also puts warnings on disputed stories and has taken attention away from false stories on people’s social media pages.
Making sure facts are correct is harder with developing news
News that is developing is especially difficult for computer-operated systems to judge. Google said the 4chan post that wrongly identified the Las Vegas gunman should not have appeared among its “Top Stories.” However, it took the computer program that controls Google search results a few hours to replace it.
Outside experts said two different issues created problems for Google.
First, its “Top Stories” program provides results from all of the internet, not only from news agencies. Second, the signals that help Google examine the credibility of a website are not available in breaking news situations.
Matthew Brown is an expert who helps internet search engine companies improve their operations.
Brown told the Associated Press: “If you have enough … references to something … that’s going to look good to Google.”
United States federal law does not currently hold Facebook, Google and other similar companies responsible for material their users publish. But recent events are forcing the technology companies to accept more responsibility for the information they spread.
Also earlier this month, Facebook said it would employ an extra 1,000 people to look at advertisements. This comes after the company found that a Russian agency had bought ads meant to influence the 2016 U.S. presidential election.
Sally Lehrman is the project director of the Trust Project at Santa Clara University in California. The project is supported by Google.
She says Google is working to include markers such as where reporters are from, awards they have received and other information. Future computer programs that run Google’s search could then use this information to decide which news stories people see.
I’m Pete Musto. And I’m Anna Matteo.
Barbara Ortutay and Ryan Nakashima reported this for the Associated Press. Pete Musto adapted it for VOA Learning English. Mario Ritter was the editor.
We want to hear from you. How responsible are companies like Facebook and Google for the materials their users publish? How can they best fight fake news? Write to us in the Comments Section or at www.hxen.net.