Google-owned YouTube is reworking the recommendation algorithm that suggests which videos users should watch next, in a new bid to stem the flow of conspiracy theories and false information on the massive video platform.
The company, which has been criticized by lawmakers and called out in studies for pushing viewers toward fraudulent content and conspiracy theories, said in a Friday blog post that it is taking a “closer look” at how to reduce the spread of content that does not quite violate YouTube’s Community Guidelines but comes close to doing so.
“To that end, we’ll begin reducing recommendations of borderline content and content that could misinform users in harmful ways—such as videos promoting a phony miracle cure for a serious illness, claiming the earth is flat, or making blatantly false claims about historic events like 9/11,” the company said in its blog post.
YouTube’s recommendation algorithm, a secretive formula that determines which clips are promoted in the “Up Next” column beside the video player, drives a large percentage of traffic on the video platform, where over a billion hours of footage are watched each day.
However, the company said this latest tweak will apply to less than one percent of the content on YouTube and will not impact whether a video is allowed on the site.
“This change relies on a combination of machine learning and real people. We work with human evaluators and experts from all over the United States to help train the machine learning systems that generate recommendations,” YouTube said. “These evaluators are trained using public guidelines and provide critical input on the quality of a video.”
A Washington Post investigation in December found that — one year after YouTube promised to curb “problematic” videos — the tech giant continued to harbor and recommend hateful, conspiratorial videos and permitted racists and anti-Semites to harness the video platform as a way of spreading their views. After the Parkland school shooting, one of the top trending videos on YouTube claimed that a survivor of the incident was a “crisis actor.” The company later took down the video and apologized.