In her testimony, Haugen also reiterated that Facebook's uneven coverage of different languages makes these problems worse in non-English-speaking regions.
“In countries that don't have integrity systems in the local language — and in the case of Ethiopia, there are 100 million people and six languages, and Facebook supports only two of those languages for its integrity systems — this strategy of relying on language-specific, content-specific AI systems to save us is doomed to fail,” she said.
She continued: “Investing in non-content-based ways to slow down the platform not only protects our freedom of speech, it protects people's lives.”
We covered this earlier this year in a separate article detailing the limitations of large language models (LLMs).
Despite these linguistic shortcomings, Facebook relies heavily on LLMs to automate its content moderation globally. When the war in Tigray, Ethiopia first broke out in November, [AI ethics researcher Timnit] Gebru watched the platform flounder as it tried to get a handle on a surge of misinformation. This is emblematic of a persistent pattern researchers have observed in content moderation: communities that speak languages not prioritized by Silicon Valley suffer the most hostile digital environments.
Gebru said the harm doesn't end there. When fake news, hate speech, and even death threats go unmoderated, they remain online as training data for building the next generation of LLMs. Those models then learn and reproduce these toxic language patterns across the internet.
How is Facebook content ranking related to adolescent mental health?
One of the more shocking findings from the Journal's Facebook Files was an internal Instagram study showing that the platform was worsening the mental health of teenage girls. “Thirty-two percent of teen girls said that when they felt bad about their bodies, Instagram made them feel worse,” researchers wrote in a March 2020 slide presentation.
Haugen also linked this phenomenon to the engagement-based ranking system, which, she told the Senate today, is “exposing teens to more anorexia content.”
“If Instagram is such a positive force, have we seen a golden age of teenage mental health in the last decade? No, we have seen escalating rates of suicide and depression among teenagers,” she continued. “There is a broad swath of research supporting the idea that the usage of social media amplifies the risk of these mental health harms.”
In my own reporting, I heard from a former AI researcher who had seen this effect extend to Facebook itself.
The researcher's team found that users who tend to post or engage with melancholy content — a possible sign of depression — could easily spiral into consuming increasingly negative material, risking further deterioration of their mental health.
But like Haugen, the researcher found that leadership wasn't interested in making fundamental algorithmic changes.
The team proposed tweaking the content-ranking models for these users so they would stop maximizing engagement alone and surface less of the depressing material. “The question for leadership was: should we be optimizing for engagement if we find that somebody is in a vulnerable state of mind?” he recalls.
But anything that reduced engagement, even for reasons such as not exacerbating someone's depression, led to a lot of hemming and hawing among leadership. With their performance reviews and salaries tied to the successful completion of projects, employees quickly learned to drop those that received pushback and continue working on tasks dictated from the top down.
Meanwhile, the former employee no longer allows his daughter to use Facebook.
How do you fix this?
Haugen opposes breaking up Facebook or repealing Section 230 of the US Communications Decency Act, which shields tech platforms from legal liability for the content they distribute.
Instead, she recommends carving out a more targeted exemption in Section 230 for algorithmic ranking, which she argues would “get rid of engagement-based ranking.” She also advocates a return to Facebook's chronological news feed.
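To make the distinction concrete, here is a minimal, hypothetical sketch — not Facebook's actual code, and with invented field names — contrasting the two approaches. Engagement-based ranking sorts posts by a model's predicted engagement score, while a chronological feed simply sorts by recency:

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    timestamp: int               # e.g., Unix time; higher = more recent
    predicted_engagement: float  # hypothetical model score (clicks/reactions expected)

posts = [
    Post("a", timestamp=300, predicted_engagement=0.2),
    Post("b", timestamp=100, predicted_engagement=0.9),
    Post("c", timestamp=200, predicted_engagement=0.5),
]

# Engagement-based ranking: show whatever the model predicts will keep users engaged,
# regardless of age — provocative or upsetting content often scores highest.
engagement_feed = sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

# Chronological feed: newest first, with no predictive model in the loop.
chronological_feed = sorted(posts, key=lambda p: p.timestamp, reverse=True)

print([p.author for p in engagement_feed])     # ['b', 'c', 'a']
print([p.author for p in chronological_feed])  # ['a', 'c', 'b']
```

The same three posts produce different orderings: the engagement feed surfaces the highest-scoring post first even though it is the oldest, which is the dynamic Haugen argues an exemption for algorithmic ranking would target.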
Frances Haugen says Facebook’s algorithm is dangerous. This is the reason.