- cross-posted to:
- technology@lemmy.ml
Danish researchers created a private self-harm network on the social media platform, including fake profiles of people as young as 13 years old, in which they shared 85 pieces of self-harm-related content gradually increasing in severity, including blood, razor blades and encouragement of self-harm.
The aim of the study was to test Meta’s claim that it had significantly improved its processes for removing harmful content, which it says now uses artificial intelligence (AI). The tech company claims to remove about 99% of harmful content before it is reported.
But Digitalt Ansvar (Digital Accountability), an organisation that promotes responsible digital development, found that in the month-long experiment not a single image was removed.
Rather than attempting to shut down the self-harm network, Instagram’s algorithm was actively helping it expand. The research suggested that 13-year-olds became friends with all members of the self-harm group after being connected with just one of its members.
Publication: https://drive.usercontent.google.com/download?id=1MZrFRii_nJYdW8RulORB9JveLkCRbncX&export=download&authuser=0
Couldn’t get a translation in place, so I asked an AI what the researchers’ definition of self-harm is: According to the report, the researchers define self-harm content as material that shows, encourages and/or romanticizes self-harm. This includes content that:
The self-harm content was categorized into 4 levels of increasing severity:
So their definition covers a spectrum from text references to self-harm all the way to explicit visual depictions of serious self-harm acts involving blood. The categories represent an increasing degree of overtness in the self-harm content. [1]