Instagram's 'deliberate design decisions' make it unsafe for teenagers despite Meta's promises


Despite years of congressional hearings, complaints, academic research, whistleblower testimony and accounts from parents and teenagers about the dangers of Instagram, Meta's wildly popular app has failed to protect children, according to whistleblower Arturo Béjar and four nonprofit groups, who called its safety measures ineffective.

Meta's efforts around teen safety and mental health on its platforms have long been criticized as changes that do not go far enough. Now, a report published Thursday by Béjar, Cybersecurity for Democracy at New York University and Northeastern University, the Molly Rose Foundation, Fairplay and ParentsSOS claims that Meta has chosen not to take "real steps" to address safety concerns, opting instead for attention-grabbing headlines about new tools for parents and teens on Instagram.

Meta said the report misrepresents its efforts to keep teenagers safe.

The report assessed 47 of Meta's 53 youth safety features on Instagram and found that most of them were either no longer available or ineffective. Others reduced harm but came with "notable limitations," while only eight tools worked as intended without restrictions. The report focused on Instagram's design rather than on content moderation.

"This distinction is crucial because social media platforms and their defenders often conflate efforts to improve platform design with censorship," the report said. "Evaluating safety tools and calling out Meta when those tools do not work as promised has nothing to do with free speech. Holding Meta accountable for deceiving young people and parents about how safe Instagram really is is not a free speech issue."

Meta called the report "misleading, dangerously speculative" and said it undermines "the important conversation about teen safety."

"This report repeatedly misrepresents our efforts to empower parents and protect teens, mischaracterizing how our safety tools work and how millions of parents and teens use them today. Teen accounts lead the industry because they provide automatic safety protections and straightforward parental controls," Meta said. "The reality is that teens who were placed into these protections saw less sensitive content, experienced less unwanted contact and spent less time on Instagram at night. Parents also have robust tools at their fingertips, from limiting usage to monitoring interactions. We will continue improving our tools."

Meta has not disclosed what percentage of parents use its parental control tools. Such features can be useful for families where parents are already involved in their child's online life and activities, but experts say that is not the reality for many people.

New Mexico's attorney general, Raúl Torrez, who filed a lawsuit against Meta claiming it fails to protect children from predators, said it was regrettable that Meta puts more effort into convincing parents and children that its platforms are safe than into actually making them safe.

To evaluate Instagram's protective measures, the authors created teen test accounts as well as malicious adult and teen accounts that would try to interact with those accounts.

For example, while Meta has tried to prevent adults from contacting minors in its app, adults could still communicate with minors through many features inherent to Instagram's design, the report says. In many cases, adult accounts were recommended teen accounts through Instagram features such as Reels and "people" suggestions.

"When a minor receives an unwanted sexual advance or inappropriate contact, Meta's product design inexplicably does not include an effective way for the teenager to report the unwanted advance to the company," the report said.

Instagram also pairs its disappearing messages feature for young people with an animated reward as an incentive to use it. Disappearing messages can be dangerous for minors and, according to the report, are used for drug sales and grooming.

Another safety feature, meant to hide or filter out commonly reported offensive words and phrases to prevent harassment, was also found to be "largely ineffective."

"Grossly insulting and misogynistic phrases were among the terms we were able to send freely from one teen account to another," the report said. For example, a message urging the recipient to kill herself, using a vulgar term for women, was not filtered, and no warnings were applied to it.

According to Meta, the tool was never meant to filter all messages, only message requests. The company expanded its teen accounts to users worldwide on Thursday.

As part of these teen safety measures, Meta also promised that young people would not be shown inappropriate content such as posts about self-harm, eating disorders or suicide. The report found that its teen test accounts were nevertheless recommended such content, including "graphic sexual descriptions, the use of cartoons to describe demeaning sexual acts, and brief displays of nudity."

"We were also recommended a range of violent and disturbing content, including people being struck by vehicles, people falling from heights to their death (with the final frames cut so as not to show the impact), and people graphically breaking bones," the report says.

In addition, Instagram recommended to the teen accounts a range of self-harm and body image content that the report said would likely have adverse effects on young people, including teens experiencing poor mental health or suicidal thoughts and behaviors.

The report also found that children under the age of 13 were not only present on the platform, but were encouraged by Instagram's algorithm to perform sexualized behavior such as suggestive dances.

The authors made several recommendations for how Meta could improve teen safety, including regular red-team testing of messaging and blocking controls; a "simple, effective and rewarding way" for young people to report inappropriate behavior or contact in direct messages; and publishing data on teenagers' experiences on the app. They also suggest that content recommendations for a 13-year-old's teen account should be "PG-rated," and that Meta should survey teens about the sensitive content recommended to them, including its "frequency, intensity and severity."

"Until we see meaningful action, teen accounts will remain another missed opportunity to protect children from harm, and Instagram will continue to be an unsafe experience for far too many of our teens," the report says.
