Some major advertisers, including Dyson, Mazda, Forbes and PBS Kids, have paused their marketing campaigns or removed their ads from parts of Twitter because their promotions appeared alongside tweets soliciting child sexual abuse material, the companies told Reuters.
DIRECTV and Thoughtworks also told Reuters late on Wednesday that they had suspended advertising on Twitter.
Brands ranging from The Walt Disney Co (DIS.N), NBCUniversal (CMCSA.O) and Coca-Cola Co (KO.N) to a children’s hospital have appeared on the profile pages of Twitter accounts peddling links to sexually exploitative material, according to a Reuters review of accounts identified in a new study of online child sexual abuse by cybersecurity group Ghost Data.
A Reuters review found that some of the tweets included keywords related to “rape” and “teenager” and appeared alongside tweets from corporate advertisers. In one example, a tweet from shoe and accessories brand Cole Haan appeared next to a user’s tweet saying they were “trading teen/kids” content.
“We were shocked,” David Maddocks, Cole Haan’s brand president, told Reuters after learning the company’s ad appeared alongside such tweets. “Either Twitter will fix this or we will do everything possible to fix it, including not buying Twitter ads.”
In another example, a user tweeted seeking content featuring “only young girls, no boys,” which was followed by a tweet from Scottish Rite Children’s Hospital in Texas. Scottish Rite did not respond to multiple requests for comment.
In a statement, Twitter spokeswoman Celeste Carswell said the company has “zero tolerance for child sexual exploitation” and is devoting additional resources to child safety, including hiring new roles to develop policies and implement solutions.
She added that Twitter is working closely with its advertisers and partners to investigate and take steps to prevent this from happening again.
Twitter’s challenges in identifying child abuse content were first reported in an investigation by tech news site The Verge in late August. The emerging pushback from advertisers, which are critical to Twitter’s revenue stream, is reported here by Reuters for the first time.
Like all social media platforms, Twitter prohibits depictions of child sexual exploitation, which are illegal in most countries. But unlike many of its peers, it permits adult content, which makes up about 13 percent of all content on Twitter, according to an internal company document seen by Reuters.
Twitter declined to comment on the amount of adult content on the platform.
Ghost Data identified more than 500 accounts that openly shared or requested child sexual abuse material over a 20-day period this month. Twitter failed to remove more than 70 percent of the accounts during the study period, according to the group, which shared its findings exclusively with Reuters.
Reuters could not independently confirm the accuracy of Ghost Data’s findings, but reviewed dozens of accounts still online that were soliciting material for “13+” and “young nudes.”
After Reuters shared a sample of 20 accounts with Twitter last Thursday, the company removed about 300 accounts from the network, but more than 100 others remained on the site the following day, according to Ghost Data and a Reuters review.
Twitter’s Carswell said on Tuesday that the company had reviewed and permanently suspended the accounts for violating its rules after Reuters shared Ghost Data’s full list of more than 500 accounts on Monday.
In an email to advertisers on Wednesday morning, ahead of the publication of this story, Twitter said it “found ads running on profiles involving the public sale or solicitation of child sexual abuse material.”
Ghost Data founder Andrea Stroppa said the study was designed to assess Twitter’s ability to remove such material. He said he personally funded the research after receiving tips on the topic.
Twitter’s transparency report on its website shows that it suspended more than 1 million accounts last year for child sexual exploitation.
According to its annual report, Twitter submits about 87,000 reports to the National Center for Missing and Exploited Children, a government-funded nonprofit that facilitates information-sharing with law enforcement.
“Twitter needs to address this as soon as possible, and until they do, we will stop any further paid activity on Twitter,” a Forbes spokesperson said.
“This type of content has no place online,” a spokesman for automaker Mazda USA said in a statement to Reuters, adding that in response, the company is now prohibiting its ads from appearing on Twitter profile pages.
A Disney spokesperson called the content “reprehensible” and said the company was redoubling efforts to ensure that “the digital platforms we advertise on and the media buyers we use step up their efforts to prevent mistakes like this from happening again.”
Coca-Cola, whose ad appeared on an account tracked by the researchers, said it does not condone material related to its brand appearing in such contexts. “Any violation of these standards is unacceptable and will be taken very seriously,” a company spokesperson said.
NBCUniversal said it had asked Twitter to remove ads related to inappropriate content.
CODE WORDS
Twitter isn’t the only company grappling with moderation failures related to children’s online safety. Child welfare advocates say the number of known child sexual abuse images has soared from thousands to tens of millions in recent years as predators use social networks, including Meta’s Facebook and Instagram, to groom victims and exchange explicit images.
Among the accounts identified by Ghost Data, nearly all of the traders of child sexual abuse material promoted the material on Twitter and then instructed buyers to contact them via messaging services such as Discord and Telegram to complete payment and receive the files, which were stored on cloud services like New Zealand-based Mega and U.S.-based Dropbox, according to the group.
A Discord spokesman said the company had banned a server and a user for violating rules against sharing links or content that sexualized children.
Mega said the link referenced in the Ghost Data report was created in early August and deleted shortly afterward by the user, whom it declined to identify. Mega said it permanently closed the user’s account two days later.
Dropbox and Telegram said they use a variety of tools to moderate content, but did not elaborate on how they would respond to the report.
The reaction from advertisers poses a risk to Twitter’s business, which generates more than 90 percent of its revenue by selling digital ad space to brands seeking to market to the service’s 237 million daily active users.
Twitter is also fighting in court with Tesla CEO and billionaire Elon Musk, who is trying to back out of a $44 billion deal to buy the social media company amid complaints about the prevalence of spam accounts and their impact on the business.
A team of Twitter employees concluded in a February 2021 report that the company needed more investment to identify and remove child-exploitative material at scale, noting that the company had a backlog of cases to review for possible reporting to law enforcement.
“While the volume of [child sexually exploitative content] has grown exponentially, Twitter’s investment in technology to detect and manage the growth has not,” said the report, prepared by an in-house team to provide an overview of the state of child-exploitation material on Twitter and to obtain legal advice on proposed strategies.
“Recent reports about Twitter provide an outdated, moment-in-time glance at just one aspect of our work in this space, and do not accurately reflect where we are today,” Carswell said.
Internal documents show that traffickers often use code words such as “cp” to refer to child pornography and “deliberately be as vague as possible” to avoid detection.
The harder Twitter cracks down on certain keywords, the more users turn to obfuscated text, which “tend[s] to be harder for [Twitter] to automate against,” the document said.
Ghost Data’s Stroppa said such tricks would complicate efforts to find the material, but noted that his small team of five researchers, with no access to Twitter’s internal resources, was able to find hundreds of accounts within 20 days.
Twitter did not respond to a request for further comment.