

15th Transparency Report: Increase in proactive enforcement on accounts

Thursday, 31 October 2019

We’re continually striving to be more proactive and open in the work we do to serve the public conversation on Twitter. Part of that effort is our biannual Twitter Transparency Report, which we’ve produced since July 2012 to share global trends across a number of areas of our enforcement on Twitter, including the Twitter Rules and legal requests we receive. 

The report is ever-evolving. For the first time, we’re incorporating data and insights regarding impersonation policy enforcement, as well as state-backed information operations datasets that were previously released to the public to empower research and awareness of these campaigns. 

Since the last Twitter Transparency Report, we’ve continued to invest in proactive technology that positively and directly impacts people’s conversations on the service. 

Here are key highlights from that work, which relate to the latest reporting period (January 1 to June 30, 2019)*:

  • More than 50% of Tweets we take action on for abuse are now proactively surfaced using technology, rather than relying on reports to Twitter;
  • 105% increase in accounts actioned by Twitter (locked or suspended for violating the Twitter Rules);
  • Continuing a year-on-year trend, a 30% decrease in accounts suspended for the promotion of terrorism; and
  • 67% more global legal demands, originating from 49 different countries. 

*All figures compared to the last reporting period. 
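The highlights above are all period-over-period deltas. As a rough illustration of how to read them (the account counts below are hypothetical, not figures from the report), a "105% increase" means the current-period count is slightly more than double the previous period's:

```python
def percent_change(previous: float, current: float) -> float:
    """Period-over-period change, expressed as a percentage.

    This is the standard formula behind figures like the
    "105% increase in accounts actioned" above.
    """
    if previous == 0:
        raise ValueError("previous period count must be non-zero")
    return (current - previous) / previous * 100

# Hypothetical example: 100,000 accounts actioned last period,
# 205,000 this period -> a 105% increase.
assert percent_change(100_000, 205_000) == 105.0
```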


Twitter Rules enforcement
Our continued investment in proprietary technology is steadily reducing the burden on people to report to us. For example, more than 50% of Tweets we take action on for abuse are now being surfaced using technology. This compares to just 20% a year ago. 

Additionally, due to a combination of our increased focus on proactively surfacing potentially violative content for human review and the inclusion of impersonation data for the first time, we saw a 105% increase in accounts locked or suspended for violating the Twitter Rules.

Specific policy content areas:

  • Private information: We saw a 48% increase in accounts reported for potential violations of our private information policies. We suspended 119% more accounts than in the previous reporting period. This increase may be attributed to the launch of improvements to our reporting flow that make it easier to report private information, as well as changes to our internal enforcement processes that permit bystanders to report potential private information violations for review.
  • Sensitive media: We saw a 37% increase in accounts reported for potential violations of our sensitive media policies. We actioned 41% more accounts.
  • Hateful conduct: There was a 48% increase in accounts reported for potential violations of our hateful conduct policies. We actioned 133% more accounts compared to the last reporting period.
  • Abuse: We saw a 22% increase in accounts reported for potential violations of our abuse policies. We took action on 68% more accounts compared to the last reporting period.
  • Impersonation: Impersonation is when an account poses as another person, brand, or organization in a confusing or deceptive manner and is prohibited by the Twitter Rules. During this reporting period, we took enforcement action on 124,339 accounts for violating our impersonation policy.

Platform manipulation
We also continue to focus on deterring potentially spammy accounts at the time of account creation, often before their first Tweet. However, our enforcement actions tend to fluctuate for a variety of reasons, often depending on the type of spam involved.

This reporting period, the number of anti-spam challenges we issued (where we ask people to provide a phone number or email address, or to solve a reCAPTCHA, to verify there is a human behind an account) fell by nearly 50%. 

Removal of terrorist content
A total of 115,861 accounts were suspended for violations related to the promotion of terrorism this reporting period, down 30% from the previous reporting period. This continues a year-on-year decrease in the number of accounts promoting terrorist content on Twitter as we take more comprehensive enforcement action using our technology and strengthen partnerships with our peers. Of those suspensions, 87% consisted of accounts we proactively flagged using internal, proprietary removal tools.

Removal of child sexual exploitation content
During this reporting period, we suspended a total of 244,188 accounts for violations related to child sexual exploitation. Of the unique accounts suspended, 91% were surfaced by a combination of technology solutions (including PhotoDNA and internal, proprietary tools).

Legal requests
In addition to enforcing the Twitter Rules, we may also take action in response to legal requests.

  • Copyright violations:

We saw a 101% increase in DMCA takedown notices received since our last report. However, many were incomplete or not actionable. We continue to see a high volume of fraudulent DMCA reports from Turkey and Japan, while fraudulent reports from Brazil also continue to increase.

  • Trademark notices:

We saw a 39% increase in the total number of trademark notices received since our last report. The increase is likely due to an influx of reports that failed to provide sufficient information for us to take action.

  • Information requests (legal requests for account information):

Information requests from the United States continue to make up the highest percentage of legal requests for account information. During this reporting period, 29% of all global requests for account information originated within the United States.

  • National security requests:

At this time, we are only able to share information about National Security Letters (NSLs) that are no longer subject to non-disclosure orders. We believe it is much more meaningful to publish these actual numbers than to report in the bands authorized under the USA FREEDOM Act. Our litigation in the case Twitter v. Barr continues.

During this reporting period, we notified people affected by three additional NSLs after the gag orders were lifted. As reflected in the report, non-disclosure orders for 17 total NSLs have been lifted to date. Twitter is committed to continuing to use the legal mechanism available to us to request a judicial review of these gag orders.

  • Removal requests (legal requests for content removal)*:

Compared to the previous reporting period, we received roughly 67% more legal requests to remove content, originating from 49 different countries. Of the requests received, 80% of the volume originated from Japan, Russia, and Turkey. We withheld content in a country 2,457 times, at either an account or Tweet level.

*When we take action on these legal requests, we continue to publish them directly to the Lumen Database, in partnership with Harvard’s Berkman Klein Center for Internet & Society.

We remain deeply committed to transparency at Twitter — it continues to be one of our key guiding principles. This commitment is reflected in the evolution and expansion of the report in recent years: It now includes dedicated sections on platform manipulation and spam, our Twitter Rules enforcement, and state-backed information operations we’ve previously removed from the service. 

This report reflects not only the evolution of the public conversation on our service but also the work we do every day to protect and support the people who use Twitter. Follow @Policy and @TwitterSafety for relevant updates, initiatives, and announcements regarding our efforts. 
