Automated Systems and Artificial Intelligence

A core function of trust and safety is analyzing content, behavior, and actions across millions or billions of accounts to determine whether a service’s policy has been violated. This scale often requires the use of automated tools, such as machine learning (ML) models and other forms of artificial intelligence (AI). These tools can serve as the “first line of defense” to proactively detect policy-violating content, sometimes before it has been viewed by others. Automation can also be used to route user-generated content for human review more efficiently, measure the effectiveness of policy enforcement, and support more consistent decision-making.
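As a rough illustration (not taken from this chapter), the sketch below shows one common pattern: a model assigns a policy-violation score, clear violations are actioned automatically, borderline content is routed to human reviewers, and the rest is left up. The model, thresholds, and queue names here are hypothetical placeholders.

```python
from dataclasses import dataclass


@dataclass
class ContentItem:
    content_id: str
    text: str


def policy_violation_score(item: ContentItem) -> float:
    """Placeholder for an ML classifier that estimates how likely a piece of
    content is to violate policy (0.0 = benign, 1.0 = clearly violating)."""
    return 0.0  # a real system would call a trained model here


def route(item: ContentItem,
          auto_action_threshold: float = 0.95,
          review_threshold: float = 0.60) -> str:
    """Route content based on the model score:
    - very high scores are actioned automatically (proactive enforcement),
    - mid-range scores go to a human review queue (efficient triage),
    - low scores are left up, possibly sampled later to measure enforcement."""
    score = policy_violation_score(item)
    if score >= auto_action_threshold:
        return "auto_action"
    if score >= review_threshold:
        return "human_review_queue"
    return "no_action"


print(route(ContentItem("c-123", "example post text")))
```

In practice the thresholds encode a policy trade-off between acting quickly on likely violations and reserving scarce human review time for the cases where the model is least certain.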

This chapter unpacks how trust and safety teams build, test, and deploy technologies used for automation, describes common forms of automation, explores challenges associated with developing and deploying automation techniques, and discusses key considerations and limitations of using automation. Because many tools—particularly the more sophisticated models designed to spot policy-violating content—rely on AI, this chapter also discusses potential biases in AI models. Notably, although not covered in this chapter, there is a rich history of scholarship on the ethical development and deployment of AI technologies, much of which applies to the use of AI within trust and safety.

In this chapter, we cover the following: