
Our Approach to AI

How does YouTube responsibly approach Generative AI?

Generative AI has already begun to transform the ways that creators can express themselves, from storyboarding ideas to experimenting with music tools. We’re excited to be at the forefront of broadening access to these features to help everyone create. At YouTube, innovation and responsibility go hand in hand, and while we look forward to seeing how creators continue to harness AI across our platform, responsibility remains at the center of all we do.

Our Community Guidelines will continue to define the rules of the road for all content on the platform. We have removed, and will continue to remove, synthetic or altered content that violates any of our Community Guidelines, including the policies that prohibit hate speech, violent or graphic content, and harassment.

We believe it’s in everyone’s interest to maintain a healthy ecosystem of information on YouTube, and we have launched new policies and product experiences to fulfill our commitment to embrace AI boldly and responsibly.

This includes:

  • Transparency tools for altered or synthetic content: A tool in Creator Studio helps creators disclose when their content is meaningfully altered or synthetic and appears real, including content made with generative AI. Creators are required to disclose this content when it’s realistic, meaning a viewer could easily mistake what’s being shown for a real person, place, or event. Based on the creator’s disclosure, a label will appear in the video description, and if the content relates to sensitive topics like health, news, elections, or finance, we will also display a label on the video itself in the player window.

[Image: example of the label shown in the video player]

[Image: example of the label shown in the video description]

While we expect creators to self-disclose when they’ve used altered or synthetic content in their videos, we may also apply a label to some videos where this disclosure hasn’t occurred, especially when the content touches on the sensitive topics mentioned above. A simplified sketch of this labeling logic appears below.
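To make the disclosure-and-label flow above concrete, here is a minimal sketch of the rules as described, written in Python. Every name in it (Video, decide_labels, SENSITIVE_TOPICS, and the label strings) is a hypothetical illustration of the stated behavior, not YouTube’s actual systems or API.

```python
# Hypothetical sketch of the labeling rules described above.
# None of these names come from YouTube's actual systems or API.
from dataclasses import dataclass, field

# Sensitive topics that additionally get a label on the player itself.
SENSITIVE_TOPICS = {"health", "news", "elections", "finance"}


@dataclass
class Video:
    creator_disclosed: bool          # creator used the Creator Studio disclosure tool
    looks_realistic: bool            # could be mistaken for a real person, place, or event
    platform_detected: bool = False  # YouTube may label content even without a disclosure
    topics: set[str] = field(default_factory=set)


def decide_labels(video: Video) -> list[str]:
    """Return which labels the video would receive under the rules above."""
    labels: list[str] = []
    if (video.creator_disclosed or video.platform_detected) and video.looks_realistic:
        # Realistic altered or synthetic content gets a label in the video description.
        labels.append("description_label")
        # Sensitive topics also get a more prominent label in the player window.
        if video.topics & SENSITIVE_TOPICS:
            labels.append("player_label")
    return labels


# Example: a realistic AI-altered video about an election would get both labels.
print(decide_labels(Video(True, True, topics={"elections"})))
# -> ['description_label', 'player_label']
```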

  • Deploying generative AI technology to power content moderation: Generative AI is already helping us rapidly expand the set of information our AI classifiers are trained on, which allows us to identify and catch abusive content more quickly. The improved speed and accuracy of our systems also let us reduce the amount of harmful content human reviewers are exposed to, as the rough sketch below illustrates.
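The moderation point above can be illustrated, very roughly, with a sketch of using generated examples to expand a classifier’s training data. The generate_synthetic_examples helper and the tiny seed dataset are hypothetical placeholders, and the simple scikit-learn pipeline stands in for far more sophisticated production classifiers.

```python
# Rough, hypothetical sketch of expanding a moderation classifier's training
# data with generated examples. The data and helper below are placeholders;
# they do not reflect YouTube's actual models or training pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Small human-labeled seed set (1 = violates policy, 0 = does not).
seed_examples = [
    ("threatening comment aimed at a private individual", 1),
    ("question about the song used in this video", 0),
]


def generate_synthetic_examples():
    # Stand-in for a generative model producing labeled paraphrases and
    # edge cases that broaden what the classifier is trained on.
    return [
        ("harassing remark directed at a creator", 1),
        ("friendly comment thanking a creator for a tutorial", 0),
    ]


texts, labels = zip(*(seed_examples + generate_synthetic_examples()))

# Train a simple text classifier on the expanded (seed + synthetic) data.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(texts, labels)

# Score a new comment; high scores would be routed for review or removal,
# reducing how much harmful content human reviewers see directly.
print(classifier.predict_proba(["another harassing comment"])[0][1])
```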

These are just some of the first steps in an ongoing process. We’ll continue to evolve and iterate to ensure we balance the tremendous benefits this technology offers with the continued safety of our community at this pivotal moment.