YouTube, the online video-sharing platform built on user-generated content, has announced new rules intended to address users' concerns about the "new risks" posed by Artificial Intelligence (AI)-generated content appearing on the site.
What Are YouTube’s New AI Content Rules?
as generative AI unlocks new forms of creativity, we’re introducing measures that balance innovation with our responsibility to protect the YouTube community
learn more about upcoming changes to AI disclosures, privacy options, and more → https://t.co/fyvXZIdx6b
— YouTube Creators (@YouTubeCreators) November 14, 2023
In its announcement, YouTube explains the new disclosure requirement, stating, "We believe it's in everyone's interest to maintain a healthy ecosystem of information on YouTube. We have long-standing policies that prohibit technically manipulated content that misleads viewers and may pose a serious risk of egregious harm. However, AI's powerful new forms of storytelling can also be used to generate content that has the potential to mislead viewers—particularly if they're unaware that the video has been altered or is synthetically created."
The announcement continues, "To address this concern, over the coming months, we'll introduce updates that inform viewers when the content they're seeing is synthetic. Specifically, we'll require creators to disclose when they've created altered or synthetic content that is realistic, including using AI tools."
The company added, "When creators upload content, we will have new options for them to select to indicate that it contains realistic altered or synthetic material. For example, this could be an AI-generated video that realistically depicts an event that never happened, or content showing someone saying or doing something they didn't actually do."
That being said, YouTube is not regulating AI itself; rather, it is adding disclosure requirements and moderation tools to manage AI-generated content on its platform.