YouTube has recently updated its privacy policy regarding AI.
The online video sharing giant introduced a tool in Creator Studio this March that lets creators disclose when content is altered or synthetic (that is, artificially generated) yet appears realistic. Once disclosed, YouTube labels it as “Altered or synthetic content”.
Now YouTube (owned by Google) has quietly updated its policy concerning AI. TechCrunch, as usual, was quick to notice. So, what does this new policy say? Well, if you find any content that is synthetic or has been altered with the help of AI, and it resembles you (and by you, I mean your voice or your looks), you can report it and request its removal. Now, YouTube won’t remove it immediately; it runs through a checklist, including whether the content is actually altered, whether it is disclosed to viewers as altered, whether it is parody or satire, and whether it shows a well-known figure engaging in sensitive behavior (crime, violence, endorsing political candidates). No doubt their intention is to keep us users safe on the platform, but they also made sure to update the policy before the November elections in the US.
Process of raising a complaint
Out of curiosity, I clicked on the link for raising a complaint, you know, just to see the process, not intending to raise one. I wanted to know how they would check that the content really matches our face or voice. As of now, it’s a 6-step process. The first step explains how concerned they are; the second asks whether we feel harassed. In the third step, they ask us to contact the uploader to let them know about the content. If we still decide to proceed, the fourth step asks whether we have reviewed the Community Guidelines. In the fifth step, they warn us that a false report could get our account suspended. Finally, in the last step, aside from two buttons for reporting content that uniquely identifies us, there is an additional button for reporting content that is altered or AI-generated and looks (or sounds) like us.
This then takes you to a form. It starts with the usual personal information: name, country, email, and on whose behalf you are reporting. Next comes the content itself: how many items you wish to report, and whether each one is a channel, a comment, or a video.
Then there is a text box (maximum 1,000 characters) where you describe how you feel your appearance or actions were altered. Another text box asks you to describe yourself so that YouTube can distinguish you from others in the reported content.
After providing the usual details (link, agreement, signature, and date), you can submit the form.
Conclusion
I am a bit confused about how YouTube will verify authenticity, since they are not asking for any picture or audio. Will text alone do? Why would they not require a video clip or a voice clip, or at least a photo? Am I missing something? Will this process work?
Nonetheless, it is good to see companies coming up with updated policies that disallow AI misuse and make people rethink their plans. How much of this is practical and actually followed remains to be seen.