
Remember how many times you have received an email informing you of a change in a company's policy, and you just didn't care to read it? Maybe next time, try adding it to your to-do list: read the terms and then click "Agree", rather than taking the I'm-too-lazy-to-read-so-I-opt-in route. The AI era has begun. How your data is used should be your concern, if it wasn't until yesterday. So take your time, read, and decide whether you are okay with it or not.

Photo by Dima Solomin on Unsplash

Meta has been using users' personal data in the U.S. for some time to train its AI models, and it rolled out notifications making users aware of this practice. It turns out that doing the same in the EU and the U.K. isn't so easy. The Data Protection Commission (DPC) posted on its website that it welcomes Meta's decision to pause its plans to use users' personal data for training AI models across the EU/EEA. Meta had originally planned to start using the data by June 26th.

None of your business – oh wait, I wasn't being rude; that's a non-profit organization, noyb, that fights for the right to privacy. It's a wonderful organization, much needed to protect users, and it is based in Europe. So yeah, noyb filed complaints in not one but 11 European countries, urging authorities to step in and stop this before the 26th.

So now Meta has agreed to pause its plans, though it has already put up a blog post lamenting the situation for the EU. One of the statements reads, "But questions still remain: will Europeans have equal access to groundbreaking AI?"

What Meta says is that it is not taking all of your private data. It does not touch the messages you send to friends or family; it only uses content you have posted or commented on publicly. Most importantly, the data is used to train models and is not designed to identify anyone. It is more of an attempt to understand local references and colloquial phrases.

What noyb had to say was more along the lines of, "Hey, don't use our data, and make the opt-out option a bit more visible." According to them, Meta's notification was just an ordinary notification that could easily have been missed. I agree with this. I mean, how many of your notifications do you actually read? Another point they make is that the opt-out option is hard to find. You also have to fill out a form, and then it is at Meta's discretion to accept or reject the request. I also agree that, most of the time, it is difficult to find the opt-in and opt-out options. And imagine clicking through and filling out the form; it is a tedious process, and we just end up feeling, "Well, yeah, I don't have time. You just go ahead with it," putting our trust in the company.

So, yeah, what do you think? I feel Meta is responsible for explaining how it intends to classify the data that would be fed to these models. How do you trust that they are not going to use your personal data? The main priority is that users should be told, in a simple manner, what the company plans to do with their data, with two buttons at the bottom: Opt In and Opt Out. And the form shouldn't be so long and complex that it bores readers away.

Featured Image by Alessio Jacona from Rome, Italy
