You have the right to object to Meta using posts, photos, captions and messages you’ve shared on their services to develop and improve Artificial Intelligence (AI). To exercise that right you have to submit a form, including an impact statement. That seems quite an administrative burden, but let's do it. I may not be Scarlett Johansson, but I think we can all agree that it's for the best to keep my personality out of AI too.
Start here. Note that even if you do this, Meta may ignore your objection if you are not covered by a relevant data protection law. And Meta may still process information about you anywhere on their platforms where images of you are shared or you are mentioned in posts by someone else. Short of deleting all Meta products and looking for alternatives that do not behave this way, submitting an impact statement is the best you can do.
TL;DR – Meta owns Facebook, Instagram, WhatsApp and Threads. "Your information" is everything except your private messages between friends and family. This could include posts on pages, photos, captions, and messages not between friends and family, as well as the very specific graph information these services collect about you to serve advertising and analytics. I'm going to suggest you don't want that! You can at least use the form to put in writing your request that your data not be fed into these AI technologies as training data. It requires an impact statement, which is a chore, but I've created one. Whether it really achieves anything remains to be seen; Facebook responded to my form pretty much instantaneously, probably automatically.
Contents
- Steps to object to your information being used for AI at Meta
- Data Protection Impact Assessment
- Objection to Data Use in AI Training at Meta
- Response to "Please tell us how this processing impacts you."
- Response to "Please provide any additional information that could help us review your objection."
- Submitting the form
- Success
Steps to object to your information being used for AI at Meta
You'll need an impact statement. Firstly, I created a Data Protection Impact Assessment (DPIA) to help me write one. Ironically, I did this with a little help from ChatGPT, using the prompt:
"write a short data protection impact assessment on why my data should not be used to train an AI."
A DPIA is a process to help you identify and minimise the data protection risks of an activity. A deep dive on DPIA is available over at the ICO website.
Data Protection Impact Assessment
Purpose of the Assessment: To evaluate the impact of using personal data to train AI models and justify why it should not be used.
Scope: This assessment covers personal data that could be used for training AI, including data collected through digital interactions, personal identifiers, and behavioral data.
Key Risks Identified:
- Privacy Invasion: Using personal data to train AI can lead to unauthorized access and misuse of sensitive information. This may infringe on individual privacy rights and could result in exposure of private details without consent.
- Lack of Control: Individuals lose control over personal data once it is used in AI training. They cannot track how this data is being used, who has access to it, or for what purposes it may be repurposed in the future.
- Potential for Bias and Discrimination: Personal data used in AI training can inadvertently reinforce biases, leading to unfair treatment or discrimination against certain groups. This can have serious ethical and social implications.
- Data Security Concerns: Storing and processing large amounts of personal data increases the risk of data breaches. Unauthorized access to this data can lead to identity theft, financial loss, or other personal harm.
- Regulatory Compliance: The use of personal data for AI training must comply with the GDPR. Non-compliance can result in legal penalties and damage to reputation.
Mitigation Measures:
- Anonymization and Pseudonymization: Ensure that any data used is anonymized or pseudonymized to protect individual identities.
- Informed Consent: Obtain explicit consent from individuals before using their data for AI training, ensuring they are fully aware of how their data will be used.
- Transparency: Maintain transparency about data usage practices and provide clear, accessible information to individuals regarding how their data is being used.
- Regular Audits: Conduct regular audits to ensure data protection measures are effective and compliant with relevant regulations.
Conclusion: Given the significant risks to privacy, lack of control, potential for bias, data security concerns, and stringent regulatory requirements, personal data should not be used to train AI models without robust safeguards. Prioritizing individual rights and ensuring ethical data practices are crucial in mitigating these risks.
Objection to Data Use in AI Training at Meta
To deal specifically with the questions on Meta's form, I again turned to ChatGPT:
"Use this to answer the questions 'Please tell us how this processing impacts you.' and 'Please provide any additional information that could help us review your objection.'"
Response to "Please tell us how this processing impacts you."
The use of my personal data for AI training by Meta significantly impacts me in the following ways:
- Privacy Invasion: Utilizing my personal data without explicit consent is a direct violation of my privacy. This unauthorized access to my sensitive information causes me distress and discomfort, knowing that my private details could be exposed and misused.
- Loss of Control: I lose control over my personal data once it is used in AI training. This lack of transparency means I cannot track how my data is being used, who has access to it, or for what purposes it may be repurposed in the future. This uncertainty and inability to manage my data usage is concerning.
- Risk of Bias and Discrimination: AI models trained on personal data can inadvertently perpetuate biases, leading to unfair treatment and discrimination. The possibility that my data could contribute to such outcomes is troubling and ethically problematic.
- Data Security Concerns: Storing and processing large volumes of personal data heightens the risk of data breaches. Unauthorized access to my data could result in identity theft, financial loss, and other personal harms, impacting my overall sense of security.
- Regulatory Non-compliance: If Meta's data practices do not fully comply with data protection laws such as the GDPR, it exposes me to additional risks and uncertainties regarding the protection of my rights.
Success
"We’ve reviewed your request and will honor your objection. This means your request will be applied going forward.
If you want to learn more about generative AI, and our privacy work in this new space, please review the information we have in Privacy Center. facebook.com/privacy/genai
Privacy Operations."