The privacy paradox with AI


Greenspoon Marder LLP

October 31, 2023 – As artificial intelligence (“AI”) rapidly advances across industries, it is transforming the way we live, work, and interact. One of the most notable developments is AI’s potential to affect privacy rights and the protection of users’ personal data.

The spotlight on data privacy has intensified in recent years. High-profile lawsuits against Silicon Valley giants, escalating public concern about data privacy, and landmark legislative actions globally have underscored the critical and urgent nature of this issue. Sweeping regulations, both national and international, have been enacted to safeguard consumers and their data. However, these privacy regulations were conceived in a pre-AI era and could scarcely foresee the profound implications of AI’s rapid evolution.

The European Union’s General Data Protection Regulation (“GDPR”) is the most comprehensive privacy regulation in the world. It governs data protection and privacy for all individuals within the EU and the European Economic Area and provides extensive rights to data subjects. The GDPR also imposes strict obligations on data controllers and processors, requiring them to implement data protection principles and adhere to stringent standards when handling personal data.

In the United States, privacy laws consist of both federal and state regulations. At the federal level, sector-specific laws such as the Health Insurance Portability and Accountability Act (“HIPAA”) and the Children’s Online Privacy Protection Act (“COPPA”) protect specific types of data or apply to certain industries. However, there is no overarching federal privacy law that comprehensively addresses AI-driven data processing. At the state level, the California Consumer Privacy Act (“CCPA”) is the most robust privacy law in the United States, granting California residents extensive rights over their personal data and imposing obligations on businesses that collect, use, or sell their information.

AI, at its core, leverages machine learning algorithms to process data, facilitate autonomous decision-making, and adapt to changes without explicit human instruction. The technology has pervaded almost every industry, from health care to fashion, finance to agriculture, and beyond. As this technology continues to expand across these industries, it creates a labyrinth of privacy concerns, thereby challenging traditional norms of personal data protection.

AI’s privacy dilemma rests on a handful of key issues. First, the technology’s insatiable appetite for extensive personal data to feed its machine learning algorithms has raised serious concerns about data storage, usage, and access. Where does this data come from? Where is it stored? Who can access it, and under what circumstances? These are questions that traditional data protection laws are not equipped to answer.

Moreover, AI’s remarkable capacity to analyze vast quantities of data and draw complex inferences amplifies privacy concerns. The technology’s potential to infer sensitive information, such as a person’s location, preferences, and habits, poses risks of unauthorized data dissemination. Coupled with the potential for identity theft and unwarranted surveillance, AI presents a unique set of challenges that demand immediate, proactive solutions.

AI developments are prompting a need for ethical guidelines and best practices to minimize privacy risks. Several industry leaders have already moved to address these concerns: in March 2023, Elon Musk joined other technology leaders and researchers in signing an open letter calling for a six-month pause on the training of the most powerful AI systems in order to assess the technology’s societal impact. The letter served as a wake-up call for the industry to scrutinize AI’s implications more closely.

Several esteemed bodies have risen to this challenge by proposing ethical benchmarks. The Partnership on AI (PAI), a coalition of leading companies, organizations, and individuals impacted by artificial intelligence, stands out as a beacon. By amalgamating diverse stakeholders — from tech giants to AI users — PAI creates a shared platform, fostering collaboration between entities that might not typically interact. Its mission hinges on establishing common ground, positioning PAI as a unifying catalyst for positive change within the AI ecosystem.

Meanwhile, the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems has a clear directive: “to ensure every stakeholder involved in the design and development of autonomous and intelligent systems is educated, trained, and empowered to prioritize ethical considerations so that these technologies are advanced for the benefit of humanity.” The initiative champions the idea that AI, in its design and application, should inherently prioritize human welfare, ensuring that ethical considerations aren’t mere afterthoughts but are integral to AI’s evolution.

Another body formed in furtherance of this cause is the United Nations’ Multistakeholder Advisory Body on Artificial Intelligence, conceived as part of the Secretary-General’s Roadmap for Digital Cooperation in 2020. Recognizing the duality of AI’s potential — its immense benefits and the substantial risks to user safety and agency — this body underscores the necessity for heightened multi-stakeholder efforts in AI cooperation on a global scale. It is currently in formation, poised to spearhead analyses and advance recommendations for the international governance of AI.

Together, these entities and their guidelines underscore a collective commitment to meld AI’s progress with the tenets of transparency, accountability, fairness, and the overarching umbrella of privacy.

Even major tech conglomerates like IBM are acknowledging their responsibility for AI’s societal impact. They actively display their ethical principles on their websites, and in June 2020 IBM announced that it would no longer sell general-purpose facial recognition technology. “Why It Matters That IBM Abandoned Its Facial Recognition Technology,” Forbes, June 18, 2020.

This decision reflects both a concern about potential misuse and an effort to advocate for a broader dialogue on the technology’s appropriate use. Such initiatives underline a growing consensus to address the ethical, legal, and societal implications of AI and promote best practices.

For effective navigation of the privacy paradox presented by AI, a sophisticated, multifaceted approach is necessary. The role of lawmakers and policymakers in this context cannot be overstated. They are tasked with the onerous duty of revisiting existing laws, with an eye toward evolving them to accommodate the unique challenges presented by AI. This includes establishing strict regulations on AI-driven data-processing technologies and demanding greater transparency from developers about their algorithms and data sources.

It is also important that policymakers actively encourage and engage in public discourse on the delicate equilibrium between public safety and individual privacy rights. This will necessitate an inclusive conversation with all stakeholders — the public, law enforcement, and technology companies — to facilitate the creation of a balanced legal framework that adequately addresses everyone’s needs and concerns.

Legislators are seeing the wisdom of adopting a proactive approach, anticipating future developments in AI technology and establishing preemptive measures, rather than merely reacting to existing challenges. The rapidly evolving nature of AI demands a corresponding dynamism in regulatory frameworks.

In a significant gathering steered by Senate Majority Leader Chuck Schumer on Sept. 13, 2023, the inaugural AI Insight Forum in Washington convened more than 60 senators, high-profile tech CEOs, and representatives from civil society to deliberate on the prospective regulation of the artificial intelligence industry. Attendees underscored the “above zero” existential risk posed by AI, suggesting that mishandling its rise could have “severe” repercussions.

While there was unanimity regarding the need for federal oversight of AI, the specifics remain undetermined. The ambitious agenda, spanning nine sessions, marks the onset of a rigorous legislative endeavor to foster the safe and beneficial development of AI technologies, balancing innovation with precaution in an industry known for both its tremendous promise and its potential perils.

Despite a shared commitment to nurturing innovation through increased federal investment in research and development, key issues, such as the formation of a dedicated federal agency to oversee AI, remained notably absent from the discourse. That omission illustrates the challenging path lawmakers have embarked on in their journey to navigate the uncharted waters of AI regulation.

Finally, AI developers and tech companies have a significant role to play in this arena, prioritizing ethical considerations and integrating industry best practices into their development processes. Incorporating privacy-by-design principles, engaging in self-regulation, and actively participating in industry initiatives can help them build a foundation of trust in their technologies while mitigating potential privacy risks.

More specifically, AI developers have been urged to create models that respect user privacy by minimizing data requirements and implementing robust data protection measures. Innovative approaches such as differential privacy and federated learning, which offer new ways of learning from data without compromising privacy, are also emerging.
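
To make the first of these concrete, consider differential privacy: rather than releasing an exact statistic computed from personal data, a system adds calibrated random noise so that the published result reveals almost nothing about any single individual’s record. The Python sketch below illustrates the classic Laplace mechanism applied to a mean; the function name, dataset, and epsilon value are purely illustrative, and the code is a simplified teaching sketch rather than a production-grade implementation.

```python
import numpy as np

def dp_mean(values, epsilon, lower, upper):
    """Differentially private mean via the Laplace mechanism (a sketch).

    Each value is first clipped to [lower, upper], so replacing one
    person's record can shift the mean by at most (upper - lower) / n;
    that bound is the query's "sensitivity." Laplace noise with scale
    sensitivity / epsilon is then added: a smaller epsilon yields
    stronger privacy but a noisier published result.
    """
    clipped = np.clip(values, lower, upper)
    true_mean = clipped.mean()
    sensitivity = (upper - lower) / len(clipped)
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_mean + noise

# Hypothetical example: publish the average age of a small user base.
ages = [34, 29, 41, 55, 23, 38, 47, 31, 62, 27]
print(dp_mean(ages, epsilon=0.5, lower=18, upper=90))
```

Each run returns a slightly different answer; it is the noise, rather than withholding the data outright, that limits what an observer can learn about any one person. Federated learning takes a complementary approach, training models on users’ devices so that raw personal data never leaves their control.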

As AI technologies proliferate at an extraordinary pace, the corresponding legal frameworks, ethical guidelines, and industry practices must adapt with equal speed to address emerging challenges. The dialogue on AI’s impact on data privacy is ongoing and complex, necessitating sustained engagement from policymakers, technology developers, and the public. The future of AI-driven data processing and its impact on privacy rights will undoubtedly be shaped by these dialogues and the actions that stem from them.

By advocating for collaboration, transparency, and accountability, we can harness the potential benefits of AI technology responsibly, while preserving an unwavering commitment to safeguarding the fundamental rights of individuals. The AI privacy paradox represents one of the most significant challenges of our time. As we move forward, we must ensure that our pursuit of technological advancement does not come at the cost of our privacy rights.

Opinions expressed are those of the author. They do not reflect the views of Reuters News, which, under the Trust Principles, is committed to integrity, independence, and freedom from bias. Westlaw Today is owned by Thomson Reuters and operates independently of Reuters News.


Gai Sher

Gai Sher is senior counsel in the innovation and technology practice group at Greenspoon Marder LLP, where she represents and advises startups, emerging growth companies, brands, creators, and executives in media, technology, and consumer products in all aspects of commercial transactions. She can be reached at [email protected].

Ariela Benchlouch

Ariela Benchlouch is an associate in the corporate and innovation and technology practice groups at the firm. She focuses her practice on innovative technology, fintech, entertainment and media law, and digital asset transactions. She can be reached at [email protected].

