Advancement of AI is Accelerating the Need for a Federal Privacy Law
On July 11, 2024, Senate Committee on Commerce, Science and Transportation Chair Maria Cantwell (D-WA) held a full committee hearing, titled “The Need to Protect Americans’ Privacy and the AI Accelerant,” in which Cantwell stressed the need for a federal privacy law to prevent AI data misuse.
In April, Cantwell and House Energy & Commerce Committee Chair Cathy McMorris Rodgers (R-WA) released a draft of the bipartisan, bicameral American Privacy Rights Act (APRA), which seeks to introduce federal privacy regulations to replace the current patchwork of state laws. The APRA is the successor to the American Data Privacy and Protection Act (ADPPA), which showed great promise but ultimately stalled due to a lack of support. The APRA addresses some of the issues that led to the ADPPA’s failure and was recently amended to attract more support. On May 23, 2024, the U.S. House Committee on Energy and Commerce Subcommittee on Data, Innovation, and Commerce released a revised APRA draft ahead of a scheduled markup by the House Energy and Commerce Committee.
The revised draft no longer includes the sections on civil rights protections. It adds new sections on privacy by design and on data protections for covered minors and the Children’s Online Privacy Protection Act, expands the permitted purposes for public research, imposes further obligations on data brokers, and allows individuals to request that consequential decisions be made by a human rather than an algorithm. The revised draft attracted considerable backlash from civil rights and privacy groups, and following strong opposition from GOP leaders, the markup was canceled; it is unclear when it will now take place.
More than 140 countries have passed comprehensive federal privacy laws, yet in the United States, it has been left to individual states to introduce their own laws. The resulting patchwork of state laws has created regulatory inconsistencies and made compliance challenging for businesses. Now, with the rapid advancement of AI technologies, federal data privacy protections are an even more pressing matter.
There is currently no law that prevents U.S. companies from training their large language models on personal data without informing consumers, and no restrictions on the use of algorithms to make decisions on housing, credit, and employment, which can have serious implications for consumers. Data can also be bought and sold without consumer approval. The lack of regulation has allowed U.S. companies to accelerate the development and use of AI technologies, but that acceleration is coming at the expense of privacy. “Americans’ privacy is under attack,” said Sen. Cantwell at the hearing. “We are being surveilled, tracked online and in the real world through connected devices. Now, when you add AI, it is like putting fuel on a campfire in the middle of a windstorm.”
Experts testified at the hearing that the advancement of AI systems has grave implications for data privacy. AI systems enable detailed consumer profiling and online surveillance, and allow fraud and deepfakes to be carried out at scale with little human involvement and minimal cost. Dr. Ryan Calo, University of Washington School of Law and Co-Director of the University’s Technology Lab, warned that AI technologies, which are trained to recognize patterns in large data sets, are allowing companies to derive sensitive insights about individuals from seemingly innocuous information. For instance, AI can infer information about mental health or pregnancy from non-sensitive data, which creates a serious gap in privacy protection. He also warned that the sensitive insights gained by AI systems are already being used in ways that disadvantage consumers, who are unaware of what is happening behind the scenes. Calo warned that the problem will only get worse.
Amba Kak, Co-Executive Director of the AI Now Institute, explained that the trajectory of AI is at a crucial inflection point. “Without regulatory intervention, we are doomed to replicate the extractive, invasive, and often harmful data practices and business models that have characterized the past decade of the tech industry,” said Kak. “A federal data privacy law, especially one with strong data minimization, could act as a foundational intervention to break this cycle and challenge the culture of impunity and recklessness that is hurting both consumers and competition.” She explained that strong federal data privacy protections would put “reason in place of recklessness”: companies would be required to assess whether the benefits of using new AI components outweigh the potential for harm.

