In comments to the newly established National Artificial Intelligence Research Resource Task Force, EPIC called on the Task Force to prioritize privacy, civil rights, and civil liberties by creating resources for companies to develop purposeful, accountable, transparent, and fair AI. EPIC also urged the Task Force to provide regulators with the resources required to enforce civil rights and consumer protection laws against companies that deploy AI systems. EPIC recently submitted comments to the Office of Management and Budget and the National Security Commission on Artificial Intelligence advising both bodies to follow the Universal Guidelines for AI and to push for actionable legal rights protecting against algorithmic harms.
Facebook announced today that it is pausing work on a kids' version of Instagram after facing widespread criticism. In March 2021, reports leaked that Facebook was planning to build a version of Instagram for children under the age of 13. Those reports drew swift backlash, with consumer protection advocacy groups and politicians urging Facebook to halt the plan. Today's announcement also follows senators' launch of an investigation into Facebook's negative effects on teenagers and a series of Wall Street Journal investigations revealing that Facebook is aware of its harmful effects on users. Commenting on the announcement, Fairplay's Executive Director stated, "Today is a watershed moment for the growing tech accountability movement and a great day for anyone who believes that children's wellbeing should come before Big Tech's profits." EPIC signed onto a letter by the Campaign for a Commercial-Free Childhood, now known as Fairplay, urging Facebook to cancel its plans for Instagram Kids. EPIC has fought for transparency and accountability for Facebook's privacy abuses for over a decade, from filing the original FTC complaint in 2009 that led to the FTC's 2012 Consent Order with the company, to moving to intervene in, and filing an amicus brief challenging, the FTC's 2019 settlement with Facebook.
The Ninth Circuit held today that police violated a defendant's Fourth Amendment rights when they warrantlessly searched files that Google had automatically reported using a proprietary algorithm designed to detect child sexual abuse material ("CSAM"). Prosecutors in the case, United States v. Wilson, had argued that the police officer's search of the defendant's files did not violate the Fourth Amendment because Google, a private party, had conducted the initial search. The district court agreed, finding a "virtual certainty" that the files Google sent to police were identical to files previously identified by a Google employee as CSAM. But no Google employee reviewed the defendant's files before they were sent to police; instead, Google automatically forwarded the files to law enforcement after a proprietary algorithm matched them to previously identified CSAM images. EPIC filed an amicus brief in the Ninth Circuit appeal explaining that prosecutors had failed to show that Google's proprietary algorithm reliably matched images, and urging the court to apply the private search exception narrowly. The Ninth Circuit found that the police search "allowed the government to learn new, critical information" and "expanded the scope of the antecedent private search because the government agent viewed Wilson's email attachments even though no Google employee—or other person—had done so." The court also echoed EPIC's amicus brief: "on the limited evidentiary record, the government has not established that what a Google employee previously viewed were exact duplicates of Wilson's images." The decision diverges from previous federal appellate and state court decisions on the issue and may prompt the Supreme Court to review the important privacy implications of mass automated file-scanning programs.