We saved hundreds of Biden-era AI documents, so you don't have to

Image credit: Adapted from Anton Grabolle / Better Images of AI / Classification Cupboard / CC-BY 4.0


On January 20th, Trump will begin his second term in office.

If Trump and his administration follow through on their campaign promises, the transition of power will bring devastating consequences for our fundamental rights and freedoms, particularly for immigrants, reproductive health care seekers, people of color, poor people, and queer and trans communities.

Among the many anticipated impacts, the new administration has signaled it intends to reverse Biden-era progress related to protecting our civil rights and civil liberties in the realms of privacy and artificial intelligence.

The day after the election, Trump announced his intention to repeal a Biden-era executive order regulating AI. Since then, Trump has granted Elon Musk unprecedented influence over government affairs and hosted scores of tech leaders at Mar-a-Lago. Tech CEOs have in turn donated millions to Trump’s inauguration, with contributions from Apple, Meta, Amazon, and OpenAI far outpacing their donations to Biden in 2020.

These and other developments foreshadow an incoming administration that will take a lax approach to regulating technology companies.

Over the last four years, the Biden administration made important strides in this area, including a directive increasing transparency into government use of AI, FTC enforcement against location data brokers, and a DOJ lawsuit challenging the legality of RealPage’s rent price-fixing algorithm.

These efforts were buttressed by legal documents, reports, blogs, and other records laying out the administration’s efforts to protect consumers and hold Big Tech accountable. But once administrative agencies change hands under Trump, there is no guarantee that documentation of these and other initiatives will remain accessible to advocates, journalists, and interested members of the public.

So, we saved them.

Below, you can view archived copies of over 250 documents and webpages on topics like algorithmic discrimination, generative AI, and biometric surveillance. Try filtering by agency (e.g., DHS, FTC) or keywords (e.g., AI use case inventory, Kochava, risk). Use the drop-down menu to change how many rows you can see at a time.

No matter what happens, the ACLU of Massachusetts remains committed to fighting for law reforms to protect the public interest, civil rights, and civil liberties. Click here to find out ways you can get involved.


| Agency | Document | Source |
| --- | --- | --- |
| Administrative Conference | AI and regulatory enforcement | Original source |
| Administrative Conference | Guidance on agency use of AI (2020) | Original source |
| Administrative Conference | Report on automated legal guidance at federal agencies | Original source |
| ANSI | Public private partnerships | Original source |
| CFPB | Adverse action notification on credit algorithms | Original source |
| CFPB | AI in financial services - comment | Original source |
| CFPB | AI in home appraisals - blogpost | Original source |
| CFPB | AI in home appraisals - rule | Original source |
| CFPB | Background dossiers and AI in employment | Original source |
| CFPB | Call to action for tech whistleblowing | Original source |
| CFPB | Fair Credit Reporting and Name-Only Matching Procedures | Original source |
| CFPB | Guidance on black box algorithms | Original source |
| CFPB | Guidance on credit denials using AI | Original source |
| CFPB | No-action letter to Upstart - blogpost | Original source |
| CFPB | No-action letter to Upstart (credit lender) | Original source |
| CFPB | On false matches in tenant and employment screening | Original source |
| CFPB | Proposed registry for AI harm repeat offenders | Original source |
| CFPB | Rights for job seekers on background screening | Original source |
| CFPB | Rights for tenants on rental application denials | Original source |
| CFPB | Tenant screening - consumer snapshot | Original source |
| CFPB | Tenant screening - market report | Original source |
| Chief Information Officers Council | Guidance for federal agencies on AI use case inventory (2024) | Original source |
| Commerce | Department of Commerce AI use case inventory (2023) | Original source |
| Congress | Advancing American AI Act | Original source |
| Copyright Office | Copyright and AI: Digital Replicas | Original source |
| Department of State | Department of State AI use case inventory (2024) | Original source |
| Department of State | Department of State use of AI compliance plan | Original source |
| DHS | DHS AI roadmap (2024) | Original source |
| DHS | DHS AI use case inventory - blog post | Original source |
| DHS | DHS AI use case inventory - landing page | Original source |
| DHS | DHS AI use case inventory (2022) | Original source |
| DHS | DHS AI use case inventory (2023) | Original source |
| DHS | DHS AI use case inventory (2024) | Original source |
| DHS | DHS playbook for GenAI in the public sector | Original source |
| DHS | DHS simplified AI use case inventory landing page | Original source |
| DHS | DHS use of AI compliance plan | Original source |
| DHS | Press release on DHS playbook on GenAI in the public sector | Original source |
| DOD | DOD use of AI compliance plan | Original source |
| DOJ | AI and disability discrimination in hiring | Original source |
| DOJ | DOJ AI use case inventory | Original source |
| DOJ | DOJ Sues RealPage for Algorithmic Pricing Scheme | Original source |
| DOJ | DOJ Sues Six Large Landlords for Algorithmic Pricing Scheme | Original source |
| DOJ | US et al. v. RealPage - proposed final judgment | Original source |
| DOJ | US vs RealPage | Original source |
| DOJ | US vs RealPage - amended complaint | Original source |
| DOL | AI and ADS under Fair Labor Standards Act | Original source |
| DOL | AI and worker well-being principles | Original source |
| DOL | Federal contractors use of AI | Original source |
| State | Risk Management Profile for AI and Human Rights | Original source |
| DOT | DOT AI use case inventory | Original source |
| DOT | DOT use of AI compliance plan | Original source |
| Education | AI Discrimination in Education | Original source |
| Education | Department of Education AI use case inventory | Original source |
| Education | Department of Education use of AI compliance plan | Original source |
| Election Assistance Commission | Election Assistance Commission use of AI compliance plan | Original source |
| Energy | Department of Energy AI use case inventory (2023) | Original source |
| Energy | Department of Energy GenAI Reference Guide | Original source |
| Energy | Department of Energy use of AI compliance plan | Original source |
| EEOC | Addressing adverse impact of AI in employment under CRA | Original source |
| EEOC | Artificial Intelligence and Algorithmic Fairness Initiative | Original source |
| EEOC | EEOC use of AI compliance plan (2024) | Original source |
| EEOC | Guidance on AI in employment and ADA | Original source |
| EEOC | Implications of big data for equal employment opportunity law | Original source |
| EEOC | iTutorGroup age discrimination lawsuit | Original source |
| EEOC | iTutorGroup settlement | Original source |
| EEOC | Press release on guidance on AI in employment and ADA | Original source |
| EEOC | Tips for workers on AI and ADA | Original source |
| EEOC | Visual Disabilities in the Workplace and ADA | Original source |
| EEOC | Wearables in the workplace under federal discrimination laws | Original source |
| OET | AI and future of teaching and learning | Original source |
| EPA | EPA use of AI compliance plan | Original source |
| Executive Office of the President | Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government | Original source |
| FCC | AI in robocalls and proposed rule on robotexts | Original source |
| Federal Housing Finance Agency | Federal Housing Finance Agency use of AI compliance plan (2024) | Original source |
| FTC | AI and the risk of consumer harm | Original source |
| FTC | Approaches to Address AI-enabled Voice Cloning | Original source |
| FTC | Best practices for use of FRT | Original source |
| FTC | Blogpost on location data cases | Original source |
| FTC | Explainer on proposed settlements with Avast, X-Mode, and InMarket | Original source |
| FTC | Explainer on real-time bidding | Original source |
| FTC | Explainer on surveillance pricing | Original source |
| FTC | FTC vs Facebook (2012) settlement | Original source |
| FTC | FTC on deceptive AI claims | Original source |
| FTC | FTC vs Ascend Ecom deceptive claim lawsuit | Original source |
| FTC | FTC vs DoNotPay "AI lawyer" deceptive claim - agreed consent order | Original source |
| FTC | FTC vs DoNotPay "AI lawyer" deceptive claim - complaint | Original source |
| FTC | FTC vs Ecommerce Empire deceptive claim complaint | Original source |
| FTC | FTC vs Everalbum Photo App complaint | Original source |
| FTC | FTC vs Everalbum Photo App press release | Original source |
| FTC | FTC vs Everalbum settlement | Original source |
| FTC | FTC vs Facebook (2012) settlement order press release | Original source |
| FTC | FTC vs Facebook settlement order | Original source |
| FTC | FTC vs Facebook settlement order press release | Original source |
| FTC | FTC vs FBA machine deceptive claim | Original source |
| FTC | FTC vs FBA machine order | Original source |
| FTC | FTC vs Flo Health press release | Original source |
| FTC | FTC vs Flo Health statement on settlement | Original source |
| FTC | FTC vs GravyAnalytics complaint | Original source |
| FTC | FTC vs GravyAnalytics concurring statement (1 of 3) | Original source |
| FTC | FTC vs GravyAnalytics concurring statement (2 of 3) | Original source |
| FTC | FTC vs GravyAnalytics concurring/dissenting statement (3 of 3) | Original source |
| FTC | FTC vs GravyAnalytics press release | Original source |
| FTC | FTC vs GravyAnalytics proposed order | Original source |
| FTC | FTC vs InMarket complaint | Original source |
| FTC | FTC vs InMarket press release | Original source |
| FTC | FTC vs InMarket press release on finalized order | Original source |
| FTC | FTC vs InMarket proposed order | Original source |
| FTC | FTC vs Intellivision (unsupported claims about FRT) complaint | Original source |
| FTC | FTC vs Intellivision blogpost | Original source |
| FTC | FTC vs Intellivision press release | Original source |
| FTC | FTC vs Intellivision proposed consent order | Original source |
| FTC | FTC vs Kochava amended complaint | Original source |
| FTC | FTC vs Kochava case page | Original source |
| FTC | FTC vs Kochava complaint | Original source |
| FTC | FTC vs Kochava concurring statement | Original source |
| FTC | FTC vs Kochava memorandum decision 02/2024 | Original source |
| FTC | FTC vs Kochava press release | Original source |
| FTC | FTC vs MobileWalla complaint | Original source |
| FTC | FTC vs MobileWalla complaint | Original source |
| FTC | FTC vs MobileWalla proposed settlement order | Original source |
| FTC | FTC vs RiteAid complaint | Original source |
| FTC | FTC vs RiteAid concurring statement | Original source |
| FTC | FTC vs RiteAid press release | Original source |
| FTC | FTC vs RiteAid proposed order | Original source |
| FTC | FTC vs Rytr (writing assistant) deceptive claim complaint | Original source |
| FTC | FTC vs Rytr proposed order | Original source |
| FTC | FTC vs X-Mode agreement with consent order (Jan 2024) | Original source |
| FTC | FTC vs X-Mode analysis of proposed consent order | Original source |
| FTC | FTC vs X-Mode and Outlogic press release on finalized order | Original source |
| FTC | FTC vs X-Mode and Outlogic press release on outcome | Original source |
| FTC | FTC vs X-Mode complaint (April 2024) | Original source |
| FTC | FTC vs X-Mode complaint (Jan 2024) | Original source |
| FTC | FTC vs X-Mode decision and order (April 2024) | Original source |
| FTC | FTC vs X-Mode proposed order (Jan 2024) | Original source |
| FTC | Health app breach notification rule | Original source |
| FTC | Impersonation rule press release | Original source |
| FTC | Operation AI Comply | Original source |
| FTC | Policy statement on use of biometric information | Original source |
| FTC | Press release on health app breach notification rule | Original source |
| FTC | Statement by FTC commissioner on health app breaches | Original source |
| FTC | Surveillance pricing order press release | Original source |
| FTC | Surveillance pricing order to file report | Original source |
| FTC | Warning about biometric surveillance | Original source |
| GAO | Gov AI accountability highlights | Original source |
| GAO | Gov AI accountability report | Original source |
| General Services Administration | GSA use case inventory | Original source |
| General Services Administration | GSA use of AI compliance plan - governance | Original source |
| General Services Administration | GSA use of AI compliance plan - responsible innovation | Original source |
| General Services Administration | GSA use of AI compliance plan - risks | Original source |
| Health | Department of Health AI use case inventory (2024) | Original source |
| HHS | U.S. Department of Health and Human Services use of AI compliance plan | Original source |
| HUD | Fair Housing Act and use of criminal records in housing transactions | Original source |
| HUD | Fair Housing Act guidance on AI | Original source |
| HUD | Guidance on AI in advertising of housing, credit and real estate | Original source |
| HUD | Guidance on AI in tenant screening | Original source |
| HUD | HUD AI use case inventory (2023) | Original source |
| HUD | HUD AI use case inventory (2024) | Original source |
| HUD | HUD use of AI compliance plan (2024) | Original source |
| Industry and Security Bureau | Proposed rule for mandatory AI reporting | Original source |
| Interior | Department of Interior AI use case inventory (2024) | Original source |
| Interior | Department of Interior use of AI compliance plan | Original source |
| IRS | IRS transitions away from FRT for third-party verification | Original source |
| Labor | DOL AI use case inventory | Original source |
| Labor | DOL use of AI compliance plan | Original source |
| Library of Congress | AI and copyright | Original source |
| Multi-agency | 2023 Joint Statement on Enforcement of Civil Rights, Fair Competition, Consumer Protection, and Equal Opportunity Laws in Automated Systems | Original source |
| Multi-agency | 2024 Joint Statement on Enforcement of Civil Rights, Fair Competition, Consumer Protection, and Equal Opportunity Laws in Automated Systems | Original source |
| Multi-agency | 2024 use case inventory reporting instructions | Original source |
| Multi-agency | CFPB and Federal Partners on ADS | Original source |
| Multi-agency | CFPB and NLRB Announce Information Sharing Agreement | Original source |
| Multi-agency | Federal Agency AI Use Case Inventory (2023) | Original source |
| Multi-agency | Federal Agency AI Use Case Inventory (2024) | Original source |
| Multi-agency | Federal Agency AI Use Case Inventory README | Original source |
| Multi-agency | Joint statement against AI bias | Original source |
| Multi-agency | Joint statement on ADS enforcement | Original source |
| NAIAC | AI literacy | Original source |
| NAIAC | AI positive impact on science and medicine | Original source |
| NAIAC | AI's Procurement Challenge | Original source |
| NAIAC | Data Challenges and Privacy Protections for Safeguarding Civil Rights in Government | Original source |
| NAIAC | Expand the AI Use Case Inventory by Limiting the ‘Common Commercial Products’ Exception | Original source |
| NAIAC | Expand the AI Use Case Inventory by Limiting the ‘Sensitive Law Enforcement’ Exception | Original source |
| NAIAC | FAQs on Foundation Models and GenAI | Original source |
| NAIAC | Findings and recommendations on AI Safety | Original source |
| NAIAC | Findings on The Potential Future Risks of AI | Original source |
| NAIAC | GenAI risks | Original source |
| NAIAC | Harnessing AI for Scientific Progress | Original source |
| NAIAC | Implementing the NIST AI Risk Management Framework | Original source |
| NAIAC | Institutional Structures to Support Safer AI Systems | Original source |
| NAIAC | NAIAC webpages | Original source |
| NAIAC | National Artificial Intelligence Advisory Committee: Year 1 Report | Original source |
| NAIAC | National Artificial Intelligence Advisory Committee: Year 2 Report | Original source |
| NAIAC | National Campaign on Lifelong AI Career Success | Original source |
| NAIAC | On adverse event reporting of emerging risks from AI | Original source |
| NAIAC | On field testing of law enforcement AI tools | Original source |
| NAIAC | On implementing the NIST AI Safety Institute | Original source |
| NAIAC | Public Summary Reporting on Use of High-Risk AI | Original source |
| NAIAC | Public Use Policies for High-Risk AI | Original source |
| NAIAC | Rationales, Mechanisms, and Challenges to Regulating AI | Original source |
| NAIAC | Report on impact of AI | Original source |
| NAIAC | Report on law enforcement use of AI | Original source |
| NAIAC | Responsible Procurement Innovation for AI at Government Agencies | Original source |
| NAIAC | Statement of support on Safe, Secure and Trustworthy AI EO | Original source |
| NAIAC | Statement on AI and Existential Risk | Original source |
| NAIAC | Towards Standards for Data Transparency for AI Models | Original source |
| NAIAC | Working Group on Rights-Respecting AI | Original source |
| NAIRR | NAIRR task force final report | Original source |
| NAIRR | NAIRR task force interim report | Original source |
| NAIRR | National AI Research Resource pilot launch | Original source |
| NASA | NASA use of AI compliance plan | Original source |
| NIST | AI risk management framework | Original source |
| NIST | AI risk management playbook | Original source |
| NIST | AI technical standards | Original source |
| NIST | GenAI risk management | Original source |
| NIST | GenAI software development practices | Original source |
| NIST | NIST landing page on Safe, Secure, Trustworthy AI EO | Original source |
| NIST | One-pager on carrying out Safe, Secure, Trustworthy AI EO | Original source |
| NIST | Risks of foundation models | Original source |
| NIST | Secure Software Development Framework | Original source |
| NIST | Standards for identifying bias | Original source |
| NSF | NSF 2023/2024 AI use case inventory | Original source |
| NSF | NSF use of AI compliance plan | Original source |
| NSSCET | National Standards Strategy for Critical and Emerging Technology roadmap | Original source |
| NST | Trustworthy AI R&D | Original source |
| Nuclear Regulatory Commission | Nuclear Regulatory Commission use of AI compliance plan | Original source |
| OET | Guidance for developers on AI in Education | Original source |
| Office of Personnel Management | Office of Personnel Management use of AI compliance plan | Original source |
| OMB | Blogpost on responsible acquisition of AI | Original source |
| OMB | Memo on AI governance | Original source |
| OMB | Memo on responsible acquisition of AI | Original source |
| OSTP | Blueprint for an AI Bill of Rights | Original source |
| SEC | SEC AI use case inventory 2024 | Original source |
| SEC | SEC use of AI compliance plan | Original source |
| SSA | SSA use of AI compliance plan | Original source |
| Treasury | AI use case inventory May 2023 | Original source |
| Treasury | Treasury use of AI compliance plan | Original source |
| US Commission on Civil Rights | AI in K-12 education | Original source |
| US Commission on Civil Rights | Civil rights implications of algorithms | Original source |
| US Commission on Civil Rights | Civil rights implications of FRT | Original source |
| US Commission on Civil Rights | Civil rights implications of FRT factsheet | Original source |
| US Commission on Civil Rights | USCCR use of AI compliance plan | Original source |
| USAID | USAID use of AI compliance plan | Original source |
| USDA | USDA AI strategy | Original source |
| USDA | USDA AI use case inventory (2024) | Original source |
| USDA | USDA use of AI compliance plan | Original source |
| Veterans | Department of Veterans Affairs use of AI compliance plan | Original source |
| White House | EO on Safe, Secure and Trustworthy AI | Original source |
| White House | EO on Safe, Secure and Trustworthy AI - press release | Original source |
| White House | Delivering on the Promise of AI to Improve Health Outcomes | Original source |
| White House | EO on Responsible AI | Original source |
| White House | Framework on AI governance in national security | Original source |
| White House | National Standards Strategy for Critical and Emerging Technology - press release | Original source |
| White House | The Cost of Anticompetitive Pricing Algorithms in Rental Housing | Original source |
| White House | Voluntary Commitment on AI - press release | Original source |

Can’t find what you’re looking for? It may be available on the Internet Archive.


Eyes in the Sky: Big Brother is (still) watching

New records obtained from the Federal Aviation Administration (FAA) in December 2023 show that the number of drones licensed by government agencies has gone up across the Commonwealth. We’ve updated our interactive tool, which lets anyone explore the dataset and identify drones owned by public entities in their communities.

Search Government Drones in Massachusetts

Keep reading to see what we learned. 

In 2021, we published a report detailing data acquired by the ACLU on government agencies’ use of drones in Massachusetts. According to late 2023 data, it remains the case that almost half (43%) of active government drones in Massachusetts are registered to police departments. The Massachusetts State Police has the largest number of drones of any police agency in the state. 

In 2022, we published documents revealing police used drones to monitor Black Lives Matter protests in five cities in Massachusetts, including Boston. Video feeds from these drones were streamed in real-time to local police departments and the State Police “Commonwealth Fusion Center,” which shares information with federal agencies and out-of-state police entities.  

Protecting public safety?  

Among the most common drones registered by government agencies are those in the DJI Matrice line. A single one of these drones costs between $10,000 and $20,000. With 133 Matrice drones in operation in Massachusetts as of 2023, these drones alone likely cost taxpayers between $1.3 million and $2.7 million.
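A back-of-the-envelope version of that estimate (a sketch only; the per-unit prices are the approximate range cited above, and actual contract prices vary with sensors, batteries, and service plans):

```python
# Back-of-the-envelope estimate of what Massachusetts' Matrice fleet cost.
UNIT_PRICE_LOW = 10_000   # USD, low-end Matrice configuration (approximate)
UNIT_PRICE_HIGH = 20_000  # USD, high-end configuration (approximate)
FLEET_SIZE = 133          # Matrice drones active in MA as of 2023

low = FLEET_SIZE * UNIT_PRICE_LOW    # $1,330,000
high = FLEET_SIZE * UNIT_PRICE_HIGH  # $2,660,000
print(f"Estimated fleet cost: ${low:,} to ${high:,}")
```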

Despite the hefty price tag, DJI drones – especially the Matrice and Mavic lines – have been prone to crashes. In August 2022, a police-operated DJI Mavic 2 Enterprise drone used to locate a suspect in the UK crashed into a building after its battery failed and it plummeted 130 feet. According to FAA documents, the Massachusetts State Police has registered eight DJI Mavic 2 Enterprise drones.

The DJI Matrice 210 drone is even less reliable. Reports of crashes were frequent enough that, in June 2020, the website www.reportdroneaccident.com urged Matrice 210 pilots to “not fly over any people,” echoing a 2018 warning from the drone manufacturer itself. Regarding the Matrice 210, an FAA-certified pilot and drone expert with the Wake Forest Fire Department reported in 2018 that “three public safety agencies had batteries fail in flight.” In 2020, a Matrice 210 failed at 270 feet, crashing hard enough that a piece of the drone ended up “buried 8 inches deep.” Based on the DROPS standards, if a Matrice 210 drone were to fall on someone from just six feet or more, the injury would be fatal.
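To give a sense of the physics behind that claim, here is a rough impact-energy sketch (an illustration only, not the DROPS methodology itself; the drone mass and the lethality framing are assumptions made for the example):

```python
# Rough impact-energy sketch for a falling drone (illustration only; the
# DROPS calculator classifies severity from its own weight/height chart).
M_DRONE_KG = 6.1   # assumed Matrice 210 max takeoff weight, batteries and payload included
G = 9.81           # gravitational acceleration, m/s^2
HEIGHT_M = 1.83    # six feet, in meters

energy_j = M_DRONE_KG * G * HEIGHT_M  # potential energy ~= energy at impact
print(f"Impact energy from 6 ft: {energy_j:.0f} J")  # ~110 J

# Dropped-object safety guidance generally treats impacts of this order to a
# person's head as potentially fatal, which is why even a short fall of a
# several-kilogram drone lands in the fatal region of the DROPS chart.
```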

These crashes are not due to pilot error but rather stem from known issues with the technology itself. Despite these problems, as of December 2023, Massachusetts government agencies had 58 active Matrice 210 drones.  

More drones, more money, more problems 

The December 2023 FAA data shows that across Massachusetts, the total number of drones registered by government agencies increased by 169 between 2021 and 2023. The Massachusetts State Police acquired 19 additional drones, amounting to a 25% increase in the department’s drone fleet. Likewise, the Norfolk County Sheriff’s Department, which had a single drone in 2021, had acquired 19 more drones by 2023. Four of these new drones were Mavic 2 Enterprise drones, the model discussed above.

Two years ago, we raised concerns that government entities, particularly police departments, were not doing enough to prevent the misuse of drones or drone footage. In 2022, Worcester City Council approved a request by the Worcester Police to purchase a $25,000 drone. The police department had come under fire from homeless advocates for using a drone to monitor people at encampments.  

While Massachusetts has no laws on the books regulating police use of drones, the ACLU of Massachusetts supports legislation that would ban the weaponization of drones and other robots. 

Search government drones in Massachusetts 

If you want to look at the data yourself, you can use our interactive tool to explore the dataset or download it in full.

Search Government Drones in Massachusetts

If you’re interested in learning more about how government agencies in Massachusetts use drones, you can use our model public records request to find out. More information about this process and relevant resources are available here. 


Boston Police Records Show Nearly 70 Percent of ShotSpotter Alerts Led to Dead Ends

Image credit: Sketch illustration by Inna Lugovyh

The ACLU of Massachusetts has acquired over 1,300 documents detailing the use of ShotSpotter by the Boston Police Department from 2020 to 2022. These public records shed light for the first time on how this controversial technology is deployed in Boston.  

ShotSpotter — now SoundThinking — is a for-profit technology company that uses microphones, algorithmic assessments, and human analysts to record audio and attempt to identify potential gunshots. A public records document from 2014 describes a deployment process that considers locations for microphones including government buildings, housing authorities, schools, private buildings and utility poles.

According to city records, Boston has spent over $4 million on ShotSpotter since 2012, deploying the technology mostly in communities of color. Despite the hefty price tag, in nearly 70 percent of ShotSpotter alerts, police found no evidence of gunfire. The records indicate that over 10 percent of ShotSpotter alerts flagged fireworks, not weapons discharges.

The records add more evidence to support what researchers and government investigators have found in other cities: ShotSpotter is unreliable, ineffective, and a danger to civil rights and civil liberties. It’s time to end Boston’s relationship with ShotSpotter. Boston’s ShotSpotter contract expires in June, making now the pivotal moment to stop wasting millions on this ineffective technology.

Boston’s relationship with ShotSpotter dates back to 2007. A recent leak of ShotSpotter locations confirms the technology is deployed almost exclusively in communities of color. In Boston, ShotSpotter microphones are installed primarily in Dorchester and Roxbury, in areas where some neighborhoods are over 90 percent Black and/or Latine.

Coupled with the high error rate of the system, BPD records indicate that ShotSpotter perpetuates the over-policing of communities of color, encouraging police to comb through neighborhoods and interrogate residents in response to what often turn out to be false alarms.  

For each instance of potential gunfire, ShotSpotter makes an initial algorithmic assessment (gunshot, fireworks, other) and sends the audio file to a team of human analysts, who make their own prediction about whether it is definitely, possibly, or not gunfire. These analysts use heuristics like whether the audio waveform looks like “a sideways Christmas tree” and if there is “100% certainty of gunfire in the reviewer’s mind.” 

ShotSpotter relies heavily on these human analysts to correct its algorithmic predictions; an internal document estimates that human workers overrule around 10 percent of the company’s algorithmic assessments. The remaining alerts make up the reports we received: cases in which police officers were dispatched to investigate sounds ShotSpotter identified as gunfire. But the records show that in most cases, dispatched police officers did not recover evidence of shots fired.
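As a minimal sketch of that two-stage triage (hypothetical labels and dispatch logic; ShotSpotter’s actual system is proprietary):

```python
from dataclasses import dataclass

@dataclass
class Alert:
    audio_id: str
    machine_label: str  # initial algorithmic assessment: "gunshot", "fireworks", "other"
    human_label: str    # analyst review: "definitely", "possibly", or "not" gunfire

def should_dispatch(alert: Alert) -> bool:
    """Police are dispatched unless the human analyst overrules the machine."""
    return alert.machine_label == "gunshot" and alert.human_label != "not"

# Roughly 10 percent of machine assessments are overruled and never reach
# police; the rest become the dispatch records analyzed in this post.
alerts = [
    Alert("a1", "gunshot", "definitely"),
    Alert("a2", "gunshot", "not"),        # overruled: no dispatch
    Alert("a3", "fireworks", "possibly"),
]
print(sum(should_dispatch(a) for a in alerts))  # 1
```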

Analyzing over 1,300 police reports, we found that almost 70 percent of ShotSpotter alerts returned no evidence of shots fired.  

In all, 16 percent of alerts corresponded to common urban sounds: fireworks, balloons, vehicles backfiring, garbage trucks and construction. Over 1 in 10 ShotSpotter alerts in Boston were just fireworks, despite a “fireworks suppression mode” that ShotSpotter implements on major holidays.  
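A sketch of the kind of tally behind these figures (toy rows with hand-coded outcome categories; the real analysis coded over 1,300 free-text police reports):

```python
from collections import Counter

# Hand-coded outcomes from BPD incident reports tied to ShotSpotter alerts
# (toy rows; the real dataset has 1,300+ reports and more categories).
reports = [
    {"alert_id": 1, "outcome": "no_evidence"},
    {"alert_id": 2, "outcome": "fireworks"},
    {"alert_id": 3, "outcome": "evidence_of_gunfire"},
    {"alert_id": 4, "outcome": "no_evidence"},
]

outcomes = Counter(r["outcome"] for r in reports)
total = sum(outcomes.values())
for outcome, n in outcomes.most_common():
    print(f"{outcome}: {n}/{total} ({n / total:.0%})")
```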

ShotSpotter markets its technology as a “gunshot detection algorithm,” but these records indicate that it struggles to reliably and accurately perform that central task. Indeed, email metadata we received from the BPD describe several emails that seem to refer to inaccurate ShotSpotter readings. The records confirm what public officials and independent researchers have reported about the technology’s use in communities across the country. For example, in 2018, Fall River Police abandoned ShotSpotter, saying it didn’t “justify the cost.” In recent years, many communities in the United States have either declined to adopt ShotSpotter after unimpressive pilots or elected to stop using the technology altogether.

Coupled with reports from other communities, these new BPD records indicate that ShotSpotter is a waste of money. But it’s worse than just missed opportunities and poor resource allocation. In the nearly 70 percent of cases where ShotSpotter sent an alert but police found no evidence of gunfire, residents of mostly Black and brown communities were confronted by police officers looking for shooters who may not have existed, creating potentially dangerous situations for residents and heightening tension in an otherwise peaceful environment.  

Just this February, a Chicago police officer responding to a ShotSpotter alert fired his gun at a teenage boy who was lighting fireworks. Luckily, the boy was not physically harmed. Tragically, 13-year-old Chicago resident Adam Toledo was not so fortunate; he was killed when Chicago police officers responded to a ShotSpotter alert in 2021. The resulting community outrage led Chicago Mayor Brandon Johnson to campaign on the promise of ending ShotSpotter. This year, Mayor Johnson followed through on that promise by announcing Chicago would not extend its ShotSpotter contract. 

The most dangerous outcome of a false ShotSpotter alert is a police shooting. But over the years, ShotSpotter alerts have also contributed to wrongful arrests and increased police stops, almost exclusively in Black and brown neighborhoods. BPD records — detailing incidents from 2020-2022 — include several cases where people in the vicinity of an alert were stopped, searched, or cited — just because they happened to be in the wrong place at the wrong time.  

For instance, in 2021, someone driving in the vicinity of a ShotSpotter alert was pulled over and cited for an “expired registration, excessive window tint, and failure to display a front license plate.” Since ShotSpotter devices in Boston are predominantly located in Black and brown neighborhoods, its alerts funnel more police into those neighborhoods, even when there is no evidence of a shooting. This dynamic exacerbates the cycle of over-policing of communities of color and deepens mistrust of police among groups of people who are disproportionately stopped and searched.

This dynamic can lead to grave civil rights harms. In Chicago, a 65-year-old grandfather was charged with murder after he was pulled over in the area of a ShotSpotter alert. The charges were eventually dismissed, but only after he had already spent a year in jail.  

In summary, our findings add to the large and growing body of research that all comes to the same conclusion: ShotSpotter is an unreliable technology that poses a substantial threat to civil rights and civil liberties, almost exclusively for the Black and brown people who live in the neighborhoods subject to its ongoing surveillance. 

Since 2012, Boston has spent over $4 million on ShotSpotter. But BPD records indicate that, more often than not, police find no evidence of gunfire — wasting officer time looking for witness corroboration and ballistics evidence of gunfire they never find. The true cost of ShotSpotter goes beyond just dollars and cents and wasted officer time. ShotSpotter has real human costs for civil rights and liberties, public safety, and community-police relationships.  

For these and other reasons, cities including Canton, OH, Charlotte, NC, Dayton, OH, Durham, NC, Fall River, MA, and San Antonio, TX have decided to end the use of this controversial technology. In San Diego, CA, after a campaign by residents to end the use of ShotSpotter, officials let the contract lapse. And cities like Atlanta, GA and Portland, OR tested the system but decided it wasn’t worth it. 

From coast to coast, cities across the country have wised up about ShotSpotter. The company appears to have taken notice of the trend, and in 2023 spent $26.9 million on “sales and marketing”. But the cities that have decided not to partner with the company are right: Community safety shouldn’t rely on unproven surveillance that threatens civil rights. Boston’s ShotSpotter contract is up for renewal in June. To advance racial justice, effective anti-violence investments, and civil rights and civil liberties, it’s time for Boston to drop ShotSpotter. 

An earlier version of this post stated that one of the false alerts was due to a piñata. That was incorrect.


Further reading 

Emiliano Falcon-Morano contributed to the research for this post. With thanks to Kade Crockford for comments and Tarak Shah from HRDAG for technical advice.


Yes, All Location Data: Separating fact from myth in industry talking points about “anonymous” location data

Image credit: Joahna Kuiper / Better Images of AI / Little data houses (square) / CC-BY 4.0

We carry our phones around wherever we go – and our cellphone location data follows us every moment along the way, revealing the most sensitive and intimate things about us. Everywhere we go, everyone we meet, and everything we do – it’s all accessible to anyone with a credit card, thanks to the data broker industry.  

Apps use location data for a variety of purposes including finding directions, logging runs, ordering food, and hailing rideshares. While this information can be used for legitimate purposes, this sensitive data is also exploited for profit and extremist agendas, putting every cellphone user at risk. In 2023, right-wing extremists capitalized on the unregulated open data marketplace to out gay Catholic priests. This disturbing undertaking was possible because data brokers are allowed to buy location information, repackage it, and sell it to anyone who wants to buy it. And, currently, there’s nothing stopping them.  

As independent researchers have shown time and time again, it is all too easy to trace cellphone location data back to the people holding those phones. 

Data generated by apps is superficially ‘pseudo-anonymized’ by assigning each user a unique combination of numbers called a MAID (“Mobile Advertising ID”), also known as an IDFA (“ID for Advertisers”). But since each MAID is tied to a single device and shared across apps, it’s easy to paint a unique picture of someone by aggregating location datapoints across apps.

In fact, just a few data points are sufficient to uniquely identify most individuals. Several highly-cited scientific studies using real-world cellphone location data – including a Scientific Reports research paper – showed that a few linked spatiotemporal data points are enough to uniquely identify most individuals from a crowd. Intuitively, if someone finds out where your phone is between midnight and five a.m., then they know where you likely live. If they then find out where your phone is between nine a.m. and five p.m. on weekdays, then they know where you likely work.  
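A minimal sketch of that inference, assuming a toy feed of MAID-keyed pings (real broker feeds are far larger and messier, but the logic is this simple):

```python
from collections import Counter
from datetime import datetime

# Toy pings keyed by MAID: (ISO timestamp, coarse lat/lon cell). A shared
# MAID lets pings collected by different apps be merged into one per-device trace.
pings = {
    "maid-1234": [
        ("2024-03-04T01:13:00", (42.3601, -71.0589)),
        ("2024-03-04T03:42:00", (42.3601, -71.0589)),
        ("2024-03-04T10:05:00", (42.3523, -71.0552)),
        ("2024-03-04T14:30:00", (42.3523, -71.0552)),
    ],
}

def most_common_cell(trace, hour_lo, hour_hi):
    """Most frequent location cell among pings in [hour_lo, hour_hi)."""
    hits = Counter(
        cell for ts, cell in trace
        if hour_lo <= datetime.fromisoformat(ts).hour < hour_hi
    )
    return hits.most_common(1)[0][0] if hits else None

trace = pings["maid-1234"]
print("likely home:", most_common_cell(trace, 0, 5))   # overnight pings
print("likely work:", most_common_cell(trace, 9, 17))  # daytime pings
```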

While two location points – home and work – are plenty, data brokers have much more data than that. In fact, data brokers peddle a sprawling digital dossier on millions of people with incredible temporal and spatial detail. 

Recently, data broker Kochava was thrown into the spotlight as a result of a shocking investigation by the Federal Trade Commission (FTC). Among other revelations, analysts from the FTC were able to obtain a free sample of cellphone location data and use that information to track someone who visited an abortion clinic all the way back to their home. This data, like all data from data brokers, was supposed to be anonymous – instead, it revealed a person’s private health care practices and real identity. For vulnerable people traveling from states where reproductive health care is now a crime, the open sale of their cellphone location data is a serious matter. But Kochava is not a lone bad apple. It is one company in a multibillion-dollar industry that exists solely to profit off our data, putting us – and our loved ones – at risk.

Just in case location information is not sufficient to identify someone, it is easy to connect this data with other readily accessible information, such as a person’s public work directory or LinkedIn profile, or one of the many people-search sites that list full names and addresses. Indeed, a spinoff industry has cropped up offering “identity resolution” services to do just that. For instance, a company called LiveRamp partners with several well-known location data brokers, claiming to “resolve data to the user or household level” and to help ad companies “build, configure, and maintain a unified view of your customer, easily connecting customer data from any and all data sources.” Similarly, data brokers like Adobe and Oracle offer identity resolution services to aggregate data across disparate sources.
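A sketch of what such “resolution” amounts to, with toy tables (commercial identity graphs join far more fields and sources):

```python
# Crude "identity resolution": join an inferred home cell against a public
# address list (toy data; real systems link names, emails, device IDs, and more).
inferred_homes = {"maid-1234": (42.3601, -71.0589)}

address_list = [  # e.g., scraped from people-search sites
    {"name": "J. Doe", "cell": (42.3601, -71.0589)},
    {"name": "R. Roe", "cell": (42.3700, -71.0600)},
]

def resolve(maid):
    home = inferred_homes.get(maid)
    # Exact-cell match for simplicity; real systems use distance thresholds.
    return next((p["name"] for p in address_list if p["cell"] == home), None)

print(resolve("maid-1234"))  # J. Doe
```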

Mobile advertising IDs, as mentioned above, are part of the problem – but not the end of the road. In 2021, Google made some strides to secure MAIDs – but left opting out to more tech-savvy users. Meanwhile, Apple phased out MAIDs for users who don’t explicitly opt in to tracking. While these moves were a step in the right direction, they still leave a lot of room for loopholes. From consent for cookies to Do Not Track requests, the ad industry has historically countered every superficial privacy win with dogged – and successful – efforts to circumvent restrictions. When it comes to the “end” of MAIDs, the ad industry has already developed workarounds, allowing companies to match location data to users using “identity graphs”, even if they lack advertising IDs for those people.  

As one ad tech executive himself described, “when you move to these more restrictive methods, what happens is that all the shady companies try to find alternative workarounds to the MAID but with methods the user doesn’t have any control over, ultimately hurting end-user privacy.”

Data brokers claim they want to protect our privacy as much as we do. But we can’t trust that they will choose our privacy over their profits. We need more than superficial solutions.  

That’s why the ACLU of Massachusetts and our partners are working to pass legislation to ban the sale of cellphone location data. This bill would prevent the location data of anyone in Massachusetts from being tracked or traded. It is a vital defense to stop this multibillion-dollar industry from profiting off our personal data. We can’t do this without your help – so click here to contact your legislator and urge them to pass this crucial legislation. It’s time to end this shady practice once and for all.

Essential reading 


Balancing the scales of justice: Why right to counsel in eviction cases is a racial justice and housing justice issue

Evictions devastate lives and communities. Research shows evictions lead to displacement from neighborhoods, decreased physical and mental well-being, instability in employment and education, increased likelihood that children will be placed in foster or other out-of-home care, and greater reliance on social service supports.  

Legislation before the Massachusetts Joint Committee on the Judiciary, An Act promoting access to counsel and housing security in Massachusetts (H.1731/S.864), would provide both low-income tenants and low-income owner-occupants with access to full legal representation in eviction proceedings – and thus the crucial fighting power to stay in their homes. This legislation is supported by a broad coalition, including the legal community, health care providers, local politicians, and faith-based organizations. 

Despite the many harms of evictions, only 3 percent of tenants in Massachusetts facing an eviction have a lawyer representing them in housing court. In contrast, over 90 percent of Massachusetts landlords have legal representation in those cases. The result of this imbalance is no surprise: evictions.

While it is illegal to evict someone without going through housing court, this protection is meaningless if tenants have no legal support to fight their pending eviction. Since most eviction cases are due to non-payment of rent, defendants who can’t afford rent likely can’t afford a lawyer either. Tenants without counsel must navigate the confusing court system and complex housing law on their own; some cannot attend their court hearing at all due to childcare, employment, or transportation barriers. For people with disabilities and those who do not speak English, the barriers are even higher.

In 2020, as COVID-19 hit Massachusetts, the state put a temporary moratorium on evictions. With scores of people out of work, this humanitarian stopgap was essential to allowing people to stay in their homes amid a deadly transmissible virus and stay-at-home orders. But the state moratorium ended in October 2020, followed by the end of the federal moratorium in August 2021. Since then, evictions have snowballed.

Evictions are a racial justice issue. Black and Latine households are more likely than white households to rent. Research indicates these communities are also over-represented in households facing eviction. In Massachusetts, eviction cases and eviction outcomes were more frequent in communities with a higher proportion of Black and Hispanic residents. This correlation was highly statistically significant.

Adults aren’t the only ones affected by evictions – kids are too. On average, 11 percent of children under age 5 face eviction each year in Massachusetts. For Black and Hispanic communities, that share nearly triples, to 27 percent. These evictions feed a vicious cycle of disrupted educational engagement, higher dropout rates, and worse physical and mental health. In this way, evictions contribute to lasting generational harms that can scar communities of color for years to come.

We need meaningful action to prevent unfair evictions. Right to counsel will correct the power imbalance that gives landlords an unfair advantage in eviction cases. Tenants deserve a fair process. Massachusetts legislators can balance the scales.  


Learn more: An Act promoting access to counsel and housing stability in Massachusetts (H.1731/S.864)


Further reading:

Anthony Cilluffo, A.W. Geiger & Richard Fry, More U.S. Households Are Renting Than At Any Point In 50 Years, Pew Research Center Fact Tank (July 19, 2017), https://www.pewresearch.org/fact-tank/2017/07/19/more-u-s-households-are-renting-than-at-any-point-in-50-years/  

Jaboa Lake, The Pandemic Has Exacerbated Housing Instability for Renters of Color, Center for American Progress (October 30, 2020), https://cdn.americanprogress.org/content/uploads/2020/10/29133957/Renters-of-Color-2.pdf

Emily Badger, Claire Cain Miller & Alicia Parlapiano, The Americans Most Threatened by Eviction: Young Children, The New York Times (October 2, 2023), https://www.nytimes.com/2023/10/02/upshot/evictions-children-american-renters.html