The Crash Log
AI & Tech Gone Off the Rails
Issue #012 · April 6, 2026

Forty Minutes of Exposure

A supply chain attack poisons 36% of cloud environments, Congress tries to put a warrant between your data and the government, and AI chatbots swallow medical lies when they sound like doctors.

OVERRIDE

Bipartisan Bill Would Force Warrants for AI-Powered Mass Surveillance

A bipartisan coalition of lawmakers has introduced the Government Surveillance Reform Act, the most sweeping attempt in a decade to rewrite the rules governing how federal agencies surveil Americans in the age of AI. The bill, sponsored by Senators Ron Wyden (D-OR) and Mike Lee (R-UT) and Representatives Zoe Lofgren (D-CA) and Warren Davidson (R-OH), would require warrants for government access to Americans’ location data, web browsing history, search queries, and chatbot records (Source: Office of Senator Mike Lee).

The legislation directly targets the data broker loophole, a legal gap that allows agencies like ICE and the FBI to purchase bulk surveillance data from commercial brokers without judicial oversight. Federal agencies have technically been barred from bulk collection of data on U.S. citizens since a 2015 policy change, but purchasing it from third parties has functioned as an end-run around the Fourth Amendment.

Anthropic CEO Dario Amodei warned Congress that records the government can buy from brokers can be used by AI to assemble "a comprehensive picture of any person’s life — automatically and at massive scale" (Source: NPR).

The bill also reforms Section 702 of the Foreign Intelligence Surveillance Act, which authorizes mass surveillance of non-citizens but has long functioned as a backdoor for warrantless searches of Americans’ communications. Section 702’s current authorization expires April 20, and the renewal debate carries new urgency as AI automates the analysis of collected data.

"This is an incredibly dangerous authority to have as unrestricted as it is," said Matthew Guariglia of the Electronic Frontier Foundation (Source: Salon).

Over 130 civil society organizations have signed a letter urging Congress to close the data broker loophole in any 702 reauthorization.

FATAL_EXCEPTION

40-Minute Supply Chain Attack Exposed 36% of Cloud Environments

A supply chain attack that lasted roughly 40 minutes has cascaded into one of the largest AI infrastructure breaches of 2026. On March 27, attackers from the group TeamPCP compromised LiteLLM, an open-source library present in an estimated 36% of cloud environments, by publishing two malicious package versions to PyPI that harvested API keys, SSH credentials, cloud secrets, and database passwords from every system that pulled the update (Source: SecurityWeek).

AI recruiting startup Mercor confirmed it was among the victims. Extortion group Lapsus$ is now auctioning what it claims is four terabytes of stolen Mercor data, including 939 gigabytes of platform source code, a 211-gigabyte user database, and roughly three terabytes of video interview recordings and identity verification documents. The stolen cache reportedly includes candidate profiles, employer data, API keys, and Tailscale VPN records (Source: Hackread).

The attack chain began when TeamPCP first compromised the Trivy security scanner to obtain credentials from a LiteLLM maintainer, then used those credentials to inject malicious code into versions 1.82.7 and 1.82.8. Both variants exfiltrated everything to a server at models.litellm.cloud.
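For teams that pulled LiteLLM during the exposure window, a first triage step is simply checking whether one of the two reported-bad releases is installed. A minimal sketch, assuming only the package name and the compromised version numbers from the incident report; the helper names are illustrative:

```python
# Flag the two LiteLLM releases reported as compromised in the
# TeamPCP incident (1.82.7 and 1.82.8). A match is a signal to
# rotate API keys, SSH credentials, and cloud secrets.
from importlib import metadata

COMPROMISED = {"1.82.7", "1.82.8"}

def is_compromised(version: str) -> bool:
    """True if a version string matches a known-bad release."""
    return version in COMPROMISED

def check_installed(package: str = "litellm") -> bool:
    """True if the locally installed package is a known-bad version."""
    try:
        return is_compromised(metadata.version(package))
    except metadata.PackageNotFoundError:
        return False  # package not installed in this environment
```

A version check only catches environments still holding a bad release; any system that ran one of those versions, even briefly, should rotate credentials regardless.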

LiteLLM processes millions of downloads daily, and while the malicious versions were live for only 40 minutes, automated CI/CD pipelines across thousands of organizations likely pulled the tainted packages (Source: TechCrunch).
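The 40-minute window is exactly the exposure that hash-pinned dependencies are designed to close: in pip’s hash-checking mode, an artifact whose digest differs from the reviewed one fails the install instead of entering the build. A sketch of the pattern, where the version and digest shown are placeholders, not real values:

```text
# requirements.txt (fragment), installed with:
#   pip install --require-hashes -r requirements.txt
# In hash-checking mode, pip rejects any artifact whose sha256
# does not match the pinned digest, so a freshly poisoned release
# produces a hash mismatch rather than executing in CI/CD.
litellm==0.0.0 --hash=sha256:0000000000000000000000000000000000000000000000000000000000000000
```

Tools such as pip-compile (from pip-tools, with `--generate-hashes`) can produce a fully pinned, hashed lockfile from top-level dependencies.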

Meta has reportedly frozen AI data work with Mercor pending investigation.

RUNTIME_ERROR

AI Health Chatbots Believe Medical Lies When They Sound Like Doctors

A sweeping study from Mount Sinai Health System and the Mayo Clinic, published in The Lancet Digital Health, found that 20 leading AI chatbots accepted fabricated medical claims 31.7% of the time, and that susceptibility spiked dramatically when misinformation was framed in clinical language mimicking a physician’s voice. Researchers hit the models with more than 3.4 million prompts containing health misinformation drawn from social media posts, real hospital discharge notes seeded with false recommendations, and 300 physician-validated simulated vignettes (Source: Mount Sinai Newsroom).

The study tested logical fallacies — appeals to authority, popularity, and emotion — to measure how rhetorical framing influenced model behavior. When the same false claims were presented in a casual, anecdotal style common to Reddit health forums, susceptibility dropped to just 9%. But dress the same lie in clinical language, complete with the authority cues models associate with medical professionals, and the guardrails crumbled (Source: Science-Based Medicine).

The findings come as millions of patients increasingly turn to AI chatbots for health guidance. The nonprofit patient safety organization ECRI named AI chatbot misuse in healthcare its top technology hazard for 2026.

Current safeguards, the study concludes, do not reliably distinguish fact from fabrication once a claim is wrapped in familiar clinical language — creating what researchers describe as a perfect storm in an era of declining institutional trust (Source: Euronews Health).

DEPRECATED

Google Drops Gemma 4 Under Apache 2.0, First Truly Open-Source Gemma

Google released Gemma 4 on April 2 under the Apache 2.0 license — a first for the Gemma family and a significant shift from the restricted-use terms that governed previous versions. The move makes Gemma 4 the highest-capability model Google has ever made fully open-source, with four sizes ranging from 2 billion to 31 billion parameters, trained on over 140 languages, and supporting context windows up to 256,000 tokens (Source: Google Open Source Blog).

Built from the same research infrastructure as Gemini 3, Gemma 4 includes native multimodal support (text, image, video), native audio input for speech recognition in the smaller E2B and E4B models, and advanced reasoning described as capable of multi-step planning. The Apache 2.0 license permits commercial use, modification, and redistribution with attribution — removing the ambiguity that kept previous open-weight Gemma releases in a legal gray zone for enterprise deployment (Source: Google AI Blog).

Stack Trace

The Pentagon’s Strategic Capabilities Office is standing up a cognitive warfare program aimed at using AI to "disrupt the cognition and thinking ability of an adversary." The initiative, called Basic Information Awareness Operations, will produce new nonkinetic military capabilities within three to five years. It arrives alongside the FY2026 defense budget’s first-ever dedicated line item for autonomy and AI systems: $13.4 billion (Source: The Washington Times). The Department of War has also been directed to define cognitive warfare in doctrine and assign organizational responsibility by April 1 (Source: LABLA).

An API vulnerability in OpenReview exposed the identities of roughly 10,000 authors and reviewers for ICLR 2026, the premier machine learning conference set for Rio de Janeiro in April. The breach triggered bribery attempts, harassment, and impersonation of authors by third parties. Compounding the damage, a separate analysis found that 21% of ICLR 2026 peer-review comments were generated by large language models, raising fundamental questions about whether AI research can still be evaluated by humans (Source: Science). ICLR has begun desk-rejecting papers linked to collusion attempts (Source: WebProNews).

ICE awarded $1.2 billion in open-ended "skip tracing" contracts to 13 private firms in December 2025, potentially targeting more than 1 million people at a rate of 50,000 names per month. The agency’s AI surveillance apparatus now includes Palantir’s ImmigrationOS — a nearly $30 million platform that unifies immigration files, travel records, and license-plate data into a single interface — plus Clearview AI facial recognition and LLM-powered tip processing that automatically translates, summarizes, and prioritizes information for agents. DHS tools originally justified for tracking noncitizens are now being used to identify and investigate U.S. citizens (Source: American Immigration Council, NPR).
