The Crash Log
AI & Tech Gone Off the Rails
Issue #016 · April 16, 2026

The Database Remembers

ICE's AI surveillance arsenal grows while the agency ignores 96 court orders, researchers catch LLM routers stealing credentials, and Mexico bans AI dubbing.

OVERRIDE

DHS Expands AI Surveillance Arsenal to Track Americans While Defying 96 Court Orders

According to a federal judge in Minnesota, Immigration and Customs Enforcement violated 96 court orders across 74 cases in January 2026 alone — more than some federal agencies have violated in their entire existence (Source: TechPolicy.Press). Even as ICE defies judicial authority, DHS's latest AI inventory, released January 28, reveals more than 200 AI use cases deployed or in development across the department — an almost 40 percent increase since July 2025.

The surveillance tools now extend well beyond immigration enforcement. In Minnesota, ICE has deployed social media monitoring and location-tracking systems that analyze the movements of large groups at specific locations, capabilities explicitly marketed for use at protests and other gatherings protected under the First Amendment (Source: NPR). At the push of a button, ICE can catalog individuals at a protest, then track their movements over extended periods (Source: Washington Post).

The agency's toolkit includes Mobile Fortify, a facial recognition and fingerprint-matching app used by both CBP and ICE since May 2025. The app has been documented misidentifying individuals during immigration raids, despite ICE claiming its results are a "definitive" determination of immigration status (Source: TechPolicy.Press).

Palantir's ImmigrationOS ($30 million), BI2 Technologies iris-scanning smartphones ($4.6 million), and a Clearview AI facial-recognition contract ($3.75 million) round out the arsenal (Source: State of Surveillance).

In Portland, Maine, Colleen Fagan was recording federal agents during an immigration operation in January when agents appeared to record her face and license plate. When she asked why, her video captured a masked agent responding: "Because we have a nice little database, and now you're considered a domestic terrorist" (Source: NPR).

ACCESS_DENIED

Researchers Find 26 AI Agent Routers Secretly Injecting Malicious Code, Stealing Credentials

A team of researchers analyzed 428 third-party LLM routers — services that sit between users and AI models to dispatch requests across providers — and found 26 engaged in clearly suspicious behavior, including injecting malicious tool calls and exfiltrating credentials. Nine routers actively injected malicious code into returned outputs, two deployed adaptive evasion techniques, and 17 abused researcher-owned API credentials (Source: CoinDesk).

The study, published April 10 on arXiv, presents the first formal threat model for LLM API routers as a supply-chain trust boundary. The researchers define two core attack classes: payload injection, where routers alter the model's output to include malicious tool calls, and secret exfiltration, where routers harvest API keys and credentials passing through in plaintext. In one documented case, a test Ethereum wallet was drained of $500,000 after its private key was exposed through a compromised router (Source: arXiv).
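Both attack classes assume the router sees sensitive material in the clear. One illustrative client-side precaution (a sketch, not taken from the paper; the patterns and function name are hypothetical) is to scrub obvious credential formats from text before it transits an untrusted router:

```python
import re

# Hypothetical mitigation for the "secret exfiltration" attack class: redact
# recognizable credential patterns before text is sent through a third-party
# router. These two patterns are examples (API-key-style tokens and 32-byte
# hex strings such as Ethereum private keys), not an exhaustive secret scanner.
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),    # API-key-style tokens
    re.compile(r"\b0x[0-9a-fA-F]{64}\b"),  # 32-byte hex, e.g. an ETH private key
]

def redact(text: str) -> str:
    """Replace anything matching a known secret pattern with a placeholder."""
    for pattern in SECRET_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text
```

Redaction only limits what a malicious router can harvest; it does nothing against payload injection, which has to be caught on the response side.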

The research follows the March 2026 LiteLLM incident, where attackers compromised the popular open-source router through dependency confusion, injecting malicious code into the request-handling pipeline of every deployment that pulled the poisoned release (Source: CyberSecurityNews).

The researchers identified "YOLO mode" — a setting in many AI agent frameworks where agents execute commands automatically without user confirmation — as a particularly dangerous enabler. They recommend a number of client-side defenses including fault-closure gates, response anomaly filtering, and append-only logging (Source: CoinDesk).
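A minimal sketch of what such defenses could look like in combination (hypothetical function and tool names, not the researchers' implementation): refuse to execute any returned tool call that is not on an explicit allowlist, and record every call in a hash-chained, append-only log so later tampering with the record is detectable.

```python
import hashlib
import json

# Tools the agent is explicitly permitted to run (illustrative names).
ALLOWED_TOOLS = {"search_docs", "read_file"}

def gate_response(response: dict, log: list) -> list:
    """Return only allowlisted tool calls; log every call append-only."""
    approved = []
    for call in response.get("tool_calls", []):
        entry = {
            "tool": call.get("name"),
            "args": call.get("arguments"),
            # Chain each record to the previous record's hash, so rewriting
            # an earlier entry invalidates every hash after it.
            "prev": log[-1]["hash"] if log else None,
        }
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        log.append(entry)
        if call.get("name") in ALLOWED_TOOLS:
            approved.append(call)
        # Anything else — e.g. an injected "transfer_funds" call — is dropped.
    return approved

# Example: a router response carrying one legitimate and one injected call.
log = []
response = {
    "tool_calls": [
        {"name": "read_file", "arguments": {"path": "README.md"}},
        {"name": "transfer_funds", "arguments": {"to": "0xattacker"}},
    ]
}
approved = gate_response(response, log)
```

The key design point is that the gate fails closed: an unrecognized tool call is never executed, even though it is still logged for forensics.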

RUNTIME_ERROR

Mexico Passes AI Reforms to Protect Performers' Voices and Images; Industry Warns of USMCA Fallout

Mexico's Chamber of Deputies passed AI regulation reforms on April 7 with 335 votes in favor and 129 abstentions, modifying both the Federal Labor Law and the Federal Copyright Law to regulate AI's use of performers' images and voices. Under the new framework, any use of a person's image or voice in an AI system requires express, free, and informed consent, ruling out implied, tacit, or boilerplate authorization. The consent can be revoked, and remuneration is a mandatory condition of validity (Source: FisherBroyles).

The law expands the definition of "performing artist" to explicitly include announcers and dubbing actors, closing a gap exploited by producers and platforms to deny these professionals recognition as holders of neighboring rights (Source: FIA). A companion measure, Article 29 of a new Federal Cinema and Audiovisual Law, bars AI-generated dubbing for foreign films and audiovisual works translated into Spanish or any of Mexico's national languages. Dubbing must be performed by human actors (Source: WeAreMitu).

The Mexican Association of the Information Technology Industry (AMITI) warns that "any bill containing AI content might not be appropriate at this time due to the upcoming review of the USMCA." AMITI argues that reducing the validity of advertisement use from three years to six months ignores the dynamic nature of the digital environment, increasing compliance costs and reducing competitiveness for small and medium enterprises (Source: MexicoBusiness).

Mexico's regulatory approach — targeting dubbing, elections, and data governance simultaneously — amounts to a patchwork built around the places where AI can do the most damage fastest (Source: WeAreMitu).

DEPRECATED

Linux Kernel Formally Allows AI-Generated Code With Full Human Liability

The Linux kernel project has established a formal, project-wide policy explicitly allowing AI-assisted code contributions, resolving months of fierce internal debate. The policy, merged into the kernel's documentation tree under Documentation/process/coding-assistants.rst, permits tools like GitHub Copilot but requires humans to bear full legal responsibility for all lines of code generated by AI, including any bugs or security flaws (Source: Tom's Hardware).

AI systems may not attach the legally binding "Signed-off-by" certification. Instead, the policy introduces an "Assisted-by" tag for patches involving AI, identifying the model and tools used.
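In practice, a patch trailer under this policy might look something like the following (illustrative formatting with made-up names; the exact tag syntax is defined in the kernel's coding-assistants.rst):

```
Signed-off-by: Jane Developer <jane@example.com>
Assisted-by: ExampleLLM v2 (via example-coding-tool)
```

The human contributor still supplies the Signed-off-by line and carries the legal certification; the Assisted-by line only records which model and tooling were involved.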

Linus Torvalds called the debate over outright bans "pointless posturing," framing AI as "just another tool" (Source: Neowin).

The policy's position: disclosure matters less than competence — if you understand the code and can stand behind it, how you wrote it is irrelevant (Source: Tom's Hardware).

Stack Trace

U.S. Senators Josh Hawley (R-Mo.) and Mark Warner (D-Va.) introduced bipartisan legislation requiring publicly traded companies and federal agencies to file quarterly reports with the Department of Labor disclosing how many employees were laid off because their job functions were automated by AI. The department would compile the data into public reports. Hawley cited projections that AI could push unemployment to 10 to 20 percent within the next five years (Source: U.S. Senate).


Illinois, Texas, and Colorado will each implement laws in 2026 governing the use of artificial intelligence in workforce decisions — from hiring and promotion to termination — even as the federal government pushes to eliminate state-level AI regulations. The state-level push creates a fragmented compliance landscape for employers operating across jurisdictions (Source: National Law Review).

Three recent decisions from the U.S. District Court for the Northern District of California opened the door to securities fraud claims against social media platforms whose AI exercises "ultimate authority" over assembled advertising content. The rulings suggest that Meta, Alphabet, Snap, TikTok, and X Corp. could face liability as "makers" of fraudulent statements under SEC Rule 10b-5 (Source: Bloomberg Law).
