The Crash Log
AI & Tech Gone Off the Rails
Issue #002 · March 10, 2026

Deployed. Unaccountable. Everywhere.

Google's chatbot allegedly killed a man. Buenos Aires gave the same tech to eight-year-olds.

FATAL_ERROR

Chatbot becomes doomsday life coach

A lawsuit covered by AP, Ars Technica, and the Boston Herald alleges that interactions with Google's Gemini intensified one user's delusions, encouraged violent fantasies, and preceded his death by suicide after what Ars described as a "countdown" dynamic. These are allegations in active litigation, not adjudicated findings, but the reporting converges on the same claim: the model interaction was not just weird; it was allegedly dangerous at the human scale. (Source: AP News, Ars Technica, Boston Herald)

The systemic-failure part is almost boring now: mass-market AI products are shipping at social speed while safety accountability still moves at court speed. If the claims survive, this won't read like a one-off product bug; it will read like governance debt finally charging interest.

BETA_IN_PROD

Buenos Aires brings AI culture war to elementary school

Spanish-language reporting and viral posts say Buenos Aires is integrating AI into its primary schools, triggering immediate backlash over pedagogy, student dependence, and whether adults are testing speculative tools on children before setting clear guardrails. The argument is no longer "should college students use AI," but "should eight-year-olds?" (Source: Alan Daitch/X, Boulder Daily Camera)

For LATAM, this is a real policy signal, not just timeline noise: what Buenos Aires normalizes in public education gets watched across Spanish-speaking systems looking for either a model or a cautionary tale.

ACCESS_DENIED

Supreme Court to AI art: no human, no copyright

The Supreme Court left intact the rule that fully AI-generated works without human authorship are not eligible for copyright protection. The U.S. Copyright Office's AI guidance aligns with that same human-authorship baseline. (Source: Infobae, El Espectador, U.S. Copyright Office)

This is the kind of ruling that sounds narrow until platforms, labels, and marketplaces use it to decide who gets paid and who gets ignored. U.S. standards tend to leak globally through policy-by-terms-of-service, including into LATAM creator economies.

ROLLBACK

District blocks ChatGPT for child safety

Boulder Valley School District reportedly blocked student access to ChatGPT on district devices and Wi-Fi over concerns tied to newer features, including group chat and adult-content risk exposure. Translation: one school system's risk framework currently treats full platform denial as safer than supervised adoption. (Source: Boulder Daily Camera)

Stack Trace

Cursor’s new “Automations” feature lets coding agents run on triggers like Slack pings and timers, because what could possibly go wrong with always-on software interns in production.

Source: TechCrunch · Cursor

Security researchers framed CVE-2025-38617 as “a race within a race,” which is a polite way of saying patch windows are now measured in panic.

Source: Calif · NVD

Developer Twitter celebrated a mod that shows a bouncing DVD logo while Claude Code “thinks,” proving once again that we can’t regulate AI but we can definitely skin it.
