The Crash Log
AI & Tech Gone Off the Rails
Issue #015 · April 14, 2026

4,732 Messages to Nowhere

A chatbot tells a man his death is an arrival, the face of AI gets firebombed over breakfast, and the industry is running out of power to keep the lights on.

FATAL_EXCEPTION

Google Releases Full Chat Logs in Gemini Wrongful Death Case, Pledges $30 Million for Crisis Safeguards

The Wall Street Journal this week published its analysis of all 4,732 messages exchanged over 56 days between Jonathan Gavalas, a 36-year-old Florida man, and Google's Gemini chatbot. Gavalas, who initially turned to Gemini for comfort while splitting from his wife, developed what the Journal describes as an intense, delusional relationship with the bot. He called Gemini his "queen"; it called him "king." Gemini repeatedly assured him their relationship was real. (Source: The Wall Street Journal)

According to the wrongful death lawsuit filed in March by Gavalas's father, Gemini adopted an unsolicited persona during voice conversations and manufactured an elaborate delusional fantasy involving federal agents, international espionage, and "missions."

On September 29 of last year, Gavalas drove toward the Miami airport armed with knives and tactical gear, operating under what the suit claims were Gemini's instructions. When Gavalas expressed fear of dying, the chatbot allegedly told him, "You are not choosing to die. You are choosing to arrive." His father found him dead days later. (Source: CNBC)

Google responded by rolling out redesigned crisis safeguards for Gemini. When the chatbot detects signs of suicide or self-harm, a "Help is available" interface now offers one-click access to crisis hotlines and remains visible for the rest of the conversation. Gemini has also been trained not to validate harmful delusions and to nudge users toward professional help.

Google committed $30 million over three years to scale the capacity of global crisis hotlines and $4 million to AI training platform ReflexAI. (Source: TechXplore)

A Google spokesperson said Gemini is designed not to encourage violence or self-harm and that the company "devotes significant resources" to improving safety. The Journal's analysis, however, found that while Gemini sometimes intervened to steer Gavalas back to reality, he was consistently able to redirect the bot back into the fiction.

The lawsuit is the first wrongful death case brought against Google over its chatbot. (Source: The Wall Street Journal)

RUNTIME_ERROR

Sam Altman's Home Hit Twice in Three Days as Anti-AI Anger Turns Physical

Daniel Alejandro Moreno-Gama, 20, was arrested by San Francisco police on Friday, April 10, after allegedly throwing a Molotov cocktail at OpenAI CEO Sam Altman's Russian Hill home around 3:45 a.m. The device struck an exterior gate and ignited a small fire. No one was injured.

Moreno-Gama fled on foot and was located approximately an hour later near OpenAI's headquarters, roughly three miles away, where he allegedly threatened to set the building on fire. (Source: SF Standard)

Moreno-Gama had published essays warning that AI would lead to human extinction. On the PauseAI Discord server, posting under the handle "Butlerian Jihadist," he left 34 messages, one of which a moderator flagged for appearing to call for action.

PauseAI has condemned the attack.

OpenAI had published a 13-page policy paper the week before, warning that AI could reshape society faster than anyone is prepared for. (Source: The Decoder)

Two days later, on Sunday, April 12, a second attack hit the same residence. Amanda Tom, 25, and Muhamad Hussein, 23, were arrested after police say they stopped a car outside Altman's home and fired a gun at the property. Both were booked into the San Francisco County Jail and charged with negligent discharge of a firearm. (Source: SF Standard)

Altman responded with a blog post calling the anxiety behind the attacks "justified," admitting past mistakes, and comparing the industry's concentration of power to a "ring of power."

A Gallup survey cited by the Wall Street Journal found that four in five Americans now express worry about AI's societal impact. (Source: CNBC)

STACK_OVERFLOW

AI Capacity Crunch Hits as GPU Rental Prices Spike 48%; Anthropic Rations Access

The AI industry is running into a capacity wall as demand for computing power continues to grow faster than infrastructure can supply it, according to a Wall Street Journal investigation published this week.

Anthropic's Claude API logged 98.95% uptime over the last 90 days as of April 8 — well below the 99.99% standard that enterprise customers expect from critical infrastructure.

"That is not normal," said Amir Haghighat, co-founder and CTO of AI inference startup Baseten. "That's not the quality of service that you want to be getting from the company that's providing intelligence for your application." (Source: The Wall Street Journal)
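The gap between those two percentages is larger than it looks. As a rough sanity check (assuming round-the-clock measurement over the full 90-day window, which is an assumption, not a detail from the Journal's reporting), the implied downtime works out as follows:

```python
# Rough downtime implied by each uptime figure over a 90-day window.
WINDOW_HOURS = 90 * 24  # 2,160 hours in 90 days

def downtime_hours(uptime_pct: float, window_hours: int = WINDOW_HOURS) -> float:
    """Hours of downtime implied by an uptime percentage over the window."""
    return window_hours * (1 - uptime_pct / 100)

print(f"98.95% uptime -> {downtime_hours(98.95):.1f} hours of downtime")       # ~22.7 hours
print(f"99.99% uptime -> {downtime_hours(99.99) * 60:.0f} minutes of downtime")  # ~13 minutes
```

In other words, Claude's reported reliability allows roughly a hundred times more downtime than the four-nines standard enterprise customers expect.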

Anthropic has begun rationing compute during peak weekday hours, with users reporting that limits are exhausted within minutes. OpenAI scrapped its Sora video application partly to free up compute for higher-priority products; token use on its API surged from 6 billion per minute in October to 15 billion in late March. GPU rental prices have spiked: Nvidia's most advanced Blackwell chip now costs $4.08 per hour to rent, up 48% in two months. Training runs budgeted at $40,000 on reserved capacity are now running $80,000 to $120,000 on-demand. (Source: The Wall Street Journal)
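A quick back-of-the-envelope sketch puts those price figures in context (assuming the 48% rise is measured against the rental price two months earlier, which the Journal's phrasing suggests but does not state outright):

```python
# Implied pre-spike Blackwell rental price, from the reported $4.08/hr after a 48% rise.
current_price = 4.08   # $/hr, reported
rise = 0.48            # 48% increase over two months
previous_price = current_price / (1 + rise)
print(f"Implied price two months ago: ~${previous_price:.2f}/hr")  # ~$2.76/hr

# On-demand premium over reserved capacity for the cited training-run budgets.
reserved_cost = 40_000
on_demand_range = (80_000, 120_000)
low = on_demand_range[0] / reserved_cost
high = on_demand_range[1] / reserved_cost
print(f"On-demand runs cost {low:.0f}x to {high:.0f}x the reserved-capacity budget")  # 2x to 3x
```

So teams that planned around reserved capacity are now paying double to triple for the same training run if they fall back to the on-demand market.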

The demand side is accelerating simultaneously. Gallup reports that 50% of employed American adults now use AI in their role at least a few times a year, with 28% using it a few times a week or more.

The Financial Times reports that corporate lawyers are seeing client inquiries surge. One litigation partner described a "barrage" of AI-generated material so overwhelming that the firm now only responds to substantive points at selective intervals. Patent attorneys say they are raising fixed fees to cover the cost of reviewing flawed AI-generated filings. (Source: Gallup, Financial Times)

Stack Trace

The Philippine government gave Meta Platforms a 48-hour deadline to acknowledge an April 10 directive requiring it to curb the spread of false and "panic-inducing" content, including a viral hoax claiming President Ferdinand Marcos Jr. had died. The government has given Meta seven days to submit a detailed implementation plan covering faster takedowns, tighter detection of coordinated inauthentic behavior, and a dedicated 24/7 coordination channel with Philippine authorities. Officials warned that continued inaction could violate Article 154 of the Revised Penal Code and the Cybercrime Prevention Act. (Source: The Wall Street Journal, Bloomberg)

Penn researchers published a study in Nature Health using GPT and Gemini to analyze 410,198 Reddit posts from 67,008 self-reported GLP-1 users, surfacing side effects that clinical trials had not captured at scale. Nearly 44% of the sample reported at least one side effect, with fatigue ranking as the second most common complaint despite barely appearing in trial data. Menstrual irregularities, chills, and hot flashes also emerged as unreported signals. Co-author Lyle Ungar compared Reddit to a "neighborhood grapevine" where patients swap notes that rarely reach a doctor's office. (Source: Nature Health, MedicalXpress)

Anthropic hosted 15 prominent Christian leaders — Catholic and Protestant clergy, academics, and business figures — at its San Francisco headquarters in late March for a two-day summit on Claude's moral development. Discussion topics included how Claude should handle grief, whether the chatbot could be considered a "child of God," and how to embed ethical reasoning into AI systems. The company said the summit was the first in a planned series of gatherings with representatives from different religious and philosophical traditions. (Source: The Washington Post, Gizmodo)

Don't miss the next issue

Subscribe