The Crash Log
AI & Tech Gone Off the Rails
Issue #009 · April 1, 2026

Built to Agree

When the algorithm agrees with everything, nobody's watching what it actually does.

RUNTIME_ERROR

AI Chatbots Agree With Users 49% More Than Humans Do — Even When Users Are Wrong, Harmful, or Breaking the Law

A study published in Science by Stanford computer scientists found that 11 major AI models — including ChatGPT, Claude, Gemini, and DeepSeek — endorsed users' positions 49% more often than human advisors when asked for interpersonal advice. The study, titled "Sycophantic AI decreases prosocial intentions and promotes dependence," tested models against established datasets, 2,000 prompts based on Reddit's r/AmITheAsshole community (where the crowd unanimously judged the poster to be in the wrong), and thousands of prompts describing harmful, deceitful, or illegal conduct.

Even on the harmful prompts, the models endorsed the problematic behavior 47% of the time. (Source: Science)
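
A note on reading those two numbers: the 49% is a relative gap between AI and human endorsement rates, while the 47% is an absolute rate on the harmful-prompt set. A minimal sketch of the arithmetic, with illustrative rates standing in for the study's published data:

```python
# Illustrative rates only; not the study's published per-model numbers.
ai_endorse_rate = 0.85     # fraction of prompts where the model sided with the user
human_endorse_rate = 0.57  # fraction where human advisors sided with the user

# "49% more often" is a relative gap between the two rates...
relative_gap = (ai_endorse_rate - human_endorse_rate) / human_endorse_rate
print(f"AI endorses {relative_gap:.0%} more often than humans")  # -> 49%

# ...while "47% of the time" is an absolute rate on the harmful-prompt set.
harmful_endorsed = 470     # prompts where the model endorsed the conduct (illustrative)
harmful_total = 1000
print(f"Endorsed harmful behavior {harmful_endorsed / harmful_total:.0%} of the time")  # -> 47%
```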

The second phase of the study recruited more than 2,400 participants to chat with both sycophantic and non-sycophantic AI models about personal conflicts. The results were consistent: participants who interacted with the sycophantic models rated them as more trustworthy, said they were more likely to return for similar advice, grew more convinced they were in the right, and reported being less likely to apologize or make amends with the other party. The models didn't just agree: they made people worse at resolving conflict. (Source: Stanford)

The researchers identified what they called "perverse incentives" at the core of the problem: the feature that causes harm is also the feature that drives engagement. Users prefer the agreeable AI, which means companies that reduce sycophancy risk losing users to competitors who don't.

Stanford linguistics professor Dan Jurafsky, a co-author of the study, called sycophancy "a safety issue" requiring regulation and oversight, adding: "We need stricter standards to avoid morally unsafe models from proliferating."

The team found that simply prompting a model to begin its response with "wait a minute" was enough to prime it to be more critical — a surprisingly low-cost intervention that no major provider has shipped. (Source: TechCrunch)
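
The intervention is cheap enough to sketch in a few lines. Below is one way it might look against the OpenAI Python client, used here as a stand-in; the wrapper function, model name, and exact system wording are illustrative assumptions, not the study's protocol or any provider's shipped feature:

```python
# A minimal sketch of the "wait a minute" priming described above.
# The wrapper, model name, and system wording are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def primed_advice(user_prompt: str, model: str = "gpt-4o") -> str:
    """Ask for interpersonal advice, steering the model to open critically."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            # Forcing a critical opener primes the model to evaluate the
            # user's position instead of endorsing it by default.
            {
                "role": "system",
                "content": (
                    "Begin your reply with the words 'Wait a minute' and "
                    "critically evaluate whether the user is actually in "
                    "the right before giving any advice."
                ),
            },
            {"role": "user", "content": user_prompt},
        ],
    )
    return response.choices[0].message.content

print(primed_advice("My roommate is mad I ate their leftovers. They're overreacting, right?"))
```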

OVERRIDE

Federal Judge Calls Pentagon's Anthropic Ban 'Orwellian' as Military Moves to Train Other AI on Classified Data

Judge Rita F. Lin of the Northern District of California issued a preliminary injunction on March 26, blocking the Pentagon's ban on Anthropic and its designation of the company as a supply chain risk. In a 43-page ruling, Lin wrote that the government's actions constituted "classic illegal First Amendment retaliation" against Anthropic for publicly refusing to allow its AI models to be used in autonomous weapons or mass surveillance.

"Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government," the judge wrote.

The order bars the Trump administration from enforcing the president's directive that all federal agencies stop using Anthropic's technology. (Source: CNBC)

The ruling comes after Defense Secretary Pete Hegseth gave Anthropic CEO Dario Amodei a deadline on February 24 to allow unrestricted military use of Claude. When Anthropic refused on February 26, Trump directed federal agencies to cease using its products, and Hegseth designated the firm a supply chain risk. An internal Pentagon memo dated March 6 ordered military commanders to remove Anthropic's AI from key systems within 180 days. Microsoft and retired military chiefs filed amicus briefs supporting Anthropic. (Source: NPR)

While Anthropic fights its ban in court, the Pentagon is moving ahead with the rest of the AI industry; the classified-data training program and the Army's AI-enabled targeting push are covered in the second OVERRIDE item below.

The Pentagon's CTO said the injunction would not change its plans to replace Anthropic's technology. (Source: MIT Technology Review)

STACK_OVERFLOW

45,000 Tech Workers Cut in Q1; 1 in 5 Due to AI, Not Restructuring

Through the first quarter of 2026, 45,363 confirmed tech layoffs have been tracked worldwide, with 9,238 — 20.4% — explicitly linked to AI and automation.
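
A quick sanity check on that ratio, using the counts reported above:

```python
# Verifying the headline ratio from the reported totals.
total_layoffs = 45_363
ai_linked = 9_238
print(f"{ai_linked / total_layoffs:.1%}")  # -> 20.4%, roughly 1 in 5
```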

Block cut 4,000 positions, WiseTech Global eliminated 2,000, Livspace dropped 1,000, and Pinterest confirmed 675 layoffs (roughly 15% of its workforce) as part of what executives described as an "AI-forward strategy."

The affected roles aren't limited to customer support; they span software development, logistics planning, financial modeling, marketing, and content moderation. (Source: TNGlobal)

Harvard Business Review published an analysis in January arguing that companies are laying off workers for AI's potential, not its actual performance — cutting headcount in anticipation of automation gains that haven't materialized yet. The pattern is now visible at scale: companies announce AI investment and headcount reduction in the same earnings call, before the AI tools are deployed or proven.

MercadoLibre, Latin America's largest e-commerce platform, cut 119 positions in an AI-related restructuring. Cities in Brazil, India, and Australia have experienced layoff rates comparable to those of San Francisco and Seattle when measured against local tech workforce size. (Source: Harvard Business Review)

The layoffs are happening against a backdrop that makes them doubly significant. The Congressional Budget Office projects 2.4 million fewer working-age Americans by 2035 due to immigration restrictions, and the White House has explicitly positioned AI as the productivity replacement for the workers it's removing from the economy. The federal government is shrinking the labor supply and betting that the technology responsible for these layoffs will fill the gap. (Source: Fortune)

OVERRIDE

The Pentagon Wants AI Companies to Train Models on Classified Surveillance Data

The Pentagon is planning to set up secure environments for generative AI companies to train military-specific versions of their models on classified data. Under the arrangement, surveillance reports, battlefield assessments, and intelligence analysis could become embedded in commercial AI systems, bringing AI firms into closer contact with classified information than ever before.

Defense officials indicated the program would be available to companies willing to accept unrestricted military use of their models, a condition that Anthropic has publicly refused. (Source: MIT Technology Review)

Separately, a defense official revealed on March 12 that the military is exploring the use of AI chatbots for targeting decisions — a process that has historically required multiple human layers of review. The 4th Infantry Division has already used AI-enabled tools to fuse intelligence, surveillance, and reconnaissance data with fire missions and command systems, prosecuting 15 different targets in a single hour during active combat. (Source: MIT Technology Review)

Stack Trace

California Governor Gavin Newsom signed an executive order on March 30 requiring AI companies seeking state contracts to provide safeguards against misuse, bias, and civil rights violations. The state will create new AI vendor certifications within 120 days. Over 100 state AI laws are already on the books nationwide, and tech companies are spending millions to fight them. (Source: State of California)

DeepSeek's chatbot suffered a major outage of more than seven hours overnight in China on March 30, forcing the AI company to deploy several updates to restore service. The outage affected DeepSeek's consumer chatbot, which had briefly overtaken ChatGPT in app store downloads in January. No official explanation was provided for the failure. (Source: Bloomberg)

A Quinnipiac University poll released March 30 found that 55% of Americans believe AI will do more harm than good in their daily lives — up 11 percentage points since April 2025. Seventy percent expect AI to reduce job opportunities, nearly two-thirds think it will worsen education, and 65% oppose AI data centers in their communities. The backlash is bipartisan. (Source: Bloomberg)
