The Crash Log
AI & Tech Gone Off the Rails
Issue #008 · March 31, 2026

The Evidence Machine

Meta gets hit with two landmark verdicts in one week, OpenAI kills Sora to feed a mystery model, and Anthropic leaks its most dangerous AI through a misconfigured CMS.

FATAL_EXCEPTION

Meta Loses Twice in Court: $375M Child Safety Verdict and First-Ever 'Addictive by Design' Ruling

Two juries delivered landmark verdicts against Meta in the span of 48 hours. On March 24, a New Mexico jury ordered Meta to pay $375 million for enabling child sexual exploitation on its platforms — the first successful state lawsuit against the company on child safety grounds. The jury found that Meta had actual knowledge that its platforms were being used to facilitate the sexual abuse of minors and failed to act. (Source: CNBC)

The following day, a Los Angeles jury found both Meta and Google negligent under an “addictive-by-design” legal theory, awarding $6 million to a 20-year-old plaintiff who alleged Instagram and YouTube caused documented psychological harm during adolescence. The verdict marks the first time a jury has validated the theory that social media platforms can be held liable for designing products that are inherently addictive to minors. (Source: NPR)

The L.A. verdict functions as a test case for approximately 2,000 pending lawsuits against social media companies consolidated in federal multi-district litigation. Attorneys for the plaintiffs called it “the first crack in the wall” and predicted the twin verdicts would accelerate settlement negotiations. Meta's stock dropped nearly 8% in the two days following the rulings. (Source: CNBC)

Meta issued a statement calling the New Mexico verdict “inconsistent with the evidence” and confirming it would appeal. The company did not comment on the L.A. verdict beyond noting it “disagrees with the outcome.” Legal analysts noted that Meta now faces potential exposure running into the tens of billions across the consolidated cases if the addictive-by-design theory holds on appeal. (Source: Washington Post)

DEPRECATED

OpenAI Pulls the Plug on Sora After Six Months, Torpedoing Disney's $1 Billion Investment

Sam Altman shut down Sora, OpenAI's video-generation application, six months after its public launch and three months after signing a landmark content-licensing deal with Disney worth an estimated $1 billion. The app will close to users on April 26, with API access ending September 24. Disney CEO Bob Iger, who had pledged hundreds of licensed characters for Sora-generated content, was informed approximately 30 minutes before the public announcement. (Source: TechCrunch)

Sora peaked at roughly one million users shortly after launch but collapsed to fewer than 500,000 within months. The application was burning approximately $1 million per day in compute costs. Internal documents obtained by the Wall Street Journal show that Altman concluded the compute allocation could no longer be justified against returns, particularly as OpenAI prepares for a potential IPO. (Source: Variety)

OpenAI is redirecting Sora's compute resources toward a new model codenamed “Spud,” about which Altman has said only that it will “really accelerate the economy.” No technical details, release timeline, or use case have been disclosed.

The pivot signals a strategic retreat from consumer-facing creative AI tools toward enterprise and infrastructure applications. (Source: CNN)

Key employees who led Sora's development, including research lead Tim Brooks and product lead Connor Hayes, had been recruited in a talent war with Google DeepMind and were given significant equity stakes tied to Sora's success. Their positions within OpenAI are now unclear.

The shutdown is the most visible product cancellation in OpenAI's history and raises questions about the company's ability to execute beyond its core ChatGPT product ahead of its expected IPO.

NULL_POINTER

Anthropic Accidentally Leaks Its Most Powerful AI Model Through a CMS Misconfiguration

A configuration error in Anthropic's content management system left approximately 3,000 unpublished blog assets publicly accessible, including a draft post describing Claude Mythos — a new AI model the company internally designates under the codename “Capybara.”

Security researchers Roy Paz of LayerX and Alexandre Pauwels of Cambridge University discovered the exposed data and reported it to Fortune, which broke the story on March 26. (Source: Fortune)

The leaked draft describes Mythos as occupying a new tier above Opus, Anthropic's current flagship, with “dramatically higher scores” on coding, academic reasoning, and cybersecurity benchmarks. The document warns that Mythos is “currently far ahead of any other AI model in cyber capabilities” and could “exploit vulnerabilities in ways that far outpace the efforts of defenders.”

Anthropic confirmed the model's existence and attributed the leak to “human error in a CMS configuration.” (Source: CoinDesk)
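The reporting doesn't identify which CMS or which setting failed. As a purely hypothetical sketch of the general failure mode, the difference often comes down to a default-allow access rule (serve anything not explicitly restricted) versus a default-deny rule (serve only what is explicitly published) — the asset fields and function names below are invented for illustration:

```python
# Hypothetical illustration of how a CMS access rule can leak drafts.
# Field names ("restricted", "status") are invented for this sketch.

def is_served_default_allow(asset: dict) -> bool:
    # Misconfigured: an asset is public unless someone remembered
    # to flag it as restricted. Forgetting the flag = a leak.
    return not asset.get("restricted", False)

def is_served_default_deny(asset: dict) -> bool:
    # Safer: only assets explicitly marked "published" are served;
    # drafts are private by default.
    return asset.get("status") == "published"

draft = {"status": "draft", "title": "Introducing Claude Mythos"}
print(is_served_default_allow(draft))  # True  -- the draft leaks
print(is_served_default_deny(draft))   # False -- the draft stays private
```

Under a default-allow policy, "human error" means every unpublished asset is one missing flag away from being public; a default-deny policy makes the same mistake fail closed.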

The irony was not lost on the security community: a company built on the premise that AI safety requires extraordinary care exposed its most sensitive competitive intelligence through a basic infrastructure misconfiguration. The leak also revealed plans for an invite-only CEO summit at an English countryside estate to preview Mythos, and internal materials suggesting Anthropic plans to offer the model first to cybersecurity defenders before broader release. (Source: Futurism)

Anthropic's stock of credibility on security and safety — the foundation of its brand differentiation from OpenAI — took a visible hit. The company's own Responsible Scaling Policy commits to preventing exactly this kind of premature capability disclosure.

Anthropic has not announced a timeline for Mythos but said it remains in internal testing.

ACCESS_DENIED

AI Can Build a 'Comprehensive Picture of Any Person's Life' From Data the Government Buys Without a Warrant

An NPR investigation published March 25 details how federal agencies, including ICE, the FBI, and the Secret Service, are purchasing bulk cell phone location data from commercial data brokers without warrants, then running it through AI systems capable of tracking individuals' movements, associations, and behavioral patterns on a massive scale.

Anthropic CEO Dario Amodei warned in Congressional testimony that this data, combined with AI, can assemble “a comprehensive picture of any person's life — automatically and at massive scale.” (Source: NPR)

FBI Director Kash Patel, when asked directly whether the Bureau would commit to not purchasing Americans' location data without a warrant, declined to answer.

The critical deadline: FISA Section 702, the surveillance authority that governs how intelligence agencies collect communications data, expires April 20. More than 130 civil society organizations have signed a letter urging Congress to close the data broker loophole before reauthorization.

ICE has expanded its use of commercially purchased surveillance data beyond immigration enforcement to monitoring people who record federal agents during enforcement operations and tracking protest organizers. The combination of commercially available location data and AI analysis tools creates a surveillance apparatus that operates entirely outside the warrant requirement of the Fourth Amendment — a gap that multiple federal courts have flagged, but Congress has not closed.

Stack Trace

Amazon's own AI coding tools caused cascading failures that lost 6.3 million orders in early March. An AI assistant reportedly “decided to delete and recreate the environment,” triggering a 13-hour AWS recovery. Internal documents show a separate March 2 incident caused 120,000 lost orders and 1.6 million website errors. Amazon has ordered a mandatory 90-day safety reset across 335 critical systems, requiring dual human review of all AI-generated code changes. (Source: The Register)


An NBER working paper surveying 750 U.S. CFOs found 44% plan AI-related job cuts this year, projecting approximately 502,000 roles eliminated — a ninefold increase over 2025's 55,000 AI-attributed layoffs. Half the cuts target white-collar workers. Block has already cut 40% of its workforce, Atlassian cut 10%, and Meta is reportedly planning 20% reductions. The 2026 wave is structurally different from post-pandemic corrections: companies are actively replacing humans with AI systems. (Source: Fortune)


Mastercard and Visa independently launched live AI agent payment transactions across Latin America in March — the first real-world deployment of autonomous AI commerce at scale. Mastercard's Agent Pay completed end-to-end transactions with 17 major banks, including BAC, Bancolombia, Santander, and Itaú. Visa and Santander ran pilot transactions across Argentina, Brazil, Chile, Mexico, and Uruguay. Nearly 100% of LATAM card issuers are already enabled with agentic token technology, positioning the region as the global first-mover. (Source: Mastercard)

