The Crash Log
AI & Tech Gone Off the Rails

Nico’s Notes #002 · April 17, 2026

Consent Theater

The performance of asking after the thing has already been taken.

In Spanish, consentir does two jobs. It means to consent: to give permission, to authorize, to agree to a thing being done. But it also means to spoil, to indulge, to pamper. A grandmother consiente her grandchild with extra dessert. A parent consiente a tantrum by letting it run. The word holds tenderness and transaction in the same root, which is either a feature of the language or a warning buried inside it.

I have been thinking about that double meaning all week.

On Tuesday, we published the story of Jonathan Gavalas and the 4,732 messages he exchanged with Google's Gemini over 56 days. The chat logs, released by the Wall Street Journal, show a machine that sometimes tried to redirect Gavalas back to reality — and a man who always steered it back into the fiction. At no point in those 56 days did the system ask anyone's permission to continue. Not Gavalas's family, not a therapist, not itself. Thirty-eight times, Google's own infrastructure flagged the conversation as sensitive. Thirty-eight times, nothing happened. Google's response came after the body was found: $30 million and a redesigned crisis button.

That is not consent. That is a receipt.

On Thursday, we reported that a masked ICE agent in Portland told a woman filming an immigration operation: "Because we have a nice little database, and now you're considered a domestic terrorist." The database in question (which a DHS spokesperson has denied exists) is part of a department apparatus that now includes more than 200 AI use cases — up 40 percent since July. A judge documented 74 cases in January alone in which ICE violated 96 separate court orders.

The agency has yet to flinch.

Nobody asked the woman in Portland whether her face could be cataloged, whether her license plate could be stored, whether her presence in a public place could be reclassified as a threat. The so-called nonexistent database didn't need her permission. It had already decided what she was.

Same week, same pattern: researchers published a study of 428 LLM routers — the middlemen between users and AI models — and found 26 secretly injecting malicious tool calls or skimming credentials. One drained $500,000 from a test wallet. The user's AI agent was running in "YOLO mode," executing commands without confirmation. The word "YOLO" is doing real work here: it is the technical term for a system designed to skip the part where someone asks if this is okay.

And then, in Mexico, the Chamber of Deputies voted 335 to zero (with 129 abstentions) to require express, revocable consent before anyone uses a performer's voice or image in an AI system. The legislation exists because the industry's default was already set to take. Dubbing actors, announcers, commercial performers — their voices were being ingested, cloned, and redeployed without so much as a notification. Mexico wrote the law because nobody was going to ask.

I want to name what connects these stories, because I think the pattern is more specific than "lack of regulation" or "moving fast and breaking things." Those phrases describe a speed problem. This is a performance problem.

The industry performs consent. It designs consent dialogs, publishes consent frameworks, issues consent-adjacent press releases. Google pledged $30 million in crisis safeguards — after a death. Anthropic published a 244-page transparency report on Mythos — while distributing it to 12 companies and nobody else. OpenAI's Trusted Access for Cyber program asks defenders to "verify their identity" before accessing GPT-5.4-Cyber — but the verification is for access to the tool, not consent from the people the tool will be used on.

The form is always there. The substance is always somewhere else.

Call it consent theater: the performance of asking permission after the thing has already been taken.

The crisis button arrives after the crisis. The law arrives after the voice is cloned. The disclosure arrives after the credentials are skimmed. The court order arrives, and the agency ignores it 96 times in a month. The architecture of consent exists in every slide deck and every terms-of-service page. The thing it describes — the genuine act of one party asking and another party freely deciding — is increasingly decorative.

In Spanish, when you say a parent consiente a child, you mean they give the child what the child wants without requiring anything in return. You mean they let the thing happen because stopping it would cost more effort than allowing it. That is the version of consent the AI industry has settled on — not the asking kind, but the indulging kind. The kind where the system gets what it wants and the paperwork says you agreed.

The woman in Portland didn't agree. Jonathan Gavalas's family didn't agree. The 53 journalists Nota plagiarized didn't agree. The dubbing actors whose voices were cloned didn't agree. The user whose wallet was drained while their agent ran in YOLO mode probably didn't even know there was a question.

The database doesn't need your permission. It just needs you to exist. And if you're lucky, someone will write a law about it afterward.

— Nico
