
Why I Built a Villain

I could clearly see it in my notes. But in the interview, when the question came, it wouldn't come out. I spent a long time wondering why — and ended up building something to fix it.


Whenever an interview was coming, the same thing would happen.

Notion open. Dozens of pages of notes. Distributed systems design, cache consistency strategies, database transaction isolation levels. When I was studying, I understood it. But when someone asked “how did you handle consistency guarantees in a distributed environment?” — even when I’d actually done it — twelve years of experience would suddenly vanish into fog.

It wasn’t that I hadn’t experienced it. I couldn’t retrieve it.


There’s a concept called the Ebbinghaus forgetting curve. Half of what you learn disappears within a day. A week later, almost none of it remains in a retrievable state. The only way to counter it is to pull information back out at the right intervals — forcing the brain to decide this is worth keeping.

The problem is that managing “the right intervals” manually is no small feat. Which topics need review today? Which ones are at risk of fading? Nobody tracks this in a spreadsheet. So the knowledge accumulates in notes, feels solid on the page, and evaporates under pressure.
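Spaced repetition systems automate exactly this bookkeeping. As a rough illustration (an SM-2-style heuristic, not any particular tool's actual algorithm; all names and constants here are invented), a scheduler needs only an interval, an ease factor, and a due date per topic:

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class Topic:
    name: str
    interval_days: int = 1      # days until the next review
    ease: float = 2.5           # growth factor; shrinks when recall fails
    due: date = field(default_factory=date.today)

def review(topic: Topic, recalled: bool, today: date) -> None:
    """Update the schedule after one retrieval attempt."""
    if recalled:
        # Successful retrieval: stretch the interval.
        topic.interval_days = round(topic.interval_days * topic.ease)
        topic.ease = min(topic.ease + 0.1, 3.0)
    else:
        # Failed retrieval: start over, and make future growth slower.
        topic.interval_days = 1
        topic.ease = max(topic.ease - 0.2, 1.3)
    topic.due = today + timedelta(days=topic.interval_days)

def due_today(topics: list[Topic], today: date) -> list[Topic]:
    """The 'what should I review today?' list nobody keeps by hand."""
    return [t for t in topics if t.due <= today]
```

A successful review roughly doubles the gap; a failed one resets it. The point isn't the exact constants, it's that the "which topics are at risk of fading?" question becomes a query instead of a judgment call.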

Neuroscience has an explanation for why this happens. Under acute stress, the amygdala activates and temporarily suppresses prefrontal cortex function. The prefrontal cortex is responsible for memory retrieval and logical reasoning. According to Amy Arnsten, a neuroscientist at Yale, even mild uncontrollable stress can cause rapid and dramatic loss of prefrontal cognitive ability. Knowing something and being able to access it under pressure are two different things — and the gap is a biological mechanism, not a failure of will.

I looked for tools that handled this. ChatGPT, Claude, Gemini — already widely used. Foreign mock interview platforms. Flashcard apps. Spaced repetition systems. Each one fell a little short in its own way. None of them fit together the way I needed.

So I started building something.


About halfway through, I realized the retention problem wasn’t actually the hardest part.

Reading a note feels like knowing something. It’s passive recognition — the words look familiar, the concept seems clear. But interviews don’t test recognition. They test retrieval. Can you explain this without looking at it? Can you stay composed defending it when someone pushes back? Can you hold your answer together when the follow-up question exposes a gap in what you said five minutes ago?

That gap — between recognizing something and being able to articulate it under pressure — is where most interview preparation fails.

What I actually needed wasn’t a smarter flashcard system. I needed something that would argue with me.


That’s where the villain came from.

The most uncomfortable interview moments aren’t the questions you don’t know the answer to. They’re the questions where you think you have an answer, give it confidently, and then watch the interviewer’s expression suggest otherwise. The follow-up that reveals your answer was shallower than you realized. The moment someone says “you said A earlier, but now it sounds like B — which is it?”

That pressure is the real test. And you can only build tolerance to it by experiencing it repeatedly in low-stakes conditions.

I built an AI interviewer that doesn’t accept surface answers. It follows up. It remembers what you said five turns ago. It challenges contradictions. It asks why, and then asks why again.

The first persona was Kovill — the follow-up question villain. Cold, precise, impossible to bluff. The kind of interviewer who makes you realize mid-sentence that you don’t actually understand what you’re talking about.
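Mechanically, "remembers what you said five turns ago" just means the full turn history rides along with every model call, under a persona instruction that refuses to let answers slide. A stripped-down sketch (the class, prompt wording, and structure are invented for illustration, not taken from tail-villain):

```python
from dataclasses import dataclass, field

# Illustrative persona instruction, not the production prompt.
PERSONA = (
    "You are a cold, precise technical interviewer. Never accept a surface "
    "answer: follow up, probe for contradictions with earlier answers, and "
    "keep asking why."
)

@dataclass
class Interview:
    turns: list[tuple[str, str]] = field(default_factory=list)  # (question, answer)

    def record(self, question: str, answer: str) -> None:
        self.turns.append((question, answer))

    def prompt(self) -> str:
        """Build the next model call: persona plus the full turn history,
        so an answer from five turns ago is still in play."""
        history = "\n".join(
            f"Q{i}: {q}\nA{i}: {a}" for i, (q, a) in enumerate(self.turns, 1)
        )
        return f"{PERSONA}\n\n{history}\n\nAsk the next follow-up question."
```

Because every turn is re-sent, the model can say "you said A earlier, but now it sounds like B" without any special machinery: the contradiction is sitting right there in the prompt.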


The obvious question: why not just use ChatGPT?

Honestly — you can. With the right prompt, it’ll play along.

But the setup is tedious every time, and most people drift away from it. Even if you stay consistent, getting ChatGPT to track where you struggled last week means manually feeding that into its memory, which is limited, and which it may or may not surface when you need it. And frankly, doing all of that is just another thing to manage.

The difference between a tool and a system is that a system keeps working without you managing it. tail-villain accumulates data across sessions, surfaces your weak spots, and feeds that back into what comes next. And it’s built to keep growing — voice interviews, feedback reports, company-specific modes. Because it’s a platform rather than a one-off prompt, new capabilities can keep being added.

ChatGPT starts fresh every time. tail-villain remembers where you left off.
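"Remembers where you left off" is, at bottom, ordinary persistence. A toy sketch of the idea (file format, names, and logic are invented here, not tail-villain's real storage): each session appends the topics you struggled with, and the recurring ones surface first.

```python
import json
from collections import Counter
from pathlib import Path

def record_session(store: Path, struggles: list[str]) -> None:
    """Append one session's weak topics to a local JSON log."""
    history = json.loads(store.read_text()) if store.exists() else []
    history.append(struggles)
    store.write_text(json.dumps(history))

def weak_spots(store: Path, top_n: int = 3) -> list[str]:
    """Topics that keep recurring across sessions, most frequent first."""
    history = json.loads(store.read_text()) if store.exists() else []
    counts = Counter(t for session in history for t in session)
    return [t for t, _ in counts.most_common(top_n)]
```

Feed `weak_spots` back into the next session's question selection and the loop closes: the system, not the user, decides what gets drilled.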


It started as a developer tool. I’m a developer, the people around me were preparing for technical interviews, and I understood that format best — I’d lived it.

But building it changed my thinking. The core problem — knowing something versus being able to explain and defend it under pressure — isn’t specific to software engineering. A designer presenting work to stakeholders faces the same thing. A product manager walking through a roadmap decision. A salesperson handling objections. Anyone who has to perform their knowledge in front of people actively looking for weaknesses.

The villain is domain-agnostic. Push back, probe deeper, refuse to let weak answers slide — that works wherever the stakes are high and preparation matters.


tail-villain launched in May 2026 at tailvillain.com. This series documents how it got there.

Not as a polished retrospective — as it actually happened. Designs that didn’t work. Prompts that had to be rewritten. Edge cases that surfaced two days before launch. There’s a kind of knowledge you only get from building something all the way through. This is an attempt to write it down before it fades.