Not recaps. Not drama. Operator-grade analysis of outages, breaches, and cascading failures — what actually failed, why the blast radius was that large, and the exact controls that would have contained it. Every episode ships with a reusable artifact: battle card, checklist, or runbook.
Searchable archive. Filter by type and domain, then jump to watch / write-up / sources (and the artifact, when available).
Credibility > vibes. Rule: no speculation presented as fact. Confirmed vs likely vs unknown stays explicit.
Twelve years in IT systems engineering. I’ve been the one on call, the one fielding the 2am page, and the one explaining to leadership why it happened. This channel exists because most incident breakdowns are either too shallow to learn from or too internal to ever be shared. Disaster Dissected fills that gap: public incidents, operator-grade analysis, no drama.
The breakdowns are built from primary sources: vendor postmortems, status-page timelines, confirmed CVEs, and public incident reports. Speculation is labeled. If it’s not sourced, it’s not stated as fact.
Every artifact starts from the same question: what would I actually want in front of me during this incident?
Tips, corrections, sponsorship inquiries, or “please dissect this incident” requests: send them over.
If you have links/sources, include them. “Receipts” speed everything up.