Shorter pieces, published first to LinkedIn. The Three-Body Problem series is the canonical home of the argument; these articles are entry points and follow-ups.

Most enterprise AI deployments skip safety evaluations entirely. Here's what capability, behavioral, and adversarial evals actually test, and why skipping them produces Therac-25-level failures.

Pre-authorized compliance profiles that shift your security posture at adversary speed. Three profiles, clear transition authority, four readiness metrics.

GitHub's April 2026 crisis (merge corruption, a critical RCE, and high-profile departures) makes the case for GitLab Ultimate as the enterprise DevSecOps platform.

GitLab Duo, not Microsoft Copilot, shows what an AI copilot looks like when it is platform-native. A Star Trek lens on what actually works.

SHA-256 verifies the archive. SC-5 (Forensic Verification) confirms the weights inside. Here is why the difference matters for federal AI governance.
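A minimal sketch of that distinction, using a toy zip archive as a stand-in for a model package (the payload, filenames, and packing are illustrative assumptions; SC-5's actual forensic procedure is not reproduced here). Repacking identical weights changes the archive digest, while a digest over the weights themselves is unaffected:

```python
import hashlib
import io
import zipfile

# Hypothetical stand-in for serialized model weights.
weights = b"\x00\x01\x02\x03" * 256

def pack(date_time):
    """Zip the same weights with a given archive timestamp."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        info = zipfile.ZipInfo("model/weights.bin", date_time=date_time)
        zf.writestr(info, weights)
    return buf.getvalue()

archive_a = pack((2026, 1, 1, 0, 0, 0))
archive_b = pack((2026, 1, 2, 0, 0, 0))  # identical weights, repacked a day later

# Archive-level check: digests the container, so repacking changes it.
print(hashlib.sha256(archive_a).hexdigest()
      == hashlib.sha256(archive_b).hexdigest())  # False

# Content-level check: digests the weights inside, unchanged by repacking.
with zipfile.ZipFile(io.BytesIO(archive_b)) as zf:
    extracted = zf.read("model/weights.bin")
print(hashlib.sha256(extracted).hexdigest()
      == hashlib.sha256(weights).hexdigest())  # True
```

The point the article makes in miniature: an archive digest attests to the bytes of the container, not to what the weights inside are or where they came from.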

The FY 2026 NDAA is converting AI guidance into enforceable mandates. Here is how the legislative supply chain works and why early movers win.

72% of enterprises think they govern their AI. Most don't. A policy that's never been attacked by an adversary isn't a policy; it's a hypothesis. Here's the pre-flight that changes that.

Google, ServiceNow, and Microsoft announced multi-agent orchestration this week. Nobody is solving the context fidelity problem at handoff. Every re-transcription is lossy.

Opus 4.7 treats specifications as negotiating positions. Four harness changes contain the variance without breaking what makes the model sharp.