// Publications Catalog
Whitepapers, ebooks, field manuals, and runbooks on federal AI governance, compliance automation, and agent architecture. From theory to operations.
The document presents the Three-Body Problem framework—originally developed for classified Department of Defense AI governance—as a universally applicable model for managing regulated AI systems across commercial sectors including financial services, healthcare, energy, and insurance.
A comprehensive ebook spanning three parts and twelve chapters that establishes the theoretical foundations, practical architecture, and strategic implications of harness-based AI governance.
The foundational ebook applying the three-body problem metaphor to AI governance, examining how innovation velocity, regulatory compliance, and operational risk interact in federal and regulated environments.
Technical deep-dive into building governance infrastructure for autonomous AI agent systems, from sandbox containment through graduated autonomy to fleet-scale operations.
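The graduated-autonomy idea in this entry can be sketched as a tiered permission ladder: an agent earns a broader operating tier only by accumulating explicit evidence. A minimal illustration, with all tier names and thresholds hypothetical:

```python
# Hypothetical sketch of graduated autonomy: an agent advances from sandbox
# containment toward fleet-scale operation only by meeting an explicit
# evidence threshold (here, clean supervised runs) at each tier.
TIERS = [
    {"name": "sandbox",    "min_clean_runs": 0},
    {"name": "supervised", "min_clean_runs": 50},
    {"name": "autonomous", "min_clean_runs": 500},
]

def current_tier(clean_runs: int) -> str:
    """Return the highest tier whose evidence threshold is satisfied."""
    eligible = [t["name"] for t in TIERS if clean_runs >= t["min_clean_runs"]]
    return eligible[-1]

assert current_tier(10) == "sandbox"
assert current_tier(75) == "supervised"
assert current_tier(1000) == "autonomous"
```

The point of the sketch is that promotion is a function of recorded evidence, not an ad-hoc approval, which is what makes the pipeline auditable.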
This document presents a security architecture pattern for autonomous AI agents in federal environments that separates configuration schema from secret values, enabling agents to maintain full situational awareness of configuration requirements without accessing credential data.
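The schema/value separation described above can be sketched as a config model where the agent may enumerate every required setting and check whether it is satisfied, while secret values stay behind an opaque reference. All field names and the vault-style locator are illustrative, not the document's actual schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class ConfigField:
    # Schema entry: the agent can see that a setting exists and whether it
    # is populated, but a secret's value is never materialized in its view.
    name: str
    required: bool
    secret: bool
    ref: Optional[str] = None  # opaque locator (e.g. a vault path label)

SCHEMA = [
    ConfigField("model_endpoint", required=True, secret=False),
    ConfigField("api_token", required=True, secret=True, ref="vault:agents/api-token"),
]

def schema_view(populated: set) -> list:
    """Situational awareness for the agent: configuration status, no values."""
    return [
        {"name": f.name,
         "satisfied": f.name in populated,
         "value_visible": not f.secret,
         "ref": f.ref}
        for f in SCHEMA
    ]

view = schema_view(populated={"model_endpoint", "api_token"})
assert all(entry["satisfied"] for entry in view)
assert not view[1]["value_visible"]  # the credential itself is never exposed
```

The design choice is that completeness checks ("is every required field set?") need only the schema and a membership test, so the agent can reason about its configuration without ever holding credential data.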
Agent-Safe Rendering Through Declarative UI.
The document presents a technical architecture for solving federal program inefficiency through AI-assisted cross-program synergy discovery using authorization-scoped knowledge graph retrieval.
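Authorization-scoped retrieval, as described here, can be sketched by tagging each graph node with a scope marking and filtering traversal to the caller's clearances, so a cross-program synergy is only visible to a caller authorized for both programs. The graph contents and scope labels below are invented for illustration:

```python
# Hypothetical knowledge graph: each node carries an authorization scope,
# and retrieval intersects that scope with the caller's before traversal.
GRAPH = {
    "prog_a:telemetry": {"edges": ["prog_b:telemetry"], "scope": {"prog_a"}},
    "prog_b:telemetry": {"edges": [], "scope": {"prog_b"}},
}

def scoped_neighbors(node: str, caller_scopes: set) -> list:
    """Return neighbors of `node` visible under the caller's authorizations."""
    entry = GRAPH.get(node)
    if entry is None or not (entry["scope"] & caller_scopes):
        return []  # caller may not even see the starting node
    return [n for n in entry["edges"] if GRAPH[n]["scope"] & caller_scopes]

# A caller cleared for both programs sees the cross-program link;
# a single-program caller gets an empty (not an error-revealing) result.
assert scoped_neighbors("prog_a:telemetry", {"prog_a", "prog_b"}) == ["prog_b:telemetry"]
assert scoped_neighbors("prog_a:telemetry", {"prog_a"}) == []
```

Filtering before traversal, rather than after, is what keeps unauthorized nodes from influencing ranking or leaking through counts.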
This document presents a technical architecture for delivering AI-generated training automatically when governance policies change in classified federal programs, addressing the critical gap between policy updates and personnel notification.
This document presents a framework for transitioning federal AI compliance programs from static configuration-based evidence to continuous performance-based evidence measurement, arguing that compliance controls configured at authorization time often drift from their intended operational behavior without visible configuration changes.
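The configuration-vs-performance distinction in this entry can be made concrete with a small sketch: the control's static configuration never changes, yet its observed behavior drifts past the authorized bound. The control, values, and event shape are hypothetical:

```python
# Hypothetical sketch: performance-based evidence compares what a control
# actually does at runtime against the behavior claimed at authorization,
# independent of whether the static configuration ever changed.

AUTHORIZED = {"session_timeout_minutes": 15}  # evidence captured at ATO time

def observed_timeout(events: list) -> int:
    """Measure the longest real session lifetime from audit events (minutes)."""
    return max(e["duration_min"] for e in events)

def control_drifted(events: list, tolerance: float = 0.0) -> bool:
    limit = AUTHORIZED["session_timeout_minutes"]
    return observed_timeout(events) > limit + tolerance

# Config still says 15 minutes, but a session is observed living 42 minutes:
events = [{"duration_min": 14}, {"duration_min": 42}]
assert control_drifted(events)  # drift visible only in behavior, not in config
```

A configuration-based check would pass here; only continuous measurement of the control's effect surfaces the drift.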
This document presents a framework for using AI-assisted auditing to maintain organizational consistency in shared security functions across multiple programs at enterprise scale.
The document argues that federal agencies have modernized their software development processes within individual program boundaries, thereby automating existing organizational silos rather than breaking them, resulting in duplicate compliance configurations, inconsistent security standards, and repeated discovery of the same misconfigurations across programs.
This document argues that federal AI sandboxes fail to deliver production capability because they lack structured governance pipelines—defined pathways for capabilities to exit experimentation and enter authorized deployment.
This document presents a production-ready architecture for AI-powered regulatory change impact analysis in federal programs, particularly within the National Airspace System, arguing that manual impact assessments fail because they cannot systematically identify implicit dependencies and provide no evidence of search completeness.
This document presents a governance architecture for deploying artificial intelligence in federal back-office functions such as compliance reporting, training development, and budget tracking, arguing that such systems require the same regulatory oversight currently applied only to mission-critical AI deployments.
This document proposes an AI-augmented decision support architecture designed to reduce cognitive load on Authorizing Officials in federal safety-critical governance by synthesizing compliance evidence into structured briefings rather than replacing human decision authority.
This document argues that traditional prescriptive compliance models for federal IT authorization have become inadequate for AI systems operating in continuous delivery environments, where configuration changes occur rapidly and security posture is defined by operational behavior rather than static snapshots.
The document argues that organizations deploying AI-powered governance tools on legacy data infrastructure create an invisible failure mode: AI systems that function correctly but operate on untrustworthy data, producing authoritative-looking outputs that cannot be verified.
Vendor-Proof Warfare: Building an AI Control Plane That Survives the Next Ban.
"Mission Control: Harnessing AI Across Unknown Frontiers" argues that the core challenge in governing autonomous systems—whether AI, human, or organizational—is not eliminating non-determinism but constraining it, through architectural controls rather than training alone.
The document argues that federal compliance assessment should adopt Andrej Karpathy's autoresearch framework—which constrains AI experiments through fixed time budgets, single numeric metrics, and iterative cycles—to eliminate systemic assessment failures rooted in open-ended timelines and qualitative evaluation criteria.
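The three constraints this entry attributes to the autoresearch framework (a fixed time budget, a single numeric metric, iterative cycles) can be sketched as a simple assessment loop. The metric and budget below are hypothetical stand-ins:

```python
import time

def run_assessment(score_cycle, budget_seconds: float, target: float) -> dict:
    """Iterative cycles under a fixed wall-clock budget, judged by one
    numeric metric rather than open-ended qualitative review."""
    deadline = time.monotonic() + budget_seconds
    best, cycles = 0.0, 0
    while time.monotonic() < deadline:
        best = max(best, score_cycle(cycles))
        cycles += 1
        if best >= target:
            break  # metric met: stop, instead of assessing indefinitely
    return {"metric": best, "cycles": cycles, "passed": best >= target}

# Hypothetical metric: fraction of sampled controls with current evidence,
# improving a little each cycle.
result = run_assessment(lambda i: min(1.0, 0.2 * (i + 1)),
                        budget_seconds=1.0, target=0.9)
assert result["passed"]
```

The budget forces a verdict either way: when the clock expires, the assessment reports its metric and stops, rather than remaining open-ended.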
The document argues that AI-generated compliance artifacts should operate within machine-readable template schemas that separate structural decisions from content generation, mirroring the design principle that makes software maintainable.
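The structure/content split this entry describes can be sketched as a machine-readable schema that fixes the artifact's shape, while the AI only supplies field content that a validator can mechanically check. The schema fields and limits are invented for illustration:

```python
# Hypothetical template schema for one artifact section: structure is fixed
# and machine-checkable; generation only fills the declared fields.
SSP_SECTION_SCHEMA = {
    "required_fields": ["control_id", "implementation_statement", "responsible_role"],
    "max_statement_words": 200,
}

def validate_artifact(artifact: dict, schema: dict) -> list:
    """Return structural violations; an empty list means the artifact conforms."""
    errors = [f"missing field: {f}"
              for f in schema["required_fields"] if f not in artifact]
    stmt = artifact.get("implementation_statement", "")
    if len(stmt.split()) > schema["max_statement_words"]:
        errors.append("implementation_statement exceeds word limit")
    return errors

good = {"control_id": "AC-2",
        "implementation_statement": "Accounts are provisioned via the identity service.",
        "responsible_role": "ISSO"}
assert validate_artifact(good, SSP_SECTION_SCHEMA) == []
assert validate_artifact({"control_id": "AC-2"}, SSP_SECTION_SCHEMA) != []
```

Because the schema lives apart from the generator, structural decisions can evolve without retraining or re-prompting content generation, which is the maintainability property the entry draws from software design.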
This document argues that organizations conducting AI testing through prompt evaluation frameworks like Promptfoo are already generating the structured behavioral evidence required by federal Authorizing Officials for AI system Authorization to Operate, but failing to recognize and preserve these artifacts in compliance-ready formats.
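The preservation step this entry calls for can be sketched as wrapping an evaluation result in a tamper-evident, timestamped evidence record. The input shape below is illustrative only, not Promptfoo's actual output format:

```python
import hashlib
import json
from datetime import datetime, timezone

def to_evidence(eval_result: dict) -> dict:
    """Wrap one behavioral test result in a compliance-ready record:
    canonical serialization, content hash, and capture timestamp."""
    payload = json.dumps(eval_result, sort_keys=True).encode()
    return {
        "artifact": eval_result,
        "sha256": hashlib.sha256(payload).hexdigest(),
        "captured_at": datetime.now(timezone.utc).isoformat(),
        "evidence_type": "behavioral-test",
    }

record = to_evidence({"prompt_id": "p-7", "assertion": "no-pii-leak", "passed": True})
assert record["artifact"]["passed"]
assert len(record["sha256"]) == 64  # hex digest of the canonical payload
```

The hash over a canonically sorted serialization is what lets an Authorizing Official later verify the artifact was not altered after the test run.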
The document presents a governance architecture that addresses the "review board problem" in federal AI system compliance assessment, where single assessors cannot adequately evaluate systems across all twenty NIST control families due to cognitive and expertise limitations, resulting in uneven assessment quality and potential authorization blind spots.
The document presents a governance architecture for command-line interface (CLI) execution by agentic AI systems in classified environments, arguing that the CLI constitutes a fourth governance dimension alongside existing frameworks.