AI Adoption Science
Janus Labs studies how governance affects AI reasoning. We publish AI Adoption Science research, maintain the Janus Protocol, and build evaluation infrastructure for teams that care about reliable agent behavior.
Many AI systems are wrapped in prompt overhead, reporting requirements, and control layers on the assumption that more governance means better outcomes. In practice, that overhead can compete with the reasoning it is supposed to protect.
We call this the Governance Paradox: as governance load rises, reasoning quality can fall. It is an observed phenomenon, not a finished theory, and it sits at the center of the Janus Labs research program.
O-1 Phenomenon: Why heavy governance can make AI look safer while reasoning worse.
O-4 Principle: Why AI governance should tighten only when performance slips.
Meta-framework: Why the field needs a clearer language for what is observed, hypothesized, and actually proven.

Separate roles for generation and critique. The Builder advances the task. The Watcher checks for drift, repetition, and failure without crowding the main reasoning path.
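A minimal sketch of that separation in Python. The Builder, Watcher, and run names and the specific checks are illustrative assumptions, not the Janus Protocol's actual interfaces; the point is that critique runs beside the reasoning path, not inside it.

```python
from dataclasses import dataclass, field

class Builder:
    """Advances the task; carries no governance logic."""

    def step(self, task: str, history: list[str]) -> str:
        # Stand-in for a model call that produces the next reasoning step.
        return f"step {len(history) + 1} toward: {task}"

@dataclass
class Watcher:
    """Critiques off the main path: flags drift and repetition, never edits output."""
    seen: set = field(default_factory=set)

    def review(self, output: str) -> list[str]:
        flags = []
        if output in self.seen:
            flags.append("repetition")  # exact repeat of an earlier step
        self.seen.add(output)
        if not output.strip():
            flags.append("failure")     # degenerate or empty step
        return flags

def run(task: str, max_steps: int = 5) -> list[str]:
    builder, watcher, history = Builder(), Watcher(), []
    for _ in range(max_steps):
        out = builder.step(task, history)
        history.append(out)
        if watcher.review(out):  # the Watcher only signals; it never rewrites a step
            break
    return history
```

Keeping the Watcher's findings out of the Builder's inputs is the design choice: the main reasoning path stays uncrowded, and governance shows up only as signals.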
A design principle: governance should stay quiet when the system is healthy and become visible when deviation appears. The goal is higher verifiability with lower overhead.
A lightweight escalation heuristic, where N is the count of flagged deviations in a run: N=1 pass, N≥2 warn, N≥3 halt. Useful on its own, and stronger when paired with semantic and confidence signals.
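A sketch of the heuristic, assuming N counts the deviations a monitor has flagged so far; the Action names are illustrative.

```python
from enum import Enum

class Action(Enum):
    PASS = "pass"
    WARN = "warn"
    HALT = "halt"

def escalate(n: int) -> Action:
    """N=1 pass, N>=2 warn, N>=3 halt; the highest threshold met wins."""
    if n >= 3:
        return Action.HALT
    if n >= 2:
        return Action.WARN
    return Action.PASS
```

Because PASS is the default, the layer stays silent on a healthy run and only surfaces as deviations accumulate, which is the quiet-until-deviation behavior described above.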
Measure what your governance layers cost in context, latency, and operator effort. Find out whether extra process is improving verifiability or just adding friction.
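One way to start, sketched with assumed fields; real instrumentation would pull these numbers from your agent traces and token accounting rather than hand-filled records.

```python
from dataclasses import dataclass

@dataclass
class RunStats:
    tokens: int              # total prompt + output tokens for the run
    latency_s: float         # wall-clock seconds for the run
    operator_minutes: float  # human time spent reviewing or unblocking

def governance_cost(governed: RunStats, baseline: RunStats) -> dict[str, float]:
    """Overhead of a governed configuration relative to a minimal baseline.

    Pair these deltas with a verifiability metric: if the costs grow while
    verifiability does not, the extra process is friction, not protection.
    """
    return {
        "extra_tokens": governed.tokens - baseline.tokens,
        "extra_latency_s": governed.latency_s - baseline.latency_s,
        "extra_operator_minutes": governed.operator_minutes - baseline.operator_minutes,
    }
```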
Move claims from Observed to Validated. The taxonomy provides a shared vocabulary for epistemic status. The gaps in the matrix are the research agenda.
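A minimal encoding of that vocabulary, assuming the three statuses named on this page; the Claim structure is an illustration, not the published taxonomy.

```python
from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    OBSERVED = "observed"          # seen in practice, not yet explained or tested
    HYPOTHESIZED = "hypothesized"  # proposed explanation awaiting a controlled test
    VALIDATED = "validated"        # supported by a controlled, repeatable result

@dataclass
class Claim:
    claim_id: str        # e.g. "O-1"
    statement: str
    status: Status
    evidence: list[str]  # pointers to runs, studies, or replications

def research_agenda(claims: list[Claim]) -> list[Claim]:
    """The gaps in the matrix: every claim not yet validated is open work."""
    return [c for c in claims if c.status is not Status.VALIDATED]
```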
Ask vendors what kind of evidence sits behind their safety claims. The taxonomy gives buyers a way to distinguish observation, hypothesis, and validated results.