OccamO focuses on one practical outcome: detect only what got worse in a pull request and make that signal actionable in the same CI context where teams already work.
Problem
Most static analysis tools generate too much noise in pull requests. Teams need a system that highlights regressions introduced by the change itself, not generic findings about pre-existing code.
Constraints
- CI runtime had to stay fast enough for active repositories with frequent pull requests.
- Findings had to map clearly to changed functions and base-branch baselines.
- Outputs needed to integrate with existing workflows (PR comments, check runs, SARIF).
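The SARIF integration point can be illustrated with a minimal envelope. This is a sketch, not OccamO's actual output code; the finding field names (`rule`, `message`, `path`, `line`) are assumed shapes for illustration, while the SARIF keys follow the 2.1.0 schema:

```python
import json

def to_sarif(findings):
    """Wrap regression findings in a minimal SARIF 2.1.0 envelope suitable
    for upload as code-scanning results. The `findings` dicts use an
    assumed shape: {"rule", "message", "path", "line"}."""
    return {
        "version": "2.1.0",
        "$schema": "https://json.schemastore.org/sarif-2.1.0.json",
        "runs": [{
            "tool": {"driver": {"name": "OccamO"}},
            "results": [
                {
                    "ruleId": f["rule"],
                    "level": "warning",
                    "message": {"text": f["message"]},
                    "locations": [{
                        "physicalLocation": {
                            "artifactLocation": {"uri": f["path"]},
                            "region": {"startLine": f["line"]},
                        }
                    }],
                }
                for f in findings
            ],
        }],
    }
```

Because SARIF is a shared interchange format, the same result objects can also feed PR comments and check-run annotations without a separate data model.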
Architecture
- Baseline diff model comparing base and head snapshots to isolate regressions.
- Stable function IDs (path, qualified name, normalized body hash) for robust cross-branch matching.
- Multi-output adapters (JSON, Markdown, SARIF, annotations, check payloads).
- Policy-as-code gating for warning/fail thresholds and no-regression paths.
Tradeoffs and Failures
- Deep analysis increases precision but can raise CI cost on very large pull requests.
- Polyglot parser support improves coverage but introduces dependency and maintenance overhead.
- Strict policies reduce drift but can block teams if baseline quality is already poor.
Engineering Impact
- Shifted performance and complexity governance directly into PR review.
- Reduced alert fatigue by reporting only worsened signals.
- Enabled enforceable engineering quality controls through CI gate presets.
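The gate presets mentioned above amount to a small policy evaluator over regression deltas. The policy keys and delta shape below (`fail_over`, `warn_over`, `no_regression_paths`; positive delta = worse) are assumed names for illustration, not OccamO's actual configuration schema:

```python
def evaluate_gate(policy: dict, deltas: list) -> str:
    """Apply warn/fail thresholds to per-file regression deltas.
    Assumed shapes (illustrative):
      policy = {"fail_over": 5, "warn_over": 0,
                "no_regression_paths": ["src/core/"]}
      deltas = [(path, metric_delta), ...]  # positive delta = worse
    Returns "pass", "warn", or "fail" for the CI check.
    """
    status = "pass"
    for path, delta in deltas:
        if delta <= 0:
            continue  # unchanged or improved: never blocks
        if any(path.startswith(p) for p in policy.get("no_regression_paths", [])):
            return "fail"  # zero tolerance on protected paths
        if delta > policy.get("fail_over", float("inf")):
            return "fail"
        if delta > policy.get("warn_over", float("inf")):
            status = "warn"
    return status
```

Keeping the evaluator this small is the point of policy-as-code: thresholds live in the repository, are reviewed like any other change, and the CI check result is a pure function of policy plus deltas.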
Outcomes
- Actionable regression deltas with before-vs-after context.
- Native ecosystem integration through SARIF upload and PR feedback.
- Improved review velocity by combining changed-only mode with policy budgets.
What Made This Approach Different
OccamO is built around comparative signal quality: not “what exists,” but “what degraded.” This baseline-first philosophy drives both architecture and developer experience.