Whitepaper
Purple Sentinel
Defensible release decisions from source-code evidence.
Executive Summary
Most source-code security tools are built to find issues. That is useful, but it is not enough.
In real engineering environments, the harder problem is deciding what a finding means for the release in front of you. Should the team ship? Should security review the code first? Should the release wait for a fix? Should the system be blocked until the risk is understood?
Purple Sentinel is built around that decision.
It analyzes local repositories, maps architecture and trust boundaries, detects security risk, and produces a human-verifiable release decision package: findings, evidence, risk score, policy profile, architecture snapshot, remediation plan, and SARIF output for developer workflows.
The Problem With Source-Code Security
Modern software teams are surrounded by code-security signals: static-analysis alerts, dependency advisories, secret findings, container warnings, GitHub annotations, pull-request comments, cloud posture notes, compliance checks, and security review requests.
The problem is not that teams lack alerts. The problem is that alerts often arrive without judgment.
A finding may be technically real but unreachable. A route may look dangerous but sit behind authentication. A secret-like value may be a test fixture. A low-severity issue may become urgent when it appears on an exposed API path. An AI-agent pattern may look harmless until model output can influence a shell command, browser action, file write, or network call.
Security teams do not only need to know what was detected. They need to know what changed, why it matters, what evidence supports the signal, what policy applies, and what action should happen next.
Purple Sentinel starts there.
What Purple Sentinel Builds
Purple Sentinel is an architecture-aware source-code decision engine for modern Python, FastAPI, containerized, and AI-agent applications.
It is not meant to replace every scanner. It is meant to make the result of a security review easier to understand, explain, and act on.
Purple Sentinel builds a release decision package from the repository itself:
- Repository context: files, frameworks, routes, configuration, containers, and visible application surfaces.
- Deterministic findings: evidence-backed static checks with file, line, snippet, severity, confidence, and rule identity.
- Architecture snapshot: entrypoints, API routes, deployment surfaces, framework signals, and risk-bearing components.
- Policy profile: startup, enterprise, regulated, public API, AI-agent app, or other release posture.
- Decision engine: ship, ship with warning, review required, fix before release, or block release.
- Remediation plan: practical actions tied to the evidence.
- Developer output: Markdown, JSON, SARIF, CLI, Docker demo, and GitHub Action paths.
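To make the deterministic-findings entry concrete, each finding can be modeled as a small evidence record. The schema below is an illustration, not Purple Sentinel's actual format:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Finding:
    """One evidence-backed static check result (illustrative schema)."""
    rule_id: str       # stable rule identity, e.g. "PS-LLM-001" (hypothetical)
    file: str          # repository-relative path
    line: int          # 1-indexed line of the evidence
    snippet: str       # matched source text, preserved verbatim
    severity: str      # "low" | "medium" | "high" | "critical"
    confidence: float  # 0.0-1.0; static findings are not all equally real
    category: str      # e.g. "secrets", "llm_output_to_exec"
```

Keeping file, line, and snippet on every record is what lets a reviewer trace any later decision back to the repository itself.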
The Decision Object
The strongest unit in Purple Sentinel is the decision object.
A useful source-code review should be able to answer:
- Decision: FIX_BEFORE_RELEASE
- Risk score: 82 / 100
- Policy: ai_agent_app
- Why: LLM output can reach subprocess execution; privileged route lacks obvious authorization; secret-like value found in configuration.
- Top action: add an allowlist and approval gate before tool execution.
That is the shape of usable judgment.
A finding explains a local issue. A decision object explains release posture. It connects source evidence to the question engineering and security actually need answered: can this system move forward safely?
Purple Sentinel decisions are intended to be inspectable. A reviewer should be able to challenge them. A developer should be able to trace them back to files and lines. A leader should be able to understand why the release is safe, risky, or blocked.
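Internally, that card is just a serialized record: verdict, score, policy, reasons, and links back to evidence. A minimal sketch, with assumed field names rather than Purple Sentinel's real schema:

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    """Release posture bound to its justification (illustrative fields)."""
    verdict: str       # SHIP | SHIP_WITH_WARNING | REVIEW_REQUIRED
                       # | FIX_BEFORE_RELEASE | BLOCK_RELEASE
    risk_score: int    # 0-100
    policy: str        # active profile, e.g. "ai_agent_app"
    reasons: list[str]                 # human-readable "why" lines
    top_action: str                    # first remediation step
    findings: list[dict] = field(default_factory=list)  # file/line evidence

    def card(self) -> str:
        """Render the human-facing decision card shown above."""
        return (
            f"Decision: {self.verdict}\n"
            f"Risk score: {self.risk_score} / 100\n"
            f"Policy: {self.policy}\n"
            f"Why: {'; '.join(self.reasons)}\n"
            f"Top action: {self.top_action}"
        )
```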
Architecture
Purple Sentinel uses a layered local-first pipeline.
| Layer | Purpose |
|---|---|
| Repository loader | Safely walks the local repository, excludes ignored and unsafe paths, avoids binary files, and does not execute scanned code. |
| Analyzer layer | Runs deterministic analyzers for Python AST risk, FastAPI routes, secrets, configuration, Docker, architecture, and LLM or agentic AI patterns. |
| Architecture layer | Builds context around frameworks, entrypoints, routes, deployment surfaces, and trust-bearing components. |
| Policy layer | Applies the selected release profile. A startup prototype, public API, regulated application, and AI-agent system should not be judged by the same threshold. |
| Decision layer | Combines severity, confidence, category, policy, architecture, and evidence into a release posture. |
| Report layer | Produces Markdown for humans, JSON for machines, and SARIF for GitHub-native review workflows. |
Purple Sentinel should remain local-first by default. A repository path goes in. A decision package comes out. The system does not need to execute the target code, install the target dependencies, or send the code to an external model to produce useful judgment.
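The layering implies a simple functional shape: each stage consumes only the previous stage's output. The sketch below shows that composition with stub bodies so the shape is runnable; every name here is a stand-in, not Purple Sentinel's actual API:

```python
from pathlib import Path

def load_repository(root: Path) -> list[Path]:
    """Walk the repo, skip ignored dirs, never execute scanned code."""
    skip = {".git", ".venv", "node_modules", "__pycache__"}
    return [p for p in root.rglob("*.py") if skip.isdisjoint(p.parts)]

def run_analyzers(files: list[Path]) -> list[dict]:
    return []  # deterministic checks would emit Finding records here

def build_architecture(files: list[Path]) -> dict:
    return {"entrypoints": [], "routes": [], "surfaces": []}

def decide(findings: list[dict], architecture: dict, profile: str) -> dict:
    verdict = "SHIP" if not findings else "REVIEW_REQUIRED"
    return {"verdict": verdict, "policy": profile, "findings": findings}

def scan(repo_path: str, profile: str = "startup") -> dict:
    files = load_repository(Path(repo_path))
    findings = run_analyzers(files)
    architecture = build_architecture(files)
    return decide(findings, architecture, profile)

print(scan("."))  # path in, decision package out
```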
AI-Agent and LLM Security
AI applications change the source-code review problem.
Traditional SAST looks for dangerous code patterns. AI-agent review must also ask what untrusted content can make the system do.
A user prompt is just text until it becomes part of a privileged instruction. A retrieved document is just content until a model treats it as authority. A tool output is just data until it gets fed back into the prompt. An LLM response is just language until it influences a shell command, file write, browser action, network request, ticket update, database query, or deployment step.
Purple Sentinel should treat AI-agent systems as trust-boundary systems.
The important flows are not only of the form "source code contains a dangerous API." They also include:
- untrusted input -> prompt
- RAG content -> model context
- tool output -> prompt
- LLM output -> command execution
- LLM output -> file write
- LLM output -> network call
- agent plan -> privileged tool
- model-generated action -> missing human approval gate
That is where Purple Sentinel can become more than a conventional scanner. It can help teams decide whether an AI-enabled repository is ready to ship, needs review, requires remediation, or should be blocked.
Design rule: an AI system should not inherit broad tool authority simply because the code exposes a tool.
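To make one of those flows concrete, a toy version of an "LLM output reaches command execution" check can be written over Python's ast module. The source and sink name sets are assumptions, and a real analyzer would resolve imports and track data flow across functions; this is a single-function sketch only:

```python
import ast

# Illustrative name sets; real rules would be configurable.
LLM_SOURCES = {"create", "generate", "complete", "invoke"}
SHELL_SINKS = {"run", "call", "Popen", "system"}

def call_name(node: ast.Call) -> str:
    func = node.func
    return func.attr if isinstance(func, ast.Attribute) else getattr(func, "id", "")

class LlmToShellVisitor(ast.NodeVisitor):
    """Flags variables assigned from LLM-looking calls that later reach
    a shell sink. Toy, intraprocedural, no sanitizer awareness."""

    def __init__(self) -> None:
        self.tainted: set[str] = set()
        self.findings: list[tuple[int, str]] = []

    def visit_Assign(self, node: ast.Assign) -> None:
        # A name assigned from an LLM-looking call becomes tainted.
        if isinstance(node.value, ast.Call) and call_name(node.value) in LLM_SOURCES:
            self.tainted |= {t.id for t in node.targets if isinstance(t, ast.Name)}
        self.generic_visit(node)

    def visit_Call(self, node: ast.Call) -> None:
        # A tainted name appearing in a shell-sink call is a finding.
        if call_name(node) in SHELL_SINKS:
            for arg in ast.walk(node):
                if isinstance(arg, ast.Name) and arg.id in self.tainted:
                    self.findings.append(
                        (node.lineno, f"LLM output '{arg.id}' reaches shell sink")
                    )
        self.generic_visit(node)

code = "reply = client.generate(prompt)\nsubprocess.run(reply, shell=True)\n"
visitor = LlmToShellVisitor()
visitor.visit(ast.parse(code))
print(visitor.findings)  # [(2, "LLM output 'reply' reaches shell sink")]
```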
Policy Profiles
Security judgment depends on context.
A prototype, internal tool, public API, enterprise system, regulated application, and AI-agent workflow can share the same code pattern but require different release decisions.
Purple Sentinel policy profiles make that context explicit.
- Startup mode can tolerate more review items but should still block direct LLM-output-to-command execution.
- Public API mode should be stricter around authentication, secrets, CORS, and exposed routes.
- Regulated mode should elevate traceability, control, and evidence requirements.
- AI-agent mode should be stricter around prompt boundaries, RAG trust, tool access, approval gates, and model-output-to-action paths.
The point is not to hide risk behind a profile. The point is to make the operating posture visible.
A good decision system should say:
- In this profile, this finding requires review.
- In this profile, this finding blocks release.
- In this profile, this finding can ship with monitoring.
- In this profile, this issue must be fixed before release.
That turns policy from background assumption into visible release logic.
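One way to make that release logic visible is a plain lookup table from profile and finding category to required action, with an explicit default. The categories and thresholds below are assumptions for illustration, not Purple Sentinel's shipped policy:

```python
# (profile, category) -> required action. Entries are illustrative.
POLICY_TABLE: dict[tuple[str, str], str] = {
    ("startup",      "llm_output_to_exec"):    "BLOCK_RELEASE",
    ("startup",      "hardcoded_secret"):      "REVIEW_REQUIRED",
    ("public_api",   "missing_auth"):          "FIX_BEFORE_RELEASE",
    ("public_api",   "permissive_cors"):       "REVIEW_REQUIRED",
    ("regulated",    "hardcoded_secret"):      "BLOCK_RELEASE",
    ("ai_agent_app", "missing_approval_gate"): "FIX_BEFORE_RELEASE",
    ("ai_agent_app", "llm_output_to_exec"):    "BLOCK_RELEASE",
}

def required_action(profile: str, category: str) -> str:
    """What this profile demands for this finding category."""
    return POLICY_TABLE.get((profile, category), "SHIP_WITH_WARNING")

print(required_action("startup", "llm_output_to_exec"))  # BLOCK_RELEASE
print(required_action("startup", "permissive_cors"))     # SHIP_WITH_WARNING
```

Expressing policy as data rather than code is the design point: the table can be read, diffed, and challenged in review like any other artifact.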
Developer Workflow
Purple Sentinel should meet developers where release decisions already happen.
The first useful scan should be simple:
- Run locally.
- See a sample report.
- Use in GitHub Actions.
The local path builds trust. The GitHub path builds adoption. The report path builds credibility.
The ideal workflow is not complicated:
- A developer runs Purple Sentinel against a repository.
- The scanner produces a decision card.
- The report explains why.
- The SARIF output appears in GitHub code scanning.
- The Markdown brief can be attached to a review.
- The remediation plan tells the team what to fix first.
That is the shortest path from source-code evidence to release action.
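SARIF is a specific JSON shape, so the bridge from a finding to GitHub code scanning is a small serializer. A minimal sketch against SARIF 2.1.0, reduced to the essential fields; the input record shape is assumed for illustration:

```python
import json

def to_sarif(findings: list[dict], tool_name: str = "Purple Sentinel") -> str:
    """Serialize findings into a minimal SARIF 2.1.0 log.
    Input keys (rule_id/level/file/line/message) are illustrative."""
    results = [
        {
            "ruleId": f["rule_id"],
            "level": f["level"],  # "note" | "warning" | "error"
            "message": {"text": f["message"]},
            "locations": [{
                "physicalLocation": {
                    "artifactLocation": {"uri": f["file"]},
                    "region": {"startLine": f["line"]},
                }
            }],
        }
        for f in findings
    ]
    log = {
        "$schema": "https://json.schemastore.org/sarif-2.1.0.json",
        "version": "2.1.0",
        "runs": [{"tool": {"driver": {"name": tool_name}}, "results": results}],
    }
    return json.dumps(log, indent=2)

print(to_sarif([{
    "rule_id": "PS-LLM-001", "level": "error", "file": "app/agent.py",
    "line": 42, "message": "LLM output reaches subprocess execution",
}]))
```

Once uploaded, GitHub's code-scanning view renders each result as an annotation at the cited file and line, which is exactly where the review conversation should happen.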
What Makes It Different
| Operating idea | How Purple Sentinel applies it |
|---|---|
| Evidence-backed | Every decision should trace back to findings, files, lines, snippets, rules, confidence, and policy. |
| Architecture-aware | Risk is explained in the context of routes, entrypoints, containers, configuration, trust boundaries, and AI-agent surfaces. |
| Policy-backed | The same finding can mean different things under startup, public API, enterprise, regulated, or AI-agent release profiles. |
| Ship-actionable | The output does not stop at "finding detected." It answers whether to ship, review, fix, or block. |
| Local-first | A useful decision package can be produced from a local repository without cloud onboarding. |
| GitHub-ready | Markdown, JSON, and SARIF make the result usable in human review, automation, and native code-scanning workflows. |
| AI-agent aware | Purple Sentinel treats prompts, RAG, tools, model outputs, and approval gates as part of the source-code risk surface. |
Why This Matters
Security teams are not short on tools. Engineering teams are not short on dashboards. Leaders are not short on reports.
They are short on clear, defensible judgment.
Purple Sentinel is built for the moment when someone asks: Can we ship this?
The answer should not be a pile of alerts. It should be a decision package. It should show what was scanned, what was found, what matters, what policy applies, what evidence supports the call, what action should happen next, and what would change the decision.
That is the useful layer between raw source-code findings and release governance.
Design Principles
Purple Sentinel should respect the engineer, not bury them.
It should make uncertainty visible instead of pretending every static finding is equally real. It should reduce noise rather than create another queue. It should connect detection to action. It should preserve evidence. It should be conservative around AI-agent authority. It should help humans move faster without hiding judgment or bypassing accountability.
The goal is not autonomous magic.
The goal is a source-code decision system that can be inspected, challenged, improved, and trusted.
Closing View
Purple Sentinel is not merely a scanner.
It is a decision system for source-code security.
Its deeper purpose is to help teams move from repository evidence to release judgment: what changed, why it matters, what risk it creates, what policy applies, what action should happen next, and whether the system should ship.
That is the work.