Research Programs

Practical AI research organized around systems that must explain themselves.

Evening Star AI focuses on practical AI systems that can be tested, explained, deployed, and governed. The research agenda connects technical investigation, applied lab prototypes, and executive-readable judgment.

Current Research Programs

Seven programs define the institute's active research map.

Research is published as short, decision-focused publications for builders, security teams, leaders, and operators.

Anomaly Intelligence

Unsupervised AI for detecting weak signals before they become obvious incidents.

Why it matters

High-consequence environments rarely have clean labels, so early warning depends on drift, outliers, detector agreement, and interpretable anomaly evidence.

Current lab connection

The Evening Star AI Engine uses Isolation Forest, robust scoring, drift detection, and weak-signal summaries as a reusable intelligence core.

Future direction

Richer feature attribution, better detector ensembles, and anomaly memory that can compare current behavior against prior regimes.

Adversarial AI Security

Security research for LLM systems exposed to malicious instructions, jailbreak pressure, and untrusted context.

Why it matters

LLM applications increasingly connect to tools, files, memory, and workflows, making prompt injection and adversarial inputs operational security problems.

Current lab connection

Purple Firefish explores prompt injection detection, jailbreak detection, adversarial-input scoring, LLM threat modeling, and AI security testing.

Future direction

Policy-aware gateways that inspect prompts, retrieved content, tool calls, outputs, and agent memory before action is allowed.

Vulnerability Intelligence

Evidence-first prioritization for vulnerability risk, exposure, exploitability, and remediation decisions.

Why it matters

Security teams do not need more flat vulnerability lists; they need judgment about what is exposed, exploitable, important, and urgent.

Current lab connection

Purple Radar connects vulnerability prioritization, exposure analysis, risk scoring, exploitability signals, and executive reporting.

Future direction

Risk models that combine asset context, adversary signal, scanner confidence, known exploitation, and remediation feasibility.

Decision Systems

Models and interfaces that convert evidence into confidence, severity, explanation, and next-best action.

Why it matters

Useful AI must expose uncertainty and help humans decide; a score without reasoning, severity, and action design is not enough.

Current lab connection

Evening Star lab systems use confidence scoring, severity modeling, reason codes, and explainable recommendations across multiple domains.

Future direction

Next-best-action engines that produce trigger levels, invalidation logic, human approval points, and decision logs.

Agentic Automation

Governed AI agents that monitor signals, summarize risk, automate workflows, and support human operators.

Why it matters

Agentic systems become valuable when they can do useful work without hiding evidence, assumptions, handoffs, or accountability.

Current lab connection

Agentic communications work explores how AI can monitor operational exhaust, enrich signals, summarize risk, and prepare governed actions.

Future direction

Human-in-the-loop agents that can watch, brief, recommend, queue action, and document outcomes without overstepping authority.

AI Governance

Practical governance patterns for AI systems that must be explainable, accountable, auditable, and safe to operate.

Why it matters

Serious AI systems need more than capability; they need risk controls, documented assumptions, human approval points, policy boundaries, and evidence of responsible operation.

Current lab connection

Evening Star lab systems expose confidence, severity, reason codes, decision logs, and governed action paths so operators can understand and challenge model output.

Future direction

Reusable governance frameworks for agentic workflows, AI security gateways, vulnerability intelligence, decision support, evaluations, and audit-ready reporting.

Applied AI Labs

Domain-specific systems that test Evening Star research against real operational workflows.

Why it matters

Research becomes credible when it is forced through product constraints, messy data, real users, and domain-specific decisions.

Current lab connection

Purple Radar, Purple Firefish, Candles Edge, and future applied systems provide proving grounds for the institute's research patterns.

Future direction

More domain-specific AI systems that reuse the same anomaly, security, decision, and governance architecture.

Research Pattern

From raw context to governed action.

Observe
Detect
Explain
Decide
Govern