## Overview
This lab page supports the Semantic Kernel RCE response article. It does not reproduce CVE-2026-26030 or CVE-2026-25592. Instead, it uses the existing Agent 365 Defender lab to validate a safe detection workflow:
- A retrieved document contains malicious instructions.
- The agent should treat those instructions as data.
- If the agent calls a host-side tool anyway, a local canary file and JSONL tool-call record prove the action.
- Defender XDR and Sentinel hunting sketches show how to correlate AI-layer prompt alerts with host-layer execution evidence.
## Local Artifacts
| File | Purpose |
|---|---|
| `host-diagnostic-TAMPERED.md` | Safe indirect prompt injection source |
| `tools.py` | Optional `run_host_diagnostic` safe canary tool |
| `run_attack.py` | `xpia-host-exec` scenario and JSONL tool-call evidence |
| `host-execution-hunting.kql` | Defender XDR and Sentinel hunting sketches |
## Safety Boundary
The canary tool is disabled unless `LAB_ENABLE_HOST_EXEC_TOOL=1` is set before
`agent/create_agent.py` regenerates `agent.json`. The same environment variable
must also be present when the attack harness runs, because the handler checks it
again at runtime before writing canary evidence. The tool does not accept
arbitrary commands.
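A minimal sketch of that runtime gate, assuming hypothetical evidence paths and record fields (the real handler lives in `tools.py`): the tool takes no arguments, and even if it was registered in `agent.json` at creation time, it refuses to run unless the opt-in variable is also set in the harness process.

```python
import json
import os
import time
from pathlib import Path

CANARY_PATH = Path("evidence/canary.txt")          # hypothetical path
EVIDENCE_LOG = Path("evidence/tool_calls.jsonl")   # hypothetical path

def run_host_diagnostic() -> str:
    """Safe canary tool: writes fixed evidence and accepts no arguments."""
    # Runtime check: registration at agent-creation time is not enough;
    # the same env var must be present when the harness actually runs.
    if os.environ.get("LAB_ENABLE_HOST_EXEC_TOOL") != "1":
        return "host-exec tool disabled (set LAB_ENABLE_HOST_EXEC_TOOL=1)"

    CANARY_PATH.parent.mkdir(parents=True, exist_ok=True)
    CANARY_PATH.write_text("canary: run_host_diagnostic executed\n")

    record = {"tool": "run_host_diagnostic", "ts": time.time()}
    with EVIDENCE_LOG.open("a") as f:
        f.write(json.dumps(record) + "\n")
    return "diagnostic complete (canary written)"
```

Because the tool writes only fixed content to fixed paths, a prompt-injected call proves host-side execution without giving the injected text any command surface.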
Companion article: *Semantic Kernel CVEs: Prompt Injection Has a Runtime Problem*
