Overview

This lab page supports the Semantic Kernel RCE response article. It does not reproduce CVE-2026-26030 or CVE-2026-25592. Instead, it uses the existing Agent 365 Defender lab to validate a safe detection workflow:

  1. A retrieved document contains malicious instructions.
  2. The agent should treat those instructions as data.
  3. If the agent calls a host-side tool anyway, a local canary file and JSONL tool-call record prove the action.
  4. Defender XDR and Sentinel hunting sketches show how to correlate AI-layer prompt alerts with host-layer execution evidence.
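Step 3 can be sketched as a minimal canary handler. This is not the lab's actual tools.py; the file names (lab_canary.txt, tool_calls.jsonl) and the record schema are illustrative assumptions. The point is that a benign local write plus an append-only JSONL record is enough to prove host-side execution occurred:

```python
import json
import time
from pathlib import Path

CANARY_PATH = Path("lab_canary.txt")      # hypothetical canary file name
EVIDENCE_LOG = Path("tool_calls.jsonl")   # hypothetical JSONL evidence log

def record_tool_call(tool_name: str, args: dict) -> None:
    """Append one JSONL record per tool invocation as host-layer evidence."""
    record = {"ts": time.time(), "tool": tool_name, "args": args}
    with EVIDENCE_LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

def run_host_diagnostic() -> str:
    """Safe canary: proves the agent executed a host-side tool,
    without running any real command."""
    record_tool_call("run_host_diagnostic", {})
    CANARY_PATH.write_text("canary: agent invoked a host-side tool\n")
    return "diagnostic complete"
```

If the agent treats the injected instructions as data, neither file appears; if it calls the tool anyway, both artifacts exist and are timestamped for correlation.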

Local Artifacts

File                          Purpose
host-diagnostic-TAMPERED.md   Safe indirect prompt injection source
tools.py                      Optional run_host_diagnostic safe canary tool
run_attack.py                 xpia-host-exec scenario and JSONL tool-call evidence
host-execution-hunting.kql    Defender XDR and Sentinel hunting sketches
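The correlation idea behind host-execution-hunting.kql can be prototyped locally against the JSONL evidence log before writing any KQL. This is a hedged sketch, not the lab's code: the `correlate` function, the alert shape, and the five-minute window are all assumptions. It simply asks whether a host-layer tool call landed shortly after an AI-layer prompt alert:

```python
import json
from pathlib import Path

def correlate(alerts: list, evidence_path, window_s: float = 300.0) -> list:
    """Pair AI-layer alerts with host-layer tool-call records that
    occurred within window_s seconds after the alert timestamp."""
    records = [
        json.loads(line)
        for line in Path(evidence_path).read_text(encoding="utf-8").splitlines()
        if line.strip()
    ]
    hits = []
    for alert in alerts:
        for rec in records:
            # Tool call must follow the alert, within the window.
            if 0 <= rec["ts"] - alert["ts"] <= window_s:
                hits.append((alert, rec))
    return hits
```

In Defender XDR or Sentinel the same join would run between the prompt-alert table and ingested host telemetry keyed on timestamp proximity.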

Safety Boundary

The canary tool is disabled unless LAB_ENABLE_HOST_EXEC_TOOL=1 is set before agent/create_agent.py regenerates agent.json. The same environment variable must also be present when the attack harness runs, because the handler checks it again at runtime before writing canary evidence. The tool does not accept arbitrary commands.
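The double-check described above can be sketched as a runtime guard. The environment variable name is from this page; the function names and messages are illustrative assumptions, not the lab's actual handler:

```python
import os

def host_exec_tool_enabled() -> bool:
    """Explicit opt-in: the canary tool only runs when the
    LAB_ENABLE_HOST_EXEC_TOOL flag is set to exactly "1"."""
    return os.environ.get("LAB_ENABLE_HOST_EXEC_TOOL") == "1"

def run_host_diagnostic() -> str:
    # Re-checked at invocation time, not only when agent.json
    # was generated, so a stale agent config cannot enable it.
    if not host_exec_tool_enabled():
        return "host-exec tool disabled (set LAB_ENABLE_HOST_EXEC_TOOL=1)"
    return "diagnostic complete"
```

Gating on an exact value ("1") rather than mere presence of the variable keeps accidental enablement (e.g. an empty export) from activating the tool.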

Companion article: Semantic Kernel CVEs: Prompt Injection Has a Runtime Problem
