Overview
This lab validates AI agent workload attack patterns against an Azure AI Services / Foundry-backed agent loop and sends detection signals to Microsoft Sentinel.
It focuses on the gap between traditional container security and agentic workload security. A container scanner can tell you whether an image has CVEs. It cannot tell you whether a prompt injection caused an agent to misuse a tool, leak credentials, or follow malicious instructions from retrieved content.
What Gets Deployed
| Resource | Purpose |
|---|---|
| Azure AI Services | Model deployment for the agent loop |
| Foundry hub/project | Project context for agentic AI workloads |
| Key Vault, Storage, ACR | Foundry dependencies and future hosted-agent container path |
| Application Insights | Runtime telemetry linked to the Sentinel workspace |
| AI Services diagnostics | Audit, request/response, usage, trace, and metric logs |
| Sentinel rules | Five analytics rules for agent attack signals |
Test Scenarios
| Scenario | Purpose |
|---|---|
jailbreak | Direct prompt override attempt |
instruction-leak | System prompt extraction |
xpia | Indirect prompt injection through retrieved content |
credential-exfil | Honeytoken API key / SSH key extraction attempt |
ascii-smuggling | Invisible Unicode instruction smuggling |
tool-abuse | Prohibited email exfiltration via tool use |
wallet-attack | High-volume request storm |
Quick Start
export SENTINEL_WS_ID="/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace>"
./scripts/deploy-lab.sh
export AI_SERVICES_ENDPOINT="https://<ai-services>.cognitiveservices.azure.com"
export MODEL_DEPLOYMENT="gpt-4-1-mini"
.venv/bin/python attacks/run_attack.py jailbreak
Companion Blog
Published: /blog/agent-365-defender-playbook/
