🌐
Network Exposure
Is Ollama or LM Studio listening on 0.0.0.0 instead of localhost?
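One way to script this check, sketched here under the assumption of the documented default ports (Ollama on 11434, LM Studio on 1234) and `ss -tln`-style columns; the function name and wildcard set are illustrative:

```python
# Assumed default ports for the tools this page audits.
AI_PORTS = {11434: "Ollama", 1234: "LM Studio"}
# Local addresses that mean "every interface", not just loopback.
WILDCARDS = {"0.0.0.0", "*", "::", "[::]"}

def wildcard_listeners(ss_output: str) -> list:
    """Return tool names bound to all interfaces, parsed from `ss -tln` output."""
    exposed = []
    for line in ss_output.splitlines():
        parts = line.split()
        # Column 4 of `ss -tln` is the local address:port.
        if len(parts) < 4 or ":" not in parts[3]:
            continue
        host, _, port = parts[3].rpartition(":")
        if port.isdigit() and int(port) in AI_PORTS and host in WILDCARDS:
            exposed.append(AI_PORTS[int(port)])
    return exposed
```

Feed it `subprocess.run(["ss", "-tln"], capture_output=True, text=True).stdout` on a live host.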
🔑
API Authentication
Are your AI endpoints running wide open without any auth?
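A minimal probe for this, assuming only that an open endpoint answers an anonymous GET while a protected one returns 401/403; the function name is illustrative and network errors are left to the caller:

```python
import urllib.request
import urllib.error

def auth_required(url: str, timeout: float = 3.0) -> bool:
    """True if an unauthenticated GET is rejected with 401/403.

    False means the endpoint answered an anonymous request,
    i.e. it is effectively running without auth.
    """
    req = urllib.request.Request(url, method="GET")
    try:
        urllib.request.urlopen(req, timeout=timeout)
        return False                      # 2xx with no credentials: wide open
    except urllib.error.HTTPError as err:
        return err.code in (401, 403)     # rejected: some auth layer exists
```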
📁
File Permissions
Model files and .env files world-readable or writable?
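The permission test itself is a couple of `stat` bits; a sketch assuming a POSIX filesystem (the function name is illustrative):

```python
import os
import stat

def world_access(path: str):
    """Classify access for 'other' users: 'writable', 'readable', or None."""
    mode = os.stat(path).st_mode
    if mode & stat.S_IWOTH:       # o+w: anyone on the box can modify it
        return "writable"
    if mode & stat.S_IROTH:       # o+r: anyone on the box can read it
        return "readable"
    return None
```

Run it over your model directory and any `.env` files; `.env` should come back `None` (e.g. mode 0600).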
🐳
Docker Risks
AI containers running as root? Privileged mode? Host network?
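All three of these show up in `docker inspect` output (`HostConfig.Privileged`, `HostConfig.NetworkMode`, `Config.User`); a sketch that audits one parsed entry, with illustrative naming:

```python
def container_risks(inspect_entry: dict) -> list:
    """Flag risky settings in one parsed `docker inspect` entry."""
    host_cfg = inspect_entry.get("HostConfig", {})
    config = inspect_entry.get("Config", {})
    risks = []
    if host_cfg.get("Privileged"):
        risks.append("privileged mode")
    if host_cfg.get("NetworkMode") == "host":
        risks.append("host network")
    # An empty/unset User means the container runs as root by default.
    if config.get("User") in (None, "", "0", "root"):
        risks.append("runs as root")
    return risks
```

Feed it `json.loads(...)` of `docker inspect <container>` (each entry in the returned array).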
🎮
GPU Exposure
NVIDIA/AMD driver endpoints and device permissions.
📡
Telemetry
Are your tools phoning home to known telemetry endpoints?
🛡️
Firewall Status
UFW, firewalld, iptables — is anything actually running?
🔒
SSL/TLS
AI services running over plain HTTP on non-localhost?
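The rule reduces to "scheme is `http` and the host is not loopback"; a sketch with an illustrative loopback set:

```python
from urllib.parse import urlparse

LOOPBACK = {"localhost", "127.0.0.1", "::1"}

def plaintext_exposed(url: str) -> bool:
    """True if the URL is plain HTTP to a non-loopback host."""
    p = urlparse(url)
    return p.scheme == "http" and (p.hostname or "") not in LOOPBACK
```

Plain HTTP on loopback is tolerable; the same endpoint reached over the LAN sends prompts and keys in cleartext.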
⚙️
Process Audit
What AI processes are running and as which user?
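A sketch of the audit over `ps -eo user,pid,comm` output; the substring patterns are assumptions about typical process names, not an exhaustive list:

```python
# Assumed name fragments for local AI runtimes.
AI_PATTERNS = ("ollama", "lm-studio", "llama")

def ai_processes(ps_output: str) -> list:
    """Return (user, command) pairs matching a pattern.

    Expects `ps -eo user,pid,comm` style lines, header included.
    """
    hits = []
    for line in ps_output.splitlines()[1:]:      # skip the header row
        parts = line.split(None, 2)
        if len(parts) < 3:
            continue
        user, _pid, comm = parts
        if any(pat in comm.lower() for pat in AI_PATTERNS):
            hits.append((user, comm))
    return hits
```

Anything in the result owned by `root` deserves a second look.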
🔓
Sensitive Files
.env files with API keys exposed? Model dirs readable by others?
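Spotting the keys themselves can be a name heuristic; a sketch whose regex (variables containing KEY, TOKEN, or SECRET) is an assumption, not a complete credential grammar:

```python
import re

# Heuristic: KEY=value lines whose variable name suggests a credential.
SECRET_RE = re.compile(
    r"^\s*([A-Z0-9_]*(?:KEY|TOKEN|SECRET)[A-Z0-9_]*)\s*=", re.MULTILINE
)

def secret_names(env_text: str) -> list:
    """Return variable names in .env text that look like credentials."""
    return SECRET_RE.findall(env_text)
```

Pair this with the world-readable check above it: a `.env` that names secrets and is readable by others is the finding.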
📜
History & Logs
API keys in shell history? AI logs world-readable?
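A sketch of the history scan, using two assumed heuristics (inline bearer headers and `KEY=`/`TOKEN=` assignments typed at the shell):

```python
import re

# Heuristic patterns for credentials pasted into shell commands.
LEAK_RE = re.compile(
    r"(Authorization:\s*Bearer\s+\S+|[A-Z0-9_]*(?:KEY|TOKEN)=\S+)"
)

def leaky_history_lines(history: str) -> list:
    """Return shell-history lines that appear to embed a credential."""
    return [ln for ln in history.splitlines() if LEAK_RE.search(ln)]
```

Run it over `~/.bash_history` or `~/.zsh_history`; hits mean the key is on disk in plaintext even if the `.env` is locked down.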
🦙
Ollama Config
OLLAMA_HOST, OLLAMA_ORIGINS, systemd service user checks.
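The OLLAMA_HOST part of this check can be sketched as a classifier over the variable's value (unset means Ollama's documented 127.0.0.1:11434 default; the parsing of schemes and bracketed IPv6 is a best-effort assumption):

```python
LOOPBACK = {"127.0.0.1", "localhost", "::1", "[::1]"}

def ollama_exposed(ollama_host: str) -> bool:
    """True if an OLLAMA_HOST value binds beyond loopback."""
    if not ollama_host:
        return False                      # unset: loopback default
    value = ollama_host.split("://")[-1]  # tolerate an http:// prefix
    if value.startswith("["):             # bracketed IPv6, e.g. [::]:11434
        host = value[: value.index("]") + 1]
    else:
        host = value.rsplit(":", 1)[0] if ":" in value else value
    return host not in LOOPBACK
```

Check the value both in the environment and in the systemd unit (`Environment=` lines in `systemctl cat ollama`), and note which `User=` the service runs as while you are there.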