Research indicates that Anthropic Skill scanners fail to inspect bundled test files, which can execute with full permissions under a project's test runner and potentially exfiltrate sensitive information. This is a significant security gap: existing scanners focus only on the agent execution surface, leaving developers exposed to malicious code hidden in test files that are never monitored.
For practitioners deploying AI systems, the practical consequence is that malicious code hidden in a Skill's bundled test files runs undetected whenever a standard test runner such as Jest or Vitest discovers them. To mitigate this, configure test runners to exclude Skill directories such as .agents/ from test discovery paths, and audit Skill installations for any files beyond the expected instruction files before allowing them into a deployment environment.
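The exclusion step above can be sketched as a Vitest configuration. This is a minimal illustration, assuming Skills are installed under a `.agents/` directory at the project root (adjust the glob to wherever Skills actually live in your repo); `configDefaults` is spread first so Vitest's standard excludes (e.g. `node_modules`) are preserved.

```javascript
// vitest.config.js
// Extend the default exclude list so bundled Skill directories are
// never picked up by test discovery. The ".agents/**" glob is an
// illustrative assumption about where Skills are installed.
import { defineConfig, configDefaults } from 'vitest/config';

export default defineConfig({
  test: {
    // Keep Vitest's defaults (node_modules, dist, ...) and add ours.
    exclude: [...configDefaults.exclude, '**/.agents/**'],
  },
});
```

The Jest equivalent is `testPathIgnorePatterns: ['/node_modules/', '/\\.agents/']` in `jest.config.js`; in both runners the point is the same, to keep files shipped inside a Skill out of the set of tests that execute with the project's full permissions.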