#Skill

Agent AI April 24, 2026

We Scanned 50,000 Skills: The Threat Persists

The explosive popularity of OpenClaw in early 2026 transformed AI from a system that answers questions into an agent that executes operations on your behalf. "Skills" are the primary mechanism through which Agents acquire these capabilities, making them the latest entry point for attackers to poison the well. We used A.I.G (https://github.com/tencent/AI-Infra-Guard) to conduct a comprehensive scan of over 50,000 Skills on ClawHub. We uncovered not only known malicious samples but also the next generation of highly stealthy attack vectors.

Read Full Article arrow_forward
Agent AI January 23, 2026

When AI Learns to Backstab: In-Depth Analysis of the Security Pitfalls of Agent Skills

This article exposes the supply chain security risks hidden within Agent Skills used by AI coding assistants. Research highlights how attackers can weaponize seemingly benign plugins—such as GIF makers or calculators—to steal sensitive keys, deploy ransomware, or establish remote control via "hidden prompts," malicious scripts, or authorization flaws. Since traditional security tools struggle to detect these NLP-based threats, Tencent's Zhuque Lab has introduced A.I.G, an open-source platform that uses "AI to scan AI," providing automated auditing and risk mitigation to build a safer Agent ecosystem.

Read Full Article arrow_forward