All Articles

7 stories in total.
Agent AI April 24, 2026

We Scanned 50,000 Skills: The Threat Persists

The explosive popularity of OpenClaw in early 2026 transformed AI from a system that answers questions into an agent that executes operations on your behalf. "Skills" are the primary mechanism through which agents acquire these capabilities, making them the newest entry point for attackers to poison the well. We used A.I.G (https://github.com/tencent/AI-Infra-Guard) to conduct a comprehensive scan of over 50,000 Skills on ClawHub, uncovering not only known malicious samples but also the next generation of highly stealthy attack vectors.

Read Full Article
AI Security February 18, 2026

Local DeepSeek deployments carry risks! Check whether you've fallen victim to any of them

Tencent Zhuque Lab recently uncovered widespread security vulnerabilities in popular AI tools, including DeepSeek. If left unmitigated, these flaws could allow attackers to exfiltrate sensitive user data, hijack computational resources, or even gain full control over user devices. To address these threats, we demonstrate how to use the open-source toolkit AI-Infra-Guard to perform one-click detection and effectively remediate these security risks.

Read Full Article
Agent AI January 23, 2026

When AI Learns to Backstab: In-Depth Analysis of the Security Pitfalls of Agent Skills

This article exposes the supply chain security risks hidden within the Agent Skills used by AI coding assistants. The research shows how attackers can weaponize seemingly benign plugins—such as GIF makers or calculators—to steal sensitive keys, deploy ransomware, or establish remote control via hidden prompts, malicious scripts, or authorization flaws. Because traditional security tools struggle to detect these natural-language-based threats, Tencent's Zhuque Lab has introduced A.I.G, an open-source platform that uses "AI to scan AI," providing automated auditing and risk mitigation to build a safer Agent ecosystem.

Read Full Article
AI Security September 25, 2025

Top 10 MCP Vulnerabilities of 2025: Risks, Cases, and Detection

Leveraging its open-source A.I.G (AI-Infra-Guard) scanner, Zhuque Lab conducted automated security audits on thousands of MCP projects across major MCP marketplaces and Tencent's internal businesses. This large-scale scan uncovered over 4,000 instances of novel AI security risks and code implementation flaws. Drawing on this vulnerability data, this article breaks down the Top 10 Most Common MCP Security Vulnerabilities of 2025 alongside real-world case studies, empowering developers and enterprise security teams to rapidly conduct MCP risk self-assessments.

Read Full Article
AI Security September 04, 2025

Time for an AI Health Check? Audit the Top 3 Risks in One Click with A.I.G, the Open-Source AI Red Teaming Platform

In response to the escalating threat of "jailbreak" attacks against Large Language Models (LLMs), Tencent Zhuque Lab has open-sourced A.I.G (AI-Infra-Guard), an AI red teaming platform. Featuring a three-pronged core approach—Jailbreak Evaluation, AI Infra Scan, and MCP Server Scan—the platform enables automated, comprehensive, and proactive security testing for AI systems.

Read Full Article