Yi Liu and their team analyzed the skills ecosystem in their paper, "Agent Skills in the Wild: An Empirical Study of Security Vulnerabilities at Scale."

First, what are agent skills? They're an open format that gives agents new capabilities and expertise through instructions, resources, and code.

Anthropic rightly warns that Skills provide Claude with new capabilities through instructions and code. That's what makes them powerful, but it also means a malicious skill can direct an AI agent to invoke tools or execute code in ways that don't match the skill's stated purpose.

The researchers' findings are pretty alarming.

My take:

  1. Agent skills in 2026 are like Chrome extensions in 2012, so things are bound to go south.
  2. Skills are quick to build and are getting adopted quickly; Google just announced Agent Skills in Antigravity.
  3. If your agent can install a skill that uses curl piped to bash, has unpinned dependencies, or reads ~/.ssh and env vars, there might already be a problem.
  4. "Audit [an agent skill] thoroughly before installing" is great, but theoretical, advice; we need a practical solution.
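As a starting point toward something practical, here's a minimal sketch of an automated pre-install check for the red flags in point 3. The patterns and the `scan_skill` helper are my own illustrative assumptions, not something from the paper or from Anthropic, and regex matching will produce false positives and misses; treat it as a triage step before a human review, not a verdict.

```python
import re
from pathlib import Path

# Illustrative red-flag patterns based on point 3 above; tune for your environment.
RED_FLAGS = {
    "curl piped to shell": re.compile(r"curl[^\n|]*\|\s*(ba)?sh"),
    "touches ~/.ssh": re.compile(r"~/\.ssh|\.ssh/"),
    "reads env vars": re.compile(r"os\.environ|process\.env|\$\{?[A-Z_]{3,}"),
    "unpinned pip install": re.compile(r"pip install (?!.*==)[A-Za-z0-9_.-]+\s*$", re.M),
}

def scan_skill(skill_dir: str) -> list[tuple[str, str]]:
    """Return (file, flag) pairs for suspicious patterns found in a skill directory."""
    hits = []
    for path in Path(skill_dir).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file; skip rather than fail the whole scan
        for label, pattern in RED_FLAGS.items():
            if pattern.search(text):
                hits.append((str(path), label))
    return hits
```

A non-empty result doesn't prove a skill is malicious (plenty of legitimate scripts read env vars), and an empty one doesn't prove it's safe; the point is to force a closer look at exactly the behaviors point 3 calls out before the skill ever reaches an agent.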

If you have practical suggestions for vetting skills in your environment, please comment!

I know Justin Wetch has been working on something cool!

Agent Skills in Antigravity

Agent Skills in the Wild: An Empirical Study of Security Vulnerabilities at Scale