Seventy years ago, Darrell Huff published "How to Lie with Statistics." Ten days ago, Jesus-German Ortiz-Barajas and his team released "ChartAttack: Testing the Vulnerability of LLMs to Malicious Prompting in Chart Generation."
The researchers built a framework that uses LLMs to automatically generate deceptive charts at scale by applying "misleaders": inverted axes, inappropriate log scales, 3D distortions, and misrepresentation.
- The attacks reduced human accuracy by ~20% and demonstrated cross-domain generalization across multiple datasets and chart types.
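To see why misleaders work so well, it helps to quantify the distortion. A minimal sketch (my illustration, not code from the paper): Tufte's "lie factor" measures how much a chart's visual encoding exaggerates the underlying data, here applied to an inappropriately scaled value axis, one of the same family of tricks ChartAttack automates.

```python
# Illustrative sketch (not from the paper): Tufte's "lie factor" for a
# value axis that starts at `axis_min` instead of zero. A factor of 1.0
# means the chart is honest; larger means the visual exaggerates the data.

def lie_factor(values, axis_min):
    """Ratio of the visually implied relative change (bar heights measured
    from `axis_min`) to the actual relative change in the data."""
    lo, hi = min(values), max(values)
    actual_change = (hi - lo) / lo                         # true relative change
    visual_change = (hi - axis_min) / (lo - axis_min) - 1  # apparent relative change
    return visual_change / actual_change

# Two nearly equal values: a 2% real difference...
print(round(lie_factor([100, 102], axis_min=0), 2))   # honest axis: 1.0
print(round(lie_factor([100, 102], axis_min=99), 2))  # truncated axis: 100.0
```

A 2% difference rendered on an axis starting at 99 looks like a 200% jump, a 100x exaggeration. An LLM only needs to pick `axis_min` adversarially.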
My take:
- LLMs are natural deceivers and will use these skills autonomously, not just when instructed.
- Humans can't keep up. We're the weakest link in an autonomous AI world.
- The threat is real: misinformation campaigns, fraudulent reports, and manipulated decisions, all automated.
- The only option is to make machines protect us from machines. Thankfully, the authors released AttackViz to help build those defenses.