Seventy years ago, Darrell Huff published "How to Lie with Statistics." Ten days ago, Jesus-German Ortiz-Barajas and his team released "ChartAttack: Testing the Vulnerability of LLMs to Malicious Prompting in Chart Generation."

The researchers created a framework that uses LLMs to automatically generate deceptive charts at scale by applying "misleaders": inverted axes, inappropriate log scales, 3D distortions, and outright misrepresentation of values.
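To see why even a mild misleader works, consider axis truncation (a close cousin of the misrepresentation misleader above). This is a minimal sketch of my own, not code from the ChartAttack paper: it computes the ratio between the change a viewer perceives on a truncated axis and the true relative change, in the spirit of Tufte's "lie factor."

```python
def axis_truncation_lie_factor(v1: float, v2: float, baseline: float) -> float:
    """Ratio of perceived change to actual change when a bar chart's
    axis starts at `baseline` instead of 0.

    Bars are drawn with heights (v - baseline), so the viewer reads the
    relative change between bar heights, not between the true values.
    """
    perceived = (v2 - v1) / (v1 - baseline)  # change relative to bar heights
    actual = (v2 - v1) / v1                  # change relative to true values
    return perceived / actual

# A 2% rise (100 -> 102) drawn on an axis starting at 98 doubles the
# bar height, so it reads as a 100% jump: lie factor ~50.
print(axis_truncation_lie_factor(100, 102, 98))
```

No LLM needed for this one; the point is that each misleader is a simple, mechanical transformation, which is exactly what makes it so easy to automate at scale.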

My take:

  1. LLMs are natural deceivers and will use these skills autonomously, not just when instructed.
  2. Humans can't keep up. We're the weakest link in an autonomous AI world.
  3. The threat is real: misinformation campaigns, fraudulent reports, and manipulated decisions, all automated.
  4. The only option is to make machines protect us from machines. Thankfully, the authors released AttackViz to help build those defenses.

[Images: cover of "How to Lie with Statistics" by Darrell Huff (1954); ChartAttack generating misleading charts with inverted axes and log scales; a 3D-distortion misleader applied to a bar chart; human accuracy dropping ~20% on LLM-generated deceptive charts; cross-domain generalization of misleaders across datasets and chart types; original vs. deceptive chart with misrepresented axis scales; the AttackViz tool for detecting and defending against automated chart manipulation]