The next arms race in American politics isn't television ads. It's AI-powered behavioral prediction at scale — and it's already here.
In the summer of 2024, the Harris campaign deployed AI tools to analyze millions of individual voter profiles, modeling not just voting likelihood but the specific emotional triggers most likely to move each person from indifferent to engaged. They tested thousands of ad variations in real time. They used natural language processing to analyze social media content at scale, identifying persuadable voters by patterns in their digital behavior that no human analyst could detect.
None of this was secret. Most of it was reported, in varying degrees of technical detail, by political journalists covering the campaign. And yet almost none of the public discourse about the 2024 election grappled seriously with what it meant.
We are in the early innings of a transformation in how political campaigns communicate with voters. The implications run considerably deeper than most political commentary has acknowledged, and understanding them requires being specific about what the technology actually does.
FROM BROADCAST TO BEHAVIORAL PREDICTION
The history of political advertising is a history of ever-finer targeting: each new medium has narrowed the distance between campaigns and individual voters.
Broadcast TV reached everyone who happened to be watching. Cable allowed basic demographic targeting — advertise on golf channels to reach affluent older men, on country music channels to reach rural voters. Digital advertising allowed targeting based on declared interests and inferred demographics. Social media allowed lookalike modeling — find voters who resemble your known supporters.
AI-powered systems represent something qualitatively different: behavioral prediction at the individual level, based on patterns across dozens of data sources, generating probabilistic models of each voter’s persuadability, preferred communication style, most effective emotional triggers, and likelihood of acting on a given message.
At scale, this is not persuasion. It is behavioral engineering.
The inputs to these models include voter file data, consumer purchase history, social media behavior, app usage patterns, location data, magazine subscriptions, car ownership, and hundreds of other variables that campaigns buy from data brokers. The outputs are not demographic segments. They are individual-level probability distributions: this specific person has a 73% chance of responding positively to an economic security message, a 31% chance of responding to an immigration message, and is most likely to act if contacted on Tuesday evenings via SMS.
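To make the shape of these outputs concrete, here is a minimal sketch of what an individual-level scoring record might look like. Everything here is illustrative: the field names, the voter ID, and the probabilities are invented to mirror the hypothetical voter described above, not drawn from any actual campaign system.

```python
from dataclasses import dataclass

@dataclass
class VoterScore:
    """Hypothetical per-voter output of a behavioral model (illustrative only)."""
    voter_id: str
    message_response: dict[str, float]  # message theme -> predicted response probability
    best_channel: str
    best_contact_window: str

def pick_message(score: VoterScore) -> str:
    # Choose the theme with the highest predicted response probability.
    return max(score.message_response, key=score.message_response.get)

# The voter from the text: 73% economic security, 31% immigration,
# most likely to act if contacted on Tuesday evenings via SMS.
voter = VoterScore(
    voter_id="V-001",
    message_response={"economic_security": 0.73, "immigration": 0.31},
    best_channel="sms",
    best_contact_window="Tuesday evening",
)
print(pick_message(voter))  # economic_security
```

The point of the sketch is the granularity: the unit of analysis is one person, and the output is a set of per-message probabilities rather than a demographic label.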
THE CONTENT GENERATION PROBLEM
The targeting problem — finding the right person for the right message — is only half of the equation. The other half is content generation.
Until recently, campaigns produced a relatively small number of ad creative variations — perhaps dozens of versions of a given message, tested and refined over weeks. The constraint was production cost: each video ad required shooting, editing, and review.
AI dramatically reduces this constraint. Generative AI tools can produce thousands of variations of written content, image content, and increasingly video content at a fraction of the previous cost. A campaign can now produce not 20 versions of an email, but 20,000 — each slightly different, each tested in real time, each iterated based on response data.
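The "tested in real time, iterated based on response data" step is commonly implemented as a multi-armed bandit: traffic shifts automatically toward the variants that perform best. The sketch below uses Thompson sampling over three message variants; the variant names and response rates are invented for illustration, and real systems would run this over thousands of variants.

```python
import random

# Thompson sampling: each variant keeps a Beta posterior over its response
# rate; for each send, we sample from every posterior and pick the winner.
class Variant:
    def __init__(self, name):
        self.name = name
        self.successes = 1  # Beta(1, 1) uniform prior
        self.failures = 1

    def sample(self):
        return random.betavariate(self.successes, self.failures)

def choose(variants):
    return max(variants, key=lambda v: v.sample())

def record(variant, responded):
    if responded:
        variant.successes += 1
    else:
        variant.failures += 1

# Simulate 10,000 sends against hidden "true" response rates (invented).
random.seed(0)
true_rates = {"A": 0.01, "B": 0.08, "C": 0.03}
variants = [Variant(n) for n in true_rates]
for _ in range(10_000):
    v = choose(variants)
    record(v, random.random() < true_rates[v.name])

# Traffic concentrates on the best-performing variant without any human
# ever deciding which message "won."
print({v.name: v.successes + v.failures - 2 for v in variants})
```

Nothing in this loop requires a human in the review cycle, which is precisely why near-zero production cost changes the economics: the bottleneck was never the testing, it was generating enough variants to test.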
The combination of individual-level behavioral targeting and industrialized content generation creates a system capable of delivering a personalized message, optimized for your specific psychological profile, at the moment you’re most likely to act on it, at a cost approaching zero per individual.
This is not a future scenario. Versions of this system were operational in 2024. The 2026 and 2028 cycles will see it deployed more extensively, at higher sophistication, by campaigns that have spent the intervening years learning from what worked.
THE AUTHENTICITY PROBLEM
There is a second-order consequence of AI-powered political communication that receives less attention than the targeting question but may be more important: the collapse of authenticity as a meaningful political signal.
For most of democratic history, voters have used perceived authenticity as an important heuristic for evaluating candidates. Does this person actually believe what they’re saying, or are they just telling me what I want to hear? The imperfection of this heuristic is well-documented — skilled politicians have always been skilled communicators — but it provided some check on purely manufactured personae.
AI dissolves this check. A candidate whose every public communication is generated and optimized by AI systems — every email, every ad, every social media post — is not communicating with voters. They’re deploying a behavioral influence system. The “candidate” the voter is responding to may have very little relationship to the actual human being running for office.
WHAT THE REGULATION GAP LOOKS LIKE
The regulatory framework for political advertising was built around two assumptions: that ads were expensive to produce (limiting their volume), and that their provenance was attributable (making disclosure meaningful).
AI breaks both assumptions simultaneously.
When a campaign can generate 10,000 variations of a political ad at near-zero marginal cost and distribute them through micro-targeted digital channels, the disclosure requirements designed for a world of 30-second broadcast spots become essentially unenforceable. You cannot require disclosure on every version of every AI-generated message when campaigns are producing them faster than any regulatory process could track.
The FEC has acknowledged this problem without resolving it. Several states have passed laws requiring disclosure of AI-generated political content, but enforcement is nascent and the technology evolves faster than the regulatory response.
The most significant near-term risk is deepfakes — AI-generated audio and video depicting candidates saying things they never said. The 2024 cycle saw early deployments of this technology, distributed primarily through social media. The sophistication of these systems improves on roughly an 18-month cycle.
THE PRACTITIONER’S LENS
What’s missing from most coverage of AI in political campaigns is a practitioner’s perspective on what this actually looks like from the inside.
Political campaigns are not monolithic entities with coherent AI strategies. They’re organizations under extreme resource pressure, with short time horizons, using vendor tools that are often not well understood even by the people deploying them. The sophisticated AI targeting systems used by major presidential campaigns are generally not available to state legislative candidates or even most congressional campaigns.
What does penetrate down-ballot is the commoditized version: email optimization tools, social media scheduling with basic personalization, digital ad platforms with automated targeting. These are AI-powered in a meaningful sense, but they’re also the kind of tools that a small business might use for customer acquisition. The conceptual distance between a campaign email platform and a precision behavioral influence system is large.
The implication for democratic health is nuanced. The fully realized version of AI-powered political communication — individual-level behavioral modeling across hundreds of data sources, deployed at scale — is currently accessible primarily to campaigns with very large budgets. As the technology commoditizes over the next several cycles, it will become accessible to any campaign that can afford basic digital tools.
That democratization of sophisticated manipulation technology is the development most worth watching.
This is the first installment of "Technology & Democracy," a series examining the structural forces that shape American political spending. Next: "The Next Political Arms Race Will Be Data, Not Television Ads."