WHEN a major speech is given during a conflict, it doesn’t just reach the public once (think of the recent coverage linking Melania Trump and Epstein).
It’s translated, clipped, tested, reframed and redistributed within minutes – across TV, social platforms and global news cycles.

AI sentiment engineering – or are we being manipulated?
In conflicts like Ukraine and the Middle East, and across recent elections, AI is already part of that process, supporting how messaging is analysed, adapted and scaled in real time.
This reflects a wider shift. Political communication is no longer just about what is said; it’s about how messaging performs on attention, reaction and reach:
- Language is tested before release
- Public response is tracked continuously
- Narratives are repeated and reinforced across platforms
What’s changing now?
AI systems learn from these same patterns. They don’t independently verify truth; they reproduce what is most consistent, visible and widely represented.
Why it matters
In a 24/7 media environment, influence is no longer only downstream (how people react). Increasingly it is upstream, shaped during creation, testing and distribution. Shaped by whom? Political parties and commercial lobbyists, for example.
The result isn’t necessarily false information; it’s information that has been optimised to perform.
Practical question
How should media literacy evolve when what we see is not just reported but engineered to scale, and possibly engineered to lobby (or to manipulate, in Machiavellian hands)?
#AI #DigitalTransformation #Media #Politics #PR #InformationIntegrity
Interested in the research behind this? Comment “sources” and I’ll share.