Blog

What Jurgen Klopp Can Teach Us About Election Misinformation

A viral image of the Liverpool manager turned out to be fake. But what can we learn from it?

A thrilling 2-1 victory for Liverpool Football Club over Newcastle United in the English Premier League was marred by controversy after Liverpool captain Virgil van Dijk was sent off for a challenge on an opposition player running through on goal. Amid the fallout, Newcastle assistant boss Jason Tindall was captured with his finger to his lips, shushing the Liverpool bench for its appeals to the referee over the red card. Then Liverpool’s Uruguayan striker Darwin Nunez scored twice in quick succession to turn the game on its head.

Photo by Ian MacNicol/Getty Images

Shortly after the match, an image began circulating on social media of Liverpool manager Jurgen Klopp grinning from ear to ear as he returned the gesture to Tindall and the Newcastle bench. The image quickly picked up steam, with some calling it “cold” and others celebrating their manager sticking it to the opposition coaches. Football (or soccer, if you prefer) pundits and celebrities reposted it. And like clockwork, The Telegraph, a London newspaper, splashed Tindall and Klopp side by side across the front page of its sports section. I shared the image on my Instagram story.

The only problem: the Klopp “shushing” never happened. Liverpool fans, the football world, and the media had all been duped by a photoshopped image posted by X user and graphic designer @LewVisualss. What’s more, the edited image contained Lew’s discreetly added watermark.

But when confronted with the facts, fans on social media didn’t seem to care. They preferred the version of reality that briefly existed, in which most people accepted the photo as a record of actual events from the match. Supporters rallied behind what they believed was their manager getting back at the opposition for its antics during a moment of immense frustration: watching their captain dismissed while the opposition staff rubbed it in.

The image’s virality is an interesting case study of misinformation. The digitally altered image fit a reality that people would prefer existed rather than what actually happened.

Football fans are tribal. Supporting your team is a religion as much as a pastime or hobby. We experience that same digital tribalism in our politics.

In June, Florida Governor Ron DeSantis’s campaign shared AI-generated images of former President Donald Trump appearing to hug Dr. Anthony Fauci, the nation’s leading expert on the coronavirus at the beginning of the pandemic. The images were meant to lambast Trump for not firing Fauci during his time in office.

DeSantis called back to the images in his remarks at the Republican presidential debate, proudly stating he would have fired Fauci on the spot. For supporters of the Florida governor, the photos, regardless of their authenticity, paired with his remarks to reinforce a reality they have likely chosen to believe, even when confronted with, say, an AFP fact check.

For seemingly the first time, campaign operatives have a powerful tool in AI-generated and digitally manipulated images, letting them push particular messages out to supporters and create a false reality they want those supporters to believe rather than the actual one.

And for the first time, election officials are grappling with how to regulate AI-generated political advertising. On August 10, the Federal Election Commission opened a public comment period on whether existing rules against fraudulent advertising apply to deepfake material.

Democratic lawmakers have also introduced legislation to tackle the issue, with Senators Amy Klobuchar (MN), Cory Booker (NJ), and Michael Bennet (CO) sponsoring a bill that would require disclosures on political ads that use AI-generated imagery or video. Rep. Yvette Clarke (D-NY) is leading the effort in the House.

The White House is also dealing with the phenomenon: aides scrambled to determine whether photos of the Pentagon on fire, which turned out to be AI-generated, were real. Beyond causing a panic, the images had a real-world impact on financial markets.

Disclosures would give viewers much-needed context and could disincentivize deceptive advertising. But even when confronted with a watermark or a fact check, individuals may choose to hold on to the false reality they prefer.


There is a technical debate in the AI community about what watermarking actually means, as Claire Leibowicz wrote for the MIT Technology Review. Watermarks can be visible to the user, like Getty Images-style text overlays, or invisible, what she dubs “technical signals.”