Shortly after the 2016 presidential election, PepsiCo was dragged into a national boycott over something no one ever said. Welcome to the Age of Disinformation.
During an interview with the New York Times, PepsiCo CEO Indra Nooyi had stated her disappointment with the outcome of the election but congratulated President Trump and encouraged the country to come together.
“The election is over,” she said. “I think we should mourn for those of us who supported the other side. But we have to come together, and life has to go on.”
However, the right-wing blog the Conservative Treehouse falsely reported that she had told Trump supporters to “take their business elsewhere.” The blog’s paraphrase was quickly attributed to Nooyi as a direct quote, carried by multiple conservative news websites, and then amplified over social media with calls for a nationwide boycott under #BoycottPepsi and #PepsiBoycott.
PepsiCo, whose major brands include Mountain Dew, Lay’s, and Fritos and which distributes bottled Starbucks drinks, saw its stock quickly sink by more than five percent.
The Era of Disinformation
Around that same time, New Balance faced a backlash from the other side of the political spectrum. The company’s VP of Public Affairs was quoted as saying that under Trump things would be moving “in the right direction.”
The company clarified that his comments were taken out of context and only made in reference to the Trans-Pacific Partnership; but not before infuriated customers took to the internet to burn their New Balance shoes. It didn’t help that a white supremacist site published an article praising New Balance as “the official brand of the Trump revolution.”
Both incidents represent how misinformation campaigns can wreak havoc with a company’s brand in two hours or less.
If major brands like PepsiCo and New Balance struggle to stay ahead of these threats, SMBs are even more exposed.
Misinformation in marketing campaigns occurs when an audience misunderstands or misinterprets a product or message, whether it’s a line of clothing believed to represent blackface or an image that lacks context.
For example, in 2012, Heineken was accused of animal abuse after a photo circulated online showing Heineken banners hanging in a Mongolian nightclub hosting a dogfighting event. In reality, the banners had been put up days earlier for an unrelated event and simply left in place by the owner.
Online propaganda
Disinformation occurs when an organization, whether political, consumer, or state-sponsored, purposely alters a message or image and promotes it online. This was the case with PepsiCo and recently with an online home furniture and goods store accused of using its products to traffic children.
According to the 2019 Global Disinformation Order report, researchers found evidence of organized social media manipulation campaigns in more than 70 countries last year, up from 48 countries in 2018 and 28 in 2017. In 26 countries, online propaganda was used as a tool of information control to suppress human rights and dissenting opinions. Facebook remains the platform of choice for social media manipulation, used in 56 of the countries that organize and promote computational propaganda campaigns.
Check the comments and don’t drink from the firehose
Misinformation and disinformation events can also originate from consumers who believe they’re driving positive social change. More than half of consumers worldwide think it’s easier to pressure brands into addressing social problems than it is for the government to take action, according to the Edelman 2018 Earned Brand Global Report.
Their efforts are commonly reflected in the comments section of a product page or on review sites like Yelp. Crisp Thinking, which identifies and combats online threats for major global brands, recently surveyed more than 2,000 consumers in the U.S. and U.K. and found that 47 percent of people who read negative comments on a brand’s advertisement will avoid purchasing from that company.
“These days, we see a lot of companies looking at ways to promote their advertising on social media platforms and find new ways to be more engaging,” said Crisp Thinking Chief Experience Officer Emma Monks. “But they’re not focusing on how the audience is responding via comments. And they really should be, because we know that nearly half of their audience will be turned off and will not become customers if they see comments that are distressing, negative, unpleasant, abusive. They will just go purchase elsewhere.”
Under pressure
Digital marketers under intense pressure to guard against disinformation attacks and cases of misinformation are often under-equipped, with only basic social media monitoring tools and part-time staff watching them in a single time zone. Monks says these tools can gauge web traffic and tone, but they’re not always useful in identifying an impending crisis.
“The way they fall down is that first they kind of produce a firehose of information,” she says. “Most of these tools you set up dashboards and try to filter out the noise to get to the nuggets of information that they’re looking for. That doesn’t work too well.”
Monks recommends an “intelligent alerting” approach to analyzing social content and customer interactions, which requires a dedicated person or tool to continually sort and sift through social media channels. When something of concern pops up, the threat is quickly evaluated, assigned a threat level, and communicated internally.
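The triage step of that workflow might be sketched as follows. This is a hypothetical illustration only, not Crisp Thinking’s actual system: the keyword lists, threat levels, and `triage` function are all assumptions standing in for what would, in practice, be trained classifiers tuned to a specific brand.

```python
from dataclasses import dataclass

# Hypothetical trigger keywords per threat level; a real system would use
# trained models and brand-specific vocabularies, not a static word list.
KEYWORDS = {
    "high": ["boycott", "scandal", "lawsuit"],
    "medium": ["refund", "broken", "disappointed"],
}

@dataclass
class Alert:
    level: str
    text: str

def triage(posts):
    """Sort incoming social posts into alerts, highest threat level first."""
    alerts = []
    for text in posts:
        lowered = text.lower()
        if any(k in lowered for k in KEYWORDS["high"]):
            alerts.append(Alert("high", text))
        elif any(k in lowered for k in KEYWORDS["medium"]):
            alerts.append(Alert("medium", text))
        # Everything else is treated as noise and dropped, so staff see
        # only the "nuggets" rather than the full firehose.
    order = {"high": 0, "medium": 1}
    return sorted(alerts, key=lambda a: order[a.level])

posts = [
    "Love the new flavor!",
    "My order arrived broken, very disappointed.",
    "#BoycottPepsi is trending, spread the word",
]
for alert in triage(posts):
    print(alert.level, "->", alert.text)
```

The point of sorting by level is that high-threat items surface first for internal escalation, rather than being buried in a chronological stream.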
However, most SMB organizations don’t have the dedicated resources and budget to monitor and assess threats 24/7. A lack of global resources can leave multiple time zones and regions unchecked until the next business day. And by then, the damage is done.
Move swiftly and smartly
Libba Letton, a crisis communications consultant, says intelligence and speed are crucial to keeping bad news from getting worse. The damage from a misinformation or disinformation event happens within the first one to two hours of the social postings, with the most damage occurring in the first 60 minutes. That’s why it’s essential to locate the issue and quickly acknowledge it to help defuse the situation.
“When bad news begins circulating online, it can ramp up really quickly and multiply exponentially,” she says. “But at the same time, if you respond as soon as possible that you’re aware of the problem, investigating the issue and promise to come back with more info, the issue can ramp down just as quickly.”
In the investigation phase, Monks recommends assessing the threat level, including the potential for the issue to spread further. In PepsiCo’s case, additional conservative websites and their social media followings escalated the spread of the disinformation. For Crisp and its clients, an alert is required within 30 minutes of the first posting, followed by intelligence-gathering efforts.
Letton also recommends that organizations communicate internally, giving employees the information they need to understand and assess the problem. This equips them to respond, if necessary, to their own social media followers. At the same time, company executives should be trained and prepared with general messaging and statements.
Misinformation and the election
With just over three months before the election, social media channels are nearly guaranteed to see influence campaigns from other countries. Their disinformation will be amplified by those who support the message, and news that appears real will go unchecked before being disseminated to thousands of people.
Letton says every consumer has a responsibility to fact-check information before publishing it on their social media accounts. The best way to avoid spreading fake news is to follow these steps:
- Check whether the news outlet is real and free of connections to partisan organizations. Also, see how long it has been in existence and where it’s based.
- See if other independent news sources have covered the story. A simple Google search under “news” will do the trick.
- Go to fact-checking organizations like FactCheck.org or Snopes.
“The countries and organizations behind fake news and disinformation campaigns rely on us to spread the information for them,” Letton says. “They appeal to our bias as a way of motivating us to blindly share their content on social channels. When that happens, we unwittingly become an accomplice to their misdeeds.”
Photo by Kayla Velasquez on Unsplash