The U.S. Capitol Insurrection’s Impact on Media
- POVs
- January 9, 2021
- Brian Wieser
Key takeaways from this week’s note:
- Twitter's permanent suspension of U.S. President Donald Trump's account in the aftermath of this week's insurrection in Washington, D.C., is an important and necessary action. Beyond limiting real damage to people and society, it will help to improve the platform as an environment for brands.
Unfortunately, this was the most newsworthy issue of the past week. The Donald Trump-inspired insurrection and the related fatalities in Washington, D.C., on Wednesday were an unsurprising consequence of many years of extremist rhetoric and misinformation.
While the individuals who originally conveyed knowingly false information certainly bear the bulk of the blame, they would not have been as likely to succeed if they did not have some means of amplification.
Over the past decade, in many countries around the world, misinformation has been widely shared and violence has been directly encouraged or organized by individuals using social media platforms. Historically, the platforms often tolerated incendiary content and, in many cases, amplified it, and efforts intended to address the negative consequences have broadly been insufficient. There are now signs of changes to limit such content, but more still needs to be done.
News on Friday that Twitter would permanently suspend U.S. President Donald Trump's personal account, along with separate reports that Apple and Google were threatening to ban Parler from their app stores, was a significant illustration of a heightened focus on addressing these problems.
On the other hand, no matter how many actions platforms take, problematic activity will likely continue to originate on social media: even a small percentage of lesser-known users can amount to millions of individuals, more than enough people to inflict significant harm on societies.
What will catalyze the platforms to pursue more impactful changes?
Sufficient force probably won't come from consumers, who don't generally pay for access to social media in the first place. Consumers have not significantly altered their reliance on these platforms and don't seem likely to change their habits in meaningful ways any time soon. Social media's underlying algorithms are undoubtedly effective at sustaining usage, even among consumers bothered by what they know about the problematic content on the platforms.
But why is this?
Is it because consumers don't consider the possibility that their own activities and consumption patterns may partially enable the conspiracy content consumed by others?
Or because they know platforms broaden the availability of hate-inspiring content but accept this reality as a trade-off to access the content they believe they need?
Or perhaps because they typically see only their own feeds, generally agree with what appears there, and conclude that the problems are caused by other users?
Whatever the underlying reason, consumers are not likely to force platforms to take comprehensive actions.
How about advertisers? Not likely here, either. Advertising collectively enables the good and the bad associated with social media, as there is virtually no other revenue stream for these media owners.
Advertisers’ historical efforts to force platforms to eliminate the bad, however well-intentioned, have proven to be insufficient because even very large groups of budget-holders are far too fragmented to make much of an impact on companies with millions of individual customers.
The vast majority of marketers have generally decided that, if audiences are using these platforms, it’s reasonable to try to reach consumers where they are regardless of any indirect or long-term consequences that may follow. If users don’t abandon the platforms and don’t attach negative considerations to sponsoring brands, it’s hard to imagine noticeable spending changes.
Of course, brands still need to consider long-term implications.
In the same way that they can be connected to positive societal outcomes associated with the media owners they support, they can also be connected to negative social consequences enabled or encouraged by those same companies. At a minimum, marketers can limit the degree to which their brands are directly attached to problematic content by maintaining rapidly implemented "circuit-breaker" processes that pause spending or enhance content filters at sensitive times.
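To make that recommendation concrete, here is a minimal, purely illustrative sketch of what such an advertiser-side circuit-breaker might look like, written in Python. Every name in it (the Campaign record, the trip_circuit_breaker function, the set of flagged platforms) is a hypothetical assumption for illustration, not any real ad-platform API; in practice this would map onto a buying platform's own pause and brand-safety controls.

```python
# Hypothetical sketch of an advertiser-side "circuit-breaker": when a
# sensitivity signal trips (e.g., during a breaking-news crisis), spend on
# the flagged platforms is paused and stricter content filters are applied.
# All names here are illustrative assumptions, not a real ad-platform API.

from dataclasses import dataclass

@dataclass
class Campaign:
    name: str
    platform: str
    active: bool = True             # whether spend is currently running
    strict_filtering: bool = False  # whether enhanced content filters apply

def trip_circuit_breaker(campaigns, flagged_platforms):
    """Pause spend and tighten filters on platforms flagged as sensitive."""
    for c in campaigns:
        if c.platform in flagged_platforms:
            c.active = False           # pause spending immediately
            c.strict_filtering = True  # keep stricter filters for any restart

# Example: a monitoring feed or a human decision flags one platform
book = [Campaign("spring-launch", "twitter"), Campaign("retail-promo", "search")]
trip_circuit_breaker(book, flagged_platforms={"twitter"})
for c in book:
    print(c)
```

The design point is simply that the pause is automatic and immediate, while restarting spend remains a deliberate, human-reviewed decision.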
Government may offer one partial solution to the problem. In a potentially ironic coda to the Trump era, the events in D.C. could contribute to increased interest among legislators in repealing Section 230 of the Communications Decency Act, as repeal would expose platforms to financial consequences if they were deemed to have enabled or failed to prevent harm.
Ensuring that platforms hosting or amplifying content that brings harm to individuals bear some financial or other legal consequences would likely deter much of the problematic activity on those platforms.
These platforms need to take even more responsibility, too. They could choose to make changes themselves, if only out of the self-interest of preferring to operate in a society grounded in fact and with less civil strife. Certainly, we can view Twitter's actions on the Trump account in this light.
More generally, if they want to continue providing an environment in which everyone can share content, they could choose to host problematic material without supporting its mass distribution or sharing.
Perhaps they could formally authorize only a select number of people, as determined manually by the platform itself, to benefit from automatic amplification. Such solutions would undoubtedly be costly, although not necessarily prohibitively so, and would probably reduce usage.
The costs would not be as severe as the platforms (and their investors) perceive them to be. Those perceived costs likely deter change, but we would argue they shouldn't. Platforms tend to believe that reduced usage would lead to reduced advertising revenue, a belief based on the flawed premise that a change in the supply of a medium directly causes a change in demand for it.
Although this can be true in very broad strokes (a medium with 10x more or 10x less consumption will undoubtedly see an impact on total spending), within most realistic ranges, spending on a medium would be largely unchanged if usage falls.
Of course, there could be an impact on the share of advertising inventory a given media owner has to sell within the medium; this would have a revenue impact for any given company. The impact, however, would likely be modest.
More importantly, consumption lost to reduced amplification of incendiary content could be partially offset by increased consumption from consumers who have been bothered enough by what happens on the platforms to stay away from them. This is especially true for individuals who have been trolled or who have had hateful content directed at them.
Brands that want to minimize their exposure to toxic content might also feel more favorably disposed toward media owners who don’t tolerate it and allocate relatively more money to those media owners as a result.
In the case of Twitter specifically, we would expect a reduction in incendiary content to make the platform more favorable to its advertiser base, given that base's skew toward larger brands.
Of course, social media platforms aren’t the only media companies transmitting the misinformation or incendiary content that is so damaging to societies. Content supported by advertising and consumers’ subscription fees on radio, television and streaming services plays a significant role.
Every packager of incendiary content could, over time, more aggressively attempt to limit its production and distribution. At the very least they should, because the aggressively oriented elements of modern society are unfortunately unlikely to back down on their own any time soon. Responsible citizens, social media companies and companies of all kinds should do everything they can to avoid enabling them.