The Staff Report In A Nutshell
On September 19, 2024, the FTC voted 5-0 to release a staff report on the data collection and use practices of major social media and video streaming services.
The report assessed information collected by FTC staff from nine companies, including some of the largest social media and video streaming services: Amazon.com, Inc., which owns the gaming platform Twitch; Facebook, Inc. (now Meta Platforms, Inc.); YouTube LLC; Twitter, Inc. (now X Corp.); Snap Inc.; ByteDance Ltd., which owns the video-sharing platform TikTok; Discord Inc.; Reddit, Inc.; and WhatsApp Inc.
The orders asked for information about how the companies collect, track and use personal and demographic information, how they determine which ads and other content are shown to consumers, whether and how they apply algorithms or data analytics to personal and demographic information, and how their practices impact children and teens.
According to FTC Chair Lina Khan:
“The report lays out how social media and video streaming companies harvest an enormous amount of Americans’ personal data and monetize it to the tune of billions of dollars a year. While lucrative for the companies, these surveillance practices can endanger people’s privacy, threaten their freedoms, and expose them to a host of harms, from identity theft to stalking. Several firms’ failure to adequately protect kids and teens online is especially troubling. The Report’s findings are timely, particularly as state and federal policymakers consider legislation to protect people from abusive data practices.”
The report described specific staff concerns with social media and video streaming services' actions, including:
· “Woefully inadequate” collection and indefinite retention of troves of data, including information from data brokers, concerning both users and non-users of their platforms;
· Company business models that incentivized the mass collection and monetization of user data, especially through targeted advertising, posing risks to users’ privacy;
· Inadequate company monitoring of automated systems that were fed data, and obstacles to third parties’ opting out of the use of their data;
· A failure to adequately protect children and teens who accessed media and video streaming sites; and
· The risk that companies that amass significant amounts of user data may be in a position to achieve anticompetitive market dominance, potentially leading to harmful company prioritization of data acquisition at the expense of user privacy.
Key report recommendations included:
· Congress should pass comprehensive federal privacy legislation to limit surveillance, address baseline protections, and grant consumers data rights;
· Congress should enact additional legislative privacy protections for children and for teens over the age of 13;
· Companies should limit data collection, implement concrete and enforceable data minimization and retention policies, limit data sharing with third parties and affiliates, delete consumer data when it is no longer needed, and adopt consumer-friendly privacy policies that are clear, simple, and easily understood;
· Companies should not collect sensitive information through privacy-invasive ad tracking technologies; and
· Companies should carefully examine their policies and practices regarding ad targeting based on sensitive categories.
Problems With The Staff Report
FTC staff could have presented a neutral description and analysis of data collection practices by major social media and video streaming companies. Instead, the staff report reads primarily like a litany of concerns about theoretical, rather than demonstrated, consumer harms. The report also essentially ignores the substantial economic benefits stemming from the companies’ data use.
Notably, although they concurred in releasing the report, two FTC Commissioners, Melissa Holyoak and Andrew Ferguson, released separate statements dissenting from important aspects of the report’s analysis.
Commissioner Holyoak’s Three Key Concerns
Noting that “companies pay close attention to what the Commission votes to put forward,” Commissioner Holyoak highlighted three concerns:
“First, the [r]eport may affect free speech online. Where the [r]eport’s analysis relates to protecting children and teens online or to clearly harmful content (e.g., promoting self-harm), I am deeply sympathetic. And as noted earlier, the [r]eport says it does not “endorse any attempt to censor or moderate content based on political views.” But because some of the [r]eport relates to how social media companies design or modify their algorithms and AI in ways that may affect their content recommendations or moderation, I have grave concerns.
Second, the [r]eport’s so-called “recommendations” effectively seek to regulate private conduct through a sub-regulatory guidance document, and, at times, incorporate mischaracterizations of what current law requires. We should not dictate or otherwise seek to reshape private-sector conduct in a guidance document.
Third, I note that—notwithstanding many descriptive contributions this Report makes—there are pivotal factual and policy questions that must still be explored. Before we can conclude the [r]eport’s unqualified recommendations would ultimately lead to good outcomes for consumers or competition, more analysis is essential. Indeed, it is particularly troubling that these recommendations overlap with the Commission’s [2022] contemplated rulemaking [on commercial surveillance and data privacy] and appear created in part to provide support for such rulemaking, circumventing and potentially subverting the public-comment process.”
Commissioner Ferguson’s Concerns
Commissioner Ferguson’s criticisms of the report centered on its treatment of targeted advertising and artificial intelligence, and on its omission of any discussion of political censorship:
“The [r]eport’s claim that consumers can be ‘profoundly threat[ened]’ and suffer ‘extreme harm[]’ by being shown a targeted advertisement is unjustified, a gratuitous attack on the online economy made with the goal of justifying heavy-handed regulation.
The [r]eport also calls for the expansion of AI safety departments within these companies, and for the bureaucrats who staff them to have binding authority over the engineers and business leaders who actually innovate and create new products. It conveniently fails to mention the stunningly bad decisions made by such AI safety bureaucracies where they already exert their harmful influence.
Just as disappointing is what the [r]eport omits. The . . . [FTC] orders [requesting information] asked about the companies’ content moderation policies, but the [r]eport says nothing about the pervasive political censorship and election interference carried out by the studied companies under the guise of ‘content moderation.’ The [r]eport says nothing about the banning of politicians (including Donald Trump while he was serving as President of the United States), about the removal and demonetization of users who challenge the Silicon Valley political consensus, nor about one of the most brazen acts of election interference in recent history: the coordinated suppression by social media companies of the Hunter Biden laptop story in the leadup to the 2020 presidential election.”
The Report’s Flaws: An Overall Assessment
The staff report provides new information on data collection and usage practices at major social media and video streaming companies that may be of interest to future researchers and to Congress. This is beneficial.
The report fails, however, in making highly critical subjective characterizations of business practices and in putting forth recommendations reflecting those characterizations. The report provides no empirical support for its findings. It also ignores economic analysis that points to positive aspects of the practices it criticizes.
In particular, the report largely ignores the substantial benefits of data retention and, especially, of targeted advertising. More efficient targeted advertising and data retention for monetization purposes enable digital firms to enhance the efficiency and quality of platform services, benefiting both consumer and business users.
In addition, language in the staff report that calls data retention into question without identifying particular case-specific harms may discourage efficient and profitable uses of platform data, slowing platform-adjacent innovation and economic growth.
The report overall reflects the unjustified belief that benevolent government knows better than successful digital firms and can “perfect” their platform practices, an example of what the eminent economist Harold Demsetz called the “Nirvana Fallacy.”
Finally, it is ironic that the staff report focuses on the risk of data sharing, without noting that many antitrust interventionists implicitly support data sharing by promoting platform interoperability as a means of enhancing competition.
The Next Step For The FTC
The pointed critiques of the staff report’s analysis by two of the five FTC Commissioners, coupled with the report’s subjectivity, lack of rigor, and analytic deficiencies, severely restrict its utility. In its current form, it reflects poorly on the FTC. The Commission should have staff rewrite the report to eliminate its subjective conclusions and recommendations. A new report should also include economic analysis that discusses the welfare benefits, as well as the costs, of data retention and usage.