Frances Haugen, the Facebook whistleblower, gave polished testimony before the European Parliament on Monday, following similar sessions with US and UK legislators in recent weeks.
Her core message was the same one she has delivered on both sides of the Atlantic: Facebook puts profit above safety and ignores the harmful effects of toxic content, so regulators must be given oversight powers to hold such irresponsible platforms accountable -- and legislators have no time to lose in imposing rules on social networks.
The European Parliament gave Haugen a warm welcome, with MEPs expressing gratitude for her time and what they called her "bravery" in raising her concerns publicly.
They questioned her on a variety of topics, but the bulk of their attention went to how the incoming pan-EU digital regulations could best impose transparency and accountability on platform giants.
MEPs' focus is on the Digital Services Act (DSA), as the parliament votes on amendments to the Commission's proposal that could significantly change the law.
MEPs have been pushing for a ban on behavioral ads to be added to the legislation, which would push platforms toward privacy-safe alternatives such as contextual advertising. Another amendment recently supported by some MEPs would exempt news media content from platform takedowns.
It turns out Haugen doesn't like either of these potential amendments, though she backed the regulation as a whole.
The DSA's main thrust is to create a safe and trusted online environment. Several MEPs spoke during the session to congratulate the EU on being so far ahead as to have a digital regulation on the table and moving quickly toward adoption -- in the midst of (yet another) Facebook publicity crisis.
The Facebook whistleblower was happy to massage political egos, telling MEPs the EU takes platform regulation seriously and suggesting there is an opportunity for the bloc to set a global gold standard with the DSA.
She used a similar line last month in the UK parliament, where she spoke about domestic safety legislation in glowing tones.
Haugen repeated for MEPs the warning she gave UK lawmakers about Facebook's skill at "dancing with data", stressing that they must not adopt naive laws that simply require Facebook to hand over data about its platform. Instead, Facebook should be required to provide detailed explanations of any data it hands over, including the queries used to pull that data, and to generate oversight audits.
Without such provisions, she warned, the shiny new EU digital rules would come with a huge loophole, leaving Facebook free to serve up selectively self-serving data and run whatever queries suit its purposes.
To be effective against untrustworthy platforms like Facebook, she suggested, regulation must be multi-tiered and dynamic, allowing for continuous input from external researchers and civil society organizations.
She also suggested it should take a broad view of oversight -- providing platform data to a wider range of experts than just the 'vetted academics' of the current DSA proposal -- in order to ensure accountability for AI-fuelled impacts.
Facebook has proven it is willing to lie about data, she said, urging MEPs to plug that gap in the DSA: if Facebook hands over data, it should have to show how it obtained that data, because nothing it says can be trusted unless it can prove it.
Haugen wasn't content merely to sound the alarm. She also flattered MEPs, telling them she believes Europe is well placed to regulate these platforms, being a vibrant and linguistically diverse democracy.
"If the DSA is right for your linguistically and ethnically diverse, 450 million EU citizens, you can make a huge impact on the world. You can force platforms to include societal risk in their business operations, to ensure that decisions about which products to build and how to build them are not based solely on profit maximization. It is possible to establish systemic rules that protect free speech while addressing risks. You can also show the world transparency, oversight, and enforcement."
Platforms, she said, should have to disclose their safety systems, the languages those systems cover, and their performance per language. This is the type of requirement that can be put in the DSA, making it possible to say, "This is dangerous for a significant portion of Europeans."
This approach would be a huge benefit to Europe, according to Haugen, because it would force Facebook to adopt language-neutral, content-neutral solutions -- which she argued are necessary to address harms across all the markets and languages in which it operates.
The internal Facebook documents she leaked highlight how Facebook's (limited!) safety budget is skewed toward English-speaking markets. She suggested Europe could address the global inequity in how powerful platforms operate (and what they prioritize or neglect) by enforcing context-specific transparency around Facebook's AI models -- requiring not just general measures but specifics per country, per language, per safety system, and per cohort of highly targeted users.
Forcing Facebook to address safety as a systemic requirement would not only tackle the problems the platform causes in European markets, she argued; it would also "speak up" for people in more fragile parts of the world.
While many of Haugen's talking points were familiar from her earlier testimony sessions and press interviews, during the Q&A a number of EU lawmakers sought to engage her on whether Facebook's problem with toxic content amplification might be tackled by an outright ban on microtargeted/behavioral advertising -- an active debate in the parliament -- so that the adtech giant can no longer use people's information against them to profit through data-driven manipulation.
Haugen demurred, saying she supports people being able to choose ad targeting (or no ad targeting) for themselves, in contrast to regulators making that decision for them.
She suggested that specific things around ads need to be regulated, pointing to ad rates as one area she would target rather than an outright ban. The current system subsidises hate, she said, making it five to ten times cheaper to run hateful political ads than non-hateful ones. "I believe flat rates should be used for ads, given that. But I believe there should be regulations on how ads are targeted to certain people."
"I don't know if this is something you are aware of, but you can target specific ads at a 100-person audience. It is clearly being misused. I have done an analysis of who is hyper-exposed to political advertisements and it was obvious that those who live in Washington DC are the most exposed. We're talking thousands upon thousands of ads per month. It is unacceptable to have mechanisms that target specific people without their knowledge."
Haugen also advocated for an end to Facebook's ability to use third-party data sources to enrich profiles for ad targeting purposes.
"In terms of data retention and profiling, I believe you shouldn't be allowed to take third-party data sources -- something Facebook does -- they work with credit card companies, other forms -- it makes their ads dramatically more profitable," she stated. She added: "I think that you should have to give your consent every time you hook up more data sources. Because people would be really uncomfortable knowing that Facebook has some of their data."
She avoided supporting an outright ban on behavioral advertising targeting.
This was an interesting aspect of the session: given the momentum behind the issue in the EU -- momentum her own whistleblowing has amplified by sharpening regional legislators' concerns about Facebook -- Haugen could have helped stoke it, but chose not to.
With respect to targeted ads, she did not go into detail about how regulators could craft a law against something as complex as 'dark pattern design'.
She reiterated that platforms should be transparent about how they use data, saying she is a big advocate of requiring them to publish policies -- such as whether they charge flat rates for all political ads -- because they shouldn't be subsidizing hateful content in political ads.
Her argument against banning behavioral advertising appeared to rest on regulators achieving full platform transparency: an accurate picture of what Facebook (et al.) does with people's personal data, so that users can make an informed decision about whether or not they wish to be targeted. It all hinges on full-picture accountability.
Haugen herself, however, conceded it is doubtful that children (or even adults) can understand Facebook's data processing practices at the moment.
She told MEPs that she was unsure if children can understand what they are trading away. "We don’t know what algorithms are, how we’re targeted, so the idea that children could give informed consent -- I don’t think they can. They have less capacity."
Her belief that such complete transparency is achievable -- and that it would present a universally comprehensible view of data-driven manipulation, allowing adults to make an informed decision about whether or not to accept manipulative behavioral ads -- seems rather tenuous.
Follow Haugen's logic, and were the suggested cure of radical transparency to fail -- whether through regulators improperly or inaccurately communicating their findings to users, or failing to ensure users are appropriately and universally educated about their risks and rights -- the risk is surely that data-driven exploitation simply continues, now with a free pass baked into legislation.
That argument felt sloppy and a little inconsistent -- as if her opposition to banning behavioral ads, and thus to tackling the core incentive fuelling social media's manipulative toxicity, were more ideological than logical.
It also seems quite a leap of faith to assume that governments around the globe can put in place the high-functioning, 'full fat' oversight Haugen suggested -- especially when she has spent weeks convincing lawmakers that these platforms can only be understood as context-specific, highly data-detailed algorithmic machine systems. Not to mention the enormity of the task, given the "amazing" amount of data Facebook holds.
It is, perhaps, exactly the perspective you would expect from a data scientist rather than a rights expert.
(Ditto her quick dismissal of banning behavioral ads: the kind of reflexive reaction you might expect from a platform insider who has been privy to the black boxes, focused on manipulating algorithms and data, rather than standing outside the machine where the harms flow.)
Haugen also argued that radical transparency is the only cure for social media's problems, while warning the EU against leaving enforcement of such complex matters to 27 national agencies.
Doing so, she suggested, would cause the DSA to collapse. Instead, lawmakers should create a central EU bureaucracy to enforce the complex, multilayered, dynamic rules needed to rein in Facebook-scale platforms. Ex-industry algorithm experts, she even suggested, might find a home there, "giving back" by contributing their expertise to public accountability.
"There are very few formal experts who understand the algorithm and its consequences. You can't get it as a Master's degree, and you can't even get a PhD. So you need to work in one of these companies to be trained up," she said.
"It will be very difficult to find enough experts and distribute them as widely as possible."
Given her many warnings to lawmakers about the need for regulators to dig out devilish details buried in self-serving data-sets or "fragile AIs" to stop platforms pulling the wool over everyone's eyes, it is instructive that Haugen opposes regulators setting some simple limits -- such as no personal information being used in ads.
MEPs also asked her directly whether regulators should place limits on platforms' access to data and/or inputs for algorithms. Again, her preference was transparency, not limits. (Although she also called for an end to Facebook purchasing third-party data-sets to enrich its ad profiling, as mentioned above.)
The ideology of an algorithm expert may have some blind spots when it comes to thinking outside the box about effective regulation of data-driven software machines.
For democratic societies to take back control of data-mining tech giants, some hard stops may be necessary.
Haugen's most powerful advocacy lies in her detailed warnings about the danger of digital regulation being fatally undermined by loopholes. And she is certainly right that the risks here are numerous.
She raised another possible loophole earlier in her presentation, urging lawmakers not to exempt news media content from the DSA (another amendment MEPs are considering). If you want content-neutral rules, she argued, they must be genuinely neutral -- with nothing carved out and everything included.
She warned that "every modern disinformation campaign will exploit digital news media channels by gaming the system", adding: "If the DSA makes it illegal for platforms to address these issues, we risk undermining the law's effectiveness -- in fact, we may be worse off than today."
Haugen was also asked a few questions by MEPs during the Q&A about new regulatory challenges in light of Facebook’s pivot to creating the so-called "metaverse".
She told lawmakers that she was "extremely worried" about the potential for increased data collection from metaverse-feeding sensor proliferation in offices and homes.
She expressed concern that Facebook's emphasis on building workplace tools could create situations where opting out is not an option, since employees typically have very little say over business tools -- suggesting people might face a dystopian future choice between Facebook's advertising profiling and being able to earn a living.
Facebook's new focus on "the metaverse" also illustrates what Haugen called a "meta problem" for Facebook: its preference to "move on" rather than fix the problems caused by its current technology.
She said that regulators must use their power to force the juggernaut onto a safer course.