Who controls our personal data? That question was at the heart of a panel discussion at CHM on November 4, 2019 about Netflix’s documentary The Great Hack, which exposes the underbelly of the Cambridge Analytica scandal and Facebook’s role in the misuse of its users’ data.
Expanding on clips from the film, director Karim Amer, writer and producer Pedro Kos, Guardian journalist Carole Cadwalladr, and cyber policy expert Marietje Schaake engaged in a lively conversation with moderator and science writer John Markoff. They explored the impact on democracy when our data—a trillion-dollar-a-year industry—is used without our knowledge or consent to manipulate our behavior.
Before it was persona non grata, Cambridge Analytica, a British firm owned by a defense contractor and funded by conservative businessman Robert Mercer, claimed that it had collected 5,000 data points on every American voter. That data set, though acquired through some questionable and some clearly illegal methods, attracted researchers interested in experimenting on how data profiles might be used to manipulate political behavior.
On Facebook, Cambridge Analytica targeted “persuadables,” people whose minds it thought it could change, with individually tailored fake ads. It also accessed the friend groups of those who unknowingly took “personality” quizzes and gathered their data too. The company essentially engaged in “information warfare” in service of its clients.
Writer and producer Pedro Kos describes the motivation for the film as an attempt to understand what happened with the Brexit vote and the hack of the US Democratic National Committee in 2016. He and director Karim Amer read Guardian journalist Carole Cadwalladr’s story on the connections between the Brexit and Trump campaigns and outcomes, and they wanted to explore how the “same players” were exploiting people’s vulnerabilities across different countries.
Amer came to find that giant tech companies like Facebook and Google, founded with good intentions to connect us all, are not preserving the ideals of the open society to which they owe their success.
Carole Cadwalladr points out that the future of her country, Britain, was actually decided in Silicon Valley, where there’s no accountability for tech platforms that are used to undermine democratic elections. She notes that Facebook CEO Mark Zuckerberg has refused to appear and give evidence at UK and European inquiries.
Carole describes a disturbing encounter with Google search when exploring the concept of “fake news” as the beginning of her investigation into how tech platforms spread disinformation, eventually leading her to the Facebook/Cambridge Analytica story.
Companies that have mined people’s data have power, and they try to use it to shape the regulatory discussion. They promote a pervasive Silicon Valley myth that regulation stifles innovation. But, as John Markoff points out, regulation in the auto industry actually sparked a new wave of safety and efficiency innovation. Microsoft, too, survived its antitrust suit. The suit was touted at the time as a disaster that would destroy the company, yet today Microsoft is a prosperous tech giant.
Another argument against regulation is that tech is too complicated for regulators to understand. But former EU parliament member and cyber policy expert Marietje Schaake argues that tech issues are no more complex than other regulated sectors such as health care. Tech companies can be understood if they are more transparent about their algorithms and what else goes on “under the hood.” She believes regulation must start with the inviolability of democracy and human rights and must be bound by clear principles that promote competition to counter the power of tech companies.
Karim Amer takes issue with yet another tech myth, the Silicon Valley mantra that disruption is inherently good, and he argues that real fines can help hold tech companies accountable.
What’s different about the ads on Facebook? Unlike mass media ads of the past, which appeared on public channels, the political ads people received in the 2016 election were individually tailored from their personal data, seen by no one else, and preserved in no public record. That means we still don’t know exactly what happened. That’s frightening in light of the few sample ads recovered from Cambridge Analytica, which show fear-mongering and explicit racism in Trump’s campaign ads.
Today’s elections are often decided on very slim margins where you only need to change the minds of 1 or 2 percent of voters.
So what do we do?
The film follows two former Cambridge Analytica employees: Chris Wylie, a very bright data scientist, and Brittany Kaiser, who previously worked on human rights issues and on the Obama campaign. Attracted by the opportunity to work at an innovative startup, only over time did they come to fully realize what they had been involved in and the implications of crossing ethical boundaries.
Facebook was the weapon and Cambridge Analytica—“a propaganda machine”—knew how to pull the trigger. Without accountability and data protection laws, our data will continue to be weaponized on an industrial scale.
There’s always a tension between the great things tech can bring us and the unintended consequences. But every individual has agency, and people in Silicon Valley are beginning to ask themselves where they want to put their talents. More and more it’s not at companies that exploit the things that make us human and undermine political systems that foster the best in humanity.