The writer is international policy director at Stanford University’s Cyber Policy Center
In a much-scrutinised decision this month, Facebook’s oversight board ruled that the company had been justified in blocking former US president Donald Trump. But the debate about whether this was the right or wrong decision has overshadowed a larger, more pressing issue. Should a private company be the one to decide when speech threatens public order or the integrity of elections?
The digital revolution is here, but we have still not answered the question of who should govern the use of digital technologies. Lawmakers in democratic countries, particularly the US, are currently making it all too easy for companies to set their own rules.
Facebook, for instance, has single-handedly established many data collection norms. And almost all commercial sites employ cookies — small pieces of data used to identify users. These follow people around the web as they browse for groceries, clothes or medicines, allowing companies to target individuals with advertising more accurately. Apple has started allowing users to opt out of this sort of pervasive tracking. This has been praised as an important step towards limiting data hoovering. But it also illustrates how companies are stepping in where regulators have failed to impose and enforce rules of their own.
Take the spyware market. The use of spyware on unsuspecting citizens sits uneasily with democratic standards, such as the protection of press freedom and human rights, but the industry remains largely unregulated. These technologies are often sold as counter-terrorism tools, but they are also used maliciously.
In one illuminating case, WhatsApp and its parent company Facebook filed a complaint in a US federal court alleging that NSO Group, the Israeli mobile surveillance company, exploited a vulnerability in the messaging app to send data-extraction malware to 1,400 mobile devices. Some of these belonged to human rights activists, dissidents, diplomats and officials.
The outcome will probably set a precedent for what limits should apply to the use of these stealth surveillance tools. But constitutional rights to privacy, safety and security erode when private companies — in this case through litigation — set the standards. With innovative technologies, governments are often playing catch-up.
Facial recognition systems are another problem technology. They are banned from government use in a handful of US states but continue to be rolled out elsewhere. One company, New York-based Clearview AI, stores billions of faces scraped from Facebook, YouTube and other internet sites in a massive database. Its instant-identification services are reported to be sold to gyms, casinos and law enforcement agencies.
Then there is the threat to the functionality of government itself. Democratic governments typically outsource their cyber security protection, relying on companies such as SolarWinds and Microsoft. This is often a far from watertight set-up, as last year's hack of SolarWinds illustrated. The ransomware attack on Colonial Pipeline just last week underscores the lack of resilience in US infrastructure.
Regulators can have a first-mover advantage. The EU's General Data Protection Regulation, which limits what data companies can collect and store, gives individuals more rights over their data. Microsoft promptly adopted the GDPR as its global standard, and other countries went on to pass similar laws.
The world can learn from this success. Tech companies do not have a mandate to decide on the parameters of access to information, the safety of journalists or the extent of privacy protections for hundreds of millions of people. Governments should not just respond to incidents but develop rules that put people’s rights, safety and security first.