Big Tech's Big Lie

We must regulate AI now

Big Tech companies often claim to be working for the social good, and in some cases are even registered as ‘public benefit corporations’. But this is a mirage of altruism designed to help them avoid regulation and grab power, write Pat de Brún and Damini Satija.


In 2021, major AI player Anthropic registered itself as a “public benefit corporation”. According to the company, “our PBC structure is important in aligning Anthropic’s governance with our public benefit mission”.

It should come as no surprise that some of the biggest names in AI – including the likes of Anthropic and Inflection AI – are flirting with new corporate structures and promoting renewed efforts towards self-regulation. There is currently unprecedented demand for meaningful state-led regulation of Artificial Intelligence. From President Biden’s Executive Order on AI to the recent political agreement over the AI Act in Brussels, governments are waking up to both the threats and the supposed economic opportunities presented by AI – and competing to position themselves as leaders in the space of AI regulation.

In an effort to counter this momentum towards effective state regulation, tech CEOs like Sam Altman and Elon Musk have busied themselves touring parliaments and prime ministers’ offices, positioning themselves as the responsible stewards of emerging tech and the only people who can be trusted to save us from the existential threats posed by the very technologies they wield. They claim to be in favour of regulation, though the unspoken part of these claims is always that they favour regulation on their own terms only. And what profit-seeking corporation wouldn’t want to dictate the rules that might seriously threaten its profitability? These high-level lobbying tours are facilitated by the tech CEO cult of personality which has become so normalized in Silicon Valley and beyond, combined with the inordinate economic power wielded by today’s leading tech corporations – several of which are larger than most national economies.


In addition to this charm offensive, which serves as an effective distraction from the harms already being caused by AI, efforts are underway among tech companies to redefine themselves as inherently good corporate actors whose modus operandi is to advance public benefit. In 2013, Delaware revised its corporate statutes to allow corporations to convert into so-called “public benefit corporations” – corporations which are mandated to act not only in the best interests of their shareholders, but also for the benefit of an identified “social good”. But public benefit corporations must be seen for what they are: just the latest iteration of tech companies’ long-standing efforts to maximize their profits at the expense of our human rights by evading effective state regulation.

This would not be the first time major tech companies have sought to stave off regulation by presenting themselves as inherently good and mission-driven. The architecture of surveillance capitalism – the dominant economic model underpinning the modern internet, and the insidious system linked to so many of our biggest global challenges, from insurrections to teen mental health crises – was erected under our noses, all under the guise of ‘not being evil’. This system, whereby access to “free” online services, from search and email to social media and streaming, is predicated on the harvesting and analysis of our most intimate personal data, has been able to take root in all aspects of our social lives because of the early success of Big Tech corporations in presenting themselves as benign actors operating for social good.

Although such excruciatingly vapid mottos as “don’t be evil” have been largely withdrawn from official marketing campaigns by tech behemoths, companies like Meta and Google still claim to be driven by their desire to do good. The problem is that, ultimately, these companies get to define what “good” is in the absence of objective accountability frameworks. And the past ten years have made it clear that what’s “good” for a Big Tech shareholder is rarely – if ever – what’s best for human rights.

Meta's stated mission is “to give people the power to build community and bring the world closer together.” This is the same company whose own internal research shows that the Facebook platform drives extremism and polarization and fuels the spread of mis- and disinformation. It is the same company whose algorithms fueled the ethnic cleansing of the Rohingya people of Myanmar, with over 700,000 people pushed into refugee camps in Bangladesh as their villages were burned to the ground.

Likewise, Google’s erstwhile “don’t be evil” motto is complemented by its current mission statement of "significantly improving the lives of as many people as possible." This is the same company that has systematically censored pro-democracy voices on YouTube in Vietnam, and which signed a secretive multi-billion-dollar deal to provide advanced technologies to the same Israeli government and military that enacts a system of technology-enabled apartheid over the Palestinian people, and which is currently perpetrating relentless atrocities against the people of Gaza.

This is also not the first time we have seen corporate actors propose and adopt solutions which ostensibly reorient their business goals towards social good but in reality serve as new packaging for what remains, at its core, a profit-oriented model. Corporations have launched multi-million-dollar corporate social responsibility divisions purportedly working to invest in social causes; they ‘pink-wash’ their brands every year for Pride month and revel in providing consumers with more ‘ethical’ options.

In fact, it is practically unheard of nowadays for a major corporation not to have some sort of purported social mission. Yet this era of ‘ethical consumerism’ and ‘corporate social responsibility’ (CSR) only serves to reinforce the embedded logic of (surveillance) capitalism; it acts as a confidence trick that can lull us into a false sense of security about the true impact these corporations are having on our societies and our human rights. When we associate these companies with good intentions, we are less likely to demand that our governments take action to rein in their most harmful practices.

Public benefit corporations take the illusory benefits of CSR to the next level. By giving corporate-defined “social good” the stamp of legal legitimacy, they strengthen the argument against truly meaningful regulation. When presented with demands for effective regulation based on objective accountability standards grounded in human rights law, corporations can point to their public benefit status as proof of their benign orientation. Combined with long-standing (and flimsy) arguments which contend that any government regulation is destined to stifle innovation, PBCs can serve as a powerful shield against public scrutiny of corporate power.


In the case of Big Tech, the root cause of these harms lies in the predominant business models, which are deeply problematic at their core and harmful by design. It is the absence of effective regulation that has enabled these destructive and extractive business models to flourish. In 2019, Amnesty International published a report, ‘Surveillance Giants’, showing that internet platforms which rely on advertising for revenue generation are inherently built on data extraction and exploitation practices amounting to mass corporate surveillance. In other words, unprecedented abuses of our right to privacy are hard-wired into how these corporations operate.

As we witness the rise of a new era of powerful tech companies – namely those developing emerging Artificial Intelligence tools like ChatGPT – it is critical that we set ourselves on a path which ensures genuine public oversight and accountability and prevents these companies from yet again taking the reins and defining the bounds of their own power.

We left the surveillance-based business model unchecked, resulting in the corporate capture of our information and communication infrastructures, whereby our social lives and access to information are entirely mediated by corporate interests and filtered by profit-oriented algorithms – with devastating consequences for the health of our societies. There have been a few notable efforts to make amends for regulatory failures of the past – the EU’s Digital Services Act chief among them – but even the DSA is coming into force in a context where Big Tech firms have amassed so much concentrated power that it is hard to imagine any single regulation fundamentally dislodging their stranglehold over markets and public discourse.

We have already been slow to regulate earlier forms of AI in both the public and private sectors. Facial recognition systems used by police forces disproportionately and mistakenly target black and brown communities; algorithmic amplification of harmful content on social media platforms has the power to sway elections and incite genocide; and governments are able to use discriminatory systems to make life-altering decisions about people – where they go to school, what healthcare they can receive, or their supposed likelihood of committing a crime. All of this has further expanded the power of AI companies, fueling the proliferation of their tools. We cannot repeat the mistakes that have led to today’s reality, in which a handful of corporations and CEOs dictate the solutions and sway governments away from effective and binding regulation – an outcome that only enables these actors to give their profiteering business models a superficial wash of values.

Already, these new-generation AI companies are becoming enmeshed in existing power structures where self-regulation is the modus operandi. OpenAI was co-founded by Sam Altman with the mission to deliver safe Artificial General Intelligence, claiming that this was the inevitable path humanity was on and that OpenAI would therefore take on the responsibility of delivering this technology to us safely and for the benefit of all. Fast forward, and we find that OpenAI is no longer a not-for-profit; it is deeply entwined with one of the most powerful companies in the world, Microsoft, which has invested billions in it, and the flagship system it has created, ChatGPT, is being mainstreamed at a rapid pace without any regulatory measures in place. As the recent Altman firing-and-return saga yet again underlined, whenever there is a conflict between profit-seeking and principle, profit tends to win out in Silicon Valley. And in the absence of effective regulation and public accountability, such outcomes are all but guaranteed.

With AI companies, we are faced with an even deeper problem because of how successful these companies have been at presenting themselves as agents of social good. This mirage of altruism is part of the reason why this generation of tech CEO is being so warmly welcomed into the halls of power and invited to shape the public discourse on the future of AI regulation.

But changing the status quo requires dismantling and diluting the power of those who have it and redistributing it to those who should define how tech develops and what ‘public benefit’ tech should look like. This means making laws that are adopted following regulatory processes that are genuinely democratic, participatory, and which prioritise the perspectives of the communities most marginalized and harmed by tech. Once such laws are passed, regulatory oversight must ensure continued and genuine public participation, and in order for it to be effective, regulatory bodies must be adequately resourced and empowered to counter the financial might of Big Tech.

Fortunately, there are pre-existing frameworks which we can draw upon when seeking to ensure that AI is used for good, and that the risks associated with its use are effectively mitigated. These solutions can be found in the human rights framework. International human rights law is clear that states have a duty to enact effective regulations which can meaningfully protect people from corporate abuses, and AI corporations are no different - regardless of their purported “public benefits”.


Such regulations must include provisions to effectively prevent harms of bias and discrimination and violations of privacy and freedom of thought – that means outright bans on the most harmful deployments of AI, such as remote biometric and mass facial recognition technologies. It also means that high-risk uses of AI must be subject to strict scrutiny and oversight, including robust risk assessments and mandatory human rights impact assessments, conducted by properly funded and staffed regulatory bodies. New deployments of AI systems should be subject to prior approval by competent regulators with technical expertise.

Such measures could effectively stop in its tracks the AI arms race which is playing out among major tech companies and which poses an escalating danger to public safety.


Authors Pat de Brún and Damini Satija have written this article in their personal capacities. The views expressed are their own and do not necessarily represent the official position of Amnesty International.
