Let’s make sure AI serves a collective purpose and builds a society that benefits the public good.

Catherine Miller is the Director of the European AI & Society Fund. Previously she worked in tech policy as Director of Policy and then CEO of Doteveryone, the responsible technology think tank, and as a consultant. Prior to that she had a career as an award-winning journalist for the BBC covering news and current affairs from around the world for TV, radio and online.

As the world is increasingly powered by AI, we in civil society need to take an active role in co-creating it in the public interest. We cannot passively accept it without question, as if it were part of the air that we breathe.

Human dignity must sit at the centre of this work. AI technologies use algorithmic systems to predict risk, match compatibility and recommend information across many crucial aspects of our daily lives. Human dignity requires a true sense of our needs, wants and desires to secure our autonomy and our freedom of thought and action. The essential job of civil society is to ensure that AI technologies do not rob us of this dignity by making decisions on our behalf, limiting our access to a wide range of information or influencing our ideas, feelings and opinions.

The goal is not to banish technology. We know the game-changing impact of AI in medicine and science, from breakthroughs in detecting diseases such as cancer and heart disease to superbug-killing antibiotics and novel drug discovery. Its data-crunching capacity helps us interpret big data to screen for disease, tackle climate change and predict future pandemics. This is an awesome power that can be harnessed. But to do so, we, as individuals, communities and societies, must define the type of society we want, and then determine how best to use the opportunities of technological innovation in ways that align with our values. This requires mobilising a broad spectrum of expertise and experience, and a range of communities who represent our diverse societies and our various intersecting, and sometimes competing, interests.

Regulation can improve everyone’s experience with technology. Then people will start to believe that the power balance is shifting.

It is also an act of imagination. As James Bridle describes beautifully in Ways of Being, AI can help us rethink our own idea of what intelligence is. It shines a light on our own limits and on the ways we have failed to recognise many other forms of intelligence across nature. He outlines how AI, used with care and in consideration of the needs of the whole planetary ecosystem, could be an opportunity to shift our practices away from a selfish drive to fulfil only human needs at the cost of the environment that sustains us.

This is not just a technology issue. It is a political issue, a democratic issue, an economic and a social justice issue. As Kate Crawford writes in the Atlas of AI, “Artificial Intelligence is now a player in the shaping of knowledge, communication and power… Artificial Intelligence, in the process of remapping and intervening in the world, is politics by other means.” At present, the world is being remapped by the corporate giants who design and create these technologies and by those – including governments – who deploy them.

This shift in power happens when we defer responsibility and decision-making to automated systems, particularly when we use them to solve problems that we, as humans, have traditionally found hard. Who should we hire? Who should we support with public funds? Who should we allow to live in our country? Faced with these questions, it’s tempting to turn to the apparent objectivity of a machine. But Artificial Intelligence will only draw on data from our past imperfect judgements and systematise it. Amazon discovered this when it used AI to recruit engineers and found that the system automatically eliminated women’s applications, having learned from history that the company’s pool of ‘top’ engineers was mostly men. Even without using gender as a criterion, it would sift out women based on the kinds of sports they participated in or whether they had attended single-sex universities.
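To make that mechanism concrete, here is a minimal, hypothetical sketch (in Python, using scikit-learn and entirely synthetic data, not Amazon’s actual system) of how a model trained on biased historical decisions can learn to penalise a proxy feature even when gender itself is excluded from its inputs.

```python
# Hypothetical illustration: a model trained on biased historical hiring
# decisions penalises a proxy feature (e.g. single-sex education) even
# though gender is never given to it. All data here is synthetic.
import random
from sklearn.linear_model import LogisticRegression

random.seed(0)

def make_applicant():
    is_woman = random.random() < 0.5
    # Proxy feature correlated with gender.
    proxy = 1 if (is_woman and random.random() < 0.7) else 0
    skill = random.gauss(0, 1)
    # Historical "hired" label reflects past bias against women, not just skill.
    hired = 1 if (skill - 1.5 * is_woman + random.gauss(0, 0.5)) > 0 else 0
    return [proxy, skill], hired

data = [make_applicant() for _ in range(5000)]
X = [features for features, _ in data]
y = [label for _, label in data]

# Gender is NOT among the inputs, yet the model inherits the bias via the proxy.
model = LogisticRegression().fit(X, y)
print("weight on proxy feature:", round(model.coef_[0][0], 2))  # negative
print("weight on skill:", round(model.coef_[0][1], 2))          # positive
```

The point of the sketch is simply that removing the protected attribute does not remove the bias; the model rediscovers it through whatever correlates with it in the historical record.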

This is not just a technology issue. It is a political issue, a democratic issue, an economic and a social justice issue.

This kind of issue is not isolated. Algorithms deployed by the Austrian unemployment system gave women less support to get jobs because women have historically been less likely to secure stable work. Facial recognition software used by the University of Amsterdam to prevent student fraud has been found not to recognise Black students, depriving them of access to education. These are just some of the ways that the use of AI is already concretely affecting people’s lives. Thanks to the work of civil society, they are being investigated, exposed and challenged through existing data protection and human rights legislation. At the same time, digital rights and social justice organisations are working to ensure that the new regulation for AI being developed in Europe will secure fundamental rights and dignity.

This is a phenomenal achievement. When I started working in this field six years ago, very few people believed that it was desirable, or even possible, to regulate digital technologies. Now policymakers across the world are working to create new rules that will hold tech companies to account. The European Union is leading the way, with two substantial pieces of legislation to regulate platforms, the Digital Markets Act and the Digital Services Act, already on the books, and the first attempt to comprehensively regulate Artificial Intelligence in the final stages of negotiation. Industry lobbyists have pushed back hard, but despite being massively out-gunned, civil society organisations have built a powerful coalition across the fields of digital rights and social justice to secure tangible measures that will protect the public and promote our human dignity.

These are great victories, but they must translate into action. We need to show that regulation can improve everyone’s experience with technology. Then people will start to believe, and to feel, that the power balance is shifting; that they do have a right to hold big tech, governments and corporations to account and to take back power; and that there is a form of redress and accountability.

And this is at the heart of what will make a difference and put human dignity front and centre of technology. AI has the potential to transform our lives. Let’s make sure it serves a collective purpose and builds a society that benefits the public good.

Catherine Miller, European AI & Society Fund