AI Ethics for Real Scenarios: Leave Sci-Fi for the Movies

10 Feb 2021

“Our democracies are vulnerable and under threat; if we want to maintain and realize our ethical and political values, we need to take a close look at the political role of digital technologies – now, not in the future,” asserts philosopher of technology Mark Coeckelbergh.

Coeckelbergh talked to STI about his recent book AI Ethics (MIT Press) and his new work on the political philosophy of technology.

STI: In your new book AI Ethics, you seem to reject science fiction scenarios for thinking about AI. Why?
MC: I think science fiction can be a great starting point for philosophers to reflect on artificial intelligence and the future of humanity, but the discussion of the ethical aspects of AI is often dominated by far-future scenarios that distract from present and urgent ethical issues such as surveillance, threats to privacy, problems with responsibility and accountability, and bias. In the book, I argue that we should distance ourselves from Frankenstein-like narratives and transhumanist fantasies about superintelligences, and deal with these issues by means of ethical reflection and effective policy frameworks.

STI: One theme you bring up in your book is responsibility, a recurring concern in your work. Why is this such an important issue today?
MC: AI is part of a range of automation technologies, and propelled by machine learning, these technologies are developing fast. Think of self-driving cars, programs that can write texts or create fake videos, and autonomous military technology such as drones. Apart from the many other challenges these new technologies raise, one issue stands out: who is responsible for what these artificial agents do, given that we have delegated these tasks to them?

STI: And what is your answer?
MC: In my view, which is in line with many recommendations of national advisory councils such as the European Commission’s High-Level Expert Group on AI and the Austrian Council on Robotics and Artificial Intelligence, of which I am a member, responsibility should always rest with humans. Yet that’s the easy part of the answer. In my book I also ask: which humans? Technological action typically involves many people, and software creation, for example, also spans a long period of time. This makes it difficult to ascribe and distribute responsibility.

STI: In your recent article on the topic, you also talk about “relational” responsibility. What do you mean?
MC: That’s right. We need to ask not only who is responsible but also to whom we are responsible. I defend a relational conception of responsibility, which holds that what matters for responsibility goes beyond the existence of an agent who knows what he or she is doing, to include what I call “responsibility patients”: the people who are impacted by our actions and to whom we owe an explanation for what we do. Exercising that responsibility becomes a problem when the decision is delegated to a machine. How can you explain to someone that he is sentenced to imprisonment or has been denied a loan? By saying that the machine decided? These examples are not science fiction; this already happens in the US, where AI is used to recommend such decisions.

STI: Another topic you write on is climate change. What does that have to do with AI?
MC: For me it’s important that we talk about global problems when it comes to AI – and preferably find global solutions. For example, in a recent opinion piece, I argued for global governance with regard to the COVID-19 crisis. There is so little coordination, even within the European Union, and I strongly believe that global problems need global solutions. Climate change – people often talk about a climate crisis or a climate emergency – is of course one of those global challenges. In the last chapter of the book, I criticize the space craze among tech billionaires – like Elon Musk’s plans to colonize Mars and other such schemes that come straight out of a transhumanist playbook. I too dream of going to space. Yet in this case we’re talking about huge investments in technologies that do little to solve the problems we have on earth – problems which also differ for people in the affluent West/North and people in the Global South. I noted this when I addressed an audience at UNESCO a few years ago.

STI: Can AI help solve those problems on earth?
MC: Sure, AI can help. AI and data science can help us tackle climate change, for example by predicting weather events and by helping us manage energy consumption more efficiently. There are opportunities. However, in a recent article I also warn that AI creates new problems – including some for the climate and the environment. Some types of machine learning and data centers require a lot of energy, and like many risks and problems, vulnerability to climate change is not distributed equally across the globe: people who live in areas with regular flooding will suffer more from climate change than others. This is a problem of justice. We also need to consider how technologies that may nudge us into greener lifestyles could threaten human autonomy: it is possible to influence people subconsciously, but doing so treats them as things rather than as human beings. I would prefer a world in which we can have a reasonable discussion about how we want to deal with that future, rather than one where people are treated like lab rats.

STI: What are you working on currently? What are your plans?
MC: I’m now working on what I call a “political philosophy” of technology, in particular of AI and robotics. As I explained last year in another interview with STI, I’m very concerned that we are throwing some of our political values out of the window in order to cope with our current crises. There, I focused on restrictions of freedom due to COVID-19, but there are other examples. In the US, freedoms are also restricted in the name of protecting the public sphere from misinformation and hate speech. I’m not saying that this is bad – not at all, and it is well intentioned. Yet we should acknowledge that a political value – freedom – is damaged in the process. I am not a big fan of censorship, and in a liberal democracy, nobody should be. Unfortunately, there are still too many examples of authoritarianism around us.

STI: Are you mainly interested in problems about freedom?
MC: Not only, though my next book, which will come out later this year, does concern that topic. I believe climate change and AI really challenge the foundations of our political value systems, and in the book I try to develop a conception of freedom that should help make them more fit for the digital age and the age of climate change – one that is not authoritarian, of course, but also not libertarian. There are also other political issues at stake: injustice, inequalities, power differences, and the erosion of democracy. These are partly due to the rise of populism and extremist politics, but digital technologies such as AI also have a hand in this, through their unintended consequences. For example, they can amplify existing biases in society and help spread misinformation, thus polluting the public sphere in a way that makes it difficult to have a reasonable and inclusive debate. Our democracies are vulnerable and under threat; if we want to maintain and realize our ethical and political values, we need to take a close look at the political role of digital technologies – now, not in the future.

STI: Looking forward to your next book, and thanks for your time!
MC: It was a pleasure!