Issue 1/2021 - Net section


The Price of Freedom

Interview with Mark Coeckelbergh about the Ethics of Artificial Intelligence

Christian Höller


The philosopher Mark Coeckelbergh has long been concerned with the development of intelligent machines and their effects on concepts of humanity, societal transformation, and the ideology of the trans- and posthuman. His recent book AI Ethics (MIT Press, 2020) provides a survey of the most pressing moral questions opened up by these developments. Should we simply enjoy the new liberties generated by AI as a future without alternative? Should we mistrust claims of a non-human intelligence on principle? Where does selflessness end with respect to the machinic “other,” and where should deliberations about a “trustworthy” AI begin? Questions like these are tackled by Coeckelbergh in the following interview.

Christian Höller: The dominant discourse on Artificial Intelligence is very much governed by two extreme positions: fears of a soon-to-come “superintelligence” or “technological singularity” at one pole, and utopian visions of an “earthly Elysium” (where all the nasty work and other worldly or material tasks are taken care of by machines) at the other. How do you think a more realistic but still critical approach towards AI should proceed with respect to these two dominant narratives?

Mark Coeckelbergh: There is a largely transhumanist narrative about the far future, where there will be transformed humans or where machines take over, a time in which we colonize other planets and travel through the universe. In a way, then, it would not really matter that we are no longer humans or are replaced by super-intelligent machines. Personally, I don’t really think that there will ever be machines that can be human-like or supersede humans by means of general AI. I do think, though, that we will have very good AI for specific tasks. There are already impressive results today in speech recognition or in playing certain games. But I’m skeptical that general intelligence will be possible. Also, I think it’s dangerous to focus on such abstract and far-future scenarios because this neglects the problems we have on this earth, problems that have to do with politics and complex social and ecological issues.
The other vision, of machines taking over our work, is that we would not have to work anymore and would be left with nothing but leisure time. The idea that technological developments would lead to a leisure society is actually quite old by now, but it never really happened. What does happen is that technologies in the context of capitalism are used for capitalist purposes. Rather than everybody having leisure time, there is a gap between people who work under severe or precarious conditions in digital environments, constantly monitored and so on, and people who lose their jobs or have no work at all. This narrative of a leisure society is not materializing either, and it again distracts from the real social and political problems raised by AI.

Höller: AI has become extremely pervasive in a lot of everyday technologies, from searching the internet to making sales recommendations, from self-driving cars (although there are still not that many of them) to predictive policing. What makes it so urgent that certain “ethical codes” are developed with respect to all these areas, when you could argue – superficially at least – that these are all techniques whose ultimate goal is to make life easier in lots of ways? That they are for the common good, so to speak.

Coeckelbergh: That’s the standard reason provided by the tech companies: that they want to make life easier and just want to contribute to the good of humanity. It is partly true that things do get easier in that we no longer have to do everything ourselves. But there are certain consequences of delegating tasks to machines. These consequences have to do with ethical problems like issues of privacy or questions of responsibility. Also, we could wonder how much easier life really becomes with the heavy use of electronic equipment and technology, which obviously has psychological and physical effects. E-mail, for instance, makes a lot of things easier, but as with every technology, the whole system starts to change with it, and suddenly you are overwhelmed and find yourself in a new, almost Tayloristic environment in which e-mails pass by as if you were working on a conveyor belt. So it really depends on how the whole system is reconfigured by technology.

Höller: A lot, if not to say almost all, of the relevant AI research is going on in closed-off labs run by private companies. Then, most of the time, particular applications are released to a public that does not have the faintest idea how exactly the mechanisms behind them work. Should companies be forced to make public the exact protocols of how their algorithms are constructed? And if so, wouldn’t this be a complete overload for most technology users?

Coeckelbergh: There is definitely a gap in terms of knowledge, which is also a gap in terms of power. I do think that we need a transfer of knowledge from the companies to the public, but I do not think that this should only be a transfer of technical material. What we rather need is general knowledge about how the technology works, without looking at the code, and knowledge of what happens behind the screens. The main problem is that a lot of the workings of AI and data science are invisible to us: they take our data, do things with them, sell them and so on. It should be an obligation of the companies to give us information about what they are doing with our data; we do not need specific technical details.

Höller: Your approach towards an “AI Ethics” is very cautious and deliberative, especially when it comes to the spectre of a threatening AI apocalypse, but also with respect to the relationship between humans and machines in general, on which a lot of the existing AI discourse seems to dwell. To what extent should the standard story of a fundamentally competitive relation between humans and machines be revised in order to create a better (and also ethical) understanding of AI?

Coeckelbergh: On the one hand, to see humans and machines as completely different isn’t very helpful because it doesn’t enable us to see that technology is actually human-made and that we are in a position to change it. Also, we should see that the social and political system connects to technology in complex ways. Technology is human and humans are technological in the sense that we have always used technologies – to that extent, I go along with the posthumanist approach. On the other hand, what needs to be corrected is the misleading and all-too-optimistic view that humans are machines, that both will form a symbiosis and live happily ever after. There are definitely power tensions and imbalances that are created by the introduction of automation and AI. These problems need to be addressed, and fairytales about the posthuman technological “other” only distract from acknowledging them.

Höller: Quite a lot within the approach of “AI Ethics” appears to hinge on the problem of what kind of moral status we should assign to machines. In terms of moral agency and moral patiency, aren’t we automatically affirming a certain anthropocentrism when it comes to assigning such status?

Coeckelbergh: If we pose the question like that, we are assuming somewhat human-like machines and robots. Of course, we do have certain limitations as human beings when it comes to thinking about other beings. An area that could help here is environmental philosophy and animal ethics, where we try to think about other beings as not necessarily human. But there is always a danger that we do violence to the other by categorizing it and ascribing certain properties to it. I therefore propose a relational approach to moral status that includes a precautionary principle, which says that we cannot be sure about the moral status of the other and that therefore we should not try to fix it. We should draw no immediate conclusions here, because in the past we have made huge mistakes, for example concerning the moral status of slaves and animals. We luckily have different ideas about that today.

Höller: A lot of current AI is based on machine learning (or deep learning), usually implemented on neural networks. Even the engineers who set up the starting conditions of such networks, choose the training data sets and so on, admit that they sometimes have no clue how the mechanism generates a particular solution.[1] It seems that the human mind, and by extension human moral agency, isn’t of any particular help here as a measure or model of how the algorithm works. What do you think?

Coeckelbergh: We can have some understanding of how the technology works in general, but the problem with so-called black boxes is that a decision cannot be traced back to the mechanism that the machine actually uses. This is of course a problem when the technology is used to judge people, or when judges use such a system in court. The problem is not only that machines can make errors, which has already happened,[2] but the overall intransparency of how these decisions are made. This is also a problem of responsibility, because being responsible not only means being responsible as an agent but also being responsible towards someone, towards a patient. So, again, a relational approach is needed here. If I am a judge and use AI to determine whether someone has to go to prison, I have the duty to be able to explain the decision.

Höller: Transparency obviously has its limits, not just with respect to the right to pursue private research interests but also, and maybe even more dramatically, as regards the technical nature of algorithmic decision-making. In what ways could “explainability,” which is a huge and often-made demand vis-à-vis AI systems, start to mitigate the kind of obscurity in which these systems operate? On what levels should this demand purposefully be articulated in order to guarantee a more trustworthy AI?

Coeckelbergh: First, technical measures can be taken to make the technology more transparent. But it’s also important to have regulatory and legal frameworks in which the humans who deploy the technology are obliged to explain how it actually works and to justify their decisions. This needs to be addressed on the level of policy, which also relates to general policy challenges, since nowadays big tech companies make more and more decisions for the general public and can even censor certain people. All these decisions are made without us, and it is not clear on what criteria the decisions of the censors and content moderators are based. Even in traditional media like newspapers, AI content moderation is being used, and it’s not clear what criteria are actually applied.

Höller: In 2017, there was the famous case of two chatbots developed by Facebook that supposedly started to communicate in a newly created artificial language of their own. The engineers saw no other option than to unplug them.[3] Do you think they were right?

Coeckelbergh: It’s interesting that they developed their own language, which I think should be an object of study in itself. I don’t think the chatbots should have been unplugged, and we should also not be scared of an emerging superintelligence here. It rather needs to be studied how this was possible and whether there is a way to develop machine creativity and machine intelligence in a way that can help us – not as a step towards general AI but for specific applications, for example in the medical and creative sectors.

Höller: Let’s move on to the issue of bias: When it comes to the gender or racial bias that in a lot of instances is encoded into AI systems, the more general problem is, as you state in your book, a very fundamental political question: Should we want the technology to mirror the actual state of the world (with all its different layers of bias and discrimination), or should we rather want it to work towards a better future (trying to overcome these unfortunate states of affairs)? What burden does this place on technology, which of course is never socially or politically neutral but which, at the same time, cannot be expected to fix all existing social evils or be made responsible for them?

Coeckelbergh: I think we need to find a balance between the two aspects. A lot of effort by people working in AI ethics goes into convincing tech people that technology is never neutral and that there is always a link with society. Of course, it is important that technology does fix what it can and contributes to a better world. But we should not put all the burden on the engineers or designers of the technology. There are also managers, companies, administrators. I think a holistic approach is important here, and we should coordinate our efforts in different sectors and at different levels to make things better.

Höller: A desperately needed fix concerns the larger global topic of the climate catastrophe. Speculative proposals have been aired towards a possible benign “nanny AI” or a globally operating “eco-AI,”[4] which would probably be better equipped and more effective than humans at stopping the consequences of global warming. Even if these are largely utopian proposals, what would, in principle, count against such a top-down but potentially more promising endeavor?

Coeckelbergh: It’s tempting to say that an AI could help us with these complex ecological problems. But a top-down approach is very dangerous and points towards authoritarianism. I think we do need AI to gain more knowledge about the process of climate change; it could also be used for the coordination of policy. But I see a lot of danger in the possible manipulation of people, even if the manipulation is for good purposes. This is definitely not the kind of political system that we want. We need a more democratic way of dealing with these matters. On the other hand, it can’t be the case that scientific expertise and technology play no role at all – indeed, we have to think about how we can combine democracy with expertise and AI. I think we need both. We can see the danger of a purely one-sided utilitarian way of thinking in the film “I, Robot,” where the AI does things for the benefit of humanity and, for instance, makes people disappear.

Höller: Would trying to solve the current pandemic crisis through such a top-down approach not work either, or would it be equally dangerous?

Coeckelbergh: I think it would work, but we would have to pay an enormous price – the price of giving up freedom. That is why my new book is about the question of what kind of freedom we should strive for, which is not an easy matter.

Höller: Towards the end of your book, one of the conclusions you draw is to “find the right balance between technocracy and participatory democracy.”[5] Isn’t this a very “human-centric” suggestion, especially when you think of the posthumanist demand to also include “non-human” forms of intelligence and agency in one’s overall way of thinking? Or to put it differently: Shouldn’t the goal (in order to save the planet) be to find a sustainable mode of coexistence between human and non-human forms, including machine intelligence?

Coeckelbergh: Absolutely, coexistence and relationality are key here, and we should consider our politics towards non-humans. So instead of wanting to go to another planet, I think we should deal with this planet and its problems. The aim is to find a different way of relating to other entities and also to the natural environment. If we don’t go this way, we will only increase the hyper-power and agency that have been dominant in the Anthropocene. By wanting to become even more the master of the planet through geo-engineering, we will actually mess things up even more and make the problem bigger. We should rather reflect on ourselves and ask what kind of attitude we should develop towards other human beings, towards non-humans, towards the environment that supports us and that we depend on. Seeing ourselves as vulnerable, dependent, mortal beings is important for creating an ethics and politics that can make us survive and flourish together with others.


[1] Cf. Will Knight, The Dark Secret at the Heart of AI, April 11, 2017; https://www.technologyreview.com/2017/04/11/5113/the-dark-secret-at-the-heart-of-ai/.
[2] Cf. Kashmir Hill, Wrongfully Accused by an Algorithm, June 24, 2020; https://www.nytimes.com/2020/06/24/technology/facial-recognition-arrest.html and Kashmir Hill, Another Arrest, and Jail Time, Due to a Bad Facial Recognition Match, December 29, 2020; https://www.nytimes.com/2020/12/29/technology/facial-recognition-misidentify-jail.html.
[3] Cf. Andrew Griffin, Facebook’s artificial intelligence robots shut down after they start talking to each other in their own language, July 31, 2017; https://www.independent.co.uk/life-style/facebook-artificial-intelligence-ai-chatbot-new-language-research-openai-google-a7869706.html.
[4] Cf. Roberto Simanowski, The Death Algorithm and Other Digital Dilemmas. Cambridge/London 2018, p. 144ff.
[5] Mark Coeckelbergh, AI Ethics. Cambridge 2020, p. 177.