Interview with Dr. Emmanuel R. Goffi on Law, Ethics and Artificial Intelligence

March 29, 2021

Dr Emmanuel R. Goffi is a philosopher of technology. He is the Co-Director and Co-Founder of the Global AI Ethics Institute in Paris. He is also a research associate with both the Big Data Lab at the Goethe Universität in Frankfurt, Germany, and the Centre for Defence and Security Studies in Winnipeg, Canada.

He served in the French Air Force for 27 years. His academic career spans France, Canada, and Germany, where he has been teaching and doing research in international relations and ethics for 15 years. He is still regularly invited to teach and give lectures all around the world.

Emmanuel holds a PhD in Political Science from Sciences Po-CERI.


1. Can the Internet be considered a no-man’s-land? If not, what limits it?

It definitely seems that the Internet is a normative vacuum where anyone can state almost anything without being held accountable. The power of the Internet has become so great that it is now used widely.

2. What regulates the behavior on the internet? Laws? Ethics? Cyberspace rules? Is ethics on the internet guided by the same rules as in the non-digital world?

Nothing really regulates behaviors on the Internet. Obviously, you have legal instruments that have been set up in some countries. Yet, it seems that their effectiveness is still to be proved. Besides, given that regulations are uneven at the international level, any regulation in a given country would be challenged by actors in other, less-regulated places. The issue with regulations is that they must go along with formal sanctions, which disqualifies ethics, and consequently with resources both to monitor the Internet and to sue those who violate rules. It is time-consuming, and it has a cost that many countries are not ready to assume.

As far as ethics is concerned, behind digital behaviors hidden individuals are at work. So their ethics is basically the same as in the non-digital world. For some actors it is clear that being hidden helps them behave in a way they would not dare elsewhere. Being hidden gives you a feeling of impunity. Behaving through keyboards and screens creates a moral buffer that gives us much more freedom to express our dark sides.

Nonetheless, until there are fully autonomous artificial intelligence systems that make decisions by themselves, those human beings acting behind the veil of computers do apply the same moral rules as all of us. The difference is that our choices in terms of how to act are less constrained, so we can easily free ourselves from our social inhibitions and express our darkest side.

3. In general, free-of-charge online services (search engines and social media, for example) commercialize personalized ads based on users’ online experience. Is it fair and ethical to target users with customized offers in exchange for using a service?

That is a tricky question. Basically, my point would be to stress that ethical doesn’t mean good. The two words are not synonymous. So strictly speaking everything is ethical in the philosophical sense, since ethics is the appraisal of what is bad and good. The real question is, is it ethically acceptable? Then the answer depends on who you are asking. For those who benefit from these practices, they are perfectly acceptable. They are huge sources of revenue, they provide some people with jobs and salaries, and those people even consider that their services help consumers make choices in a faster and better way. For those who disagree, it is obviously not acceptable. The issue here is that most of the time there is a huge hypocrisy among those who complain about these kinds of practices. Indeed, we all offer tons of data without even wondering how they will be used. We like to flaunt ourselves on social networks, providing others with data that are very personal and that can be used in a harmful way.

Most of the time we forget that as users we are as responsible for what is happening on the net as those we are lecturing or condemning for their misbehavior. It is very human and very comfortable to put the blame on others without questioning one’s own behavior.

4. How can one deal ethically with users’ privacy and data storage issues?

That is impossible. First of all because we are addressing the issue in a biased way. This issue is mainly a Western issue, to which we try to offer a Western solution. Privacy does not mean the same thing if you approach it through, say, a Buddhist perspective. Privacy is related to the reification of the individual, putting the self ahead of the group. In the Buddhist tradition the Self does not exist, so individuals are seen as cogs in a wider ecosystem. They are part of a society towards which they are accountable. So, their privacy is limited by their accountability towards others. This relational ethics can also be found in other wisdoms and traditions such as Ubuntu in Africa and Hinduism. What that means is that in some places privacy is either irrelevant or very different from our Western conception.

Consequently, when it comes to addressing the question, you have the choice between accepting that, due to ethical particularism, solutions will be local, or falling into moral absolutism and imposing solutions on other cultures. At the end of the day, if you go this way, you can be sure that the rules will not be applied.

5. What is your opinion about selling users’ data for merchandising?

I do not have any issue with that. As I mentioned earlier, we are all grown-ups, responsible for what we leave on the Internet in terms of personal data. So, if you give personal information to websites and social networks, you cannot complain that some people will merchandise it. I do believe that we are now very well informed about the risks associated with personal data, so we cannot pretend we are mere victims.

Once again it is a matter of individual responsibility. I fear we are now too lazy to think. We live in a hedonistic society where our comfort surpasses any other consideration. Anything that requires effort is rejected, and we quickly fall back on easy options.

If we were less lazy, we would first ask ourselves what kind of society we want to live in and want to hand over to future generations. Then we might discover that the quest for happiness is relational and not individual. We would discover that happiness goes hand in hand with some suffering, and that a life totally cleared of difficulties cannot be satisfying. This requires intellectual efforts we are no longer ready to make.

The paradox is that we see that some things are not acceptable. But it is far more comfortable to give up our free will to others who will make decisions for us than to decide for ourselves.

At the end of the day, we accept being instrumentalized and used for financial and political purposes.

Here there is a strong need for revitalizing philosophical debates on what we are as human beings and what we ultimately want.

6. Considering that fake news is present in everyday life, can it serve as an example of unethical behavior?

Fake news is nothing new. What is new is the speed and ease with which it spreads. Going back to my previous comment, it is also important to keep in mind that unethical does not mean anything here. Some might find fake news ethically unacceptable; others will see it as perfectly justifiable. No one holds any absolute truth.

So fake news is like lies, or the deception strategies used in the military and political realms. They are tools for specific ends.

If you want to assess their ethical acceptability, you have to focus on specific cases in specific situations, in specific environments. Any universal standpoint would lead to moral absolutism.

Here again there is some hypocrisy in saying that fake news is “unethical,” for if we were less intellectually lazy, we would look for better and more reliable information, using and comparing different sources to check whether a piece of information is true or not. So, we are condemning something that exists only because we do not want to spend time double-checking information.

7. What is your opinion about creating AI algorithms free of “human teaching”? Considering some cases of racism reported by the media, should the AI creator be held responsible for an act contrary to ethics?

I do not see how, at least today, we could build this kind of algorithm. Algorithms work on data, and data are provided by humans, so algorithms are taught by humans. It will remain so long after humankind has disappeared from the surface of the globe.

Regarding racism, I do believe that this needs to be contextualized. First, racism is an ill-defined word often used to label people without checking whether they are actually racist. Racism speaks to everybody, but not everybody is able to define it.

So, where there are real cases of racism, obviously some people must be held responsible depending on the situation. But as in any democratic judicial system, responsibility cannot be established a priori. It will be established on a case-by-case basis through a thorough investigation that will determine who was knowledgeable in the matter; to what extent; whether things were done purposely or not; and whether they could have been done differently. I think it is a bit too easy, and ethically unacceptable, to point someone out as a culprit beforehand.

8. Do you think we will achieve an AI equivalent to the human being, at some point (complete-AI)? Does mankind need this?

I do believe we will reach the point where we have intelligent machines that look like human beings and have the same abilities. A lot of work and money is invested in research going that way. If you look at the past two decades, you will see that we have progressed in a concerning way towards some kind of post-humanism.

Obviously, the question remains whether we are moving towards human-like machines or towards a new kind of sentient being made of either flesh and technology, or technology alone. My feeling is that we are already at a point where we are all exocyborgs, augmented by cars, glasses, cell phones, and other technological tools.

We see two trends: on the one hand, more and more augmented human beings; on the other hand, more and more intelligent machines. This will inevitably lead to something new that will not be a change of paradigm but the mere continuation of life through other means. Machines will not “replace” human beings; they are the next step of our evolution.

9. How can AI be exploited to benefit humanity, in general?

There is no way AI could benefit humankind in general. At best it can benefit the greatest number. Yet, AI will benefit humankind only if it does not play the role of a Trojan horse for Western interests, and if the debate on this transformative technology opens up to different voices in the non-Western world.

AI can be both beneficial and dangerous. What we need is a debate, a real one, where people listen to each other without rejecting any option. We also need to question our Western tropism of thinking we hold the truth. Consequently, we need to open the debate to new philosophical perspectives on AI. So far, we have been addressing the topic through a very limited Western perspective made up of a highly superficial understanding of ethics, mostly used as a communication tool. This cosm-ethics, namely the reassuring discourse built on ethics, is problematic since it kills reflection. But at the same time, it is the result of our intellectual laziness, which allows others to shape the discourse and consequently our perceptions and behaviours.