How Does A.I. Influence our Identity/Sense of Self?
In his dialogue Protagoras, Plato says that the wisdom of Thales and the other Seven Sages was reflected in the brief but memorable remarks they each uttered when they met. In this spirit we put together a list of thought-provoking and insightful quotes from our panel debates at Thales Day (alphabetically organized):
“An engineer is not necessarily an expert on ethics or good interactions between humans and machines. Therefore it is important that there are philosophers, psychologists and others working in this field...”
- Thomas Bolander (A.I. Engineer)
“The problem with A.I. is that it is a very general concept and it develops so fast. This leads us to wonder what to do and to this story of disruption, a story of exponential growth, and how things develop so quickly that suddenly it is too late. It is a very dangerous story, in my opinion, because people are too quick to accept it and say we have to act quickly. I think the right thing to do is to do things slowly, and to regulate it and make sure that we do not become slaves to the technology. After all, it is the technology that is supposed to help us and not the other way around.”
- Thomas Bolander (A.I. Engineer)
“It is actually fair enough that we have this dystopian fear, because there is something dangerous about it. However, it is not dangerous because the machines themselves run wild or decide to take over the world, but because we humans are a bit too naive, a bit too fast, and don't demand quite enough from these algorithms, and then things can spiral out of control.”
- Thomas Bolander (A.I. Engineer)
“A.I. literature and movies always seem to tell us to fear ourselves. That is their main message. Take for instance Mary Shelley's story of Frankenstein… the monster is portrayed as a creature we should understand and have feelings for, whereas the mad scientist Frankenstein is the one we should fear.”
- Per Juul Carlsen (Film Critic)
“...what is very interesting in many of these (A.I.) stories is how the human being is imperfect. We create an A.I. that we wish to make smarter than ourselves, and this A.I. also feels insecure and therefore wants to exceed its master. In this way there is a dizzying competition over who is most important: God, human beings, or the A.I.”
- Per Juul Carlsen (Film Critic)
“...at one point she felt like it (SIRI) was alive, because when she said certain words it instantly responded to her... When listening in we all got the feeling that it was a living entity. We projected emotions onto this silly little phone, and I think with respect to emotions and A.I., it is not just about what we can put into these machines. It has as much to do with what we can project onto them, because we have emotions and imagination, and because we function the way we do.”
- Per Juul Carlsen (Film Critic)
"...'self-driving cars' is the way language fools us. Self-driving cars are not self-driving. They are machines within an infrastructure that supports their self-driving capability…Autonomous technologies do not exist. In order to have self-driving cars we must change the infrastructure, the rules and the laws... In order to be intelligent we must take into consideration how these technologies might actually develop. We must consider what we want and what we don't want. We want a lot of it, I think, because it is really smart and it will make many things easier and better. However there is also a lot we would like to avoid before its negative consequences become reality.”
- Cathrine Hasse (Cultural Analyst)
“I think it is worth mentioning that at one point a professor of economics analyzed the econometric models of prediction. He found that they are about 90 percent wrong, but that has not stopped anyone from using them, right? And it is possible that this will also be the destiny of A.I.: now that we have it, and have invested so much in it, we will pretend that it has something sensible to say.”
- Cathrine Hasse (Cultural Analyst)
“With regard to technologies, there are three places they always impact first: the economy, where they make the powerful more powerful… sex, and that is how it has always been… and war... If democracy follows humanistic ideals, the technologies will also impact handicapped people, elder care and so on, but only if society has a certain idea of what it means to be human... So A.I. is perhaps just a way of saying that we should all pull ourselves together and consider how technologies can potentially develop and be used, and for what purpose.”
- Ole Fogh Kirkeby (Philosopher)
“... there is a certain ethical logic to much technological development that says we should no longer think in terms of sustainability but in terms of resilience, because sustainability is no longer possible. There is no longer a balance between nature and society. The Chinese seem to have realized these things, and they have a philosophical background that enables this, that of Confucius, Mencius, and Laozi. They have a very fine ethical tradition if they want to make use of it, and it seems that they are beginning to.”
- Ole Fogh Kirkeby (Philosopher)
“There might be a big confrontation in the future between those who are A.I. literate -- who can interpret these systems and understand the language of programming, and the logic and mathematics behind it -- and those who cannot...”
- Ole Fogh Kirkeby (Philososopher)
“Immanuel Kant once said something to the effect of: ‘The person who can use his reason is able to think critically, put himself in anyone's shoes, and be in tune with himself.’ Thankfully we have not discussed these things with respect to computers, apart from putting yourself in someone else's place. However, if that is what characterizes a human being, we are in trouble, because I don't think people are very good at that.”
- Ole Fogh Kirkeby (Philosopher)