Artificial intelligence (AI) has been one of the most debated topics of the last few months, and with good reason. To some, AI is the answer to all our problems; for others, it is a threat to our very existence as humans. Its development raises fundamental questions that are anything but ethically neutral. Here are some of those questions:

How do we transition from a material society to a digitized society? What will the impacts of artificial intelligence be on society? What values will such a world embrace? What will our lives be like?

It is impossible to answer such questions without elevating the debate to the level of the existential questions that philosophers have been asking throughout history.

What is life? What sort of humankind do we want?

Science advocates claim that scientific development cannot be prevented and that, at any rate, science is neutral, a claim that, while not entirely false, is not exactly right either. While science itself may be neutral, scientific development is not. Data is more than just data: consider the recent Facebook and Cambridge Analytica scandal, whose consequences have yet to be fully assessed.

The development of AI should therefore not take place without reference to the topic of the future of life itself. What life could be. What life should be. What we would like it to be.

However, even before such questions can be asked, we believe that the greatest danger faced by humankind is a short-term threat, and not the development of artificial intelligence in itself. In light of the significance of the questions above, the greatest danger is that humans may turn away from critical thinking: that, at this point in history, they stop asking questions about themselves and become concerned with little more than their own personal and immediate comfort.

Over the past couple of decades, it has become apparent that humans have gradually given up thinking, replacing reflection with fun and entertainment. In a word, what humans want now is to have a good time. The Homo festivus announced by Philippe Muray has now reached a stage in his history where celebrating and being entertained matter more than anything else. Nowadays, one can easily see, in the media and in society as a whole, that thinking for oneself is not valued, and that ready-made comments and opinions are preferred. Basically, thinking takes too much time for a society that prefers instant gratification.

Thus, since people would rather play than reflect, they adopt free applications on impulse, simply because they want them, without grasping the real, hidden cost of "free." And this is precisely where the danger lies: beyond the need to understand that nothing is free, how can we answer the existential questions faced by humankind when everything we do must be fun and instantaneous? Answering existential questions takes time and sustained reflection. Life is more than a game that can be restarted at any moment.

AI, at this time, is a formidable computing machine. It will always compute faster than humans: an algorithm will always be able to beat the world's top chess player. On the other hand, one must also realize that algorithms feel nothing and never will; they will never experience pride or satisfaction at beating a human at chess, just as they would feel no guilt at having exterminated a population. Algorithms have no will or feelings of their own. They cannot appreciate art or experience happiness. Algorithms do not think; humans remain the ones responsible for feeling and thinking, the two acts that are the distinctive features of human nature.

As AI is being developed, it will be necessary to make sure that the actions undertaken by algorithms are based on a human ethical system for the benefit of humans, rather than on an AI ethical system for the benefit of AI or Silicon Valley.

Humans must continue to think and initiate the actions of algorithms. Algorithms and the complaints departments of start-ups have no business determining AI’s ethics.

The chief ethical concern at the heart of AI must be human preservation.

With humans, the possibility of creating links that are more than mere connections will continue to exist, along with values and virtues such as respect and empathy. Understanding others, avoiding hurting them needlessly, and living in harmony with them can never be reduced to a series of ones and zeros.

There are times in history when our decisions end up carved in stone; the greatest care must then be taken to avoid making mistakes.