Montreal Declaration for Responsible Development of Artificial Intelligence

On November 3, 2017, the University of Montreal launched the co-construction process for the Montreal Declaration for the Responsible Development of Artificial Intelligence (Montreal Declaration). A year later, the results of these citizen deliberations are public. Dozens of events were organized to stimulate discussion on the societal issues raised by artificial intelligence (AI), and 15 brainstorming workshops were held over three months, involving more than 500 citizens and experts from all walks of life.

The Montreal Declaration is a collective work that aims to steer the development of AI toward the common good and to guide social change by formulating recommendations with strong democratic legitimacy.

The citizen co-construction method chosen was based on a preliminary declaration of general ethical principles articulated around seven fundamental values: well-being, autonomy, justice, privacy, knowledge, democracy and responsibility. As a result of the process, the Declaration has been enriched and now presents 10 principles based on the following values: well-being, autonomy, intimacy and privacy, solidarity, democracy, equity, inclusion, prudence, responsibility and environmental sustainability.

To sign the declaration, visit the website at:

About the Montreal Declaration for Responsible Development of Artificial Intelligence (1)

For the first time in human history, it is possible to create autonomous systems capable of performing complex tasks that only natural intelligence was thought capable of: processing large amounts of information, calculating and predicting, learning and adapting responses to changing situations, and recognizing and classifying objects. Given the immaterial nature of these tasks, and by analogy with human intelligence, we refer to these wide-ranging systems under the general name of artificial intelligence. Artificial intelligence constitutes a major form of scientific and technological progress, one that can generate considerable social benefits by improving living conditions and health, facilitating justice, creating wealth, enhancing public safety and mitigating the impact of human activities on the environment and climate. Intelligent machines do more than perform better calculations than human beings; they can also interact with sentient beings, keeping them company and caring for them.

However, the development of artificial intelligence poses major ethical challenges and social risks. Indeed, intelligent machines can restrict the choices of individuals and groups, lower living standards, disrupt the organization of work and the labor market, influence politics, clash with fundamental rights, exacerbate social and economic inequalities, and affect ecosystems, the climate and the environment. Although scientific progress, and life in society, always carry a risk, it is up to citizens to determine the moral and political ends that give meaning to the risks encountered in an uncertain world.

The benefits of artificial intelligence will be all the greater if the risks of its deployment are kept low. The first danger of artificial intelligence development lies in giving the illusion that we can master the future through calculation. Reducing society to a series of numbers, and governing it through algorithmic procedures, is an old chimera that still feeds human ambitions. But when it comes to human affairs, tomorrow rarely resembles today, and numbers cannot determine what has moral value, nor what is socially desirable.

For all these considerations, AI research centers, AI scientists and specialists, and business leaders around the world must support and sign the Montreal Declaration.

Moreover, in order to prevent artificial intelligence from being used to destroy populations, especially in times of war, it is essential that governments and military organizations sign the Montreal Declaration for the responsible development of artificial intelligence.

(1) Preamble of the Declaration
