AI Ethics Guidelines Global Inventory

AlgorithmWatch.org

April 2020


About

Last year, AlgorithmWatch launched the AI Ethics Guidelines Global Inventory to compile frameworks and guidelines that seek to set out principles for how systems for automated decision-making (ADM) can be developed and implemented ethically. We have now upgraded this directory by revising its categories and adding a search and filter function. With the support of many contributors, we have compiled more than 160 guidelines. As our evaluation shows, only a few guidelines indicate an oversight or enforcement mechanism. In addition, the overwhelming majority come from Europe and the US.

How we categorize AI ethics guidelines

The guidelines have been classified according to whether the organisation that drafted them has the means and processes in place to ensure compliance. The categories are therefore not suitable for determining the quality or accuracy of the ethical positions in the documents collected.

Another important factor is that we do not include adopted legislation in the inventory. Laws are by definition binding and would make these categories redundant. Furthermore, a comparison between the guidelines of a professional association or a company and the results of a legislative process is not useful for this inventory. In our opinion, the various documents that have accompanied and influenced these legislative processes are more relevant.

Binding Agreement

We understand a binding policy to mean a policy issued by an organisation or corporate body that has the means or processes to sanction non-compliance with the policy. In particular, this includes guidelines issued by government institutions and professional associations, but also certificates that can be revoked, or companies with structures that enforce compliance with their ethical guidelines, e.g. an ethics board or ombudsperson that can be contacted in case of a violation of the guidelines and at least has the power to question the company's decision.

Voluntary commitment

These are clear guidelines that an organization or individual agrees to comply with, but that do not specify what the consequences of non-compliance might be. Examples of such commitments are codes of conduct, voluntary agreements, or oaths.

Recommendation

Recommendations are guidelines that are addressed to other actors or that demand measures the organisation itself cannot implement, e.g. a catalogue of demands from an NGO or the results of an expert panel (scientific review).
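The three-tier classification above can be modelled as a simple data structure. The following is a minimal sketch of how inventory entries and the filter function mentioned earlier might be represented; all class names, field names, and entries are hypothetical illustrations, not the inventory's actual schema.

```python
from dataclasses import dataclass
from enum import Enum

# The inventory's three classification tiers, as described above.
class Category(Enum):
    BINDING_AGREEMENT = "binding agreement"        # issuer can sanction non-compliance
    VOLUNTARY_COMMITMENT = "voluntary commitment"  # compliance promised, no stated sanctions
    RECOMMENDATION = "recommendation"              # addressed to other actors

@dataclass
class Guideline:
    title: str
    issuer: str
    category: Category

# Hypothetical entries, for illustration only.
inventory = [
    Guideline("Professional code of conduct", "Engineering association",
              Category.BINDING_AGREEMENT),
    Guideline("Corporate AI principles", "Tech company",
              Category.VOLUNTARY_COMMITMENT),
    Guideline("Expert panel demands", "NGO",
              Category.RECOMMENDATION),
]

def filter_by_category(entries, category):
    """Return only the guidelines classified under the given category."""
    return [g for g in entries if g.category == category]

binding = filter_by_category(inventory, Category.BINDING_AGREEMENT)
```

Note that the category records only whether an enforcement mechanism exists, so a filter like this says nothing about the quality of a guideline's ethical positions.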

Project team & contributors

Revision & classification: Leonard Haas & Sebastian Gießler | Evaluation of submissions & database maintenance: Veronika Thiel, Leonard Haas & Sebastian Gießler | Project coordination: Marc Thümmler | Website: Hauke Hille

Many thanks to the contributors: Jared Adams, Richard Benjamins, David Bluemke, Tom Cowley, Paul de Laat, Allison Gardner, Max Haarich, Marc Hauer, John Havens, Stephanie Huf, Dongwoo Kim, Riikka Koulu, David Leslie, Yong Liu, Leah Mathews, Annette Mühlberg, Michael Puntschuh, Alejandro Saucedo, Johannes Schöning, Ludwig Schreier, Wolfgang M. Schröder, Oliver Suchy, Reinier van den Biggelaar, Freyja van den Boom, Benjamin Walczak, Moritz Winkel, Yuanyuan Xiao, Katharina Zweig and many more.

BACKGROUND

Ethics between business lingo and politics: Why bother?

By Sebastian Gießler & Leonard Haas

"The word ethics is under siege in technology policy. Weaponized in support of deregulation, self-regulation or hands-off governance, 'ethics' is increasingly identified with technology companies' self-regulatory efforts and with shallow appearances of ethical behavior."1

At the end of 2019, Elettra Bietti from Harvard Law School summed up a growing dissatisfaction with ethical guidelines. In recent years, a large number of actors have begun to develop normative guidelines for the use of so-called Artificial Intelligence. These include international organizations, NGOs, representatives of civil society, professional associations, businesses of all sizes and trade unions, as well as various governments and intergovernmental organizations such as the United Nations and the European Union. The number and diversity of actors and their different goals make it all the more necessary to clearly define the terms used.

The ethics of Artificial Intelligence, robotics, or machine ethics is a research area between computer science and philosophy. Some researchers are explicitly concerned with the development of machines, robots, or autonomous systems as "explicit moral actors", assuming that such systems can independently make plausible moral judgements and give reasons for them.2

Most of the guidelines in this inventory obviously do not deal with these issues. So what are their goals? AI ethics as found in this database is often positioned between instrumental-economic and ethical perspectives; in this sense, it is closer to business ethics. One aim of business and corporate ethics is the shaping of business practice through ethically sound recommendations. These recommendations cover compliance, dealing with misconduct, socially conscious entrepreneurship, the conditions under which products are manufactured, and the possible long-term consequences of products.3 Business interests and principles of 'right action' are weighed against each other. The goal is the socially accepted use of AI methods in various social and economic fields – not the question of the moral status of automated systems.

Guidelines for the use of artificial intelligence methods are not created in a vacuum, but are shaped by the decision-making processes, intentions, and limitations of the companies and organizations that develop and use them. These guidelines often serve an instrumental role in dealing with ethical conflicts, a phenomenon that is increasingly being researched empirically. Two studies serve as examples here.

In her essay From Ethics Washing to Ethics Bashing: A View on Tech Ethics from Within Moral Philosophy (2019), Elettra Bietti, quoted at the beginning, diagnoses two positions in the discussion about AI ethics guidelines: "ethics washing" and "ethics bashing". The former describes how tech companies use the term ethics to avoid regulation and to fend off criticism from academics and civil society; the argument is that sufficient self-regulation by companies makes legal regulation unnecessary. "Ethics bashing", by contrast, dismisses the philosophical preoccupation with the topic as too abstract and as standing in the way of political regulation. Bietti's observation shows that the term "ethics" is now so over-used that it is no longer taken seriously by actors in the tech community: a discussion of these problems on a normative-ethical basis is considered futile from the outset.

The study Engaging with ethics in Internet of Things: Imaginaries in the social milieu of technology developers by Ustek-Spilda et al. (2019) also observes different ways in which companies deal with the challenge of ethics: from disinterested pragmatists, who generally see ethical principles as an obstacle to innovation, through pragmatists who treat ethics as part of corporate compliance, to idealists who see the ethics incorporated in their products as a tangible advantage in the marketplace.

These studies show that ethics guidelines cannot be understood without knowledge of the organisations and social factors involved. Unlike academic ethics, ethical guidelines are not scholarly treatises on right action, but documents intended to have an effect on politics, the economy, and society. Ethical guidelines thus serve specific interests and objectives of institutions and companies. This database is intended to support researchers and academics in investigating AI ethics guidelines.

The understated power of AI ethics guidelines

The finding that AI ethics guidelines depend heavily on their social context and respective fields of application is by no means surprising. Why, then, is it still useful to keep an eye on their development?

The AI strategy of the German government, published at the end of 2018, offers a good starting point for illustrating the underlying complexities of AI ethics guidelines:

"We want to safeguard Germany's outstanding position as a research centre, to build up the competitiveness of German industry, and to promote the many ways to use AI in all parts of society in order to achieve tangible progress in society in the interest of its citizens."4

This example clearly illustrates the tensions within AI ethics guidelines, which range from social participation, individual freedom of action, and the welfare of society to the commercial opportunities of AI. It is important to ask not only about the choice and justification of principles, but also how these principles are applied in negotiation processes – for example, between autonomy and self-determination through strong privacy protection on one side and the interests of the tech industry on the other. Guidelines always depend on their authors, and tangible economic interests cannot be separated from them. This tension can be found in almost all the guidelines examined.

Documents such as the German AI strategy, ethics guidelines, and roadmaps are neither created in a vacuum, nor should their impact on research and development be underestimated. These documents can be understood as a mechanism for managing uncertain technological futures. The positions they formulate do not run parallel to technological development but influence research projects and method development, and thus performatively shape the expectations, values, and goals of technological development.5,6 This is exemplified by the way in which problems such as algorithmic bias and algorithmic fairness are understood and dealt with: these issues are now raised almost by default by companies and institutions when AI systems are introduced. The increase in the number of guidelines in 2019 and 2020 reflects this trend.

Because guidelines influence development in this way, it is important to engage with them. Looking at the institutional framework of a technology is just as important as looking at the technology itself.


[1] Bietti, Elettra (2019): From Ethics Washing to Ethics Bashing: A View on Tech Ethics from Within Moral Philosophy. SSRN Scholarly Paper. Rochester, NY: Social Science Research Network, 1 December 2019. https://papers.ssrn.com/abstract=3513182

[2] See Misselhorn, Catrin (2018): Grundfragen der Maschinenethik. 2nd revised edition. Reclams Universal-Bibliothek, no. 19583. Ditzingen: Reclam, p. 33.

[3] See Moriarty, Jeffrey (2017): Business Ethics. The Stanford Encyclopedia of Philosophy (Fall 2017 Edition), Edward N. Zalta (ed.). https://plato.stanford.edu/archives/fall2017/entries/ethics-business/

[4] Germany (2018): Artificial Intelligence Strategy. Federal Ministry of Education and Research, the Federal Ministry for Economic Affairs and Energy, and the Federal Ministry of Labour and Social Affairs. https://www.ki-strategie-deutschland.de/home.html?file=files/downloads/Nationale_KI-Strategie_engl.pdf

[5] See Marris, Claire, and Jane Calvert (2020): "Science and Technology Studies in Policy: The UK Synthetic Biology Roadmap". Science, Technology, & Human Values 45, no. 1 (1 January 2020): 34–61. https://doi.org/10.1177/0162243919828107, p. 36.

[6] See ibid., p. 37.
