October 14, 2022

"Ugly? Bad? Good!": what is artificial intelligence ethics and how it helps make algorithms useful and safe for humans

How can AI have ethics at all? It's inanimate after all! Of course, AI itself cannot distinguish between good and evil. The task of humanity is not to make an AI similar to humans, but to teach it to act on the principles of justice and morality, rather than the way most people usually behave. We answer popular questions about who formulates ethical principles for AI and why, and where and how they are applied.

1.         Ethics is something that comes from philosophy, isn't it? What are we talking about at all?

Indeed, ethics was originally a philosophical discipline about norms of behaviour towards other people, society, God, etc., and later about internal moral categories. There is, for instance, the well-known ethical maxim of the philosopher Immanuel Kant - the categorical imperative - "Act in such a way that you treat humanity, whether in your own person or in the person of any other, never merely as a means to an end, but always at the same time as an end."

But ethics can also be understood much more simply: as a system of moral principles which allow everyone to distinguish between good and evil, to understand what is good and what is bad. Ethics regulates the coexistence of people, communities and nations. The golden rule of ethics is "do not do to others what you would not wish done to yourself".

2.         And what does ethics have to do with technologies?

For a long time, people have divided the world into animals, which act purely "by nature", on instinct, and rational humans, whose actions are deliberate and guided by ethical principles.

Therefore, if someone violates those principles, he or she is either a bad person, unworthy of living in society, or a madman. Over time, however, humans began to delegate some of their actions to machines. Machines started to perform certain acts, but with no thought or moral guidance behind them.

Nowadays, a special class of technologies has appeared that solves individual intellectual tasks which were previously the preserve of humans. This is so-called weak, narrow or applied artificial intelligence. It can play chess or drive a car. And while in the former case AI is unlikely to have a strong impact on people and society, in the latter it has to deal with many ethical dilemmas.

Some of these are summarised in the "trolley problem" thought experiment - a family of choice situations in which casualties need to be minimised. For instance, the algorithm of a driverless car must decide, under critical conditions, what to do: drive off into a ditch and put the passenger at risk, or stay on course and endanger pedestrians who are violating traffic rules.

Most moral dilemmas and principles can be reformulated into these kinds of problems and then tested empirically or experimentally. This is called the operationalisation of ethics.
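To make the idea of operationalisation a little more concrete, here is a minimal, purely illustrative Python sketch: it encodes a trolley-style dilemma as a choice between actions with quantified expected harm. All the outcome names and numbers are invented for the example and are not taken from any real driving system.

```python
# A toy illustration of "operationalising" an ethical dilemma:
# each possible action is described by the harms it is expected to cause,
# and a policy is reduced to a rule that can be tested empirically.
# All numbers and outcome names here are invented for illustration.

ACTIONS = {
    "swerve_into_ditch": {"passenger_injury_risk": 0.4, "pedestrian_injury_risk": 0.0},
    "stay_on_course":    {"passenger_injury_risk": 0.0, "pedestrian_injury_risk": 0.9},
}

def expected_harm(consequences):
    """Sum the expected harms of one action (a deliberately crude metric)."""
    return sum(consequences.values())

def choose_action(actions):
    """Pick the action with the lowest total expected harm."""
    return min(actions, key=lambda name: expected_harm(actions[name]))

if __name__ == "__main__":
    print(choose_action(ACTIONS))  # -> "swerve_into_ditch" under these toy numbers
```

The point is not that adding up risks is the right moral rule, but that once a dilemma is written down in this form, different rules can be compared and tested empirically.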

3.         Am I correct in assuming that we have to teach the AI to act like a human?

Alas, rather the opposite. Let's figure out, in a nutshell, what any artificial intelligence system consists of. Firstly, to learn how to solve any task - for example, to identify and classify people's faces - AI needs data. A great deal of data. That is why such datasets are also called "big data".

Those same images of people's faces are often collected by bots on the Internet. In addition to the images themselves, descriptions are needed for them, and these are usually taken from the context in which the image was posted. As a result, when solving a classification problem, the model more often assigns women to the category of "domestic worker" than to, for example, "doctor", following sexist stereotypes rooted in the social foundations of some countries. This is data-level bias, inherited by AI directly from humans.
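As a rough sketch of how such data-level bias can be spotted, one could simply compare how often each label co-occurs with each group in the training data. The tiny dataset and field names below are hypothetical; a real audit would use the actual dataset and more careful statistics.

```python
# A minimal sketch of checking label distribution across groups in training data.
# The records and field names are invented for illustration only.
from collections import Counter

records = [
    {"gender": "female", "label": "domestic worker"},
    {"gender": "female", "label": "domestic worker"},
    {"gender": "female", "label": "doctor"},
    {"gender": "male",   "label": "doctor"},
    {"gender": "male",   "label": "doctor"},
    {"gender": "male",   "label": "domestic worker"},
]

counts = Counter((r["gender"], r["label"]) for r in records)
totals = Counter(r["gender"] for r in records)

for (gender, label), n in sorted(counts.items()):
    share = n / totals[gender]
    print(f"{gender:6s} -> {label:15s}: {share:.0%} of that group's examples")
# A large gap between groups for the same label is a signal of data-level bias.
```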

The next level is the construction of the algorithm and its training. Here too, biases or distortions may arise. The problem is that, to the researcher or engineer, the operation of such an algorithm is usually a black box: it is difficult to explain how the model made a particular decision or prediction. The problem at this level can be addressed by the eXplainable Artificial Intelligence (XAI) approach.
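One of the simplest model-agnostic techniques in the XAI toolbox is permutation importance: shuffle one input feature and measure how much the model's accuracy drops. Below is a minimal sketch using scikit-learn on synthetic data; it illustrates the idea and is not a recipe for auditing a real face-classification model.

```python
# A minimal sketch of one model-agnostic XAI technique: permutation importance.
# Shuffling a feature and measuring the drop in accuracy shows how much the
# model relies on it. Data here are synthetic; this is an illustration, not an audit.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)
baseline = model.score(X, y)

rng = np.random.default_rng(0)
for i in range(X.shape[1]):
    X_shuffled = X.copy()
    X_shuffled[:, i] = rng.permutation(X_shuffled[:, i])
    drop = baseline - model.score(X_shuffled, y)
    print(f"feature {i}: accuracy drop {drop:.3f}")
```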

Finally, there is also bias at the level of interpreting the algorithm's results, and this is where the role of humans is important. Let's suppose we train a model on data from one population and get the necessary predictions, but then apply it to a completely different population - in a sense, performing a transfer of machine learning. The result can now be significantly different, and this needs to be taken into account and controlled; alas, this does not always happen.
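A simple practical safeguard here is to compare the feature distributions of the original population and the new one before trusting the transferred model. The sketch below uses synthetic data and a two-sample Kolmogorov-Smirnov test, which is just one of many possible checks.

```python
# A rough sketch: before applying a model trained on one population to another,
# compare feature distributions between the two. Synthetic data for illustration.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
train_feature = rng.normal(loc=0.0, scale=1.0, size=1000)   # original population
new_feature   = rng.normal(loc=0.8, scale=1.3, size=1000)   # "different" population

stat, p_value = ks_2samp(train_feature, new_feature)
print(f"KS statistic = {stat:.3f}, p-value = {p_value:.3g}")
if p_value < 0.01:
    print("Distributions differ noticeably: the transferred model needs re-validation.")
```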

The last point is the application of AI in real life and the tracking of the results of such application. If algorithms infringe on people's rights, they need to be corrected, or the operation of the system has to be discontinued. For instance, in one municipality in the south of the Netherlands, local activists, having demonstrated the distortions in its operation, pushed to stop the use of the Totta Data Lab algorithm for profiling welfare recipients.

So it would be more correct to say that we should teach the AI to act on the principles of justice and morality, rather than the way most people normally act.

4.         What else is artificial intelligence ethics needed for?

First and foremost, for the effective development of AI technology. Any invention, once it leaves the creator's garage or the scientist's laboratory, usually triggers fears and apprehension in society. Take cars. In the early days of automobile production, self-propelled, slow and rumbling steam cars terrified horses, carriage drivers and pedestrians in the streets.

The two countries of Great Britain and Germany took radically different approaches to the problem. In Foggy Albion, the "Locomotive Acts" were introduced in 1865: in front of every steam car ("horseless carriage") a man had to walk with a red flag raised during the day or a red lantern lit at night, warning of the approach of the new "technological monster". And there had to be at least two drivers. Such restrictions reduced the speed of steam cars, made them inconvenient to use and completely negated the already modest advantages the new transport then had over horse-drawn vehicles.

Germany, on the other hand, did exactly the opposite, allowing steam cars to travel freely, which stimulated their rapid development and the formation of the now world-famous German car industry. The English fell far behind in the development of motor transport and recognised the problem only towards the end of the 19th century, when the law so destructive for the industry was finally repealed in 1896.

At the same time, there were no universally codified international traffic rules until 1968, even though Henry Ford had started the first car assembly line back in 1913, and by 1927 there were already about 15 million Ford Model T cars on the streets and roads of the world, not counting all other makes. For a long time there were no traffic lights in cities (in the US until 1914, in Europe until 1918), no road markings, no traffic controllers, and it was possible to drive under the influence of alcohol. It was not until the late 1960s that the Vienna Convention on Road Traffic was signed, standardising and consolidating the traffic rules, along with the Vienna Convention on Road Signs and Signals, which regulated road markings, the operation of traffic lights and so on.

In other words, norms for dealing with a particular technology are first formed in society, sometimes over a very long time, as ethical principles, and only then are they institutionalised and enshrined in law. The same process applies to AI: on the one hand, it enables the technology to develop effectively, and on the other, it protects people and society as a whole from its possible negative effects.

5.         When and who first thought about the ethics of artificial intelligence?

Since the concept of "artificial intelligence" itself appeared only in 1956, and public interest in the subject even later, in the 1960s, the first people to raise and try to solve such problems were science fiction writers. They talked about humanoid intelligent robots, so the whole movement was called roboethics.

In Karel Capek's play R.U.R. (1920), in which the term "robot" was first proposed, there was a League for Humanity, which declared that robots should not be subjected to inhumane treatment. Then came the famous "Three Laws of Robotics", plus the "Zeroth Law", formulated by the scientist and author Isaac Asimov in the short story "Runaround" (1942) and the novel "Robots and Empire" (1986). They read as follows:

0.         A robot may not harm humanity or, through inaction, allow humanity to come to harm.

1.         A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2.         A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.

3.         A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.

From the applied perspective, the topic has been revisited since the early 2000s, when robots began to populate not only factory assembly lines but also city streets, people's homes and so on. In 2004, the First International Symposium on Roboethics took place in San Remo, Italy, and in 2017 the European Parliament adopted the European Charter for Robotics. The debate on roboethics has now intensified again due to the mass proliferation of autonomous combat robots and drones, as well as the emergence of humanoid sexbots.

6.         Who formulated the basic ethical principles in the field of artificial intelligence, and how?

With the end of the 'second winter of artificial intelligence' and the new heyday of neural networks and deep learning models in the 2010s, the world's largest IT companies became concerned about AI ethics. In 2016, Microsoft published "10 Laws for Artificial Intelligence". IBM has also articulated its own vision of AI ethics. To date, 30 of the world's largest IT companies have their own sets of rules, codes and ethical principles in the field of artificial intelligence. Among them are three Russian companies: ABBYY, Sber and Yandex.

But even more important and profound has been the approach of various non-profit organisations (NPOs) that bring together academic researchers, engineers from commercial companies, investors and business people. The best-known example is the 23 "Asilomar AI Principles" formulated at the 2017 Asilomar conference. They were signed by researchers and thinkers such as Yann LeCun, Ilya Sutskever, Ray Kurzweil, Stephen Hawking, Elon Musk, Vladimir Onoprienko from Russia and many, many others. Thirteen of the principles deal specifically with ethics and values.

7.         Can we learn more about the Asilomar Principles?

These are the principles dealing specifically with ethics:

Safety: AI systems must be safe and reliable throughout their lifetime, and verifiable where possible.

Transparency of failures: If an AI system causes harm, it should be possible to find out why this happened.

Transparency of the judicial process: Any involvement of an autonomous system in judicial decision-making must provide a satisfactory explanation that can be verified by a competent human authority.

Responsibility: Developers and creators of advanced AI systems are stakeholders in the moral consequences of their use, misuse and actions, and have both the responsibility and the opportunity to shape those consequences.

Values alignment: Highly autonomous AI systems should be designed so that their goals and behaviour can be guaranteed to be aligned with human values throughout their operation.

Human values: AI systems should be designed and operated in a way that is compatible with the ideals of human dignity, rights, freedoms and cultural diversity.

Privacy: People should have the right to access, manage and control the data they generate, taking into account the ability of AI systems to analyse and use this data.

Freedom and private life: The application of AI to personal data should not unduly restrict people's real or perceived freedom.

The common good: AI technology should benefit and expand the opportunities of as many people as possible.

Common prosperity: The economic prosperity created by AI should be widely distributed for the benefit of all humanity.

Human control: People should choose how, and whether, to delegate decisions to AI systems in order to achieve human-defined goals.

No abuse of power: Power gained through control of highly sophisticated AI systems should be applied with respect and should improve, not undermine, the social and civic processes on which the health of society depends.

AI arms race: Any arms race in lethal autonomous weapons should be avoided.

8.         Phew, that's a lot of principles. Are there simpler systems of ethical postulates that are easier to remember?

In general, most specialists in AI ethics agree on the following set of ethical principles:

privacy

transparency

reliability

accountability

equal access

security

credibility

verifiability

controllability

9.         It seems that so far these are just declarations and all these principles look very abstract. How are they applied in practice?

For ethics to work, there need to be strategies for implementing and operationalising ethical principles in the form of specific applications.

Currently, governmental and supranational bodies, commercial companies and NGOs are trying to harmonise and implement an "ethics by design" approach - thinking through and resolving a variety of potential ethical dilemmas at the design and development stage of artificial intelligence systems. This approach significantly complicates the design process and requires additional competencies from engineers and programmers, which creates many difficulties in its implementation.

The second approach, AI Localism, is also difficult to translate directly. Its essence is not to start by solving abstract, all-encompassing moral dilemmas, but to collect, wherever they arise, the best local practices for handling ethical issues when AI is implemented at the level of individual cities and countries, as well as cases of combating unethical systems.

A third approach is to operationalise ethical issues not through principles but through specifically described risks and their prevention. It is closer and more understandable to industry and government agencies. The task of implementing ethical principles takes on tangible form and becomes quite achievable once the economic case for calculating risks and working to eliminate them is demonstrated.
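As a toy illustration of this risk-based framing, the "economic case" boils down to comparing the expected cost of each harm with the cost of preventing it. All the risk names, probabilities and costs below are invented for the example.

```python
# Toy illustration of a risk-based framing: compare the expected cost of each
# harm with the cost of mitigating it. All numbers and risk names are invented.
risks = [
    # (name, probability of occurring, cost if it occurs, cost of mitigation)
    ("discriminatory denial of service", 0.05, 2_000_000, 50_000),
    ("privacy leak from training data",  0.01, 5_000_000, 120_000),
    ("harmless mislabeling of photos",   0.30,     1_000,  80_000),
]

for name, p, harm_cost, mitigation_cost in risks:
    expected_loss = p * harm_cost
    decision = "mitigate" if expected_loss > mitigation_cost else "accept and monitor"
    print(f"{name}: expected loss {expected_loss:,.0f} vs mitigation {mitigation_cost:,.0f} -> {decision}")
```

Of course, a pure expected-cost calculation captures only the economic side of the argument; ethical principles may still demand mitigation even where the numbers alone would not.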

A fourth approach is to give a voice to all interested parties, i.e. to gather feedback not only from developers, experts and regulators, but also from users, independent researchers, NGOs, and marginalised and discriminated-against groups.

10.         What is the state of ethics of artificial intelligence in Russia?

Russia is among the leading powers on this issue. In November 2021, UNESCO adopted its first global standard-setting instrument on the subject, the Recommendation on the Ethics of Artificial Intelligence, which Russia and 190 other countries supported (the US, as we recall, withdrew from UNESCO back in 2018). At the same time, only two states - China and Russia - declared at the UNESCO General Conference in Paris that they also have their own frameworks within which they will develop artificial intelligence technologies.

On October 26, 2021, the first international forum "Ethics of Artificial Intelligence: The Beginning of Trust" was held at the TASS Press Centre in Moscow. During the forum, the Russian "AI Code of Ethics" was adopted. To date, more than 100 Russian organisations have signed the code.

11.         What principles is the Russian AI Code of Ethics built on and how does it help domestic companies and researchers?

The Russian AI Code of Ethics is based on six key principles:

1.         The main priority in the development of AI technologies is to protect the interests of people, individual groups and every human being;

2.         The need to be aware of responsibility in the creation and use of AI;

3.         Responsibility for the consequences of using AI always lies with humans;

4.         AI technologies should be introduced where and in such cases as will benefit people;

5.         The interests of developing AI technologies come before the interests of competition;

6.         Maximum transparency and truthfulness in communicating the level of development of AI technologies, their capabilities and risks are important.

Today, in essence, this code establishes a system of "soft" regulation of the use of artificial intelligence technologies in Russia and serves as a tool for interaction between industry, society and the state.

Source:  https://naked-science.ru/article/hi-tech/good-artificial-intelligence

Translated into English by Muhiddin Ganiev
