Artificial Intelligence (AI) Researchers Should Learn & Follow Ethics

15 Dec

Scientists who build artificial intelligence and autonomous systems need a strong ethical understanding of the impact their work could have.

Over 100 technology pioneers recently published an open letter to the United Nations on the topic of lethal autonomous weapons, or "killer robots".

These signatories, including entrepreneur Elon Musk and the founders of several robotics firms, are part of a campaign that began in 2015. The letter called for an end to an arms race that it claimed could be the "third revolution in warfare, after gunpowder and nuclear arms".

The UN has a role to play, but responsibility for the future of these systems must also begin in the laboratory. The education system that trains our AI researchers needs to school them in ethics as well as coding.

Autonomy in AI

Autonomous systems can make decisions for themselves, with little to no input from humans. This greatly increases the usefulness of robots and similar devices.

For example, an autonomous delivery drone needs only the delivery address, and can then work out for itself the best route to take, overcoming any obstacles it may encounter along the way, such as adverse weather or a flock of curious seagulls.

There has been a great deal of research into autonomous systems, and delivery drones are now being developed by companies such as Amazon. Clearly, the same technology could easily be used to make deliveries that are considerably nastier than books or food.

Drones are also becoming smaller, cheaper and more robust, which means it will soon be feasible to produce and deploy flying armies of thousands of them.

The prospect of weapons systems like this being deployed, largely decoupled from human control, prompted the letter urging the UN to "find a way to protect us all from these dangers".

Ethics And Justifications

Whatever your opinion of such weapons systems, the issue highlights the need for careful consideration of ethics in AI research.

As in most areas of science, acquiring the depth needed to contribute to the world's knowledge requires focusing on a specific topic. Often researchers are experts in relatively narrow areas, and may lack any formal training in ethics or moral reasoning.

It is precisely this kind of reasoning that is increasingly required. For example, driverless cars, currently being tested in the US, will need to be able to make judgments about potentially dangerous situations.

For instance, how should a car respond if a cat unexpectedly runs across the road? Is it better to run over the cat, or to swerve sharply to avoid it, risking injury to the car's occupants?

Hopefully, such situations will be rare, but the car will need to be programmed with some set of principles to guide its decision-making. As Virginia Dignum put it when presenting her paper "Responsible Autonomy" at the recent International Joint Conference on Artificial Intelligence (IJCAI) in Melbourne:

The driverless car will have ethics; the question is, whose ethics?
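The idea that a car must be programmed with explicit principles can be illustrated with a minimal sketch. Everything here, the hazard categories, the risk thresholds and the rule itself, is a hypothetical illustration, not taken from any real autonomous-vehicle system:

```python
# Toy sketch of rule-based decision-making for the swerve-or-not dilemma.
# Hazard categories, risk estimates and thresholds are all hypothetical.

def choose_action(hazard, swerve_injury_risk):
    """Choose between swerving, braking hard, or continuing.

    hazard: what is in the car's path, e.g. "cat" or "pedestrian".
    swerve_injury_risk: estimated probability of injuring the car's
        occupants if it swerves sharply.
    """
    if hazard == "pedestrian":
        # Avoiding harm to a person outweighs a moderate risk to occupants.
        return "swerve" if swerve_injury_risk < 0.5 else "brake_hard"
    if hazard == "cat":
        # Swerve only when doing so is nearly risk-free for the occupants.
        return "swerve" if swerve_injury_risk < 0.05 else "brake_hard"
    return "continue"

print(choose_action("cat", 0.30))         # brake_hard
print(choose_action("pedestrian", 0.30))  # swerve
```

The point of the sketch is not the particular thresholds but that someone must choose them: the same risk estimate produces different actions depending on the principles the designers encoded.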

A similar theme was explored in the paper "Automating the Doctrine of Double Effect" by Naveen Sundar Govindarajulu and Selmer Bringsjord.

The Doctrine of Double Effect is a way of reasoning about moral issues, such as the right to self-defense under certain circumstances, and is attributed to the 13th-century Catholic scholar Thomas Aquinas.

The name Double Effect comes from an action having a good effect (such as saving someone's life) as well as a bad effect (harming someone else in the process). This reasoning can be used to justify actions such as a drone firing on a car that is running down pedestrians.
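The doctrine can be sketched as a simple checklist. The four conditions below follow the standard textbook formulation; the data structure and the drone scenario are illustrative assumptions, not the formalisation used by Govindarajulu and Bringsjord:

```python
# Toy sketch of the Doctrine of Double Effect as a four-part checklist.
# Field names and the example scenario are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class Action:
    act_is_good_or_neutral: bool  # the act itself is morally good or neutral
    harm_is_the_means: bool       # the bad effect is how the good is achieved
    harm_is_intended: bool        # the agent intends the harm itself
    good_outweighs_harm: bool     # proportionality: good effect outweighs bad

def permitted_by_double_effect(a: Action) -> bool:
    return (a.act_is_good_or_neutral
            and not a.harm_is_the_means
            and not a.harm_is_intended
            and a.good_outweighs_harm)

# Drone fires on a car running down pedestrians: harm to the driver is
# foreseen but not intended, and stopping the car saves many lives.
strike = Action(act_is_good_or_neutral=True, harm_is_the_means=False,
                harm_is_intended=False, good_outweighs_harm=True)
print(permitted_by_double_effect(strike))  # True
```

Even this crude version shows why automating the doctrine is hard: each boolean hides a genuinely difficult moral judgment that some human or machine must make before the checklist can be applied.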

What Does This Mean For Education?

The emergence of ethics as a subject for discussion within AI research suggests we should also consider how we prepare students for a world in which autonomous systems are increasingly common.

The need for "T-shaped" individuals has been identified in recent years. Companies are now looking for graduates not only with a specific area of technical depth (the vertical stroke of the T) but also with professional skills and personal attributes (the horizontal stroke). Combining the two, they can see problems from different viewpoints and work effectively in multidisciplinary teams.

Most undergraduate courses in computer science and similar fields include a course on professional ethics and practice. These typically focus on intellectual property, copyright, patents and privacy, which are certainly important.

However, it seems clear from the discussions at IJCAI that there is an emerging need for additional material on broader ethical issues.

Topics could include ways of determining the lesser of two evils, legal concepts such as criminal negligence, and the history of technology's impact on society.

The key point is to enable students to incorporate ethical and societal perspectives into their work from the very beginning. It also seems appropriate to require research proposals to show how ethical considerations have been addressed.

As AI becomes more broadly and deeply embedded in everyday life, it is vital that technologists understand the society in which they live and the impact their creations may have upon it.
