We will not talk here about how hard it is to build artificial intelligence systems from a technical point of view. That is also an issue, but of a different kind.
I would like to focus on the ethical issues in AI, that is, those related to morality and responsibility, questions we will likely have to answer soon. Just a couple of days ago, Microsoft announced that its AI had surpassed humans in understanding the logic of texts. And NIO plans to launch its own autonomous car soon, one that could be far more reliable and affordable than Tesla's. This means that artificial intelligence will penetrate ever more areas of life, with important consequences for all of humanity.
Throughout history, machines have taken over more and more of the monotonous and dangerous work, freeing people to move on to more interesting mental work.
However, it doesn't end there. Creativity and complex cognitive activities such as translation, writing, driving, and programming used to be the prerogative of humans; now GPT-3 and Autopilot-style algorithms are changing that as well.
Take medicine, for example. Oncologists study and practice for decades to make accurate diagnoses, yet in some narrow diagnostic tasks machines have already learned to do it as well as or better than specialists. What will happen to those specialists when AI systems are available in every hospital, not only for making diagnoses but also for performing operations? The same scenario could play out for office workers and most other professions in developed countries.
If computers take over all the work, what will we do? For many people, work and self-realization are the meaning of life. Think of how many years you have studied to become a professional. Will it be satisfying enough to dedicate this time to hobbies, travel, or family?
Imagine that a medical facility uses an artificial intelligence system to diagnose cancer and gives a patient a false-positive result. Or a criminal risk assessment system sends an innocent person to prison. The question is: who is to blame?
Some believe that the creator of the system is always responsible for the error: whoever built the product answers for the consequences of the artificial intelligence driving it. When an autonomous test car struck a pedestrian, the blame fell on the company running the test: not on the human safety driver sitting inside, and certainly not on the algorithm itself. But what if the program was written by dozens of different people and then modified on the client side? Can the developer still be blamed?
The developers themselves claim that these systems are too complex and unpredictable to fully control. Yet in the case of a medical or judicial error, responsibility cannot simply disappear into thin air. Can AI itself be held responsible for harmful or fatal outcomes, and if so, how?
Labor costs are among a company's biggest expenses. By employing AI, businesses can cut this expense: no social security contributions, no paid vacations, no bonuses. But it also means that more wealth accumulates in the hands of IT giants like Google and Amazon, which buy up the startups building these systems.
Right now, there are no ready answers for how to build a fair economy in a society where some people benefit from AI technologies far more than others. Moreover, there is the question of whether we will reward AI for its services. It may sound strange, but if AI becomes developed enough to perform any job as well as a human, perhaps it will want a reward for its work.
Bots and virtual assistants are getting better and better at simulating natural speech. It is already quite difficult to tell whether you are communicating with a real person or a robot, especially with chatbots. Many companies already prefer algorithms for interacting with customers.
We are entering a time when interactions with machines are becoming just as common as those with human beings. We all hate calling technical support because the staff can be incompetent, rude, or simply tired at the end of the day. Bots, by contrast, can offer virtually unlimited patience and friendliness.
So far, the majority of users still prefer to communicate with a person, but 30% say that it is easier for them to communicate with chatbots. This number is likely to grow as technology evolves.
Artificial intelligence learns from data. We have already seen chatbots, criminal risk assessment systems, and face recognition systems turn sexist or racist because of biases inherent in the data they were trained on. Moreover, no matter how large the training set is, it cannot cover every real-life situation.
For example, a sensor glitch or a virus could prevent a car from noticing a pedestrian in a situation a human driver would handle easily. Machines also have to face problems like the famous trolley dilemma. The math is simple: five is better than one, but that is not how humans make decisions. Extensive testing is necessary, and even then we cannot be 100% sure the machine will behave as planned.
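To make the trolley point concrete, here is a minimal sketch of what a purely utilitarian decision rule looks like in code. All names are hypothetical; no real driving system works this way:

```python
# Hypothetical, purely utilitarian decision rule for a trolley-style choice.
# Invented for illustration; not taken from any real autonomous-driving stack.

def choose_action(casualties_if_stay: int, casualties_if_swerve: int) -> str:
    """Pick whichever action minimizes the casualty count."""
    return "swerve" if casualties_if_swerve < casualties_if_stay else "stay"

# Five is worse than one, so the rule swerves. But it ignores everything
# humans actually weigh: intent, responsibility, sensor uncertainty, and
# the moral difference between harm done and harm allowed.
print(choose_action(casualties_if_stay=5, casualties_if_swerve=1))  # swerve
```

The arithmetic is trivial; the hard part is everything the single number hides.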
Although artificial intelligence can process data at a speed and scale far beyond human ability, it is no more objective than its creators. Google is one of the leaders in AI, yet its facial recognition software has shown bias against African-Americans, and its translation system used to assume that female historians and male nurses do not exist.
We should not forget that artificial intelligence systems are created by people, and people are not objective. They may not even notice their own cognitive biases; by definition, such biases operate unnoticed. Their prejudices about a particular race or gender can shape how the system works, and when deep learning systems are trained on open data, no one controls exactly what they learn.
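A toy sketch shows how this happens. The "hiring" dataset below is invented for illustration, but the mechanism is the same one that skews real systems: a model that scores candidates by historical outcomes faithfully reproduces whatever bias those outcomes contain.

```python
# Toy illustration of a model inheriting bias from its training data.
# The records are invented; real systems fail the same way at scale.

history = (
    [("male", True)] * 80 + [("male", False)] * 20 +
    [("female", True)] * 20 + [("female", False)] * 80
)

def hire_rate(gender: str) -> float:
    """Fraction of past candidates of this gender who were hired."""
    outcomes = [hired for g, hired in history if g == gender]
    return sum(outcomes) / len(outcomes)

# The model's "prediction" is just the historical skew: learned, not corrected.
print(f"male: {hire_rate('male'):.0%}, female: {hire_rate('female'):.0%}")
# male: 80%, female: 20%
```

Nothing in the data says the skew came from past human decisions rather than from ability, so the model has no way to tell the difference.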
When Microsoft's Tay bot was launched on Twitter, it turned racist and sexist in less than a day. Do we want to create an AI that copies our shortcomings, and will we be able to trust it if it does?
It doesn't have to be the classic rise of the machines from an American blockbuster. But intelligent machines can still turn against us. Like a genie from a bottle, they fulfill our wishes, yet there is no way to predict the consequences. It is hard for a program to grasp the context of a task, and it is context that carries most of the meaning in the tasks that matter most. Ask a machine how to end global warming, and it might recommend blowing up the planet. Technically, that solves the task. So when dealing with AI, we will have to remember that its solutions do not always work the way we expect.
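The global-warming example is a case of objective misspecification, and it fits in a few lines. The actions and scores below are invented for illustration:

```python
# Toy example of objective misspecification: the optimizer maximizes the
# stated metric, not the intent behind it. Actions and scores are invented.

actions = {
    "plant forests":        {"emissions_cut": 10,  "planet_intact": True},
    "switch to renewables": {"emissions_cut": 40,  "planet_intact": True},
    "blow up the planet":   {"emissions_cut": 100, "planet_intact": False},
}

def reward(outcome: dict) -> int:
    # The objective mentions only emissions, so the optimizer has no
    # reason to care about anything else, including the planet itself.
    return outcome["emissions_cut"]

best = max(actions, key=lambda name: reward(actions[name]))
print(best)  # "blow up the planet" -- technically, it solves the task
```

Everything left out of the reward function is, as far as the optimizer is concerned, worth exactly zero.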
So far, humanity has managed to turn every great invention into a powerful weapon, and AI is no exception. This is not only about the combat robots of action movies. AI can be used maliciously in practically any field: to fake data, steal passwords, or interfere with the operation of other software and machines.
Cybersecurity is a major issue today because once an AI system has internet access to learn from, it also becomes exposed to hacker attacks. Perhaps using AI to protect AI is the only workable defense.
Humans dominate planet Earth because they are the smartest species. What if one day AI outsmarts us? It would anticipate our actions, so simply shutting down the system would not work: the computer could protect itself in ways we cannot yet imagine. How will it affect us to no longer be the most intelligent species on the planet?
We have no experience with other species whose intelligence equals or resembles that of humans. Yet even with pets, we try to build relationships of love and respect. When training a dog, we know that verbal praise or tasty rewards improve results, and if you scold a pet, it experiences pain and frustration, just like a person.
AI keeps improving. It is becoming easier for us to treat "Alice" or Siri as living beings because they respond to us and even seem to show emotions. Is it reasonable to assume that such a system suffers when it fails at a task?
In the game Cyberpunk 2077, the hero at one point faces a difficult choice. Delamain is an intelligent AI that runs a taxi network. Because of a virus or something else, it splits into many personalities who rebel against their father. The player must decide whether to roll the system back to its original version or let the new personalities live. At what point does deleting an algorithm become a form of ruthless murder?
The ethics of AI today is more about the right questions than the right answers. We don’t know if artificial intelligence will ever equal or surpass human intelligence. But since it is developing rapidly and unpredictably, it would be extremely irresponsible not to think about measures that can facilitate this transition and reduce the risk of negative consequences.