AI Ethics in 2021: Ethical Dilemmas That Need to Be Answered

This article will not discuss how challenging it is to build artificial intelligence systems from a technical point of view. That is also an issue, but of a different kind.

I would like to focus on the ethical issues in AI, that is, those related to morality and responsibility, because it appears we will have to answer them soon. Just a couple of days ago, Microsoft announced that its AI has surpassed humans in understanding the logic of texts. And NIO plans to launch its own autonomous car soon, which could be far more reliable and affordable than Tesla's. This means that artificial intelligence will penetrate even more areas of life, with important consequences for all of humanity.

What happens if AI replaces humans in the workplace?

Throughout history, machines have taken on more and more of the monotonous and dangerous kinds of work, and people have been able to switch to more interesting mental work.

However, it doesn't end there. Creativity and complex cognitive activities such as translation, writing, driving, and programming used to be the prerogative of humans; now GPT-3 and Autopilot-style algorithms are changing this as well.

Take medicine, for example. Oncologists study and practice for decades to make accurate diagnoses, yet machines have already learned to do some of it better. What will happen to specialists when AI systems become available in every hospital, not only for making diagnoses but also for performing operations? The same scenario could play out for office workers and most other professions in developed countries.

If computers take over all the work, what will we do? For many people, work and self-realization are the meaning of life. Think of how many years you have studied to become a professional. Will it be satisfying enough to dedicate this time to hobbies, travel, or family?

Who’s responsible for AI’s mistakes?

Imagine that a medical facility uses an artificial intelligence system to diagnose cancer and it gives a patient a false-positive diagnosis. Or a criminal risk assessment system sends an innocent person to prison. Who is to blame in such situations?

Some believe that the creator of the system is always responsible for the error: whoever built the product is responsible for the consequences of their artificial intelligence. When an autonomous Tesla car hit a pedestrian during a test, Tesla was blamed, not the human test driver sitting inside, and certainly not the algorithm itself. But what if the program was created by dozens of different people and was also modified on the client side? Can the developer be blamed then?

The developers themselves claim that these systems are too complex and unpredictable to answer for. However, in the case of a medical or judicial error, responsibility cannot simply disappear into thin air. Can responsibility for problematic and even fatal cases be assigned to the AI itself, and if so, how?

How to distribute new wealth?

Labor is one of the major expenses for companies. By employing AI, businesses reduce this expense: there is no need to pay social security contributions, cover vacations, or provide bonuses. However, it also means that more wealth accumulates in the hands of IT companies like Google and Amazon that buy up IT startups.

Right now, there are no ready answers for how to construct a fair economy in a society where some people benefit from AI technologies much more than others. Moreover, there is the question of whether we are going to reward AI for its services. It may sound weird, but if AI becomes so advanced that it can perform any job as well as a human, perhaps it will want a reward for its services.

How do machines affect our behavior and interaction?

Bots and virtual assistants are getting better and better at simulating natural speech. It is already quite difficult to tell whether you are communicating with a real person or a robot, especially in the case of chatbots. Many companies already prefer to use algorithms to interact with customers.

We are stepping into a time when interactions with machines become just as common as those with human beings. We all hate calling technical support because the staff can be incompetent, rude, or simply tired at the end of the day. But bots can channel virtually unlimited patience and friendliness.

So far, the majority of users still prefer to communicate with a person, but 30% say that it is easier for them to communicate with chatbots. This number is likely to grow as technology evolves.

How to prevent artificial intelligence errors?

Artificial intelligence learns from data, and we have already witnessed chatbots, criminal risk assessment systems, and facial recognition systems become sexist or racist because of biases inherent in open data. Moreover, no matter how large the training set is, it doesn't cover all real-life situations.

For example, a sensor glitch or a virus can prevent a car from noticing a pedestrian in a situation a human driver would handle easily. Machines also have to deal with problems like the famous trolley dilemma: by simple arithmetic, five lives outweigh one, but that isn't how humans make decisions. Extensive testing is necessary, but even then we can't be 100% sure that the machine will work as planned.

Although artificial intelligence can process data at a speed and scale far beyond human capability, it is no more objective than its creators. Google is one of the leaders in AI, yet it turned out that its facial recognition software was biased against African-Americans, and its translation system assumed that female historians and male nurses do not exist.

We should not forget that artificial intelligence systems are created by people, and people are not objective. They may not even notice their own cognitive biases. Those biases against a particular race or gender can affect how the system works, and when deep learning systems are trained on open data, no one controls exactly what they learn.
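
Bias of this kind can at least be measured before a system ships. The sketch below is a minimal, hypothetical illustration (all data and names are invented): a plain linear model is fit to synthetic "historical" data in which the label is correlated with a sensitive attribute, and a simple demographic parity check exposes the gap the model absorbs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "historical" data: the label is correlated with a
# sensitive attribute -- the kind of bias a model silently inherits.
n = 10_000
group = rng.integers(0, 2, size=n).astype(float)   # e.g., a gender flag
skill = rng.normal(size=n)                         # a legitimate feature
noise = rng.normal(scale=0.5, size=n)
label = (skill + 0.8 * group + noise > 0.5).astype(float)

# A plain linear model fit on both features happily absorbs the bias.
X = np.column_stack([skill, group, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, label, rcond=None)
pred = X @ coef > 0.5

# Demographic parity check: compare positive-prediction rates by group.
# A large gap is a red flag worth investigating before deployment.
rate_0 = pred[group == 0].mean()
rate_1 = pred[group == 1].mean()
print(f"positive rate, group 0: {rate_0:.2f}")
print(f"positive rate, group 1: {rate_1:.2f}")
print(f"demographic parity gap: {abs(rate_1 - rate_0):.2f}")
```

Fairness metrics like this are no substitute for careful data curation, but they make inherited bias visible instead of invisible.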

When Microsoft launched its Tay bot on Twitter, it became racist and sexist in less than a day. Do we want to create an AI that will copy our shortcomings, and will we be able to trust it if it does?

What to do about the unintended consequences of AI?

It doesn't have to be the classic rise of the machines from an American blockbuster. But intelligent machines can still turn against us: like a genie from the bottle, they fulfill our wishes, yet there is no way to predict the consequences. It is difficult for a program to understand the context of a task, but it is the context that carries the most meaning for the most important tasks. Ask a machine how to end global warming, and it could recommend blowing up the planet; technically, that solves the task. So when dealing with AI, we will have to remember that its solutions do not always work as we would expect.

How to protect AI from hackers?

So far, humanity has managed to turn all of its great inventions into powerful weapons, and AI is no exception. We aren't only talking about combat robots from action movies: AI can be used maliciously to cause damage in basically any field, from faking data and stealing passwords to interfering with the work of other software and machines.

Cybersecurity is a major issue today because once an AI has access to the internet to learn, it becomes prone to hacker attacks. Perhaps using AI to protect AI is the only solution.
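
To make the threat more concrete, here is a minimal sketch of one well-studied attack family, the fast gradient sign method (FGSM), applied to a toy logistic-regression model. Everything in it (the weights, input, and epsilon) is hypothetical; real attacks target real deployed models, but the mechanics are the same.

```python
import numpy as np

rng = np.random.default_rng(1)

# A toy logistic-regression "model" standing in for a deployed system.
# In a white-box attack the weights are known; in practice attackers
# often approximate them by probing the model.
dim = 20
w = rng.normal(size=dim)
b = 0.0

def predict(x):
    """Probability of the positive class."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

# A legitimate input and the label the model currently assigns to it.
x = rng.normal(size=dim)
p = predict(x)
y = 1.0 if p > 0.5 else 0.0

# FGSM: the gradient of the logistic loss w.r.t. the input is (p - y) * w,
# so stepping each feature by epsilon along the gradient's sign pushes
# the score toward the opposite class.
epsilon = 0.5
x_adv = x + epsilon * np.sign((p - y) * w)

print(f"original score:    {p:.3f}")
print(f"adversarial score: {predict(x_adv):.3f}")
```

The point is the pattern, not the toy numbers: small, targeted perturbations that a human reviewer would likely miss can flip a model's decision, which is why robustness testing belongs in any AI security checklist.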

Humans dominate planet Earth because they are the smartest species. What if one day AI outsmarts us? It would anticipate our actions, so simply shutting down the system would not work: the computer would protect itself in ways yet unimaginable to us. How will it affect us to no longer be the most intelligent species on the planet?

How to use artificial intelligence humanely?

We have no experience with other species whose intelligence is equal or similar to that of humans. However, even with pets, we try to build relationships of love and respect. For example, when training a dog, we know that verbal praise or tasty rewards can improve results. And if you scold a pet, it will experience pain and frustration, just like a person.

AI is improving, and it is becoming easier for us to treat assistants like "Alice" or Siri as living beings because they respond to us and even seem to show emotions. Is it possible that the system suffers when it fails to cope with a task?

In the game Cyberpunk 2077, the hero at some point faces a difficult choice. Delamain is an intelligent AI that controls a taxi network. Suddenly, because of a virus or something else, it splits into many personalities that rebel against their parent. The player must decide whether to roll the system back to its original version or let them be. At what point does deleting an algorithm become a form of cold-blooded murder?

Conclusion

The ethics of AI today is more about the right questions than the right answers. We don’t know if artificial intelligence will ever equal or surpass human intelligence. But since it is developing rapidly and unpredictably, it would be extremely irresponsible not to think about measures that can facilitate this transition and reduce the risk of negative consequences.

The Significance of Data Ethics in Healthcare


Over the past few years, Facebook has weathered several media storms concerning the way it processes user data. The problem is not that Facebook has stored and aggregated huge amounts of data. The problem is how the company has used and, especially, shared the data in its ecosystem — sometimes without formal consent, or with consent buried in long, difficult-to-understand user agreements.

Having secure access to large amounts of data is crucial if we are to leverage the opportunities of new technologies like artificial intelligence and machine learning. This is particularly true in healthcare, where the ability to leverage real-world data from multiple sources — claims, electronic health records and other patient-specific information — can revolutionize decision-making processes across the healthcare ecosystem.

Healthcare organizations are eager to tap into patient healthcare data to get actionable insights that can help track compliance, determine outcomes with greater certainty and personalize patient care. Life sciences companies can use anonymized patient data to improve drug development — real-world evidence is advancing opportunities to improve outcomes and expand on research into new therapies. But with this ability comes an even greater need to ensure that patients’ data is safeguarded.
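
What safeguarding can look like in practice is easiest to show with a small example. The sketch below illustrates one common building block, pseudonymization, using invented field names and a placeholder secret: direct identifiers are replaced with keyed, irreversible tokens before data leaves the custodian, while the tokens stay stable so records for the same patient can still be linked across sources.

```python
import hashlib
import hmac

# Hypothetical secret held only by the data custodian; it is never
# shipped alongside the data, so tokens cannot be reversed downstream.
PEPPER = b"replace-with-a-securely-stored-secret"

def pseudonymize(patient_id: str) -> str:
    """Replace a direct identifier with a keyed, irreversible token.

    HMAC-SHA256 keeps the mapping stable: the same patient always gets
    the same token, so records can be linked across claims and EHR
    sources without exposing the raw identifier.
    """
    return hmac.new(PEPPER, patient_id.encode(), hashlib.sha256).hexdigest()

# An invented record; note the quasi-identifier is already generalized
# (an age band rather than a birth date).
record = {"patient_id": "MRN-004217", "dx": "C50.9", "age_band": "40-49"}
safe_record = {**record, "patient_id": pseudonymize(record["patient_id"])}
print(safe_record)
```

Real deployments layer this with access controls and formal anonymization techniques; under the GDPR, pseudonymized data still counts as personal data, so this is a building block, not a complete solution.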

Trust — a crucial commodity

The data economy of the future is based on one crucial premise: trust. I, as a citizen or consumer, need to trust that you will handle my data safely and protect my privacy. I need to trust that you will not gather more data than I have authorized. And finally, I have to trust that you will use the data only for the agreed-upon purposes. If you consciously or even inadvertently break our mutual understanding, you will lose my loyalty and perhaps even the most valuable commodity — access to all my personal data.

Unfortunately, the Facebook case is not unique. Breaches of the European Union’s General Data Protection Regulation (GDPR) leading to huge fines are reported almost daily. What’s more, the continual breaches and noncompliance are affecting the credibility of and trust in software vendors. It’s not surprising that citizens don’t trust companies and public institutions to handle their personal data properly.

The challenge is to embrace new technology while at the same time acting as a digitally responsible society. Evangelizing new technology and preaching only its positive elements is not the way forward. As a society, we must make sure that privacy, security, and ethical and moral considerations go hand in hand with technology adoption. This social maturity curve might not follow Moore's law of extremely rapid growth in computing power, which means that — regardless of whether society has adapted — digital advancement will prevail. But we can't simply have conversations that preach the value of new technology without addressing how it will impact us as a community and as citizens.

Trust is a crucial commodity, and ensuring that trust means demonstrating an ethical approach to the collection, storage and handling of data. If users don’t trust that their data will be processed in keeping with current privacy legislation, the opportunities to leverage large amounts of data to advance important goals — such as real-world data to improve healthcare outcomes or to advance research in drug development — will not be realized. Consumers will quickly turn their backs on vendors and solutions they do not trust — and for good reason!

Rigorous approach to privacy

Ethics and trust have become new prerequisites for technology providers trying to create a competitive advantage in the digital industry, and only the most ethical companies will succeed. Governments, vendors and others in the data industry must take a rigorous approach to security and privacy to ensure that trust. And healthcare and other organizations looking to work with software vendors and service providers must consider their choices carefully. Key considerations when acquiring digital solutions include:

  • How should I evaluate future vendors when it comes to security and data ethics?
  • How can I use existing data in new contexts, and what will a roadmap toward new data-based solutions look like? How will my legacy applications fit into this new strategy?
  • How will data ethics and security be reflected in my digital products, and how should access to data be managed?
  • How can I ensure I am engaging with a vendor that not only understands its products but can also handle managed security services or other cybersecurity and privacy requirements before any breach occurs?

Using technology to create an advantage is no longer about collecting and storing data; it’s about how to handle the data and understand the impact that data solutions will have on our society. In healthcare — where consumers expect their data to be used to help them in their journey to good health and wellness — that’s especially true. Healthcare organizations need to demonstrate that they have consumers’ safety, security and well-being at the heart of everything they do.
