
The 7 Most Pressing Ethical Issues in Artificial Intelligence

Between 1927 and 2019, more than 100 films about artificial intelligence were produced worldwide. And while some depict AI in a good light, the rest are downright horrific. In movies such as The Terminator, The Matrix, Avengers: Age of Ultron and many others, the film industry has planted scenes in our shared imagination showing how more intelligent machines take over the world and enslave humanity or wipe it from existence entirely. The potential for AIs to become superior to any human intelligence paints a dark future for humanity.

More recently, countries all over the world have entered the race to develop artificial intelligence with 20 countries in the EU releasing their strategies on AI development in both R&D and education. Artificial intelligence is red hot. But what ethical and practical issues should we consider while moving full-steam ahead in embracing AI technology? In our shared goal to transform business sectors using machine intelligence, what risks and responsibilities should innovators consider?

Yes, AI agents will be -- and already are -- very capable of carrying out processes that parallel human intelligence. Universities, private organizations and governments are actively developing artificial intelligence that can mimic human cognitive functions such as learning, problem-solving, planning and speech recognition. But if these agents lack empathy, instinct and wisdom in decision-making, should their integration into society be limited, and if so, in what ways?

Let’s review some of the ethical considerations in the AI space. By way of disclaimer, this article is by no means meant to sway your opinion, but merely to highlight some of the salient issues, both large and small. While Kambria is a supporter of AI and robotics technology, we are by no means ethics experts and leave it up to you to decide where you stand. A robot vacuum is one thing, but ethical questions around AI in medicine, law enforcement, military defense, data privacy, quantum computing, and other areas are profound and important to consider.

1. Job Loss and Wealth Inequality

One of the primary concerns people have with AI is future loss of jobs. Should we strive to fully develop and integrate AI into society if it means many people will lose their jobs -- and quite possibly their livelihood? 

According to a McKinsey Global Institute report, as many as 800 million people could lose their jobs to AI-driven robots by 2030. Some would argue that if their jobs are taken by robots, perhaps those jobs are too menial for humans, and that AI can be responsible for creating better jobs that take advantage of uniquely human abilities involving higher cognitive functions, analysis and synthesis. Another point is that AI may create more jobs -- after all, people will be tasked with creating these robots to begin with and then managing them in the future.

One issue related to job loss is wealth inequality. Consider that most modern economic systems require workers to produce a product or service, with their compensation based on an hourly wage. The company pays wages, taxes and other expenses, with leftover profits often injected back into production, training and/or creating more business to further increase profits. In this scenario, the economy continues to grow.

But what happens if we introduce AI into the economic flow? Robots do not get paid hourly, nor do they pay taxes. They can contribute at 100% capacity with low ongoing costs to keep them operable and useful. This opens the door for CEOs and stakeholders to keep more of the company profits generated by their AI workforce, leading to greater wealth inequality. Perhaps this could lead to a case of “the rich” -- those individuals and companies who have the means to pay for AIs -- getting richer.

2. AI is Imperfect -- What if it Makes a Mistake?

AIs are not immune to making mistakes, and machine learning takes time to become useful. If trained well, using good data, AIs can perform well. However, if we feed AIs bad data or make errors in internal programming, the AIs can be harmful. Take Microsoft’s AI chatbot, Tay, which was released on Twitter in 2016. In less than one day, due to the information it was receiving and learning from other Twitter users, the bot learned to spew racist slurs and Nazi propaganda. Microsoft shut the chatbot down immediately; allowing it to live would have obviously damaged the company’s reputation.

Yes, AIs make mistakes. But do they make greater or fewer mistakes than humans? How many lives have humans taken with mistaken decisions? Is it better or worse when an AI makes the same mistake?

3. Should AI Systems Be Allowed to Kill?

In a TEDx talk, Jay Tuck describes AIs as software that writes its own updates and renews itself. This means the machine is not limited to doing what it was created to do -- it does what it learns to do. Jay goes on to describe an incident with a robot called Tallon. Its computerized gun jammed and opened fire uncontrollably after an explosion, killing 9 people and wounding 14 more.

Predator drones, such as the General Atomics MQ-1 Predator, have been in existence for over a decade. These remotely piloted aircraft can fire missiles, although US law requires that humans make the actual kill decisions. But with drones playing a larger role in aerial military defense, we need to further examine their role and how they are used. Is it better to use AIs to kill than to put humans in the line of fire? What if we only use robots for deterrence rather than actual violence?

The Campaign to Stop Killer Robots is a non-profit organized to ban fully-autonomous weapons that can decide who lives and dies without human intervention. “Fully autonomous weapons would lack the human judgment necessary to evaluate the proportionality of an attack, distinguish civilian from combatant, and abide by other core principles of the laws of war. History shows their use would not be limited to certain circumstances.”

4. Rogue AIs

If there is a chance that intelligent machines can make mistakes, then it is within the realm of possibility that an AI can go rogue, or create unintended consequences while pursuing seemingly harmless goals. One rogue-AI scenario is the one we’ve already seen in movies like The Terminator and in TV shows, where a super-intelligent, centralized AI computer becomes self-aware and decides it no longer wants human control.

Right now experts say that current AI technology is not yet capable of achieving this extremely dangerous feat of self-awareness; however, future AI supercomputers might.

The other scenario is one where an AI is tasked, for instance, with studying the genetic structure of a virus in order to create a vaccine to neutralize it. After lengthy calculations, the AI formulates a solution that weaponizes the virus instead of producing a vaccine from it. It’s like opening a modern-day Pandora’s box, and once again ethics comes into play: legitimate concerns need to be addressed in order to prevent a scenario like this.

5. Singularity and Keeping Control Over AIs

Will AIs evolve to surpass human beings? What if they become smarter than humans and then try to control us? Will computers make humans obsolete? The point at which technological growth surpasses human intelligence is referred to as the “technological singularity.” Some believe this will signal the end of the human era and that it could occur as early as 2030, based on the pace of technological innovation. AIs leading to human extinction -- it’s easy to understand why the advancement of AI is scary to many people.

6. How Should We Treat AIs?

Should robots be granted human rights or citizenship? If we evolve robots to the point that they are capable of “feeling,” does that entitle them to rights similar to humans or animals? If robots are granted rights, then how do we rank their social status? This is one of the primary issues in “roboethics,” a topic that was first raised by Isaac Asimov in 1942. In 2017, the Hanson Robotics humanoid robot, Sophia, was granted citizenship in Saudi Arabia. While some consider this to be more of a PR stunt than actual legal recognition, it does set an example of the type of rights AIs may be granted in the future. 

7. AI Bias

AI has become increasingly embedded in facial and voice recognition systems, some of which have real business implications and directly impact people. These systems are vulnerable to biases and errors introduced by their human makers. Also, the data used to train these AI systems can itself be biased. For instance, facial recognition algorithms made by Microsoft, IBM and Megvii all showed biases when detecting people’s gender: these systems detected the gender of white men more accurately than that of darker-skinned men. Similarly, Amazon.com’s termination of its AI hiring and recruitment tool is another example showing that AI is not automatically fair; the algorithm preferred male candidates over female ones. This was because Amazon’s system was trained with data collected over a 10-year period that came mostly from male candidates.

Can AI become biased? Well, that’s a tricky question. One could argue that intelligent machines do not have a moral compass or a set of principles the way we humans do. However, even our moral compasses and principles sometimes do not benefit humanity as a whole, so how do we ensure that AI agents do not have the same flaws as their creators? If an AI develops a certain bias towards or against a race, gender, religion or ethnicity, then the fault will lie mostly in how it was taught and trained. Therefore, people who work in AI research need to keep bias in mind when determining what data to use.
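To make the mechanism concrete, here is a minimal, hypothetical sketch (in Python with scikit-learn and entirely synthetic data, not Amazon’s actual system) of how a model trained on skewed historical hiring decisions can learn to favor one group even when candidates are equally skilled:

```python
# Illustrative sketch only: a toy classifier trained on skewed historical
# hiring data learns to prefer one gender. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Features: [is_male (0/1), skill score]; skill is drawn identically for everyone.
is_male = rng.integers(0, 2, size=n)
skill = rng.normal(0.0, 1.0, size=n)
X = np.column_stack([is_male, skill])

# Historical labels: past recruiters hired on skill, but also hired many men
# regardless of skill -- this is the bias baked into the training data.
hired = ((skill > 0.5) | ((is_male == 1) & (rng.random(n) < 0.4))).astype(int)

model = LogisticRegression().fit(X, hired)

# Two candidates with identical skill, differing only in gender.
candidates = np.array([[1, 0.0], [0, 0.0]])
print(model.predict_proba(candidates)[:, 1])  # the male candidate scores higher
```

The model never “decides” to discriminate; it simply reproduces the pattern baked into its training data, which is why curating that data matters so much.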

Summary

Yes, the thought of increasingly present AI systems that surpass human intelligence is scary. And the ethical issues that come with AI adoption are complex. The key will be to keep these issues in mind as we analyze the broader societal questions at play. Whether AI is good or bad can be examined from many different angles, with no one theory or framework being the best. We need to keep learning and stay informed in order to make good decisions for our future.

Open Call for Writers

Kambria Content Challenge 2019

Do you like writing about tech topics like this one? Then join the Kambria Content Challenge and share your insight and expertise with our growing developer community. You could receive over $200 for the best submission. For complete details about our Content Challenge, click here.

Author

Kambria is the first decentralized open innovation platform for Deep Tech (AI, Robotics, Blockchain, VR/AR…). Using our platform, anyone can collaborate in researching, developing and commercializing innovative ideas and get rewarded fairly for their contributions. Through partnerships with government agencies, top universities and leading companies, Kambria is dedicated to building a sustainable open innovation ecosystem to change the way we innovate, and to accelerate advanced technology development and industry adoption. Together, let’s shape a future where technology is open and contributes more to society.