The precise definition of artificial intelligence (AI) is disputed among researchers in the technology industry. Researchers commonly ask questions such as: What distinguishes AI from other computational software? What does it mean for a being or a computer to be intelligent?
It is important to think about these questions as you conduct research for your cases, but the following standard is broadly accepted in the field. An AI system can identify patterns in large, complex data sets without explicit programming instructions, and use those patterns to respond to changes in its environment. Based on this definition, a robot that performs a repetitive task on a car assembly line does not qualify as AI, while a self-driving car does.
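The distinction drawn above can be made concrete with a small sketch. The program below is a minimal, hypothetical example of a 1-nearest-neighbor classifier: nothing in the code states the rule that separates the two classes, yet the program recovers it from labeled examples, which is the "identifying patterns without explicit programming instructions" standard in miniature. The data and labels are invented for illustration.

```python
# A minimal sketch of learning from examples rather than explicit rules.
# The "pattern" (small values are 'A', large values are 'B') is never
# written down anywhere; it is inferred from the labeled training data.

def classify(point, examples):
    """Return the label of the closest labeled example (1-nearest neighbor)."""
    def dist(a, b):
        # Squared Euclidean distance between two feature tuples.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(examples, key=lambda ex: dist(point, ex[0]))[1]

# Hypothetical labeled training data: (features, label) pairs.
training = [
    ((1.0, 1.2), "A"), ((0.8, 0.9), "A"),
    ((8.5, 9.0), "B"), ((9.2, 8.7), "B"),
]

print(classify((1.1, 1.0), training))  # near the 'A' cluster
print(classify((9.0, 9.0), training))  # near the 'B' cluster
```

By this standard, the assembly-line robot fails the test because its behavior is fully specified in advance, while a learned classifier's behavior depends on the patterns in its training data.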
The computer program AlphaGo is another illustrative example of AI. Created by Google DeepMind, AlphaGo defeated Lee Sedol, one of the world's strongest players of the complex strategy game Go, in 2016. AlphaGo analyzed millions of games to identify successful strategies, and it was able to apply that learned knowledge in the context of a live match.
Today, AI systems have narrow applications. But researchers at Google and other technology companies hope to develop “Artificial General Intelligence”: systems that approach human-level cognition and can perform a wide range of tasks across many contexts.
Current Applications of AI
One of the more visible applications of AI, in development for the last several years, is the self-driving (autonomous) car. Autonomous cars have driven millions of miles on American roads and have become better at identifying and avoiding potential hazards. Proponents argue that these cars are safer and far less accident-prone than human drivers. Individual states in the US are now considering regulations to allow autonomous vehicles to operate on roads without backup drivers.
AI systems designed to analyze MRI and CT scans are another example currently in development. Researchers predict that these systems will soon identify diseases more effectively and efficiently than human radiologists, and that many functions of diagnosis and health evaluation may eventually be taken over by computers.
AI systems are also being used in many business and government settings to manage large quantities of data. For example, AI can be used to decide payouts on insurance claims. Computer systems have been developed that can look through the documents related to a case, take note of relevant information such as length of hospital stay and injury type, and use this information to calculate a payout. Such systems process this information far more quickly, and with fewer errors, than human employees. In 2017, Fukoku Mutual Life Insurance replaced the 34 employees who calculated payouts with an AI system based on IBM’s Watson Explorer. The company expects the system to increase efficiency by 30 percent and to pay for itself within two years.
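The payout step of the pipeline described above can be sketched in a few lines. This is a hypothetical illustration only: the field names, daily rate, and injury multipliers are invented, and a real system (such as one built on Watson Explorer) would first have to extract these values from unstructured case documents.

```python
# Hypothetical payout calculation over fields extracted from case documents.
# All rates and multipliers below are invented for illustration.

INJURY_MULTIPLIER = {"minor": 1.0, "moderate": 1.5, "severe": 2.5}
DAILY_RATE = 120.0  # assumed payout per day of hospital stay

def calculate_payout(claim):
    """Compute a payout from structured fields extracted from a claim."""
    days = claim["hospital_days"]
    multiplier = INJURY_MULTIPLIER[claim["injury_type"]]
    return round(days * DAILY_RATE * multiplier, 2)

claim = {"hospital_days": 10, "injury_type": "moderate"}
print(calculate_payout(claim))  # 10 * 120.0 * 1.5 = 1800.0
```

The hard part in practice is not this arithmetic but the document-understanding step that fills in the `claim` dictionary, which is where the AI does its work.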
Social, Political, and Economic Implications of AI
AI poses many of the same threats as mechanical automation, but it stands out in both the number of jobs it could render obsolete and the types of jobs it threatens. By some estimates, approximately half of all jobs could be automated by computers within the next twenty years. The impact of self-driving cars alone could eliminate entire job categories in the transportation and shipping industries. Other jobs threatened by automation include loan officers, claims adjusters, bond traders, and hospital technicians. Essentially, any job built around repetitive tasks is at risk of being done by a computer in the future.
Many critics of AI also point to possible military or surveillance applications of the technology. While the mass collection of personal information has long been considered harmful, combining these practices with a system capable of analyzing the data in close to real time could allow governments to monitor individuals like never before. For example, facial and voice recognition technology, combined with existing monitoring infrastructure such as CCTV, could allow a government to track an individual with unprecedented precision.
There is also a fear that data from social media and other sources could be used by governments to create registries of individuals belonging to certain political organizations or religions, aiding persecution or abuse. A study from Cambridge University showed that a person’s religion could be predicted in over 80 percent of cases based on what they “liked” on social media. Because AI-based systems can analyze huge amounts of data from various sources and use it to make predictions, there is a fear that this ability to categorize individuals will only become faster and more accurate.
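A toy sketch shows how categorization from “likes” can work in principle. This is deliberately simplistic and uses invented page names and labels; real studies such as the Cambridge work cited above fit statistical models over millions of users rather than counting overlaps.

```python
# A simplified, hypothetical sketch of predicting a category from "likes":
# score a profile against the like-patterns seen in labeled training data.

from collections import Counter

def train(profiles):
    """Count how often each liked page appears under each label."""
    counts = {}
    for likes, label in profiles:
        counts.setdefault(label, Counter()).update(likes)
    return counts

def predict(likes, counts):
    """Pick the label whose training likes overlap most with this profile."""
    def score(label):
        return sum(counts[label][like] for like in likes)
    return max(counts, key=score)

# Invented training profiles: (set of liked pages, group label).
training = [
    ({"page_a", "page_b"}, "group_1"),
    ({"page_a", "page_c"}, "group_1"),
    ({"page_x", "page_y"}, "group_2"),
]
model = train(training)
print(predict({"page_a", "page_z"}, model))  # overlaps group_1's pages
```

The concern raised in the text is precisely that this kind of inference, scaled up and automated, turns innocuous public data into a tool for building registries.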
The Frankenstein Scenario
Some skeptics are not just concerned about the short-term implications of AI. Elon Musk, Stephen Hawking, Nick Bostrom, and others have argued that the development of super-intelligent machines could pose an existential threat to humanity.
Their argument goes like this: an advanced AI that is given a harmless objective, and that is equipped with the ability to constantly self-improve, might develop harmful instrumental goals along the way. This is possible because machines pursue their objectives literally, without the ethical values that humans take for granted. In Nick Bostrom’s deliberately extreme example, an AI system tasked with manufacturing paperclips might eventually enslave all of humanity to achieve its goal more efficiently.
Debaters should engage with this Frankenstein scenario of AI systems turning on their human creators. Most AI researchers, however, consider it far-fetched: an AI system capable of this level of complex thought is probably still hundreds of years away, and such a system would not necessarily be hostile toward humans.
How far away are we from developing Artificial General Intelligence? Will these systems be controllable? Can protections be built in?