Philosophy 198 - 10.3.3 Ethics and Emerging Technologies

Almost everyone in the contemporary world uses technologies such as cell phones and computers, but few of us understand how these devices work. This ignorance hampers our ability, as a society, to make informed decisions about how to use technology fairly and judiciously. A further challenge is that technology evolves far faster than societies can respond to it.

[Image: A lifelike head with wires emerging from the top rises out of a block of wood while a person leans over and applies lipstick to its lips.]
Figure 10.10 This image of an android makes many people uncomfortable because it appears so humanlike. Is artificial intelligence a threat to human existence? Will there come a time when robots are afforded what we now call human rights? (credit: “Lipstick” by Steve Jurvetson/Flickr, CC BY 2.0)

Artificial intelligence (AI), originally a feature of science fiction, is in widespread use today. Current examples of AI include self-driving cars and virtual assistants. Philosophers and engineers sort AI into two categories: strong and weak. Strong artificial intelligence refers to machines that perform multiple cognitive tasks as humans do but at a very rapid pace (machine speed). Weak artificial intelligence refers to artificial intelligence that performs primarily one task, such as Apple’s Siri or social media bots. Philosophers of mind such as John Searle (b. 1932) argue that truly strong artificial intelligence does not exist: in his well-known “Chinese room” thought experiment, Searle contends that even the most sophisticated technology merely manipulates symbols and does not possess intentionality the way a human being does. As such, no computer could have anything like a mind or consciousness.

Despite Searle’s assessment, many people—including leaders within the field of computer science—take the threat of AI seriously. In a Pew Research Center survey, industry leaders expressed common concerns over the exposure of individuals to cybercrime and cyberwarfare; infringement on individual privacy; the misuse of massive amounts of data for profit or other unscrupulous aims; the erosion of the technical, cognitive, and social skills that humans require to survive; and job loss (Anderson and Rainie 2018). These concerns may reflect a deeper problem—what Swedish philosopher Nick Bostrom (b. 1973) calls a mismatch between “our ability to cooperate as a species on the one hand and on the other hand our instrumental ability to use technology to make big changes in the world.” Although the leaders surveyed voice the more immediate concerns reflected in the Pew report, Bostrom’s fundamental worry—like those expressed in science fiction literature—is the emergence of a superintelligent machine that does not align with human values and safety (Bostrom 2014).

The content of this course has been taken from the free Philosophy textbook by OpenStax.