“In the years that followed [the creation of Alan Turing’s Turing test], programs called chatbots, capable of conducting conversations, appeared to pass the Turing test by fooling unsuspecting humans into thinking they were intelligent. The first of these, ELIZA, was invented in 1966 by MIT professor Joseph Weizenbaum (1923–2008). In one case, ELIZA was left running on a teletype, and a visitor to Weizenbaum’s office thought he was text-chatting with Weizenbaum at his home office, rather than with an artificial intelligence (AI) program. According to experts, however, ELIZA didn’t pass the Turing test, because the visitor wasn’t told in advance that the ‘person’ at the other end of the teleprinter might be a computer.” Fair Use Source: B07C2NQSPV
bot, ‘bot n. [abbr. of robot] a ROBOT.
“1969 R. Meredith We All Died at Breakaway Station in Amazing Stories (Jan.) 130/2: When they got my ship the only part of me that the ‘bots were able to get into cold-sleep was my head, shoulders and a part of my spine.”
“1977 G. Benford Snatching Bot in Cosmos SF & Fantasy Mag. (May) 25/1: “What’s your name, little bot?” The robot squats mutely.”
“1984 D. Brin Practice Effect 23: Compared with some of the sophisticated machines Dennis had worked with, the exploration ‘bot wasn’t very bright.”
“1991 M. Weiss King’s Test 8: Yanking it off, he tossed it over his shoulder to the ‘bot.”
“2001 Time (Nov. 19) 87: This Pentium-powered bot uses sonar sensors to keep her from bumping into walls […] as she rolls along.”
An Android is “a synthetically created human, usually of organic or biological origin. The term was not generally adopted until sometime in the early-to-mid 1940s.
These artificially created ‘people’ are often developed for the purposes of slavery or forced labour, and in this respect are thematically akin to ROBOTS. Indeed, at times the terms appear to be interchangeable, especially when considering the work of PHILIP K. DICK.
An example of a proto-android in SF is the monster created by the eponymous doctor in MARY SHELLEY’S Frankenstein (1818). In it, the character of Frankenstein uses surgery and electricity to construct and eventually reanimate a human being. This ‘monster’, assembled out of various exhumed body parts, is cast out by society and eventually confronts its maker. It is a superb study of the morality of using science as a means of controlling the processes of life.
In modern SF, however, one of the most startling treatments of androids can be found in Philip K. Dick’s Do Androids Dream of Electric Sheep? (1968). The androids are here portrayed as a victimised minority attempting to escape the shackles of human oppression. By finding their way back to Earth from the colony planet Mars and asserting their independence, they make themselves targets for termination; society at large views the androids as a group of malfunctioning machines. In a similar way to Shelley in Frankenstein, Dick illustrates the repulsiveness, and yet the inevitability, of such a response. Indeed, one need only look at the public reaction to genetically modified foods towards the end of the 1990s as a measure of how society at large reacts to this kind of scientific alteration of nature.
ROBERT SILVERBERG, in his Tower of Glass (1970), describes how a man asserts power over a group of androids by assuming the role of MESSIAH. However, the androids become wise to his ploy, and his pretence becomes the grounds of his downfall.
Scepticism about the blind assumption that, through science, mankind can become master of life and death pervades much of the writing about androids.
“The Three Laws of Robotics (often shortened to The Three Laws or known as Asimov’s Laws) are a set of rules devised by the science fiction author Isaac Asimov. The rules were introduced in his 1942 short story “Runaround” (included in the 1950 collection I, Robot), although they had been foreshadowed in a few earlier stories. The Three Laws, quoted as being from the “Handbook of Robotics, 56th Edition, 2058 A.D.”, are:
- First Law – A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- Second Law – A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- Third Law – A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
- Zeroth Law – A robot may not harm humanity, or, by inaction, allow humanity to come to harm.”
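The distinctive feature of the laws is their strict precedence: each law yields only where it conflicts with a higher one. As a purely illustrative sketch (not from Asimov, and with the "through inaction" and Zeroth Law clauses omitted for brevity), that priority ordering can be modeled as a simple rule check; the `Action` fields and the `permitted` helper here are hypothetical names invented for this example:

```python
# Toy model of the Three Laws' priority ordering. The Action fields and
# permitted() are hypothetical, invented for illustration only; the
# "through inaction" and Zeroth Law clauses are not modeled.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool = False       # would injure a human (First Law)
    ordered_by_human: bool = False  # was commanded by a human (Second Law)
    endangers_self: bool = False    # risks the robot's existence (Third Law)

def permitted(action: Action) -> bool:
    """Apply the laws in strict priority order."""
    if action.harms_human:
        return False                    # First Law overrides everything
    if action.ordered_by_human:
        return True                     # Second Law yields only to the First
    return not action.endangers_self    # Third Law ranks last

# A harmful order is refused; a harmless order is obeyed even at self-risk.
print(permitted(Action(harms_human=True, ordered_by_human=True)))    # False
print(permitted(Action(ordered_by_human=True, endangers_self=True))) # True
```

The ordering of the `if` checks is the whole point: swapping them would let an order override the prohibition on harm, which is exactly the failure the hierarchy is designed to prevent.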
“And how will the machines take over? Is the best, most realistic scenario threatening to us or not? When posed with this question, some of the most accomplished scientists I spoke with cited science-fiction writer Isaac Asimov’s Three Laws of Robotics. These rules, they blithely replied, would be “built in” to the AIs, so we have nothing to fear. They spoke as if this were settled science. We’ll discuss the three laws in chapter 1, but it’s enough to say for now that when someone proposes Asimov’s laws as the solution to the dilemma of superintelligent machines, it means they’ve spent little time thinking or exchanging ideas about the problem. How to make friendly intelligent machines and what to fear from superintelligent machines has moved beyond Asimov’s tropes. Being highly capable and accomplished in AI doesn’t inoculate you from naïveté about its perils.”