The first use of the term "singularity" in this context was by mathematician John von Neumann. In 1958, regarding a summary of a conversation with von Neumann, Stanislaw Ulam described "ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue".[3] The term was popularized by science fiction writer Vernor Vinge, who argues that artificial intelligence, human biological enhancement, or brain–computer interfaces could be possible causes of the singularity.[4] Futurist Ray Kurzweil cited von Neumann's use of the term in a foreword to von Neumann's classic The Computer and the Brain.
Proponents of the singularity typically postulate an "intelligence explosion",[5][6] where superintelligences design successive generations of increasingly powerful minds, that might occur very quickly and might not stop until the agent's cognitive abilities greatly surpass that of any human.
Many of the most recognized writers on the singularity, such as Vernor Vinge and Ray Kurzweil, define the concept in terms of the technological creation of superintelligence. They argue that it is difficult or impossible for present-day humans to predict what human beings' lives will be like in a post-singularity world. [7][8][10] The term "technological singularity" was originally coined by Vinge, who made an analogy between the breakdown in our ability to predict what would happen after the development of superintelligence and the breakdown of the predictive ability of modern physics at the space-time singularity beyond the event horizon of a black hole.[10]
The term "technological singularity" reflects the idea that such change may happen suddenly, and that it is difficult to predict how the resulting new world would operate.[29][30] It is unclear whether an intelligence explosion of this kind would be beneficial or harmful, or even an existential threat,[31][32] as the issue has not been dealt with by most artificial general intelligence researchers.
Vinge predicted four ways the singularity could occur:[49]
- The development of computers that are "awake" and superhumanly intelligent
- Large computer networks (and their associated users) may "wake up" as a superhumanly intelligent entity
- Computer/human interfaces may become so intimate that users may reasonably be considered superhumanly intelligent
- Biological science may find ways to improve upon the natural human intellect
Berglas (2008) notes that there is no direct evolutionary motivation for an AI to be friendly to humans. Evolution has no inherent tendency to produce outcomes valued by humans, and there is little reason to expect an arbitrary optimisation process to promote an outcome desired by mankind, rather than inadvertently leading to an AI behaving in a way not intended by its creators (such as Nick Bostrom's whimsical example of an AI which was originally programmed with the goal of manufacturing paper clips, so that when it achieves superintelligence it decides to convert the entire planet into a paper clip manufacturing facility).[71][72][73] Anders Sandberg has also elaborated on this scenario, addressing various common counter-arguments.[74] AI researcher Hugo de Garis suggests that artificial intelligences may simply eliminate the human race for access to scarce resources,[65][75] and humans would be powerless to stop them.[76] Alternatively, AIs developed under evolutionary pressure to promote their own survival could outcompete humanity.[68]
Bostrom (2002) discusses human extinction scenarios, and lists superintelligence as a possible cause:
When we create the first superintelligent entity, we might make a mistake and give it goals that lead it to annihilate humankind, assuming its enormous intellectual advantage gives it the power to do so. For example, we could mistakenly elevate a subgoal to the status of a supergoal. We tell it to solve a mathematical problem, and it complies by turning all the matter in the solar system into a giant calculating device, in the process killing the person who asked the question.
No comments:
Post a Comment