A technological singularity is a point at which the capability for technological progress becomes both immense and self-reinforcing (that is, each improvement promptly makes another improvement possible, allowing rapid and open-ended progress). In particular, it refers to a theoretical point in the future at which intelligence and technological achievement will grow without bound as a result of the growth of artificial intelligence, although the concept may also be applied, in a more limited sense, to periods of rapid human progress since the Stone Age.
Such progress, however, cannot continue indefinitely without reaching a point at which its social and other consequences cause a sharp break from everything that has come before – the point of the singularity itself. Beginning in the 1950s, thinkers such as the mathematician Stanisław Ulam and the statistician I. J. Good observed that technological progress appeared to be heading toward some such dramatic breaking point. In particular, Good argued that if human designers were ever able to create a computer more intelligent than a human being (the end point of artificial intelligence), then logically that computer would be capable of designing an even more intelligent computer, just as its human designers had invented a machine more intelligent than themselves. The new computer would in turn design an even more intelligent computer, and so on, until eventually some form of entirely different superintelligent entity emerged.
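Good’s feedback loop can be sketched as a toy numerical model. In the illustrative Python below, the starting skill level and the improvement rate are invented figures rather than estimates; the point is only that a gain which compounds on itself grows explosively.

```python
# Toy model of the self-improvement loop Good described.
# All figures are hypothetical illustrations, not predictions.

design_skill = 1.0       # human-level design ability (arbitrary units)
improvement_rate = 0.10  # assume each machine improves on its designer by 10%

for generation in range(1, 11):
    # Each machine designs a successor somewhat better than itself;
    # because the gain is proportional to current skill, growth compounds.
    design_skill *= 1 + improvement_rate
    print(f"generation {generation:2d}: design skill = {design_skill:.2f}")

# If the improvement rate itself rose with capability, growth would be
# faster than exponential - the runaway that singularity theorists describe.
```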
In its original form, then, the theory referred specifically to the technology of artificial intelligence. Subsequent thinkers, such as Ray Kurzweil, have applied the same idea of a rapid chain of self-reinforcing progress to other areas of past human endeavour, including the rise of agriculture and the Industrial Revolution. Still others have suggested that we will eventually reach a medical singularity: a point at which medical research increases average life expectancy by more than one year for every year of research, leading, in effect, to indefinite life expectancy.
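The arithmetic behind that threshold is simple to sketch. The Python below uses invented starting figures purely for illustration, and shows why a gain of more than one year of expectancy per calendar year is the dividing line.

```python
# Illustration of the "more than one year per year" threshold.
# Starting figures are invented purely for illustration.

def years_remaining(start_remaining, annual_gain, horizon=100):
    """Track remaining life expectancy when research adds
    annual_gain years of expectancy per calendar year."""
    remaining = start_remaining
    for year in range(1, horizon + 1):
        remaining -= 1            # one calendar year passes
        remaining += annual_gain  # research adds new expectancy
        if remaining <= 0:
            return f"expectancy exhausted after {year} years"
    return f"still {remaining:.0f} years remaining after {horizon} years"

print(years_remaining(30, 0.5))  # below the threshold: expectancy runs out
print(years_remaining(30, 1.2))  # above the threshold: expectancy keeps growing
```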
The idea of rapid and indefinite progress carries seemingly positive and optimistic implications – and this is certainly the case in, for example, speculation about medical achievements. The original theorists of the technological singularity, however, were equally concerned that this progress might have a darker side. It is still far from clear that the long-term evolutionary future of the human race is assured by either agriculture (which carries with it risks of overpopulation and soil exhaustion) or industrialization (which produces immense pollution and, potentially, climate change). If an artificial intelligence rivalling or superior to human intellect is ever developed, it may provoke the sort of questions about ethics and “human” rights that have long been the stuff of science fiction. Whether it does or not, it will certainly create the risk that a parallel group of intelligent entities, with values different from our own, will grow in power alongside us – yet another common trope of science fiction. Could fragile, intellectually limited human beings one day become an unnecessary expense and liability to these future supercomputers?
Most of this remains largely speculative, of course. That a computer can exceed the raw processing capacity of the human brain is quite different from the claim that a computer would actually possess the sort of sentience and self-awareness found in living beings. Moreover, however tempting the logic of the technological singularity may seem, economic theory suggests that progress in a given field tends to slow over time because of the law of diminishing returns: each subsequent improvement requires a much greater investment of time and resources than the one before it, until eventually inventors decide further tinkering simply isn’t worthwhile.
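That diminishing-returns argument can likewise be sketched with hypothetical numbers: if each improvement delivers the same benefit but costs progressively more, there comes a point at which further improvement is no longer worth pursuing.

```python
# Toy illustration of diminishing returns: constant benefit per
# improvement, geometrically rising cost.  All numbers are hypothetical.

benefit = 10.0      # value of each improvement (arbitrary units)
cost = 1.0          # cost of the first improvement
cost_growth = 1.5   # assume each improvement costs 50% more than the last

improvement = 0
while cost <= benefit:
    improvement += 1
    print(f"improvement {improvement}: cost {cost:5.2f}, benefit {benefit:5.2f}")
    cost *= cost_growth

print(f"after improvement {improvement}, further work is no longer worthwhile")
```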