Googling the Future: The Singularity of Ray Kurzweil

David Sanford Horner


Stories of social, cultural and economic futures underwritten by the latest advances in technology are a familiar trope (Seidensticker, 2006; Selin, 2007). In this paper I will extend my recent work, in which I have argued that the only rational response to the claims of ‘futurism’ should be one of profound scepticism (Horner, 2005; Horner, 2007a; Horner, 2007b). It might be said that some claims are better than others: a populist futurism may be easily brushed aside, but more serious, evidentially based work surely must be taken more seriously. However, what I hope to show is that the problems that beset forecasting are not simply matters of inadequate technique and poor evidence, but that the enterprise is conceptually and logically flawed, and that this itself has important ethical implications. This seems a miserable conclusion, given that foresight has been presented as a principal means by which we might deal with the threats of uncertain futures. To illustrate the argument I analyse in some detail Ray Kurzweil’s The Singularity is Near: when humans transcend biology (2005). The book is striking in the reach and depth of its projections (taking us well beyond Web 2.0!) in envisaging a future in which information technologies have developed exponentially to create the conditions for humanity to transcend its biological limitations. Kurzweil describes this as ‘the Singularity’: “…It’s a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed. Although neither utopian or dystopian, this epoch will transform the concepts that we rely on to give meaning to our lives, from our business models to the cycle of human life, including death itself” (Kurzweil, 2005, p.7). He makes the remarkable claim that: “There will be no distinction, post-Singularity, between human and machine or between physical and virtual reality” (Kurzweil, 2005, p.9).
It is important to note that Kurzweil says that this ‘will’ happen; there is not even a cautionary ‘may’ happen. The book is suffused with that sense of historical inevitability famously criticized by Isaiah Berlin (1954), the ethical implication of which is a profound loss of human freedom. What underlies this vision is an idea with a long pedigree: that history conforms to natural (or supernatural) laws which in themselves constitute the basis for knowledge about future states of the world. Kurzweil in The Singularity is Near indeed presents ‘a theory of technological evolution’ as justification for the shape of future human society. In the manner of Karl Marx or Herbert Spencer, he rejects a so-called linear view of historical development in favour of a vast historical canvas of six epochs that are driven, in a law-like manner (‘the law of accelerating returns’), by the exponential growth of information and technology. I argue that this view is flawed in at least two fundamental respects. Firstly, it disregards the notion of limiting factors, which apply even in the case of the growth of science and technology (Barrow, 1999; Edgerton, 2006). Secondly, it mistakes phenomena that may be temporary, local and limited for a metaphysical principle (Seidensticker, 2006, pp. 63-79). But the problems raised here are also ethical. The danger of this kind of futurism is that it radically devalues human choice and our collective ability to shape technological futures (Flew, 1967). Kurzweil’s account is remarkable in its blindness to the long history of the failure of technological foresight to deliver on its promises (Cole et al., 1974). I argue the brutal case that social foresight and technological forecasting are essentially fraudulent activities which at best are temporarily delusive but at worst may constitute a waste of valuable human and material resources.
Following Edgerton’s (2006) account, we need an ethics of ‘technology-in-use’ rather than a hypostatization of so-called technological laws of development. I conclude more hopefully with a brief indication of where we might look for methods of dealing with uncertainty which do not depend on undependable and indefensible knowledge claims about future states of the world.


Barrow, J.D., 1999. Impossibility: the limits of science and the science of limits. London: Vantage.

Berlin, I., 1954. Historical inevitability. Oxford: Oxford University Press.

Cole, H.S.D., Freeman, C., Jahoda, M., and Pavitt, K.L.R., 1974. Thinking about the future: a critique of the Limits to Growth. London: Chatto and Windus.

Edgerton, D., 2006. The shock of the old: technology and global history since 1900. London: Profile Books.

Flew, A. 1967. Evolutionary Ethics. London: Macmillan.

Horner, D.S., 2005. Anticipating ethical challenges: is there a coming era of nanotechnology? Ethics and Information Technology, 7, pp. 127-138.

Horner, D.S., 2007a. Forecasting ethics and the ethics of forecasting: the case of nanotechnology. In: T.W. Bynum, K. Murata, and S. Rogerson, eds. Glocalisation: Bridging the Global Nature of Information and Communication Technology and the Local Nature of Human Beings. ETHICOMP 2007, Vol. 1. Meiji University, Tokyo, Japan, 27-29 March 2007. Tokyo: Global e-SCM Research Centre, Meiji University, pp. 257-267.

Horner, D.S., 2007b. Digital futures: promising ethics and the ethics of promising. In: L. Hinman et al., eds. Proceedings of CEPE 2007: The 7th International Conference of Computer Ethics: Philosophical Enquiry. University of San Diego, 12-14 July 2007. Enschede, The Netherlands: Center for Telematics and Information Technology, pp. 194-204.

Kurzweil, R., 2005. The singularity is near: when humans transcend biology. London: Duckworth.

Selin, C., 2007. Expectations and the emergence of nanotechnology. Science, Technology and Human Values, 32 (2), pp. 196-220.

Seidensticker, B., 2006. Futurehype: the myths of technology change. San Francisco: Barrett-Koehler.
