William M. Fleischman


This paper is a reflection on my experiences of the past thirteen years teaching the course in computer ethics at Villanova University. If I say that this assignment, which was presented to me unexpectedly in the winter of 1998, has proved to be the most challenging and rewarding of a long career, it is simply that the responsibility of teaching computer ethics has forced me to be a student on the same level as the young people who – when they display the beautiful good will and generosity of the young – have been my companions in thinking through and sorting out the questions we have encountered and addressed during this eventful period.

My observations here are neither abstract nor general. There is a particular topic to which they are immediately connected – the quite contemporary questions related to the use of robotic agents in warfare. At this historical moment, the use of such robotic weapons has an understandable attraction, especially for students who are technically inclined. Considering the set of advantages these robotic agents possess, the development of automated and, in some cases, autonomous weapons may seem an unavoidable imperative [Singer 2009, Arkin 2009]. Of course, critical consideration of the circumstances of their intended deployment reveals a complementary set of disadvantages that argue against indiscriminate use [Singer 2009, Gotterbarn 2010]. Certainly this is a subject that deserves analytical discussion in a setting in which aspiring hardware and software engineers consider and wrestle with the value choices they will face in professional assignments they undertake after graduation.

But there is a larger and, to my mind, more significant theme that has bearing on my students’ understanding of these issues. This theme has to do with the convergence between what my students conceive to be the nature and limitations of human intelligence, and what they conceive as possible through the simulation of human behavior by means of the techniques of artificial intelligence. Of course, this convergence does not originate with the views of students, nor is it confined only to those who are in the initial stages of their intellectual and professional development. It is a tendency of thought decried by Joseph Weizenbaum in his 1972 essay, “On the Impact of the Computer on Society,” and again in his book, Computer Power and Human Reason [Weizenbaum 1972, 1976].

The issue at the heart of this convergence is the subject of a debate that has, for many years, occupied the attention of influential thinkers and practitioners in computer science. The fundamental question or assertion may be phrased in one of several variant forms. “Is the brain merely a ‘meat machine’?” “The human brain is just a network of 10^11 neurons. We’re going to be able to build that soon.” Of course, this is a dream with deep roots in our culture – in literature as well as film. And it constitutes an attractive topic for uncritical treatment in the popular press. Thus, it has many avenues of entrée into the consciousness of young people who eventually gravitate to the study of science and technology.

The first section of this paper will comprise brief remarks about the general approach I take in teaching the course in computer ethics and a more detailed explanation of the unit in which we discuss questions related to the deployment of robotic agents in warfare. In the section that follows, I will take a step back to consider the larger context of the ambitious projects involving the application of artificial intelligence in the simulation of human behavior. This is a question we explore in the ethics course through readings that begin with the public debate in which Weizenbaum engaged with the influential ideas of Herbert Simon and his followers [Simon 1969]. This exploration takes us into the realm of “cyborgian” speculation and experiments [Moravec 1998, Warwick 2000] inspired by Simon’s ideas. In particular, I wish to point out how these speculations and experiments feed the naïve expectation, “We’re going to be able to build this soon,” that many students bring to the topic. And they are at the root of those recurrent moments mentioned in the title of this article, which I will illustrate with several examples provided by my students in the course of discussions concerning robotics, cyborgs, and machine simulation of human behavior.

In a sense, the essay, “On the Impact of the Computer on Society,” is the cornerstone of the course on computer ethics as I conceive it. It is a difficult essay for the students to penetrate, in part because of important elements of historical context that, for students born after the fall of the Berlin Wall (to lay down a convenient chronological marker), lie increasingly in the remote and inaccessible past. But it is also a difficult essay because its short form demands of Weizenbaum, the writer, a severe compression of the broad scope of the argument that Weizenbaum, the thinker, wishes to join with those who have asserted or who have internalized a mechanical conception of human history, culture, and intelligence. And finally it is difficult because there are very few instances in the education of my students in which a scientist speaks to them as loftily yet as bluntly as Weizenbaum does of the danger of losing the accumulated wealth of human culture, of undervaluing the full richness of human intelligence. Thus, I will discuss several strategies for unpacking and illustrating Weizenbaum’s argument in a manner that is meaningful to my students. These strategies underscore the exceptional, joyful, and unmechanical nature of human creativity, something against which the world of this moment mounts altogether too many deadening and discouraging counterexamples.


Arkin, Ronald C. (2009), “Ethical Robots in Warfare,” IEEE Technology and Society Magazine, volume 28, no. 1.

Gotterbarn, Don (2010), “Autonomous Weapons’ Ethical Decisions: ‘I Am Sorry Dave; I Am Afraid I Cannot Do That’,” Proceedings of ETHICOMP 2010, pp. 219-229.

Moravec, Hans (1998), “When Will Computer Hardware Match the Human Brain?” Journal of Evolution and Technology, vol. 1, available online, last accessed 6 February 2011.

Simon, Herbert A. (1969), The Sciences of the Artificial, MIT Press, Cambridge, Massachusetts.

Singer, P. W. (2009), Wired for War, Penguin Press, New York.

Warwick, Kevin (2000), “Cyborg 1.0,” Wired Magazine, Issue 8.02, February 2000, available online, last accessed 6 February 2011.

Weizenbaum, Joseph (1972), “On the Impact of the Computer on Society: How Does One Insult a Machine?” Science, vol. 176, no. 4035, pp. 609-614.

Weizenbaum, Joseph (1976), Computer Power and Human Reason: From Judgment to Calculation, W. H. Freeman and Company, New York.
