Computer Security Table of Contents
- Editor’s Introduction
- Computer Security and Human Values
- On Computer Security and Public Trust
- The End of the (Ab)User Friendly Era
- Responsibility and Blame in Computer Security
- Computer Crime, Computer Security, and Human Values
- Hacker Ethics
- The Social Impact of Computer-Mediated Voting
The National Conference on Computing and Values (NCCV) was held on the campus of Southern Connecticut State University in August 1991. The Conference included six “tracks”: Teaching Computing and Human Values, Computer Privacy and Confidentiality, Computer Security and Crime, Ownership of Software and Intellectual Property, Equity and Access to Computing Resources, and Policy Issues in the Campus Computing Environment. Each track included a major address, three to five commentaries, some small “working groups,” and a packet of relevant readings (the “Track Pack”). A variety of supplemental “enrichment events” were also included.
This monograph contains the proceedings of the “Computer Security and Crime” track of NCCV. It includes the “track address” with four commentaries, two enrichment papers, and the conference bibliography.
The track address is “Computer Security and Human Values” by Peter G. Neumann; and the commentaries include: “On Computer Security and Public Trust” by William Hugh Murray, “The End of the (Ab)User Friendly Era” by Sanford Sherizen, “Responsibility and Blame in Computer Security” by Dorothy E. Denning, and “Computer Crime, Computer Security and Human Values” by Kenneth C. Citarella.
The enrichment papers are: “Hacker Ethics” by Dorothy E. Denning, and “The Social Impact of Computer-Mediated Voting” by Arnold Urken.
The National Conference on Computing and Values was a major undertaking that required significant help from many people. The Editors would like to express sincere thanks to the National Science Foundation and the Metaphilosophy Foundation for support that made the project possible. And we wish to thank the following people for their invaluable help and support: (in alphabetic order) Denice Botto, William Bowersox, Aline W. Bynum, Robert Corda, Donald Duman, Richard Fabish, James Fullmer, Ken W. Gatzke, Steven J. Gold, Edward Hoffman, Rodney Lane, Sheila Magnotti, Armen Marsoobian, John Mattia, P. Krishna Mohan, Beryl Normand, Robert O’Brien, Daniel Ort, Anthony Pinciaro, Amy Rubin, Brian Russer, Elizabeth L.B. Sabatino, Charlene Senical, J. Philip Smith, Ray Sparks, Larry Tortice, Suzanne Tucker.
We focus here on policy issues relating to computer and communication security, and on the roles that technology can and cannot play in enforcing the desired policies. In the present context, computer security relates to measures to provide desired confidentiality, integrity, availability, and more generally prevention against misuse, accidents, and malfunctions, with respect to both computer systems and the information they contain. We deliberately take a broad view of what might constitute computer security as encompassing the prevention of undesirable events, and take a broad view of undesirable human activities as well. Details are provided in the following sections.
Security is intrinsically a double-edged sword in computers and communications; it cuts both ways. For example,
• It can be used to protect personal privacy.
• It can be used to undermine an individual’s entitled access to information about himself or herself, including false information that the person should be able to inspect and correct; it can also be used to undermine other personal rights.
• It can help defend against malicious misuse, such as penetrations, Trojan horses, viruses, and other forms of tampering.
• It can significantly hinder urgent repairs and responses to emergencies.
• It can greatly simplify the concerns of legitimate users.
• It can seriously impair the abilities of legitimate users attempting to protect themselves from calamities, particularly in poorly designed systems with ill-conceived human interfaces. It can also hinder routine system use.
• Automated monitoring of computer activities can be used to detect intruders, masqueraders, misuse, and other undesirable events.
• Automated monitoring of computer activities can be used to spy on legitimate users, seriously undermining personal privacy.
Each of these antagonistic pairs illustrates the potential for both constructive and deleterious use – with respect to data confidentiality, integrity, ease of use, and monitoring, respectively.
In the real world, greed, fraud, malice, laziness, curiosity, etc., are facts of life; measures to increase security become a necessity unless it is possible to live in a benign and non-malevolent environment (e.g., no dial-up lines, no networked access, no easy flow of potentially untrustworthy software, no proprietary rights to protect, ideal hardware reliability, and outstanding administrative procedures – including frequent backups). Even in a perfect world in which everyone behaves ethically, morally, and wisely, such measures are still needed to protect against accidental misuse, as well as against hardware and environmental problems. On the other hand, attempts to provide greater security invariably cause difficulties that otherwise would not exist. There are numerous potentially detrimental aspects associated with attempts to increase security, varyingly affecting system users and system operations as well as people seemingly not even in the loop (such as innocent bystanders). Effects on users include impediments to the ease of system use, some loss of performance, intensified anxieties, and perhaps increased suspicions or even paranoia resulting from the presence of the security controls and monitoring. Effects relevant to system operations include greater difficulties in maintaining and evolving systems, less facile recovery from failures, and significantly greater effort expended in administering security. There are also second-order effects that are somewhat more subtle, such as the need for emergency overrides to compensate for crashes, deadlocks, lost passwords, etc.; the pervasive use of super-user mechanisms, escapes, and override mechanisms tends to introduce new vulnerabilities that can be intentionally exploited or accidentally triggered.
The attainment of enterprise security is often dependent on adequate system reliability and availability. It also depends on the integrity of underlying subsystems. Thus, we speak of computer-related misbehavior as including user misbehavior that causes a computer system to fail to live up to its desired behavior, and also including system malfunctions due to causes such as hardware problems or software errors (e.g., flaws in design and implementation). Loosely speaking, security involves attempts to prevent such misbehavior.
There has been extensive discussion about whether access that requires no authorization violates the laws that rule against exceeding authority. Irrespective of the laws, Gene Spafford concludes that the vast majority of computer break-ins are unethical, as are their would-be justifications. But what good are computer ethics in stopping misuse if computer security techniques and computer fraud laws are deficient? Following is a relevant quote from Neumann [90b] on that question:
Some RISKS Forum contributors have suggested that, because attacks on computer systems are immoral, unethical, and (hopefully) even illegal, promulgation of ethics, exertion of peer pressures, and enforcement of the laws should be major deterrents to compromises of security and integrity. But others observe that such efforts will not stop the determined attacker, motivated by espionage, terrorism, sabotage, curiosity, greed, or whatever…. It is a widely articulated opinion that sooner or later a serious collapse of our infrastructure – telephone systems, nuclear power, air traffic control, financial, etc. – will be caused intentionally.
Certainly there is a need for better teaching and greater observance of ethics, to discourage computer misuse. However, we must try harder not to configure computer systems in critical applications (whether proprietary or government sensitive but unclassified, life-critical, financially critical, or otherwise depended upon) when those systems have fundamental vulnerabilities. In such cases, we must not assume that everyone involved will be perfectly behaved, wholly without malevolence and errors; ethics and good practices address only a part of the problem – but are nevertheless very important.
There has also been much discussion on whether computer security could become unnecessary in a more open society. Unfortunately, even if all data and programs were freely accessible, there would be a need for computer system and data integrity, to provide defenses against tampering, Trojan horses, faults, and errors.
A natural question is whether computer-related systems raise any value-related issues that are substantively different from those in other kinds of systems. Some partial answers are suggested in Neumann [91c], and explored further here:
• People seem naturally predisposed to depersonalize complex systems. Remote and in some cases unattributable computer access intensifies this predisposition. General ambivalence and a resulting sublimation of ethics, values, and personal roles, coupled with a background of increasingly loose corporate manipulations and anti-ecological abuses, seem to encourage in some people a rationalization that unethical behavior is the norm and somehow or other justifiable. Furthermore, encroachments on the rights of other individuals somehow seem less heinous to those who do not realize that they also may be affected.
• Computers permit radically new opportunities, such as remotely perpetrated fraud, distributed attacks, high-speed cross-linking, global searching and matching of enormous databases, internal surveillance of legitimate users that is unknown to those users, external surveillance that is undetectable by systems personnel, detailed tracking of individual activities, etc. These activities were previously impossible, inconceivable, or at least very difficult.
Most professional organizations have ethical codes. Various nations and industries have codes of fair information practice. Teaching and reinforcement of computer-related values are vitally important, alerting system purveyors, users, and would-be misusers to community standards and providing guidelines for handling abusers. But we still need sound computer systems and sound laws. (See, for example, Denning, articles 26–27.)
In the following text, we first identify sources of computer-related misbehavior (Section 2). We next examine expectations that are placed on computer and communication systems (Sections 3 and 4) and on people (Section 5), with respect to security. We also consider various system issues (Sections 6 and 7). We then examine different modes of antisocial behavior and their consequences (Section 8), and consider some specific technological approaches to reducing some of the potential problems (Section 9). We end with an assessment of future needs (Section 10), some concluding remarks (Section 11), and some potential topics for further discussion (Section 12).
2.0 Computer-Related Misbehavior
Approaches to managing the general problem of attaining more meaningful security in a computer-related enterprise have both technological and nontechnological components. The former are generally complex, but are becoming better understood and better supported by newer computer systems. The latter are exceedingly broad, including social, economic, political, religious, and other aspects.
By computer-related misbehavior, we mean behavior that is different from what is desired or expected. Such misbehavior may be attributable to a combination of human, computer, and environmental problems. That is, not just system misuse by people, but also people misuse by systems! As noted in Neumann, there are three basic gaps that may permit computer and/or human misbehavior:
- Gap 1: The technological gap between what a computer system is actually capable of enforcing and what it is expected to enforce (e.g., its policies for data confidentiality, data integrity, system integrity, availability, reliability, and correctness). This gap includes deficiencies in both hardware and software (for systems and communications) and deficiencies in administration, configuration, and operation. For example, passwords are expected to provide authentication of would-be system users; in practice, passwords are highly compromisible. Instances of this gap may be triggered by people (accidentally or intentionally), or by system malfunction, or by external events (for example).
- Gap 2: The sociotechnical gap between the computer-related policies on one hand and social policies on the other hand, such as computer-related crime laws, privacy laws, codes of ethics, malpractice codes and standards of good practice, insurance regulations, and other established codifications. For example, the social policy that a system user must not exceed authorization does not translate easily into a system policy that requires no authorization or in which authorization is easily bypassed.
- Gap 3: The social gap between social policies (e.g., expected human behavior) and actual human behavior, including cracker activity, misuse by legitimate users, dishonest enforcers, etc. For example, someone accessing a computer system from another country who is bent on misuse of that system may not be very concerned about local expectations of proper human behavior. Similarly, employees who misuse a system because they have been bribed to do so may consider the precedence of a “higher ethic” (money).
The technical gap (Gap 1) can be narrowed by proper development, administration, and use of computer systems and networks that are meaningfully dependable with respect to their given requirements. The sociotechnical gap (Gap 2) can be narrowed by creating well defined and socially enforceable social policies, although computer-based enforcement depends upon the narrowing of Gap 1. The social gap (Gap 3) can be narrowed to some extent by narrowing Gaps 1 and 2, with some additional help from better education. However, the burden must ultimately rest on better computer systems and computer networks as well as better management and self-imposed discipline on the part of information managers and workers. Detection of misuse then serves to further narrow the gaps – particularly when access controls are inadequately fine-grained so that it is easy for authorized users to misuse their allocated privileges.
A classification of many types of system vulnerabilities and unintentionally introduced flaws that are subject to malicious or accidental exploitation is given in Neumann and Parker. That article provides useful background, although a detailed technical understanding of the different types of attack methods is not essential here.
Given a computer-related misbehavior, there is often a tendency to attempt to place the blame elsewhere, i.e., not on the real causes, in order to protect the guilty. For example, it is common to “blame the computer” for mistakes that are ultimately attributable to people. Even disastrous computer-related effects resulting from “acts of God” and hardware malfunctions can in many cases be attributed to a deficiency in the system conception or design. Similarly, it is common to blame computer users for problems that more properly should be attributed to the system designers, and in some cases, to the designers of the human-machine interfaces. In many instances, the blame deserves to be shared widely. A recurring theme in the discussion below involves the relative roles of the three gaps noted above. A suitably holistic view suggests that all three might be involved.
3.0 User-View System Requirements
There are numerous security-relevant expectations that people may have of a particular computer system, such as the following:
- Preservation of human safety and general personal well-being in the context of computer-related activities. Computer systems in numerous disciplines (transportation, medical, utilities, process control, etc.) are increasingly being called upon to play a key role in life-critical operations.
- Observance of privacy rights, proprietary interests, and other expected attributes. People should be notified when they are being subjected to unusual monitoring activities, and should be given the opportunity to observe and correct erroneous personal data.
- Prevention against undesired human behavior. This includes malicious acts such as sabotage, misuse, fraud, compromise, piracy, and similar antisocial acts. It also includes accidental acts that could have been prevented.
- Prevention against undesired system behavior, such as hardware or software induced crashes, wrong results, untolerated fault modes, excessive delays, etc.
- Balancing the rights of system users against the rights of system administration, particularly with respect to resource usage and monitoring.
These requirements are intertwined with value-related issues in a variety of ways, including some related to human foibles in system design, development, operation, and use, and some related to misplaced trust in systems – e.g., excessive or inadequate.
4.0 System Security Requirements
The above human-motivated requirements are typically related to computer system requirements, such as the following:
- System security requirements, both functional and behavioral. Computer systems should dependably enforce certain agreed-upon system and application security policies such as system integrity, data confidentiality, data integrity, system and application availability, reliability, timeliness, human safety with respect to the system, etc., as needed to enforce or enhance the socially relevant requirements listed in the previous section.
5.0 Expectations on Human Behavior
There are also numerous security-relevant expectations that system designers and administrators may wish to make of people involved in particular computer systems and applications. At one extreme are reasonable expectations on supposedly cooperative and benign users, all of whom are trusted within some particular limits; at the other extreme is the general absence of assumptions on human behavior, admitting the possibility of “Byzantine” human behavior such as arbitrarily malicious or deviant behavior by unknown and potentially hostile users. It is convenient to consider both forms of human behavior within a common set of assumptions, with benign behavior treated as a special case of Byzantine behavior. A few of the most important expectations are the following.
- Nonspecific expectations relevant across the spectrum of users, e.g., cooperative and uncooperative, remote and local, authorized and unauthorized. Sensible security policies must be established and enforced, with default access attributes that support the user’s needs and the administrators’ demands for controllable system use.
- User security requirements on generally cooperative users. Even in the presence of friendly users, benignness assumptions are risky, particularly in light of masqueraders and accidents. In relatively constrained or non-hostile environments, it may be reasonable to make some simplifying assumptions, e.g., that there are no external penetrators (as in a classified system that has no external access and only trusted users), and that the likelihood of malicious misuse by authorized users is relatively small, and then to make appropriate checks for deviations.
- User security assumptions on potentially uncooperative users. Designing for Byzantine human behavior is an extremely difficult task, just as it is for Byzantine fault modes. In a totally hostile environment, it may be necessary to assume the worst, including arbitrary malice by individuals and possible collusion among collaborating hostile authorized users, as well as unreliability of hardware.
6.0 Design/Implementation Concerns
Various issues need to be considered relating to system design and implementation:
- Do the system security requirements properly reflect the social requirements? Often there are glaring omissions.
- Are the system security requirements properly enforced by the actual system? There are often flaws in system design and implementation.
- What are the intrinsic limitations as to what can and cannot be guaranteed? Nothing can be absolutely guaranteed. There are always possibilities for undetected exceptions. We can always do better, but cannot be perfect. It is desirable to design systems so that if something undesirable does happen, it may be possible to contain it in some sense relevant to the problem, or to undo it, or to compensate for it.
- Is the system being used in a fundamentally unsound way that clearly violates or permits violations of the desired behavior? In many cases the absence of guarantees combined with the likelihood of serious negative consequences suggests that such use is fundamentally unsound.
7.0 Operational Concerns
Even a system that has been ideally designed and implemented can be compromised if it is operationally not soundly administered. Some of the key issues relating to proper administrative management include the following desiderata:
- Ability to recognize and eliminate in a timely fashion various system flaws, configuration vulnerabilities, and procedural weaknesses. Such problems tend to remain of little concern until actually exploited in some dramatic way, at which point a little panic often results in a quick fix that solves only a small part of the problems.
- Ability to react quickly to evident emergencies, e.g., massive penetrations or other computer system attacks. Preparedness is not a natural instinct in the face of unknown or unperceived threats.
- Willingness to communicate the existence of vulnerabilities and ongoing attacks to others who might have similar experiences. In some cases corporate secrecy is important to those who fear negative competitive impacts from disclosures of losses. In other cases there is a lack of community awareness as to the global nature of the problems. Interchange of information can be an enormous aid to good management.
- Recognizing potential abuses, e.g., insiders privately selling off sensitive information or ‘fixing’ database entries (e.g., removing outstanding warrants from criminal records) and dealing proactively with them.
8.0 Antisocial Behaviors
There are various manifestations of antisocial behavior that can be related to computer system design, development, and operation, as well as to specific deviations from ethical, moral, and/or legal behavior.
8.1 ‘Hacking,’ Good and Bad
- ‘Hacker’ was originally a benevolent term, not a pejorative one. In light of media responses to recent system misuses, the negative use seems to have prevailed, and has permanently contaminated the term, more or less preempting its use with respect to benevolent hackers. There are many beneficial consequences of an open society in which free exchange of ideas and programs is encouraged. However, there will always remain serious potentials for misuse.
- Misuse may originate intentionally or accidentally. Both cases represent serious potential problems. (See the next section for a discussion of what to do about these problems.)
- Misuse by authorized users and misuse by unauthorized users are both serious potential problems, although in any particular application either one of these problems may be more important than the other. It depends on the environment.
- What is actually “authorized” in any given application is often unclear, and may be both poorly defined and poorly understood. This is discussed in the next section.
8.2 Summary of Modes of Misuse
- Trap doors and other vulnerabilities represent serious potential sources of security compromise, whether by authorized users or by unauthorized users. Many systems have fundamental security flaws; some flaws can be exploited by people without deep system knowledge, while other flaws cannot.
- Misuse of authority by legitimate users is in some system environments more likely than external intrusions (e.g., where there are much more limited opportunities for intrusions because of the absence of dial-up lines and network connections). Such misuse may be done by partially privileged users as well as by omnipotent users, particularly when vulnerabilities are exploited as well. Note that the distinction between authorized and unauthorized users is a very tricky one, as discussed in Section 9.
- There are various modes of abusive system contamination, often lumped together under the rubric of pest programs. These include Trojan horses (e.g., time bombs, logic bombs, letter bombs, etc.), human-propagated Trojan horses, self-propagating viruses, malevolent worms, and others. Following the mythology, a Trojan horse is a program (or data or hardware or whatever) that contains something capable of causing an unanticipated and usually undesirable consequence when invoked by an unsuspecting user. The distinctions among the various forms of pest programs tend to cause inordinate philosophical and pseudo-religious arguments among supposedly rational people, but are more or less irrelevant here. So-called personal computer viruses are generally Trojan horse contaminations that are spread inadvertently by human activity. The recent proliferation of old viruses and the continued appearances of new strains of viruses are both phenomena of our times; worse yet, stealth viruses that can hide themselves and in some cases mutate to hinder detection are just beginning to emerge.
8.3 Deleterious Computer-System-Oriented Effects
- Losses of confidentiality. Information (e.g., data and programs) may be obtained in a wide variety of ways, including direct acquisition by the obtainer, direct transmittal from a donor, inadvertent access permission from the purveyor or second party, or indirectly. Indirect acquisition includes inferences derived contextually from available information. One form of inference involves the so-called aggregation problem, in which the totality of information is somehow more sensitive than any of the data items taken individually. Another form of indirect acquisition results from the exploitation of a covert channel, which involves a somewhat esoteric signaling through a channel not ordinarily used to convey information, such as the presence or absence of an error message signifying the exhaustion of a shared resource.
- Losses of system integrity, application integrity, and system predictability. There are numerous relevant forms of integrity. System programs, data, and control information may be changed improperly. The same is true of user programs, data, and control information. Any such changes may prevent the system from dependably producing the desired results. These are basically notions of internal consistency. External consistency is also a serious problem, for example, if the data in a database is not consistent with the real-world data it purports to represent. Erroneous information can have serious consequences in a variety of contexts.
- Denials of service and losses of resource availability. There are deleterious effects that involve neither losses of confidentiality nor losses of integrity. These include serious performance degradations, loss of critical real-time responsiveness, unavailability of data when needed, and other forms of service denial.
- Other misuse. The above list is far from complete, as there are many further types of misuse. For example, misuse may involve undetected thefts of services (e.g., computing time) or questionable applications (e.g., running private businesses from employers’ facilities).
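The resource-exhaustion covert channel mentioned under losses of confidentiality can be made concrete with a small simulation. This is a hypothetical sketch, not any particular system: a sender leaks bits by either exhausting or not exhausting a shared resource, and a receiver decodes them by observing whether its own allocation attempt produces an error message.

```python
# Hypothetical sketch of a covert storage channel: the sender signals a 1 by
# exhausting a shared resource pool and a 0 by leaving it empty. The receiver
# never reads the sender's data directly; the "message" travels entirely
# through the presence or absence of a resource-exhaustion error.

class SharedPool:
    """A shared resource with fixed capacity (e.g., file-table slots)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.in_use = 0

    def allocate(self):
        if self.in_use >= self.capacity:
            raise RuntimeError("resource exhausted")  # the observable signal
        self.in_use += 1

    def release_all(self):
        self.in_use = 0

def send_bit(pool, bit):
    """Sender: fill the pool to signal 1, leave it empty to signal 0."""
    pool.release_all()
    if bit:
        for _ in range(pool.capacity):
            pool.allocate()

def receive_bit(pool):
    """Receiver: probe the pool; an exhaustion error means the bit was 1."""
    try:
        pool.allocate()
    except RuntimeError:
        return 1
    pool.in_use -= 1  # undo the probe so the channel state is unchanged
    return 0

pool = SharedPool(capacity=4)
secret = [1, 0, 1, 1, 0]
leaked = [0] * len(secret)
for i, bit in enumerate(secret):
    send_bit(pool, bit)          # sender's turn in the covert protocol
    leaked[i] = receive_bit(pool)  # receiver's turn
print(leaked)  # the secret bits, recovered without any direct data access
```

Real covert channels are noisier and slower than this idealized turn-taking suggests, but the principle is the same: any shared, observable system state can carry information between parties that the access controls nominally keep apart.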
8.4 Social Consequences
- Violation of privacy and related human rights, (e.g., constitutional). Loss of confidentiality can clearly result in serious privacy problems, whether intentionally or unintentionally caused. All of the above modalities of loss of confidentiality can have serious consequences. Furthermore, the effects of erroneous information can be even more serious, in the senses of both internal and external consistency.
- Software piracy. Theft of programs, data, documentation, and other information can result in loss of revenues, loss of recognition, loss of control, loss of responsibility without loss of liability, loss of accountability, and other serious consequences.
- Effects on human safety. Misuse of a life-critical system can result in deaths and injuries, whether it is done accidentally or intentionally.
- Legal issues. The potential legal effects are quite varied. There can be lawsuits against misusers, innocent users, and system purveyors. Some of those lawsuits would undoubtedly be frivolous or misguided, but would nevertheless cause considerable agony to the accused. Computer “crimes” have already been a source of real difficulties for law enforcement communities, as well as for both guilty and innocent defendants.
- Perceptions. Increased interconnectivity, inter-communicability, and use of shared resources are clearly desirable goals. However, fears of Trojan horses, viruses, losses of privacy, theft of services, etc., are likely to create a community that is either paranoid or else oblivious and vulnerable to the social dangers.
9.0 System Considerations
There are various techniques, architectures, and methods relating to system development and operation that can help reduce the gap between what is intended and what is actually possible (the technical gap). These include system security measures and administrative procedures. In particular, crucial issues include system accountability, with user identification, authentication, and authorization, and (sub)system identification, authentication, and authorization as well; better system designs, implementing finer-grain security policies with fewer security vulnerabilities; and judicious monitoring of system usage. These problems are particularly relevant in highly distributed systems (e.g., Neumann [90a]).
Some authors have attempted to make distinctions between intentional and accidental misuse. Even a cursory examination shows that it is essential in many systems and applications to anticipate both types of misuse, including system misbehavior (e.g., hardware faults) as well as human misbehavior. There are examples of one type that can cause (or have caused) serious disasters that could not be detected as instances of the other type. See Neumann [91b].
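The “judicious monitoring of system usage” mentioned above can be sketched in miniature. The following is a hypothetical illustration (the audit-trail format and threshold are invented for the example): a scanner over an audit trail flags accounts whose failed-login counts exceed a threshold, a crude indicator of password guessing or masquerading.

```python
# Hypothetical sketch of audit-trail monitoring: flag any account whose
# failed-login count reaches a threshold. Real intrusion-detection systems
# use far richer event records and statistical profiles; the principle of
# accountability via recorded, attributable events is the same.
from collections import Counter

def flag_suspects(audit_trail, threshold=3):
    """audit_trail: list of (user, event) pairs; event is 'ok' or 'fail'."""
    failures = Counter(user for user, event in audit_trail if event == "fail")
    return sorted(user for user, n in failures.items() if n >= threshold)

trail = [("alice", "ok"), ("mallory", "fail"), ("mallory", "fail"),
         ("bob", "fail"), ("mallory", "fail"), ("alice", "ok")]
print(flag_suspects(trail))  # ['mallory']
```

Note that the same mechanism that detects masqueraders also records the activities of legitimate users, which is precisely the monitoring double edge identified in Section 1.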
9.1 Identification, Authentication, and Authorization
One of the most difficult problems related to security is determining what ‘authorized usage’ means. Computer fraud and abuse laws generally imply that unauthorized use is illegal. But in many computer systems there is no explicit authorization required for malicious or other harmful use. A simple illustrative example is provided by the Internet Worm (e.g., Denning, articles 10–15), in which four mechanisms were exploited: the sendmail debug option, the finger program, the .rhosts tables for accessing remote systems, and the encrypted password file. Surprising to some, perhaps, none of these required any explicit authorization for their misuse. If enabled by the system configuration, the sendmail debug option can be used by anyone. The finger program bug (relying on the flawed C library routine gets) permitted anyone to exploit a widely available program designed to give out information about another user. The .rhosts tables permit remote access to anyone logged in, with no further authorization. Finally, encrypted password files are typically readable, and subject to off-line or on-line dictionary attacks if any of the passwords are indeed dictionary words. The exploitation of each of these four mechanisms is clearly not what was intended as proper use, but authorization is not what distinguishes “good” (or proper) usage from “bad” (or improper). Perhaps the problem lies in system administrators and users unwisely trusting untrustworthy mechanisms, and with vendors promoting systems that are fundamentally limited.
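The off-line dictionary attack on a readable password file can be sketched as follows. This is an illustrative reconstruction, not the worm's actual code: a generic one-way hash stands in for the Unix crypt() algorithm, and the file contents and word list are invented.

```python
# Hypothetical sketch of an off-line dictionary attack. No authorization is
# exceeded: the attacker merely reads a world-readable file and hashes
# candidate words until one matches a stored entry.
import hashlib

def hash_password(password, salt):
    """Stand-in for the Unix crypt() one-way function (illustrative only)."""
    return hashlib.sha256((salt + password).encode()).hexdigest()

# A readable "password file": user -> (salt, stored one-way hash).
password_file = {
    "alice": ("xy", hash_password("butterfly", "xy")),  # a dictionary word
    "bob":   ("qr", hash_password("Zq7#kkWp", "qr")),   # not in any wordlist
}

def dictionary_attack(password_file, wordlist):
    """Hash every candidate word under each user's salt; report matches."""
    cracked = {}
    for user, (salt, stored) in password_file.items():
        for word in wordlist:
            if hash_password(word, salt) == stored:
                cracked[user] = word
                break
    return cracked

words = ["password", "dragon", "butterfly", "letmein"]
print(dictionary_attack(password_file, words))  # {'alice': 'butterfly'}
```

The sketch makes the section's point directly: every operation here is "authorized" in the sense that the system permits it, yet the outcome is plainly improper use.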
Without the knowledge of who is doing what to whom (in terms of computer processes, programs, data, etc.), authorization is of very limited value. Thus, some reasonably non-spoofable form of authentication is often essential to provide some assurance that the presumed identity is indeed correct.
In the absence of meaningful authorization, the laws tend to be muddled. For example, the current computer abuse laws in California actually can be construed as making certain perfectly legitimate computer uses illegal. Prosecutors have been quoted as saying that this presents no problems, because no such cases would be prosecuted. But clearly there are problems because it becomes impossible to close the socio-technical gap.
9.2 Access Controls
The existence of the technical gap noted above is fairly pervasive in most computer and communication systems. Ideally, the system access controls should permit only those accesses that are actually desirable. In practice, many forms of undesirable user behavior are actually permitted. Thus, the system controls should as closely as possible permit authorized access only when that access actually corresponds to desired behavior.
9.3 Uses of Encryption Technologies
Encryption has traditionally been an approach for achieving communication secrecy. It is now emerging as a partial solution for many other security-related functions, such as providing encrypted and non-forgeable authenticators, transmitting encryption and decryption keys in encrypted form, identification and authentication, digital signatures, tickets for trusted transactions such as registry and notarization functions, non-forgeable integrity seals, non-tamperable date and time stamps, and messages that, once legitimately sent, cannot easily be repudiated as forgeries. Thus, there is a burgeoning assortment of interesting new applications.
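One modern realization of a non-forgeable integrity seal is a keyed message-authentication code. The following minimal sketch uses Python’s standard hmac module; the key and message are invented for illustration, and a real deployment would also need key distribution, which this sketch assumes away.

```python
import hmac
import hashlib

SECRET_KEY = b"shared-secret"  # hypothetical key known only to the two parties

def seal(message: bytes) -> bytes:
    """Produce an integrity seal (a keyed MAC) over the message.
    Without the key, a forger cannot compute a valid seal."""
    return hmac.new(SECRET_KEY, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    """Recompute the seal and compare in constant time."""
    return hmac.compare_digest(seal(message), tag)

msg = b"transfer $100 to account 42"
tag = seal(msg)
print(verify(msg, tag))                                # unmodified message passes
print(verify(b"transfer $900 to account 42", tag))     # tampering is detected
```

Note that the seal provides integrity and authentication without providing secrecy: the message itself travels in the clear.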
Unfortunately, government restrictions on research, use, and export of encryption techniques make some of these applications difficult.
9.4 Accountability and Monitoring
User identification and authentication are both essential for adequate accountability. In the absence of adequate user identification, accountability is of limited value.
Anonymous use presents some potential problems. Typical restrictions permit reading only of information that is freely available, while forbidding external modification; unless the system is intended to be a sandbox or public blackboard, appending of new material should also be restricted, to prevent denials of service through directory saturation.
Monitoring is itself a critical security issue. It must be generally non-subvertible (non-bypassable, non-alterable, and otherwise noncompromisable), and must respect privacy requirements.
Monitoring can serve many different purposes, including seeking to detect anomalies relating to confidentiality, integrity, availability, reliability, human safety, etc. With respect to security monitoring, there are two fundamentally different but interrelated types: monitoring of use to detect intruders (which may be a benefit to legitimate users) and monitoring to detect misuse by (supposedly) legitimate users. Management has a responsibility to inform legitimate users as to what type of monitoring is in place, although unfortunately it may be desirable to hide the detailed algorithms, because they may imply the existence of particular vulnerabilities. This is a difficult issue. (See, for example, Denning et al.)
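The kind of anomaly detection such monitors perform can be sketched very simply. Real intrusion-detection systems, such as those Lunt surveys, build far richer statistical profiles of user behavior; the audit records, user names, and threshold below are invented for illustration.

```python
from collections import Counter

# Hypothetical audit trail: a sequence of (user, event) records.
audit_trail = [
    ("alice", "login_ok"),
    ("mallory", "login_fail"), ("mallory", "login_fail"),
    ("mallory", "login_fail"), ("mallory", "login_fail"),
    ("bob", "login_ok"), ("bob", "login_fail"),
]

FAIL_THRESHOLD = 3  # assumed site policy: more failures than this is anomalous

def flag_anomalies(trail, threshold=FAIL_THRESHOLD):
    """Count failed logins per user and flag those exceeding the threshold."""
    failures = Counter(user for user, event in trail if event == "login_fail")
    return sorted(user for user, n in failures.items() if n > threshold)

print(flag_anomalies(audit_trail))  # only mallory exceeds the threshold
```

Even this trivial monitor illustrates the tension noted above: publishing the threshold tells an intruder exactly how many guesses go undetected, yet hiding it keeps legitimate users in the dark about how they are being watched.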
Security remains an especially serious problem in highly distributed systems, in which accountability and monitoring take on an even greater role. Examples of systems for real-time audit-trail analysis are surveyed by Lunt, while a particular system that has been carefully designed and implemented to provide extensive restrictions on what can be audited and how the audit data can be controlled is described by Lunt and Jagannathan.
10.0 Future Needs
The pervasive existence of the three gaps noted above suggests that efforts are needed to narrow each of the gaps. Some needs for the future include the following.
- Better systems, providing more comprehensive security with greater assurance – systems that are easier to use and to administer, easier to understand with respect to what is actually happening, more representative of the security policy that is really desired, etc. [Gap 1]
- Professional standards. Existing professional associations have established ethical codes. But are they adequate? Are they adequately invoked? [Gap 2]
- Better education relating to ethics and values, in the context of the technology, particularly in relation to computer and communication systems, and also relating to the risks of computerization (cf. Neumann [91a]). [Gap 2]
- Better understanding of the responsibilities and rights of system administrators, users, misusers, and penetrators. [Gaps 2 and 3]
- A population that is more intelligent and more responsible, including designers, programmers, operations personnel, users, and lay people who are in many ways forced to be dependent on computerization, whether they like it or not. Holistically, we need a kinder and gentler society, but realistically that is too utopian. [Gap 3]
- In the absence of a utopian world, we must strive to improve our computer and communication systems, our standards, our expectations of education, and our world as a whole, all at the same time, although the needs of our society will tend to dictate certain priorities among those contributing directions. Unfortunately, commercial expedience often dictates that emphasis be placed on seemingly easy and palliative solutions that are inadequate in the long run. [Gaps 1, 2, 3, addressed together from an overall perspective.]
In this article, we have considered security somewhat broadly, encompassing not only protection against penetrations and internal misuse, but also protection against other types of undesirable system and user behavior. This perspective is important, because attempts to address a narrower set of problems are generally shortsighted.
Overall, awareness of computer system vulnerabilities and security countermeasures is greater than it was a few years ago. In retrospect, computer security has been getting steadily better, but so have the crackers and stealthy misusers of authority. Further, the potential opportunities and gains from insider misuse seem to be increasing. However, our society does not seem to be getting significantly more moral on the whole, despite some determined efforts on the part of a few individuals and groups. Gap 1 has actually been closing a little; Gap 2 needs still more work; Gap 3 remains a potentially serious problem.
At a conference in 1969 I heard “2001” author Arthur Clarke talk about how it was getting harder and harder to write good science fiction; he lamented that “The future isn’t what it used to be.” Yogi Berra might have remarked that Clarke’s observation was “deja vu all over again.” By transitive closure, I think it is appropriate to combine those two aphorisms. Deja vu isn’t what it used to be all over again – it seems to be getting worse. And there seem to be enough people around who subscribe to Tom Lehrer’s title for a song he never wrote (because it would have been an anticlimax): “If I had it to do all over again, I’d do it all over you.” In the absence of better computer and communication systems, better system operations, better laws, better educational programs, better ethical practices, and better people, we are all likely to have it done to us, over and over again.
12.0 Some Topics for Discussions
One of the purposes of this article is to stimulate further discussion of the vital issues relating to values in the use of computers. Following are a few topics of potential interest. All of these have implications relevant to the Security Track, but many of them also have implications in other tracks as well. They are stated here because of the pervasive nature of the problems, and the dangers of attempting to compartmentalize the relations between causes and effects.
- Can the three gaps discussed in Section 2 (technical, sociotechnical, and social, respectively) ever be closed in any realistic sense, in the face of the behaviors of Section 8? Are we converging or diverging, or both? Remember, there is no perfect security.
- Are the existing laws an adequate representation of the need to close Gaps 2 and 3? What are the appropriate roles of ‘intent’, ‘exceeding authority’, and ‘misusing authority’, particularly in situations in which no authorization is required, and what are the implications on attempts to close Gap 1?
- What are the intrinsic limitations of technological security measures by themselves, administrative and operational security measures by themselves, and all of these together? See Section 6.
- What are the essential limitations of trying to maintain privacy, particularly in light of the demands for compromising it? The implications of emergency overrides and other exceptional mechanisms (cf. SB 266) provide conflicting needs. (This is of interest also to the Privacy Track.)
- How can we best balance personal rights with needs for monitoring? For example, consider the FBI monitoring on-line newsgroups, and corporations monitoring inbound and outbound e-mail and general system usage. (See Section 9.4.)
- Consider the Free Software Foundation philosophy of open access and free distribution, and its implications. Note that security has many more purposes than just providing confidentiality. For example, preventing Trojan horses and other types of sabotage is clearly an important goal. (This is of interest also to the Equity Track and the Ownership Track.) (Added note: Ironically, just before NCCV, abuse of the FSF computers became rampant, including using the open accounts to trash the FSF software and to gain free access to other Internet systems. Richard Stallman of the FSF reluctantly admitted that they had had to institute passwords. See the Boston Globe, 6 August 1991, front page article.)
- Can we realistically “place the blame” for undesired system and human behavior, with respect to crackers, malfeasors, designers, programmers, system administrators, marketers, corporate interests, U.S. and other governments, etc., across the broad spectrum of security-related problems? Attempts to place blame are often misguided, and tend to lose sight of the underlying problems. Furthermore, blame can usually be widely distributed. There is also the danger of shooting the messenger. (Contrast this distributed notion of blame with the I Ching concept of “no blame”!) See also the following track contribution from Dorothy Denning (Denning ).
- How can the needs of encryption for privacy, integrity, and other purposes noted in Section 9.3 be balanced with needs for “national security” and other governmental constraints? Consider the social implications of private-key versus public-key encryption, export controls, corporate and national interests, international cooperation, etc.
- How does security aid or interfere with other social issues? Might it seriously impede access by handicapped and disadvantaged people? Or if it does not, would it present intrinsic vulnerabilities that could be exploited by others? There are challenges both ways. For example, physically disabled or otherwise handicapped individuals might be able to vote from their homes, via telephone or computer hook-up. Such systems might also encourage fraudulent voting. If serious security measures were invoked, the benefits might be lost.
- Are we creating a bipolar society of computer-literate insiders and everyone else? Or a multipolar society of various distinct categories? Are we disenfranchising any sectors of society, such as ordinary mortals and people in the humanities who do not have computer resources? Might increased computer security tend to further such an alienation? Are people in the creative arts becoming sterilized if they do move toward computerization? Are there relevant implications of computer security on such individuals?
- What are the implications of computer security on scholarly research? Unnecessary secrecy is clearly one concern. So is inadequate privacy. Loss of integrity is another concern, with the possibility of having experimental data and research results altered or forged. Authenticity (the ability to provide assurance that something is genuine) and subsequent non-repudiatability (the ability to provide some assurance that something attributed to an individual really was correctly attributed) are illustrative technical issues that relate to this question.
- Do existing transnational data exchange regulations present serious obstacles to international cooperation, including dissemination of knowledge, programs and other on-line information? If those regulations were relaxed, would there be serious consequences, e.g., with respect to social, economic, political issues, and national integrity? Could computer security help to provide controls that would permit national boundaries to be safely transcended? Or must it be an impediment? Or are both of these alternatives actually true at the same time?
The above itemization is by no means complete. It merely suggests a few of the thornier topics that might be of interest for further discussion.
13.0 Further Background
Further background on computer security may be found in Clark et al., while recent examples of system misuse are analyzed in Denning and Hoffman. Examples of accidental and intentional events that have resulted in serious computer-related problems are summarized in Neumann [91a], an updated copy of which is appended.
Clark [90] David D. Clark et al., Computers at Risk: Safe Computing in the Information Age, National Research Council, National Academy Press, 2101 Constitution Ave., Washington DC 20418, 5 December 1990. Final report of the System Security Study Committee, ISBN 0-309-04388-3.
Denning et al. [87] Dorothy E. Denning, Peter G. Neumann and Donn B. Parker, “Social Aspects of Computer Security,” Proceedings of the 10th National Computer Security Conference, Baltimore MD, September 1987.
Denning [92] Dorothy E. Denning, “Responsibility and Blame in Computer Security,” in Terrell Ward Bynum, Walter Maner and John L. Fodor, eds., Computing Security, New Haven CT, 1992. (Below, pp. 46 – 54.)
Denning [90] Peter J. Denning, ed., Computers Under Attack: Intruders, Worms, and Viruses, ACM Press, Addison-Wesley, 1990. See particularly the chapters on the Internet Worm (articles 10 – 15) and on social, legal and ethical implications (articles 26 – 37). ACM order number 706900.
Hoffman [90] L.J. Hoffman, ed., Rogue Programs: Viruses, Worms, and Trojan Horses, Van Nostrand Reinhold, 1990. ISBN 0-442-00454-0.
Lunt [88] Teresa F. Lunt, “Automated Audit Trail Analysis and Intrusion Detection: A Survey,” 11th National Computer Security Conference, Baltimore MD, October 1988.
Lunt and Jagannathan [88] Teresa F. Lunt and R. Jagannathan, “A Prototype Real-Time Intrusion-Detection Expert System,” Proceedings of the 1988 Symposium on Security and Privacy, IEEE Computer Society, Oakland CA, April 1988, pp. 59 – 66.
Neumann and Parker [89] Peter G. Neumann and Donn Parker, “A Summary of Computer Misuse Techniques,” Proceedings of the 12th National Computer Security Conference, Baltimore MD, 10 – 13 October 1989, pp. 396 – 407. This reference is included among the reading materials.
Neumann [88] P.G. Neumann, “The Computer-Related Risk of the Year: Computer Abuse,” Proceedings of COMPASS (Computer Assurance), June 1988, pp. 8 – 12. IEEE 88CH2628-6.
Neumann [90a] P.G. Neumann, “The Computer-Related Risk of the Year: Distributed Control,” Proceedings of COMPASS (Computer Assurance), June 1990, pp. 173 – 177. IEEE 90CH2830.
Neumann [90b] P.G. Neumann, “A Perspective from the RISKS Forum.” Article 39, Computers Under Attack (P.J. Denning, ed.), ACM Press, 1990, pp. 535 – 543.
Neumann [91a] P.G. Neumann, “Illustrative Risks to the Public in the Use of Computer Systems and Related Technology,” SEN 16, 1, January 1991, pp. 2 – 9. (Index to the published RISKS archives.)
Neumann [91b] P.G. Neumann, The Roles of Structure in Safety and Security, Position paper for an IFIP Workshop on Reliability, Safety, and Security of Computer Systems: Accidental vs Intentional Faults, Grand Canyon, 22 – 24 February 1991.
Neumann [91c] P.G. Neumann, “Computers, Ethics, and Values, Inside Risks,” Communications of the ACM, July 1991, inside back cover.
Spafford [92] Eugene Spafford, “Are Computer Hacker Break-Ins Ethical?” Journal of Systems and Software, January 1992.
Stoll [89] Cliff Stoll, The Cuckoo’s Egg: Tracking a Spy Through the Maze of Computer Espionage, Doubleday, 1989.
Appendix: Updated Illustrative Risks to the Public
At the National Conference on Computing and Values in 1991, Peter Neumann placed a version of his “COMP.RISKS” list at this point in his paper. Since that is now very dated, the reader should go instead to the following URL for the latest version: http://catless.ncl.ac.uk/Risks
On Computer Security and Public Trust
William Hugh Murray
On the Role of the Computer
There have been three great intellectual revolutions in the history of mankind. The first was the invention of spoken language, from which we date modern man. The second was the invention of written language from which we date modern civilization. The third was the invention of movable type, from which we date the modern nation state and our modern world economy.
We stand now on the threshold of the fourth such revolution. It is being brought about by the rapid decline in the cost of digital technology. As a result of its newfound economic advantage, digital technology can be expected to replace analog technology for almost all recording, computing, and communicating applications. It will cause those applications to merge into a single and seamless whole. We will no longer know where one ends and another takes up.
Like the earlier revolutions, this revolution can be expected to fundamentally alter our institutions, our culture, our values, and even our identities.
On the Role of Cooperation and Collaboration
We are defined, at least in part, by the manner in which we use information to cooperate and collaborate with our fellows. Man is first and foremost a collaborative animal. The emphasis in our culture on the significance of the individual notwithstanding, no man ever did anything of significance by himself. Sir Isaac Newton said “If I have seen farther than other men, it is because I have stood on the shoulders of giants.”
Not even Mozart ever did anything significant all by himself. Mozart was born into a rich musical culture. He was born into a world in which the idea of musical notation had already been codified. He had mentors, colleagues, patrons, collaborators, and audiences. While he died at thirty-five, he left a legacy of work that is the envy of everyone who has succeeded him, no matter how long they have lived. Imagine how different our world would be if Mozart had lived in the age of computers or even of tape recorders. Imagine Homer, Virgil, Shakespeare, or Dickens with the tools of Disney, Spielberg, and Lucas.
The work of these modern giants is signed by dozens; they do not work alone. They work in teams, cooperating and collaborating. The computer has no single father; it too is the result of teamwork. It is the result of, and the ultimate tool for, cooperation and collaboration.
On the Necessity for Trust
Cooperation and collaboration require trust. They require trust in the infrastructure and trust in the community, if not in the individual. If we are to enjoy the benefits of the automobile, we must be able to trust that most drivers will stay on the expected side of the road, most of the time. If we are to enjoy the fruits of modern agriculture, transportation, distribution, and commerce, we must be able to trust that the food supply will not be contaminated. Cyanide found in two grapes devastated the economy of the country of origin of the grapes, not from the danger of the cyanide but from the consequent loss of trust.
Suppose that a century ago someone had contaminated the medicine supply in a local pharmacy and that, as a consequence, seven people had died. Would we even have known that anything significant had happened? Would we have drawn any significant inferences about the purity of medicine in pharmacies in general, or in neighboring states? Still less would we have turned an entire industry on its ear in response. The significance of such an event today is that everyone knows about it; that my pharmacy, your pharmacy, and the contaminated pharmacy have a common source of supply; and, finally, that we do not, cannot, know the motive or the extent. The changes in the distribution of patent medicines are not justified by seven deaths in a society in which more people than that die from the use of tobacco every hour of every day. However, they are justified to preserve the necessary and essential trust in the integrity of the medicine supply.
Increased Vulnerability to Deviance
All of the revolutions have been marked by an increase in the scale of cooperation, specialization, and harmony. It can be argued that the technologies were successful and widely adopted and applied precisely because they had this effect. Conversely, as the scale of cooperation and interdependence has grown, our vulnerability to the failure of the infrastructure or the deviant behavior of a few individuals has grown with it. This deviant behavior may result in fear, anxiety, and loss of trust and confidence that is out of all reasonable proportion to the amount of damage done.
Technology and Crime
Every new technology brings with it an opportunity for crime that did not exist before it. Piracy at sea came with the decked caravel, safe-cracking with the vault, highway robbery with the stage coach, bank robbery with the automobile, and hijacking with the truck and airplane.
It would have been a surprise if the computer had been an exception to this rule. The computer is similar to these other technologies in that its contribution to crime is a vanishingly small part of its total use. Its net effect on crime, after adjusting for the effect of computer-based controls over paper-based ones, has been to reduce it significantly below what it might otherwise have been.
We should not be surprised by computer crime, but we seem to be. In “The Great Train Robbery,” Michael Crichton suggested that society is often offended by such new-technology crime to the point of outrage.
Likewise, it should come as little surprise that, like the railroad, the automobile, and the telephone before it, the computer has been the subject of premature, if not preemptive, legislation. The politicians are certain that, somewhere in any dung heap, there must be a pony.
On Rude Behavior
Of course, the deviant behavior that offends and diminishes trust usually stops short of criminal. It is merely rude. Truly “criminal” behavior is rare and requires a high standard of proof. Rude behavior is much more common, and we know it when we see it.
Unlike most rude behavior, this behavior is not subject to common cultural or political controls. The computer is so novel that we have not arrived at any consensus about what behavior is to be tolerated. We have no songs, stories, or games designed to tell people how to use it.
Depending upon where you stand, you will have different ideas about which behavior is to be encouraged, which tolerated, and which is to be actively discouraged. Likewise, you will differ from others on the appropriate means.
For example, young people tend to see access to the computer network as an “entitlement.” Some of them believe that entitlement to be so fundamental that it should not be subject to control by authority. The administrators of that access believe the access to be contingent upon continued orderly behavior. Since they understand the vulnerability of their systems to disorderly behavior, they generally respond by attempting to isolate any such behavior, i.e., they suspend the user account or disable the terminal or line of origin. Not surprisingly, the young people see this as painful, arbitrary, punitive (rather than remedial), and excessive. While they refuse to amend their behavior, they argue that the only available and effective remedy, i.e., denying access, is simply too Draconian to be contemplated by a civilized society. Continued monitoring of what passes for dialog between these two armed camps gives me little hope for early reconciliation of these opposing views.
Not only are there disagreements between the generations, but also between nations. Recently, student hackers in the Netherlands have been attacking systems in the US, in part by employing resources in neighboring countries. The authorities of the University providing the resources for these “experiments” point out that the activity is “legal” in the Netherlands. They argue that the difficulty is with the security of the target systems. (Where have we heard that argument before?) When one suggests that, if not illegal, the attacks are at least rude, said authorities become defensive and indignant.
Now it should be pointed out that the current level of security in the network has worked reasonably well for more than a decade; it is, as we have already noted, not subject to ready change; and it is appropriate, perhaps even necessary, to the intended use.
Effect of Computer Security on Social Trust
Social trust is necessary to the full enjoyment of the benefits of computers. Security influences that trust.
Many failures are public; they diminish trust globally, not just locally. My security is related to your security; if your system falls to hackers, it may give them a path to me and resources to be used against me. The damage that is done to necessary public trust and confidence by the publicity of our failures may be out of all proportion to the direct damage that either of us suffers.
The security measures that are indicated to preserve public trust may exceed those that are indicated by your use or mine. The security achieved as a result of each of us making our own local decisions based upon our own local situation may not be sufficient to preserve public trust and confidence. If we are to enjoy the potential benefits of this new technology, then we must ensure that its use is sufficiently orderly and well-behaved to sustain that trust.
That we do trust computers is obvious. Some minimum level of trust has been necessary to their acceptance and use. If you cannot trust what the computer tells you, at least most of the time, then it has no value. Some of that trust is possibly misplaced; it presumes a level of perfection that is difficult to achieve and maintain in complex systems.
That there is a fundamental undercurrent of mistrust is equally obvious. The RISKS forum, moderated by Peter Neumann, gives loud and, often, eloquent testimony to this mistrust.
Much of both the trust and mistrust of computers is independent of their security. However, trust is influenced by security. Security contributes to the necessary trust; its absence and its failures to the mistrust. Thus, computer security, whether we like it or not, is a social issue. It is global, not local. It is bigger than our systems. It is related to those fundamental human values of cooperation and collaboration.
The Cost of Security
We write, speak, and behave as though security were free, as though it were an independent property that could be achieved without diminishing any other desiderata. We speak as though its absence or inadequacy were always a mistake; we want to know who is to blame.
In the sense that good security is good design, this is true. However, in another sense security is usually achieved at the expense of some other desirable property of the system. I learned this the hard way when the design of my masterwork was dismissed by Dr. Willis Ware because it did not preserve to the user the ability to write and execute an arbitrary program of his own choice. For all the years since, I have been defending my choice to Dr. Ware on the basis that it is not possible to reserve all generality and all flexibility of a system to all users and still say that it is controlled or secure. Designers, implementers, and managers are confronted with hard choices. Their decisions will never be risk free and they will never please everyone.
Security of Populations
We also speak as though the issue were the security of individual systems. I would like to suggest that public trust is more influenced by the security of collections or populations of systems.
To date, most work in computer security has been done at the atomic level. That is, it has been about making statements about individual systems. We now have metrics with which to compare the trust of two systems. We are starting to do work at the sub-atomic level. That is, we can make statements about how components affect the security of a system. We have not even begun to make statements about the security of a population or network of systems.
A reader of “Computers at Risk” might be led to conclude that the problem can be readily dealt with simply by improving the security of component systems. However, security is not a perfectly composable property. That is, it is not, in general, possible to bind two systems together in a way that preserves the security of each. The level of security will always be something less than that of the lesser of the two.
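The “lesser of the two” rule can be made explicit in a toy model. The numeric security scores below are invented purely for illustration; as the text observes, we have no established science or metric for the security of populations, so this pessimistic lower bound is a sketch of the question, not an answer to it.

```python
def pairwise_security(a: float, b: float) -> float:
    # Working assumption from the text: connecting two peer systems
    # yields roughly the security of the less secure of the two.
    return min(a, b)

def population_security(levels):
    """Pessimistic lower bound for a fully interconnected population:
    repeated pairwise connection drives security down to the minimum."""
    result = levels[0]
    for level in levels[1:]:
        result = pairwise_security(result, level)
    return result

network = [0.9, 0.7, 0.4, 0.8]   # invented per-system security scores in [0, 1]
print(population_security(network))  # the weakest member dominates
```

Under this model, adding one weak system degrades the whole population, which is exactly why the security of single systems, taken alone, has so little relevance.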
The Interesting Questions
When I connect two systems as peers, neither dominating nor controlling the other, I assume that the level of security of the two is approximately the same as that of the least secure of the two. Yet, intuitively we suspect that the security of a large population of systems is higher than that of the least trusted system, and lower than the most. How do we make statements about populations? What is the effect on the population of adding a new system? What is the effect of increasing the security of members of the population? We have no science, art, or mechanism for addressing such questions. Neither do we have information to tell us whether the managers of one system or network consider the security of a nearby system before deciding to connect to it. Yet at the level of society, at the level of values, at the level of social trust and social order, these are the questions of interest. The security of single systems has little relevance.
Society’s need for confidence is so urgent, that if it can get it no other way, it will resort to political force. Indeed, it will attempt to use such force even if it is ineffective, or even counter-productive. It will attempt to impose dogma and order by force.
There is a natural, or at least historical, contention between freedom and order. Nowhere does it manifest itself more than in computing. The authorities are frightened by the individual freedom afforded by the computer, and all too ready to jump in and impose order. Any disorder is taken as justification.
On the other hand, they are equally frightened by the idea of good security in private hands. The National Security Agency is resisting any use of cryptography by commerce because of the potential impact on the cost of intelligence gathering. Likewise, the FBI has recently tried to outlaw the use of the same technology because of the potential for its exploitation by criminals.
In the short run, the level of security in the population of computers is a given; the population is so large that it is not possible to change its security except at the margin. However, the National Academy of Sciences report, “Computers at Risk,” would have us believe otherwise. It would have us believe that the problem lies in the products offered by vendors, rather than in the systems operated by users, and therefore that the solution is to influence vendors rather than users: if vendors will simply offer better systems with safer defaults, then the problem will be solved. The report either is not aware of, or ignores, the evidence that users systematically compromise away the security properties with which systems are shipped.
The full enjoyment of the benefits of computers requires a certain level of confidence in how they behave. The security of the systems contributes to that trust. The issue is more one of trust in the population of computers, rather than in any one. While most computer-related behavior is orderly, there is sufficient deviant behavior for it to be a threat to the necessary level of trust.
Security of systems is necessary but not sufficient for the security of the population. It appears to be important to be able to answer questions about the level of trust in the population.
The values to be conserved include trust, confidence, cooperation, collaboration, coordination, competition, contention, order, freedom, and enjoyment of the use and benefits of computing. These values conflict and contend. What is good for one may not be good for all of the others. However, it is clear that security will impact them all. The choices that confront us are hard choices.
Things that society concludes are valuable, it takes steps to conserve. There is some evidence to suggest that society will conclude that computers are valuable. Yet to date, we have taken few such steps for computers. To the extent that we fail, to the extent that the results are unsatisfactory or even merely unsatisfying, we invite intervention by authority with a corresponding loss of freedom.
Deloitte & Touche
Today truly is a wonderful time to be a computer criminal. Not only are there an increasing number of criminal opportunities, but these are the perfect crimes for the 1990s. Persons who would not be willing to use a gun to hold up someone on the street may be quite willing to simply press a few computer keys in order to steal. For those who like a clean crime, it can be a bloodless act and the victim need not be faced directly. The public views these acts as less than serious crimes, similar to but distinct from other white collar crimes. Prevention is quite difficult, while the chances of getting caught are minimal and of getting punished almost zero. What great incentives to commit crime!
And what great public crime control policy issues to resolve. Computer crimes have evolved from exotic incidents to a major societal issue. They have quickly moved from hacks to attacks, from fooling around to fouling up, and from violations to virucide. In order to fight computer crime, the society, and computer professionals in particular, face some very difficult decisions on some very fundamental issues. This is a serious moment in our society, as we seek to establish an appropriate balance between old law and new technology.
Peter Neumann’s quite difficult task in developing his paper on “Computer Security and Human Values” was to consider appropriate measures to protect against computer criminals while, at the same time, to stress fundamental human values. To his credit, Neumann has successfully resisted easy answers and abstract theory. He drew from his expertise and his intimate knowledge of the world of risks to present a broad perspective on what computer security can and should mean. His analysis serves as a useful supplement to the excellent Computers at Risk.
The strength of his paper is that he has expanded the usual definitions of computer problems and computer security objectives. That perspective is quite appropriate for this time and this conference, since it does not restrict public policy discussions to time-limited or technology-limited considerations. Peter Neumann has once again served the computing community with his insights, providing us with an important agenda to consider.
He has also avoided getting stalled on some of the current hot computer security topics, such as encryption standards, export controls, and Operation Sun Devil. Yet, he has given us a “vocabulary” containing the types of questions to raise in evaluating some of these emerging issues as well as those issues that we cannot even anticipate at this time.
There are several points that I feel have not been sufficiently covered in this fine paper. What has been covered is excellent, but there is more that needs to be added. I will address those points in my comments.
My only complaint about his paper is that he has said so much and said it so clearly that he has not left much room for discussion. That creates a difficulty in reviewing his paper. Nevertheless, I am guided by the great words of the unknown author who said, “One who hesitates is not only lost but miles from the nearest exit.” So, without any hesitation, here are my comments.
Understanding the Computerization of Crime
Neumann introduces his paper by stating that he will take a broad view of undesirable human activities. That is a very necessary perspective and it is also refreshing to find in information security. Too often, human aspects are neglected or put into quite separate and often under-appreciated security awareness/management sections. Seldom is there an integrated socio-technical approach to the computer crime problem.
However, as pleased as I am that social aspects are considered, I think that his paper contains just the tip of the behavioral issues that must underlie a sophisticated and effective computer security approach. It is necessary to understand even more of the human aspects than are found in his discussion. We need to establish where the social and psychological lines are drawn between normal and deviant, between allowed and disallowed, between expected and unexpected, between wanted and unwanted.
To start with, we need to know more about typical users and their normal uses of computers and information. We do not even know, for example, the ways by which average users define authorized and unauthorized activities in their work (as distinct from official policy and system decisions). How do users draw their own lines as to what they consider as appropriate and inappropriate? How many employees view certain use of their office computers as similar to pens, pencils, and paper in the office – as perquisites or benefits that are available for the taking? Is there something about computer-mediated work (Zuboff), which “disappears behind the screen,” that is more prone to crime and abuse? Do certain organizations structure their work relationships in such a way that they become “criminogenic” (see Sherizen) environments, i.e., crime producing or inducing structures?
Beyond the “normal,” Neumann’s model also needs more details about the crime aspect of the computer crime concept. While information security practitioners talk about crime, the field of information security does not understand the basics of criminal behavior and crime control measures. More directly stated (Sherizen):
It is ironic that the field of information systems security lacks sufficient insights concerning computer criminals. Information security’s operating models and procedures contain a number of largely untested and possibly quite incorrect assumptions about how and why computer criminals function. These assumptions serve as the platform upon which controls and safeguards have been established.
Certainly, the computer aspects of computer crime have quite appropriately been stressed. Yet, other important aspects addressing how opportunities are created for crime and the motivations that shape the crime are given short shrift. In order to meet the challenge created by increasing computer crimes, the field of information security needs to add criminological concepts to the information security database and to more definitively place crime control concepts within the information security process.
Computer crime can best be understood as the computerization of traditional crimes, particularly economic or white collar crimes, as discussed by Sherizen. While new crimes are possible with the use of computers (either those for which laws have not been defined or which are so unique that they were not possible without the technology), the majority of computer crimes are well known behaviors that existed prior to computerization. While computers have changed the nature and potential damage that can occur, computer crime developments have quite predictable features that follow the history of other crimes.
Computer crime must also be understood as composed of individual behavior as well as organizational behavior. We must move away from the “good organization/bad individual” model of computer crime. Neumann mentions that organizations may also commit computer crimes. This point is not well recognized or often discussed. There is an almost unconscious dichotomy that suggests that computer crime is composed of individuals as attackers and organizations as victims. The “Organization as Computer Criminal” needs to be recognized as a problem area. Examples of this type of crime are aspects of competitor intelligence gathering, insider trading activities, programming of supermarket scanners that overcharge shoppers, government snooping, illegal collection of personal information, money laundering, and many other examples found in RISKS.
Finally, there is a need to build on our knowledge of the history of crime to prepare for what could turn out to be very different computer crime in the future. One specific aspect of this is to understand that crime often evolves from an activity of individuals to an organized activity. Hackers (the bad kind) are indeed a problem, but they may pale in comparison with what I consider an almost inevitable progression into a larger scale, coordinated, and well planned computer crime onslaught led by professional criminals. We may look back at 1991 as the quite benign days when hackers and virus makers were the only problem.
While these behavioral issues add complexity to the Neumann model and require additional sets of questions to be answered, they also add conceptual substance to controlling computer crime while meeting human values, the theme that Neumann and the conference so well represent.
The End of the (Ab)User Friendly Era
Can Neumann’s excellent agenda to resolve the major gaps be accomplished? Does the evolution of computer security inevitably mean the end of the user friendly era, where there will have to be security and audit hidden behind all screens, keyboards, and modems? Will it be mandatory for computer vendors and user organizations to have to meet certain standards of security? Just as cars today are required to have windshields with safety glass, so it is quite possible that a number of forces (law, insurance, public opinion, etc.) will force computer systems and equipment to come protected with the counterpart of safety glass.
Those are not technical but political decisions, and they are only partially raised in the paper. While Neumann certainly presents appropriate objectives, particularly in presenting the gaps and ways to narrow them, more information needs to be added to his model on the politics of data security, i.e., how decisions on the resolution of computer crime issues will be made.
Yes, there is need for promulgation of ethics, exertion of peer pressures, and enforcement of the laws, as mentioned in the paper. But social change occurs from more than that. Luckily, histories are available on how other social conflicts were resolved. We can learn important lessons on how information protection can best be provided while continuing to meet our important human values. Several examples can be mentioned briefly.
On the western (non-electric) frontier of the U.S., disagreements on property rights led to almost continuous battles between Native Americans, farmers, cattle ranchers, sheep herders, and the propertyless. To a large degree, these battles were decided by the invention of barbed wire. Ownership was quite literally set by the wire, which defined the property lines. They who had the wire had the rights. Livestock or crops could be kept in and trespassers or the unwanted could be kept out.
For some, the current battle over electronic information property rights is a search for the electronic equivalent of barbed wire. Ownership of intellectual property, only in part a battle to control that “stuff” called cyberspace, is becoming an almost continuous set of encounters. The participants differ from the western frontier days but the stakes are as high for the future of this nation. In this new frontier battle, the lines are not going to be drawn in the same fashion. How they will be drawn, the equivalent of the “electronic barbed wire,” has to be carefully considered.
Another historical change shows how certain individual behaviors become changed by societal restructuring. This is shown with the history of pilots in the early days of aviation. They fit our contemporary definitions of hackers. These barnstormers were wild, didn’t respect property, and were constantly challenging authority. When they crashed their system, it really went down. They were a unique breed of individuals, who tested the limits of the world of aviation, sometimes literally by walking on the wings and performing amazing and often dangerous stunts. They were necessary for the early stages of aviation because they tested the limits of the technology.
What finally led to the end of the barnstorming pilots was that the business interests of airlines took precedence over the aviation interests. More directly, business people and moneyed interests wanted schedules, guaranteed delivery of products and people, contractual relationships with shippers, and other accouterments of an industry. The government supported much of this since it wanted guaranteed mail delivery. Stunt pilots and daredevils were viewed as threats to the industry-making wishes and needs of the airline industry builders. After the industry reached a certain level of development, these “pilot hackers” could have quite literally killed the industry. Those who could not stop challenging the limits of flight faced few choices: they could become test pilots for aircraft companies, they could try to fit within a military force, or they could become circus performers. The airline industry won, the pilot became “civilized,” and (at least in the movies) we all fly off safely and on schedule.
This is not meant to equate “hacker pilots” and computer hackers. Rather, it is raised to show how certain deviant behaviors get resolved, often without changing the behavior but by creating an institutionalized patterning, accepting certain activities and sidetracking other behaviors. There will be a process that will challenge the computer crime problem. It will not necessarily be the same as with airline pilots but it will be a process whereby at least a temporary resolution will be reached.
Certainly, we will have a long wait for the end of computer crime/hacker attacks. As with other crime problems, at least two points are clear. First, societies and organizations have a capability to absorb or get used to what previously was considered obnoxious (such as unions, long hair, MTV). Secondly, society gets the crime that it deserves, i.e., crime reflects the values of the society and how those values get played out in terms of public policy and policing priorities.
The End of Information Security?
I end my review of the paper with some comments about the field of information security. There are indications that information security is undergoing some significant retrenchment at this time. There are cutbacks on information protection that will affect what security can or will be put in place in the future. That leads to certain essential operational questions.
Who is going to manage information security? Some of the indicators of this retrenchment in information security are growing cutbacks, resulting in some excellent managers losing their jobs, essential staff increases being denied, and, at the same time, increases in the span of security responsibilities. Managers who continue to lack management support are growing in their disenchantment. Some are even questioning whether information security is a dead end job.
Who is going to develop information security products? The information security marketplace is also facing problems. Serious competitive pressures exist and some companies are not surviving. There are shrinking opportunities in certain leading industries, such as banking, where sales of these products often flourished. The government and private sectors don’t seem to be coordinating their interests and, for certain key activities such as encryption, actually are in active disagreement.
Who is going to follow the guidelines of this conference? Many important reports have been written and insightful conferences have been held before. Yet, information protection is in competition with many other risk problems that require attention. There is plenty of information overload and security is a hard sell. Is anything going to happen as a result of this conference?
These are the tough questions that will make or break the important findings provided in this paper. In essence, without solving the issue of making information security a strategic issue in business and in government, the battle over information will continue.
As can be understood from my comments throughout this discussion, I feel that Peter Neumann’s paper has done what it should. It started me thinking about some critical issues and it made me want to find out even more. It raised important questions and even established some answers. It brought together sources from a number of different fields. For all of that, I thank Peter and hope that he continues to contribute to computer security with human values.
Data Security Systems, Inc.
Sherizen, Sanford, Federal Computers and Telecommunications Security and Reliability Considerations and Computer Crime Legislative Options, Contractor Report for the US Congress, Office of Technology Assessment (OTA), 1985.
Sherizen, Sanford, “The Computerization of Crime,” Abacus, 5 (1) (1987).
Sherizen, Sanford, “Criminological Concepts and Research Findings Relevant for Improving Computer Crime Control,” Computers & Security, 9 (1990) 215–222.
Zuboff, Shoshana, In the Age of the Smart Machine: The Future of Work and Power, Basic Books, 1988.
This paper introduces a set of distinctions relating to responsibility and failure that allow us to analyze situations in which something goes wrong. These distinctions are used to analyze a computer break-in in order to discover who might be blamed and possible explanations for their failure to meet their responsibilities. The results of such an analysis can then be used to design new human practices or computer systems that lead to better computer security. The distinctions also provide road maps for assessing our own responsibilities and avoiding many situations that lead to blame and negative consequences to ourselves and others.
In his essay, Peter Neumann raises the question of how we can realistically “place the blame” for undesired system and human behavior. Since “blame” means “to hold responsible,” we must first ask what it means to be responsible.
To be responsible for something is to be accountable for it. When we say that a person has a responsibility, we mean that he or she has an obligation or commitment in some domain of action. There are several ways in which one can acquire responsibility: morals, formal contracts, informal agreements, laws and regulations, standard practices, and declarations.
1.1 Moral Responsibility
Moral responsibility refers to living a life that is “right” or “good.” Although individuals and cultures often disagree about what is right, some people argue that there is an absolute moral standard that can and should govern all. Philosophers and religious leaders continue to search for this standard, and some claim to have found it, for example, in the Ten Commandments. Moral responsibility is often used to justify statements such as “Scientists are responsible for how their work is used.” In practice, complete agreement about moral issues is difficult to reach, not only because of individual and cultural differences, but because moral statements are often vague and difficult to interpret.
1.2 Formal Contracts
A formal contract is a legally binding agreement between two or more parties. Each party to the contract is held responsible for the obligations incurred by the terms of the contract.
1.3 Informal Agreements
People often make informal agreements in their everyday actions with each other, for example, to attend a meeting or complete a report. Although there is no formal contract, the parties of an agreement are held responsible for their promises.
1.4 Laws and Regulations
Societies have laws and regulations. Some predate our arrival into the community; others are passed after we arrive. Even if we don’t agree with them, we are held responsible for abiding by them.
1.5 Standard Practices
The customs or standard practices of the communities in which people live also define responsibilities. By community, I mean family, friends, clubs, organization of employment, neighborhood, city, country, and so forth. Like laws and regulations, the communities we live in hold us responsible for abiding by the standards even if we don’t agree with them. A company, for example, may expect its programmers to follow certain standards for software development, or its computer users to pick passwords that satisfy certain criteria.
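A company standard of the kind mentioned here, say criteria that passwords must satisfy, is easy to state as a small check. The sketch below is purely hypothetical; the specific criteria (minimum length, required character classes, a short deny-list) are invented for illustration and are not drawn from any source in this monograph.

```python
# A hypothetical password standard of the kind an organization might
# hold its users responsible for following. The criteria are invented
# for illustration only.

COMMON_PASSWORDS = {"password", "letmein", "qwerty", "123456"}

def violates_policy(password):
    """Return a list of reasons the password fails the (made-up) policy;
    an empty list means the password is acceptable."""
    reasons = []
    if len(password) < 8:
        reasons.append("shorter than 8 characters")
    if password.lower() in COMMON_PASSWORDS:
        reasons.append("appears on the common-password list")
    if not any(c.isdigit() for c in password):
        reasons.append("contains no digit")
    if not any(c.isalpha() for c in password):
        reasons.append("contains no letter")
    return reasons

print(violates_policy("qwerty"))      # fails on several counts
print(violates_policy("blue7horse"))  # [] -> acceptable under this policy
```

The point of such a check is exactly the one made in the text: once the standard is explicit, the community can hold users responsible for abiding by it, whether or not they agree with it.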
1.6 Declarations

We can take responsibility by making a declaration and commitments to support that declaration. For example, I might declare responsibility for my health and make a commitment to exercise daily and follow current standards for eating properly.
Every action we take or fail to take has consequences to others and to ourselves. We can take responsibility for our actions in order to minimize the negative consequences or maximize the positive ones.
1.7 Consistent and Conflicting Responsibilities

Different responsibilities may be consistent. For example, the laws governing murder are guided by and generally consistent with moral principles about killing. Standard practices about telling the truth are likewise governed by moral principles regarding honesty.
However, responsibilities can be inconsistent. For example, people who fight in a war must face the conflict between performing their military duties and following moral principles against killing. If one’s declared responsibilities or moral principles come into conflict with laws or standard practices, then one must decide whether to follow the laws and practices, possibly adopting different principles; violate the laws and practices, accepting the risks and consequences; or attempt to change the laws and practices so that they are consistent with one’s principles.
1.8 The Dynamics of Responsibility
Our responsibilities are not fixed. We can re-negotiate agreements, rewrite contracts, make new declarations, change our moral principles, pass laws, and set new standards. Being responsible does not mean that one has to stick with current obligations.
2.0 Failure to Meet Responsibilities
In practice, people do not always meet their obligations. The reasons include incompetence, insincerity, blindness, vagueness, conflicts, unforeseen circumstances, and impossibility.
2.1 Incompetence

A person may be incompetent to perform a promised task in the time allowed. Competence is always tied to a particular domain of action. A person may be competent at writing research papers about computer security, but incompetent at implementing protection mechanisms on a given system. Some people are absolved of crimes committed because they are judged to be incompetent in making moral choices.
Many people are incompetent in the domain of managing promises. Rather than re-negotiate an agreement so that a task can be revised, postponed, or assigned to another, the person simply fails to complete the task by the specified time. People who accept more requests than they can satisfy may be incompetent at saying “no” or at assessing their own competence at performing the assigned tasks.
2.2 Insincerity

A person may agree to something that he or she has no intention of doing. If a person has a recurrent pattern of making promises that the person is either incompetent to satisfy or insincere about keeping, then others will make an assessment that the person is untrustworthy. Trust is established only when a person consistently keeps his or her promises.
2.3 Blindness

A person may be unaware of existing laws or customs, thereby violating them. This is particularly easy when traveling to different parts of the world or interacting with people having different cultural backgrounds. A new employee may be blind to the computer security practices of the organization, and fail to follow the practices for passwords and virus protection.
2.4 Vagueness

The obligations behind a given responsibility may be vague. For example, if you have asked me for a report “soon,” then I may consider the end of the week to be acceptable, whereas you may consider that late. Moral statements are often vague, making it difficult to determine responsibilities. For example, suppose I say that I am responsible for how my research is used. Does it mean that I am responsible for crimes committed by terrorists who cover up their dealings using cryptosystems learned by reading my book Cryptography and Data Security?
2.5 Conflicts

Responsibilities can come into conflict because of inherent inconsistencies, as with moral principles, or because of inconsistencies arising from the impossibility of doing two things at once. For example, if I have assumed responsibility for the security of my employer’s system and for my family, then I might find myself with conflicting obligations if I discover an intruder on the system just as I am getting ready to leave the office for a planned vacation with my family. People who take on more commitments than they can handle may find themselves in a constant struggle over conflicting obligations.
2.6 Unforeseen Circumstances
A person may fail to meet an obligation because of an emergency, accident, or some other unforeseen circumstance. For example, I may miss a meeting because I got into an automobile accident and am lying unconscious in the hospital. People have been accused of unauthorized computer access after dialing a wrong number and trying to log into a system they were not authorized to use. Unforeseen circumstances arise because of our inability to predict the future.
2.7 Impossibility

A person may fail to meet an obligation because the obligation is impossible, though that impossibility might not be recognized. For example, a person might be unable to develop a “totally secure system,” depending on the interpretation of “totally secure.”
Blame arises whenever something goes wrong and a person is held responsible. In common usage, blame also carries with it three additional connotations:
- The accuser has assessed that the responsible party has produced significant negative consequences for the accuser or for others by his or her action (or inaction).
- The accuser seeks to have the responsible party forced to make good (undo the negative consequences) or to be punished.
- The accuser sees some violation of moral principles in the responsible party’s action.
Thus, being blamed can produce negative consequences to the person blamed, for example, a lost job or friend, lost trust, fewer opportunities, fines, or imprisonment. A person can attempt to minimize such consequences by taking actions that avoid possible failures, and by taking care of the negative consequences of failures that happen anyway. For example, a person can make promises only after assessing that they can be fulfilled, re-negotiate agreements when something happens that makes fulfilling a promise impossible, make sure that all agreements are clear, be informed about relevant laws and practices, and avoid conflicting obligations. If something does go wrong, the person can make reparations and attempt to undo the negative consequences. A person who lives according to these principles is usually characterized as responsible, reliable, and trustworthy.
We can use the distinctions for responsibility and failure to determine who may be responsible in a situation involving computer misuse, and why the breach of security may have happened. Here we will consider the case of an unauthorized break-in by a “cracker.”
3.1 Types of Responsibility

We will first consider how each type of responsibility relates to several players: the cracker, system manager, users on the system that was cracked, and the vendor.
• Moral: Most people agree that break-ins are unethical, though some crackers argue they are not because they expose vulnerabilities. Crackers also argue that simple break-ins do not hurt anyone.
• Formal contracts: The system manager is not likely to be under formal contract to provide security. The vendor, however, may have contractual stipulations about correctly representing the security features of the system, about delivering a system satisfying certain criteria for trusted systems, or about delivering fixes to security flaws once they are discovered.
• Informal agreements: The system manager may have agreed to take responsibility for the security of the system. However, it is possible that nobody in the organization ever seriously considered security and no agreement was made.
• Laws and regulations: The cracker is held responsible for abiding by the computer crime laws that prohibit unauthorized access.
• Standard practices: The system manager may be held responsible for knowing the standard practices in the computer security community and following these practices on the system. Users may be held responsible for following security practices adopted by the organization, for example, about passwords.
• Declarations: The system manager may have said he or she would take responsibility for the security of the system, even if the person’s boss never asked the manager to do so.
3.2 Causes of Failure
We next consider ways in which each type of failure could explain the break-in. This list is by no means exhaustive, and we invite the reader to consider other possible explanations.
• Incompetence: The system manager may have been unfamiliar with the steps needed to secure the system, unable to install or implement necessary security mechanisms, or unable to manage his or her commitments. The employees of the vendor may have been incompetent at delivering a system that meets its specifications or at distributing fixes in a timely fashion.
• Insincerity: The system manager may have been insincere when taking on obligations. The vendor may have intentionally overrated the security of the system.
• Blindness: The security manager may have been unaware of the particular vulnerability that the cracker exploited, but capable of fixing it had he or she known. The users may not have known about the security practices of the organization and chosen easily cracked passwords. The vendor may have been blind to some of the vulnerabilities in its system. The cracker may have been blind to the consequences of his or her actions on others and ultimately on himself or herself. Many crackers do not realize the cost of their actions in terms of lost time and money to the organization, and lost opportunities for their own future arising from the negative assessments made of them by society.
• Vagueness: The concept of “computer security” is vague unless accompanied by a precise and clear security policy. All of the players may have made a different interpretation of security. The security standards given to users may have been unclear.
• Conflicts: The manager may have had conflicting obligations, or accepted some risk in order to provide the users with desired functionality. The manager may have lacked adequate resources to fully protect the system. The employees of the vendor may have had similar conflicts.
• Unforeseen circumstances: The manager may have installed a system change that inadvertently introduced a vulnerability that nobody had anticipated.
• Impossibility: It is not possible to provide fully secure systems under almost any interpretation of security.
3.3 Taking Responsibility
The preceding analysis shows who is likely to be blamed after a break-in, and the possible explanations for security failure. The break-in could lead to negative consequences to the cracker, to the system manager and his or her organization, and to the vendor. The organization may suffer losses of time and money spent recovering from the break-in and lost credibility with its customers. The vendor may suffer lost credibility for its products.
By taking responsibility, people in the organization can attempt to avoid the losses that can otherwise result. The system manager’s boss can make sure the manager is competent and willing to take responsibility for the security of the system, and that he or she has the necessary resources to do so. The system manager can make sure that he or she stays informed about vulnerabilities, that users are informed about security practices, that all passwords are well chosen, that fixes are installed promptly, and so forth. Similarly, employees working for the vendor can take responsibility for the security of their products and for their correct installation and use.
Taking responsibility, however, also has its costs. For the organization, it includes money spent on security that might otherwise go into improvements in performance or functionality on the system, better customer service, investment in new products or services, or higher salaries. For the vendor, it includes the extra cost of developing more secure products, and the cost of making sure the systems are properly installed and maintained. Again, these costs may cut into other programs, such as the development of the next generation of machines.
In deciding whether to take responsibility for avoiding break-ins or other types of computer misuse, one must evaluate these advantages and disadvantages. The organization using computers must consider its assets, the cost of each type of misuse, the cost of security, what its competition is doing, and so forth. The vendor must likewise consider the cost of providing systems with certain security features, and the effect of that on its position in the marketplace. Neither of these is a simple decision. The decisions are complicated by a lack of clear standards for computer security and a general sentiment that it is not possible to have 100 percent security.
I have given a set of distinctions relating to responsibility and failure that allow us to investigate situations in which something goes wrong or could go wrong. These distinctions were then used to analyze a computer break-in in order to discover who might be blamed for the break-in and possible explanations behind their failure to meet their responsibilities. The analysis of this particular situation could be taken deeper, and it could be broadened to consider other types of computer misuse and the responsibilities of other players such as researchers, the government, standards organizations, and so forth. The real value of performing such an analysis is not in pinning the blame and punishing the culprit. Humans must satisfy many conflicting and challenging obligations, and do so out of considerable blindness. In this context, faults are common and compassion is essential. The value of the analysis is in the learning that takes place. By finding the areas of fault, we can design new human practices or computer systems that better meet our objectives. If a system is broken into, then the system manager’s boss could fire the manager. A better strategy might be to send the manager to a computer security course, enroll the manager in a professional organization for security managers, hire a consultant for a day, relieve the manager of certain responsibilities, give the manager additional resources to do the job, or send the manager to a course on managing promises.
The distinctions also provide road maps for assessing our own responsibilities. If we all used them more, we would be better able to observe the domains in which we are not living up to our responsibilities. We could avoid many situations that lead to blame and negative consequences for ourselves and others.
I am grateful to Peter Denning and Steve Steinberg for their comments on an earlier version of this paper.
- Neumann, P., “Computer Security and Human Values” in Terrell Ward Bynum, Walter Maner, and John L. Fodor, eds, Computing Security, Research Center on Computing & Society, 1992, pp. 1-30. (See above.)
- Denning, D., “Hacker Ethics” in Terrell Ward Bynum, Walter Maner, and John L. Fodor, eds, Computing Security, Research Center on Computing & Society, 1992, pp. 59-64. (See below.)
Kenneth C. Citarella
I am a prosecutor. I specialize in white collar crime, and more particularly in computer crime and telecommunication fraud. My professional interest regarding computer crime, computer security, and the human values involved with them is, therefore, quite different from that of the other members of this panel. I study motive, intent, criminal demographics, software security and other topics with a more limited focus: how do they help me identify, investigate, and prosecute a criminal?
A crime is an act prohibited by law. Criminal statutes define acts deemed so inimical to the public that they warrant the application of the police power of the state. Computer crimes only exist because the legislature has determined that computers and what they contain are important enough, like your house, money and life, that certain acts directed against them merit the application of that power.
A curious distinction arises with regard to computers, however. Your house can be burglarized even if you leave the door open. If you drop your money on the street, a finder who keeps it may still be a thief. The foolish trust you place in an investment swindler does not absolve him of guilt for his larceny. Yet much of the discussion on what constitutes computer crime, and even the computer crime statutes of many states, place a responsibility on the computer owner to secure the system. Indeed, in New York State, unless an unauthorized user is clearly put on notice that he is not wanted in the system, the penetrated system falls outside the protection of several of the computer crime statutes. The intrusion, no matter how unwanted by the system owner, has actually been legitimized by the legislature. Since I participated in the writing of the New York computer crime statutes, I can attest to the desire of legislative counsel to force the computer owner to declare his system off limits. So the societal debate over how much protection to afford computers has very practical consequences in the criminal arena.
Peter Neumann’s paper contributes to this debate as truly as it typifies it. He explores “deleterious computer-system-oriented effects,” such as a loss of confidentiality or system integrity, which result from antisocial behavior such as abusive “hacking.” (“Hacking” and “hackers” are terms that have become so romanticized and distorted from their original context, that I refuse to use them; they simply do not describe the behavior which is of interest.) Permit me to translate this into real life, that is, the way it comes into my office. A computer intruder penetrates the system of a telecommunications carrier and accesses valid customer access codes. She distributes these codes to a bulletin board host who posts them for the use of his readership. Within 48 hours, the numbers are being used throughout the United States. The carrier experiences $50,000.00 in fraudulent calls before the next billing cycle alerts the customers to the misuse of their numbers. Or, make them credit card numbers taken from a bank and used for hundreds of thousands of dollars of larcenous purchases. Or, it could be experimental software stolen from a developer who now faces ruin.
Stories like these are known to all of us. They have something in common with all criminal activity, computer based or not. The criminal obtains that which is not his, violating one of the lessons we all should have learned in childhood. The computer intruder ignores that lesson and substitutes a separate moral imperative: I can, therefore, I may; or, might makes right. The arguments about exposing system weaknesses, or encouraging the development of youthful computer experts, amount to little more than endorsing these behavioral norms. These norms, of course, we reject in all other aspects of society. The majority may not suppress the minority just because they have the numbers to do so. The mob cannot operate a protection racket just because it has the muscle to do so. The healthy young man may not remove an infirm one from a train seat just because he can. Instead, we have laws against discrimination, police to fight organized crime, and seats reserved for the handicapped.
I suspect that part of our reluctance to classify many computer intrusions as crimes arises from a reluctance to recognize that some of our bright youths are engaging in behavior which in a non-computer environment we would unhesitatingly punish as criminal. The fact that they are almost uniformly the white, middle-class, and articulate offspring of white middle-class parents makes us less ready to see them as criminals. Although there are questions to be resolved about computer crime, we are sadly mistaken to focus on what may be different about computer crime, to the exclusion of what it has in common with all other criminal conduct. Refer back to the simple scenarios outlined above. The computer intruder may have all the attributes some commentators find so endearing: curiosity, skill, determination, etc. The victims have only financial losses, an enormous diversion of resources to identify and resolve the misdeeds, and a lasting sense of having been violated. They are just like the victims of any other crime.
Of course, there are computer intruders who take nothing from a penetrated system. They break security, peruse a system, perhaps leaving a mystery for the sysop to puzzle over. Would any computer intruders be as pleased to have a physical intruder enter their house, and rearrange their belongings as he toured the residence? The distinctions on the intruders’ part are basically physical ones: location, movement, physical contact, manner of penetration, for example. The victims’ perspectives are more similar: privacy and security violated, unrest regarding future intrusions, and a feeling of outrage. Just as a person can assume the law protects his physical possession of a computer, whether he secures it or not, why can he not assume the same for its contents?
What after all is the intent of the intruder in each situation? To be where he should not be and alter the property that is there without the approval of its owner. Each case disregards approved behavior and flaunts the power to do so.
Of course, computer intrusions have many levels of seriousness, just as other crimes do. A simple trespass onto property is not a burglary; an unauthorized access is not software vandalism. The consequences must fit the act. Prosecutors and police must exercise the same discretion and common sense with computer intruders they do regarding conventional criminals. No reasonable law enforcement official contends that every computer intrusion must be punished as a criminal act. Youth officers and family courts commonly address the same behavior in juveniles that other agencies address in adults. Sometimes a youth is warned, or his parents are advised about his behavior, and that is the best response. But to insist that some computer intrusions are to be legitimized assumes that law enforcement lacks the common sense and discretion to sort out prosecutable incidents from those best handled less formally. This was expressly the concern of legislative counsel in New York. If we choose not to trust the discretion and experience of our law enforcement authorities regarding computer crime, then how can we trust these same people to decide which drug trafficker to deal with to get someone worse, or to decide which child has been abused and which was properly disciplined? The point is that law enforcement makes far more critical decisions outside of the context of computer crime than within it. The people involved are trained and have the experience to make those decisions. Yet much of the debate over computer crime assumes just the opposite.
In my personal experience, prosecutorial discretion has worked just as well in computer crimes as it has regarding other criminal behavior. Some complaints result in a prosecution; some are investigated and no charges filed; some are not ever entertained.
Lastly, I should point out that frequently computer intruders are also involved in a variety of other crimes. Typically, credit card fraud and software piracy are in their repertoire. And, let us not forget that the telecommunication charges for all their long distance calls are being borne by the carrier or the corporate PBX they have compromised. With telecommunication fraud exceeding a billion dollars a year, the societal cost of tolerating these intruders is too large to be blindly accepted.
If the challenge of penetrating a system you do not belong on is an essential way of developing computer skills, as some people contend, then let computer curricula include such tests on systems specifically designed for that. Surgeons develop their skills on cadavers, not the unsuspecting. Pilots use simulators. Why should computer specialists practice on someone else’s property at someone else’s expense?
There are privacy and Fourth Amendment issues involved in computer crime. But they are the same issues involved in any other criminal investigation. The public debate is needed and cases must go to court as has always been the case with constitutional aspects of criminal law. Whenever law enforcement follows criminal activity into a new arena, problems arise. It is as true with computer crime as it was with rape and child abuse cases. The answers lie in understanding the common forest of all criminal behavior not in staring at the trees of computer crime.
Assistant District Attorney, Westchester County
Dorothy E. Denning
I am a computer scientist who has specialized in the area of computer security for about eighteen years. Up until 1990, I focused my research on understanding the vulnerabilities of systems and designing mechanisms that would protect against these vulnerabilities. I paid little attention to the people who were accused of being perpetrators of the crimes I was trying to prevent or detect. Then in 1990, I began doing research on young computer “hackers” who break into systems. Since then, I have interviewed or met several dozen hackers who began hacking as adolescents and are now college age. I reported my initial findings in Denning (1990).
Not all people who call themselves “hackers” break into systems or commit computer crimes. Indeed, the term “hacker” originally meant anyone who loved computing, and there is still an annual “Hackers Conference” of such people. However, people who break into systems also call themselves hackers, and they refer to their tools as “hacking programs” and “hacking worksheets.” They write articles such as “A Novices Guide to Hacking” and “Yet Another File on Hacking Unix.” Thus, it is not surprising that the media and law enforcement communities use the word to refer to someone who attempts to gain unauthorized access to computer systems.
Some people use the term “cracker” to refer to hackers who break into systems. I will continue to use “hacker” since the term is used so widely, especially among the people about whom I’m writing. In the context of this paper, however, I am referring only to crackers.
Others use the phrase “malicious hacker” to distinguish crackers from non-crackers. However, most of the hackers I have spoken with say they have no intent to inflict harm or suffering on another. Thus, I reserve this phrase only for people who intentionally cause harm.
Although hackers violate laws and professional codes of ethics in the domain of computing, I would not characterize the hackers I have met as “morally bankrupt” as some people have called them. I have not seen any data to support allegations that they are in general more prone to lie, cheat, or steal than others. The hackers I have met have seemed to me to be decent people.
Although I do not condone unauthorized break-ins or accept the arguments that hackers use to justify their acts, it is not my intention here to present counter-arguments to the hacker ethic. My goal is only to describe that ethic, and comment briefly on its implications for ethics education.
2.0 Why Hackers Break Into Systems
Before turning to the ethics of hackers, it is important to understand why hackers break into systems. From what I have learned, most hackers do it for the challenge, thrill, and social fun. Although the stereotypical image of a hacker is someone who is socially inept and avoids people in favor of computers, hackers are more likely to be in it for the social aspects. They like to interact with others on bulletin boards, through electronic mail, and in person. They share stories, gossip, opinions, and information; work on projects together; teach younger hackers; and get together for conferences and socializing. They are curious about the vast network of systems, and they want to explore it. They hear about a computer at a place like Los Alamos National Labs, and they want to find out what it does, what it’s used for, and who uses it. By sharing the secrets they learn, hackers also gain recognition from their peers and entry into exclusive hacker groups. Since their actions are illegal, hackers may also enjoy the thrill of doing something that they are not supposed to do without being caught.
There is nothing particularly unusual about hackers’ motives. Curiosity, adventure, and the desire to be appreciated and to be part of a group are fundamental to all human beings. Moreover, there are powerful motives behind the attraction to learning secrets, including the desire to have control, to feel superior, and to achieve intimacy with those with whom the secrets are shared. They allow one to be an insider rather than an outsider, to be accepted by a group, and to cross forbidden boundaries. (Bok 1983)
Most hackers do not break into systems for profit or sabotage. Although some do, I will restrict my discussion to those that do not, since my personal experiences have been with hackers who consider these activities to be wrong.
3.0 Hacker Ethics
For the most part, the moral principles of hackers are not much different from those of us who consider hacking wrong. The hackers I have spoken with agree that it is wrong to hurt people or cause damage. Where hackers differ is in their interpretation of what constitutes “hurt” and “damage,” and thus in their particular ethical standards of behavior.
Hackers do not usually consider the act of breaking into a system as either harmful or damaging. To them, damage would occur only if they destroyed user data or adversely affected a life-critical operation. They do not consider modification of system files for the purpose of gaining privileged status, creating new accounts, gaining access to passwords, or covering up their tracks as damage, even though someone has to restore the files. They do not consider the disruption they cause to the systems staff and users as harmful. They do not consider the downloading of system files or the use of resources without paying for them as harmful or even as theft; instead, they rationalize that the files remain on the system and the resources would otherwise go unused. They agree that browsing through personal information and electronic mail is an invasion of privacy and therefore wrong; some do it anyway. Most agree that some break-ins are unethical, e.g., breaking into hospital systems.
Some hackers say they are outraged when other hackers damage user files or use resources that would be missed, even if the results are unintentional and due to incompetence. One hacker said “I have always strived to do no damage, and to inconvenience as few people as possible. I never, ever, ever delete a file. One of the first commands I do on a new system is disable the delete file command.” Some hackers say that it is unethical to give passwords and similar security-related information to persons who might do damage.
Hackers justify their actions on the grounds that learning and exploring are good, that the free flow of information has generally been beneficial to society, that it is useful to uncover system vulnerabilities that could be exploited by someone with malicious intent, and that many of the organizations whose systems they break into engage in unethical practices. Although few people dispute these principles, most do not accept them as legitimate reasons for breaking into systems. Some hackers also argue that it is the responsibility of the system managers to prevent break-ins, and that they are the scapegoats of poor security practices.
Some hackers go further and argue that most systems should be accessible for the purpose of learning. They say that the real crime is information hoarding.
Many hackers acknowledge that break-ins are wrong – just not that wrong. They see the penalties imposed on hackers as being harsh and out of proportion to the seriousness of the crimes. One former hacker told me that his parents knew of his activities and told him that what he was doing was wrong, but that they did not consider his hacking to be bad enough to take action. They thought it was important for him to discover the reasons for not hacking himself. He did.
Many people share the view that non-malicious break-ins are wrong, but not sufficiently bad to justify harsh penalties. Indeed, it was common in the past to hire hackers, and most of those hired went on to become responsible employees. In universities, it is still common to regard student hackers as smart and basically good people with extra time on their hands, who should be guided into pursuing challenges that are legal rather than punished. Some system managers thank hackers for pointing out vulnerabilities to them.
Most hackers get started in their youth. One of the founders of the hacking group “The Legion of Doom” said that he was eleven. People at this age are at a much earlier stage of maturity than someone who is fifty. They have fewer life experiences to draw on, and they have had fewer responsibilities. Many hackers have little or no idea what goes on inside organizations, and why the people in these organizations have a low tolerance for hacking. They may not consider the cost of hacking to an organization in terms of lost work or extra phone charges, and how these costs can affect people’s jobs. Without this appreciation, it is difficult for a hacker to see the harmful effects of a break-in, even when no malicious damage was intended. Hackers may also lack an appreciation for how their actions may ultimately affect their own lives, for example, by producing assessments in others that they are immoral and untrustworthy, thereby cutting off possibilities for their future.
The process of maturing is a continual one throughout our lives. It happens as more and more of the world unfolds before us. Yet many people in their forties and fifties expect someone in their teens or twenties to know everything they know, at the same time forgetting that they themselves are still blind to most of what is out there. I find myself continually learning from young people who see things that I don’t see or that I forgot.
Most of the hackers I have met say they have “retired.” They abandon their illegal activities when they see the negative consequences of their actions to themselves or others.
5.0 Teaching Computer Ethics
Most people agree that we should teach computer ethics, and efforts are underway to include ethics in the curriculum at all levels. I shall close this paper with a few comments regarding ethics education as it might impact hacking.
Based on my conversations with hackers, I am convinced that we must do more than simply tell young people that break-ins are wrong. As mentioned earlier, many of them know that; they just don’t consider non-malicious break-ins to be a serious offense. Moreover, I am skeptical that we can convey the consequences of hacking entirely through analogies, for example, by comparing breaking into a system with breaking into a house, and downloading information and using computer and telecommunications services with stealing tangible goods. Hackers recognize that the situations are not the same. They can appreciate why someone would not want them to break into their house and browse around, while failing to appreciate why someone would seriously object to their browsing on that person’s computer.
Brian Harvey, in his position paper for the ACM Panel on Hacking (Harvey), recommends that students be given access to real computing power, and that they be taught how to use that power responsibly. He describes a program he created at a public high school in Massachusetts during the period 1979 – 1982. They installed a PDP-11/70 and let students and teachers carry out the administration of the system. Harvey assessed that putting the burden of dealing with the problems of malicious users on the students themselves was a powerful educational force. He also noted that the students who had the skill and interest to be hackers were discouraged from this activity because they also wanted to keep the trust of their colleagues in order that they could acquire “super-user” status on the system.
Harvey also makes an interesting analogy between teaching computing and teaching karate. In karate instruction, students are introduced to the real, adult community. They are given access to a powerful, deadly weapon, and at the same time are taught discipline and responsibility. Harvey speculates that the reason that students do not misuse their power is that they know they are being trusted with something important, and they want to live up to that trust. Harvey applied this principle when he set up the school system.
Giving students responsibility for computing can help them learn the consequences of different actions on themselves and others. If it is not feasible to give students hands-on experience managing computer systems, then perhaps some form of role playing or case study analysis can help students see consequences of different actions.
Finally, ethics has to do with actions, not words. If we do not practice good ethics ourselves, then we will be poor role models and justifiably accused of hypocrisy. How well do we embody responsible computer use?
I am grateful to Peter Denning, Craig Neidorf, Steve Steinberg, and “Tim” for comments on an earlier version of this paper.
Bok, S., “Secrets,” Vintage Books, New York, 1983.
Denning, D., “Concerning Hackers Who Break into Computer Systems,” Proc. of the 13th National Computer Security Conference, Oct. 1990, pp. 653 – 664.
Harvey, B., “Computer Hacking and Ethics,” in “Positive Alternatives to Computer Misuse: A Report of the Proceedings of an ACM Panel on Hacking,” J.A.N. Lee et al., ACM Headquarters, New York.
Arnold B. Urken
Abstract: Computer-Mediated voting raises concerns about the traditional notion of privacy, the timing of voting processes, and the reliability of voting media. But these modes of voting may actually increase – not decrease – the options for individual expression of opinion and individual exertion of control in collective choice processes.
Individuals can use Computer-Mediated voting to control the degree of privacy associated with the communication of their preferences. This control can allow them to share information to exert influence without compromising their identities. Moreover, instead of passively relying on the integrity of a voting process, voters can audit voting records themselves to ensure that their votes were recorded and counted correctly.
Asynchronous voting can be used as a means of maximizing the importance of individual participation and increasing the information voters have to render decisions. Although sequential voting is typically seen as a condition that fosters manipulation of collective outcomes, the complexity of Computer-Mediated participation will make it practically impossible to control outcomes, particularly if participants can change their votes.
The reliability of Computer-Mediated voting is already an important issue in United States elections. This abstract analysis of voting procedures may make analysts and citizens more aware of underlying problems of reliability involving the voting rules used to communicate information about individual preferences. This awareness reveals a potential role for decision support systems in enhancing individual expression and augmenting individual options for control in collective choice processes. These possibilities may spread into political life as Computer-Mediated communication evolves in business and organizational environments.
Since voting is a central concept in most definitions of “democracy,” theorists frequently debate the merits of different voting methods (Riker, 1983). These arguments usually follow a normative or descriptive approach to the problem of assessing the social impact of voting systems. The same pattern can be found in contemporary assessments of the impact of Computer-Mediated voting on society. Although both approaches are potentially complementary, the first one emphasizes what one ought to do and looks at computer technology as a means to a presumed end. The other approach simply describes the results of using new media to vote and then explores the implications of these consequences for achieving postulated objectives.
This paper follows the second approach by focusing on how technology will change options for voting from the viewpoint of the individual voter. This focus explicitly rules out some issues that are often encountered in assessments of the social impact of Computer-Mediated voting. In particular, issues such as “access” and “democracy” are not considered (Gould, 1989). Why? Intuitively, greater access and more democracy seem appealing, but it is not clear that increasing access is necessary to have more democracy, particularly if “democracy” is measured by the character of social outcomes instead of the form of participation. In fact, instant access to public referenda may be counterproductive if voters are poorly informed. For according to one theory of democracy, in the long run, a large number of uninformed voters can have a lower probability of making an optimal choice than a smaller number of voters (Urken, 1988). Moreover, since mass participation could produce information overload, the quality of “democratic” decision making might be unacceptable.
In any case, regardless of one’s conception of democratic objectives, analysis must eventually determine if any given set of voting rules is feasible to use and is consistent with stated objectives. Moreover, focusing on options for individual expression in Computer-Mediated environments leads us to see what might happen in groups, organizations, and society as an outgrowth of individual behavior. From this perspective, we can identify the conditions that are required to make certain options feasible. And we can avoid assuming that new voting patterns will emerge simply because they seem appealing or that new modes of voting are necessarily desirable.
Focusing on individual voting options is also significant because it highlights issues that must be taken into account in designing on-line environments. Frequently, these matters are counter-intuitive and do not become apparent until voting is used to carry out a particular decision task in the context of a specific technological situation. Of course, until new systems are actually developed and refined, our knowledge must remain conjectural and tentative. And theoretical possibilities for reform must be carefully tested before they are adopted. Nevertheless, it is important to begin thinking about voting in this way to make sure that traditional rights are protected – and even extended – when hardware and software are developed. Unless professional developers are guided by public debate about software and hardware standards, there is a danger that traditional individual rights will atrophy and that opportunities for augmenting individuality will be missed.
This assessment of the impact of Computer-Mediated voting is based on a simple model of the voting process presented at the beginning of section two. The remainder of the second section is devoted to describing four changes in voting processes that computer-mediation will make possible. These descriptions address the quality of privacy, the asynchronous nature of decisions, the reliability of voting tools, and decision support in future voting processes. Section three discusses some implications of these predictions.
2.0 Electronic Voting and Options for Individuality
Voting situations are usually characterized by a group of individuals communicating with each other by casting votes to make a collective choice about a set of alternatives. This set, the agenda, may itself be the result of a collective decision made by the entire group, or it may be created by one or more members. Obviously, the agenda may be manipulated to try to control the electoral outcome (Riker, 1983; McKelvey, 1975). And in certain situations, knowledge of other people’s preferences can enable one voter to manipulate the outcome (Gibbard, 1973). More commonly, alternatives may be added to the agenda to siphon off voting support from competing choices. Moreover, it is well known that if three or more alternatives are considered two at a time, the order in which the choices are paired can prevent the group from choosing the most popular choice based on complete information about voter preference orders (Farquharson, 1969). Furthermore, the rules used to represent voting information can affect the outcome (Urken, 1989).
The steps of a voting process can be summarized as follows:
- Voters or a subset of voters create an agenda.
- Each voter evaluates the choices by creating a preference order.
- Votes are cast according to the rules of a voting system.
- Votes are pooled or aggregated according to an explicit rule.
- A collective outcome is produced.
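The five steps above can be sketched in code. This is a minimal illustration, not a real election system; the function names, the sample agenda, and the preference orders are all hypothetical.

```python
# A minimal sketch of the five-step voting process described above.
# The rules, agenda, and preference orders are hypothetical illustrations.

def run_election(agenda, preference_orders, cast_rule, aggregate_rule):
    """Filter voter preferences through a voting system into a collective outcome."""
    ballots = [cast_rule(prefs) for prefs in preference_orders]  # step 3: cast votes
    tallies = aggregate_rule(ballots, agenda)                    # step 4: pool votes
    best = max(tallies.values())
    return sorted(c for c, t in tallies.items() if t == best)    # step 5: outcome

def opov_cast(prefs):
    """One person, one vote: a single vote for the most preferred choice."""
    return [prefs[0]]

def sum_votes(ballots, agenda):
    """Aggregate ballots by simple vote counting."""
    tallies = {choice: 0 for choice in agenda}
    for ballot in ballots:
        for choice in ballot:
            tallies[choice] += 1
    return tallies

agenda = ["A", "B", "C"]                                         # step 1: the agenda
orders = [["A", "B", "C"], ["B", "C", "A"],                      # step 2: preference
          ["C", "A", "B"], ["C", "B", "A"]]                      #         orders
print(run_election(agenda, orders, opov_cast, sum_votes))        # ['C']
```

Swapping in a different casting or aggregation rule (step 3 or 4) changes the collective outcome without touching the voters' preferences, which is the point the scenario in Tables 1 and 2 develops.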
To appreciate the critical role of the voting system in this process, consider the scenario presented in Tables 1 and 2.
Table 1. Cardinal Preferences of Four Voters for Three Alternatives
Voters’ Cardinal Utility Ratings
| Choices | Voter I | Voter II | Voter III | Voter IV |
Table 2. Vote Allocations and Collective Outcomes Under One Person, One Vote (OPOV) and Approval Voting (AV) Methods
Voter Allocation by Method
Table 1 describes the preferences of four voters (I – IV) for three agenda choices, A, B, and C. The preferences are represented on a cardinal utility or ratio scale so that, for example, it is clear that voter I prefers A five times as much as C. Also, it is clear that A and B are tied in voter III’s preference ordering. Table 2 shows the vote allocations that can be expected when the voting information presented in Table 1 is filtered through two voting systems. Each voting system is characterized by a rule for casting votes and a rule for aggregating the votes. Under a one person, one vote (OPOV) system, voters cast one vote for their most preferred choice, while under approval voting (AV), a vote can be allocated to each choice that voters approve. In this example, it is assumed that all voters express approval by casting one vote for each alternative that equals or exceeds their average utility. In other words, each alternative with a rating greater than or equal to three receives an approval vote.
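The filtering described in this scenario can be made concrete in a short program. Since Table 1's numerical ratings are not reproduced above, the utilities below are hypothetical, chosen only to show how the two casting rules can produce different winners from identical preferences.

```python
# Hypothetical cardinal utilities in the spirit of Table 1 (the actual values
# are not reproduced here). Each voter rates choices A, B, C.
utilities = {
    "I":   {"A": 5, "B": 3, "C": 1},
    "II":  {"A": 1, "B": 5, "C": 2},
    "III": {"A": 2, "B": 2, "C": 5},
    "IV":  {"A": 2, "B": 3, "C": 4},
}

def opov_ballot(ratings):
    """One person, one vote: a single vote for the highest-rated choice."""
    return {max(ratings, key=ratings.get)}

def approval_ballot(ratings):
    """Approval voting: one vote for each choice rated at or above the
    voter's own average utility, as assumed in the scenario above."""
    avg = sum(ratings.values()) / len(ratings)
    return {c for c, r in ratings.items() if r >= avg}

def tally(ballots):
    counts = {}
    for ballot in ballots:
        for choice in ballot:
            counts[choice] = counts.get(choice, 0) + 1
    return counts

opov = tally(opov_ballot(r) for r in utilities.values())
av = tally(approval_ballot(r) for r in utilities.values())
print(max(opov, key=opov.get))  # C  (plurality winner under OPOV)
print(max(av, key=av.get))      # B  (winner under approval voting)
```

With these hypothetical ratings, OPOV elects C while approval voting elects B, illustrating the non-neutrality of voting rules noted below.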
Table 2 shows quite dramatically that the voting system controls the collective outcome and that voting rules are not neutral! Depending on the rules that are chosen, C, B, or an indecisive outcome will be produced.
Before considering how Computer-Mediated voting might change this process, it should be noted that there is no agreement on the definition of a “good” voting system. Some analysts maintain that the choice of a voting system is based solely on taste, though when a specific, common collective decision task is at issue, different systems can be compared in terms of their efficiency in producing the desired result (Riker, 1983; Urken, 1988). Yet even here, the problem is complicated by the fact that more than one system may yield the same result. Nevertheless, if voting systems are simply considered to be tools for pooling information, we can avoid the trap of presuming that a single system is necessarily best (Niemi and Riker, 1976).
Now let us examine the issues of privacy, asynchronicity, reliability, and decision support in the context of this voting scenario.
2.1 Privacy

Although much attention has been given to the danger that an online voting process could be distorted or subverted, techniques such as encryption and distributed time-stamping make it practically impossible to compromise personal autonomy (Chaum, 1985; Haber and Stornetta, 1991). If these methods become economical and easy to use, and citizens learn to feel comfortable with data-input devices, elections could be carried out via computer.
But Computer-Mediated voting may also change the nature of privacy in elections. Traditionally, voting has been either public (e.g. in the Swiss cantons) or private (e.g. the Australian ballot) (Barber, 1984; Urken, 1989). In a suitably designed online environment, however, voters need not face a categorical choice between absolute openness and complete secrecy, because they will be able to control the distribution of information about their voting behavior. For example, voters might automatically send a message to a selected list of friends or associates about how they cast their votes and the reasons for their allocation. This message might include complete identification (including a video and audio greeting in a multi-media environment). In contrast, if voters wanted to notify members of a political party, they might transmit only information about their district, age, and other data that would be valuable in assessing electoral trends. Many other distinctions for selective privacy are possible.
Another possibility for selective privacy involves what happens once votes have been cast. In a paper-ballot mode or in most online voting situations, the individual is forced to play a passive role once votes have been tallied. People may be concerned about maintaining the integrity of the voting process, but the only indication of breakdown they will encounter is an incident that involves so many errors that it cannot be covered up! However, in a suitably designed environment, voters might be allowed to audit their own votes to ensure that their choice was properly recorded and that their vote was actually used to produce the official tally.
In business or other non-public online environments where Computer-Mediated voting is an option, the prospects for introducing selective privacy are different. If past experience with corporate computer conferencing systems is any guide, access to online voting may be denied completely or limited to forms of expression that management considers appropriate (Urken, 1988). However, there may be situations in which “political” rights become relevant in the operation of a non-public network. For example, if members of a corporate political action committee were polled about the size and use of their contributions, it could be argued that individuals should be afforded the same options for selective privacy discussed above.
2.2 Asynchronous Voting
Most political theorists and philosophers adhere (implicitly at least) to the ideal of all voters making their choices at the same time. Obviously, even if voters were located in the same room, votes would not be cast simultaneously, regardless of the mode used to communicate voting information. But for all practical purposes, the time-differential would be so small and so difficult for humans to discern that votes would seem to be allocated synchronously.
There are many potential arguments for this ideal, but most of them concern the potential for manipulating the outcome if votes are cast in a sequence over time (Brams and Fishburn, 1985; Gibbard, 1973). As the time-differential increases, information about emerging coalitions can be collected, so that some individuals can control the collective outcome by swaying other voters or simply casting their own votes in a strategic manner (Nurmi, 1989).
The scenarios used to illustrate the manipulability of asynchronous voting outcomes usually involve one person, one vote elections. In this context, it is normal for competitors for office to practice asynchronous communication with each other to plan strategies for persuading voters to cast their votes in a particular way. This effort relies on surveys to ascertain trends in voter attitudes and to identify the factors that lead people to cast their single vote one way or another. Combined with last-minute negative advertising, campaigns can be calibrated to engineer a victory.
But imagine what might happen if all voters had N hours or days (beginning and ending at the same time) to cast their votes asynchronously and could change their votes. Assuming that voters exercised their right to privacy in different ways, debate could go on during the voting process, and voting trends could be updated frequently to prompt parties and candidates to clarify their positions. This type of pressure could force realignments and turnarounds during an election based on the arguments that emerged during the asynchronous voting process. As a consequence, pre-election campaigning might become less focused on obfuscation to preclude negative voter reaction (Schmidt, Shelley, and Bardes, 1989). Perhaps pre-election campaigning would take less time and money, while the (asynchronous) election process itself would be extended and become the focus of informed debate. In such an environment, voters may have more of an incentive to participate because they sense that their individual voices and choices can make a difference in determining the collective outcome. And, assuming that this behavior does not undermine democratic procedures or outcomes, individuals may be better informed and exert greater control in public policymaking.
In a non-political situation, asynchronous voting is already the norm in those relatively rare applications of computer conferencing that include a facility for voting. But as described below, it may be used more frequently when it becomes appreciated as a means of augmenting human intelligence and improving organizational efficiency.
2.3 Reliability

Most people do not think about “reliability” as an issue in voting, but experience with Computer-Mediated voting in political and non-political environments will make it a significant public concern. The problem of choosing a voting system described above is, at bottom, the task of selecting a reliable mechanism for achieving an individual or group objective. Depending on the nature of the task and the preferences of the voters, the objective may be to reach a decisive choice, a plurality decision, or a majority decision. As Table 2 shows, the voting rules determine which objective is reached.
Section 2.4 elaborates some possibilities for using decision support to enable voters to obtain reliable voting results, but this aspect of the problem of reliability is likely to be less salient for citizens than the unreliable Computer-Mediated voting already encountered in public elections. These problems include difficulties with scanning marked ballots, reading punchcards, and the dependability of computer hardware and software (Saltman, 1988; ECRI, 1988; Dugger, 1988). Public awareness of these issues is growing, and state and federal governments are beginning to grapple with the development of standards for Computer-Mediated elections. The process of developing these standards will provide a basis for expanding public awareness of the reliability of voting mechanisms as individuals encounter options for the expanded scope of computer-mediation in online voting. This awareness may lead vendors and users of computer equipment and software to recognize the need for quality assurance evaluations of election tools and election management procedures.
2.4 Decision Support
It may seem ironic that choosing a voting system is an unresolved dilemma. The irony disappears, however, when the problem is defined as the task of matching voting systems to human preferences, capabilities, and objectives. The simple voting situation depicted in Tables 1 and 2 above indicates just how complex this task can be. For example, producing a decisive choice may be an important group objective, but would voters III and IV (who most prefer C) be content with the approval voting outcomes (both of which generate B), even though one person, one vote yields only a weak consensus because only the plurality outcome is clear-cut?
There are two ways of handling this type of problem. The first is to compare the outcomes of different voting systems with ideal measures of what a voting system should produce; the second, discussed later in this section, is to treat the choice of a voting system as part of a process of pooling information to resolve conflicts. The first method is illustrated by the voting scenario shown in Table 3, which displays the same type of cardinal preference information contained in Table 1.
Table 3. Cardinal Preferences of Four Voters for Three Alternatives
Voters’ Cardinal Utility Ratings
| Choices | Voter I | Voter II | Voter III | Voter IV |
Condorcet and Copeland scoring methods are normally used as ideal measures of collective outcomes. Both measures operate on information about the order of choices in voter preference orderings. Condorcet scoring computes the number of times that each alternative is preferred to every other alternative in the preference orderings of all voters. For instance, in Table 3, since A is preferred to B once and preferred to C twice, it would have a Condorcet score of 3. Copeland scoring is an extension of the Condorcet method, a net-Condorcet score, that subtracts defeats from victories. In our scenario, A is preferred to B once while B is preferred to A twice, and A is preferred to C twice while C is preferred to A once, so A’s Copeland score would equal (1 – 2) + (2 – 1) = 0.
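The two scoring rules just described can be written down directly. The preference orderings below are hypothetical (Table 3's values are not reproduced here); the ties-allowed representation matches the kind of ordering discussed above.

```python
# Condorcet and Copeland scoring over hypothetical preference orderings.
# Each ordering is a list of tiers, from most to least preferred; choices
# sharing a tier are tied in that voter's ordering.
orderings = [
    [["A"], ["B"], ["C"]],
    [["B"], ["C"], ["A"]],
    [["C"], ["A", "B"]],      # this voter ties A and B
    [["B"], ["C"], ["A"]],
]
choices = ["A", "B", "C"]

def rank(ordering, choice):
    """Tier index of a choice in one voter's ordering (0 = most preferred)."""
    for tier_index, tier in enumerate(ordering):
        if choice in tier:
            return tier_index
    raise ValueError(choice)

def pairwise_wins(orderings, x, y):
    """How many voters strictly prefer x to y (ties count for neither)."""
    return sum(1 for o in orderings if rank(o, x) < rank(o, y))

# Condorcet score: total pairwise victories across all voters and rivals.
condorcet = {c: sum(pairwise_wins(orderings, c, d)
                    for d in choices if d != c) for c in choices}
# Copeland (net-Condorcet) score: victories minus defeats.
copeland = {c: sum(pairwise_wins(orderings, c, d) - pairwise_wins(orderings, d, c)
                   for d in choices if d != c) for c in choices}
print(condorcet)  # {'A': 2, 'B': 5, 'C': 4}
print(copeland)   # {'A': -3, 'B': 3, 'C': 0}
```

Note that the Copeland scores always sum to zero, since every victory for one alternative is a defeat for another; the Condorcet scores have no such constraint, which is one reason the two measures can paint different pictures of consensus strength.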
This arithmetic leads to the following results:
Table 4. Condorcet and Copeland Scores for the Voting Scenario in Table 3
| Choice | Condorcet Score | Copeland Score |
These data indicate that these measures of consensus can be inconsistent with each other in subtle ways. While Condorcet and Copeland scoring both identify B as the strongest choice, the methods provide different pictures of the strength of the consensus. A and C are tied under both systems, but the consensus appears stronger under Copeland scoring than under the Condorcet method: Copeland scoring suggests that the collective preference intervals between the winner and A and C are 2 and 3, while under Condorcet scoring these intervals are 3 and 6, respectively. Although these scoring methods frequently yield consistent results, they do not necessarily provide a straightforward way of indicating which voting system is best.
But the problem of choosing a voting system can also be interpreted as a process of pooling information to resolve conflicts. From this viewpoint, techniques such as Condorcet and Copeland scoring can help groups clarify the nature and strength of “consensus” by deliberating about which type of collective outcome best fits their judgment. For instance, voters might consider whether the intervals between the candidates are best approximated by Condorcet or Copeland scoring (or neither!).
Although using voting as a decision support system for conflict resolution may be limited by factors such as group size, decision tasks, and caliber of the decision makers, there are several other possible forms of decision support (Urken, 1988).
In administrative decision tasks, where there is a consensus on social and decision objectives, voters might rely on support systems to determine the “best” way of representing voting information. This support might provide feedback for group deliberation (or individual deliberation if a leader or supervisor is trying to pool information), or artificial intelligence and expert systems might be devised to select a voting system for voters based on a database of information about voter performance or background. This latter possibility might become very useful when many decisions must be made in a short period of time.
Another potential tool for dealing with time constraints involves looking at asynchronous decisions filtered through different voting systems to determine when a consensus can be declared to exist. For instance, in the voting scenario in Tables 1 and 2, if voters I, II, and III have already voted, a consensus cannot be determined under one person, one vote voting because a tie exists. Under the same system, if voters I, III, and IV vote first, a plurality outcome cannot be declared because if voter II votes for A, a tie would be created. Similarly, under approval voting, neither of these hypothetical asynchronous patterns of voting would reveal a consensus before all individuals have allocated their votes. When voters I, II, and III vote first, voter IV would create a three-way tie by casting an approval vote only for C. And when the votes of I, III, and IV arrive first, voter II can either make B a plurality and majority winner or, by voting only for A and C, produce a three-way tie.
Table 5. Hypothetical Vote Allocations and Collective Outcomes Under One Person, One Vote (OPOV) and Approval Voting (AV) Methods
Voter Allocation by Method
| Method | Plurality Outcome | Majority Outcome |
| OPOV | A | A |
| AV | B | B |
However, the scenario represented in Table 5 contains two examples of how decision support can be used to identify a consensus before all votes have been cast. Under one person, one vote, if voters I, II, and IV act first, then a plurality or majority consensus can be declared even though voter III has not voted. Similarly, under approval voting, if votes arrive first from I, III, and IV, B can be identified as a plurality and majority winner.
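The early-declaration logic illustrated here amounts to a simple bound: a plurality leader is safe once the runner-up could not catch up even if every outstanding ballot went against the leader. A sketch, assuming the OPOV case in which each remaining voter can add at most one vote to any single choice (the tallies are hypothetical):

```python
def plurality_decided(tallies, remaining_voters):
    """Return the guaranteed plurality winner, or None if the outcome
    could still change.

    Assumes each remaining voter can add at most one vote to any single
    choice (the one person, one vote case).
    """
    ranked = sorted(tallies.items(), key=lambda kv: kv[1], reverse=True)
    leader, lead_votes = ranked[0]
    runner_up_votes = ranked[1][1] if len(ranked) > 1 else 0
    # Worst case: every outstanding ballot goes to the runner-up.
    if lead_votes > runner_up_votes + remaining_voters:
        return leader
    return None

# Hypothetical partial tallies with one voter still outstanding:
print(plurality_decided({"A": 0, "B": 3, "C": 0}, remaining_voters=1))  # B
print(plurality_decided({"A": 1, "B": 1, "C": 1}, remaining_voters=1))  # None
```

The same worst-case reasoning carries over to approval voting, since a remaining approval ballot likewise adds at most one vote to the runner-up.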
Similar decision support possibilities can be developed that combine the analysis of preference and voting structures with factors such as competence. Here the measure of competence must be well defined and related to carrying out a decision task.
A more complex form of support could be developed to guide voters operating under a “fungible” voting system (Coleman, 1973; Urken and Akhand, 1976; Urken, 1988). This method allows individuals to save and trade votes the way they currently allocate money. In this type of environment, guidance could be provided on market conditions, risks, and intermediaries such as entrepreneurial interest groups and coalitions that collect votes to influence collective outcomes. Although this type of system was originally proposed as a solution to the problem of balancing intensities of preference in public policymaking, it seems more likely that online groups or corporations would be the first to experiment with Computer-Mediated fungible voting to facilitate internal bargaining and negotiation (Urken and Akhand, 1976; Urken, 1988).
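To make the idea concrete, a fungible voting scheme might be sketched as a simple ledger in which votes are saved, traded, and spent like currency. Everything here (the class, its rules, and the sample transactions) is hypothetical; the proposals cited above are richer than this sketch.

```python
# A toy ledger for "fungible" voting: votes behave like a currency that
# can be saved across decisions and transferred between voters.
class VoteBank:
    def __init__(self, voters, allowance=1):
        # Each voter starts with a per-decision allowance of votes.
        self.balances = {v: allowance for v in voters}

    def credit(self, voter, votes=1):
        """Grant votes, e.g. an unused allowance saved from a prior round."""
        self.balances[voter] += votes

    def transfer(self, seller, buyer, votes):
        """Trade votes between voters, as one might trade money."""
        if self.balances[seller] < votes:
            raise ValueError("insufficient votes to trade")
        self.balances[seller] -= votes
        self.balances[buyer] += votes

    def spend(self, voter, votes):
        """Cast saved votes on a single decision."""
        if self.balances[voter] < votes:
            raise ValueError("insufficient votes to cast")
        self.balances[voter] -= votes
        return votes

bank = VoteBank(["I", "II", "III", "IV"])
bank.credit("II")             # II saves an unused vote from a prior round
bank.transfer("II", "I", 2)   # I acquires II's votes through a trade
print(bank.spend("I", 3))     # I casts three votes on one decision -> 3
```

The decision-support role suggested in the text would sit on top of such a ledger: advising voters on when to save, trade, or spend, given prevailing "market" conditions.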
Computer-Mediated voting is not a panacea for resolving conflicts in small or large groups of people, but people may find new ways of expressing their individuality via voting using this technology. As individuals experience Computer-Mediated voting and recognize possibilities for looking at the social function of voting in different ways, there will be gradual, piecemeal changes in social practices. Some of these changes may be quite dramatic, spurred by public attention given to certain problems and accomplishments. As news about the potentially positive effects of Computer-Mediated voting spreads, theoretical possibilities will be refined in practice.
Stevens Institute of Technology
Barber, B., (1984) Strong Democracy, Berkeley: University of California Press.
Brams, S.J. and P.C. Fishburn (1985), “Approval Voting,” American Political Science Review.
Chaum, D., (1985) “Security Without Identification: Transaction Systems to Make Big Brother Obsolete,” Communications of the ACM.
Coleman, J.C., (1973) “Political Money,” American Political Science Review, 67: 131 – 45.
Dugger, R., (1988) “Annals of Democracy: Counting Votes,” The New Yorker.
ECRI, (1988) “An Election Administrator’s Guide to Computerized Voting Systems,” Plymouth Meeting, PA.
Farquharson, R., (1969) Theory of Voting. New Haven: Yale University Press.
Gibbard, A., (1973) “Manipulation of Voting Schemes: A General Result,” Econometrica.
Gould, C., (1989) “Network Ethics: Access, Consent, and the Informed Community,” in C. Gould, ed., (1989), The Information Web, Boulder: Westview Press.
Grofman, B., (1976) “Judgmental Competence of Individuals and Groups in Binary Choices,” Journal of Mathematical Sociology.
Grofman, B., G. Owen, and S. Feld, (1986) “Thirteen Theorems in Search of Truth,” Theory and Decision.
Grofman, B. and G. Owen, eds., (1986) Information Pooling and Group Decision Making: Proceedings of the Second University of California, Irvine, Conference on Political Economy, Greenwich: JAI Press, Inc.
Haber, S. and W.S. Stornetta, (1991) “How To Time-Stamp a Digital Document,” Journal of Cryptology.
Kelly, J.S., (1989) Social Choice Theory, New York: Springer-Verlag.
McKelvey, R.D., (1975) “Policy Related Voting and Electoral Equilibria,” Econometrica.
Niemi, R. and W.H. Riker, (1976) “The Choice of a Voting System,” Scientific American.
Nurmi, H., (1989) Comparison of Voting Systems. Dordrecht: D. Reidel.
Riker, W.H., (1983) Liberalism Against Populism, San Francisco: Freeman.
Saltman, R.G., (1988) Accuracy, Integrity, and Security in Computerized Vote-Tallying, Washington: National Bureau of Standards.
Schmidt, S.W., M.C. Shelley, and B.A. Bardes, (1989) American Government and Politics Today. St. Paul: West Publishing Co.
Urken, A.B. and S. Akhand, (1976) “Vote Trading in a Fungible Voting System,” Operations Research.
Urken, A.B., (1988) “Social Choice Theory and Distributed Decision Making.” in R. Allen (ed.) Proceedings of the IEEE/ACM Conference on Office Information Systems.
Urken, A.B., (1989) “Voting in a Computer Networking Environment,” in C. Gould (ed.) The Information Web, Boulder: Westview Press.
Winograd, T. and F. Flores, (1986) Understanding Computers and Cognition, Ablex Publishing Company.