A Conceptual Framework for Computer Ethics

AUTHOR
Norberto Patrignani

Introduction

This paper addresses the challenging issue of designing a conceptual framework for Computer Ethics. Since its very first definition [1], Computer Ethics has been a fast-changing subject with several directions of research: some consider computing just another area to which philosophy must contribute its classical tools [2], others consider it a necessary new field of study [3], and still others view it from a historical perspective, backdating its foundation to Norbert Wiener [4] [5]. This paper aims to describe Computer Ethics by means of a conceptual framework based on an “applied ethics” approach. While this “bottom-up” approach does not have the ambition of redefining the nature of Computer Ethics, it can be useful for describing the many areas where computers impact society and as a map for case analysis when teaching Computer Ethics.

This framework can be simply described by its two dimensions: the vertical dimension (the “layers”) and the horizontal dimension (the “domains”).

Vertical “Layers”

The vertical “layers” represent the several areas potentially impacted by computers: from the physical world (Planet, Biosphere, People) to the virtual world (Infosphere, Cyberspace, Ideas). A brief description of these “layers” follows:

  • “Planet”, the Earth, our planet (the only one we have)
  • “Biosphere”, the whole biological world living on Earth
  • “People”, the human beings, ourselves
  • “Infosphere”, the collection of hardware, software, computers, routers and networks that now constitutes the precious shell around the planet connecting everyone, everything, everywhere – the Internet
  • “Cyberspace”, the virtual space, “on top” of the infosphere, where we are starting to spend a significant part of our life [6]
  • “Ideas” (Noosphere), the highest layer of abstraction, representing the collection of ideas flowing through human minds; probably the most precious resource we have, our thought as a whole: the inheritance of knowledge and wisdom from the past and the potential for innovation towards the future [7].

Horizontal “Domains”

The horizontal “domains” are illustrated by ellipses whose areas are proportional to their importance or potential impact; their vertical position indicates the layers covered by each domain. These domains represent the collection of critical issues created or aggravated by computers. Each “domain” is briefly described below:

  • “e-Democracy” – what are the new scenarios opened by the dawn of cyberspace? What are the new metaphors we need to develop for the right use of this space? How (and where) to use these tools in the public life, for discussing, taking decisions, voting? [8] [9]
  • “Accessibility, Universal Access & Digital Divide” – what are the new barriers (economic, cultural, sensorial) we are building? How to guarantee access to virtual resources using equitable and inclusive criteria? Are we imposing new restrictions on people with disabilities and on the elderly? [10]
  • “Workplace” – what are the new issues and professional hazards introduced by computers in the workplace? How are the employees, the end-users involved in the design of new systems? [11]
  • “Content & Education” – how to select, collect, organize and deliver content on the net? Who will select the content to be inherited by future generations? What impact will computers have on our learning capabilities? [12]
  • “CopyRights” – how should the rewarding mechanisms for artists and innovators evolve in the new knowledge society scenario? [13]
  • “Hackers” – how should systems be protected against intrusions? How can the “hacker ethic” be exploited for the public benefit, improving systems’ security and reliability? [14]
  • “Privacy” – how to protect sensitive data, and how to regulate cyberspace to enforce those protections? Where is the limit between surveillance/safety and an Orwellian society? What are the assumptions behind the opt-in / opt-out alternatives? [15]
  • “Computer Crimes” – what is the definition of computer crime? How to protect critical systems from computer crimes? [16] (this domain is kept separate from “Hackers”, since not all hackers are criminals)
  • “Computer (Un)Reliability” – how to improve computer reliability and protect ourselves from failures of life-critical systems? Who should be informed, and when, about critical holes in the security and reliability of applications? [17]
  • “Artificial Intelligence” – what are the deep questions posed to humankind by the development of robots (at human scale and at nano scale)? Is it correct to delegate life-critical decisions to machines? What issues will arise from the coming cyborgs, man-machine hybrid systems? [18]
  • “War” – what are the consequences of “intelligent” weapons development, delegating final “killing” decisions to machines? [19]
  • “Ecology & Recycling” – how to avoid or minimize the environmental hazards and impact of hardware production cycles? [20]
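For teaching purposes, the two dimensions of the framework lend themselves to a simple data structure: each domain maps to the set of layers it covers, so that a case under analysis can be located on the map. The sketch below is illustrative only; the layer assignments shown are assumptions for a few sample domains, not the authoritative positions given in the paper.

```python
# A minimal sketch of the two-dimensional framework: vertical "layers"
# (physical to virtual) and horizontal "domains" mapped to the layers
# they cover. Layer assignments here are illustrative assumptions.

LAYERS = ["Planet", "Biosphere", "People", "Infosphere", "Cyberspace", "Ideas"]

DOMAINS = {
    "Ecology & Recycling": {"Planet", "Biosphere"},
    "Privacy": {"People", "Infosphere", "Cyberspace"},
    "e-Democracy": {"People", "Cyberspace", "Ideas"},
}

def domains_touching(layer):
    """Return the domains whose critical issues involve a given layer."""
    if layer not in LAYERS:
        raise ValueError(f"unknown layer: {layer}")
    return [d for d, covered in DOMAINS.items() if layer in covered]

# Locating a case study on the map: which domains involve "People"?
print(domains_touching("People"))
```

A dictionary of sets keeps the map easy to extend: adding a new domain, or moving one up or down the vertical dimension, is a one-line change.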

These “critical issues” must be addressed at many levels: individual, professional, and societal. A special role is played by computer professionals in defining their code of ethics and in informing the public and decision makers about the potential and the limits of information technology [21].

In Italy several universities are starting to introduce Computer Ethics courses, and we hope this will contribute to growing a new generation of computer professionals: technology experts aware of the social and ethical implications of information technology.

REFERENCES

[1] Maner W., “Starter Kit on Teaching Computer Ethics”, Helvetia Press, 1980

[2] Johnson D., “Computer Ethics”, Prentice-Hall, 1994

[3] Maner W., “Unique Ethical Problems in Information Technology”, ETHICOMP95

[4] Ward Bynum T., “The Foundation of Computer Ethics”, AICEC99

[5] Wiener N., “The Human Use of Human Beings: Cybernetics and Society”, Houghton Mifflin, 1950

[6] Gibson W., “Neuromancer”, Ace Books, 1984

[7] De Chardin T., “The Phenomenon of Man”, Harper Perennial, 1976

[8] Castells M., “The Rise of the Network Society”, Blackwell, 1996

[9] Levy P., “Collective Intelligence: Mankind’s Emerging World in Cyberspace”, Perseus, 1999

[10] WSIS, World Summit on Information Society, “Geneva Declaration of Principles”, 2003

[11] CPSR, Computer Professionals for Social Responsibility, Proc. Participatory Design Conf., MIT, 1992

[12] Coyle K., “Access: Not Just Wires”, CPSR Annual Meeting, October 1994

[13] Lessig L., “The Future of Ideas”, Random House, 2003

[14] Levy S., “Hackers: Heroes of the Computer Revolution”, Anchor Press/Doubleday, 1984

[15] Lyon D., “The Electronic Eye: The Rise of Surveillance Society”, University of Minnesota Press, 1994

[16] Forester T., Morrison P., “Computer Ethics: Cautionary Tales and Ethical Dilemmas in Computing”, MIT Press, 1993

[17] Neumann P.G., “Computer-Related Risks”, Addison-Wesley, 1994

[18] Joy B., “Why the future doesn’t need us”, Wired 8.04, 2000

[19] Bellin D., Chapman G., “Computers in Battle: Will They Work?”, Harcourt, 1987

[20] EU Directive 2003/108/EC on Waste Electrical and Electronic Equipment (WEEE)

[21] Gotterbarn D., “Informatics and Professional Responsibility”, in “Computer Ethics and Professional Responsibility” (ed. by Ward Bynum T. and Rogerson S.), Blackwell Publishing, 2004.
