Morality, Markets, and the Internet

Richard Spinello


Our limited experience of the New Economy has given us a glimpse into the various market failures we can expect as electronic commerce becomes more widespread. The most typical market failure is an externality, which involves additional costs borne by society that are not reflected in the price of the good whose production generated them. The erosion of privacy and the transmission of spam fall into this category.

In the virtual world as well as the physical one, market failures and imperfections are inevitable. But how should we address these failures? We can rely on the “invisible hand” of the market and wait for its self-correcting mechanism to take effect. The market will bring about the most efficient use of economic resources in the long run and this will tend to maximize the social good. However, while the market can effect some progress in eliminating imperfections, it is not the best forum for encouraging attentiveness to non-economic values such as privacy or free speech rights.

Or we can turn to the “hand of government,” relying on the force of law to secure values such as privacy or fair competition. We ask policy makers to intervene and correct the market failure or provide some means to thwart distorting behavior. While there are benefits to deferring to this “visible hand,” there are definite liabilities with this approach. There is always the risk that vested economic interests will capture policy makers. And the threat of regulatory arbitrage is greatly magnified in cyberspace. Also, we have repeatedly witnessed how difficult it is for laws to keep pace with the rapid and unpredictable evolution of technology.

There is a third alternative that is losing favor as consumers grow increasingly impatient with the opportunistic behavior of many e-commerce businesses. This is self-regulation. According to this model, the primary burden of regulating the ‘Net falls on its stakeholders, both organizations and individuals, along with those who develop the ‘Net’s code. What makes this self-regulation or “self-organization” of the Net feasible is technology. As Lessig (1999) reminds us, the most potent regulatory force in cyberspace is not the market or the law but code, i.e., the protocols and software programs that comprise the architecture of the Internet. According to Lessig, “code is law.” For example, users and organizations have at their disposal many software tools to control and regulate their environment: filters for unwanted speech, trusted systems for intellectual property protection, and technologies that make it easier to ascertain the privacy policies of web sites.

The first major argument of this paper will support the superiority of a decentralized approach to Internet regulation. We do not suggest that government regulations are always inappropriate, but that they should only be relied upon when absolutely necessary. We will contend that a decentralized rule-making scheme is preferable for several reasons. We use the Coase theorem to argue that self-regulation is usually the least costly solution, which keeps the overall harms to a minimum. We also demonstrate that this approach is consistent with the Net’s technology, which tends to defy centralized controls. Finally, we make the case that the individual and institutional autonomy which is preserved by this scheme represents an important countervailing power to government authority.

Lessig and others are quick to point out that decentralized rulemaking through code is fraught with risks and obstacles. Sometimes code developed by programmers, such as filtering devices that block pornographic web sites, masks a certain political agenda. Or code can be utilized to stifle legitimate forms of free expression and narrow one’s perspective. There is also the danger that commonly accepted, traditional values will be ignored in a code-based solution.

Obviously, unguided self-regulation and self-organization of the Net are inadequate, since they can solve some of these market failures but lead to other distortions, especially when stakeholders act only in accordance with their own rational self-interest. What we need is ethical self-regulation, whereby rational self-interest, even when it is being used to deal with Internet externalities, is linked with respect for the needs and concerns of others and, above all, with respect for the common good of the Internet community.

How then is this ethical self-regulation to be achieved? Can it be implemented in a way that is not disruptive or counterproductive? We must first appreciate that there are three key issues involved in a decentralized scheme of regulation. First, users and organizations in cyberspace must exercise proper self-restraint. They must abide by commonly accepted moral principles and respect the needs and interests of others even when the law is ambiguous.

On a second level, Internet stakeholders must prudently regulate or order their environments. They must seek to avoid or at least minimize the collateral damage that can sometimes accompany code-based solutions designed to handle externalities (such as filtering pornography or blocking junk e-mail). This will often involve choosing the right software and implementing it responsibly.
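The collateral damage at issue here is easy to illustrate in code. The following is a minimal, hypothetical sketch (the function names, keyword list, and addresses are our own illustrative inventions, not any actual filtering product): a crude keyword filter for junk e-mail blocks legitimate messages as a side effect, while a slightly more careful rule, paired with a list of known senders, narrows the damage.

```python
# Minimal sketch: two hypothetical junk e-mail filters, illustrating how the
# choice of code-based solution determines the extent of collateral damage.

SPAM_KEYWORDS = {"free", "winner", "lottery"}

def naive_filter(subject: str) -> bool:
    """Block any message whose subject contains a spam keyword as a substring."""
    lowered = subject.lower()
    return any(keyword in lowered for keyword in SPAM_KEYWORDS)

def careful_filter(subject: str, sender: str, known_senders: set) -> bool:
    """Block only unknown senders, and only on whole-word keyword matches."""
    if sender in known_senders:
        return False
    words = set(subject.lower().split())
    return bool(words & SPAM_KEYWORDS)

# A legitimate message from a known correspondent.
subject = "Free software conference"
sender = "colleague@university.edu"

print(naive_filter(subject))    # True: blocked, collateral damage
print(careful_filter(subject, sender, {"colleague@university.edu"}))  # False: delivered
```

The point of the sketch is not the particular heuristics but the design choice they embody: both filters address the same externality, yet one needlessly suppresses legitimate speech while the other confines its effects more closely to the harm being corrected.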

Finally, software developers, ISPs, and others who facilitate Internet access have a special obligation. They write the code that regulates the Net and they set the rules of access. They are shaping the Internet’s future architecture and are obligated to do so in a way that is attentive to core moral values. If self-regulation is to work effectively, they must demonstrate the moral competence to develop code as carefully as lawmakers develop laws.

While the issue of self-restraint is important, it will not be our focus. Instead we will dwell on the second and third issues, and specifically on how the use of code as law both complicates and enhances the opportunity for effective self-regulation. We propose some general “meta-principles” that suggest parameters for how users should behave when ordering their environment and how developers should behave when writing the Net’s code. The final paper will expand upon each of these principles in some detail:

  • Code should be as open and transparent as possible so that the user’s autonomy and capacity for informed consent is fully respected.
  • Where there is more than one code-based solution to a given social problem, users should choose whichever solution minimizes collateral damage.
  • Code should be written so that it preserves traditional social and moral values such as “fair use” of copyrighted material.
  • Whenever feasible, regulations should be imposed downstream rather than upstream, i.e., at the level of the individual or in some cases the organization, but preferably not at the level of the ISP or the state.
  • Opportunity should be provided for independent review of and dialogue about pieces of code that appear to have some regulatory force.
  • There must be reasonable proportionality between the harm that is being corrected and the code-based solution that corrects this harm.

Thus, our purpose in this paper is twofold: it will defend a decentralized approach to regulating the Internet, and it will provide some general guidelines for how this model of self-regulation can be realistically accomplished within the bounds of ethical probity.


Lessig, L. (1999). Code and other laws of cyberspace. New York: Basic Books.
