The Person as Risk, The Person at Risk

AUTHOR
Jeroen van den Hoven and Noëmi Manders-Huits

ABSTRACT

The use of computer supported modeling techniques, computerized databases and statistical methods in fields such as law enforcement, forensic science, policing, taxation, preventive medicine, insurance, and marketing greatly promotes the construal of persons “as risks”.

In the “persons as risk” discourse, persons are characterized in terms of probabilities: the probability that they will commit crimes (security), like commercial products (marketing), be prone to accidents (safety), exhibit certain types of unhealthy behavior (preventive medicine), or constitute moral hazards for insurance companies (insurance).

In the first part of our paper we present the historical background to this view by discussing two results of the work of Ian Hacking. First, a thesis of historical ontology: Hacking has argued that “people can and have been made up”, sorted and stereotyped (e.g. the homosexual, the criminal, the repeat offender, the bad credit risk) on the basis of historically rooted classifications and concepts. Second, Hacking and others have extensively documented the emergence and prevalence of thinking in terms of probabilities about a broad range of phenomena in the last two centuries.

These two developments in conjunction, we argue, have given rise to a view of human beings which tends to conceive of them in terms of classifications, statistical categories, profiles and probabilistic models. In the field of identity management and profiling, identities and persons are construed as dynamic collections of personal data. Individuals are routinely and increasingly treated on the basis of probabilistic representations. The treatment they receive, the things they are entitled to, their rights, accountabilities, and the opportunities they are given as well as the limitations that are imposed upon them are shaped by the way their identities are construed and used.

In the second part of the paper we argue that the statistical and probabilistic construal of persons is so fundamentally incomplete that it gives rise to questions about the moral justification and the limits of its use and application. First, such construals do not accommodate conceptions of the moral person as acting on moral reasons; second, they fail to accommodate a person’s self-presentations.

We confront the IT-induced shift to a view of persons “as risks” with an idea, following Bernard Williams, that we have termed ‘moral identification’. Persons need to be able to ‘morally identify’ to some extent with the ways in which they are represented by others, and they legitimately desire to be identified by others as such, i.e. as identifying themselves with identity ideals in particular ways. Persons have aspirations, higher-order evaluations and attitudes, and they see the things they do in a certain light. Representations of these aspects of persons are missing when they are represented as statistical elements, liabilities and risks in databases and computer models.

In order to examine the moral problems raised by conceiving of persons as risks, the technologies that enable and support the creation and management of representations need to be understood in two different ways. On the one hand, they are used for the descriptive mapping of (user) identities and matching characteristics, on the basis of which profiles are created. These characterizations are extremely useful for scientific purposes and support epidemiology, demographic and social science research. On the other hand, these characterizations are also used for practical purposes, as a grid of categorical profiles into which data subjects are classified and by which they are constrained in particular ways. Identities or representations of persons are fossilized, carved in stone as it were, which may lead to erroneous and morally objectionable classifications in marketing, welfare, criminal justice, preventive medicine, insurance and finance, and may impose unjustified constraints on the actions and agency of persons.

We provide several suggestions for value sensitive design of profiling technology to accommodate these problems.