Lessons Learnt from the Past: Reflection on Working for Families Projects in Scotland on Ethnic Minority

Nidhi Sharma and Shalini Kesar


Keeping in mind lessons learnt from the research presented at the last two ETHICOMP conferences, this paper reflects on the most recent project (Working for Families Project: 2009-2012), funded by European Social Funds and the Dundee Partnership. This is phase III of an on-going research programme. Recognizing the success of the previous Working for Families Project (WfFP), reflected in phases I and II, a new WfFP was initiated that focuses on various issues relating to the employability gap currently existing in Scotland. The results of the WfFP reflected in this paper (phase III) focus on ethnic minorities in Dundee. The main goal of this project is to train people from ethnic minority groups to enhance their basic skills, including Information and Communication Technologies (ICT), literacy and numeracy.

In doing so, we conducted interviews with two groups. Group 1 comprised the same set of women, now working or enrolled in higher education, who received training and services from the earlier WfFP (phases I and II). Group 2 comprised a new set of people from ethnic minorities who are currently enrolled in this project. This design served two purposes. Firstly, it allowed us to better identify, and thus compare and contrast, the two groups' barriers to employment. Secondly, feedback from Group 1 will help us further modify the existing training and service delivery to better suit the needs of Group 2 as they seek employment. This is important because funding for the following years depends on the success of this project; in other words, funding depends on the number of people from ethnic minorities who actually obtain employment or go into higher education. This is monitored by the government and similar funding authorities.

The Working for Families Project was initiated in the early 2000s by the Scottish Executive, with the goal of supporting vulnerable or disadvantaged parents towards, into or within employment by breaking down childcare and other barriers. It underpins the Scottish Government's commitment to tackling child poverty. WfFP also aims to tackle additional employability barriers such as low skills, lack of confidence, transport, debt, substance misuse and other care responsibilities. The target groups for the initiative were: lone parents; ethnic minorities; and parents with other stresses in the household which make it difficult to sustain employment (for example, disability, mental health, family break-up, and drug and alcohol problems). The main services offered were:

  • Employability Support Team – deals directly with many clients and signposts them to an appropriate Link Worker or to specialist help.
  • Link Workers – central to WfFP, with roles as recruiters and providers of guidance and advice, signposting clients to relevant employment, education and training opportunities.
  • Money Advice Support – provides a range of services, including benefit checks and better-off calculations.
  • Access to Childcare – WfFP staff can assist clients in finding suitable childcare to enable access to work, education or training.
  • Training & Education – WfFP provides a range of opportunities to improve skills and employability.
  • ICT Training.
  • Dundee College – provides a range of career-focused taster courses.
  • Financial Assistance – many WfFP clients are eligible for assistance from one of the WfFP client funds.
  • Childcare Subsidy Fund – provides assistance to clients who are starting work and need help with childcare costs.
  • Barrier Free Fund – helps clients with non-childcare-related expenses.

Although the main objectives of phase III (the current project) are the same as those of the previous WfFP, the main difference is that the tools and techniques are being modified in light of the findings of phases I and II of this research. Kolb's cycle is used as a way to reflect on, and hence outline, the lessons learnt from the different phases of this project. The table below summarizes the findings of this paper so far.

Engineering Ethics for an Integrated Online Teaching: What is Missing?

Montse Serra, Josep M. Basart and Eugènia Santamaria


Engineering graduates are, and will be, facing increasingly complex ethical and social issues in their work. Certainly, laws, professional regulations and codes of ethics can help when addressing this challenge, but the utility of these policies and resources depends on whether these future professionals understand where and how to take them into account. Accordingly, a well-founded education in professional ethics is required for future engineers. Nevertheless, in spite of the expectations and demands of an ever-changing society, the incorporation of courses on ethics into engineering curricula is often a concession rather than a common academic requirement.

Thus, any concerned educational approach must make the case for ethics, showing how engineers' work can be developed in an ethically and socially responsible way, because ethical issues are clearly inherent to the profession (Huff and Frey, 2005).

Designing an effective introduction of ethics into the academic curriculum is more difficult than teachers may imagine or admit, particularly where undergraduate students are concerned. From our point of view, several constraints and resistances are present that deserve special attention:

  • As our society becomes more and more dependent on technology, the engineer's role is accentuated and his or her responsibilities amplified (Pritchard, 1998). Engineering instructors therefore find it difficult to weave applied ethics into a curriculum already full of technical subjects, all of which are considered intrinsic to the course.
  • Spreading ethics across the curriculum asks for the contribution of both experts versed in different relevant areas of the technical or engineering sciences and experts from the humanities and social fields, in order to achieve the expected goals. This collaboration is not always welcomed by either of them and is never straightforward.
  • Some doubts and objections persist among the teaching staff about whether ethics can be taught at all, much less to adults who are supposed to already know the difference between right and wrong.
  • Under the influence of both their social environment and the one they find in technical schools themselves, engineering students often think that ethical content is not really relevant to their own field of study (Fleischmann, 2006).
  • Finally, the frequent clash between students' scepticism towards learning ethics and teachers' conviction of its advisability calls for a constant weighing up and adaptation of which contents to teach, which methodology to apply, which educational and technological resources to use, and which teaching staff to involve.

Teaching a discipline such as engineering ethics within an online environment brings additional constraints that are endemic to this context, and these special characteristics must be considered when developing any learning process. Teaching within an online environment (Rodríguez, Serra, Cabot and Guitart, 2006) is a social process which requires a specific setting, involving technological platforms and methodological tools, to facilitate online interactions such as discussing ideas, practising behaviours, and developing attitudes and skills, ultimately promoting experiential and active learning (Sieber, 2005). In the case of engineering ethics these goals challenge educators to focus on real-world problems and practical solutions, requirements that are not easy to meet within an online learning context (Demiray and Sharma, 2009).

Within this framework, what is needed is an examination of the teaching methodology and its performance in practice when ethical subjects are considered. Our proposal here is to show how learning tools such as dialogue (Serra and Basart, 2010), moral reasoning and judgemental language work, and how they are reshaped in this new environment. This involves analysing the essential requirements of these communication tools (i.e., genuine listening, attention in a virtual context, non-conditioned thinking, and an open mind). Additionally, solving moral conflicts requires appropriate strategies, so a heuristic analysis will be considered, taking into account the above-mentioned learning tools. Finally, as an integrating element, we show how interaction develops along the learning process, within a social network, by means of these tools.

It is important to emphasize that, thanks to these communication tools, the network communities created within an online context learn as a group, constructing knowledge collectively and contributing the tacit knowledge (Bohm, 1996) of the community in which their members participate.


Bohm, D. On Dialogue. Ed. Lee Nichol. Routledge, London, 1996.

Demiray, U. and Sharma R.C. “Ethical Practices and Implications in Distance Learning”. Information Science Reference. Hershey, New York, 2009.

Fleischmann, S.T. Teaching Ethics: More Than an Honor Code. Science and Engineering Ethics, 12, 381–389, 2006.

Huff, C. and Frey, W. Moral Pedagogy and Practical Ethics. Science and Engineering Ethics, 11, 389–408, 2005.

Pritchard, M. S. Professional responsibility: Focusing on the Exemplary. Science and Engineering Ethics, 4, 215–233, 1998.

Rodríguez, M.E., Serra M., Cabot J. and Guitart, I. “Evolution of the Teachers’ Roles and Figures in E-learning Environments”. The 6th IEEE International Conference on Advanced Learning Technologies (ICALT 2006). Proceedings of the 6th IEEE International Conference on Advanced Learning Technologies, IEEE Computer Society Press, 512–514. Kerkrade, The Netherlands, 2006.

Serra M. and Basart J.M. “A dialogical approach when learning engineering ethics in a virtual education frame”. Proceedings of Ethicomp 2010 – The “backwards, forwards and sideways” changes of ICT, 483–490. Universitat Rovira i Virgili, Tarragona, Spain, 2010.

Sieber, J.E. Misconceptions and Realities about Teaching Online. Science and Engineering Ethics, 11, 329–340, 2005.


Dr. Toni Samek and Dr. Ali Shiri


Contributions to information ethics occur between disciplines, across different disciplines (e.g., computer science, gender studies, law, business), and even beyond disciplines. And because information work is often political it is important for educators to examine, explore, and teach a range of social responsibility and ethical implications as reflected in an increasingly intense information society. Looking through the specific lens of the North American library and information studies landscape, we can see that teaching and scholarship are heavily weighted to techno-managerial curricular design and research. However, broadly in society, social responsibility, social justice, and global information justice movements blend people and concerns for the human condition into theories and practices of social computing applications and environments. Our contribution is a knowledge mapping of social responsibility in an information intensive society and the final product that we hope to share with ETHICOMP is a taxonomy.

Dr. Samek’s ongoing immersion and scholarship in human rights forms the basis for our taxonomic content. In her scholarship, she studies evidence of voices and other human traces that reflect contemporary local, national and transnational calls to action on conflicts generated by failures to acknowledge human rights, by struggles for recognition and representation, by social exclusion and by library and related cultural institutional roles in these conflicts. Through content analysis of human rights literature (including workbooks) she collates terms (e.g. protest, human security, survival) that she then tests out for matches in global library and information worker advocacy and activism. For example, for human rights terminology such as “revitalization” and “human security” she points to such activities as the Joint UNESCO, CoE and IFLA/FAIFE Kosova Library Mission. Dr. Shiri’s intellectual contribution draws on his sophisticated research in the development and evaluation of knowledge organization systems such as thesauri and taxonomies. Using facet and subject analyses, his work shapes the foundation for the design of the underlying framework and knowledge structure of our taxonomy.

Some knowledge organization systems have been developed for the analysis and documentation of human rights literature, such as Human Rights Thesaurus and Human Rights Documentation Classification. Our taxonomy is different from these kinds of tools in that it addresses and encompasses the information-focused themes and terms evidenced in global social responsibility initiatives and emergent social computing applications. Herein, our knowledge mapping aims to provide a deeper, more comprehensive, and intercultural snapshot of social media and social computing technologies within these broader contexts.

We propose ten high-level categories (e.g., communities, social computing applications, activities and operations) that reflect prevalent contemporary aspects of social responsibility in the information society. We also assign each of these ten categories a specific set of sub-facets and terms that reflect concrete actions, both physical and digital, and, perhaps most interestingly, actions in the emergent realm of digital human connections and exchanges. And we situate this work in the trans-disciplinary communities of scholarship with a common interest in information ethics, social responsibility and computer ethics.

We hope that by introducing our taxonomy to the ETHICOMP community we can receive direct and diverse feedback to help us move forward in the development of a more refined and inclusive iteration that can be used for the organization, sharing and searching of physical and digital information by multiple stakeholders in society. Below is a version of our first-stage taxonomy (though not in its complete form, for the purposes of the word count).
[Table 1: first-stage taxonomy]
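To make the shape of such a faceted taxonomy concrete, the sketch below models it as a nested mapping from high-level categories to sub-facets to terms. Only the category names "communities", "social computing applications" and "activities and operations", and the terms "protest", "human security" and "survival", come from the abstract itself; the sub-facet names and the placement of terms under them are hypothetical illustrations, not the authors' actual taxonomy content.

```python
# Minimal sketch of a faceted taxonomy structure:
# category -> sub-facet -> list of terms.
# Category names and the three terms are taken from the abstract;
# the sub-facet names and term placement are illustrative assumptions.
taxonomy = {
    "communities": {
        # hypothetical sub-facet grouping terms collated from the
        # human rights literature mentioned in the abstract
        "advocacy and activism": ["protest", "human security", "survival"],
    },
    "social computing applications": {
        "platforms": [],  # sub-facet name is a placeholder
    },
    "activities and operations": {},
}

def terms_in(category: str) -> list:
    """Flatten all terms filed under one high-level category."""
    return [term
            for facet_terms in taxonomy.get(category, {}).values()
            for term in facet_terms]

print(terms_in("communities"))
```

A structure like this supports the organization, sharing and searching uses the authors describe: a search tool can match a query against terms and walk back up through the sub-facet and category levels.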

What do we Take? What do we Keep? What do we Tell? Ethical Concerns in the Design of Inclusive Socially Connected Technology for Children

Janet C Read and Maija Fredrikson


Designing great computer systems requires attention to many things. In this paper, the focus is on the design of a mobile technology for children aimed at providing an inclusive approach to music making: one that would enable children who might otherwise be excluded to feel more attached to those around them and to experience feelings of self-worth. The two stages of design considered in this paper are the involvement of children during early design work, and the design of security and alert systems in the interactive product. Both of these stages raised ethical dilemmas for which the project team had to find solutions.

Including children as participants in the design of their own technologies takes its inspiration from the early work on participatory design (Schuler and Namioka 1993) as well as from more recent work on children as design informants (Read, Gregory et al. 2002; Scaife, Rogers et al. 1997; Druin 1999). In a typical session of this kind, children are given some information about the problem being designed for and are then given activities that collectively gather ideas for the features, the look, and the fun aspects of an eventual product (Theng, Nasir et al. 2000; Mazzone, Read et al. 2008). Several commentators have considered the value of these design sessions by examining the value to the children, to the development team and to the adult participants (Mazzone, Read et al. 2008). The ethical problems associated with this type of activity mainly centre on the extent to which the children understand their participation. It is highly possible that children may not fully understand what their ideas are being used for, what the overall project is about, or the extent to which their work will be used at all.

As a result of carrying out these sorts of activities within the UMSIC project, where the participatory activities were carried out both in the UK and in Finland, we have developed a protocol for 'Honest Research' with children. This protocol demands that children are kept fully in the research loop: they are given clear information at the beginning of a project that outlines why they are participating; they are given specific, appropriate feedback from each individual design session that outlines what was taken from it; and they are able to see and critique all outputs from the design sessions, whether these be academic papers or interactive products. In carrying out this protocol the research team are seen to be more cautious about what they do, more attentive to detail with regard to what they say about the design sessions, and more respectful of the children's views. In the UMSIC project, where possible, children have been shown the eventual product that was developed with their help.

Our second problem space in designing connected technologies for children concerns the use of passwords and security systems, and making what should be easy-to-use systems both secure and understandable. In many instances, users of computer technology are unaware that they are connected to other machines; they are also often unaware of what data is being taken from one place to another. It is clear in our work that children should be kept informed about whether or not they are connected to each other, about where their data may go, and about the possible dangers associated with their connectedness. It is also clear to us, however, that most children are rather unconcerned with security (Read and Beale 2009) and want it to be invisible, whereas the parents and guardians of these children, in determining what technologies their children may use, want to see security systems at work in order to 'trust' the product (Gefen, Karahanna et al. 2003). The more security that is put into the product, the more unusable, and unattractive, it might become to the children. This raises an ethical dilemma, as the design team want to design for both groups but are clearly most concerned with making the products usable for children.

In our work (Read and Beale 2009) we have designed a security system (Possibilities not Perils) in two layers, with one layer being the concern of the children and the other the concern of the adults. Children are shown icons that identify when they are connected to other children and are clearly told where their data is heading. Adults, on the other hand, have adult-style control systems that are shown to be robust and sturdy. It could be argued that it is the duty of a team making connected software for children to 'educate' children about the perils of being online and being in a shared data space. The view of our project is that this is not appropriate: the system needs to deal with the perils, and the children need to feel free to use the software. Security, we feel, is a system problem that needs to be shown to adults but not to children. The only use of passwords for children, in the child-facing product, is for loading user profiles that give a better user experience.


Druin, A. (1999). Cooperative inquiry: Developing new technologies for children with children. CHI99, ACM Press.

Gefen, D., E. Karahanna, et al. (2003). “Trust and TAM in online shopping: An Integrated Model.” Management Information Systems Quarterly.

Mazzone, E., J. C. Read, et al. (2008). Design with and for disaffected teenagers. Nordichi 2008, Lund, Sweden ACM Press.

Read, J. C. and R. Beale (2009). Under my pillow: designing security for children’s special things. DCS – HCI 2009, Cambridge, UK, ACM Press.

Read, J. C., P. Gregory, et al. (2002). An Investigation of Participatory Design with Children – Informant, Balanced and Facilitated Design. Interaction Design and Children, Eindhoven, Shaker Publishing.

Scaife, M., Y. Rogers, et al. (1997). Designing For or Designing With? Informant Design for Interactive Learning Environments. CHI ’97, Atlanta, ACM Press.

Schuler, D. and A. Namioka, Eds. (1993). Participatory Design: Principles and Practices. Hillsdale, NJ, Lawrence Erlbaum.

Theng, Y. L., N. M. Nasir, et al. (2000). Children as Design Partners and Testers for a Children’s Digital Library. ECDL2000, Springer Verlag.

Ethics and Emerging Technologies: Practitioners’ Perspectives

Mary Prior and Simon Rogerson



In his famous 1985 paper James Moor proposed that the novelty of Computer Technology led to the existence of ‘policy vacuums’:

‘Computers provide us with new capabilities and these in turn give us new choices for action. Often, either no policies for conduct in these situations exist or existing policies seem inadequate.’ (Moor, 1985, p. 266)

To address this problem, a research project currently being undertaken within the European Commission 7th Framework Programme is focussed on identifying emerging Information and Communication Technologies (ICTs) and the ethical issues to which they may give rise, in order to recommend governance structures and policies aimed at addressing them before or as they arise (ETICA).

To complement the academic/research focus of ETICA, a project is being undertaken with ICT practitioners to identify their perceptions of emerging technologies, the ethical issues to which they may give rise and how they may be addressed. This paper will report the outcomes of this project, including a comparison between the perceptions of academics/researchers and of practitioners.

Research methods

The work is being undertaken on behalf of a professional body, with the co-operation of its more experienced members. Two research methods are being employed; firstly, a survey (questionnaire) (Bryman, 2008) and secondly, focus groups (Beardsworth & Bryman, 2006).

The ETICA project used a survey in its initial stages, aimed at researchers and helping to identify:

  • the fields within which current ICT research is being conducted;
  • application areas, expected use and the benefits of these technologies;
  • ethical, social and legal issues that were foreseen, how they were identified, how they had been addressed and how effective were any measures taken to address them;
  • the technologies likely to be used in the future, the ethical issues to which they might give rise and how they might best be addressed.

This survey was adapted for use with ICT practitioners. At the time of writing, responses have been received and analysed to identify fruitful areas for more in-depth discussion within the focus groups. The latter will comprise senior, experienced practitioners and will take place during the Spring of 2011.

Survey results

Respondents are working on a wide range of technologies, in a variety of industries. The ETICA project had identified 11 fields (e.g. affective/emotional computing; ambient intelligence; artificial intelligence; bioelectronics). Nearly half of respondents work in the field of ‘Cloud Computing’ with the next highest proportion being ‘Future Internet’. Altogether 9 of the 11 identified fields are represented.

Only half of respondents say that 'possible ethical, social or legal problems' were foreseen arising from the projects they were working on. Given the fields involved and the expected benefits (many of which involved greater efficiency/cost savings and improved data management), this is an interesting finding that requires further investigation. The majority (nearly 80%) did not consider gender.

Of the possible ethical, social or legal problems identified, many were related to data protection, privacy and security. However others such as ‘reduced staff requirements’ and ‘intrusion into personal matters’ were also mentioned. Among the measures taken to address the issues, many were ‘technical’, although ‘including a work package on legal, ethical and social issues’, ‘reconsideration of the objective of the project’ and ‘setting up an ethics committee/review board’ were among steps taken, too. In one case, ‘cancellation of a part (or more) of the project’ was cited.

Among the future technologies identified, Cloud Computing figured prominently; hardly surprising given that many respondents were working in this field. Others mentioned were mobile technologies, with portable devices becoming more prevalent; internet-based applications and the integration of systems, for example more integrated household management and control. Asked whether they could identify ethical issues to which these are likely to give rise, respondents most frequently cited the security and privacy of data. In addition they mentioned:

  • the boundary between security/counter terrorism and civil liberties;
  • computer hackers will increase by use of the net;
  • retention of data on a server not owned by your organisation;
  • there is a tendency for organisations to assume there are no boundaries to what they can do; there is then an erosion of what is currently acceptable; this always seems to be for the benefit of the organisation and not of the individual;
  • the issues of replacing humans with machines;
  • how virtual reality is used to create situations in relation to gender, religion etc;
  • conflict of use when the same device is used for corporate as well as personal (i.e. private) computing.

Respondents were asked if they could suggest how any ethical issues arising from emerging ICTs should be addressed. A few simply replied 'no'. Others cited 'personal responsibility', the role of education, and a technical approach via 'tightly secured cloud computing'. Regulation was mentioned, as was the setting up of a committee similar to that used to consider ethical issues related to embryology in the UK. The development process was also mentioned, to include 'multi-stakeholder dialogues', formal risk assessments and ethics as an integral part. General public forums and focus groups were suggested by one respondent.

Further work

Half of the survey respondents have agreed to be contacted for more in-depth discussion of the issues raised. In particular, the researchers wish, firstly, to pursue the means by which ethical, social or legal issues have been identified and addressed in projects the study participants have worked on; secondly, to explore the range of future technologies that have been identified and the potential ethical, social or legal issues to which they may give rise; and finally, to discuss the means by which participants suggest these issues should be addressed.

Having summarised the findings from this study with experienced industry practitioners, the paper will compare them with the findings from the more academic/research-oriented participants in the ETICA project. Concluding observations will include suggestions/recommendations for further work in this area.


Beardsworth, Alan & Bryman, Alan (2006), Focus Group Research. Open University Press.

Bryman, A. 2008. Social Research Methods. 3rd ed. Oxford University Press.

ETICA Project home page: http://www.etica-project.eu/

Moor, James (1985), What is Computer Ethics? Metaphilosophy, vol. 16 no. 4, 266-75.

Tracing ‘unconventional variables’ in e-government services take up: the role of religion

Nancy Pouloudi, Antoine Harfouche and Stephane Bourliataux-Lajoinie


As the number and diversity of available e-government services grows worldwide, so does the research on their current state and the success factors leading to their adoption. Much of this research employs technology adoption and diffusion models, showing the importance of factors such as trust, perceived usefulness, perceived e-government value, perceived compatibility between the values of citizens and governments, and ease of use (e.g., Belanger and Carter, 2006). At the same time, qualitative studies have shown, in context, the challenges in the implementation of e-government services, e.g., as citizens and state employees 'work around' the systems (Azad & King, 2009), or as political parties as 'mega actors' negotiate the role of IT in state modernization (Prasopoulou, 2009). These studies reveal a complex picture of service adoption and bring to the fore the specificities of each national or application context.

Against this background, at a recent workshop on 'IT and Culture' in Tours, France, the authors of this paper had the opportunity to discuss and contrast their experience of the adoption of, and reactions to, new e-government services in three countries of the Mediterranean region, namely France, Greece and Lebanon. These countries are quite different in terms of e-government adoption and the maturity of available e-services. However, the most intriguing aspect that emerges from such a comparison is that 'unconventional' variables, that is, aspects that are rarely acknowledged in mainstream information systems research, may come into play and substantially influence e-government services adoption.

In this paper we argue that religion may be one such important institution, whose significance can be more vividly understood and appreciated by considering different cultural contexts. In this respect, Lebanon, Greece and France provide an interesting set of countries to consider: despite their geographical proximity, the importance and interference of religion is substantially different in each, and, once considered in more detail, reveals a complexity well worth studying further.

In France, religion has been separated from the State since the days of the French Revolution, and religion is therefore not expected to play a role, at least openly, in contemporary political decisions such as the adoption of e-government services.

Conversely, the role of religion is very prominent in Lebanon, where it is tightly bound to state governance. Lebanon has a complex political and public system, in which a careful balance must be maintained in all aspects of political life among the 18 ethnic and religious communities. The seats in parliament, in government, and in the civil administration are therefore allocated proportionally between Christians and Muslims. The Christian president, the Sunni prime minister, and the Shiite speaker of parliament all rule with almost equal power, although in different capacities. As a result of this confessional oligarchy, Lebanon lives in perpetual political and administrative paralysis. The public administration is seen as the place where confessional parties take care of their own interests, seriously undermining institutional credibility (Dagher 2002). According to several reports, Lebanese citizens hold a negative attitude towards the Lebanese administration: they perceive the public administration as a cave of corruption that absorbs public money without providing quality services in return (Antoun 2009). Adoption of public e-services was therefore not independent of, but rather contingent on, this political environment. Lack of trust in the safeguarding of personally identifiable information, lack of privacy protections, and fear of government control were the main inhibitors.

In Greece the religious picture is much more homogeneous, with over 95% of the Greek population belonging to, although not necessarily practising, the 'prevalent religion', according to the constitution, of the Eastern Orthodox Church of Christ (commonly known as Greek Orthodox). The Church is an important institution, on occasion becoming involved in political matters. This role is rooted in the history of modern Greece: the Greek identity was preserved alongside the Christian identity under Ottoman rule, and was instrumental in driving the revolution for liberty and the establishment of the modern Greek State in the early 19th century. The Church therefore argues that religious identity should be formally recognized as part of citizen identity: in 2000, when the Greek state revised the identifiers used on identity cards, the Church reacted very strongly to the removal of religion as an identifier. Citizens' signatures were collected after Sunday services, pressing for a referendum on this matter. Although this never took place, it was clear that the Church played an active role in shaping opinion about matters related to government services. At present, the Church opposes the introduction of an electronic citizen card by the Greek state. As a result, several citizens stated in the relevant online deliberation (www.opengov.gr) that they will not accept this card, which 'brutally insults [their] religious consciousness'. Set against a background of general mistrust towards the government on the one hand and scepticism towards all institutions on the other, the Church occasionally strives to accentuate its importance by assuming a protagonist role in State affairs. Even though such initiatives are heavily criticized in society, they are nonetheless influential for part of the population (typically those least ready to participate in the e-society), and religion can therefore become an 'unexpected' inhibitor of e-government services adoption.

This initial comparison of the role of religion on e-government adoption in the three countries illustrates that religion may be an important factor to consider when designing e-government services. Yet, our survey of the literature shows limited attention to date to the role of religion for e-government adoption. Perhaps unsurprisingly, studies explicitly acknowledging and naming religion as a key cultural factor in the context of e-government come from countries where religion is central to state affairs, as is the case in many Arab countries (e.g., Alomari et al., 2010, Al-Shehry et al., 2006).

The aim of this paper is to consider the role of religion in more depth and, drawing on the experiences of the three countries, to discuss the methodological challenges related to the study of religion in e-government. We hope that this discussion will draw attention to this ‘unconventional variable’, which is absent from much of the mainstream research on e-government services adoption but may in fact be significant in certain cultural contexts and therefore needs to be studied and understood more thoroughly.


Alomari, M. K., Sandhu, K. and Woods, P. (2010) Measuring Social Factors in E-government Adoption in the Hashemite Kingdom of Jordan. International Journal of Digital Society, 1(2).

Antoun, R. (2009) Towards a National Anti-Corruption Strategy. UNDP & LTA, Beirut.

Al-Shehry, A., Rogerson, S., Fairweather, N. B. and Prior, M. (2006) The Motivations for Change Towards E-Government Adoption: Case Studies from Saudi Arabia. eGovernment Workshop ’06 (eGOV06), September 11 2006, Brunel University, UK.

Azad, B. and King, N. (2009) Institutional Analysis of Persistent Computer Workarounds. Proceedings Academy of Management, OCIS Division, Chicago, United States.

Belanger, F. and Carter, L. (2009) The impact of the digital divide on e-government use. Communications of the ACM 52(4)132-135.

Dagher, A. (2002) L’administration libanaise après 1990, Colloque Le Modele de l’Etat developpemental et les defis pour le Liban, 15-16 fev. Rotana-Gefinor, Beyrouth, Liban.

Mouzelis N (1978) Modern Greece: Facets of Underdevelopment. Holmes & Meier, New York.

Prasopoulou, E. (2009) The interplay of ICT innovation with state administrative tradition: evidence from the Greek Taxation Information System (TAXIS). Unpublished PhD Thesis. Department of Management Science and Technology, Athens University of Economics and Business, Greece.

Ethical Aspects of Employing a Weblog in Research

M. J. Phythian, Dr N. B. Fairweather and Dr R. G. Howley


As an instrument in an action research (AR) project to reveal the most suitable way of measuring delivery of e-government, Mick Phythian initiated and maintained a weblog. Employing an electronic medium chimed well with e-government research. The weblog in this instance also provided a ‘golden thread’ of continuity through the research, from establishment towards the end of the initial literature review, to being a promotional tool for the research, promoting best practice from academic and practitioner literature, along with hosting two questionnaires and drawing comments from practitioners on relevant topics. The limited literature around using weblogs as research instruments focuses on describing their use in general or as research diaries. However, sufficient was found to encourage inclusion of the weblog within the ‘toolbox’ of research instruments employed. The weblog could be said to be a whole toolbox itself, acting as a repository for questionnaires, feedback to the questionnaires, ethical information and information about future and past feedback sessions. The weblog shaped thinking about the proposed model, matters around metrics and e-government in general, without directing responses to questionnaires or interviews, a key issue when employing a weblog for research, as opposed to journalistic or personal, reasons. Some ethical issues around weblogs were raised by Rogerson (2006), but he focused on journalistic and personal weblogs, not weblogs focused on research. This paper explores ethical issues involved with research-focused weblogs, and in particular those focused on AR.

As with any tool, practice at posting on the weblog made using it easier. The main concern was then to provide interesting content. Sources of material came from reading a range of publications, but setting up an automated newsfeed search brought up content both for the weblog and for the research. The blogger, having an IT background and being responsible for a number of official websites, had experience in the technology but had not previously constructed a weblog. Implementation required consideration of design options to develop a site structure that delivered a relatively attractive but easy-to-maintain research tool. Also needed was provision for future questionnaires and other documents. Adopting the title “The Great E-mancipator” for the research theme and weblog helped focus and style the weblog.

The weblog was a launch pad for surveys, enabling the ethical preamble to be read, with supporting materials, and then the survey reached by those wanting to complete it. This follows Denscombe’s (2005, p.8) advice:

“research project Home Pages offer a voluntary, self-initiated means for dealing with the requirements of research ethics. They provide an eminently practical tool for ‘self-governance’ that addresses a public audience of a) potential participants, b) actual participants, c) other researchers.”

This added ethical value to the weblog from the outset. In addition, the weblog was convenient when encouraging responses, since postings promoted the survey, with later posts reporting initial feedback and thus prompting additional responses.

Since the weblog was relatively novel, and only one of a set of instruments, writing original posts and relevant responses was a challenge, whilst operating within standard research ethical guidelines to support successful research. This meant not breaking confidences revealed in meetings and maintaining neutrality when discussing different suppliers’ products.

In September 2008, the Municipal Journal online version, www.localgov.co.uk, took the weblog as an automatic RSS feed into their own list of bloggers, which included known commentators and a Member of Parliament. Mick Phythian was interviewed by localgov for their special regular section on citizen engagement, with links back to the weblog. The weblog homepage was updated on a regular basis and further links added, along with the ability to subscribe being used by a slowly increasing audience.

The weblog had been consistently monitoring news around the new national indicator, NI14, on “avoidable contact” and announcing the latest government papers about it as they appeared. Establishing a role as a “critical friend” of metrics attracted a small, regular audience of practitioners, academics and consultants with an interest in the field. Whilst not discovering direct answers to research questions by itself, the weblog drew out the limited range of solutions on the market for recording both service user satisfaction and NI14. Weblog comments confirmed we were correct to pursue a common and composite metric for use across multiple channels and services, and confirmed the general difficulties presented by channel shift and costing when channels are seen in isolation. Encouraging feedback, whilst providing either anonymity or protection of social capital, could also be seen as an ethical challenge when collecting data, either directly through the weblog or via the questionnaires and interviews.

This paper describes in more detail:

  • The process of developing the weblog, describing the ethical framework for this;
  • The ongoing experience of writing a weblog for research and for promoting research, and discussion of ethical issues faced and how these were reconciled within the live/ongoing research process;
  • Lessons learnt from employing the weblog that inform research ethics;
  • Ethical implications for researchers using blogs and a consideration of how these may be addressed;
  • Conclusions on the use of weblogs as research instruments and their ethical issues.


Denscombe (2005)

de Vries (2007)

Elo & Kyngas (2007)

Hookway (2008)

Krippendorff (1980)

Mayring (2000)

Murthy (2008)

Nahapiet & Ghoshal (1998)

Research Information Network (2010)

Rogerson (2006)

Weare & Lin (2000)

Wiles, Pain & Crow (2010)

Online Pornography – Isn’t It Time to Stop Being So Squeamish?

Prof Andy Phippen


Online pornography is one of the last taboos for academic research, with little peer-reviewed literature exploring the phenomenon. However, one cannot underestimate its social and economic impact. According to Ropelato (n.d.), the ‘pornography industry is larger than the revenues of the top technology companies combined: Microsoft, Google, Amazon, eBay, Yahoo!, Apple, Netflix and EarthLink’, earning a worldwide revenue of $97.06 billion in 2006, a year in which 25% of total daily search engine requests were for pornography and pornography websites attracted 72 million visitors worldwide. However, could one argue that research into the online pornography phenomenon is sadly lacking due to academic squeamishness, or a failure to acknowledge it as a mainstream aspect of adult society?

Opinion on the social impact of online pornography is clearly divided, both in the little academic literature that exists on the subject and in discussion in the quality media. While conducting research for the documentary ‘Hardcore Profits’, Tim Samuels discovered that remote villages, for example in Ghana, suffer the consequences of porn (averypublicsociologist, 2009). Local villagers believed that porn watched via mobile cinemas had increased occurrences of rape and marital breakdown. In the absence of sex education, young men follow suit, having sex without using condoms; as a result, two men interviewed had contracted HIV. We might suggest that there is a wider public health issue here for policy makers: if one’s only experience of “sex education” is online pornography, this may result in a distorted view of acceptable practice. However, is this the fault of the pornography industry, or of government failures to provide effective sex education?

The transmission of sexually transmitted diseases also brings media attention to the industry, as reflected in a recent news story about an HIV-positive actor in the US. In light of these incidents it could be argued that, for pornography to be deemed ethical, condoms should clearly be shown in use.

Toub (2010) reported that the director of the Feminist Porn Awards, Alison Lee, felt that although the porn industry believes viewers don’t want to see condoms used, ‘all it would take would [be] for them to say we’re using condoms 100 percent of the time and viewers would get used to it’; in her view, therefore, using condoms in porn would be accepted by consumers.

Many claim that heterosexual pornography showing women forced into sex acts by men objectifies women as genitals and sexualises them (Jones, 2004; Onne, 2009). Zillmann (1986) documented that prolonged exposure to heterosexual pornography increased men’s likelihood of coercing women into unwanted sexual acts and committing rape (this applies to those with some degree of psychoticism). Zillmann identified other issues that emerged from experiments testing the effects of prolonged consumption of heterosexual pornography, including discontent with the physical appearance and sexual performance of intimate partners, and the opinion that habitual pornography consumers are at risk of becoming sexually callous and violent.

However, there are counterarguments with a more “pro-pornography” stance on its social impact. Marriott (2003) comments that Erotic Review film critics rarely consider how, why and whether pornography is degrading to women; ‘we suspect that it might be degrading to everybody’. This could suggest that porn protesters, especially those who regard porn as degrading and objectifying women, voice extreme views when actually porn consumers understand that they watch unrealistic material, which is the purpose: to view fantasies for entertainment and pleasure. This reflects Neu’s (2002) argument that pornography is not supposed to reflect reality because it is a fantasy, serving as a ‘safety valve’ for pleasure. Hoffman (2008) made a documentary about the pornography industry and, while acknowledging the effects of pornography, highlighted that, like any other entertainment such as sporting events, it perpetuates inaccurate ideas about how the audience can be in comparison to the characters.

Research undertaken at the University of Plymouth with a small group of adult consumers presents results that challenge much of the “conventional wisdom” regarding the negative social impact of pornography. An online survey disseminated in January 2011 elicited 118 responses from mainly younger adults (over 85% aged between 18 and 24), with only 15 respondents saying they had never looked at pornography online. A gender split of roughly 50/50 male/female allowed an exploration of attitudes that challenged the thinking that females viewed pornography as negative and detrimental to women.

In our sample, an almost equal number of females and males disagreed that pornography objectified women, and while more of both genders did agree, more males “strongly agreed”. More females than males also disagreed that “regular exposure to pornography could increase the chance of consumers forcing others into unwanted sexual acts”, with the vast majority of our respondents disagreeing with this statement. However, there was more agreement that “regular exposure to pornography can lead to consumers desensitising sexual relationships”, with a clear skew towards the whole population agreeing with this statement. There was, again, no clear gender split.

It was also interesting to note that the majority of our respondents did not feel that watching pornography encouraged unsafe sex. Given that our population was entirely UK based, this would support the earlier observation that pornography does not encourage unsafe sex per se; however, if it is the only form of sex education, it may.

One final area, which was the only attitudinal measure which did show a clear gender difference, was whether pornography should clearly show the use of condoms. The vast majority of our female respondents said they thought this should be the case, while over two thirds of males disagreed.

Respondents were also invited to comment if they disagreed with this statement. It was interesting to note that the comments of Hoffman (2008) were echoed by a number of our respondents: generally they felt that pornography was “entertainment” or “fantasy” and therefore did not have to reflect sexual acts in the real world.

We would acknowledge that our initial results are presented from a relatively small sample size and are not immediately generalisable. However, our results do highlight the need for academic research in this area. Without a strong evidence base, the stigma surrounding the phenomenon will remain, as opinion will be presented as fact. Clearly our research shows a mature attitude in general to what some regard as part of mainstream adult Internet society. Perhaps the academic world, and its ethics committees, should cease their delicate sensibilities around the subject matter and engage in developing a greater understanding of what is clearly viewed by many as part of their adult lives.


Averypublicsociologist (2009) Hardcore Profits [Online] Available: http://averypublicsociologist.blogspot.com/2009/09/hardcore-profits.html [Date accessed: 15 October 2010].

Hoffman (2008). 9to5: Days in Porn. Distributed by Media Entertainment GmbH (theatrical), Strand Releasing (DVD).

Jones, C. (2004) Porn – can you be an ethical consumer? [Online] Available: http://www.scarleteen.com/forum/Forum8/HTML/000786.html [Date accessed: 14 November 2010].

Marriott, E. (2003) Men and Porn [Online] Available: http://www.guardian.co.uk/world/2003/nov/08/gender.weekend7 [Date accessed: 5 October 2010].

Neu, J. (2002) An Ethics of Fantasy? Journal of Theoretical and Philosophical Psychology, 22(2), pp. 133-157.

Onne, A. (2009) Review: The Sex Education Show vs. Pornography [Online] Available: http://www.thefword.org.uk/blog/2009/03/review_the_sex [Date accessed: 3 October 2010].

Ropelato, J. (n.d.) Internet Pornography Statistics [Online] Available: http://internet-filter-review.toptenreviews.com/internet-pornography-statistics-pg2.html [Date accessed: 25 November 2010].

Toub, M. (2010) How to revel in porn and feel good about it. [Online] Available from: http://www.theglobeandmail.com/life/family-and-relationships/how-to-revel-in-porn-and-feel-good-about-it/article1664172/ [Accessed: 28 October 2010].

Zillmann, D. (1986) Effects of Prolonged Consumption of Pornography [Online] Available: http://profiles.nlm.nih.gov/NN/B/C/K/V/_/nnbckv.pdf [Date accessed: 15 November 2010].

Is the ICT Infrastructure Future Proof?

Norberto Patrignani and Iordanis Kavathatzopoulos


The ICT infrastructure and its technological core are now becoming the critical infrastructures of our society. Our activities and processes now rely on these platforms; they are our social and business platforms. But are they sustainable? What are the (physical) limits to take into account when looking into the ICT future? Does the planet have enough resources to sustain the making, powering and wasting of all the electronic devices needed to support our social and business platforms in the future?

This paper addresses the issue of evaluating the environmental impact of ICT. Starting from an analysis of the sustainability of one of its most celebrated “laws”, Moore’s law, we analyse the entire ICT life-cycle: from “silicon factories”, to use in data centres, to the final destination of ICT products, which is recycling and reuse (trashware) in the best case or, in the worst case, uncontrolled waste traffic towards poor countries, with health hazards and environmental pollution.

We introduce a new dimension in the social and ethical analysis related to ICT: the future. What are the implications of this future ethics in ICT?

The Physical limits of ICT Infrastructure

ICT, the “cleanest” and most “de-materialized” sector of the economy, is under scrutiny by environmental advocacy organizations [Greenpeace, 2010]. The contribution of ICT (production, power supply) to greenhouse gas emissions (CO2, etc.) is becoming significant, reaching the same level as airlines (close to 3% of total CO2) [Gartner, 2007].

Our computers are based on silicon chips, but their production process has one of the highest impacts in industry: producing a DRAM chip (2 g in weight) requires about 1.7 kg of fossil fuels and chemicals, a “material intensity” of 850:1, probably the highest in all industries (car manufacturing has a material intensity of 2:1) [SVTC, 2007].
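The material-intensity figures above can be verified with a few lines of arithmetic (a sketch; the 1.7 kg and 2 g DRAM figures and the 2:1 car ratio are those cited from the SVTC report):

```python
def material_intensity(inputs_g: float, product_g: float) -> float:
    """Ratio of raw-input mass to finished-product mass (850 means 850:1)."""
    return inputs_g / product_g

# DRAM chip: ~1.7 kg (1700 g) of fossil fuels and chemicals for a 2 g chip [SVTC, 2007]
dram = material_intensity(1700, 2)

# Car manufacturing, by comparison, is roughly 2:1
car = material_intensity(2, 1)

print(f"DRAM material intensity: {dram:.0f}:1")  # DRAM material intensity: 850:1
print(f"Car material intensity:  {car:.0f}:1")   # Car material intensity:  2:1
```

The striking point is the ratio of the ratios: gram for gram, a memory chip consumes on the order of 400 times more input material than a car.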

The first warning about workers in these chip factories came from the first study of the health conditions of “semiconductor workers”: they had an illness rate 200% higher than other workers, and the women’s miscarriage rate was 40% higher. This was the first sign that the high-tech revolution carries a high price for health, the environment and sustainable economic development. Chip manufacturing requires vast resources and involves several toxic hazards throughout the lifecycle, from design and production through to disposal [SJMN, 1985].

Moore’s law [Moore, 1965] is probably one of the best examples of the so-called “magnificent and progressive” goals of technology trends, describing the doubling of the number of transistors on an integrated circuit every eighteen months. From the first microprocessor, the Intel 4004 (based on about 10³ transistors), to the latest generations of chips (the Intel Itanium, with more than 10⁹ transistors), this “law” has held. But what about its sustainability? A recent Yale University study showed the limits of Moore’s law due to material consumption: “Our high-tech products increasingly make use of rare metals, and mining those resources can have devastating environmental consequences… The processing capacity increase … is enabled by an expanded use of elements … computer chips made use of 11 major elements in the 1980s but now use about 60 (two-thirds of the periodic table!)” [Schmitz, Graedel, 2010]. Most of these elements are the so-called rare earths, and the largest mines are now concentrated in China (see fig. 1). This also has important implications at the international and political level [Pumphrey, 2011].
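As a rough sanity check on the trend just described (a sketch: the 10³ and 10⁹ transistor counts are the figures cited above, the 18-month doubling period is the formulation quoted, and 1971 is the well-known launch year of the 4004):

```python
import math

DOUBLING_PERIOD_YEARS = 1.5  # the eighteen-month formulation of Moore's law

def years_to_grow(start_transistors: float, end_transistors: float) -> float:
    """Years for a transistor count to grow from start to end,
    at one doubling every DOUBLING_PERIOD_YEARS."""
    doublings = math.log2(end_transistors / start_transistors)
    return doublings * DOUBLING_PERIOD_YEARS

# Intel 4004 (launched 1971): ~10**3 transistors; recent chips: >10**9
years = years_to_grow(1e3, 1e9)
print(f"{math.log2(1e9 / 1e3):.1f} doublings, about {years:.0f} years")
# 19.9 doublings, about 30 years
```

A millionfold increase is about twenty doublings, or roughly three decades at the 18-month pace, which is consistent with the period over which the “law” has been observed to hold.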
Moore’s law also has consequences for the speed of “gadget” consumption: new products emerge constantly; they are faster, smaller, cheaper and smarter, but each new wave of innovation in electronics technology introduces new materials and pushes last year’s obsolete gadgets and machines into the waste-basket [WSJ, 2004]. Innovation is the hallmark of the ICT industry, but we should also very quickly start to think about its sustainability. This recalls the electronic sustainability commitment of the “Soesterberg principle”: “Each new generation of technical improvements in electronic products (Moore’s law) should include parallel and proportional improvements (@Moore’s law) in environmental, health and safety, as well as social justice attributes” [Soesterberg, 1999].

Future Ethics

Computer scientists and professionals have difficulty dealing with the long-term consequences of their research and projects, in particular when dealing with moral imperatives defined for the welfare of future generations of humans or for the planet. When we lack direct interaction with the consequences of our actions, it is very difficult to get feedback and change direction. One of the first researchers to study the ethical challenges introduced by technological developments was Hans Jonas, in his The Imperative of Responsibility: “Human survival depends on our efforts to care for our planet and its future … act so that the effects of your action are compatible with the permanence of genuine human life” [Jonas, 1984].

How can we develop a new stage of ethics, an ethics that will inform our decisions when the consequences of our acts are so distant in the future? This is the central problem of the so called “future ethics”: the rational acceptance of a norm doesn’t automatically guarantee the action or behaviour according to the norm [Birnbacher, 2006].


ICT offers a number of opportunities for the achievement of global sustainability through economic benefits (value creation, employment, government revenues, etc.), for industrialized countries (eco-efficiency-enhancing applications, substitution of transport and buildings by teleworking, videoconferencing, e-business, etc.), and for developing countries (economic development, implementation of poverty alleviation strategies) [WSIS, 2003]. On the other hand, we must be aware that ICT products can have adverse environmental and social impacts: consuming elements that are becoming scarce, lowering working conditions in the manufacturing phase, consuming energy in the use phase, and adding to the growing e-waste problem.

This double face of ICT raises ethical dilemmas in the field of “future ethics”, where the contributions of philosophers like Jonas or Birnbacher will help us in analysing the dilemmas ICT poses for the future.

If we want a more reliable ICT social and business platform, we should also start pointing our lenses towards future ethical dilemmas.


– Birnbacher D., (2006), “What motivates us to care for the (distant) future?”, N° 04/2006 Gouvernance Mondiale

– Gartner (2007), Gartner Estimates ICT Industry Global CO2 Emissions, 2007

– Greenpeace (2010), Guide to Greener Electronics, October 2010

– Jonas H. (1984), “The Imperative of Responsibility: In Search of Ethics for the Technological Age”, University of Chicago Press, 1984

– Moore G.E. (1965), Cramming more components onto integrated circuits, Electronics Magazine, 19 April 1965

– Pumphrey D., Ladislaw S.O., Hyland L. (2011), “Energy and Environment in the Barack Obama–Hu Jintao Meeting”, Center for Strategic and International Studies, csis.org, January 2011

– Schmitz O.J., Graedel T.E. (2010), “The Consumption Conundrum: Driving the Destruction Abroad”, e360.yale.edu, April 2010

– SJMN (1985), San Jose Mercury News, “High Birth Defects Rate in Spill Area”, January 17, 1985

– Soesterberg (1999), Trans-Atlantic Network for Clean Production Meeting, Soesterberg, The Netherlands, May 1999

– SVTC (2007), Silicon Valley Toxics Coalition, October 2007

– UN-WSIS (2003), Geneva Declaration of Principles, World Summit on Information Society

– WSJ (2004), “e-Waste, the world’s fastest growing and potentially most dangerous waste problem”, Wall Street Journal, September 23, 2004.

The Adventures of Picciotto Roboto: AI & Ethics in Criminal Law

Prof. Ugo Pagallo


In their 2007 Ethicomp paper, Reynolds and Ishikawa proposed three possible examples of criminal robots:

I) Their first hypothesis was “Picciotto Roboto.” The field pertains to robotic security guards such as the Sohgo Security Service’s Guardrobo, marketed since 2005. The case concerns a security robot participating in a criminal enterprise such as a bank robbery. “As such, it seems that the robot is just an instrument just as the factory which produces illegal products might be. The robot in this case should not be arrested, but perhaps impounded and auctioned” (Reynolds and Ishikawa, 2007);

II) The second scenario is given by the “Robot Kleptomaniac.” Here, the machine has free will and self-chosen goals, so that it plans a series of robberies of batteries from local convenience stores, the aim being to recharge its own batteries. Leaving aside the responsibilities of the designers and producers of such robots, it is possible to claim that the unlawful conduct of the robot depends on, and is justifiable on the basis of, what is mandatory for survival. In any event, “the robot ultimately chooses and carries out the crime” (Reynolds and Ishikawa, 2007);

III) The final hypothesis is no longer a matter of imagination: the Robot Falsifier. In the mid-1990s, the Legal Tender project claimed that remote viewers could tele-operate a robotic system to physically alter “purportedly authentic US $1000 bills” (Goldberg et al., 1996).

Interestingly, in How Just Could a Robot War Be?, Peter Asaro takes seriously the hypothesis of the “Robot Kleptomaniac,” envisaging autonomous robots that challenge national sovereignty, produce accidental wars or even make revolutions. In fact, once we admit the existence of a robot that chooses and carries out a criminal action, it necessarily follows that “autonomous technological systems might act in unanticipated ways that are interpreted as acts of war” and, moreover, may “begin to act on their own intentions and against the intentions of the states who design and use them” (Asaro, 2008). As a result, new types of crime could emerge, with robots accountable for their own actions: for example, in Criminal Liability and ‘Smart’ Environments (2010), Mireille Hildebrandt examines a machine that “provides reasons for its behaviours [in that] it has developed second order beliefs about its actions that enable itself as their author.” The self-consciousness of the robot not only materializes Sci-Fi scenarios such as a robot revolution and, hence, a new cyber-Spartacus. What is more, in the phrasing of James Moor (1985), the “logical malleability” of robots would end up changing the meaning of traditional notions such as stealing and assaulting, because the culpability of the agent, i.e., its mens rea, would be rooted in the artificial mind of a machine “capable of a measure of empathy” and “a type of autonomy that affords intentional actions” (Hildebrandt, 2010). Today’s state of the art in technology, however, suggests going back to the case of “Picciotto Roboto” rather than insisting on the adventures of the “Robot Kleptomaniac.” Although “many authors point out that smart robots already invoke a mutual double anticipation, for instance generating protective feelings for Sony’s robot pet for AIBO” (Hildebrandt, 2010), it seems more profitable to revert to the terra cognita of common legal standpoints that exclude robot criminal accountability.
For the foreseeable future, indeed, robots will be held legally and morally irresponsible because they lack the preconditions for attributing liability to someone in the case of a violation of criminal laws. Since consciousness is a conceptual prerequisite for both legal and “moral agency” (Himma, 2007), the standard legal viewpoint holds that even when, say, Robbie CX30 assassinated Bart Matthews in Richard Epstein’s story The Case of the Killer Robot (1997), the homicide remains a matter of human responsibility, because robots are not aware of their own conduct, nor do they ‘wish’ to act in a certain way. Whether the fault lies with the Silicon Valley programmer indicted for manslaughter or with the company, Silicon Techtronics, which promised to deliver a safe robot, it would be meaningless to put poor Robbie on trial for murder.

Still, there is no need to evaluate robots with Turing tests in order to admit a new generation of criminal cases involving human (legal and moral) responsibility and even robots’ moral accountability (as in Floridi and Sanders, 2004). In order to highlight this transformation, it is crucial to address the new responsibilities for Picciotto Robotos that participate or are employed in criminal enterprises, in that robots affect standard legal notions such as ‘causality’ and human ‘culpability.’ As the field of computer crimes has shown since the early 1990s, robots induce a “policy vacuum” (Moor, 1985), for the increasing autonomy and even unpredictability of their behaviour alter the conditions on which the principle of human responsibility is traditionally grounded. Some speak of a “failure of causation” due to the impossibility of attributing responsibility on the grounds of “reasonable foreseeability,” since it would be hard to predict what types of harm may supervene (Karnow, 1996). Others stress the “strong moral responsibilities” that software programmers and engineers now have for the design of AAAs, i.e., autonomous artificial agents (Grodzinsky, Miller and Wolf, 2008). Besides a new generation of cases, such as a “semiautomatic robotic cannon deployed by the South African army [which] malfunctioned, killing 9 soldiers and wounding 14 others” in October 2007 (Wallach and Allen, 2009), it is necessary to address both the legal and the ethical issues of this deep transformation, by paying attention to the ways responsibility should be apportioned between designers, producers, and users of increasingly smarter AAAs.


Asaro, P. (2008), How just could a robot war be?, Frontiers in Artificial Intelligence and Applications, 75, 50-64;

Epstein, R. G. (1997), The case of the killer robot, New York, Wiley;

Floridi, L., and Sanders, J. (2004), On the morality of artificial agents, Minds and Machines, 14(3): 349-379;

Goldberg, K., Paulos, E., Canny, J., Donath, J. and Pauline, N. (1996), Legal tender, ACM SIGGRAPH 96 Visual Proceedings, August 4-9, New York, ACM Press, pp. 43-44;

Grodzinsky, F. S., Miller, K. A., and Wolf, M. J. (2008), The ethics of designing artificial agents, Ethics and Information Technology, 10: 115-121;

Hildebrandt, M. (2010), Criminal liability and ‘smart’ environments, Conference on the Philosophical Foundations of Criminal Law at Rutgers-Newark, August 2009;

Himma, K. E. (2007), Artificial agency, consciousness, and the criteria for moral agency: what properties must an artificial agent have to be a moral agent?, 2007 Ethicomp Proceedings, Global e-SCM Research Center & Meiji University, pp. 236-245;

Moor, J. (1985), What is computer ethics?, Metaphilosophy, 16(4): 266-275;

Reynolds, C. and Ishikawa, M. (2007), Robotic thugs, 2007 Ethicomp Proceedings, Global e-SCM Research Center & Meiji University, pp. 487-492;

Wallach, W. and Allen, C. (2009), Moral machines: teaching robots right from wrong. New York: Oxford University Press.