Global communication skills among educated youth of the Tharu tribe: A study with special reference to the use of internet facilities

Subhash Chandra Verma


The Tharu community is a well-known tribe of India and Nepal. The Tharus are indigenous people of the Himalayan Tarai area. Most of this community lives on both sides of the Indo-Nepal border, and the Tharus were already living in the Terai before Indo-Europeans arrived. Due to the friendly relations between India and Nepal, the Indo-Nepal border is open to people of both countries, so Indian and Nepali Tharus maintain active socio-cultural relations. This paper is based on primary and secondary data and describes the status of global communication and connectivity among educated youth of the Tharu tribe. The selected sample comprised 32 male and 18 female students, all belonging to various villages. A self-developed questionnaire was used in interviewing all selected Tharu students to collect information about their awareness of global communication. Facts about use of the internet, social networking, chatting tools and the development of online communities were collected from the internet: Facebook, Orkut, Yahoo Messenger, MSN Messenger, Skype and Google search were used to find those Tharus who are connected and active globally through the internet, and some information about Nepali and Indian Tharus was also gathered online. The common perception about the Tharus is that they have a very reserved and shy nature and are therefore backward. This study also found that they are indeed very poor in global connectivity due to their traditional habits. This situation applies only to Indian Tharus, because Nepali Tharus are more aware than Indian Tharus in their use of the internet and direct links to other people.
This is why Nepali Tharus are working at a global level while Indian Tharus are still struggling for their basic needs in this era of globalization. Educated Indian Tharus are also backward and poor in their use of global communication facilities such as the internet. Most (approximately 75%) Indian Tharu students are still unable to use a computer and the internet, even though they know very well that modern communication technologies are very helpful in the development of any society. Hence they remain distant from these facilities, which are available in the college and in the market at very low prices. There is a need for more awareness of global communication and connectivity among Indian Tharus for their development. The Indian Tharu youth who have access to higher education are not very aware of globalization and global communication. Although they recognise the significance of global communication for the development of any community in this era, they are not active in it. There is no dearth of facilities available free of cost (at college) or at very low prices in the market, yet educated Indian Tharu youth seem to have little interest in them. Nepali Tharu youth are more active than Indian Tharu youth in global communication via the internet and in direct contacts with people of other countries. Many Nepali Tharu students study in top-grade Indian institutes, but Indian Tharu youth show little awareness of studying in these institutes, even though they enjoy the special facility of reserved admission to such institutions. What is the real status of global communication among Indian Tharu youth, and what are their main problems in this matter? Why are they not interested in global communication? These are some of the main questions at present.
On the basis of this analysis of data collected from Indian Tharu students and other information drawn from the related literature and internet searches, it can be said that the Indian Tharu community is still poor and deprived in the matter of global communication in this era of globalization. Lack of awareness about development and globalization is the reason for their backwardness in global communication. Due to poor English, some Indian Tharus feel shy and hesitant about keeping global contacts, whether via the internet or directly. Educated Indian Tharus are also poor and slow in global communication because of their typical traditional habits of hesitation and shyness. That is why the Indian Tharus have only one online community, named the Rana Tharu Parishad, whereas Nepali Tharus have many online communities for social networking (the names of these communities have been described above). Most (about three-quarters of) educated Indian Tharus are still unable to use a computer and the internet. This is the era of globalization, so global communication is a must for the development of every community; the Indian Tharus therefore need to be connected to the stream of global communication. Tharu youth are a very important wing of their community and play a very creative role in it, but they are not connected with the mainstream of development. Some youths are trying to obtain higher education and advanced technology, but they are very few, and they are neither advanced nor intricately linked with their traditional culture. They should have access to modern education, communication, technology and a new lifestyle, while care for their traditional culture remains necessary to preserve their own identity.

Web Deception Detanglement

Anna Vartapetiance and Lee Gillam


Suppose we wished to create an intelligent machine, and the web was our chosen source of information. More specifically, suppose we relied on Wikipedia as a resource from which this intelligent machine would derive its knowledge base. Any acts of Wikipedia vandalism would then impact upon the knowledge base of this intelligent system, and the system might develop confidence in entirely incorrect information. Ethically, should we develop a machine which can craft its own knowledge base without reference to the veracity of the material it considers? If we did, what kinds of “beliefs” might such a machine start to encompass? How do we address the veracity of such materials so that such a learning machine might distinguish between truth and lie, and what kinds of conclusions might be derived about our world as a consequence? If trying to construct an ethical machine, how appropriate can ethical outcomes be considered in the presence of deceptive data? And, finally, how much of the Web is deceptive?

In this paper, we will investigate the nature and, importantly, the detectability of deception at large, and in relation to the web. Deception appears to be increasingly prevalent in society, whether deliberate, accidental, or simply ill-informed. Examples of deception are readily available: from individuals deceiving potential partners on dating websites, to surveys which make headlines about “Coffee causing Hallucinations” with no medical evidence and very little scientific rigour, to companies which collapsed due to deceptive financial practices (e.g. Enron, WorldCom), and to segments of the financial industry allegedly misrepresenting risk in order to derive substantial profits. We envisage a Web Filter which could be used equally well as an assistive service for human readers, and as a mechanism within a system that learns from the web.

So-called Deception Theory, and the possibility of modelling human deception processes, is interesting to experts in different subject fields for differing reasons and with different foci. Most research has been directed towards human physical reactions in relation to co-located (face-to-face, synchronous) deception, largely considering non-verbal cues involving body language, eye movements, vocal pitch and so on, and how to detect deceptions on the basis of such cues. Such research is interesting for sociologists in terms of how deception is created, for criminologists in trying to differentiate the deceptive from the truthful, and for computer vision researchers in identifying such cues automatically across participants within captured video. Alternative communication media, in which participants are distributed, communications are asynchronous, and cues can only be captured from the artefact of the communication (the verbal), require entirely different lines of expertise and investigation.

To try to recognize deception in verbal communication, lexical and grammatical analysis is typical. Such approaches may be suitable for identifying deception on the web. It is assumed that deception leads to identifiable, yet unconscious, lexical selection and the forming of certain grammatical structures (Toma and Hancock, 2010), and these may act as generally useful cues for deceptive writing. From numerous researchers (Burgoon et al 2003, Pennebaker et al 2003, Newman et al 2003, Zhou et al 2003), we find that the presence of such cues can be divided into four principal groups:

1. Use of more negative emotion words

2. Use of distancing strategies – fewer self-references

3. Use of larger proportions of rare and/or long words

4. Use of more emotion words

To demonstrate that deception detection might be possible, Pennebaker and colleagues developed a text analysis program called Linguistic Inquiry and Word Count (LIWC), which analyzes texts against an internal dictionary (Pennebaker, Francis, & Booth, 2001; Pennebaker, Booth, & Francis, 2007). Each word in the text can belong to one or more of LIWC’s 70 dimensions, which include general text measures (e.g. word count), psychological indicators (e.g. emotions), and semantically related words (e.g. temporally and spatially related words). We submitted the BBC’s “’Visions link’ to coffee intake” article, alluded to earlier, to the free online version of LIWC. Results, included below, show an absence of self-references, few positive emotions, and a large proportion of “big words”. Of course, such an analysis is inconclusive, as such features may also be true of scientific articles or textbooks, and LIWC leaves the interpretation up to us.
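The kind of dictionary-based cue counting that LIWC performs can be sketched as follows. This is a minimal illustration, not LIWC itself: the word lists below are small hypothetical stand-ins for LIWC’s proprietary dictionary, and the real program computes around 70 dimensions rather than the handful shown here.

```python
# A minimal sketch of LIWC-style cue counting for the deception cues
# listed above. The word lists are illustrative stand-ins, not LIWC's
# actual dictionary.
import re

SELF_REFERENCES = {"i", "me", "my", "mine", "myself", "we", "us", "our"}
NEGATIVE_EMOTION = {"hate", "worthless", "afraid", "sad", "angry", "hurt"}
EMOTION = NEGATIVE_EMOTION | {"love", "happy", "excited", "proud", "glad"}

def cue_profile(text: str) -> dict:
    """Return per-word proportions of simple deception-related cues."""
    words = re.findall(r"[a-z']+", text.lower())
    total = len(words) or 1  # avoid division by zero on empty input
    return {
        "word_count": len(words),
        "self_references": sum(w in SELF_REFERENCES for w in words) / total,
        "negative_emotion": sum(w in NEGATIVE_EMOTION for w in words) / total,
        "emotion": sum(w in EMOTION for w in words) / total,
        # LIWC's "big words" dimension counts words of more than six letters
        "big_words": sum(len(w) > 6 for w in words) / total,
    }

profile = cue_profile("I love my work, although the organisation is worthless.")
```

On this toy sentence, the profile reports the fraction of self-references, emotion words and long words per word of text; a real system would compare such proportions against genre-specific baselines before drawing any conclusion about deceptiveness.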
Our aim is to create a system that can identify deceptive texts, but also explains which are the most deceptive sentences. Such a system could act as an effective Web Filter for both human and machine use. We assume that such a system should be geared towards the peaks of deception which may occur in texts that are otherwise not deceptive, since such deceptions may be lost in the aggregate. We must also account for the systematic variations that exist across different text genres in order to control for them, and ascertain the threshold values for the various factors which give us appropriate confidence in our identification.

The full paper will initially review the literature relating to deception in general, and distinguish between deceptions and lies. In the process, we will offer up some interesting – and occasionally amusing – examples of deception. We will then focus on text-based deception, and discuss initial experiments geared towards the development of the system mentioned above. One of these experiments may even demonstrate how a paper supposedly geared towards deception detection, whose conclusions fail to fit its aim, was never likely to achieve that aim; we will also make mention of one or two other interesting examples of academic deception and/or lies.


Toma, C.L. and Hancock, J.T. (2010). Reading between the Lines: Linguistic Cues to Deception in Online Dating Profiles. Proceedings of the ACM Conference on Computer-Supported Cooperative Work (CSCW 2010).

Burgoon, J.K., Blair, J.P., Qin, T., and Nunamaker, J.F. (2003). Detecting deception through linguistic analysis. Intelligence and Security Informatics, 2665.

Pennebaker, J.W., Booth, R.J., & Francis, M.E. (2007). “Linguistic Inquiry and Word Count: LIWC 2007”. Austin, TX: LIWC.

Pennebaker, J.W., Francis, M.E., and Booth, R.J. (2001). “Linguistic Inquiry and Word Count: LIWC 2001”. Mahwah, NJ: Erlbaum Publishers.

How to Address Ethics of Emerging ICTs: A critique of Human Research Ethics Reviews and the Search for Alternative Ethical Approaches and Governance Models

Bernd Carsten Stahl


The purpose of this paper is to explore how ethical issues arising from emerging technologies can currently be addressed using the mechanism of ethical review, which dominates the approach to ethics on the European level. The paper discusses which blind spots arise due to this approach and ends with a discussion of alternative and complementary approaches.

The paper arises from the EU FP7 project Ethical Issues of Emerging ICT Applications (ETICA), which ran from 04/2009 to 05/2011. Its main focus was on exploring which emerging ICTs can reasonably be expected to become relevant in the next 10 to 15 years, examining their ethical consequences, and proposing ways of addressing these. The current abstract gives a brief review of the approach and findings of the project and then concentrates on the way ethical issues are currently addressed in technical research in the EU, namely by ethics review. The abstract argues that this approach will be incapable of dealing with a significant number of the ethical issues identified by ETICA, and it discusses other ways of doing so. The rest of the abstract develops this argument in some more depth.

In order to assess whether processes of ethics governance will be fit for purpose, the first task is to come to a sound understanding of which technologies are likely to emerge. The methodology employed to identify emerging ICTs was a structured discourse analysis of documents containing visions of future technologies. Two types of documents were analysed: 1) high level governmental and international policy and funding documents and 2) documents by research institutions.

The grid of analysis used to explore these documents is shown in the following figure:
Data analysis found more than 100 technologies, 70 application examples and 40 artefacts. These were synthesised into the following list of emerging ICTs:

  • Affective Computing
  • Ambient Intelligence
  • Artificial Intelligence
  • Bioelectronics
  • Cloud Computing
  • Future Internet
  • Human-machine symbiosis
  • Neuroelectronics
  • Quantum Computing
  • Robotics
  • Virtual / Augmented Reality

By “technology” we mean a high-level socio-technical system that has the potential to significantly affect the way humans interact with the world.

Having identified likely emerging ICTs, the next task was to explore which ethical issues these are likely to raise. In order to identify likely ethical issues of emerging ICTs, a literature analysis of the ICT ethics literature from 2003 was undertaken. This started out with a novel bibliometric approach that mapped the proximity of different concepts in the ICT ethics literature. The following figure is a graphical representation of this bibliometric analysis:
Using this bibliometric analysis as a starting point, a comprehensive analysis of the ICT ethics literature was undertaken for each technology.

The following mind map represents the headings of the ethical issues identified for the different technologies:
Figure 3: Ethical issues of emerging ICTS

The ethical analysis showed that numerous ethical issues are discussed with regard to the technologies. The number and detail of these ethical issues vary greatly; this variation is caused by the differing levels of detail and the length of time for which the technologies have been discussed. Several recurring issues arise, notably those related to:

• privacy,

• data protection,

• intellectual property,

• security.

In addition to these, there were numerous ethical issues that are less obvious and currently not regulated. These include:

• autonomy, freedom, agency,

• possibility of persuasion or coercion,

• responsibility, liability,

• the possibility of machine ethics

• access, digital divides

• power issues

• consequences of technology for our view of humans

• conceptual issues (e.g. notions of emotions, intelligence),

• link between and integration of ethics into law,

• culturally different perceptions of ethics.

This non-comprehensive list shows that there are numerous ethical issues we can expect to arise.

In order to motivate policy development, the relevance and severity of these issues were evaluated. Evaluation of the emerging ICTs and their ethical issues was done from four different perspectives:

• Law:

The analysis was based on the principles of human dignity, equality and the rule of law. A review of 182 EU legal documents revealed that the legal implications of emerging technologies were not adequately reflected.

• (Institutional) ethics:

The earlier ethical analysis was contrasted by looking at opinions and publications of European and national ethics panels or review bodies. The review furthermore covered the implied normative basis of technology ethics in the EU.

• Gender:

A review of the gender and technology literature showed that in the case of five technologies such gender implications had already been raised in the literature.

• Technology assessment:

This analysis asked how far developed the ICTs are and what their prospects of realisation are. The expected benefits and possible side effects were discussed as well as the likelihood of controversy arising from the different technologies.

This literature-based analysis was supplemented and validated by an expert evaluation workshop. The evaluation found that several of the technologies are so closely related that they should be treated in conjunction. Building on the criteria of likelihood of coming into existence and raising ethical debate, the following ranking was suggested:

1. Ambient Intelligence

2. Augmented and virtual reality

3. Future Internet

4. Robotics and Artificial Intelligence and Affective computing

5. Neuroelectronics and Bioelectronics and Human-Machine Symbiosis

6. Cloud Computing

7. Quantum Computing

This ranking will allow for the prioritisation of activities and policies.

Ethics is described as an important part of technical research in the EU. The European Union is based on shared values as laid out in the Charter of Fundamental Rights of the European Union and implemented in the Europe 2020 and other strategies and policies. Information and communication technologies (ICTs) are needed to achieve numerous policy objectives.

It is therefore important to ensure that development and use of ICTs lead to consequences that are compatible with European values. ICT research projects funded by the EU 7th Framework Programme have to reflect on ethical issues and how to resolve them. This is currently verified by an ethics checklist. This checklist is filled in by project proposers and evaluated by technical experts during the evaluation of the project. If these experts flag the project up as ethically relevant, then it is reviewed by an ethics review panel.

In order to understand whether this approach is suitable for dealing with the ethical issues, the issues were classified as follows:
Figure 4: Top level categories of ethical issues in emerging ICTs.

A more detailed analysis of the last two, social consequences and impact on individuals, can be seen in the following figure:
Figure 5: Categories of ethical issues related to impact on individuals and social consequences

The colour coding in this figure refers to the question of whether existing ethics processes are likely to pick up these issues and deal with them in a satisfactory manner. Green means that these are established issues that the ethics checklist and subsequent review are likely to identify. Issues depicted in yellow are less clear, and red issues are unlikely to be addressed by the current approach.

Having thus established that the EU’s current way of dealing with ethical issues is unlikely to be able to satisfactorily deal with all problems and does not measure up to the EU’s rhetoric, the next question is how this can be addressed. The full paper will go through the computer ethics literature and explore whether alternative approaches offer more promising avenues of addressing these issues.

Overall, the paper will contribute to a theoretically sound and practically relevant way of understanding, evaluating and dealing with ethics in emerging technologies.

Eeny, Meeny, Miny, Masquerade! Advergames and Dutch Children: A Controversial Marketing Practice

Isolde Sprenkels and Dr. Irma van der Ploeg


In a society increasingly inundated with digital technology, children in the Netherlands learn from a very young age how to use new information and communication technologies (ICTs). These technologies offer them ways to play, learn, explore and develop their sense of identity, as well as interact and communicate with adults and peers. Children spend ever more time in front of computer and mobile screens, with gaming as one of their favourite activities. One type of game many children enjoy playing is online casual or mini games. These short, ‘free’ and easy-to-learn games have friendly designs with bright colours and fun tasks to perform, and are developed to entertain, educate or deliver a particular commercial message. This paper focuses on the latter, the ‘advertisement as game’ that is developed around a particular brand or product and which can be described as an ‘advergame’.

Advergames are used by companies to build brand awareness, prolong contact time, stimulate product purchase and consumption, drive traffic to a brand’s website, generate consumer data and build and expand digital profiles of consumers. Especially when played by children, these advergames can be considered to be problematic and controversial, as they are seen to exploit children by taking advantage of their state of psycho-social development and by integrating unseen technological features. They bring together several issues related to identity, consumption, marketing, profiling and datamining. Using insights from surveillance studies, science and technology studies, (sociological) studies of identity construction in relation to ICTs, and studies on children and consumption, this paper will analyse several advergames targeted to Dutch children. It examines how this new form of marketing communication fits into corporate objectives and why this can be considered controversial with children.

First, advergames will be examined against a discourse which suggests that it is immoral to economically exploit children: children are considered vulnerable, and it is inappropriate to take advantage of this vulnerability by using sophisticated marketing strategies on them. As children’s cognitive skills are not yet fully developed and they have little life experience, their ability to interpret and assess commercial messages is limited. This makes persuasive strategies unethical, as children are still learning to distinguish such messages and are unable to make choices that would protect them from certain forms of marketing manipulation (Moore 2004; see also Buijzen and Valkenburg 2003). Research has shown that children find it difficult to distinguish between advertising and editorial content in online environments (Nielsen 2002; Mijn Kind Online 2008). There is also an increasing lack of parental supervision of children’s use of the internet (Qrius 2007). This implies that many children are on their own when it comes to identifying commercial content online and developing digital information skills. Codes of conduct such as the Dutch Advertising Code prescribe that the distinction between advertising and editorial content should always be made recognizable. However, when it comes to advergames, this distinction is not made explicit in any way, making it a very difficult task for children to discriminate between an advertisement and entertainment in these ‘seamless environments’ (Moore 2004).

Arguably, this is part of a marketing strategy. Eliminating the recognition or identification of the commercial message and of marketing practitioners’ intentions and tactics fits the strategy of ‘kidsmarketing’: tailoring messages, and designing products, packages, websites and advertisements, in a way that appeals to children’s ‘wants and needs’ and is identifiable to them, with ‘play and fun’ at its core (Cook 2010). Advergames appear to be the ultimate form of this ‘play and fun’ approach; a ‘masquerade’, where marketing practitioners hide behind a screen full of play and fun, reaching their own commercial goals in the meantime. More specifically, while advergames may be seen as an opportunity to play something fun for free, children remain unaware of the commercial intent and manipulation behind the (adver)game, which can be seen to mediate and even transform their play, their sense of self and their understanding of the world around them. Not only are they offered what they ‘want and need’ according to the marketing practitioner’s viewpoint; what they ‘want and need’ appears to be produced by this very same strategy.

Second, certain features are designed into advergames in order to reach corporate goals such as building brand awareness, stimulating consumption, and generating consumer data; these features will also be taken into account. A study on children and advergames shows that many of these advergames include features to encourage children towards repeat play and product purchase by offering such things as multiple game levels, public displays of high scores and game tips within product packages (Moore 2006). Another study indicated that there is a relationship between the capacity of the advergame to induce a state of flow, a mental state of subjective absorption within an activity, and a change in the buying behaviour of (in this specific case adult) players (Gurau 2008). Advergame research also shows how some of these games include product-related polls or quizzes, offering valuable information for market research on children’s habits and preferences (Moore 2006; Grimes 2008). They may also encourage players to register and share their gaming experience with friends or family, collecting personally identifiable information (Gurau 2008). Combined with an analysis of in-game behaviour and activities, marketers are able to construct detailed consumer profiles based on the aggregation of these behavioural and demographic data (Grimes 2008; Chung & Grimes 2005). Through this, advergames can be described as ‘electronic surveillance devices’, as they enable a new form of tracking children’s activities. In addition, studies on online communities for children and advertising discuss marketers using immersive advertising campaigns such as advergames, encouraging children to play with particular products, enabling them at a later point in time to identify the brand (Grimes & Shade 2005), and to create a ‘personal relationship’ with the product (Steeves 2006).
Such campaigns teach children to trust brands and consider them their friends, not only recommending products but becoming ‘role models for the child to emulate, in effect embedding the product right into a child’s identity’ (Steeves 2006).


Chung, G. & Grimes, S. (2005) ‘Data Mining the Kids: Surveillance and Market Research Strategies in Children’s Online Games’, Canadian Journal of Communication, vol. 30, no.4, pp. 527-548.

Lessons Learnt from the Past: Reflections on Working for Families Projects for Ethnic Minorities in Scotland

Nidhi Sharma and Shalini Kesar


Keeping in mind lessons learnt from the previous research presented at the last two ETHICOMP conferences, this paper reflects on the most recent project (Working for Families Project: 2009-2012), funded by European Social Funds and the Dundee Partnership. This is phase III of on-going research. Recognizing the success of the previous Working for Families Project (WfFP), reflected in phases I and II, a new WfFP was initiated that focuses on various issues in the context of reducing the employability gap currently existing in Scotland. The results of the WfFP reflected in this paper (phase III) focus on ethnic minorities in Dundee. The main goal of this project is to train people from ethnic minority groups to enhance their basic skills, including Information and Communications Technologies (ICT), literacy and numeracy.

In doing so, this paper draws on two groups of interviews. Group I included the same set of women who are currently working or enrolled in higher education after receiving training/services from the earlier WfFP (phases I and II), whereas Group II included a new set of people from ethnic minorities who are currently enrolled in this project. This was done for two reasons. Firstly, we would be better able to identify, and thus compare and contrast, their barriers from an employment point of view. Secondly, feedback from Group I will help us to further modify the existing training and service delivery to better suit the needs of Group II in trying to obtain employment. This is important, as funding for the following years depends on the success of this project. In other words, funding depends on the number of people from ethnic minorities who actually obtain employment or go into higher education; this is monitored by the government and similar funding authorities.

The Working for Families Project was initiated in the early 2000s by the Scottish Executive, with the goal of supporting vulnerable or disadvantaged parents towards, into or within employment by breaking down childcare and other barriers. It underpins the Scottish Government’s commitment to tackling child poverty. WfFP also aims to tackle additional employability barriers such as low skills, lack of confidence, transport, debts, substance misuse issues, and other care responsibilities. The target groups for the initiative were: lone parents; ethnic minorities; and parents with other stresses in the household which make it difficult to sustain employment (for example, disability, mental health, family break-up and drug and alcohol problems). The main services offered were:

  • Employability Support Team – deals directly with many clients and signposts them to an appropriate Link Worker or for specialist help
  • Link Workers – central to WfFP, with roles as recruiters and providers of guidance and advice, signposting clients to relevant employment, education and training opportunities
  • Money Advice Support – provides a range of services including benefit checks and better-off calculations
  • Access to Childcare – WfFP staff can assist clients in finding suitable childcare to enable access to work, education or training
  • Training & Education – WfFP provides a range of opportunities to improve skills and employability
  • ICT Training
  • Dundee College – provides a range of career-focused taster courses
  • Financial Assistance – many WfFP clients are eligible for assistance from one of the WfFP client funds
  • Childcare Subsidy Fund – provides assistance to clients who are starting work and need help
  • Barrier Free Fund – this can help clients with non-childcare-related expenses

Although the main objectives of Phase III (the current project) are the same as those of the previous WfFP, the main difference is that the tools and techniques are being modified in light of the findings of phases I and II of this research. Kolb’s cycle is used as a way to reflect on, and hence outline, lessons learnt from the different phases of this project. The table below summarizes the findings of this paper so far.

Engineering Ethics for an Integrated Online Teaching: What is Missing?

Montse Serra, Josep M. Basart and Eugènia Santamaria


Engineering graduates are —and will be— facing increasingly complex ethical and social issues in their work. Certainly, laws, professional regulations and codes of ethics can help when addressing this strong challenge, but the utility of these policies and resources depends on whether these future professionals understand where and how to take them into account. Accordingly, a well-founded education in professional ethics is required for future engineers. Nevertheless, in spite of the expectations and demands of an ever-changing society, the incorporation of courses on ethics into engineering curricula is often a concession instead of a common academic requirement.

Thus, any concerned educational approach must make the case for ethics, showing how engineers’ work can be carried out in an ethically and socially responsible way, because ethical issues are plainly inherent to the profession (Huff and Frey, 2005).

Designing an effective introduction of ethics into the academic curriculum is more difficult than teachers tend to imagine or admit, particularly where undergraduate students are concerned. From our point of view, several constraints and resistances are present that deserve special attention:

  • As our society becomes more and more dependent on technology, the role of the engineer is accentuated and his/her responsibilities amplified (Pritchard, 1998). Engineering instructors therefore find it difficult to know how to weave applied ethics into a curriculum already full of technical subjects, all of which are considered intrinsic to the course.
  • Spreading ethics across the curriculum asks for the contribution of both experts versed in different relevant areas of the technical or engineering sciences and experts from the humanities and social fields, in order to achieve the expected goals. This collaboration is not always welcomed by either of them and is never straightforward.
  • Some doubts and objections persist among teaching staff about whether ethics can be taught at all, let alone to grown-up people who are supposed to know the difference between right and wrong.
  • Under the influence of both their social environment and the one they find in technical schools themselves, engineering students often think that ethical contents are not really relevant to their own field of study (Fleischmann, 2006).
  • Finally, the frequent clash between students’ scepticism towards learning ethics and teachers’ conviction of its advisability calls for a constant weighing up and adaptation of which contents to teach, which methodology to apply, which educational and technological resources to use, and which teaching staff to involve.

To deliver a discipline such as engineering ethics within an online environment brings further constraints that are endemic to this context, and these special characteristics must be considered when developing any learning process. Teaching within an online environment (Rodríguez, Serra, Cabot and Guitart, 2006) is a social process which requires a specific setting, involving technological platforms and methodological tools, in order to facilitate online interaction: discussing ideas, practising behaviours, and developing attitudes and skills, so as, finally, to promote experiential and active learning (Sieber, 2005). In the case of engineering ethics these goals challenge educators to focus on real-world problems and practical solutions, requirements that are not easy to meet within an online learning context (Demiray and Sharma, 2009).

Within this framework what is needed, therefore, is an examination of the teaching methodology and its performance in practice when ethical subjects are considered. Our proposal here is to show how learning tools such as dialogue (Serra and Basart, 2010), moral reasoning and judgemental language work, and how they are reshaped in this new environment. This involves analysing the essential requirements of these communication tools (i.e., genuine listening, attention in a virtual context, non-conditioned thinking, and an open mind). Additionally, solving moral conflicts requires appropriate strategies, so a heuristic analysis will be considered, taking into account the above-mentioned learning tools. Finally, as an integrating element, we show how interaction develops along the learning process, inside a social network, by means of the previous tools.

It is important to emphasize that, thanks to these communication tools, the network communities created within an online context learn as a group, constructing knowledge collectively and contributing the tacit knowledge (Bohm, 1996) of the community in which their members participate.


Bohm, D. On Dialogue. Lee Nichol (ed.). Routledge, London, 1996.

Demiray, U. and Sharma R.C. “Ethical Practices and Implications in Distance Learning”. Information Science Reference. Hershey, New York, 2009.

Fleischmann, S.T. Teaching Ethics: More Than an Honor Code. Science and Engineering Ethics, 12, 381–389, 2006.

Huff, C. and Frey, W. Moral Pedagogy and Practical Ethics. Science and Engineering Ethics, 11, 389–408, 2005.

Pritchard, M. S. Professional responsibility: Focusing on the Exemplary. Science and Engineering Ethics, 4, 215–233, 1998.

Rodríguez, M.E., Serra M., Cabot J. and Guitart, I. “Evolution of the Teachers’ Roles and Figures in E-learning Environments”. The 6th IEEE International Conference on Advanced Learning Technologies (ICALT 2006). Proceedings of the 6th IEEE International Conference on Advanced Learning Technologies, IEEE Computer Society Press, 512–514. Kerkrade, The Netherlands, 2006.

Serra M. and Basart J.M. “A dialogical approach when learning engineering ethics in a virtual education frame”. Proceedings of Ethicomp 2010 – The “backwards, forwards and sideways” changes of ICT, 483–490. Universitat Rovira i Virgili, Tarragona, Spain, 2010.

Sieber, J.E. Misconceptions and Realities about Teaching Online. Science and Engineering Ethics, 11, 329–340, 2005.


Dr. Toni Samek and Dr. Ali Shiri


Contributions to information ethics occur between disciplines, across different disciplines (e.g., computer science, gender studies, law, business), and even beyond disciplines. And because information work is often political it is important for educators to examine, explore, and teach a range of social responsibility and ethical implications as reflected in an increasingly intense information society. Looking through the specific lens of the North American library and information studies landscape, we can see that teaching and scholarship are heavily weighted to techno-managerial curricular design and research. However, broadly in society, social responsibility, social justice, and global information justice movements blend people and concerns for the human condition into theories and practices of social computing applications and environments. Our contribution is a knowledge mapping of social responsibility in an information intensive society and the final product that we hope to share with ETHICOMP is a taxonomy.

Dr. Samek’s ongoing immersion and scholarship in human rights forms the basis for our taxonomic content. In her scholarship, she studies evidence of voices and other human traces that reflect contemporary local, national and transnational calls to action on conflicts generated by failures to acknowledge human rights, by struggles for recognition and representation, by social exclusion and by library and related cultural institutional roles in these conflicts. Through content analysis of human rights literature (including workbooks) she collates terms (e.g. protest, human security, survival) that she then tests out for matches in global library and information worker advocacy and activism. For example, for human rights terminology such as “revitalization” and “human security” she points to such activities as the Joint UNESCO, CoE and IFLA/FAIFE Kosova Library Mission. Dr. Shiri’s intellectual contribution draws on his sophisticated research in the development and evaluation of knowledge organization systems such as thesauri and taxonomies. Using facet and subject analyses, his work shapes the foundation for the design of the underlying framework and knowledge structure of our taxonomy.

Some knowledge organization systems have been developed for the analysis and documentation of human rights literature, such as Human Rights Thesaurus and Human Rights Documentation Classification. Our taxonomy is different from these kinds of tools in that it addresses and encompasses the information-focused themes and terms evidenced in global social responsibility initiatives and emergent social computing applications. Herein, our knowledge mapping aims to provide a deeper, more comprehensive, and intercultural snapshot of social media and social computing technologies within these broader contexts.

We propose ten high level categories (e.g., communities, social computing applications, activities and operations) that reflect prevalent contemporary aspects of social responsibility in information society. We also assign each of these ten categories a specific set of sub-facets and terms that reflect concrete actions – both physical and digital – and perhaps most interestingly in the emergent realm of digital human connections and exchanges. And we situate this work in the trans-disciplinary communities of scholarship with a common interest in information ethics, social responsibility and computer ethics.
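The structure described above (top-level categories, each carrying sub-facets that group concrete terms) can be sketched as a simple nested mapping. The category, facet and term names below are purely illustrative placeholders, not the authors’ actual taxonomy:

```python
# Illustrative sketch of a faceted taxonomy: category -> facet -> terms.
# All names here are hypothetical stand-ins for the authors' real categories.
taxonomy = {
    "communities": {
        "physical": ["library associations", "advocacy groups"],
        "digital": ["online forums", "social networks"],
    },
    "social computing applications": {
        "platforms": ["wikis", "microblogging"],
    },
    "activities and operations": {
        "actions": ["protest", "petitioning", "documentation"],
    },
    # ...the remaining categories would follow the same pattern.
}

def find_term(taxonomy, term):
    """Return (category, facet) pairs under which a term is filed."""
    return [
        (category, facet)
        for category, facets in taxonomy.items()
        for facet, terms in facets.items()
        if term in terms
    ]
```

A lookup such as `find_term(taxonomy, "protest")` then supports exactly the kind of organization, sharing and searching of terms that the taxonomy is intended for.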

We hope that by introducing our taxonomy to the ETHICOMP community we can receive direct and diverse feedback to help us move forward in the development of a more refined and inclusive iteration that can be used for the organization, sharing and searching of physical and digital information by multiple stakeholders in society. Below is a version of our first-stage taxonomy (abridged here for reasons of word count).
[Table 1]

What do we Take? What do we Keep? What do we Tell? Ethical Concerns in the Design of Inclusive Socially Connected Technology for Children

Janet C Read and Maija Fredrikson


Designing great computer systems requires attention to many things. In this paper, the focus is on the design of a mobile technology for children that was aimed at providing an inclusive approach to music making that would enable children who would perhaps be otherwise excluded, to feel more attached to the others around them and to experience feelings of self worth. The two stages of design being considered for this paper are the involvement of children during early design work, and the design of security and alert systems in the interactive product. Both of these stages raised ethical dilemmas that the project team had to find solutions for.

Including children as participants in the design of their own technologies takes its inspiration from the early work on participatory design (Schuler and Namioka 1993) as well as from more recent work on children as design informants (Read, Gregory et al. 2002), (Scaife, Rogers et al. 1997; Druin 1999). In a typical session of this kind, children are given some information about the problem being designed for and are then given activities that collectively gather ideas for features, for the look, and for the fun aspects of an eventual product (Theng, Nasir et al. 2000; Mazzone, Read et al. 2008). Several commentators have considered what the value of these design sessions is by examining the value to the children, the value to the development team and the value to the adult participants (Mazzone, Read et al. 2008). The ethical problems associated with this type of activity mainly centre around the extent to which the children understand their participation. It is highly possible that children may not fully understand what their ideas are being used for, what the overall project is about or the extent to which their work will be used at all.

As a result of carrying out these sorts of activities within the UMSIC project, where the participatory activities were carried out both in the UK and in Finland, we have developed a protocol for ‘Honest Research’ with children. This protocol demands that children are kept fully in the research loop by being given clear information at the beginning of a project that outlines why they are participating, by being given specific appropriate feedback from each individual design session that outlines what was taken from it, and by being able to see, and critique all outputs from the design sessions whether these be academic papers or interactive products. In carrying out this protocol the research team are seen to be more cautious about what they do, more attentive to detail in regards of what they say about the design sessions, and more respectful of the children’s views. In the UMSIC project, where possible, children have been shown the eventual product that was developed with their help.

Our second problem space in designing connected technologies for children is associated with the use of passwords and security systems, and with making what should be easy-to-use systems both secure and understandable. In many instances, users of computer technology are unaware that they are connected to other machines; they are also often unaware of what data is being taken from one place to another. It is clear in our work that children should be kept informed about whether or not they are connected to each other, about where their data may go, and about the possible dangers associated with their connectedness. It is also clear to us, however, that most children are rather unconcerned with security (Read and Beale 2009) and want it to be invisible, whereas the parents and guardians of these children, in determining what technologies their children may be using, want to see security systems and want to see them at work in order to ‘trust’ the product (Gefen, Karahanna et al. 2003). The more security that is put into the product, the more unusable, and unattractive, it might become to the children. This raises an ethical dilemma, as the design team want to design for both groups but are clearly most concerned with making the products usable for children.

In our work (Read and Beale 2009) we have designed a security system (Possibilities not Perils) that is in two layers with one layer being the concern of the children and the other being the concern of the adults. Children are shown icons that identify when they are connected to other children and are clearly told where their data is heading. Adults on the other hand have adult style control systems that are shown to be robust and sturdy. It could be argued that it is the duty of a team making connected software for children to ‘educate’ children about the perils of being online and being in a shared data space. The view for our project is that this is not appropriate, the system needs to deal with the perils and the children need to feel free to use the software. Security, we feel, is a system problem that needs to be shown to adults but not to children. The only use of passwords for children, in the child-facing product, is for user profiles to be loaded that will give a better user experience.
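The two-layer arrangement described above can be illustrated with a minimal sketch; this is our reading of the design, not the UMSIC code, and every name in it is hypothetical. The same underlying connection state is rendered once as a simple child-facing status line and once as a full adult-facing control view:

```python
# Hypothetical sketch of a two-layer security view: one shared state,
# two audiences. Field names are illustrative, not taken from UMSIC.

def child_view(state):
    # Children see only whether they are connected and who receives their data.
    status = "connected" if state["connected"] else "not connected"
    peers = ", ".join(state["peers"]) or "nobody"
    return f"{status} - sharing with: {peers}"

def adult_view(state):
    # Adults see the full control surface, so they can inspect it and trust it.
    return {
        "connected": state["connected"],
        "peers": state["peers"],
        "encryption": state["encryption"],
        "audit_log_enabled": state["audit_log"],
    }
```

In this reading, a child-facing password would sit alongside these views purely as a way of loading a user profile, not as a security barrier.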


Druin, A. (1999). Cooperative inquiry: Developing new technologies for children with children. CHI99, ACM Press.

Gefen, D., E. Karahanna, et al. (2003). “Trust and TAM in online shopping: An Integrated Model.” Management Information Systems Quarterly.

Mazzone, E., J. C. Read, et al. (2008). Design with and for disaffected teenagers. Nordichi 2008, Lund, Sweden ACM Press.

Read, J. C. and R. Beale (2009). Under my pillow: designing security for children’s special things. DCS – HCI 2009, Cambridge, UK, ACM Press.

Read, J. C., P. Gregory, et al. (2002). An Investigation of Participatory Design with Children – Informant, Balanced and Facilitated Design. Interaction Design and Children, Eindhoven, Shaker Publishing.

Scaife, M., Y. Rogers, et al. (1997). Designing For or Designing With? Informant Design for Interactive Learning Environments. CHI ’97, Atlanta, ACM Press.

Schuler, D. and A. Namioka, Eds. (1993). Participatory Design: Principles and Practices. Hillsdale, NJ, Lawrence Erlbaum.

Theng, Y. L., N. M. Nasir, et al. (2000). Children as Design Partners and Testers for a Children’s Digital Library. ECDL2000, Springer Verlag.

Ethics and Emerging Technologies: Practitioners’ Perspectives

Mary Prior and Simon Rogerson



In his famous 1985 paper James Moor proposed that the novelty of Computer Technology led to the existence of ‘policy vacuums’:

‘Computers provide us with new capabilities and these in turn give us new choices for action. Often, either no policies for conduct in these situations exist or existing policies seem inadequate.’ (Moor, 1985, p. 266)

To address this problem, a research project currently being undertaken within the European Commission 7th Framework Programme is focussed on identifying emerging Information and Communication Technologies (ICTs) and the ethical issues to which they may give rise, in order to recommend governance structures and policies aimed at addressing them before or as they arise (ETICA).

To complement the academic/research focus of ETICA, a project is being undertaken with ICT practitioners to identify their perceptions of emerging technologies, the ethical issues to which they may give rise and how they may be addressed. This paper will report the outcomes of this project, including a comparison between the perceptions of academics/researchers and of practitioners.

Research methods

The work is being undertaken on behalf of a professional body, with the co-operation of its more experienced members. Two research methods are being employed; firstly, a survey (questionnaire) (Bryman, 2008) and secondly, focus groups (Beardsworth & Bryman, 2006).

The ETICA project used a survey in its initial stages, aimed at researchers and helping to identify:

  • the fields within which current ICT research is being conducted;
  • application areas, expected use and the benefits of these technologies;
  • ethical, social and legal issues that were foreseen, how they were identified, how they had been addressed and how effective were any measures taken to address them;
  • the technologies likely to be used in the future, the ethical issues to which they might give rise and how they might best be addressed.

This survey was adapted for use with ICT practitioners. At the time of writing, responses have been received and analysed to identify fruitful areas for more in-depth discussion within the focus groups. The latter will comprise senior, experienced practitioners and will take place during the Spring of 2011.

Survey results

Respondents are working on a wide range of technologies, in a variety of industries. The ETICA project had identified 11 fields (e.g. affective/emotional computing; ambient intelligence; artificial intelligence; bioelectronics). Nearly half of respondents work in the field of ‘Cloud Computing’ with the next highest proportion being ‘Future Internet’. Altogether 9 of the 11 identified fields are represented.

Only half of respondents say that ‘possible ethical, social or legal problems’ were foreseen arising from the projects they were working on. Given the fields involved, and the expected benefits (many of which involved greater efficiency/cost savings and improved data management) this is an interesting finding that requires further investigation. The majority (nearly 80%) did not consider gender.

Of the possible ethical, social or legal problems identified, many were related to data protection, privacy and security. However others such as ‘reduced staff requirements’ and ‘intrusion into personal matters’ were also mentioned. Among the measures taken to address the issues, many were ‘technical’, although ‘including a work package on legal, ethical and social issues’, ‘reconsideration of the objective of the project’ and ‘setting up an ethics committee/review board’ were among steps taken, too. In one case, ‘cancellation of a part (or more) of the project’ was cited.

Among the future technologies identified, Cloud Computing figured prominently; hardly surprising given that many respondents were working in this field. Others mentioned were mobile technologies, with portable devices becoming more prevalent; internet-based applications and the integration of systems, for example more integrated household management and control. Asked whether they could identify ethical issues to which these are likely to give rise, respondents most frequently cited the security and privacy of data. In addition they mentioned:

  • the boundary between security/counter terrorism and civil liberties;
  • computer hackers will increase by use of the net;
  • retention of data on a server not owned by your organisation;
  • there is a tendency for organisations to assume there are no boundaries to what they can do; there is then an erosion of what is currently acceptable; this always seems to be for the benefit of the organisation and not of the individual;
  • the issues of replacing humans with machines;
  • how virtual reality is used to create situations in relation to gender, religion etc;
  • conflict of use when the same device is used for corporate as well as personal (i.e. private) computing.

Respondents were asked if they could suggest how any ethical issues arising from emerging ICTs should be addressed. A few simply replied ‘no’. Others cited ‘personal responsibility’, the role of education and a technical approach via ‘tightly secured cloud computing’. Regulation was mentioned, as was the setting up of a committee similar to that used to consider ethical issues related to embryology in the UK. The development process was also mentioned, to include ‘multi-stakeholder dialogues’, formal risk assessments and ethics as an integral part. General public forums and focus groups were suggested by one respondent.

Further work

Half of the survey respondents have agreed to be contacted for more in-depth discussion of the issues raised. In particular, the researchers wish, firstly, to pursue the means by which ethical, social or legal issues have been identified and addressed in projects the study participants have worked on; secondly, to explore the range of future technologies that have been identified and the potential ethical, social or legal issues to which they may give rise; and finally, to discuss the means by which participants suggest these issues should be addressed.

Having summarised the findings from this study with experienced industry practitioners, the paper will compare them with the findings from the more academic/research-oriented participants in the ETICA project. Concluding observations will include suggestions/recommendations for further work in this area.


Beardsworth, Alan & Bryman, Alan (2006), Focus Group Research. Open University Press.

Bryman, A. 2008. Social Research Methods. 3rd ed. Oxford University Press.

ETICA Project home page:

Moor, James (1985), What is Computer Ethics? Metaphilosophy, vol. 16 no. 4, 266-75.

Tracing ‘unconventional variables’ in e-government services take up: the role of religion

Nancy Pouloudi, Antoine Harfouche and Stephane Bourliataux-Lajoinie


As the number and diversity of available e-government services grows worldwide, so does the research on their current state and the success factors leading to their adoption. Much of this research employs technology adoption and diffusion models, showing the importance of factors such as trust, perceived usefulness, perceived e-government value, perceived compatibility between the values of citizens and governments, and ease of use (e.g., Belanger and Carter, 2006). At the same time, qualitative studies have shown, in context, the challenges in the implementation of e-government services, e.g., as citizens and state employees ‘work around’ the systems (Azad & King, 2009), or as political parties acting as ‘mega actors’ negotiate the role of IT in state modernization (Prasopoulou, 2009). These studies reveal a complex picture of service adoption and bring to the fore the specificities of each national or application context.

Against this background, in a recent workshop on ‘IT and Culture’ at Tours, France, the authors of this paper had the opportunity to discuss and contrast their experience of the adoption of, and reactions to, new e-government services in three countries of the Mediterranean region, namely France, Greece and Lebanon. These countries are quite different in terms of e-government adoption and the maturity of available e-services. However, the most intriguing aspect to emerge from such a comparison is that ‘unconventional’ variables, that is, aspects rarely acknowledged in mainstream information systems research, may come into play and substantially influence e-government services adoption.

In this paper we will argue that religion may be one such important institution, whose significance can be more vividly understood and appreciated by considering different cultural contexts. In this respect, Lebanon, Greece and France provide an interesting set of countries to consider; despite their geographical proximity, the importance and interference of religion is substantially different, and, once considered in more detail reveals a complexity well worth studying further.

Of the three countries, France has separated religion from the State since the days of the French Revolution, and religion is therefore not expected to play a role, at least openly, in contemporary political decisions such as the adoption of e-government services.

Conversely, the role of religion is very prominent in Lebanon, where it is tightly bound to state governance. Lebanon has a complex political and public system in which a careful balance must be maintained, in all aspects of political life, among the 18 ethnic and religious communities. The seats in parliament, in government, and in the civil administration are therefore allocated proportionally between Christians and Muslims. The Christian president, the Sunni prime minister, and the Shiite speaker of parliament all rule with almost equal power, although in different capacities. As a result of this confessional oligarchy, Lebanon lives in perpetual political and administrative paralysis. The public administration is seen as a place where confessional parties take care of their own interests, seriously undermining institutional credibility (Dagher 2002). According to several reports, Lebanese citizens hold a negative attitude towards the Lebanese administration. They perceive the public administration as a den of corruption that absorbs public money without providing quality services in return (Antoun 2009). The adoption of public e-services has therefore not been independent of, but rather contingent on, this political environment. Lack of trust in the safeguarding of personally identifiable information, lack of privacy protections, and fear of government control have been the main inhibitors.

In Greece the religious picture is much more homogeneous, with over 95% of the Greek population belonging to, although not necessarily practicing, the ‘prevalent religion’, according to the constitution, of the Eastern Orthodox Church of Christ (commonly known as Greek Orthodox). The Church is an important institution, on occasion becoming involved in political matters. This role is rooted in the history of modern Greece: the Greek identity was preserved alongside the Christian identity under Ottoman rule and was instrumental in driving the revolution for liberty and the establishment of the modern Greek State in the early 19th century. The Church therefore argues that religious identity should be formally recognized as part of citizen identity: in 2000, when the Greek state revised the identifiers used on identity cards, the Church reacted very strongly to the removal of religion as an identifier. Citizen signatures were collected after Sunday service, pressing for a referendum on this matter. Although this never took place, it was clear that the Church played an active role in shaping opinion about matters related to government services. At present, the Church opposes the introduction of an electronic citizen card by the Greek state. As a result, several citizens stated on the relevant online deliberation that they will not accept this card, which ‘brutally insults [their] religious consciousness’. Set against a background of general mistrust towards the government on the one hand and skepticism towards all institutions on the other, the Church occasionally strives to accentuate its importance by assuming a protagonist role in State affairs. Even though such initiatives are heavily criticized in society, they are nonetheless influential for part of the population (typically those least ready to participate in the e-society), and therefore religion can become an ‘unexpected’ inhibitor of e-government services adoption.

This initial comparison of the role of religion on e-government adoption in the three countries illustrates that religion may be an important factor to consider when designing e-government services. Yet, our survey of the literature shows limited attention to date to the role of religion for e-government adoption. Perhaps unsurprisingly, studies explicitly acknowledging and naming religion as a key cultural factor in the context of e-government come from countries where religion is central to state affairs, as is the case in many Arab countries (e.g., Alomari et al., 2010, Al-Shehry et al., 2006).

The aim of this paper is to consider the role of religion in more depth and, drawing on the experiences of the three countries, to discuss the methodological challenges related to the study of religion in e-government. We hope that this discussion will draw attention to this ‘unconventional variable’, which is absent from much of the mainstream research on e-government services adoption but may in fact be significant in certain cultural contexts and therefore needs to be studied and understood more thoroughly.


Alomari, M.K., Sandhu, K. and Woods, P. (2010) Measuring Social Factors in E-government Adoption in the Hashemite Kingdom of Jordan. International Journal of Digital Society, 1, 2.

Antoun, Randa, (2009) “Towards a National Anti- Corruption Strategy”, UNDP & LTA, Beirut.

Al-Shehry, A., Rogerson, S., Fairweather, N.B. and Prior, M. (2006) The Motivations for Change Towards E-Government Adoption: Case Studies from Saudi Arabia. eGovernment Workshop ’06 (eGOV06), September 11 2006, Brunel University, UK.

Azad, B. and King, N. (2009) Institutional Analysis of Persistent Computer Workarounds. Proceedings of the Academy of Management, OCIS Division, Chicago, United States.

Belanger, F. and Carter, L. (2009) The impact of the digital divide on e-government use. Communications of the ACM 52(4)132-135.

Dagher, A. (2002) L’administration libanaise après 1990, Colloque Le Modele de l’Etat developpemental et les defis pour le Liban, 15-16 fev. Rotana-Gefinor, Beyrouth, Liban.

Mouzelis N (1978) Modern Greece: Facets of Underdevelopment. Holmes & Meier, New York.

Prasopoulou, E. (2009) The interplay of ICT innovation with state administrative tradition: evidence from the Greek Taxation Information System (TAXIS).

Unpublished PhD Thesis. Department of Management Science and Technology, Athens University of Economics and Business, Greece.