Prof Jonathan Parker, School of Health and Social Care, reflects on covert research methods and their use in the social sciences…
A wide range of important studies have used covert methods, collecting data from people who did not know they were being studied at the time, who would not have given permission or, had permission been sought, whose data may have been dubious or biased. Researchers justify their actions by stating the need to gain access to inaccessible groups, to illuminate important social issues, and to uncover the unpalatable. Famous examples include, of course, Rosenhan’s[1] study of the ways in which mental illness may be attributed by location and situation (http://www.sciencemag.org/content/179/4070/250.short), Holdaway’s[2] insider research into the police, and Hunter S. Thompson’s[3] research into Hell’s Angels communities.
Covert methods have fallen out of vogue and are often difficult to get through postgraduate committees or, indeed, university and other research ethics committees, which increasingly promote a risk-averse and pedestrian approach to scrutiny. The reasons for this include the important focus on informed consent within disciplinary ethical codes and academic and professional ethics committees, and a seemingly natural desire to excise duplicity and dishonesty from data collection in research. However, there are arguments that suggest covert methods may not always be dishonest or duplicitous and, indeed, that not to use them in certain circumstances may be, unwittingly, unethical (see Parker et al., forthcoming[4]).
Undercover reporting in investigative journalism, for example relating to patient treatment in NHS hospitals and, more recently, non-NHS hospitals, whilst not research, illuminates many hidden and dubious practices in current society, representing some of the social good, and indeed ‘impact’, that can be drawn from such methods (http://www.medicalnewstoday.com/releases/226545.php).
Where do our research ideas come from in the social sciences? Often from lectures and dialogue with students within these, from supervision, and from observations we make in everyday life. That we collect initial soundings and thoughts from these settings and situations without ethical scrutiny or informed consent is not questioned: it would be ridiculous to assume we needed informed consent to undertake our daily practices!
As we strive for research excellence and relevance here at BU, we should grapple enthusiastically with the issues and challenges involved in covert research and back it wholeheartedly where its importance is clear. A flaccid response can lose the excitement and challenge involved in the production of new knowledge from in-depth engagement with individuals, groups and societies. URECs need to highlight legal challenges, of course. Current mental capacity legislation (which my own research for the Social Care Institute for Excellence and the Department of Health suggests transposes ethical scrutiny drawn from moves to protect the public from dangerous medical experimentation; Parker et al. 2010[6]) demands ethical scrutiny by appropriate committees, but, used well, it can promote and support ethically driven knowledge creation and the exploration of hidden issues that require methods that cannot and should not involve informed consent. To avoid or proscribe such research methods in all cases leads us down a safe but uninteresting and, potentially, unethical track.
[1] Rosenhan, D.L. (1973) On being sane in insane places, Science, 179, 4070, 250-258.
[2] Holdaway, S. (1983) Inside the British Police: A force at work, Oxford: Blackwell.
[3] Thompson, H.S. (2003/1965) Hell’s Angels, London: Penguin.
[4] Parker, J., Penhale, B. and Stanley, D. (forthcoming) Research ethics review: social care and social science research and the Mental Capacity Act 2005, Ethics and Social Welfare.
[5] Fielding, N. (1982) Observational research on the National Front, in M. Bulmer (ed.) Social Research Ethics: An examination of the merits of covert participant observation, London: Macmillan.
[6] Parker, J., Penhale, B. and Stanley, D. (2010) Problem or safeguard? Research ethics review in social care research and the Mental Capacity Act 2005. Social Care and Neurodisability 1, 2, 22-32.
An interesting and provocative issue. Social utilitarianism and consequentialism are interesting defences to this dilemma but then who makes the judgement about the social utility and validity of an ‘undercover’ project?
Some supervision of the process by some ‘trusted’ and rigorous third party, even if consent is not sought or given for the reasons you suggest, should be a must in such cases.
This could, however, lead to elitism…
On the other hand, the recent undercover exposé by Panorama (30/5/2011) of the abuse in a private ‘hospital’ for people with learning difficulties, if it results in prosecution of the abusers and policy change that leads to greater protection from abuse, could justify such actions.
However, what if the undercover filming had not uncovered any abuse and yet these people’s privacy was invaded? Do the ends always, or ever, justify the means?
You raise some interesting points that I think are important. Certainly, some degree of supervision is required and, in the blog, I refer to the continued role of research ethics committees and universities. I’m not as sure about your comment relating to elitism. Academics should, by their very role, be experts in their field, and in respect of scholarly and research processes. If this is construed as elitism, so be it. I think a consequentialist approach can be used to justify undercover approaches when seeking to effect a change such as the one you describe, and there will always be arguments about the use (or abuse) of power in these cases. This is not a reason to avoid such research. However, in respect of research ethics I think we have to reckon with the ubiquitous imposition of scrutiny from well-meaning but misguidedly applied medical ethics in respect of humanities and social science research.
In promoting knowledge creation, we must create arguments to justify our actions and to defend the ‘knowledges’ we are constructing, ‘social utility’ and ‘validity’ being aspects of this. I believe academics in the social sciences and humanities should not be constrained but freed to pursue their role in contributing to knowledge and understanding: sometimes this demands using covert means to gather their data.
Hi, I agree with the views you express regarding the need to be free to pursue ‘knowledge generation’, freed from the narrow prescriptions of medical ethicists. I was just raising a question about pure, uncritical utilitarianism.
The elitism I referred to in my original post is not about academia per se (although some academics and academic institutions can be guilty of this), but about the need for academic activity to be subject to some public scrutiny and accountability, so that academics do not pursue their own (sometimes self-regarding and self-absorbed) agendas. The work we do needs to have real ‘social utility’, however that is defined; otherwise we will not get the public support and financial support to continue.
However, I am also critical of pure instrumental knowledge and knowledge generation, but that is another kettle of kippers!
We must continue to walk that moral and academic tightrope.
Ultimately, I am not sure where I stand on covert research, but I do know that it makes me feel uneasy (as did the content of the recent Panorama programme identified by Chris, above). (Is that research per se? No more or less than Goffman’s Asylums, perhaps, but for a different audience, and one cannot forget ratings, I suppose.) However, I am not sure where I stand on the issue of researching ‘on’ people, rather than ‘with’ them, either, but again, I do know that it makes me feel uneasy. I wonder if one solution (whenever the research question calls for it and is appropriate) might be to swing the focus onto the individual doing the research, that is, sometimes to adopt an autoethnographic method. This, of course, is the detailed study of self, but in context and often in relation to others, rather than the study of others. Perhaps this is similarly covert, and may run the risk of another’s unwitting participation in the research even if a strong focus on self is maintained?
I guess I think the autoethnographic approach used in this way would be much akin to political/social/philosophical comment. Quite justified, but not necessarily built on research per se, and to get to the deep experience one needs to be immersed in context. Of course, undercover journalism is not, in and of itself, research, but there are synergies.
Good to open the debate about research on, with, for, within, between etc. There are many issues here. However, I firmly believe that covert methods can be extremely ethical in illuminating the social world and unpalatable views and behaviours, and can increase and enhance our understanding, which surely must be a good thing. Where so-called ‘informed consent’ falls down is in respect of the power differentials, the potential for non-explicitly stated researcher aims, and the need to satisfy third-party liabilities, which can skew the research away from illuminating the ‘realities’ of social life and experience. Having said this, of course, gaining full participation in an open and honest way is often best and should, wherever it can be, be pursued.
I can think of plenty of cases when covert research would justify the means. Researching insular and oppressive cults, for example. From my experience I am aware that such groups would not allow research to be undertaken without being able to control it, setting up ‘shop fronts’ to promote themselves or refute suspicions and thereby sabotaging any ‘unsanitised’ findings. This cannot be in the best interest of the public. So I believe that, along with ethically approved research, covert research carries a gritty moral responsibility that can enable the submerged voices of vulnerable people to be heard and the otherwise hidden circumstances of their lives to be known. This is an issue that the very cautious (and often pusillanimous) research ethics procedures currently in place are failing to address. In response to Fran, I am not sure that autoethnography can enable us to understand these phenomena beyond the continuous self-referential loop of the researcher, which is an unenlightening and frustrating issue for old-fashioned ethnographers like me, as Scheper-Hughes herself points out. Logs and memos do form an important part of my data, but only a part of it. The main body of findings (textualisation apart) lies in the faithful recording of words and actions and the in-depth investigation of the overall context of the phenomena. All that said, in my (ethically approved) work with psychiatric in-patients, for instance, ‘becoming part of the furniture’, to use that old cliché, did enable me to observe a lot of things that were not meant to be overheard or seen by outsiders. But that’s another story.
What is research and when does it start?
Following Jonathan Parker’s thought-provoking blog on covert research and the interesting discussion following it, I would like to add another issue. This might seem a simple question, but considering the notion of covert research I started to think ‘What is research?’ and, more importantly in this context, ‘When does it start?’
If research is a project that starts when the funding has been allocated, research ethics approval received, and the decisions taken on the overall research question and the most appropriate methodology and method(s) to address it, one might have a particular view on covert research. However, if one regards research as an on-going process that starts somewhere with a vague idea, which the researcher works on for months trying to crystallise the problem in an attempt to pinpoint a researchable question, then one’s view on covert research might be different from the one above. An example of the latter would be a health, social science or media researcher being involved in an everyday activity, and at some point deciding to make that activity the focus of his or her study. For example, Kirkpatrick (2009: 27), in a critical review of Taylor’s (2006) book exploring online game culture, reminds the reader that Taylor started as a player of a particular online game, hence “… disclosure didn’t present itself as an issue; but then, presumably, it actually became one once the decision was taken to use her experiences as the basis for a research project.”
A second example I would like to introduce, to highlight how hard it is to set the boundaries of research in the human sciences, is my research notebook. I have kept a research notebook for the past two decades. In this notebook I scribble down a wide range of snippets of information, ranging from notes of research meetings, newspaper cuttings on birth, pregnancy and maternity care, throw-away comments made by friends about their pregnancy or childbirth, and comments made by midwifery friends and colleagues about their working environment or health policy, to conversations overheard in the train. Some of these notes / ideas / snippets of information will end up in publications (one of the ‘end products’ of research). For example, last year we published a paper under the title ‘Lessons learnt from undertaking maternity care research in developing countries’ (van Teijlingen et al. 2010). In this paper we discuss the importance of being sensitive to local culture and the concept of cultural misunderstanding. The following quote from this paper was used to illustrate the topic:
“The first example relates to money, something that always has the potential to cause problems. The UK research team was funded by a London-based Buddhist charity, Green Tara Trust, to help design, implement and evaluate an intervention aimed at improving the uptake of antenatal care in a number of rural villages in Kathmandu Valley. This charity requires an account of how its money is spent, for reasons of transparency to its donors as well as under British charity legislation, which requires accountability for funds used. Our Nepalese collaborators became upset when the UK research team asked for a detailed account of their spending on the research project. A system of financial checks and balances is part of everyday life in the developed world, in some developing countries asking people to justify their expenditure and produce receipts gives out the message that you do not trust someone, and in a society where personal contacts and trust are interwoven, this can damage research collaborations.” (van Teijlingen et al. 2010: 13).
This illustration is based on the field notes in my notebook of a difficult meeting between the UK research team and our Nepalese collaborators. When I wrote it down it was background information to help me understand and remember the research process in general, but one could argue that it only became ‘research’ when I went through my notebooks years later looking for good illustrations of cultural misunderstanding and decided to use it as an example. Did I conduct covert research ‘retrospectively’?
References
Kirkpatrick, G. (2009) Technology: Taylor’s Play Between Words. In: Doing Social Science: Evidence and methods in empirical research, Devine, F. & Heath, S. (eds.), Houndsmill, Basingstoke: Palgrave Macmillan, pp.13-32.
Taylor, T.L. (2006) Play Between Worlds: Exploring Online Game Culture, Cambridge, MA: MIT Press.
van Teijlingen, E., Simkhada, P., Ireland, J.C.M. (2010) Lessons learnt from undertaking maternity-care research in developing countries. Evidence Based Midwifery 8(1): 12-16.