Simulated consultations: a sociolinguistic perspective
Atkins et al. BMC Medical Education
Sarah Atkins 0
Celia Roberts 2
Kamila Hawthorne 1
Trisha Greenhalgh 3
0 Centre for Research in Applied Linguistics, Trent Building, University of Nottingham, Nottingham NG7 2RD, UK
1 Duke of Kent Building, Faculty of Health and Medical Sciences, University of Surrey, Surrey GU2 7XH, UK
2 Department of Education & Professional Studies, King's College London, Franklin-Wilkins Building, Waterloo Road, London SE1 9NH, UK
3 Nuffield Department of Primary Care Health Sciences, University of Oxford, Oxford OX2 6GG, UK
Background: Assessment of consulting skills using simulated patients is widespread in medical education. Most research into such assessment is sited in a statistical paradigm that focuses on the psychometric properties or replicability of such tests. Equally important, but less researched, is the question of how far consultations with simulated patients reflect real clinical encounters - for which sociolinguistics, defined as the study of language in its socio-cultural context, provides a helpful analytic lens.

Discussion: In this debate article, we draw on a detailed empirical study of assessed role-plays, involving sociolinguistic analysis of talk in OSCE interactions. We consider critically the evidence for the simulated consultation (a) as a proxy for the real; (b) as performance; (c) as a context for assessing talk; and (d) as potentially disadvantaging candidates trained overseas. Talk is always a performance in context, especially in professional situations (such as the consultation) and institutional ones (the assessment of professional skills and competence). Candidates who can handle the social and linguistic complexities of the artificial context of assessed role-plays score highly - yet what is being assessed is not real professional communication, but the ability to voice a credible appearance of such communication.

Summary: Fidelity may not be the primary objective of simulation for medical training, where it enables the practising of skills. However, the linguistic problems and differences that arise from interacting in artificial settings are of considerable importance in assessment, where we must be sure that the exam construct adequately embodies the skills expected for real-life practice. The reproducibility of assessed simulations should not be confused with their validity. Sociolinguistic analysis of simulations in various professional contexts has identified evidence for the gap between real interactions and assessed role-plays. The contextual conditions of the simulated consultation both expect and reward a particular interactional style. Whilst simulation undoubtedly has a place in formative learning for professional communication, the simulated consultation may distort assessment of professional communication. These sociolinguistic findings contribute to the on-going critique of simulations in high-stakes assessments and indicate that further research, which steps outside psychometric approaches, is necessary.
Simulated consultations; OSCE; Communication skills; Interpersonal skills; Assessment; Diversity; Sociolinguistics
This paper addresses issues arising from the use of
simulated patients in assessments of clinical consulting, in
particular the linguistic difficulties of interacting in such
settings and how far they reflect a practitioner’s real
consulting abilities. Simulated patients are lay people or
professional actors trained to portray a patient with a
particular condition in a standardised way. As well as
their use in practice and training for medical
practitioners, they play an important role in formal
assessment, such as the objective structured clinical
examination (OSCE) for undergraduates [ ] and
licensure examinations for postgraduates [ ]. An
advantage of this assessment format is that it helps ensure
everyone has a standardised, equitable and repeatable
assessment [ ]. However, given that such exams can
demonstrate significant differences in pass rates between
demographic groups, as in the Membership of the Royal
College of Physicians’ ‘Practical Assessment of Clinical
Examination Skills’ (PACES) exam and the Membership of the
Royal College of General Practitioners’ Clinical Skills
Assessment (CSA) [ ], the construct validity of
simulations is an important research question. Recreating a
linguistically authentic medical interaction may not be the
primary objective of simulation when used for medical
training, where it enables practice of, and focus on,
particular skills, particularly technical ones. Recent research
evidence has also shown the important uses of simulation
in communication skills training for medical teams, not
least because of the facility it provides for practitioners to
record and reflect on how an interaction has unfolded [ ].
However, the linguistic differences and difficulties of
simulation are of considerable importance in assessment. In
many high-stakes summative exams, the simulated
consultation is not used to enable the medical practitioner to
reflect on and develop their own communication skills, but
rather for an external party to measure a candidate’s
competence and assign a grade. When using the simulated
consultation for assessment, particularly where interpersonal
and communication skills are being marked, we must be
sure that the exam construct and the linguistic
requirements placed on candidates adequately embody the skills
expected for real-life practice.
The authenticity of the interaction is particularly
pertinent when assessing communication skills. The clinical
consultation is not a technical procedure, but an
emotionally charged interpersonal interaction of high social
significance and linguistic complexity [ ]. In
considering the appropriateness of simulation in assessing
this complexity, we need a different kind of research
sited in a humanistic rather than psychometric
paradigm. Sociolinguistic research provides a useful means
of interrogating and debating these issues.
Sociolinguistics is a field which systematically studies the way
language and society inter-relate. It looks at how people
use language in their everyday lives and how
language-in-context creates the complex social world. The tools of
sociolinguistic research are real recordings of spoken
language, used to examine how different contexts
and social backgrounds affect the talk we produce and
how it is evaluated by others. This focus on evidence
from actual interactions is an important one here. Prior
research on simulated consultations has largely addressed
psychometric properties of particular tests and scenarios
such as internal consistency (e.g. using Cronbach’s alpha),
generalisability, inter-rater reliability, predictability (e.g. of
subsequent examination success), discriminatory power
(ability to distinguish consistently between ‘good’ and
‘poor’ examinees), as well as protocols and procedures for
quality control [ ]. A crucial question remains as
to how far such scenarios reflect real consulting abilities.
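For readers less familiar with the psychometric measures listed above, internal consistency via Cronbach's alpha can be sketched in a few lines: the formula compares the sum of the per-station score variances with the variance of candidates' total scores. This is only an illustrative sketch; the station-score matrix below is invented, not drawn from any exam data:

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (candidates x stations) score matrix:
    alpha = k/(k-1) * (1 - sum of station variances / variance of totals)."""
    k = scores.shape[1]                          # number of stations (items)
    station_vars = scores.var(axis=0, ddof=1)    # sample variance of each station
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of candidates' totals
    return k / (k - 1) * (1 - station_vars.sum() / total_var)

# Hypothetical OSCE scores: 5 candidates x 4 stations (invented for illustration)
scores = np.array([[7, 8, 6, 7],
                   [5, 5, 4, 6],
                   [9, 9, 8, 9],
                   [4, 5, 5, 4],
                   [8, 7, 7, 8]])
print(round(cronbach_alpha(scores), 2))  # prints 0.97
```

A high alpha indicates that stations rank candidates consistently; as the argument here makes clear, it says nothing about whether the assessed interaction resembles real consulting.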
Both simulated patients and those being assessed, when
asked after the event, tend to rate their experience as
realistic [ ]. Various studies have also found that
unannounced simulated patients, trained to present with a
particular scenario in a kind of ‘mystery shopper’
approach, went undetected by medical practitioners [ ].
Yet there remain crucial differences between real and
simulated consultations, particularly as they are used in
assessment, that cannot easily be evidenced by the mystery
shopper method or participants’ retrospective accounts.
Sociolinguistic analysis can unpick the subtle
interactional discrepancies that a simulation produces.
It looks at direct evidence from the interactions
produced in situ, rather than relying on asking participants
to reflect back on an interaction. Sociolinguistics often
employs techniques of ‘discourse analysis’ to identify
important features and fine-grained characteristics of
talk, such as grammatical structure, turn-taking
between speakers, intonation and the integration of
non-verbal communication [ ]. These can systematically
evidence the characteristic linguistic features that occur
in a particular setting - and how the talk is created by,
as well as constitutive of, the social relationships in that
context. Sociolinguistics is therefore interested in the
choices that speakers make when they use language and
what those choices and variations might mean for the
evaluation of speakers (page 16) [ ].
Integrated with this theoretical approach in
sociolinguistics is a fundamental interest in how language produces
power relations in society. Professional discourses in
particular, which largely consist of goal-oriented encounters,
often demonstrate a degree of power imbalance [ ]. We
can evidence these power relationships in the
‘microphysics’ and fine-grained detail of our everyday practices [ ],
including our professional talk. There are asymmetrical
relationships in terms of who is expected to speak at
certain points, who should show politeness and which
speakers are meant to demonstrate their domain-specific,
professional knowledge [ ]. Power relations in the setting
of an exam role-play differ from those of a real-life clinical
encounter, since the role-player has a very different position from
that of a patient and there is also a more powerful third
participant, the examiner, observing the interaction.
Given the different relationships of participants,
interactional differences in simulated consultations compared
to real-life are perhaps to be expected. The environment
in assessed simulations is (intentionally) decontextualized
and scenarios involving invasive physical examinations [ ]
and a range of patient groups such as those with
multimorbidity, limited English or a very different
communicative style [ ] are often (though not invariably) excluded.
Yet candidates are asked to behave as if these scenarios
were real [ ].
This sociolinguistic approach to language and
professional communication was recently used in a 3-year
study of the Royal College of General Practitioners’
Clinical Skills Assessment (CSA) [ ], which we draw
upon in this paper. As well as this, we draw on analytic
work around final-year undergraduate medical OSCEs by
Roberts et al. [ ], De la Croix and Skelton on
undergraduate OSCEs [ ], Seale et al.’s linguistic study of
OSCE examinations [ ], Niemants on the use of
role-play in training [ ], and O’Grady and Candlin on the
Royal Australian College of General Practitioners’
licensing exam [ ]. Some linguistic evidence on the use of
simulation for assessing professional communication
outside the medical field is also drawn upon, particularly
Stokoe’s [ ] account of the linguistic patterns in
simulated police interviews. Evaluating this evidence
collectively helps us flesh out this debate paper with a fuller
picture of the complexities of using simulation. Although
simulated consultations and OSCE exams do vary in their
setup, some of the essential commonalities, like the
semi-scripted, standardised parts for role-players, the timed cases
and marking descriptors, render our sociolinguistic
discussion relevant to this wide genre of assessment. The
paper addresses the following four themes, identified from
the literature review and our own research, in considering
the simulated consultation: (a) as a proxy for the real; (b)
as performance; (c) as a context for assessing talk and (d)
as potentially disadvantaging candidates trained overseas.
The simulated consultation as a proxy for the real
Social interaction cannot ultimately be standardised.
While there are some relatively stable, overarching
features of a consultation, such as the phases to be
performed and the general history to be conveyed by the
patient, at the minute level of turn-by-turn talk
standardisation becomes difficult and small differences in
delivery are inevitable. There is a tension, then, between the
degree of standardisation of any scenario (hence, its
replicability) and its reflection of the real (its authenticity), since
100 % standardisation would require the simulated patient
to reproduce a script robotically. In reality, while the
simulated patient plays a character and helps depict a
contextual hinterland in answer to candidates’ questions, they
must draw on their own interactional resources to
manage the interaction itself [ ].
To understand how simulations are experienced
differently from real consultations, we must ask, “what
‘maintenance work’ needs to be done by both parties to
maintain the semblance of reality?”. To do this, we draw
on the work of sociologist Erving Goffman, whose
seminal essay ‘Frame Analysis’ addressed the question,
“under what circumstances do we think things are real?”
[ ]. Goffman argued that the sense of feeling an activity
is real depends upon our sense of self as we relate to
others. Each interaction creates and reinforces a shared
reality to keep the relationship going [ ]. We attend to
others, become involved in the to and fro of talk,
however momentarily, since we have what Goffman calls a
moral requirement to display ourselves in ways that
others expect of us.
His concept of ‘frame’ describes this socially defined
reality. In any given stage of an encounter, speakers and
listeners establish or negotiate what is going on: we are
in the frame of a passing conversation, a preliminary
chat about the weather before the consultation proper
begins, an examination and so on. The frame constitutes
what is happening and also works as a filtering process
through which general principles of conduct apply. For
example, when a doctor tells a patient that chances of
recovery are high, both sides can understand that they
are in a ‘reassurance’ frame within this shared moment
of reality. Different frames can be invoked, and indeed
evidenced, through changes in linguistic behaviour by
the participants. For example, in a case from our CSA
study [ ], a simulated patient presents with
menorrhagia. The candidate indicates that he wants to do a
"quick abdominal examination", in the frame of a
routine element of the diagnosis. But when he responds to
the simulated patient’s query by saying “we look for any
abnormal growth”, the simulated patient becomes
alarmed. The candidate then shifts the frame from
information giving to one of reassurance and self-correction.
At any time in an encounter, Goffman argues
[ ], we can experience multiple frames. For
example, in an OSCE-style exam, the frame of showing
empathy to a role-playing patient is nested in a frame of
displaying competence to an examiner, which in turn is
nested in the institutional frame of the overall assessment
process. For this reason, the values associated with
empathy are not seriously committed to or felt as real
because they are anchored in a more fundamental frame,
related to simulated performance in the exam. While
role-player and trainee/candidate can put on a surface
performance that is realistic, the assessor must decide
whether the candidate is demonstrating ‘real caring’. This
makes any simulated consultation a hybrid activity in
which real qualities (subjectively experienced) are assessed
through the unreal, requiring a considerable amount of
interactional work to sustain the talk and illusion of a real consultation.
Goffman calls an activity that does not fit within the
frame of the moment a ‘frame-break’. For example,
candidates in simulated consultations often do not
know whether they are expected to carry out a physical
examination ‘for real’. They may commence a physical
examination frame, only to be interrupted by the
examiner either verbally or by handing them a card
with key physical findings. Candidates must then
rapidly shift frame to the preliminaries of diagnosis. We
found such shifts were typically marked by disfluencies
and/or hesitations, even with highly successful
candidates, as the candidate worked to maintain the
simulated case and ignore any interaction with the
examiner (page 53) [ ]. To justify a simulated
consultation as a proxy for the real obscures its limitations
and complexities, many of which only become
apparent when analysing their interactional detail. It is in
this linguistic detail of simulations that we can really
identify the different communicative competences that
come to the fore in simulated consultations, which
may not be the competences required for real-life practice.
The simulated consultation as performance
One of the concerns voiced about OSCE
examinations is that they test acting skills as much as they do
professional communication [ ]. Niemants
describes how role-played interactions "cannot reproduce
the orientations of real interactions...[W]hat is authentic
to those users when they “live” a specific situation
cannot be authentic to trainers/trainees when they play it"
 (p. 317). A number of studies have addressed the
types of ‘acted’ behaviour such settings consequently
produce, what de la Croix and Skelton have called "the
language game of role-play" [ ]. Seale et al. explore
how different ‘frames’, real and fictitious, are invoked
through talk in simulations [ ]. They find subtle
moments in which attention is drawn to the fictitious
nature of role-play, citing an example of humorous
comments made about an entirely invented paediatric
patient, that both the candidate and the role-player are
pretending is present (page 183). In their analysis, using
the fictitious nature of role-play to create humour is a
means for the candidate to achieve rapport with the
actor, not rapport with a ‘patient’. So there are multiple
roles and identities at play in simulations and these can
be evidenced in the communication. In answering the
question on ‘authenticity’, Seale et al. ultimately do
suggest that experience of participants in a role-play is
fundamentally different from that of a real-life interaction
and that the candidate must do much more
interactional ‘work’ to keep the illusion up (page 181).
Of course, real consultations also require some level of
performance, but to properly understand the differences
we must unpack what ‘acting’ and ‘performing’ mean in
these interactional situations. To do so, we can draw on
Goffman’s depiction of life as drama – i.e. we present
ourselves on the world as a stage, ‘performing’ in different
ways to different ‘audiences’ in different settings (everyday,
professional, institutional and so on) [ ]. We perform all
the time in the everyday, managing impressions of
ourselves in what Goffman called ‘facework’ [ ].
Goffman distinguished the banal and intimate
performances of the everyday that occur ‘backstage’ from
professional behaviour, which is largely ‘front-stage’
[ ] – a term he used to refer to activities like the
waiter at table, the doctor in the surgery or the teacher
in class. Here there are constraints on behaviour in
terms of manner, quality of attention and emotions,
and the performance has an ‘audience’ that evaluates
the competence displayed [ ].
Understanding professional behaviour as a performance does
not undercut its values. For example, to care for a
patient may involve masking frustration or fatigue in
order to care better. When institutions require this
professional behaviour to be monitored and assessed,
however, it becomes an institutional performance.
Evaluation of professional performance becomes
institutionalised as observers rate and record performance
and implement rewards and sanctions. There is a
heightened awareness of the need, on the part of the
professional, to perform expressively, a “heightened
mimicry" [ ] and, on the part of the assessor, "a license
… to regard the act of expression and the performer
with special intensity" (page 11) [ ]. However, it is
important to make the distinction between a heightened
performance for institutional purposes (e.g. someone
pointedly looking in the mirror when taking a driving
test) and a simulated performance (someone pretending
to look in the mirror).
In simulation, the environment is mutually
constructed as an unreal activity. In her analysis of emotions
in theatre acting, Konijn discusses the way actors must
monitor how far the emotions they are acting out accord
with the inner model of what the play should convey
[ ]. The actor’s task is not to convey sincere emotions
but to play out words and actions that convince the
audience of the authenticity of their character within the
terms of the drama. At the same time they monitor their
own experience of acting and so experience a ‘dual
consciousness’. In a simulation, likewise, the trainee or
candidate has to work hard to create a synthetic reality – one
that convinces the audience/observer, but not one that is
real to candidates in terms of consequences for patients:
an institutionalised display rather than a professional
investment, all the while monitoring their conduct vis-à-vis
the examiner. In sum, simulation is a multi-layered
performance for both role-player and candidate requiring
some of the skills of an actor.
The simulated consultation as a context for assessing talk
The design of OSCE-style exams brings five other
complexities, relating to the quality of talk and the candidate’s
task, adding burdens and reducing the ‘real’. We
consider: (i) the talk-heavy nature of the consultations; (ii)
the design and timing of cases; (iii) the shift of power to
the role-player; (iv) standardised scenarios but individual
emotional responses and (v) who fails such assessments
– and why? We establish these themes from the authors’
study of the CSA [ ] and from an overview of the
linguistic research on simulations [ ], but draw on
these findings to debate the particular implications for assessment.
The talk-heavy nature of the consultations
In simulated consultations, it is primarily talk-in-interaction
that is assessed. To succeed in simulated scenarios,
candidates must work harder or ‘over-perform’, holding a higher
proportion of the conversational floor (between 67 and 77 %)
than in everyday consultations [ ]. Research by Seale et al.
identifies the complex, additional linguistic work required
from candidates in simulations [ ] and research on the
Royal Australian College of General Practitioners’ licensing
examination identified how role-played scenarios require a
complex, hybrid discourse from the GP candidate [ ].
Collectively, these findings suggest that simulated consultations
require actions and skills to be verbalised by the candidate
to a much greater degree than in everyday clinical work.
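The 'proportion of the conversational floor' reported in such studies can be operationalised as each speaker's share of the words in a transcript. The following is only a minimal sketch of that measure, with a wholly invented exchange rather than data from the studies cited:

```python
from collections import Counter

def floor_share(turns):
    """Given (speaker, utterance) turns, return each speaker's
    proportion of all words spoken - a simple floor-holding measure."""
    words = Counter()
    for speaker, utterance in turns:
        words[speaker] += len(utterance.split())
    total = sum(words.values())
    return {speaker: n / total for speaker, n in words.items()}

# Invented exchange in which the candidate dominates the floor
turns = [
    ("candidate", "can you tell me a bit more about the pain you have been having"),
    ("patient", "it comes and goes"),
    ("candidate", "I understand how you feel and I am sorry to hear about that"),
    ("patient", "thank you"),
]
shares = floor_share(turns)
print(round(shares["candidate"], 2))  # prints 0.82
```

Real studies use more refined measures (timed speech, overlaps, back-channels), but even this crude word count makes the candidate's dominance of the floor quantifiable.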
Talk in simulated assessments is also relatively
decontextualized, without the shaping role of the computer
[ ] or any of the other props and interruptions of
real consultations. Decontextualised environments incur
more talk [ ] and, in an environment such as this, lead
to the talk itself becoming an intense focus of attention. In addition,
there is no continuity of care, so shared, unspoken
knowledge between doctor and patient can play no part.
This potentially diminishes the types of relationships
and interactions that can be experienced by doctor and
patient in the simulated consultation. Relationship
building over time and the deep values inherent in building
professional capability [
] are overshadowed by an
externally timed case where surface skills must be made
explicit (e.g. enacted or voiced) for assessment. This
simultaneous amplification and reduction is most
apparent in the interpersonal domains of assessment, as we discuss below.
The design and timing of cases
The design of cases for simulated consultations moves the
focus from the how of patient care to the why of the
particular selected case. Both students and candidates are
primed to fear the trip-wire that comes with the case:
“learners sometimes think there are hidden aspects…they
are being asked to discover, akin to peeling away the skins
of an onion until the flesh is found” (page 67) [ ].
In a high-stakes examination, this ‘Sherlock Holmes’
factor can mar or make success [ ]. It turns the
candidate into a timekeeper, dealing with concerns
superficially so that the putative puzzle of the case can be
resolved. They may stop in mid-sentence when the
whistle blows or pack in questions or information as the last
minute ticks by. The strictly timed structure for
simulated consultations produces very different openings and
closings from that identified in real consultations. For
example, in real clinical encounters, doctors raising new
topics at the likely end of the encounter is rare but,
conversely, closings are often extended conversational
exchanges which build the doctor-patient relationship
more generally [ ].
The shift of power to the role-player
Sociolinguistic research has identified how asymmetrical
interactions, where one speaker has more power than
another, show small-scale differences in talk. Medical
consultations are necessarily asymmetrical. The
movement in recent years towards patient-centredness and
shared decision making has not fundamentally altered
this, since asymmetry stems at least partly from the
doctor’s knowledge [ ]. But in simulated consultations,
candidates must manage the fact that “the power
relation is inverted, because knowledge and judgment rest
with the simulated patient rather than with the physician
student” (page 266) [ ]. De la Croix and Skelton
identify a higher number of interruptions from role-players
across 100 third-year OSCE exams, suggesting a position
of greater interactional power compared to findings on
the linguistic behaviour of real-life patients [ ]. Not
only do the simulated patients know the case and how it
should play out [ ], but in examiner feedback sessions
for research on simulations [ ], examiners noted that
the simulated patients positioned themselves in an
actorly manner. They put demands on candidates that
patients usually would not and showed familiarity with
the exigencies of the case through their language [ ].
Evidence from outside medical education has also
shown the shift in power relations between speakers when
interacting in simulations. Stokoe provides a nuanced
account of the conversational inauthenticities of role-play
for police interviews [ ], particularly the more elaborate
and sometimes humorous way in which conversational
actions are performed in these false settings, where the
stakes for participants are entirely different from those
where a real defendant is being interviewed. There is
linguistic evidence, then, for how participants must orient
themselves in acted, simulated settings, monitoring their
performance and conducting extra linguistic work to
maintain the illusion of a real interaction.
Standardised scenarios but individual emotional responses
The role-player’s power is made more complex by the
shift from the institutional persona of the actor/patient
to the instinctive resources of the private person. In
other words, the role-player works with a hybrid of
acting behaviour and their own, individual interactional
resources. While careful training is used to standardise
‘patients’, the role-player is usually not working to a
tightly scripted part. He or she is given guidance to react
to the candidate in a natural way, to fulfil interactional
criteria (Table 1). If the candidate’s performance is
unclear or irritating to the role-player, then the role-player
can respond in accord with their inner emotions (the
irritation feels real, even though the setting is simulated).
All parties, in fact, must draw on their own
interactional resources to make sense of the encounter. Even
where a middle-class actor plays a convincingly troubled
and inarticulate teenager, they cannot gainsay their own
interpretive processes (e.g. they can mumble or remain
silent but they cannot not understand). Examiners not
only have to judge this hybrid of simulation and
instinctive resources, they also have to manage their own mix of
instinctive reactions to how others interact, their own
professional expectations and the formal categories of
the examination. As one said, “A lot of the time, I am
comparing them to me and what I’m used to” [ ]. This
mix of habits of talk, interpretation and evaluation (on
the one hand) and standardised judgements (on the
other) is most problematic in the domain of
interpersonal skills, where subjective judgement is necessary
to interpret what counts as ‘rapport’ or ‘sensitivity’. This
became particularly evident in feedback sessions with
examiners, as the following section explores.
Who fails such assessments – and why?
We have noted the heavy focus on talk in simulated
consultations. Communication or interpersonal skills are
often explicitly assessed with their own marking criteria
in medical OSCEs, but can also become implicitly
judged across all other domains, since professional
actions like data-gathering and clinical management must
also be performed through effective communication
[ ]. The metric of reliability also tends to reinforce the
unspoken assumption that there is an implicit ‘best way’
of scoring highly in the interpersonal domain. Candidates
in simulated consultations routinely produce formulaic
phrases such as "Can you tell me a bit more about…" "I
understand how you feel" or "I’m sorry to hear about that",
with a greater frequency, and often in different sequential
positions, than is found in real-life practice [ ]. Some of
these mimic the phrases recommended in communication
skills textbooks and their extensive use in simulations may
be inevitable in an environment in which talk is being
observed and assessed. This is a finding corroborated by
Roberts et al. [ ] in a study of undergraduate medical
OSCEs, where the use of elicitation phrases such as "How
do you feel about that?" could be interpreted as sounding
overly trained if used in the wrong location (page 8–9). In
an essay on the experience of being a role-player
evaluating candidates in US medical exams, Jamison points
out that to gain marks, empathy and compassion must
be ‘voiced’ and that (perhaps as a consequence)
candidates seemed either aggressively formulaic in their
insistence, "that must be really hard", or saturated with
humility "Would you mind if I – listened to your heart?"
(pages 4–5) [ ]. There have been similar findings on
simulations in professional settings outside medicine,
such as Stokoe’s research on police interview role-plays,
in which communication directives from training
manuals are overtly used in the openings, in a way which
they are not in real-life police interviews, potentially for
the benefit of a marker [ ].
It seems to be a consequence of the assessed, simulated
setting then, that participants use these formulaic, trained
professional phrases and interactional moves with a much
higher frequency than in real life. In exams such as the CSA,
high-scoring candidates also produce 32 % more of these
exam-modelled utterances than weaker candidates. Yet
these phrases appear much less frequently in real
consultations [ ]. Interestingly, weaker CSA candidates who
also produced these types of phrases, albeit slightly less
frequently, were assessed as formulaic in examiner feedback:
It seems just very formulaic and a lot of it seems
learned. ‘I understand why you would be worried’,
‘What kind of thought went through your mind when
you made this appointment’ which kind of is an
attempt to do the right thing but to me it just felt
very crass… [ ]
Detailed analysis of stronger candidates’ talk showed
that they knew how to play the game: they customised
formulaic phrases so they sounded more real and
sincere, adding in little hesitations, colloquialisms and
changes in intonation (pages 59–61) [ ]. In such
circumstances, the ‘empathy telling’, already a simulation of
feeling and perception, has to be further worked on to
invoke a convincing suspension of disbelief: a double
simulation or, to extend Konijn’s concept of ‘dual
consciousness’ [ ], a ‘triple consciousness’, consisting of the
candidate’s own sense of themselves as professionals, the
consciousness that they must simulate a professional
encounter and, in addition, the awareness that within the
institutional frame they must work on the formulaic phrases of
the simulation so that they sound sincere to examiners. It is in
these small details of talk, here in the small variations in
delivery of exam-modelled phrases, that we can see how
power and social relationships are constituted in the micropractices of interaction and its evaluation [ ].
In terms of construct validity, does the simulated consultation measure what it purports to measure: the interpersonal capabilities expected of a doctor? The answer is a
complex one. Though the simulation may be good at testing
skills such as giving explanations and structuring the
consultation, there are a number of linguistic features
which do not mimic real-life practice. For example, it
examines competence in using additional
communicative resources to make exam-induced ‘voiced’ phrases
sound sincere and to manage the triple consciousness
required to perform to examiners. In terms of assessment theory, there is "construct-irrelevant variance" [ ], in which certain know-how is assessed that is not a requirement of real consultations.
Simulated consultations as potentially disadvantaging candidates trained overseas
While all assessments may require ‘exam skills’ to some degree, when one group of candidates fares much worse than another, as occurs in many of these assessments [ ], the fairness of these exam-constructed requirements needs to be carefully considered. There is wide recognition that for many candidates trained outside the country in which they are assessed, simulations are often a new phenomenon and that, as with any type of assessment, lack of familiarity affects performance [ ]. The simple solution offered is that this group needs more practice with simulations. However, detailed sociolinguistic analysis suggests that simulations may cause difficulties for this group of candidates in other ways as well.
As indicated above, simulations lead to more talk,
more formulaic phrases and more work to ensure that
such talk sounds sincere. This focus on talk and how it
sounds in contexts of intense assessment puts particular pressure on those whose style of communicating differs from that of the majority of examiners and also, perhaps, that of the patient role-players. Small differences in such subtle features as intonation, word stress and other small markers of speech can be amplified and read off as showing negative characteristics, such as formulaic responding or a failure to engage, attracting lower marks in the interpersonal skills domain (pages 32–73) [ ]. Additionally, since it can be difficult to make standardised, simulated cases reflect the same variation as real-life consulting, performance will not capture a candidate's ability to interact effectively and flexibly with a diverse patient population. In many such
exams, while UK graduates are not assessed on
consulting in linguistically challenging situations, International
Medical Graduates, many of whom consult regularly in
another expert language within the British multi-cultural
context, have no opportunity to display this skill as they
might use it in their everyday practice. Such competence
in linguistically and culturally challenging situations is
increasingly important for medical practitioners treating
diverse patient populations, both in the UK and globally.
It is perhaps the biggest challenge for assessing medical practitioners’ interpersonal competence in our modern-day context of globalised, mobile and diverse societies.
A review of sociolinguistic approaches to simulations
demonstrates that simulated assessment, even when it is ‘realistic’, shows some crucial differences from the communicative competences found in real-life practice. Talk is always a performance in context and, in simulations, the role-playing patient, the candidate and the examiner all
have to work hard to maintain the illusion. Candidates
who can handle the social and linguistic complexity of this
somewhat artificial, standardised situation score highly –
yet what is being assessed is not real communication but
the ability to voice a credible appearance of such
communication. It follows that if communication skills are
assessed purely through simulated patients, this may not
reflect the real consulting abilities of candidates. We must
question whether simulations replace the values-led development of medical students with ‘playing the game’ of assessment [50, 51, 59]. The ability of doctors to form enduring therapeutic relationships with patients may not be adequately reflected in the “colonisation [of medicine] by the technologies of the unreal” [ ].
The discipline of sociolinguistics offers an evidenced
approach to these questions around professional
communication. In this paper, we have introduced three core
sociolinguistic concepts relevant to the assessment of
communication in medicine: that the particular variety
of talk in simulated consultations separates it out from
the talk in real consultations; that the notion of ‘frame’
is used to understand how we relate to and make our
talk real to each other and that this reality breaks down
in institutionally assessed communication; and that
micro-features of talk feed constantly into our evaluation
of others and, in high-stakes assessments, can have large
consequences on the trajectory of an interaction. While
a single awkward moment is unlikely to lead to failure,
in settings of intense evaluation, perceived infelicities
such as an unfilled pause or formulaic phrase become
amplified. The cumulative effect of such micro-features
may lead to a candidate being judged as "not developing
rapport" or as showing inadequate responsiveness to
"verbal and non-verbal cues" and an overall negative
impression of interpersonal abilities.
Although a number of studies have identified that simulated interactions show important differences from real-life professional communication [27, 28, 33–37], we are not arguing that simulation has no place in teaching or assessment. Much of medical practice consists of skills that are more or less technical in nature and which can be both taught and assessed effectively using simulated patients (the rationale behind the ‘skills lab’) [ ].
Formative simulated consultations have great value in
the safety they afford learners to make and learn from
mistakes, as well as to ‘slow down’ the consultation to
study what has happened. Those designing summative simulated assessments, however, must carefully consider the difficulties of assessing interpersonal skills in this setting. Hence, we
do not seek to bury the OSCE, but in introducing the
sociolinguistic perspective, we do seek to debate its level
of validity for assessing communicative and interactional
aspects of clinical performance. Furthermore, we believe the evidence identified in a number of sociolinguistic studies of simulated interaction [27, 30, 58] requires us to
consider carefully what we mean by ‘fairness’ in assessment
and how we might better assess communication skills in
settings of cultural and linguistic diversity.
Abbreviations
CSA: Clinical Skills Assessment; MRCGP: Examinations for ‘Membership of the Royal College of General Practitioners’; OSCE: Objective Structured Clinical Examination; RCGP: Royal College of General Practitioners.
Competing interests
KH was an examiner for the Royal College of General Practitioners’ (RCGP) Clinical Skills Assessment during the data-collection and analytic phases of the research cited in this article. The authors have no other competing interests to declare.
Authors’ contributions
SA was originally the Research Associate who conducted the analytic work on the Clinical Skills Assessment described in this article and drafted the first version of this debate article. CR was originally the Principal Investigator on the research project with the Royal College of General Practitioners and substantially contributed to the first draft and subsequent versions of this article. KH contributed a significant amount of analytic work and discussion as an adviser on the original project with the Royal College of General Practitioners. TG has conducted extensive research in the field of medical education and has drawn on this in substantially rewriting the initial and subsequent versions of this article. All authors contributed to conceptualising and writing the paper and to sourcing material. All authors have seen and approved the final manuscript.
Acknowledgements
The authors are grateful to the research funders who facilitated the work with the Royal College of General Practitioners referred to in this paper. This funding included a Knowledge Transfer Partnership award (KTP008346, 2011–2013) from the Technology Strategy Board and the Academy of Medical Royal Colleges in the United Kingdom. SA was additionally funded by an Economic and Social Research Council ‘Future Research Leaders’ grant at the University of Nottingham (ES/K00865X/1, 2013–2016).
We are also grateful to the Royal College of General Practitioners for the
access and close advice they gave the authors throughout the original
research, on which this debate article is built, and to all the exam candidates
who gave their consent to be part of the study.
References
1. Khan KZ, Gaunt K, Ramachandran S, Pushkar P. The Objective Structured Clinical Examination (OSCE): AMEE Guide No. 81. Part II: Organisation & administration. Med Teach. 2013;35(9):e1447-63.
2. Khan KZ, Ramachandran S, Gaunt K, Pushkar P. The Objective Structured Clinical Examination (OSCE): AMEE Guide No. 81. Part I: An historical and theoretical perspective. Med Teach. 2013;35(9):e1437-46.
3. Swanson DB, van der Vleuten CP. Assessment of clinical skills with standardized patients: state of the art revisited. Teach Learn Med. 2013;25(sup1):S17-25.
4. First LR, Chaudhry HJ, Melnick DE. Quality, cost, and value of clinical skills assessment. N Engl J Med. 2013;368(10):963-4.
5. Boulet JR, Smee SM, Dillon GF, Gimpel JR. The use of standardized patient assessments for certification and licensure decisions. Simul Healthc. 2009;4(1):35-42.
6. Cleland JA, Abe K, Rethans J-J. The use of simulated patients in medical education: AMEE Guide No 42. Med Teach. 2009;31(6):477-86.
7. Holmboe ES, Ward DS, Reznick RK, Katsufrakis PJ, Leslie KM, Patel VL, et al. Faculty development in assessment: the missing link in competency-based medical education. Acad Med. 2011;86(4):460-7.
8. Gormley G, Sterling M, Menary A, McKeown G. Keeping it real! Enhancing realism in standardised patient OSCE stations. Clin Teach. 2012;9(6):382-6.
9. Dewhurst NG, McManus C, Mollon J, Dacre JE, Vale AJ. Performance in the MRCP(UK) Examination 2003-4: analysis of pass rates of UK graduates in relation to self-declared ethnicity and gender. BMC Med. 2007;5:8.
10. McManus IC, Elder AT, Dacre J. Investigating possible ethnicity and sex bias in clinical examiners: an analysis of data from the MRCP(UK) PACES and nPACES examinations. BMC Med Educ. 2013;13:103.
11. McManus IC, Wakeford R. PLAB and UK graduates' performance on MRCP(UK) and MRCGP examinations: data linkage study. Br Med J. 2014;348:g2621.
12. Korkiakangas T, Weldon S-M, Bezemer J, Kneebone R. Video-Supported Simulation for Interactions in the Operating Theatre (ViSIOT). Clin Simul Nurs. 2015;11(4):203-7.
13. Sarangi S. Healthcare interaction as an expert communicative system. New Advent Lang Interac. 2010;196:167.
14. Brannick MT, Erol-Korkmaz HT, Prewett M. A systematic review of the reliability of objective structured clinical examination scores. Med Educ. 2011;45(12):1181-9.
15. Ilgen JS, Ma IW, Hatala R, Cook DA. A systematic review of validity evidence for checklists versus global rating scales in simulation-based assessment. Med Educ. 2015;49(2):161-73.
16. Lievens F, Sackett PR. The validity of interpersonal skills assessment via situational judgment tests for predicting academic success and job performance. J Appl Psychol. 2012;97(2):460.
17. Nestel D, Tabak D, Tierney T, Layat-Burn C, Robb A, Clark S, et al. Key challenges in simulated patient programs: An international comparative case study. BMC Med Educ. 2011;11(1):69.
18. Miller GE. The assessment of clinical skills/competence/performance. Acad Med. 1990;65(9):S63-7.
19. Allen J, Rashid A. What determines competence within a general practice consultation? Assessment of consultation skills using simulated surgeries. Br J Gen Pract. 1998;48(430):1259-62.
20. Bosse HM, Nickel M, Huwendiek S, Jünger J, Schultz JH, Nikendei C. Peer role-play and standardised patients in communication training: a comparative study on the student perspective on acceptability, realism, and perceived effect. BMC Med Educ. 2010;10(1):27.
21. Kinnersley P, Ben-Shlomo Y, Hawthorne K, Donovan J, Chaturvedi N. The acceptability and practicality of simulated patients for studying general practice consultations in Britain. Educ Prim Care. 2005;16:540-6.
22. Gumperz J. On interactional sociolinguistic method. In: Sarangi S, Roberts C, editors. Talk, work and institutional order: Discourse in medical, mediation and management settings. New York: Mouton de Gruyter; 1999. p. 453-71.
23. Holmes J. An Introduction to Sociolinguistics. London: Longman; 1992.
24. Holmes J, Marra M. In: Bhatia V, Bremner S, editors. The Routledge Handbook of Language and Professional Communication. Abingdon: Routledge; 2014. p. 112-26.
25. Foucault M. Discipline and Punish: The Birth of the Prison. Harmondsworth: Penguin; 1979.
26. Morand DA. Language and power: an empirical analysis of linguistic strategies used in superior-subordinate communication. J Organ Behav. 2000;21(3):235-48.
27. Roberts C, Atkins S, Hawthorne K. Performance features in clinical skills assessment: Linguistic and cultural factors in the Membership exam in the Royal College of General Practitioners. London: King's College London with the University of Nottingham; 2014.
28. Seale C, Butler CC, Hutchby I, Kinnersley P, Rollnick S. Negotiating frame ambiguity: A study of simulated encounters in medical education. Commun Med. 2007;4(2):177-87.
29. Sanci L, Day N, Coffey C, Patton G, Bowes G. Simulations in evaluation of training: a medical example using standardised patients. Eval Program Plann. 2002;25(1):35-46.
30. Mohanna K. Exploring the Royal College of General Practitioners' Clinical Skills Assessment (unpublished thesis in partial completion of EdD). London: University College London Institute of Education; 2011.
31. Roberts C, Wass V, Jones R, Sarangi S, Gillett A. A discourse analysis study of 'good' and 'poor' communication in an OSCE: a proposed new framework for teaching students. Med Educ. 2003;37(3):192-201.
32. de la Croix A. The language game of role-play: an analysis of assessed consultations between third year medical students and Simulated Patients (SPs). Birmingham: University of Birmingham; 2010.
33. de la Croix A, Skelton J. The simulation game: an analysis of interactions between students and simulated patients. Med Educ. 2013;47(1):49-58.
34. de la Croix A, Skelton J. The reality of role-play: interruptions and amount of talk in simulated consultations. Med Educ. 2009;43(7):695-703.
35. Niemants NSA. From Role-Playing to Role-Taking: Interpreter's Role(s) in Healthcare. In: Schäffner C, Fowler Y, Kredens K, editors. Interpreting in a changing landscape: selected papers from Critical Link. Amsterdam/Philadelphia: John Benjamins; 2013. p. 305-19.
36. O'Grady C, Candlin CN. Engendering trust in a multiparty consultation involving an adolescent patient. In: Candlin C, Crichton J, editors. Discourses of trust. London: Palgrave Macmillan; 2013. p. 52-69.
37. Stokoe E. The (in)authenticity of simulated talk: comparing role-played and actual interaction and the implications for communication training. Res Lang Soc Interact. 2013;46(2):165-85.
38. Stokoe E. Simulated interaction and communication skills training: The "Conversation Analytic Role-play Method". In: Applied conversation analysis: Changing institutional practices. Basingstoke: Palgrave Macmillan; 2011. p. 119-39.
39. Goffman E. Frame analysis: an essay on the organization of experience. New York: Harper and Row; 1974.
40. Goffman E. Interaction ritual: essays in face to face behavior. New York: Doubleday; 1967.
41. Harrison S. How do you make a medical student feel stupid? Bring on the latex breasts and silicone ears. The Guardian. 2008.
42. Niemants NSA. From Role-Playing to Role-Taking: Interpreter's Role(s) in Healthcare. In: Schäffner C, Fowler Y, Kredens K, editors. Interpreting in a changing landscape: selected papers from Critical Link. Amsterdam/Philadelphia: John Benjamins; 2013. p. 305-19.
43. Goffman E. The presentation of self in everyday life. New York: Doubleday; 1959.
44. Goffman E. On face-work: An analysis of ritual elements in social interaction. Psychiatry. 1955;18(3):213-31.
45. Drew P, Heritage J. Analyzing talk at work: An introduction. In: Drew P, Heritage J, editors. Talk at work: Interaction in institutional settings. Cambridge: Cambridge University Press; 1992.
46. Bauman R. Verbal art as performance. Prospect Heights, Illinois: Waveland Press; 1984.
47. Konijn E. Acting emotions: shaping emotions on stage. Amsterdam: Amsterdam University Press; 2000.
48. Swinglehurst D, Greenhalgh T, Roberts C. Computer templates in chronic disease management: ethnographic case study in general practice. BMJ Open. 2012;2(6).
49. Levelt W. Speaking: from intention to articulation. Cambridge: MIT Press; 1989.
50. Bleakley A. 'Good' and 'poor' communication in an OSCE: education or training? Med Educ. 2003;37(3):186-7.
51. Fraser SW, Greenhalgh T. Coping with complexity: educating for capability. Br Med J. 2001;323(7316):799-803.
52. Kurtz SM, Silverman JD. The Calgary-Cambridge Referenced Observation Guides: an aid to defining the curriculum and organizing the teaching in communication training programmes. Med Educ. 1996;30(2):83-9.
53. West C. Co-ordinating closings in primary care visits: producing continuity of care. In: Heritage J, Maynard DW, editors. Communication in medical care: Interaction between primary care physicians and patients. Cambridge: Cambridge University Press; 2006.
54. Peräkylä A. Communicating and responding to diagnosis. In: Heritage J, Maynard DW, editors. Communication in medical care: Interaction between primary care physicians and patients. Cambridge: Cambridge University Press; 2006. p. 214-47.
55. Hanna M, Fins JJ. Power and communication: why simulation training ought to be complemented by experiential and humanist learning. Acad Med. 2006;81(3):265-70.
56. Jamison L. The empathy exams: essays. Minneapolis: Graywolf Press; 2014.
57. Haladyna TM, Downing SM. Construct-irrelevant variance in high-stakes testing. Educ Meas. 2004;23(1):17-27.
58. Esmail A, Roberts C. Independent review of the Membership of the Royal College of General Practitioners (MRCGP) examination. London: General Medical Council; 2013. p. 1-44. http://www.gmc-uk.org/MRCGP_Final_Report__18th_September_2013.pdf_53516840.pdf.
59. Skelton JR. Everything you were afraid to ask about communication skills. Br J Gen Pract. 2005;55(510):40-6.
60. Greenhalgh T. Future-proofing relationship-based care: a priority for general practice. Br J Gen Pract. 2014;64(628):580.
61. Ziv A, Ben-David S, Ziv M. Simulation based medical education: an opportunity to learn from errors. Med Teach. 2005;27(3):193-9.