Gaze-contingent perceptually enabled interactions in the operating theatre
Alexandros A. Kogkas
George P. Mylonas
HARMS Lab, Department of Surgery and Cancer, Imperial College London, St Mary's Hospital, 20 South Wharf Road, 3rd Floor Paterson Centre, London W2 1PF, UK
Purpose Improved surgical outcome and patient safety in the operating theatre are constant challenges. We hypothesise that a framework that collects and utilises information from multiple sources, especially perceptually enabled ones, could help to meet these goals. This paper presents some core functionalities of a wider low-cost framework under development that allows perceptually enabled interaction within the surgical environment.
Methods The synergy of wearable eye-tracking and advanced computer vision methodologies, such as SLAM, is exploited. As a demonstration of one of the framework's possible functionalities, an articulated collaborative robotic arm with a laser pointer is integrated, and the set-up is used to project the surgeon's fixation point in 3D space.
Results The implementation is evaluated over 60 fixations on predefined targets, with distances of 92-212 cm between the subject and the targets and 42-193 cm between the robot and the targets. The median overall system error is currently 3.98 cm. The system's real-time potential is also highlighted.
Conclusions The work presented here represents an introduction and preliminary experimental validation of core functionalities of a larger framework under development. The proposed framework is geared towards a safer and more efficient surgical theatre.
Keywords 3D eye-tracking; Gaze contingent; Perceptually enabled interactions; SLAM; Smart operating theatre; Robot control
Corresponding author: Alexandros A. Kogkas
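The Methods summary above describes projecting the surgeon's fixation point into 3D space by combining a wearable eye-tracker's gaze ray with a SLAM-derived camera pose. A minimal sketch of one way such a projection could be computed, assuming the gaze ray is intersected with a known planar target surface (the function and variable names below are illustrative, not taken from the paper):

```python
# Hedged sketch (not the authors' implementation): map the wearer's gaze
# ray from the eye-tracker camera frame into the world frame using the
# SLAM pose, then intersect it with a planar target surface.

def dot(a, b):
    # Inner product of two 3-vectors
    return sum(x * y for x, y in zip(a, b))

def mat_vec(M, v):
    # Multiply a 3x3 matrix by a 3-vector
    return [dot(row, v) for row in M]

def fixation_in_world(gaze_dir_cam, R_wc, t_wc, plane_n, plane_d):
    """Return the 3D fixation point as the intersection of the gaze ray
    with the plane {x : plane_n . x = plane_d}.
    R_wc, t_wc: camera-to-world rotation (3x3) and translation, e.g. the
    pose estimated by SLAM; gaze_dir_cam: gaze direction in camera frame.
    """
    d = mat_vec(R_wc, gaze_dir_cam)                       # ray direction in world frame
    s = (plane_d - dot(plane_n, t_wc)) / dot(plane_n, d)  # ray parameter at the plane
    return [t_wc[i] + s * d[i] for i in range(3)]

# Toy check: camera at the origin looking down +z, target plane z = 1.5 m
point = fixation_in_world([0.0, 0.0, 1.0],
                          [[1, 0, 0], [0, 1, 0], [0, 0, 1]],
                          [0.0, 0.0, 0.0],
                          [0.0, 0.0, 1.0], 1.5)
```

In a full system the plane would be replaced by the reconstructed scene geometry, and the resulting 3D point would be passed to the robot arm as the laser-pointing target.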
The operating theatre is reportedly the environment where
unintentional patient harm is most likely to happen [1]. Some
of the most influential factors are related to suboptimal
communication among the staff, poor flow of information, staff
workload and fatigue and the sterility of the operating
theatre [2]. While new technologies may add complexity to
the surgical workflow, at the same time they provide new
opportunities for the design of systems and approaches that
can enhance patient safety and improve workflow and
efficiency. A number of initiatives have assessed the state of the
art in technological developments and identified key areas
where future innovative solutions could be used to
optimise the operating environment, such as cognitive
simulation, informatics, "smart" imaging, "smart" environments,
ergonomics/human factors and group-based communication
technologies [3].
In the spirit of the Internet of Things (IoT) and the
recent explosion of data-driven sciences, it is anticipated
that equipment, surgical instruments, consumables and staff
will be fully integrated and networked within a "smart"
operating suite. This could happen in a number of ways,
such as electronically, using computer vision, RFID
markers or other technologies [4,5]. Partially integrated operating
suites are already commercially available, such as
Karl Storz's OR1™ [6], where components of the surgical
environment (e.g. endoscopic devices, video/data sources,
surgical table, ceiling lights) can be tailored to and by the
user and can be controlled from a central location within
the sterile area. Such operating suites, where a large amount
of information can be made available through a unique
integrated system, offer tremendous opportunities for
implementing novel human-computer interfaces, context-aware
systems, automated procedures and augmented visualisation
features.
Moreover, a significant body of research has explored
"perceptually enabled" interactions in the sterile
environment using technologies like 3D cameras, voice commands
or eye-tracking [7]. This way the surgeon can be kept in the
loop of decision-making and task execution in a seamless
way that is likely to help improve overall operational
performance and reduce communication errors. For example,
a hand-gesture- and voice-driven robotic nurse introduced
by Jacob et al. has been shown to reduce the number of
movements without significantly affecting task execution
time compared to collaboration with human nurses [8].
Eye-tracking methodologies in particular have the potential to
provide a "third hand" and a seamless way to allow
perceptually enabled interactions within the surgical environment.
Previous work has demonstrated screen-based gaze control
of surgical instruments [9]. In robotic [10] and conventional
laparoscopic [11] surgical settings, shared screen-based
eye-tracking among multiple collaborators was shown
to significantly improve verbal and nonverbal
communication, task understanding, cooperation, task efficiency and
outcome.
Overall, the work presented here draws inspiration from
the increasing utilisation of data from diverse sources in
conjunction with advances in machine learning. It is also
fundamentally driven (...truncated)