Virtual reality characters depicting abused children can be used to train General Practitioners (GPs) to spot the warning signs of abuse, new research suggests.
The project, led by a team from Goldsmiths, University of London, University College London (UCL) and the University of Birmingham, involved creating the characters using motion capture from real children and real-life case studies. The ‘virtual children’ featured in ‘simulated surgeries’ alongside characters depicting parents.
On entering the study’s ‘simulated surgery’, participants sat down at a real desk with a real laptop and put on a pair of 3D glasses. Everything else in the surgery setting, including virtual parent ‘Chris’ and virtual child ‘Tom’, was computer generated. The technology enabled an operator to manipulate ‘Chris’ to make his behaviour more or less aggressive towards ‘Tom’. In the scenario, ‘Chris’ leaves the room, giving the doctor a short window of opportunity to talk to ‘Tom’ alone.
The researchers investigated whether factors such as experience and the complexity of information affected the ability of doctors to identify and act on safeguarding concerns. As in a real consultation, GPs in the simulation have limited time to pick up on subtle body-language and verbal cues indicating whether the ‘child’ is at risk.
To test whether the complexity of information was a factor, the medical reports were long and difficult to read in one set of consultations, and clear and concise in another. The 63* participants (37 GPs and 26 trainee GPs) were rated on how effectively they recorded any child-abuse concerns in their post-consultation notes.
The researchers found that trainee GPs were just as likely to take note of warning signs as their experienced counterparts. However, GPs who were more stressed, or who scored higher for ‘neuroticism’, were more likely to miss the signs than their less-stressed, more extraverted colleagues. A report of the research is published in the journal Frontiers in Robotics and AI.
Dr Sylvia Xueni Pan, study author and Lecturer in Virtual Reality at Goldsmiths, said: “An advantage of our approach is that, unlike with actors, we have absolute control over our virtual characters. This means we can subtly alter the behaviour and responses of these virtual patients. Our results show that medical doctors responded to this, as those given less obvious behavioural cues were not as effective at recording concerns as those given more obvious cues.”
Professor Sylvie Delacroix, Professor in Law and Ethics at the University of Birmingham, said: “It is very difficult to study how GPs spot signs of abuse, given the number of factors that may interfere with this in a real-life, professional setting. It is encouraging that the system developed by this project showed that the GPs’ level of experience did not impact upon their ability to pick up on a parent’s level of aggressive behaviour towards their child.”
The medical lead of the work, Dr Caroline Fertleman from UCL, said: “For ethical reasons it would be impossible to recreate this kind of sensitive scenario using child actors. What we have shown, for the first time, is that we can create virtual reality characters of abused children and their parents that doctors believe in and interact with in a realistic way, enabling them to learn how to spot the subtle warning signs of abuse.”
Dr Sylvia Xueni Pan said: “One possible explanation for the unexpected link between stress and neuroticism and missing the cues is that doctors with better people skills found the simulated consultations less mentally challenging and so were able to focus more of their attention on the virtual child character, ‘Tom’.”
The results suggest this new virtual reality approach could provide an effective, cost-efficient method of training doctors in sensitive areas such as paediatrics and mental health, where they can repeat and learn from interactions without feeling inhibited or worried about their impact on human actors.
A report of this research, entitled ‘A study of professional awareness using immersive virtual reality: the responses of general practitioners to child safeguarding concerns’, is published in the journal Frontiers in Robotics and AI.
*One of the 64 original participants was excluded for technical reasons.