Access to UK supercomputer will power pioneering face recognition research

Dr Georgios Mastorakis’ research has secured prestigious access to the UK’s AI Research Resource (AIRR) to advance pioneering work on facial emotion recognition systems using synthetic data.

Georgios, whose research spans artificial intelligence, image processing, computer vision, machine learning and simulation modelling, will spend three months running intensive experiments on one of the two powerful supercomputers that make up the AIRR. His proposal, on improving emotion recognition systems using synthetic image data, received a five-star rating from AIRR reviewers, the highest possible mark.

While it does not provide direct funding, the AIRR arguably offers something equally valuable for computational research: time on extremely powerful GPUs, each worth tens of thousands of pounds.

Emotion recognition, a branch of computer vision and AI, attempts to classify facial expressions into categories such as anger, disgust, fear, happiness, neutrality, sadness and surprise – the seven emotions at the heart of Georgios’ project. Such systems are already embedded in everyday technologies, from smartphones that unlock using face recognition to photo apps and social media filters that track facial landmarks. Alongside these applications are more experimental tools in healthcare, education, security and marketing that attempt to infer emotional states from facial expressions.
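To make the classification step concrete, here is a minimal sketch of how a system might map a face-image feature vector onto the seven emotion labels. It is not Georgios’ actual model: the feature vector, the weights and their dimensions are all illustrative assumptions, since a real system learns its weights from training data.

```python
import math
import random

# The seven emotion categories at the heart of the project.
EMOTIONS = ["anger", "disgust", "fear", "happiness",
            "neutrality", "sadness", "surprise"]

def classify_expression(features, weights):
    """Map a face-image feature vector to one of seven emotion labels.

    `features` is a flattened image or learned embedding; `weights`
    holds one weight vector per emotion, as a trained model would
    supply them. Both are illustrative stand-ins here.
    """
    # One score (logit) per emotion, then a softmax over the seven.
    logits = [sum(f * w for f, w in zip(features, ws)) for ws in weights]
    peak = max(logits)                       # subtract max for stability
    exps = [math.exp(z - peak) for z in logits]
    probs = [e / sum(exps) for e in exps]
    return EMOTIONS[probs.index(max(probs))]

# Toy usage: random numbers standing in for a real face image.
random.seed(0)
features = [random.gauss(0, 1) for _ in range(64)]
weights = [[random.gauss(0, 1) for _ in range(64)] for _ in EMOTIONS]
label = classify_expression(features, weights)
```

The softmax step is standard in classifiers of this kind: it turns the seven raw scores into probabilities, and the highest-probability emotion becomes the prediction.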

But despite their uptake, emotion recognition systems face severe challenges that put their reliability in doubt. The public datasets they rely on are often messy, creating a risk of mislabelling, and historically they have not represented the full diversity of human faces, raising the risk that emotions from certain groups are misclassified. If the underlying dataset is skewed or of poor quality, the system's outcomes can be unfair and unreliable.

Georgios also points to a wider industry failure that can undermine the efficacy of these recognition systems:

In the past there were companies making a huge amount of money selling data that they claimed was both representative and clean. As soon as you started using the data, those claims collapsed completely.

Dr Georgios Mastorakis, Lecturer in Computer Science

The distinctive feature of Georgios’ research is his use of synthetic images – artificially generated faces and expressions instead of relying mainly on real photographs of people. 

“We have seven different emotions, and we are trying to detect these emotions in images,” he explained. “But the way we do it is by training the AI not with human but with synthetic data. The synthetic images are not like the humans, so this is a big challenge of the research.”

The synthetic dataset he is using was created with Maya, a popular graphics software package, and originates from work at the University of Washington. Crucially, synthetic data can be generated and controlled: researchers can start from clean, well-labelled images, systematically vary pose, lighting, occlusions and facial types, and create faces of different heights, body types and demographics that may not exist in standard datasets.
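A sketch of why controlled generation matters: because every synthetic sample is produced from known parameters, its label is exact by construction, unlike scraped photo datasets that must be labelled after the fact. The parameter names and values below are purely illustrative assumptions, not the actual controls of the Washington dataset or the Maya pipeline.

```python
import itertools

# Hypothetical rendering parameters -- the real pipeline renders faces
# in Maya; this sketch only shows the idea of controlled variation.
POSES = ["frontal", "left_30", "right_30"]
LIGHTING = ["studio", "dim", "backlit"]
EMOTIONS = ["anger", "disgust", "fear", "happiness",
            "neutrality", "sadness", "surprise"]

def synth_dataset():
    """Enumerate every parameter combination, so each sample is born
    with a clean, unambiguous emotion label and known conditions."""
    return [{"pose": p, "lighting": l, "emotion": e}
            for p, l, e in itertools.product(POSES, LIGHTING, EMOTIONS)]

samples = synth_dataset()
# 3 poses x 3 lighting setups x 7 emotions = 63 fully labelled samples
```

Because the grid is explicit, a researcher can also rebalance it at will, for example generating extra samples for demographics that are under-represented in standard photo collections.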

At the heart of Georgios’ research is the challenge of building a bridge between synthetic and real data, so that systems trained on artificial faces still perform accurately on real human faces.

“I’m trying now to make the model transition between the two domains – real data and synthetic data,” Georgios said. “We train on synthetic, then use this transition when we train and test on real human emotions and see if we can get high accuracy.”  
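The synthetic-to-real gap Georgios describes can be illustrated with a toy numerical experiment. This is not his method: it uses a tiny logistic regression on made-up two-class "face" features, a uniform feature shift standing in for the domain gap, and simple per-domain standardisation standing in for the transition step. All names and numbers are illustrative.

```python
import math
import random

random.seed(42)
DIM = 8  # size of the toy feature vector

def make_domain(n, offset):
    """Toy two-emotion dataset. `offset` mimics the gap between
    synthetic renders and real photographs."""
    data = []
    for _ in range(n):
        y = random.randint(0, 1)
        x = [random.gauss(0, 1) + 1.5 * y + offset for _ in range(DIM)]
        data.append((x, y))
    return data

def train_logreg(data, lr=0.1, steps=300):
    """Minimal logistic regression trained by gradient descent."""
    w, b = [0.0] * DIM, 0.0
    for _ in range(steps):
        gw, gb = [0.0] * DIM, 0.0
        for x, y in data:
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))       # sigmoid
            for j in range(DIM):
                gw[j] += (p - y) * x[j]
            gb += p - y
        n = len(data)
        w = [wi - lr * gi / n for wi, gi in zip(w, gw)]
        b -= lr * gb / n
    return w, b

def accuracy(w, b, data):
    hits = sum((sum(wi * xi for wi, xi in zip(w, x)) + b > 0) == (y == 1)
               for x, y in data)
    return hits / len(data)

def standardise(data):
    """Crude stand-in for domain alignment: centre and scale each
    feature so both domains land in a shared range."""
    n = len(data)
    means = [sum(x[j] for x, _ in data) / n for j in range(DIM)]
    stds = [(sum((x[j] - means[j]) ** 2 for x, _ in data) / n) ** 0.5
            for j in range(DIM)]
    return [([(x[j] - means[j]) / stds[j] for j in range(DIM)], y)
            for x, y in data]

# Train purely on "synthetic" faces, test naively on shifted "real" ones...
synthetic = make_domain(300, offset=0.0)
real = make_domain(300, offset=1.0)
w, b = train_logreg(synthetic)
acc_naive = accuracy(w, b, real)

# ...then repeat with per-domain alignment bridging the two.
w2, b2 = train_logreg(standardise(synthetic))
acc_aligned = accuracy(w2, b2, standardise(real))
```

In this toy setup the naively transferred model degrades badly on the shifted domain, while the aligned version recovers most of its accuracy, which is the kind of gap-bridging effect the real research pursues with far more sophisticated techniques.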

If his experiments succeed, the benefits could be far-reaching: emotion recognition systems that rely less on dubious, biased human datasets and more on carefully designed synthetic data, making them more robust, fairer, and more reliable in the real world.