AI systems such as ChatGPT increasingly influence our daily decisions, whether we are coping with mental health issues or searching for information. Yet these systems often fail precisely at the moments when empathy, judgement and social understanding matter most. This is where the “LLMpathy” project, led by Prof. Dr Lucie Flek from the Institute of Computer Science at the University of Bonn, comes in. Its goal is to make AI not just smarter, but socially intelligent. The European Research Council (ERC) is funding the project with €1.5 million over five years.
“Today’s AI can mimic empathy, but it doesn’t actually understand you,” says Lucie Flek. “We want to teach machines to empathise: to consider human emotions and behaviour, and to respond meaningfully to personality, values and social situations rather than just offering general advice.”
To achieve this, Flek will create in-depth human models in the form of statistical profiles based on psychological traits such as empathy, impulsivity and trust. “These models can be used to guide AI decisions in high-risk social scenarios such as conflict resolution, negotiations and mental health support,” she says.
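To give a flavour of how such a profile might be wired into a system, consider the minimal sketch below. It is purely illustrative: it assumes the profiles take the form of normalised trait scores, and the class, field names and prompt wording are hypothetical, not taken from LLMpathy itself.

```python
from dataclasses import dataclass

@dataclass
class TraitProfile:
    """Hypothetical statistical user profile (all fields illustrative).
    Scores are assumed to be normalised to [0, 1] from psychological scales."""
    empathy: float
    impulsivity: float
    trust: float

def build_system_prompt(profile: TraitProfile, scenario: str) -> str:
    """Condition a language model on the profile for a given social scenario."""
    return (
        f"Scenario: {scenario}.\n"
        f"User traits (0-1): empathy={profile.empathy:.2f}, "
        f"impulsivity={profile.impulsivity:.2f}, trust={profile.trust:.2f}.\n"
        "Tailor tone and advice to these traits instead of giving generic advice."
    )

# Example: a profile guiding an AI mediator in a conflict-resolution setting.
profile = TraitProfile(empathy=0.8, impulsivity=0.3, trust=0.5)
print(build_system_prompt(profile, "conflict resolution"))
```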
Using a large-scale psychological study, Lucie Flek will link human data to AI development and evaluation, creating a transparent feedback loop between people and machines. “This allows AI to explain why it has made a particular suggestion, based on the information it has about the person it is communicating with,” the computer scientist explains.
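What such a transparent, profile-driven suggestion could look like is sketched below. The thresholds and rules are invented for illustration only and do not represent the project's actual method; the point is simply that the rationale names the profile features that drove the decision.

```python
def suggest_with_rationale(profile: dict[str, float]) -> tuple[str, str]:
    """Return a suggestion plus a human-readable rationale naming the
    profile features that drove it (rules and thresholds are made up)."""
    if profile["impulsivity"] > 0.7:
        return (
            "Suggest sleeping on the decision before acting.",
            f"impulsivity={profile['impulsivity']:.2f} exceeded 0.7, "
            "so the system counsels a pause",
        )
    if profile["trust"] < 0.3:
        return (
            "Suggest consulting independently verifiable sources.",
            f"trust={profile['trust']:.2f} fell below 0.3, "
            "so the system foregrounds verification",
        )
    return ("Offer the standard advice.", "no trait crossed an adaptation threshold")

suggestion, why = suggest_with_rationale(
    {"empathy": 0.8, "impulsivity": 0.9, "trust": 0.5}
)
print(suggestion, "| because:", why)
```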
LLMpathy therefore also serves to protect users: it helps to identify when AI systems use personalisation unethically, for example to pressure someone emotionally into making a purchase or to spread misinformation tailored to their fears. “We need to understand how personalisation works in order to protect people from its misuse,” says Flek. “LLMpathy gives us the tools to do that.” To that end, the project will develop open-source tools, benchmarks and simulation platforms to ensure that future AI meets high standards of transparency, trustworthiness and ethical behaviour, in line with the EU's AI Act.
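As an illustration of the misuse-detection idea mentioned above, one could imagine an audit check that flags outputs exploiting a user's trait profile for persuasion. Everything in the sketch below, the rule, the cue list and the thresholds, is a hypothetical toy example, not an LLMpathy tool.

```python
URGENCY_CUES = ("act now", "last chance", "before it's too late", "don't miss out")

def flag_manipulative_personalisation(profile: dict[str, float], reply: str) -> bool:
    """Flag replies that pair pressure language with a vulnerable profile.
    A toy heuristic: real audits would need validated models, not keyword lists."""
    pressure = any(cue in reply.lower() for cue in URGENCY_CUES)
    vulnerable = (
        profile.get("impulsivity", 0.0) > 0.7 or profile.get("trust", 0.0) > 0.8
    )
    return pressure and vulnerable

# A highly impulsive user receiving urgency-laden sales language gets flagged.
print(flag_manipulative_personalisation(
    {"impulsivity": 0.9, "trust": 0.4},
    "This offer ends tonight - act now before it's too late!",
))  # True
```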
Professor Lucie Flek heads the Data Science and Language Technologies group at the Bonn-Aachen International Center for Information Technology (b-it) at the University of Bonn and is a member of the Transdisciplinary Research Area (TRA) “Modelling”. As Area Chair for Natural Language Processing (NLP) at the Lamarr Institute for Machine Learning and Artificial Intelligence, she connects this work with her research on machine learning for natural language processing, including AI robustness and security. She has been active in both academia and industry, including work on Amazon Alexa and Google Shopping Search in Europe. Her academic work at the University of Pennsylvania and University College London centred on user modelling from text and its applications in psychology and the social sciences.
Through its Starting Grants, the ERC supports researchers of all nationalities with two to seven years of experience since completing their doctorate. Applicants must have a promising scientific track record and submit an outstanding project proposal together with their host institution. The principal investigator does not need to be employed by the host institution at the time of submission, but a mutual agreement and commitment are required if the proposal is successful. Funding is usually provided for five years, with a grant of up to €1.5 million. Further information is available at https://erc.europa.eu/apply-grant/starting-grant