Consistency, trustworthiness in large language models goal of new research

Chenguang Wang plans to work with Google DeepMind researchers to improve the grounding ability of large language models with a Google Research Scholar Award. (Image: OpenAI DALL·E)

Large language models (LLMs), such as ChatGPT, are becoming part of our daily lives. These artificial intelligence models are trained on vast amounts of data to generate text, answer questions and prompt ideas.

However, a fundamental challenge for these models is that they lack grounding, the ability to link their knowledge to real-world facts. Without grounding, LLMs produce responses that are inconsistent and unsafe, including misinformation and hallucinations. A recent White House executive order and open letters signed by thousands of researchers have highlighted the importance of grounding, especially in critical domains such as health care and science.

Chenguang Wang, an assistant professor of computer science and engineering at the McKelvey School of Engineering at Washington University in St. Louis, plans to work with Google DeepMind researchers to improve the grounding ability of these models through a Google Research Scholar Award for the project titled “Before Grounding Responses: Learning to Ground References of Large Language Models.”

The award provides $60,000 in funding and supports early-career faculty members pursuing research in fields core to Google.

Read more on the McKelvey Engineering website.