Practical factual recall evaluation in RAG systems is problematic for the following reasons:
The study states that the factual recall evaluation framework with FaaF has been open-sourced as a Python package (`pip install faaf`). However, I was not able to install it.
As the image below illustrates, a constructor dynamically creates a function object from a set of facts. Function calling then allows LMeval to verify all facts within a single call when provided with an input text. FaaF reduces the error rate in identifying unsupported facts by up to 40 percentage points compared to prompting, whilst reducing the number of LMeval calls and output tokens by more than 5 times.
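To make the idea concrete, here is a minimal sketch of how a set of facts could be turned into a single function object that an evaluator LM fills in via function calling. This is not the faaf package's actual API; it assumes the OpenAI Python SDK with an `OPENAI_API_KEY` set, and the model name and field names are illustrative.

```python
# Sketch only: one "function" whose parameters are the facts, so the
# evaluator LM (LMeval) returns a verdict for every fact in a single call.
import json
from openai import OpenAI

client = OpenAI()


def build_fact_function(facts: list[str]) -> dict:
    """Create a tool/function schema with one boolean field per fact."""
    properties = {
        f"fact_{i}": {
            "type": "boolean",
            "description": f"Is this fact supported by the text? {fact}",
        }
        for i, fact in enumerate(facts)
    }
    return {
        "type": "function",
        "function": {
            "name": "verify_facts",
            "description": "Verify each fact against the provided text.",
            "parameters": {
                "type": "object",
                "properties": properties,
                "required": list(properties),
            },
        },
    }


def verify(facts: list[str], text: str, model: str = "gpt-4o-mini") -> dict:
    """One LMeval call returns a verdict for every fact at once."""
    tool = build_fact_function(facts)
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": f"Text to check against:\n{text}"}],
        tools=[tool],
        tool_choice={"type": "function", "function": {"name": "verify_facts"}},
    )
    # The model's arguments are a JSON string mapping fact_i -> verdict.
    args = response.choices[0].message.tool_calls[0].function.arguments
    return json.loads(args)
```

Because all facts travel in one function call, the number of LMeval requests and output tokens stays roughly constant regardless of how many facts are checked, which is where the efficiency gain comes from.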
And as the next image shows, given a set of ground-truth answers, facts are extracted via LMf. The hypothesised responses of the RAG system (in this instance, the Ungrounded Answer and the Poor Answer) are then tested for recall against the extracted facts.
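The end-to-end flow could look roughly like the sketch below. Here `extract_facts` is a hypothetical stand-in for the LMf extraction step (it is not part of the paper's released code), and `verify` is the single-call checker sketched above.

```python
# Hedged sketch of the evaluation flow from the figure:
# ground-truth answer -> facts (LMf) -> recall of a RAG response (LMeval).
def fact_recall(ground_truth_answer: str, rag_response: str) -> float:
    facts = extract_facts(ground_truth_answer)  # assumed LMf extraction step
    verdicts = verify(facts, rag_response)      # one LMeval call covers all facts
    supported = sum(bool(v) for v in verdicts.values())
    return supported / len(facts) if facts else 0.0
```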
The study found that relying on prompts for fact verification often overestimates the truthfulness of statements, especially when the text lacks important information. This approach can have error rates as high as 50% when dealing with incomplete texts.
However, presenting facts as a function to the language model (LM) greatly improves the accuracy and efficiency of verification.
FaaF shows that texts with somewhat relevant or inaccurate information are more likely to produce false positives than texts with missing or incomplete details.
The study also discovered that including a "not clear" option alongside the True/False choices improves overall accuracy. Additionally, asking for citations before verifying facts can help in some cases, but it may lead to false negatives if the text supports the fact only indirectly, without providing a direct citation.
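A possible way to express these two variations on top of the earlier sketch is shown below. This is an assumption about how the schema might be shaped, not the paper's implementation: each fact's verdict becomes an enum with a "not clear" option, and an optional citation field can be requested before the verdict.

```python
# Hedged sketch: three-way verdicts plus an optional citation field per fact.
def build_fact_function_with_unclear(facts: list[str], ask_citation: bool = False) -> dict:
    properties = {}
    for i, fact in enumerate(facts):
        if ask_citation:
            # Placing the citation field before the verdict asks the model
            # to quote supporting text first, then decide.
            properties[f"citation_{i}"] = {
                "type": "string",
                "description": f"Quote from the text that supports or refutes: {fact}",
            }
        properties[f"fact_{i}"] = {
            "type": "string",
            "enum": ["true", "false", "not clear"],
            "description": f"Verdict for: {fact}",
        }
    return {
        "type": "function",
        "function": {
            "name": "verify_facts",
            "description": "Verify each fact against the provided text.",
            "parameters": {
                "type": "object",
                "properties": properties,
                "required": list(properties),
            },
        },
    }
```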
Finally, using FaaF significantly reduces both the number of LM calls and tokens required for verification, making the process more efficient in terms of cost and time.
Find the original study here.