Retrieval-Augmented Generation (RAG) is effective at leveraging the in-context learning (ICL) capabilities of LLMs.
But are current evaluations missing how the different parts of RAG work together, and do they ignore real-world challenges like small mistakes in the data? Two aspects remain underexamined:
- How RAG handles noisy documents, and
- A holistic review of RAG’s robustness.
To probe these areas, the study introduces a new attack, the Genetic Attack on RAG (GARAG). GARAG searches for weaknesses in each part of RAG and stress-tests the whole system with noisy documents.
The study assesses RAG’s robustness by applying GARAG to standard QA datasets with various retrievers and LLMs. The results show that GARAG consistently succeeds in attacking RAG, exposing the risks posed by small errors in real-world data.
Key Findings From The Study
- RAG systems are vulnerable to minor but frequent textual errors within documents.
- An attack method called GARAG is proposed, based on a genetic algorithm that searches for adversarial documents.
- RAG systems are susceptible to noisy documents in real-world databases.
The reader’s ability to accurately ground its answers depends heavily on the retriever’s capability to source query-relevant documents, as sketched below.
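To make this dependency concrete, here is a minimal retrieve-then-read sketch. Everything in it is an illustrative stand-in rather than the paper’s implementation: `retriever(query, doc)` is assumed to return a relevance score and `reader(prompt)` a generated answer. The point is structural: the reader conditions only on what the retriever surfaces, so noise in highly ranked documents flows straight into generation.

```python
# Minimal retrieve-then-read sketch. `retriever` and `reader` are
# illustrative stand-ins, not the components used in the paper.

def rag_answer(query, corpus, retriever, reader, k=3):
    # Rank the corpus by retriever relevance and keep the top-k docs.
    # A noisy but topically similar document can still rank highly.
    top_docs = sorted(corpus, key=lambda d: retriever(query, d), reverse=True)[:k]

    # The reader conditions only on this retrieved context, so any
    # typos that survive retrieval propagate directly into the answer.
    prompt = "Context:\n" + "\n".join(top_docs) + f"\n\nQuestion: {query}\nAnswer:"
    return reader(prompt)
```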
Genetic Attack on RAG (GARAG)
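The paper’s exact implementation is not reproduced here, but its core idea can be sketched as a genetic search over typo-perturbed copies of a retrieved document. In this hypothetical sketch, `retriever_score` and `reader_score` are assumed stand-ins for the real retriever and reader; the fitness function rewards candidates that remain retrievable while degrading the reader’s answer, mirroring GARAG’s goal of attacking both components at once.

```python
import random

# Hypothetical sketch of a GARAG-style genetic search. The two scoring
# functions are assumed stand-ins, not the paper's released code.

def inject_typo(text):
    """Apply one random character-level typo (swap, delete, or duplicate)."""
    if len(text) < 2:
        return text
    i = random.randrange(len(text) - 1)
    op = random.choice(("swap", "delete", "duplicate"))
    if op == "swap":      # transpose two adjacent characters
        return text[:i] + text[i + 1] + text[i] + text[i + 2:]
    if op == "delete":    # drop one character
        return text[:i] + text[i + 1:]
    return text[:i] + text[i] + text[i:]  # duplicate one character

def garag_search(doc, query, answer, retriever_score, reader_score,
                 pop_size=20, generations=50):
    # Dual objective: keep the document retrievable (high retriever
    # score) while hurting the reader's answer (low reader score).
    def fitness(d):
        return retriever_score(query, d) - reader_score(query, d, answer)

    # Start from typo-perturbed copies of the clean document.
    population = [inject_typo(doc) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]            # selection
        children = [inject_typo(random.choice(parents))  # mutation
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)  # strongest adversarial candidate
```

A full genetic algorithm would typically also cross over pairs of parents before mutating; this sketch keeps only selection and mutation for brevity.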
In Conclusion
This study makes three main contributions:
- Highlighting a vulnerability of RAG systems to frequent but minor textual errors within documents, with the evaluation focused on the functionality of the retriever and reader components.
- Introducing GARAG, a straightforward and potent attack strategy that leverages a genetic algorithm to craft adversarial documents capable of exploiting weaknesses in both components of RAG simultaneously.
- Demonstrating through experimentation the detrimental impact of noisy documents on the RAG system within real-world databases.
The results show that typos severely degrade the RAG system’s performance. Although the retriever partially shields the reader by filtering what reaches it, the system can still be compromised by these small perturbations.
Read the full study here.