ANAH: Analytical Annotation of Hallucinations in Large Language Models

1 Shanghai AI Laboratory 2 Hong Kong University of Science and Technology
3 The Chinese University of Hong Kong

Abstract

Reducing the hallucination problem of Large Language Models (LLMs) is crucial for their wide applications. A comprehensive and fine-grained measurement of hallucinations is the first key step in governing this issue but is under-explored in the community. Thus, we present ANAH, a bilingual dataset that offers ANalytical Annotation of Hallucinations in LLMs within Generative Question Answering. Each answer sentence in our dataset undergoes rigorous annotation, involving the retrieval of a reference fragment, the judgment of the hallucination type, and the correction of hallucinated content. ANAH consists of ~12k sentence-level annotations for ~4.3k LLM responses covering over 700 topics, constructed by a human-in-the-loop pipeline. Thanks to the fine granularity of the hallucination annotations, we can quantitatively confirm that the hallucinations of LLMs progressively accumulate in the answer and use ANAH to train and evaluate hallucination annotators. We conduct extensive experiments studying generative and discriminative annotators and show that, although current open-source LLMs have difficulties with fine-grained hallucination annotation, the generative annotator trained with ANAH surpasses all open-source LLMs and GPT-3.5, obtains performance competitive with GPT-4, and exhibits better generalization ability on unseen questions.

Case Study

An example of ANAH for sentence-level hallucination annotation. Each sentence in a generated answer receives a fine-grained annotation consisting of a Reference Fragment, a Hallucination Type, and a Correction. Hallucinated and supported content are highlighted in orange and blue, respectively.
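
For concreteness, each sentence-level annotation can be viewed as a small record. A minimal sketch in Python, where the field names and the example type labels are illustrative assumptions rather than the released dataset's exact schema:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class SentenceAnnotation:
        """One annotated answer sentence (field names are hypothetical)."""
        sentence: str                      # one sentence from the generated answer
        reference_fragment: Optional[str]  # supporting span from the reference, if any
        hallucination_type: str            # e.g. "No Hallucination", "Contradictory", "Unverifiable"
        correction: Optional[str]          # corrected sentence when a hallucination is present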

Data Construction Pipeline

ANAH is constructed in four stages:

  1. Topic Selection & Reference Retrieval: selecting a broad range of topics, and retrieving reference documents for each, to ensure comprehensiveness.
  2. Question Generation & Selection: constructing related questions whose answers can be fully supported by the reference.
  3. Answer Generation: generating answers from different LLMs under various scenarios.
  4. Fine-grained Hallucination Annotation: annotating each answer sentence for hallucinations, enabling further analysis and mitigation (see the sketch below).
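
The four stages compose into a single construction flow. Below is a minimal sketch of that flow; the injected components (`retriever`, `question_llm`, `answer_llms`, `annotator`) and their methods are hypothetical stand-ins for illustration, not the paper's actual implementation:

    # Hypothetical sketch of the four-stage pipeline; all component
    # interfaces are assumptions for illustration only.
    def build_anah_records(topics, retriever, question_llm, answer_llms, annotator):
        records = []
        for topic in topics:                                   # 1. Topic Selection
            reference = retriever(topic)                       #    & Reference Retrieval
            questions = question_llm.generate_questions(topic, reference)
            supported = [q for q in questions                  # 2. Question Generation
                         if annotator.is_answerable(q, reference)]  # & Selection
            for question in supported:
                for llm in answer_llms:                        # 3. Answer Generation
                    answer = llm.answer(question)
                    annotations = [annotator.annotate(s, reference)  # 4. Fine-grained Annotation
                                   for s in answer.split(". ")]      # (crude sentence split)
                    records.append({"topic": topic, "question": question,
                                    "answer": answer, "annotations": annotations})
        return records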

Citation


    @inproceedings{ji2024anah,
        title={ANAH: Analytical Annotation of Hallucinations in Large Language Models},
        author={Ji, Ziwei and Gu, Yuzhe and Zhang, Wenwei and Lyu, Chengqi and Lin, Dahua and Chen, Kai},
        booktitle={ACL},
        year={2024}
    }

    

ANAH-v2: Scaling Analytical Hallucination Annotation of Large Language Models

1 Shanghai AI Laboratory 2 Hong Kong University of Science and Technology
3 The Chinese University of Hong Kong

Abstract

Large language models (LLMs) exhibit hallucinations in long-form question-answering tasks across various domains and wide applications. Current hallucination detection and mitigation datasets are limited in domain and size, and they are hard to scale due to prohibitive labor costs and the insufficient reliability of existing hallucination annotators. To facilitate the scalable oversight of LLM hallucinations, this paper introduces an iterative self-training framework that simultaneously and progressively scales up the hallucination annotation dataset and improves the accuracy of the hallucination annotator. Based on the Expectation Maximization (EM) algorithm, in each iteration the framework first applies a hallucination annotation pipeline to annotate a scaled dataset and then trains a more accurate hallucination annotator on that dataset. The new annotator is then adopted in the annotation pipeline for the next iteration. Extensive experimental results demonstrate that the final hallucination annotator, with only 7B parameters, surpasses the performance of GPT-4 and obtains new state-of-the-art hallucination detection results on HaluEval and HalluQA via zero-shot inference. Such an annotator can not only evaluate the hallucination levels of various LLMs on a large-scale dataset but also help to mitigate hallucinations in LLM generations, with the Natural Language Inference (NLI) metric increasing from 25% to 37% on HaluEval.

Scalable Framework for Annotation

ANAH-v2 is a scalable framework for the oversight of LLM hallucinations. Through iterative self-training, it simultaneously and progressively scales up the hallucination annotation dataset and improves the accuracy of the hallucination annotator. The final dataset covers over ~3k topics, ~196k model responses, and ~822k annotated sentences, in both English and Chinese.
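
In EM terms, each iteration alternates an annotation step (label a larger pool of responses with the current annotator) with a training step (fit a more accurate annotator on the grown dataset). A minimal sketch of the loop, assuming hypothetical `annotate` and `train_fn` interfaces:

    # EM-style iterative self-training; `annotate` and `train_fn` are
    # hypothetical placeholders for the annotation pipeline and trainer.
    def iterative_self_training(seed_annotator, response_pools, train_fn):
        annotator, dataset = seed_annotator, []
        for pool in response_pools:            # one larger unlabeled batch per iteration
            dataset.extend(annotator.annotate(r) for r in pool)  # E-step: annotate scaled data
            annotator = train_fn(dataset)      # M-step: train a more accurate annotator
        return annotator, dataset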


Citation

    
    @article{gu2024anahv2,
        title={ANAH-v2: Scaling Analytical Hallucination Annotation of Large Language Models},
        author={Gu, Yuzhe and Ji, Ziwei and Zhang, Wenwei and Lyu, Chengqi and Lin, Dahua and Chen, Kai},
        journal={arXiv preprint arXiv:2407.04693},
        year={2024}
    }