Inproceedings

Hallucination Detection: Robustly Discerning Reliable Answers in Large Language Models.

, , , , , , , , and .
CIKM, pages 245-255. ACM, (2023)
