
Can We Really Trust Explanations? Evaluating the Stability of Feature Attribution Explanation Methods via Adversarial Attack.

, , , , , and . CCL, volume 13603 of Lecture Notes in Computer Science, pp. 281-297. Springer, (2022)

Other publications by persons with the same name

CMQA: A Dataset of Conditional Question Answering with Multiple-Span Answers., , , , , and . COLING, pp. 1697-1707. International Committee on Computational Linguistics, (2022)

SpikeLM: Towards General Spike-Driven Language Modeling via Elastic Bi-Spiking Mechanisms., , , , , , , , and . ICML, OpenReview.net, (2024)

Enhancing Multiple-choice Machine Reading Comprehension by Punishing Illogical Interpretations., , , , , , , and . EMNLP (1), pp. 3641-3652. Association for Computational Linguistics, (2021)

A Hierarchical Explanation Generation Method Based on Feature Interaction Detection., , , and . ACL (Findings), pp. 12600-12611. Association for Computational Linguistics, (2023)

Logic Traps in Evaluating Attribution Scores., , , , , and . ACL (1), pp. 5911-5922. Association for Computational Linguistics, (2022)

Can We Really Trust Explanations? Evaluating the Stability of Feature Attribution Explanation Methods via Adversarial Attack., , , , , and . CCL, volume 13603 of Lecture Notes in Computer Science, pp. 281-297. Springer, (2022)

AquilaMoE: Efficient Training for MoE Models with Scale-Up and Scale-Out Strategies., , , , , , , , , and 17 other author(s). CoRR, (2024)