Saliency strikes back: How filtering out high frequencies improves white-box explanations.

ICML, OpenReview.net, (2024)


Other publications of authors with the same name

Harmonizing the object recognition strategies of deep neural networks with humans. NeurIPS, (2022)

Look at the Variance! Efficient Black-box Explanations with Sobol-based Sensitivity Analysis. NeurIPS, pages 26005-26014, (2021)

Sparks of Explainability: Recent Advancements in Explaining Large Vision Models. CoRR, (February 2025)

CRAFT: Concept Recursive Activation FacTorization for Explainability. CoRR, (2022)

Performance-optimized deep neural networks are evolving into worse models of inferotemporal visual cortex. CoRR, (2023)

Xplique: A Deep Learning Explainability Toolbox. CoRR, (2022)

How Good is your Explanation? Algorithmic Stability Measures to Assess the Quality of Explanations for Deep Neural Networks. WACV, pages 1565-1575, IEEE, (2022)

What I Cannot Predict, I Do Not Understand: A Human-Centered Evaluation Framework for Explainability Methods. NeurIPS, (2022)

Unlocking Feature Visualization for Deeper Networks with MAgnitude Constrained Optimization. CoRR, (2023)

On the Foundations of Shortcut Learning. CoRR, (2023)