
# PLOBELM_1_TEAM_2

Repository for AIC XAI Problem 1, Team 2.

## Expected directory structure

```
attribution_method/
├── Paper/
└── Implementation/

Evaluation/
├── Paper/
└── Implementation/
```

## Papers

### Attribution Methods

  1. Zeiler, Matthew D., and Rob Fergus. "Visualizing and understanding convolutional networks." ECCV. 2014.
  2. Smilkov, Daniel, et al. "SmoothGrad: removing noise by adding noise." ICML Workshop. 2017.
  3. Fong, Ruth C., and Andrea Vedaldi. "Interpretable explanations of black boxes by meaningful perturbation." ICCV. 2017.
  4. Selvaraju, Ramprasaath R., et al. "Grad-CAM: Visual explanations from deep networks via gradient-based localization." ICCV. 2017.
  5. Zhang, Quanshi, Ying Nian Wu, and Song-Chun Zhu. "Interpretable convolutional neural networks." CVPR. 2018.
  6. Wagner, Jörg, et al. "Interpretable and fine-grained visual explanations for convolutional neural networks." CVPR. 2019.
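
For reference, a minimal sketch of SmoothGrad (paper 2 above), assuming a PyTorch classifier `model` that maps a `(1, C, H, W)` input batch to class logits; the function name and defaults are illustrative, not this repository's API:

```python
import torch

def smoothgrad(model, x, target, n_samples=25, sigma=0.15):
    """Average input gradients over noisy copies of x (Smilkov et al., 2017)."""
    model.eval()
    grads = torch.zeros_like(x)
    for _ in range(n_samples):
        # Perturb the input with Gaussian noise and track gradients on the copy.
        noisy = (x + sigma * torch.randn_like(x)).detach().requires_grad_(True)
        model(noisy)[0, target].backward()  # gradient of the target-class logit
        grads += noisy.grad
    return grads / n_samples                # sensitivity map, same shape as x
```

In the paper `sigma` is given as a fraction of the input range, so in practice it would be scaled by `x.max() - x.min()` before use.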

### Evaluations

  1. Ancona, Marco, et al. "Towards better understanding of gradient-based attribution methods for deep neural networks." ICLR. 2018.
  2. Hooker, Sara, et al. "Evaluating feature importance estimates." ICML Workshop. 2018.
  3. Nie, Weili, Yang Zhang, and Ankit Patel. "A theoretical explanation for perplexing behaviors of backpropagation-based visualizations." ICML. 2018.
  4. Adebayo, Julius, et al. "Sanity checks for saliency maps." NIPS. 2018.
  5. Yang, Mengjiao, and Been Kim. "BIM: Towards Quantitative Evaluation of Interpretability Methods with Ground Truth." arXiv preprint arXiv:1907.09701 (2019).
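
For reference, a minimal sketch of the model-parameter randomization test from paper 4 above (Adebayo et al.), assuming a PyTorch model whose last registered module is the `nn.Linear` classification head; `attribute` is a placeholder for any attribution function with the same signature as the SmoothGrad sketch above:

```python
import copy
import torch
from scipy.stats import spearmanr

def randomization_check(model, x, target, attribute):
    """Re-initialize the final layer and compare attribution maps
    (model-parameter randomization test, Adebayo et al., 2018)."""
    base = attribute(model, x, target).abs().flatten()
    rand_model = copy.deepcopy(model)
    head = list(rand_model.modules())[-1]   # assumption: last module is the Linear head
    torch.nn.init.normal_(head.weight)
    rand = attribute(rand_model, x, target).abs().flatten()
    rho, _ = spearmanr(base.detach().numpy(), rand.detach().numpy())
    return rho  # rank correlation near 1.0 means the map ignores the model: a red flag
```

Rank correlation is one of the similarity measures used in the paper; a method whose maps barely change under weight randomization cannot be explaining the trained model.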