Lightweight Self-Knowledge Distillation with Multi-Source Image Fusion (cs.CV)
Submitted to TNNLS (IF: 14.233, Area 1), first and independent author
Abstract:
Existing self-knowledge distillation (SKD) works mainly focus on multi-exit architectures, the reuse of historical knowledge, and interactions with label smoothing and small-scale regularization. However, the limited range of information sources restrains these methods from further improvement. In this paper, we propose two novel techniques for self-distillation from the broader perspective of multi-source information fusion: DRI (distillation with reverse instructions) and DSR (distillation with shape-wise regularization).
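To make the setting concrete, below is a minimal PyTorch sketch of a generic self-knowledge distillation loss, where the "teacher" signal comes from the model itself (e.g., an earlier snapshot or a deeper exit). This is only an illustration of the general SKD objective; it does not reproduce the paper's DRI or DSR terms, and the function name and hyperparameters are hypothetical.

```python
import torch
import torch.nn.functional as F

def self_distillation_loss(student_logits, teacher_logits, labels,
                           temperature=4.0, alpha=0.5):
    """Generic SKD objective (illustrative sketch, not the paper's DRI/DSR).

    `teacher_logits` is assumed to come from the same network, e.g. a
    historical snapshot or an auxiliary exit; it is detached so that no
    gradient flows into the self-teacher.
    """
    # Hard-label cross-entropy against the ground truth.
    ce = F.cross_entropy(student_logits, labels)
    # Soft-label KL divergence against the self-teacher, scaled by T^2
    # as in standard knowledge distillation.
    kd = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits.detach() / temperature, dim=1),
        reduction="batchmean",
    ) * temperature ** 2
    return alpha * ce + (1.0 - alpha) * kd
```

In practice, the cross-entropy and distillation terms are balanced by `alpha`, and the temperature softens both distributions so that the self-teacher's relative class probabilities carry more signal.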
Contribution list:
Xucong raised the idea, ran the experiments, recorded and analyzed the results, provided qualitative and quantitative explanations, drew all figures, and wrote the paper.
Pengchao (postdoc at CUHK) is the advisor and corresponding author. She took part in weekly meetings on this project, contributed her ideas, and helped improve the paper.
Xucong's note:
This work lasted from November 2022 to April 2023. Before starting it, I read about 40 related papers. This idea is just one of more than 200 attempts I made; most faded away simply because they could not reach the state of the art. To verify my methods, I ran about 100 experiments on different datasets and models, along with various ablation studies and fine-grained tests. Notably, I employed a work from deep-learning interpretability to provide theoretical analysis.