Interpreting ResNet-based CLIP via Neuron-Attention Decomposition

NeurIPS 2025 Workshop on Mechanistic Interpretability

Abstract

We present a novel technique for interpreting the neurons in CLIP-ResNet by decomposing their contributions to the output into individual computation paths. More specifically, we analyze all pairwise combinations of neurons and the subsequent attention heads of CLIP's attention-pooling layer. We find that these neuron-head pairs can be approximated by a single direction in CLIP-ResNet's joint image-text embedding space. Leveraging this insight, we interpret each neuron-head pair by associating it with text. Additionally, we find that only a sparse set of neuron-head pairs contribute significantly to the output, and that some neuron-head pairs, while polysemantic, represent sub-concepts of their corresponding neurons. We use these observations for two applications. First, we employ the pairs for training-free semantic segmentation, outperforming previous methods for CLIP-ResNet. Second, we utilize the contributions of neuron-head pairs to monitor dataset distribution shifts. Our results demonstrate that examining individual computation paths in neural networks uncovers interpretable units, and that such units can be utilized for downstream tasks.

Neuron-Attention Decomposition

Because a nonlinearity follows CLIP-ResNet's residual connections, and because CLIP-ResNet's neuron contributions cannot be approximated by a single direction in the joint embedding space, existing CLIP-ViT interpretability methods do not readily extend to its ResNet counterparts. To address this gap, we introduce a new approach that provides a fine-grained decomposition of CLIP-ResNet's outputs into individual contributions of neurons in the last layers paired with the heads of the subsequent attention-pooling layer. Since each such neuron-head contribution lives in the joint image-text space, it can be compared with and interpreted via text.
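The decomposition can be illustrated with a toy linearized model of attention pooling. The sketch below uses hypothetical shapes and random weights (none are from the paper); `W_vo` stands in for a combined value-output projection per head. In this simplified linear setting the neuron-head contribution factors exactly into an image-independent direction times an image-dependent scalar; in the real model this factorization is only an approximation, which is the paper's empirical finding.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes (illustrative, not from the paper):
# D neurons, P spatial positions, H attention heads, E joint embedding dims.
D, P, H, E = 8, 49, 4, 16

acts = rng.standard_normal((D, P))                   # neuron activations a_n(p)
attn = rng.random((H, P))
attn /= attn.sum(-1, keepdims=True)                  # per-head pooling attention over positions
W_vo = rng.standard_normal((H, D, E))                # combined value+output projection per head

# Contribution of neuron-head pair (n, h) to the pooled output:
#   c_{n,h} = sum_p attn[h, p] * acts[n, p] * W_vo[h, n]
contrib = np.einsum('hp,np,hne->nhe', attn, acts, W_vo)

# Single-direction view: a fixed direction per (n, h), scaled by an
# image-dependent scalar (the attention-weighted activation).
scalar = np.einsum('hp,np->nh', attn, acts)          # (D, H)
direction = W_vo.transpose(1, 0, 2)                  # (D, H, E), image-independent
approx = scalar[..., None] * direction

# Exact in this linear toy model; approximate in the real CLIP-ResNet.
assert np.allclose(contrib, approx)

# Interpreting a pair via text: cosine similarity of its direction with a
# (hypothetical) text embedding from CLIP's text encoder.
text_emb = rng.standard_normal(E)
sims = direction @ text_emb                          # (D, H) similarity scores
```

Because `direction` does not depend on the input image, each neuron-head pair can be labeled once by finding the text whose embedding best aligns with it.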

BibTeX

@article{bu2025interpretingresnetbasedclipneuronattention,
  title={Interpreting ResNet-based CLIP via Neuron-Attention Decomposition},
  author={Edmund Bu and Yossi Gandelsman},
  journal={NeurIPS 2025 Workshop on Mechanistic Interpretability},
  year={2025},
  url={https://arxiv.org/abs/2509.19943}
}