Visualization Methods for Convolutional Neural Networks
Today's deep learning models perform well but offer poor interpretability. This article introduces several visualization methods for convolutional neural networks that give an intuitive view of a network's inner workings.
1. Deconvolution
Deconvolution was the earliest method used for visualizing convolutional neural networks, first applied in the ZFNet model.
In 2012, AlexNet swept the image classification benchmarks, but at the time there was no way to understand why the network performed so well, and therefore no principled way to improve it. ZFNet was the pioneering work on visualizing and understanding such models: it uses deconvolution to find, in the pixel space of the input image, the pixels that maximally activate a given feature, thereby visualizing that feature. Deconvolution is exactly this pixel-finding process.
Convolution maps the input image to feature maps; each layer of the deconvolutional network can be viewed as the inverse of the corresponding layer of the convolutional network, sharing the same filters and pooling switches. Deconvolution therefore maps a feature map back into the pixel space of the input image, showing which pixels took part in activating that feature.
Note that deconvolution here is not a backward pass but a forward pass (through the deconvolutional network), as the figure below illustrates. The right side depicts the convolutional pass; when we are interested in some part of an intermediate feature map (e.g., its maximum activation), we set all other feature values to zero and reconstruct an approximation of the input in pixel space through the deconvolutional pass on the left. Max pooling corresponds to max unpooling, ReLU to ReLU, and convolution to transposed convolution.
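Below is a minimal single-layer sketch of this process (not the original ZFNet code; the layer sizes and the probed channel are arbitrary, for illustration only). A conv/ReLU/max-pool forward pass records the pooling switches; the strongest activation of one channel is kept and everything else zeroed; max-unpooling, ReLU, and a transposed convolution with the same filter weights then map it back to pixel space:
import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.randn(1, 3, 32, 32)                  # toy input "image"
conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)

# Convolutional pass: conv -> ReLU -> max pool, keeping the pooling switches
feat = F.relu(conv(x))
pooled, switches = F.max_pool2d(feat, kernel_size=2, return_indices=True)

# Keep only the strongest activation of one channel, zero all other feature values
probe = torch.zeros_like(pooled)
c = 0
idx = pooled[0, c].argmax()
probe[0, c].view(-1)[idx] = pooled[0, c].view(-1)[idx]

# Deconvolutional pass: max unpool -> ReLU -> transposed conv with the same weights
unpooled = F.max_unpool2d(probe, switches, kernel_size=2, output_size=feat.shape[-2:])
rectified = F.relu(unpooled)
reconstruction = F.conv_transpose2d(rectified, conv.weight, padding=1)
print(reconstruction.shape)                    # torch.Size([1, 3, 32, 32])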
2. Guided Back Propagation
The authors of this paper propose that replacing the pooling layers of a CNN with strided convolution layers, yielding an "all-convolutional network" without any pooling, is more effective. To verify this, they further propose the Guided Backpropagation visualization method.
Unlike deconvolution, Guided Backpropagation uses the backpropagation pass. To find which parts of an image activate a given feature, one computes the gradient of that feature value with respect to the input image via backpropagation, as shown in part a) of the figure below.
Compared with plain backpropagation, Guided Backpropagation adds a "guide" that blocks gradients smaller than $0$ from flowing back. Negative gradients correspond to the parts of the input that suppress the feature being visualized. Part b) of the figure above compares how signals pass through a ReLU layer in the forward pass, plain backpropagation, the deconvnet, and guided backpropagation; part c) gives the corresponding formulas.
In my view, the main difference between the deconvolution-based and backpropagation-based methods is this: the former, after selecting a feature location, zeroes out all other feature locations and reconstructs the pixel-space image that would produce a response of that strength at that location; the latter, after selecting a feature location, uses backpropagation to find the pixel-space pattern that maximizes the response at that location.
The reason for using ReLU to restrict the gradient flow is that if both positive and negative gradients were propagated back, the resulting response map would highlight both the regions that maximize the feature and the regions that hinder its maximization. ReLU blocks the latter, so only the region of interest is visualized.
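Here is a tiny numeric sketch of the signal rules compared in figure b) (the values are made up; only the gating logic matters):
import torch

x = torch.tensor([1., -1., 5., -2.])          # forward input to the ReLU
g = torch.tensor([2., 3., -4., 1.])           # gradient arriving from the layer above

backprop = g * (x > 0).float()                # plain BP: pass where the forward activation was positive
deconvnet = g.clamp(min=0.)                   # deconvnet: pass where the top gradient is positive
guided = g.clamp(min=0.) * (x > 0).float()    # guided BP: both conditions at once

print(backprop)   # tensor([ 2.,  0., -4.,  0.])
print(deconvnet)  # tensor([2., 3., 0., 1.])
print(guided)     # tensor([2., 0., 0., 0.])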
The full PyTorch implementation is as follows:
import torch
import torch.nn as nn

class Guided_BackPropagation():
    def __init__(self, model):
        super(Guided_BackPropagation, self).__init__()
        self.model = model
        self.model.eval()

    def normalization(self, x):
        x -= x.min()
        if x.max() <= 0.:
            x /= 1.  # all-zero map; divide by 1 to avoid NaN
        else:
            x /= x.max()
        return x

    def relu_backward_hook(self, module, grad_input, grad_output):
        # the "guide": block gradients smaller than 0 at every ReLU
        return (torch.clamp(grad_input[0], min=0.),)

    def get_gradient(self, input_TensorImage, target_label=None):
        """
        :param input_TensorImage (tensor): Input tensor image with shape [1, c, h, w].
        :param target_label (int, tensor): Target label. If None, the index of the highest model output is used.
            Can be an int index into the output, or a tensor with the same shape as the model's output. Default: None
        :return (tensor): Guided-Backpropagation gradients of the input image.
        """
        self.model.zero_grad()
        self.guided_gradient = None
        self.handlers = []
        self.input_TensorImage = input_TensorImage.requires_grad_()
        for name, module in self.model.named_modules():
            if isinstance(module, nn.ReLU):
                self.handlers.append(module.register_backward_hook(self.relu_backward_hook))
        output = self.model(self.input_TensorImage)
        if target_label is None:
            target_tensor = torch.zeros_like(output)
            target_tensor[0][torch.argmax(output)] = 1.
        else:
            if isinstance(target_label, int):
                target_tensor = torch.zeros_like(output)
                target_tensor[0][target_label] = 1.
            elif isinstance(target_label, torch.Tensor):
                if not target_label.dim() == output.dim():
                    raise NotImplementedError('Dimension of output and target label are different')
                target_tensor = target_label
        # backward() on a non-scalar tensor needs a weight tensor of the same shape;
        # the weighted sum then gives the scalar that is differentiated.
        # For visualization the one-hot target tensor is usually the right choice.
        output.backward(target_tensor)
        for handle in self.handlers:
            handle.remove()
        self.guided_gradient = self.input_TensorImage.grad.clone()
        self.input_TensorImage.grad.zero_()
        self.guided_gradient = self.guided_gradient.detach()  # detach() is not in-place; reassign
        self.guided_gradient = self.normalization(self.guided_gradient)
        return self.guided_gradient
GBP = Guided_BackPropagation(model)
GBP_grad = GBP.get_gradient(input_TensorImage=Input_img, target_label=target_label_index)
GBP_grad_np = GBP_grad.squeeze(0).cpu().numpy()  # numpy copy for plotting; keep GBP_grad as a tensor for the Guided Grad-CAM step below
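To view the result as an image, a minimal sketch with matplotlib (an assumption; the library is not used elsewhere in this post, and a 3-channel input is assumed):
import matplotlib.pyplot as plt

plt.imshow(GBP_grad_np.transpose(1, 2, 0))  # [c, h, w] -> [h, w, c]; values are already normalized to [0, 1]
plt.axis('off')
plt.show()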
3. CAM
Earlier work showed that the convolutional layers of a CNN possess some localization ability (i.e., they can detect where objects might be in an image, which is the basis of object detection), but fully connected layers destroy this ability. Fully connected layers also introduce a large number of parameters, which limits model performance.
The global average pooling (GAP) layer was introduced to replace the fully connected layers. Experiments show that this structure acts as a regularizer and preserves the model's localization ability. The authors propose a visualization method, Class Activation Mapping (CAM), which uses GAP to generate class activation maps; the activation map of a given class highlights the image regions associated with that class. The pipeline is as follows:
As the figure above shows, GAP is applied to every channel of the last convolutional layer, producing the global average of each channel's feature map; a weighted sum of these values gives the final output. Taking a weighted sum of the feature maps of the last convolutional layer with the same weights yields the CAM visualization. For classification, the algorithm can be written as follows.
For an input image, let $f_k(x,y)$ be the activation at location $(x,y)$ in the $k$-th channel of the last convolutional layer. Applying GAP to this channel gives $F_k = \sum_{x,y} f_k(x,y)$. For class $c$, the class score fed into the softmax is $S_c = \sum_{k} w_{k}^{c}F_k$, where $w_{k}^{c}$ is the weight describing the importance of $F_k$ for class $c$. The class score can be rewritten as:
\[S_c = \sum_{k} w_{k}^{c}F_k = \sum_{k} w_{k}^{c}\sum_{x,y} f_k(x,y) = \sum_{x,y} \sum_{k} w_{k}^{c} f_k(x,y)\]Defining the CAM of class $c$ as $M_c$, we have:
\[M_c(x,y) = \sum_{k} w_{k}^{c} f_k(x,y)\]$M_c(x,y)$ measures how important the activation at $(x,y)$ is for classifying the image as class $c$, and the class score is the sum over all locations:
\[S_c = \sum_{x,y} M_c(x,y)\]Note that using CAM requires modifying the original network (removing the fully connected layers and adding a GAP layer followed by a new fully connected layer), which limits the method's applicability to some extent.
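Since this section has no accompanying code, here is a minimal sketch of the CAM computation, assuming a hypothetical GAP-based classifier (the GapNet class, its features/fc layers, and all sizes are illustrative, not from the paper; using the mean instead of the sum for GAP only rescales the weights):
import torch
import torch.nn as nn
import torch.nn.functional as F

class GapNet(nn.Module):
    """Toy conv net ending in GAP + a single linear layer, as CAM requires."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
        )
        self.fc = nn.Linear(128, num_classes)    # rows are the weights w_k^c

    def forward(self, x):
        f = self.features(x)                     # f_k(x, y): [N, K, H, W]
        F_k = f.mean(dim=(2, 3))                 # GAP over each channel: [N, K]
        return self.fc(F_k), f

model = GapNet().eval()
img = torch.randn(1, 3, 32, 32)
with torch.no_grad():
    scores, fmap = model(img)
c = int(scores.argmax(dim=1))

# M_c(x, y) = sum_k w_k^c * f_k(x, y): weight the feature maps by the classifier row of class c
w_c = model.fc.weight[c]                         # [K]
cam = torch.einsum('k,khw->hw', w_c, fmap[0])    # [H, W], at feature-map resolution
cam = F.interpolate(cam[None, None], size=img.shape[-2:], mode='bilinear', align_corners=False)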
4. Grad-CAM & Guided Grad-CAM
The authors propose Gradient-weighted Class Activation Mapping (Grad-CAM), a visualization method for convolutional networks. Compared with the earlier CAM, Grad-CAM can visualize a convolutional network of any architecture, without modifying the structure or retraining.
In CAM the weighting coefficients of the feature maps are the classifier's weights; in Grad-CAM they are obtained by backpropagation. Let $A_{i,j}^k$ be the activation at location $(i,j)$ in the $k$-th channel of a convolutional layer, and let $y^c$ be the logit of class $c$ before the softmax. The weight $\alpha_k^c$ of the $k$-th channel of that layer for class $c$ is computed by global average pooling of the gradients:
\[\alpha_k^c = \frac{1}{Z}\sum_{i}\sum_{j} \frac{\partial y^c}{\partial A_{i,j}^k}\]Grad-CAM is the channel-wise weighted sum of the convolutional features $A^k$:
\[L_{\text{Grad-CAM}}^{c} = \mathrm{ReLU}\Big(\sum_{k} \alpha_k^c A^k\Big)\]The ReLU suppresses the highlighted regions that hinder maximization of the feature and keeps only those that promote it, so that only the region of interest is visualized. The resulting heatmap has the same spatial size as the feature map; bilinear interpolation can be used to upsample it to the size of the input image.
Guided Grad-CAM combines Guided Backpropagation with Grad-CAM: the former generally produces high-resolution visualizations, while the latter produces class-discriminative ones. Its main pipeline is as follows:
The PyTorch code for Grad-CAM is as follows:
import torch
import torch.nn as nn
import torch.nn.functional as F
from collections.abc import Iterable

class GradCam():
    def __init__(self, model):
        super(GradCam, self).__init__()
        self.model = model
        self.model.eval()  # the model must be in eval mode

    def normalization(self, x):
        x -= x.min()
        if x.max() <= 0.:
            x /= 1.  # all-zero map; divide by 1 to avoid NaN
        else:
            x /= x.max()
        return x

    def get_names(self):
        """ Print the names of the layers in the model. """
        for name, module in self.model.named_modules():
            print(name, '//', module)

    def forward_hook(self, name, input_hook=False):
        def save_forward_hook(module, input, output):
            if input_hook:
                self.forward_out[name] = input[0].detach()
            else:
                self.forward_out[name] = output.detach()
        return save_forward_hook

    def backward_hook(self, name, input_hook=False):
        def save_backward_hook(module, grad_input, grad_output):
            if input_hook:
                self.backward_out[name] = grad_input[0].detach()
            else:
                self.backward_out[name] = grad_output[0].detach()
        return save_backward_hook

    def get_gradient(self, input_TensorImage, target_layers, target_label=None, counter=False, input_hook=False):
        """
        Compute Grad-CAM maps for the target layers.
        :param input_TensorImage (tensor): Input tensor image with shape [1, c, h, w].
        :param target_layers (str, list): Names of the target layers. A string for one layer, a list for several,
            or 'All' for every conv layer in the model.
        :param target_label (int, tensor): Target label. If None, the index of the highest model output is used.
            Can be an int index into the output, or a tensor with the same shape as the model's output. Default: None
        :param counter (bool): If True, keep only negated negative gradients, for counterfactual explanations. Default: False
        :param input_hook (bool): If True, use the inputs (features and gradients) of the target layers instead of their outputs. Default: False
        :return (list): A list with one Grad-CAM map per target layer.
        """
        if not isinstance(input_TensorImage, torch.Tensor):
            raise NotImplementedError('input_TensorImage must be a torch.Tensor with shape [..., C, H, W]')
        self.model.zero_grad()
        self.forward_out = {}
        self.backward_out = {}
        self.handlers = []
        self.gradients = []
        self.target_layers = target_layers
        if not input_TensorImage.size()[0] == 1:
            raise NotImplementedError("batch size of input_TensorImage must be 1.")
        if not target_layers == 'All':
            if isinstance(target_layers, str) or not isinstance(target_layers, Iterable):
                self.target_layers = [self.target_layers]
            for target_layer in self.target_layers:
                if not isinstance(target_layer, str):
                    raise NotImplementedError("target_layers (and every entry of a target_layers list) must be strings.")
        # register hooks that record the features and gradients of the target layers
        for name, module in self.model.named_modules():
            if target_layers == 'All':
                if isinstance(module, nn.Conv2d):
                    self.handlers.append(module.register_forward_hook(self.forward_hook(name, input_hook)))
                    self.handlers.append(module.register_backward_hook(self.backward_hook(name, input_hook)))
            else:
                if name in self.target_layers:
                    self.handlers.append(module.register_forward_hook(self.forward_hook(name, input_hook)))
                    self.handlers.append(module.register_backward_hook(self.backward_hook(name, input_hook)))
        output = self.model(input_TensorImage)
        if target_label is None:
            target_tensor = torch.zeros_like(output)
            target_tensor[0][int(torch.argmax(output))] = 1.
        else:
            if isinstance(target_label, int):
                target_tensor = torch.zeros_like(output)
                target_tensor[0][target_label] = 1.
            elif isinstance(target_label, torch.Tensor):
                if not target_label.dim() == output.dim():
                    raise NotImplementedError('Dimension of output and target label are different')
                target_tensor = target_label
        output.backward(target_tensor)
        self.model.zero_grad()
        for handle in self.handlers:
            handle.remove()

        def process(name):
            grads = self.backward_out[name]
            if counter:
                grads = torch.clamp(grads, max=0.)
                grads *= -1.
            # alpha_k^c: global average pooling of the gradients
            weight = F.adaptive_avg_pool2d(grads, 1)
            # channel-wise weighted sum of the features, then ReLU
            gradient = self.forward_out[name] * weight
            gradient = gradient.sum(dim=1, keepdim=True)
            gradient = F.relu(gradient)
            gradient = self.normalization(gradient)
            self.gradients.append(gradient)

        if not target_layers == 'All':
            for name in self.target_layers:
                process(name)
        else:
            for name, module in self.model.named_modules():
                if isinstance(module, nn.Conv2d):
                    process(name)
        return self.gradients
GC = GradCam(model)
#### Recommended: run the line below before executing Grad-CAM, to find the target layer's name
#GC.get_names()
target_layers = 'module._encoder.18'
GC_grads = GC.get_gradient(input_TensorImage=data, target_layers=target_layers, target_label=label)
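get_gradient returns a list with one heatmap per target layer, each at the spatial size of that layer's feature map. As noted above, bilinear interpolation brings the map back to the input resolution; a small glue step (assuming data is the [1, c, h, w] input from the snippet above):
GC_grad = GC_grads[0]  # a single target layer was requested, so take the only entry
GC_grad = F.interpolate(GC_grad, size=data.shape[-2:], mode='bilinear', align_corners=False)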
Guided Grad-CAM first computes the Guided Backpropagation and Grad-CAM results separately, then multiplies them element-wise and normalizes. The PyTorch code is as follows:
grad = GBP_grad * GC_grad               # element-wise product; the single-channel GC_grad broadcasts over the color channels
grad -= torch.mean(grad)
grad /= torch.std(grad) + 1e-5          # standardize, then shift/scale into [0, 1] for display
grad *= 0.1
grad += 0.5
grad = torch.clamp(grad, min=0, max=1)
grad = grad.squeeze(0).cpu().numpy()    # [c, h, w]; transpose to [h, w, c] for plotting
5. Grad-CAM++ & Guided Grad-CAM++
The authors observe that visualization methods such as CAM and Grad-CAM rest on the same basic assumption: the confidence score $Y^c$ obtained for a class $c$ (in principle $Y^c$ can be any predicted score, as long as it is a smooth function) can be written as a linear combination of the global average pooling of the feature maps $A^k$ of some convolutional layer:
\[Y^c = \sum_{k} w_{k}^{c} \cdot \sum_{i,j} A_{i,j}^k\]The value $L_{i,j}^c$ of the resulting saliency map for class $c$ at location $(i,j)$ is then:
\[L_{i,j}^c = \sum_{k} w_{k}^{c} \cdot A_{i,j}^k\]The methods differ in how the weights $w_{k}^{c}$ are computed. CAM obtains them by training a linear classifier, at the cost of a fixed network structure; Grad-CAM improves on this by backpropagating gradients and taking the global average pooling of the partial derivatives as $w_{k}^{c}$:
\[w_{k}^{c} = \sum_{i,j} \frac{\partial Y^c}{\partial A_{i,j}^k}\]The main drawbacks of this approach are that, when an image contains several objects of the same class, some of them may not be localized, and the localization may cover only part of an object. The authors argue that a better weighting is a weighted average of the positive partial derivatives of the activation maps, rather than a plain global average:
\[w_{k}^{c} = \sum_{i,j} \alpha_{ij}^{kc} \cdot \mathrm{relu}\!\left(\frac{\partial Y^c}{\partial A_{i,j}^k}\right)\]To derive the coefficients $\alpha_{ij}^{kc}$, substitute this expression into the decomposition of $Y^c$ above:
\[Y^c = \sum_{k} \Big[\sum_{i,j} \Big\{\sum_{a,b} \alpha_{ab}^{kc} \cdot \mathrm{relu}\!\Big(\frac{\partial Y^c}{\partial A_{a,b}^k}\Big)\Big\} A_{i,j}^k\Big]\]Without loss of generality, drop the relu and take the partial derivative on both sides:
\[\frac{\partial Y^c}{\partial A_{i,j}^k} = \sum_{a,b} \alpha_{ab}^{kc} \cdot \frac{\partial Y^c}{\partial A_{a,b}^k} + \sum_{a,b} A_{a,b}^k \Big\{ \alpha_{ij}^{kc} \cdot \frac{\partial^2 Y^c}{(\partial A_{i,j}^k)^2} \Big\}\]Taking the partial derivative once more:
\[\frac{\partial^2 Y^c}{(\partial A_{i,j}^k)^2} = 2 \cdot \alpha_{ij}^{kc} \cdot \frac{\partial^2 Y^c}{(\partial A_{i,j}^k)^2} + \sum_{a,b} A_{a,b}^k \Big\{ \alpha_{ij}^{kc} \cdot \frac{\partial^3 Y^c}{(\partial A_{i,j}^k)^3} \Big\}\]Rearranging gives:
\[\alpha_{ij}^{kc} = \frac{\frac{\partial^2 Y^c}{(\partial A_{i,j}^k)^2}}{2 \cdot \frac{\partial^2 Y^c}{(\partial A_{i,j}^k)^2} + \sum_{a,b} A_{a,b}^k \Big\{ \frac{\partial^3 Y^c}{(\partial A_{i,j}^k)^3} \Big\}}\]In practice, computing higher-order derivatives is hard. But since $Y^c$ can be any smooth function, we may take $Y^c$ to be the exponential of the score $S^c$ of the target feature activation: $Y^c = \exp(S^c)$. Then:
\[\frac{\partial Y^c}{\partial A_{i,j}^k} = \exp(S^c)\frac{\partial S^c}{\partial A_{i,j}^k}\] \[\frac{\partial^2 Y^c}{(\partial A_{i,j}^k)^2} = \exp(S^c)\Big[\Big(\frac{\partial S^c}{\partial A_{i,j}^k}\Big)^2 + \frac{\partial^2 S^c}{(\partial A_{i,j}^k)^2}\Big] \approx \exp(S^c)\Big(\frac{\partial S^c}{\partial A_{i,j}^k}\Big)^2\](for a network built from piecewise-linear activations such as ReLU, the second derivative of $S^c$ vanishes almost everywhere). Similarly:
\[\frac{\partial^3 Y^c}{(\partial A_{i,j}^k)^3} \approx \exp(S^c)\Big(\frac{\partial S^c}{\partial A_{i,j}^k}\Big)^3\]Finally, the computation of $\alpha_{ij}^{kc}$ simplifies to:
\[\alpha_{ij}^{kc} \approx \frac{\big(\frac{\partial S^c}{\partial A_{i,j}^k}\big)^2}{2 \cdot \big(\frac{\partial S^c}{\partial A_{i,j}^k}\big)^2 + \sum_{a,b} A_{a,b}^k \big(\frac{\partial S^c}{\partial A_{i,j}^k}\big)^3}\]The $\exp(S^c)$ factors cancel between numerator and denominator, so these gradient weights can be computed with a single backward pass.
The authors summarize the pipelines of CAM, Grad-CAM, and Grad-CAM++ in the figure below:
The PyTorch code for Grad-CAM++ is as follows:
import torch
import torch.nn as nn
import torch.nn.functional as F
from collections.abc import Iterable

class GradCamplusplus():
    def __init__(self, model):
        super(GradCamplusplus, self).__init__()
        self.model = model
        self.model.eval()  # the model must be in eval mode

    def normalization(self, x):
        x -= x.min()
        if x.max() <= 0.:
            x /= 1.  # all-zero map; divide by 1 to avoid NaN
        else:
            x /= x.max()
        return x

    def get_names(self):
        """ Print the names of the layers in the model. """
        for name, module in self.model.named_modules():
            print(name, '//', module)

    def forward_hook(self, name, input_hook=False):
        def save_forward_hook(module, input, output):
            if input_hook:
                self.forward_out[name] = input[0].detach()
            else:
                self.forward_out[name] = output.detach()
        return save_forward_hook

    def backward_hook(self, name, input_hook=False):
        def save_backward_hook(module, grad_input, grad_output):
            if input_hook:
                self.backward_out[name] = grad_input[0].detach()
            else:
                self.backward_out[name] = grad_output[0].detach()
        return save_backward_hook

    def get_gradient(self, input_TensorImage, target_layers, target_label=None, counter=False, input_hook=False):
        """
        Compute Grad-CAM++ maps for the target layers.
        :param input_TensorImage (tensor): Input tensor image with shape [1, c, h, w].
        :param target_layers (str, list): Names of the target layers. A string for one layer, a list for several,
            or 'All' for every conv layer in the model.
        :param target_label (int, tensor): Target label. If None, the index of the highest model output is used.
            Can be an int index into the output, or a tensor with the same shape as the model's output. Default: None
        :param counter (bool): If True, negate the gradients to obtain counterfactual explanations. Default: False
        :param input_hook (bool): If True, use the inputs (features and gradients) of the target layers instead of their outputs. Default: False
        :return (list): A list with one Grad-CAM++ map per target layer.
        """
        if not isinstance(input_TensorImage, torch.Tensor):
            raise NotImplementedError('input_TensorImage must be a torch.Tensor with shape [..., C, H, W]')
        self.model.zero_grad()
        self.forward_out = {}
        self.backward_out = {}
        self.handlers = []
        self.gradients = []
        self.target_layers = target_layers
        if not input_TensorImage.size()[0] == 1:
            raise NotImplementedError("batch size of input_TensorImage must be 1.")
        if not target_layers == 'All':
            if isinstance(target_layers, str) or not isinstance(target_layers, Iterable):
                self.target_layers = [self.target_layers]
            for target_layer in self.target_layers:
                if not isinstance(target_layer, str):
                    raise NotImplementedError("target_layers (and every entry of a target_layers list) must be strings.")
        # register hooks that record the features and gradients of the target layers
        for name, module in self.model.named_modules():
            if target_layers == 'All':
                if isinstance(module, nn.Conv2d):
                    self.handlers.append(module.register_forward_hook(self.forward_hook(name, input_hook)))
                    self.handlers.append(module.register_backward_hook(self.backward_hook(name, input_hook)))
            else:
                if name in self.target_layers:
                    self.handlers.append(module.register_forward_hook(self.forward_hook(name, input_hook)))
                    self.handlers.append(module.register_backward_hook(self.backward_hook(name, input_hook)))
        output = self.model(input_TensorImage)
        if target_label is None:
            target_tensor = torch.zeros_like(output)
            target_tensor[0][int(torch.argmax(output))] = 1.
        else:
            if isinstance(target_label, int):
                target_tensor = torch.zeros_like(output)
                target_tensor[0][target_label] = 1.
            elif isinstance(target_label, torch.Tensor):
                if not target_label.dim() == output.dim():
                    raise NotImplementedError('Dimension of output and target label are different')
                target_tensor = target_label
        output.backward(target_tensor)
        self.model.zero_grad()
        for handle in self.handlers:
            handle.remove()

        def process(name):
            features = self.forward_out[name]
            grads = self.backward_out[name]
            if counter:
                grads *= -1.
            relu_grads = F.relu(grads)
            # alpha_ij^{kc} = grad^2 / (2 * grad^2 + (sum_{a,b} A^k_{a,b}) * grad^3)
            alpha_numer = grads.pow(2)
            alpha_denom = 2. * grads.pow(2) + grads.pow(3) * features.sum(dim=-1, keepdim=True).sum(dim=-2, keepdim=True)
            alpha_denom = torch.where(alpha_denom != 0.0, alpha_denom, torch.ones_like(alpha_denom))
            alpha = alpha_numer / alpha_denom
            # w_k^c = sum_{i,j} alpha_ij^{kc} * relu(dY/dA): weighted average of the positive gradients
            weight = (alpha * relu_grads).sum(dim=-1, keepdim=True).sum(dim=-2, keepdim=True)
            # channel-wise weighted sum of the features, then ReLU
            gradient = features * weight
            gradient = gradient.sum(dim=1, keepdim=True)
            gradient = F.relu(gradient)
            gradient = self.normalization(gradient)
            self.gradients.append(gradient)

        if not target_layers == 'All':
            for name in self.target_layers:
                process(name)
        else:
            for name, module in self.model.named_modules():
                if isinstance(module, nn.Conv2d):
                    process(name)
        return self.gradients
GCplpl = GradCamplusplus(model)
#### Recommended: run the line below before executing Grad-CAM++, to find the target layer's name
#GCplpl.get_names()
target_layers = 'module._encoder.18'
GCplpl_grads = GCplpl.get_gradient(input_TensorImage=data, target_layers=target_layers, target_label=label)
GCplpl_grad = F.interpolate(GCplpl_grads[0], size=data.shape[-2:], mode='bilinear', align_corners=False)  # returns a list; take the map and upsample it to the input size
Guided Grad-CAM++ first computes the Guided Backpropagation and Grad-CAM++ results separately, then multiplies them element-wise and normalizes. The PyTorch code is as follows:
grad = GBP_grad * GCplpl_grad           # element-wise product; the single-channel GCplpl_grad broadcasts over the color channels
grad -= torch.mean(grad)
grad /= torch.std(grad) + 1e-5          # standardize, then shift/scale into [0, 1] for display
grad *= 0.1
grad += 0.5
grad = torch.clamp(grad, min=0, max=1)
grad = grad.squeeze(0).cpu().numpy()    # [c, h, w]; transpose to [h, w, c] for plotting