I've read AlignedReID++: Dynamically Matching Local Information for Person Re-identification [2]. Compared with AlignedReID: Surpassing Human-Level Performance in Person Re-Identification [1], AlignedReID++ largely inherits the core of its predecessor, fleshes the content out, makes a few minor changes, and updates the experiments. The paper itself is very readable and explains things more clearly than AlignedReID, so it works well as supplementary reading to AlignedReID.
AlignedReID++ paper link: https://www.sciencedirect.com/science/article/abs/pii/S0031320319302031
My earlier notes on AlignedReID:
https://www.jianshu.com/p/dcd8f6166e65
https://blog.csdn.net/fisherish/article/details/104184906
Paper at a glance:
The paper renames the stripe-alignment method proposed in AlignedReID: Dynamically Matching Local Information (DMLI). It automatically aligns stripe information without introducing extra supervision, addressing the pedestrian misalignment caused by bounding box errors, occlusion, viewpoint variation, pose variation, and so on. AlignedReID++ applies DMLI to the local features, combines the multiple granularities of the global and local features, and jointly trains with a triplet hard (TriHard) loss and an ID loss, reaching better person re-identification accuracy.
Since I have already read AlignedReID, I will only briefly cover where AlignedReID++ differs from it.
Model
AlignedReID++ is explained in more detail than AlignedReID and is easier to read.
Model structure:
This figure shows the AlignedReID++ architecture well: the lower global-feature branch participates in both the ID loss and the TriHard loss, something the AlignedReID figure did not show. The rest is almost identical to AlignedReID. The difference is that AlignedReID did not use DMLI at test time, whereas AlignedReID++ does use DMLI for alignment at test time, which improves the final recognition accuracy.
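The TriHard loss used here mines, for every anchor in the batch, its hardest (farthest) positive and hardest (closest) negative, then applies a hinge with a margin. A minimal NumPy sketch of that batch-hard mining on a precomputed distance matrix (the function name and default margin are mine, for illustration only):

```python
import numpy as np

def trihard_loss(dist, labels, margin=0.3):
    """Batch-hard triplet loss on a precomputed distance matrix.

    dist:   [N, N] pairwise distances within the batch
    labels: [N] identity labels
    For each anchor, take the hardest (farthest) positive and the
    hardest (closest) negative, then apply a hinge with a margin.
    """
    N = len(labels)
    losses = []
    for i in range(N):
        pos = (labels == labels[i])
        pos[i] = False                      # exclude the anchor itself
        neg = (labels != labels[i])
        d_ap = dist[i][pos].max()           # hardest positive
        d_an = dist[i][neg].min()           # hardest negative
        losses.append(max(0.0, margin + d_ap - d_an))
    return float(np.mean(losses))
```

In AlignedReID++ this mining is driven by the aligned local distance as well as the global distance; the sketch above only shows the batch-hard selection itself.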
Let's look at the test function in the main script:
def test(model, queryloader, galleryloader, use_gpu, ranks=[1, 5, 10, 20]):
    batch_time = AverageMeter()
    model.eval()
    with torch.no_grad():
        qf, q_pids, q_camids, lqf = [], [], [], []
        for batch_idx, (imgs, pids, camids) in enumerate(queryloader):
            if use_gpu: imgs = imgs.cuda()
            end = time.time()
            features, local_features = model(imgs)
            batch_time.update(time.time() - end)
            features = features.data.cpu()
            local_features = local_features.data.cpu()
            qf.append(features)
            lqf.append(local_features)
            q_pids.extend(pids)
            q_camids.extend(camids)
        qf = torch.cat(qf, 0)
        lqf = torch.cat(lqf, 0)
        q_pids = np.asarray(q_pids)
        q_camids = np.asarray(q_camids)
        print("Extracted features for query set, obtained {}-by-{} matrix".format(qf.size(0), qf.size(1)))

        gf, g_pids, g_camids, lgf = [], [], [], []
        end = time.time()
        for batch_idx, (imgs, pids, camids) in enumerate(galleryloader):
            if use_gpu: imgs = imgs.cuda()
            end = time.time()
            features, local_features = model(imgs)
            batch_time.update(time.time() - end)
            features = features.data.cpu()
            local_features = local_features.data.cpu()
            gf.append(features)
            lgf.append(local_features)
            g_pids.extend(pids)
            g_camids.extend(camids)
        gf = torch.cat(gf, 0)
        lgf = torch.cat(lgf, 0)
        g_pids = np.asarray(g_pids)
        g_camids = np.asarray(g_camids)
        print("Extracted features for gallery set, obtained {}-by-{} matrix".format(gf.size(0), gf.size(1)))

    print("==> BatchTime(s)/BatchSize(img): {:.3f}/{}".format(batch_time.avg, args.test_batch))
    # feature normalization
    qf = 1. * qf / (torch.norm(qf, 2, dim=-1, keepdim=True).expand_as(qf) + 1e-12)
    gf = 1. * gf / (torch.norm(gf, 2, dim=-1, keepdim=True).expand_as(gf) + 1e-12)
    m, n = qf.size(0), gf.size(0)
    # squared Euclidean distance: ||q||^2 + ||g||^2 - 2 * q . g
    distmat = torch.pow(qf, 2).sum(dim=1, keepdim=True).expand(m, n) + \
              torch.pow(gf, 2).sum(dim=1, keepdim=True).expand(n, m).t()
    distmat.addmm_(1, -2, qf, gf.t())
    distmat = distmat.numpy()
    if args.test_distance != 'global':
        # also compute the DMLI-aligned local distance matrix
        from util.distance import low_memory_local_dist
        lqf = lqf.permute(0, 2, 1)
        lgf = lgf.permute(0, 2, 1)
        local_distmat = low_memory_local_dist(lqf.numpy(), lgf.numpy(), aligned=not args.unaligned)
        if args.test_distance == 'local':
            print("Only using local branch")
            distmat = local_distmat
        if args.test_distance == 'global_local':
            print("Using global and local branches")
            distmat = local_distmat + distmat
    print("Computing CMC and mAP")
    cmc, mAP = evaluate(distmat, q_pids, g_pids, q_camids, g_camids, use_metric_cuhk03=args.use_metric_cuhk03)
    print("Results ----------")
    print("mAP: {:.1%}".format(mAP))
    print("CMC curve")
    for r in ranks:
        print("Rank-{:<3}: {:.1%}".format(r, cmc[r - 1]))
    print("------------------")
    if args.reranking:
        from util.re_ranking import re_ranking
        if args.test_distance == 'global':
            print("Only using global branch for reranking")
            distmat = re_ranking(qf, gf, k1=20, k2=6, lambda_value=0.3)
        else:
            local_qq_distmat = low_memory_local_dist(lqf.numpy(), lqf.numpy(), aligned=not args.unaligned)
            local_gg_distmat = low_memory_local_dist(lgf.numpy(), lgf.numpy(), aligned=not args.unaligned)
            local_dist = np.concatenate(
                [np.concatenate([local_qq_distmat, local_distmat], axis=1),
                 np.concatenate([local_distmat.T, local_gg_distmat], axis=1)],
                axis=0)
            if args.test_distance == 'local':
                print("Only using local branch for reranking")
                distmat = re_ranking(qf, gf, k1=20, k2=6, lambda_value=0.3, local_distmat=local_dist, only_local=True)
            elif args.test_distance == 'global_local':
                print("Using global and local branches for reranking")
                distmat = re_ranking(qf, gf, k1=20, k2=6, lambda_value=0.3, local_distmat=local_dist, only_local=False)
        print("Computing CMC and mAP for re_ranking")
        cmc, mAP = evaluate(distmat, q_pids, g_pids, q_camids, g_camids, use_metric_cuhk03=args.use_metric_cuhk03)
        print("Results ----------")
        print("mAP(RK): {:.1%}".format(mAP))
        print("CMC curve(RK)")
        for r in ranks:
            print("Rank-{:<3}: {:.1%}".format(r, cmc[r - 1]))
        print("------------------")
    return cmc[0]
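The distmat lines above implement the standard squared-Euclidean expansion ||q - g||^2 = ||q||^2 + ||g||^2 - 2 q.g (the deprecated-style addmm_(1, -2, qf, gf.t()) call adds -2 * qf @ gf.t() in place). A small NumPy sketch of the same identity, with made-up toy features:

```python
import numpy as np

# toy, already L2-normalized query/gallery features (made up for illustration)
qf = np.array([[1.0, 0.0], [0.0, 1.0]])   # m = 2 queries
gf = np.array([[1.0, 0.0], [0.6, 0.8]])   # n = 2 gallery images

# ||q - g||^2 = ||q||^2 + ||g||^2 - 2 * q . g, exactly as in the test function
distmat = (np.square(qf).sum(1, keepdims=True)       # ||q||^2, column vector
           + np.square(gf).sum(1, keepdims=True).T   # ||g||^2, row vector
           - 2.0 * qf @ gf.T)                        # cross term

# brute-force check against the definition
brute = np.array([[np.sum((q - g) ** 2) for g in gf] for q in qf])
assert np.allclose(distmat, brute)
```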
Quite long. The aligned distance computation lives in the low_memory_local_dist function; let's expand it:
def low_memory_local_dist(x, y, aligned=True):
    print('Computing local distance...')
    x_num_splits = int(len(x) / 200) + 1
    y_num_splits = int(len(y) / 200) + 1
    z = low_memory_matrix_op(local_dist, x, y, 0, 0, x_num_splits, y_num_splits, verbose=True, aligned=aligned)
    return z
Expanding the low_memory_matrix_op and local_dist used inside it:
def low_memory_matrix_op(
        func,
        x, y,
        x_split_axis, y_split_axis,
        x_num_splits, y_num_splits,
        verbose=False, aligned=True):
    """
    For matrix operation like multiplication, in order not to flood the memory
    with huge data, split matrices into smaller parts (Divide and Conquer).
    Note:
        If still out of memory, increase `*_num_splits`.
    Args:
        func: a matrix function func(x, y) -> z with shape [M, N]
        x: numpy array, the dimension to split has length M
        y: numpy array, the dimension to split has length N
        x_split_axis: The axis to split x into parts
        y_split_axis: The axis to split y into parts
        x_num_splits: number of splits. 1 <= x_num_splits <= M
        y_num_splits: number of splits. 1 <= y_num_splits <= N
        verbose: whether to print the progress
    Returns:
        mat: numpy array, shape [M, N]
    """
    if verbose:
        import sys
        import time
        printed = False
        st = time.time()
        last_time = time.time()
    mat = [[] for _ in range(x_num_splits)]
    for i, part_x in enumerate(
            np.array_split(x, x_num_splits, axis=x_split_axis)):
        for j, part_y in enumerate(
                np.array_split(y, y_num_splits, axis=y_split_axis)):
            part_mat = func(part_x, part_y, aligned)
            mat[i].append(part_mat)
            if verbose:
                if not printed:
                    printed = True
                else:
                    # Clean the current line
                    sys.stdout.write("\033[F\033[K")
                print('Matrix part ({}, {}) / ({}, {}), +{:.2f}s, total {:.2f}s'
                      .format(i + 1, j + 1, x_num_splits, y_num_splits,
                              time.time() - last_time, time.time() - st))
                last_time = time.time()
        mat[i] = np.concatenate(mat[i], axis=1)
    mat = np.concatenate(mat, axis=0)
    return mat


def local_dist(x, y, aligned):
    if (x.ndim == 2) and (y.ndim == 2):
        return meta_local_dist(x, y, aligned)
    elif (x.ndim == 3) and (y.ndim == 3):
        return parallel_local_dist(x, y, aligned)
    else:
        raise NotImplementedError('Input shape not supported.')
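The divide-and-conquer idea is easy to check on a toy example: splitting both inputs, applying the function to every pair of parts, and stitching the results back reproduces the full matrix. A minimal sketch (the blockwise_op and sqdist names are mine, not from the repo):

```python
import numpy as np

def blockwise_op(func, x, y, x_splits, y_splits):
    """Minimal sketch of the divide-and-conquer idea in low_memory_matrix_op:
    split x and y along axis 0, apply func to every pair of parts,
    and stitch the results back into the full [M, N] matrix."""
    rows = []
    for px in np.array_split(x, x_splits, axis=0):
        row = [func(px, py) for py in np.array_split(y, y_splits, axis=0)]
        rows.append(np.concatenate(row, axis=1))
    return np.concatenate(rows, axis=0)

def sqdist(a, b):
    """Pairwise squared Euclidean distances as the example func."""
    return np.square(a[:, None, :] - b[None, :, :]).sum(-1)

x = np.arange(12, dtype=float).reshape(6, 2)
y = np.arange(8, dtype=float).reshape(4, 2)
# blockwise result matches the one-shot computation exactly
assert np.allclose(blockwise_op(sqdist, x, y, 3, 2), sqdist(x, y))
```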
As in the annotated AlignedReID source earlier, the shortest-distance/DMLI computation hides inside local_dist: after the stripe distance matrix is computed, the aligned distance is derived as follows:
def meta_local_dist(x, y, aligned):
    """
    Args:
        x: numpy array, with shape [m, d]
        y: numpy array, with shape [n, d]
    Returns:
        dist: scalar
    """
    eu_dist = compute_dist(x, y, 'euclidean')
    dist_mat = (np.exp(eu_dist) - 1.) / (np.exp(eu_dist) + 1.)
    if aligned:
        dist = shortest_dist(dist_mat[np.newaxis])[0]
    else:
        dist = unaligned_dist(dist_mat[np.newaxis])[0]
    return dist


def parallel_local_dist(x, y, aligned):
    """Parallel version.
    Args:
        x: numpy array, with shape [M, m, d]
        y: numpy array, with shape [N, n, d]
    Returns:
        dist: numpy array, with shape [M, N]
    """
    M, m, d = x.shape
    N, n, d = y.shape
    x = x.reshape([M * m, d])
    y = y.reshape([N * n, d])
    # shape [M * m, N * n]
    dist_mat = compute_dist(x, y, type='euclidean')
    dist_mat = (np.exp(dist_mat) - 1.) / (np.exp(dist_mat) + 1.)
    # shape [M * m, N * n] -> [M, m, N, n] -> [m, n, M, N]
    dist_mat = dist_mat.reshape([M, m, N, n]).transpose([1, 3, 0, 2])
    # shape [M, N]
    if aligned:
        dist_mat = shortest_dist(dist_mat)
    else:
        dist_mat = unaligned_dist(dist_mat)
    return dist_mat
The shortest_dist here is DMLI, the same as before; the difference is that this time it is also used at the test stage.
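For a single [m, n] stripe-distance matrix, the shortest-path computation is a simple dynamic program from corner (0, 0) to (m-1, n-1), moving only right or down. A NumPy sketch of that DP (scalar version; the shortest_dist called above is batched over the trailing axes, as its [m, n, M, N] input shows):

```python
import numpy as np

def shortest_dist(dist_mat):
    """Dynamic-programming shortest path from (0, 0) to (m-1, n-1),
    moving only right or down -- this is the DMLI alignment.
    dist_mat: [m, n] stripe-to-stripe distances. Returns a scalar."""
    m, n = dist_mat.shape
    dp = np.zeros((m, n))
    dp[0, 0] = dist_mat[0, 0]
    for j in range(1, n):                  # first row: only moves right
        dp[0, j] = dp[0, j - 1] + dist_mat[0, j]
    for i in range(1, m):                  # first column: only moves down
        dp[i, 0] = dp[i - 1, 0] + dist_mat[i, 0]
    for i in range(1, m):
        for j in range(1, n):
            dp[i, j] = min(dp[i - 1, j], dp[i, j - 1]) + dist_mat[i, j]
    return dp[-1, -1]
```

The monotonicity of the path (only right or down) is what keeps the stripe matching order-preserving from head to foot.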
The DMLI illustration is as follows:
ResNet50 produces (batch, 2048, 8, 4) features, horizontal pooling reduces them to (batch, 2048, 8, 1), and then dynamic alignment is performed. Note another difference from AlignedReID: the number of stripes, previously 7, is now 8. The illustration is also much easier to read now; presented this way it is very clear.
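The horizontal pooling step can be sketched in NumPy as a reduction over the width axis (max pooling is assumed here; whether max or average pooling is used is an implementation detail and does not change the shapes):

```python
import numpy as np

def horizontal_pool(feat):
    """Collapse the width axis of a conv feature map, keeping one
    value per horizontal stripe: (B, C, H, W) -> (B, C, H, 1).
    Max over width is assumed; mean pooling works the same way."""
    return feat.max(axis=3, keepdims=True)

feat = np.random.rand(2, 2048, 8, 4)       # ResNet50 output, as in the figure
pooled = horizontal_pool(feat)
assert pooled.shape == (2, 2048, 8, 1)     # 8 stripes for the local features
```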
Recall that the stripe distances are normalized before computing DMLI, as seen in the code above: d(x) = (e^x - 1) / (e^x + 1), where x is the Euclidean distance between two stripes.
The authors explain that this normalization has two nice properties. First, d is positively correlated with x: it preserves the monotonicity of x and keeps training stable. Second, the derivative of d with respect to x, 2e^x / (e^x + 1)^2, is negatively correlated with x:
the smaller the distance, the larger the gradient; the matching direction is right, so gradient descent moves faster. When the distance is large, the direction is wrong and the gradient is small, resisting the descent. As a result, the total distance along the shortest path, i.e. the local distance between two images, is mainly determined by the well-aligned stripe pairs.
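Both properties are easy to verify numerically; a quick sketch (d here is the same (e^x - 1)/(e^x + 1) used in meta_local_dist; note it equals tanh(x/2)):

```python
import numpy as np

# d(x) = (e^x - 1) / (e^x + 1) squashes distances into [0, 1);
# its gradient 2 e^x / (e^x + 1)^2 shrinks as x grows (for x >= 0),
# so well-aligned (small-distance) stripe pairs dominate the total.
def d(x):
    return (np.exp(x) - 1.0) / (np.exp(x) + 1.0)

def d_grad(x):
    return 2.0 * np.exp(x) / (np.exp(x) + 1.0) ** 2

xs = np.array([0.0, 1.0, 2.0, 5.0])
assert np.all(np.diff(d(xs)) > 0)          # monotonically increasing
assert np.all(np.diff(d_grad(xs)) < 0)     # gradient decays with distance
```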
Experiments
The experiments have also changed somewhat. The paper evaluates DMLI applied to the outputs of different network blocks; I noticed that adding TriHard at block4 brings only a tiny improvement. It also shows how heavily ReID depends on the backbone's feature extraction.
The authors visualize the alignment results from these four blocks' outputs; only the high-level features align robustly.
Then the usual item, the ablation study:
It shows that training full images with the TriHard loss (global) and the Softmax loss together is effective by itself, but the DMLI (local) branch trained alone with the Softmax loss is more useful than the global branch. Since the global branch performs poorly on its own, adding it does not necessarily help the result.
After alignment, the network can correctly separate hard sample pairs it used to get wrong:
Before alignment, the positive pair's distance was even higher than the negative pair's, a clear misjudgment; after alignment, the positive pair's distance falls below the negative pair's.
The authors generated a batch of partial datasets of their own by cropping, mainly from Market1501 and DukeMTMC, and ran ablations on them:
Next comes a comparison with other stripe-based methods. PCB is higher than AlignedReID on full-image datasets, so AlignedReID++ takes a different angle and surpasses it on the partial datasets; the results:
(Whispering) So the SOTA experiments that follow no longer compare against PCB:
MSMT17 had been released by then, so the paper also runs experiments on MSMT17.
Questions
Recall that ablation study:
After adding the global branch, AlignedReID++(LS) actually loses accuracy. What is the reason? Why does the global component fail to provide useful information here? That is worth investigating.
References
[1] Zhang X, Luo H, Fan X, et al. AlignedReID: Surpassing human-level performance in person re-identification[J]. arXiv preprint arXiv:1711.08184, 2017.
[2] Luo H, Jiang W, Zhang X, et al. AlignedReID++: Dynamically matching local information for person re-identification[J]. Pattern Recognition, 2019, 94: 53-61.