https://blog.csdn.net/sgfmby1994/article/details/98517210
YOLOv3–v5 performs detection; a Re-ID network performs feature extraction; DeepSORT handles: Kalman-filter prediction of detection-box positions, feature-similarity comparison, IoU (overlap) computation, the matching cascade, and the Hungarian assignment algorithm.
Paper: Simple Online and Realtime Tracking with a Deep Association Metric.
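One of the components listed above is the IoU (overlap) between two boxes. A minimal standalone sketch (the function name and the (x1, y1, x2, y2) corner format are my own choices for illustration, not the repo's exact API):

```python
def iou(box_a, box_b):
    """IoU of two boxes given in (x1, y1, x2, y2) corner format."""
    # Intersection rectangle
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    iw = max(0.0, ix2 - ix1)
    ih = max(0.0, iy2 - iy1)
    inter = iw * ih
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```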
Overall idea: compute a distance (cost) matrix between the existing tracks and the current detection boxes:

| track | box1 | box2 |
|---|---|---|
| track1 | dist1-1 | dist1-2 |
| track2 | dist2-1 | dist2-2 |

then solve the assignment; the result is a list of (track-row, box-col) pairs:

| row | col |
|---|---|
| track-row | box-col |
| track-row | box-col |
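The distance table above is exactly the input to the Hungarian assignment. A minimal sketch using `scipy.optimize.linear_sum_assignment` (the cost values are made up for illustration):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Rows = tracks, columns = detection boxes; entries are distances (costs).
cost = np.array([
    [0.1, 0.9],   # track1 vs box1, box2
    [0.8, 0.2],   # track2 vs box1, box2
])

rows, cols = linear_sum_assignment(cost)  # minimizes total assignment cost
pairs = list(zip(rows, cols))             # [(track-row, box-col), ...]
```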
1. Predict: iterate over the track list and run the Kalman-filter prediction for each track: tracker->predict();
2. Match. The results fall into three classes: matched pairs, unmatched tracks, and unmatched detections.
2.1 First, match the confirmed tracks against all detections with min_cost_matching (linear assignment).
2.2 Then, match the (unconfirmed + previously unmatched tracks with time_since_update == 1) against the still-unmatched detections with min_cost_matching (linear assignment on IoU cost).
3. Process the matching results: matched tracks are updated with their detections, unmatched tracks are marked missed, and unmatched detections initialize new tracks.
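The steps above can be sketched as a simplified min_cost_matching that returns the three result classes of step 2. This is illustrative only: the real repo version additionally applies gating, and the max_distance threshold here is my own simplification:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def min_cost_matching(cost, max_distance):
    """Linear assignment with a cost threshold.

    Returns (matches, unmatched_tracks, unmatched_detections):
    the three result classes produced by the matching step.
    """
    rows, cols = linear_sum_assignment(cost)
    matches, unmatched_tracks, unmatched_dets = [], [], []
    for r, c in zip(rows, cols):
        if cost[r, c] > max_distance:
            # Assigned by the solver, but too costly to count as a match.
            unmatched_tracks.append(r)
            unmatched_dets.append(c)
        else:
            matches.append((r, c))
    # Rows/columns the solver never assigned (non-square cost matrix).
    unmatched_tracks += [r for r in range(cost.shape[0]) if r not in rows]
    unmatched_dets += [c for c in range(cost.shape[1]) if c not in cols]
    return matches, unmatched_tracks, unmatched_dets
```

With two tracks and three detections, one detection necessarily ends up unmatched and would seed a new track in step 3.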
DeepSORT re-identification: POI: Multiple Object Tracking with High Performance Detection and Appearance Feature. In BMTT, SenseTime Group Limited, 2016. Original version: https://github.com/nwojke/deep_sort
PyTorch version: https://github.com/ZQPei/deep_sort_pytorch
This is an implementation of the MOT tracking algorithm Deep SORT. Deep SORT is basically the same as SORT, but adds a CNN model to extract features from the image region of each person bounded by a detector. This CNN model is in fact a Re-ID model; the detector used in the paper is Faster R-CNN, and the original source code is the nwojke repository linked above. However, in the original code the CNN model is implemented in TensorFlow, which I'm not familiar with, so I re-implemented the CNN feature-extraction model in PyTorch and changed the CNN model a little. I also use YOLOv3 to generate bounding boxes instead of Faster R-CNN.
The original model used in the paper is in original_model.py, and its parameters are in original_ckpt.t7. To train the model, first download the Market1501 dataset or the MARS dataset. Then you can use train.py to train your own parameters and evaluate them with test.py and evaluate.py.
TF version: https://github.com/LeonLok/Deep-SORT-YOLOv4 Training reference: https://github.com/nwojke/cosine_metric_learning
Steps: read through the deep_sort_pytorch pipeline and try tracking non-pedestrian targets. Prepare the YOLOv3 model, the DeepSORT model, and the compiled NMS module.

Main code modules:

```python
from detector import build_detector
from deep_sort import build_tracker
```

Core code:

```python
# 1. Prepare input: convert to an RGB image
im = cv2.cvtColor(ori_im, cv2.COLOR_BGR2RGB)
# 2. Object detection
bbox_xywh, cls_conf, cls_ids = self.detector(im)
# 3. Keep only the pedestrian detections (class id 0)
mask = cls_ids == 0
bbox_xywh = bbox_xywh[mask]
bbox_xywh[:, 3:] *= 1.2  # enlarge the boxes slightly to cover the whole person
cls_conf = cls_conf[mask]
# 4. Track
outputs = self.deepsort.update(bbox_xywh, cls_conf, im)
# 5. Draw the resulting boxes
if len(outputs) > 0:
    bbox_tlwh = []
    bbox_xyxy = outputs[:, :4]
    identities = outputs[:, -1]
    ori_im = draw_boxes(ori_im, bbox_xyxy, identities)
    for bb_xyxy in bbox_xyxy:
        bbox_tlwh.append(self.deepsort._xyxy_to_tlwh(bb_xyxy))
    results.append((idx_frame - 1, bbox_tlwh, identities))
```
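The _xyxy_to_tlwh call above converts corner format to top-left/width/height format. A standalone equivalent (my own sketch, not the repo's exact code):

```python
def xyxy_to_tlwh(box):
    """(x1, y1, x2, y2) -> (top-left x, top-left y, width, height)."""
    x1, y1, x2, y2 = box
    return (x1, y1, x2 - x1, y2 - y1)
```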
Read through the deep_sort_pytorch training pipeline and train the Re-ID model.
Workflow:
- Review PyTorch basics.
- Read through DeepSORT's Re-ID model.
- Compare the TF and PyTorch models.
- Port to dnn.
- Port the tracker to C++.
- Train our own model: collect public datasets and capture our own data.