When should I stop training:
Usually 2000 iterations are sufficient for each class (object). But for a more precise way to decide when you should stop training, use the following guide:
- During training, you will see various error indicators; you should stop when the 0.XXXXXXX avg value no longer decreases:
Region Avg IOU: 0.798363, Class: 0.893232, Obj: 0.700808, No Obj: 0.004567, Avg Recall: 1.000000, count: 8
Region Avg IOU: 0.800677, Class: 0.892181, Obj: 0.701590, No Obj: 0.004574, Avg Recall: 1.000000, count: 8
9002: 0.211667, 0.060730 avg, 0.001000 rate, 3.868000 seconds, 576128 images
Loaded: 0.000000 seconds
- 9002 - iteration number (batch number)
- 0.060730 avg - average loss (error) - the lower, the better
When you see that the average loss 0.xxxxxx avg no longer decreases over many iterations, you should stop training.
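The stopping rule above can be sketched as a small log parser. This is a hypothetical helper (not part of darknet itself) that assumes the log line format shown above, e.g. `9002: 0.211667, 0.060730 avg, ...`:

```python
import re

def avg_losses(log_lines):
    """Extract the running average loss from darknet-style training log lines.

    Assumes lines of the form "9002: 0.211667, 0.060730 avg, ...".
    """
    pattern = re.compile(r"^\s*\d+:\s*[\d.]+,\s*([\d.]+)\s+avg")
    losses = []
    for line in log_lines:
        m = pattern.match(line)
        if m:
            losses.append(float(m.group(1)))
    return losses

def has_plateaued(losses, window=1000, tolerance=1e-3):
    """True if the average loss improved by less than `tolerance` over the
    last `window` iterations -- a rough heuristic for "no longer decreases".
    The window and tolerance values are illustrative, not darknet defaults."""
    if len(losses) < window:
        return False
    return losses[-window] - losses[-1] < tolerance

log = ["9002: 0.211667, 0.060730 avg, 0.001000 rate, 3.868000 seconds, 576128 images"]
print(avg_losses(log))  # [0.06073]
```

In practice you would redirect the darknet console output to a file and feed its lines to `avg_losses`, then check `has_plateaued` periodically.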
- Once training is stopped, take some of the last .weights files from darknet\build\darknet\x64\backup and choose the best of them:
For example, you stopped training after 9000 iterations, but the best result may come from one of the earlier weights (7000, 8000, 9000). This can happen due to overfitting. Overfitting is the case where the model can detect objects in images from the training dataset but cannot detect objects in any other images. You should get weights from the Early Stopping Point:
To get weights from Early Stopping Point:
2.1. First, in your file obj.data you must specify the path to the validation dataset: valid = valid.txt (valid.txt has the same format as train.txt). If you have no validation images, just copy data\train.txt to data\valid.txt.
2.2. If training stopped after 9000 iterations, use these commands to validate some of the previous weights:
(If you use another GitHub repository, use darknet.exe detector recall ... instead of darknet.exe detector map ...)
- darknet.exe detector map data/obj.data yolo-obj.cfg backup\yolo-obj_7000.weights
- darknet.exe detector map data/obj.data yolo-obj.cfg backup\yolo-obj_8000.weights
- darknet.exe detector map data/obj.data yolo-obj.cfg backup\yolo-obj_9000.weights
Then compare the last output lines for each set of weights (7000, 8000, 9000):
Choose the weights file with the highest IoU (intersection over union) and mAP (mean average precision).
For example, if yolo-obj_8000.weights gives the highest IoU, then use these weights for detection.
Example of custom object detection: darknet.exe detector test data/obj.data yolo-obj.cfg yolo-obj_8000.weights
IoU (intersection over union) - the average intersection over union of ground-truth objects and detections for a certain threshold = 0.24
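The IoU of a single pair of boxes can be computed as follows. This is an illustrative helper, not darknet's own code; it assumes boxes are given as (x1, y1, x2, y2) corner coordinates:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes,
    each given as (x1, y1, x2, y2) corner coordinates."""
    # Corners of the intersection rectangle
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Two 10x10 boxes offset by 5 overlap in a 5x5 region: IoU = 25 / 175
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))
```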
mAP (mean average precision) - the mean of the average precisions over all classes, where the average precision for a class is the average of 11 points on the PR curve over each possible threshold (each probability of detection) for that class (Precision-Recall in PascalVOC terms, where Precision = TP/(TP+FP) and Recall = TP/(TP+FN)), page 11: http://homepages.inf.ed.ac.uk/ckiw/postscript/ijcv_voc09.pdf
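The 11-point interpolation described above can be sketched like this. This is an illustrative implementation of the PascalVOC-style rule, not the evaluation code darknet actually runs; it assumes you already have the precision/recall pairs of one class:

```python
def eleven_point_ap(precisions, recalls):
    """11-point interpolated average precision (PascalVOC style):
    at each recall level r in {0.0, 0.1, ..., 1.0}, take the maximum
    precision over all PR points whose recall >= r, then average
    those 11 values."""
    total = 0.0
    for i in range(11):
        r = i / 10.0
        # Interpolated precision: best precision achievable at recall >= r
        candidates = [p for p, rec in zip(precisions, recalls) if rec >= r]
        total += max(candidates) if candidates else 0.0
    return total / 11.0

# A detector reaching precision 1.0 at recall 0.5 and 0.5 at recall 1.0:
print(eleven_point_ap([1.0, 0.5], [0.5, 1.0]))
```

mAP is then simply the mean of this per-class AP over all classes.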
mAP is the default precision metric in the PascalVOC competition and is the same as the AP50 metric in the MS COCO competition. In Wikipedia's terms, the indicators Precision and Recall have a slightly different meaning than in the PascalVOC competition, but IoU always has the same meaning.