python - How to find the confidence of each class in YOLO darknet

Tags: python opencv deep-learning yolo darknet

I developed a custom object detector with tiny YOLO and darknet. It works well, but I need one specific feature: the network outputs bounding boxes, each represented by a vector of (number of classes + 5) elements. The first 4 elements represent center_x, center_y, width and height. The fifth element represents the confidence that the bounding box encloses an object. The remaining elements are the confidences associated with each class (i.e. object type). For each box, I need the confidence associated with each class, but in the output I only have the maximum confidence; all the other confidence values are 0.
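For reference, a minimal sketch of how one such detection vector is laid out, assuming a 3-class model (so 3 + 5 = 8 elements per detection; the numbers here are made up):

import numpy as np

# hypothetical detection vector for a 3-class model:
# [center_x, center_y, width, height, objectness, class_0, class_1, class_2]
detection = np.array([0.52, 0.48, 0.30, 0.41, 0.91, 0.0, 0.0, 0.5874982])

box = detection[0:4]        # normalized center_x, center_y, width, height
objectness = detection[4]   # confidence that the box contains an object
scores = detection[5:]      # one confidence per class
classId = int(np.argmax(scores))
print(classId, scores[classId])   # -> 2 0.5874982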

Example run:

print(scores) returns

[0.        0.        0.5874982]

0.5874982 is the maximum confidence; it belongs to the third class. But I don't understand why all the other class confidences are 0. Thanks for any reply, and sorry for my bad English. Here is the code:

import cv2 as cv
import argparse
import sys
import numpy as np
import os.path

confThreshold = 0.5     #Confidence threshold
nmsThreshold = 0.6      #Non-maximum suppression threshold
inpWidth = 416          #Width of network's input image
inpHeight = 416         #Height of network's input image


parser = argparse.ArgumentParser(description='Object Detection using YOLO in OPENCV')
parser.add_argument('--image', help='Path to image file.')
parser.add_argument('--video', help='Path to video file.')
args = parser.parse_args()

# Load names of classes
classesFile = "obj.names"
classes = None
with open(classesFile, 'rt') as f:
    classes = f.read().rstrip('\n').split('\n')

# Give the configuration and weight files for the model and load the network using them.
modelConfiguration = "yolov3-tiny-obj.cfg"
modelWeights = "pesi/pesi_3_classi_new/yolov3-tiny-obj_7050.weights"

net = cv.dnn.readNetFromDarknet(modelConfiguration, modelWeights)
net.setPreferableBackend(cv.dnn.DNN_BACKEND_OPENCV)
net.setPreferableTarget(cv.dnn.DNN_TARGET_CPU)

# Get the names of the output layers
def getOutputsNames(net):
    layersNames = net.getLayerNames()
    # getUnconnectedOutLayers() returns 1-based indices of the output layers
    return [layersNames[i[0] - 1] for i in net.getUnconnectedOutLayers()]

# Draw the predicted bounding box
def drawPred(classId, conf, left, top, right, bottom):
    if classId == 1:
        cv.rectangle(frame, (left, top), (right, bottom), (3, 14, 186), 3)
    elif classId == 0:
        cv.rectangle(frame, (left, top), (right, bottom), (40, 198, 31), 3)
    elif classId == 2:
        cv.rectangle(frame, (left, top), (right, bottom), (40, 198, 31), 3)

    label = '%.2f' % conf

    # Get the label for the class name and its confidence
    if classes:
        assert(classId < len(classes))
        label = '%s:%s' % (classes[classId], label)

    # Display the label at the top of the bounding box
    labelSize, baseLine = cv.getTextSize(label, cv.FONT_HERSHEY_SIMPLEX, 0.5, 1)
    top = max(top, labelSize[1])
    cv.rectangle(frame, (left, top - round(1*labelSize[1])), (left + round(1*labelSize[0]), top + baseLine), (255, 255, 255), cv.FILLED)
    cv.putText(frame, label, (left, top), cv.FONT_HERSHEY_SIMPLEX, 0.45, (0,0,0), 1)

# Remove the bounding boxes with low confidence using non-maxima suppression
def postprocess(frame, outs):
    frameHeight = frame.shape[0]
    frameWidth = frame.shape[1]

    # Scan through all the bounding boxes output from the network and keep only the
    # ones with high confidence scores. Assign the box's class label as the class with the highest score.
    classIds = []
    confidences = []
    boxes = []
    for out in outs:
        for detection in out:
            scores = detection[5:]
            classId = np.argmax(scores)
            confidence = scores[classId]
            if confidence > confThreshold:
                print(scores)
                center_x = int(detection[0] * frameWidth)
                center_y = int(detection[1] * frameHeight)
                width = int(detection[2] * frameWidth)
                height = int(detection[3] * frameHeight)
                left = int(center_x - width / 2)
                top = int(center_y - height / 2)
                classIds.append(classId)
                confidences.append(float(confidence))
                boxes.append([left, top, width, height])

    # Perform non maximum suppression to eliminate redundant overlapping boxes with
    # lower confidences.
    indices = cv.dnn.NMSBoxes(boxes, confidences, confThreshold, nmsThreshold)
    for i in indices:
        i = i[0]
        box = boxes[i]
        left = box[0]
        top = box[1]
        width = box[2]
        height = box[3]
        drawPred(classIds[i], confidences[i], left, top, left + width, top + height)

# Process inputs
winName = 'Deep learning object detection in OpenCV'
cv.namedWindow(winName, cv.WINDOW_NORMAL)

outputFile = "yolo_out_py.avi"
if (args.image):
    # Open the image file
    if not os.path.isfile(args.image):
        print("Input image file ", args.image, " doesn't exist")
        sys.exit(1)
    cap = cv.VideoCapture(args.image)
    outputFile = args.image[:-4]+'_yolo_out_py.jpg'
elif (args.video):
    # Open the video file
    if not os.path.isfile(args.video):
        print("Input video file ", args.video, " doesn't exist")
        sys.exit(1)
    cap = cv.VideoCapture(args.video)
    outputFile = args.video[:-4]+'_yolo_out_py.avi'
else:
    # Webcam input
    cap = cv.VideoCapture(0)

# Initialize the video writer to save the output video
if (not args.image):
    vid_writer = cv.VideoWriter(outputFile, cv.VideoWriter_fourcc('M','J','P','G'), 5,
        (round(cap.get(cv.CAP_PROP_FRAME_WIDTH)), round(cap.get(cv.CAP_PROP_FRAME_HEIGHT))))

while cv.waitKey(1) < 0:

    hasFrame, frame = cap.read()

    if not hasFrame:
        print("Done processing !!!")
        print("Output file is stored as ", outputFile)
        cv.waitKey(3000)
        # Release device
        cap.release()
        break

    # Create a 4D blob from a frame.
    blob = cv.dnn.blobFromImage(frame, 1/255, (inpWidth, inpHeight), [0,0,0], 1, crop=False)

    # Sets the input to the network
    net.setInput(blob)

    # Runs the forward pass to get output of the output layers
    outs = net.forward(getOutputsNames(net))

    # Remove the bounding boxes with low confidence
    postprocess(frame, outs)

    # Put efficiency information. The function getPerfProfile returns the overall time
    # for inference(t) and the timings for each of the layers(in layersTimes)
    t, _ = net.getPerfProfile()
    label = 'Inference time: %.2f ms' % (t * 1000.0 / cv.getTickFrequency())
    cv.putText(frame, label, (0, 15), cv.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255))

    if (args.image):
        cv.imwrite(outputFile, frame.astype(np.uint8))
    else:
        vid_writer.write(frame.astype(np.uint8))

    cv.imshow(winName, frame)

Best Answer

This is probably because of the independent logistic classifiers. The following description may help you understand it.

Class Predictions: YOLOv3 uses independent logistic classifiers for each class instead of a regular softmax layer. This is done to make the classification multi-label. What does it mean and how does it add value? Take an example where a woman is shown in the picture and the model is trained on both person and woman: having a softmax here will lead to the class probabilities being divided between these 2 classes, with say 0.4 and 0.45 probabilities. But independent classifiers solve this issue and give a yes-vs-no probability for each class: the probability that there is a woman in the picture would give 0.8, and the probability that there is a person in the picture would give 0.9, so we can label the object as both person and woman.
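A small numeric illustration of that difference, with made-up raw scores (this is only a sketch of the idea, not darknet's actual code):

import numpy as np

logits = np.array([2.2, 1.4, -3.0])   # hypothetical raw scores: person, woman, car

# Softmax: the classes compete, probabilities must sum to 1
softmax = np.exp(logits) / np.exp(logits).sum()
print(softmax)    # ~[0.687 0.309 0.004] -- mass is split between person and woman

# Independent logistic (sigmoid): each class gets its own yes/no probability
sigmoid = 1.0 / (1.0 + np.exp(-logits))
print(sigmoid)    # ~[0.900 0.802 0.047] -- person and woman can both be high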

https://github.com/AvivSham/YOLO_V3_from_scratch_colab
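If the goal is to keep every per-class confidence for each surviving box, one possible tweak to the postprocess loop from the question (the allScores list is a hypothetical addition) is to store the full score vector next to each box and read it back after NMS:

# Sketch: variant of the question's detection loop that also keeps the
# full per-class score vector for every box (allScores is a new list)
allScores = []
for out in outs:
    for detection in out:
        scores = detection[5:]
        classId = np.argmax(scores)
        confidence = scores[classId]
        if confidence > confThreshold:
            allScores.append(scores.copy())
            # ...build boxes, classIds and confidences exactly as in the question...

indices = cv.dnn.NMSBoxes(boxes, confidences, confThreshold, nmsThreshold)
for i in indices:
    i = i[0]
    print(classes[classIds[i]], allScores[i])  # confidence of every class for this box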

Regarding "python - How to find the confidence of each class in YOLO darknet", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/58621583/
