I need to replicate PyTorch image normalization in OpenCV or NumPy.
Quick backstory: I'm working on a project where I train in PyTorch, but I have to do inference in OpenCV because I'm deploying to an embedded device that doesn't have the storage space to install PyTorch. After training in PyTorch and saving the PyTorch graph, I convert it to an ONNX graph. For inference in OpenCV, I open the image as an OpenCV image (i.e. a NumPy array), then resize it, then successively call cv2.normalize, cv2.dnn.blobFromImage, net.setInput, and net.forward.
When I run test inference in PyTorch versus inference in OpenCV, I get slightly different accuracy results, and I suspect the difference is due to the normalization process producing slightly different results between the two.
Here is a quick script I put together to show the difference for a single image. Note that I'm using grayscale (single channel) and normalizing to the range -1.0 to +1.0:
# scratchpad.py

import torch
import torchvision
import cv2
import numpy as np
import PIL
from PIL import Image

TRANSFORM = torchvision.transforms.Compose([
    torchvision.transforms.Resize((224, 224)),
    torchvision.transforms.ToTensor(),
    torchvision.transforms.Normalize([0.5], [0.5])
])

def main():
    # 1st show PyTorch normalization

    # open the image as an OpenCV image
    openCvImage = cv2.imread('image.jpg', cv2.IMREAD_GRAYSCALE)

    # convert OpenCV image to PIL image
    pilImage = PIL.Image.fromarray(openCvImage)

    # convert PIL image to a PyTorch tensor
    ptImage = TRANSFORM(pilImage).unsqueeze(0)

    # show the PyTorch tensor info
    print('\nptImage.shape = ' + str(ptImage.shape))
    print('ptImage max = ' + str(torch.max(ptImage)))
    print('ptImage min = ' + str(torch.min(ptImage)))
    print('ptImage avg = ' + str(torch.mean(ptImage)))
    print('ptImage: ')
    print(str(ptImage))

    # 2nd show OpenCV normalization

    # resize the image
    openCvImage = cv2.resize(openCvImage, (224, 224))

    # convert to float 32 (necessary for passing into cv2.dnn.blobFromImage, which is not shown here)
    openCvImage = openCvImage.astype('float32')

    # use OpenCV version of normalization, could also do this with numpy
    cv2.normalize(openCvImage, openCvImage, 1.0, -1.0, cv2.NORM_MINMAX)

    # show results
    print('\nopenCvImage.shape = ' + str(openCvImage.shape))
    print('openCvImage max = ' + str(np.max(openCvImage)))
    print('openCvImage min = ' + str(np.min(openCvImage)))
    print('openCvImage avg = ' + str(np.mean(openCvImage)))
    print('openCvImage: ')
    print(str(openCvImage))

    print('\ndone !!\n')
# end function

if __name__ == '__main__':
    main()
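For reference, what ToTensor followed by Normalize([0.5], [0.5]) computes on a single-channel uint8 image can be sketched in pure NumPy (this is my understanding of the math, not the torchvision source):

```python
import numpy as np

# Sketch: ToTensor converts uint8 to float in [0, 1] by dividing by 255, and
# Normalize([0.5], [0.5]) then subtracts 0.5 and divides by 0.5, mapping to [-1, 1].
def normalize_like_pytorch(gray_u8):
    x = gray_u8.astype(np.float32) / 255.0  # ToTensor: [0, 255] -> [0, 1]
    return (x - 0.5) / 0.5                  # Normalize([0.5], [0.5]): -> [-1, 1]

# quick check on a tiny synthetic "image"
img = np.array([[0, 128, 255]], dtype=np.uint8)
print(normalize_like_pytorch(img))  # values in [-1, 1]; 128 maps to ~0.0039
```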
Here is the test image I'm using:
Here are the results I'm currently getting:
$ python3 scratchpad.py
ptImage.shape = torch.Size([1, 1, 224, 224])
ptImage max = tensor(0.9608)
ptImage min = tensor(-0.9686)
ptImage avg = tensor(0.1096)
ptImage:
tensor([[[[ 0.0431, -0.0431, 0.1294, ..., 0.8510, 0.8588, 0.8588],
[ 0.0510, -0.0510, 0.0980, ..., 0.8353, 0.8510, 0.8431],
[ 0.0588, -0.0431, 0.0745, ..., 0.8510, 0.8588, 0.8588],
...,
[ 0.6157, 0.6471, 0.5608, ..., 0.6941, 0.6627, 0.6392],
[ 0.4902, 0.3961, 0.3882, ..., 0.6627, 0.6471, 0.6706],
[ 0.3725, 0.4039, 0.5451, ..., 0.6549, 0.6863, 0.6549]]]])
openCvImage.shape = (224, 224)
openCvImage max = 1.0000001
openCvImage min = -1.0
openCvImage avg = 0.108263366
openCvImage:
[[ 0.13725497 -0.06666661 0.20000008 ... 0.8509805 0.8666668
0.8509805 ]
[ 0.15294124 -0.06666661 0.09019614 ... 0.8274511 0.8431374
0.8274511 ]
[ 0.12156869 -0.06666661 0.0196079 ... 0.8509805 0.85882366
0.85882366]
...
[ 0.5843138 0.74117655 0.5450981 ... 0.83529425 0.59215695
0.5764707 ]
[ 0.6862746 0.34117654 0.39607853 ... 0.67843145 0.6705883
0.6470589 ]
[ 0.34117654 0.4117648 0.5215687 ... 0.5607844 0.74117655
0.59215695]]
done !!
As you can see, the results are similar but definitely not exactly the same.
How can I do the normalization in OpenCV so that it matches the PyTorch normalization exactly, or nearly exactly? I've tried various options in OpenCV and NumPy, but I couldn't get closer than the results above, which are still substantially different.
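One likely source of the gap (my reading, worth verifying): cv2.normalize with NORM_MINMAX stretches each image's own minimum and maximum to exactly -1.0 and +1.0, whereas ToTensor + Normalize([0.5], [0.5]) is a fixed affine map that does not depend on the image content. A pure-NumPy comparison of the two formulas:

```python
import numpy as np

# Hypothetical pixel values; the two formulas only agree when the image
# happens to contain both 0 and 255.
img = np.array([10.0, 100.0, 200.0], dtype=np.float32)

# what cv2.normalize(src, dst, 1.0, -1.0, cv2.NORM_MINMAX) computes (per-image stretch)
minmax = (img - img.min()) / (img.max() - img.min()) * 2.0 - 1.0

# what ToTensor + Normalize([0.5], [0.5]) computes (fixed affine map)
fixed = (img / 255.0 - 0.5) / 0.5

print(minmax)  # endpoints forced to exactly -1 and +1
print(fixed)   # depends only on the absolute pixel values
```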
-- Edit --------------------------
In response to Ivan, I also tried this:
# resize the image
openCvImage = cv2.resize(openCvImage, (224, 224))
# convert to float 32 (necessary for passing into cv2.dnn.blobFromImage, which is not shown here)
openCvImage = openCvImage.astype('float32')
mean = np.mean(openCvImage)
stdDev = np.std(openCvImage)
openCvImage = (openCvImage - mean) / stdDev
# show results
print('\nopenCvImage.shape = ' + str(openCvImage.shape))
print('openCvImage max = ' + str(np.max(openCvImage)))
print('openCvImage min = ' + str(np.min(openCvImage)))
print('openCvImage avg = ' + str(np.mean(openCvImage)))
print('openCvImage: ')
print(str(openCvImage))
The results are:
openCvImage.shape = (224, 224)
openCvImage max = 2.1724665
openCvImage min = -2.6999729
openCvImage avg = 7.298528e-09
openCvImage:
[[ 0.07062991 -0.42616782 0.22349077 ... 1.809422 1.8476373
1.809422 ]
[ 0.10884511 -0.42616782 -0.04401573 ... 1.7520993 1.7903144
1.7520993 ]
[ 0.0324147 -0.42616782 -0.21598418 ... 1.809422 1.8285296
1.8285296 ]
...
[ 1.1597633 1.5419154 1.0642253 ... 1.7712069 1.178871
1.1406558 ]
[ 1.4081622 0.56742764 0.70118093 ... 1.3890547 1.3699471
1.3126242 ]
[ 0.56742764 0.7393961 1.0069026 ... 1.1024406 1.5419154
1.178871 ]]
This is similar to the PyTorch normalization, but clearly not the same.
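A likely reason for that remaining gap (my interpretation): the code above standardizes by the image's own mean and standard deviation, whereas Normalize([0.5], [0.5]) uses the fixed constants 0.5 and 0.5 on the [0, 1]-scaled image, which is effectively 127.5 and 127.5 on raw pixel values. A small comparison:

```python
import numpy as np

# hypothetical pixel values for illustration
img = np.array([50.0, 100.0, 150.0], dtype=np.float32)

# per-image standardization (what the edit's code computes)
per_image = (img - np.mean(img)) / np.std(img)

# fixed-constant normalization (what Normalize([0.5], [0.5]) computes)
fixed = (img / 255.0 - 0.5) / 0.5

print(per_image)  # centered on this image's own statistics (mean is always 0)
print(fixed)      # depends only on the absolute pixel values
```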
What I'm trying to achieve is a normalization in OpenCV that produces the same result as the PyTorch normalization.
I realize that, due to slight differences in the resizing operation (and possibly very small rounding differences), I may never get exactly the same normalized result, but I'd like to get as close to the PyTorch result as possible.
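For the normalization step itself (setting the resize difference aside), an exact match may be achievable algebraically: cv2.dnn.blobFromImage computes (x - mean) * scalefactor, so scalefactor = 1/127.5 with mean = 127.5 should reproduce Normalize([0.5], [0.5]) exactly, since (x - 127.5)/127.5 = 2x/255 - 1 = (x/255 - 0.5)/0.5. A pure-NumPy check of that identity (the blobFromImage call itself is only sketched in a comment, untested here):

```python
import numpy as np

# identity check over all possible uint8 pixel values
x = np.arange(256, dtype=np.float32)

# blobFromImage-style preprocessing: (x - mean) * scalefactor
blob_style = (x - 127.5) * (1.0 / 127.5)

# ToTensor + Normalize([0.5], [0.5])
pytorch_style = (x / 255.0 - 0.5) / 0.5

print(np.max(np.abs(blob_style - pytorch_style)))  # ~0 up to float rounding

# The corresponding OpenCV call might look like this (hypothetical, adapt to your net):
# blob = cv2.dnn.blobFromImage(openCvImage, scalefactor=1.0/127.5, size=(224, 224), mean=127.5)
```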
Best answer
This might help.

If you look at

torchvision.transforms.Normalize(
    mean=[0.485, 0.456, 0.406],
    std=[0.229, 0.224, 0.225],
)

the block below is what it actually does:
import numpy as np
from PIL import Image
MEAN = 255 * np.array([0.485, 0.456, 0.406])
STD = 255 * np.array([0.229, 0.224, 0.225])
img_pil = Image.open("ty.jpg")
x = np.array(img_pil)
x = x.transpose(-1, 0, 1)
x = (x - MEAN[:, None, None]) / STD[:, None, None]
Here is what I got on the image:
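The answer's block can be adapted to the question's grayscale case (my adaptation, assuming a single channel with mean = 0.5 and std = 0.5 as in the question's TRANSFORM):

```python
import numpy as np

# single-channel constants, scaled from the [0, 1] range to the 0-255 pixel range
MEAN = 255 * 0.5
STD = 255 * 0.5

# stand-in for the resized grayscale image (in practice: the 224x224 float array)
x = np.array([[0.0, 128.0, 255.0]], dtype=np.float32)
x = (x - MEAN) / STD  # same math as ToTensor followed by Normalize([0.5], [0.5])
print(x)  # endpoints map to -1 and +1
```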
Regarding "python - How to replicate PyTorch normalization in OpenCV or NumPy?", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/65617755/