Through the camera2 API we receive an Image object in YUV_420_888 format. We then convert it to NV21 with the following function:
private static byte[] YUV_420_888toNV21(Image image) {
    byte[] nv21;
    ByteBuffer yBuffer = image.getPlanes()[0].getBuffer();
    ByteBuffer uBuffer = image.getPlanes()[1].getBuffer();
    ByteBuffer vBuffer = image.getPlanes()[2].getBuffer();

    int ySize = yBuffer.remaining();
    int uSize = uBuffer.remaining();
    int vSize = vBuffer.remaining();

    nv21 = new byte[ySize + uSize + vSize];

    // U and V are swapped
    yBuffer.get(nv21, 0, ySize);
    vBuffer.get(nv21, ySize, vSize);
    uBuffer.get(nv21, ySize + vSize, uSize);

    return nv21;
}
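As an aside, the "U and V are swapped" comment reflects the NV21 byte order: the interleaved chroma plane stores V before U (the reverse of NV12). A minimal standalone sketch with made-up sample values:

```java
import java.util.Arrays;

public class Nv21OrderDemo {
    public static void main(String[] args) {
        // Hypothetical planar chroma samples (illustrative values only)
        byte[] u = {10, 11}; // U plane
        byte[] v = {20, 21}; // V plane

        byte[] vu = new byte[u.length + v.length];
        for (int i = 0; i < u.length; i++) {
            vu[2 * i] = v[i];     // NV21 stores V first...
            vu[2 * i + 1] = u[i]; // ...then U, hence the "swap"
        }
        System.out.println(Arrays.toString(vu)); // [20, 10, 21, 11]
    }
}
```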
While this function works fine with cameraCaptureSessions.setRepeatingRequest, we get a segmentation fault during further processing (on the JNI side) when calling cameraCaptureSessions.capture. Both request the YUV_420_888 format via an ImageReader. Why do the two calls yield different results when the requested type is the same?
Update: As mentioned in the comments, I get this behavior because the image sizes differ (the capture request's size is much larger). But our further processing on the JNI side is identical for both requests and does not depend on the image dimensions (only on the aspect ratio, which is the same in both cases).
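The size difference matters because larger capture-sized frames are more likely to carry row padding, so the naive concatenation of plane buffers yields more than the tightly packed w*h*3/2 bytes that downstream code may expect. A minimal arithmetic sketch, with an assumed (hypothetical) row stride rather than values from any real device:

```java
public class PaddingDemo {
    public static void main(String[] args) {
        // Hypothetical capture geometry: stride padded past the image width
        int width = 4000, height = 3000;
        int rowStride = 4032; // assumed padded stride, not from a real device

        long tight = (long) width * height;      // bytes a packed Y plane needs
        long padded = (long) rowStride * height; // bytes the padded plane occupies

        System.out.println(padded - tight); // 96000 extra padding bytes
    }
}
```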
Best answer
Your code only returns a correct NV21 array if there is no padding at all, and the U and V planes overlap and actually represent interleaved VU values. This happens quite often for preview, but in that case you allocate extra w*h/4 bytes for your array (which is presumably not a problem). Perhaps for a captured image you need a more robust implementation, e.g.
private static byte[] YUV_420_888toNV21(Image image) {
    int width = image.getWidth();
    int height = image.getHeight();
    int ySize = width * height;
    int uvSize = width * height / 4;

    byte[] nv21 = new byte[ySize + uvSize * 2];

    ByteBuffer yBuffer = image.getPlanes()[0].getBuffer(); // Y
    ByteBuffer uBuffer = image.getPlanes()[1].getBuffer(); // U
    ByteBuffer vBuffer = image.getPlanes()[2].getBuffer(); // V

    int rowStride = image.getPlanes()[0].getRowStride();
    assert (image.getPlanes()[0].getPixelStride() == 1);

    int pos = 0;

    if (rowStride == width) { // likely
        yBuffer.get(nv21, 0, ySize);
        pos += ySize;
    } else {
        int yBufferPos = -rowStride; // not an actual position
        for (; pos < ySize; pos += width) {
            yBufferPos += rowStride;
            yBuffer.position(yBufferPos);
            yBuffer.get(nv21, pos, width);
        }
    }

    rowStride = image.getPlanes()[2].getRowStride();
    int pixelStride = image.getPlanes()[2].getPixelStride();
    assert (rowStride == image.getPlanes()[1].getRowStride());
    assert (pixelStride == image.getPlanes()[1].getPixelStride());

    if (pixelStride == 2 && rowStride == width && uBuffer.get(0) == vBuffer.get(1)) {
        // maybe V and U planes overlap as per NV21, which means vBuffer[1] is an alias of uBuffer[0]
        byte savePixel = vBuffer.get(1);
        try {
            vBuffer.put(1, (byte) ~savePixel);
            if (uBuffer.get(0) == (byte) ~savePixel) {
                vBuffer.put(1, savePixel);
                vBuffer.position(0);
                uBuffer.position(0);
                vBuffer.get(nv21, ySize, 1);
                uBuffer.get(nv21, ySize + 1, uBuffer.remaining());

                return nv21; // shortcut
            }
        } catch (ReadOnlyBufferException ex) {
            // unfortunately, we cannot check if vBuffer and uBuffer overlap
        }

        // unfortunately, the check failed. We must save U and V pixel by pixel
        vBuffer.put(1, savePixel);
    }

    // other optimizations could check if (pixelStride == 1) or (pixelStride == 2),
    // but performance gain would be less significant
    for (int row = 0; row < height / 2; row++) {
        for (int col = 0; col < width / 2; col++) {
            int vuPos = col * pixelStride + row * rowStride;
            nv21[pos++] = vBuffer.get(vuPos);
            nv21[pos++] = uBuffer.get(vuPos);
        }
    }

    return nv21;
}
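The write-and-probe aliasing check used above can be demonstrated with plain ByteBuffers backed by one shared array (a synthetic layout standing in for the camera's overlapping chroma planes):

```java
import java.nio.ByteBuffer;

public class OverlapCheckDemo {
    public static void main(String[] args) {
        // Simulate NV21-style overlapping planes: one backing array, with the
        // V view starting at offset 0 and the U view starting at offset 1.
        byte[] chroma = {50, 60, 51, 61}; // interleaved V,U,V,U (made-up values)
        ByteBuffer backing = ByteBuffer.wrap(chroma);
        backing.position(0);
        ByteBuffer vBuffer = backing.slice();
        backing.position(1);
        ByteBuffer uBuffer = backing.slice();

        // The probe: flip vBuffer[1] and see if uBuffer[0] changes with it
        byte savePixel = vBuffer.get(1);
        vBuffer.put(1, (byte) ~savePixel);
        boolean overlapping = uBuffer.get(0) == (byte) ~savePixel;
        vBuffer.put(1, savePixel); // restore the original byte

        System.out.println(overlapping); // true: the views alias each other
    }
}
```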
If you intend to pass the resulting array to C++, you can exploit the fact that the buffer returned will always have isDirect return true, so the underlying data can be mapped as a pointer in JNI, without any copies, via GetDirectBufferAddress. This means the same conversion can be done in C++ with minimal overhead. In C++, you may even find that the actual pixel arrangement is already NV21!
PS. Actually, this can be done in Java with negligible overhead, see the if (pixelStride == 2 && … line above. There we can bulk-copy all the chroma bytes into the resulting byte array, which is much faster than running the loop, but still slower than what can be achieved for this case in C++. For a full implementation, see Image.toByteArray().
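The bulk-copy shortcut mentioned in the PS can likewise be sketched with synthetic overlapping buffers (made-up values, not a real camera frame): once the planes are known to alias, the interleaved VU bytes are recovered with just two get calls instead of a per-pixel loop:

```java
import java.nio.ByteBuffer;
import java.util.Arrays;

public class BulkChromaCopyDemo {
    public static void main(String[] args) {
        // Synthetic overlapping chroma: vBuffer sees V,U,V,U...; uBuffer is the
        // same memory shifted by one byte (as on devices with NV21-like planes).
        byte[] backing = {90, 80, 91, 81}; // V0,U0,V1,U1 (made-up values)
        ByteBuffer vBuffer = ByteBuffer.wrap(backing, 0, 4).slice();
        ByteBuffer uBuffer = ByteBuffer.wrap(backing, 1, 3).slice();

        byte[] nv21Chroma = new byte[4];
        vBuffer.get(nv21Chroma, 0, 1);                   // first V byte
        uBuffer.get(nv21Chroma, 1, uBuffer.remaining()); // the rest in one bulk copy

        System.out.println(Arrays.toString(nv21Chroma)); // [90, 80, 91, 81]
    }
}
```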
Regarding android - picture taken with camera2 - conversion from YUV_420_888 to NV21, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/52726002/