ios - cv::Mat width mismatch with UIImageView?

Tags: ios opencv width avfoundation frame

I am capturing video frames with AVFoundation, processing them with OpenCV, and displaying the result in a UIImageView on a new iPad. The OpenCV processing does the following ("inImg" is the video frame):

cv::Mat testROI = inImg.rowRange(0,100);
testROI = testROI.colRange(0,10);
testROI.setTo(255); // this is a BGRA frame.

However, instead of a vertical white bar (100 rows x 10 cols) in the top-left corner of the frame, I get 100 stair-stepped horizontal lines running from the top-right toward the bottom-left, each 10 pixels long.

After some investigation, I realized that the displayed frame seems to be 8 pixels wider than the cv::Mat. (That is, the 9th pixel of the second row sits directly below the 1st pixel of the first row.)

The video frame itself displays correctly (no shift between rows). The problem occurs when AVCaptureSession.sessionPreset is AVCaptureSessionPresetMedium (frame rows=480, cols=360), but not with AVCaptureSessionPresetHigh (frame rows=640, cols=480).

All 360 columns are displayed across the full screen. (I tried walking through the cv::Mat pixel by pixel and modifying it. Pixels 1-360 display correctly, pixels 361-368 disappear, and pixel 369 appears directly below pixel 1.)
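
Concretely, the probe was along these lines (a reconstruction rather than the original code; "mat" is the cv::Mat created in the capture callback shown further down):

// Paint row 0 white, one BGRA pixel at a time - pixels 1-360 show correctly.
for (int c = 0; c < mat.cols; c++) {
    mat.at<cv::Vec4b>(0, c) = cv::Vec4b(255, 255, 255, 255);
}
// Continue into row 1 of the Mat:
mat.at<cv::Vec4b>(1, 0) = cv::Vec4b(255, 255, 255, 255); // "pixel 361": disappears
mat.at<cv::Vec4b>(1, 8) = cv::Vec4b(255, 255, 255, 255); // "pixel 369": appears directly below pixel 1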

I tried combinations of imageview.contentMode (UIViewContentModeScaleAspectFill and UIViewContentModeScaleAspectFit) and imageview.clipsToBounds (YES/NO), with no luck.

What could be the problem? Thanks a lot.

I create the AVCaptureSession using the following code:

NSArray* devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];

if ([devices count] == 0) {
    NSLog(@"No video capture devices found");
    return NO;
}


for (AVCaptureDevice *device in devices) {
    if ([device position] == AVCaptureDevicePositionFront) {
        _captureDevice = device;
    }
}


NSError* error_exp = nil;
if ([_captureDevice lockForConfiguration:&error_exp]) {
    [_captureDevice setWhiteBalanceMode:AVCaptureWhiteBalanceModeContinuousAutoWhiteBalance];
    [_captureDevice unlockForConfiguration];
}
// Create the capture session
_captureSession = [[AVCaptureSession alloc] init];
_captureSession.sessionPreset = AVCaptureSessionPresetMedium;


// Create device input
NSError *error = nil;
AVCaptureDeviceInput *input = [[AVCaptureDeviceInput alloc] initWithDevice:_captureDevice error:&error];

// Create and configure device output
_videoOutput = [[AVCaptureVideoDataOutput alloc] init];

dispatch_queue_t queue = dispatch_queue_create("cameraQueue", NULL); 
[_videoOutput setSampleBufferDelegate:self queue:queue];
dispatch_release(queue); 

_videoOutput.alwaysDiscardsLateVideoFrames = YES; 

OSType format = kCVPixelFormatType_32BGRA;

_videoOutput.videoSettings = [NSDictionary dictionaryWithObject:[NSNumber numberWithUnsignedInt:format] forKey:(id)kCVPixelBufferPixelFormatTypeKey];


// Connect up inputs and outputs
if ([_captureSession canAddInput:input]) {
    [_captureSession addInput:input];
}

if ([_captureSession canAddOutput:_videoOutput]) {
    [_captureSession addOutput:_videoOutput];
}

AVCaptureConnection * captureConnection = [_videoOutput connectionWithMediaType:AVMediaTypeVideo];

if (captureConnection.isVideoMinFrameDurationSupported)
    captureConnection.videoMinFrameDuration = CMTimeMake(1, 60);
if (captureConnection.isVideoMaxFrameDurationSupported)
    captureConnection.videoMaxFrameDuration = CMTimeMake(1, 60);

if (captureConnection.supportsVideoMirroring)
    [captureConnection setVideoMirrored:NO];

[captureConnection setVideoOrientation:AVCaptureVideoOrientationPortraitUpsideDown];

The following function is called when a frame is received:

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
@autoreleasepool {

    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    OSType format = CVPixelBufferGetPixelFormatType(pixelBuffer);
    CGRect videoRect = CGRectMake(0.0f, 0.0f, CVPixelBufferGetWidth(pixelBuffer), CVPixelBufferGetHeight(pixelBuffer));

    AVCaptureConnection *currentConnection = [[_videoOutput connections] objectAtIndex:0];

    AVCaptureVideoOrientation videoOrientation = [currentConnection videoOrientation];
    CGImageRef quartzImage;

    // For color mode a 4-channel cv::Mat is created from the BGRA data
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    void *baseaddress = CVPixelBufferGetBaseAddress(pixelBuffer);

    cv::Mat mat(videoRect.size.height, videoRect.size.width, CV_8UC4, baseaddress, 0);

    if ([self doFrame]) { // a flag to switch processing ON/OFF
            [self processFrame:mat videoRect:videoRect videoOrientation:videoOrientation];  // "processFrame" is the opencv function shown above
    }

    CIImage *ciImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];
    quartzImage = [self.context createCGImage:ciImage fromRect:ciImage.extent];
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

    UIImage *image = [UIImage imageWithCGImage:quartzImage scale:1.0 orientation:UIImageOrientationUp];

    CGImageRelease(quartzImage);

    [self.imageView performSelectorOnMainThread:@selector(setImage:) withObject:image waitUntilDone:YES];
}
}
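
For reference, processFrame:videoRect:videoOrientation: is presumably just a thin wrapper around the OpenCV fragment at the top of the question; a minimal sketch, with parameter types inferred from the call site:

- (void)processFrame:(cv::Mat &)mat videoRect:(CGRect)rect videoOrientation:(AVCaptureVideoOrientation)orientation
{
    // Paint the top-left 100-row x 10-column region white in all four BGRA channels.
    cv::Mat testROI = mat.rowRange(0, 100);
    testROI = testROI.colRange(0, 10);
    testROI.setTo(255);
}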

Best Answer

I assume you are using the constructor Mat(int _rows, int _cols, int _type, void* _data, size_t _step=AUTO_STEP) with AUTO_STEP = 0, which assumes the row stride is width * bytes-per-pixel.

This is generally wrong - it is very common for rows to be padded out to some larger alignment boundary. In this case, 360 is not a multiple of 16 but 368 is, which strongly suggests rows are being aligned to a 16-pixel boundary (perhaps to assist algorithms that process 16x16 blocks?).
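
A quick way to confirm the padding from inside the capture callback (a sketch, not part of the original answer) is to compare the buffer's reported row stride with a tightly packed one:

size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer); // 1472 in the medium-preset case
size_t packedRow = CVPixelBufferGetWidth(pixelBuffer) * 4;     // 360 * 4 = 1440 for BGRA
NSLog(@"row padding: %zu bytes (%zu BGRA pixels)",
      bytesPerRow - packedRow, (bytesPerRow - packedRow) / 4); // 32 bytes = 8 pixels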

Try

cv::Mat mat(videoRect.size.height, videoRect.size.width, CV_8UC4, baseaddress, CVPixelBufferGetBytesPerRow(pixelBuffer));
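
With the real stride passed in, the rowRange/colRange views inherit it, so the same setTo(255) paints a proper 100-row by 10-column bar. This also explains why the raw video preview was never distorted: [CIImage imageWithCVPixelBuffer:] reads bytes-per-row from the buffer itself, so only the hand-built cv::Mat header carried the wrong assumption. If downstream code ever needs a tightly packed Mat, clone the strided view (at the cost of a copy):

cv::Mat packed = mat.clone(); // clone() always yields a continuous (unpadded) Mat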

About "ios - cv::Mat width mismatch with UIImageView?": the original question can be found on Stack Overflow: https://stackoverflow.com/questions/15259789/
