ios - How do I convert BGRA bytes to a UIImage for saving?

Tags: ios iphone uiimage avcapturesession

I want to capture raw pixel data for manipulation using the GPUImage framework. This is how I capture the data:

    CVImageBufferRef cameraFrame = CMSampleBufferGetImageBuffer(imageSampleBuffer);
    CVPixelBufferLockBaseAddress(cameraFrame, 0);
    GLubyte *rawImageBytes = (GLubyte *)CVPixelBufferGetBaseAddress(cameraFrame);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(cameraFrame);
    NSData *dataForRawBytes = [NSData dataWithBytes:rawImageBytes length:bytesPerRow * CVPixelBufferGetHeight(cameraFrame)];

    // raw values
    UInt32 *values = (UInt32 *)[dataForRawBytes bytes];

    // test out Dropbox upload here
    [self uploadDropbox:dataForRawBytes];
    // end of Dropbox upload

    // Do whatever with your bytes
    // [self processImages:dataForRawBytes];

    CVPixelBufferUnlockBaseAddress(cameraFrame, 0);
}];

I am using the following settings for the camera:

    NSDictionary *settings = [[NSDictionary alloc] initWithObjectsAndKeys:
                              AVVideoCodecJPEG, AVVideoCodecKey,
                              [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA], (id)kCVPixelBufferPixelFormatTypeKey,
                              nil];

For testing purposes I want to save the captured image to Dropbox; to do that I need to write it to the tmp directory first. How would I save dataForRawBytes? Any help would be greatly appreciated!

Best answer

So I was able to figure out how to get a UIImage from the raw data. Here is my modified code:

    CVImageBufferRef cameraFrame = CMSampleBufferGetImageBuffer(imageSampleBuffer);
    CVPixelBufferLockBaseAddress(cameraFrame, 0);
    Byte *rawImageBytes = (Byte *)CVPixelBufferGetBaseAddress(cameraFrame);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(cameraFrame);
    size_t width = CVPixelBufferGetWidth(cameraFrame);
    size_t height = CVPixelBufferGetHeight(cameraFrame);
    NSData *dataForRawBytes = [NSData dataWithBytes:rawImageBytes length:bytesPerRow * height];
    // Do whatever with your bytes

    // Create a suitable color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // Create a suitable context (matches the camera output setting kCVPixelFormatType_32BGRA)
    CGContextRef newContext = CGBitmapContextCreate(rawImageBytes, width, height, 8, bytesPerRow, colorSpace,
                                                    kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);

    CVPixelBufferUnlockBaseAddress(cameraFrame, 0);

    // Release the color space
    CGColorSpaceRelease(colorSpace);

    // Create a CGImageRef from the bitmap context
    CGImageRef newImage = CGBitmapContextCreateImage(newContext);
    UIImage *FinalImage = [[UIImage alloc] initWithCGImage:newImage];
    // The image is captured; now we can test saving it.

    // Release the CG objects to avoid leaking them (ARC does not manage CF/CG types)
    CGImageRelease(newImage);
    CGContextRelease(newContext);

I needed to create the color space and related attributes, build a CGContextRef, and use that to finally get a UIImage; while debugging I could see that the image I captured came out correctly.

Regarding "ios - How do I convert BGRA bytes to a UIImage for saving?", a similar question was found on Stack Overflow: https://stackoverflow.com/questions/43852473/
