android - IplImage cropping and rotation - Android

Tags: android opencv ffmpeg javacv

I am using ffmpeg to capture 30 seconds of video.

@Override
public void onPreviewFrame(byte[] data, Camera camera) {
    if (yuvIplimage != null && recording && rec) {
        new SaveFrame().execute(data);
    }
}

The SaveFrame class is as follows:

private class SaveFrame extends AsyncTask<byte[], Void, File> {
            long t;
            protected File doInBackground(byte[]... arg) {

                t = 1000 * (System.currentTimeMillis() - firstTime - pausedTime);
                toSaveFrames++;
                File pathCache = new File(Environment.getExternalStorageDirectory()+"/DCIM", (System.currentTimeMillis() / 1000L)+ "_" + toSaveFrames + ".tmp");
                BufferedOutputStream bos;
                try {
                    bos = new BufferedOutputStream(new FileOutputStream(pathCache));
                    bos.write(arg[0]);
                    bos.flush();
                    bos.close();
                } catch (FileNotFoundException e) {
                    e.printStackTrace();
                    pathCache = null;
                    toSaveFrames--;
                } catch (IOException e) {
                    e.printStackTrace();
                    pathCache = null;
                    toSaveFrames--;
                }
                return pathCache;


            }
            @Override
            protected void onPostExecute(File filename)
            {
                if(filename!=null)
                {
                    savedFrames++;
                    tempList.add(new FileFrame(t,filename));
                }
            }
        }

Finally, I add all of the frames, cropping and rotating each one:

private class AddFrame extends AsyncTask<Void, Integer, Void> {
        private int serial = 0;
        @Override
        protected Void doInBackground(Void... params) {

            for(int i=0; i<tempList.size(); i++)
            {
                byte[] bytes = new byte[(int) tempList.get(i).file.length()];
                try {
                    BufferedInputStream buf = new BufferedInputStream(new FileInputStream(tempList.get(i).file));
                    buf.read(bytes, 0, bytes.length);
                    buf.close();

                    IplImage image = IplImage.create(imageWidth, imageHeight, IPL_DEPTH_8U, 2);

                    // final int startY = 640*(480-480)/2;
                    // final int lenY = 640*480;
                    // yuvIplimage.getByteBuffer().put(bytes, startY, lenY);
                    // final int startVU = 640*480 + 640*(480-480)/4;
                    // final int lenVU = 640*480/2;
                    // yuvIplimage.getByteBuffer().put(bytes, startVU, lenVU);

                    if (tempList.get(i).time > recorder.getTimestamp()) {
                        recorder.setTimestamp(tempList.get(i).time);
                    }

                    image = cropImage(image);
                    image = rotate(image, 270);
                    // image = rotateImage(image);
                    recorder.record(image);
                    Log.i(LOG_TAG, "record " + i);
                    image = null;
                    serial++;
                    publishProgress(serial);
                } catch (FileNotFoundException e) {
                    e.printStackTrace();
                } catch (IOException e) {
                    e.printStackTrace();
                } catch (com.googlecode.javacv.FrameRecorder.Exception e) {
                    e.printStackTrace();
                }
            }
            return null;
        }
        @Override
        protected void onProgressUpdate(Integer... serial) {
            int value = serial[0];
            creatingProgress.setProgress(value);
        }
        @Override
        protected void onPostExecute(Void v)
        {
            creatingProgress.dismiss();
            if (recorder != null && recording) {
                recording = false;
                Log.v(LOG_TAG,"Finishing recording, calling stop and release on recorder");
                try {
                    recorder.stop();
                    recorder.release();
                    finish();
                    startActivity(new Intent(RecordActivity.this,AnswerViewActivity.class));
                } catch (FFmpegFrameRecorder.Exception e) {
                    e.printStackTrace();
                }
                recorder = null;
            }
        }
    }

My crop and rotate methods are as follows:

private IplImage cropImage(IplImage src)
    {
        cvSetImageROI(src, r);
        IplImage cropped = IplImage.create(imageHeight, imageHeight, IPL_DEPTH_8U, 2);
        cvCopy(src, cropped);
        return cropped;
    }

    public static IplImage rotate(IplImage image, double angle) {        
        IplImage copy = opencv_core.cvCloneImage(image);

        IplImage rotatedImage = opencv_core.cvCreateImage(opencv_core.cvGetSize(copy), copy.depth(), copy.nChannels()); 
        CvMat mapMatrix = opencv_core.cvCreateMat( 2, 3, opencv_core.CV_32FC1 );

        //Define Mid Point
        CvPoint2D32f centerPoint = new CvPoint2D32f();
        centerPoint.x(copy.width()/2);
        centerPoint.y(copy.height()/2);

        //Get Rotational Matrix
        opencv_imgproc.cv2DRotationMatrix(centerPoint, angle, 1.0, mapMatrix);

        //Rotate the Image
        opencv_imgproc.cvWarpAffine(copy, rotatedImage, mapMatrix, opencv_imgproc.CV_INTER_CUBIC +  opencv_imgproc.CV_WARP_FILL_OUTLIERS, opencv_core.cvScalarAll(170));
        opencv_core.cvReleaseImage(copy);
        opencv_core.cvReleaseMat(mapMatrix);        
        return rotatedImage;
    }

My final video is cropped and rotated, but green frames and discolored frames are mixed in with it.

I am not familiar with IplImage. Some blog posts mention that the frame data is in YUV format: you need to convert the Y plane first, and then the UV plane.

How can this be fixed?
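For context, "convert the Y plane first, then the UV plane" means handling the two parts of the NV21 buffer separately, because the luma plane and the interleaved chroma plane have different sizes. A purely illustrative sketch (hypothetical helper, not code from this question) of rotating a raw NV21 frame 90 degrees clockwise plane by plane:

// Hypothetical helper, shown only to illustrate the plane-by-plane idea.
// An NV21 buffer is the Y plane (width*height bytes) followed by an
// interleaved VU plane (width*height/2 bytes).
public static byte[] rotateNV21Clockwise(byte[] input, int width, int height) {
    byte[] output = new byte[input.length];
    int frameSize = width * height;

    // Rotate the Y plane: one byte per pixel.
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            output[x * height + (height - 1 - y)] = input[y * width + x];
        }
    }

    // Rotate the VU plane: one V/U byte pair per 2x2 block of pixels.
    for (int cy = 0; cy < height / 2; cy++) {
        for (int cx = 0; cx < width / 2; cx++) {
            int in = frameSize + cy * width + cx * 2;
            int out = frameSize + cx * height + (height / 2 - 1 - cy) * 2;
            output[out] = input[in];         // V
            output[out + 1] = input[in + 1]; // U
        }
    }
    return output;
}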

Best Answer

I modified the onPreviewFrame method of this Open Source Android Touch-To-Record library to transpose and resize the captured frames.

I defined "yuvIplImage" in my setCameraParams() method as follows.

IplImage yuvIplImage = IplImage.create(mPreviewSize.height, mPreviewSize.width, opencv_core.IPL_DEPTH_8U, 2);

Also initialize your videoRecorder object as shown below, giving the width as height and vice versa.

// Call the initVideoRecorder() method like this to initialize the videoRecorder object of the FFmpegFrameRecorder class.
initVideoRecorder(strVideoPath, mPreview.getPreviewSize().height, mPreview.getPreviewSize().width, recorderParameters);

//method implementation
public void initVideoRecorder(String videoPath, int width, int height, RecorderParameters recorderParameters)
{
    Log.e(TAG, "initVideoRecorder");

    videoRecorder = new FFmpegFrameRecorder(videoPath, width, height, 1);
    videoRecorder.setFormat(recorderParameters.getVideoOutputFormat());
    videoRecorder.setSampleRate(recorderParameters.getAudioSamplingRate());
    videoRecorder.setFrameRate(recorderParameters.getVideoFrameRate());
    videoRecorder.setVideoCodec(recorderParameters.getVideoCodec());
    videoRecorder.setVideoQuality(recorderParameters.getVideoQuality());
    videoRecorder.setAudioQuality(recorderParameters.getVideoQuality());
    videoRecorder.setAudioCodec(recorderParameters.getAudioCodec());
    videoRecorder.setVideoBitrate(1000000);
    videoRecorder.setAudioBitrate(64000);
}

Here is my onPreviewFrame() method:

@Override
public void onPreviewFrame(byte[] data, Camera camera)
{

    long frameTimeStamp = 0L;

    if(FragmentCamera.mAudioTimestamp == 0L && FragmentCamera.firstTime > 0L)
    {
        frameTimeStamp = 1000L * (System.currentTimeMillis() - FragmentCamera.firstTime);
    }
    else if(FragmentCamera.mLastAudioTimestamp == FragmentCamera.mAudioTimestamp)
    {
        frameTimeStamp = FragmentCamera.mAudioTimestamp + FragmentCamera.frameTime;
    }
    else
    {
        long l2 = (System.nanoTime() - FragmentCamera.mAudioTimeRecorded) / 1000L;
        frameTimeStamp = l2 + FragmentCamera.mAudioTimestamp;
        FragmentCamera.mLastAudioTimestamp = FragmentCamera.mAudioTimestamp;
    }

    synchronized(FragmentCamera.mVideoRecordLock)
    {
        if(FragmentCamera.recording && FragmentCamera.rec && lastSavedframe != null && lastSavedframe.getFrameBytesData() != null && yuvIplImage != null)
        {
            FragmentCamera.mVideoTimestamp += FragmentCamera.frameTime;

            if(lastSavedframe.getTimeStamp() > FragmentCamera.mVideoTimestamp)
            {
                FragmentCamera.mVideoTimestamp = lastSavedframe.getTimeStamp();
            }

            try
            {
                yuvIplImage.getByteBuffer().put(lastSavedframe.getFrameBytesData());

                IplImage bgrImage = IplImage.create(mPreviewSize.width, mPreviewSize.height, opencv_core.IPL_DEPTH_8U, 4);// In my case, mPreviewSize.width = 1280 and mPreviewSize.height = 720
                IplImage transposed = IplImage.create(mPreviewSize.height, mPreviewSize.width, yuvIplImage.depth(), 4);
                IplImage squared = IplImage.create(mPreviewSize.height, mPreviewSize.height, yuvIplImage.depth(), 4);

                int[] _temp = new int[mPreviewSize.width * mPreviewSize.height];

                Util.YUV_NV21_TO_BGR(_temp, data, mPreviewSize.width,  mPreviewSize.height);

                bgrImage.getIntBuffer().put(_temp);

                opencv_core.cvTranspose(bgrImage, transposed);
                opencv_core.cvFlip(transposed, transposed, 1);

                opencv_core.cvSetImageROI(transposed, opencv_core.cvRect(0, 0, mPreviewSize.height, mPreviewSize.height));
                opencv_core.cvCopy(transposed, squared, null);
                opencv_core.cvResetImageROI(transposed);

                videoRecorder.setTimestamp(lastSavedframe.getTimeStamp());
                videoRecorder.record(squared);
            }
            catch(com.googlecode.javacv.FrameRecorder.Exception e)
            {
                e.printStackTrace();
            }
        }

        lastSavedframe = new SavedFrames(data, frameTimeStamp);
    }
}

This code uses a method, "YUV_NV21_TO_BGR", which I found at this link.
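The exact code is in that thread; as a rough idea, such an NV21 to packed BGRA conversion usually looks something like the sketch below (a generic version, not necessarily identical to the Util.YUV_NV21_TO_BGR used above):

// Generic sketch of an NV21 -> packed BGRA conversion, not necessarily identical
// to the Util.YUV_NV21_TO_BGR referenced above. Each output int is packed as
// 0xAARRGGBB; on a little-endian device the bytes land in memory as B, G, R, A,
// which matches a 4-channel BGRA IplImage filled via getIntBuffer().
public static void YUV_NV21_TO_BGR(int[] bgra, byte[] yuv, int width, int height) {
    final int frameSize = width * height;
    int p = 0;
    for (int i = 0; i < height; i++) {
        for (int j = 0; j < width; j++) {
            int y = 0xff & yuv[i * width + j];
            int v = 0xff & yuv[frameSize + (i >> 1) * width + (j & ~1)];     // V comes first in NV21
            int u = 0xff & yuv[frameSize + (i >> 1) * width + (j & ~1) + 1];
            y = Math.max(y, 16);

            int r = (int) (1.164f * (y - 16) + 1.596f * (v - 128));
            int g = (int) (1.164f * (y - 16) - 0.813f * (v - 128) - 0.391f * (u - 128));
            int b = (int) (1.164f * (y - 16) + 2.018f * (u - 128));

            bgra[p++] = 0xff000000
                    | (Math.min(255, Math.max(0, r)) << 16)
                    | (Math.min(255, Math.max(0, g)) << 8)
                    | Math.min(255, Math.max(0, b));
        }
    }
}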

Basically, this method is used to solve what I call "the green devil problem on Android", just like yours. I faced the same issue and wasted almost 3-4 days on it. Before adding the YUV_NV21_TO_BGR method, when I simply transposed the YuvIplImage, and more importantly when I combined transpose and flip (with or without resizing), there was green output in the resulting video. This YUV_NV21_TO_BGR method saved the day. Thanks to @David Han from the Google Groups thread mentioned above.

You should also know that all of this processing (transposing, flipping, and resizing) inside onPreviewFrame takes a lot of time, which causes a very serious hit to your frames-per-second (FPS) rate. When I used this code inside the onPreviewFrame method, the FPS of the recorded video dropped from 30fps to 3fps.

I would advise against using this approach. Instead, you can do the processing (transpose, flip, and resize) of your video file after recording, using JavaCV in an AsyncTask. I hope this helps.
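As a rough illustration of that post-recording approach, a sketch under the old com.googlecode.javacv API (where FFmpegFrameGrabber.grab() returns an IplImage) could look like the task below. The class name and paths are hypothetical, not part of the linked library:

import android.os.AsyncTask;
import com.googlecode.javacv.FFmpegFrameGrabber;
import com.googlecode.javacv.FFmpegFrameRecorder;
import com.googlecode.javacv.cpp.opencv_core;
import com.googlecode.javacv.cpp.opencv_core.IplImage;

// Hypothetical post-processing task: read the recorded clip, transpose and flip
// each frame (a 90-degree rotation), and write the result back out with JavaCV.
class RotateVideoTask extends AsyncTask<Void, Void, Void> {
    private final String inputPath;   // hypothetical paths, adapt to your app
    private final String outputPath;

    RotateVideoTask(String inputPath, String outputPath) {
        this.inputPath = inputPath;
        this.outputPath = outputPath;
    }

    @Override
    protected Void doInBackground(Void... params) {
        try {
            FFmpegFrameGrabber grabber = new FFmpegFrameGrabber(inputPath);
            grabber.start();

            // Width and height are swapped because every frame gets transposed.
            FFmpegFrameRecorder recorder = new FFmpegFrameRecorder(outputPath,
                    grabber.getImageHeight(), grabber.getImageWidth(), 0);
            recorder.setFormat("mp4");
            recorder.setFrameRate(grabber.getFrameRate());
            // Reuse the same RecorderParameters-based codec/quality setup as
            // initVideoRecorder() above if specific output settings are needed.
            recorder.start();

            IplImage frame;
            IplImage transposed = null;
            while ((frame = grabber.grab()) != null) {
                if (transposed == null) {
                    transposed = IplImage.create(frame.height(), frame.width(),
                            frame.depth(), frame.nChannels());
                }
                opencv_core.cvTranspose(frame, transposed);
                opencv_core.cvFlip(transposed, transposed, 1); // transpose + flip = 90 degrees clockwise
                recorder.record(transposed);
            }

            recorder.stop();
            recorder.release();
            grabber.stop();
            grabber.release();
        } catch (Exception e) {
            e.printStackTrace();
        }
        return null;
    }
}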

Regarding android - IplImage cropping and rotation - Android, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/23190991/
