I am currently trying to correct the perspective of an image inside a UIImage extension. When getPerspectiveTransform is called, I get the following assertion failure.

Error
OpenCV Error: Assertion failed (src.checkVector(2, CV_32F) == 4 && dst.checkVector(2, CV_32F) == 4) in getPerspectiveTransform, file /Volumes/build-storage/build/master_iOS-mac/opencv/modules/imgproc/src/imgwarp.cpp, line 6748
libc++abi.dylib: terminating with uncaught exception of type cv::Exception: /Volumes/build-storage/build/master_iOS-mac/opencv/modules/imgproc/src/imgwarp.cpp:6748: error: (-215) src.checkVector(2, CV_32F) == 4 && dst.checkVector(2, CV_32F) == 4 in function getPerspectiveTransform
Code
- (UIImage *)performPerspectiveCorrection {
    Mat src = [self genereateCVMat];
    Mat thr;
    cv::cvtColor(src, thr, CV_BGR2GRAY);
    cv::threshold(thr, thr, 70, 255, CV_THRESH_BINARY);

    std::vector<std::vector<cv::Point> > contours; // Vector for storing contours
    std::vector<cv::Vec4i> hierarchy;
    int largest_contour_index = 0;
    int largest_area = 0;
    cv::Mat dst(src.rows, src.cols, CV_8UC1, cv::Scalar::all(0)); // Create destination image
    cv::findContours(thr.clone(), contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE, cv::Point(0, 0)); // Find the contours in the image
    for (int i = 0; i < contours.size(); i++) {
        double a = cv::contourArea(contours[i], false); // Find the area of the contour
        if (a > largest_area) {
            largest_area = a;
            largest_contour_index = i; // Store the index of the largest contour
        }
    }
    cv::drawContours(dst, contours, largest_contour_index, cvScalar(255, 255, 255), CV_FILLED, 8, hierarchy);

    std::vector<std::vector<cv::Point> > contours_poly(1);
    approxPolyDP(cv::Mat(contours[largest_contour_index]), contours_poly[0], 5, true);
    cv::Rect boundRect = cv::boundingRect(contours[largest_contour_index]);

    if (contours_poly[0].size() >= 4) {
        std::vector<cv::Point> quad_pts;
        std::vector<cv::Point> squre_pts;
        quad_pts.push_back(cv::Point(contours_poly[0][0].x, contours_poly[0][0].y));
        quad_pts.push_back(cv::Point(contours_poly[0][1].x, contours_poly[0][1].y));
        quad_pts.push_back(cv::Point(contours_poly[0][3].x, contours_poly[0][3].y));
        quad_pts.push_back(cv::Point(contours_poly[0][2].x, contours_poly[0][2].y));
        squre_pts.push_back(cv::Point(boundRect.x, boundRect.y));
        squre_pts.push_back(cv::Point(boundRect.x, boundRect.y + boundRect.height));
        squre_pts.push_back(cv::Point(boundRect.x + boundRect.width, boundRect.y));
        squre_pts.push_back(cv::Point(boundRect.x + boundRect.width, boundRect.y + boundRect.height));

        Mat transmtx = getPerspectiveTransform(quad_pts, squre_pts);
        Mat transformed = Mat::zeros(src.rows, src.cols, CV_8UC3);
        warpPerspective(src, transformed, transmtx, src.size());
        return [UIImage imageByCVMat:transformed];
    }
    else {
        NSLog(@"Make sure that you are getting 4 corners using approxPolyDP...");
        return self;
    }
}
Best Answer
I know this is late, but I ran into the same problem, so maybe it will help someone.

The error occurs because src and dst in getPerspectiveTransform(src, dst) must be of type vector<Point2f>, not vector<Point>.

So it should look like this:
std::vector<cv::Point2f> quad_pts;
std::vector<cv::Point2f> squre_pts;
quad_pts.push_back(cv::Point2f(contours_poly[0][0].x, contours_poly[0][0].y));
// etc.
squre_pts.push_back(cv::Point2f(boundRect.x, boundRect.y));
// etc.
Regarding c++ - OpenCV: Assertion failed (src.checkVector(2, CV_32F)), we found a similar question on Stack Overflow: https://stackoverflow.com/questions/43888562/